Recently, the Federal Trade Commission (FTC) unanimously approved a resolution that authorizes the use of compulsory process in examining companies’ “products and services that use or claim to be produced using artificial intelligence (AI) or claim to detect its use.” This action underscores the urgency with which the FTC and other federal regulators are moving to monitor the swift integration of AI across many sectors of the economy and to provide oversight of its use in consumer products and services.

The resolution grants FTC staff the authority to issue civil investigative demands (CIDs), which function similarly to subpoenas. CIDs are a critical tool regulators use to request information, documents, and testimony from companies about products and services under investigation. The FTC has indicated that future investigations will likely concentrate on the following:

  • Market competition, specifically the concentration of ownership of the key technologies associated with AI; and
  • Violations of consumer protection laws including privacy breaches, fraud, deceptive marketing, discrimination, and any other unfair or abusive acts or practices.

This resolution follows President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence in October. Companies should expect inter-agency cooperation and coordination on this issue following the issuance of the executive order. The FTC and other federal regulators are also likely to coordinate with state attorneys general in conducting investigations and enforcement actions.

For instance, earlier in the year, the FTC, alongside the Civil Rights Division of the United States Department of Justice, the Consumer Financial Protection Bureau (CFPB), and the U.S. Equal Employment Opportunity Commission, issued a joint statement reaffirming their commitment to leveraging their regulatory authority to punish discriminatory actions stemming from AI and automated technologies used by businesses. The FTC and CFPB also announced in February an initiative to seek both data and comment on tenant screening tools used in rental housing, including the use of algorithms that may have a discriminatory outcome for tenants.

Most recently, the FTC sued Automators AI, a California-based company, alleging that the business lured customers into investing $22 million in e-storefronts on Amazon and Walmart websites. Among the false-claims and false-advertising allegations in the FTC’s lawsuit is a charge that the company falsely advertised its use of AI to ensure profits for investors.

Federal regulators are working quickly to adapt existing legal frameworks to the use of AI in regulated industries. For example, the CFPB issued guidance to lenders affirming that the use of AI in credit decisions and underwriting does not allow those companies to avoid providing consumers the specific and accurate reason for a credit denial, as required by the Equal Credit Opportunity Act (ECOA). “Technology marketed as artificial intelligence is expanding the data used for lending decisions, and also growing the list of potential reasons for why credit is denied,” said CFPB Director Rohit Chopra. “Creditors must be able to specifically explain their reasons for denial. There is no special exemption for artificial intelligence.”

These regulators believe that companies’ use of emerging AI and automated technologies in consumer products and services does not shield the companies from demonstrating compliance with applicable law. The message is clear: companies should be ready to show their work when regulators arrive at their doorstep.