Much has already been said and written about the FTC’s recent enforcement initiative, dubbed “Operation AI Comply.” The coordinated sweep announced last month involved five separate FTC enforcement actions against companies using or claiming to use AI tools to enhance consumer goods and services. For example, as part of the sweep, the FTC targeted a company called DoNotPay
that claimed its “AI Lawyer” services could substitute for a human lawyer and “replace the $200-billion-dollar legal industry with artificial intelligence” – claims that we, as KDW attorneys, were glad to see the FTC deem unsubstantiated. The sweep also involved enforcement actions against three business opportunity providers that claimed their AI tools could help customers generate passive income via online storefronts.
While the headlines and press releases centered on the role of AI in each case, the core enforcement and substantiation principles in DoNotPay and the three business opportunity cases are fairly straightforward and consistent with the FTC’s historical approach to deception. But a closer read of the fifth enforcement action, against Rytr, reveals a surprising and sweeping new position on liability that any company providing services to third parties (whether using AI or not) should review closely. Specifically, the FTC’s complaint against Rytr suggests that companies can be held liable under the FTC Act for failing to anticipate how third parties could use their goods and services to effectuate deception.
As background, Rytr sold an AI-enabled writing assistant that allowed customers to generate various forms of written content, including emails, product descriptions, blogs, and articles. One of Rytr’s use cases was labeled “Testimonial & Review” and allowed customers to generate written content for consumer reviews based on keyword and tone selections. The FTC alleged that Rytr violated the FTC Act by providing its “Testimonial & Review” tool because the tool could be used to generate fake reviews that would mislead consumers deciding whether to purchase the product or service described. Several aspects of the Rytr complaint stand out:
- The primary count in the complaint is a “means and instrumentalities” count based on Rytr’s furnishing users with the “means to generate written content for consumer reviews that is false and deceptive.” This is unusual: “means and instrumentalities” (M&I) counts are typically pled together with one or more Section 5, rule, or statutory violations alleging an underlying deception. Here, for example, we would have expected an allegation that third-party users posted or used Rytr-generated fake reviews to deceive consumers, but no such allegation is present.
- Not only are there no allegations relating to the use of fake reviews to deceive consumers, but the complaint presents no evidence that any such fake reviews were ever used. The Commission relies instead on user inputs and outputs created by its own investigators, as well as evidence that certain Rytr users generated hundreds or thousands of reviews over time. Importantly, there is no allegation that these generated reviews were used or that they materially deceived consumers.
- The complaint also includes a novel unfairness count alleging that Rytr “offered a service intended to quickly generate unlimited content for consumer reviews and created false and deceptive written content for consumer reviews.” But, again, the complaint offers no evidence that such false or deceptive reviews were actually used to mislead consumers – raising questions about how the count satisfies the “substantial injury” prong of the unfairness test.
Notably, Rytr is the only case within the FTC’s AI enforcement sweep that was not authorized unanimously, drawing a 3-2 vote along party lines. Both Republican commissioners issued separate dissenting statements (and joined each other’s statements) expressing concern that the mere possibility that Rytr’s tools could be used to create false or deceptive customer reviews is not, in and of itself, a violation of Section 5. (As we discussed yesterday here, dissenting statements have been common lately, and there is often a lot to unpack in them.)
Specifically, Commissioner Ferguson’s dissenting statement asserted that “[t]reating as categorically illegal a generative AI tool merely because of the possibility that someone might use it for fraud is inconsistent with our precedents and common sense.” Commissioner Holyoak’s dissenting statement likewise points to the lack of consumer harm and of evidence that “users actually posted any draft reviews,” and argues that the majority “fails to weigh the countervailing benefits Rytr’s service offers to consumers or competition” – a required element of any unfairness analysis. Both dissents point out that some consumers could legitimately use a review generation tool to help them draft truthful, accurate reviews of their own personal experiences with products.
In this regard, the Rytr action is novel not for its entanglement with AI but for its approach of holding a company directly liable, through a means and instrumentalities count, for potential misuse of its product by a third party. Many companies and industries provide tools and services that could hypothetically be used to enhance consumer experiences or to perpetrate fraud. The Rytr case creates uncertainty regarding how the agency may attribute responsibility to service providers in the future.
In sum, the FTC’s “Operation AI Comply” initiative provides a handful of clear-cut, AI-related compliance takeaways that should not surprise regular readers of this blog: be cognizant of the limits of your AI tools, make sure any performance claims are fully substantiated, don’t oversell the involvement of AI in your goods and services, and generally expect AI claims to receive the same scrutiny as other advertising claims, if not more.
The real punchline, however, is the agency’s attempt to expand the reach of M&I liability to products and services that are not themselves deceptive, based on how third parties may use them. While the Rytr matter ended in a consent settlement, it will be interesting to see what happens should a similar complaint proceed to litigation in the future.