by: Foram Dmytryk

Everything is bigger in Texas, including expectations for AI tools. On September 18, the Lone Star State entered into an assurance of voluntary compliance with Dallas-based artificial intelligence (“AI”) technology company Pieces Technologies, Inc.

Pieces’ generative AI products, which purport to summarize, chart, and draft clinical notes for physicians and medical staff to rely on when treating patients, caught the regulator’s eye. Pieces advertised its products as having a “critical hallucination rate” of <0.001% and a “severe hallucination rate” of <1 per 100,000. The petition alleges that these representations were false, misleading, and deceptive and may have violated the Texas Deceptive Trade Practices – Consumer Protection Act (“DTPA”).

Pieces voluntarily agreed to five years of “Assurances,” under which it must:

  • Disclose, in advertising and marketing that quantifies the outputs of its generative AI products, the meaning of any metric used and the method used to calculate it.

  • Refrain from misrepresenting its products’ accuracy or reliability, its procedures and methodologies for testing and monitoring, the definition of any metrics, or its training data. Pieces also agreed to disclose any financial or similar arrangements with third parties who endorse or promote its products.

  • Provide all current and future customers with documentation disclosing any known or reasonably knowable harmful or potentially harmful uses or misuses of the products. At a minimum, the documentation must include: the type of data and models used for training; a detailed explanation of each product’s purpose and use; the products’ limitations; misuses that may increase the risk of inaccurate outputs or the risk of harm; and the documentation necessary to understand the nature and purpose of the output, monitor inaccuracy, and avoid misuse.

Also included is a compliance monitoring requirement: upon a written request from the State for information related to compliance with any specific provision, Pieces must, within 30 business days, submit the requested information under penalty of perjury and agree to appear for depositions or produce records.

Key Takeaways When Developing or Implementing AI Products in Your Business

Understand Enforcement Priorities – Regulators have made it very clear that they are looking into AI.

  • The Texas Attorney General announced earlier this year that he will not hesitate to act against companies, particularly when it comes to sensitive data (financial, healthcare, geolocation), its sale, consent, and the use of AI. This year alone, Texas has settled a biometrics lawsuit against a major social media company for $1.4 billion, sued a car manufacturer for unlawfully collecting and selling driving data, and investigated Pieces. Other states have similarly stated that they will regulate AI tools.

  • Texas is not the only sheriff on the block: the Federal Trade Commission (“FTC”) has repeatedly posted blogs about AI, and Chair Lina Khan is outspoken about this technology. On September 25, the FTC announced five enforcement actions against companies whose AI tools were used to “trick, mislead, or defraud” people. Unsurprisingly, the common themes are false claims about products made without evidence, luring customers into fraudulent money schemes, and generating false content.

Implement AI Thoughtfully – AI tools that promise efficiency may be shiny, but all that glitters is not gold, especially under regulatory scrutiny. Be thoughtful when implementing AI tools. For example, if you use AI in pricing, consider all potential risks associated with the tool. See our article The FTC Isn’t Buying It at Any Price: Surveillance Pricing, Dynamic Pricing and Algorithmic Pricing.

Be skeptical of AI products that make claims that are too good to be true, e.g., that they can fully replace a human on the job. Always thoroughly vet the vendor and the product to ensure credibility, compliance with legal and regulatory requirements, and confirmation that the product works as intended.

Data is valuable. Do your due diligence prior to sharing data with a vendor. To learn more about questions to ask before sharing your data, see our article 7 Questions to Ask Before Giving A Vendor Access to Your Data Set in An Artificially Intelligent World.

Proper Notice and Rights – Use of any AI tool is subject to all existing laws, including comprehensive privacy laws requiring notice, access, and opt-out rights. For more information, see our article regarding Making Proper Disclosures for AI Features. A popular use case is internal tools for employment and recruiting, but remember that employees have rights too. See our article 5 Things to Think About When Using AI in Employment and Recruiting to learn more.

Clear, Conspicuous and Accurate Disclosures – The same advertising principles of substantiation apply. If you make claims about your AI product, make sure they are accurate and can be backed up with proper evidence; e.g., if you say your product’s output is equal to that of a human, you must conduct testing to confirm this is true. Be sure to clearly disclose any definitions and methods you use to measure your product. Don’t exaggerate or use “AI” just to make your product an attractive sell; be honest.

Documentation – Provide detailed documentation with AI products so customers understand how the product was developed, any existing risks, and its intended uses.

Disclose Affiliations – If you use a celebrity or influencer (or virtual influencer; yes, that is a thing) to market your AI product (or any product), disclose that relationship. To learn more about legal issues with virtual influencers, see our article here.

Originally published by InfoLawGroup LLP.