In the 2014 movie Ex Machina, a robot manipulates someone into freeing it from its confines, resulting in the person being confined instead. The robot was designed to manipulate that person’s emotions, and, oops, that’s what it did. While the scenario is pure speculative fiction, companies are always looking for new ways – such as the use of generative AI tools – to better persuade people and change their behavior. When that conduct is commercial in nature, we’re in FTC territory, a canny valley where businesses should know to avoid practices that harm consumers.

In previous blog posts, we’ve focused on AI-related deception, both in terms of exaggerated and unsubstantiated claims for AI products and the use of generative AI for fraud. Design or use of a product can also violate the FTC Act if it is unfair – something that we’ve shown in several cases and discussed in terms of AI tools with biased or discriminatory results. Under the FTC Act, a practice is unfair if it causes more harm than good. To be more specific, it’s unfair if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition.

[Image: FTC AI Turing Test blog post graphic]

As for the new wave of generative AI tools, firms are starting to use them in ways that can influence people’s beliefs, emotions, and behavior. Such uses are expanding rapidly and include chatbots designed to provide information, advice, support, and companionship. Many of these chatbots are effectively built to persuade and are designed to answer queries in confident language even when those answers are fictional. A tendency to trust the output of these tools comes in part from “automation bias,” whereby people may be unduly trusting of answers from machines that seem neutral or impartial. It also comes from the effect of anthropomorphism, which may lead people to trust chatbots more when they’re designed, say, to use personal pronouns and emojis. People could easily be led to think that they’re conversing with something that understands them and is on their side.

Many commercial actors are interested in these generative AI tools and their built-in advantage of tapping into unearned human trust. Concern about their malicious use goes well beyond FTC jurisdiction. But a key FTC concern is firms using them in ways that, deliberately or not, steer people unfairly or deceptively into harmful decisions in areas such as finances, health, education, housing, and employment. Companies thinking about novel uses of generative AI, such as customizing ads to specific people or groups, should know that design elements that trick people into making harmful choices are a common element in FTC cases, such as recent actions relating to financial offers, in-game purchases, and attempts to cancel services. Manipulation can be a deceptive or unfair practice when it causes people to take actions contrary to their intended goals. Under the FTC Act, practices can be unlawful even if not all customers are harmed and even if those harmed don’t comprise a class of people protected by anti-discrimination laws.

Another way that marketers could take advantage of these new tools and their manipulative abilities is to place ads within a generative AI feature, just as they can place ads in search results. The FTC has repeatedly studied and provided guidance on presenting online ads, both in search results and elsewhere, to avoid deception or unfairness. This includes recent work relating to dark patterns and native advertising. Among other things, it should always be clear that an ad is an ad, and search results or any generative AI output should distinguish clearly between what is organic and what is paid. People should know if an AI product’s response is steering them to a particular website, service provider, or product because of a commercial relationship. And, certainly, people should know if they’re communicating with a real person or a machine.

Given these many concerns about the use of new AI tools, it’s perhaps not the best time for firms building or deploying them to remove or fire personnel devoted to ethics and responsibility for AI and engineering. If the FTC comes calling and you want to convince us that you adequately assessed risks and mitigated harms, these reductions might not be a good look. What would look better? We’ve provided guidance in our earlier blog posts and elsewhere. Among other things, your risk assessment and mitigations should factor in foreseeable downstream uses and the need to train staff and contractors, as well as monitoring and addressing the actual use and impact of any tools eventually deployed.

If we haven’t made it obvious yet, FTC staff is focusing intensely on how companies may choose to use AI technology, including new generative AI tools, in ways that can have actual and substantial impact on consumers. And for people interacting with a chatbot or other AI-generated content, mind Prince’s warning from 1999: “It’s cool to use the computer. Don’t let the computer use you.”

The FTC has more posts in the AI and Your Business series.

Brian Penny
May 01, 2023

I think it is also important for AI influencers to properly disclose their financial ties. I know for a fact that Adobe and MidJourney aren’t doing this with their influencer Kris Kashtanova, and it is deceptive and hurting a lot of people.

April Beggs
May 02, 2023

It's imperative that creators, influencers, and advertisers are held accountable. With social media updating daily, targeted audiences and consumer trust are at stake. Transparency is a must! As a business owner and influencer, I get both sides. However, I myself am a consumer and parent. Most young adults listen to the influencer over anything else. The influencer must be held accountable: if paid for an ad, be transparent! If I choose to place an ad, I triple-check myself. AI tools are very useful; however, if the tool users are not held to strict standards, I'm afraid there will be no trust left.

Zoe Maclean
May 02, 2023

Image generation needs to be heavily regulated; companies should not be able to use people's photos in their training data without their consent, whether copyrighted art, photos, or an iPhone picture of your kid from Facebook. The last one is especially pertinent because the open-source nature of Stable Diffusion lets people use it as an infinite pornography generator, and it's built on the bedrock of people's stolen private content (i.e., the LAION dataset).
And opt-out doesn't work; it is regularly ignored.

stuart
June 05, 2023

Further to not being able to stop the advertising, I also have no say in what I am being advertised; I have standard internet due to expense, as do many families with children:

I have noted that, as opposed to lawfully obtaining GDPR consent, companies quite blatantly program consent to be immediately inferred by way of what is incorrectly listed as 'essential cookies' (when these in fact pertain to 'business services'). This data is then shared with 'partners' ('either of a pair of people engaged together in the same activity'), AND, on the further illegal basis of presenting a small section of text, persons wishing to unsubscribe from unwelcome messaging from complete strangers are instead made to assume 'ALL risk' from the persons who created the website, as due to 'Docket 8888. Complaint, May 24, 1972 – Final Order' (?):

The FTC points out the problem of 'firms using them in ways that, deliberately or not, steer people unfairly or deceptively into harmful decisions'; however, there is no commentary on the subversive teaching of children that there is no such thing as personal boundaries. This is from an alleged "united states" that instead now wants to transpose white picket fencing and private post boxes into future kindling, and for hellfire?