The FTC, fresh off announcing an all-new division to take on “snake oil” in tech, has sent another shot across the overzealous industry’s bow with a brutal warning to “Keep your AI claims under control.”
I wrote a while ago (okay, five years) that “AI-powered” is the meaningless technical equivalent of “all natural,” and things have only gotten worse since. It seems just about every product out there claims to implement AI in some way, but few go into detail, and even fewer can tell you exactly how it works and why.
The FTC doesn’t like it. No matter what someone means when they say “powered by artificial intelligence” or some version of it, “One thing’s for sure: it’s a marketing term,” the agency writes. “And at the FTC, we know one thing about popular marketing terms: Some advertisers won’t be able to stop overusing and abusing them.”
Everyone says AI reinvents everything, but it’s one thing to do that in a TED talk; it’s quite another to claim it as an official part of your product. And the FTC wants marketers to know that these claims could count as “false or baseless,” something the agency is very experienced in regulating.
So if your product uses AI or your marketing team claims it does, the FTC asks you to consider:
- Are you exaggerating what your AI product can do? If you’re making sci-fi claims that the product can’t live up to, such as reading emotions, improving productivity, or predicting behavior, you may want to tone them down.
- Do you promise that your AI product will do something better than a non-AI product? Sure, you can make those weird claims like “4 out of 5 dentists prefer” your AI-powered toothbrush, but you’d better have all four of those dentists on record. Claiming superiority because of your AI requires proof, “and if such proof is impossible to get, don’t claim it.”
- Are you aware of the risks? “Reasonably foreseeable risks and impact” sounds a bit vague, but your lawyers can help you understand why you shouldn’t go too far here. If your product fails for certain groups of users because you never tested it with them, or if its results are skewed because your dataset is poorly constructed, you’re going to have a bad time. “And you can’t say you’re not responsible because that technology is a ‘black box’ that you don’t understand or didn’t know how to test,” the FTC adds. If you don’t understand it and can’t test it, why offer it, let alone advertise it?
- Does the product actually use AI at all? As I pointed out a long time ago, calling something “AI-powered” because an engineer once used an ML-based tool to optimize a curve doesn’t mean your product uses AI, but many seem to think a drop of AI means the whole bucket is full of it. The FTC thinks otherwise.
“You don’t need a machine to predict what the FTC might do if those claims aren’t backed up,” it ominously concludes.
Since the agency already published some common-sense guidelines for AI claims in 2021 (there was a lot of “detect and predict COVID” back then), it refers questions to that document, which includes quotes and precedents.