FTC Launches ‘Expansive’ OpenAI Probe In Preemptive Strike

“I’m with the government, and I’m here to help.” “Help investigate you, that is.”

The FTC is coming for the robots before they come for us. Call it a preemptive strike on Skynet. I’m just joking. Sort of.

Earlier this week, OpenAI received a 20-page letter from the agency, containing dozens of questions about the methods the company employs to train its A.I. models, including the technology behind ChatGPT. The FTC is also interested in OpenAI’s protocols for protecting personal data.

CEO Sam Altman is keen to present himself as a proponent of government regulation (at least as it relates to A.I.) and a man committed to the responsible development of a technology he readily admits could pose an existential threat to our species.

“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” read a one-sentence statement released by the Center for A.I. Safety in May. Altman, along with dozens of notable figures, including executives from Google DeepMind and Anthropic, endorsed the message.

On May 16, Altman made a cameo on Capitol Hill, where he played up the good-guy image. “I think if this technology goes wrong, it can go quite wrong and we want to be vocal about that,” he said during three hours of testimony. “We want to work with the government to prevent that from happening.”

Now, Altman is staring at what the Washington Post described as an “expansive” probe into the company’s compliance with consumer protection laws. The CID requires OpenAI to retain documents the government might need to… well, to review. Let’s put it that way for now.

Amusingly (or not, depending on whether you find humor in humanity’s witting, enthusiastic pursuit of apocalyptic redundancy), the FTC wants to know more about the extent to which OpenAI’s products might’ve made “false, misleading, disparaging or harmful” statements about people. Some humans, the agency worries, could’ve suffered “reputational harm.” Indeed, there’s already evidence of that, and I wouldn’t be surprised to see defamation suits proliferate, especially given Americans’ litigious tendencies. Are the robots liable for libel?

As the Post emphasized, the FTC has been adamant about the inevitability of regulatory action vis-à-vis A.I., and has at times resorted to sci-fi references to caution the industry against malfeasance.

You can read the CID for yourself below, but suffice it to say the FTC wants to know everything, from details of OpenAI’s response to a security incident in March, to how the company decides on release dates for products, to what steps Altman is taking to mitigate the models’ propensity to make things up when the technology receives a question it can’t readily answer.

Lina Khan, meanwhile, is facing questions from House Republicans curious as to her managerial competence.

Earlier this week, Bloomberg ran an interesting story describing a New York Democrat’s efforts to mandate disclosure for A.I.-altered or -generated content in political campaign ads. Yvette Clarke’s concerns stem from an April RNC spot that included fake images of China invading Taiwan and “clearly fictional” depictions of bank looting and soldiers “enforcing martial law in San Francisco.”

As Emily Birnbaum and Laura Davison wrote, the bill “is going nowhere in the GOP-controlled House” even as it speaks to “the degree to which A.I.’s rapid advance has put Washington on the back foot” with the 2024 elections right around the corner.

The OpenAI Civil Investigative Demand can be found below.

OpenAIFTCDocJuly2023



5 thoughts on “FTC Launches ‘Expansive’ OpenAI Probe In Preemptive Strike”

  1. I just had a small scare about an hour ago. I submitted a brief paragraph about owls in response to a post on a moderated chat board. Within 30 seconds I got a note denying the post and offering me an acceptable rewrite. The rewrite was, in fact, excellent. I couldn’t object to a single word. Now, I know the moderator didn’t create that rewrite; AI did. It was scary good. If that’s our future, I have very mixed emotions. The moderator robot offered me the opportunity to accept its version or try again. What if it hadn’t asked and it hadn’t been a very good substitute?

  2. So far I have been using ChatGPT to help with programming. I have noticed that if the issue is simple, it responds quickly with code that does work. But in real life the issues tend to be complex, and on complex issues it can take 7 or 8 tries to create code that works. To be fair, that process takes a couple of hours, whereas if I queried a help website it could take over a week before a moderator responds correctly. So ChatGPT is quicker. It still amazes me that I can ask it a regular question and it can spit out code in 30 seconds.

  3. “Lina Khan, meanwhile, is facing questions from House Republicans curious as to her managerial competence.” And, good. Hopefully some Democrats are curious as well. Because she is genuinely incompetent as an M&A antitrust enforcer. Outrageous government overreach in attempting to apply misguided and discredited theories. The Activision opinion smackdown was epic. And then to lose the appeal for the injunction.

    She shouldn’t be in charge of ordering the cookie tray for the monthly management meeting let alone regulating AI.

  4. Interesting that such a letter has not gone to other major generative AI players. Is there a deep game here, where Altman wants to shape coming AI restrictions by offering OpenAI as the guinea pig?
