Sam Altman expressed concern about the A.I. programs he’s developing during remarks to US lawmakers on Tuesday.
“I think if this technology goes wrong, it can go quite wrong and we want to be vocal about that,” he said, during some three hours of testimony. “We want to work with the government to prevent that from happening.”
He conceded that OpenAI’s technology will likely eliminate some jobs, even as it creates new employment opportunities. Altman seemed open to government intervention in what many claim is an increasingly perilous A.I. arms race. Indeed, he didn’t present regulation as a risk at all, but rather invited it. Some doubt there’s much government can do, though.
The idea, generally speaking, is for Congress to stay ahead of the curve (too late!) and avoid dropping the ball as they did with Facebook, Twitter and other platforms which, arguably, are doing more harm than good on at least some fronts. As Richard Blumenthal put it, “Congress failed to meet the moment on social media.”
Nothing much came out of Tuesday’s proceedings, and those interested in my take on the evolving risks posed by A.I. are encouraged to read April’s monthly letter, but I wanted to use Altman’s testimony as an excuse to highlight the chart below from Goldman’s S&P 500 “Beige Book,” a collection of themes derived from an analysis of earnings calls.
In short: No one wants to miss this boat. Companies in every sector mentioned A.I. Two-thirds of tech companies and three-quarters of communications services companies discussed A.I. on their calls.
Goldman’s economists estimate that A.I. may ultimately drive roughly $7 trillion in global economic growth over a decade, and the bank’s equity analysts reckon a Generative AI Software TAM of about $150 billion. The bank has also suggested that nearly one-fifth (18%) of the work performed around the globe “could be automated by A.I. on an employment-weighted basis.”
Blumenthal on Tuesday used ChatGPT to write his opening remarks and A.I. voice software trained on his speeches to read them.
“We believe the benefits of the tools we have deployed so far vastly outweigh the risks,” Altman told Congress. “But ensuring their safety is vital to our work. My worst fear is that we, the technology industry, cause significant harm to the world.”


Nobody, nobody, nobody seems to have noticed that, while this technology suddenly works way better than it previously did, it still doesn’t work very well.
It’s good for media editing and generation. It can transfer a visual style between images very well. It can actually write a convincing sonnet upon request, which is understandably astounding. But try getting it to write an incisive essay better than a human could, without huge factual mistakes in it. Try to use it to generate working code, as so many people marketing it have claimed it can. Try to use it just to help dig up facts for research, without having to double your research time to root out the complete fictions it produces.
Yes, I understand a computer capable of simulating both an understanding of natural language and the ability to either reply in it or generate very good-looking media in response to it is subjectively astounding. But that’s all it does right now: simulate the appearance of these things. That seems to get lost in all the hype.
But it keeps getting better. It used to be crap at chess; now it gets it. There are plugins for maths.
It can now do a good job of planning a trip (you give it a destination and some parameters, and it’ll source plane tickets etc. better than Google Travel can).
There are a lot of work processes that could be reorganised or upended, even if the ‘hallucinations’ are a real problem for research-focused stuff.