Microsoft CEO Satya Nadella, whose company has invested billions of dollars in ChatGPT maker OpenAI, has had it with the constant hype surrounding AI. During an appearance on podcaster Dwarkesh Patel's show this week, Nadella offered a reality check, arguing that OpenAI's long-established goal of achieving "artificial general intelligence" (AGI), an ill-defined term that roughly denotes the point at which an AI can best humans on an intellectual level, is nonsense. "Us self-claiming some AGI
I feel like the current AI stuff has been a net negative. It prompted layoffs and hiring freezes, but then didn’t produce quality results.
It gave CEOs an excuse to do layoffs even though they knew it would hurt their human capital long term, and that they would probably have to rehire many of those positions later at higher wages. In the short term it gave them a few quarters of increased profits. It also let them push out blatantly unfinished products on the promise of improbable future improvements. This will hurt companies’ reputations long term, but in the short term it let them juice the stock price.
They needed the increased profits and the pie-in-the-sky growth promises to game the stock market: say all the right buzzwords and show an improving price-to-earnings ratio.
Sure, they made the companies worse and less sustainable long term, but they got huge compensation packages right now thanks to the markets, and they probably won’t be running these companies long enough to see the true fallout.
I hope the stock market craters.
We need to do away with capitalism completely, or put it on a very short leash.
I wish governments still believed in regulations instead of whatever this shit is.
Yeah, we need socialism/communism. Either would be better than this.
He’s looking at it from a company-revenue perspective. And I think AI is difficult to monetize. A Google paper explained a long time ago that a big company cannot easily maintain a huge competitive advantage, because techniques exist in the open-source world to learn incrementally on top of costly models. In short, you don’t need millions to make another good-quality LLM.
That being said, LLMs add some value, but like everything hyped to no end, the real value is negligible compared to the "market expected value".
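The "learn incrementally on top of costly models" idea above can be sketched with a toy low-rank adapter (in the spirit of LoRA-style parameter-efficient fine-tuning). This is a minimal illustration, not any specific paper's method; all dimensions and names here are made up:

```python
import numpy as np

# Toy low-rank adaptation: the expensive base weights W stay frozen,
# and only a small low-rank correction A @ B is trained on top.
rng = np.random.default_rng(0)

d_out, d_in, rank = 64, 64, 4           # rank << d_in keeps the update cheap
W = rng.standard_normal((d_out, d_in))  # frozen, expensively pre-trained weights
A = np.zeros((d_out, rank))             # trainable low-rank factor (starts at zero)
B = rng.standard_normal((rank, d_in)) * 0.01  # trainable low-rank factor

def forward(x):
    # Adapted layer: frozen base plus the cheap low-rank correction.
    return (W + A @ B) @ x

full_params = W.size
lora_params = A.size + B.size
print(f"trainable params: {lora_params} vs full fine-tune: {full_params}")
# → trainable params: 512 vs full fine-tune: 4096
```

The point of the economics argument: the trainable parameter count (and thus the training cost) scales with the rank, not with the size of the frozen base model.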
But it isn’t about creating quality results. It is about creating good-enough results, where the cost of AI’s failures, compared to a human’s, is lower than the cost of employing humans instead of AI.
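That "good enough" argument is really a break-even calculation. A minimal sketch, with every number invented purely for illustration:

```python
# Hypothetical break-even arithmetic for the "good enough" argument.
# All figures below are made-up assumptions, not real costs.
human_cost_per_task = 5.00   # assumed fully-loaded labor cost per task
ai_cost_per_task    = 0.05   # assumed inference cost per task
ai_extra_error_rate = 0.15   # assumed extra failure rate of AI vs. a human
cost_per_failure    = 20.00  # assumed cleanup cost when the AI gets it wrong

expected_ai_cost = ai_cost_per_task + ai_extra_error_rate * cost_per_failure
print(f"human: ${human_cost_per_task:.2f} per task, AI: ${expected_ai_cost:.2f} per task")
# → human: $5.00 per task, AI: $3.05 per task
```

Under these made-up numbers the AI "wins" even with a 15% extra failure rate, which is exactly the dynamic the comment describes: quality doesn't have to match humans, only the expected cost does.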