Elon Musk committed himself to the ethical use of AI early on. Now he is attacking a company he co-founded: the developers behind the much-hyped AI ChatGPT are facing uncomfortable questions from him.
It is one of the biggest technology hypes of recent years: in the past few months, artificial intelligence has finally arrived in the mainstream. One of its most important drivers is ChatGPT, which has just been released in a new version (read more here). Now Elon Musk has attacked OpenAI, the company behind it: he feels he has been ripped off.
“I’m still confused as to how a not-for-profit foundation to which I donated nearly $100 million grew into a company valued at $30 billion,” Musk said in a recent tweet. He adds a serious accusation: “If it’s all legal – why doesn’t everyone do it?”
Harsh criticism
In fact, Musk is one of the company's co-founders. Back in 2015, he established OpenAI as a charitable foundation together with current CEO Sam Altman and other Silicon Valley heavyweights. The goal: to develop artificial intelligence that would not eventually pose a threat to humanity. Musk not only poured millions into the company, he also helped run it for years. He left his post in 2018, citing a conflict of interest with his companies Tesla and SpaceX.
Apparently he is anything but happy with recent decisions. OpenAI began earning money a few years ago: to monetize its research, the foundation created a subsidiary just under a year after Musk's departure. OpenAI LP is fully controlled by the non-profit foundation. The goal was clear: the prospect of profits made the organization far more attractive to outside investors. Still, OpenAI LP is not a conventional company either. Returns are capped: investors may earn at most 100 times their investment. After that, it's over.
No longer open
Musk, on the other hand, only indirectly addresses the second current allegation against the company. The tweet he replied to accused OpenAI not only of chasing profits but also of no longer being “open”. Indeed, openness about its research results was one of the company's most important founding principles: its AI was to be as transparent as possible and available to the public. With the newly introduced GPT-4, that is over. Unlike with its predecessors, OpenAI no longer reveals which data sets the program was trained on or which safeguards are meant to prevent misuse.
The decision drew considerable criticism from the industry. But the company ultimately had no choice, argued Ilya Sutskever, the company's chief scientist and one of its founders, in defending the decision to The Verge. It is “self-explanatory”, he believes, that OpenAI decided against openness for security reasons and to protect itself from competitors. And he dismisses the old ideals: “We were just wrong.”
Against its own founding idea
That was precisely one of the core ideas at the company's founding: by developing the programs ethically and openly, humanity as a whole was meant to benefit, not just individual companies and states. Open development was supposed to make artificial intelligence more moral and more democratic.
According to Sutskever, however, that has changed. The idea of developing a tool as powerful as ever more potent artificial intelligence openly for everyone is simply no longer tenable given the enormous dangers. “In a few years, everyone will understand why that's just not a good idea.”
Sources: Elon Musk, The Verge