According to Reuters, Ilya Sutskever, co-founder of OpenAI, has raised $1 billion for his new AI start-up, Safe Superintelligence (SSI). The company will reportedly use the funds to develop safe artificial intelligence systems that surpass human capabilities.
While the valuation at which the funding was raised has not been disclosed, sources tell Reuters that it is $5 billion. The investment will also be used to hire staff and build up the company's pool of skilled talent. According to Reuters, the AI start-up plans to establish a trusted team of engineers and researchers in Palo Alto, California, and Tel Aviv, Israel.
Investors in the round included Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel. The round also saw participation from NFDG funds.
The founders of Safe Superintelligence
Ilya Sutskever, Daniel Gross, and Daniel Levy co-founded the AI start-up Safe Superintelligence in June of this year. Levy, a former OpenAI researcher, is a co-founder and the optimisation lead at Safe Superintelligence. Gross, an entrepreneur, serves as the company's technology strategist; he also co-founded Cue, an AI start-up that was later acquired by Apple.
Meanwhile, Sutskever quit OpenAI in May of this year to launch his own AI company. Soon after his resignation, he stated on X: “After nearly a decade, I have decided to quit OpenAI. The company’s growth has been nothing short of remarkable, and I am certain that OpenAI will develop AGI that is both safe and beneficial.”
Following this, OpenAI CEO Sam Altman posted on X: “Ilya and OpenAI will part ways. This makes me very sad; Ilya is easily one of our generation’s best thinkers, a guiding light in our field, and a good friend.”
OpenAI previously went through a leadership crisis, with the company’s board claiming that Sam Altman had not been candid with it. Media reports at the time indicated that Sutskever was concentrating on AI safety, whereas Altman and others were focused on shipping new technologies. Altman, who was abruptly sacked in November 2023, was reinstated as CEO within days and rejoined the company’s board in March.
The singular focus of Safe Superintelligence: advancing safe AI
According to the co-founders, the company’s primary goal is to develop ‘superintelligence’, which essentially refers to AI that is smarter than humans. A company blog post states: “Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”
The start-up’s raise comes at a time when funding for AI start-ups is surging. Between April and June, investment in AI start-ups reportedly climbed to $24 billion, which Crunchbase data shows was more than double the prior quarter. According to Crunchbase, “Thanks to the huge windfall seen in Q2, the first half of this year saw $35.6 billion go to AI start-ups, a 24 per cent increase from the $28.7 billion in H1 last year.”