When Ilya Sutskever, OpenAI’s former chief scientist, departed the company in May, many were left wondering why. Recent internal turmoil at OpenAI and a brief lawsuit from early backer Elon Musk fueled speculation, with the internet buzzing over the “What did Ilya see?” meme, which suggested that Sutskever had observed something concerning about CEO Sam Altman’s management of OpenAI.
Enter Safe Superintelligence
Sutskever’s new venture may shed light on his departure. On Wednesday, Sutskever announced via Twitter that he is founding a new company called Safe Superintelligence.
“We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product. We will do it through revolutionary breakthroughs produced by a small cracked team,” Sutskever tweeted.
Focus on Safety
The company’s website features a message signed by Sutskever and co-founders Daniel Gross and Daniel Levy. Gross co-founded the search engine Cue, which Apple acquired in 2013, while Levy led the Optimization team at OpenAI. The message underscores safety as the cornerstone of their mission to build artificial superintelligence.
“We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead,” the message reads. “Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”
Sutskever’s Departure and OpenAI’s Safety Concerns
Sutskever has not publicly detailed his reasons for leaving OpenAI, instead praising its “miraculous” progress. However, his new company’s emphasis on safety suggests a possible motivation. Critics, including Musk, have accused OpenAI of recklessness in developing artificial general intelligence (AGI). Sutskever’s exit, along with the departures of other members of OpenAI’s safety team, has fueled suggestions that the company may have been lax about ensuring AGI’s safe development. Musk has also criticized Microsoft’s involvement in OpenAI, alleging that the nonprofit has become a “closed-source de facto subsidiary” of Microsoft.
The Road Ahead for Safe Superintelligence
In an interview with Bloomberg, Sutskever and his co-founders did not reveal any backers, though Gross mentioned that raising capital would not be an issue for the startup. It remains unclear whether Safe Superintelligence’s work will be open source.
The launch of Safe Superintelligence marks a new chapter for Ilya Sutskever and raises fresh questions about the future of AI safety. As the AI landscape continues to evolve, the tension between rapid advancement and careful safety work will only sharpen. With Sutskever at the helm, Safe Superintelligence aims to chart a path where breakthroughs in AI capability stay tightly coupled to robust safety measures.