Ex-OpenAI Chief Scientist launches Safe Superintelligence startup

Ilya Sutskever, former Chief Scientist and co-founder of OpenAI, has launched Safe Superintelligence Inc. (SSI). Announced just one month after Sutskever’s departure from OpenAI, the new company prioritises the development of safe and beneficial superintelligent systems.

Sutskever is joined in the venture by Daniel Gross, who previously led AI efforts at Apple, and Daniel Levy, another former OpenAI researcher.

“We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence,” said a blog post by the founders.

Headquartered in Palo Alto, California, with a branch in Tel Aviv, Israel, SSI focuses on addressing the critical challenge of ensuring safety in superintelligence.

SSI positions itself as the world’s first “straight-shot SSI lab,” emphasising a singular focus on developing safe superintelligence. The company intends to advance AI capabilities and safety measures in tandem, pushing the boundaries of AI while ensuring that safety protocols remain ahead, enabling what it calls “peaceful scaling” of AI technologies.

A key differentiator for SSI is its commitment to avoiding the distractions prevalent in the tech industry. The company says its business model and organisational structure are designed to insulate it from short-term commercial pressures and excessive management overhead.

The launch of SSI coincides with recent upheavals at OpenAI, including high-profile departures and concerns about oversight raised by former staff. Sutskever’s exit followed internal conflict regarding AI safety and leadership direction. 

SSI is actively recruiting top talent, offering the opportunity to work on what they consider the most critical technical challenge of our time.

“We are assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else,” the blog post said. 

As the AI landscape continues its rapid evolution, eyes will be on Safe Superintelligence Inc. to see whether its focused approach can deliver on the promise of truly safe and beneficial superintelligent systems.
