Ex-leader of OpenAI claims safety has been overlooked in favor of flashy products at the company.

OpenAI's CEO pledges to continue working towards their goals.

May 18th 2024.

A former OpenAI leader who recently resigned expressed concerns about how safety is prioritized within the influential artificial intelligence organization. In a series of social media posts, Jan Leike, who headed the "Superalignment" team alongside a co-founder who also stepped down, said he had joined the San Francisco-based company because he believed it was the ideal place to conduct AI research.

However, Leike had been at odds with the company's leadership over its core priorities for some time, and the disagreement eventually reached a breaking point. As an AI researcher, he believes there should be greater emphasis on preparing for the next generation of AI models, particularly in terms of safety and assessing the societal impact of these technologies. He stressed that the pursuit of "smarter-than-human machines" is inherently dangerous and that OpenAI carries an enormous responsibility on behalf of humanity.

Leike called for OpenAI to become a "safety-first AGI company," using the abbreviated term for artificial general intelligence, a futuristic concept where machines possess broad intelligence similar to humans, or at least comparable ability in many areas. OpenAI CEO Sam Altman responded to Leike's posts, expressing his appreciation for Leike's contributions to the company and his sadness over his departure. Altman also promised to write a more in-depth post on the matter in the coming days.

The company also confirmed that Leike's Superalignment team, which was established last year to focus on AI risks, has been disbanded; its members will be integrated into the company's other research efforts. Leike's resignation came shortly after OpenAI's co-founder and chief scientist, Ilya Sutskever, announced his departure after nearly ten years with the company. Sutskever was one of four board members who voted to remove Altman last fall, though Altman was quickly reinstated. It was Sutskever who initially informed Altman of his termination, and he later expressed regret for his role in it.

Sutskever shared that he is currently working on a new project that holds personal significance, but he did not disclose any further details. He will be succeeded by Jakub Pachocki as chief scientist. Altman praised Pachocki as one of the greatest minds of their generation and expressed confidence in his leadership to guide the company towards their mission of ensuring that AGI benefits everyone.

On Monday, OpenAI unveiled the latest update to their AI model, which can imitate human cadences in verbal responses and even attempt to detect people's emotions.
