Elon Musk and hundreds of other experts in technology and the human sciences signed an open letter on Wednesday, the 29th, calling for a six-month pause in research on artificial intelligence (AI) systems more powerful than GPT-4, the OpenAI model launched this month, warning of “great risks to humanity.”

(RFI) In the petition, published on the futureoflife.org website, the signatories call for a moratorium until safety protocols are established: new regulatory authorities, oversight of AI systems, techniques to help distinguish the real from the artificial, and institutions capable of coping with the “dramatic economic and political disruption (especially to democracy) that AI will cause.”

The text is signed by public figures who have voiced fears of an uncontrollable AI surpassing humans. Among them are Musk, owner of Twitter and founder of SpaceX and Tesla, and historian Yuval Noah Harari.

Sam Altman, head of OpenAI, the company that created ChatGPT, acknowledged that he is “a little afraid” that his creation could be used for “large-scale disinformation, or for cyberattacks.”

“Society needs time to adapt,” he recently told broadcaster ABC News.

“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control,” the letter states.

“Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? (…) Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders,” the letter concludes.

The list of signatories also includes Apple co-founder Steve Wozniak, members of Google’s AI lab DeepMind, Stability AI chief Emad Mostaque, as well as American AI experts and academics, and engineers from Microsoft, an OpenAI partner.
