
Monday, 27 November 2023

Rift over future of AI development

Sam Altman was fired on Nov 17 from the company that created the popular ChatGPT chatbot. To many, he was considered the human face of generative AI. — Reuters


The rift that cost artificial-intelligence whiz kid Sam Altman his CEO job at OpenAI reflects a fundamental difference of opinion over safety between two camps developing the world-altering software and pondering its societal impact.

On one side are those, like Altman, who view the rapid development and, especially, public deployment of AI as essential to stress-testing and perfecting the technology. On the other side are those who say the safest path forward is to fully develop and test AI in a laboratory first to ensure it is, so to speak, safe for human consumption.

Altman, 38, was fired on Nov 17 from the company that created the popular ChatGPT chatbot. To many, he was considered the human face of generative AI.

Some caution that the hyper-intelligent software could become uncontrollable, leading to catastrophe. That concern resonates among tech workers aligned with a social movement called “effective altruism,” which holds that AI advances should benefit humanity.

Among those sharing such fears is OpenAI’s Ilya Sutskever, the chief scientist and a board member who approved Altman’s ouster.

A similar division has emerged among developers of self-driving cars, which are also controlled by AI. Some say the vehicles must be unleashed on dense urban streets to fully understand their faculties and foibles, while others urge restraint, concerned that the technology presents unknowable risks.


Altman attending the Asia-Pacific Economic Cooperation CEO Summit in San Francisco, California. — Reuters

Those worries over generative AI came to a head with the surprise ousting of Altman, who was also OpenAI’s cofounder.

Generative AI is the term for the software that can spit out coherent content, like essays, computer code and photo-like images, in response to simple prompts.

The popularity of OpenAI’s ChatGPT over the past year has accelerated debate about how best to regulate and develop the software.

“The question is whether this is just another product, like social media or cryptocurrency, or whether this is a technology that has the capability to outperform humans and become uncontrollable,” said Connor Leahy, CEO of ConjectureAI and a safety advocate. “Does the future then belong to the machines?”

Sutskever reportedly felt Altman was pushing OpenAI’s software too quickly into users’ hands, potentially compromising safety.

“We don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” he and a deputy wrote in a July blog post. “Humans won’t be able to reliably supervise AI systems much smarter than us.”

Of particular concern, reportedly, was that OpenAI announced a slate of new commercially available products at its developer event earlier this month, including a version of its GPT-4 software and so-called agents that work like virtual assistants.

Sutskever did not respond to a request for comment.

The fate of OpenAI is viewed by many technologists as critical to the development of AI. Discussions about reinstating Altman have fizzled, dashing hopes among the former CEO’s acolytes.

ChatGPT’s release last November prompted a frenzy of investment in AI firms, including US$10bil from Microsoft into OpenAI and billions more for other startups, including from Alphabet and Amazon.com.

That can help explain the explosion of new AI products as firms like Anthropic and ScaleAI race to show investors progress. Regulators, meanwhile, are trying to keep pace with AI’s development, including guidelines from the Biden administration and a push for “mandatory self-regulation” from some countries as the European Union works to enact broad oversight of the software.

While most use generative AI software, such as ChatGPT, to supplement their work, like writing quick summaries of lengthy documents, observers are wary of versions that may emerge known as “artificial general intelligence”, or AGI, which could perform increasingly complicated tasks without any prompting. This has sparked concerns that the software could, on its own, take over defence systems, create political propaganda or produce weapons.

OpenAI was founded as a non-profit eight years ago, in part to ensure its products were not driven by profit-making that could lead it down a slippery slope toward a dangerous AGI, described in the company’s charter as anything threatening to “harm humanity or unduly concentrate power”. But since then, Altman helped create a for-profit entity within the company to raise funds, among other aims. — Reuters



Saturday, 10 June 2023

OpenAI CEO calls for global cooperation on AI regulation, says ‘China has some of the best AI talent in the world’

 

Sam Altman, the CEO of OpenAI (right), delivers an online speech at the conference held by the Beijing Academy of Artificial Intelligence in Beijing on June 10, 2023. Photo: Shen Weiduo/GT


Sam Altman, the CEO of OpenAI, called for global cooperation in setting up regulatory guardrails for powerful artificial intelligence (AI) systems at an industry forum held in Beijing on Saturday, noting that China can play a significant role with some of the "best AI talent in the world."

"China has some of the best AI talent in the world... given the difficulties solving alignment for advanced AI systems requires the best minds from around the world," Altman said at an industry forum hosted by the Beijing Academy of Artificial Intelligence (BAAI) on Saturday.

Altman predicted that in the next decade, artificial general intelligence systems, commonly called AGI, may surpass human expertise "in nearly every domain," and could eventually exceed the collective productivity of some of the largest companies.

Altman said the potential upside is enormous: the AI revolution will create shared wealth and make it possible to dramatically improve the standard of living for everyone. "We must manage the risk together in order to get there," he said.

At present, a new round of AI boom triggered by ChatGPT is sweeping the world, including China. While people have been thrilled by the productivity improvements that AI can bring, they are increasingly aware of the huge risks, with governments slowly stepping in to harness the powerful yet dangerous tool.

In May, hundreds of industry leaders, including Altman himself, signed a letter warning that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Altman pointed to the IAEA, the United Nations nuclear watchdog, as an example of how the world came together to oversee nuclear power.

"We need international cooperation to build global trust in the safe development of increasingly powerful AI systems in a verifiable way. I have no illusions that this will be easy. We will need to devote significant and sustained attention as an international community to get this right," Altman said.

In a possible nod to the impact of current geopolitical tensions on global AI cooperation, Altman noted that "great powers may have their share of differences. This is true today, as it has been before. But even during the most difficult times, great powers have found ways to cooperate on what matters. We see great potential for researchers in the US, China, and around the world to work together to achieve the shared goal."

Altman used an ancient Chinese proverb, "a journey of a thousand miles begins with a single step," in describing how the world should start on the difficult but necessary cooperation.

Discussing OpenAI's next frontier in a later Q&A session, Altman said the company is considering open-sourcing more models in the future, but has no specific model or timetable in mind.

"Certainly, at some point, we'll try to do a GPT-5 model in the sequence, but not anytime soon and we don't know exactly when," Altman said, adding that he hopes the company can someday return to robotics, a sector it worked on at its very start.

The two-day conference hosted by the BAAI invited leading industry figures such as AI "godfather" Geoffrey Hinton and Turing Award winner Yann LeCun. Apart from exchanges on AI frontiers and technological progress, this year's conference placed a special focus on AI regulation and alignment.

Industry players and experts at the forum also emphasized the importance of global cooperation in harnessing the double-edged-sword technology, which is especially urgent as large models are developing "too fast."

"We are fully aware of the urgency of AI regulation, but under the current global situation, as some in the US are hyping the China threat with AI as a key battleground, global cooperation and an international rule could be hard, though some cooperation might be achievable in certain areas," Wang Peng, a research fellow at the Beijing Academy of Social Sciences, told the Global Times on Saturday on the sidelines of the event.

Looking ahead, the development of large models and major technological breakthroughs is clearly a matter of life and death for human society, Wang said, noting that it is critical for some nations to drop the closed, retrogressive mindset of "small courtyards and high walls," join forces with the rest of the world, and concentrate on overcoming difficulties for the benefit of all mankind.
