More on ChatGPT:

Top AI Researchers and CEOs Warn Against ‘Risk of Extinction’ in 22-Word Statement

James Vincent

A group of top AI researchers, engineers, and CEOs have issued a new warning about the existential threat they believe AI poses to humanity.

The 22-word statement, kept deliberately brief to make it as broadly acceptable as possible, reads as follows:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

This statement, published by a San Francisco-based non-profit, the Center for AI Safety, has been co-signed by figures including Google DeepMind CEO Demis Hassabis and OpenAI CEO Sam Altman, as well as Geoffrey Hinton and Yoshua Bengio — two of the three AI researchers who won the 2018 Turing Award (sometimes referred to as the “Nobel Prize of computing”) for their work on AI. 

There’s a very common misconception, even in the AI community, that there are only a handful of doomers. In fact, many people privately express concerns about these risks.

There is great concern over the possibility of AI systems rapidly increasing in capability and no longer functioning safely. Many experts point to swift improvements in systems like large language models as evidence of projected future gains in intelligence. They say that once AI systems reach a certain level of sophistication, it may become impossible to control their actions.

Both AI risk advocates and skeptics agree that, even without improvements in their capabilities, AI systems present a number of threats today — from enabling mass surveillance to powering faulty “predictive policing” algorithms and making it easier to generate misinformation and disinformation.


The Coming Disruption

Regardless of any pauses in AI creation, and even without any further AI development beyond what is available today, we already know that AI is going to impact how we work and learn.

We know this for three reasons. 

First, AI seems to supercharge productivity in ways we have never seen before. Early controlled studies show large improvements on work tasks, with time savings of 30% or more and higher-quality output for those using AI.

Add to that the test scores achieved by GPT-4, and it is obvious why AI use is already becoming common among students and workers, even if they are keeping that use secret.

We also know that AI will change how we work and learn because it affects a set of workers who have never really faced an automation shock before. Multiple studies show that the jobs most exposed to AI — and therefore the people whose jobs will change the most as a result of AI — are held by the most educated and highly paid workers, and by those with the most creative roles. The pressure on organizations to take a stand on a technology that affects their most highly paid workers will be immense, as will the value of making these workers more productive.

And we know disruption is coming because these tools are about to be deeply integrated into our work environments. Microsoft is releasing GPT-4-powered Copilot tools for its ubiquitous Office applications, even as Google does the same for its own office tools. And that doesn’t count the changes in education, from Khan Academy’s AI tutors to recent integrations announced by major Learning Management Systems. Disruption is fairly inevitable.

But the way this disruption affects our companies and schools is not inevitable. We get to choose what happens next.

Every organizational leader and manager has agency over what to do with AI, just as every teacher and school administrator has agency over how AI will be used in their classrooms.

So we need to have very pragmatic discussions about AI, and we need to have them right now:

What do we want our world to look like?