The AI Technology Page:

The Danger Of Superhuman AI Is Not What You Think:

Key Point:

The rhetoric over “superhuman” AI implicitly erases what’s most important about being human. Generative AI is often touted as signalling the advent of "superhuman" artificial intelligence, but this framing promotes a harmful ideology that undermines human agency and autonomy. 

Despite their impressive capabilities, these AI tools lack fundamental human qualities such as consciousness, sentience, and the ability to experience emotions. The rhetoric of "superhuman" AI misleadingly equates computational speed and problem-solving with human intelligence, diminishing the rich, multifaceted nature of human cognition and creativity. This perspective risks reducing human intelligence to mere task optimisation, ignoring the profound, qualitative aspects of human experience and potentially devaluing what it means to be human. I'm bullish on AI but also bullish on the uniqueness of humans and humanity. We will, I hope, leverage AI in the right way to progress the world.


Key Paragraphs from Shannon Vallor's May 2024 Article:

Today’s powerful AI systems lack even the most basic features of human minds; they do not share with humans what we call consciousness or sentience, the related capacity to feel things like pain, joy, fear and love. Nor do they have the slightest sense of their place and role in this world, much less the ability to experience it. They can answer the questions we choose to ask, paint us pretty pictures, generate deep fake videos, and more. But an AI tool is dark inside.


“If we think of our brain as a machine, then why would a machine that works on silicon not be able to perform any of the computations that our brain does?”


Yoshua Bengio defines “superhuman AI” as an AI system that “outperforms humans on a vast array of tasks.” For decades, the AI research community’s holy grail of artificial general intelligence (AGI) was defined by equivalence with human minds — not just the tasks they complete. IBM still echoes this traditional notion in its definition of the AGI-focused research program Strong AI:


[AGI] would require intelligence equal to humans; it would have a self-aware consciousness that can solve problems, learn, and plan for the future. …Strong AI aims to create intelligent machines that are indistinguishable from the human mind.


But OpenAI and researchers like Geoffrey Hinton and Yoshua Bengio are now telling us a different story. A self-aware machine that is “indistinguishable from the human mind” is no longer the defining ambition for AGI. The latest target is a machine that matches or outperforms us on a vast array of economically valuable tasks. OpenAI, which moved AGI’s goalposts, defines AGI in its charter as “highly autonomous systems that outperform humans at most economically valuable work.”


Once you have reduced the concept of human intelligence to what the markets will pay for, suddenly, all it takes to build an intelligent machine—even a superhuman one—is to make something that generates economically valuable outputs at a rate and average quality that exceeds your own economic output. Anything else is irrelevant.


If you try to point out, in a large lecture or online forum on AI, that ChatGPT does not experience and cannot think about the things that correspond to the words and sentences it produces — that it is only a mathematical generator of expected language patterns — chances are that someone will respond, in a completely serious manner: “But so are we.”


According to this view, characterisations of human beings as acting wisely, playfully, inventively, insightfully, meditatively, courageously, compassionately or justly are no more than poetic license. According to this view, such humanistic descriptions of our most valued performances convey no added truth of their own. They point to no richer realities of what human intelligence is. They correspond to nothing real beyond the opaque, mechanical calculation of word frequencies and associations. They are merely florid, imprecise words for that same barren task.


Bengio refuses to grant that humans are more than task machines executing computational scripts and issuing the statistically expected tokens.


If beating us at that game is all it takes to be superhuman, one might think that silicon “superhumans” have been among us since World War II, when the U.K.’s Colossus became the first computer to crack a code faster than humans could. Yet Colossus only beat us at one task; according to Bengio, “superhuman” AI will beat us at a “vast array of tasks.” But that assumes being human is to be nothing more than a particularly versatile task-completion machine. Once you accept that devastating reduction of the scope of our humanity, the production of an equivalently versatile task-machine with “superhuman” task performance doesn’t seem so far-fetched; the notion is almost mundane.