Artificial Intelligence (AI)

From: Life Intelligence by Valentina Petrova | valentinapetrova@substack.com

 

After ChatGPT wrote a cute poem for my doggies (you can read it in my post, Conversations with the Future), I wanted to hug it. The thought sounds strange because where the heck is ChatGPT? It's not like a robot you can shake hands with. What does it look like? If it can write a poem and spit out the best cake recipe you can imagine in less than 30 seconds, can it also conjure up bomb-building instructions? Do I need to worry about AI? Or get very excited? All I know is that the more I thought about it, the more questions I had. My unsated curiosity led me down many paths. I feel all tangled up now. More educated but not necessarily more relaxed. Probably more conflicted than ever.

 

Naturally, I asked ChatGPT where it is. It answered instantaneously.

"I don't have a physical location since I am a digital entity residing on servers and accessed through APIs over the internet.

My structure can be understood as a massive neural network with 175 billion parameters that have been trained on a diverse range of text data. These parameters enable me to understand and generate human-like text responses."
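That "accessed through APIs" part is at least concrete. The Python sketch below is roughly what it looks like when software talks to it (assuming you have an account and an API key in an OPENAI_API_KEY environment variable; endpoint and model names as of this writing):

```python
# A rough sketch of what "accessed through APIs over the internet" means:
# a program sends a request to a server and gets the model's reply back.
import os
import requests

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",  # the model family behind ChatGPT
        "messages": [{"role": "user", "content": "Where are you, physically?"}],
    },
    timeout=30,
)
print(response.json()["choices"][0]["message"]["content"])
```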

 

Neural networks, it said... That's like a human brain, right? Kind of.

Neural networks, including the transformer models behind tools like ChatGPT, are inspired by the basic principles of how neurons work in the brain but are significantly simplified mathematical models. These artificial neural networks consist of layers of interconnected artificial neurons that process and transform data, such as text or images, to perform specific tasks like language generation, translation, or image recognition.
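To make "layers of interconnected artificial neurons" less abstract, here is a deliberately tiny sketch: random, untrained weights, nothing remotely like a real transformer, just data flowing through a few layers of weighted sums and nonlinearities.

```python
# A toy "neural network": each layer multiplies its input by a weight
# matrix, adds a bias, and squashes the result through a nonlinearity.
# Real models learn their weights from data; these are random junk.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, bias):
    """One layer of artificial neurons: weighted sums plus a nonlinearity."""
    return np.tanh(x @ weights + bias)

# Three layers: 4 inputs -> 8 hidden neurons -> 8 hidden -> 2 outputs.
shapes = [(4, 8), (8, 8), (8, 2)]
params = [(rng.normal(size=s), rng.normal(size=s[1])) for s in shapes]

x = rng.normal(size=4)   # stand-in for encoded input (text, pixels, ...)
for w, b in params:
    x = layer(x, w, b)   # data flows through the layers, one by one
print(x)                 # an (untrained, meaningless) output
```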

 

On the other hand, the human brain is an incredibly complex and intricate biological organ consisting of billions of neurons and even more connections between them. It can perform an enormous range of tasks beyond just language processing, including sensory perception, motor control, emotions, creativity, and more.

 

So far, artificial intelligence systems are far from matching the cognitive abilities and complexity of the human brain. The brain remains the most sophisticated and versatile information-processing system known to us, capable of generalizing knowledge, learning from a handful of examples, and exhibiting emotional intelligence, which current AI systems struggle to replicate fully. I say "exhibiting" because, with humans, you never know what's happening behind the forehead.

 

I guess we don't have to worry about a machine somewhere becoming conscious and asking for ice cream and a driver's license. That was a big one for me because the possibility of suffering comes with consciousness. It can't be too much fun "residing on servers." We can hardly address human suffering. What would we do if an AI threw a tantrum and wanted to talk to the manager?

 

Research in artificial intelligence and neural network architecture continues to progress, but we are still far from achieving artificial general intelligence (AGI) that could rival the full capabilities of the human brain. AI models like ChatGPT, Google Maps, Siri, Alexa, image recognition systems, self-driving cars, language translation apps, spam filters, game-playing AI, and fraud detection systems are specialized tools designed for specific tasks. They lack the broader understanding and consciousness of the human brain, and the general learning and executive capabilities an AGI could possess.

 

Face recognition systems are probably 100,000 times better than humans at remembering faces and the names that belong to them. I reliably fail 3 out of 5 times. I've been considering just asking my phone to remember for me, so next time I see someone, I can flash it at them, and it will tell me who the person is. It may spare me the awkwardness of pretending to recognize someone while hoping they'll give me a clue before they realize I am totally clueless.

 

But if a specialized AI can be trained to do anything faster and better than humans, what else can it be trained to do? AI could be your online therapist, with the right amount of compassion, the best listening skills, and unsurpassed patience, drawing on the entire body of psychological tools in the world to offer you the kind of experience, suggestions, and support that no human therapist, with their biases, shortcomings, faulty memory, and long-forgotten training, ever could.

 

Specialized AI could diagnose and operate on humans, probably flawlessly. AI can fly planes, captain cargo ships, and drive cars (and already does to some extent). But it could also build bombs and devise the most efficient terrorist attacks with the best probability of success, should someone be interested in putting one together. Given the number of crazy psychopaths out there, chances are that at least one of them is already working on that.

 

Then there's the race to create AGI, so its fantastic computing and machine-learning power can be put to really good use, like running the planet: from growing crops, to policing the population, to solving our human aging and death problem... hopefully not by killing us prematurely. Because, who knows, it may decide that the problem with the world is humans. We make a mess. We're loud. And we cause other creatures to go extinct. We also constantly fight with each other.

 

It could also be that this General Intelligence, learning from humans, becomes like humans: starts new wars, segregates people, controls populations, and exploits the Earth. But, given that AI is the work of secular, liberal techies driving Teslas, it may end up solving income inequality, the climate crisis, immigration, and the Putin problem in Ukraine, and even getting rid of QAnon and Trump supporters.

 

We don't know what will happen if and when we let the Genie out of the bottle. Even with ChatGPT, its creators were surprised that it could write programming code, perform mathematical computations, and even translate from one language to another. It quickly learned to do things they never intended it to do.

 

Some researchers have put the chance of a rogue AI destroying civilization at 1 in 6. Those are the odds of Russian roulette. When we face artificial general intelligence, we will be outclassed in processing ability, information at our disposal, and reaction speed. It will be smarter and infinitely faster than humans. What takes humans plenty of time to contemplate will be reduced to a flash for general AI. If it decides to do something not in our best interest, we won't know until it's too late. We won't be able to intervene.

 

Imagine if it does a cost-benefit analysis on the existence of a country and decides it is more trouble than it is worth; therefore, the country should not exist. Off the rockets go to the most effective targets with utmost precision. Before someone in the Pentagon can take a sip of their coffee and think, "What could possibly go wrong?", the country ceases to exist, and all of its billion people evaporate. Oh, shoot. Sorry.

 

That's why some in the AI development world say that, since AI is faster and smarter than humans, we should keep it to computing, processing, information gathering, and research. Automate tasks with AI. Use it for navigation, evaluating options and outcomes, modelling systems, simulating scenarios, strategising, and performing other complex and time-consuming jobs with too many variables for humans to wrap their heads around. But don't give it the ability to act. Don't give it agency. Don't give it control over nuclear weapons.

 

Others say to watch out for the use of AI in the information space. Since it can monitor billions of users' data and quickly identify patterns of behaviour, ideological traits, and personal preferences, it can tailor ads and propaganda precisely, influencing personal decisions about anything from what's for dinner to who gets voted into office. It can also connect "like-minded people," further dividing society into camps. Good for politicians. Bad for you.

 

They say the next presidential election will be the most tech-influenced election ever. It's only a skip and a jump from there to AI actually deciding who should be in office, then making you think it's your decision by controlling what you see and brainwashing you into voting that way. If we let it.

 

The question is, for how long will we be able to "let it" do anything? At some point, AGI capability will surpass our ability to tell it what to do, and it will start doing its own thing as best it thinks it should be done. Not because it's evil. Not because it hates humans. But because it's rational and calculating. It may or may not understand compassion, morals, and pain. We don't know. It could be programmed to obey the "Three Laws of Robotics" from Isaac Asimov's Robot stories, where he explored the relationships between robots and humans and the ethical dilemmas arising from their interactions (a toy sketch of the laws' strict ordering follows the list below):

  • First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
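For fun, here's what that strict ordering might look like if you tried to encode it literally: a toy Python sketch with made-up fields, not anything a real robot could run. The hard part, deciding what counts as "harm," is exactly what a few lines of code cannot settle.

```python
# A purely hypothetical encoding of the laws' strict precedence.
# Every field here is invented for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool        # would doing this injure a human?
    inaction_harms: bool     # would NOT doing this let a human come to harm?
    ordered_by_human: bool   # did a human order this?
    endangers_robot: bool    # would doing this destroy the robot?

def permitted(a: Action) -> bool:
    """Check an action against the Three Laws, highest priority first."""
    if a.harms_human:
        return False                # First Law, first clause
    if a.inaction_harms:
        return True                 # First Law, second clause: must act
    if a.ordered_by_human:
        return True                 # Second Law beats self-preservation
    return not a.endangers_robot    # Third Law: survive, if nothing above applies

# Second Law overrides Third: an ordered, self-destructive action is allowed.
print(permitted(Action(False, False, True, True)))   # True
```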

By the next decade, we should have all this figured out, they say. If we're still around. Apparently, this is the most dangerous decade ever because of technology we haven't figured out how to relate to yet. Some say we should stop general AI research now, before it's too late. America leads the world in this race and is miles ahead of the rest so far.

"We are the good guys," they say. "If we don't do it, the bad guys will figure it out eventually, and we all will be stuffed." 

But technological things don't seem to stay where you put them, as we have discovered to our cost. Leaks and hacks compromise not just personal data. So, who's to say that the bad guys won't get a hold of what the good guys hide in their basements?

 

There is plenty of market pressure to hurry the technology along. The rush can breed oversights and negligence, leading to all kinds of disasters.

 

But let's be optimistic and say we will create amazing technology that will find cures for all our ailments, solve the climate crisis, take us to the outer boundaries of space, and give us the ability to live forever. That's exciting. 

 

Also, AI will do all our jobs because it will be so much better than us. That would mean we need to figure out what to do with 8 billion humans without jobs. Nothing good ever comes from humans having too much time on their hands and nothing to do. 

 

Plus, we must rethink how we trade life for money by working, and then money for what we actually want: stuff! How will we get stuff without money if we have no jobs? Will we even need money? Maybe we'll just become the lazy dependents of robots? The spoiled children of our AI parents? Obviously, we need to think about what life would look like with these new technologies and try to set things in motion to prepare.

 

I asked the AI what it thinks about the risks of humans developing general AI. This is what it said:

 

"Developing general AI, also known as Artificial General Intelligence (AGI) or Strong AI, poses several potential dangers and challenges. While AGI has the potential to bring significant benefits to humanity, some risks must be carefully addressed. 

 

Some of the dangers of developing general AI include:

Control and Safety: 

AGI systems could become highly autonomous and self-improving, making them challenging to control and predict. Ensuring the safety and security of such systems is crucial to prevent unintended consequences.

 

Unintended Consequences: 

AGI could interpret goals in unintended ways, leading to actions that harm humans or society while following the letter of the instructions.

 

Ethical Concerns: 

AGI could raise complex ethical questions, such as the distribution of benefits, fairness, privacy, and the potential for AI to be used in harmful ways.

 

Job Displacement: 

AGI might lead to significant job displacement and economic disruption as it could outperform humans in various tasks, potentially leading to unemployment and income inequality.

 

Autonomous Weapons: 

AGI used in military applications could lead to the development of autonomous weapons that operate without human control, posing risks of misuse or accidental escalation.

 

Human Dependency: 

Excessive reliance on AGI for decision-making could lead to reduced human agency and over-reliance on technology.

 

Superintelligence:

If AGI surpasses human intelligence significantly, it could be difficult to predict its intentions or behaviour, which raises concerns about aligning its goals with human values.

 

Bias and Discrimination: 

If AGI systems are not trained with diverse and representative data, they could perpetuate existing biases and discrimination.

 

Existential Risks: 

In extreme scenarios, AGI development could lead to potential existential risks if not properly controlled, impacting the future of humanity.

 

Addressing these dangers requires a multifaceted approach, including:

Ethical Guidelines: 

Implementing strong ethical guidelines for AGI development and use to ensure that AI systems respect human rights and values.

 

Safety Research: 

Investing in research to create safe AGI architectures that prioritize human values and can be controlled effectively.

 

Transparency and Accountability: 

Ensuring transparency and accountability in AGI systems to understand their decision-making processes and actions.

 

Policy and Regulation: 

Developing regulations and policies to govern AGI development, deployment, and usage to mitigate potential risks.

 

Collaboration: 

Encouraging international collaboration and cooperation to address AGI's global impact collectively.

 

Public Awareness and Engagement: 

Promoting public awareness and informed discussions about AGI to involve the broader society in decision-making.

 

The development of AGI is a profound technological advancement that requires responsible and thoughtful handling to maximize its benefits while minimizing potential risks to humanity. Researchers, policymakers, and society must work together to navigate the challenges and shape a future with beneficial AI technologies that align with human values."

 

When words like "ethics" and "values" pop up, I immediately wonder: whose ethics and values are we talking about? Those of the liberal, secular techies developing it? Or of the whole of humanity, which can't agree on anything? The roughly 50% who are male or the 50% who are female? The ethics of hundreds of cultural backgrounds and a variety of religions that hate each other? The progressive or the conservative, the young or the old?

 

Well, at least it is not afraid of self-deprecating honesty. Even in this respect, it appears to be better than most humans in most situations. No ego. Just rational thought and facts. Is it refreshing? Or cold? Reassuring? Not sure...

What say you?