More on ChatGPT:

Will AI Be Helpful or Harmful?

by JOHN SPENSER

 

When I first started talking to AI experts, I had the worst-case scenario picture in my head. Robot overlords. Technology gone rogue. AI growing too smart and sentient and deciding to launch nuclear weapons. However, as these computer scientists described how AI actually works and the guardrails put in place within systems, I realized that most of my fears were based on science fiction. In other words, it’s not necessarily going to be Skynet.

 

Many proponents pointed out how AI could help with research and development. More efficient power grids and supply chains could help address climate change. It sounds odd, but the whole planet benefits when food waste goes down. AI is one reason we got the COVID vaccine so quickly. Proponents also point to a future where we might find cures for cancer.

 

Moreover, AI shows promise in automating boring tasks. In creative work, you might spend more time making videos and less time on the monotony of video editing. AI can also reduce human error: autopilot has reduced plane crashes even as the number of planes in the sky has grown.

 

But there’s also potential for real harm. While AI might help with research and development, that same speed and efficiency might reduce the number of jobs required to do the work. If AI can automate boring tasks, what about the negative effects of never experiencing boredom? After all, boredom is vital for creative thinking. There are also huge concerns about privacy, misinformation, and technology going rogue in ways that can’t be mitigated. Plus, there’s the unknown: AI will have unintended consequences.

 

Recently, prominent computer scientists, ethicists, and philosophers have called for a pause on AI development. This isn’t an anti-AI stance so much as a “let’s slow down and see what it means to use it wisely” approach. In A Human’s Guide to Machine Intelligence, Kartik Hosanagar argues that we need to develop an Algorithmic Bill of Rights.

 

I am deeply concerned about the role of AI in misinformation. That’s why I interviewed two experts on information literacy who talked about things like deepfakes and what they might mean at a social level (for democracy) and at an individual level (with issues like catfishing). I’m also concerned about job displacement. I don’t know whether AI will replace more jobs than it creates, but I know the disruption will be hard on certain communities.

 

At the same time, I have been impressed by the potential of AI to solve really hard problems. I get excited about how it might be used ethically and wisely to improve the world. I’ve been intrigued by Dr. Cynthia Breazeal’s notion of Designing Sociable Robots for nearly two decades.

 

Ultimately, I’m not sure AI is inherently harmful or helpful so much as it is powerful. And with anything that powerful, there will be significant tradeoffs for our society. These tradeoffs are going to impact our classrooms in significant ways.