More on ChatGPT:

Some More Truths about AI:

The latest generation of AI uses something called machine learning. This means the algorithm can improve on its own by learning from data, refining how it makes predictions and performs tasks. These algorithms use statistical techniques to find patterns in the data and learn from them, which improves their performance over time.
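
To make this concrete, here is a tiny sketch in Python using the free scikit-learn library (one library among many; the study-hours numbers below are invented purely for illustration). Notice that we never give the model a rule for passing; it infers the pattern statistically from the examples.

```python
# A minimal sketch of machine learning: finding a pattern in data.
# Assumes Python with scikit-learn installed (pip install scikit-learn).
from sklearn.linear_model import LogisticRegression

# Invented toy data: hours studied and whether the student passed (1) or failed (0).
hours_studied = [[1], [2], [3], [4], [5], [6], [7], [8]]
passed = [0, 0, 0, 0, 1, 1, 1, 1]

model = LogisticRegression()
model.fit(hours_studied, passed)        # "learning": estimate the pattern from examples

print(model.predict([[2.5], [6.5]]))    # predictions for new, unseen students
print(model.predict_proba([[4.5]]))     # the model's confidence near the boundary
```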

We see this machine learning in everything from image recognition to natural language processing to fraud detection. About a decade ago, researchers began making enormous leaps in the development of deep learning, a type of machine learning that uses layered artificial neural networks whose connections adjust themselves as they process data.
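
Here is an equally small sketch of that self-adjusting idea, again assuming Python and scikit-learn (the XOR pattern below is a standard teaching example, not anything specific to this book). Training repeatedly nudges the network's internal weights until its predictions match the examples.

```python
# A minimal sketch of a neural network adjusting itself, assuming scikit-learn.
from sklearn.neural_network import MLPClassifier

# XOR: a pattern that no single straight line can separate.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

net = MLPClassifier(hidden_layer_sizes=(8,), activation="tanh",
                    solver="lbfgs", max_iter=2000, random_state=0)
net.fit(X, y)                 # training: the weights adjust until the pattern is captured

print(net.predict(X))         # ideally [0 1 1 0]
print(net.coefs_[0].shape)    # the learned weights between the input and hidden layer
```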


More recently, we have seen the emergence of generative AI, a type of AI that uses machine learning models to generate new data. This can include new text, images, music, or even video that is similar in style and content to existing examples. If you’ve used an art generator, you’ve seen how you can type a prompt and get back an entirely new picture in a style of your choosing. If you’ve used ChatGPT, you’ve seen the potential for creating new text.
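
If you would like to peek under the hood of that text generation, here is a minimal sketch using the open-source Hugging Face transformers library and the small, freely downloadable GPT-2 model (not ChatGPT itself, and the prompt is just an example). The first run downloads the model.

```python
# A minimal sketch of generative text with an open model, assuming the
# "transformers" library is installed (pip install transformers torch).
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sampled output repeatable

result = generator("The first day of school felt", max_length=30, num_return_sequences=1)
print(result[0]["generated_text"])   # brand-new text, continued from the prompt
```

Even this small model shows the basic move behind tools like ChatGPT: predict a plausible next word, over and over, until new text emerges.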


This newer type of AI seems to “think” more divergently because of neural networks loosely inspired by human cognition. The bottom line is that machines aren’t merely processing information but engaging in creativity.


This is why it feels so revolutionary. And yet, it’s important that we start the conversation with the AI we already use. If we treat generative AI as something altogether new and different, we fail to recognize the iterative aspects of innovation.


We can’t predict how AI will change our world. Right now, many are in a state of panic. Every time we experience a new technology, we also experience a moral panic. We hear dire warnings about what the technology will destroy and how our world will change. When the bicycle was invented, newspapers predicted everything from neurological diseases to distortions of the face (so-called “bicycle face”) to psychological damage. Columnists warned of young women becoming addicted to bike riding. When telephones were invented, citizens were concerned that the phones would explode. People fought against telephone poles for fear that they would cause physical harm.


Sometimes, though, deep concerns are justified. The concerns about nuclear weapons proved to be valid during the Cuban Missile Crisis. The concerns about industrial technology's environmental impact have also proven valid. Climate change is a very real crisis. On the other hand, the moral panics around comic books and video games both proved to be largely unfounded. At this point, we really don’t know if AI will make the world better or worse.


We know that our students will need to navigate whatever reality eventuates. If AI proves to be largely helpful, our students will need to know how to use AI tools in a changing world. They will need to know how to use AI in a way that is ethical and responsible. In other words, students will need to use AI wisely. We need to make sure our approach to AI is human-driven rather than technology-driven.


If AI turns out to be largely harmful, our students will need to know how to fight against the negative effects of AI. This is why it’s so important that students begin by asking critical questions about how AI is changing our world.

The following are some critical thinking questions we might ask students to consider:

  • Where am I using AI without even thinking?
  • How does AI actually work?
  • How might people try to use AI to inflict harm? 
  • How might people try to use AI to benefit humanity? 
  • What happens when someone tries to use it for good but accidentally causes harm?
  • What does AI do well? 
  • What does it do poorly?
  • What are some things I would like AI to do? 
  • What is the cost of using it?
  • What are some things I don’t want AI to do? 
  • What is the cost of avoiding it?
  • How am I combining AI with my own creative thoughts, ideas, and approaches?
  • What is the danger in treating robots like humans?
  • What are the potential ethical implications of AI, and how can we ensure that AI is aligned with human values? 
  • What guardrails do we need to set up for AI?
  • What are some ways that AI is already replacing human decision-making? 
  • What are the risks and benefits of this?
  • What types of biases do you see in the AI that you are using?
  • Who is benefiting, and who is being harmed by the widespread use of AI and machine learning? 
  • What are some ways AI seems to work invisibly in your world? 
  • What is it powering on a regular basis?

We need to invite students into a conversation about what it means to think ethically about AI in our world and in our schools. 


Our Students Will Need to Navigate a Changing World

Students will need to find their own creative voice. Think of it this way. A drum machine is great, but the slight imperfections and quirky idiosyncrasies are why I love listening to Keith Moon riff on old records from The Who.


When we write, our humour and humanity, in all their imperfections, make our work worth sharing.

AI can make great digital art, but it can’t make your art. 

Our students need to develop originality in a world of machine learning. We don’t know what types of jobs our students will eventually do. But we do know that they will need to think differently.


Students Need a Roadmap, Not a Blueprint

The hardest part of this AI revolution is its sheer unpredictability. We can’t predict precisely how machine learning will change how we think and learn.

There are no blueprints or instruction manuals. We don’t even have a playbook. After all, the rules seem to change as we go. But what we can offer students is a roadmap. Or, if you’d prefer, you can think of it like a trail map. We can explore the newly emerging terrain of machine learning in education. We can walk alongside students in the journey, asking, “What does all of this mean?” We can be the guides on the side.


Our students will navigate the maze of an uncertain future. Part of this navigation will involve thinking critically about the pros and cons of when to use AI. Some of it will involve thinking critically about AI and how it is changing our world, including how we engage in creativity and what this means for information literacy. Some of it might involve navigating the changing terrain of the actual school subjects. In other words, what does math look like in a world of smart machines? Much of this navigation will involve developing distinctly human skills that machines cannot replicate. It will involve finding unusual routes through a complex maze. But it will also involve empowering our students with a sense of ownership over their journey and trusting that they will find a way.