More on ChatGPT:

Cautions for Students Using AI

If you log into ChatGPT, the home screen makes it clear what AI does well and what it does poorly. I appreciate that the technology is upfront, from the start, about some of its limitations. However, there are a few more limitations of ChatGPT that students should consider.

  • ChatGPT is often dated. Its neural network relies on training data that stops at 2021, which means ChatGPT lacks an understanding of emerging knowledge. For example, when I asked a prompt about Russia and Ukraine, the response lacked any information about the ongoing Russian invasion of Ukraine.
  • ChatGPT can be inaccurate. It will make things up to fill in the gaps. I was recently talking with someone who works at MIT, and she described some of the inaccurate responses she’s gotten from ChatGPT. This could be due to misinformation in the vast data set it pulls from, but it might also be an unintended consequence of the inherent creativity in AI. When a tool can generate new content, there is always the potential that the new content contains misinformation.
  • ChatGPT may contain biased content. Like all machine learning models, ChatGPT may reflect the biases in its training data. This means it may give responses that reflect societal biases, such as gender or racial bias, even if unintentionally. Back in 2016, Microsoft introduced an AI bot named Tay. Within hours, Tay began posting sexist and racist rants on Twitter. So, what happened? The machine learning model learned what it means to be human from its interactions with people on Twitter. As trolls and bots spammed Tay with offensive content, the AI learned to be racist and sexist. While this is an extreme example, deep learning systems will always contain biases. There’s no such thing as a “neutral” AI, because it pulls its data from the larger culture. Many early AI systems used the Enron email files as initial language training data. The emails, which were in the public domain, contained a more authentic form of speech. But it was also speech that skewed conservative and male, because Enron was a Texas-based energy company.
  • ChatGPT lacks contextual knowledge. While ChatGPT can analyze the words in a given sentence or paragraph, it may not always understand the context in which those words are used. This can lead to responses that are technically correct but don’t make sense in the larger conversation. If a student writes a personal narrative, they know the context better than any AI possibly could. When writing about local issues for a school newspaper or blog, the AI won’t have the local knowledge that a student journalism team demonstrates. This is why it’s critical that students learn how to contextualize knowledge.
  • ChatGPT requires an understanding of command prompts. This sounds simple, but it’s easy to miss. ChatGPT isn’t a mind reader, so if students use it to answer questions, they need to become really good at designing their command prompts.
  • ChatGPT lacks empathy. ChatGPT may not be able to understand or recognize the emotional context of a conversation, which can lead to inappropriate or insensitive responses. It might give insensitive feedback when a student uses it for the revision process, and it might lack awareness and empathy when students ask questions and engage in research.
  • ChatGPT lacks common sense. I’m not sure how to describe this, but some of the answers I’ve gotten from ChatGPT seem silly and nonsensical. ChatGPT’s responses are based solely on the patterns and associations it has learned from text data, so it may not always have the common sense or practical knowledge to understand the context of a conversation or provide accurate responses.
  • ChatGPT might not be eco-friendly. Deep learning requires an immense amount of processing power, and as AI becomes more pervasive, it could accelerate climate change. Wired described it this way: “deep learning inherently requires huge swathes of data, and though innovations in chips mean we can do that faster and more efficiently than ever, there’s no question that AI research churns through energy.” On the other hand, some technologists see AI as a potential solution for making power grids more efficient and reducing the amount of energy we collectively consume.
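The earlier point about designing command prompts can be made concrete with a small sketch. The prompts below are hypothetical examples I made up for illustration, not tested outputs; the idea is simply that a specific prompt names the audience, length, and required elements that a vague prompt leaves ChatGPT to guess at.

```python
# Hypothetical example: the same request, phrased two ways.

# A vague prompt leaves the audience, length, and scope unstated.
vague_prompt = "Tell me about the water cycle."

# A specific prompt spells out audience, length, and the elements to cover,
# which tends to produce a far more usable response.
specific_prompt = (
    "Explain the water cycle to a 7th-grade audience in under 150 words. "
    "Define evaporation, condensation, and precipitation, "
    "and give one real-world example of each."
)

print(vague_prompt)
print(specific_prompt)
```

Students can practice this by taking any vague question they would normally type and rewriting it to state who the answer is for, how long it should be, and what it must include.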