How to Use ChatGPT to Help, Not Hurt, Your Thinking and Learning

What science says about whether large language models are really rotting your brain.

Eva Keiffenheim MSc
AI won’t rot your brain in the literal sense — but it is reshaping how we think in ways we’re only beginning to grasp.

A recent MIT study, along with the ensuing media coverage, has brought the debate into the spotlight. In the study, researchers asked 54 university students to write essays using one of three methods: ChatGPT, Google Search, or no digital tools at all.

 

The goal? To understand how different tools shape not just what we write, but how we think in the process.

The researchers monitored participants’ brain activity, memory recall, essay quality, and sense of ownership across these conditions. Here’s what they found:

  • Participants who wrote without tools showed the highest brain engagement, best memory, and strongest sense of ownership.
  • ChatGPT users had the weakest brain activity, no memory of their writing, and described their work as less personal.
  • Essays written with AI were also rated as less original and more repetitive.

However, and this is important, the study has significant limitations: it involved only 54 participants from elite universities, and participants in the ChatGPT group were told to rely exclusively on AI, which doesn’t reflect the more flexible, real-world ways people use these tools. The study examined only short-term effects, making it hard to generalise the findings.

So, the panic over whether AI is "rotting our brains" is a high-minded distraction from the more important debate: how to use large language models in ways that benefit our brains. What follows is a science-backed first attempt at doing precisely that.


The Core Mechanism: Substitution vs. Complementation

 

The impact of an LLM on your learning is not predetermined. According to research by Lehmann, Cornelius, and Sting (2025), the neutral overall effect of AI on learning masks two opposing forces:

1. The Path to Cognitive Debt (Substitution)

  • What it is: Using an LLM to replace your own cognitive effort. This is prompting "write an analysis of Q3 sales data" or "explain the Meno paradox" before you’ve even tried to think about it yourself.
  • The Consequence: This is the behaviour the MIT study captured. It creates an illusion of productivity. You cover more topics, and the output looks great. A meta-analysis by Wang & Fan (2025) confirms this "performance paradox." ChatGPT use has a significant positive effect on task performance (grades) but a much smaller impact on the development of higher-order thinking. You get the answer, but you bypass the very mental struggle that forges durable knowledge. The high grade masks a learning deficit. This is how you accumulate cognitive debt.

     

2. The Path to a Cognitive Dividend (Complementation)

 

  • What it is: Using an LLM to enhance your own cognitive effort after you’ve already engaged. This is asking, "Here is my draft analysis of Q3 sales data; act as a sceptical CFO and poke holes in it," or "I'm struggling to connect Plato's Meno paradox to modern learning theory; can you offer an analogy?"
  • The Consequence: This approach, by definition, requires a baseline of effort. To ask a clarifying question, you must first engage with the material sufficiently to identify a point of confusion. This transforms the AI from a simple answer machine into a Socratic partner. You don't cover more ground, but your understanding of the ground you do cover becomes profoundly deeper. You are getting work done and building intellectual capital.

To consistently achieve a cognitive dividend, you need a set of operating principles.


The 4 Principles of Cognitive Partnership

 

These four principles, grounded in cognitive science, will help you transition from substitution to complementation and leverage AI as a partner in thinking.

 

Principle 1: Generate, Then Augment.

The Rule: Think first, write first, meet first. Never start a high-stakes cognitive task with a blank page and an AI prompt.

Why it Works: This protects your unique thinking from anchoring bias. By getting your own raw, unfiltered ideas down first, you create a cognitive foundation. AI's output then becomes data for you to work with, rather than a mould that shapes your entire thought process. The effort of the initial generation is where the learning happens.

In Practice: Draft the core arguments of your writing, whether that’s a legal brief or an essay, before asking AI to help you refine the language, find supporting references, identify your blind spots, or generate related ideas.
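For readers who work with LLMs programmatically, the "generate, then augment" rule can be sketched as a simple guard. This is an illustrative Python sketch, not from the article: the function name, the length threshold, and the chat-message shape (as used by common chat-completion APIs) are all assumptions.

```python
# Illustrative sketch of "Generate, Then Augment": the model is only ever
# shown a prompt that already contains your own draft. The function name,
# the 200-character threshold, and the message format are assumptions.

def build_augment_prompt(draft: str, task: str) -> list[dict]:
    """Return chat-style messages, but only once a human draft exists."""
    if len(draft.strip()) < 200:  # arbitrary guard against blank-page prompting
        raise ValueError("Write your own draft first, then ask the AI to augment it.")
    return [
        {
            "role": "system",
            "content": (
                "Refine the user's draft: sharpen language, flag weak evidence, "
                "and suggest related ideas. Do not replace their argument."
            ),
        },
        {"role": "user", "content": f"Task: {task}\n\nMy draft:\n{draft}"},
    ]
```

The returned list can be passed to any chat-completion endpoint; the point is that the guard makes substitution impossible by construction, because the prompt cannot exist without your own thinking in it.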

 

Principle 2: Seek Friction, Not Fluency.

The Rule: Use AI as a sparring partner, not a minion. Prompt it to challenge, critique, and reveal the flaws in your thinking (more prompts on how to do this in this week’s deep dive).

Why it Works: AI models are optimised for helpful, agreeable outputs. You must deliberately introduce "desirable difficulty." Forcing yourself to confront counterarguments strengthens your position and exposes blind spots, a critical component of robust intellectual work.

In Practice: Prompt your GPT to challenge you: “Focus on substance over praise. Skip all compliments. Challenge my thinking. Identify weak assumptions, point out flaws, and offer counterarguments grounded in evidence.”
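The sparring-partner setup can also be wired into any chat interface as a reusable system prompt. A minimal sketch, reusing the article's own wording as the system message; the constant and function names are assumptions:

```python
# Illustrative sketch of "Seek Friction, Not Fluency": every request is
# wrapped in a critique-first system prompt (wording taken from the article).

CHALLENGE_SYSTEM_PROMPT = (
    "Focus on substance over praise. Skip all compliments. "
    "Challenge my thinking. Identify weak assumptions, point out flaws, "
    "and offer counterarguments grounded in evidence."
)

def sparring_messages(claim: str) -> list[dict]:
    """Wrap a claim or draft in chat-style messages that demand critique."""
    return [
        {"role": "system", "content": CHALLENGE_SYSTEM_PROMPT},
        {"role": "user", "content": claim},
    ]
```

Because the instruction lives in the system role, it applies to the whole conversation rather than a single turn, so the model is less likely to drift back into agreeable mode.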

 

Principle 3: Scaffold for Skill, Don't Outsource the Process.

The Rule: Use AI to help you navigate your zone of proximal development, not to avoid the journey altogether.

Why it Works: When learning a new skill, from coding to a new language, AI can act as an infinitely patient tutor. Ethan Mollick referenced a study by researchers at the University of Pennsylvania, who found that simply instructing students to use ChatGPT for homework led to worse outcomes because it short-circuited the learning process. But when the model was prompted to act as a tutor, it was far more effective.

In Practice: Shift your prompts from "giving answers" to "guiding thinking." "I'm learning Python. I need to write a script that performs X. Please don't write the code for me. Instead, ask me guiding questions to help me figure out the logic on my own." or “Explain [complex topic] to me using analogies and metaphors from the world of [your domain of expertise].”

 

Principle 4: Automate the Trivial to Amplify the Essential.

The Rule: Strategically offload low-value cognitive work to free up mental bandwidth for high-value thinking.

Why it Works: This is an intelligent application of Cognitive Load Theory. Your working memory is a finite resource. Wasting it on tasks like summarising meeting transcripts, reformatting data, or finding the proper syntax for a function is an inefficient use of your biological hardware.

In Practice: Use AI to summarise a lengthy document to decide if it's worth your deep attention. Use AI to create a first-draft agenda for a meeting, which you then refine with your own strategic priorities.


Your Brain is Safe. Your Thinking is a Choice.

 

The fear of "LLM brain rot" is, at its core, a fear of our passivity. We worry we'll take the easy path offered by the technology and let our most valuable skills atrophy.

We should worry. But we should also remember that we have a choice.

The research is clear: the outcome is not determined by the tool, but by the user's behaviour. The challenge for every knowledge professional is to move from being a passive user of AI to an active architect of their cognitive partnership with it.

Your brain is safe. Your thinking, however, is up to you.

 

Sources

 

Goyal, A. (2025). AI as a Cognitive Partner: A Systematic Review of the Influence of AI on Metacognition and Self-Reflection in Critical Thinking. International Journal of Innovative Science and Research Technology. https://doi.org/10.38124/ijisrt/25mar1427.

Kosmyna, N., Hauptmann, E., Yuan, Y., Situ, J., Liao, X.-H., Beresnitzky, A., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for an essay writing task. arXiv. https://doi.org/10.48550/arXiv.2506.08872

Lee, H., Stinar, F., Zong, R., Valdiviejas, H., Wang, D., & Bosch, N. (2025). Learning Behaviours Mediate the Effect of AI-powered Support for Metacognitive Calibration on Learning Outcomes. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3706598.3713960.

Lehmann, M., Cornelius, P. B., & Sting, F. J. (2025). AI meets the classroom: When do large language models harm learning? SSRN. https://ssrn.com/abstract=4941259 or http://dx.doi.org/10.2139/ssrn.4941259

Sidra, S., & Mason, C. (2024). Reconceptualising AI Literacy: The Importance of Metacognitive Thinking in an Artificial Intelligence (AI)-Enabled Workforce. 2024 IEEE Conference on Artificial Intelligence (CAI), 1181-1186. https://doi.org/10.1109/CAI59869.2024.00211.

Wang, J., & Fan, W. (2025). The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: Insights from a meta-analysis. Humanities and Social Sciences Communications, 12, 621. https://doi.org/10.1057/s41599-025-04787-y

Weir, M. J. (2024). White Paper: Embracing Responsible Use of ChatGPT in Education. The Interactive Journal of Global Leadership and Learning, 3(1). https://doi.org/10.55354/2692-3394.1049

Xu, X., Qiao, L., Cheng, N., Liu, H., & Zhao, W. (2025). Enhancing self‐regulated learning and learning experience in generative AI environments: The critical role of metacognitive support. British Journal of Educational Technology. https://doi.org/10.1111/bjet.13599.