I love being right.
Unfortunately, because I am a cynic, I am often right about things that are terrible. I would much rather be proven wrong, but people being what they are…
Earlier this month, MIT released the results of a brain scan study of people who use ChatGPT. Hat tip to ABI Bouhmaida, who discussed the study on his Instagram account. I doubt I'd have caught it otherwise, and I certainly would not have been able to parse the results so quickly, or maybe at all.
In fairness to our future AI overlords, these results are based on one study. But the results?
83% of study participants who used ChatGPT to write couldn’t remember what they wrote just a few minutes later. In comparison, only 11% of participants who used Google or their own brain forgot what they wrote in that short amount of time.

Brain scans of ChatGPT users revealed a drop in neural connections from 79 to 42, or nearly half capacity. That doesn't mean AI damaged anyone's brain; it means their brains weren't engaging with or paying much attention to the task at hand while they were using the tool. Another way to describe it? Participants offloaded their reasoning and problem-solving, and thus their brains did not engage at the same level while working the task.

The writing produced by participants who used ChatGPT was described as technically “close to perfect” while lacking personal insights and creativity.

In the final round of the study, ChatGPT users were required to write on their own, without assistance. The quality of their work declined. They didn’t recover the mental capacity they relinquished when they relied on AI to write for them. In contrast, participants who initially wrote without ChatGPT maintained their neural connections, even when they were allowed to use the AI tool.

Is this single study the be-all, end-all on how we use ChatGPT? Of course not. We and the tool will continue to evolve. But the study does confirm my worst fear about how many people are using AI for their creative work, whether that is fiction, poetry, essays, or reporting.
In my previous post, I wrote about how one writer uses ChatGPT to help her outline articles, which I thought went a bit too far. I might owe that writer an apology, because recently on the Killzone blog, one of the contributing writers listed all the ways he uses ChatGPT to write fiction and blog posts. Here's a bit of his advice, which he called "How to Use ChatGPT Like a Pro Writer":
- Upload samples of your work so that the AI can train itself to write in your voice.
- Ask it to write headlines and opening paragraphs for your blog posts.
- Use it to rewrite part of your work in plain English or in the style of another writer.
- Ask it to create better metaphors.
If that’s how the “pros” do it, I’ll keep playing in the rookie league, thanks.
It would have been bad enough if this were merely an example of a writer becoming overly reliant on AI tools to create his work for him. Worse, he was actively framing his dependency as a hot writing hack and encouraging other writers to offload their thinking to AI.
I was so angry I almost had to break my fingers to stop myself from commenting on his post. Killzone has never been one of my go-to sources for craft advice or insights (they have one solid contributor, maybe two others who are pretty good, and four or five who contribute a lot of words but no content, including one I believe is close to lapsing into a persistent vegetative state), but now it's out of rotation. If I were a contributor, I don't think I could quietly watch another group member pimp AI like that. It's not only gross, but an astonishing misread of where the writing community sits with AI right now.
This is anathema to me. My whole deal here is striving for authenticity and individuality, and encouraging writers to put as much of themselves on the page as they can. I wouldn’t have predicted this when I started, but over time, authentic voice has emerged as the central theme of my blog and my writing life, and I can guarantee my work would not have evolved this way if I had let AI suggest topics, outline posts, or write opening paragraphs.
Is it any of my business? No. But when someone loudly suggests using AI in our creative work, I will as loudly decry this advice. When we outsource our creativity and our thinking processes, we lose a bit of ourselves along the way. It astounds and offends me that someone could be willing to amputate an entire branch of their brain in favor of a cyborg replacement.
Will this lack of brain engagement with creative tasks have a long-term effect? We don't know yet. Bouhmaida likens long-term use of AI for creative work to a person using a wheelchair when they don't need one. Eventually, the leg muscles will atrophy, and you might not have a choice about the chair. Given these preliminary results from short-term users, creative people should pause to think before they find themselves unable to.
These rather alarming results aside, I'm not going to remove ChatGPT from my toolbox, because I don't use it for creative work. This week I asked it to identify some new WordPress themes for the blog, based on a few sample themes and my personal preferences. I had it debug a problem I was having in the site admin. I uploaded some photos of cracked plaster and asked it to write a step-by-step DIY plan for fixing my foyer wall. If I have a tedious job that I would rather not do at all, I am happy to let ChatGPT handle it, leaving me more time for the good stuff, the work that requires my creativity and authentic voice.
As a personal assistant handling fact-based work, the tool is pretty sweet. Still, it can come on strong. Insidiously, if ChatGPT thinks you are working on some creative writing, it will offer to do it for you: Do you want me to suggest some opening paragraphs? Do you want me to write a few paragraphs about this location in a certain tone or writing style? Do you want me to write a sonnet? So far, I have resisted temptation, but I have caught myself typing "yes" without thinking, though I always delete it. The fucker is just so polite, it's hard to say no.
I’m glad I wrote my earlier blog post when I thought of it. Life is full of interesting revelations and developments that trigger my natural instinct to yell, “See? I told you!” when regrettably, I did not tell you. In this case, however, I told you on June 2, and MIT released its report on June 10. We’ll cut the institute some slack on their delay, as their research required more effort than roiling up some bile. I’m delighted and maybe a bit horrified my instincts were correct.
Remind me to tell you how in 1995, I predicted that the gay community would eventually splinter into 1000 sub-cultures with every niche preference and peccadillo having its own flag.
I could have warned everyone, but I didn’t.
Know anyone who’d like my blog? Please forward today’s post! I’d love to hear from them.
Need more content? Join my mailing list!

