Half-baked opinions on being an AI Luddite

Some half-baked ideas I've had about AI development and usage in 2025. Still refining my thinking, but I want to share.

I'm a little worried that I might be becoming a Luddite. The Luddites were skilled textile workers in the early 1800s who were worried about the effect of new machinery on their own employment. They protested by destroying textile machines and burning down factories. Today, the term has come to refer to anyone who opposes or refuses to use the latest technological innovations.

I'm worried I'm becoming an AI Luddite, not necessarily because I'm worried about my job (though that has crossed my mind), but because I'm worried about what AI is doing to us as people. This isn't solely an AI problem (mind candy plays a role, for sure), but AI is a major factor in people remaining outwardly productive while, in my mind, missing the point.

LLMs require people's work, but then remove the individual incentive to do the work

LLMs are only as good as they are because of the written work (and code) of millions of people. But at the same time, AI is removing the incentive to create the work that it relies on.

For example, I have this blog. On this blog, I write up my thoughts on various topics, which admittedly are of questionable value. But when I learn how to do something complicated, or figure out how to solve a problem, I also write that up here. When I write something useful, I can see that users visit the site, which tells me I'm doing something valuable and encourages me to do more of it. I'm not making money off the site, but knowing that I'm helping people solve a problem gives me a little dopamine boost to do it again the next time.

AI ingests billions of documents, likely including my blog posts, then answers users' questions directly. That's awesome for the users, because they get answers more quickly, without having to search through a bunch of blog posts. But the writers of the content the AI ingested get no feedback on whether their posts were useful. So we lose the incentive to keep doing it.

If AI is going to continue to improve, one thing it will need is quality, human-generated training data. Otherwise, the models collapse under the weight of their own output. But with little incentive to create that content, we might run into trouble.

Shallow engagement

NotebookLM from Google is a super cool product that allows, among other things, the easy creation of an AI-generated, AI-hosted podcast episode about anything you want to read. The AI voices are impressively humanlike, and new features allow you to interrupt the podcast to ask questions if you want to dig deeper on a subject.

But listening to a podcast is not the same as reading a paper. The AI voices approach the paper believing every word, and I think that lulls you into a false sense of security about the content and quality of the paper and its findings. If the AI's default mode is to believe the paper, and your default mode is to believe the AI, you're missing an important layer of scrutiny that should be applied when you approach a research paper.

Often when I'm reading a paper (especially when doing peer review), a key detail that gives important context is contained in a single sentence. If the AI skips that detail in its summary, you might miss important limitations of the findings, or contextual cues about the research.

AI summaries (like "tell me the key points from X book or Y article") are similar. I don't think all writing is equally good or equally valuable, but if all you're reading is a summary, there's a fair chance you're missing important points. Maybe that's okay, but you should think about the level of engagement you want to have with the material you're reading.

Missing transformation

When I think about papers and books I've read, movies I've seen, and conversations and experiences I've had, I may not be able to remember all the details. Most likely I don't. But all of those things have fed into me becoming the person I am today. They've changed my perspective, broadened and deepened my understanding, and made me think about things in new ways.

Most frequently, it is active engagement with a topic that has the most transformative effect. Spending hours searching and reviewing the literature on a topic embeds it into my brain (my very own, personal neural network) in a way that an AI summary does not. Learning how to do something with code deepens my understanding both of programming and of my own code in a way that vibe coding never can.

And maybe sometimes that's okay. Sometimes shallow engagement with a topic is all you're going for. Maybe learning how to code isn't your goal; maybe you only need a vague awareness of what a research paper or book is about. But I think we need to make those decisions thoughtfully, with awareness of what we're missing by engaging only shallow-ly (definitely a word now).