A Vibe Coder's Millennium
I got my first home computer in late 2000, when I was in the middle of 10th grade. One day, unprompted, my technologically illiterate father decided to take us to the Sears in our local mall to buy me a computer. He didn’t have much money to his name, so he paid entirely in high-interest monthly installments, and we quickly settled on a brand new, off-eggshell colored Compaq Presario desktop. It was equipped with the cutting-edge Windows ME operating system, a conspicuously advertised Intel Celeron processor, and a free month of AOL. I was genuinely grateful to my dad and excited to finally have a computer to call my own.
Even by the norms of the era, this was quite late in my education to be getting my first computer. Most of my friends already had one; for several years they had been using theirs to type the term papers I was still submitting handwritten and to catfish in chatrooms I couldn’t enter.
But I wasn’t a noob.
Thanks to Albuquerque Public Schools’ uncharacteristically prudent investment in school computer labs and education, I had been enthusiastically taking computer classes every year since elementary school in the early 90s. As such, I already had decent experience with Word, Excel, and PowerPoint, even before that first home computer in 10th grade.
The real credit, however, goes to my technologically savvy mother. Well into her 40s at the time, she stayed on top of her skills in emerging technologies, subscribed to Wired Magazine, and was never intimidated by a rapidly changing personal computing landscape. For as long as I can remember, she cultivated my interest in computers by encouraging me to learn more about what made the video games I liked work. We would go to the library so she could introduce me to something called Netscape Navigator and the “world wide web.” And when she had to bring my sister and me along to the office on nights she worked late, she would just let us tinker with her coworkers’ computers.
Thanks to her, the first day I booted up the computer my dad bought me, there was no learning curve.
I was ready.
Transformer
I was not anticipating the impact digital technology—specifically, generative AI—would have on our collective consciousness when I was configuring my Presario twenty-five years ago. But what people have used AI for these past several years is genuinely and undeniably amazing.
From predicting the structure of protein molecules to mental health coaching, applications of generative AI are well beyond what most people thought possible even a few years ago. We often joke about how older folks struggle with opening a PDF, but many college students—so-called “digital natives” who are perfectly adept with apps on their phones—struggle with basic directory structures and even file uploading. I don’t think it is unfair to say that the average American is not particularly tech savvy. But AI makes everything so effortless and delivers itself to us in a friendly, competent voice that is easy to imagine as the timbre of the most insightful person you’ve ever met. It is no surprise that so many people have come to think of AI as an objective oracle, a sentient being, or some otherwise magical tool at our disposal.
Behind my interest in computers all these years was the promise of seeing these technologies being applied to issues I cared about and the prospect of mastering the skills needed to use them accurately—it paid off. I have been using machine learning and computational workflows for over a decade as part of my work and play (you can find the code for my March Madness bracket model on my GitHub). I’ve taught these methods in my courses. I listen to podcasts dedicated to advancements in large language models. I make an active effort to at least test drive many of the latest AI tools.
So how have I found myself being the wettest of blankets in every conversation I have about AI, from the barstool to the faculty meeting? And I’m only straying further and further from mainstream attitudes toward AI in this ubiquitous cultural moment. Why am I so bearish?
Glass Slippers
In his 1963 book, The Making of the English Working Class, historian E.P. Thompson describes the cultural and economic impact of the early industrial revolution as societies shifted away from the traditional apprenticeship model of skilled-labor training. Previously, for example, a master shoemaker might train their apprentices in all aspects of the shoemaking process, from procuring the proper materials to selling the finished product; by the time apprentices were finished, they were ready to open shops of their own. As part of their training, they would also be instilled with the moral and social discipline that came with the craft within its cultural landscape. It was slow, meticulous, and esoteric work. Critically, it required an intrinsic desire for mastery that fueled the motivation to sustain expertise and continually push to the next level of proficiency.
However, the arrival of unskilled automation in the industrial revolution heralded a dramatic shift from the apprenticeship model to the assembly line. Junior shoemakers were no longer trained to eventually make a ready-to-sell pair of shoes on their own. Rather, they would learn to produce one part of the shoe as efficiently as possible so that it could be passed along to the next unskilled worker in the production line. As a result, no individual worker could understand the shoemaking process completely enough to gain the economic independence of the craft masters or proprietors. What’s more, where craft masters had once even shared living quarters with their apprentices, there was now no venue for the informal learning of the moral and cultural discipline that used to be implicit in the production of the craft.
As much as this shifted the economic realities of the day, it also eroded the traditional sources of collective values and personal motivation that these systems maintained.
The sociomoral vacuums created by these economic shifts are thought to have given rise to even broader societal changes. In the United States, the Second Great Awakening and rise of evangelicalism brought religious revivalism that emphasized the role of conspicuous worship and individual responsibility for salvation. These emulated and eventually replaced the collective values and personal motivation practices of the previous era.
Historian Paul Johnson took this observation further in his 1978 book, A Shopkeeper’s Millennium. He argued that the Second Great Awakening was not only a response to rapidly changing economic circumstances but also a calculated effort by societal elites to reassert control over an increasingly agitated working class. By convincing unskilled workers that their fate was tied entirely to their own individual responsibility and to adherence to the “born-again” values of the local churches they controlled, these elites assumed moral authority over the working class while absolving themselves of responsibility for their workers’ circumstances.
Not only did workers lose the means to acquire the skills they needed for mastery; many also became convinced it was their own fault for being too fallible to become masters themselves.
Academia has long been one of the few remaining enterprises clinging to the remnants of the apprenticeship model. Significantly more than anything else I’ve experienced, it feels as though generative AI is changing every aspect of our enterprise. And I worry deeply that it is moving scholars towards a situation that parallels the cultural restructuring and religious revivalism of the 19th century.
The Making of the 21st Century Scholar
Some of my colleagues are steadfastly resisting the ever-increasing creep of AI into the classroom and laboratory. At the same time, others are enthusiastically turning to AI to streamline their workflows and are calling for greater AI training to prepare students for a modern workforce. I often catch myself feeling like I’m working my hardest to train world-class cobblers in a rustic shop on a dirt road behind the Nike factory.
I see smart students and researchers using AI in clever ways to solve intractable technical issues and clear away an increasing bloat of menial tasks. Others are doing interesting and rigorous empirical studies on the effective use of AI in their subdisciplines and testing the boundaries of its applications to real-world problems—all of this is interesting and exciting.
But more often than not, I see AI being used as a substitute for expertise, as it sneaks into every corner of our work under the guise of increasing productivity. To the degree that these tools are saving time and energy, I am not seeing researchers reinvest those savings into developing a deeper understanding of their subject matter, making connections to new ideas, or otherwise investing in advancing their current skillset. Our preprint servers are not being accelerated by streamlined insights, but instead are being slowed down by a glut of slop and hackneyed scholarship. And the work that remains the most interesting is unaffected by how much AI was infused into its production.
Like many others, I worry about the impact AI will have on the workforce, the environment, and the proliferation of misinformation. In science and higher education, I also worry about cheating, fraud, and illiteracy.
But those are not the concerns that trouble me most. Instead, I worry that the way many academics use AI is actively undermining the intrinsic motivation needed to build and refine domain knowledge. I worry that many of us are already so mesmerized by the speed of the automated knowledge assembly line that our discernment for quality control is distracted and atrophying. I worry that when the inevitable enshittification comes for AI, we will have forgotten the skills necessary to course-correct. I worry about a world where we have outsourced so much of the agency and mastery in our work that we find ourselves paralyzed to guide the production and dissemination of knowledge, and everyone is too placated to care.
If there is to be a Modern Great Awakening, it may not come from widespread adoption of evangelicalism. Instead, it may come from the torrent of uncritically consumed influencer content and generative advertising, and from the ascendancy of dogmatic vibe-economy fundamentalism. The skills needed to educate people to combat these pressures will be fragmented across the knowledge assembly line, as LinkedIn proprietors succeed in convincing us that we are to blame for not getting ahead of the curve when we had the chance. We will have forfeited our agency in the craft, and with it, our ability to shape how it can affect the world and ourselves.
Autoencoder
My mother still sends me articles from Wired, and more often than not they are about AI. Like me, she didn’t anticipate a world where her white-collar training would be at risk of being automated the way blue-collar jobs had been. I think she is even more fascinated by it than I am. As more people predict the death of computer science, software development, and data science, I am more grateful to her than ever for cultivating my early autodidactic interest in computers. That interest continues to motivate me to learn more about the mechanisms and details of the technology, and it allows me to articulate its limits and cultural parallels to folks who have not done the same.
A colleague of mine recently asked about my daughter, who just started middle school: “What are her favorite subjects?” She loves to write prose and is much better at it than I was at her age. She will go to her room and spend time handwriting paragraphs in a little notebook she keeps with her, and she will occasionally tell us she is “working on my book,” which she types in Google Docs on her Chromebook. When I shared this with my colleague—a scientist and educator—they responded, “Well, be careful, because AI is going to be doing all of that for us pretty soon.”
Keep writing, sweetie.