
The Cloister and the Starship

The university of the future must revive ancient ways of learning to enable human mastery of artificial intelligence.

Near the beginning of Neal Stephenson’s 1995 novel The Diamond Age, there is a memorable exchange between a computer engineer named Hackworth and an “equity lord”—a tech billionaire, as we would say—with the wonderful surname of Finkle-McGraw. The engineer alludes to some research work he has been doing.

“What sort of work?”

“Oh, P.I. stuff mostly,” Hackworth said. Supposedly Finkle-McGraw still kept up with things and would recognize the abbreviation for pseudo-intelligence, and perhaps even appreciate that Hackworth had made this assumption.

Finkle-McGraw brightened a bit. “You know, when I was a lad they called it A.I. Artificial intelligence.”

Hackworth allowed himself a tight, narrow, and brief smile. “Well, there’s something to be said for cheekiness, I suppose.”

I think a lot about P.I. these days, not least because of the catastrophic effect it is having on actual intelligence. The Diamond Age, like so many of Stephenson’s novels, offers us a troubling glimpse of a future we have already partially reached.

It is a world in which nanotechnology is ubiquitous. It is a world in which “matter compilers” can, on demand, provide basic food, blankets, and water, in a fusion of universal basic income and 3D printing. We are not quite there yet, it’s true. But in other, familiar respects, software has eaten this world. Venture capitalists and engineers reign supreme. The twist in the tale is that this networked society has reverted to a kind of tribalism. The most powerful of the phyles (ethnic tribes) are the Anglo-Saxon Neo-Victorians, who have revived the social strictures of the mid-19th century.

As might be expected of Neo-Victorians, there is a slum-dwelling underclass of tribeless thetes. But one little girl, Nell, finds her way out of the Shanghai gutter when she is given a stolen copy of a highly sophisticated interactive book, The Young Lady's Illustrated Primer, which a modern reader will recognize as a large language model or a chatbot. Immersive, interactive, and adaptive, it gives her the education she would otherwise never have received.

If all this seems strange to you, it is no more strange than the present-day world would have seemed to my 19-year-old self if he could have read about it in 1983. Let me take you back to the world when I was finishing my freshman year at Oxford. It was Ronald Reagan’s first term, and the year Margaret Thatcher was reelected. It was the final peak of Cold War tension. Korean Air Lines Flight 007 was shot down by a Soviet fighter jet, killing all 269 people on board, including U.S. Congressman Larry McDonald. Soviet officer Stanislav Petrov averted worldwide nuclear war by correctly identifying a warning of incoming U.S. missiles as a false alarm. 1983 was also the year of Able Archer 83, a NATO exercise the Soviets feared was the prelude to a U.S. nuclear first strike. Meanwhile, although Deng Xiaoping’s economic reforms were underway, China was still mired in poverty. The People's Daily reported that the country would run out of food and clothes by the year 2000 if the Party’s policy of population control was unsuccessful.

In 1983, personal computing, the Internet, and mobile telephony were all still in their early infancy. It was the year that saw the migration of the ARPANET to TCP/IP; the first-ever commercial mobile cellular telephone call; the release by Microsoft of its word-processing program Multi-Tool Word, later renamed Word.

The only times I used a computer were when I was typesetting a student magazine I edited. We communicated by penning letters or notes, which were delivered to wooden pigeonholes in our colleges. We used coin-operated public phone boxes to call our parents. I never played a video game and looked down on those who did. I read books—a lot of books. Had I read a book that envisioned the hyperconnected world of today, it would have seemed as outlandish as Stephenson’s imagined Diamond Age. In 1983 we had no idea how radically the world would change in our lifetimes. We nearly all assumed we would do one of five things: law, media, government, academia or banking. Not one of us considered starting a business.

You might think our undergraduate studies were a poor preparation for the fast-approaching world of the Internet. But they were not bad, because at root Oxford taught us eight fungible skills:

1. To read copious amounts (five books, five articles a week).

2. To think (I learned that long walks helped with that).

3. To write (one or two handwritten essays a week).

4. To debate (in tutorials, often one-on-one with a “don”).

5. To remain clear-headed under stress (our final exams consisted of ten three-hour papers within a week).

6. To differentiate between bullshit and bona fide content.

7. To cohabit and cooperate with our contemporaries.

8. To live on a tight budget.

The future shock awaiting today’s undergraduates will be even greater than ours was. This is because artificial intelligence has broken through to the extent that Stephenson’s Young Lady's Illustrated Primer—a single device capable of delivering a complete, personalized education to its owner—is now conceivable.

Of course, when Sam Altman says we are on the brink of a new Renaissance and calls o3 “genius-level intelligence,” it is tempting to scoff. We should not. “Do you think you’re smarter than o3 right now?” Altman asked the Financial Times rhetorically in a recent interview. “I don’t … and I feel completely unbothered, and I bet you do too. I’m hugging my baby, enjoying my tea. I’m gonna go do very exciting work all afternoon … I’ll be using o3 to do better work than I was able to do a month ago.”

Altman has every reason to want to soothe us. But it would be strange to be completely unbothered by the speed with which young people are adopting AI. As Altman himself has noted, “older people use ChatGPT like Google. People in their 20s and 30s use it as a life advisor.” And college students “use it like an operating system. They set it up in complex ways, connect it to files, and have detailed prompts memorized or saved to paste in and out.” Only 20% of baby boomers use AI weekly, according to Emerge, compared to 70% of Gen Z.

AI usage is already spreading faster than Internet usage at a comparable stage. According to Hartley et al. (2025), “LLM adoption at work among U.S. survey respondents above 18 has increased rapidly from 30.1% as of December 2024, to 43.2% as of March/April 2025.” The number of ChatGPT active users is now 1 billion. Google’s Gemini has over 400 million active monthly users. And the use cases for AI keep multiplying. McKinsey has a chatbot named Lilli, trained on all its IP. BCG has Deckster, a slide deck editor. Rogo, funded by Thrive, is a chatbot for investment banking analysts. Duolingo is replacing contract workers with AI.

Meanwhile, the computational power (compute, for short) required by each successive generation of LLMs keeps growing. Two and a half years ago, when ChatGPT launched, it used around 3% of the compute required by today's state-of-the-art models. Just two and a half years from now, according to Peter Gostev, the models will have 30 times more compute than today and a thousand times more than ChatGPT when it was launched.

As Toby Ord has noted, this also drives up the cost. Charts of AI models’ performance, he argues, “initially appear to show steady scaling and impressive performance for models like o1 and o3,” but “they really show poor scaling (characteristic of brute force) and little evidence of improvement between o1 and o3.” This is because in most such charts it is the x-axis that is on a log scale. This tells us that “the compute (and thus the financial costs and energy use) need to go up exponentially in order to keep making constant progress.”
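To see why a log-scaled chart implies exponential costs, consider an illustrative gloss (a sketch of the arithmetic, not Ord’s own formula; the symbols S, C, a, and b are introduced purely for illustration): suppose a benchmark score S rises roughly in proportion to the logarithm of the compute C spent, so that

S ≈ a + b · log10(C)

Then each additional point of S requires C to be multiplied by the constant factor 10^(1/b). Progress that looks like a steady straight line against a log-scaled axis therefore demands compute, and with it cost and energy, that grow exponentially.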

This in turn means that we are witnessing the biggest capital expenditure boom since the railroads. Capex spending by the big semiconductor companies and the so-called hyperscalers is running at a quarter of a trillion dollars a year. Add in research and development spending and the estimated total for 2022–2027 is nearly $3.5 trillion—11% of U.S. gross domestic product.

Partly because AI works so well and partly because it costs so much, we are also in the early phase of large-scale job destruction. As Deedy Das has pointed out, “Google, Microsoft, Apple, Tesla, Meta, Nvidia and Palantir—the biggest tech employers—have collectively stagnated headcount … This is why Computer Science majors can’t get jobs. Big tech hypergrowth era is over.” And we are beginning to see absolute job losses in areas such as professional writing and manning call centers. AI is just getting started. Within a few years, it is likely to destroy even more white-collar jobs than the blue-collar jobs that were lost to China after it joined the World Trade Organization in 2001.

The AI revolution has a geopolitical dimension, too, as it is now the crucial field of superpower competition in Cold War II. In the words of tech guru Mary Meeker et al.:

The global race to build and deploy frontier AI systems is increasingly defined by the strategic rivalry between the United States and China. While U.S. companies have led the charge in model innovation, custom silicon, and cloud-scale deployment to date, China is advancing quickly in open-source development, national infrastructure, and state-backed coordination. Both nations view AI not only as an economic tailwind but also as a lever of geopolitical influence. These competing AI ecosystems are amplifying the urgency for sovereignty, security, and speed. In this environment, innovation is not just a business advantage; it is national posture.

No one should underestimate the risks of an AI arms race. Ask yourself: Which did human beings build more of in the past eighty years: nuclear warheads or nuclear power stations? Today there are approximately 12,500 nuclear warheads in the world, and the number is rising as China adds rapidly to its nuclear arsenal. By contrast, there are 436 nuclear reactors in operation. In absolute terms, nuclear electricity generation peaked in 2006. In relative terms, the share of total world electricity production that is nuclear fell from 15.5% in 1996 to 8.6% in 2022.

Nevertheless, I believe the economic and geopolitical consequences of AI pale alongside its educational consequences.

In a recent paper for the Manhattan Institute, Frederick Hess and Greg Fournier asked: “What Do College Students Do All Day?” The answer is not “studying.” Estimates of the amount of time spent by U.S. students on all “education-related activities” range from 12 to 19 hours per week. According to sociologists Richard Arum and Josipa Roksa, this represents a decline of roughly 50% from a few decades ago. Hess and Fournier calculate that a student with a 12-credit course load should spend “at least 36 hours attending class or doing homework each week.” Today’s students are nowhere close to that.

The decline in study hours is not because students are moonlighting to pay their way through college. According to the National Center for Education Statistics, just 40% of full-time undergraduates were employed in 2020, compared with 79% in the mid-1990s. What students are doing is, to put it bluntly, goofing off. According to the National Survey of Student Engagement, the average first-year student last year reported spending 14.3 hours per week preparing for class, 5.3 hours per week participating in cocurricular activities, 2.4 hours working on campus, 6.9 hours working off campus, 2.4 hours doing community service, 11.9 hours relaxing and socializing, and four hours apiece caring for dependents and commuting. Presumably the median freshman spent the remainder of the week—by my calculations, 116 hours, or just under 17 hours a day—asleep. Certainly not studying.
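Spelled out, the arithmetic behind that figure, using the hours just quoted, runs as follows:

14.3 + 5.3 + 2.4 + 6.9 + 2.4 + 11.9 + 4 + 4 = 51.2 hours accounted for
168 − 51.2 = 116.8 ≈ 116 hours unaccounted, or just under 17 hours a day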

One very big reason today’s students are spending so little time studying is AI. As James D. Walsh recently put it in New York Magazine, “Everyone is cheating their way through college.” OpenAI released ChatGPT in November 2022. It caught on with amazing speed. Just two months later, a survey of 1,000 college students found that nearly 90% of them had used the chatbot to help with homework assignments. OpenAI has worked out how to hook the remaining 10%. This year it made ChatGPT Plus—a subscription to which costs $20 a month—free to students during finals. To quote Altman once again: “Writing a paper the old-fashioned way is not going to be the thing.”

The remarkable thing is how open everyone is about this. “College is just how well I can use ChatGPT at this point,” a student in Utah told Walsh. “With ChatGPT, I can write an essay in two hours that normally takes 12,” said Sarah, a freshman at Wilfrid Laurier University in Ontario. “An essay with ChatGPT, it’s like it just gives you straight up what you have to follow,” gushed Wendy, a freshman finance major at one of New York’s top universities. “You just don’t really have to think that much.”

Whatever university professors and administrators may think, they do not have effective tools to detect the use of AI in the papers students submit. A June 2024 study used fake student profiles to slip wholly AI-generated work into professors’ grading piles at a UK university. Only 3% were detected as the work of LLMs. AI detectors such as Turnitin and ZeroGPT are simply not accurate enough. Walsh fed one of Wendy’s mostly AI-generated essays into the latter, which wrongly estimated it to be just 11.74% AI-generated. When he ran the Book of Genesis through the app, “it came back as 93.33% AI-generated.”

Writing in the New Yorker, D. Graham Burnett noted the fatalistic mood that grips many colleges in the face of the AI onslaught. “On campus,” he wrote, “we’re in a bizarre interlude: everyone seems intent on pretending that the most significant revolution in the world of thought in the past century isn’t happening. The approach appears to be: ‘We’ll just tell the kids they can’t use these tools and carry on as before.’ This is, simply, madness.”

Talking of madness, it seems unlikely that tolerating the wholesale outsourcing of studying will do the students themselves much good. Generation Z is already notoriously susceptible to mental health maladies, real or imagined, thanks to—Jonathan Haidt argues—their childhood exposure and addiction to social networking apps on mobile devices. Enabling “the anxious generation” to shirk the acquisition of skills such as sustained reading, critical thinking, and analytical writing cannot be expected to help matters. Indeed, it would be astonishing if reliance on LLMs at university did not lead to arrested cognitive development. To quote Robert Sternberg, a psychology professor at Cornell University, “The greatest worry in these times of generative AI is not that it may compromise human creativity or intelligence, but that it already has.”

What, then, should the university of the future do in response? I am not the first person at the University of Austin to ask this question. Jake Howland and Tim Kane have already weighed in. I suggest five essential steps, to be taken with immediate effect:

1. Create quarantined space in which traditional methods of learning can be maintained and from which all devices are excluded. Call this “the cloister.”

2. Assume that all study outside the cloister will be done with the use of LLMs. Call this “the starship.”

3. Inside the cloister, allocate time to a) reading printed books, b) discussing texts and problems, c) writing essays and problem sets with pen and paper, and d) assessment via oral and written examinations.

4. Require time in the cloister to be around seven hours a day, leaving time on board the starship, as well as vacations, for the use of AI.

5. Revise admissions procedures to ensure the university attracts students capable of coping with the discipline of the cloister as well as the boundless opportunities of the starship.

These suggestions might seem like an over-reaction to the challenge posed by AI—in effect, a return to the monastic origins of the European university in the medieval period. However, my inspiration for the cloister is not history but science fiction. In this model, the starship is as important as the cloister.

In Neal Stephenson’s Anathem (2008), a future world has responded to the calamity of a nuclear war by banishing scientists into “concents” (monastic communities). The “avout” are not only kept apart from “sæcular” society, except on regularly celebrated “aperts,” but also banned from possessing or operating advanced technology. They are also subject to a “Cartasian Discipline,” named after “Saunt Cartas, the founder of the mathic world.” It turns out that only the skills honed in the concents equip the avout to contend with the threat posed to Earth by an alien starship from a parallel world.

Reading Anathem, I found myself thinking that the university of the future will need to resemble much more closely the enclosed world of the monastic orders than the open-access colleges of the present day—institutions so readily accessible to outsiders that, after October 7, 2023, non-student agitators found it quite easy to organize pro-Palestinian encampments on multiple campuses.

Today’s students need to be protected not only from such influences, but also from the temptations of AI. To repeat, that is not to say that the university of the future would prohibit the use of LLMs. On the contrary, we would want our students to excel at writing well-crafted prompts. But one cannot learn to ask good questions—what Germans call the art of Fragestellung—without first submitting to the discipline of the cloister, acquiring the skills that can nowadays be acquired only in strict seclusion from AI.

I see very little prospect of such a radical new regime being adopted at any of the established universities, as they are by definition universities of the past. However, I shall be arguing strongly that we take this approach at the University of Austin from the outset of its next academic year. Students may rest assured that no monastic habit or tonsure will be required of them—nor oaths of celibacy. But strict prohibitions on devices within the cloister, including wearable and implanted technology, will have to be insisted upon—if the rapid advance of Pseudo Intelligence is not to plunge all of humanity into a new Dark Age.

This essay was originally published by Niall Ferguson’s Time Machine. It is based on a talk given at the Austin Union on June 11.
