Sam Altman’s ‘Gentle Singularity’ has a deluded vision of art
Unpacking the creative future outlined in the OpenAI leader's latest manifesto
OpenAI co-founder and chief executive Sam Altman onstage at the 2019 TechCrunch Disrupt conference. Photo: TechCrunch, courtesy TechCrunch (via Creative Commons license)
The most prominent champions of generative AI believe to the marrow that their technology of choice will have an equally powerful, equally positive impact on all aspects of our lives. But just about anytime one of them speaks at length about the world beyond their personal experience, it becomes clear what a limited grasp these self-styled sages tend to have of how people outside of their own business empires and blinkered readings of sci-fi actually think.
OpenAI co-founder and chief executive Sam Altman just showed his ass in exactly this way in The Gentle Singularity, his latest longform post about the techno-utopia he believes his products are already ushering us into. Notably for TGM’s purposes, he makes multiple attempts in the piece to prove that an AI-dominant future will put art on a new kind of pedestal. What he actually does instead is to show how badly the titans of AI will fumble art’s future if given the influence they’re fighting for.
Altman’s premise in the Gentle Singularity is twofold. First, he argues, advanced AI is radically upgrading the human experience and reshaping society for the better. Second, because it has been doing so step by step since around 2022, and because it will keep progressing this same way, the extreme change will continue to feel like a soft curve, not a hard pivot. In his words:
“We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be.”
I would push back against the final part of that statement with every muscle in my body, for reasons the dependably insightful Kyla Scanlon largely captured here. Still, I might be in the minority. As long as you only ever read it with the same half-glassy eyes and half-smooth brain that we use to consume so much else in the infinite digital scroll, the Gentle Singularity comes across as a humane centrist response to an undeniable technological revolution.
Look a little bit closer, however, and the false equivalencies, logical pole vaults, and self-interested partisanship embedded in Altman’s latest missive thwack you between the eyes like a rubber mallet. His whole post made me so agitated that I had to keep looping one of the most calming songs in my music library to prevent myself from either storming out of my apartment and joining a boxing gym, or pounding out a (pointless and ill-advised) 10,000-word rhetorical blast furnace to try to incinerate his whole manifesto in one sleepless night.
In the end, the only way to address it without flying off brand and off task was to go narrow. So, below I’ve extracted three passages in the Gentle Singularity where Altman invokes art, its creation, and its consumption, either directly or indirectly. (The bolding is mine, for emphasis.) I’ve followed up each one with some comments on why Silicon Valley’s best-known AI entrepreneur cannot be trusted to think through how this part of the human experience will be affected by a near future dominated by hyper-competent AI.1
Let’s start from the top…
1
“A lot more people will be able to create software, and art. But the world wants a lot more of both, and experts will probably still be much better than novices, as long as they embrace the new tools. Generally speaking, the ability for one person to get much more done in 2030 than they could in 2020 will be a striking change, and one many people will figure out how to benefit from.”
Altman’s first mention of art establishes how detached he is from the subject. If you ever find an art professional who says one of the core problems with art is that humans can’t make enough of it to satisfy all the demand without resorting to generative AI, you are not talking to an art professional. You are talking to a generative AI evangelist in a Mission Impossible mask.
Crucially, I’m not just talking about what qualifies as art by the discriminating standards of the art industry, a niche business largely premised on the economics of scarcity and extreme selectivity (aka gatekeeping). I’m also including art as defined by the broadest parameters and the most populist use cases. That means everything from street murals and beachside caricature sketches to commercial illustration and graphic design, as well as everything from practical art education and art therapy instruction to visual FX work and concept art—and so much more.
The reality is that we already have thousands more living people making millions more artworks in these roles than we have jobs, tangible compensation, or even unpaid attention to reward them for their efforts. This is still true no matter how thinly or thickly you choose to slice the category of art.
In the cloistered art industry, for example, one of the definitive efforts of the past 10+ years has been the expansion of the canon—meaning the attempt to comb through history to identify and uplift the works of the hundreds, if not thousands, of artists whose achievements were inequitably (or just unluckily) ignored during their own artistic prime. This quest is an explicit confession that humanity has already produced so much worthwhile art that we haven’t been able to thoughtfully evaluate what happened decades or centuries ago, let alone what’s happening right now.
In culture at large, meanwhile, one of the most important conceptual frameworks of the 21st century has been the so-called attention economy, the idea that an unsustainably and unprecedentedly large pool of competitors across interest groups is fighting for the same limited share of engagement from audiences overwhelmed by the near-infinite options available through their screens in a never-ending, undifferentiated flood.
In short, Altman’s suggestion that “the world wants a lot more” art than it’s already being offered is delusional, regardless of how you define “art.” We were already drowning in art before generative AI made image production so easy, thoughtless, and voluminous that it became universally known as “slop.” If he really believes his Gentle Singularity will fix this imbalance, not throw things even further out of whack, he’s lost in space.
So, what about his statement that “experts will probably still be much better [at art] than novices, as long as they embrace the new tools”? Addressing that idea leads us to the second excerpt…
2
“Already we live with incredible digital intelligence, and after some initial shock, most of us are pretty used to it. Very quickly we go from being amazed that AI can generate a beautifully-written paragraph to wondering when it can generate a beautifully-written novel; or from being amazed that it can make life-saving medical diagnoses to wondering when it can develop the cures; or from being amazed it can create a small computer program to wondering when it can create an entire new company. This is how the singularity goes: wonders become routine, and then table stakes.”
Setting aside Altman’s attempt to incept readers into believing he and they share enough of the same risks and rewards around generative AI to justify his use of an all-encompassing “we” throughout the essay, the clause in bold is a clear signal that he looks at artmaking as not much more than another technical problem to be solved. This mentality is prototypical of Silicon Valley; it also demonstrates, at most, a high-school-level grasp of what the arts are and why they matter.2
A novel that rises to the level of art can most definitely be “beautifully written.” But just because a novel is “beautifully written” doesn’t automatically mean that it rises to the level of art—at least, not in the opinion of most artists and most people who genuinely value artists’ work. Believing otherwise means understanding fiction almost purely as a surface-level skills challenge. It’s like confusing the job of a chef with the job of a food stylist.
The equivalent is true in every other artistic discipline, too. A “beautifully painted” portrait can utterly fail to capture anything substantive about the sitter’s inner life. A “beautifully composed” pop song can utterly fail to make listeners dance, cry, or feel alive in any other bodily or emotional way. A “beautifully shot” film can utterly fail to convey anything more than cool background imagery worth running on the TV during a house party.
In every case, “beautifully making” something by rote technical standards has no bearing on whether it actually accomplishes any of the art form’s meaningful artistic goals—which are, broadly speaking, to move an audience emotionally, intellectually, or spiritually (if not all three). This concept segues right into my third and final excerpt from Altman’s essay…
3
“If history is any guide, we will figure out new things to do and new things to want, and assimilate new tools quickly (job change after the industrial revolution is a good recent example). Expectations will go up, but capabilities will go up equally quickly, and we’ll all get better stuff. We will build ever-more-wonderful things for each other. People have a long-term important and curious advantage over AI: we are hard-wired to care about other people and what they think and do, and we don’t care very much about machines.”
The bolded sentence is my favorite part of Altman’s post, because it proves that he’s either bereft of any awareness that he’s completely contradicted himself on the subject of life and art in the Gentle Singularity, or that he’s ultimately just a self-interested entrepreneur wrapping a wildly ambitious business plan in a patchy, hastily woven veil of humanism.3
To me, the passage also ties back to an earlier moment in his essay:
“In the 2030s, intelligence and energy—ideas, and the ability to make ideas happen—are going to become wildly abundant. These two have been the fundamental limiters on human progress for a long time; with abundant intelligence and energy (and good governance), we can theoretically have anything else.”
Since the crux of the Gentle Singularity is that generative AI will optimize every human endeavor and solve every human problem, it’s fair to intuit that Altman believes a paucity of “ideas, and the ability to make ideas happen” has been responsible for holding back art “for a long time,” too. He’s wrong.
Explaining why takes us back to the fallacy of the “beautifully written” novel he thinks we’re all anxiously waiting for generative AI to produce. I’ll turn the task over to Neal Stephenson, the novelist and technologist whose work has been credited—including by the Silicon Valley elite themselves—with foretelling the metaverse, large language models, and more.
On an episode of the economist Tyler Cowen’s podcast in late 2024, Stephenson was asked how long it will take for AI to write a legitimately high-quality novel, not just a superficially polished one. He responded with this:
“The real purpose of art and the reason we like art is because it exposes us to a very dense package of micro decisions that have been made by the artist. As such, we’re engaged in a communion with that artist. What makes it interesting is that connection. It may be to a living writer, or it may be to a sculptor who died 2,500 years ago, but in either case, we’re making a human-to-human connection. If we know that we’re reading something or experiencing a work of art that was just generated by an algorithm, then that element of human connection isn’t there anymore.”
Altman admits the same thing in the Gentle Singularity when he writes that “we are hard-wired to care about other people and what they think and do, and we don’t care very much about machines.” The irony is that he does it while simultaneously failing to recognize that that admission implodes all of his statements about art and culture throughout the rest of the post. Stephenson actually crystallizes Altman’s entire logical flaw in the five-word headline of a Substack post that informed his above reply on Cowen’s podcast: Idea Having is not Art.
Stephenson’s framework in that post is simple and cogent. It distills why it’s absurd for some rando to, say, sue Sopranos mastermind David Chase because they also once pitched a studio an idea about a mob boss struggling with his mental health, or why it’s nonsense for an obscure visual artist to sue Maurizio Cattelan because he also once made a work involving fruit duct-taped to a wall. Ideas are necessary but not sufficient to make art. What separates an artist from an “ideas guy” is, as Stephenson says, an agonizingly complex set of micro-decisions that evolve the concept into a final expression that resonates on a deep human level.
Can artists use generative AI to help cross the chasm between idea-having and meaningful art-making? I think so. (The US Copyright Office does too, incidentally.) But anyone who thinks the bridge can be built entirely using the automated technical skill supplied by text-prompted algorithms—or that more powerful versions of those algorithms are all we need to enter a future of abundant creative genius—will only ever get partway across the abyss.
Altman’s Gentle Singularity wants to convince everyone else otherwise. Believing him will leave us exactly where he and almost all other AI entrepreneurs want the population at large: stranded in a construction zone of their making, redefining what makes life worth living based on whatever meager surplus they’re willing to toss us. There’s nothing gentle about that vision. You might even call it merciless.
The technological endgame in Altman’s post is the development of artificial general intelligence (AGI), also known as superintelligence. Both names describe a benchmark of achievement for advanced algorithms that keeps getting redefined based on whatever serves the entrepreneurs best at any given moment. I’m deliberately excising it from the conversation to keep this post from descending into a semantic tailspin. The point, at bottom, is that OpenAI and its competitors are all trying to develop an AI model with human-level self-awareness, god-like capabilities, or both, depending on who you ask and what they can gain from it.
As tech critic, fellow Substacker, and friend of TGM Mike Pepi writes of AI entrepreneurs and other techno-capitalists in Against Platforms: Surviving Digital Utopia, “They want a world ‘without politics,’ but this is a thin cover for a world with their politics.” Pretending their “we” includes the rest of us is one small thread in that cover’s assembly, and Altman does it throughout The Gentle Singularity.
Ryan Broderick of the essential Garbage Day newsletter argues that a few years ago Altman “shrewdly zeroed in on a very unique marketing strategy, one that’s, honestly, perfectly illustrated by the concept of a ‘gentle singularity.’ He likes to take fairly dystopian, cataclysmic science fiction concepts, claim his company will cause them and that they will be as destabilizing as you think they’ll be, and then kindly offer people a guide for navigating them. ‘Look, if you listen to me and just change your entire life, you’ll be able to survive the revolution that I’ve decided is inevitable.’”