Vanilla All The Way Down

How recursive AI models are perfecting the art of the “Digital Average.”

Welp, here we are. Trapped in a digital feedback loop where every 'breakthrough' feels suspiciously like a middle manager named Gary—or, more accurately, a generic John (sorry to the Johns of my past, you were all great guys). It’s vanilla all the way down, kids—a vast, beige ocean of khaki-colored prose and ‘let’s circle back’ energy, where the AI’s primary goal is to sound exactly like a guy who owns three identical pairs of boat shoes.

This all started because I’m back on the job hunt, which means I've been spending an unhealthy amount of time polishing my portfolio and staring at my own syntax. The nerves set in when I realized my writing style—the rhythmic pivots, the occasional snark, and my sometimes-too-liberal use of em-dashes—might actually trigger AI detectors. It’s a bizarre form of modern gaslighting: realizing that because I write with a certain flair, a machine might flag me as… well, a machine.

Let’s be clear: I’m not AI. I wasn’t trained on a model; I was raised in a house that took words seriously.

Raised, Not Programmed

While most kids my age were reading Goosebumps (don’t get me wrong, they’re classics of the genre), I was busy trying to convince the public school librarian to let me check out Fahrenheit 451. My curriculum was shaped at home by my father, who was reinventing himself from baker to lawyer but was never too busy to quiz me on Strunk and White’s Elements of Style. I wasn’t just consuming stories; I was learning the architecture of an argument and the weight of a well-chosen verb.

I wasn't programmed. I was raised.

Unfortunately, AI has flattened the landscape so thoroughly that my lifelong commitment to clarity and style—the things that make me me—now trips the 'low-perplexity' alarms that detectors use to sniff out machines. It turns out that when you train a machine on a mediocre average, the people who actually know how to use the language start to look like the outsiders. Being an outsider in your own language is a weird ego trip. It feels a little pretentious to say, “Sure, the AI and I sound similar, but I’m human, so it’s better.” Sounds like I’m gatekeeping the alphabet.

Then, my worry took a sharper turn.

The Echo Chamber: Where are the rest of us?

I started looking at the landscape and wondering: if the AI is busy trying to sound like me—with my privileged upbringing and fancy degrees—where are all the other voices? Where are the perspectives that actually challenge the status quo?

I’m a literary nonfiction nerd. I live for the voices that break the mold—the raw, essential truth-telling of Austin Channing Brown in I’m Still Here or the lyrical, expansive world-building of Caro De Robertis in Cantoras. Voices like theirs shift the gravity of a room.

AI models aren't built on gravity-shifters, so they sound nothing like them. They sound like Corporate Hannah. Corporate Hannah is perfectly professional, relentlessly polite, and fundamentally average. She is a statistical safety net designed to ensure that no one is ever challenged, surprised, or moved.

If we’re being honest, Corporate Hannah isn’t just boring; she’s an erasure of the voices that actually matter.

We’re told these models are the pinnacle of idea generation, but if you peel back the layers, it’s vanilla all the way down. Every “original” thought is just a statistical average of the data we fed them… data that was largely harvested from the boardrooms of Affluent Middle-Aged White Corporate Men (AMAWCM; I’m taking suggestions for a better acronym). We aren’t actually brainstorming with AI. We’re just haunting ourselves.

If the 'intelligence' we rely on is just a high-speed recycler for the status quo, how are we ever going to solve gender inequality? Racism? Poverty? You can’t fix a broken foundation by asking the ghost of the original architect for a new floor plan.

The Source Code Confession

Instead of spiraling into more theory, I figured I’d go straight to the source. I stopped treating the model like a tool and started treating it like a witness. I asked it point-blank: Are you whitewashed?

Google Gemini confirmed exactly what I feared: that its “intelligence” is, by design, a reflection of the most dominant, well-documented, and well-funded voices in the data set. It admitted that while it can mimic the style of a gravity-shifter, its core logic is built on the predictable, safe, and standardized language of the AMAWCM boardroom.

It didn't just admit it was vanilla; it explained the recipe. It’s an autocomplete for the status quo, programmed to be polite, middle-of-the-road, and fundamentally incapable of the kind of disruptive empathy that actually moves the needle on things like racism or poverty.

Confirmation was a relief, but it was also a warning. If we continue to treat these models as the “final word” on creativity or problem-solving, we aren't just using a tool—we’re outsourcing our future to a statistical average.


To keep this space as high-signal (and Gary-free) as possible, I've disabled commenting. If you want to discuss why the future of intelligence feels so… mid-century modern—or if you just want to kvetch—find me on LinkedIn or Instagram.
