The Rational Path to Digital Dystopia
Here’s the thing about surveillance capitalism: it wasn’t designed by mustache-twirling villains in a boardroom saying “let’s destroy human autonomy for profit.” It evolved the way most destructive systems do - through a series of perfectly rational decisions that, when you connect the dots, led us somewhere nobody really intended to go.
Think about it like this: Google had a genuinely useful product (search) and a genuine problem (how do you make money from something that’s free?). The advertising model seemed obvious - TV and magazines had been doing it for decades. What could go wrong?
The flaw in that logic, of course, is that TV and magazines don’t know what you’re thinking about when you’re not watching or reading. They can’t follow you to other channels, track which commercials make you pause, or build a psychological profile based on whether you flip past the lingerie ads quickly or linger for a moment. Google could do all of that. And more importantly, they were really, really good at extracting intelligence from raw data - that was literally their core competency.
The Kudzu Effect
What happened next was basically biological - the advertising model spread like kudzu across the internet. Banner ads appeared everywhere. Cookies - originally invented by Lou Montulli at Netscape in 1994 to help websites remember users (like keeping items in shopping carts) - were repurposed for cross-site tracking. Then came the one-pixel images that loaded from different servers, invisible to users but incredibly valuable for tracking. Companies like DoubleClick (now part of Google) figured out in the mid-1990s how to use these technologies together to track users across multiple websites.
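To make the mechanism concrete, here's a minimal, purely hypothetical sketch of a tracking-pixel server (the handler and in-memory store are invented for illustration, not any real company's code). Any page that embeds a 1x1 image from the tracker's domain hands that server a persistent cookie plus a Referer header - which is enough to stitch together one user's browsing history across unrelated sites:

```typescript
// Illustrative sketch of a third-party tracking pixel ("web bug") server.
import * as http from "http";
import { randomUUID } from "crypto";

// A transparent 1x1 GIF - the classic tracking pixel.
const PIXEL = Buffer.from(
  "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
  "base64"
);

// In-memory stand-in for the profile database.
const visitsByUser = new Map<string, string[]>();

const server = http.createServer((req, res) => {
  // Reuse the visitor's existing ID cookie, or mint a new one on first sight.
  const cookieMatch = /uid=([\w-]+)/.exec(req.headers.cookie ?? "");
  const uid = cookieMatch ? cookieMatch[1] : randomUUID();

  // The Referer header reveals which page embedded the pixel.
  const page = req.headers.referer ?? "unknown";
  const history = visitsByUser.get(uid) ?? [];
  history.push(page);
  visitsByUser.set(uid, history);

  res.writeHead(200, {
    "Content-Type": "image/gif",
    // Long-lived cookie so the same ID comes back on every future page view.
    "Set-Cookie": `uid=${uid}; Max-Age=31536000; Path=/`,
    "Cache-Control": "no-store", // force a fresh request (and cookie) every time
  });
  res.end(PIXEL);
});

server.listen(8080);
```

Multiply that by thousands of embedding sites and you have the raw material for the psychological profiles described above. (Modern browsers have since clamped down on third-party cookies and full Referer headers, but this was the basic machinery for roughly two decades.)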
Each innovation drove more revenue for Google, which let them grow, hire better engineers, and build better algorithms - which in turn generated more revenue. And unlike in biological systems, there were no natural predators. No competing selection pressure. Just an ever-expanding pool of people spending more and more time online.
This became the dominant revenue model not because anyone sat down and said “let’s build a surveillance apparatus,” but because people didn’t want to pay for internet services. Free email, free search, free social networking - all supported by advertising. It seemed like a fair trade at the time.
The Feeding Frenzy
But here’s where the system dynamics get really interesting. As the machine got bigger, it needed to be fed more. Margins started dropping as more players entered the market, people stopped clicking on ads, and ad blockers spread. Something had to give.
So someone - and I’d bet money they weren’t thinking of themselves as evil - decided to optimize for attention. Why show ads people might ignore when you can design the entire experience to be psychologically compelling? Take the techniques that ad men had been using for decades and bake them directly into the platform. (Douglas Rushkoff documents this beautifully in Coercion, and Cialdini breaks down the psychology in Influence.)
That was the tipping point. Suddenly we had infinite scroll (no decisions, just a flick of the finger), gamification (little rewards keep people coming back), social pressure (your friends are all doing it), and notification systems designed to trigger FOMO. The platforms weren’t just showing ads anymore - they were engineering human behavior to maximize “engagement.”
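For a sense of how small these design choices are in code, here's a minimal sketch of infinite scroll (the feed markup and the fetchNextPage stub are hypothetical, not any platform's real code). The moment a sentinel element near the bottom of the page approaches the viewport, more content is appended - the user is never asked whether they want another page:

```typescript
// Minimal infinite-scroll sketch. Expected HTML: <div id="feed"></div><div id="sentinel"></div>

// Hypothetical stand-in for a platform API that returns the next batch of posts.
async function fetchNextPage(cursor: number): Promise<{ posts: string[]; nextCursor: number }> {
  const posts = Array.from({ length: 10 }, (_, i) => `Post #${cursor + i + 1}`);
  return { posts, nextCursor: cursor + posts.length };
}

const feed = document.getElementById("feed")!;
const sentinel = document.getElementById("sentinel")!;

let cursor = 0;
let loading = false;

// Fires whenever the sentinel scrolls within 800px of the viewport.
const observer = new IntersectionObserver(async (entries) => {
  if (!entries[0].isIntersecting || loading) return;
  loading = true;

  const { posts, nextCursor } = await fetchNextPage(cursor);
  cursor = nextCursor;
  for (const text of posts) {
    const el = document.createElement("article");
    el.textContent = text;
    feed.appendChild(el); // new content arrives before the user ever reaches "the end"
  }

  loading = false;
}, { rootMargin: "800px" });

observer.observe(sentinel);
```

There's no button to press and no end to reach - the decision to stop has been quietly taken away from the user.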
And here’s the kicker: the people implementing this weren’t Nazi scientists experimenting on unwilling subjects. They were business people with mortgages and kids in college, trying to hit quarterly numbers and keep their shareholders happy. What else could they do? Besides, people seemed to enjoy it! Candy Crush was genuinely fun. Looking at your friends’ vacation photos on Facebook was genuinely pleasant. If people didn’t like these products, they wouldn’t use them, right?
The Rationality Trap
So here’s the moral of the story: nobody is to blame for the unfortunate situation we find ourselves in. Everybody behaved rationally given the incentives they faced. The engineers built tools that people wanted to use. The business folks found ways to make those tools profitable. The users got free services in exchange for their attention and data. Everyone won.
Except, of course, we didn’t. We ended up with a system that’s optimized for addiction rather than utility, that fragments attention instead of enhancing it, that amplifies outrage because anger drives engagement, and that treats human psychology as a resource to be strip-mined for profit.
But if everyone made rational decisions, can we really ask for more? Around 2000, Google adopted the motto “Don’t Be Evil”. Nobody set out to build a machine for manufacturing anxiety and polarization.
The Uncomfortable Question
Which brings us to the lingering question: does the fact that everyone started out with good intentions mean there’s no moral culpability? Does everyone get a pass because “the system made me do it”?
It’s tempting to say yes. After all, if you put rational actors in a system with perverse incentives, you get perverse outcomes. That’s not a moral failing, it’s just… math.
But here’s what bothers me about that answer: at some point, the evidence became pretty clear that these systems were causing harm. When teenage suicide rates started correlating with social media adoption, when political discourse became increasingly toxic, when people started reporting feeling anxious and depressed after using platforms that were supposed to connect them - at what point does continuing to optimize for engagement become willful blindness?
Is There a Better Way?
The real question isn’t how to assign blame - that’s mostly useful for making ourselves feel better. The real question is whether we can build systems with different incentives that lead to different outcomes.
That’s why I find myself optimistic about approaches like DivergentFlow’s. Instead of optimizing for engagement (which leads to addiction), optimize for authentic connection. Instead of extracting value from user psychology, create value by enhancing human relationships. Instead of treating users as products to be sold to advertisers, treat them as people whose stories matter.
It’s not about being more moral than the Google founders or Facebook engineers. It’s about recognizing that the system we’re embedded in shapes the outcomes we get, regardless of our intentions. And if we want different outcomes, we need to build different systems.
The surveillance capitalism model wasn’t inevitable. It was just the path of least resistance given the constraints at the time. But constraints change. Technologies evolve. And maybe, just maybe, we can choose a different path - not because we’re more virtuous than the people who came before us, but because we’ve learned something from watching where the first path led.