The Every and the Automator
An installment of the Riddle Book Review. Musings on automation and free will.
“What should I do?”
“Am I good?”
These, writes Eggers, are the defining questions of the human experience, in particular because no one but you can answer them for yourself.
“What should you do?” — there are plenty of people who will butt in with suggestions (sometimes veiled as mandates), from preachers to philosophers to your micromanaging boss. Even more will offer their opinions on the second question, “Am I good?” They'll foist their faith and its morals on you, or perhaps their favorite philosopher and his “categorical imperative.” But in both cases there are few real imperatives. Eat. Breathe. Don't kill anyone. Do something useful. Beyond this, there's no science – no equations to solve, nothing beyond you and your gut, in all its messy, often irrational glory.
Occupied by musings of this sort, I was loping down a sidewalk on New Year's Eve, whiling away the hours until midnight, when I bumped (almost literally) into my friend, Pessimistic Petey.
“What ho, Petey,” said I. He grunted some hybrid of greeting and question in return, which I took as an invitation to unload my thoughts on him. Humans are free. Free to decide what's good and bad, using their gut, even if their gut isn't very good at it. Isn't that marvelous?
“Humph,” Petey said. “I think in the future, when they excavate the remains of our civilization and ask ‘What happened?’, they'll realize that we let all of these people run around with nuclear weapons and greenhouse gases while ‘just trusting their gut.’”
Dark! But what did you expect from a guy called “Pessimistic Petey”?
“But there's another part of it,” he said. “I hate that freedom. It's such a burden to be worrying about whether I'm doing the right thing, and doing it well enough. It's an existential angst. For all I know, I could be wasting my life. Heck, I probably am.” Without another word, Petey slouched off.
I dearly hope, dear reader, that you aren't as despondent as Pessimistic Petey. But you can probably sympathize with him. Who hasn't felt the quiet terror of choice overload? Who hasn't longed for an authoritative voice to decide for us? I daresay this is why Wirecutter is so popular, and why so many people have smartwatches that tell them when to exercise and for how long, as mine did while I was penning these words: “Wug, take a 12-minute walk to close your move ring.”
Eggers’ “The Every” takes these quirks of the present and extrapolates them into a (not too distant) future where our devices don't just prompt us to stand and to wash our hands for 20 seconds, but also to sleep, and for how long; to eat, what and when; to drink water; to talk with friends and family; even to laugh for the recommended 12 minutes a day. In short, Eggers imagines an A.I. conscripted to answer that question “What should I do?” for you — taking your preferences into account, of course, and then solving the equations to instruct you how to do everything so as to “optimize” your life. Eggers likewise imagines a single metric, the “SumNum,” that combines vast amounts of data into a single number in answer to that other question, “Am I good?”
If you have a high enough SumNum, then yes.
Plenty of cult leaders, religions, and management fads have offered a similar salve to the paradox of freedom and excess choice. But Eggers’ algorithmic life is different: it's objective. Scientific. Unlike a cult leader, it may genuinely have your best interests in its simulated mind, because all it needs to solve the equations of happiness is your input. In this world, when your watch alerts you to do 4 minutes of jumping jacks, you can genuinely believe that it knows exactly what you should be doing, better than you yourself do.
So why, as Eggers imagines, do the subjects of these algorithmically designed lives feel such an aching hollowness inside?
I considered several answers as I continued my stroll. The easiest way out of Eggers’ perfectly designed dystopia is that the “objective, scientific” recommendations of his algorithms get human life fundamentally wrong. Those doctors who prescribed leeches surely thought of themselves as “objective” and “scientific” — little did they suspect that their medicine was actually sickening their patients! Perhaps the SumNum and the Every’s other algorithmic recommendations commit similar errors. Cue the standard lessons in epistemic humility: “we don’t understand much about why human societies work, and we don’t know what we don’t know, so any efforts to reform society must proceed slowly and with extreme caution…” Yes, thank you, Mr. Burke; an overconfidence in the powers of technology certainly is part of what makes the Every alarming, but that can’t be the whole story.
Indeed, if “epistemic humility” were the only thing the Every lacked, the solution would be simple: give them some time to figure things out. Eventually, one presumes, they'll have gathered enough data that your phone's prompts – to do jumping jacks, to have a laugh – will be not only well-intentioned but impeccably timed, so as to refresh rather than distract. Suppose this future is reached. Is there really nothing wrong with ceding complete control over one's schedule to a piece of software, provided that software really knows what it's doing?
“Why yes, of course!” is the instinctive answer. We like to think – and Eggers indulges this belief – that human autonomy is irreplaceable, even if exercising it results in objectively worse outcomes. We cherish the illusion of free will (my god, I'm starting to sound like Pessimistic Petey). But in practice, we are quick to outsource it: to let Google Maps decide where we drive, to let the Apple Watch decide how long we exercise. This outsourcing isn't just convenient; it might, to some extent, be necessary.
In the 1970s, NASA had automated almost every part of its Shuttle launches and landings, but it left the astronauts the job of deploying the parachute – the heroic astronauts had to have something to do! But deploying the parachute was a delicate business – it had to be timed precisely – and the astronauts, being human, often fudged the timing. Eventually, NASA engineers wrote a program to deploy the parachute automatically, and the astronauts were reportedly delighted by how well it worked.
I continued my perambulations, turning somewhat randomly. I passed many other wanderers – or stumblers, maybe – their heads downcast, eyes transfixed by something in cyberspace. Some chatted animatedly with invisible people, probably coworkers, but possibly hallucinations. There has never been a better time to be a crazy person wandering the streets; with all the craziness of the world, you fit right in.
Then, across the way, I glimpsed the telltale stride of another of my favorite characters, Otto the Automator. Otto was moving quickly, as always. But who better to ask about the automation of free will? Otto was perpetually fiddling with bits of code and building his own tools. His stated purpose was to make his computer do all of his work for him. Wouldn't that make him obsolete?
“Ha!” Otto said. “I made myself obsolete years ago, but I'm still here.” Indeed, his first job had required a painful amount of manual curation of data into spreadsheets. “Boring, mindless work,” he said. “So I automated it!” Thanks to his scripts, he did a year's work in a week. He then moved on to bigger, better pursuits, and along the way automated many of them too. “I figure that when something gets boring, I should automate it. But there's always plenty of non-boring stuff to do!” He was currently working on a program to automate the landing of SpaceX rockets.
Otto's watch dinged. He tapped at it. “I've been experimenting, and I found out that I do my best thinking when my heart rate is in the 110s, so I wrote an app to help me keep it there.” He paused. “I need to add some sort of snooze button, so I can say, ‘Not now, watch, I'm talking to Wug!’” He smiled, eyes suddenly dreamy as he contemplated this improvement.
“But don't you find it demeaning to have an app telling you what to do?”
“Ha! Not when I wrote the app! It's more like my past self helping out my present self. And if I don't like how it's working, I can change it. I delight in tweaking these things.”
I considered this. Unlike the Every, Otto had no delusions of objective accuracy. He built the thing. He trusted it as much as he trusted himself, which is to say: he knew there would be mistakes.
“I get such joy seeing my automations at work,” he said. “When I come home at night, all of my lights turn on. When I go to bed, they all turn off. When it gets dark, all of my blinds close. It's like I have copies of my past self camped around the house, helping me out. And sure, each of those things saves what – 5 minutes? But I have hundreds like that. It gives me so much time.”
A funny image, this: little copies of Otto crouching around his home, doing his chores. And, come to think of it, my house had plenty of its own little homunculi. What else was my washing machine, or dishwasher, or toaster? This morning, little copies of the engineers at the Acme Toaster Corp readied my bread for marmalade. All technology is like this. It's never been the “machines versus the humans,” because the machines are just copies of past humans automating some little thing. If there's any struggle, it's the humans against the humans, as ever: our future selves in tension with our past, or the shareholders of the Every in tension with the rest of us. It's all politics.
“The way I see it,” said Otto, “the central challenge of our times is to keep adding to the scaffolding of automations without forgetting how they all work. The minute you have to take it on faith that a thing works, you've taken a step back towards the dark ages, and you've ceded a bit of control to the high priest.”
But I don't really understand how my phone works – at least, not by any rigorous standard. Does this mean I've ceded control to the high priest in Cupertino?
“Maybe you have,” Otto said.
I had to ask. Had he built his own phone?
Of course he had.
“You know what they found written on Richard Feynman's blackboard after he died?” Otto said. “‘Know how to solve every problem that has been solved.’ It was his last message to his students.”
“But even Feynman didn't quite get there,” I said. “It was an impossible goal. Worthwhile, sure, but impossible.”
With this, however, Otto took his leave, tapping on his wrist to summon a copy of his past self.
So I continued the stroll, turning randomly, winding between the Every and the Automator, Otto's past selves and the ghosts of my toaster, with politics all the way down. Visions of science and democracy flashed metaphorically before me. Maybe we don't have to understand everything, if we understand a system that collectively understands? But even this seemed suspended on a tightrope. The future. How uncertain! How tenuous! I walked until the sun began its steady climb above the horizon. Relentless, yes, but absolutely dependable.