Riddle Weekly: Issue 7

A happy new year from the team at Riddle! We’ve got some big things in store for 2022, starting with our recent website redesign, and including a revamped newsletter and countless ideas, tools, and workflows we’re excited to share with you in the coming months.

As ever, we’re committed to providing you with good fodder for thought – both “something to make you laugh, and something to make you think” – but we’ve also become increasingly interested in the process of thinking itself, and in how our technologies can both disrupt and enhance this fragile miracle. Almost all of our modern technologies promote distraction and fragment attention, but it hasn’t always been this way. Take books and the literary culture that arose in the wake of Gutenberg. Arguably nothing has had a greater effect on the development of modern civilization than the intellectual agility and sustained attention that widespread literacy promoted: as literacy spread, democratic and scientific revolutions followed.

If anything, the internet has had the opposite effect. Its adoption has coincided with a wave of authoritarianism and an erosion of the very idea of objective truth. Sixty years ago, Marshall McLuhan warned us that "the medium is the message." How right he was!

But a counter-revolution is brewing. Much of the attentional havoc the internet has wreaked stemmed from naivety on the part of its creators and haplessness on the part of its consumers. “Sure! I can respond to several hundred emails a day. Why not?” “Sure, I don’t see why I shouldn’t get all of my news from a Facebook timeline.” “Sure! I like actual cookies, so virtual cookies must be nice!” Led by design philosophers, cultural critics, and veteran technologists like Tristan Harris, Nicholas Carr, and Jaron Lanier, people are waking up from this naivety. A demand has arisen for new kinds of digital technology that not only respect attention and preserve privacy, but also (and of particular interest to us at Riddle) help us think more clearly and creatively.

We find this last possibility fascinating. Can we turn the vast potential of digital technology from an attentional wasteland into a second literary revolution? This sort of change can only come from the bottom up – from a critical mass of people experimenting and reporting what works and what doesn’t, collectively guiding everyone towards more humane, more rational, more creatively empowering technologies. We hope you’ll join us on this journey.

Let’s get started.

In this Issue: Professor Wug reviews Dave Eggers’ latest dystopian novel, and muses on democratic ways to handle automation and epistemological uncertainty. Then, Chester Snuphanuph presents his first installment in a series called “What to do about to-dos?” Finally, the usual round-up of noteworthy tools and articles.

Hot off the Press

Our latest articles

Every week (or thereabouts) the Riddle team reads and discusses a new book and writes an installment of “The Riddle Book Club Review”.

We send the full book reviews to the members of the Riddle Club, whom we invite to read the books along with us. And when a book strikes us as particularly relevant to Riddle’s mission, we publish the review here.

This week, we read Dave Eggers’ "The Every". Here’s Professor Wug’s review.

The Every and the Automator

“What should I do?”

“Am I good?”

These, writes Eggers, are the defining questions of the human experience, in particular because no one but you can answer them for yourself. “What should you do?” – there are plenty of people who will butt in with suggestions (sometimes veiled as mandates), from preachers to philosophers to your micromanaging boss. Even more will offer their opinions on the second question, “Am I good?” They'll foist on you their faith and its morals, or perhaps their favorite philosopher and his “categorical imperative.” But in both cases there are few real imperatives. Eat. Breathe. Don't kill anyone. Do something useful. Beyond this, there's no science – no equations to solve, nothing beyond you and your gut, in its messy, often irrational glory.

Occupied by musings of this sort, I was loping down a sidewalk on New Year's Eve, whiling away the hours until midnight, when I bumped (almost literally) into my friend, Pessimistic Petey.

“What ho, Petey,” said I. He grunted some hybrid of greeting and question in return, which I took as an invitation to unload my thoughts on him. Humans are free. Free to decide what's good and bad, using their gut, even if their gut isn't very good at it. Isn't that marvelous?

"Humph.” Petey said. “I think in the future, when they excavate the remains of our civilization and ask ‘what happened?’ they'll realize that we let all of these people run around with nuclear weapons and greenhouse gases while ‘just trusting their gut.’”

Dark! But what did you expect from a guy called “Pessimistic Petey”?

"But there's another part of it," he said, “I hate that freedom. It's such a burden to be worrying about whether I'm doing the right thing, and doing it well enough. It's an existential angst. For all I know, I could be wasting my life. Heck, I probably am." Without another word, Petey slouched off.

I dearly hope, dear reader, that you aren't as despondent as Pessimistic Petey. But you can probably sympathize with him. Who hasn't felt the quiet terror of choice overload? Who hasn't longed for an authoritative voice to decide for us? I daresay this is why Wirecutter is so popular, and why so many people have smartwatches that tell them when to exercise and for how long – as mine did while I was penning these words: “Wug, take a 12-minute walk to close your move ring.”

Eggers’ "The Every" takes these quirks of the present and extrapolates them into a (not too distant) future where our devices don't just prompt us to stand and to wash our hands for 20 seconds, but also to sleep, and for how long; to eat, what and when; to drink water; to talk with friends and family; even to laugh for the recommended 12 minutes a day. In short, Eggers imagines an A.I. conscripted to answer that question "What should I do” for you — taking your preferences into account, of course, and then solving the equations to instruct you how to do everything so as to “optimize" your life. Eggers likewise imagines a single metric, the "SumNum," that would combine vast amounts of data into a single number in answer to that question, "Am I good?”

If you have a high enough SumNum, then yes.

Plenty of cult leaders, religions, and management fads have offered a similar salve to the paradox of freedom and excess choice. But Eggers’ algorithmic life is different: it's objective. Scientific. Unlike a cult leader, it may genuinely have your best interests in its simulated mind, because all it needs to solve the equations of happiness is your input. In this world, when your watch alerts you to do 4 minutes of jumping jacks, you can genuinely believe that it knows exactly what you should be doing, better than you yourself know.

So why, as Eggers imagines, do the subjects of these algorithmically designed lives feel such an aching hollowness inside?

I considered several answers as I continued my stroll. The easiest way out of Eggers’ perfectly designed dystopia is that the “objective, scientific” recommendations of his algorithms get human life fundamentally wrong. Those doctors who prescribed leeches surely thought of themselves as “objective” and “scientific” — little did they suspect that their medicine was actually sickening their patients! Perhaps the SumNum and the Every’s other algorithmic recommendations commit similar errors. Cue the standard lessons in epistemic humility: “we don’t understand much about why human societies work, and we don’t know what we don’t know, so any efforts to reform society must proceed slowly and with extreme caution…” Yes, thank you, Mr. Burke; an overconfidence in the powers of technology certainly is part of what makes The Every alarming, but that can’t be the whole story.

Indeed, if "epistemic humility" were the only thing The Every lacked, the solution would be simple: give them some time to figure things out. Eventually, one presumes, they'll have gathered enough data that your phone's prompts – to do jumping jacks, to have a laugh – will not only be well-intentioned, but impeccably timed, so as to refresh, rather than distract. Suppose this future is reached. Is there really nothing wrong with ceding complete control over one's schedule to a piece of software, provided that software really knows what it's doing?

"Why yes, of course!" is the instinctive answer. We like to think – and Eggers indulges this belief – that human autonomy is irreplaceable, even if using it results in objectively worse outcomes. We cherish the illusion of free will (my god, I’m starting to sound like Pessimistic Petey). But in practice, we are quick to outsource it: to let Google Maps decide where you drive, to let the Apple Watch decide how long you exercise. This outsourcing isn't just convenient, it might to some extent be necessary.

In the early days of the Space Shuttle program, NASA had automated almost every part of launch and landing, but it left the astronauts the job of deploying the parachute – the heroic astronauts had to have something to do! Deploying the parachute, however, was a delicate business – it had to be timed precisely – and the astronauts, being human, often fudged the timing. Eventually, NASA's engineers wrote a program to deploy the parachute, and the astronauts were reportedly delighted by how well it worked.

I continued my perambulations, turning somewhat randomly. I passed many other wanderers – or stumblers, maybe – their heads downcast, eyes transfixed by something in cyberspace. Some chatted animatedly with invisible people, probably coworkers, but possibly hallucinations. There has never been a better time to be a crazy person wandering the streets; with all the craziness of the world, you fit right in.

Then, across the way, I glimpsed the telltale stride of another of my favorite characters, Otto the Automator. Otto was moving quickly, as always. But who better to ask about the automation of free will? Otto was perpetually fiddling with bits of code and building his own tools. His stated purpose was to make his computer do all of his work for him. Wouldn't that make him obsolete?

"Ha!” Otto said. "I made myself obsolete years ago, but I’m still here.” Indeed, his first job had required a painful amount of manual curation of data into spreadsheets. "Boring, mindless work,” he said. “So I automated it!” Thanks to his scripts, he did a year’s work in a week. He then moved on to bigger, better pursuits, and along the way, automated many of them too. “I figure that when something gets boring, I should automate it. But there's always plenty of non-boring stuff to do!" He was currently working on a program to automate the landing of SpaceX shuttles.

Otto's watch dinged. He tapped at it. “I've been experimenting, and I found out that I do my best thinking when my heart rate is in the 110s, so I wrote an app to help me keep it there.” He paused. “I need to add some sort of snooze button, so I can say, ‘Not now, watch, I'm talking to Wug!’” He smiled, eyes suddenly dreamy as he contemplated this improvement.

"But don't you find it demeaning to have an app telling you what to do?''

"Ha! Not when I wrote the app! It's more like my past self helping out my present. And if I don't like how it's working, I can change it. I delight in tweaking these things.”

I considered this. Unlike the Every, Otto had no delusions of objective accuracy. He had built the thing himself. He trusted it as much as he trusted himself, which is to say: he knew there would be mistakes.

“I get such joy seeing my automations at work,” he said. “When I come home at night, all of my lights turn on. When I go to bed, they all turn off. When it gets dark, all of my blinds close. It’s like I have copies of my past self camping around the house and helping me out. And sure, each of those things saves what – 5 minutes? But I have hundreds like that. Together, they give me so much time.”

A funny image, this: little copies of Otto crouching around his home, doing his chores. And, come to think of it, my house had plenty of its own little homunculi. What else was my washing machine, or dishwasher, or toaster? This morning, little copies of the engineers at the Acme Toaster Corp readied my bread for marmalade. All technology is like this. It has never been the “machines versus the humans,” because the machines are just copies of past humans automating some little thing. If there's any struggle, it’s the humans against the humans, as ever: our future selves in tension with our past, or the shareholders of the Every in tension with the rest of us. It's all politics.

"The way I see it," said Otto, "the central challenge of our times is to keep adding to the scaffolding of automations without forgetting how they all work. The minute you have to take it on faith that the thing works, you've taken a step back towards the dark ages, and you've ceded a bit of control to the high priest.“

But I don't really understand how my phone works – at least, not by any rigorous standard. Does this mean I've ceded control to the high priest in Cupertino?

“Maybe you have," Otto said.

I had to ask. Had he built his own phone?

Of course he had.

"You know what they found written on Richard Feynman’s blackboard after he died?” Otto said. “‘Understand everything that has ever been understood.’ It was his last message to his students.”

“But even Feynman didn't quite get there,” I said. “It was an impossible goal. Worthwhile, sure, but impossible.”

With this, however, Otto took his leave, tapping on his wrist to summon a copy of his past self.

So I continued the stroll, turning randomly, winding between the Every and the Automator, Otto’s past selves and the ghosts of my toaster, with politics all the way down. Visions of science and democracy flashed metaphorically before me. Maybe we don't have to understand everything, if we understand a system that collectively understands? But even this seemed suspended on a tightrope. The future. How uncertain! How tenuous! I walked until the sun beat its steady path above the horizon. Relentless, yes, but absolutely dependable.


Next, we have some lighter fare. Our correspondent Chester Snuphanuph has read far too many productivity blogs over the past few years, but none has resolved a gnawing tension he felt between, on the one hand, David Allen’s embrace of processes and systems, and, on the other, Sönke Ahrens’ dictum: “Experts don’t plan.”

He now presents his own attempt to unify these opposing tendencies in a single framework, as the first installment in what he tells us will be a series of articles, bearing this terribly punny name:

What to do about to-dos?

by Chester Snuphanuph

"Task Management" – it’s a sterile term, yet may take it as a centerpiece of their life’s work. Is it necessary? Sure, to some degree. But is it commonly overapplied or inflated beyond the bounds of reason? Most certainly.

For a time, I was such an inflationist. I believed that the best way to do more was to have more to-dos, and then to will oneself, or guilt oneself, into doing them. There’s a kernel of truth here, but it sits amid a dangerous muddle. To navigate it, we must be conscious of two opposing tendencies of human (ir)rationality. The tension between them points to a Goldilocks zone of sufficient but not overly prescriptive planning, and it’s in this zone that managing tasks yields the greatest benefit to productivity without hindering creativity.

First, the truth: having things to do, and having committed to them via some conscious intention, is usually preferable to having nothing at all to do. Better to be busy working towards previously defined goals than skipping from whim to whim.

Let's capture this principle with an impressive-sounding name: “The Principle of Advance Intent.” Your present self, if anything like mine, is capricious, prone to whimsy, and tends to inflate the value of immediate experiences while undervaluing the future. To him, the valuation of time looks something like this:
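A rough sketch, not a law of nature: one standard way to model this present bias is a hyperbolic discount curve, in which a reward's felt value shrinks in proportion to 1 / (1 + k × delay). The few lines of Python below are purely illustrative – the impatience constant k is made up – but they show the shape.

    # Present-biased valuation: a reward's felt value shrinks hyperbolically
    # with delay. The impatience constant k is illustrative, not empirical.
    def felt_value(reward, delay_in_days, k=0.05):
        return reward / (1 + k * delay_in_days)

    for delay in (0, 1, 7, 30, 365):
        print(delay, round(felt_value(100, delay), 1))
    # 0   -> 100.0  (right now: full value)
    # 1   -> 95.2   (tomorrow: already discounted)
    # 7   -> 74.1   (next week: a quarter gone)
    # 30  -> 40.0   (next month: less than half)
    # 365 -> 5.2    (next year: barely registers)

The exact numbers don't matter; the shape does: a cliff right in front of you, then a long, flat plain.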

Arguably, this curve is rational in high-stakes environments like those of our hunter-gatherer ancestors. If long-term survival is uncertain, of course you should value the present more than next month, which you might not be around to experience. Especially if an activity, like eating, helps you survive, you should do it now, without hesitation.

But in the modern world, in which we are lucky enough to have stronger guarantees of long-term survival, this instinctive overvaluation of the present leads to strange choices, like taking $100 today instead of $200 in a year.

When it comes to deciding how we spend our time, we actually have two instinctual biases conspiring against us. Not only do our instincts lead us to overvalue the present, they also present an inflated picture of what in the present merits our attention, with preference given to things that seem dangerous or promise novelty. These were, again, helpful instincts in the Pleistocene (pay attention to the charging rhino, then investigate those weird tracks you noticed around camp). But for the office worker, these same instincts lead to procrastinating over the news and then spending what time remains putting out the little fires that arose while you procrastinated.

Hence the Principle of Advance Intent. From a (temporal) distance, the distortions of the present and the seeming importance of urgent tasks both fade. We'd rather accept $200 in three years than $100 in two years. Rationality is required to supplant the tyranny of irrelevant instincts, and that rationality works most clearly when exercised ahead of time, when one is clearheaded and free from temporal biases.
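Notice the reversal: up close, we grab the $100 today rather than wait a year for $200; from a distance, we'd happily wait the extra year. A few more illustrative lines of Python (the impatience constant k is, again, just an assumption chosen to make the arithmetic vivid) show the flip:

    # The same hyperbolic form, with delay measured in years.
    # k is an assumed impatience constant, chosen purely for illustration.
    def felt_value(reward, delay_in_years, k=1.5):
        return reward / (1 + k * delay_in_years)

    # Choice made in the moment: $100 now beats $200 in a year.
    print(felt_value(100, 0), felt_value(200, 1))   # 100.0 vs 80.0
    # Choice made in advance: $200 in three years beats $100 in two.
    print(felt_value(100, 2), felt_value(200, 3))   # 25.0 vs ~36.4

An exponential discounter would never flip like this; the reversal is the signature of a present-heavy curve – and the reason a commitment made in advance can outvote the whims of the moment.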

But beware the limits of human rationality! It's easy to extrapolate from the benefits of a rough schedule to the supposed necessity of elaborate charts and lists of hundreds of tasks covering a project from start to finish. For example, a student working on her thesis might surmise: “I have to write 60 pages in the next month, so every day, I'll write two pages.” If only writing were as simple as that! What will actually happen? She'll stick with the regimen for a few days and then realize that her writing is disorganized, or needs more research, or that the idea she was pursuing is flawed in some way and the first ten pages have to be rewritten. An exhaustive plan might work well for things one does routinely – applying for a grant, doing chores around the house, and so on – but for creative work, the terrain is by definition unexplored. You must be prepared to draw the map for your journey as the journey progresses.

So this is the tension we must balance when budgeting our time. On one hand, we must employ rational thinking in advance to escape the tyranny of instinct and impulse. On the other, we have to account for the limits of our planning abilities. Neglect to plan, and you'll drift about putting out fires and spinning your wheels, led astray by instincts mismatched to a modern environment. Overplan, and you'll slowly drift away from the plan, becoming increasingly unmotivated by its unrealistic demands or its growing irrelevance.

Bridging these two tendencies requires nimbleness and adaptation to the particulars of your projects, but there are general guidelines and practices that can help you find this sweet spot. I’ll be covering many of these in the coming weeks, including a knights-errant-themed approach to kanban-based task management and some useful tools – and, for the aficionados, integrations with my favorite note-taking apps: Obsidian and Craft.

Noteworthy Tools and Interesting Links

Apps, tools, and workflows for thinking better with technology.

Do note that there’s no sponsorship, nor anything shady here: just useful tools which, in our opinion, deserve to be better known.

  • Earlier this year, Max Krieger published a free app called Voiceliner, pitched as a tool for better brainstorming during your strolls. It was Nietzsche who proclaimed that “all truly great ideas are conceived while walking.” He no doubt counted several of his own ideas in this category, but regardless, science backs him up: a simple stroll increases blood flow to the brain, potentially priming your neurons for improved creative thinking. Voiceliner aims to make brainstorming while walking easier, with phone apps (for iOS and Android) that record and transcribe bits of spoken audio into an outline. You can rearrange the thoughts in the outline by swiping them left and right. It’s a clever idea, and well executed.
  • If you use Apple devices and practice (or have been tempted to practice) some variant of time-block planning, you might be interested in Leo Mehlig’s Structured - Day Planner. It’s an app that does something simple quite beautifully: it puts your tasks in a list to the side of your calendar, and allows you to easily assign them blocks of time.
  • There are, surprisingly, only a handful of browser extensions that let you highlight and take notes directly on webpages. WorldBrain's Memex is my favorite. Unlike LINER (the most popular in the category), Memex doesn’t do anything creepy with your data. It has a strong privacy policy (and a refreshingly ethical business plan). It can even sync highlights to Readwise.

Coming Soon from Riddle

Professor Wug reads quite a few papers and takes a lot of handwritten notes. He’s experimented with several e-ink devices for both purposes, and has promised to report back to us with his findings, as well as tips for setting up (and automating) the Kindle, the ReMarkable, and the Onyx Boox Note Air. There might also be some kind of paean to the virtuous limitations of e-ink tablets as tools for focus.


As always, we thank you for reading, and hope you enjoyed this week’s newsletter.

If you’d like to support the team at Riddle Press, you can join the Riddle Club for only $3 a month. Just visit Riddle Press and click the green button in the bottom right to sign up; you’ll get full access to the Riddle Book Club, other member-only articles, and even some custom automations.

Until next week!

– The Team at Riddle Press