Karthik Tadepalli

Economics PhD student at UC Berkeley


Rationality in the 21st Century

Published May 9, 2019

I.

Popular academic books fascinate me; I like academic knowledge, and I think it can be expressed in a way that interests the average reader. But the more one dives into research - especially in economic theory - the less it seems people will care about it. (Pop quiz: what fraction of people could you impress by telling them about Rubinstein’s email game, and why is it zero?)

I’ve only read pop-economics, though, and I didn’t know about the concept of popular computer science until my friend introduced me to Algorithms to Live By, by Brian Christian and Tom Griffiths. Algorithms to Live By is about how computer science can inform our daily lives, in domains from scheduling to exploration/exploitation to learning.

The truth is, Algorithms to Live By is a damn good book. There’s no qualifier; I thoroughly enjoyed it. But this is not a book review: rather, it’s a thought that Algorithms to Live By sparked. A thought that originates in computer science and culminates in economics, with significant consequences for both fields.

II.

Algorithms to Live By covers a wide array of topics: exploration/exploitation, caching, scheduling, Bayesian inference, overfitting, and more. Each chapter asks, “what is the best way to do X?” where X could be “manage a to-do list” or “predict the future” or “decide where to eat dinner”. It then covers the state of the art in CS research on how to do X in a computing context, and explains with apt analogies how the same principle applies outside that context.

A major aspect of this is pointing out the connections between the CS recommendations and folk wisdom. This is perfectly natural for the authors to do: it’s much easier for people to accept a theorist’s recommendations when they square with a process they already use or are at least familiar with.

Here are some of the recommendations that Christian and Griffiths make:

  • when dealing with interruptions to work (e.g. email), handle them all at once rather than responding to each one as it comes.
  • when predicting the outcome of a process, use your prior knowledge of that process to inform you.
  • when organizing your desk, keep the most used items closest at hand (see the caching sketch after this list).
  • when trying to predict how another person will behave, imagine how you would behave in their shoes.
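
The desk advice, for instance, is essentially a cache eviction policy; the book’s caching chapter leans heavily on least-recently-used (LRU) eviction. Here is a minimal sketch of the idea in Python; the “desk” framing and the item names are mine, not the authors’.

```python
from collections import OrderedDict

class DeskCache:
    """Toy LRU cache: items you touched recently stay on the desk;
    the least recently used item gets filed away when space runs out."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()  # key order tracks recency of use

    def use(self, item):
        if item in self.items:
            self.items.move_to_end(item)  # touching an item makes it the most recent
        elif len(self.items) >= self.capacity:
            evicted, _ = self.items.popitem(last=False)  # evict the least recently used
            print(f"filing away: {evicted}")
        if item not in self.items:
            self.items[item] = True

    def on_desk(self):
        return list(self.items)  # least recent first, most recent last


desk = DeskCache(capacity=3)
for doc in ["tax form", "draft", "tax form", "syllabus", "receipts"]:
    desk.use(doc)
print(desk.on_desk())  # ['tax form', 'syllabus', 'receipts']
```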

None of these are mind-bending revelations: there is no misinformed consensus being shattered by the weight of authoritative research. If anything, the research confirms intuitions people already had about optimal decisionmaking. And a subtle but important implication of that is that people already do the optimal thing. Not all people all the time, because then the book would be pointless. But if people can intuitively understand the CS prescription and apply it, what does that say about their decisionmaking process?

III.

Economics has gotten a bad rap lately. The primary charge against it is a complaint of the form, “economics assumes people are perfectly rational.” What does this mean? It usually translates to “economics assumes people mentally solve optimization problems with more feasible configurations than there are atoms in the universe, and that they do this while shopping for groceries every week”. In other words, this construct of rationality is too comprehensive.

Economists sometimes defend comprehensive rationality: they argue that people rationally respond to incentives in real life all the time, so clearly they can solve optimization problems. This defense can have only limited success, because economics is too agnostic: it doesn’t take a stance on how decisionmaking actually occurs in the nuts and bolts of the human brain. Wolfgang Pesendorfer’s critique of neuroeconomics sums up this approach well:

Neuroscience evidence cannot refute economic models because the latter make no assumptions and draw no conclusions about the physiology of the brain.

This is good for generality (“our conclusions hold for different forms of decisionmaking across different people”), but it’s still inherently limited. You cannot defend a model of decisionmaking against attack without a cohesive framework for how decisionmaking really happens. Comprehensive rationality is cut off at the knees without a nuts-and-bolts description of how comprehensive we can actually be.

Computer science does take a stance on how decisionmaking occurs: according to computer scientists, human decisions are computational problems. The metaphor of the brain as computer is everywhere in society. When we’re tired, our brain is “overloaded”. When we learn in a class, we “process information”. The metaphor is increasingly being reinforced in the other direction, as computational tasks become identified with intelligence.

It’s no wonder, then, that Christian and Griffiths evoke this metaphor repeatedly. They say we should simplify our interactions with people to make them “computationally easier”. Working through a to-do list is “scheduling on a single machine”. When we form beliefs based on scant evidence, we are “overfitting”.
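
To make the scheduling analogy concrete: one classic single-machine result, which the book’s scheduling chapter draws on, is that doing the shortest job first minimizes the total (and hence the average) time tasks spend waiting to be finished. A minimal sketch, with made-up tasks and time estimates of my own:

```python
# Hypothetical to-do list: (task, estimated hours). The numbers are illustrative only.
tasks = [("write referee report", 5), ("reply to emails", 1), ("grade problem sets", 3)]

def total_completion_time(order):
    """Sum of completion times if tasks are done back to back in this order."""
    elapsed, total = 0, 0
    for _, hours in order:
        elapsed += hours  # this task finishes after all earlier tasks plus itself
        total += elapsed
    return total

spt = sorted(tasks, key=lambda t: t[1])                # Shortest Processing Time first
lpt = sorted(tasks, key=lambda t: t[1], reverse=True)  # longest first, for comparison

print([name for name, _ in spt])   # ['reply to emails', 'grade problem sets', 'write referee report']
print(total_completion_time(spt))  # 1 + 4 + 9 = 14
print(total_completion_time(lpt))  # 5 + 8 + 9 = 22
```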

This metaphor is not necessarily at odds with comprehensive rationality, which is implementation-agnostic. But the major difference is that people actually believe in the brain-computer metaphor: most of us accept it, consciously or unconsciously. People subscribe to the brain as a computer in a way that they never subscribed to the human being as a rational agent, even though these amount to the same position.

IV.

In the conclusion of Algorithms to Live By, Christian and Griffiths write:

The intuitive standard for rational decision-making is carefully considering all available options and taking the best one. At first glance, computers look like the paragons of this approach, grinding their way through complex computations for as long as it takes to get perfect answers. But as we’ve seen, that is an outdated picture of what computers do: it’s a luxury afforded by an easy problem. In the hard cases, the best algorithms are all about doing what makes the most sense in the least amount of time, which by no means involves giving careful consideration to every factor and pursuing every computation to the end. Life is just too complicated for that.

… Up against such hard cases, effective algorithms make assumptions, show a bias toward simpler solutions, trade off the costs of error against the costs of delay, and take chances. These aren’t the concessions we make when we can’t be rational. They’re what being rational means.

This is what I found to be the most remarkable thing about Algorithms to Live By. It seeks to do nothing less than redefine rational decisionmaking. Comprehensive rationality is out, and now it’s all about heuristic rationality: rationality as minimum-effort computation.

This is not a new research idea in either economics or computer science. Decision theorists often incorporate the “cost of information acquisition” into models of learning, and the entire field of approximation algorithms is centered on not-quite-optimal-but-good-enough solutions to NP-hard problems. But I find this significant because it popularizes these research ideas while explicitly rebranding them as rational decisionmaking. This is taking the baton of rationality from economics in the 20th century and passing it to computer science in the 21st century.
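
To illustrate what “good enough” means formally, here is a minimal sketch of a textbook approximation algorithm, the 2-approximation for vertex cover; the example is my own choice, not one taken from the book. Repeatedly grab any uncovered edge and take both of its endpoints: the resulting cover is never more than twice the size of the optimal one, and it takes linear time rather than the exponential time an exact answer could require.

```python
def approx_vertex_cover(edges):
    """Greedy 2-approximation: for each still-uncovered edge, add both endpoints."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge not yet covered
            cover.update((u, v))
    return cover

# Toy graph: a path 1-2-3-4 plus the edge 2-4. The optimal cover is {2, 3} (size 2);
# the greedy cover below has size 4, within the guaranteed factor of 2.
print(approx_vertex_cover([(1, 2), (2, 3), (3, 4), (2, 4)]))
```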

This looks to be the future of the rational agent. Behavioral economics sought to eliminate it from the scene, but in all likelihood Homo economicus is just getting a facelift and returning as the human computer that we already know intimately as ourselves.