Thinking, Fast and Slow (Daniel Kahneman)

Published on Thu, 22 Aug 2024. Estimated reading time: 7 min.

Thinking, Fast and Slow is a non-fiction book by psychologist Daniel Kahneman published in 2011, discussing the human decision-making process and the flaws that come with it. If I had to break it down to one sentence, the book challenges the perception of humans as purely rational beings (“econs”) by describing various mechanisms of the human mind that can produce inconsistent results.

Kahneman died earlier this year, after decades of working on the research described in the book and reaching the highest academic acclaim, having been dubbed the “godfather of behavioral economics”. I had known about Thinking, Fast and Slow before, but his death prompted me to add the book to my reading list, and I finally picked it up in July.

Summary

Each of the book's five parts looks at a different aspect of the decision-making mechanisms Kahneman observed.

It begins by introducing the two systems involved in our decision-making process: System 1, which operates subconsciously and is responsible for reflexive and intuitive decisions, and System 2, which operates consciously and is what we consider to be ourselves when we think. The case for these two systems (which are mostly described as actors throughout the book to make them easier to conceptualize) is rather compelling.

Both have strengths and weaknesses (e.g. System 1 handles averages and peaks well but is bad at sums; System 2 is much better at dissecting problems but is “lazy”, and although it is tasked with validating input from System 1, it often fails at that verification), and they interact to produce the decisions we make every minute.

Concepts

A (non-exhaustive) list of interesting concepts that the book proposes:

Substitution is a process in which a hard question is replaced with an easier one, and the answer to the easier question is then used as the answer to the original. This happens without the person realizing that they aren’t answering the original question but a substitute.

Intensity matching describes a more specific case of substitution: Transferring a value on one scale (e.g. your emotional response to a heartbreaking story) to another scale (e.g. a financial contribution to a good cause related to that story) to (intuitively) arrive at an appropriate value.

Regression to the mean is one of several examples where human intuition is beaten by statistics: We tend to see a particularly good or bad day/event/occurrence as evidence of overall performance, but statistically an outstanding event (e.g. a goalkeeper saving every shot on goal, or none at all) tends to be followed by an event closer to the average. An outstanding game by the goalkeeper is more likely to be followed by a more average one, but we tend to expect another outstanding game and are disappointed when it doesn’t happen.
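
To make the statistics concrete, here is a toy simulation (my own sketch, not from the book): model each observed performance as persistent skill plus one-off luck, select the top performers of one game, and watch the same group score much closer to the average in the next.

```python
import random

random.seed(42)

# Toy model (an assumption for illustration): observed score =
# persistent skill + one-off luck, both standard normal.
skills = [random.gauss(0, 1) for _ in range(10_000)]
game1 = [s + random.gauss(0, 1) for s in skills]
game2 = [s + random.gauss(0, 1) for s in skills]

# Take the 100 best performances of game 1 ...
top = sorted(range(len(skills)), key=game1.__getitem__, reverse=True)[:100]

avg1 = sum(game1[i] for i in top) / len(top)
avg2 = sum(game2[i] for i in top) / len(top)

# ... the same players land much closer to the mean in game 2,
# because their lucky draw does not repeat.
print(f"top group, game 1: {avg1:.2f}")  # roughly 3.8
print(f"top group, game 2: {avg2:.2f}")  # roughly 1.9
```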

Anchoring is when the context of a decision sets the stage by creating a reference point from which the decision-making process starts out. This is again a subconscious process: We are not aware it happens. But first asking people whether some quantity (e.g. a historical figure’s age at death) lies above or below an arbitrary number supposedly shifts the estimates they give afterwards.

Prospect theory is a little hard to put into a couple of sentences. If I still had to try, I would describe it as follows: It’s a counter-argument (or rather, an amendment) to utility theory, which postulates that (rational) agents maximize the utility of a decision. Prospect theory claims that human decision-making is more complex and driven by divergence from a reference point such as the status quo (instead of absolute outcomes), by diminishing sensitivity (the same difference matters less the further it is from that reference point) and by loss aversion (losses have a higher impact than equivalent gains).
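
A small sketch can make the last two claims tangible. The shape of this value function comes from Kahneman and Tversky’s work on prospect theory; the specific parameter values (α ≈ 0.88, λ ≈ 2.25, often cited from their 1992 paper) are my addition here, not something the book review above derives.

```python
def value(x: float, alpha: float = 0.88, lam: float = 2.25) -> float:
    """Prospect-theory value function. x is a gain or loss relative
    to the reference point, not absolute wealth."""
    if x >= 0:
        return x ** alpha        # concave for gains: diminishing sensitivity
    return -lam * (-x) ** alpha  # steeper for losses: loss aversion

# Loss aversion: a loss hurts more than the equivalent gain pleases.
print(value(100))   # ~57.5
print(value(-100))  # ~-129.4

# Diminishing sensitivity: the same +100 feels smaller further from zero.
print(value(100) - value(0))      # ~57.5
print(value(1100) - value(1000))  # ~38.2
```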

Expert Intuition

A very fascinating part of the book is the discussion of expert intuition. Kahneman worked on this topic with Gary Klein, whom he describes as a somewhat adversarial collaborator. Kahneman’s synthesis of their opposing positions is: Expert intuition can only be skilled (as in: have a higher chance of being correct than a coin toss) in sufficiently predictable environments with immediate feedback loops. A firefighter (the example from the book) can pick up on non-obvious cues that forebode escalation because they have experienced similar events in the past. Intuition is basically just memory retrieval and pattern matching.

A stock picker, however, cannot learn to correctly forecast stock performance because the stock market is a “zero-validity” environment. The book makes an even bolder claim: If you actually validate stock-picking skills, you will find only minuscule differences from pure chance. In a market in which ETFs seem to perform more consistently than manually picked stocks, this rings true. Remarkable – according to the book – is the cognitive dissonance when people are confronted with such performance metrics: The results are ignored and everyone carries on as if they had the skills that statistics have been unable to verify.
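
As a back-of-the-envelope check (my own toy simulation, not the book’s data), one can generate advisers whose yearly results are pure noise and confirm that the year-over-year rank correlation hovers near zero, even though a fair number of advisers will still look like consistent winners:

```python
import random

random.seed(1)

# Toy "zero-validity" market (an assumption for illustration):
# each adviser's yearly result is pure noise, with no skill at all.
n = 1000
year1 = [random.gauss(0, 1) for _ in range(n)]
year2 = [random.gauss(0, 1) for _ in range(n)]

def ranks(xs):
    """Rank of each element (0 = worst); continuous draws, so no ties."""
    order = sorted(range(len(xs)), key=xs.__getitem__)
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

# Spearman rank correlation between the two years: hovers near 0,
# i.e. this year's ranking says nothing about next year's.
r1, r2 = ranks(year1), ranks(year2)
d2 = sum((a - b) ** 2 for a, b in zip(r1, r2))
rho = 1 - 6 * d2 / (n * (n**2 - 1))
print(f"year-over-year rank correlation: {rho:.3f}")

# Yet by chance alone roughly a quarter of advisers land in the top
# half two years in a row, fertile ground for a skill narrative.
streaks = sum(1 for a, b in zip(year1, year2) if a > 0 and b > 0)
print(f"'consistent winners': {streaks} of {n}")
```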

Conclusion

There are many more topics discussed in great detail in the book, but the conclusion it draws is clear: Human decision-making is driven by evolutionary features that were of great help in surviving difficult conditions, but they have not fully adjusted to the kinds of decisions that humans need to make in the present day. The “rational agents” described by economic theory (the book calls them “econs”) do not exist in that shape and form, and humans are susceptible to flawed heuristics and biases.

The conclusion of the book draws a line from all that to what it means for policy making. The gist is that humans do not necessarily need to be “kept from harm”, but they do need to be provided with decision-making input in formats that do not skew their perception. Marketing departments and public opinion drivers are well aware of the psychological research described in this book, and Kahneman takes the standpoint that policy makers also need to be aware of biases and heuristics, and should set policies that enforce neutral presentation of information to minimize framing, anchoring and similar flaws in decision making.

Review

Somewhat surprisingly, reading this book has been a breeze. The topic is difficult (and should be dry, given the amount of research results it discusses), but the writing style made it much easier to follow the threads spun by Kahneman than I anticipated. It’s certainly a book that requires breaks to process and think about the concepts, but I still finished it far faster than I expected.

A criticism leveled against the book is that the countless studies it references are caught up in the replication crisis of the social sciences – the inability to reproduce them successfully or consistently. That’s a very valid point: Most of the book’s claims only hold if the described behaviour can be considered somewhat universal. They become moot if humans simply behave differently.

To a layperson, a lot of the book makes intuitive sense (though I just learned not to always trust intuition), and the examples do seem to create the cognitive knots described in the book, even when I knew I was about to run into a logical fallacy. Observing myself work through them has been a fascinating experience. The book provides some underpinning for fuzzy beliefs I’ve held before but had trouble articulating (e.g. that the status quo plays a huge role in decision making), which makes it both compelling and precarious (as it makes me prone to confirmation bias).

Software engineers can – in my opinion – learn from this book, in particular from the review of expert intuition. We all know situations in which expert intuition is desired (say, story point estimations) but the responses are usually extremely poor and there is basically no feedback loop. Remarkably, those situations share some other characteristics with the case made in the book: A whole industry feels that story point estimations border on the useless, but a competing (business) interest creates a cognitive dissonance we have simply accepted. If we accepted that human decision-making is simply not up to this task, we could maybe stop wasting time on it.

Describing humans as purely rational beings has also been the theoretical underpinning of predatory, self-serving ideologies, and only if we acknowledge that these foundations are extremely shaky can we put policies in place that improve human well-being and social constructs.

Overall, the book highlights aspects we can all pay attention to while making decisions, and it even proposes processes to insulate organizations and societies from decision-making mistakes. Stopping ourselves in our tracks to review a decision and test it for biases makes universal sense, even if the models described in Thinking, Fast and Slow are not as useful or correct as they are made out to be. I’d strongly recommend this book to anyone interested in decision-making processes and introspection, but it should be taken as a (likely flawed) attempt at modelling human behaviour, not an exhaustive guide to it.