Professional negotiators often wax poetic about win-win outcomes, where both sides cooperate and compromise. In practice, win-win is never a dominant strategy. Lose-lose almost always beats it. Here’s why.
Adopting a Dominant Strategy
One of our clients asked us to help them define a set of dominant strategies for a new AI system. The goal was simple: For a given scenario, train the system to achieve better (winning) outcomes no matter what strategies any other competing systems might adopt.
In a laboratory, crafting a dominant strategy, where one exists, requires knowledge of contemporary game theory, some math skills, and a significant amount of testing. There’s not a lot of harm to be done trying to get a set of algorithms to obtain the lowest price for an ad or to optimize a media mix.
But in the real world, actions come with consequences. Is a short-term win good in the long term? Does winning mean more for me and less for everyone else? Or is winning defined as good for me if it’s also good for everyone else?
There’s another consideration. Outside of a gaming environment, a dominant strategy may not always be the best strategy, nor does it necessarily always lead to the best outcomes.
Unlike AI systems, human beings come pre-programmed: we are genetically wired to act in our own self-interest. As the saying goes, “Winning isn’t everything; it’s the only thing.” It may sound (or actually be) amoral, but there is some math to back it up.
One of the most famous games in game theory is the “Prisoner’s Dilemma.” In the two-player game, you and your accomplice have committed a crime. You are arrested and immediately separated from each other. During your interrogation, you are given a choice: cooperate or defect. To cooperate means that you will tacitly cooperate with your accomplice and remain silent. To defect means that you will break your partnership and tell the authorities that your accomplice is guilty.
Your options are as follows:
- If you cooperate and your accomplice cooperates (you both stay silent), you both go free (and split the loot you’ve hidden prior to your arrest).
- If you defect (turn on your accomplice) and your accomplice cooperates (stays silent), you will go free (and get all the loot you stole), while your accomplice will receive the maximum sentence.
- If you defect (turn on your accomplice) and your accomplice defects (turns on you), you will both get a reduced sentence reserved for informants (rats). After you both serve your jail sentences, you and your accomplice will split the loot.
- If your accomplice defects (turns on you) and you cooperate (stay silent), you will get the maximum sentence and your accomplice will go free (and get all the loot you stole).
No matter what your accomplice does, you are better off defecting. If your accomplice stays silent, defecting means you go free with all the loot instead of half. If your accomplice turns on you, defecting means a reduced sentence and half the loot instead of the maximum sentence and nothing. Staying silent, by contrast, leaves you exposed to the harshest outcome. In a single two-player game, defect is always the dominant strategy. (In an iterated two-player game there are other strategic options, but they are not germane to this discussion.)
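To make the dominance argument concrete, here is a minimal sketch in Python. The utility numbers are invented for illustration (higher is better: freedom and loot are good, sentences are bad); they are not from the article, but they preserve its ordering of outcomes.

```python
# Payoff matrix for the article's telling of the Prisoner's Dilemma,
# from your (player 1's) point of view. Utilities are illustrative.
PAYOFFS = {
    # (your move, accomplice's move): your utility
    ("cooperate", "cooperate"): 3,   # go free, split the loot
    ("defect",    "cooperate"): 5,   # go free, keep all the loot
    ("defect",    "defect"):    1,   # reduced sentence, split the loot later
    ("cooperate", "defect"):   -5,   # maximum sentence, no loot
}

def is_dominant(move, payoffs):
    """A move is dominant if it beats the alternative against
    every possible move by the other player."""
    alt = ({"cooperate", "defect"} - {move}).pop()
    return all(
        payoffs[(move, theirs)] > payoffs[(alt, theirs)]
        for theirs in ("cooperate", "defect")
    )

print(is_dominant("defect", PAYOFFS))     # True
print(is_dominant("cooperate", PAYOFFS))  # False
```

Whatever specific utilities you assign, as long as each row of the article’s outcome list keeps its relative ranking, the same check will report that defect dominates.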
The paradox of the Prisoner’s Dilemma is that the dominant strategy will often yield a short prison sentence for both you and your accomplice. This is not a particularly good outcome. Prison sucks! But a short sentence is better than a long one.
What strategy could you adopt that would guarantee both you and your accomplice would go free?
There really isn’t a strategy, but there is a solution: an outside force. This could be in the form of leadership, regulations, or a belief system. Say your mob boss told you both that the rules of the mob require you to stay silent (cooperate) no matter what. Whether you serve jail time or not, if you rat on your accomplice, the mob will find you and kill you. That outside force would completely alter the dynamics of the game. It would also virtually guarantee you and your accomplice would go free.
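The mob boss’s rule can be sketched as a change to the payoff matrix itself: defecting now carries a catastrophic penalty regardless of what the other player does. As before, the utility values are my own illustrative assumptions, not from the article.

```python
# Base payoffs as in the article's telling (illustrative utilities).
BASE = {
    ("cooperate", "cooperate"): 3,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
    ("cooperate", "defect"):   -5,
}
DEATH_PENALTY = -100  # the mob finds you if you rat

# The outside force rewrites every payoff in which you defect.
MOB_RULES = {
    (mine, theirs): payoff + (DEATH_PENALTY if mine == "defect" else 0)
    for (mine, theirs), payoff in BASE.items()
}

def is_dominant(move, payoffs):
    alt = ({"cooperate", "defect"} - {move}).pop()
    return all(payoffs[(move, t)] > payoffs[(alt, t)]
               for t in ("cooperate", "defect"))

print(is_dominant("defect", MOB_RULES))     # False
print(is_dominant("cooperate", MOB_RULES))  # True
```

Under the new rules, cooperation is no longer a gamble on your accomplice’s loyalty; it is simply the dominant move, which is why both of you walk free.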
This might work in a two-player game. What about in an iterated multi-player version?
Tragedy of the Commons
“The Tragedy of the Commons,” a 1968 paper in Science by ecologist Garrett Hardin, describes an iterated multi-player version of the Prisoner’s Dilemma. The paper posits that in an unregulated shared-resource system, individuals will always act according to their own self-interest (thereby neglecting the well-being of society in the pursuit of marginal personal gains). In other words, in the multi-player game, the dominant strategy is also “defect.”
For example, during a drought, the mayor proclaims that no one should water their lawn, and showers should be limited to five minutes. Most people will not water their lawns because of peer pressure (and possible legal action by the mayor). But who will know if you take a 10-minute shower? It wouldn’t really hurt anyone, would it?
As you can imagine, a 10-minute shower is an easy short-term win. Of course, if everyone in town takes 10-minute showers, the water supply will run out – and then there won’t be any water to drink, let alone take a shower with. Hence, the “tragedy” in “Tragedy of the Commons.”
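The arithmetic of the tragedy can be sketched with a toy simulation. The reservoir size, population, and usage rates below are numbers I have invented for illustration; the point is only that the selfish choice halves how long the shared supply lasts.

```python
# A toy model of the commons: a shared water supply and a town of
# residents who each shower once a day. All figures are invented.
def days_until_dry(supply_liters, residents, minutes_each,
                   liters_per_minute=10):
    """Days the supply lasts if every resident showers daily."""
    daily_use = residents * minutes_each * liters_per_minute
    return supply_liters // daily_use

SUPPLY = 10_000_000  # liters in the reservoir
TOWN = 10_000        # residents

print(days_until_dry(SUPPLY, TOWN, 5))   # everyone complies: 20 days
print(days_until_dry(SUPPLY, TOWN, 10))  # everyone "wins": 10 days
```

One resident’s 10-minute shower barely moves the total, which is exactly why defecting feels harmless; it is only when everyone plays the dominant strategy that the reservoir runs dry twice as fast.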
Some Shower Thoughts
We seem to be drifting from a society that indefatigably strove to distinguish right from wrong to a culture that exalts winners and excoriates losers. This is dangerous in the extreme. There may be business reasons to eschew win-win because the dominant strategy is simply “win.” But to win as a society, we are going to need an outside force, belief system, or leadership to dissuade individuals from acting solely in their self-interest. Otherwise, as game theory suggests, tragedy may dominate our future.