Wednesday, January 11, 2017

Algocracy as Hypernudging: A New Way to Understand the Threat of Algocracy





It is a noticeable feature of intellectual life that many people research the same topics, but do so using different conceptual and disciplinary baggage, and consequently fail to appreciate how the conclusions they reach echo or complement the conclusions reached by others.

I see this repeatedly in my work on algorithms in governance. It’s pretty obvious to me now that this is a major topic of scholarly interest, being pursued by hundreds (probably thousands) of academics and researchers, across multiple fields. They are all interested in much the same things. They care about the increasing pervasiveness of algorithmic governance; they want to know how this affects political power, human freedom, and human rights; and they want to mitigate the negative effects and accentuate the positive (if any). And yet, I get a tremendous sense that many of these scholarly groups are talking past each other: packaging their ideas in the conceptual garb that is familiar to their own discipline or that follows from their past scholarly work, and failing to appreciate how what they are saying fits in with what others have said. Perhaps this is just the inevitable result of the academic echo chambers created by institutional and professional networks.

But note that this isn’t just a problem of interdisciplinarity. Many scholars within the same disciplines fail to see the similarities between what they are doing, partly because of the different theories and ideas they use, partly because there is too much work out there for any one scholar to keep up with, and partly because everyone longs to be ‘original’ — to make some unique contribution to the body of knowledge — and avoid accusations of plagiarism. I think this is a shame. Pure plagiarism is a problem, for sure, but reaching the same conclusions from slightly different angles is not. I think that if we were more honest about the similarities between the work we do and the work of others we could advance research and inquiry.

Admittedly this is little more than a hunch, but in keeping with its spirit, I’m trying to find the similarities between the work I have done on the topic of algorithmic governance and the work being done by others. As a first step in that direction, I want to analyse a recent paper by Karen Yeung on hypernudging and algorithmic governance. As I hope to demonstrate, Yeung reaches conclusions similar to the ones I reached in a paper entitled ‘The Threat of Algocracy’, but because she uses a different theoretical framework she provides important additional insight into the phenomenon I described in that paper.

Allow me to explain.


1. From the Threat of Algocracy to Hypernudging
In the ‘Threat of Algocracy’ I used ideas and arguments drawn from political philosophy to assess the social and political impact of algorithmic governance. I defined algorithmic governance — or as I prefer ‘algocracy’ — as the use of data-mining, predictive and descriptive analytics to constrain and control human behaviour. I then argued that the increased prevalence of algocratic systems posed a threat to the legitimacy of governance. This was because of their likely opacity and incomprehensibility. These twin features meant that people would be less able to participate in and challenge governance-related decisions, which is contrary to some key normative principles of liberal democratic governance. Individuals would be subjects of algorithmic governance but not meaningful creators or controllers of it.

To put it more succinctly, I argued that the increased prevalence of algorithmic governance posed a threat to the liberal democratic order because it potentially reduced human beings to moral patients and denied their moral agency. In making this argument, I drew explicitly on a similar argument from the work of David Estlund about the ‘Threat of Epistocracy’. I also reviewed various resistance and accommodation strategies for dealing with the threat and concluded that they were unlikely to succeed. You can read the full paper here.

What’s interesting to me is that Karen Yeung’s paper deals with the same basic phenomenon — viz. the increased prevalence of algorithmic governance systems — but assesses it using tools drawn from regulatory theory and behavioural economics. The end result is not massively dissimilar from what I said. She also thinks that algorithmic governance can pose a threat to human agency (or, more correctly in her case, ‘freedom’), but by using the concept of nudging — drawn from behavioural economics — to understand that threat, she provides an alternative perspective on it and an alternative conceptual toolkit for addressing it.

I’ll explain the main elements of her argument over the remainder of this post.


2. Design-based Regulation and Algocracy
The first thing Yeung tries to do is locate the phenomenon of algorithmic governance within the landscape of regulatory theory. She follows Julia Black (a well-known regulatory theorist) in defining regulation as:

the organised attempt to manage risks or behaviour in order to achieve a publicly stated objective or set of objectives. 
(Black quoted in Yeung 2016, at 120)

She then identifies two main forms of regulation:

Command and Control Regulation: This is the use of laws or rules to dictate behaviour. These laws or rules usually come with some carrot or stick incentive: follow them and you will be rewarded; disobey them and you will be punished.

Design-based Regulation: This is the attempt to build regulatory standards into the design of the system being regulated, i.e. to create an architecture for human behaviour that ‘hardwires’ in the preferred behavioural patterns.

Suppose we wanted to prevent people from driving while drunk. We could do this via the command and control route by setting down legal limits for the amount of alcohol one can have in one’s bloodstream while driving, by periodically checking people’s compliance with those limits, and by punishing them if they breach those limits. Alternatively, we could take the design-based route: redesign cars so that people simply cannot drive if they are drunk. Alcohol interlocks could be installed in every car, forcing drivers to take a breathalyser test before starting the engine; if they fail the test, the car will not start.
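To make the contrast concrete, here is a minimal sketch in Python of the design-based approach. It is a hypothetical illustration rather than a description of any real interlock: the blood-alcohol limit, the sensor interface and the function names are my own assumptions. The point is simply that the rule is not announced and enforced after the fact; it is hardwired into the artefact itself.

```python
# Illustrative sketch only: the blood-alcohol limit, the sensor
# interface and the function names are hypothetical placeholders.

BAC_LIMIT = 0.05  # assumed legal blood-alcohol limit (illustrative value)


def read_breathalyser() -> float:
    """Placeholder for the interlock's breath-alcohol sensor."""
    raise NotImplementedError("hardware sensor not modelled in this sketch")


def attempt_ignition() -> bool:
    """Start the engine only if the driver passes the breath test.

    Command-and-control regulation would let the car start and punish
    violations after the fact; here the preferred behaviour is built
    into the design of the car itself.
    """
    if read_breathalyser() >= BAC_LIMIT:
        print("Ignition blocked: breath test failed.")
        return False
    print("Breath test passed: engine started.")
    return True
```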

With this conceptual framework in place, Yeung tries to argue that many forms of algorithmic governance — particularly algorithmic decision support systems — constitute a type of design-based regulation. But she makes this argument in a circuitous way by first arguing that nudging is a type of design-based regulation and that algorithmic decision support systems are a type of nudging. Let’s look at both of these claims.


3. Algocracy as Hypernudging
Nudging is a regulatory philosophy developed by Cass Sunstein and Richard Thaler. It has its origins in behavioural economics. I’ll explain how it works by way of an example.

One of the key insights of cognitive psychology is that people are not, in fact, as rational as economists would like to believe. People display all sorts of biases and psychological quirks that cause them to deviate from the expectations of rational choice theory. Sometimes these biases are detrimental to their long-term well-being.

A classic example of this is the tendency to overprioritise the short-term future. It is rational to discount the value of future events to some extent. What happens tomorrow should matter more than what happens next year, particularly given that tomorrow is the gateway to next year: if I don’t get through tomorrow unscathed I won’t be around to appreciate whatever happens next year. But humans seem to discount the value of future events too much. Instead of discounting according to an exponential curve, they discount according to a hyperbolic curve. This leads them to favour smaller, sooner rewards over larger, later rewards, even when the expected value of the latter is higher than that of the former. Thus I might prefer to receive 10 dollars tomorrow rather than 100 dollars in a year's time, even though the value of the latter greatly exceeds the value of the former.
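To see how the two discount curves come apart, here is a small worked sketch in Python. The discount rates are assumed values chosen purely to reproduce the preference reversal described above; they are not drawn from Yeung’s paper or from the empirical literature.

```python
import math

# The discount parameters below are made-up values chosen purely to
# illustrate how the two curves can rank the same rewards differently.


def exponential_value(amount: float, days: float, rate: float = 0.001) -> float:
    """Present value under exponential discounting: A * exp(-r * t)."""
    return amount * math.exp(-rate * days)


def hyperbolic_value(amount: float, days: float, k: float = 0.05) -> float:
    """Present value under hyperbolic discounting: A / (1 + k * t)."""
    return amount / (1 + k * days)


small_soon = (10, 1)      # 10 dollars tomorrow
large_later = (100, 365)  # 100 dollars in a year

for name, value in [("exponential", exponential_value), ("hyperbolic", hyperbolic_value)]:
    v_soon, v_later = value(*small_soon), value(*large_later)
    choice = "10 dollars tomorrow" if v_soon > v_later else "100 dollars in a year"
    print(f"{name}: {v_soon:.2f} vs {v_later:.2f} -> prefers {choice}")
```

With these assumed parameters, the exponential discounter prefers the 100 dollars in a year, while the hyperbolic discounter prefers the 10 dollars tomorrow, which is exactly the short-termist pattern described above.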

This creates particular problems when it comes to retirement savings. The bias towards the short-term means that people often under-save for their retirements. One famous example of nudging — perhaps the progenitor of the theory — was the attempt made by Richard Thaler (and others) to address this problem of undersaving. He did so by taking advantage of another psychological bias: the bias toward the status quo. People are lazy. They like to avoid cognitive effort, particularly when it comes to mundane tasks like saving for retirement. Thaler suggested that you could use this bias to encourage people to save more for retirement by simply changing the default policy setting on retirement savings plans. Instead of making them opt-in policies, you should make them opt-out. Thus, the default setting should be that money is saved and that people have to exert effort to not save. Making this simple change had dramatic effects on how much people saved for retirement.

We have here the essence of nudging. The savings policy was altered so as to nudge people into a preferred course of action. According to Thaler and Sunstein, the same basic philosophy can apply to many regulatory domains. Policy wonks and regulators can construct ‘choice architectures’ (roughly: decision-making situations) that take advantage of the quirks of human psychology and nudge people into preferred behavioural patterns. This is a philosophy that has really taken off over the past fifteen years, with governments around the world setting up behavioural analysis units to implement nudge-based thinking in many policy settings (energy, tax collection, healthcare, education, finance etc.).

Yeung argues that nudging is a type of design-based regulation. Why? Because it is not about creating rules and regulations and enforcing them but about hardwiring policy preferences into behavioural architectures. Changing the default setting on retirement savings policy is, according to this argument, more akin to putting speed bumps on the road than it is to changing the speed limit.

She also argues that algorithmic governance systems operate like nudges. This is particularly true of decision-support systems. These are forms of algorithmic governance with which we are all familiar. They use data-mining techniques to present choice options to humans. Think about the PageRank algorithm on Google search; the Amazon recommended choices algorithm; Facebook’s newsfeed algorithm; the route-planner algorithm on Google Maps; and so on. All of these algorithms sort through options on our behalf (websites to browse, books to buy, stories to read, routes to take) and present us with one or more preferred options. They consequently shape the choice architecture in which we operate and nudge us toward certain actions. We typically don’t question the defaults provided by our algorithmic overlords. Who, after all, is inclined to question the wisdom of the route-planner on Google Maps? This is noteworthy given that algorithmic decision support systems are used in many policy domains, including policing, sentencing, and healthcare.

There is, however, one crucial distinction between algorithmic governance and traditional nudging. Traditional nudges are reasonably static, generalised policy interventions. A decision is made to alter the choice architecture for all affected agents at one moment in time. The decision is then implemented and reviewed. The architecture may be changed again in response to incoming data, but it all operates on a reasonably slow, general and human timescale.

Algorithmic nudging is different. It is dynamic and personalised. The algorithms learn stuff about you from your decisions. They also learn from diverse others. And so they update and alter their nudges in response to this information. This allows them to engage in a kind of ‘hypernudging’. We can define hypernudging as follows (the wording, with some modifications, is taken from Yeung 2016, 122):

Algorithmic Hypernudging: Algorithmically driven nudging which highlights and takes advantage of patterns in data that would not be observable through human cognition alone and which allows for an individual’s choice architecture to be continuously reconfigured and personalised in real time.
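To make the definition a little more tangible, here is a toy sketch in Python of what a continuously reconfigured and personalised choice architecture might look like. It is my own invented illustration, not anything taken from Yeung’s paper: the default option shown to each user is re-ranked after every observed choice, using that user’s own history together with data pooled from other users.

```python
from collections import defaultdict


class HypernudgeSketch:
    """Toy illustration (my own invention, not Yeung's) of a hypernudge:
    the default option shown to each user is continuously re-ranked from
    that user's own choices plus patterns pooled across all users."""

    def __init__(self, options):
        self.options = list(options)
        self.user_counts = defaultdict(lambda: defaultdict(int))  # per-user history
        self.global_counts = defaultdict(int)                     # pooled history

    def record_choice(self, user: str, option: str) -> None:
        """Update the model every time a new choice is observed."""
        self.user_counts[user][option] += 1
        self.global_counts[option] += 1

    def default_for(self, user: str) -> str:
        """Reconfigure the choice architecture for this user, right now:
        present first whichever option currently scores highest."""
        def score(option: str) -> int:
            # Weight personal history more heavily than the crowd's.
            return 2 * self.user_counts[user][option] + self.global_counts[option]
        return max(self.options, key=score)


# Usage: the 'default' shifts as soon as new behavioural data arrives.
feed = HypernudgeSketch(["news", "sports", "gossip"])
feed.record_choice("alice", "gossip")
feed.record_choice("bob", "news")
feed.record_choice("bob", "news")
print(feed.default_for("alice"))  # gossip: her own history dominates
print(feed.default_for("carol"))  # news: no history, so the crowd decides
```

The point of the sketch is simply that the ‘default’ is no longer a single, static policy setting; it is recomputed for each individual every time new behavioural data arrives.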

This is certainly an interesting way of understanding what algorithmic governance can do. But what consequences does it have and how does it tie back into the themes I discussed in my work on the threat of algocracy?


4. The Threat of Hypernudging
The answer to that question lies in the criticisms that have already been thrown at the philosophy of nudging. When they originally presented it, Sunstein and Thaler were conscious of the fact that they were advocating a form of paternalistic manipulation. Taking advantage of quirks in human psychology, changing default options, and otherwise tweaking the choice architecture, seems on the face of it to disrespect the autonomy and freedom of the individuals affected. The choice architects presume that they know best: they substitute their judgment for the judgment of the affected individuals. This runs contrary to the spirit of liberal democratic governance, which demands respect for the autonomy and freedom of all.

Sunstein and Thaler defended their regulatory philosophy in two ways. First, they made the reasonable point (in my view anyway) that there is no ‘neutral’ starting point for any choice architecture. Every choice architecture embodies value preferences and biases: making a retirement savings policy opt-in rather than opt-out is just as value-laden as the opposite. So why not make the starting point one that embodies and encourages values we share (e.g. long-term health and well-being)? Second, they argued that theirs was a libertarian form of paternalism. That is to say, they felt that altering the choice architecture to facilitate nudging did not eliminate choice. You could always refuse or resist the nudge, if you so desired.

Critics found this somewhat disingenuous. Look again at the retirement savings example. While this does technically preserve choice — you can opt out if you like — it is clearly designed in the hope that you won’t do any choosing. The choice architects don’t really want you to exercise your freedom or autonomy because that would more than likely thwart their policy aims. There is some residual respect for freedom and autonomy, but not much. Contrast that with an opt-in policy which, when coupled with a desire to encourage people to opt in, does try to get you to exercise your autonomy in making a decision that is in your long-term interests.

It’s important not to get too bogged down in this one example. Not all nudges take advantage of cognitive laziness in this way. Others, for instance, take advantage of preferences for certain kinds of information, or desires to fit in with a social group. Nevertheless, further criticisms of nudging have emerged over the years. Yeung mentions two in her discussion:

Illegitimate Motive Critique: The people designing the choice architecture may work from illegitimate motives, e.g. they may not have your best interests at heart.

The Deception Critique: Nudges are usually designed to work best when they are covert, i.e. when people are unaware of them. This is tied to the way in which they exploit cognitive weaknesses.*

Both of these critiques, once again, run contrary to the requirements of liberal democratic governance, which is grounded in respect for the individual. It follows that if algorithmic governance systems are nudges, they too can run contrary to the requirements of liberal democratic governance. Indeed, the problem may be even more acute in the case of algorithmic nudges since they are hypernudges: designed to operate on non-human timescales, to take advantage of patterns in data that cannot be easily observed by human beings, and to be tailored to your unique set of psychological foibles.

This is very similar to the critique I mounted in 'The Threat of Algocracy'. I also focused on the legitimacy of algocratic governance and worried about the way in which algocratic systems render us passive and treat us paternalistically. I argued that this could be due to the intrinsic complexity and incomprehensibility of those systems: people just wouldn’t be able to second-guess or challenge algorithmic recommendations (or decisions) because their minds couldn’t function at the same cognitive level.

As I see it, Yeung adds at least two important perspectives to that argument. First, she highlights how the ‘threat’ I discussed may arise not only from factors inherent to algocratic systems themselves but also from the regulatory philosophy underlying them. And second, by tying her argument to the debate around nudging and regulatory policy, she probably makes the ‘threat’ more meaningful to those involved in practical policy-making. My somewhat esoteric discussion of liberal political theory and philosophy would really only appeal to those versed in those topics. But the concept of nudging has become common currency in many policy settings and brings with it a rich set of associations. Using the term to explain algocratic modes of governance might help those people to better appreciate the advantages and risks these modes of governance entail.

Which ties into the last argument Yeung makes in her paper. Having identified algorithms as potential hypernudges, and having argued that they may be illegitimate governance tools, Yeung then challenges liberal political theory itself, arguing that it is incapable of fully appreciating the threat that algorithms pose to our freedom and autonomy. She suggests that alternative, Foucauldian, understandings of freedom and governance (or governmentality) might be needed. I’m not sure I agree with this — I think mainstream liberal theory is pretty capacious — but I’m going to be really annoying and postpone discussion of that topic to a future post about a different paper.


* Very briefly, nudges tend to work best covertly because they take advantage of quirks in what Daniel Kahneman (and others) call System 1 — the subconscious, fast-acting part of the mind — not quirks in System 2 — the slower, conscious and deliberative part of the mind.
