Monday, June 15, 2015

How might algorithms rule our lives? Mapping the logical space of algocracy


(Image: IBM Blue Gene)

This post is a bit of an experiment. As you may know, I have written a series of articles looking at how big data and algorithm-based decision-making could affect society. In doing so, I have highlighted some concerns we may have about a future in which many legal-bureaucratic decisions are either taken over by or made heavily dependent on data-mining algorithms and other artificial intelligence systems. I have used the term ‘algocracy’ (rule by algorithm) to describe this state of affairs.

Anyway, one thing that has bothered me about these past discussions is their relative lack of nuance when it comes to the different forms that algocratic systems could take. If we paint with too broad a brush, we may end up ignoring both the advantages and disadvantages of such systems. Cognisant of this danger, I have been trying to come up with a better way to taxonomise and categorise the different possible forms of algocracy. In the past couple of weeks, I think I may have come up with a way of doing it.

The purpose of this post is to give a very general overview of this new taxonomy. I say it is ‘experimental’ because my hope is that by sharing this idea I will get some useful critical feedback from interested readers. My goal is to develop this taxonomic model into a longer article that I can publish somewhere else. But I don’t know if the model is any good. So if you are interested in the idea, I would appreciate any thoughts you may have in the comments section.

So what is this taxonomy? I’ll try to explain it in three parts. First, I’ll explain the inspiration for the taxonomy — namely: Christian List’s analysis of the logical space of democratic decision-procedures. Second, I’ll explain how my taxonomy — the logical space of algocratic decision-procedures — works. And third, I will very briefly explain the advantages and disadvantages of this method of categorisation.

I’m going to try my best to keep things brief. This is something at which I usually fail.


1. List’s Logical Space of Democracy
I’ve written two posts about List’s logical space of democracy. If you want a full explanation of the idea, I direct your attention to those posts. The basic idea behind it is that politics is all about constructing appropriate ‘collective decision procedures’. These are to be defined as:

Collective Decision Procedures: Any procedures which take as their input a group of individuals’ intentional attitudes toward a set of propositions and which then adopt some aggregation function to issue a collective output (i.e. the group’s attitude toward the relevant propositions).

Suppose that you and I form a group. We have to decide what to do this weekend. We could go rollerblading or hillwalking. We each have our preferences. In order to come up with a collective decision, we need to develop a procedure that will take our individual preferences and aggregate them into a collective output. This will determine what we do this weekend.

But how many different ways are there of doing this? One of List’s key insights is that there is a large space of logically possible decision procedures. We could adopt a simple majority rule system. Or we could adopt a dictatorial rule, preferring the opinion of one person over all others. Or we could demand unanimity. Or we could do some sort of sequential ordering: he who votes first, wins. I won’t list all the possibilities here. List gives the details in his paper. As he notes there, there are 2^4 (i.e. 16) logically possible decision procedures.


This might seem odd since there are really only two possible collective decisions, but List’s point is that there are still 16 possible aggregation procedures. By itself, this mapping out of the space of logically possible decision procedures isn’t very helpful. As soon as you have larger groups with more complicated decisions that need to be made, you end up with unimaginably vast spaces of possible decision procedures. For instance, List calculates that if you had ten voters faced with two options, you would have 2^1024 possible collective decision procedures.
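
To make the combinatorics concrete, here is a quick Python sketch (my own illustration, not List’s): with n voters and two options, there are 2^n possible profiles of individual attitudes, and an aggregation procedure is any function from those profiles to one of the two collective outputs.

```python
# Counting the logical space of collective decision procedures for a
# binary choice. With n voters, each holding one of two attitudes, there
# are 2**n possible attitude profiles. An aggregation procedure maps each
# profile to one of two collective outputs, so there are 2**(2**n)
# logically possible procedures.

def num_procedures(n_voters: int) -> int:
    num_profiles = 2 ** n_voters   # every combination of individual attitudes
    return 2 ** num_profiles       # every function from profiles to outputs

print(num_procedures(2))    # 16, as in the two-person weekend example
print(num_procedures(10))   # 2**1024, an astronomically large number
```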

So you have to do something to pare down the space of logical possibilities. List does this by adopting an axiomatic method. He specifies some conditions (axioms) that any democratic decision procedure ought to satisfy in advance, and then limits his search of the logical space of possible decision procedures to the procedures that satisfy these conditions. In the case of democratic decision procedures, he highlights three conditions that ought to be satisfied: (i) robustness to pluralism (i.e. the procedure should accept any possible combination of individual attitudes); (ii) basic majoritarianism (i.e. the collective decision should reflect the majority opinion); and (iii) collective rationality (i.e. the collective output should meet the basic criteria for rational decision making). And since it turns out that it is impossible to satisfy all three of these conditions at any one time (due to classic ‘voting paradoxes’), the space of functional democratic decision procedures is smaller than we might first suppose. We are left then with only those decision procedures that satisfy at least two of the mentioned conditions. Once you pare the space of possibilities down to this more manageable size you can start to think more seriously about its topographical highlights.
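
To see the sort of paradox at work, consider the ‘discursive dilemma’ from the judgment aggregation literature in which List works: proposition-wise majority voting can leave a group with a logically inconsistent set of judgments. Here is a minimal sketch (the voters’ judgments are invented for illustration):

```python
# A minimal illustration of a voting paradox: the 'discursive dilemma'.
# Three voters judge two premises (p, q) and their conjunction (p_and_q).
# Each individual holds a perfectly consistent set of judgments, yet
# proposition-wise majority voting yields an inconsistent collective
# output, violating collective rationality.

voters = [
    {"p": True,  "q": True,  "p_and_q": True},   # accepts both premises
    {"p": True,  "q": False, "p_and_q": False},  # rejects q
    {"p": False, "q": True,  "p_and_q": False},  # rejects p
]

def majority_accepts(prop: str) -> bool:
    return sum(v[prop] for v in voters) > len(voters) / 2

collective = {p: majority_accepts(p) for p in ("p", "q", "p_and_q")}
print(collective)
# {'p': True, 'q': True, 'p_and_q': False}: the group accepts p and q
# but rejects their conjunction, which no rational agent would do.
```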

Anyway, we don’t need to understand the intricacies of List’s model. We just need to understand the basic gist of it. He is highlighting how there are many possible ways of implementing a collective decision procedure, and how only a few of those procedures will meet the criteria for a morally or politically acceptable collective decision procedure. I think you can perform a similar analysis when it comes to understanding the space of possible algocratic decision procedures.


2. The Logical Space of Algocratic Decision-Procedures
To appreciate this method of mapping out the logical space, we first need to appreciate what an algocratic decision procedure actually is. In its most general terms, an algocratic decision procedure is any public decision procedure in which a computerised algorithm plays a role in the decision-making process (this can be via data-mining, predictive analytics etc). Take, for example, the use of facial recognition algorithms in detecting possible instances of criminal fraud. In his book The Formula, Luke Dormehl mentions one such procedure being used by the Massachusetts Registry of Motor Vehicles. This algorithm looks through the photographs stored in the RMV’s database in order to weed out faces that seem to be too similar to one another. When it finds a matching pair, it automatically issues a letter revoking the licenses of the matching drivers. This is a clearcut example of an algocratic decision procedure.

Moving beyond this general definition, three parameters appear to define the space of possible algocratic decision procedures. The first is the particular domain or type of decision-making. Legal and bureaucratic agencies make decisions across many different domains. Planning agencies make decisions about what should be built and where; revenue agencies sort, file and search through tax returns and other financial records; financial regulators make decisions concerning the prudential governance of financial institutions; energy regulators set prices in the energy industry and enforce standards amongst energy suppliers; the list goes on and on. In the formal model I outline below, the domain of decision-making is ignored. I focus instead on two other parameters defining the space of algocratic procedures. But this is not because the domain is unimportant. When figuring out the strengths or weaknesses of any particular algocratic decision-making procedure, the domain of decision-making should always be specified in advance.

The second parameter concerns the main components of the decision-making ‘loop’ that is utilised by these agencies. Humans in legal-bureaucratic agencies use their intelligence when making decisions. Standard models of intelligence divide this capacity into three or four distinct tasks. I’ll adopt a four-component model here (this follows my earlier post on the topic of automation):

(a) Sensing: collecting data from the external world.
(b) Processing: organising that data into useful chunks or patterns and combining it with action plans or goals.
(c) Acting: implementing action plans.
(d) Learning: the use of some mechanism that allows the entire intelligent system to learn from past behaviour (this property is what entitles us to refer to the process as a ‘loop’).

Although individual humans within bureaucratic agencies have the capacity to perform these four tasks themselves, the work of an entire agency can also be conceptualised in terms of these four tasks. For example, a revenue collection agency will take in personal information from the citizens in a particular state or country (sensing). This information will typically take the form of tax returns, but may also include other personal financial data. The agency will then sort the collected information into useful patterns, usually by singling out the returns that call for greater scrutiny or auditing (processing). Once they have done this they will actually carry out audits on particular individuals, and reach some conclusion about whether the individual owes more tax or deserves some penalty (acting). Once the entire process is complete, they will try to learn from their mistakes and triumphs and improve the decision-making process for the coming years (learning).

The important point in terms of mapping out the logical space of algocracy is that algorithmic systems could be introduced to perform any or all of these four tasks. Thus, there are subtle and important qualitative differences between different types of algocratic system, depending on how much of the decision-making process is taken over by the computer.

In fact, it is more complicated than that, and this is what brings us to the third and final parameter. This one concerns the precise relationship between humans and algorithms for each task in the decision-making loop. As I see it, there are four possible relationships: (1) humans could perform the task entirely by themselves; (2) humans could share the task with an algorithm (e.g. humans and computers could perform different parts of the analysis of tax returns); (3) humans could supervise an algorithmic system (e.g. a computer could analyse all the tax returns and identify anomalies, and then a human could approve or reject its analysis); and (4) the task could be fully automated, i.e. completely under the control of the algorithm.

This is where things get interesting. Using the last two parameters, we can construct a grid which we can use to classify algocratic decision-procedures. The grid looks something like this:

(Grid: the four decision-making tasks run along one axis; the four possible human-algorithm relationships run along the other.)
This grid tells us to focus on the four different tasks in the typical decision-making loop and ask of each task: how is this task being distributed between the humans and algorithms? Once we have answered that question for each of the four tasks, we can start coding the algocratic procedures. I suggest that this be done using square brackets and numbers. Within the square brackets there would be four separate number locations. Each location would represent one of the four decision-making tasks. From left-to-right this would read: [sensing; processing; acting; learning]. You then replace the names of those tasks with numbers ranging from 1 to 4. These numbers would represent the way in which the task is distributed between the humans and algorithms. The numbers would correspond to the numbers given previously when explaining the four possible relationships between humans and algorithms. So, for example:


[1, 1, 1, 1] = a non-algocratic decision procedure, i.e. one in which all the decision-making tasks are performed by humans.

[2, 2, 2, 2] = an algocratic decision procedure in which each task is shared between humans and algorithms.

[3, 3, 3, 3] = an algocratic decision procedure in which each task is performed entirely by algorithms, but these algorithms are supervised by humans with some residual possibility of intervention.

[4, 4, 4, 4] = a pure algocratic decision procedure in which each task is performed by algorithms, with no human oversight or intervention.


This coding system allows us to easily work out the extent of the logical space of algocratic decision procedures. Since there are four tasks and four possible ways in which each of those tasks could be distributed between humans and algorithms, there are 4^4 = 256 logically possible procedures (bear in mind that this is relative to a particular decision-making domain; if we factored in all the different decision-making domains we would be dealing with a truly vast space of possibilities).
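
For the curious, here is a short Python sketch of how the coding scheme might be represented and the space enumerated. The names and data structures are my own illustrative choices, not part of the proposal itself:

```python
from itertools import product

# The four tasks in the decision-making loop, in the order used by the
# bracket notation: [sensing; processing; acting; learning].
TASKS = ("sensing", "processing", "acting", "learning")

# The four ways a task can be distributed between humans and algorithms.
RELATIONSHIPS = {
    1: "performed entirely by humans",
    2: "shared between humans and algorithms",
    3: "performed by algorithms under human supervision",
    4: "fully automated, no human oversight",
}

def describe(code):
    """Spell out a four-number algocracy code such as (1, 3, 1, 3)."""
    pairs = zip(TASKS, code)
    return "; ".join(f"{task}: {RELATIONSHIPS[level]}" for task, level in pairs)

# The full logical space: each of the four tasks can stand in any of the
# four relationships, giving 4**4 = 256 possible procedures (within a
# single decision-making domain).
space = list(product(RELATIONSHIPS, repeat=len(TASKS)))
print(len(space))              # 256
print(describe((4, 4, 4, 4)))  # the 'pure' algocratic procedure
```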


3. Conclusion: The Utility of this Mapping Exercise?
So that’s it: that’s the basic gist of my proposal for mapping out the logical space of algocracy. I’m aware that this method has its limitations. In particular, I’m aware that coding the different possible algocracies in terms of general tasks like ‘sensing’ and ‘processing’, or particular relationships in terms of ‘sharing’ or ‘supervising’, leaves something to be desired. There are many different ways in which data could be collected and processed, and there are many different ways in which tasks could be shared and supervised. Thus, this coding method is relatively crude. Nevertheless, I think it is useful for at least two reasons (possibly more).

First, it allows us to see, at a glance, how complex the phenomenon of algocracy really is. In my original writings on this topic, I used relatively unsophisticated conceptual distinctions, sometimes referring to systems that pushed humans ‘off’ the loop or kept them ‘on’ the loop. This system is slightly more sophisticated and allows us to appreciate some of the nuanced forms of algocracy. Furthermore, with this coding method we can systematically think our way through the different ways in which an algocratic system can be designed and implemented in a given decision-making domain.

Second, using this coding method allows us to single out broadly problematic types of algocracy and subject them to closer scrutiny. As mentioned in my original work, there are moral and political problems associated with algocratic systems that completely undermine or limit the role of humans within those systems. In particular, there can be problems when the algorithms make the systems less amenable to human understanding and control. At the same time, there are types of algocracy that are utterly benign and possibly beneficial. For example, I would be much more concerned about a [1, 3, 1, 3] system than I would be about a [3, 1, 3, 1] system. Why? Because in the former system the algorithm takes over the main cognitive components of the decision-making process (the data analysis and learning), whereas in the latter the algorithm takes over some of the ‘drudge work’ associated with data collection and action implementation.
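
For what it’s worth, that intuition could be turned into a crude screening rule over the codes. The sketch below is purely illustrative; both the threshold and the choice of ‘cognitive’ tasks are my own assumptions, not a considered normative standard:

```python
# A hypothetical screening rule reflecting the intuition above: flag any
# procedure whose 'cognitive' tasks, processing (second position) and
# learning (fourth position), sit at level 3 or above, i.e. are largely
# or wholly taken over by the algorithm.

def needs_scrutiny(code, threshold=3):
    sensing, processing, acting, learning = code
    return processing >= threshold or learning >= threshold

print(needs_scrutiny((1, 3, 1, 3)))  # True: algorithm does analysis and learning
print(needs_scrutiny((3, 1, 3, 1)))  # False: algorithm does only the drudge work
```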

What do you think? Is this a useful system for mapping out the space of possible algocratic decision procedures? Or is it just needlessly confusing?
