Friday, October 17, 2014

Algocracy and other Problems with Big Data (Series Index)




What kind of society are we creating? With the advent of the internet-of-things, advanced data-mining and predictive analytics, and improvements in artificial intelligence and automation, we are on the verge of creating a global "neural network": a constantly updated, massively interconnected control system for the world. What will it be like when every "thing" in your home, place of work, school, city, state and country is monitored by or integrated into a smart device? When all the data from those devices is analysed and organised by search algorithms? And when this in turn feeds into some automated control system?

What kind of world do you see? Should we be optimistic or pessimistic? I've addressed this question in several posts over the past year. I thought it might be useful to collect the links to all those posts in one place. So that's what I'm doing here.

As you'll see, most of those posts have been concerned with the risks associated with such technologies: for instance, the threat they may pose to transparency, democratic legitimacy and traditional forms of employment. But just to be clear, I am not a technophobe -- quite the contrary, in fact. I'm interested in the arguments people make about technology. I like to analyse them, break them down into their key components, and see how they stand up to close, critical scrutiny. Sometimes I end up agreeing that there are serious risks; sometimes I don't.

Anyway, I hope you enjoy reading these entries. This is a topic that continues to fascinate me and I will write about it more in the future.

(Note: I had no idea what to call this series of posts, so I just went with whatever came into my head. The title might be somewhat misleading insofar as "Big Data" isn't explicitly mentioned in all of these posts, though it does feature in many of them.)


1. Rule by Algorithm? Big Data and the Threat of Algocracy
This was the post that kicked everything off. Drawing upon some work done by Evgeny Morozov, I argued that increasing reliance on algorithm-based decision-making processes may pose a threat to democratic legitimacy. I'm currently working on a longer paper that develops this argument and assesses a variety of possible solutions.

2. Big Data, Predictive Algorithms and the Virtues of Transparency (Part One, Part Two)
These two posts looked at the arguments from Tal Zarsky's paper "Transparent Predictions". Zarsky assesses arguments in favour of increased transparency in relation to data-mining and predictive analytics.

3. What's the case for sousveillance? (Part One, Part Two)
This was my attempt to carefully assess Steve Mann's case for sousveillance technologies (i.e. technologies that allow us to monitor social authorities). I suggested that some of Mann's arguments are naive, and that it is unlikely that sousveillance technologies will resolve problems of technocracy and social inequality.

4. Big Data and the Vices of Transparency
This followed up on my earlier series of posts about Tal Zarsky's "Transparent Predictions". In this one I looked at what Zarsky had to say about the vices of increased transparency.

5. Equality, Fairness and the Threat of Algocracy
I was going through a bit of a Tal Zarsky phase back in April, so this was another post assessing some of his arguments. Actually, this one looked at his most interesting argument (in my opinion anyway): the claim that automated decision-making processes should be welcomed because they could reduce implicit bias.

6. Will Sex Workers be Replaced by Robots? (A Precis)
This was an overview of the arguments contained in my academic article "Sex Work, Technological Unemployment and the Basic Income Guarantee". That article looked at whether advances in robotics and artificial intelligence threaten to displace human sex workers. Although I conceded that this is possible, I argued that sex work may be one of the few areas that is resilient to technological displacement.

7. Is Modern Technology Creating a Borg-Like Society?
This post looked at a recent paper by Lipschutz and Hester entitled "We are the Borg! Human Assimilation into the Cellular Society". The paper argued that recent technological developments are pushing us in the direction of a Borg-like society. I tried to clarify those arguments and then asked the important follow-up question: is this something we should worry about? I identified three concerns one ought to have about the drive toward Borg-likeness.

8. Are we heading for technological unemployment? An Argument
This was my attempt to present the clearest and most powerful argument for technological unemployment. The argument drew upon the work of Andrew McAfee and Erik Brynjolfsson in The Second Machine Age. Although I admit that the argument has flaws -- as do all arguments about future trends -- I think it is sufficient to warrant serious critical reflection.

9. Sousveillance and Surveillance: What kind of future do we want?
This was a short post on surveillance technologies. It looked specifically at Steve Mann's attempt to map out four possible future societies: the univeillant society (one that rejects surveillance and embraces sousveillance); the equiveillant society (one that embraces surveillance and sousveillance); the counter-veillance society (one that rejects all types of veillance); and the McVeillance society (one that embraces surveillance but rejects sousveillance).

10. Procedural Due Process and Predictive Analytics
Big data is increasingly being used to "score" human behaviour in order to predict future risks. Legal scholars Frank Pasquale and Danielle Keats Citron critique this trend in their article "The Scored Society". I analyse their arguments and offer some mild criticisms of the policy proposals.

11. How might algorithms rule our lives? Mapping the logical space of algocracy
This post tried to formulate a method for classifying the different types of algocratic decision procedure. It did so by identifying four distinct decision-making tasks and four distinct ways in which those tasks could be distributed between humans and algorithms.

12. The Logic of Surveillance Capitalism
This post looks at Shoshana Zuboff's work on surveillance capitalism. Zuboff follows a conceptual framework set out by Google's chief economist Hal Varian and argues that we are entering a new phase of capitalism, which she calls 'surveillance capitalism'. This phase hinges on the collection and control of data and is characterised by four distinctive features. I discuss (and critique) her analysis of these four features in this post.

13. The Philosophical Importance of Algorithms
This post looks at some of Rob Kitchin's work on the importance of algorithms in modern society. First, it assesses the process of algorithm-construction and highlights two key translation problems that are inherent to that process. Second, it considers the importance of algorithms for the three main branches of philosophical inquiry.

14. How to Study Algorithms: Challenges and Methods
This is another post looking at Rob Kitchin's work. This one is quite practical in nature, focusing on the different research strategies one could adopt when studying the role of algorithms in contemporary society.

15. Understanding the Threat of Algocracy
This is a video of a talk I delivered to the Programmable City Project at Maynooth University on the Threat of Algocracy. I tried to ask and answer four questions: (i) What is algocracy? (ii) What is the threat of algocracy? (iii) Can we (or should we) resist the threat? and (iv) Can we accommodate the threat?

16. Is there Trouble with Algorithmic Decision-Making? Fairness and Efficiency-Based Objections
This is a discussion of a paper by Tal Zarsky on the trouble with algorithmic decision-making. The post tries to offer a high-level summary of the main objections to algorithmic decision-making and the potential responses to those objections.



