Tuesday, July 24, 2012

Blinding, Information Hiding and Epistemic Efficiency



This post is about the importance of information hiding in epistemic systems. It argues (though “argues” may be too strong a word) that hiding information from certain participants in an epistemic system can increase the epistemic efficiency of the overall system. While this conclusion is not particularly earth-shattering, the method adopted to reach it is quite interesting. It uses some of the formal apparatus from Roger Koppl’s work on Epistemic Systems, which combines game theory and information theory in an effort to better understand and intervene in certain social systems.

In what follows, I lay out some of the key elements from Koppl’s theory and then describe a simple model epistemic system (taken from Koppl’s article) that illustrates the importance of information hiding.


1. What is an Epistemic System?
An epistemic system is any social system that generates judgments of truth or falsity. The classic example might be the criminal trial, which tries to work out whether or not a person committed a crime. Evidence is fed into this system via witnesses and lawyers; it is then interpreted, weighed and evaluated by a judge and jury, who in turn issue a judgment of truth or falsity: either "Yes, the accused committed the crime" or "No, the accused did not commit the crime". Although this may be the classic example, the definition adopted by Koppl is broad enough to cover many others. For example, science counts as an epistemic system under Koppl's definition.

The goal of epistemic systems theory is to adopt some of the formal machinery from game theory and information theory in order to better understand and manipulate these epistemic systems. In effect, the goal here is to develop simple models of epistemic systems, and use these to design better ones. The first step in this process is to identify the three key elements of any epistemic system. These are:

Senders: A set of individual agents who choose the messages that are sent through the system. 
Message Set: The set of possible messages that could be sent by the senders. 
Receivers: A set of individual agents who receive the messages and determine whether they represent the truth or not.

In its more mathematical guise, an epistemic system can be defined as an ordered triple of senders, receivers and messages {S, R, M}, with a formal symbology for representing the members of each set. I will eschew that formal symbology here in the interests of both simplicity and brevity. Full details can be found in Koppl's article. Instead, I will use some elementary mathematics and pictures, such as the following, which represents a simple epistemic system with one message, one sender and one receiver.




The system issues a judgment, and this judgment will either be true or false. Whether it is in fact true or false is not determined by the beliefs of the senders or receivers.
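For readers who prefer code to pictures, the ordered triple can also be sketched roughly as follows. This is a minimal Python illustration of my own; the class and field names are not part of Koppl's formalism.

```python
from typing import List, NamedTuple

class EpistemicSystem(NamedTuple):
    """An epistemic system as an ordered triple {S, R, M} (illustrative names only)."""
    senders: List[str]    # agents who choose which messages to send
    receivers: List[str]  # agents who judge whether the messages represent the truth
    messages: List[str]   # the set of possible messages

# The simple one-sender, one-receiver, one-message system pictured above.
simple_system = EpistemicSystem(senders=["S"], receivers=["R"], messages=["m"])
print(simple_system)
```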


Within these systems, senders and receivers are viewed as rational agents, sometimes locked in strategic battles with one another. As such, they have utility functions which represent their preferences for particular messages or conclusions, and they act so as to maximise their utility. One of the key assumptions Koppl makes is that these utility functions will not usually include a preference for the truth. For instance, he assumes that scientists will have a preference for their pet theory, rather than for the true theory; or that lawyers will have a preference for evidence that supports their client's case, rather than for the true evidence. In doing so, he adopts a Humean perspective on epistemic systems, believing we should presume the worst in order to design the best. He uses a nice quote from Hume to set this out:

… every man ought to be supposed a knave, and to have no other end, in all his actions, than private interest. By this interest we must govern him, and, by means of it, make him, notwithstanding his insatiable avarice and ambition, co-operate to public good.

This assumption of knavishness is fairly common in rational choice theory and I have no wish to question it here. What does need to be questioned, however, is what represents the “public good” when it comes to the design and regulation of epistemic systems. One could argue about this, but the perspective adopted by Koppl (and many others) is that we want epistemic systems that reach true judgments. To be more precise, we want epistemically efficient systems, where this is defined as:

Epistemic Efficiency: A measure of the likelihood of the system reaching a true judgment. Either: 1 minus the error rate of the system; or the ratio of true judgments to total judgments.
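As a quick illustration (my own, not something from Koppl's article), the definition translates into a one-line function:

```python
def epistemic_efficiency(true_judgments: int, total_judgments: int) -> float:
    """Ratio of true judgments to total judgments (equivalently, 1 minus the error rate)."""
    return true_judgments / total_judgments

# A system that gets 25 of its 100 judgments right has an efficiency of 0.25.
print(epistemic_efficiency(25, 100))  # 0.25
```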

So the goal is to increase the epistemic efficiency of the system. The argument we will now look at claims that information hiding is one way of achieving this.


2. The Importance of Information Hiding
The argument, like all arguments, depends on certain assumptions. One of the advantages of the formal machinery adopted by Koppl is that these assumptions are rendered perspicuous. If you think these assumptions are wrong, the strength of the argument is obviously diminished, but at least you’ll be able to clearly see where it’s going wrong as you read along.

So what are these assumptions? First, we are working with an extremely simple system. The system consists of one sender, one receiver, and two messages. It does not matter what these messages are, so we shall simply denote them as m1 and m2. We shall refer to the sender as S and the receiver as R. S must pick either m1 or m2 to send to R. R does not question whether S is right or wrong. In other words, R always assumes that the message sent by S represents the truth. This is, roughly, illustrated in the diagram below. We assume that m1 has a 0.25 probability of being true, and m2 has a 0.75 probability of being true.



The second crucial assumption relates to the payoff functions of R and S. They are as follows:

Receiver’s Payoff Function: U(m) = 1 (if m = m1) or 0 (if m = m2)
Sender’s Payoff Function: V(m) = Pr(m is true) x E[U(m)]

In other words, we assume that R prefers to receive m1 over m2. And we assume that S’s payoff is partly determined by what he thinks is the truth, and partly determined by what he expects R’s payoff to be (E[U(m)] denotes S’s expectation of R’s payoff). This looks like a fairly realistic assumption. Imagine, for instance, the expert witness recruited by a trial lawyer. He will no doubt wish to protect his professional reputation by picking the “true” message from the message set, but he will also wish to please the lawyer who is paying for his services. So if he knows that the lawyer prefers one message over the other, he too may have a bias toward that message. That such biases may exist has been confirmed experimentally, and they may be entirely subconscious.
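To keep the calculations below concrete, here is a small Python sketch of the two payoff functions. The variable names and the dictionary of probabilities are my own illustration, not part of Koppl's formalism.

```python
# Probabilities that each message is true, as assumed in the model above.
P_TRUE = {"m1": 0.25, "m2": 0.75}

def receiver_payoff(m: str) -> int:
    """R's payoff U(m): 1 if the message is m1, 0 if it is m2."""
    return 1 if m == "m1" else 0

def sender_payoff(m: str, expected_receiver_payoff: float) -> float:
    """S's payoff V(m) = Pr(m is true) x E[U(m)]."""
    return P_TRUE[m] * expected_receiver_payoff

# Example: if S expects R to value m1 at 1, then V(m1) = 0.25 x 1 = 0.25.
print(sender_payoff("m1", receiver_payoff("m1")))  # 0.25
```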

This is where information hiding comes into play. Look first at the efficiency of the system when there is no information hiding, i.e. when S knows exactly what R’s payoff function is. In other words, when E[U(m)] = U(m).

If S sends m1 then: 
  • (1) U(m) = 1; P(m1) = 0.25 
  • (2) V(m) = P(m1) x E[U(m)] 
  • (3) V(m) = (0.25) x (1) 
  • (4) V(m) = .25

If S sends m2 then: 
  • (5) U(m) = 0; P(m2) = 0.75 
  • (6) V(m) = P(m2) x E[U(m)] 
  • (7) V(m) = (0.75) x (0) 
  • (8) V(m) = 0

Since we assume S acts so as to maximise his payoff, it follows that S will always choose m1 in this system. And since m1 only has a one in four chance of being correct, it follows that the epistemic efficiency of the system as a whole is 0.25. Which is pretty low.
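A quick way to check this (a sketch of my own, not something from Koppl's article) is to compute S's payoffs directly:

```python
# No information hiding: S knows R's payoff function exactly, so E[U(m)] = U(m).
P = {"m1": 0.25, "m2": 0.75}   # probability that each message is true
U = {"m1": 1, "m2": 0}         # R's payoff, known to S

V = {m: P[m] * U[m] for m in P}   # S's payoff: V(m) = P(m) x E[U(m)]
chosen = max(V, key=V.get)        # S maximises V, so S picks m1
print(V, chosen, P[chosen])       # {'m1': 0.25, 'm2': 0.0} m1 0.25
```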

Can efficiency be improved by hiding information about R’s preferences from S? Well, let’s do the math and see. Assume now that S has no idea what R’s preferences are. Consequently, S adopts the principle of indifference and assumes that R is equally likely to prefer m1 and m2. In other words, in this scenario E[U(m)] = (0.5)(1) + (0.5)(0) = 0.5 for both messages.

If S sends m1 then: 
  • (1*) E[U(m)] = 0.5 ; P(m1) = 0.25 
  • (2*) V(m) = P(m1) x E[U(m)] 
  • (3*) V(m) = (0.25) x (0.5) 
  • (4*) V(m) = 0.125


If S sends m2 then: 
  • (5*) E[U(m)] = 0.5; P(m2) = 0.75 
  • (6*) V(m) = P(m2) x E[U(m)] 
  • (7*) V(m) = (0.75) x (0.5) 
  • (8*) V(m) = 0.375

S’s preference now shifts from sending m1 to sending m2. And since m2 has a three in four chance of being correct, the epistemic efficiency of the system is increased from 0.25 to 0.75. This is a significant improvement and, if the assumptions are correct, it illustrates one important way in which the overall efficiency of an epistemic system can be improved.
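The same back-of-the-envelope check for the blinded case (again, my own sketch rather than anything from Koppl's article):

```python
# With information hiding: S does not know R's preferences, so by the
# principle of indifference S assumes E[U(m)] = 0.5 for both messages.
P = {"m1": 0.25, "m2": 0.75}   # probability that each message is true
E_U = {"m1": 0.5, "m2": 0.5}   # S's expectation of R's payoff under blinding

V = {m: P[m] * E_U[m] for m in P}   # V(m1) = 0.125, V(m2) = 0.375
chosen = max(V, key=V.get)          # S now picks m2
print(V, chosen, P[chosen])         # efficiency rises from 0.25 to 0.75
```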

As I said at the outset, this is not a particularly earth-shattering conclusion. Indeed, it is what motivates blinding protocols in scientific experimentation. What’s nice about the result is the formal apparatus underlying it. This formal apparatus is flexible, and can be used to model, evaluate and design other kinds of epistemic system.
