Building Trust and Confidence:

Why trustworthy voting systems require institutionalized distrust

Part of the Voting and Elections web pages
by Douglas W. Jones
THE UNIVERSITY OF IOWA Department of Computer Science

Copyright © 2003 This work may be transmitted or stored in electronic form on any computer attached to the Internet or World Wide Web so long as this notice is included in the copy. Individuals may make single copies for their own use. All other rights are reserved.

Position paper for the
First Symposium on Building Trust and Confidence in Voting Systems
National Institute of Standards and Technology
Gaithersburg, Maryland
December 10-11, 2003


Thesis

A trustworthy system of elections must rest on one central principle: Trust no one.

Do not trust any voter, do not trust any polling place worker, do not trust any election administrator. Do not trust any vendor of election equipment, supplies and services. Do not trust anyone who audits the process or who certifies the equipment. Do not trust anyone who writes laws governing the conduct of elections, and do not trust anyone who writes the standards to which election equipment is tested. Do not trust any academic, do not trust any professional, do not trust any politician, and do not trust any pundit. Certainly, you should not trust me!

This is not a matter of paranoia; it is a matter of self-interest. Every citizen of a democracy has a moral obligation to have an opinion on every issue that comes before the electorate. Every resident, citizen or not, has an interest in the outcome of the democratic process, whether they are aware of it or not. And in the case of the United States in the late 20th and early 21st centuries, every person in the world has an interest in the outcome of our democratic process because we are truly a superpower. All of the people in whom we might be tempted to place our trust must therefore be presumed to have a conflict of interest between what is to their own personal benefit and what is to the benefit of the electorate as a whole.

Background

The more I have investigated voting systems, the more I have pondered the question of trust. How do we create a culture of honesty, where almost everyone can be trusted? What leads to the breakdown of such a culture? How do we replace entrenched and deeply corrupt governments with honest governments? How, for example, did the thoroughly corrupt political machines that used to dominate many American cities and states come to be replaced with the sometimes honest and sometimes trustworthy governments of today?

The answer to these questions is fairly clear. If we extend unlimited trust to any party, to any institution or to any individual, no matter how trustworthy they may appear, they are likely, eventually, to abuse our trust! This is a well-known problem.

The importance of limiting trust has been known for millennia. For example, it is an ancient legal principle that a man should not be convicted on the evidence of only one witness; this rule can be traced back to biblical law: "The evidence of a single witness shall not suffice for a sentence of death" [1, 2], and more generally, "a case can be valid only on the testimony of two witnesses or more" [3]. The rationale for this ancient law is obvious. If we trust a single witness, we have no way to determine if that witness is honest. The ancient formulations of these rules do not make exceptions; the testimony of a policeman or a judge requires corroboration just as much as the testimony of a peasant.

The right of the candidates and people to observe the key steps in the election has long been recognized. Such observers are the witnesses on whom we rely for the testimony that the totals published for the election do indeed reflect the votes of the electorate. Consider this century-old call for reform: "The election laws should be carefully revised and amended. As the law stands, the judges of election may utterly disregard the plain provisions of the statute respecting the rights of candidates and of electors to be present at the counting and canvassing of votes without incurring any liability, either civil or criminal" [4].

John Stuart Mill opposed voting by secret ballot for similar reasons. In his time, only a fraction of the population had the right to vote, and he felt that, as they were voting on behalf of all, their votes must be known to all so that all could pressure them to vote in the public interest instead of in their narrow private interests [5]. Although he did not address the issue, it is clear that, when the entire voting process is conducted in public, it is very difficult to falsify the count. Mill does discuss the possible need for secret ballots when there are deep class divisions within the electorate, and he discusses the possibility of universal suffrage, but these were not pressing issues in his time, so he did not need to discuss the problem of conducting an honest count of secret ballots.

In looking at the issue of trust in voting systems, some have suggested that we apply the high standards of the gaming industry to voting systems. Because gaming machines have long been subject to manipulation, Nevada and many other states apply extremely tough standards to their regulation, requiring far more thorough oversight than we require for voting systems. Despite this, there are cases of insider fraud in the slot-machine arena, perhaps most notably the case of Ron Harris, who was convicted in 1998 for rigging a test fixture used by the Nevada state Gaming Control Board to test slot machines. The rigged test fixture injected software into the slot machine itself, and that software allowed Harris and his accomplices to steal $47,000 from Nevada casinos between 1992 and 1995 [6]. One surprising feature of this case is the small apparent payoff, given the risks involved; much of the speculation about the cost of buying a programmer's services to fix an election has assumed that it would take far more money.

And, of course, in the world of accounting, we have numerous recent examples of fraud. We know that, if it were not for the threat of audits, many of those who run the finances of our world could not be relied on to be honest. Furthermore, we know that, if the auditors themselves are too closely allied to the organizations they audit, they will no longer perform reasonable audits. The examples of Enron and WorldCom illustrate this quite clearly. Neither the auditors, nor the stock market commissioners, nor the shareholders of these companies were able to exercise meaningful oversight, and the result was, at least for many of those involved, a financial catastrophe [7].

How can we trust a voting system using secret ballots?

If we cannot extend unsupervised trust to any participant in our system of elections, how can we conduct an election and present results that people can trust? This is an old problem, as old as democracy, and the answer is clear when you examine the use of any mature voting technology.

The best example we have of this is the Australian paper ballot. Typical election laws and administrative procedures for the conduct of such an election are quite complex, and the system is sufficiently new, which is to say, not yet 150 years old, that there is still no consensus about how best to conduct such an election. There are, however, some good examples; elsewhere, I have made an effort to summarize the best practices I am aware of [8].

In sum, wherever we can, we require that members of opposing parties cooperate to assure the integrity of the vote. The panel of election judges that runs the polling place is required to contain equal numbers representing the two parties. The ballot box must be either locked away securely or in the joint custody of members of opposing parties at all times, the public have a right to observe all stages in ballot processing, and technical security measures are applied only where public oversight is impossible, specifically, to assure that the ballot the voter returns for deposit in the ballot box is the same one that was issued to that voter minutes earlier. A typical technical measure used for this purpose involves serial-numbered ballot stubs that are torn from the ballot and reconciled with the poll-book before the ballot is inserted in the box.

This system allows us to trust the result of the election only when the election is conducted by people with known partisan biases! It works best when the election officials strongly distrust each other, and it is the least secure when one of the parties cannot be trusted to oppose the other. As a result, elections conducted this way using officially nonpartisan judges are not trustworthy, because if we prohibit the judges from expressing political opinions in public, how can we know that the pairs of judges are indeed mutually distrustful? Another result is that this system works best in the presence of two strong parties, and it breaks down when there is only one functional party or when there are multiple small parties with shifting coalitions.

The paper ballot example makes it clear that the administrative rules and election laws are a key part of the problem! Paper ballots are so simple, from a technological perspective, that it is easy to assume, mistakenly, that conducting a fair election using them is simple. It is not, and there are many very bad laws and administrative rules governing elections using paper ballots that are still on the books. The reason they are still there is that, over the past century, we have repeatedly replaced our election technology instead of fixing the problems with the misadministration of the older technology.

Who are we forcing ourselves to trust?

As we move to voting systems that are more opaque than paper, it is fair to ask, for each new technology, and for each set of administrative rules, who are we required to trust? One measure of the trustworthiness of a voting system is the number of people that system requires us to trust, individually. The larger that number, the less trustworthy the system.

I say individually to emphasize that we can generally trust groups of people working together even when we can trust none of them working in isolation. The measure of the untrustworthiness of an election that interests me is how many people, working in isolation, could corrupt the result of the election without the need for the cooperation of others. This measure may be too strict; teams of two are almost as dangerous unless we can ensure their mutual distrust.
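To make this measure concrete, consider a toy model, sketched below in Python. The attack paths and role names are entirely hypothetical illustrations, not drawn from any real election: for each way a result could be corrupted, we list the insiders whose cooperation is required, then count the attacks available to a lone actor.

```python
# Toy model of the untrustworthiness measure described above.
# The attack paths and roles here are hypothetical, for illustration only.
attack_paths = {
    "swap ballot box in transit": {"courier"},
    "alter machine firmware": {"vendor programmer"},
    "stuff the ballot box at the polls": {"judge (D)", "judge (R)"},
    "falsify the canvass": {"canvasser (D)", "canvasser (R)", "observer"},
}

def lone_actor_attacks(paths):
    """Attacks a single insider could carry out with no accomplices."""
    return [name for name, actors in paths.items() if len(actors) == 1]

print(lone_actor_attacks(attack_paths))
```

In this toy model, two attacks require only one person, so the measure is 2; the design goal is to drive that count to zero and, by the stricter standard above, to ensure that every remaining two-person team is mutually distrustful.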

Serious problems with administrative rules

I want to emphasize, as I have said many times before, that voting technology cannot be evaluated in isolation; rather, the technology must be evaluated in the context of the laws and administrative rules that govern its use [9].

The update of administrative rules to account for new technology does not necessarily erase old rules from the books, nor are all relevant rules routinely updated with equal care. Thus, for example, when Georgia updated their state rules to account for the use of direct recording electronic voting machines, variants of the following phrase were included in many sections: "... open to the public to observe; however, such members of the public shall not in any manner interfere ...". This applies to the pre-election setup and testing, to the polling place before the polls open, and to the canvassing center where the final totals are computed, but this wording is omitted in the rules applying to the closing of the polls [10]. I hope this is merely an editorial oversight, but it clearly suggests that the problems with our administrative rules pointed out a century ago by Governor Lewelling are still with us [4].

On closer examination, even more interesting oversights emerge. We have been very interested in assuring that our elections are conducted without glitches, so we set, in our rules, the latest dates by which certain steps must be conducted. Again, quoting the Georgia rules, the pre-election testing must be completed by the end of the third day preceding the election, and the keys (physical and electronic) to the machine must be delivered to the polling-place supervisor no later than an hour before the polls open. Unfortunately, the rules set only a lower bound on the time this material is available. I see no rule prohibiting voting machine delivery immediately after pre-election testing, and I see no reason why the keys could not be delivered to the supervisor just as soon, allowing the supervisor several days of uncontrolled and unmonitored access to the machinery.

I do not mean to single out Georgia unfairly; on the whole, their rules are a fairly good model, and most states have even more serious problems. I also make no allegations that the potential for abuse created by this set of rules has ever been exploited, but I believe that if we want to create a genuinely trustworthy system of elections, we need to do better! For example, the key to the closet where the voting machines are stored could be delivered to the Republican co-supervisor of the polling place, while the keys to the machines themselves could be delivered to the Democratic co-supervisor, thus creating a system similar to that used for launching nuclear weapons, where two people must both present their keys before either key is of any use.
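The two-key arrangement suggested above is easy to model: neither co-supervisor alone can reach a working machine. A minimal sketch in Python follows; the key names are, of course, illustrative.

```python
# Sketch of the two-person rule: the storage-closet key and the machine
# keys are held by co-supervisors from opposing parties, so no single
# person can open a machine unobserved. Key names are illustrative.
REQUIRED_KEYS = {"closet key (Republican co-supervisor)",
                 "machine key (Democratic co-supervisor)"}

def can_open_machines(keys_present):
    """True only when both parties' key holders are present."""
    return REQUIRED_KEYS <= set(keys_present)

print(can_open_machines(["closet key (Republican co-supervisor)"]))  # False: one key alone is useless
print(can_open_machines(list(REQUIRED_KEYS)))                        # True: both together suffice
```

The point of the design, as with nuclear launch keys, is that any abuse requires collusion between two people chosen precisely because they distrust each other.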

Defending against the machine itself

When we look at the current generation of voting machines, we find that there is no genuinely independent record of the votes cast from the time the voter enters his or her choices until the time that ballot is stored in the redundant memory of the voting system. Between these times, the ballot image is stored in main memory within the data structures of a single voting application program. No attempt at a recount can recover data lost or deliberately altered in this period; in effect, there is no second witness to the voter's intent, and the only witness we have is software, acting as the agent of one or several programmers.

Admittedly, this period of vulnerability is brief, lasting only milliseconds, but it is just the kind of opportunity that Ron Harris exploited when he rigged slot machines in Nevada [6]. There are two approaches to guarding against this threat, both of which must be fully exploited if we are to assure ourselves that such an attack has been ruled out.

One line of defense, code audit and version control

The primary defense we currently employ, at least to some extent, is to require a strict audit of the code that goes into each voting machine. This involves not only the source code audit originally introduced in Section 7.4.2 of the 1990 voting system standards [11], but issues of configuration control that are a matter of state law and administrative rules.

Unfortunately, in the past six months, two major stories have dashed any hope that we can rely on these measures. First, the security weaknesses revealed by the Hopkins Report [12], confirmed by the SAIC study [13], and re-confirmed by the Compuware study [14] make it clear that the current source code audit process has failed to catch extremely serious flaws in the security of voting systems. The first two of these studies focused on only one voting system, but the last examined four of the most widely used systems and found serious security defects in each. I have read the source code audit reports for only two of these four systems, but those reports (not available to the public) did not point out any of these defects! It is clear, therefore, that we cannot trust the current source code audit process to provide a meaningful assurance that our machines are secure.

Even if we assume perfect source code audits, we must also assume that the software in the voting system as used in the polling place is the version that was subject to the source code audit. This is a matter of state administrative rules, and as was pointed out immediately after the Hopkins Report made it into the news, proper administrative procedures at the state level should mask many potential security loopholes in the voting application [15].

Unfortunately, in the months that followed, there were two clear test cases for this argument. In both Georgia and California, the applicable rules for voting systems require fairly strict standards of version control. While these standards look good on paper, field investigation reveals that they have not been carried out in practice. In an interview, Rob Behler discussed his work at Georgia's voting machine testing lab at Kennesaw State University; he said: "... one of the engineers used my laptop ... [to get the patch] from the FTP, put it on a card, make copies of the cards and then we used them to update the machines." He went on to clarify that this was his personal laptop, an insecure machine, and that the patches were downloaded from a public FTP server [16].

Note added, July 22, 2004: This incident did not take place on KSU premises, but rather, it appears that KSU Center for Election Systems employees were involved in oversight of the acceptance testing for the machines in an off-campus location. Rob Behler, working as a temp for Diebold, was involved in preparing machines for these acceptance tests.

Even more damning is the quote from Connie McCormack of Los Angeles; when asked by a reporter from the Los Angeles Times about a state audit of voting system software, her response was: "All of us have made changes to our software - even major changes - and none of us have gone back to the secretary of state. But it was no secret we've been doing this all along" [17]. Between these two stories, it is clear that even the best administrative rules, if unmonitored by routine audits, and therefore effectively unenforced, do nothing to ensure the security of our voting systems and protect against insider fraud, whether it originates from a vendor or from any of the technicians who maintain the machines at more local levels.

Another line of defense, redundant recording

There is a second defense we can use, and it is clear that this line was envisioned by the authors of the 1990 Voting System Standards. Section 3.2.4.2.5 of this standard mandates that direct recording electronic voting systems contain multiple memories that "shall maintain an electronic or physical image of each ballot, in an independent data path" [11].

In practice, this requirement and its successors have been interpreted entirely as fault tolerance standards, so that, for example, a voting system that records ballot images on disk and in flash EEPROM would be considered to have satisfied this requirement. Strictly interpreted, however, such an approach does not maintain independent data paths, since the program logic that recorded the vote on disk is in fact the same program logic that recorded the vote to flash EEPROM. The net result is something that behaves, at best, as some kind of stable storage.

If, however, the entire vote recording application, from the decoding of touch screen coordinates onward, were split into two genuinely independent data paths, with a guarantee that there was no communication between these paths, then the two programs could be viewed as independent witnesses of the ballots cast. Any disagreement between them can be taken as evidence that one or both programs are either incorrect or fraudulent. If they agree, and we have genuinely ruled out collusion between the two programs, we can conclude that either the vote is safe or there was an extremely unlikely coincidence, with two malfunctioning or fraudulent programs producing the same incorrect result independently.
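As a sketch of this idea, the Python fragment below stands in for two independently developed vote-capture paths. The decoders and the ballot layout (touches with y below 100 select one candidate) are hypothetical; the point is only the agreement check, in which the two paths act as the two required witnesses.

```python
# Two independently written decoders stand in for genuinely independent
# data paths; the ballot layout (y < 100 selects Smith) is hypothetical.
def decoder_a(x, y):
    # first path's interpretation of the touch coordinates
    return "Smith" if y < 100 else "Jones"

def decoder_b(x, y):
    # second path: separately developed logic for the same layout
    return "Smith" if 0 <= y <= 99 else "Jones"

def record_vote(x, y):
    """Commit a ballot only when both independent witnesses agree."""
    a, b = decoder_a(x, y), decoder_b(x, y)
    if a != b:
        # disagreement is evidence that one or both paths are faulty
        raise ValueError("independent data paths disagree; ballot not recorded")
    return a  # agreed result, to be written to both memories

print(record_vote(50, 40))   # prints "Smith": both witnesses agree
```

A touch at, say, y = -1 is decoded differently by the two paths (the first says Smith, the second Jones), so record_vote raises an error rather than silently committing either interpretation.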

In effect, the genuinely independent data paths through the voting system can be made to serve the purpose of the two witnesses required in ancient biblical law [1, 2, 3]. With human witnesses, we worry about collusion, for example, by disqualifying witnesses who are husband and wife or who are involved in other forms of partnership. We can do the same with our independent data paths, requiring that they not be developed by the same corporation; this, of course, would require a sufficient range of open standards for data representation that the two vote recording mechanisms could work in terms of the same ballot definition file that was used for ballot presentation, no small feat in today's environment where proprietary file formats are the norm.

An end run around these failures

Both of these defenses have failed us, but we can dodge the entire issue with a different approach, the widely touted but not always well explained voter-verifiable audit trail. With current direct-recording electronic voting systems, the voter is asked to verify that their selections are as displayed on the screen before the ballot is finally committed to what we hope is some variant of stable storage. This corrects some potential problems, primarily those resulting from misinterpretation of the human interface by the voter. It does not address the issue of misprogrammed or deliberately dishonest machines! Between the time the voter sees the verification screen and the time the data is recorded in redundant memory, there is still only one copy, and that copy is potentially subject to tampering.

Therefore, when we speak of a voter verified audit trail, we are speaking of something where the copy displayed to the voter for verification is already indelibly recorded on some medium that cannot be changed between the time the voter verifies the selections and the end of the mandatory ballot retention period set by state or federal law. The most common medium suggested for this is paper, but there are probably viable high-tech alternatives.

There are at least two competing models for a voter-verified audit trail. In one model, known as the frog model, the voting machine delivers, to the voter, a "frog" that the voter can verify to be a correct representation of the vote. In effect, the frog is the legal ballot. Once the voter has verified that the frog is correct, the voter deposits the frog in a ballot tabulator that adds the votes it contains to the count and retains the frog as a backup in case a recount is needed [18].

The term "frog" was coined to avoid suggesting an implementation technology, but all currently workable versions of the frog model use paper ballots: sometimes machine-marked but human-readable paper ballots, sometimes ballots with bar-coded versions of the recorded vote, so that a verifier machine must be used. The developers of these latter systems generally suggest that they will support open standards so that these verifiers can be provided by independent parties. In truth, so long as the frog is a paper document printed in a fixed font by the voting machine, optical character recognition of the printed text should eliminate any need for bar codes or other non-human-readable codes.

It is worth noting that hand-marked optical mark-sense ballots have most of the attributes of frogs! The ballot marking device is just a pencil or pen, not a touch-screen and printer, and for most voters, assuming that the instructions are clear, the ballot can be verified by eye. Handicapped accessibility poses problems for mark-sense ballots, but there are almost certainly inexpensive technical solutions to this problem, and expensive solutions are already coming to market.

The main alternative to the frog model has come to be known as the Mercuri model; in this model, the voter's choices are displayed on a paper ballot behind glass. If the voter accepts this document, it is dropped into the ballot box, all without being touched by human hands. There are alternative proposals for what should happen if the voter disagrees with the ballot that was presented [19].

With either the frog model or the Mercuri model, the machine count of the ballots is considered a secondary record, while the physical ballots themselves are the primary record. The secondary record should be made at the time the voter accepts the ballot as correct, so that there is a duplicate record of the ballot as insurance against theft or destruction of the originals. The originals, if they are available, should be considered the primary documents in case of any question about the election, and routine auditing of the conduct of elections should compare the physical ballot images with the machine count in order to check the correctness of the tabulating software.

With either the frog model or the Mercuri model, we no longer have to trust the vendors of the voting system, we need not struggle to prove the independence of software developers and auditors, and we need not rely on apparently unrealistic assumptions about the care with which election officials carry out the rules we impose on them. With a voter-verified paper audit trail, we do not rely on trust, so "it doesn't matter whether Satan himself designed the voting machines" [20].

References

[1] Numbers 35:30.

[2] Deuteronomy 17:6.

[3] Deuteronomy 19:15.

[4] L. D. Lewelling, Governor's Biennial Message to the Legislature of Kansas, 1893. http://kslib.info/ref/message/lewelling/1893.html

[5] J. S. Mill, Considerations on Representative Government, Parker, Son and Bourn, London, 1861; see particularly Chapter X, of the Mode of Voting.

[6] Sean Whaley, Former gaming official sent to jail for slot scam, Las Vegas Review Journal, Jan. 10, 1998.

[7] Richard A. Oppel, Jr., Senator Says Merrill Lynch Helped Enron 'Cook Books', The New York Times, July 30, 2002; reprinted by The Global Policy Forum.

[8] Douglas W. Jones, Voting on Paper Ballots, http://homepage.cs.uiowa.edu/~dwjones/voting/paper.html last changed Sept. 19, 2002.

[9] Douglas W. Jones, Evaluating Voting Technology, Testimony before the United States Civil Rights Commission, Tallahassee, Florida, January 11, 2001; http://homepage.cs.uiowa.edu/~dwjones/voting/uscrc.html; see the section entitled "A System View."

[10] Georgia Administrative Rules, 183-1-12-.02, Direct Recording Electronic Voting Equipment; http://www.state.ga.us/rules/index.cgi?base=183/1/12/02.

[11] Performance and Test Standards for Punchcard, Marksense, and Direct Recording Electronic Voting Systems, Federal Election Commission, January 1990.

[12] Tadayoshi Kohno, Adam Stubblefield, Aviel D. Rubin and Dan S. Wallach, Analysis of an Electronic Voting System, http://avrubin.com/vote.pdf, July 23, 2003.

[13] Risk Assessment Report, Diebold AccuVote-TS Voting System and Processes, Science Applications International Corporation, SAIC-6099-2003-261, September 2, 2003; available from the State of Maryland in redacted form, http://www.dbm.maryland.gov/SBE.

[14] Direct Recording Electronic (DRE) Technical Security Assessment Report, Compuware Corporation, v01 11/21/2003, November 21, 2003; available from the State of Ohio, http://www.sos.state.oh.us/sos/hava/files/compuware.pdf.

[15] Checks and balances in elections equipment and procedures prevent alleged fraud scenarios, Diebold Election Systems, http://www2.diebold.com/checksandbalances.pdf, July 30, 2003; this report was written as a rebuttal to [12], to which it serves as a useful index.

[16] Bev Harris, What really happened in Georgia, http://www.blackboxvoting.org/robgeorgia.htm; the transcript of an interview conducted in March 2003.

[17] Secretary of State Orders Audit of All Counties' Voting Systems, Los Angeles Times, Nov. 13, 2003.

[18] Shuki Bruck, David Jefferson and Ronald L. Rivest, A Modular Voting Architecture ("Frogs"), Caltech/MIT Voting Technology Project working paper, August 18, 2001. http://www.vote.caltech.edu/Reports/vtp_WP2.pdf.

[19] Rebecca Mercuri, A Better Ballot Box, IEEE Spectrum 39, 10 (October 2002), pages 46-50; http://www.spectrum.ieee.org/WEBONLY/publicfeature/oct02/evot.html.

[20] David Dill, E-mail to voting-activists@chicory.stanford.edu, April 23, 2003; Jim Adler paraphrased this, substituting the phrase "the devil himself" on August 25, 2003.