Why did one out of seven voters in a hotly contested congressional race appear to skip that race on their ballots? This happened in the Sarasota County part of Florida's 13th Congressional District (CD 13) race in the 2006 election. The 14.9% undervote (over 18,000) in the predominantly Democratic part of the district more than accounted for the narrow margin of victory (about 380 votes) for the Republican candidate. In the other counties of the district, the undervote ranged from 2-5%, as did the undervote on absentee ballots in Sarasota County. The losing candidate has protested the election, but there is no way to carry out a conventional ballot recount, because an e-voting system without a voter-verifiable paper trail was used.
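As a rough sanity check, the arithmetic behind these figures can be worked out directly. The numbers below are the approximate ones quoted above (18,000 undervotes at 14.9%, a 380-vote margin), with 3% taken as a typical undervote rate; they are illustrative, not official totals.

```python
# Back-of-the-envelope check: how does the "excess" Sarasota undervote
# compare with the margin of victory? All inputs are the approximate
# figures quoted in the text, not official election totals.

sarasota_ballots = 18000 / 0.149   # ballots implied by 18,000 undervotes at 14.9%
margin = 380                       # approximate Republican margin of victory
typical_rate = 0.03                # midpoint of the 2-5% seen elsewhere

expected = typical_rate * sarasota_ballots   # undervotes at a typical rate
excess = 18000 - expected                    # undervotes beyond that baseline

print(round(sarasota_ballots), round(excess))   # prints: 120805 14376
```

Even under these crude assumptions, the excess undervote is roughly forty times the margin of victory, which is why the result of this race is so hard to accept at face value.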
There is no consensus about the causes of this phenomenon. I will criticize views put forward by various individuals and groups. Possible explanations include:
Did poor ballot design make it easy to overlook the CD 13 race?
According to this more sophisticated theory, cogently presented in a paper by Frisina et al., the fact that the CD 13 race appeared at the top of the same screen that listed the candidates for governor and lieutenant governor caused many voters to overlook it. Several arguments buttress this hypothesis. One is that it fits, in the sense that the Sarasota County ballot format was the only one among the five CD 13 counties with this characteristic.
Another piece of supporting evidence is that there was an arguably analogous situation involving another race, in Charlotte County (which also happens to be partly included in CD 13). There, the contest for attorney general (AG) appears at the bottom of the same screen with the gubernatorial race. The AG undervote in Charlotte County was even larger proportionately than that for CD 13 in Sarasota.
On the face of it, this is a plausible explanation. It is in the same category as the notorious 2000 presidential election butterfly ballot involving punched cards (also in Florida). But the Sarasota ballot design is by no means so obviously deceptive. The CD 13 race is not obscured and there is no confusion as to which box to check (the butterfly problem). Take a look at the relevant screen image (see ballot page 2) and judge for yourself. Do you think you might have overlooked the race at the top of the screen page? Perhaps the most problematic aspect is the use of color for the word "State".
Now compare this image with that of the Charlotte ballot (see screen p. 3 of Charlotte ballot). Here one can see how a voter might overlook the AG race since it lists just two candidates at the bottom of the screen, below a list of 6 pairs of candidates for governor and lieutenant governor. One could easily imagine a person voting for one of the two candidates (Rep and Dem) at the top of the screen and not noticing the AG race at the bottom. Furthermore, it would not be surprising if a substantial number of voters skipped the AG race deliberately as it was far less significant and less publicized than the CD 13 race.
A stronger reason for discounting the likelihood that this type of ballot design accounted for the huge CD 13 undervote is the far more clumsy design of the ballot faced by voters of Mecklenburg County, NC in 2006. (This was pointed out by Joyce McCloy.) Here, the congressional race (CD 8) was shown in the left column of the screen under an extensive explanation of how to cast a straight party vote. In the right column were several races for state offices. It is easy to see how a voter might look at the left column, see the explanation of the straight party voting procedure, decide not to vote a straight ticket, and then shift attention to the right column, not noticing the few lines at the bottom of the left column devoted to the congressional race. Check out the Mecklenburg ballot for yourself. If, as is the case, no more than 4% of the voters were confused by this ballot, how likely is it that more than three times that ratio were confused by the Sarasota ballot?
Well over a hundred Sarasota County voters complained to the press that the e-voting machines did not register their choices for Democratic candidate Jennings. It seems reasonable to assume that many more who noticed such an effect did not report it. Furthermore, many others may have failed to notice that their choices were not being registered. The ballot design theory does not account for this.
A report critical of the work of the SAIT team (Florida State University's Security and Assurance in Information Technology Laboratory, which was commissioned by the state to examine the voting system's source code) was issued by David Dill and Dan Wallach. They point out various steps that were not taken, such as a study of ES&S bug tracking data related to the Sarasota system. The Dill-Wallach (DW) report also emphasizes the unpredictable nature of the consequences of program errors, and points out that there has been no systematic effort to detect hardware faults.
A strong, but certainly not conclusive, argument that the CD 13 undervote was not caused by conventional software or hardware faults is that the problem appeared only in the CD 13 race; there seems to be no evidence of problems with other races on the ballot. Consider, for example, the reported problem with touch screen response: the Sarasota machines are said to have suffered from a fault requiring an excessively long touch by the voter before the system recognizes a selection. At first, this seems to account for the system missing CD 13 votes. But if this is the explanation, why were there no large undervotes for other races on the ballot?
The argument that bugs were not the cause is weakened by the fact that the SAIT report, the DW report, and various studies made by other experts all agree that the quality of the ES&S software is very poor (the same seems to be true of the software produced by other e-machine vendors). This, in addition to increasing the likelihood of faults, also makes it more difficult to find them. So we cannot altogether rule out the possibility that some combination of bugs and hardware problems manifested themselves only for one ballot position.
It is important to understand that the conclusions of the SAIT report were based on the assumptions that the source code they were given was the actual source code used, that it was correctly converted to object code, that the iVotronic hardware performed as specified, and that no communication devices had access to the machines. Their assignment did not entail any effort to detect cheating of any kind.
While the SAIT report did not officially deal with security issues or draw any formal conclusions about security, the authors point out many security weaknesses in the source code they studied. One weakness, noted by others as well in e-voting systems, is very sloppy handling of passwords, and even of physical security, where easily picked locks on cabinet doors are common. Apparently it would not be very difficult for malicious outsiders to gain access to the systems so as to modify code, or even to install physical devices to perform simple functions such as turning cheating features on or off by remote means. But most such operations would be somewhat labor intensive as they would have to be carried out on individual machines. More difficult would be implementing schemes that would cause propagation of fraudulent software among a set of machines, at least within a precinct. I will not explore this aspect further here, because I believe that insider corruption is a more serious danger.
It is widely believed that most computer related fraud in the world of finance is the work of employees or recent employees of the victimized companies. Those writing complex programs may be able to insert surreptitious code that allows them to manipulate data, often remotely, for nefarious purposes. In the e-voting world there is certainly a great deal to gain by manipulating elections. One might imagine a variety of situations in which this might be done. Particularly ruthless factions (perhaps consisting of only a handful of people) within major political parties might employ skilled programmers to penetrate such companies as ES&S. Does anybody doubt the existence of groups willing to carry out such acts?
The DW report dismisses the idea that the CD 13 undervote was caused by malicious software, saying,
...if someone had the ability to unduly influence the election outcome, they would be unlikely to choose to create an obviously high undervote rate, rather than making other changes that would be less likely to be noticed.

This seems very persuasive, but wait a bit before placing bets on it. Consider the following paragraph from the SAIT report, which seems to reinforce the dismissal of malice.
One challenge facing any would-be attacker is the low margin for error in mounting this kind of attack, and software developers well know that perfect software, including attacking software, does not exist. If the virus contains a bug or programming error that causes it to behave in a way different from how its creator intended, that bug might have effects that could disable the attack, cause it to be detected by election officials, or expose the attacker's identity and methods to forensic analysis. Just as all application code has defects, attacker code is also subject to defects. Moreover, it would be difficult for an attacker to test virus operation rigorously in the lab before injecting it into the wild, so an attacker would have to be concerned about the possibility of bugs in her code. There is no clear way for an attacker to influence or control the virus after it has been introduced into the system, so if she wants to remain undetected, the attacker must plan to succeed on the first try. Even with the most careful precautions, complex first try attacks are not guaranteed to succeed.
Suppose now that a cheater inserts malicious code intended to switch some percentage of votes from A to B. In detail, for each ballot image to be so corrupted, such code would have to delete the vote for A, insert the vote for B, and ensure that this switch was not reflected in the screen image. Now assume that, due to the kind of defect envisioned in the above SAIT paragraph, the malicious code segment succeeded in deleting the A-vote, but failed to insert the B-vote, and that it also failed to interfere with the code controlling the screen image, so that the screen correctly reflected the fact that, in the modified ballot image, neither A nor B was selected. The result would be precisely the undervote scenario that played out in Sarasota. There are any number of ways in which this would not be detectable by the SAIT team, particularly since they had access only to source code given to them by the Florida Department of State (FLDoS).
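The failure mode just described can be made concrete with a toy sketch. Everything here is hypothetical: the ballot representation, the race names, and the attack code are invented for illustration and bear no relation to any actual voting-system software.

```python
# Toy model of the scenario above: vote-switching code whose insertion
# step fails, producing an undervote in exactly one race while leaving
# every other race untouched. Purely illustrative; not vendor code.

def buggy_vote_switch(ballot, race, victim, beneficiary):
    """Intended attack: replace a stored vote for `victim` with one for
    `beneficiary`. Modeled defect: the deletion executes, but the
    compensating insertion never does, so the race is left blank."""
    if ballot.get(race) == victim:
        del ballot[race]              # step 1: delete the A-vote (works)
        # ballot[race] = beneficiary  # step 2: never reached -- the bug

# Seven simulated ballot images; the defective attack fires on one.
ballots = [{"CD13": "A", "Governor": "X"} for _ in range(7)]
buggy_vote_switch(ballots[0], "CD13", victim="A", beneficiary="B")

cd13_undervotes = sum(1 for b in ballots if "CD13" not in b)
gov_undervotes = sum(1 for b in ballots if "Governor" not in b)
print(cd13_undervotes, gov_undervotes)   # prints: 1 0
```

Note the signature of this failure: an anomalous undervote confined to a single ballot position, with all other races behaving normally, which is exactly the Sarasota pattern that the "bugs would show up elsewhere too" argument treats as evidence against malice.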
Is this what actually happened? Could be. But the kind of trusting, polite investigation carried out in Sarasota under the aegis of the state agency that might have been complicit in fraud (remember Katherine Harris?), with the company that produced the machines also involved, is not very likely to find proof. Furthermore, given the way many jurisdictions handle the difficult task of securely sequestering voting machines after elections, if there was cheating, whatever evidence existed in the form of corrupted programs in machine memory is probably long gone.
The SAIT report includes another argument supporting the ballot design theory and purporting to undermine both the bug and the fraud hypotheses.
In December 2006, a Sarasota newspaper conducted an analysis examining the correlation between age and CD 13 undervotes. They found that in "...precincts where the median age was greater than 65, the undervote rate in the congressional race was 18 percent, 40 percent higher than in younger precincts." Some suggest that the undervote-age correlation supports the ballot design hypothesis and refutes most machine-related hypotheses since software cannot detect a voter's age. It may also explain the correlation between undervotes and voters associated with one party or the other. We attempted to identify fault hypotheses to explain this correlation, but we were unable to construct any machine-related fault hypotheses that would explain this observed effect.

The unstated assumption here is that those over 65 were more likely to overlook the CD 13 race, which would account for more undervoting in that group. Given that assumption, this data is consistent with the hypothesis that the undervote was due mainly to a confusing ballot. But it is no less consistent with the theory that the principal cause of the undervote was a software fault, or machine fault, or malicious action of the kind sketched above. All these could have caused cast votes to disappear from both ballot and screen images. The same people who might have missed seeing the CD 13 race on the screen are just as likely not to have noticed that their votes were not recorded.
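The point that the age correlation fits both hypotheses can be illustrated with a small numerical sketch. The rates below are illustrative assumptions chosen only to reproduce the rough 13%-vs-18% pattern reported by the newspaper; they are not data.

```python
# Sketch: the observed age correlation is consistent with BOTH theories.
# All probabilities below are assumed, purely for illustration.

def undervote_rate_ballot_design(p_overlook):
    """Ballot-design model: a voter undervotes iff she overlooks the race,
    so the undervote rate just equals the overlook probability."""
    return p_overlook

def undervote_rate_machine_fault(p_drop, p_notice):
    """Fault/fraud model: the machine silently drops the vote with
    probability p_drop; a voter who notices the blank on the review
    screen re-casts it, so only unnoticed drops become undervotes."""
    return p_drop * (1 - p_notice)

# Suppose older voters are simply less likely to notice the blank.
young = undervote_rate_machine_fault(p_drop=0.20, p_notice=0.35)
old   = undervote_rate_machine_fault(p_drop=0.20, p_notice=0.10)
print(round(young, 2), round(old, 2))   # prints: 0.13 0.18
```

Under these assumptions the machine-fault model yields the same 13%-versus-18% age gradient as the ballot-design model, because both ultimately depend on the same attentiveness factor. The correlation therefore cannot discriminate between the hypotheses.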
Here is another instance in which the SAIT report downplays the likelihood of fraud. After mentioning that compact flash (CF) cards may be a vulnerable point with respect to security, the authors write,
One significant mitigating factor in this case is that under Sarasota County procedures, only four highly trusted individuals are authorized to access the election administration server and the CF cards. This reduces the risk because it limits the number of people with an opportunity to exploit this vulnerability.
Now I have no idea who these "highly trusted individuals" are, so I am obviously not making any accusations. My question is, "exactly who is it that highly trusts them?" The goal here is to ascertain the cause of a peculiar event. The cause might have been poor design of one kind or another, or it might be a deliberate criminal act. Where a possible crime is involved, it does not seem to be a good idea to "highly trust" those who might possibly be the criminals. A more prudent inference from the second clause of the first sentence of the above quote is that any of four people were well positioned to help implement fraud via the mechanism of the CF cards.
Perhaps one reason for this reluctance to consider fraud stems from a statement in the DW report, in which the authors justify not investigating the possibility of malicious acts:
And most importantly, an effective investigation of malice or tampering would be exceptionally difficult to conduct with limited resources.

I fully agree with this statement. As I indicated above, in this case I don't know if there was fraud, but if there was, I don't see much chance of detecting and proving it. But to conclude from this that we should now exclude the possibility of fraud and consider only other causes is analogous to looking under the lamp post for the lost wallet because, if it isn't there, we won't be able to find it in the dark.
The Sarasota undervote case dramatically illustrates how hard it can be to determine if the results produced by e-voting systems are valid. It would have been easier if the machines had generated paper trails, but paper trails, especially those produced by DRE machines, are far from conclusive. (The problem is that most voters don't actually verify that the printed ballot shows their votes correctly; see my fuller explanation with supporting references.) Note that, even if the cause of the CD 13 undervote was poor ballot design, a very real possibility, that would still be an e-voting issue. People marking paper ballots manually rarely have problems of this kind.
E-voting systems are babies that should be thrown out with the bath. (See my earlier articles on e-voting.) The simplest, safest, and, in most cases at least, the cheapest, solution is hand-marked, hand-counted ballots. (To assist handicapped people, there exist relatively simple ballot marking systems, such as IVS, that can produce paper ballots that can be counted along with the hand-marked ballots.)
Comments can be sent to me at unger(at)cs(dot)columbia(dot)edu