In a machine retabulation (hereafter just "retabulation"), ballots cast in an election are rescanned and reinterpreted to produce new vote counts. A retabulation may be complete (all ballots are rescanned) or partial (e.g., ballots in some election districts or precincts are rescanned). Some retabulations produce records of the votes purportedly cast on each ballot: Cast Vote Records, or CVRs.
Some jurisdictions around the country use retabulations in lieu of manual recounts. Other jurisdictions are considering machine retabulations as a routine method of checking voting system results. For instance, Connecticut currently requires a manual post-election audit, in which votes cast in several contests in at least 10% of election districts statewide are counted by hand, but it is considering legislation to replace the manual audit with a retabulation.
Reliance upon a machine retabulation violates best practices for post-election audits. It even violates the common definition of a post-election audit, which entails manually inspecting some ballots (or voter-verified paper audit records). A manual audit provides a human-observable check on the vote tabulation that does not depend upon the trustworthiness of any hardware or software component.
Machine-assisted audits (Calandrino et al., 2007) that combine retabulations with manual audits, if properly designed, have real advantages over both unaudited retabulations and hand counts of entire precincts or other large "batches" of ballots. As we explain further below, a machine-assisted audit crucially entails manually comparing a random sample of ballots with the machine interpretation of each ballot. Relying on unaudited retabulations is dangerous and unwarranted.
A voting system is software-independent if an undetected change or error in its software cannot cause an undetectable change or error in an election outcome (the winner[s], or whether a runoff is needed). ("Software independence" was initially defined in Rivest and Wack, 2006; Rivest, 2008.) Software independence implies that people do not have to trust that the voting system tabulated votes as it should: At least some people can observe whether it did. Auditing methods should be designed to leverage software independence, by verifying the voting system's performance without relying upon the correctness of its software.
A machine retabulation system without a manual audit squanders the benefit of software independence. Instead of demanding trust without evidence that the voting system performed correctly, it demands the same unsupported trust of the retabulation system. Such a system constitutes poor IT design and poor public policy. Relying on unaudited retabulations is like insisting that because two computerized expense reports agree, there is no reason to check the receipts.
Retabulation can detect some kinds of voting system errors, in some circumstances. If the retabulation results differ materially from the voting system results, then at least one set of results must be wrong, and an audit or hand count can reveal which one(s). A retabulation may detect certain inadvertent errors such as double-scanning some ballots, or some configuration errors.
However, even a close correspondence between two sets of machine counts cannot demonstrate their accuracy—no matter how "independent" the counts are said to be. Similar systems are subject to making similar errors. Even apparently dissimilar systems may have similar software defects, or may misinterpret certain kinds of ballots in the same way, or may be subject to subversion that causes them to report the same incorrect results. The purpose of auditing a machine system—whether it is the voting system or a retabulation system—is to determine the system's accuracy through observation, rather than depending upon assumption or speculation.
Two other misconceptions about retabulations deserve special mention.
One misconception is that if a retabulation system produces sufficiently many subtotals that match (or almost match) the corresponding voting system subtotals, the accuracy of both systems is demonstrated. This approach is somewhat like asserting that we really can verify a computerized expense report by comparing it to another computerized expense report, without checking the receipts, as long as the expense reports match in sufficient detail. In reality, what matters is not how detailed the expense reports are, but whether the reported details stand up against the receipts.
Another misconception is that we can "audit" the retabulation system by checking graphic ballot images stored in the retabulation system against the ballot interpretations (Cast Vote Records) produced by—and, in some cases, stored in—the retabulation system. At best, this process checks the internal consistency of the retabulation system—or part of the retabulation system. At worst, a subverted retabulation system could display arbitrarily many ballot images and correct interpretations thereof, yet every vote count could be misreported. Observers should be able to assess the retabulation system's accuracy without relying on the system itself.
Comparing images of ballots to Cast Vote Records cannot provide much evidence that electoral outcomes are correct. To know that outcomes are correct, we must know that the combined error rate of creating the graphic images from the ballots and converting those images to Cast Vote Records is small. But comparing images to Cast Vote Records checks only the latter: it gives no information about the former rate. Therefore, it cannot confirm that electoral outcomes are correct.
The easiest way to tell whether the combined error rate is small is to measure the paper-to-Cast-Vote-Record error rate directly: to manually compare the original ballots to the Cast Vote Records.
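As a concrete illustration of direct measurement, the sketch below computes an exact binomial upper confidence bound on the paper-to-CVR discrepancy rate when a random sample of ballots is compared to their Cast Vote Records and no discrepancies are found. The function name and the sample size are hypothetical, chosen for illustration; they are not part of any prescribed procedure.

```python
import math

def upper_bound_zero_discrepancies(n: int, alpha: float) -> float:
    """Exact binomial upper confidence bound on the discrepancy rate
    when 0 discrepancies are observed in a random sample of n ballots.
    Solves (1 - p)**n = alpha for p: if the true rate were any higher,
    observing zero discrepancies would have probability below alpha."""
    return 1.0 - alpha ** (1.0 / n)

# Hypothetical example: 59 randomly selected ballots are compared to
# their CVRs and no discrepancies are found. At 95% confidence, the
# true paper-to-CVR discrepancy rate is then below about 5%.
print(round(upper_bound_zero_discrepancies(59, 0.05), 4))  # → 0.0495
```

The bound shrinks as the sample grows, so the confidence demanded of the audit directly determines how many ballots must be inspected by hand.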
Ideally, an election does not merely report results. Rather, it should provide convincing evidence that the reported results are correct. This principle is called evidence-based elections. (Stark and Wagner, 2012.) Retabulations cannot provide convincing evidence that outcomes are correct, because they do not examine the ballots, the artifacts that voters themselves had the opportunity to verify as correct records of their intent. By failing to leverage the software independence conferred by voter-verifiable physical ballots, retabulations at best provide negative evidence: they can detect some "smoking guns," but cannot provide affirmative evidence that electoral outcomes are correct. Absence of evidence is not evidence of absence.
Audits that compare individual ballots to the voting system's interpretations of those ballots (Cast Vote Records, or CVRs) can be far more efficient than audits that hand-count all ballots in selected precincts or other batches. However, these ballot-level comparison audits are intractable on many voting systems, which either do not record CVRs or do not permit matching each CVR to the corresponding ballot. Therefore, machine-assisted audits based on a retabulation may provide more rigorous audits with less effort than alternative approaches. (Machine-assisted audits were first described in Calandrino et al., 2007.)
A machine-assisted audit, also known as a transitive audit, follows these basic steps:

1. Rescan the ballots on a retabulation system that produces a Cast Vote Record for each ballot, and compute contest results from those CVRs.
2. Confirm that the CVRs produce the totals reported by the retabulation.
3. Manually compare a random sample of ballots to the corresponding CVRs.
4. If the manual comparison confirms the accuracy of the retabulation, and the retabulation confirms the outcomes reported by the voting system, the reported outcomes stand; otherwise, the audit escalates, up to a full hand count if necessary.
In particular, if the audit of the retabulation system is a risk-limiting audit, then this approach provides a risk-limiting audit of the original system. A risk-limiting audit has a large, predetermined minimum chance of leading to a full hand count if a full hand count would report a different outcome than the system being audited. For a further discussion of risk-limiting audits in general and machine-assisted (transitive) audits in particular, see Bretschneider et al., 2012.
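The arithmetic behind a simple risk limit can be sketched as follows. Suppose that if the reported outcome were wrong, at least a fraction d of the CVRs would have to differ from the corresponding paper ballots. Then the chance of drawing n ballots at random and finding no discrepancy is at most (1 - d)^n, and choosing n so that this probability is at most the risk limit bounds the chance of certifying a wrong outcome. This is a deliberate simplification of real risk-limiting audit calculations, which must also handle nonzero discrepancy counts and relate discrepancies to contest margins; the parameter values below are hypothetical.

```python
import math

def min_sample_size(d: float, alpha: float) -> int:
    """Smallest n such that (1 - d)**n <= alpha.
    d: smallest fraction of erroneous CVRs consistent with a wrong
       reported outcome (derived from the contest margin).
    alpha: the risk limit, i.e., the maximum acceptable chance of
       failing to detect a wrong outcome."""
    return math.ceil(math.log(alpha) / math.log(1.0 - d))

# Hypothetical numbers: a 5% risk limit, and a contest in which a
# wrong outcome would require at least 5% of CVRs to be erroneous.
print(min_sample_size(0.05, 0.05))  # → 59
```

Note that the required sample size depends on the margin and the risk limit, not on the total number of ballots cast, which is why ballot-level comparison audits can be so efficient in large elections.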
Crucially, a machine-assisted audit does not rely upon the accuracy of the retabulation, but rather verifies it, in two steps: (1) Confirm that the CVRs produce the totals reported by the retabulation; (2) Manually confirm a high degree of correspondence between the CVRs and the corresponding ballots. Additional procedures may be implemented to provide insight into the performance of the voting system and/or the retabulation system.
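The two verification steps can be sketched on toy data. All of the data and names below are hypothetical; in a real audit, step 2 is performed by human auditors reading the sampled paper ballots, which the `paper` list merely stands in for.

```python
import random
from collections import Counter

# Hypothetical toy data: each CVR records the vote the retabulation
# system read from one ballot; `paper` stands in for what a human
# auditor would read from the corresponding paper ballot.
cvrs = ["Alice"] * 600 + ["Bob"] * 400
paper = list(cvrs)  # in this toy example the retabulation is accurate
reported_totals = {"Alice": 600, "Bob": 400}

# Step 1: confirm that the CVRs produce the totals reported by the
# retabulation.
assert Counter(cvrs) == reported_totals, "CVRs do not match reported totals"

# Step 2: manually compare a random sample of ballots to the
# corresponding CVRs.
random.seed(1)
sample = random.sample(range(len(cvrs)), k=59)
discrepancies = sum(1 for i in sample if paper[i] != cvrs[i])
print(f"{discrepancies} discrepancies in {len(sample)} sampled ballots")
```

Neither step trusts the retabulation system's own reporting: step 1 recomputes the totals from the CVRs, and step 2 checks the CVRs against the physical ballots themselves.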
It is also possible to perform a partial retabulation combined with a manual audit of that partial retabulation. If the manual audit is large enough, this approach can be almost as effective as a hand count of the retabulated ballots. How this approach compares to a comprehensive machine-assisted audit depends on the breadth of the partial retabulation, but in general it cannot provide as much evidence that electoral outcomes are correct.
Typically, most of the time and effort of a machine-assisted audit is in the initial retabulation: re-scanning the ballots, creating Cast Vote Records, and computing contest results from the Cast Vote Records. Manually comparing a relatively small number of those ballots to the corresponding CVRs is, in comparison, a modest task, which can be observed by many people, and can be tailored to meet constraints of time and budget. If a retabulation system supports ballot-level manual auditing, skipping this manual verification step makes little sense, since it takes little additional work to produce much stronger evidence that the retabulation is correct. If the system does not support ballot-level manual auditing, we would advise against adopting it.