
Testimony of Lawrence Norden Before District of Columbia Council

Brennan Center counsel Lawrence Norden testified before the District of Columbia Council on October 3, 2008 regarding irregularities in the tallying of ballots from the September 9, 2008 primary election.

Published: October 3, 2008

Council Board of Elections and Ethics Investigation Special Committee

Council of the District of Columbia

Statement of

Lawrence D. Norden

Counsel, Brennan Center for Justice at NYU School of Law

October 3, 2008

The Brennan Center for Justice thanks the Council Board of Elections and Ethics Investigation Subcommittee of the District of Columbia and Chairwoman Mary M. Cheh for holding this hearing.  We appreciate the opportunity to share with you the results of our extensive studies on voting systems and best practices.  We hope that this information will be helpful in the coming weeks, as Washington, D.C. prepares for what will undoubtedly be a high-turnout election, after a primary that has shaken the confidence of many voters in the integrity of the City’s voting systems.

THE BRENNAN CENTER’S WORK ON VOTING SYSTEM SECURITY

The Brennan Center for Justice is a nonpartisan think tank and advocacy organization that focuses on democracy and justice.  For the last four years, in collaboration with the nation’s leading technologists, election experts, security professionals, and usability and accessibility experts, I have led the Brennan Center’s Voting Technology Project and worked to make the country’s voting systems as secure, reliable and accurate as possible.  From 2004 to 2006, I chaired the Brennan Center Task Force on Voting System Security, which conducted the first systematic analysis of voting system security.  I am also lead author of the nation’s first comprehensive and empirical review of electronic voting systems entitled The Machinery of Democracy: Voting System Security, Accessibility, Usability and Cost.[1]  In 2007, I co-authored a book on voting system security, The Machinery of Democracy: Protecting Elections in an Electronic World,[2] as well as a report of the Brennan Center and the Samuelson Clinic at the University of California, Berkeley School of Law on post-election audits entitled Post Election Audits: Restoring Trust in Elections.[3]

RESTORING CONFIDENCE IN WASHINGTON, D.C.'S ELECTIONS

Initial unofficial results released by the D.C. Board of Elections and Ethics (the “D.C. Board”) on September 9, 2008 were apparently incorrect, with thousands of extra write-in votes and “overvotes” recorded.[4]  The D.C. Board insists that the final unofficial results were correct.[5]  Since the problems in initial vote totals were discovered, Sequoia Voting Systems and the D.C. Board have blamed the miscount on problems uploading information from a single cartridge.[6]  Sequoia has suggested that the problem was caused by poll workers, or by a static or electrical discharge.[7]  The D.C. Board’s own investigation into the cause of the problem has been inconclusive.[8]

Regardless of whether the explanations provided by Sequoia and the D.C. Board are correct, many voters in Washington, D.C. are likely to be skeptical and to want reassurance that similar problems will not disrupt a future election.  There are at least three things the Board can do to address these concerns.

1.         Appoint an Independent Investigator to Examine Technical Problems

So far, the only investigations into the problems on September 9, 2008 have been conducted by Sequoia, the voting system vendor, and the D.C. Board’s technology staff.[9]  Many voters are likely to be skeptical that these parties are disinterested investigators.

In fact, the history of recent investigations of machine malfunctions in other jurisdictions suggests that such skepticism would not be unfounded.  For instance, after the March primary in Butler County, Ohio, election officials discovered that their tally server failed to properly process memory cards and tabulate votes from county voting machines.[10]  After conducting its own analysis, the voting system vendor Premier Election Solutions concluded that the problem was not caused by any flaw in its software, but by the County’s use of another company’s anti-virus software.[11]

In early August, the Ohio Secretary of State, in conjunction with Butler County election officials and observers from Premier, conducted a simulation to test Premier’s conclusions about the failure.  They found that the anti-virus software was not the cause of the problems.  Following this study, on August 19, 2008, Premier wrote to the Ohio Secretary of State conceding that the errors were probably caused by a logic error in Premier’s software.[12]

Ultimately, the public should not have to trust that those who may have been responsible for the problems on Election Day will adequately investigate machine failures. When a mysterious technical problem arises, the vendor should not lead the investigation.

Instead, an independent investigator should take the lead in determining what went wrong.  This is exactly what was done during the controversy over 18,000 lost votes in Congressional District 13 in Sarasota County, Florida in 2006, and in a current controversy over voting machine problems in New Jersey.[13]

2.         Review Logic & Accuracy Testing and Ballot Accounting Practices

The public explanations for the Election Day problems on September 9, 2008 have been insufficient to determine exactly what went wrong; therefore, it is impossible to say what steps might have prevented the initial miscount.  However, most technical problems with voting systems can be caught through thorough pre-election (sometimes called “logic and accuracy”) testing before the election begins, and good post-election canvassing (sometimes called “ballot accounting and reconciliation”) after the polls have closed.

The District of Columbia has fairly detailed requirements for both pre-election testing and post-election canvassing.[14]  It is difficult to know, however, what is actually done in practice.  As it happens, the Brennan Center is in the midst of conducting a survey of the post-election canvass of all 50 states and the District of Columbia.  We have received responses to our questions regarding post-election canvass practices from all jurisdictions with the exception of the District.

There are a number of models or “best practice” recommendations for both pre-election testing and post-election canvasses.  It would probably be useful for the Board of Elections to review the best practice recommendations made by Professor Doug Jones and John Washburn for pre-election testing, to ensure that it is, in fact, taking all reasonable steps to catch technical problems before an election begins.[15]  Similarly, we recommend that the Board review the Brennan Center’s recently released checklist (the “Brennan Center checklist”) for ballot accounting and reconciliation after polls have closed.  It is annexed to my testimony as Exhibit A.

Some of the recommendations made in the Brennan Center checklist do not appear to be statutory requirements in the District.  For instance, officials do not appear to be required to reconcile the number of ballots cast with the number of voters signed in at the precinct level.  This step is critical to the early detection of a problem with in-precinct voting machines.
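
For illustration only, the precinct-level check described above can be reduced to a very simple comparison.  The sketch below is ours; the field names and figures are hypothetical and do not describe the District’s actual canvass records.

```python
# Minimal sketch of a precinct-level reconciliation check.
# Field names and figures are hypothetical, for illustration only.

def reconcile_precinct(precinct_id, voters_signed_in, ballots_cast):
    """Flag any precinct where ballots cast do not match poll-book sign-ins."""
    discrepancy = ballots_cast - voters_signed_in
    if discrepancy != 0:
        print(f"Precinct {precinct_id}: {ballots_cast} ballots cast vs. "
              f"{voters_signed_in} sign-ins (discrepancy {discrepancy:+d})")
    return discrepancy

# A precinct reporting three more ballots than sign-ins would be flagged here
# for follow-up before results are certified.
reconcile_precinct(precinct_id=21, voters_signed_in=409, ballots_cast=412)
```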

3.         Conduct Post-Election Audits of Voter Verified Paper Ballots; Establish Clear Procedures for the Audit Before the Election

One of the most important measures for increasing security and reliability of voting systems (as well as public confidence in them) is to conduct post-election audits of voter verified paper records, comparing these records to the electronic tallies provided by the voting machines.  Post-election audits can serve several useful purposes, including: creating an appropriate level of public confidence in the results of an election; deterring fraud against the voting system; detecting large scale, systemic errors; providing feedback that will allow jurisdictions to improve voting technology and election administration in future years; and confirming, to a high level of confidence, that a complete manual recount would not change the outcome of the race.

Of course, this kind of post-election audit can only be effective if there is a voter verified paper record to compare to the electronic record.  Many voters in Washington, D.C. vote on paperless touch-screen machines.  Post-election audits cannot be used on these machines to verify that they are working properly and accurately recording every vote.  Nevertheless, the Brennan Center urges the D.C. Board to join 19 states this November and conduct post-election audits on the optical scan machines that read paper ballots, as an extra measure to ensure that these machines are working properly.[16]  This should increase voter confidence in the optical scan machines and allow the Board to improve election administration and use of these machines in the future.

If the D.C. Board does conduct a post-election audit in November, it must improve the procedures it employed when auditing the September 9 primary.  The D.C. Board’s “Final Report for the Congressional and City Council Primary Post Election Audit” (the “Audit Report”) raises a number of concerns about the way that audit was conducted.[17]  Below are some problems we see with the Audit Report.

a.         The Audit Report Cannot Be Reconciled with the Official Election Results; All Paper Ballots Should Be Included in the Audit

The vote totals listed in the Audit Report by candidate and precinct are frequently lower than the totals listed in the official results.  For instance, in precinct 21, in the contest for the Democratic candidate for the United States House of Representatives, the final election result shows Eleanor Holmes Norton received 183 regular votes, plus 6 write-ins, for a total of 189 votes.  The Audit Report shows Congresswoman Norton receiving 76 votes on the “Edge” touchscreen machine, 101 votes on the optical scan machine, plus 6 write-in votes, for a total of 182 votes.  It is likely that this 7-vote discrepancy can be attributed to unaudited provisional or absentee ballots, but that is not clear from reviewing the Audit Report.  In any case, provisional and absentee ballots should be included in any audit.
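
For illustration, the precinct 21 figures cited above can be restated as a direct check; the short calculation below simply repeats that arithmetic, and the variable names are ours.

```python
# The precinct 21 figures cited above, restated as an arithmetic check.
official_total = 183 + 6       # regular votes + write-ins = 189 (official result)
audited_total = 76 + 101 + 6   # Edge DRE + optical scan + write-ins = 182 (Audit Report)

unexplained = official_total - audited_total
print(f"Votes not accounted for in the audit: {unexplained}")  # prints 7
# If provisional and absentee ballots were also audited, this difference could
# be attributed to specific ballot types rather than left open.
```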

b.         Potential Systemic Problems Identified in the Audit Should Be Investigated

The Audit Report notes that in one of the audited precincts “the Statehood Green ballots could not be read by the machine because of the ballot header,” with no further explanation.[18]  We are very concerned that there does not appear to have been a more widespread investigation of this problem.  All we know from the Audit Report is that in 25% of the precincts audited, no votes were recorded in the Statehood Green primaries.  When an audit shows that a large percentage of votes may have been miscounted as a result of a systemic error, it is imperative that the problem be thoroughly investigated and all votes accounted for.

It is important that the public be provided with the full results of such an investigation and notified of the steps that have been put in place to ensure that the problem does not happen again.  The Statehood Green primary was not highly contested and received little public scrutiny.  But a failure to investigate and fully address the problem could mean a much bigger problem in a future election.  It is not difficult to imagine how much more problematic this situation might have been if it involved Democratic party primary ballots.  Why was this problem not discovered in pre-election testing?  If the ballots were not counted at all, why were they not rejected by the scanners as blank ballots?

As noted in “Principles and Best Practices for Post-Election Audits,”[19] a set of best practices for post-election audits endorsed by a number of voting integrity groups, including the Brennan Center, “audit protocols must clearly state” ahead of time under what circumstances officials must “audit additional machines.”  “Such factors might include the number of discrepancies and their distribution across the sample.”  For instance, in Minnesota, if a discrepancy greater than 1/2 of 1% is identified in the audit of any particular contest, three more precincts in that jurisdiction and county must be audited within 2 days.  If the expanded audit reveals a discrepancy greater than 1/2 of 1%, the review must expand to include the entire county.  If the county-wide review reveals a discrepancy, and the number of voters in that county comprises at least 10% of the voters for the affected race, a race-wide hand count must take place.[20]
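
By way of illustration only, the tiered Minnesota rule summarized above can be expressed as a short decision procedure.  The sketch below is ours, with hypothetical function and argument names; it tracks the thresholds as described in this testimony, not the statutory text itself.

```python
# Sketch of a Minnesota-style audit escalation for a single contest.
# Thresholds follow the summary above; names are hypothetical.

THRESHOLD = 0.005  # one-half of one percent

def discrepancy_rate(hand_count, machine_count):
    """Relative difference between the hand count and the machine count."""
    if machine_count == 0:
        return 0.0
    return abs(hand_count - machine_count) / machine_count

def next_step(initial_rate, expanded_rate=None, county_rate=None,
              county_share_of_race=0.0):
    """Return the next required step, given the audit results seen so far."""
    if initial_rate <= THRESHOLD:
        return "initial audit passes; no expansion required"
    if expanded_rate is None:
        return "audit three additional precincts within two days"
    if expanded_rate <= THRESHOLD:
        return "expanded audit passes"
    if county_rate is None:
        return "expand the review to the entire county"
    if county_rate > 0 and county_share_of_race >= 0.10:
        return "race-wide hand count required"
    return "county-wide review complete; no race-wide count triggered"

# Example: an initial discrepancy of 1.2% triggers the first expansion.
print(next_step(initial_rate=discrepancy_rate(hand_count=506, machine_count=500)))
```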

c.         Update Official Totals When Audit Shows Machine Count Was Incorrect

There were a number of instances where the audit showed that the machine totals were incorrect.  It is unclear from the Audit Report why this happened, although there is a note on the first page that in precincts 21 and 22, some voters had used pencils with erasers.  The implication is that eraser marks caused overvotes.[21]

Not all discrepancies found in post-election audits should lead to further investigation.  As already stated, the circumstances under which further investigation is mandated should be spelled out clearly, before the election.  Nevertheless, whenever discrepancies are found, official vote totals should be updated.

d.         Always Audit Precincts that Appear to Produce Anomalous Results

We are troubled by reports that the D.C. Board did not originally audit Precinct 141; this is the precinct that both Sequoia and the Board concluded contained the cartridge that caused the miscount.  In addition to selecting some number of precincts randomly, Boards of Elections should always audit precincts that appear to have produced anomalous results.

RECOMMENDATIONS FOR POST-ELECTION AUDITS IN NOVEMBER

In Post-Election Audits: Restoring Trust in Elections, the Brennan Center teamed with the Samuelson Law, Technology & Public Policy Clinic at Boalt Hall School of Law (UC Berkeley), as well as several election officials and leading academics (collectively, the “Audit Group”) to make several recommendations for conducting post-election audits.  Many of these recommendations were echoed in “Principles and Best Practices for Post-Election Audits,” which is annexed to this testimony as Appendix B.

We urge the D.C. Board to review both documents in establishing a post-election audit protocol for November.  In particular, we recommend that the D.C. Board adopt the following steps:

  • Use Transparent and Random Selection Processes for All Auditing Procedures.  Audits are much more likely to prevent fraud, and produce greater voter confidence in the results, if the ballots, machines or precincts to be audited are chosen in a truly random and transparent manner (a minimal sketch of such a selection follows this list).
  • Allow the Losing Candidate To Select Precinct(s) or Machine(s) To Be Audited.  In addition to conducting random audits, jurisdictions should allow a losing candidate to pick at least one precinct to be audited.  This would serve two purposes: first, it would give greater assurance to the losing “side” that the losing candidate actually lost; second, it would make it much more likely that anomalous results suggesting a programming error or miscount were reviewed.
  • Implement Effective Procedures for Addressing Evidence of Fraud or Error.  If audits are to have a real deterrent effect and catch widespread, systemic problems, jurisdictions must adopt clear procedures for dealing with audit discrepancies when they are found.  Detection of fraud will not prevent attacks from succeeding without an appropriate response.  Such procedures should also ensure that outcome-changing errors are not ignored.
  • Encourage Rigorous Chain of Custody Practices.  Audits of voter-verified paper records will serve to deter attacks and identify problems only if states have implemented solid chain of custody and physical security practices that will allow them to make an accurate comparison of paper and electronic records.
  • Record and Publicly Release Numbers of Spoiled Ballots, Cancellations, Over-votes and Under-votes.  Audits that record the number of over-votes, under-votes, blank votes and spoiled ballots (including in the case of DREs, cancellations) could be extremely helpful in uncovering software attacks and software bugs and point to problems in ballot design and instructions.
  • Audit Entire System, Not Just the Machines.  History has shown that incorrect vote totals often result from mistakes when machine totals are aggregated at the tally server.  Accordingly, good audit protocols will mandate that the entire system – from early and absentee ballots, to provisional ballots, to aggregation at the tally server – be audited for accuracy.
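
As an illustration of the first recommendation above, precinct selection can be made both random and publicly verifiable by drawing from a seed generated in the open, for example by rolling dice at a public selection meeting.  The sketch below is ours; the precinct numbers and the seed are hypothetical.

```python
# Minimal sketch of a transparent, reproducible precinct selection.
# Precinct numbers and the seed are hypothetical; in practice the seed should
# be generated in public so observers can re-run the drawing and verify it.
import random

def select_audit_precincts(precinct_ids, sample_size, public_seed):
    """Draw the precincts to audit, reproducibly, from a published seed."""
    rng = random.Random(public_seed)
    return sorted(rng.sample(precinct_ids, sample_size))

precincts = list(range(1, 144))  # hypothetical list of precinct numbers
print(select_audit_precincts(precincts, sample_size=7,
                             public_seed="2008-11-04:3-1-4-1-5-9"))
```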

CONCLUSION

Occasional mistakes and anomalies in some elections are unavoidable.  When these problems occur, the best course is to conduct a careful, public and independent investigation, and to adopt new protocols to ensure that bigger failures do not occur in the future.  The steps we have recommended in this testimony – an independent investigation of the problems on September 9, 2008, a review of the District’s pre-election testing and post-election canvass, and the institution of clear policies for post-election audits – should go a long way toward both improving elections in the District and restoring public confidence in the system.



[1] Lawrence Norden et al., The Machinery of Democracy: Voting System Security, Accessibility, Usability and Cost (Brennan Center for Justice ed., 2006).

[2] Lawrence Norden and Eric Lazarus, The Machinery of Democracy: Protecting Elections in an Electronic World (Academy Chicago 2007).

[3] Lawrence Norden et al., Post Election Audits: Restoring Trust in Elections (Brennan Center for Justice ed., 2007).

[4] Nikita Stewart & Elissa Silverman, Primary Vote Still Doesn’t Add Up, Washington Post, Sept. 22, 2008 at B01.

[5] News Release, D.C. Board of Elections and Ethics, Analysis of the Unofficial Election Night Results from the September 9, 2008 District of Columbia Congressional and Council Primary Election (Sept. 10, 2008).

[6] News Release, D.C. Board of Elections & Ethics, Analysis of the Unofficial Election Night Results from the September 9, 2008 District of Columbia Congressional and Council Primary Election (Sept. 10, 2008); Sequoia Voting Systems, Report to the District of Columbia Board of Elections & Ethics (Sept. 22, 2008) [hereinafter “Sequoia Report”].

[7] Nikita Stewart, Voting Database Is Fine, Firm Says, Washington Post, Sept. 12, 2008 at B01.

[8] District of Columbia Board of Elections & Ethics, Internal Review Committee’s Investigative Report into Election Night Results Summary Reporting Irregularities During the September 9, 2008 District of Columbia Congressional and Council Primary Election (Oct. 1, 2008), available at http://www.dcboee.org/pdf_files/nr_172.pdf [hereinafter “D.C. Board Investigative Report”]; Editorial, D.C.'s Primary Mystery, Washington Post, Oct. 2, 2008 at A22.

[9] Sequoia Report, supra note 6; D.C. Board Investigative Report, supra note 8.

[10] Editorial, Dropped, Then Caught, Columbus Dispatch, Aug. 24, 2008; Letter from David Byrd, President, Premier Election Solutions, to Jennifer Brunner, Ohio Secretary of State (Aug. 19, 2008) (on file with the author).

[11] Letter from David Byrd, supra note 10.

[12] Id.

[13] Carol J. Williams, Much Ado About Fla. E-Voting, Los Angeles Times, Nov. 16, 2006, at A18; Rob Amen, CMU Professor Investigates Vote, Pittsburgh Tribune Review, Jan. 9, 2007; Anita Kumar, Jennings Has Another Loss at Voting Machines, St. Petersburg Times, Feb. 24, 2007; Diane C. Walsh, Experts To Test Machines at ‘Rock’ State Police Ewing Site Called Ideal, Secure Spot, Times of Trenton, May 17, 2008 at A01; Diane C. Walsh, Voting Machine Test Results Will Be Released To The Public, Newark Star-Ledger, June 21, 2008 at 8.

[14] D.C. Mun. Regs. tit. 3, §§ 800–803 (2008).

[15] Douglas W. Jones, Testing Voting Systems, http://www.cs.uiowa.edu/~jones/voting/testing.shtml; John Washburn, Testing Voting Machinery, http://www.washburnresearch.org/archive/TestingGuidelines/TestingVotingMachinery.html.

[16] These states are: Alaska, Arizona, California, Colorado, Connecticut, Florida, Hawaii, Illinois, Kentucky, Minnesota, Missouri, Nevada, New Mexico, North Carolina, Ohio, Oregon, Pennsylvania, West Virginia, and Wisconsin.

[17] D.C. Board of Elections & Ethics, Final Report for the Congressional and Council Primary Post Election Audit 1 (Sept. 24, 2008), available at http://www.dcboee.org/pdf_files/nr_169.pdf [hereinafter “Audit Report”].

[18] Id. at 1.

[19] ElectionAudits.org, Principles and Best Practices for Post-Election Audits (Sept. 2008), available at http://electionaudits.org/files/best%20practices%20final_0.pdf.

[20] Minn. Stat. § 206.89 (2007).

[21] Audit Report, supra note 17, at 1.