By John Gideon on 8/2/2007, 8:15pm PT  

Blogged by John Gideon, VotersUnite.Org

"The problems we found in the code were far more pervasive, and much more easily exploitable, than I had ever imagined they would be." - Matt Blaze, 2 August 2007

Today, Secretary of State Bowen's office released the California source code review reports, covering the Diebold, Sequoia, and Hart InterCivic voting systems.

Matt Blaze was the lead researcher for the Sequoia source code team. On his blog, Exhaustive Search, Blaze discusses the results of all three inspections.

In spite of the short time and other sub-optimal conditions, the project found deeply-rooted security weaknesses in the software of all three voting systems reviewed.

I was especially struck by the utter banality of most of the flaws. Exploitable vulnerabilities arose not so much from esoteric weaknesses that taxed our ingenuity, but rather from the garden-variety design and implementation blunders that plague any system not built with security as a central requirement. There was a pervasive lack of good security engineering across all three systems, and I'm at a loss to explain how any of them survived whatever process certified them as secure in the first place. Our hard work notwithstanding, unearthing exploitable deficiencies was surprisingly --- and disturbingly --- easy.

Blaze then concludes with what may be a hint of decisions to come:

The root problems are architectural. All three reviewed products are, in effect, large-scale distributed systems that have many of their security-critical functions performed by equipment sent out into the field. In particular, the integrity of the vote tallies depends not only on the central computers at the county elections offices, but also on the voting machines (and software) at the polling places, removable media that pass through multiple hands, and complex human processes whose security implications may not be clear to the people who perform them. In other words, the designs of these systems expose generously wide "attack surfaces" to anyone who seeks to compromise them. And the defenses are dangerously fragile --- almost any bug, anywhere, has potential security implications.

This means that strengthening these systems will involve more than repairing a few programming errors. They need to be re-engineered from the ground up. No code review can ever hope to identify every bug, and so we can never be sure that the last one has been fixed. A high assurance of security requires robust designs where we don't need to find every bug, where the security doesn't depend on the quixotic goal of creating perfect software everywhere.

In the short term, election administrators will likely be looking for ways to salvage their equipment with beefed up physical security and procedural controls. That's a natural response, but I wish I could be more optimistic about their chances for success. Without radical changes to the software and architecture, it's not clear that a practical strategy that provides acceptable security even exists. There's just not a lot to work with.

I don't envy the officials who need to run elections next year.