Thursday, May 12, 2016

Jackware: coming soon to a car or truck near you?

As 2016 rolls on, look for headlines declaring it to be "The Year of Ransomware!" But what kind of year will 2017 be? Will it be "The Year of DDoS" or some other form of "cyber-badness" (kudos to my ESET colleague Cameron Camp for coining that term)? Right now I'm worried that, as the years roll on, we could see "The Year of Jackware" making headlines.

What is jackware?

Jackware is malicious software that seeks to take control of a device whose primary purpose is not data processing or communications: your car, for example. Think of jackware as a specialized form of ransomware. With ransomware, the malicious code encrypts your documents and demands a ransom to unlock them. The goal of jackware would be to lock up a car or other piece of equipment until you pay up. Fortunately, and I stress this: jackware is currently, pretty much, as far as I know, theoretical, not yet "in the wild".

Unfortunately, based on past form, I don't have much faith in the world's ability to stop jackware being developed and deployed. So far the world has failed abysmally when it comes to cybercrime deterrence. There has been a collective international failure to head off the establishment of a thriving criminal infrastructure in cyberspace that now threatens every innovation in digital technology you can think of, from telemedicine to drones to big data to self-driving cars.

Consider where we are right now, mid-May, 2016. Ransomware is running rampant. Hundreds of thousands of people have already paid money to criminals to get back the use of their own files or devices. And all the signs are that ransomware will continue to grow in scale and scope. Early ransomware variants failed to encrypt shadow copies and connected backup drives, so some victims could recover fairly easily. Now we're seeing ransomware that encrypts or deletes shadow copies and hunts down connected backup drives to encrypt them as well.

At first, criminals deploying ransomware relied on victims clicking links in emails, opening attachments, or visiting booby-trapped websites. Now we're also seeing bad guys use hacking techniques like SQL injection to get into a targeted organization's network and then strategically deploy the ransomware, reaching all the way to servers (many of which aren't running anti-malware).

The growing impact of ransomware would also seem to be reflected in people's reading habits. Back in 2013, one of my colleagues at ESET, Lysa Myers, wrote an article about dealing with the ransomware scourge. For the first few weeks it got 600-700 views a week. Then things went quiet. Now it is clocking 4,000-5,000 hits a week and the war stories from victims keep rolling in.

But how do we get from ransomware to jackware? Well, it certainly seems like a logical progression. When I told Canadian automotive journalist David Booth about ransomware on laptops and servers, I could see him mentally writing the headline: Ransomware is the future of car theft. I knew David would see where this could be headed. He's written about car hacking before, going deeper into the subject than most of the automotive press.

The more I think about this technology myself, the more I think that the point at which automotive malware becomes serious jackware, and seriously dangerous, will be the conjunction of self-driving cars and vehicle-to-vehicle networks. Want a nightmare scenario? You're in a self-driving car. There's a drive-by infection, silent but effective. Suddenly the doors are locked with you inside. You're being driven to a destination not of your choosing. A voice comes on the in-car audio and calmly informs you of how many Bitcoins it's going to take to get you out of this mess.

Why give the bad guys ideas?

Let's be clear, I didn't coin the term jackware to cause alarm. There are many ways in which automobile companies could prevent this nightmare scenario. And I certainly didn't write this article to give the bad guys ideas for new crimes. The reality is that they are quite capable of thinking up something like this for themselves.

Can I be sure there's not some criminal out there who's going to read this and go tell his felonious friends? No, but if that happens it's quite probable that his friends will sneer at him because they know someone who's already done a feasibility study of something jackware-like (yes, the cybercrime underworld does operate a lot like a fully evolved corporate organism). We are not seeing jackware yet because the time's not right. After all, there's no need to switch from plain old ransomware as long as people keep paying up.

Right now, automotive jackware is still under "future projects" on the cybercrime whiteboards and prison napkins. Technically it's still a stretch today, and tomorrow's cars could be even better protected, particularly if FCA has learned from the Jeep hack and VW has learned from the emissions test cheating scandal and GM's bug bounty program gets a chance to work.

Unfortunately, there's this haunting refrain I can't quite get out of my head, something about "when will they ever learn..."

Sunday, May 08, 2016

White paper on US data privacy law and legislation

Recently I put together a 15-page white paper titled Data privacy and data protection: US law and legislation. Among the 80 or so references at the end of the paper you will find links to a lot of the federal privacy laws, and some of the articles I cited.

Back in 2002, when I published a book on data privacy, I asked the cat to "look shy" and she struck this pose (honest!)
I figured this would be a handy resource for folks looking to learn more about how data privacy works in the US. Of course, some would say data privacy doesn't work in the US, and the white paper is written with that opinion in mind. Frankly, the whole subject is pretty complex and in writing this paper I found out I had been wrong, or at least, not quite right, about quite a few things.

Knowing how data privacy protection has evolved in the US so far should help inform its further progression. Clearly, data protection will continue to evolve in the EU and US with the arrival of the General Data Protection Regulation (GDPR), also known as the European Data Protection Regulation (the GDPR is not discussed in the white paper – the subject probably merits one of its own – but I have been clipping news on GDPR here and tweeting it here).

For more on the white paper, which was made possible by ESET, visit the We Live Security website, and be sure to sign up for regular news on all manner of data privacy and cybersecurity topics by email.

If a white paper is too much and you're just getting started in your data privacy reading, here are some good places to start:


Thursday, March 10, 2016

Infowar and Cybersecurity: Pitfalls, history, language, and lessons still being learned

I recently registered to attend a very special event in the cybersecurity calendar: InfoWarCon. The organizers of this unique gathering ask all participants to write a short blurb about what they bring to the proceedings. You can read what I wrote later on in this post, but first, some background.

The Information Warfare Conference

An institution created by my good friend Winn Schwartau, InfoWarCon has been around for more than 20 years. Even if you haven't heard of Winn, I bet you've heard the phrase "Electronic Pearl Harbor". Winn was the first person to use that term, as recorded in his 1991 testimony to Congress about the offensive use and abuse of information technology. That was five years before CIA Director John Deutch made national headlines using the term, also in congressional testimony (you may recall President Clinton issuing a presidential pardon to Deutch after he was found to have kept classified material on unsecured home computers).

The first InfoWarCon I attended was the one held at the Stouffer Hotel in Arlington, Virginia, in September of 1995. In those days, Chey and I were both working for the precursor to ICSA Labs and TruSecure, then known as NCSA, a sponsor of InfoWarCon 95. The agenda for that event makes very interesting reading. It addressed a raft of issues that are still red hot today, from personal privacy to open source intel, from the ethics of hacking to military "uses" of information technology in conflicts.

Winn was passionate that there should be open and informed debate about such things because he could see that the "information society" would need to come to grips with their implications. Bear in mind that a lot of the darker aspects of information technology were still being eased out of the shadows in the 1990s. I remember naively phoning GCHQ in 1990, back when I was writing my first computer security book, and asking for information about TEMPEST. The response? "Never heard of it; and what did you say your name was?" When I first met Winn he was presenting a session on a couple of other acronyms, EMP bombs and HERF guns. That was at Virus Bulletin 1994, one of the longest running international IT security conferences (my session was a lot less interesting, something about Windows NT as I recall).

The InfoWarCon speaker lineup in 1995 included a British Major General, several senior French, Swedish, and US military folks, Dr. Mich Kabay - chief architect of one of America's first graduate level information assurance programs, and Scott Charney, now Corporate Vice President for Microsoft's Trustworthy Computing. Many of those connections remain active. For example, the Swedish Defence University is involved in this year's InfoWarCon, via its Center for Asymmetric Threat Studies (CATS). Recent InfoWarCons have eschewed the earlier large-scale public conference format in favor of a more intimate event - private venue, limited attendance, no media - more conducive to frank exchanges of perspectives and opinions.

For Chey and me, the trip to InfoWarCon16 is personal as well as professional - after all, we have known the Schwartaus for more than two decades, somehow managing to meet up in multiple locations over the years, from DC to Florida, Las Vegas to Vancouver, not to mention Moscow. So when I got to the registration page for InfoWarCon16, which asks all prospective attendees and invitees to submit a short “What I Bring to InfowarCon” blurb, my first thought was "I don't need no stinking blurb!" But that soon passed as I relished an excuse to convey something of my background in a new, and hopefully interesting, way. Here is what I wrote...

A Student of Information Technology Pitfalls

Mining coal in the Midlands, 1944 © IWM
I was born in 1952, in the English county of Warwickshire, in a small terraced house heated by fireplaces that burned coal. That coal was mined from one of 20 pits under our county, some of which were more than a century old by then. Between 1850 and 1990, pitfalls in mines in the Midlands killed hundreds of men as they toiled to fuel the industrial revolution. Across Britain during that time period, coal pits claimed over a hundred and fifty thousand miners, but theirs were not the only lives taken by fossil-fueled industrial technology. Consider this: a few months after I was born, 12,000 Londoners died from a single air pollution incident, of which burning coal was a primary cause (the Great Smog of '52).

And so it was that, many years before computers came into my life, I was well aware that technology brings pitfalls as well as benefits. Like many of the swords displayed in Warwick Castle, originally built by William the Conqueror in the eleventh century, technology is double-edged. This is certainly true of information technology. It can be good for growth, good for defense, but also tempting for offense.

Since I started researching my first computer security book in the late 1980s I have thought long and hard about such things, sometimes in ways that others have not. I have listened closely to the language invented to articulate the uses and abuses of this technology. For example, in 2014, I presented a paper at CyCon titled “Malware is called malicious for a reason: the risks of weaponizing code” in which I introduced the term ‘righteous malware’ (IEEE CFP1426N-PRT).

 In 2015, I analyzed the problem of measuring the scale and impact of cybercrime in the peer-reviewed Virus Bulletin paper: “Sizing cybercrime: incidents and accidents, hints and allegations”. The serious shortcomings of both public and private sector efforts to address this issue were articulated and documented in detail. I am currently doing post-graduate research at the University of Leicester seeking to identify key traits of effective cybersecurity professionals. But more importantly, for the past 25 years I have engaged myself as much as possible - resources and life events permitting - in the ongoing conversation about how best to reap the benefits of information technology without suffering from what have been called its downsides, its pitfalls.

Speaking of which, it is relevant to note, in the context of InfoWarCon, that the word pitfall did not originate in coal mines, but on the battlefield. The Oxford English Dictionary identifies 1325 as the first year it was used in written English. The meaning? “Unfavourable terrain in which an army may be surrounded and captured.” To me, that doesn't sound a whole lot different from some parts of cyberspace.

Monday, February 01, 2016

Some cybersecurity-related videos

Here are some videos of projects I have been involved with over the past 12 months or so, starting with the Cyber Boot Camp, held in June of 2015. ESET and Securing Our eCity hosted the top eight teams in the San Diego Mayors' Cyber Cup competition for five days of hands-on cybersecurity education on the campus of National University. My colleague Cameron Camp led the "war room" exercises.



Cybersecurity and cybercrime were the topics of this TEDx talk I delivered in San Diego last October.



In December of last year I spoke to a meeting of the Sage Group, an association for entrepreneurs and executives in San Diego. While I covered some of the same topics as the TEDx talk, I also discussed ESET and the origins of my interest in cybersecurity.



Several well-attended webinars were recorded over the last year or so.


Saturday, September 12, 2015

Crime, ignorance, ethics, and irony in the wake of the Ashley Madison affair

I'm hoping that the Ashley Madison hack will be a turning point in cyber-ethics, the point in time when we collectively decide that:
  • hacking companies and publishing the private information they have stored about people is morally reprehensible; 
  • lying to your customers about how you handle their data is unforgivable and needs to be punished; 
  • passing judgment on the sex lives of consenting adults is a fool's game; 
  • hacking people and products just because you don't like them is irresponsible and stupid; and 
  • hacking organizations to show they are not protecting data as well as they could be is a waste of skills and everyone's time - we know this already so creating more evidence does nothing to advance human knowledge or improve life on earth.
Sadly, a lot of the early media coverage and social discussion of the Ashley Madison hack showed few signs that we are at this hoped-for ethical turning point. In light of this, I thought I would try to move the discussion forward with thoughts on five different parties to this whole mess.

1. The Perpetrators: So-called hackers

The people who recently stole and published gigabytes of data from the website AshleyMadison.com need to be identified and made to answer for violating the privacy of the tens of millions of real people whose information is apparently in that data dump (the number of real people affected is hard to determine because the website's owners, Avid Life Media or ALM, made little effort to prevent people creating multiple fake accounts and are alleged to have created many such accounts themselves).

To be clear: there is nothing brave or noble or good about what was done by these "hackers" (whom it would be better and more accurate to call "data thieves"). Furthermore, any deaths or other harms that come from the theft and release of this data are on the heads of the person(s) who perpetrated these acts. They had no right, moral or otherwise, to carry out these acts.

By stealing and then publishing this data, the perpetrators have enabled countless scams, frauds, and other criminal acts, not least of which is blackmail. There is no legal, logical, or ethical analysis of their actions which can absolve them of responsibility for what they have done (and which cannot be undone, as well they know).

As for the rest of the world, most notably the world's media, claiming that people who are named in that data dump somehow deserve exposure is a totally untenable position, not least because many of those named didn't actually have affairs, or seek affairs, or even sign up to the site. Some people surfed the site out of curiosity or for titillation; and registering people on the site was a common prank, made possible by the irresponsible and frankly avaricious data handling practices of its owners.

Look for someone to sue the Ashley Madison data thieves for privacy violation, which is different from suing the company that failed to keep the secrets from which it made its money, Avid Life Media. The latter form of legal action is already underway to the tune of $578 million.

2. The Corporate Victim: Avid Life Media

Whatever you think of the business model of ALM, and I happen to think it sucked, they have been victimized by criminal perpetrators. If you condone the actions of those perpetrators you are appointing yourself judge and jury and enforcer of your own values, a course of action which, if replicated, poses a threat to society.

What if I dislike the way you do business? What if I think your employer needs a dose of "hacktivism" acted out as the righteous liberation of confidential data, which may happen to include, like it did in the Sony Pictures hack, the identity data of current and former employees, yourself included?

Are we really going to make the leap from justifiable anger at shady business practices to trashing cyberspace and turning it into a playground for disaffected bullies and jerks? What do we do when someone gets hurt? When someone takes their own life? Do we just dismiss them as collateral damage in our self-appointed war on whatever it is we don't like?

3. The Corporate Creeps: Avid Life Media

In their eagerness to make money, the folks running AshleyMadison.com not only cut corners on security, they deceived people. Here's an example: an email that was sent to someone who had registered on the website and then asked to be removed. The email certainly reads as if the person's request had been honored:

However, after the recent dump of data from ALM's computers, this person found their information was still there, more than five years after they thought it had been removed. At some point ALM actually introduced account removal as a paid service! I don't know when that was, but if you've spent any time studying privacy law and the widely held principles of fair information practices, it is simply staggering that a commercial organization would charge a person to delete data about them.

Of course, if you read the above email closely, it doesn't actually say the person's data has been erased. This is just one of many ways in which ALM used weaselly wording in an effort to make money however it could. While making apparently serious claims to guarantee customers an affair, the ashleymadison.com terms and conditions state "there is no guarantee you will find a date or partner on our Site or using our Service. Our Site and our Service also is geared to provide you with amusement and entertainment."

But when you take money for promising to remove people's information, and then don't? That's beyond weaselly, and many people have alleged that their data persisted on ALM's systems even after they had paid to have it removed. These deceptive practices are particularly heinous because of how Ashley Madison positioned itself, as both the epitome of discretion and the endorser and enabler of actions some portion of the population find to be immoral and worthy of exposure.

4. The Innocent Victims: Ordinary people

To be clear, meeting people online is not, in my opinion, immoral. I met my partner of 30 years through a dating service, one that was located in the pages of the San Francisco Bay Guardian. We used pen and paper and postage stamps, not computers, but it was clearly the precursor to online dating services, with which I have no problem. I know numerous couples who, like my partner and I, met through a dating service of some kind and remain happily married and monogamous.

And as long as nobody gets hurt, I don't have a problem with adults enjoying non-monogamous inter-personal relationships. I'm pretty sure many monogamous people fantasize about affairs without having them, which may contribute to their staying in a relationship. And I expect a lot of Ashley Madison clients were doing just that. Of course, as I noted earlier, many people, married or otherwise, surfed the site out of curiosity or for titillation, or found themselves registered there as a prank.

5. The Big Loser: Society at large

Make no mistake, if we continue down this road - exercising a self-appointed right to publish confidential personal data without the data subject's permission - we all lose. And by all, I mean humanity, and by lose, I mean serious losses, not least of which are the potential benefits of responsible data sharing, from telemedicine to population healthcare and genetic cures, from energy efficiency to environmental protection and improvement programs, and so on.

It is my firm and considered opinion that the promised benefits of big data and the Internet of Things will not be realized if we humans don't learn to avoid the temptation to abuse the underlying technology for selfish and/or misguided purposes.

Which leaves us with this irony: the criminals who stole and published the Ashley Madison data, wrong as they were, may have given us an opportunity to take stock of the way we are using digital technology, revealing in the process how far we have yet to go in our efforts to enjoy its benefits while managing its risks.

Tuesday, August 04, 2015

The cost of cybercrime: short version

The cost of cybercrime = $66.66.

That rather beastly number is a rough and very modest approximation of the cost of 18 minutes of my time, which is how long it just took me to make an online tuition payment to my school in England. Allow me to explain.

1. The tuition for my MSc in the Criminology Department at the University of Leicester is paid in multiple chunks of about $2,800 per chunk.

2. The university has a very convenient online payment system.

3. I am fortunate right now to have a credit card that can handle $2,800.

4. But I cannot charge $2,800 to the card via a website that is outside the US unless I spend 18 minutes on the phone with the bank to let them know this charge is okay (believe me, I've spent longer, and I've tried doing the transaction without the call enough times to know that this is typical, across multiple cards/banks).

5. That phone call is required because there is so much payment card fraud being perpetrated around the world today, most of which can be classified as cybercrime.

6. I work in cybersecurity. The hourly rate for an appropriately certified independent consultant in this field is likely to be at least $200. So 18 minutes of wasted time at that rate comes to $60; round the wasted time up to a third of an hour (and remember, I've often spent longer on these calls) and you arrive at the rather beastly $66.66.

Now multiply that by all the transactions that fall into the "must call us" category. Like when you're trying to surprise your wife with an upgrade as you're flying out of Heathrow and, despite the fact that you told the credit card company you would be in England, they still required a call. At that rate the cost of cybercrime, just in terms of lost productivity, quickly adds up.

As for the rate calculation, I think I'm being reasonable. Back in the 1990s, our IT security consulting firm billed clients $2,500 per person per day, which was a combination of overhead and direct labor costs. The going rate today for specialists in this field, like the people brought in to respond to a big corporate data breach, can be as high as $900 per person per hour. I'm not saying my time is worth more than another person's, I'm just trying to put a number on the surcharge that cybercrime imposes on an otherwise efficient payment processing system. Time is money, and spending 18 extra minutes to complete an online transaction is costly, whoever you are and however you look at it.
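For anyone who wants to run the numbers themselves, here is a minimal back-of-the-envelope sketch, not a formal cost model: the 18 minutes and the $200 and $900 hourly rates come from the anecdote above, while the calls-per-year count is a placeholder assumption you would replace with your own.

```python
# Back-of-the-envelope sketch: the productivity surcharge that "call us
# first" fraud checks impose on online payments. All figures illustrative.

def lost_time_cost(minutes_on_phone: float, hourly_rate: float) -> float:
    """Cost of one verification call, valuing time at a given hourly rate."""
    return (minutes_on_phone / 60.0) * hourly_rate

if __name__ == "__main__":
    minutes = 18          # the call length from the anecdote above
    calls_per_year = 12   # placeholder: one "must call us" transaction a month
    for rate in (200, 900):  # hourly rates mentioned in this post
        per_call = lost_time_cost(minutes, rate)
        print(f"${rate}/hr: ${per_call:.2f} per call, "
              f"${per_call * calls_per_year:.2f} across {calls_per_year} calls/year")
```

At $200 an hour the per-call figure is $60, which is why the headline number rounds the wasted time up to a third of an hour; at breach-response rates the surcharge climbs quickly.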

And this is nothing to do with my university. I have the same problem buying tickets for international air travel. And in some ways I'm glad I have the problem because it means my bank is protecting my account. But I'm also sad that the darker side of human nature has imposed these limits on our enjoyment of technology's many potential benefits (like studying at a university in another country).

Speaking of time, I've spent quite a bit of it studying the size and cost of cybercrime in my work as well as at school. I will be talking about this topic later this year at the Virus Bulletin Conference in Prague, as well as at this month's ISSA meeting in San Diego. Measuring the cost of cybercrime is not easy; indeed, it might be impossible. But I do think you can argue that the cost of cybercrime could get too high: if we reach a point where the cost of cybercrime deters the adoption of otherwise helpful technology, then we will have a much bigger problem than me getting grumpy on the phone with my otherwise very helpful bank.

Saturday, May 30, 2015

Recent security research output

Good evening. Welcome. Just time for a quick update (with apologies to John Oliver and Last Week Tonight).

The following links are humbly presented as evidence that I am still very actively involved in researching security, mainly as it relates to crime and computers, a.k.a. cybersecurity and cybercrime.

1. Blog posts on We Live Security, of which there are many. These are conveniently listed here.

2. Webinars on Brighttalk, which include this introduction to risk analysis and this look at cybersecurity legislation.

3. Slide decks posted on Slideshare, like this one: Cybercrime and the Hidden Perils of Patient Data. I used that deck when talking to a group of about 40 dentists in San Diego. Here's a deck I used in security awareness sessions with about 400 petroleum plant workers in Texas. 

4. Snippets posted on Twitter by @zcobb, which may consist of quotes, statistics, or other pieces of information that I think will help people better understand security challenges. Here's an example:
So, while the rate of posting here on S. Cobb on Security has not been stellar of late, it's not because I'm not working on security problems.

Sunday, January 04, 2015

Why Willie Sutton Robbed Banks: the real answer, and what it has to do with the #SonyHack

Willie Sutton was one of the most notorious American bank robbers of the twentieth century, spending two years on the FBI's list of Ten Most Wanted Fugitives.

Sutton is also the subject of one of the most frequently cited - and bogus - anecdotes in all of security (we're talking everything from physical security to information security and cybersecurity). At just about every security conference that I've attended, someone has used some version of the following:
"When a reporter asked the bank robber Willie Sutton why he robbed banks, Sutton replied: "Because that's where the money is.""

Saturday, December 20, 2014

Why the #SonyHack is not cyberwar

Here are two links that are essential reading for anyone tempted to invoke the term "cyberwar" to describe the hacking of Sony Pictures and its subsequent canceling of The Interview.

Book: The Tallinn Manual on the International Law Applicable to Cyber Warfare. This is the primer on the subject. Readable online at no charge.

Article: Cyberwar: reality or a weapon of mass distraction. Very readable paper by my friend and boss, security expert Andrew Lee (.pdf file).

Hopefully, politicians and commentators talking about the Sony Pictures hack will familiarize themselves with the facts and arguments laid out in the above publications before crying War!

Friday, December 19, 2014

Dear George Clooney - A word about cybersecurity

The following letter was written in response to remarks made by the actor and activist, George Clooney, in this article: Hollywood Cowardice: George Clooney Explains Why Sony Stood Alone In North Korean Cyberterror Attack

Dear Mr. Clooney,

I have great respect for your work sir, on film and off; I have a feeling we hold many of the same views on politics and economics and social justice. So it makes me sad to see how badly people have briefed you on the stark realities of cybersecurity. You seem to be under the impression that America can, with impunity, tell cyber criminals to "bring it on". You appear to be having difficulty understanding why big companies don't want to provoke hackers. Please allow me to explain.

In my own work I have seen the way in which multinational companies generate billions of dollars in profits by applying digital technology to improve productivity. My job has been, for the better part of two decades, advising companies on how to defend this highly profitable digital technology that they deploy.

Sadly, time and again, too many times to count, my fellow security professionals and I run into companies and company executives who reject our advice as too costly to implement, as an unreasonable burden on their business. When we say that the path they are taking comes with a large amount of risk, they either don't believe us or they say, "fine, we'll risk it."

Saturday, August 23, 2014

The Continuing Pain of Cybercrime Explained in One Simple Graph


Let line A show the rate at which we are increasing the following variables:
  • number of people with cyber skills
  • the amount of resources devoted to deterring cybercrime
  • the level of regulatory compliance
  • the national resolve to address the problem
  • international resolve to address the problem
Now let line B show the rate at which the following are increasing:
  • number of people on the Internet
  • number of things on the Internet (IoT)
  • the ease of use and accessibility of cybercrime tools
  • the number of people prepared to engage in cybercrime
Graph these over time and the gap between them gives you C: the pain of cybercrime. The more we can increase the upward angle of A, while reducing the upward angle of B, the less cybercrime we will experience.
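For readers who prefer a few lines of code to a napkin graph, here is a purely illustrative model of the same idea, under the assumption that C can be read as the gap between line B and line A; the starting values and growth rates are invented for the sake of the example.

```python
# Illustrative model of the cybercrime "pain gap": defensive capacity (A)
# versus criminal opportunity (B). Every growth rate here is a made-up
# parameter, chosen only to show how the gap behaves.

def growth(start: float, annual_rate: float, years: int) -> list:
    """Compound a starting value at a fixed annual rate, year by year."""
    return [start * (1 + annual_rate) ** t for t in range(years + 1)]

def pain_gap(a_rate: float, b_rate: float, years: int = 10) -> list:
    """C = how far B (opportunity) outruns A (capacity) in each year."""
    a = growth(100, a_rate, years)  # skills, resources, resolve
    b = growth(100, b_rate, years)  # users, devices, crime tools, criminals
    return [max(0.0, bb - aa) for aa, bb in zip(a, b)]

if __name__ == "__main__":
    # A steeper A (more people, resources, and resolve) shrinks the gap.
    for a_rate in (0.05, 0.10, 0.15):
        c = pain_gap(a_rate=a_rate, b_rate=0.15)
        print(f"A at {a_rate:.0%}/yr vs B at 15%/yr -> gap after 10 years: {c[-1]:.1f}")
```

The point of the sketch is simply that when B compounds faster than A, the gap widens every year; only when A keeps pace does C stay flat.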

Just to be clear, globally speaking, C is a net negative. Cybercrime can be positive for criminals and their immediate economic environs, such as communities with limited options for legal employment of a gainful nature. However, C undermines the primary factors by which the upward angle of A can be increased: economic prosperity and political stability.

Saturday, August 09, 2014

Is this your Sample Information Security Policy?

If you or your organization is the original creator of the following Sample Information Security Policy then I would like to hear from you: 
Every organization needs an Information Security Policy (although they may call it something different). When used appropriately, the policy document guides the organization's whole approach to security, and a copy of it may well be requested during discussions around mergers, partnerships, and bids for new business. I have discussed the role and importance of security policy in several webinars, including this one directed at small and medium-sized businesses.

Sunday, April 27, 2014

Business Continuity Management: Sounds boring yet saves lives, companies, butts

Lately, I've been revisiting an area of information security into which I have dived deeply on several occasions over the years: Disaster Recovery, which is pretty much the same as Business Continuity Management or BCM, which includes Business Continuity Planning (BCP). Along the way I have assembled a list of high quality BCM resources and articles that folks might find useful (and available for free in most cases). You will find the list at the end of this article. Here's a scene-setting quote from one of the articles:
Disasters can strike at any time – often with little or no warning – and the effects can be devastating. The cost in human lives and property damage is what makes the evening news because of the powerful tug of human interest. Much less coverage, however, is given to the disruption, struggle and survivability of business operations. A study fielded by the Institute for Business and Home Safety revealed that 25 percent of all companies that close due to disasters – hurricanes, power failures, acts of terror and others – never reopen. (Disaster Preparedness Planning: Maintaining Business Continuity During Crisis, Disruption and Recovery)

Monday, April 14, 2014

Internet voting security: a scary tweet that reached 227,391 people (even before Heartbleed)

Last month I tweeted a picture of some computer code that was part of an Internet voting system. That picture was re-tweeted so many times it reached more than 220,000 Twitter users. So, that had to be some pretty amazing code, right? Yes, as in amazingly frightening. Take a look, and then read on for a short explanation, and also a long one if you have the time.


A very clever computer scientist, Joe Kiniry, has been concerned about the security of Internet voting applications for some time. Joe is a former Technical University of Denmark professor, now Principal Investigator at Galois. In his research Joe noted this section of code in a program that was actually used for national elections in a European country.

The coder(s) included a comment reminding themselves that security checks still needed to be coded. My tweet suggested that this slide nicely illustrated the question of “what could possibly go wrong?” when it comes to Internet voting. Of course, the best answer to that question is: so much could go wrong that you simply cannot use the Internet to elect public officials in a fair, honest, secret ballot!
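I can't reproduce the actual slide here, but the pattern it captured is easy to describe in code. The following is a hypothetical sketch of that pattern, with every function and name invented for illustration; it is emphatically not the code Joe Kiniry found.

```python
# Hypothetical illustration only -- this is NOT the code from the slide,
# and every name below is invented. It simply shows the pattern the slide
# captured: a ballot-handling routine in which the security checks survive
# only as a comment to be dealt with later.

def record_vote(ballot: dict, session: dict) -> None:
    # TODO: verify that the session belongs to an eligible, authenticated voter
    # TODO: validate the ballot's signature before accepting it
    store_ballot(ballot)

def store_ballot(ballot: dict) -> None:
    """Stand-in for the real storage layer."""
    print("ballot accepted:", ballot)
```

The point is not the particular language or function names, but the fact that a reminder to add security checks can ship in software used to run real elections.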