Public-interest technology, information security, data privacy, risk and gender issues in tech
Thursday, August 30, 2007
Scobbs Blog on Hiatus
There won't be any new posts here for a while, but you can catch the latest news and views at Cobbsblog.com.
Saturday, June 23, 2007
Trust in Banks Declines: UK distrust rises from 47% to 71%
Interesting study indicates banks cannot substitute a virtual presence for local branches and a commitment to community.
Nearly three-quarters of UK customers do not trust their retail bank, and the more virtual a bank is, the lower the level of trust, according to a survey by Unisys... When Unisys asked the same questions in 2005 and 2006, 47 per cent of customers indicated that they did not trust their retail bank. This year the figure had risen to 71 per cent... the attributes most cited for eroding trust are 'disrespectful attitudes', 'poor privacy', 'weak IT' (such as websites), 'poor corporate governance' and a 'lack of investment in the local community'.
I'm just speculating here, but I'd say the constant drumbeat of security breaches and phishing scams involving online banking is having an erosive effect on trust.
Tuesday, May 22, 2007
What SMBs Need to Know About Computer Security Threats
I found a handy set of pages by a Victor Ng titled What SMBs Need to Know About Computer Security Threats in a publication called SMBedge which describes itself as "The Pulse of SMBs in Asia Today."
It is basic infosec 101 material that is handy because you can send that link to someone who doesn't know what infosec is--but should--just to get them started. Ng's material is more current than some of the 'intro' articles I had been using for this purpose in the past. You know, when someone says "So, you're a computer security consultant? I got a question. Should I renew that Symantec software that came with the PC I bought last year for inventory? I heard there are zombies out there." What do you tell them? Ask for their email address and send them a link.
Of course, this may be someone to whom you have just paid money for services rendered at the rate of $1 a minute and they are now inviting you to donate about $20 of your time giving them a basic education (although they probably won't see it like that). As a CISSP, I always try to strike a balance between politely doing my civic duty and giving them that 10 minute intro and telling them to just go buy a book (valuing my time at $2 per hour minimum).
Usually it takes less than 5 minutes talking to the SMB to figure out if it is in more immediate danger than the rest of us, i.e. doing something really dumb with their systems. If they are, I am obliged, I think, to advise them to call in a professional. If I have the time I might be the professional and do a 10 minute fix for free, but then you start to encounter other issues, like: the problem you are fixing is just the tip of the iceberg; they have no budget; and what about liability if there is no formal contract?
Saturday, May 19, 2007
TJX Discovering Cost of Security Failure
Here is a pretty good reason to make sure your company is doing a good job of protecting customer data: TJX: Data breach damage $25 million and counting.
That's right, according to SearchSecurity, the bottom line for TJX Companies Inc. took a big hit in the first quarter of 2007, thanks to a $12 million charge tied to the security breach that exposed at least 45.7 million credit and debit card holders to identity fraud. In total, the breach has cost the company about $25 million to date. And that doesn't include the cost of customers who decided to shop elsewhere.
TJX executives better hope that they can document the security policies and practices they had in place to prevent the hacking that took place. If a judge deems them to be up to par, they may avoid censure even though they were hacked. An active and well-documented security program is a good defense against charges of negligence or failure to meet the standard of due care.
Friday, May 18, 2007
As Predicted: Lawsuits up the security stakes
As predicted by myself and numerous other information security experts, lawsuits are becoming an increasingly common response to a security breach. The latest example: The American Federation of Government Employees is suing the Transportation Security Administration after the TSA lost a hard drive containing employment records for some 100,000 individuals, including names, social security numbers, dates of birth, payroll information and bank account routing information.
The drive went missing from the TSA Headquarters Office of Human Capital. The names included various personnel and even U.S. Sky Marshals. The lawsuit is AFGE, et al v. Kip Hawley and TSA (AFGE = American Federation of Government Employees and Kip Hawley is the TSA Administrator). The AFGE claims that, by failing to establish safeguards to ensure the security and confidentiality of personnel records, the TSA violated both the Aviation and Transportation Security Act and the Privacy Act of 1974.
The Aviation and Transportation Security Act (ATSA) requires the TSA administrator "to ensure the adequacy of security measures at airports." The 1974 Privacy Act requires every federal agency to have in place security measures to prevent unauthorized release of personal records. Losing a hard drive containing employment records for some 100,000 individuals constitutes unauthorized release. Stay tuned for progress in the suit.
TSA web site dedicated to this incident.
Saturday, May 12, 2007
Penn College Students Win Award for Computer-Security Video
As reported in PCToday, Penn College Students Win Award for Computer-Security Video. This is REALLY encouraging. Congratulations, guys!
I am a big believer in awareness programs. Check out the free podcast of tips on developing successful security awareness programs over at Cobb Associates.
Thursday, May 10, 2007
Public Wi-Fi Often Wide Open, But Who Cares?
Nice article by David Colker of the LA Times, republished here in the Chicago Tribune: Public Wi-Fi may turn your life into an open notebook. He vividly reminds us that surfing with your notebook at Starbucks can be a less than private experience. There is quite a bit of personal irony in this for me.
Wi-Fi at Starbucks is served by T-Mobile which made a big noise in October of 2004 about offering secure Wi-Fi at all its hot spots: T-Mobile Rolls Out Strong Security at Wi-Fi Hot Spots. I am personally aware of this because back then I was Chief Security Executive at STSN, now iBAHN, which provides Internet service to thousands of hotels, hotel lobbies, restaurants, and conferences around the world. At the time, iBAHN was close to completing its own roll-out of secure Wi-Fi and was under the impression it would be the first such major provider to offer this level of security at all its locations. Naturally, T-Mobile's announcement stung, partly because it garnered headlines while being ambiguous. Consider this "reporting" which is close to the wording of T-Mobile's press release:
T-Mobile is introducing strong, 802.1x-based authentication and encryption across its network of 4,700 hot spots. The move, which appears to be the first use of advanced 802.1x-based security by a national mobile carrier in U.S. hot spots, leverages the existing 802.1x infrastructure used to authenticate GSM (Global System for Mobile Communications)/GPRS (General Packet Radio Service) cell-phone users. "CIOs across the country have been asking for enhanced security, and we're the first U.S. wireless carrier to deliver it." But T-Mobile was not the first to deliver strong, 802.1x-based authentication and encryption. iBAHN was already doing that, but had not talked about it publicly because the roll-out was not complete. T-Mobile decided to claim the glory by talking about their own roll-out before it was complete. I know because, at the time of the announcement, I was in downtown Chicago and I walked many blocks to test several Starbucks locations to see if 802.1x authentication was indeed available. The results were mixed, which was some consolation to my boss, Brett Molen, iBAHN's CTO, and to CEO David Garrison.
Despite the fact that Brett and David were two of the best bosses I have ever had, I decided to leave iBAHN in 2005 and take a break from the corporate world. For a while I lost track of the secure hotspot debate. But now that I am back "on the road again," so to speak, I have had occasion to try the Wi-Fi at Starbucks in several locations around the world over the last six months and have noticed that the logon has changed considerably. It's a lot less complicated, with a lot less warning about potential security problems, than it was in 2004, and 802.1x-based authentication was apparently not offered.
Which suggests that there is considerable truth to what some of us security experts have been saying ever since computers escaped from Fortress Data Center in the eighties: Unless security is really simple and seamless, users won't use it. About the only exception to this is the user who has been educated about the risks. That is why iBAHN spent a lot of time educating its chosen market place (hotels and conferences) about those risks. And that is why iBAHN makes money selling secure connectivity at a premium.
Monday, May 07, 2007
Spector CNE and HTTP Traffic Cops
Remember when SPECTOR stood for Special Executive for Counter-Intelligence, Revenge and Extortion?** Now comes Spector CNE - one of a group of products I've been sniffing around in response to this question: What's to stop employees from copying and pasting confidential company data into blogs and Google App documents?
I've been putting this question to clients lately and not getting very good answers (where 'good' = 'good for their information security'). I don't feel comfortable sharing specifics on a public web page, but I think this is a big problem for some big companies. I also think this could become yet another front in the endless arms race between the good guys and the bad guys (where 'bad guys' = everyone from ruthless corporate spies to weak-willed individuals under stress, or merely under-trained). So, if anyone knows of a good HTTP traffic cop, or any other solution to this problem, I'd love to get your comments on it.
**If you already knew what SPECTOR stood for, then you already know the name of its on-screen nemesis. But do you know the make and model of the weapon said nemesis is brandishing in the famous black tie promotional 'shots' for the second movie in the genre? I will email an electronic copy of my privacy book to the first person who sends the right answer to scobb at scobb dot net.
Monday, April 30, 2007
Image Vulnerability: Is anyone looking at the outbound threat?
Remember last summer when the warnings about a surge in image spam started to appear? (Image spam being defined as unsolicited commercial email in which the message is presented as an image rather than text.) Then we saw spam volume drastically increase towards the end of 2006, with much hand-wringing over the difficulty of detecting image-based spam.
Well, I wonder how many companies have started to worry about the outbound-image threat. A certain percentage of companies do monitor outbound Internet traffic for trade secrets and inappropriate content. Some just monitor email. At least a few monitor web traffic. But I am fairly sure most of this is filtering based on text. Even then, I don't know how many would actually spot an employee typing company secrets into a password-protected blog hosted outside the company.
But what if the employee scans images of confidential company documents and uploads the JPEG files to a blog? Would that trigger a response from information security? Scanning the content of a JPEG for sensitive text is not impossible, but it is certainly processor intensive and in some ways it is not unlike the problem of detecting image-based spam.
Of course, one way of reducing the amount of image-based spam coming into an enterprise is to use the Turntide anti-spam technology that chokes off spam without a filter, instead using a behavior-based approach (now available as the Symantec Mail Security 8100 Series Appliance). Not sure if this would work the other way round. I know there was some discussion of using it to prevent enterprise networks from sending spam. If someone tried to send out 90,000 scanned pages, one after another, as JPEGs, would it show up as an anomaly and trigger some alarms?
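For what it's worth, here is a purely illustrative sketch (in Python) of the kind of behavior-based check I have in mind. It is not how the Turntide/Symantec appliance works; the thresholds, window size and log format are all hypothetical.

from collections import defaultdict, deque

WINDOW = 100        # recent per-user observations kept as a baseline (hypothetical)
SPIKE_FACTOR = 10   # alert when the current hour is 10x the user's baseline (hypothetical)

class OutboundImageMonitor:
    """Flags users whose outbound image volume jumps far above their own baseline."""
    def __init__(self):
        self.history = defaultdict(lambda: deque(maxlen=WINDOW))

    def record(self, user, images_sent_this_hour):
        """Record one hour's outbound image count; return True if it looks anomalous."""
        past = self.history[user]
        baseline = sum(past) / len(past) if past else 0.0
        past.append(images_sent_this_hour)
        # Deliberate simplification: never alert on a brand-new user with no baseline.
        return baseline > 0 and images_sent_this_hour > SPIKE_FACTOR * max(baseline, 1.0)

# Example: a user who normally sends a handful of images suddenly sends 500 in an hour.
monitor = OutboundImageMonitor()
for hour_count in [3, 5, 2, 4]:
    monitor.record("jdoe", hour_count)
print(monitor.record("jdoe", 500))   # True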
BTW, the 90,000 number is not entirely random. In 1992 about twenty cases of confidential documents belonging to General Motors were physically shipped to Volkswagen headquarters in Wolfsburg (many of them allegedly transported aboard a Volkswagen corporate jet, via the Spanish residence of J. Ignacio Lopez de Arriortua, then Vice President at GM in charge of Worldwide Purchasing, later hired by VW). The number of purloined pages was put at 90,000.
And BTW, this piece of infosec trivia was my excuse for featuring Ron Patrick's amazing street legal VW (Beetle) Jet.
Friday, April 20, 2007
White Hat Hacking for Rainy Day Fun: Weak search forms still revealing too much data
What better way to spend a rainy April day than white hat hacking? Experience the thrill of hacking with none of the guilt. I highly recommend this for anyone who has difficulty understanding why hackers do what they do (and you are NEVER going to be a really good information security professional unless you DO understand what hacking is about).
Allow me to swap my white hat for my linguist cap for a moment (B.A. Honours, School of English, University of Leeds--one year behind guitar virtuoso Mark Knopfler but way ahead of the wonderfully talented Corinne Bailey Rae--and would you believe I can't even carry a tune, but I digress). It has to be said that hacking is one of the most hotly contested words of the information age. In justifiable homage to the original good-hearted hackers, many infosec professionals use the qualifier "criminal hackers" to distinguish the bad guys from the good guys (that's gender-neutral colloquial 'guys' by the way). The good guys, who don't break laws, can be referred to as white-hat hackers, the bad guys being black hat. I am actually leaning towards 'bad actors' as a preferred term for the bad guys (with apologies to my thespian readers).
So, one rainy April afternoon I was wearing "my film producer hat" and working the web to promote the appearance of the film I produced, Dare Not Walk Alone, at two overlapping film festivals, one in Winston-Salem and the other in Columbus, Ohio. Neither the director nor I could afford to attend these events in person and we were worried that turnout would be low. I decided to surf the web sites of colleges in the target areas and to identify faculty with an academic interest in civil rights history (and thereby interested in the film enough to tell their students about it). In the process I found a classic example of weak web design that was hackable.
After using standard search tools to identify the people I wanted to contact, I looked for their email addresses. Many organizations like schools and hospitals have a directory of staff phone numbers and email addresses. However, to prevent a variety of problems, such as spam, these and other details are not displayed wholesale in a list, but one at a time in response to a name search. In other words, a form on a web page enables users to search a database of people (in infosec terms, this database can be referred to as an asset). The premise is that you have to know the person's name to find their information.
I used this sort of directory to email several professors at several schools. However, I also found something interesting. These forms usually consist of two fields, for Last Name and First Name, together with a Submit button. The way it's supposed to work is that you, perhaps an aspiring Physics major, enter Einstein in one field, Albert in the other, click Submit, and get the phone number and email address for Prof. Einstein. However, such forms can be a pain for users who can't recall the professor's full name, so the form might allow you to enter Einstein for the last name and the letter 'A' for the first name. And herein lies a dilemma that can become a problem: how 'vague' to make the search. For example, if I can enter 'D' in the last name field and 'J' in the first name field, I can find all the Jane and John Does in the database. What you need is a fairly clever set of rules built into the form to control the results of any conceivable form input.
You see, in terms of information security, one can reliably predict that someone (referred to as an agent) will at some point click the Submit button without entering any characters at all in either field. If the result of this action is to reveal all of the records in the database (what we might call a means), one can reliably predict, based on past history, that this method will eventually be used to make a copy of all the records in the database (asset).
Thus, by failing to properly code the handling of form input from this search page, the folks who put up the page have created a vulnerability. This becomes a means of attack and a threat exists if someone figures out how to exploit it to gain unauthorized access to the asset. (This same problem crops up with student directories as well, where you are even less likely to want to grant access to the full list.)
This example nicely displays all of the elements of an information security threat (asset, agent, means). I have seen this type of problem on local government web sites where the effect was to enable the attacker to find all the data required to steal a person's identity, or even to find all of the special training taken by former military personnel in the area.
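For anyone wondering what "a fairly clever set of rules" might look like in practice, here is a minimal sketch in Python. It is not the code from the site in question; the field names and limits are hypothetical.

MIN_QUERY_LENGTH = 2   # require at least two characters across both name fields (hypothetical)
MAX_RESULTS = 10       # never return more than a handful of matches (hypothetical)

def search_directory(directory, last_name, first_name):
    """Return matching directory entries only for reasonably specific queries."""
    last = last_name.strip().lower()
    first = first_name.strip().lower()
    # Rule 1: an empty or near-empty Submit must not dump the whole database.
    if len(last) + len(first) < MIN_QUERY_LENGTH:
        raise ValueError("Please enter at least part of a name.")
    matches = [person for person in directory
               if person["last"].lower().startswith(last)
               and person["first"].lower().startswith(first)]
    # Rule 2: a query vague enough to match much of the database gets bounced back.
    if len(matches) > MAX_RESULTS:
        raise ValueError("Too many matches; please be more specific.")
    return matches

# Example: 'Einstein, A' works; a blank form is rejected outright, and a vague
# 'D, J' style query is capped by MAX_RESULTS instead of listing every Doe.
staff = [{"last": "Einstein", "first": "Albert", "email": "albert@example.edu"}]
print(search_directory(staff, "Einstein", "A"))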
As a white hat hacker it is your responsibility to inform the site manager of the problem. You avoid, as I have here, revealing specifics of the problem (e.g. the address of the web site where I found this example). Hopefully, they will correct the problem. As for me, I will admit that, wearing my producer hat, I did use some of the email addresses that I found. I did not spam anyone. I sent them personal notes. And maybe it worked. At the Ohio festival Dare Not Walk Alone won the audience award for best film.
Tuesday, April 17, 2007
Photocopier FUD? Americans copying billions of tax docs don't have time to think
So, you've filed your tax return and put away your tax papers until next year, but how much of the very personal information on those tax papers is still out there, accessible to other people (besides you and the IRS)?
The answer could be "a surprisingly large amount," particularly if you used a digital photocopier to make copies of things like your 1040, W2, 1099s, K-1 and so on. We're not talking about leaving your originals in the photocopier, a common enough mistake, but about the fact some digital copiers retain images of those pages until they are over-written by successive copy jobs, a fact highlighted in an AP article last month. This is not a case of unfounded 'fear, uncertainty, and doubt.' The vulnerability highlighted here is real enough to warrant serious attention, particularly in some quarters.
The underlying fact is that many office photocopiers now contain hard drives to which scans of the pages being copied are written before paper copies are printed and those scans are not always erased after the copy job is completed. Steal one of those hard drives and you could get access to some very personal information (and we're not just talking about tax returns and after-hours butt-scans).
The extent to which this 'feature' of digital copiers poses a threat to your privacy depends upon many factors, like who you are and what kind of enemies you have got. Personally, I'm not too worried. But if I were a key player in a large company in a hotly contested market I would be paying attention to this particular vulnerability.
Note that the possibility someone could read your personal data off the hard drive of a machine you used to copy personal documents is not a threat, it is a vulnerability--it becomes a threat when a threat agent is willing and able to exploit the vulnerability.
As to exploitation of the vulnerability by a threat agent, the following scenario is entirely plausible: as a key person in your organization you and your spouse are under surveillance by the opposition. They've searched your trash but found nothing useful. Then one of you is seen entering the local copy-shop and spending some time on machine number 9. After you leave, a generic service person enters said copy-shop muttering something about a maintenance flag on copier number 9. He opens the machine, removes the hard drive and mutters something about a spare in the van. Off he goes with a digital copy of whatever papers you ran through that machine.
Variations on this theme are numerous and include the janitor stealing or mirroring office copier hard drives on the night shift (a great way to get a copy of that competitive bid you had to submit in triplicate). Defenses include being more thoughtful about where you do your photocopying, what access you give to the copier, and what copying hardware you use (some digital copiers offer 'safety' features--of which more later).
However, the first thing that struck me when I read the AP article was a sense of deja vu. Hard drives have been built into a lot of large copiers and printers for some time. It was at least 7 years ago that the penetration testing team at my company figured they could run a publicly accessible web site from the hard drive of such a machine located on the internal network of a large public school district (which we had been hired to test, I hasten to add). That tells you a lot about how much thought the folks who design such machines were giving to their potential for abuse.
In other words, many 'new' or 'emerging' information security threats are not so much new as newly realized or newly rediscovered. And this 'newness' is not simply a function of vulnerabilities found or re-found, but also changes in the means and motives of threat agents prepared to exploit them.
Sidebar/postscript: When you read the AP article referenced above you get the distinct impression that it was prompted by copier-maker Sharp, and if I were to swap my infosec hat for my entrepreneur hat I'd have to doff it to the folks at Sharp (or Sharp's PR agency) who were behind this. I know from experience it is very difficult to get someone like AP to write a story that comes from your particular perspective. Sharp's perspective is that of a company which has gone to the trouble of making photocopiers that are more secure (as you can read here). I think this is a good thing, and the article strikes a good balance between education and marketing.
Wednesday, April 11, 2007
Windows & Office Barf Again! Microsoft's recommended Automatic Updates trash data
If you are using Windows and value your time, do this:
1. Go to the Control Panel for Automatic Updates
2. Change the setting from "Automatic (Recommended)" to something like "Download updates for me, but let me choose when to install them."
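If you prefer to script the change (or push it to several machines), the same setting can be made in the registry. The sketch below uses the Automatic Updates policy values as I recall them from Microsoft's documentation for XP/2003 (AUOptions 3 = download automatically but notify before install); please verify the key names and values against current Microsoft documentation before relying on them.

import winreg  # Windows-only standard library module

AU_POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU"

def set_download_but_ask_before_install():
    """Download updates automatically, notify before installing, and avoid
    automatic reboots while a user is logged on."""
    key = winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, AU_POLICY_KEY, 0,
                             winreg.KEY_SET_VALUE)
    try:
        winreg.SetValueEx(key, "AUOptions", 0, winreg.REG_DWORD, 3)
        winreg.SetValueEx(key, "NoAutoRebootWithLoggedOnUsers", 0,
                          winreg.REG_DWORD, 1)
    finally:
        winreg.CloseKey(key)

if __name__ == "__main__":
    set_download_but_ask_before_install()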
If you don't do this, you may be set to lose a lot of time and money. Why? Whenever there is a patch Tuesday and the patch requires a reboot, like the one this week, the recommended setting means Microsoft will reboot your system for you, unless you happen to be sitting there at the keyboard to prevent it. Here's a typical scenario:
You spend several hours researching a topic on the web. You have about ten browser tabs open displaying your research results and you are cutting and pasting said results into a Microsoft Word document. The doorbell chimes and you rush to answer it. You are a savvy user so even as you head to the door you make a mental note that the two apps you are using have auto-save. Word auto-saves documents. Firefox auto-saves session data. But as you stand at the door signing for a package you hear the "chime of death" from your office, signalling that your Windows machine has restarted. Not only has it restarted, it has, under the control of Microsoft's Automatic Updates, trashed your Word documents. That's right, it has not even created the temporary files that allow you to restore documents when something crashes Word. This is because Microsoft, in its current state of engorged hubris, which can only be described as galactic in scope, does not consider an unapproved system restart of its choosing to be a crash. So it only gives you the last user-saved version of the docs you have spent hours compiling.
Let's face it, in the year 2007, twenty years into an OS, twenty five years into an application, this is bad behavior of the worst and most unforgivable kind. The vendor-recommended mode of operation is literally data destructive.
Of course, some readers may say that, "if you are using Windows and value your time," you should switch to a Mac. But Apple has its own share of hubris and I have thousands of dollars invested in software that won't run on a Mac. Come to think of it, I have invested thousands of dollars and hundreds of man-hours creating a computer system that pretty much does what I want it to do, except when the historical recipient of many thousands of my dollars decides to use its software and ignorance to trash my data.
Monday, April 09, 2007
Security Means Availability: Google and others need to address this ASAP in SaaS
As enterprises explore Software as a Service, security experts like David Brussin are keeping a watchful eye. Clearly there are serious security implications whenever data is allowed to live beyond the--hopefully, strongly defended--perimeter of the enterprise fortress. Typically those implications are first thought of in terms of confidentiality and integrity: Will our data be safe from prying eyes and unauthorized access? But the third pillar of security, availability, should not be neglected. How much does strong protection against unauthorized access matter if authorized access is impaired?
Google must be pondering this question right now as news of outages spreads: "Little over a month after introducing Google Apps' Premier version, which includes a 99.99 percent uptime commitment, Google is failing to meet that service level agreement (SLA) for an undetermined number of customers." The PC World article is highlighted in this succinctly titled posting by Ann All on the Straight to the Source blog at IT Business Edge: It's the SLAs Stupid.
This is timely data for me as I have just spent a week over in Europe meeting with executives of a VLO to discuss information security strategy in the context of a possible shift to SaaS as an alternative to out-sourcing (VLO = Very Large Organization).
Actually, I see not one but two availability question marks with SaaS. The first is supplier-side: Will the SaaS vendor's infrastructure keep up with demand? This seems to be the very problem Google is wrestling with right now.
Second is the user-side connectivity question: What use is Google Mail if the user can't get on the Internet? This is such a basic question that I am almost embarrassed to raise it, but I feel I must. Failure to question underlying assumptions is a shortcoming sadly endemic in technology adoption (the classic is probably "Sure, it's safe to handle this stuff" --Madame Curie).
SaaS seems to be predicated upon universal high-speed connectivity, a wonderful thing, but not yet a real thing, and not--perhaps ever--a cheap thing. Try to keep working on an online document as you move from office to train to plane to hotel to client to airport and back to the office. How successful you are will depend upon, among other things: where your home is; what hotel you stay at; what your client's connectivity policies and facilities are like; and your budget. This last item may be even more critical when you consider "working securely on an online document as you move..."
As for enterprise SaaS solely at the office, there will still be two SLAs to consider: Your SaaS vendor SLA and your ISP SLA.
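The arithmetic behind those SLAs is worth spelling out. Here is a minimal sketch in Python (the ISP figure is purely hypothetical): a 99.99 percent commitment allows roughly 53 minutes of downtime a year, and when reaching the application depends on two independent services, the effective availability is roughly the product of the two.

MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes_per_year(availability):
    """Annual downtime budget, in minutes, implied by an availability figure."""
    return (1.0 - availability) * MINUTES_PER_YEAR

saas_sla = 0.9999   # the 99.99 percent commitment cited above
isp_sla = 0.995     # hypothetical ISP availability, for illustration only

print(f"SaaS SLA alone: {downtime_minutes_per_year(saas_sla):.0f} minutes/year")
print(f"SaaS and ISP combined: {downtime_minutes_per_year(saas_sla * isp_sla):.0f} minutes/year")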
Sunday, March 25, 2007
Security Appliances Come to Dodge: So where are the horse thieves being hung?
This article, Security Appliances Come to Dodge, by Drew Robb, reminded me of a train of thought I have been following for a while. Here's the opening paragraph:
Sometimes with the Internet it seems like you are living out on the frontier. But unlike the "wild West," which settled down after a few years, computer security threats have continued to rise and show no signs of abating any time soon.
I generally avoid picking apart analogies, but there is a flaw in this one. The Wild West took more than a few years to settle down. Which is why the basic Wild West analogy is actually apt. Cyber-space today is like the Wild West, a virtual Deadwood upon Dodge upon Laramie. People of low morals are trying anything they think they can get away with, and often they are. There's easy money ripping off them there virtual wagon trains and consumer pioneers.
What we haven't seen yet is the equivalent of hangings for horse theft, swift and decisive justice for those whose immoral and illegal acts strike at the infrastructure of the information age. We have flirted with the idea. When I spoke at The Global Internet Project special workshop on Internet spam in June of 2002, the chairman asked the audience what should be done about spammers and the suggestion [not from me] that there should be some hangings was widely applauded.
But when I see some of the puny sentences handed out for computer crimes, I wonder if it might be time to make a few examples. Yes, I know that is a dangerous path and there is an inherent risk of fallout from unfairness. Yet think about this: What is more corrosive to the future of our culture and economy: Selling a few ounces of pot or stealing a few million credit card records? From sentencing patterns it would appear that dealing drugs is considered way more immoral than either using drugs or ripping off consumers. America jails more people than any other country. But very few people who commit fraud and deceptions detrimental to commercial trust seem to do serious jail time (it will be interesting to see how much time the likes of Fastow and Ebbers actually serve).
Another one to watch is Brian Salcedo, who got "the longest prison term ever handed down in a computer crime case in the United States" for trying to steal customer credit card data from Lowe's. Not surprisingly, publications like Wired that still think there is something cool about messing with people's lives [as long as you do it with a computer and not a baseball bat] termed Salcedo's 9-year sentence "Crazy" (see Crazy-Long Hacker Sentence Upheld).
Keen observers will note that story was written by Kevin Poulsen, who was himself sentenced, in 1991, to 51 months for various criminal hacking offenses committed in the 1980s. At the time it was said to be the longest ever sentence for hacking. Maybe a sentence of 20 years back then, instead of four and a quarter, might have had a more powerful deterrent effect.
Saturday, March 24, 2007
Would Your Competitors Do This? Oracle's suit against SAP a timely lesson
Referring to my previous post about the threat of spying as a "driver" in information system security, this just in:
Oracle recently found that their biggest competitor has been hacking their systems and stealing their data, on a scale that may best be described as "massive."
SAP allegedly employed the usernames and passwords of customers that the firm had lured away from Oracle to download a variety of technical materials. SAP employees used the log-in IDs of multiple customers, combined with phony user log-in information, to gain access to Oracle's system under false pretexts...
Thursday, March 15, 2007
Witches Brew: Cheap domains, DDoS, and man-in-the-middle eBay scams
A rash of recent reports seems to revolve around the great ease and small cost of registering domains. Perhaps it is time to revert to some of the original limitations on domain name registration. Consider that before April 1, 1998, the fee for registering domain names at InterNIC (operated by Network Solutions) was US $100.00 for a two-year registration and there was a limit on how many names one person could register. On April 1 the fee went down to US $70.00 for a two-year period, and renewals were decreased to $35.00 from $50.00. Despite that, the number of domains registered was already close to 2 million.
According to research from McAfee, cheap or free registration of new domain names drives the growth in Web sites used for spamming or hosting malicious software.
One of the biggest names in domain name registration, GoDaddy, was hit with significant and sustained distributed denial-of-service attacks Sunday, resulting in four to five hours of intermittent service disruptions, including hosting and e-mail.
Symantec has uncovered an unusually sophisticated email scam, targeting eBay users with a combination of legitimate eBay auctions and a Windows Trojan that intercepts a user's web traffic. The "advanced" malware involved, called Trojan.Bayrob, sets up a man-in-the-middle attack, Symantec said in a blog last week.
"While we have previously seen Infostealers that try to steal your username and password, a threat attempting a man in the middle attack on eBay is very unusual," wrote Symantec's Liam O'Murchu. "Man-in-the-middle attacks are very powerful, but are also difficult to code correctly."
Fascinating differences in levels of risk around the world have been mapped by McAfee. For example, "a consumer is almost 12 times more likely to encounter a drive-by-download while surfing Russian domains as Colombian ones."
The Threat of Spies: Often overlooked, often under-estimated, inside and out
I love it when people ask questions about security that cannot be answered definitively, questions like: "What are the three most serious emerging threats?" Indeed, I ask questions like that of others, and of myself. Why? Because it gets brains working, and the output can be very valuable.
I have been pondering emerging threats quite a bit this year as a result of preparing my keynote for an enterprise security conference in Malaysia last month. But lately I have been asking myself "What are the most persistent threats?" and also "What are the most under-estimated threats?"
And I think I might have a winner, or at least a threat that is a finalist in both categories: industrial espionage (iconically represented by a patent application drawing).
Clearly industrial espionage has been around for a long time (and I'm talking centuries before the late eighties when British Airways started stealing Virgin Atlantic passengers with lies and bribes and a little database hacking on the side--leading to some pretty messy headlines for BA, not to mention some hefty financial settlements in favor of Virgin and its owner, Richard Branson).
VW did it to GM. Boeing did it to Lockheed. WestJet did it to Air Canada (allegedly). Not only has industrial espionage been around for a while, it has always been, quite consistently in my experience, under-rated as a security threat. As with many areas of information security knowledge there are few hard facts to back up my assertion. But my impression, when dealing with clients, when making presentations at conferences, and when teaching seminars, has always been that most people in business don't think--or maybe prefer not to think--that their competitors would break the law to gain advantage. It is not unusual for senior people to come up to me after a presentation that touches on industrial espionage, or criminal hacking in general, and say something like "Do people really do that?"
Perhaps line managers and executives are so busy worrying about all the other critical stuff--like supply, demand, deadlines, sales targets, profit margins--they just don't want to ponder questions like: Are my competitors prowling my network? Sitting outside our offices with a listening van? Going through our garbage? Bribing our employees?
But chances are, they are. Indeed, I would say that if your company is doing more than $100 million in annual revenue then it is unlikely that your competitors are not performing aggressive competitive intelligence ops against you. And of course, the many, many ways in which our "going digital" has made information easier to copy and move now come into play (in the early nineties VW took 90,000 pages worth of documents from GM in hard to hide boxes--today that stuff would fit on a $30 flash memory card you can buy on the High Street and slip into your sock as you walk it through the metal detector undetected).
While the methodology of competitive intelligence (open source, public documents, general and specific observation) is generally legal, it is very easy for such activities to slide into "aggressive competitive intelligence ops" which are illegal. Bear in mind that a lot of spying is done without direct management approval or endorsement. Sometimes employees take it upon themselves.
And thus we arrive at the hidden, two-edged sword of industrial espionage. You are likely to be wounded if you fail to guard against spying from competitors; you may also be wounded by your own staff if you fail to rein them in and they take competitive intelligence too far (and get caught).
Here are a couple of cases to ponder just from the auto parts industry:
Selling secrets to the [Chinese] competition
Selling secrets to the competition
Note that the second link is to article summaries at the New York Times which gives 66 hits on espionage under "Automobiles" alone.
Stay tuned for more on this topic.
P.S. This article by Prof. Mich Kabay, a well-respected friend and colleague, gives some examples to get you thinking (but don't dismiss the examples as irrelevant because they are a few years old--I doubt anyone would claim the world is more moral today than it was a decade ago, and it is certainly easier to steal a gigabyte of data in the age of the SD card and USB thumb drive than it was in the age of the floppy and the Zip disk).
Nice article here from Sandra Rossi of Computerworld (Australia) on the cost of security breaches: Data leaks equal 8 percent drop in revenue.
"Organisations that experience publicly reported data breaches suffer an eight percent loss of revenue. Compounding the revenue and customer losses are additional expenses averaging $100 per lost or stolen customer record to notify customers and restore data, according to the compliance group which is made up of members from the Computer Security Institute, the Institute of Internal Auditors, Protiviti and Symantec."While it is hard to arrive at firm numbers to describe security problems (or security solutions) these numbers jibe well with some past assessments. While I have not done a study of revenue impact from security breaches, I did look closely at stock price impact about six tears ago and that worked out to about 12-14% if memory serves (hey, this is a just a blog, so memory will have to serve for now--I will dig up the actual data when I get a chance). In other words, if you were to suffer a serious and publicized security hit, your stock price would go down from 12 to 14 percent.
And Larry Ponemon did a fairly recent and pretty rigorous study which showed the cost of a security breach was about $182 per lost record (you can read about the survey here). In other words, lose 6,000 records and you have surpassed $1 million in negative impact. These numbers should help security managers convince company executives to take security seriously. (Don't forget to stress "opportunity cost" as in "Even if recovery after a security breach goes well, the money spent on recovery is money not spent on a new product launch, new ad campaign, bonuses, etc.")
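To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch. The per-record and percentage figures are the ones cited above; the record count and annual revenue are purely hypothetical examples, not data from either study.

```python
# Rough breach-cost estimate using the figures cited in this post.
# RECORDS_EXPOSED and ANNUAL_REVENUE are made-up example numbers.
RECORDS_EXPOSED = 6_000
COST_PER_RECORD = 182        # Ponemon estimate, USD per lost record
ANNUAL_REVENUE = 50_000_000  # hypothetical company, USD
REVENUE_HIT = 0.08           # 8% revenue loss reported for publicly disclosed breaches

direct_cost = RECORDS_EXPOSED * COST_PER_RECORD
revenue_loss = ANNUAL_REVENUE * REVENUE_HIT

print(f"Direct breach cost: ${direct_cost:,.0f}")   # $1,092,000
print(f"Lost revenue (8%):  ${revenue_loss:,.0f}")  # $4,000,000
print(f"Total exposure:     ${direct_cost + revenue_loss:,.0f}")
```

Even with conservative inputs the total dwarfs the cost of most preventive measures, which is the point worth making to the executives.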
Note that the study cited in the Computerworld article above found that: "The primary channels through which data is lost, in order of risk, includes PC's, laptops and mobile devices, e-mail, Instant Messaging, applications and databases."
Tuesday, March 06, 2007
Hard Lessons About Hard Drives: Time to get a drive grinder?
The Times-Union of Jacksonville carried an interesting story about hard drives a few days ago. Seems a local businessman had taken his computer to a Best Buy store for repairs.
When told the old hard drive was being replaced--a hard drive that contained information about his clients--he was stunned to learn he wouldn't get it back. The retailer said it would destroy the drive so no one else could get access, but that didn't sit well with Wemhoff. It took a series of calls up the corporate chain of command to get the old drive returned. Best Buy said its policy in this case was to follow the manufacturer's warranty, which often calls for the old hard drive to be sent to the maker, even if it is loaded with personal information.
This led me to send the following letter to the paper, commending the reporter on highlighting this problem and adding some thoughts of my own. I mentioned the grinding or "chipping" of hard drives that spy agencies do, but it seems Georgia Tech is working on a less messy alternative: a powerful degausser, seen here (click photo for article).
This approach has a lot to recommend it. Using a less powerful degausser can require the hard drive platters to be removed from the casing. This requires a fair amount of effort (I just opened up a dead drive recently and brute force was involved). However, despite assurances that degaussing makes the data go away for good, I bet there will still be people in three-letter agencies who opt for physical destruction. It's just so, tangible, so very verifiable. Anyway, here's the letter that the Times-Union published today:
"Kudos to Times-Union reporter David Bauerlein for Friday's Metro article drawing attention to the security issues involved in hard drive repair and replacement. As a 25-year veteran of the computer security business I have to say this is one vulnerability that simply refuses to go away. It seems that each new generation of computer users has to learn the hard way (pun intended) that the convenience of hard drive storage comes at a price.
Businesses and individuals not only need to back up their hard drives on a regular basis to pre-empt data loss due to drive failure, they also need to take appropriate steps to keep that data under their control at all times. As your reporter correctly points out, a hard drive sent out for repair is not under your control. The same is true of hard drives on leased machines that are returned and older machines that are given away. Standard policy should be for all data to be stripped from hard drives before they are handed over to anyone else.
The steps you take to remove data from drives should be determined by the sensitivity of the data. A simple format of the drive is not enough to hide the remnants of the data from even a mildly curious hacker. Drives that have stored sensitive personal or business data should be wiped with a so-called scrubber or shredder program which over-writes each sector multiple times.
However, even that may not be enough to totally destroy the data. If the drive falls into the hands of a well-funded adversary, some data might still be recoverable. That's why America's spy agencies routinely grind their old hard drives into powder; not a huge price to pay when state secrets are at risk. Given the negative impact of a security breach on company profits, stock price, and reputation, it could prove to be a cost-effective course of action for many businesses as well. "
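For readers wondering what "over-writes each sector multiple times" looks like in practice, here is a minimal, purely illustrative sketch that overwrites a single file with random data several times before deleting it. The file name and pass count are made up, and this is no substitute for a purpose-built disk sanitizer (let alone the grinder): filesystems and modern drives can keep copies the program never touches.

```python
import os

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file with random bytes several times, then delete it.

    Illustrative only: proper sanitization works at the device level with a
    dedicated tool, because the OS and the drive may hold copies elsewhere.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # push the overwrite out of the OS cache to disk

    os.remove(path)

# Hypothetical usage:
# overwrite_and_delete("old_client_list.xls", passes=3)
```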
Sunday, February 25, 2007
Virtual Trade Show Appearance (is that the right word?)
A couple of days ago I participated in a Ziff Davis Virtual Tradeshow on Security Management. This was a "live" event for one day but the sessions are archived for several months for people to browse. If you want to listen to the presentations (including mine) you need to go to this page and register. The registration process asks quite a few questions, but that's the price you pay for free education, so to speak. The keynote was by Peter Neumann who [IMHO] is always worth a listen.
If you check out my session on security awareness programs you will find it is taken at a pretty fast pace owing to time constraints of the format. So, I plan to podcast a less strenuous and hopefully more informative version as soon as my head cold clears up (if I record it white dow ewe wood nut udder stand be).
Friday, February 23, 2007
SaaS Challenge Mounts: Is Google the next Microsoft, security-wise?
Two eWeek headlines appeared this week, as if on cue, one right after the other: Google Patches Security Vulnerability in Desktop Search and Google Apps Premier Edition Takes Aim at the Enterprise.
What do you bet that the sparks really flew at Google HQ over that timing? First, a quick reminder that we, Google, have to patch security holes, just like Microsoft. Followed by, tada! Premium enterprise apps, just like Microsoft. From my perspective this was great timing, IF it helped potential enterprise clients stop and think twice before embracing software-as-a-service web apps.
Don't get me wrong, I have been using and enjoying Google's free document and spreadsheet apps in beta. They offer a lot of convenience (not to mention great functionality for the price). But you won't find me using them for sensitive business data any time soon. There is no way I am going to be the first to find out that Google, in its enthusiasm for offering neat tools to the world, missed some of the security implications and exposed "my stuff."
From an enterprise perspective I would be blocking employees from using the free version on company machines and connections. And, no offense to Google, but I would not adopt the paid version without a very intense security review. (How intense? I don't think there are more than two dozen people on the planet with the kind of smarts it takes to do that sort of review to an appropriate enterprise level of assurance.)
What Did I Tell You? Google looking like a nexus of insecurity
I've been saying this for several months now. I highlighted it in my keynote at the Enterprise Security Asia conference. Google could be the next big thing in security, as in "insecurity." The recently announced hole, now patched, that permitted cross-site scripting attacks via Google Desktop, is only one aspect of "the Google factor." The concern is that Google has many of the characteristics of a "nexus of insecurity." Here are some of those characteristics:
- New and exciting
- Popular and widely used
- Cross-platform
- Network-based
- Rapidly growing
- Easy to install
- Becoming a standard
- Processing sensitive data
Now, let me make it clear that I have no knowledge of Google's security strategy or how it schools its programmers in secure coding, or how it tests its code before putting it into production. Google may be doing a great job in all these areas. I would love to find out that they are. All I am saying is that, historically, software possessing the characteristics listed above has tended to become a source of security problems.
Tuesday, February 20, 2007
The Last Great Security Crisis? Sadly Not
You can't read all the security pundits all the time, but I usually take time to read Larry Seltzer at eWeek. So I am not knocking Larry when I take issue with his recent column titled the Last Great Security Crisis. Indeed, it is well worth reading and sheds light in an area that needs it: application security.
Larry is not talking about web apps or Software as a Service but Microsoft Office apps, arguably the biggest single gateway to networked computers and sensitive data on the planet. Whuh? That's a pretty sweeping claim. But think about it. Just about every organization's really important data is currently condensed into Word documents, Excel spreadsheets, and PowerPoint slides.
Want to know what is going on in a company? Forget mining complex databases, look for the highlights, which are more often than not found in some kind of doc/xls/ppt file, starting with executive summaries of everything from new product development to sales projections to cashflow analysis. Combine that with the seemingly endless stream of holes and you have the ingredients for a permanent security headache (as opposed to the plain human headache you get from trying to picture a stream of holes).
How many organizations eventually get to experience that headache will depend on a number of factors, from the diversification of applications and formats (Mac, pdf, open document, xml, etc.), to the actions of the world's bad actors. The latter may focus more on desktop application vulnerabilities if Vista does deliver an improvement in overall enterprise security. It's that old displacement of risk black magic. As long as bad actors are plentiful and well-motivated (actually it seems like that should be badly-motivated, but you know what I mean) the overall threat level will not go down, it will just keep seeking the low-hanging fruit and the easy wins, which will be losses for legitimate users.
Saturday, February 17, 2007
The Next Big Enterprise Threat? It's time to think SaaS = Software as a Service
I recently asked my good friend and security guru David Brussin for his thoughts on emerging threats to enterprise security. In response he posted a very interesting entry on his blog about SaaS. I highly recommend this to CIOs and CSOs as well as CISSPs.
And for readers who are none of the above, and thus in danger of drowning in initials and acronyms, let me make it clear that:
SaaS = Software as a Service
SARS = Severe Acute Respiratory Syndrome (a non-IT enterprise threat)
SpIT = Spam over Internet Telephony (VoIP)
SpIM = Spam over Instant Messenger
CISSP = Certified Information Systems Security Professional
Hopefully this will help folks disambiguate a few of these threatening things.
Thursday, February 15, 2007
Free Mike Cobb Security Webcasts and Podcasts Now Available!
That's right, my brother Mike, the younger one (and some would say, the smarter one) is a fellow author and CISSP. And he has pulled together his recent security webcasts on one handy page. Just click and learn. Here's what is available right now:
Messaging Security: Preventing Data Loss and Malware Infection through Electronic Communications --In this webcast, discover the many procedures, tools and policies available to Windows security administrators to secure an enterprise's electronic communications.
Messaging Security: Understanding the Threat of eMail and IM Attacks -- This 15-minute podcast helps assess the evolving threats to enterprise communications. Mike investigates the severity of phishing and IM virus threats, and spends time assessing the effectiveness and requirements of unified messaging security products.
How Simple Steps Ensure Database Security --This Podcast examines some of the most common database attacks, including SQL injection, cross-site scripting and weak/default passwords. Learn how you can protect your database from these threats and listen to this Podcast now.
SearchSecurity.com's Web Security School --Learn how to harden a Web server and apply countermeasures to prevent hackers from breaking into a network. Study at your own pace and learn how to implement security policies and test a Web site's security, as well as how to handle a breach should the unspeakable happen. Michael Cobb will also arm you with tactics for creating a human firewall to combat problems such as phishing and spyware. This course consists of an entrance exam, three lessons -- each consisting of a webcast, technical paper and quiz -- and a final exam. You'll also find handy checklists that you can download and use on the job. All of these resources are available on-demand so you can learn at your convenience.
Five common application-level attacks and the countermeasures to beat them --This on-demand webcast reviews five of the most common attacks against applications: active content, cross-site scripting, denial of service and SYN attacks, SQL injection attacks and malicious bots. For each, Michael Cobb explains how they work, the damage they're capable of doing and how pervasive they are. He also arms you with:
- Specific countermeasures for each of these attacks
- The security policies and security defense technologies worth considering for safeguarding applications against each attack
- How to improve incident response in the event of an attack
- A quick overview of other, less common (but potentially damaging) application attacks that you need to be aware of
Wednesday, February 14, 2007
Good Intentions, Wrong Conclusions: Bill Gates' security vision at RSA is cloudy at best
Said Gates: “Security is the fundamental challenge that will determine whether we can successfully create a new generation of connected experiences that enable people to have anywhere access to communications, content and information.” DailyTech
Well, that sounds good, but what does it really mean? Will lack of security prevent a new generation of connected experiences being created? No. We have seen several generations of insecure connected experiences created. Their lack of security has not doomed them. Yes, security issues have meant slower and more shallow adoption than might otherwise have been achieved. And security problems have in general made the experience less enjoyable than it should have been (not to mention a royal pain in the pocket book in specific cases where the lack of security was exploited by particularly bad or careless actors). But success is relative and often based on expectations.
Mr. Gates would certainly be unwise to make higher levels of security the only measure of success. But I think that Mr. Gates is quite capable of being unwise. After all, this is the man who said spam would be a thing of the past--by this time last year. Sadly, the place where the Gates vision falls short is in its expectations of people. I say sadly because I think Mr. Gates is basically a very decent chap, one who has consistently under-estimated the decency deficit out here in the real world, while over-estimating technology's ability to make up for it.
Consider what else he said: “The answer for the industry lies in our ability to design systems and processes that give people and organizations a high degree of confidence that the technology they use will protect their identity, their privacy and their information.”
No, Mr. Gates, that is not where the answer lies. The answer lies in the overall standard of human behavior. Until that improves, connected experiences that enable people to have anywhere access to communications, content and information will suffer at the hands of bad people. Folk may not suffer to the extent that they give up on those experiences. But they won't be able to enjoy them as much as they should, and a large chunk of resources will likely be consumed trying to maintain a barely tolerable level of enjoyment. Technology is not the answer to bad behavior.
Saturday, February 10, 2007
4th Annual Enterprise Security Asia Conference
A big thanks to the folks at AC-Nergy who put on an excellent conference in Kuala Lumpur last week: Dyanna, Jin Yin, Christopher, and Andrea. Also to chairpersons Michael Mudd of CompTIA and Stan Singh of PIKOM.
The two sets of slides that I presented can be found at the newly re-launched Cobb Associates site. And a quick reminder to (ISC)2 attendees: this event is approved by (ISC)2 for CPE credits.
Wednesday, February 07, 2007
Meet the new OS, same as the old OS: AV, Vista, and Microsoft MS-DOS 6
News that Microsoft's own anti-virus [AV] product does not do a good job of protecting the new Microsoft Vista operating system will come as no surprise to the infosec "old guard" who remember Microsoft's first foray into anti-virus back with MS-DOS 6.0 in 1993. A detailed deconstruction of this product's shortcomings was written by one of the early AV pioneers, Y. Radai at the Hebrew University of Jerusalem. He graciously allowed me to reprint it in my PC and LAN security book and a copy is archived here in an Adobe PDF.
Unless you are a real AV history buff you may not want to read the whole thing (and if you are a real AV history buff you've read it already). But everyone should take note of the final sentences where Radai summarized the effects of Microsoft's decision to make its own AV and bundle it with the OS:
True, many people who have never before installed AV software will now do so, and this seems to be a benefit. However, they will be under the false impression that they are well-protected.
Enough said? After all, few things are more worrying to an information security professional than someone having a false sense of security. One of them is a lot of people having a false sense of security.
And who are these folks who just gave Microsoft Live OneCare a failing grade? Virus Bulletin, which has a sterling reputation for objective AV testing. If VB says a product does not do a good job, you can rest assured it does not (of course, depending on the product you are using, the assured rest may not come easily).
Sunday, February 04, 2007
More VA Data At Risk? Reminds me of last summer
Looks like another black eye for the Department of Veterans Affairs. A hard drive containing thousands of unencrypted records apparently went missing. Here is what I wrote last summer for a local magazine, after the BIG data leak at the VA:
During a hotter than average summer you might think the only exposure problems we face in Saint Augustine are those caused by the UV index. And it would be nice to think the only chills we've been getting come from ice cream or the ice in our drinks. Unfortunately, some folks in town have been receiving chilling news about their personal exposure. It goes something like this: "Information identifiable with you was potentially exposed to others."
In fact, if you were one of the more than 26 million American veterans whose data was on an external hard drive stolen from the home of a Veterans Affairs employee in May, you will have read those words already, in a letter from the VA. What sort of data are we talking about? According to the letters that started going out in the first week of June: names, Social Security numbers, and dates of birth, as well as some disability ratings. That is enough information to get an identity thief started, running up bills in your name.
Sadly, some local veterans who bank with VyStar were hit with a double dose of chilling news about their personal exposure. They also received letters from the Jacksonville-based credit union informing them that hackers had acquired their names, addresses, Social Security numbers, birthdates, mothers' maiden names, and email addresses. The exact number of people affected was not revealed by VyStar, which would only say it was less than ten percent of its 344,000 membership. However, that type of data would give an identity thief a running start, in several directions. For example, the email addresses could be used for very targeted and effective "phishing" attacks in which falsified email is used to trick recipients into revealing such valuable data as account numbers and passwords.
I know that at least one of the affected Vystar members was a local resident, because I had breakfast with him recently, at Jasmine's on San Marco. Over a latté and breakfast burrito he lamented that he had received letters from both VyStar and the VA. Perhaps a little too glibly I said that if he got a third letter we would write an article about him. That afternoon I noticed a new security breach exposing Floridians. Approximately 133,000 Florida driver and pilot records were on a Department of Transportation laptop stolen from a government vehicle in July.
So how should you react if this happens to you? Are you at risk if your data is exposed? What can you do to protect yourself? To answer these questions, begin by examining any information you have about the exposure. For example, here's what Vystar said about that incident: "Vystar has no indication that the stolen data has been used or will be used for identity theft or fraud."
Fortunately, you don't need to be a computer security expert to see through that one. Your first clue that this is not a very reassuring statement is how the data was exposed. According to Vystar's own report, hackers stole it. These days, that is not good. In the good old days of mainframes and early personal computers the term "hacker" did not necessarily mean someone who broke the law, more like someone who broke into the technology just to see how it worked. Hacker today can mean someone who steals bank records, either for their own nefarious purposes, or for resale to someone even more nefarious. There is a thriving black market in identity data. Organized crime is a big player in that market.
Even if your data was on a computer stolen at random, which may be the case with the stolen VA laptop and hard drive, you need to be wary of assurances that "there is no indication the data has been used for identity theft." Any computer security professional would want to add the word "yet" to that statement. After all, how can you tell if the data has been used? The beauty of all things digital is that they can be copied over and over without any indication that they have been copied. A data thief seldom erases the data, just lifts a copy so you are none the wiser.
Another assurance that bears closer inspection is this one, as seen in the VA letter: "Authorities believe it is unlikely the perpetrators targeted the items because of any knowledge of the data contents." Well, contrary to the VA's claims in the letter, the VA employee had been taking home the same sort of data for years, with permission. This implies that someone could indeed have targeted the data; but even if they didn't, your average thief today probably knows a thing or two about computers. Imagine getting that computer home and finding all that data. Knowing that it could be worth dollars per record might tempt a common burglar to branch out into data trafficking.
At this point you might be wondering what happened to all the marvelous computer security technology you see in movies: passwords, fingerprints, encryption. These are not science fiction. They exist and they are relatively effective, cheap, and easy to use. The reality is that they are not used nearly as much as they should be. One way you can tell is to read between the lines of an "exposure" announcement. The VA made no mention of passwords; the Department of Transportation did. You can bet the DOT data was password protected, the VA data was not.
So what can you do when your data is exposed by one of these incidents? The first step is to take advantage of any resources provided by the "breachee," the entity whose security was breached, thus leading to the exposure. For example, VyStar has provided a lot of information about Internet security on its web site. In addition, it has said it will provide identity theft protection to all those affected by the breach. This is a smart move because it helps to limit the company's exposure to damage claims. Several years ago I provided testimony in a class action suit brought by another group of military personnel whose data was exposed as a result of the TriWest security breach in Arizona. The victims were seeking to force TriWest to pay for identity theft protection. As far as I know the case is still unresolved, but the security lapse has already cost TriWest several million dollars.
The primary defensive action you can take, regardless of what the breachee does, is place a temporary fraud alert on your credit bureau account. This should alert you to anyone trying to open new accounts in your name. To place an alert contact one of the three main agencies: Equifax (www.equifax.com or 800-525-6285); Experian (www.experian.com or 888-397-3742); TransUnion (www.transunion.com or 800-680-7289). The alert is free, good for 90 days, and may get you a free credit report. In fact, getting a credit report on yourself is a good all-round defensive measure, even if your data has not, to your knowledge, been exposed. If it has been more than 12 months since you saw your credit report, check it out, via the contacts above, to make sure it contains no surprises.
None of this implies that the party whose inadequate security made the exposure possible is off the hook. The VA is currently under pressure to improve security and do more for the victims. You can learn more at www.firstgov.gov/veteransinfo.shtml. Sadly, if you visit the site created to keep vets informed about the May incident, you are greeted by news of an August incident. That's right, another computer went missing, this time exposing the insurance records of tens of thousands of vets.
Is there any good news? Well, I can say that the VA/VyStar victim I know has not received a third letter, yet. I'd like to say I see light at the end of the tunnel but, based on my 25 years of work against computer fraud and abuse, I don't. So be prepared to act in defense of your identity, keep abreast of new incidents, and cast a critical eye over any letters you receive. I'm afraid more of us will be over-exposed before things get better.
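The article above claims that encryption is relatively effective, cheap, and easy to use; to underline how low that bar really is, here is a minimal sketch of encrypting a file before it ever leaves the office. The file name is hypothetical and the example assumes the widely used third-party "cryptography" package for Python; it is an illustration of how accessible the technology is, not anyone's actual procedure.

```python
# Minimal file-encryption sketch using the third-party "cryptography" package
# (pip install cryptography). The file name below is hypothetical.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # keep this key somewhere safe, NOT next to the data
cipher = Fernet(key)

with open("veterans_records.csv", "rb") as f:
    plaintext = f.read()

with open("veterans_records.csv.enc", "wb") as f:
    f.write(cipher.encrypt(plaintext))
```

A stolen laptop carrying only the encrypted file (and not the key) is a lost piece of hardware, not a multi-million-record exposure.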
Friday, February 02, 2007
What's Up With Dataflation?
A few years ago I coined the term 'dataflation' in an effort to focus attention on the possible negative effects of widespread exposure of personally identifiable information (PII, like name, address, Social Security number, mother's maiden name, pet's name, credit card number, and so on). My thinking had been pointed in this direction by the large number of security breaches in the first half of 2005 and the massive amount of PII that they exposed (66 million records).
Plenty of people were focused on the immediate effects of this phenomenon and the media paid attention. We saw articles on what to do if it happens to you, how to protect your identity online, and what companies should do to prevent such breaches. A lot of good advice was dispensed and recent figures show it might be having a positive effect. (Remember: "The best weapon with which to defend information is information.")
However, there was no immediate sign of improvement during 2005 and I continued to focus on the cumulative rather than individual effects. What would these exposures mean to the current and future value of information? How would this impact trust within society? What would be the effect on commerce, particularly e-commerce? And what effect does trust have on growth? (There are indications that more trust = stronger GDP growth, starting perhaps with the 1997 paper by Knack and Keefer, click here for a list of articles).
To me it seemed like there had to be some sort of inflationary effect on personal data, hence data-flation. Perhaps, I wondered, the more bits of personal data pertaining to you that are known by everyone, the less value each piece of that personal data would have, notably when it comes to authenticating you to a system, a merchant, a bank, a government agency, and so on.
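For what it's worth, here is a back-of-envelope way to picture the idea, with entirely made-up exposure numbers--an illustration of the concept, not a real model:

# Back-of-envelope illustration of dataflation (made-up numbers).
# Treat the authentication value of a datum as the chance an impostor
# does NOT already hold it from some earlier breach.

exposure_rate = {                  # hypothetical fraction of people whose
    "ssn": 0.25,                   # datum of this type is already exposed
    "mothers_maiden_name": 0.10,
    "date_of_birth": 0.40,
}

def auth_value(items):
    """Probability an impostor lacks at least one of the requested items."""
    p_has_all = 1.0
    for item in items:
        p_has_all *= exposure_rate[item]
    return 1.0 - p_has_all

print(auth_value(["ssn"]))                          # 0.75
print(auth_value(["ssn", "mothers_maiden_name"]))   # 0.975

Push those exposure rates toward 1.0--which is what every new breach does--and the authentication value of any combination of those facts slides toward zero.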
My article for TechTarget on the subject of dataflation was published in October, 2005. Then I witnessed the massive exposures in early 2006, which included the records of 28.6 million veterans (among them a friend of mine, who was also 'exposed' at the same time by his credit union). So I continued to think about dataflation. When I was invited to speak at Interop Moscow I chose it as the topic of my presentation.
Then a strange thing happened. In the Q&A session after my presentation, one member of the audience told me that you could find just about any data about anyone in Russia on the streets of Moscow, sold on CD. Unfortunately, I didn't have enough time or Russian to go and buy any of these CDs, but several people confirmed that large numbers of records were sold to these street-level data vendors by employees of various government agencies. We did not have enough time for a protracted discussion, and there was something of a language barrier, but I think I sensed an implied statement: "Our data is hopelessly exposed and our society/government/economy is not crumbling."
Now, I am not an expert on the Russian economy, but I think one could argue it is not doing as well as it might. One might further suggest that a lack of trust is one reason, although proving this statement is probably an entire master's or even doctoral thesis. Still, I am open to pondering that implication. Maybe dataflation won't happen and everything will work out. It's just that, when you look at a compilation of the ever-increasing numbers, such as this amazing table at Privacy Rights Clearinghouse, it is hard to believe we are on the right track.
Wednesday, January 24, 2007
What's Next? A new time for Daylight Saving Time
Just a quick post to point out the change in DST this year, which will require some systems to be patched. I have some tech details over on Cobb on Tech. From a security perspective, the possibility exists that someone could exploit mismatches between systems that correctly auto-update time on 3/11/2007 and those that do not (mismatch being the non-technical term for who-knows-what-kind-of-synchronization-errors). One area to watch [apologies for the pun] will be access control devices for both perimeter and system security.
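If you want a quick way to see whether a given machine has picked up the new rules, a few lines of Python will do it. This is only a sketch, and it assumes the machine sits in a US timezone that observes DST:

# Quick check: does this system know about the new 2007 US DST rules?
# (Sketch only; assumes a US timezone that observes DST.)
import time

# Noon local time on March 15, 2007: DST under the new rules,
# standard time under the old, pre-2007 rules.
probe = time.mktime((2007, 3, 15, 12, 0, 0, 0, 0, -1))

if time.localtime(probe).tm_isdst > 0:
    print("Timezone data appears patched for the March 11 change.")
else:
    print("This system may still be running the old DST rules.")

March 15 falls inside DST under the new rules but outside it under the old ones, so the answer tells you which rule set the system's timezone data is using.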
Sunday, January 21, 2007
Much Anticipated Brussin Blog Now Online
Attention all serious blog readers! There's a new tech blog on the block and I'm betting it will become a "must-read" for anyone serious about Web 2.0, Business 2.0, and the whole intersection of technology and business. The blog is called "What Comes Next" and the blogger is David Brussin.
While David Brussin might not be a household name in high tech households, I would add the caveat "yet." I've been in the high tech field for over 25 years and have yet to encounter a sharper mind than Brussin's. It was no coincidence that he was named to the 2004 list of the world's 100 Top Young Innovators by Technology Review, MIT's Magazine of Innovation. Brussin has that rare combination of a. technical brilliance (he was building serious commercial networks before he graduated from high school) and b. business acumen (he had co-founded two successful startups before he was thirty, and both were snapped up by public companies).
Then there is c. he is very articulate. So, not only does Brussin come up with valuable and sometimes highly complex insights, he can put them into full sentences that are easily understood. Now, you sometimes meet people who have a or b or c. Occasionally you meet people with two of the three, but rarely do you encounter someone who has all three AND a sense of humor AND above average scores in tact and diplomacy.
So check out Brussin's blog. I hope you find it as interesting as I do.
Thursday, January 18, 2007
Small Business Continuity Gets a Boost: IMCD from ContingenZ
What if you could buy a large amount of expert advice on how to keep your business running despite everything that fate throws at you? Want to learn how? Read on...
Everyone knows that small businesses are the true powerhouse of free market economies, whether in the US, the UK, the EU, or beyond. Most people also know that the failure rate of small businesses is very high. What a lot of people don't realize is that many of those failures could be avoided if only small businesses did a little more advance planning. This fact gets lost in the seemingly endless array of factors that adversely impact small businesses: fire, flood, wind damage, snow days, power outage, earthquake, employee theft, virus outbreaks (biological and digital), hacking, abrupt departure of key employee(s), prolonged office evacuation due to nearby toxic spill, over-eager customer driving through the front window and mowing down the file server, unexpected incarceration of treasurer, public relations snafus. All of these happen and it is hard to predict when (you don't have to believe in global warming to know that the weather has been mighty unpredictable and frequently severe in recent years).
But all of these things have something in common: they are incidents, and incidents can be managed. Hence the art and science of Incident Management. One of the finest practitioners of this art is my friend Michael Miora, who started a company called ContingenZ. The idea grew out of a simple problem: he can't be in two places at once, and there just aren't enough incident management experts to go around, which means smaller businesses can rarely afford to hire one. So why not distill his expertise into a piece of software that any business owner or manager can use to create an incident management plan and business continuity strategy precisely tailored to the specific needs of the company?
And that is what Michael Miora has done, working with someone I also know quite well, Mike Cobb. Both Mike and Michael are CISSPs with a ton of experience in business management and data security. The product they came up with, IMCD, is now available in two versions. The more expensive Pro version is suitable for larger companies (and some very large companies are using it right now). The brand new and considerably less expensive Small Business Edition is ideal for small firms. What is more, businesses large and small can download a trial copy of IMCD to check it out.
This is a product that could literally save your business and it may well make you a ton of money even if--fingers crossed--you never have a single incident to deal with. How? Consider what happened to one of IMCD's first customers, a small firm specializing in shipping antiques that was in the running to get a big fat contract from a major shipping company. Like many big companies establishing new vendors, this one was doing due diligence. Did the small company have a business continuity plan? Yes, replied the small company. Can we see it? asked the big company. Umm, yes, well, it is sort of...informal, replied the small company. No formal plan, no big contract. And so the small company used IMCD to formally document its business continuity plan in a complete set of highly professional documents automatically generated by the software.
Result: the company that bought IMCD got the contract. And should anything ever happen to disrupt their business they are well placed to "keep on trucking." Scobb says "Check it out!"
[Disclaimer: I don't own stock in this company. Even if you buy a zillion licenses to IMCD I won't get a single penny. On the other hand you will make two of my friends very happy.]
Monday, January 15, 2007
Prairie Dogs and Information Security
I have blogged elsewhere about the Bush administration's interference with science. In the Union of Concerned Scientists' great catalog of these crimes against reason there's an interesting example of why it is important that everyone learn the basics of information security. The example concerns the white-tailed prairie dog (aww shucks, ain't he cute y'all).
The scientists claim that Julie MacDonald, of the Mountain Prairie Regional Office of the Fish and Wildlife Service, "directly tampered with a scientific determination by FWS biologists that the white-tailed prairie dog could warrant Endangered Species Act protection, and further, prevented the agency from fully reviewing the animal's status." A strong allegation. Any proof? How about Microsoft Word "track changes" edits? Yep, when you go altering reports written in Word you best be careful. Word tries hard not to forget. Check out the detailed sample here, illustrated in a pdf file that shows just what the changes were. As evidence of the scientists' claims I think the phrase that comes to mind is "dead to rights."
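If you are curious what those leftovers look like under the hood, here is a rough sketch that lists tracked changes lingering inside a .docx file. (That is the newer XML-based format; the documents in this case were more likely the older binary .doc, which needs different tools, and the file name below is made up.)

# Sketch: list tracked insertions and deletions left inside a .docx file.
import zipfile
import xml.etree.ElementTree as ET

W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def tracked_changes(path):
    with zipfile.ZipFile(path) as docx:
        root = ET.fromstring(docx.read("word/document.xml"))
    for tag, label in ((W + "ins", "inserted"), (W + "del", "deleted")):
        for node in root.iter(tag):
            author = node.get(W + "author", "unknown")
            text = "".join(t.text or "" for t in node.iter()
                           if t.tag in (W + "t", W + "delText"))
            print(label, "by", author + ":", repr(text))

tracked_changes("prairie_dog_finding.docx")   # hypothetical file name

Run something like that against a report before it goes out the door and you will know whose edits are still riding along.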
And change tracking is not the only way that Word coughs up secrets. Ever open a Word doc with Notepad or TextPad (which happens to be my favorite text editor)? You may well find stuff that doesn't appear in the document itself, stuff you thought you had deleted. Similar problems can occur if you are careless with Adobe Acrobat documents. See a great example of the Word issue (involving Tony Blair, Colin Powell, and the war on Iraq) on Richard Smith's fascinating Computer Bytes Man site.
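If you want to see the Notepad trick without the squinting, the old Unix strings utility does the job, or a few lines of Python like this hypothetical sketch:

# Sketch: print runs of readable ASCII hiding in a binary Word file,
# much like scrolling through it in Notepad.
import re

def visible_strings(path):
    with open(path, "rb") as f:
        data = f.read()
    # runs of six or more printable ASCII characters
    for match in re.finditer(rb"[ -~]{6,}", data):
        print(match.group().decode("ascii"))

visible_strings("memo.doc")   # hypothetical file name

Word stores much of its text as two-byte characters, so a crude ASCII scan like this only catches part of it, but leftover names, file paths, and deleted passages have a way of turning up.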
The point here is that companies using Word or Adobe documents to store and distribute information need to know exactly how those programs work, so that the documents they publish don't carry any information they would prefer to keep secret.