Remember last summer when the warnings about a surge in image spam started to appear? (Image spam being defined as unsolicited commercial email in which the message is presented as an image rather than text.) Then we saw spam volume drastically increase towards the end of 2006, with much hand-wringing over the difficulty of detecting image-based spam.
Well, I wonder how many companies have started to worry about the outbound-image threat? A certain percentage of companies do monitor outbound Internet traffic for trade secrets and inappropriate content. Some just monitor email. At least a few monitor web traffic. But I am fairly sure most of this is filtering based on text. Even so, I don't know how many would actually spot an employee typing company secrets into a password-protected blog hosted outside the company.
But what if the employee scans images of confidential company documents and uploads the JPEG files to a blog? Would that trigger a response from information security? Scanning the content of a JPEG for sensitive text is not impossible, but it is certainly processor intensive and in some ways it is not unlike the problem of detecting image-based spam.
Of course, one way of reducing the amount of image-based spam coming into an enterprise is to use the Turntide anti-spam technology that chokes off spam without a filter, instead using a behavior-based approach (now available as the Symantec Mail Security 8100 Series Appliance). Not sure if this would work the other way round. I know there was some discussion of using it to prevent enterprise networks from sending spam. If someone tried to send out 90,000 scanned pages, one after another, as JPEGs, would it show up as an anomaly and trigger some alarms?
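The details of Turntide's behavior-based approach have not been published, but the general anomaly-detection idea can be sketched in a few lines: track each host's outbound image traffic over a sliding time window and raise an alarm when the volume blows past a baseline. Everything below (the class name, the window size, the threshold) is an illustrative assumption of mine, not Symantec's implementation:

```python
from collections import deque

class OutboundImageMonitor:
    """Toy rate-based anomaly detector: flags a host that sends more
    image attachments within a sliding window than a set threshold.
    The window and threshold are illustrative, not Turntide's values."""

    def __init__(self, window_seconds=3600, max_images_per_window=200):
        self.window = window_seconds
        self.limit = max_images_per_window
        self.events = {}  # host -> deque of upload timestamps

    def record_image_upload(self, host, timestamp):
        q = self.events.setdefault(host, deque())
        q.append(timestamp)
        # Drop events that have fallen out of the sliding window.
        while q and q[0] <= timestamp - self.window:
            q.popleft()
        return len(q) > self.limit  # True -> raise an alarm

mon = OutboundImageMonitor(window_seconds=3600, max_images_per_window=200)
# 90,000 scanned pages sent one after another would trip this quickly;
# even the first 300 uploads in an hour exceed the baseline.
alarm = False
for i in range(300):
    alarm = mon.record_image_upload("workstation-42", float(i))
print(alarm)
```

The point is that the defender never has to look inside the JPEGs at all; the sheer volume of outbound images is the anomaly.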
BTW, the 90,000 number is not entirely random. In 1993 about twenty cases of confidential documents belonging to General Motors were physically shipped to Volkswagen headquarters in Wolfsburg (many of them allegedly transported aboard a Volkswagen corporate jet, via the Spanish residence of J. Ignacio Lopez de Arriortua, then Vice President at GM in charge of Worldwide Purchasing, later hired by VW). The number of purloined pages was put at 90,000.
And BTW, this piece of infosec trivia was my excuse for featuring Ron Patrick's amazing street-legal jet-powered VW Beetle.
Monday, April 30, 2007
Friday, April 20, 2007
White Hat Hacking for Rainy Day Fun: Weak search forms still revealing too much data
What better way to spend a rainy April day than white hat hacking? Experience the thrill of hacking with none of the guilt. I highly recommend this for anyone who has difficulty understanding why hackers do what they do (and you are NEVER going to be a really good information security professional unless you DO understand what hacking is about).
Allow me to swap my white hat for my linguist cap for a moment (B.A. Honours, School of English, University of Leeds--one year behind guitar virtuoso Mark Knopfler but way ahead of the wonderfully talented Corinne Bailey Rae--and would you believe I can't even carry a tune, but I digress). It has to be said that hacking is one of the most hotly contested words of the information age. In justifiable homage to the original good-hearted hackers, many infosec professionals use the term "criminal hackers" to distinguish the bad guys from the good guys (that's gender-neutral colloquial 'guys', by the way). The good guys, who don't break laws, can be referred to as white-hat hackers, the bad guys being black hat. I am actually leaning towards 'bad actors' as a preferred term for the bad guys (with apologies to my thespian readers).
So, one rainy April afternoon I was wearing my "film producer hat" and working the web to promote the film's appearance at two overlapping film festivals, one in Winston-Salem and the other in Columbus, Ohio. Neither the director nor I could afford to attend these events in person, and we were worried that turnout would be low. I decided to surf the web sites of colleges in the target areas and identify faculty with an academic interest in civil rights history (and thereby interested in the film enough to tell their students about it). In the process I found a classic example of weak web design that was hackable.
After using standard search tools to identify the people I wanted to contact, I looked for their email addresses. Many organizations like schools and hospitals have a directory of staff phone numbers and email addresses. However, to prevent a variety of problems, such as spam, these and other details are not displayed wholesale in a list, but one at a time in response to a name search. In other words, a form on a web page enables users to search a database of people (in infosec terms, this database can be referred to as an asset). The premise is that you have to know the person's name to find their information.
I used this sort of directory to email several professors at several schools. However, I also found something interesting. These forms usually consist of two fields, for Last Name and First Name, together with a Submit button. The way it's supposed to work is that you, perhaps an aspiring Physics major, enter Einstein in one field, Albert in the other, click Submit, and get the phone number and email address for Prof. Einstein. However, such forms can be a pain for users who can't recall the professor's full name, so the form might allow you to enter Einstein for the last name and just the letter 'A' for the first name. And herein lies a dilemma that can become a problem: how 'vague' to allow the search to be. For example, if I can enter 'D' in the last name field and 'J' in the first name field, I can retrieve all the Jane and John Does in the database. What you need is a fairly clever set of rules built into the form to control the results of any conceivable form input.
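For illustration, here is a minimal sketch of the kind of rules such a form handler might enforce: require a minimum number of characters in the last-name field, and cap the number of records any single query can return. The field names, limits, and sample data are my own assumptions, not code from any actual school's directory:

```python
MIN_QUERY_LENGTH = 2   # require at least two letters of the last name
MAX_RESULTS = 10       # never return the whole directory at once

def validate_directory_query(last_name, first_name):
    """Return a cleaned (last, first) pair, or raise ValueError.
    The idea: an empty or near-empty query must never be allowed
    to match every record in the database."""
    last = (last_name or "").strip()
    first = (first_name or "").strip()
    if len(last) < MIN_QUERY_LENGTH:
        raise ValueError("Please enter at least two letters of the last name.")
    return last, first

def run_query(records, last, first):
    """Case-insensitive prefix match, with a hard cap on results."""
    hits = [r for r in records
            if r["last"].lower().startswith(last.lower())
            and r["first"].lower().startswith(first.lower())]
    return hits[:MAX_RESULTS]

directory = [{"last": "Einstein", "first": "Albert"},
             {"last": "Doe", "first": "Jane"},
             {"last": "Doe", "first": "John"}]

last, first = validate_directory_query("Ein", "A")
print(run_query(directory, last, first))   # matches Prof. Einstein only
```

A real deployment would also rate-limit queries per client, but even these two rules stop the blank-Submit trick from dumping the entire database.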
You see, in terms of information security, one can reliably predict that someone (referred to as an agent) will at some point click the Submit button without entering any characters at all in either field. If the result of this action is to reveal all of the records in the database (what we might call a means), one can reliably predict, based on history, that this method will eventually be used to make a copy of all the records in the database (the asset).
Thus, by failing to properly code the handling of form input from this search page, the folks who put up the page have created a vulnerability. This becomes a means of attack and a threat exists if someone figures out how to exploit it to gain unauthorized access to the asset. (This same problem crops up with student directories as well, where you are even less likely to want to grant access to the full list.)
This example nicely displays all of the elements of an information security threat (asset, agent, means). I have seen this type of problem on local government web sites where the effect was to enable the attacker to find all the data required to steal a person's identity, or even find all of the special training taken by former military personnel in the area.
As a white hat hacker it is your responsibility to inform the site manager of the problem. You avoid, as I have here, revealing specifics of the problem (e.g., the address of the web site where I found this example). Hopefully, they will correct the problem. As for me, I will admit that, wearing my producer hat, I did use some of the email addresses that I found. I did not spam anyone. I sent them personal notes. And maybe it worked. At the Ohio festival Dare Not Walk Alone won the audience award for best film.
Tuesday, April 17, 2007
Photocopier FUD? Americans copying billions of tax docs don't have time to think
So, you've filed your tax return and put away your tax papers until next year, but how much of the very personal information on those tax papers is still out there, accessible to other people (besides you and the IRS)?
The answer could be "a surprisingly large amount," particularly if you used a digital photocopier to make copies of things like your 1040, W2, 1099s, K-1 and so on. We're not talking about leaving your originals in the photocopier, a common enough mistake, but about the fact some digital copiers retain images of those pages until they are over-written by successive copy jobs, a fact highlighted in an AP article last month. This is not a case of unfounded 'fear, uncertainty, and doubt.' The vulnerability highlighted here is real enough to warrant serious attention, particularly in some quarters.
The underlying fact is that many office photocopiers now contain hard drives to which scans of the pages being copied are written before paper copies are printed and those scans are not always erased after the copy job is completed. Steal one of those hard drives and you could get access to some very personal information (and we're not just talking about tax returns and after-hours butt-scans).
The extent to which this 'feature' of digital copiers poses a threat to your privacy depends upon many factors, like who you are and what kind of enemies you have. Personally, I'm not too worried. But if I were a key player in a large company in a hotly contested market I would be paying attention to this particular vulnerability.
Note that the possibility someone could read your personal data off the hard drive of a machine you used to copy personal documents is not a threat; it is a vulnerability--it becomes a threat when a threat agent is willing and able to exploit it.
As to exploitation of the vulnerability by a threat agent, the following scenario is entirely plausible: as a key person in your organization you and your spouse are under surveillance by the opposition. They've searched your trash but found nothing useful. Then one of you is seen entering the local copy-shop and spending some time on machine number 9. After you leave, a generic service person enters said copy-shop muttering something about a maintenance flag on copier number 9. He opens the machine, removes the hard drive and mutters something about a spare in the van. Off he goes with a digital copy of whatever papers you ran through that machine.
Variations on this theme are numerous and include the janitor stealing or mirroring office copier hard drives on the night shift (a great way to get a copy of that competitive bid you had to submit in triplicate). Defenses include being more thoughtful about where you do your photocopying, what access you give to the copier, and what copying hardware you use (some digital copiers offer 'safety' features--of which more later).
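To see why "erased after the copy job" has to mean more than deleting a file, here is a sketch of overwriting a spooled image's bytes before unlinking it. This is illustrative only: journaling filesystems and flash wear-levelling can keep old copies of blocks, so real secure-erase features on copiers have to work at a lower level than this.

```python
import os

def shred_file(path, passes=3):
    """Overwrite a file's contents in place before deleting it, so the
    raw bytes are less likely to be recoverable from the disk.
    Caveat: journaling and wear-levelling can retain stale blocks,
    so this is a sketch of the principle, not a guarantee."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # replace data with random bytes
            f.flush()
            os.fsync(f.fileno())       # force the write to the platter
    os.remove(path)

# Simulate a retained scan image and destroy it.
with open("retained_scan.jpg", "wb") as f:
    f.write(b"confidential 1040 form image bytes")
shred_file("retained_scan.jpg")
print(os.path.exists("retained_scan.jpg"))  # False
```

A plain delete, by contrast, typically just unlinks the directory entry and leaves the image bytes sitting on the drive until they happen to be overwritten by a later job.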
However, the first thing that struck me when I read the AP article was a sense of deja vu. Hard drives have been built into a lot of large copiers and printers for some time. It was at least 7 years ago that the penetration testing team at my company figured they could run a publicly accessible web site from the hard drive of such a machine located on the internal network of a large public school district (which we had been hired to test, I hasten to add). That tells you a lot about how much thought the folks who design such machines were giving to their potential for abuse.
In other words, many 'new' or 'emerging' information security threats are not so much new as newly realized or newly rediscovered. And this 'newness' is not simply a function of vulnerabilities found or re-found, but also changes in the means and motives of threat agents prepared to exploit them.
Sidebar/postscript: When you read the AP article referenced above you get the distinct impression that it was prompted by copier-maker Sharp, and if I were to swap my infosec hat for my entrepreneur hat I'd have to doff it to the folks at Sharp (or Sharp's PR agency) who were behind this. I know from experience it is very difficult to get someone like AP to write a story that comes from your particular perspective. Sharp's perspective is that of a company which has gone to the trouble of making photocopiers that are more secure (as you can read here). I think this is a good thing, and this article was a good blend of education and marketing.
Wednesday, April 11, 2007
Windows & Office Barf Again! Microsoft's recommended Automatic Updates trash data
If you are using Windows and value your time, do this:
1. Go to the Control Panel for Automatic Updates
2. Change the setting from "Automatic (Recommended)" to something like "Download updates for me, but let me choose when to install them."
If you don't do this, you may be set to lose a lot of time and money. Why? Whenever there is a patch Tuesday and the patch requires a reboot, like the one this week, the recommended setting means Microsoft will reboot your system for you, unless you happen to be sitting there at the keyboard to prevent it. Here's a typical scenario:
You spend several hours researching a topic on the web. You have about ten browser tabs open displaying your research results and you are cutting and pasting said results into a Microsoft Word document. The door bell chimes and you rush to answer it. You are a savvy user, so even as you head to the door you make a mental note that the two apps you are using have auto-save. Word auto-saves documents. Firefox auto-saves session data. But as you stand at the door signing for a package you hear the "chime of death" from your office, signalling that your Windows machine has restarted. Not only has it restarted, it has, under the control of Microsoft's Automatic Updates, trashed your Word documents. That's right, it has not even created the temporary files that allow you to restore documents when something crashes Word. This is because Microsoft, in its current state of engorged hubris, which can only be described as galactic in scope, does not consider an unapproved system restart of its choosing to be a crash. So it only gives you the last user-saved version of the docs that you have spent hours compiling.
Let's face it, in the year 2007, twenty years into an OS, twenty-five years into an application, this is bad behavior of the worst and most unforgivable kind. The vendor-recommended mode of operation is literally data destructive.
Of course, some readers may say that, "if you are using Windows and value your time," you should switch to a Mac. But Apple has its own share of hubris and I have thousands of dollars invested in software that won't run on a Mac. Come to think of it, I have invested thousands of dollars and hundreds of man-hours creating a computer system that pretty much does what I want it to do, except when the historical recipient of many thousands of my dollars decides to use its software and ignorance to trash my data.
Monday, April 09, 2007
Security Means Availability: Google and others need to address this ASAP in SaaS
As enterprises explore Software as a Service, security experts like David Brussin are keeping a watchful eye. Clearly there are serious security implications whenever data is allowed to live beyond the--hopefully, strongly defended--perimeter of the enterprise fortress. Typically those implications are first thought of in terms of confidentiality and integrity: Will our data be safe from prying eyes and unauthorized access? But the third pillar of security, availability, should not be neglected. How much does strong protection against unauthorized access matter if authorized access is impaired?
Google must be pondering this question right now as news of outages spreads: "Little over a month after introducing Google Apps' Premier version, which includes a 99.99 percent uptime commitment, Google is failing to meet that service level agreement (SLA) for an undetermined number of customers." That PC World article is highlighted in a succinctly titled posting by Ann All on the Straight to the Source blog at IT Business Edge: It's the SLAs, Stupid.
This is timely data for me as I have just spent a week over in Europe meeting with executives of a VLO to discuss information security strategy in the context of a possible shift to SaaS as an alternative to out-sourcing (VLO = Very Large Organization).
Actually, I see not one but two availability question marks with SaaS. The first is supplier-side: Will the SaaS vendor's infrastructure keep up with demand? This seems to be the very problem Google is wrestling with right now.
Second is the user-side connectivity question: What use is Google Mail if the user can't get on the Internet? This is such a basic question that I am almost embarrassed to raise it, but I feel I must. Failure to question underlying assumptions is a shortcoming sadly endemic in technology adoption (the classic is probably "Sure, it's safe to handle this stuff" --Madame Curie).
SaaS seems to be predicated upon universal high-speed connectivity, a wonderful thing, but not yet a real thing, and not--perhaps ever--a cheap thing. Try to keep working on an online document as you move from office to train to plane to hotel to client to airport and back to the office. How successful you are will depend upon, among other things: where your home is; what hotel you stay at; what your client's connectivity policies and facilities are like; and your budget. This last item may be even more critical when you consider "working securely on an online document as you move..."
As for enterprise SaaS solely at the office, there will still be two SLAs to consider: Your SaaS vendor SLA and your ISP SLA.
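Those two SLAs compound. If the vendor and the ISP fail independently, the availability you actually experience is the product of the two commitments, which is a quick calculation (the ISP figure below is an illustrative assumption, not from any particular contract):

```python
def combined_availability(*availabilities):
    """Availability of services that must all be up at once,
    assuming independent failures: the product of the parts."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

def downtime_minutes_per_month(availability, minutes=30 * 24 * 60):
    """Expected downtime in a 30-day month at a given availability."""
    return (1.0 - availability) * minutes

saas = 0.9999   # the vendor's "four nines" commitment
isp  = 0.999    # a typical business ISP SLA (illustrative)

both = combined_availability(saas, isp)
print(round(both, 6))                        # ~0.9989
print(round(downtime_minutes_per_month(both), 1))  # ~47.5 minutes
```

Four nines from the vendor plus three nines from your ISP leaves you with roughly three-quarters of an hour of potential downtime a month; in other words, the weaker SLA dominates.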