Saturday, October 31, 2020

Thanks for reading and heeding. Please #BeCyberSmart! (Cybersecurity Awareness Month, Day 31)

Vote for those committed to doing more about cybersecurity than has been done so far
This is blog post 31 of the 31 posts that I pledged to write in October, 2020, for Cybersecurity Awareness Month, an international effort to help people improve the security of their devices and protect the privacy of their data.

There is a lot more that I wanted to say, and I will get round to saying it in the coming weeks. However, for the moment, there is just time for some final cybersecurity awareness thoughts. 

We should all heed the advice that has been dished up during the month, from locking down our logins and limiting access to all of our connected digital devices, to being careful how and where we reveal sensitive personal information. 

But the world now faces unprecedented levels of criminal behavior in cyberspace, and in my opinion a lot more of the heavy lifting in cybersecurity must be done by governments. Firstly, by taking seriously the need to achieve global consensus that abuse of digital technology is wrong, morally reprehensible, and will be prosecuted. Secondly, by funding efforts to enforce that consensus at levels many times greater than the paltry sums that have been allocated so far. 

So I will close the month by repeating something that I said back on Day 22:

Whenever we vote to elect representatives, we can vote for those most likely to take all this as seriously as it needs to be taken.

Take care, stay safe, and #BeCyberSmart

Author's Note
If you found this article interesting and/or helpful, please consider clicking the button below to buy me a coffee and support a good cause, while fueling more independent research and ad-free content like this. Thanks!


Friday, October 30, 2020

Cybersecurity needs more women, now and in the future (Cybersecurity Awareness Month, Day 30)

A woman with a laptop next to a server, making the point that IT needs more women. Cybersecurity needs more women. Shoutout to Christina for the image on Unsplash.

Hopefully, you have seen many images like the one above during Cybersecurity Awareness Month 2020, which is now drawing to a close. This messaging emphasizes our individual and collective responsibility for taking whatever steps we can to protect digital devices and data from being abused for selfish purposes. To me, this particular image is a reminder that cybersecurity is not only a shared obligation, but also a field of endeavor that offers a lot of job opportunities for women. And that is the subject of today's blog post. 

If you have been reading along on this blog this month you will know that there is a post for each day of the month. I hope you have found these helpful and, if so, that you will share them with friends and colleagues through the coming months and into next year. You don't need to read many of these posts to realize that, while I fully support raising awareness of cybersecurity, I also think a lot more than awareness needs to be raised if humans are ever going to get ahead of the cybersecurity problem. One of the things that needs raising is the percentage of women working in technology.

Today we look at the need for more women in technology generally, and in cybersecurity specifically. But before I go any further with this, I need to give a shoutout to Christina for the great photo that makes up the right half of the image at the top of this article. Women of Color in Tech are the creators of the WOCinTech stock photo collection, full of great images that are easy to find on Unsplash.

More women in cybersecurity

As I outlined in the article for October 28, there is a huge cybersecurity skills gap, despite the fact that the pay for some cybersecurity roles can be very good.* We're talking half a million open positions in North America this year, and most countries are faced with large shortfalls in qualified applicants for cybersecurity roles. 

Note that these are funded jobs, waiting for the right applicants; and there is no reason that all those applicants need to be men. Indeed, I would argue that the cybersecurity workforce would benefit from becoming far more gender diverse, and just more diverse in general. When a field of endeavor embraces greater diversity that means a larger pool of talent from which to recruit, plus the potential to benefit from a wider range of perspectives.

Clearly, there are multiple ways in which it makes sense to encourage women to consider a job in cybersecurity, starting with the number of openings and the levels of pay available. Industry organizations—like CompTIA, (ISC)2, and ISSA—recognize this and have done a lot to encourage recruitment of women and minorities into tech in general, and cybersecurity specifically. Here's just a sample of web pages and articles that have more information about this: 

Of course, getting into the field may require some knowledge and training that you don't have yet, but these can be acquired, often through self-paced learning, on the job or in your own time, combined with security certifications. There are also community college courses and apprenticeship programs. In other words, getting into a career in cybersecurity and progressing to the point where you're earning a six-figure salary does not require a university degree (there are still some employers who don't believe this, but they are wrong, and there are a lot of people, like me, working to convince them of this).

Cybersecurity can be a great fit for women returning to the workforce, or entering it "late" (as defined by social convention). In my experience, women can acquire the necessary knowledge and training for cybersecurity work just as fast as men, if not faster. In yesterday's article I looked at reasons why some people might be more aware of technology risks than others, and I believe that a lot of those more aware people are female.

Here are a couple of examples that show women being particularly adept in one particular aspect of cybersecurity: raising awareness of how easily our digital devices and data can be compromised. To be clear, both women are making a good living advising organizations on how to avoid becoming victims of the kind of "vishing" attacks that they so effectively demonstrate.  

This second example offers more detail, some colorful language, and live video of a fairly serious theft of information, plus airline points. It also works as a great cybersecurity awareness video. Use it when you need to show someone how all that online authentication stuff we talked about on days 19, 20, and 21 can be bypassed if you shift communications to the phone and the target is not vishing-aware.

Of course, the cybersecurity realm is much, much wider than this, and women are making valuable contributions across the board, from the very human side, seen in these videos, to the most cerebral, like Artificial Intelligence, a topic I will get back to in tomorrow's blog post.

One thing I find particularly encouraging about the state of play for women entering cybersecurity today is the amount of encouragement that is on offer, not just upon entering the field, but throughout career development. One of my favorite encouragers is Keirsten Brager. Consider the approach she took when investigating the recurring career question of "what should I be paid?" (When I heard Keirsten speaking at The Diana Initiative a few years ago, I learned several career strategies that were new to me, and cybersecurity has been my career for more than three decades.)

Women on cybersecurity

Getting more women to enter the field of cybersecurity is only part of what needs to happen. I would like to see, and the world would benefit from, more non-male influencers in the field. For example, several of my cybersecurity awareness blog posts this month recommended websites and newsletters that are good for keeping up with the latest security news, incidents, breaches, vulnerabilities, research findings, etc. 

You might have noticed that these cybersecurity resources tend to be helmed by men, guys who have developed a reputation for providing useful and un-gated information about, and analysis of, cybersecurity trends and issues. I wanted to include more non-male sources in my posts, but I encountered a very interesting phenomenon: women charging for their take on cybersecurity. This makes sense given the way that the field has evolved; guys who rose to prominence in the field early on have developed followings that can be monetized with ads, paid speaking engagements, and so on.

But what if you have achieved expertise and a perspective worth sharing, but no prominence (circumstances with which many women may be familiar)? Why not build the following your work merits while also monetizing it: pay as you grow, as it were. That is what some women in cybersecurity are now doing, charging for their cybersecurity content on a pay-as-you-go basis. Here are two of the paid sources that I have signed up for: Infosec Sherpa and Cybersecurity Roundup.

If you know of others, please ping me on Twitter and I will check them out. In the meantime, here is a very helpful list of top cybersecurity blogs and websites to follow, curated by a woman. And here is an impressive list of 50 Women In Cybersecurity Associations And Groups To Follow. Also check out Lisa Forte's Rebooting channel on YouTube.


* When I say there is a huge cybersecurity skills gap "despite the fact that the pay for some cybersecurity roles can be very good" I mean yes, you can earn good money, but not all the jobs pay well. Furthermore, very sadly and all too predictably, the sector currently pays women 21% less than men according to a recent study. Clearly, this is wrong and needs to change. 

Thursday, October 29, 2020

Cybersecurity awareness: Why some people get it, more than others (Cybersecurity Awareness Month, Day 29)

In 2017, I wrote: “the digital technologies that enable much of what we think of as modern life have introduced new risks into the world and amplified some old ones. Attitudes towards risks arising from our use of both digital and non-digital technologies vary considerably, creating challenges for people who seek to manage risk.” 

This is still true today, the 29th day of Cybersecurity Awareness Month, 2020; and, as the month draws to a close, I think it is helpful to reflect on how we feel about "cyber" risks, those created by our use of connected devices and the rest of the digital infrastructure that supports so many facets of life in the 21st century. You may have found that not everyone seems to be as concerned about some risks as you are.

Conversely, you might not be as worried about some things as some of your friends are. For some reason this makes me think of a Chief Information Security Officer cycling to work: she's more aware of, and concerned about, the risks posed by a new operating system vulnerability than most people, but she's less concerned than her friends and family about the risks of cycling to work. 

The reasons for differences in risk perception are many and complex, and there's not enough room in this article to address them all in a fully-documented fashion. What I do have room for is a short account of my considered opinions on this, followed by some sources at the end. The underlying theme of what I have to say is this: the failure of some people to heed expert advice, particularly experts who are warning that something is a problem and poses risks that need to be taken more seriously. 

The Way I/We/You/They See Certain Risks

Consider a survey question that offers the following choices for your answer: Low risk; Between low and moderate risk; Moderate risk; Between moderate and high risk; High risk. Suppose the question is this: How much risk do you believe global warming poses to human health, safety, or prosperity? What is your answer?

Over the last decade or so, numerous surveys have asked that question and the most frequent response is High risk. Almost all climate scientists agree that High risk is the "correct" answer, based on the science. But not everyone agrees, and that is clearly hampering efforts to slow down global warming.

So guess what happens when you analyze the survey responses by gender: you find that men are more likely to rate this risk Low. In fact, whenever you ask people to rate risks pertaining to a bunch of different technologies, you tend to find men see less risk than women. Furthermore, white males tend to see less risk than white females, non-white males, and non-white females.
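To make this kind of analysis concrete, here is a minimal sketch of how mean risk ratings are compared across demographic groups. The responses below are entirely made up for illustration (they are not the actual survey data), using a 1-5 scale where 1 is Low risk and 5 is High risk:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical responses: (group, risk rating on a 1-5 scale)
responses = [
    ("white male", 2), ("white male", 3), ("white male", 1),
    ("white female", 4), ("white female", 5),
    ("non-white male", 4), ("non-white male", 5),
    ("non-white female", 5), ("non-white female", 4),
]

# Group the ratings by demographic category
by_group = defaultdict(list)
for group, rating in responses:
    by_group[group].append(rating)

# Compare mean ratings across groups
for group, ratings in sorted(by_group.items()):
    print(f"{group}: mean rating {mean(ratings):.2f}")
```

With made-up data like this, the "white male" group shows the lowest mean rating, which is the shape of the pattern the surveys described above keep finding.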

And this is not a new phenomenon. There is a long history of failure to heed the warnings sounded by experts on a wide range of issues. Consider the 1994 survey results graphed on the right. The grey line with the round data points is white males who saw less risk than everyone else in nuclear waste, chemical pollution, motor vehicle accidents, outdoor air quality, nuclear power plants and medical X-rays.

To be a bit more precise, the implication is that, on aggregate, white men in America tend to under-estimate technology risks, relative to the mean. And if you think, like I do, that the technologies we have been talking about so far present serious risks to human health, safety, or prosperity, then those men are wrong. What is more, their opinions act as a brake on efforts to address the risks that others are concerned about. And not only are they wrong, history has shown it is hard to persuade them of this, and of the need to raise awareness of these risks. All of which could have serious implications for cybersecurity awareness if it turns out that this pattern of findings extends to cyber risks.

Guess what? The pattern does extend to the digital realm, as you can see from this chart based on research I did a few years ago, working with my good friend Lysa Myers who was on my research team at ESET at the time, with some assistance from Dan Kahan of the Cultural Cognition Project at Yale Law School.

See that White Male line undercutting the others across a wide range of risk categories? The yellow highlighting picks out the “digital risks,” and it shows that white males tend to see less risk from digital technology than the other groups, although the gap is smaller than with some other risks (and there is one notable exception: government data monitoring seems to trouble non-white males even less than white males—there could be several explanations for this, but that is a subject for a different blog post).

"Not All White Men"

Of course, the story here is not as simple as it appears from these graphs. If you watched the TEDx talk on Day 8 of this month's cybersecurity awareness blog posts you will know that, the first time I got excited about this White Male Effect in technology risk perception, my wife pointed out that I am a white male; and I don't—in her professional opinion—under-estimate risk. And in fact, research shows that significantly less than half of white males are what I would call the "problem" here: refusing to accept expert opinion as to how serious the risks of technology are to human health, safety, and prosperity.

One of the pioneers in risk perception research, Dan Kahan, collaborated in a 2007 study that found a certain type of white male was "so extremely skeptical of risks involving, say, the environment ... that they create the appearance of a sample-wide "white male" effect." 

As Kahan puts it, "that effect 'disappears' once the extreme skepticism of these individuals (less than 1/6 of the white [male] population) is taken into account." (see Kahan's discussion here). This makes a lot of sense when you look at cybersecurity. I think we can safely assume that most cybersecurity professionals perceive the risks from digital technology abuse to be high rather than low. And we know that for decades the cybersecurity profession has been dominated by white males. So what distinguishes them from the "certain type of white male" to which Kahan refers?

The answer lies in something called the Cultural Theory of Risk, and in the language of that theory, the white men in question, the guys drastically underestimating technology risks, are white hierarchical and individualistic males. According to this theory, "structures of social organization endow individuals with perceptions that reinforce those structures in competition against alternative ones" (Wikipedia). A hierarchical individualist is inclined to agree with statements like: it's not the government's business to try to protect people from themselves, and this whole "everyone is equal" thing has gone too far.

This blog post does not have room for a discussion of the Cultural Theory, but the diagram on the right helps put the terms hierarchical and individualistic into context. To grossly over-simplify, the folks who see as much risk as I do in technology tend to be in the lower right: egalitarian and community-minded (we're all equal and in this together). A lot of women tend to be in that quadrant. 

For much more on this, you can read about the ground-breaking research that Lysa and I did on this theory in the context of digital risks in this two-part report: Adventures in cybersecurity research: Risk, Cultural Theory, and the White Male Effect, part one and part two. (Kudos to ESET for supporting this work.) There is also a summary here on Medium.

Lysa and I gave a talk about this work at the (ISC)2 Security Congress in 2017, describing how the failure to listen to experts, rooted in these differences in risk perception, impacts cybersecurity. The main points are as follows:

  • The security of digital systems (cybersecurity) is undermined by vulnerabilities in products and systems.
  • Failure to heed experts is a major source of vulnerability.
  • Failure to heed experts is a known problem in technology.
  • The Cultural Theory of risk perception helps explain this problem.
  • Cultural Theory exposes the tendency of some males to underestimate risk (White Male Effect or WME).
  • Our research assessed the public’s perceptions of a range of technology risks (digital and non-digital).
  • The findings provide the first ever assessment of WME in the digital or cyber-realm.
  • Additional findings indicate that cyber-related risks are now firmly embedded in public consciousness.
  • Practical benefits from the research include pointers to improved risk communication strategies and a novel take on the need for greater diversity in technology leadership roles.

We suggested several ways in which our findings, and those of other experts researching risk perception, might help improve risk communication. Here is the relevant slide from the talk.  

If you want to explore this line of thinking further, I recommend reading about "identity protective cognition," a form of motivated reasoning that, according to Kahan, describes the tendency of people to fit their perceptions of risk (and related facts) to ones that reflect and reinforce their connection to important affinity groups, membership in which confers psychic, emotional, and material benefits. 


Wednesday, October 28, 2020

Are you aware of the cybersecurity skills gap? (Cybersecurity Awareness Month: Day 28)

Graphic illustrating the idea of a skills gap
There is a shortage of effective guardians

Remember back on Day 1 of this Cybersecurity Awareness Month when I talked about how much cybercrime there is these days? And on Day 15 I put up this graph of Internet crime losses reported to IC3 and the FBI? 

We talked about how this graph is a pretty good representation of the overall trend in cybercrime—which is likely to set new records this year—and how that makes raising cybersecurity awareness a very urgent task. 

During this month we have also looked at some of the reasons why there is so much cybercrime, including one very fundamental insight into crime in general, from Felson and Cohen. Back in 1980 they said that crimes occur when there is: 
"convergence in space and time of offenders, of suitable targets, and of the absence of effective guardians."
So, one way to look at the seemingly relentless rise of cybercrime is to see it as the convergence of offenders and suitable targets in cyberspace, a place where, at the present time, there is a very real absence of effective guardians. 

In fact, there are literally hundreds of thousands of unfilled jobs for effective guardians. Sadly, governments and companies just cannot find enough people to do the cybersecurity work that needs to be done to effectively guard against cybercrime. 

I don't mean that organizations don't have the money to hire people with the necessary cybersecurity skills to be effective guardians. I mean that even when they have the money, i.e. when positions are funded, they just can't find enough qualified applicants to fill those positions. 

In America, this shortfall is actually mapped out on the web at a site called CyberSeek. That link takes you straight to a map that shows you where the demand is and where the supply is located. Nationally, there are half a million jobs open in cybersecurity; to put it another way, one third of the "effective guardian workforce" is missing.
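The "one third" figure follows from simple arithmetic on those round numbers. Here is a back-of-the-envelope sketch; note that the 1,000,000 figure for the currently employed workforce is an assumption implied by the "one third" claim, not CyberSeek's exact count:

```python
# Rounded figures from the discussion above (assumed, not exact counts)
open_positions = 500_000        # unfilled US cybersecurity jobs
filled_positions = 1_000_000    # roughly, the employed cybersecurity workforce

# The "effective guardian workforce" the country actually needs
total_needed = filled_positions + open_positions

# Fraction of that needed workforce that is missing
missing_share = open_positions / total_needed
print(f"Missing share of the workforce: {missing_share:.0%}")  # → 33%
```

In other words, for every two cybersecurity roles that are filled, a third one is sitting open.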

This phenomenon is widely referred to as the "cybersecurity skills gap" and it is not a new thing. This gap has been there for years now. I studied the problem in some detail and presented a paper on it in 2016, titled: Mind this gap: criminal hacking and the global cybersecurity skills shortage, a critical analysis. Back then I said this about the cybersecurity skills gap: "It is real, it is large, and it is growing, despite recent efforts to close it."

I said exactly the same thing about a month ago when a reporter was looking for input on the skills gap in 2020 relative to the pandemic. That reporter did not use my input, but here is how it might have appeared in an article: 

Cobb started exploring the cybersecurity skills shortage in 2015 after a report from Cisco said the global gap could be as big as one million people. In 2016, drawing on relationships with CompTIA and (ISC)2, he researched the gap for a master’s dissertation and presented a paper titled “Mind This Gap” at that year’s Virus Bulletin Conference. His conclusion: the gap is real and could easily be as big as one million globally.

Cobb says skills gap skeptics who claim its size is exaggerated tend to be either people who have skills but can't find a job, or market-oriented economists who say any claim of a skills gap must carry the qualifier "at current pay levels."

"I have a lot of sympathy for those who have skills but no job," says Cobb, "In my experience, a lot of this is due to serious shortcomings in hiring processes at many organizations; hiring for cybersecurity roles is a skill in itself, one that many HR departments lack." 

According to Cobb, reducing bottlenecks in hiring, while ensuring that recruitment efforts are as diverse as possible, would definitely help to reduce the number of unfilled or under-filled cybersecurity positions.

As for closing the skills gap by increasing pay levels, Cobb says this is an overly simplistic view of markets. "Paying higher and higher wages until your company has all the security people it needs only works for goods and services sold at “cost plus” prices. While some defense contractors may be able to do that, most businesses see increased spending on security as a reduction in profit."

The reporter specifically asked: Do you see the pandemic adding to this shortage? If so, why?

"The pandemic is clearly increasing both the demand for people with cybersecurity skills and the demands put upon those people," says Cobb. "It’s not just the sudden shift to home working, but the rapid rise in levels of cybercrime, and the heightened levels of anxiety and fear that can affect an employee’s judgment."

Cobb added: "A lot of cybersecurity teams started out 2020 with a smaller headcount than they needed and open roles that they were struggling to fill, then suddenly they find themselves fighting more battles, on more fronts, than ever before; they’re going to need a lot more people than are available to hire."

On a brighter note, Cobb sees the increasing openness to employing remote staff as a positive factor for recruiting cybersecurity talent: "There are people who have great potential to do well in security but for whom a conventional office environment is not a good fit."

Another good question was this: Is the new work-from-home model adding to the problem by creating more work for cybersecurity professionals?

Cobb says that "Many organizations have made a fast-paced switch from office-based computing, where systems and users can be tightly controlled and closely monitored, to a loose-knit web of connections over public networks via an almost infinite combination of home-based hardware and software." In other words, he says, "attack vectors have multiplied, controls have been weakened, users are stressed, and criminals are on a tear."

"Organizations are currently faced with multiple factors that magnify the cybersecurity challenge: complexity, rapid change, economic anxiety, personal stress, and increasingly aggressive adversaries operating with apparent impunity."

However, bad as the pandemic is, Cobb sees the failure of governments to tackle the root causes of cybercrime as a bigger long-term threat to cybersecurity, one which may make closing of the skills gap impossible any time soon.

So, there you have an up-to-date view of the cybersecurity skills gap, served up in article format, without the annoying adverts and pop-up requests to subscribe. But what does this mean for cybersecurity awareness? Here are two things:

  1. Any time you find yourself assuming that a connected device or online service is well-protected, remind yourself that the organization behind that device or service is probably struggling to fill positions that involve making that assumption valid. 
  2. If you find cybersecurity interesting, there are plenty of ways you can turn that interest into a well-paid job.

I will talk more about point two before the week is out. 

In the meantime: #BeCyberSmart

Note: If you found this article interesting and/or helpful, please consider clicking the button below to buy me a coffee and fuel more content like this. Thanks!


Tuesday, October 27, 2020

From ransomware to blackmail: cybercrime takes a nasty, evil turn (Cybersecurity Awareness Month, Day 27)

Criminal abuse of digital technology hits new depths of depravity with blackmail of psychotherapy patients, headline

We interrupt our regularly scheduled cybersecurity awareness blog post to bring you this deeply disturbing news:

One or more criminals are trying to blackmail psychotherapy patients after gaining access to their computerized medical records from therapy sessions.

This is not fake news. This is not an imaginary scenario. This is the state of play in cybercrime today: some truly evil person or persons threatening to leak stolen mental health records onto the Internet unless patients pay up. Some of the people whose records have been stolen are underaged.

I'm so angry about this I don't think I will have much more to say in today's article for Cybersecurity Awareness Month. I was going to publish something to raise awareness of the cybersecurity skills gap but am putting that off until tomorrow. 

Here's what is known publicly so far: this psychotherapy patient blackmail incident is still evolving. An early report from Politico provides the basic details. There are more details in this Security Magazine article, and SC Magazine is reporting that the CEO of the psychotherapy center that was breached has been fired.

Who should we blame? Criminals and governments

While firing the CEO of the organization that got breached may well be the right thing to do, the bulk of the blame for this heinous incident lies squarely on the shoulders of the person or persons who perpetrated it, and the government that failed to adequately deter this from happening to its citizens. 

I am not singling out the government of Finland, where this particular incident is centered; every country in which ransomware attackers are operating and thriving has to share this blame. This is a dereliction of a government's duty to protect the people who pay it to protect them. 

Just a few days ago I was trying to raise awareness of What's different about health data security (Cybersecurity Awareness Month, Day 22). I flagged the very real possibility of suicide triggered by the revelation of medical information, made possible by weaknesses in computer security and human ethics. That was in the context of an incident that occurred 25 years ago. 

The risk of such a tragedy has not gone away. The amount of sensitive medical information stored in bits and bytes today is exponentially greater than it was a quarter of a century ago. I know from personal experience that suicide can occur in the wake of sensitive personal information being revealed. Even the possibility of such revelations can be enough to push someone to the edge.

Yesterday, I highlighted the question what could possibly go wrong? I did so in the context of cybersecurity folks asking that question to help surface potential problems with new technology. Clearly, there are cybercriminals out there who need to think long and hard about what could possibly go wrong when they execute a ransomware attack against a medical facility. 

It is hard to believe that this needs to be spelled out, but I'm going to: if the medical facility refuses to pay your ransomware demand, do not try to blackmail the patients whose records you have illegally accessed. People may die. And if that happens, the level of moral condemnation heaped upon you may well haunt you for the rest of your life. 

Monday, October 26, 2020

Cybersecurity for our hyperconnected future (Cybersecurity Awareness Month, Day 26)

Graphic for: Do Your Part. #BeCyberSmart’, helping to empower individuals and organizations to own their role in protecting their part of cyberspace

We are now in the final week of Cybersecurity Awareness Month, 2020. The theme for this week is to look at the future of connected devices, specifically:

"how technological innovations, such as 5G, might impact consumers’ and business’ online experiences (e.g. faster speeds and data transmission, larger attack surface for hackers), as well as how people/infrastructure can adapt to the continuous evolution of the connected devices moving forward."

I am quoting there from the guidelines on the National Cybersecurity Alliance website. They go on to say: "No matter what the future holds, however, every user needs to be empowered to do their part." So what does that mean in practice? I will try to answer that question this week, beginning with this article, written for day 26 of Cybersecurity Awareness Month.

But first, we need some context, and if you like to get your context via video, watch this short one from the StaySafeOnline website. It makes the important point that "as technologies evolve, so will the behaviors and tactics of cyber criminals." 

Image of temperature control app, making the important point that "as technologies evolve, so will the behaviors and tactics of cyber criminals

I captured this image from the video because it suggests a cool way of researching people's attitudes to technology. First, we show our subjects a clip of this, without the text; then we ask what they saw. Most people will probably say something like: it's a person using a smartphone app to adjust the temperature of something, maybe a room somewhere. 

Now we ask our subjects a second question: Assume this is a person changing the temperature of a room somewhere, and tell us all the reasons you can think of for them doing this. If none of the answers involve some sort of negative reason—such as "annoying the person in that room" or "proving to the owner of the room that you have taken control of their heating system"—then I suggest that this group of subjects needs more cybersecurity awareness training.

Why do I say that? Because protecting technology from abuse requires us to think about what could possibly go wrong. In fact, what could possibly go wrong is something of a mantra for people working in cybersecurity. Because if you're not thinking about what could possibly go wrong with any given piece of hardware or software or combinations thereof, you're probably not going to do a good job of preventing it actually going wrong.

Of course, what could possibly go wrong is used in contexts other than cyber, often with a question mark. You can sometimes find the hashtag #WCPGW trending. I used it when I tweeted my response to this Apple announcement a few months ago: "The digital car key on your compatible iPhone allows you to conveniently and securely lock, unlock, and even start your BMW." I mean WCPGW!

That response is not me being some cynical old white dude, even though I might look like one. It is me being aware of dozens of examples of new technology being hailed as cool and convenient and safe, only to become yet another contributing factor in the relentless expansion of global cyberbadness (see the list of tech that I have posted on the right, about which I will have more to say later).

Still think it's just me being a cranky curmudgeon? Look at what happens when we Google "can thieves steal keyless cars." Right away we see that: 

Criminals can easily steal top keyless-car models using cheap equipment that's available online ... The study looked at 237 models of cars that can be started with an electronic rather than mechanical key, and found thieves could unlock 230 of them without much difficulty. (Fortune, 28 Jan 2019)

Of course, technophilic tech bros may discount Fortune magazine as just a bunch of cynical old white dudes, but the facts speak for themselves, and so does the app that my local police force uses to let folks know whenever a car is stolen without its keys.

Which brings us back to cybersecurity awareness, which for millions of people now includes their keyless cars. If you are one of them, here are the top five security tips from a leading UK locksmith:

  1. Use a blocking pouch
  2. Turn off keyless fob's wireless signal
  3. Use a steering wheel lock or car alarm
  4. Re-programme your keys
  5. Park defensively

Jackware: a case study in future threats

Bearing all of the above in mind, you can maybe understand why, back in 2016, I tried to raise awareness of a future cyber-threat that I called jackware, a threat that was not "real" at the time, but one which will—I firmly believe—become real under the "right" circumstances. 

Here's how I first described jackware on this blog: "Think of jackware as a specialized form of ransomware. With ransomware, the malicious code encrypts your documents and demands a ransom to unlock them. The goal of jackware would be to lock up a car or other piece of equipment until you pay up."

A formal definition of jackware would be: malicious software that seeks to take control of a device, the primary purpose of which is not data processing or communications, for example: your car. In my original article I said jackware would become particularly dangerous when there are more self-driving cars and vehicle-to-vehicle networks; and I suggested this nightmare scenario: 

"You're in a self-driving car. There's a drive-by infection, silent but effective. Suddenly the doors are locked with you inside. You're being driven to a destination not of your choosing. A voice comes on the in-car audio and calmly informs you of how much Bitcoin it's going to take to get you out of this mess."

Not long after I wrote that, the possibility of jackware began to generate media attention, in both automotive and IT news outlets. Here are the top 10 articles that address it, only two of which were written by me: 

  1. Jackware: When connected cars meet ransomware
  2. Motor Mouth: Will your self-driving car kidnap you?
  3. Ransomware: The Next Big Automotive Cybersecurity Threat?
  4. Prepare for the day when a hacker takes over your self-driving car and kidnaps you enroute
  5. How Safe Are Cars from Hackers?
  6. Heard of Jackware? When connected cars meet ransomware
  7. Jackware hits the big screen in #Fast8: Fate of the Furious
  8. ‘Who the hell hacked my car?’ Is jackware (ransomware for connected cars) inevitable?
  9. Ransomware + IoT = Jackware?: the evolution of ransomware attacks
  10. Why Data Security is More Important Than Ever

As of today, the nightmare scenario that I described in 2016 has not played out in real life (assuming you don't count the Fast and Furious movies as real life). But even though the automotive industry is taking cybersecurity a lot more seriously today than it did 10 or even five years ago, nothing I have seen or heard in the last four years leads me to think jackware will never happen. 

To be clear, I have been actively tracking this issue. I attended a 2018 talk by the two guys who infamously hacked a Jeep in 2015. I discussed the practical aspects of ransomware with several experts under the Chatham House Rule, including award-winning researchers at UCSD who were already alerting the automotive industry to weaknesses in vehicle computer systems back in 2010 (and have recently been recognized for their pioneering work). 

My point is that the technology industry has such a long history of getting security wrong—which was the point of the list shown earlier—that there has to be a presumption of failure, perhaps more kindly described as an eventual inadequacy relative to threats. That is what I was getting at when I gave this quote in Car and Driver: 

"The computer systems are designed, features are designed, products are brought to market, and people adopt them. On the other side, hackers speculate, probe, develop a proof of concept, [criminals] attack, and then finally monetize the threat.”

When you add to the equation the incredibly low probability of capture and sanction that criminals currently face when monetizing the exploitation of vulnerabilities in technology, and the abject failure of world governments—so far—when it comes to agreeing upon ethical norms in cyberspace, you can see why I am so concerned about the future of cybersecurity.

But what can we do about this?

So here we are, in the final week of Cybersecurity Awareness Month, thinking about how technological innovations might impact consumers’ and business’ online experiences, as well as how people and infrastructure "can adapt to the continuous evolution of the connected devices moving forward," while trying to keep in mind that "no matter what the future holds, however, every user needs to be empowered to do their part."

We've looked at some technologies—such as keyless cars and self-driving cars—that are advancing and spreading rapidly, while at the same time introducing new security challenges. We've even noted several individually empowering security tips, like keeping your keyless car fob in a blocking pouch. Another tip might be to buy only those cars that have the fewest hackable features. But somehow I don't see steps like that holding back the rising tide of hackable connected devices on our planet and in our lives. 

One 2019 report projects that the number of connected IoT devices will be 24 billion by 2030. If you add up both "normal computers" and IoT devices, that number probably passed 22 billion total during 2018. That works out to just under three connected devices per woman, child, and man. 

The UN reckons humans will number 8.5 billion by 2030. That means there could be six connected devices per every one of them by the early thirties (that 2019 report predicts there will be 50 billion such devices by 2030). 
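For anyone who wants to check my sums, here is the back-of-the-envelope arithmetic behind those per-person figures. The population estimates are rounded, so treat the results as rough:

```python
# Rough per-person device counts; population figures are rounded estimates.
devices_2018 = 22e9        # connected devices (computers + IoT), 2018
population_2018 = 7.6e9    # approximate world population, 2018
per_person_2018 = devices_2018 / population_2018   # "just under three"

devices_2030 = 50e9        # predicted connected devices, 2030
population_2030 = 8.5e9    # UN population projection for 2030
per_person_2030 = devices_2030 / population_2030   # roughly six

print(round(per_person_2018, 1), round(per_person_2030, 1))  # 2.9 5.9
```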

Now consider the predictions about 5G growth. If those are correct, most of those 50 billion devices will be connecting at very high speeds, from just about everywhere. Stated bluntly, if governments and technology companies don't step up, a decade from now we will have more crime, way faster, in way more places, affecting way more people. 

So how do we get governments and technology companies to step up? We can start by reaching out to them and letting them know how concerned we are. I will offer some suggestions along those lines before the end of the month. For now I will just note that there are many ways in which technology itself can help with this outreach, for example, by making it very easy to contact representatives in the US and just about any elected official in the UK.

P.S. Remember, whenever we vote to elect representatives, we can vote for those most likely to take cybersecurity as seriously as it needs to be taken.


Sunday, October 25, 2020

Time and awareness and other security musings (Cybersecurity Awareness Month, Day 25)

Because October is the designated month for cybersecurity awareness, and because this year is 2020, the 25th day of the month falls on a Sunday. So today's security awareness blog post will be less like a workday call to action, and more like a meditation on time as it relates to security.

You see, this is not just any Sunday, it's the one that may seem longer than the others, the one on which, during the wee hours of the morning, the clocks go back one hour, marking the end of Daylight Saving Time in many countries, but not all. Folks in many parts of North America will have to wait another week for their "extra" hour. 

For everything you ever wanted to know about Daylight Saving Time, including where and when it happens in every country of the world, check out this page. And if you are one of the many people who will be holding international conference calls and Zoom meetings next week, check out this cool page for coordinating the timing of events across time zones.

But what, you may well ask, has time and timing got to do with cybersecurity? 


That would certainly be the answer if you asked my good friend Winn Schwartau "what does time have to do with security?" (and Winn often speaks like THAT). Indeed, Winn wrote a whole book about this very question; it's called Time Based Security (1999). And while you can still buy a copy on Amazon, it is also available from Winn as a PDF (a gesture that other noted security "mavens" have made with their earlier works, as you can see from the upper right of the web page you are reading now).

You can think of time-based security like this: the longer it takes a burglar to break into your house, the greater the chances that:

  • the burglar will give up and move on to another house
  • the burglar will be spotted by a neighbor or security camera
  • your stuff will not be stolen

Time also matters if you hear someone trying to break into your house and call the police: the less time the police take to respond, the greater the chances the burglar will be apprehended. So, if you substitute network for house and cybercriminal for burglar, you can see that Time Based Security makes a lot of sense, even before you dig deeper, which Winn does in the book.

The goal is to give cybersecurity professionals: "a process methodology by which a security practitioner can quantifiably test and measure the effectiveness of security in enterprise and inter-enterprise environments." The book also lays out: "a quantifiable framework so that the security professional and management can make informed decisions as to where to smartly invest their security budget dollars."
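As I read it, the core of the time-based idea can be sketched as a simple inequality: protection only "works" if it outlasts detection plus reaction (P > D + R). Here is a minimal illustration of that relationship; the numbers are invented, and in practice each value has to be measured for your own environment:

```python
# A minimal sketch of the time-based security inequality P > D + R.
# All times in minutes; the example values are purely illustrative.

def exposure_time(protection, detection, reaction):
    """Minutes an attacker operates unopposed; 0 means no exposure."""
    return max(0, (detection + reaction) - protection)

# A lock that delays the burglar 10 minutes is little comfort if it
# takes 30 minutes to notice the break-in and 20 more to respond:
print(exposure_time(protection=10, detection=30, reaction=20))  # 40

# Faster detection and response can close the gap entirely:
print(exposure_time(protection=10, detection=4, reaction=5))    # 0
```

The practical lesson for any organization is that spending on faster detection and response can matter as much as spending on stronger protection.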

But what if you're not a security professional or IT manager? Why is time an important factor in cybersecurity awareness for the general populace, all of whom are now, in one way or another, interacting with computers? 

Let me give you an example: when my mum gets an email that she thinks is a scam, she forwards it to me. The one shown here is an attempt to scare recipients into clicking on a link to "update their details," in other words, to harvest information such as account numbers and passwords. 

The fear factor leverages the fact that every household in England is required to have a TV license (the fees from which fund commercial-free television programs from the BBC). However, my mum immediately spotted the false claim that she had missed a payment on her TV license, because she doesn't have to pay! (An exemption based on her age.)

When mum sends me something like this, I notify the malware analysts at ESET and they immediately make sure it is blocked by ESET security software. If they have not seen this particular scam email before, they let me know. In the last few years, my mum has supplied ESET with several "first seen" scam messages. Clearly, the speed with which one person—in this case a retired English teacher in her nineties—can identify a cyber threat has the potential to make a difference for millions of other people.

Time for some spam

Remember, Time Based Security was published in 1999, clearly ahead of its time, but also at a time when I was seriously distracted by spam: unwanted mass emails that were a particularly serious problem in the late nineties because a) they were not illegal at that time, and b) organizations were struggling to prevent spam traffic from overwhelming email systems and networks. As part of my research back then I was collecting spam, purposefully receiving any and all email sent to any address at one of the Internet domains that I owned, even if that address did not exist.

To make a long story short, when some friends and I founded a company to address the spam problem, I used my analysis of that collection of spam to prove that delaying spam delivery would be very painful for spammers. One of those friends, a person with amazing network skills, devised a way for organizations to slow down incoming spam. This led to several patents and the development of a very successful product which was eventually acquired by Symantec, due in part to customer testimonials like this: "Thanks to your product, we were able to reduce the number of email servers from four to one, saving us a ton of money." 
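To give a flavor of the general approach, here is a deliberately simplified sketch: stall mail connections in proportion to how spammy the sender looks, so bulk mailing becomes slow and expensive while legitimate mail is unaffected. To be clear, this is my illustration of the concept only, not a description of the patented product, and the scoring function is a made-up placeholder:

```python
# Toy sketch of throttling suspect mail senders. The scoring logic
# is a placeholder; a real system would score on content, volume,
# sender reputation, and more.

def spam_score(sender_domain):
    """Return a spamminess estimate between 0 and 1 (placeholder)."""
    return 0.9 if sender_domain.endswith("bulk-blast.example") else 0.1

def delivery_delay(score, threshold=0.5, max_delay_seconds=10.0):
    """Seconds to stall the sending connection before accepting mail.
    Low-scoring (legitimate) senders pass with no delay at all."""
    return max_delay_seconds * score if score > threshold else 0.0

print(delivery_delay(spam_score("mail.bulk-blast.example")))  # 9.0
print(delivery_delay(spam_score("example.com")))              # 0.0
```

A one-second delay is invisible to someone sending a dozen messages a day, but multiplied across millions of messages it wrecks the economics of bulk spamming, which was exactly the point.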

End times

Sadly, I'm running out of blogging time this Sunday, so I need to wrap this up and bring it back around to the beginning (cue the theme song from Bron/Broen, the original version of the TV series The Bridge, about 1 minute and 26 seconds in). 

I won't go all the way back to the beginning of time, or even the beginning of Daylight Saving Time, the topic with which I began. And I won't get into agents of the apocalypse, which really is a topic that I covered in my recent conference talk: How Hackers Save Humanity - a cautionary tale.

But I do want to go back 15 years to the time when America broke the DST norms, namely 2005. That is the year "George W. Bush Ruined Daylight Saving Time" according to this very enjoyable 2010 article. In effect, the president broke the DST norm, putting America out of step with many of the countries with which it does business. 

Apparently, "the rationale for the new daylight savings calendar was that it would reduce energy use by encouraging people to use less electric light," but as the author of the article points out, that was a poorly tested assumption. The result has been two extra periods of annoyance and confusion every year, with no serious reduction in energy consumption (numerous serious proposals for which were on the table in 2005, but were rejected by Bush and the Republicans).  

As you will know if you read the article from Day 23, I am a big believer in norms if they are universally agreed and enforced for the common good. For example, it would be great if all humans could embrace a norm like this: "thou shalt not access, use, or abuse someone else's device or data without their permission." 

So how about this: the first president of the United States who negotiates a global commitment to establishing and enforcing that norm gets to decide when DST begins and ends?


Saturday, October 24, 2020

Cybersecurity resources for your modestly-sized business (Cybersecurity Awareness Month, Day 24)

This year, 2020, the 24th day of Cybersecurity Awareness Month is a Saturday. For many smaller businesses, like retailers and restaurateurs, Saturdays can be very busy workdays. For others, like accountants and lawyers, Saturday may be a quiet day, or a day to catch up on things. 

In my case, speaking as a one-person business, I'm using today to catch up on the business of posting one cybersecurity article every day of this month (something I pledged to do for reasons that I hope to explain by the end of the month, if I have any words left). 

My strategy today is this: provide helpful cybersecurity advice for smaller firms by drawing on work that's been published before and/or by other people. That way I may still get out of my study in time for the curry that's being delivered for dinner tonight, while providing some genuinely helpful security resources for the smaller business.

A great place to start if your modestly-sized business wants to learn how to be safer and more secure online is CyberSecure My Business, a national program coordinated and funded through the National Cyber Security Alliance. Another good starting place might be to review the basic steps that I have mapped out below.

(Note that I am using the term "smaller businesses" because there seems to be no general consensus on what constitutes a small business. I tend to think anywhere from 1 to 100 employees is "small," but you can still meet the US Small Business Administration definition of small business with up to 1,500 employees and under $38.5 million in average annual receipts. To my mind that encompasses a lot of companies that I think of as medium in size, hence the widespread use of the more flexible term Small to Medium Business (SMB). In the UK, the preferred term is Small to Medium Enterprise, and your firm is an SME if it meets two out of three criteria: a turnover of less than £25m, fewer than 250 employees, or gross assets of less than £12.5m.)

A cybersecurity roadmap for the smaller business

The task of securing your business against cybercriminals can seem daunting, particularly if your business is of modest size, the kind of place that does not have a crack team of cybersecurity experts on staff. But small size and a strained budget do not mean that you should avoid addressing the challenges of cybersecurity and the very real risk to your business that the rising tide of cybercrime presents. Fortunately, the problem becomes more manageable if you break it down into a series of steps. 

The following six-step program or roadmap can get you started. It is helpfully constructed so that the steps are alphabetically named, A through F:

  • Assess your assets, risks, resources
  • Build your policy
  • Choose your controls
  • Deploy the controls
  • Educate employees, execs, vendors
  • Further assess, audit, test

Bear in mind that defending your organization against cybercriminals is not a project, it is a process, one that should be ongoing. Too often we see organizations suffer a data breach these days because the security measures they put in place a few years ago have not been updated, leaving newer aspects of their digital activities undefended. This means it is not a case of doing A through F and you're done. You will need to keep going:

Graphic illustrating that defending your organization against cybercriminals is not a project, it is a process that gets repeated

A: Assess assets, risks, resources

The first step in this process is to take stock. What kinds of information does your organization handle? How valuable is it? What threats exist? What resources do you have to counter those threats?

Catalog assets: digital, physical

If you don’t know what you’ve got, you can’t protect it. List out the data that makes your organization tick and the systems that process it. (I assume you already have an inventory system for tracking all company computers, routers, access points, tablets, printers, scanners, computer-controlled machines, IoT devices, etc.)

Be sure to include the systems receiving data and outputting data as well as those that process and store it. For example, if your company depends on a central database of clients and their orders, it is possible to focus on that as your main digital asset and feel fairly secure because it resides on a well-protected server in a locked room or in a private cloud. But connections in and out of that database may come from a wide range of devices that are beyond your physical control (and bear in mind that some of the most valuable data may exist in highlights, summaries, and attachments emailed between executives). You need to catalog those connections.

Calculate risk

You need to answer this question: What are the main threats to your data and systems? Try stating these in terms of actors, actions, assets, attributes, and motives. For example, criminals (actors) might gain remote access (action) to your server (asset) to encrypt the files on it (attribute) to extort money from you in return for the key to unlock those files (motive). 

But don't just think of money-seeking attacks; for example, people who don't like your construction company's use of imported timber (actors) might attack (action) your website (asset) to prevent you taking orders (attribute) to make a point (motive).

This type of breakdown is used in the annual Verizon Data Breach Investigation Report (DBIR) which provides a solid background to internal discussions about risks because it is based on recent, real world attacks. You can download the 2020 DBIR here. The action categories are: Malware, Hacking, Social engineering, Misuse, Physical, Error, and Environmental. The motives are Financial, Espionage, Activism, and Other. These are handy schemas to use when performing your review of the risks faced by your organization.
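One way to make the actor/action/asset/attribute/motive breakdown concrete is to record each scenario as a structured record so the team can sort, filter, and prioritize them during the review. The field names and sample scenarios below are my own illustration, not taken from the DBIR itself:

```python
# Recording risk scenarios as structured records for review.
from dataclasses import dataclass

@dataclass
class RiskScenario:
    actor: str       # who might attack
    action: str      # what they might do
    asset: str       # what they would target
    attribute: str   # what property of the asset is harmed
    motive: str      # why they would do it

scenarios = [
    RiskScenario("criminals", "remote access", "server",
                 "files encrypted", "financial"),
    RiskScenario("activists", "website attack", "website",
                 "orders blocked", "activism"),
]

# Filtering makes prioritization easy, e.g. review the financially
# motivated scenarios first:
financial = [s for s in scenarios if s.motive == "financial"]
```

Even a simple list like this beats an unstructured brainstorm, because every scenario is forced to answer the same five questions.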

List resources

After cataloging all the digital assets that you need to protect, and reviewing the threats ranged against them, you may feel overwhelmed, so it is time to take heart and list the resources you can tap as you swing into action. These can range from current employees with cybersecurity skills to consultants recommended by friends, partners, and trusted vendors. You may be able to get help from trade associations, local business groups, even the federal government. 

B: Build your policy

The only sustainable approach to cybersecurity begins with, and depends on, good policy (that is the consensus opinion of information security professionals, myself included). Ideally, policy begins with top-level buy-in and flows naturally from there. Your organization needs a high-level commitment to protecting the privacy and security of all data handled by the organization. For example:

We declare that it is the official policy of Acme Enterprises that information, in all its forms, written, spoken, recorded electronically or printed, will be protected from accidental or intentional unauthorized modification, or destruction throughout its life cycle. 

From this flow policies on specifics. For example:

Customer information access policy: Access to customer information stored on the company network shall be restricted to those employees who need the information to perform their assigned duties.

You implement this policy through controls, which we discuss in a moment. First, I want to stress that for many companies, information security policy is not optional, no matter how small the business. I'm not just talking about legal requirements to have policy, which exist in areas such as health and financial data. 

I'm talking about the need to have such policies in place in order to close deals. These days it is not unusual for a company that you want as a client to want you to have security policies. For many years now, some companies have required potential suppliers to comply with requirements like this:

Vendor must have a written policy, approved by its management, that addresses information security, states its management commitment to security, and defines the approach to managing information security.

In other words: you don't get to be one of their approved vendors if you don't have written and defined information security policies. (That is actual language presented as part of contract negotiations between a small software company and a large, well-known retailer.)

C: Choose the controls to enforce your policies

Information system security professionals use the term "controls" for those mechanisms by which policies are enforced. For example, if policy states that only authorized employees can access certain data, a suitable control might be:

  • Limit access to specific data to specified individuals by requiring employees to identify and authenticate themselves to the system.

That's a high-level description of the control. You will need to get more specific as you move toward selecting actual controls, for example:

  • Require identification and authentication of all employees via unique credentials (e.g. user name and password).
  • Forbid the sharing of user credentials.
  • Log all access to data by unique identifier.
  • Periodically review logs and investigate anomalies.
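To show how the last two controls fit together, here is a toy illustration (nowhere near production quality, and entirely my own invention) of logging every data access under a unique user ID and then periodically reviewing the log for anomalies, in this case a single credential used from suspiciously many network addresses:

```python
# Toy access-logging control with a simple periodic review.
from collections import defaultdict

access_log = []  # entries of (user_id, record_id, source_ip)

def log_access(user_id, record_id, source_ip):
    """Record every data access under the user's unique identifier."""
    access_log.append((user_id, record_id, source_ip))

def review_log(max_ips_per_user=2):
    """Return user IDs seen from more source IPs than expected,
    a possible sign of forbidden credential sharing."""
    ips_seen = defaultdict(set)
    for user_id, _, source_ip in access_log:
        ips_seen[user_id].add(source_ip)
    return sorted(u for u, ips in ips_seen.items()
                  if len(ips) > max_ips_per_user)
```

In a real deployment the log would live in a tamper-resistant store and the review would be scheduled, but the principle is the same: you cannot investigate anomalies in records you never kept.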

Spelling out the controls will help you identify any new products you may need, bearing in mind that there may be suitable security features available in products you already use. For example, if policy states that sensitive data shall not be emailed outside the organization in clear text, the control to apply (encryption of documents) may be accomplished through the document password protection features in products like Microsoft Office and Adobe Acrobat. (Note: I'm not saying that is strong enough for very sensitive data, but it does make intercepted documents a lot harder to read than ones that are not encrypted.)

D: Deploy and test controls

Putting controls in place is the deployment phase, but it overlaps with the next phase, education. For example, when you roll out a control like unique user IDs and passwords, you will need to educate users about why this is happening and how it works (in this example, that process should include explaining what constitutes a strong password—see Day 19 for tips on that). You will also need to test as you deploy, to make sure that the controls are working.

A phased approach to roll out often works better because you can identify problems and find solutions while scale is still limited. Rolling out to more experienced users first is a good way to get initial feedback and improve messaging to be used with the wider population (bearing in mind that some things which experienced users already know may nevertheless need to be explained to the general user population).

When testing a control, you need to make sure that it works technically, but also that it "works" with your work, that is, does not impose too great a burden on employees or processes.

E: Educate employees, execs, vendors, partners

Security education is too often the neglected step in cybersecurity. In my opinion, for your cybersecurity efforts to be as successful as they can be, everyone needs to know and understand:

  • What the organization's cybersecurity policies are.
  • How to comply with them through proper use of controls.
  • Why compliance is important.
  • The consequences of failure to comply.

Your goal should be a "security aware workforce" that is self-policing. In other words, employees are empowered to say "No" to practices that are risky and report them to management (even if the persons engaged in unsafe cyber-practices are management).

In terms of consequences, there is no need to sound overly draconian; calmly point out that a breach of security could be very bad news for the organization and could even threaten its continued operation, and with it everyone's employment.

Two areas of education you don't want to skimp on are executives (who may feel they are above being educated about security) and partners, vendors, even clients. In fact, any data-sharing relationship should be encompassed in policies, controls, and security awareness education.

F: Further assess, audit, test…

Step F on the roadmap is by no means the end of the line; in fact, it is a reminder that this process continues. Once policies and controls are in place and education is under way, it is time to re-assess security by testing and auditing. You can do some of this in-house, but you may also want to engage an outside entity to get an objective perspective on your efforts so far.

Best practice is to have a plan to assess security on a periodic basis and adjust defenses accordingly. Even when there is no audit scheduled, you will want to stay up-to-date on emerging threats and adjust your controls accordingly. For example, just a few years ago it was unusual to see RDP attacks on small business servers, but today they are happening a lot. (See this article to learn what an RDP attack is.) This means you may need to pay more attention to the security of your remotely accessed servers than you have been accustomed to doing. How would you know this is a trend? One way is to subscribe to good security websites, like Dark Reading, Info Security, GCHQ, Krebs on Security, and We Live Security.

You should also be alert to changes in your systems and connections to your data. For example, there are security implications whenever you establish new vendor relationships, create new partnerships, and design new digital marketing initiatives. The departure of an employee is another event that requires security attention, making sure that access to data and systems is terminated appropriately.

Cybersecurity checklist

Yes, there is a lot to think about when tackling cybersecurity for your organization. Here are some high points you don't want to miss: 

  • Do you know what data you are handling?
  • Do your employees understand their duty to protect the data?
  • Have you given them the tools to work with?
  • Can you tie all data access to specific people, times and devices?
  • Have you off-loaded security to someone else?
    • Managed service provider
    • Private cloud provider
    • Public cloud provider
  • Be sure you understand the contract
    • You can’t off-load your liability
    • Ask how security is handled, what assurances are given

Cybersecurity resources and a sweet diagram

If you are still wondering if cybersecurity is a big deal for smaller businesses, or if you are convinced it is, but you need to persuade someone else, try using this diagram that I came up with some years ago while I was working at ESET: 

(This diagram illustrates the "SMB sweet spot" as seen from a cybercriminal perspective. While many smaller firms have lower levels of cybersecurity protection, they may well handle enough money and digital assets to be worth attacking. For example, a small construction firm may think of itself as too small to attack because each year it only shows a small profit, yet during the year it may handle large amounts of money from different sources to fund projects.)

For further learning and assistance here are some more resources, some in the form of PDF files:

Creating a Small Business Cybersecurity Program

There's a very helpful book that I've been recommending lately called Creating a Small Business Cybersecurity Program. It was published earlier this year, authored by Alan Watkins and edited by Bill Bonney. These gentlemen are two security experts that I had the pleasure of working with in San Diego, and this book is a great cybersecurity resource if you are a small organization (say 25 to 500 people). Indeed, any organization looking to take a structured approach to meeting the security and privacy challenges created by the digital information systems—on which business, consumers, and governments now rely—will find this book a solid place from which to start, and from which to build. 

The current trend lines for both cybercrime and technology dependence point sharply upwards. Every entity in every sector—business, non-profit, education, government—needs a cybersecurity program if it hopes to manage and survive the many risks that these trends create. The approach that Alan takes to creating that cybersecurity program is based on his decades of experience in the field. The book is practical, the concepts and strategies are clearly articulated, and Alan is thorough without being overwhelming. The result is a generous source of knowledge, advice, ideas, resources, examples, and links to many more.

In my experience, protecting your digital assets is not about buying the latest and greatest security products. It’s about properly deploying the right products for the cybersecurity program that’s right for your organization. While Alan does point to suitable products, his focus is on making sure you have the right plan, the necessary policies, and the appropriate controls to guide the purchasing decisions you make.

A long time ago I wrote one of the first books about the security of computers used by small businesses, so I am keenly aware that the task of distilling cybersecurity advice into a readable work of a manageable scale is far from easy—and much harder than it was back then. So my hat is off to Alan, and his skillful editor Bill Bonney, for creating a much needed book that was hard to write but easy to use.

And as someone who has given talks and presentations on cybersecurity to hundreds of small organizations, the question I’ve been asked the most, a question I frankly dread, is: “where do I even start?” Now I have a ready answer: read Creating a Small Business Cybersecurity Program by Alan Watkins.


Friday, October 23, 2020

Facing the challenge of protecting health data from abuse (Cybersecurity Awareness Month, Day 23)

On this, the 23rd day of Cybersecurity Awareness Month, it's time to acknowledge something that is both sad and true: cybersecurity awareness sometimes means accepting that some of the things that we enjoy a lot may not do us a lot of good. It's a bit like pumpkin spice lattes: I really enjoy drinking them, but doing so is not particularly good for me, and the science strongly suggests that drinking a lot of them is bad for me. 

Likewise, I really enjoy sharing information about myself, but I need to do so carefully in order to minimize certain risks. For example, I should probably think twice about sharing on social media the fact that I really like using the Acme Patient Portal App for Android; and maybe think three times if I've also been sharing lots of pictures of our new cat Nadia while using her name as my password on that portal, and all my other accounts. 
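To see why that last habit is so dangerous, consider a minimal sketch (with a made-up stored hash and a hypothetical list of guesses an attacker might scrape from public posts) of how quickly a password based on a publicly shared pet name falls to a simple dictionary attack:

```python
# Sketch of a dictionary attack against a pet-name password.
# The account, names, and guess list are all hypothetical;
# the stored hash is unsalted SHA-256 purely for brevity.

import hashlib

def hash_password(password):
    """Stand-in for a site's stored password hash."""
    return hashlib.sha256(password.encode()).hexdigest()

# The victim used their cat's name, shared in many photo captions.
stored_hash = hash_password("Nadia")

# Guesses harvested from the victim's public social media posts.
scraped_guesses = ["Rover", "Whiskers", "Nadia", "Mittens"]

def dictionary_attack(target_hash, guesses):
    """Return the first guess whose hash matches, or None."""
    for guess in guesses:
        if hash_password(guess) == target_hash:
            return guess
    return None

cracked = dictionary_attack(stored_hash, scraped_guesses)
print(cracked)  # the password falls on the third guess
```

Real attacks automate this against millions of guesses per second, which is why a name you have broadcast to the world offers essentially no protection, and why reusing it across accounts multiplies the damage.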

In yesterday's blog post I talked about how serious the threats to health information have become now that so much of it is stored on, processed by, and communicated between, digital devices, things that now range from wearable tracking devices to mainframe computers and huge server farms "in the cloud." 

While most people would argue that this massive digitization of medical data is not wrong in itself, criminologists like myself would argue that abuse of this new reality for selfish ends is inevitable, particularly if the data is not protected at all times by "effective guardians" (a term we talked about on Day 7). 

Unfortunately, the mass digitization of medical data has been occurring at the same time as an explosion in the number of points at which "bad actors" can attack the systems processing the data, the so-called attack vectors that I referred to on Day 9 (The Internet of Things to Get Smart About). The rapid adoption of everything from tablets to smartphones to connected watches and health trackers is expanding the attack surface, the amount of digital territory that needs to be monitored and defended. 

Some years ago I started diagramming this for folks in healthcare, and while it's not the prettiest picture I've ever drawn, I think this one does convey how complex all of these developments have made the task of maintaining cybersecurity:  

Diagram of the attack surface for medical data, from smartwatch to clinic

To carry on being a bit technical, I should point out one more thing that makes cybersecurity so difficult in the healthcare sector: the required level of granularity and multiplicity in the sharing and not-sharing of medical data. Think of all the entities that might be in the data sharing mix, requiring some of your medical details, sometimes in a hurry, but without exposing all of those details to criminals or the public:
  • your doctor, plus that doctor's colleagues, nurses, and assistants
  • any specialists you see, and staff at the places to which you are referred
  • your pharmacy
  • the accounting and administrative departments for all of these
  • the same again for any insurance companies involved, plus their claims assessors and adjudicators
  • your employer, who may be paying for all or part of your insurance
  • your government, which might be funding, researching, or otherwise tracking some or all of the medical services you need
Yet, challenging as cybersecurity is when it comes to healthcare, there are always things you can do to reduce the odds of your medical data being abused. Thanks to the National Cybersecurity Alliance, four of these things have been put into a handy infographic (full version downloadable here).

You might find this graphic helpful if you are working on raising the cybersecurity awareness of others, perhaps in your office, church, social group, or household. Here is a link to a short video that might also help (I'd say loop it on the monitors in the company cafeteria, but I'm not sure how many people are in company cafeterias these days).

If you are trying to reach management with the urgency of this topic, please urge them to watch this interview with an expert that I respect a lot, Joshua Corman, titled Cybersecurity Advice for the COVID-19 Era. For more on dealing with things at an organizational level in healthcare, see this article: Putting People at the Center: Three Ways the Healthcare Industry Can Proactively Prevent Cyberattacks.