The Road to High Risk

The key foundation for commerce is trust—trust in the exchange of money and/or goods and services. Without trust, trade becomes riskier and less likely to happen. A thousand years ago you could touch, feel, see, and work with products. Today, in the IT world, we test products, read reviews, talk to peers, and so on. We install them, verify their functionality, and do what we can to see if they work.

What is sometimes difficult to tell is how secure the product is. I once worked with a piece of software designed to examine security requirements. It did not meet many of the requirements it was examining in other products. While this may seem very rudimentary, it is not uncommon for vendors to fall short of the very practices they ask others to follow. One famous case where this happened was the company formerly known as Bit9, a provider of security protection software. They were hacked, and they had not been using their own software to protect their environment. If they had, they would not have been hacked.31

What may be surprising to some of you is that some medical devices are built with old or outdated operating systems.32 What this means is that the systems are full of weaknesses (called vulnerabilities in the security world) that can be exploited by hackers. The vulnerabilities are often so severe that the entire system can be compromised. Every shred of data related to the system can also be compromised. Worse, that system can then be used to compromise other systems in a hospital. The fact that so many systems have severe vulnerabilities compounds the problem for the security practitioners trying to protect hospitals in the first place.

To make matters worse, in many cases the interface to the machine completely obfuscates the operating system, making it difficult to assess the underlying technology. The manufacturer can also add security on the front end of the medical device, making it seem as though the overall security is high. For example, some systems enforce strong password requirements, such as a long minimum password length, complexity rules, password rotation, and so on, making it seem as though the system is built securely. That aspect of the system may be relatively secure, but not necessarily the rest of the product.
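To make this concrete, here is a minimal sketch in C of the kind of front-end password-policy check described above. The function name, the 14-character minimum, and the specific complexity rules are assumptions made for illustration rather than taken from any particular device; the point is that a check like this can look rigorous while saying nothing about whether the operating system beneath it is patched.

#include <ctype.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical front-end password-policy check: a minimum length plus
 * upper-case, lower-case, digit, and symbol requirements. Passing this
 * check says nothing about the security of the underlying operating
 * system. */
static bool password_meets_policy(const char *pw)
{
    bool has_upper = false, has_lower = false, has_digit = false, has_symbol = false;

    if (pw == NULL || strlen(pw) < 14)      /* assumed minimum length */
        return false;

    for (size_t i = 0; pw[i] != '\0'; i++) {
        unsigned char c = (unsigned char)pw[i];
        if (isupper(c))      has_upper = true;
        else if (islower(c)) has_lower = true;
        else if (isdigit(c)) has_digit = true;
        else                 has_symbol = true;
    }
    return has_upper && has_lower && has_digit && has_symbol;
}

int main(void)
{
    /* The login screen is strict, but that proves nothing about the
     * rest of the product. */
    printf("%s\n", password_meets_policy("Sample-Passw0rd!") ? "meets policy" : "rejected");
    return 0;
}

In other words, passing the front-end control proves only that the login screen is strict, not that the product as a whole was built securely.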

Many of you may be thinking that this is an old issue and that operating systems are usually up to date. The hard reality is that these outdated operating systems are almost par for the course when it comes to internet-connected medical devices. Recently Palo Alto Networks put out a report demonstrating that 83% of medical imaging devices had operating systems that could not be updated.33 This is very serious, as it means those operating systems carry vulnerabilities, including ones not yet discovered, that cannot be remediated. From a hacker's perspective, these internet-connected medical devices are a metaphorical gold mine—not only because they hold data, but also because they are relatively easy to hack—often allowing hackers to jump from one system to another within an organization.

This very same idea can be applied to other internet-connected medical devices that do not utilize a full operating system. In those cases, the system runs a very small operating system known as firmware. On a personal computer, firmware can be updated very easily, but very small devices that run firmware alone may or may not be updatable, which means cybersecurity patches cannot be applied. In some cases, what is included is unalterable. Anything unalterable in this way is referred to as hardcoded, and this is how passwords end up hardcoded into some of these devices.
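As an illustration of what a hardcoded credential looks like, the C sketch below shows a service-login check with the password compiled directly into the firmware image. The function name and the password itself are invented for this example; the point is that on a device whose firmware cannot be updated, such a string can never be changed or revoked, and anyone who extracts the firmware image can log in to every unit in the field.

#include <stdio.h>
#include <string.h>

/* Illustrative anti-pattern only: a maintenance password compiled
 * directly into the firmware image. The credential is invented for
 * this sketch. If the firmware cannot be updated, the password can
 * never be rotated or revoked. */
#define SERVICE_PASSWORD "maint1234"    /* hardcoded at build time */

static int service_login(const char *entered)
{
    /* Anyone who extracts the firmware image can read this string. */
    return strcmp(entered, SERVICE_PASSWORD) == 0;
}

int main(void)
{
    printf("login %s\n", service_login("maint1234") ? "accepted" : "rejected");
    return 0;
}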

Processors are another avenue of attack. In January 2018, two new processor vulnerabilities, Spectre and Meltdown, hit the news and security staff across the world like a ton of bricks. They uncovered, and subsequently demonstrated, flaws in the way processors had been designed over the last few decades. As a result of these hardware flaws, operating systems could be compromised in ways against which the hardware would previously have provided some protection. Ultimately, if an attacker had access to a system, data could be exposed by the combination of the two vulnerabilities (of which there are three variations). With Meltdown, an attacker gains access to data they normally shouldn't see by “melting” the division of protected memory normally enforced by hardware. Spectre, on the other hand, is about tricking a system into revealing data that it should never reveal to the attacker.34
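To show the shape of the problem, the C sketch below reproduces the bounds-check pattern that Spectre variant 1 (bounds-check bypass) abuses, modeled on the publicly disclosed pattern; the array names and sizes here are illustrative. The if statement is architecturally safe, but a processor speculating past it can read array1 out of bounds and use the secret byte to index array2, leaving a measurable footprint in the cache. A working exploit also needs a cache-timing side channel, which is omitted here.

#include <stddef.h>
#include <stdint.h>

/* Illustrative Spectre variant 1 (bounds-check bypass) gadget, modeled
 * on the publicly disclosed pattern. Names and sizes are invented. */
uint8_t array1[16];
size_t  array1_size = 16;
uint8_t array2[256 * 4096];     /* probe array: one page per possible byte value */

void victim_function(size_t x)
{
    /* Architecturally, an out-of-range x is rejected here. Speculatively,
     * the processor may predict the branch as taken, read array1[x] out
     * of bounds, and use that byte to index array2, leaving a
     * data-dependent cache line behind. Recovering the byte requires a
     * cache-timing measurement (e.g. flush+reload), not shown here. */
    if (x < array1_size) {
        volatile uint8_t tmp = array2[array1[x] * 4096];
        (void)tmp;
    }
}

int main(void)
{
    array1[0] = 1;              /* benign, in-bounds call for the demo */
    victim_function(0);
    return 0;
}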

Both Spectre and Meltdown are examples of what were zero-day vulnerabilities—flaws that are publicly known but so new that no remediation exists yet. Hardware (such as processors), operating systems, and internet-connected medical devices are all prone to zero-day vulnerabilities. They are the bane of IT and security practitioners alike. When the vulnerability is severe enough, companies are forced to perform out-of-band patching (also called emergency patching), which can seriously disrupt the schedule of the IT department. While some zero-day vulnerabilities are of little consequence, many are much more serious—as Spectre and Meltdown were.

But why do we have these challenges with internet-connected medical devices to begin with? An incomplete and simplistic perspective would be to say that the dollar is king, security costs money, and therefore security is not addressed until companies are pushed into it. The reality is far more complex than that.

If we step back a decade and look at internet-connected medical device security, there were no regulations concerning how these devices were built and very little regulatory oversight. In theory they had to meet HIPAA requirements, but many connected medical device companies did not adhere to those—not by a long shot. Quite often these companies were not even striving to meet HIPAA requirements. The features and functions of the devices were the key capabilities they had to focus on—not security capabilities.

What makes matters worse is that not every company validates security or makes it a priority when purchasing a medical device. Think of it this way: if you are looking at a half-million-dollar piece of medical equipment, and one company's product is one the doctors find far better and more likely to save lives, while a competing product may not save as many lives but may be a little more secure, which product do you buy? Many companies would want to purchase the product that would save more lives. It is almost common sense when weighing one concern versus the other. Many hospitals would not give security a second look. Further, if you have only one or two devices that are connected, it is easy to overlook the one insecure exception in your environment. This is the way medical equipment was purchased for decades as internet-connected medical devices first made their appearance. Keep in mind that when this started taking place, connected medical devices were not commonplace and security was not as large a priority as it is today. Context is everything.

Another challenge is that hospitals are sometimes faced with two products that both have poor security (or sometimes a single product with poor security). In these situations, hospitals need to choose a product knowing that the choice will make the hospital less secure. You have to live with the imperfect decision of running an insecure device or decide not to help people. For most, not helping people is unthinkable, for very good reasons.
