2 Users Are Part of the System

Users inevitably make mistakes. That is a given. At the same time, within an environment that supports good user behavior, users behave reasonably well. The same weakest link who creates security problems and damages systems can also be an effective countermeasure that proactively detects, reports, and stops attacks.

While the previous statements are paradoxically true, the reality is that users are inconsistent. They are not computers that you can expect to perform the same function consistently from one occurrence to the next. More important, not all users are alike. There is a continuum across which you can expect a range of user behaviors.

Understanding Users' Role in the System

It is a business fact that users are part of the system. Some users might be data entry workers, accountants, factory workers, help desk responders, team members performing functions in a complex process, or other types of employees. Other users might be outside the organization, such as customers on the Internet or vendors performing data entry. Whatever the case, any person who accesses the system must be considered a part of the system.

Clearly, you have varying degrees of authority and responsibility for each type of user, but users remain autonomous, and you never have complete authority over them. Therefore, considering users to be anything other than a part of the system overlooks their capacity to introduce errors and cause security breaches, and thus leads to failure. The security and technology teams must consider the users to be one more part of the system that needs to be facilitated and secured. However, because you lack absolute authority over them, from a business perspective you must never consider users to be a resource that can be consistently relied upon.

It is especially critical to note that the technology and security teams rarely have any control over the hiring of users. Depending upon the environment, the end users might not be employees, but potentially customers and vendors over whom there is relatively little control. The technology and security teams have to account for every possible end user of any ability.

Given the limited control that technology and security teams have over users, it is not uncommon for some of these professionals to think of users as the weakest link in the system. However, doing so is one of the biggest cop-outs in security, if not in technology management as a whole.

Users are not a “necessary evil.” They are not an annoyance to be endured when they have questions. Looking down upon users ignores the fact that they are a critical part of the system that security and technology teams are responsible for. In some cases, they might be the reason that these teams have a job in the first place.

It is your job to ensure that you proactively address any expected areas of loss in the system, including users. Users can only be your weakest link if you fail to mitigate expected user-related issues such as user error and malfeasance.

Perhaps one of the more notable examples is the trouble with the B-17 bomber. Clearly, a pilot is a critical part of flying an airplane, not just a “user” in the most limited sense of the term. When the B-17 underwent its first test flights in 1935, it was the most complex airplane of its time, and the test pilots chosen were among the top pilots in the country. Yet these top test pilots crashed the plane because they failed to disengage a locking mechanism on the flight controls.

It was determined that the pilots were overwhelmed by the complexity and made a simple mistake. As the pilots were a critical part of the system, removing them was not an option. They were highly experienced and trained professionals, so the problem was not that they were poorly trained. The government could have sent the pilots for additional training, but retraining top pilots in the basics of how to fly the plane was not going to be an efficient approach. Instead, they recognized that the problem was that the complexity of the airplane was overwhelming.

The solution was the implementation of a checklist to detail every basic step a pilot had to take to ensure the proper functioning of the airplane. Similar problems have since been solved for astronauts and surgeons, among countless other critical “pieces of the system.”

Users Aren't Perfect

Users can be both a blessing and a curse. For the most part, if the rest of the system is designed appropriately, users will behave accordingly. At the same time, you must understand and anticipate that despite your best efforts, users sometimes do the wrong thing.

For example, in one case that is unfortunately not isolated, author Ira Winkler ran a phishing simulation against a large company. Employees were sent a message that contained a link to a résumé. The sender claimed that one of the company's owners suggested they contact the recipient for help in referring them to potential jobs. If the employees clicked on the fake résumé, they received a message explaining that the email was fake and how they should have recognized it as such. In at least one case, the user replied, still believing it was a real message, saying there was a problem with the attached résumé. This sort of phishing training exercise can improve some user behaviors, but it is certainly far from making users a foolproof part of the system.

Anticipating how people will behave helps you design better systems to capitalize on predictable behaviors, leading to better security. Even though people make mistakes, good systems should anticipate that and not break when they do.

“Users” Refers to Anyone in Any Function

When we use the term users throughout the book, it might seem that we are implying end users or low-level workers. The reality is that we mean anyone with any function. This includes managers, software developers, system administrators, accountants, security team members, and so on.

Anyone who has a job function or access that can result in damage is technically a user. Administrators can accidentally delete data and cause system outages. Security guards can leave doors open. Managers can leave sensitive documents in public places. Auditors can make accounting errors. Everyone is a user at some level.

Our use of the term users can also include outside contractors, customers, suppliers, cloud service providers, or anyone else who interacts with your organization. If they can take an action that can potentially cause harm to your organization, they must be considered in your risk model.

Cloud services and remote workers create additional concerns, because you potentially lose control over your information and users. For example, if a user goes into Starbucks and uses the free WiFi to connect to your network, that user becomes part of a whole new class of users, increasing your risk profile. Cloud services likewise change the profile of your users, given that access control methods must allow for someone to theoretically log in from anywhere in the world. The risk can be mitigated, but you have to plan for it.
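Part of that planning can be as simple as requiring stronger proof of identity when a login does not originate from a network you trust. The following is a minimal sketch of such a conditional-access check in Python; the names (TRUSTED_NETWORKS, LoginAttempt, decide_access) and the example network ranges are hypothetical illustrations, not any particular product's API.

```python
# Hypothetical sketch: step up authentication for logins from untrusted networks.
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

# Networks the organization treats as trusted (illustrative documentation range).
TRUSTED_NETWORKS = [ip_network("203.0.113.0/24")]

@dataclass
class LoginAttempt:
    username: str
    source_ip: str
    mfa_passed: bool  # whether multifactor authentication already succeeded

def decide_access(attempt: LoginAttempt) -> str:
    """Return 'allow' or 'step_up' based on where the login comes from."""
    source = ip_address(attempt.source_ip)
    on_trusted_network = any(source in net for net in TRUSTED_NETWORKS)

    if on_trusted_network or attempt.mfa_passed:
        return "allow"   # known network, or identity already verified
    return "step_up"     # coffee-shop WiFi, etc.: demand MFA first

# A login from free WiFi without MFA triggers step-up authentication.
print(decide_access(LoginAttempt("alice", "198.51.100.7", mfa_passed=False)))
# -> step_up
```

In practice an organization would lean on its identity provider's conditional-access features rather than hand-rolled checks, but the logic is the same: the context of the login, not just the credentials, determines what access is granted.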

Perhaps one of the most overlooked groups of users is the people who are responsible for mitigating risk. They tend to look at the errors caused by others and believe that they themselves would never have made those errors. This causes two types of problems.

The first is that if they cannot conceive of an error occurring, they cannot proactively mitigate it. We have been on software test teams where we found problems with potential uses of the software and reported them to the developers. The developers often responded that “nobody would ever do that” and fought us on implementing the fixes.

The second issue is that risk mitigation teams, such as information security, IT, physical security, and operations, don't perceive themselves as a source of errors. They do not believe they will make mistakes. Yet they can have tremendous privileges and access, which means their errors can create more damage than any normal user's would.

Malice Is an Option

Although the natural assumption is that user-initiated loss happens through ignorance or carelessness, a great deal of damage is caused by malicious users. The 2018 Verizon Data Breach Investigations Report (DBIR) found that 28 percent of incidents resulted from malicious insiders with clear intent to either steal something of value or create other forms of damage. That is a staggering number.

More critical is that malicious insiders typically know the best ways to access whatever it is they are trying to steal or destroy. Additionally, if they are intelligent in their planning and execution, they might be able to identify and bypass your protection, detection, and reaction capabilities.

When malice is involved, awareness efforts can sometimes even work against an organization. Awareness efforts typically educate people about how malicious actors accomplish their goals. This provides your malicious employees with information about how they too can commit those types of crimes. It also gives them ideas about how and where you allocate defensive resources and what countermeasures they need to bypass. Clever malicious insiders use this information to improve their own attacks.

As a percentage of overall users, the number who will launch malicious attacks, let alone succeed at them, is fortunately small. Even so, malicious users exist, so you must account for them. Various studies have shown that a small percentage of users cause most of the damage, which is intuitively obvious. Such users will always exist. The best you can do is acknowledge this reality and prepare for them.

What You Should Expect from Users

Users need to perform their jobs properly in a fundamentally safe and secure manner. You need to ensure that security is embedded in job functions and that people know how to perform those functions properly. This should be well defined, and just like any other job function, you should set the expectation for those users to follow those definitions. We would love to say that you should also expect users to be fundamentally aware of security concerns beyond what is specifically defined, but that will not likely happen on a consistent basis.

Therefore, businesses should factor the users' limited awareness into their risk management calculations and plans. You should provide awareness training and opportunities to further reduce risk. Although we don't want organizations to rely too strongly on awareness, it is a critical component of any security program to reduce risk.

Although user ignorance can be partially improved with training, carelessness is another matter. Assuming you have properly instructed users in how they should perform their functions, if some users still consistently violate policies and cause damage, you may need to take disciplinary action against them.

Beyond ignorance and carelessness, you also must account for malicious actions. We discussed this in the previous section, and we will explore options to address it as we discuss security measures throughout the book.

It is important to follow our recommended strategies to ensure that your systems reduce the opportunities for users to make errors or cause malicious damage and then mitigate any remaining potential harm. Then, regardless of whether the harmful actions are due to malice, ignorance, or carelessness, your environment should be far better positioned to minimize or even stop the resulting damage.
