Tribe of Hackers Red Team - Marcus J. Carey - Page 9

3 Paul Brager


“As you can imagine, the best way to get a red team job is to first understand what it is that you want to do and then build a technical skill set and foundation to align with what that type of role would entail.”


Twitter: @ProfBrager

Regarded as a thought leader and expert in the cybersecurity community for more than 25 years, Paul has deep expertise evaluating, securing, and defending critical infrastructure and manufacturing assets (ICS, IoT, and IIoT). An avid speaker and researcher, Paul seeks to move the conversation forward surrounding ICS cyber and managing the threat surface.

He has provided commentary on several security-related podcasts, publications, and webinars that provided guidance and insight into strategies for critical infrastructure and manufacturing cyber defense. Paul has a passion for mentoring and guiding people of color who are aspiring to contribute to the advancement of the industry and promoting diversity within the cyber community.

How did you get your start on a red team?

My red team beginnings (much like most experiences in this space) came about from necessity. Company leadership fired a “legacy” employee who was using a Windows 95 desktop with local accounts (yes, Windows 95). At the time, it wasn’t uncommon for workstations not to be part of a domain (Windows domains weren’t terribly common in the mid-’90s), but there also weren’t many methods of getting into a workstation if the password was lost. Novell was still king of the network operating systems, so you get the picture. Recovering a machine typically meant reinstalling over the top of it and hoping that you didn’t step on any of the critical documents/areas, or getting into it with one of the many “magic boot disks” that had started to appear at the time.

These were generally Slackware-based, but you needed some “skills” to be able to get them to work without destroying the master boot record (MBR) on the target. “Hacking” those disks with predictable results became more of an art than a science, as you needed not only some Linux/BSD knowledge but also knowledge of how partitions worked within Windows. After spending countless hours building (and rebuilding) a Windows 95 test machine to get the parameters correct, I was able to successfully gain access to the Windows 95 workstation and recover valuable source code that would have cost the company months in development.

What is the best way to get a red team job?

Well, it depends—red team job doing what? Pure penetration testing? Survivability testing? Penetration testing against certain classes of assets, in other words, ICS? As you can imagine, the best way to get a red team job is to first understand what it is that you want to do and then build a technical skill set and foundation to align with what that type of role would entail. Experience is generally key here but not always—sometimes raw knowledge and demonstrated know-how are enough. Much of how you are received as a legitimate red teamer is left to the judgment of those interviewing you, but those who can truly recognize talent may show interest. Networking, either in person or through social media (or both), remains one of the strongest ways to get insight into available red team roles, but you may also luck out and talk to someone in a position to make a hiring decision.

How can someone gain red team skills without getting in trouble with the law?

Today, gaining red team skills without getting into legal trouble is easy. Many of the tools that one would need to practice are open source and easily downloaded; the same is true about access to many of the operating systems that would be potential targets. The world of virtualization has opened the door to the creation of virtual labs that can be destroyed and rebuilt with no impact to anyone—other than you, of course. Additionally, there are numerous hackable platforms available to test various skills and abilities (such as Hack The Box) to further hone red teaming skills. The more specialized type of practice—against ICS assets, for example—is a bit trickier, although some PLCs (the primary targets in an ICS) can be purchased on eBay. Likewise, IoT devices (such as Raspberry Pis) can be purchased inexpensively to develop skills against those.
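A first exercise in one of the virtual labs described above is often a simple port check against a machine you own. The sketch below is an illustration of that idea, not a tool from the book; the host and port list are assumptions, and it should only ever be pointed at targets in your own lab.

```python
import socket


def check_port(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def scan(host: str, ports) -> list:
    """Check a list of ports on a lab machine you own; return the open ones."""
    return [p for p in ports if check_port(host, p)]


if __name__ == "__main__":
    # Only ever point this at machines in your own virtual lab.
    print(scan("127.0.0.1", [22, 80, 8080]))
```

Against a deliberately vulnerable VM in your lab, the returned list shows which services are exposed and therefore worth practicing against.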

Why can’t we agree on what a red team is?

As with many things in cybersecurity, there is always an implied “it depends” when discussing what constitutes red teaming. Some believe that red teaming is just hacking; others believe that red teaming is far more robust and systematic than that. I believe that ultimately it depends on the perspective of the audience. For those in a purely corporate setting, red teaming gives a more elegant name to penetration testing with a nonmalicious purpose. It implies a sense of structure and methodology that leverages offensive security capabilities to uncover exploitable vulnerabilities. Among the hacker community, however, there may be a much looser definition being used.

What is one thing the rest of information security doesn’t understand about being on a red team? What is the most toxic falsehood you have heard related to red, blue, or purple teams?

Being on a red team does not automatically make a person nefarious or malicious. Rather, what excites them within the realm of cybersecurity tends to be the offensive capabilities. Researching and discovering exploitable vulnerabilities is both tedious and painstaking, and being able to do so and articulate findings in a consumable manner is more an art than a science. While their pedigree may be hacker-made, it does not define them; rather, it legitimizes their place within the cybersecurity ecosystem.

Perhaps the most toxic falsehood to date that I have heard is that cybersecurity professionals completely fit within one of three buckets: red team, blue team, or purple team. This gives the perception that cybersecurity professionals are single-threaded, which simply isn’t true at all. While each professional may have more of an affinity to one or the other depending on how they have matured within cybersecurity, it is functionally impossible to not consider the other buckets. Red teamers must understand how their penetration attempts could be thwarted or detected and come up with countermeasures to lessen the likelihood of that happening. Blue teamers must understand at some level the TTPs that adversaries are launching to better develop countermeasures to repel them. Most cybersecurity professionals are a shade of purple, being more red or blue depending on affinity and maturity in the field.

When should you introduce a formal red team into an organization’s security program?

A formal red team can be introduced into a security program at any point. The value and benefit of doing so largely depends on what is to be gained from the red team exercises. If the intent is to understand the threat surface and to what degree a program (or a part of the program) is vulnerable, then it is reasonable to engage red team services early in the program’s development phase as a tool to better frame overall risks. Similarly, formal red team engagement can be part of the overall security strategy and lifecycle to reassess the robustness of controls and the organization’s ability to detect and respond.

How do you explain the value of red teaming to a reluctant or nontechnical client or organization?

Lobbying for red teaming within one’s organization can be challenging, particularly if the organization’s security program has not matured beyond vulnerability assessment and/or vulnerability management. Additionally, if the organization has not sufficiently invested in or implemented controls or resources, red teaming may uncover vulnerabilities that have not been budgeted for and for which there are insufficient resources, which exacerbates the problem. My approach has always been to frame the notion of red teaming as a function of risk management/mitigation. Red teaming allows an organization to find potentially damaging or risky holes in its security posture before bad actors exploit them, minimizing the potential impact to company reputation, customers, and shareholders. Taking this approach makes the question of whether to use red teaming a business decision, as opposed to a technical one.

What is the least bang-for-your-buck security control that you see implemented?

Of the myriad security products, services, and capabilities on the market, all should support two principal edicts: detect and respond. However, many security organizations are not staffed appropriately to consume and act on all the data that is available to them from these tools. Standalone threat intelligence tools, in my opinion, offer the least bang for the buck because they still require contextual correlation to the environment, which implicitly requires human cycles. Even with automation and orchestration between firewalls, SIEM, and IDS/IPS, correctly consuming threat intelligence requires resources—and burns cycles that may be better utilized elsewhere. The robustness of many of the more effective controls (firewalls, IDS/IPS, EPP) will generally give you the threat context that is necessary to detect and respond, without the overhead of another tool.

Have you ever recommended not doing a red team engagement?

Typically, a customer or an organization can always benefit from some form of “red team” activity, even if it is just a light penetration test. In my consulting life, we generally would recommend against a full-blown red team exercise if there was significant immaturity evident within the organization’s security program or if the rules of engagement could not be settled upon to safely conduct the red team exercise. What has been recommended in the past is a more phased approach, going after a limited scope of targets and then gradually expanding as the organization’s security maturity increases.

What’s the most important or easiest-to-implement control that can prevent you from compromising a system or network?

Security awareness training can be one of the easiest and most important controls that bolsters the overall security posture of an organization. User behavior can be the difference between a managed threat landscape and an unruly one, and in many instances, the end user will see incidents before security. Educate and empower users to practice good cyber hygiene. Beyond that, certain security controls that are cloud-based can be leveraged to offset the capital costs of infrastructure, if that is a barrier. This is particularly true in small to medium-sized businesses with limited staff and/or budgets.

Why do you feel it is critical to stay within the rules of engagement?

Rules of engagement are established as the outer markers for any red team/pentesting exercise. They basically provide the top cover for activities that may cause harm or an outage, even if unintentional. Additionally, the rules of engagement can be your “get-out-of-jail-free” card should something truly go sideways, as they generally include a hold harmless clause. Deviating from the stated rules of engagement without the express written consent of the client could open you up to legal liability issues and be devastating to your career.

If you were ever busted on a penetration test or other engagement, how did you handle it?

I had an instance where a physical penetration test was being conducted for a client, and the sponsor had neglected to notify site security about my presence. After gaining access to the facility through a propped-open door in the back (repair personnel didn’t want to keep badging in), I was walking through the facility with a hard hat that I had “borrowed” from a table, and I was apprehended by site security and the local police. To make matters worse, my contact was unavailable when they called to confirm that I was authorized to conduct the penetration test. After two intense hours of calling everyone that I could to get this cleared up and the threat of charges being filed, the contact finally called back and I was released without being arrested.

What is the biggest ethical quandary you experienced while on an assigned objective?

Without question, the biggest ethical quandary I’ve experienced is stumbling upon an account cache, financial records, or PII in a place where they shouldn’t be and being told by the sponsor not to disclose the details to the impacted individuals until the penetration testing exercise was complete, which could take several days. For me, there are certain discoveries that take priority and need to be acted upon immediately, particularly when it is PII or financial information. In this case, the sponsor was attempting to prove a point to another member of management and had virtually no regard for what had been discovered.

How does the red team work together to get the job done?

Red teaming, as the name implies, generally involves more than one person. The coordination that is needed to engage in a penetration test against multiple targets requires clear accountability as to what is expected of each team member. Additionally, there are generally members of the team who are better at certain tasks than others—those more suited to speaking with the customer do so, those more technical stick to those roles, and so on. It is always useful to have a team of red teamers comfortable speaking with customers, as each of them (particularly in large engagements) may have to report at different times to different audiences.

What is your approach to debriefing and supporting blue teams after an operation is completed?

When I was consulting, there would be two report-outs. One would be for management and reported on the high-level activities that were conducted, what was found, and the risk concerns that had arisen from those findings. Any extraordinary findings would be enumerated within that conversation so that if any legal or other actions needed to get underway, the accountable parties could get started. The second report was the technical deep-dive; it was generally divided into finding areas, and individual small sessions were conducted with blue team designees to confirm what was in the report and walk through any questions. It was also during these sessions that follow-on remediation efforts and next steps would be discussed.

If you were to switch to the blue team, what would be your first step to better defend against attacks?

Having lived on both sides of the fence, one of the things I am always amazed about is the lack of contextual visibility—not just logs and so on, but actual visibility with context into the associated assets. Additionally, there still seems to be considerable challenge in identifying assets within the ecosystem. The introduction of IoT (IIoT in the industrial world) has exacerbated this problem. Those two areas need to be addressed from a defense-in-depth approach because you simply cannot defend what you cannot see and identify. Effective cybersecurity defense is deployed in layers so that even if attackers get past one layer of defenses, it is increasingly difficult for them to get past subsequent layers. Lastly, I would spend more time and energy on security awareness training and arming the end user with the information needed to change behavior.

What is some practical advice on writing a good report?

When writing a testing report, it is important to understand what the objective of the customer is and write the report to align with those objectives. At the end of the day, any remediation efforts are going to need to be funded, and the more the testing report can help build that case, the more likely the client is to reach back out to your entity (or you) for follow-up work. Consider what the customer would need to show management to compel them to act. Get feedback from the customer during the drafting process and incorporate it; certainly the style and tone of the report can be critical to the efforts of the security function within that organization. Seek to highlight areas where the security function performed well, followed by findings characterized by risks. Also keep in mind that the content will have to be defended, so make the language succinct and as unambiguous as possible.

How do you ensure your program results are valuable to people who need a full narrative and context?

In my experience, how value is added to the red team program varies from organization to organization, but principally it should align with the overall security program and the risk posture of the organization. The program should strive to enumerate material and exploitable vulnerabilities within a given ecosystem, understanding that not all findings will be outside of the organization’s risk tolerance, whereas some may be nonnegotiable as risks that absolutely have to be mitigated. In either case, the ability to link the red team program to some repeatable metric, such as the number of material and exploitable vulnerabilities found, the number of successful versus unsuccessful attacks, or the number of false positives, can go a long way in legitimizing the value of the effort. Your skill set really doesn’t matter if the work you are doing doesn’t align with something of value to the business. Senior management isn’t interested in a report showcasing how skillful and smart you are—what they are interested in is their overall risk exposure given what you have discovered, so frame your activities in that light.
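The repeatable metrics described above reduce to simple arithmetic over engagement findings. A minimal sketch of that bookkeeping follows; the `Finding` structure, field names, and sample data are all hypothetical, invented purely for illustration.

```python
from dataclasses import dataclass


@dataclass
class Finding:
    severity: str    # e.g. "low", "medium", "high", "critical"
    exploited: bool  # did the red team successfully exploit it?


def engagement_metrics(findings, attempts_total):
    """Summarize an engagement into the repeatable numbers management tracks."""
    exploited = sum(1 for f in findings if f.exploited)
    material = sum(1 for f in findings if f.severity in ("high", "critical"))
    return {
        "material_findings": material,
        "successful_attacks": exploited,
        "success_rate": exploited / attempts_total if attempts_total else 0.0,
    }


# Hypothetical results from a single engagement.
findings = [
    Finding("critical", True),
    Finding("high", False),
    Finding("low", True),
]
print(engagement_metrics(findings, attempts_total=10))
```

Tracked engagement over engagement, numbers like these let you present risk exposure trends to senior management rather than a list of clever exploits.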

How do you recommend security improvements other than pointing out where it’s insufficient?

In any red team exercise, it is important to highlight those areas where the customer/organization did things well. For instance, if the organization has a robust patching program and it led to a smaller attack surface for the red team, be certain to acknowledge that. Remember, part of the job of a red team is to legitimize not only its skill capability but its intrinsic value as part of the security program. If the red team cannot contribute to the success of the security program to get the funding it needs, then its value is severely diminished. Conversations with blue team members should be as informative as possible, and if both teams come from the same company, it may be useful for the red team members to assist the blue team in identifying countermeasures. Be a source of expertise that is not just for hacking into systems but also for securing them—help the blue team think like hackers (assuming they aren’t already).

What nontechnical skills or attitudes do you look for when recruiting and interviewing red team members?

The most important nontechnical skill any security professional can have is strong communication skills. When recruiting for red team members, there must be an air of trustworthiness and integrity within the candidate. Red teamers will have access to very sensitive knowledge about infrastructures, security controls, vulnerabilities, and so on, and that information will need to be held in the utmost of confidence. The ability to be not only technically astute but also able to explain those technical concepts to the layperson is invaluable in a red team asset.

What differentiates good red teamers from the pack as far as approaching a problem differently?

Good red teamers are not only technical hacks but also have an innate understanding of what value their activities represent to their organizations (as an employee or consultant). Good red teamers are thorough and detail-oriented and comfortable with their own skill. Good red teamers are always looking to hone their abilities and figure out ways to exploit without detection. Problem-solving can be highly methodical, or it can be serial. Regardless of the approach, a good red teamer applies the proper approach when necessary and adjusts when that approach runs into a dead end. ■
