5 Christopher Campbell

“What are red team skills? When you list the skills that make someone a competent and effective attacker, you realize that those are the same skills that make someone a good server administrator, network engineer, or security practitioner.”


Twitter: @obscuresec

Christopher Campbell has been doing security research for many years and has a few college degrees, industry certifications, and open source project contributions. He has also found a few bugs and given a few talks at conferences. Chris is currently the red team chief for ManTech ACRE and was formerly a member of the U.S. Army red team.

How did you get your start on a red team?

The opportunity to join the U.S. Army red team was a lot more luck than anything else. I was on the receiving end of an assessment 15 years ago, before I had any idea that it was a possible career field. I decided to work on making myself marketable and reaching out to members of the red team community at conferences. I received a lot of helpful advice. I would like to think that it helped, but ultimately it came down to applying for positions that seemed interesting, working through the interview process, and following up after interviews to learn where I needed to focus more attention. Once I got on the team, my true journey started.

What is the best way to get a red team job?

The best way to get any job that you want is to document demonstrated competency and diligently apply. I had the opportunity to interview and have a part in hiring some really awesome red teamers, and they were all persistent throughout the process. All were honest about their skill sets during their interviews, and many asked for feedback afterward. Demonstrating passion and a willingness to learn whatever is necessary to be successful in a task goes a long way in the hiring process.

How can someone gain red team skills without getting in trouble with the law?

This question hits at a really misunderstood topic. What are red team skills? When you list the skills that make someone a competent and effective attacker, you realize that those are the same skills that make someone a good server administrator, network engineer, or security practitioner. You can gain all the skills you need to be a good tester without ever breaking a law. Even things that seem borderline illegal can be done within virtualized environments on your own computer with free software. Ultimately, getting in trouble with the law could likely be far more detrimental to your future career aspirations than most people realize. Why risk it?

“Ultimately, getting in trouble with the law could likely be far more detrimental to your future career aspirations than most people realize. Why risk it?”

Why can’t we agree on what a red team is?

In a semantic shift much like the one the word cyber went through, red team has lost the meaning that many still associate with it. The key distinction between a red team assessment and any other kind of test is adversarial replication. In other words, if you aren’t utilizing the tactics, techniques, and procedures of an actual, documented threat actor, then it is likely you aren’t conducting a red team assessment. That doesn’t mean you aren’t a red teamer. However, if you can’t articulate which actors use which techniques, then many would have a hard time believing that you are. Red team assessments aren’t better or worse than any other type of assessment, but for a long time they have been considered the sexiest. They exercise an actual defender on production networks and test policies and human responses that aren’t otherwise properly evaluated. I hope that the industry is able to reclaim the old definition, but it is probably unlikely.

What is one thing the rest of information security doesn’t understand about being on a red team? What is the most toxic falsehood you have heard related to red, blue, or purple teams?

One thing that people are often surprised by is the fact that I value empathy over most other traits when looking for red team members. Unfortunately, that empathy typically develops from being in the position to build, configure, develop, or defend production environments. That leads to the oft-repeated statement that being a red teamer shouldn’t be your first job. Some scoff at that statement because at first glance it appears to be gatekeeping. However, being able to quickly spot where a competent professional would apply their efforts is invaluable. It allows you to quickly home in on areas where less effort or attention may be applied and rapidly gain elevated access in your target environment. In other words, empathy helps you be better at the job and also helps you deliver the hard message at the end of an engagement. People who lack empathy or are ignorant of the actual struggle of day-to-day IT work tend to display the toxic mentality associated with arrogant testers, which hurts their effectiveness on the job and in delivery.

When should you introduce a formal red team into an organization’s security program?

The hard truth is that most organizations, even mature ones, don’t need a formal red team. The sole purpose of a red team is to exercise the defenders so that they will improve. The easy answer is when a program is mature enough to have the cycles to be exercised, it might be time for a red team. Red teams aren’t easy to build from scratch, and there are plenty of qualified organizations that offer their services on a temporary basis. Start there and then use them to help build, train, and augment your team when the time comes.

How do you explain the value of red teaming to a reluctant or nontechnical client or organization?

I once had the opportunity to sit next to someone on a cross-country flight who strongly believed that there was zero business value in paying for red team assessments. It was a point of view that I was unfamiliar with, so I listened to him. In the end, our views weren’t actually far from each other. The problem is that most people don’t understand what the core mission of a red team is and instead compare it to other types of testing and assessment. Red teaming doesn’t replace automated vulnerability assessments or penetration testing but rather complements them later in the defensive maturity of some organizations. Red teams aren’t for every organization, but every defender can benefit from having an adversarial mind-set. Sometimes that is hard to envision without having a trained team demonstrate it for you. Furthermore, red teams test how all of your policies and procedures work together in the actual production environment, where there are real people. For example, I have seen environments where immediate network-blocking actions were taken with minimal inspection. A few spoofed packets allowed for a large outage and a perfect social-engineering opportunity against upset, internet-deprived users. That type of logic flaw might be hard to see on paper but is much clearer thanks to the red team.

“Red teams aren’t for every organization, but every defender can benefit from having an adversarial mind-set.”
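
To make the auto-blocking logic flaw he describes concrete, here is a minimal sketch of the idea; it is not from the book, and the addresses, the detection rule, and the handler are illustrative assumptions. An automated response that blocks whatever source address appears in a flagged packet can be turned against the defender, because source addresses can be spoofed: a handful of crafted packets is enough to make the defense cut off its own critical hosts.

# Sketch of the auto-blocking logic flaw; addresses and the signature check
# are illustrative assumptions, not from the book or any specific product.

blocked: set[str] = set()
CRITICAL_HOSTS = {"10.0.0.2", "10.0.0.3"}  # e.g., internal DNS and proxy (assumed)

def looks_malicious(payload: bytes) -> bool:
    # Placeholder standing in for an IDS signature match.
    return b"exploit-signature" in payload

def handle_packet(src_ip: str, payload: bytes) -> None:
    # Flaw: the block decision trusts the (spoofable) source address and
    # fires immediately, with no further inspection or allow list.
    if looks_malicious(payload) and src_ip not in blocked:
        blocked.add(src_ip)
        print(f"auto-blocked {src_ip}")

# An attacker spoofs the critical hosts' addresses in a few crafted packets...
for spoofed in CRITICAL_HOSTS:
    handle_packet(spoofed, b"...exploit-signature...")

# ...and the automated defense has now blocked its own infrastructure.
print("Self-inflicted outage:", sorted(blocked & CRITICAL_HOSTS))

An allow list of critical infrastructure, or a human confirmation step before any block, is exactly the kind of policy interaction that is hard to see on paper but obvious once a red team triggers it.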

What is the least bang-for-your-buck security control that you see implemented?

It is hard to name just one. I think any device that you purchase in order to secure your environment and that adds more attack surface would be my answer. Unfortunately, that includes probably half the blinking-light appliances being peddled at the booths at most security conferences. They typically have elevated credentials stored in them and have gone through minimal testing before being advertised as the answer to all your security problems. No CISO wants to see their latest security acquisition on the report, but it happens far too often.

Have you ever recommended not doing a red team engagement?

Any honest tester should have many stories like this. While on the Army red team, we didn’t have the flexibility to offer different options most of the time because the engagements were mandated. Assessing the maturity of an environment is a critical step in the planning of an engagement, but it is often a tricky one. Years of internal and external penetration tests should occur long before a red team happens. This is further complicated by the wide misuse of the term red team throughout the industry.

What’s the most important or easiest-to-implement control that can prevent you from compromising a system or network?

Something that I have mentioned a few times in conference talks is a creative practice that legitimately caught us on an engagement. The organization used log review as a punishment of sorts for almost anything in the office. If you were late to a meeting, lost a wager on last night’s game, or failed to contribute to a swear jar, you earned log review time. The manager handed out logs from a different team for review every week, and the person on log duty would produce a write-up on the most serious thing they found and share it with the teams afterward. This had a few benefits. All teams knew that their logs would be reviewed and likely were more thorough as a result, and there was an obvious focus on security in all departments. Fresh and curious eyes were able to find anomalies that would have otherwise been lost in the noise, which led to the discovery of a compromise without the use of any expensive security appliances. Finally, skills and knowledge were passed between the teams, which kept people engaged and, I believe, more satisfied in their work.

Why do you feel it is critical to stay within the rules of engagement?

The rules of engagement (ROE) are absolutely critical and need to be ironclad. Trust and professionalism are paramount to the ultimate success of a red team. If you violate the ROE, you have broken the trust the organization has put in the entire team. The time to question and adjust the ROE for everyone’s benefit is before the engagement begins. The level of access a typical team achieves during an assessment is likely greater than that of any individual admin or IT section in the organization. That trust should never be violated.

If you were ever busted on a penetration test or other engagement, how did you handle it?

I have been busted many times during assessments and could probably write an entire book on those interactions alone. One of my favorite incidents involved signing up for a conference room through social engineering, tailgating into the building, and then joining several members of the red team while we plugged into the target’s internal network. After using traditional methods of acquiring credentials, I attempted to utilize those credentials on the first interesting hostname I saw in Active Directory. We underestimated our audience, and using credentials over the network directly against a workstation from another workstation’s IP address triggered an alert from their host-based security product. The warning contained my IP address, the account that I was using, and what file I was attempting to access. Unfortunately, the machine I targeted was the machine being used to project slides for a meeting that the entire IA section was attending. They jumped into action and were able to figure out where we were before I realized that I had tripped anything. We were busted, and it was all my fault. They ran into the room where we were quietly working and demanded to know who we were. Feeling responsible for the situation, I firmly told them to go get their boss so we could discuss their disturbing our work. This confused the group, but they declared that they would be right back. They returned to an empty conference room, and we endured a super-awkward out-brief a few days later.

What is the biggest ethical quandary you experienced while on an assigned objective?

A situation that I have encountered several times is being asked to leave out specific critical findings from a report. I imagine this is common, based on discussions with other testers. Sometimes assessments come with far greater ramifications than a tester realizes. As tests have become more common and more organizations have publicly reported being breached, the stigma that comes with poor performance on an assessment has ideally lessened. A properly scoped engagement by an experienced red team will almost always result in a successful compromise. No one should lose their job because of it unless they are found to have violated company policies or acted unethically. The point is to train and improve the security posture of the organization, not to poke people in the eye. For that reason, I am against withholding confirmed findings from a report.

How does the red team work together to get the job done?

The “team” element of red teaming is critical. No one can be an expert in everything. Having people with diverse technology backgrounds allows you to work together to accomplish the mission. Communication is important, and each member of the team should have an understanding of what the others are working on. Leadership of the team is also extremely important. Each action on the network, and the risk that it will be caught or reacted to, needs to be understood and analyzed. Rogue actions can be detrimental and lead to internal conflict as well as poor results. Documentation during the assessment should be everyone’s responsibility, but in my experience, rotating one person to combine everyone’s findings into the report leads to better results.

What is your approach to debriefing and supporting blue teams after an operation is completed?

Every engagement is going to have required written or oral deliverables, but I have always been partial to the informal out-brief. This brief is free of managers and egos and is just a frank discussion of the things that were done with all who were involved. When it is done correctly, both sides benefit. Additionally, this is a great time to glean things that the organization did well or things the defenders would like you to emphasize for their bosses in order to secure support or funding.

If you were to switch to the blue team, what would be your first step to better defend against attacks?

I believe that the first step in defending any environment is to map it. There is almost always a discrepancy between the number of machines an organization believes it has and how many it actually has. It seems so simple, yet there are often surprises: machines that were either forgotten over the years or never documented. You have to figure out what is there before you can ever hope to defend it.
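
As a small illustration of that mapping step, here is a sketch I am adding (it is not from the interview; the subnet and inventory entries are assumed) that compares a documented asset list against hosts that actually answer a ping. Even something this simple tends to surface machines that were forgotten or never documented.

# Sketch: compare a documented inventory against hosts that actually respond.
# The subnet and inventory addresses are illustrative assumptions.

import ipaddress
import subprocess

SUBNET = "192.168.1.0/28"                      # assumed example subnet
DOCUMENTED = {"192.168.1.10", "192.168.1.11"}  # assumed inventory entries

def is_alive(ip: str) -> bool:
    # One ICMP echo request (Linux-style ping flags); True if the host replies.
    return subprocess.run(
        ["ping", "-c", "1", "-W", "1", ip],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    ).returncode == 0

live = {str(ip) for ip in ipaddress.ip_network(SUBNET).hosts() if is_alive(str(ip))}

print("Live but undocumented:", sorted(live - DOCUMENTED))
print("Documented but not responding:", sorted(DOCUMENTED - live))

In a real environment the mapping would draw on ARP and switch data, DHCP leases, cloud inventories, and agent check-ins rather than ICMP alone, but the gap between what is documented and what actually responds is the point.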

What is some practical advice on writing a good report?

The unfortunate reality is that the report matters more to everyone else than to the person writing it. The best reports are accurate, concise, and engaging. Accuracy comes from documenting your actions and providing evidence of your findings and not overstating them. State the facts that your data points to. Concise writing prevents reader fatigue. If you have made it this far into my answers, you have likely discovered that I struggle with this. Figure out who on the team is good at it and have that person edit reports until everyone improves their writing style. Provide raw data and results as addendums. Finally, you want your reports to be engaging. Speak to the reader in a way that keeps them reading. Show them how much work went into the assessment and possibly inspire one of their admins to go into security.

How do you ensure your program results are valuable to people who need a full narrative and context?

The narrative is what differentiates a red team report from a pentest report, and for many it is where much of the value of a red team engagement comes from. What techniques did you use? What adversary were you emulating? Why did you choose that group over other groups? All of those questions should be answered in a red team report. The reader should learn not just what you did to them but how their environment could be realistically attacked in the future.

How do you recommend security improvements other than pointing out where it’s insufficient?

It is important to understand that you don’t know why decisions were made in an environment. It is so easy to recommend specific improvements without having any knowledge of business needs, which makes your recommendations potentially worthless and may actually serve to discount your findings entirely. Presenting findings with generic recommendations for how to improve the organization’s security posture is likely the best a true red team will be able to do. In a lot of cases, you need a deeper understanding of an organization’s needs to make specific recommendations. You just don’t get that from the adversarial perspective.

What nontechnical skills or attitudes do you look for when recruiting and interviewing red team members?

Empathy and passion are what I look for. Passion keeps you learning new technologies and not becoming complacent with the same techniques you have previously used. Empathy helps you predict what people would likely not have had time to devote attention to, and it helps you write a more effective report.

What differentiates good red teamers from the pack as far as approaching a problem differently?

Good red teamers are able to quickly evaluate attack surface. All testers rely on some sort of methodology, but a red teamer doesn’t need to flip every stone. They can look at an application or system and see where the quick wins or low-hanging fruit are and move on. I like to describe it as being in a dark room with one door. A typical tester will walk in every direction and will eventually find the door after touching most of the walls. A good red teamer will walk straight to the door and never touch the wall. It looks like magic, but being able to quickly identify attack surfaces is what separates a good red teamer from the rest of the pack. ■
