Tribe of Hackers Red Team - Marcus J. Carey
8 Ben Donnelly
“There isn’t just one type of red team job. There are quite a few subtle differences between different companies/groups that perform this type of work.”
Twitter: @Zaeyx
Benjamin Donnelly is an omni-domain engineer and the founder of Promethean Information Security LLC. Ben has worked as part of teams hacking such things as prisons, power plants, multinationals, and even entire states. He is most well known for his research projects, including his work on the DARPA-funded Active Defense Harbinger Distribution. Ben has produced a number of field-leading advancements, including the Ball and Chain cryptosystem. He has spoken at Derbycon and BSides Boise and has contributed content to multiple SANS courses. Outside of cybersecurity, he can often be found skydiving, producing underground electronic music, or starring in indie films.
How did you get your start on a red team?
I competed in a high school cyber-defense competition called Cyberpatriot. My team did quite well, and from there I managed to talk myself into getting invited to come out and compete in the first-ever NetWars tournament of champions. At this point, my entire skill set was still entirely from a blue team perspective—that was the only thing that Cyberpatriot had trained us in. Recently graduated from high school, where else was I supposed to learn the black arts (“red arts”?)?
But it turns out that my specialized blue team skill set quickly transitioned into red cell activity. I didn’t win my first run at NetWars, but I did score in the top ~10 percent. Considering my age and that I was competing against professionals, I think that impressed some people. This got me a few job offers, and I took one working for a SANS instructor. I was supposed to just be an intern, but I kept throwing out knowledge and hard work. It wasn’t long before I was getting called in to help with penetration tests, and my job title officially changed to penetration tester/security researcher.
What is the best way to get a red team job?
Define “best.” If what you value is an interesting story, then perhaps your best way would be to do the old “black hat captured by FBI and forced to hack for good.” Of course, assuming that your idea of best is to (as soon as possible) have a strong, well-paying, prestigious job “hacking things” legally, then there are a few things I can recommend.
You need to know your target audience, and then you need to impress them. There isn’t just one type of red team job. There are quite a few subtle differences between different companies/groups that perform this type of work. From a high level, you’ll find that there are two major types of hackers in this field. Both have places on different red teams, and both are really cool. The biggest practical difference between the two will be in their clientele.
The first type of red team is the computer network operator–type team. Their primary focus is going to be on access. They train to utilize hacking tools and frameworks, and they aim to impress. If you want to join one of these teams, you need to be focusing on training on breach simulation because that’s what their world is all about. Their clients hire them to show exactly how an attacker might gain and leverage access to a network or system. This type of team is going to be dropped into a network, or onto a target system, with the goal of exploiting the system to its fullest extent and building a narrative they can present to the company’s executive team detailing how they got it done. To join one of these teams, you almost certainly won’t need a bunch of certs, and you probably don’t need a college degree. What you do need are the skills to do the job and the guts to ask for it. To get there, find a team that you want to join, train until you’re ready, and then prove yourself by competing or contributing to the community.
The second type of team is the security engineering–type team. This type of team is less likely to be dropped into networks with the goal of “simulating” a literal breach. Instead, they are likely to spend their time creating and building and auditing complex solutions to hard security-centric problems with the goal of improving the technical sophistication and security of a given software or hardware system. If you join one of these teams, you won’t spend your time trying to create a narrative to describe how exactly you accessed a network via a simulated hack. Rather, you will spend your time analyzing systems from a multitude of perspectives and then applying your knowledge to answer tightly scoped questions such as “If an attacker had access to this network, could they bypass our host whitelist?”
For both team types you’ll want some combination of computer science and information technology knowledge. You can gain these things in school or on your own time. The type of team that you want to join will influence whether you should be learning Metasploit and Active Directory or cryptology and software engineering. Once you know what it is exactly that you want to do, simply learn those skills and send in an application.
How can someone gain red team skills without getting in trouble with the law?
For me it was competitions. I kind of got dragged into them when I was quite young. I was in a cadet program in high school that gave me the opportunity to compete in Cyberpatriot back when it was just getting started. This competition opened my eyes to information security, though it didn’t really give me red team skills. What it did do was to prepare me to be able to understand and parse red team contexts.
You can easily and legally learn the basal skills required to be ready to quickly transition into a red team role by working in computer network defense–type roles. You’ll learn about what it is that attackers do as you learn to anticipate them. And far more importantly, you’ll learn how to play with infrastructure.
Look, certainly part of red teaming is knowing how to actually exploit a system. You will need to know SQLi and XSS, and you will need to know how to pop a shell and pivot through it. Those specific things will not use up even half of your time. Even when you’re actively “hacking,” you will spend the vast majority of your time on building, manipulating, and traversing infrastructure.
If you want to be an amazing red cell member, here’s what you need:
Massive ability to manipulate infrastructure (gained from IT training)
Massive ability to manipulate software systems (gained from CS training)
Massive ability to manipulate social systems (gained from psychology training/high empathy/life)
I left out a few skills there, such as time management and report writing, but you get the idea. In the end, the crazy cool “hacker” things do not exist in a void. They are just the other sides of various coins you’re already familiar with.
What is one thing the rest of information security doesn’t understand about being on a red team? What is the most toxic falsehood you have heard related to red, blue, or purple teams?
Many people think what we do is magic. In the past, I’ve met incredibly intelligent and well-spoken people who treated us like gods. We absolutely do not deserve this praise. If we work hard, if we do a great job, then thank us. But our field isn’t for immortals; it’s just for lucky people who managed to find the opportunities and walk the esoteric path that led them here. You can be here if you so choose. Not to throw massive shade, but I absolutely can think of a few people who get tons of undeserved praise for simply existing in this field. And that’s okay, until it makes other people feel like they don’t measure up. There are absolutely wizards in the world; I defend this 100 percent. I’ve met some; I’ve worked with some. But the vast majority of the time, the gentleman professional running Metasploit and logging Nessus results is not one of the few rare and crazy-haired titans of computer science.
“Many people think what we do is magic. In the past, I’ve met incredibly intelligent and well-spoken people who treated us like gods. We absolutely do not deserve this praise.”
What is the least bang-for-your-buck security control that you see implemented?
Oh my goodness, firewall tech by far. All these expensive “security” devices that seem to keep selling like hotcakes are effectively capable of stopping 0 percent of technically sophisticated adversaries. It’s quite unfortunate that these things sell so well, though it’s certainly understandable. The people making these purchase decisions just don’t know what’s possible.
Take the example of deep packet inspection. I think a lot of even decently technical people making purchase decisions hear “This firewall will ensure only HTTP traffic can exit your network by searching for HTTP header data sent via valid TCP routing” and they think to themselves, “That sounds awesome.” This is one of the most sophisticated analysis methods we regularly see. I’m sure people are paying tons of money for it.
They’re also paying me tons of money to walk into their network and wrap other protocols in unintelligible data globs sent as part of HTTP proxied traffic. This recently popped up on a test, where a client had exactly this technology in place. I just built a fake “update service” that polled a remote “update server” at random intervals, sending and receiving X-API-Key headers that contained arbitrary base64-encoded data. Normally such headers contain random strings encoded as base64 or hex. In this case, we just piped any protocol/content of our choosing into that area.
For higher per-packet data throughput, we could have easily utilized a fake JWT/JWS-style header value containing multiple such random strings in the body tunneling data, with a fake signature section tunneling more data—or even better, a “JWT/JWE” wherein the encrypted body is entirely ours to play with.
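As a sketch of the idea (not the exact tooling from the engagement), the header-smuggling trick reduces to a few lines of Python. The function names are invented, and a real implant would add chunking, timing jitter, and a plausible cover request:

```python
import base64

def encode_chunk(data: bytes) -> dict:
    """Wrap an arbitrary payload chunk as a benign-looking API-key header."""
    return {"X-API-Key": base64.b64encode(data).decode("ascii")}

def decode_chunk(headers: dict) -> bytes:
    """Server side: recover the tunneled bytes from the header value."""
    return base64.b64decode(headers["X-API-Key"])

# Any inner protocol's bytes can ride along; to deep packet inspection
# this is indistinguishable from a legitimate random API key.
payload = b"\x00\x01 arbitrary inner-protocol bytes"
assert decode_chunk(encode_chunk(payload)) == payload
```

The point is that the inspection device validates the envelope (valid TCP, valid HTTP, plausible header names), while the contents of an opaque token are, by design, uninspectable.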
Have you ever recommended not doing a red team engagement?
I’ve gotten right in the front door with a few companies. When that happens, often the rest of the test is a waste. Sometimes it’s just an honest mistake—something like a default password left on an administrator account. That stuff happens even with high-level application developers (you wouldn’t believe who). But more often than not, a test like this just becomes a slaughter because of the architectural failures of an application or system in general.
I’ve watched applications literally unravel from within by means of insecure direct object references (IDORs). The developers thought that it was fine to perform no authN/authZ prior to object access as long as the object IDs were long and random. Hint: that is almost never okay. In the specific case I’m thinking of, you could request a series of tokens if you knew only one starting tokenized value. They assumed you could get that token only if you were logged in as the user it belonged to. It turned out that you could find the token by requesting their password reset page with the user’s email and then pull down a series of chained requests to compromise everything that belonged to the user.
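The fix for that class of bug is conceptually simple: treat a random object ID as a locator, never as authorization. Here is a minimal Python sketch, with invented names, of both the vulnerable pattern and the corrected one:

```python
import secrets

DOCUMENTS = {}  # token -> (owner, contents); stand-in for a real datastore

def create_doc(owner: str, contents: str) -> str:
    token = secrets.token_urlsafe(32)  # long and random, as in the story
    DOCUMENTS[token] = (owner, contents)
    return token

def fetch_doc_vulnerable(token: str) -> str:
    # IDOR: token unguessability is treated as authorization,
    # so anyone who ever learns a token owns the object forever.
    return DOCUMENTS[token][1]

def fetch_doc(token: str, requesting_user: str) -> str:
    # Fixed: authenticate the caller, then authorize against the owner.
    owner, contents = DOCUMENTS[token]
    if requesting_user != owner:
        raise PermissionError("object does not belong to requester")
    return contents
```

The vulnerable variant is exactly the “long and random is fine” assumption; once a token leaks anywhere (say, via a password-reset page), every chained request succeeds.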
In these architectural cases, the offer isn’t to continue red teaming them. The offer is to help them rebuild their application from the ground up with a member of our team working with them to ensure they make valid security decisions.
What’s the most important or easiest-to-implement control that can prevent you from compromising a system or network?
One hundred percent client (host) isolation. Unless the systems on your network absolutely must be talking to each other, you need to implement this, and you need to do it now. Especially in the modern world of AWS, GCP, and Azure, your business applications aren’t living on-premises. They’re living somewhere else, accessed via an external pipe that exits your LAN/WAN. Few organizations have any need for workstations to talk directly to each other. Not only is this functionality all but useless in most business use cases, but implementing isolation also stops a huge number of attacks that we would otherwise be able to leverage to gain access to and exploit your network resources. Without device-to-device access, how am I supposed to find and exploit unpatched servers or workstations on your network? How am I supposed to pivot laterally? How am I supposed to relay credentials or access a rogue SMB shared directory? Implement isolation, and I guarantee you will watch red teams fail.
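As a sketch of what this can look like at the network layer, here is a hypothetical set of iptables rules for a Linux gateway bridging a client LAN. The interface names (br-lan, eth0) are assumptions, and bridged frames must be visible to iptables via br_netfilter; managed switches and wireless APs expose the same control as “private VLANs” or “client isolation.”

```shell
# Hypothetical Linux gateway: client LAN bridged on br-lan, uplink on eth0.
# Clients may reach the uplink; replies may come back; clients may not
# talk to each other.
iptables -A FORWARD -i br-lan -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o br-lan -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i br-lan -o br-lan -j DROP
```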
Why do you feel it is critical to stay within the rules of engagement?
When it comes to causing harm, that’s a huge no-brainer. You want to have a job? You want people to trust you? You want to not be in jail? However, I think people often get tripped up over some of the gentler rules surrounding things like scope and attack types. These are the gray areas where it’s easier to just let things slide. You shouldn’t do this. Don’t let it slide, and don’t purposely play in the gray areas. Here’s what you need to do.
Have an open chain of communication with your client through which you can easily reach out at any time. When you bump into a gray area, don’t just keep going. Reach out to your client and request clarification. If the rules of engagement or the project scope isn’t matching the reality of the application/network/system, renegotiate it.
“Have an open chain of communication with your client through which you can easily reach out at any time.”
Literally everything is up for negotiation. Talk with people.
If you were ever busted on a penetration test or other engagement, how did you handle it?
I’ve never managed to get hard busted yet, though I’ve heard some great stories. I seem to be pretty good at living off the land to avoid detection in network pivots. As a result, I rarely get noticed by network security teams. When I’m blocked during external tests, I literally just round-robin out of their way and back into attacking.
I was stopped from entering a building once during a physical penetration test. I wouldn’t say that they “caught us” as much as their security procedures didn’t allow random people who showed up at the gate claiming to be “mold inspectors” to enter without a signed work order waiting for them. They asked us to head down to the security office, and they left me and my partner alone in their control room with just one guard checking out our credentials on his computer. When he didn’t find our fake mold inspector badges listed for entry, he simply asked us to leave.
I know it’s not a crazy cool story. But hey, maybe if you’re nervous about how hard some of the heart-pumping, adrenaline-inducing portions of red teaming might be, even getting caught isn’t always that bad. In the end, they just asked us to leave. We flew to another state and broke into a different facility for the same company.
What is the biggest ethical quandary you experienced while on an assigned objective?
“Should I report this to the vendor?” is a huge one, especially when it involves systems that you know are in production globally at a massive scale. The moral penalty for not reporting can be huge. But, there are certainly situations in which you’ll find yourself locked into an NDA that limits your ability to share findings with a third party. In this case, it’s often best to work with your client and have them report or permit a redacted report to be transmitted to the selected third party. This is yet another reason that it’s good to have great clients. It’s important to choose who you work with.
How does the red team work together to get the job done?
Personally, I find it a little bit harder or less efficient to work with others in many ways. The biggest issue is information flow. Interpersonal communication is hugely low bandwidth at best. Talking is a terribly inefficient way to transmit large amounts of information.
I think within red teams it’s a huge hindrance that must necessarily be overcome. The work that we do is incredibly complex and terribly high in specificity, especially when you factor in the issues that our perspective adds to our tasks. We don’t usually have a control console flashing lights and outputting debug information to tell us what a system is doing internally as we interrogate it. Instead, we have to work out what’s happening inside by tracking huge numbers of variables indicated by external responses (such as error messages). Communicating exactly what is happening (and when) to each other can be a huge challenge.
When it comes to working with the defenders/blue team/product engineers, I’m a huge fan of the collaborative engagement model, in which the “find a way in and tell us a cool story” method of black-box testing is replaced by gray-box work with the engineering team: testing discrete sections of the application/network to understand the threats posed to exactly that portion of the system, independent of any other protective layers. We don’t want to find one “cool” way in. We want to find and patch all the many different ways in.
A lot of red teams are absolutely trying to prove something and come home with cool stories to impress their friends. They want to live the life of a hacker, and sure, it’s cool to find XSS in a site and pivot that into full control over it. You know what’s even cooler? Actually helping the security team maximize the security of the application. You do this by forgetting about the glory and working directly with the security team to remove the “fog of war” around the application. Then you apply concrete adversarial-security engineering skills to building a model for a threat against any system/subsystem therein.
What is your approach to debriefing and supporting blue teams after an operation is completed?
Full transparency. My skill set isn’t “running Metasploit,” and my access doesn’t rely on a bucket of zero days I pull from to impress people. We’re kidding ourselves if we’re not engineers first and foremost. Engineering is a skill that I hold inside my mind, and I don’t lose anything by showing people what I’ve done. The next challenge will be different, the next project will be different, and I’ll engineer my way through them as well.
You should be able to answer any and all questions asked of you by the product team. There isn’t any special sauce here, and any red teamer who tells you there is one is simply holding on to whatever weak power they might have for fear of losing it. If questioned, I consider it my job to tell the product team literally everything I know. If they can hold all that knowledge, then I as a red cell professional need to move on and discover more. There is no secret sauce.
We really do need to be the virtual explorers and cartographers of our field, helping everyone to follow in our footsteps as quickly as possible and then moving on to new horizons when they arrive.
If you were to switch to the blue team, what would be your first step to better defend against attacks?
Of course, the answer here depends on a lot of variables, such as the company or my delegated responsibility scope. But if I were to choose just one thing to push, it would absolutely be Active Defense. I’ve done a lot of work on Active Defense in a past life. It’s something that I still think (even if some of the most common techniques are getting a little dated) is worth a huge amount (especially within the context of the average organization, wherein the cost of security engineers or custom software systems might be prohibitive).
With Active Defense, you’re looking at interdicting attack methodologies as opposed to simply trying to “harden” everything. If your plan is to remove every last vulnerability from your network, you’re going to have a bad time. But if you implement systems that can anticipate adversarial actions and counter them to gain you something, you might just get positive play out of it.
I was the lead developer on Black Hills Info Security’s Active Defense Harbinger Distribution (ADHD) for a few years. This tool was designed to tackle this exact problem space. We want to find ways to anticipate the types of actions that an adversary will take and then do things to hamper them—or sometimes do more than just hamper. If you look at tooling like Honeybadger (inside ADHD), you’ll find that it’s actually possible to track an adversary down to their physical location when they try to hack you.
There is a lot that can be done, and it’s not even hard to do. Most teams just haven’t thought to try. But you can, and you should.
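The core pattern really is simple. Below is a minimal, hypothetical Python sketch of one such trap: a decoy “canary” account planted where an attacker will harvest it, so any use of it produces a high-signal alert. The account name, user set, and alert list are invented for illustration; ADHD’s real tooling goes much further.

```python
import logging

logging.basicConfig(level=logging.WARNING)

REAL_USERS = {"alice", "bob"}   # invented user set for the sketch
CANARY_USER = "svc-backup-ro"   # hypothetical decoy credential seeded in configs
alerts = []                     # stand-in for a real alerting pipeline

def check_login(username: str, source_ip: str) -> bool:
    """Reject the decoy account and record a high-signal alert for it."""
    if username == CANARY_USER:
        # No legitimate user ever touches this account, so any attempt
        # means someone harvested credentials they should not have.
        alerts.append((username, source_ip))
        logging.warning("canary tripped: %s from %s", username, source_ip)
        return False
    return username in REAL_USERS

check_login("svc-backup-ro", "10.0.0.66")
assert alerts == [("svc-backup-ro", "10.0.0.66")]
```

Unlike a generic failed-login alert, this one has essentially no false positives, which is what makes it worth the attacker’s time to your defenders.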
This is a special question that’s dear to my heart. If you’re reading this and you’re part of a blue team that wants to do this type of thing but have no idea where to go, please do reach out to me directly on Twitter, and I’ll do what I can to point you to resources that can get you on the right track.
What is some practical advice on writing a good report?
Be detailed, correct, and honest. It sounds crazy simple, but in my experience, these are things people struggle with.
Your reports need to contain enough detail that any findings you produce can be easily understood and, in all possible situations, easily reproduced. You can check this by reading the report over once or having someone outside of your project review it for you. If your writing makes any leaps from one idea to the next, fill in exactly what you meant.
You should be correct in all your report writing. What this means is that you should ensure you tell the full honest and clear truth. Don’t insinuate things that aren’t there in order to make your team look better, and don’t be overconfident. I like to write using words like can, might, and may unless I’m absolutely sure about something. So much of what we do is chaotic (highly complex) and filled with massive gray areas. It’s unlikely that you will ever be able to say much with 100 percent certainty. Don’t write like you know it all.
Finally, be honest in your report writing. Be brave even. Be willing to say that you found something truly damaging. Remember, your job is to play the part of an adversary. There aren’t too many highly destructive adversaries in the world for one simple reason. People aren’t often willing to risk it all. But rest assured, when a true villain strikes an organization, they won’t pull any punches. You need to be brave, abnormally brave, when it comes to trying things that are hard. Be willing to run attacks that may fail, and be willing to be embarrassed if they do. Be willing to write about these attacks, and admit when you don’t know things. Your clients aren’t just paying you; they’re relying on you. Act like it.
How do you recommend security improvements other than pointing out where it’s insufficient?
You need to have at least a cursory understanding of what’s actually happening in three areas if you want to be a good red team member.
You need to know what attacking is like: This is the first one that most aspiring red cell members rush to learn. They want to know “how to hack,” so they rush out and start reading books and watching tutorials or talks looking to understand what the attack looks like/how it’s carried out. This is great. You absolutely must know these things. But this isn’t everything.
You need to know what your attack does: You need to understand what the attack is actually doing. Not just “Oh, the SQL injection string is injecting SQL.” I’ve met people who knew ' or 1=1;-- who didn’t know a drop of actual Structured Query Language. These are not serious people.
You need to know what your target system does when you’re not around: Plain and simple, you need to actually understand what it is you’re attacking. Perhaps not intensely or in depth, but you should have at least a cursory understanding of everything—every computer, every system, every person you interact with—outside of the context of your run.
If you know these three things, you can do your job, you can do it well, and you can provide the context modifications of your actions to the security team with professionalism and ease.
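To make the second point concrete, here is a small self-contained sketch using Python’s standard sqlite3 module: it shows what a 1=1-style injection actually does to the query text, and why a bound parameter neutralizes it. The table and values are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# The classic injection: the quote closes the string literal, OR 1=1 makes
# the WHERE clause always true, and -- comments out the dangling quote.
evil = "nobody' OR 1=1--"
leaked = conn.execute(
    f"SELECT name, secret FROM users WHERE name = '{evil}'").fetchall()
assert leaked == [("alice", "s3cret")]   # every row comes back

# Bound parameters treat the same input as data, never as SQL.
safe = conn.execute(
    "SELECT name, secret FROM users WHERE name = ?", (evil,)).fetchall()
assert safe == []
```

Someone who understands the second query understands why the first one fails; someone who only memorized the string understands neither.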
What nontechnical skills or attitudes do you look for when recruiting and interviewing red team members?
One hundred percent self-awareness. You look for the people who make fun of themselves. You look for the people who are willing to ask questions or admit when they don’t know something. You look for the people who correct themselves.
In this field, your ego doesn’t get to decide when you gain access to a computer system. Almost everything we do is reactive. We don’t often get to write the vulnerability into the system beforehand. Therefore, you need to be 100 percent able to parse what’s happening around you. That’s what self-awareness is for. You need to be able to track the world without your ego attempting to force its own will on the world around it.
With self-awareness you can understand, control, and react to yourself. This means that you can put yourself aside and focus on the Herculean task of outsmarting armies of engineers and outperforming computers.
You’ll be able to see what I’m talking about when you work on a team with both types. The difference is like night and day. Most people are stuck within themselves. I massively support and affirm those people who are (by right of birth or right of hard work) able to see themselves from a pseudo-objective perspective.
What differentiates good red teamers from the pack as far as approaching a problem differently?
I have met an inordinate number of exceptional red cell members who would almost certainly be considered to be somewhere on the autistic spectrum. If you’ve been in this field for even a brief period of time, you almost certainly have seen something similar. This doesn’t mean you have to be autistic to be good. But it does imply that there is something going on.
It’s probably true that the general autistic cognitive profile performs exceptionally in this field relative to the average or neurotypical cognitive profile: to be able to focus for extremely long periods of time, to be more apt to reason from first principles (axiomatically), to be highly sensitive to the specificity of your environment, and to be able to translate that into task-applied “detail orientation.”
We welcome all types. If you know your stuff and if you can deliver, you belong here. But neurotypicals can in large part survive anywhere. As such, I do think that it’s especially heartening to see neurodivergent people, who in many cases haven’t ever before been able to clearly demonstrate their value to their peers/parents/community, absolutely kill it as part of a red cell. You take the “nerdy” kid who got made fun of for not following viral dance crazes in high school or whatever, you give him a laptop, and suddenly power plants start shutting off for seemingly no reason; it’s beautiful. ■