How MDR and AI Are Shaping Cybersecurity in 2024


MDR vs. SOC: What You Need to Know for Cloud Security. In this episode, Ashish Rajan sits down with Warwick Webb, VP of Managed Detection and Response (MDR) Services, and Adriana Corona, Director of Product Management, AI, at SentinelOne to talk about the changing landscape of MDR and the role AI plays in modern cybersecurity. They discuss the differences between traditional SOC teams and MDR services, the role of AI for security teams, and the challenges of cloud security in a multi-cloud environment, from threat detection to attribution.

Questions asked:
00:00 Introduction
01:30 A bit about Warwick
01:53 A bit about Adriana
02:25 What is MDR and how is it different from a Managed SOC?
03:29 What is Detection and Response?
04:39 MDR in the Cloud Context
07:15 How has the Detection and Response space evolved?
09:20 Have the threats evolved as well?
11:23 Challenges with MDR implementation
13:08 Where does AI fit into Detection and Response?
17:18 Current State of Security Analysts in MDR
20:54 How does AI impact Detection?
22:26 The role of AI in reducing response time
25:27 Separating signal from the noise for AI and MDR
28:14 Skillsets for detection engineering teams
30:18 The role of AI Detection Tools

Ashish Rajan: [00:00:00] Managed detection and response is an interesting topic now that we are in the cloud world, where there are new threats appearing from the cloud, and detection and response times that I have to report as a CISO to my board, to my CTO, to my CEO. I also have to work out whether I'm really in a state where I should be going for an MDR, a managed detection and response service. In this conversation, I had Warwick and Adriana from SentinelOne, who came in and spoke about the MDR space, the use of AI to potentially optimize it, and how far we are in this AI conversation in the reality of 2024, whether it makes sense for us to be leaning even harder on AI.

Can we trust AI to be at a point where we can completely replace MDR? This one is for you if you are someone who's considering MDR or wants to look a bit more into the MDR space. Now, Warwick himself is the MDR person for SentinelOne, so they're drinking their own champagne, as I like to say.

So it was really interesting to have Warwick and Adriana share their experience of the usage of AI in [00:01:00] the MDR space and how they work together as we move towards a world with more cloud and more AI. I hope you enjoy this conversation with Adriana and Warwick, and I'll see you in the next episode.

Welcome to this episode of Cloud Security Podcast. We've got two special guests over here. Welcome to the show. I'm not going to name names, but I'm going to start the introductions over here first. Could you share a bit about yourself and where you are at this point in time?

Warwick Webb: Sure, thanks. And thanks for having me.

My name is Warwick Webb. I am the Vice President of Managed Detection and Response Services at SentinelOne. I've been in the threat detection and response space for about 25 years now. I started at Symantec in Managed Services, went to Salesforce and built and led the SOC and IR team there, then Rapid7 for managed detection and response, and now that's what I'm doing at SentinelOne.

Ashish Rajan: Wow, what about yourself?

Adriana Corona: Hi, I'm Adriana Corona. I am a product director for AI and ML at SentinelOne. I started as a cybersecurity analyst on premises, then moved into cloud security products, and now I'm in the AI/ML space at SentinelOne.

Ashish Rajan: Wow. We have such a breadth of [00:02:00] experience over here. To lay out the foundation for people.

Yeah. We were talking about this before we started recording, the whole MDR, EDR, SOC, managed SOC, just to lay the foundation. Cause a lot of organizations primarily would never go beyond a managed SOC. Not because the organization doesn't want to, they just probably would not even know. So what is an MDR and how is it different from a managed SOC?

Warwick Webb: Yeah. And as we were talking about earlier, before we started, as an industry we're not great with acronyms, right? There's a little bit of confusion out there. But happy to answer that question, because there are many different types of managed services and they address really different use cases, right?

Traditionally, MSSPs, managed security service providers, have a very broad scope. They include management of security infrastructure, they generally support a much wider range of security tools, and that's the role they've played for many years. MDR services really came out of the MSSP space with a much more targeted scope, which is effectively detecting and responding to breaches, right?

So taking that [00:03:00] part of the problem space for customers and addressing it in a very focused way. Generally the MDR service will have its own technology, or a subset of technology, that it uses for that. And the value there is clear, because essentially you're providing 24/7 detection and response at a fraction of the cost of hiring a team to do that yourself.

Ashish Rajan: And how would you describe detection and response?

Warwick Webb: If you really zoom out and look at the security problem space overall, you can categorize almost everything as prevention, detection, and response, right?

So you obviously want to prevent as much as possible. The best incident is the one that never happened, right? But we also have to accept in the industry, and I think we do, that not every attack can be prevented. And that's where detection and response comes in. For anything that makes it past your preventative controls, you need to be able to effectively and efficiently detect that activity, which is the first step, but obviously detecting it in and of itself does nothing for you. You have to be able to effectively investigate it, meaning you can [00:04:00] gain confidence as to whether this really is what it appears to be, right? So it actually is a true positive. And then once you've completed that step, the most important step is to take the response action to limit the impact on the organization, right? And this is where MDR services have committed to doing all of those things for their customers, whereas in the past, the more traditional managed services approach was more around just managing the security infrastructure, reviewing alerts, maybe calling the customer, but the responsibility was on the customer to do all the response, right?

MDR takes a lot of that on.

Ashish Rajan: Yeah. And cause you've been in the cloud space as you transitioned over, how different is the MDR world in the cloud context? Because I guess there's a lot of experience in the on-premise world; people have been doing it for a long time. How is that different in the cloud world?

Adriana Corona: And I would say, to think about the difference, we have to think about the different problems of EDR and cloud security.

Ashish Rajan: Oh, another two acronyms right there.

Adriana Corona: So like when we were thinking about endpoint security versus all cloud asset security beyond the endpoint, it's a very different problem [00:05:00] space.

So it's actually the same workflow. First you have to detect, then you have to investigate, and then you respond. But in the cloud, the added complexity of figuring out who even created or owns this thing is very complicated. Just like with endpoints, there's a separate security team from the developers who create the cloud assets, and the developers who create and deploy the cloud assets are not intentionally adding security holes to their code, right?

Hopefully not, anyway. But when the security team is like, there's a problem to solve, their first step is actually hours of investigating how to find the right person to make the change, because the security team does not have the authority to make the change on behalf of the developers most of the time.

So that's a very different problem. With endpoint, you almost always have that attribution, just by default.

Ashish Rajan: Yeah,

Adriana Corona: So you know which employee you assigned this laptop to and you know who logged into this [00:06:00] laptop, right?

Ashish Rajan: Yeah,

Adriana Corona: You don't have that first problem of, I don't even know who to call right now to fix it. You also have more authority to fix it, right? You can run scripts, reinstall, upgrade things on an endpoint.

It's a little bit different in the cloud space. So I think that also influences how you would provide an MDR service.

Warwick Webb: Yeah, and I would also add that another challenge, but also benefit, of the cloud space is that a lot of these assets are very ephemeral, right? By the time you go to investigate, are they still even there, right?

That's also a benefit in the sense that remediation can be much more effective and efficient, because cloud infrastructure generally is designed to be recreated at will, right? Whereas you might have to rebuild on-premise systems over a long period of time; in the cloud you have that ability. It really always starts with understanding the attack surfaces and what types of attacks will be leveraged in a given environment, and there's a lot of overlap with on-prem as well. There are [00:07:00] still vulnerabilities in the cloud that can be exploited on the edge. There's still account takeover where credentials are stolen and reused. So there is a lot that carries over to the cloud, but certainly a lot that's different too.

Ashish Rajan: Would you say it's a different kind of compute now? In that world, which is no longer just virtual machines, we have Kubernetes, containers, and serverless or something else in there as well, and throw AI in there from an attacker's perspective as well. Would you say the detection and response space, whether it's MDR or in the cloud, how has that evolved with more complexity of compute being added in there?

Are the challenges similar, or is it making them worse? I don't know if it's making them better either.

Adriana Corona: You're definitely better positioned to answer this question, but I think it's like adding even more workload to the same security teams who are already overburdened, right? So now you have to learn a new space.

Maybe your engineering team needs to innovate, so they're going to learn these new technologies.

Ashish Rajan: Yeah.

Adriana Corona: And now the security team has to keep up, right? You have [00:08:00] no option and no company is going to be like, I'm going to build a dedicated security team just exclusively for this one new technology.

Not at the start, anyway; maybe it evolves into that later. So it does actually heavily increase the burden on security teams.

Warwick Webb: Yeah. And it's interesting, because even for traditional on-premise detection and response, that expertise has not kept up with the demand. And then you add in the need for cloud expertise in detection and response.

That's an even more specialized skillset, which again just comes back to, when we talk about MDR, the clear value add: if you want to hire and staff a 24/7 team internally, you need a minimum of 9 to 12 people. And that's if nobody takes time off or gets sick or anything, right? And that's not even staffing for a volume of work.

That's just coverage, right? For most organizations, all but the absolute largest, that's just a non-starter.
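(For a rough sense of where that 9-to-12 figure comes from, here is a back-of-the-envelope sketch. The arithmetic is editorial, not quoted from the episode.)

```python
import math

# Back-of-the-envelope staffing math (an editorial sketch, not figures
# quoted in the episode): covering one seat 24/7 takes 168 hours a week,
# and one analyst gives you roughly 40 of those hours.
hours_per_week = 24 * 7                  # 168
ftes_per_seat = hours_per_week / 40      # ~4.2 analysts to keep one chair filled
min_team = math.ceil(2 * ftes_per_seat)  # two analysts on shift at all times -> 9
print(f"~{min_team} people minimum, trending to 12 once leave, "
      f"sickness, and training are factored in")
```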

Ashish Rajan: Yeah.

Warwick Webb: And it so happens that it's all about making that expertise available. And when we talk about AI, I think this is interesting too, because it's another form of making [00:09:00] expertise available to organizations that need it.

MDR is another way of doing that. We can hire the cloud security experts, the detection response experts, and make them available to a much larger group of organizations.

Ashish Rajan: Would you say the kinds of threats are different as well? Obviously, detection and response is primarily around that category of, hey, my threats are evolving.

I have new threats coming in. We've just covered the customer side, which is: I have compute challenges, team challenges. What's the threat aspect? Has there been an evolution of threats? You're obviously managing a team which primarily just does this.

Warwick Webb: Have the threats evolved as well?

And yes, although it's also always astonishing to me the threats that have not changed, right? When you've been in this industry long enough, in some ways things are very different, and in some ways they're not. So whether it's cloud or on-prem, as I mentioned earlier, there are just so many of the same challenges, whether it's malware, living off the land, like an attacker stealing credentials and using legitimate applications [00:10:00] to move through an environment. Again, even in 2024, vulnerable internet-facing services being exploited and used to gain access to an environment is still a thing, like it was a thing 25 years ago. I think we've gotten better at managing those risks. But certainly when you start talking about entirely new modes of compute, like serverless compute, and all of the everything-as-a-service that you see in the cloud.

That really does change everything as far as what a threat looks like. If you're thinking about, for example, AWS Lambda functions, right? There's no malware to be executed. There's no operating system. It's just code, right? What are the types of attacks that could target Lambda functions in AWS? And then you build your detection and response around those threats.

And candidly, I think as an industry we're still figuring out exactly how those sorts of attacks are different. You also go back to, what is the action on objective? What does the attacker want to do? Most attackers are busy right now detonating [00:11:00] ransomware because it pays them a lot of money.

So could they do something creative with a Lambda function? Sure. Could they just exploit an internet-facing vulnerability, deploy and detonate ransomware, and make a lot of money? Probably a better use of their time.

Ashish Rajan: In terms of challenges, a lot of CISOs and practitioners probably would hear the words detection and response and go, I'm not even sure if I'm ready for this yet at this point in time. At what point do you find that it's the right call for people to think beyond just a SOC team?

Because a lot of people just stop at the point of, I have a level one, level two, great, I have a SOC team, that's done. Are there challenges running this kind of MDR space? What are the challenges for people who haven't faced them yet?

Adriana Corona: Yeah. And also a follow-up question: I feel like we also see both, right? A company choosing to have their own SOC team and managed services working together, both to secure their enterprise, for 24/7 coverage.

Warwick Webb: Yeah, in some cases. First of all, and I'm not saying this just because I run [00:12:00] an MDR service, I actually think every business in the world needs 24/7 detection and response. And the reason for that is simple: security controls will eventually fail. Not only might attackers target you at the worst possible time, they will deliberately target you at the worst possible time. July 4th is an infamous time for us in the industry. That's when the attackers are going to attack, because everybody's off on vacation, right?

So they know exactly when they're going to most likely be successful. So you have to have 24/7 coverage, whether you're a corner bakery or a Fortune 10 organization. And I personally think MDR is a great solution to that problem for most organizations.

So as for, are we ready for a detection and response program? I think every organization needs it. Now again, that doesn't mean that an MDR service is going to do everything from a security operations perspective. An organization has other security operations functions. They could work with an MSSP, they could have an internal SOC that does it, but when it comes to monitoring and responding to breaches, I think an MDR service is the right fit most of the time.

Ashish Rajan: [00:13:00] And now that we've opened up the Pandora's box, where does AI fit into all of this? Because I feel like there's a lot of alert fatigue coming in from this as well.

And the challenge of, I need a minimum of nine people for 24/7, and I'm just talking shift work; people need to be available 24/7, I potentially have a 3am phone call that I have to jump on, and I have no idea what Log4j happened. What was Log4j again? So where does AI fit into a lot of this conversation?

Is that even an answer?

Adriana Corona: It's definitely an answer. And I think we already talked about overburdened security teams. That's a fact, right? That's not news. You talk about alert fatigue. We even talked about how new technologies mean even more expertise required of security professionals. But with AI, there are actually two separate opportunity avenues.

One of them is expertise. So helping augment an expertise gap on a team. That is one of the areas. The other one is just efficiency.

Ashish Rajan: Yeah.

Adriana Corona: So making even the most expert of experts [00:14:00] even faster and better at their jobs and reducing the burden, let's say three hours per day per analyst triaging alerts, right?

How can you reduce that time from three hours to three minutes? That would be an objective you could start fulfilling with AI, especially with recent advances in AI technologies.

Warwick Webb: The way I think about it is, and I think you touched on this a little bit, but the reason MDR services have existed for so many years is there is a gap between what technology can do today and the outcome that customers need, right?

In this case, if we look at detection and response, there is a gap today between what technology can do from a detection and response perspective and that outcome. It's not a self-driving car at this point, right? But that gap is continuing to close, in large part because of these advances in AI, where it is doing more and more of the things that were previously out of reach for technology, right?

It's able to do things that generally only humans could do. So both MDR and these AI solutions close that gap. As an AI company and a company that offers an [00:15:00] MDR service, we see basically a virtuous cycle between these two services, and I'd love for you to elaborate on that, because I think it's interesting, as our sort of AI expert here, right?

How does MDR help AI?

Adriana Corona: Yeah. And I think if you talk about the way that we built our AI assistant, Purple AI, it's a RAG architecture. That's again another acronym, Retrieval-Augmented Generation. All that means is the large language model is a part of the architecture, but actually the most important piece of it is our knowledge base.

Our knowledge base is the source of truth; it has to be really high-quality facts. So our data schemas, threat intelligence feeds, very high-quality examples of queries you would run to do threat hunts, the types of threat hunts you would do, right? That knowledge base has to be created by the experts, right?

That knowledge base has to have the input of the in-house experts, which in this case is our MDR [00:16:00] team, because without it, you might as well go ask ChatGPT. That is actually the difference. Security knowledge is actually the difference.
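(To make the RAG pattern described here concrete, below is a minimal sketch in Python. Everything in it is illustrative: the knowledge-base entries, the keyword-overlap retrieval as a stand-in for real embedding search, and the call_llm placeholder are assumptions for the sketch, not how Purple AI is actually built.)

```python
from dataclasses import dataclass

@dataclass
class KnowledgeDoc:
    title: str  # e.g. a data schema, threat intel note, or example hunt query
    text: str

# Expert-curated knowledge base: the "source of truth" the model is grounded in.
KNOWLEDGE_BASE = [
    KnowledgeDoc("schema:process_events",
                 "fields: process_name cmdline parent_name user host"),
    KnowledgeDoc("hunt:credential_theft",
                 "credential theft hunt: lsass.exe access from a non-system parent process"),
]

def retrieve(question: str, k: int = 2) -> list[KnowledgeDoc]:
    """Rank docs by naive keyword overlap with the question
    (a stand-in for a real embedding/vector search)."""
    words = set(question.lower().split())
    return sorted(KNOWLEDGE_BASE,
                  key=lambda d: len(words & set(d.text.lower().split())),
                  reverse=True)[:k]

def call_llm(prompt: str) -> str:
    # Placeholder: a real system would call the large language model here.
    return f"[model answer grounded in a prompt of {len(prompt)} characters]"

def answer(question: str) -> str:
    context = "\n".join(f"- {d.title}: {d.text}" for d in retrieve(question))
    # The model only sees curated facts, which is what separates this
    # from asking a general-purpose chatbot the same question.
    return call_llm(f"Use only these facts:\n{context}\n\nQuestion: {question}")

print(answer("how do I hunt for credential theft in process events?"))
```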

Warwick Webb: And so we've got this great flywheel going, where our Purple AI is able to basically observe what our human experts are doing 24/7 for our customers and learn, right?

And then in return, what it does for us on the MDR side is we can start using Purple AI. It can start taking on some of these tasks that it's learned how to do from us.

Ashish Rajan: Yeah.

Warwick Webb: You would ask, how does that benefit the customer, just because your analysts don't have to do the work? Our analyst time is our customers' time, right?

They're basically buying access to our experts, and the more efficient we can be, the more we can do for them. So we're really excited about the fact that Purple AI and our analysts are going to be working side by side. Purple AI is going to continue to grow in capabilities.

There's always the interesting conversation of, will it ever just do all the things? I don't think any of us are brave enough to make any predictions, right? I certainly don't rule anything out with [00:17:00] some of the amazing advances recently, across all industries. But for now, humans are still in the loop, whether those humans are people that you hire from us or your own team.

Ashish Rajan: So without the AI context in there, what is the current state of the security analyst in the MDR space?

Warwick Webb: We've gotten more efficient as an industry in security operations, relying on heuristics and automation, but you start running into the limits of what you can do with that technology.

Adriana Corona: And the other thing we found, talking to a lot of our customers' in-house experts and SOC teams about threat hunting and investigation: just answering what sounds like a simple question, there's a new threat actor, am I being targeted? To answer that, you've either hired an engineering team to create the automated tooling for it, or you've siphoned off your own security team to create that tooling and those automations, right?

Both of those require maintenance, because to piece it all together, even the most expert of experts needs to know which data properties you have to look for in your data logs, [00:18:00] and what the actual indicators of compromise are, the up-to-date ones for this specific threat actor. You need to know the query language in detail and how to create a very complicated query that doesn't have any typos, that is well formed, that gives you the right results.

That's something that an expert has to do as well, right? For a novel threat, you can't get away from that step of, now I have to go do these things, and that's time that you can give back to your experts as well, if you offload that work to an AI system that can learn this based on experts training it.

So that's where we really see you can reduce hunting time from hours to seconds.
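(As an illustration of the manual step being described, here is a toy sketch of turning a threat actor's indicators of compromise into one well-formed hunt query. The field names and query syntax are hypothetical, not any particular vendor's query language.)

```python
# Toy IOC-to-hunt-query builder. Field names ("dst_ip", "file_sha256")
# and the `field = "value"` syntax are invented for illustration; the
# IPs are from the documentation range and the hash is the well-known
# SHA-256 of the empty string.
iocs = {
    "dst_ip": ["203.0.113.7", "198.51.100.23"],
    "file_sha256": [
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
    ],
}

def build_hunt_query(iocs: dict[str, list[str]]) -> str:
    """OR together every (field, value) pair into one well-formed query,
    so no clause is mistyped or forgotten."""
    clauses = [f'{field} = "{value}"'
               for field, values in iocs.items()
               for value in values]
    return " OR ".join(clauses)

print(build_hunt_query(iocs))
# dst_ip = "203.0.113.7" OR dst_ip = "198.51.100.23" OR file_sha256 = "e3b0..."
```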

Warwick Webb: The last point I would make there is, if you're listening to this and thinking, how does this apply to my security program?

I would just say that if you have an internal SOC team that's 24/7, and some organizations do, then using a platform that has these AI-enabled functions that were trained by a team of human experts [00:19:00] is going to make your SOC team's life a lot easier. It's going to accelerate their velocity.

You still need that SOC team 24/7. If you would rather partner with our MDR service, we're also more efficient with the same tooling, and again, that efficiency just leads to faster response times and focusing on higher-order problems.

Adriana Corona: And I also think there's another area that is always ignored, I think, in the conversation, because we focus on the hard part.

So it's really hard to do threat hunting investigations. But there are some other parts that are just a burden on security teams. Don't assume that security teams spend most of their time doing security. There's a lot of time spent on the administrative overhead of communicating.

So reporting on the result of your investigation for others, either because they have to fix it or because someone asked you, like your boss asked you, the CIO asked you, right?

And making that digestible, because they are not security experts. So you do have to spend many extra hours after you've already put out the fire, figured out the response, completed the process. Now you have to go through the reporting step again, [00:20:00] right? You're not going to send them your queries and the logs.

You have to create a digestible summary. What we found is that these new tools are actually really well suited for that. With our tool, it's almost a self-documenting investigation that's already understandable by both the security expert and the person they need to share that report with, in a really easy, natural way.

So that's been something that I think gets overlooked, but it actually is a lot of the work day of a security person.

Warwick Webb: Oh yeah, I mean it's certainly something that my team is painfully aware of. Part of response is documenting exactly what happened, what the impact was, what was done about it, what the corrective actions are. That all takes time.

And that's taking time away from getting back to looking for new threats.

Ashish Rajan: Yeah. And to your point, as a CISO probably watching or listening to this, a lot of the time I'm focusing on my mean time to respond and mean time to detect. With AI, we're saying that it has the capability at the moment that we should be able to reduce the response time.

[00:21:00] Does it affect detection as well? Or is it just response time?

Warwick Webb: That's interesting. I would argue yes. And I would argue that at SentinelOne, we've been doing this from a detection perspective for a long time. Our entire endpoint model around detection is based on AI models, right?

And unlike signature-based detections, if you measure mean time to detect in terms of your coverage, in other words, will there be a threat that we don't pick up on, these models are much more likely to pick up on novel threats than the old approach to doing things.

Ashish Rajan: And what about response time then? Cause I guess they're obviously tracking that. People who run their own detection engineering team at the moment are probably facing the same challenges that you guys mentioned, whether they're using AI or not. Basically, what we're calling out is that AI does have the capability at the moment, more than ChatGPT at least, to produce a response that you can use as a way to accelerate, with a human in between.

And being able to use that to reduce your overall response time, which obviously is good for ROI, all of that. Is that the [00:22:00] current state of where AI is with SOC at the moment and with MDRs? In terms of how good it is, because obviously a lot of people are skeptical, because of the ChatGPT example you mentioned as well.

There's hallucination, there is this, there is that. How is that being managed? Am I sending my security analyst into a rabbit hole that is a false positive?

Adriana Corona: I think that's a really great question at this moment, because I do feel like, as far as where this is going to evolve: right now you see a lot of these chat interfaces or copilots, and they're off to the side.

Where this is evolving is becoming an integral part of every step of an analyst's day, right? For triage, for deep investigation. And I think the next opportunity will be, how do we use these same emerging AI technologies to actually help with almost automatically identifying false positives, helping to triage more quickly, reducing the time it takes someone to decide, am I even going to investigate this one of 5,000 alerts from [00:23:00] today? So I do think that's where we're headed.

Warwick Webb: And I would also say that I think you should be skeptical if anybody says a human is not required in the loop today for full end-to-end detection and response. Maybe for some limited use cases, but broadly speaking, there's no 24/7 SOC that's just, you know, operating on its own.

Again, whether we're going to get there is a different question, but I do think it's also important to point out that the error rate does not have to be zero for it to be a worthwhile investment, right? Detection and response, a lot of it is about managing risk, in the sense that every time you tune something out or decide not to review an alert because it's low fidelity, you're taking a risk.

But in return, you're getting time that's spent on, hopefully, looking at higher-fidelity data. And I'll go back to the self-driving car example. A self-driving car doesn't have to never have an accident; it has to be safer than a human driver, right? And I think until we get to the point where we feel like our AI systems are safer than a human analyst, we wouldn't pull the human out of the loop.

I'm just making the point that it doesn't have to be perfect in order to dramatically reduce risk and improve [00:24:00] time to respond.

Ashish Rajan: But even in the current state that it is in today, it is still able to help reduce the time required. Because you laid it out really well: getting into an organization where there's a cloud environment with different kinds of compute, there's a lot of context to be brought in before you even find the owner. You need to get to the point where, oh, okay, now that I've found the root cause, I can now look for the owner. Getting to that point itself can sometimes take hours if you've never been in that space before, right?

So is that also being impacted at the moment by AI, which can actually help get that context as well?

Adriana Corona: I think it definitely has the potential to impact that. I don't know if anyone's doing that very well right now, but I do think we can expand the same types of techniques that we're using for interrogating your logs for threat hunting purposes, beyond that, to interrogating your logs to answer, show me the origin of this asset, its deployment origin, based on the connected log sources.

Ashish Rajan: Yeah.

Adriana Corona: I think we do have a lot of opportunity to also improve that process, which is very manual right now: looking at many different sources and almost piecing all the evidence together to reach a conclusion. That's what we could actually be really good at, if we have the right high-quality expert knowledge in the knowledge base for a tool like ours, like Purple AI.

Ashish Rajan: I would say you guys have laid out a really good point, because a lot of CISOs probably would hear MDR, AI, and go, how do I separate the signal from the noise? Like the thing you mentioned earlier about the knowledge base: not just going off the fact that, hey, I have a ChatGPT response for what my top threats for cloud are, versus relying on the fact that someone has been collecting a lot of threat intel in the space for a while.

They obviously have the right kind of information to train the model behind the scenes for this to be the case. I think it's worthwhile calling that part out as well: how do they separate the signal from the noise? Because they'll hear a lot of, oh, we do AI.

Warwick Webb: Oh, exactly. Yeah. [00:26:00] And I think that is absolutely the answer around signal to noise as it pertains to AI for MDR. I would say one of the things I love about MDR is the clarity of responsibility, especially for a company like us where we bring the technology and the expertise, right?

It's our solution from top to bottom; whether it's AI, human experts, endpoint solution, data lake, it's all ours, right? So our commitment to our MDR customers is that we will detect and respond effectively to breaches in their environment. How we do that, as we evolve our AI, is going to make us more efficient and more effective, but at the end of the day, that's how we're held accountable.

So if you want to know signal to noise, it's basically: a good MDR service will not miss breaches, and will also not miss pen tests, by the way. So there are two ways for us to establish trust with a customer. One is during a breach, and the other is that they test us, and we'd rather you didn't go through a breach.

We encourage our MDR customers: run pen tests, don't announce it to us, see if we catch you, right? But at the end of the day, that's the cash value of an MDR service. Do you catch breaches in our environment? Do you [00:27:00] respond quickly to limit scope and impact? How you do that, whether it's AI or human experts, is an implementation detail.

Ashish Rajan: Yeah.

Warwick Webb: So that's how I would say you separate the signal from the noise.

Ashish Rajan: And for people who are trying to build their own detection engineering teams and feel confident to do that, I think we spoke about the different kinds of compute. How would you describe the kind of skill set they should expect in their team, for those deciding, you know what, great, but I still believe my team is amazing?

So say that's a skill set they have in their organization, and perhaps they're hybrid cloud, since most enterprises these days are multi-cloud as well. What kind of skill set are we expecting from a detection and response team these days, and does AI help with that gap as well? Curious to know.

Warwick Webb: So maybe I can cover what some of the skill sets are, and then we can talk about the AI piece, because there are, you know, ways for AI to help with all of these, right? So you mentioned detection engineering. That is a specialty within D&R. There are people that have careers in this, right?

They get really good at how to build detections that effectively detect malicious activity without a high false-positive rate. [00:28:00] Again, very few organizations do this themselves; I've certainly worked with some that have their own detection engineers. Generally, unless they have some very unique requirements from a detection perspective, this is just one of those areas that's solved at scale.

There are other areas of security that don't match this definition, but how you detect a certain threat is fairly consistent across all organizations, right? You have to ask the question with detection engineering: does it really make sense for me to come up with detections for new malware variants, or for new attacker techniques, when there are other companies that do this all the time at scale?

But that's, again, just my perspective on detection engineering. And by the way, there may also be companies that have very unique detection requirements, and they should do it themselves. As far as the skillset there, they should all have worked in a SOC, done detection and response, and understand all the different types of security data.

Ideally they have done threat hunting, because threat hunting and detection engineering are close cousins, right? Essentially, you're trying to find evil in an environment. But that's just one of many different detection and response functions.

Yeah, skill sets. I can list the others, but I don't know how much time we have. I'm curious, because I know from a detection engineering perspective it's similar to writing code in some ways, so I would imagine there are AI tools that could be helpful there.

Adriana Corona: Yeah.

Warwick Webb: But I think, yeah, I think we're still in the early stages on that one.

Adriana Corona: And I think even the less sophisticated version of that is creating your own custom rules, right? So many security teams will create their own custom detections, maybe not as sophisticated as what a detection engineering team would make, just a new detection.

Ashish Rajan: Yeah.

Adriana Corona: For that, I do see an opportunity to have AI tools help in that process: give the first draft, create it automatically, and then you review it. Because it's another one of those where you have to piece together little chunks of code and properties and data schemas to figure out what the detection will do.

Ashish Rajan: Yeah.

Adriana Corona: It will be a lot more natural to just ask for something, write me a detection that finds this, in natural language. That's what these tools are actually really good at, again, if you have the right examples and knowledge as reference in your knowledge sources.
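(A hedged sketch of that draft-then-review workflow: a stand-in for a knowledge-grounded model drafts a structured rule from a natural-language ask, a schema check catches mistyped field names, and a human still approves before anything ships. All names here, KNOWN_FIELDS and the rule format, are invented for the example.)

```python
# All field names and the rule format are invented for this sketch.
KNOWN_FIELDS = {"process_name", "cmdline", "parent_name", "user"}

def draft_rule_from_prompt(prompt: str) -> dict:
    # Placeholder for the knowledge-grounded model call; it would return
    # a structured draft detection rather than free text.
    return {
        "name": "Suspicious PowerShell download cradle",
        "where": 'process_name = "powershell.exe" AND cmdline CONTAINS "DownloadString"',
        "fields_used": ["process_name", "cmdline"],
    }

def schema_problems(rule: dict) -> list[str]:
    """Catch the typo and schema errors a reviewer would otherwise hunt for."""
    return [f for f in rule["fields_used"] if f not in KNOWN_FIELDS]

draft = draft_rule_from_prompt(
    "write me a detection that finds PowerShell downloading and running code"
)
problems = schema_problems(draft)
if problems:
    print("Draft references unknown fields:", problems)
else:
    print("Draft ready for analyst review:", draft["name"])
```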

Ashish Rajan: Yeah. That was most of the technical questions I had.

You guys are going to have me thinking I should sign up for AI, at least something better than ChatGPT, for example. But thank you so much for coming on the show. I really appreciate this conversation as well. Where can people connect with you to talk more about AI, MDR, and drinking your own champagne while doing MDR stuff?

Warwick Webb: Oh, I don't know, I guess I'm too old to have most of the socials. So you can just find me on LinkedIn.

Ashish Rajan: Oh, I thought you were going to say Twitter. LinkedIn is still very modern.

Adriana Corona: I also don't have social media. I didn't know that about you, Warwick. So LinkedIn. I have a LinkedIn so people know I'm a real person and not an AI profile.

Ashish Rajan: Which is why you guys have profile pictures. Oh, okay, cool. So at least there's another layer before this, where people have LinkedIn profiles but no profile picture.

And it's like, how do I know this is a real person? But thank you so much for coming on the show, and I appreciate you both coming in and sharing your perspective as well. Thanks for watching and listening in. [00:31:00] I'll see you next time.
