Can Threat Detection be enhanced with AI? Ashish sat down with Dave Johnson, Senior Threat Intelligence Advisor at Feedly, at BSides SF 2024, where Dave also presented a talk. Dave shares his journey in cyber threat intelligence, including his 15-year career with the FBI and his transition to the private sector. The conversation focuses on the innovative use of large language models (LLMs) to create Sigma rules for threat detection and the challenges faced along the way. Dave spoke about his four approaches to creating Sigma rules with AI, ultimately highlighting the benefits of prompt chaining and Retrieval Augmented Generation (RAG) systems.
Thank you to our episode sponsor Panoptica. Panoptica, Cisco’s Cloud Application Security solution, provides end-to-end lifecycle protection for cloud native application environments and you can find out more about them here.
Questions asked:
00:00 Introduction
01:44 A word for our episode sponsor, Panoptica
02:39 A bit about Dave Johnson
03:33 What are Sigma Rules?
04:36 Where to get started with Sigma Rules?
05:27 Skills required to work with Sigma Rules
06:32 The four approaches Dave took to Sigma Rules
11:29 Are Sigma Rules complementary to existing log systems?
12:18 Challenges Dave had during his research
14:09 Validating Sigma Rules
16:01 Working on Sigma Rule Projects
18:54 The Fun Section
Resources
Dave's Webpage: https://daveinthemiddle.com/
SigmaHQ GitHub: https://github.com/SigmaHQ/sigma
Dave Johnson: [00:00:00] So I'm trying to use large language models to create Sigma rules and I created the worst detection rules possible just by a simple query. And so I thought, okay, there's gotta be a better process to do this. There's a concept called overfitting. So it was definitely overfitting.
I'm not like a PhD in data science, but I could tell, and yeah, I thought, this is pretty trash. I think the shift is going to be from these massive AI models that are very hard for people to create to smaller models that are more efficient than the big ones, just because the data is better quality.
So it's going to be about that data management, I think.
Ashish Rajan: Sigma rules. Yes, you may not have heard of them, but if you are in threat intelligence, or you consider threat intelligence an important thing, whether it's cloud, on premise or otherwise, I had this great conversation with Dave Johnson from Feedly, who ran some experiments, or as he would like to call it, inspiration, on using LLMs to help you create Sigma rules that you can apply to any of your logs. Now, logs in this context are your security logs, which you can use to cover a lot of [00:01:00] breadth in what you're looking for.
For example, I'm in BSides SF right now. And if you wanted to find out, hey, when did Ashish do an SSH from San Francisco? That query may be an out-of-the-box query in whatever SIEM provider you may be using. But if it's not, or if you want to enhance it, you can use something called a Sigma rule, which is an open source framework you can use to identify and create queries that could be helpful for that journey.
As always, if you enjoy a conversation like this, and you know someone who's working in the threat detection space in cloud, definitely share this with them. They'll definitely enjoy learning about Sigma rules. And if you're here for the second or third time, definitely appreciate you subscribing.
If you're watching this on YouTube, or if you're on Apple iTunes or Spotify, do give us a subscribe and follow. It definitely helps a lot in getting the word out for the podcast. Thank you so much for your time, and I hope you enjoy this episode. Talk to you soon.
We interrupt this episode for a message from our episode sponsor, Panoptica. Panoptica, Cisco's cloud application security solution, provides end-to-end lifecycle protection for cloud native application environments. It empowers organizations to safeguard their APIs, serverless functions, [00:02:00] containers, and Kubernetes environments.
Panoptica ensures comprehensive cloud security compliance and monitoring at scale, offering deep visibility, contextual risk assessment, and actionable remediation insights for all your cloud assets. Powered by graph-based technology, Panoptica's Attack Path Engine prioritizes and offers dynamic remediation for vulnerable attack vectors, helping security teams quickly identify and remediate potential risks across cloud infrastructure.
Panoptica utilizes advanced attack path analysis, root cause analysis and dynamic remediation techniques to reveal potential risks from an attacker's viewpoint. Visit panoptica.app to learn more. Now back to the episode.
Welcome to another episode of Cloud Security Podcast.
Today I've got Dave Johnson. Welcome to the show, man. Maybe to start off with, could you share a bit about yourself?
Dave Johnson: My name is Dave Johnson. Actually I've been in cyber threat intelligence, which is like a very specialized arm of security. Yeah. For about 15 years. Half my career was with the FBI.
I worked out of Wisconsin on the Cybercrime Task Force. Oh, wow. I worked nation-state [00:03:00] APTs and cybercriminals, mostly based in Russia. Okay, that's cool. So it was a crazy time. I finally had enough of it and I went private sector and worked in the financial services industry. You have a talk at BSides as well? I do, yeah. So my talk is on basically creating detection rules for security, and specifically Sigma rules. Okay. So using large language models, using AI, that type of thing.
Ashish Rajan: I may have to Google Sigma rules, but what are Sigma rules, for people in security?
Dave Johnson: Sigma rules are beautiful because they're not vendor specific.
They're not locked into a particular vendor. It's basically just a way to describe attack behavior in logs. Okay. You basically break down an attack. You think of exactly what you want to look for, what that behavior looks like on a system. It could be in the cloud, for your viewers out there thinking about cloud security.
It could be authentication attempts. It could be brute force attacks. So you get like a very specific thing you're looking for. And if you're able to map that into logs that you would look for, then you can create a Sigma rule for it.
Ashish Rajan: Oh, would that [00:04:00] be like all these logs that I collect from all these separate resources, whether in cloud, on premise, whatever, and for me to make sense of them? Imagine I have a scenario where Ashish logged in from San Francisco. I can make a Sigma rule for that and apply it to this massive amount of data that I have. Is that right?
Dave Johnson: Yeah, potentially, because it all depends. You want to focus on the abnormal behavior, so it requires a lot of testing, sometimes a lot of domain expertise to know what's irregular, and it can be environment specific, too.
So Sigma rules can be tricky. It's intended to be something simple, human readable. Yeah, but it does take a little bit of expertise to write these things.
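For readers who have not seen one before, here is a minimal sketch of what a Sigma-style rule for the "SSH login from San Francisco" example discussed above might look like, built with Python and PyYAML purely for illustration. The log source and the GeoIP-enriched field names are assumptions, and any real rule would need testing against your own environment before use.

```python
# Minimal sketch of a Sigma-style detection rule for the "SSH login from
# San Francisco" example from the conversation. The selection field names are
# hypothetical; a real rule needs tuning and testing in a dev environment.
import uuid
import yaml  # pip install pyyaml

rule = {
    "title": "SSH Login From Unexpected Location",
    "id": str(uuid.uuid4()),
    "status": "experimental",  # unverified rules are typically marked experimental
    "description": "Detects successful SSH logins from a city the user does not normally log in from.",
    "logsource": {"product": "linux", "service": "sshd"},
    "detection": {
        "selection": {
            "eventName": "Accepted password",   # hypothetical normalized field
            "src_geo_city": "San Francisco",    # assumes GeoIP enrichment of the source IP
        },
        "condition": "selection",
    },
    "falsepositives": ["Legitimate travel or VPN egress points"],
    "level": "medium",
}

print(yaml.safe_dump(rule, sort_keys=False))
```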
Ashish Rajan: Okay, some people listening to this or watching this may have some idea, but to even set a foundational context, what are some of the functional components required for someone walking down that Sigma rule path, like, I want to create Sigma rules?
Sounds like a great idea. Yeah. Because I don't want to be vendor locked.
Dave Johnson: No, so I'm trying to use large language models to create Sigma rules, and I used really bad inputs. I created the worst detection rules [00:05:00] possible just by a simple query. And so I thought, okay, there's got to be a better process to do this.
So the inputs are very specific. You have to know what you're looking for at first. It depends on the AI model that you're looking at, and whether it knows security data really well, which we don't really have yet. They're all kind of general purpose. So they don't do a good job if you just say, look for bad stuff.
You're not going to get good quality detection rules from bad descriptions of attack behavior. So you need to have really good input.
Ashish Rajan: Every time I talk about something to do with data, or having a large amount of data being processed, it's normally looked at as, hey, that's not a security job, that's a data scientist's job.
Yeah. And I don't know if you're a data scientist, but I feel like you kind of want to be one. Wanna be one, yeah, fair. I would actually want to be one as well, but I definitely find that, do we have to be data scientists to understand and do Sigma rules, or use LLMs for it?
Dave Johnson: What we're going to see and what we need in the community is better curation of data sets for security.
And I think it's going to be more of a requirement to understand what goes into good [00:06:00] data. And part of that is having some kind of foundation in data science. I think you can't get away from that. Okay, but my talk is mostly about, you don't want to go extreme, hardcore, get a PhD in data science, right?
I want to make this accessible to people. So the whole idea was for me to figure out how can I create some prompts to something like ChatGPT in order to create good quality Sigma rules.
Ashish Rajan: And would that also mean that it would just be still me typing in prompts to create Sigma rules and all that?
Dave Johnson: I tried to keep that completely out of the equation.
Okay, I did write a tool as part of this talk and my experiments to figure out the right approach in order to create a good Sigma rule.
Ashish Rajan: Okay, and in the experiments you did, you used three approaches?
Dave Johnson: I did. I actually used four. Oh, okay. Surprise fourth one. The fourth one I didn't put in the talk.
The first one I tried that I don't mention is just asking it to create a Sigma rule. And I just tried to loosely describe what was happening. Yeah. And it just did a terrible job most of [00:07:00] the time. I'm like, oh, there's got to be a better way to do this.
Ashish Rajan: As in, when you say it did a terrible job, you weren't able to simply copy paste the Sigma rule from whatever LLM you were using into a search? Yeah, you weren't able to do that.
Dave Johnson: You really shouldn't do that anyway. Even with AI stuff, it can create stuff that is very close, but you still need to test it. So I always suggest putting things in dev, or some type of testing environment, to make sure that the Sigma rule works.
There's a special field in these Sigma rules. If you look at it, you can see, like, the status of the Sigma rule; you can put experimental down until it gets verified by the community. Oh, is that open source, like the Sigma rules? So when you say... Yeah, everything. It is an open framework. It's an open way to just basically describe this attack behavior.
So if you go to the SigmaHQ GitHub repo, yeah, you get basically all the community Sigma rules.
Ashish Rajan: Oh wait, so for people who probably don't want to be vendor locked and are looking at Sigma rules, can they use the existing set of open source Sigma rules too?
Dave Johnson: Actually, [00:08:00] I used that as part of my experiment.
I basically mined all the Sigma rules to help with one of my other approaches to creating Sigma rules. So I created a dataset out of all the stuff that the community shared, in order to do this thing called Retrieval Augmented Generation. And so, RAG. That's, yeah, RAG.
Yeah, that's the conventional approach. So the idea behind that is you basically create this database. And what you're able to do is provide an input, say you're looking for SSH in your example. It will look at keywords, or the meanings of the words you put in your question, in order to find Sigma rules in the repo that are most relevant and can apply to the problem at hand.
Interesting. And this was your second approach? That was my second approach.
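As an illustration of the retrieval half of that RAG setup, the sketch below indexes the description text of a handful of Sigma rules and returns the ones most similar to a natural-language question. It uses TF-IDF from scikit-learn as a cheap stand-in for a proper embedding database, and the example descriptions are made up; it is not Dave's implementation, just a minimal way to see the idea.

```python
# Minimal retrieval sketch: find the community Sigma rules most relevant to a
# question, so they can be fed to an LLM prompt as examples (the "RAG" idea).
# TF-IDF stands in for an embedding store here; descriptions are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# In practice these descriptions would be mined from the SigmaHQ repo.
rule_descriptions = [
    "Detects brute force attempts against SSH by counting failed logins",
    "Detects creation of a new AWS IAM access key for the root account",
    "Detects suspicious PowerShell download cradles on Windows hosts",
]

question = "Alert when someone logs in over SSH from an unusual location"

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(rule_descriptions + [question])

# Last row is the question; compare it against every rule description.
scores = cosine_similarity(matrix[-1], matrix[:-1]).flatten()
for score, desc in sorted(zip(scores, rule_descriptions), reverse=True):
    print(f"{score:.2f}  {desc}")
```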
Ashish Rajan: I think we probably should talk about the skill set level as well. From asking ChatGPT or something similar, tell me, build me a Sigma rule, which I feel like most people would know how to do, what's the level of skill set for RAG?
Dave Johnson: You have to have a lot of familiarity with the basics. So what I try to do in my talk is, I created this thing on [00:09:00] GitHub where people can use this and they can try and use the benefits of my research in order to actually create these things without doing all the stuff. All the stuff is available in case you want the finer details.
So yeah, it's more advanced because you have to think about data storage and the retrieval method. When you just ask ChatGPT, for example, it's very simple. Yeah. So a good middle ground is actually the third approach, prompt chaining: the idea behind that is you take one prompt, get an output, and you use that as an input to another prompt.
So you're basically feeding the outputs into the next input. The reason that's beneficial is because you don't need to create all this stuff on the back end. You don't need to create another database with RAG. Yeah. It's pretty straightforward. You break up the problem into small steps and you feed each one sequentially into the next step.
So most people can probably try this in ChatGPT. They can try taking the outputs of their prompts and stick that into another input. It's definitely doable. I think it's easier to understand.
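A minimal sketch of that prompt-chaining idea follows, using the OpenAI Python client as one possible backend; any chat-capable LLM would do. The two prompts and the model name are illustrative assumptions, not the prompts Dave used in his tool.

```python
# Prompt chaining sketch: step 1 turns a loose request into a structured
# description of the attack behavior, step 2 turns that description into a
# draft Sigma rule. Prompts and model choice are illustrative only.
from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Step 1: force a precise description of the behavior before any rule is written.
behavior = ask(
    "Describe, as specific log-level behavior, what a successful SSH login "
    "from an unexpected city looks like. List the log source and the fields involved."
)

# Step 2: feed step 1's output in as the input to the rule-writing prompt.
draft_rule = ask(
    "Write a Sigma rule (YAML) that detects the following behavior. "
    "Mark its status as experimental.\n\n" + behavior
)

print(draft_rule)  # still needs human review and testing in a dev environment
```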
Ashish Rajan: I think the prompt chaining is interesting also because, when [00:10:00] I think about making something super simple, if I use the example that I gave earlier, Ashish logged in from San Francisco, would you say the smallest problem from that is, what does logging in from San Francisco look like?
Before I even put the Ashish context in? Or, sometimes it's hard to gather what a simple breakdown of a query would be.
Dave Johnson: So that's part of the experiment, I think. Yeah. Okay. So when I talk about what goes into a Sigma rule using an LLM, what makes a good one?
It's all about quality control of the input. If you don't have a specialized LLM that knows everything about security data, maybe it's a generalist like the things that we have today, then you have to be very specific. So I created this evaluation system that looks at the input. It looks at the attack technique.
If you don't mention SSH and you're just saying, look for bad stuff coming from San Francisco, it's going to say, this is a bad prompt, basically, right? And it won't continue in the sequence.
Ashish Rajan: People who are experimenting with this, they hear this and go, Oh yeah, anyone who is a bit technical will get the fact that, Oh, I know I [00:11:00] need to look for Ashish doing an SSH from San Francisco.
I need to know what IP it's coming from. Like, they can go down the pathway very quickly.
Dave Johnson: You need a little expertise to do that, right? Yeah. The tool that I made actually does the work for you. So I use an LLM: you give it a web address to a threat research article, say it talks about the newest attack.
Yeah. It will actually break the article down and extract all the procedures for you. So I'm trying to make this incredibly accessible to people, so that they have something to plug into their log system.
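A rough sketch of that article-to-procedures step might look like the following. The fetching, HTML stripping, prompt, and model name are all assumptions about how such a tool could work, not a description of the actual SIGEN code.

```python
# Sketch: fetch a threat research article and ask an LLM to extract the attacker
# procedures it describes, as candidate input for Sigma rule generation.
# The URL handling and prompt here are assumptions, not the actual SIGEN tool.
import re
import requests            # pip install requests
from openai import OpenAI  # pip install openai

client = OpenAI()

def extract_procedures(article_url: str) -> str:
    html = requests.get(article_url, timeout=30).text
    text = re.sub(r"<[^>]+>", " ", html)      # crude HTML tag stripping
    text = re.sub(r"\s+", " ", text)[:15000]  # keep the prompt a manageable size
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "From this threat research article, list each attacker "
                       "procedure as a short bullet with the log source it would "
                       "appear in:\n\n" + text,
        }],
    )
    return resp.choices[0].message.content

# Example usage (hypothetical URL):
# print(extract_procedures("https://example.com/latest-apt-report"))
```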
Ashish Rajan: Would you say this is more complementary to the existing log system, if they want to use it?
Like, it's not going to be that you have to pull out the existing one. Cause I feel a lot of people would think that. Even though what you're sharing is open source, I don't want people to think that, hey, I have to rip out Splunk or whatever you're using. This is just to enhance the information further if they wanted to.
Dave Johnson: So you don't have to rip and replace at all. It's supplementary, or complementary. The way Sigma rules work is that there's this adjacent project that translates Sigma rules into the query language of the specific vendor logging solution.
Ashish Rajan: Oh, so if I have a [00:12:00] Splunk or something, it can actually translate it to?
A hundred percent. Oh, wow. Okay.
Dave Johnson: So that's the beauty behind it. Sigma rules are not vendor locked in, because you don't want to just learn the Splunk query language and then not be able to apply that if you change jobs; you'd have to convert everything and learn the new language. Sigma is like an open standard, and you can translate it into whatever tool you're using.
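The adjacent project Dave refers to is the Sigma converter ecosystem (sigma-cli and the pySigma backends). As a hedged sketch, converting a vendor-neutral rule into a Splunk search might look like the following; the package and class names reflect my understanding of the pySigma Splunk backend and are worth verifying against its documentation.

```python
# Sketch: translate a vendor-neutral Sigma rule into a Splunk search with pySigma.
# Requires: pip install pysigma pysigma-backend-splunk
# Module/class names reflect my understanding of pySigma and should be verified.
from sigma.collection import SigmaCollection
from sigma.backends.splunk import SplunkBackend

rule_yaml = """
title: SSH Brute Force Attempts
status: experimental
logsource:
    product: linux
    service: sshd
detection:
    keywords:
        - 'Failed password'
    condition: keywords
level: medium
"""

rules = SigmaCollection.from_yaml(rule_yaml)
backend = SplunkBackend()
for query in backend.convert(rules):
    # The same rule could be converted with a different backend instead,
    # without rewriting the detection logic.
    print(query)
```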
Ashish Rajan: What are some of the challenges you came across as you went down the three approaches?
Dave Johnson: It's hard to assess this, right? The first problem, I think, is just time and money. Yeah, I mean it. The other one, I didn't talk about the fourth. I mentioned three in my talk, but there's a fourth, which is fine tuning.
That was the most expensive thing. It was one where I had to create a dataset from the SigmaHQ repo. I created this thing, which took a lot of time to get right. And then I fine tuned, was it GPT-3.5 Turbo? Yeah, that's the most powerful one I could fine tune so far. But then the results were pretty terrible.
Oh, I'm like, oh man. So fine tuning is not the answer to everything. I think when the AI stuff was blowing up, large language models, [00:13:00] everyone was thinking, let's just fine tune. Yeah. It's not the solution to everything.
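For context on what that fine-tuning step involves, and not as an endorsement, since Dave found it performed worst, here is a hedged sketch of building a chat fine-tuning file from (description, rule) pairs. The JSONL layout follows OpenAI's documented chat fine-tuning format; the example pairs and any mapping from the SigmaHQ repo are placeholders.

```python
# Sketch: build a JSONL training file for chat-model fine tuning from
# (description, sigma_rule_yaml) pairs, e.g. mined from the SigmaHQ repo.
# The pairs below are placeholders; as noted above, poor data gives poor results.
import json

pairs = [
    ("Detect failed SSH logins indicating brute force",
     "title: SSH Brute Force\nlogsource:\n    product: linux\n    service: sshd\n..."),
    # ... many more mined (description, rule) pairs would go here
]

with open("sigma_finetune.jsonl", "w") as f:
    for description, rule_yaml in pairs:
        record = {
            "messages": [
                {"role": "system", "content": "You write Sigma detection rules."},
                {"role": "user", "content": description},
                {"role": "assistant", "content": rule_yaml},
            ]
        }
        f.write(json.dumps(record) + "\n")

# The resulting file would then be uploaded and used to start a fine-tuning job,
# e.g. via client.files.create(...) and client.fine_tuning.jobs.create(...) in the
# OpenAI SDK.
```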
Ashish Rajan: When you say it's not the right solution, what you found was that fine tuning was actually making your queries worse?
Dave Johnson: Yeah. There's a concept called overfitting. So it was definitely overfitting. I'm not like a PhD in data science, but I could tell. I looked it up and yeah, I thought, this is pretty trash.
Ashish Rajan: I've been trying to find ways to do this, not with Sigma rules, but I went down the path of, hey, for people who are trying to learn cloud security. And I also find, when I give it a specific technical problem, after a while it starts spitting out, like in my mind I'm fine tuning it, but the answers somehow keep getting worse and worse, and it almost starts, I won't say hallucinating.
Sometimes it does hallucinate. I guess you've tried four approaches. You found the third one is the best one so far, where it's a mix of the RAG and the prompt chaining?
Dave Johnson: That was the... So I evaluated all the approaches. Fine tuning was a failure, like I mentioned before. Everyone wants to try it.
Everyone thinks it's the panacea of [00:14:00] LLMs and AI, but it isn't always. But like the rest of my story, it's all about data input. So if you give bad data for training, it's not going to have good results.
Ashish Rajan: But how do you, oh, and I guess the intent would be that they already have a large dataset internally, available at hand to work with, that they can apply a Sigma rule to and validate whether that Sigma rule works or not. Is that how you did the validation? Yeah.
Dave Johnson: So for validation, the approach I took, because I didn't have a ton of time, was I used a different LLM to evaluate each of the Sigma rules that I created. And I came up with this rubric for scoring the Sigma rules. And I did something similar for the community created Sigma rules too, just to see if there was a difference.
And the analysis came from the other LLM, because you do that to avoid bias. You don't want to use the same LLM to create and to evaluate, so I tried to minimize that bias.
Ashish Rajan: Oh, you had to pick different LLMs as well. I did, just to be safe, yeah. I didn't even think about it, because there may already be an existing bias in one dataset from one [00:15:00] LLM based on how it was trained,
but then you move that result to another LLM.
Dave Johnson: For example, if you're going through this prompt chaining, and you're saying, build all these quality controls, make sure you do this, make sure you do this, and then you create a Sigma rule. If you then turn around and ask the same LLM to evaluate it, it's going to think it did a great job.
But I took that output and put it into a differently trained LLM, just to get a more objective view. It's not perfect, but the best way to validate is to do real world testing. There's another project called Atomic Red Team. They do these sort of atomic-level red team tests. Okay. And then you can generate security logs from that.
But I just didn't have enough time. So it's like a next direction, future research.
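To make the cross-model scoring idea concrete, here is a small hedged sketch of asking a second, differently trained model to grade a generated rule against a rubric. The rubric wording, model name, and JSON scoring format are illustrative, not Dave's actual evaluation harness, and as he notes this does not replace real-world testing.

```python
# Sketch of "LLM as judge" validation: a second model scores a generated Sigma
# rule against a simple rubric. Rubric and model name are illustrative only;
# real-world testing (e.g. replaying Atomic Red Team logs) is still needed.
import json
from openai import OpenAI

judge = OpenAI()  # point this at a different model family than the generator to reduce bias

RUBRIC = (
    "Score this Sigma rule from 1-5 on each of: valid_yaml, specific_detection_logic, "
    "correct_logsource, low_false_positive_risk. Reply with JSON only, including a "
    '"notes" field explaining the scores.'
)

def score_rule(rule_yaml: str) -> dict:
    resp = judge.chat.completions.create(
        model="gpt-4o",  # assumes the generator used a different model
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": rule_yaml},
        ],
    )
    return json.loads(resp.choices[0].message.content)

# Usage: print(score_rule(open("generated_rule.yml").read()))
```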
Ashish Rajan: Oh, so if people want to join in that effort with Atomic Red Team, can they contribute to your project as well? Or is that tool going to be open?
Dave Johnson: Yeah, I think that's the future of that project, if anyone wants to use it: to pair up the defense with the offense.
You create the log data, and it could test the Sigma rules. If you have both of those, then you have purple teaming. So [00:16:00] it's a really cool idea.
Ashish Rajan: I think it'd be pretty cool for people to start working on it, maybe contribute to your tool as well. The other question that I have is now that we know about the skillset challenges, we also know the three approaches you took.
For anyone who's thinking about starting, would you ask them to jump straight to the third approach? Because in a way you had a bit of a learning curve: you asked the question first, and then it became RAG and prompt chaining. For people who are curious about this, where would you recommend they start? Sigma curious, right? Yes, Sigma curious. Yeah. For people who are Sigma curious, where would you want them to start?
Dave Johnson: As far as the future of this type of project, it's creating datasets. So if people are interested in Sigma rules, creating them manually a few times will give them a good sense of what goes into them.
They can contribute back to the community. And the thing is, with the approach that I used, any new Sigma rule created by the community helps fuel the RAG system. So if there's a completely novel new attack, maybe it affects the cloud, that gets mentioned in the community, I get that [00:17:00] imported into the RAG system, and then that Sigma rule could be an example for something completely new. That's the cool thing.
So people can contribute by doing that; they can contribute by making datasets. And in security, that's a big problem too: people are siloed. They don't want to share the data, because there's a lot of sensitive data.
Ashish Rajan: Yeah. No one would share their personal logs with anyone on the internet.
Dave Johnson: Yeah. But one workaround is potentially using LLMs to sanitize the log data so that you can have better distribution of security data sets.
Ashish Rajan: Oh! As in, LLM can help you anonymize information?
Dave Johnson: That's just the theory. I'd like to test that. Oh, okay, fair. That's something I think would be pretty cool to test.
Because, technically, it should be possible, one would think. The only problem is that LLMs are expensive, right? Yeah. If you're putting a ton of log data into it, it might not be super great. So there might be some kind of hybrid approach where you use regular machine learning, a little bit cheaper compute.
But I think there's something there for people to use.
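Here is a rough sketch of that hybrid idea: cheap pattern-based masking for the obvious identifiers, with a more expensive LLM pass reserved only for whatever the patterns miss. The regexes cover just a few identifier types, and the whole approach is the untested theory discussed above, not a vetted anonymization pipeline.

```python
# Sketch of the hybrid log-sanitization idea: cheap regex masking first, with an
# (optional, more expensive) LLM pass reserved for whatever the patterns miss.
# This is the untested theory from the conversation, not a vetted anonymizer.
import re

PATTERNS = {
    "IP":    re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "USER":  re.compile(r"(?<=user=)\w+"),
}

def sanitize_line(line: str) -> str:
    # Replace each matched identifier with a labelled placeholder.
    for label, pattern in PATTERNS.items():
        line = pattern.sub(f"<{label}>", line)
    return line

log = "Accepted password for user=ashish from 203.0.113.7 port 52314 ssh2"
print(sanitize_line(log))
# -> "Accepted password for user=<USER> from <IP> port 52314 ssh2"
# Lines that still look sensitive after this pass could be sent to an LLM for review.
```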
Ashish Rajan: Would there be a skill set change, or what kind of skills would [00:18:00] people need to acquire if they want to work on this whole LLM Sigma rule thing? Where do you recommend they build their skill set to walk that journey, for Sigma rules and LLMs and all of that?
Dave Johnson: I think you're maybe hesitant about telling people to go down the data science path, and I'm a bit hesitant too, cause I'm like, I don't know where to start there either. But you don't have to do any fancy degrees. Okay. I never did. I took, I think, a Udemy course, and some Udacity things.
And just got my feet wet. Wait, data science courses? Yeah. Oh, okay. Just three data science courses. I think if you're going to be in this field for another 10 or 20 years, it makes sense to get a little bit of a foundation. Because, I think, you just can't get away from it.
It's all about... I think the shift is going to be from these massive AI models that are very hard for people to create, to smaller models that are more efficient than the big ones, just because the data is better quality. So it's going to be about that data management, I think.
Ashish Rajan: That was all the technical questions I had. I have three fun questions as well. Okay. The fun questions that I have for you are, first one, what is [00:19:00] something that you're proud of that is not on the internet?
Dave Johnson: I'm very proud of my son. It's like, I went down this path of cyber security and stuff, and I think he learned from me to be more adventurous.
So he's doing horse riding and he's learning piano. He had his first recital; he's seven years old. Oh wow. At seven? Yeah, seven years old. Wow. Okay. That was definitely...
Ashish Rajan: I was not doing that at seven years old. Okay. That is something to be proud of for sure.
Dave Johnson: I think I was still drooling.
Ashish Rajan: Yeah. I'm like trying to figure out life. Yeah. Should I play with this toy or that toy was the question that I'm in my mind. But definitely not a recital, that's pretty awesome. Your son is able to do that. Second question, what do you spend your time on when you're not working on Sigma rules and other challenges?
Dave Johnson: I work for Feedly, which is a startup, and I love their product. I think it's super cool, but I also have my own side business. Oh, it's actually called Junebeat. Okay. And so I just released a flashcard app that uses AI to create flashcards for people. It could be for teachers trying to make cards for their students, any kind of [00:20:00] flashcard.
So you could take your phone's camera, take a photo of a text, or a file, and it'll create flashcards and help you study. Flashcards for Sigma rules? Not for Sigma rules. Oh, okay. Fair. You could do it on security stuff. I actually did that because I was studying for a security exam.
Yeah. And I had this thick ream of books, and I would have liked to make flashcards, 'cause that's better for learning, I think. But it just wasn't possible. That's why I went into that.
Ashish Rajan: Is that an app or is that a website?
Dave Johnson: It's an app. I'm going to release it in about a month.
Ashish Rajan: I'll put the link to the app as well.
That's a good, flashcard sounds like a great idea. Final question. What is your favorite cuisine or restaurant that you can share?
Dave Johnson: I do a lot of home cooking. I don't go out to eat anymore. Oh, what's your favorite thing to make then? Going back, favorite restaurant: Finn's Sushi in Madison.
I love sushi. It's my jam. Yeah, I do love sushi. I can't make that, so if I did, I would be worried.
Ashish Rajan: Yeah, 20 years of just squishing rice in there, like that's a very intense activity.
Dave Johnson: I think a specialist can handle that better. Yeah, fair. My favorite thing to cook is, I make this low carb pizza.
It sounds lame. But I was gonna say, wait, how do you make a low carb pizza? I use almond flour. Oh, okay, [00:21:00] but does it stick together? Because I use a little cream cheese, almond flour, and mozzarella cheese. And it sticks together? Sticks together, yeah.
Ashish Rajan: But where can people find out about the low carb pizza, or just what you do with the Sigma rules as well, the open source project?
Dave Johnson: They can find out more about this stuff on my own website, daveinthemiddle.com. Okay. And then if they search for SIGEN, S I G E N, yeah, they'll find the GitHub repo that talks about my project.
Ashish Rajan: I would definitely put that in, but dude, thanks so much for coming on the show. Thank you very much.
Thank you for listening or watching this episode of Cloud Security Podcast. We have been running for the past five years, so I'm sure we haven't covered everything cloud security yet. And if there's a particular cloud security topic that you would like us to cover in an interview format on Cloud Security Podcast, or make a training video or tutorial on for Cloud Security Bootcamp, definitely reach out to us at info@cloudsecuritypodcast.tv. By the way, if you're interested in AI and cybersecurity, as many cybersecurity leaders are, you might be interested in our sister podcast called AI Cybersecurity Podcast, which I run with former CSO of Robinhood, Caleb Sima, where we talk [00:22:00] about everything AI and cybersecurity: how organizations can deal with cybersecurity on AI systems and AI platforms, whatever AI has to bring next as an evolution of ChatGPT, and everything else that continues.
If you have any other suggestions, definitely drop them to info@cloudsecuritypodcast.tv. I'll drop that in the description and the show notes as well so you can reach out to us easily. Otherwise, I will see you in the next episode. Peace.