CNAPPs & CSPMs don’t tell the full cloud security story


In this episode we speak to Nick Jones, an expert in offensive cloud security and Head of Research at WithSecure, to expose the biggest security gaps in cloud environments and why CNAPPs and CSPMs alone are often not enough.

  • How cloud pentesting differs from traditional pentesting
  • Why CSPMs & CNAPPs don’t tell the full cloud security story
  • The biggest cloud attack paths—identity, IAM users, and CI/CD
  • Why “misconfigurations vs vulnerabilities” is the wrong debate
  • How organizations should prepare for a cloud pentest

With real-world examples from red team engagements and cloud security research, Nick shares insider knowledge on how attackers target AWS, Azure, and Kubernetes environments, and what security teams can do to stop them.

Questions asked:

00:00 Introduction
02:40 A bit about Nick Jones
03:56 How has Cloud Security Evolved?
05:52 Why do we need pentesting in Cloud Security?
08:09 Misconfiguration vs Vulnerabilities
11:04 Cloud Pentesting in Different Environments
17:05 Impact of Kubernetes Adoption on Offensive Cloud Security
20:19 Planning for a Cloud Pentest
  29:04 Common Attack Paths in Cloud
  33:05 Mitigating Common Risks in Cloud
35:14 What is Detection as Code?
41:17 Skills for Cloud Pentesting
45:28 Fun Sections

Nick Jones: [00:00:00] They want a pentest report to find as much as possible. And so the way they do that is to not give them the CSPM outputs, to make sure that the pentesters do a thorough job. But if you give us the CSPM output as a starting point, then we can immediately understand what's being covered and what's not.

We have a whole load of findings in there that we can use to inform the rest of the stuff we do. And quite often we do have some clients who give us the CSPM output, and we'll often include findings from the CSPM in the report. But we do that because we look at it and go, okay, this is actually a critical part of an attack path.

Ashish Rajan: If you work in the cloud security space, specifically the offensive side in the UK or Europe region, this is the episode for you. I finally got a chance to catch up with Nick Jones in Copenhagen, as part of the Nordic Security Tour that Cloud Security Podcast has been running over the past few weeks. Nick has been a very popular cloud security community member, has spoken at a lot of cloud security conferences and given presentations on the AWS security research and pentest experience that he's had over the last few years.

Nick and I obviously have been part [00:01:00] of the cloud security community for a long time. So there was a bit of reminiscing about what cloud security used to be like when we were trying to convince people about cloud security, and what it is today, especially when, even if you have a CSPM or CNAPP, how confusing those terms can be and whether you still need something on top of them, in spite of having the best-of-breed CSPM tool that you may have at this point in time. We also spoke about why you would even need a pentest.

Especially if you have AWS, Azure, Google Cloud already being looked after by a CSPM, and perhaps what you should expect from a pentester who's coming in to pentest your AWS environment. And where should you technically start doing a pentest? At the cloud or the application itself? All that and a lot more in this conversation.

Now the conversation is primarily focused on the UK and Europe region, which is where Nick is based out of, but the conversation we had can be applied to a broader context in other regions as well. So I would still listen to it even if you are not in the UK or Europe region. If you know someone who probably wants to work in the offensive cloud security [00:02:00] side of UK and Europe region, definitely share this episode with them.

I'm sure they'll equally enjoy the conversation that I had, and you would as well if you want to know where the current state of cloud security is, especially on the offensive security side. As always, if you are listening or watching a Cloud Security Podcast episode for the second or third time, or perhaps even the 10th time, I would really appreciate if you could take a moment to give us a follow or subscribe on the audio platform that you may be listening to this on, maybe it's Apple or Spotify, or the video platform that you may be watching this on, LinkedIn or YouTube. Hitting that subscribe or follow button only takes a few seconds, and it means a lot to us that you support us over here. I hope you enjoy this episode and I'll talk to you soon. Peace. Hello, welcome to another episode of Cloud Security Podcast. Today I have Nick. I'm really excited about this conversation and actually before we start, if we could just have a brief intro about your professional background, what have you been up to?

How did you end up in cloud in the first place?

Nick Jones: Sure, so I started out with a computer science degree and went into traditional penetration testing, offensive security, trying to break web [00:03:00] apps and old school Windows networks and all of these kinds of things. And I spent a few years doing that and eventually found myself in a situation where we sat down with this client and they said, oh, we know you're supposed to be doing a network test.

Turns out some of our network's actually in Azure, so if you can go have a look at that as well as whatever it is that we've also given you, that would be fantastic. And I sat there going, okay, Azure, what's that? Open Google, how do I log into this thing? And we quite quickly found a bunch of stuff the client had screwed up.

But it was certainly an eye opener into a world that I hadn't seen before. So fast forward sort of six, nine months and I'd started digging deep into AWS actually. And after a little while eventually ended up running our cloud security team here at WithSecure, which I did for five years or so.

Cloud is still very much my specialism, but I've since moved out of that role. I'm now our Head of Research, which means I get to oversee all of the research that our consultants do across not just cloud, but a whole load of other stuff too.

Ashish Rajan: Fair. And I think, obviously, you and I have been talking for a long [00:04:00] time way before this as well.

You've been talking about detection as code as a space, offensive security as a space. I'm curious, actually, since you've been in the cloud security space for a while as well, how has that evolved from when you first did that Azure pentest to today, 2025? What do you hear now? I had to explain that to people before.

Nick Jones: Yeah.

So back when I first started, a lot of people were still getting their head around what this whole cloud thing was. And this is 2016 or so and the cloud had been around for a long time.

Ashish Rajan: Yeah.

Nick Jones: But a lot of people hadn't really been using it yet. And it was certainly, we had quite an international client base and a lot of the more advanced cloud users were very sort of UK US focused.

So if you started looking into Europe or into Asia, people were just starting to find their feet with it, especially a lot of the older, more conservative companies. And so a lot of the time you would have to sit down and explain that yes, identity is an acceptable perimeter. You don't have to have everything locked away behind a firewall.

You know you can't firewall off your S3 buckets. That's not how this works. We were still encountering a lot of legacy thinking about how to approach [00:05:00] security where people were lifting and shifting their mindset as well as their workloads, right? And that's definitely changed an awful lot over time.

People have got much better at hiring cloud security experts or cloud security specialists into their security team, understanding that this is a different space, that they do need specialist knowledge. Finally now people are also understanding that you can't hire one cloud security guy and expect him to do AWS, Azure and Google Cloud all together.

There's a lot more specialization happening than there was historically. Things are maturing an awful lot. The tooling has gotten a lot better. What CSPMs looked like in 2016 was pretty basic, misconfiguration scanners, basically. And these days things like Wiz, Orca or all the rest of them are getting much, much better.

A lot of very advanced capabilities they're bringing to the table. It's matured, I think, is probably the straight answer. A lot of things have matured. People have worked out how they work. It's a lot less of a Wild West.

Ashish Rajan: I think this is an interesting one. I often get into these conversations over advisory calls because people already have, what, a CSPM or a CNAPP, whatever you want to call [00:06:00] it, and that's a whole other can of worms on the CNAPP that we were talking about before. But a lot of people think, to your point about pentesting in cloud: if I already have a CSPM or CNAPP, or insert whichever category of cloud security tool is out there today, why is there a need for a pentest, or even a human to come into the loop of all of this?

Nick Jones: In my experience, a lot of that boils down to CSPMs and CNAPPs being very good at understanding what a misconfiguration looks like.

And nowadays also what a potential attack path exploiting some of those misconfigurations might look like, and some of the fancy ones also do identity mapping on top of that and all these things. What they're generally not so good at is contextualizing findings. They do as good a job as they can.

Fundamentally, you still need a human to sit there and look at this and go, okay, the CSPM says this is a high. Is it actually a high? Do I really care? It's a public S3 bucket. Is this a five alarm fire where we alert incident response and off we go, or is it just a bucket containing some [00:07:00] public files that are supposed to be public and so we don't care?
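As a rough illustration of the triage Nick is describing, here is a minimal boto3 sketch (bucket names are hypothetical and AWS credentials are assumed to be configured) that asks AWS whether a flagged bucket is actually reachable publicly, before a human decides whether its contents are meant to be public:

```python
# Toy triage helper for "the CSPM says this S3 bucket is public - do we care?"
# Assumes boto3 credentials are already configured; bucket names are made up.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def triage_bucket(bucket: str) -> None:
    try:
        # Does AWS itself evaluate the bucket policy as public?
        policy_public = s3.get_bucket_policy_status(Bucket=bucket)["PolicyStatus"]["IsPublic"]
    except ClientError:
        policy_public = False  # no bucket policy attached at all
    try:
        block = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
    except ClientError:
        block = {}  # no bucket-level public access block configured
    print(f"{bucket}: policy_public={policy_public}, public_access_block={block}")
    # The part no tool can do: is this content actually *meant* to be public?

for name in ["static-marketing-assets", "customer-exports"]:  # hypothetical buckets
    triage_bucket(name)
```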

And that's a very simplistic example, but you find more of that the further you dig into things like AWS IAM configurations, for example, how you've got your permissions models set up, these kinds of things. Often the CSPMs now can infer quite a lot, but you still need to apply that business contextual understanding of what is this workload?

What's it trying to do? How is it supposed to operate? What needs to happen in order for this thing to function correctly? And so therefore, what's overprivileged? What's not? And a good pentest should also come out the other side with not just band aid fixes, you know, you've got an IAM user over here and you should probably get rid of it, and you should get rid of all of your IAM users.

But a CSPM will tell you that. What you should also come away with is recommendations on processes and long term fixes and improvements to the way you do things so that these vulnerabilities don't come back, right? They should be working with you to help you understand how to improve your situation.

And a lot of that, if you've got experts on your team who know all of that already, [00:08:00] then maybe you don't need that level of guidance. But in my experience, very few organizations do have that level of deep cloud security expertise in house on a permanent basis.

Ashish Rajan: Also, which is why a lot of them tend to just rely on a CSPM to go, give me the alerts, the attack paths. But then the follow up to that also is that a lot of people talk about the fact that all the red wall of alerts that I have on a CSPM are technically misconfigurations, not really vulnerabilities.

Nick Jones: So I think the misconfiguration versus vulnerability debate is an interesting one.

More in an academic sense than a useful sense. Because what we're essentially saying is that a misconfiguration is a setting that you've set wrong. A vulnerability is where the software's been programmed wrong. To an attacker, it doesn't really matter which it is that they're making use of.

At the end of the day, it's a flaw in the way that the thing you've got set up is running, and it's something they can exploit to steal stuff. But I think the other thing that's important to note is that the problems we find when we're assessing cloud environments aren't always specifically in the workload itself; often the biggest security challenges are related to things that the workload [00:09:00] relies on.

So things like, for instance, CICD, your deployment pipelines, all of these kinds of things. The platforms that those run on, the security of your GitHub repositories that the code is loaded into the pipelines from, the Entra ID that you're using to then authenticate into AWS or Azure to do your engineering.

All of these supporting pieces have often historically been thought of as second line systems, or not production, or, not internet facing and so therefore not as important. But actually we find these days that CSPMs have gotten pretty good at catching a lot of the low hanging fruit inside an individual workload.

But they really aren't very good when you consider the whole picture and all these disparate systems, right? Again, they're getting better. People are building GitHub support into their CSPMs. But it's when you start looking at the bigger picture that value starts really coming through.

And the wall of red comment you make is quite an interesting one, because we hear that a lot from a lot of organizations, especially some of the more interesting startups we deal with, where maybe they're four or five hundred people total. They've got a security team of two or [00:10:00] three guys who are covering everything from their endpoints through to their cloud workloads through to the data lake they've got tucked away in a corner.

They don't really have the time or the energy to go, of these 15,000 findings in my CSPM, which of these do I care about? And so we have started also doing engagements where, rather than a more traditional sort of pentest type thing, we've actually gone in with a view to look at the entire organization as a whole, 180, 200 AWS accounts for the last one I did, something like that. Take in all their CSPM findings, do our own digging around inside the organization too to look at things like how the identity is configured, and look for specific stuff we don't think the CSPMs are very good at finding. And then take some time to analyze what we gather plus what's in the CSPM, yeah, and produce something out the other end that is, here's six months worth of problems for you to fix that we've prioritized for you, start working on these, and then once you've got all that in order, then we can start looking at what else is in the backlog. We've taken you from 15,000 things to five categories of things and a couple of individual [00:11:00] cases of stuff, and that gives you something that's a lot more manageable to then take to teams and start working on.

Ashish Rajan: Yeah, interesting and I think yeah wall of red is an interesting one.

Maybe just to give us some context: you were talking earlier about the early adopters that you worked with, the financial sector, US, UK, and now the startups as well. How different is it between an enterprise, like a large financial organization, which is regulated and all of that,

and what does cloud look like there when you go in with your pentesting and research assessments, versus all the way at the other end, the startups we just spoke about? What does that look like on the other end?

Nick Jones: Sure. So there's a few quite interesting distinctions there, I think.

In that when you look at large enterprises, especially in regulated spaces like finance you'll find there's a lot of concern around making sure that we remain compliant with these various standards and requirements and everything that are either part of our license to do business, you lose your banking license if you don't stay compliant.

[00:12:00] Or they're compliant with a billion different frameworks like PCI DSS or whatever else. And so a lot of the time when you're doing security audits there, A, they'll have a CSPM and they'll be working with it, but it doesn't matter, they still need a pentest because that's what the compliance standards say.

So there's a fair amount of work that's like that. That often ends up being less interesting for the consultant like me because they're by their nature very narrowly scoped because they're looking to tick a box to say that workload X is okay. And so the type of thing you're allowed to look at is precisely workload X and none of the supporting systems.

Ashish Rajan: And you only have two days.

Nick Jones: Or anything else. Yeah, or often you don't have as much time as you'd like. And then a lot of the vulnerabilities that I typically wouldn't care about are things that they have to care about. There's a running joke I have with Chris Farris about how you need auditors in your threat model.

Failing compliance is a legitimate threat to a lot of these businesses. And there's a whole bunch of vulnerabilities, misconfigurations, where in practice no attacker is ever going to exploit you that way. But you have to care about it because otherwise an auditor is going to say, nope, that's not allowed.

And so therefore you failed your audit. The classic example is [00:13:00] encryption at rest in the cloud, right?

Ashish Rajan: Yeah.

Nick Jones: The chances of some bunch of ninjas delta forcing their way into an AWS data center and stealing the right disks is pretty minimal, right? Yeah. And I can't remember ever coming across an AWS estate where the next best thing for them to work on to improve their security was going to be their KMS controls.

But that doesn't mean that there aren't a lot of people out there who spend a lot of time dealing with encryption at rest related concerns, and it's usually to do with the auditors, right? Then there's a middle ground when you start looking at these FinTech startups, so for instance in the UK space you've got Monzo, Starling, Revolut, N26 in Germany.

There's a big market for it in the U. S. now. I'm not quite as familiar there, but these app only banks, these app only financial institutions, money transfer services.

Ashish Rajan: Neobanks, is it called?

Nick Jones: Neobanks is another term, yeah, challenger banks. And so a lot of these are very much cloud first, cloud native.

No on premises estates whatsoever. No branches or anything either. Everything is in the cloud bar their end [00:14:00] points, and that comes with some quite interesting challenges from a regulatory perspective. I know Starling in the UK were one of the first to get a regulated banking license despite being cloud only.

And I know they had a hell of a time convincing the regulator that this was an okay thing to do. That they could rely on AWS and it wasn't going to be the end of the world. And so they have a lot of that same regulatory pressure, but they bring that startup velocity and speed of doing things.

And that brings a lot of challenges that a lot of the big banks just don't deal with because they're not interested in moving at that kind of speed.

Ashish Rajan: Yeah.

Nick Jones: And for instance, from a pentesting perspective, I remember being in one of those neobanks where, oh, we were doing some network testing.

They wanted to do a network pentest to supply their investors with a thing that said they were okay. And I started mapping and scanning around and trying to find some vulnerabilities. And the scanners kept getting really confused. And it turns out it was because they'd left their deployment systems on, even in this testing environment they'd given me.

And so they were deploying 25, 30 times a day. And every time they deployed, they killed an old EC2 [00:15:00] and spun up a new one in its place. So all the IPs kept changing underneath me. Which confused the hell out of me to start with. But actually it ends up being an incredibly resilient setup in many ways against an attacker.

Yeah. Because how do you as an attacker persist if the system that you've exploited and you've landed on dies 15 minutes later? Yeah, so the cloud brings a lot of interesting challenges for things like that. And then if you go right the way through to the sort of move fast and break things startup space, the challenge there usually tends to be that there's no big regulatory stick to beat them into submission.

A lot of the time security doesn't have the leverage to make engineering do what needs to be done to stay secure. Now, I'm firmly of the opinion that security exists to support and enable engineering. Security shouldn't be a gatekeeper, it should be an enabler. All of these good things. But I've definitely seen environments where the engineers are so free to do whatever the hell they want that security just can't keep on top of it.

And so then when we go in and we start having a poke around because they've, they've paid for a penetration test and we've gone in to have a look. We find [00:16:00] all kinds of things. But security hasn't got the time or the leverage to be able to make them go fix it, right? Until we come along and prove the damage that it can do.

Ashish Rajan: Yeah.

Nick Jones: And so actually at the startup end of things, quite often I'll find that the penetration testing we're brought in to do is as much as anything a way of sending a message to the engineering organization that we do need you to invest some time and effort into security, more than you are at the moment.

Yeah, because I've never met an engineer who wants to be insecure. Everyone tries to be secure. But when you've got product managers breathing down your neck, saying, we need these five features out next week and your choice is get the five features out or be secure. If the boss is breathing down your neck, you go with the fast route and not the secure one, right?

A lot of the time, yeah, we end up being used as almost like a bludgeon to beat the other side of the house into submission with, which is also not very much fun as a consultant, but it's something we're seeing more and more of. GDPR has obviously been around for a while now, but more and more of these regulations are starting to take effect worldwide.

Even some of the startups are starting to care or having to care a lot more than they used to [00:17:00] about these things. And it's been a bit of a wake up call, I think, for a lot of them who hadn't realized what state they were in.

Ashish Rajan: Actually, it's a great answer, because I think in most of the advisory calls that I've been on with directors or CISOs in the UK and Europe region, one of the things that has been coming up is also the complexity of, to what you said about the second line systems, which is the GitHub of the world, the CICD part of the world.

But then the initial wave of thinking about cloud security used to be around virtual machines. Now we've gone away from that, I feel, to a large extent. It's still there, people who started on it are still there. But now we're in this land of containers, Kubernetes. For people who understand that, I'm sure the moment I mentioned Kubernetes, people who are listening or watching this are rolling their eyes going, oh my God, that thing. Has that made it easier or has that made it even more challenging, obviously from a pentesting perspective? Because now, when you go into an environment, it's not just, I have to look at 180 AWS accounts; it's how many data centers within data centers, as they call [00:18:00] Kubernetes, are in there?

Nick Jones: It's changed the nature of a lot of things, I think. Serverless is an interesting one because serverless simplified a lot of the Infrastructure level security, right? Because you don't have to worry about running your own virtual machines anymore. Amazon's handling the patching and all these kinds of things. But what we actually started seeing coming with that is that some organizations then look to build the simplest serverless functions they can.

Which often means removing all the old web dev frameworks that people used to use, your Djangos and Ruby on Rails and things. Which also means stripping away all of the SQL injection protections and cross site scripting protections. All these other things that devs have historically been used to having built in by default.

And I wouldn't say there's been a huge uptick in those kinds of vulnerabilities, but I have definitely noticed there's been a few cases where removing those frameworks has led to devs making mistakes because they don't think about these things the way they might have done 15 years ago when we didn't have these frameworks helping out.

Kubernetes on the other hand, ah, Kubernetes is a separate beast. I do a little bit here and there. My colleague [00:19:00] Mohit Gupta is probably rolling his eyes in the background right now because he's our Kubernetes expert. You might have seen him as skybound on a few of the KubeCon CTFs and fwd:cloudsec and things. He teaches our internal Kubernetes training course for pentesters, and that is a week long, sit down, in person training course on here's how Kubernetes works and here's how you go about attacking it as a security consultant.

Because there is so much there and so many things that can go wrong that yeah we take a full week to teach it. So it's a real beast and the problem with Kubernetes often isn't Kubernetes itself. Vanilla Kubernetes has a fair few things that can go wrong, but not tons and tons.

What the problem is that by the time you have a functioning production Kubernetes cluster, it's got 15 other projects running on top of it. You've got a networking layer, you've got a service mesh, you've got your observability and monitoring solutions.

Ashish Rajan: Its own firewall, its own load balancer.

Nick Jones: Right, and there's tons of stuff buried in there. And then you've got your, what was it, cloud native application protection platforms and runtime protections and all this stuff running in there, too. And in fact, actually, on occasion, some of those systems can be the [00:20:00] weak point, right? Because quite a few of these monitoring and debugging and supporting tools have ways of reaching into the cluster to do things.

And so as an attacker, if I can compromise one of those, and that gets me a shell into the Kubernetes cluster, or it has write access to the Kubernetes APIs, then I didn't need to compromise Kubernetes. I've gone in via the side door, so to speak.

Ashish Rajan: So to your point, it has added more complexity, but maybe because I would love to have you talk about some of the pentesting work you are doing with fwd:cloudsec as well in terms of what the paper is looking like.

We've been talking about, just the space in general, how complex it is for people who are either pentesting today or thinking of pentesting cloud tomorrow. I imagine a lot of people who would listen to this episode or watch this would be curious about how do you recommend people plan for a pentest for a cloud account?

Doesn't really matter. AWS, Azure, whatever these days we have, as we just spoke about right now, Kubernetes containers, hybrid cloud, multi cloud, private cloud.

Nick Jones: So there's a lot of different things to think about, for sure. I would say that, I think if [00:21:00] you're an engineer or a security team looking to procure a pentest, one of the things I see people get wrong most often in the cloud space is assuming that a firm who is good at pentesting in general is also good at cloud pentesting, right?

And that's not to say your usual firm won't be good, but I think it's prudent to do a bit of due diligence on the cloud side specifically, to understand who it is they're going to be providing you with and that firm's reputation specifically in the cloud space. Because one of the things that we see really commonly, if you go to the big security conferences like Defcon and things like this, you'll see people running these cloud pentesting, cloud red teaming training courses, or talking about these things.

And you can really tell who is a pentester who's done some cloud, and who is a cloud security specialist who does pentesting.

Ashish Rajan: Ah.

Nick Jones: And a lot of that is around the mindset and the thinking and how people approach stuff. I have seen one pentest report from a vendor I [00:22:00] won't name where the pentester said, oh yeah, we got 87 shells in your environment.

And it turns out actually what they'd done was drop Cobalt Strike beacons into the same Lambda function 87 times. Because the function kept dying, and so they kept dropping another beacon, and round they went. Because they didn't understand they were landing inside a Lambda function, and therefore didn't understand what they were supposed to do next, or what they could do next, or what that meant for the access they thought they'd got, or all of these kinds of things, right?

So if you just try and apply your standard pentesting methodologies and mindsets to the cloud without going and learning about the cloud and cloud security first, you end up with some very weird outcomes. And a lot of the time it's also, from a risk perspective, really bad for the organization buying the pentest, because the findings they're going to get are either not gonna be contextualized properly, or the risks are gonna be scored wrong, or they're gonna have missed more cloud native issues and just found the stuff that's easy to do with the old school pentesting tools that they're used to using. So finding a partner who really knows what they're doing in the cloud [00:23:00] security space, I think, is the most important thing in many ways. And then once you're there, my usual tips are: make sure that you've got the engineering team for the workload that's being tested available for the duration of the pentest.

Because as a consultant, if I can ask questions and engage with the team, you're going to get far better recommendations for how to fix things out the other side, because I understand how it all fits together. You're also going to get recommendations that take into account things that devs have already tried and failed with, or with some context for your business specifically, not just the workload.

And it also means that there'll be times where I could spend half a day flailing around pulling all this data together and working things out for myself, when a few Slack messages to the right engineer might get me the answer a lot faster.

And so if we're able to engage much more closely with you as an organization, we can move a lot faster.

We can cover more ground, we can find more things. It's to everyone's benefit, right? And likewise, please give us the infrastructure as code if you're using it. It's really common to find organizations saying, oh, we don't really want to give [00:24:00] you the source code or what have you.

But if you give us the infrastructure as code, there's all kinds of things that we can do that allow us to speed up our testing. There's various things that we can find much faster with infrastructure as code with some of our scanners and things. We can also tell, when we're looking at something, if we've found 50 instances of the same problem or 50 different instances of a problem, because we can look at the infrastructure as code and say, oh, Terraform says count equals 50. Okay, it's the same thing, we change it over here. We could also then even make recommendations as specific as: go to this Terraform file, line 55, change this field from true to false, and that fixes the problem, right?

So the more visibility and the more access you give us, the more we'll be able to do for you and the more value you'll get out of it.

Ashish Rajan: That's great advice. Also, would you say, talking back to the complexity of the large footprint that many people have, hybrid, multi cloud, whatever, is there some recommendation on whether to start with one provider or go application based?

Because a lot of the vendors, at least the ones that I commissioned as a CISO, or the ones [00:25:00] that I do advisory to, they all focus on applications first because they're like, hey, this is my crown jewel. This is what is under the regulation or whatever the reason here is, or it should be more because it's a cloud security pentest.

Should I be focusing more on, let's do AWS, let's do Azure? What's the approach there?

Nick Jones: So it depends a lot on what you've had done historically, I think. But imagining that we're starting completely from zero I would say probably what I'd do is look to work out where your crown jewels were.

And get those tested first, ideally in a combination of the app and the cloud stuff at the same time, because if you engage it as a single pentest, then it gives people the opportunity to more easily spot ways of pivoting from the app into the cloud or from the cloud up into the app, or all of these other things, right?

You'll get a more holistic response to what the risks are, and I would also say, where possible, look to group a lot of these things together and include things like CICD pipelines and all this [00:26:00] stuff in the single engagement, because the more breadth you give the consultants to look at, the more risks they're going to be able to account for, the more attack paths they'll be able to explore.

Whereas if you just box them in on a single workload and a single set of AWS accounts, you're probably going to miss a lot of the the bigger picture findings, right?

Ashish Rajan: And would you say, going back to what you were talking about, Nmap and the machines being killed every few minutes, maybe don't give those environments?

Nick Jones: So I think this is about what I was saying earlier in terms of engaging with the team, right? Because I was quite closely engaged with that team, I was able to say, Hey guys, why do these boxes keep disappearing? And one of the engineers said, Oh yeah because the way we do deployments is like blue green, we turn the old boxes off, turn the new ones on.

Yeah. Oh, okay. In which case, can we stop deploying into this account for a couple of days so I can finish my pentest? And they looked at it and went, oh, that's one of the testing accounts. Yeah, we don't really care about it, sure, we'll just turn that off for you. And on it went, right?

But that's another really good example of where that might not matter in the case of a lot of cloud pentests, because if you've got a serverless environment and everything's Lambda functions and S3 [00:27:00] buckets and things, you've not got a network to be Nmapping anyway, right?

Yeah. Yeah, I think it still all boils back to: engage closely with the provider if you're buying this stuff, and help them understand what you're doing and how you're doing it, so that you can work out between you how best to test it. Because you'll know what your crown jewels are and what you need to defend the most, and they will understand best what should be in scope in order to give you the best view and the best idea of your risks, right?

So it's about coming together collaboratively to work out what the best approach is going to be for a specific organization.

Ashish Rajan: Oh, actually, yeah, I think collaboration definitely would be key, because to your point, you want to be able to answer questions quickly. Because unfortunately most CISOs out there will give you very limited time anyway, because people want to save money, blah, blah.

So having that speed definitely helps. I was also thinking from the perspective of people who already have a CSPM; they would want the pentest report to be different, not the same.

Nick Jones: Yeah so one of the, one of the things that I see a lot of people [00:28:00] not doing, which also strikes me as a bit silly, is they want a pentest report to find as much as possible.

Ashish Rajan: Yeah.

Nick Jones: And so the way they do that is to not give them the CSPM outputs, to make sure that the pentesters do a thorough job. But if you give us the CSPM output as a starting point, then we can immediately see what's being covered and what's not, and we have a whole load of findings in there that we can use to inform the rest of the stuff we do.

And quite often, we do have some clients who do give us the CSPM output. And we'll often include findings from the CSPM in the report, but we do that because we look at it and go, okay, this is actually a critical part of an attack path. And so while it's in your thousand CSPM findings that you haven't fixed yet, this particular one actually turned out to be far more impactful than it looked at first, because we chained it with A, B and C and now we're in your crown jewels, right?

So if you are concerned about the amount of time you're going to spend on things and the amount of budget doing a decent pentest is going to cost, then yeah, give the consultants access to the CSPM and any other security data you've got for your cloud, much like giving [00:29:00] them the infrastructure as code.

The more info we've got going into it, the faster we can move.

Ashish Rajan: Yeah, and maybe to help prepare a lot of people who may need to do these pentests: a lot of people talk about prevention and detection, because cloud enables you to do API based ones, so people should be spending a lot of time doing prevention exercises, so a potential pentest exercise hopefully comes up with zero findings.

Now, is there a, I guess a common set of attack paths that you've found as probably top three that you see quite often, and I imagine S3 Bucket is probably one of them, but

Nick Jones: so we'll put the S3 Bucket one to one side, because we don't tend to run into that one too often.

But a lot of that is a function of our client base.

Most of the people who leave S3 buckets out in public are also organisations who don't have large security teams and large security budgets. So I would say the most common things that I see that really screw people up: security of [00:30:00] your source code, CICD pipelines, all of that stuff is a big one, because

Ashish Rajan: integration or is it secrets in there?

Nick Jones: As in the way that it's all managed as a whole, right?

So if I look at how attackers often get in, phishing is still the most common attack vector. So phish a dev and then leverage their access to the source code and CICD pipelines to then move into the cloud is a really common route that we exploit. Because if we're in a position where we can interact with the source code, if you don't have things like the correct two person checks on merges, or if you're using Terraform Cloud or Atlantis or one of these other Terraform automated deployment tools, quite often they'll run a Terraform plan automatically when you open a merge request, and that compiles and executes all the Terraform, so if there's malicious stuff in there, then we can get access that way.

There's a whole bunch of different ways that one comes out, but like I say, people are still waking up to the idea that it is a major problem. So that for sure. I would say IAM users are still a massive problem. And I keep putting a meme in a lot of my AWS presentations about, nuke it from orbit. It's the

It's the [00:31:00] only way to be sure. And honestly, if you can, whatever you can do to burn down your remaining IAM users, throw 'em out, get rid of 'em replace them with any of the seven or eight different other authentication mechanisms that AWS provide. Do it, because something like half the public AWS breaches that we have data on they've had an IAM user either as the initial breach point, or it's been a significant contributing factor to making it worse later down the attack chain.

Because no one ever rotates the credentials. You're supposed to, but no one ever does. They last forever. People forget about them. They get thrown in the wrong places in a repository and accidentally checked into GitHub or there's all kinds of ways it goes wrong. But they're fundamentally something that I understand why they exist, but they shouldn't anymore in the year of our Lord 2025.
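As a loose sketch of the clean-up Nick is pushing for (read-only IAM access assumed, and the 90-day threshold is an arbitrary example), a boto3 loop along these lines will surface the long-lived IAM user keys that tend to end up in the wrong repository:

```python
# Flag IAM users with old, still-active access keys as candidates for removal.
# Assumes read-only IAM permissions; the 90-day threshold is an arbitrary example.
from datetime import datetime, timezone
import boto3

iam = boto3.client("iam")
now = datetime.now(timezone.utc)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        for key in iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]:
            age_days = (now - key["CreateDate"]).days
            last_used = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
            last_date = last_used["AccessKeyLastUsed"].get("LastUsedDate", "never")
            if key["Status"] == "Active" and age_days > 90:
                print(f"{user['UserName']}: key {key['AccessKeyId']} is "
                      f"{age_days} days old, last used {last_date}")
```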

So if we could get rid of those, that would be fantastic. That would be a big step up for a lot of AWS accounts, or AWS workloads. And then third, a lot of it ends up being about how identity is managed at an organization wide level, right? Either for AWS or [00:32:00] for Azure, looking at who's still got guest access that they shouldn't have.

For AWS, who can assume what roles between which accounts. On the Azure side of things, often it's about the way your Entra is set up making it extremely difficult to actually understand who's got permission to what. And so therefore you've got twice as many employees with the permissions to do bad things as you think you do.

And so we don't see very much these days where people are getting breached straight in from the outside into their cloud workloads, or at least not with the testing that we do with our clients. It's usually more some kind of initial access that comes from outside the cloud. And then occasionally we pop in through an app.

If you've got a vulnerable application, we can get a foothold that way. If your application runs in Lambdas and we can print out the environment variables, for instance, then that gets us the AWS access keys for the Lambda function, and then we can do things with that. But yeah, it's not usually publicly exposed S3 buckets or Elasticsearch databases or EBS volumes or any of these other things; it's [00:33:00] usually some kind of identity based attack, or coming in through a supporting system, I would say.
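To make the Lambda example concrete, here is a toy illustration (not anything from an actual engagement) of why printing environment variables inside a Lambda function hands over credentials: the execution role's temporary keys live in standard environment variables in every Lambda runtime.

```python
# Toy example of the bug class Nick describes: a Lambda handler that echoes its
# environment back to the caller leaks the execution role's temporary credentials.
import json
import os

def lambda_handler(event, context):
    leaked = {name: os.environ[name] for name in (
        "AWS_ACCESS_KEY_ID",
        "AWS_SECRET_ACCESS_KEY",
        "AWS_SESSION_TOKEN",
    ) if name in os.environ}
    # An attacker who can read this output can now call AWS APIs as the function's role.
    return {"statusCode": 200, "body": json.dumps(leaked)}
```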

Ashish Rajan: And I guess maybe recommendations for the top three as well that you mentioned.

So obviously, people watching and listening this.

Nick Jones: Right.

Ashish Rajan: Nick said these three. How am I mitigating these as well? What's your recommendation for this? And obviously, ultimately, it may be different for different sizes of organization, but in general, what do you see as a

Nick Jones: One of the reasons that these are still things we find is because they're hard problems to solve, right?

Identity and access management is the classic one, where the reason it's difficult is not because each individual piece is difficult, but because doing it at scale is hard, right? And one of the things that a lot of these CSPM, CNAPP tools are getting better at is mapping out identity across an organization and being able to say, oh, you've got these five AWS roles over here that have got a trust policy that means someone from outside can assume them.

Why is that? Do you still need it? Or they're looking at your Entra and saying, oh, you've got these guest accounts that are mapped over to this other tenant. Do we still need those to have privileged access into these things? But [00:34:00] a lot of those capabilities are very expensive.

If you can afford Wiz, it's great. If you can afford Datadog, likewise. But if you are a smaller organization and you're using Prowler, great, but the open source tooling is still pretty deficient in that space. And largely because it's become a very commercially successful part of the cloud security product space, no one's wanted to go open source with it because they can make money on it instead.
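For the cross-account trust point above, a minimal boto3 sketch along these lines (read-only IAM access assumed, and the account matching is deliberately crude) can list roles whose trust policies reference principals outside your own account:

```python
# Rough check: which IAM roles trust principals from outside this account?
# Assumes read-only IAM and STS access; ignores Conditions such as ExternalId.
import boto3

iam = boto3.client("iam")
account_id = boto3.client("sts").get_caller_identity()["Account"]

def external_principals(trust_policy: dict) -> list[str]:
    found = []
    for statement in trust_policy.get("Statement", []):
        principals = statement.get("Principal", {}).get("AWS", [])
        if isinstance(principals, str):
            principals = [principals]
        for arn in principals:
            if account_id not in arn:  # crude: any ARN not mentioning our account
                found.append(arn)
    return found

for page in iam.get_paginator("list_roles").paginate():
    for role in page["Roles"]:
        external = external_principals(role["AssumeRolePolicyDocument"])
        if external:
            print(f"{role['RoleName']} can be assumed by: {external}")
```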

No one's wanted to go open source with it because they can make money on it instead. For the CICD aspects source code management, all of those kinds of things. I would say that a lot of that still ties to identity. It's still about making sure that you can't merge into protected branches without someone someone else double checking it.

And around making sure that whatever you're doing with your GitHub Actions and all these things isn't introducing more risk than it needs to. But that's a pithy way of putting it, really. It's more about looking at what it's doing and making sure that you're loading the right things in from the right places.

You trust all of the libraries and supply chain pieces that you're using as part of it. And that you haven't got ways for [00:35:00] a disgruntled developer, or someone who's phished a developer, to do things like command injection through your pipelines or things like that. Because often those pipelines have a lot of privileges, a lot of credentials associated with them.

So if an attacker gets into that, it's usually a bad time for everyone.
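One small, concrete slice of the "two person check" point is simply verifying that protected branches actually require a reviewer. Here is a rough sketch using the GitHub REST API; the org, repo and token handling are placeholders, not anything from the episode:

```python
# Does this branch require at least one approving review before merge?
# GITHUB_TOKEN, org and repo names are placeholders for illustration only.
import os
import requests

HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def requires_review(owner: str, repo: str, branch: str = "main") -> bool:
    url = f"https://api.github.com/repos/{owner}/{repo}/branches/{branch}/protection"
    resp = requests.get(url, headers=HEADERS, timeout=10)
    if resp.status_code == 404:
        return False  # branch is not protected at all
    resp.raise_for_status()
    reviews = resp.json().get("required_pull_request_reviews") or {}
    return reviews.get("required_approving_review_count", 0) >= 1

print(requires_review("example-org", "payments-service"))  # hypothetical repo
```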

Ashish Rajan: And maybe to add another layer, I was also thinking about detection as code here as well. A lot of people go down that path of, oh, I'm gonna build a detection as code capability, because cloud is all APIs, I should be able to do it.

Prevention should be easy, and I should be able to detect most of the things, like the things that Nick just mentioned; I should be able to put detections in for them. But what is detection as code, for people who may not have heard that term before? What is it and how does it apply to cloud?

Nick Jones: So detection as code is the idea that your alerts, your queries, all the things that you run on your telemetry and your log sources to spot bad things happening, are written as code in a format that can then be version controlled and regression [00:36:00] tested and all these other good things. You can apply your software engineering best practices to developing your detections for your cloud estate, or actually a lot of the legacy on premises workloads or endpoints or all kinds of other things too, right?
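As a toy illustration of what "detections written as code" can mean in practice (this is a made-up rule, not anyone's production logic), the rule is just a function over a CloudTrail event, and the regression test that runs in CI lives right next to it:

```python
# detections/root_usage.py - a detection expressed as plain code, so it can be
# version controlled, reviewed and regression tested like any other software.
def detect_root_usage(event: dict) -> bool:
    """Fire when the AWS root user makes an API call outside a small allowlist."""
    allowed = {"ConsoleLogin"}  # example carve-out; tune per environment
    return (event.get("userIdentity", {}).get("type") == "Root"
            and event.get("eventName") not in allowed)

# tests/test_root_usage.py - the regression tests that run on every change.
def test_fires_on_root_api_call():
    assert detect_root_usage({"userIdentity": {"type": "Root"}, "eventName": "CreateUser"})

def test_ignores_assumed_role():
    assert not detect_root_usage({"userIdentity": {"type": "AssumedRole"}, "eventName": "CreateUser"})
```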

But the challenge with it is that it means that your SIEM, your SOAR, all of these different pieces of your detection tool stack have to support this quite well. And you've then got to have the engineering chops inside your detection function, if indeed you have one yourself, to be able to do all this engineering to embed the detection as code and build the pipelines for it to deploy properly and build all the automated testing around it.

And all of these things are a lot of engineering work, and it's not the kind of thing that you can expect a level one alert triage analyst to be able to deal with. You've got quite specialist people who are expensive and few and far between. And for a lot of organizations, running a full blown SOC yourself is expensive, [00:37:00] right?

Most organizations that I work with don't; even a lot of the bigger Nordic enterprises will outsource it to a managed detection and response provider, because it's just so expensive to run it properly yourself. For 24/7 coverage, you're probably looking at a minimum of 10, 15 people to do it properly.

And some of those need to be very experienced, very expensive.

Ashish Rajan: And know cloud as well.

Nick Jones: And some of them have got to know cloud and that's one of the challenges that comes with it, right? Because they've got to know your endpoints. They've got to know the different providers that you're operating in for the cloud.

If you've got on prem, then I guess we've got to worry about Windows and Active Directory as well. There's a lot of different moving parts. And much like we say to our consultants, you can't be a specialist in everything. The SOC engineers and SOC analysts can't be experts in everything too.

So then if you want to maintain 24/7 coverage, you've got to have enough people who are specialists in enough different things, and be able to map all that out to make sure that you've got enough of the right people on it 24/7. It's very expensive. And quite often, especially at the startup end of things, I'm mostly in AWS, so it's GuardDuty that I see [00:38:00] people using.

We'll turn GuardDuty on and forget about it. And we might have a few CloudWatch alarms or alerts set up somehow that ping Slack or do something else, for other things that we're particularly worried about that we know GuardDuty doesn't detect. But there's a massive gulf between what I see as the very advanced in house SOCs who do a really good job, and what most organizations using AWS, or even cloud more broadly, are doing.

The difference there is enormous. If I look at the really heavily regulated large financial enterprises, especially in the UK where you have CBEST, which is a mandatory regulated red team that happens, I think it's every two years or something, for each major bank in the UK.

The Bank of England says, thou shalt have a red team, and the standards for doing it are quite high, and it's quite an involved and rigorous exercise. And so a lot of these security operations centers have had the daylights beaten out of them by very capable red teamers on a regular basis since 2014 or so.

So they've gotten very good, they've had the investment and [00:39:00] they've built often a lot of their own tools and tricks and I've seen some of them doing some fairly fancy machine learning things on big data lakes full of security data and all kinds of stuff because they know they have to be able to withstand really quite sophisticated attacks, because if they can't, the regulator is going to get angry with them, and they're a core systemic part of the economy, so they have to be able to do that.

At the other end of things, if you're a small org doing a little bit in the cloud, what you've probably got is one of your engineers who's a bit more security savvy than the rest, who's turned on GuardDuty and has configured some things to look for specifically in CloudTrail.

But one of the challenges with the cloud in particular is that a lot of the time, if you've got an attacker who's looking to exploit stuff, there are sometimes things they do that are very standout and unusual. But a lot of the time, it's all abuse of legitimate functionality. You're making the same API call that a cloud admin or a cloud engineer is going to be making.

You're just doing it to change something in a way that gives you as an attacker an advantage or your next foothold or whatever else. So you can't really just blanket alert on these API [00:40:00] calls because otherwise you get a load of false positives when the engineers go in and do their jobs. Yeah. The classic example for a while was back in the day, we always used to recommend alerting on people doing AWS STS get cooler identity, which is the, who am I of the cloud, right?

And that was great in some environments, because the only people who were doing it were the pentesters and the attackers. But then actually we've since discovered that, a, loads of engineers run it to make sure that they're in the right account before they start doing things. And then loads of the CSPMs or various other cloud products that operate within your environments, a lot of them, the first thing they'll do is run STS GetCallerIdentity to validate the credentials, make sure they're still valid, before they try and do any other activity. And so for some environments it worked fine; for some environments we were getting a thousand false positives a day. So it's all very environment specific a lot of the time. And so Datadog and a number of these other vendors have done quite a good job of building out generic rule sets for obvious known bad things, and for chains of things that, if joined together, are [00:41:00] probably bad. But we still find that most of the more advanced clients that I work with on purple teaming engagements and things, they've got a lot of their own very custom detection logic that they've written that matches their environments and the tools they've got and the things their engineers do and how they work and all of these kinds of things, right?
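A minimal sketch of the GetCallerIdentity problem: the raw match is trivial, and all of the real work is the environment-specific allowlist of principals that legitimately run it (the role names below are made up for illustration):

```python
# Alert on STS GetCallerIdentity in CloudTrail, except for known benign principals.
# The allowlist is the environment-specific part that has to be maintained per org.
KNOWN_BENIGN_ROLES = {"cspm-scanner", "ci-deploy"}  # hypothetical role names

def should_alert(cloudtrail_event: dict) -> bool:
    if cloudtrail_event.get("eventSource") != "sts.amazonaws.com":
        return False
    if cloudtrail_event.get("eventName") != "GetCallerIdentity":
        return False
    caller_arn = cloudtrail_event.get("userIdentity", {}).get("arn", "")
    return not any(role in caller_arn for role in KNOWN_BENIGN_ROLES)
```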

Ashish Rajan: So does that mean, because obviously I think the detection engineer field didn't exist before, I feel it's only like less than five years old, but what is the requirement for being a pentester in the cloud space? Cause I imagine a lot of people who end up watching this episode or listening to this would be currently pentesting web apps, network, whatever, and wondering, hey, should I specialize in cloud or not? What is a typical skill set now for a pentester in the cloud space,

Nick Jones: In terms of how do you build the skills, how do you get there?

Ashish Rajan: Yeah, or even if you were to think of, hey, I want to build a detection team internally. I think one of the examples was around, hey, I want to build detection capability for cloud in my organization, because clearly the CSPM is not enough for [00:42:00] me, or the CNAPP is not enough for me.

I need to be able to add the context of the business and that is, that means you're making custom detections. Now, what kind of skill sets are people thinking about?

Nick Jones: So the detection engineering and pentesting skill sets are really quite wildly divergent in many ways. So a lot of the time

you'll see people specialize in one or the other. I know a few people who've switched. You can do it, but maintaining currency in both of them would be a really quite hard thing to do, I think. And so, if you are looking to get into detection engineering, or you want to do some detection engineering for your organization in the cloud space:

What I would probably say is your best bet is to consume a lot of the threat intel reports that come out from the better vendors. Wiz put a fair bit out, but so do people like Unit 42 at Palo Alto or Invictus IR. There's a lot of people who produce these reports of things they see happening in the cloud that threat actors are doing. And you're inevitably going to be slightly behind the curve on that [00:43:00] because, you're hearing the reports after it's happened. But that's a good starting point for most organizations. And I'd combine that then with experimenting with one of the automated simulation frameworks that works well out of the box without too much messing around and tuning and things, right?

So Christophe over at Datadog wrote Stratus Red Team, and that's been running for a few years now, right? The nice thing with Stratus, compared to Leonidas, the one that I wrote, is that it's designed to be out of the box: run a specific attack technique, create all the infrastructure, detonate the technique, get all the logs for it. It's very self contained, self packaged, doesn't require a huge amount of manual interaction, configuring and things. Whereas the approach I took with Leonidas was that it's designed so that you construct attack paths out of individual pieces, and that means you've got to know what it is you're targeting and why.

And you probably want some infrastructure already deployed, because when we built it, we were testing against clients existing workloads and someone needs to be able to very precisely target it. And so that makes a lot more sense if you've [00:44:00] got either offensive security experts coming in to help or if you've got that on staff already, right?

And what I would say is once you've got to the stage where, you're fairly confident with the stuff that's in Stratus, you've got good coverage of most of the common things coming up through the the threat intel reports, then at that point, it's a great time to engage a consultancy to do some of these, what we call purple teaming exercises where when we do one, we go in, we look at the environments, we look at what they're running we work with them to understand what telemetry they've already got coming in, make sure that, they've actually got the data to do something sensible, and then based on what we've seen, we'll do a threat model, work out how an attack is likely to try and exploit their environments and their workloads and then run very specific attack chains and sequences of activity that tie to what that organization is running and how they do their engineering and all of these things. So you're in a position where we're deliberately simulating custom attack paths and things that fit your organization.

And the big benefit [00:45:00] there is it will then show you what we think an attacker is likely to do, right? We can't predict everything, obviously, but we do enough breaking and entering that we can probably work out quite a few of the things they're likely to try.

And some of that will probably overlap with some of what's in Stratus and some of what GuardDuty detects by default. But some of it usually doesn't, in my experience, a good chunk of it doesn't. Once you're at a point where you're pretty confident that you've got the common stuff down and the stuff the industry talks about more broadly down, then yeah, at that point, that's when to look to engage a partner, I'd say, to help out.

Ashish Rajan: Awesome. That's all the technical questions. I've got three fun questions for you.

Nick Jones: Okay. Sure. Sure.

Ashish Rajan: First one being, where do you spend most time when you're not trying to do research in cloud and cloud security.

Nick Jones: Oh, my free time. Yeah.

Mix of travel and photography. The travel's kind of part professional, part personal.

And the advantage of that is it then takes me to interesting places to take photos. So photography is also part professional, part travel, in some respects, right? Yeah. So I've had a thing for photography for years now. I should probably get around to doing something with all these photos. At the moment I've got a few of them up on my walls and a nice collection on my hard drive, but I've never really run an [00:46:00] Instagram

Ashish Rajan: No Flickr account for it.

Nick Jones: No, nothing like that.

Ashish Rajan: Is that still a thing?

Nick Jones: Flickr, I think maybe, but who knows really? So yeah, so that and then or as much as a bit of a stereotype in the cyber security world, I'm a big fan of my craft beers, especially here in Copenhagen. We've got some some amazing craft brewery. So a lot of the time, especially when the weather gets good.

A couple of beers out in one of the nice parks in Copenhagen with a few friends and just chill out and relax.

Ashish Rajan: And followed by, what is something that you're proud of that is not on your social media?

Nick Jones: That I'm proud of that's not on my social media? Yeah. Oh, that's an interesting one. Yeah. I think in many ways actually at the moment, it's the work I'm doing as head of research to try and build out our ability as a business to do more awesome research and have people do more awesome stuff with it. We've always had a core of people who are keen to do conference talks and blog posts and put out, put their stuff out in the world.

But a lot of what I'm doing at the moment is around training and mentoring more junior members of the team or those who've not had that experience. To be able to feel confident to stand up and do these conference talks and put [00:47:00] themselves out there. And we've got an internal conference coming up sometime soon that I'm running that's just for our guys.

And we've got a load of really awesome submissions from the team, so I'm really looking forward to how that goes. It's being called IndependenceCon, because now that we're no longer going to be part of WithSecure, we're going independent as a new entity. That seemed like a fitting name for it.

It's that training and mentoring and leveling people up. I did years of publishing my own research and I still do a bit here and there. But I've now got to the stage where I'm doing a lot of the sort of the mentoring and training and helping everyone else level up.

And that's been really rewarding actually. I've really enjoyed it.

Ashish Rajan: Paving the path for next generation.

Nick Jones: Exactly.

Ashish Rajan: Oh, awesome. And last question. What's your favorite cuisine or restaurant that you can share with us?

Nick Jones: Oh, wow. That's an interesting one. So the cuisine I miss the most from the UK is good Indian food, because Copenhagen is terrible for it.

So every time I go back, I find myself a good curry house. But, oh, favorite restaurant, it's gonna get laughed at by a lot of the Danes in Copenhagen, but honestly an evening at Warpigs in Copenhagen is pretty fantastic. So Warpigs is a combination American barbecue meat restaurant and microbrewery in central Copenhagen. [00:48:00]

And yeah, so you go along, load your plate up with all kinds of amazing smoked meats and things grab a couple of their in house craft beers. And again, in the summer, sit outside, enjoy the, enjoy the vibes and just chill out.

Ashish Rajan: Great smoked meat and craft beer. That's awesome.

Nick Jones: Yeah, right. And I even took a Texan there a few years back and the verdict was, this is decent. Even by the stands of Americans, it's apparently all right. So if you're in Copenhagen check out Warpigs.

Ashish Rajan: Awesome. No, dude this has been really interesting. Thank you for making the time for it, thank you for sharing what you've learned as well, and also the food recommendation. Where can people find you on the internet to talk more about the space, man?

Nick Jones: Nojonesuk on Twitter, or nojones.net, are where I hang out. Yeah, my website and my Twitter are the places to be for getting

Ashish Rajan: I will definitely leave those links in the comments as well.

But dude, thank you so much for coming in. Thank you, thanks for having me. And thank you everyone for watching, we'll see you next time.

Nick Jones: Cheers.

Ashish Rajan: Thank you so much for listening and watching this episode of Cloud Security Podcast. If you've been enjoying content like this, you can find more episodes like these on www.cloudsecuritypodcast.tv. We are also publishing these episodes on social media as well, so you can definitely find these [00:49:00] episodes there. Oh, by the way, just in case there was interest in learning about AI cybersecurity, we also have a sister podcast called AI Cybersecurity Podcast, which may be of interest as well.

I'll leave the links in the description for you to check them out, and also for our weekly newsletter, where we do an in-depth analysis of different topics within cloud security, ranging from identity and endpoint all the way up to what is a CNAPP, or whatever new acronym comes out tomorrow. Thank you so much for supporting, listening and watching.

I'll see you next time.