How Attackers Stay Hidden Inside Your Azure Cloud


In this episode, Ashish sits down with Christian Philipov, Principal Security Consultant at WithSecure, to explore the stealth tactics threat actors are using in Azure and why many of these go undetected. Christian breaks down the lesser-known APIs like Ibiza and PIM, how Microsoft Graph differs from legacy APIs, and what this means for defenders.

  • The 3 common ways attackers stay stealthy in Azure
  • Why read-only enumeration activity often isn’t logged
  • What detection is possible and how to improve it
  • How conditional access and logging configuration can help defenders
  • Why understanding Microsoft Graph matters for security ops

Questions asked:

00:00 Introduction
02:09 A bit about Christian
02:39 What is considered stealthy in Azure?
04:39 Which services are stealthy in Azure?
06:25 PIM and Ibiza API
12:53 The role of Defender for Cloud
18:04 Does the Stealthy API approach scale?
19:26 Preventing Stealthy API attacks
21:49 Best Practices for Prevention in Azure
25:47 Behaviour Analysis in Azure
29:31 The Fun Section

Christian Philipov: [00:00:00] In terms of what you've got, and in terms of capabilities to log and process a lot of these events, typically you would have, for example, log analytics workspaces, which a lot of times organizations will use, where they'll pipe kind of the logs from, let's say, Entra.

So diagnostic settings will be configured, and then that will grab all the logs and then send them off to a log analytics workspace, similar to various kind of individual resources, et cetera. And then that log analytics workspace you've got will populate the various tables based on the logs that you're piping over to it.

And then you can do all sorts of like advanced queries on that to try and flag suspicious activities.

Ashish Rajan: If you come from an on-premise, AWS or GCP background, you probably have very little idea about Azure security, and even if you are in Azure security, you probably don't know what are the common ways attackers remain stealthy inside your Azure subscription.

I had a conversation with Christian Philipov from WithSecure, where he spoke about the three common ways attackers have been using to remain [00:01:00] stealthy inside your Azure subscription and tenants without being detected, using things that, technically, Microsoft calls features. Yes, you heard me. Now we go into a bit more detail about what these methods are and what you can do, especially if you come from a non-Azure background, to understand. Perhaps it's an M&A, and now you have to look after Azure security as well.

Or you are just starting to figure out what are some of the things you should look out for in Azure for doing security if you are starting today. Now, overall, it was a great conversation. If you know someone who is trying to work on Azure security, either in their organization or perhaps who's just trying to understand it a bit better and comes from a different cloud or on-premise background,

This episode is definitely for them. And as always, if you have been listening or watching Cloud Security podcast episodes for a while, and if you've been finding them valuable, I would really appreciate if you could drop us a follow or subscribe on audio and video platforms. Whether you're watching this on YouTube or LinkedIn or listening to this on Apple, Spotify, it really means a lot when you subscribe and follow us.

That helps us to know that we are doing the right thing as well. I hope you enjoy this episode and I'll talk to you [00:02:00] soon. Hello, welcome to another episode of the podcast. This is the Finnish version, with a person from Manchester to make it more interesting, man. Thank you for coming to the show.

Could you share a bit about yourself, your professional journey, man?

Christian Philipov: Yeah, no, absolutely. So I am one of our principal security consultants here at WithSecure. I'm Christian Philipov, and I deal with a lot of our cloud work within the team. I specifically focus on Azure and GCP, but do a bit of AWS and whatever kind of comes through the door really.

And yeah, heading up our specialist services internally is my role at the moment. But overall, yeah, still very much focused on the cloud, and that's still my main kind of focus area.

Ashish Rajan: Awesome. And I guess the topic today is more around the Azure security and how to be stealthy in Azure.

Can you give a gist, I guess, maybe before we dive into the services that you discovered, of what you would classify as stealthy? 'Cause people know what happens on-premise. What does that mean from a stealthy perspective? Am I getting domain admin? What am I getting in the context of Azure? What is considered quote [00:03:00] unquote stealthy?

Christian Philipov: That's a very good question. So I guess the fundamental difference here to looking at cloud versus on-premise detection is the fact that for on-premise systems, we've had a lot of years to realistically develop defensive techniques and also ways of collecting relevant telemetry from systems that we use.

Yeah. To then try and flag kind of suspicious activities. Yep. One really good example that may be a bit overused is, for example, Mimikatz running on some sort of box within your system. That, in practically 10 outta 10 times, will be something suspicious and worth investigating.

Ashish Rajan: Yeah.

Christian Philipov: In contrast, obviously cloud is relatively immature in comparison to more traditional on-premise systems. And so in that sense, in cloud, the detection engineering for suspicious activities is also something that I think is getting into a very good stage now, but it's still kind of growing and developing.

In this case, in Azure specifically, the stealthy part comes from a bit of a limitation that has existed within [00:04:00] the platform and Entra, which is specifically to do with read events. Okay, so like enumeration activities, reconnaissance, that sort of thing.

Ashish Rajan: Right.

Christian Philipov: These have been challenging because typically there wasn't really a good way to log these sorts of events. They fundamentally don't exist in the telemetry that gets produced by Azure slash Entra.

Yeah. In contrast, obviously AWS has CloudTrail, which has a lot of these, even just read-only events that are available there. Yeah, that's right. So you can perform detections based on it.

Yeah. But Azure just did not have them up until recently. So, okay.

Ashish Rajan: Still working towards it.

Christian Philipov: Yeah. Yeah. No, Microsoft's definitely put in a lot of work to try and cover this gap, but there's still a lot of work to be done.

Ashish Rajan: Okay. And the services that you discovered which helped you become stealthy in an Azure context, which ones were they and why are they, I guess, important?

Christian Philipov: Yeah, so a bit of background for people on it: a lot of the Azure stuff fundamentally has a control plane called Graph under the hood that underpins a lot of how [00:05:00] the platform operates from an identity and access management perspective.

So in the past you've had Azure AD Graph, which was the older version that kind of underpinned that whole thing, and then the new one, which is Microsoft Graph. Microsoft Graph is the new one that now has the capability to, for example, log read events that happen against the tenant, which is great.

That's exactly what we want. Yeah. Azure AD Graph, the other one, does not have that, for example. So that's why any interactions that happen with Azure AD Graph just won't really get logged from an enumeration, read-only perspective. And then additionally, you've got a few other APIs, because this is the truth of looking under the hood a bit on Azure and Entra: it's made of a lot of different APIs,

Ashish Rajan: Right?

Christian Philipov: Partially because of purchases and mergers done for Microsoft to build up the Azure platform, and partially depending on kind of the development culture of how to get different APIs to interact with each other within it. But overall, you've got stuff like the Ibiza API, which is the topic of the conversation that I [00:06:00] discussed at fwd:cloudsec and now at Disobey. And also you've got stuff like the PIM API. Yeah. For example, all these are just disparate APIs, and the notion is that a lot of them don't necessarily log that sort of read enumeration because they don't use Microsoft Graph under the hood.

And they are their own thing. Oh. And so it really depends on whether or not their own thing has been implemented to provide that telemetry for customers to actually consume and process.
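
For listeners who want to see what this looks like in practice, here is a rough, illustrative sketch (not from the episode) of the same read-only enumeration against the two control planes Christian mentions. Token acquisition is omitted, the token variables and tenant ID are placeholders, and the comment about the activity-log table assumes Microsoft Graph activity logging has been enabled in the tenant.

```python
# Sketch: the same "list users" read against the two control planes discussed above.
# Token acquisition (e.g. via MSAL) is omitted; the tokens below are placeholders,
# and each API expects a token scoped to its own resource.
import requests

TENANT_ID = "<tenant-id>"            # placeholder
MSGRAPH_TOKEN = "<msgraph-token>"    # audience: https://graph.microsoft.com
AADGRAPH_TOKEN = "<aadgraph-token>"  # audience: https://graph.windows.net

# Microsoft Graph: read calls like this can show up in the
# MicrosoftGraphActivityLogs table once that logging is switched on.
users = requests.get(
    "https://graph.microsoft.com/v1.0/users?$select=displayName,userPrincipalName",
    headers={"Authorization": f"Bearer {MSGRAPH_TOKEN}"},
).json()

# Legacy Azure AD Graph: the same enumeration, but historically this read
# traffic produced no comparable tenant-side telemetry (and the API is being
# retired, as discussed later in the episode).
legacy_users = requests.get(
    f"https://graph.windows.net/{TENANT_ID}/users?api-version=1.6",
    headers={"Authorization": f"Bearer {AADGRAPH_TOKEN}"},
).json()
```

The point is simply that the first call can leave a trail once Graph activity logging is enabled, while the legacy endpoint historically produced no comparable read telemetry.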

Ashish Rajan: Interesting. And, sorry, could you elaborate a bit more on Ibiza and PIM? What are they?

Christian Philipov: Yeah, so these are the lesser known APIs under the hood that are used by Azure.

Okay. No, so the Ibiza API, as much as it sounds pretty cool. I was gonna say it sounds like a party. Yeah. It's not the holiday destination. Yeah. But it's an API that's basically made for the main kind of portal, for example, the web console that you would use to interact with Azure via just the web browser.

So it's used by that to interact with some of the stuff within Microsoft Graph as well as the Azure Resource [00:07:00] Manager API. So the best comparison, I guess, is it's sort of a middleware kind of component to allow the portal to do all these things and interact with the broader APIs in a more extensible manner.

Ashish Rajan: Right.

Christian Philipov: And then the PIM API is its own thing, primarily because Privileged Identity Management was its own service that was then integrated into Entra to underpin and control, yeah, just-in-time access to various roles. And overall, it doesn't integrate as natively with Graph, just because it is its own thing that was linked into it.

That's why activities that happen there also don't necessarily get logged via Microsoft Graph in the same way.

Ashish Rajan: Interesting. And the PIM service is an interesting one, because, plus, I think you require the E5 license for it, or E7, whatever the license requirement is.

Yeah.

Christian Philipov: So PIM requires the Entra P2 licenses, for example, to use. So it is an additional cost to be added to, that's right, your kind of license management,

Ashish Rajan: because not many people, I guess, maybe most of the enterprises may be using it, but it may not be as widespread. But the Ibiza [00:08:00] one is probably much more widely used.

Christian Philipov: Yeah. Yeah. The Ibiza one is just fundamentally there whether you have the license or not, as long as you're using Azure.

Ashish Rajan: Yeah.

Christian Philipov: Kind of under the hood. It will, the portal will be using some calls to the Ibiza API. Yeah. And yeah, so it is available to people just as long as you're just using Azure.

Ashish Rajan: Interesting. And I guess if it's not being recorded, it feels a bit more, oh wait, does that mean there is no detection that is possible? Or is that more, yeah, I guess, it makes me a bit nervous, the fact that there is no detection possible.

Christian Philipov: Yeah. So that's a very good question.

And to clarify, it does sound bad, but it's not that bad realistically, because the detection issue we have is for read events specifically. Yeah. So for example, let's say me getting a list of users in the tenant or getting a list of groups, et cetera, versus state-changing actions. So for example, if I manage to compromise a high privileged user and try to perform some changes within the environment, those will still get logged, because even in the old graph, state-changing activities have [00:09:00] always been logged. Yeah. The gap has primarily always been in the enumeration, kind of initial access, recon stages.

Ashish Rajan: Like the read only thing that you mentioned earlier.

Christian Philipov: Yeah, yeah. Yeah. So it does sound bad in the sense that like these sorts of events, there isn't a good way to detect purely based on that, similar to what you might want and have in AWS for example, with CloudTrail.

But in the more practical sense of trying to detect active attacks against your environment, you can still very much figure out. Like a lot of the events that happen around that attack. So for example, telemetry that gets generated when a user has an additional MFA factor added.

Ashish Rajan: Yeah.

Christian Philipov: Or telemetry that gets generated when the user tries to add another secret to a service principal.

Like all these things, because they are state changing, will still get logged. Yeah. And so we do still have opportunities to get these detections out the door.
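
As a rough sketch of the state-change hunting Christian is describing, the queries below (shown as Python strings so they can be fed to whatever query client you use) look for new service principal credentials and new MFA registrations in the Entra audit logs. The exact OperationName values are approximations that can vary by tenant and log version, so treat them as starting points rather than finished detections.

```python
# Rough hunting sketch for the state-changing events mentioned above.
# Operation names are approximations to adapt, not authoritative detection content.

NEW_SP_CREDENTIAL = """
AuditLogs
| where TimeGenerated > ago(7d)
| where OperationName has "Add service principal credential"
| project TimeGenerated, OperationName, InitiatedBy, TargetResources, Result
"""

NEW_MFA_METHOD = """
AuditLogs
| where TimeGenerated > ago(7d)
| where OperationName has "User registered security info"
| project TimeGenerated, OperationName, InitiatedBy, TargetResources, Result
"""
```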

Ashish Rajan: Yeah. And would you say, now that you're able to record the state change, is it hard? 'Cause I imagine I could just be a normal user trying to figure out, hey, how many users do we have? Which a lot of cloud [00:10:00] security engineers would do as a normal activity. Yeah. Yeah. But that's not being recorded, to what you're saying, in Azure, because it's not a state change. It's just, I'm trying to find out, hey, if there is a list of users, who are they?

Who's my domain admin? In terms of, I guess, what's a takeover example? Just to draw a parallel from, say, on-premise, people can try and go for, hey, I wanna get domain admin. And I guess where I'm going with this is that if there is no detection for read actions, but a state change has a quote unquote log history, how does detection work in Azure for state changes?

And do you use Sentinel, or what's the path people choose for detection? 'Cause I think CSPM is another path people can take. What are some of the options for them, if they want to go down that path? And can this be mitigated? Because it sounds like a behavior thing, it doesn't sound like a, hey, I'm doing a, what do you call it, a blob storage open to the internet kind of a thing.

Christian Philipov: Yeah, no, that's a good question. So overall, in terms of what you've got, and in terms of capabilities to log and process a lot of these [00:11:00] events, typically you would have, for example, log analytics workspaces, which a lot of times organizations will use, where they'll pipe kind of the logs from, let's say, Entra. So diagnostic settings will be configured, and then that will grab all the logs and then send them off to a log analytics workspace, similar to various kind of individual resources, et cetera. And then that log analytics workspace you've got will populate the various tables based on the logs that you're piping over to it.

And then you can do all sorts of advanced queries on that to try and flag suspicious activities. That's typically the way a lot of organizations do it. You can use Sentinel specifically as well, more as a SIEM sort of solution. It effectively uses a log analytics workspace under the hood; it's just optimized for security operations and raising alerts based on kind of behavior analytics done by Microsoft under the hood for all of these things. But that's definitely an option. Or you can obviously pipe them onto whatever third party product that you're using as well.

So like, [00:12:00] for example, Splunk, obviously a very common one for people in security teams to then get the logs out into. Yeah.

Christian Philipov: And that's typically how, a lot of times, from an organizational perspective, with bigger enterprises, those are the different ways that we see people go about it.
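
For anyone who wants to try the workspace-query approach Christian outlines, a minimal sketch using the azure-monitor-query SDK might look like the following. It assumes diagnostic settings are already sending Entra logs to a Log Analytics workspace; the workspace ID is a placeholder.

```python
# Minimal sketch of querying a Log Analytics workspace that Entra diagnostic
# settings are already feeding. Install with:
#   pip install azure-monitor-query azure-identity
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder

client = LogsQueryClient(DefaultAzureCredential())

# Example query: recent Entra audit events, newest first.
query = """
AuditLogs
| where TimeGenerated > ago(1d)
| project TimeGenerated, OperationName, InitiatedBy, Result
| order by TimeGenerated desc
"""

response = client.query_workspace(WORKSPACE_ID, query, timespan=timedelta(days=1))
for table in response.tables:
    for row in table.rows:
        print(row)
```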

Ashish Rajan: Yeah.

Christian Philipov: Obviously you've got the CSPM angle, so you can use in the context of Azure specifically, if you want to flag potentially suspicious events there or resources that might have been like exposed. So for example, a storage blob.

Yeah. That has been now publicly exposed. You can use either the native CSPM, like people that are deep into the Microsoft ecosystem often end up using Defender for Cloud,

Yeah. Or they can use obviously a third party one. Yeah. Ultimately, a lot of the logs similarly will get piped, from diagnostic settings, et cetera, configured to then send over a lot of that telemetry to the CSPM, alongside active access to the resources, and then flag stuff accordingly based on that.

Ashish Rajan: And where does Defender fit in? Because I'm glad you mentioned it, because a lot of people assume that, oh, I don't need any [00:13:00] extra playbook on my Sentinel because I have Microsoft Defender looking at all of it. How is Microsoft Defender in the Azure landscape? I know it does a great job on the other side with Office 365, SharePoint, all of that as well. In the Azure landscape, how is Defender for Cloud used? What is it for? Because people may not even know what that is. What is it, and what's the superpower that it has that makes people use it?

Christian Philipov: It's an excellent question. So Defender for cloud, it doesn't help that, obviously there are a lot of products in the Defender for X Yes.

Suite. And so it does make it a bit difficult to figure out what encompasses what. Yeah. Defender for Cloud is a broader term for a lot of other kind of sub-services within it. Okay. There's Defender for Resource Manager. Okay. Defender for Storage. Oh, individually, each of them?

Yeah. Yeah. No, they are technically individual components that are part of Defender for Cloud effectively, which is where the complication comes from. Oh. Because obviously you've got these individual components to it [00:14:00] that you can enable and disable accordingly, just because each of them is priced differently based on resources, based on number of requests, et cetera, et cetera.

So ultimately, Microsoft gives you a chance to enable and disable whichever ones you care about, and then obviously scope it up, scope it down, depending on what exactly you wanna focus on.

Ashish Rajan: Yeah.

Christian Philipov: But fundamentally, to answer your original question, where I think the strength of it lies is that, obviously, especially if you're using the Microsoft ecosystem a lot,

Ashish Rajan: yeah.

Christian Philipov: It does make sense to keep using some of those, because they do interact nicely and integrate nicely with each other from a behavior analytics perspective.

Ashish Rajan: Right?

Christian Philipov: And so the more of this suite that you use, technically, the implication from Microsoft is there that it then provides a lot more telemetry for Sentinel and the various Defender products to be able to do a lot of these behavioral analytics and kind of machine learning that they do behind the scenes to flag suspicious events for your environment.

Obviously, from a more practical perspective, it varies from company to company [00:15:00] how much value they will get out of it. Yeah. It's a difficult one, and I'm sure you're well familiar, obviously you've got the features versus the costs. Yep. And so it's a difficult equation to go and balance.

Ashish Rajan: Yeah.

Christian Philipov: But ultimately, I guess that's the strength of it, that it integrates nicely with a lot of native services. For example, containers: if you care about that, you might then be interested in using the Defender for Containers equivalent solution there. Or if you've got a lot of sensitive materials stored in databases, you can look at enabling that.

And then it has the capability to flag kind of suspicious events happening with those individual resources.

Ashish Rajan: Yeah. I think one of the challenges with the Azure space is just knowing the number of services that are relevant. I think, as you mentioned, storage is the first thing, and then, oh yeah, where does Purview fit into this?

I'm like, oh, that's another can of worms and I'm not gonna open it. But with Defender for Cloud, the behavior that we just spoke about from a read-only perspective, I don't think it has the whole behavior analysis from that perspective. Can Defender for Cloud do the [00:16:00] behavior analysis, to just say, hey, Christian seems to be doing a lot of read actions, but Ashish on the other end seems to be just doing way too many API calls?

Christian Philipov: Yeah, no, it's a difficult one, because I don't believe it has that level of, like, distinction.

Ashish Rajan: That would make it like the level of a popular SIEM. 'Cause I don't hear of Defender for Cloud as a really popular, hey, like a user behavior analytics kind of a thing. So I imagine it's not there.

Christian Philipov: No. So a lot of the user behavior stuff is focused very much on kind of the platform level stuff within the Entra tenant itself. Right. So, for example, it does have behavior analytics for kind of the way you approach sign-ins, where you do the sign-ins, at what point do you do sign-ins, et cetera.

So, a common example is obviously if you log in primarily from, let's say, London, 'cause you're based in London, et cetera, and then at some point you randomly log in from an IP that's based in Finland. Yeah. Or even just some other part of the UK, [00:17:00] it can then figure out, oh, this is a bit suspicious. Yeah. Depending on the distance, depending on how different it is, it'll flag it at different kind of levels. Yeah. But that's mostly where the behavioral analytics is focused: the tenant and user sign-in, user kind of basic usage sort of stuff, which is still very important, because it does capture and can capture a lot of just overtly suspicious stuff.
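
A hedged example of the kind of sign-in hunting described here: the query below (a sketch, not a production rule) surfaces users signing in from a country they have rarely or never used before. Column names come from the SigninLogs table; the look-back window and thresholds are assumptions to tune for your environment.

```python
# Illustrative sign-in hunt for the "logs in from London, then suddenly from
# Finland" pattern. The thresholds and windows are assumptions, not a drop-in rule.

RARE_SIGNIN_COUNTRY = """
SigninLogs
| where TimeGenerated > ago(14d)
| extend Country = tostring(LocationDetails.countryOrRegion)
| summarize SignIns = count(), FirstSeen = min(TimeGenerated)
    by UserPrincipalName, Country
| where SignIns <= 2 and FirstSeen > ago(1d)
"""
```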

Like obviously you can try to be significantly stealthier. You can still use a different IP that's based in London, so that might not necessarily trip a lot of wires in the same way. Yeah. But then obviously if you've got a proper, actual security team looking at it, they'll be like, oh, this is weird. Ashish logged in from a slightly different IP this one time. Yeah. Just completely out of the box. And then an hour later he just logged on from his normal one. So maybe, what

Ashish Rajan: Because we've been talking about detection. Yeah. Is there like a prevention that we can apply over here? And I guess I'm obviously focusing much more on the stealthier side. Oh, actually, maybe before we go on to the prevention, many [00:18:00] people just don't have one subscription with Azure. They probably have a lot. Probably multiple tenants as well.

And how does this kind of, were you able to scale it out across a larger environment? Or, for people who are listening or watching, is this a one tenant problem, or is this a one subscription problem? In terms of how stealthy I am, or can be, can I use that to go completely?

The blast radius could be the entire subscription.

Christian Philipov: So it really, yeah, it really depends on kind of the context of the organization and also the level of access that you've managed to, let's say if you are a malicious kind of threat actor, the level of access that you do gain to these various environments.

You're absolutely right. There are a lot of organizations that are spread across multiple Entra tenants that then each have multiple to hundreds of subscriptions within each. So realistically, the access and kind of blast radius there really depends on, specifically, the initial access vector that you've managed to compromise.

Ashish Rajan: Right.

Christian Philipov: The stealthy API approach absolutely is will encompass whatever access, like the percentage you've managed to compromise will [00:19:00] have, so that access could vary from one subscription in one tenant to three tenants, and then several hundreds of subscriptions. It really just depends on, oh, level of segregation done on by controlling who has what access and not like giving it overtly to like people having significant access over a lot and a large scope of resources, which unfortunately is still very much a thing. So it is likely that you might end up in a situation like that. In terms of, preventing it, it's a very good question because it, there isn't a simple way of preventing the core issue. In the sense that there, it's a fundamental limitation with some of these APIs. So like the Ibiza API that there isn't a way to log the read events. Yeah. So you can't really prevent that. But like we discussed a bit earlier it's a focus on kind of the state changing stuff and kind of flagging on that is the immediate.

I thought from my end on like how to go about trying to still make sure that we can react quickly if some suspicious [00:20:00] events are happening in the tenant. Obviously the lack of kind of telemetry for reconnaissance is definitely a gap and it's definitely difficult and I appreciate, like it does make it a harder job for blue teams and just security personnel. To be able to like super quickly react and stop somebody from before they can even like, touch anything. Yeah. Within the tenant. Yeah. But it is the reality of the situation. So it is important for us to try and work with what we've got and then just continue on waiting for Microsoft to expand and try and cover these gaps.

The other stuff that can be proactively done to minimize it is, for example, stronger conditional access policies. That very much is a big control regardless of whether the activity is stealthy or not, because if there are strong controls to prevent you from logging into a user's account in a kind of insecure manner,

so for example, let's say a common conditional access policy that a lot of enterprises have is allow-listed IP ranges, which does make it significantly harder. 'Cause then an attacker has to compromise an endpoint that either exists [00:21:00] within those IP ranges, or fully compromise, let's say, an employee device that has a VPN that can get them to those IP ranges to be able to log in.

Yeah. Ultimately, the more we kind of force attackers to have to maintain persistent access, as well as try and further compromise devices that are managed and, let's say, have an EDR installed. Yeah. That does increase the likelihood of their activities being detected. Ultimately, it's not necessarily about whether we're gonna catch them in one go, but it's about forcing them to have to do as much as possible to maintain and persist within the environment, which then would increase the chances of somebody detecting, oh, this seems a bit weird, 'cause this has been going on for a couple of days and it seems a bit anomalous, so let me go and investigate this a bit more. And oh wait, this might be an actual intrusion. That's the main focus area.
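
To make the allow-listed IP idea concrete, here is a hedged sketch of creating a trusted named location and a location-based block policy through Microsoft Graph. The token, CIDR range and display names are placeholders, the calling identity would need conditional access policy write permissions, and the policy is deliberately created in report-only mode so its impact can be reviewed before enforcement. Similar building blocks can be extended to service principals, which Christian touches on shortly.

```python
# Hedged sketch of the "allow-listed IP ranges" conditional access idea via
# Microsoft Graph. Token acquisition is omitted; values below are placeholders.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<graph-token>"  # placeholder; needs Policy.ReadWrite.ConditionalAccess
headers = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# 1. Define the trusted egress ranges as a named location.
named_location = requests.post(
    f"{GRAPH}/identity/conditionalAccess/namedLocations",
    headers=headers,
    json={
        "@odata.type": "#microsoft.graph.ipNamedLocation",
        "displayName": "Corporate egress ranges",
        "isTrusted": True,
        "ipRanges": [
            {"@odata.type": "#microsoft.graph.iPv4CidrRange",
             "cidrAddress": "203.0.113.0/24"}  # placeholder range
        ],
    },
).json()

# 2. Block sign-ins from everywhere except that location (report-only first,
#    so the impact can be checked before enforcing).
requests.post(
    f"{GRAPH}/identity/conditionalAccess/policies",
    headers=headers,
    json={
        "displayName": "Block sign-in outside corporate IPs (report-only)",
        "state": "enabledForReportingButNotEnforced",
        "conditions": {
            "users": {"includeUsers": ["All"]},
            "applications": {"includeApplications": ["All"]},
            "locations": {
                "includeLocations": ["All"],
                "excludeLocations": [named_location["id"]],
            },
        },
        "grantControls": {"operator": "OR", "builtInControls": ["block"]},
    },
)
```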

Ashish Rajan: Right, and conditional access policies definitely sound like the right path.

I'm trying to think from a prevention perspective. You know how a lot of people talk about, hey, if I have [00:22:00] least privilege, let's just use that as a word. 'Cause global admin, if you have the permission, you probably have it across the board. 'Cause I think, for context, for people who don't understand it, what used to be domain admin on-premise, global admin is the equivalent in Azure land.

And you can still go back and forth between on-premise as well as the Azure portal kind of land, in terms of the lay of the land. 'Cause I imagine a lot of people coming maybe from an AWS background, a GCP background, they will be listening and watching and going, okay, I guess you can be stealthy.

This is probably bad, harder to detect. What do you say are fundamental things you recommend people to have in their Azure footprint, for lack of a better word, across subscription, across tenancy? Conditional access sounds like a great one to start with.

Is there any other thing that you recommend which, to what you said, is a prevention for, maybe this is just one of the things, but it can prevent a lot more? Like, for example, in AWS, people say, hey, make sure CloudTrail is on, GuardDuty is on, all of that good stuff is on. What's the [00:23:00] equivalent in the Azure land?

Christian Philipov: Okay. No, that's a very good question. Absolutely. I think the line of defense is definitely your conditional access policies, because they fundamentally underpin all your login mechanisms, to both kind of user accounts, but also you can even include service principals, so machine account logins as well. It's a fairly recent addition and it does require a bit of an extra license. But you can restrict, for example, service principals, which have historically been of increased interest to attackers, because they typically won't be included in conditional access policies, because the initial focus was on users and restricting user access.

And then obviously that left machine accounts as being the next kind of viable target for attackers. Yeah. Because typically they won't necessarily be restricted by these conditional access policies. That's right. So as long as you manage to compromise a high privilege service principal, you can go wild from anywhere, regardless of whether or not the users are typically required to, for example, only log in via a virtual desktop that's obviously locked down and everything else. Conditional access policies, [00:24:00] core one. Making sure that all your logs are being, first of all, actually generated, turned

Ashish Rajan: on.

Christian Philipov: Yeah, sent somewhere. Yeah. That's definitely a very good one. But overall, another very important one, but unfortunately also the hardest one for larger enterprises to implement, is exactly that concept of restricting privileges down as much as possible. It's very much a difficult process, because if you're a large enterprise, you typically have a lot of different teams that have to work in different silos within that business.

Yeah. And so, for access, usually you either have a central team that's technically supposed to manage access across all these other teams, and then dealing with all the requests around it. Figuring out a minimum viable role, yeah, is usually a very hard thing to do. Yeah. And so you often end up falling back to roles that work and allow the people to do their work.

Yep. And so ultimately you accept the risk with it of [00:25:00] saying, ah, it's not ideal, but we'll get to it at some point.

Ashish Rajan: But I trust Ashish, he would not do anything wrong.

Christian Philipov: Yeah, exactly right. It's definitely a level of, obviously, it's perfectly fine to trust your employees, considering your risk model for insider threats and whatever.

But it's a fundamental thing of how a lot of businesses operate. And that's also why it's still very much a problem even in 2025, where you often end up in situations where you're reviewing an environment and you're like, ah, does everybody in this team really need kind of editor access, like contributor access in Azure, kind of editor in GCP, or the AWS equivalent?

One step down from administrator. Yeah. Oh no, you often come into these, and then there's usually, ah, yeah, but that team needs it. And it's, does it really, though?

Ashish Rajan: Yeah. And I was about to say, is there a service in Azure that allows you to, almost not use behavior, but go into the history of saying, hey, what was Christian up to?

Oh, [00:26:00] actually, 'cause this is pointless, because there's no read action. It's only change-of-state actions that are being recorded for Christian.

Christian Philipov: Yeah. So there is a service. I believe it was a solution that was bought and then rolled out into Entra, that works in the sense that it analyzes the types of permissions that you have used over a certain period of time and then suggests a lower

Ashish Rajan: Seems like you haven't been using admin for a while, effectively.

Christian Philipov: Yeah. And it provides you a more custom kind of permission space.

Was it Entitlement Manager? I just can't remember the name, primarily because, I believe, unfortunately, I've not really seen a lot of adoption of that. And I assume it's probably because of price points. Oh, wow. Oh, so

Ashish Rajan: it's one of those license things?

Christian Philipov: Yes. It does come under a separate license.

And from memory, or at least to my poor bank account, it's a fairly expensive license suite. But there are some services that can do that. But obviously mileage will vary. Yeah. Fair. In [00:27:00] terms of other stuff within Azure, I can't really think of any good one other than that, that would be within the

Ashish Rajan: Sure. No, but that's a good one because I think the way I saw it is okay, if people are being stealthy, you may not be able to detect it real time for now, but at least you're able to go back and see if you had the capability for it.

Someone's threat hunting and they have those kind of logs to see, okay, what was someone up to over a period of, I don't know, six months or one month or whatever, they probably could use something like permission management. Okay. So it seems like there's a Yeah, they can,

Christian Philipov: They can definitely use it to, for example, flag any sort of excessive permissions that you seem to have that you're not really using.

Yeah. In terms of trying to detect this more kind of stealthy API usage, you're still gonna fall back a lot to sign-in logs, how people are actually using those identities, and figuring out some anomalous factors there in terms of differences. Oh, it says that they've signed in usually from a Mac endpoint, but there's this one time that they signed in from a Windows user agent, or something like that.

Ah, okay. Admittedly, it obviously really depends on the skill set of the [00:28:00] threat actor, in terms of how sneaky and how stealthy they can be across that whole range of things that might flag them.
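
A rough sketch of the "usually a Mac, suddenly a Windows user agent" hunt mentioned above: the query compares today's sign-ins against a 30-day baseline of operating systems per user. The baseline window is an assumption, and DeviceDetail fields can be empty for some sign-in types, so this is a starting point rather than a finished detection.

```python
# Sketch: flag sign-ins where a user appears on an operating system they have
# not used in the baseline period. Windows and thresholds are assumptions.

NEW_OS_FOR_USER = """
let baseline = SigninLogs
    | where TimeGenerated between (ago(30d) .. ago(1d))
    | extend OS = tostring(DeviceDetail.operatingSystem)
    | distinct UserPrincipalName, OS;
SigninLogs
| where TimeGenerated > ago(1d)
| extend OS = tostring(DeviceDetail.operatingSystem)
| join kind=leftanti baseline on UserPrincipalName, OS
| project TimeGenerated, UserPrincipalName, OS, IPAddress, AppDisplayName
"""
```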

Ashish Rajan: Yeah, that's right.

Christian Philipov: But, in the end, there are definitely different logs, and then obviously the state-changing stuff, that do allow you to still get a grasp on activities happening in your tenant.

Yeah. So as much as it sounds really bad, it's not as bad as it sounds. Obviously it'll be really good if Microsoft continues trying to cover these gaps, and they are definitely trying. Now, for example, the Azure AD Graph that I mentioned earlier, which was the graph before Microsoft Graph, is gonna be deprecated in the middle of this year, finally, after a long period of deprecation.

So that's really positive news, 'cause that does mean that a very common, well-known method of just enumerating and getting information about the tenant is going to disappear, so people are going to fall back to trying to find more novel stuff. Yeah. So, for example, the Ibiza API might be the next focus for kind of tool development, et cetera, as it's one thing that does have that gap.

Yeah. Which means that in the future, I am hopeful Microsoft will start [00:29:00] mitigating, either moving away from it, for example, to maybe an Ibiza v2 that does have an ability to send the logs to Microsoft Graph, or something like that. I can very much see it as a potential.

'cause I think Microsoft does realize it, it is a problem. It's not that, they're probably like intentionally not doing it. Yeah. It's probably just limitations under the hood of how quickly they can change everything before stuff breaks. Yeah. And so it's just a long process, I think.

Ashish Rajan: Fair. And I would say a positive note is a good way to have the tail end of our conversation as well.

Those were most of the technical questions I had. I've got three fun questions for you as well. Okay. First one being, what do you spend most time on when you're not trying to find these stealthy ways of being in Azure?

Christian Philipov: Oh primarily play video games. It's gonna be a very stereotypical answer.

So I do enjoy finding all sorts of, new stuff to play with groups of friends or just

Ashish Rajan: Oh, nice.

Christian Philipov: Solo, solo from an RPG standpoint. Other than that, probably D&D, oh, nice, on the side. That's pretty fun. Do you guys dress up as well?

Not [00:30:00] really. Halloween. Not Halloween. Oh, Halloween, okay. I do have one major hope for Halloween that I end up taking out every year or so. Oh, fair. But no, it's really good. I definitely think that the role play aspect, for example, I really enjoy it just 'cause it gets you thinking from different perspectives.

I do think, as much as this might sound a bit kinda shoehorned, it does make you appreciate a little bit more getting into different perspectives. That is helpful even within kind of a more technical cybersecurity context. Yeah. You

Ashish Rajan: can bring that back.

Christian Philipov: Absolutely. Yeah. Yeah. I think just the ability to be able to empathize and figure out, okay, the, they've done, they've made these decisions.

To get to this stage, it's not necessarily always due to somebody misunderstanding how stuff works or et cetera. There could be very well like legitimate reasons that kind of led to certain decisions being made. So it's not always a case of, ah, somebody, somebody did something stupid.

Yeah. It's often cases of just that's the best that we ended up doing under the circumstances that we were at. And so I think that does [00:31:00] help you out a bit to get to that viewpoint.

Ashish Rajan: Fair. And what is something that you're proud of that is not on your social media?

Christian Philipov: I'd say I'm proud of just being able to help people in the industry. Because, for example, a lot of the work of making a lot of these lesser known APIs a bit more public, and just in general being able to share this knowledge with the broader kind of cloud security community,

a lot of it does fundamentally come from a desire to just get people to know a bit more about it and just help people to be aware of it. 'Cause it's not that all of this is, I personally don't think that this is incredibly novel, because I suspect that some people, or even, I suspect, maybe threat actors, are aware of some of these things.

But it's important that we as a community just share a lot of this information openly with each other. Yeah. To figure out, okay, there are some limitations here, but what can we do about it? Like, how can we end up fixing it?

Yeah. So I guess that's from a personal perspective, I'm always really glad to see [00:32:00] research that actually helps either other people with their work or it spawns some other research off the back of it.

That's the sort of thing that I count as a personal achievement, seeing that you've actually managed to help other people. Realistically, you've managed to help the community get better and overall make everybody safer in a certain sense.

Ashish Rajan: Yeah. No, that's a good mission to have as well.

Final question, what's your favorite cuisine or restaurant that you can share with us?

Christian Philipov: Ooh. No, this is a good one. My personal favorite restaurant is this really small restaurant on the way to Bansko in Bulgaria. Okay. It's a really small kind of family diner slash restaurant, incredibly small, but they make the best chicken fillet ever.

Oh. Bang, chicken fillet. Absolutely.

Ashish Rajan: Is chicken fillet a thing in Bulgaria? Yeah.

Christian Philipov: Yeah, absolutely. Oh, wow. Yeah. No, it's the best chicken fillet, absolutely. Best chicken fillet. A hundred percent. Oh, wow. Oh,

Ashish Rajan: do you remember the name of the place?

Christian Philipov: [00:33:00] Oh, no, I don't. I don't think it even has a name. Like it's a legitimate

Ashish Rajan: It's like, what, people,

Christian Philipov: family kind of restaurant sort of thing. But it's incredible that I managed to find this. So

Ashish Rajan: is it like on a highway somewhere? Yeah,

Christian Philipov: it's on the highway, basically on the road to Bansko. But it's a very kind of niche answer. Yeah. If anybody does manage to find it, I'll be well amazed.

Oh wow. Okay. But it is incredibly good. But overall, I mean, from a cuisine perspective, apart from some vegetables that I'm not really a big fan of, I'm fully open to everything. For example, Asian cuisine, Japanese cuisine, really like ramen, best thing ever.

Yeah. But yeah, that's I guess the end. Wait,

Ashish Rajan: so if you were stuck on an island somewhere, you probably would have that chicken fillet every day?

Christian Philipov: Oh if possible. Yeah. That'll be awesome.

Ashish Rajan: No. That was all the questions I had, and I think it's an OSINT challenge for people to find out,

where is this place? You did give enough hints. Yeah. It's on the road to, what was the place,

Christian Philipov: to Bansko. To Bansko, which is one of the biggest ski resorts in Bulgaria.

Ashish Rajan: Oh, oh. Oh my God. [00:34:00] It's on the way to ski resort as well. Yeah. Yeah. When else do you need a chicken fillet?

Christian Philipov: No, it's before the big Pirin golf course.

Ashish Rajan: Oh, wow.

Christian Philipov: I've given enough hints. I think people can eventually find it. There are a bunch of restaurants kind of one after the other, 'cause usually it's a pit stop for people to stop by and need something along the way, so people can find it, and do definitely visit it.

It's super

Ashish Rajan: And cheap. It's cheap as well. I think that's the OSINT challenge for people who may be interested. Where can people find you on the internet to talk a lot more about the work you've been doing with Azure?

Christian Philipov: Yeah, absolutely. No, hit me up on whatever, like I'm on Twitter slash X, I'm on Blue Sky, LinkedIn whatever floats everybody's boat.

Ashish Rajan: I will leave those links over there as well. But dude, thanks so much for coming in, man. Absolutely. No, thanks for having me. Great conversation. It was fun for me as well. Now I need to put up my OSINT challenge to find out where this place is. But I hope everyone enjoyed the conversation.

Talk to you soon. Thank you so much for listening and watching this episode of Cloud Security Podcast. If you've been enjoying content like this, you can find more episodes like these on [00:35:00] www.cloudsecuritypodcast.tv. We are also publishing these episodes on social media as well, so you can definitely find these episodes there.

Oh, by the way, just in case there was interest in learning about AI cybersecurity, we also have a sister podcast called AI Cybersecurity Podcast, which may be of interest as well. I'll leave the links in the description for you to check them out, and also for our weekly newsletter, where we do in-depth analysis of different topics within cloud security, ranging from identity, endpoint, all the way up to what is a CNAPP, or whatever new acronym comes out tomorrow.

Thank you so much for supporting, listening and watching. I'll see you next time.