Kubernetes Security Trends 2024 | Software Supply Chain Security, Zero Trust and AI


Kubernetes is shaping the future of cloud native technology, with interest from security folks, businesses and developers. What does the future of Kubernetes security look like? At KubeCon + CloudNativeCon NA 2023, we spoke to Emily Fox, chair of the CNCF's Technical Oversight Committee and Software Engineering Lead at Red Hat, about how Zero Trust plays out in the Kubernetes environment, challenges and solutions in securing the software supply chain within Kubernetes, the impact of AI workloads on Kubernetes, and the future of edge computing and Kubernetes.

Questions Asked:
00:00 Introduction
02:23 A bit about Emily
02:51 What is Supply Chain Security?
03:51 What triggered this conversation?
05:10 Supply Chain Security in Managed Kubernetes
06:07 What is Zero Trust?
07:24 Implementing Zero Trust
09:29 The role of Security and Compliance
11:13 Compliance as code in Kubernetes
13:22 What is Edge?
17:41 The impact of AI on Security
20:39 Detection for AI and Kubernetes
22:29 How are the skillsets changing?
25:00 Security for Open Source Projects
28:01 The fun section

Resources shared during the episode:
Malicious Compliance: Reflections on Trusting Container... - Coldwater, Cooley, Geesaman, McCune
Keylime - https://keylime.dev/
SPIRE - https://spiffe.io/docs/latest/spire-a...
SPIFFE - https://spiffe.io/

Ashish Rajan: [00:00:00] Kubernetes supply chain, Kubernetes AI, Kubernetes security for zero trust. Kubernetes, I can keep going about Kubernetes and the trends that you can expect. In this episode, we had Emily Fox. She works for Red Hat and works very closely with the CNCF community, and we talked about some of the topics that are top of mind for a lot of people in Kubernetes.

Specifically, we spoke about supply chain issues. We spoke about the whole SBOM question, how complicated and how viable it is for the Kubernetes space or the open source space in general. We spoke about zero trust, we spoke about the AI influence on the Kubernetes context as well, and we specifically talked about the CNCF ecosystem for security.

Having a security background also meant that she was able to shed a lot of light on Kubernetes from a security perspective: what you can expect in the coming weeks, months, and years, but also, from her perspective, what is different about open source projects, and if you're thinking of doing zero trust in the Kubernetes space, what would be involved.

If you know someone who's quite keen to know about what's coming in the [00:01:00] Kubernetes space, or just the supply chain or the AI impact or zero trust in the Kubernetes space, then this is the episode for you. Share it with any of your friends or colleagues who are also researching the impact of AI, zero trust, supply chain, and a lot more on the Kubernetes space, and specifically what the CNCF is doing to help projects graduate.

Like, for example, Cilium graduated recently, which is an open source project. What does that mean, and how would that change things moving forward? If this is your second, third, maybe even 10th or 50th episode of Cloud Security Podcast that you're listening to, or maybe watching on our YouTube channel, and you have been finding us valuable, I would really appreciate if you could take a few moments to drop us a review or rating on a popular podcast platform like iTunes or Spotify.

That is if you're listening to this. If you are watching this on YouTube or LinkedIn, definitely give us a follow or subscribe. It definitely helps us spread the word and lets other people know that we have a community that we would love to welcome them into. We are a growing community of over 50,000 people so far.

So we would love to keep growing that and keep spreading the good message of cloud security and how to do it. This was a [00:02:00] conversation we had at KubeCon North America, where we were a couple of days ago. Thank you to everyone who came in and said hello, took pictures and videos with us, and was kind enough to appear in the LinkedIn videos that I post for my daily vlogs from the conferences we attend. It really means a lot. So thank you, thank you, thank you to everyone who came and said hello to us at KubeCon.

I hope you enjoyed this episode of Cloud Security Podcast. I'll see you in the next one. Peace.

Ashish Rajan: To start off with, could you share a bit about yourself with the audience?

Emily Fox: So my name is Emily Fox. I work at Red Hat. I'm our security lead in emerging technologies, and I'm our security community architect in our open source program office.

That's my day job. But I'm also the chair of the Technical Oversight Committee for the Cloud Native Computing Foundation, and I have previously co-chaired KubeCon three times, for Amsterdam, Detroit, and Valencia. So I've been in the ecosystem for a while, and I do a lot of other things in open source, not just in CNCF.

Ashish Rajan: That's awesome, and lovely to have you in the company as well. Maybe just to level the playing field, how would you describe supply chain security to people who might just not be from a security background?

Emily Fox: I'm [00:03:00] actually glad that you asked because there's still a lot of organizations that don't quite understand it.

Software supply chain security is a series of practices that you can apply to your in-house software development, and that open source maintainers can apply as well, to provide better assurances and observations around what went into the software: where it came from and how it was built. That includes things like signing commits, signing your artifacts, and producing a software bill of materials that tells you all of the things that are in the final product being packaged up and delivered to your customers, or for open source projects, what's in your source code and what's in the container image that was finally released. That way, when the next Log4j comes out, if they're using their SBOMs appropriately, they know where it is in their architecture, which container image introduced it, and which build team is responsible for maintaining that package.
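As a concrete illustration of that last point, here is a minimal sketch, in Python, of searching a directory of SBOMs for a package when the next Log4j-style vulnerability drops. It assumes SPDX-style JSON documents of the kind common SBOM tools can emit; the directory layout, file naming, and field names are assumptions to adjust for your own tooling.

```python
# Minimal sketch: search a directory of SPDX-JSON SBOMs for a vulnerable package.
# Assumes SPDX JSON documents (a "packages" list with "name"/"versionInfo" fields);
# adjust the glob pattern and fields for whatever your SBOM tooling actually produces.
import json
from pathlib import Path

def find_package(sbom_dir: str, package_name: str) -> list[dict]:
    hits = []
    for sbom_path in Path(sbom_dir).glob("*.spdx.json"):
        doc = json.loads(sbom_path.read_text())
        for pkg in doc.get("packages", []):
            if package_name.lower() in pkg.get("name", "").lower():
                hits.append({
                    "sbom": sbom_path.name,                    # which image or component this SBOM describes
                    "package": pkg.get("name"),
                    "version": pkg.get("versionInfo", "unknown"),
                })
    return hits

if __name__ == "__main__":
    for hit in find_package("./sboms", "log4j"):
        print(f'{hit["sbom"]}: {hit["package"]} {hit["version"]}')
```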

Ashish Rajan: Do you reckon the whole SBOM push from the Executive Order that came through is what triggered all of this?

Emily Fox: Not really. If you think about it, software supply chain security [00:04:00] is not a new concept. It's been around for a long period of time. And really, before that it was hardware supply chain security and understanding where your hardware components are coming from.

But with software supply chain, we're starting to apply those practices to software systems. When SolarWinds was attacked, that's when a lot of people started paying attention, because it hit most of the government; they were really hurt in their supply chain. That caused the Executive Order, and then you had Log4Shell compounding a lot of those concerns. The industry had already started this momentum, this movement of: okay, we have to fix this.

Something else is going wrong. And we were able to take the principles and concepts from DevOps and that movement and subsequently DevSecOps and start shifting all of that left. But we're not really just shifting it or expanding it left. We're applying security controls and security assurances to where software engineers live, and that naturally creates some friction.

Yeah. So a lot of the new software supply chain technologies that you're learning about now are about easing that developer experience, making it more [00:05:00] automated, and abstracting a lot of those initial concerns so that security engineers get what they want and software engineers get what they want: faster, secure software delivered to their customers.

Ashish Rajan: Quite a common question asked by a lot of people is the whole managed Kubernetes versus self-hosted one. Does this vary in that space if you go managed?

Emily Fox: It does. If you go with a Managed Kubernetes instance, you have a shared responsibility model with whoever your Managed Kubernetes provider is.

Whereas if you're doing self-hosted, open source software is free, but it's free like a puppy. It's the same thing with self-hosted Kubernetes. You're responsible for everything: for pulling in the latest updates of your Kubernetes instance, for managing it, for making sure that it is meeting your organization's security policies.

Whereas with managed Kubernetes instances, a lot of that can actually be contractually negotiated with whoever your managed services provider is. And, you also get opinionated instances of Kubernetes with managed services. Which is great for organizations that don't have the technical skills to become Kubernetes experts in building that technical stack.

They can just rely [00:06:00] on that service provider to do that for them.

Ashish Rajan: I love the puppy analogy as well. There's a lot of shit to pick up, literally. Yes. What about zero trust as well? Because I think that's another topic that keeps coming up. How would you describe zero trust to people?

Emily Fox: It's actually becoming more popular, which makes me very happy.

So Zero Trust is really, as Fred Kautz put it more accurately in his last keynote, zero implicit trust. Organizations, traditional security models, have this boundary, this big hard kind of firewall around all of their stuff. But once an adversary gets into the environment, it's nice and squishy.

Everybody's open. Everybody trusts anybody. I can just bring a node onto the network. Nobody knows I'm here. I'll start scraping some data down and then I'll walk out the door later. With Zero Trust, the idea is that you're always challenging those connections. You're always challenging the identity. And it's not just one service that's doing it.

It's both services doing it, and it's all based on the identities of those devices and where those identities are coming from, which is great for organizations that need a higher degree of assurance. They can have cryptographic identities for [00:07:00] basing a lot of those initial communication decisions on, but for other organizations, there are much simpler solutions out there.

The intent though is that you're no longer implicitly trusting the things that are in your infrastructure, the applications that are running in your workloads and your clusters. The idea is that you're constantly challenging it and you're using information from, like your software supply chain, to make more informed decisions about what can be deployed and where it can be deployed.

Ashish Rajan: Okay, how does one implement this? Because it sounds like you're not trusting anything in the beginning.

Emily Fox: A lot of it is changing your mindset. Most organizations today are still struggling with mutual TLS; that concept is still foreign to them. So the easier we can make mTLS connections within a customer's environment, the easier it's going to be for them to adopt.

And that's the first easy step for a lot of companies; that's just a general security improvement. The next step is leveraging certain technologies like SPIFFE, the Secure Production Identity Framework for Everyone, to issue cryptographic identities for those workloads [00:08:00] so that they can use a service mesh for mutual TLS and get the benefits of that.

But there's a lot more to it. And Zero Trust as a practice is still evolving and developing. When you think about what infrastructure that you have, where your nodes exist within the environment, those principles and concepts have to apply everywhere that they are. And they have to apply to your software supply chain as well.

So it's ensuring that the container image that was built from your build pipeline landed on a node it was supposed to be on in your environment.
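To make the mutual TLS idea concrete, here is a minimal sketch using only Python's standard ssl module: a server that refuses any client that cannot present a certificate signed by a CA it trusts. The certificate file paths are placeholders; in a SPIFFE/SPIRE setup these would be short-lived identities fetched from the workload API rather than static files on disk.

```python
# Minimal mutual-TLS server sketch using only the standard library.
# The server presents its own certificate AND requires the client to present one
# signed by the trusted CA; no certificate, no connection. File paths below are
# placeholders; a SPIFFE/SPIRE deployment would fetch short-lived identities from
# the workload API instead of reading static files from disk.
import socket
import ssl

def serve(host: str = "0.0.0.0", port: int = 8443) -> None:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")  # server identity
    ctx.load_verify_locations(cafile="ca.crt")                        # CA we trust for client certs
    ctx.verify_mode = ssl.CERT_REQUIRED                               # enforce mutual TLS

    with socket.create_server((host, port)) as sock:
        with ctx.wrap_socket(sock, server_side=True) as tls_sock:
            conn, addr = tls_sock.accept()       # TLS handshake happens here
            peer = conn.getpeercert()            # verified client certificate details
            print(f"accepted {addr}, client subject: {peer.get('subject')}")
            conn.sendall(b"hello, mutually authenticated world\n")
            conn.close()

if __name__ == "__main__":
    serve()
```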

Ashish Rajan: Or even the container you're downloading is from a trusted source and all of that as well.

Emily Fox: Correct, exactly. How many times do developers misspell something?

Ashish Rajan: Yeah, I think we were talking about this just before, the whole idea behind Zero Trust and implementing it. First of all, people don't understand Zero Trust because it almost sounds very government-like. It does sound very government-like. It's not that people don't trust the government, but Zero Trust sounds like a government agency thing.

Emily Fox: Well, a lot of it is that when you're dealing with national security systems, you need to have that high degree of trust, because if there is a compromise, it's someone's life on the line; it could be thousands of people's lives on the line. But if you are a video game developer and you work for a small business producing [00:09:00] apps for 10 to 15 year olds to play, I don't know, crossword puzzles, your level of security need is going to be very different. But the idea is that if you get in the habit of doing it, these technologies make the practice so much easier, so that the compromise that could occur is very limited.

And you're catching it earlier on to reduce that blast radius.

Ashish Rajan: I'm glad you used that example, because in my mind I was thinking about the higher adoption of Kubernetes. It's not just a thing that's being used by, say, small tech companies. It's being used by quite a few people, including government as well.

So how is security and compliance kind of playing a role in this as well?

Emily Fox: It's a really good question. As most technology advances over time, we have these big bursts of innovation. And then security has to play catch up. And you saw this with the DevOps movement yeah. A couple years later we had DevSecOps.

Yeah. It's when the security engineers [00:10:00] figured out this is a bandwagon we need to jump on, because we can actually automate and push down a lot of the friction into a point of the infrastructure where developers don't need to concern themselves with it anymore. Let us handle that. And a lot of the time, compliance comes after.

Secure systems are more likely to be compliant than compliant systems are to be secure. And not all compliance or regulatory frameworks meet security goals and objectives. So you need to have this balance. But like with security and DevSecOps, we're seeing a lot more compliance organizations step up and say, We want that too.

We want compliance as code. We want to be able to understand whether or not the workload that was deployed met all of the compliance requirements. Ian Coldwater, Duffie Cooley, and several others had an excellent presentation at CloudNativeSecurityCon earlier this year called Malicious Compliance, which is a great talk that showcases that even with the best of intentions of getting something deployed out there, you can defeat a lot of those security scanners to meet your [00:11:00] compliance objectives.

There are no vulnerabilities if you don't allow the scanner to actually function the way it's intended.

Ashish Rajan: I think you and I both lost a few friends as you said that. It's like, what do you mean compliance is not good? I can't believe you said that, Emily, but I am with you on this one as well.

But I also wanted to ask, what does compliance as code look like in the Kubernetes world? Because I think people understand what compliance as code is like: oh, I'm automating compliance controls in AWS, Azure, Google Cloud. How does that translate to GKE or any kind of managed Kubernetes? Is there an example?

Emily Fox: So typically, when you're going with a managed service provider, they will give you some indicators of what their compliance guarantees are going to be, but then ultimately, you're responsible for the other part of that discussion. Because if you work in a financial services organization, you're going to be subject to very different compliance and regulatory frameworks than if you're not.

Yeah. So there's been this recent industry momentum around, how do we all get together around common cloud controls for compliance, so that when an auditor comes to investigate [00:12:00] whether or not we're still compliant with whatever the regulatory framework is, we can do so in an automated fashion. So that's the next area where we're starting to see a lot more development going on.

I believe the common controls catalog was actually just announced by the FINOS organization. There's been a lot more discussion recently within the cloud native community around compliance. In TAG Security a couple of weeks ago, I gave a presentation proposing a TAG Compliance, and while it's still a little early to call it a TAG, I definitely think it's worthwhile to do a compliance working group, so that our cloud native projects in the ecosystem can start generating enough metadata that our adopters and end users can use it in their compliance frameworks and compliance decision making.

Turning over those results to auditors.
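As a small, hypothetical illustration of compliance as code, here is a Python sketch that checks rendered Kubernetes manifests against a couple of made-up controls (no privileged containers, resource limits present) and emits a machine-readable result an auditor could consume. Real setups typically express these rules in policy engines such as OPA/Gatekeeper or Kyverno; the control IDs and checks below are assumptions for illustration only.

```python
# Hypothetical compliance-as-code sketch: evaluate rendered Kubernetes manifests
# against a few illustrative controls and emit an auditor-friendly JSON report.
# The control IDs (CC-001, CC-002) and the specific checks are made up; real
# environments would usually encode these as OPA/Gatekeeper or Kyverno policies.
import json
import sys
import yaml  # pip install pyyaml

def check_pod_spec(kind: str, name: str, pod_spec: dict) -> list[dict]:
    findings = []
    for c in pod_spec.get("containers", []):
        sec = c.get("securityContext") or {}
        if sec.get("privileged", False):
            findings.append({"control": "CC-001", "resource": f"{kind}/{name}",
                             "container": c.get("name"), "issue": "privileged container"})
        if "limits" not in (c.get("resources") or {}):
            findings.append({"control": "CC-002", "resource": f"{kind}/{name}",
                             "container": c.get("name"), "issue": "missing resource limits"})
    return findings

def main(manifest_path: str) -> int:
    findings = []
    for doc in yaml.safe_load_all(open(manifest_path)):
        if not doc:
            continue
        kind, name = doc.get("kind", ""), doc.get("metadata", {}).get("name", "?")
        if kind == "Pod":
            findings += check_pod_spec(kind, name, doc.get("spec", {}))
        elif kind in ("Deployment", "StatefulSet", "DaemonSet"):
            pod_spec = doc.get("spec", {}).get("template", {}).get("spec", {})
            findings += check_pod_spec(kind, name, pod_spec)
    print(json.dumps({"compliant": not findings, "findings": findings}, indent=2))
    return 1 if findings else 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))
```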

Ashish Rajan: Because to your point, at this point in time, as we have this conversation, if people do want to do compliance as code, they technically can. If they're going for a managed Kubernetes provider, they would have some of it already covered. It's just about identifying what's covered and what's the gap that I need to fill.

Emily Fox: Yeah, and you're going to get that from your managed service provider. That's the [00:13:00] biggest thing: understand where their responsibility ends and where yours begins. Because a lot of people don't look at that, and they'll be surprised when there's a security incident or a compliance incident.

And they're trying to figure out who's responsible, where do they point fingers. Because that's what people like to do when incidents happen. Yeah, of course.

Ashish Rajan: I don't want to blame the community, I want to blame an individual. It's not my people, it's someone else. It's totally someone else, it's someone else.

Another area where people are adopting this quite often: we had a lot of people talk about meat factories and submarines, and edge is also something that we're obviously hearing quite a bit about. What's the adoption there, and what are some of the concerns that you're hearing from customers?

Emily Fox: So edge is not a new space. It's been around for a while. The challenge is that many organizations, when their edge devices leave their doors, consider them compromised, if they're smart. Because once you no longer have operational control over that device, you don't know what's going on with it.

So some of the newer concerns that are coming up are: how do I get the same assurances and guarantees I have in my cloud native environments [00:14:00] in my edge environments? And how do we connect them? The challenge is that edge, as an environment to deploy into, has a lot of constraints and restrictions.

And then you have these concepts of near edge, closer to your data center, and far edge, the thing that you may not see for several years. Maybe it's on a ship moving containers across the Atlantic Ocean. Or maybe it's a laptop that's being taken to, I don't know, the Sahara desert to do some humanitarian aid work, something along those lines.

They're not always going to be connected and online. So how do you get the same security guarantees for those systems? There's a lot being developed in this space. There's a lot more going on, but with more advances and more open discussions around Edge devices, it's allowing software developers to enable more of those workloads to get those same security guarantees.

Ashish Rajan: But do we have an idea of what that looks like? Because even in your example of taking a laptop to the Sahara desert for a humanitarian effort, there is no way for you to know remotely what's going on there. All you can do is have controls within that IoT device so that, okay, [00:15:00] as long as it's within those parameters, we have limited the number of bad things that can happen to it.

Is that how people are approaching it?

Emily Fox: A lot of it is: how do we ensure that the system, when it was out there and offline, is brought back online in a secure way? How do we know that it's secure and has been untampered with? What can we observe and measure about it, both on device and potentially even remotely, so that we can deploy our workloads onto it and have that guarantee that nobody's messed with it?

But also, going back to that Zero Trust concept, how do I continuously re-attest it, so that I know once it's up and running, it's continuously getting updates and is still untampered with?
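Here is a toy sketch of that re-attestation decision in Python: compare the measurements an edge node reports against known-good golden values before allowing workloads onto it. Real remote attestation, for example Keylime backed by a TPM, uses signed quotes and nonces so a device cannot simply replay old measurements; this sketch only illustrates the final compare-against-expected step, and the digests are placeholders.

```python
# Toy sketch of the attestation decision step: compare measurements reported by an
# edge node against known-good "golden" values before scheduling workloads onto it.
# Real remote attestation (e.g. Keylime with a TPM) relies on signed quotes and
# nonces to prevent replay; this only shows the compare-against-expected logic.
import hmac

GOLDEN_MEASUREMENTS = {
    # component -> expected SHA-256 digest (placeholder values, not real ones)
    "kernel": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    "kubelet": "60303ae22b998861bce3b28f33eec1be758a213c86c93c076dbe9f558c11c752",
}

def node_is_trustworthy(reported: dict[str, str]) -> bool:
    """Return True only if every expected component reports its expected digest."""
    for component, expected in GOLDEN_MEASUREMENTS.items():
        got = reported.get(component)
        # constant-time comparison avoids leaking how much of a digest matched
        if got is None or not hmac.compare_digest(expected, got):
            print(f"attestation failed for {component!r}")
            return False
    return True

if __name__ == "__main__":
    reported = {"kernel": GOLDEN_MEASUREMENTS["kernel"], "kubelet": "deadbeef"}
    print("schedule workloads:", node_is_trustworthy(reported))
```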

Ashish Rajan: Interesting. And have you seen many projects in this space out of curiosity? Or is it primarily being driven by?

Emily Fox: A lot of the projects that are already in this space have been around for a while.

So you have projects like Keylime that work on remote attestation, but primarily that's targeting more server-type environments in the cloud. Now we're starting to see how we can take some of these cloud native technologies and get them onto edge devices. And this is where a lot [00:16:00] more of the environmental sustainability initiatives are starting to weigh in.

If you as a cloud native project can actually measure what your carbon footprint is, what your CPU utilization is, maybe there are opportunities for you to gain efficiencies so that you can deploy out to the edge without introducing significant overhead, both in getting that new update out there and in ensuring that there's enough space for everything else to run on that host.

Ashish Rajan: For people who may be listening or watching and thinking, I haven't seen or heard of Kubernetes being used on the edge, what are some examples you can give? I gave the meat factory example, but imagine televisions, like what they might be watching this on; that has Kubernetes as well.

What's the most interesting place that you've heard people using Kubernetes in an Edge context out of curiosity?

Emily Fox: I don't know that I have heard one that's any more interesting than the others. And the reason for that is that a lot of IoT devices are so bespoke and so proprietary. We're starting to see that change over time.

But it's just a matter of ensuring that Kubernetes is the correct [00:17:00] size and the rest of your stack is there and can be deployed. Even still, maybe there's a lot more opportunity for improving that ecosystem within edge devices. And part of that is a balance in discussions, both between the hardware manufacturers developing IoT devices and the software developers building software on top of them.

And I think those conversations are starting to happen. We've gotten to a point of industry advancement where we've solved enough of the problems that we can start having those discussions, which is why you see projects like Keylime and SPIFFE and SPIRE and more of those security-oriented projects really starting to take off now, even though they've been around for a while. Now that we've got most of the hardware unlocked, we can start moving forward in that space.

Ashish Rajan: Talking about hardware, you can't have a conference conversation and not talk about AI; it's the one thing you need a lot of hardware for. What's the impact been? Obviously, as we were talking about earlier, the keynote had some LLM conversations in there as well, and AI panel discussions too.

What's your take on the impact of AI in this space at the [00:18:00] moment?

Emily Fox: It's just another workload. I know there's going to be a lot of people that are upset with me. Don't say that.

Ashish Rajan: It's like ChatGPT is another chat bot is what it sounds like.

Emily Fox: It's a workload that has very specific needs and requirements.

But what I want to see us do as an ecosystem is take a lot of the practices, knowledge, and experience we've gained in making software supply chains more secure, and start applying those conversations to AI. How do we ensure that the data that went into a model to train it was coming from a source that we expected it to?

How do we know that the inference model that's running based off of that is correct? What's the assurances that we can get out of that? How long did it take? Where did it come from? Who touched the system? Who built the model? Also understanding what the confidence score associated with that output is.

Because if you are making decisions based off of information that's being generated, you are at risk of consuming content that could be hallucinated. So anybody using these technologies, needs to have a high degree of confidence both in the context in which they're [00:19:00] asking those questions, but also in how they're going to apply those answers, which is why auditing and Zero Trust is so important.

You need to have that confidence and you need the ability to independently inspect that so you can make more informed decisions. But that's not just Gen AI. All AI and all large language models are going to run into this scenario. So what I'd like to see is where we have software bill of materials, maybe there's an AI bill of materials that goes through and talks about this.

And I'm sure there's somebody out there listening who's thinking, I'm working on that, or it already exists. If you are, that's great. Let me know. I'm interested to learn more about it, but we need to start advancing these discussions. We're at a crucial juncture where we start allowing AI workloads within cloud native environments, and eventually maybe out to edge devices too, for faster content generation or decision making, even predictions.

We need to do so in a secure manner, because if we don't get in front of this now, we're going to continue to run into problems like we currently have with how LLMs were built in the first place, with poisoning attacks and things like that.
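There is no settled standard for an AI bill of materials as of this conversation (Emily explicitly invites pointers to existing work), so the following is a purely hypothetical Python sketch of the kind of metadata such a record might carry: where the training data came from, who built the model, when, and a digest of the artifact you can verify before deployment. Every field name here is an assumption, not an existing format.

```python
# Purely hypothetical sketch of what an "AI bill of materials" record could capture,
# echoing the questions above: training data sources, who built the model, when, and
# a digest you can verify before deployment. This is not an existing standard.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class DatasetRef:
    name: str
    source_uri: str   # where the training data snapshot was pulled from
    sha256: str       # digest of the snapshot actually used for training

@dataclass
class AIBOM:
    model_name: str
    model_sha256: str # digest of the model artifact being shipped
    built_by: str
    built_at: str
    training_data: list[DatasetRef] = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

def digest_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    bom = AIBOM(
        model_name="fraud-scoring-v3",                      # hypothetical model
        model_sha256=digest_file("model.onnx"),             # placeholder artifact path
        built_by="ml-platform-team",
        built_at=datetime.now(timezone.utc).isoformat(),
        training_data=[DatasetRef("transactions-2023",
                                  "s3://example-bucket/transactions-2023",
                                  "0" * 64)],               # placeholder digest
    )
    print(bom.to_json())
```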

Ashish Rajan: Yeah. And I think to your [00:20:00] point, also knowing the source of the data that you train on, and whether there is bias in there.

Emily Fox: Correct. There are so many other ethical considerations that go into that. And it's not just security at that point; it's your organization's ethics that have to be considered as part of that. On top of it, a lot of the AI models right now are very large. They're large language models, but generally they are large workloads, and you're probably sharing them with another organization or entity.

So how do we get those smaller? How do we allow businesses and organizations to have confidence that when they put their data into those systems, it hasn't been tampered with, and it hasn't been leaked or shared with anybody? That's the next question.

Ashish Rajan: Yeah, that, and to an extent, we talk about the fact that detection is going to be hard as well.

How do you even detect something has gone wrong with it?

Emily Fox: Detection is hard in the first place. Detection engineering, just generally as a concept, requires so much subject matter expertise. And when you start applying that to AI models or AI workloads, it's going to get even harder.

Right now we even [00:21:00] have difficulty doing detections within cloud native environments, because there are so many different ways an attacker can get in, and they have all the time in the world. Now we have to apply those principles and learnings yet again to AI.

Ashish Rajan: And also because a typical Kubernetes build would have multiple other open source projects, each of which requires its own detection. And then you're like, who's detecting the detector?

Emily Fox: Yeah, that's actually a really good question that's been coming up: how do you know whether or not your detection engine is actually running and hasn't been modified? How do you verify that? A lot of what we do is deploy and pray, and we don't actually check back in to make sure what we expected to be deployed is still deployed, and that we don't have drift occurring within our environments.
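A minimal sketch of that check back in step: compare the image digests actually running in a namespace against what you expected to deploy. It shells out to kubectl, so it assumes cluster access and that your expected state is recorded somewhere; here that is just a JSON file mapping deployment names to image references.

```python
# Minimal "did we drift?" sketch: compare the images actually running in a namespace
# against the images we expected to deploy. Assumes `kubectl` access to the cluster;
# the expected-state file is a plain JSON map of deployment name -> image reference.
import json
import subprocess

def deployed_images(namespace: str) -> dict[str, str]:
    out = subprocess.run(
        ["kubectl", "get", "deployments", "-n", namespace, "-o", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    result = {}
    for item in json.loads(out)["items"]:
        name = item["metadata"]["name"]
        containers = item["spec"]["template"]["spec"]["containers"]
        result[name] = containers[0]["image"]  # first container only, for brevity
    return result

def detect_drift(expected_file: str, namespace: str) -> list[str]:
    expected = json.load(open(expected_file))
    actual = deployed_images(namespace)
    drift = []
    for name, image in expected.items():
        if actual.get(name) != image:
            drift.append(f"{name}: expected {image}, found {actual.get(name, 'MISSING')}")
    return drift

if __name__ == "__main__":
    for line in detect_drift("expected-images.json", "security-tools"):
        print("DRIFT:", line)
```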

Ashish Rajan: Yeah, and bringing it back full circle to software supply chain: if you have a project whose only contributor suddenly decides, you know what, I won the lottery, great job everyone, goodbye, and someone is like, hey, do you mind if I take over? Yeah, go for it, because I don't care about it anyways.

Yep. I think it's what happened with one of the Java libraries or whatever. [00:22:00] There's a person, I don't know, whatever, maybe made a lot of money or whatever, moved on. Obviously, which is true. We're all humans, right? Yeah. We all have different stages of our life. We want to move on to the next thing.

Yeah. Eventually. I think the individual kind of moved on to the next thing, and someone else became a contributor, but that contributor added a crypto mining thing in there. And it's only when people realized, oh my God, this is used by a lot of popular libraries. So I definitely find detection is hard, but on top of that, staying on top of the open source space requires its own contribution.

How do you see, at least from an impact perspective, the skill sets changing in this space? At the moment, I feel like before I came here, and before KubeCon Europe as well, most of the conversation was more around, oh, you should know more Kubernetes, you should know how to secure it better, you need to know what a cloud native stack looks like. That's the conversation.

You need to know what a cloud native stack looks like. That's like the conversation. But now it feels like it's more different. The more I talk to people, the more I like, you need to know Argo CD. You need to know Cilium. You need to know this. You need to do that. Like how are you seeing this change?

Emily Fox: A lot more specialization of expertise. [00:23:00] And it's an interesting cycle. Back when Kubernetes first broke onto the scene, if you were somebody who knew the stack and could figure out which different projects to fit into your Kubernetes ecosystem and deploy them all in a way that actually works, that was a big thing.

Can you make it all work? Then you were considered golden. You were the go-to person because you knew it all. But now we've built more technologies that innovate on that layer of abstraction, and I've talked about this in several of my other keynotes: we're starting to see, not necessarily a stupidification.

That's not quite a word, but you're starting to see people that just want to run fast and not necessarily understand what's running under the hood because we've made it so easy that you don't have to. But we've also made it very difficult for people to stop and actually inspect and learn. So what's happening is we're seeing more technologists in specialty areas.

Like Cilium, like etcd; there's a great mentoring program for folks that aren't familiar with that, to understand the underlying technology and how it actually works, so that we can sustain them. Because once you have that, [00:24:00] day two operations become a little bit easier. You understand where the bug is within your deployment.

You can actually go through and potentially even fix it, because you have that knowledge. Whereas when we have all the scheduling doing it for us, we don't necessarily get that expertise or experience in applying remedies to those problems.

Ashish Rajan: Because I imagine every mentor over there, or at least one who's probably not thinking from a broader perspective, would just say, why are you trying to do this manually?

Just use scheduling for it.

Emily Fox: And that's a lot of what it is. Because of the DevOps movement and Agile, a lot of our industry has just run with this concept of velocity. It's great, it allows us to do amazing things, but we also need to balance that with taking a step back and understanding: did we build the thing that we actually intended to build?

Is this how our customers are actually going to end up using it? Was there something we missed? Is there a threat actor that can take advantage of it? Until you have that specialty expertise, we're going to continue to have those questions.

Ashish Rajan: Yeah, so Cilium obviously being a project that was graduated.[00:25:00]

One thought that I take away from that is that yes, speed is important, and security is also important. But I also think about the open source projects that are coming in, and that made me think of Cilium, which recently graduated. How is that ecosystem changing in terms of graduating and thinking of security and all that?

Emily Fox: I'm actually glad that you asked. So the Technical Oversight Committee in CNCF recently brought to closure our project moving levels task force, and it was long overdue. We've had projects coming into the foundation for a long period of time. We also have sandbox, incubating, and graduated projects, but we hadn't really reconsidered the criteria and the process, given all of the momentum and innovation that has happened in the years since it was initially started.

So we started by reaching out to a lot of the previous TOC members and a lot of the community members that have experienced some form of pain or joy in reaching that next level. Based off of the recommendations from the group and TOC expertise, they put forth a set of recommendations to the TOC on changing the [00:26:00] criteria, but not only that, making small adjustments to the overall due diligence.

Because since Kubernetes graduated, the industry has advanced significantly. The barrier to entry is significantly higher than it used to be. And what some project maintainers might think is good enough is no longer good enough, because end users and adopters are demanding more. They need better visibility.

They need better security guarantees. And they need to be able to independently inspect that information for themselves. So it's about setting our projects up for success, and we do that through the criteria. How do we get you from this great idea of a project, with maybe a repo that's only a year or two old, into something that, five years down the road, has 15 to 20 adopters and is a highly secure, resilient system that's easy to troubleshoot for adopters?

And that's a lot of what the TOC has been doing is as industry advances, so does our moving levels and so does the CNCF. And that's really where we're heading next.

Ashish Rajan: And is that work already done, so the next project that comes in would benefit?

Emily Fox: [00:27:00] The recommendations have been provided to the TOC. We still have to meet internally to understand more about what the implications are, because we want to make decisions with the community in mind, knowing full well that we're only 11 people and there are, what, 173, maybe 174 projects in the ecosystem that we're all responsible for.

So something's got to give. We've been relying on a lot more automation, which kind of comes with its own double-edged sword. Not yet, but that doesn't mean that we're not open to it. A lot of what it is right now is understanding what information is observable around projects through the GitHub APIs and their releases.

How do we take that information and pull it into a centralized report so that projects can see it too, and it's not just the TOC? But then have that human interaction with maintainers, because you can't get everything from an API to understand a project's health and where they're headed. Like understanding why your roadmap hasn't been updated, because maybe you've been dealing with an update crisis.

It's important and that's worth a [00:28:00] conversation.

Ashish Rajan: 100%. That's most of the questions that I had, but I've got three fun questions for you. Okay. First one being, what do you spend most time on when you're not working on cloud native?

Emily Fox: Oh, that's a, that's a terrible question. So I do a lot of different things.

I work on cars. So I have a 1967 MGB GT named Mildred. I bought her from a guy who bought her thinking that he was going to fix her up, and he didn't fix her up. So I've actually got her running again. She can drive, I fixed all the brakes, and I'm redoing the interior right now with some traditional Scottish plaid in yellow and gray. It's quite beautiful. So I'm in the middle of that project.

It's quite beautiful So i'm in the middle of that project. We have a very small property where I live right now that I'm trying homesteading practices on. So I'm growing a lot of my own fruits and vegetables being very environmentally conscious so that when I moved to Colorado where I have 35 acres, I can actually scale it up really big.

So I'm taking a lot of the practices I've gotten through technology: trying something small, iterating on it, improving it, and then [00:29:00] scaling it. I apply that to a lot of my life.

Ashish Rajan: Sounds like you had a proof of concept. Now you're like, I'm ready for deployment, production deployment.

Emily Fox: Yeah, but I do cars.

I do gardening. I do homesteading. I do some construction as well. I do sewing and crafting. The list just goes on. I also play piano.

Ashish Rajan: Is there anything that you don't do?

Emily Fox: I don't code! That is the one thing that I don't do. I do not code. If anybody looked at my code, they'd be like...

Ashish Rajan: yeah.

She needs another job. I'm like, she has 25 other jobs. So the next question being, if you could have a superpower, what would that be?

Emily Fox: The ability to spread empathy to others.

Ashish Rajan: Oh, amen to that. We need more of it in this world. Yes, especially now. I think post COVID definitely need a lot more empathy. Yeah.

Because I think even when flying over here, the plane announcement had to include: please be more patient with your fellow passengers, please be patient with the staff. Obviously, before COVID feels like a long time ago, but I don't remember that being a thing.

Emily Fox: It wasn't as [00:30:00] bad.

I feel like COVID reset everybody's humanity a little bit and they're slowly relearning it. What's interesting, though, is that the concept of being thankful and being gracious for things is very much a Western cultural thing. There are plenty of cultures and indigenous peoples that believe that is just the way that life is and you should always be like that.

And I wish that was something that I could impress upon others, within either the technical ecosystems that we have or within our own organizations. So if you have the opportunity, pass that love on and just be kind to others, because you never know if they've had a bad day or a bad year, and how they're reacting to you may not necessarily be because of anything that you did.

Ashish Rajan: Yeah. You're just meeting them at the wrong time. That's pretty much it.

Emily Fox: And if you give people the space, they'll surprise you.

Ashish Rajan: Final question. What's the best part about coming to KubeCon?

Emily Fox: If you asked me that as a co-chair, the answer would be different. A lot of people say the hallway track.

And I have to agree with that; being able to reconnect with old colleagues as well as meet new colleagues is great. The thing that excites me the most is [00:31:00] seeing the change in gender diversity within the ecosystem, particularly within the security community. A lot of IT organizations are very male-dominated, male-heavy.

But when you start sectioning out the different parts of technology into different domain areas, the gender breakout changes drastically. And security has always been one of those areas that's challenging, because there's a little bit of a bro-y kind of culture; a lot of the crypto stuff influenced that.

So the more women we can get into security roles and positions, that is what makes me excited. And I'm always happy to talk to other women in tech and learn what their journey has been like, and think about ways that we can improve it for each other.

Ashish Rajan: That's pretty awesome. We should definitely do that as well.

I think, in general, cyber security has this thing as well, that it's an elite place to be. It's not for you; it's for people who have worked hard and made it. Working hard is part of it, don't get me wrong, but it's the illusion that, oh, you have to be the chosen one to get there.

Emily Fox: That, but also, security [00:32:00] is so complex. There are so many different subdomains that exist within it. We talked about detection engineering, that's one; software supply chain security is yet another; and even within that, knowing a particular language and the security practices associated with it is its own skill set. So just because somebody isn't a hacker, or they're not an elite hacker breaking into systems and understanding what those bugs are, you can still become a security professional. There's a lot of room in the ecosystem for improvement. Yeah.

Ashish Rajan: No, I appreciate you sharing that as well. Where can people reach out to you on the internet if they watch this video and want to learn more about this space? Or maybe any female open source contributors?

Emily Fox: Yeah, and even if you're not an open source contributor, you can still become one. There's always time. So you can find me on CNCF Slack, I'm TheMoxieFox. On GitHub, I'm TheFoxAtWork. I have a Twitter handle, but when it went the X route, I stopped paying attention. You Xed out of it as well. Yes, I did.

But otherwise, check me out on LinkedIn, I'm forward slash The Moxie Fox, so it's pretty simple. And [00:33:00] if you see me running around the conference, just check my shoes to confirm they say Moxie Fox on the back.

Ashish Rajan: I appreciate you coming on the show. Thank you so much for coming. Yeah, thanks so much for having me.

Emily Fox: It's been a pleasure.

Ashish Rajan: Likewise. Thank you everyone for watching as well. See you next episode. Peace.