Container Security Through An Enterprise Lens

As technologies evolve and become mainstream, their security expectations evolve in response to the types of work and data those technologies process. We saw this progression with virtualization, where the initial workloads were low-risk activities like file and print servers, but eventually virtualization was deemed appropriate for any type of workload and entered the mainstream. That adoption cycle is now almost complete for container technologies, with even the most conservative companies investing in how containers can accelerate their businesses.

While the pace of technology innovation is set by the development teams creating new and interesting functionality, businesses are constantly pushed by both competitive and regulatory concerns. This can create tension between product teams seeking early adoption of their ideas and the constraints imposed by regulators, which are in turn influenced by the shortcomings of past innovations. It is into this contentious regulatory landscape that Kubernetes and containerized applications have arrived – a landscape filled with requirements that are often foreign to engineers but quite familiar to business leaders.

Topics like GDPR, PCI, HIPAA, PIPEDA and LGPD are an alphabet soup of concepts that, as with any discipline, require unique skills. Those skills demand just as much attention to detail and practice as learning any programming language or deployment system. They aren’t topics that can simply be “checked off”; rather, they require an understanding of how a given deployment or feature implementation might result in non-compliance. Additionally, as governments continue to define privacy and digital protection laws, these regulations keep shifting, making what was once a “compliant” implementation no longer so.

This is the realm of digital privacy and security in which enterprises adopting container technologies find themselves. Their goal is often to benefit from the agility and elasticity that containerized applications provide as a defence against shifting regulatory requirements. That goal leads to a fresh perspective on container security, one that isn’t rooted in the technologies but rather enabled by them. The distinction is critical: without understanding why businesses are adopting a given technology, it’s easy to innovate oneself out of a market.

So, from a security perspective, wrapping our collective arms around this problem means recognizing that there is not, and likely never will be, one vendor or one implementation that solves everything. Believing otherwise reinforces a static view of security, one that is antithetical to current trends in digital privacy. This means we need to come back to first principles, and that starts with data.

While it’s easy to say that the owner of a set of data is the creator of that data, certain data is special, and what makes it special is defined by either industry regulations or political definitions. The rules for protecting such data may be prescriptive, but the reality is that most political definitions are ambiguous at best (e.g., all personally identifiable data must be encrypted). Nonetheless, protecting the data is a security requirement, and the task is made more complex as functionality gets decomposed into distributed systems like microservices or serverless designs.

For example, in a traditional monolithic application, we might have a code block like the following to protect user data.

// In a monolith, every data protection decision can be made in one place
for (const element of dataset.record) {
    if (hasUserRevokedRights(user)) continue;                      // user has withdrawn their rights
    if (!isConsentForUsageGranted(element)) continue;              // no consent for this usage
    if (!isUsageAllowedByJurisdiction(element, user.Jurisdiction)) continue;  // not permitted in the user's jurisdiction
    if (!canElementBeStored(element)) continue;                    // element may not be persisted
    applySecurityRestrictions(element);                            // encrypt, mask or otherwise protect the element
    // ….
}

The problem with any distributed system is that data protection responsibilities are now also distributed. The rationale behind a given protection method might become lost to time as the applications evolve unless there are clear security targets that are consistently measured with each release, feature merge and patch commit.
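
One way to keep those responsibilities from drifting apart is to pull the checks above into a single shared module or policy service that every microservice calls, so the rationale lives in one reviewed, versioned place. The sketch below assumes the helper functions from the earlier snippet are published as a hypothetical internal package; the package name and types are illustrative only.

// compliance.ts – a hypothetical shared module imported by every microservice
import {
    hasUserRevokedRights,
    isConsentForUsageGranted,
    isUsageAllowedByJurisdiction,
    canElementBeStored,
    applySecurityRestrictions,
} from "@example/data-protection";   // assumed internal package, not a real library

// Minimal types, for illustration only
interface User { Jurisdiction: string; }
type DataElement = Record<string, unknown>;

export function protectElement(element: DataElement, user: User): boolean {
    if (hasUserRevokedRights(user)) return false;
    if (!isConsentForUsageGranted(element)) return false;
    if (!isUsageAllowedByJurisdiction(element, user.Jurisdiction)) return false;
    if (!canElementBeStored(element)) return false;
    applySecurityRestrictions(element);
    return true;
}

Because every service depends on the same module, a change in regulation, or in the rationale behind a check, is made once, reviewed once, and measured against the same security targets at each release.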

That’s right: our incredibly agile DevOps container development streams will need security speed bumps. But done correctly, they can be minimally invasive; it just requires decomposing the problem into who feels the pain and how that pain will be communicated. In other words, we need to ensure that the DevOps communication channels are open and working properly.

Some key questions to ask would be:

  • Which microservices have access to each data element in an application and how would you know if inappropriate access was occurring?
  • Which data elements are governed by each regulation, and how would you ensure that users in the regulatory dataset are appropriately identified? (One way to make this tractable is sketched after this list.)
  • If regulations impose geographical boundaries, how would you ensure that geographical data is accurately collected and reflected in processing?
  • If regulations require consent, how would you ensure that all potential creators of data within a given record had provided their consent and that their consent was still valid?
  • As application implementations evolve, who is ensuring that updates remain compliant with current regulations? Similarly, as regulations are amended, or new ones are created, what effort is required to ensure compliance?
  • What protections are in place to identify aberrant system or data accesses in an effort to identify breach attempts?
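
Answering the second of these questions gets much easier when the mapping between data elements, regulations and jurisdictions is recorded somewhere machine-readable rather than in people’s heads. A minimal sketch of such a catalogue follows; the services, fields and rules are invented purely for illustration.

// A hypothetical machine-readable catalogue of governed data elements
interface GovernedField {
    service: string;           // which microservice owns the field
    field: string;             // the data element itself
    regulations: string[];     // which regimes govern it
    jurisdictions: string[];   // where it may be processed
    requiresConsent: boolean;  // must consent be verified before use?
}

const catalogue: GovernedField[] = [
    { service: "billing",  field: "cardNumber",  regulations: ["PCI DSS"],      jurisdictions: ["*"],        requiresConsent: false },
    { service: "profiles", field: "homeAddress", regulations: ["GDPR", "LGPD"], jurisdictions: ["EU", "BR"], requiresConsent: true  },
];

// An audit job or pipeline step can then flag any field a service touches that is
// missing from the catalogue, or any entry whose rules have changed since the last release.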

These are all business questions with a direct impact on either the security of the application or the security of a deployment. They are also topics that are only loosely covered by the current crop of container runtime security or container risk solutions. Addressing business security issues requires returning to first principles and applying some context.

  1. There is no magical developer who will fix every security issue if we just “shift left” enough times. Developers work on features, and “secure the app” is too big a feature, requiring too broad a skill set, for one person to address successfully. The end result is that security features need to be decomposed into workable items.
  2. Most organizations won’t stop feature development to “fix all the bugs”, so they’re equally unlikely to stop development to “fix all the security issues”. Even if they did, modern software is built using supply chains, and supply chains by definition have external threats.
  3. Even the best feature developer isn’t likely to be a security ninja, nor is someone skilled in hardening operating systems likely to be a skilled feature developer. This means that at each step of the way, security best practice guidance is a must. DevOps culture needs to recognize that even superstars have limitations.
  4. Old school testing paradigms are perfectly usable in agile container development streams with daily deployments.
  5. Security testing should be designed around the reality that replicas are copies of a golden image. It’s not necessary to test everything if you focus on the root of your deployments.

These principles then lead to a fairly boring set of test steps, and as we all know, boring is good.

  1. Tools exist to perform basic static code analysis within IDEs. Use them. If these tools also support some form of inline training, that’s a bonus. If you use such tooling, you can also build your pipelines around whether those tools approved the code changes (a minimal version of such a gate is sketched after this list). This is huge because it puts security in the hands of the developer while they’re thinking about the same feature the security tools are looking at.
  2. Decouple scanning of base images from the application. Base images need to be maintained by platform teams skilled in hardening operating systems. They should have a minimal attack surface and not enable bad practices like interactive logins or elevated accounts. Then implement admission control policies to ensure that only images built from a prescribed set of approved base images can be deployed in your cluster (the core of that check is sketched below). This helps tame the wild west.
  3. API fuzzing is your friend. Cloud native applications are powered by APIs, and bad API implementations can be unstable or potentially leak data. Leaking data is rarely something regulators are happy about, and the same is true of users. (A toy fuzzing pass is sketched after this list.)
  4. Interactive application security testing (IAST) provides a killer way to deal with difficult distributed system bugs, but it comes with a slight stigma – the need for an agent. Those who raise this objection argue that getting Ops teams to deploy an agent is difficult, but it needn’t be. The agent question boils down to pain: is it more painful to deal with a data breach, or to deploy an agent that identifies risky data flows during acceptance testing so they can be fixed before they reach production? Just imagine being able to identify data privacy issues prior to production, and to do so with a minimum of developer friction and clear supporting evidence from data flows.
  5. Implement a runtime security solution, including performance and stability monitoring, that understands the network requirements and authentication modes used by the applications. Your goal should always be to clearly articulate whether the applications are performing properly.
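
For the first step, the pipeline gate can be as simple as a script that reads the analyser’s findings and refuses to promote the build while high-severity issues remain. The sketch below assumes a hypothetical JSON report format, since every tool emits its own.

// gate.ts – fail the pipeline stage if the static analysis report contains high-severity findings
// The report path and JSON shape are assumptions; adapt them to whatever your tool produces.
import { readFileSync } from "node:fs";

interface Finding { severity: string; file: string; rule: string; }

const findings: Finding[] = JSON.parse(readFileSync("scan-report.json", "utf8"));
const blocking = findings.filter(f => f.severity === "HIGH" || f.severity === "CRITICAL");

if (blocking.length > 0) {
    console.error(`Blocking ${blocking.length} high-severity finding(s):`);
    blocking.forEach(f => console.error(`  ${f.file}: ${f.rule}`));
    process.exit(1);   // a non-zero exit stops the pipeline here
}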
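
For the second step, the policy itself is written in whatever language your admission controller understands, but the logic it enforces is small. Here is a sketch of that underlying check, with the approved repository names invented for illustration.

// The heart of an image admission policy: only images built from approved base image
// repositories may be deployed. The registry names below are placeholders.
const approvedRepositories = [
    "registry.example.com/hardened-base/",
    "registry.example.com/platform/",
];

function isImageApproved(image: string): boolean {
    return approvedRepositories.some(repo => image.startsWith(repo));
}

// An admission webhook would run this against every container in an incoming Pod
// spec and reject the deployment when any image falls outside the approved set.
console.log(isImageApproved("registry.example.com/hardened-base/python:3.12"));  // true
console.log(isImageApproved("docker.io/library/ubuntu:latest"));                 // false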
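
For the third step, even before adopting a dedicated fuzzer, a handful of hostile inputs thrown at an endpoint during testing will surface the worst offenders. A toy pass is sketched below; the endpoint and payloads are placeholders, not a real service.

// A naive API fuzz pass: send malformed payloads and flag crashes or suspicious leakage.
const endpoint = "https://api.example.com/v1/users";   // placeholder test endpoint
const hostileInputs = [
    "{}",                                          // empty object
    '{"name": "' + "A".repeat(100000) + '"}',      // oversized field
    '{"name": null, "id": "1 OR 1=1"}',            // type confusion / injection-shaped input
    "not json at all",                             // malformed body
];

for (const body of hostileInputs) {
    const res = await fetch(endpoint, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body,
    });
    const text = await res.text();
    // 5xx responses and stack traces in the body are both signs of an unstable, leaky API
    if (res.status >= 500 || text.includes("Exception") || text.includes("Traceback")) {
        console.warn(`Payload triggered ${res.status}: ${text.slice(0, 120)}`);
    }
}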

In the end, container security has evolved from its humble beginnings only a few short years ago. This evolution has been spurred by the desire of companies to become more agile. Successful companies recognize that in adopting container solutions they also need to revisit their notions of best practice data centre operations. That process should include container security, a practice that encompasses everything from design and implementation through testing and deployment.

To learn more about containerized infrastructure and cloud native technologies, consider coming to KubeCon + CloudNativeCon EU, March 30-April 2 in Amsterdam.


Authored by Tim Mackey, principal security strategist within the Synopsys CyRC (Cybersecurity Research Centre). Tim is also an O’Reilly Media published author and has been covered in publications around the globe including USA Today, Fortune, NBC News, CNN, Forbes, Dark Reading, TEISS, InfoSecurity Magazine, and The Straits Times.