Docker Security at PyCon: Threat Modeling & State Machines

May 26 2017

The Docker Security Team was out in force at PyCon 2017 in Portland, OR, giving two talks focused on helping the Python community achieve better security. First up were David Lawrence and Ying Li with their “Introduction to Threat Modelling” talk.

Threat Modelling is a structured process that aids an engineer in uncovering security vulnerabilities in an application design or implemented software. The great majority of software grows organically, gaining new features as some critical mass of users requests them. These features are often implemented without full consideration of how they may impact every facet of the system they are augmenting.

Threat modelling aims to increase awareness of how a system operates, and in doing so, identify potential vulnerabilities. The process is broken up into three steps: data collection, analysis, and remediation. An effective way to run the process is to have a security engineer sit with the engineers responsible for design or implementation and guide a structured discussion through the three steps.

For the purpose of this article, we’re going to consider how we would threat model a house, as the process applies to real-world scenarios as well as software.


Data Collection

Five categories of data must be collected in a threat model:

  1. External Dependencies – services that elements of the model will interact with, but that will not be decomposed during the course of the current threat model. Our house has external dependencies on an alarm monitoring service and various utilities: power, water, etc.
  2. Entry Points – the ways in which your system can receive input and provide output. A completely closed system is secure by design, but often not very useful. Our house has three intentional entry points: the front and back doors, and a garage door. It also has a number of unintentional, but usable, entry points: the windows! For the purposes of this model, we’ll keep things simple and assume, as paranoid security wonks, we’ve nailed our windows shut.
  3. Assets – anything we care about protecting in our system. These are both things an attacker can carry away with them, like sensitive user data, and resources an attacker might consume. In our house, we care about valuables, important papers, and irreplaceable data like family photos. We also care about our utility bills and want to ensure we’re not paying for somebody else’s car wash. Not everything is an asset, though: we don’t care about toilet paper, as long as there’s at least one roll left.
  4. Trust Levels – the tiers of access within the system. We have four trust levels concerning our house:
  • Residents – people who live in the house and have the highest level of access.
  • Guests – friends and family invited to stay overnight.
  • Visitors – people invited into the home but restricted to common areas like the living room, kitchen, and backyard.
  • Passers-by – strangers who pass by the house but will not be invited inside.
  5. Data Flows – how data moves around the system. The primary data flow of our house is shown below. What we see is that there are trust boundaries at our entry points, indicating a change in the trust associated with the data that made it across the boundary. In this case, the boundary itself is the lock on the doors, and the possession of a garage door opener. Once somebody has crossed one of these boundaries, though, they have full access to the house, garage, and associated storage.

[Figure: data flow diagram of the house, showing trust boundaries at the entry points]
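The data collected in this step lends itself to a simple record structure. Below is a minimal Python sketch of the idea; the class and field names are our own invention, not part of any formal threat-modelling tool:

```python
from dataclasses import dataclass, field


@dataclass
class ThreatModelData:
    """Container for the five categories of threat-model data."""
    external_dependencies: list = field(default_factory=list)
    entry_points: list = field(default_factory=list)
    assets: list = field(default_factory=list)
    trust_levels: list = field(default_factory=list)
    data_flows: list = field(default_factory=list)  # (source, destination, crosses_trust_boundary)


# Populate the model with the house example from this article.
house = ThreatModelData(
    external_dependencies=["alarm monitoring service", "power", "water"],
    entry_points=["front door", "back door", "garage door"],
    assets=["valuables", "important papers", "family photos", "utilities"],
    trust_levels=["resident", "guest", "visitor", "passer-by"],
    data_flows=[
        ("outside", "house", True),   # door locks form a trust boundary
        ("outside", "garage", True),  # garage door opener forms a trust boundary
        ("garage", "house", False),   # no boundary between garage and house!
    ],
)

# Flag flows that cross no trust boundary -- candidates for closer analysis.
suspicious = [(src, dst) for src, dst, bounded in house.data_flows if not bounded]
print(suspicious)  # [('garage', 'house')]
```

Even this toy model surfaces the anomaly discussed below: the garage-to-house flow has no trust boundary.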


From the data collected, and a deep understanding of how the system works, we can begin to look for and inspect anomalies. For example, if a data flow indicates there is no trust boundary between two processes, this should be carefully analyzed. In our data flow diagram, we see there is no trust boundary between the house and the garage. This is probably undesirable but let us further analyze the data to objectively establish why and how we’ll fix it.

While there are many ways to analyze and score vulnerabilities, we have found the STRIDE classification system and the DREAD scoring system to be effective and straightforward. STRIDE is an acronym denoting six categories of vulnerability:

  • Spoofing – an entity pretending to be something it’s not, generally by capturing a legitimate user’s credentials
  • Tampering – the modification of data persisted within the system
  • Repudiation – the ability to perform operations that cannot be tracked, or for which the attacker can actively cover their tracks
  • Information Disclosure – the acquisition of data by a trust level that should not have access to it
  • Denial of Service – preventing legitimate users from accessing the service
  • Elevation of Privilege – an attack aimed at allowing an entity of lower trust level to perform actions restricted to a higher trust level

One takes each category and looks for behaviour permitted by the system that creates vulnerabilities within that category. It is common to find a single vulnerability that spans multiple STRIDE categories. Some example vulnerabilities for our house may be:

  • Spoofing: When a plumber knocks on our door, if we didn’t schedule them directly (maybe they claim a housemate called them out), we don’t necessarily know they are legitimate. They could just be trying to gain entry to steal our valuables while we’re not looking.
  • Tampering: One of our housemates likes to smoke but doesn’t like going outside in inclement weather, so they disable the smoke alarm in their bedroom.
  • Repudiation: Some neighbourhood kid kicked a ball through the front window but we have no way to prove who it was.
  • Information Disclosure: We only just moved in and haven’t gotten around to installing curtains yet. Anybody walking by can see who is in the house!
  • Denial of Service: A local vandal thought it would be funny to troll the new neighbours by squirting glue in our locks… now we can’t get the doors open and we’re stuck outside.
  • Elevation of Privilege: It hasn’t happened yet, but we’ve heard garage doors are pretty insecure. Somebody that can get our garage door open can immediately get into the rest of the house.
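Findings like the ones above can be tagged in code with a flag enumeration, since a single vulnerability may span several STRIDE categories. A hypothetical Python sketch (the `Stride` enum and its member names are our own):

```python
from enum import Flag, auto


class Stride(Flag):
    """The six STRIDE vulnerability categories."""
    SPOOFING = auto()
    TAMPERING = auto()
    REPUDIATION = auto()
    INFORMATION_DISCLOSURE = auto()
    DENIAL_OF_SERVICE = auto()
    ELEVATION_OF_PRIVILEGE = auto()


# A single finding can carry multiple categories: an attacker spoofing a
# garage door opener signal is also elevating their privilege.
garage_door = Stride.SPOOFING | Stride.ELEVATION_OF_PRIVILEGE
print(Stride.ELEVATION_OF_PRIVILEGE in garage_door)  # True
```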

Having defined as many vulnerabilities as we can find, we score each one. The DREAD system defines five metrics on which a vulnerability must be scored. Each is generally scored on a consistent scale, often 1 to 10, with 1 being the least severe and 10 the most. The sum of the scores subsequently allows us to prioritize our vulnerabilities relative to each other.

The 5 DREAD metrics are:

  • Damage: how bad the financial and reputational damage would be to your organization and its users.
  • Reproducibility: how easy it is to trigger the vulnerability. Most vulnerabilities will score a “10” here, but those that, for example, involve timing attacks would generally receive lower scores, as they may not be triggered 100% of the time.
  • Exploitability: a measure of what resources are required to use the attack. The lowest score of 1 would generally be reserved for nation states, while a score of 10 might indicate the attack could be performed through something as simple as URL manipulation in a browser.
  • Affected Users: a measure of how many users are affected by the attack. For example, perhaps it only affects a specific class of user.
  • Discoverability: how easy it is to uncover the vulnerability. A score of 10 would indicate it’s easily findable through standard web scraping tools and open source pentest tools. At the other end of the scale, a vulnerability requiring intimate knowledge of a system’s internals would likely score a 1.
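Scoring and prioritization amount to summing the five metrics and sorting. A Python sketch of the arithmetic; the helper name, the example vulnerabilities, and their scores are invented purely for illustration:

```python
def dread_score(damage, reproducibility, exploitability, affected_users, discoverability):
    """Sum the five DREAD metrics (each scored 1-10) into a single severity."""
    scores = (damage, reproducibility, exploitability, affected_users, discoverability)
    if not all(1 <= s <= 10 for s in scores):
        raise ValueError("each DREAD metric must be scored between 1 and 10")
    return sum(scores)


# Illustrative findings with made-up scores, sorted highest severity first.
findings = [
    ("glued locks", dread_score(6, 10, 8, 10, 7)),
    ("missing curtains", dread_score(2, 10, 10, 10, 10)),
]
findings.sort(key=lambda item: item[1], reverse=True)
print(findings[0])  # ('missing curtains', 42)
```

A simple sum weights all five metrics equally; teams sometimes weight Damage more heavily, which would just change the `sum` above to a weighted one.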

Let’s score our Information Disclosure vulnerability against our Elevation of Privilege vulnerability to see how they compare.


  • Damage: Knowing who is in our house is very low damage, and this could also be observed from who enters and leaves. Gaining access to our house, however, is severe.
  • Reproducibility: Both vulnerabilities can be reproduced 100% of the time. There are no timing elements involved.
  • Exploitability: While it’s easy to look in the windows, it’s not as easy to get hands on a garage door opener to effect the initial compromise of the garage. We’re relying on our car to be somewhat secure, and for none of our residents, guests, or visitors to leave the garage open.
  • Affected Users: All residents, guests, and visitors are affected by both vulnerabilities.
  • Discoverability: It’s obvious to everyone that there are no window coverings. It is less obvious to an external observer that there is no boundary between the garage and house. It might be observable from outside under the right conditions, so we’re estimating this to be easy to discover, but not entirely trivial.


For each of our categories in STRIDE there is an associated class of security control used to mitigate it. Exactly how the control is implemented will depend on the system being modelled.

  • Spoofing – Authentication; the ability to confirm the validity of the request. We would agree with our housemates to always use a plumber from a specific service, and be able to call the head office to confirm the credentials of the plumber.
  • Tampering – Integrity; we would regularly, and randomly, audit the smoke alarms in the house to ensure nobody has disabled them.
  • Repudiation – Non-Repudiation; we’re going to install some security cameras to ensure we capture images of the next kid to kick a ball through the window.
  • Information Disclosure – Confidentiality; we’ll install some thick curtains that can be closed when we don’t want the inside of the house to be outwardly visible.
  • Denial of Service – Availability; we’re going to install a lock that has both traditional keys and a digital code. This gives us multiple ways to unlock the door, should one be broken or otherwise fail.
  • Elevation of Privilege – Authorization; we’re going to install a single cylinder lock between the house and the garage, requiring a key on the garage side, but no key on the house side. This prevents garage access being pivoted to house access, but still makes it easy to move from the highly privileged house to the relatively less privileged garage.
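This one-to-one pairing of STRIDE categories with control classes is essentially a lookup table. A trivial Python sketch:

```python
# Map each STRIDE category to its mitigating class of security control.
MITIGATIONS = {
    "Spoofing": "Authentication",
    "Tampering": "Integrity",
    "Repudiation": "Non-Repudiation",
    "Information Disclosure": "Confidentiality",
    "Denial of Service": "Availability",
    "Elevation of Privilege": "Authorization",
}

print(MITIGATIONS["Elevation of Privilege"])  # Authorization
```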

Having completed all these steps, it’s time to go and implement the actual fixes!

Look out for our next Security Team blog post on Ashwini Oruganti’s talk “Designing Secure APIs with State Machines”.


