Episode 21 — Model Threat Applicability: Control Selection With and Without Existing Systems
Security work starts to feel real when you realize that threats are not abstract stories but practical pressures that push on whatever you have actually built. A beginner-friendly way to think about this is to imagine two teams facing the same kinds of attackers, but living in two very different houses. One team is moving into a brand-new place where they can choose locks, cameras, and lighting before anyone ever moves in. The other team is living in a house that has been remodeled for years, with old wiring behind the walls, mismatched keys, and a few “temporary” fixes that somehow became permanent. Threats can apply to both teams, but the best controls for each team are not automatically the same, because the systems they are protecting are not the same. What we are building toward in this lesson is a disciplined way to decide which threats actually matter for your situation, and how to pick controls when you can design a modern system from scratch versus when you are stuck with existing systems and real-world constraints.
Before we continue, a quick note: this audio course is a companion to our course companion books. The first book is about the exam and provides detailed guidance on how best to pass it. The second book is a Kindle-only eBook that contains 1,000 flashcards you can use on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.
A threat is only useful when it connects to something you have that could be harmed, and that connection is what people mean when they talk about applicability. If a threat cannot reasonably reach your environment, or if it would not cause meaningful harm, it might still be interesting, but it is not your priority. That does not mean you ignore it forever; it means you don’t spend scarce time and money treating it like the biggest fire in the building. Applicability is the bridge between a list of scary possibilities and the specific way your organization uses technology. To evaluate that bridge, you think about three things in plain language: what you have, how it is exposed, and what would happen if it broke. If you have customer accounts, identity attacks apply differently than if you only have a public brochure site. If your systems are reachable from the internet, remote attacks apply differently than if everything is inside a tightly controlled network. And if downtime means lost lives in a hospital versus mild inconvenience in a small office, the same threat can carry wildly different weight.
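If it helps to see those three questions as something concrete, here is a minimal Python sketch of an applicability check. Every threat name, asset, exposure, and impact weight in it is an illustrative assumption, not a standard scoring model: a threat only scores when the targeted asset exists and the required entry point is open, weighted by how much harm losing that asset would cause.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    needs_asset: str     # what the threat targets
    needs_exposure: str  # the entry point the threat requires

def applicability(threat, assets, exposures, impact):
    """Score 0 if the threat cannot connect to this environment;
    otherwise weight it by the impact of losing the targeted asset."""
    if threat.needs_asset not in assets:
        return 0
    if threat.needs_exposure not in exposures:
        return 0
    return impact.get(threat.needs_asset, 1)

# Illustrative environment: customer accounts behind an internet-facing
# login, plus a low-impact public brochure site.
env_assets = {"customer_accounts", "public_site"}
env_exposures = {"internet_login", "email"}
env_impact = {"customer_accounts": 5, "public_site": 1}

cred_stuffing = Threat("credential stuffing", "customer_accounts", "internet_login")
defacement = Threat("defacement via FTP", "public_site", "exposed_ftp")
```

Notice that the defacement threat scores zero not because it is harmless in general, but because this particular environment never exposes the entry point it needs.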
Controls are the things you do to reduce risk, and they come in many forms that beginners should learn to separate in their heads. Some controls prevent bad things from happening, like strong access rules that make it hard to log in as someone else. Some controls detect that something bad is happening, like monitoring that notices unusual login activity. Some controls correct or recover, like backups that let you restore data after a destructive event. Other controls are more about reducing impact, like segmenting a network so a breach in one area does not automatically spread to everything else. When you choose controls, you are making tradeoffs: money, complexity, user experience, time to deploy, and ongoing maintenance. A control that looks powerful on paper can be the wrong choice if it is too fragile to operate or too complicated for the team that has to run it. That is why applicability is not only about the threat; it is also about the control’s fit to your reality.
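One way to keep those categories straight is to label each control by the job it does, so a missing category stands out. This is a hypothetical sketch with invented control names, not a catalog:

```python
# Hypothetical control inventory; names are invented for illustration.
CONTROLS = {
    "mfa_login": "prevent",
    "anomalous_login_alerts": "detect",
    "offline_backups": "correct",
    "network_segmentation": "reduce_impact",
}

def by_function(controls):
    """Group controls by the job they do, so gaps become visible."""
    grouped = {}
    for name, function in controls.items():
        grouped.setdefault(function, []).append(name)
    return grouped

def missing_functions(controls):
    """Which of the four jobs has no control assigned to it at all?"""
    needed = {"prevent", "detect", "correct", "reduce_impact"}
    return needed - set(controls.values())
```

An environment that only buys preventive controls, for example, would show three empty categories here, which is exactly the kind of blind spot this paragraph warns about.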
A practical way to connect threats to your environment is to think in terms of assets, entry points, and trust assumptions. Assets are what you care about, such as data, identities, systems, or the ability to deliver a service. Entry points are the ways an attacker could interact with your environment, such as login screens, exposed services, email, third-party connections, or physical access to devices. Trust assumptions are the beliefs your system makes about what is safe, such as assuming the internal network is friendly or assuming devices are managed and up to date. Many security failures happen when the real world violates an assumption the system quietly depended on. If you assume internal traffic is safe and you allow broad access inside the network, then a single stolen credential can become a full environment compromise. If you assume laptops are always patched but patching is inconsistent, then known vulnerabilities become applicable threats even if you wish they were not. This approach keeps your thinking grounded, because you are not chasing every threat; you are checking how threats line up with your actual openings and assumptions.
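A lightweight way to make trust assumptions reviewable is simply to write them down as data and flag the ones reality has violated. The belief names below are illustrative:

```python
# Hypothetical trust assumptions, recorded as reviewable data.
assumptions = [
    {"belief": "internal_network_is_friendly", "holds": False},
    {"belief": "laptops_always_patched", "holds": False},
    {"belief": "admins_use_mfa", "holds": True},
]

# Every violated assumption marks a place where threats you dismissed
# on paper are applicable in practice.
violated = [a["belief"] for a in assumptions if not a["holds"]]
```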
Now consider what changes when you are selecting controls for a greenfield environment, meaning a new system or a major rebuild where you can design cleanly. In a greenfield situation, you can build security into the foundation instead of bolting it on afterward. You can standardize identity from the start, define clear roles, and decide that every service will log in a consistent way. You can choose modern patterns, like separating workloads into zones, and you can decide that sensitive data must be encrypted and access must be audited. You can also set baseline expectations for device management and software updates, because you are not fighting years of inherited exceptions. This is the world where you can say, “We will not allow this risky practice,” and have that rule be real because no one is already depending on it. Greenfield does not mean perfect, but it does mean you have more freedom to avoid creating messy dependencies that are hard to unwind later.
In contrast, selecting controls for an existing environment is often about choosing what you can realistically change without breaking the business. Legacy systems might not support modern authentication. Old applications might require broad network access because of how they were built. Some devices might be too old to patch, or they might be maintained by a vendor who controls the update schedule. In these situations, the goal is still risk reduction, but the path is different. You often focus on isolating what you cannot fix, monitoring it more closely, and wrapping it with compensating controls that reduce the chance and impact of compromise. A compensating control is not a magical substitute; it is an alternative way to reduce risk when the ideal control is not possible. For example, if you cannot upgrade a system to support stronger login methods, you might reduce exposure by limiting where it can be accessed from, adding stronger monitoring around it, and tightening how accounts are managed. The key mindset is that the “best” control is not always the “right” control for an environment you have to keep running.
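The compensating-control fallback can be sketched as a simple lookup: use the ideal control where the system supports it, and substitute a bundle of weaker-but-feasible controls where it does not. All control names here are illustrative assumptions:

```python
# Ideal controls mapped to illustrative compensating bundles.
COMPENSATING = {
    "strong_auth": [
        "restrict_source_networks",   # shrink where logins can come from
        "enhanced_monitoring",        # watch the system more closely
        "tight_account_lifecycle",    # prune stale and shared accounts
    ],
}

def select_controls(system_supports, desired):
    """Use the ideal control if the system supports it; otherwise fall
    back to compensating controls. They reduce risk; they don't erase it."""
    chosen = []
    for control in desired:
        if control in system_supports:
            chosen.append(control)
        else:
            chosen.extend(COMPENSATING.get(control, []))
    return chosen
```

The important design choice is that the fallback is explicit: you can see, in one place, exactly what you are settling for and why, which keeps the compensating controls honest instead of magical.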
Threat applicability can look very different depending on whether you can remove an entry point or whether you must live with it. Imagine a web-facing application. In a greenfield build, you can decide early that only certain services are exposed to the internet, and everything else sits behind layers of internal access. You can decide that administrative functions are separate, and you can require stronger identity checks for those functions. In an existing system, you might discover that administrative panels are exposed because someone needed quick remote access years ago, and now multiple teams rely on it. The threat of remote exploitation might be highly applicable in both cases, but the control choices diverge. In greenfield, you remove the exposure by design. In existing systems, you might not be able to remove it quickly, so you prioritize controls that reduce risk while a longer-term fix is planned. This is where good security thinking stops being a checklist and becomes a series of practical decisions that respect reality.
A common beginner mistake is to treat threat lists like a ranking of what attackers like, rather than a map of what your environment enables. Attackers do not pick targets by reading your security plan; they probe for reachable weakness and profitable impact. That means a threat that seems less exciting can be more applicable if your environment makes it easy. Password guessing might sound basic compared to advanced malware, but if your login system is exposed and your password rules are weak, that basic threat becomes highly applicable and dangerous. Another mistake is to assume that buying a control automatically reduces risk, when in reality a control that is misconfigured, unmonitored, or bypassed can create a false sense of safety. Applicability includes operational reality: if a control cannot be used correctly by the team, its real effect is smaller than its marketing claims. The most valuable controls are often the ones that are boring, well-run, and consistently enforced.
One helpful way to choose controls is to first decide which type of failure you are trying to prevent: confidentiality, integrity, or availability. Confidentiality failures expose information to the wrong people. Integrity failures change data or systems in unauthorized ways. Availability failures prevent legitimate use of systems or services. Threats can affect more than one of these at the same time, and controls often have strengths and weaknesses across them. For instance, encryption helps confidentiality, but it does not automatically stop unauthorized changes, and it does not keep a service available if it is overwhelmed. Backups help availability and recovery, but they do not prevent data theft. Monitoring helps detect all sorts of badness, but it does not prevent it on its own. When you anchor your thinking in what kind of harm matters most for a specific asset, you get more disciplined about matching controls to outcomes instead of collecting controls like trophies.
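To make that matching concrete, here is a hypothetical sketch that records which confidentiality, integrity, or availability outcomes each control actually helps with, then reports what a chosen control set leaves uncovered. The control-to-property mapping is deliberately simplified for illustration:

```python
# Simplified, illustrative mapping of controls to the CIA outcomes they help.
COVERS = {
    "encryption_at_rest": {"confidentiality"},
    "file_integrity_monitoring": {"integrity"},
    "offline_backups": {"availability"},
}

def uncovered(deployed, needed):
    """Which kinds of harm does this control set leave unaddressed?"""
    covered = set()
    for control in deployed:
        covered |= COVERS.get(control, set())
    return needed - covered
```

Running this for an asset that needs all three properties, protected only by encryption and backups, would surface integrity as the gap, which is exactly the "trophy collecting" failure the paragraph describes.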
When you have no existing system, you can often reduce applicability by reducing attack surface, which means making fewer things reachable and fewer actions possible. If an attacker cannot reach a system, many remote threats become less applicable immediately. If a user account cannot do risky actions, many mistakes become less damaging. That is why design-time choices like least privilege and strong separation of duties matter so much. You are not trying to make attackers give up; you are trying to make it hard for an attacker to turn a single success into a total win. That also means choosing consistent patterns, because inconsistency creates special cases, and special cases create gaps. The fewer exceptions you bake in, the more predictable your security posture becomes, and predictability is a quiet superpower in defense. In a greenfield world, you can say, “We will do it one secure way,” and then actually do it.
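Least privilege can be framed as a diff between what each role is granted and what it actually needs; everything in the difference is attack surface you can cut at design time. Role and privilege names below are illustrative:

```python
# Illustrative roles and privileges; the diff is surface you can remove.
GRANTED = {
    "intern": {"read_docs", "reset_any_password"},
    "helpdesk": {"read_docs", "reset_user_password"},
}
NEEDED = {
    "intern": {"read_docs"},
    "helpdesk": {"read_docs", "reset_user_password"},
}

def excess_privileges(granted, needed):
    """Privileges held but not needed: each one is a risky action to revoke."""
    return {role: granted[role] - needed.get(role, set()) for role in granted}
```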
In existing systems, you often cannot shrink the attack surface quickly, so you prioritize reducing exposure where it is easiest and most impactful. You might start by identifying which systems are internet-facing, which ones hold the most sensitive data, and which ones are the most fragile. Then you focus on controls that reduce risk without requiring a rewrite: network segmentation, stricter access paths, tighter account controls, and better monitoring. You also look for quick wins that eliminate obvious exposures, like shutting down unused services or removing old accounts that still have access. This is less glamorous than redesigning everything, but it can reduce applicability of many threats in a surprisingly short time. The hard part is being honest about what is truly unused and what is silently keeping something alive. That honesty requires good documentation, careful change management, and sometimes a willingness to accept a bit of short-term discomfort to reduce long-term risk.
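That triage can be sketched as a rough scoring pass over an inventory. The systems and weights are invented for illustration; weighting internet exposure and data sensitivity above fragility is a judgment call, not a rule:

```python
# Invented inventory; weights favor exposure and sensitivity over fragility.
systems = [
    {"name": "legacy_erp", "internet_facing": False, "sensitive": True, "fragile": True},
    {"name": "web_portal", "internet_facing": True, "sensitive": True, "fragile": False},
    {"name": "old_wiki", "internet_facing": True, "sensitive": False, "fragile": False},
]

def priority(system):
    """Rough triage score: where does hardening effort pay off first?"""
    return 2 * system["internet_facing"] + 2 * system["sensitive"] + system["fragile"]

ranked = sorted(systems, key=priority, reverse=True)
```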
Control selection also depends on who must live with the control, because security that people cannot tolerate often becomes security that people bypass. In a new system, you can design workflows so secure behavior is also the easy behavior, like making sign-in simple but strong and making secure defaults automatic. In an existing environment, users may already have workarounds and habits that grew out of historical pain. If you suddenly impose strict controls without understanding why those workarounds exist, you can create friction that pushes risky behavior into the shadows. That does not mean you give up; it means you plan controls with adoption in mind. Sometimes you phase changes in, and sometimes you pair a restrictive control with a usability improvement so the overall experience is not worse. The best control is one that is effective and sustainable, because risk reduction is not a one-day event; it is a long-term operating posture.
A useful way to think about applicability is to ask how an attacker would chain small steps into a larger outcome, because real attacks often work that way. One weakness might allow initial entry, a second weakness might allow privilege escalation, and a third might allow data access or disruption. In a greenfield build, you can break chains by designing strong boundaries and eliminating unnecessary pathways. In existing systems, you might not be able to remove all pathways, but you can still break chains with strategic controls that limit lateral movement and require re-verification for sensitive actions. For example, even if an attacker gets a user credential, you can reduce the chance that credential leads to administrative access by separating admin accounts and requiring stronger checks for privileged actions. Even if a server is compromised, segmentation can keep that compromise from spreading. Thinking in chains helps you choose controls that block escalation, not just the first step, which is a more robust way to reduce real-world risk.
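Thinking in chains also lends itself to a sketch: model the attack as an ordered list of steps, and check whether any deployed control blocks a step, because one broken link defeats the whole chain. Step and control names here are illustrative:

```python
# Illustrative attack chain: every step must succeed for the attack to land.
CHAIN = ["phish_credential", "vpn_login", "lateral_move", "admin_escalation"]

# Which control, if deployed, blocks which step (illustrative mapping).
BLOCKED_BY = {
    "lateral_move": "network_segmentation",
    "admin_escalation": "separate_admin_accounts",
}

def chain_succeeds(chain, deployed_controls):
    """The whole chain fails if any single step is blocked."""
    for step in chain:
        if BLOCKED_BY.get(step) in deployed_controls:
            return False
    return True
```

With no controls deployed, the chain succeeds end to end; segmentation alone breaks it at the lateral-movement step, which is the "block escalation, not just the first step" point in code form.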
As you get comfortable with this way of thinking, you start to see that there is no single universal list of best controls, because applicability and constraints change the right answer. A disciplined approach is to identify the most important assets, understand realistic threat paths to those assets, and then choose controls that reduce the likelihood and impact of those paths within your operational limits. In a new environment, you emphasize secure-by-design choices that reduce exposure and complexity from day one. In an existing environment, you emphasize containment, visibility, and gradual modernization, using compensating controls when ideal controls are blocked by legacy reality. The goal is not to be perfect; the goal is to be intentional, measurable, and improving. When you can explain why a threat matters for your specific environment and why a control is the right fit given your constraints, you are doing security like an engineer, not like a collector of buzzwords.
SecurityX learners should walk away from this topic with a simple but powerful habit: never select a control just because you have heard it is good, and never panic over a threat just because it sounds dramatic. Instead, connect threats to assets and exposure, and then connect controls to risk reduction and operational reality. When you can build from scratch, use that freedom to make threats less applicable by reducing attack surface and enforcing consistent identity and access patterns. When you inherit an environment, accept the constraints without surrendering to them, and use isolation, monitoring, and careful prioritization to make the environment safer while you plan longer-term fixes. Over time, that habit turns security from a reactive scramble into a repeatable decision process that stays calm under pressure. That calm is not just a personality trait; it is the product of a method that keeps you focused on what truly matters and what you can realistically improve next.