Application Security
The discipline of building applications that are hard to attack, where security is a property of the code, the dependencies, the build, and the runtime — not a layer added afterward by a separate team.
A bolt-on approach treats security as something added after the application is built: a penetration test before launch, a Web Application Firewall in front of the service, a bug bounty programme that finds what shipped. The application itself is built to do its job; security is a separate concern handled by a separate team after the design is frozen. This works for catching the most obvious bugs and produces predictably bad outcomes for everything else — structural weaknesses get baked in, dependencies bring in vulnerabilities the team never reviewed, and the cost of fixing what shipped is many times the cost of designing it differently.
A built-in approach treats security as a property of the application — present in the design, the code, the dependency choices, the build pipeline, and the runtime, rather than overlaid on top of them. The design is reviewed against threat models. The code is checked statically for known dangerous patterns. The dependencies are tracked, signed, and scanned. The runtime has telemetry that catches what static analysis missed. Each layer catches what the others cannot, and the cumulative effect is an application that is structurally harder to attack — not because it has been hardened against attack, but because it was built to make attack expensive in the first place.
The architectural shift is not "we do security testing." It is: security is a property the application has by design — through threat modelling, secure defaults, validated inputs, encoded outputs, vetted dependencies, and complementary scans at every stage — not a check applied to a finished product.
Every security control added after design is more expensive than the same control designed in from the start, and many controls cannot be retrofitted at all. Authentication assumed to happen at the edge cannot be added inside services without coordinating every caller. Input validation assumed to happen at one boundary cannot be added at another without the original boundary's validation becoming unreliable. Cryptographic decisions made for one threat model do not survive a change in the model. Designing security in means: threat modelling at the start, security requirements in the requirements document, secure defaults in the code, and reviews at architectural milestones — not at the pre-launch milestone when changing direction is impossible.
Pick a recently shipped feature in your application. When was the threat model written, who reviewed it, and what changed in the design as a result? If the threat model was written after the feature shipped (or never), security was bolted on, regardless of what the security testing report says.
OWASP Software Assurance Maturity Model (SAMM) — the canonical maturity framework that names design-time security activities (threat assessment, secure architecture, security requirements) as distinct disciplines that mature independently rather than as a single late-stage gate.
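A minimal sketch of what "secure defaults in the code" can look like, in Python. The ServiceConfig type, its fields, and the weaken() helper are hypothetical and illustrative, not drawn from any framework named above; the point is that the default state is the secure state and that weakening it requires an explicit, recorded justification.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ServiceConfig:
    require_tls: bool = True            # secure by default
    cookie_secure: bool = True
    cookie_httponly: bool = True
    debug_endpoints: bool = False       # insecure features are opt-in, never on by default
    session_timeout_minutes: int = 30
    insecure_overrides: dict = field(default_factory=dict)   # setting -> written justification

    def weaken(self, setting: str, justification: str) -> "ServiceConfig":
        """Turn off a secure-by-default flag; refuses to do so without a justification."""
        if setting not in {"require_tls", "cookie_secure", "cookie_httponly"}:
            raise ValueError(f"{setting!r} is not a secure default that can be weakened here")
        if not justification.strip():
            raise ValueError(f"refusing to weaken {setting!r} without a written justification")
        overrides = {**self.insecure_overrides, setting: justification}
        return ServiceConfig(**{**self.__dict__, setting: False, "insecure_overrides": overrides})

# Usage: the insecure configuration exists only with an explicit, reviewable reason attached.
config = ServiceConfig()
legacy = config.weaken("require_tls", "legacy batch client; TLS migration tracked separately")
```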
The OWASP Top 10 lists categories of application-security failure — broken access control, cryptographic failures, injection, insecure design, security misconfiguration, and so on. Treating each entry as a single bug to fix and check off misses the point. These are not bugs; they are categories, and they appear at the top of the list because they recur across applications, decades, and technologies due to structural weaknesses in how the work is done. Broken access control isn't one bug — it's a thousand-bug pattern that comes from authorisation logic scattered through code. Cryptographic failures aren't one bug — they're the predictable result of letting application teams choose algorithms and manage keys. Reading the Top 10 as a curriculum means asking: what structural property of our development would prevent this entire category from arising in our applications?
Pick the OWASP Top 10 category that has produced the most bugs in your application's history. What changed structurally to make that category less likely after each bug? If the answer is "we fixed the bug" each time, the team is treating the category as a recurring tax instead of as a structural problem to design out.
OWASP Top 10 — Application Security Risks — read alongside the underlying Common Weakness Enumeration (CWE) catalogue, which provides the deeper taxonomy of weakness types that produce the recurring categories.
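A sketch of the kind of structural answer that question is asking for, using broken access control as the example: authorisation decisions funnel through one deny-by-default chokepoint instead of being scattered through handlers. Python is used for illustration; the Principal type, POLICY table, and require() decorator are hypothetical names, not an existing library.

```python
from dataclasses import dataclass
from functools import wraps

@dataclass
class Principal:
    user_id: str
    roles: frozenset

# One policy table, owned in one place, instead of ad-hoc checks scattered through handlers.
POLICY = {
    "orders:read":  {"customer", "support", "admin"},
    "orders:write": {"admin"},
}

class AuthorizationError(Exception):
    pass

def require(permission: str):
    """Decorator: the handler runs only if the caller's roles grant `permission`."""
    def decorator(handler):
        @wraps(handler)
        def wrapper(principal: Principal, *args, **kwargs):
            allowed_roles = POLICY.get(permission, set())   # unknown permission: deny by default
            if principal.roles.isdisjoint(allowed_roles):
                raise AuthorizationError(f"{principal.user_id} lacks {permission}")
            return handler(principal, *args, **kwargs)
        return wrapper
    return decorator

@require("orders:write")
def cancel_order(principal: Principal, order_id: str) -> None:
    ...  # business logic only; no authorisation logic here

alice = Principal(user_id="alice", roles=frozenset({"support"}))
# cancel_order(alice, "order-9") raises AuthorizationError: support cannot write orders
```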
Modern applications are mostly other people's code. A typical Node.js or Python service has hundreds to thousands of transitive dependencies — code from authors the team has never met, running in the same process, with the same access. Log4Shell, the XZ Utils backdoor, event-stream, colors.js, ua-parser-js — each demonstrated the same lesson: a dependency you never reviewed can compromise an application you carefully reviewed. The architectural response is to treat dependencies as a supply chain that requires its own controls: an inventory of what's depended on (SBOM), provenance for each artefact (signing, build attestations), continuous scanning for known vulnerabilities, and a process for responding when something is found. None of this is novel; what's novel is treating the supply chain as part of the application's security boundary rather than as someone else's problem.
Pick a critical dependency in your application — the one that, if compromised, would compromise the most. Who is its current maintainer, when was the last release, when was the last security audit, and how is its provenance verified? If those answers don't exist, the supply-chain security of that dependency rests on hope.
SLSA — Supply-chain Levels for Software Artifacts — the framework that names increasing levels of supply-chain integrity (build provenance, hermetic builds, two-party review) as a maturity ladder; Sigstore and CycloneDX are the practical tooling that makes those levels operational.
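A sketch of continuous dependency checks, under the assumption of a Python/PyPI ecosystem, using two public data sources: the OSV.dev query API for known vulnerabilities and the PyPI JSON API for last-release date, so an unmaintained dependency is noticed before a disclosure forces a hurried migration. Error handling, pagination, and caching are omitted for brevity.

```python
import requests

def known_vulnerabilities(name: str, version: str) -> list[str]:
    """Query OSV.dev for advisories affecting one pinned dependency."""
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"version": version, "package": {"name": name, "ecosystem": "PyPI"}},
        timeout=10,
    )
    resp.raise_for_status()
    return [vuln["id"] for vuln in resp.json().get("vulns", [])]

def last_release_date(name: str) -> str | None:
    """Ask PyPI when the latest release of a dependency was uploaded."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    resp.raise_for_status()
    files = resp.json().get("urls", [])
    return files[0]["upload_time"] if files else None

if __name__ == "__main__":
    # Example: check one pinned dependency taken from the application's SBOM or lockfile.
    print(known_vulnerabilities("requests", "2.19.0"))
    print(last_release_date("requests"))
```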
Two distinct disciplines, repeatedly conflated, with different failure modes. Input validation rejects data that doesn't belong: malformed structure, out-of-range values, prohibited content, wrong type. It happens at the earliest point data enters trust — the application's perimeter. Output encoding ensures data is rendered safely in whatever destination it reaches: HTML-escape for browser context, parameterised queries for SQL context, shell-escape for command context, attribute-encode for XML/SVG context. It happens at the boundary between the application and the destination. The reason both are necessary: input validation cannot anticipate every destination context, and output encoding cannot remediate inputs that should never have been accepted at all. Conflating them — "we validate inputs so we don't need to encode outputs," "we encode outputs so we don't need to validate inputs" — produces injection vulnerabilities in production, which is what most of the OWASP Top 10's injection category amounts to.
Pick a user-input field in your application that flows to a SQL query, an HTML page, and a shell command (this is more common than it sounds). For each destination, what encoding or parameterisation is applied at the boundary? If the answer is "we sanitised the input once," the application has a single point of failure across three different injection categories.
OWASP Injection Prevention Cheat Sheet — the practical reference for context-specific output encoding, with each major destination grammar (SQL, HTML, OS command, LDAP, XPath) treated as a distinct encoding problem.
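A sketch of the two disciplines side by side, using only the Python standard library: one allow-list validation at the perimeter, then a separate encoding or parameterisation step for each destination grammar. The field name and patterns are illustrative.

```python
import html
import re
import shlex
import sqlite3

def validate_username(raw: str) -> str:
    """Perimeter validation: reject what doesn't belong (allow-list, not deny-list)."""
    if not re.fullmatch(r"[A-Za-z0-9_.-]{1,32}", raw):
        raise ValueError("invalid username")
    return raw

username = validate_username("alice.example")

# SQL context: parameterisation, never string concatenation.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("SELECT * FROM users WHERE name = ?", (username,))

# HTML context: escape for the markup grammar at the point of output.
page_fragment = f"<p>Hello, {html.escape(username)}</p>"

# Shell context: quote for the shell grammar (better still, avoid the shell entirely).
command = f"id -u {shlex.quote(username)}"
```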
Each tool sees the application from a different angle and finds bugs the others cannot. SAST (Static Application Security Testing) reads source code without running it — it finds dangerous patterns (hard-coded secrets, dangerous function calls, taint flows) that are visible in static text but cannot find bugs that depend on configuration, deployment, or runtime state. DAST (Dynamic Application Security Testing) probes a running application from outside — it finds deployment and configuration bugs (open admin endpoints, missing security headers, weak authentication flows) but cannot see inside the code. IAST (Interactive Application Security Testing) instruments the running application — it sees both code and runtime, and finds bugs in the interaction that SAST and DAST each miss. Runtime protection (RASP, WAF) catches what made it past everything else and limits the blast radius. Pretending one replaces the others — usually because of cost or operational simplicity — is the most common reason production applications still ship with bugs that any of the four would have caught.
Pick a recent application security finding that reached production. Which of the four — SAST, DAST, IAST, runtime protection — should have caught it, and why didn't it? If the answer is "we don't run that one," the gap is structural, not the result of bad luck.
OWASP Application Security Verification Standard (ASVS) — the framework that names verification activities at each level (testing, scanning, review) with explicit recognition that no single tool covers the whole standard.
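A sketch of how one scanner's findings can flow into a shared triage backlog rather than a dashboard nobody reads. It assumes Bandit (an open-source Python SAST tool) is installed and invokes it as a subprocess; the JSON field names read from its report are an assumption about current Bandit output, and DAST or IAST findings would be normalised into the same record shape.

```python
import json
import subprocess

def run_sast(source_dir: str) -> list[dict]:
    """Run Bandit over a source tree and normalise its findings into triage records."""
    proc = subprocess.run(
        ["bandit", "-r", source_dir, "-f", "json"],
        capture_output=True, text=True,        # Bandit exits non-zero when it finds issues
    )
    report = json.loads(proc.stdout)
    return [
        {
            "tool": "bandit",
            "rule": item["test_id"],
            "severity": item["issue_severity"],
            "location": f'{item["filename"]}:{item["line_number"]}',
            "summary": item["issue_text"],
        }
        for item in report.get("results", [])
    ]

if __name__ == "__main__":
    for finding in run_sast("src"):
        print(finding)   # in practice: open a tracked bug with an owner and an SLA
```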
Threat modelling is the act of writing down what an attacker might want, how they might try to get it, and what would stop them; the exercise itself produces a clarity that ad-hoc thinking does not. STRIDE (Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege) gives a structured way to ask the question. Attack trees give a structured way to decompose the answer. The output is not a comprehensive enumeration of every possible attack — it is a shared understanding of where the design is strong, where it is weak, and where the team has explicitly accepted risk. Without a threat model, security work is reactive: respond to the bugs that surface. With a threat model, security work is proactive: design for the threats the model predicts, and use the model to spot when reality has diverged from what was assumed.
Pick the most recent significant feature in your application. Where is the threat model for it, when was it last updated, and what risks does it explicitly accept? If those questions cannot be answered, the team is operating on the security threats they have already encountered, not on the threats the design implies.
Microsoft — STRIDE Threat Modelling — the foundational treatment of structured threat modelling that introduced STRIDE; for deeper application practice, Adam Shostack's Threat Modeling: Designing for Security is the canonical book-length treatment.
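A sketch of a threat model kept as a structured, reviewable artefact rather than a one-off document: each entry records the component, the STRIDE category, the mitigation, and whether the risk was explicitly accepted. The schema and the example entries are entirely illustrative.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class Stride(Enum):
    SPOOFING = "S"
    TAMPERING = "T"
    REPUDIATION = "R"
    INFO_DISCLOSURE = "I"
    DENIAL_OF_SERVICE = "D"
    ELEVATION_OF_PRIVILEGE = "E"

@dataclass
class Threat:
    component: str
    category: Stride
    description: str
    mitigation: str | None          # None means nothing mitigates it yet
    risk_accepted: bool = False     # explicit acceptance, with the reasoning in `notes`
    notes: str = ""
    last_reviewed: date = field(default_factory=date.today)

MODEL = [
    Threat(
        component="payments-api",
        category=Stride.TAMPERING,
        description="Client can modify the price field in the checkout request",
        mitigation="Price is re-read from the catalogue service server-side",
    ),
    Threat(
        component="admin-console",
        category=Stride.ELEVATION_OF_PRIVILEGE,
        description="Session fixation via long-lived admin cookies",
        mitigation=None,
        risk_accepted=True,
        notes="Accepted until the SSO migration; console is reachable only over VPN",
    ),
]

# The model that gets referenced is the model that gets maintained: flag stale entries.
stale = [t for t in MODEL if (date.today() - t.last_reviewed).days > 180]
```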
The diagram below shows a canonical secure-by-design application pipeline: design-time threat modelling feeds into the architecture; code is reviewed by SAST at commit; dependencies are tracked via SBOM and signed via Sigstore; running applications are probed by DAST and instrumented by IAST; runtime protection (WAF) sits at the edge; findings from every stage flow into a single triage backlog with documented SLAs.
The application team builds; the security team scans before launch and produces a list. The list arrives too late to influence design, often too late to influence release, and the team learns nothing about why the bugs exist. The cycle repeats with the next release.
Application security is the application team's responsibility, with the security team providing tooling, expertise, and second opinions — not gatekeeping. Threat modelling, code review, dependency hygiene, and remediation are part of the engineering work, not external to it.
A critical library was added years ago, the original author has stopped maintaining it, and the team has not noticed. When a vulnerability is disclosed, no patch is forthcoming, and the team discovers the problem at the worst possible time.
Dependency hygiene includes tracking maintenance status — last release date, maintainer activity, alternative libraries — not only known CVEs. Unmaintained dependencies are migrated proactively, before the disclosure that forces the migration in a hurry.
A single "sanitise input" function is called everywhere user data is handled, then output is rendered without further encoding because "the input was already sanitised." The function cannot anticipate every destination context, and the bug is found when an attacker exercises a context that was missed.
Validate inputs at the perimeter for what they should be; encode outputs at the boundary for the destination context. The two are complementary disciplines, not interchangeable, and both must be present.
Five overlapping security scanners are running, producing thousands of findings, with no one able to triage them. The team learns to ignore the dashboards, and real findings drown in the noise.
Tools are tuned for signal-to-noise. False positives are suppressed with documented justifications, not ignored silently. The triage process is owned, has SLAs, and feeds back into tool tuning when noise patterns are identified. A few well-tuned tools beat many noisy ones.
A threat model was produced at project kickoff and never touched again. The application has evolved substantially since; the model describes a system that no longer exists; nobody refers to it during design or incident response. The artefact exists but provides no protection.
Threat models are living documents — updated when the architecture changes, referenced when designing new features, used during incident response. The model that gets referenced is the model that gets maintained; the unreferenced model is decoration regardless of how good it was at creation.
Threat modelling at design time is dramatically cheaper than threat-finding after launch. The threat model is a living artefact, referenced during design and incident response — not a one-time deliverable for the launch checklist.
Authentication, authorisation, audit, data classification, residency, and threat-model-derived requirements are written down with the same rigour as functional ones. Without this, security requirements get deduced inconsistently by whoever happens to be implementing the feature.
The default state of any new component is the secure state. Insecure configurations require explicit justification and approval. The opposite (insecure default, secure on request) produces production deployments that drift toward whatever was easiest.
The recurring categories of failure are addressed at the design level — centralised authorisation, standardised cryptographic libraries, parameterised query layers — not as a series of bug fixes that resemble each other.
The SBOM is the source of truth for what's actually in the application; without it, dependency-vulnerability response degrades to guesswork. The SBOM is generated automatically and queryable, not produced manually on request.
Vulnerabilities are disclosed continuously; scanning continuously is the only way to catch the disclosure before exploitation. Single-point-in-time scanning leaves windows that adversaries find.
Two complementary disciplines applied at the right layers, not conflated into a single chokepoint. Allow-lists preferred over deny-lists; parameterisation preferred over escaping where the destination supports it.
Each finds bug categories the others miss. Tools are configured for actual languages, frameworks, and risk profile; false positives are managed; findings are tracked as bugs with documented severity.
Who's notified, how the team triages, what the SLA is — exercised before the next zero-day, not invented during one. The process names the people, channels, and decision authority.
A finding without an owner and an SLA is decoration. Triage produces accept-risk, defer, or fix decisions with reasoning recorded; aged findings are escalated; backlog age is a metric.
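A sketch of triage as an owned process: every finding carries an owner, a decision with recorded reasoning, and an SLA clock, and anything past its SLA surfaces as a metric. The severity-to-SLA mapping is illustrative policy, not a standard.

```python
from dataclasses import dataclass
from datetime import date, timedelta

SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

@dataclass
class Finding:
    identifier: str
    severity: str            # critical / high / medium / low
    owner: str
    opened: date
    decision: str = "open"   # open / fix / defer / accept-risk
    rationale: str = ""

    def overdue(self, today: date | None = None) -> bool:
        """An open finding past its severity's SLA window needs escalation."""
        today = today or date.today()
        return self.decision == "open" and today > self.opened + timedelta(days=SLA_DAYS[self.severity])

backlog = [
    Finding("APPSEC-101", "high", "payments-team", date(2024, 1, 5)),
    Finding("APPSEC-117", "low", "platform-team", date(2024, 2, 1), "accept-risk",
            "internal-only endpoint behind VPN; revisit at the next architecture review"),
]

# Backlog age as a metric, escalation for anything past its SLA.
overdue = [f.identifier for f in backlog if f.overdue()]
```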