Why DevOps and Security Keep Fighting (And How to Stop It)

The friction between DevOps and security teams is structural, not personal. It comes from misaligned incentives — and the fix is not compromise, it is integration. Here is what shift-left security actually looks like in practice, from someone who has lived on both sides.
Picture this: a developer raises a pull request on a Friday afternoon. The deployment pipeline is green. The feature is ready. Then security steps in and says the release cannot go ahead because a vulnerability scan — run manually, somewhere outside the pipeline — has flagged a medium-severity finding that has been sitting in a spreadsheet for three weeks.
The developer is frustrated. The security engineer is frustrated. The release is delayed. And in the post-mortem, everyone agrees to "communicate better."
They are diagnosing the wrong problem.
The Real Cause Is Not Communication
The friction between DevOps and security teams is structural. It is baked into how each function is measured.
DevOps is optimised for deployment frequency, lead time, and mean time to recovery. The faster you ship, the better you are doing. Velocity is the metric.
Security is optimised for risk reduction. Every change is a potential attack surface. The fewer things that move, the smaller the blast radius. Control is the metric.
These are not personalities in conflict. They are incentive systems in conflict. You cannot fix an incentive problem by asking people to communicate more. You fix it by changing the structure.
The structural fix is shifting security left — moving security controls into the development and delivery pipeline, so security is not a gate at the end of the process. It is part of the process.
What "Shift Left" Actually Means in Practice
Shift-left security is not a philosophy. It is a set of specific controls that run automatically at specific points in the pipeline. Let me be direct about what those controls are and where they belong.
SAST — Static Analysis in the Pull Request
Static Application Security Testing should run on every pull request, before a human reviewer sees the code. Tools like Semgrep, Bandit (for Python), or SonarQube flag insecure patterns — SQL injection vectors, unsafe deserialization, hardcoded credentials — at the point where they are cheapest to fix: before they are merged.
The key word is automatic. If SAST is a manual step that a security engineer runs on request, it will be skipped under time pressure every single time.
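To make "automatic" concrete, here is a minimal sketch of a gate step that consumes Semgrep's --json report and fails the build on blocking severities. The report shape below (a top-level "results" list, severity under extra.severity) reflects Semgrep's JSON output as I understand it; verify the details against your tool's actual output.

```python
# Minimal pipeline gate over Semgrep output (a sketch, not a full integration).
# Assumption: Semgrep's --json report shape, i.e. a top-level "results" list
# where each result carries its severity under extra.severity.
BLOCKING_SEVERITIES = {"ERROR"}  # Semgrep's highest of INFO / WARNING / ERROR

def blocking_findings(report: dict) -> list:
    """Return the findings that should fail the build."""
    return [
        r for r in report.get("results", [])
        if r.get("extra", {}).get("severity") in BLOCKING_SEVERITIES
    ]

# Trimmed example report, as produced by: semgrep scan --json
sample_report = {
    "results": [
        {"check_id": "python.lang.security.audit.eval-detected",
         "path": "app/handlers.py",
         "extra": {"severity": "ERROR"}},
        {"check_id": "python.lang.maintainability.unused-import",
         "path": "app/util.py",
         "extra": {"severity": "INFO"}},
    ]
}

blockers = blocking_findings(sample_report)
# In CI, exit non-zero when blockers is non-empty so the pipeline step fails.
```

The pull request fails immediately, with the finding attached, and no security engineer had to run anything by hand.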
Secrets Scanning — Before the Commit Lands
A secret committed to a repository is a breach, not a near miss. Once it hits version history, you must assume it is compromised, because version history is often more widely accessible than people realise.
Secrets scanning should run as a pre-commit hook and again in the pipeline. GitHub Advanced Security, Gitleaks, and TruffleHog are all viable options. The rule is simple: no secret reaches the repository. If one does, the pipeline fails and the rotation process starts immediately.
This is non-negotiable. I have seen the aftermath of leaked credentials in a regulated environment. The remediation cost — in time, in regulatory exposure, in trust — dwarfs the cost of implementing the control.
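To show the shape of the pre-commit check, here is a deliberately minimal sketch. Gitleaks and TruffleHog ship large rule sets plus entropy-based detection; the two patterns below are illustrative assumptions, not a substitute for those tools.

```python
import re

# Two illustrative secret patterns. Real scanners (Gitleaks, TruffleHog)
# maintain far larger rule sets and add entropy analysis on top.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text: str) -> list:
    """Return (rule_name, matched_text) pairs for anything resembling a secret."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

# In a pre-commit hook this runs over the staged diff; any hit aborts the
# commit, and anything that slips through triggers rotation immediately.
staged_diff = 'aws_key = "AKIAIOSFODNN7EXAMPLE"  # leaked test credential\n'
hits = scan_text(staged_diff)
```

The same check runs again as a pipeline step, because a hook on a developer's machine can always be bypassed; the pipeline cannot.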
IaC Security Scanning — Catching Misconfigurations Before They Deploy
Infrastructure as Code has transformed how cloud environments are built. It has also introduced a new category of vulnerability: the misconfigured Terraform module that provisions a publicly accessible S3 bucket, or an overly permissive IAM policy that gives a service account more than it needs.
Tools like Checkov, tfsec, and Terrascan scan Terraform, CloudFormation, and Bicep files for security misconfigurations before terraform apply runs. They should be a mandatory pipeline gate.
I ran an agentic security audit across a Terraform-provisioned AWS environment — the kind of audit that would take hours manually. It surfaced eight confirmed findings in minutes, including a direct S3 accessibility path that bypassed CloudFront Origin Access Control entirely. That finding existed because the infrastructure was built without an automated IaC scan in the pipeline. Once the scan ran, the finding was documented, verified in the live AWS Console, and remediated in the Terraform code within the same workflow loop.
That is what automated IaC security scanning looks like in practice. It is not a theoretical improvement. It closes real gaps that manual review misses.
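As a toy version of one such rule: Checkov and tfsec parse HCL properly and ship hundreds of policies, but a single check sketched in a few lines is enough to show the gate concept. The regex-over-source approach below is an illustrative assumption only.

```python
import re

# One Checkov-style rule, applied naively to raw Terraform source.
# Real scanners parse HCL; a regex is enough to illustrate the gate.
PUBLIC_ACL = re.compile(r'acl\s*=\s*"(public-read|public-read-write)"')

def check_public_bucket_acl(terraform_source: str) -> list:
    """Flag S3 bucket ACLs that grant public access."""
    return [m.group(1) for m in PUBLIC_ACL.finditer(terraform_source)]

snippet = '''
resource "aws_s3_bucket" "assets" {
  bucket = "example-assets"
  acl    = "public-read"
}
'''

violations = check_public_bucket_acl(snippet)
# A non-empty result should fail the pipeline before terraform apply runs.
```

The misconfiguration is caught in the plan stage, where fixing it is a one-line change, instead of in production, where it is an exposed bucket.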
Pipeline Gates — Fail Fast, Not Late
A pipeline gate is a step that halts deployment if a defined security threshold is breached. High-severity SAST finding: gate fails. Secrets detected: gate fails. IaC misconfiguration above a defined severity level: gate fails.
Gates are where shift-left security has teeth. Without them, the scans run but findings are advisory — and advisory findings in a fast-moving delivery team get deferred, triaged, backlogged, and forgotten.
The gate forces a decision at the point of deployment, not three weeks later in a spreadsheet. The developer gets immediate feedback. The security engineer does not need to chase. The process enforces the standard without requiring someone to say no in a meeting.
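The gate logic itself is small. Here is a sketch of a single step that aggregates findings from all three scans against one threshold; the severity names and the default threshold are assumptions chosen for illustration.

```python
# A single gate step aggregating findings from SAST, secrets, and IaC scans.
# Severity names and the default threshold are illustrative assumptions.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(findings: list, blocking_threshold: str = "high"):
    """Return (passed, blockers): fail when any finding meets the threshold."""
    floor = SEVERITY_RANK[blocking_threshold]
    blockers = [f for f in findings if SEVERITY_RANK[f["severity"]] >= floor]
    return len(blockers) == 0, blockers

findings = [
    {"source": "sast", "severity": "medium", "id": "possible-sqli"},
    {"source": "iac", "severity": "high", "id": "s3-public-acl"},
]

passed, blockers = gate(findings)
# passed is False: the high-severity IaC finding halts the deployment,
# and the developer sees exactly which finding did it.
```

Note what the gate does not do: it does not block on the medium-severity finding, which stays advisory. The threshold, not a meeting, decides.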
The Incentive Problem Has to Be Addressed Directly
Here is the thing that shift-left tooling alone does not fix: security teams are still often measured on the number of vulnerabilities they close, not on whether the delivery pipeline runs smoothly. DevOps teams are still measured on deployment frequency, not on security posture.
Until those measurements change, you will have a security engineer who knows that every pipeline gate they add risks being blamed for blocking a release, and a DevOps engineer who knows that every security requirement adds time to a sprint.
The engineering fix is embedding security controls in the pipeline so that neither team owns the gate manually. The organisational fix is measuring both teams against the same outcomes — deployment frequency, security posture, and mean time to remediate findings — so that a blocked pipeline is a shared problem, not a blame opportunity.
This is not idealism. I spent the last decade of a 19-year enterprise IT career owning cybersecurity posture, cloud transformation, and IT governance for a regulated financial services organisation simultaneously. The tension between velocity and control is real and constant in that environment. The only way I found to manage it was to stop treating security as a separate approval layer and start treating it as an operational discipline embedded in how infrastructure is built, deployed, and monitored.
Five Things That Actually Work
These are not theoretical recommendations. They are the controls and practices that reduce DevOps versus security friction in real environments.
1. Run all security scans in the pipeline, not alongside it. SAST, secrets scanning, and IaC scanning should be pipeline steps with pass/fail outputs. If they run outside the pipeline, they will be ignored under pressure.
2. Define your security thresholds as code. What constitutes a blocking finding — critical only, or high and above? What is advisory? Document it in a policy file committed to the repository. When the threshold is code, it is version-controlled, reviewable, and not subject to ad hoc negotiation in a sprint review.
3. Give developers the security context they need to fix findings, not just a CVE number. A finding that says "CVE-2024-XXXX detected in dependency" is not actionable on its own. Add the remediation step — update to version X, or replace with library Y. The faster a developer can resolve a finding without escalating to security, the faster the pipeline moves.
4. Monitor your deployed infrastructure with the same rigour you apply to the build pipeline. Shift-left is not the complete picture. You also need threat detection and log analysis on what is running in production. Security Onion, AWS CloudTrail, VPC Flow Logs, and IAM Access Analyzer are how you catch what the pipeline did not. I have used Security Onion to trace a multi-stage intrusion — from the initial spear-phishing vector through credential harvesting and lateral movement — by correlating IDS alerts with network traffic and system logs. That is what operational security monitoring looks like, and it is the runtime complement to everything you do at build time.
5. Apply the MITRE ATT&CK framework to your pipeline design, not just your incident response. Most teams use MITRE ATT&CK reactively — to understand what happened after a breach. Use it proactively to ask: which techniques would succeed against our current pipeline and deployment process? That question changes how you design IAM policies, how you think about lateral movement risks in a multi-account AWS environment, and how you prioritise which controls to implement first.
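Point 2 above — thresholds as code — can be as simple as a small policy file plus a classifier. The file name, schema, and severity labels below are assumptions; the point is that the policy is version-controlled, not negotiated per release.

```python
import json

# Thresholds as code: a policy document that would live in the repository
# (for example as security-policy.json -- name and schema are assumptions).
POLICY_JSON = '''
{
  "blocking": ["critical", "high"],
  "advisory": ["medium", "low"]
}
'''

def classify(severity: str, policy: dict) -> str:
    """Map a finding's severity to a pipeline decision under the policy."""
    if severity in policy["blocking"]:
        return "block"
    if severity in policy["advisory"]:
        return "advise"
    return "ignore"

policy = json.loads(POLICY_JSON)
decision = classify("high", policy)
# Changing what blocks a release now means changing this file in a reviewed
# pull request -- version-controlled, auditable, and not ad hoc.
```

Every scanner in the pipeline reads the same policy, so the standard is enforced consistently across SAST, secrets, and IaC findings.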
A View From Both Sides
The DevOps versus security argument is usually framed as a speed versus safety trade-off. I do not think that framing is accurate, and I think it is part of why the argument persists.
Security controls embedded in a pipeline do not slow delivery. They move the cost of finding a vulnerability from remediation post-breach — which is expensive, disruptive, and in a regulated sector, potentially catastrophic — to a failed pipeline step at build time, which costs one engineer an hour of effort.
The speed trade-off only exists if security is a manual gate. Once it is automated, it is just part of the pipeline. A build that fails a security gate is the same as a build that fails a unit test. Nobody argues that unit tests slow down DevOps.
What I have found, having sat on both sides of this problem, is that the engineers on both teams almost always want the same thing: to ship working, secure software without being blocked by process. The structural fixes described above get both teams closer to that outcome than any amount of cross-functional workshops.
Security saying "no" at the end of a release cycle is not a security failure. It is a process failure. The control arrived too late to be useful. Shift it left, automate it, and make the feedback loop tight enough that the developer who introduced the risk is the person who resolves it — before it ever reaches production.
Where are you in this journey? Are security controls embedded in your pipeline, or is your team still managing the gate manually? I am interested in where organisations are finding the most friction — and what has actually moved the needle.
#DevSecOps #ShiftLeft #CloudSecurity #AWS #Terraform #CICD #SecurityEngineering #CyberSecurity #DevOps #InfrastructureAsCode #PipelineSecurity #SOC #ThreatDetection #VulnerabilityManagement #SecureSDLC