Security & Emergency Management Blog

Aviation safety governance concept image showing a regional jet and helicopter over the Potomac with a safety recommendation document and closure tracker motif.

DCA Midair Collision: How Safety Recommendations Get Triggered

January 14, 2026 · 11 min read

When the risk is immediate, investigators don’t wait for the final report—recommendations move fast.

By Aaron Gilmore — Intergalactic SEM Consultant (humans only so far).

Human Lead, Automation-Enhanced. SEM-Artificium

QuickScan

  • Major incidents often produce “urgent” or time-sensitive safety recommendations before the full investigation is complete.

  • Those recommendations are triggered by early risk signals: recurring hazard patterns, near-miss data, operational constraints, and credible evidence of unacceptable risk.

  • The governance challenge is not writing recommendations—it’s closing them: owners, deadlines, evidence, and verification.

  • For enterprise safety (and supply chain safety), the lesson is portable: build a corrective-action system that can move at incident speed.

For Who

Primary audience: DoD/Federal Supply Chain

Best for roles: Safety & Risk, Program Management, Aviation/Transportation Ops, Emergency Management, Compliance/Oversight, Corrective-Action Owners

What You’ll Get

You will learn: How urgent safety recommendations are triggered and translated into corrective actions.

You will be able to do: Stand up a simple “Recommendation-to-Closure” tracker and trigger logic that prevents recommendations from stalling.

Time & Effort

Read time: 8–9 minutes

Do time (optional): 30–60 minutes

Difficulty: Intermediate

In high-risk systems, the first deliverable is often a recommendation—not a narrative.

Executive Snapshot

What happened: On January 29, 2025, a PSA Airlines CRJ700 operating as Flight 5342 collided in flight with a U.S. Army UH-60L Black Hawk (callsign PAT25) near Ronald Reagan Washington National Airport (DCA) and impacted the Potomac River, killing everyone aboard both aircraft (67 total). (National Transportation Safety Board [NTSB], 2025b)

Why it matters: The collision didn’t only expose an operational breakdown—it exposed a system design problem: helicopter and fixed‑wing traffic could be placed in close proximity with limited vertical margin under specific runway configurations and conditions. When investigators see an immediate, repeatable hazard, they can issue urgent safety recommendations before the final report—because the risk won’t wait. (NTSB, 2025a; NTSB, 2025c)

What to do now:

  • Build a near‑miss reporting pipeline leadership actually reads—and that produces corrective actions (not just statistics). (Federal Aviation Administration [FAA], 2021; National Aeronautics and Space Administration [NASA], n.d.)

  • Treat controls layering as a design requirement: procedures + technology + human factors + oversight—because any single layer can fail or be unavailable in edge conditions. (NTSB, 2025a)

  • Use a governance closure tracker: recommendation/action → owner → milestones → evidence → verified closure. (NTSB, n.d.; NTSB, 2024)

Key lesson: Safety recommendations get triggered when evidence shows a repeatable hazard—and they only prevent the next accident when governance converts them into implemented controls with owners, timelines, and verification.

Field Notes Opening

In every high‑reliability system, there’s a moment when someone says:

“We’ve been doing it this way for years...” or, my favorite, “this is how we did things on deployment...”

It sounds like comfort. Sometimes, it’s a warning. Because “years” can also mean years of near‑misses, thin margins, workarounds, and luck—until luck runs out. The DCA midair collision is a hard reminder that safety isn’t only about the last decision in the cockpit or tower. It’s about the system the decisions happen inside.

What We Know (Verified Facts)

Confirmed facts:

  • The accident occurred on January 29, 2025 near DCA and the Potomac River. (NTSB, 2025b)

  • The aircraft were a PSA Airlines CRJ700 operating as Flight 5342 and a U.S. Army Sikorsky UH‑60L Black Hawk operating under callsign PAT25. (NTSB, 2025b)

  • Everyone aboard both aircraft was fatally injured (67 total). (NTSB, 2025b)

  • During the investigation, the NTSB issued an urgent safety recommendation report (AIR‑25‑01, March 7, 2025) focused on deconflicting helicopter Route 4 traffic and airplanes operating with runway 33 arrivals / runway 15 departures at DCA. (NTSB, 2025a)

  • The NTSB urged FAA action to prohibit operations on helicopter Route 4 between Hains Point and the Wilson Bridge when runways 15 and 33 are in use, and to designate an alternative route. (NTSB, 2025a; NTSB, 2025c)

  • The FAA published statements in the immediate aftermath of the collision (January 30, 2025). (FAA, 2025)

Field note: These facts are sufficient to extract governance lessons about how urgent recommendations are triggered and how they should be closed—without needing the final probable-cause narrative.

What We Don’t Know Yet (Unverified / Evolving)

Open questions / uncertain details:

  • Final probable cause and the full set of contributing factors (training, communications, equipment, procedures, human performance, oversight, etc.).

  • The complete set of final safety recommendations (beyond urgent recommendations already issued).

Why that’s okay for this lesson: This article is about the mechanism, not the final verdict: (1) how a hazard becomes a recommendation, and (2) how a recommendation becomes an implemented control.

Timeline

  • Jan 29, 2025 — Midair collision near DCA; Potomac River impact. (NTSB, 2025b)

  • Feb 2025 — NTSB briefings and ongoing investigation activity.

  • Mar 7, 2025 — NTSB issues AIR‑25‑01 urgent safety recommendation report on deconflicting helicopter Route 4 with runway 33/15 operations. (NTSB, 2025a)

  • Mar 11, 2025 — NTSB emphasizes the urgent recommendations publicly; preliminary report released. (NTSB, 2025b; NTSB, 2025c)

  • Jul 30–Aug 1, 2025 — NTSB holds a multi-day investigative hearing.

Figure 1 - "DCA Collision → Urgent Recommendation Timeline (Jan–Mar 2025)" [Aaron Gilmore] {Timeline showing the Jan 29, 2025 DCA midair collision followed by the March 7 urgent recommendation report AIR‑25‑01 and the March 11 preliminary report and NTSB press release.}

Why This Matters (So What?)

For DoD/Federal supply chain leaders:

  • Aviation incidents ripple into mission travel, contractor operations, and time-critical logistics flows—especially when they affect dense, complex airspace and public-safety aviation routes.

  • The governance lesson transfers directly: near-misses are intelligence only if your system converts weak signals into controlled change.

For anyone managing risk:

  • You can fund training, buy technology, and write procedures—and still be exposed if hazards aren’t escalated, fixes aren’t owned, and controls aren’t verified.

  • That is a governance problem, not a motivation problem.

SEM Doctrine Translation

This incident helps explain three connected doctrines that apply far beyond aviation:

  • Near-miss reporting (capture weak signals)

  • Controls layering (avoid single points of safety)

  • Governance follow-through (turn recommendations into implemented change)

How safety recommendations get triggered (the pipeline)

Step A — Signal appears (inputs):

  • Near-miss/close call reports (voluntary and mandatory programs)

  • Objective data (automated alerts, telemetry trends)

  • Incident/accident evidence and early investigative findings

Step B — Evidence consolidates: Investigators look for a pattern that indicates a repeatable hazard—not a one-off oddity.

Step C — Hazard is defined: The hazard becomes a statement like: “The separation margin is insufficient under specific operating conditions,” not “someone made a mistake.”

Step D — Recommendation is issued:

  • The NTSB can issue safety recommendations during an investigation.

  • “Urgent” recommendations signal time sensitivity: action should not wait for the final report. (NTSB, 2025a; NTSB, n.d.)

Step E — Recipient response and governance tracking:

  • Recommendations are assigned owners.

  • Progress is monitored until closed, with documented responses and evidence expectations. (NTSB, 2024; Legal Information Institute, n.d.)
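The five steps above can be sketched as a simple state machine. This is an illustrative model only; the stage names and the single-step transition rule are assumptions, not an NTSB data model:

```python
from enum import Enum, auto

class RecState(Enum):
    """Lifecycle stages mirroring Steps A-E (names are illustrative)."""
    SIGNAL = auto()          # Step A: near-miss report, alert, early evidence
    EVIDENCE = auto()        # Step B: pattern consolidated across inputs
    HAZARD_DEFINED = auto()  # Step C: hazard stated as a condition, not blame
    RECOMMENDED = auto()     # Step D: recommendation issued (possibly urgent)
    TRACKED = auto()         # Step E: owner assigned, monitored to closure
    CLOSED = auto()

# Allowed forward transitions; anything else skips a stage.
TRANSITIONS = {
    RecState.SIGNAL: {RecState.EVIDENCE},
    RecState.EVIDENCE: {RecState.HAZARD_DEFINED},
    RecState.HAZARD_DEFINED: {RecState.RECOMMENDED},
    RecState.RECOMMENDED: {RecState.TRACKED},
    RecState.TRACKED: {RecState.CLOSED},
    RecState.CLOSED: set(),
}

def advance(current: RecState, target: RecState) -> RecState:
    """Move a recommendation forward, rejecting skipped stages."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"Illegal transition {current.name} -> {target.name}")
    return target
```

The point of the transition table is governance discipline: a recommendation cannot jump from "recommended" to "closed" without passing through tracked, owned work.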

Flow diagram showing a pipeline from weak signals/near-misses and data to evidence consolidation, hazard statement, recommendation, owned action plan, implemented control, and verified closure evidence.

Figure 2 - "Recommendation Trigger Map + Closure Pipeline (Weak Signal → Closed Control)" [Aaron Gilmore] {Flow diagram showing a pipeline from weak signals/near-misses and data to evidence consolidation, hazard statement, recommendation, owned action plan, implemented control, and verified closure evidence.}

Controls layering (what “defense in depth” looks like in safety)

One of the sharpest lessons in the urgent report is how thin the margin can be when systems intersect. Even when operations are “within limits,” the available separation can be small and can shrink under certain conditions—exactly where you don’t want single points of failure. (NTSB, 2025a)

Layering means you do not rely on one control to be perfect. A practical control stack (generic):

  • Design controls: route geometry, protected corridors, segregation design

  • Procedural controls: runway-use rules, positive separation requirements, standard phraseology

  • Human factors controls: training, staffing, fatigue management, supervision

  • Technology controls: surveillance, alerting systems, automation support

  • Governance controls: audits, trend reviews, corrective-action closure verification

Field note: Some protective layers can be constrained in edge conditions (for example, near terrain/low altitude or in complex environments). That’s why layering matters—your last layer may not always be available.
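A rough way to see why layering matters is to model the chance that every layer is unavailable at once. The failure probabilities and the independence assumption below are purely illustrative (real layers often fail together, which makes the true residual risk higher):

```python
def residual_risk(layer_failure_probs):
    """Probability that every protective layer fails simultaneously,
    under an (optimistic) independence assumption."""
    p = 1.0
    for fp in layer_failure_probs:
        p *= fp
    return p

# Five layers from the stack above, each assumed 5% likely to be
# unavailable in a given edge condition (illustrative numbers only).
layers = {"design": 0.05, "procedural": 0.05, "human": 0.05,
          "technology": 0.05, "governance": 0.05}

print(residual_risk(layers.values()))  # ~3.1e-07 with five layers
print(residual_risk([0.05]))           # 0.05 with a single layer
```

Even with generous assumptions, one layer leaves a 1-in-20 exposure while five stacked layers drive it far lower; that gap is the whole argument for defense in depth.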

Figure 3 - "Controls Layering Stack (Design → Governance)" [Aaron Gilmore] {Stacked layers diagram showing five safety/control layers—design, procedures, people, technology, and governance/verification—with a note that no single layer is sufficient.}

Governance follow-through (where most programs fail)

A recommendation that isn’t implemented is just a well-written document.

“Good” governance looks like:

  • An assigned risk owner (named, not generic)

  • A funded action plan with milestones

  • A verification method (what evidence is required to close)

  • A cadence (monthly until stable; quarterly thereafter)

  • A recorded decision log for accepted residual risk

Lessons Learned

  • Near-misses are not “almost nothing.” They are early warning.

  • Thin margins plus variability equals intolerable risk. If a route has no lateral boundaries, the margin is only theoretical.

  • Safety is a system property. Blame fixes nothing if design and governance stay unchanged.

  • Urgent recommendations are a signal of time sensitivity. Treat them like a stop-the-line event.

  • Closure is a discipline. If you can’t prove it’s implemented, it’s not implemented.

Role-Based Implications (Who should do what)

Senior leadership / governance board:

  • Require a near-miss intelligence brief (trend lines + action decisions) on a set cadence.

  • Demand closure evidence, not status updates.

Safety / Quality / Risk owners:

  • Maintain a single Corrective Action System that integrates incidents, near-misses, audit findings, and external recommendations.

  • Define what qualifies as urgent and what stop-the-line authority means.

Operations & training leaders:

  • Translate recommendations into training objectives, procedural changes, and supervision practices.

  • Verify adoption in the field (not just classroom completion).

Program/Project Management:

  • Treat major recommendations like projects: scope, schedule, resources, deliverables, and a closure package.

Contract / vendor oversight (supply chain):

  • If you depend on a provider/operator, require their near-miss reporting capability, corrective action closure process, and ability to implement external safety recommendations.

What To Do Now (Field Application)

Recommendation Trigger Map (one-page job aid)

Trigger signals (inputs):

  • Near-miss reports (ASRS/ASAP equivalents)

  • Automated alerts / telemetry trends

  • External investigations / recommendations

  • Internal audits

Decision gates:

  1. Is this repeatable?

  2. Is the margin thin?

  3. Can it kill people / halt mission?

  4. Is there a control gap?

Outputs:

  • Immediate restrictions (if needed)

  • Interim controls

  • Formal corrective action project

  • Policy/procedure update
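The gates and outputs above can be wired together as a small triage function. The mapping below is a hypothetical policy sketch, not an official standard; calibrate it to your own risk appetite:

```python
def triage(repeatable: bool, thin_margin: bool,
           severe: bool, control_gap: bool) -> list:
    """Map the four decision gates to trigger-map outputs.

    Gates: (1) repeatable? (2) thin margin? (3) can it kill people
    or halt the mission? (4) is there a control gap?
    The mapping is illustrative.
    """
    actions = []
    if severe and (repeatable or thin_margin):
        actions.append("immediate restrictions")  # stop-the-line event
    if control_gap:
        actions.append("interim controls")
        actions.append("formal corrective action project")
    if repeatable:
        actions.append("policy/procedure update")
    return actions or ["monitor and log"]
```

A signal that clears all four gates should produce immediate restrictions plus a formal corrective-action project, not just an entry in a spreadsheet.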

Closure Tracker (minimum fields)

  • Action / recommendation ID

  • Risk statement

  • Owner

  • Due date & milestones

  • Resources funded (Y/N)

  • Verification method

  • Closure evidence link

  • Residual risk accepted by (name/date)
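These minimum fields map naturally onto a record type. The field names and the closure rule below are an illustrative sketch of the tracker, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ClosureRecord:
    """Minimum tracker fields from the list above (names illustrative)."""
    rec_id: str                 # action / recommendation ID
    risk_statement: str
    owner: str                  # named person, not a generic office
    due_date: date
    milestones: list = field(default_factory=list)
    funded: bool = False        # resources funded (Y/N)
    verification_method: str = ""
    evidence_link: str = ""
    residual_risk_accepted_by: str = ""  # name/date if risk is accepted

    def can_close(self) -> bool:
        """Closure is a discipline: no owner, method, and evidence -> no closure."""
        return bool(self.owner and self.verification_method and self.evidence_link)
```

The `can_close` check encodes the article's key rule in one line: a status update is not evidence, and a record without an evidence link stays open.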

The “30-60-90” operating cadence (simple)

  • 30 days: report what you did first (stopgap controls)

  • 60 days: show implementation progress (field adoption)

  • 90 days: show verification (auditable evidence)
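The cadence is easy to make mechanical so no recommendation drifts. A minimal sketch, with illustrative checkpoint labels:

```python
from datetime import date, timedelta

def cadence_checkpoints(assigned: date) -> dict:
    """Compute the 30/60/90-day report dates for a recommendation,
    counted from the date the action was assigned an owner."""
    return {
        "30d_stopgap_controls": assigned + timedelta(days=30),
        "60d_implementation_progress": assigned + timedelta(days=60),
        "90d_verified_evidence": assigned + timedelta(days=90),
    }

# Example: an action assigned the day AIR-25-01 was issued.
print(cadence_checkpoints(date(2025, 3, 7)))
```

Feeding these dates into the closure tracker's milestone field turns the cadence from a slide bullet into a schedule someone can miss visibly.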

Figure 4 - "Recommendation-to-Closure Tracker (Minimum Fields)" [Aaron Gilmore] {Table-style tracker template listing the minimum fields to manage a safety recommendation from assignment through verification and closure evidence.}

Note from the Author

The most dangerous sentence in any high-risk environment is: “We’ve always done it this way.”

The safest sentence is: “Show me the closure evidence.” A metaphoric jingle about cars and foxes 🦊 comes to mind to help you remember this 😉.

The other obvious sentence that's been with humans since forever is "better to be safe than sorry," or, as we say in the risk management world, "an ounce of mitigation is worth millions of dollars in litigation and damages when the risk is realized," or something to that effect. Although I haven't experienced near misses with planes on my watch, I have experienced a few deadly near misses, like:

  • the sound and pressure wave of a Paladin (mechanized field artillery with way bigger guns than tanks, FYI) almost liquefying my insides as I walked up the side of one to fix their comms, from a misfire because someone tripped inside. Look up "muzzle blast overpressure" on your preferred search engine; it's a fun time... and the source of my tinnitus, lol.

  • While playing "where's commo" on a hill, another Paladin shot a round so low during a live-fire certification that it went perfectly between the mini forest of trees holding my hidden VHF antennas and a brand-new Queen (a luxury radio antenna system you can jack up with a drill). The wave of air and sound snapped one tree in half, spread its insides everywhere, and permanently bent the brand-new equipment I was trying out that wasn't from the 1960s (the Queen... 😭), leaving it inoperable. All within 50 ft of me.

There are many other "near miss" incidents, but these two came the closest to keeping me from writing this article. Overall, I've become what's called "risk averse" over time because of my experiences, and you should too, ideally before you have experiences like these. As I tend to state in other articles, "Plan for the worst, be pleasantly surprised if it all works out." It's safer that way...

Reference List

Federal Aviation Administration. (2021). Aviation voluntary reporting programs. https://www.faa.gov/newsroom/aviation-voluntary-reporting-programs-1

Federal Aviation Administration. (2025, January 30). FAA statements on midair collision at Reagan Washington National Airport. https://www.faa.gov/newsroom/faa-statements-midair-collision-reagan-washington-national-airport

Legal Information Institute. (n.d.). 49 U.S. Code § 1135 — Secretary of Transportation’s responses to safety recommendations. Cornell Law School. Retrieved January 18, 2026, from https://www.law.cornell.edu/uscode/text/49/1135

National Aeronautics and Space Administration. (n.d.). Aviation Safety Reporting System (ASRS). Retrieved January 18, 2026, from https://asrs.arc.nasa.gov/

National Transportation Safety Board. (n.d.). Safety recommendations. Retrieved January 18, 2026, from https://www.ntsb.gov/investigations/Pages/safety-recommendations.aspx

National Transportation Safety Board. (2024, July 22). Responding to our safety recommendations (recipient tips). https://www.ntsb.gov/investigations/Pages/RecipientTipRecommendations.aspx

National Transportation Safety Board. (2025a, March 7). Deconflict airplane and helicopter traffic in the vicinity of Ronald Reagan Washington National Airport (AIR-25-01). https://www.ntsb.gov/investigations/AccidentReports/Reports/AIR2501.pdf

National Transportation Safety Board. (2025b, March 11). Aviation investigation preliminary report: DCA25MA108. https://www.ntsb.gov/investigations/Documents/DCA25MA108%20Prelim.pdf

National Transportation Safety Board. (2025c, March 11). NTSB makes urgent recommendations on helicopter traffic near Reagan National Airport (News Release NR20250311). https://www.ntsb.gov/news/press-releases/Pages/NR20250311.aspx

Aaron is a U.S. Army Signal veteran (25U) and Industrial Security & Emergency Management practitioner with hands-on experience in disciplined communications, COMSEC accountability, software engineering, project management, security compliance, and classified courier operations.
Now a partner and working practitioner who also builds security-focused products/solutions, he has led and supported initiatives spanning security/compliance services, AI/ML platform architecture and security engineering, and a Colorado state blockchain program (SB 18-086), and he is a DoD Cogswell Award recipient.
Expect educated, field-tested guidance: clear doctrine, honest limits, and steps you can use immediately.

Aaron Gilmore

Back to Blog

About Our Content

AI tools assist with research, ideation, and content organization on this blog. All posts are reviewed and approved by our cybersecurity team before publication. Our goal is to provide accurate, actionable insights informed by real-world experience.

This content is for informational purposes only and does not constitute professional cybersecurity, legal, or compliance advice.

The right time to build clarity is now.

Connect With Me

© 2026 BEES COMPUTING. All rights reserved.
