There’s a strange phenomenon that occurs across almost all major engineering endeavors. Teams spend weeks, sometimes months, gathering requirements. Stakeholders sign off. The document is approved. This is what we need to build, everyone agrees.
Then six months into development, someone realizes the requirements are wrong. Not because anyone made a mistake, per se, but because the world has moved on while the requirements stayed frozen in time.
The False Assumption of Having All Requirements
Most engineering organizations treat a requirements document like a contract. Once stakeholders approve it, it becomes the gospel of what the system is supposed to do. The problem is that this approach assumes everything can be predicted in advance, and it almost never can.
Market forces change. Regulations shift. Stakeholders realize they asked for the wrong things after seeing early prototypes. Competitor systems debut and new features become mandatory. The technological landscape moves. But the requirements document? It sits there as a password-protected PDF that no one can touch without an official committee meeting, an impact assessment, and stakeholder approval, weeks down the line.
What makes this particularly expensive is that engineering teams continue building to the original requirements even though everyone privately knows they’re out of date. No one wants to be the person to say, “we need to change the baseline.” So teams create systems that satisfy the written requirements (in a contractual sense) but do not solve the problems stakeholders actually have now.
The Specific Failure of Traditional Documentation
Traditional documentation creates a specific kind of failure over the life of a project. Designers make decisions based on requirements version 1.2, but by the time those decisions are implemented months later, reality resembles something closer to version 2.5. Except version 2.5 was never written down; it lives in an email trail or in someone’s notes from a Tuesday afternoon meeting outside the office.
The drift grows incrementally through a project. At first, when the written requirements deviate from actual needs in only small ways, it’s not a big deal; it’s just an error. But the longer necessary updates go ignored, the wider the gap grows, until the disparity surfaces at a stage when it’s too late to fix without substantially increasing costs.
Teams working on advanced, integrated defense or aerospace systems see this across the board. Threat landscapes change. New sensor capabilities arise. Financial constraints emerge that weren’t present during initial development. Each change ripples through the system, but the requirements are locked in place: editing the documents requires clearance from review boards, impact assessments, and stakeholder meetings, which delay progress by at least a month.
Engineering teams learn to work around the documentation. They implement “clarifications” that are technically significant changes, reinterpret requirements loosely to fit new realities, and let discrepancies slide, and the gap between what’s written and what’s being built steadily widens.
The Downstream Disaster of Compounding Effects
This creates downstream problems that fester like untreated wounds. Verification teams test to documented requirements that stakeholders no longer need or desire. Integration teams discover subsystems designed with mutually exclusive assumptions because one team worked from one interpretation of the baseline while another worked from an informal discussion with a stakeholder.
There’s also the problem of meetings. Teams spend hours in peer reviews justifying why their design doesn’t match what’s written, even though everyone in the room knows the written baseline is outdated; none of those changes ever went through formal change control. So the conversation goes in circles.
“Does this meet requirement 3.4.7?” Yes, that’s effectively what was done, but no, it’s technically not compliant unless you count what came out of a meeting in March with new stakeholders.
This is where model based systems engineering becomes relevant for organizations tired of maintaining two different universes and making decisions based on how they interact. When requirements do not live in separate documents divorced from the design, but instead exist as interconnected model elements under tracked version control, it’s much easier to see what impacts what.
The Version Control Mishap
Even when organizations try to keep requirements up to date, version control becomes its own nightmare. Requirements Document v1.0 becomes v1.1, and suddenly engineering teams are working from different versions because someone wasn’t notified, or a review slipped, or an engineer got overwhelmed with their workload and hasn’t had time to revisit how the changes affect their subsystem.
Extrapolate this across any project with dozens of subsystems and hundreds of engineers: some teams are designing against requirements from January, others are using April’s update, and still others are working from drafts that never got approved but that everyone assumes are the “real” requirements.
Testing becomes almost impossible. How do you verify something when different pieces were built to different sets of requirements? How do you prove compliance when no one knows what the baseline is anymore?
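To make the drift concrete, here is a minimal sketch of the kind of baseline check such a project lacks. All subsystem names and version strings below are hypothetical, invented purely for illustration:

```python
# Hypothetical sketch: detecting requirements-baseline drift across subsystems.
# The subsystem names and version strings are illustrative, not from any real project.

BASELINE = "v1.4"  # the officially approved requirements version

# Which requirements version each subsystem team actually designed against.
designed_against = {
    "propulsion": "v1.1",    # January baseline
    "avionics": "v1.4",      # current approved baseline
    "comms": "v1.5-draft",   # unapproved draft everyone treats as "real"
}

def baseline_drift(versions, baseline):
    """Return the subsystems whose design baseline differs from the approved one."""
    return {name: ver for name, ver in versions.items() if ver != baseline}

drift = baseline_drift(designed_against, BASELINE)
print(drift)  # {'propulsion': 'v1.1', 'comms': 'v1.5-draft'}
```

Even a trivial check like this presupposes something most document-driven projects never have: a machine-readable record of which baseline each team built against.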
Why This Keeps Happening
Organizations know this is a problem; they’ve experienced it project after project. Yet the default response is usually “we need better requirements up front” or “we need stricter change control,” both of which only exacerbate the problem.
Better requirements up front sound like a proactive fix, but this assumes everything can be predicted in advance, and it cannot. No amount of upfront analysis will reveal that a supplier will discontinue a part, that a regulation will change mid-development, or that early user studies will expose misunderstandings about how the system will actually be used.
Stricter change control sounds like it would help, but it just incentivizes teams to work around it. When changing a requirement takes three review boards and three weeks of paperwork, engineers learn clever ways to reinterpret existing requirements instead, leaving their implementation choices undocumented to everyone’s detriment.
What’s at Stake
The real problem comes when everyone treats requirements as something you finish rather than something you maintain. Static documents are perfectly fine for simple projects where everything can actually be known from the start. But for complex systems spanning years? Requirements must change as much as everything else does.
This does not mean chaos, and it does not give engineers leeway to do whatever they want. It means putting systems in place where requirements, designs, and tests stay connected even as they evolve, where every element can be traced down to a high-level need and back up again, and where projects change through controlled means instead of through decisions buried in email threads and comments floating in some digital black hole.
Some organizations are moving toward model-based systems engineering approaches where the model itself becomes the source of truth rather than scattered documents. Changes propagate through connected elements. Traceability remains intact. Impact analysis happens through the system rather than through manual coordination across dozens of spreadsheets and email threads.
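The core idea behind that kind of impact analysis can be sketched very simply: requirements, design elements, and tests become nodes in one traceability graph, and a change to any node can be walked to everything downstream. The element IDs below (including the nod to “requirement 3.4.7” from earlier) are hypothetical:

```python
from collections import deque

# Hypothetical sketch of model-based impact analysis. Directed edges map each
# element to the elements that depend on it; all IDs are illustrative.
TRACE = {
    "REQ-3.4.7": ["DESIGN-sensor-interface"],
    "DESIGN-sensor-interface": ["DESIGN-data-bus", "TEST-sensor-cal"],
    "DESIGN-data-bus": ["TEST-integration-12"],
    "TEST-sensor-cal": [],
    "TEST-integration-12": [],
}

def impacted(graph, changed):
    """Breadth-first walk from a changed element to every downstream element."""
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for dep in graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return sorted(seen)

print(impacted(TRACE, "REQ-3.4.7"))
# ['DESIGN-data-bus', 'DESIGN-sensor-interface', 'TEST-integration-12', 'TEST-sensor-cal']
```

A real MBSE tool does far more than a graph traversal, of course, but the contrast with the document-driven world is the point: the question “what does changing requirement 3.4.7 affect?” becomes a query, not a month of meetings.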
The Path Forward
Projects that continue relying on outdated requirements documentation pay the price through constant rework, integration failures, and testing delays. Teams waste countless hours reconciling what’s written with what everyone knows needs to happen. Program managers struggle to report progress when the baseline exists in multiple conflicting forms.
The organizations that successfully deliver complex systems aren’t the ones that predicted everything perfectly upfront. They’re the ones that built processes allowing requirements and reality to stay aligned as both evolved. That means fundamentally rethinking how requirements are captured, maintained, and connected to design and verification activities.
Most engineering teams already recognize their requirements process creates problems. What prevents change is often the assumption that more rigorous upfront planning will solve it. Sometimes the real solution involves accepting that change is inevitable and building systems that accommodate evolution rather than resist it.
