The coding blog of Alastair Smith, a software developer based in Cambridge, UK. Interested in DevOps, Azure, Kubernetes, .NET Core, and VueJS.
Any good carpenter, joiner, or other worker of materials will tell you to “measure twice, cut once”. This is a good philosophy to apply to life and to your craft as a software engineer. It implies attention to detail, efficiency, and proper preparation; it results in “right first time” components and products, improved quality, and reduced waste.
So, getting your pre-requisites right is important. There are different opportunities to emphasise quality: at the beginning of the project (planning and design), during the construction of the product, and at the end of the project (testing). During the construction phase, your only option is to build the product solidly, with quality materials and tools. By the end of the project, when testing is your only remaining option, you can no longer detect that your product is the wrong solution for the problem, or that it is the right product built in the wrong way. Testing is only one part of quality assurance, and only ensures that the thing is fit for purpose.
Therefore, the planning and design stages are your one opportunity to “get it right”, and the cheapest opportunity to resolve any issues. You can, and should, make sure you have the right project and the right plans, and ensure the design is fit for the product. It’s a risk reduction process. As my Dad used to tell me, and as his boss (fittingly, in the construction industry) used to say, remember “The Seven Ps”: Proper Preparation and Planning Prevents Piss-Poor Performance. And yet many programmers (myself included) rarely stick around to prepare, and just dive right in. Why is this? Why don’t programmers prepare ahead? McConnell proposes a number of reasons:
There are, of course, possible solutions to these, ranging from education to self-discipline to resistance (overt or covert). The key thing, though, is to learn from your rushed jump-in-the-deep-end attempts. Are the problems you experienced similar to other problems? Could they have been foreseen?
McConnell also proposes his four-point “Utterly Compelling and Foolproof Argument for Doing Prerequisites Before Construction” (UCFADPBC, for those who prefer snappy initialisms). Central to it is the cost of fixing defects: the longer a defect stays in the system, the more expensive it is to remove, as the following table of relative cost multipliers shows.
| Time Introduced \ Time Detected | Requirements | Architecture | Construction | System Test | Post-Release |
|---|---|---|---|---|---|
| Requirements | 1 | 3 | 5-10 | 10 | 10-100 |
| Architecture | — | 1 | 10 | 15 | 25-100 |
| Construction | — | — | 1 | 10 | 10-25 |
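To make those multipliers concrete, here’s a minimal sketch (the names and structure are my own, not McConnell’s) that encodes the table as a lookup and compares the cost of catching a requirements defect early versus late:

```python
# Relative defect-cost multipliers from the table above, keyed by
# (phase introduced, phase detected). Ranges are stored as (low, high)
# tuples; combinations marked "—" in the table are simply omitted.
DEFECT_COST = {
    ("requirements", "requirements"): (1, 1),
    ("requirements", "architecture"): (3, 3),
    ("requirements", "construction"): (5, 10),
    ("requirements", "system_test"): (10, 10),
    ("requirements", "post_release"): (10, 100),
    ("architecture", "architecture"): (1, 1),
    ("architecture", "construction"): (10, 10),
    ("architecture", "system_test"): (15, 15),
    ("architecture", "post_release"): (25, 100),
    ("construction", "construction"): (1, 1),
    ("construction", "system_test"): (10, 10),
    ("construction", "post_release"): (10, 25),
}


def relative_cost(introduced: str, detected: str) -> tuple:
    """Return the (min, max) cost multiplier for fixing a defect."""
    return DEFECT_COST[(introduced, detected)]


# A requirements defect that slips through to release costs 10-100x
# what it would have cost to fix during requirements gathering.
low, high = relative_cost("requirements", "post_release")
print(f"{low}-{high}x")  # → 10-100x
```

The asymmetry is the whole argument: a mistake made in the requirements phase is cheap to fix there and then, but the multiplier compounds through every phase it survives.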
Different projects need different approaches; for example, the space shuttle software development process is very different from your favourite Agile methodology: the former is highly sequential and bureaucratic by necessity; the latter is fast, light-weight, and very iterative. McConnell notes that projects are generally neither exclusively sequential nor exclusively iterative, but a mixture of the two. For example, a systematic change might warrant 80% up-front work; an incremental change might need only 20%. The choice of approach depends on the type of project, its formality, the technical environment, staff capabilities, and the project’s business goals. McConnell provides the following comparison of when sequential and iterative methodologies might be appropriate:
| Sequential | Iterative |
|---|---|
| Stable requirements | Requirements are not well understood |
| Straightforward and well-understood design | Design is complex, challenging, or both |
| Development team is familiar with the application's area | Development team is unfamiliar with the application's area |
| Project contains little risk | Project has a lot of risk |
| Long-term predictability is important | Long-term predictability is not important |
| Cost of changing requirements, design, and code downstream is likely to be high | Cost of changing requirements, design, and code downstream is likely to be low |
The next post in this series will cover specific pre-requisite activities, such as requirements and architecture.