This is the first post in a series in which I'll take a look at the current state of risk management in software projects. In the second post, I'll present a way to manage those risks.
A not-so-uncommon tale of a software project
This is a rough outline of a typical custom software project that I've been participating in. We usually do the project on a budget, which is written into a contract in the form of a work estimate agreed with the client. In many cases we also add an adequate safety margin to the estimate to account for unexpected surprises. Yet in far more cases than I'm comfortable with, we end up going over the budget even with the safety margin added, and then some. The extra work done might be in the range of 100%, 200% or even more of the original estimate. So if we agree that customer satisfaction is the foremost measure of software project success, we've done a poor job on customer satisfaction, and the software project can hardly be classified as a success.
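To make the mismatch concrete, here's a minimal sketch with entirely hypothetical numbers (a 100-hour base estimate and a 30% safety margin, neither taken from any real project) showing how far the kinds of overruns described above land from the contracted budget:

```python
def budgeted_hours(estimate: float, margin: float) -> float:
    """The figure written into the contract: estimate plus safety margin."""
    return estimate * (1 + margin)

estimate = 100.0                                  # base work estimate (hours)
budget = budgeted_hours(estimate, margin=0.30)    # 130 h in the contract

# Overruns of 100% and 200% of the original estimate, as in the text.
for overrun in (1.0, 2.0):
    actual = estimate * (1 + overrun)             # hours actually spent
    over_budget = actual / budget - 1             # relative to the contract
    print(f"{overrun:.0%} overrun -> {over_budget:.0%} over the contracted budget")
```

Even a generous 30% margin absorbs only a fraction of a 100% overrun; the margin shifts the breaking point slightly, it doesn't remove it.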
In fact, this is so common that all participants seem to silently accept it: sales wants to close the deal with a lowball estimate, the customer wants to hear that their original budget holds, and developers and project managers make estimates that fit into that budget (by some margin).
We act like the current (sorry) state of software project management is the norm. We pretend the problems are not there, but they are. We, as in people in the software industry, have not found reliable methods for creating estimates that can be trusted. We try to sidestep the problem by doing things "the agile way": build an MVP first, see how much that costs, then add more. While it's a reasonable approach to keep the initial scope as limited as possible, the MVP functionality often requires a large part of the system to be functional. Often we have to create an almost complete back-end system to be able to provide minimal front-end functionality.
Naturally, managing the risks in a software project requires coordination in all phases: sales, design, implementation and maintenance. But the problems I'm mostly concerned about are in the design phase, where I think projects are usually won or lost.
Here be monsters
So, why is it so hard to estimate our work? What makes software development fundamentally different from other engineering disciplines, like building machinery or building a house? Well, in the case of a house, we often have standard parts that we can order from suppliers and then put together. Customization is for the most part minor modifications to decoration or surface materials. Moreover, the people building the house have done it many times before, so they pretty much know what kinds of problems might come along and are able to estimate their effect on the building process.
When building custom software for a client, we usually have not built that kind of system before. We can use some standard parts (libraries and frameworks), but the software logic that defines how those parts work together comprises the bulk of the development work. Those standard parts need to be utilized in a customer-specific way. There's no ready-made code we can use.
Moreover, designing the system means making many decisions about the platform, frameworks, libraries, splitting the code into components, automation and so on. Getting these decisions right in the design phase is of crucial importance to the success of the software project.
Why developers make poor judgements
Traditionally, the critical technical decisions have been made by the development team, with little or no discussion with clients, project managers or the product owner. Moreover, developers left to their own devices might not be very good at selecting the optimal design for the project. It all comes down to incentives: what is the motivation and payoff for making a particular design decision?
Simply put, developers don't have the right incentives when making critical design decisions. When you, as a developer, are working on a software project, your mind is bent on two things: a) making it work, and b) gaining further knowledge of your craft. Developers like to check out a cool new technology, or they might want to challenge themselves by trying out a new abstraction layer on top of an old technology. This in turn brings in additional risk by making the system more complex, or it might turn out that the new thing is not a good fit for the particular problem. The chosen technologies might not be mature enough to handle all needs, and therefore have to be laboriously worked around or replaced entirely. Or the learning curve of the new technology turns out to be significantly steeper than first estimated, and development time increases substantially because of that. Or the automation chosen for deployment, provisioning and so on turns out to be really complex to set up, again increasing the work needed to put the software into production.
Designing the system is not seen as something that fundamentally affects our ability to execute a software project within the constraints set by our clients. We developers feel it is within our judgement to challenge ourselves or satisfy our curiosity by trying out new things we don't have enough experience with. Lack of experience creates unknowns that accumulate and might blow up the estimated time and/or costs.
This is why I think we need a better, more systematic approach to estimating and managing risks in software projects. Risks that come with technical decisions are my main concern.