Like it or not, we live and work in a world of uncertainty and vague requirements. I like telling people that very little in my life is black and white. Most of it is shades of gray. This is not necessarily a bad thing because I think it is what makes designing software products (and life in general) interesting and challenging. I don’t think I would enjoy the ride as much if everything was straightforward, obvious, and deterministic.
Unfortunately, this reality makes applying process and discipline to our product development efforts quite challenging. This is particularly true when you are developing a new product from scratch, adding a fairly complex subsystem to an existing product, or re-designing a legacy product. It is impossible to identify all the requirements, design details, and effort up front no matter how strong the desire.
How do we create a culture where we embrace both the uncertainty of product development and the desire to design and develop software in a more disciplined, engineering-like approach?
I get asked variations of this question almost weekly. Based on those conversations, what follows is a brief summary of how I respond.
The Wicked Problem
In his book “Code Complete, 2nd Edition,” Steve McConnell discusses the nature of software design, labeling it a wicked problem and identifying some key characteristics.
To summarize, design is…
- …a sloppy process that involves a lot of iterative re-work
- …about tradeoffs and priorities (this is the shades of gray I mentioned)
- …something that involves restrictions which simplify the solution
- …nondeterministic – there is no such thing as one right answer
- …a heuristic process which uses rules of thumb and familiar patterns rather than formulaic approaches
- …emergent and evolving as opposed to a singular step in the process that, once completed, is never revised
So this leaves us with a key question: How do we reconcile the realities of design with our objectives for how we want to develop software? One thing we must keep in mind is that insight develops over time rather than arriving in a single event. We’ll never have all the answers at any given point, and we must be okay with that. We must be comfortable with the fact that as we get deeper into the details of design and implementation, we must continue to refine our view of what an ideal solution looks like. This applies at every level we work: system, subsystem, service, operation, or data model. It is important to note that the software design patterns being used must support this evolutionary design insight and the volatility of the requirements, but that is a topic for another time.
Just Like Remodeling a House
A real-world (and personal) analogy for this is the process we took when we completely redesigned our house. Our objective was to be able to interact with each other even when some folks were studying, cooking, eating, reading, or watching TV. We knew, conceptually, that we wanted to take down a lot of interior walls and open up our floor plan to create a more inviting space where our family could spend time together. We worked with an architect to develop some overall plans for this space but we left a lot of the detailed decisions out of the plan because we knew we wanted the flexibility to change or add things along the way.
We were lucky to find a developer who was comfortable with this arrangement and agreed to a pricing model that allowed for changes (it was cost plus, for those interested). The end result turned out great. By and large, the house resembles the original plans, but there are definitely some departures from them as a result of the insights we gained along the way.
Peeling an Onion
What has worked for us is to follow a process that allows us to make progress towards a final solution while, at the same time, fleshing out some of the details of what that solution needs to be. Some people use the analogy of peeling an onion, and I feel that is a good description of this process. We look at the unpeeled onion and we know what it is conceptually, but we don’t know the details inside. Progress made during product development is like peeling the onion one layer at a time. At each layer we see more and more of the onion, and this gives us more and more insight into the requirements and what an appropriate solution looks like.
In order to do this, we rely on a few essential activities:
Begin with High-level Stories
A customer or product owner often comes to us with a fairly vague description of the “onion” they want. Our first effort is to work with them to develop a more complete picture of what the outside of this “onion” looks like.
To give us some concept of what it is we are trying to build, we need our stories to start at a high level. This gives us a clearer overall view of the system and a better look at its expected capabilities.
We have to begin with a design. It doesn’t have to be flawless or complete, but it does need sufficient fidelity for development to begin. The requirements that drive this design are the backlog of existing stories/capabilities that have been identified for the product. The design needs to be good enough to accommodate necessary re-designs/refactors without feeling like we dropped a grenade into the entire code base. But at the same time, we have to make sure we’re not getting into the weeds with our requirements. We’ll have time to work through the details of implementation when stories are scheduled for development.
There will often be risk associated with every story, but some risks are greater than others. It’s crucial to identify the high-risk stories – typically those involving technical uncertainty – and plan to tackle them earlier in the project. That way, when we get thrown a curveball, we have enough time to address it.
Use Smaller Stories
The high-level stories then need to be decomposed into smaller stories such that we feel the effort for a particular story is approximately one week or less. For us, we settled on one week because we’ve found that if we get these stories to that relative size we have a pretty good grasp of what actually needs to be done and how much effort it will take.
For example, if we stopped decomposing at four-week stories, a lot of detail would never be discussed. The result would be hidden assumptions, holes in the requirements, and large errors in the estimates. In our experience, a four-week story can easily become a four-month story because that level of abstraction does not allow for the critical thinking required to develop sufficient insight into the nature of the requirements.
Additionally, this process provides at least two benefits:
- it gives a better idea of the level of effort for the release
- it inevitably uncovers questions, decisions, assumptions, new requirements, and risks (aka design insight)
Another benefit of this process is that it can lead to a conversation of what is essential for the early releases (e.g., MVP) of the product so that these detailed stories can be prioritized as opposed to prioritizing high-level stories that are often too vague to be useful.
Having someone record all of the questions, decisions, assumptions, new requirements, and risks that are identified along the way is essential. We never know when we’ll need to refer to these little pieces of information, but the one thing we know for sure is that we will need them.
We make sure we’re always creating and looking for opportunities to gain further insight into the details of our requirements. We do this to highlight risks, reduce unknowns, uncover hidden assumptions, and identify gaps in our understanding of the actual requirements. These opportunities include:
- UI/UX design
- Story/task decomposition
- Daily standups
- Code reviews
- Smoke tests, spike releases
- White papers, activity diagrams, sequence diagrams
- Release planning
- Sprint planning
Be Iterative
We follow an iterative development process with short sprints (typically one week), as it enables us to change directions and priorities quickly. We have found that it’s much easier to make small course corrections more frequently than large corrections once in a while.
Let Volatility Do Its Thing
We intentionally do not prevent volatility in the requirements and allow it to occur naturally as the project progresses. Through this, we can incrementally make changes to the software and design to accommodate volatility instead of having that volatility cause massive re-work or allowing the integrity of the design to decay through hacks and shortcuts. In fact, the absence of volatility in a project raises red flags in our minds and causes us to look closer and make sure we are not missing anything. Again, this requires design patterns and processes that properly encapsulate volatility and allow for constant and efficient re-design/refactoring.
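As a concrete illustration of encapsulating volatility, here is a minimal sketch (the post does not prescribe a specific pattern; the `Notifier` interface, class names, and scenario are hypothetical). A volatile requirement – how notifications are delivered – is hidden behind a stable interface, so when the requirement changes, the re-design stays localized to one implementation rather than rippling through the code base.

```python
from abc import ABC, abstractmethod

# Hypothetical volatile requirement: how we notify customers.
# The stable interface below isolates that volatility.
class Notifier(ABC):
    @abstractmethod
    def send(self, recipient: str, message: str) -> str:
        """Deliver a message and return a delivery record."""

class EmailNotifier(Notifier):
    def send(self, recipient: str, message: str) -> str:
        return f"email to {recipient}: {message}"

class SmsNotifier(Notifier):
    def send(self, recipient: str, message: str) -> str:
        return f"sms to {recipient}: {message}"

class OrderService:
    # Depends only on the stable Notifier interface, so swapping
    # delivery mechanisms requires no change to this class.
    def __init__(self, notifier: Notifier):
        self._notifier = notifier

    def place_order(self, customer: str) -> str:
        return self._notifier.send(customer, "order received")

print(OrderService(EmailNotifier()).place_order("pat"))
print(OrderService(SmsNotifier()).place_order("pat"))
```

When a requirement shifts (say, from email to SMS), only the concrete implementation changes; the rest of the design absorbs the volatility without re-work.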
I want to stress that what I’ve presented here is simply how we have successfully built many companies and software products. I know from first-hand experience that much of how we work does not fit many team cultures. That’s why I do not present this as a magic bullet for all software shops or teams. But I do believe that various aspects of what we do can be applied almost anywhere.
This post was originally published on the Don’t Panic Labs blog.