Last Thursday we hosted an open house as part of Agile Lincoln’s monthly meetup. We were blown away by how many people showed up to see who we are, learn what we’re doing, and hear me talk about our approaches to software engineering.

Since then, a number of people have requested the slides and recording of the presentation.

Special thanks to the Lincoln Agile Group for providing the Periscope recording!

If you have any questions or want to learn more, we’d love to continue the conversation. Leave a comment below or email us at [email protected].

You can also sign up for our monthly newsletter to keep up with what we’re reading, what we’re writing on the blog, and local events.


This post was originally published on the Don’t Panic Labs blog.

People who know me understand how strongly I feel about experiential learning. I have often talked about how valuable I believe my own personal experiences are and how I feel they impact the way I see and approach problems. I even wrote a blog post talking about how challenging it can be to work with bright but inexperienced people.

Recently, a couple of things happened that brought this whole idea of context and experiential learning back to my mind.

First, I was talking to the parent of one of our interns who just finished his first year at UNL as a Computer Engineering undergraduate. As is the case for many students entering engineering programs, the first year can be quite challenging. While he made it through, I think there were times during the year when he got a little discouraged. After the first few weeks of his internship, he told his father that working on projects here has already given him a great perspective on what a software engineer really does and why his education is important. Essentially, he is learning the applied side of his academic education.

Second, I was reading Joshua Foer’s “Moonwalking with Einstein: The Art and Science of Remembering Everything” and came across a section that discussed the importance of having knowledge and experience in order to gain knowledge and experience…

“This paradox – it takes knowledge to gain knowledge – is captured in a study in which researchers wrote up a detailed description of a half inning of baseball and gave it to a group of baseball fanatics…and a group of less avid fans to read. Afterward, they tested how well their subjects could recall the half inning. The baseball fanatics structured their recollections around important game-related events, like runners advancing and scoring. They were able to reconstruct the half inning in sharp detail. One almost got the impression they were reading off an internal scorecard.

“The less avid fans remembered fewer important facts about the game and were more likely to recount superficial details like the weather. Because they lacked a detailed internal representation of the game, they couldn’t process the information they were taking in. They didn’t know what was important and what was trivial. They couldn’t remember what mattered. Without a conceptual framework in which to embed what they were learning, they were effectively amnesiacs.” (Foer 2011, p.208)

Both of these events reminded me of how important it is to have context and personal experience to be able to minimize errors in judgment. This is why I have stressed to underclassmen (and even high school seniors) that the time to start getting internships and real experience is not when they are juniors and seniors but when they are high school graduates and college freshmen. Any and all real-world experience will help provide the framework for “embedding what they are learning” in the academic environment and, what’s more, the absence of these experiences will diminish the value of their educational efforts.

Our educational systems can help with this as well. I can think of two examples in our figurative backyard that are moving in the right direction.

The first is the Design Studio program at the Jeffrey S. Raikes School of Computer Science and Management at the University of Nebraska-Lincoln (UNL).

The second is the upcoming Software Engineering undergraduate degree at UNL, which will launch in the fall of 2016 and will likely focus on the knowledge, activities, and behaviors that are important for anyone pursuing a career in software development. This program will include two years of capstone experience similar to the Design Studio in the Raikes School.

Now, if we can only get the Nebraska State Board of Education to recognize how critical it is for all kids to be exposed to computer programming as part of their required curriculum. But that is a topic for another day…

Ultimately, I want to see more emphasis placed on applied learning and more hands-on design and development of real-world systems in the classroom. Imagine what would happen if we re-engineered the educational systems that produce engineers to focus more on experiential learning (which is how we trained engineers before the computer age).

If you are interested in reading more about the nature of engineering and design, I recommend this article by Eugene Ferguson from 1977 or, better yet, the book he wrote as a follow-up to the article. Chapters 6 and 7 focus on the development of engineering knowledge and the making of an engineer.


This post was originally published on the Don’t Panic Labs blog.

Like it or not, we live and work in a world of uncertainty and vague requirements. I like telling people that very little in my life is black and white. Most of it is shades of gray. This is not necessarily a bad thing because I think it is what makes designing software products (and life in general) interesting and challenging. I don’t think I would enjoy the ride as much if everything was straightforward, obvious, and deterministic.

Unfortunately, this reality makes applying process and discipline to our product development efforts quite challenging. This is particularly true when you are developing a new product from scratch, adding a fairly complex subsystem to an existing product, or re-designing a legacy product. It is impossible to identify all the requirements, design details, and effort up front no matter how strong the desire.

How do we create a culture where we embrace both the uncertainty of product development and the desire to design and develop software in a more disciplined, engineering-like approach?

I get asked variations of this question almost weekly. What follows, based on those conversations, is a brief summary of how I typically respond.

The Wicked Problem

In his book “Code Complete, 2nd Edition,” Steve McConnell discusses the nature of software design, labeling it a wicked problem and identifying some of its key characteristics.

To summarize, design is…

  • …a sloppy process that involves a lot of iterative re-work
  • …about tradeoffs and priorities (this is the shades of gray I mentioned)
  • …something that involves restrictions which simplify the solution
  • …nondeterministic – there is no such thing as one right answer
  • …a heuristic process which uses rules of thumb and familiar patterns rather than formulaic approaches
  • …emergent and evolving as opposed to a singular step in the process that, once completed, is never revised

So this leaves us with a key question: how do we reconcile the realities of design with our objectives for how we want to develop software? One thing we must keep in mind is that insight accumulates over time; it is not a single event. We will never have all the answers at any given point, and we must be okay with that. We must be comfortable with the fact that as we get deeper into the details of design and implementation, we will continue to develop an improved view of what an ideal solution looks like. This applies regardless of where we are working, whether at the system, subsystem, service, operation, or data model level. It is important to note that the software design patterns being used must support this evolutionary design insight and the volatility of the requirements, but that is a topic for another time.

Just Like Remodeling a House

A real-world (and personal) analogy for this is the process we took when we completely redesigned our house. Our objective was to be able to interact with each other even when some folks were studying, cooking, eating, reading, or watching TV. We knew, conceptually, that we wanted to take down a lot of interior walls and open up our floor plan to create a more inviting space where our family could spend time together. We worked with an architect to develop some overall plans for this space but we left a lot of the detailed decisions out of the plan because we knew we wanted the flexibility to change or add things along the way.

We were lucky to find a developer who was comfortable with this arrangement and agreed to a pricing model that allowed for changes (it was cost-plus, for those interested). The end result turned out great. By and large, the house resembles the original plans, but there are definitely some departures as a result of the insights we gained along the way.

Peeling an Onion

What has worked for us is to follow a process that allows us to make progress towards a final solution while, at the same time, fleshing out some of the details of what that solution needs to be. Some people use the analogy of peeling an onion and I feel like that is a good description of this process. We look at the unpeeled onion and we know what it is conceptually, but we don’t know the details inside. Progress made during product development is like peeling the onion one layer at a time. At each layer we see more and more of the onion, and this gives us more and more insight into what the requirements are and an appropriate solution.

In order to do this, we rely on a few essential activities:

Begin with High-level Stories

A customer or product owner often comes to us with a fairly vague description of the “onion” they want. Our first effort is to work with them to develop a more complete picture of what the outside of this “onion” looks like.

To give us some concept of what we are trying to build, we need our stories to start at a high level. This gives us a better “view” of the overall system and of its expected capabilities.

We have to begin with a design. It doesn’t have to be flawless or complete, but it does need sufficient fidelity for development to begin. The requirements that drive this design are the backlog of existing stories/capabilities that have been identified for the product. They need to be good enough that the resulting design can accommodate necessary re-designs and refactors without it feeling like we dropped a grenade into the entire code base. At the same time, we have to make sure we’re not getting into the weeds with our requirements. We’ll have time to work through the details of implementation when stories are scheduled for development.

There is risk associated with every story, but some risks are greater than others. It’s crucial to identify the high-risk stories – typically the ones involving technical uncertainty – and plan to tackle them early in the project. That way, when we get thrown a curveball, we have enough time to address it.
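As a rough illustration of what risk-first planning can look like, here is a minimal sketch; the story titles, fields, and risk scale are hypothetical and not taken from any particular tool we use:

```typescript
// Hypothetical sketch: order a backlog so high-risk stories are tackled first.
// The Story shape and the risk scale are illustrative only.

type Risk = "high" | "medium" | "low";

interface Story {
  title: string;
  estimateDays: number; // rough effort estimate
  risk: Risk;           // mostly driven by technical uncertainty
}

const riskRank: Record<Risk, number> = { high: 0, medium: 1, low: 2 };

// Plan high-risk stories earlier so surprises surface while there is time to react.
function planOrder(backlog: Story[]): Story[] {
  return [...backlog].sort((a, b) => riskRank[a.risk] - riskRank[b.risk]);
}

const backlog: Story[] = [
  { title: "Export reports to PDF", estimateDays: 3, risk: "low" },
  { title: "Integrate with payment gateway", estimateDays: 5, risk: "high" },
  { title: "Add user profile page", estimateDays: 4, risk: "medium" },
];

console.log(planOrder(backlog).map((s) => s.title));
// -> ["Integrate with payment gateway", "Add user profile page", "Export reports to PDF"]
```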

Use Smaller Stories

The high-level stories then need to be decomposed into smaller stories, such that we feel the effort for any one story is approximately one week or less. We settled on one week because we’ve found that once stories are at that relative size, we have a pretty good grasp of what actually needs to be done and how much effort it will take.

For example, if we stopped decomposing at four-week stories, a lot of detail would never get discussed. The result would be hidden assumptions, holes in the requirements, and large errors in the estimates. In our experience, a four-week story can easily become a four-month story, because that level of abstraction does not allow for the critical thinking required to develop sufficient insight into the nature of the requirements.
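To make the sizing idea a bit more concrete, here is a minimal sketch of checking a decomposed backlog against a rough one-week threshold; the stories and the five-day cutoff are purely illustrative:

```typescript
// Hypothetical sketch: decompose a large story and flag anything still over ~one week.
// Story names and the five-day threshold are illustrative only.

interface SizedStory {
  title: string;
  estimateDays: number;
}

const MAX_DAYS = 5; // roughly one working week

// A vague four-week story...
const epic: SizedStory = { title: "Customer billing", estimateDays: 20 };
console.log(`Decomposing "${epic.title}" (${epic.estimateDays} estimated days)`);

// ...decomposed into stories we can actually reason about.
const decomposed: SizedStory[] = [
  { title: "Record one-time charges", estimateDays: 4 },
  { title: "Generate monthly invoice PDF", estimateDays: 5 },
  { title: "Email invoices to customers", estimateDays: 3 },
  { title: "Handle failed payments and retries", estimateDays: 7 }, // still too big
];

// Anything over the threshold goes back for further decomposition and discussion.
const needsMoreDecomposition = decomposed.filter((s) => s.estimateDays > MAX_DAYS);
console.log(needsMoreDecomposition.map((s) => s.title));
// -> ["Handle failed payments and retries"]
```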

Additionally, this process provides at least two benefits:

  • it gives a better idea of the level of effort for the release
  • it inevitably uncovers questions, decisions, assumptions, new requirements, and risks (aka design insight)

Another benefit of this process is that it can lead to a conversation about what is essential for the early releases (e.g., MVP) of the product, so that these detailed stories can be prioritized rather than high-level stories that are often too vague to be useful.

Track Everything

Having someone record all of the questions, decisions, assumptions, new requirements, and risks identified along the way is essential. We never know when we’ll need to refer to these little pieces of information, but the one thing we know for sure is that we will need them.
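A simple log is all this really takes. The sketch below shows one hypothetical shape such a record could have; the entry kinds, fields, and sample entries are illustrative, not a prescription:

```typescript
// Hypothetical sketch of a simple project log for questions, decisions,
// assumptions, new requirements, and risks. Field names and entries are illustrative.

type EntryKind = "question" | "decision" | "assumption" | "requirement" | "risk";

interface LogEntry {
  kind: EntryKind;
  date: string;       // when the item was captured
  summary: string;
  raisedBy: string;
  resolved?: boolean; // questions and risks get closed out over time
}

const projectLog: LogEntry[] = [];

function record(entry: LogEntry): void {
  projectLog.push(entry);
}

record({
  kind: "assumption",
  date: "2016-06-01",
  summary: "Invoices are generated in a single currency for the first release.",
  raisedBy: "product owner",
});

record({
  kind: "risk",
  date: "2016-06-02",
  summary: "Payment gateway sandbox may not support recurring charges.",
  raisedBy: "dev team",
});

// Later, when a question comes up, the log is the first place to look.
const openRisks = projectLog.filter((e) => e.kind === "risk" && !e.resolved);
console.log(`${openRisks.length} open risk(s)`);
```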

Gain Insights

We make sure we’re always creating and looking for opportunities to gain further insight into the details of our requirements. We do this to highlight risks, reduce unknowns, uncover hidden assumptions, and identify gaps in our understanding of the actual requirements. These opportunities include:

  • UI/UX design
  • Story/task decomposition
  • Daily standups
  • Code reviews
  • Smoke tests, spike releases
  • White papers, activity diagrams, sequence diagrams
  • Release planning
  • Sprint planning

Be Iterative

We follow an iterative development process with short sprints (typically one week), as it enables us to change directions and priorities quickly. We have found that it’s much easier to make small course corrections more frequently than large corrections once in a while.

Let Volatility Do Its Thing

We intentionally do not try to prevent volatility in the requirements; we allow it to occur naturally as the project progresses. This lets us incrementally change the software and its design to accommodate volatility, instead of having that volatility cause massive re-work or letting the integrity of the design decay through hacks and shortcuts. In fact, the absence of volatility in a project raises red flags in our minds and causes us to look closer to make sure we are not missing anything. Again, this requires design patterns and processes that properly encapsulate volatility and allow for constant and efficient re-design and refactoring.
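To make “encapsulating volatility” a bit more concrete, here is a minimal sketch, with hypothetical names, of hiding one volatile decision (how a notification is delivered) behind a small interface so a change in that requirement stays confined to one implementation:

```typescript
// Minimal sketch of encapsulating a volatile requirement behind an interface.
// The notification example and all names are hypothetical.

// The volatile part: how notifications are delivered can and will change.
interface NotificationChannel {
  send(recipient: string, message: string): void;
}

class EmailChannel implements NotificationChannel {
  send(recipient: string, message: string): void {
    console.log(`Emailing ${recipient}: ${message}`);
  }
}

// When the requirement shifts to SMS (or push, or Slack), only a new
// implementation is added; callers are untouched.
class SmsChannel implements NotificationChannel {
  send(recipient: string, message: string): void {
    console.log(`Texting ${recipient}: ${message}`);
  }
}

// The stable part: business logic depends on the abstraction, not the detail.
class OrderService {
  constructor(private readonly notifier: NotificationChannel) {}

  completeOrder(customer: string): void {
    // ...order processing would happen here...
    this.notifier.send(customer, "Your order is complete.");
  }
}

new OrderService(new EmailChannel()).completeOrder("alice@example.com");
new OrderService(new SmsChannel()).completeOrder("bob@example.com");
```

The point is not this particular pattern; it is that the code most likely to change has a well-defined seam around it, so volatility stays local instead of rippling through the system.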

Conclusion

I want to stress that what I’ve presented here is simply how we have successfully built many companies and software products. I know from first-hand experience that much of how we work does not fit every team culture, which is why I don’t present this as a magic bullet for all software shops or teams. But I do believe that various aspects of what we do can be applied almost anywhere.


This post was originally published on the Don’t Panic Labs blog.