Musings on Career Development in Software

I have spent a fair amount of time over the last 5-10 years thinking about how our industry views and supports professional development. My own journey has given me an opportunity to see and experience what it takes to go from no formal software education (other than a FORTRAN class in the mid-1980s) to being the author of a book on software engineering and leading a software development company.

In this post, I will share some of my observations related to how the Software Engineering Body of Knowledge (SWEBOK) can be leveraged to help individuals take control of their own professional development and the path toward becoming software engineers and not simply software developers.

What is Software Engineering?

Before digging into the SWEBOK, I think it is important to have a shared understanding of the definition of software engineering. Rather than creating my own definition, I prefer to use the definition provided by IEEE and the ISO standards organizations…

“Software engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software.”

Source: ISO/IEC/IEEE Systems and Software Engineering Vocabulary and SWEBOK

The bottom line is, being a software engineer is not just about being a more highly skilled programmer. The field of software engineering and the skills and competencies required to be a software engineer go well beyond simply coding. Given this expansion of our understanding of the field, where do we discover what we need to know and understand to successfully “apply engineering to software”? This is where the SWEBOK becomes a critical component of our professional development.

What is SWEBOK?

SWEBOK is a guide to the core areas of software engineering. The IEEE Computer Society, working with experts from across the field, developed it and updates it periodically to keep pace with the discipline. It currently has 15 knowledge areas (KAs) that cover the whole software life cycle, from requirements to maintenance. Each KA has a description, topics, and references to standards, books, articles, and other sources. SWEBOK is a descriptive and informative document, not a prescriptive or normative one. It does not tell software engineers how to do their work; rather, it describes what they should know and be able to do.

Why is SWEBOK useful for software developers?

Software developers can use SWEBOK to assess their skills and gaps in various software engineering topics and find resources to learn more about software engineering principles and knowledge. SWEBOK can also help software developers manage their competency development by offering a common structure and language for software engineering ideas and methods that span the entire field of software engineering.

For example, here at Don’t Panic Labs, we have leveraged the SWEBOK to identify a broad set of competencies most relevant to the type of work we do. We use this competency list to assess our individual skill levels. We have developed a wide range of tools and resources that our developers can use to help develop their knowledge and skills within their competency gaps.
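To make this concrete, here is a minimal sketch (in Python, with invented target levels and scores) of how a SWEBOK-derived competency list might be used to record a self-assessment and surface gaps. The knowledge area names come from the SWEBOK; everything else in the example is a hypothetical illustration, not our actual tooling.

    # Hypothetical self-assessment against a few SWEBOK-derived competencies.
    # Scale: 0 = no exposure ... 4 = can mentor others (the scale is an assumption).
    TARGETS = {
        "Software Requirements": 3,
        "Software Design": 3,
        "Software Construction": 4,
        "Software Testing": 3,
        "Software Configuration Management": 2,
    }

    my_scores = {
        "Software Requirements": 2,
        "Software Design": 3,
        "Software Construction": 3,
        "Software Testing": 1,
        "Software Configuration Management": 2,
    }

    def competency_gaps(targets, scores):
        """Return the knowledge areas where the current score falls below the target."""
        return {ka: targets[ka] - scores.get(ka, 0)
                for ka in targets
                if scores.get(ka, 0) < targets[ka]}

    print(competency_gaps(TARGETS, my_scores))
    # {'Software Requirements': 1, 'Software Construction': 1, 'Software Testing': 2}

A gap report like this is only a starting point; the value comes from pairing each gap with the SWEBOK references and internal resources that address it.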

How Can Software Developers Use SWEBOK?

SWEBOK can help software developers in four ways.

  1. Check their own knowledge and experience in each KA and topic.
  2. Learn more from the references and links for each KA and topic, and from the related and emerging topics.
  3. Use SWEBOK as a guide to understand and use software engineering terms, concepts, best practices, and standards.
  4. Use SWEBOK to plan their career development, by setting competency goals based on the KAs and topics.

Next Steps

While SWEBOK is a critical resource for professional development in our industry, it was not designed to solve the problem of managing our individual competency development. We need more tools to help support us in this effort. Specifically, we need tools for both assessing our current competency and skill levels and tracking improvements as we work to increase our competency. We also need more tools to help us focus on the areas of growth most relevant to our current role and our desired career paths. At Don’t Panic Labs, we have recognized these needs with the development of the talent within our own organization, and we intend to address these challenges going forward. Stay tuned.


Last week, I had the privilege of giving the opening keynote at the 2021 Nebraska.Code() conference. My topic was Musings on Developer Maturity and Growth, in which I laid out a model for creating developer maturity proto-personas using IEEE’s Guide to the Software Engineering Body of Knowledge, or SWEBOK for short.

As I introduced the idea of using the SWEBOK as a knowledge map, I mentioned that I suspected most people in the room had never heard of the document. Someone later tweeted that, based on the number of people in the audience who took a picture of my slide showing the SWEBOK cover and URL, my suspicion was correct. While I was not surprised, I find it unfortunate. In a room of nearly 400 developers, most had never heard of a resource devoted to characterizing the content of the software engineering discipline they are a part of.

To be clear, the SWEBOK does not contain the whole software engineering body of knowledge. Rather, it provides an overview of the scope of the body of knowledge and then provides topical access to other authoritative resources (books, textbooks, papers, etc.) that, collectively, document the body of knowledge. The current/latest version (3.0) is divided into 15 knowledge areas:

  • Software Requirements
  • Software Design
  • Software Construction
  • Software Testing
  • Software Maintenance
  • Software Configuration Management
  • Software Engineering Management
  • Software Engineering Process
  • Software Engineering Models and Methods
  • Software Quality
  • Software Engineering Professional Practice
  • Software Engineering Economics
  • Computing Foundations
  • Mathematical Foundations
  • Engineering Foundations

As you can see, the guide is comprehensive. It covers the whole software development lifecycle and includes sections on engineering topics such as management, processes, modeling, and quality, as well as engineering ethics and economics. It even includes sections on foundational topics in computing, math, and engineering. I have known about this guide for a number of years now. It is safe to say it has played a significant role in my own education journey and was a trusted resource as I worked on my recently released book, Lean Software Systems Engineering for Developers, which I co-authored with my Don’t Panic Labs colleague Chad Michel.

I have often mentioned that if I could only have one reference book on software development, I would select Steve McConnell’s Code Complete. I still think that is a great starting point for building your reference library (it happens to be highly cited in SWEBOK). Fortunately, we are not restricted to only a single resource, so we are all free to leverage the SWEBOK as a guide to gaining the broad understanding we need to have to be effective in this industry.

Each of us would be wise to spend some time getting to know this great resource. The sooner we all can have a shared view of what software engineering is, the sooner our industry, as a whole, can start practicing software development as an engineering discipline.

Have you ever been asked a question along the lines of … “Are you an Agile dev shop?”

In my experience, this question is asking, “Are you following an Agile methodology?” and not, “Are you agile?”

I think the distinction is important. It seems we have come to a point where whether you are following an Agile methodology is more important than whether you are agile.

I want to take a step back and reflect on what it means to be agile (i.e., to have agility) independent of what methodology you may or may not be following. Spoiler alert: doing Agile does not, by itself, mean we become agile.

What Does It Mean to Be Agile?

The “agile” I care about is an adjective, not a noun. In the context of a software development team, agility is a characteristic that reflects their ability. Being agile to me means that we can quickly and effectively respond to change and opportunity – whatever the source of that change and opportunity may be. The level of agility correlates with the ability of a business to seize opportunities and differentiate itself within the market. Correspondingly, a lack of agility becomes a liability for a business.

What Does Agile Look Like in Practice?

I think one of the things becoming lost in all the talk about the Agile movement is the outcomes we are looking for and what it takes to achieve them. It’s as if the industry expects Agile methods to solve all of its problems. I have written before that I don’t think Agile methods should be viewed as a silver bullet. The factors impacting our agility in the complex world of software development are multi-dimensional. Agile methods are a tool, not the entire toolbox.

The multi-dimensionality of this agility “problem” makes it hard to define specifics of what it means to be agile. That said, I know agility when I see it. Below is a list of capabilities that I feel would often be present within teams that are agile. I do not suggest the list is all-inclusive or complete. My point is to get you thinking about what it means to be agile.

  • Being able to test ideas with minimal investment
  • Recognizing risk early and taking steps to mitigate
  • Moving developers within and between projects with ease, minimal friction, and efficiency
  • Being able to quickly and confidently tell someone how much effort and what changes are required to implement a new feature
  • Being able to make changes in a system without having to understand the whole system
  • Not spending weeks of stabilization after every “feature complete” milestone
  • Not rolling back releases due to errors found in production
  • Minimizing unnecessary rework by efficiently identifying known unknowns and unknown unknowns before development occurs
  • Easily understanding and visualizing progress and effort throughout a project
  • Being able to confidently and accurately communicate expected timelines
  • Delivering on time and on budget
  • Detecting 90% of defects before customers find them
  • Never having to say “we should rebuild everything and do it right this time”
  • Being able to extend a product with minimal changes to the conceptual design
  • Being able to push decision making down to the developer with confidence that their decisions will be consistent with the values and design principles of the organization
  • Having a consistent design methodology that is understood by all and applied to all projects
  • Identifying, adapting, and adopting new tools, techniques, or processes that improve our efficiency and the quality of our efforts and outcomes

What Does It Take to Become Agile?

One of our goals at Don’t Panic Labs is to become a shining example of what professional software engineering looks like. This is a lofty goal. I define “professional” by comparing the expectations people have for software developers to those of other recognized professions such as doctors, lawyers, pilots, civil/mechanical/electrical engineers, etc. I think we would all agree that our industry has a long way to go before the general public has the same faith in us as they do in doctors or airline pilots. But this should be our goal, and it is our goal at Don’t Panic Labs. It is my firm belief that our ability to achieve the type of agility I described above will correspond directly with our ability to deliver on these “professional” expectations.

So how do we get there? It is certainly a journey, as opposed to a destination. In the development teams I have been a part of, the one thing I have valued is their ability to see processes and methodologies as tools to achieve desired outcomes. At Don’t Panic Labs, we have assimilated many disparate techniques, but have done so in a lean, lightweight way. We prove the value of a process and then determine where the point of diminishing return is and don’t go beyond it.

So where do you start? The advice I would give is to sit down and enumerate the outcomes you desire, prioritize them, and then intentionally and deliberately adopt and adapt processes to achieve those outcomes. You can use the list above as a starting point. Keep in mind that you may not see profound results right away (remember, there are multiple dimensions to agility and no silver bullet). The key is to continue to build upon these efforts and layer in new tools and techniques while reviewing the existing ones.

Case Study: Quality and Agility

As an example, let me walk you through the path we have taken toward improving and maintaining our software quality objectives and creating a culture of quality accountability. It is quite clear to me that the velocity (and morale) of a development team is strongly correlated with how much time it spends troubleshooting quality issues in production. I know of no developer who enjoys constantly revisiting unstable or poorly written software to resolve some problem.

Research into expected defect detection rates for leading organizations revealed that we should be targeting detection rates greater than 85% (i.e., 85% of defects found prior to shipping code to production). I also know from experience that quality doesn’t “just happen”. You need to be deliberate about your processes and establish a quality culture if you are going to achieve any success.
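To make the arithmetic behind that target explicit, defect detection effectiveness is simply the share of all known defects that were caught before release. A tiny sketch (the defect counts here are invented for illustration):

    def defect_detection_rate(found_before_release, found_in_production):
        """Fraction of total known defects caught before shipping."""
        total = found_before_release + found_in_production
        return found_before_release / total if total else 1.0

    # Hypothetical release: 170 defects caught internally, 30 reported from production.
    print(f"{defect_detection_rate(170, 30):.0%}")   # 85% -- right at the threshold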

At a previous company (eSellerate, now MyCommerce), we were able to achieve this detection rate through what I would call “brute force”. We had an army of Quality Assurance (QA) people who went through the software in great detail, over, and over, and over again until it was very stable. While this worked, I was still not satisfied with the result.

I felt that the development team was not taking the proper ownership over the quality of their work. The code that was handed over to QA was often not ready for testing. This created the classic ping-pong scenario where QA and development would go back and forth trying to resolve issues. These quality and stabilization periods delayed releases. When the software was eventually released, it was solid. However, there was very little that was “agile” about our team. This is why I called this the “brute force” quality period. If not for the patience and dedication of our QA team, we would have never reached our quality numbers. I knew there had to be a better way.

When we started Don’t Panic Labs in 2010, I decided not to hire any QA people for the team. I wanted to establish a mindset and culture that placed quality accountability on the developers. Taking this approach naturally increased the amount of scrutiny and desk-checking that our developers were doing before calling their work “complete.” Along with this, we established a consistent design methodology that involved decomposing systems into service-based architectures that are composed of loosely-coupled, stateless/state-aware services. This architecture pattern enabled a high degree of testability, and it made unit and integration testing a core component of our quality processes. Since then we have hired some QA folks, as I believe the QA role is a vital part of any mature development team. However, the quality must begin with the developer.
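As a loose illustration of the kind of decoupling and testability this buys (a Python sketch with hypothetical names, not code from our systems): a stateless service depends only on a gateway passed into it, so a unit test can substitute a fake and verify behavior with no external dependencies.

    class OrderNotificationService:
        """Stateless service: depends only on the gateway handed to it, keeps no state between calls."""
        def __init__(self, mail_gateway):
            self._mail = mail_gateway

        def notify_shipped(self, email: str, order_id: str) -> None:
            self._mail.send(to=email,
                            subject=f"Order {order_id} shipped",
                            body="Your order is on its way.")

    class FakeMailGateway:
        """Test double: records messages instead of sending them."""
        def __init__(self):
            self.sent = []
        def send(self, to, subject, body):
            self.sent.append((to, subject, body))

    def test_notify_shipped():
        fake = FakeMailGateway()
        OrderNotificationService(fake).notify_shipped("a@example.com", "42")
        assert fake.sent == [("a@example.com", "Order 42 shipped", "Your order is on its way.")]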

We also adopted a software estimation process that required decomposing requirements/stories until the estimated level of effort for the story was one week or less. The goal was to reveal hidden assumptions early on so that what was built by the developers was what the product owner was envisioning – another aspect of quality. The combination of these two processes got us to our defect detection rate objectives, but it fell short of ensuring long-term velocity of the developers. We needed something else.

Our main issue with velocity at this point was maintaining the conceptual integrity of our designs. Software entropy is a real phenomenon. If you do not take active steps to overcome it, your systems will degrade and velocity (i.e., agility) will be compromised. A few senior developers at Don’t Panic Labs were doing their best to ensure the conceptual integrity of our systems was maintained – but it was tough.

In 2012, I met with people at Hudl who were using GitHub and their pull request workflow. From the very moment I heard about this, I knew it could be an important part of our quality processes. We immediately began transitioning to GitHub for source control management. We then instituted a pull request process whereby every code change was reviewed (as a minimum) by the lead developer who was responsible for ensuring changes were consistent with our architecture and software design standards. Only the lead developer could merge changes into the master branch. We were now actively maintaining conceptual integrity.

Today, all of the above practices give us a layered approach to quality, helping us reach our defect-detection goals. This layered approach also enables the long-term velocity and agility we seek in our software products. It did not happen overnight, and there was no pre-defined recipe for us to follow. Now we are working on developing lead engineers and architects so that we can effectively scale these processes as we continue to grow.

Leaders in our organization are always looking for tools, techniques, and processes that will move us toward our ultimate goal – true, sustainable, business agility.

Looking Ahead

I believe software engineering will only be elevated to the “professional” level of other industries after our industry prioritizes the activities that increase the creation of quality software, and realizes that a single methodology alone will not solve our problems.

Teams should always be on the lookout for new ways to improve their effectiveness (a few of which I’ve included in this post). There is no one-size-fits-all approach to enhancing a team’s process; it’s up to each team to make these determinations. But keeping a lookout for new processes will make a more significant impact than any pre-packaged methodology ever could.

Have you ever wrestled with a problem in your mind and then, while trying to explain it to someone else, had an epiphany of how to solve it? This has happened to me on numerous occasions.

Or have you ever jumped in to develop some code for a piece of business logic that you felt you completely understood only to find unanticipated aspects of the business case that require you to get clarification or – worse – start over? Unfortunately, I have also experienced this – more undesirable – scenario.

In my third and final post of my series on The Danger of Incomplete Pictures (Part 1, Part 2), I am going to talk about this phenomenon and share some thoughts on how we, as developers, can do a better job of transitioning from requirements to coding.

With any design and development effort (i.e., not just software), you begin with more abstract concepts and progress to more concrete, explicit details. Along this journey, you make discoveries and gain insights that were not accessible at the project’s early stages.

For me, uncovering hidden assumptions and details is one of the more rewarding aspects of engineering. Every time I identify some behavior or detail that was not adequately specified, I feel like I’m gaining more understanding of the system (and, consequently, fewer of these hidden behaviors will be discovered by someone else!)

This process of gaining increased understanding and insight is an inevitable consequence of building complex systems. The sooner we gain these insights, the better it will be for us and the ultimate users of the system. In my experience, these insights are gained throughout the various phases of product development:

  • Prior to the construction phase
    • Defining the user stories/epics
    • Decomposing the system into components and services
  • Upon commencement of construction
    • Implementing the features
    • Testing our own work
    • Demonstrating progress
    • User acceptance and quality assurance testing by others
    • Customer use of the system

It is impossible to completely understand a system prior to development (see my post Developing Software Products in a World of Gray), but we should strive to identify the majority of these insights earlier in our projects. It is much easier to change or enhance a story definition or a design plan prior to construction than it is to change the implementation of a design.

Unfortunately, the second scenario in my introduction (where insights are gained during construction) is an all-too-common occurrence in software development teams. In fact, I fear that most teams gain the majority of their insights during the construction phase. I am confident this is the cause of many schedule and cost issues with software projects, not to mention the technical debt that might also be added.

If we think about the collective understanding of a system, we might visualize it as a bucket or container. Once that container is completely full, it represents a thorough understanding of the details and requirements of a system. We add to the bucket as we gain insight and clarity through the various phases of development.

In a healthy development process, the vast majority of significant insights (i.e., insights that drive decisions that are hard to change later, such as architectural choices) are identified prior to the construction, or “manufacturing,” phase. Insights discovered before construction are less disruptive and do not carry the same risk and cost profile as insights identified later, during construction or after the product is in the hands of the end user.

Again, I should emphasize that there is really no technique that provides perfect insight prior to the construction of systems that have even a modest level of complexity. In my early experiences as a Systems Engineer at McDonnell Douglas in the late 80s, our requirements analysis process was full-on waterfall. We often spent months and had many meetings to attempt to fully understand the requirements and reduce risk on these projects. By and large, our process worked but it took a long, long time and – like other waterfall experiences – did not easily accommodate the insights that would be gained during the construction and testing phases.

Even though our team (rightly) abandoned waterfall, it is my feeling that we as an industry have thrown the baby out with the bath water when it comes to the role of critical thinking. The second scenario in my introduction highlights this overreaction to waterfall: jumping straight from user story to code. When development is done this way, the insight profile tends to look more like the picture below where a significant amount of insight is gained late in the development process.

One of my challenges here at Don’t Panic Labs has been to effectively – and efficiently – introduce disciplined critical thinking into our development process prior to the construction phase. I wanted to achieve the benefits that I saw with waterfall, but in a lean/agile way. Coupled with this requirement was the need to create a process that was accessible to a variety of engineers with different levels of experience (including our interns).

In my experience, critical thinking occurs best in situations like the first scenario in my introduction. There is something that happens within our minds when we are forced to articulate (verbally or in writing) an abstract concept in explicit detail. My own experience with the waterfall approach shows that it is effective in promoting critical thinking. I wanted to capture the essence of that, but in a way that made sense for our lean/agile environment.

To facilitate critical thought in our projects, we came up with the concept of having our engineers write “white papers” based around this set of loose rules:

  • Spend about one hour, but no more than two, to describe the implementation requirements of the particular story or feature the engineer is analyzing.
  • Use whatever method for description that makes sense to the engineer (e.g., drawings, words, diagrams, etc.)
  • Circulate this white paper amongst stakeholders to gather feedback and additional insights. These stakeholders would be product owners, development leads, project managers, QA folks, etc.

Because we were not interested in creating process for process’ sake, we also gave some thought to when it was a good use of time and effort to go through the “white paper” process:

  • When the story presented a particularly challenging area of design and construction with some perceived complexity.
  • When the development lead or engineer felt it warranted some deeper thought.
  • When we had a less-experienced developer (or intern).

We began using this technique with some good results. Dev leads, project managers, and product owners appreciated the feedback loop prior to commencement of development. The only problem with this particular model was that there was – by design – no structure to what was required for the contents or output of this process. We simply wanted to give the developers time to think critically before jumping into code.

But as a result of this lack of structure, we often spent a fair amount of time explaining what was expected and what the engineer should be thinking about. So I went back to the drawing board.

I had an epiphany during the revision process: Why not develop a series of questions designed to get the engineer thinking about specific aspects of this feature? If the goal of this exercise was to answer questions about the feature, this structure seemed like a good way to prompt the engineer.

I also decided to explicitly ask for the specific, discrete development tasks that were envisioned to complete the feature. This task list was meant to be the final thing developed in the document. I also renamed the document to “Design Analysis Document” to be more descriptive of the activity.

The guidance we currently give is similar to the original, with only slight modifications:

  • Spend about one hour, but no more than two, to describe the implementation requirements of the particular story or feature the engineer is analyzing.
  • Use a provided template of questions (and associated guidance) as a framework for thinking through the problem and add whatever method of description that makes sense to the engineer (e.g., drawings, words, diagrams, etc.)
    • Note: It is not required to answer every question.
  • Include the specific sequence of implementation tasks (with estimates) for the story/feature.
  • Circulate this design analysis document amongst stakeholders to get their feedback and additional insights.
    • These stakeholders would be product owners, development leads, project managers, QA folks, etc.

We now find ourselves tweaking the questions in this template. Email me if you are interested in seeing this template.

Any system that provides value will change and evolve over time; that is natural. We know how to design systems to embrace change so that our architecture and design do not become unmaintainable.

This blog series has not been about the impact of requirements changing over time but about the changes that result from missed requirements that stem from hidden assumptions. These assumptions inevitably reveal themselves during development or production, and they can be extremely disruptive and costly to resolve.

If we call ourselves software engineers, we must think critically and leverage the types of structured processes engineers in other industries use to minimize the likelihood of missed requirements and hidden assumptions.

I hope this has provided some food for thought and some tools that you can use in your day to day work. As always, I’d love to hear your thoughts, reactions, and ideas for how you would improve upon what I have presented.

In my first post of this series, I discussed how ambiguity and lack of shared understanding between members of a product development team can occur when we rely on unstructured, ad hoc, and abstract communication processes (i.e., conversations and high-level user stories) for expressing our thoughts and ideas. We feel like we are painting a clear picture, but we are hindered by hidden assumptions and “blind spots” that prevent us from expressing and seeing details of the big picture in our mind’s eye.

In this post, I want to talk about a technique we use at the beginning of projects to gain a shared understanding of what we are trying to build. At the heart of this process is something I think may come as a surprise … estimation.

There has been much written over the years regarding estimation, challenges with estimation, whether we should be estimating at all, etc., etc., (I’m looking at you #NoEstimates!). A lot of energy goes into these conversations. I am not going to try and convince you whether estimation, as it is currently viewed, is right or wrong. For what it’s worth, we find our estimation process incredibly valuable and essential to the success of the complex software projects we work on and the effectiveness of our agile/lean approach to product development.

I am going to attempt to get you thinking differently about the desired outcomes of estimation and about how estimation can be used to improve the quality of a system design and reduce uncertainty about what should be built.

We have a saying here at Don’t Panic Labs that goes, “the software design process starts with analysis and estimation of the backlog of stories for a project.” I’m pretty sure most people would not think about estimation as a software design tool, but we do. Much like other engineering disciplines that tackle big and seemingly unsolvable problems by breaking them into smaller solvable problems, we can use analysis and estimation of user stories to arrive at a more “solvable problem.”

I am reminded of Mark Watney, the main character in “The Martian,” being stranded on Mars. His ultimate goal was to get off the planet, but he realized that was not a problem he could attack directly. He analyzed what had to happen to get off the planet and began solving all of his smaller problems, like having enough food to survive long enough to be rescued.

When we are provided with a backlog, we are often given epics/stories that may seem quite large and “unsolvable.” An example might be, “determine whether a patient is sitting up in bed.” That seems daunting, but when you start breaking this down into smaller problems, it starts to feel more manageable.

In this case, it might look like this:

  1. Identify the bed in the room
  2. Determine if the bed contains a patient
  3. Determine if the bed is flat or inclined
  4. Determine if there is a person-shaped entity in a certain orientation relative to the incline of the bed

This example may seem unusually large in scope (and I would agree), but I use it to illustrate the benefit of breaking bigger problems into smaller ones.

An example of a user story we are more likely to see might be: “As a customer, I want to process an order using a credit card for payment” (which we will title the Submit Order story). We will use this story to walk through estimation and task decomposition to help gain a shared understanding of the true requirements and to help reveal uncertainty and any hidden assumptions.

Estimation Buckets

When we estimate stories, we use a fixed number of estimation “buckets” (see Todd Guenther’s post, Project Management – How We Estimate).

These estimate buckets are:

  • 0 effort (uncommon)
  • 1/2 a day
  • 1 day
  • 1/2 a week
  • 1 week
  • >1 week

We do this for a couple of reasons. First, it reinforces that we are estimating by removing the tendency to “put too fine a point” on our estimation by trying to get to the exact number of hours it will take. Estimates are inherently inaccurate, so let’s not pretend they aren’t.

Secondly, we believe it is easier for an engineer to assess a relative size of effort using these buckets – even with uncertainty. Relatively small stories (i.e., stories < 1 week) are usually easy to assess in terms of which bucket something will fall into. We often hear comments like, “I think this is probably more than a day, but I don’t think it will take 1/2 a week.” In this case, we put the estimate at 1/2 week and trust that the Law of Large Numbers will even things out across all estimates – which it almost always does.
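If it helps to see the bucket idea in code, here is a small sketch (values in days, assuming a five-day week; purely illustrative, not a tool we use):

    # Estimation buckets expressed in days: 0, half a day, a day, half a week, a week.
    BUCKETS = [0, 0.5, 1, 2.5, 5]

    def to_bucket(raw_estimate_days: float) -> float:
        """Snap a gut-feel estimate up to the next bucket; anything beyond a week
        signals that the story should be decomposed before it is estimated."""
        for bucket in BUCKETS:
            if raw_estimate_days <= bucket:
                return bucket
        return float("inf")   # ">1 week" -- decompose further

    print(to_bucket(1.5))   # 2.5 ("more than a day, but not half a week" becomes half a week)
    print(to_bucket(8))     # inf (too big -- break the story down)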

Estimating our Story

We favor one-week iterations at Don’t Panic Labs. This forces us to “eat the whole elephant in small bites.” It increases the frequency of delivery to QA and customers for feedback, and means we are – at most – one week away from seeing that we might have some schedule risk. We have tailored our iteration planning process to be very light, so the one-week rhythm remains efficient with little/no overhead.

Even if we were not on one-week sprint cycles, we would still require all of our stories to have an estimate of one week or less. This is key to helping reveal the uncertainty and hidden assumptions. If we originally think the Submit Order story is something like two weeks, then it’s a safe bet there is a significant amount of uncertainty there. When this occurs, we ask the team (usually an engineer, project manager, and product owner) to decompose the story and estimate the sub-stories that are identified. They will do this until all of the sub-stories are equal to one week or less.

For our Submit Order story, we might end up with the following:

  • As a customer, I want to process an order using a credit card for payment – 2 weeks
    • Process credit card authorization – 1 week
    • Store order details – 1/2 week
    • Verify shopping cart pricing, tax, and shipping – 1/2 week
    • Send notification to customer and seller – 1/2 week

The more detailed estimates of the sub-stories add up to about two and a half weeks, which I have much more confidence in than the original two-week estimate. I am also pleased to see that there was some discussion about the desire to verify the product pricing before submitting the order (rather than just accepting the pricing from the shopping cart). This is a requirement that might not have been articulated in the original description of the main story.
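One way to picture that roll-up (a sketch only; the sub-story names mirror the example above, and the one-week ceiling is encoded as a simple check):

    # Sub-story estimates in weeks for the hypothetical Submit Order story.
    submit_order = {
        "Process credit card authorization": 1.0,
        "Store order details": 0.5,
        "Verify shopping cart pricing, tax, and shipping": 0.5,
        "Send notification to customer and seller": 0.5,
    }

    # Every sub-story must fit within a one-week iteration, or it gets decomposed again.
    too_big = [name for name, weeks in submit_order.items() if weeks > 1.0]
    assert not too_big, f"Decompose further: {too_big}"

    print(sum(submit_order.values()))   # 2.5 -- versus the original 2-week gut estimate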

Wrapping Up

I strongly encourage our development team to engage in analysis and design early on in the project cycle, perhaps as early as backlog development. I have seen time and again how critical thinking has uncovered details that were not originally understood. Estimation is a powerful tool for telling us how well we truly understand what is involved for a particular story. Stories that are greater than a week – let alone stories with estimates of a month or more – almost certainly have a lot of unseen details that could disrupt development efforts unless they are revealed early on in the project.

In the next (and last) post in this series, I will discuss a lightweight tool we use within the sprint cycle itself that helps ensure the technical details are properly fleshed out before a bunch of time and effort is spent re-working code that does not match the product owner’s requirements.

I was recently re-introduced to one of my favorite essays, Why We Should Build Software Like We Build Houses, by Leslie Lamport. Leslie is one of several thought leaders within our industry who I really admire, both for his insights into the nature of software design as well as for his contributions in terms of the products and technologies he has developed (in Leslie’s case, TLA+ and the recent Cosmos DB project).

There were a couple of quotes in the article that really resonated with me and sparked a few thoughts on a series of blog posts I am writing.

The first was a quote from cartoonist Dick Guindon:

“Writing is nature’s way of letting you know how sloppy your thinking is.”

The second quote is:

“Some programmers argue that the analogy between specs and blueprints is flawed because programs aren’t like buildings. They think tearing down walls is hard but changing code is easy, so blueprints of programs aren’t necessary. Wrong! Changing code is hard – especially if we don’t want to introduce bugs.”

What I like about these two quotes is that they are consistent with my worldview of software design and development, while at the same time challenge the conventional wisdom of our industry at large. Since we at Don’t Panic Labs are trying to advance the state of the art in software design and development in our little corner of the world by “shaking things up a bit,” I thought they would be good touchstones from which to launch this series of essays.

The first quote relates to my firm belief that we do our deepest, most critical thinking when we are forced to articulate our thoughts. I never really did much writing in my engineering undergraduate education. It wasn’t until my first job as a Systems Engineer that I did much writing, and then I was doing it a lot. Most of it was concept documents, requirements specifications, and test plans. I learned early on from these experiences how different the mind works when you are expressing a concept or idea in words, as opposed to just thinking about it.

Our mental models tend to be more abstract, but when put down on paper they become more concrete. This process often reveals gaps in our understanding of the problem or solution – important gaps that are valuable to identify early on.

I also experienced this when I was teaching at the UNL Raikes School. It became quite apparent that the depth of understanding required to teach a concept is much greater than that required to simply apply it. Somehow, our intuition and instincts (likely influenced by our past experiences) allow us to effectively apply concepts without the depth of understanding required to teach them. I suspect many of us have had that experience whenever someone asked us why we did something the way we did. We often react with “because I know it will work.”

The second quote relates to the situation we often find ourselves in at Don’t Panic Labs where we are expressing views and ideas that tend to run counter to the cultural norms and “conventional wisdom” in our industry. This quote specifically speaks to the notion that we can just start “slinging code” and come back to fix it later. I have discussed this idea in a number of talks describing this as a trap. Martin Fowler demonstrates this brilliantly with his “Design Stamina Hypothesis.”

So, where am I going with this, and what is the theme behind this series? I am going to start with a discussion of what I see as a key contributor to re-work, missed schedules, and poor product fit, and then I will follow up with a couple of processes/techniques we emphasize at Don’t Panic Labs that are specifically designed to mitigate these risks before we start putting fingers to the keyboard and begin coding.

The Danger of Incomplete Pictures

Software development is a team sport, with many people, disciplines, perspectives, skill sets, and communication styles represented. When things go wrong, it is often the result of a key problem we see within these multi-disciplined software projects. People might assume this problem is a lack of requirements specificity. I agree that this is a problem, but I feel it is more symptomatic than causal.

I have come to view the lack of requirements specificity as the result of a failure to recognize how incredibly challenging it is to gain a shared picture of the requirements. We often assume that we have a shared understanding when, in fact, we do not.

Software design of complex systems (aka the type of systems many of us work on) is, by its very nature, a wicked problem:

“A wicked problem is a problem that is difficult or impossible to solve because of incomplete, contradictory, and changing requirements that are often difficult to recognize.”

I am sure we have all experienced this phenomenon in our work. As a side note, one of the challenges with the way we are educating software engineers is that many of the assignments in education are not wicked problems. But I digress.

In my experience, we as engineers are not being as effective as we can in minimizing requirements that are “difficult to recognize.” Some of this is a result of human nature and the challenges with communication. Let me explain.

Imagine you are a software engineer who is responsible for implementing a user story. You meet with the product owner and she explains the concept and requirements behind the user story. In her mind, what she described is this:

The problem is, you took what you heard and created a mental model that looks like this:

What’s more, when you described this to your QA engineers, they created a mental model that looks like this:

We need to keep in mind that even if we all shared the same picture from these conversations, it is highly unlikely our aggregate picture is complete:

You have probably seen this demonstrated via the following cartoon:

There are two key problems that this presents to us. I have described these as “blind spots” in our thinking:

  • Our assumption that we all have a shared picture
  • Our assumption that all assumptions and requirements are known

My next two posts in this series will introduce a couple of tools/techniques we have put to use inside Don’t Panic Labs that have made a significant impact on reducing the occurrence of “difficult to recognize” requirements, making the solutions to our projects and problems less “wicked.”

In my first post, I wrote about the responsibilities, goals, and struggles that development teams are facing today. In this post, I am covering our experiences and how we must take a bigger picture look at how we’re working in a world of constantly changing requirements.

Our Experience

So, I seemed to paint a lot of doom and gloom in my last post. As you might expect, I am not suggesting we just throw up our hands and say “Well, that’s just the reality of software development! Deal with it.”

When we launched Nebraska Global and Don’t Panic Labs in 2010, we were already using a lot of agile project management techniques. However, we were still suffering from an inability to “maintain a constant pace indefinitely” and we were committed to doing something about it. Before I get into our outcomes, let me give you some perspective on what we have done since 2010:

  • We have worked on dozens of greenfield software products including our own Beehive Industries, EliteForm, and Ocuvera products.
  • The team who built these products has been relatively small. We have 33 software engineers across Nebraska Global, with 17 of them being in Don’t Panic Labs. This count has steadily increased from the 12 engineers we started with in the spring of 2010.
  • While we have a decent balance of youth and experience, we have numerous individuals who have 15+ years of experience and much of that experience in the area of software product development.
  • From day one, these experienced leaders of the software development team have had a strong devotion to proven principles and patterns – or at least a strong passion to adopt and leverage said principles and patterns.

The net result is that the last seven years have been an amazing crucible for rapid learning and iteration on how we approach software design and development.

So what has been the result of this learning and iteration? At the risk of sounding hyperbolic, we have been solving the most complex problems most of us have ever worked on while achieving the best quality we’ve ever seen, all while maintaining the “constant pace” that enables sustainable business agility.

Specifically, we have:

  • Implemented a consistent methodology for software architecture and design that embraces change
  • Proven sustainable development velocity is achievable – some of our systems under development have code bases that are over seven years old
  • Maintained continuous attention to technical excellence and good design through processes and mentoring, which has enabled consistent adoption of our software design principles and patterns across all projects

Here Come The Zealots!

I can almost hear the questions and challenges to this idea that we need to invest more in software design and architecture to be truly agile:

“We can’t afford to do this now, we need to get the MVP out!”

“No one does design upfront – it’s not agile!”

“We can’t begin to architect a solution when the only certainty is that functionality will change.”

“If this gets traction we will come back later and re-architect or refactor it correctly.”

The last one is particularly fascinating to me. Think about what that is saying: If our hacked together MVP (minimum viable product) starts taking off, we will stop what we are doing and re-design it correctly. This is a trap, plain and simple. In my experience, the only scenario where you will have the time to be able to re-design an MVP is if you are NOT getting traction. Any project I’ve worked on that customers cared about was driven to add more and more features. You just aren’t going to get too many second chances on a system design if your product has initial success.

Enter The Design Stamina Hypothesis

Martin Fowler wrote a blog post a few years back that really captured, both in words and visually, the argument against the “code and fix” mentality of a lot of development teams. He makes the argument that while there is an initial advantage, in terms of throughput, for a “code and fix” team over a team that is doing “good design”, there will be a crossover point where the “code and fix” team starts to lose velocity and the “good design” team continues with a “constant pace”.

The image below is taken from Fowler’s post:

He goes on to argue that the crossover point is usually weeks into a project, not months. My own experience confirms this phenomenon, and I suspect that if teams were honest with themselves, they would recognize it in their own experiences as well.
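To make the shape of that argument concrete, here is a toy model with entirely invented numbers (this is not Fowler’s data): the “code and fix” team starts faster but loses a little throughput each week as cruft accumulates, while the “good design” team pays a small constant overhead and keeps its pace.

    # Toy model of the Design Stamina Hypothesis; all numbers are made up for illustration.
    def cumulative_features(weeks, start_rate, weekly_decay):
        total, rate, history = 0.0, float(start_rate), []
        for _ in range(weeks):
            total += rate
            rate *= weekly_decay        # accumulating cruft slows the team a bit more each week
            history.append(total)
        return history

    code_and_fix = cumulative_features(26, start_rate=10, weekly_decay=0.90)
    good_design  = cumulative_features(26, start_rate=8,  weekly_decay=1.00)

    crossover = next(week for week, (cf, gd)
                     in enumerate(zip(code_and_fix, good_design), start=1) if gd >= cf)
    print(crossover)   # 6 with these invented numbers -- weeks into the project, not months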

Designing For Change

One argument I have heard is that it is not realistic to design a system when so much is unknown due to changing requirements and insights we will gain from customers that may drive a product in directions different from what we understand today.

My answer to this is pretty simple: this is only a problem if you are not designing for change.

This notion of encapsulating future change has been around for quite some time and is pretty common in non-software system design. It requires that areas of likely change be isolated from the rest of the system so that when a change in requirements does occur, its impact is limited to a single module, service, or class. The advantage of this approach is that you don’t need to know the specifics of the future changes; you only need to recognize that change is likely.

Here are some examples of areas of likely change or volatility:

  • Data storage and access
  • Workflow/sequence
  • External service and hardware dependencies
  • Business rules and algorithms
  • Difficult design and construction areas
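As a small, hypothetical illustration of isolating the first item on that list (sketched in Python; the names are invented): the rest of the system depends only on an abstract store, so changing where and how orders are persisted later means writing a new implementation, not rippling changes through every caller.

    from abc import ABC, abstractmethod

    class OrderStore(ABC):
        """Interface that hides the storage decision from the rest of the system."""
        @abstractmethod
        def save(self, order_id: str, data: dict) -> None: ...
        @abstractmethod
        def load(self, order_id: str) -> dict: ...

    class InMemoryOrderStore(OrderStore):
        """Today's decision: keep orders in memory (fine for tests and early prototypes)."""
        def __init__(self):
            self._orders = {}
        def save(self, order_id, data):
            self._orders[order_id] = data
        def load(self, order_id):
            return self._orders[order_id]

    class OrderWorkflow:
        """Callers see only the abstraction; a move to a document database later
        would mean adding a new OrderStore implementation, not touching this class."""
        def __init__(self, store: OrderStore):
            self._store = store
        def place_order(self, order_id: str, items: list) -> None:
            self._store.save(order_id, {"items": items, "status": "placed"})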

The driving principles behind this methodology are derived from David Parnas’ seminal 1972 paper, in which he introduced the concept of information hiding and promoted, among other things, decomposing systems by hiding “arbitrary” design decisions behind static interfaces.

The end result becomes a system that literally embraces change as opposed to one that is rigid and fragile in the face of change. There are other benefits as well:

  • Reduces the coupling and increases the cohesion of modules, services, and classes
  • Creates systems that are highly testable
  • Tends to naturally incorporate widely accepted design principles like SOLID
  • Reduces the “field of view” for a developer when she is making changes to the system thus reducing the likelihood of unintended changes in behavior
  • Results in a system that is easier for developers to comprehend and follow

Bottom Line

We live in a world of changing requirements, and businesses rely on us to be able to function in this world. It is our responsibility to enable business agility.

Software can rot as a result of changing requirements, a lack of coherent design, and developers’ lack of familiarity with the original design philosophy.

The bottom line is this: software design always happens. You are either doing it:

  • proactively by following good software design principles and methods and informed by a conceptual design for the entire system and recognizing the likely areas of change, or
  • in real-time as you are coding without fully understanding the implications of each design decision that occurs in isolation.

If you and your team choose to forgo a disciplined approach to development and maintenance, expect software entropy to happen. There is no shortcut or magic framework that can save you. You will not be able to maintain a “constant pace”. Allowing chaos and poor design to rule means your organization will have reduced business agility over time.

You need to ask yourself whether you want your design to be random, or coherent, consistent, and able to embrace change. The answer is important because the success of your product or business and the health of your development culture depend on it.


This post was originally published on the Don’t Panic Labs blog.

Hopefully this headline got your attention. I considered trying to come up with a hashtag like #NoTDD or #NoEstimates that seem to be popular these days. I opted for a (hopefully) catchy title instead. The purpose of this post is not to bash agile methods or to somehow suggest that we should not be following agile principles. Far from it. My purpose is to get you thinking about how agile, by itself, has limitations in its ability to transform development teams and to challenge some of the dogma around “just-in-time” software design in agile environments.

In this part of my two-part series, I am covering responsibilities, goals, and struggles that development teams are facing today. In part 2, I will cover our experiences and how we must take a “big picture” look at how we’re working in a world of constantly changing requirements.

What Is Our Responsibility?

When you think about where software development fits within your organization, what do you see as the ultimate responsibility you have within the company? What is it that you provide? What is your ultimate role in contributing to the success of the business?

I have given some thought to this over the years, especially when we formed Nebraska Global and Don’t Panic Labs. Our goal in building the software development team at NG/DPL was to reduce the risk of the development side of startups and enable accelerated MVP (minimum viable product) timelines. We would do this by creating a “flexible” product development resource that we could move in and out of the various startups and projects we were building. Over time, this internal flexible resource has become an external one as we have built a thriving contract product development business that enables other companies to flex us in as they need help with various innovation, product development, and legacy application migration initiatives.

The insight I had on all of these efforts is that, regardless whether you are developing a product for sale or developing a support system for a company, we all have similar responsibilities to our partners – to enable business agility.

So what do I mean by this? We must be able to rapidly react and respond to new information and new insights that our partners receive. This means we should be providing the necessary features and support systems that allow them to effectively leverage the new information and insights. To the extent we are able to do this defines how agile we really are.

Agile Methods Are Essential, But…

… they are not a silver bullet. We see a lot of teams and organizations transitioning to agile processes with the hope/expectation that this will give them enough agility to be responsive to business needs, create better alignment between stakeholders and developers, and increase the team’s productivity.

I would be surprised if most organizations adopting agile methods don’t get at least some lift in terms of alignment between stakeholders and the ability to respond to changing priorities. However, unless the development team is quite mature and is already developing loosely coupled, highly cohesive, testable software designs, I doubt they will see much, if any, improvement in productivity. In fact, I can see the potential for a significant decrease in quality, as well as in these teams’ ability to complete the work they commit to in each sprint. Why? I see two main reasons…

  • Ever-increasing complexity – the problems we are trying to solve with software today are far more complex than they were 5-10 years ago, and this is not going to slow down. The increasing sophistication of the solutions and the need to react quickly to changing market demands is putting increasing pressure on software platforms that often have a significant amount of technical debt and have become fragile and rigid.
  • Team maturity – it’s easy to go to a conference and watch a thought leader talk about the benefits of agile processes and how they have dramatically changed their culture and success. These are genuine stories and I have no doubt there is little hyperbole in their claims. The problem is, most organizations do not typically have the consistent level of maturity in processes and ability that these thought leaders’ teams have. If I had a team led by Robert Martin, Martin Fowler, Kent Beck or folks of that caliber (i.e., folks with extremely strong software design/development capability), I could see where layering in agile processes would be sufficient to achieve the sustained business agility that is promised. The reality is that most of our teams are composed of “mere mortals” who lack the background and experience necessary to design and develop software in a way that takes full advantage of agile processes.

Addressing The Software Development Side Of The Equation

What’s missing from this conversation, and from many implementations of agile, is recognition of the changes required on the software development side of the equation. In other words, what are the changes and processes we need to adopt in the areas of software design, maintainability, quality assurance, etc., to fully realize the benefits of an agile culture?

The irony here is that the folks who developed the Agile Manifesto saw this. Many of us have seen the home page of agilemanifesto.org and are familiar with the list of what is valued. I suspect most people did not see the “Twelve Principles of Agile Software” link at the bottom of the page. There are three of these principles that I think we, as a community of agile developers, fall short on:

  • Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
  • Continuous attention to technical excellence and good design enhances agility.
  • Simplicity–the art of maximizing the amount of work not done–is essential.

The first bullet states that we should be able to achieve sustainable development velocity indefinitely. Ask yourself this… “How many projects have I worked on where it was just as easy to make changes to the system in year seven as it was on day one?” or “Have I ever worked on a code base that was more than five years old where I was quite comfortable making changes to it?” I suspect most of you have never worked on a project where either of these has been the case.

I suspect some people might look at the second bullet and assume it refers to how proficient our developers are in a programming language or how comfortable they are using the latest framework. I see it quite differently. What this says to me is that we need more REAL software engineering: the software architecture and design skills, and the processes to nurture and maintain those architectures and designs, that are necessary to achieve the sustainable development discussed in the first bullet. Unfortunately, we are falling short on how these skills are taught and, as a result, many development teams are lacking.

The final bullet is included because this is a hot-button issue for me. I too often see folks pursue esoteric, overly complex solutions to problems instead of looking for the simplest (i.e., most understandable, testable, maintainable) solutions. When we create overly complex solutions to simple problems, we are creating technical debt. I see this most commonly in the unnecessary asynchronous processing and messaging that exists in a lot of software applications (the sketch below illustrates the contrast).
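To make this concrete, here is a minimal TypeScript sketch. The scenario and names (MessageBus, CustomerRepository, and so on) are hypothetical; it is only meant to show the contrast between routing a simple update through a message broker when nothing in the requirement demands asynchronous processing, and the direct call that is easier to understand, test, and maintain.

```typescript
// Hypothetical example: updating a customer's email address.

// Overly complex: the write is pushed through a message broker even though
// nothing about the requirement demands asynchronous processing. The caller
// can no longer know whether the update succeeded, and testing now requires
// standing up (or faking) the broker.
interface MessageBus {
  publish(topic: string, payload: unknown): Promise<void>;
}

async function requestEmailChange(bus: MessageBus, customerId: string, email: string): Promise<void> {
  await bus.publish("customer.email.change.requested", { customerId, email });
  // ...somewhere else, a subscriber eventually performs the update.
}

// Simpler: a direct call against a small repository interface. It is easier
// to understand, easier to test, and the caller gets an immediate result.
interface CustomerRepository {
  updateEmail(customerId: string, email: string): Promise<void>;
}

async function changeEmail(repo: CustomerRepository, customerId: string, email: string): Promise<void> {
  await repo.updateEmail(customerId, email);
}
```

Nothing about the simpler version prevents introducing messaging later if a real requirement (scale, integration, resilience) demands it; the point is that the complexity should be earned, not assumed.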

Managing Complexity

Steve McConnell stated in “Code Complete” that unmanaged complexity is the most common technical reason projects fail. In other words, we don’t do a good job managing complexity. One need only look at the healthcare.gov debacle a few years back. Why is this? What is going on here? I already mentioned above that the problems we are solving today are more complex than in years past, and yet I do not see much being done on the software design side to mitigate this complexity risk. The main challenge facing us is a human one: our minds simply cannot hold all of a modern system’s complexity at once.

Unless we change our approach to software design and development to meet these challenges head-on, we will never be able to “maintain a constant pace indefinitely”.

This post was originally published on the Don’t Panic Labs blog.

It’s very easy to throw terms around in our (or any) industry. In our hurried culture, we latch onto words or phrases that may not fully encapsulate their original intent. I’m afraid that is what has happened to the label of “software engineer”. And it has not been without consequences.

I believe the real, working definition of what a software engineer truly is has been diluted. Some of this could be that the title has been overused by people who have little knowledge of what makes an engineer an engineer. Part of it, too, could be that it has become a marketing or recruiting phrase (let’s be honest, just stating you have “programmers” or “developers” doesn’t have the same prestige as “software engineer”). As soon as one company begins using “software engineer” to describe their employees, the floodgates are opened for other businesses to begin doing the same. The term quickly becomes meaningless.

Software development is still a relatively young industry and is one that has evolved quickly during its short history. So, it’s not surprising that we have informally adopted varying terminology. But as our industry continues to mature with formal accreditations and recognition of skills, it’s important to be aware of what we’re implying with the words we use and titles we assign (especially when we begin appropriating terminology from existing disciplines for use in our own industry).

What I’d like to cover here is what we at Don’t Panic Labs believe a software engineer is and what we mean when we say it. Hopefully, this will force us to be more deliberate about its definition and highlight the differences that separate developers from software engineers. And even more, we can bring some awareness to what we call ourselves and the potential ramifications of doing so incorrectly.

A Little History

In the short history of software development, the act of writing code has suffered from a lack of awareness around what is truly required to create a functional, understandable, and maintainable codebase. In some ways, this is due to the industry seeing developers as just coders.

Since the beginning, we, as an industry, have had little understanding of what solid software design principles look like in action. We code it, release it, patch it, update it with new features, and then repeat the process. Code and fix. Code and fix.

But that all comes with a cost. In the 1980s, a few people began advancing ideas that software development should be treated as a discipline like any other engineering-based vocation. Sadly, it hasn’t caught on as quickly as it should have.

Regardless of how far we’ve come, we still have an identification problem when we label all developers as “software engineers”. It’s not only unfair to actual engineers; it implies something that is not true.

As we at Don’t Panic Labs see it, you have four basic levels: the developer, the software engineer, the senior software engineer, and the lead software engineer/software architect.

The Software Developer

The definition of a software developer is the widest of the four we’re covering here. At the risk of sounding like I’m reducing the role of the developer, what I’m describing here is the bare minimum of what we’d consider a developer at Don’t Panic Labs.

We see the act of writing code as analogous to the process of manufacturing a particular product. While it requires specialized knowledge and skills, it does not involve any design. Developers are the construction workers, the heavy lifters, the folks who bring the ideas into reality.

A developer is a person who understands the programming languages used, has a grasp of coding best practices, and knows the tech stack their project requires. However, they are not the people who created the plans and thought through the various scenarios that could arise.

The Software Engineer

To me, the difference between a developer and a software engineer is analogous to the difference between the person working on the factory floor (the developer) and the manufacturing supervisor. While the former does the actual work, the latter ensures the directions and plans provided for manufacturing are correctly executed and that any ambiguities or adjustments are identified and dealt with. To perform in this role, it is essential to be able to communicate and collaborate as well as possess a working knowledge of design principles and best practices.

In our world of software, the role of software engineer includes all of the software developer’s skills as well as consistently demonstrating great attention to detail. This person also looks for opportunities to use and execute appropriate engineering techniques such as automated tests, code reviews, design analysis, and software design principles.

From a maturity standpoint, the software engineer is also someone who recognizes when their inexperience is a factor and proactively reaches out to more senior engineers for assistance or insight. This is where one’s ability to collaborate and communicate should be evident.

The Senior Software Engineer

A senior software engineer must be all that is listed above, but also be someone who can mentor younger programmers and engineers (e.g., young hires, interns) to improve their skillsets and understand what goes into design/architecture decisions. They also help fill in the gaps left by an education system that focuses on programming skills and not software design knowledge. To do this, they need the ability to articulate and advocate design principles and recognize risks within a design and project.

A senior software engineer is also expected to be able to autonomously take a customer problem all the way from identification to solution and is uncompromising when it comes to quality and ensuring the integrity of the system design.

Senior software engineers are your best programmers. But just because a person is a great programmer does not mean they can decompose a system into smaller chunks that can then be turned into a solution. That is the main requirement of a lead software engineer / software architect.

The Lead Software Engineer / Software Architect

What makes a lead software engineer/software architect is the thinking, the considerations, the design behind the code, the identification of tradeoffs, and everything else that must come together before any code is written.

Being a lead software engineer is not so much about “fingers on the keyboard”. Their focus is on the design of the system that will get fingers on the keyboard later. But before any of that happens, the lead software engineer must scrutinize and evaluate all big picture elements. The design that comes from this effort should produce a clear view into all the considerations, anticipate most of the problems that could come up, help ensure that best practices will be adhered to along the way, and, maybe most critically, enable the system to be maintainable and extensible as new and changing requirements appear. In other words, the design must enable sustainable business agility.

Once the system design (or software architecture) is in place, it is the lead engineer who owns the “big picture” and ensures that the development and detailed design decisions that ensue are consistent with the intent and principles behind the original design. Without constant nurturing and maintenance of overall design integrity, the quality of the system design will rapidly decay. I strongly believe this responsibility should not be shared but rather should be owned by the lead software engineer.

The Future

So now that I’ve laid out how we at Don’t Panic Labs view developers and engineers, there’s something else we believe is also important and must be addressed if we are to move forward as an industry: education.

I often speak about how we aren’t educating our future developers and engineers. I’ve even written a post about how we’re failing. The problems we will be solving in the future (or now, one could argue) will require more than the “construction worker” developers we’re cranking out of our educational institutions today. While we need our developers to construct the code, we need more engineers equipped and educated to effectively think about that code before a single keystroke lands. As the complexity of our systems increases, so too does our need to ensure we’re anticipating the problems we may come up against and building sustainable systems.

This is no different than if – using building construction as an analogy – we were only educating our carpenters, plumbers, electricians, and welders without considering the need for engineers who create the plans for these folks to follow. As technologies and materials improve, engineers must be able to leverage these advances and make the necessary adjustments. Otherwise, workers will be left using the same old methods or, worse, making their own decisions in a vacuum and possibly creating a worse (or even dangerous) situation for everyone. Without the high-level vision and insight provided by an engineer, the whole industry is held back.

As our world continues to run at full speed toward a future more reliant on ever-evolving technologies, the need for properly trained engineers (who are educated based on a standardized set of requirements) will continue to grow.

And with that comes the need for a better understanding of the roles that comprise development teams. Whether one is a developer, software engineer, senior software engineer, or software architect, the distinctions are as important as the work itself. We already know that following the current path inevitably leads to chaos.

That’s something our world cannot afford.

This post was originally published on the Don’t Panic Labs blog.

I worry every day about the software we rely on in our daily lives. One need only look at the problems our airlines have had and the disruptions they have caused. I feel there is a significant risk that we will be crushed under the weight of the technical debt in this software. As an industry, we have got to get ahead of the mismatch between the complexity of the problems we are trying to solve and the approaches we are taking to manage that complexity. We need only look back to the Healthcare.gov debacle to see our failures on the grandest stage. Interestingly, the rate at which we successfully deliver large and complex projects still hovers around 5%, which is appalling.

It is 2017, and we still see many instances of teams and projects that, while they may be leveraging agile processes for project management, have no structure for translating requirements into a software design. What’s more, we still see very few organizations effectively leveraging proven practices such as test-driven design, automated integration testing, and code reviews. I estimate that we have interacted with over 100 engineers from dozens of organizations over the last seven years, and only a few (fewer than five) were working in environments where test-driven design or automated unit/integration testing was part of the development culture.
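For teams that have never worked this way, the barrier to entry is lower than it might seem. Below is a minimal sketch of an automated unit test using Node’s built-in test runner; the function under test (calculateLineTotal) is hypothetical and exists only to show how small the first step can be.

```typescript
// A minimal automated unit test using Node's built-in test runner (node:test).
import { test } from "node:test";
import assert from "node:assert/strict";

// A small, pure piece of business logic that is easy to test in isolation.
// (calculateLineTotal is a hypothetical example, not a real library function.)
function calculateLineTotal(unitPrice: number, quantity: number, discountRate: number): number {
  const gross = unitPrice * quantity;
  return gross - gross * discountRate;
}

test("applies the discount to the gross amount", () => {
  assert.equal(calculateLineTotal(10, 3, 0.5), 15);
});

test("a zero discount leaves the gross amount unchanged", () => {
  assert.equal(calculateLineTotal(10, 3, 0), 30);
});
```

Tests like these are cheap to write, run in milliseconds, and become the safety net that makes refactoring and ongoing design work sustainable.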

Why is this still happening in the face of all the benefits these structures and practices provide? I believe it is in large part due to how we train and develop software development professionals.

The Role of Formal Education

What role is formal education playing in this? Unlike my generation of software developers, most of the people we see entering the field today are coming from an educational background that is Computer Science-related (either a computer science/engineering degree or minor). I have said for years that the way we are training software developers (and engineers for that matter) in our universities is flawed.

I am not alone in my viewpoint. Thought leaders like David Parnas, Steve McConnell, and Fred Brooks (“A scientist builds in order to learn; an engineer learns in order to build.”) have all written about this. Research papers have been published based on surveys showing the gap between what is taught in school and what is required in practice. We are asking educational institutions to assemble a single program of study that satisfies accreditation and simultaneously prepares people for either a career in academic research or a career in our industry. How is this any different from eliminating mechanical engineering as a degree program and asking a physics department to train people to build bridges, aircraft, robots, industrial machines, etc.?

It seems absurd, right? But that is exactly what we are expecting the Computer Science programs of our universities to do. The reality is, these programs are more focused on in-depth coverage of the body of knowledge of computer science at the expense of the types of curriculum a true engineering program would have. Most programs I see have a single course on “software engineering” and no courses on best practices for software architecture or software design and development.

Sad, but true.

I am by no means indicting these programs for a failure to prepare students for the real world. The vast majority of kids coming out of college with these degrees find employment, so the universities might not see that there is a problem. While I believe these students are underprepared, we as an industry need to demand a better product from our higher education system.

Until something changes, we will continue to see students enter the workforce with little or no understanding of what a mature software development team looks like, what the design criteria should be for decomposing a system into a coherent architecture and design, how to effectively evaluate design tradeoffs against a set of accepted design principles, and the benefit of following best practices for the design and development of their code.

If students are lucky, they will enter the workforce as apprentices and land in an organization that has a mature and strong software design and development culture. Good luck. My experience is that these types of organizations are rare.

The Role of Professional Development

What role is professional development playing in training people once they get into the workforce? Tech conferences have historically been focused on introducing new technologies and frameworks. Occasionally there are presentations covering patterns and best practices, but I suspect most of the session presenters come from organizations with disciplined, mature development processes. Much of what they present assumes the audience is in a position to take advantage of these frameworks, which are designed to enable the best practices and patterns the presenters themselves already follow.

I suspect what really happens is that the developer either walks away from the conference frustrated that they will never be able to leverage the tools, or they “implement” the tools and frameworks with less-than-optimal results. I have seen first-hand the results of teams implementing an MVVM framework in WPF after working through the quintessential tutorial. They jump into using the framework without really understanding the MVVM pattern and are unable to make the nuanced decisions required to fully leverage the advertised benefits of developing a UI layer on top of it (separation of UI and business logic, testability of view models, etc.). Our own experiences have made us wonder whether we would be better off with good ol’ Windows Forms.
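To be clear about what the pattern is supposed to buy you, here is a minimal view model sketch, written in TypeScript rather than WPF/C# and with hypothetical names (LoginViewModel, AuthService). The point is that presentation state and logic live in a class with no reference to UI controls, so it can be exercised entirely from a test; that separation, not the framework itself, is the benefit.

```typescript
// A hypothetical login view model: presentation logic with no UI dependency.
// The view (WPF, web, or anything else) binds to these properties; the view
// model itself can be tested without any UI at all.
interface AuthService {
  signIn(username: string, password: string): Promise<boolean>;
}

class LoginViewModel {
  username = "";
  password = "";
  errorMessage = "";
  isBusy = false;

  constructor(private readonly auth: AuthService) {}

  // The view binds a button's enabled state to this property.
  get canSubmit(): boolean {
    return !this.isBusy && this.username.length > 0 && this.password.length > 0;
  }

  async submit(): Promise<void> {
    if (!this.canSubmit) return;
    this.isBusy = true;
    try {
      const ok = await this.auth.signIn(this.username, this.password);
      this.errorMessage = ok ? "" : "Invalid username or password.";
    } finally {
      this.isBusy = false;
    }
  }
}

// Testable with a fake service and plain assertions, no UI framework required.
async function demo(): Promise<void> {
  const fakeAuth: AuthService = { signIn: async () => false };
  const vm = new LoginViewModel(fakeAuth);
  vm.username = "jane";
  vm.password = "wrong";
  await vm.submit();
  console.assert(vm.errorMessage === "Invalid username or password.");
}

void demo();
```

A team that understands why the view model has no UI references will get the benefits regardless of which framework they adopt; a team that only follows the tutorial usually will not.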

Bottom line: it seems like we as an industry are completely ignoring the need to coach up our development teams on these fundamental principles and practices.

What is the solution? I feel we must address both the root cause of this problem (our university education programs) and the way we are developing the software professionals already in the industry.

Fixing Our Education System

Changing the way we train software developers is not a new idea. Thought leaders in our industry such as David Parnas and Fred Brooks (especially his chapter discussing “Where Do Great Designers Come From?” in “The Design of Design”) have been arguing for this type of change for quite some time.

In 1999, Parnas wrote a paper proposing a curriculum designed to address many of the concerns I have outlined above. In it, he argues for separating the education of software engineers from computer scientists, allowing the software engineers to be trained in the style of the other engineering disciplines and emphasizing that being a software engineer is more than just being a good programmer.

Ever since I joined the advisory board for the University of Nebraska-Lincoln (UNL) College of Computer Science & Engineering, I have been advocating for a dedicated B.S. in Software Engineering that would be differentiated not only from the computer science program at UNL but also from similar programs throughout the country.

The idea began gaining traction a couple of years ago and eventually led to UNL formally launching its B.S. in Software Engineering last fall. While many aspects of this program are still being developed, it is encouraging to see industry best practices included in the performance criteria for students and industry practitioners invited to speak on the state of the practice.

I have often spoken with faculty about the need to establish behaviors and norms that are consistent with industry best practices. For example, activities like test-driven design should become something students understand and use naturally throughout their coursework. It is also very encouraging to see unique software engineering courses targeted at providing students the background and experience that will better prepare them to enter the workforce.

I believe the UNL software engineering program has the potential to be a model for an improved approach to preparing students for careers in software development.

Creating True Professional Development Opportunities

If everyone had a Robert Martin or Martin Fowler on their team, then we would have nothing to worry about. Unfortunately, many (if not most) teams do not have people with the depth of understanding and experience (or even the time) to train up their teams. We need to help these teams along their path to maturity by providing meaningful and effective professional development opportunities. Somehow, we “mere mortals” need to be equipped with tools, technologies, processes, and patterns that can help us be successful without being experts. Our goal should be that the journeyman software developer is able to effectively adopt these patterns and practices.

To be clear, I am not talking about traditional “code schools” as a solution. We can’t send people to code schools and expect them to be productive in a true software development team. Where will they be getting the background and experience on concepts like encapsulation? Information hiding? Loose coupling? SOLID principles?

These code schools can, at best, produce people who can augment a development team as junior apprentices, since they are certainly less prepared for the real world than those coming out of traditional computer science programs. These are the folks working on the “factory floor”. At least the computer science grads have some exposure to, and working knowledge of, a broad set of concepts.
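As one small illustration of the kind of concept I mean, here is a hypothetical TypeScript sketch of loose coupling and information hiding: the reporting logic depends only on a narrow OrderReader abstraction, so how orders are actually stored can change (or be faked in a test) without touching the business rule. The names are made up for the example.

```typescript
// Information hiding / loose coupling in miniature (hypothetical names).
// The report logic depends only on this small abstraction, not on how or
// where orders are actually stored.
interface OrderReader {
  totalsByCustomer(): Promise<Map<string, number>>;
}

class RevenueReport {
  constructor(private readonly orders: OrderReader) {}

  // Business rule: report only customers above a spend threshold, biggest first.
  async topCustomers(threshold: number): Promise<string[]> {
    const totals = await this.orders.totalsByCustomer();
    return [...totals.entries()]
      .filter(([, total]) => total >= threshold)
      .sort((a, b) => b[1] - a[1])
      .map(([customer]) => customer);
  }
}

// Because the dependency is an interface, swapping the storage technology
// (or substituting an in-memory fake for a test) does not touch the report.
const inMemory: OrderReader = {
  totalsByCustomer: async () => new Map([["acme", 1200], ["globex", 300]]),
};

new RevenueReport(inMemory)
  .topCustomers(500)
  .then((names) => console.assert(names[0] === "acme" && names.length === 1));
```

Seeing the payoff this directly (a storage change or a test fake never touches the business rule) is what turns a vocabulary word like “loose coupling” into a habit.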

The answer, to me, is to build upon the foundation these computer science grads already have and provide the type of education that helps them make the connection between the engineering concepts they may have learned and the benefits those concepts provide. I’m imagining a program that gives them hands-on experience applying these concepts in real-world scenarios, lets them see how these patterns and practices will benefit them, demonstrates that software “rot” is not inevitable in every code base, and shows that “funability” is something every development team should experience.

This type of combined lecture/hands-on skill development is exactly what we are trying to build at Don’t Panic Labs through our Software Design & Development Clinics. These courses are focused on reinforcing concepts that software developers are familiar with and making them concrete and actionable. We show students how to apply these concepts, practices, and patterns in real-world scenarios of both new projects and, more importantly, legacy systems that have technical debt in the architecture, design, and code.

If we can instill people with the confidence to begin applying these practices and patterns on the maintenance of their legacy systems, then we will have achieved what we set out to do.

Wrapping it Up

What’s it going to take to make progress in this area? I know we at Don’t Panic Labs have attempted to do our part by engaging with our university (UNL) and developing professional development programs for engineers. Maybe this is how it will grow, through grassroots efforts.

My hope is we, as an industry, can get ahead of the curve before some catastrophic event occurs that results in a massive upheaval in our education system. That would be tragic. I worry that we have a lot of people who, while doing the best they can, are crippled by a lack of understanding or confidence to do what’s right and are making poor judgments that could have a significant impact on their companies’ ability to compete and innovate.

Let’s mobilize and bring forward those who have been left behind by the folks who have already adopted these best practices and patterns. Let’s raise the tide in a way that floats everyone’s boats. If we can do this, imagine where we could be as a community and a profession. I encourage you to share your thoughts below.

This post was originally published on the Don’t Panic Labs blog.