I just got back from the International Conference on Global Software Engineering (ICGSE) that was held in Montreal, Canada on May 24-26. This was the first time I had attended and presented at a conference that is primarily focused on research (i.e., formal paper submission and acceptance). I was not exactly sure what to expect from the experience. I was not familiar with this conference, but I had been encouraged by Bonita Sharif, a faculty member at the University of Nebraska, to submit an Industry Talk proposal at the last minute.

I was a bit surprised by the mix of individuals there. For one, there were more people from Brazil than from the US! Additionally, the workshop drew a mix of academic and industry professionals. In fact, roughly as many papers were presented by scholars as by industry people.

Overall, the experience exceeded my expectations, and I thought I would use this blog post to compare and contrast with my experiences speaking and attending industry-focused tech conferences like Microsoft Build and the Heartland Developer Conference (HDC).

Format of the Presentations at the Workshop

The workshop had about 50 attendees and was broken into five separate tracks with both research and experience presentations in each track. The five tracks were Managing Human Resources, Business Strategy, Methods and Processes, Technology, and Teaching/Skills. My talk was part of the Methods and Processes track.

Each presenter had a 20-minute slot, moderated to ensure the talking portion did not exceed 15-17 minutes. The idea was to present the topic and then leave time for questions at the end. I found this format very effective and, as a result, I was exposed to quite a bit of information in a compressed timeframe.

Additionally, since these workshops tend to be very topic-focused, the same people are in the room for most of the presentations. This gives you the opportunity to engage and interact with folks who share a common interest. As a result, I was able to have quite a few conversations with individuals from both academia and industry throughout the conference.

Observations and Contrasts with Tech Conferences

To be honest, I was concerned going into the conference that the content would be more “academic” or deal with basic research rather than addressing real-world challenges head-on. I was very pleased to see that this was not the case. In fact, I found myself thinking enthusiastically about the potential benefits of engaging more with these types of conferences.

If I had to describe the difference between this type of conference and a traditional tech conference like Build or HDC, I would say conferences like ICGSE are more focused on researching and presenting findings on real-world challenges, methods, and processes versus the sharing of technology and tools that we often see at other tech conferences. Generally speaking, one is focused on advancing the state of the art, and the other is focused on educating practitioners in modern technology.

An additional observation is that the general tone of the talks felt more collaborative and inquisitive than at tech conferences. You are not left wondering whether you are getting a sales pitch, as you sometimes are at tech conferences. In fact, many of the talks concluded by identifying further research that might be required based upon the findings.

If I had to pick an industry conference that might align more directly, I would look at something like the O’Reilly Software Architecture Conference or any number of the agile practice conferences out there. These tend to provide a variety of case study experiences, not unlike ICGSE.

In the end, I think both types of conferences play an important role. We need to have options for people with different goals and needs (e.g., learning about modern technology versus learning about the practice of software engineering and development).

I think we, as an industry, should participate in more conferences like ICGSE, ICSE, or Agile Alliance XP to engage more with the research community. Doing so will allow industry to have more influence on the areas and types of research being done. These conferences already have industry support, but I am concerned that the support might be coming from the usual suspects (i.e., larger companies), which would mean less interaction between researchers and small to medium-sized companies and less influence on research into the problems these companies face.

Going Forward

I have written and spoken on numerous occasions about the problems and challenges our industry is facing and how we are falling short when it comes to education. I do not believe technology is the answer to our problems. Software engineering is not coding. Software engineering is the set of intellectual activities that occur in advance of the coding, or “manufacturing” portion of developing software. Software engineering is concerned with the “-ilities” of software systems.

As I mentioned above, I was happy to see far more industry-relevant research, information, and discussions at this conference. If this is the nature of these types of gatherings, I am inclined to be more involved in them.

I know from past conversations that researchers in academia have struggled to engage industry partners for research. I also know that many in industry are dismissive of the research community because they don’t see obvious near-term benefits of much of the research. A solution to this problem may be for academia to be more sensitive to the need to create immediate value for industry and for industry to be more focused on the software engineering challenges, not technology/coding/manufacturing, when it comes to collaboration with academia.

Given that our profession is still relatively young and maturing, I like to look for analogies in other professions that might apply to our field. We have all been a part of conversations where we debated the merits of some particular methodology or process that has gained some traction in the field. Everyone argues for their own preference, and the debate tends to center around more qualitative arguments and intuition. No one ever really wins when folks are entrenched in their viewpoints.

I expect that there were times when this might have occurred in other fields like engineering, medicine, etc. For example, I can imagine two surgeons arguing, without any hard data, about whether their particular technique was easier to learn and perform and had better outcomes. It’s hard to imagine either becoming the standard of care based upon these informal debates. While I am no expert, I suspect that the medical field takes a more formal approach when evaluating the relative merits of alternative treatments by subjecting them to more rigorous analysis. Because of this, other practitioners can trust these evaluations and adopt the better method as the new standard of care.

Imagine if 1) more of the methodologies, processes, and techniques developed within our industry were evaluated rigorously, and 2) these formal evaluations were trusted and accepted by the community as a whole, leading to broader adoption of those methodologies. I believe we will have reached a new level of maturity once this becomes the norm. I, for one, am hoping to collaborate with people like Bonita on research that explores how we can make sound software engineering methods, practices, and patterns more approachable and adoptable by organizations of all sizes.

Hopefully this headline got your attention. I considered trying to come up with a hashtag like #NoTDD or #NoEstimates, which seem to be popular these days, but opted for a (hopefully) catchy title instead. The purpose of this post is not to bash agile methods or to somehow suggest that we should not be following agile principles. Far from it. My purpose is to get you thinking about how agile, by itself, has limitations in its ability to transform development teams, and to challenge some of the dogma around “just-in-time” software design in agile environments.

In this part of my two-part series, I am covering responsibilities, goals, and struggles that development teams are facing today. In part 2, I will cover our experiences and how we must take a “big picture” look at how we’re working in a world of constantly changing requirements.

What Is Our Responsibility?

When you think about where software development fits within your organization, what do you see as the ultimate responsibility you have within the company? What is it that you provide? What is your ultimate role in contributing to the success of the business?

I have given some thought to this over the years, especially when we formed Nebraska Global and Don’t Panic Labs. Our goal in building the software development team at NG/DPL was to reduce the risk of the development side of startups and enable accelerated MVP (minimum viable product) timelines. We would do this by creating a “flexible” product development resource that we could move in and out of the various startups and projects we were building. Over time, this internal flexible resource has become an external one as we have built a thriving contract product development business that enables other companies to flex us in as they need help with various innovation, product development, and legacy application migration initiatives.

The insight I had from all of these efforts is that, regardless of whether you are developing a product for sale or a support system for a company, we all have a similar responsibility to our partners: to enable business agility.

So what do I mean by this? We must be able to rapidly react and respond to new information and insights that our partners receive. This means we should be providing the necessary features and support systems that allow them to effectively leverage that new information. The extent to which we are able to do this defines how agile we really are.

Agile Methods Are Essential, But…

… they are not a silver bullet. We see a lot of teams and organizations transitioning to agile processes with the hope/expectation that this will give them enough agility to be responsive to business needs, create better alignment between stakeholders and developers, and increase the team’s productivity.

I would be surprised if most organizations adopting agile methods don’t get at least some lift in terms of alignment between stakeholders and the ability to respond to changing priorities. However, unless the development team is quite mature and already developing loosely coupled, highly cohesive, testable software designs, I doubt they will see much, if any, improvement in productivity. In fact, I can see the potential for a significant decrease in quality, as well as in these teams’ ability to complete the work they commit to in each sprint. Why? I see two main reasons…

  • Ever-increasing complexity – the problems we are trying to solve with software today are far more complex than they were 5-10 years ago, and this is not going to slow down. The increasing sophistication of the solutions and the need to react quickly to changing market demands is putting increasing pressure on software platforms that often have a significant amount of technical debt and have become fragile and rigid.
  • Team maturity – it’s easy to go to a conference and watch a thought leader talk about the benefits of agile processes and how they have dramatically changed their culture and success. These are genuine stories and I have no doubt there is little hyperbole in their claims. The problem is, most organizations do not typically have the consistent level of maturity in processes and ability that these thought leaders’ teams have. If I had a team led by Robert Martin, Martin Fowler, Kent Beck or folks of that caliber (i.e., folks with extremely strong software design/development capability), I could see where layering in agile processes would be sufficient to achieve the sustained business agility that is promised. The reality is that most of our teams are composed of “mere mortals” who lack the background and experience necessary to design and develop software in a way that takes full advantage of agile processes.

Addressing The Software Development Side Of The Equation

What’s missing from this conversation, and from many implementations of agile, is recognition of the changes required on the software development side of the equation. In other words, what are the changes and processes we need to adopt in the areas of software design, maintainability, quality assurance, etc., to fully realize the benefits of an agile culture?

The irony here is that the folks who developed the Agile Manifesto saw this. Many of us have seen the home page of agilemanifesto.org and are familiar with the list of what is valued. I suspect most people did not see the “Twelve Principles of Agile Software” link at the bottom of the page. There are three of these principles that I think we, as a community of agile developers, fall short on:

  • Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
  • Continuous attention to technical excellence and good design enhances agility.
  • Simplicity–the art of maximizing the amount of work not done–is essential.

The first bullet states that we should be able to achieve sustainable development velocity indefinitely. Ask yourself this… “How many projects have I worked on where it was just as easy to make changes to the system in year seven as it was on day one?” or “Have I ever worked on a code base that was more than five years old where I was quite comfortable making changes to it?” I suspect most of you have never worked on a project where either of these has been the case.

I suspect some people might look at the second bullet and assume it refers to how proficient our developers are in a programming language or how comfortable they are with the latest framework. I see it quite differently. What it says to me is that we need more REAL software engineering: the software architecture and design skills, and the processes to nurture and maintain those architectures and designs, that are necessary to achieve the sustainable development described in the first bullet. Unfortunately, we are falling short on how these skills are taught and, as a result, many development teams are lacking.

The final bullet is included because this is a hot-button issue for me. I too often see folks pursue esoteric, overly complex solutions to problems as opposed to looking for the simplest (i.e., most understandable, testable, maintainable) solutions. When we are creating overly complex solutions to simple problems, we are creating technical debt. I see this most commonly in the unnecessary asynchronous processing and messaging that exists in a lot of software applications.
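As a small, contrived illustration of this point (the scenario and names below are invented, not taken from any real codebase), consider two ways to total an order's line items in Python. The asynchronous version drags a queue and an event loop into a job that is tiny, sequential, and has no concurrency requirement; the simple version is easier to read, test, and maintain:

```python
# Hypothetical example: the scenario and names are invented to
# illustrate the point, not taken from any real project.
import asyncio

# Overly complex: an asynchronous queue and event loop for a job that
# is small, sequential, and has no concurrency requirement at all.
async def total_via_queue(prices_cents):
    queue = asyncio.Queue()
    for price in prices_cents:
        await queue.put(price)
    total = 0
    while not queue.empty():
        total += await queue.get()
    return total

# Simple: a direct function that is trivial to understand and test.
def total(prices_cents):
    return sum(prices_cents)

if __name__ == "__main__":
    order = [1999, 500, 350]  # line items in cents
    print(asyncio.run(total_via_queue(order)))  # 2849
    print(total(order))                         # 2849, with far less machinery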

Managing Complexity

Steve McConnell stated in “Code Complete” that unmanaged complexity is the most common technical reason projects fail. In other words, we don’t do a good job managing complexity. One need only look at the healthcare.gov debacle a few years back. Why is this? What is going on here? I already mentioned above that the problems we are solving today are more complex than in years past, and yet I do not see much being done on the software design side to mitigate this complexity risk. The main challenges facing us are our own human limitations.

Unless we change our approach to software design and development to meet these challenges head-on, we will never be able to “maintain a constant pace indefinitely”.

This post was originally published on the Don’t Panic Labs blog.

It’s very easy to throw terms around in our (or any) industry. In our hurried culture, we latch onto words or phrases that may not fully encapsulate their original intent. I’m afraid that is what has happened to the label of “software engineer”. And it has not been without consequences.

I believe the real, working definition of what a software engineer truly is has been diluted. Some of this could be because the title has been overused by people who have little knowledge of what makes an engineer an engineer. Part of it, too, could be that it has become a marketing or recruiting phrase (let’s be honest, just stating you have “programmers” or “developers” doesn’t carry the same prestige as “software engineer”). As soon as one company begins using “software engineer” to describe its employees, the floodgates are opened for other businesses to do the same. The term quickly becomes meaningless.

Software development is still a relatively young industry and is one that has evolved quickly during its short history. So, it’s not surprising that we have informally adopted varying terminology. But as our industry continues to mature with formal accreditations and recognition of skills, it’s important to be aware of what we’re implying with the words we use and titles we assign (especially when we begin appropriating terminology from existing disciplines for use in our own industry).

What I’d like to cover here is what we at Don’t Panic Labs believe a software engineer is and what we mean when we say it. Hopefully, this will force us to be more deliberate about its definition and highlight the differences that separate developers from software engineers. And even more, we can bring some awareness to what we call ourselves and the potential ramifications of doing so incorrectly.

A Little History

In the short history of software development, the act of writing code has suffered from a lack of awareness around what is truly required to create a functional, understandable, and maintainable codebase. In some ways, this is due to the industry seeing developers as just coders.

Since the beginning, we, as an industry, have had little understanding of what solid software design principles look like in action. We code it, release it, patch it, update it with new features, and then repeat the process. Code and fix. Code and fix.

But that all comes with a cost. In the 1980s, a few people began advancing ideas that software development should be treated as a discipline like any other engineering-based vocation. Sadly, it hasn’t caught on as quickly as it should have.

Regardless of how far we’ve come, we still have an identification problem when we label all developers as “software engineers”. It’s not only unfair to actual engineers; it also implies something that is not true.

As we at Don’t Panic Labs see it, you have four basic levels: the developer, the software engineer, the senior software engineer, and the lead software engineer/software architect.

The Software Developer

The definition of a software developer is the widest of the four we’re covering here. At the risk of sounding like I’m reducing the role of developer, what I’m listing here is what I consider the bare minimum of what we’d consider a developer at Don’t Panic Labs.

We see the act of writing code as analogous to the process of manufacturing a particular product. While it requires specialized knowledge and skills, it does not involve any design. Developers are the construction workers, the heavy lifters, the folks who bring the ideas into reality.

A developer is a person who understands the programming languages used, has a grasp of coding best practices, and knows the tech stack of their project’s requirements. However, they are not the people who created the plans and thought through the various scenarios that could arise.

The Software Engineer

To me, the difference between a developer and a software engineer is analogous to the difference between the person working on the factory floor (the developer) and the manufacturing supervisor. While the former does the actual work, the latter ensures the directions and plans provided for manufacturing are correctly executed and that any ambiguities or adjustments are identified and dealt with. To perform in this role, it is essential to be able to communicate and collaborate as well as possess a working knowledge of design principles and best practices.

In our world of software, the role of software engineer includes all of the software developer’s skills as well as consistently demonstrating great attention to detail. This person also looks for opportunities to use and execute appropriate engineering techniques such as automated tests, code reviews, design analysis, and software design principles.
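To make the first of those techniques concrete, here is a minimal, hypothetical sketch (the function and its behavior are invented purely for illustration) of the kind of pytest-style automated test a software engineer reaches for as a matter of course:

```python
# Hypothetical example: both the function under test and the test are
# invented to illustrate the technique. Automated tests like these run
# on every change, catching regressions long before a customer would.

def apply_discount(price_cents: int, percent: int) -> int:
    """Return the discounted price in whole cents, rounding down."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price_cents * (100 - percent) // 100

def test_apply_discount():
    # Typical case, boundary cases, and rejection of bad input.
    assert apply_discount(1000, 25) == 750
    assert apply_discount(1000, 0) == 1000
    assert apply_discount(1000, 100) == 0
    try:
        apply_discount(1000, 150)
        assert False, "expected ValueError for an invalid percent"
    except ValueError:
        pass

if __name__ == "__main__":
    test_apply_discount()
    print("all tests passed")
```

The value is less in any single assertion than in the habit: encoding intent as executable checks so the design's assumptions stay verified as the code evolves.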

From a maturity standpoint, the software engineer is also someone who recognizes when their inexperience is a factor and proactively reaches out to more senior engineers for assistance or insight. This is where one’s ability to collaborate and communicate should be evident.

The Senior Software Engineer

A senior software engineer must be all that is listed above, but also be someone who can mentor younger programmers and engineers (e.g., young hires, interns) to improve their skillsets and understand what goes into design/architecture decisions. They also help fill in the gaps left by an education system that focuses on programming skills and not software design knowledge. To do this, they need the ability to articulate and advocate design principles and recognize risks within a design and project.

A senior software engineer is also expected to be able to autonomously take a customer problem all the way from identification to solution and is uncompromising when it comes to quality and ensuring integrity of the system design.

Senior software engineers are your best programmers. But just because a person is a great programmer does not mean they can decompose a system into smaller chunks that can then be turned into a solution. That is the main requirement of a lead software engineer / software architect.

The Lead Software Engineer / Software Architect

What makes a lead software engineer/software architect is the thinking, the considerations, the design behind the code, the identification of tradeoffs, and all of what must come together before any code is written.

Being a lead software engineer is not so much about “fingers on the keyboard”. Their focus is on the design of the system that will get fingers on the keyboard later. But before any of that happens, the lead software engineer must scrutinize and evaluate all big picture elements. The design that comes from this effort should produce a clear view into all the considerations, anticipate most of the problems that could come up, help ensure that best practices will be adhered to along the way, and, maybe most critically, enable the system to be maintainable and extensible as new and changing requirements appear. In other words, the design must enable sustainable business agility.

Once the system design (or software architecture) is in place, it is the lead engineer who owns the “big picture” and ensures that the development and detailed design decisions that ensue are consistent with the intent and principles behind the original design. Without constant nurturing and maintenance of overall design integrity, the quality of the system design will rapidly decay. I strongly believe this responsibility should not be shared but rather should be owned by the lead software engineer.

The Future

So now that I’ve laid out how we at Don’t Panic Labs view developers and engineers, there’s something else we believe is also important and must be addressed if we are to move forward as an industry: education.

I often speak about how we aren’t educating our future developers and engineers. I’ve even written a post about how we’re failing. The problems we will be solving in the future (or now, one could argue) will require more than the “construction worker” developers we’re cranking out of our educational institutions today. While we need our developers to construct the code, we need more engineers equipped and educated to effectively think about that code before a single keystroke lands. As the complexity of our systems increases, so does the need to anticipate the problems we may come up against and to build sustainable systems.

This is no different than if – using building construction as an analogy – we were only educating our carpenters, plumbers, electricians, and welders without considering the need for engineers who create the plans for these folks to follow. As technologies and materials improve, engineers must be able to leverage these advances and make the necessary adjustments. Otherwise, workers will be left using the same old methods or, worse, making their own decisions in a vacuum and possibly creating a worse (or dangerous) situation for everyone. Without the high-level vision and insight provided by an engineer, the whole industry is held back.

As our world continues to run at full speed toward a future more reliant on ever-evolving technologies, the need for properly-trained engineers (who are educated based on a standardized set of requirements) will continue to grow.

And with that comes the need for a better understanding of the roles that comprise development teams. Whether one’s role is a developer, software engineer, senior software engineer, or software architect, the distinctions are as important as the work they do. We already know that following the current path inevitably leads to chaos.

That’s something our world cannot afford.

I was talking with a colleague recently who is part of an organization that has recognized that their development processes, culture maturity, etc., was holding them back. The conversation went something like this… “As a developer, I would see Don’t Panic Labs as a dream job given the emphasis and adherence to consistent software design and development principles as part of the culture. What advice would you tell someone who wanted to achieve the same within their organization?”

Oddly, I had not been asked this question directly before. The short answer I gave him was that they had to find a passionate, strong leader who could come in, convince everyone of a better way, and get everyone on board. This leader would need to be maniacal about insisting on no compromises when it comes to the processes and principles that will achieve these cultural changes.

An analogy that came to mind was that of a college football team (which seems natural given my location here in Lincoln and being a Husker football fan). Imagine you have assembled 22 talented college-level football players (11 for the offense and 11 for the defense). Each of these players has experience playing the game, has had success on previous teams, and understands core concepts of strategy, teamwork, and technique. They have probably watched quite a bit of football and seen how high-performing teams and players play. They probably have some athletes they look up to and aspire to emulate on the field.

Now, imagine you sat these 22 players down on a Monday and told them they would be playing Nebraska on Saturday. They need to do whatever they need to do and, oh, by the way, there is no coaching staff. Can you imagine what the result would be? At times you might see something that resembled football but, in general, it would look chaotic. Despite having similar player talent, Nebraska would absolutely pummel them. Why? The quality of coaching at Nebraska enables the Huskers to play as a team and within the context and vision of how they execute their offense and defense. Our team, without any clear vision, concept, or plan for offense and defense, would likely look like 11 individuals who are not on the same page – which is exactly what they are.

Now, imagine you took a group of talented programmers with similar levels of experience and understanding and did the same thing: asked them to design, build, and maintain a complicated system. Software development is a team sport, and without an experienced leader or coach with a clear vision who can get the team performing together and within the constraints of the system’s design, you will get a system that looks like it contains the designs and ideas of everyone on the team – chaos.

Not every football coach can be successful at every level of football. The higher the level (from youth leagues to professional), the more capable the coach and coaching staff need to be to get the most out of their talent and to compete. The same is true with software systems. As systems become more complex, the technical leadership must bring more discipline, organization, depth of understanding of key concepts, experience, and attention to detail.

Building a high-performing team that can successfully design and create complex systems with conceptual integrity of design and sustainable agility requires an experienced, qualified leader who can both design complex systems as well as lead and inspire the team. In other words, you need a head coach. Without this, you will have chaos.

We are not a lifestyle company.

This is a sentiment I made clear at a recent Don’t Panic Labs company meeting. It’s not that we have anything against companies where employees settle in for their entire careers, but that’s just not who we are (or ever aspire to be).

I’m not saying anything new here. This was pretty much implied when we formed Nebraska Global/Don’t Panic Labs back in 2010. We never intended to build a place where people spent all of their working lives. We wanted to grow the entrepreneurial spirit in developers and engineers by exposing them to what it takes to launch a software product/company, all with the expectation that some of them would pursue these types of endeavors.

As I was drafting my Nebraska Global reflection post, a few software engineers came to mind as I was thinking back to our early days: Paul Bauer, Nick Ebert, Rich Kalasky, Nate Lowry, Cody Leach, and Spencer Farley.

Most of them were with us from pretty much the beginning and were key to getting Nebraska Global/Don’t Panic Labs off the ground and helping build what became EliteForm, Beehive Industries, and Ocuvera.

As new opportunities were presented to these guys, they listened to the inner voices calling them to new and challenging roles outside our walls (figuratively, since Nate is part of Travefy and Paul is part of Ocuvera, both in or adjacent to our location). Seeing them go was difficult (the talent they possess doesn’t just walk through the door every day). However, I understood that this comes with the territory and fits exactly into what we wanted to see happen with Nebraska Global/Don’t Panic Labs.

We reached out to these guys to have them share what they’re doing today and what they’ve learned along the way.

Paul Bauer, Product Manager at Ocuvera

My main goal is to build the bridge between the development team and the outside world, to make sure that everything we’re working on is the right thing and is solving problems for our customers.

As is typical for an employee of a young company, my key responsibilities are much broader than my role. I also wear the hats of sales, support, customer training, project management, marketing, and more on a regular basis. These are incredibly diverse and force me to speak several different ‘languages’ to a variety of stakeholders throughout the day.

I came to DPL as a new college graduate, and DPL provided the opportunity to experience real-world product development first hand as part of a team. Through that experience, I learned what it takes to produce real-world software. Through the attention given to software design and architecture at DPL, I learned a framework (iDesign) that I knew I could replicate anywhere to build just about any software product. That knowledge alone was a huge confidence booster which, I think, put me on a whole new career trajectory.

One of the biggest things that helped me into my current role was the variety of experiences DPL provided: I was able to sit in on sales calls, visit customer sites, learn about user interviews from UI/UX engineers, attend conferences, contribute to marketing copy, and so on. Nobody told me, “that’s outside of your responsibility.” Instead, they encouraged me and gave me more opportunities. My definition of software expanded beyond just code to include everything else that goes into running a software business. I discovered that those non-developer activities were really fun for me, and I missed them when I sat at my computer coding all day. All this made the move to my current position a smooth and natural one.

My advice to aspiring software developers is that working on the same team as more experienced people is a super-effective way to learn. Don’t view your co-workers as competition; view them as people you can learn from.

Nick Ebert, Director of Engineering at Spreetail

I focus on the health/well-being and growth of the Spreetail Engineering team, so I do a lot of recruiting and development of people. I also work to remove technical barriers that are preventing the growth of the company’s software.

I was introduced to a breadth of different technologies and processes during my time at DPL, which has helped me quickly understand new systems and software architectures. DPL team members are very authentic, and that trait has stuck with me in my current role.

I would advise aspiring software engineers (at any level) to write code every day. Simple, right? One of the things that makes software engineering so unique is that there is very little overhead to building something. You can go from vague idea to running software in a matter of hours.

Rich Kalasky, Applications Development Architect at Buildertrend

At a high level, I am leading the technical direction for Buildertrend on our web application and services. Day to day, I help break down new features, mentor developers, and generally help set the tone for our department.

Working at DPL gave me a deep introduction to the .NET stack and well-designed software in general. After joining Buildertrend, I was able to quickly ramp up and start building features on a similar tech stack. As my role has expanded, I find myself reaching back to my experiences at DPL and applying them to new problems constantly.

To engineers coming out of school, I would suggest finding a place where you are the dumbest person in the room. Become a sponge and soak up everything you can from the smart people around you. DPL was this place for me, but there are plenty of others (including Buildertrend!) that can provide a similar experience. There was so much accumulated software knowledge in the building every day that it was impossible NOT to learn something.

Nate Lowry, Software Engineer at Travefy

I am a software developer for Travefy. We’re a small team, so I work on everything from the web frontend to the backend and services, and everything in between. I led the design and development of our Public API and am in charge of our plethora of mobile apps. I’ve also been involved as a technical contact in some of our enterprise sales.

At DPL I learned to do just about anything. It was a fast-paced environment where we didn’t use titles or hierarchy; we just got stuff done when it needed to be done. Working with lots of different clients also helped me understand the importance of communication, transparency, and feedback.

For developers and engineers coming out of school, I say go try something. If it doesn’t work out, try something else. Work hard. Have fun. Be happy. Make time for things outside of work.

Cody Leach, CTO at LeverageRx

I’m the CTO at LeverageRx, an online platform that helps doctors make smarter financial decisions. I’m currently the only developer on the team, so my responsibilities include everything under the product umbrella. It’s up to me to keep the site running smoothly, to make technology decisions as the business grows and evolves, and to keep a pulse on the business and foresee upcoming technology needs. Most importantly, I work closely with my business partner to make sure I’m designing and building our platform to solve the right needs of our customers.

I started at DPL as an intern my senior year of college, which eventually led to a full-time position with the company. I had the luxury of being a full-stack developer at DPL, but I was also the lead UX designer on most of the projects I was involved with. When I started as an intern, I didn’t really know much about developing “in the real world,” and I knew even less about UX and UI design. I learned a ton about working on a development team, what is (and isn’t) great software architecture, how to balance business needs with product needs, and how users interact with your product and how that impacts the business, among other things. In my current role at LeverageRx, I am constantly making product decisions that have a direct impact on the business. The development structure and discipline I learned right out of college has been a huge factor in helping me make these critical decisions.

My advice for aspiring software engineers is two-fold:

  • It’s okay to fail. I can’t tell you how many times I’ve built crappy software or designed a horrendous user experience. Without these failures, I wouldn’t have learned as much. The best part about embracing the failure mindset is that it keeps me learning. If I don’t fail, I’m not learning.
  • Pay attention to how people use your software. If your users can’t figure out how to use your software (or if they hate using it), then what’s the point of making software? Great software AND great user experience is where the real magic happens.

Spencer Farley, CTO at ScoutSheet

I am currently CTO at ScoutSheet. My responsibilities include everything related to creating the product. I teach interns, run user tests, program, formulate architecture, glean requirements from user feedback, collaborate with the business people to brainstorm and test high-level direction based on market analysis, and handle QA. The list goes on.

DPL prepared me for this kind of position by giving me ownership over products and teams. In particular, the Backlog Management project taught me a lot about the end-to-end software process. Project Managers Todd Guenther and Lori McCarthy educated me on project management and were candid customers. I felt the sting of delivering a sub-par product and how much testing processes can improve a product. I got to architect the system, feel what didn’t work, and fix it. I got to experience managing a team and balancing teaching versus getting stuff done. Doug also met with me regularly and we conversed about architecture principles. We also read books together, which was very formative to my dev skills.

My advice to graduating software engineers would be to realize there is still much to learn. Working in a commercial environment is a different beast, and it will take time to learn the many concerns outside of syntax and performance (e.g., maintainability, reliability/testability, changing requirements, …). The best place to get started is to pick up the classic literature of the field. The Mythical Man-Month by Fred Brooks is a good place to start.


This post was originally published on the Don’t Panic Labs blog.

April marked the seventh anniversary of the launch of Nebraska Global. While we spend a lot of time internally reflecting on our progress and strategy, we have never really shared those reflections in a public forum or blog post.

Like a lot of startups, we tend to be focused inward and not as concerned with how people on the outside might be viewing us or the perceptions they may have. I feel it is time to be more transparent about some of the inner workings of Nebraska Global and the constituent companies.

Getting Nebraska Global Off the Ground

As stated on our website (which is relatively unchanged since we started – yikes!), we launched Nebraska Global with a couple of goals in mind:

  • Establish an investment fund and combine these dollars with a product development team to build software products and companies. Rather than just doing a fund, we felt we could significantly de-risk the investments if we were the ones responsible for product development execution.
  • Develop young software engineers / entrepreneurs who would build the next generation of Nebraska technology companies

Prior to launching Nebraska Global, we spent 12-18 months fundraising. This always takes longer than you would think, but we finally wrapped up all of the fundraising in the first year of the company.

Shortly after launching, we connected with Archrival to help us brand the software development side of our business. This is where the name “Don’t Panic Labs” came from.

Many people have been confused about the relationship between Don’t Panic Labs and Nebraska Global. I can understand their confusion. In the first 3-4 years, Don’t Panic Labs was more of an abstract concept or cultural identity that we used to brand ourselves on the product development side and to help with recruiting. We felt it was important to do this so we could differentiate the product development teams and culture from the more “businessy” fund and investment side. This made it much easier to connect with young folks and students.

Initial Plan

The model for Nebraska Global does not have many peers and is somewhat of a unicorn. It is an evergreen fund (indefinite fund life) with both financial and non-financial objectives.

The initial board consisted of eight individuals. There were four members representing the largest investors, three management members (Steve Kiene, Patrick Smith, and myself), and one member appointed by management.

One of the guiding objectives established by the board early on was that we had no interest in fully investing the total amount of the fund ($37.3 million). Instead, we agreed upon an amount we would invest through 2015 and then leave the remaining amount as uncalled capital that would be used for follow-on investments, etc.

With this basic framework in place, we moved ahead full steam on a variety of projects – some wholly owned and some as minority investments.

First Three Years

The core of the team that launched Nebraska Global had been around for quite some time and had built other software companies, so we felt like we were able to hit the ground running.

Despite this, the first three years of Nebraska Global were a pretty hectic time as we were simultaneously building and launching four separate companies and products: Icora, EC3H, Beehive Industries, and EliteForm. We spread our development resources across these efforts. If it had not been for the deliberate steps we took to unify our processes and programming models, we no doubt would have had a mess on our hands.

Thankfully, we went into this venture knowing that our product development team would need to be a flexible resource that would ebb and flow from project to project. We were prepared.

Don’t Panic Labs as a Company

About the time we were finishing up initial releases of EliteForm and Beehive in 2012, we began getting inquiries from some of our investors and others in the community wondering if we could help them out on the product development side. Initially, we declined these opportunities as they did not fit our model of building products and companies that we were investing in.

Over time, it occurred to some of us that the idea of Don’t Panic Labs as a flexible resource for our internal projects and companies might be a fit for external companies that also needed access to a seasoned product development team they could flex to meet their own needs. The availability of some of our development team allowed us to “dip our toe” into this with some carefully selected projects, which convinced us that we were on to something.

This effort culminated in a joint venture we established with National Research Corporation in the summer of 2013. This company (NRC Connect) was located in the Don’t Panic Labs offices, and it was staffed by Don’t Panic Labs developers and business development and support teams from National Research Corporation. The idea was to prove out the business model and build a product as a joint venture that would eventually be re-acquired by NRC (which occurred in the summer of 2015).

Emboldened by the contract development work and the NRC joint venture, we tasked Brian Zimmer with analyzing the market opportunities and refining a business model that would allow Don’t Panic Labs to transition from an abstract idea into an actual company, one able to do contract product development as well as pursue joint ventures where our technical expertise and startup experience could help de-risk innovation efforts.

On January 1, 2014, Don’t Panic Labs was formally launched to provide product development services for companies ranging in size from startups to publicly-traded corporations. Its business models run the gamut from traditional time-and-materials contracts to dev-for-equity relationships. Since then, it has grown from a team of three (Brian Zimmer, Matt Will, and Cole Easterday) to 25 today.

Transition

As 2015 came around, we began preparing for the end of our initial investment phase. During the first five or so years, we made investments of cash or in-kind services in the following companies:

  • Icora
  • Beehive Industries
  • EliteForm
  • EC3H
  • Ocuvera
  • 42
  • Boutique Window
  • Travefy
  • NRC Connect
  • Prairie Cloudware

With a pause in the use of fund capital for investments, it made sense to us to turn our focus inward to help drive the success of our existing investments. Patrick Smith took over as the CEO of Beehive and has been focused there since 2015. Steve Kiene turned all his attention to Ocuvera in order to get that remarkable technology out to market and generate sales. I have turned my attention to helping grow Don’t Panic Labs and building a superb team that is able to solve hard problems while following these core values:

  • Empathize then own it
  • Build smart
  • Deliver with pride

With the 2015 “soft landing” executed as designed, traditional fund operations were paused. As such, we have slimmed down the headcount on the fund side to reduce expenses. However, the fund – as well as the folks we still have there – remains a shared resource that supports the operations of our portfolio companies.

Pausing fund operations also meant we could (and should) move to a more traditional form of board governance, which we completed last fall. That resulted in a smaller board that now includes three of the original non-management board members, along with Steve Kiene and myself.

Looking Forward

I don’t believe Nebraska Global is done making investments. In fact, we have never really stopped. For now, our investments will be in the form of dev-for-equity done through Don’t Panic Labs. In 2016, Don’t Panic Labs invested more than $400,000 in equivalent in-kind services across three startups and one joint venture. This year, we have already made in-kind investments in two additional startups. What this means is that we are providing product development services at reduced/no charge (forgoing some or all of our consulting fee) in exchange for upside in the success of the company or product (either through equity or revenue sharing).

This model fits well with our established startup culture: our development team trades some near-term profit-sharing for the potential of greater returns down the road. In these instances, both the Don’t Panic Labs team and the startup have even greater aligned interests through “skin in the game.”

All of us who are investors in Nebraska Global obviously want to see a return of cash from these investments, but the capital invested in Nebraska Global is patient. I see a point in time down the road where, as our existing portfolio matures and returns are distributed to investors, the Nebraska Global board may decide that some of those funds should be retained in the company to be used to begin investing dollars along with our in-kind services.

Wrapping Up

One of the reasons I decided to write this reflection was that we hear, from time to time, some comments that reflect confusion about what Nebraska Global is, where we have been, who we are right now, and where we might be going. It is my hope that some of this confusion can be cleared up here.

Going forward, it is my intention to publish a reflection on an annual basis to give people better insight into what is going on in our heads and in our space. People who know me know that I am pretty candid and not very practiced in the art of “spin”.

If there is something I have said that creates questions or if there is something I did not discuss that you are curious about, I encourage you to leave a comment below and I will do my best to provide a candid “no-spin” response.


I worry daily about the software we rely on in our lives. One need only look at the problems our airlines have had and the disruptions they have caused. I feel there is a significant risk that we will be crushed under the weight of the technical debt in this software. As an industry, we have got to start getting ahead of the mismatch between the complexity of the problems we are trying to solve and the approaches we are taking to manage that complexity. We need only look back to the Healthcare.gov debacle to see our failures on the grandest stage. Interestingly, our ability to deliver large and complex projects successfully still hovers around 5%, which is appalling.

It is 2017, and we still see many instances of teams and projects that, while they may be leveraging agile processes for project management, have no structure for translating requirements into a software design. What’s more, we still see very few organizations effectively leveraging proven practices such as test-driven design, automated integration testing, and code reviews. I estimate that we have interacted with over 100 engineers from dozens of organizations over the last seven years, and only a few (fewer than five) were working in environments where test-driven design or automated unit/integration testing was part of their development culture.

Why is this still happening in the face of all the benefits these structures and practices provide? I believe it is in large part due to how we train and develop software development professionals.

The Role of Formal Education

What role is formal education playing in this? Unlike my generation of software developers, most of the people we see entering the field today are coming from an educational background that is Computer Science-related (either a computer science/engineering degree or minor). I have said for years that the way we are training software developers (and engineers for that matter) in our universities is flawed.

I am not alone in my viewpoint. Thought leaders like David Parnas, Steve McConnell, and Fred Brooks (“A scientist builds in order to learn; an engineer learns in order to build.”) have all written about this. Research papers have been published based upon surveys showing the gap between what is taught in school and what is required in practice. We are asking educational institutions to assemble a single program of study that satisfies accreditation and simultaneously prepares people for either a career in academic research or a career in our industry. How is this any different from eliminating mechanical engineering as a degree program and asking a physics department to train people to build bridges, aircraft, robots, industrial machines, etc.?

It seems absurd, right? But that is exactly what we are expecting the Computer Science programs of our universities to do. The reality is, these programs are more focused on in-depth coverage of the body of knowledge of computer science at the expense of the types of curriculum a true engineering program would have. Most programs I see have a single course on “software engineering” and no courses on best practices for software architecture or software design and development.

Sad, but true.

I am by no means indicting these programs for a failure to prepare students for the real world. The vast majority of kids coming out of college with these degrees find employment so the universities might not see that there is a problem. While I believe these students are underprepared, we as an industry need to demand a better product from our higher education system.

Until something changes, we will continue to see students enter the workforce with little or no understanding of what a mature software development team looks like, what the design criteria should be for decomposing a system into a coherent architecture and design, how to effectively evaluate design tradeoffs against a set of accepted design principles, and the benefit of following best practices for the design and development of their code.

If students are lucky, they will enter the workforce as apprentices and land in an organization that has a mature and strong software design and development culture. Good luck. My experience is that these types of organizations are rare.

The Role of Professional Development

What role is professional development playing in training people once they get into the workforce? Tech conferences have historically been focused on introducing new technologies and frameworks. Occasionally there are presentations covering patterns and best practices, but I suspect most of the session presenters come from organizations that have a disciplined maturity to their development processes. A lot of what they talk about assumes that the audience can take advantage of these frameworks, which are designed to enable the best practices and patterns the presenters themselves follow.

I suspect what really happens is either the developer walks away from the conference frustrated that they will never be able to leverage the tools, or the developer “implements” the tools and frameworks with less than optimal results. I have seen first-hand the results of teams implementing a WPF MVVM framework after working through the quintessential tutorial. They jump into using the framework without really understanding the MVVM pattern and are unable to make the nuanced decisions required to fully leverage the advertised benefits of developing a UI layer on top of such a framework (separation of UI and business logic, testability of view models, etc.). Our own experiences have made us wonder whether we would be better off with good ol’ Windows Forms.
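The separation the pattern promises can be illustrated without any UI framework at all. The sketch below is a toy example in plain Python (the `GreeterViewModel` name and its properties are invented for illustration); it shows why a view model that holds no UI references is trivially testable, which is exactly the benefit teams miss when they adopt a framework without understanding the pattern:

```python
# A toy, framework-free sketch of the MVVM idea (names are hypothetical):
# the view model holds presentation state and logic but no reference to
# any UI widget, so plain unit tests can exercise it directly.

class GreeterViewModel:
    def __init__(self):
        self.name = ""  # in a real UI this would be data-bound to a text box

    @property
    def greeting(self):  # and this would be data-bound to a label
        return f"Hello, {self.name}!" if self.name else "Hello, stranger!"


vm = GreeterViewModel()
assert vm.greeting == "Hello, stranger!"
vm.name = "Ada"
assert vm.greeting == "Hello, Ada!"  # presentation logic verified, no UI running
```

The nuance a tutorial rarely teaches is keeping the view model this clean; once widget references leak in, the testability benefit evaporates.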

Bottom line: it seems like we as an industry are completely ignoring the need to coach up our development teams on these fundamental principles and practices.

What is the solution? I feel we must address both the root cause of this problem (our university education programs) and the way we are developing software developers already in the industry.

Fixing Our Education System

Changing the way we train software developers is not a new idea. Thought leaders in our industry such as David Parnas and Fred Brooks (especially his chapter discussing “Where Do Great Designers Come From?” in “The Design of Design”) have been arguing for this type of change for quite some time.

In 1999, Parnas wrote a paper proposing a curriculum designed to address many of the concerns I have outlined above. In it, he argues for separating the education of software engineers from computer scientists, allowing the software engineers to be trained in the style of the other engineering disciplines and emphasizing that being a software engineer is more than just being a good programmer.

Ever since I joined the advisory board for the University of Nebraska-Lincoln (UNL) College of Computer Science & Engineering, I have been advocating for a dedicated B.S. in Software Engineering that would be differentiated from not only the computer science program at UNL but similar programs throughout the country.

The idea for this began gaining traction a couple of years ago and eventually led to UNL formally launching their B.S. in Software Engineering last fall. While many aspects of this program are still being developed, it is encouraging to see industry best practices included in the performance criteria for the students and industry practitioners invited to speak on the state of the practice.

I have often spoken with faculty about the need to establish behaviors and norms that are consistent with industry best practices, such as making test-driven design something students understand and use naturally throughout their coursework. It is also very encouraging to see unique software engineering courses targeted at providing students the background and experience that will better prepare them to enter the workforce.

I believe the UNL software engineering program has the potential to be a model for an improved approach to preparing students for careers in software development.

Creating True Professional Development Opportunities

If everyone had a Robert Martin or Martin Fowler on their team, then we would have nothing to worry about. Unfortunately, many (if not most) teams do not have people with the depth of understanding and experience (or even the time) to train up their teams. We need to help these teams along their path to maturity by providing meaningful and effective professional development opportunities. Somehow, we “mere mortals” need to be equipped with tools, technologies, processes, and patterns that can help us be successful without being experts. Our goal should be that the journeyman software developer is able to effectively adopt these patterns and practices.

To be clear, I am not talking about traditional “code schools” as a solution. We can’t send people to code schools and expect them to be productive in a true software development team. Where will they be getting the background and experience on concepts like encapsulation? Information hiding? Loose coupling? SOLID principles?

These code schools can, at best, produce people who can augment a development team as a junior apprentice as they are certainly less prepared for the real world than those coming out of traditional computer science programs. These are the folks working on the “factory floor”. At least the computer science grads have some exposure and working knowledge of a broad set of concepts.

The answer to me is to build upon the foundation these computer science grads have gotten and provide the type of education that can help them make the connection between engineering concepts they may have learned and the benefits they provide. I’m imagining a program that provides hands-on experience in order to successfully apply these concepts in real-world scenarios, allows them to see how these patterns and practices will benefit them, demonstrates that software “rot” is not inevitable in every code base, and shows that “funability” is something every development team should experience.
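As a tiny example of the kind of connection I mean, here is a hypothetical Python sketch (all names invented for illustration) of what loose coupling and information hiding buy you in practice: the reporting logic depends on an abstraction, so the storage can change, or be faked in a test, without the report code ever being touched.

```python
from abc import ABC, abstractmethod


class OrderStore(ABC):
    """Abstraction the report logic depends on. Information hiding:
    callers never learn how the totals are actually stored."""

    @abstractmethod
    def totals(self):
        ...


class InMemoryOrderStore(OrderStore):
    def __init__(self, totals):
        self._totals = list(totals)  # private detail, hidden from callers

    def totals(self):
        return list(self._totals)


class ReportWriter:
    def __init__(self, store):  # dependency injected, not constructed here
        self._store = store

    def grand_total(self):
        return sum(self._store.totals())


# Swapping in a database-backed store (or a test fake) requires no change
# to ReportWriter: that is the loose coupling these principles buy you.
report = ReportWriter(InMemoryOrderStore([20, 5]))
assert report.grand_total() == 25
```

A graduate who has only heard these terms in a lecture rarely connects them to the day-to-day payoff: cheaper tests, safer changes, and code that rots more slowly.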

This type of combined lecture/hands-on skill development is exactly what we are trying to build at Don’t Panic Labs through our Software Design & Development Clinics. These courses are focused on reinforcing concepts that software developers are familiar with and making them concrete and actionable. We show students how to apply these concepts, practices, and patterns in real-world scenarios of both new projects and, more importantly, legacy systems that have technical debt in the architecture, design, and code.

If we can instill people with the confidence to begin applying these practices and patterns on the maintenance of their legacy systems, then we will have achieved what we set out to do.

Wrapping it Up

What’s it going to take to make progress in this area? I know we at Don’t Panic Labs have attempted to do our part by engaging with our university (UNL) and developing professional development programs for engineers. Maybe this is how it will grow, through grassroots efforts.

My hope is we, as an industry, can get ahead of the curve before some catastrophic event occurs that results in a massive upheaval in our education system. That would be tragic. I worry that we have a lot of people who, while doing the best they can, are crippled by a lack of understanding or confidence to do what’s right and are making poor judgments that could have a significant impact on their companies’ ability to compete and innovate.

Let’s mobilize and bring forward those who have been left behind by the folks who have already adopted these best practices and patterns. Let’s raise the tide in a way that floats everyone’s boats. If we can do this, imagine where we could be as a community and a profession. I encourage you to share your thoughts below.


Note: This post was co-authored by Chad Michel. The rest of this 5-part series can be found here:

Part 1 – What and Why
Part 2 – Leverage Your Leadership Roles
Part 3 – Maximizing Productivity
Part 4 – Processes Can Be Fun

We’ve now come to the last post in our series on funability. We hope that along the way you have been inspired to identify opportunities that will enhance the way your teams work (and grow) together.

So far, we have covered the what’s and why’s of funability, how to leverage your leadership roles, ways to maximize the productivity of your engineers, and how processes can achieve funability. In this final post, we will discuss how various layers of testing can produce quality software and – in turn – provide funability for the team.

Code Reviews

We believe code reviews are the single best way to improve the quality of your code base and to ensure the integrity of the design is maintained. Hands down. But while the idea of code reviews isn’t new to us, we will admit our adoption of them was probably slower than it should have been.

When we were using TFS back in the good old days (without all the branching and merging we do today), code reviews were painful. They were time-consuming, and it was difficult to get the “big picture” of the system. This allowed the occasional bug or design inconsistency to slip through, especially when we were under the gun to get a release out the door.

Now with our adoption of GitHub and pull requests, code reviews (assuming they contain a reasonably small number of files and changes) are both manageable and effective. It’s easy to review every pull request, and that helps us to find even the smallest issues (or seeds of potential issues). They also ensure that we’re remaining consistent with our software architecture and design rules.

Aside from the quality aspects, code reviews also provide a mechanism through which mentoring and coaching can occur. There’s very little that can replace the face-to-face conversations where senior engineers talk through code with younger developers and share insights from their experiences.

Continuous Integration with Automated Integration and Unit Tests

One area in which we are continuing to grow and evolve is the way we test our code. We have seen tremendous success running unit tests against every pull request before it is merged. While it may sound like a lot of extra work, it really does make a difference.

It’s easy to say your code works when it just runs on your machine (aka, WOMM – Works On My Machine). Showing that it runs just as well on the build server is something else entirely. Testing at different milestones along the way instills confidence in your code and your systems as a whole.

Putting off testing until several merges have been completed can set a proverbial snowball of problems rolling down the hill, and it’s a snowball that nobody knows exists until it’s too late. By taking the time to test every pull request, you can stop the snowball and prevent your system from becoming the one that nobody wants to work on (and we’ve all worked on those before).

Integration tests focus on issues related to integration of individual classes, modules, and services. This can uncover issues around coupling, invalid assumptions, etc. Not doing integration tests means you will almost always have a “stabilization phase” at the end of your development where you are working out bugs related to integration of the software after you are supposedly “feature complete”. This really sucks.
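To make the distinction between the two kinds of tests concrete, here is a toy sketch (all names invented for illustration; this is not code from our products) of the same logic covered by a unit test, with its dependency mocked out, and by an integration test that wires in a real in-memory database to catch the coupling and schema assumptions the unit test can't see:

```python
# Toy example: unit test vs integration test over the same billing logic.
import sqlite3
from unittest import mock


class OrderRepository:
    """Data access: the piece an integration test exercises for real."""
    def __init__(self, conn):
        self.conn = conn

    def amounts_for_customer(self, customer_id):
        rows = self.conn.execute(
            "SELECT amount FROM orders WHERE customer_id = ?", (customer_id,)
        ).fetchall()
        return [amount for (amount,) in rows]


class BillingService:
    """Business logic: the piece a unit test isolates."""
    def __init__(self, repo):
        self.repo = repo

    def amount_due(self, customer_id):
        return sum(self.repo.amounts_for_customer(customer_id))


def unit_test_amount_due():
    # Unit test: mock the repository so only the service logic is under test.
    repo = mock.Mock()
    repo.amounts_for_customer.return_value = [10, 15]
    assert BillingService(repo).amount_due(1) == 25


def integration_test_service_with_real_database():
    # Integration test: real schema and real queries, so a renamed column or
    # a bad assumption between the two classes fails here, not in production.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (customer_id INTEGER, amount INTEGER)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10), (1, 15), (2, 5)])
    assert BillingService(OrderRepository(conn)).amount_due(1) == 25


unit_test_amount_due()
integration_test_service_with_real_database()
```

The unit test stays fast and focused; the integration test is the one that would have caught those "stabilization phase" surprises before feature complete.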

If nothing else, having a heavily tested code base will help you to sleep better at night because you aren’t dreading a support call.

In Closing

We hope that you have found this series to be useful or at least inspirational. But we should stress that there is no silver bullet to creating funability within your organization. To truly change your software development culture, you need to go beyond the cool spaces and agile/scrum processes. You need to change the way software is designed and created. Only an integrated view of these processes will get you moving towards funability.

And we should say that we have not fully achieved funability. While we have increased it greatly over the past 7 years, there’s still more to be done (and we imagine there will always be room for improvement). So when you begin to look for ways to bring funability into your culture, don’t expect to turn your ship around in short order. Any kind of culture change takes time.

But if you have a plan and are constantly reviewing your practices and processes, you will achieve new levels of funability. And your engineers (and your code base) will be better for it.

 

 

This post was originally published on the Don’t Panic Labs blog.

Note: This post was co-authored by Chad Michel. The rest of this 5-part series can be found here:

Part 1 – What and Why
Part 2 – Leverage Your Leadership Roles
Part 3 – Maximizing Productivity
Part 5 – A Layered Approach To Quality

In this fourth part of our series, we build on the what’s and why’s of funability, leveraging your leadership roles, and maximizing the productivity of your engineers by covering how development processes can bring more satisfaction into daily work and contribute to a rewarding workplace culture.

Design Identity (Have One)

The concept of having a consistent design identity goes back to the early days of Don’t Panic Labs. Since the beginning, our vision included having several projects going on simultaneously. This meant finding ways to avoid treating each new project as a unique design effort. We needed commonality.

When done properly, the design methodologies employed in every project (e.g., object orientation, service usage, micro-services, IDesign concepts) should be so similar that the projects look like they were all created from a single mind.

In his classic book The Mythical Man-Month, legendary software engineer Fred Brooks said, “I will contend that conceptual integrity is the most important consideration in system design. It is better to have a system omit certain anomalous features and improvements, but to reflect one set of design ideas, than to have one that contains many good but independent and uncoordinated ideas.”

In other words, given that we are building software that (hopefully) will be extended and maintained for years, having a system that has no shared or controlled design or, worse yet, where all design decisions are left to individual developers, will surely lead to software “rot.”

Clearly, there needs to be some sort of design. But too often, especially in the startup world, there is a push to move fast and skip the design step. While this may feel good at the beginning because it keeps the forward momentum, whatever benefits you think you are realizing will end quickly.

Martin Fowler addresses this exact phenomenon in what he calls his Design Stamina Hypothesis (see below).

In his experience, as well as ours, it doesn’t take long to begin reaping the benefits of good design. See that point where the “good design” and “no design” lines cross the “design payoff line”? That usually happens in a matter of weeks, not months. This is why we believe that if you’re working on a project that’s going to last longer than a few weeks, you’re better off investing in a software design/architecture exercise. That relatively little bit of effort also makes the experience of working on the code more enjoyable.

To sum up, here are some of the benefits we’ve realized by having a consistent design identity:

Testability. Perhaps we’re stating the obvious, but quality assurance is made much easier when a shared design identity exists. If several systems share the same methodologies, they naturally become much easier to test. A sort of rhythm is found by the engineers. They become instinctively aware of where possible problems may arise. They build the system with this knowledge in mind, avoiding many pitfalls that would typically ensnare a one-off system. But if a problem does arise, troubleshooting is typically quick. This keeps the project moving forward. It also keeps product quality high. And those make for happy engineers.

Flexibility. Not only does this mindset save time, it also pays off in environments (like ours) where engineers may be moved around. By using shared programming models, methodologies, processes, and patterns, it’s easy for us to take an engineer off one project and put them on another. We only need to spend a short amount of time showing them the architecture diagrams and discussing which services are implemented, and then we send them on their way. This makes for a very efficient environment that consistently keeps the “getting started” frustrations to a minimum.

Repeatability. This has helped us throughout our diverse portfolio of projects. Whether it is a motion capture and 3D imaging unit that tracks weight-training athletes (EliteForm) or a data-heavy infrastructure management system (Beehive Industries), shared design processes have helped these projects move much faster than if we had built each product from the ground up.

Practice Test-Driven Design

In a previous life, both of us worked together at an e-commerce company. The core software was not developed with funability in mind and it was – for lack of better adjectives – a train wreck. It was difficult to maintain and it was difficult to deploy. Basically, it was not enjoyable.

We came to the realization that a rewrite was required so the system could be more efficiently maintained in the future. While we initially didn’t want to take this project on, it quickly became one of the most enjoyable experiences we’ve ever had. We took what we had learned in the past and thoughtfully constructed the new system. We made unit testing a priority, and we had numerous tests around the entire code base. We were feeling good about it.

And we needed that confidence because one day, the old system just collapsed. It was a goner. That seemed as good a time as any to roll out the new system. The rollout was fairly painless, but that doesn’t mean it wasn’t scary. We didn’t think we were ready, but the number of tests we had constructed along the way instilled a lot of confidence. From that day forward, we’ve been sold on the idea of test-driven design.

One reason we believe so strongly in test-driven design is that it forces consumption awareness: you, as the test writer, are the first consumer of a class/service/method. It focuses you on the interface rather than the implementation. And being the first consumer makes you consider aspects you might not have before, such as decoupling the code to make it more testable and facing head-on areas you may have otherwise skipped because of their difficulty.

Implementing unit tests forces the engineer to be more empathetic about the code they are writing. Perspectives like that make life easier for the people who will maintain the code in the future. When you have to describe the intent of your code, you can be led to think differently about how you’re going about it. Sometimes the only way to really get your point across is to write example code, which is even better. Taking the time to document as much as you can lays the groundwork for an enjoyable system later. One of our favorite developer maxims holds true for tests as well as the code itself: “Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live.”
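As a toy illustration of the first-consumer idea (the function name and behavior here are invented for this example, not taken from our products), the test below was written before the implementation existed, which forced the interface decision up front:

```python
# Written first: this is the interface we wish existed. A pure function that
# takes plain data, rather than a method buried in a class that also talks to
# the database, because a test is much easier to write against the former.
def test_split_shipment_respects_box_limit():
    assert split_shipment([3, 8, 2, 5], box_limit=10) == [[3], [8, 2], [5]]
    assert split_shipment([], box_limit=10) == []


# Written second, to make the test pass: pack weights into boxes in order,
# starting a new box whenever the next weight would exceed the limit.
def split_shipment(weights, box_limit):
    boxes, current, total = [], [], 0
    for w in weights:
        if current and total + w > box_limit:
            boxes.append(current)
            current, total = [], 0
        current.append(w)
        total += w
    if current:
        boxes.append(current)
    return boxes


test_split_shipment_respects_box_limit()
```

Because the test consumed the function before it existed, the design conversation (what goes in, what comes out, what the edge cases are) happened before a single line of implementation was written.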

Testing also allows you to run “what if” games around your code. If you’re planning wide-reaching changes to your system, testing helps you proactively assess the impact. These exercises can be quite fun, especially if you’re the type of engineer who obsesses about performance and efficiency.

There’s Still More

In this post, we covered design practices that help achieve funability. In our next post, we’ll cover how various layers of testing can produce quality software and – in turn – provide funability for the team.

 

 


Note: This post was co-authored by Chad Michel. The rest of this 5-part series can be found here:

Part 1 – What and Why
Part 2 – Leverage Your Leadership Roles
Part 4 – Processes Can Be Fun
Part 5 – A Layered Approach To Quality

In this third part of our series, we are covering the importance of creating environments that maximize the productivity of your engineers. Here we are specifically talking about aspects of our culture and environment that are separate from the more familiar developer productivity strategies related to software design/construction and agile methods.

Of all the strategies we are discussing in this blog series, these are strategies we didn’t have in mind before building Nebraska Global and Don’t Panic Labs. It reminds us that we never really “arrive” as an efficient, fun, and productive software development culture. There are always things to learn and try, and it is important to always be reflecting, evaluating, and implementing new strategies to improve the level of funability in our organization.

Protecting Schedules

It’s easy to fall into the trap of thinking that all time is created equal. Two 30-minute sessions produce the same quality of output as one 60-minute session, right?

Nope.

One of the hard lessons we learned in our early days was the value of keeping schedules free so engineers could have long, uninterrupted sessions where they could just work. The concept of “unmuddying” schedules wasn’t even on our radars.

In the early days, we would schedule meetings willy-nilly. If someone thought a team should gather to discuss a particular topic, we put it on the calendar. And these meetings didn’t occur just once a day; they multiplied into a bad habit.

As you can imagine, our days became a mess very quickly. We had the early-company urgency that kept fires lit under us and we were okay with it. Until we recognized the cost.

One day we woke to the realization that our engineers were left with only small slices of time to work on their projects. Their day was relentlessly divvied up amongst all of their various meetings. It had become apparent this was not sustainable. We were robbing our engineers of the solid blocks of time they needed to knuckle down and be productive.

We used to make excuses, telling ourselves we were good multi-taskers and could just deal with it. But that just wasn’t the case. Our engineers would find themselves returning from yet another meeting and spending about 20 minutes just getting back to where they left off. The constant interruptions were hurting our efficiency. The cost was just too great.

So we stepped back and analyzed the situation. We found that if you give your engineers 4+ hours of uninterrupted blocks of time to focus, they will be more productive, they will enjoy what they’re doing more, and the team will have better software to show for it.

To make sure we don’t fall back into our former meeting-heavy mindset, we operate with the basic plan that we only allow meetings in the mornings. If some emergency comes up and we need to get together, we’ll make time for it (but it had better be a BIG emergency). In the end, we must be strict in protecting the time of our engineers.

As a graphical before-and-after illustration, below are two calendars. The one on the left belongs to Doug and is typical of what our engineers used to see before we changed our approach to meetings. As you can see, there’s very little time for real work. It’s a productivity disaster. The calendar on the right is a typical calendar today. Sure, there are a couple of irregularly scheduled meetings, but for the most part the day is in the clear.

For some additional thoughts on scheduling, we recommend Paul Graham’s blog post Maker’s Schedule, Manager’s Schedule.

We understand that making a change of this type requires a lot of company and culture buy-in. This may not be something you can just “flip a switch” and make happen right away. It’s a real adjustment. But how we run our schedules at Don’t Panic Labs is proof that it works and we are now strong advocates for it.

Local Resources

Another approach to software development that we hadn’t considered in the past was making sure the engineers could run every aspect of their projects on their machines.

Oftentimes there are additional resources that reside in other locations. Perhaps this is a service on another server or a big SQL database box. But if the engineers are spending their time connecting with many “outside” resources, that is time not well spent and can quickly lead to frustration (the opposite of fun).

What if the entire application stack was available on the engineers’ local machines?

This is what we asked ourselves, and then sought the answer by trying it out. In short, it made a world of difference.

For one, your engineers can more quickly develop their projects without spending a bunch of extra time configuring the required remote components. Whether it’s a mock service or a local database, having everything local makes it a lot faster to get up and going.

This approach also creates an environment where engineers aren’t vying for a limited number of resources or having to work around configuration changes/differences on the shared resources. For example, if a centralized SQL server is used for the development team, one developer’s changes that require a database modification will be difficult to manage on a shared server without coordination with other developers to ensure they will be able to test as well. This extra “configuration management” is burdensome and frustrating as a developer.

Achieving this ability to develop and test locally (including database resources) may sound hard, but we have been able to achieve it across all of our product platforms at Nebraska Global. It does take some advance planning and adherence to this ideal (including finding the right tools to make it smoother), and we have found that it actually drives some design decisions about our applications that enable and preserve this ability. It also requires the ability to do iterative database design, with a solid, script-based configuration management process for database design and deployment.
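As a minimal sketch of what a script-based database configuration management process can look like (the table name, script numbering, and SQLite usage are assumptions for illustration, not our actual tooling), here is a migration runner that applies numbered scripts in order and records what has been applied, so any developer can rebuild the full database locally from scratch:

```python
# Illustrative migration runner: ordered, append-only schema scripts.
import sqlite3


def apply_migrations(conn, scripts):
    """scripts: list of (version, sql) pairs, e.g. loaded from 001_*.sql files."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_version (version INTEGER PRIMARY KEY)"
    )
    applied = {v for (v,) in conn.execute("SELECT version FROM schema_version")}
    for version, sql in sorted(scripts):
        if version in applied:
            continue  # already applied; scripts are never edited, only appended
        conn.executescript(sql)
        conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()


# A local, from-scratch database built by the same scripts as every other copy.
migrations = [
    (1, "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);"),
    (2, "ALTER TABLE customers ADD COLUMN email TEXT;"),
]
conn = sqlite3.connect(":memory:")
apply_migrations(conn, migrations)
apply_migrations(conn, migrations)  # re-running is a safe no-op
```

Because already-applied versions are skipped, re-running the scripts is a no-op, and every developer’s local database converges on the same schema as the shared environments.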

As almost a bonus, having as many of the project’s resources locally available also gives engineers the flexibility to work wherever they want. They’re no longer tied to our internal network. They can take their entire project with them on their notebooks and work at home, at another off-site location, or on our deck. This is funability at its finest.

Is that it?

We’ve learned quite a bit about creating opportunities for developers and engineers to maximize their productivity. But we’re not done yet; there’s still more to cover.

In our next post, we’ll show how design and layered quality processes contribute to funability.

 

 
