
Layoff Journal Week 28 – Traditional Holiday

Welcome to the holiday season. For those still searching for a job, it’s bittersweet. Thankfully, the holiday season is more stretched out than ever, and we’re only at the start of Halloween, so it is still very early. Still, the thought of not having a full-time role lined up before Christmas has been hanging over my head for a few weeks. I’m thankful to have the chance to take on a contract role, and I recognize that not everyone has that option, but the holidays are still in the back of my mind. My advice is to keep your holiday plans as normal as possible. Those traditions are the baseline for memories, and while this has been a dark year for many, a bit of light at the end of the year can be a lift.

Two jack-o-lanterns on a black background and a reflective black floor.
Photo by David Menidrey on Unsplash

I love this time of year. The weather finally starts to cool off, baseball is into the post-season (even if my team didn’t make it this year), and the joy of kids anticipating the costumes, candy, and presents is a great way to wrap up each year. Expect more sentimentality than usual from me here for the next few months.

Non-traditional Numbers

4,632 more people were laid off in September, down from the high of nearly 90,000 back in January. source

83,328 job ads for software engineering roles were posted on LinkedIn last week, a weakening number, but still better than the low point in Q2. source

45% of those laid off were women, while women represent only 39% of the workforce overall (thus they were overrepresented in layoffs). source

Busy, Busyness, Business

“The bad news is time flies. The good news is you’re the pilot.”

— Michael Altshuler

When I talk with laid-off teammates, they mention being busier than ever, even without a full-time job. I have had the same experience. Between job searching, filing for unemployment, networking, interviewing, exploring contract work, and all the daily work of raising a family, the days are long, but the weeks are short. It’s critical to pay attention to where the time is going and look for ways to a) stay busy on things that advance my knowledge or career, b) avoid being so overcommitted that I’m too busy to be available, and c) keep looking for new opportunities on the horizon and cultivating them.

I’ve enjoyed more time for family life and getting my hands deep into code again for the first time in years. Starting a consulting business provides excellent opportunities to learn the broader parts of running a company I’ve only ever brushed up against previously. The trick has been choosing which events and activities to say no to. It’s still a real struggle and an area for improvement.

Focus Fuels Velocity

Staying with the theme of team principles we operated the Indeed Incubator under: we had an unrelenting focus on delivering new products as efficiently and effectively as we could. We rarely allowed teams to run multiple distinct product bets at the same time, instead focusing them on the one they believed was most likely to produce the outsized outcome we were looking for.

Having one key thing to work on keeps teams laser-focused on what matters most. It also helps leaders as they lay out priorities: changing only one thing at a time is clarifying for the team and makes it clear that anything else is, at best, second priority. When we set a clear focus, we clear away the cobwebs of everything else that could distract us.

This doesn’t have to be hard (but it often is). Establishing a clear objective and the key results you’ll use to measure success isn’t difficult on its own. It becomes difficult when you realize you have 10 objectives and an average of seven key results for each. First, narrow down the objectives: three at most per team. Generally these roll up into higher-level objectives, each level increasing in breadth and depth, but still only around three per level.

Next, the key results have to be reduced. Again, to avoid having too many things to track, cut the number of measures to approximately three per objective. That gives you roughly nine key results per team feeding into three overall objectives. The other important part is to avoid vanity metrics: each key result should measure how your business or product is improving, not just a number that’s moving or a feature that’s delivered.
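
To make the shape concrete, here is a minimal sketch of what that structure looks like; the objectives, key results, and targets below are hypothetical examples, not real Incubator goals.

```python
# Roughly three objectives per team, each with roughly three measurable key
# results. All names and numbers here are made up for illustration.
team_okrs = {
    "Grow adoption of the new product": {
        "Weekly active users": 5_000,
        "Signup-to-activation rate": 0.40,
        "Week-4 retention": 0.25,
    },
    "Prove willingness to pay": {
        "Paying customers": 50,
        "Trial-to-paid conversion": 0.10,
        "Monthly recurring revenue (USD)": 10_000,
    },
    "Learn the market": {
        "Customer interviews completed": 30,
        "Experiments shipped": 12,
        "Segments validated or invalidated": 3,
    },
}

assert len(team_okrs) <= 3  # cap objectives at three per team
assert all(len(krs) <= 3 for krs in team_okrs.values())  # and key results too
```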

Fun

Black cat staring into the camera with a kitchen in the background.

Caption: The cat definitely wrote the headline and subhead.

Headline: ‘Her cat’ alerts owner to burning slow cooker in the middle of the night.

Subheading: Supposed guard dog, meanwhile, offered little help.

Picture of a woman holding two chocolate bars with bites out of them. Caption: “Every time I avoid eating Halloween candy I reward myself by eating Halloween candy.”

Caption:
Me: takes spider web down with a broom.
Me: hangs up fake spider web for Halloween.

Picture: Man in a plaid shirt and puffy vest staring, unimpressed and perturbed, at the camera with others behind him, labeled “the spider.”

Final Words

If I can help with your search, please contact me. Please give me feedback on what you like or don’t care for in this newsletter, and I’ll adjust. For total transparency, I have no affiliation with any of the tools, companies, or resources I share. These are my impressions, not colored by any outside influences.

Chapter 3: Your First 90 Days

Kickoff

We briefly discussed the kickoff process in Chapter 1, but much more goes into a successful kickoff. The kickoff is a chance to get the entire cross-functional team energized about the product they’re about to build, excited about the customers they’re serving, and fixed on the same clear direction for the new development. Of course, it won’t start that way. It’ll be messy at first: lots of competing ideas and opinions. No data. No product. No clear MVP. But a little structure will bring order out of this chaos, and you’ll have a great time.

Black and white foosball players on a green table. The ball is poised for the first kick off.
Photo by Florian Schmetz on Unsplash

The first thing that goes into a great kickoff is a lot of preparation. We have had both User Experience and Product lead kickoffs, but ultimately we landed on User Experience as our best fit. The key is that you need great facilitators: people who are passionate about the problem space and want to hear all the voices in the room. Allowing the Product Manager or Founder to play the visionary, without having to watch the clock and remember to give everyone a chance to grab some lunch, helps maintain everyone’s sanity. Additionally, having the User Experience team member be the first to think through the problem space lets them begin to sharpen the personas and maybe even bring some real-world user experience into the discussion.

Second, you’ll want all cross-functional partners there for the kickoff. This is an excellent chance for everyone to meet and get to know one another, so including at least one social event (if your budget allows) for the team to grab dinner or go to happy hour will kickstart the team formation process. The other benefit of bringing the cross-functional team together is getting all the voices into the room. Nearly everyone is starting from scratch on this idea, and (generally) no idea is terrible, so hearing more diverse ideas can help flesh out the problem space, the customer persona, or the solution.

We’ve successfully used two different kickoff strategies over the last few years. The first strategy we employed was having the team fill in a Lean Canvas (we modified it a bit, removing a few boxes that weren’t necessary for teams in our group) over three days. Day one focused on the customer and the problem space, with the goal of being able to state the customer’s pain points, needs, and desires by the end of the day. Day two was all about potential solutions and brainstorming ways to solve the customer’s pains. Finally, on day three, the team would explore competitive differentiators and establish team goals.

This approach to the Kickoff had its merits. It was short and to the point. Very much focused on customer-centricity. It brought the team back to the principles for the product’s existence and who it should serve. But it had its downsides, too. It wasn’t as structured as Design Sprints. It didn’t incorporate any actual user data into the process. Worst of all, there wasn’t any documentation other than what we had written on how to do this process.

When our User Experience team learned about Design Sprints, they were very excited to test whether they could replace our custom kickoff routine. Sure enough, using a (somewhat modified) Design Sprint approach gave us the best of both worlds: a structure for running a successful kickoff with public documentation and validation (via Jake Knapp’s book, “Sprint”) and a clearly understood process for exploring a new idea quickly and effectively. I won’t cover the whole process here, but suffice it to say it starts with the goals in mind, works through several potential solutions, and ends by getting real users to talk about your possible product and give you invaluable feedback.

The kickoff results in a well-defined minimum viable product (MVP) that, while a bit embarrassing or risky, will very quickly allow your team to start learning. Let’s talk a bit more about what makes a good MVP.

Selecting the MVP

The Minimum Viable Product (MVP) is well defined in books like “The Lean Startup,” “Nail It, Then Scale It,” and “Inspired,” so we won’t take the time to redefine it here. For Disruptive Innovation teams, the approach to the MVP comes down to three things:

1. Creating business value.

2. Reducing risk.

3. Learning.

Creating business value and reducing risk are covered extensively elsewhere and are core to the Agile software development methodology. They take some getting used to if you’re coming from another style of development (like Waterfall or other approaches), but with practice, they can be applied in most (if not all) situations.

Designing your MVP for learning is a new concept for many, so we’ll spend some time unpacking it. The crux of your MVP is testing the market to determine whether there’s enough value to warrant additional investment in this product area. That means if a feature or enhancement is not moving (or not likely to move) a key metric by double-digit percentage points, you can defer it (or better yet, delete it; you can always come back to it when there’s more time). We want to see passionate users adopting the product, so if our sales funnel isn’t showing solid conversion, we invest heavily in fixing that funnel and the messaging.

Reid Hoffman’s stance that if you aren’t embarrassed by your initial launch, you’ve launched too late is also correct. Please don’t wait for perfection (or anything close to it). There’s so much to learn in a new business area that every day without a product in the market is a day of learning lost forever.

A Stack Designed for Experimentation

Hands pouring a blue test tube of liquid and a green flask of liquid into an (already far too full) flask of orange fluid.
Photo by Alex Kondratiev on Unsplash

Getting started in Engineering will require some upfront investment. You’ll need a technology stack that is primed for experimentation. This stack will be the focal point of each of your product ideas. Consistency is a blessing here: solving a problem once means all other teams can gain from that wisdom, and launching gets simpler with every subsequent product. An additional benefit of a consistent stack is that engineers working on Product A can lend a hand or give a high-quality code review to engineers working on Product B, and when Product B shuts down, its engineers can quickly join Product A because the stack is so similar.

We’re intentionally blending programming-language and technology-stack choices in this book to simplify the discussion. In reality, all of these decisions about language, database, scaffolding framework, front-end libraries, etc., are muddled together into one overarching design, and you’ll have to make some compromises here and there to accommodate integrations, brand requirements, or other teams. There are a few key characteristics you’ll want to consider when choosing a technology stack.

Easy to write (and read)

The ability to quickly and easily express business solutions in code is a crucial characteristic of most high-level programming languages, but some are more verbose than others. We’re not looking for the most terse language here, nor the most declarative one. You’re looking for a happy medium where the language is sufficiently easy to write, but, to be honest, the more important part is how easy it is to write readable code. This distinction matters since any language can be used to write illegible software (sorry, but Perl, anyone?). Still, the goal for Disruptive Innovation teams isn’t to write the most glorious code either. It’s a happy medium where the code is easy to express and, at the same time, easy for the next person (or yourself a few months down the road) to understand. Performance (meaning the speed of running applications) is a helpful consideration, but you’ll probably be good to go if the language performs well enough to get you through the first million users. Anything beyond that is scaling ability you don’t need right now, or worse, wasted effort.

Easy to hire

Before you adopt a programming language, do a quick search to see how easy it is to find talented engineers within your budget in whatever location you want to hire. It’s never easy, of course, and engineers aren’t cheap, but choosing a well-liked language and stack can make a big difference in how quickly you can scale up your team when you’re ready to add more investment.

The best gauge we’ve seen is to use a site like Indeed.com to see how many jobs in your area mention the programming languages you’re considering. It gives you a relative measure of how hard it will be to hire in your chosen language at a given time. For example, when writing this, in the Austin area, just over 4,100 jobs mentioned Java, more than 4,800 mentioned Python, around 3,000 mentioned JavaScript, and only 11 mentioned Haskell. It’s not perfectly scientific, but it gives you a rough guide and indicates that trying to staff your team with local Haskell developers will be a significant challenge. This distribution also varies pretty widely by market. For example, we regularly found Python developers easy to hire in Austin and struggled to hire them in Seattle and Tokyo.

Easy to support

The stack you choose should have a healthy online community and, ideally, a community within your company as well (although, depending on the size of your investment in Incubation activities, your team may be able to meet this need independently). You’ll want a community that is growing and investing in new open-source libraries to help make programming on this stack even easier. Additionally, as you’ll hopefully eventually need to scale up some of the products built on this stack, it’s helpful to have plenty of online resources that let you follow behind someone who’s done it before.

Easy to deploy

Finally, you’ll want to deploy quickly and regularly to keep new features flowing into production, so the stack you select should be deployable to production in minutes. Never underestimate how much delays in build and deployment can sap a team’s energy when they’re trying to fix a user bug, or when a simple change takes ten times longer to deploy than to write. Investing in a quality continuous integration and deployment pipeline, and enough capacity to keep it fast, will pay back that investment with significant interest.

Our recommendation (don’t @ me)

For all of our projects, the core language we selected was Python. It’s a language that many people can easily read and write. While it has some eccentricities (especially when doing things the “Pythonic” way), you can ignore those and write straightforward code. Another great benefit of Python is that it enforces consistent indentation, which makes the code a little easier to read.

On top of Python, we selected an application scaffolding framework called Django. Django has several advantages, but the biggest ones for us were that its simple modular design makes it easy to extend the core application flow, its project archetype strongly suggests where each bit of code should live, and it has a lot of built-in functionality (class-based views, the Django Admin, the object-relational mapping (ORM), etc.) that makes simple applications extremely easy to create.
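
As a minimal sketch of what that looks like in practice (the model, view, and template names here are illustrative, not from a real Incubator product), a Django app can go from data model to working page with very little code:

```python
# models.py -- a hypothetical model; the ORM and the Django Admin work with it as-is.
from django.db import models

class JobLead(models.Model):
    company = models.CharField(max_length=200)
    title = models.CharField(max_length=200)
    created_at = models.DateTimeField(auto_now_add=True)

# views.py -- a class-based view that lists the model with almost no custom code.
from django.views.generic import ListView

class JobLeadListView(ListView):
    model = JobLead
    template_name = "leads/list.html"  # rendered server-side by the Django templating engine

# admin.py -- one line gives a full CRUD admin for free.
from django.contrib import admin

admin.site.register(JobLead)
```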

For the front end, when we weren’t integrating with another team, we often used simple server-side rendered HTML through the default Django templating engine. More and more often, though, we found ourselves integrating with other projects and platforms, and then we used React, since it was the front-end library of choice for the rest of the company. We intentionally held off on adding React to our front-end experiences until we had a strong case that the product needed it, often opting for jQuery until the readability of the JavaScript started to fall apart.

For visuals, we kept the design library simple. While there was a component library we could utilize, it was focused on React components, and as I explained earlier, that wasn’t a framework we were always using. Instead, we were able to customize the Bootstrap UI library to closely match the design system the company was using and keep our teams moving quickly with easily laid out pages and on-brand designs.

We stayed with the tried-and-true MySQL for the backend database, as it integrated nicely with Django and was easy for us to deploy. Leveraging the Django ORM meant that we were regularly updating models and changing the database schema, so having an accessible, fast, and flexible database to do all of this work was vital.
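
Because the models changed so often, Django’s built-in migrations carried most of the schema work for us. Here is a hedged sketch, reusing the hypothetical JobLead model from the earlier example and adding an equally hypothetical field:

```python
# models.py -- adding a field while iterating on the product. It's nullable so
# the generated migration applies cleanly to existing rows.
from django.db import models

class JobLead(models.Model):
    company = models.CharField(max_length=200)
    title = models.CharField(max_length=200)
    created_at = models.DateTimeField(auto_now_add=True)
    source = models.CharField(max_length=100, null=True, blank=True)

# Django then generates and applies the schema change from the shell:
#   python manage.py makemigrations
#   python manage.py migrate
```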

We reused the company’s existing infrastructure for everything else – continuous integration/deployment, virtual instances, message queueing, monitoring, etc.

Integrations: You ain’t gonna need ’em until you do.

For your early-stage products, you’ll want to choose your integrations carefully. Of course, the intent is to leverage your core product or other parts of your company to springboard this new product into an even stronger position, but every integration comes with costs. You have to keep that integration up to date. You have to operate the way the system you are integrating with expects. No integration does things exactly the way you want. And so on.

One of the fundamental principles we leveraged to ensure that our teams could keep moving quickly as they iterated on the business model and product was to “maintain optionality.” It’s probably much easier to say, “Stay flexible.” Every time you make a design choice that closes doors, you take the risk that a closed door will have to be re-opened later.

That’s not a call to be infinitely flexible, though! We’re not looking to design the most flexible, generic system that could do anything for anyone at any time; the team would be lost in the jungle of analysis paralysis, trying to keep all doors open. There’s a sweet spot. Let’s look at an example.

One of our teams was building a job seeker and employer chat product. We knew job seekers had questions for employers and firmly believed that if an employer responded to a job seeker, the job seeker would be much more likely to apply for a job. Simple enough. The design of this application struck a good balance on flexibility by focusing on the core of what was needed in this experiment: an application that allowed a job seeker to communicate with an employer and vice versa. No preconceived notions of having to apply for a job first or that the communication always started with one side or the other.

The team could have integrated with many systems to test introducing chat to job seekers and employers. However, some of those systems had preconceived notions about when a conversation was taking place, who was initiating it, or the medium the discussion would be conducted on (email, SMS, etc.). By not integrating with these systems early on, the team was free to explore various places where conversations could be helpful, opportunities for either side to start the conversation, and different transport mechanisms for the messages. Once they had run several tests and found an area that seemed promising, they did a deeper integration to examine more thoroughly whether the chat functionality was beneficial, whether it detracted from any other essential metrics, and what would happen if an entire job market had chat enabled by default for every job.

Other times, you won’t be able to avoid the integration, so let’s talk about experimentation frameworks next because they’re the key to keeping integrations speedy.

Experimentation Frameworks

We could have said A/B testing frameworks, but the truth is that experimentation is so much more than the ability to efficiently run and verify A/B tests in your product. Yes, you’ll want a robust platform for A/B testing (ours was built in-house and is available as free open-source software: https://opensource.indeedeng.io/proctor/). There’s plenty of guidance on good A/B testing platforms and implementations, and several high-quality vendors can provide this functionality. (Using a vendor is a pretty good idea if your company doesn’t already have a team managing an A/B testing platform, but I digress.)
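
To make the mechanics concrete, here is a minimal sketch of how an A/B framework typically assigns a user to a test bucket with a deterministic hash. This is a generic illustration, not Proctor’s actual API, and the test and variant names are made up.

```python
import hashlib

def assign_bucket(user_id: str, test_name: str, allocations: dict[str, float]) -> str:
    """Deterministically map a user to a variant based on allocation fractions.

    `allocations` maps variant names to fractions that sum to 1.0. Hashing the
    user and test name keeps the assignment stable across sessions without
    storing any state.
    """
    digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest()
    point = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    cumulative = 0.0
    for variant, fraction in allocations.items():
        cumulative += fraction
        if point <= cumulative:
            return variant
    return "control"  # fallback if the fractions don't quite sum to 1.0

# Example: a 50/50 split for a hypothetical job seeker/employer chat test.
print(assign_bucket("user-123", "jobseeker_chat", {"control": 0.5, "chat_enabled": 0.5}))
```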

Beyond the A/B testing framework, you’ll want a feature flagging platform. Some A/B testing platforms can do this as well, and that’s probably fine, but a dedicated, easily managed feature flagging platform has some benefits: the simplicity of enabling/disabling rather than changing allocations, the ability to coordinate feature-flag changes with other releases and deployable applications, and so on. As long as you can quickly and easily turn a feature on and off wholesale, the framework you’re using will be sufficient.
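
A hedged sketch of the bare minimum a feature flag needs to do; the flag name and the environment-variable backing store here are hypothetical stand-ins for a real platform.

```python
import os

def is_enabled(flag_name: str) -> bool:
    """Wholesale on/off check for a feature.

    A real platform would back this with a database or config service plus a
    management UI, but the contract is the same: a fast boolean answer.
    """
    return os.environ.get(f"FLAG_{flag_name.upper()}", "off") == "on"

if is_enabled("jobseeker_chat"):
    ...  # render the chat entry point
else:
    ...  # fall back to the existing experience
```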

Finally, you’ll want an experimentation framework embedded into your core product(s) to make it easier for your Disruptive Innovation teams to explore new ideas and integrations quickly. This can start as simply as a way to load arbitrary JavaScript into any page and have it execute (a company-sanctioned cross-site scripting attack, if you will). It needs to come with some pretty significant guardrails around how it will be used (or not used), who will use it, how much, and who will turn it off if it misbehaves. These questions stay relevant no matter how mature your experimentation framework becomes, and some of the answers will even move into the experimentation framework and its codebase.

A more complex integration would involve a plugin framework, ideally serverless, that allows your Disruptive Innovation teams to write simple browser (or backend) plugins for your core product. These plugins should be able to target a particular page, or a part of the page, and replace or add functionality. React and Webpack Module Federation do a pretty good job of laying out the fundamentals for this kind of integration. Each Disruptive Innovation team can construct plugins to promote their product to existing users on your platform. They can enhance the core product to show additional data or bring additional information to the users’ attention. Or they can create new experiences in the sign-up flow that let users configure your experimental products.
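
A minimal backend sketch of the idea follows; the registry, page names, and hook points are hypothetical and exist only to illustrate how a team’s plugin might target a slot in the core product.

```python
from typing import Callable

# Hypothetical plugin registry: maps a (page, slot) target to render functions
# contributed by Disruptive Innovation teams.
_PLUGINS: dict[tuple[str, str], list[Callable[[dict], str]]] = {}

def register_plugin(page: str, slot: str):
    """Decorator a team uses to attach a plugin to a spot in the core product."""
    def decorator(render: Callable[[dict], str]):
        _PLUGINS.setdefault((page, slot), []).append(render)
        return render
    return decorator

def render_slot(page: str, slot: str, context: dict) -> str:
    """Called by the core product wherever it exposes that slot."""
    return "".join(render(context) for render in _PLUGINS.get((page, slot), []))

# Example: a team promotes its experimental chat product on a job view page.
@register_plugin(page="job_view", slot="sidebar")
def chat_promo(context: dict) -> str:
    return f'<a href="/chat?job={context["job_id"]}">Ask the employer a question</a>'
```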

The more flexible this plugin system is (the more pages teams can adjust, and the more they can change on each page), the faster your Disruptive Innovation teams can experiment. There’s also a nice second-order benefit to this sort of platform: your core product teams will be able to run additional experiments safely and efficiently using the same plugin framework.

I mentioned earlier that you’ll want some guardrails around these experiments. Let’s talk briefly about three critical limits you’ll want in place early on. If you’re doing server-side rendered plugins, you’ll need guardrails around how long that rendering can take, to keep page load times low. For browser-based plugins, you’ll want protections around how much of the page and document object model can be modified, and around overall plugin size.
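
As one hedged example of such a guardrail (the budget value and function names are arbitrary choices, not recommendations), a server-side plugin render can be wrapped in a time budget so a slow experiment cannot drag down page load:

```python
import concurrent.futures

# Shared worker pool for plugin rendering; one pool avoids per-request thread startup.
_PLUGIN_POOL = concurrent.futures.ThreadPoolExecutor(max_workers=4)

# Time budget for any single server-side plugin render (an illustrative policy choice).
PLUGIN_RENDER_BUDGET_SECONDS = 0.05

def render_with_budget(render, context: dict) -> str:
    """Run a plugin's render function, dropping its output if it blows the budget."""
    future = _PLUGIN_POOL.submit(render, context)
    try:
        return future.result(timeout=PLUGIN_RENDER_BUDGET_SECONDS)
    except concurrent.futures.TimeoutError:
        # Fail open: the core page renders without the experiment's content.
        # (The overrun render finishes in the background but no longer holds up the page.)
        return ""
```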

Goals/OKRs

Setting goals for a brand-new product can sometimes be daunting. You’re not even sure what the market looks like, you haven’t written a line of production code yet, and you’re being asked what the product can accomplish. The key is to choose something meaningful and challenging and not get stuck on whether or not that goal is achieved; the goal is valuable if it inspires and pushes the team to iterate quickly. Aim too high, and the team will be demotivated by an impossible dream. Aim too low, and you’re either sandbagging (don’t allow it) or you’ll be revisiting the goal in a few weeks because you’ve achieved it and it’s still not meaningful.

Objectives and Key Results (OKRs) provide a good framework for structuring goals within the team, but in our experience, the typical roll-up and roll-down of OKRs doesn’t work as well for disparate Innovation teams. Having meaningful objectives (cap it at three) and just a few measurable key results within each objective will give the team a feeling of accomplishment as they move closer to meeting the goals and help with prioritization as you decide what to work on throughout the first quarter.

Lastly, goals should be created collectively by the whole team. The earliest-stage products will probably primarily focus on learning about the market size and delivering value for your customers. Still, as the team progresses into later stages, you’ll add additional goals around scale, quality, and other engineering-specific objectives to balance the investment between new product growth and maintainability.

Check-ins

A person in a meeting with others around a conference table, gesturing with their hands. The table has a notepad, phone, and laptop. Another person is out of focus in the background.
Photo by Headway on Unsplash

Your product team will have three main types of check-ins each funding cycle. The weekly check-ins will provide tactical updates on the delivery of the MVP and other experiments. The monthly executive check-in will be a chance to course correct for any tests that are going astray, as well as a chance to get guidance and executive “air cover” if you need it. The last check-in will be at the end of the current funding cycle and will be a chance to catalog the successes/misses, capture and share learnings, and share plans for the future of the product. Let’s go a bit deeper on each type.

Weekly Check-ins

Weekly check-ins should be attended by most, if not all, of the team and will likely only be about 15 minutes long (maybe more for later funding rounds). There’s value in grouping a few related or similar-stage Disruptive Innovation teams into one weekly check-in meeting, as there are often common learnings and the teams can learn from each other.

These check-ins should be attended by the Product Manager/Founder and other team leaders, but the main read-out about the product should be delivered by the Product Manager or Founder. They should have enough context on everything happening within the team to give a high-level summary of progress and next steps. Of course, they won’t be able to answer every question in full detail, but having only one person present the product each week will ensure they stay abreast of the work and keep the meeting moving smoothly. If the Product Manager/Founder cannot attend, another leader could step in occasionally to drive the discussion as needed. 

These weekly check-ins should be optional for the rest of the team, but those who cannot or choose not to attend should still have a chance to review the shared materials and ask questions, often at a quick weekly team meeting (standup works just as well). Beyond the team, you’ll also want several senior leaders from around your Disruptive Innovation organization present to give feedback on the product, typically Product Directors and UX and Engineering leadership, though others can bring unique insights if they’re available.

There’s a balance to strike here between having leadership in the room to give constructive feedback and having so many people in the room that discussion is stifled. We’ve generally found that between four and six teams works best, which also caps the meeting length at one to one and a half hours, probably about as long as anyone can sustain attention anyway. You may want to experiment with having everyone pre-read the material before the meeting or dedicating the first few minutes of each team’s slot to pre-reading. Either way, giving people a little time to read before the discussion or presentation begins is very useful.

Monthly Executive Check-ins

The monthly executive check-in is a bit more formal. You’ll want to prepare an actual presentation for this, covering the major features and experiments delivered since the last check-in, what you’ve learned about market sizing, the current in-flight experiments, and what’s planned next. Plan to spend a significant portion of the time answering questions; thankfully, you’ll be scheduling these for about 30 minutes per team. Once again, the whole team should be included (at least optionally) in this meeting to listen in and hear how the executive thinks about the product. You’ll also want much of the senior leadership from your Innovation program there to hear the feedback and begin planning for scaling up or shutting down based on it. These check-ins are a great time to capture incremental learnings you can share with others in the organization, so saving the presentation decks and recordings in an organized fashion will be a helpful asset.

Continuation Pitch Check-ins

The continuation pitch process will be the most formal of all the check-ins but the least frequent. You’ll need to give a quick executive-level overview of the product and progress to date, showing the data and learnings you’ve acquired. Like the monthly check-ins, you’ll present to the executive sponsor but may be joined by other executives, including the overall Innovation program sponsor. This will replace the last monthly check-in of the funding cycle.

For the core of the meeting, you’ll want the attendance to look just like the monthly check-in: the team, senior Innovation leaders, etc. However, there’s a second part to this meeting where you may want/need to reduce the room to only Innovation leadership and executives, and that’s where the discussion around whether or not the product should receive continuation funding takes place. Sometimes, the debate on whether or not to continue is less about the product or market and more about the execution of the team, and it can be a rather sensitive discussion. If possible, it’s nice to have the funding discussion with the Product Manager/Founder still in the room so that they can understand the nuances of the decision.

If the project is selected to continue, the team should be informed of the decision and their new funding cycle (budget, timeline, etc.). If, however, the decision is made to shut this product down, there’s more to think about and do.

Shutting down

Shutting down a product can be one of the hardest things to do. Normalize this as early as possible and celebrate the team’s accomplishments. There is a lot to think about when shutting down a product, and you’ll want a significant amount of Program support to make this a repeatable and efficient process.

Notifying users

One of the first things you’ll need to do is notify existing users that this product is shutting down. That messaging will vary from product to product. Sometimes, you may be required to keep the product around for a while to allow customers access to data, tax information, etc., but the product development (and almost all maintenance) should stop now. Banners within the product and emails are the most common way to notify users that the product is sunsetting, and often, 30 days’ notice is more than enough, but your process may vary a bit.

Cleaning up the application and data

You must do a few things once you have the all-clear to shut down. The most important part is to get the product wholly and cleanly shut down so that there is no maintenance burden left behind from this product. Any lingering portions of a product can quickly become a drag on the team when they need to apply security patches, answer support questions, etc. Here’s a non-exhaustive list of things to do when shutting down:

  1. Redirect the domain from your product to another appropriate place on your company’s website.
  2. Shut down any running instances of the application and remove all data according to your data retention policies.
  3. Archive the project repository (keeping the code around in read-only mode so you can reuse snippets in later products if needed.)
  4. Remove any integrations/experiments you built into other core products.

Capturing Learnings

This will be done throughout the shutdown process and is one of the most critical parts. The Product Manager/Founder should produce a learning presentation and share it with the Innovation program as a whole, but it also should be recorded for posterity. We make these learning presentations available to the entire company as they can (and often do) inspire additional product ideas being pitched to the Incubator in future sessions. 

The Engineering team may also choose to produce some documentation from their learnings, which can be useful. We have historically coached teams to try to codify some of their knowledge into libraries and services that make the next project faster, simpler, and lower-maintenance.

DevOps vs NoOps

DevOps Days Austin 2016 Logo

Speaking at DevOps Days Austin a few months back on the topic of “DevOps vs NoOps” led to this post. Ignite presentations are great conversation starters, but they can’t convey much substance. It’s a great incendiary title; however, I wanted to offer a bit more context on the two “NoOps” applications I’ve been a part of over the last few years.

Example #1: DevOps for a Ruby on Rails application deployed to Heroku

The facts:
This application managed the user experience for tens of thousands of customers at WP Engine. It scaled horizontally with the help of HireFire, and we managed the log files with Papertrail. It had a Postgres database as the backend data store, as well as a caching layer. Everything was managed directly through the Heroku portal and CLI.

The pros:
Heroku handled much of the security, and HireFire was there for horizontal scalability. Services could easily be turned on and scaled up at the push of a button.

The cons:
Obviously, cost: you get what you pay for. Additionally, some things had to be managed through a UI instead of CLI/scripts. And limited visibility made it difficult for on-call to do much more than “restart the dynos.”

Example #2: DevOps for PHP & Laravel deployed on EC2 directly

The facts:
These applications supported more than a thousand customers. Laravel, Forge, and Envoyer gave the team the ability to easily manage deploys and multiple environments, and provided turn-key configuration management of our application servers.

The pros:
Configuration management for initial server setup comes out of the box, and it is easy to set up commit/webhook-based deploys.

The cons:
Unfortunately, this setup did not provide the security blanket that Heroku does; patch management was a required add-on. Additionally, much of the Envoyer/Forge configuration is not tracked in version control by default, so configuration drift was common. There was also no built-in or suggested solution for monitoring.

So, do I still need DevOps?

Obviously, both of these applications still required some amount of operations support, but for the most part it was manageable by the existing team of developers, as opposed to needing a specialist in monitoring, security, patching, or other core DevOps disciplines.

I truly love DevOps for what it has done to bring development into the operations camp and operations experience into the development side of things. Anytime we’re talking about breaking down the walls between development, operations, quality, product, or any other collaborator, I’m all for it. I’ll still be hiring DevOps engineers and looking to build resilient platforms that make every engineer that much more effective.

The Origin Story

Where did this phrase “only new mistakes” come from?

It originally came from a process our development teams used of allowing people to make mistakes while encouraging them to always learn from the mistake and not repeat it. Each review period (quarterly, when I first encountered this mentality), we would look at the three things you could have done better. Those mistakes or opportunities were a chance to place a mental speed bump on that memory so that you’ll always remember the mistake and the ways you could handle it better going forward.

As I moved into management, I was looking for a short, sweet version of this process that I could share with the members of my team, and at one point I began saying that my goal was to only make “new mistakes”: to use the learnings from my previous roles, companies, and other life events to avoid the mistakes I’ve seen before, or at the very least to discover new ways to make some of the same ones.

Where does this principle apply?

I believe it works great in the business context, but that it’s also very useful in parenting and so many other areas of life. This blog is going to focus on the types of mistakes and learnings you can use to build a great engineering team, but that doesn’t mean the learnings will end there.

This blog will primarily focus on how to apply this principle to leading software development teams, but will draw on experiences in parenting, volunteer work, business, and so much more.

An introduction is in order.

A lit match.

Only New Mistakes was created as a place to share what has worked, and what hasn’t, for the various software development teams I’ve been a member of and led over the last decade or so. These are my opinions and not those of my various employers, past or present (National Instruments, Bazaarvoice, or WP Engine). They are the learnings from good times and bad times. At times they are the best answer I have discovered, and at times they are only the least bad thing I know to do in a situation.

Topics for this blog will cover anything and everything involved in running a successful software development team: hiring and recruiting, software development lifecycles, testing methodologies, individual and team development, recognition, communication, and much more. This is meant to be a firestarter for discussions rather than some guy spouting off on the Internet, so please join in on Twitter or another social media platform.