Recently, I started reading the invaluable Software Engineering at Google, a book describing Google’s engineering practices across many different domains.
One of the first chapters discusses making a “scalable impact,” a topic I find very interesting and one that I believe many organizations have overlooked.
What is Scalable Impact?
Creating “scalable impact” means making a change that improves your organization’s engineering practices without having to invest additional effort for each new team member.
First, let’s review some examples that don’t demonstrate “scalable impact”.
What Hinders Scalable Impact?
1. Code Reviews
Code reviews have many advantages: they allow teams to share knowledge, catch design flaws, and enforce common patterns and practices. Seems like a good idea, right? Well, the problem is that they don’t scale. The larger the team, the larger the effort, and that effort grows linearly with each new developer.
2. Manual QA
Similar to code reviews, manual QA for each release doesn’t scale. As a team grows, release velocity increases. As release velocity increases, more releases require manual QA, creating a bottleneck and a single point of failure.
3. Manual Deployment Approvals
In many organizations, only a small, dedicated team can perform the actual deployment to production. Just as with manual QA, increased release velocity brought on by team growth turns this into a function that blocks scale.
4. Excessive Documentation
Documentation is useful: it allows teams to share knowledge in-house and publicly without having to be there to personally explain things. But as you create more and more documentation, there are two downsides: (1) you have to keep it up to date, and (2) devs need to read it. And we devs (or we as human beings…) can be lazy. We don’t like doing things that require a ton of effort; we take the easy way when possible. We don’t read the docs, and we definitely don’t update them when something changes. So in many cases, the end result is wasted time, or a stale document that no one reads. In the end, the conventions you created may not be used anywhere.
How to Make Scalable Impact
Okay, so how exactly do you make scalable impact then? At Torq, we’ve adopted a number of practices that help us realize scalable impact and set our team up for successful growth. I’ll highlight each of these examples below and talk through their details in future posts.
1. Centralized Linting
Let’s say that one day you decide all your developers have to stop using errors.Wrapf and instead use fmt.Errorf. At that point, most organizations tend to create a new convention page and write it down there. Later, a nice Slack message is sent to the #rnd channel. Something like this:
“Hi there, we decided to stop using errors.Wrapf, please use only fmt.Errorf from now on. See the conventions here: …”
How many of you are familiar with this practice? If you’re familiar with it, you probably also realize this won’t work.
Why, you ask? Because human beings don’t follow rules well unless they’re reinforced. That wiki page describing the newest convention? It’s out of date the moment you write it.
So how do you solve that issue, then? My advice: Come up with a centralized linting solution. By centralized, I mean one that immediately affects all new builds, without any changes to your projects.
Returning to the example above: with centralized linting, you change your global linting rules, which immediately blocks any new usages of the old convention. No questions asked, and nothing to remember. It’s as simple as that: the PR won’t be mergeable unless the new convention is used. No documentation to maintain, no convention to teach new recruits. Your documentation is now the linting rules themselves.
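To make this concrete, here’s a minimal sketch of what such a rule could look like, assuming you run golangci-lint with its forbidigo linter (an assumption on my part; the exact config keys vary between versions):

```yaml
# .golangci.yml, shared across all repos by the centralized lint setup.
linters:
  enable:
    - forbidigo

linters-settings:
  forbidigo:
    forbid:
      # Reject any remaining use of the old convention.
      - 'errors\.Wrapf'
```

Distribute this file centrally, and every new PR in every repo is checked against it automatically.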
There you have it: Scalable Impact.
At Torq we use ReviewDog to achieve this, which I’ll describe in detail in a later post.
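As a rough sketch of how that wiring can look (assuming golangci-lint as the linter, a CircleCI-style step, and a GitHub token available for the PR reporter; the step name is illustrative):

```yaml
- run:
    name: Lint and annotate the PR
    command: |
      # reviewdog reads the linter output and posts inline PR comments.
      # Requires REVIEWDOG_GITHUB_API_TOKEN in the environment.
      golangci-lint run --out-format=line-number ./... \
        | reviewdog -f=golangci-lint -reporter=github-pr-review
```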
2. Unified/Reusable CI Pipeline
Another story you may all be able to relate to: one day your CTO reaches out and asks for the full list of dependency licenses used across ALL of your codebases.
So you start googling and find an awesome tool that does exactly that. But now you’re stuck: you have to run that tool and collect the results for ALL your projects, and we’re talking about 50+ at a minimum (or many more in larger organizations).
Here’s the (happy!) twist: With unified CI Pipelines, this task becomes easy.
By unified, I mean maintaining a common baseline for all your CI pipelines: one you can change from a single location, with a single change.
To solve the issue above, you add the license-extraction logic to your common base once, execute it as part of your CI, and let the CI do the rest.
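As an illustration, assuming a Go codebase and Google’s go-licenses tool (any equivalent license scanner would work the same way), the addition to the shared base might look roughly like this in CircleCI-style YAML:

```yaml
# Added once to the shared CI base; every project's pipeline picks it up.
- run:
    name: Export dependency licenses
    command: go-licenses csv ./... > licenses.csv
- store_artifacts:
    path: licenses.csv
```

Run the pipelines once, collect the artifacts, and hand the CTO the full list.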
Another (real) example: Let’s say you want to change your unit-testing runner.
You decided that gotestsum is the real deal, and your teammates chime in: “That old go test is useless. We MUST move to the new shiny tool.”
More opportunity for scalable impact: Just change the test part of your unified CI, and all your projects will use gotestsum instead of go test.
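Sketched in CircleCI-style YAML (the step names are illustrative), the whole migration is a single edit to the shared base:

```yaml
# Before: every project inherits this step from the shared base.
- run:
    name: Unit tests
    command: go test ./...

# After: one change here, and every project runs gotestsum.
- run:
    name: Unit tests
    command: gotestsum --format testname -- ./...
```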
To achieve this, we use Makefile inheritance and CircleCI orbs. Again, I’ll dig into this in a future post.
3. Automated E2E Tests
Nothing new here: each and every deployment should pass the same test suite. Again, every deployment.
Green -> deployed
Red -> rolled back
No “sanity suite”, no “Ah, this one is flaky, you can go ahead and deploy”. No user intervention. Of course, this means your complete E2E suite should be fast and reliable (I use under 5 minutes as a rule of thumb).
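As a rough sketch of that gate in CircleCI-style YAML (the make targets are hypothetical placeholders for your own E2E suite and rollback logic):

```yaml
- run:
    name: E2E gate
    command: make e2e          # hypothetical target running the full suite, ideally under 5 minutes
- run:
    name: Roll back on red
    command: make rollback     # hypothetical target reverting the deployment
    when: on_fail              # CircleCI runs this step only when an earlier step failed
```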
Adopt the “Beyoncé Rule”
“If you liked it then you shoulda put a ring on it!” sang the mighty Beyoncé in her famous song, Single Ladies (Put a Ring on It).
Later, mischievous devs took that line and rephrased it to “if you like it, put a [test, lint, e2e, ticket] on it!”
Put plainly, new investments or changes require you to put the right framework in place to make them scalable. Getting started with this requires having the right tools; but after that, it’s easy to adopt the practice.
Found a new issue that can be caught by a linting rule? Easy! Add the rule to your centralized linting solution.
Want to add a new tool that tracks test coverage? Simple! Change the unified CI configs.
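For example, a coverage step added to the shared base might look roughly like this, using only standard Go tooling (step names are illustrative):

```yaml
- run:
    name: Test coverage
    command: |
      go test -coverprofile=coverage.out ./...
      go tool cover -func=coverage.out
- store_artifacts:
    path: coverage.out
```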
The upfront investment here helps the entire team grow and delivers returns (in reduced toil and increased velocity) over time. Applying the Beyoncé Rule turns your team from Danity Kane into Destiny’s Child: it becomes super easy to add or change existing conventions and processes, at a cost that’s easy to justify.