Shifting left into oblivion.

When unstoppable policy meets immovable code.

The views expressed here are my own and don't reflect those of my employer.

I've been hearing more about shift-left again recently. For the uninitiated, shift-left is the idea of taking work that's typically done by non-developers after the development phase and making the developers do it instead (shifting it left on the timeline). The last time this term was popular, developers took on operations and testing responsibilities. This time we're seeing a push to shift security and privacy left.

I'd wager there are two major reasons the idea is resurging now:

  1. Governments are starting to protect their citizens – and their national interests – as they realize that tech has become all-encompassing.
  2. Companies are facing a looming recession after a few years of weak pandemic growth.

This is a bad spot to be in if your business model doesn't have a buffer for the privacy and security staff you suddenly need to keep the lights on. So we see companies doing the only thing they can: repurposing their existing development staff, who already know the systems.

This approach is a stick of lit dynamite with a cartoonishly long fuse.

The fundamental problem with this approach is power. Compliance professionals need enough power to stop the company from shooting itself in the foot; in theory, they should even be able to overrule the CEO. Developers don't hold that power, least of all developers who couldn't stop the company from forcing a second job on them, one that requires specialized expertise most developers don't have.

We saw similar power imbalances when the FAA allowed Boeing to self-certify its airplanes, leading to the deadly 737 MAX crashes, and when VW blamed “rogue engineers” for impossibly clean diesel engines that leadership never questioned while the money rolled in.

If people dying wasn't enough to overcome those conflicts of interest, the average software project doesn't stand a chance.

What does the future look like?

We can keep trying to throw people at the problem, but as we've already seen with content moderation, that approach doesn't scale. And unlike content moderation, ML doesn't seem like it'll cut it here (not that ML does great there anyhow), because checking compliance is at least as hard as developing the software in the first place.

In a few years, once the long fuse runs out and the fines start landing, I expect most companies will go for a legal or technological solution.

On the legal front, I expect we'll see a lot of arrangements where an external entity takes on the risk as long as you play in their sandbox: website builders, for example, that enforce accessibility policies and will defend their tooling in court. I wouldn't be surprised to see a bunch of low-code platforms spring up around this.

For anything remaining that won't play nicely in a sandbox, we'll need the ability to late-bind policy so companies can flip a switch instead of diving into code when policy changes. It's harder to predict what that looks like in practice. Maybe it's policy applied on a service mesh, a pile of feature flags, or aspect-oriented programming. Whatever the case, service boundaries will matter more than ever, and we'll want to swap implementations per request or per user, based on their attributes, rather than per environment or instance.
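To make that concrete, here's a minimal sketch in TypeScript of what request-level late binding might look like. Everything in it is hypothetical (the `RetentionPolicy` interface, `policyConfig` map, and `Jurisdiction` type are illustrations, not an existing API); the point is only the shape: policy implementations live behind a boundary and are chosen per user at runtime from a config that compliance can flip without a deploy.

```typescript
// Hypothetical sketch: data-retention policy bound per user at request time.
// None of these names come from a real framework.

type Jurisdiction = "EU" | "US" | "OTHER";

interface User {
  id: string;
  jurisdiction: Jurisdiction;
}

interface RetentionPolicy {
  // How long this user's data may be kept, in days.
  retentionDays(user: User): number;
}

// Two interchangeable implementations behind one boundary.
const strictRetention: RetentionPolicy = {
  retentionDays: () => 30,
};

const defaultRetention: RetentionPolicy = {
  retentionDays: () => 365,
};

// The switch to flip: a mapping from user attributes to implementations.
// In practice this would live outside the binary (a flag service, mesh
// config, etc.) so a policy change is a config change, not a code change.
const policyConfig: Record<Jurisdiction, RetentionPolicy> = {
  EU: strictRetention,
  US: defaultRetention,
  OTHER: defaultRetention,
};

// Resolution happens per request/user, not per environment or instance.
function retentionFor(user: User): number {
  return policyConfig[user.jurisdiction].retentionDays(user);
}

// Two users hitting the same instance get different policy.
console.log(retentionFor({ id: "a", jurisdiction: "EU" })); // 30
console.log(retentionFor({ id: "b", jurisdiction: "US" })); // 365
```

The same structure generalizes to feature flags or mesh-level routing: the deciding lookup just moves further out of the code, which is exactly what makes the switch flippable by someone who isn't a developer.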

It’s going to be a wild ride.