Control vs Consequences
Hello 👋
This week I write about how taking control makes you a bottleneck.
Also links to articles on a decade of AI platform, context layers, and a scope creep game.
Finally, if you’re in London next week join me as I present at the London Platform User Group (LOPUG) meetup on contract-driven data platforms. I’m particularly looking forward to this talk as it’s for platform engineers, not data people, and hopefully I will get some interesting questions and learn something in the chats after :)
Control vs Consequences
As a data (governance/platform) team you’re always trading off how much control you have over your organisation’s data assets.
You want to ensure data is managed correctly, with appropriate access management, with data retention policies that comply with regulations, with cost controls, and so on.
It’s tempting to make yourselves gatekeepers for various tasks, reviewing and approving the actions of data users and owners to avoid any negative outcomes.
The problem is, every gatekeeper becomes a bottleneck.

Bottlenecks slow the organisation down, frustrating those who want to share and/or make use of data, and restricting the flow of data.
They also slow your team down, because now you’re having to take manual actions each time a request comes in, instead of delivering against your projects.
Every time you consider making yourself a gatekeeper, think hard about the consequences of the negative action you are aiming to prevent. Does the potential consequence outweigh the costs of becoming a bottleneck?
If it does, because the consequence is so large, consider how you can implement automated checks within the process, avoiding the need for manual review.
For example, if an action could result in an email being sent to all your customers, including those who opted out of marketing communications, ensure the process and systems have the right checks in place to prevent accidental sends, such as requiring approval from someone within the same team.
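As a minimal sketch of what such automated checks could look like (all names here are hypothetical, not from any real system), the opt-out filter and the peer-approval rule could each be a small function that runs automatically in the send pipeline, with no manual gatekeeper in the loop:

```python
from dataclasses import dataclass


@dataclass
class Recipient:
    email: str
    marketing_opt_in: bool


def safe_recipients(recipients: list[Recipient]) -> list[Recipient]:
    """Automated check: drop anyone who opted out of marketing comms."""
    return [r for r in recipients if r.marketing_opt_in]


def peer_approved(approver_team: str, sender_team: str) -> bool:
    """Automated check: approval must come from the sender's own team."""
    return approver_team == sender_team
```

The point is that these rules are encoded once in the process itself, so every send gets the check without your team reviewing each request by hand.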
But often the consequence isn’t large, and isn’t immediate. We often put a gatekeeper in place to prevent minor risks, perhaps as a reaction to something that went wrong before, and fail to weigh the costs of the bottleneck against the cost of the risk.
So, wherever you’ve placed yourself as a gatekeeper, look again, and determine those costs.
I’d bet you can remove many of those manual reviews entirely, and automate the rest.
Interesting links
A Decade of AI Platform at Pinterest by David Liu
Lots of interesting platform lessons in here, including pros/cons of a custom language for feature engineering (Linchpin), adoption challenges, and organisational incentives.
I liked this point on abstractions too:
Any unification is temporary — the future is always unknowable, and today’s stable layer will eventually give way to new abstractions.
Just as the Data Warehouse Defined BI, the Context Layer Will Define AI by Prukalpa
Instead of dozens of teams building isolated context stores, we can build a shared, federated context layer — one that reflects how the organization actually thinks and operates.
I like the sound of this.
It’s a game we’ve all played before, and you can play it again in your browser!
Being punny 😅
I just found there’s a new documentary about Rolex. I’ve added it to my watchlist.
Thanks! If you’d like to support my work…
Thanks for reading this week’s newsletter — always appreciated!
If you’d like to support my work consider buying my book, Driving Data Quality with Data Contracts, or if you have it already please leave a review on Amazon.
🆕 Also check out my self-paced course on Implementing Data Contracts.
Enjoy your weekend.
Andrew