Defining Your Paranoia Level: Navigating Change Without the Overkill
Welcome to the new week! The biggest challenge in adopting new technology or practices isn't the time needed to learn them; it's unlearning the practices we've relied on so far. Learning new technology can be difficult for many reasons: complex APIs, unfamiliar patterns, tricky concepts. But that's not really it. The real problem is that our brains are wired with years of solutions that worked before. We've built intuitions, developed gut feelings about what's "right", and gotten comfortable with certain ways of solving problems. When a new approach contradicts these intuitions, our first reaction is usually resistance.
We waste time trying to make the new technology work like our old tools instead of understanding what makes it different. Too often, when we see something new, we assume it must be hard.

The Challenge of Unlearning

Let's take document databases as an example. Many developers coming from a relational database background immediately ask: how do I do JOINs?
The real answer is simple but uncomfortable: you don't.
And this realization should lead to deeper questions. Instead of thinking about joins, think about access patterns. Who's reading this data? How often? What parts do they need together?

Consider a game matchmaking system. In a relational world, players, teams, matches, and game stats would live in separate tables, joined together for each query. The traditional mindset tells us this is the "correct" way: normalize everything and avoid redundancy at all costs.

But let's challenge that thinking. In a document database, you might store the match as a single document with embedded team compositions, player stats, and match outcomes. Yes, player data is duplicated. Yes, updating a player's username means updating multiple match documents. But let's think about what actually happens in real games. How often do players change their usernames? Even in games that allow it, it's a relatively rare event. When it happens, it's acceptable to take a few seconds to update historical matches, because that mirrors how games actually work. Players care about finding their match history, analyzing their performance trends, and seeing their progression over time. They don't care if their old username takes a moment to update in historical matches.

This isn't just about technical choices; it's about matching our data model to reality. When players browse their match history, or when you're analyzing game patterns, you have all the context in one place. Each match document tells a complete story: who played, what happened, and how the game unfolded. This natural alignment with real-world usage patterns often leads to simpler, more maintainable systems.
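To make that concrete, here is a rough sketch of what such a self-contained match document could look like. The field names and shapes are illustrative assumptions, not a schema the article prescribes.

```typescript
// Illustrative shape of a self-contained match document (field names are assumptions).
// Player data is intentionally duplicated so the document tells the whole story on its own.
type MatchDocument = {
  matchId: string;
  mode: 'ranked' | 'casual';
  startedAt: string; // ISO timestamp
  finishedAt?: string;
  teams: Array<{
    teamId: string;
    players: Array<{
      playerId: string;
      username: string; // denormalized copy; rarely changes, so occasional fan-out updates are fine
      rating: number;
      stats: { kills: number; deaths: number; assists: number };
    }>;
  }>;
  outcome?: { winningTeamId: string; durationSeconds: number };
};

// Reading a player's match history becomes a single query over match documents
// (e.g. "all matches where teams.players.playerId equals X"), with no joins involved.
const example: MatchDocument = {
  matchId: 'match-123',
  mode: 'ranked',
  startedAt: '2025-02-10T18:30:00Z',
  teams: [
    {
      teamId: 'team-a',
      players: [
        {
          playerId: 'player-1',
          username: 'NightOwl',
          rating: 1840,
          stats: { kills: 7, deaths: 3, assists: 11 },
        },
      ],
    },
  ],
};
```

The trade-off is exactly the one described above: a username change fans out to many documents, but everyday reads need only one.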
The Art of Gradual Transition

The process of unlearning doesn't mean throwing away everything we know. When we try to force our old habits onto new technologies and practices, we often end up hurting ourselves.

Take Event Sourcing. It's a simple pattern, almost primitive. Instead of overwriting the state, we record a new event. When we need to execute business logic, we fetch previously recorded events, interpret them, make a decision, and record another fact. An event store is really just a key-value database in which the key is the record identifier and the value is a list of events. That's it. But this simplicity opens up so many possibilities and potential integrations. Suddenly, it feels like our brain might explode. The old way was comfortable, and we could run on autopilot. Here, we have to think, analyze, consider, and keep learning.

Let me share another journey: I used to be a JavaScript and TypeScript hater. I came from a C# and Java background, and I tried to use them precisely like C# and Java. The syntax looked similar enough that it seemed natural to apply the same patterns. The result? Pain and frustration. The breakthrough came when I realized that JavaScript, despite its superficial similarities to C# and Java, is fundamentally different, and much more suited to functional programming approaches.

This realization highlights a common pitfall. Some people (and the tools they build) attempt to blindly transplant patterns from other environments without considering the local ecosystem. While they might offer a quick start for developers coming from specific backgrounds, they often stop at that first iteration without truly embracing the platform's natural conventions. It's like forcing square pegs into round holes: it might work initially, but it creates friction in the long run.

Starting Where You Are

The key is to begin your journey from familiar territory and then iterate based on real experiences. In our matchmaking system, when starting with microservices, don't feel pressured to break everything into tiny services immediately. Start with a single matchmaking service that handles the core logic of putting players together. Get comfortable with that service boundary, understand its communication patterns with the game servers, and let the architecture evolve naturally.

Start with natural boundaries in your domain. In our matchmaking system, think about what actually needs to be separate: matchmaking logic has different scaling needs than player profiles, and match history queries are different from active match management. Start by separating these based on real technical or business needs, not theoretical abstractions. Don't create artificial boundaries or ceremonial interfaces just for the sake of "clean architecture". If match creation needs player data, it needs player data; adding interfaces or abstractions won't change that fundamental dependency. Instead, focus on keeping the data flow clean and explicit. This makes it easier to understand and change the system when real needs arise.

Look for similarities between what you already know and what you're learning. When moving to document databases, you'll notice that a document is like a denormalized view of your data, something you've probably created before with JOINs. The difference is that now you're storing it that way. Or in event sourcing, if you've ever used database triggers or audit logs to track changes, you're already thinking about events; you're just making them the primary source of truth now.

For document databases, start by thinking about how you'd create a view of your data for a specific use case. Your initial structure might mirror your current relational model: separate documents for players, matches, and teams. That's okay. As you work with this model, you'll start seeing where it doesn't fit: maybe you're constantly joining match data with team data in your queries. These pain points guide you toward better document structures, like embedding team data within match documents.

In event sourcing, look for places where you're already tracking changes: PlayerCreated, MatchStarted, TeamFormed. These are probably lurking in your audit logs or status fields. Now you're just making them explicit. Keep your existing business logic at first, but start expressing it with events. Instead of updating a status field to "InProgress", you're recording a MatchStarted event. The business logic is the same; you're just approaching it from a different angle.

The key is recognizing that new patterns often have parallels in what you're already doing. Find these familiar elements and use them as bridges to understand the new concepts. This makes the transition feel more natural and helps you avoid the temptation to force old patterns where they don't belong.
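A minimal sketch may help here. It treats the event store exactly as described above, a key-value structure mapping a record identifier to its list of events, and records a MatchStarted event instead of updating a status field. The event type names, the in-memory Map, and the helper functions are assumptions for illustration, not a particular event store product or API.

```typescript
// Events we already "know" from audit logs and status fields, now made explicit.
type MatchEvent =
  | { type: 'MatchScheduled'; data: { matchId: string; teamIds: string[] } }
  | { type: 'MatchStarted'; data: { matchId: string; startedAt: string } }
  | { type: 'MatchFinished'; data: { matchId: string; winningTeamId: string } };

// An event store reduced to its essence: key = record identifier, value = list of events.
const eventStore = new Map<string, MatchEvent[]>();

const appendEvent = (streamId: string, event: MatchEvent): void => {
  const events = eventStore.get(streamId) ?? [];
  eventStore.set(streamId, [...events, event]);
};

const readStream = (streamId: string): MatchEvent[] => eventStore.get(streamId) ?? [];

// Instead of "UPDATE matches SET status = 'InProgress'", we record the fact that the match started.
appendEvent('match-123', {
  type: 'MatchStarted',
  data: { matchId: 'match-123', startedAt: new Date().toISOString() },
});

// Business logic interprets past events to make decisions, e.g. deriving the current status.
const currentStatus = readStream('match-123').reduce<string>((status, event) => {
  switch (event.type) {
    case 'MatchScheduled':
      return 'Scheduled';
    case 'MatchStarted':
      return 'InProgress';
    case 'MatchFinished':
      return 'Finished';
    default:
      return status;
  }
}, 'Unknown');
```

The business rules stay the same; only the storage primitive changes, from the latest state to the list of facts that produced it.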
The Simplicity Paradox

Sometimes what seems like the most straightforward solution isn't actually the simplest. Our instinct is to solve problems directly: store the current state? Just update a field in the database. Need history? Add a status log table. Need to track changes? Add some timestamps. This direct approach feels natural at first. But as requirements grow, we keep adding more and more patches: audit logs, status histories, temporal queries, analytics tables. Each addition seems reasonable on its own, but they pile up into a complex mess...
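To illustrate how those patches tend to accumulate, here is a hedged sketch of what a "direct" match model often looks like after a few rounds of new requirements; every field is invented for illustration, not taken from the article.

```typescript
// A "direct" model after several rounds of patches (all fields are illustrative assumptions).
// Each addition made sense at the time, but together they rebuild an event log, field by field.
type MatchRecord = {
  matchId: string;
  status: 'Scheduled' | 'InProgress' | 'Finished';
  createdAt: string;   // patch 1: "we need to know when it was created"
  updatedAt: string;   // patch 2: "...and when it last changed"
  startedAt?: string;  // patch 3: "...and when it actually started"
  finishedAt?: string;
  statusHistory: Array<{ status: string; changedAt: string }>;  // patch 4: the status log table
  auditLog: Array<{ who: string; what: string; when: string }>; // patch 5: who changed what
};
```

Compare that with the event stream sketched earlier: there, the history is the model rather than something bolted onto it.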