Dear founder,
The way we build software is changing fundamentally. With AI systems, the process of creation is turning into something different - almost a managerial role, even for technical people.
When you build software with AI assistance, you become less of an explorer or implementer and more of a validator or verifier.
You’re the person who checks and judges the quality of code instead of generating it. We’re moving from generators to validators, from creators to judges - in the best sense of the word.
This shift got me thinking about how all those famous laws and principles that have guided software development and entrepreneurship might be changing in this new AI-assisted world. You know the ones - Murphy’s Law, Brooks’ Law, Conway’s Law - those short phrases people treat as distilled truths. I wanted to explore how building software and software-enabled businesses with AI assistance changes these laws - or whether it changes them at all.
Let me share some insights from my own experiences building Podscan.fm with AI assistance, and how these classic laws now apply - or don’t - in this brave new world of prompts and completions.
Kidlin’s Law: Problem Definition is Half the Solution
One of the clearest and most applicable laws when working with AI systems is Kidlin’s Law, which states that if you write a problem down clearly and specifically, you have solved half of it. This becomes even more true with AI assistance.
Specifying how you want the work done through a well-formulated prompt involves most of the actual thinking: the structure, the tasks, the results, the inputs, and the outputs. When I provide a prompt to an AI system, I’m communicating an intention with varying levels of input - from a simple “create a list of X, Y, Z” to more complex instructions.
For example, I might say: “I have this existing codebase, and I want a feature that takes data from this database with this schema and displays it on a page - or uses a backend process to transform it into something else.”
In this prompt, I’ve already defined the input (both the data structure and the code that currently handles it), the output (what I want the result to look like), and often even how the transformation should happen. I might explicitly state “aggregate over a 7-day timeframe” or implicitly say “figure this out from the existing code that describes how this data is currently being handled.”
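To make that concrete, here’s a rough sketch of what such a prompt might look like if I spelled everything out - the table, columns, and wording are hypothetical placeholders, not Podscan’s actual schema:

```python
# A sketch of a "written down clearly" prompt: inputs, outputs, and the
# transformation spelled out. Table and column names are hypothetical,
# not Podscan's actual schema.
SCHEMA = """
CREATE TABLE podcast_episodes (
    id BIGINT PRIMARY KEY,
    podcast_id BIGINT NOT NULL,
    published_at TIMESTAMP NOT NULL,
    mention_count INT NOT NULL DEFAULT 0
);
"""

prompt = f"""
Context: the relevant part of my existing schema.
{SCHEMA}
Task: add a feature that shows mention activity per podcast.

Input: rows from `podcast_episodes` for a single `podcast_id`.
Transformation: aggregate `mention_count` over a 7-day timeframe.
Output: a page section listing each week's start date and total mentions,
following the conventions of the existing code included below.
"""

print(prompt)
```

The point isn’t the exact wording - it’s that input, transformation, and output are all decided before the machine writes a single line.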
Kidlin’s Law applies even more strongly than before because old-style specifications, at least the ones I remember, were vague in the sense that they left room for experimentation during implementation. Even if you built software in an Agile way with sprints and rough planning, there was always room for discovery during the process.
With AI-assisted development, that room narrows considerably. When I write a spec for a feature, I want it to be so well-defined that the machine has minimal room to get it wrong. A well-formulated prompt front-loads most of that thinking: structure, tasks, parts, results, inputs, and outputs.
Postel’s Law Reversed: Liberal in Sending, Conservative in Receiving
Here’s another interesting flip: Postel’s Law, also known as the robustness principle. It traditionally states: “Be conservative in what you send and liberal in what you accept.” The idea is that a system is best designed when it is strict and predictable in what it sends out (making it reliable) while interoperating well with other systems by accepting many kinds of input.
Working with AI systems, I’ve found this has completely turned around. When communicating with AI, you want to be very liberal in what you send because you don’t fully understand how these systems interpret information, but you must be very conservative in what you accept.
Case in point: coding for Podscan with AI. The more context I provide, the better the result fits my needs, and the less I have to push back on the code that comes back. When building a feature, I try to find every single file that touches either the data model or the request flow that the feature involves.
For instance, when I need something related to podcast episodes, I gather the podcast episode data model, the database schema for podcast episodes, the controllers I use, and maybe views that show that model. There are a lot of different things in the codebase that I find and put into my prompt as context for the machine.
I’m very liberal in how much data I supply, even if many of the podcast model functions have nothing to do with the specific feature I want. I still want the AI to understand how I usually interact with data, write to the database, and handle caching in most cases. The AI needs to get a feel for the “smell” of my code - the processes and concepts baked into the system.
But I’m extremely conservative in accepting what fits back into the codebase. I frequently find myself saying “No, this is wrong” or “This doesn’t follow how I want that data to be handled” or “This implements a completely new paradigm of data interaction.” The AI has to do things the way it has seen them done in the other functions I’ve provided - not introduce new paradigms.
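To illustrate the “liberal in what you send” half, here’s a minimal sketch of how that context-gathering could be automated - the file paths and layout are hypothetical stand-ins, not Podscan’s real structure:

```python
# A sketch of "liberal in what you send": collect every file that touches
# the podcast-episode model and concatenate it into one context block.
# The paths are hypothetical stand-ins, not Podscan's real layout.
from pathlib import Path

CONTEXT_FILES = [
    "app/Models/PodcastEpisode.php",                # data model
    "database/schema/podcast_episodes.sql",         # schema
    "app/Http/Controllers/EpisodeController.php",   # request flow
    "resources/views/episodes/show.blade.php",      # view that renders it
]


def build_context(root: str) -> str:
    """Concatenate each file with a header so the model knows where code lives."""
    parts = []
    for rel in CONTEXT_FILES:
        path = Path(root) / rel
        if path.exists():  # in this sketch, silently skip anything missing
            parts.append(f"// File: {rel}\n{path.read_text()}")
    return "\n\n".join(parts)


if __name__ == "__main__":
    print(build_context("."))
```

The “conservative in what you accept” half has no script - that part is me, reading the result line by line.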
Gilbert’s Law: Finding the Best Solution is Your Responsibility
This brings us to Gilbert’s Law, which states that when you take on a task, finding the best way to achieve the desired result is always your responsibility. Until software is written completely by autonomous agents without human involvement, you are responsible for verifying or dismissing what the machine produces.
When I tell an AI system to write code for Podscan, I need to read every single line of it. I need to make sure the intention behind the feature I set out to build is accurately reflected in that code.
The machine’s “best way” might not actually be the best due to lack of context, the wrong context, the wrong prompt, or the wrong examples it has in its training data. My responsibility is to take this code and judge it harshly - not because the machine did something wrong, but because I don’t know what the right code is until I see a result that fits.
I might need to ask the AI to rewrite it in a different style to better fit my codebase, or to build it differently for future extensibility. That judgment remains my responsibility - and that is something Gilbert’s Law captures perfectly.
Falkland’s Law: Don’t Decide If You Don’t Have To
Falkland’s Law states that if you don’t have to make a decision about something, then don’t decide. I’m a big fan of abstractions and making things extensible from the start, so you don’t need a full rewrite when extensions are required later.
AI systems are surprisingly good at building things in an extensible way without over-optimizing for future extensibility. In my prompts, I can specify: “Make it extensible, but not crazy extensible. Make it extensible enough so I can switch email providers if needed, or so that if I need a new payment platform, I can easily integrate it. Or make it extensible so I can switch the backend for this microservice.”
That’s the kind of extensibility you can tell the machine to implement, and then neither you nor the AI has to make decisions about direct implementation. The AI can implement some kind of interface or module or whatever is appropriate in your language of choice.
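As a rough illustration - assuming a hypothetical email-sending example, not actual Podscan code - “extensible, but not crazy extensible” might look like a single small interface and nothing more:

```python
# A minimal sketch of "extensible, but not crazy extensible": one small
# interface so the email provider can be swapped later, and nothing more.
# Provider names and methods here are hypothetical illustrations.
from abc import ABC, abstractmethod


class MailProvider(ABC):
    """The only abstraction: enough to switch providers, no plugin framework."""

    @abstractmethod
    def send(self, to: str, subject: str, body: str) -> None: ...


class PostmarkProvider(MailProvider):
    def send(self, to: str, subject: str, body: str) -> None:
        print(f"[postmark] {to}: {subject}")  # stand-in for a real API call


class SesProvider(MailProvider):
    def send(self, to: str, subject: str, body: str) -> None:
        print(f"[ses] {to}: {subject}")  # stand-in for a real API call


def notify(provider: MailProvider, to: str) -> None:
    # Application code depends on the interface, not on a specific provider.
    provider.send(to, "New mention found", "A podcast just mentioned your brand.")


notify(PostmarkProvider(), "founder@example.com")
```

One seam, no speculative plugin system - the decision about which provider ultimately wins is deferred, exactly as Falkland’s Law suggests.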
This extensibility is particularly important because when you later extend the code, the surface area for errors becomes much larger if you didn’t build with extension in mind. When AI tooling takes in old code to rewrite it, it might forget that other components depend on it. Starting with reasonable extensibility helps avoid these future headaches.
Brooks’ Law Reversed: More Agents May Speed Development
Now let’s look at some laws that are being challenged by AI. Brooks’ Law famously states that “adding manpower to a late software project makes it later” - the mythical man-month compressed into one sentence. With AI tools, this feels different.
If you have agents building software in the background, adding one more agent (particularly if they communicate with each other and can test their solutions) might actually benefit development speed. It does clash with what we just discussed in Gilbert’s Law - you’re still responsible for integration - but as we move toward self-contained, agentic coding systems that can build, run, and verify code, adding more engineering power becomes a benefit rather than a drawback.
The less human verification needed and the more machines can verify their own creations, the more we might be able to overcome the limitations that Brooks identified decades ago. Having multiple agents working in parallel could fundamentally change the economics of software development.
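A purely speculative sketch of what that might look like: several agents produce candidate patches in parallel, and only the ones that pass their own tests survive. The `run_agent` and `run_tests` functions here are hypothetical stand-ins, not a real agent API:

```python
# A speculative sketch of adding "more engineering power" as agents rather
# than people: run several agents in parallel and keep only the candidates
# whose tests pass. run_agent and run_tests are hypothetical stand-ins.
from concurrent.futures import ThreadPoolExecutor


def run_agent(agent_id: int, task: str) -> str:
    """Pretend each agent returns a patch for the same task."""
    return f"patch-from-agent-{agent_id} for: {task}"


def run_tests(patch: str) -> bool:
    """Stand-in for building the patch and running the test suite against it."""
    return True


def parallel_candidates(task: str, n_agents: int) -> list[str]:
    with ThreadPoolExecutor(max_workers=n_agents) as pool:
        patches = list(pool.map(lambda i: run_agent(i, task), range(n_agents)))
    # Machine-side verification: only patches that pass their tests survive.
    return [p for p in patches if run_tests(p)]


print(parallel_candidates("aggregate mentions over 7 days", n_agents=3))
```

In that world, adding an agent mostly adds compute, not communication overhead - which is exactly the cost Brooks was worried about.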
Hofstadter’s Law Challenged: Can AI Beat the Schedule?
Similarly, Hofstadter’s Law says: “It always takes longer than you expect, even when you take into account Hofstadter’s Law.” Software projects certainly take time, but AI might offer a way to speed development by having multiple agents work simultaneously.
I find it fascinating to imagine not just one agent working on your code, but several doing it concurrently, competing on speed, performance, reliability, security, or any other benchmarkable aspect. Multiple sources generating the same feature in different ways, and then seeing how the results stack up against each other - that’s an intriguing prospect for future AI-assisted development. This competitive approach could overcome some of the scheduling challenges that have plagued software projects for decades.
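Here’s a toy sketch of that tournament idea: candidate implementations of the same feature are checked for correctness first, then benchmarked against each other, and the fastest correct one wins. The candidates are deliberately trivial placeholders:

```python
# A toy sketch of competing implementations: correctness first, then speed,
# and the fastest correct candidate wins. Real candidates would come from
# different agents or models; these are trivial placeholders.
import time


def candidate_sum_loop(values: list[int]) -> int:
    total = 0
    for v in values:
        total += v
    return total


def candidate_sum_builtin(values: list[int]) -> int:
    return sum(values)


def pick_winner(candidates, data, expected):
    results = []
    for fn in candidates:
        start = time.perf_counter()
        output = fn(data)
        elapsed = time.perf_counter() - start
        if output == expected:  # correctness gate before any speed comparison
            results.append((elapsed, fn.__name__))
    return min(results)  # fastest among the correct candidates


data = list(range(100_000))
print(pick_winner([candidate_sum_loop, candidate_sum_builtin], data, sum(data)))
```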
Conway’s Law and Prompt Culture
Conway’s Law states that “organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations.” I believe we’ll see the way people prompt AI become an organizational structure in itself, producing certain kinds of code that others who prompt differently might not even comprehend.
The benefit of working with humans is multi-level feedback. Your colleagues give professional expertise, but they also provide feedback on communication clarity - “Hey, you’re pretty unclear when you say this” or “You think that means something, but to me, it means something else.”
An AI system might not criticize your communication style - it simply infers your meaning and projects that into the code. It just takes what you say and runs with it.
This means something gets encoded in the code that isn’t necessarily related to the functionality - it’s the way you communicate what the code should be. And since we all communicate with AI very individually - just between ourselves and the machine, with nobody else seeing that conversation - there’s a fascinating challenge ahead: keeping prompting styles consistent and compatible within organizations.
We might see unexpected siloing even within small developer teams as different prompting styles clash continually. Perhaps we’ll need prompt mediators or translators that normalize communication styles to prevent Conway’s Law from affecting code quality on a per-prompt basis.
I have no solution for this challenge yet, but I think Conway’s Law will very likely affect code on a per-prompt generation basis, which is pretty interesting when you think about it.
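Not a solution, but to make the prompt-mediator idea slightly more concrete, here’s a rough thought-experiment sketch of a normalization step - the template and house-style choices are entirely hypothetical:

```python
# A thought-experiment sketch of a "prompt mediator": every request is
# normalized into a shared house-style template before it reaches the model,
# so individual phrasing doesn't leak into the resulting code. Entirely
# hypothetical - the template and constraints are made up for illustration.
TEAM_TEMPLATE = """Goal: {goal}
Relevant context: {context}
Constraints: follow existing patterns in the provided code; no new paradigms.
Output format: a unified diff only."""


def mediate(raw_goal: str, context: str) -> str:
    """Rewrite an individually phrased request into the team's shared format."""
    return TEAM_TEMPLATE.format(goal=raw_goal.strip(), context=context.strip())


print(mediate(
    "pls add a 7-day mention chart to the episode page!!",
    "EpisodeController, PodcastEpisode model",
))
```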
The Peter Principle: AI Rising to Its Level of Incompetence
Finally, because it’s always funny to think about, there’s the Peter Principle: “In a hierarchy, every employee tends to rise to his level of incompetence.” With developers using AI systems to write increasingly complex code, we’ll find these systems pushed to the point where they become unreliable or frankly incompetent at creating solutions that actually work.
It’s relatively easy to build a CRUD app with AI assistance. It’s easy to build a simple game. But complicated apps, games with highly interdependent backend processes, or machine-proximate code that needs to be memory-safe and generally secure? AI might not match the tacit knowledge of experienced developers.
We will build more and more with AI systems until we find their capability ceiling - and the models we use right now clearly have one. Of course, newer and better models will be trained on more reliable code, but think about this: there might be skill atrophy in the humans using AI, and there’s definitely a skill ceiling in AI systems trained only on publicly available code (which has its own problems and isn’t necessarily complete, capable, or secure).
There is a level of incompetence that might be very high - hard for humans to reach - but it still exists. And I think it becomes ever more dangerous to let these systems run by themselves in an agentic fashion once they reach that level.
For now, as long as we can test and reliably verify the results of AI systems, we still have control. But if they run autonomously and get “better and better,” they might eventually hit a point where they actually get worse and worse while believing themselves to be correct, thanks to a history of assumed correctness.
The Bottom Line
Most of these laws still apply in the AI era, and it’s worth knowing and understanding them - particularly for the unintended side effects of prompting AI systems and of using AI-generated code in collaborative codebases.
The key takeaway from my experience is the importance of clearly specifying what you want with as much context as possible to get results that don’t come back to bite you. As we transition from creators to validators, our responsibility shifts but doesn’t diminish - if anything, it becomes more crucial than ever.
Some principles remain unchanged, some flip completely, and others face new challenges in this AI-assisted world. Understanding these dynamics helps us navigate this changing landscape more effectively as entrepreneurs and developers.
I’d love to hear your thoughts on these laws and how you’ve seen them play out in your own AI-assisted development. Are there other principles you’ve seen that have changed? Do you have strategies for dealing with the new challenges I’ve outlined? Let me know in the comments or reach out on Twitter - I’m always curious to learn from other people’s experiences in this rapidly evolving space.
🎧 Listen to this on my podcast.
If you want to track brand mentions, search millions of transcripts, and keep an eye on chart placements for podcasts, please check out podscan.fm — and tell your friends!
Thank you for reading this week’s essay edition of The Bootstrapped Founder. Did you enjoy it? If so, please spread the word and share this issue on Twitter.
If you want to reach tens of thousands of creators, makers, and dreamers, you can apply to sponsor an episode of this newsletter. Or just reply to this email!