Dear founder,
Earlier this week, I read a blog post by Bethany Crystal about how ChatGPT saved her life.
She had developed symptoms she couldn’t place and had blood work done. The results were available to her, but her physician hadn’t yet looked at them, so she brainstormed with ChatGPT about what the abnormal, out-of-range values might mean.
She took photos of the bruises that had appeared out of nowhere, and the AI’s assessment was serious enough that it recommended she go to an emergency room.
When she got there, medical staff told her she was there just in time.
There’s something amazing about this story on many levels. It illustrates how everybody now has access to a healthcare advisor or consultant that is constantly available, with knowledge of virtually every illness and how to potentially diagnose it. These AI systems have read countless accounts of illnesses in many variations, likely encountering more variants than any individual physician could possibly see in their career. And yet, they’re programmed with guardrails that prevent over-diagnosing and quickly defer to actual experts in the field, to emergency room physicians, and to first responders.
And at the same time, we’re using these exact same AI systems to write code for our software products, draft marketing emails, or conduct research. With the reasoning models now available, they can pull information from a vast context of documents and retain nearly everything they consume within that context.
I recently had my own interesting AI experience. I took a photo of my collection of unpainted Warhammer miniatures: the boxes of still-unassembled figures sitting on my shelf, waiting to be built and painted. I asked the AI to put together a list of the order and configuration in which I should build them so I could use them to actually play games. From a photo of boxes whose contents the AI had never seen, it created a genuinely in-depth list for a little miniature army that I can now build, paint, and know exactly how to use in the game.
Just a couple of minutes later, I gave the same AI a significant portion of my codebase and asked it to refactor a certain part so that the output format would better match a customer’s expectations. The customer wanted a particular kind of data, and the exact same AI that diagnosed somebody’s urgent need for emergency care and built an army list for my Warhammer hobby was then able to build a component that fits seamlessly into my existing software product.
It’s mind-blowing that there’s so much variety in these systems. To a smaller extent, but still very similarly, that variety exists in locally installable large language models as well. If you were to install Llama 3.1 locally (the 8-billion-parameter version) or one of the smaller Llama 3.2 text-only models, they could still produce results like these, maybe not as extensive or as well reasoned as what the modern OpenAI o3 or the very recently released Claude 3.7 Sonnet can do, but these models all just keep getting better. Every newer version is a little better than the old one, and the old ones are still pretty good.
What blows my mind is the massive availability of this technology. The fact that the Claude API starts responding within seconds to any query, whether it’s an uploaded photo of miniatures or bruises or source code. I occasionally take screenshots of my interface, attach the relevant source code, and ask, “What’s wrong?” because something just doesn’t work. Then, magically, the model figures out how the visual components pictured in the image map onto the code, and then fixes that code so that the result looks more like what I want, all within seconds. And if you run a model locally, not much longer. That is incredible.
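If you’re curious what that screenshot-plus-code workflow looks like under the hood, here’s a minimal sketch against the Anthropic API. This is my own illustration, not PodScan code: the file paths and the model name are placeholders, and you’d adapt the prompt to your own stack.

```python
import base64
from pathlib import Path

import anthropic  # official Anthropic Python SDK

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Placeholder paths: a screenshot of the broken UI and the component behind it.
screenshot_b64 = base64.standard_b64encode(
    Path("screenshot.png").read_bytes()
).decode("utf-8")
component_source = Path("Component.vue").read_text()

message = client.messages.create(
    model="claude-3-7-sonnet-latest",  # placeholder; use whichever model you have access to
    max_tokens=2048,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/png",
                        "data": screenshot_b64,
                    },
                },
                {
                    "type": "text",
                    "text": "Here's a screenshot of my UI and the component's "
                    "source code. What's wrong, and how do I fix it?\n\n"
                    + component_source,
                },
            ],
        }
    ],
)

print(message.content[0].text)
```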
I believe that if you don’t yet use AI as a founder, as a hobbyist, or just as a human being on any level, you’re actually missing out on something very meaningful: access to an expert all the time.
To bring this back to the entrepreneurial side (we are on The Bootstrapped Founder podcast, after all), the mindset of prompting an AI system to tackle almost any task, and of going AI-first in brainstorming, figuring things out, planning, and thinking, is something we can map onto our business ventures.
Instead of collecting information ourselves or coming up with strategies from what we already know, I have found it more effective, and the results more impactful, to approach a conversation with AI as if it were a more experienced expert in the field. I use it to figure out whether my theories about what the next strategy should be, or which tactic I can use for a given purpose, are valid or invalid. I trust that the collective knowledge an AI response draws from is broader than whatever individual knowledge I might have.
So when I have a marketing challenge or want to reach a particular customer segment because I see an incredible benefit for them in my product, instead of trying to figure it out on my own, I open up a conversation with an AI system. I use different AIs for different purposes, just to see how the responses vary.
With Claude in particular, which is the one I use most of the time, I go into what I call “conversation mode.” I often record a brainstorm that I do all by myself, talking about something for 5-10 minutes, putting everything I know, everything I assume, and everything I’m not quite sure about into that conversation. Then I prompt Claude to start a back-and-forth exchange with me about this topic, brainstorming strategies and trying to validate or invalidate any theories I might have. I post the full transcript of my solo brainstorming, and the conversation starts from there.
Usually—particularly if I instruct Claude to respond in a structured way, giving examples for ideas, providing blog post URLs to previous attempts at something similar, or sharing narrative examples of companies that have used this strategy to great effect or struggled with it—I get a conversation that I feel I could not have had with another human being for two reasons:
First, it would have been very expensive to find that expert and pay them for their time, whereas Claude is effectively free. The nominal subscription fee wouldn’t even buy me a minute with the expert I’d want to talk to.
Second, it would have taken me weeks to set up: days just to find a date, on top of figuring out how to reach that person and navigating their availability. With an AI system, the conversation can start within seconds, entirely on my own terms. Nobody else is involved in making it a priority; I get to do this when and how I want. That is very alluring.
Previously, I would have had to find the expert and do all the research myself, spending hours on the initial groundwork just to figure out which strategy might be interesting for the next marketing push.
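For the technically curious, here’s roughly what my conversation-mode setup looks like as an API call. Again, a sketch under my own assumptions: the system prompt wording, the transcript path, and the model name are placeholders you’d swap for your own.

```python
from pathlib import Path

import anthropic  # official Anthropic Python SDK

client = anthropic.Anthropic()

# My own framing of "conversation mode": instruct the model to brainstorm
# in a structured, back-and-forth way. The wording is illustrative, not a recipe.
system_prompt = (
    "You are an experienced marketing and product strategy advisor. "
    "Respond in a structured way: validate or invalidate my theories, "
    "give concrete examples of companies that tried something similar, "
    "and end every reply with one probing follow-up question."
)

# Placeholder path: the transcript of my 5-10 minute solo brainstorm.
transcript = Path("solo-brainstorm-transcript.txt").read_text()

reply = client.messages.create(
    model="claude-3-7-sonnet-latest",  # placeholder model name
    max_tokens=2048,
    system=system_prompt,
    messages=[{"role": "user", "content": transcript}],
)

print(reply.content[0].text)
```

From there, every answer I type gets appended to the messages list, and the back-and-forth continues.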
Take a recent example: we introduced the ability for podcasters to claim their own podcast on PodScan. This feature came out of exactly such a conversation with an AI system. I had been telling it about my last couple of months of marketing activities, about opening up PodScan’s data more to the public, and about the several requests I’d received from people who had found their podcast on PodScan through Google and wanted to update its information.
I’d get messages like, “Hey, this is not fully correct. The publisher should have this letter in the name,” or “Hey, my podcast does accept guests. Can you put that in there?” I’d quickly update the data myself, but in the conversation within my marketing project on Claude, the idea came up to let these people do this themselves. We could have them prove that they own the podcast and then give them limited but impactful control over its data: editing the name if there’s a weird letter or encoding issue, updating the description, formats, and publishing frequency.
They can also update the size of their audience, which is extremely beneficial. By now, our machine learning system that estimates audience sizes has become very good and quite reliable, but it’s not perfect. Sometimes people say, “Hey, I have way more people in my audience than this,” or “I have far fewer than this. This data is not accurate.” Now they can put in the real number and show proof that they have it. They’re happy because accurate information is properly reflected on the PodScan platform, and I’m happy because I now have customer-verified audience numbers that I can use to train my machine learning model to be more accurate. As customers stream in claiming their podcasts, I get more accurate data, which makes it easier for people to see PodScan as an interesting and reliable product.
All of those benefits are the consequence of one conversation about what the next step could be in making the product more visible to its target audience: anybody in the world of podcasting. That conversation cost me effectively $2 with an expert that knows everything about marketing, product development, and software product adoption. And that same expert also helped me build my Warhammer army and helped Bethany Crystal get to the ER in time to deal with an illness whose blood results her doctor hadn’t even looked at. That is incredible.
If you take anything away from this, it’s the concept that at this point, it’s much cheaper and, in aggregate, more effective to think with AI and do the work with people than to do the work with AI and think with people.
To explain: this brainstorming, this planning, this back-and-forth between me and the AI is really only limited by how fast I can type, because the AI responds more quickly than I can absorb. If I have a thought and express it, the AI can respond immediately. The moment I send it in, it already starts responding, because that’s how these language models work: they respond one token at a time, but they respond very quickly. The speed of that conversation is really just capped by my capacity to keep up.
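You can see that token-by-token behavior directly if you stream responses from the API. A tiny sketch, again with a placeholder model name and prompt:

```python
import anthropic  # official Anthropic Python SDK

client = anthropic.Anthropic()

# Streaming makes the token-at-a-time behavior visible: text arrives as it is
# generated, so the conversation is only as slow as my reading and typing.
with client.messages.stream(
    model="claude-3-7-sonnet-latest",  # placeholder model name
    max_tokens=1024,
    messages=[{"role": "user", "content": "Poke holes in this launch plan: ..."}],
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
```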
The work—the actual implementation—is still very complicated and context-dependent, which is something many AI systems still struggle with. Particularly when it comes to code implementation in larger codebases, AI tools are still sub-optimal. They’re great, they do a lot of work faster than people do, and they might do it more reliably than some developers, but their output still needs hand-holding, correction, and investigation. So the work part is still better done by humans who use AI as a tool, not by AI agents working completely independently. We’ll get there, and I feel we might even get there this year, but for now, the thinking part—the brainstorming, idea generation, verification, validation, invalidation, anything that has to do with getting an outside perspective—I will always go AI-first, because it’s cheaper, faster, and gives me the whole perspective, or many different perspectives if I instruct the AI accordingly.
After that, I talk to my team. That’s how the “claim your podcast” feature actually happened. I talked to the AI, set it up, built prototypes and experiments, and then had a conversation with my UX designer, Nick, who wears many hats too. After the whole thing was strategized, we discussed how to monetize it. Do we monetize it at all, or do we allow access to podcast claims for free in perpetuity as a freemium part of the business? That operational conversation involving multiple moving parts was one I had with a human. But the first level was all between me and the AI.
The result is that podcast claiming is now a freemium part of the product. It makes so much sense from a business perspective to allow anybody who has a podcast to maintain its data for more fidelity and accuracy on the platform without having to pay anything. They only have access to their own podcast; if they start a trial, they get access to every feature within PodScan. When the trial runs out, all they can do is edit their podcast, and if they want to do more, they can subscribe. If they want alerts for their podcast, we can remind them that it’s a possibility, and we could even track alerts for them so they could see what they’re missing. All of these possibilities sit outside the freemium plan.
Because we now have podcasters on the platform, I can also present them with information about competing podcasts or potential guests they might want to invite. There’s value in having podcasters on the platform, not just marketers, researchers, journalists (my main audience before), or builders who need podcast data for their own products.
So it’s a freemium part of the product. If you have a podcast, it’s likely already listed on PodScan, so feel free to claim it and check it out. This feature came up because of AI, and it was implemented through AI. I used Claude, the results of the brainstorming, and the existing data models around podcasts, users, and tasks to build the backend logic and frontend for claiming podcasts: ownership is verified by checking the podcast’s RSS feed for a token, with a background system that runs those checks.
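If you want a feel for how that kind of verification might work, here’s a minimal sketch. It isn’t PodScan’s actual implementation: the function names, the choice of where the token lives in the feed, and the feedparser library are all my own assumptions for illustration.

```python
import secrets

import feedparser  # third-party RSS parser, assumed available (pip install feedparser)


def generate_claim_token() -> str:
    """Create a random token the podcaster adds to their RSS feed."""
    return f"podscan-claim-{secrets.token_hex(16)}"


def feed_contains_token(feed_url: str, token: str) -> bool:
    """Fetch the RSS feed and look for the claim token in its channel fields.

    A real system would be stricter about where the token may appear,
    handle network errors and timeouts, and run inside a queued background job.
    """
    feed = feedparser.parse(feed_url)
    haystack = " ".join(
        str(feed.feed.get(key, "")) for key in ("title", "subtitle", "summary")
    )
    return token in haystack


def verify_pending_claim(feed_url: str, token: str) -> str:
    """Hypothetical background job: re-check a pending claim until the token shows up."""
    return "verified" if feed_contains_token(feed_url, token) else "pending"
```

Once the token shows up in the feed, the claim flips to verified and the podcaster gets their limited editing rights.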
Everything about this feature, from inception to implementation, was first AI-conceptualized and then, with the assistance of an AI system, implemented by me. PodScan is an AI business on both sides: it uses AI heavily in its operations to extract data, figure things out, and summarize, but it is also built on, through, and with AI.
Any business that doesn’t look to AI first at this point is wasting money and time, because AI is going to be there immediately and good enough for the first steps. If you don’t use this, you’re actually hamstringing your operations.
🎧 Listen to this on my podcast.
If you want to track brand mentions, search millions of transcripts, and keep an eye on chart placements for podcasts, please check out podscan.fm — and tell your friends!
Thank you for reading this week’s essay edition of The Bootstrapped Founder. Did you enjoy it? If so, please spread the word and share this issue on Twitter.
If you want to reach tens of thousands of creators, makers, and dreamers, you can apply to sponsor an episode of this newsletter. Or just reply to this email!