
I Built My Way Out of AI Anxiety

Core product management work is being automated. I had two choices: worry about it, or do something about it. I chose to build.


Core product management work is being automated. Not “will be”. Is being. Right now.

Specs, call summaries, competitive analysis, slide decks, data synthesis. Done by AI systems in minutes. Often better than I could do it.

That realisation hit me hard towards the end of last year. And I had two choices: worry about it, or do something about it.

I chose to build.

What I actually built

I build on top of Daniel Miessler’s PAI, his open-source Personal AI Infrastructure. It’s not a chatbot. It’s a full system: memory, skills, agents, hooks, voice, a learning loop. All built in. The idea that AI should be persistent infrastructure you build your life on, not a tool you visit. That’s Miessler’s, and it’s the right framing as far as I’m concerned.

What I’ve done is configure it deeply for how I work as a PM (my goals, my projects, my writing voice, my decision-making context) and built automations on top that run every day without me touching them. The platform is Miessler’s. The application to product leadership is mine.

The daily research digest

I’ve subscribed to The Economist for years. I already had a daily reminder to read a section each morning and apply what I’d learned to my work. The habit was there. What I didn’t have was a way to filter a broadsheet newspaper through the lens of what I’m actually working on right now.

So I automated it. Every morning at 7am, a workflow fires. It pulls that day’s Economist articles via RSS, different sections on different days (Britain and Europe on Sunday, Business and Finance on Wednesday, Science and Culture on Thursday). It feeds each article to Claude with a document I wrote that describes my goals, my product’s competitive landscape, and the strategic questions I’m currently working through.

Claude reads each article against those priorities and decides: is this relevant to what I’m working on right now? If yes, it writes a summary explaining why. Not just what the article says, but why it matters to my specific situation. If no, it skips it. The output arrives in my inbox as a formatted digest before I’ve finished my coffee.
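The skeleton of that workflow is simple enough to sketch. This is a minimal, illustrative version: the section names and weekday rotation are stand-ins, and the relevance judge is passed in as a function (in the real workflow it wraps a Claude call with the priorities document in the prompt).

```python
from datetime import date
from typing import Callable, Optional

# Weekday rotation (Monday = 0). Section names are illustrative,
# not the feed's actual RSS categories.
SECTIONS_BY_WEEKDAY = {
    6: ["Britain", "Europe"],                   # Sunday
    2: ["Business", "Finance and economics"],   # Wednesday
    3: ["Science and technology", "Culture"],   # Thursday
}

def sections_for(day: date) -> list[str]:
    """Which sections today's run should pull from the feed."""
    return SECTIONS_BY_WEEKDAY.get(day.weekday(), [])

def build_digest(
    articles: list[dict],
    judge: Callable[[dict], Optional[str]],
) -> list[dict]:
    """Run each article past the relevance judge.

    `judge` returns a why-it-matters summary, or None to skip the
    article. In the real workflow it's an LLM call whose prompt
    includes the priorities document.
    """
    digest = []
    for article in articles:
        verdict = judge(article)
        if verdict is not None:
            digest.append({"title": article["title"], "why": verdict})
    return digest
```

Notice how little of this is interesting. All the leverage sits inside `judge`, which is to say, inside the priorities document that tells it what "relevant" means.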

What surprised me: I thought automating the reading would save me time. It didn’t. Not much, anyway. I still read the articles. What changed was what I noticed. The system consistently surfaced connections I would have missed. A regulatory shift in a market I don’t usually track that had direct implications for my product’s compliance positioning. A hiring pattern at a competitor that suggested a strategic pivot. Things that were always in the paper, but that I’d have skimmed past without the priorities filter pointing at them and saying: this one matters to you, and here’s why.

The automation didn’t replace my reading. It gave me better eyes.

The thing that taught me the most: building that workflow took less than 20 minutes. Maintaining it takes almost nothing. But writing the priorities document, the file that tells the system what “relevant” means, took me a day of real thinking. And I’ve rewritten it three times since, because my priorities shifted and the digest started surfacing the wrong things.

The hard part wasn’t the automation. The hard part was defining what matters. That’s a product decision, not a technical one.

I have a similar setup for competitive intelligence. A weekly workflow that sends research agents out in parallel, each tracking a different angle: product announcements, pricing changes, hiring signals, regulatory shifts. They report back independently and the system synthesises their findings against my product’s strategic context into a single briefing. Same pattern: the automation is straightforward, the configuration requires genuine strategic thought.
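The fan-out-and-gather shape of that weekly workflow can be sketched in a few lines. Everything here is a stand-in: the stub agents just return placeholder strings where real LLM research runs would go, and the final synthesis step (merging findings against strategic context) is only marked by a comment.

```python
from concurrent.futures import ThreadPoolExecutor

# Each "agent" tracks one angle and returns its findings.
# These stubs are illustrative placeholders for LLM research runs.
def track_pricing(competitor: str) -> str:
    return f"{competitor}: pricing findings"

def track_hiring(competitor: str) -> str:
    return f"{competitor}: hiring findings"

AGENTS = {"pricing": track_pricing, "hiring": track_hiring}

def weekly_briefing(competitor: str) -> dict[str, str]:
    """Fan the agents out in parallel, then gather their reports."""
    with ThreadPoolExecutor(max_workers=len(AGENTS)) as pool:
        futures = {angle: pool.submit(fn, competitor)
                   for angle, fn in AGENTS.items()}
        findings = {angle: f.result() for angle, f in futures.items()}
    # In the real workflow, one more LLM pass synthesises these
    # independent reports against the product's strategic context
    # into a single briefing.
    return findings
```

Again, the plumbing is trivial. Deciding which angles deserve an agent, and what strategic context the synthesis runs against, is the real work.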

[Image: AI infrastructure dashboard]

Why building AI systems is the best PM education I’ve found

That pattern kept repeating. Every automation I built forced a product decision. I’ve started thinking of it as the effort inversion: the building is fast, the thinking is slow. And the thinking is where all the value lives.

What’s worth automating? The daily reading, yes, clearly. Drafting competitive analysis, yes. Writing my actual product strategy, absolutely not. The decisions about what to automate and what to protect are product prioritisation in miniature.

Where does AI judgement fall short? My research agents are extraordinary at synthesis and pattern-matching. They’ll find connections across twelve sources in thirty seconds that would take me a morning. And because the system holds my strategic context (my product’s positioning, my competitive landscape, the bets I’m making) those agents can form opinions grounded in real priorities, not just generic summaries. But the final call on whether to act, and how aggressively, that’s still mine. The system informs the decision. It doesn’t make it.

What feedback loops matter? When the digest started surfacing irrelevant articles, the fix wasn’t a code change. It was a priorities change. I do a weekly review of what the system surfaced, what I actually used, and what it missed. That review feeds back into the priorities document. Over time, the system gets sharper, but only because I’m sharpening the input. The third rewrite, for instance, was adding what I explicitly don’t care about. Turns out defining irrelevance is as important as defining relevance. That’s the same discipline as writing acceptance criteria for a feature. Vague input, vague output.
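That third rewrite's lesson, that defined irrelevance needs a hard veto, can be made concrete. This is a toy model, not the actual priorities document (which is prose fed to an LLM, not structured data); the field names are my own.

```python
from dataclasses import dataclass, field

@dataclass
class Priorities:
    """Toy model of a priorities document: what matters, and
    explicitly, what doesn't."""
    relevant_topics: set[str] = field(default_factory=set)
    excluded_topics: set[str] = field(default_factory=set)

def is_relevant(article_topics: set[str], p: Priorities) -> bool:
    """An exclusion vetoes outright, before inclusions are checked."""
    if article_topics & p.excluded_topics:
        return False
    return bool(article_topics & p.relevant_topics)
```

The point of the veto: an article can touch a relevant topic and still be noise, and without the exclusion list the system has no way to know that.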

PMs are already asking these questions about the AI features in their products. We should be asking them about our own processes too.

And here’s the thing I keep coming back to: AI makes the easy work easier. It doesn’t touch the hard work.

The hard work is pricing. Go-to-market strategy. Knowing which customer segment to pursue and which to walk away from. Reading a room full of stakeholders and knowing when to push and when to wait. Deciding what to build next when the data is ambiguous and the stakes are high.

None of that is automatable. All of it becomes more valuable when the tactical work is handled.

What I’d tell another PM

The anxiety came from imagining AI’s capabilities. The confidence came from experiencing them.

Once you’re building with AI, not prompting it occasionally, but building systems that run without you, you see both its power and its limits up close. It’s extraordinary at synthesis. It’s poor at judgement, commercial intuition, and organisational politics.

The PMs I talk to are still in consumption mode. Reading about AI, trying the latest chatbot, attending webinars. Very few are building systems. Very few are treating AI as infrastructure rather than a tool they open occasionally.

If you’re feeling the ground shift under you:

  • Start with one automation that runs without you pressing buttons. A daily research summary. A weekly competitive scan. Something that forces you to define “what matters” in writing. That definition is the valuable part, not the automation.
  • Focus on the work AI can’t do. Strategic judgement, commercial decisions, stakeholder influence, reading a room. That’s where your development time earns the highest return.
  • Don’t compete with AI on information synthesis. You’ll lose. Compete on strategic judgement. You’ll win.

I’m getting back into writing, and this is where I’ll share what I’m building and what it’s teaching me about product work. Not as an expert, but as someone running the experiment. It’s a lot of fun right now.