I'm Not a Developer. I Built an AI Product in 2 Months. Here's My Entire $150/Month AI Stack.
The exact AI workflow behind building Temporal.day from scratch.
Most people use AI to write emails faster or summarize articles.
I used it to replace a team I couldn’t afford — and build a product I couldn’t build alone.
Let me back up. I’m a Product Manager. Five years of experience across telecom and travel-tech. I can write a solid PRD, run a user interview, and prioritize a backlog in my sleep. But I can’t code. Not really. I understand architecture at a high level; I can tell you how I want services to talk to each other. But writing production code? That’s not me.
And yet, two months ago, I shipped Temporal.day — an AI-powered calendar that auto-schedules your tasks. A real product. With real users. Built almost entirely with AI tools.
Not a landing page. Not a Figma prototype. A working product with AI auto-scheduling, natural language input, Google Calendar sync, payments, and a live user base.
My total AI spend: roughly $150 per month.
A developer alone would cost me $5,000+ per month. Add a designer, a content person, someone to help with distribution — you’re looking at $10K+ easily. And it would take significantly longer to ship.
I’m not writing this to brag. I’m writing this because most “how I use AI” articles are useless. They list 10 tools, describe each one in two sentences, and leave you with nothing actionable. You’ve read that article. I’ve read that article. It didn’t change how I work.
This is different. This is my actual daily workflow — tool by tool, hour by hour, decision by decision. What I use, why I use it, where AI saves me hours, and where I deliberately don’t use it at all.
If you’re a PM thinking about building your own product, a founder trying to do more with less, or anyone curious about what an AI-first workflow actually looks like in practice — this is everything I know.
The $150 Team
Here’s a mental exercise. Imagine you’re hiring a team to build and launch a product.
You need a developer to write code. A researcher to dig through competitors, find the right tools, analyze markets. A content person to write tweets, blog posts, and product updates. A distribution assistant to monitor Reddit, find relevant conversations, and spot opportunities. And someone to QA everything, find bugs, and write test cases.
That’s five roles. Conservatively $10–15K per month. Realistically more.
Or you can do what I did: pay $150/month and build the team out of AI tools.
Here’s who’s on my team.
Claude — my CTO and Head of Content (~80% of my work)
Claude is the center of everything. I use it for research, strategy, content creation, writing documentation, brainstorming features, analyzing competitors, and building artifacts.
Why Claude over ChatGPT? Three things. First, it holds context significantly better over long sessions. When I’m deep in a product strategy conversation that spans 30+ messages, Claude still remembers what we discussed at the beginning. ChatGPT starts drifting.
Second, it adapts to me. After months of consistent use, Claude’s responses feel calibrated to how I think and what I need. ChatGPT never quite got there — the personalization felt off, and I could never get comfortable with its responses.
Third, the artifacts. When I need a structured document, a comparison table, a framework — Claude’s artifacts are cleaner and more usable than anything I’ve gotten from other tools.
If I had to fire everyone on this team and keep a single hire, Claude is the one I’d keep.
Claude Code — my developer
This is the big one. I’m not a developer, and Claude Code writes all of Temporal’s production code.
My process looks like this: I describe the architecture I want. How services should interact. What the user experience should feel like. What the edge cases are. Claude Code writes the implementation. I test. We iterate.
Is the code perfect? Probably not by senior engineer standards. But the product works. It ships. It handles real users. And the speed is incomparable — Claude Code writes faster than any human developer. Not better, necessarily. But faster. And when you’re building 0→1, speed of iteration is everything.
The alternative was spending months finding a technical co-founder or burning through savings hiring a freelance dev. Claude Code let me go from idea to working product in two months.
Perplexity — my analyst
Perplexity handles research that Google can’t.
Here’s a real example. I needed a payment processor for Temporal. Sounds simple, right? Google “best payment processors” and pick one. Except I’m a resident of a specific country, which means half the popular services won’t work for me. And the ones Google surfaces are all the same big names — Stripe, Paddle, LemonSqueezy. The SEO game is dominated by the biggest players.
I needed something smaller. Something niche but reliable, with lower commissions, that actually works in my jurisdiction.
Google was useless. I kept seeing the same top-10 lists recycled across every blog.
Perplexity found it. It took a few hours of back and forth — trying different criteria, evaluating options, checking availability. But it surfaced a service I never would have discovered through traditional search. A smaller provider, popular in certain circles, with better terms for my situation.
That’s the pattern: when you need to go beyond the SEO-optimized surface of the internet, Perplexity digs deeper.
ChatGPT Codex — my QA engineer
I still pay for ChatGPT, but my usage dropped to about 5%. Here’s why I keep it: Codex.
Claude Code and ChatGPT Codex have fundamentally different personalities. Claude Code is the fast developer who wants to ship everything now. Codex is the careful reviewer who reads the entire codebase and says, “Hey, did you notice this bug on line 847?”
I use Codex for full project reviews, finding bugs that Claude Code introduced while moving fast, and writing test cases. It’s a different kind of thinking — slower, more thorough, more concerned with the whole picture. Claude Code builds. Codex audits. They complement each other.
Plus, OpenAI’s $20 subscription gives access to the API, which I use for my OpenClaw bot. And honestly — if I pause my subscription, ChatGPT keeps working for another four weeks while sending polite reminders about the failed payment. So effectively it’s $20 per two months. I’m not proud of it. But I’m being honest.
OpenClaw — my distribution assistant who never sleeps
This one changed my mornings completely.
OpenClaw is an open-source AI agent that runs on your machine and connects to messaging apps. I configured mine to work as an automated assistant that operates around the clock.
Every morning when I wake up at 5–6 AM, OpenClaw has already prepared my briefing:
Ideas people are discussing online that could inspire new features or products
A summary of what’s new in the AI and productivity space — things I might have missed
Interesting tweets that accumulated overnight in my niche
Reddit posts where people are discussing productivity apps, calendars, or looking for tools like Temporal — opportunities for me to engage and mention my product
This used to be an hour of manual scrolling. Now it’s a curated feed waiting for me before my first coffee.
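To make the shape of that briefing concrete, here’s a minimal sketch of the formatting step such an agent might run before sending the Telegram message. Everything here is invented for illustration — `Item`, `format_briefing`, and the category names are my assumptions, not OpenClaw’s actual internals:

```python
from dataclasses import dataclass

@dataclass
class Item:
    category: str  # e.g. "reddit", "twitter", "trends" (hypothetical labels)
    title: str
    url: str

def format_briefing(items: list[Item]) -> str:
    """Group overnight finds by category into one messenger-ready digest."""
    sections: dict[str, list[Item]] = {}
    for item in items:
        sections.setdefault(item.category, []).append(item)

    lines = ["Morning briefing"]
    for category, group in sorted(sections.items()):
        lines.append("")
        lines.append(category.upper())
        for it in group:
            lines.append(f"- {it.title} ({it.url})")
    return "\n".join(lines)
```

The real value is in the collection step (scraping Reddit, Twitter, news feeds), but the output is this simple: a flat text digest you can scan in fifteen minutes.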
The supporting cast
SuperWhispy converts my voice recordings to text — I often brainstorm by talking, then turn those recordings into tweets, notes, or article drafts. AI image generators handle TikTok avatars and visual content. Small tools, but they close gaps that would otherwise require hiring a designer for quick tasks.
Total cost: ~$150/month. Total roles covered: 5+.
5 AM to 10 AM: What an AI-First Workday Actually Looks Like
I wake up between 5 and 6 AM. My main job starts later, so mornings are for Temporal.
First 15 minutes: The OpenClaw briefing.
I check my phone. OpenClaw has already sent me a Telegram summary: trends, ideas, relevant Reddit threads, interesting tweets in my niche. I scan it, star anything worth acting on, and move to my desk.
Next 30–60 minutes: Research and planning in Claude.
Before I write a single line of code, I think. I open Claude — lately through Claude Code on desktop because the research feels more thorough there — and we work through whatever problem I’m tackling.
Say I’m building a new feature. The session looks like this:
First, I research with Claude. What are competitors doing? What are the technical options? What are the tradeoffs? We go back and forth until I have a clear picture.
Then, we formulate the approach together. Not a formal spec, but clear theses: “This is what we’re building. This is how it should work. These are the edge cases.”
I write this down as structured notes. This part is critical — and I’ll explain why in the next section.
Next 2–3 hours: Building with Claude Code.
I take those notes and move to Claude Code. Now it’s execution mode.
I describe what I want. Claude Code writes it. I test. Something breaks. I describe the issue. Claude Code fixes it. We iterate.
On a good morning, I ship a complete feature before my day job starts. On a normal morning, I make solid progress on something complex. Either way — the product moves forward every single day.
Throughout the day: Content in the gaps.
Between meetings at my main job, during lunch, on my commute — I use Claude to draft tweets, brainstorm content ideas, or think through Temporal’s positioning. These micro-sessions add up. Most of my Twitter content is born in 5-minute bursts throughout the day.
Evenings: If time allows, one more session.
After work, gym, or English classes — if I have energy left, I do another 1–2 hour session with Claude Code. But mornings are the sacred time.
What AI Can’t Do (And Where I Refuse to Use It)
This is the part most AI articles skip. And it’s the most important part.
Because if all you hear is “AI can do everything,” you’ll use it wrong. You’ll delegate the things that only you should do. And you’ll end up with a product that technically works but doesn’t make sense.
Here’s where I deliberately keep AI out of my process.
Product vision is mine. Period.
I don’t ask AI to write my product requirements. Ever.
This might sound counterintuitive. Claude is great at writing documentation. It can generate a PRD in seconds. But here’s the problem: if AI writes your spec, you stop understanding your own product.
When I sit down to define how a feature should work, I need to think through every scenario. What happens when a user has 12 meetings and 20 tasks in one day? What should the AI prioritize? What does “urgent” mean in the context of someone’s Tuesday vs. their Friday?
These aren’t technical questions. They’re product questions. And the answers come from my 5+ years of experience, my understanding of the user, and my vision for what Temporal should feel like.
If I outsource this thinking to AI, I become a project manager for a machine. I stop being the product person. And the product becomes generic — because AI will always default to the most common patterns, not the most interesting ones.
So I write the specs. I define the logic. I draw the boundaries. Then, and only then, does AI help me execute.
Testing is human work.
AI can write test cases. And I use AI-generated test cases as a starting point. But the actual testing — clicking through flows, feeling the friction, noticing that something is technically correct but feels wrong — that’s me.
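Here’s roughly what that division of labor looks like. The function and both tests below are a toy I made up for illustration — Temporal’s real scheduler is far more involved — but the pattern is real: AI writes the obvious happy-path check, and the human adds the edge case that only shows up when you actually live in the product:

```python
from datetime import timedelta

def fits_in_day(tasks: list[timedelta], meetings: list[timedelta],
                workday: timedelta = timedelta(hours=8)) -> bool:
    """Toy check: do the estimated tasks plus meetings fit one workday?"""
    total = sum(tasks, timedelta()) + sum(meetings, timedelta())
    return total <= workday

# AI-generated starting point: the obvious happy path.
assert fits_in_day([timedelta(hours=2)], [timedelta(hours=1)])

# Human-added edge case from daily use: 12 meetings and 20 tasks
# must register as an overflow, not get silently squeezed in.
assert not fits_in_day([timedelta(minutes=30)] * 20,
                       [timedelta(minutes=30)] * 12)
```

The machine-written test proves the code runs. The human-written test encodes a judgment about what the product should refuse to do.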
You have to use your own product obsessively. Every day. As a real user, not as a builder. The moment you stop testing personally is the moment your product starts drifting from what users actually need.
Distribution strategy needs a human brain.
AI handles maybe 10% of my distribution work — OpenClaw finding Reddit threads, Claude drafting tweets. But the strategy behind it — which conversations to join, what tone to strike, when to mention my product and when to just help someone — that requires judgment AI doesn’t have.
AI doesn’t understand context the way a human does. It doesn’t know that this particular Reddit thread is the wrong place to self-promote, or that this tweet needs to be vulnerable, not polished. Social intelligence is still a human skill.
The honest limitations.
AI hallucinates. It confidently tells you things that aren’t true. I’ve caught Claude inventing features that don’t exist in competitors’ products. I’ve caught Codex suggesting code patterns that would break other parts of the system.
And context degrades. In long sessions — 50+ messages — AI starts losing the thread. It forgets constraints you mentioned earlier. It contradicts its own recommendations. You have to manage this actively: break complex work into focused sessions, summarize the state regularly, and never blindly trust a response just because it sounds confident.
The bottom line: AI is a multiplier, not a replacement.
Here’s how I think about it. AI multiplies whatever you bring to the table.
If you have strong product vision, AI multiplies it with speed and execution power. You get a product shipped in 2 months instead of 8.
But if your vision is zero? Zero multiplied by anything is still zero. You’ll just get generic output faster.
The skill isn’t using AI. The skill is knowing what to ask, what to keep for yourself, and when to override the machine.
The Math That Changed My Mind
Let me put this in perspective.
Before AI, my options for building Temporal were: find a technical co-founder (months of searching, equity dilution), hire a freelance developer ($5K+/month, slower iteration, communication overhead), or learn to code myself (6–12 months before I could build anything real).
With AI, the math looks like this:
$150/month in AI tools
2 months from idea to working product
0 team members to manage
5 AM to 10 AM daily — my main job stays untouched
This isn’t theoretical. Temporal.day is live. People use it. It has AI auto-scheduling, natural language task input, Google Calendar sync, and a payment system.
Am I saying the code is as clean as what a senior engineer would write? No. Am I saying the product is perfect? Absolutely not — there’s plenty to improve.
But it exists. It works. Users interact with it daily. And it shipped at a fraction of the cost and time that any traditional approach would require.
That’s the real insight: AI didn’t make me a developer. It made development accessible to someone with strong product instincts and no engineering skills. The barrier to building products didn’t disappear — it shifted. It used to be “can you code?” Now it’s “do you know what to build and why?”
That second question? That’s where 5 years of PM experience actually matters.
Start Here
If this resonated and you want to try building your own AI-first workflow, here’s what I’d suggest:
Pick one tool and go deep. Don’t sign up for 10 things. Start with Claude or ChatGPT and use it for everything for two weeks. Research, writing, planning, analysis. Get a feel for what it’s good at and where it breaks down.
Build a real workflow, not a toy demo. Don’t just “try AI.” Apply it to an actual problem you’re solving at work. A competitive analysis. A project plan. A first draft of something real. The value clicks when the stakes are real.
Keep a “human only” list. Decide upfront which decisions stay with you. For me, that’s product vision, final testing, and distribution strategy. Your list will be different. But have one. Otherwise AI will slowly take over the thinking you should be doing yourself.
Start before you’re ready. Two months ago, I had an idea and zero technical ability. If I’d waited until I “knew enough” to start, I’d still be waiting. The tools are here. The gap between “I have an idea” and “I have a product” has never been smaller.
I’m building Temporal.day in public — sharing every decision, metric, and mistake along the way. If you’re on this path too, come say hi @mktpavlenko.
The best time to start building was yesterday. The second best time is this morning at 5 AM with a cup of coffee and Claude open on your screen.

