Case Study
PM Salary Ace
Practice Like the Job Depends On It
A gamified PM prep platform. V1 shipped in 3 hours and revealed two things: users wanted assessment, not learning, and AI-generated questions were too easy to guess. V2 was built around those findings.
At a Glance
- V1 build time: 3 hours, concept to shipped
- 22 users in the first 12 hours of the beta launch
- 49% activation rate
- 336 questions across 5 tiers
The Insight
Why I Built This
I run the Product Management Affinity Group for UC Berkeley's MEng program. I kept watching smart engineers undersell themselves into safe technical roles because PM recruitment felt out of reach. The problem was not ability. It was confidence and calibration. They did not know what level they were actually at.
Salary ranges are not just labels. They are psychological permission slips. Showing someone that their thinking maps to a $280K to $350K Frontier AI PM role reframes preparation from obligation to ambition. That was the core design decision everything else was built around.
Sprint 1
V1: 3 Hours to Shipped
Built with Lovable (AI app builder), Claude for prompting and debugging, and Supabase for the feedback database. No traditional coding. The goal was to validate the concept with real users before investing more time.
Hour 1
Product Decisions
Tier structure, salary ranges, gamification approach, question categories.
Hour 2
Building
Quiz UI, timer, hint system, multi-correct question support, flag button.
Hour 3
Polish and Shipping
Radar chart results, admin feedback dashboard, Safari bug fix, deployment.
V1 Stack
Lovable · Claude · Supabase · Vercel · User Research
What V1 Taught Me
I built a learning tool. Users treated it as an assessment.
I designed the platform around exploration: hints, show answer, retry without seeing the correct answer first. The assumption was that people would go slow, learn, and build intuition.
Instead, users answered, clicked next, answered, clicked next. They wanted to know their score, not learn the material. That gap between designer intent and user behavior forced an immediate iteration.
The second discovery: LLMs generate MCQ distractors that are too obviously wrong. The correct answer was plausible, but the wrong answers were clearly implausible. Real interview questions have four plausible answers. That is what makes them hard. All 125 original questions were flagged for regeneration.
V1 Metrics
What the Numbers Showed
[Chart: activation funnel, 51% of users didn't activate]
[Chart: question distribution across the 5 tiers]
Sprint 2
V2: What Changed and Why
336 questions via Gemini 2.5 Pro with human QA
AI generation is fast, but left unreviewed it produces distractors that are too obviously wrong. 211 new questions were generated and human-reviewed before going live. The original 125 are flagged inactive pending regeneration.
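A minimal sketch of that generation step, assuming the official @google/generative-ai SDK; the prompt wording, category, and JSON shape are illustrative, not the production prompt:

```typescript
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const model = genAI.getGenerativeModel({
  model: "gemini-2.5-pro",
  generationConfig: { responseMimeType: "application/json" },
});

// The rules encode the V1 lesson: every distractor must be a position
// a reasonable PM might actually argue for.
const prompt = `Write one multiple-choice PM interview question in the
"Metrics" category for the Senior PM tier. Rules:
- exactly 4 options, one or more correct
- every wrong option must be plausible, not obviously wrong
- return JSON: { "prompt": string, "options": string[], "correct": number[] }`;

const result = await model.generateContent(prompt);
const draft = JSON.parse(result.response.text());
// Drafts land in a review queue; nothing goes live without human QA.
console.log(draft);
```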
Supabase backend with RLS and server-side answer security
V1 shipped with correct answers hardcoded in a frontend JavaScript array; anyone could inspect the source and cheat. V2 uses a SECURITY DEFINER view that hides correct answers from the public API, and answers are only fetched after submission via an RPC call.
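A sketch of the client side of that flow with supabase-js; the view name questions_public and the RPC name check_answer are placeholders, not the actual schema:

```typescript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!
);

// "questions_public" stands in for a view that omits the correct-answer
// column entirely, so the public API can never leak it.
const { data: questions, error } = await supabase
  .from("questions_public")
  .select("id, prompt, options, tier, category");
if (error || !questions) throw error;

// "check_answer" stands in for the SECURITY DEFINER function: it reads the
// hidden answer with elevated rights, grades server-side, and only then
// returns the verdict along with the correct choices.
const { data: verdict } = await supabase.rpc("check_answer", {
  question_id: questions[0].id,
  selected: ["B"], // multi-correct questions submit an array
});
console.log(verdict);
```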
Google OAuth and email auth, but no forced login
Higher tiers were initially gated behind login. That was reversed. Trust is a barrier for an unknown product. Forcing email signup before letting someone try a Staff+ question creates more friction than value. All 5 tiers are now open. Login unlocks progress tracking.
Progress dashboard with per-skill radar chart
The radar chart initially used session-level proxy data. That was not accurate enough. V2 tracks correctness per question per category in a JSONB column, giving users a real skill breakdown across Product Sense, Metrics, Product Design, and Behavioral.
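A sketch of the shape that per-question tracking can take and how it folds into radar-chart data; the four categories mirror the skills above, everything else is illustrative:

```typescript
// Illustrative shape for the per-user JSONB progress column:
// one entry per answered question, keyed by question id.
type Skill = "Product Sense" | "Metrics" | "Product Design" | "Behavioral";

interface Attempt {
  category: Skill;
  correct: boolean;
}

type ProgressJson = Record<string, Attempt>;

// Fold raw attempts into per-skill accuracy for the radar chart.
function toRadarData(progress: ProgressJson) {
  const tally = new Map<Skill, { correct: number; seen: number }>();
  for (const { category, correct } of Object.values(progress)) {
    const t = tally.get(category) ?? { correct: 0, seen: 0 };
    t.seen += 1;
    if (correct) t.correct += 1;
    tally.set(category, t);
  }
  return [...tally].map(([skill, t]) => ({
    skill,
    accuracy: t.correct / t.seen,
  }));
}
```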
Custom quiz builder with tier, skill, and difficulty filters
Direct response to user feedback. Users wanted to drill weaknesses, not practice randomly. The builder lets them cross-filter and set question count.
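The builder reduces to composing filters onto a single query. A sketch with supabase-js, reusing the placeholder questions_public view from above; the filter values are hypothetical:

```typescript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!
);

// Hypothetical filter state coming out of the builder UI.
const filters = {
  tiers: ["Senior PM", "Staff+"],
  skills: ["Metrics", "Product Sense"],
  difficulty: "hard",
  count: 10,
};

// Cross-filtering maps one-to-one onto chained PostgREST conditions.
const { data: quiz } = await supabase
  .from("questions_public")
  .select("id, prompt, options, tier, category, difficulty")
  .in("tier", filters.tiers)
  .in("category", filters.skills)
  .eq("difficulty", filters.difficulty)
  .limit(filters.count);
```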
Hero copy changed from "Master PM Interviews" to "Think Like a Top PM"
Original copy implied mock interview simulation. The product is a skills builder, not an interview simulator. The target user is someone trying to break into PM, not someone already doing PM interviews.
User Feedback
What Beta Users Said
Beta User, aspiring PM
- Multi-select tag too small, not visible enough
- Some mid-level questions feel easy
- Product sense questions are relevant and well-written
- No bugs encountered, fast performance
- Suggested making it more similar to actual interviews
My Response
Multi-select visibility is a valid bug. Question difficulty is intentional: the target user is an aspiring PM, not a current PM. Interview simulation is a V3 direction, not V2 scope. The product is explicitly a thinking skills builder.
Product Screenshots
V2 Live Product
Roadmap
V3 Direction
Still In Progress (V2)
- Contrast fixes on quiz builder and tier badge pills
- Tablet card height consistency
- Regenerate original 125 questions via Gemini
- Make multi-select tag more prominent
V3 Vision
- Open-ended questions with LLM-as-judge scoring (rough sketch after this list)
- Full model answer visible after each question
- Personalized weak area recommendations
- Interview simulation format
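One way the LLM-as-judge piece could work, staying on the Gemini stack already in use; the rubric dimensions and the scoreAnswer function are hypothetical, not a committed design:

```typescript
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const judge = genAI.getGenerativeModel({
  model: "gemini-2.5-pro",
  generationConfig: { responseMimeType: "application/json" },
});

// Hypothetical rubric: grade an open-ended answer against the model answer
// on named dimensions, so weak-area recommendations can reuse the same
// per-skill breakdown as the radar chart.
async function scoreAnswer(
  question: string,
  modelAnswer: string,
  userAnswer: string
) {
  const result = await judge.generateContent(`Grade this PM interview answer.
Question: ${question}
Model answer: ${modelAnswer}
Candidate answer: ${userAnswer}
Return JSON: { "structure": 0, "metrics": 0, "userEmpathy": 0, "feedback": "" }
with each score from 0 to 5.`);
  return JSON.parse(result.response.text());
}
```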