Case study · solo product
LiftIQ: an iOS strength coach grounded in peer-reviewed sport science.
Solo-built · 466/466 tests passing · ~$8/month infra · Get it on the App Store →
LiftIQ is an iOS app that programs resistance training the way a thoughtful S&C coach would. Every recommendation traces back to the ACSM 2026 Position Stand on resistance training, not vibes from a forum.
The app is an experiment in how far a solo engineer can take an AI-assisted stack when the goal is a defensible, science-grounded coach — not a fitness chatbot.
Stack
- Client: SwiftUI on iOS, native-first.
- Backend: small services on cheap infra — sized so the whole thing runs on roughly $8/month and stays there as users grow.
- AI layer: Claude as the reasoning brain, with a structured prompt/eval pipeline rather than free-form chat.
- Tests: 466/466 passing, run on every change. Tests aren't theatre — they cover the programming logic, the AI guardrails, and the science mappings.
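To make "AI guardrails" concrete, here is a minimal sketch of the kind of check the suite runs. This is an illustration, not LiftIQ code: `clamp_rest` and `MIN_REST_SECONDS` are hypothetical names, the 60-second floor is an invented number, and the real suite lives against the actual programming logic.

```python
# Hypothetical guardrail: never let a generated program drop a rest
# window below a safety floor, regardless of what the model proposes.
MIN_REST_SECONDS = 60  # illustrative floor, not a real LiftIQ constant

def clamp_rest(proposed_seconds: int) -> int:
    """Enforce the rest-window floor on AI-proposed values."""
    return max(proposed_seconds, MIN_REST_SECONDS)

def test_rest_floor():
    # Too-short proposals are raised to the floor...
    assert clamp_rest(30) == MIN_REST_SECONDS
    # ...while in-range proposals pass through untouched.
    assert clamp_rest(120) == 120

test_rest_floor()
```

Guardrails like this are cheap to test deterministically, which is why they can run on every change without flaking.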
AI architecture
The AI work is structured into three layers, each with its own evaluation:
- Intent layer — interpret what the user actually wants from a session (push day, deload, return-to-training, etc.) without asking 12 questions.
- Programming layer — produce the actual workout: exercise selection, sets, reps, RPE/RIR, rest windows, progression rules. This is where the ACSM mapping lives.
- Coaching layer — explain, encourage, and adjust based on yesterday's session, on-the-day readiness signals, and constraints.
Each layer is evaluated independently against its own small held-out test set. If a prompt change moves intent accuracy without moving programming quality, I can see it.
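The per-layer eval idea can be sketched in a few lines. Everything here is hypothetical: `eval_layer`, `classify_intent`, and the keyword rules stand in for the real model call and the real held-out set, and the point is only the shape — each layer gets scored against its own cases, independent of the others.

```python
from typing import Callable

def eval_layer(predict: Callable[[str], str],
               cases: list[tuple[str, str]]) -> float:
    """Score one layer against its own held-out cases,
    independent of the other layers."""
    hits = sum(1 for inp, expected in cases if predict(inp) == expected)
    return hits / len(cases)

# Toy intent classifier standing in for the real model call.
def classify_intent(text: str) -> str:
    if "sore" in text or "tired" in text:
        return "deload"
    return "push_day"

intent_cases = [
    ("feeling sore after yesterday", "deload"),
    ("ready to hit chest and triceps", "push_day"),
]

print(eval_layer(classify_intent, intent_cases))  # 1.0
```

With one `eval_layer` score per layer, a prompt change that shifts intent accuracy shows up in that layer's number without muddying the programming or coaching scores.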
The ACSM 2026 Position Stand mapping story
The hardest engineering problem wasn't the model — it was the mapping. The ACSM 2026 Position Stand is a long, dense, research-driven document. Translating its recommendations into something a workout-generation system can act on required:
- A structured schema for "what does ACSM actually recommend, given this goal, this training age, these constraints?"
- A reviewer step that flags any AI-generated program that drifts outside the recommended ranges (volume, intensity, frequency, exercise selection diversity).
- A way to hold the AI accountable when it gets creative — the AI proposes, the schema disposes.
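The "AI proposes, the schema disposes" step can be sketched as a range check. All of this is illustrative: `RangeRule`, `SCHEMA`, and `review` are hypothetical names, and the numeric bounds are made up for the example, not the actual ACSM recommendations.

```python
from dataclasses import dataclass

@dataclass
class RangeRule:
    """A recommended range for one programming variable."""
    lo: float
    hi: float

    def check(self, value: float) -> bool:
        return self.lo <= value <= self.hi

# Illustrative bounds only — not the real ACSM figures.
SCHEMA = {
    "sets_per_exercise": RangeRule(2, 5),
    "reps": RangeRule(6, 12),
    "weekly_frequency": RangeRule(2, 4),
}

def review(program: dict[str, float]) -> list[str]:
    """Reviewer step: return the fields where an AI-proposed
    program drifts outside the recommended ranges."""
    return [name for name, rule in SCHEMA.items()
            if not rule.check(program[name])]

proposed = {"sets_per_exercise": 8, "reps": 10, "weekly_frequency": 3}
print(review(proposed))  # ['sets_per_exercise'] — the model got creative
```

A program that comes back with an empty flag list ships; anything else goes back for regeneration, which keeps the model's creativity inside the evidence.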
This is the part of the app I'm proudest of. It's the reason "AI fitness app" isn't a slur in the LiftIQ context.
Result
- Live on the App Store with bilingual UX (EN + ES).
- 466/466 test suite running on every commit, covering programming logic, AI guardrails, and the ACSM mapping schema.
- ~$8/month infra, stable as users grow. Sub-$10/month forces architectural honesty: anything I'd build on top has to earn its cost.
- A defensible answer to "why should I trust this app's recommendations?" Every workout traces back to a peer-reviewed source, not a forum thread.
What this shows
- I take ambiguous, multi-disciplinary work (sport science + AI architecture + native iOS + ops) and turn it into a shippable product end-to-end.
- I treat evals as a product surface, not a vanity metric. The 466-test suite is what lets me ship at solo-founder speed without breaking the science.
- I make defensible technical choices when the criteria aren't obvious: native iOS over cross-platform (UX), schema-constrained AI over free-form (defensibility), heavy testing over fast prototyping (durability).
- I own the whole stack — from research-paper schema to App Store listing — and ship it bilingual on day one.
Try it
If you want to talk shop on AI architecture, evals, or the ACSM mapping work, book a pairing session — that's exactly what those are for.