Abhishek.

Selected work

Things I have shipped.

Two live, mobile-first products. Both built end to end with AI tooling and no engineering team. Both running A/B experiments on real users right now. Try them inside the phone frames below.

Project 01 / Next.js 14 build

UPSC Learning Journey

An AI-built mobile learning loop. 280 topics. 3,858 tagged past papers. Live.

A mobile-first AI learning path I shipped to A/B test learning hypotheses with real aspirants.

01

What it is

A mobile-first card-based learning path that covers the entire UPSC syllabus end to end. 280 topics across 9 subjects (GS-I through GS-IV plus optionals). Each topic opens a 92vh study sheet with AI-generated notes, contextual Wikimedia imagery, and inline links to a conversational UPSC map. From the same screen the user drops into a 5-question MCQ practice set drawn from a real corpus of past papers, with hearts, score progression, and a crown system that maps onto questions answered correctly (gray → purple → blue → green → gold → pink at five-correct steps). Daily-goal tiers, streak tracking, and a journey timeline keep the loop closed.
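The crown progression above can be sketched as a pure function. The tier names and the five-correct step come from the description; the function name and the cap-at-pink behavior are my illustrative assumptions, not the app's actual code.

```typescript
// Crown tiers advance every five correct answers: gray at 0-4,
// purple at 5-9, and so on, capping at pink from 25 onward.
const CROWN_TIERS = ["gray", "purple", "blue", "green", "gold", "pink"] as const;

type CrownTier = (typeof CROWN_TIERS)[number];

function crownTier(correctAnswers: number): CrownTier {
  // One tier per five correct answers, clamped to the final (pink) tier.
  const step = Math.floor(Math.max(0, correctAnswers) / 5);
  return CROWN_TIERS[Math.min(step, CROWN_TIERS.length - 1)];
}
```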

02

Why I built it

Mainstream UPSC prep is expensive, fragmented, and weak on personalization. The mainline product hypothesis was specific: can an AI-first, mobile-first learning loop produce D7 retention comparable to paid offline coaching, at a tiny fraction of the cost? Building it myself end to end was the fastest way to find out. Every design decision is downstream of that question.

03

Built end to end with AI

The codebase was shipped without a traditional engineering team, using AI-assisted tooling — Cursor, Claude Code, and ChatGPT for the trickier pieces — with me directing product, design, and data tagging. I designed the data model, the journey navigation system, the AI prompt layer for notes generation, and the practice-question retrieval pipeline. I authored and tagged the syllabus structure. I wrote the data pipeline that tagged 99.3% of the 3,885-question corpus to syllabus topics using NVIDIA Llama-3.1-70b with few-shot prompting, plus a keyword fallback scorer for any question the LLM missed. The whole thing runs on Next.js + Supabase + Vercel.
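The keyword fallback scorer can be sketched as a bag-of-keywords match: score every syllabus topic by how many of its keywords appear in the question text and take the best match. This is a minimal sketch under my own assumptions — the topic IDs, keyword lists, and function shape are illustrative, not the pipeline's actual implementation.

```typescript
// Fallback tagger for questions the LLM pass missed: score each syllabus
// topic by keyword overlap with the question text and return the
// highest-scoring topic id, or null if no keyword matches at all.
interface SyllabusTopic {
  id: string;
  keywords: string[];
}

function keywordFallbackTag(
  questionText: string,
  topics: SyllabusTopic[],
): string | null {
  const text = questionText.toLowerCase();
  let bestId: string | null = null;
  let bestScore = 0;
  for (const topic of topics) {
    const score = topic.keywords.filter((k) =>
      text.includes(k.toLowerCase()),
    ).length;
    if (score > bestScore) {
      bestScore = score;
      bestId = topic.id;
    }
  }
  return bestId;
}
```

A real scorer would also weight rarer keywords higher and break ties deterministically; the structure stays the same.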

04

Designed as an experimentation testbed

This is a live product I use to run A/B and multivariate experiments on real aspirants every week. The point of building it was never to replace a coaching institute. The point was to have a real audience and a real product instrument so that any product hypothesis I want to test, I can ship and measure in days rather than quarters. Every experiment is small, scoped, and measured through the CleverTap funnel I instrumented across the app.

05

Try it inside the phone

Below is the live product running inside a phone bezel on this page. Tap to onboard, pick a daily goal, open a topic, read the AI notes, and take the practice quiz. The whole loop happens here. If you want it on your actual phone, the URL is on the live demo.

06

Experiments live or in flight

  • A/B · Onboarding length: 3-step vs 7-step funnel and the impact on first-topic completion
  • A/B · Daily-goal tier defaults: casual vs regular vs serious vs intense, measured on D3 and D7 retention
  • A/B · Topic ordering on the path: subject-first vs past-paper-frequency-first
  • A/B · AI-notes formatting: 3-bullet summary vs 5-section deep dive vs hybrid
  • A/B · Crown progression cadence: 5/10/15 vs linear vs adaptive
  • A/B · Practice-question difficulty ramp: easy-to-hard vs randomized vs adaptive
  • A/B · Re-engagement push copy variants on day 2 and day 7
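Running several experiments at once on the same audience requires stable bucketing: a user must land in the same arm of a given experiment on every visit, independently across experiments. A minimal deterministic sketch — the hash, experiment keys, and arm names here are illustrative assumptions, not the app's actual assignment logic:

```typescript
// Deterministic variant assignment: the same user always lands in the
// same arm of a given experiment, with no server-side state required.
// Hashing "experiment:user" keeps assignments independent per experiment.
function assignVariant(
  userId: string,
  experimentKey: string,
  variants: string[],
): string {
  const seed = `${experimentKey}:${userId}`;
  // Simple 32-bit polynomial rolling hash over the seed string.
  let hash = 0;
  for (let i = 0; i < seed.length; i++) {
    hash = (hash * 31 + seed.charCodeAt(i)) >>> 0;
  }
  return variants[hash % variants.length];
}
```

The assigned arm would then be attached as a property on every funnel event so the analytics side can split metrics by variant.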

07

Stack

Next.js 14 · TypeScript · Supabase · Groq Llama 3.x · NVIDIA Llama 3.1-70b · CleverTap · PWA

Project 02 / Next.js 14 build

AI-Powered UPSC Map

Ask any UPSC topic. Watch it land on the map of India in real time.

A conversational map of India that turns any UPSC topic into a live, annotated geographic answer.

01

What it is

A conversational map. The user types or speaks a question about any UPSC topic — major rivers, Mauryan boundaries, Western Ghats geology, Battle of Plassey, coal and iron-ore deposits, Himalayan passes, anything in the civil services syllabus — and the system plots the answer on the map of India with annotations and generated context notes, in real time. The interaction feels less like a textbook and more like asking an expert who can draw on a whiteboard while they talk.

02

Why I built it

UPSC aspirants memorize geography from static atlases and textbook chapters that have not been updated in two decades. The product hypothesis: spatial recall improves materially when the user can ask questions and see them answered geographically, instead of staring at lists of bullet points. The map is the test of that hypothesis.

03

Built end to end with AI

Like the journey, this was shipped solo using AI-assisted tooling. The harder pieces were under the hood: the topic-to-geographic-region retrieval layer (mapping arbitrary UPSC topics to plausible points and polygons on the map), the prompt design for annotation generation, which has to be factually correct under exam pressure, and the conversational UX that makes typing a topic feel as natural as using a search bar.
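The first step of that retrieval layer can be sketched as a gazetteer lookup that emits a GeoJSON Point feature for the map to render. Everything here is an illustrative assumption — the function name, the gazetteer shape, and the approximate coordinates — and the real layer also has to handle polygons and topics not in any lookup table.

```typescript
// Minimal retrieval step: normalize the asked topic, look it up in a
// gazetteer of known places, and return a GeoJSON Point feature that a
// map layer can render directly.
interface PointFeature {
  type: "Feature";
  properties: { topic: string };
  geometry: { type: "Point"; coordinates: [number, number] }; // [lng, lat]
}

function topicToFeature(
  topic: string,
  gazetteer: Record<string, [number, number]>,
): PointFeature | null {
  const key = topic.trim().toLowerCase();
  const coords = gazetteer[key];
  if (!coords) return null;
  return {
    type: "Feature",
    properties: { topic },
    geometry: { type: "Point", coordinates: coords },
  };
}
```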

04

Designed as an experimentation testbed

I use the map to run experiments on the right way to present spatial information to aspirants. Each experiment is a small product hypothesis tested on real users in days. The compounding effect is a working sense of what spatial UX actually does for retention and recall.

05

Try it inside the phone

The live product runs in the phone bezel below. Tap a topic suggestion, type your own, watch the map respond. The mobile UI is designed to be thumb-reachable while you are reading something else (a book, an article, a lecture); that constraint shaped the layout.

06

Experiments live or in flight

  • A/B · Annotation density: single point vs cluster of points vs region polygon
  • A/B · Note length under each annotation: one line vs paragraph vs full section
  • A/B · Input modality: voice-first vs text-first vs both with a quick toggle
  • A/B · Map basemap style: muted geographic vs thematic-colored vs blended
  • A/B · Pre-suggested topic chips vs free-form input only
  • A/B · Latency target: how fast does the response need to be for the map to feel alive
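The annotation-density arms above translate naturally into different MapLibre GL layer specs over the same GeoJSON source. A sketch under stated assumptions: the layer ids, source name, colors, and radius steps are mine, not the product's; the cluster arm additionally assumes the source was created with clustering enabled.

```typescript
// One MapLibre GL layer spec per annotation-density arm. All arms read
// from the same "answers" GeoJSON source; only the rendering differs.
interface LayerSpec {
  id: string;
  type: string;
  source: string;
  paint: Record<string, unknown>;
}

type DensityArm = "point" | "cluster" | "polygon";

function layerForArm(arm: DensityArm): LayerSpec {
  switch (arm) {
    case "point":
      // Single marker per answer.
      return {
        id: "answer-point",
        type: "circle",
        source: "answers",
        paint: { "circle-radius": 6, "circle-color": "#d9480f" },
      };
    case "cluster":
      // Dots sized up where answers group together (needs cluster: true
      // on the source so point_count is available).
      return {
        id: "answer-cluster",
        type: "circle",
        source: "answers",
        paint: {
          "circle-color": "#d9480f",
          "circle-radius": ["step", ["get", "point_count"], 5, 10, 10, 25, 16],
        },
      };
    case "polygon":
      // Shaded region instead of discrete points.
      return {
        id: "answer-region",
        type: "fill",
        source: "answers",
        paint: { "fill-color": "#d9480f", "fill-opacity": 0.25 },
      };
  }
}
```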

07

Stack

Next.js 14 · TypeScript · MapLibre GL JS · Supabase · Groq Llama 3.x · Gemini · TopoJSON

Curious about the rest?

I have more in the pipeline. Let's talk.

Back to my story