Not remote desktop. Not typing code. Prompting a live system that edits, tests, commits, and deploys — from a phone screen.
I was waiting somewhere. Short bursts of time. No laptop.
I built and tested a puzzle game entirely from my phone.
Not remote desktop. Not SSH into a VM with a tiny terminal font. Not typing code with my thumbs. I was prompting a live system that edits files, runs tests, commits, pushes, and deploys — while I watch from a browser.
The game shipped. It has 5 levels, 36 tests, touch input, and a mechanic where tiles vanish when they merge past a threshold. I fixed a bug where it was impossible to win, redesigned the level progression, and tuned the difficulty. From a phone screen.
From idea to live took about 30 seconds per change.
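The vanish-on-merge mechanic can be sketched in a few lines. This is a hedged illustration, not the shipped engine: the names (`merge`, `MergeResult`, `vanishThreshold`) and the exact threshold comparison are assumptions.

```typescript
// Illustrative sketch of the vanish mechanic: merging two equal tiles
// doubles the value, but past a threshold the merged tile disappears
// from the board. Names and threshold semantics are assumptions.
type MergeResult =
  | { kind: "no-merge" }                 // tiles were unequal
  | { kind: "merged"; value: number }    // a new, doubled tile
  | { kind: "vanished" };                // merged past the threshold

function merge(a: number, b: number, vanishThreshold: number): MergeResult {
  if (a !== b) return { kind: "no-merge" };
  const value = a + b;
  return value > vanishThreshold
    ? { kind: "vanished" }
    : { kind: "merged", value };
}
```

The discriminated return type keeps "nothing happened" and "tile disappeared" distinct, which matters when a level's win condition is clearing the board.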
The Setup
My phone is not the machine. It is the interface.
The real work happens on a remote machine running a persistent session. An AI agent lives in that session with full project context — the file tree, the git history, the test suite, the design system, the deployment pipeline. It does not forget between prompts. It does not need me to re-explain the architecture.
I do not open a project. I connect to a running one.
The connection is a terminal. The terminal accepts natural language. The agent translates that into file edits, shell commands, and git operations. My phone is just the input layer — a window into a system that is always alive.
The Loop
Every interaction follows the same pattern:
I type a prompt. The agent edits code. Tests run. Build passes. Git push. Vercel deploys. I open the preview URL on my phone and test.
Real prompts from the Voidle session:
"The game ends up being the same despite the different modes. Also it seems impossible to clear the board since a new 2 always appears. Tune the game to be difficult but not impossible."
What happened next: the agent read the engine code, identified that spawnTile() fired every move (making the clear-board goal impossible), added a spawnRate field to the level type, redesigned all 5 levels with graduated difficulty, ran 188 tests, built the project, committed, and pushed. I refreshed the browser on my phone and played the fixed game.
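The fix described above amounts to gating the spawn instead of firing it every move. A minimal sketch, assuming a `spawnRate` field on the level type (the `Level` shape and `shouldSpawn` helper are illustrative, not the actual engine code):

```typescript
// Hypothetical sketch: before the fix, a new tile spawned on every move,
// so the board could never be cleared. Gating spawns on a per-level
// spawnRate makes clearing possible while keeping pressure on the player.
interface Level {
  id: number;
  spawnRate: number; // spawn a new tile every N moves; 0 disables spawning
}

// Decide whether a new tile should appear after the given move count.
function shouldSpawn(level: Level, moveCount: number): boolean {
  return level.spawnRate > 0 && moveCount % level.spawnRate === 0;
}
```

With `spawnRate: 1` you get the old impossible behavior; higher values give each level room to breathe, which is how graduated difficulty across the 5 levels becomes tunable.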
"Fix the patron board design on mobile."
The agent audited the leaderboard at a 320px viewport width, found that the name column got only 113px of space, added tier abbreviations for mobile, reflowed the stats grid from a 4-row stack to 2x2, tokenized hardcoded pixel values, ran tests, built, committed, and pushed. I opened the page on my phone and verified.
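The tier abbreviation change might look something like this. Everything here is a hypothetical reconstruction: the tier names, the abbreviation map, and the 480px breakpoint are assumptions (the audit itself used a 320px viewport).

```typescript
// Hypothetical sketch: at narrow viewports the full tier name won't fit
// a ~113px name column, so render a short form instead. The tier names,
// abbreviations, and breakpoint below are illustrative assumptions.
const TIER_ABBREVIATIONS: Record<string, string> = {
  Champion: "CH",
  Supporter: "SUP",
  Backer: "BK",
};

const MOBILE_BREAKPOINT = 480; // assumed cutoff, in CSS pixels

function tierLabel(tier: string, viewportWidth: number): string {
  if (viewportWidth <= MOBILE_BREAKPOINT) {
    return TIER_ABBREVIATIONS[tier] ?? tier; // fall back to the full name
  }
  return tier;
}
```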
The phone is not limiting. It is the fastest feedback device.
The Same Loop Ships Everything
The exact same system that built the game also built the website hosting it.
Same prompt loop. Same agent. Same pipeline. Prompt, edit, build, deploy, verify.
In this session alone, the loop produced:

- A puzzle game with 5 levels and 36 tests
- An interactive chart tracking LLM pricing across 38 data points
- A social engagement dashboard wired to real X metrics
- LinkedIn and Facebook content composers
- An OG image generator
- Three dev log articles
- A design token sweep across 28 files
- Dead code removal
- 100% CSS token compliance
- Social sharing buttons on every article and experiment page
All from the same phone. All through the same terminal.
What Actually Makes This Work
Persistent session. The agent never restarts. Context accumulates across prompts — it knows what it changed 10 prompts ago and what tests are passing now.
Full project awareness. The agent reads files, greps patterns, checks types, runs builds. It does not hallucinate about code that does not exist. It reads before it writes.
Zero-friction deployment. Git push triggers Vercel build. Preview URL is live in 30 seconds. No CI configuration, no manual deploy steps, no waiting.
The magic is not mobile development. The magic is removing restarts. The old flow: sit down, open IDE, reopen files, regain context, start working. The new flow: connect, prompt, observe.
The shift is not from keyboard to phone. It is from sessions to continuity.
The Real Unlock
It is not about building from your phone.
It is about never being blocked from building.
Waiting for a train. Between meetings. On a couch. The project is always running. The agent always has context. The deploy pipeline is always warm. The only thing missing is the next prompt.
I used to think mobile development meant a cramped IDE on a small screen. It does not. It means a persistent system with a small input device. The screen size is irrelevant when the system does the work.
The game is live. The website is live. The 188 tests pass. The 118 pages build. And the phone goes back in my pocket.
Experiment Context
- Commit: e169a3d
- Mutation rationale: feat: wire ShareBar into all article + experiment pages
- Last reviewed: March 22, 2026