From Chat Prompt to Running Service: Rapidly Building a Dining Micro‑App with Claude/ChatGPT

selfhosting
2026-01-22 12:00:00
9 min read

Build and deploy a dining micro‑app from a ChatGPT/Claude prompt to a running Docker Compose service with nginx reverse proxy — step‑by‑step for 2026.

Stop dreaming and ship: build a dining micro‑app from a ChatGPT/Claude prompt to a running service in under a day

Decision fatigue at dinner is real. You want a tiny, private app that recommends restaurants for a small group — without buying SaaS, hiring a dev, or spending weeks. In 2026, with powerful LLMs like Claude and ChatGPT as coding copilots, that is not only possible, it’s fast and repeatable. This guide walks you from prompt to a deployed micro‑app using a small Flask prototype, Docker Compose, and an nginx reverse proxy. You’ll get concrete prompts, code, configs, and deployment steps aimed at developers and confident non‑developers.

"Once vibe‑coding apps emerged, I started hearing about people with no tech backgrounds successfully building their own apps." — Rebecca Yu, inspiration for this micro‑app trend

Why build a micro‑app in 2026?

Short answer: rapid iteration, privacy, and no vendor lock‑in. Through late 2024 and 2025, LLMs and code assistants matured: they produce multi‑file projects, understand deployment patterns, and generate sensible secure defaults. Heading into 2026, the trends are clear:

  • Micro apps are mainstream: personal and small‑team apps that solve specific workflows.
  • AI copilots accelerate prototyping — non‑developers can generate working backends and UIs.
  • Edge and container tooling make local/VPS deployment trivial: Docker Compose is still a pragmatic choice for microservices.

What you'll end up with

  • A minimal Flask dining micro‑app (API + tiny UI) generated and iterated with LLM prompts
  • A Dockerfile and docker‑compose.yml to run the app and an nginx reverse proxy
  • Instructions to add TLS with Let's Encrypt (certbot) and basic security and backup tips

1) Start with the right prompt: generate a focused prototype

Spend time crafting a prompt that gives the assistant constraints: language, storage, endpoints, and UI expectations. Here’s a high‑value prompt you can paste into Claude or ChatGPT:

Prompt: Build a minimal dining micro‑app prototype in Python Flask. Requirements:
- Use SQLite for storage and SQLAlchemy for ORM.
- Provide endpoints: GET /api/recs?group=alpha returns 3 restaurant recommendations based on stored preferences; POST /api/prefs to store a user's preferences; GET / shows a tiny HTML UI to pick a group and view recs.
- Keep code single‑file app.py for now (we'll split later), include Dockerfile and requirements.txt.
- Keep security basics: no debug mode, use environment variable for SECRET_KEY.
- Keep UI minimal (vanilla HTML + fetch) and mobile friendly.
- Output only working files (app.py, Dockerfile, requirements.txt, templates/index.html).

Why this works: it imposes structure (SQLite + SQLAlchemy), gives precise endpoints, and restricts complexity so the generated prototype is easy to review and run.

Iterate the prototype

Ask follow‑ups. Example prompts:

  • "Add a /health endpoint and logging to app.py."
  • "Persist DB to /data/db.sqlite and create an init script to seed sample restaurants."
  • "Make recommendations deterministic for reproducible tests (seeded RNG)."

2) Example prototype files (what the AI should return)

Below are condensed versions of the core files the assistant will generate. Use them as a baseline or compare with the code you receive.

app.py

from flask import Flask, request, jsonify, render_template
from flask_sqlalchemy import SQLAlchemy
import os

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:////data/db.sqlite'
app.config['SECRET_KEY'] = os.environ.get('SECRET_KEY','devkey')
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False

db = SQLAlchemy(app)

class Preference(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String, nullable=False)
    tags = db.Column(db.String)  # comma separated tags
    group = db.Column(db.String, default='default')

class Restaurant(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String, nullable=False)
    tags = db.Column(db.String)

@app.route('/')
def index():
    return render_template('index.html')

@app.route('/api/prefs', methods=['POST'])
def add_pref():
    data = request.json
    p = Preference(name=data['name'], tags=','.join(data.get('tags',[])), group=data.get('group','default'))
    db.session.add(p)
    db.session.commit()
    return jsonify({'ok': True})

@app.route('/api/recs')
def recs():
    group = request.args.get('group','default')
    prefs = Preference.query.filter_by(group=group).all()
    tags = set(t for p in prefs for t in (p.tags or '').split(',') if t)
    restaurants = Restaurant.query.all()
    # simple tag overlap scoring
    scored = [(r, len(tags & set((r.tags or '').split(',')))) for r in restaurants]
    scored.sort(key=lambda x: -x[1])
    top = [r.name for r, s in scored[:3]]
    return jsonify({'recs': top})

@app.route('/health')
def health():
    return 'ok', 200

def init_db():
    # create tables and seed sample data; called at import time so it
    # also runs under gunicorn, which never executes the __main__ block
    with app.app_context():
        db.create_all()
        if not Restaurant.query.first():
            sample = [Restaurant(name='Bella Pizza', tags='italian,pizza'),
                      Restaurant(name='Sushi Hana', tags='japanese,sushi'),
                      Restaurant(name='Green Bowl', tags='vegan,healthy')]
            db.session.add_all(sample)
            db.session.commit()

init_db()

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)

Dockerfile

FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
ENV FLASK_ENV=production
CMD ["gunicorn", "--bind", "0.0.0.0:8080", "app:app", "--workers", "2"]

requirements.txt

flask
flask_sqlalchemy
gunicorn

templates/index.html

<!doctype html>
<meta name="viewport" content="width=device-width,initial-scale=1"/>
<h3>Where2Eat (micro)</h3>
<div>
  <select id="group"><option>default</option></select>
  <button onclick="getRecs()">Get Recs</button>
</div>
<ul id="out"></ul>
<script>
async function getRecs(){
  const g = document.getElementById('group').value;
  const res = await fetch('/api/recs?group='+g);
  const json = await res.json();
  const out = document.getElementById('out'); out.innerHTML='';
  json.recs.forEach(r=>{ const li=document.createElement('li'); li.textContent=r; out.appendChild(li)})
}
</script>
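
Before containerizing, you can smoke‑test the prototype directly. A minimal sketch, assuming you run python app.py locally on port 8080 and /data is writable (or you point the DB URI at a local file first):

# store a preference, then fetch recommendations
curl -X POST http://localhost:8080/api/prefs \
  -H 'Content-Type: application/json' \
  -d '{"name": "Ana", "tags": ["sushi", "vegan"], "group": "default"}'

curl 'http://localhost:8080/api/recs?group=default'
# with the seed data, this should return {"recs": ["Sushi Hana", "Green Bowl", "Bella Pizza"]}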

3) Turn the prototype into a containerized micro‑app

Create a simple Docker Compose stack with two services: app (the Flask app) and nginx as the reverse proxy. Persist the DB and logs with volumes.

docker-compose.yml

services:
  app:
    build: .
    container_name: where2eat_app
    restart: unless-stopped
    environment:
      - SECRET_KEY=${SECRET_KEY}
    volumes:
      - data:/data
    expose:
      - 8080

  nginx:
    image: nginx:1.25-alpine
    container_name: where2eat_nginx
    restart: unless-stopped
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - certs:/etc/letsencrypt

volumes:
  data:
  certs:

nginx/conf.d/where2eat.conf

server {
  listen 80;
  server_name where2eat.example.com; # replace with your domain

  # serve ACME HTTP-01 challenge files so the certbot --webroot flow in section 4 can verify
  location /.well-known/acme-challenge/ {
    root /etc/letsencrypt;
  }

  location / {
    proxy_pass http://app:8080;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}

Note: Initially we accept HTTP on port 80. We'll provision TLS certificates next and update nginx to listen on 443.

4) Obtain TLS (Let's Encrypt) — pragmatic options

For production, always enable TLS. Two patterns work well for micro apps:

  • Use certbot on the host to obtain certificates and mount them into nginx (simple for single‑VM VPS).
  • Use a companion container like certbot/certbot with a small host hook to write certs into a shared volume.

Example quick host workflow (VPS):

  1. Start Docker Compose to get nginx bound to port 80: docker compose up -d.
  2. On the host, run: sudo certbot certonly --webroot -w /var/lib/docker/volumes/where2eat_certs/_data -d where2eat.example.com (adjust path and domain).
  3. Update nginx conf to reference the issued certificate and reload nginx (a sketch of the 443 server block follows).
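
A sketch of the updated server block, assuming certbot's default layout under /etc/letsencrypt/live/ for your domain:

server {
  listen 443 ssl;
  server_name where2eat.example.com;

  ssl_certificate /etc/letsencrypt/live/where2eat.example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/where2eat.example.com/privkey.pem;

  location / {
    proxy_pass http://app:8080;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}

Reload without downtime: docker compose exec nginx nginx -s reload.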

Alternatively, if you use Cloudflare, use DNS challenge for zero downtime and API‑driven renewals.

5) Deploy: build, run, test

  1. Set environment variable: export SECRET_KEY="change_this_to_a_real_secret".
  2. Build and start: docker compose up -d --build.
  3. Check logs: docker compose logs -f app and docker compose logs -f nginx.
  4. Health check: curl http://localhost/health should return ok.

6) Move from prototype to a maintainable micro‑app

After the quick win, harden and organize the project.

  • Split code into modules: models.py, api.py, web.py, config.py.
  • Use migrations (Flask‑Migrate / Alembic) instead of db.create_all() (see the commands after this list).
  • Configure secrets via env files or a vault — do not commit SECRET_KEY.
  • Add basic auth or OAuth if you plan to share beyond close friends; enable rate limits.
  • Logging & monitoring: capture logs to a volume and rotate; add a simple /metrics endpoint for Prometheus scraping.
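
A minimal sketch of the Flask‑Migrate workflow, assuming you add flask-migrate to requirements.txt and wire Migrate(app, db) into app.py:

pip install flask-migrate
export FLASK_APP=app.py
flask db init                       # once: creates the migrations/ directory
flask db migrate -m "initial schema"
flask db upgrade                    # apply pending migrations to the database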

7) Backup, update and maintenance

Micro apps still need maintenance. Create small, automated routines:

  • Database backups: a daily cron job that snapshots /data/db.sqlite from the data volume to remote SFTP or object storage; prefer sqlite3's .backup over tar'ing the live file (a sketch follows this list).
  • Image updates: pin digest or use automated CI that builds images and runs smoke tests before replacing containers.
  • Certificate renewal: cron or systemd timer running certbot renew and reloading nginx.
  • Rollback plan: keep previous working image tags and a simple docker compose rollback script.
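
A minimal backup sketch, assuming the sqlite3 CLI on the host and a Compose project named where2eat (confirm the path with docker volume inspect where2eat_data):

#!/bin/sh
# /usr/local/bin/backup-where2eat.sh -- run daily from cron
DB=/var/lib/docker/volumes/where2eat_data/_data/db.sqlite
OUT=/backups/db-$(date +%F).sqlite
sqlite3 "$DB" ".backup '$OUT'"   # consistent snapshot, safe while the app is writing
gzip -f "$OUT"
# ship $OUT.gz offsite, e.g. with rclone or scp

Cron entry: 0 3 * * * /usr/local/bin/backup-where2eat.sh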

8) Security checklist

  • Run containers with least privilege; set user in the Dockerfile (avoid root where possible).
  • Limit exposed ports — only nginx serves ports 80/443.
  • Use HTTP security headers via nginx (HSTS, X-Frame-Options, CSP for the UI); a snippet follows this list.
  • Sanitize and validate user input server‑side before saving to the DB.
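
A sketch of those headers, added inside the 443 server block. The prototype's inline script needs 'unsafe-inline' unless you move it to a static file, and enable HSTS only once TLS works:

add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Frame-Options "DENY" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline'" always;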

9) Tips to iterate the UX quickly with LLMs

Use Claude/ChatGPT for targeted tasks:

  • Generate a richer recommendation algorithm: feed the model a preferences JSON and ask it to output scoring heuristics.
  • Ask for a mobile‑first responsive UI or extract component HTML/CSS snippets.
  • Use the assistant to produce unit tests and integration tests for your endpoints (a starting sketch follows this list).
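
A starting sketch using Flask's built‑in test client with pytest; it assumes importing app can create its database (in CI, point SQLALCHEMY_DATABASE_URI at a temporary file first):

# test_app.py
from app import app

def test_health():
    client = app.test_client()
    assert client.get('/health').status_code == 200

def test_recs_returns_a_list():
    client = app.test_client()
    resp = client.get('/api/recs?group=default')
    assert resp.status_code == 200
    assert isinstance(resp.get_json()['recs'], list)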

Prompt example: improve recommendations

Prompt: The current recommender scores restaurants by tag overlap. Revise the algorithm to consider recency (more recently added prefs weigh more), preference intensity (prefers scored 1-5), and a fallback popularity score. Return only the revised Python function that takes (prefs, restaurants) and returns sorted list of names.
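
For reference, a function in this spirit might look like the sketch below. The intensity and added_at fields are hypothetical; they do not exist in the prototype's Preference model yet, so treat this as a target shape rather than drop‑in code.

from datetime import datetime, timezone

def rank_restaurants(prefs, restaurants, now=None):
    # weight each preferred tag by intensity (1-5), decayed by age
    now = now or datetime.now(timezone.utc)
    weights = {}
    for p in prefs:
        age_days = max((now - p.added_at).days, 0)   # added_at: hypothetical field
        recency = 0.5 ** (age_days / 30)             # halve the weight every 30 days
        for tag in (p.tags or '').split(','):
            if tag:
                weights[tag] = weights.get(tag, 0) + p.intensity * recency

    def score(r):
        overlap = sum(weights.get(t, 0) for t in (r.tags or '').split(','))
        return (overlap, getattr(r, 'popularity', 0))  # popularity as fallback

    return [r.name for r in sorted(restaurants, key=score, reverse=True)]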

By breaking work into small, testable prompt tasks you keep control and reduce hallucination risk.

In 2026, several directions have become standard for micro‑app deployments; consider them as the project grows:

  • Traefik / Caddy simplify TLS and dynamic routing; replace nginx when you need automatic certs and service discovery (see the Caddyfile sketch after this list).
  • Edge functions & WASM for very low‑latency UIs; use them to offload small pieces of compute from your app server.
  • GitOps for even small projects: tie Docker image builds and deployments to a repo and CI / GitOps pipeline for safety.
  • Local-first sync patterns: combine local caches and cloud sync for offline capable micro apps.
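
For contrast, the nginx‑plus‑certbot setup above collapses to a few lines of Caddyfile; a sketch, assuming Caddy runs on the same Compose network and can reach the app service:

where2eat.example.com {
    reverse_proxy app:8080
}

Caddy obtains and renews the Let's Encrypt certificate automatically.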

Real‑world case study: Where2Eat inspiration

Rebecca Yu’s week‑long build of a dining app proved the model: non‑developers using AI can produce usable, personal apps quickly. The key lessons:

  • Start tiny: one clear use case (where to eat) and scale only if necessary.
  • Automate the repetitive bits: containerization and a reverse proxy make deployment repeatable.
  • Keep the service private by default — share invites or short‑lived tokens with friends.

Troubleshooting quick checklist

  • If nginx shows 502, ensure the app service is healthy and listening on port 8080; the slim Python image ships without curl, so test with docker exec -it where2eat_app python -c "import urllib.request; print(urllib.request.urlopen('http://127.0.0.1:8080/health').read())".
  • If certbot can’t verify domain, check DNS A/AAAA records point to your VPS and port 80 is reachable.
  • If DB seems empty after restart, ensure the data volume is mounted and paths match between app and host.

Actionable takeaways (do this now)

  1. Copy the prompt above into Claude or ChatGPT and generate the prototype files.
  2. Run the prototype locally with Docker Compose: build, seed sample data, and test /api/recs.
  3. Provision a small VPS, point DNS, and deploy with nginx + certbot for TLS.
  4. Harden and add backups, then invite 2–5 friends to test and iterate with the LLM for UX improvements.

Final notes on risk and trust

LLMs accelerate development but require review. Verify generated code for security issues and avoid blindly trusting external snippets. Keep secrets off VCS, audit third‑party dependencies, and use small scopes when opening the app to others.

Call to action

Ready to build your micro‑app? Start with the prompt, spin up the prototype, and push it behind a small nginx reverse proxy. If you want a ready‑to‑clone repo and tested Docker Compose stack, visit our companion GitHub (link in the sidebar) or subscribe to our weekly self‑hosting brief for production checklists and automation scripts tuned for 2026.


Related Topics

#Tutorial #Micro apps #Docker

selfhosting

Contributor

Senior editor and content strategist. Writing about technology, design, and the future of digital media. Follow along for deep dives into the industry's moving parts.
