The shipyard for
production-ready AI apps.
Ship4AI is an opinionated Next.js + Vercel blueprint for teams that would rather launch than plumb. Gateway, queues, sandboxes and scheduled agents — pre-wired so you ship the AI, not the infrastructure around it.
Four pillars. One blueprint.
Everything that matters for a serious AI product — already converged in a single repo.
Gateway-first
One API key, every model. Ship4AI wires in the Vercel AI Gateway with observability, fallbacks, and zero data retention out of the box.
Durable agents
Long-running agents run on Vercel Queues and Workflows — at-least-once delivery, graceful shutdown and cancellation without hand-rolling a job runner.
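The practical consequence of at-least-once delivery is that a handler may see the same message twice, so agent steps must be idempotent. A minimal sketch of that pattern — the handler name, job shape, and `'ack' | 'retry'` return convention are illustrative stand-ins, not the exact Vercel Queues SDK surface:

```typescript
// Illustrative sketch: names and the ack/retry convention are hypothetical,
// not the real Vercel Queues API. The shape it shows is real: at-least-once
// delivery means handlers must be idempotent and acknowledge only after the
// work has durably completed.

type AgentJob = { runId: string; step: number };

// Idempotency guard; in production this would live in a database, not memory.
const completed = new Set<string>();

export async function handleAgentJob(job: AgentJob): Promise<'ack' | 'retry'> {
  const key = `${job.runId}:${job.step}`;
  if (completed.has(key)) return 'ack'; // duplicate delivery: safe no-op
  try {
    await runAgentStep(job);            // the long-running agent work
    completed.add(key);
    return 'ack';                       // acknowledge only on success
  } catch {
    return 'retry';                     // message is redelivered later
  }
}

async function runAgentStep(_job: AgentJob) {
  /* model calls, tool use, state updates… */
}
```

The same discipline is what makes graceful shutdown and cancellation cheap: an unacknowledged message simply comes back.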
Sandboxed execution
User-generated code and tool calls run inside Vercel Sandbox. Safe-by-default execution for agentic features your users trigger.
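A sketch of what that looks like in a route handler, assuming the `@vercel/sandbox` SDK — method names follow its published examples (`Sandbox.create`, `writeFiles`, `runCommand`), but treat the exact options and the `runUserCode` helper as illustrative:

```typescript
// Sketch only: assumes the @vercel/sandbox SDK; options are illustrative.
import { Sandbox } from '@vercel/sandbox';

export async function runUserCode(source: string): Promise<string> {
  // Each run gets an isolated VM with its own filesystem and a hard timeout.
  const sandbox = await Sandbox.create({ timeout: 60_000 });
  try {
    await sandbox.writeFiles([
      { path: 'tool.js', content: Buffer.from(source) },
    ]);
    const result = await sandbox.runCommand('node', ['tool.js']);
    return await result.stdout();
  } finally {
    await sandbox.stop(); // always tear the sandbox down
  }
}
```

Untrusted code never touches your function's own runtime, so a misbehaving tool call can at worst burn its own sandbox.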
Ship-ready UX
Marketing site, operator console and component library pre-wired. Edit copy, add a model, ship a preview URL in minutes — not sprints.
Built on the modern Vercel platform.
Ship4AI runs on Fluid Compute, uses the Vercel AI Gateway for every model call, and orchestrates long-running work with Queues and Workflows. Previews on every PR, rolling releases on main.
- Next.js 16
- React 19
- Tailwind 4
- AI SDK v6
- AI Gateway
- Vercel Queues
- Vercel Sandbox
- Fluid Compute
- Turborepo
From clone to launch in three ships.
Clone the shipyard
One template, two apps, one shared UI package. pnpm workspaces and Turborepo do the plumbing.
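The workspace layout that implies is roughly this — directory names are illustrative, not prescribed by the template:

```yaml
# pnpm-workspace.yaml (illustrative; actual directory names may differ)
packages:
  - "apps/*"      # marketing site + operator console
  - "packages/*"  # shared UI package
```

Turborepo then caches and parallelizes builds across those packages, so `turbo build` only rebuilds what changed.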
Wire the gateway
Drop in your Vercel AI Gateway key. Every provider and model becomes a single typed call.
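With the AI SDK, a plain `provider/model` string routes through the gateway, so switching providers is a one-line change. A minimal sketch of a route handler — the route path and model id are examples, and `AI_GATEWAY_API_KEY` is assumed to be set in the environment:

```typescript
// app/api/chat/route.ts (illustrative path)
// Assumes AI_GATEWAY_API_KEY is configured; the model id is an example.
import { generateText } from 'ai';

export async function POST(req: Request) {
  const { prompt } = await req.json();
  // A 'provider/model' string is resolved by the Vercel AI Gateway,
  // which also handles fallbacks and observability.
  const { text } = await generateText({
    model: 'openai/gpt-4o-mini',
    prompt,
  });
  return Response.json({ text });
}
```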
Launch the fleet
Preview URLs on every PR, rolling releases on main, and cron-driven agents that deploy themselves.
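Scheduled agents ride on Vercel's cron configuration in `vercel.json`; the route path and schedule below are placeholders:

```json
{
  "crons": [
    { "path": "/api/agents/nightly", "schedule": "0 3 * * *" }
  ]
}
```

On each deploy, Vercel reads this file and invokes the route on the given cron schedule — no separate scheduler to run or babysit.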