I experimented with several AI coding tools to build a small web app in one week. The goal was simple: validate a tiny product idea fast and see how much of the build the AI could reasonably handle. Below I share the setup, what actually worked, where the surprises were, and practical lessons for anyone trying the same.
Project setup and goals
My goal was to deliver a lightweight MVP: a single-page app with user authentication, a form to capture submissions, a basic listing page, and an email notification when new items were added. I chose a common stack (React frontend, Node/Express backend, and a hosted PostgreSQL) and used editor-integrated AI for code suggestions, plus an AI-driven project scaffold tool to generate initial files.
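The data model behind this MVP was deliberately small. As a sketch (field names here are illustrative assumptions, not the project's actual schema), the submission record looked roughly like this:

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical shape of a submission record; the real project's fields differed.
interface Submission {
  id: string;
  email: string;
  title: string;
  body: string;
  createdAt: Date;
}

// Build a new submission, filling in the server-assigned fields.
function createSubmission(
  input: Pick<Submission, "email" | "title" | "body">
): Submission {
  return {
    id: randomUUID(),
    createdAt: new Date(),
    ...input,
  };
}
```

The listing page read rows of this shape from PostgreSQL, and the email notification fired whenever a new one was inserted.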
Constraints were intentional: keep scope minimal, rely on AI for boilerplate, and spend most manual effort on tying pieces together, testing, and hardening security-sensitive parts.
What the AI did well
Scaffolding and setup: The AI scaffolded the project quickly—package.json, ESLint, a basic React app, and an Express server. That saved an afternoon of setup and dependency wrangling.
Boilerplate and components: AI completions produced form components, input validation snippets, and simple CRUD endpoints. Much of the UI layout and common utility functions were usable after small edits.
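To give a flavor of those validation snippets, here is a sketch of the kind of form-validation helper the AI drafted (the rules shown are my illustrative assumptions; the generated code differed in detail):

```typescript
// Sketch of an AI-drafted input-validation helper for the submission form.
// The specific rules (email regex, 120-char title cap) are illustrative.
interface FormValues {
  email: string;
  title: string;
}

function validateForm(values: FormValues): string[] {
  const errors: string[] = [];
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(values.email)) {
    errors.push("email: must be a valid address");
  }
  if (values.title.trim().length === 0) {
    errors.push("title: required");
  }
  if (values.title.length > 120) {
    errors.push("title: 120 characters max");
  }
  return errors; // an empty array means the form is valid
}
```

Code at this level was typically usable after a quick read and a small edit or two.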
Testing and docs: The AI generated unit tests for utility functions and a concise README with setup and deployment steps. Auto-generated mocks and sample data helped me test flows faster.
Productivity boost: For repetitive tasks like wiring form fields to state, creating fetch wrappers, or writing TypeScript types, AI suggestions shaved minutes off routine steps that add up across a project.
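A fetch wrapper is a good example of that routine work. A sketch of the pattern (the fetch implementation is injected here so the example is self-contained; the app itself passed the global fetch):

```typescript
// Minimal response surface we rely on, so the sketch needs no DOM typings.
interface JsonResponse {
  ok: boolean;
  status: number;
  json(): Promise<unknown>;
}
type FetchLike = (url: string) => Promise<JsonResponse>;

// Wrap a fetch-like function with status checking and JSON decoding.
async function getJson<T>(url: string, fetchImpl: FetchLike): Promise<T> {
  const res = await fetchImpl(url);
  if (!res.ok) {
    throw new Error(`GET ${url} failed with status ${res.status}`);
  }
  return (await res.json()) as T;
}
```

Injecting the fetch function is also what makes wrappers like this trivially testable with a fake response, no network required.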
Where AI fell short
Context and architecture alignment: The generated code sometimes ignored my project’s established patterns. For example, the AI created a new state hook instead of reusing the global store already in the scaffold. That produced duplication and required refactoring.
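The pattern at issue can be shown with a minimal module-level store (a sketch standing in for the scaffold's actual global store, which I'm not reproducing here):

```typescript
// Minimal shared store, standing in for the global store the scaffold provided.
// The AI-generated components created fresh ad-hoc state like this per feature
// instead of importing the one shared instance.
type Listener<S> = (state: S) => void;

function createStore<S>(initial: S) {
  let state = initial;
  const listeners = new Set<Listener<S>>();
  return {
    getState: () => state,
    setState(patch: Partial<S>) {
      state = { ...state, ...patch };
      listeners.forEach((l) => l(state));
    },
    subscribe(l: Listener<S>) {
      listeners.add(l);
      return () => listeners.delete(l); // unsubscribe
    },
  };
}

// One shared instance, imported everywhere; duplicating this is the bug.
const appStore = createStore({ items: [] as string[], loading: false });
```

Telling the AI explicitly "reuse `appStore` from the existing module" in the prompt usually fixed this, but only when I remembered to say it.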
Security oversights: Some generated endpoints lacked proper authorization checks and input sanitization. Relying on AI without manual review could have led to serious vulnerabilities in production.
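The missing checks were mostly ownership checks of the following kind. This is a hedged sketch with simplified, hypothetical user and item shapes, not the project's real auth code:

```typescript
// Sketch of the ownership/authorization check several generated endpoints
// omitted. User and Item are hypothetical simplifications.
interface User {
  id: string;
  role: "admin" | "member";
}
interface Item {
  id: string;
  ownerId: string;
}

// Only the item's owner or an admin may modify it.
function canModify(user: User | null, item: Item): boolean {
  if (!user) return false; // unauthenticated requests are always rejected
  if (user.role === "admin") return true;
  return user.id === item.ownerId;
}
```

The AI happily wrote the CRUD handler but rarely inserted a guard like this unprompted, which is exactly why security-sensitive routes need manual review.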
Performance and edge cases: The AI often produced correct-but-naive implementations (full-table queries without pagination, per-item I/O awaited sequentially inside loops). These worked for the MVP but wouldn’t scale.
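The pagination those full-table queries lacked is cheap to add. A sketch of a helper that clamps client-supplied paging parameters into safe LIMIT/OFFSET values (the 100-row cap is an illustrative assumption):

```typescript
// Clamp client-supplied paging params into safe LIMIT/OFFSET values.
// maxPageSize = 100 is an assumed cap, not a value from the project.
interface PageParams {
  limit: number;
  offset: number;
}

function toPageParams(
  page: number,
  pageSize: number,
  maxPageSize = 100
): PageParams {
  const size = Math.min(Math.max(1, Math.floor(pageSize) || 1), maxPageSize);
  const p = Math.max(1, Math.floor(page) || 1);
  return { limit: size, offset: (p - 1) * size };
}
```

The resulting values plug into a parameterized query such as `SELECT ... ORDER BY created_at LIMIT $1 OFFSET $2`, which keeps the listing page bounded no matter how large the table grows.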
Integration surprises: Integrating third-party services (auth provider, email) required hand-tuning. The AI generated plausible integration code but missed provider-specific nuances, rate limits, and error handling patterns.
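Much of that hand-tuning was wrapping provider calls in retry logic. A sketch of the retry-with-backoff wrapper the email and auth integrations needed (the retry count and delays are illustrative, not provider-recommended values):

```typescript
// Retry an async operation with exponential backoff. Counts and delays
// here are illustrative defaults, not values from any provider's docs.
async function withRetry<T>(
  fn: () => Promise<T>,
  retries = 3,
  baseDelayMs = 250
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < retries) {
        // Exponential backoff: 250ms, 500ms, 1000ms, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```

A real integration would also distinguish retryable failures (429s, timeouts) from permanent ones (bad credentials), which is precisely the provider-specific nuance the generated code missed.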
Time spent vs. time saved
Net time savings were real but nuanced. I saved roughly 30–40% of the time on scaffolding and routine components. However, I spent additional time reviewing, refactoring, and securing AI-generated code, especially around authentication and API logic. The first prototype came together fast, but converting it into a production-ready service required about as much manual effort as finalizing a traditionally built MVP.
Practical workflow that worked
Use AI for: scaffolding, boilerplate, UI components, test stubs, and documentation. Treat generated code as a first draft rather than a final deliverable.
Human-in-the-loop tasks: architecture decisions, auth and security, performance optimizations, critical business logic, and integration edge cases. Always run static analysis, dependency scans, and focused security reviews on AI output.
Prompting strategy: give the AI rich context—existing files, coding standards, and example inputs. Make prompts specific and iterative: generate, run, inspect, and refine.
Final takeaway
AI coding accelerated the early phases of the project and lowered the barrier to a working prototype. It’s excellent at removing friction around routine tasks and producing scaffolds that are easy to iterate on. But the real outcome is mixed: you get to a demo faster, but you still need experienced developers to ensure security, scale, and long-term maintainability.
If you plan to use AI for quick projects, accept that it’s a productivity multiplier—not a replacement for engineering judgment. Use it to move fast, but invest the necessary time to review, harden, and align generated code with your system’s architecture before shipping to users.