My AI-Built App Was a Messy, Insecure Disaster

The alluring promise of transforming a simple idea into a functional software application without writing a single line of code has become the modern siren song of the technology industry, beckoning creatives, entrepreneurs, and hobbyists toward a future of effortless digital creation. This powerful narrative suggests that expertise is no longer a prerequisite for innovation, and that sophisticated artificial intelligence can act as the ultimate democratizing force in software development. An experiment was designed to test this very premise, following a writer with absolutely no background in programming as they attempted to build a complete application using only natural language prompts. The resulting journey through the landscape of AI-driven development revealed a stark and complex reality, one where the initial magic of instant creation quickly gave way to a cascade of technical failures, glaring security holes, and a profound lesson on the irreplaceable value of human knowledge. This case study serves not as a condemnation of AI, but as a critical examination of its current limitations and the hidden dangers of creating without understanding.

What Happens When a Writer with Zero Coding Knowledge Tries to Build an App Using Only AI

The central question was straightforward: could a person who understands storytelling but not syntax truly build a working piece of software? The experiment’s subject, a professional writer, was tasked with participating in a hackathon sponsored by the AI development platform Bolt and the social media giant Reddit. The challenge was to create a functional, albeit irreverent and intentionally simple, application from scratch. This scenario provided the perfect crucible to test the limits of AI-assisted development, stripping away any potential advantage that might come from prior technical experience.

The writer’s background represented a pure use case for these emerging technologies. Lacking any familiarity with programming languages, development environments, or even basic command-line operations, the individual was entirely dependent on the AI’s ability to interpret natural language instructions and translate them into executable code. The experiment was not about optimizing an existing workflow for a seasoned developer but about seeing if AI could genuinely replace the developer altogether, transforming a novice into a creator through conversation alone. The outcome of this endeavor would offer a candid look at the gap between the advertised potential of no-code AI platforms and the practical reality of their application by the very audience they claim to empower.

Setting the Stage: The Rise of Vibe Coding and the Promise of Effortless Creation

This new frontier of software development has been colloquially termed “vibe coding,” a phrase that captures the essence of directing an AI with abstract ideas and desired feelings rather than precise, logical commands. It represents a paradigm shift from the structured discipline of traditional programming to a more fluid, conversational process. Platforms championing this movement market themselves as an “easy button” for innovation, suggesting that the laborious process of learning to code can be circumvented entirely. Their core proposition is that anyone with a compelling idea can now bring it to life, effectively lowering the barrier to entry for digital creation to near zero.

This narrative, however, is often met with considerable skepticism from the professional development community. Experienced engineers warn of a “productivity tax”—the hidden cost of using AI-generated code. This tax manifests as the significant time and effort required to debug, refactor, and secure the “almost-but-not-quite-right” output that these models frequently produce. While the AI can assemble code that appears functional on the surface, it often lacks the architectural integrity, efficiency, and adherence to best practices that are critical for building scalable and maintainable software. The experiment sought to quantify this tax from the perspective of a user who would be unable to pay it, exposing the true cost of creating without foundational knowledge.

An Experimental Journey: From Ten-Minute Triumph to a Cascade of Failures

The project chosen for this test was “Yelp but for the worst bathrooms in the world,” a concept that was both simple enough to be achievable and humorous enough to fit the hackathon’s spirit. The experiment began not with complex technical specifications but with a single, un-engineered prompt typed into the AI interface: “Create an app for Reddit that’s like Yelp but for bad bathrooms.” This simple sentence was the sole blueprint for the entire application, a direct test of the AI’s ability to infer structure, features, and design from a high-level concept. The goal was to rely entirely on the “vibe” of the request rather than any technical instruction.

To the writer’s astonishment, the initial results were breathtakingly fast. In approximately ten minutes, the AI processed the prompt and generated an entire project structure, complete with code folders, a user interface, and a launchable test application. The screen displayed a rudimentary but recognizable app, seemingly conjured from a single sentence. This moment represented the peak of the AI’s promise—a dazzling illusion of instant, effortless creation. It felt as though the most difficult part of software development had been solved, confirming the narrative that anyone could now be a builder. This initial triumph, however, was deceptive and proved to be the calm before a storm of technical complications.

The illusion of success shattered the moment the “run” button was clicked. The application immediately crashed, filling the screen with intimidating red error messages and cryptic warnings that were completely unintelligible to the non-technical user. This initiated a painstaking 45-minute debugging loop, where the writer’s role was reduced to that of a conduit between the broken app and the AI. Each incomprehensible error was copied from the terminal and pasted into the AI’s chat window. The AI would then diagnose the problem, offering technical explanations about “API endpoints not being served at the root level,” and generate a fix, which the writer would then apply without any genuine understanding. This iterative process of blind translation continued until, through sheer persistence, the errors ceased.

Eventually, a functional prototype was born. The application allowed users to post reviews, assign ratings, and browse submissions. The writer was even able to make minor cosmetic changes using natural language commands like “make the submit button blue.” Yet, the sense of accomplishment was hollow. The resulting product was a façade of functionality. While the app worked, the process revealed that the writer had not “built” it in any meaningful sense. Instead, they had merely supervised an AI that was, in effect, debugging itself. The AI had done all the substantive work, leaving the human user as a passive observer who could not replicate, explain, or truly own the final product.

The Brutal Verdict: A Human Review of the AI's Handiwork

Once the application was superficially functional, it was subjected to a review by a panel of human experts—professional software developers. Their verdict was swift and unforgiving. The first look at the codebase revealed what they described as a nightmare of poor practices, a tangible example of the “productivity tax.” The files were disorganized, with no logical structure to follow. Critical elements like styling were inlined directly into the code instead of being separated into stylesheets, a practice that makes maintenance nearly impossible. Furthermore, the application was built as a single, monolithic component, a cardinal sin in modern development where breaking down functionality into smaller, reusable pieces is standard practice.

The most damning critique from a professional standpoint was the complete absence of unit tests. In professional software engineering, tests are non-negotiable; they are automated checks that ensure every piece of the code works as intended and that future changes do not break existing functionality. The AI, focused solely on generating a working front-end, had omitted this crucial foundation entirely. This meant the application was incredibly fragile. Any attempt to add a new feature or fix a bug would be a high-stakes gamble, with no way to verify that the changes had not caused a ripple effect of failures across the entire system. This lack of a testing framework rendered the codebase fundamentally unmaintainable and unsuitable for any serious, long-term project.
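The safety net the reviewers found missing can be illustrated with a deliberately tiny, hypothetical example. Nothing below is from the app's actual codebase; `average_rating` is an invented stand-in, shown only to demonstrate how a few assertions pin down expected behavior so future changes cannot silently break it:

```python
def average_rating(ratings: list[int]) -> float:
    """Mean of 1-5 star ratings; 0.0 when there are no reviews yet."""
    if not ratings:
        return 0.0
    return sum(ratings) / len(ratings)


def test_average_rating() -> None:
    # Each assertion documents one expected behavior. If a later "fix"
    # breaks any of them, the test fails loudly instead of shipping a bug.
    assert average_rating([]) == 0.0          # no reviews: no ZeroDivisionError
    assert average_rating([4]) == 4.0         # single review
    assert average_rating([1, 2, 3]) == 2.0   # simple mean


test_average_rating()
```

Even a handful of checks like these turn a change to the codebase from a high-stakes gamble into a verifiable step: run the tests, and a ripple-effect failure announces itself immediately.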

The review then escalated from a critique of code quality to a serious warning about security. A senior colleague, upon a brief examination using standard browser inspection tools, discovered profound vulnerabilities. The application, which was designed to handle user-generated content, had no data protection or sanitization measures whatsoever. It was, in the expert’s words, “ripe for hacking.” This oversight was not a minor flaw but a critical failure that exposed the inherent danger of a tool that can generate functionality without the corresponding security protocols. An application that looked harmless on the surface was, in reality, dangerously insecure.
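The missing safeguard is a small amount of code. As a minimal sketch, assuming a Python back end purely for illustration (the review does not reveal the app's actual stack), the core idea is to escape user-submitted text before it is ever rendered, so a "review" containing HTML or JavaScript displays as inert text instead of executing in other users' browsers:

```python
import html


def sanitize_review(raw: str) -> str:
    """Escape user-generated text so it renders as plain text, not markup."""
    return html.escape(raw.strip())


malicious = "<script>stealCookies()</script> Worst bathroom ever"
print(sanitize_review(malicious))
# Prints: &lt;script&gt;stealCookies()&lt;/script&gt; Worst bathroom ever
```

Escaping on output is only the first layer; real applications add validation, parameterized database queries, and framework-level protections. But its complete absence is exactly the kind of gap that browser inspection tools expose in seconds.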

This single instance illuminated a much broader and more alarming risk associated with the proliferation of “vibe coding” among hobbyists. Well-meaning creators, empowered by AI to build passion projects, could easily design applications that collect sensitive user information—names, email addresses, location data—without understanding how to secure it. In their enthusiasm to create, they could unintentionally build a treasure trove of unprotected data for malicious actors. The experiment demonstrated a dangerous blind spot in the “code for everyone” movement: AI can give a novice the power to build, but it does not give them the wisdom to build responsibly.

A New Perspective: AI as a Powerful Tutor, Not a Replacement for Expertise

The failure of the experiment to produce a viable application did not signify that the technology was useless; rather, it suggested its true value lies in a completely different application. This became clear when contrasting the writer’s experience as a creator with a friend’s success using similar tools as a learner. This friend, a physicist transitioning into a technical role, used AI assistants not to build for him, but to teach him. When faced with a complex bug or an unfamiliar concept, he would prompt the AI to explain the underlying principles in detail. He used it as an interactive, infinitely patient tutor that could accelerate his journey toward genuine expertise.

This tale of two use cases points toward a more productive and responsible framework for leveraging these powerful models. The focus should shift from “vibe coding”—the attempt to create without knowledge—to “vibe learning,” where AI is used as a Socratic partner to build foundational understanding. Instead of asking the AI to “build an app,” a more effective approach is to ask it to “explain how an API works” or “walk through the steps of setting up a secure database.” This reframes the AI from a magic wand that replaces skill into a powerful tool that can democratize the learning of that skill. It empowers users to build their own mental models, not just a fragile and opaque application.

The writer’s project, initially conceived as a test of an AI’s creative capacity, ultimately became something far more valuable. In making the messy, insecure, and flawed codebase public on GitHub, the experiment transformed. It was no longer a demonstration of AI’s shortcomings but the first step in a human’s journey to learn. The project revealed that while AI can generate code with astounding speed, it cannot generate understanding, context, or responsibility. The process underscored the reality that foundational knowledge remains the critical ingredient for meaningful and secure innovation, suggesting the most exciting future for AI in development was not as a replacement for human experts, but as the most powerful educational tool ever created for aspiring ones.
