“There’s a new kind of coding I call ‘vibe coding,’ where you fully give in to the vibes, embrace exponentials, and forget that the code even exists,” claimed Andrej Karpathy in a post on X back in February. That post led many people to share their “vibe coded” applications on social media or comment on its effectiveness.
Curious, I downloaded Cursor to my home computer. Setup was easy. My first prompt was “create an application that asks for a zip code and returns the weather for that location.” Cursor replied with clarifying questions such as: did I “want the temperature in Fahrenheit?” did I “want to show the humidity?” and did I “want a blue button?” I said yes to all of it. In minutes Cursor was done, having generated three new files.
Yes, there were issues, but Cursor and I fixed them without me so much as glancing at the code, just as Karpathy described in his post: “Sometimes the LLMs can’t fix a bug so I just work around it or ask for random changes until it goes away.”
I was very pleased with my creation and immediately sent it to family and friends for group testing. I received feature requests such as “what to wear,” which I quickly added. But when I went to add another feature, Cursor prompted me to purchase more tokens. I had used up all my free ones. And that was the end of my vibe coding.
From Fun To Functional To… Fortified? It’s Not By Default
I prompted Cursor to do a security review and grade its own homework. To its credit, Cursor came back with findings such as a lack of input sanitization, no rate limiting, no proper error handling, and an API key in plain text, which Cursor then fixed.
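Those findings map to fixes any developer can apply directly. Below is a minimal sketch, in Python, of what addressing three of them might look like for a hypothetical zip-code weather app (the names and the environment variable `WEATHER_API_KEY` are illustrative assumptions, not Cursor's actual output):

```python
import os
import re
import time
from collections import deque

ZIP_RE = re.compile(r"^\d{5}(-\d{4})?$")  # US ZIP or ZIP+4

def validate_zip(raw: str) -> str:
    """Input sanitization: reject anything that isn't a well-formed zip code."""
    candidate = raw.strip()
    if not ZIP_RE.fullmatch(candidate):
        raise ValueError(f"invalid zip code: {raw!r}")
    return candidate

def get_api_key() -> str:
    """Read the weather API key from the environment, not from source code."""
    key = os.environ.get("WEATHER_API_KEY")
    if not key:
        raise RuntimeError("WEATHER_API_KEY is not set")
    return key

class RateLimiter:
    """Naive sliding-window rate limiter: at most max_calls per window seconds."""

    def __init__(self, max_calls: int, window: float):
        self.max_calls = max_calls
        self.window = window
        self.calls: deque[float] = deque()

    def allow(self) -> bool:
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] > self.window:
            self.calls.popleft()
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False
```

None of this is exotic; the point is that the model only produced it when explicitly asked for a security review.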
Why didn’t Cursor write secure code from the start? Why did it have to be prompted to run a security review? This is a huge “gotcha,” as developers cannot assume that generated code is secure by default.
LLMs Are Not Secure Either
Cursor is not alone. While AI is getting better at coding syntax, security improvements have plateaued. In fact, 45% of coding tasks came back with security weaknesses. Additionally, a separate study found that open-source LLMs suggest non-existent packages over 20% of the time, and commercial models 5% of the time. Attackers exploit this by publishing malicious packages under those names, leading developers to unknowingly introduce vulnerabilities.
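One simple defense against hallucinated dependencies is to vet LLM-suggested package names against a reviewed allowlist (or lockfile) before installing anything. A minimal sketch, with an illustrative allowlist (the package names and helper are assumptions for the example, not a specific tool's API):

```python
# Pre-install check against hallucinated packages: only dependencies that
# appear in a human-reviewed allowlist pass; everything else is flagged.
APPROVED_PACKAGES = {"requests", "flask", "pydantic"}  # illustrative allowlist

def vet_dependencies(suggested: list[str]) -> tuple[list[str], list[str]]:
    """Split LLM-suggested dependencies into approved and flagged-for-review."""
    approved: list[str] = []
    flagged: list[str] = []
    for name in suggested:
        if name.lower() in APPROVED_PACKAGES:
            approved.append(name)
        else:
            flagged.append(name)
    return approved, flagged
```

In practice, the flagged list would go to a human or a supply-chain scanner rather than straight to `pip install`.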
Vibe Coding Is Not Ready For Enterprise Applications… Yet
Are we taking vibe coding too far? For example, are product managers, design professionals, and non-software developers vibe coding the next mobile banking application and putting it into production? Hopefully not. I share Karpathy’s sentiment: “[vibe coding is] not too bad for throwaway weekend projects.” In the professional world, product managers, designers, software developers, and testers can use AI-powered software tools to assist in building applications, from prototyping to design, coding, testing, and even delivery. But for now, humans must remain in the loop.
What happens to the role of application security? With LLMs helping companies release faster (Microsoft and Google, for example, boast that over 25% of their code is written by AI), the volume of vulnerable code will only increase, especially in the short term. DevSecOps best practices must be followed for all code regardless of how it is developed, with AI or without, by full-time developers or a third party, or downloaded from open-source projects. Otherwise, organizations will fail to innovate securely.
“Vibe coding” tools such as Cursor, Cognition Windsurf, and Claude Code are already entrenched in professional software development. There will be a convergence with low-code platforms (solutions that allow technical and non-technical users to quickly build and iterate on applications with visual models). In the next three to five years, the software development lifecycle will collapse, and the role of the software developer will evolve from programmer to agent orchestrator. AI-native AppGen platforms that integrate ideation, design, coding, testing, and deployment into a single generative act will rise to meet the challenge of AI-enhanced coding within guardrails. AI security agents will emerge to help security and development professionals avoid a tsunami of insecure, poor-quality, and unmaintainable code, whether low-coded or vibed.
Join Us In Austin To Learn How To Secure AI-Generated Code
Interested in learning what the future holds? Attend Forrester’s Security & Risk Summit in Austin, Texas, on November 5–7, 2025, where my colleague Chris Gardner and I will provide a look into Application Security In The Age Of AI-Generated Code and beyond.
