There’s no such thing as “agents.” There must always be a human in the loop; they don’t just create code from nothing. That’s true both in the sense that a human has to prompt the LLM, and in the fact that they’re trained on human-created code. “Agents” is just a buzzword coined by tech CEOs and MBAs to make the general population think these models are doing more than they really are. They have no skills; they’re statistical prediction models. And prediction models tend to fuck up a lot of things, especially as the context window grows.
You can use them to help you code, sure, but don’t do what the billionaire class wants and treat them as something they’re not.
Is your app as efficient as what an experienced developer would build? If you released the source code, would it be full of security vulnerabilities? These are just a couple of the subtler issues that fly under the radar when people ship LLM-generated code.