Love the concept of creating something intentionally bad to show what can go wrong. The browser crash part made me laugh. It's a clever way to illustrate how blindly trusting AI output can lead to real problems, especially when the code gets deployed straight into production without review.
Thanks, that's exactly the point. Build a safe, ugly demo so people can understand why testing and review matter even more now in the world of genAI. Try it at VibeCrap.com and check our other projects at https://privacysafe.app
It's interesting how you guys decided to test the limits of what an LLM could *fail* to create. What other deliberately awful outputs do you think are lurking for us to prompt? This is such a clever experiment.
Thanks so much, we really appreciate it. I've thought about a few safe follow-ups that show how things can go wrong without hurting anyone, like a form that nags on every keystroke until you give up. Same approach as VibeCrap.com: local first with clear limits and a kill switch, and no data leaves the page.
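For anyone curious what the nagging-form logic might look like, here's a minimal, DOM-free sketch. All names here (`makeNagger`, `maxNags`, etc.) are made up for illustration, not actual VibeCrap.com code; the hard cap and `kill()` stand in for the "clear limits and a kill switch" idea:

```javascript
// Hypothetical sketch of the "nag on every keystroke" anti-pattern,
// kept safe with a hard cap and a user-facing kill switch.
// Pure logic, no DOM, so it stays local and testable.
function makeNagger(maxNags = 5) {
  let nagCount = 0;
  let killed = false;
  return {
    // Called on every keystroke; returns a nag message or null when muted.
    onKeystroke() {
      if (killed || nagCount >= maxNags) return null; // limits respected
      nagCount += 1;
      return `Are you SURE about that character? (${nagCount}/${maxNags})`;
    },
    kill() { killed = true; }, // the escape hatch a real demo must have
    get count() { return nagCount; },
  };
}
```

Wiring it to an `<input>`'s keystroke events would reproduce the annoyance, while the cap and kill switch keep the demo from becoming genuinely hostile.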
Basically, we're talking about anti-patterns that I'm sure will emerge / have emerged in some form already with vibe coding... these would be difficult for a novice coder to pull off, and aren't the casual mistakes a novice could hope to introduce on their own, *unless* they used an LLM coding companion :)