/adrift/ by Stantonius

Floating without direction across the digital ocean in an unfinished dinghy.

The danger of convincing AI generated code


Source: https://simonwillison.net/2025/Mar/2/hallucinations-in-code/?__readwiseLocation=#atom-everything

Simon Willison blogged about exactly the thing I wrote about in my Brownfield code updates post.

He emphasizes how convincing AI-generated code is:

LLM code will usually look fantastic: good variable names, convincing comments, clear type annotations and a logical structure. This can lull you into a false sense of security, in the same way that a grammatically correct and confident answer from ChatGPT might tempt you to skip fact checking or applying a skeptical eye.

However, he also states directly what I only implied in this post - you need to QA the code yourself, which means actually running it.
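To make that concrete, here is a hypothetical sketch (invented for illustration, not taken from either post) of the kind of code an LLM might produce: well named, type annotated, confidently documented, and subtly wrong in a way that only running it reveals.

```python
# A hypothetical example of LLM-style code that "looks fantastic":
# clear name, type hints, a confident docstring - and a subtle bug.
# Reading it is not enough; running it is what catches the problem.

def median(values: list[float]) -> float:
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    # Bug: for even-length input this returns the upper-middle element
    # instead of averaging the two middle elements.
    return ordered[mid]

# The QA step - actually executing the code against a known answer:
print(median([1, 2, 3]))     # odd length: correct
print(median([1, 2, 3, 4]))  # even length: prints 3, but the true median is 2.5
```

The code passes a casual read and even a happy-path test; only exercising the even-length case exposes the error. That is the kind of check a review-by-eye tends to miss.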

And to further emphasize something I agree with him on wholeheartedly - convincingly wrong or hallucinated code is not a reason to avoid LLMs for coding. LLMs for coding are a gift, but they need to be used properly. Let them write lots of code for you, but QA it by running and interacting with it. Never trust it blindly.