• 0 Posts
  • 17 Comments
Joined 2 years ago
Cake day: June 20th, 2023

  • No, not that either. Unless you consider “use an LLM to summarize the changes/errors/inaccuracies, then have a human read the whole thing again anyway” an improvement over “just have a human read the whole thing”.

    Because an LLM will do all of these things:

    • point you toward issues
    • point you toward non-issues
    • not point you toward issues
    • change stuff even when “instructed” not to

    If there is one thing you don’t want to throw an LLM at without a full, unbiased review, it’s documents where the wording is legally binding. And if you have to do that full, unbiased review anyway, because you can’t trust the tool to have highlighted all the important parts, you may as well not bother with the tool.


  • If you consider debugging broken LLM-generated code to be a skill… sure, go for it. But generated code tends to lean on obscure side effects and other (to a human) seemingly random tricks to reach its goal. I’d rather take the other approach: spend the half hour it takes a human to write the code an LLM could spit out in seconds, get a working result, and skip having to learn to parse random mumbo jumbo from a machine (see the sketch after this comment for the kind of code I mean).

    Writing code is far from the longest part of the job, and yet you’ve cheerfully decided that making the tedious part (reviewing and debugging) even more tedious is a great way to shorten the part that was already short…
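
    To make “obscure side effects and seemingly random tricks” concrete, here is a hypothetical Python sketch (invented for illustration, not taken from any actual model output). Both functions deduplicate a list while preserving order; the first relies on a side effect buried inside a comprehension, the second is the plain version a human would likely write and a reviewer can actually read.

    ```python
    # Hypothetical illustration: two ways to deduplicate a list while
    # preserving order. Both produce the same result.

    def dedupe_opaque(items):
        seen = set()
        # "Works" only because seen.add(x) mutates state mid-iteration
        # and happens to return None (falsy) -- exactly the kind of
        # clever-but-opaque construct that is tedious to review.
        return [x for x in items if not (x in seen or seen.add(x))]

    def dedupe_plain(items):
        # Explicit loop, explicit membership check: nothing to decode.
        seen = set()
        result = []
        for x in items:
            if x not in seen:
                seen.add(x)
                result.append(x)
        return result

    print(dedupe_opaque([3, 1, 3, 2, 1]))  # [3, 1, 2]
    print(dedupe_plain([3, 1, 3, 2, 1]))   # [3, 1, 2]
    ```

    Both are correct, but only one of them lets a reviewer confirm that at a glance, which is the whole point when review is the long part of the job.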