From what I’ve seen, it’s mostly non-coding “tech” journalists, executives, and enthusiasts getting LLMs to generate tutorial fodder, which they can do just fine. I’m sure there are also some coders using them for the most milquetoast development tasks, like yet another thin custom UI that just fronts data from a database in a straightforward way. One example was a vibe coder getting frustrated because he wanted to implement some feature on top of the tutorial fodder, and the AI kept failing to do so; he was completely lost. He didn’t understand how it could get as far as it did with the “hard” stuff yet be utterly unable to implement something he thought sounded like it should be “easier”.
In my experience with my sort of work, it can fairly frequently suggest a serviceable couple of lines faster than I could have typed them. If I have a tedious but boilerplate task, it can usually produce a good draft (for example, when writing a CLI utility, just start using the variables you imagine, then ask it to generate the argument-parsing section; it has a good chance of getting 90%+ of the way there). It can also generate a decent draft docstring for a function, which can be nice, particularly if you strongly suspect no human will ever read it anyway. Some people swear by its ability to comment functions, but it seems like they are grading on quantity, not quality: it documents every single line in useless ways (x = 50 // Assign the value 50 to variable x) and then fails to comment the actually confusing bits of code.
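To make the argument-parsing scenario concrete, here is a minimal sketch of the kind of boilerplate an LLM tends to draft well. The flag names and defaults here are hypothetical, standing in for whatever variables you had already started using in the body of your script:

```python
import argparse

def parse_args(argv=None):
    # The kind of section an LLM can usually draft from variables already
    # in use elsewhere in the script; flag names here are illustrative.
    parser = argparse.ArgumentParser(description="Example CLI utility")
    parser.add_argument("--input", required=True, help="Path to the input file")
    parser.add_argument("--retries", type=int, default=3, help="Number of retry attempts")
    parser.add_argument("--verbose", action="store_true", help="Enable verbose output")
    return parser.parse_args(argv)

if __name__ == "__main__":
    args = parse_args()
    print(args.input, args.retries, args.verbose)
```

The remaining 10% is usually the part the model gets wrong: subtle defaults, mutually exclusive flags, or validation logic specific to your program.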
So the best scenario is a code editor with AI integration that ambiently drives completion and gives quick access to prompting against specific code context. But still be prepared to be annoyed: while the completions are occasionally useful enough to be worth it, you may find yourself discarding useless suggestions most of the time. It still might be faster on net, even with the annoyance, but there’s a natural urge to get frustrated seeing the LLM be wrong so much of the time.