I recently decided to bite the bullet and get into AI, and specifically LLMs. Fuck corporate assholes and data-hungry companies, so I got some second-hand GPUs and built an $800 rig that can run 32B models at very usable speeds. I started playing with it and decided to use PyCharm with Continue and a local Ollama instance to have my own Copilot.
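For anyone curious, pointing Continue at a local Ollama model is mostly a matter of adding a provider entry to Continue's `config.json`. This is a minimal sketch; the "Local Qwen" title and the `qwen2.5-coder:32b` tag are my assumptions, so substitute whatever 32B model you actually pulled:

```json
{
  "models": [
    {
      "title": "Local Qwen",
      "provider": "ollama",
      "model": "qwen2.5-coder:32b"
    }
  ]
}
```

With that in place, Continue talks to the Ollama server running on your own box instead of any hosted API.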

I must say, this is the future. Not in the sense that we will all soon be prompt engineers (lol), but assuming LLMs don’t reach a point of true intelligence and just keep becoming fancier and fancier regurgitation machines, this is like having an instant StackOverflow in your IDE. I don’t even bother giving it complex or novel problems, since once you reach that territory it becomes useless.

My programming flow has always started with writing down a rough idea of what I want to do and how I plan on doing it. Now I just feed that to the LLM and it gives me code that’s about 90% accurate; I do a code review, fix mistakes or weird logic, and from there, as I expand my code, I use the LLM as a replacement for online searching. Need syntax help or basic library context? The LLM has you covered. Need a function that does something basic or well known? LLM! It’s like having an infinite boilerplate library that can be custom-fit to your needs.

I think the reason it works so well for me is that I only let it handle the basics; anything I need to think through, I do myself.
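The "feed a rough plan to the local model" step can be sketched against Ollama's HTTP API. Assumptions here: Ollama is running on its default port 11434, and `qwen2.5-coder:32b` stands in for whichever 32B model you pulled.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_request(rough_plan: str, model: str = "qwen2.5-coder:32b") -> dict:
    """Turn a rough design note into an Ollama /api/generate payload."""
    return {
        "model": model,
        "prompt": "Write code implementing this plan:\n" + rough_plan,
        "stream": False,  # ask for one complete response instead of token chunks
    }


def ask_local_llm(rough_plan: str) -> str:
    """Send the plan to the local model and return its reply (needs Ollama running)."""
    payload = json.dumps(build_request(rough_plan)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Nothing about this is tied to one editor: the same endpoint that Continue uses from PyCharm can be hit from a script, which is handy for batch jobs like generating boilerplate for a list of modules.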

I doubt we will ever go back from this.

  • southsamurai@sh.itjust.works · 17 hours ago

    I don’t think your opinion is rare so much as people dislike, and don’t want, the options that are currently out there as they are.

    If they were all open source and/or free of corporate manipulation, there are big swaths of objectors who would range from okay with them to enthusiastic about them.

    If they were fully capable, rather than a mishmash of levels of readiness, another swath would either support them or cease to object.

    And you’d also see increased support if the system underlying everything were more supportive of the people who are going to have to shift jobs once the models are fully capable, rather than at the varying stages of capability they’re at now.

    Eventually, it’s going to be the dominant tool for almost all use cases, and there’s nothing wrong with a tool reducing the need for humans to do grunt work. It’s all the knock-on effects that are the problem, not the tool itself.