• Lovable Sidekick@lemmy.world · 9 days ago

    I’ve always said as a software developer that our long-term job is to program ourselves out of a job. In fact, in the long term EVERYBODY is “cooked” as automation becomes more and more capable. The eventual outcome will be that nobody will have to work. AI in its present state isn’t ready at all to replace programmers, but it can be a very helpful assistant.

    • DarkenLM@kbin.earth · 9 days ago

      Management can’t blame AI when shit hits the fan, though. We’ll be fine. Either that or everything just collapses back into dust, which doesn’t sound so bad in the current times.

      • Lovable Sidekick@lemmy.world · 9 days ago

        That’s the beauty of AI tho - AI shit rolls uphill, until it hits the manager who imposed the decision to use it (or their manager, or even their manager’s manager).

    • fuck_u_spez_in_particular@lemmy.world · 9 days ago

      > but it can be a very helpful assistant.

      It can, but when stuff gets even slightly more complex, being a fast typist is usually more efficient and results in better code.

      I guess it really depends on your aspirations for code quality and on the complexity (yes, it’s good at generating boilerplate). For a one-time-use script I don’t care much about and that can be quickly written from a prompt, I’ll use it.

      Working on a big codebase, it doesn’t even occur to me to ask an AI - you just can’t feed it enough context for it to generate truly meaningful code…

      • Lovable Sidekick@lemmy.world · 9 days ago

        I actually don’t write code professionally anymore, so I’m going on what my friend says - according to him, he uses ChatGPT every day to write code and it’s a big help. Once he told it to refactor some code and it used a really novel approach he wouldn’t have thought of. He showed it to another dev, who said the same thing. It was like, huh, that’s a weird way to do it, but it worked. But in general you really can’t just tell an AI “Create an accounting system” or whatever and expect coherent, working code without thoroughly vetting it.

        • fuck_u_spez_in_particular@lemmy.world · 8 days ago

          I use it fairly often too. But when the situation is complex and needs a lot of context/knowledge of the codebase (which, at least for me, is often the case), it still seems worse/slower than just coding it yourself (it doesn’t grasp the details). Though I do like how quickly I can come up with quick-and-dirty scripts (in Rust, for the lulz and for the speed/power).
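          For illustration, the kind of thing I mean is a tiny std-only one-off like the sketch below - counting lines that contain a substring across a few files. The name and behaviour are made up, not from any real project; it’s just the shape of script I’d happily let a model draft.

          ```rust
          // Illustrative one-off: count lines containing a substring in each file
          // passed on the command line. Std-only, no crates, no error polish.
          use std::{env, fs, process};

          fn main() {
              let mut args = env::args().skip(1);
              let needle = args.next().unwrap_or_else(|| {
                  eprintln!("usage: count-hits <needle> <file>...");
                  process::exit(1);
              });

              let mut total = 0usize;
              for path in args {
                  match fs::read_to_string(&path) {
                      Ok(text) => {
                          let hits = text.lines().filter(|l| l.contains(needle.as_str())).count();
                          println!("{path}: {hits}");
                          total += hits;
                      }
                      Err(err) => eprintln!("{path}: {err}"),
                  }
              }
              println!("total: {total}");
          }
          ```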

        • fuck_u_spez_in_particular@lemmy.world · 8 days ago

          Ughh, I tried the Gemini model and I’m not too happy with the code it came up with - there are a lot of intricacies and concepts the model doesn’t grasp well enough IMO. That said, I’ll keep reevaluating it - converting large chunks of code often works okay…

          • Terrasque@infosec.pub · 8 days ago

            Well, it wasn’t a comment on the quality of the model, just that the context limitation has already been largely overcome by one company, and others will probably follow (and improve on it further) over time. Especially as “AI Coding” gets more marketable.

            That said, was this the new Gemini 2.5 Pro you tried, or the old one? I haven’t tried the new model myself, but I’ve heard good things about it.