• 0 Posts
  • 41 Comments
Joined 2 years ago
Cake day: July 1st, 2023


  • saltesc@lemmy.world to Technology@lemmy.world · When the AI bubble bursts
    11 days ago

    It will burst. AI is improving at the same rate it always has, and no one’s surprised; it’s just that LLMs have gotten attention from everyday users who seem to think “this is AI”.

    For actual AI, nothing has changed. You still need extremely well-governed data and lots and lots of controlled training, lots and lots of condition farming and resolving, all at considerable cost that isn’t worth it for BAU, only for AI-specific projects.

    It’s already bursting, as people realise the difference between AGI and non-logic LLMs, and why the latter have limited use, especially with awful mass “training”.

    The most realistic outcome is that LLMs end up helping to accelerate progress toward AGI.






  • What Deere did was even harsher. They tried to block not only self-repair, but also third-party firmware that made the tractors work better, especially older ones that were out of warranty.

    That’s straight up a major federal crime in my country. So that should give Americans an idea how balanced their scale of justice is at the moment.

    The consumer and supplier ALWAYS get equal and fair protection, lest a business become based on ripping people off with a product instead of on the product itself.








  • Light debugging I actually use an LLM for. Yes, I know, I know. But when you know it’s a syntax issue or something simple, yet a quick skim through produces no results, AI be like, “Used a single quote instead of a double quote on line 154, so it’s indirectly using a string instead of calling a value. Also, there’s a typo in the source name on line 93, because you spelled it like this everywhere else.”

    By design, LLMs do be good with syntax, whether it’s a natural language or a programming one.

    Nothing worse than going through line by line, only to catch the obvious mistake on the third “Am I losing my sanity?!” run-through.
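    The single-vs-double-quote trap described above is easiest to see in shell, where the quoting style decides whether a variable expands. A minimal sketch (the variable names are invented for illustration):

```shell
#!/bin/sh
# Single quotes keep text literal; double quotes expand the variable.
name="report"

literal='$name'    # stays the literal string $name
expanded="$name"   # expands to the variable's value: report

echo "$literal"
echo "$expanded"
```

    The first echo prints the literal text `$name`, while the second prints `report`: the same class of slip as quietly using a string where a value was meant.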






  • We can, but it’s a lot of effort and time. Good AI requires a lot of patience and specificity.

    I’ve sort of accepted that the gimmick of LLMs is a bit of a plateau in training. It has always been that we teach AI to learn, but currently the public has been exposed to what they perceive as magic, and that’s “good enough”. Like, being wrong so often due to bad information, bad interpretation of information, and bias within information is acceptable now, apparently. So teaching models to learn isn’t a high mainstream priority compared to throwing mass information at them instead; it’s far less exciting working on infrastructure.

    But here’s the cool thing about AI: it’s pretty fucking easy to learn. If you have the patience and creativity to put toward training, you can do what you want. Give it a crack! But always keep refining it. I’m sure someone out there right now has been inspired enough to do what you’re talking about, and after a few years of tears and insane electricity bills, there’ll be a viable model.


  • Yeah, get too far in or give it too much to start with, and it can’t handle it. You can see this with visual generators. “Where’s the lollipop in its hand? Try again… Okay, now you forgot about the top hat.”

    You have to treat them like eager interns who will do anything to please rather than admit the task is too complex or that they’ve forgotten what they were meant to do.