• 0 Posts
  • 34 Comments
Joined 3 years ago
Cake day: July 4th, 2023

  • I agree with the sentiment but not with the advice “commit a felony to avoid maybe getting a felony”. There’s every chance you’ll get charged with destroying evidence if they’re already looking at you under a microscope like in your hypothetical.

    Anyone that concerned needs to just not store sensitive data on their phone, and use a messaging app that doesn’t permanently store messages, either. That way you didn’t erase your phone, AND they find nothing. Attempting to secure your data from the cops while you’re already under the lens with a warrant is far too late.



  • Honestly I feel this was always the goal (one of several), but R&D is expensive. Shipping an odd phone that people still buy keeps the shareholders happy while the multi-year research process can eventually produce more usable results.

    Single-flip phones were the awkward teenagers; now this phone can be the 18-to-20-year-old young adult: fully featured, but needing refinement. The next gen, or the one after that, will add a lot more robustness.




  • It’s just an API.

    There are a few ways they could go about it. They could have part of the prompt be something like “when the customer is done placing their order, create a JSON file with the order contents”, and essentially set up a dumb register that watches for those files and enters each order the way a standard POS would.

    They could spell out a tutorial in the prompt: “to order a number 6 meal, type ‘system.order.meal(6)’”, calling the same functions a POS system would, and have that output go right to a terminal.

    They could have their POS system open on an internal screen, use a model that can process images, and have it output a coordinate pair to simulate a touch on that screen, manually entering the order the way an employee would.

    There’s lots of ways to hook up the AI, and it’s not actually that different from hooking up a normal POS system in the first place, although just because one method does allow an AI to interact doesn’t mean it’ll go about it correctly.
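    As a rough sketch of the second approach above: a thin parser scans each line of the model’s text output for the tutorial-style command and dispatches it. The order_meal() function here is a hypothetical stand-in for whatever the real POS exposes.

    ```c
    #include <stdio.h>

    /* Hypothetical stand-in for the real POS function. */
    static void order_meal(int meal_number) {
        printf("added meal #%d to the order\n", meal_number);
    }

    /* Scan one line of model output for the command the prompt taught it.
       Returns 1 if a command was recognized and handled, 0 otherwise. */
    static int handle_model_output(const char *line) {
        int meal;
        if (sscanf(line, "system.order.meal(%d)", &meal) == 1) {
            order_meal(meal);
            return 1;
        }
        return 0;  /* ordinary conversation text; ignore it */
    }

    int main(void) {
        handle_model_output("system.order.meal(6)");   /* adds meal #6 */
        handle_model_output("Enjoy your meal!");       /* ignored */
        return 0;
    }
    ```

    The fragility is the same in every variant: the model has to actually emit the format the prompt taught it, which is exactly the “doesn’t mean it’ll go about it correctly” caveat.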


  • You’re correct, but you have an off-by-one error.

    First, the genie grants the wish.

    NumWishes=0;

    Then, having completed the wish, the genie deducts that wish from the remaining wishes.

    NumWishes--;

    And to complete the thought,

    Lastly, the genie checks whether the lampholder is out of wishes:

    if (NumWishes == 0) {…}

    With NumWishes stored as an unsigned 8-bit integer, the decrement wraps from 0 around to 255, so (255 == 0) evaluates to false and we fall past that check.
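    The whole sequence, assuming the genie’s firmware really does keep the count in an unsigned 8-bit variable (which the 255 implies):

    ```c
    #include <stdio.h>

    int main(void) {
        unsigned char NumWishes;

        /* First, the genie grants the (final) wish. */
        NumWishes = 0;

        /* Then deducts the completed wish: 0 - 1 wraps to 255
           on an unsigned 8-bit counter. */
        NumWishes--;

        /* Lastly, the out-of-wishes check falls through. */
        if (NumWishes == 0) {
            printf("No wishes left.\n");
        } else {
            printf("%d wishes remaining.\n", NumWishes);  /* prints 255 */
        }
        return 0;
    }
    ```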


  • The difference is, if this were to happen and it was later found that a fabricated case crucial to the defense was relied on, that’s a mistrial. Maybe even a dismissal with prejudice.

    Courts are bullshit sometimes, it’s true, but it would take deliberate judge/lawyer collusion for this to occur, or the incompetence of the judge and the opposing lawyer.

    Is that possible? Sure. But the question was “will fictional LLM case law enter the general knowledge?” and my answer is “in a functioning court, no.”

    If the judge and a lawyer are colluding or if a judge and the opposing lawyer are both so grossly incompetent, then we are far beyond an improper LLM citation.

    TL;DR As a general rule, you have to prove facts in court. When that stops being true, liars win, no AI needed.



  • Nah, that means you can ask an LLM “is this real?” and get a correct answer.

    That defeats the point of a bunch of kinds of material.

    Deepfakes, for instance. International espionage, propaganda, companies who want “real people”.

    A simple is_ai flag of any kind is undesirable to those actors, but their output would still end up back in every LLM’s training data, even a model that was behaving and flagging its own output.

    You’d need every LLM to do this, and there are open-source models and foreign ones. And as has already been proven, you can’t rely on an LLM detecting a generated product without such a flag.

    The correct way to do it would instead be to organize a not-AI certification for real content. But that would severely limit training data. It could happen once sheer quantity of data isn’t the be-all end-all for a model, but I dunno when or if that’ll be the case.


  • No, because there’s still no case.

    Law textbooks that taught an imaginary case would just get a lot of lawyers in trouble, because someone will eventually wanna read the whole case and will try to pull the actual record, not just a reference. Real cases aren’t susceptible to this because they’re essentially a historical record. It’s like the difference between a scan of the Declaration of Independence and a high school history book describing it. Only one of those could be bullshitted by an LLM.

    The same applies to law schools. People refer back to cases all the time, and there’s an opposing lawyer, after all, who’d love a slam-dunk win of “your honor, my opponent is actually full of shit and making everything up”. Any lawyer trained on imaginary material as if it were real will just fail repeatedly.

    LLMs can deceive lawyers who don’t verify their work. But lawyers are in fact required to verify their work, and the ones that have been caught using LLMs were quite literally not doing their job. If verification weren’t required, lawyers could make up cases themselves; they don’t need an LLM for that. It doesn’t happen because it doesn’t work.