The narrative that OpenAI, Microsoft, and freshly minted White House “AI czar” David Sacks are now pushing to explain why DeepSeek was able to create a large language model that outpaces OpenAI’s while spending orders of magnitude less money and using older chips is that DeepSeek used OpenAI’s data unfairly and without compensation. Sound familiar?

Both Bloomberg and the Financial Times are reporting that Microsoft and OpenAI have been probing whether DeepSeek improperly trained R1, the model now taking the AI world by storm, on the outputs of OpenAI models.
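In machine-learning terms, the accusation is essentially “distillation”: using a stronger model’s outputs as supervised training data for a cheaper one. As a hedged sketch only (DeepSeek’s actual pipeline has not been disclosed; the teacher model name, prompts, and file layout below are illustrative assumptions), the harvesting step could look like this:

```python
# Hedged sketch of distillation data collection: harvest a stronger model's
# answers to build a fine-tuning set for a smaller "student" model.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "Explain quicksort in two sentences.",
    "Translate 'good morning' into French.",
]

with open("distillation_set.jsonl", "w") as f:
    for prompt in prompts:
        resp = client.chat.completions.create(
            model="gpt-4o",  # assumed teacher model, not DeepSeek's actual choice
            messages=[{"role": "user", "content": prompt}],
        )
        answer = resp.choices[0].message.content
        # Each (prompt, answer) pair becomes one supervised training example
        # for the student model.
        f.write(json.dumps({"prompt": prompt, "completion": answer}) + "\n")
```

Fine-tuning a smaller model on enough such pairs transfers much of the teacher’s behavior, and OpenAI’s terms of service forbid using its outputs to develop competing models, which is the basis of the complaint.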

It is, as many have already pointed out, incredibly ironic that OpenAI, a company that has been obtaining large amounts of data from all of humankind largely in an “unauthorized manner,” and in some cases in violation of the terms of service of those it has been taking from, is now complaining about the very practices by which it has built its company.

OpenAI is currently being sued by the New York Times for training on its articles, and its argument is that this is perfectly fine under copyright law’s fair use protections.

“Training AI models using publicly available internet materials is fair use, as supported by long-standing and widely accepted precedents. We view this principle as fair to creators, necessary for innovators, and critical for US competitiveness,” OpenAI wrote in a blog post. In its motion to dismiss in court, OpenAI wrote “it has long been clear that the non-consumptive use of copyrighted material (like large language model training) is protected by fair use.”

If OpenAI argues that it is legal for the company to train on whatever it wants for whatever reason it wants, then it stands to reason that it doesn’t have much of a leg to stand on when competitors use common machine-learning strategies to build their own models.

  • mechoman444@lemmy.world · 3 days ago

    The core infrastructure question is how to distinguish queries made by individuals from those made by programs scraping the internet for AI training data. The answer is that you can’t: the way data is presented online makes such differentiation impossible.
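    A minimal sketch of why (the URL and header values below are placeholders, not anyone’s real setup): a scraper can copy a real browser’s headers byte for byte, so the server has nothing reliable to key on.

    ```python
    # Hedged sketch: a scraper sending the exact headers a Chrome browser sends.
    # Nothing in the request reveals whether a human or a training pipeline made it.
    import requests

    browser_headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                      "AppleWebKit/537.36 (KHTML, like Gecko) "
                      "Chrome/121.0.0.0 Safari/537.36",
        "Accept-Language": "en-US,en;q=0.9",
    }

    resp = requests.get("https://example.com/article", headers=browser_headers)
    print(resp.status_code)  # indistinguishable from a person clicking a link
    ```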

    Either all data must be placed behind a paywall, or none of it should be. Selective restriction is impractical. Copyright is not the central issue, as AI models do not claim ownership of the data they train on.

    If information is freely accessible to everyone, then by definition, it is free to be viewed, queried, and utilized by any application. The copyrighted material used in AI training is not being stored verbatim—it is being learned.

    In the same way, an artist drawing inspiration from Michelangelo or Raphael does not need to compensate their estates. They are not copying the work but rather learning from it and creating something new.

    • Lifter@discuss.tchncs.de · 11 hours ago

      I disagree. Machines aren’t “learning”. You are anthropomorphising them. They are storing the original works, just in a very convoluted way which makes it hard to know which works were used when generating a new one.

      I tend to see it as: they used “all the works” they trained on.

      For the sake of argument, assume I could make an “AI” that meshes images together, but then train it on only two famous works of art. It would spit out a split screen with half of the first one on the left and half of the other on the right. This would clearly be recognized as copying the original works, but it would be a “new piece of art”, right?
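      Something like this, say (a hedged sketch of the hypothetical only; the file names are placeholders, and this is a pixel collage, not a real model):

      ```python
      # Toy version of the two-work "AI": paste the left half of one famous
      # image next to the right half of another.
      from PIL import Image

      a = Image.open("work_a.jpg")
      b = Image.open("work_b.jpg").resize(a.size)

      w, h = a.size
      collage = Image.new("RGB", (w, h))
      collage.paste(a.crop((0, 0, w // 2, h)), (0, 0))       # left half of work A
      collage.paste(b.crop((w // 2, 0, w, h)), (w // 2, 0))  # right half of work B
      collage.save("new_art.jpg")  # plainly a copy of both, yet "a new piece"
      ```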

      What if we add more images? At some point it would just be a jumbled mess, but still consist wholly of copies of original art. It would just be harder to demonstrate.

      Morally - not practically - is the sophistication of the AI in jumbling the images together really what should constitute fair use?

      • mechoman444@lemmy.world · 11 hours ago

        That’s literally not remotely what LLMs are doing.

        And they most certainly do learn, in the common sense of the term. They even use neural nets, which mimic the way neurons function in the brain.

        • Lifter@discuss.tchncs.de · 10 hours ago

          Mimic, or perhaps “inspired by”, but neural nets in machine learning don’t work at all like real neural nets. They are just variables in a huge matrix multiplication.
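          For instance (a toy sketch with arbitrary layer sizes, not any production architecture):

          ```python
          # A two-layer "neural net" forward pass is literally two matrix
          # multiplies with an elementwise max in between.
          import numpy as np

          rng = np.random.default_rng(0)
          W1 = rng.normal(size=(784, 128))  # the "neurons" are just weight entries
          W2 = rng.normal(size=(128, 10))

          def forward(x):
              h = np.maximum(0.0, x @ W1)   # ReLU(x @ W1)
              return h @ W2                 # logits

          x = rng.normal(size=(1, 784))     # a fake input
          print(forward(x).shape)           # (1, 10)
          ```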

          FYI, I do have a Master’s degree in Machine Learning.

          • mechoman444@lemmy.world · 2 hours ago

            Yes, I also have a master’s and a PhD in machine learning, which automatically qualifies me as an authority figure.

            And I can clearly say that you are wrong.