I don’t think that’s the best argument in favor of AI, if you cared to make that argument. The infringement wasn’t for their parsing of the law, but for their parsing of the annotations and commentary added by Westlaw.
If processing copyrighted material is infringement, then what they did is definitively infringement.
The law is freely available to read without Westlaw. They weren’t making the law available to everyone; they were making a paid product to compete with Westlaw’s paid product. Whatever their justification, they don’t deserve any sympathy for altruism.
A better argument would be: if training on the words of someone you paid to analyze an analysis produces something similar to the original, is the result sufficiently distinct to actually be copyrightable? Is training itself actually infringement?
https://natlawreview.com/article/court-training-ai-model-based-copyrighted-data-not-fair-use-matter-law
It sounds like the case you mentioned had a government entity doing the annotation, which makes it public even though it’s not literally the law.
Reuters seems to have argued that while the law and the cases are public, their tagging, summarization, and keyword highlighting is editorial.
The judge agreed, highlighting that since Westlaw isn’t required in order to view the documents everyone is entitled to see, training on Westlaw’s copy of them, headers included, isn’t justified.
It’s much like how a set of stories being in the public domain means you can copy each of them individually, but my collection of those stories involves curation, so you can’t copy my collection as a whole, assuming my work curating it was in some way creative and not just “alphabetical order”.
Another major point of the ruling seems to rest on the company aiming to directly compete with Reuters, which undermines the fair use argument.