We have paused all crawling as of Feb 6th, 2025, until we implement robots.txt support. Stats will not update during this period.

  • hendrik@palaver.p3x.de · 24 hours ago

    I think it’s just one HTTP request to the nodeinfo API endpoint once a day or so. Can’t really be an issue regarding load on the instances.
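
    For reference, a minimal sketch of what such a lookup involves (the instance URL is hypothetical; strictly speaking it’s two tiny GETs, the well-known discovery document and then the nodeinfo document itself):

    ```python
    import json
    import urllib.request

    INSTANCE = "https://example.social"  # hypothetical instance, for illustration

    # Step 1: the well-known document lists the available nodeinfo schema versions.
    with urllib.request.urlopen(f"{INSTANCE}/.well-known/nodeinfo") as resp:
        links = json.load(resp)["links"]

    # Step 2: fetch the actual nodeinfo document (usually schema 2.0 or 2.1).
    with urllib.request.urlopen(links[0]["href"]) as resp:
        nodeinfo = json.load(resp)

    # Stats like monthly active users come precomputed in the "usage" section.
    print(nodeinfo["software"]["name"], nodeinfo["usage"]["users"])
    ```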

      • hendrik@palaver.p3x.de · 23 hours ago

        True. The question here is: if you run a federated service, is that enough to assume you consent to federation? I’d say yes. And those Mastodon crawlers and statistics pages are part of the broader ecosystem of the Fediverse. But yeah, we can disagree here. It’s now going to get solved technically.

        I still wonder what the scrapers and crawlers people mention actually do. And what the reasoning is for being part of the Fediverse while at the same time not wanting to be a public part of it in another sense… But I guess they do different things on GoToSocial than I do here on Lemmy.

        • JustAnotherKay@lemmy.world · 10 hours ago

          if you run a federated service, is that enough to assume you consent

          If she says yes to the marriage, that doesn’t mean she permanently says yes to sex. I can run a fully air-gapped “federated” instance if I want to.

          • hendrik@palaver.p3x.de · 7 hours ago

            Hmmh, I don’t think we’ll come to an agreement here. I think marriage is a good example, since it comes with lots of implicit consent. First of all, you expect to move in together after you get engaged. You do small things like expect to eat dinner together. It’s not a question anymore whether everyone cooks their own meal each day. And it extends to big things. Most people expect one partner to care for the other once they’re old. And stuff like that. And yeah, intimacy isn’t granted. There is a protocol to it. But I’m way more comfortable making a move on my partner than, for example, placing my hands on a stranger on the bus and seeing if they take my invitation…

            Isn’t that how it works? I mean, going with your analogy… Sure, you can marry someone and never touch each other or move in together. But that’s kind of a weird one, in my opinion. Of course you should be able to do that. But it might require some more explicit agreement than going the default route. And I think that’s what happened here. Assumptions were made, those turned out to be wrong, and now people need to find a way to deal with it so everyone’s needs are met…

            I just can’t relate. Doesn’t being in a relationship change things? It sure did for me. And I surely act differently around my partner than I do around strangers. And I’m pretty sure that’s how most people handle it. And I don’t even think this is the main problem in this case.

            • JustAnotherKay@lemmy.world · 4 hours ago

              Going by your example

              Air-gapping my service is the agreement you’re talking about in this analogy, but otherwise I do actually agree with you. There is a lot of implied consent, but I think we have a near-miss misunderstanding on one part.

              In this scenario (analogies are nice, but let’s get back to reality), crawling the website to check the MAU, as harmless as it is, still adds load to the server. A tiny amount, sure, but if you’re going to increase my workload by even 1%, I wanna know beforehand. Thus, I put things on my website that say “don’t increase my workload”, like robots.txt and whatnot.
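
              A robots.txt that tells every bot to keep away from everything is about as short as it gets; for example:

              ```
              # Applies to all user agents; disallow every path.
              User-agent: *
              Disallow: /
              ```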

              Other people aren’t this concerned with their workload, in which case it might be fine to go with implied consent. However, it’s always best to follow best practices and just check with the owner of a server that it’s okay to do anything to it, IMO.

              • hendrik@palaver.p3x.de · 2 hours ago

                I don’t think that’ll work. Asking for consent by retrieving the robots.txt is yet another request with a similar workload. So by that logic, we can’t do anything on the internet, since asking for consent is work, and that requires consent, which requires consent… And if you’re concerned with efficiency alone, cut the additional asking and complexity by just doing the single request outright.
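
                To illustrate: a crawler that honors robots.txt makes two requests where one would do. A minimal sketch using Python’s standard library (the instance URL and user-agent string are made up):

                ```python
                from urllib import robotparser

                rp = robotparser.RobotFileParser()
                rp.set_url("https://example.social/robots.txt")
                rp.read()  # request #1: fetch and parse the robots.txt

                # Request #2 happens only if the rules permit it.
                if rp.can_fetch("FediDB-Crawler", "https://example.social/nodeinfo/2.0"):
                    pass  # fetch the nodeinfo document as usual
                ```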

                Plus, it’s not even that complex. Sending a few bytes of JSON with daily precalculated numbers is a fraction of what a single user interaction costs. It’s maybe zero point something of a request, or with a lot more zeros in between if we look at what a server does each day. I mean, every single refresh of the website, or me opening the app, hits several files and API endpoints and regularly loads hundreds of kilobytes of JavaScript, images, etc. There are lots of calculations and database queries involved in displaying several posts along with votes and so on. I’d say a single pageview of mine counts like FediDB collecting stats every day for 1000 years.

                I invented those numbers; they’re wrong, but I think you get what I’m trying to say… For all practical purposes, these requests are free and have zero cost. Plus, if it’s about efficiency, it’s always better not to ask first but to just do it and deal with the response. So it really can’t be about computational cost or network traffic. It has to be about consent.

                (And in developer terms, some things don’t even add up. Computers can do billions of operations each second. Network infrastructure can handle somewhere in the ballpark of millions(?) of packets a second. And we’re talking about a few of them a day here. I’d say this is more like someone moving grains of sand in the Sahara with their bare hands: you could do it all your life and it wouldn’t really change anything. For practical purposes, it’s meaningless at that scale.)
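
                Putting the pageview comparison above into a back-of-envelope (same caveat: these numbers are assumed for illustration, not measured):

                ```python
                page_view_bytes = 500_000  # assumed: one page view incl. HTML, JS, API calls
                nodeinfo_bytes = 500       # assumed: one small precomputed JSON response

                # How many daily stat pulls equal one page view, by transfer size alone?
                print(page_view_bytes / nodeinfo_bytes)  # -> 1000.0
                ```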

                • JustAnotherKay@lemmy.world · 1 hour ago

                  You’re definitely right that I went a bit extreme with the reason I gave against it, but I feel like the point still stands: “just ask before you slam people’s servers with yet another bot on the pile of millions of bots hitting their F2B (Fail2Ban) system”.

              • notfromhere@lemmy.ml · 3 hours ago

                How is it air-gapped and federated? Do you un-airgap it periodically for a refresh and then re-airgap it? I’ve not heard of air-gapped federated servers before and am intrigued. Is it purely for security purposes, or also for bandwidth savings? Are there other reasons one might want to run an air-gapped instance?

                • JustAnotherKay@lemmy.world · 1 hour ago

                  In this scenario, I have multiple servers which are networked together and federated via ActivityPub, but the server cluster itself is air-gapped.

                  As to your questions about feasibility and purpose, I’ll admit I both didn’t think about that and should have been clearer that this air-gapped federated instance was theoretical lol

        • WhoLooksHere@lemmy.world · 23 hours ago

          Why invent implied consent when explicit consent has been the standard in robots.txt for ages now?

          Legally speaking, there’s nothing they can do. But this is about consent, not legality. So why assume it’s implied?

          • hendrik@palaver.p3x.de · 22 hours ago

            I guess because it’s in the specification? Or rather, absent from it? But I’m not sure. Reading the ActivityPub specification is complicated, because you also need to read ActivityStreams and lots of other referenced documents. And I frequently miss stuff that is somewhere in there.

            But generally, we aren’t Reddit, where someone just says: no, we prohibit third-party use and everyone needs to use our app by our standards. The whole point of the Fediverse and ActivityPub is to interconnect, and to connect people across platforms. And it doesn’t even make lots of assumptions. Developers aren’t forced to implement a Facebook clone, or to do things the way Mastodon or GoToSocial does or likes. They’re relatively free to come up with new ideas and adapt things to their liking and use cases. That’s what makes us great and diverse.

            I, personally, see a public API endpoint as an invitation to use it. And that’s kind of opposed to the consent thing. But I mean, why publish something in the first place unless it comes with consent to use it?

            But with that said… we need some consensus in some areas. There are use cases where things aren’t obvious from the start. I’m just sad that everyone is so agitated and seems to just escalate. I’m not sure if they tried talking to each other nicely first. I suppose it’s not a big deal to just implement robots.txt support so everyone can be happy, without it needing drama to get there.

            • WhoLooksHere@lemmy.world · 22 hours ago

              Robots.txt started in 1994.

              It’s been a consensus for decades.

              Why throw it out and replace it with implied consent to scrape?

              That’s why I said legally there’s nothing they can do. If people want to scrape it they can and will.

              This is strictly about consent. Just because you can doesn’t mean you should, yes?

              I guess I haven’t read a convincing argument yet for why robots.txt should be ignored.

              • Rimu@piefed.social · 19 hours ago

                It’s been a consensus for decades

                Let’s see about that.

                Wikipedia lists http://www.robotstxt.org/ as the official homepage of robots.txt and the “Robots Exclusion Protocol”. In the FAQ at http://www.robotstxt.org/faq.html, the first entry is “What is a WWW robot?” (http://www.robotstxt.org/faq/what.html). It says:

                A robot is a program that automatically traverses the Web’s hypertext structure by retrieving a document, and recursively retrieving all documents that are referenced.

                That’s not FediDB. That’s not even nodeinfo.

                • WhoLooksHere@lemmy.world · 16 hours ago

                  From your own Wikipedia link:

                  robots.txt is the filename used for implementing the Robots Exclusion Protocol, a standard used by websites to indicate to visiting web crawlers and other web robots which portions of the website they are allowed to visit.

                  How is FediDB not an “other web robot”?

                  • Rimu@piefed.social · 13 hours ago

                    OK, if you want to focus on that single phrase and ignore the whole rest of the page, which documents decades of stuff to do with search engines without a single mention of API endpoints, that’s fine. You can have the win on this one; here’s a gold star.

              • hendrik@palaver.p3x.de · 20 hours ago

                I just think you’re making it way simpler than it is… Why not implement 20 other standards that have been around for 30 years? Why not make software perfect and without issues? Why not anticipate what other people will do with your public API endpoints in the future? Why not all have the same opinions?

                There could be many reasons. They forgot, they didn’t bother, they didn’t consider themselves the same as a commercial Google or Yandex crawler… That’s why I keep pushing for information and refuse to give a simple answer. It could be an honest mistake. It could be honest and correct to do it, and the other side is wrong, since it’s not a crawler like Google’s or the AI copyright thieves’… It could be done maliciously. In my opinion, it’s likely that it just hadn’t been an issue before, the situation changed, and now it is. And we’re getting a solution after some pushing. It seems at least FediDB took the stats offline and is working on robots.txt support. They did not refuse to do it. So it’s fine. And I can’t comment on why it hadn’t been in place; I’m not involved with that project or the history of its development.

                And keep in mind, Fediverse discoverability tools aren’t the same as a content-stealing bot. They’re there to aid the users, and part of the platform in the broader picture. Mastodon, for example, isn’t very useful unless it provides a few additional tools so you can actually find people and connect with them. So it’d be wrong to apply the exact same standards to it as to some AI training crawler or Google. There is a lot of nuance to it. And did people in 1994 anticipate our current world and give robots.txt the nuanced distinctions needed to make this straightforward and easy to implement? I think we agree that it’s wrong to violate other users’ demands/wishes now that they’re well known. Other than that, I just think it’s not very clear who’s at fault here, if anyone.

                Plus, I’d argue it isn’t even clear whether robots.txt applies to a statistics page, or to a part of a microblogging platform. Those certainly don’t crawl any content; or rather, it’s part of what the platform is designed to do. The term “crawler” isn’t well defined in RFC 9309, so maybe it’s debatable whether it even applies here.

            • jmcs@discuss.tchncs.de · 22 hours ago

              You can consent to a federation interface without consenting to having a bot crawl all your endpoints.

              Just because something is available on the internet doesn’t mean all uses are legitimate; this is effectively the same problem as AI training on stolen content.

              • hendrik@palaver.p3x.de · 22 hours ago

                Yes. I wholeheartedly agree. Not every use is legitimate. But I’d really need to know what exactly happened, and the whole story, to judge here. I’d say if it were a proper crawler, they’d need to read the robots.txt. That’s accepted consensus. But is that what happened here?

                And I mean, the whole thing with consensus and arbitrary use cases is just complicated. I have a website and a Fediverse instance. Now you visit it. Is that legitimate? We’d need to factor in why I put it there, and what you’re doing with that information. If it’s my blog, it’s obviously there for you to read… Or is it…!? Would you call me and ask for permission before reading it? …That is implied consent. I’d argue this is how the internet works, at least generally speaking. And most of the time it’s super easy to tell what’s right and what’s wrong. But sometimes it isn’t.