We have paused all crawling as of Feb 6th, 2025 until we implement robots.txt support. Stats will not update during this period.
Robots.txt is a lot like email in that it was built for a far simpler time.
It would be better if the server could detect bots and send them down a rabbit hole rather than trusting randos to abide by the rules.
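As a minimal sketch of that idea (the port, delay, and link format are just illustrative, not any particular tool):

```python
import random
import string
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class Tarpit(BaseHTTPRequestHandler):
    # Every request gets an endless, slowly dripped page of junk links,
    # so a misbehaving crawler ties up its own time instead of your backend.
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        try:
            while True:
                slug = "".join(random.choices(string.ascii_lowercase, k=12))
                self.wfile.write(f'<a href="/{slug}">{slug}</a>\n'.encode())
                time.sleep(2)  # drip-feed to keep the crawler waiting
        except (BrokenPipeError, ConnectionResetError):
            pass  # the crawler gave up

if __name__ == "__main__":
    HTTPServer(("", 8080), Tarpit).serve_forever()
```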
It is not possible to reliably detect bots. Attempting to do so will invariably lead to false positives, denying access to your content for what are usually the most at-risk and marginalized folks.
Just implement a cache and forget about it. If read-only content is causing you too much load, you’re doing something terribly wrong.
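A rough sketch of that approach, assuming a Python app where `fetch_stats` stands in for the expensive read-only query (both names are hypothetical):

```python
import time

CACHE_TTL = 300  # serve cached copies for up to 5 minutes
_cache = {}      # key -> (expires_at, value)

def cached(key, compute):
    """Return a cached value, recomputing only after the TTL expires."""
    now = time.time()
    hit = _cache.get(key)
    if hit and hit[0] > now:
        return hit[1]  # fresh hit: the server does no work
    value = compute()  # e.g. the expensive stats query
    _cache[key] = (now + CACHE_TTL, value)
    return value

# stats = cached("instance_stats", fetch_stats)  # fetch_stats is hypothetical
```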
It was built for the living, free internet.
For all its dark corners, it was better than what we have now.
Already possible: Nepenthes.
Ooh, nice.
I’m sold.
Because of AI bots ignoring robots.txt (especially when you don’t explicitly mention their user-agent and instead use a * wildcard), more and more people are implementing exactly that, and I wouldn’t be surprised if that is what triggered the need for FediDB to implement robots.txt support.
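For reference, a robots.txt along those lines, with explicit per-crawler rules alongside the wildcard (the bot names are just common examples):

```
# Explicit rules for named crawlers, since some only honor their own entry
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Wildcard fallback, which some AI crawlers reportedly ignore
User-agent: *
Disallow: /private/
```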