

Posted
On 6/29/2025 at 4:25 PM, roytam1 said:

AI crawlers don't respect robots.txt.

This is a preparatory stage for introducing personal identifiers on the network. Remember the Microsoft Passport they wanted to introduce back in 2003? They are psychologically preparing the herd (as they perceive people) so there is no mass discontent. I don't believe in kind-hearted individuals who want to keep the internet free out of goodwill. Such individuals are either bought off and stay silent, or they divert the masses towards imaginary threats (such as UFOs), burying the real ones in the noise.
For example, such a nuisance suddenly appeared on the last remaining Invidious sites a couple of months ago. Serpent can't pass the test, and I don't want to enable workers or scripts; previously these sites could be used without scripts at all. Once such a nuisance is introduced, there's little privacy left to maintain. Alternatively, the owners of the last Invidious sites may have decided to earn extra money by collecting unique browser fingerprints under the guise of these protections against the dreaded pseudo-AI, which is supposedly so interested in Invidious sites.


Posted
17 hours ago, Ben Markson said:

I have a lot more run-ins with Cloudflare than with Anubis. Both of them can get stuck in an infinite validation loop. Cloudflare can get insanely aggressive; it will quite happily lock your browser into an irretrievable Loop of Death (who would write code like that?). At least Anubis can easily be stopped.

I think the thing I object to the most is that they focus on the way a browser looks rather than what it is actually doing. In civil society this would be characterised as profiling rather than being intelligence-led.

All that will happen is that the bots will better disguise themselves and their behaviour will go unchecked.

If you take the Anubis explanation (posted above by @VistaLover ) at its word, it seems to make sense. The idea is to make the user agent (browser or bot) do something rather hard, but not too hard: if you're an ordinary user, the extra work is just a short delay in reaching the Web page, but if you're a bot crawling millions of pages, that extra work isn't worth the effort, so you'll just abort the script after a few milliseconds and move on.

But then, why insist on "modern" Javascript, and why force users to disable their privacy guards? I'm still somewhat skeptical that Anubis was telling us the whole story above.
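In concrete terms, the proof-of-work step presumably looks something like the sketch below. This is only an illustration of the general hash-puzzle idea, not Anubis's actual code; the function name, challenge string, and difficulty are made up.

// Minimal sketch of a hash-based proof-of-work challenge (illustrative only).
// The server hands the browser a random challenge string and a difficulty;
// the script must find a nonce such that SHA-256(challenge + nonce) starts
// with `difficulty` zero hex digits, then send that nonce back for verification.
async function solveChallenge(challenge, difficulty) {
  const prefix = '0'.repeat(difficulty);
  const encoder = new TextEncoder();
  for (let nonce = 0; ; nonce++) {
    const digest = await crypto.subtle.digest('SHA-256',
      encoder.encode(challenge + nonce));          // Web Crypto API
    const hex = Array.from(new Uint8Array(digest))
      .map(b => b.toString(16).padStart(2, '0'))
      .join('');
    if (hex.startsWith(prefix)) return nonce;      // proof found
  }
}

// An ordinary visitor pays this cost once per challenge (a small one-time delay);
// a crawler fetching millions of pages pays it millions of times.
// solveChallenge('example-challenge', 3).then(nonce => console.log('proof:', nonce));

Even a toy version like this leans on async/await, crypto.subtle, and arrow functions, which may be part of the answer to my own question about "modern" Javascript.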

Posted
13 hours ago, Mathwiz said:

If you take the Anubis explanation (posted above by @VistaLover ) at its word, it seems to make sense. The idea is to make the user agent (browser or bot) do something rather hard, but not too hard: if you're an ordinary user, the extra work is just a short delay in reaching the Web page, but if you're a bot crawling millions of pages, that extra work isn't worth the effort, so you'll just abort the script after a few milliseconds and move on.

But then, why insist on "modern" Javascript, and why force users to disable their privacy guards? I'm still somewhat skeptical that Anubis was telling us the whole story above.

I'm assuming "modern" Javascript means ES6+. Those versions of Javascript are fairly recent and support modern features.
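As a purely illustrative example (not taken from Anubis itself), even a few lines of ES6+ code are enough to defeat an ES5-only engine:

// A handful of ES6+ constructs a challenge script might use. An engine that
// only understands ES5 throws a SyntaxError at the first arrow function or
// template literal, so the check never even starts.
const delay = ms => new Promise(resolve => setTimeout(resolve, ms)); // arrows, Promise (ES6)

async function probe() {                            // async/await (ES2017)
  await delay(100);
  const [major, ...rest] = '115.0.2'.split('.');    // destructuring, rest (ES6)
  return `major version: ${major}`;                 // template literal (ES6)
}

probe().then(msg => console.log(msg));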

Posted

Yes; the page could've been clearer on exactly how "modern" your browser's Javascript needed to be.

At any rate, UXP does seem up to the task, albeit inefficiently. Any number of things could have caused the "denied" page I got, but it wasn't worth the effort to track down. I was just wondering what kind of nonsense we WWW users have to deal with now, and why. My curiosity is "mostly" satisfied.
