Mathwiz

Member
  • Posts: 1,841
  • Joined
  • Last visited
  • Days Won: 50
  • Donations: 0.00 USD
  • Country: United States

Mathwiz last won the day on June 8

Mathwiz had the most liked content!

3 Followers

About Mathwiz

Profile Information
  • OS: Windows 7 x64

Mathwiz's Achievements

Reputation: 1.3k

  1. We need a "benign exploit" page (a page that triggers the bug but doesn't do anything harmful) to test for this vulnerability. We had one for the WebP vulnerability.
  2. You are right. You need version 138 or above to get the patch. If folks don't want to update, the patch is unavailable to them. For those folks, the only safe option is to turn off the V8 optimizer as described previously. I suppose, in theory, someone skilled in building Chromium could apply the patch to earlier versions, but I can't imagine anyone would do so, unless there were a very popular old version that many folks were reluctant to update from.
  3. Version 138 is required for the fix; the bug goes back earlier, though: Good catch. Google is being tight-lipped about exactly when this vulnerability crept in. I doubt it goes all the way back to 2008, though; today's V8 looks nothing like the original. I believe (and should have said) that versions predating the V8 optimizer are not vulnerable. I suspect 360EE (and Kafan MiniBrowser) aren't vulnerable because the option to turn off the optimizer isn't there (presumably because there's nothing to turn off), but I can't be sure with the limited info we have.
  4. It's well hidden: Settings / Privacy and Security / Manage V8 Security (near the bottom of the page; scroll down) / Don't allow sites to use the V8 optimizer (this will slow down JavaScript). Really old Chromium versions (360EE) don't have V8 and so are (presumably) not vulnerable. For anyone who'd rather set this machine-wide than click through Settings, see the policy sketch after this list.
  5. (Actually Moonchild said:) Good; so the "collective punishment" of being banned for living in the wrong country will end soon, hopefully. MC is wrong about one thing, though: as noted here, Anubis unfortunately does require one more thing beyond being "a little patient the first time they visit": turning off certain privacy guards. MC himself won't abuse this requirement: ... but other Anubis-protected sites may not be so civic-minded, and how's the end user supposed to know? One user presented a possible workaround, though: I don't know if MC has Anubis configured this way, but those outside the geoblocks may experiment at their leisure.
  6. Yes; the page could've been clearer about exactly how "modern" your browser's JavaScript needed to be. At any rate, UXP does seem up to the task, albeit inefficiently. Any of several things might have caused me to get the "denied" page, but it wasn't worth the effort to track it down. I was just wondering what kind of nonsense we WWW users have to deal with now, and why. My curiosity is "mostly" satisfied.
  7. If you take the Anubis explanation (posted above by @VistaLover ) at its word, it seems to make sense. The idea is to make the user agent (browser or bot) do something rather hard, but not too hard: if you're an ordinary user, the extra work is just a short delay in getting to the Web page; but if you're a bot crawling millions of pages, that extra work adds up and isn't worth it, so you just abort the script after a few milliseconds and move on. (There's a rough sketch of this proof-of-work idea after this list.) But then, why insist on "modern" JavaScript, and why force users to disable their privacy guards? I'm still somewhat skeptical that Anubis was telling us the whole story above.
  8. So it is a bandwidth issue. Fair enough. I had no idea that AI crawling had become such a burden for Web servers. Still having a hard time grokking why the AI crawlers don't respect robots.txt though. AIUI, their purpose is just to gather content to train AI engines; surely there's plenty of content even without violating such a longstanding norm! In any case, I question Anubis's assertion that "The idea is that at individual scales the additional load is ignorable." It took R3dfox v.139 several seconds to complete the challenge, to say nothing of UXP browsers. But I suppose there was a silver lining: MC probably had to ensure UXP could pass the challenge before using it to protect his own repo! It would be quite embarrassing if RPO couldn't be accessed by Pale Moon....
  9. I sort of figured, but why don't AI crawlers respect robots.txt when other Web crawlers do? That's what I was really after. (See the robots.txt example after this list for what that long-standing norm actually looks like.) Which leads to another question: why do public repos need to block AI crawlers so badly that Gitea resorted to Anubis to do the job? Is it a bandwidth issue or a legal one?
  10. Anubis (from Egyptian mythology) was also the name of a villainous character on the Stargate SG-1 television series. AI crawling sounds bad but I'm not sure why, what it is, how it differs from ordinary Web crawling, or why robots.txt cannot be relied on.
  11. Unrelated to original problem, but WTF is this? FWIW, r3dfox passes whatever this is and lets you in (eventually). The WWW has become such an unpleasant place.
  12. TL;DR - the "Do Not Track" header was abandoned because virtually no Web sites paid any attention to it. Google may have made the decision to pull the plug, but it was already comatose. It remains to be seen whether its successor, Global Privacy Control, will be similarly ignored (there's a sketch after this list of how a site that does care can read the GPC signal). GPC may be enforceable in the EU, so sites may choose to honor it there.
  13. Good to hear. I use 55 myself, but I also keep a copy of 52 handy. The different names arose back in the early days: originally, "Serpent" was just the generic name used by "unbranded" builds of Basilisk; i.e., if you or I built the browser on our own PC, it would be called Serpent too. "Basilisk" was the branded name reserved for the official version distributed by Moonchild Productions. For several years the folks at Moonchild Productions expressed great irritation that some users sought support for @roytam1's Serpent builds at Basilisk's Web site (then run by MCP), so we were all taught to be careful never to call roytam1's builds "Basilisk" and risk driving more Serpent users there.
  14. Discover.com works with r3dfox as-is, so no SSUAO (site-specific user agent override) is needed there. (Edge or Thorium users aren't so lucky; Discover seems to demand a quite recent Chromium engine. Supermium would probably do the trick, but I haven't tried it.) I see several sites with this SSUAO:
     Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:128.0) Gecko/20100101 Firefox/128.0
     It works with chase.com as well, so I guess it was the r3dfox bit that Chase didn't like after all; Win 7 doesn't seem to bother it. (There's a user.js sketch of setting an SSUAO after this list.)
  15. On Win 7, r3dfox is now my preferred replacement for M$ Edge. I had been using the last Edge version for Win 7 (109) with a UAO (user agent override) to Chrome 125, but that's no longer good enough for some sites (e.g., discover.com). I did find that Chase.com doesn't like the r3dfox slice in the user agent - or was it the OS slice, revealing Win 7, that it was objecting to? It kept telling me to "upgrade" my browser even though r3dfox is up to version 139! Well, either way, a straight FF 128-on-Win 10 user agent satisfies both Chase and Discover, at least for now. It's ridiculous how bloody finicky some Web sites - particularly financial ones - have become. Security I dig, but way too many folks equate "security" with "only using Chrome, Edge, or Firefox, and a version no older than a few months."
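
Re post 4: the same V8-optimizer toggle can usually be pushed machine-wide through Chromium's enterprise policies instead of clicking through Settings. This is a sketch from memory, assuming the policy is named DefaultJavaScriptOptimizerSetting, that a value of 2 means "block," and that your browser reads policies from the standard Chromium registry path; treat all three as assumptions and check the policy list for your particular build before relying on it.

    Windows Registry Editor Version 5.00

    ; Assumed policy name and value: block the V8 optimizing tiers for all sites.
    ; Chromium forks may read HKLM\SOFTWARE\Policies\<Vendor>\<Browser> instead.
    [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Chromium]
    "DefaultJavaScriptOptimizerSetting"=dword:00000002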
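
Re post 7: to make the "hard but not too hard" idea concrete, here's a minimal TypeScript (Node.js) sketch of a generic hash-based proof-of-work challenge. It is not Anubis's actual protocol; the challenge string, the choice of SHA-256, and the difficulty of four leading hex zeros are all illustrative.

    // Find a nonce such that SHA-256(challenge + nonce) starts with `difficulty` hex zeros.
    import { createHash } from "crypto";

    function solve(challenge: string, difficulty: number): number {
      const target = "0".repeat(difficulty);
      for (let nonce = 0; ; nonce++) {
        const digest = createHash("sha256").update(challenge + nonce).digest("hex");
        if (digest.startsWith(target)) return nonce;
      }
    }

    // A real visitor pays this occasionally; a crawler hitting millions of pages pays it over and over.
    console.log(solve("example-challenge-from-server", 4));

Each extra hex digit of difficulty multiplies the expected work by 16, which is how an operator can tune the cost so humans barely notice it while bulk scrapers feel it.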
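
Re posts 8 and 9: this is all the "long-standing norm" amounts to. robots.txt is a plain text file at the site root that politely asks crawlers to stay out; nothing enforces it. GPTBot is one published AI-crawler token; the exact tokens vary by vendor, so take the names here as examples only.

    # /robots.txt - purely advisory; a crawler that ignores it faces no technical barrier
    User-agent: GPTBot
    Disallow: /

    User-agent: *
    Allow: /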
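
Re post 12: like DNT before it, Global Privacy Control is just a request header (Sec-GPC: 1) that a site is free to ignore. A minimal Node.js/TypeScript sketch of a server that chooses to honor it could look like this; the response text and port number are made up.

    import { createServer } from "http";

    // Honoring GPC is the site's choice; the signal itself is just one request header.
    createServer((req, res) => {
      const gpcOn = req.headers["sec-gpc"] === "1";
      res.end(gpcOn ? "GPC received: treating this visit as an opt-out\n"
                    : "No GPC signal on this request\n");
    }).listen(8080);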
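
Re posts 14 and 15: in Pale Moon/Basilisk an SSUAO is just a per-host pref, and I'm assuming r3dfox still honors the same Gecko-style general.useragent.override.<host> prefs (I haven't verified that in r3dfox itself; if it doesn't, a user-agent switcher extension does the same job). A user.js sketch using the UA string from post 14:

    // Site-specific user agent override for chase.com (Discover didn't need one).
    user_pref("general.useragent.override.chase.com",
              "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:128.0) Gecko/20100101 Firefox/128.0");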