- cross-posted to:
- [email protected]
cross-posted from: https://lazysoci.al/post/15908451
I’ve been saying this and people keep arguing.
This seems like the critical part to me:
The paper, released in November 2023, notes that even back in 2016 researchers were able to defeat reCAPTCHA v2 image challenges 70 percent of the time. The reCAPTCHA v2 checkbox challenge is even more vulnerable – the researchers claim it can be defeated 100 percent of the time.
reCAPTCHA v3 has fared no better. In 2019, researchers devised a reinforcement learning attack that breaks reCAPTCHA v3's behavior-based challenges 97 percent of the time.
So it isn’t even effective at deterring bots? Then what the hell was all this for?
Introducing a CAPTCHA on a form on my website blocked bots basically 100% of the time. It's arguably good enough from a practical standpoint.
If someone really wants to exploit my site, they will find a way. You can only make it harder, never truly impossible, unless you're willing to sacrifice all convenience.
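The commenter doesn't say which CAPTCHA they used. As an illustration of why even a trivial check stops drive-by scripts, here's a minimal sketch of a related low-cost defense, a honeypot field (the `is_probably_bot` helper and the `website` field name are hypothetical, not anyone's actual setup):

```python
def is_probably_bot(form_data: dict) -> bool:
    """Flag a submission as bot-like if the decoy field was filled in.

    "website" is a decoy input hidden from humans via CSS; real users
    leave it blank, but naive scripts that auto-fill every field don't.
    """
    return bool(form_data.get("website", "").strip())


# A drive-by script that fills every field gets caught:
print(is_probably_bot({"name": "Alice", "website": "http://spam.example"}))  # True
# A human submission leaves the hidden field empty:
print(is_probably_bot({"name": "Alice", "website": ""}))  # False
```

This catches only unsophisticated bots, which is exactly the thread's point: it raises the attacker's cost slightly, and for a small site that's often enough.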
For getting free labor, of course.
We are basically training their models/bots for them.
Yeah, it's pretty clearly just been getting people to manually train self-driving cars for a while now.
Always has been
I mean, that is true, but there is some nuance.
At one time it was a cheap way to protect your site from drive by scripts and make your users help pay for that protection.
They still work that way on, say, the comment section of a tiny WordPress blog, because the cost to solve them isn't worth what a random boner pill ad is worth.
The issue now (made worse recently by LLMs) is that more bots than ever are scraping anything and everything, so people are putting CAPTCHAs on every bit of content in their web apps. This increases the work for your users while only slowing down the bots. The hope is that the cost to solve is slightly higher than the value of the data.