There’s another round of CSAM attacks and it’s really disturbing to see those images. They weren’t taken down immediately, either. There was even a disgusting shithead in the comments who thought it was funny?? the fuck
It’s gone now, but it was up for like an hour?? It’s really sickening. This ruined my day and now I’m just trying to figure out how to download Tetris.
How was it handled on Reddit? Did the moderators have to handle it there as well, or did Reddit filter it out beforehand?
Reddit uses a CSAM scanning tool to identify and block the content before it hits the site.
https://protectingchildren.google/#introduction is the one Reddit uses.
https://blog.cloudflare.com/the-csam-scanning-tool/ is another such tool.
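Roughly, these tools sit in the upload path so the image is checked before it’s ever written anywhere public. Here’s a minimal sketch of that flow, assuming a made-up `Scanner` stand-in for the external service (this is not Google’s or Cloudflare’s actual API, just the shape of the hook):

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    flagged: bool
    match_id: str | None = None

class Scanner:
    """Stand-in for an external CSAM-scanning service (hypothetical)."""
    def check(self, image_bytes: bytes) -> Verdict:
        # Real services compare a hash of the image against a confidential
        # database of known material; this stub never flags anything.
        return Verdict(flagged=False)

def handle_upload(image_bytes: bytes, scanner: Scanner) -> bool:
    """Return True if the image may be published, False if it was blocked."""
    verdict = scanner.check(image_bytes)
    if verdict.flagged:
        # Blocked content is never published; the provider reports it and
        # preserves the evidence instead of serving it to users.
        return False
    return True
```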
Are any of the examples that you provided libre/free and open-source? I wasn’t able to find any info for Google’s, and Cloudflare seems to only offer theirs for free if you’re already using Cloudflare’s services. If not, are there any tools that are libre/free and open-source?
No.
The nature of the checksums and perceptual hashing is kept in confidence between the National Center for Missing and Exploited Children (NCMEC) and the provider. If the “is this classified as CSAM?” service were available as an open-source project, those attempting to circumvent the tool could test modified images against it until the modifications were sufficient to produce a false negative.
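To see why that matters, here’s a minimal sketch of what a perceptual hash with threshold matching looks like (an average-hash style scheme, which is not the confidential algorithm these services use, just an illustration): anyone who can run the matcher locally can keep nudging an image until the distance creeps over the threshold.

```python
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink, grayscale, and threshold each pixel against the mean -> 64-bit hash."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_match(candidate_hash: int, known_hashes: list[int], threshold: int = 5) -> bool:
    """Flag the image if it is 'close enough' to any hash in the known set."""
    return any(hamming_distance(candidate_hash, h) <= threshold for h in known_hashes)
```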
There have been attempts to do “scan and delete”, but this may put server admins in even more legal jeopardy than not scanning at all, since admins are required by law to report CSAM and preserve the images and associated log files.
I’d strongly suggest that anyone hosting a Lemmy instance read https://www.eff.org/deeplinks/2022/12/user-generated-content-and-fediverse-legal-primer
The reporting requirements for hosting providers are laid out in 18 U.S.C. § 2258A: https://www.law.cornell.edu/uscode/text/18/2258A