I’m probably late to thinking this, and plenty of smarter people will have seen it already, but I was just watching a video about Google’s proposal that read out Mozilla’s position on it, and I noticed something I haven’t heard mentioned. As it says, the proposal is designed to help detect and prevent ‘non-human traffic’, which would likely harm assistive technologies, testing, archiving and search engines. Google is involved in all of those.
If they’re an attesting body, which presumably they would be, they could simply declare that their own indexing crawler is legitimate traffic and get all the data, while search engines not (yet) accepted by an attesting body couldn’t access it. The search engine market would effectively be locked down to the players that exist now. And AI training currently requires scraping large swathes of the internet, which newcomers wouldn’t be able to do. So this could also help create a moat for Google Bard (the moat their leaked memo said didn’t exist), letting it outstrip open-source models purely through access to data.
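To make the gating concrete, here’s a minimal sketch of how a site could enforce this kind of attestation server-side. To be clear, this is speculative: the proposal’s actual token format and verification flow aren’t finalised, and the header name, attester list, and function names here are all hypothetical.

```python
# Hypothetical sketch of WEI-style gating on the server side.
# None of these names come from the actual proposal.

TRUSTED_ATTESTERS = {"attester.google.example"}  # hypothetical allow-list

def verify_attestation(token: dict) -> bool:
    """Accept a request only if its environment token names a trusted
    attester. Anything without a valid token -- an archiver, a new
    search engine's crawler, a testing tool -- gets rejected."""
    return (
        token.get("attester") in TRUSTED_ATTESTERS
        and token.get("verdict") == "environment-trusted"
        # A real implementation would also verify the cryptographic
        # signature and the token's freshness here.
    )

def handle_request(headers: dict) -> int:
    token = headers.get("x-environment-attestation")
    if token is None or not verify_attestation(token):
        return 403  # non-attested traffic gets no content
    return 200

# A crawler blessed by a trusted attester passes; everyone else is locked out:
print(handle_request({"x-environment-attestation":
                      {"attester": "attester.google.example",
                       "verdict": "environment-trusted"}}))  # 200
print(handle_request({}))                                    # 403
```

The point of the sketch is just that whoever controls the attester allow-list decides which non-human clients can exist at all.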
I’ve heard people complain that this is an attempt to monopolise the browser market, but they’ve practically done that already, and I haven’t heard anyone mention this angle. If what I’ve said is accurate and I haven’t misunderstood something, this could allow them to monopolise (or at least oligopolise) everything that requires access to widespread internet data: which is basically everything they do.