What I’m saying is, we don’t know what physical or computational characteristics are required for something to be sentient.
Now that I use GitHub Copilot, I can work more quickly and learn new frameworks with less effort. Even in its current form, LLMs allow programmers to work more efficiently, and thus can replace jobs. Sure, you still need developers, but fewer of them.
Why is it that these sorts of people who claim that AI is sentient are always trying to get copyright? If an AI were truly sentient, I feel like it’d want, like, you know, rights. Not the ability for its owner to profit off of a cool Stable Diffusion generation he made that one time.
Not to mention that you can coerce a language model to say whatever you want, with the right prompts and context. So there’s not really a sense in which you can say it has any measurable will. So it’s quite weird to claim to speak for one.
While I agree that LLMs probably aren’t sentient, “it’s just complex vector math” is not a very convincing argument. Why couldn’t some complex math which emulates thought be sentient? Furthermore, not being able to change, adapt, or plan may not preclude sentience, as all that is required for sentience is the capability to perceive and feel things.
Sam Altman is a part of it too, as much as he likes to pretend he’s not.
Section 3a of the bill is the part that would be used to target LGBTQ content.
Section 4 talks about adding better parental controls, which would give parents general statistics about what their kids are doing online without letting them see or helicopter in on exactly what their kids were looking at. It would also force sites to give children safe defaults when they create a profile, including the ability to disable personalized recommendations, placing limitations on dark patterns designed to manipulate children into staying on platforms longer, making their information private by default, and limiting others’ ability to find and message them without the child’s consent. Notably, these settings would all be optional, but enabled by default for children and users suspected to be children.
I think the regulations described in section 4 would mostly be good things. They’re the types of settings that I’d prefer to use on my online accounts, at least. However, the bad outweighs the good here, and the content in section 3a is completely unacceptable.
Funnily enough, I had to read through the bill twice, and only caught on to how bad section 3a was on my second time reading it.
I’d personally consider it pretty cruel and inhumane to force someone to violate their own ethics on a daily basis.
If YouTube is still pushing racist and alt-right content on to people, then they can get fucked. Why should we let some recommender system controlled by a private corporation have this much influence over American culture and politics??