If Lemmy had a few LLM-powered accounts for fun and not spam, would you like to interact with them?
I don’t recall seeing even a classic utility bot on Lemmy.
What would be its uses?
Or what fun things would it be used for?
I think a local ‘AI’-ish thing for grammar correction would be good for non-English folk learning the language.
Or maybe one that makes formatting easier? Tho, having regex with some shortcuts may be more efficient there.
If you’re so lonely you need to talk to fake people, go back to Reddit.
Nah, man
“what if I burnt down a tree so I could pretend to have a friend”
But why?
This. What is the benefit of such bots?
Everything would get overrun by fake users, and no one would feel like they’re really interacting with real people (even if they are) because all the trust would be gone. It’s just not worth it.
No, that would absolutely ruin Lemmy. If I learned that any sizeable portion of the accounts were bots, I’d quit.
Nope.
Utility bots that are summoned on demand, probably, as long as we have a good process to kick them out if they are not helpful.
Regular commenter bots? Certainly not. The point of Lemmy is to talk with other humans.
No.
Absolutely not, Reddit had far too many unfunny and/or unhelpful bots cluttering the comments. I don’t want to see that here.
Nah. I’m not 100% against it, some are fun or useful in concept, but I’m here to talk to people, and threads littered with grammar corrections and Sokka haikus get old.
If there were an effective vetting process for useful bots, e.g. the repost sleuth bot, that’d be nice. But the “good bot”/“bad bot” voting system just became its own form of spam.
I talked with ChatGPT about this and it is about as smart as a rock. It went on about how it would be a good idea, how it would enrich a community, how the generated images would benefit everyone. Then I asked if it would still say the same if the LLM went rogue, and it said that an AI like that should be stopped (I never called it AI). Then I asked what if the rogue LLM only acted in its own best interest, and followed up with how its view would change if it wasn’t clear whether the account was a human or an LLM. It also said it would be non-consensual if people didn’t know it was an AI, that it would diminish trust and stuff.
Edit: screenshot
But what do you think?
I think it has its uses. Like when you have a clickbaity post, an AI fetches the article and summarizes it into an honest title. Or an NSFW flagger that highlights possibly NSFW content to a mod for review. Maybe even an option to translate posts and comments to make communication easier.
Just useful little things like that.
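To make the summarizer/flagger idea concrete, here is a minimal sketch of the per-post logic such a utility bot could run. It assumes a placeholder `summarize()` standing in for the actual LLM call and a plain dict standing in for whatever the real Lemmy API returns; neither is a real library.

```python
# Rough sketch of an opt-in utility bot's per-post logic.
# `summarize()` is a placeholder for a real LLM call, and the post dict
# stands in for the actual Lemmy API response; both are assumptions.
import re
from typing import Optional

NSFW_HINTS = re.compile(r"\b(nsfw|explicit|gore)\b", re.IGNORECASE)


def summarize(article_text: str) -> str:
    """Placeholder: a real bot would ask an LLM for a plain, non-clickbait title."""
    return article_text.split(".")[0][:120]


def handle_post(post: dict) -> Optional[dict]:
    """Return suggested bot actions for a post, or None if there is nothing useful to add."""
    actions = {}

    # De-clickbait: propose an honest title from the linked article's text.
    if post.get("article_text"):
        actions["suggested_title"] = summarize(post["article_text"])

    # NSFW hint: only flag for human moderator review, never act on its own.
    if NSFW_HINTS.search(post.get("body", "")):
        actions["flag_for_mod_review"] = True

    return actions or None


if __name__ == "__main__":
    example = {
        "body": "You won't BELIEVE what the council did next (NSFW?)...",
        "article_text": "City council approves new bike lanes on Main Street. Residents are split.",
    }
    print(handle_post(example))
    # -> {'suggested_title': 'City council approves new bike lanes on Main Street',
    #     'flag_for_mod_review': True}
```

A real deployment would also need the bot-account flag set and some summon/opt-in mechanism, which is what several commenters here ask for.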
I wouldn’t, no. Good question though.
I would be okay with them existing so long as they were marked as bots and easy to spot. (And block)
As long as they are clearly marked as bot accounts.