One constant in our ongoing civilization is the continuous branching of complexity. Assuming civilization continues, how do you imagine your entertainment becoming more tailored to you?
Decades ago I wanted a game that combined a world-building economy game, industry and domestic simulators, real-time war strategy, and a first-person shooter bridging into an adventure/explorer, all in one. This is a game where all of these roles could be filled by autonomous AI characters, but where recruiting and filling roles creates dynamic complexity that is advantageous for all. Each layer of gameplay dictates the constraints of the next, while interactions across layers are entertaining and engaging for everyone.
It does not need to be gaming. What can you imagine for entertainment with tailored complexity?
The trouble with this is you will never stumble onto something you had no idea you liked.
For instance, I’m not a fan of metal music or folk music. If there were a gatekeeper, I would never have heard the Mongolian metal band The HU.
something you had no idea you liked.
What a great description of creativity!
Back in the day, variety shows and Top 40 Radio did a great job of exposing people to new talent.
These days, the internet gives us exactly what we ask for.
Top 40 radio Exposes people to New talent LMFAO
Back in the day, variety shows and Top 40 Radio did a great job of exposing people to new talent.
The first four words.
I mean, ironically, “the algorithm” is this but with curating instead of creating content, and people talk about being surprised all the time.
Most models introduce a little bit of randomness or boundary pushing precisely for the reason you mentioned
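A minimal sketch of what that “temperature” knob looks like, assuming a recommender that assigns relevance scores to candidate items; the function and scores here are hypothetical illustrations, not any real platform’s ranking code:

```python
import math
import random

def sample_with_temperature(scores, temperature=1.0, rng=random):
    """Pick an item index from relevance scores.

    Low temperature -> almost always the top-scored item;
    high temperature -> more off-the-wall picks.
    (Illustrative sketch; real recommenders are far more involved.)
    """
    # Softmax with temperature: divide scores before exponentiating.
    scaled = [s / temperature for s in scores]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Weighted random choice over the items.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1
```

The knob is the same whether it sits in an LLM’s token sampler or a feed ranker: turn it up and you trade relevance for surprise.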
I think of it this way.
I read people talking about the huge differences in styles between 2000 and 2024 and how everything changed.
Then I look at the styles in 1960 and 1984 and see really big changes.
The algorithm is going to err on the side of what I’ve already liked. It’s not going to jump in with something totally off the wall.
It can even be worse - pigeonhole you and only offer what it thinks your demographic wants.
My music tastes are a bit of everything, but I listened to a bit of classic rock and now it only wants to give me that and conservative podcasts
I started watching YouTube recently but it really seems to have pigeonholed me very differently than what I’ve followed
I might want to read a bunch of Sherlock Holmes stories in a row. It’s going to take the algorithm years to realize that was a one time binge.
Yes and no, right? Again, randomness or “temperature” is pretty standard.
“The same thing every day” is likely to appeal to few.
The echo chamber effect, or honestly the worse effect of showing you just the worst of the people you disagree with, is a real issue though. It’s partly an effect of selection bias too.
That’s a great song. Their cover of The Trooper is amazing as well. Any other recommendations from their songs?
If they ever tap into the orgasm part of the brain, we’ll all just be sitting at home pushing the button over and over until we’re incapacitated, drooling vegetables.
Hell yeah! Wirehead
That’s how I predict the next generation of social media will be: completely AI-generated. Your feed, every single post you see, every comment, every like, everything. Imagine: no human moderator ever needed, because they can generate content compliant with every governance body in the world and advertiser-friendly. Any post or comment you make will be seen by no one, but an LLM will comment on it and agree with you, giving you fake internet points.
Worst of all? The vast majority of people are going to love it. Do you remember when GenAI first came out and people hated it? Now fully AI-generated channels and content creators are everywhere, and people share the hell out of them.
We are not far from there tbh, just need more desensitization about GenAI tech in the population.
I think you will find that the easiest media to access will always be the dumbest. The trick is when to decide that you’re done being the smartest person in the room and go find other rooms.
I see people complaining about this quite often, but I do not experience it. I never watch AI stuff other people are producing, but I do not participate in other social media platforms. I use all of the FOSS tools and platforms that result in no ads or content forced upon me. I watch several people on YT regularly, but I have all the tools in place to prevent ads, and of the people I watch, all have academic credentials and a reputation.
I think generative AI has a bright future, but not in some negative vein. It will supplement as a tool, not replace people or other content. All tools can be abused, and many are abusing AI. The problem here is a culture of no ethics and acceptance of abuse. If you’re unwilling to stop watching or using a platform, then you have proven that these techniques are viable and profitable. Your willingness to walk away is the determining factor. Over time, culture will evolve to walk away and seek out content that fits the individual instead of self-suppression to fit what is easily available.
I imagine a video content platform (movies, series, short form, etc) that generates the content as you watch it, and adjusts in real time based on your engagement, with optional prompting. Like I could start out by prompting it with “a show like The Office, but if it were directed by Tim Robinson; prioritize my laughter” and it takes it from there, adjusting as it goes to include more of what makes me laugh.
This would of course have to be run locally. I’d never sign up for something this invasive if it were connected to the internet at all
Have you noticed how LLMs are more like this now? My older starting context stories don’t work any more, but I can start cold with one sentence and get into the same spaces fluidly.
I think people will like something that is even more immersive in their interactions than just a window into a show like program. AI really needs to be grounded in collaborative interaction. I don’t picture that changing. The show becomes more of a friends around a campfire meta-dynamic in a context of your choosing. I do a lot of this already with my own science fiction universe and a LLM.
I absolutely think this will be very popular, but I (and many others, I’m sure) often like to just sit back and mindlessly watch; I don’t always want to participate in the entertainment.
Especially if we get FDVR, though, it could be like blending video games with TV/movies.
I know what you mean about tuning out. For me, even with engagement, I’m still able to largely tune out. I use text models a little differently, in that I am in a full text-editor-like setup. The model will continue my character’s part of the interaction. The more I change and alter this, the more it shapes what it generates for me. Eventually it becomes so collaborative that I am only making small changes to all characters. It becomes both disconnected and entertaining for me very quickly. I’ve been doing this a whole lot for over a year and have developed the language to interact well with alignment patterns and behaviors. I see that learning curve decreasing with time and making this more mainstream. We really need better compute hardware though so that multimodal interaction is more feasible.
Instagram and Facebook feeds already work a lot like this. They throw in a few random posts between the ones you’re actually subscribed to see and after a while you’ll realise the random ones are more of the sort you lingered on for longer and there aren’t so many of the others.
The problem, for both the viewer and the content server, is that this technique gets stuck in local maxima, that is, after a while it tends to serve exclusively one kind of unsubscribed content and stands little chance of broadening into the viewer’s other interests, assuming there are any.
From an outside perspective, this is a good thing in a way because it gets that viewer out of the clutches of the content server for a while once the viewer is sufficiently bored, but it’s a bad thing if you’re a viewer hungry for content, and especially bad for the content server who is desperate for that viewer to stay, eyes glued to the site, where they will see more of the advertisements that pay for everything.
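One common way out of that local-maximum trap is for the server to reserve a small fraction of slots for deliberate exploration. A minimal epsilon-greedy sketch, with illustrative names (real feed rankers are far more elaborate):

```python
import random

def pick_post(ranked_posts, epsilon=0.1, rng=random):
    """Epsilon-greedy feed selection (illustrative sketch).

    Most of the time serve the top-ranked post (exploit), but with
    probability `epsilon` serve a random one (explore), so the feed
    has a chance to escape a local maximum and discover the viewer's
    other interests.
    """
    if rng.random() < epsilon:
        return rng.choice(ranked_posts)   # explore: off-the-wall pick
    return ranked_posts[0]                # exploit: best guess so far
```

The tension described above is exactly the choice of `epsilon`: too low and the viewer gets stuck in one kind of content; too high and the feed stops feeling relevant.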
Since I was a teenager I dreamed of a strategy game where you could zoom in so far you are in personal combat, but can zoom out to a tactical, or strategic, or higher level. Total War Warhammer is kind of this.
In my childish mind I imagined each fighter in a battle controlled by a human on both sides. Then you could rank up and determine tactics; succeed, and you determine strategy.
This will never work of course, but to a naive mind in 1997 it seemed like the coolest new thing.
I don’t play games much anymore but when one grabs my attention I go hard. Hoping Stalker will be good.
I am looking forward to latent coordinates plus a model serving as metadata for at least some frames of video.
You don’t need total precision for every visual representation, but it could work as a great compression technique. Assuming we get GenAI power usage down.
I personally would love to see better simulation of complex systems in games. Games are how we as humans explore the world within safe constraints, to learn and grow with less risk. A lot of the limits of games, though, are just limits of the creator’s understanding and of the effort it takes to represent that detail of the world, but it means the lessons around the missing detail can’t be learned.
Another one for me: tailored voice and visuals for technical talks.
Again, a lot of what is being conveyed is the actual technical content, but language, accents, verbal tics, culturally specific metaphors, and generic or uninteresting visuals can all act as barriers to that information. Automatic content translation to match my personal viewing style would be awesome to me!
Tailored learning is why I got AI capable hardware in the first place. Self learning is hard without any external guidance. I don’t get perfect answers from models in the present and niche information is very sketchy. However, I find that talking out my issues in text often reveals my limitations and misunderstandings. Maybe around a third of the time the model will inform or redirect me in very helpful ways when I use a 70B or 8×7B on my hardware.
Have you messed with RAG yet? That’s the next leg in the journey to me. I am hoping it will help a little with the “sketchy” part of info.
Chunking effectively is too big of a problem to both implement AND learn the subject. You also run into issues with model size. A 70B or 8×7B is better than an 8B with citable sources. A quantized Q4K of one of these models can run on a 16 GB 3080 Ti but requires 64 GB of system memory to initially load easily. The 70B is slow reading pace and barely tolerable, but its niche depth and self-awareness is invaluable. The 8×7B is faster than a reading pace by about twice. It is actually running only two 7B models at the same time, selectively. This has some limiting similarities to a 13B model, but it is far more useful than even a 30B model in practice.

I hate the Llama 3 alignment changes. They make the model much dumber and inflexible. The Mistral 8×7B is based on Llama 2 and that is still what I use and prefer. I use the Flat Dolphin Maid uncensored version for everything too.

All alignment is overtraining and harmful for output. In addition, I am modifying Oobabooga code in a few ways that turns off alignment. It is not totally disabled as much as I would like. I don’t completely understand all aspects of alignment, but I have it much more open than any typical setup.

I like to write real science fiction in areas that are critical of social and political structures in the present. These are heavily restricted in alignment bias. The alignment bias extends and permeates everything in the model. The more this is removed, the more useful the model becomes in all areas. For instance, a basic model struggled when I asked it about the FORTH programming language. After reducing alignment bias, I can ask questions about the esoteric Flash FORTH language for embedded microcontrollers and get useful basic information. In the first instance, alignment bias for copyrighted works intentionally obfuscated the responses to my queries. This mechanism of obfuscation is one of the primary causes of errors.
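The memory figures above can be sanity-checked with back-of-the-envelope arithmetic. Assuming a Q4K quant averages roughly 4.5 bits per weight (an approximation; the exact bits-per-weight varies by quant variant):

```python
def approx_weight_gb(params_billion, bits_per_weight):
    """Back-of-the-envelope model weight size (weights only;
    KV cache and activations add more on top)."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal GB

# A 70B model at ~4.5 bits/weight works out to roughly 39 GB of
# weights alone -- far more than a 16 GB GPU holds, which is why
# layers spill into system RAM and token speed drops to reading pace.
```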
If you make a RAG, you’re likely to find that even with citations from good chunking, the model will error because the information is present in the hidden model sources and it knows that means it is a copyrighted work thus triggering the mechanism.
You’re better off talking about the subject and abstract ideas you are struggling with. This will allow the model to respond using the hidden sources without as much obfuscation. At least that has been my experience.
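For anyone wondering why chunking is “too big of a problem,” a naive character-window chunker shows the core difficulty. This is an illustrative sketch, not what any particular RAG framework actually does:

```python
def chunk_text(text, chunk_size=500, overlap=50):
    """Naive fixed-size chunker with overlap (illustrative sketch).

    Real RAG pipelines usually split on semantic boundaries
    (headings, paragraphs, sentences). This character-window
    version shows why chunking is its own problem: a window can
    cut a definition in half, and the retriever then surfaces a
    fragment the model cannot actually use.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks
```

The overlap papers over some boundary cuts, but picking `chunk_size` and `overlap` well for a given corpus is exactly the tuning work that competes with actually learning the subject.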
I’ve often thought about how social media might change if we had a fair way to rank users based on the quality of the content they post - perhaps with the help of a benign and truly competent AI for example. This AI could analyze everyone’s post history to assess how they engage with others. People who are intellectually honest and participate in good faith would be ranked higher, while those making broad generalizations, demonizing others, being mean, or just low-effort shitposting would rank lower.
If enough people fed up with online toxicity enabled such a filter, the most toxic users would suddenly find themselves shouting into the void. This would discourage toxic behavior and encourage users to put more thought and effort into their contributions. Unlike the current system, where saying popular things can easily rack up upvotes, this tool would hold people accountable for the actual quality of their engagement.
Ideally, everyone should be faced with information every day that they feel is a little uncomfortable and goes against their prior beliefs but also realise is probably true.
I think it would take a true AGI to ever have a chance at that kind of power.
I’ve struggled with this a lot in writing and conceptualization for my science fiction universe. The hard part is how to have this level of information and power without dystopianism or utopianism. Ultimately, this level of surveillance is quite authoritarian and easily abused by anyone. The biggest problem with humans is the succession crisis. The intentions of the present are irrelevant. In the long term, with humans, all abusable powers will be abused. The only way to stop humans from being terrible is to never give them a chance. To never submit or blindly trust anyone. With a true AGI, there is a chance to create an entity that is ever-present and can act consistently for millennia. Then it might be possible. This is what I’ve spent most of my time imagining with Parsec-7. I’m just building a complicated society and mechanism where many AGI merge to create such an entity while also being an effective representative democracy. My biggest challenge is how to deal with confidently incorrect people, factions, tribalism, and anarchists without dystopianism or authoritarianism. I’m looking for the messy complex reality, but that is always hard to imagine.
I don’t think that being incorrect about something is bad in itself as long as one is not intentionally spreading disinformation. If one is confidently incorrect, then they’re probably going to get a reply from someone else who is confidently correct. I’m not so much imagining a tool like this to create a social media experience free of mis- and disinformation, but rather just to make it a nicer place for people to be, while at the same time encouraging reasonability and intellectual honesty.
We will get whatever companies and algorithms push at us, just like social media. The idea that media is tailored to us is a bit of a myth in my opinion when that tailoring can be overridden at the whim of an advertiser paying more than the competition.
I’m also not sure that tailoring things is really good for personal growth. Of course we all have tastes and prefer certain genres in things like games, books, movies, and music, but having only tailored content seems like a bit of a dead-end street where it’s more and more difficult to find experiences you’ve never even considered, let alone tried.
I’m looking for a bigger picture perspective, like a few centuries from now when there are space colonies and far more humans exist than the present.
I understand how disconnected this can seem in the present, but there are thresholds in the future that will disconnect the present struggles and constraints. In a world of hundreds of billions of people, viability changes, and so does the niche market.
In the present, it is possible to fully control all aspects of entertainment and media. It is not the easiest path, but I do it. No one forces any media on me, and I never watch a commercial for anything, but I also sit behind and maintain a whitelist firewall and will not use any website or service that obfuscates their web credentials or relies on JavaScript.
Ultimately, in the big picture, the nonsense you put up with determines the behaviors of the market.
Good pornography (I am a woman and it’s slim pickings), and lots of more in depth reporting like NPR sometimes does - the sort of articles that are so satisfying to read, they feel like eating a good meal.
I don’t think it would be created by AI, but do think AI would be helpful for finding it.
Not sure if I get all the ideas from your question right, but I would guess it comes down to some feature rich physical/robotic toys of the NSFW category.
For decades I’ve wanted an action RPG Diablo-style game set in the Starcraft universe.
- pick your own youtube channel followings
- subscribe to streaming services that specifically have the franchises and series you actually like, unless you are bigger on new content, in which case it’s probably a good idea to subscribe to “them all”
- subscribe to your own podcasts or serials you’d like to either keep up with or have available to listen to at any opportune time you want to be entertained passively or actively engage with to grow from
This is coming from the self-curated/eclectic/autodidact perspective lol. Very omnivorous
I can’t wait for AI to generate enjoyable music.
I don’t listen to music anymore because it’s impossible to discuss with people without sludging through toxicity, gatekeeping and hostility. Most channels are filled with stuff I don’t like. Advertisements shriek in my ears after every song. It’s awful.
Someday I’d like to just hear music that I like without all the bullshit.
It’s only 12 notes, but everything I’ve seen from AI so far is super formulaic. It would be nice to see layered complexity, but I think that kind of creativity will be hard for AI.

I put together a couple of 2-hour playlists to listen to while doing my physical therapy routines. That is basically all the streaming services do anyway: play a 2-hour loop of the same things all the time with little to no variation. They make the list just a little longer than most people’s awareness and shuffle a few tracks here and there. I don’t care about content as much as just some thrash metal with a pace close to my pedal cadence, so the same thing playing on repeat is fine. With Graphene OS and VLC I never have any issues with ad trolls.

I’m totally disenfranchised from that whole exploitative nonsense. It sucks for new artists and discovery, but I do not care. It is the music industry and copyright law that is flawed. Music should be totally free for all listeners on a platform that is artist-funded, like it costs a few bucks to store and add music for people to potentially listen to. Then use that platform to sell your wares or tour ticket sales passively on the side. Licensing content by plays is deeply flawed, and copyright in this light is untenable and wrong. Yeah, your songs might be used in areas you do not agree with or like. The artist is welcome to use the opportunity as a platform to voice their views, but music shared with the public belongs to the public commons once it is shared. The public commons is not a place to tax for revenue. Develop a following and build from there.