![](https://lemmy.world/pictrs/image/b6a70bce-f540-4e4b-8719-6f7dd540c433.png)
Thanks for this suggestion, never seen these guys before but they’re incredibly talented and very enjoyable to watch
I’m both impressed and concerned at the level of detail you supplied here, but…thank you? For some of the context
The maker of ChatGPT had made progress on Q* (pronounced Q-Star), which some internally believe could be a breakthrough in the startup’s search for superintelligence, also known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as AI systems that are smarter than humans.
Definitely seems AGI-related. It has to do with acing mathematical problems - I can see why a generative AI model that can learn, solve, and then extrapolate mathematical formulae could be a big breakthrough.
I saw one suggestion which was to do away with male and female competitions, and instead have “open” and “restricted” comps. Open would be available to anyone, male or female, while you could set up as many restricted comps as you needed for the particular sport or activity with whatever rules make sense. So the 100m sprint might have Open, Restricted - Testosterone, and Restricted - Height - with whatever T level or height in centimetres decided by the relevant authority. Whereas something like weightlifting might have Restricted - Weight as its own class. The idea being any gender can compete provided they meet the restrictions in place to make an interesting/fair competition within that bracket.
I find it interesting that DALL-E still doesn’t understand text - look at all the random Dachshund spellings in the generated images. It knows what the word should look like, but has no framework to interpret or distinguish text from the other elements of the image. It looks like trying to spell in a dream.
Where it gets really challenging is that LLMs can take the assignment input and generate an answer that is actually more educational for the student than what they learned in class. A good education system would instruct students in how to structure their prompts in a way that helps them learn the material - because LLMs can construct virtually limitless examples and analogies and write in any kind of style, you can tailor them to each student with the correct prompts and get a level of engagement equal to a private tutor for every student.
So the act of using the tool to generate an assignment response could, if done correctly and with guidance, be more educational than anything the student picked up in class - but if it’s not monitored, if students don’t use the tool the right way, it is just going to be seen as a shortcut for answers. The education system needs to move quickly to adapt to the new tech but I don’t have a lot of hope - some individual teachers will do great as they always have, others will be shitty, and the education departments will lag behind a decade or two as usual.
Repeating what some have already said here:
https://youtu.be/OoJsPvmFixU?si=NrURYGlLii4Dbi1_
The host has a background in aerospace engineering and missile test flights - so it’s about as close to rocket science as you can get! He knows his stuff and has a lot more practical, engineering-related videos - kind of makes you think about how to operationalise the more cerebral ideas of the other channels.
Hope you enjoy some or all of the suggestions here and from other commenters