cross-posted from: https://fedia.io/m/[email protected]/t/1446758
Let’s be happy it doesn’t have access to nuclear weapons at the moment.
AI summaries of larger bodies of text work pretty well so long as the source text itself is not slop.
Predictive text entry is a handy time saver so long as a human stays in the driver’s seat.
Neither of these justifies current levels of hype.
https://arstechnica.com/ai/2024/09/australian-government-trial-finds-ai-is-much-worse-than-humans-at-summarizing/
https://ea.rna.nl/2024/05/27/when-chatgpt-summarises-it-actually-does-nothing-of-the-kind/
Go look at the models available on huggingface.
There are applications in visual question answering, video-to-text, depth estimation, 3D reconstruction from a photo, object detection, visual classification, language-to-language translation, text to realistic speech, reinforcement learning for robotics, and weather forecasting — and those are just the surface-level models.
It absolutely justifies current levels of hype, because the research being done now will put millions of people out of jobs, and it will be much cheaper than paying people to do the same work.
The people saying it’s hype are the same people who said the internet was a fad. Did we have a bubble of bullshit? Absolutely. But there is valid reason for the hype, and we will filter out the useless stuff eventually. It’s already changed entire industries practically overnight.
the reactionary opinions are almost hilarious. they’re like “ha this AI is so dumb it can’t even do complex systems analysis! what a waste of time” when 5 years ago text generation was laughably unusable and AI generated images were all dog noses and birds.
I think he’s talking about the LLMs, which…yeah. AI and LLMs are lumped together (which makes sense, but classification makes a huge difference here)
Even LLMs in the context of coding are useful. I am no programmer: I have memory issues, which means I can’t keep the web of information in my head long enough to debug the stuff I attempt to write.
With AI assistants, I’ve been able to create multiple microcontroller projects that I wouldn’t have even started otherwise. They are amazing assistive technologies. Many times they’re even better than the language documentation itself, because they can give an example of something that almost works. So yes, even LLMs deserve the amount of hype they’ve been given. I’ve made a whole game-server management back-end for ARK servers with the help of an LLM (qwen-coder 14b).
I couldn’t have done it otherwise, or I would have had to pay someone $60k, which I don’t have, and which means the software never would have existed.
I’ve even moved on to modifying some open-source Android apps for a specialized camera application. Compared to a normal programmer, sure — maybe it’s not as good. But having it next to me as an inexperienced nobody lets me write programs I wouldn’t otherwise have been able to, or that would have been too daunting a task.