OpenAI, a non-profit AI company that will lose anywhere from $4 billion to $5 billion this year, will at some point in the next six or so months convert into a for-profit AI company, at which point it will continue to lose money in exactly the same way.
There are a lot of things that LLMs are really good at, or incredibly useful for, such as ingesting large bodies of text and then analyzing them based on your ability to create well-thought-out prompts.
That’s the story people tell at least. The weasel phrase at the end is fun, I guess. Leaves a massive backdoor excuse when it doesn’t actually work.
But in practice, LLMs are falling down even at this job. They seem to have some use in academic qualitative coding, but for summarizing novel or extended bodies of text, they struggle to actually tell people what they want to know.
Most people do not give a shit if text contains a reference to X. And if they do, they can generally just CTRL+F “X”.
Weasel phrase? You mean the fact that I don’t treat them like they’re actual AI, but just a tool that needs to be used properly, monitored, and verified?
There’s a reason I never call them AI: they’re not. They’re just advanced machine-learning tools, and just as I keep a steady hand when using a table saw, I only use LLMs for tasks they can help me do faster and where it’s easy to verify they did it right.
And as someone who has been using them very regularly, I feel confident in saying that. It’s not a weasel phrase, and I’m not trying to sell anyone snake oil about what they can actually do. I acknowledge that they’re an oversold and overhyped means of cooking the planet faster, so it’s not like I would be mad if they were banned tomorrow. But until then, I will keep using them in ways that are actually fruitful.
But sure, if all you need to do is find one word in a single body of text, that’s not really a good use of an LLM, but that wasn’t what I was talking about.
If I need examples of various legal or ethical concerns documented in one or more pieces of writing, or other conceptual topics, I can give it a list and then ask it to highlight all examples of those issues and include the verbatim text where they’re present. I can then give that same task to multiple different LLMs, with the same prompts, and a task that would have taken me hours to complete takes me 30 to 45 minutes, including the time it takes me to give it a quick read-through to see if anything was missed. But yeah, that requires a well-crafted prompt, and it’s not infallible.
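The workflow described above can be sketched roughly like this. Everything here is hypothetical: `ask_model` stands in for a real LLM API call (stubbed so the logic runs on its own), and the source text and issue list are made-up examples. The point of the sketch is the verification step: only keep passages that actually appear verbatim in the source, and pool the findings from several models.

```python
# Hypothetical sketch of the cross-checking workflow described above.
# `ask_model` is a stand-in for a real LLM API call; here it is stubbed
# with fixed output so the verification logic can run on its own.

SOURCE = (
    "The vendor stored passwords in plain text. "
    "Users were not notified of the breach."
)

ISSUES = ["data security", "breach disclosure"]

def ask_model(name, source, issues):
    # Stub: a real implementation would prompt an LLM with the issue
    # list and ask for (issue, verbatim_quote) pairs found in `source`.
    return [
        ("data security", "The vendor stored passwords in plain text."),
        ("breach disclosure", "Users were not notified of the breach."),
    ]

def cross_check(source, issues, models):
    findings = {}
    for model in models:
        for issue, quote in ask_model(model, source, issues):
            # Verification step: discard anything that is not a
            # verbatim quote from the source (i.e. hallucinated text)
            # or that flags an issue we never asked about.
            if quote in source and issue in issues:
                findings.setdefault(issue, set()).add(quote)
    return findings

results = cross_check(SOURCE, ISSUES, ["model_a", "model_b"])
```

The human read-through mentioned above still matters: the verbatim check catches fabricated quotes, but only a reader can catch passages the models missed entirely.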