The public fundamentally misunderstands this tech because salesmen lied to them. An LLM is not AI. It just says the most likely thing based on what's most common in its training data for that scenario. It can't do math or problem-solve. It can only tell you what the most likely answer would be. It can't actually perform functions. It's like Family Feud, where the winning answer is whatever the most people surveyed said.
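If you want the Family Feud point in code, here's a toy sketch. The lookup table is a made-up stand-in for what training bakes into the model; nothing here is a real model or library:

```python
# Toy illustration of "say the most likely thing". The counts table is
# a stand-in for what training distills out of the data: for each
# context, how often each continuation was seen. All names are made up.

counts = {
    "the capital of France is": {"Paris": 950, "Lyon": 30, "a": 20},
    "2 + 2 =": {"4": 900, "5": 60, "22": 40},
}

def most_likely_next(context: str) -> str:
    # No reasoning, no arithmetic: just "what did people usually say here?"
    options = counts[context]
    return max(options, key=options.get)

print(most_likely_next("2 + 2 ="))  # "4" -- right by popularity, not by math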
Some of them will "do math", but not with the LLM predictor: they have a math engine, and the predictor decides when to use it. What's great is that when it outputs results, it's not clear whether it engaged the math engine or just guessed.
That depends on the harness, though. In the plain model output it will be clear that a tool call happened; it depends on the application UI around it whether that's shown directly to the user or you only see the LLM's final response based on it.
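For the curious, here's roughly what such a harness looks like. Everything in this sketch is hypothetical (the `generate` calls, the `tool_call` field, the response types are invented stand-ins, not any real library's API):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical harness types -- real APIs differ, but the shape is the
# same: the raw transcript makes tool use explicit, and the UI decides
# what to surface.

@dataclass
class ToolCall:
    arguments: dict

@dataclass
class ModelResponse:
    text: str
    tool_call: Optional[ToolCall] = None

def calculator(expression: str) -> str:
    # The deterministic "math engine" the predictor can delegate to.
    return str(eval(expression))  # eval() only acceptable in a toy sketch

def run_turn(model, user_message: str) -> str:
    response: ModelResponse = model.generate(user_message)
    if response.tool_call is not None:
        # At this layer there is no ambiguity: the tool call is an
        # explicit, structured event in the transcript.
        result = calculator(response.tool_call.arguments["expression"])
        # Whether the user ever sees this intermediate step is purely
        # an application UI choice.
        return model.generate_with_tool_result(result)
    # No tool call: the reply is pure prediction, i.e. a guess.
    return response.text
```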
I explain it as asking 100 people to Google something and taking the most common answer.
Yeah, that’s basically exactly what family feud does.
Yep, but instead of "name something a woman keeps in her purse" it's "write my legal document" or "is it okay to lick a lamp socket".
I know Lemmy hates AI with a fiery passion (and I too hate it for various reasons), but the ability to make this sort of prediction far more stably than whatever came before in natural language processing (fancy term of the day for those who haven't heard of it), however inefficiently it's built and run, is useful if you can nudge it enough in a certain direction. It can't do functional things reliably, but if you confine it to parsing human language and extracting very specific information, have it show that in a machine-parsable way, and then use that as input for something you can actually program, you've essentially built something that feels like it can understand you in human language for a handful of tasks and carry them out (even if the carrying-out part isn't actually done by an LLM). So pedantically, it's not AI, but most people not in tech don't know or care about the difference. It's all magic all the way down, like how computers are just supposed to magically do whatever they're thinking of. That hasn't changed.
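To make that pipeline concrete, here's a rough sketch of the "parse language, emit something machine-parsable, hand it to real code" pattern. The prompt, the `llm` callable, and the command names are all invented for illustration:

```python
import json

# Sketch of the "LLM as language frontend" pattern described above.
# `llm` stands in for any text-in/text-out model call; the prompt,
# schema, and handlers are made up.

PROMPT = """Extract the user's request as JSON with exactly two keys:
"action" (one of "set_timer", "unknown") and "minutes" (integer or null).
Output only the JSON. Request: {request}"""

def set_timer(minutes: int) -> str:
    # Deterministic, programmed logic -- no LLM involved in execution.
    return f"Timer set for {minutes} minutes."

def handle(llm, request: str) -> str:
    raw = llm(PROMPT.format(request=request))
    try:
        # Confine the LLM to machine-parsable output...
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        return "Sorry, I didn't understand that."
    # ...and let ordinary code do the actual work.
    if parsed.get("action") == "set_timer" and parsed.get("minutes"):
        return set_timer(int(parsed["minutes"]))
    return "Sorry, I can't do that."
```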
My point though, and this isn't targeting you specifically, dear OC, is that we can circlejerk all we want here, but echoing this oversimplification of what LLMs can do is pretty irrelevant to the bigger discourse. Call these companies out on their practices! Their hypocrisy! Their indifference to the collapse of our biosphere, to human suffering, their willingness to leave the most vulnerable high and dry!
Tech is a tool, and if our best argument is calling a tool useless when it's demonstrably useful in specific ways, we're only making fools of ourselves, turning people away from us and discouraging others from listening to us.
But if your goal is to feel good by letting one out, please be my guest.
Peace
The only way to know if LLM output is accurate is to know what an accurate output should look like, and if you know that, you don’t need an LLM. If you don’t know what an accurate output should look like, an LLM is equally likely to confidently lie to you as it is to help you, making you dumber the more you use it. The only other situation is if you know what an accurate output should look like, but you want an inaccurate one, which is a bad thing to encourage.
“Demonstrably useful” is a lie. It’s a blatant and obvious lie. LLMs are so actively detrimental to their users, and society as a whole, that calling them useless is being generous. And even if they were the most beneficial thing on the planet, there is still no reason to use the billionaire’s toxic Nazi plagiarism machine.
We already have tools that can give us incorrect answers in natural human language.
And they post their videos to YouTube for free.