by Java Pimp (98454) writes:
We also tend not to put people exhibiting these behaviors in decision-making positions.
Except when we put one of them in charge of our country... Twice....
by TWX (665546) writes:
yeah, this part stood out to me:
A LLM will be just as confident when saying something completely wrong -- and obviously so, to a human -- as it will be when saying something true.
It stands out as empowering ignorance as if it were equivalent to knowledge and experience, and as asserting that the ignorant person's views are fully valid even when based on bogus "research."
Asserting things while ignorant or actively wrong is what a con man does; the root of what a con man relies on is confidence, which is where the "con" comes from. AI may as well be a con man.
by DeanonymizedCoward (7230266) writes:
... empowering ignorance as if it were equivalent to knowledge and experience, and as asserting that the ignorant person's views are fully valid even when based on bogus "research" ...
Often found in the phrase "Don't take my word for it, do your own research!", the word "research" no longer connotes tirelessly combing over vast troves of available information on the topic, viewing each claim found there with a skeptical eye, weighing it on its merits, and so on. These days it's more about seeking confirmation, helped along by the social media algorithms. Directly beneath my flat-earth video, where I admonish you to do your own research, you'll find a helpful list of more flat-earth videos you can use to do said research.
I've seen plenty of instances of LLMs confidently asserting something obviously wrong. They love to do it with coding tasks, and will sometimes keep doing it even after being called on it or corrected. Aside from the usual "How many R's are in strawberry?" questions, it can be comical to ask one to compute a Luhn checksum (a correct version is sketched below). I've seen this produce a whole list of steps, each individually correct, and then arrive at an obviously wrong answer, along the lines of "we have the digits 1 2 3 4 5, and the sum of these is 37."
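For reference, the Luhn checksum the commenter mentions fits in a few lines. A minimal Python sketch (the function name and the test number are illustrative, not from the thread):

```python
def luhn_checksum(number: str) -> int:
    """Luhn checksum of a digit string; 0 means the number is valid."""
    digits = [int(d) for d in number]
    # Starting from the second-to-last digit and moving left, double
    # every second digit; if the result exceeds 9, subtract 9.
    for i in range(len(digits) - 2, -1, -2):
        doubled = digits[i] * 2
        digits[i] = doubled - 9 if doubled > 9 else doubled
    return sum(digits) % 10

print(luhn_checksum("79927398713"))  # 0 -- the classic valid example
```

The arithmetic is trivial for code and yet exactly the kind of multi-step bookkeeping where an LLM will narrate correct steps and still botch the final sum.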
I've never delved into how LLMs work, but I'd assume there has to be some kind of internal confidence factor for predicting the next token. If all of your leading contenders have confidence factors of 0.007 +/- 0.0003, sure, there will be a winner, but maybe there also ought to be an aggregate confidence score for the entire response that gets shown to the user? Something like the sketch below.
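There is indeed such a quantity: the model assigns a probability to each token it generates, and one standard way to aggregate them is the geometric mean of the token probabilities, i.e. exp of the mean log-probability (the inverse of perplexity), which keeps the score length-independent. A minimal sketch, assuming you already have per-token log-probabilities; the function name and the numbers are illustrative, not from any particular API:

```python
import math

def response_confidence(token_logprobs: list[float]) -> float:
    """Geometric mean of per-token probabilities, in (0, 1].

    exp(mean log-prob) == 1 / perplexity, so the score is
    comparable across responses of different lengths.
    """
    if not token_logprobs:
        return 0.0
    return math.exp(sum(token_logprobs) / len(token_logprobs))

# Illustrative log-probs for a five-token response,
# with one very uncertain token dragging the score down:
logprobs = [-0.05, -0.2, -4.9, -0.1, -0.3]
print(f"{response_confidence(logprobs):.3f}")  # ~0.330
```

Whether a low aggregate score actually tracks being wrong is another matter; the models are often poorly calibrated, which is the complaint in the first place.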