This discussion has been archived.
No new comments can be posted.
by byronivs ( 1626319 ) writes:
Whaaaaaaaat?
by shanen ( 462549 ) writes:
If you were going for Funny, the joke didn't land. Nor did the vacuous Subject.
However, I do have a minor personal experience to share about ChatGPT losing its marbles. The project was file analysis using HTML with embedded JavaScript. The first few sessions seemed quite productive, with lots of functionality, but then the so-called AI started cutting pieces away, seemingly at random. Maybe someone has a constructive suggestion?
Asking for constructive suggestions on Slashdot? Now that's ROFLMAO.
by narcc ( 412956 ) writes:
You're dramatically overestimating what is possible with an LLM. Try to remember that there is a disconnect between what you'd expect from the interface and what's really happening in the background. LLMs generate text on the basis of how tokens appeared in relation to one another in the training data. It's not operating on facts and concepts. It's not composing replies after careful analysis and reasoning. Those things are not possible for an LLM.
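To make that concrete, here's a minimal toy sketch (purely illustrative; the training sentence and every name in it are made up, and a real LLM replaces these raw counts with a neural network at vastly larger scale): a bigram model that records which token followed which in its training text, then generates by sampling from those counts. There are no facts or concepts anywhere in it, only co-occurrence statistics.

```typescript
// Toy bigram "language model" -- an illustrative sketch, nothing like a real LLM.
// It stores only co-occurrence counts: which token followed which, how often.

type Counts = Map<string, Map<string, number>>;

// Count, for every token in the training text, what followed it and how often.
function train(text: string): Counts {
  const counts: Counts = new Map();
  const tokens = text.split(/\s+/);
  for (let i = 0; i < tokens.length - 1; i++) {
    const cur = tokens[i];
    const next = tokens[i + 1];
    if (!counts.has(cur)) counts.set(cur, new Map());
    const followers = counts.get(cur)!;
    followers.set(next, (followers.get(next) ?? 0) + 1);
  }
  return counts;
}

// Pick the next token by sampling proportionally to how often it followed `token`.
function sampleNext(counts: Counts, token: string): string | undefined {
  const followers = counts.get(token);
  if (!followers) return undefined;
  const total = [...followers.values()].reduce((a, b) => a + b, 0);
  let r = Math.random() * total;
  for (const [candidate, n] of followers) {
    r -= n;
    if (r <= 0) return candidate;
  }
  return undefined;
}

// Generate by repeatedly asking "what usually comes next?" -- nothing more.
function generate(counts: Counts, start: string, maxTokens: number): string {
  const out = [start];
  let cur = start;
  for (let i = 0; i < maxTokens && counts.has(cur); i++) {
    const next = sampleNext(counts, cur)!;
    out.push(next);
    cur = next;
  }
  return out.join(" ");
}

// Tiny made-up training text; real models train on trillions of tokens.
const model = train("the valve is closed and the light is on and the valve is open");
console.log(generate(model, "the", 8));
// Output resembles the training text, yet nothing here "knows" what a valve is.
```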
The Three Mile Island disaster was caused by a similarly bad interface. A light that was supposed to signal that a valve was closed instead only signaled that the button to close the valve had been pressed. Had the light actually indicated the state of the valve, the disaster could have been averted. The button and the light were working as designed, just like an LLM works as designed. Both are dangerously misleading.
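The same disconnect is easy to show in code. A hypothetical sketch (the class and every name in it are invented for illustration, not the actual plant logic): the as-built light reports that the close command was sent, while a safe light would report what a sensor says the valve actually did.

```typescript
// Illustrative sketch of the TMI-style indicator flaw (all names hypothetical).

interface ValveSensor {
  isClosed(): boolean; // ground truth, read from the hardware
}

class ControlPanel {
  private closeCommandSent = false;

  constructor(private sensor: ValveSensor) {}

  pressCloseButton(): void {
    this.closeCommandSent = true; // command issued; valve may still be stuck open
  }

  // The flawed design: the light tracks the button, not the valve.
  lightAsBuilt(): boolean {
    return this.closeCommandSent;
  }

  // The safe design: the light tracks what the valve actually did.
  lightAsNeeded(): boolean {
    return this.sensor.isClosed();
  }
}

// A stuck valve: the close command was sent, but it never closed.
const stuck: ValveSensor = { isClosed: () => false };
const panel = new ControlPanel(stuck);
panel.pressCloseButton();
console.log(panel.lightAsBuilt());  // true  -- the operator believes the valve is closed
console.log(panel.lightAsNeeded()); // false -- the valve is actually still open
```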
If you're expecting ChatGPT (or whatever) to write code the way a person would write code, you're going to be disappointed. That's not what the program is doing, and it's not something it can do. It can only generate text that looks like the text in the training data. This can be a difficult fact to accept, given how convincing the illusion can be for some people, but that's reality. Keep expectations low and don't use it for anything unusual or particularly long, and you'll feel better about the output you get.
by shanen ( 462549 ) writes:
Hmm... Not sure I had any clear expectations of what it could do. I saw it more as an almost random experiment that produced some surprising results at first and then went quite sour...
I've engaged ChatGPT in a number of dialogues. Some were interesting, but I also suspect some of them may be harmful. Is it too easy for me to think like that? (Old joke: "Too much computer use is bad for mental hygiene.")