by TWX ( 665546 ) writes:
I've met plenty of biological beings that didn't seem to be particularly conscious. Particularly when driving.
by 0123456 ( 636235 ) writes:
He didn't say that all biological beings are conscious, but that only biological beings can be conscious.
Which seems pretty clear since machines are just following a program. An LLM can't suddenly decide to do something else which isn't programmed into it.
by Lord Kano ( 13027 ) writes:
An LLM can't suddenly decide to do something else which isn't programmed into it.
Can we?
It's only a matter of time until an AI can learn to do something it wasn't programmed by us to do.
Can a non-biological entity feel desire? Can it want to grow and become something more than what it is? I think that's a philosophical question and not a technological one.
LK
by AleRunner ( 4556245 ) writes:
It's only a matter of time until an AI can learn to do something it wasn't programmed by us to do.
As long as you program it to learn things that it wasn't explicitly programmed to do and then let it run "free", that's already almost trivial, and it has been achieved even with things like expert systems that we more or less fully understand. Most LLMs include sources of randomness that have only limited constraints, so they can already come up with things that are beyond what's in their learned "database" of knowledge. Sometimes it's even right, though mostly it's just craziness. That doesn't make it unoriginal.
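To make the "sources of randomness" point concrete, here is a minimal sketch of temperature sampling, the kind of stochastic step most LLM decoders use when picking the next token. The vocabulary and logits are invented for illustration and the routine is not drawn from any particular model.

    import math
    import random

    # Toy vocabulary and logits -- invented for illustration, not from any real model.
    vocab = ["cat", "dog", "quasar", "teapot"]
    logits = [2.0, 1.5, 0.3, 0.1]

    def sample(logits, temperature=1.0):
        """Softmax over the logits at the given temperature, then draw one token index."""
        scaled = [l / temperature for l in logits]
        m = max(scaled)
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        return random.choices(range(len(probs)), weights=probs, k=1)[0]

    # Higher temperature flattens the distribution, so unlikely ("crazy") tokens
    # show up more often across repeated draws; lower temperature sticks to the
    # most probable continuation.
    for temp in (0.2, 1.0, 2.0):
        picks = [vocab[sample(logits, temp)] for _ in range(10)]
        print(f"temperature={temp}: {picks}")

The randomness is bounded by the learned distribution, but it is enough to produce sequences that never appeared verbatim in the training data.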
Can a non-biological entity feel desire? Can it want to grow and become something more than what it is? I think that's a philosophical question and not a technological one.
LK
Don't agree at all and I think that's a morally dangerous approach. We're looking for a scientific definition of "desire" and "want". That's almost certainly a part of "conscious" and "self aware". Philosophy can help, but in the end, to know whether you are right or not you need the experimental results.
by Lord Kano ( 13027 ) writes:
Can a non-biological entity feel desire? Can it want to grow and become something more than what it is? I think that's a philosophical question and not a technological one.
LK
Don't agree at all and I think that's a morally dangerous approach. We're looking for a scientific definition of "desire" and "want". That's almost certainly a part of "conscious" and "self aware". Philosophy can help, but in the end, to know whether you are right or not you need the experimental results.
Experiments can be crafted in such a way as to exclude certain human beings from consciousness.
One day, it's extremely likely that a machine will say to us "I am alive. I am awake. I want..." and whether or not it's true is going to be increasingly hard to determine.
LK
by AleRunner ( 4556245 ) writes:
Experiments can be crafted in such a way as to exclude certain human beings from consciousness.
I mean yes, in a trivial sense. Experimental procedure: Ask subject for name. If name is "John" then mark as non conscious, otherwise mark as conscious.
That's a bad experiment though.
One day, it's extremely likely that a machine will say to us "I am alive. I am awake. I want..." and whether or not it's true is going to be increasingly hard to determine.
LK
Maybe, or maybe once we are able to actually define and identify consciousness we will be able to know exactly what it can do that nothing else can and build a test which easily and quickly gives the answer.
by 0123456 ( 636235 ) writes:
> It's only a matter of time until an AI can learn to do something it wasn't programmed by us to do.
Everything the "AI" does is the result of a program it's running. You can literally look inside it and say "oh, yeah, it did that because it ran these instructions with this data." It will never do anything except run instructions on data, even if you can change that data to make it do different things.
There's no "Ghost in the Machine" which can make it do anything else.
by Lord Kano ( 13027 ) writes:
The day will come that an AI will learn something that we did not deliberately teach it. When an AI is able to improve its own code, it won't be bound by the limitations of its human creator. It's only a question of when.
LK
by home-electro.com ( 1284676 ) writes:
You have no idea what you are talking about. Zero clue.
by Anonyrnous ( 10465021 ) writes:
I believe if I had a good enough scanner and computer I could scan every atom in your body and every electrical impulse and observe every input coming into your body and calculate every single thought that comes into your mind. But only because I believe that reality follows certain laws and that shit doesn't just happen randomly. Would me being able to predict all of your thoughts make you not intelligent? Of course this just goes back to the old free will debate.