Posted by timothy on November 24, 2009 @09:49AM
from the why-stop-at-dog-brain-after-all dept.
kreyszig writes "The recent story of a cat brain simulation from IBM had me wondering if this was really possible as described. Now a senior researcher in the same field has publicly denounced IBM's claims."
More optimistically, dontmakemethink points out an "astounding article about new 'Neurogrid' computer chips which offer brain-like computing with extremely low power consumption. In a simulation of 55 million neurons on a traditional supercomputer, 320,000 watts of power was required, while a 1-million neuron Neurogrid chip array is expected to consume less than one watt."
This discussion has been archived.
No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
by cyberspittle ( 519754 ) writes:
Think about it. Think about it like a cat.
by Anonymous Coward writes:
Somehow my pet parrot now seems oddly... delicious. :O
by j00r0m4nc3r ( 959816 ) writes:
I just left a scent marker on my co-worker's desk. He gave me an odd look while I did it...
by marqs ( 774373 ) writes:
"If a lion could talk, we could not understand him."
Ludwig Wittgenstein, Tractatus Logico-Philosophicus
by DriedClexler ( 814907 ) writes:
If philosophers could say something useful, they would not understand it.
by Jesus_666 ( 702802 ) writes:
Okay.
Give me food. Now.
by Jesus_666 ( 702802 ) writes:
Where's my food? I asked for food more than one minute ago and there's nothing here yet. I am outraged.
by Jesus_666 ( 702802 ) writes:
I still see a distinct lack of you-provided food around here. Make it snappy, can opener slave!
by itsdapead ( 734413 ) writes:
Think about it. Think about it like a cat.
In block-capital Papyrus on top of a humorous cat photo.
by Neil Hodges ( 960909 ) writes:
I think you meant "block-capital Impact [wikipedia.org]."
by Blakey Rat ( 99501 ) writes:
FAIL!
LOLCAT font is IMPACT. Fool! Every cat should know that.
by jhoegl ( 638955 ) writes:
Hey... what's that moving dot on the wall? Why is it there? I must have it!
Great! I captured it! Wait, what's this? It escaped me, inconceivable!!! What luck, it stopped right by my paw, I'll capture it again! NNNNOOOOOOO!!!!
Look, look there, it's something moving under my feet. I must pounce it to figure out what it is! Weird, I pounced it and it's still moving. I'll pounce it again! Ah, there it stopped moving, I'll sniff it now. Wait, it's moving again... Curse you!
by Critical Facilities ( 850111 ) * writes:
Insightful??
Hmmmph! My cat Phydeaux must have mod points again.
by Abstrackt ( 609015 ) writes:
All right, here goes...
o hai
im in ur brain thinkin ur thots
No wonder my cats sleep all day...
by eldavojohn ( 898314 ) * writes:
Think about it. Think about it like a cat.
SMBC explained why cat translation products fail [smbc-comics.com]. Although there are financial endeavors to decode dog [wikipedia.org].
by Quiet_Desperation ( 858215 ) writes:
Think about it like a cat.
I tried, but all it did was make me crave a cheeseburger.
Oh, and some vision about a cat up in the ceiling or something.
by Junior J. Junior III ( 192702 ) writes:
I can haz brain simulation?
by Junior J. Junior III ( 192702 ) writes:
Think about it. Think about it like a cat.
I can haz brain simulation?
by AlpineR ( 32307 ) writes:
Whoa. Déjà vu.
by Nathrael ( 1251426 ) writes:
I can haz cheezburger?
by Garble Snarky ( 715674 ) writes:
Wouldn't power consumption grow more than linearly with neuron count? I would think the number of connections is the dominant factor - so the comparison of two data points of power consumption vs neuron count is meaningless.
by jabuzz ( 182671 ) writes:
You assume all neurons are connected to all other neurons. My brain does not work like that, so it makes no sense to expect a simulated brain to work that way.
by gnick ( 1211984 ) writes:
You assume all neurons are connected to all other neurons. My brain does not work like that...
Are you sure? I know that all of the neurons in your brain are not directly connected, but that doesn't imply that there's no path between them. So, while the power consumption involved with neuron interaction may not increase quite as much per added neuron as if you had direct connections between each of them, it still seems that it would be more complicated than a direct linear correlation.
by AlpineR ( 32307 ) writes:
It's nonlinear for small numbers of neurons since you need to count connections to second and third nearest neighbors as well as first nearest neighbors. But once you get past the length scale of the longest connection, it scales linearly from there.
It's like the road system. A city with a bunch of intersections will have more road segments between the intersections than there are intersections themselves. But a second city won't build roads from each of its intersections back to each intersection in the fi
by MrNaz ( 730548 ) * writes:
It makes sense if you assume that *his* brain works like that.
by poetmatt ( 793785 ) writes:
Considering that I can't even find the quote for the second article linked, I'll remain skeptical of the whole thing. The article on that "low power" version doesn't say anything about low power; in fact, it talks about wattage woes and concerns due to the requirements to make a "neural" processor equivalent.
Also of note is that they're doing the same idea as Intel, just at a horrendously lower capability. Basically a lack of information and a whole lot of hype.
by Raffaello ( 230287 ) writes:
From the article on the "low power" Neurogrid chip (page 3):
Just a few miles down the road, at the IBM Almaden Research Center in San Jose, a computer scientist named Dharmendra Modha recently used 16 digital Blue Gene supercomputer racks to mathematically simulate 55 million neurons connected by 442 billion synapses. The insights gained from that impressive feat will help in the design of future neural chips. But Modha’s computers consumed 320,000 watts of electricity, enough to power 260 American ho
by pz ( 113803 ) writes:
Wouldn't power consumption grow more than linearly with neuron count? I would think the number of connections is the dominant factor - so the comparison of two data points of power consumption vs neuron count is meaningless.
Neurons are not typically fully connected in complete-graph-like networks; more usually, each neuron connects to a fixed number of other neurons that varies by type, from a small handful to about 10,000. The latter number (10,000) is the one used when researchers want to estimate the total number of connections in the cortex, especially when talking about simulations or writing grant proposals where bigger numbers are more impressive.
So, power consumption should grow linearly with neuron count, if the simulation i
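A small illustrative sketch of that scaling argument (my own check, not from the article; the 10,000 fan-in is simply the estimate mentioned above): with a fixed per-neuron fan-in, the total synapse count, and hence the work per simulated time step, grows linearly with neuron count, whereas an all-to-all network grows quadratically. Note that 10,000 synapses per neuron across a billion neurons also reproduces the "10 trillion synapses" figure quoted elsewhere in this discussion.
# Illustrative only: fixed fan-in => linear growth; all-to-all => quadratic growth.
def synapse_count(neurons, fan_in=10_000, fully_connected=False):
    """Rough synapse estimate for a network of `neurons` cells."""
    if fully_connected:
        return neurons * (neurons - 1)   # every cell connects to every other cell
    return neurons * fan_in              # cortex-like fixed fan-in

for n in (55_000_000, 1_000_000_000):    # IBM simulation size, rough cat-brain size
    print(f"{n:>13,} neurons: {synapse_count(n):.1e} synapses (fixed fan-in), "
          f"{synapse_count(n, fully_connected=True):.1e} (all-to-all)")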
by drainbramage ( 588291 ) writes:
All those neurons using less than 1 watt?
I know some people like that.
by L4t3r4lu5 ( 1216702 ) writes:
VINDICATION! [penny-arcade.com]
by Nerdfest ( 867930 ) writes:
I'm being environmentally friendly, you insensitive clod!
by dontmakemethink ( 1186169 ) writes:
Actually, if you read TFA, the long-pondered question of why humans only use 1-15% of their brain is largely a matter of power consumption, and the reason for the abundance of dormant neurons is greater potential diversity of thought.
"While accounting for just 2 percent of our body weight, the human brain devours 20 percent of the calories that we eat."
"The brain achieves optimal energy efficiency by firing no more than 1 to 15 percent—and often just 1 percent—of its neurons at a time."
That seems to indicate that a human brain would burn more calories than the rest of the body if it were "always on".
Being a hypoglycemia sufferer, I can attest to the severe limitations of brain activity when deprived of sugar. Before being diagnosed I experienced tunnel vision and blackouts, not to mention the typical mood swings, shakiness, cold sensations, etc.
Never has my nickname been more appropriate...
by Trevin ( 570491 ) writes:
The cat's brain is made up of 1 BILLION neurons and 10 trillion synapses. So with the Neurogrid chips, it will require at least a kilowatt to simulate.
by L4t3r4lu5 ( 1216702 ) writes:
So with the Neurogrid chips, it will require at least a kilowatt to simulate.
So, a reduction of 319kW, then? That's pretty good.
by Yvan256 ( 722131 ) writes:
In a simulation of 55 million neurons on a traditional supercomputer, 320,000 watts of power was required, while a 1-million neuron Neurogrid chip array is expected to consume less than one watt.
320kW / 55 = 5.818kW per million neurons with a traditional supercomputer.
One watt per million neurons with a Neurogrid chip array.
So if a cat's brain is 1 BILLION neurons, that would require 5818.182kW with a supercomputer and 1kW with the Neurogrid chip array.
A reduction of 5817.182kW.
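For anyone who wants to check that arithmetic, here is the same back-of-the-envelope calculation (a sketch that assumes, as the parent does, that power scales linearly with neuron count):
# Back-of-the-envelope check of the figures above (assumes linear scaling).
supercomputer_watts = 320_000            # 55 million neurons on the supercomputer
neurons_simulated = 55_000_000
neurogrid_watts_per_million = 1.0        # "less than one watt" per million neurons
cat_brain_neurons = 1_000_000_000

watts_per_million = supercomputer_watts / (neurons_simulated / 1e6)
print(f"{watts_per_million / 1000:.3f} kW per million neurons (supercomputer)")              # ~5.818
print(f"{watts_per_million * cat_brain_neurons / 1e6 / 1000:.1f} kW for a cat brain")        # ~5818.2
print(f"{neurogrid_watts_per_million * cat_brain_neurons / 1e6 / 1000:.0f} kW on Neurogrid")  # ~1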
by watanabe ( 27967 ) writes:
The fine letter linked above points out the real problems inherent in calculating this out: actually simulating NEURONS, rather than so-called "neural networks", is really hard, and requires a lot of computing power, plus the development of techniques that are still cutting-edge research. There is no chip array that can do all the (currently not completely specified) simulating of a cat brain at 1 kW.
by Xest ( 935314 ) writes:
Damn, if only we could find such a great source of power!
by Elky Elk ( 1179921 ) writes:
Well, cats presumably.
by hattig ( 47930 ) writes:
Their chip uses 340 transistors to model a neuron, and has 65536 neurons.
That means it has ~22m transistors for neurons, although there are certainly more transistors managing non-neuron aspects.
It looks like it was made on a 130nm - 250nm process, judging by the die size.
Shrink that to 45nm once the technology is proven, and you'll have 8 to 32 times as many neurons in a single chip. That's 512Ki to 2Mi neurons per chip.
A chip makes up a neural cluster, and you use multiple chips to simulate multiple neural clusters,
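Roughly checking those numbers (a sketch; the shrink factor uses the idealized (old/new)^2 area scaling, which real processes only approximate):
# Quick check of the parent's transistor and shrink estimates (illustrative).
transistors_per_neuron = 340
neurons_per_chip = 65_536
print(f"~{transistors_per_neuron * neurons_per_chip / 1e6:.1f}M transistors spent on neurons")  # ~22.3M
for old_nm in (130, 250):
    factor = (old_nm / 45) ** 2          # ideal area scaling down to 45 nm
    print(f"{old_nm} nm -> 45 nm: ~{factor:.0f}x more neurons, "
          f"~{int(neurons_per_chip * factor):,} neurons per chip")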
by afidel ( 530433 ) writes:
Yeah but it's going to require one hell of a crossbar configuration to connect those chips all to each other at decent speeds. Guess someone better pull the SGI and Cray patents out of mothballs.
by Sponge Bath ( 413667 ) writes:
Might want to start with simulating a dog brain to save power. That's what, maybe 5 neurons, 1000 synapses, and half a dog biscuit?
by tmosley ( 996283 ) writes:
Or just one can of cat food. Better get the good stuff, though, she's a bit finicky.
by jabuzz ( 182671 ) writes:
If you have custom silicon to do each neuron then you are going to be hugely more power efficient than a general-purpose processor simulating a neuron in software. There is nothing new there and anyone who thinks otherwise is just clueless. Given IBM have the facilities and resources to fabricate some custom silicon, I fail to see the issue.
by brian0918 ( 638904 ) writes:
Except that individual neurons have tens of thousands of possible connections to other neurons, and continually morph and change those connections. That's impossible to do on a rigid piece of hardware.
by eldavojohn ( 898314 ) * writes:
Except that individual neurons have tens of thousands of possible connections to other neurons, and continually morph and change those connections. That's impossible to do on a rigid piece of hardware.
I'm no expert and I've just been reading the second link's project site on Stanford's page [stanford.edu], but your point that they continually morph and change those connections seems to be addressed by this strategy:
Neurogrid simulates six billion synaptic connections by using local analog communication, another choice motivated by cortical studies. Cortical axons synapse profusely in a local area, course along for a while, then do it again. Thus, nearby neurons receive inputs from largely the same axons, as expected from the map-like organization of cortical areas. Local wires running between neighboring silicon neurons emulate these patches, invoking postsynaptic potentials within a programmable radius. Using a patch radius of 6 lets us increase the number of synaptic connections a hundredfold—from 600 million to six billion—without increasing digital communication.
If they connect most (if not all) possible connections that the morphing/changing synaptic channels can take, then they use a sort of addressing technique and RAM strategy to continually morph and change:
Instead of hardwiring the silicon neurons together, as Mead did in his silicon retina, we softwired them by assigning unique addresses. Every time a spike occurs, the chip outputs that neuron’s address. This address points to a memory location (RAM) that holds the synaptic target’s address, or to multiple memory locations if the neuron has multiple synaptic targets. When this address is fed back into the chip, a post-synaptic potential is triggered at the target. An extremely efficient technique, as the same post-synaptic circuit serves all the synapses that neuron receives—virtual synapses! Encoding, translating, and decoding an address happens fast enough to route several million spikes per second, allowing a million connections to be made among a thousand silicon neurons. These softwires may be rerouted simply by overwriting the RAM’s look-up table, making it possible to specify any desired synaptic connectivity.
Although their page is really hard for a lay person like myself to get through, it's very informative [stanford.edu]. Having read it, I'm considerably more optimistic about the future of biological tissues and nervous systems being translated to hardware. At least people are starting back at the simple components and adding new twists.
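A toy sketch of the "softwired" address-event idea described in the quoted passage (the names and numbers below are made up for illustration; the real Neurogrid implements this routing in mixed analog/digital hardware, not software): each spike emits the firing neuron's address, a RAM look-up table maps that address to its synaptic targets, and rewiring is just overwriting the table.
# Toy model of address-event routing: connectivity lives in a look-up table,
# so "rewiring" is a table update rather than a hardware change. Illustrative only.
from collections import defaultdict

lookup_table = defaultdict(list)         # source address -> list of target addresses
lookup_table[7] = [12, 48, 301]          # neuron 7 currently synapses onto these targets

def on_spike(source_addr, inject_psp):
    """Route one spike: trigger a postsynaptic potential at each stored target."""
    for target_addr in lookup_table[source_addr]:
        inject_psp(target_addr)

def rewire(source_addr, new_targets):
    """'Softwire' new connectivity by overwriting the RAM look-up table."""
    lookup_table[source_addr] = list(new_targets)

on_spike(7, inject_psp=lambda t: print(f"PSP at neuron {t}"))
rewire(7, [5, 99])                       # change neuron 7's targets on the fly
on_spike(7, inject_psp=lambda t: print(f"PSP at neuron {t}"))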
by brian0918 ( 638904 ) writes:
invoking postsynaptic potentials within a programmable radius.
So basically some of the simulation is still software-side, then.
by John Whitley ( 6067 ) writes:
On that theme, it's easy to calculate some reasonable bounds based on actual cat metabolism. Small cats, around 7 lbs., require ~125 kcal/day to maintain body weight. We can use that kcal/day value as a rough bound, which works out to a mighty 6 W. For the whole cat. Granted, that includes a lot of nap time, but it also includes all other metabolic functions.
Obviously, I have no trouble whatsoever believing that it's possible to do better than 320,000 W in simulating a cat brain. Even padding fo
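The kcal-to-watts conversion behind that ~6 W figure, for anyone who wants to verify it (1 kcal is about 4184 J):
# Whole-cat power budget from daily calorie intake (figures from the comment above).
kcal_per_day = 125                       # rough maintenance intake for a ~7 lb cat
seconds_per_day = 24 * 60 * 60
watts = kcal_per_day * 4184 / seconds_per_day
print(f"{watts:.1f} W average for the entire cat, brain included")   # ~6.1 W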
by Anonymous Coward writes:
From the original FA: "The simulation, which runs 100 times slower than an actual cat's brain, is more about watching how thoughts are formed in the brain and how the roughly 1 billion neurons and 10 trillion synapses in a cat's brain work together."
So the most bad-ass computer simulation, assuming it worked (which this guy is saying it probably didn't), was still 100 times slower than a real cat's brain. A real cat's brain also fits inside a tiny furry space the size of a baseball... and it runs on a once-daily small bowl of cat food. We have a long way to go.
by slashchuck ( 617840 ) writes:
... A real cat's brain also fits inside a tiny furry space the size of a baseball...
The brain size of the average cat is 5 centimeters in length and 30 grams. [wikipedia.org]
by Blakey Rat ( 99501 ) writes:
The brain size of the average cat is 5 centimeters in length and 30 grams. ... which is small enough to fit inside a baseball.
So, uh... thanks for correcting the already-correct post? I guess?
by Anonymous Coward writes:
More than this, their simulated neurons aren't anywhere close to the real thing. A real neuron, an individual cell, has tremendous computing power due to the distribution of a bunch of different ion channel types (active conductances) in a highly complex dendritic tree. Simulating a few seconds of just ONE neuron accurately can take several minutes to several hours of supercomputer time. I know this because I do it for a living.
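A rough illustration of why a detailed single-neuron model is so expensive (the numbers below are illustrative assumptions, not the parent's): a morphologically detailed model is split into thousands of compartments, each carrying a handful of voltage and ion-channel gating variables that must be updated every few tens of microseconds.
# Illustrative cost estimate for one biophysically detailed neuron (assumed numbers).
compartments = 5_000                     # segments of the soma and dendritic tree
states_per_compartment = 10              # voltage plus gating variables for several channel types
dt_ms = 0.025                            # integration time step in milliseconds
steps_per_simulated_second = 1_000 / dt_ms               # 40,000 steps
updates = compartments * states_per_compartment * steps_per_simulated_second
print(f"~{updates:.1e} coupled state updates per simulated second")  # ~2e9
# Each update solves nonlinear channel kinetics, so even one cell can take
# minutes of CPU time per simulated second; an ANN "point neuron" is a single
# multiply-accumulate by comparison.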
by fbjon ( 692006 ) writes:
No surprise there. Raytracing a photorealistic scene takes far longer than just bouncing some photons around. Running Windows in a VM makes it really slow compared to running on hardware. This "brain" isn't all that different.
by Neil Hodges ( 960909 ) writes:
...I know this because I do it for a living.
Don't each of our brains do this for a living, too?
by mdielmann ( 514750 ) writes:
...I know this because I do it for a living.
Don't each of our brains do this for a living, too?
I only wish that were true.
by LockeOnLogic ( 723968 ) writes:
Suppose you were an idiot and a member of congress. But I repeat myself.
by IICV ( 652597 ) writes:
I'm a zombie, you insensitive clod!
by ErikZ ( 55491 ) * writes:
It sounds very interesting. Do you know of a good reference for those of us who don't have a Master's in Biology or Comp-Sci?
by Rod Frey ( 1685360 ) writes:
Isn't there value in moving to a higher level of abstraction than a single neuron though? Or simplifying the basic elements for the sake of a tractable broader model?
Simulating a single atom, for example, is reasonably complex: it would be impossible with current computational resources to simulate the electromagnetic properties of a metal if we required accurate simulations of individual atoms. Yet despite ignoring what we know about the atomic models, the higher-level models are very predictive.
Not that we have such predictive, higher-level models for the brain. That's what some researchers are searching for: I'm just suggesting that those models hopefully won't require accurate simulation of individual neurons. That seems to be the pattern in other domains.
by toppavak ( 943659 ) writes:
He's not arguing that it didn't work; he's arguing that they essentially ran a simulation of a large Artificial Neural Network, a relatively trivial task as long as you have a big enough computer behind it. ANNs are essentially points that connect to each other and learn by assigning weights to these various connections; this is the simplest possible way to simulate the behavior of a neuron. The argument is being made that to claim an ANN, regardless of its size, approaches the capabilities of any mammalian brain is simply wrong, and that a true attempt to create such a simulation would need to factor in the stochasticity of ion channels, branchings in neurons and various other biological phenomena that have a tremendous impact on how our brains work.
Without reading more details on the original work, I'm inclined to say that he has a very valid point if they were indeed only running a large ANN model.
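For contrast, here is roughly what one of those "point" units looks like (a generic textbook sketch, not IBM's actual model): a weighted sum of inputs pushed through a nonlinearity, with learning reduced to adjusting the weights.
# Generic ANN point neuron: the entire "state" is the weight vector and bias.
import math

def ann_neuron(inputs, weights, bias=0.0):
    """Weighted sum of inputs passed through a sigmoid nonlinearity."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

print(ann_neuron([0.5, 0.1, 0.9], [0.8, -0.4, 0.3]))
# No ion channels, no dendritic geometry, no stochastic release: all of that
# biology is collapsed into a few real-valued weights.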
by Zackbass ( 457384 ) writes:
Considering how little we know about the emergence of intelligence from networks, how is it possible to claim outright that an ANN can't approach the capabilities of a human brain? Real neurons are vastly more complex and aren't accurately modeled with such simple systems, but we don't have any idea what those complexities have to do with intelligence, so it seems to be quite the leap of faith to make claims on the topic.
by sarkeizen ( 106737 ) writes:
But getting back to the letter from Henry Markram: my reading of the article is that it says a few things: i) that this *isn't a simulation of a cat's brain*, which, regardless of what one believes about intelligence, appears to be correct; ii) that this isn't anything new.
by Xest ( 935314 ) writes:
It basically just seems to be a case of the same old AI arguments we've always heard ever since Turing's day.
The problem is, we don't actually know what the limits of ANNs are; there is no proof that they can't, given ever greater amounts of computing power, allow for the emergence of (at least seemingly) truly intelligent responses to an event.
So on one hand we have the IBM guys overstating what they've achieved, and on the other we have a guy spouting out a view of the limits of ANNs without actually putting any effort into providing evidence for their limitations.
I don't know why, but the AI field has always been horrifically polarised; the kind of arguments you get in that field are just so immature it's beyond belief. You have people in the AI field following their viewpoint religiously, completely unwilling to consider the other viewpoint. To see what I mean, just look up some of the discussions on Searle's Chinese room argument.
If AI scientists spent as much time on research as they did bitching at each other's experiments and theories, we'd have a walking, talking robo-jesus by now that could build worlds.
by LockeOnLogic ( 723968 ) writes:
There's no proof that suggests that it can, either.
by Stradivarius ( 7490 ) writes:
Whilst I think that Blue Brain may be overkill on the complexity side, there is still more complexity required than provided by ANNs.
That depends on what exactly it is you're trying to demonstrate or investigate. If we're trying to probe the nature of "intelligence" (by whatever of the many possible and potentially limited definitions we may use), examining the properties of a huge ANN may provide insight. For example, how much of a brain's abilities could be achieved by such a network if large enough (regardless of efficiency), and how much requires more complex arrangements?
It seems to me like there is some value in doing such simula
by wurp ( 51446 ) writes:
I think we want a system that we can ask to do a complex task in natural language, and which will perform the task, only asking for further instruction when what we've told it is sufficiently ambiguous.
I suspect consciousness will be a byproduct in such a system (as it is in us), but to me, consciousness is not the goal. In fact, if we could achieve it without consciousness, that would be better, since a whole swath of ethical issues in AI go away.
Which reminds me of something else I thought yesterday rega
by L4t3r4lu5 ( 1216702 ) writes:
The simulation, which runs 100 times slower than an actual cat's brain, is more about watching how thoughts are formed in the brain...
What? I can already tell them that!
IF $stomach_contents = 0 THEN ConsumeFood;
IF $claw_count > 0 THEN ScratchShitOutOfFurniture;
IF $Sphincter_Tension > 0 THEN PoopAnywhereYouWant;
IF $TimeSinceSleep < 1800 THEN $TimeSinceSleep = $TimeSinceSleep + 1 ELSE YawnFishBreathInOwnersFaceAndFallAsleepOnComputerChair;
by RevWaldo ( 1186281 ) writes:
This could still be an accurate representation. Cats work in batch mode. They sleep 23 hours a day, during which they think about how they'll spend the hour they are awake. So if they're solely comparing the simulation's processing speed to how cats function in awake mode, it may actually be around four times slower in aggregate. Not too shabby.
by AlpineR ( 32307 ) writes:
So you're saying that a cat queues up its activities for the hour of wakefulness by planning during its hours of sleep? Kind of like the Mars rovers get commands issued from Earth, move a little, and then wait around for another batch? And therefore cats can't respond to any new stimuli during their wakeful hour until they have another sleep cycle to process the new information? Fascinating.
by smooth wombat ( 796938 ) writes:
and you should know that people love retarded animals.
As evidenced by the popularity of "reality" shows.
by Rik Sweeney ( 471717 ) writes:
I bet they just based their simulation on Simon's Cat [youtube.com] which, to be honest, is a pretty accurate representation.
by debatem1 ( 1087307 ) writes:
I don't really see how they would have verified that they were able to simulate a cat's brain. AFAIK, we don't have single-neuron level imaging, and the resolution on FMRI and EEG put those right out. Looking at macro-level behavior would be pretty absurd; I, too, can write a program that will decide to play with yarn. Unless there's something I'm missing, IBM seems to have made a claim it can't support.
by Xest ( 935314 ) writes:
Why do we need single neuron level imaging? The activity of a single neuron really tells us very little. The emergent patterns in the form of brain activity of multiple neurons are what matters. The question is whether we are getting the right responses in this respect from the right set of neurons in reaction to the corresponding trigger.
by debatem1 ( 1087307 ) writes:
The question is whether we are getting the right responses in this respect from the right set of neurons in reaction to the corresponding trigger.
As I see it, there are several problems here.
The first is that we don't really understand neurology all that well; higher-level thought is, for the most part, a mystery to us, so identifying the "right set" isn't really possible for us at this point.
The second is that even if we were able to select the "right set", I don't think we have the imaging technology necessary to distinguish between correct and incorrect states without inducing a margin of error that would qualify our hypothesis out of existenc
by vertinox ( 846076 ) writes:
AFAIK, we don't have single-neuron level imaging, and the resolution on FMRI and EEG put those right out.
Just so that you know... We can get higher resolution on a brain's neurons by invasive means such as cutting the brain apart and looking at live cells slice by slice under a powerful microscope.
It is rather tedious and gruesome but it is a viable way to look at the neurons directly.
It's even been done to humans after they have passed away, but with animals you can sort of get away with doing it while the subj
by debatem1 ( 1087307 ) writes:
You seem to have some knowledge here, so if you don't mind (and will forgive the pun) I'd like to pick your brain about this.
Let's say we have a tabby, an ocelot, and a simulation that we are told models one of the two. Given that we're able to perform any kind of scan or procedure on the two animals, could we determine which species the simulation was, using only that data?
by xtracto ( 837672 ) writes:
So according to this guy's rant letter, the "cat-brain simulation" was nothing more than the simulation of an ANN with X number of neurons, with X equal to the average number of neurons in a cat.
However, it seems the /complexity/ of the simulated neurons is not remotely similar to that of the neurons of a real cat.
With that view, yes, it seems less of a breakthrough. The experiment reminds me of the AI researchers who thought that we could get intelligent machines using a brute-force kind of approach: just add /enough/ knowledge-rules, /enough/ processing power, etc...
by golden age villain ( 1607173 ) writes:
This IBM announcement was just ridiculous. To cite only one argument, the brain does not consist only of neurons. It contains at least as many other cells which are also involved in signal processing. Modha would be laughed at in any neuroscience conference and he certainly doesn't help the cause of theoreticians in the neuroscience field by making such stupid announcements. Eugene Izhikevich, who designed the neuron model being used for these simulations, had a PNAS paper not too long ago modeling the entire human brain, and he did not claim that he successfully modeled the human brain. Plus no one has any clue how the brain really computes, so making a claim about the formation of thoughts is just nonsense.
by radtea ( 464814 ) writes:
Plus no one has any clue how the brain really computes, so making a claim about the formation of thoughts is just nonsense.
Unfortunately, what a certain class of pseudo-scientist has learned is that monkeys in suits are too stupid to know the difference between real, conservative, careful science and over-hyped handwaving. Since we live in a world where monkeys in suits have managed to get almost total control of the corporate system and used that to leverage their way into political power, people who suck
by hiryuu ( 125210 ) writes:
Our world increasingly looks like Frederik Pohl's story "The Marching Morons"...
Not saying this because I know better, but because your mention of the story intrigued me and I hoped to find it, or at least find out more about it. It appears it was written by Cyril M. Kornbluth, a contemporary and good friend of Pohl's.
link [wikipedia.org]
I think I must find this story, as the premise of "Idiocracy" was interesting but the execution seemed, to me, quite flawed.
by agnosticnixie ( 1481609 ) writes:
There's also the number of assemblies we don't know about that have been disregarded early to be brought back on the table (name forgotten) as something that's a) important and b) much much much more powerful than we thought (Kurzweil likened them to RAM, which they are ostensibly not). Oh, and c) we have no real clue how they work because neuroscience isn't even there yet.
by NapalmScatterBrain ( 1288748 ) writes:
IBM has a known history of making overblown claims. This is what happens when you let your PR mesh with your technical research. Deep Blue was a giant PR stunt, and they had humans retooling the code in between matches.
What a crock. When they get a robot that catches mice, purrs, and jumps on the table to eat my burger when I leave the room for 2 seconds, maybe then I'll believe it.
by __aailob1448 ( 541069 ) writes:
I saw that story earlier and dismissed it for the crap that it was. I'd like to thank Henry Markram for vindicating my snap judgment with his flame email.
by yt.rabb at gmail ( 1091047 ) writes:
The only thing missing from that email was his momma.
Hey Modha, your momma's research methodology is so flawed that she puts the hypothesis to be proven as an assumption.
Biatch.
by bellwould ( 11363 ) writes:
My research recently took me to some of Markram's work - the guy is brilliant and REALISTIC. His research goals are simple and attainable and any claims of success he has are *well* within the real world. He's incrementally worked his way up from a few neurons - the way a *real* scientist works; and to him, the simplest "brain simulation" of any sort is definitely possible, but far off in the future.
by SuperBanana ( 662181 ) writes:
Seriously, you haven't posted in 4-5 years, and you jump out to post now? Let me guess, you work in his lab...
by palegray.net ( 1195047 ) writes:
The term "sockpuppet" is usually reserved for someone who posts under multiple accounts. I believe you were looking for "mouthpiece" instead.
by Rogerborg ( 306625 ) writes:
You make a compelling argument. In fact, I won't even bother asking for citations, I'll just ask how I can send money to this scientific demigod. Is cash OK?
by sznupi ( 719324 ) writes:
http://intranet.cs.man.ac.uk/apt/projects/SpiNNaker/ [man.ac.uk]
It seems that, for quite a lot of folks, toying with topology and interconnects is a promising approach.
by Temujin_12 ( 832986 ) writes:
"I used to think that the brain was the most wonderful organ in my body. Then I realized who was telling me this."
by fahrbot-bot ( 874524 ) writes:
It's hard to verify anything because the machine just sits there and ignores everyone.
by Locke2005 ( 849178 ) writes:
Until it can piss on my briefcase because it thinks I've been ignoring it, we have no way of confirming that it is actually simulating a real cat's brain.
by Tetsujin ( 103070 ) writes:
I think people are missing the obvious potential here. I mean, if you could engineer a computer to accurately simulate a cat's brain, then you could implant that computer in a sexy gynoid body, and have a robot-girl with the mind of a cat!
by DynaSoar ( 714234 ) writes:
If pressed, Dr. Boahen himself would contradict the Discover article and say the chips were not "brain like" at all. He's working from the same place Karl Pribram worked from 50 years ago, and Karl still can't say he knows how the brain works. Simulating a process that's assumed to be a part of brain function because it can produce results more effectively and/or efficiently than brute-force digital computing does not make it "brain like". The comparison/contrast done on power consumption doesn't make a cas
by russotto ( 537200 ) writes:
They did manage to simulate a cat brain... but they failed to mention it was a dead cat.
by AlpineR ( 32307 ) writes:
Digital computers are deterministic: Throw the same equation at them a thousand times and they will always spit out the same answer. Throw a question at the brain and it can produce a thousand different answers, canvassed from a chorus of quirky neurons. "The evidence is overwhelming that the brain computes with probability," Sejnowski says. Wishy-washy responses may make life easier in an uncertain world where we do not know which way an errant football will bounce, or whether a growling dog will lunge. Un
by nokiator ( 781573 ) writes:
Matching the neuron count and connection count of a cat brain is clearly not sufficient to simulate the functionality. Neurons in a mammal brain are not randomly connected. A great level of organization happens during the growth of the brain cells and connections, starting from the embryonic stage. Much of the functionality is "hardwired" as a result of this organized growth process, which has evolved over hundreds of millions of years, and for a higher-level mammal like a cat a lot of the functionality is wired
by gestalt_n_pepper ( 991155 ) writes:
Well yes, I've been doing this for years now.
by MeatBag PussRocket ( 1475317 ) writes:
... or maybe you haven't...
by agnosticnixie ( 1481609 ) writes:
You need 10 more points now [nytimes.com]
by geekoid ( 135745 ) writes:
It's not nearly as bad as it seems.
The media doesn't understand what fair and balanced is. They assume every opinion is equal and as valid as facts. They are not.
Media generates controversy and then displays it for all to see. Hence, the perception is that it's all a fight and confusion. This is generally incorrect.
Science marches on and continues to deliver the goods.