On Intelligence






From Wikipedia, the free encyclopedia
 


On Intelligence: How a New Understanding of the Brain will Lead to the Creation of Truly Intelligent Machines

Authors: Jeff Hawkins and Sandra Blakeslee
Language: English
Subject: Psychology
Publisher: Times Books
Publication date: 2004
Publication place: United States
Media type: Paperback
Pages: 272
ISBN: 0-8050-7456-2
OCLC: 55510125
Dewey Decimal: 612.8/2 22
LC Class: QP376 .H294 2004

On Intelligence: How a New Understanding of the Brain will Lead to the Creation of Truly Intelligent Machines is a 2004 book[1] by Jeff Hawkins and Sandra Blakeslee. The book explains Hawkins' memory-prediction framework theory of the brain and describes some of its consequences.

The theory

Hawkins' basic idea is that the brain is a mechanism for predicting the future: specifically, hierarchical regions of the brain predict their future input sequences. These predictions may not always reach far into the future, but they reach far enough to be of real use to an organism. As such, the brain is a feed-forward hierarchical state machine with special properties that enable it to learn.[1]: 208–210, 222

This state machine also controls the behavior of the organism. Because it is feed-forward, it responds to future events predicted from past data.

The hierarchy is capable of memorizing frequently observed sequences of patterns (cognitive modules) and developing invariant representations. Higher levels of the cortical hierarchy predict the future on a longer time scale, or over a wider range of sensory input. Lower levels interpret or control limited domains of experience, or sensory or effector systems. Connections from the higher-level states predispose some selected transitions in the lower-level state machines.
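The state-machine analogy above can be made concrete with a toy sketch. This is not Hawkins' cortical algorithm; it is only a minimal, first-order illustration of a "region" that memorizes observed transitions between input patterns and uses them to predict the next input:

```python
# Illustrative sketch only: a toy "region" that memorizes observed
# transitions between input patterns and predicts the next one.
# This is NOT Hawkins' actual cortical learning algorithm, just a
# minimal first-order state-machine analogy of sequence memory.
from collections import defaultdict

class ToyRegion:
    def __init__(self):
        # transition counts: current pattern -> {next pattern: count}
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def observe(self, pattern):
        # learn the transition from the previous pattern to this one
        if self.prev is not None:
            self.transitions[self.prev][pattern] += 1
        self.prev = pattern

    def predict(self):
        # predict the most frequently seen successor of the current pattern
        nexts = self.transitions.get(self.prev)
        if not nexts:
            return None
        return max(nexts, key=nexts.get)

region = ToyRegion()
for p in ["A", "B", "C", "A", "B", "C", "A"]:
    region.observe(p)
print(region.predict())  # after seeing "A", predicts "B"
```

A real hierarchy would stack such regions, with higher regions learning sequences of the names of lower-level sequences; this sketch shows only a single level.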

Hebbian learning is part of the framework: learning physically alters neurons and their connections as it takes place.[1]: 48, 164

Vernon Mountcastle's formulation of a cortical column is a basic element in the framework. Hawkins places particular emphasis on the role of the interconnections from peer columns, and the activation of columns as a whole. He strongly implies that a column is the cortex's physical representation of a state in a state machine.[1]: 50, 51, 55 

Because Hawkins' motivation is to create intelligent machines, he reasons as an engineer: a specific failure to find some process of his framework occurring in nature does not signal a fault in the memory-prediction framework per se, but merely that nature has performed Hawkins' functional decomposition in a different, unexpected way. For example, for the purposes of the framework, nerve impulses can be taken to form a temporal sequence; phase encoding could be one possible implementation of such a sequence, but these details are immaterial to the framework.

Predictions of the theory of the memory-prediction framework

Hawkins uses the visual system as a prototype for some example predictions, such as Predictions 2, 8, 10, and 11. Other predictions cite the auditory system (Predictions 1, 3, 4, and 7).

Enhanced neural activity in anticipation of a sensory event

1. In all areas of cortex, Hawkins (2004) predicts "we should find anticipatory cells", cells that fire in anticipation of a sensory event.

Note: As of 2005, mirror neurons had been observed to fire before an anticipated event.[2]

Spatially specific prediction

2. In primary sensory cortex, Hawkins predicts, for example, "we should find anticipatory cells in or near V1, at a precise location in the visual field (the scene)". It has been experimentally determined, for example, that after mapping the angular positions of some objects in the visual field, there is a one-to-one correspondence between cells in the scene and the angular positions of those objects. Hawkins predicts that when the features of a visual scene are known in a memory, anticipatory cells should fire before the actual objects are seen in the scene.

Prediction should stop propagating in the cortical column at layers 2 and 3

3. In layers 2 and 3, predictive activity (neural firing) should stop propagating at specific cells, corresponding to a specific prediction. Hawkins does not rule out anticipatory cells in layers 4 and 5.

"Name cells" at layers 2 and 3 should preferentially connect to layer 6 cells of cortex

4. Learned sequences of firings comprise a representation of temporally constant invariants. Hawkins calls the cells which fire in this sequence "name cells". Hawkins suggests that these name cells are in layer 2, physically adjacent to layer 1. Hawkins does not rule out the existence of layer 3 cells with dendrites in layer 1, which might perform as name cells.

"Name cells" should remain ON during a learned sequence

5. By definition, a temporally constant invariant will be active during a learned sequence. Hawkins posits that these cells will remain active for the duration of the learned sequence, even if the remainder of the cortical column is shifting state. Since we do not know the encoding of the sequence, we do not yet know the definition of "ON" or "active"; Hawkins suggests that the ON pattern may be as simple as a simultaneous AND (i.e., the name cells simultaneously "light up") across an array of name cells.

See Neural ensemble#Encoding for grandmother neurons, which perform this type of function.
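The "simultaneous AND" reading of the name-cell code is simple enough to state directly in code. This is only a sketch of that one suggested encoding, not a claim about how cortex actually signals an invariant:

```python
# Sketch of the "simultaneous AND" reading of a name-cell code: the
# invariant representation counts as ON only while every cell in its
# name-cell array is firing, even as lower-level state keeps changing.
# This illustrates one suggested encoding, not an established one.
def name_code_on(cell_states):
    """True iff all cells in the name-cell array are currently active."""
    return all(cell_states)

print(name_code_on([True, True, True]))   # ON: invariant is active
print(name_code_on([True, False, True]))  # OFF: the pattern is broken
```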

"Exception cells" should remain OFF during a learned sequence

6. Hawkins' novel prediction is that certain cells are inhibited during a learned sequence. A class of cells in layers 2 and 3 should NOT fire during a learned sequence; the axons of these "exception cells" should fire only if a local prediction is failing. This prevents flooding the brain with the usual sensations, leaving only exceptions for post-processing.
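The exception-cell idea amounts to a mismatch gate: stay silent while input matches the local prediction, and pass an event upward only when the prediction fails. A hypothetical sketch of that gating logic:

```python
# Hypothetical sketch of the "exception cell" idea: a unit that stays
# silent while the input matches the local prediction, and fires only
# on a mismatch, so only unanticipated events propagate upward.
def exception_cell(predicted, actual):
    """Return the event only when the prediction fails; None otherwise."""
    return None if predicted == actual else actual

learned = ["A", "B", "C"]    # the locally predicted sequence
observed = ["A", "B", "X"]   # actual input, with one surprise
propagated = [exception_cell(p, a) for p, a in zip(learned, observed)]
print(propagated)  # [None, None, 'X'] -- only the surprise is passed on
```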

"Exception cells" should propagate unanticipated events

7. If an unusual event occurs (the learned sequence fails), the "exception cells" should fire, propagating up the cortical hierarchy to the hippocampus, the repository of new memories.

"Aha! cells" should trigger predictive activity

8. Hawkins predicts a cascade of predictions when recognition occurs, propagating down the cortical column (with each saccade of the eye over a learned scene, for example).

Pyramidal cells should detect coincidences of synaptic activity on thin dendrites

9. Pyramidal cells should be capable of detecting coincident events on thin dendrites, even for a neuron with thousands of synapses. Hawkins posits a temporal window (presuming time-encoded firing) which is necessary for his theory to remain viable.
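The temporal-window requirement can be illustrated with a toy coincidence detector over spike arrival times. The window length and spike counts below are made-up values, not figures from the book:

```python
# Toy coincidence detector: given spike arrival times (in ms) on one
# dendritic segment, report whether at least `k` spikes fall within a
# sliding temporal window. Window length and threshold are made up
# for illustration; they are not parameters from Hawkins' theory.
def detects_coincidence(spike_times, window=5.0, k=3):
    times = sorted(spike_times)
    # check every run of k consecutive spikes for a tight cluster
    for i in range(len(times) - k + 1):
        if times[i + k - 1] - times[i] <= window:
            return True
    return False

print(detects_coincidence([1.0, 2.5, 3.0, 40.0]))  # True: 3 spikes within 5 ms
print(detects_coincidence([1.0, 20.0, 40.0]))      # False: no tight cluster
```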

Learned representations move down the cortical hierarchy, with training

10. Hawkins posits, for example, that if the inferotemporal (IT) level has learned a sequence, eventually cells in V4 will also learn the sequence.

"Name cells" exist in all regions of cortex

11. Hawkins predicts that "name cells" will be found in all regions of the cortex.


References

1. Hawkins, Jeff (2004). On Intelligence (1st ed.). Times Books. pp. 272. ISBN 978-0805074567.
2. Fogassi, Leonardo; Ferrari, Pier Francesco; Gesierich, Benno; Rozzi, Stefano; Chersi, Fabian; Rizzolatti, Giacomo (April 29, 2005). "Parietal lobe: from action organization to intention understanding". Science. 308 (5722): 662–667. Bibcode:2005Sci...308..662F. doi:10.1126/science.1106138. PMID 15860620. S2CID 5720234.

    Retrieved from "https://en.wikipedia.org/w/index.php?title=On_Intelligence&oldid=1220031572"

    Categories: 
    2004 non-fiction books
    Non-fiction books about Artificial intelligence
    Books about human intelligence



    This page was last edited on 21 April 2024, at 11:46 (UTC).
