Computational RAM






From Wikipedia, the free encyclopedia
 


Computational RAM (C-RAM) is random-access memory with processing elements integrated on the same chip. This enables C-RAM to be used as a SIMD computer. It can also be used to make more efficient use of the memory bandwidth available within a memory chip. The general technique of performing computations in memory is called processing-in-memory (PIM).
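
The SIMD aspect can be pictured with a small software model. The following Python sketch is purely illustrative (the class and operation names are invented for this example, not taken from any real C-RAM interface): it pairs each memory word with a tiny processing element and broadcasts one instruction to all of them at once, so the data being operated on never leaves the chip.

    # Minimal, hypothetical model of the C-RAM idea: every memory word has a
    # tiny processing element (PE) beside it, and one broadcast instruction is
    # executed by all PEs in lockstep (SIMD).
    class CRAMSketch:
        def __init__(self, num_words):
            self.memory = [0] * num_words   # the DRAM array, one word per PE
            self.pe_reg = [0] * num_words   # one accumulator register per PE

        def broadcast(self, op, operand=None):
            """Apply the same operation at every word position (simulated in a loop)."""
            for i in range(len(self.memory)):
                if op == "load":             # each PE reads its local word
                    self.pe_reg[i] = self.memory[i]
                elif op == "add_const":      # each PE adds a broadcast constant
                    self.pe_reg[i] += operand
                elif op == "store":          # each PE writes back to its local word
                    self.memory[i] = self.pe_reg[i]

    cram = CRAMSketch(8)
    cram.memory = [1, 2, 3, 4, 5, 6, 7, 8]
    cram.broadcast("load")
    cram.broadcast("add_const", 10)
    cram.broadcast("store")
    print(cram.memory)                       # [11, 12, 13, 14, 15, 16, 17, 18]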

Overview

The most influential implementations of computational RAM came from The Berkeley IRAM Project. Vector IRAM (V-IRAM) combines DRAM with a vector processor integrated on the same chip.[1]

Reconfigurable Architecture DRAM (RADram) is DRAM with reconfigurable computing FPGA logic elements integrated on the same chip.[2] SimpleScalar simulations show that RADram (in a system with a conventional processor) can give orders of magnitude better performance on some problems than traditional DRAM (in a system with the same processor).

Some embarrassingly parallel computational problems are already limited by the von Neumann bottleneck between the CPU and the DRAM. Some researchers expect that, for the same total cost, a machine built from computational RAM will run orders of magnitude faster than a traditional general-purpose computer on these kinds of problems.[3]
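
A rough, illustrative calculation makes the bottleneck concrete; the figures below are assumed round numbers for a generic system, not measurements of any particular machine.

    # Why an element-wise workload is limited by the CPU-DRAM link rather than
    # by the CPU itself (all numbers are assumed, order-of-magnitude figures).
    peak_ops     = 100e9   # assumed CPU peak: 100 billion operations per second
    dram_bw      = 10e9    # assumed off-chip memory bandwidth: 10 GB/s
    bytes_per_op = 12      # c[i] = a[i] + b[i] on 4-byte values: read 8 B, write 4 B

    bandwidth_limit = dram_bw / bytes_per_op
    print(f"compute-bound limit:   {peak_ops:.1e} ops/s")
    print(f"bandwidth-bound limit: {bandwidth_limit:.1e} ops/s")
    # The memory-bound limit is roughly two orders of magnitude below the CPU
    # peak, which is why performing the work inside the memory chip, where the
    # internal row bandwidth is far higher, can help this class of problem.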

As of 2011, the "DRAM process" (few layers, optimized for high capacitance) and the "CPU process" (optimized for high frequency, typically with about twice as many BEOL layers as DRAM; since each additional layer reduces yield and increases manufacturing cost, such chips are relatively expensive per square millimeter compared to DRAM) are distinct enough that there are three approaches to computational RAM:

• Start with a CPU-optimized process and a design that already uses a large amount of embedded SRAM, and add a process step so that the SRAM can be replaced by denser embedded DRAM (eDRAM).
• Start with a system that has separate CPU and DRAM chips, and add a small amount of "coprocessor" logic to the DRAM, within the limits of the DRAM process, to accelerate operations that would otherwise be throttled by the narrow bottleneck between the CPU and the DRAM (for example, zero-filling or copying large blocks of memory).
• Start with a DRAM-optimized process and build a relatively low-frequency, but low-power and very high-bandwidth, general-purpose CPU within the limits of that process. CPUs designed to be built on a DRAM process technology (rather than a "CPU" or "logic" process technology specifically optimized for CPUs) include those of the Berkeley IRAM Project, TOMI Technology[4][5] and the AT&T DSP1.

Because a memory bus to off-chip memory has many times the capacitance of an on-chip memory bus, a system with separate DRAM and CPU chips can consume several times as much energy as an IRAM system with the same computing performance.[1]
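
The effect can be illustrated with the usual dynamic-switching estimate, in which the energy per bus transition scales with capacitance times voltage squared; the capacitance and voltage values below are assumed, order-of-magnitude figures chosen only to show the ratio.

    # Illustrative energy comparison for driving one bit across a bus,
    # using E ~ C * V^2 per transition (assumed, not measured, values).
    V          = 1.5        # bus supply voltage in volts (assumed)
    C_off_chip = 20e-12     # pin + package + board trace: tens of picofarads (assumed)
    C_on_chip  = 0.5e-12    # on-chip wire: a fraction of a picofarad (assumed)

    e_off = C_off_chip * V ** 2
    e_on  = C_on_chip * V ** 2
    print(f"off-chip: {e_off:.1e} J/bit  on-chip: {e_on:.1e} J/bit  "
          f"ratio: ~{e_off / e_on:.0f}x")
    # With these assumptions each off-chip bit transfer costs tens of times
    # more energy than the same transfer kept on-chip.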

Because computational DRAM is expected to run hotter than traditional DRAM, and because higher chip temperatures cause faster charge leakage from the DRAM storage cells, computational DRAM is expected to require more frequent refresh.[2]
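
As an illustration, standard DDR3/DDR4 devices already double the refresh rate when the device temperature rises above 85 °C; the sketch below uses that convention plus an assumed per-refresh busy time to show how the overhead grows.

    # How temperature-driven refresh eats into available memory time.
    # tREFI values follow the common DDR convention (halved above 85 C);
    # the tRFC figure is an assumed, typical value for a large device.
    t_rfc         = 350e-9   # time one refresh command keeps the device busy (assumed)
    t_refi_normal = 7.8e-6   # average refresh interval at or below 85 C
    t_refi_hot    = 3.9e-6   # refresh interval above 85 C (refresh rate doubled)

    for label, t_refi in (("<= 85 C", t_refi_normal), ("> 85 C", t_refi_hot)):
        print(f"{label}: ~{t_rfc / t_refi:.1%} of device time spent refreshing")
    # A hotter computational DRAM therefore loses roughly twice as much time
    # (and bandwidth) to refresh as a cooler, conventional part.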

Processor-in-/near-memory

Processor-in-/near-memory (PINM) refers to a computer processor (CPU) tightly coupled to memory, generally on the same silicon chip.

The chief goal of merging the processing and memory components in this way is to reduce memory latency and increase bandwidth. Reducing the distance that data needs to be moved also reduces the power requirements of the system.[6] Much of the complexity (and hence power consumption) in current processors stems from strategies for avoiding memory stalls.

Examples

In the 1980s, a tiny CPU that executed FORTH was fabricated into a DRAM chip to speed up PUSH and POP. Because FORTH is a stack-oriented programming language, accelerating these two stack operations improved its efficiency.
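
Because nearly every FORTH word manipulates the top of the stack, PUSH and POP dominate execution, and keeping the stack inside the memory array lets each of them complete with a single local access instead of a transfer over the external bus. The toy model below is invented for illustration and does not describe the actual 1980s chip.

    # Toy model of a FORTH-style stack kept directly in the memory array,
    # with the stack pointer maintained beside it (illustrative only).
    class StackInMemory:
        def __init__(self, size):
            self.mem = [0] * size   # the DRAM array holding the stack
            self.sp = 0             # stack pointer kept next to the array

        def push(self, value):      # PUSH: one local write, no off-chip traffic
            self.mem[self.sp] = value
            self.sp += 1

        def pop(self):              # POP: one local read, no off-chip traffic
            self.sp -= 1
            return self.mem[self.sp]

    s = StackInMemory(16)
    s.push(2)
    s.push(3)
    print(s.pop() + s.pop())        # 5 -- FORTH's "+" consumes the top two cells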

The transputer, designed in the early 1980s, also had a comparatively large on-chip memory, making it essentially a processor-in-memory.

Notable PIM projects include the Berkeley IRAM project (IRAM) at the University of California, Berkeley[7] and the University of Notre Dame PIM effort.[8]

DRAM-based PIM Taxonomy

DRAM-based near-memory and in-memory designs can be categorized into four groups:

• DIMM-level approaches place the processing units near the memory chips and require minimal or no change to the memory data layout and interface.[9][10]
• Logic-layer-level approaches embed the processing units in the logic layer of 3D-stacked memories and can benefit from the high internal bandwidth of such stacks.[11]
• Bank-level approaches place processing units inside the memory layers, next to each bank; UPMEM's and Samsung's PIM devices are examples of this approach.[12]
• Subarray-level approaches process data inside each subarray. They provide the highest access parallelism but typically perform only simple operations, such as bitwise operations, on an entire memory row.[13][14]

See also

References

  1. Christoforos E. Kozyrakis, Stylianos Perissakis, David Patterson, Thomas Anderson, et al. "Scalable Processors in the Billion-Transistor Era: IRAM". IEEE Computer. 1997. Says: "Vector IRAM ... can operate as a parallel built-in self-test engine for the memory array, significantly reducing the DRAM testing time and the associated cost."
  2. Mark Oskin, Frederic T. Chong, and Timothy Sherwood. "Active Pages: A Computation Model for Intelligent Memory". 1998. Archived 2017-09-22 at the Wayback Machine.
  3. Daniel J. Bernstein. "Historical notes on mesh routing in NFS". 2002. "programming a computational RAM"
  4. "TOMI the milliwatt microprocessor" [permanent dead link]
  5. Yong-Bin Kim and Tom W. Chen. "Assessing Merged DRAM/Logic Technology". 1998. Archived from the original (PDF) on 2011-07-25. Retrieved 2011-11-27.
  6. "GYRFALCON STARTS SHIPPING AI CHIP". electronics-lab. 2018-10-10. Retrieved 5 December 2018.
  7. IRAM
  8. "PIM". Archived from the original on 2015-11-09. Retrieved 2015-05-26.
  9. Hadi Asghari-Moghaddam, et al. "Chameleon: Versatile and practical near-DRAM acceleration architecture for large memory systems".
  10. Liu Ke, et al. "RecNMP: Accelerating Personalized Recommendation with Near-Memory Processing".
  11. Dongping Zhang, et al. "TOP-PIM: Throughput-oriented programmable processing in memory".
  12. Sukhan Lee, et al. "Hardware Architecture and Software Stack for PIM Based on Commercial DRAM Technology: Industrial Product".
  13. Shuangchen Li, et al. "DRISA: A DRAM-based reconfigurable in-situ accelerator".
  14. Marzieh Lenjani, et al. "Fulcrum: A Simplified Control and Access Mechanism toward Flexible and Practical In-situ Accelerators".
Bibliography

