Beowulf cluster






From Wikipedia, the free encyclopedia
 


The Borg, a 52-node Beowulf cluster used by the McGill University pulsar group to search for pulsations from binary pulsars
The original Beowulf cluster, built in 1994 by Thomas Sterling and Donald Becker at NASA, comprised 16 white-box desktops, each with an i486 DX4 processor clocked at 100 MHz, a 500 MB hard disk drive, and 16 MB of RAM, giving the cluster roughly 8 GB of disk storage and 256 MB of RAM in total, and a benchmark performance of about 500 MFLOPS.

A Beowulf cluster is a computer cluster of normally identical, commodity-grade computers networked into a small local area network, with libraries and programs installed that allow processing to be shared among them. The result is a high-performance parallel computing cluster built from inexpensive personal computer hardware.

Beowulf originally referred to a specific computer built in 1994 by Thomas Sterling and Donald Becker at NASA.[1] They named it after the Old English epic poem Beowulf.[2]

No particular piece of software defines a cluster as a Beowulf. Typically only free and open-source software is used, both to save cost and to allow customization. Most Beowulf clusters run a Unix-like operating system such as BSD, Linux, or Solaris. Commonly used parallel processing libraries include the Message Passing Interface (MPI) and Parallel Virtual Machine (PVM). Both let the programmer divide a task among a group of networked computers and collect the results. Examples of MPI implementations include Open MPI and MPICH; several others are available.
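MPI and PVM both expose a divide-compute-collect pattern: a root process scatters work to nodes, each node computes independently, and the root gathers the partial results. As an illustration only (real MPI code requires an installed implementation such as Open MPI or MPICH), the same pattern can be sketched with Python's standard multiprocessing module, with local worker processes standing in for cluster nodes:

```python
# A Beowulf-style divide-and-collect computation, sketched with local
# processes. On a real cluster, MPI's scatter/gather operations would
# move the chunks between networked nodes instead.
from multiprocessing import Pool

def partial_sum(chunk):
    """Work done independently on each 'node': a partial sum of squares."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_workers=4):
    # Divide: split the input into one strided chunk per worker.
    chunks = [data[i::n_workers] for i in range(n_workers)]
    # Compute in parallel, then collect and reduce the partial results.
    with Pool(n_workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Equivalent to the serial sum(x * x for x in range(1000)).
    print(parallel_sum_of_squares(range(1000)))
```

The names and chunking scheme here are illustrative; the point is only that each worker needs nothing beyond its own chunk, which is what lets the same pattern scale across cheap networked machines.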

Beowulf systems operate worldwide, chiefly in support of scientific computing. Since 2017, every system on the Top500 list of the world's fastest supercomputers has used Beowulf software methods and a Linux operating system. At this level, however, most are by no means just assemblages of commodity hardware; custom design work is often required for the nodes (often blade servers), the networking and the cooling systems.

Development

Detail of the first Beowulf cluster at Barcelona Supercomputing Center

A description of the Beowulf cluster, from the original "Beowulf HOWTO", published by Jacek Radajewski and Douglas Eadline under the Linux Documentation Project in 1998:[3]

Beowulf is a multi-computer architecture which can be used for parallel computations. It is a system which usually consists of one server node, and one or more client nodes connected via Ethernet or some other network. It is a system built using commodity hardware components, like any PC capable of running a Unix-like operating system, with standard Ethernet adapters, and switches. It does not contain any custom hardware components and is trivially reproducible. Beowulf also uses commodity software like the FreeBSD, Linux or Solaris operating system, Parallel Virtual Machine (PVM) and Message Passing Interface (MPI). The server node controls the whole cluster and serves files to the client nodes. It is also the cluster's console and gateway to the outside world. Large Beowulf machines might have more than one server node, and possibly other nodes dedicated to particular tasks, for example consoles or monitoring stations. In most cases, client nodes in a Beowulf system are dumb, the dumber the better. Nodes are configured and controlled by the server node, and do only what they are told to do. In a disk-less client configuration, a client node doesn't even know its IP address or name until the server tells it.

One of the main differences between Beowulf and a Cluster of Workstations (COW) is that Beowulf behaves more like a single machine rather than many workstations. In most cases client nodes do not have keyboards or monitors, and are accessed only via remote login or possibly serial terminal. Beowulf nodes can be thought of as a CPU + memory package which can be plugged into the cluster, just like a CPU or memory module can be plugged into a motherboard.

Beowulf is not a special software package, new network topology, or the latest kernel hack. Beowulf is a technology of clustering computers to form a parallel, virtual supercomputer. Although there are many software packages such as kernel modifications, PVM and MPI libraries, and configuration tools which make the Beowulf architecture faster, easier to configure, and much more usable, one can build a Beowulf class machine using a standard Linux distribution without any additional software. If you have two networked computers which share at least the /home file system via NFS, and trust each other to execute remote shells (rsh), then it could be argued that you have a simple, two node Beowulf machine.
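The minimal two-node machine described above amounts to a few lines of configuration. A sketch, with hypothetical hostnames node1 and node2 and a shared user account (file paths are standard, but the specific options and names are illustrative, not prescriptive):

```shell
## On node1 (the server): export /home over NFS.
# /etc/exports
/home  node2(rw,sync,no_subtree_check)

## On node2 (the client): mount the shared /home.
# /etc/fstab
node1:/home  /home  nfs  defaults  0  0

## On both nodes: trust each other for remote shells.
# ~/.rhosts (read by the classic rsh; ssh with shared keys is the
# modern equivalent)
node1
node2

## Sanity check from node2: if this runs without prompting for a
## password, the two machines arguably form a two-node Beowulf.
# rsh node1 hostname
```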

Operating systems

A home-built Beowulf cluster composed of white box PCs

As of 2014, a number of Linux distributions, and at least one BSD, are designed for building Beowulf clusters. These include:

The following are no longer maintained:

A cluster can be set up by booting Knoppix CDs in combination with OpenMosix. The computers automatically link together, without the need for complex configuration, to form a Beowulf cluster using all CPUs and RAM in the cluster. A Beowulf cluster is scalable to a nearly unlimited number of computers, limited only by network overhead.

Provisioning of operating systems and other software for a Beowulf cluster can be automated with software such as Open Source Cluster Application Resources (OSCAR), which installs on top of a standard installation of a supported Linux distribution on a cluster's head node.

See also

References

  1. ^ Becker, Donald J.; Sterling, Thomas; Savarese, Daniel; Dorband, John E.; Ranawake, Udaya A.; Packer, Charles V. (1995). "BEOWULF: A parallel workstation for scientific computation". Proceedings, International Conference on Parallel Processing. 95.
  2. ^ See Francis Barton Gummere's 1909 translation, reprinted (for example) in Beowulf. Translated by Francis B. Gummere. Hayes Barton Press. 1909. p. 20. ISBN 9781593773700. Retrieved 2014-01-16.
  3. ^ Radajewski, Jacek; Eadline, Douglas (22 November 1998). "Beowulf HOWTO". ibiblio.org. v1.1.1. Retrieved 8 June 2021.
Bibliography

External links


    Retrieved from "https://en.wikipedia.org/w/index.php?title=Beowulf_cluster&oldid=1189158357"


    This page was last edited on 10 December 2023, at 02:35 (UTC).
