OpenMP






Original author(s): OpenMP Architecture Review Board[1]
Developer(s): OpenMP Architecture Review Board[1]
Stable release: 5.2 / November 2021
Operating system: Cross-platform
Platform: Cross-platform
Type: Extension to C, C++, and Fortran; API
License: Various[2]
Website: openmp.org

OpenMP (Open Multi-Processing) is an application programming interface (API) that supports multi-platform shared-memory multiprocessing programming in C, C++, and Fortran,[3] on many platforms, instruction-set architectures and operating systems, including Solaris, AIX, FreeBSD, HP-UX, Linux, macOS, and Windows. It consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior.[2][4][5]

OpenMP is managed by the nonprofit technology consortium OpenMP Architecture Review Board (or OpenMP ARB), jointly defined by a broad swath of leading computer hardware and software vendors, including Arm, AMD, IBM, Intel, Cray, HP, Fujitsu, Nvidia, NEC, Red Hat, Texas Instruments, and Oracle Corporation.[1]

OpenMP uses a portable, scalable model that gives programmers a simple and flexible interface for developing parallel applications for platforms ranging from the standard desktop computer to the supercomputer.

An application built with the hybrid model of parallel programming can run on a computer cluster using both OpenMP and Message Passing Interface (MPI), such that OpenMP is used for parallelism within a (multi-core) node while MPI is used for parallelism between nodes. There have also been efforts to run OpenMP on software distributed shared memory systems,[6] to translate OpenMP into MPI[7][8] and to extend OpenMP for non-shared memory systems.[9]

Design

An illustration of multithreading where the primary thread forks off a number of threads which execute blocks of code in parallel

OpenMP is an implementation of multithreading, a method of parallelizing whereby a primary thread (a series of instructions executed consecutively) forks a specified number of sub-threads and the system divides a task among them. The threads then run concurrently, with the runtime environment allocating threads to different processors.

The section of code that is meant to run in parallel is marked accordingly, with a compiler directive that causes the threads to form before the section is executed.[3] Each thread has an ID, which can be obtained with the function omp_get_thread_num(). The thread ID is an integer; the primary thread has an ID of 0. After the execution of the parallelized code, the threads join back into the primary thread, which continues onward to the end of the program.
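As a minimal sketch of this fork-join behavior (not part of the original article; the order of output lines is nondeterministic):

#include <stdio.h>
#include <omp.h>

int main(void)
{
    #pragma omp parallel   // fork: a team of threads executes this block
    {
        int id = omp_get_thread_num();       // this thread's ID; the primary thread is 0
        int total = omp_get_num_threads();   // size of the current team
        printf("Thread %d of %d\n", id, total);
    }   // implicit barrier: the threads join back into the primary thread here
    return 0;
}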

By default, each thread executes the parallelized section of code independently. Work-sharing constructs can be used to divide a task among the threads so that each thread executes its allocated part of the code. Both task parallelism and data parallelism can be achieved using OpenMP in this way.

The runtime environment allocates threads to processors depending on usage, machine load and other factors. The runtime environment can assign the number of threads based on environment variables, or the code can do so using functions. The OpenMP functions are included in a header file labelled omp.h in C/C++.

History

The OpenMP Architecture Review Board (ARB) published its first API specification, OpenMP for Fortran 1.0, in October 1997. In October of the following year it released the C/C++ standard. Version 2.0 of the Fortran specification appeared in 2000, followed by version 2.0 of the C/C++ specification in 2002. Version 2.5, released in 2005, is a combined C/C++/Fortran specification.[citation needed]

Up to version 2.0, OpenMP primarily specified ways to parallelize highly regular loops, as they occur in matrix-oriented numerical programming, where the number of iterations of the loop is known at entry time. This was recognized as a limitation, and various task parallel extensions were added to implementations. In 2005, an effort to standardize task parallelism was formed, which published a proposal in 2007, taking inspiration from task parallelism features in Cilk, X10 and Chapel.[10]

Version 3.0 was released in May 2008. Among the new features in 3.0 are the concept of tasks and the task construct,[11] significantly broadening the scope of OpenMP beyond the parallel loop constructs that made up most of OpenMP 2.0.[12]

Version 4.0 of the specification was released in July 2013.[13] It adds or improves the following features: support for accelerators; atomics; error handling; thread affinity; tasking extensions; user-defined reductions; SIMD support; and Fortran 2003 support.[14][full citation needed]

The current version is 5.2, released in November 2021.[15]

Version 6.0 is due for release in 2024.[16]

Note that not all compilers (and operating systems) support the full feature set of the latest version(s).

Core elements

Chart of OpenMP constructs

The core elements of OpenMP are the constructs for thread creation, workload distribution (work sharing), data-environment management, thread synchronization, user-level runtime routines and environment variables.

In C/C++, OpenMP uses #pragmas. The OpenMP-specific pragmas are listed below.

Thread creation

The pragma omp parallel is used to fork additional threads to carry out the work enclosed in the construct in parallel. The original thread is denoted the master thread (called the primary thread since OpenMP 5.1) and has thread ID 0.

Example (C program): Display "Hello, world." using multiple threads.

#include <stdio.h>
#include <omp.h>

int main(void)
{
    #pragma omp parallel
    printf("Hello, world.\n");
    return 0;
}

Use flag -fopenmp to compile using GCC:

$ gcc -fopenmp hello.c -o hello

Output on a computer with two cores, and thus two threads:

Hello, world.
Hello, world.

However, the output may also be garbled because of a race condition caused by the two threads sharing the standard output.

Hello, wHello, woorld.
rld.

Whether printf is atomic depends on the underlying implementation,[17] unlike C++11's std::cout, which is thread-safe by default.[18]
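If interleaving must be prevented regardless of the C library's behavior, the output statement can be wrapped in OpenMP's critical construct, which allows only one thread at a time into the enclosed block. A minimal sketch:

#include <stdio.h>

int main(void)
{
    #pragma omp parallel
    {
        #pragma omp critical   // at most one thread executes this block at a time
        printf("Hello, world.\n");
    }
    return 0;
}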

Work-sharing constructs

Work-sharing constructs are used to specify how to assign independent work to one or all of the threads.

Example: initialize the value of a large array in parallel, using each thread to do part of the work

int main(int argc, char **argv)
{
    int a[100000];

    #pragma omp parallel for
    for (int i = 0; i < 100000; i++) {
        a[i] = 2 * i;
    }

    return 0;
}

This example is embarrassingly parallel: each iteration depends only on the value of i. The OpenMP parallel for directive tells the OpenMP runtime to split this task among its worker threads. Each thread receives a unique, private version of the loop variable.[19] For instance, with two worker threads, one thread might be handed a version of i that runs from 0 to 49999 while the second gets a version running from 50000 to 99999.
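The for loop is not the only work-sharing construct. As a hedged illustration (not from the original article), sections assigns each enclosed block to one thread of the team, and single runs a block on exactly one thread while the rest wait:

#include <stdio.h>
#include <omp.h>

int main(void)
{
    #pragma omp parallel
    {
        #pragma omp sections   // each section is executed by one thread
        {
            #pragma omp section
            printf("Section A on thread %d\n", omp_get_thread_num());

            #pragma omp section
            printf("Section B on thread %d\n", omp_get_thread_num());
        }

        #pragma omp single     // exactly one thread runs this; the others wait at the implicit barrier
        printf("Single block on thread %d\n", omp_get_thread_num());
    }
    return 0;
}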

Variant directives

Variant directives are one of the major features introduced in the OpenMP 5.0 specification to help programmers improve performance portability. They enable adaptation of OpenMP pragmas and user code at compile time. The specification defines traits to describe active OpenMP constructs, execution devices, and functionality provided by an implementation; context selectors based on the traits and user-defined conditions; and the metadirective and declare variant directives for users to program the same code region with directive variants.

The mechanism provided by these two directives for selecting variants is more convenient to use than C/C++ preprocessing, since it directly supports variant selection in OpenMP and allows an OpenMP compiler to analyze and determine the final directive from the variants and context.

// code adaptation using preprocessing directives

int v1[N], v2[N], v3[N];
#if defined(nvptx)
  #pragma omp target teams distribute parallel for map(to:v1,v2) map(from:v3)
  for (int i = 0; i < N; i++)
      v3[i] = v1[i] * v2[i];
#else
  #pragma omp target parallel for map(to:v1,v2) map(from:v3)
  for (int i = 0; i < N; i++)
      v3[i] = v1[i] * v2[i];
#endif

// code adaptation using metadirective in OpenMP 5.0

int v1[N], v2[N], v3[N];
#pragma omp target map(to:v1,v2) map(from:v3)
  #pragma omp metadirective \
      when(device={arch(nvptx)}: target teams distribute parallel for) \
      default(target parallel for)
  for (int i = 0; i < N; i++)
      v3[i] = v1[i] * v2[i];

Clauses

Since OpenMP is a shared-memory programming model, most variables in OpenMP code are visible to all threads by default. But sometimes private variables are necessary to avoid race conditions, and values need to be passed between the sequential part and the parallel region (the code block executed in parallel). Data-environment management is therefore provided through data-sharing attribute clauses, appended to the OpenMP directive. The different types of clauses are listed below; a sketch combining several of them follows the list.

Data sharing attribute clauses
Synchronization clauses
Scheduling clauses
IF control
Initialization
Data copying
Reduction
Others
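As a hedged example (not from the original article) of how several clause types combine on one directive:

#include <stdio.h>

int main(void)
{
    int n = 1000;
    int sum = 0;
    int scratch;

    // shared: n is visible to all threads (the default here, made explicit)
    // private: each thread gets its own uninitialized copy of scratch
    // reduction: per-thread partial sums are combined with + at the end
    // schedule(static): iterations are divided into equal contiguous chunks
    #pragma omp parallel for shared(n) private(scratch) reduction(+:sum) schedule(static)
    for (int i = 0; i < n; i++) {
        scratch = 2 * i;       // assigned before use, so the private copy is safe
        sum += scratch / 2;
    }

    printf("sum = %d\n", sum); // 0 + 1 + ... + 999 = 499500
    return 0;
}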

User-level runtime routines

These routines are used to modify and check the number of threads, detect whether the execution context is in a parallel region, determine how many processors the current system has, set and unset locks, provide timing functions, etc.
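A brief sketch exercising a few of these routines (names as defined by the OpenMP API):

#include <stdio.h>
#include <omp.h>

int main(void)
{
    printf("Processors available: %d\n", omp_get_num_procs());
    printf("Inside a parallel region? %d\n", omp_in_parallel());   // prints 0 here

    omp_set_num_threads(4);            // request a team of four threads

    double t0 = omp_get_wtime();       // portable wall-clock timer
    #pragma omp parallel
    {
        if (omp_get_thread_num() == 0)
            printf("Team size: %d\n", omp_get_num_threads());
    }
    printf("Region took %f seconds\n", omp_get_wtime() - t0);
    return 0;
}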

Environment variables

Environment variables offer a method to alter the execution features of OpenMP applications. They are used to control loop iteration scheduling, the default number of threads, and so on. For example, OMP_NUM_THREADS specifies the number of threads for an application.
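For example, using standard variables (the program name here is hypothetical):

$ OMP_NUM_THREADS=4 ./app             # run with four threads
$ OMP_SCHEDULE="dynamic,1000" ./app   # applies to loops declared schedule(runtime)
$ OMP_DYNAMIC=true ./app              # let the runtime adjust the number of threads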

Implementations

OpenMP has been implemented in many commercial compilers. For instance, Visual C++ 2005, 2008, 2010, 2012 and 2013 support it (OpenMP 2.0, in Professional, Team System, Premium and Ultimate editions[20][21][22]), as well as Intel Parallel Studio for various processors.[23] Oracle Solaris Studio compilers and tools support the latest OpenMP specifications with productivity enhancements for Solaris OS (UltraSPARC and x86/x64) and Linux platforms. The Fortran, C and C++ compilers from The Portland Group also support OpenMP 2.5. GCC has also supported OpenMP since version 4.2.

Compilers with an implementation of OpenMP 3.0:

Several compilers support OpenMP 3.1:

Compilers supporting OpenMP 4.0:

Several Compilers supporting OpenMP 4.5:

Partial support for OpenMP 5.0:

Auto-parallelizing compilers that generate source code annotated with OpenMP directives:

Several profilers and debuggers expressly support OpenMP:

Pros and cons

Pros:

Cons:

Performance expectations

One might expect to get an N-times speedup when running a program parallelized using OpenMP on an N-processor platform. However, this seldom occurs, for reasons such as these: a large part of the program may not be parallelized, so the sequential fraction limits the speedup (Amdahl's law); memory bandwidth is shared and usually does not scale with the number of processors; and synchronization, scheduling and load imbalance add overhead.
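A hedged way to observe this on a given machine is to time the same loop serially and in parallel with omp_get_wtime(); for a memory-bound loop such as the sketch below, the measured speedup is typically well under the core count:

#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N 20000000

int main(void)
{
    double *a = malloc(N * sizeof *a);
    if (!a) return 1;

    double t0 = omp_get_wtime();
    for (int i = 0; i < N; i++)          // serial baseline
        a[i] = 2.0 * i;
    double t_serial = omp_get_wtime() - t0;

    t0 = omp_get_wtime();
    #pragma omp parallel for             // same work, divided among threads
    for (int i = 0; i < N; i++)
        a[i] = 2.0 * i;
    double t_parallel = omp_get_wtime() - t0;

    // This loop does little arithmetic per byte moved, so memory
    // bandwidth, not core count, usually limits the speedup.
    printf("speedup: %.2f\n", t_serial / t_parallel);

    free(a);
    return 0;
}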

Thread affinity

Some vendors recommend setting the processor affinity on OpenMP threads to associate them with particular processor cores.[45][46][47] This minimizes thread migration and context-switching cost among cores. It also improves the data locality and reduces the cache-coherency traffic among the cores (or processors).
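Since OpenMP 4.0, affinity can be requested portably through the OMP_PLACES and OMP_PROC_BIND environment variables (the program name here is hypothetical):

$ OMP_PLACES=cores OMP_PROC_BIND=close ./app    # one thread per core, packed near the master
$ OMP_PLACES=cores OMP_PROC_BIND=spread ./app   # threads spread out across the cores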

Benchmarks

A variety of benchmarks has been developed to demonstrate the use of OpenMP, test its performance and evaluate correctness.

Simple examples

Performance benchmarks include:

Correctness benchmarks include:

See also

  • Heterogeneous System Architecture
  • Parallel programming model
  • POSIX Threads
  • Unified Parallel C
  • Bulk synchronous parallel
  • Partitioned global address space
  • SequenceL

References

  1. "About the OpenMP ARB and OpenMP.org". OpenMP.org. 2013-07-11. Archived from the original on 2013-08-09. Retrieved 2013-08-14.
  2. "OpenMP Compilers & Tools". OpenMP.org. November 2019. Retrieved 2020-03-05.
  3. Silberschatz, Abraham; Galvin, Peter Baer; Gagne, Greg (2012-12-17). Operating System Concepts (9th ed.). Hoboken, N.J.: Wiley. pp. 181–182. ISBN 978-1-118-06333-0.
  4. OpenMP Tutorial at Supercomputing 2008.
  5. Using OpenMP – Portable Shared Memory Parallel Programming – Download Book Examples and Discuss.
  6. Costa, J.J.; et al. (May 2006). "Running OpenMP applications efficiently on an everything-shared SDSM". Journal of Parallel and Distributed Computing. 66 (5): 647–658. doi:10.1016/j.jpdc.2005.06.018. hdl:2117/370260.
  7. Basumallik, Ayon; Min, Seung-Jai; Eigenmann, Rudolf (2007). "Programming Distributed Memory Systems Using OpenMP". 2007 IEEE International Parallel and Distributed Processing Symposium. New York: IEEE Press. pp. 1–8. CiteSeerX 10.1.1.421.8570. doi:10.1109/IPDPS.2007.370397. ISBN 978-1-4244-0909-9. S2CID 14237507. A preprint is available on Chen Ding's home page; see especially Section 3 on Translation of OpenMP to MPI.
  8. Wang, Jue; Hu, ChangJun; Zhang, JiLin; Li, JianJiang (May 2010). "OpenMP compiler for distributed memory architectures". Science China Information Sciences. 53 (5): 932–944. doi:10.1007/s11432-010-0074-0. (As of 2016 the KLCoMP software described in this paper does not appear to be publicly available.)
  9. Cluster OpenMP (a product that used to be available for Intel C++ Compiler versions 9.1 to 11.1 but was dropped in 13.0).
  10. Ayguade, Eduard; Copty, Nawal; Duran, Alejandro; Hoeflinger, Jay; Lin, Yuan; Massaioli, Federico; Su, Ernesto; Unnikrishnan, Priya; Zhang, Guansong (2007). A Proposal for Task Parallelism in OpenMP (PDF). Proc. Int'l Workshop on OpenMP.
  11. "OpenMP Application Program Interface, Version 3.0" (PDF). openmp.org. May 2008. Retrieved 2014-02-06.
  12. LaGrone, James; Aribuki, Ayodunni; Addison, Cody; Chapman, Barbara (2011). A Runtime Implementation of OpenMP Tasks. Proc. Int'l Workshop on OpenMP. pp. 165–178. CiteSeerX 10.1.1.221.2775. doi:10.1007/978-3-642-21487-5_13.
  13. "OpenMP 4.0 API Released". OpenMP.org. 2013-07-26. Archived from the original on 2013-11-09. Retrieved 2013-08-14.
  14. "OpenMP Application Program Interface, Version 4.0" (PDF). openmp.org. July 2013. Retrieved 2014-02-06.
  15. "OpenMP 5.2 Specification". openmp.org.
  16. "OpenMP ARB Releases Technical Report 12". openmp.org.
  17. "C - How to use printf() in multiple threads".
  18. "std::cout, std::wcout - cppreference.com".
  19. "Tutorial – Parallel for Loops with OpenMP". 2009-07-14.
  20. Visual C++ Editions, Visual Studio 2005.
  21. Visual C++ Editions, Visual Studio 2008.
  22. Visual C++ Editions, Visual Studio 2010.
  23. David Worthington, "Intel addresses development life cycle with Parallel Studio". SDTimes, 26 May 2009. Archived 2012-02-15 at the Wayback Machine. (Accessed 28 May 2009.)
  24. "XL C/C++ for Linux Features". (Accessed 9 June 2009.)
  25. "Oracle Technology Network for Java Developers". Developers.sun.com. Retrieved 2013-08-14.
  26. "openmp – GCC Wiki". Gcc.gnu.org. 2013-07-30. Retrieved 2013-08-14.
  27. Kennedy, Patrick (2011-09-06). "Intel® C++ and Fortran Compilers now support the OpenMP* 3.1 Specification". Software.intel.com. Retrieved 2013-08-14.
  28. "IBM XL C/C++ compilers features". IBM. 13 December 2018.
  29. "IBM XL Fortran compilers features". IBM. 13 December 2018.
  30. "Clang 3.7 Release Notes". llvm.org. Retrieved 2015-10-10.
  31. "Absoft Home Page". Retrieved 2019-02-12.
  32. "GCC 4.9 Release Series – Changes". www.gnu.org.
  33. "OpenMP* 4.0 Features in Intel Compiler 15.0". Software.intel.com. 2014-08-13. Archived from the original on 2018-11-16. Retrieved 2014-11-10.
  34. "GCC 6 Release Series – Changes". www.gnu.org.
  35. "OpenMP Compilers & Tools". openmp.org. Retrieved 29 October 2019.
  36. "OpenMP Support — Clang 12 documentation". clang.llvm.org. Retrieved 2020-10-23.
  37. "GOMP — An OpenMP implementation for GCC - GNU Project - Free Software Foundation (FSF)". gcc.gnu.org. Archived from the original on 2021-02-27. Retrieved 2020-10-23.
  38. "OpenMP* Support". Intel. Retrieved 2020-10-23.
  39. Amritkar, Amit; Tafti, Danesh; Liu, Rui; Kufrin, Rick; Chapman, Barbara (2012). "OpenMP parallelism for fluid and fluid-particulate systems". Parallel Computing. 38 (9): 501. doi:10.1016/j.parco.2012.05.005.
  40. Amritkar, Amit; Deb, Surya; Tafti, Danesh (2014). "Efficient parallel CFD-DEM simulations using OpenMP". Journal of Computational Physics. 256: 501. Bibcode:2014JCoPh.256..501A. doi:10.1016/j.jcp.2013.09.007.
  41. OpenMP Accelerator Support for GPUs.
  42. Detecting and Avoiding OpenMP Race Conditions in C++.
  43. Alexey Kolosov, Evgeniy Ryzhkov, Andrey Karpov, "32 OpenMP traps for C++ developers". Archived from the original on 2017-07-07. Retrieved 2009-04-15.
  44. Stephen Blair-Chappell, Intel Corporation, "Becoming a Parallel Programming Expert in Nine Minutes". Presentation at the ACCU 2010 conference.
  45. Chen, Yurong (2007-11-15). "Multi-Core Software". Intel Technology Journal. 11 (4). doi:10.1535/itj.1104.08.
  46. "OMPM2001 Result". SPEC. 2008-01-28.
  47. "OMPM2001 Result". SPEC. 2003-04-01. Archived from the original on 2021-02-25. Retrieved 2008-03-28.

Further reading

  • Quinn, Michael J., Parallel Programming in C with MPI and OpenMP. McGraw-Hill, 2004. ISBN 0-07-058201-7
  • R. Chandra, R. Menon, L. Dagum, D. Kohr, D. Maydan, J. McDonald, Parallel Programming in OpenMP. Morgan Kaufmann, 2000. ISBN 1-55860-671-8
  • R. Eigenmann (editor), M. Voss (editor), OpenMP Shared Memory Parallel Programming: International Workshop on OpenMP Applications and Tools, WOMPAT 2001, West Lafayette, IN, USA, July 30–31, 2001 (Lecture Notes in Computer Science). Springer, 2001. ISBN 3-540-42346-X
  • B. Chapman, G. Jost, R. van der Pas, D.J. Kuck (foreword), Using OpenMP: Portable Shared Memory Parallel Programming. The MIT Press (October 31, 2007). ISBN 0-262-53302-2
  • Tom Deakin and Timothy G. Mattson, Programming Your GPU with OpenMP: Performance Portability for GPUs. The MIT Press (November 7, 2023). ISBN 978-0-262547536
  • M. Firuziaan, O. Nommensen, Parallel Processing via MPI & OpenMP. Linux Enterprise, 10/2002
  • MSDN Magazine article on OpenMP
  • SC08 OpenMP Tutorial (PDF) – Hands-On Introduction to OpenMP, Mattson and Meadows, from SC08 (Austin). Archived 2013-03-19 at the Wayback Machine
  • OpenMP Specifications. Archived 2021-03-02 at the Wayback Machine
  • Miguel Hermanns, Parallel Programming in Fortran 95 using OpenMP (April 19, 2002) (PDF) (OpenMP ver. 1 and ver. 2)

External links

