Precision (computer science)






In computer science, the precision of a numerical quantity is a measure of the detail in which the quantity is expressed. This is usually measured in bits, but sometimes in decimal digits. It is related to precision in mathematics, which describes the number of digits that are used to express a value.

Some of the standardized precision formats are:

Half-precision floating-point format
Single-precision floating-point format
Double-precision floating-point format
Quadruple-precision floating-point format
Octuple-precision floating-point format

Of these, the octuple-precision format is rarely used. The single- and double-precision formats are the most widely used and are supported on nearly all platforms. The use of the half-precision format has been increasing, especially in the field of machine learning, since many machine learning algorithms are inherently error-tolerant.
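
As an illustrative sketch (not part of the article itself), the following Python snippet, assuming NumPy is available, inspects three of these formats, which NumPy exposes as float16, float32, and float64, and prints their bit width and approximate decimal precision:

    import numpy as np

    # Inspect the IEEE 754 binary16, binary32, and binary64 formats,
    # which NumPy exposes as float16, float32, and float64.
    for dtype in (np.float16, np.float32, np.float64):
        info = np.finfo(dtype)
        print(f"{dtype.__name__}: {info.bits} bits total, "
              f"{info.nmant + 1} significand bits, "
              f"~{info.precision} decimal digits")

On a conforming platform this reports roughly 3 decimal digits of precision for half precision, 6 for single, and 15 for double.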

Rounding error


Precision is often the source of rounding errors in computation. The limited number of bits used to store a number often causes some loss of accuracy; for example, the value of sin(0.1) cannot be represented exactly in the IEEE single-precision floating-point format. The error is then often magnified as subsequent computations are made using the data (although it can also be reduced).
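
A minimal Python sketch of the sin(0.1) example above, using NumPy as one convenient way to round a value to IEEE single precision:

    import math
    import numpy as np

    # sin(0.1) computed in double precision, then rounded to
    # IEEE single precision (binary32); neither value is exact,
    # but the single-precision result loses noticeably more detail.
    exact = math.sin(0.1)                # double-precision reference
    single = float(np.float32(exact))    # rounded to single precision
    print(f"double precision: {exact:.17f}")
    print(f"single precision: {single:.17f}")
    print(f"rounding error:   {exact - single:.3e}")

The printed rounding error (on the order of 1e-9) reflects the roughly 24-bit significand of single precision; if this stored value is fed into further computations, that error can grow.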

