Precision (computer science)

In computer science, the precision of a numerical quantity is a measure of the detail in which the quantity is expressed. This is usually measured in bits, but sometimes in decimal digits. It is related to precision in mathematics, which describes the number of digits that are used to express a value.
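The correspondence between bits and decimal digits can be seen directly in a common case. As a brief sketch (Python is used here for illustration; the article itself does not prescribe a language), a Python float is an IEEE 754 double-precision number, and the standard library reports both its significand width in bits and the equivalent number of decimal digits:

```python
import sys

# A Python float is an IEEE 754 double: its significand holds 53 bits,
# which corresponds to 15 decimal digits that always round-trip exactly.
print(sys.float_info.mant_dig)  # 53 significand bits
print(sys.float_info.dig)       # 15 decimal digits
```

The same quantity is thus expressed either as 53 bits or as 15 decimal digits, matching the two units of precision mentioned above.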

Some of the standardized precision formats are:

  • Single-precision floating-point format
  • Double-precision floating-point format
  • Half-precision floating-point format
  • Quadruple-precision floating-point format
  • Octuple-precision floating-point format

Of these, the octuple-precision format is rarely used. The single- and double-precision formats are the most widely used and are supported on nearly all platforms. Use of the half-precision format has been increasing, especially in machine learning, since many machine learning algorithms are inherently error-tolerant.
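The difference in detail between these formats can be demonstrated with a short sketch. Here Python's `struct` module is used to emulate single-precision (32-bit) storage by packing and unpacking a value; this is an illustrative technique, not something the article specifies:

```python
import struct

def to_single(x):
    # Round-trip a Python float (a 64-bit double) through a 32-bit
    # IEEE 754 single-precision encoding, discarding the extra detail.
    return struct.unpack("<f", struct.pack("<f", x))[0]

x = 0.1
single = to_single(x)
print(single)           # 0.10000000149011612
print(abs(single - x))  # detail lost by storing 0.1 in only 32 bits
```

The value 0.1 is not exactly representable in binary at any precision, but the single-precision copy differs from the double-precision one by about 1.5 × 10⁻⁹, showing how fewer bits express the same quantity in less detail.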


1. Rounding error

Precision is often a source of rounding error in computation. The finite number of bits used to store a number often causes some loss of accuracy; an example would be storing sin(0.1) in the IEEE single-precision floating-point format. The error is then often magnified as subsequent computations are made using the stored value, although it can also be reduced.
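The sin(0.1) example above can be sketched as follows. This sketch again emulates single-precision storage with Python's `struct` module (an illustrative assumption, not part of the article), and uses a multiplication to show one way subsequent computation magnifies the stored error:

```python
import math
import struct

def to_single(x):
    # Emulate IEEE 754 single-precision storage via a 32-bit round-trip.
    return struct.unpack("<f", struct.pack("<f", x))[0]

exact = math.sin(0.1)       # computed in double precision as a reference
stored = to_single(exact)   # the value actually kept in 32 bits
err = abs(stored - exact)
print(err)                  # rounding error from single-precision storage

# A later computation using the stored value scales the error with it:
magnified = abs(stored * 1e6 - exact * 1e6)
print(magnified)            # the same error, now a million times larger
```

The initial rounding error is tiny, on the order of 10⁻⁹, but the multiplication carries it forward proportionally, illustrating how error introduced by limited precision propagates through later computation.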
