Help
I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind. (Lord Kelvin)
Using ErrorGerm
Valid names
A variable name must be valid: a valid name begins with an "alpha character", which may be followed by any mix of "alpha characters" and digits.
An "alpha character" in ErrorGerm is nearly the same as a math.js alpha character, but excludes the Mathematical Alphanumeric Symbols block (\u{1D400}–\u{1D7FF}); see the sketch after the examples below.
Examples of valid names are:
- _1
- α_1
- área
Examples of invalid names are:
- 1
- 变量
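As a rough illustration of this rule, here is a hypothetical validator. The character ranges below (basic Latin plus Latin extended, Greek, and letterlike symbols, mirroring math.js) are an assumption and may not match ErrorGerm's parser exactly:

```ts
// Approximate "alpha character" set: Latin letters, _, $, Latin extended,
// Greek, and letterlike symbols -- but NOT the mathematical alphanumeric
// symbols (U+1D400 - U+1D7FF), which ErrorGerm excludes. (Assumed ranges.)
const ALPHA = "a-zA-Z_$\\u00C0-\\u02AF\\u0370-\\u03FF\\u2100-\\u214F";
const NAME_RE = new RegExp(`^[${ALPHA}][${ALPHA}0-9]*$`);

function isValidName(name: string): boolean {
  return NAME_RE.test(name);
}

isValidName("_1");   // true
isValidName("α_1");  // true
isValidName("área"); // true
isValidName("1");    // false (starts with a digit)
isValidName("变量");  // false (CJK is not in the alpha set)
```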
Measuring errors
It is a well-established fact of scientific investigation that the first time an experiment is performed, the results often bear all too little resemblance to the "truth" being sought. As the experiment is repeated, with successive refinements of technique and method, the results gradually and asymptotically approach what we may accept with some confidence to be a reliable description of events. We may sometimes feel that nature is loath to give up her secrets without a considerable expenditure of effort on our part, and that the first steps in experimentation are bound to fail. Whatever the reason, it is certainly true that for all physical experiments, there are errors and uncertainties that must be reduced by improved experimental techniques and repeated measurements. And those errors remaining must always be estimated to establish the validity of our results.
Accuracy vs. Precision
It is important to distinguish between the terms accuracy and precision. The accuracy of an experiment is a measure of how close the result of the experiment is to the true value; the precision is a measure of how well the result has been determined without reference to its agreement with the true value. The precision is also a measure of the reproducibility of the result in a given experiment.
In general, when we quote the uncertainty or error in an experimental result, we are referring to the precision with which that result has been determined. Absolute precision indicates the magnitude of the uncertainty in the result in the same units as the result; relative precision indicates the uncertainty in terms of a fraction of the value of the result.
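For example, a length reported as \(12.3 \pm 0.2\ \text{cm}\) has an absolute precision of \(0.2\ \text{cm}\) and a relative precision of \(0.2/12.3 \approx 1.6\%\).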
Systematic errors
The accuracy of an experiment, as we have defined it, is generally dependent on how well we can control or compensate for systematic errors, errors that will make our results different from the "true" values with reproducible discrepancies. Errors of this type are not easy to detect and are not easily studied by statistical analysis.
Random errors
The precision of an experiment depends upon how well we can overcome random errors, fluctuations in observations that yield different results each time the experiment is repeated, and thus require repeated experimentation to yield precise results. A given accuracy implies an equivalent precision and, therefore, also depends to some extent on random errors.
Significant figures
The precision of an experimental result is implied by the number of digits recorded in the result, although typically the uncertainty should be quoted specifically as well. The number of significant figures in a result is defined as follows:
- The leftmost nonzero digit is the most significant digit.
- If there is no decimal point, the rightmost nonzero digit is the least significant digit.
- If there is a decimal point, the rightmost digit is the least significant digit, even if it is a 0.
- All digits between the least and most significant digits are counted as significant digits.
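As an illustration of these rules, significant figures can be counted from the textual form of a result. This is a sketch; the function is hypothetical and not part of ErrorGerm:

```ts
// Count significant figures in a numeric string, following the rules above.
function countSigFigs(value: string): number {
  const hasPoint = value.includes(".");
  const digits = value.replace(/[^0-9]/g, ""); // keep digits, drop sign/point
  // Most significant digit: the leftmost nonzero digit.
  let start = 0;
  while (start < digits.length && digits[start] === "0") start++;
  if (start === digits.length) return 0; // e.g. "0" or "0.0"
  // Least significant digit: the rightmost digit if there is a decimal
  // point, otherwise the rightmost nonzero digit.
  let end = digits.length - 1;
  if (!hasPoint) {
    while (end > start && digits[end] === "0") end--;
  }
  return end - start + 1;
}

countSigFigs("1200");   // 2 -- trailing zeros without a decimal point
countSigFigs("1200.");  // 4 -- the decimal point makes them significant
countSigFigs("0.0120"); // 3 -- leading zeros are not significant
```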
Uncertainty of a magnitude measured once
The best value will be the measured value, and the uncertainty will be the nominal error of the measuring instrument.
\(\frac{m}{2}\) is often used as the nominal error, where \(m\) is the smallest division the instrument can resolve.
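For example, a ruler graduated in millimeters has \(m = 1\ \text{mm}\), so a single reading taken with it would be reported with a nominal error of \(0.5\ \text{mm}\).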
Uncertainty of a magnitude measured \(N\) times
To minimize random errors, we measure multiple times. Suppose we made \(N\) measurements of the same magnitude with results \(x_1, x_2, \ldots, x_N\); then the best value is the mean
\[ \overline{x} = \frac{1}{N}\sum_{k=1}^{N} x_k \]
and the uncertainty is
\[ \Delta x = \sqrt{\sigma_x^2 + \sigma_{nom}^2} \]
where \(\sigma_{nom}\) is the nominal error of the measuring instrument, \(\sigma_x = \frac{\sigma}{\sqrt{N}}\) is the statistical error of the mean, and
\[ \sigma = \sqrt{\frac{\sum_{k=1}^{N}(x_k - \overline{x})^2}{N-1}} \]
is the standard deviation of a single measurement.
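A minimal sketch of these formulas (the function name and returned shape are assumptions, not ErrorGerm's API):

```ts
// Combine N repeated readings of one magnitude into a best value and
// uncertainty, following the formulas above.
function summarize(readings: number[], sigmaNom: number) {
  const N = readings.length;
  const mean = readings.reduce((s, x) => s + x, 0) / N;
  // Standard deviation of a single measurement (divisor N - 1).
  const sigma = Math.sqrt(
    readings.reduce((s, x) => s + (x - mean) ** 2, 0) / (N - 1)
  );
  const sigmaX = sigma / Math.sqrt(N);         // statistical error of the mean
  const deltaX = Math.hypot(sigmaX, sigmaNom); // total uncertainty
  return { mean, sigma, sigmaX, deltaX };
}
```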
Uncertainty of a magnitude measured \(N\) independent times
Sometimes each repeated measurement has a different uncertainty from the rest, because a different method or instrument was used. In this case, if measurement \(x_k\) has uncertainty \(\sigma_k\), the best value is the weighted mean
\[ \overline{x} = \frac{\sum_{k=1}^{N} x_k/\sigma_k^2}{\sum_{k=1}^{N} 1/\sigma_k^2} \]
And the uncertainty is
\[ \sigma_{\overline{x}} = \left( \sum_{k=1}^{N} \frac{1}{\sigma_k^2} \right)^{-1/2} \]
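A sketch of this weighted average, assuming each reading carries its own \(\sigma_k\) (again, hypothetical code, not ErrorGerm's API):

```ts
// Weighted average of measurements that each carry their own uncertainty.
function weightedMean(points: { value: number; sigma: number }[]) {
  const wSum = points.reduce((s, p) => s + 1 / p.sigma ** 2, 0);
  const mean =
    points.reduce((s, p) => s + p.value / p.sigma ** 2, 0) / wSum;
  return { mean, sigma: 1 / Math.sqrt(wSum) };
}
```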
How many measurements should be made?
Let's remember that \(\sigma\) represents the dispersion of each measurement and does not depend on \(N\), but rather on the quality of the measurements. On the other hand, \(\sigma_x\) does depend on \(N\), and decreases as \(N\) increases. For instance, if we are measuring a length with a ruler graduated in millimeters, it's clear that increasing the number of measurements can reduce the statistical error. However, with this instrument, we cannot achieve certainty in figures at the micron level, no matter how many measurements we take.
As \(N\) increases, \(\sigma_x\) decreases, but from a physical standpoint the error in \(\overline{x}\) can only decrease until it becomes equal to, or of the order of, the instrument's nominal error (\(\sigma_{nom}\)). Therefore, it is not reasonable to strive to reduce \(\sigma_x\) far below \(\sigma_{nom}\). The optimal balance is achieved when \(\sigma_x \approx \sigma_{nom}\).
This provides us with a criterion for deciding the optimal number of measurements to perform. Assuming that \(\sigma\) is constant with \(N\), the idea is to conduct a pilot test with some preliminary measurements (\(N_{prel}\)), say about 5 to 10, calculate \(\sigma\) from them, and then obtain the optimal number of measurements from the condition \(\sigma_x \approx \sigma_{nom}\):
\[ N_{op} \approx \left( \frac{\sigma}{\sigma_{nom}} \right)^2 \]
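For instance, if a pilot run with a millimeter ruler gives \(\sigma = 2\ \text{mm}\) and \(\sigma_{nom} = 0.5\ \text{mm}\), the criterion yields \(N_{op} \approx (2/0.5)^2 = 16\). As a one-line sketch (names assumed):

```ts
// Optimal number of measurements from the condition sigma_x ≈ sigma_nom.
const nOptimal = (sigma: number, sigmaNom: number) =>
  Math.ceil((sigma / sigmaNom) ** 2);

nOptimal(2, 0.5); // 16 -- beyond this, sigma_nom dominates the total error
```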
Steps to follow to measure a physical quantity \( x \):
- Perform about 5 to 10 preliminary measurements and determine the average error \(\sigma\) of each measurement.
- Discard extreme values.
- Determine \( N_{op} \) .
- Complete the \( N_{op} \) measurements of \( x \), if necessary.
- Calculate the average \(\overline{x}\) and its statistical uncertainty \(\sigma_x\).
- Calculate the value of the effective or total uncertainty \(\Delta x = \sqrt{\sigma_x^2 + \sigma_{nom}^2}\).
- Write the result in the form \(x = (\overline{x} \pm \Delta x) [\text{unit}]\).
- Calculate the percent relative error \( \varepsilon = 100\cdot\Delta x/\overline{x} \).
- If desired, to verify that the value distribution is normal, compare the distribution histogram of data with the corresponding normal curve, that is, with a normal distribution of mean \(\overline{x}\) and standard deviation \(\sigma\).
- Analyze possible sources of systematic errors and correct the measured value.
- Evaluate the absolute uncertainty of the measurement by combining the statistical and systematic uncertainties.
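Tying these steps together with the summarize sketch from above (the readings are invented purely for illustration):

```ts
const readings = [12.1, 12.3, 12.2, 12.4, 12.2]; // hypothetical data, in mm
const { mean, deltaX } = summarize(readings, 0.5); // sigma_nom = 0.5 mm
const epsilon = (100 * deltaX) / mean;             // percent relative error
// Prints something like: x = (12.2 ± 0.5) mm, ε ≈ 4%
console.log(`x = (${mean.toFixed(1)} ± ${deltaX.toFixed(1)}) mm, ε ≈ ${epsilon.toFixed(0)}%`);
```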
Propagation of errors
We often want to determine a dependent variable \(V\) that is a function of one or more measured variables. We must know how to propagate, or carry over, the uncertainties in the measured variables to determine the uncertainty in the dependent variable.
General formula
Suppose a dependent variable \(V\) is a function \(V = V(x_0, x_1, x_2, \ldots)\), where the \(x_i\) are measured quantities with known uncertainties \(\Delta x_0, \Delta x_1, \Delta x_2, \ldots\). Then we can calculate the uncertainty of \(V\) with the formula
\[ \Delta V = \sqrt{\sum_i \left( \frac{\partial V}{\partial x_i} \right)^2 (\Delta x_i)^2} \]
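A numerical sketch of this formula, estimating the partial derivatives with central finite differences. ErrorGerm's own evaluation may differ (it could, for example, differentiate symbolically), and all names here are assumptions:

```ts
// Propagate uncertainties through V numerically: estimate each partial
// derivative with a central difference, then add contributions in quadrature.
function propagateError(
  V: (...xs: number[]) => number,
  values: number[],
  errors: number[]
): number {
  let sum = 0;
  values.forEach((xi, i) => {
    const h = 1e-6 * Math.max(Math.abs(xi), 1); // step for the derivative
    const plus = [...values];  plus[i]  = xi + h;
    const minus = [...values]; minus[i] = xi - h;
    const dVdxi = (V(...plus) - V(...minus)) / (2 * h);
    sum += (dVdxi * errors[i]) ** 2;
  });
  return Math.sqrt(sum);
}

// Area of a rectangle: V(a, b) = a * b, with a = 2 ± 0.1 and b = 3 ± 0.1.
propagateError((a, b) => a * b, [2, 3], [0.1, 0.1]); // ≈ 0.36
```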