The method of least squares explained above makes at least four assumptions, the adherence to which
may be checked a posteriori. These assumptions concern the error
*e*_{i} = (*b* + *ax*_{i}) - *y*_{i}, i.e., the
difference between the best-fit prediction and the observation. The assumptions are that these errors
(a) are mutually independent, (b) have zero mean, (c) have a constant variance across all values of the
statistical variables, and (d) are *normally* distributed. Violation of these assumptions can often be
identified simply by examining a plot of *e*_{i} vs. *x*_{i}. Note that the first of Eq. 4
guarantees that the mean value of *e*_{i} is 0 within the precision of the computation (to see this,
rewrite that equation as
∑_{i} (*b* + *ax*_{i} - *y*_{i}) = 0). However, this is not necessarily true of non-linear regression
analysis.
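
As a minimal numerical sketch (using hypothetical data and `numpy.polyfit` as the least-squares fitter), the zero-mean property of the linear-regression residuals can be checked directly:

```python
import numpy as np

# Hypothetical data: a noisy linear relationship y = 2x + 1 + noise
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, size=x.size)

# Least-squares fit of y = b + a*x (np.polyfit returns [a, b] for degree 1)
a, b = np.polyfit(x, y, 1)

# Residuals e_i = (b + a*x_i) - y_i: fit prediction minus observation
e = (b + a * x) - y

# The normal equation sum_i(b + a*x_i - y_i) = 0 forces the residual mean
# to vanish to within the precision of the computation
print(abs(e.mean()))
```

The printed value is on the order of machine precision, whereas for a non-linear model fit by least squares the residual mean need not vanish. Examining `e` plotted against `x` would likewise reveal trends (non-independence) or a fanning-out pattern (non-constant variance).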