Quadratic Mean Theories for Two Dimensions in Fundamental Sciences of Estimation, Approximation, Data Modeling and Processing
by
© Ph. D. & Dr. Sc. Lev Gelimson
Academic Institute for Creating Fundamental Sciences (Munich, Germany)
Mathematical Journal
of the "Collegium" All World Academy of Sciences
Munich (Germany)
10 (2010), 4
For solving contradictory (e.g., overdetermined) problems in approximation and data processing, usually only the least square method (LSM) [1] by Legendre and Gauss is applied. Overmathematics [2, 3] and the fundamental sciences of estimation [4], approximation [5], data modeling [6] and processing [7] have discovered many principal shortcomings [2-8] of this method. Additionally, minimizing the sum of the squared differences of the preselected coordinates alone (e.g., the ordinates in a two-dimensional problem) between the graph of the desired approximation function and each of the given data points depends on this preselection, ignores the remaining coordinates, and provides no objective sense of the result. Moreover, the method is correct only in the unique case of a constant approximation and gives systematic errors that increase together with the inclination of the approximation function.
In the fundamental sciences of estimation [4], approximation [5], data modeling [6] and processing [7], quadratic mean theories (QMT) are valid by invariance under linear transformations of the coordinate system of the given data. We show the essence of these theories for a linear approximation in the two-dimensional case.
Given n (n ∈ N+ = {1, 2, ...}, n > 2) points (x'_j , y'_j), j = 1, 2, ..., n, i.e. (x'_1 , y'_1), (x'_2 , y'_2), ..., (x'_n , y'_n), with any real coordinates, use the clearly invariant centralization transformation x = x' − Σ_{j=1}^n x'_j / n , y = y' − Σ_{j=1}^n y'_j / n to provide a coordinate system xOy central for the given data, and further work in this system with the points (x_j , y_j), j = 1, 2, ..., n, to be approximated with a straight line y = ax containing the origin O(0, 0).
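The centralization transformation can be sketched as follows. This is a minimal illustration; the point list and the name `points` are assumptions for the example, not data from the paper.

```python
# Centralization: shift the origin to the data center so that the sums
# of the centralized coordinates vanish and the fitted line y = a x can
# be sought through the new origin O(0, 0). Illustrative points.
points = [(2.0, 5.0), (3.0, 4.0), (4.0, 6.0), (5.0, 8.0), (6.0, 7.0)]
n = len(points)

x_mean = sum(xp for xp, yp in points) / n  # Σ x'_j / n
y_mean = sum(yp for xp, yp in points) / n  # Σ y'_j / n

# Centralized coordinates: x = x' - Σ x'_j / n, y = y' - Σ y'_j / n
centralized = [(xp - x_mean, yp - y_mean) for xp, yp in points]

print(centralized)
```

After this step the centralized coordinate sums are zero (up to rounding), which is what makes the restriction to lines through the origin legitimate.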
First, use the least square method [1] in its common approach: minimize the sum of the squared y-coordinate differences between this line and each of the n data points (x_j , y_j):
2yS(a) = Σ_{j=1}^n (a x_j − y_j)² , 2yS'_a = 2 Σ_{j=1}^n (a x_j − y_j) x_j = 0 , a Σ_{j=1}^n x_j² = Σ_{j=1}^n x_j y_j , a_y = Σ_{j=1}^n x_j y_j / Σ_{j=1}^n x_j² , 2yS''_aa = 2 Σ_{j=1}^n x_j² > 0
(in any nontrivial case), providing precisely the minimum of 2yS(a) at a_y , the value of a minimizing the sum of the squared y-coordinate differences.
Secondly, minimize the sum of the squared x-coordinate differences:
x = y/a , a' = 1/a , 2xS(a') = Σ_{j=1}^n (a' y_j − x_j)² , 2xS'_a' = 2 Σ_{j=1}^n (a' y_j − x_j) y_j = 0 ,
a' Σ_{j=1}^n y_j² = Σ_{j=1}^n x_j y_j , a'_x = Σ_{j=1}^n x_j y_j / Σ_{j=1}^n y_j² , a_x = 1/a'_x = Σ_{j=1}^n y_j² / Σ_{j=1}^n x_j y_j , 2xS''_a'a' = 2 Σ_{j=1}^n y_j² > 0
(in any nontrivial case), providing precisely the minimum of 2xS(a') at a'_x and hence a_x as the value of a minimizing the sum of the squared x-coordinate differences.
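The two one-coordinate slopes a_y and a_x can be computed directly from the closed forms above. A minimal sketch on illustrative centralized data (the names `a_y`, `a_x`, `pts` are chosen for the example):

```python
# The two least-square slopes of the paper for centralized data:
# a_y minimizes the squared y-differences, a_x the squared x-differences.
pts = [(-2.0, -1.0), (-1.0, -2.0), (0.0, 0.0), (1.0, 2.0), (2.0, 1.0)]

sxx = sum(x * x for x, y in pts)  # Σ x_j²
syy = sum(y * y for x, y in pts)  # Σ y_j²
sxy = sum(x * y for x, y in pts)  # Σ x_j y_j

a_y = sxy / sxx  # minimizes Σ (a x_j - y_j)²
a_x = syy / sxy  # a_x = 1/a'_x, minimizes Σ (a' y_j - x_j)²

print(a_y, a_x)
```

Note that a_y ≤ a_x whenever Σ x_j y_j > 0, which is exactly the gap the quadratic mean below resolves.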
Now (similarly to the direct solution method in fundamental science of solving general problems [9]) immediately take
a = (a_x a_y)^{1/2} sign Σ_{j=1}^n x_j y_j = (Σ_{j=1}^n y_j² / Σ_{j=1}^n x_j²)^{1/2} sign Σ_{j=1}^n x_j y_j ,
y = sign(Σ_{j=1}^n x_j y_j) (Σ_{j=1}^n y_j² / Σ_{j=1}^n x_j²)^{1/2} x
for the transformed centralized data, whereas for the initial noncentralized data we obtain
y' = sign[Σ_{j=1}^n (x'_j − Σ_{i=1}^n x'_i / n)(y'_j − Σ_{i=1}^n y'_i / n)] [Σ_{j=1}^n (y'_j − Σ_{i=1}^n y'_i / n)² / Σ_{j=1}^n (x'_j − Σ_{i=1}^n x'_i / n)²]^{1/2} (x' − Σ_{j=1}^n x'_j / n) + Σ_{j=1}^n y'_j / n .
If the divisor in a_x , namely Σ_{j=1}^n x_j y_j , is nonzero but very small in absolute value, i.e. |Σ_{j=1}^n x_j y_j| << (Σ_{j=1}^n x_j² Σ_{j=1}^n y_j²)^{1/2} , the used sign can become oversensitive to small data variations. In such a case, use either the horizontal straight line approximation y = 0 (y' = Σ_{j=1}^n y'_j / n) if 2yS = Σ_{j=1}^n y_j² − (Σ_{j=1}^n y_j)²/n < 2xS = Σ_{j=1}^n x_j² − (Σ_{j=1}^n x_j)²/n , or the vertical one x = 0 (x' = Σ_{j=1}^n x'_j / n) if 2yS > 2xS . The latter line cannot be obtained from the general equation y = ax and has to be considered separately.
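The QMT slope with this degenerate fallback can be sketched as follows. The tolerance `EPS`, the function name `qmt_line`, and the data points are assumptions for the illustration, not prescribed by the theory.

```python
import math

# QMT slope a = (a_x a_y)^(1/2) sign(Σ x_j y_j)
#            = (Σ y_j² / Σ x_j²)^(1/2) sign(Σ x_j y_j)
# for centralized data, with the horizontal/vertical fallback when
# |Σ x_j y_j| is very small relative to sqrt(Σ x_j² Σ y_j²).
EPS = 1e-9  # illustrative smallness threshold

def qmt_line(pts):
    sxx = sum(x * x for x, y in pts)
    syy = sum(y * y for x, y in pts)
    sxy = sum(x * y for x, y in pts)
    if abs(sxy) <= EPS * math.sqrt(sxx * syy):
        # sign(Σ x_j y_j) is oversensitive: fall back to the horizontal
        # line y = 0 when 2yS < 2xS, else to the vertical line x = 0
        return "y = 0" if syy < sxx else "x = 0"
    return math.copysign(math.sqrt(syy / sxx), sxy)  # slope of y = a x

pts = [(-2.0, -1.0), (-1.0, -2.0), (0.0, 0.0), (1.0, 2.0), (2.0, 1.0)]
print(qmt_line(pts))
```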
There is an even more direct and natural deductive way to obtain the above formula for a . After centralization, additionally introduce the normalization transformation
X = x / (Σ_{j=1}^n x_j²)^{1/2} ,
Y = y / (Σ_{j=1}^n y_j²)^{1/2}
to provide a coordinate system XOY which is central and normalized for the given data, and further work in this system with the points (X_j , Y_j), j = 1, 2, ..., n, to be approximated with a straight line Y = AX containing the origin O(0, 0). Note that y = ax gives
Y (Σ_{j=1}^n y_j²)^{1/2} = a (Σ_{j=1}^n x_j²)^{1/2} X
and hence
A = (Σ_{j=1}^n x_j² / Σ_{j=1}^n y_j²)^{1/2} a ,
a = (Σ_{j=1}^n y_j² / Σ_{j=1}^n x_j²)^{1/2} A .
Unlike before, where the squared differences of the y-coordinates and of the x-coordinates were considered separately, now regard the sum of the squared differences of both the Y-coordinates and the X-coordinates (for a ≠ 0 and A ≠ 0) at once, which is reasonable because normalization equalizes the weights of both data point coordinates:
2YXS(A) = Σ_{j=1}^n [(A X_j − Y_j)² + (Y_j/A − X_j)²] ,
2YXS'_A = 2 Σ_{j=1}^n [(A X_j − Y_j) X_j + (Y_j/A − X_j) Y_j (−1/A²)] = 0 ,
A Σ_{j=1}^n X_j² − Σ_{j=1}^n X_j Y_j − Σ_{j=1}^n Y_j² / A³ + Σ_{j=1}^n X_j Y_j / A² = 0 ,
A⁴ Σ_{j=1}^n X_j² − A³ Σ_{j=1}^n X_j Y_j + A Σ_{j=1}^n X_j Y_j − Σ_{j=1}^n Y_j² = 0 .
Note that, due to normalization,
Σ_{j=1}^n X_j² = Σ_{j=1}^n [x_j / (Σ_{i=1}^n x_i²)^{1/2}]² = 1 ,
Σ_{j=1}^n Y_j² = Σ_{j=1}^n [y_j / (Σ_{i=1}^n y_i²)^{1/2}]² = 1 .
Then we obtain
A⁴ − Σ_{j=1}^n X_j Y_j A³ + Σ_{j=1}^n X_j Y_j A − 1 = 0 ,
(A² − 1)(A² − Σ_{j=1}^n X_j Y_j A + 1) = 0 .
Such a representation of this quartic equation in the single unknown A allows finding two solutions to it at once:
A² − 1 = 0 ,
A_1 = 1 ,
A_2 = −1 .
Note that, generally, for any real X_j and Y_j , the Cauchy-Schwarz inequality
(Σ_{j=1}^n X_j Y_j)² ≤ Σ_{j=1}^n X_j² Σ_{j=1}^n Y_j²
holds. Here, due to normalization, we have
(Σ_{j=1}^n X_j Y_j)² ≤ Σ_{j=1}^n X_j² Σ_{j=1}^n Y_j² = 1 .
Hence the discriminant
(Σ_{j=1}^n X_j Y_j)² − 4
of the remaining quadratic equation
A² − Σ_{j=1}^n X_j Y_j A + 1 = 0
is negative, both solutions to this equation are complex, and there are no further real solutions to the quartic equation in the single unknown A .
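The factorization and the root analysis above can be checked numerically. A minimal sketch; the value of c = Σ_{j=1}^n X_j Y_j is an illustrative assumption (any |c| ≤ 1 would do):

```python
# Check that A⁴ - c A³ + c A - 1 = (A² - 1)(A² - c A + 1) and that
# A = ±1 are roots while the quadratic factor has no real roots
# (discriminant c² - 4 < 0 since |c| ≤ 1). Illustrative c.
c = 0.6

def quartic(A):
    return A**4 - c * A**3 + c * A - 1.0

def factored(A):
    return (A**2 - 1.0) * (A**2 - c * A + 1.0)

# A = 1 and A = -1 annihilate the quartic
print(quartic(1.0), quartic(-1.0))

# The factorization holds identically (spot check on a grid)
print(all(abs(quartic(a) - factored(a)) < 1e-12
          for a in [-2.0, -0.5, 0.3, 1.7]))

# The quadratic factor contributes no real roots
print(c**2 - 4.0 < 0.0)
```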
Compare
2YXS(A) = Σ_{j=1}^n X_j² A² − 2 Σ_{j=1}^n X_j Y_j A + Σ_{j=1}^n Y_j² + Σ_{j=1}^n Y_j² / A² − 2 Σ_{j=1}^n X_j Y_j / A + Σ_{j=1}^n X_j² ,
or, due to normalization,
2YXS(A) = A² − 2 Σ_{j=1}^n X_j Y_j A + 2 − 2 Σ_{j=1}^n X_j Y_j / A + 1/A² ,
at A = A_1 and A = A_2 , providing 2YXSmin(A) and 2YXSmax(A), or simply 2Smin(A) and 2Smax(A).
Note that, theoretically for
Σ_{j=1}^n X_j Y_j = 0 ,
and practically for
|Σ_{j=1}^n X_j Y_j| << 1 ,
we have to investigate the pair of straight lines Y = 0 and X = 0.
Otherwise, we have Y = X and Y = - X obtained above by a ≠ 0 and A ≠ 0.
Namely
A = sign(Σ_{j=1}^n X_j Y_j)
provides 2Smin(A), whereas
A = −sign(Σ_{j=1}^n X_j Y_j)
provides 2Smax(A).
Determine 2Smin(A), 2Smax(A), and then define and determine
SL = [2Smin(A) / 2Smax(A)]^{1/2}
as a measure of data scatter with respect to linear approximation.
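The scatter and trend measures can be computed directly from the normalized formula for 2YXS(A). A minimal sketch on illustrative centralized data (the names `S2`, `pts` and the points themselves are assumptions, not the data of the paper's figures):

```python
import math

# Scatter SL = [2Smin / 2Smax]^(1/2) and trend TL = 1 - SL via the
# normalized cross sum c = Σ X_j Y_j. Illustrative centralized data.
pts = [(-2.0, -1.0), (-1.0, -2.0), (0.0, 0.0), (1.0, 2.0), (2.0, 1.0)]

sxx = sum(x * x for x, y in pts)
syy = sum(y * y for x, y in pts)
c = sum(x * y for x, y in pts) / math.sqrt(sxx * syy)  # Σ X_j Y_j

def S2(A):
    # 2YXS(A) = A² - 2cA + 2 - 2c/A + 1/A² after normalization
    return A**2 - 2*c*A + 2 - 2*c/A + 1/A**2

S2min = S2(math.copysign(1.0, c))   # at A = sign(c), the minimum
S2max = S2(-math.copysign(1.0, c))  # at A = -sign(c), the maximum

SL = math.sqrt(S2min / S2max)
TL = 1.0 - SL
print(SL, TL)
```

Since 2S(±1) = 4 ∓ 4|c| after normalization, SL reduces to [(1 − |c|)/(1 + |c|)]^{1/2}, which lies between 0 (perfect trend) and 1 (pure scatter).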
This is an upper estimate of data scatter with respect to approximation in general, because nonlinear approximation is also possible. Denote a measure of data scatter with respect to approximation in general by S . Then SL ≥ S .
Also introduce a measure of data trend with respect to linear approximation
TL = 1 − SL = 1 − [2Smin(A) / 2Smax(A)]^{1/2}
and a measure of data trend with respect to approximation in general
T = 1 - S .
Then, naturally, TL ≤ T .
It is possible to give still more universal (but much more complicated) formulae for a and y . Namely, denote
t = |Σ_{j=1}^n x_j y_j| / (Σ_{j=1}^n x_j² Σ_{j=1}^n y_j²)^{1/2} ,
r = 1/2 − t + |1/2 − t| ,
so that r = 1 − 2t for t < 1/2 and r = 0 for t ≥ 1/2.
Then
a = (Σ_{j=1}^n y_j² / Σ_{j=1}^n x_j²)^{1/2} t^r sign Σ_{j=1}^n x_j y_j ,
y = sign(Σ_{j=1}^n x_j y_j) (Σ_{j=1}^n y_j² / Σ_{j=1}^n x_j²)^{1/2} t^r x
for the transformed centralized data, whereas for the initial noncentralized data we obtain
y' = sign[Σ_{j=1}^n (x'_j − Σ_{i=1}^n x'_i / n)(y'_j − Σ_{i=1}^n y'_i / n)] [Σ_{j=1}^n (y'_j − Σ_{i=1}^n y'_i / n)² / Σ_{j=1}^n (x'_j − Σ_{i=1}^n x'_i / n)²]^{1/2} t^r (x' − Σ_{j=1}^n x'_j / n) + Σ_{j=1}^n y'_j / n .
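The universal slope formula can be sketched as follows. The function name `universal_slope` and the data are illustrative assumptions; the damping factor t^r equals 1 for t ≥ 1/2 and shrinks toward 0 for weakly correlated data.

```python
import math

# Universal QMT slope of the paper for centralized data:
# t = |Σ x_j y_j| / sqrt(Σ x_j² Σ y_j²), r = 1/2 - t + |1/2 - t|,
# a = sqrt(Σ y_j² / Σ x_j²) · t^r · sign(Σ x_j y_j).
def universal_slope(pts):
    sxx = sum(x * x for x, y in pts)
    syy = sum(y * y for x, y in pts)
    sxy = sum(x * y for x, y in pts)
    t = abs(sxy) / math.sqrt(sxx * syy)
    r = 0.5 - t + abs(0.5 - t)  # r = 1 - 2t if t < 1/2, else r = 0
    return math.copysign(math.sqrt(syy / sxx) * t**r, sxy)

pts = [(-2.0, -1.0), (-1.0, -2.0), (0.0, 0.0), (1.0, 2.0), (2.0, 1.0)]
print(universal_slope(pts))  # here t = 0.8 ≥ 1/2, so t^r = 1
```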
Unlike the LSM, QMT provide the best linear approximation to the given data, e.g. in numeric tests; see Figures 1 and 2, with (x', y') replaced by (x , y):
Figure 1. SL = 0.218, TL = 0.782
Figure 2. SL = 0.507, TL = 0.493
Nota bene: For linear approximation, the results of distance quadrat theories (DQT) and general theories of moments of inertia (GTMI) [4, 5] coincide. If Σ_{j=1}^n y_j² = Σ_{j=1}^n x_j² (and hence the best linear approximation is y = ±x + C), the same also holds for QMT; here y = x + 2 (Figures 1, 2). If Σ_{j=1}^n y_j² ≠ Σ_{j=1}^n x_j² , QMT give results other than those of DQT and GTMI, but QMT are valid under another invariance type than DQT and GTMI. The data symmetry straight line y = x + 2 is the best linear approximation in both of the above tests. The LSM gives y = 0.909x + 2.364 (Figure 1) and even y = 0.591x + 3.636 (Figure 2) with the same data center (4, 6), underestimating the modulus (absolute value) of the slope relative to the x-axis (which is typical), since it considers y-coordinate differences instead of distances and ignores the inclination of the approximation straight line to the x-axis.
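The typical LSM underestimation of the slope can be reproduced on data symmetric about a line y = x + C. The points below are an illustrative construction (the mirror of each point across y = x + 2 is again a data point); they are not the data of Figures 1 and 2.

```python
import math

# LSM vs. QMT on data symmetric about y = x + 2 with data center (4, 6).
# By symmetry Σ y_j² = Σ x_j² after centralization, so the QMT slope is
# ±1, whereas the LSM slope Σ x_j y_j / Σ x_j² is smaller in modulus.
points = [(2.0, 5.0), (3.0, 4.0), (4.0, 6.0), (5.0, 8.0), (6.0, 7.0)]
n = len(points)
xm = sum(x for x, y in points) / n  # data center abscissa
ym = sum(y for x, y in points) / n  # data center ordinate

cx = [(x - xm, y - ym) for x, y in points]
sxx = sum(x * x for x, y in cx)
syy = sum(y * y for x, y in cx)
sxy = sum(x * y for x, y in cx)

a_lsm = sxy / sxx                                 # LSM slope
a_qmt = math.copysign(math.sqrt(syy / sxx), sxy)  # QMT slope

print(a_lsm, a_qmt)  # LSM underestimates the symmetry slope 1
```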
Quadratic mean theories are very efficient in data estimation, approximation, and processing, and are reliable even under great data scatter.
Acknowledgements: Many thanks to Anatolij Gelimson for our constructive discussions on coordinate system transformation invariances and his very useful remarks.
References
[1] Encyclopaedia of Mathematics. Ed. M. Hazewinkel. Volumes 1 to 10. Kluwer Academic Publ., Dordrecht, 1988-1994
[2] Lev Gelimson. Providing Helicopter Fatigue Strength: Flight Conditions. In: Structural Integrity of Advanced Aircraft and Life Extension for Current Fleets – Lessons Learned in 50 Years After the Comet Accidents, Proceedings of the 23rd ICAF Symposium, Dalle Donne, C. (Ed.), 2005, Hamburg, Vol. II, 405-416
[3] Lev Gelimson. Overmathematics: Fundamental Principles, Theories, Methods, and Laws of Science. The "Collegium" All World Academy of Sciences Publishers, Munich, 2010
[4] Lev Gelimson. Fundamental Science of Estimation. The "Collegium" All World Academy of Sciences Publishers, Munich, 2010
[5] Lev Gelimson. Fundamental Science of Approximation. The "Collegium" All World Academy of Sciences Publishers, Munich, 2010
[6] Lev Gelimson. Fundamental Science of Data Modeling. The "Collegium" All World Academy of Sciences Publishers, Munich, 2010
[7] Lev Gelimson. Fundamental Science of Data Processing. The "Collegium" All World Academy of Sciences Publishers, Munich, 2010
[8] Lev Gelimson. Corrections and Generalizations of the Least Square Method. In: Review of Aeronautical Fatigue Investigations in Germany during the Period May 2007 to April 2009, Ed. Dr. Claudio Dalle Donne, Pascal Vermeer, CTO/IW/MS-2009-076 Technical Report, International Committee on Aeronautical Fatigue, ICAF 2009, EADS Innovation Works Germany, 2009, 59-60
[9] Lev Gelimson. Fundamental Science of Solving General Problems. The "Collegium" All World Academy of Sciences Publishers, Munich, 2010