Iterative Polar Theories in Fundamental Sciences of Estimation, Approximation, Data Modeling and Processing

by

© Ph. D. & Dr. Sc. Lev Gelimson

Academic Institute for Creating Fundamental Sciences (Munich, Germany)

Mathematical Journal

of the "Collegium" All World Academy of Sciences

Munich (Germany)

11 (2011), 8

In data modeling, processing, estimation, and approximation [1], data scatter is relatively great in many cases and often allows no discriminating between different analytic approximation expression types or forms, e.g. linear, piecewise linear, parabolic, hyperbolic, circumferential, elliptic, sinusoidal, etc. for two-dimensional data, or linear, piecewise linear, paraboloidal, hyperboloidal, spherical, ellipsoidal, etc. for three-dimensional data. In such a situation, a purely analytic approach alone is blind and often leads to false results. Without graphically interpreting the given data, it is almost impossible to discover the relations between them and their laws so as to provide adequate data processing. To reasonably approximate the given data analytically, it is necessary and very useful to create conditions for efficiently applying analytic methods.

As ever, the fundamental principle of tolerable simplicity [2-7] plays a key role.

In overmathematics [2-7] and fundamental sciences of estimation [8-13], approximation [14, 15], as well as data modeling [16] and processing [17], to clearly graphically interpret the given three-dimensional data, it is very useful to provide their two-dimensional modeling via suitable data transformation if possible. For example, this is the case by strength data due to fundamental science of strength data unification, modeling, analysis, processing, approximation, and estimation [18, 19].

Iterative polar theories in fundamental sciences of estimation, approximation, data modeling and processing are applicable to practically arbitrary given data for which there exists a certain probable approximation law. In particular, it is also possible to combine these theories with other theories transforming the given data and to use not only their end data but also their intermediate data. In particular, preliminarily apply graph-analytic theories [20], principal graph types theories [21], invariance theories [22], groupwise centralization theories [23], bounds mean theories [24], linear two-dimensional [25] and three-dimensional [26] theories of moments of inertia, as well as general theories of moments of inertia [27] in fundamental sciences of estimation, approximation, data modeling and processing to the given data.

Iterative polar theories complement (supplement) all these and other theories and consider all the given data points.

The ideas and essence of iterative polar theories are as follows:

1. Determining the least closed area containing all the data points.

2. Very roughly purely graphically determining a certain probable approximation law with the corresponding probable approximation graph, namely line (straight line, broken straight line, or curve) in the two-dimensional case, as well as plane, planes parts union, or surface in the three-dimensional case.

3. Dividing the area boundary graph into two subgraphs, namely the above and below subgraphs (if this probable approximation law graph is nonclosed) or the outer and inner subgraphs (if this probable approximation law graph is closed), thus building both extreme levels of the given data points.

4. Determining the type [21] of this law graph.

4.1. If this type is linear, namely a straight line in the two-dimensional case and a plane in the three-dimensional case, then by the fundamental principle of tolerable simplicity [2-7], first apply precisely linear theories, e.g. least squared distance theories [28, 29], the least biquadratic method [30], and quadratic mean theories for two [31] and three [32] dimensions in fundamental sciences of estimation, approximation, data modeling and processing to the given data points. If there are bounds and limitations which allow using certain predefined parts only, clear complications arise. This holds even in the simplest case of a straight line, whose limited parts can be its open, half-open, or closed intervals (segments), as well as more complicated parts. All the more, bounds and limitations for a plane in the three-dimensional case lead to many more complications of higher levels. For example, if the limited parts of a straight line or a plane do not contain the base of the perpendicular from a given data point onto the graph, then the graph point nearest to a given data point may be not unique but multiple, and it is also possible that there is no such nearest point at all, e.g. when any arbitrarily small neighborhood (vicinity) of the base of that perpendicular contains points of the admissible parts of the graph. The same can hold not only for this linear graph type. Secondly, it is also possible to apply general central normalization theories to the given data points in order to investigate whether theory nonlinearity can give essential advantages compared with linear theories.
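The least squared distance theories cited above are developed in [28, 29]; as a standard analogue outside those theories, a line minimizing the sum of squared perpendicular distances (orthogonal regression) can be sketched as follows. The function name and the use of a singular value decomposition are illustrative assumptions, not the cited theories themselves.

```python
import numpy as np

def orthogonal_line_fit(points):
    """Fit a straight line minimizing the sum of squared perpendicular
    distances from the given two-dimensional data points.

    Returns (centroid, direction): the fitted line passes through the
    centroid along the unit direction vector (the principal axis)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The best direction is the right singular vector of the centered
    # data with the largest singular value (principal component).
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

# Collinear points yield a perfect fit along their common line.
c, d = orthogonal_line_fit([(0, 0), (1, 1), (2, 2)])
```

For exactly collinear data the perpendicular distances all vanish, so the fitted direction is the common line's direction (up to sign).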

4.2. If this type is piecewise linear, namely a broken straight line in the two-dimensional case or union of limited parts of planes in the three-dimensional case, then by the fundamental principle of tolerable simplicity [2-7], first divide the given data into appropriate parts and consider them separately along with the corresponding parts of linear graphs.

4.3. If this type is quasilinear with equal curvature signs, namely a result of relatively slightly deforming (bending, twisting, distorting, or warping) parts of linear graphs (straight lines, usual two-dimensional and multidimensional planes) to obtain arcs and surfaces without deflection (changing curvature signs), then by the fundamental principle of tolerable simplicity [2-7], along with Cartesian coordinate systems with their transformations equalizing the generally different mean curvatures, using polar coordinate systems with either predefined (fixed) or variable poles can bring additional advantages. To select such poles, preliminarily consider probable centers of curvature and ranges of their variability.

4.4. If this type is quasilinear with piecewise equal curvature signs, namely combining limited parts of quasilinear graphs with equal curvature signs, then by the fundamental principle of tolerable simplicity [2-7], first divide the given data into appropriate parts and consider them separately along with the corresponding parts of quasilinear graphs with piecewise equal curvature signs.

4.5. If this type is closed quasilinear with equal curvature signs which contains, e.g., circumferences and ellipses in the two-dimensional case, as well as spheres and ellipsoids in the three-dimensional case, then by the fundamental principle of tolerable simplicity [2-7], along with Cartesian coordinate systems with their transformations equalizing the sums of the second powers of the homonymous coordinates of the given data points, using polar coordinate systems can bring additional advantages, too. To begin with, select the given data center as a pole.
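Selecting the given data center as a pole can be sketched as follows (a minimal illustration; the centroid is taken here as one natural realization of the data center):

```python
def center_as_pole(points):
    """Shift the data so that their centroid becomes the pole (origin)
    of the polar coordinate system."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    return (cx, cy), [(x - cx, y - cy) for x, y in points]

# Four points on an axis-aligned rectangle: the centroid is its center.
pole, shifted = center_as_pole([(1, 0), (3, 0), (3, 2), (1, 2)])
```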

Consider such an initially introduced polar coordinate system.

By using a polar coordinate system, it is also possible to additionally introduce the Cartesian interpretation of this polar coordinate system, e.g. with the polar angle as the abscissa and the polar radius (distance) as the ordinate in the two-dimensional case (adding the further polar angles in the multidimensional case).

Further, use precisely such a Cartesian-polar coordinate system OCprφ and its Cartesian-polar interpretation of the given data. Here there is no rotation invariance. Hence apply either the least biquadratic method [30] or, much better, the quadratic mean theories for two [31] and three [32] dimensions in fundamental sciences of estimation, approximation, data modeling and processing to the given data points.

Given n (n ∈ N+ = {1, 2, ...}, n > 2) data points

[j=1n (xj , yj)] = {(x1 , y1), (x2 , y2), ... , (xn , yn)}

with any real coordinates in the initial Cartesian two-dimensional coordinate system Oxy .

1. Determine the polar radius rj and the polar angle φj of the jth data point in the initial polar two-dimensional coordinate system Orφ .
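Step 1 can be sketched with the standard conversion formulas; math.hypot and math.atan2 handle all quadrants:

```python
import math

def to_polar(points):
    """Convert Cartesian data points (x_j, y_j) to their polar radii r_j
    and polar angles phi_j in the coordinate system O r phi."""
    return [(math.hypot(x, y), math.atan2(y, x)) for x, y in points]

pairs = to_polar([(1.0, 0.0), (0.0, 2.0)])
```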

2. Consider the Cartesian-polar interpretation of the given data

[j=1n (rj , φj)] = {(r1 , φ1), (r2 , φ2), ... , (rn , φn)}

as their initial set, which has never, i.e. 0 times, been transformed (see this transformation further):

{j=1n [rj(0), φj(0)]} = {[r1(0), φ1(0)], [r2(0), φ2(0)], ... , [rn(0), φn(0)]}.

3. Consider the linear type of Cartesian-polar dependences (a and b are any real numbers)

r(φ) = aφ + b .

4. Now show the general kth (k = 0, 1, 2, ...) step (stage) of the following iteration algorithm, which is to be systematically applied:

I. Consider the k times transformed set

{j=1n [rj(k), φj(k)]} = {[r1(k), φ1(k)], [r2(k), φ2(k)], ... , [rn(k), φn(k)]},

or, since this transformation always conserves the polar angles,

{j=1n [rj(k), φj]} = {[r1(k), φ1], [r2(k), φ2], ... , [rn(k), φn]},

of the Cartesian-polar interpretation of the given data.

II. Consider the same linear type of Cartesian-polar dependences

r(φ) = aφ + b .

III. Provide the best linear Cartesian-polar approximation

r(k)(φ) = a(k)φ + b(k)

to the k times transformed set

{j=1n [rj(k), φj(k)]} = {[r1(k), φ1(k)], [r2(k), φ2(k)], ... , [rn(k), φn(k)]},

or, since this transformation always conserves the polar angles,

{j=1n [rj(k), φj]} = {[r1(k), φ1], [r2(k), φ2], ... , [rn(k), φn]},

of the Cartesian-polar interpretation of the given data.

IV. Successively apply the inverse transformations (to all the already applied direct transformations separately), namely multiply by the factor [a(k-h)φ + b(k-h)] at the hth step (stage) (h = 1, 2, ... , k) of applying these inverse transformations to the last obtained best linear Cartesian-polar approximation

r(k)(φ) = a(k)φ + b(k)

to the k times transformed set

{j=1n [rj(k), φj(k)]} = {[r1(k), φ1(k)], [r2(k), φ2(k)], ... , [rn(k), φn(k)]},

or, since this transformation always conserves the polar angles,

{j=1n [rj(k), φj]} = {[r1(k), φ1], [r2(k), φ2], ... , [rn(k), φn]},

of the Cartesian-polar interpretation of the given data.

Nota bene: Do not confuse these hth steps (stages) (h = 1, 2, ...) of applying these inverse transformations with the kth (k = 0, 1, 2, ...) steps (stages) of this iteration algorithm. For exactly one kth iteration step (stage), there are exactly k inverse transformation steps (stages).

V. Consider the obtained result R(k)(φ) as the kth approximation to the desired probable law

R = R(φ) .

VI. Apply the following transformation to the last already obtained transformed set of the Cartesian-polar interpretation of the given data:

VI.1. Always conserve their polar angles:

φj(0) = φj(1) = φj(2) = ... = φj(k) = ... = φj (j = 1, 2, ... , n ; k = 0, 1, 2, ...).

VI.2. Divide their polar radii by the polar radii of the last already obtained best linear Cartesian-polar approximation by the same polar angles:

rj(k+1) = rj(k) / r(k)(φj) = rj(k) / [a(k)φj + b(k)] (j = 1, 2, ... , n ; k = 0, 1, 2, ...).

VII. Consider the k+1 times transformed set

{j=1n [rj(k+1), φj(k+1)]} = {[r1(k+1), φ1(k+1)], [r2(k+1), φ2(k+1)], ... , [rn(k+1), φn(k+1)]},

or, since this transformation always conserves the polar angles,

{j=1n [rj(k+1), φj]} = {[r1(k+1), φ1], [r2(k+1), φ2], ... , [rn(k+1), φn]},

of the Cartesian-polar interpretation of the given data.
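A single kth step of the algorithm above can be sketched as follows. Here the "best linear Cartesian-polar approximation" is realized by ordinary least squares on the pairs (φj , rj(k)); this is an illustrative choice, since the theories cited in [30-32] admit other realizations.

```python
def iteration_step(r, phi):
    """One step of the iterative polar algorithm: fit r = a*phi + b by
    ordinary least squares, then divide each polar radius by the fitted
    value at the same angle (the polar angles are conserved).

    Returns (a, b, transformed_radii)."""
    n = len(r)
    mphi = sum(phi) / n
    mr = sum(r) / n
    denom = sum((p - mphi) ** 2 for p in phi)
    a = sum((p - mphi) * (q - mr) for p, q in zip(phi, r)) / denom
    b = mr - a * mphi
    r_next = [q / (a * p + b) for p, q in zip(phi, r)]
    return a, b, r_next

# Radii exactly linear in the angle: the fit is exact (r = 2*phi + 1),
# and every transformed radius equals 1.
a, b, r1 = iteration_step([3.0, 5.0, 7.0], [1.0, 2.0, 3.0])
```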

Now explicitly show the 0th (k = 0) and 1st (k = 1) steps (stages) of applying this iteration algorithm.

5. Provide the best linear Cartesian-polar approximation

r(0)(φ) = a(0)φ + b(0)

to the 0 times transformed set

{j=1n [rj(0), φj(0)]} = {[r1(0), φ1(0)], [r2(0), φ2(0)], ... , [rn(0), φn(0)]}

of the Cartesian-polar interpretation of the given data.

6. Successively apply the inverse transformations (to all the already applied direct transformations separately), namely multiply by the factor [a(0-h)φ + b(0-h)] at the hth step (stage) (h = 1, 2, ...) of applying these inverse transformations to the last obtained best linear Cartesian-polar approximation

r(0)(φ) = a(0)φ + b(0)

to the 0 times transformed set

{j=1n [rj(0), φj(0)]} = {[r1(0), φ1(0)], [r2(0), φ2(0)], ... , [rn(0), φn(0)]},

or, since this transformation always conserves the polar angles,

{j=1n [rj(0), φj]} = {[r1(0), φ1], [r2(0), φ2], ... , [rn(0), φn]},

of the Cartesian-polar interpretation of the given data.

7. The obtained result is

R(0)(φ) = r(0)(φ) = a(0)φ + b(0)

because no direct transformation has been used yet, and hence no inverse transformation has to be used. Purely formally, we should multiply [a(0)φ + b(0)] by the hth factor [a(-h)φ + b(-h)] at the hth step (stage) (h = 1, 2, ...) of applying these inverse transformations. None of these factors has been defined, and hence they do not exist. In overmathematics [2-7], such a void (empty) operation is no operation at all, i.e. the result is the unique existent factor [a(0)φ + b(0)] itself. Formally,

[a(0)φ + b(0)][a(-1)φ + b(-1)][a(-2)φ + b(-2)][a(-3)φ + b(-3)]... = [a(0)φ + b(0)].

Nota bene: multiplying [a(0)φ + b(0)] by the nonexistent factor [a(-1)φ + b(-1)] is multiplying by the void (empty) factor (or product), which has to be considered equal to 1 in order to bring no changes by multiplication. Similarly, the void (empty) addend (summand) (or sum) has to be considered equal to 0 in order to bring no changes by addition. It would be false to consider the void (empty) factor (or product) equal to 0 because this would generally bring changes, which must be impossible for any void (empty) operation.
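This convention agrees with the standard one for void operations; for instance, Python's math.prod returns 1 on an empty sequence, while sum returns 0:

```python
import math

# The void (empty) product is 1 and the void (empty) sum is 0, so
# multiplying by a void factor or adding a void addend brings no changes.
empty_product = math.prod([])
empty_sum = sum([])
```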

8. Consider the obtained result as the 0th approximation to the desired probable law

R = R(φ) .

9. Apply the following transformation to the last already obtained transformed set of the Cartesian-polar interpretation of the given data:

9.1. Always conserve their polar angles:

φj(0) = φj(1) = φj(2) = ... = φj(k) = ... = φj (j = 1, 2, ... , n ; k = 0, 1, 2, ...).

9.2. Divide their polar radii by the polar radii of the last already obtained best linear Cartesian-polar approximation by the same polar angles:

rj(1) = rj(0) / r(0)(φj) = rj(0) / [a(0)φj + b(0)] (j = 1, 2, ... , n).

10. Consider the 1 time transformed set

{j=1n [rj(1), φj(1)]} = {[r1(1), φ1(1)], [r2(1), φ2(1)], ... , [rn(1), φn(1)]},

or, since this transformation always conserves the polar angles,

{j=1n [rj(1), φj]} = {[r1(1), φ1], [r2(1), φ2], ... , [rn(1), φn]},

of the Cartesian-polar interpretation of the given data.

11. Consider the same type of linear Cartesian-polar dependences

r(φ) = aφ + b .

12. Provide the best linear Cartesian-polar approximation

r(1)(φ) = a(1)φ + b(1)

to the 1 time transformed set

{j=1n [rj(1), φj(1)]} = {[r1(1), φ1(1)], [r2(1), φ2(1)], ... , [rn(1), φn(1)]},

or, since this transformation always conserves the polar angles,

{j=1n [rj(1), φj]} = {[r1(1), φ1], [r2(1), φ2], ... , [rn(1), φn]},

of the Cartesian-polar interpretation of the given data.

13. Successively apply the inverse transformations (to all the already applied direct transformations separately), namely multiply by the factor [a(1-h)φ + b(1-h)] at the hth step (stage) (h = 1, 2, ...) of applying these inverse transformations to the last obtained best linear Cartesian-polar approximation

r(1)(φ) = a(1)φ + b(1)

to the 1 time transformed set

{j=1n [rj(1), φj(1)]} = {[r1(1), φ1(1)], [r2(1), φ2(1)], ... , [rn(1), φn(1)]},

or, since this transformation always conserves the polar angles,

{j=1n [rj(1), φj]} = {[r1(1), φ1], [r2(1), φ2], ... , [rn(1), φn]},

of the Cartesian-polar interpretation of the given data.

Drop all the void (empty) multiplications for h = 2, 3, ..., which bring nothing, and apply the inverse transformation (to the following transformation) once (k = h = 1), namely multiply by the factor [a(0)φ + b(0)] at the 1st step (stage) of applying this inverse transformation to the last obtained best linear Cartesian-polar approximation.

14. The obtained result is

R(1)(φ) = r(0)(φ)r(1)(φ) = [a(0)φ + b(0)][a(1)φ + b(1)].

15. Consider the obtained result as the 1st approximation to the desired probable law

R = R(φ) .

16. Apply the following transformation to the last already obtained transformed set of the Cartesian-polar interpretation of the given data:

16.1. Always conserve their polar angles:

φj(0) = φj(1) = φj(2) = ... = φj(k) = ... = φj (j = 1, 2, ... , n ; k = 0, 1, 2, ...).

16.2. Divide their polar radii by the polar radii of the last already obtained best linear Cartesian-polar approximation by the same polar angles:

rj(2) = rj(1) / r(1)(φj) = rj(1) / [a(1)φj + b(1)] (j = 1, 2, ... , n).

17. Consider the 2 times transformed set

{j=1n [rj(2), φj(2)]} = {[r1(2), φ1(2)], [r2(2), φ2(2)], ... , [rn(2), φn(2)]},

or, since this transformation always conserves the polar angles,

{j=1n [rj(2), φj]} = {[r1(2), φ1], [r2(2), φ2], ... , [rn(2), φn]},

of the Cartesian-polar interpretation of the given data.

Clearly, the kth approximation to the desired probable law

R = R(φ)

is

R(k)(φ) = r(0)(φ)r(1)(φ)...r(k)(φ) = [a(0)φ + b(0)][a(1)φ + b(1)]...[a(k)φ + b(k)],

which can be rigorously deductively proved via the method of mathematical induction.

Nota bene: Iteration convergence is not necessary here. It is possible to stop the iterations at any kth iteration step (stage) to obtain a polynomial approximation of the (k+1)th degree to the desired probable law. Each next iteration step (stage) shows whether it brings advantages, taking both approximation quality improvement and expression complication into account.
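The whole iteration with its product accumulation R(k)(φ) = [a(0)φ + b(0)][a(1)φ + b(1)]...[a(k)φ + b(k)] can be sketched as follows, again realizing each best linear Cartesian-polar approximation by ordinary least squares (an illustrative assumption):

```python
import math

def iterative_polar(r, phi, steps):
    """Run the iterative polar algorithm for the given number of fits.

    Returns the list of fitted coefficient pairs [(a(0), b(0)), ...];
    the kth approximation is the product of the first k+1 linear factors.
    """
    coeffs = []
    r = list(r)
    n = len(r)
    mphi = sum(phi) / n
    for _ in range(steps):
        mr = sum(r) / n
        denom = sum((p - mphi) ** 2 for p in phi)
        a = sum((p - mphi) * (q - mr) for p, q in zip(phi, r)) / denom
        b = mr - a * mphi
        coeffs.append((a, b))
        # Direct transformation: conserve angles, divide radii.
        r = [q / (a * p + b) for p, q in zip(phi, r)]
    return coeffs

def approximation(coeffs, phi):
    """Evaluate R(k)(phi) as the product of all fitted linear factors."""
    return math.prod(a * phi + b for a, b in coeffs)

coeffs = iterative_polar([3.0, 5.0, 7.0], [1.0, 2.0, 3.0], steps=2)
```

Running the loop for k+1 fits yields the kth approximation as a product of k+1 linear factors, matching the product formula above.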

The essence of iterative polar theories is shown above in the simplest case of the linearity of the desired probable law at every iteration step (stage). But such iteration also provides for considering polynomial laws of higher degrees. Naturally, it is possible to also take data point polar angle variation into account, as well as arbitrary nonlinear desired probable laws at every iteration step (stage). The multiplication-division combination is not the unique choice, either. The addition-subtraction combination is also possible, as well as arbitrary operation combinations and their variability at every iteration step (stage).

Generally, we can consider any kth step (stage) approximation law

r = kr(φ)

where kr(φ) are any functions depending on the step (stage) number,

as well as any invertible kth step (stage) transformation functions (operators) kT depending on the step (stage) number.

Further natural generalization is also possible, e.g. any coordinate systems also depending on the step (stage) number.

In many practically important cases, these simplest graph types suffice for data modeling, processing, estimation, and approximation. Otherwise, additionally introduce more complicated graph types, e.g. quantigraph types containing quantigraphs belonging to the quantisets building quantialgebras in quantianalysis in overmathematics [2-7].

Iterative polar theories consider all the given data points and provide relatively simply approximating all the given data.

To improve data modeling, processing, estimation, and approximation, it is also possible to preliminarily locally represent each data point group with its center, whose quantity equals the number of the points in this group, and then apply both graphical and analytic approaches to the already groupwise centralized data, namely to a quantiset [2-7] of their local groupwise centers.
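Groupwise centralization as described above (developed in [23]) can be sketched minimally as follows; each group is replaced by its center carrying the number of its points as its quantity:

```python
def groupwise_centers(groups):
    """Represent each data point group by its local center together with
    the number of points in the group (its quantity, in quantiset terms).

    groups: a list of point groups, each a list of (x, y) pairs.
    Returns a list of ((cx, cy), quantity) pairs."""
    result = []
    for group in groups:
        n = len(group)
        cx = sum(x for x, _ in group) / n
        cy = sum(y for _, y in group) / n
        result.append(((cx, cy), n))
    return result

centers = groupwise_centers([[(0, 0), (2, 0)], [(5, 5)]])
```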

The variety of iterative polar theories and their variability provide the flexibility of their algorithms.

These theories are very efficient in estimation, approximation, data modeling and processing.

Acknowledgements to Anatolij Gelimson for our constructive discussions on coordinate system transformation invariances and his very useful remarks.

References

[1] Encyclopaedia of Mathematics. Ed. M. Hazewinkel. Volumes 1 to 10. Kluwer Academic Publ., Dordrecht, 1988-1994

[2] Lev Gelimson. Basic New Mathematics. Monograph. Drukar Publishers, Sumy, 1995

[3] Lev Gelimson. General Analytic Methods. Abhandlungen der WIGB (Wissenschaftlichen Gesellschaft zu Berlin), 3 (2003), Berlin

[4] Lev Gelimson. Elastic Mathematics. Abhandlungen der WIGB (Wissenschaftlichen Gesellschaft zu Berlin), 3 (2003), Berlin

[5] Lev Gelimson. Elastic Mathematics. General Strength Theory. Mathematical, Mechanical, Strength, Physical, and Engineering Monograph. The “Collegium” All World Academy of Sciences Publishers, Munich (Germany), 2004

[6] Lev Gelimson. Providing Helicopter Fatigue Strength: Flight Conditions. In: Structural Integrity of Advanced Aircraft and Life Extension for Current Fleets – Lessons Learned in 50 Years After the Comet Accidents, Proceedings of the 23rd ICAF Symposium, Dalle Donne, C. (Ed.), 2005, Hamburg, Vol. II, 405-416

[7] Lev Gelimson. Overmathematics: Fundamental Principles, Theories, Methods, and Laws of Science. Mathematical Monograph. The “Collegium” All World Academy of Sciences Publishers, Munich (Germany), 2009

[8] Lev Gelimson. General estimation theory. Transactions of the Ukraine Glass Institute, 1 (1994), 214-221

[9] Lev Gelimson. General Estimation Theory. Mathematical Monograph. The “Collegium” All World Academy of Sciences Publishers, Munich (Germany), 2001

[10] Lev Gelimson. General Estimation Theory Fundamentals. Mathematical Journal of the “Collegium” All World Academy of Sciences, Munich (Germany), 1 (2001), 3

[11] Lev Gelimson. General Estimation Theory Fundamentals (along with its line by line translation into Japanese). Mathematical Journal of the “Collegium” All World Academy of Sciences, Munich (Germany), 9 (2009), 1

[12] Lev Gelimson. General Estimation Theory (along with its line by line translation into Japanese). Mathematical Monograph. The “Collegium” All World Academy of Sciences Publishers, Munich (Germany), 2011

[13] Lev Gelimson. Fundamental Science of Estimation. Mathematical Monograph. The “Collegium” All World Academy of Sciences Publishers, Munich (Germany), 2011

[14] Lev Gelimson. General Problem Theory. Abhandlungen der WIGB (Wissenschaftlichen Gesellschaft zu Berlin), 3 (2003), Berlin

[15] Lev Gelimson. Fundamental Science of Approximation. Mathematical Monograph. The “Collegium” All World Academy of Sciences Publishers, Munich (Germany), 2011

[16] Lev Gelimson. Fundamental Science of Data Modeling. Mathematical Monograph. The “Collegium” All World Academy of Sciences Publishers, Munich (Germany), 2011

[17] Lev Gelimson. Fundamental Science of Data Processing. Mathematical Monograph. The “Collegium” All World Academy of Sciences Publishers, Munich (Germany), 2011

[18] Lev Gelimson. Fundamental Science of Strength Data Unification, Modeling, Analysis, Processing, Approximation, and Estimation (Essential). Strength and Engineering Journal of the “Collegium” All World Academy of Sciences, Munich (Germany), 10 (2010), 3

[19] Lev Gelimson. Fundamental Science of Strength Data Unification, Modeling, Analysis, Processing, Approximation, and Estimation (Fundamentals). Strength Monograph. The “Collegium” All World Academy of Sciences Publishers, Munich (Germany), 2010

[20] Lev Gelimson. Graph-Analytic Theories in Fundamental Sciences of Estimation, Approximation, Data Modeling and Processing (Essential). Mathematical Journal of the “Collegium” All World Academy of Sciences, Munich (Germany), 11 (2011), 2

[21] Lev Gelimson. Principal Graph Types Theories in Fundamental Sciences of Estimation, Approximation, Data Modeling and Processing (Essential). Mathematical Journal of the “Collegium” All World Academy of Sciences, Munich (Germany), 11 (2011), 3

[22] Lev Gelimson. Data, Problem, Method, and Result Invariance Theories in Fundamental Sciences of Estimation, Approximation, Data Modeling and Processing, and Solving General Problems (Essential). Mathematical Journal of the “Collegium” All World Academy of Sciences, Munich (Germany), 11 (2011), 1

[23] Lev Gelimson. Groupwise Centralization Theories in Fundamental Sciences of Estimation, Approximation, Data Modeling and Processing (Essential). Mechanical and Physical Journal of the “Collegium” All World Academy of Sciences, Munich (Germany), 11 (2011), 1

[24] Lev Gelimson. Bounds Mean Theories in Fundamental Sciences of Estimation, Approximation, Data Modeling and Processing (Essential). Mathematical Journal of the “Collegium” All World Academy of Sciences, Munich (Germany), 11 (2011), 4

[25] Lev Gelimson. Linear Two-Dimensional Theories of Moments of Inertia in Fundamental Sciences of Estimation, Approximation, and Data Processing (Essential). Mechanical and Physical Journal of the “Collegium” All World Academy of Sciences, Munich (Germany), 10 (2010), 1

[26] Lev Gelimson. Linear Three-Dimensional Theories of Moments of Inertia in Fundamental Sciences of Estimation, Approximation, and Data Processing (Essential). Mechanical and Physical Journal of the “Collegium” All World Academy of Sciences, Munich (Germany), 10 (2010), 2

[27] Lev Gelimson. General Theories of Moments of Inertia in Fundamental Sciences of Estimation, Approximation, Data Modeling and Processing (Essential). Mechanical and Physical Journal of the “Collegium” All World Academy of Sciences, Munich (Germany), 10 (2010), 3

[28] Lev Gelimson. Least Squared Distance Theory in Fundamental Science of Solving General Problems (Essential). Mathematical Journal of the “Collegium” All World Academy of Sciences, Munich (Germany), 10 (2010), 1

[29] Lev Gelimson. Least Squared Distance Theories in Fundamental Sciences of Estimation, Approximation, and Data Processing (Essential). Mathematical Journal of the “Collegium” All World Academy of Sciences, Munich (Germany), 10 (2010), 2

[30] Lev Gelimson. Least Biquadratic Method in Fundamental Sciences of Estimation, Approximation, and Data Processing (Essential). Mathematical Journal of the “Collegium” All World Academy of Sciences, Munich (Germany), 10 (2010), 3

[31] Lev Gelimson. Quadratic Mean Theories for Two Dimensions in Fundamental Sciences of Approximation and Data Processing (Essential). Mathematical Journal of the “Collegium” All World Academy of Sciences, Munich (Germany), 10 (2010), 4

[32] Lev Gelimson. Quadratic Mean Theories for Three Dimensions in Fundamental Sciences of Approximation and Data Processing (Essential). Mathematical Journal of the “Collegium” All World Academy of Sciences, Munich (Germany), 10 (2010), 5

[33] Lev Gelimson. Circumferential Theories in Fundamental Sciences of Estimation, Approximation, Data Modeling and Processing (Essential). Mathematical Journal of the “Collegium” All World Academy of Sciences, Munich (Germany), 11 (2011), 5

[34] Lev Gelimson. Spherical Theories in Fundamental Sciences of Estimation, Approximation, Data Modeling and Processing (Essential). Mathematical Journal of the “Collegium” All World Academy of Sciences, Munich (Germany), 11 (2011), 6

[35] Lev Gelimson. Iterative Polar Theories in Fundamental Sciences of Estimation, Approximation, Data Modeling and Processing (Essential). Mathematical Journal of the “Collegium” All World Academy of Sciences, Munich (Germany), 11 (2011), 7