Unimathematical Approximation Fundamental Sciences System (Essential)
by
© Ph. D. & Dr. Sc. Lev Gelimson
Academic Institute for Creating Fundamental Sciences (Munich, Germany)
Mathematical Journal
of the "Collegium" All World Academy of Sciences
Munich (Germany)
12 (2012), 8
UDC 501:510
2010 Math. Subj. Classification: primary 00A71; secondary 03E10, 03E72, 08B99, 26E30, 28A75.
Keywords: Overmathematics, general problem fundamental sciences system, pseudosolution, quantiset, subproblem, strategy, invariance, quantibound, estimation, trend, bisector, iteration.
In the very fundamentals of classical applied and computational mathematics [1], with their own evident cardinal defects of principle, there were well-known attempts to consider some separate objects and systems with chosen complications, e.g., approximation and finite overdetermined sets of equations. To consider them at all, classical mathematics has only very limited, nonuniversal, and inadequate concepts and methods, such as the absolute error, the relative error, and the least square method (LSM) [1] by Legendre and Gauss ("the king of mathematics"), which produce errors of their own and even dozens of principal mistakes. The same holds for other classical mathematics estimators and methods.
1. The absolute error Δ [1] alone is noninvariant and insufficient for quality estimation: it gives, for example, the same result 1 both for the acceptable formal (correct or not) equality 1000 =? 999 and for the inadmissible formal equality 1 =? 0. Further, the absolute error is not invariant under equivalent transformations of a problem because, for instance, when a formal equality is multiplied by a nonzero number, the absolute error is multiplied by the norm (modulus, absolute value) of that number.
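Both defects can be reproduced numerically. The following minimal Python sketch is our own illustration (the function name absolute_error is introduced here only for demonstration):

```python
# Minimal sketch (illustrative only): the absolute error of a formal
# equality a =? b is |a - b| and is not invariant under equivalent
# transformations such as multiplying both sides by a nonzero number.

def absolute_error(a, b):
    """Absolute error of the formal equality a =? b."""
    return abs(a - b)

# The same absolute error 1 for a nearly correct and a clearly wrong equality:
print(absolute_error(1000, 999))  # 1  (1000 =? 999, acceptable)
print(absolute_error(1, 0))       # 1  (1 =? 0, inadmissible)

# Multiplying the equality 1000 =? 999 by 5 (an equivalent transformation)
# multiplies the absolute error by |5|:
print(absolute_error(5 * 1000, 5 * 999))  # 5, not 1
```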
2. The relative error δ [1] should play a supplementary role. But even in the case of the simplest formal equality a =? b with two numbers, there are at once two proposals: to use either δ1 = |a - b|/|a| or δ2 = |a - b|/|b| as the estimating fraction. This is a generally inadmissible uncertainty, acceptable only if the ratio a/b is close to 1. Further, the relative error is intended always to belong to the segment [0, 1]. But for 1 =? 0, choosing 0 as the denominator gives the result +∞, and for 1 =? -1, each denominator choice gives the result 2. Hence the relative error has a restricted range of applicability, amounting to equalities of two elements whose ratio is close to 1. For more complicated equalities with at least three elements, e.g., 100 - 99 =? 0 or 1 - 2 + 3 - 4 =? -1, the choice of a denominator seems altogether vague. This is why the relative error is uncertain in principle, has a very restricted domain of applicability, and is practically used in the simplest case only and very seldom for variables and functions.
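The denominator ambiguity and the range violations can likewise be checked directly. Here is a minimal Python sketch, again our own illustration, with the function names delta1 and delta2 introduced only to match the two fractions above:

```python
# Minimal sketch (illustrative only): the two competing estimating
# fractions for the relative error of a formal equality a =? b.

def delta1(a, b):
    """Relative error candidate with |a| as the denominator."""
    return abs(a - b) / abs(a)

def delta2(a, b):
    """Relative error candidate with |b| as the denominator."""
    return abs(a - b) / abs(b)

print(delta1(1000, 999), delta2(1000, 999))  # ~0.001 for both: a/b is near 1
print(delta1(1, -1), delta2(1, -1))          # 2.0 2.0: outside the segment [0, 1]

try:
    delta2(1, 0)                             # 1 =? 0 with 0 as the denominator
except ZeroDivisionError:
    print("1 =? 0: the estimating fraction degenerates (+inf)")
```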
3. The least square method [1] can give adequate results in very special cases only. Its deep analysis [2] by the principles of constructive philosophy, overmathematics, and other fundamental mathematical sciences has discovered many fundamental defects both in the essence (as causes) and in the applicability (as effects) of this method, which is adequate in some rare special cases only and even in them needs thorough adequacy analysis. The method is based on the absolute error alone, which is not invariant under equivalent transformations of a problem, and ignores the possibly noncoinciding physical dimensions (units) of the relations in a problem. The method does not correlate the deviations of the approximations from the approximated objects with these objects themselves, simply mixes those deviations without adequately weighing them, and treats equal changes of the squares of deviations with relatively smaller and greater moduli (absolute values) as equivalent. The method foresees no iteration, is based on a fixed algorithm admitting no a priori flexibility, and provides no a posteriori adaptation of its own. The method uses no invariant estimation of approximation, considers no different approximations, foresees no comparison of different approximations, and provides no choice of the best approximation among different ones.
These defects in the essence of the method lead to many fundamental shortcomings in its applicability. Among them are: loss of applicability sense for a set of equations with different physical dimensions (units); no objective sense of the result, which is noninvariant under equivalent transformations of a problem; restriction of the class of acceptable equivalent transformations of a problem; no essentially unique correction of the loss of applicability sense; possible ignoring of subproblems of a problem; paradoxical approximation; no analysis of the deviations of the result; no adequate estimation and evaluation of its quality; no refinement of the results; no choice; and the illusion of the best quasisolution as the highest truth, fully ungrounded and inadequate.
Additionally, consider the simplest and typical least square method [1] approach. Minimizing the sum of the squared differences between the preselected coordinates alone (e.g., the ordinates in a two-dimensional problem) of the graph of the desired approximation function and of every one of the given data points depends on this preselection, ignores the remaining coordinates, and provides no coordinate system rotation invariance and hence no objective sense of the result (see the sketch after this paragraph). Moreover, the method is correct only for a constant approximation or for no data scatter, and gives systematic errors that increase together with the data scatter and with the deviation (namely declination) of an approximation from a constant. Nevertheless, experimental data are inexact, and their amount is always taken greater than the number of parameters in an approximating function, which is often geometrically interpretable as a straight line or curve, plane or surface. That is why this method was possibly the most important one for any data processing and seemed to be irreplaceable.
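The dependence on the preselected coordinate can be verified numerically. The following minimal sketch, our own illustration using numpy and not taken from [1] or [2], fits the same scattered data twice, once with the ordinates preselected and once with the abscissas, and obtains two different straight lines:

```python
# Minimal sketch (illustrative only): ordinary least squares minimizes
# deviations along one preselected coordinate only, so the fitted line
# depends on that preselection.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.3, 1.8, 3.4, 3.9])   # scattered data, roughly y = x

# Fit y = b*x + a by minimizing vertical deviations (ordinates preselected):
b_yx, a_yx = np.polyfit(x, y, 1)

# Preselect the other coordinate: fit x = c*y + d, then rewrite as
# y = x/c - d/c to compare with the first line:
c_xy, d_xy = np.polyfit(y, x, 1)
b_xy, a_xy = 1.0 / c_xy, -d_xy / c_xy

print(b_yx, a_yx)  # approx. 0.97, 0.16 from the y-on-x fit
print(b_xy, a_xy)  # a different line from the x-on-y fit of the same data
```

Only for data with no scatter do the two fits coincide, in accordance with the remark above that the method is correct for no data scatter only.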
4. Further, classical mathematics [1] has no sufficiently general concept of a quantitative mathematical problem. The concept of a finite or countable set of equations ignores their quantities (multiplicities), as any Cantor set [1] does. These quantities are very important for contradictory (e.g., overdetermined) problems without precise solutions. Besides that, without equation quantities, subjoining an equation that coincides with one of the already given equations of such a set simply leaves the set unchanged, whereas any (even infinitely small) change of this subjoined equation alone at once makes the subjoining essential and changes the given set of equations. Therefore, the concept of a finite or countable set of equations is ill-defined [1]. Uncountable sets of equations (also with their quantities completely ignored) are not considered in classical mathematics [1] at all.
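This discontinuity can be illustrated in Python: a built-in set ignores quantities exactly as described, while a Counter serves as a rough finite stand-in for a structure that keeps them. The encoding of equations as coefficient tuples and the multiset analogy are our own illustration, not a construction from [2]:

```python
# Minimal sketch (illustrative only): a classical set of equations ignores
# quantities (multiplicities), while a multiset-like Counter, used here as
# a rough finite stand-in for a quantiset, does not. An equation a*x =? b
# is encoded as the coefficient tuple (a, b).
from collections import Counter

equations = {(1.0, 1.0), (1.0, 0.0)}          # the set {x =? 1, x =? 0}

# Subjoining a coinciding equation changes nothing in a set:
print(equations | {(1.0, 1.0)} == equations)           # True: duplicate ignored

# Any (even tiny) change of the subjoined equation changes the set at once:
print(equations | {(1.0, 1.0 + 1e-9)} == equations)    # False

# A Counter keeps quantities, so subjoining a duplicate is essential too:
quantiset = Counter([(1.0, 1.0), (1.0, 0.0)])
quantiset[(1.0, 1.0)] += 1                    # now x =? 1 has quantity 2
print(quantiset)
```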
Therefore, the very fundamentals of classical applied and computational mathematics [1] have many obviously deep and even cardinal defects of principle.
Consequently, to make classical applied and computational mathematics [1] adequate, evolutionarily and locally correcting, improving, and developing it, however useful, is unfortunately fully insufficient. Classical applied and computational mathematics [1] needs the revolutionary replacement of its inadequate very fundamentals with adequate very fundamentals.
Nota bene: Naturally, if possible, any revolution in classical applied and computational mathematics [1] has to be based on an adequate revolution in classical pure mathematics [1].
Computational megascience [2], based on applied megamathematics [2] and hence on pure megamathematics [2] and on overmathematics [2] with its uninumbers, quantielements, quantisets, and uniquantities with quantioperations and quantirelations, provides efficient, universal, and adequate strategic unimathematical modeling (expressing, representing, etc.) and processing (measuring, evaluating, estimating, approximating, calculating, etc.) of data. All this creates the basis for many further fundamental sciences systems developing, extending, and applying overmathematics. Among them is, in particular, the unimathematical approximation fundamental sciences system [2] including:
fundamental science on general approximation problem essence including general approximation problem type and setting theory, general approximation problem quantiobject theory, general approximation problem quantisystem theory, and general approximation problem quantioperation theory;
fundamental science of general approximation problem pseudosolution including general approximation problem pseudosolution theory, general approximation problem quasisolution theory, general approximation problem supersolution theory, and general approximation problem antisolution theory;
fundamental science on general approximation problem solving strategy and tactic including general approximation problem solving strategy theory and general approximation problem solving tactic theory;
fundamental science of general approximation problem transformation including general approximation problem transformation theory, general approximation problem structuring theory, general approximation problem restructuring theory, and general approximation problem partitioning theory;
fundamental science of general approximation problem analysis including general approximation problem analysis theory, general approximation subproblem theory, and general approximation subproblem criterion theory;
fundamental science of general approximation problem synthesis including general approximation problem synthesis theory, general approximation problem symmetry theory, and general approximation problem criterion theory;
fundamental science on general approximation problem invariance including general approximation problem homogeneous coordinate system theory, general approximation problem nonhomogeneous coordinate system theory, general approximation problem invariance theory, general approximation problem data invariance theory, general approximation problem method invariance theory, general approximation problem pseudosolution invariance theory, general approximation problem quasisolution, supersolution, and antisolution invariance theories;
fundamental science of general approximation subproblem estimation including general approximation subproblem estimation theory, difference norm estimation theory, deviation estimation theory, distance estimation theory, linear unierror estimation theory, square unierror estimation theory, reserve estimation theory, reliability estimation theory, and risk estimation theory;
fundamental science of general approximation problem estimation including general approximation problem estimation theory, power estimation theories family, product estimation theories family, power difference estimation theories family, and quantibound estimation theories family;
fundamental science on general approximation problem solving criteria including distance minimization theory, linear unierror minimization theory, square unierror minimization theory, reserve maximization theory, distance equalization theory, linear unierror equalization theory, square unierror equalization theory, reserve equalization theory, distance quantiinfimum theory, linear and square unierrors quantiinfimum theories, and reserve quantisupremum theory;
fundamental science on general approximation problem solving methods including approximation subproblem subjoining theory, distance function theories family, linear unierror function theories family, square unierror function theories family, power increase theory, distance product theories family, linear and square unierrors product theories families, distance power difference theories family, linear and square unierrors power difference theories families, distance quantibound theory, linear and square unierrors quantibound theories, reserve quantibound theory, trial pseudosolution and direct solving theories families;
fundamental science on general approximation problem iteration including single-source iteration theory, multiple-sources iteration theory, intelligent iteration theory, general trend multistep theory, trend multistep distance function theories family, trend multistep linear and square unierrors function theories families, and iteration acceleration theory;
fundamental science on general approximation problem bisectors including general center and bisector theory, distance, linear and square unierrors bisector theories, recurrent bisector theories family, incenter theories family, triangles incenters theories family, equidistance theories family, linear unierror equalizing theories family, square unierror equalizing theories family, internal bisectors intersections center theories family, sides pairs bisectors and equidistance theories families, adjacent sides bisectors theories family, adjacent corners bisectors theories family, opposite sides bisectors theories family, and opposite corners bisectors theories family;
analytic solving fundamental science including general power solution theory, power analytic macroelement theory, and integral analytic macroelement theory;
fundamental science of general approximation problem testing including directed test system theory, distribution theory, general center theory, triangle, tangential polygon, and quadrilateral theories;
fundamental science of general approximation problem application including overmathematics development theory, pure megamathematics development theory, applied megamathematics development theory, computational fundamental megascience development theory, fundamental mechanical, strength, and physical sciences systems development theories.
The unimathematical approximation fundamental sciences system is universal and very efficient.
References
[1] Encyclopaedia of Mathematics. Managing editor M. Hazewinkel. Vols. 1-10. Kluwer Academic Publishers, Dordrecht, 1988-1994.
[2] Lev Gelimson. Elastic Mathematics. General Strength Theory. The "Collegium" All World Academy of Sciences Publishers, Munich (Germany), 2004, 496 pp.