Overcoming Complication Fundamental Sciences System (Essential)
by
© Ph. D. & Dr. Sc. Lev Gelimson
Academic Institute for Creating Fundamental Sciences (Munich, Germany)
Mathematical Journal
of the "Collegium" All World Academy of Sciences
Munich (Germany)
11 (2011), 27
UDC 501:510
2010 Math. Subj. Classification: primary 00A71; secondary 03E10, 03E72, 08B99, 26E30, 28A75.
Keywords: Overmathematics, overcoming complication fundamental sciences system, contradiction, infringement, hindrance, obstacle, restriction, mistake, distortion, error, information incompleteness, multivariant approach, damage tolerance, unierror, reserve.
All existing objects and systems in nature, society, and thinking have complications, e.g., contradictoriness, and hence lack adequate models in classical mathematics [1]. Classical mathematics intentionally avoids, ignores, and cannot (and possibly hence does not want to) adequately consider, model, express, measure, evaluate, and estimate many complications. Among them are contradictions, infringements, damages, hindrances, obstacles, restrictions, mistakes, distortions, errors, information incompleteness, multivariant approaches, etc. There have been well-known attempts to consider some separate objects and systems with chosen complications, e.g., approximation and finite overdetermined sets of equations. To treat them at all, classical mathematics has only very limited, nonuniversal, and inadequate concepts and methods such as the absolute error, the relative error, and the least square method (LSM) [1] by Legendre and Gauss ("the king of mathematics"), which produce errors of their own and even dozens of principal mistakes. Moreover, the same holds for the very fundamentals of classical mathematics: the real numbers with gaps; the Cantor sets, relations, and at most countable, only restrictedly reversible operations, which ignore element quantities, allow absorption, and contradict the conservation law of nature; the cardinality, which is sensitive to finite unions of disjoint finite sets only and gives the same continuum cardinality C for distinct point sets between two parallel lines or planes differently distant from one another; the measures, which are finitely sensitive within a certain dimensionality, give either 0 or +∞ for distinct point sets between two parallel lines or planes differently distant from one another, and cannot discriminate the empty set ∅ from null sets, namely zero-measure sets; and the probabilities, which cannot discriminate impossible events from some differently possible events. The same holds for classical mathematics estimators and methods.
The absolute error Δ [1] alone is noninvariant and insufficient for quality estimation, giving, for example, the same result 1 for the acceptable formal (correct or not) equality 1000 =? 999 and for the inadmissible one 1 =? 0. Further, the absolute error is not invariant under equivalent transformations of a problem: for instance, when a formal equality is multiplied by a nonzero number, the absolute error is multiplied by the norm (absolute value) of that number.
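A minimal numerical sketch (Python, added here for illustration and not part of the original text) demonstrates both defects: the absolute error scores the acceptable and the inadmissible equality identically, and it is rescaled by an equivalent multiplication of an equality.

    # Absolute error of a formal equality a =? b.
    def absolute_error(a, b):
        return abs(a - b)

    print(absolute_error(1000, 999))           # 1 -- acceptable equality
    print(absolute_error(1, 0))                # 1 -- inadmissible equality, same score
    # Multiplying the formal equality 1 =? 0 by 1000 multiplies the error by 1000:
    print(absolute_error(1000 * 1, 1000 * 0))  # 1000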
The relative error δ [1] is meant to play a supplementary role. But even in the case of the simplest formal equality a =? b of two numbers, there are at once two candidates for the estimating fraction, δ1 = |a - b|/|a| and δ2 = |a - b|/|b|. This is a generally inadmissible uncertainty, acceptable only if the ratio a/b is close to 1. Further, the relative error is intended always to belong to the segment [0, 1]. But for 1 =? 0 with 0 chosen as the denominator, the result is +∞, and for 1 =? -1 with either denominator choice, the result is 2. Hence the relative error has a restricted range of applicability amounting to equalities of two elements whose ratio is close to 1. For more complicated equalities with at least three elements, e.g., 100 - 99 =? 0 or 1 - 2 + 3 - 4 =? -1, the choice of a denominator is entirely unclear. This is why the relative error is uncertain in principle, has a very restricted domain of applicability, is practically used in the simplest case only, and is very seldom applied to variables and functions.
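A companion sketch (Python, illustrative only) makes the denominator ambiguity and the range defects visible.

    # The two competing relative-error fractions for a formal equality a =? b.
    def relative_errors(a, b):
        d1 = abs(a - b) / abs(a) if a != 0 else float('inf')
        d2 = abs(a - b) / abs(b) if b != 0 else float('inf')
        return d1, d2

    print(relative_errors(1000, 999))  # (0.001, 0.001001...) -- choices nearly agree
    print(relative_errors(1, 0))       # (1.0, inf)           -- escapes [0, 1]
    print(relative_errors(1, -1))      # (2.0, 2.0)           -- also outside [0, 1]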
The least square method [1] can give adequate results in very special cases only. Its deep analysis [2] by the principles of constructive philosophy, overmathematics, and other fundamental mathematical sciences has discovered many fundamental defects both in the essence (as causes) and in the applicability (as effects) of this method, which is adequate in some rare special cases only and even in them needs thorough adequacy analysis. The method is based on the absolute error alone, which is not invariant under equivalent transformations of a problem, and ignores the possibly noncoinciding physical dimensions (units) of the relations in a problem. The method does not correlate the deviations of the approximations from the approximated objects with those objects themselves, simply mixes the deviations without adequately weighing them, and treats equal changes of the squares of deviations with relatively smaller and greater moduli (absolute values) as equivalent. The method foresees no iterating, is based on a fixed algorithm accepting no a priori flexibility, and provides no a posteriori adapting of its own. The method uses no invariant estimation of approximation, considers no different approximations, foresees no comparison of different approximations, and provides no choice of the best approximation among them. These defects in the essence of the method lead to many fundamental shortcomings in its applicability, among them: loss of applicability sense for a set of equations with different physical dimensions (units); no objective sense of a result that is not invariant under equivalent transformations of a problem; restriction of the class of acceptable equivalent transformations of a problem; no essentially unique correction of the loss of applicability sense; possible ignoring of subproblems of a problem; paradoxical approximation; no analysis of the deviations of the result; no adequate estimation and evaluation of its quality; no refining of the results; no choice; and the highest truth ungrounded.
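The noninvariance defect can be made concrete with a minimal numerical sketch (Python with NumPy, not from the source): multiplying one equation of a contradictory overdetermined set by a nonzero constant, an equivalent transformation, changes the least-square solution.

    import numpy as np

    # Contradictory overdetermined set: x = 1 and x = 0.
    A1 = np.array([[1.0], [1.0]])
    b1 = np.array([1.0, 0.0])
    x1, *_ = np.linalg.lstsq(A1, b1, rcond=None)

    # Equivalent set after multiplying the first equation by 10: 10x = 10 and x = 0.
    A2 = np.array([[10.0], [1.0]])
    b2 = np.array([10.0, 0.0])
    x2, *_ = np.linalg.lstsq(A2, b2, rcond=None)

    print(x1)  # [0.5]
    print(x2)  # [0.990...] -- a different LSM "solution" for an equivalent problem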
Computational megascience [2], based on applied megamathematics [2] and hence on pure megamathematics [2] and on overmathematics [2] with its uninumbers, quantielements, quantisets, and uniquantities with quantioperations and quantirelations, provides efficient, universal, and adequate strategies for uniquantitatively modeling (expressing, representing, etc.) and processing (measuring, evaluating, estimating, approximating, calculating, etc.) data. All this creates the basis for many further systems of fundamental sciences developing, extending, and applying overmathematics. Among them is, in particular, the overcoming complication fundamental sciences system [2], which includes:
complication fundamental science including contradiction theory, infringement theory, damage theory, hindrance theory, obstacle theory, restriction theory, mistake theory, distortion theory, error theory, information incompleteness theory, multivariant approach theory;
complication modeling fundamental science including general mathematical theories and methods of rationally and adequately modeling complications themselves, as well as general objects and systems with complications;
complication measurement fundamental science including general mathematical theories and methods of rationally and adequately measuring complications themselves, as well as general objects and systems with complications;
complication estimation fundamental science including general mathematical theories and methods of rationally and adequately estimating complications themselves, as well as general objects and systems with complications;
complication processing fundamental science including general mathematical theories and methods of rationally and adequately processing complications themselves, as well as general objects and systems with complications;
complication testing fundamental science including general mathematical theories and methods of rationally and adequately testing complications themselves, as well as general objects and systems with complications;
complication tolerance fundamental science including general mathematical theories and methods of the creation, successful functioning, improvement, perfection, and analysis of general objects and systems with complications such as contradiction tolerance theory, infringement tolerance theory, damage tolerance theory, hindrance tolerance theory, obstacle tolerance theory, restriction tolerance theory, mistake tolerance theory, distortion tolerance theory, error tolerance theory, information incompleteness tolerance theory, and multivariant approach tolerance theory;
complicated system control fundamental science including general mathematical theories and methods of rationally and optimally controlling general objects and systems with complications;
complication utilization fundamental science including general mathematical theories and methods of efficiently utilizing complications for developing general objects, systems, and their mathematical models, as well as for solving general problems.
The unierror E ∈ [0, 1] [2] irreproachably corrects the relative error and generalizes it to any conceivable range of applicability. Introduce the extended division a//b = a/b for a ≠ 0 and a//b = 0 for a = 0, independently of the existence and value of b. For a =? b, a unierror can be the linear estimating fraction E(a =? b) = |a - b| // (|a| + |b|). Introducing a positive uninumber [2] p and/or a uninumber h gives E(a =? b; p, h) = |a - b| // (|a - h| + |b - h| + p). The quadratic estimating fraction is ²E(a =? b) = |a - b| // [2(a² + b²)]^(1/2). Examples: E(0 =? 0) = 0; E(1 =? 0) = 1; E(100 =? 99) = 1/199; ²E(0 =? 0) = 0; ²E(1 =? 0) = 1/2^(1/2); ²E(1 =? -1) = 1; E(100 - 99 =? 0) = 1/199 = E(100 =? 99); E(1 - 2 + 3 - 4 =? -1) = |1 - 2 + 3 - 4 + 1|/(1 + 2 + 3 + 4 + 1) = 1/11.
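The two-element formulas admit a direct numerical check. The following sketch (Python, illustrative only; the (p, h)-parametrized and multi-element variants are omitted) reproduces the values listed above.

    from math import sqrt

    def ediv(a, b):
        # Extended division: a // b equals 0 whenever a = 0, else a / b.
        return 0.0 if a == 0 else a / b

    def unierror(a, b):
        # Linear estimating fraction E(a =? b) = |a - b| // (|a| + |b|).
        return ediv(abs(a - b), abs(a) + abs(b))

    def unierror2(a, b):
        # Quadratic estimating fraction 2E(a =? b) = |a - b| // [2(a^2 + b^2)]^(1/2).
        return ediv(abs(a - b), sqrt(2 * (a**2 + b**2)))

    print(unierror(0, 0))     # 0.0
    print(unierror(1, 0))     # 1.0
    print(unierror(100, 99))  # 0.005025... = 1/199
    print(unierror2(1, 0))    # 0.7071... = 1/2**0.5
    print(unierror2(1, -1))   # 1.0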
The reserve R ∈ [-1, 1] [2] extends the unierror and for the first time discriminates exact objects or models by the confidence in their exactness reliability, e.g., the exact solutions x1 = 1 + 10^(-10) (practically unreliable) and x2 = 1 + 10^10 (guaranteed) to the inequation x > 1. For each inexact object I with E(I) > 0, take R(I) = -E(I). For each exact object, whose unierror vanishes and whose reserve satisfies R ≥ 0, define a suitable mapping of the object with respect to its exactness boundary and take the unierror of the mapped object as the reserve. For inequalities, use the opposite relations: for x > 1, R(x1) = E((1 + 10^(-10)) <? 1) = 10^(-10)/(2 + 10^(-10)), whereas R(x2) = E((1 + 10^10) <? 1) = 10^10/(2 + 10^10).
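For the inequation example, a short self-contained sketch (Python, illustrative only; the general mapping to the exactness boundary is not implemented) reproduces the two reserves and shows how they discriminate the practically unreliable from the guaranteed exact solution.

    def reserve_gt(x, bound):
        # Reserve of an exact solution x of the inequation x > bound, taken
        # (per the text) as the unierror of the opposite formal relation x <? bound,
        # which reduces here to the two-element fraction |x - bound| / (|x| + |bound|).
        assert x > bound, "this sketch covers exact solutions only"
        return abs(x - bound) / (abs(x) + abs(bound))

    print(reserve_gt(1 + 1e-10, 1.0))  # ~5e-11: tiny reserve, practically unreliable
    print(reserve_gt(1 + 1e10, 1.0))   # ~1.0: near-maximal reserve, guaranteed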
The overcoming complication fundamental sciences system is universal and very efficient.
References
[1] Encyclopaedia of Mathematics / Managing editor M. Hazewinkel. Volumes 1 to 10. Kluwer Academic Publ., Dordrecht, 1988-1994.
[2] Lev Gelimson. Elastic Mathematics. General Strength Theory. The "Collegium" All World Academy of Sciences Publishers, Munich (Germany), 2004, 496 pp.