Unimathematical Measurement Fundamental Sciences System (Essential)
by
© Ph. D. & Dr. Sc. Lev Gelimson
Academic Institute for Creating Fundamental Sciences (Munich, Germany)
Mechanical and Physical Journal
of the "Collegium" All World Academy of Sciences
Munich (Germany)
11 (2011), 8
Keywords: Overmathematics, unimathematical measurement fundamental sciences system, measurement, object, process, system, physical model, mathematical model, concession, contradiction, infringement, damage, hindrance, obstacle, restriction, mistake, distortion, error, harmony (consistency), order (regularity), integrity, preference, assistance, open space, correctness, adequacy, accuracy, reserve, resource, reliability, risk, unjustified artificial randomization, deviation.
The possibilities of classical science in measuring objects and processes and in determining true measurement data are very limited, nonuniversal, and inadequate.
Classical mathematics [1], with its rigid systems of axioms and its intentional search for (and even purposeful creation of) contradictions, cannot (and does not intend to) address very many problems in science, engineering, and life. This holds in particular when measuring very inhomogeneous objects and systems, as well as rapidly changing processes. It has been discovered [2] that classical fundamental mathematical theories, methods, and concepts [1] are insufficient for adequately solving, or even considering, many typical urgent problems.
Even the very fundamentals of classical mathematics [1] have evident gaps and shortcomings.
1. The real numbers R evaluate no unbounded quantity and, because of gaps, not even all bounded quantities. The same probability pn = p of randomly sampling a certain n ∈ N = {0, 1, 2, ...} does not exist in R, since ∑n∈N pn is either 0 for p = 0 or +∞ for p > 0. It is urgent to express exactly (in some suitable extension of R) all infinite and infinitesimal quantities, e.g., such a p for any countable or uncountable set, as well as distributions and distribution functions on any sets of infinite measure.
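This dilemma can be checked directly. Below is a minimal numerical sketch in Python (the helper partial_sum is purely illustrative): for p = 0 the partial sums stay at 0, while for any fixed p > 0 they exceed every bound, so no uniform probability over N exists in R.

    # No uniform probability p over N = {0, 1, 2, ...} exists in R: the partial sums
    # of p + p + ... either stay at 0 (for p = 0) or exceed every bound (for p > 0).
    def partial_sum(p, terms):
        # sum of the constant probability p over the first `terms` natural numbers
        return p * terms

    for p in (0.0, 1e-9):
        print(p, partial_sum(p, 10**12))
    # p = 0.0 yields total 0 (not 1); p = 1e-9 already yields about 1000 after 10^12
    # terms, so for every p > 0 the total probability exceeds 1 and diverges to +infinity.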
2. The Cantor sets [1], with either unit or zero quantities of their possible elements, may contain any object as an element either once or not at all, ignoring its true quantity. The same holds for the Cantor set relations and operations with absorption, which is why those set operations are only restrictedly invertible. In the Cantor sets, the simplest equations X ∪ A = B and X ∩ A = B in X are solvable only for A ⊆ B and A ⊇ B, respectively [and uniquely only for A = ∅ (the empty set) and A = B = U (a universal set), respectively]. The equations X ∪ A = B and X = B \ A in the Cantor sets are equivalent only for A = ∅. In a fuzzy set, the membership of an element may also lie strictly between the ultimate values 0 and 1, but only to express uncertainty. Element repetitions are taken into account in multisets, with any cardinal numbers as multiplicities, and in ordered sets (tuples, sequences, vectors, permutations, arrangements, etc.) [1]. But neither these nor unordered combinations with repetitions can express many typical collections of objects (without structure), e.g., that of half an apple and a quarter of a pear. For any concrete (mixed) physical magnitude (a quantity with a measurement unit), e.g., "5 L (liter) fuel", there is no suitable mathematical model and no known operation, say, between "5 L" and "fuel" (not: "5 L" × "fuel" or "fuel" × "5 L"); note that multiplication is the evident operation between the number "5" and the measurement unit "L". The Cantor set relations and operations, being only restrictedly invertible and allowing absorption, ignore element quantities, contradict the conservation law of nature, and hinder constructing any universal degrees of quantity.
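As a rough illustration of what element quantities buy, here is a toy sketch of a quantiset as a map from elements to nonnegative real multiplicities; the class Quantiset and its operations are illustrative simplifications, not the formal quantioperations of [2].

    # A toy "quantiset": elements carry arbitrary nonnegative real quantities,
    # so "half an apple and a quarter of a pear" is directly expressible.
    class Quantiset(dict):
        def __add__(self, other):
            # quantiaddition: quantities accumulate, nothing is absorbed
            result = Quantiset(self)
            for elem, q in other.items():
                result[elem] = result.get(elem, 0) + q
            return result

        def __sub__(self, other):
            # the exact inverse of quantiaddition
            # (for simplicity, no check that quantities stay nonnegative)
            result = Quantiset(self)
            for elem, q in other.items():
                result[elem] = result.get(elem, 0) - q
            return result

    mixture = Quantiset({"apple": 0.5, "pear": 0.25})
    more = Quantiset({"apple": 0.5})
    b = mixture + more          # {"apple": 1.0, "pear": 0.25}
    print(b - more == mixture)  # True: unlike the Cantor union, addition is invertible

Unlike the Cantor union with absorption, this addition loses nothing, so the equation X + A = B is always uniquely solvable by X = B - A.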
3. Cardinality is sensitive only to finite unions of disjoint finite sets; it is not sufficiently sensitive to infinite sets, nor even to intersecting finite sets (because of absorption). It gives the same continuum cardinality C for clearly very distinct point sets in a Cartesian coordinate system, e.g., the sets of points between two parallel lines or planes at different distances from one another.
4. Measures are only finitely sensitive within a given dimensionality, give either 0 or +∞ for distinct point sets between two parallel lines or planes at different distances from one another, and cannot discriminate between the empty set ∅ and null sets, i.e., zero-measure sets [1].
5. Probabilities cannot discriminate between impossible events and some events that are possible to different degrees, since all of these can have probability zero.
6. Operations are considered over at most countably many operands only.
Further, all existing objects and systems in nature, society, and thinking have complications, e.g., contradictoriness, and hence exist without adequate models in classical mathematics [1]. It intentionally avoids and ignores, and cannot (and possibly therefore does not want to) adequately consider, model, express, measure, evaluate, and estimate many complications, among them contradictions, infringements, damages, hindrances, obstacles, restrictions, mistakes, distortions, errors, information incompleteness, multivariant approaches, etc. There have been well-known attempts to consider some separate objects and systems with chosen complications, e.g., approximation and finite overdetermined sets of equations. Even to consider these, classical mathematics has only very limited, nonuniversal, and inadequate concepts and methods, such as the absolute error, the relative error, and the least square method (LSM) [1] by Legendre and Gauss ("the king of mathematics"), which produce errors of their own and even dozens of principal mistakes.
The absolute error Δ [1] alone is noninvariant and insufficient for quality estimation: it gives, for example, the same result 1 both for the acceptable formal (correct or not) equality 1000 =? 999 and for the inadmissible one 1 =? 0. Further, the absolute error is not invariant under equivalent transformations of a problem: for instance, multiplying a formal equality by a nonzero number multiplies the absolute error by the norm (absolute value) of that number.
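Both defects fit in a few lines of plain Python (illustrative names):

    # The absolute error |a - b| rates 1000 =? 999 and 1 =? 0 as equally good (both 1),
    # and scaling an equality by c multiplies the error by |c|.
    def absolute_error(a, b):
        return abs(a - b)

    print(absolute_error(1000, 999))    # 1.0 -- a near-perfect equality
    print(absolute_error(1, 0))         # 1.0 -- a completely wrong equality, same score
    c = 1000.0
    print(absolute_error(c * 1, c * 0)) # 1000.0 -- the "same" equality 1 =? 0, rescaled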
The relative error δ [1] is meant to play a supplementary role. But even for the simplest formal equality a =? b of two numbers, there are at once two candidate estimating fractions, δ1 = |a - b|/|a| and δ2 = |a - b|/|b|. This is a generally inadmissible uncertainty, acceptable only if the ratio a/b is close to 1. Further, the relative error is intended always to lie in the segment [0, 1]. But for 1 =? 0 with 0 chosen as the denominator, the result is +∞, and for 1 =? -1 the result is 2 for either choice of denominator. Hence the relative error has a restricted range of applicability, namely equalities of two elements whose ratio is close to 1. For more complicated equalities with at least three elements, e.g., 100 - 99 =? 0 or 1 - 2 + 3 - 4 =? -1, the choice of a denominator seems entirely vague. This is why the relative error is uncertain in principle, has a very restricted domain of applicability, is practically used in the simplest case only, and is very seldom applied to variables and functions.
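The ambiguity and the range violations can likewise be checked directly (illustrative sketch):

    # The relative error is ambiguous (two denominator choices) and can leave [0, 1].
    def relative_errors(a, b):
        d1 = abs(a - b) / abs(a) if a != 0 else float("inf")
        d2 = abs(a - b) / abs(b) if b != 0 else float("inf")
        return d1, d2

    print(relative_errors(1000, 999))  # (0.001, 0.001001...) -- either choice works here
    print(relative_errors(1, 0))       # (1.0, inf) -- the denominator choice decides the verdict
    print(relative_errors(1, -1))      # (2.0, 2.0) -- outside the intended segment [0, 1]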
The least square method [1] can give adequate results in very special cases only. Its deep analysis [2] by the principles of constructive philosophy, overmathematics, and other fundamental mathematical sciences has discovered many fundamental defects both in the essence (as causes) and in the applicability (as effects) of this method, which is adequate only in some rare special cases and even in them needs thorough adequacy analysis. The method is based on the absolute error alone, which is not invariant under equivalent transformations of a problem, and ignores the possibly noncoinciding physical dimensions (units) of the relations in a problem. The method does not correlate the deviations of the approximations from the approximated objects with these objects themselves, simply mixes those deviations without adequately weighing them, and treats equal changes of the squares of deviations with relatively smaller and greater moduli (absolute values) as equivalent. The method foresees no iteration, is based on a fixed algorithm admitting no a priori flexibility, and provides no a posteriori adaptation of its own. The method uses no invariant estimation of approximation, considers no different approximations, foresees no comparison of different approximations, and considers no choice of the best approximation among different ones. These defects in the essence of the method lead to many fundamental shortcomings in its applicability, among them: loss of applicability sense for a set of equations with different physical dimensions (units); no objective sense of the result, which is noninvariant under equivalent transformations of a problem; restriction of the class of acceptable equivalent transformations of a problem; no essentially unique correction of the loss of applicability sense; possible ignoring of subproblems of a problem; paradoxical approximation; no analysis of the deviations of the result; no adequate estimation and evaluation of its quality; no refinement of the results; no choice; and ungrounded claims to the highest truth.
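The noninvariance under equivalent transformations is easy to reproduce. In the sketch below (using numpy.linalg.lstsq on a deliberately contradictory pair of equations), rescaling one equation, which changes nothing mathematically, moves the least-squares answer:

    # Least squares is not invariant under equivalent transformations: rescaling one
    # equation of an overdetermined system changes the "solution".
    import numpy as np

    # System: x = 0 and x = 1 (contradictory, no exact solution).
    A = np.array([[1.0], [1.0]]); b = np.array([0.0, 1.0])
    x1, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(x1)   # [0.5]

    # Multiply the second equation by 10: 10x = 10 says exactly the same as x = 1,
    # yet the least-squares answer moves toward the rescaled equation.
    A2 = np.array([[1.0], [10.0]]); b2 = np.array([0.0, 10.0])
    x2, *_ = np.linalg.lstsq(A2, b2, rcond=None)
    print(x2)   # [0.990...]  (= 100/101)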
Further, in classical mathematics [1] there is no sufficiently general concept of a quantitative mathematical problem. The concept of a finite or countable set of equations ignores their quantities, like any Cantor set [1]. These quantities are very important in contradictory (e.g., overdetermined) problems without precise solutions. Besides that, without equation quantities, adjoining an equation that coincides with one of the already given equations of such a set is simply ignored, whereas any (even infinitesimal) change of this adjoined equation alone at once makes the adjunction essential and changes the given set of equations. Therefore, the concept of a finite or countable set of equations is ill-defined [1]. Uncountable sets of equations (whose quantities are likewise completely ignored) are not considered in classical mathematics [1] at all.
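The following sketch illustrates this ill-definedness under set semantics (the helper lsq_x is illustrative): a duplicated equation is absorbed and changes nothing, yet an arbitrarily small perturbation of the duplicate suddenly changes the answer.

    # With set semantics a duplicated equation is absorbed, yet perturbing the duplicate
    # by any epsilon > 0 suddenly makes it count -- a discontinuity at epsilon = 0.
    def lsq_x(equations):
        # least-squares x for equations of the form x = value: the mean of the values
        return sum(equations) / len(equations)

    base = {0.0, 1.0}                 # set of equations: x = 0, x = 1
    print(lsq_x(base))                # 0.5
    print(lsq_x(base | {1.0}))        # still 0.5: the duplicate x = 1 is absorbed
    eps = 1e-9
    print(lsq_x(base | {1.0 + eps}))  # ~0.6667: an infinitesimal change makes it count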
Applied megamathematics [2], based on pure megamathematics [2] and on overmathematics [2] with its uninumbers, quantielements, quantisets, and uniquantities with quantioperations and quantirelations, provides efficient, universal, and adequate strategic unimathematical modeling, expression, measurement, evaluation, and estimation of objects, as well as setting and solving general problems in science, engineering, and life. All this creates the basis for developing, extending, and applying many further systems of fundamental sciences. Among them is, in particular, the unimathematical measurement fundamental sciences system [2], which includes:
fundamental science of unimathematical object measurement, which includes general theories and methods of developing and applying overmathematical uniquantity as a universal, perfectly sensitive quantimeasure of general objects, with possibly recovering true measurement information from incomplete and changed data;
fundamental science of unimathematical system measurement, which includes general theories and methods of developing and applying overmathematical uniquantity as a universal, perfectly sensitive quantimeasure of general systems, with possibly recovering true measurement information from incomplete and changed data;
fundamental science of unimathematical physical model measurement, which includes general theories and methods of developing and applying overmathematical uniquantity as a universal, perfectly sensitive quantimeasure of physical models, with possibly recovering true measurement information from incomplete and changed data;
fundamental science of unimathematical model measurement, which includes general theories and methods of developing and applying overmathematical uniquantity as a universal, perfectly sensitive quantimeasure of mathematical models, with possibly recovering true measurement information from incomplete and changed data;
fundamental science of measuring concessions, which for the first time regularly applies and develops universal overmathematical theories and methods of measuring contradictions, infringements, damages, hindrances, obstacles, restrictions, mistakes, distortions, and errors, and also of rationally and optimally controlling them and even of efficiently utilizing them for developing general objects, systems, and their mathematical models, as well as for solving general problems;
fundamental science of measuring reserves, further naturally generalizing the fundamental science of measuring concessions and for the first time regularly applying and developing universal overmathematical theories and methods of measuring not only contradictions, infringements, damages, hindrances, obstacles, restrictions, mistakes, distortions, and errors, but also harmony (consistency), order (regularity), integrity, preference, assistance, open space, correctness, adequacy, accuracy, reserves, and resources, and also of rationally and optimally controlling them and even of efficiently utilizing them for developing general objects, systems, and their mathematical models, as well as for solving general problems;
fundamental sciences of measuring reliability and risk, for the first time regularly applying and developing universal overmathematical theories and methods of quantitatively measuring the reliabilities and risks of real general objects and systems and of their ideal mathematical models, avoiding unjustified artificial randomization in deterministic problems;
fundamental science of measuring deviation, for the first time regularly applying overmathematics to measuring the deviations of real general objects and systems from their ideal mathematical models, and also of mathematical models from one another. In a number of further fundamental sciences, under rotation invariance of coordinate systems, general (including nonlinear) theories of the moments of inertia establish the existence and uniqueness of the linear model minimizing its square mean deviation from an object, whereas least square distance (including nonlinear) theories are more convenient for determining that linear model. The classical least square method by Legendre and Gauss ("the king of mathematics") is the only method known in classical mathematics that is applicable to contradictory (e.g., overdetermined) problems. In the two-dimensional Cartesian coordinate system, this method minimizes the sum of the squares of the ordinate differences and ignores the inclination of a model. This leads not only to a systematic regular error that breaks invariance and grows together with this inclination and with data variability, but also to paradoxical behavior under data rotation, since the fitted linear model does not rotate with the data (see the sketch after the list below). Under invariance with respect to linear transformations of coordinate systems, power (e.g., square) mean (including nonlinear) theories lead to optimum linear models. Theories and methods of measuring data scatter and trend give corresponding invariant and universal measures for linear and nonlinear models. Group center theories sharply reduce this scatter, sharpen the measures of data scatter and trend, and for the first time also take outliers into account. Overmathematics even allows dividing a point into parts and referring these parts to different groups. Coordinate division theories and especially principal bisector (as a model) division theories efficiently form such groups. Note that there are many reasonable kinds of deviation, e.g., the following:
the value of a nonnegative binary function of this object and each of the given objects (e.g., the norm of the difference of the two sides of an equation regarded as a subproblem of a problem after substituting a pseudosolution into it, the distance from the graph of this equation, its absolute error [1], relative error [1], unierror [2], etc.);
the value of a nonnegative function (e.g., the power mean with some positive exponent) of these values over all the equations in a general problem.
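The rotation defect of ordinate-based least squares, and its absence for perpendicular-distance fitting, can be checked numerically. The following is a minimal sketch, assuming synthetic data and using the principal axis of the point cloud as the perpendicular-distance-minimizing line; all names here are illustrative, not the formal constructions of [2]:

    # Ordinary least squares (vertical offsets) is not rotation invariant, whereas
    # fitting by perpendicular distances (via the principal axis of the cloud) is.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 50)
    pts = np.column_stack([x, 2.0 * x + rng.normal(0.0, 0.3, x.size)])

    def ols_slope(p):
        # slope minimizing the sum of squared ordinate differences
        xc, yc = p[:, 0] - p[:, 0].mean(), p[:, 1] - p[:, 1].mean()
        return (xc @ yc) / (xc @ xc)

    def principal_axis_slope(p):
        # slope of the direction minimizing the sum of squared perpendicular distances
        centered = p - p.mean(axis=0)
        direction = np.linalg.svd(centered)[2][0]
        return direction[1] / direction[0]

    theta = np.pi / 18                                  # rotate the data by 10 degrees
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    rotated = pts @ rot.T

    for fit in (ols_slope, principal_axis_slope):
        before, after = fit(pts), fit(rotated)
        # angle of the fitted line plus 10 degrees vs the angle fitted after rotation
        print(fit.__name__,
              np.degrees(np.arctan(before)) + 10.0,
              np.degrees(np.arctan(after)))
    # principal_axis_slope: both numbers agree; ols_slope: they differ systematically,
    # and the gap grows with the inclination of the model and the scatter of the data.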
Along with the usual straight-line square distance, we may also use, e.g., other, possibly curvilinear, power distances (under additional limitations and other conditions, such as using curves lying in a certain surface, etc.). For point objects and the usual straight-line square distance, e.g., we obtain the unique quasisolution for two points on a straight line, three points in a plane, or four points in three-dimensional space. Using distances only makes this criterion invariant under translation and rotation of the coordinate system.
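For point objects with squared straight-line distances, the minimizer of the summed squared distances is the centroid, and the claimed invariance can be verified directly (a minimal sketch; the name quasisolution is illustrative):

    # For point data, the minimizer of the summed squared straight-line distances is
    # the centroid, which moves exactly with any translation or rotation of the data.
    import numpy as np

    def quasisolution(points):
        # point minimizing the sum of squared distances to the given points
        return points.mean(axis=0)

    pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # three points in a plane
    q = quasisolution(pts)

    theta = np.pi / 4
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    shift = np.array([3.0, -2.0])
    q_transformed = quasisolution(pts @ rot.T + shift)

    print(np.allclose(q @ rot.T + shift, q_transformed))  # True: the criterion is invariant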
The unimathematical measurement fundamental sciences system is universal and very efficient.
References
[1] Encyclopaedia of Mathematics / Managing editor M. Hazewinkel. Volumes 1 to 10. Kluwer Academic Publishers, Dordrecht, 1988-1994.
[2] Lev Gelimson. Elastic Mathematics. General Strength Theory. The "Collegium" All World Academy of Sciences Publishers, Munich (Germany), 2004, 496 pp.