Unimathematical Modeling Fundamental Sciences System (Essential)

by

© Ph. D. & Dr. Sc. Lev Gelimson

Academic Institute for Creating Fundamental Sciences (Munich, Germany)

Mechanical and Physical Journal

of the "Collegium" All World Academy of Sciences

Munich (Germany)

11 (2011), 6

Keywords: Overmathematics, unimathematical modeling fundamental sciences system, strategy, grouping, restructuring, scatter, trend, outlier, bisector, measurement, object, process, system, physical model, mathematical model, concession, contradiction, infringement, damage, hindrance, obstacle, restriction, mistake, distortion, error, harmony (consistency), order (regularity), integrity, preference, assistance, open space, correctness, adequacy, accuracy, reserve, resource, reliability, risk, unjustified artificial randomization, deviation.

The possibilities of classical science in modeling objects and processes and in determining true measurement data are very limited, nonuniversal, and inadequate.

Classical mathematics [1], with its hardened systems of axioms and its intentional search for contradictions and even their purposeful creation, cannot (and does not intend to) address very many problems in science, engineering, and life. It has been discovered [2] that classical fundamental mathematical theories, methods, and concepts [1] are insufficient for adequately solving, or even considering, many typical urgent problems.

Even the very fundamentals of classical mathematics [1] have evident gaps and shortcomings.

1. The real numbers R can evaluate no unbounded quantity and, because of gaps, not even all bounded quantities. The same probability pn = p for the random sampling of each n ∈ N = {0, 1, 2, ...} does not exist in R, since ∑n∈N pn is either 0 for p = 0 or +∞ for p > 0. It is urgent to express exactly (in some suitable extension of R) all infinite and infinitesimal quantities, e.g., such a p for any countable or uncountable set, as well as distributions and distribution functions on any sets of infinite measures.

2. The Cantor sets [1], in which each possible element has quantity either one or zero, may contain any object as an element either once or not at all, ignoring its true quantity. The same holds for the Cantor set relations and operations with absorption. That is why those set operations are only restrictedly invertible. In the Cantor sets, the simplest equations X ∪ A = B and X ∩ A = B in X are solvable only for A ⊆ B and A ⊇ B, respectively [and uniquely only for A = ∅ (the empty set) and A = B = U (a universal set), respectively]. The equations X ∪ A = B and X = B \ A in the Cantor sets are equivalent only for A = ∅. In a fuzzy set, the membership function of an element may also lie strictly between the ultimate values 0 and 1, but only in the case of uncertainty. Element repetitions are taken into account in multisets with any cardinal numbers as multiplicities and in ordered sets (tuples, sequences, vectors, permutations, arrangements, etc.) [1]. But these, as well as unordered combinations with repetitions, cannot express many typical collections of objects (without structure), e.g., that of half an apple and a quarter of a pear. For concrete (mixed) physical magnitudes (quantities with measurement units), e.g., "5 L (liter) fuel", there is no suitable mathematical model and no known operation, say between "5 L" and "fuel" (not: "5 L" × "fuel" or "fuel" × "5 L"). Note that multiplication is the evident operation between the number "5" and the measurement unit "L". The Cantor set relations and operations, which are only restrictedly invertible and allow absorption, contradict the conservation law of nature because they ignore element quantities, and they hinder the construction of any universal degrees of quantity (see the sketch after this list).

3. Cardinality is sensitive only to finite unions of disjoint finite sets but is not sufficiently sensitive to infinite sets, or even to intersecting finite sets (because of absorption). It gives the same continuum cardinality C for clearly very distinct point sets in a Cartesian coordinate system, e.g., between two parallel lines or planes differently distant from one another.

4. Measures are only finitely sensitive within a certain dimensionality, give either 0 or +∞ for distinct point sets between two parallel lines or planes differently distant from one another, and cannot discriminate between the empty set ∅ and null sets, namely zero-measure sets [1].

5. Probabilities cannot discriminate between impossible events and some events that are possible to different degrees.

6. Only operations with at most countably many operands are considered.
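The quantitative point of item 2 above can be illustrated with standard tools (a minimal Python sketch; the element names and quantities are arbitrary): the built-in set absorbs repetitions, collections.Counter keeps integer multiplicities, and only a plain mapping to nonnegative real numbers comes close to the fractional quantities discussed there.

    from collections import Counter

    # A Cantor-style set absorbs repetitions: the quantity of 'apple' is lost,
    # and from B = X | A one cannot in general recover X.
    s = set(["apple", "apple", "pear"])
    print(s)                                  # {'apple', 'pear'}

    # A multiset (Counter) keeps integer multiplicities, so quantities add up.
    m = Counter({"apple": 2, "pear": 1})
    print(m + Counter({"apple": 3}))          # Counter({'apple': 5, 'pear': 1})

    # But it still cannot express fractional quantities such as
    # "half an apple and a quarter of a pear"; a mapping to real numbers can,
    # which hints at quantisets with arbitrary element quantities.
    q = {"apple": 0.5, "pear": 0.25}
    print(q)                                  # {'apple': 0.5, 'pear': 0.25}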

Further, all existing objects and systems in nature, society, and thinking have complications, e.g., contradictoriness, and hence exist without adequate models in classical mathematics [1]. It intentionally avoids and ignores, and cannot (and possibly hence does not want to) adequately consider, model, express, measure, evaluate, and estimate, many complications. Among them are contradictions, infringements, damages, hindrances, obstacles, restrictions, mistakes, distortions, errors, information incompleteness, the multivariant approach, etc. There have been well-known attempts to consider some separate objects and systems with chosen complications, e.g., approximation and finite overdetermined sets of equations. To consider them at all, classical mathematics has only very limited, nonuniversal, and inadequate concepts and methods such as the absolute error, the relative error, and the least square method (LSM) [1] by Legendre and Gauss ("the king of mathematics"), which produce errors of their own and even dozens of principal mistakes.

The absolute error Δ [1] alone is noninvariant and insufficient for quality estimation, giving, for example, the same result 1 both for the acceptable formal (correct or not) equality 1000 =? 999 and for the inadmissible one 1 =? 0. Further, the absolute error is not invariant under equivalent transformations of a problem because, for instance, when a formal equality is multiplied by a nonzero number, the absolute error is multiplied by the norm (absolute value) of that number.
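Both points can be checked with a few lines of arithmetic (a minimal Python sketch; the formal equality a =? b is represented simply by the pair of numbers a and b):

    # Absolute error of a formal equality a =? b
    def abs_error(a, b):
        return abs(a - b)

    print(abs_error(1000, 999))          # 1 - acceptable equality
    print(abs_error(1, 0))               # 1 - inadmissible equality, same score

    # Noninvariance: multiplying both sides of a =? b by a nonzero number k
    # multiplies the absolute error by |k|.
    k = 10
    print(abs_error(k * 1000, k * 999))  # 10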

The relative error δ [1] should play a supplementary role. But even in the case of the simplest formal equality a =? b with two numbers, there are at once two proposals, namely to use either δ1 = |a - b|/|a| or δ2 = |a - b|/|b| as an estimating fraction. This is a generally inadmissible uncertainty that could be acceptable only if the ratio a/b is close to 1. Further, the relative error is intended always to belong to the segment [0, 1]. But for 1 =? 0 with 0 chosen as the denominator, the result is +∞, and for 1 =? -1, with either denominator choice, the result is 2. Hence the relative error has a restricted range of applicability, namely equalities of two elements whose ratio is close to 1. For more complicated equalities with at least three elements, e.g., 100 - 99 =? 0 or 1 - 2 + 3 - 4 =? -1, the choice of a denominator seems entirely vague. This is why the relative error is uncertain in principle, has a very restricted domain of applicability, is practically used in the simplest case only, and is very seldom used for variables and functions.
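The denominator ambiguity and the range violations can likewise be checked directly (a minimal Python sketch; a zero denominator is reported as infinity):

    import math

    # The two competing relative-error estimates for a formal equality a =? b.
    def rel_errors(a, b):
        d = abs(a - b)
        d1 = d / abs(a) if a != 0 else math.inf
        d2 = d / abs(b) if b != 0 else math.inf
        return d1, d2

    print(rel_errors(1000, 999))   # (0.001, 0.001001...) - nearly equal, acceptable
    print(rel_errors(1, 0))        # (1.0, inf)            - leaves [0, 1]
    print(rel_errors(1, -1))       # (2.0, 2.0)            - leaves [0, 1] as well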

The least square method [1] can give adequate results in very special cases only. Its deep analysis [2] by the principles of constructive philosophy, overmathematics, and other fundamental mathematical sciences has discovered many fundamental defects both in the essence (as causes) and in the applicability (as effects) of this method, which is adequate in some rare special cases only and even in them needs thorough adequacy analysis. The method is based on the absolute error alone, which is not invariant under equivalent transformations of a problem, and ignores the possibly noncoinciding physical dimensions (units) of the relations in a problem. The method does not correlate the deviations of the approximations from the approximated objects with these objects themselves, simply mixes those deviations without adequately weighing them, and treats equal changes of the squares of deviations with relatively smaller and greater moduli (absolute values) as equivalent. The method foresees no iterating, is based on a fixed algorithm accepting no a priori flexibility, and provides no a posteriori adapting of its own. The method uses no invariant estimation of approximation, considers no different approximations, foresees no comparison of different approximations, and considers no choice of the best approximation among different ones.

These defects in the essence of the method lead to many fundamental shortcomings in its applicability. Among them are loss of applicability sense for a set of equations with different physical dimensions (units), no objective sense of the result, which is not invariant under equivalent transformations of a problem, restriction of the class of acceptable equivalent transformations of a problem, no essentially unique correction of the loss of applicability sense, possible ignoring of subproblems of a problem, paradoxical approximation, no analysis of the deviations of the result, no adequate estimation and evaluation of its quality, no refinement of the results, no choice, and the fully ungrounded and inadequate illusion of the best quasisolution as the highest truth.

Additionally, consider the simplest and typical approach of the least square method [1]. Minimizing the sum of the squared differences of the preselected coordinates alone (e.g., the ordinates in a two-dimensional problem) of the graph of the desired approximation function and of each of the given data depends on this preselection, ignores the remaining coordinates, and provides no coordinate system rotation invariance and hence no objective sense of the result. Moreover, the method is correct only for constant approximation or for data without scatter, and it gives systematic errors increasing together with the data scatter and the deviation (namely inclination) of an approximation from a constant. Therefore, the least square method [1] has many fundamental defects both in the essence (as causes) and in the applicability (as effects), is adequate only in some rare special cases, and even in them needs thorough adequacy analysis. Experimental data are inexact, and their amount is always taken greater than that of the parameters in an approximating function, which is often geometrically interpretable by a straight line or curve, plane or surface. That is why this method was possibly the most important one for any data processing and seemed to be irreplaceable.
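The coordinate-preselection defect is easy to demonstrate numerically: an ordinate-based least square fit is not simply carried along by a rotation of the data. A minimal NumPy sketch, with three arbitrarily chosen sample points and an arbitrary 30-degree rotation, illustrating only this noninvariance claim:

    import numpy as np

    def lsm_slope(points):
        # Ordinary least square fit y = a*x + b minimizing ordinate deviations only.
        x, y = points[:, 0], points[:, 1]
        a, b = np.polyfit(x, y, 1)
        return a

    pts = np.array([[0.0, 0.0], [1.0, 1.2], [2.0, 1.6]])   # scattered sample data

    theta = np.pi / 6                                      # rotate the data by 30 degrees
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    rotated = pts @ R.T

    # If the fit were rotation invariant, the second angle would equal the first.
    print(np.degrees(np.arctan(lsm_slope(pts))) + 30.0)    # about 68.7
    print(np.degrees(np.arctan(lsm_slope(rotated))))       # about 66.6 - a systematic shift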

Further, in classical mathematics [1] there is no sufficiently general concept of a quantitative mathematical problem. The concept of a finite or countable set of equations ignores their quantities, like any Cantor set [1], and these quantities are very important for contradictory (e.g., overdetermined) problems without precise solutions. Besides that, without equation quantities, subjoining an equation that coincides with one of the already given equations of such a set is simply ignored, whereas any (even infinitely small) change of this subjoined equation alone at once makes the subjoining essential and changes the given set of equations. Therefore, the concept of a finite or countable set of equations is ill-defined [1]. Uncountable sets of equations (also with their quantities completely ignored) are not considered in classical mathematics [1] at all.
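The ill-definedness noted here can be made concrete with a toy encoding in which each linear equation a·x = b is the pair (a, b) and the problem is a plain Python set (a minimal sketch; the coefficients are arbitrary):

    # Each equation a*x = b is encoded as the pair (a, b).
    problem = {(1.0, 1.0), (1.0, 2.0)}                 # contradictory: x = 1 and x = 2

    # Subjoining an equation that coincides with a given one changes nothing...
    print(problem | {(1.0, 1.0)} == problem)           # True - the duplicate is absorbed

    # ...whereas an arbitrarily small change of that same equation at once
    # changes the given set of equations.
    print(problem | {(1.0, 1.0 + 1e-12)} == problem)   # False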

Applied megamathematics [2], based on pure megamathematics [2] and on overmathematics [2] with its uninumbers, quantielements, quantisets, and uniquantities with quantioperations and quantirelations, provides efficient, universal, and adequate strategies for unimathematically modeling, expressing, measuring, evaluating, and estimating objects, as well as for setting and solving general problems in science, engineering, and life. All this creates the basis for developing, extending, and applying many further fundamental sciences systems based on overmathematics. Among them is, in particular, the unimathematical modeling fundamental sciences system [2], which includes:

fundamental science of universal mathematical and physical modeling essence and strategy including universal mathematical and physical modeling problem setting theories, universal mathematical and physical modeling problem pseudosolution theories, universal mathematical and physical modeling problem solving strategy theories, and universal mathematical and physical modeling problem transformation theories;

fundamental science of universal mathematical and physical model analysis and synthesis including universal mathematical and physical model analysis theories and universal mathematical and physical model synthesis theories;

fundamental science of universal mathematical and physical model invariance and symmetry including universal mathematical and physical model data invariance theories, universal mathematical and physical model problem invariance theories, universal mathematical and physical model method invariance theories, universal mathematical and physical model result invariance theories, and universal mathematical and physical model symmetry theories;

fundamental science of universal mathematical and physical model data unification and grouping including universal mathematical and physical model data unification theories and universal mathematical and physical model data grouping theories;

fundamental science of universal mathematical and physical model data structuring and restructuring including universal mathematical and physical model data structuring theories and universal mathematical and physical model data restructuring theories;

fundamental science of universal mathematical and physical model data scatter and trend including universal mathematical and physical model data direction theories, universal mathematical and physical model data scatter theories, universal mathematical and physical model data trend theories, and general power universal mathematical and physical model data scatter and trend measure and estimation theories;

fundamental science of unimathematically considering mathematical and physical model data outliers including universal mathematical and physical model data outlier determination theories, universal mathematical and physical model data outlier centralization theories, universal mathematical and physical model data outlier transformation theories, universal mathematical and physical model data outlier compensation theories, and universal mathematical and physical model data outlier estimation theories;

fundamental science of universal mathematical and physical model measurement, which includes general theories and methods of developing and applying the unimathematical uniquantity as a universal, perfectly sensitive quantimeasure of universal mathematical and physical models, with possibly recovering true measurement information from incomplete and changed data;

fundamental science of measuring universal mathematical and physical model concessions, which for the first time regularly applies and develops unimathematical theories and methods of measuring universal mathematical and physical model contradictions, infringements, damages, hindrances, obstacles, restrictions, mistakes, distortions, and errors, and also of rationally and optimally controlling them and even of their efficient utilization for developing general objects, systems, and their mathematical models, as well as for solving general problems;

fundamental science of measuring universal mathematical and physical model reserves, further naturally generalizing the fundamental science of measuring universal mathematical and physical model concessions and for the first time regularly applying and developing universal overmathematical theories and methods of measuring not only universal mathematical and physical model contradictions, infringements, damages, hindrances, obstacles, restrictions, mistakes, distortions, and errors, but also harmony (consistency), order (regularity), integrity, preference, assistance, open space, correctness, adequacy, accuracy, reserve, and resource, and also of rationally and optimally controlling them and even of their efficient utilization for developing mathematical and physical models, as well as for solving general problems;

fundamental sciences of measuring universal mathematical and physical model reliability and risk, for the first time regularly applying and developing universal overmathematical theories and methods of quantitatively measuring the reliabilities and risks of universal mathematical and physical models while avoiding unjustified artificial randomization in deterministic problems;

fundamental science of measuring universal mathematical and physical model deviation, for the first time regularly applying overmathematics to measuring the deviations of real general objects and systems from their ideal universal mathematical and physical models, as well as the deviations of universal mathematical and physical models from one another. In a number of other fundamental sciences, under rotation invariance of coordinate systems, general (including nonlinear) theories of the moments of inertia establish the existence and uniqueness of the linear model minimizing its square mean deviation from an object, whereas least square distance (including nonlinear) theories are more convenient for determining this linear model. The classical least square method by Legendre and Gauss ("the king of mathematics") is the only method known in classical mathematics that is applicable to contradictory (e.g., overdetermined) problems. In the two-dimensional Cartesian coordinate system, this method minimizes the sum of the squares of ordinate differences and ignores the inclination of a model. This leads not only to a systematic regular error that breaks invariance and grows together with this inclination and the data variability but also to paradoxical behavior of the returned linear model under rotation of the data. Under coordinate system linear transformation invariance, power (e.g., square) mean (including nonlinear) theories lead to optimum linear models. Theories and methods of measuring data scatter and trend give corresponding invariant and universal measures for linear and nonlinear models. Group center theories sharply reduce this scatter, better reveal the data trend, and for the first time also consider outliers. Overmathematics even allows dividing a point into parts and assigning them to different groups. Coordinate division theories and especially principal bisector (as a model) division theories efficiently form such groups. Note that there are many reasonable types of deviation, e.g., the following:

the value of a nonnegative binary function (e.g., the norm of the difference of the two sides of an equation as a subproblem in a problem after substituting a pseudosolution into this problem, the distance from the graph of this equation, its absolute error [1], relative error [1], unierror [2], etc.) of this object and each of the given objects;

the value of a nonnegative function (e.g., the power mean value with some positive power exponent) of these values over all the equations in a general problem, as in the sketch below.
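A minimal sketch of such a two-stage deviation measure for the contradictory pair x = 1, x = 2 (in Python; the per-equation deviation is taken here as the absolute residual and the combining function as the power mean with an arbitrary exponent t > 0, both choices being illustrative only):

    # Power-mean deviation of a pseudosolution x for equations a*x = b,
    # each encoded as the pair (a, b).
    def power_mean_deviation(x, equations, t=2.0):
        residuals = [abs(a * x - b) for a, b in equations]
        return (sum(r ** t for r in residuals) / len(residuals)) ** (1.0 / t)

    equations = [(1.0, 1.0), (1.0, 2.0)]          # contradictory: x = 1 and x = 2
    for x in (1.0, 1.5, 2.0):
        print(x, power_mean_deviation(x, equations))
    # For t = 2, x = 1.5 minimizes this deviation measure for the pair.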

Along with the usual straight-line square distance, we may also use, e.g., other, possibly curvilinear, power distances (under additional limitations and other conditions, such as using curves lying in a certain surface, etc.). For point objects and the usual straight-line square distance, e.g., we obtain the unique quasisolution for two points on a straight line, three points in a plane, or four points in three-dimensional space. Using distances only makes this criterion invariant under coordinate system translation and rotation.
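As a sketch of a distance-based criterion of this kind: the straight line minimizing the sum of squared perpendicular (rather than ordinate) distances to given points passes through their centroid along the principal axis of the centered data, and its fitted direction is carried along by any translation or rotation of the data. This is the standard orthogonal least squares construction, shown here (with the same arbitrary sample points as above) only to illustrate the invariance claim:

    import numpy as np

    def principal_line(points):
        # Line minimizing the sum of squared perpendicular distances:
        # it passes through the centroid along the leading right singular
        # vector of the centered data (the top eigenvector of the covariance).
        centroid = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
        d = vt[0]
        if d[0] < 0:                 # fix the sign ambiguity of the direction
            d = -d
        return centroid, d

    pts = np.array([[0.0, 0.0], [1.0, 1.2], [2.0, 1.6]])
    theta = np.pi / 6
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    _, d0 = principal_line(pts)
    _, d1 = principal_line(pts @ R.T + np.array([3.0, -2.0]))   # rotated and translated data

    # The two printed angles coincide: the fitted direction simply follows the rotation.
    print(np.degrees(np.arctan2(d0[1], d0[0])) + 30.0)          # about 69.6
    print(np.degrees(np.arctan2(d1[1], d1[0])))                 # about 69.6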

The unimathematical modeling fundamental sciences system is universal and very efficient.

References

[1] Encyclopaedia of Mathematics / Managing editor M. Hazewinkel. Volumes 1 to 10. Kluwer Academic Publ., Dordrecht, 1988-1994.

[2] Lev Gelimson. Elastic Mathematics. General Strength Theory. The "Collegium" All World Academy of Sciences Publishers, Munich (Germany), 2004, 496 pp.