Fundamental Defects of Classical Computational Sciences

by

© Ph. D. & Dr. Sc. Lev Gelimson

Academic Institute for Creating Fundamental Sciences (Munich, Germany)

Mathematical Monograph

The “Collegium” All World Academy of Sciences Publishers

Munich (Germany)

2012

Keywords: Computational science, computational mathematics, megascience, revolution, megamathematics, overmathematics, unimathematical test fundamental metasciences system, knowledge, philosophy, strategy, tactic, analysis, synthesis, object, operation, relation, criterion, conclusion, evaluation, measurement, estimation, expression, modeling, processing, symmetry, invariance, bound, level, worst case, defect, mistake, error, reserve, reliability, risk, supplement, improvement, modernization, variation, modification, correction, transformation, generalization, replacement.

Introduction

There are many separate scientific achievements of mankind, but they often bring rather unsolvable problems than really improving human life quality. One of the reasons is that the general level of earth science is clearly insufficient to adequately solve and even consider many urgent human problems. To provide creating and developing applicable and, moreover, adequate methods, theories, and sciences, we need to test them via universal (if possible), at least applicable and, moreover, adequate test metamethods, metatheories, and metasciences whose general level has to be high enough. Mathematics as the universal quantitative scientific language naturally has to play a key role here.

But classical mathematics [1], with its hardened systems of axioms, intentional search for contradictions, and even their purposeful creation, cannot (and does not want to) regard very many problems in science, engineering, and life. This generally holds when solving valuation, estimation, discrimination, control, and optimization problems, as well as, in particular, when measuring very inhomogeneous objects and rapidly changeable processes. It is discovered [2] that classical fundamental mathematical theories, methods, and concepts [1] are insufficient for adequately solving and even considering many typical urgent problems.

Megamathematics including overmathematics [2], based on its uninumbers, quantielements, quantisets, and uniquantities with quantioperations and quantirelations, provides universal and adequate modeling, expressing, measuring, evaluating, and estimating of general objects. All this creates the basis for many further megamathematics fundamental sciences systems that develop, extend, and apply overmathematics. Among them are, in particular, science unimathematical test fundamental metasciences systems [3], which are universal.

Computational Science Unimathematical Test Fundamental Metasciences System

The computational science unimathematical test fundamental metasciences system in megamathematics [2] is one of such systems and can efficiently, universally, and adequately strategically unimathematically test any computational science. This system includes:

fundamental metascience of computational science test philosophy, strategy, and tactic including computational science test philosophy metatheory, computational science test strategy metatheory, and computational science test tactic metatheory;

fundamental metascience of computational science consideration including computational science fundamentals determination metatheory, computational science approaches determination metatheory, computational science methods determination metatheory, and computational science conclusions determination metatheory;

fundamental metascience of computational science analysis including computational subscience analysis metatheory, computational science fundamentals analysis metatheory, computational science approaches analysis metatheory, computational science methods analysis metatheory, and computational science conclusions analysis metatheory;

fundamental metascience of computational science synthesis including computational science fundamentals synthesis metatheory, computational science approaches synthesis metatheory, computational science methods synthesis metatheory, and computational science conclusions synthesis metatheory;

fundamental metascience of computational science objects, operations, relations, and criteria including computational science object metatheory, computational science operation metatheory, computational science relation metatheory, and computational science criterion metatheory;

fundamental metascience of computational science evaluation, measurement, and estimation including computational science evaluation metatheory, computational science measurement metatheory, and computational science estimation metatheory;

fundamental metascience of computational science expression, modeling, and processing including computational science expression metatheory, computational science modeling metatheory, and computational science processing metatheory;

fundamental metascience of computational science symmetry and invariance including computational science symmetry metatheory and computational science invariance metatheory;

fundamental metascience of computational science bounds and levels including computational science bound metatheory and computational science level metatheory;

fundamental metascience of computational science directed test systems including computational science test direction metatheory and computational science test step metatheory;

fundamental metascience of computational science tolerably simplest limiting, critical, and worst cases analysis and synthesis including computational science tolerably simplest limiting cases analysis and synthesis metatheories, computational science tolerably simplest critical cases analysis and synthesis metatheories, computational science tolerably simplest worst cases analysis and synthesis metatheories, and computational science tolerably simplest limiting, critical, and worst cases counterexamples building metatheories;

fundamental metascience of computational science defects, mistakes, errors, reserves, reliability, and risk including computational science defect metatheory, computational science mistake metatheory, computational science error metatheory, computational science reserve metatheory, computational science reliability metatheory, and computational science risk metatheory;

fundamental metascience of computational science test result evaluation, measurement, estimation, and conclusion including computational science test result evaluation metatheory, computational science test result measurement metatheory, computational science test result estimation metatheory, and computational science test result conclusion metatheory;

fundamental metascience of computational science supplement, improvement, modernization, variation, modification, correction, transformation, generalization, and replacement including computational science supplement metatheory, computational science improvement metatheory, computational science modernization metatheory, computational science variation metatheory, computational science modification metatheory, computational science correction metatheory, computational science transformation metatheory, computational science generalization metatheory, and computational science replacement metatheory.

The computational science unimathematical test fundamental metasciences system in megamathematics [2] is universal and very efficient.

In particular, let us apply the computational science unimathematical test fundamental metasciences system to classical computational mathematics [1].

Nota bene: Naturally, all the fundamental defects of both classical pure and applied mathematics [1] discovered due to the pure science unimathematical test fundamental metasciences system and the applied science unimathematical test fundamental metasciences system in megamathematics [2] also hold in classical computational sciences [1].

Fundamental Defects of Pure Mathematics

Even the very fundamentals of classical pure mathematics [1] have evident cardinal defects of principle.

1. The real numbers R evaluate no unbounded quantity and, because of gaps, not all bounded quantities. The same probability p_n = p of the random sampling of a certain n ∈ N = {0, 1, 2, ...} does not exist in R, since ∑_{n∈N} p_n is either 0 for p = 0 or +∞ for p > 0. It is urgent to exactly express (in some suitable extension of R) all infinite and infinitesimal quantities, e.g., such a p for any countable or uncountable set, as well as distributions and distribution functions on any sets of infinite measures.
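As a minimal numerical sketch (an added illustration, not part of the original argument; the helper name partial_sum is hypothetical), the following Python lines show that no single real value p can serve as the common probability of every natural number: the partial sums of p either stay at 0 or eventually exceed 1 and grow without bound.

# Assumed illustration: partial sums of a constant probability p over the naturals.
def partial_sum(p, terms):
    return p * terms  # sum of p repeated `terms` times
for p in (0.0, 1e-12):
    print(p, [partial_sum(p, 10 ** k) for k in (3, 9, 15)])
# p = 0.0   -> every partial sum is 0, so the total cannot equal 1
# p = 1e-12 -> the partial sums pass 1 and tend to +infinity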

2. The Cantor sets [1], with either unit or zero quantities of their possible elements, may contain any object as an element either once or not at all, ignoring its true quantity. The same holds for the Cantor set relations and operations with absorption. That is why those set operations are only restrictedly invertible. In the Cantor sets, the simplest equations X ∪ A = B and X ∩ A = B in X are solvable by A ⊆ B and A ⊇ B only, respectively [uniquely by A = ∅ (the empty set) and A = B = U (a universal set), respectively]. The equations X ∪ A = B and X = B \ A in the Cantor sets are equivalent by A = ∅ only. In a fuzzy set, the membership function of each element may also lie strictly between these ultimate values 1 and 0, but in the case of uncertainty only. Element repetitions are taken into account in multisets with any cardinal numbers as multiplicities and in ordered sets (tuples, sequences, vectors, permutations, arrangements, etc.) [1]. They and unordered combinations with repetitions cannot express many typical object collections (without structure), e.g., that of half an apple and a quarter pear. For any concrete (mixed) physical magnitudes (quantities with measurement units), e.g., "5 L (liter) fuel", there is no suitable mathematical model and no known operation, say between "5 L" and "fuel" (not: "5 L" × "fuel" or "fuel" × "5 L"). Note that multiplication is the evident operation between the number "5" and the measurement unit "L". The Cantor set relations and operations, which are only restrictedly reversible and allow absorption, contradict the conservation law of nature because they ignore element quantities, and they hinder constructing any universal degrees of quantity.
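The non-invertibility of the Cantor set operations and the need for element quantities can be illustrated with a small Python sketch (an assumed illustration; the dictionary of fractional multiplicities is only a naive stand-in for quantisets, not their definition):

# Assumed sketch: union absorbs quantities, so X | A == B does not determine X.
A, B = {2}, {1, 2}
print([X for X in ({1}, {1, 2}) if X | A == B])  # both {1} and {1, 2} solve X | A == B
# A naive quantity-preserving collection: half an apple and a quarter pear.
collection = {"apple": 0.5, "pear": 0.25}
print(collection)  # fractional quantities that neither a Cantor set nor a multiset can record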

3. The cardinality is sensitive to finite unions of disjoint finite sets only but not sufficiently sensitive to infinite sets and even to intersecting finite sets (because of absorption). It gives the same continuum cardinality C for clearly very distinct point sets in a Cartesian coordinate system between two parallel lines or planes differently distant from one another.

4. The measures are finitely sensitive within a certain dimensionality, give either 0 or +∞ for distinct point sets between two parallel lines or planes differently distant from one another, and cannot discriminate the empty set ∅ and null sets, namely zero-measure sets [1].

5. The probabilities cannot discriminate impossible and some differently possible events.

6. The operations are considered to be at most countable.

7. All existing objects and systems in nature, society, and thinking have complications, e.g., contradictoriness, and hence exist without adequate models in classical mathematics [1]. It intentionally avoids, ignores, and cannot (and possibly hence does not want to) adequately consider, model, express, measure, evaluate, and estimate many complications. Among them are contradictions, infringements, damages, hindrances, obstacles, restrictions, mistakes, distortions, errors, information incompleteness, multivariant approach, etc.

Naturally, there are very many other lacks and shortcomings of classical pure mathematics [1]. For example, a power of a negative number is well-defined for even positive integer exponents only; see the counterexamples

(-1)^3 = -1 ≠ 1 = [(-1)^6]^(1/2) = (-1)^(6/2),

(-1)^(1/3) = -1 ≠ 1 = [(-1)^2]^(1/6) = (-1)^(2/6).
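The first counterexample can be reproduced in floating point; the following is a hedged Python sketch (note that in Python 3 a negative base raised to a non-integral float exponent yields a complex principal value rather than the real root):

lhs = (-1) ** 3                # -1
rhs = ((-1) ** 6) ** (1 / 2)   # ((-1)^6)^(1/2) = 1^(1/2) = 1.0
mid = (-1) ** (6 / 2)          # (-1)^3.0 = -1.0, since 6/2 is exactly 3.0
print(lhs, rhs, mid)           # -1 1.0 -1.0: the "law" (a^m)^(1/n) = a^(m/n) fails here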

Therefore, the very fundamentals of classical pure mathematics [1] have a lot of obviously deep and even cardinal defects of principle.

Fundamental Defects of Applied Mathematics

In the very fundamentals of classical applied mathematics [1], with its own evident cardinal defects of principle, there were well-known attempts to consider some separate objects and systems with chosen complications, e.g., approximation and finite overdetermined sets of equations. To consider them at all, classical mathematics has only very limited, nonuniversal, and inadequate concepts and methods such as the absolute error, the relative error, and the least square method (LSM) [1] by Legendre and Gauss ("the king of mathematics"), which produce their own errors and even dozens of principal mistakes. The same holds for classical mathematics estimators and methods.

8. The absolute error Δ [1] alone is noninvariant and insufficient for quality estimation, giving, for example, the same result 1 for the acceptable formal (correct or not) equality 1000 =? 999 and for the inadmissible formal equality 1 =? 0. Further, the absolute error is not invariant under equivalent transformations of a problem because, for instance, when a formal equality is multiplied by a nonzero number, the absolute error is multiplied by the norm (modulus, absolute value) of that number.
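Both observations can be checked with a few Python lines (an added sketch, not from the original text; the helper name abs_err is illustrative): the absolute error gives the same value 1 for 1000 =? 999 and 1 =? 0, and multiplying an equality by a constant rescales the error.

def abs_err(a, b):
    return abs(a - b)
print(abs_err(1000, 999), abs_err(1, 0))  # 1 and 1: the two cases are indistinguishable
print(abs_err(1000 * 1, 1000 * 0))        # 1000: the same equality 1 =? 0 scaled by 1000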

9. The relative error δ [1] should play a supplementary role. But even in the case of the simplest formal equality a =? b of two numbers, there are at once two proposals: to use either δ_1 = |a - b|/|a| or δ_2 = |a - b|/|b| as the estimating fraction. This is a generally inadmissible uncertainty that could be acceptable only if the ratio a/b is close to 1. Further, the relative error is intended to always belong to the segment [0, 1]. But for 1 =? 0, choosing 0 as the denominator gives +∞; for 1 =? -1, each denominator choice gives 2. Hence the relative error has a restricted range of applicability, amounting to equalities of two elements whose ratio is close to 1. For more complicated equalities with at least three elements, e.g., 100 - 99 =? 0 or 1 - 2 + 3 - 4 =? -1, the choice of a denominator seems to be entirely vague. This is why the relative error is uncertain in principle, has a very restricted domain of applicability, and is practically used in the simplest case only and very seldom for variables and functions.
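A short Python sketch of these three effects follows (an assumed illustration; the helper name rel_errs is hypothetical): the two candidate fractions disagree, blow up for 1 =? 0, and leave [0, 1] for 1 =? -1.

import math
def rel_errs(a, b):
    d1 = abs(a - b) / abs(a) if a != 0 else math.inf
    d2 = abs(a - b) / abs(b) if b != 0 else math.inf
    return d1, d2
print(rel_errs(1000, 999))  # (0.001, 0.001001...): acceptable only because a/b is near 1
print(rel_errs(1, 0))       # (1.0, inf): the denominator choice decides the answer
print(rel_errs(1, -1))      # (2.0, 2.0): outside the intended segment [0, 1]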

10. The least square method [1] can give adequate results in very special cases only. Its deep analysis [2] by the principles of constructive philosophy, overmathematics, and other fundamental mathematical sciences has discovered many fundamental defects both in the essence (as causes) and in the applicability (as effects) of this method, which is adequate in some rare special cases only and even in them needs thorough adequacy analysis. The method is based on the absolute error alone, which is not invariant under equivalent transformations of a problem, and ignores the possibly noncoinciding physical dimensions (units) of the relations in a problem. The method does not correlate the deviations of the object approximations from the approximated objects with these objects themselves, simply mixes those deviations without adequately weighing them, and considers equal changes of the squares of those deviations with relatively smaller and greater moduli (absolute values) as equivalent. The method foresees no iterating, is based on a fixed algorithm accepting no a priori flexibility, and provides no a posteriori adapting of its own. The method uses no invariant estimation of approximation, considers no different approximations, foresees no comparison of different approximations, and provides no choice of the best approximation among different ones. These defects in the essence of the method lead to many fundamental shortcomings in its applicability. Among them are loss of applicability sense for a set of equations with different physical dimensions (units), no objective sense of a result that is noninvariant under equivalent transformations of a problem, restriction of the class of acceptable equivalent transformations of a problem, no essentially unique correction of the applicability sense loss, possible ignoring of subproblems of a problem, paradoxical approximation, no analysis of the deviations of the result, no adequate estimation and evaluation of its quality, no refinement of the results, no choice, and the illusion of the best quasisolution as the highest truth, fully ungrounded and inadequate. Additionally, consider the simplest least square method [1] approach, which is typical. Minimizing the sum of the squared differences of the preselected coordinates alone (e.g., ordinates in a two-dimensional problem) of the graph of the desired approximation function and of each of the given data points depends on this preselection, ignores the remaining coordinates, and provides no coordinate system rotation invariance and hence no objective sense of the result. Moreover, the method is correct only for constant approximation or no data scatter and gives systematic errors increasing together with the data scatter and the deviation (namely declination) of an approximation from a constant. Therefore, the least square method [1] has many fundamental defects both in the essence (as causes) and in the applicability (as effects), is adequate only in some rare special cases and even in them needs thorough adequacy analysis. Experimental data are inexact, and their amount is always taken greater than the number of parameters in an approximating function, often geometrically interpretable as a straight line or curve, plane or surface. That is why this method has possibly been the most important one for any data processing and has seemed to be irreplaceable.
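The missing rotation invariance mentioned above can be demonstrated with a small numpy sketch (an assumed illustration, not the analysis of [2]; the data, rotation angle, and variable names are arbitrary): fitting the same scattered points in a rotated coordinate system and mapping the line back generally changes the slope.

import numpy as np
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 0.5 * x + 1.0 + rng.normal(0.0, 2.0, x.size)   # scattered data
pts = np.column_stack([x, y])
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
a0, _ = np.polyfit(x, y, 1)             # slope fitted in the original frame
xr, yr = (pts @ R.T).T                  # the same points, rotated by 30 degrees
a1, _ = np.polyfit(xr, yr, 1)           # slope fitted in the rotated frame
a_back = np.tan(np.arctan(a1) - theta)  # that line's slope, mapped back to the original frame
print(a0, a_back)  # generally unequal once the data scatter: no rotation invariance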

11. Further, in classical mathematics [1], there is no sufficiently general concept of a quantitative mathematical problem. The concept of a finite or countable set of equations ignores their quantities like any Cantor set [1]. These quantities are very important in contradictory (e.g., overdetermined) problems without precise solutions. Besides that, without equation quantities, when subjoining an equation coinciding with one of the already given equations of such a set, this subjoined equation is simply ignored, whereas any (even infinitely small) change of this subjoined equation alone at once makes the subjoining essential and changes the given set of equations. Therefore, the concept of a finite or countable set of equations is ill-defined [1]. Uncountable sets of equations (also with completely ignoring their quantities) are not considered in classical mathematics [1] at all.
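A tiny Python sketch (an assumed encoding of equations as coefficient tuples, chosen only for illustration) shows the effect: an exact duplicate is absorbed by a set, while an arbitrarily small perturbation of that duplicate changes the set.

eqs = {(1.0, 1.0), (2.0, 3.0)}            # each equation a*x = b encoded as the tuple (a, b)
print(eqs | {(1.0, 1.0)} == eqs)          # True: the subjoined duplicate is simply ignored
print(eqs | {(1.0, 1.0 + 1e-12)} == eqs)  # False: an infinitesimally changed copy is essential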

Therefore, the very fundamentals of classical applied mathematics [1] have a lot of obviously deep and even cardinal defects of principle.

Fundamental Defects of Classical Computational Sciences

Additionally, even the very fundamentals of classical computational mathematics [1] and, moreover, of any classical computational science at all have their own evident lacks and shortcomings. Among them are the following:

12. Classical computational mathematics [1] and, moreover, any classical computational science at all directly use the available computer (hardware with software) abilities only. But such abilities are very restricted.

13. Each computer aided data modeling and processing (representation, evaluation, estimation, approximation, calculation, etc.) is directly based only on the available computer (hardware with software) abilities to represent real numbers. But such abilities are very restricted.

14. There are the computer least negative number (computer minus infinity) and the computer greatest positive number (computer plus infinity), which limit the representable real number range from below and above, respectively, in each operation and hence limit the investigation range and depth.
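For IEEE 754 double precision this can be seen directly in Python (an added sketch): the largest finite value overflows to the computer plus infinity.

import sys
big = sys.float_info.max   # about 1.7976931348623157e308, the computer greatest positive number
print(big * 2, -big * 2)   # inf and -inf: the computer plus and minus infinity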

15. There are the computer greatest negative number and the computer least positive number so that each real number between them, divided by 2, becomes (due to rounding) a computer zero. This limits representation sensitivity not only for such real numbers but naturally for the real numbers at all in each operation and hence limits the investigation range and depth.
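The corresponding underflow can also be watched in Python (an added sketch for IEEE 754 doubles): halving the smallest positive value repeatedly ends exactly at a computer zero.

import sys
x = sys.float_info.min     # smallest positive normalized double, about 2.2250738585072014e-308
while x > 0.0:
    last, x = x, x / 2     # halving walks through the subnormals and then rounds to 0.0
print(last, x)             # the last nonzero value (about 5e-324) and then exactly 0.0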

16. A computer cannot think and typically works blindly, following a priori nonuniversal algorithms from the beginning to the end without any one-operation result check, test, and estimation accompanied by "learning by doing".

17. Many methods available in classical computational mathematics [1] practically ignore these and other very essential specific features of computer aided data modeling and processing and use a computer as a high-speed calculator only.

18. Classical computational mathematics [1] ignores the influence of the power exponent when using power mean values and practically considers the second power only, which brings clear analytic simplicity in hand-made calculation but typically fully inadequate results and has almost no advantages in computation.
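The influence of the exponent is easy to see with a few Python lines (an added sketch; the helper name power_mean and the data are illustrative): the power mean of the same data changes substantially with the exponent p, so fixing p = 2 is a substantive modeling choice rather than a neutral default.

def power_mean(values, p):
    return (sum(v ** p for v in values) / len(values)) ** (1.0 / p)
data = [1.0, 2.0, 4.0, 8.0]
for p in (1, 2, 3, 10):
    print(p, power_mean(data, p))  # grows from 3.75 at p = 1 toward max(data) = 8 as p increases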

19. The computer built-in standard functions (rounding, etc.) in commercial software have their own cardinal defects of principle and lead to errors which can prohibit executing relatively precise calculation programs, e.g., in bookkeeping, leading to the so-called one-cent problem.
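A minimal Python sketch of this effect (an added illustration): binary floating point plus built-in rounding loses a cent, while decimal arithmetic keeps the books exact.

from decimal import Decimal
print(0.1 + 0.1 + 0.1 == 0.3)  # False: the float sum is 0.30000000000000004
print(round(2.675, 2))         # 2.67, not 2.68: 2.675 has no exact binary representation
print(sum(Decimal("0.10") for _ in range(3)) == Decimal("0.30"))  # True: exact decimal bookkeeping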

20. The finite element method (FEM) is regarded as the standard in computer aided problem solving. To be commercial, its software cannot consider nonstandard features of the studied objects. There are no attempts to exactly satisfy the fundamental equations of balance and deformation compatibility in the volume of each finite element. Moreover, there are no attempts even to approximately estimate the pseudosolution errors of these equations in this volume. Such errors are simply distributed in it without any known law. Some chosen elementary test problems of elasticity theory with exact solutions show that FEM pseudosolutions can theoretically converge to the exact solutions to those problems only by suitable (a priori fully unclear) object discretization with infinitely many finite elements. To provide engineering precision only, we usually need very many sufficiently small finite elements. It is possible to hope (without any guarantee) for comprehensible results only with a huge number of finite elements and a huge information amount which cannot be captured and analyzed. And even such unconvincing arguments hold for those simplest, fully untypical cases only but NOT for real, much more complicated problems. In practically solving them, to save human work, one usually provides some accidental object discretization with too small a number of finite elements and obtains some "black box" result without any possibility and desire to check and test it. But it has a beautiful graphic interpretation also impressing unqualified customers. They simply think that nicely presented results cannot be inadequate. Adding even one new node demands full recalculation once again, accompanied by an enormous volume of handwork which cannot be assigned by programming to the computer. Experience shows that by unsuccessful (and good luck cannot be expected in advance!) object discretization into finite elements, even skilled researchers come to absolutely unusable results inconsiderately declared to be the ultimate truth actually demanding blind belief. The same also holds for the FEM fundamentals such as the absolute error, the relative error, and the least square method (LSM) [1] by Legendre and Gauss ("the king of mathematics"), which produce their own errors and even dozens of cardinal defects of principle, and, moreover, for the very fundamentals of classical mathematics [1]. Long-term experience also shows that a computer cannot work at all the way a human thinks it does, and operationwise control with calculation check is necessary but practically impossible. It is especially dangerous that the FEM creates the harmful illusion that, thanks to it, almost each mathematician or engineer is capable of successfully calculating the stress and strain states of any very complicated objects even without understanding their deformation under loadings, as well as without knowledge in mathematics, strength of materials, and deformable solid mechanics. Spatial imagination alone seems to suffice to break an object into finite elements. Full error! To carry out responsible strength calculations even by known norms, engineers should possess analytical mentality, big and profound knowledge, the ability to creatively and actively use them, intuition, long-term experience, even a talent. The same also holds in any computer aided problem solving, e.g., in hydrodynamics.
A computer is only a blind powerful calculator, cannot think and provide human understanding, but quickly gives voluminously impressive and beautifully issued illusory "solutions" to any problems, with a lot of failures and catastrophes. Hence the FEM alone is unreliable but can be very useful as a supplement to analytic theories and methods if they provide testing of the FEM and there is result correlation. Then the FEM adds both details and a beautiful graphic interpretation.
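The need for very many small elements can be hinted at with a simple numpy sketch (an assumed illustration of discretization error only, not an FEM solver or the author's analysis; the field sin(πx), the mesh sizes, and the helper name max_interp_error are arbitrary choices): the maximum error of a piecewise-linear approximation of a smooth field shrinks only as the element count grows.

import numpy as np
def max_interp_error(n_elements):
    nodes = np.linspace(0.0, 1.0, n_elements + 1)  # uniform mesh on [0, 1]
    fine = np.linspace(0.0, 1.0, 20001)            # dense evaluation grid
    approx = np.interp(fine, nodes, np.sin(np.pi * nodes))
    return np.max(np.abs(approx - np.sin(np.pi * fine)))
for n in (4, 16, 64, 256):
    print(n, max_interp_error(n))  # the error shrinks roughly 16-fold per fourfold refinement, O(h^2)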

Therefore, the very fundamentals of classical computational sciences [1] have a lot of obviously deep and even cardinal defects of principle.

References

[1] Encyclopaedia of Mathematics / Managing editor M. Hazewinkel. Volumes 1 to 10. Kluwer Academic Publ., Dordrecht, 1988-1994.

[2] Lev Gelimson. Elastic Mathematics. General Strength Theory. The "Collegium" All World Academy of Sciences Publishers, Munich (Germany), 2004, 496 pp.

[3] Lev Gelimson. Science Unimathematical Test Fundamental Metasciences Systems. Mathematical Journal of the “Collegium” All World Academy of Sciences, Munich (Germany), 12 (2012), 1.