Computer Fundamental Sciences System (Essential)

by

© Ph. D. & Dr. Sc. Lev Gelimson

Academic Institute for Creating Fundamental Sciences (Munich, Germany)

Mathematical Journal

of the "Collegium" All World Academy of Sciences

Munich (Germany)

12 (2012), 7

UDC 501:510

2010 Math. Subj. Classification: primary 00A71; secondary 03E10, 03E72, 08B99, 26E30, 28A75.

Keywords: Overmathematics, general problem fundamental sciences system, pseudosolution, quantiset, subproblem, strategy, invariance, quantibound, estimation, trend, bisector, iteration.

Even the very fundamentals of classical computational mathematics [1] and, moreover, of any classical computational science at all have their own evident lacks and shortcomings, among them the following:

1. Classical computational mathematics [1] and, moreover, any classical computational science at all directly uses only the available computer (hardware with software) capabilities. But these capabilities are very restricted.

2. Any computer-aided data modeling and processing (representation, evaluation, estimation, approximation, calculation, etc.) is directly based only on the available computer (hardware with software) capabilities for representing real numbers. But these capabilities are very restricted.

3. There are a least (most negative) computer number (a computer minus infinity) and a greatest positive computer number (a computer plus infinity), which bound the representable range of real numbers from below and from above, respectively, in every operation, and hence limit the range and depth of any investigation.

4. There are also a greatest negative computer number and a least positive computer number such that any real number between them, once halved, becomes (due to rounding) a computer zero. This limits representation sensitivity not only for such real numbers but naturally for real numbers in general, in every operation, and hence limits the range and depth of any investigation, as the sketch below shows.
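For instance, items 2-4 can be observed directly in standard IEEE 754 double precision, which is what typical hardware and software provide. Here is a minimal Python sketch; the printed values are properties of this number format, not of any particular program:

    import sys

    # Greatest and least positive normalized numbers: the bounds behind a
    # "computer plus infinity" and the neighborhood of a "computer zero".
    print(sys.float_info.max)      # about 1.7976931348623157e+308
    print(sys.float_info.min)      # about 2.2250738585072014e-308

    # Overflow: exceeding the greatest representable number gives inf.
    print(sys.float_info.max * 2)  # inf

    # Underflow: repeated halving of a tiny positive number ends in an
    # exact zero, i.e., a nonzero real number becomes a computer zero.
    x = sys.float_info.min
    while x > 0.0:
        x /= 2.0
    print(x == 0.0)                # True

    # Finite precision: even 0.1 has no exact binary representation.
    print(0.1 + 0.2 == 0.3)        # False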

5. A computer cannot think; it typically works blindly, following a priori nonuniversal algorithms from beginning to end without any per-operation result checking, testing, or estimation accompanied by "learning by doing".

6. Many methods available in classical computational mathematics [1] practically ignore these and other very essential specific features of computer-aided data modeling and processing and use a computer only as a high-speed calculator.

7. Classical computational mathematics [1] ignores the influence of the power exponent when using power mean values and practically considers the second power only. The second power brings clear analytic simplicity in hand calculation but typically gives fully inadequate results and has almost no advantage in machine computation; the sketch below shows how strongly the exponent matters.
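To make the exponent's influence concrete, here is a minimal Python sketch of the power mean M_p(x) = ((x_1^p + ... + x_n^p) / n)^(1/p); the data values are hypothetical, chosen only for illustration:

    def power_mean(values, p):
        """Power mean of positive values; p = 0 is the geometric-mean limit."""
        n = len(values)
        if p == 0:
            product = 1.0
            for v in values:
                product *= v
            return product ** (1.0 / n)
        return (sum(v ** p for v in values) / n) ** (1.0 / p)

    data = [1.0, 2.0, 4.0, 8.0]   # hypothetical positive data
    for p in (-1, 0, 1, 2, 3):
        print(p, round(power_mean(data, p), 4))
    # Prints means growing with p (harmonic, geometric, arithmetic,
    # quadratic, cubic), so fixing p = 2, as least-squares practice does,
    # is a real modeling decision, not a neutral default.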

8. The built-in standard functions (for rounding, etc.) of every commercial software package have their own cardinal defects of principle and lead to errors that can prevent the execution of relatively precise calculation programs, e.g., the so-called one-cent problem in bookkeeping, illustrated in the sketch below.
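For example (a Python sketch; the specific behavior shown is Python's, taken here only as one concrete, widely used software system), binary floating point cannot represent most decimal money amounts exactly, and the built-in rounding is round-half-to-even, so naive cent arithmetic drifts:

    from decimal import Decimal, ROUND_HALF_UP

    print(sum([0.10] * 3))    # 0.30000000000000004, not 0.3
    print(round(2.675, 2))    # 2.67, not 2.68: the stored value lies below 2.675
    print(round(0.5), round(1.5), round(2.5))   # 0 2 2 ("banker's rounding")

    # One common remedy: exact decimal arithmetic with an explicit rounding rule.
    amount = Decimal("2.675").quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
    print(amount)             # 2.68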

9. The finite element method (FEM) is regarded as the standard in computer-aided problem solving. To be commercial, its software cannot consider nonstandard features of the studied objects. There are no attempts to exactly satisfy the fundamental equations of balance and deformation compatibility within the volume of each finite element, nor even to approximately estimate the errors of the pseudosolutions to these equations in that volume. Such errors are simply distributed over it without any known law.

Some chosen elementary test problems of elasticity theory with exact solutions show that FEM pseudosolutions can theoretically converge to those exact solutions only under suitable (a priori fully unclear) object discretization with infinitely many finite elements. Even to provide engineering precision, one usually needs very many sufficiently small finite elements. One can hope (without any guarantee) for comprehensible results only with a huge number of finite elements and a huge amount of information, which cannot be captured and analyzed. And even these unconvincing arguments hold only for the simplest, fully untypical cases, NOT for real, much more complicated problems. In practically solving such problems, to save human work, one usually applies some accidental object discretization with too small a number of finite elements and obtains some "black box" result without any possibility of, or desire for, checking and testing it. But it has a beautiful graphic interpretation that also impresses unqualified customers, who simply think that nicely presented results cannot be inadequate. Adding even one new node demands full recalculation, accompanied by an enormous volume of handwork that cannot be delegated to the computer by programming. Experience shows that under unsuccessful object discretization into finite elements (and good luck cannot be expected in advance!), even skilled researchers arrive at absolutely unusable results, inconsiderately declared the ultimate truth and actually demanding blind belief.

The same also holds for the FEM fundamentals, such as the absolute error, the relative error, and the least square method (LSM) [1] of Legendre and Gauss ("the king of mathematics"), which produce their own errors and even dozens of cardinal defects of principle, and, moreover, for the very fundamentals of classical mathematics [1]. Long-term experience also shows that a computer cannot work at all in the way a human imagines, and per-operation control with calculation checking is necessary but practically impossible.

It is especially dangerous that the FEM creates the harmful illusion that, thanks to it, almost any mathematician or engineer can successfully calculate the stress and strain states of arbitrarily complicated objects without understanding their deformation under loading, and without knowledge of mathematics, strength of materials, and deformable solid mechanics, as if spatial imagination alone sufficed to break an object into finite elements. Full error! To carry out responsible strength calculations even by known norms, engineers need an analytical mentality, broad and profound knowledge, the ability to use it creatively and actively, intuition, long-term experience, and even talent. The same also holds for any computer-aided problem solving, e.g., in hydrodynamics.
A computer is only a blind, powerful calculator: it cannot think or provide human understanding, but it quickly gives voluminous, impressive, and beautifully presented illusory "solutions" to any problems, with many resulting failures and catastrophes. Hence the FEM alone is unreliable, but it can be very useful as a supplement to analytic theories and methods if these allow testing the FEM and the results correlate. Then the FEM adds both details and a beautiful graphic interpretation, as the validation sketch below illustrates.
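As a deliberately simple illustration of this recommendation (the model problem is our own choice, not taken from the source), the following Python sketch validates a one-dimensional linear finite element computation against a known analytic solution and watches the error under mesh refinement. It solves -u'' = pi^2 sin(pi x) on (0, 1) with u(0) = u(1) = 0, whose exact solution is u(x) = sin(pi x):

    import math
    import numpy as np

    def fem_max_error(n):
        """Max nodal error of linear FEM with n uniform elements.

        Lumped (one-point) load integration is used on purpose so that
        the discretization error stays visible at the nodes."""
        h = 1.0 / n
        x = np.linspace(0.0, 1.0, n + 1)
        # Stiffness matrix for linear elements at the interior nodes.
        K = (np.diag(np.full(n - 1, 2.0)) +
             np.diag(np.full(n - 2, -1.0), 1) +
             np.diag(np.full(n - 2, -1.0), -1)) / h
        f = (math.pi ** 2) * np.sin(math.pi * x[1:-1]) * h   # lumped load
        u = np.zeros(n + 1)
        u[1:-1] = np.linalg.solve(K, f)
        return float(np.max(np.abs(u - np.sin(math.pi * x))))

    for n in (4, 8, 16, 32):
        print(n, fem_max_error(n))
    # The error shrinks roughly by a factor of 4 per mesh doubling; without
    # the exact reference solution, nothing in the FEM output itself would
    # reveal how coarse a given discretization is.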

Therefore, the very fundamentals of classical computational sciences [1] have many obviously deep and even cardinal defects of principle.

Consequently, evolutionary local corrections, improvements, and developments of classical computational sciences [1], however useful, are fully insufficient to make them adequate. Classical computational sciences [1] need a revolutionary replacement of their inadequate very fundamentals with adequate ones.

Nota bene: Naturally, if possible, any revolution in classical computational sciences [1] has to be based on adequate revolutions in classical pure and applied mathematics [1].

Computational megascience [2], based on applied megamathematics [2] and hence on pure megamathematics [2] and on overmathematics [2] with its uninumbers, quantielements, quantisets, and uniquantities with quantioperations and quantirelations, provides efficient, universal, and adequate strategic uniquantitative modeling (expressing, representing, etc.) and processing (measuring, evaluating, estimating, approximating, calculating, etc.) of data. All this creates the basis for many further fundamental sciences systems that develop, extend, and apply overmathematics. Among them is, in particular, the computer fundamental sciences system [2], which includes:

software fundamental science, which includes general theories and methods of developing and applying overmathematical uniquantity, as a universal and perfectly sensitive quantimeasure of general objects, systems, and their mathematical models, in order to rationally select and use available computer software, to provide its efficient functioning, and to develop further computer software;

built-in functions fundamental science, which includes general theories and methods of developing and applying overmathematical uniquantity, as a universal and perfectly sensitive quantimeasure of general objects, systems, and their mathematical models, to the available computer built-in functions, in order to transform them toward perfect functioning and to develop further standard functions;

avoiding computer zero and infinity fundamental science, which includes general theories and methods of developing and applying overmathematical uniquantity, as a universal and perfectly sensitive quantimeasure of general objects, systems, and their mathematical models, to the available computer theories, methods, and algorithms, in order to transform them so that computer zero and infinity are avoided, and to develop further computer theories, methods, and algorithms avoiding computer zero and infinity (see the sketch after this list);

megamathematical microscope and telescope fundamental science, which includes general theories and methods of developing and applying overmathematical uniquantity, as a universal and perfectly sensitive quantimeasure of general objects, systems, and their mathematical models, in order to create computer theories, methods, and algorithms with individual, possibly inhomogeneous megamathematical microscopes and telescopes that suitably transform number and uninumber scales so as to always provide computer calculation feasibility and sensitivity while avoiding computer zero and infinity (one such scale transformation is sketched after this list);

algorithm universalization fundamental science, which includes general theories and methods of developing and applying overmathematical uniquantity, as a universal and perfectly sensitive quantimeasure of general objects, systems, and their mathematical models, in order to create universal computer algorithms that always provide computer calculation feasibility and sensitivity while avoiding computer zero and infinity;

computer intelligence fundamental science, which includes general theories and methods of developing and applying overmathematical uniquantity, as a universal and perfectly sensitive quantimeasure of general objects, systems, and their mathematical models, in order to create truly intelligent, highly efficient computer algorithms that always provide computer calculation feasibility and sensitivity while avoiding computer zero and infinity and do not represent unnecessary intermediate results;

cryptography fundamental science, which includes general theories and methods of developing and applying overmathematical uniquantity, as a universal and perfectly sensitive quantimeasure of general objects, systems, and their mathematical models, in order to create hierarchies of universal cryptography systems.

The computer fundamental sciences system is universal and very efficient.
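The source does not specify algorithms for these sciences. As one plausible reading of the scale-transforming "microscope and telescope" idea referred to above, here is a minimal Python sketch (the example values are ours) of two standard rescalings that keep otherwise representable results away from computer zero and infinity:

    import math

    # Naive Euclidean norm overflows although the true norm is representable;
    # factoring out the larger magnitude (a "telescope" for huge numbers) works.
    a, b = 1e200, 1e200
    naive = math.sqrt(a * a + b * b)                 # a * a overflows to inf
    scaled = abs(a) * math.sqrt(1.0 + (b / a) ** 2)  # about 1.414e200
    print(naive, scaled, math.hypot(a, b))           # inf vs the correct value (twice)

    # exp(-1000) underflows to a computer zero, so log(exp(-1000) + exp(-1001))
    # fails if computed literally; shifting to a logarithmic scale
    # (a "microscope" for tiny numbers) avoids both zero and infinity.
    xs = [-1000.0, -1001.0]
    m = max(xs)
    logsum = m + math.log(sum(math.exp(v - m) for v in xs))
    print(logsum)                                    # about -999.6867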

References

[1] Encyclopaedia of Mathematics / Managing editor M. Hazewinkel. Volumes 1 to 10. Kluwer Academic Publishers, Dordrecht, 1988-1994.

[2] Lev Gelimson. Elastic Mathematics. General Strength Theory. The "Collegium" All World Academy of Sciences Publishers, Munich (Germany), 2004, 496 pp.