Computational Unimathematics (Mega-Overmathematics) as a System of Revolutions in Computational Sciences

by

© Ph. D. & Dr. Sc. Lev Gelimson

Academic Institute for Creating Fundamental Sciences (Munich, Germany)

Mathematical Monograph

The “Collegium” All World Academy of Sciences Publishers

Munich (Germany)

12th Edition (2012)

11th Edition (2010)

10th Edition (2004)

9th Edition (2003)

8th Edition (2002)

7th Edition (2001)

6th Edition (2000)

5th Edition (1997)

4th Edition (1995)

3rd Edition (1994)

2nd Edition (1993)

1st Edition (1992)

Abstract

2010 Mathematics Subject Classification: primary 68U01; secondary 65F10, 65F20, 68T20, 68W40.

Keywords: Computational science, computational mathematics, megascience, revolution, megamathematics, unimathematics, mega-overmathematics, unimathematical test fundamental metasciences system, knowledge, philosophy, strategy, tactic, analysis, synthesis, object, operation, relation, criterion, conclusion, evaluation, measurement, estimation, expression, modeling, processing, symmetry, invariance, bound, level, worst case, defect, mistake, error, reserve, reliability, risk, supplement, improvement, modernization, variation, modification, correction, transformation, generalization, replacement.

Mathematics is usually divided into pure, applied, and computational mathematics. Pure mathematics can be further divided into fundamental and advanced mathematics.

Classical mathematics, with its concepts, approaches, methods, and theories, is based on inflexible axiomatization, on the intentional search for artificial contradictions, and even on their purposeful creation in order to desist from further research. These and other fundamental defects make it impossible to acceptably and adequately consider, formulate, and solve many classes of typical urgent problems in science, engineering, and life. Mathematicians select either set theory or mereology, as if the two were incompatible. The real numbers cannot fill the number line because of the gaps between them and hence cannot evaluate even every bounded quantity. Sets, fuzzy sets, multisets, and set operations express and form not all collections. Cardinalities and measures are insufficiently sensitive to infinite sets, and even to intersecting finite sets, due to absorption. No conservation law holds beyond the finite. Infinity appears to be a heap of very different infinities which cardinality can only very roughly discriminate and which no tool can exactly measure. Known hypernumber systems, beginning with nonstandard analysis, demonstrate the possibility of their construction and can be used to prove well-known theorems more intuitively, but they cannot quantitatively solve many classes of typical urgent problems. Operations are typically considered for natural-number or countable sets of operands only and cannot model any mixed magnitude. Exponentiation is well-defined for nonnegative bases only. Exponentiation and further hyperoperations are noncommutative. Division by zero is considered when unnecessary, always brings insolvable problems, and is never efficiently utilized. Probabilities, which do not always exist, cannot discriminate between impossible events and other zero-measure events that are nevertheless possible. The absolute error is noninvariant and alone insufficient for quality estimation. The relative error applies only to the simplest formal equalities of two numbers, and even then it is ambiguous and can be infinite.
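
The ambiguity and degeneration of the relative error criticized above can be reproduced directly. The following minimal Python sketch illustrates the classical definition only (not the unierror introduced later); the function name is illustrative:

```python
# Illustrative sketch: the relative error of an approximate equality a = b
# is ambiguous because either operand may serve as the reference value,
# and it degenerates when the reference value is zero.

def relative_error(approx, exact):
    """Conventional relative error |approx - exact| / |exact|."""
    return abs(approx - exact) / abs(exact)

a, b = 1.0, 2.0
# Ambiguity: the result depends on which number is taken as "exact".
err_ab = relative_error(a, b)  # |1 - 2| / 2 = 0.5
err_ba = relative_error(b, a)  # |2 - 1| / 1 = 1.0
print(err_ab, err_ba)

# Degeneration: with reference value 0, the relative error is undefined
# (or infinite) for any nonzero approximation.
try:
    relative_error(1.0, 0.0)
except ZeroDivisionError:
    print("relative error undefined for exact value 0")
```

The same pair of numbers thus receives two different error values, and comparison with zero receives none at all.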
Mathematical statistics and the least-square method, irreplaceable in the overdetermined problems typical for data processing, are based on the noninvariant absolute error and on the second degree, which is analytically simplest but usually very insufficient. This method is unreliable, is not invariant under equivalent transformations of a problem, makes no sense when the physical dimensions (units) in a problem do not coincide, and can give predictably unacceptable and even completely paradoxical outputs without any estimation or improvement. Artificial randomization brings unnecessary complications. Single-source iteration with a rigid algorithm requires an explicit expression of the next approximation via the previous ones, together with transformation contractivity, and often leads to analytic difficulties, slow convergence, and even noncomputability. Computer modeling of the real numbers brings errors via the rounding of built-in standard functions and via finite signed computer infinities and zeros, which usually excludes calculation exactness, limits the range and depth of research, and can prevent executing calculations in which even the slightest inconsistencies are inadmissible, e.g. in accounting. The finite element method gives visually impressive "black box" results which are not verifiable and are often unacceptable and inadequate.
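
The non-invariance of the least-square method under equivalent transformations can be seen already with one unknown. The following minimal Python sketch is illustrative only (it shows the classical defect, not the author's uni-methods): the systems x = 1, x = 2 and x = 1, 10x = 20 are mathematically equivalent, yet their least-square "solutions" differ.

```python
# Illustrative sketch: least squares is not invariant under equivalent
# transformations of a problem. Multiplying one equation of an
# overdetermined system by a constant changes the least-square solution.

def lsq_1d(equations):
    """Least-square solution of equations a*x = b in one unknown x:
    minimize sum (a*x - b)^2, which gives x = sum(a*b) / sum(a*a)."""
    num = sum(a * b for a, b in equations)
    den = sum(a * a for a, b in equations)
    return num / den

x1 = lsq_1d([(1, 1), (1, 2)])    # x = 1,  x = 2   -> x = 1.5
x2 = lsq_1d([(1, 1), (10, 20)])  # x = 1, 10x = 20 -> x = 201/101
print(x1, x2)
```

Scaling the second equation by 10 shifts the answer from 1.5 to about 1.99, because squared residuals weight the scaled equation 100 times more heavily.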

Every new alternative mathematics can be considered an external revolution in mathematics, which thereby becomes megamathematics. Within any new alternative mathematics itself, creating its own cardinally new very fundamentals that replace the very fundamentals of classical mathematics can be considered an internal revolution in that alternative mathematics, even if classical mathematics itself remains unchanged.

Mega-overmathematics (by its internal entity), or unimathematics (by its external phenomenon), as created and developed, has the character of a superstructure (with useful creative succession, or inheritance) over conventional mathematics as a basis, without refusing any achievement of ordinary mathematics. Moreover, unimathematics even calls for usefully applying ordinary mathematics whenever possible, permissible, acceptable, and adequate.

In these names, the prefix "mega" refers to the infinitely many distinct overmathematics, each including different infinities and overinfinities into the real numbers.

The prefix "uni" is here associated both with the union, or general system, of these infinitely many distinct overmathematics and with the universality of this union and system.

The prefix "over" here means:

1) the superstructural character of mega-overmathematics, or unimathematics, with respect to conventional mathematics;

2) the additional nature of the new possibilities offered by mega-overmathematics besides the usual opportunities of ordinary mathematics;

3) overpossibilities as the qualitatively new features of mega-overmathematics in setting, considering, and solving whole classes of typical urgent problems, so that these overpossibilities often have a much higher order of magnitude compared with the possibilities of conventional mathematics. For example, one such overpossibility is oversensitivity: perfect unlimited sensitivity which exactly satisfies universal conservation laws and completely excludes any absorption, so that infinitely or overinfinitely great magnitudes are exactly separated from one another even by infinitesimal or overinfinitesimal differences.

Unimathematics can be called not only universal and unified but also general, natural, physical, intuitive, nonrigorous, free, flexible, perfectly sensitive, practical, useful, exclusively constructive, creative, inventive, etc.

Mega-overmathematics is a system of infinitely many diverse overmathematics which differ in their possible hyper-Archimedean structure-preserving extensions of the real numbers, obtained by including both specific subsets of some infinite cardinal numbers as canonic positive infinities and the reciprocals of signed zeros as canonic overinfinities, which gives the uninumbers. These provide adequately and efficiently considering, setting, and quantitatively solving many typical urgent problems. In the created uniarithmetics, quantialgebra, and quantianalysis of the finite, the infinite, and the overinfinite with quantioperations and quantirelations, the uninumbers evaluate, precisely measure, and are interpreted by quantisets which are algebraically quantioperable with any quantity of each element and possess universal, perfectly sensitive, and even uncountably algebraically additive uniquantities, so that universal conservation laws hold. Quantification builds quantielements, integer and fractional quantisets, mereologic quantiaggregates (quanticontents), and quantisystems, unifying mereology and set theory. Negativity-conserving multiplication, base-sign-conserving exponentiation, exponentiation hyperefficiency, composite (combined) commutative exponentiation and hyperoperations, root-logarithmic overfunctions, self-root-logarithmic overfunctions, the voiding (emptifying) neutral element (operand), and operations with noninteger and uncountable quantities of operands are also introduced. Division by zero is regarded only when necessary and useful and is efficiently utilized to create overinfinities. Unielements, unisets, mereologic uniaggregates (unicontents), unisystems, unipositional unisets, unimappings, unisuccessions, unisuccessible unisets, uniorders, uniorderable unisets, unistructures, unicorrespondences, and unirelation unisystems are also introduced.
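
The quantisets with arbitrary element quantities and exactly additive uniquantities described above admit a simple finite toy model: a mapping from elements to real (possibly fractional) quantities, with elementwise addition. This is only an illustrative finite sketch under my own assumptions, not the author's full construction (which extends to infinite and overinfinite quantities); the class and method names are hypothetical:

```python
# Toy finite model of a quantiset: each element carries an arbitrary real
# quantity (possibly fractional). Addition is elementwise, and the total
# uniquantity is exactly additive -- unlike set cardinality, which absorbs
# overlaps when sets intersect.

class Quantiset:
    def __init__(self, quantities=None):
        # quantities: dict mapping element -> real quantity of that element
        self.q = dict(quantities or {})

    def __add__(self, other):
        result = dict(self.q)
        for elem, qty in other.q.items():
            result[elem] = result.get(elem, 0) + qty
        return Quantiset(result)

    def uniquantity(self):
        """Total quantity: exactly additive, no absorption."""
        return sum(self.q.values())

a = Quantiset({"x": 1, "y": 0.5})
b = Quantiset({"y": 0.5, "z": 2})
print((a + b).uniquantity())       # 4.0 = 1.5 + 2.5: conservation holds
print(len({"x", "y"} | {"y", "z"}))  # 3, not 4: cardinality absorbs overlap
```

In this finite toy, the uniquantity of a sum is always the sum of the uniquantities, which is the conservation-law behavior the text contrasts with cardinality.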
The same holds for unitimes, potential uniinfinities, general uniinfinities, subcritical, critical, and supercritical unistates and uniprocesses, as well as quasicritical unirelations. Unidestructurizators, unidiscriminators, unicontrollers, unimeaners, unimean unisystems, unibounders, unibound unisystems, unitruncators, unilevelers, unilevel unisystems, unilimiters, uniseries uniestimators, unimeasurers, unimeasure unisystems, uniintegrators, uniintegral unisystems, uniprobabilers, uniprobability unisystems, and unicentral uniestimators efficiently provide unimeasuring and uniestimating. The universalizing separate similar (proportional) limiting reduction of objects, systems, and their models to their own similar (proportional) limits as units provides the commensurability and comparability of disproportionate and therefore not directly commensurable and comparable objects, systems, and their models. The unierror irreproachably corrects and generalizes the relative error. The unireserve, unireliability, and unirisk, based on the unierror, additionally estimate and discriminate exact objects, models, and solutions by the confidence in their exactness while avoiding unnecessary randomization. All these uniestimators for the first time evaluate and precisely measure both the possible inconsistency of a uniproblem (as a unisystem which includes unknown unisubsystems) and its pseudosolutions, including quasisolutions, supersolutions, and antisolutions. Multiple-source iterativity and especially intelligent iterativity (coherent, or sequential, approximativity) are much more efficient than common single-source iterativity. Intelligent iterability universalization leads to collective coherent reflectivity, definability, modelability, expressibility, evaluability, determinability, estimability, approximability, comparability, solvability, and decisionability.
This holds, in particular, in truly multidimensional and multicriterial systems of the expert definition, modeling, expression, evaluation, determination, estimation, approximation, and comparison of the qualities of objects, systems, and models which are disproportionate and hence incommensurable and not directly comparable, as well as in truly multidimensional and multicriterial decision-making systems. Sufficiently increasing the exponent in power mean theories and methods can bring adequate results. This holds for linear and nonlinear unibisector theories and methods with distance or unierror minimization and unireserve maximization, as well as for distance, unierror, and unireserve equalization, respectively. Unimathematical data coordinate and/or unibisector unipartitioning, unigrouping, unibounding, unileveling, and scatter and trend unimeasurement and uniestimation very efficiently provide adequate data processing, efficiently utilizing outliers and even recovering true measurement information from incomplete changed data. Universal (in particular, infinite, overinfinite, infinitesimal, and overinfinitesimal) continualization provides perfect computer modeling of any uninumbers. Perfecting built-in standard functions brings always feasible and proper computing. Universal transformation and solving algorithms ensure avoiding computer zeros and infinities, with computer intelligence and universal cryptography systems hierarchies. It becomes possible to adequately consider, model, express, measure, evaluate, estimate, overcome, and even efficiently utilize many complications such as contradictions, infringements, damages, hindrances, obstacles, restrictions, mistakes, distortions, errors, information incompleteness, variability, etc. Unimathematics (mega-overmathematics) also includes knowledge universal test and development fundamental metasciences.
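
The remark that sufficiently increasing the exponent in power mean theories and methods can bring adequate results agrees with the classical behavior of Hölder (power) means, which approach the maximum as the exponent grows, so the estimate is increasingly dominated by the largest deviation. A small illustrative Python sketch of this classical fact (not of the author's uni-methods):

```python
# Illustrative sketch: the Hölder (power) mean of positive values tends to
# their maximum as the exponent p grows, so large-exponent power means
# emphasize the worst (largest) term.

def power_mean(values, p):
    """Hölder (power) mean with exponent p: (mean of v^p)^(1/p)."""
    return (sum(v ** p for v in values) / len(values)) ** (1.0 / p)

data = [1.0, 2.0, 4.0]
for p in (1, 2, 8, 32):
    print(p, power_mean(data, p))
# As p grows, the values climb from the arithmetic mean 7/3 toward max = 4.
```

This monotone climb toward the maximum is why raising the exponent makes a power-mean criterion behave more like a worst-case criterion.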

Unimathematics as a megasystem of revolutions in mathematics is divided into fundamental, advanced, applied, and computational unimathematics as systems of revolutions in fundamental, advanced, applied, and computational mathematics.

The most general hierarchization of scientific revolutions in philosophy, mathematics, and physics via uniphilosophy, unimathematics, and uniphysics is given by the fundamental science of unihierarchization in advanced unimathematics. In addition, for ergonomics (one of the uniphilosophy principles), including clarity, every single scientific revolution in principles and/or results is necessarily named, designated, and can be most fully expressed by a single (often composite, new, unfamiliar) word as a linguistic variable. The character and content of these revolutions are given in the further course of presenting the character and content of uniphilosophy, unimathematics, and uniphysics.

The alternative, exclusively constructive creative philosophy (uniphilosophy) is largely based on universality. However, in the names of its principles, the generally implied prefix "uni" is often omitted to ease first reading and perception.

The following most general principles of uniphilosophy and their implementation form, via an evident terminological transformation, a supersystem of the most general scientific revolutions in the principles and essence of philosophy, including the following systems:

I) the target system of revolutions in the principles and essence of philosophy, which includes the following target principles of uniphilosophy and their implementation:

1) urgency (the unconditional primacy and exclusiveness of typical urgent and vital problems only, which remain unsolved and by all means should be adequately set, comprehensively solved, and usefully applied (with the complete exclusion of unnecessary considerations) as the sole criterion of the necessity and utility of creating and developing new knowledge, everything else being secondary, including conventional dogmas, representations, agreements, and authorities together with the problems they raise, with all due respect to them);

2) complication utilization (the indispensable comprehensive creative use of everything, including routine contradictions, difficulties, problems, and other complications, and even the artificial creation of only necessary and useful contradictory objects and models, including knowledge efficiency: the useful quality (acceptability, depth, accuracy, structuredness, systematization, inheritance, universality, strength, stability, reliability, flexibility, ...) and quantity (volume, completeness, ...) of objects, models, knowledge, information, and data, as well as their very sensitive creation, analysis, synthesis, verification, testing, structuring, systematization, hierarchization, generalization, universalization, modeling, measurement, evaluation, improvement, development, and intelligent management);

3) practicality (the exclusively practical orientation of all creative activity, with irrepressible purposefulness and a primary focus only and exclusively on feasibility with the greatest possible usefulness; practically verified purely scientific truth is prior even to the secondary criteria of classical knowledge, including concepts, approaches, methods, theories, doctrines, and sciences);

4) flexibility (unlimitedly flexible creativity and, if necessary, the creation of new knowledge (concepts, approaches, methods, theories, doctrines, and even sciences) for adequately considering, setting, and solving typical urgent problems);

5) discoverability (the findability of all required, previously unknown, necessary and useful objects, including contradictory ones, along with elements and systems);

6) solvability (along with scientific optimism, duty, and obligation: every urgent problem can and must be solved acceptably and usefully enough);

7) accomplishability (the focus on discoveries and inventions, the dual unity and harmony of academic quality and novelty, discovering phenomena of essence, inventive climbing, helpful knowledge bridges, creative multilingualism, scientific art, anti-envy, instructiveness, and terminology development);

II) the essential system of revolutions in the principles and essence of philosophy, which includes the following essential principles of uniphilosophy and their implementation:

1) creativity (exclusively purposeful, useful, verified, free, and unlimited creativity, intuition, and imagination);

2) freedom (unrestricted creative freedom of self-expression under the necessary and sufficient condition of the greatest possible usefulness);

3) all-responsibility (the full sole responsibility of a scientist for quality and outcomes, with personally performing any and all related work by all means, because one can rely only on oneself, given the usually inevitable noncoincidence, irreconcilability, and even incompatibility of interests, as well as experience, instinct, intuition, and the often crucial subtle nuances which are fundamentally incommunicable);

4) self-governability (the complete self-determination and self-governance of a scientist);

5) creativization (making even the purely technical and design work related to research creative, which creates a unique environment for the deepest thinking thanks to enforced slowness of thought and concentration, with creatively utilizing complications and even routine; an all-consuming thirst for creativity in its creative, inventive, and discovery-oriented directions: the focus on creating and inventing new knowledge and know-how, as well as on reasonably discovering new phenomena and laws of nature, society, and thought, along with the ability to generalize, universalize, systematize, and hierarchize discoveries and inventions in a unified scientific and technical architecture);

6) intuitiveness (the primacy of the necessarily intuitive over the logical; rigor is futile and proofs are exclusively secondary, that is, only extremely useful intuitive provability: reasonably fuzzy intuitive ideas without axiomatic rigor if necessary and useful);

7) naturalness (the primacy of the natural, artificiality being secondary and admissible only under the necessary and sufficient condition of its usefulness);

8) constructivism (exceptional natural constructivism with the complete absence of artificial destructiveness);

9) peacefulness (the invariably peaceful development of research and life diversity, given unrestrictedly free, exclusively creative, and useful self-determination, self-governance, and activity, particularly in researching, creating, and developing knowledge);

10) heritability, or useful creative succession (the inheritance, analysis, evaluation, improvement, application, and development of already available knowledge);

III) the conceptual and methodological system of revolutions in the principles and essence of philosophy, which includes the following conceptual and methodological principles of uniphilosophy and their implementation:

1) conceivability (conceptuality);

2) fundamentality (the primacy of intuitive conceptual and methodological fundamentality: creating and usefully applying general knowledge fundamentals with respect to fundamental general systems, including objects, models, and intuitive fuzzy principles, concepts, and methodology);

3) creatability (of all necessary and useful objects and models);

4) feasibility (at least the symbolic existence of all necessary and useful, even contradictory, objects and models);

5) expressibility (of the concepts of all necessary and useful objects, including contradictory ones, along with elements and systems);

6) further definability (the refinability of the expressibility of the concepts of all necessary and useful objects in the course of cognitive activity and/or knowledge construction);

7) co-definability (in particular, possibly nonlinear knowledge construction with the consistent mutual redefinition of concepts);

8) comparability (of all necessary and useful objects, including contradictory ones, along with elements and systems);

9) multivariance (of objects and systems and their models, including providing and usefully applying the unity of diversity and the diversity of unity);

10) multicriteriality (the parallel usability of many criteria);

11) multimethodicity (the parallel usability of many approaches, methods, theories, doctrines, sciences, philosophies, and methodologies);

12) modelability (along with the expressibility of all necessary and useful objects, including contradictory ones, along with elements and systems);

13) approximability (of all necessary and useful objects, including contradictory ones, along with elements and systems, by other objects and models if necessary and useful);

14) reducibility, or tolerable simplicity (choosing the best in the not evidently unacceptable simplest);

15) meaningfulness (the primacy of philosophical, mathematical, physical, and engineering meaningfulness, synergy, and intelligence with intuitive clarity, instructiveness, usefulness, beauty, and the dual harmony of quality and quantity, as well as applicability and acceptability);

16) structurability;

17) systematizability;

18) hierarchizability;

19) verifiability;

20) estimability;

21) re-evaluability;

22) generalizability;

23) universalizability (necessary and useful unlimited generalizability);

24) law universality (the universalizability of the laws of nature, society, and thought);

25) unifiability (the efficient combinability of objects and models, in particular of only partially distinguished opposites, such as real/potential, real/ideal, concrete/abstract, exact/inexact, definite/probable, pure/applied, theory/experiment/practice, nature/life/science; for example, the generally inexact includes the exact as the limiting particular case with zero error);

26) separability (the useful separability of objects and models);

27) developability (the useful developability of personalities as well as of objects and models);

28) perfectibility (the useful perfectibility of personalities as well as of objects and models);

29) controllability (the active controllability of both objects and models and activities: their subordination to the subject; the testability and estimability, invariance, firmness, strength, stability, and reliability of data, outputs, outcomes, information, and knowledge in general, including concepts, approaches, methods, theories, doctrines, and sciences, with the ability to correct, comprehensively improve, generalize, universalize, structure, systematize, and hierarchize them);

30) ergonomics.

Uniphilosophy (Exclusively Constructive Creative Philosophy) Principles as a System of Revolutions in Philosophy

Fundamental principles of uniphilosophy (exclusively constructive creative philosophy) build a fundamental system of revolutions in philosophy, in particular, the following subsystems.

1. Fundamental Principles of Uniphilosophy as a Fundamental Subsystem of Revolutions in Philosophy

The fundamental subsystem of revolutions in philosophy includes the following fundamental principles of uniphilosophy:

1. Exceptional natural constructivism (with the complete absence of artificial destructiveness).

2. Free efficient creativity (exclusively practically purposeful, verified, and efficient unlimitedly free creativity, intuition, and phantasy flight).

3. Scientific optimism and duty (each urgent problem can and must be solved adequately and efficiently enough).

4. Complication utilization (creating, considering, and efficiently utilizing only necessary and useful also contradictory objects and models, as well as difficulties, problems, and other complications).

5. Symbolic feasibility (at least symbolic existence of all the necessary and useful even contradictory objects and models).

2. Advanced Principles of Uniphilosophy as an Advanced Subsystem of Revolutions in Philosophy

The advanced subsystem of revolutions in philosophy includes the following advanced principles of uniphilosophy:

1. Exclusively efficient intuitive evidence and provability (reasonable fuzziness, intuitive ideas without axiomatic rigor if necessary and useful).

2. Unrestrictedly flexible constructivism (if necessary even creating new knowledge (concepts, approaches, methods, theories, doctrines, and even sciences) to adequately set, consider, and solve urgent problems).

3. Tolerable simplicity (choosing the best in the not evidently unacceptable simplest).

4. Perfect sensitivity, or conservation laws universality (no uncompensated change in a general object conserves its universal measures).

5. Exact discrimination of noncoinciding objects and models (possibly infinitely or overinfinitely large with infinitesimal or overinfinitesimal distinctions and differences).

6. Separate similar (proportional) limiting universalizability (the reduction of objects, systems, and their models to their own similar (proportional) limits as units).

7. Collective coherent reflectivity, definability, modelability, expressibility, evaluability, determinability, estimability, approximability, comparability, solvability, and decisionability (in particular, in truly multidimensional and multicriterial systems of the expert definition, modeling, expression, evaluation, determination, estimation, approximation, and comparison of the qualities of objects, systems, and models which are disproportionate and hence incommensurable and not directly comparable, as well as in truly multidimensional and multicriterial decision-making systems).

3. Some Other Principles of Uniphilosophy

Among other principles of uniphilosophy are the following:

1. Truth priority (primacy of practically verified purely scientific truths and criteria prior to commonly accepted dogmas, views, agreements, and authority, with all due respect to them).

2. Peaceful pluralism (with peaceful development of scientific and life diversity).

3. Efficient creative inheritance (efficiently using, analyzing, estimating, and developing already available knowledge and information).

4. Efficient constructive freedom (unrestrictedly free exclusively constructive and useful self-determination and activity, in particular, in knowledge and information research, creation, and development).

5. Fundamentality priority (primacy of conceptual and methodological fundamentals).

6. Knowledge efficiency (only useful quality (acceptability, adequacy, depth, accuracy, etc.) and amount (volume, completeness, etc.) of knowledge, information, data, as well as creation, analysis, synthesis, verification, testing, structuring, systematization, hierarchization, generalization, universalization, modeling, measurement, evaluation, estimation, utilization, improvement, and development of objects, models, knowledge, information, and data along with intelligent management and self-management of activity).

7. Mutual definability and generalizability (relating successive generalization of concepts in definitions with optional linear sequence in knowledge construction).

8. Efficient unificability of opposites only conditionally distinguished (such as real/potential, real/ideal, specific/abstract, exact/inexact, definitively/possibly, pure/applied, theory/experiment/practice, nature/life/science, for example, the generally inaccurate includes the accurate as the limiting particular case with the zero error).

9. Partial laws sufficiency (if there are no known more general laws).

10. Focus on discoveries and inventions (dualistic unity and harmony of academic quality and originality, discovering phenomena of essence, inventive climbing, helpful knowledge bridges, creative multilingualism, scientific art, anti-envy, learnability, teachability, and terminology development).

The alternative, exclusively constructive creative mathematics (unimathematics, or mega-overmathematics) completely inherits every system of uniphilosophy principles, preserving and continuing their numbering and usually adding the necessary prefix "uni". For example, in classical mathematics, estimability usually means applying absolute and relative errors, standard deviations, etc. Uniestimability in unimathematics means applying unierrors, unireserves, unireliabilities, unirisks, etc.

Along with the so modified uniphilosophy principles, the following most general principles of unimathematics and their implementation form, via an evident terminological transformation, a supersystem of the most general scientific revolutions in the principles and essence of mathematics, including the following systems and subsystems:

IV) the system of revolutions in the principles and essence of mathematics related to unifundamentality in fundamental mathematics, which includes the following principles of unimathematics and their implementation:

- the subsystem of revolutions in the principles and essence of mathematics related to consistency, including the following fundamental principles of unimathematics and their implementation:

1) contradiction-freeness (the useful eliminability of inconsistencies, with the complete excludability of the artificial contradictions typical in classical mathematics);

2) part-whole unifiability (the combinability of the membership, inclusion, and part-whole relations);

3) deferability (deferring a decision if necessary and useful, for example when evaluating existence and sense, with possible further re-evaluation upon subsequent consideration);

4) procontradictoriness (the full-fledged utilizability and applicability of contradictoriness when necessary and useful);

- the subsystem of revolutions in the principles and essence of mathematics related to uninullification, including the following fundamental principles of unimathematics and their implementation:

1) zero-division excludability (the excludability of division by zero when unnecessary and/or useless);

2) zero-division utilizability (the efficient utilizability of division by zero);

3) zero signedness (the distinguishability of zeros with positive and negative signs);

4) signed-zero assimilability (the assimilability of zeros with signs);

5) overdivisibility (hypersensitivity to the dividend when dividing by signed zeros);

- the subsystem of revolutions in the principles and essence of mathematics related to uniemptifiability, including the following fundamental principles of unimathematics and their implementation:

1) uniemptifiability (using the universal uniemptiness both as an emptifying (voiding) element and as the result of the empty set of any operations on any set of arbitrary operands);

2) uni-inactivity (using the universal uniemptiness as an indifferent, inactive operand which neutralizes any action on it while preserving the result of that action);

- the subsystem of revolutions in the principles and essence of mathematics related to fundamentality, including the following fundamental principles of unimathematics and their implementation:

1) uninumerality;

2) quantifiability (general (not logical) quantification, or quantitativity: assigning, determining, finding, and measuring the quantity of a single element, which becomes a quantielement, and the quantities of single elements in a set, which becomes a quantiset);

3) uniquantitativity;

4) unioperationality (universal perfect operability, including operations with a noninteger quantity or an uncountable set of operands);

- the subsystem of revolutions in the principles and essence of mathematics related to uninumbers, including the following fundamental principles of unimathematics and their implementation:

1) cardinalizability (the canonizability of infinite cardinals: infinite cardinal numbers as canonic positive infinities, actual and not merely becoming);

2) multiple or multiple kanoniziruemost or set-kanoniziruemost (kanoniziruemost system selected sets whose unikolichestva are infinite cardinal number, so every infinite cardinal number is unikolichestvu one and only one set of the system);

3) above the infinity (kanoniziruemost over endless appeals zeros zeros treatment with signs as canonical beyond infinity, and real, and not becoming);

4) over the archimedean (natural generalizability Archimedean axiom of infinity to infinity and beyond);

- Related to the quantificability subsystem of revolutions in the principles and the nature of mathematics, including the following fundamental principles of unimathematics and their implementation:

1) quantiobjectness;

2) quantielementness;

3) quantimultiplicity;

4) quantisetness;

5) quantiorderedness;

- Related to the uniquantitativity subsystem of revolutions in the principles and the nature of mathematics, including the following fundamental principles of unimathematics and their implementation:

1) unicountability (the generalizability of perfectly exact finite counting, and of the counting-based measures, to the infinitely and overinfinitely large and small);

2) unimeasurability (the universalizability of measures in the finitely, infinitely, and overinfinitely large and small);

3) oversensitivity (the perfectly sensitive, invariant, and universally useful modeling, expression, measurement, evaluation, estimation, and essential generalization of urgent objects, relations, structures, systems, and their contents generalizing sets and quantisets);

4) overexactness (the exact distinguishability of noncoinciding objects and models even in the infinite and the overinfinite: the perfectly sensitive, invariant, and universal generalization of numbers by infinite, overinfinite, infinitesimal, and overinfinitesimal uninumbers with exact measurement generalizing counting, unlimited (possibly even noninteger and uncountable) operability, and the exact discrimination in the infinite and the overinfinite even of infinitesimal and overinfinitesimal distinctions and differences);

- Related to the uniefficiency subsystem of revolutions in the principles and the nature of mathematics, including the following fundamental principles of unimathematics and their implementation:

1) uniconservation (the universalizability of conservation laws in the finitely, infinitely, and overinfinitely large and small: no change of a generally unbalanced object preserves its universal measures);

2) unilawfulness (the universalizability of the laws of nature, society, and thought in the finitely, infinitely, and overinfinitely large and small);

- Related to the universalizability subsystem of revolutions in the principles and the nature of mathematics, including the following fundamental principles of unimathematics and their implementation:

1) uniarithmeticity;

2) quantialgebraicity;

3) quantianalyticity;

Principles of Unimathematics as a System of Revolutions in the Principles of Mathematics

The principles of exclusively constructive creative unimathematics (mega-overmathematics) constitute a system of scientific revolutions in the principles of mathematics including the following subsystems.

1. Fundamental Principles of Unimathematics as a Fundamental Subsystem of Revolutions in the Principles of Mathematics

The fundamental subsystem of revolutions in the principles of mathematics includes the following principles of unimathematics:

1. Typical urgent problems priority and exclusiveness (adequately setting and solving and efficiently using urgent problems only with completely avoiding unnecessary considerations is the only criterion of the necessity and usefulness of creating and developing new knowledge including concepts, approaches, methods, theories, doctrines, and sciences).

2. Intuitive conceptual and methodological fundamentality priority (creating and efficiently using unified knowledge foundation due to fundamental general systems including objects, models, and intuitive fuzzy principles, concepts, and methodology).

3. Collective coherent reflectivity, definability, modelability, expressibility, evaluability, determinability, estimability, approximability, comparability, solvability, and decisionability (in particular, in constructing nonlinear conceptual systems of knowledge and in truly multidimensional and multicriterial systems of the expert definition, modeling, expression, evaluation, determination, estimation, approximation, and comparison of objects, systems, and models qualities which are disproportionate and hence incommensurable and not directly comparable, as well as in truly multidimensional and multicriterial decision-making systems).

4. Reasonable fuzziness with useful rigor only (exclusively practically useful axiomatization, deductivity, and rigorously proving, as well as intuitive ideas without axiomatic strictness if necessary and useful).

5. Unrestrictedly flexible constructivism (even creating new sciences to adequately set, consider, and solve typical urgent problems).

2. Noncontradictoriness Principles of Unimathematics as a Noncontradictoriness Subsystem of Revolutions in the Principles of Mathematics

The noncontradictoriness subsystem of revolutions in principles of mathematics includes the following principles of unimathematics:

1. The unificability of membership, inclusion, and part-whole relations.

2. Necessary and useful creativity exclusiveness (efficiently and intelligently creating and considering exclusively necessary and useful objects and models with completely ignoring any artificial contradictions typical in classical mathematics).

3. The efficient utilizability of contradictoriness and other complications (creating, considering, and efficiently utilizing exclusively necessary and useful contradictory objects and models, as well as difficulties, problems, and other complications).

4. Symbolic feasibility (at least symbolic existence of all the necessary and useful even contradictory objects and models).

5. Decision-making delayability (if necessary and useful, e.g. by estimating existence and sense with a possible further revaluation in the course of review).

3. Universalizability Principles of Unimathematics as a Universalization Subsystem of Revolutions in the Principles of Mathematics

The universalizability subsystem of revolutions in principles of mathematics includes the following principles of unimathematics:

1. Infinite cardinals canonizability (infinite cardinal numbers as canonical positive infinities namely real but not potential).

2. Zeroes reciprocals overinfinities canonizability (signed zeroes reciprocals as canonical overinfinities namely real but not potential).

3. Hyper-Archimedean axiomability (naturally generalizing the Archimedes axiom to the infinite and the overinfinite).

4. Exactness of the infinite and the overinfinite (perfectly sensitive, invariant, and universal infinite and overinfinite, infinitesimal and overinfinitesimal generalization of the numbers by the uninumbers with exact measurement generalizing counting, unlimited (possibly even noninteger and uncountable) manipulation and operability, as well as exact discrimination in the infinite and the overinfinite even by infinitesimal and overinfinitesimal distinctions and differences).

5. General (nonlogical) quantificability (assignment, definition, determination, and measurement of the individual quantity of an element becoming a quantielement and of the individual quantities of elements in a set which becomes a quantiset).

6. Separate similar (proportional) limiting universalizability (the reduction of objects, systems, and their models to their own similar (proportional) limits as units, in particular, of magnitudes to the moduli of their own unidirectional limits with the same signs).

7. Perfect manipulability (perfectly sensitive, invariant, and universal useful modeling, expression, evaluation, counting measurement, estimation, and essential generalization of urgent objects, relations, structures, systems, and their contents extending sets and quantisets).

8. Conservation laws universalizability (in the overinfinitesimal, the infinitesimal, the finite, the infinite, and the overinfinite).
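Principle 5 (quantificability) can be loosely illustrated in code. The sketch below is our own hypothetical reading, not the author's formal construction: a quantiset is modeled as a mapping from elements to real-valued individual quantities, so that quantities add instead of being absorbed, in contrast to ordinary sets:

```python
from collections import defaultdict

class QuantiSet:
    """A loose sketch of a quantiset: elements carrying real-valued
    (possibly negative or fractional) individual quantities."""

    def __init__(self, quantities=None):
        self.q = defaultdict(float)
        for element, quantity in (quantities or {}).items():
            self.q[element] = quantity

    def uniquantity(self):
        # Total quantity: unlike a cardinality, it stays sensitive
        # to every element's contribution (no absorption).
        return sum(self.q.values())

    def union_add(self, other):
        # Additive union: quantities add instead of being absorbed
        # by a maximum, so no information is lost.
        result = QuantiSet(dict(self.q))
        for element, quantity in other.q.items():
            result.q[element] += quantity
        return result

a = QuantiSet({"x": 0.5, "y": 2.0})
b = QuantiSet({"y": 1.5, "z": -1.0})
c = a.union_add(b)
print(c.q["y"], c.uniquantity())   # 3.5 3.0
```

Ordinary sets and multisets are the special cases where every quantity is 0/1 or a nonnegative integer, respectively.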

4. Efficiency Principles of Unimathematics as an Efficiency Subsystem of Revolutions in the Principles of Mathematics

The efficiency subsystem of revolutions in principles of mathematics includes the following principles of unimathematics:

1. Uniproblem unisolvability (existence and expressibility of the best quasisolution, solution, and supersolution among possibly inexact meaningful pseudosolutions to any urgent uniproblem with setting as a unisystem with unknown unisubsystems).

2. Tolerable simplicity (selecting the best in the class of not evidently unacceptable simplest meaningful pseudosolutions).

3. Efficient knowledge (efficient quality (acceptability, adequacy, profundity, exactness, structurality, systematization, inheritance, universality, invariance, strength, stability, reliability, flexibility, etc.) and quantity (volume, completeness, etc.) of objects, models, knowledge, information, data, and their perfectly sensitive creation, analysis, synthesis, verification, testing, structuring, systematization, hierarchization, generalization, universalization, modeling, evaluation, measurement, estimation, utilization, improvement, development, and reasonable control).

4. Free intuitive intelligent iterativity (coherent, or sequential, approximativity) (possibly with many sources and directions, unrestrictedly flexible universal algorithms with avoiding computer zeroes and infinities and independent of analytic solvability with providing mapping contractivity).

5. Collective coherent reflectivity, definability, modelability, expressibility, evaluability, determinability, estimability, approximability, comparability, solvability, and decisionability (in particular, in truly multidimensional and multicriterial systems of the expert definition, modeling, expression, evaluation, determination, estimation, approximation, and comparison of objects, systems, and models qualities which are disproportionate and hence incommensurable and not directly comparable, as well as in truly multidimensional and multicriterial decision-making systems).

6. General noncriticality (subcritical, critical, and supercritical states, processes, and phenomena in a general structured system which are defined and determined by generally noncritical relationships).

7. General nonlimitability (underlimiting, limiting, and overlimiting states, processes, and phenomena in a general structured system which are defined and determined by generally nonlimiting relationships).
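Principle 4 presupposes mapping contractivity as the convergence guarantee for iteration. A minimal sketch (ours; this is the classical Banach fixed-point iteration, not the author's universal algorithm):

```python
import math

def fixed_point(f, x0, tol=1e-12, max_iter=1000):
    """Iterate x <- f(x); converges whenever f is a contraction
    (|f(x) - f(y)| <= q |x - y| with q < 1) near the fixed point."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence; the mapping may not be contractive")

# x = cos(x) is a classical contraction on [0, 1]:
root = fixed_point(math.cos, 1.0)
print(round(root, 6))   # 0.739085
```

Without contractivity the iteration may cycle or diverge, which is why providing it is named explicitly in the principle.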

The Principles of Computational Unimathematics

The principles of computational unimathematics build the computational system of revolutions in the principles of mathematics and include:

1. Uninumber continualization (in the overinfinite, infinite, finite, infinitesimal, and overinfinitesimal).

2. Efficiently perfecting transformations of built-in standard functions.

3. Software efficiency (in selecting, using, and developing standard and other available computer programs, as well as creating new programs).

4. Universal algorithmizability (unlimited flexibility of universal efficient algorithms avoiding noncomputability and obvious unacceptability including the limitations of computer zeroes and infinities).

5. Inventive computational intelligence.

6. Free intuitive intelligent multi-sources multidirectional uniiterativity (independent of analytic solvability with providing the mapping contractivity).

7. The universality of power mean distances and unierrors (with the freedom to increase the exponent).

8. Data approximability via unibisectors.

9. Unigrouping data in the coordinates and/or unibisectors.

10. The unimeasurability and uniestimability of data trend and scatter (with the extraproportionality of the influence of outlier points as their definition criterion).

11. The unidivisibility of a point into any parts (with the possibility of their joining the various unigroups).

12. Outlier points efficiency.

13. Complications efficiency.

14. Inventive and discovering creative purposefulness and dedication (the directionality of numerical tests and experiments on inventing, discovering, and creating new knowledge, namely, inventing and creating new concepts, approaches, methods, theories, doctrines, and sciences, as well as discovering new phenomena and laws).
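Principles 7 and 10 rest on power mean distances with a freely increasable exponent: as the exponent grows, the power mean moves from the ordinary quadratic mean toward the maximum distance, so outlier points gain extraproportional influence. A minimal numeric sketch (ours, with an arbitrary sample):

```python
def power_mean(values, p):
    """Power (Hoelder) mean of nonnegative values with exponent p."""
    n = len(values)
    return (sum(v ** p for v in values) / n) ** (1.0 / p)

distances = [0.1, 0.2, 0.3, 5.0]   # one outlier-like distance
for p in (1, 2, 8, 64):
    print(p, round(power_mean(distances, p), 4))

# As p grows, the power mean approaches max(distances) = 5.0,
# so the exponent controls how strongly the outlier dominates.
```

The monotone growth of the power mean in p is the classical power mean inequality; the freedom to raise p is what makes the outlier influence extraproportional.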

Computational unimathematics includes:

1. The system of fundamental computational sciences.

2. The system of the fundamental sciences of unimathematically overcoming and efficiently utilizing complications.

3. The system of the fundamental sciences of data unimathematics.

The system of fundamental computational sciences includes:

1) the fundamental science of uniprogramming including general theories and methods of developing and applying unimathematics to reasonably choosing and developing efficient computer programs;

2) the fundamental science of efficiently unitransforming built-in standard functions that applies to them unimathematical theories and methods to provide perfectly utilizing these built-in functions and further developing new standard functions;

3) the fundamental science of unicomputability including general theories and methods of developing and applying unimathematics to available computer theories, methods, and algorithms for their transformation and further development in order to provide their flawless performance and efficiency, avoiding noncomputability and obvious unacceptability including restrictions related to computer zeros and finite computer infinities of both signs;

4) the fundamental science of unimathematical microscopes and telescopes including general theories and methods of developing and applying unimathematics to creating theories, methods, and algorithms with (possibly heterogeneous) namely operational (not merely observational) unimathematical microscopes and telescopes for transforming number and uninumber scales, providing sufficiently sensitive computer-based calculations and avoiding noncomputability and obvious unacceptability including restrictions related to computer zeros and finite computer infinities of both signs;

5) the fundamental science of unimathematically universalizing algorithms including general theories and methods of developing and applying unimathematics to creating and developing universal efficient computer algorithms;

6) the fundamental science of unimathematical computer intelligence including general theories and methods of developing and applying unimathematics to creating intelligent efficient computer algorithms;

7) the fundamental science of unimathematical cryptography including relevant general theories and methods and universal cryptographic systems hierarchies.
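The avoidance of computer zeros and infinities pursued by the third of these sciences can often be reached by algebraically reformulating a computation. A standard textbook illustration (ours, not from the source) is the Euclidean norm, whose naive formula overflows to the computer infinity while a rescaled formula does not:

```python
import math

def naive_norm(x, y):
    # Overflows to the computer infinity once x*x exceeds
    # the greatest representable float (about 1.8e308).
    return (x * x + y * y) ** 0.5

def safe_norm(x, y):
    # Rescaling by the larger magnitude keeps every intermediate
    # value representable, so no computer infinity appears.
    a, b = abs(x), abs(y)
    if a < b:
        a, b = b, a
    if a == 0.0:
        return 0.0
    return a * math.sqrt(1.0 + (b / a) ** 2)

big = 1e200
print(naive_norm(big, big))   # inf: the naive route hits the computer infinity
print(safe_norm(big, big))    # about 1.4142e200: the reformulation succeeds
```

The standard library's math.hypot performs the same rescaling internally.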

The system of the fundamental sciences of unimathematically overcoming and efficiently utilizing complications includes:

1) the fundamental science of unimathematical tolerance for contradictions, violations, damage, interference, obstacles, limitations, mistakes, distortions, inaccuracies, errors, incomplete knowledge and data, multivariance, and other complications which includes unimathematical theories and methods of establishing and maintaining the performance and analysis of objects and systems with complications;

2) the fundamental science of the unimathematical reasonable and best control of complications;

3) the fundamental science of efficiently unimathematically applying complications to developing and improving uniobjects, unisystems, and unimathematical unimodels, as well as to unisolving uniproblems.

The system of the fundamental sciences of data unimathematics includes:

1) the fundamental science of unimathematical data modeling including general theories of homogeneous and inhomogeneous data, its transformation to uniformity, unigrouping, unistructuring, unirestructuring, uniset representation in the coordinate systems, invariance, and symmetry, as well as systematically developing the theories and methods of applying unimathematics to mathematically modeling data on actual uniobjects and unisystems and their physical models;

2) the fundamental science of unimathematical data processing including general theories of unioperations, unirelations, unicentralization, uninormalization, unigrouping, unistructuring, unirestructuring, unidiscretization, unicontinualization, linear, piecewise linear, and nonlinear unitransformation, uniapproximation also by parts, unibisectors, distance powers, moments of inertia, increasing exponents up to many thousands if necessary, uniboundaries, unilevels, multi-sources, multidirectional, and intelligent uniiteration and its acceleration, as well as the universal theories and graphic-analytical methods of applying uninumbers and operable unisets to processing data on actual uniobjects and unisystems and their physical models.

Introduction

There are many separate scientific achievements of mankind, but they often bring rather unsolvable problems than genuinely improve human life quality. One of the reasons is that the general level of earth science is clearly insufficient to adequately solve and even consider many urgent human problems. To provide for creating and developing applicable and, moreover, adequate methods, theories, and sciences, we need to test them via universal, or at least applicable and, moreover, adequate test metamethods, metatheories, and metasciences whose general level has to be high enough. Mathematics as the universal quantitative scientific language naturally has to play a key role here.

But classical mathematics [1] with hardened systems of axioms, intentional search for contradictions and even their purposeful creation cannot (and does not want to) regard very many problems in science, engineering, and life. This generally holds when solving valuation, estimation, discrimination, control, and optimization problems as well as in particular by measuring very inhomogeneous objects and rapidly changeable processes. It is discovered [2] that classical fundamental mathematical theories, methods, and concepts [1] are insufficient for adequately solving and even considering many typical urgent problems.

Megamathematics including overmathematics [2] based on its uninumbers, quantielements, quantisets, and uniquantities with quantioperations and quantirelations provides universally and adequately modeling, expressing, measuring, evaluating, and estimating general objects. This all creates the basis for many further megamathematics fundamental sciences systems developing, extending, and applying overmathematics. Among them are, in particular, science unimathematical test fundamental metasciences systems [3] which are universal.

Computational Science Unimathematical Test Fundamental Metasciences System

Computational science unimathematical test fundamental metasciences system in megamathematics [2] is one of such systems and can efficiently, universally and adequately strategically unimathematically test any pure science. This system includes:

fundamental metascience of computational science test philosophy, strategy, and tactic including computational science test philosophy metatheory, computational science test strategy metatheory, and computational science test tactic metatheory;

fundamental metascience of computational science consideration including computational science fundamentals determination metatheory, computational science approaches determination metatheory, computational science methods determination metatheory, and computational science conclusions determination metatheory;

fundamental metascience of computational science analysis including computational subscience analysis metatheory, computational science fundamentals analysis metatheory, computational science approaches analysis metatheory, computational science methods analysis metatheory, and computational science conclusions analysis metatheory;

fundamental metascience of computational science synthesis including computational science fundamentals synthesis metatheory, computational science approaches synthesis metatheory, computational science methods synthesis metatheory, and computational science conclusions synthesis metatheory;

fundamental metascience of computational science objects, operations, relations, and criteria including computational science object metatheory, computational science operation metatheory, computational science relation metatheory, and computational science criterion metatheory;

fundamental metascience of computational science evaluation, measurement, and estimation including computational science evaluation metatheory, computational science measurement metatheory, and computational science estimation metatheory;

fundamental metascience of computational science expression, modeling, and processing including computational science expression metatheory, computational science modeling metatheory, and computational science processing metatheory;

fundamental metascience of computational science symmetry and invariance including computational science symmetry metatheory and computational science invariance metatheory;

fundamental metascience of computational science bounds and levels including computational science bound metatheory and computational science level metatheory;

fundamental metascience of computational science directed test systems including computational science test direction metatheory and computational science test step metatheory;

fundamental metascience of computational science tolerably simplest limiting, critical, and worst cases analysis and synthesis including computational science tolerably simplest limiting cases analysis and synthesis metatheories, computational science tolerably simplest critical cases analysis and synthesis metatheories, computational science tolerably simplest worst cases analysis and synthesis metatheories, and computational science tolerably simplest limiting, critical, and worst cases counterexamples building metatheories;

fundamental metascience of computational science defects, mistakes, errors, reserves, reliability, and risk including computational science defect metatheory, computational science mistake metatheory, computational science error metatheory, computational science reserve metatheory, computational science reliability metatheory, and computational science risk metatheory;

fundamental metascience of computational science test result evaluation, measurement, estimation, and conclusion including computational science test result evaluation metatheory, computational science test result measurement metatheory, computational science test result estimation metatheory, and computational science test result conclusion metatheory;

fundamental metascience of computational science supplement, improvement, modernization, variation, modification, correction, transformation, generalization, and replacement including computational science supplement metatheory, computational science improvement metatheory, computational science modernization metatheory, computational science variation metatheory, computational science modification metatheory, computational science correction metatheory, computational science transformation metatheory, computational science generalization metatheory, and computational science replacement metatheory.

The computational science unimathematical test fundamental metasciences system in megamathematics [2] is universal and very efficient.

In particular, let us apply the computational science unimathematical test fundamental metasciences system to classical computational mathematics [1].

Nota bene: Naturally, all the fundamental defects of classical both pure and applied mathematics [1] discovered due to the pure science unimathematical test fundamental metasciences system and the applied science unimathematical test fundamental metasciences system in megamathematics [2] also hold in classical computational sciences [1].

Fundamental Defects of Computational Sciences

Additionally, even the very fundamentals of classical computational mathematics [1] and, moreover, of any classical computational science at all have their own evident lacks and shortcomings. Among them are the following:

12. Classical computational mathematics [1] and, moreover, any classical computational science at all directly use the available computer (hardware with software) abilities only. But such abilities are very restricted.

13. Each computer aided data modeling and processing (representation, evaluation, estimation, approximation, calculation, etc.) is directly based on the available computer (hardware with software) abilities only to represent real numbers. But such abilities are very restricted.

14. There are the computer least negative number (the computer minus infinity) and the computer greatest positive number (the computer plus infinity), which limit the range of representable real numbers from below and above, respectively, in each operation and hence limit the investigation range and depth.

15. There are the computer greatest negative number and the computer least positive number such that each real number between them, divided by 2, is (due to rounding) a computer zero. This limits representation sensitivity not only for such real numbers but naturally for the real numbers in general, in each operation, and hence limits the investigation range and depth.
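The boundary magnitudes named in items 14 and 15 can be inspected directly; a minimal Python check of the double-precision limits:

```python
import sys

# "Computer plus infinity": the greatest representable positive float.
print(sys.float_info.max)          # about 1.7976931348623157e308

# The least positive normal float; below about 5e-324 (the least
# subnormal float) every positive value rounds to the computer zero.
print(sys.float_info.min)          # about 2.2250738585072014e-308

# Crossing the upper bound yields the computer infinity:
print(sys.float_info.max * 2)      # inf

# Halving the least positive float yields the computer zero by rounding:
print(5e-324 / 2)                  # 0.0
```

The symmetric negative bounds follow from the sign bit, so four boundary magnitudes delimit the whole representable range.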

16. A computer cannot think and typically works blindly, following a priori nonuniversal algorithms from the beginning to the end without any one-operation result check, test, or estimation accompanied by "learning by doing".

17. Many methods available in classical computational mathematics [1] practically ignore these and other very essential specific features of computer aided data modeling and processing and use a computer as a high-speed calculator only.

18. Classical computational mathematics [1] ignores the influence of the power exponent when using power mean values and practically considers the second power only, which brings clear analytic simplicity in hand calculation but typically fully inadequate results, and has almost no advantages in computation.
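The influence of the power exponent can be illustrated numerically. In this sketch (ours, with arbitrary data and a brute-force grid search), the center minimizing the sum of p-th powers of deviations shifts materially with p, so the customary second power is by no means a neutral choice:

```python
def best_center(data, p, grid_steps=20000):
    """Center c minimizing sum(|x - c|**p), found by brute force on a grid."""
    lo, hi = min(data), max(data)
    best_c, best_cost = lo, float("inf")
    for i in range(grid_steps + 1):
        c = lo + (hi - lo) * i / grid_steps
        cost = sum(abs(x - c) ** p for x in data)
        if cost < best_cost:
            best_c, best_cost = c, cost
    return best_c

data = [1.0, 1.1, 0.9, 1.0, 10.0]    # one outlier
for p in (1, 2, 8):
    print(p, round(best_center(data, p), 3))
# p = 1 gives the median (outlier-resistant), p = 2 the mean,
# and a large p pulls the center toward the midrange of the outlier.
```

The fitted value thus depends strongly on the exponent, which the second-power convention silently fixes.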

19. The computer built-in standard functions (of rounding etc.) in commercial software have their own cardinal defects of principle and lead to errors which can prohibit executing relatively precise calculation programs, e.g., in book-keeping, leading to the so-called one-cent problem.
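The one-cent problem arises because binary floats cannot represent decimal cents exactly, while rounding rules differ between implementations. A minimal Python demonstration, with the standard decimal module as the usual remedy:

```python
from decimal import Decimal, ROUND_HALF_UP

# Binary floats cannot hold 0.1 exactly, so cent sums drift:
print(0.1 + 0.2)                  # 0.30000000000000004
print(sum([0.10] * 3) == 0.30)    # False: off by one unit in the last place

# Decimal arithmetic keeps cents exact and rounds by an explicit rule:
cents = sum(Decimal("0.10") for _ in range(3))
print(cents == Decimal("0.30"))   # True

# Explicit half-up rounding, as book-keeping usually requires:
price = Decimal("2.675")
print(price.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))  # 2.68
```

The same monetary value rounded through binary floats can land one cent lower, which is exactly the drift described above.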

20. The finite element method (FEM) is regarded as the standard in computer-aided problem solving. To be commercial, its software cannot consider nonstandard features of the studied objects. There are no attempts to exactly satisfy the fundamental equations of balance and deformation compatibility in the volume of each finite element, nor even to approximately estimate the pseudosolution errors of these equations in this volume; such errors are simply distributed in it without any known law. Some chosen elementary test problems of elasticity theory with exact solutions show that FEM pseudosolutions can theoretically converge to those exact solutions only under suitable (a priori fully unclear) object discretization with infinitely many finite elements. To provide engineering precision only, we usually need very many sufficiently small finite elements. One can hope (without any guarantee) for comprehensible results only with a huge number of finite elements and a huge amount of information which cannot be captured and analyzed. And even such unconvincing arguments hold for those simplest, fully untypical cases only, NOT for real, much more complicated problems. In practically solving them, to save human work, one usually applies some accidental object discretization with too small a number of finite elements and obtains a "black box" result without any possibility or desire to check and test it. But it has a beautiful graphic interpretation which also impresses unqualified customers; they simply think that nicely presented results cannot be inadequate. Adding even one new node demands full recalculation once again, accompanied by an enormous volume of handwork which cannot be delegated to the computer by programming.

Experience shows that with unsuccessful (and good luck cannot be expected in advance!) object discretization into finite elements, even skilled researchers arrive at absolutely unusable results inconsiderately declared the ultimate truth and actually demanding blind belief. The same also holds for the FEM fundamentals such as the absolute error, the relative error, and the least square method (LSM) [1] by Legendre and Gauss ("the king of mathematics"), which produce their own errors and even dozens of cardinal defects of principle, and, moreover, for the very fundamentals of classical mathematics [1]. Long-term experience also shows that a computer cannot work at all as a human thinks it does, and operationwise control with calculation checks is necessary but practically impossible. It is especially dangerous that the FEM creates the harmful illusion that, thanks to it, almost any mathematician or engineer is capable of successfully calculating the stress and strain states of arbitrarily complicated objects even without understanding their deformation under loading and without knowledge of mathematics, strength of materials, and deformable solid mechanics. Spatial imagination alone seems to suffice to break an object into finite elements. A complete error! To carry out responsible strength calculations even by known norms, engineers should possess an analytical mentality, great and profound knowledge, the ability to use them creatively and actively, intuition, long-term experience, and even talent. The same also holds in any computer-aided problem solving, e.g., in hydrodynamics. A computer is only a blind powerful calculator; it cannot think or provide human understanding, but quickly gives voluminously impressive and beautifully presented illusory "solutions" to any problems, with a lot of failures and catastrophes. Hence the FEM alone is unreliable but can be very useful as a supplement to analytic theories and methods if they provide testing of the FEM and the results correlate. Then the FEM adds both details and a beautiful graphic interpretation.
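The convergence behaviour criticized above can be checked on the simplest possible model problem. The sketch below (ours, not from the source) applies linear finite elements to -u'' = 1 on [0, 1] with u(0) = u(1) = 0 and exact solution u(x) = x(1 - x)/2; between the nodes the error shrinks only as O(h^2), i.e., only mesh refinement improves the pseudosolution:

```python
def fem_midpoint_error(n):
    """Linear FEM for -u'' = 1 on [0, 1], u(0) = u(1) = 0, on n equal
    elements; returns the largest error at the element midpoints
    against the exact solution u(x) = x(1 - x)/2."""
    h = 1.0 / n
    # Tridiagonal stiffness system for the n - 1 interior nodes:
    # (2/h) u_i - (1/h)(u_{i-1} + u_{i+1}) = h   (load of f = 1).
    diag = [2.0 / h] * (n - 1)
    rhs = [h] * (n - 1)
    # Forward elimination (Thomas algorithm; off-diagonals are -1/h):
    for i in range(1, n - 1):
        m = (-1.0 / h) / diag[i - 1]
        diag[i] -= m * (-1.0 / h)
        rhs[i] -= m * rhs[i - 1]
    # Back substitution, including the zero boundary values:
    u = [0.0] * (n + 1)
    u[n - 1] = rhs[-1] / diag[-1]
    for i in range(n - 2, 0, -1):
        u[i] = (rhs[i - 1] + (1.0 / h) * u[i + 1]) / diag[i - 1]
    exact = lambda x: x * (1.0 - x) / 2.0
    return max(abs((u[j] + u[j + 1]) / 2.0 - exact((j + 0.5) * h))
               for j in range(n))

for n in (4, 8, 16, 32):
    print(n, fem_midpoint_error(n))   # the error falls fourfold per halving of h
```

Halving h cuts the midpoint error fourfold; in this one-dimensional case the nodal values themselves happen to be exact, a superconvergence that does not survive in realistic multidimensional problems, which is consistent with the warnings above.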

Therefore, the very fundamentals of classical computational sciences [1] have a lot of obviously deep and even cardinal defects of principle.

Consequently, to make classical computational sciences [1] adequate, evolutionarily and locally correcting, improving, and developing them, which can be useful, is, unfortunately, fully insufficient. Classical computational sciences [1] need revolutionarily replacing their inadequate very fundamentals via adequate very fundamentals.

Nota bene: Naturally, if possible, any revolution in classical computational sciences [1] has to be based on adequate revolutions in classical pure and applied mathematics [1].

Revolution in Computational Sciences

Computational fundamental megascience [21-24] revolutionarily replaces the inadequate very fundamentals of classical computational sciences [1] via adequate very fundamentals.

I. Computer Fundamental Sciences System. Summary

Computational fundamental megascience [21-24] based on applied megamathematics [8-20] and hence on pure megamathematics [2-7] and on overmathematics [2-7] with its uninumbers, quantielements, quantisets, and uniquantities with quantioperations and quantirelations provides efficiently, universally and adequately strategically unimathematically modeling (expressing, representing, etc.) and processing (measuring, evaluating, estimating, approximating, calculating, etc.) data. This all creates the basis for many further fundamental sciences systems developing, extending, and applying overmathematics. Among them is, in particular, the computer fundamental sciences system [21] including:

software fundamental science which includes general theories and methods of developing and applying overmathematical uniquantity as universal perfectly sensitive quantimeasure of general objects, systems, and their mathematical models to rationally select and use available computer software to provide its efficient functioning, as well as to develop further computer software;

built-in functions fundamental science which includes general theories and methods of developing and applying overmathematical uniquantity as universal perfectly sensitive quantimeasure of general objects, systems, and their mathematical models to available computer built-in functions to transform them to provide their perfect functioning, as well as to develop further standard functions;

avoiding computer zero and infinity fundamental science which includes general theories and methods of developing and applying overmathematical uniquantity as universal perfectly sensitive quantimeasure of general objects, systems, and their mathematical models to available computer theories, methods, and algorithms to transform them to provide avoiding computer zero and infinity, as well as to develop further computer theories, methods, and algorithms with avoiding computer zero and infinity;

megamathematical microscope and telescope fundamental science which includes general theories and methods of developing and applying overmathematical uniquantity as universal perfectly sensitive quantimeasure of general objects, systems, and their mathematical models to creating computer theories, methods, and algorithms with individual possibly inhomogeneous megamathematical microscopes and telescopes to suitably transform number and uninumber scales to always provide computer calculation feasibility and sensitivity with avoiding computer zero and infinity;

algorithm universalization fundamental science which includes general theories and methods of developing and applying overmathematical uniquantity as universal perfectly sensitive quantimeasure of general objects, systems, and their mathematical models to creating universal computer algorithms to always provide computer calculation feasibility and sensitivity with avoiding computer zero and infinity;

computer intelligence fundamental science which includes general theories and methods of developing and applying overmathematical uniquantity as universal perfectly sensitive quantimeasure of general objects, systems, and their mathematical models to creating namely intelligent highly efficient computer algorithms to always provide computer calculation feasibility and sensitivity with avoiding computer zero and infinity and no representing unnecessary intermediate results;

cryptography fundamental science which includes general theories and methods of developing and applying overmathematical uniquantity as universal perfectly sensitive quantimeasure of general objects, systems, and their mathematical models to creating universal cryptography systems hierarchies.

The computer fundamental sciences system is universal and very efficient.
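The underflow problem which the avoiding computer zero and infinity science and the megamathematical microscope and telescope science address can be illustrated with a standard rescaling trick (our own minimal sketch, not the monograph's apparatus): carrying a product of many tiny values on the logarithmic scale keeps it representable, whereas direct multiplication collapses to computer zero and loses all information.

```python
import math

def naive_product(ps):
    # Direct product of many small probabilities underflows to computer zero.
    out = 1.0
    for p in ps:
        out *= p
    return out

def log_domain_product(ps):
    # Rescaling to the logarithmic scale keeps the value representable:
    # the product is carried as a sum of logarithms.
    return sum(math.log(p) for p in ps)

ps = [1e-20] * 30              # true product is 1e-600, below the float minimum
print(naive_product(ps))       # 0.0 -- computer zero, all information lost
print(log_domain_product(ps))  # ~ -1381.55, i.e. ln(1e-600), fully usable
```

This is the same idea as stretching a number scale so that values near computer zero become distinguishable again.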

III. Overcoming Complication Fundamental Sciences System. Summary

All existing objects and systems in nature, society, and thinking have complications, e.g., contradictoriness, and hence exist without adequate models in classical mathematics [1]. It intentionally avoids, ignores, and cannot (and possibly hence does not want to) adequately consider, model, express, measure, evaluate, and estimate many complications. Among them are contradictions, infringements, damages, hindrances, obstacles, restrictions, mistakes, distortions, errors, information incompleteness, multivariant approach, etc. There were well-known attempts to consider some separate objects and systems with chosen complications, e.g., approximation and finite overdetermined sets of equations. To consider them at all, classical mathematics has only very limited, nonuniversal, and inadequate concepts and methods such as the absolute error, the relative error, and the least square method (LSM) [1] by Legendre and Gauss ("the king of mathematics"), which produces its own errors and has even dozens of principal mistakes. Moreover, the same holds for the very fundamentals of classical mathematics such as the real numbers with gaps; the Cantor sets, relations, and at most countable, only restrictedly reversible operations with ignoring element quantities, absorption, and contradicting the conservation law of nature; the cardinality, sensitive to finite unions of disjoint finite sets only and giving the same continuum cardinality C for distinct point sets between two parallel lines or planes differently distant from one another; the measures, which are finitely sensitive within a certain dimensionality, give either 0 or +∞ for distinct point sets between two parallel lines or planes differently distant from one another, and cannot discriminate the empty set ∅ and null sets, namely zero-measure sets; the probabilities, which cannot discriminate impossible and some differently possible events. The same holds for classical mathematics estimators and methods.
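The defects of the classical absolute and relative errors named above are easy to exhibit numerically. The following sketch (our own illustration; the function name relative_error is ours) shows the ambiguity of the relative error depending on which value is taken as the reference, its unboundedness near a zero reference, and the non-invariance of the absolute error under a simple change of units.

```python
def relative_error(approx, exact):
    # Classical definition: |approx - exact| / |exact|.
    return abs(approx - exact) / abs(exact)

# Ambiguity: swapping which value is taken as the reference changes the result.
print(relative_error(2.0, 1.0))  # 1.0
print(relative_error(1.0, 2.0))  # 0.5

# Unboundedness: a near-zero reference makes the relative error explode.
print(relative_error(1.0, 1e-300))  # ~1e300

# Non-invariance of the absolute error: rescaling units changes it arbitrarily.
print(abs(2.0 - 1.0))        # 1.0 for lengths in metres
print(abs(2000.0 - 1000.0))  # 1000.0 for the same lengths in millimetres
```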

Computational fundamental megascience [21-24], based on applied megamathematics [8-20] and hence on pure megamathematics [2-7] and on overmathematics [2-7] with its uninumbers, quantielements, quantisets, and uniquantities with quantioperations and quantirelations, provides efficiently, universally, adequately, and strategically unimathematically modeling (expressing, representing, etc.) and processing (measuring, evaluating, estimating, approximating, calculating, etc.) data. All this creates the basis for many further fundamental sciences systems developing, extending, and applying overmathematics. Among them is, in particular, the overcoming complication fundamental sciences system [23] including:

complication fundamental science including contradiction theory, infringement theory, damage theory, hindrance theory, obstacle theory, restriction theory, mistake theory, distortion theory, error theory, information incompleteness theory, multivariant approach theory;

complication modeling fundamental science including general mathematical theories and methods of rationally and adequately modeling complications themselves, as well as general objects and systems with complications;

complication measurement fundamental science including general mathematical theories and methods of rationally and adequately measuring complications themselves, as well as general objects and systems with complications;

complication estimation fundamental science including general mathematical theories and methods of rationally and adequately estimating complications themselves, as well as general objects and systems with complications;

complication processing fundamental science including general mathematical theories and methods of rationally and adequately processing complications themselves, as well as general objects and systems with complications;

complication testing fundamental science including general mathematical theories and methods of rationally and adequately testing complications themselves, as well as general objects and systems with complications;

complication tolerance fundamental science including general mathematical theories and methods of the creation, successful functioning, improvement, perfection, and analysis of general objects and systems with complications such as contradiction tolerance theory, infringement tolerance theory, damage tolerance theory, hindrance tolerance theory, obstacle tolerance theory, restriction tolerance theory, mistake tolerance theory, distortion tolerance theory, error tolerance theory, information incompleteness tolerance theory, and multivariant approach tolerance theory;

complicated system control fundamental science including general mathematical theories and methods of rationally and optimally controlling general objects and systems with complications;

complication utilization fundamental science including general mathematical theories and methods of efficiently utilizing complications for developing general objects, systems, and their mathematical models, as well as for solving general problems.

The overcoming complication fundamental sciences system is universal and very efficient.

IV. Unimathematical Data Processing Fundamental Sciences System. Summary

Classical science possibilities in modeling objects and processes and determining true measurement data are very limited, nonuniversal, and inadequate. Classical mathematics [1] with hardened systems of axioms, intentional search for contradictions and even their purposeful creation cannot (and does not want to) regard very many problems in science, engineering, and life. It is discovered [2] that classical fundamental mathematical theories, methods, and concepts [1] are insufficient for adequately solving and even considering many typical urgent problems.

Computational fundamental megascience [21-24], based on applied megamathematics [8-20] and hence on pure megamathematics [2-7] and on overmathematics [2-7] with its uninumbers, quantielements, quantisets, and uniquantities with quantioperations and quantirelations, provides efficiently, universally, adequately, and strategically unimathematically modeling (expressing, representing, etc.) and processing (measuring, evaluating, estimating, approximating, calculating, etc.) data. All this creates the basis for many further fundamental sciences systems developing, extending, and applying overmathematics. Among them is, in particular, the unimathematical data processing fundamental sciences system [24] including:

the fundamental science of unimathematical data processing essence and strategy including geometric data processing theory, analytic data processing theory, single-stage and multiple-stage graph-analytic data processing theories, and principal graph types theories;

the fundamental science of unimathematical data operations including theories of known operations with uninumbers, quantisets, and quantisystems, as well as theories of quantifying, quantity and uniquantity definition and determination;

the fundamental science of unimathematical data relations including theories of known relations with uninumbers, quantisets, and quantisystems, as well as theories of quantirelations;

the fundamental science of unimathematical data invariance and symmetry including unimathematical data invariance theory, unimathematical problem invariance theory, unimathematical method invariance theory, unimathematical result invariance theory, and unimathematical data symmetry theory;

the fundamental science of unimathematical data transformation, normalization, and centralization including data transformation theory, groupwise centralizing data theory, invariant central transformation theory, and general center theory;

the fundamental science of unimathematical data unification and grouping including unimathematical data unification theory and unimathematical data grouping theory;

the fundamental science of unimathematical data partitioning including longitudinal data partitioning theory, transversal data partitioning theory, complex data partitioning theory, coordinate partition theories, and principal bisector partition theories;

the fundamental science of unimathematical data structuring and restructuring including unimathematical data structuring theory and unimathematical data restructuring theory;

the fundamental science of unimathematical data discretization, continualization, and clustering including data discretization, continualization, and clustering theories;

the fundamental science of unimathematical data bisectors including general data bisector theory and data bisector theories for different data and bisector dimensionalities;

the fundamental science of unimathematical data bounds including piecewise linear and mean curvilinear bounds theories, mean piecewise linear bounds theory, weighted mean curvilinear and piecewise linear bounds theories, and locally weighted mean curvilinear bounds theory;

the fundamental science of unimathematical data levels including piecewise linear levels theory, mean curvilinear levels theory, mean piecewise linear levels theory, weighted mean curvilinear levels theory, weighted mean piecewise linear levels theory, and locally weighted mean curvilinear levels theory;

the fundamental science of unimathematical data scatter and trend including unimathematical data direction theory, unimathematical data scatter theory, unimathematical data trend theory, and general power unimathematical data scatter and trend measure and estimation theory;

the fundamental science of unimathematically considering data outliers including unimathematical data outlier determination theory, unimathematical data outlier centralization theory, unimathematical data outlier transformation theory, unimathematical data outlier compensation theory, and unimathematical data outlier estimation theory;

the fundamental science of unimathematical data measurement which includes general theories and methods of developing and applying overmathematical uniquantity as universal perfectly sensitive quantimeasure of unimathematical data with possibly recovering true measurement information using incomplete changed data;

the fundamental science of measuring data concessions which for the first time regularly applies and develops unimathematical theories and methods of measuring data contradictions, infringements, damages, hindrances, obstacles, restrictions, mistakes, distortions, and errors, and also of rationally and optimally controlling them and even of their efficient utilization for developing general objects, systems, and their mathematical models, as well as for solving general problems;

the fundamental science of measuring unimathematical data reserves further naturally generalizing the fundamental science of measuring data concessions and for the first time regularly applying and developing unimathematical theories and methods of measuring not only data contradictions, infringements, damages, hindrances, obstacles, restrictions, mistakes, distortions, and errors, but also harmony (consistency), order (regularity), integrity, preference, assistance, open space, correctness, adequacy, accuracy, reserve, resource, and also of rationally and optimally controlling them and even of their efficient utilization for developing mathematical and physical models, as well as for solving general problems;

the fundamental sciences of measuring unimathematical data reliability and risk for the first time regularly applying and developing universal overmathematical theories and methods of unimathematically measuring the reliabilities and risks of data with avoiding unjustified artificial randomization in deterministic problems;

the fundamental science of measuring unimathematical data deviation for the first time regularly applying overmathematics to measuring deviations of real general objects and systems from their ideal universal mathematical and physical models, and also of universal mathematical and physical models from one another. In a number of further fundamental sciences, under rotation invariance of coordinate systems, general (including nonlinear) theories of the moments of inertia establish the existence and uniqueness of the linear model minimizing its square mean deviation from an object, whereas least square distance (including nonlinear) theories are more convenient for determining the linear model. The classical least square method by Legendre and Gauss ("the king of mathematics") is the only method known in classical mathematics that is applicable to contradictory (e.g., overdetermined) problems. In the two-dimensional Cartesian coordinate system, this method minimizes the sum of the squares of ordinate differences and ignores the model inclination. This leads not only to a systematic regular error which breaks invariance and grows together with this inclination and with data variability but also to the paradoxical rotation behavior of the linear model. Under coordinate system linear transformation invariance, power (e.g., square) mean (including nonlinear) theories lead to optimum linear models. Theories and methods of measuring data scatter and trend give corresponding invariant and universal measures for linear and nonlinear models. Group center theories sharply reduce this scatter, improve data scatter and trend estimation, and for the first time also consider outliers. Overmathematics even allows dividing a point into parts and referring them to different groups. Coordinate division theories and especially principal bisector (as a model) division theories efficiently form such groups.
Note that there are many reasonable kinds of deviations, e.g., the value of a nonnegative binary function (e.g., the norm of the difference of the parts of an equation as a subproblem in a problem after substituting a pseudosolution into this problem, the distance from the graph of this equation, its absolute error [1], relative error [1], unierror [2], etc.) of this object and each of all the given objects, as well as the value of a nonnegative function (e.g., the power mean value) of these values for all the equations in a general problem with some positive power exponent. Along with the usual straight-line square distance, we may also use, e.g., other possibly curvilinear power distances (under additional limitations and other conditions such as using curves lying in a certain surface, etc.). For point objects and the usual straight-line square distance, for example, we obtain the unique quasisolution for two points on a straight line, three points in a plane, or four points in three-dimensional space. Using distances only makes this criterion invariant under coordinate system translation and rotation;

the fundamental science of unimathematical data estimation including data power coordinate difference estimation theory, data power error estimation theory, data power deviation estimation theory, data power distance estimation theory, and data power unierror estimation theory;

the fundamental science of unimathematical data approximation including finite and iterative theories of linear, piecewise linear, and nonlinear data approximation, geometric mean data approximation theory, power and equalizing distance and unierror data approximation theories, circular, spherical, and rotation-quasi-invariant power data approximation theories;

the fundamental science of unimathematical data iterations and their acceleration including single-source, multiple-sources, and intelligent iteration calculation management theories, iterative polar theories, and iteration acceleration theories;

the fundamental science of unimathematical data moments including theories of data moments about linear and curvilinear axes, general theories of moments of inertia, and critical distance power and moment order theories;

the fundamental science of unimathematical directed data tests including metatheories of directed numeric test and check systems with and without using data sets with clear trend;

the fundamental science of unimathematical true data recovery including data measurement theory, data transformation theory, and true data recovery theory.
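The inclination-dependent error of the classical least square method discussed above can be reproduced numerically. The following sketch (our own minimal implementation, not the monograph's algorithms) compares the vertical-offset least square fit with an orthogonal-distance fit under data rotation: the orthogonal fit is rotation-equivariant, while the vertical-offset fit is not.

```python
import math

def centered_moments(pts):
    # Centered second moments of a 2D point set.
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    syy = sum((y - my) ** 2 for _, y in pts)
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    return sxx, syy, sxy

def ols_angle(pts):
    # Ordinary least squares of y on x: minimizes vertical (ordinate) offsets.
    sxx, _, sxy = centered_moments(pts)
    return math.atan2(sxy, sxx)

def orth_angle(pts):
    # Orthogonal-distance fit: direction of the major principal axis
    # of the centered second-moment matrix.
    sxx, syy, sxy = centered_moments(pts)
    return 0.5 * math.atan2(2.0 * sxy, sxx - syy)

def rotate(pts, theta):
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in pts]

# Scattered data around the bisector y = x.
pts = [(t, t + e) for t, e in zip([-2, -1, 0, 1, 2], [0.5, -0.5, 0.0, -0.5, 0.5])]
theta = math.radians(40.0)
rot = rotate(pts, theta)

# The orthogonal fit is equivariant: its angle shifts by exactly theta.
print(math.degrees(orth_angle(rot) - orth_angle(pts)))  # 40.0
# The vertical-offset fit is not: its angle shifts by far less than theta.
print(math.degrees(ols_angle(rot) - ols_angle(pts)))
```

As the rotated data axis approaches the vertical, the gap between the two fitted directions grows, which is exactly the growth of the systematic error with inclination described above.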

The unimathematical data processing fundamental sciences system is universal and very efficient. It led to discovering the new phenomenon of the paradoxical behavior of the least square method by rotating two-dimensional data to be linearly approximated. When the data axis lies near the y-axis, this method rotates the data bisector toward the x-axis.
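This paradox is easy to reproduce. For points strung along the y-axis with small symmetric horizontal scatter, the classical least square fit of y on x returns a line along the x-axis, while an orthogonal-distance fit recovers the vertical data axis. A minimal sketch (our own illustration):

```python
import math

def ols_slope(pts):
    # Classical least squares of y on x: minimizes squared ordinate offsets.
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    return sxy / sxx

def orth_angle_deg(pts):
    # Orthogonal-distance fit direction (major principal axis), in degrees.
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    syy = sum((y - my) ** 2 for _, y in pts)
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    return math.degrees(0.5 * math.atan2(2.0 * sxy, sxx - syy))

# Points strung along the y-axis with small symmetric horizontal scatter.
pts = list(zip([0.1, -0.1, 0.0, -0.1, 0.1], [-2.0, -1.0, 0.0, 1.0, 2.0]))

print(ols_slope(pts))       # ~0.0  -- the fitted line lies along the x-axis
print(orth_angle_deg(pts))  # ~90.0 -- the data axis is in fact the y-axis
```

The vertical-offset criterion cannot distinguish a vertical data axis from no trend at all, so the fitted line collapses onto the x-axis.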

The System of Revolutions in Computational Sciences

The system of revolutions in computational mathematics includes:

the fundamental subsystem of revolutions including infinite, overinfinite, infinitesimal, and overinfinitesimal uninumber uniscale continualization, as well as efficiently unitransforming, developing, and perfecting built-in standard functions;

the advanced subsystem of revolutions in computational mathematics including truly operational (not merely observational) unimathematical microscopes and telescopes, universal algorithms of transforming uniobjects and unimodels and of unisolving uniproblems with avoiding computer zeroes and infinities, as well as freely increasing the exponent in power mean methods and theories;

the applied subsystem of revolutions including computer-aided intelligence in creating universal algorithms, methods, and theories, as well as cryptography systems hierarchies;

the data processing subsystem of revolutions including unimathematical data coordinate and/or bisector partitioning, grouping, bounding, leveling, scatter and trend measurement and power mean and multilevel underbisector-bisector-overbisector estimation, outlier determination due to their overproportional impact on such estimation, efficient utilization of outliers, unimathematically dividing a single point into any parts separately attachable to appropriate point unigroups, and recovering true measurement information using incomplete changed data;

the complication subsystem of revolutions including universally considering, modeling, expressing, evaluating, measuring, estimating, overcoming, and even efficiently utilizing complications such as contradictions, infringements, damages, hindrances, obstacles, restrictions, mistakes, distortions, inaccuracies, errors, information incompleteness, multivariability, etc.
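The effect of freely increasing the exponent in power mean methods can be sketched on the simplest model: fitting a single constant c to data by minimizing the sum of residuals raised to a power p. This is our own illustration (p = 100 stands in for the large exponents meant here, and the convex objective is minimized by plain ternary search rather than by intelligent iteration): p = 2 yields the arithmetic mean, while large p approaches the midrange governed by the extreme values.

```python
def power_mean_fit(xs, p):
    # The objective sum |x - c|**p is convex in c for p >= 1,
    # so ternary search over [min(xs), max(xs)] finds the minimizer.
    def cost(c):
        return sum(abs(x - c) ** p for x in xs)
    lo, hi = min(xs), max(xs)
    for _ in range(200):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if cost(m1) <= cost(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

xs = [0.0, 1.0, 2.0, 3.0, 10.0]
print(power_mean_fit(xs, 2))    # ~3.2, the arithmetic mean
print(power_mean_fit(xs, 100))  # ~5.0, approaching the midrange (min + max) / 2
```

Raising the exponent shifts the criterion from averaging all residuals toward controlling the worst-case residual, which is why the choice of power exponent matters and the habitual second power is not universally adequate.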

Basic Results and Conclusions

1. The very fundamentals of classical computational sciences [1] have a lot of obviously deep and even cardinal defects of principle. Classical mathematics with hardened systems of axioms, intentional search for contradictions, and even their purposeful creation cannot (and does not want to) regard very many problems in science, engineering, and life. Classical fundamental mathematical theories, methods, and concepts are insufficient for adequately solving and even considering many typical urgent valuation, estimation, discrimination, control, and optimization problems, in particular, in classical computational sciences. Their very fundamentals also have their own evident lacks and shortcomings. Classical computational sciences directly use only the available very restricted computer (hardware with software) abilities, e.g., computer real-number modeling, processing, and hence investigation ranges and sensitivity, as well as computer built-in standard functions with lacks. These sciences ignore the influence of the power exponent when using power mean values and practically consider the second power only, which brings clear analytic simplicity in hand calculation but typically fully inadequate results and has almost no advantages in computation. The finite element method (FEM), regarded as standard in computer-aided problem solving, brings fully unreliable "black box" results without any possibility and desire to check and test them. To make classical computational sciences adequate, evolutionarily and locally correcting, improving, and developing them, however useful, is fully insufficient. Classical computational sciences need revolutionarily replacing their inadequate very fundamentals via adequate very fundamentals.

2. Computational fundamental megascience based on applied megamathematics and hence on pure megamathematics and on overmathematics with its uninumbers, quantielements, quantisets, and uniquantities with quantioperations and quantirelations provides universally and adequately modeling (expressing, representing, etc.) and processing (measuring, evaluating, estimating, approximating, calculating, etc.) general objects and systems data in science, engineering, and life problems. This all creates the basis for many further fundamental sciences systems developing, extending, and applying overmathematics. Among them are, in particular, the computer fundamental sciences system, unimathematical approximation fundamental sciences system, overcoming complication fundamental sciences system, and unimathematical data processing fundamental sciences system including many universal, adequate, and very efficient theories, methods, and algorithms.

3. The computer fundamental sciences system provides efficiently functioning software with perfect transformed built-in and further standard functions and avoiding computer zero and infinity to universally perfectly sensitively quantimeasure general objects, systems, and their mathematical models using megamathematical microscope and telescope with algorithm universalization and computer intelligence.

4. The unimathematical approximation fundamental sciences system provides efficiently, universally, adequately, and strategically unimathematically setting, transforming, analyzing, synthesizing, and efficiently strategically solving general approximation problems with adequately estimating them and their invariance due to replacing absolute errors via distances, linear and square unierrors, reserves, reliabilities, and risks with increasing power exponents up to 10^4 using intelligent iteration. Analytic solving fundamental science, including general power solution theory, power analytic macroelement theory, and integral analytic macroelement theory, has many advantages over the finite element method (FEM) and can directly test and correct it.

5. The overcoming complication fundamental sciences system provides efficiently, universally and adequately strategically unimathematically considering, modeling, measuring, estimating, processing, testing, and even efficiently utilizing complications (contradictions, infringements, damages, hindrances, obstacles, restrictions, mistakes, distortions, errors, information incompleteness, multivariant approach, etc.) themselves, as well as general objects and systems with complications. It also provides the creation, successful functioning, improvement, perfection, and analysis of general objects, systems, and models with complications due to complication tolerance.

6. The unimathematical data processing fundamental sciences system provides efficiently, universally, adequately, and strategically unimathematically considering, transforming, normalizing, centralizing, grouping, partitioning, structuring, restructuring, discretizing, continualizing, clustering, bounding, leveling, modeling, measuring, estimating, approximating, processing, and testing general objects, systems, and models data including outliers, its invariance, symmetry, deviations, distances, unierrors, reserves, reliabilities, risks, scatter and trend measure and estimation with possibly recovering true measurement information using incomplete changed data. This universal and very efficient sciences system led to discovering the new phenomenon of the paradoxical behavior of the least square method by rotating two-dimensional data to be linearly approximated. When the data axis lies near the y-axis, this method rotates the data bisector toward the x-axis.

7. Computational fundamental megascience [21-24] revolutionarily replaces the inadequate very fundamentals of classical computational sciences via adequate very fundamentals, applies, develops, and tests overmathematics, pure megamathematics, and applied megamathematics.

References

[1] Encyclopaedia of Mathematics / Managing editor M. Hazewinkel. Volumes 1 to 10. Kluwer Academic Publ., Dordrecht, 1988-1994

O. C. Zienkiewicz, Y. K. Cheung. The Finite Element Method in Structural and Continuum Mechanics. McGraw-Hill, New York, 1967

O. C. Zienkiewicz, R. L. Taylor. The Finite Element Method. Volumes 1 to 3. Butterworth-Heinemann Publ., London, 2000

Lev Gelimson. Basic New Mathematics. Drukar Publ., Sumy, 1995

[2] Lev Gelimson. Elastic Mathematics. General Strength Theory. The "Collegium" All World Academy of Sciences Publishers, Munich (Germany), 2004, 496 pp.

[3] Lev Gelimson. Science Unimathematical Test Fundamental Metasciences Systems. Monograph. The “Collegium” All World Academy of Sciences, Munich (Germany), 2011

[4] Lev Gelimson. Overmathematics Essence. Mathematical Journal of the “Collegium” All World Academy of Sciences, Munich (Germany), 11 (2011), 25

[5] Lev Gelimson. Overmathematics: Fundamental Principles, Theories, Methods, and Laws of Science. Mathematical Monograph. The “Collegium” All World Academy of Sciences Publishers, Munich (Germany), 2009

[6] Lev Gelimson. Uniarithmetics, Quantianalysis, and Quantialgebra: Uninumbers, Quantielements, Quantisets, and Uniquantities with Quantioperations and Quantirelations (Essential). Mathematical Journal of the “Collegium” All World Academy of Sciences, Munich (Germany), 11 (2011), 26

[7] Lev Gelimson. Uniarithmetics, Quantialgebra, and Quantianalysis: Uninumbers, Quantielements, Quantisets, and Uniquantities with Quantioperations and Quantirelations. Mathematical Monograph. The “Collegium” All World Academy of Sciences Publishers, Munich (Germany), 2010

[8] Lev Gelimson. Overmathematics: Principles, Theories, Methods, and Applications. Mathematical Monograph. The “Collegium” All World Academy of Sciences Publishers, Munich (Germany), 2009

[9] Lev Gelimson. Corrections and Generalizations of the Absolute and Relative Errors. In: Review of Aeronautical Fatigue Investigations in Germany During the Period May 2005 to April 2007, Ed. Dr. Claudio Dalle Donne, Pascal Vermeer, CTO/IW/MS-2007-042 Technical Report, Aeronautical fatigue, ICAF2007, EADS Innovation Works Germany, 2007, 49-50

[10] Lev Gelimson. General System Reserve and Methods to Determine It. Mathematical Journal of the “Collegium” All World Academy of Sciences, Munich (Germany), 1 (2001), 2

[11] Lev Gelimson. Basic Reliability Science in Overmathematics. Mathematical Journal of the “Collegium” All World Academy of Sciences, Munich (Germany), 7 (2007), 1

[12] Lev Gelimson. Basic Risk Science in Overmathematics. Mathematical Journal of the “Collegium” All World Academy of Sciences, Munich (Germany), 8 (2008), 2

[13] Lev Gelimson. Fundamental Methods of Solving General Problems via Unierrors, Reserves, Reliabilities, and Risks. Mathematical Monograph. The “Collegium” All World Academy of Sciences Publishers, Munich (Germany), 2007

[14] Lev Gelimson. Unimathematical Modeling Fundamental Sciences System. Monograph. The “Collegium” All World Academy of Sciences, Munich (Germany), 2011

[15] Lev Gelimson. Unimathematical Measurement Fundamental Sciences System. Monograph. The “Collegium” All World Academy of Sciences, Munich (Germany), 2011

[16] Lev Gelimson. Unimathematical Estimation Fundamental Sciences System. Monograph. The “Collegium” All World Academy of Sciences, Munich (Germany), 2011

[17] Lev Gelimson. General Estimation Theory. Mathematical Monograph. 9th Edition. The “Collegium” All World Academy of Sciences Publishers, Munich (Germany), 2010

[18] Lev Gelimson. General Estimation Theory Fundamentals (along with its line by line translation into Japanese). Mathematical Journal of the “Collegium” All World Academy of Sciences, Munich (Germany), 9 (2009), 1

[19] Lev Gelimson. General Estimation Theory (along with its line by line translation into Japanese). Mathematical Monograph. The “Collegium” All World Academy of Sciences Publishers, Munich (Germany), 2011

[20] Lev Gelimson. General Problem Fundamental Sciences System. Mathematical Monograph. The “Collegium” All World Academy of Sciences Publishers, Munich (Germany), 2011

[21] Lev Gelimson. Computer Fundamental Sciences System. Mathematical Monograph. The “Collegium” All World Academy of Sciences Publishers, Munich (Germany), 2011

[22] Lev Gelimson. Unimathematical Approximation Fundamental Sciences System. Mathematical Monograph. The “Collegium” All World Academy of Sciences Publishers, Munich (Germany), 2011

[23] Lev Gelimson. Overcoming Complication Fundamental Sciences System. Mathematical Monograph. The “Collegium” All World Academy of Sciences Publishers, Munich (Germany), 2011

[24] Lev Gelimson. Unimathematical Data Processing Fundamental Sciences System. Mathematical Monograph. The “Collegium” All World Academy of Sciences Publishers, Munich (Germany), 2011

[64] A. A. Borisenko. Methods of Synthesizing Information Systems on the Basis of Positional Numbers with Inhomogeneous Structure [In Russian]. Dr. Sc. Dissertation, 1991

[65] A. A. Borisenko. Lectures on Discrete Mathematics: Sets and Logic [In Russian]. Sumy State University Publishers, Sumy, 1998

[66] A. A. Borisenko. Lectures on Discrete Mathematics: Sets and Logic [In Ukrainian]. University Book Publishers, Sumy, 2002

[67] A. A. Borisenko. Numeration Systems in Computer Technology [In Russian]. Trinity Academy Publishers, Sumy, 2009

[68] A. A. Borisenko. Introduction to Binomial Enumeration Theory [In Russian]. University Book Publishers, Sumy, 2004

L. B. Tsvik. Finite Element Method Application to Static Deformation [In Russian]. Irkutsk State University Publishers, Irkutsk, 1995

L. B. Tsvik. Triaxial Stress and Strength of Single-Layered and Multilayered High Pressure Vessels with Branch Pipes [In Russian]. Dr. Sc. Dissertation, Irkutsk, 2001

L. B. Tsvik. Matrix Algorithm for Generating the Input Data Finite Element Method Grid and Computing Resources Optimization [In Russian]. Irkutsk CSTI Publishers, Irkutsk, 1993

L. B. Tsvik. Calculating the stressed states of multilayer cylindric shells by the iterative method [In Russian]. Strength Problems, 1977, 7, 37-40

L. B. Tsvik. Computational Mechanics of Structural Elements Deformation and the Finite Element Method [In Russian]. GUPS Publishers, Irkutsk, 2005

L. B. Tsvik. Computer Technology, Stress and Strain Fields Modeling [In Russian]. Irkutsk State Technical University Publishing House, Irkutsk, 2005