The Semantic Derivation of Infinity

Basic concepts of an intelligent logic implementation

Ontology (in the philosophical sense) is the study of the nature of being and existence (or reality), as well as the basic categories of being and their relations. At one end of the spectrum (the very heavy end) of ontological inquiry is the collection of shared mental events which constitute the discourse of understanding in which we contend that entities are said to “exist”.

Metaontology concerns itself with the nature and methodology of ontology, and thus with the interpretation and significance of ontological issues such as those philosophical considerations. The problem of ontological entailment is a problem in metaontology rather than ontology proper. The metaontologist asks (among other things): What entities or kinds of entity exist according to a given theory or discourse, and thus are among its ontological commitments? Having a criterion of ontological commitment or entailment for theories allows one to place the problem of ontology into the perspective of a mechanized logic: typically, we accept entities into our ontology by accepting theories that are ontologically committed to those entities. A criterion of ontological commitment, then, is a prerequisite for ontological inquiry.

Wikipedia defines ontology (in information science) as the formal representation of knowledge as a set of concepts within a domain, using a shared vocabulary to denote the types, properties and inter-relationships of those concepts.

This dialog will necessarily leave the existential considerations of ontology to the philosophers, and will focus on the more limited subject of the categorization of beings and their relations, and how entities can be grouped within a hierarchy, approaching the philosophical aspect of ontology only with a formalization of semantic definition from the perspective of an intelligent self-organizing agent, and the necessary declaration of an intelligent logic implementation of entailment.

This discourse on the symbolic expression of being (as opposed to the nature of being) is confined to a strictly narrow viewpoint for exploring the categorical aspects of being, the definitions of ordinal relations, changes in entities and causation relations, and the temporal concepts necessary to describe the models of real-world entities.

Ontologies are the structural framework for organizing information. The creation of domain ontologies is also fundamental to the definition and use of an enterprise architecture framework.

The discourse starts with the most basic definitions of form, to establish the formalized expression of categorical being. This is essentially a discourse on the intrinsic properties, discriminating attributes, and generalized principles of family, to create a taxonomy as a quantified (or scoped) systematic of categorical being, expressing the concepts of Extension and Intension and a derivation of Ontological Existence.

It then builds on these basic definitions of form, which establish classification and the abstraction of categorical being, to create concepts for the formalization of relationship, dependency, possession and ownership.

From this systematic of definition, characterization and classification, the relation and dependency generalizations can lead to concepts of change and the building of abstract expressions for cause and effect, consequence and agency structures, leading finally to the establishment of temporal relations and expressions for parametric change.

In order to implement these organizing structures within an artificial intelligence systematic which is itself changing, the very concept of “ontological definition” cannot remain static, even the very definitions of relation and dependency themselves. And most important, the fundamental definitions of Time and Space themselves cannot be imposed on these dynamic structures, but must be established in the very processes of self-organization along with the first-order semantics of predication, objectification, change and dependency.

In an artificial intelligence systematic, ontological structure must remain dynamic, as even the definition of “definition” is a fluid concept, being relative only to the intelligent agent (as it also changes), and not to a static, common perspective.



Now, for all of the remaining discussions of the Organon Sutra and the engineering of true artificial intelligence, the dialog will refer to any implementation of this engineered systematic as a True Synthetic Intelligent Agent (the TSIA), which is a systematic (rendered either in digital software or hardware) that is engineered according to the principles of the Organon Sutra and the Logic of Infinity, which expresses intelligent organic adaptation behavior without a dependence upon an internal energy cycle of its own.



Although an artificial mechanism, a TSIA must still behave in many ways just as the organic machines of Nature do as they proceed in their adaptation to an environment. This behavior must observe the same rules (constraints) or laws of Nature that are evident in that environment in order to accomplish that adaptation.

However, because of the infinite nature in the variety that a real-world environment presents to a self-organizing machine, the mechanization of a TSIA’s adaptation processes cannot be engineered in a hierarchical, top-down fashion, due to the simple reality that mathematical and logical semantics by themselves cannot establish axiomatic boundaries around that infinity.

Self-organized adaptation is governed by the general constraint law of assimilation, which states: “Each incremental increase in the assimilation of information by an organism must be paid for by a correspondingly incremental decrease in the freedom with which the organism can deal with new assimilation”.

In the act of biological adaptation, an organism will absorb some information from its environment. To do this, an organism must have an apparatus to sequester selected subsets of the continual flow of instantaneous signaling within its neural assemblies. However, if an organism were to develop an ability to temporally retain all of its exteroceptive signaling, it would soon find its behavior following the entropy of its environment, and by definition, in time it would simply become inorganic. This is the current state of the Google machine, which began as a self-organizing complex, but because its structure perpetually absorbed the entropy of its “world”, and never reversed it, the Google organism devolved into an inorganic “black hole of information”. (Although it is stressed here that this is just a pragmatic observation, and is absolutely not a criticism of the Google organization. By any other account, Google remains a cultural milestone in the information age).

Nature fashions the adaptive behavior of natural organisms by presenting dangers in the environment as well as attractions. Because of this polarity, the first order of natural adaptation involved the development of those behaviors needed to respond to aversive conditions as well as attractive conditions. This first-order adaptation requires a mechanism to differentiate the present signaling experienced by an organism into a dichotomous polarity of environmentally aversive or attractive signaling, a functionality the Organon Sutra has termed “perceptual diffraction”.

In natural organisms, the behaviors to respond to aversive conditions as well as attractive conditions are developed phylogenetically. However, the environment for artificial agents may not have environmentally defined dangers and attractions, since the artificial agent does not have to maintain an internal energy cycle, and by definition would not be subjected to systemic threats to that energy cycle from its environment. For an artificial agent, this perceptual diffraction must be developed synthetically, and that development cannot occur until after the agent is exposed to the environment it will be adapting to.
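As a loose illustration only, the following Python sketch shows how such a synthetic diffraction bias might be bootstrapped from exposure: the agent classifies nothing until it has observed enough of its environment to form a statistical baseline, and even then a third, indeterminate outcome remains. All names, encodings and thresholds here are hypothetical assumptions, not a specification of the TSIA.

```python
from collections import deque

class PerceptualDiffractor:
    """Hypothetical sketch: a diffraction bias that can only form
    after exposure to the environment, as the text requires."""

    def __init__(self, warmup: int = 100):
        self.history = deque(maxlen=warmup)  # rolling window of observed signals
        self.warmup = warmup

    def observe(self, signal: float) -> str:
        self.history.append(signal)
        if len(self.history) < self.warmup:
            return "indeterminate"           # no bias exists before exposure
        mean = sum(self.history) / len(self.history)
        spread = (sum((s - mean) ** 2 for s in self.history) / len(self.history)) ** 0.5
        if signal > mean + spread:
            return "attractive"
        if signal < mean - spread:
            return "aversive"
        return "indeterminate"               # the third, unknown state persists
```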

With an infinite variety of state changes to characterize, this synthetic perceptual diffraction, or bias, cannot be defined with the symbolic languages of mathematics or logic, whose bounded axiomatic syntactics cannot function in an environment of boundless variety. The TSIA must be engineered with the capacity to perceive the state changes in its environment at this basic perceptual level, which cybernetics theory calls “noise”, and this process begins the endeavor of Abstraction in the self-organizing adaptations of any TSIA.

Adaptive systems must first and foremost be based on a rigorous methodology of this abstraction of information, always from the particular toward the general. This methodology requires an explicit discipline in the mechanization of infinite representations, as the representation of an infinite logic system and its abstractions must always be codified explicitly, as opposed to being represented implicitly in algorithms, or symbolic axioms.

And because of the adaptive nature of this mechanization, infinite logic representations cannot function under a true-false or present-absent binary logic, but at their most fundamental level must embrace a true-false-unknown or present-absent-indeterminate Ternary Logic, which describes the functional (syntactic) aspects of an intelligent logic. In human binary logics, there is no accommodation for the unknown state.
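For concreteness, one standard candidate for such a three-valued calculus is Kleene's strong ternary logic, sketched below in Python. The text does not commit to this particular calculus; it is offered only as a minimal working example of a logic that accommodates the unknown state.

```python
from enum import Enum

class T(Enum):
    FALSE = 0
    UNKNOWN = 1
    TRUE = 2

def t_not(a: T) -> T:
    return T(2 - a.value)                 # NOT(UNKNOWN) stays UNKNOWN

def t_and(a: T, b: T) -> T:
    return T(min(a.value, b.value))       # FALSE dominates; UNKNOWN absorbs TRUE

def t_or(a: T, b: T) -> T:
    return T(max(a.value, b.value))       # TRUE dominates; UNKNOWN absorbs FALSE

# e.g. t_and(T.TRUE, T.UNKNOWN) == T.UNKNOWN -- a state binary logic cannot express
```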

Adaptive representations should be self-organizing, using a process for the incremental re-structuring of semantics and their representations based on the structural entropy introduced in the mechanized temporality of adaptive elements.
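A minimal sketch of one way such an entropy-driven restructuring trigger could be mechanized follows; the entropy measure and threshold are illustrative assumptions, not the Organon Sutra's actual method.

```python
import math
from collections import Counter

def structural_entropy(usage: Counter) -> float:
    """Shannon entropy over how often each representational element is used."""
    total = sum(usage.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log2(c / total) for c in usage.values() if c)

def needs_restructuring(usage: Counter, threshold: float = 3.0) -> bool:
    """Hypothetical trigger: restructure the representation when accumulated
    structural entropy exceeds a threshold (the value 3.0 is an assumption)."""
    return structural_entropy(usage) > threshold
```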

Adaptive representations should have an element structure that is sufficiently diverse to allow dynamic interaction among elements with no rigidly defined interactions. The element structure should allow nonlinear interactions, and the behaviors of some elements should demonstrate feedback or recurrency.
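A minimal sketch of such an element, assuming a nonlinear activation and a recurrent self-connection; both choices are illustrative, not prescribed by the text.

```python
import math
import random

class Element:
    """Illustrative adaptive element: nonlinear activation plus a recurrent
    self-connection, so past state feeds back into present behavior."""

    def __init__(self, n_inputs: int):
        self.w = [random.uniform(-1, 1) for _ in range(n_inputs)]
        self.recurrent = random.uniform(-1, 1)   # feedback weight
        self.state = 0.0

    def step(self, inputs: list) -> float:
        drive = sum(w * x for w, x in zip(self.w, inputs))
        # tanh supplies the nonlinearity; the self-term supplies recurrency
        self.state = math.tanh(drive + self.recurrent * self.state)
        return self.state
```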

The systematic must be equally adept at expressions at all levels of Typology (typically defined as levels of scale: nominal, ordinal, interval and ratio). Adaptive systems must also provide for situations where expressions have intermixed levels of typology. The systematic should strive toward a consistent and unified model of typological construction. Adaptive representations must also allow for partial constructs at every level, from atomic specifications in expressions up to an incremental development of the whole system.
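The four levels of scale differ in which operations they license (equality, ordering, differences, ratios). A hedged sketch of a unified typed construction that tolerates mixed levels and partial constructs might look like the following; the types and the degrade-to-the-weaker-scale rule are assumptions made for illustration.

```python
from enum import IntEnum
from dataclasses import dataclass
from typing import Optional

class Scale(IntEnum):
    """Levels of scale, ordered by the operations they license."""
    NOMINAL = 0    # equality only
    ORDINAL = 1    # adds ordering
    INTERVAL = 2   # adds meaningful differences
    RATIO = 3      # adds a true zero, hence ratios

@dataclass
class Measure:
    value: Optional[object]   # None models a partial construct
    scale: Scale

def orderable(a: Measure, b: Measure) -> bool:
    """A mixed-level expression degrades to the weaker scale of the pair."""
    return min(a.scale, b.scale) >= Scale.ORDINAL and None not in (a.value, b.value)
```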

Additionally, every level of the systematic must work with ambiguity, incompleteness (partial constructs and fragmentary evidence), inconsistency, and “information independent” constraints (constraints imposed by the implementation of any symbolic structure), supported by a natural language why/how/explanation facility.

The principles of adaptive representation merge the two subjects of information theory and coding theory. Information theory deals with the static representation of information, while coding theory deals with the active communication of information. This focuses the engineering intent for the systematic to demonstrate emergent behaviors; but at the same time, until there is a true deductive theory implementation in infinite logic, the systematic does not have to demonstrate consistency (have proof of non-contradiction) or completeness (that all logical formulations are a logical consequence of axiomatic theory). Because of this “loose requirements” approach, there is the very real possibility that one of the behaviors of the system will be to oscillate around a null resolution and never arrive at a goal resolution (a product of loose consistency), or that the process may arrive at inconsequential or false conclusions, due to fallacies of middle arguments (a product of loose completeness).

To alleviate this, the systematic shall introduce the active process of tentative reasoning. This is not a logically formal process, but a mechanized corollary to a temporally-aware, abductive hypothesis-building approach. Humans are never completely consistent. What is important is how the reasoning process handles derived paradox or conflict, and how the process learns from mistakes, instead of a rigid program intended to prevent them.
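As an illustration of how tentative reasoning might be mechanized, a hypothesis object could record conflict and demote itself rather than fail hard. The status values and demotion rule below are invented for this sketch.

```python
class Hypothesis:
    """Hypothetical sketch of tentative, abductive hypothesis handling."""

    def __init__(self, claim: str):
        self.claim = claim
        self.status = "tentative"        # never asserted as final
        self.conflicts = []

    def challenge(self, evidence: str) -> None:
        # On contradiction, record and demote instead of halting --
        # the process learns from the mistake rather than preventing it.
        self.conflicts.append(evidence)
        self.status = "suspect" if len(self.conflicts) < 3 else "retired"

    def support(self, evidence: str) -> None:
        if self.status != "retired":
            self.status = "tentative"    # support restores, but never proves
```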

And, an adaptive representation must exhibit another unique behavior, another active process. True “knowledge” is useless in isolation. Just as a pebble tossed into a pond causes a wave which dissipates and becomes integrated into the whole surface of the pond, knowledge becomes useful only when participating in the whole of a reasoning process.

But before an intelligent logic can get to the point of holding up a discrete item of “knowledge” and then, in semantic certitude, subjecting that jewel to the machinations of inference (the “light” in any reasoning process), the logic system must first create the very “universe of discourse” which forms the semantic frame of reference for any reasoning in the first place.

And the challenge that any intelligent logic faces in first building the very semantics it will subsequently reason about comes from the infinite nature of the environment that the systematic ultimately seeks to objectify, and also from the limitations of the very representation systematic used to derive those semantics. For instance, correlation and causality are different semantics, although the temporal nature of perceptions can cloud their semantic formation. The systematic might effectively discriminate their semantic difference, only to have subsequent inference sabotaged by a representation system in which a correlation implies causality (a false inference).
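One way a representation system can avoid this sabotage is to keep correlation and causation as distinct relation types, so that no inference rule can silently convert one into the other. The following sketch is a hypothetical illustration, not the systematic's actual representation.

```python
from enum import Enum

class Relation(Enum):
    CORRELATES = "correlates"
    CAUSES = "causes"

EDGES = set()   # (source, relation, target) triples

def assert_edge(a, rel: Relation, b) -> None:
    EDGES.add((a, rel, b))

def infer_effect(a, b) -> bool:
    """Only a CAUSES edge licenses a causal inference; a CORRELATES edge
    must not -- the representation itself blocks the false inference."""
    return (a, Relation.CAUSES, b) in EDGES
```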

So it is the first foundational principle of intelligent logics, the fundamental concept of identity, which immediately dissolves the role of simple symbols as the fundamental carriers of semantics, and which brings temporal dimensionality to the immediate expression of all semantics. But what exactly is the nature of this substitute for those dimensionless symbolics?


Where contemporary mathematics and logics begin with zero dimensional exemplars which then build to an “infinity” defined by those non-dimensional symbolics, an intelligent logic begins with the exemplar of infinity, and then proceeds to actively tessellate that singularity with the manifold dimensions of derived, entropy-defined semantics.

A systematic to mechanize this representation for an engineered TSIA has been developed by the First Frontier Project, one that demonstrates the topodemic (fluid tensor) tessellation of infinity, utilizing a process which produces the (manifold) dimensions within dimensions that express infinity, in a proprietary graphic process termed SupraGraphics, combined with a reasoning system based on a Ternary Logic for the inferential production of those dimensions and their manifolds. The SupraGraphic process defines a tensor space, and then uses the temporal qualities of state memory in individual tensic entities to create a dynamic fabric in that space, like the difference between a flag on display hanging on a wall, and that same flag fluttering in the wind.
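SupraGraphics itself is proprietary and not specified here. Purely as an illustration of the flag analogy, the sketch below gives each cell of a one-dimensional “fabric” a temporal state memory, so that the fabric's present configuration depends on its own history; every name and coefficient is a hypothetical assumption.

```python
import math

class TensicCell:
    """Purely illustrative stand-in for a 'tensic entity': a cell whose
    present displacement depends on its neighbors AND its remembered past."""

    def __init__(self):
        self.state = 0.0
        self.memory = 0.0

    def update(self, neighbor_mean: float, wind: float) -> None:
        self.memory = 0.9 * self.memory + 0.1 * self.state   # temporal state memory
        self.state = math.tanh(neighbor_mean + wind - 0.5 * self.memory)

def step(fabric: list, wind: float) -> None:
    """One update of the fabric: without memory it would hang like a flag on
    a wall; with memory it ripples like the same flag fluttering in the wind."""
    means = [(fabric[max(i - 1, 0)].state + fabric[min(i + 1, len(fabric) - 1)].state) / 2
             for i in range(len(fabric))]
    for cell, m in zip(fabric, means):
        cell.update(m, wind)
```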

However, the mechanization of this process is another story in itself, one which cannot be told before an exposition of mechanized reasoning, based on a ternary logic, has been developed.

Go to the Ternary Logic Principles
