Discussion 8: Where the journey is heading

It is the desire of this dialog to tease out this increasingly tangled knot and weave the threads of discussion into a much more orderly tapestry. To the interested reader, it may seem as if we have discussed just about everything under the heading of AI, yet the dialog has barely scratched the surface. Since the dialog has necessarily covered a lot of territory, with much still to uncover, we will summarize the examination so far. Up to this point, the dialog has introduced the guiding definition of intelligence and the foundational Orders of Functionality for Artificial Perception Systematics.

Additionally, in the effort to refine the architecture for implementing artificial agents and the representations used to model their environments, the dialog established that a scheme based on symbolics alone is insufficient, and it explored the dual-hemisphere approach chosen by Nature for natural intelligence.

And finally, to define those core, native behaviors an artificial agent must possess to begin adaptation to its environment, the discourse opened a conversation on exactly which formalism “selects” the information to be absorbed by the artificial agent during the process of adaptation.

The goal of the Organon Sutra design is, of course, to engineer an artificial agent which possesses the three pivotal emergent behaviors of adaptive interactivity with its environment, gestalt abstraction, and a non-symbolic capability for analogy, using the guiding principles just summarized and those whose discovery awaits further discussions. To get this far, the dialog had to fully define the nature of knowledge and its requisite process of knowledge acquisition. But before we can continue with the architecture planning for our artificial agent, let us step back and examine the bigger picture in our design goals.

Once we fully develop the perception systematic for our artificial agent, we will be able to provide the agent with the functionality to begin the process of understanding its environment. Although this is a necessary overall consideration, it only sets the direction for the big picture view.

The ability to objectify its environment would be sufficient for our artificial agent if its environment were a static affair, which would make a functional perception systematic all that is necessary. But complex environments are rarely unchanging. A dynamic environment will require our artificial agent to develop skills to interact with its environment, and even more importantly, our agent must possess the ability to respond to changing environmental conditions.

But is this even enough? Nature has created many organisms with non-intelligent adaptive response mechanisms. To approach the emergence of intelligent behavior we must take an even bigger-picture view. Borrowing an analogy from mathematics, where calculus defines derivatives to conceptualize change in a function: if we consider the functionality of our agent to understand (or more specifically, to objectify) its environment to be a “base function”, then the first derivative of this function would be the function that objectifies changes in the environment.

And taking the analogy further, we would define a second derivative of the objectification function. In the case of our artificial agent, the second derivative function would predict future states of the environment, based on currently objectified states. Although this conceptualization sounds a lot like a re-iteration of the Orders of Functionality for Artificial Perception Systematics, those foundational processes occur on a level which does not take into account the ability to interact with an agent’s environment. In other words, the Perceptual Systematic operates at the level of objectifying noise in the first place. Only then, once that objectification is assimilated, can our artificial agent begin to work at the level of adapting to the environment. Interaction with an environment cannot occur until a certain amount of objectification has been accomplished.
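To make the layering of this analogy concrete, the sketch below expresses the three “orders” as successive functions over objectified environment states. It is only an illustration of the conceptual layering under assumed names (ObjectifiedState, objectify, objectify_change and predict_next are hypothetical placeholders), not an implementation of the Organon Sutra.

```python
# Illustrative sketch only: the three "orders" of the derivative analogy,
# expressed as successive functions over objectified environment states.
# All names here are hypothetical placeholders, not Organon Sutra terms.

from dataclasses import dataclass

@dataclass
class ObjectifiedState:
    """A stand-in for whatever the perception systematic produces."""
    features: dict

def objectify(raw_noise) -> ObjectifiedState:
    """Base function: objectify the environment from raw experiential noise."""
    return ObjectifiedState(features={"signal": raw_noise})

def objectify_change(previous: ObjectifiedState, current: ObjectifiedState) -> dict:
    """First 'derivative': objectify the changes between successive objectifications."""
    return {key: value for key, value in current.features.items()
            if previous.features.get(key) != value}

def predict_next(current: ObjectifiedState, observed_change: dict) -> ObjectifiedState:
    """Second 'derivative': anticipate a future state from the current state and its change."""
    projected = dict(current.features)
    projected.update(observed_change)   # naive projection, purely for illustration
    return ObjectifiedState(features=projected)
```

The only point of the sketch is the layering: each higher function operates on the products of the one beneath it, not on the raw noise itself.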

For sure, it is a bit of a stretch to use the analogy of derivatives to conceptualize the various orders of environment objectification for our artificial agent. But the concept is still useful in defining our “big picture view”. Indeed, it would be convenient to adapt the entire concept of mathematical differentiation to our planning, but the paradigm of “knowledge derivatives” necessarily differs in several ways from its mathematical counterpart.

One difference is that in numerical differentiation, a derivative measures the sensitivity of a function’s output to changes in its input. In the realm of artificial knowledge acquisition, however, we cannot speak of “inputs” which functionally create “outputs”. The products of the knowledge acquisition process do not have discrete entities that could be defined as “inputs” to the process.
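For contrast, the mathematical counterpart is entirely explicit about what its “input” and “output” are. The standard definition of the derivative,

$$ f'(x) \;=\; \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}, $$

presumes a well-defined input x that can be varied by an arbitrarily small amount h. It is precisely this kind of discrete, variable input that the products of the knowledge acquisition process do not provide.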

Another significant variation on the meme is perfectly suited to launch the dialog into the next conversation, now that we have paused briefly to look at the bigger picture in our design: a conversation that gets at the very crux of the difficulty in reconciling a formal symbolic system with the infinite variety of a complex, dynamic environment.

As the dialog has said, the analogy between mathematical derivatives and “knowledge derivatives” does help to shape our thinking, but the metaphor is at best a loose parallel, and there is one particular comparison which we very much need to avoid. The concept of a derivative as characterized in mathematical differentiation defines a ratio of two infinitesimal quantities, and as this notation is applied to real numbers, the denominator is frequently referenced to some dimension of time. As the dialog shall explain shortly, we must be careful not to be seduced by the tidy concept of change over time in the design of our representations.

Although it is difficult to imagine an environment of any complexity to which our artificial agent would be exposed that does not exhibit some temporal dimension, there is a very important reason why time cannot be formalized into any design of the representations comprising the Organon Sutra. Time is an internalization which must be acquired along with other knowledge only after an agent is exposed to the environment it will be adapting to, and it must remain just another aspect of an artificial agent’s environment to be objectified. This distinction will be a cornerstone for gestalt abstraction, and if it is not observed, then any abstraction behavior an agent might exhibit will degenerate into mere one-dimensional categorization maps, having little inherent utility.

Additionally, abstracting time before our artificial agent has even begun the objectification of its environment will also short-circuit other significant processes. Although the concepts underlying model building will not be introduced until later in this dialog, the mechanisms necessary to implement this compelling behavior are also products of gestalt abstraction.

The primary reason there is such confusion between the concept of mathematical time and time as it is used in the sense of the Organon Sutra is that the two terms are separate definitions which have become conflated. Mathematical time is an arbitrary, already abstracted quantity, whose definition exists as an independent dimension. Perceptual time, as it will be referred to in the Organon Sutra, is a real quantity which does not exist by itself, but is initially attached to, or predicated upon, some objectification of an artificial agent’s environment.

So we cannot abstract time at this point because we have not introduced the functional capability to predicate it. Since the only formalization we can make about knowledge so far is the definition of noise, the persistence of noise, and changes in the objectification of noise as outlined in the perception systematics previously defined, we cannot abstract any predication in our design of the artificial agent at this point.


And there is another, even more compelling reason for the dialog to pause and revisit a previous consideration. Although not entirely deceptive, this dialog has from the very beginning been somewhat specious in its definition of artificial intelligence. In the very opening paragraphs the discussion lamented the lack of a functional definition for artificial intelligence, and after offering an interpretation of natural intelligence, the discourse has been fairly cagey, bouncing between discussions of human intelligence and artificial intelligence as if the two were synonymous. The definition of natural intelligence quoted in the Introduction remains the cornerstone for everything that followed, but to this point the dialog has been somewhat hypocritical in not offering its own working definition of artificial intelligence after bemoaning the lack of a concise definition in the literature.

This hypocrisy is due to the difference between natural intelligence and artificial intelligence, a difference which lies primarily in the nature of their respective adaptation processes. In the adaptation accomplished by natural organisms, both information and energy must form a one-way induction channel into the organism, with entropy forming a one-way channel from the organism out to the environment in exchange. The adaptation accomplished by artificial agents differs in that artificial agents need only form a one-way induction channel of information, and do not require an energy cycle in their “equation”. The “artificial” characterization of AI does not imply that the intelligence demonstrated by an AI agent is any different from natural intelligence, only that the adaptive nature of the agent differs from that of organic intelligence. This greatly alters the nature of the “entropy” which the agent transfers into the environment. It is this re-defined “synthetic entropy” which lies at the epicenter of all the fears and misconceptions surrounding the introduction of true artificial intelligence into the world of man and nature.

Because of the complexities and implications of this redefined “entropy”, it is admittedly difficult to nail down a working definition for artificial intelligence with enough expressiveness to serve as a basis for designing an artificially intelligent agent. And we have had to wait in order to establish the necessary background for many underlying definitions. But however one characterizes artificial intelligence, one must be wary of taking too much from the concept of human intelligence. The behavior that we ascribe to human intelligence is the product of almost 100 billion biological neurons with over 100 trillion synapses, interconnected according to a genome evolved over billions of years, all in an organism that has itself been exposed to a complex environment for a number of years, with the contribution of a culture that has developed over many thousands of years.

To assuage this hypocrisy of definitions for intelligence, the dialog will in time develop a formal definition for artificial intelligence. Before that can be accomplished, however, there are a number of fundamentally complex emotional and moral issues involved with this re-definition of the “entropy” which artificial agents introduce into their environment that must also be addressed, issues which require discussing the quantifiable things this engineering might unleash into man’s, and nature’s, environment. The depth and comprehensiveness of these complex emotional and moral implications will require an entire volume unto themselves, and issues such as self-preservation, ownership and purpose will be deferred until the final part of the Organon Sutra.

And there are other issues which society should collectively have a voice in defining. There are non-technical aspects such as the degree of autonomy that any unsupervised, self-programming technology should have, along with considerations of security and reliability, privacy and trust, and transparency in the control of information distribution. Society should be able to demand structural protections that prevent a tool intended to serve from becoming a weapon of manipulation against a populace.

In the meantime, the dialog can only present an interim definition for artificial intelligence. Setting aside the peripheral issues of sentience and identity, for the present discussion our definition of artificial intelligence must begin as a tentative description of behaviors, which we began to outline in our “big picture view”. To further narrow this tentative definition, we will detail the functionality underlying these behaviors as we proceed to tie them into the comprehensive notions of gestalt abstraction and synthetic analogy. And to further assure the interested reader, a more concise definition will be forthcoming as we complete what must seem like an endless procession of introductory definitions.


The three major top-level components, outlined at the beginning of this discussion, that will collectively demonstrate the emergent behavior of intelligence we intend our artificial agent to exhibit are:

> The development of a comprehension of the agent’s environment. This includes the objectification of experiential noise, the patterning or classification of that objectification by way of abstracted predication, the objectification of changes to the environment, and an interactive (scientific) exploration for constraints (as defined by cybernetic theory) in the environment.

> The development of adaptive behaviors in the agent’s response to changes in the environment, coupled with the development of skills to interact with the environment.

> The development of internal constructions in the artificial agent which predict future states of the changing environment.

However, these are behaviors, and they do not tell us what the underlying processes are that produce them. Recall that when conceiving of these behaviors, we employed a layered theme, using a metaphor of mathematical derivatives, to develop these basic adaptations to an environment. This meme hints that the underlying processes which effect these behaviors will also follow something of a hierarchy.

In the discussion on perception fundamentals we introduced the four orders of perceptual functionality, which we shall declare to form the lowest tier in this hierarchy of processes. To finally answer the question of what core, native attributes an artificial agent requires beyond this perceptual functionality, we will define a conceptually higher tier occupying a notionally intermediate position between the orders of perceptual functionality and the top-level adaptive behaviors.

This intermediate tier of processes will encompass the functionality prescribed for manipulating the products of the perception systematic, the products which we have come to refer to as knowledge. (We will establish a more formal definition for the products of perception in short order, so for now the generic term knowledge will be employed.) Those products include the objectifications and their predications formed as a result of the perceptual functions, in addition to the anticipatory schemata created or derived during the perception cycles.

And following the same fashion as the top-level behaviors and the lower-tier perceptual functionality, the intermediate tier of knowledge processes will also be grouped, in an intertwined, echelon manner, into three definitions:

> Knowledge acquisition (of both sense knowledge and intellectual knowledge)

> Knowledge assimilation

> Knowledge creation

Knowledge acquisition comprises, of course, those processes by which the artificial agent absorbs information from the environment by way of its perception systematic (in whatever modality that perception might take).

Knowledge assimilation is a very broad definition encompassing those processes which organize and utilize knowledge once it is acquired. The Organon Sutra will naturally devote a significant amount of attention to this category, but as we shall see, knowledge assimilation is not an end in itself.

Knowledge creation must not be confused with knowledge acquisition, as the latter is direct knowledge absorbed from the environment, and the former is that indirect, synthesized knowledge derived from it.
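As a rough illustration of how these three categories relate to one another (and only as an illustration: the class and method names below are hypothetical, not the formal definitions promised above), the intermediate tier can be pictured as three stages feeding one another:

```python
# Illustrative sketch of the intermediate tier of knowledge processes.
# Acquisition absorbs products of perception, assimilation organizes them,
# and creation synthesizes indirect knowledge from what has been assimilated.
# All names are hypothetical placeholders.

class KnowledgeStore:
    def __init__(self):
        self.items = []   # assimilated objectifications, predications, schemata

    def acquire(self, percept):
        """Knowledge acquisition: absorb a product of the perception systematic."""
        return {"source": "environment", "content": percept}

    def assimilate(self, acquired):
        """Knowledge assimilation: organize acquired knowledge for later use."""
        self.items.append(acquired)

    def create(self):
        """Knowledge creation: derive indirect knowledge by relating assimilated items."""
        derived = [{"source": "derived", "content": (a["content"], b["content"])}
                   for a, b in zip(self.items, self.items[1:])]
        self.items.extend(derived)
        return derived
```

The only dependency being illustrated is the ordering: creation operates on what assimilation has organized, and assimilation operates on what acquisition has absorbed, never the other way around.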

These three tiers we have been formulating, the top-level adaptive behaviors, the knowledge processes beneath those, and the lower tier of perceptual functionality, cover a huge range and, beyond some vague labels, have yet to be described. But before we can flesh them out, there are some even more fundamental precepts on which the conversation must now focus.

Following the initial proposal of the four orders of perception functionality (the lowest tier of behaviors), the dialog continually asked what additional core, native processes were needed to implement the basic behaviors of our artificial agent. But it could not answer that question until we gained a glimpse of those top-level behaviors we desired to develop. Now that we have that glimpse (and a glimpse is about all that we have discerned so far), and have a vague definition of the intermediary processes that lie between the lower-tier perceptual functionality and the top-level behaviors, we can finally focus on the key concepts that are fundamental to implementing all of these design goals.


There are three fundamental precepts which must be fully understood before proceeding with the design of our artificial agent, and although they are fundamental to all of the tier elements, there is little commonality between them, which will demand a lengthy description for each one. Although the dialog has had to introduce the tier elements before talking about these precepts, it can now finalize the spectrum of core, native attributes that were being discussed.

To summarize, the three fundamental precepts are:

One: The concept of a process implementation within a systematic that is massively asynchronous: composed of a large number of asynchronously connected elements, all driven by the process cycles of the individual asynchronous units, as opposed to the typical executive model of process direction, which begins with top-level tier directives and imposes a temporal ordering on hierarchically lower elements. The definition of ‘massively’ must be constrained here by an acknowledgement of fixed, finite bounds in its implementation. This precept develops the formalisms which will guide the design and functionality of the basic elements, and the ultimate architecture of Massively Asynchronous Assemblies, systems whose temporal ordering exists only at the lowest level. (A minimal sketch of this organizing idea follows the list of precepts below.)

Two: The very important concepts of state and state retention, including the concept of state for individual asynchronous elements, extending out to the state of the assembly and of the agent as a whole, and then proceeding to concepts for the state of the environment. With the formalization of state we can establish the true formalization of memory. These two separate concepts have become conflated with so many other analogous concepts in contemporary AI research as to be almost unintelligible.

Three: The fundamental predication of change, as experienced from the perspective of the asynchronous assembly. This leads to precepts involving the collateral predication of time and space, two concepts which cannot be predicated until there is an executive abstraction of change. Although change has already been introduced in the context of perception, this third precept defines the underlying concepts which drive the engineering of this response mode in an agent’s perceptual mechanics. (Change is also the agency which leads to the abstraction of cause, and the dawn of limbic emotion in the mammalian brain.)
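To lend the first precept a little concreteness before its formal introduction, the toy sketch below runs a handful of elements, each on its own independent cycle, with no executive scheduler imposing a top-down temporal order; elements influence one another only through the state they expose to their neighbors. It is merely a sketch under assumed names (Element and run_assembly are hypothetical), not the Massively Asynchronous Assembly architecture itself.

```python
# Toy sketch of the first precept: a small "assembly" of elements, each driven
# by its own asynchronous cycle rather than by a top-level executive schedule.
# Elements interact only through locally observable state. Hypothetical names throughout.

import asyncio
import random

class Element:
    def __init__(self, name):
        self.name = name
        self.neighbors = []           # other elements whose state this one can observe
        self.state = random.random()

    async def cycle(self, steps=5):
        for _ in range(steps):
            # Each element sleeps on its own schedule: there is no global clock.
            await asyncio.sleep(random.uniform(0.01, 0.05))
            observed = [n.state for n in self.neighbors]
            if observed:
                # Local update rule driven only by locally observable state.
                self.state = 0.5 * self.state + 0.5 * sum(observed) / len(observed)

async def run_assembly(count=4):
    elements = [Element(f"e{i}") for i in range(count)]
    for i, element in enumerate(elements):
        element.neighbors = [elements[(i + 1) % count]]   # a simple ring of influence
    await asyncio.gather(*(element.cycle() for element in elements))
    return {element.name: round(element.state, 3) for element in elements}

if __name__ == "__main__":
    print(asyncio.run(run_assembly()))
```

The only point of the toy is the absence of a central schedule: whatever ordering emerges comes from the elements’ own cycles, which is the sense in which temporal ordering exists only at the lowest level.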

The entire design enterprise of the Organon Sutra began with the intent of engineering an intelligent agent, and in the attempt to draft the requisite design particulars to implement this conceptualization, the design has so far imagined only a conglomerate of vaguely defined notions, presented in a top-down, hierarchical fashion. But much like the watchmaker designing intricately complex timepieces, this design cannot blueprint and machine any one gear or spring until it has completely defined every intermeshing component. The design enterprise must proceed from the bottom-up.

For the interested reader who has persevered through the discourse so far, take heart in the fact that these three fundamental precepts are the final nebulous concepts that need to be specified before finally providing form and substance to all of the ingredients that the design has been dreaming up for our artificial intelligence recipe. (Although be forewarned, because these concepts receive so little attention in conventional AI research circles, they will require a tremendous amount of introduction, for which the reader must remain both patient and disciplined.)

The discourse since the introduction might seem akin to a stage juggler performing the spinning-plates act, with plate after plate suspended on sticks, increasing the suspense as each new plate is added to the reeling demonstration. So far the dialog has merely leaped from one amorphous idea to another, just as the juggler jumps from stick to stick to keep each plate from wobbling and falling to the ground.

In the same way, the conversation may seem disconnected because of the somewhat contradictory nature of intelligence itself, because we really cannot make sense of any one thing with regards to intelligence until we make sense of everything. It is much like the difficulties in building an arch one brick at a time, when in practice every brick depends for its stability on the prior placement of all the bricks around it. And there is the other caveat of “designing” intelligent systematics. Nature herself has discovered that natural intelligence cannot be engineered by designing the adaptive behaviors into an organism before that organism is exposed to the environment that it will be adapting to.

So our conversation regarding the design of the multiple tiers of processes must involve a significant amount of introductory dialog, because each one of the tiers, from the top-level adaptive behaviors, to the knowledge processes beneath those, to the lower tier of perceptual functionality, is a complex subject in itself. But even they cannot be clarified until we set forth a manifest understanding of the fundamental precepts, which form the “glue” that conceptually binds the three tiers together. This requires that the direction of this conversation be from the “bottom-up”. We must abandon the traditional hierarchical thinking employed with engineering systems of this complexity.

And now that the dialog has ushered in these three fundamental precepts, it can finally commence the process of providing form and definition to all of the vague notions that have been introduced in the discourse. With that, let us begin with a discussion of the First Fundamental Precept, an introduction to Massively Asynchronous Assemblies, which is where the design of any artificially intelligent agent really should have begun in the first place.
