Introduction: The Stupid Agent

The Organon Sutra represents a design philosophy developed to satisfy a new generation of computer functionality, a quantum leap beyond the “programmed” nature of today’s digital devices.

This new functionality was inspired by a manifest shortcoming in computer programming, one that has been evident ever since the advent of computers themselves: Programmers rarely know what most people’s problems are, and most people do not know how to program their own tools. The solution to this shortcoming demands an artificial intelligence methodology, and challenges the very foundations of computer engineering, advancing a new evolutionary development in digital systems.

This dialog has been drafted to be readable by as wide an audience as possible, but it is unfortunate that, as an engineering plan demanding an extraordinary amount of introductory material, its writing style had to subordinate prose to exposition. With no harmony to join the melody, it will take a commitment on the part of the reader to endure the lack of subconscious content, because the story of Nature’s organic central nervous systems really deserves a more elegant narrative. Since the dialog must be content with a less scenic but more direct path in its presentation, it is hoped that the necessary depth and detail have not left the discussion too stuffy.


Given the many innovative and unique goals of this artificial intelligence initiative, the engineering of its manifold components required revolutionary approaches and an entirely new design paradigm.

And yet, with no seismic developments in artificial intelligence technology beyond the frequently cited narrow applications in robotics, linguistics and game mastery, applied AI was until now more promise than reality. In his charismatic book The Sentient Machine, Amir Husain characterizes much of the current development in artificial intelligence and “deep learning” as ANI – Artificial Narrow Intelligence. He goes on to characterize a machine that can learn, and generalize on that learning, as AGI – Artificial General Intelligence. Indeed, since it seems that we had not arrived at AGI before the development of the Organon Sutra, and because artificial intelligence has been characterized in so many different ways, one gets the impression that neither academia nor industry has agreed on the very definition of AI itself.

Given the general cloudiness surrounding the definition of AI in so many research circles, this dialog shall start with its own definition of intelligence before proceeding to the model for an “intelligent” implementation of the Organon Sutra.

According to one dictionary definition, intelligence is defined as:

noun 1. The ability to acquire and apply knowledge and skills.

Although concise, this somewhat notional definition is self-referencing in that ‘ability’ is essentially a skill, so all it is saying is that intelligence is a skill to acquire skills, and does not provide much guidance in engineering an inorganic intelligent agent. (This dialog will characterize a non-organic intelligent entity as an agent because part of the definition of intelligence is the application of knowledge, which implies a certain agency in the systematic. And as it refines its ideas, the dialog will demonstrate the critical importance of agency in developing intelligent behavior.)

For the needed definition, we turn to the pioneering Swiss psychologist Jean Piaget, whose insightful interpretation is succinctly quoted as:

“Intelligence is not what you know, but what you do when you don’t know.”

This definition might also seem vague and overly generalized, without any of the strict, formal character of the dictionary definition. But it does capture the discriminating essence of a functional interpretation, albeit in a negative sense, so we need something more definitive. To fully denote the concept, the dialog will refine its final definition in terms of organic adaptation, which is a close biological analog:

Adaptation is the assimilation of information from the environment of an entity to further self-organize that entity.

Intelligence is the assimilation of information from the future environment of an entity to further self-organize that present entity.

For those readers who tend toward acutely literal interpretations and who do not immediately grasp the intuitive meaning of the definition, the dialog is not implying that intelligence requires a time machine. On the contrary, this document has been prepared specifically to present a plan for designing an artificial implementation of this “intelligence”.
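
One way to make this concrete, without a time machine, is prediction: an agent can treat a model built from its past environment as a proxy for its future environment, and reorganize its present self against what that model anticipates. The sketch below is purely illustrative; the class name, the trend extrapolation, and the adaptation rule are inventions of this example, not mechanisms of the Organon Sutra:

```python
from collections import deque

class PredictiveAgent:
    """Toy agent: reorganizes its present state against an anticipated
    future. The 'future environment' is never observed directly; it is
    extrapolated from the recent past, which is how the definition above
    avoids requiring a time machine."""

    def __init__(self, window: int = 5):
        self.history = deque(maxlen=window)  # recent observations only
        self.state = 0.0                     # the agent's self-organization

    def observe(self, signal: float) -> None:
        self.history.append(signal)

    def anticipate(self) -> float:
        # Crude stand-in for a predictive model: extend the observed trend.
        if len(self.history) < 2:
            return self.state
        step = (self.history[-1] - self.history[0]) / (len(self.history) - 1)
        return self.history[-1] + step

    def adapt(self) -> None:
        # Self-organize the *present* state toward the *anticipated* future.
        self.state += 0.5 * (self.anticipate() - self.state)

agent = PredictiveAgent()
for signal in [1.0, 1.5, 2.1, 2.6, 3.2]:   # a drifting environment
    agent.observe(signal)
    agent.adapt()
print(round(agent.state, 2))  # ~3.12: drawn toward the anticipated 3.75,
                              # not merely averaging the past
```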

With that guiding definition to set our direction, it is apparent that our artificial intelligent agent must interact with some external environment, which we might also want to define. If we look to the enormous volume of research in artificial intelligence for design guidance, we find that many efforts in applied AI proceed only by narrowing the scope of their solution or by limiting the environment that their “agents” are to gain intellectual comprehension of.

Complexity is pervasive in the real world around us. If we placed our artificial agent into a non-complex environment, there would be no need to instill intelligent behaviors in it. We could simply model the environment explicitly, and then place that model into our agent.

But the real world offers inexhaustible variety. Given this, it becomes apparent that we cannot make any assumptions about the environment our agent will be interacting with. Because we cannot explicitly place boundaries around this complexity, our artificial agent must have mechanisms to develop its own rule-governed models of the complex environment it will be adapting to.

This introduces the first fundamental dilemma in artificial intelligence programming and design: A programming entity cannot create intelligent program mechanisms until the program encounters the problem those mechanisms must solve.

What this means is that our design process is really not concerned with engineering any of the agent-developed models, since, beyond a certain kernel of core, native behaviors, we are speaking in essence about a machine that is self-programming, one that builds itself as it adapts to its environment. This also means that we are essentially creating an artificial “stupid agent”: one that starts out with little inherent knowledge, but that has the capability to adapt intelligently to its environment.
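
A deliberately minimal sketch of such a self-building agent appears below. Here simple memoization stands in for genuine rule induction, and every name and mechanism is invented for illustration; the point is only the shape of the thing: a tiny fixed kernel, no domain knowledge, and a rule store the agent fills in for itself:

```python
class StupidAgent:
    """Starts with almost no knowledge. Its only innate kernel is the
    ability to probe unknown situations and to reuse what it recorded."""

    def __init__(self):
        self.rules = {}  # self-programmed knowledge: situation -> response

    def act(self, situation, environment):
        if situation in self.rules:
            # A rule the agent wrote for itself now drives behavior.
            return self.rules[situation]
        # Unknown situation: interact, observe the outcome, keep the rule.
        outcome = environment(situation)
        self.rules[situation] = outcome
        return outcome

world = lambda s: s * 2        # an environment the designer never encoded
agent = StupidAgent()
agent.act(3, world)            # first encounter: probes the world, records 6
print(agent.act(3, world))     # 6 -- answered from a self-acquired rule
print(len(agent.rules))        # 1 -- the agent has begun programming itself
```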

Instead of designing an agent that behaves intelligently right out of the box, so to speak, which is the goal of almost all AI efforts to date, our design process should be directed toward creating an agent that can adapt to its particular, infinitely complex environment as intelligently as possible. Life seems full of paradoxes, and as incongruous as this demand sounds, it illustrates precisely how difficult our design task is.

As this changes our design perspective, we turn to that singular example of naturally evolved intelligence for design guidance: the human central nervous system. Without violating the Second Law of Thermodynamics, the human brain must accomplish adaptation to its environment while changing in a direction that opposes entropy. And although the sheer number of neurons and an astronomical number of synapses provide the brain with a near infinity of processing capabilities to accomplish this adaptation, there are also some fundamental restrictions that limit this capability.

The first and most pervasively limiting behavior of neurons is neural fatigue, or more specifically, synaptic fatigue. During extended bursts of activity, synapses experience a temporary depletion of neurotransmitter, which depresses their ability to respond to further action potentials until a neurotransmitter “recycling” takes place. The phenomenon of synaptic fatigue sets an upper limit on the long-term signaling rate of synapses.
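
A toy numerical model can make this rate limit concrete. The sketch below is an illustration, not physiology; the pool size, per-spike cost, and recovery rate are arbitrary values chosen for the example:

```python
def simulate_synapse(spike_train, pool=10.0, cost=1.0, recovery=0.3):
    """Toy fatigue model: each transmitted spike spends neurotransmitter
    from a finite pool, while a slow 'recycling' rate refills it."""
    available = pool
    transmitted = []
    for spike in spike_train:
        if spike and available >= cost:
            available -= cost           # release depletes the pool
            transmitted.append(1)
        else:
            transmitted.append(0)       # fatigued: no response to the spike
        available = min(pool, available + recovery)   # recycling
    return transmitted

burst = [1] * 30                        # a sustained burst of activity
responses = simulate_synapse(burst)
print(sum(responses), "of", len(responses), "spikes transmitted")
# Early spikes all get through; once the pool is drained, the long-term
# rate is capped by the recovery rate, not by the incoming spike rate.
```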

The second and more abstract process that limits human neurological activity is an effect known as ‘neural biasing’. This is where the creation of long-term structures in neuron organization has the effect of narrowing the options for, or ‘biasing’, the creation of future structures. Like most other things in life, the assimilation of knowledge into an organism does not come free. Each incremental increase in knowledge assimilation must be paid for by a correspondingly incremental decrease in the freedom with which the organism can deal with the assimilation of new knowledge.
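
One way to picture this trade, borrowed from classic version-space learning rather than from neuroscience, is a set of candidate structures that shrinks with every assimilated fact. The names below are invented purely for illustration:

```python
# Candidate long-term structures the system could still grow into.
hypotheses = {"structure_a", "structure_b", "structure_c", "structure_d"}

# Each assimilated observation eliminates the candidates inconsistent
# with it: knowledge goes up, freedom of future organization goes down.
observations = [
    lambda h: h != "structure_d",   # evidence incompatible with structure_d
    lambda h: h != "structure_b",   # evidence incompatible with structure_b
]

for consistent in observations:
    hypotheses = {h for h in hypotheses if consistent(h)}
    print(f"{len(hypotheses)} candidate structures remain")
```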

Because neurons are biological entities, they are all subject to these two main restrictions. So what can Nature teach us by the manner in which she parlays these two fundamental limitations into corresponding assets to complement the given strengths in neuron numbers? And how can we use Nature’s solution in creating natural intelligence to design those core, native behaviors our Stupid Agent will require?

As we are asking these questions, there is another fundamental issue that requires consideration, one that has not been given much discussion in the AI literature. The issue is postulated in the form of a philosophical question, although it has direct applied computational aspects as well. The Organon Sutra has labeled the question the “Fundamental Conundrum”, and it is summarized as follows:

“How can an artificial intelligence system change in response to an arbitrarily complex environment and still remain intelligent?”

So far, nowhere in the literature can an AI researcher find a theory that resolves this fundamental issue. According to our guiding definition, stated at the beginning of this introduction, a self-organizing entity is changing itself as it absorbs information from a chaotic environment, and this assimilation may, indeed probably will, alter the very mechanisms that select the information to be absorbed in the first place, mechanisms which the guiding definition has characterized as “intelligent”. How can an artificially intelligent agent suffer changes to these mechanisms and remain intelligent?

This fundamental issue will be reserved for a later discussion in the dialog, as the larger conversation regarding the path to artificial intelligence must come first, but the resiliency of intelligence should at least be noted here.

Certainly the solutions to all of the foregoing questions involve an abundance of complexities requiring significant engineering consideration, but in the design planning for the Organon Sutra, it was ultimately determined that there are three compelling behaviors that have been collectively absent from every other AI approach in the literature. These three emergent behaviors are integral to any mechanical systematic designed for a higher-level behavior of “intelligence”.

Since interaction with a complex environment is a necessary activity for discovering the constraints in that environment, the assertion was made that an artificial agent can only accomplish adaptation to an arbitrary environment by interacting with it. This is the first, and rather obvious, behavior, and it was implied when we first characterized our intelligent systematic as an agent.

The second critical ingredient can be characterized as a synthetic implementation of true gestalt abstraction. Because any reference to this pivotal emergent behavior is missing in almost every AI approach found in the literature, there is no succinct bumper-sticker definition to relate it to. Even the formal definition for its fundamental nature is a venture in discovery, and will require considerable explanation.

The third fundamental constituent necessary for the core, native kernel is the design of a non-symbolic analogy capability. The Power of If. Much like gestalt abstraction, a concise definition of this activity is also elusive, and will also have to await its place in this dialog.

With all of that, the implementation of these foundational behaviors, and a description of the subsequent components of the Organon Sutra, is a journey in itself.
