Discussion 11E: The Invariance in Change

Human memories are like shadows, owing their existence more to the light than to the obscuring object which casts them. Unfortunately, within the research community there has arisen a profusion of labels and models for the multiple facets of the phenomenon collectively referred to under the singular banner of “memory”. Equally unfortunately, this profusion has led to a conflation of two concepts: the neural mechanisms which retain the temporally stateless experiences of an organism, and the retained experiences themselves. In the remaining exposition of the dialog’s third fundamental precept, the bottom-up engineer must keep these two conceptualizations distinct.

Throughout this dialog, the Organon Sutra has admonished the bottom-up engineer to purge all use of computer metaphors when conceptualizing the designs for artificial intelligence, and this discipline is especially necessary when forming models of intelligent memory. One of the most corrupting concepts an AI engineer can implement is the separation of memory storage from memory retrieval. As selective pressures evolve even more sophisticated forms of organic memory, any reliance on silicon or analog metaphors for these processes will hopelessly over-complicate any models that might be applied.

A fundamental dilemma develops when the mechanism for memory “storage” is separated from the mechanism for its retrieval. When multiple memories are stored and recalled, a scheme is needed to “locate” an individual datum on demand, and such schemes universally grow exponentially in complexity with a linear increase in the number of memories. This dilemma has yet to be resolved in the architectural designs of artificial neural networks, producing the phenomenon of catastrophic interference alluded to in Discussion 2. Some might point to content-addressable memories, but these suffer from a fixed context space: once “filled”, they cannot be arbitrarily scaled up in size without encountering the devilish “curse of dimensionality”. The singular datum of intelligent memory must exist at the point of its abstraction.
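The fixed context space of a content-addressable memory can be seen in a small experiment (an illustrative sketch only, not part of the dialog’s argument). A Hopfield-style network stores patterns by Hebbian superposition; kept below its capacity of roughly 0.14 patterns per neuron it cleans up a corrupted cue, but loading it well beyond that capacity produces the very interference described above:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100  # neurons; theoretical capacity is roughly 0.14 * N patterns

def train(patterns):
    # Hebbian outer-product rule: every memory is superimposed on one matrix
    W = np.zeros((N, N))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / N

def recall(W, cue, steps=10):
    # iterate the network so the cue can settle toward a stored attractor
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

results = {}
for n_patterns in (5, 30):
    patterns = [rng.choice([-1, 1], size=N) for _ in range(n_patterns)]
    W = train(patterns)
    cue = patterns[0].copy()
    cue[rng.choice(N, size=10, replace=False)] *= -1  # corrupt 10% of the bits
    results[n_patterns] = np.mean(recall(W, cue) == patterns[0])

print(results)  # fraction of bits of the stored pattern recovered at each load
```

Note that no addressing scheme exists here at all: the cue itself is the address, which is why capacity, not lookup, becomes the bottleneck.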


In traditional computer programming, the programmer sets out to design an algorithm: a state machine with a set number of arbitrary machine states, whose specific state transitions are dictated by the real-time “inputs” it is exposed to at the point of program execution. Although the memory mechanism is the same for “code” as for “data”, that very mechanism requires the discrete bits of “code” to remain separate from the discrete bits of “data”. In an intelligent machine, this separation cannot be absolute: there must be some mechanism for the “data” to change the “code”. It is the intention of the Organon Sutra to develop in the bottom-up engineer a mindset whereby program design does not begin with code assembly that is later exposed to “data” once the program begins execution. Instead, an inverse, bottom-up approach begins with the “data” in the environment of the intelligent agent, data which builds the code that will ultimately demonstrate intelligence. So from the very beginning, the bottom-up engineer is encouraged to discard all prior conceptualizations of “memory” and replace them with the conceptualizations of abstraction, as conceived throughout this dialog.
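The contrast can be made concrete in a minimal sketch (the names and states here are hypothetical, chosen only for illustration). The first machine is the conventional design: its transition table is written before execution and no input can alter it. The second begins with an empty table and assembles its “code” from the events it experiences:

```python
# Conventional design: the transition table (the "code") is fixed at
# design time; run-time "data" can never alter it.
FIXED_TABLE = {("idle", "start"): "running", ("running", "stop"): "idle"}

def step_fixed(state, event):
    return FIXED_TABLE.get((state, event), state)

# Inverse, bottom-up sketch: the table starts empty, and the machine's
# experience of its environment builds the transitions themselves.
class GrownMachine:
    def __init__(self, initial="start"):
        self.table = {}      # the "code", assembled from "data"
        self.state = initial

    def observe(self, event, next_state):
        # data changes the code: the experienced transition is recorded
        self.table[(self.state, event)] = next_state
        self.state = next_state

    def step(self, event):
        # replay: behave according to the code that experience built
        self.state = self.table.get((self.state, event), self.state)
        return self.state

m = GrownMachine()
m.observe("food-scent", "approach")  # experience writes the transition
m.observe("food", "consume")
m.state = "start"                    # reset; the grown code now drives behavior
```

The sketch is of course still a lookup table, which the previous section argued cannot scale; it illustrates only the inversion of design direction, not a workable memory mechanism.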


The human brain endeavors to acquire knowledge about the permanent, unchanging properties of the world that it experiences, yet that acquisition is performed by experiencing a world that is continually changing. In this endeavor, the mere perception of invariance is not enough to reverse the flow of entropy in an organism; there must also exist a mechanism to assimilate invariance into the very machinery that perceives it. In the organic machine, the data must build the code.

There is a difference between the processes whereby an organism perceives the instant sensory states of its environment and the processes whereby it acquires information about those states. In the former, the organism is neurologically the same before perceiving instant sensations as it is afterwards, whereas an organism that assimilates new information about its environment is not quite the same neurologically as it was before.
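This distinction can be caricatured in a few lines (a hypothetical Hebbian-style sketch, not a claim about actual neural implementation): perceiving computes a response and leaves the machinery untouched, while assimilating leaves the weights, the machinery itself, changed:

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(size=4)   # the "machinery" that does the perceiving

def perceive(w, x):
    # read-only: a response is produced, the machinery is untouched
    return float(w @ x)

def assimilate(w, x, lr=0.1):
    # Hebbian-style update: responding to x changes the weights,
    # so the organism is not quite the same afterwards
    y = w @ x
    return w + lr * y * x

stimulus = np.array([1.0, 0.0, -1.0, 0.5])

before = weights.copy()
perceive(weights, stimulus)                   # perception: no change
unchanged = bool(np.allclose(weights, before))

new_weights = assimilate(weights, stimulus)   # assimilation: changed
changed = not np.allclose(new_weights, weights)
```

Here `assimilate` returns new weights rather than mutating in place, making the before/after difference explicit: the same stimulus, presented twice, would now evoke a different response.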

In traditional computer programming, the programmer designs the various states that an algorithm may express, in addition to specifying the transitions by which an individual state may assume a different state. In organic adaptation, however, transitions in the state of the organism must be modulated by the organism’s environment, and not entirely by its ontogeny. Knowledge is all about change in the environment, and the structures that assimilate this knowledge must change the organism as well.

And ultimately, for human organisms, a fundamental paradigm arises when the organism, having acquired the mechanisms to assimilate information about changes in its environment, develops the ability to assimilate information about its own change. This paradigm marks the transition from cognition to intelligence.

The good news for the bottom-up engineer is that the assimilation mechanisms on both sides of this paradigm are essentially the same. But the tricky part is understanding how the latter assimilation structures come about from self-organizing changes in the former assimilation mechanism.


In the very first pages of this dialog, the Organon Sutra specified that there were two fundamental organic processes which proscribed the entropic behavior in central nervous systems. The first process, the phenomenon of neural fatigue, proscribes entropy at the cellular level. The second process, the abstract phenomenon of neural biasing, was described as a restriction which asserts a cost for all assimilation. The dialog brought this concept out at the very beginning of the journey for a specific reason: this cost proscribes the entropic process at the very interface between the organism and its environment, which is the perspective that the bottom-up engineer must also assume in the design of her artificial agents.

As we left off with the adaptations of our primordial mammals following the K-Pg asteroid event, the evolving cerebral cortex had developed a number of neuro-architectural assemblies, which the dialog loosely described as the three metric sense modality areas forming a sensory integration triangle, and a motor control area formed anatomically frontal to the sensory triad area. Together, these four cortical areas created the neurological maps from which the cerebellum orchestrated the programming of the animals’ physical musculature to effect terrestrial locomotion.

And as the emotive complex evolved into the limbic system, various limbic structures provided an interface for the nominal senses of smell and taste to develop association maps within the integrative sensory triad, providing the cerebellum with additional dimensions for the programmatic creation of episodic memory, and the foundations for cognitive abstraction and allocentric behaviors beyond egocentric routines.

This neural enterprise allowed the primordial mammals to adapt to the chaotic environment following the K-Pg asteroid event with evolutionary mechanisms beyond the generational memory of the genome, but even this inter-generational adaptation mechanism became taxed as mammalian species flourished and the increasing sizes of their habitats presented them with increasing complexity in environmental conditions. As always, the successes of Nature’s adaptations spin the selective pressures for the next evolutionary step.

As the habitat of mammals increased in area, the exploitation of the association maps formed by the hippocampus and reinforced by the cerebellum would require a more sophisticated marshaling of their stimulus-bound implementation. Our primordial mammal has to this point developed the ability to adapt to changes in its environment within the lifetime of a single organism, but now environmental selection pressures from expanding habitats will demand that it develop the ability to internalize environmental changes into the very association processes themselves. As the bottom-up engineer will come to understand, where the cerebellum is all about invariance, the cerebral cortex is all about change.

For the remaining evolution of the mammalian brain, the cerebral question is: change in what?


Copyright © 2019 All rights reserved