Discussion 9B: The beginnings of Bottom-up design

It is not the intention of this discourse on the cerebral cortex to teach neuroanatomy, but to illustrate how Nature has begun with a simple signaling unit, the neuron, and, by linking together a large number of these basic elements in a purely asynchronous fashion, has moved from the basic propagation of individual neuron activation patterns to the incredibly complex emergent behavior of intelligence, all without the hierarchical imposition of any temporal ordering on the signaling patterns of local units.

Because we do not fully understand the intricate complexity of the connection design or the signaling mechanics of individual neurons, we cannot simply copy this architecture to use as a model for our artificial agent, nor do we want to. But Nature can teach us many important lessons about the architecture we do intend to build, if we just know where to look.


If we were to start with the smallest component in the human central nervous system, the molecule, and move through the hierarchy of organization from molecule to brain, we could pause at each level to examine the fundamental physical attributes that Nature has exploited there which contribute to the emergent behavior we call intelligence. At each pause, it is tempting to ask whether any particular property is the critical characteristic that marks the difference between intelligent behavior and mere adaptation.

At the molecular level, there are innumerable elemental processes occurring, but in particular there is a process that Nature employs in many instances called phosphorylation, a chemical modification that activates some proteins and deactivates others. Phosphorylation can be readily reversed, and so can serve as a simple molecular switch, turning the biochemical activity of a protein on or off. Is this the crucial ingredient for intelligence?
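To make the switch-like character of this process concrete, here is a minimal Python sketch; the Protein class and its behavior are illustrative assumptions of our own, not a model of any specific biochemical pathway or library:

    # A sketch of phosphorylation as a reversible molecular switch.
    # Phosphorylation activates some proteins and deactivates others,
    # so the effect of the switch depends on the protein.

    class Protein:
        def __init__(self, name, active_when_phosphorylated=True):
            self.name = name
            self.phosphorylated = False
            self.active_when_phosphorylated = active_when_phosphorylated

        def kinase(self):       # a kinase attaches the phosphate group
            self.phosphorylated = True

        def phosphatase(self):  # a phosphatase removes it, reversing the switch
            self.phosphorylated = False

        @property
        def active(self):
            return self.phosphorylated == self.active_when_phosphorylated

    p = Protein("enzyme A", active_when_phosphorylated=True)
    p.kinase()        # switch on
    assert p.active
    p.phosphatase()   # readily reversed: switch off
    assert not p.active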

Just above the molecular level, at the rank of neural membranes, the very act of neuron excitation comes about from the ability of cell membranes to change the transport of ions through the membrane barrier, and thus vary the voltage potential across the membrane. Nature has engineered two different membrane properties that regulate this change in a membrane's ion transport, called the electrotonic and electrogenic properties of a membrane. The electrotonic membrane response occurs passively, whereas electrogenic membrane changes require an energy expenditure on the part of the cell body. Nature exploits both of these properties to create a variety of neuron cell behaviors.
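The distinction can be sketched in a few lines of Python; the time constant, pump rate and voltages below are illustrative assumptions, not measured values:

    # Passive (electrotonic) relaxation of membrane potential, contrasted
    # with an active (electrogenic) contribution that costs the cell energy.

    import math

    V_REST, TAU_MS = -70.0, 10.0  # resting potential (mV), membrane time constant

    def electrotonic_decay(v, dt_ms):
        """Passive response: the potential relaxes toward rest for free."""
        return V_REST + (v - V_REST) * math.exp(-dt_ms / TAU_MS)

    def electrogenic_pump(v, dt_ms, pump_mv_per_ms=0.5):
        """Active response: an energy-driven pump pushes the potential."""
        return v - pump_mv_per_ms * dt_ms  # e.g. a hyperpolarizing pump current

    v = -55.0                       # a depolarized starting point
    v = electrotonic_decay(v, 5.0)  # drifts back toward -70 mV passively
    v = electrogenic_pump(v, 5.0)   # driven further by active transport
    print(round(v, 2))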

And there is an added complexity to membrane composition, as Nature has devised different electrogenic mechanisms to gate the transport of ions. These mechanisms can be broadly grouped into what are termed ionotropic receptors and metabotropic receptors. These receptors are differentiated by their affinity for various neurotransmitters, but are chiefly characterized by the time frames of their response. Ionotropic receptors have quick response times, whereas metabotropic receptors are slower to respond but their activation persists for longer periods. Certainly, here again, Nature takes advantage of this variety.
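As a rough sketch of this difference in time frames, the two receptor classes can be caricatured as exponential response kernels with very different time constants; the constants chosen below are illustrative assumptions only:

    # The two receptor time courses as exponential response kernels.

    import math

    def receptor_response(t_ms, tau_ms):
        """Relative postsynaptic effect at time t after transmitter binding."""
        return math.exp(-t_ms / tau_ms) if t_ms >= 0 else 0.0

    IONOTROPIC_TAU = 5.0      # ms: fast to act, quick to fade
    METABOTROPIC_TAU = 500.0  # ms: slower cascade, but the effect persists

    for t in (1, 10, 100):
        fast = receptor_response(t, IONOTROPIC_TAU)
        slow = receptor_response(t, METABOTROPIC_TAU)
        print(f"t={t:>3} ms  ionotropic={fast:.3f}  metabotropic={slow:.3f}")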

As we rise to the level of the neuron cell proper, we see how the morphological complexity and extraordinary elaboration of neuron dendrites has been taken by neurohistologists as evidence for the diversity in function of neurons. There is the profuse branching of axons, and the compelling structures that have been developed for axon synapses. We see in the construction of axons how Nature sacrificed speed in the transmission of electrical signals to compensate for the signal loss inherent in passive transmission lines.
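That signal loss is worth a moment's illustration. In an idealized passive cable, a steady potential attenuates exponentially with distance, V(x) = V0 * exp(-x / lambda); the length constant used below is an illustrative assumption:

    # Why passive transmission loses signal over distance.

    import math

    def passive_amplitude(v0_mv, distance_mm, length_constant_mm=1.0):
        return v0_mv * math.exp(-distance_mm / length_constant_mm)

    # After a few length constants almost nothing of the signal remains,
    # which is why axons regenerate the spike actively, at a cost in speed.
    for d in (0.5, 1.0, 3.0):
        print(f"{d} mm: {passive_amplitude(10.0, d):.2f} mV of 10 mV")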

And continuing to examine the complexities of organization above the neural cell level, we encounter the vast possibilities as individual neurons are linked together, where static interconnections become dynamic interactions and simple interrelations become complex interdependencies. We find excitatory synapses and inhibitory synapses, with a multitude of neurotransmitters, each with its own special story. We find axons that traverse relatively large cortical distances only to target specific patches of dendrites mere micrometers apart. We find neuron dendrites performing message integration by way of the spatial summation of inputs and the temporal summation of signals. And as we pause here to ask whether this complexity might be the progenitor of intelligence, it is difficult to conceive of the blueprint which directs this hypercomplex interconnection of so many varied neurons in so particular and definitive a fashion.
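Spatial and temporal summation are easy to demonstrate with a crude leaky integrator standing in for the dendrite; the leak factor and threshold below are arbitrary illustrative values, not a claim about real neurons:

    # Spatial summation: inputs from many synapses arriving in the same step.
    # Temporal summation: inputs arriving close in time, before the leak
    # erases their trace.

    LEAK = 0.9        # fraction of potential retained per time step
    THRESHOLD = 1.0   # firing threshold (arbitrary units)

    def integrate(spike_trains):
        """spike_trains: one list of input amplitudes per time step."""
        v, fired = 0.0, []
        for t, inputs in enumerate(spike_trains):
            v = v * LEAK + sum(inputs)  # leak, then sum coincident inputs
            if v >= THRESHOLD:
                fired.append(t)
                v = 0.0                 # reset after the spike
        return fired

    # Two weak inputs in one step (spatial) plus one soon after (temporal)
    # together cross the threshold, though none could alone.
    print(integrate([[0.3, 0.4], [0.5], []]))  # -> [1]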

The more we look for answers, the more Nature reminds us that ever more questions await us. In the course of billions of years of evolution, it is not surprising that Nature has taken advantage of every possible nuance in chemical signaling that physical laws permit. But we can generalize much of what we do know as we examine how neurons are employed in massively asynchronous assemblies.


Since we do not yet possess the scientific apparatus that would reveal the true inner workings of Nature's marvelous neural creations, we can only construct models that might capture their essence. Considered from the level of neural organization that brings about the phenomena we call consciousness and intelligence, at the opposite end of the organizational spectrum from the individual signaling unit, the specific activities of the human neocortex are varied, although they can be generalized into a number of model types which attempt to demonstrate the various dynamics of neural assemblies.

In his groundbreaking book Higher Cortical Functions in Man, the Russian neuropsychologist Aleksandr Luria explains how post-eighteenth-century attempts at understanding the complexity of the human brain “had to contend with two opposing schools of thought, one attempting to relate mental processes to circumscribed areas of the brain, with the brain regarded as an aggregate of separate organs, and the other assuming that mental activity is a single, indivisible phenomenon, a function of the whole brain working as a single entity.”

He went on to detail a pioneering viewpoint: that ideas of the localization of function in the human brain should be centered around certain “nuclear zones of analyzers”, particular areas of the brain in which the concentration of specific elements of cortical analysis, and of their corresponding connections, is maximal. Although somewhat simplistic, in its time this characterization helped to steer the conversation in a definitive cytoarchitectural direction.

This dialog will adopt a somewhat different approach to building artificial intelligence, by developing the critical formalisms needed to engineer massively asynchronous assemblies. In the Introduction, the dialog began with a discussion of the architecture that would form the substrate for this implementation, but in the ensuing chapters it has become clear that we must understand the nature of massively asynchronous assemblies before we can even start to engineer their behaviors. Much as a builder decides on fundamentals like square footage, room arrangement and wall placement before deciding whether to build with wood, brick or metal, the formalisms which define the basic elements of a massively asynchronous assembly must not be tied to any particular architecture of implementation before the nature of their behavior is understood.

Given that, we will be making explicit definitions for many of the new concepts that will be introduced, and the first term whose connotations we must limit is 'architecture'. Throughout this document the term has been used to denote the form of computing elements used to implement a computing system, and the dialog cited the three most prevalent element types – biological neurons, artificial neural nets (or more precisely, connectionist models) and, of course, digital computer algorithms. For the remainder of the dialog, however, we will refer to these as 'implementation methods', and reserve the term 'architecture' for the specific design attributes of massively asynchronous assemblies, which will be made explicit in the discussion.


So with the previously outlined introduction to the human cerebral cortex in hand to serve as a model of comparison, our next step should be to establish a clearer understanding of these massively asynchronous assembly formalisms. But before we can introduce them, there must be a shift in perspective, for without a doubt, if we are to practice a discipline of bottom-up design, we must form different habits than before.

And the first habit that must be adopted is really the breaking of an old one: thinking in a hierarchical fashion. In top-down design, we attempt to sequester the many complexities that we wish to engineer into a synthesized hierarchy: a collective behavior, a system that can be differentiated and compartmentalized. This is then broken down into its manifest sub-compartments, and further divided until we reach some level of reproducible units. This approach is effective for much of Man's engineering endeavors, but one salient fact emerges when we examine Nature's solution to developing natural intelligence: fundamental cognitive operations are carried on without the benefit of predesigned circuitry. Recall that much has been said in this dialog to stress the fundamental fact that adaptive behaviors cannot develop until an organism is exposed to the environment it will be adapting to. This concept is wholly alien to the hierarchically minded engineer.

Unfortunately, this hierarchical thinking carries over into our conceptualization. For example, in conceptualizing vertebrate vision systems, there is an almost universal meme which holds that a visual stimulus is broken down into its constituent elements, such as basic lines and edges, and that the nervous system builds an image by analyzing these components first and then re-assembling them, building-block fashion, at successive stages of the visual pathways, ending up, still somewhat mysteriously, with the apprehension of a singular, cohesive visual field.


Bottom-up design is an acknowledgement that the complexities that result in emergent behavior cannot be expressed explicitly, and so we must abandon this habit of hierarchical conceptualization. The new habits we must adopt are expressed in the formalisms needed to engineer massively asynchronous assemblies, with which we can define and devise those core, native behaviors that have been alluded to from the very beginning of this dialog.

As a corollary to this shift in design pragmatics, there is another near universal conceptualization which is not directly related to our overall methodology, but which has nonetheless been a steering influence throughout much of neuroscientific research. A tremendous amount of investigation is being conducted in the neuroscience community on the nature of neural plasticity, and on the role that synaptic connectivity plays in learning and in its theorized progenitor process, memory. From this research, an overwhelming amount of speculation has developed regarding memory and learning, but so much has yet to be discovered that a working model of natural intelligence at the neural and synaptic level has until now remained elusive.

One of the disciplinary themes that this dialog will contend is that this focus on individual synaptic plasticity is misplaced, and that a telling model of natural or artificial intelligence can be found with a slightly different perspective. We do not wish to minimize the research being conducted at the synaptic level, and indeed there is still much to be learned about the fundamental signaling properties of individual synapses, but it will be shown in this dialog that plasticity should not be expressed at every synaptic juncture. Unfortunately, in the fervor to find the Grand Theory of Learning, many researchers have conflated the synaptic properties of signaling with the misplaced property of plasticity.

The Hebbian doctrine of neural plasticity has dominated neural research and prejudiced many resulting conceptions in a profound way. It is popular because it tries to explain the phenomenon of learning, but in this it fails, for the principal reason that it treats all synaptic activity as both functional and equally pliable, changing the definition of every synapse from a signaling device to a learning device. It is important, in our changing perspective as bottom-up engineers of AI, to keep a clear separation between these two properties, so the conversation in this dialog will try to steer away from the now commonplace mindset that neural activity is principally a result of the strength of connections.
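To make the objection concrete, the canonical Hebbian rule can be written in a few lines; notice how it quietly gives every synapse a second job. The learning rate and activity values below are illustrative assumptions:

    # The canonical Hebbian update: delta_w = eta * pre * post.
    # Besides carrying a signal (w * pre), the synapse is rewritten on
    # every coincidence of activity, making it a learning device as well.

    ETA = 0.1  # learning rate

    def hebbian_step(w, pre, post):
        signal = w * pre          # the synapse as a signaling device
        w = w + ETA * pre * post  # the same synapse as a learning device
        return signal, w

    w = 0.5
    for pre, post in [(1.0, 1.0), (1.0, 0.0), (1.0, 1.0)]:
        signal, w = hebbian_step(w, pre, post)
    print(round(w, 2))  # the weight drifts purely from correlated firing

It is exactly this fusion of the two roles in one quantity, the weight, that the present dialog proposes to keep separate.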
