Discussion 1: The Plan for the Blueprint

One of the first decisions that had to be made in the early stages of the Organon Sutra design regarded the implementation architecture that would be employed.

With no readily suitable artificial intelligence solutions available, the design planning had to start from scratch, so all options for a deployment architecture were considered. However, designing an entirely novel AI application is not so straightforward.

Of course, the most direct approach would be to simply emulate the human cerebral cortex in digital form on our fastest computers.

Unfortunately, there are significant drawbacks to this approach, the most significant being that we simply do not fully understand all of the elaborate mechanisms of the human brain ourselves. Very few would argue with the assertion that the human central nervous system is the most complex creation of Nature, and we are only now beginning to resolve a very small part of its many wondrous secrets.

And assuming that we could properly model the true behavior of its intricately connected parts, we would still be confronted with The Sheer Numbers.

According to one estimate, the human brain has approximately 85 billion neurons, and although the number varies from neuron to neuron, each neuron has on average about a thousand connections with other neurons. Modeling each one of these 85 trillion connections would present a mind-boggling (pun intended) challenge.
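For readers who want the arithmetic spelled out, here is a minimal back-of-the-envelope sketch in Python, using only the 85 billion and one-thousand figures quoted above (both are estimates, and the constant names are ours):

    # Back-of-the-envelope count of connections to be modeled,
    # using the estimates quoted above.
    NEURONS = 85e9                 # ~85 billion neurons in the human brain (estimate)
    CONNECTIONS_PER_NEURON = 1e3   # ~1,000 connections per neuron, on average

    total_connections = NEURONS * CONNECTIONS_PER_NEURON
    print(f"{total_connections:,.0f} connections to model")
    # -> 85,000,000,000,000 connections to model (85 trillion)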

The reason this is such a challenge is that, even before we come to grips with the complex interconnectedness of the cerebral cortex, an examination of the functional characteristics of the synaptic connections we intend to emulate reveals that individual synapses are far more complex than the simple excitatory/inhibitory mechanisms they were previously portrayed as.

At the post-synaptic site of most of these neural connections, complex structures of molecular chains are formed just below the neural membrane, built up in response to the neuroplasticity of that particular neuron.

These molecular chains are built up within each post-synaptic site over the lifetime of the dendritic spine that supports the synapse, and their molecular combinations are the product of the past signaling patterns of fast neurotransmitters (typically glutamate or gamma-aminobutyric acid), modulated by the signaling patterns of slow neurotransmitters (mostly the biogenic amines dopamine, norepinephrine, epinephrine, histamine and serotonin).

Because they are built up over time, changing over the course of neural modulation, these molecular chains can be unique to each post-synaptic connection site, and as complex proteins they encode a type of “post-synaptic logic”, conferring distinct patterns of membrane response to local pre-synaptic neurotransmitter signaling.

And because they encode a unique logic for each post-synaptic site, our supercomputer emulation would require the programming of a different “subroutine” for each distinct neural connection. This subroutine would include such synaptic information as the proximal/distal location of the synapse on its dendritic branch, the type of neurotransmitters it responds to, and a description of these molecular chains. It is not hard to see what a stupendous task it would be to simulate the varied logic of 85 trillion disparate neural connections, each with a unique molecular logic encoding and dendritic morphology. And even if we found a way to automate that process, we would still be faced with the next Sheer Number.
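Before moving on to that next number, it may help to make the scale of this bookkeeping concrete. The sketch below shows, in Python, the kind of per-synapse record and per-synapse “subroutine” such an emulation would have to carry; every field name and both example response rules are hypothetical placeholders invented for illustration, not a description of any real biophysical model.

    from dataclasses import dataclass
    from typing import Callable, Tuple

    @dataclass
    class SynapseRecord:
        # Hypothetical fields mirroring the synaptic information listed above.
        dendritic_position: float    # 0.0 = proximal, 1.0 = distal on its dendritic branch
        transmitter: str             # e.g. "glutamate", "GABA", "dopamine"
        molecular_state: Tuple[float, ...]  # stand-in for the post-synaptic molecular chains
        response_rule: Callable[[float, Tuple[float, ...]], float]  # the synapse's own "subroutine"

    def excitatory_rule(signal: float, state: Tuple[float, ...]) -> float:
        # Placeholder logic: response scaled by the (hypothetical) molecular state.
        return signal * sum(state) / len(state)

    def inhibitory_rule(signal: float, state: Tuple[float, ...]) -> float:
        return -signal * max(state)

    # A full emulation would need one such record, and one such rule,
    # for each of the ~85 trillion connections.
    synapses = [
        SynapseRecord(0.2, "glutamate", (0.3, 0.7), excitatory_rule),
        SynapseRecord(0.9, "GABA", (0.5, 0.1), inhibitory_rule),
    ]

    for s in synapses:
        print(s.response_rule(1.0, s.molecular_state))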

One of the basic physiological characteristics of all biological neurons is referred to as the Refractory Period, which describes the recovery time a neuron requires to return to normal after a depolarizing action potential. Due to the ion channel properties in a neuron’s membrane, there is a lag in the membrane’s voltage potential as it recovers from its depolarized potential and returns to its resting potential. The refractory period is about 2 milliseconds in most biological neurons, and it is significant in any simulation of these neurons in that it would set the basic signaling rate for all neuron activity.

This limit on the neuron signaling rate would also establish the basic “cycle time” within which our supercomputer simulation of the human cortex would have to complete a single pass of “emulation code” for all 85 trillion connections. In simpler terms, the simulation would have to determine the state transitions of all synapse connections within a single refractory period.

At the time of this writing, the world’s fastest supercomputer was the Sunway TaihuLight in China, with an advertised capability of 93 quadrillion (million billion) floating point operations per second. If we round this up to 100 petaflops and adjust for the 2 millisecond refractory period just mentioned, the TaihuLight can manage roughly 200 trillion floating point operations per refractory period; spread over 85 trillion connections, that is barely two operations per connection per cycle, not nearly enough to execute each connection’s code, but tantalizingly close.
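The budget behind those figures, as a minimal sketch; the only quantity here that does not appear in the text above is the final per-connection ratio, which is simple division:

    # FLOP budget per refractory period, using the rounded figures above.
    PEAK_FLOPS = 100e15           # 100 petaflops (rounded up from 93 quadrillion per second)
    REFRACTORY_PERIOD_S = 2e-3    # ~2 millisecond refractory period
    CONNECTIONS = 85e12           # 85 trillion connections, from earlier

    ops_per_cycle = PEAK_FLOPS * REFRACTORY_PERIOD_S
    print(f"{ops_per_cycle:,.0f} operations per refractory period")    # 200,000,000,000,000
    print(f"{ops_per_cycle / CONNECTIONS:.1f} operations per connection per cycle")  # 2.4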

However, even if we were to fudge a bit and stretch out the cycle time, this would still not be a workable architecture. Supercomputers achieve their petaflop speeds through massive parallelization. In the case of the Sunway system, the TaihuLight sports 10,649,600 processing cores, but the catch is that all of the cores must run essentially the same instruction program, with each core working on a different chunk of data.

Any variation in the instruction program run between cores introduces the very tricky problems of synchronization and load imbalance, so this supercomputer, running a different connection subroutine on each of its 10 million-plus cores (because, remember, each synapse is unique), would spend so much of its time stalled on coordination that very little useful computation would occur at all.
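A loose illustration of the problem, sketched in Python with NumPy rather than on any actual supercomputer: when every element is processed by the same rule, the work collapses into one uniform bulk operation, but when every element demands its own rule (as every synapse here would), the work degenerates into element-by-element dispatch that lockstep parallel hardware cannot accelerate. The rules below are arbitrary stand-ins chosen only to make the code run.

    import numpy as np

    signals = np.random.rand(100_000)

    # Uniform case: every "core" runs the same instruction program,
    # only the data differs -- one vectorized, lockstep-friendly operation.
    uniform_result = signals * 0.5

    # Divergent case: a different (arbitrary) rule for every element,
    # standing in for a unique subroutine per synapse.
    rules = [(lambda x, k=k: x * (k % 7 + 1)) for k in range(signals.size)]
    divergent_result = np.array([rule(s) for rule, s in zip(rules, signals)])

    print(uniform_result.shape, divergent_result.shape)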

At this point, it should be noted that although this imagination exercise demonstrates that a supercomputer simulation would not be a viable architecture for the Organon Sutra, the inquiry was not without merit. It yielded several observations that proved hugely constructive as the design process continued, and so the conceptual planning carried on with this blue-sky thinking about why the extensive parallelization of a supercomputer is wholly unsuited for the emulation of a biological central nervous system.

Given the premise that the basic neural units of a biological central nervous system all essentially operate in an asynchronous manner, in which there is no temporal ordering imposed on the inputs and outputs of any individual neuron, the casual conjecturing continued on Nature’s solution to the organization of massively asynchronous systems. (It is this lack of temporal ordering that makes it impossible for the massively parallelized supercomputer architecture to emulate a biological CNS.)

The inquiry centered on how the human cerebrum as a whole can maintain executive task cohesion while implementing billions of asynchronous elements. In the Introduction, the question was posed as to how Nature would parlay the limitation of neural fatigue into a constructive factor when evolving natural intelligence. The conclusions of this inquiry suggested that neural fatigue was utilized as a form of negative feedback among the connections between neurons, and was indeed advantageously employed to assert a form of temporal cohesion among neural units, thereby mitigating the implicit chaos of a massively asynchronous system. Nature always seems to find a way to exploit the resources that are available, even when those resources begin as limitations.

This perspective was then coupled with a prior investigation of theories on human prefrontal lobe functionality, which ultimately led to a far-reaching hypothesis on the emergent nature of biological consciousness. (This hypothesis will not be discussed at length in this dialog because, curiously, no correlation was determined between biological consciousness and natural intelligence beyond the observation that both are emergent behaviors. The thesis will at least be summarized in a definition of the kinetics of Consciousness, since we experience consciousness because of the nature of its neural dynamics.)

It was this generalized formalization of the emergent nature of consciousness that established the foundational conceptualization for Gestalt Abstraction (which is itself a massively asynchronous process, and must be conceived of and implemented in this way). Now, at this early point in the discussion, we can only use a very distant analogy to help the reader envision how gestalt abstraction, as it was first imagined, can flow from a system with massively asynchronous components: Have you ever watched a very large flock of many hundreds of birds all flying tightly together in what seems to be a singular, cohesive “cloud” of flapping wings, only to see the entire flock turn in an instant and fly off in an altogether different direction, as if all of the individual birds were interconnected and choreographed in some fashion?

