Discussion 6: Our words are our perceptual Anticipatory Structures

Although these anticipatory structures will be numerous and diversified, we can begin to resolve the apparent paradox by generalizing this community of structures into more basic constituent classes. An early design of the Organon Sutra outlined four basic “orders of functionality”, from which all of the desired anticipatory structures would be developed. These “orders of functionality” would modulate both the perceptual cycles proper, and the process of perceptual learning of new anticipatory schemata.

The first order of functionality in perceptual processes is termed the filter level. This order of functionality is considered to be the characterization or objectification of noise. This objectification is based on identifying the noise that demonstrates persistence in the stimulus environment, as reported by the sense modalities our artificial agent possesses for its particular environment. Noise is used in this context to refer to any input from a perception modality which has not yet been objectified. In fact, at the commencement of our Stupid Agents’ very first adaptation to a new environment, almost all perceptions are by definition “noise”.

At this order of functionality, we differentiate noise (the persistences present in the stimulus) from background, much as modern submarines sort out sounds with passive sonar, and this is where the symbolic language of mathematics dominates the definition. The Organon Sutra does have a formal definition of noise, but the terminology surrounding that definition will not be developed until a later discussion, so this informal characterization will serve in the interim.

The next order of functionality in perceptual processes is termed the association level. This second order functionality is where first order objects are associated with their background, and this level is considered to be the basis for predication. (Predicate is used in this context to mean “to found or base something on”.) This is where the symbolic language of logic dominates in the definition.

The third order of functionality in perceptual processes is termed the temporal objectification level. This next order functionality is where the artificial agent identifies and gives names to change in first and second order objects. This is where natural language dominates in the definition, as the symbolic languages of mathematics and logic become less expressive at this level.

Indeed, it is our words that organize these temporal structures and sort out the sometimes messy, oftentimes illogical interdependencies needed for this important phase in perception. This also illustrates how our words perform a function beyond mere communication. As Marvin Minsky stated so presciently in his brilliant essay collection The Society of Mind, our words also serve to structure our very thinking. If speech fundamentally evolved as a memory aid before it became a vehicle for communication, one cannot help but think that this is the mechanism employed in our “inner voice”, and the reason people talk to themselves, whether silently or even out loud.

The fourth order of functionality in the scheme of perceptual processes represents the highest level and is termed the abstraction level. At this process level, first, second and third order perceptual structures and functions become objects in themselves, to be processed as objects within the first order (objectification) and second order (predication) levels.

Here, all literal and symbolic languages fail, as we intuitively grasp that perceptual abstraction is a graphic activity that cannot be processed symbolically or literally, where form and content are highly fluid, and gestalt representations are necessary. During perceptual abstraction, the boundary between objects and structures is blurred, often inverted, and constantly changing.

At this level, we also invert the very anticipatory structures that we have been building, to become objects themselves for processing at the second and third orders. And it is also at this fourth order that we begin to understand why our words become so expressionless when describing the so-called “right-brain” range of behaviors in the human CNS. It must be emphasized here that the moniker of ‘perceptual abstraction level’ should not be confused with the processes of gestalt abstraction alluded to in the Introduction. Although the two share many analogous factors, perception in an artificially intelligent agent is considered a relatively low-level process, and the dialog will soon outline how it forms a basic substrate to the much higher level of true gestalt abstraction.
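As a rough illustration only, the four orders can be sketched as a layered pipeline. Every name, function and threshold below is a hypothetical stand-in chosen for this sketch, not part of the Organon Sutra design itself:

```python
# Sketch of the four orders of functionality as a layered pipeline.
# All identifiers and thresholds here are illustrative assumptions.

from collections import Counter

def filter_level(samples, persistence_threshold=3):
    """First order: objectify the 'noise' that persists in raw samples."""
    counts = Counter(samples)
    return {value for value, n in counts.items() if n >= persistence_threshold}

def association_level(objects, background):
    """Second order: predicate each first-order object on its background."""
    return {obj: background for obj in objects}

def temporal_level(earlier, later):
    """Third order: identify and name change between two snapshots."""
    changes = []
    for obj in later:
        if obj not in earlier:
            changes.append(("appeared", obj))
    for obj in earlier:
        if obj not in later:
            changes.append(("vanished", obj))
    return changes

def abstraction_level(changes):
    """Fourth order: treat third-order structures as objects themselves,
    feeding them back through the first-order machinery."""
    return filter_level([name for name, _ in changes], persistence_threshold=2)
```

For example, `filter_level(["a", "a", "a", "b"])` objectifies only `"a"`, and `abstraction_level` re-applies that same objectification to the names of changes rather than to raw stimuli, which is the inversion the fourth order describes.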

These four procedural levels of functionality were originally conceived to represent the basic mechanisms for the perception processes in our artificial agent to build the anticipatory structures that have been introduced. In fact, we can generalize these four orders even more.

As we move through the successive layers of the perceptual “orders”, we are moving through successive characterizations of noise: from the mathematical definition of noise, to the information-theoretic depiction of noise, to the cybernetic narrative of noise.

At every level, perceptual learning is the objectification of noise as it presents itself in the current environment with which an artificial agent is interacting.


The successive layers of perceptual orders provide a design guideline for implementing the mechanisms by which our artificial agent would objectify noise in the interactive environment, and we can see how our words, acting in a capacity beyond social communication, serve to structure those processes which develop and deploy these so-called anticipatory structures. Now, although they are structured by our words, how do these “anticipatory schemata” come about?

By definition, an anticipatory schema is context dependent, and has relevance only to the environment in which a perception system is operating.

This means that an artificial agent, like a human individual, cannot begin to build any anticipatory schemata until it is exposed to the environment it will be perceiving. It also means that, from a design standpoint, anticipatory schemata cannot be engineered into an artificial agent, so we are still left with defining only those core, native behaviors of the perceptual system that an artificial agent can possess beforehand.
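This design constraint can be made concrete with a minimal sketch: the agent is engineered with only the core capacity to form schemata, while the schema store itself starts empty and is populated solely through exposure. The class and method names below are hypothetical illustrations:

```python
# Sketch of the constraint that schemata cannot be pre-engineered.
# PerceivingAgent and its methods are illustrative assumptions only.

class PerceivingAgent:
    def __init__(self):
        # Core, native behavior: the capacity to form schemata.
        # The schemata themselves start empty by design.
        self.schemata = {}

    def perceive(self, context, stimulus):
        """Each exposure either matches an existing schema or grows one."""
        expected = self.schemata.get(context)
        if expected is None:
            # First exposure in this context: everything is still "noise".
            self.schemata[context] = {stimulus}
            return "noise"
        if stimulus in expected:
            return "anticipated"
        expected.add(stimulus)
        return "novel"
```

A fresh agent classifies its very first perception in any context as “noise”, echoing the point made earlier that at the commencement of adaptation almost all perceptions are, by definition, noise; only repeated exposure turns them into anticipations.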

With the four basic perceptual orders defined, future anticipatory schemata are created by the agent as it progresses from objectified noise to knowledge. So what additional core, native behaviors are required for this perceptual apparatus to transform objectified noise into knowledge? And at what point can we say that we have designed a sufficient amount of “built-in” capability?

To illustrate the trade-off these two questions pose, consider an example: an abstract paradigm shift occurs when an infant moves from relating objects to its own spatial position, to relating the position of objects to an external point of reference. There is a very subtle difference between ‘location’ and ‘position’, a subtlety that cannot be taught but must be intuitively grasped.

And there are abstractions specific to the spatial positions of two objects relative to each other, and more generalized abstractions dealing with cause and effect. Should the machinery to enable these types of paradigms be engineered beforehand or should this be learned?

Again, we must further our inquiry into perceptual knowledge.


Copyright © 2019 All rights reserved