Putting aside any other philosophical considerations regarding the fundamental nature of knowledge, the preceding discussion has established our working definition for knowledge.
As we incorporate this working definition into the basic tenets driving the design of the Organon Sutra, we can put it into perspective and declare that, as a matter of course, an artificial agent cannot learn knowledge about things before it has accumulated knowledge of things. Specifically, the former cannot be understood until the latter is comprehended.
Now, at a prior point in the dialog, the discussion intimated that the divergence of functionality typical between human left and right hemispheres could provide clues for a model architecture of the Organon Sutra. It then detailed our working definition for the two fundamental types of knowledge, and provided evidence that the hippocampus structure in humans may be a neurological arbiter of that type of knowledge we call “knowledge by acquaintance” (acknowledging that our characterization of ‘neurological arbiter’ is fairly broad).
At this point, it would be tempting to go even further in our theorizing, and take the discussion of hemispheric asymmetry, along with the assertion of hippocampal channeling of one type of knowledge, and combine the two thoughts into a larger hypothesis: that one brain hemisphere is dominant in processing one type of knowledge, and the other hemisphere is dominant in processing the other.
Unfortunately, the very bit of evidence that has brought us to this point in the discourse is also the very observation that pokes a hole in that neat little hypothesis.
This dialog will pursue the supposition that the hippocampus in humans is indeed acting as an arbiter of that knowledge we are calling “knowledge by acquaintance”. But we cannot extend this assertion to a hypothesis that one brain hemisphere predominantly mediates one type of knowledge while the opposite hemisphere mediates the other, because of one salient point: each hemisphere of the human brain contains its own hippocampus, a separate left and right structure.
With this observation, and our foregoing assertions, we can only conclude that the mediation of that knowledge we are calling “knowledge by acquaintance” is occurring in both hemispheres.
However, much like the speculation on using supercomputers for emulating intelligence was not without merit, we should not be too quick to dismiss this hypothesis. But considering what we have so far, where do we want to go from here?
One important difference between knowledge by acquaintance, also referred to as Sense Knowledge, and knowledge by mediation, also referred to as Intellectual Knowledge, is this: the object of sense knowledge is singular or particular, whereas that of intellectual knowledge is characteristically universal.
An object is said to be particular in this context inasmuch as it is incommunicably limited to one individual object, and in fact, to that individual at a given place and a given time.
An object is said to be universal, on the other hand, inasmuch as it is applicable to many and is free from the restrictions of time and place. Thus, what we grasp intellectually when we know a nature, an essence, or a meaning differs significantly from that which we know in an act of sense knowledge.
The intelligible object is universal, communicable, and unchanging, whereas the object of the senses is singular, incommunicable, and in a state of flux.
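The particular/universal distinction lends itself to a small illustration in code. The sketch below is only a toy model under our own assumptions (the type names, fields, and the membership test are invented for illustration, not part of the preceding definitions): a particular is bound to one individual at one place and time, while a universal carries no place or time and applies to many particulars.

```python
from dataclasses import dataclass

# A particular: incommunicably tied to one individual object,
# at a given place and a given time.
@dataclass(frozen=True)
class SenseDatum:
    individual_id: str   # this one individual, and no other
    place: str           # restricted to a given place...
    time: float          # ...at a given time

# A universal: applicable to many, free of the restrictions
# of time and place (note it has no place or time fields at all).
@dataclass(frozen=True)
class UniversalConcept:
    name: str

    def applies_to(self, datum: SenseDatum) -> bool:
        # Toy membership test: the universal abstracts away the
        # datum's place and time entirely, keeping only its kind.
        return datum.individual_id.startswith(self.name)

dog = UniversalConcept("dog")
fido_here_now = SenseDatum("dog-fido", "backyard", 1561939200.0)
print(dog.applies_to(fido_here_now))  # True: one universal, many particulars
```

One universal thus remains unchanging across any number of sense data, each of which is locked to its own moment and location.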
And as we delve even deeper into the intrinsic nature of knowledge, it becomes apparent that the knowledge acquisition process begins with the very underlying perception processes that an artificial agent uses to discern its environment.
(This dialog will introduce the subject of artificial perception by discussing human vision, but the discussion of any perceptual system for an artificial agent is not limited to any one modality; indeed, it will comprise multiple modalities, many of which will differ in their transduction physics from biological senses. For example, there will be instantiations of specialized intelligent agents, such as weather comprehension agents, whose perceptual modalities and transduction physics might be tuned toward barometric pressure, wind speed and direction, and temperature and dew-point analysis, in addition to the traditional visual, auditory and haptic “senses”. And certainly for artificially intelligent agents whose “real world” is demarcated entirely within the virtuality of the Internet, perceptions of physical modalities will not necessarily be a part of their design, and “virtual” perception will be given new definitions.)
In his insightful book Cognition and Reality, Ulric Neisser writes that contemporary theories of visual perception hold that the retinal “image is not looked at but processed. Certain specific mechanisms in the visual system, called detectors, are assumed to initiate neural messages in response to certain equally specific features of the image. Information about these features is then passed on to higher stages of the brain. At the higher stages it is collated and combined with previously stored information in a series of processes that eventually results in the perceptual experience.”
Indeed, neural systems in biological organisms that respond selectively to orientation, curves, lines, colors, and movement have been identified. But Neisser goes on to ask:
“It appears, however, that other aspects of perception are more difficult for such models to explain. Particular problems arise in connection with selection, unit formation, meaning, coherence, veridicality, and perceptual development. How does it happen that different people notice different aspects of the same real situation? Why are some portions of the retinal input treated as belonging to the same object, others as independent? Why do we often seem to perceive the meanings of events rather than their detectable surface features? How are successive glances of the same scene “integrated”? What happens in perceptual learning?”
And he goes on to ask other more general questions:
“How do perceivers differ from one another? What happens when we choose what to see or how do we learn to see better? How are illusions and error possible if perception is simply the pickup of information that is already available and specific?”
His questions tend to suggest that students of perception should develop new and richer descriptions of stimulus information, rather than ever-subtler hypotheses about mental mechanisms. To which he adds:
“The difference between a skilled and unskilled perceiver is not that the former adds anything to the stimulus but that he is able to gain more information from it; he detects features and higher-order structure to which the naïve viewer is not sensitive. A newborn infant ignores information that older children and adults acquire effortlessly.”
At this point, Neisser refines his definition of the ‘perceptual cycle’:
“The cognitive structures crucial for vision are the anticipatory schemata that prepare the perceiver to accept certain kinds of information rather than others and thus control the activity of looking.”
There are two things that we can take away from this. The first is that perception is an active skill, not just a passive behavior. The second is the key phrase in his refined definition: ‘anticipatory schemata’.
Certainly, we can see only what we know how to look for, but more importantly, perception requires a tuning mechanism to direct the very process that is perceiving a scene, based on anticipatory structures triggered by prior perceptions. Perception is not just the raw processing of sensory data; it is the processing of the right kind of sensory data. Learning what is the right kind of data, and what data can be safely ignored in the perceptual field, is the first step in building these anticipatory structures.
Perception is indeed a constructive process, but what is constructed is not a mental image appearing in consciousness, where it is admired by an inner man. At each moment the perceiver is constructing anticipations of certain kinds of information, anticipations that enable him to accept that information as it becomes available. Often, he must actively explore the optic array to make the anticipated information available, by moving his eyes or his head or his body. These explorations are directed by the anticipatory schemata. The outcomes of the explorations, the information newly picked up, modify the original schemata, leading to subsequently different anticipatory directions.
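The cycle just described (schema directs exploration, exploration picks up information, information modifies the schema) has the shape of a feedback loop, and can be sketched in code. This is only a minimal illustration under our own assumptions: the class name, the weighting policy, and the toy environment are invented here, not taken from Neisser.

```python
import random

class AnticipatorySchema:
    """Toy sketch of an anticipatory schema: a pattern FOR action
    (it directs where to look) that is also rebuilt by what is found."""

    def __init__(self, features):
        # Equal anticipation at first: early pickup is crude and unselective.
        self.anticipation = {f: 1.0 for f in features}

    def direct_exploration(self):
        # Look preferentially where information is anticipated
        # (weighted random choice over the current anticipations).
        total = sum(self.anticipation.values())
        r = random.uniform(0, total)
        for feature, weight in self.anticipation.items():
            r -= weight
            if r <= 0:
                return feature
        return feature

    def assimilate(self, feature, was_informative):
        # The information picked up modifies the schema, changing
        # what will be anticipated on the next glance.
        self.anticipation[feature] *= 1.5 if was_informative else 0.7


def perceptual_cycle(schema, environment, glances=50):
    for _ in range(glances):
        feature = schema.direct_exploration()   # schema directs looking
        found = environment(feature)            # exploration picks up info
        schema.assimilate(feature, found)       # pickup modifies the schema
    return schema

# Toy environment: only "motion" and "edges" carry information here.
random.seed(0)  # fixed seed so the run is repeatable
env = lambda feature: feature in {"motion", "edges"}
s = perceptual_cycle(AnticipatorySchema(["motion", "edges", "glare"]), env)
print(s.anticipation)
```

After enough glances, anticipation concentrates on the informative features and decays away from the uninformative one, which is the loop's analogue of perceptual learning: the schema that exists at any moment is the product of its own history of explorations.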
About the “schemata” that Neisser defines, he begins by comparing his definition of schemata to the concept of formats (as in computer data formats), and plans, but then he goes on to say:
“The analogy between schemata and that of formats and plans is not complete. Real formats and plans incorporate a sharp distinction between form and content, but this is not true of schemata. The information that fills in the format at one moment in the cyclic process becomes a part of the format in the next, determining how further information is accepted. The schemata is not only the plan but also the executor of the plan. It is a pattern of action as well as a pattern for action.”
In addition to developing perception directives, the anticipatory schemata must also assure the continuity of perception over time. Because schemata are anticipations, they are the medium by which the past affects the future, and information already acquired determines what will be picked up next. In addition, however, some schemata are temporal in their very nature. When an object moves, for example, continuous and complex changes take place in the optic array. The schemata must be attuned to the overall progression of moving optical events. They must anticipate temporal patterns as well as spatial ones, in addition to serving as a continuously interactive process.
This does not mean that we cannot pick up unanticipated information. Normally, however, the function of an unexpected stimulus is to initiate the cycle of perception itself; there is usually enough continuing information to support the cycle once it has begun. Even when there is not, the very act of searching or exploring for such information embeds what was picked up into some degree of context.
While defining the perceptual cycle, Neisser adds more detail:
“The perceptual machinery does not come into existence all at once. Schemata develop with experience. Information pickup is crude and inefficient at first, as are the perceptual explorations by which the cycle is continued. Only through ‘perceptual learning’ do we become able to perceive progressively more subtle aspects of the environment. The schemata that exist at any given moment are the product of a particular history as well as of the ongoing cycle itself.”
In a final note regarding Neisser's discourse on the human perceptual process, he stresses that students of perception should not be drawn into the seductive analogy between eye and camera, which has too often suggested that perception is a simple transfer of light stimulus from the eye to some form of neurological “photo-negative”.
The perceptual process is an intentional endeavor, in which anticipatory structures are built up that actively condition the visual and auditory neural processes to accept specific features of the sensory environment after guided explorations. It is also a cyclic process, in that the anticipatory structures activated for a particular perception cycle are driven by perceptions gleaned through the anticipatory structures of prior cycles.
In short, perception is not based on what you see, but on what you think you will see.
These anticipations are perceptual skills that are formed over time, and establish the foundation for our knowledge acquisition process. And yet, this portrayal creates another paradox.
Before we can even start to acquire the previously discussed “sense knowledge”, or knowledge of things we are intimately acquainted with, as William James relates it, we must develop these fundamental anticipation structures to know how to look for them in the first place. How do we develop one before the other?
Copyright © 2019 All rights reserved