Panlingua, by Chaumont Devin, May 12, 1998.

Chapter 10, Robotics.

What does Panlingua mean to robotics? A very great deal. With a knowledge of Panlingua it will be possible for people building robotic devices to think in important new ways. Why? Because once Panlingua is known it is possible to integrate all the various parts of a robot so that they will harmonize with each other in modular ways. What Panlingua offers is a new kind of target or goal for robot subsystem development.

Let us study, for example, the problem of electronic image pattern recognition. What would be the output of such a system without Panlingua? Probably something limited in scope, esoteric, and difficult to use except with one very specific system. But what if an image processing system were being designed to interface with a system using Panlingua? The goal of the image processing system would then be to set up reports in Panlingua. Such reports might go something like:

I see a car at center moving right at high speed.
I see another car passing behind it moving left.

Etc.
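To make this concrete, such a report could be encoded as a small structured record. This is only a sketch: the chapter does not give Panlingua's actual node-and-link format, so the field names below are illustrative assumptions, not the real representation.

```python
# Hypothetical flat encoding of a Panlingua perception report.
# The real Panlingua structure is not specified here, so these
# field names (action, object, location, ...) are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class PerceptionReport:
    action: str    # e.g. "see"
    object: str    # e.g. "car"
    location: str  # e.g. "center"
    motion: str    # e.g. "moving-right"
    speed: str     # e.g. "high"

# "I see a car at center moving right at high speed."
report = PerceptionReport("see", "car", "center", "moving-right", "high")
```

The point of such a record is only that every subsystem reads the same representation, whatever its internal shape turns out to be.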

Of course such electronic perceptions may be impossible at this stage of pattern recognition and image processing development, but they serve to illustrate what I want to say. At least at this time, no image processing system can accurately identify everything it sees. But suppose the developer manages to create an image processor that can return information about only two kinds of things. If even this seemingly trivial amount of information can be represented in Panlingua, it will be of great use. In the first place, in an integrated system that included all the major linguistic components, the system would immediately be able to say what it saw in the surface languages it used. For example, "I see a car," etc. Even this small achievement would be of immediate and possibly incalculable value to a blind man. But besides being able to say what it saw in English, the system would also have this information available for immediate use by any other subsystem. Suppose, for example, that the system were designed to drive cars. The Panlingua representation would immediately be available to be passed to a "car driving" module for use. This car-driving module would then keep analyzing the incoming reports from the image processor and telling its subsystems what to do based on its driving decisions, also in Panlingua. Each such subsystem would only be required to understand the subset of Panlingua for which it was designed. For example, the throttle mechanism might only understand such things as "faster," "slower," "full throttle," and "cut power." These Panlingua representations would be all that this particular subsystem would ever need to know. But because they were coded in Panlingua, a person testing the system would still be able to analyze or listen to each command in plain English as it was produced, simply by asking the system to monitor the signals sent to this subsystem.
In fact, no matter what the robotic subsystem, this would be the case. This would constitute a vast improvement over the systems and subsystems in use today, each of them using proprietary protocols that may have to be studied for days before they can be used.
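The throttle example above can be sketched in a few lines: a subsystem that accepts only its own small command vocabulary, plus a monitor switch that lets a tester hear each command in plain English. The vocabulary tokens come from the text; the monitor's English glosses are my own stand-ins.

```python
# A subsystem that understands only its subset of Panlingua
# commands. Vocabulary follows the throttle example in the text.
THROTTLE_VOCAB = {"faster", "slower", "full throttle", "cut power"}

class ThrottleSubsystem:
    def __init__(self, monitor=False):
        self.monitor = monitor  # tester's English monitor switch
        self.log = []

    def handle(self, command):
        if command not in THROTTLE_VOCAB:
            raise ValueError(f"throttle does not understand: {command}")
        self.log.append(command)
        if self.monitor:
            # A tester can listen to each command in plain English.
            print(f"Throttle received the command: {command}.")

throttle = ThrottleSubsystem(monitor=True)
throttle.handle("faster")
throttle.handle("cut power")
```

Any command outside the subsystem's subset is simply rejected; everything it does accept is already in a form the rest of the system can render as English.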

Many marvelous things would be possible with systems held to an interface standard that used only Panlingua. For example, you might go down to your local appliance store and buy an electric range, bring it home, plug it into your computer, assign it the name "Charley," and say, "Charley, I want half heat on your front left burner for fifteen minutes." Or you may have told Charley to heat his oven to 350 degrees, but you are not quite sure whether it is time to put in the cake, so you ask, "Charley, how hot is your oven now?" Or you might call your vacuum cleaner "Mr. Clean" and say to your computer, which is linked to Mr. Clean by a Panlingua interface over infrared, "Mr. Clean, go clean up that mess in the corner. No, not there. Turn left. Now go straight." Etc.

The point is that once everything is running on Panlingua and can interpret Panlingua in the performance of its particular tasks, then any new device can be voice-controlled just by plugging it into some master system capable of interpreting speech, converting it to text, parsing text, and determining what to do with the results. And this is true for a self-contained robot of the traditional anthropoid variety with two arms and two legs as well as for a distributed system capable of running all the new appliances in a 21st-century home.
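A minimal sketch of such a master system, with speech recognition and parsing stubbed away, is just a registry that routes commands to devices by their assigned names. The names and replies below are taken from or modeled on the appliance examples above; nothing else is specified by the text.

```python
# Sketch of a master system routing parsed commands to named
# appliances. Speech and parsing are stubbed out; only the
# plug-in-and-name registration scheme is the point.
class MasterSystem:
    def __init__(self):
        self.devices = {}

    def register(self, name, handler):
        # Plugging in a new device and assigning it a name is all
        # that is needed to make it voice-controllable.
        self.devices[name] = handler

    def dispatch(self, name, command):
        if name not in self.devices:
            return f"I do not know any device called {name}."
        return self.devices[name](command)

master = MasterSystem()
master.register("Charley", lambda cmd: f"Charley: executing '{cmd}'")
reply = master.dispatch("Charley", "half heat on front left burner")
```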

Incredibly, nothing much more is required for the command and control unit of such a system than just the same old Panlingua-based linguistic apparatus I have already described. Here are its components again:

English-to-Panlingua parser
Panlingua command processor
Panlingua-to-English text generator
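The three components above chain into a single pipeline: English in, Panlingua in the middle, English out. Each stage below is a trivial stub (real parsing and generation are far beyond a sketch); only the shape of the data flow is meant seriously, and the dictionary fields are invented placeholders for Panlingua structures.

```python
# The three linguistic components chained as one pipeline.
# Every stage is a stub; only the data flow is the point.
def parse_english(text):
    # English-to-Panlingua parser (placeholder structure).
    return {"type": "imperative", "tokens": text.lower().split()}

def process_command(panlingua):
    # Panlingua command processor: acts, then reports back.
    return {"type": "report", "tokens": ["done"] + panlingua["tokens"]}

def generate_english(panlingua):
    # Panlingua-to-English text generator.
    return " ".join(panlingua["tokens"]).capitalize() + "."

reply = generate_english(process_command(parse_english("Wash your hands")))
```

Because every stage speaks Panlingua on at least one side, any stage can be replaced independently, which is exactly the modularity argued for above.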

And, in finer detail, here are the shared components that make these larger systems work:

lexicon
ontology
Panlingua template general reference
Panlingua idiom reference
Panlingua discourse log
Panlingua cyclopedic reference
Panlingua scenario reference
Panlingua command reference

Much work is still needed before we will know how to make a robot perform all the tasks regarded by humans as "thinking." What appears clear is that all these thought processes will in some way involve one or another of the data structures listed above. Notice that the lexicon and the ontology are the only two such references not represented in Panlingua. The implication is clear: most thought processes will be carried out using Panlingua.

Of particular interest to robot builders will be the Panlingua command reference. Suppose you told your faithful robot, who had just finished digging in your garden, "Go wash your hands." From this input the parsing apparatus would set up two thought representations in Panlingua. The first would be just "Go." The second would be "Wash your hands." Because sentence type is encoded in Panlingua, the robot would perceive that both sentences were imperative, so instead of just standing there with an unreadable expression while it checked to see whether it already had this information stored somewhere before storing it, it would search its Panlingua command reference for a match to the command "Go." When the command reference found the match, it would return an event code to the command processor of the robot, and guided by this event code the robot's command processor would pass the "go" command to its mobility subsystem, and the robot would start moving away. Then it would process "wash your hands" in the same way and pass this second command to a subsystem handling hand operations. This subsystem would in turn pass the command to a special hand-washing subsystem, which would know that this operation requires water and a sink, and which would pass a "find the sink" command to the mobility system, which would send the robot in the direction of a sink, etc., etc.

Thus simply by hanging an event code onto a Panlingua representation to be matched in a command reference, any action the robot can perform can be selected by a simple Panlingua matching operation.
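That matching operation can be sketched directly. The event codes, route names, and command spellings below are all invented for illustration; the text fixes only the scheme (command form, matched in a reference, yields an event code, which selects a subsystem).

```python
# Sketch of the command reference: a Panlingua command form is
# matched, an event code comes back, and the event code routes
# the command to a subsystem. All codes and names are invented.
COMMAND_REFERENCE = {
    "go": 101,               # -> mobility subsystem
    "wash your hands": 202,  # -> hand-handling subsystem
}

EVENT_ROUTES = {
    101: "mobility",
    202: "hands",
}

def process(command):
    event_code = COMMAND_REFERENCE.get(command)
    if event_code is None:
        return (None, "no match in command reference")
    return (event_code, EVENT_ROUTES[event_code])
```

For example, `process("go")` returns `(101, "mobility")`, and an unmatched command falls through with no event code at all.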

Or what if you asked your robot a question, like, "What is the chemical composition of fiberglass resin?" Once again, sentence type is represented in Panlingua (as verb synlink type), so your faithful robot would know that you had asked it a question and start looking for the answer. If you had asked it whether roses were red, it would have realized that the assertion, if it existed in its memory, would be a binary one, and it would look for it in the ontology. But because this question involves a complex structure, it searches the cyclopedic reference, from which it retrieves the answer, uses it to generate text, converts this to speech, and tells you in plain English.
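The routing decision itself is simple to sketch: a binary assertion goes to the ontology, anything structurally more complex goes to the cyclopedic reference. Both stores below are toy stand-ins with invented contents, and "structure" here is crudely approximated by the Python type of the query.

```python
# Sketch of routing a question by its structure. A binary
# assertion ("Are roses red?") is checked in the ontology;
# a complex question goes to the cyclopedic reference.
ONTOLOGY = {("rose", "red"): True}  # binary assertions only
CYCLOPEDIA = {
    "chemical composition of fiberglass resin":
        "(stored cyclopedic answer)",
}

def answer(question):
    if isinstance(question, tuple):  # crude stand-in for "binary"
        return ONTOLOGY.get(question, "I do not know.")
    return CYCLOPEDIA.get(question, "I do not know.")
```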

Many questions remain unanswered about the general problems of robot design. Some of these are comparatively straightforward, for example: How should a robot determine what should be transferred from short-term to long-term memory, when, and how? In a Panlingua-based implementation, of course, this would involve moving knowledge from the discourse log to the ontology, the general template reference, the cyclopedic reference, and other Panlingua references in the system. More subtle are questions about how robots learn. I have explained one or two of these processes in previous chapters, but there are doubtless many more. As far as I can tell, robot learning will involve the creation, destruction, and migration of various links among various nodes. And then there are those most difficult questions about inferences drawn from long-acquired facts, and the synthesis of new ideas. I am confident that through a knowledge of Panlingua these things are now coming within reach, and this fact makes our time a very exciting time to be alive.

But one of the most important problems that must be overcome before reliable distributed Panlingua-based systems can be developed is that of the ontology. Recall that ontologies differ from language to language, and even from individual to individual. Before, say, English can be used to drive any of the distributed systems I have described above, it will be necessary that compatibility be ensured by setting up a standard English ontology to be used by everyone. This will be possible because although the semlink patterns of ontologies may vary, the set of semnods remains more or less the same. Almost everywhere English is spoken, therefore, a rose is a rose is a rose, and a dog is a dog is a dog, etc. What needs to be done is to agree upon the specific semnod identifier (for example 15329) to be used for the semnod linked to "rose" (the flower), for the semnod linked to "dog" (the animal), etc. Otherwise when you tell your faithful robot to go get a screwdriver he may return with a tree! This would not be difficult to do, but knowing my American countrymen, I suppose many thousands of mutually incompatible ontologies will be developed before this happens. Hopefully some automated means will be devised to clean up the mess and integrate them all in the end, so that, say, in another 100 years or so we will have worked out a common ontology for English, and modular systems will be able to work as I have described. This should be a simple thing to do, really, since these are only identifying integers, and no special sequence or order of any kind is required. The only major difficulty will be getting any two red-blooded Americans to agree upon which of the billions of integer values available to use for things like "dog" and "rose."
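Such a standard ontology reduces, at the interface level, to a shared table of bare integers. The sketch below uses 15329 for "rose" because that is the example identifier given in the text; the other identifiers are made up, and no ordering of any kind is assumed.

```python
# Sketch of a shared semnod registry: agreed integers for each
# concept, no ordering required. 15329 is the text's example
# for "rose"; the other identifiers are invented.
STANDARD_SEMNODS = {
    "rose (flower)": 15329,       # identifier from the text
    "dog (animal)": 15330,        # invented
    "screwdriver (tool)": 15331,  # invented
}

def semnod(concept):
    # Two compatible systems must return the same integer here,
    # or a request for a screwdriver may fetch a tree.
    return STANDARD_SEMNODS[concept]
```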

The possibilities I have described may all sound quite interesting, but by far the most important of all is the potential for making machines that really think. We already know a few of the processes involved, but there remain many more of which we know nothing at all. The exciting thing is that Panlingua theory may give us a key to unlock the doors. As everyone knows, computer processors are getting faster and faster every year. What this means to robotics is that if we can someday learn to make computers think, then it may also be possible to make them think very fast. To build such a machine would not seem impossible for a creature who can barely run 15 mph but can fly aircraft at many times the speed of sound. It might be possible, say, to compress a thousand man-hours of human thought into a mere ten minutes or so. Thus it might be possible to create an entity capable of transcending the limits imposed by a lifespan of 70-80 years in order to bring long-term thought processes to maturity. And not only bring them to maturity, but do so almost immediately. Albert Einstein spent the last decades of his life searching for a unified field theory in vain. Before he had time to discover this secret his life was cut short. A machine with the same intellectual powers might hit upon it in less time than it takes to boil an egg!

Extrapolate the curve of technological development and it should become clear that the things I am saying are not mere fantasies but the stuff of our future.