Paths to AGI
- Written by Valerio Targon
- Category: Path to AGI
This article is inspired by my AGI 2020 paper Experience-specific AGI Paradigms
How should we break down the route to Artificial General Intelligence, if we are going to develop AGI in the near future? Every researcher relies on certain assumptions when formulating a path from Narrow AI to AGI.
Some researchers use human development as a metaphor: going from infant-AGI, to toddler-AGI, to schoolchild-AGI, to adult-AGI. Others seem to refer to evolution: going from ant-level AGI, to bird-level AGI, to chimp-level AGI, to human-level AGI. However, why should an AGI be subject to the constraints of human brain maturation? Why should an AGI be subject to constraints of brain size and composition? It seems to me that these researchers are rather assuming that the first AGI will be limited in its capabilities, just as a human whose brain is not yet mature, or an animal with a simpler brain, cannot reach certain mental abilities. With the developmental metaphor, it is implied that the first AGI may have some language skills but lack some forms of reasoning. With the evolutionary metaphor, it is implied that the first AGI may have some visuospatial ability but no problem solving, or may be capable of problem solving but lack language.
Recently, Ben Goertzel suggested in From Narrow AI to AGI via Narrow AGI? a path of developing Narrow AGIs on top of Narrow AIs, and AGI on top of Narrow AGIs. He describes a Narrow AGI as "biased in capability toward some particular domain" of science; the key capability of a Narrow AGI is that of combining application-specific narrow AIs, creating and training new such tools for its own purposes as needed. According to Goertzel, each Narrow AGI will exceed human level in one or two categories of intelligence - which are, according to Gardner’s theory of Multiple Intelligences: linguistic, logical-mathematical, musical, bodily-kinesthetic, spatial, interpersonal, intrapersonal, naturalist and existential - and more generality could be reached by cross-connecting different Narrow AGIs.
A truly general AI should only be restricted by the domain of operation captured by its design. An AGI designed for computer vision should be able to solve all or most problems in vision, not just those in a particular science such as biomedical research. An AGI designed for natural language processing should be able to solve all or most problems in any science, not just, for example, those in economics and finance. On the other hand, an AGI designed for autonomous control would probably be highly specialised, e.g. designed for car driving or for home-service robotics, and we should not expect it to solve problems outside its specialisation. Instead of considering combinations of categories of intelligence, I would rather focus on the distinction between AGIs that can understand language and AGIs that cannot.
In Experience-specific AGI Paradigms I suggest a path of developing “experience-specific” AGIs, each of them general-purpose but specific to a given domain of experience, definable as a class of input/output or, in some cases, input/action. I describe the first three types of "experience-specific" AGI that can be developed, and three development paths to expand the capabilities of these first AGIs.
- VIS-AGI, based on visual experience --> passive LINK-AGI, based on passive linkage experience. VIS-AGI, learning from images, videos and live cameras, will develop intuitive physics, make predictions potentially involving human behavior, detect anomalies, and produce simulations and virtual reality. VIS-AGI could be expanded into passive LINK-AGI thanks to simultaneous embedding with language, in the form of image tagging and video captions.
- SEMO-AGI, based on sensorimotor experience, + VIS-AGI, based on visual experience --> active LINK-AGI, based on active linkage experience. SEMO-AGI will develop purposeful behavior and navigation for autonomous robots or cars, learning from logs of human operation of these robots or cars. It could be merged with VIS-AGI and expanded into active LINK-AGI thanks to simultaneous embedding with language.
- SYM-AGI, based on symbolic experience --> passive and active LINK-AGI, based on linkage experience. SYM-AGI will learn, thanks to electronic texts (digitalised books, webpages, source code) and i/o interfaces, to interact successfully with humans through language (any language) and other games, to develop science through mathematics, and to self-improve through machine programming, ultimately resulting in LINK-AGI.
The first and third development paths described here may seem counter-intuitive, because passive vision and symbolic experience do not exist in nature; but an AGI can have types of experience that no agent in nature can have.
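To make the notion of a domain of experience as a class of input/output (or input/action) slightly more concrete, here is a minimal Python sketch. The ExperienceDomain class, its field names and the expand function are my own illustrative stand-ins, not an interface from the paper.

```python
# Hypothetical sketch: experience domains as classes of input/output (or input/action).
# The names mirror the article's VIS-, SEMO-, SYM- and LINK-AGI; the interface is illustrative only.
from dataclasses import dataclass
from typing import List


@dataclass
class ExperienceDomain:
    name: str           # e.g. "visual experience"
    inputs: List[str]   # classes of input the AGI learns from
    outputs: List[str]  # classes of output (or action) it produces


VIS = ExperienceDomain(
    name="visual experience",
    inputs=["images", "videos", "live cameras"],
    outputs=["predictions", "anomaly reports", "simulations", "virtual reality"],
)

SEMO = ExperienceDomain(
    name="sensorimotor experience",
    inputs=["logs of human operation of robots or cars"],
    outputs=["purposeful behavior", "navigation commands"],
)

SYM = ExperienceDomain(
    name="symbolic experience",
    inputs=["electronic texts", "i/o interfaces"],
    outputs=["language interactions", "mathematics", "programs"],
)


def expand(base: ExperienceDomain, language_pairs: List[str]) -> ExperienceDomain:
    """Expansion into a LINK-AGI domain via simultaneous embedding with language
    (e.g. image tagging, video captions)."""
    return ExperienceDomain(
        name=f"linkage experience ({base.name} + language)",
        inputs=base.inputs + language_pairs,
        outputs=base.outputs + ["language descriptions"],
    )


passive_link = expand(VIS, ["image tags", "video captions"])
print(passive_link.name)
```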
The "semiosis reply" to the Chinese Room Argument
- Written by Valerio Targon
- Category: SCA
This article is inspired by my BICA 2018 paper Toward Semiotic Artificial Intelligence
Nobody has so far proposed the following reply to the Chinese Room Argument against the claim that a program can be constitutive of understanding (a human who does not speak Chinese cannot understand Chinese merely by having run a given program, even if this program enables the human to have input/output interactions in Chinese).
My reply goes as follows: a program, run by a human who does not speak Chinese, may indeed teach that human Chinese. Humans learn Chinese all the time; yet it is uncommon for them to learn Chinese by running a program. Even if we are not aware of such a program (no existing program satisfies this requirement), we cannot a priori exclude its existence.
Before stating my reply, let me first steelman the Chinese Room Argument. If the human in the thought experiment of the Chinese Room is Searle, he may not know Chinese, but he knows a lot of things about Chinese: that it has ideograms and punctuation, which he may recognize; that it is a human language, which has a grammar; that it has the same expressive power as a language he knows, e.g. English; that it is very likely to have a symbol for “man” and a symbol for “earth”, and so on. Searle, unlike a computer processor, holds a lot of a priori knowledge about Chinese. He may be able to understand a lot of Chinese just because of this a priori knowledge.
Let us instead require the human in the Chinese Room to be someone with absolutely no experience of written language, e.g. an Aboriginal person living in isolation. Let us suppose that Chinese appears so remote to the Aboriginal that she would never link it to humans (to the way she communicates) and would always regard it as something alien. She would never use knowledge of her world, even if somebody told her to run a given program to manipulate Chinese symbols. In this respect, she would be exactly like the computer processor and have no prior linguistic knowledge. The Chinese Room Argument is then reformulated: can a program run by the Aboriginal teach her Chinese (or, for that matter, any other language)?
I am going to reply that yes, a program run by the Aboriginal can teach her a language. I call this reply the “semiosis reply”.
Semiosis is the action performed by signs. A sign, during semiosis, gets interpreted and related to an object. Signs can be symbols of Chinese text or of English text that a human may recognize. An object is anything available in the environment that may be related to a sign. It has been suggested that artificial systems can also perform (simulated) semiosis [Gomes et al., 2003]. Moreover, it has been suggested that objects can become available not only from sensorimotor experience, but also from the symbolic experience of an artificial system [Wang, 2005]. A sign recognizable by a machine can be related to a position in an input stream as perceived by the machine. For example, the symbol "z" can be interpreted as standing for something much less frequent in English text than what the symbol "e" stands for. Semiosis is an iterative process in which an interpretant can in turn become a sign to be interpreted (for example, the symbol "a" can get interpreted as a letter, as a vowel, as a word, as an article, etc.). At any given time the machine may select as potential signs anything available to it, including previous interpretants, such as paradigms and any representation it has created.
I suggest that the machine should also interpret its internal functions and structures through semiosis. These comprise "computation primitives", including conditionals, application, continuations and sequence formation, but also high-level functions such as read/write. The meaning that the machine can give to the symbols it experiences as input then becomes increasingly complex. Such meaning is not given by a human interpreter (parasitic meaning); it is rather intrinsic to the machine. When a human executes the program on behalf of the machine, he or she arrives at the same understanding, at the same meaning, i.e. simulating semiosis ultimately amounts to performing semiosis, and the Aboriginal can actually learn from the program. (Note how existing artificial neural networks, including deep learning for natural language processing, are ungrounded and devoid of meaning. A human, even executing the training phase of an artificial neural network, cannot arrive at any understanding. This is because the artificial neural network, despite its evocative name, at no level simulates a human neural network. On the contrary, semiotic artificial intelligence, despite having no representation of neurons, simulates the semiosis occurring in human brains.)
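As a rough illustration of this iterative character of semiosis (and not of any mechanism proposed in the paper), the following Python sketch derives interpretants from frequencies in a toy input stream and then reuses those interpretants as new signs. The separator heuristic and the sample text are my own assumptions.

```python
# Illustrative sketch only (not the author's algorithm): an interpretant is derived from
# the place a sign occupies in the input stream -- here, its relative frequency -- and can
# itself become a sign at the next iteration of semiosis.
from collections import Counter

text = "once upon a time there was a piece of wood"  # toy stand-in for a real input stream

# Iteration 1: signs are single characters; their interpretants are frequency ranks.
char_counts = Counter(text)
char_rank = {ch: rank for rank, (ch, _) in enumerate(char_counts.most_common())}

# Iteration 2: a previous interpretant (the most frequent character) becomes a sign again
# and is now interpreted as a candidate separator.
separator = char_counts.most_common(1)[0][0]
segments = [seg for seg in text.split(separator) if seg]

# Iteration 3: the resulting segments are signs in turn; their interpretants are again ranks.
segment_counts = Counter(segments)

print(char_rank)                      # frequent symbols like 'e' rank high; rare ones rank low
print(segment_counts.most_common(3))  # recurring segments emerge as higher-level signs
```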
Let me tell you how an Aboriginal, called SCA, could learn English just by running a program. Let us suppose that SCA is given as input the following text in English, the book "The Adventures of Pinocchio" (represented as a sequence of characters, with spaces and new lines replaced by special characters):
THE ADVENTURES OF PINOCCHIO§CHAPTER 1§How it happened that Mastro Cherry, carpenter, found a piece of wood that wept and laughed like a child.§Centuries ago there lived--§"A king!" my little readers will say immediately.§No, children, you are mistaken. Once upon a time there was a piece of wood... |
This input contains 211,627 characters, which are all incomprehensible symbols for SCA. (It is a very small corpus compared to those used to train artificial neural networks.)
Let me tell you how SCA learns, through only seven reflection actions and only three iterations of a semiotic algorithm, to output something very similar to the following:
(It is suggested that the first thing SCA could output is “I said”, while more processing would be needed to actually have her output “I write”. Yes, SCA prefers writing with a stick on the sand!)
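Purely as a guess at what an early reflection step might look like (the seven reflection actions themselves are not reproduced here), the following sketch counts which segments of the input most often follow a closing quotation mark, since reported speech in the book pairs quotation marks with speech-related words such as "said". The file name pinocchio.txt and the preprocessing are hypothetical.

```python
# Hypothetical sketch (not the article's reflection actions): look at which segments most
# often appear right after a quotation, e.g. '"A king!" my little readers will say'.
from collections import Counter

with open("pinocchio.txt", encoding="utf-8") as f:   # hypothetical local copy of the input
    stream = f.read()

words = stream.replace("§", " ").split()             # '§' stands in for the newline marker

followers = Counter(
    words[i + 1]
    for i, w in enumerate(words[:-1])
    if w.endswith('"')                                # segment closing a quotation
)
print(followers.most_common(5))                       # speech-related segments may rank high
```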
Introducing SCA
- Written by Valerio Targon
- Category: SCA
This article is inspired by my 2016 Cognitive Computation article Learning the Semantics of Notational Systems with a Semiotic Cognitive Automaton
Let me introduce you to SCA. SCA is an Aboriginal cryptographer living in the Australian bush. She has never learned to read or to do arithmetic.
One day, SCA finds some sheets of paper. On the first sheet, one could read:
383+386=769;277+415=692;293+335=628;386+492=878;149+421=570;362+27=389;190+59=249;263+426=689;40+426=466;172+236=408;211+368=579;334-193=141;439-334=105;421-159=262;485-457=28;354-261=93;472-262=210;216-41=175;352-350=2;482-162=320;399-217=182;368x42=15456;247x0=0;155x462=71610;436x131=57116;71x217=15407;458x138=63204;476x187=89012;17x434=7378;199x140=27860;270x72=19440; |
SCA finds 50 sheets of mathematical sentences like this one and gets very excited. What could she learn from them? She does not know what digits are; to her they are merely various incomprehensible symbols. How could she possibly tell that the symbol 8 on the sheet represents the number eight?
Let me tell you how SCA learns, in only four steps, the decimal system and how to do arithmetic by applying a semiotic algorithm. She follows the algorithm using a stick to draw lines on the sand and piling together stones to keep count.
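For a sense of how values could in principle be recovered from such sheets, here is a brute-force Python stand-in. It is not the four-step semiotic algorithm of the article: unlike SCA, it already presupposes the decimal place-value hypothesis and simply searches for a symbol-to-value assignment that makes the addition sentences true.

```python
# Illustrative stand-in only (not the article's four-step semiotic algorithm): treat the
# ten digit symbols as unknowns and search for a value assignment under which the
# addition sentences on the sheet come out true. Place-value notation is assumed.
from itertools import permutations

sheet = "383+386=769;277+415=692;293+335=628;386+492=878;149+421=570"
equations = [s for s in sheet.split(";") if "+" in s]

# The distinct symbols appearing in the sentences, apart from the operators.
symbols = sorted({ch for eq in equations for ch in eq if ch not in "+="})

def value(token, mapping):
    """Read a token as a base-10 numeral under a symbol-to-value mapping."""
    n = 0
    for ch in token:
        n = n * 10 + mapping[ch]
    return n

def consistent(mapping):
    """Check whether every addition sentence holds under the mapping."""
    for eq in equations:
        lhs, rhs = eq.split("=")
        a, b = lhs.split("+")
        if value(a, mapping) + value(b, mapping) != value(rhs, mapping):
            return False
    return True

for perm in permutations(range(10), len(symbols)):
    mapping = dict(zip(symbols, perm))
    if consistent(mapping):
        print(mapping)   # the standard reading of the digits is among the consistent assignments
        break
```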
The Chinese Room and Semiotic AI
- Written by Valerio Targon
- Category: SCA
In 1980 John Searle set out a famous argument against the claim that a program may be able to understand.
The Chinese Room argument works as a (hypothetical, speculative) counter-example: (1) suppose that you do not understand Chinese and are given program instructions that, when executed by a processor, cause it to pass Turing’s test in Chinese; (2) then, by executing said program, you still do not understand Chinese; (3) therefore, a program passing Turing’s test does not provide sufficient conditions for understanding.
Being based on a counter-example, however, the scope of Searle’s argument should be limited to programs for which step (2) holds, i.e. programs whose instructions a person could execute without gaining the desired competence.
Searle’s argument, originally targeted at the scripts and rule-based question answering systems of the seventies, can nevertheless be applied to later programs featuring any of LSA, Elman’s Simple Recurrent Network... or neural networks with pre-training (learned word embeddings), because all these programs fall under the condition given in step (2), forcing one either to agree with Searle or to criticize the conclusion contained in step (3) of his argument.
Is it possible, however, to envisage an “appropriate” program that causes a person to gain the desired competence? Back in 1988 John Fisher suggested that an appropriate program should be able to generate “second-order knowledge”, but feared that this would involve, in the context of the Chinese Room, the absurd requirement for the program to have instructions for sampling information from the brain of the person executing it. However, it is well known that executing certain instructions can have consequences for a person’s brain. An interpretation occurs when an internal representation is evoked, starting from one’s observations or from a previously identified interpretation. The source of information from which the interpretation originates is called a sign, and performing semiosis means extracting meaning.
In the following I will argue that the goal of Semiotic Artificial Intelligence (Semiotic AI) should be to make programs that simulate the process of semiosis. Semiotic AI comes with the promise of escaping the "regressus ad infinitum" argument against theories of artificial intelligence, because simulating semiosis ultimately amounts to performing semiosis.