
In 1980 John Searle set out a famous argument against the claim that a program could understand.

The Chinese Room argument works as a (hypothetical, speculative) counter-example: (1) suppose that you do not understand Chinese and are given the instructions of a program that, when executed by a processor, cause it to pass Turing’s test in Chinese; (2) then, by executing that program yourself, you still do not understand Chinese; (3) therefore, a program passing Turing’s test does not provide sufficient conditions for understanding.

Being based on a counter-example, however, Searle’s argument should be limited in scope to programs for which step (2) actually holds, i.e. programs whose instructions, when executed by a person, do not give that person the desired competence.

Searle’s argument, originally targeted at the scripts and rule-based question-answering systems of the seventies, can nevertheless be applied to later programs based on LSA, Elman’s Simple Recurrent Network... or neural networks with pre-training (learning word embeddings), because all of these fall under the condition of step (2), forcing one either to agree with Searle or to criticize the conclusion in step (3) of his argument.

Is it possible, however, to envisage an “appropriate” program that causes a person to acquire the desired competence? Back in 1988 John Fisher suggested that an appropriate program should be able to generate “second-order knowledge”, but feared that, in the context of the Chinese Room, this would involve the absurd requirement that the program contain instructions for sampling information from the brain of the person executing it. However, it is well known that executing certain instructions can have consequences for a person’s brain. An interpretation occurs when an internal representation is evoked, starting from one’s observations or from a previously identified interpretation. The source of information giving rise to the interpretation is called a sign, and performing semiosis means extracting meanings.

In the following I will argue that the goal of Semiotic Artificial Intelligence (Semiotic AI) should be to make programs that simulate the process of semiosis. Semiotic AI comes with the promise of escaping the "regressus ad infinitum" argument against theories of artificial intelligence, because simulating semiosis ultimately amounts to performing semiosis.

 

Semiotic AI is not symbolic AI: semiotic AI does make use of explicit representations containing symbols and their relations, but, in contrast with symbolic AI, all of these representations are derived automatically by the system, without the need to determine them a priori and hand-code them.

Semiotic AI is not connectionist AI either: semiotic AI does learn a constellation of representations, each able to appear in connection with any other, such that meanings reside in these connections. However, the structures of semiotic AI can be naturally interpreted through semantics, i.e. they are signs, whereas it has not yet been explained how semantics could possibly emerge from the syntactic operations of a connectionist network, i.e. from the tuning of connection strengths.

This will become clear from the example offered by the Semiotic Cognitive Automaton and from my thought experiment featuring an Aboriginal cryptographer, discussed below.

 

Sharp, Cassie, Alice and SCA

Photo by Kevan Westenbarger (source: https://www.flickr.com/photos/westenbarger/4430064261/in/photolist-7KteLi-7pNtsz-8DftfY-9kMzLQ-9sAUny-9sxToc-9hj4id-9sxTaX-azP5B1-D3QKco-biaY1B-7brZp1-CbywzC-7pNtaH-8DftgU-6jXs7H-azLw3X-7pSoX5-peP4yt-7pSoEW-BoxY1Y-DQp72E-Cd3CSJ-az1MqN-dDqzhQ-BoBn9W-9rn9Yh-BoBnSj-ifz6wb-79QxVL-6ziwQJ-snxa8V-pd3YsL-aV8BGV-oXANk9-DrDsfj-gsQkXj-H1AR2-oXA73Y-BozCxW-pf5Wr4-8LFbhE-4JiDbe-ksG6Q-ceqmcw-ceqm8d-51D9gT-98yw4q-d3Hbbq-r32yUE/)

Take a pocket calculator performing arithmetic operations. Of course, it lacks any understanding of them. Now supply a machine with a background of more and more complex functions, until it understands arithmetic operations. What, then, are the necessary conditions of understanding for a machine?

Let me reframe Searle’s famous mental experiment: (1) suppose that you do not understand arithmetic operations because you are an Aboriginal cryptographer, and you are given the instructions of a program that, when executed by a processor, cause it to solve arithmetic operations; (2) does executing the program get you to understand addition?

The answer to (2) is “no” if the instructions of a pocket calculator are given. They specify putting a first operand in a first register, putting a second operand in a second register, and executing the logic operations of a decimal-to-binary converter, an adder and a binary-to-decimal converter. Executing these operations has no consequence for the arithmetic competence of the Aborigine. I believe the answer is also “no” if the book of addition specified by Hector Levesque for his “Summation Room” [Levesque, 2009] is given (such a book specifies rules for base-10 addition in a prescriptive or “wired” way).
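For concreteness, here is a minimal sketch (my reconstruction, not any actual calculator’s firmware) of the “wired” procedure just described: operands go into registers, are converted to binary, pushed through logic-gate addition and converted back. Executing these steps blindly gives the person executing them no insight into what the numerals mean.

```python
# Minimal sketch of "wired" addition: registers, decimal-to-binary conversion,
# a ripple-carry adder built from logic operations, binary-to-decimal conversion.
# (Illustrative assumption: a generic adder circuit, not a specific calculator design.)

def to_binary(decimal_digits: str) -> list[int]:
    return [int(b) for b in bin(int(decimal_digits))[2:]]        # decimal-to-binary converter

def ripple_carry_add(a_bits: list[int], b_bits: list[int]) -> list[int]:
    a_bits, b_bits = a_bits[::-1], b_bits[::-1]                  # least significant bit first
    out, carry = [], 0
    for i in range(max(len(a_bits), len(b_bits))):
        a = a_bits[i] if i < len(a_bits) else 0
        b = b_bits[i] if i < len(b_bits) else 0
        out.append(a ^ b ^ carry)                                # sum bit (XOR gates)
        carry = (a & b) | (carry & (a ^ b))                      # carry bit (AND/OR gates)
    if carry:
        out.append(carry)
    return out[::-1]

def to_decimal(bits: list[int]) -> str:
    return str(int("".join(map(str, bits)), 2))                  # binary-to-decimal converter

register_1, register_2 = "7", "3"                                # first and second operand
print(to_decimal(ripple_carry_add(to_binary(register_1), to_binary(register_2))))   # prints 10
```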

The answer to (2) may be “yes” if a program is given that contains a number progression 0; 1; 2; 3;... up to 999999, for example, and a set of statements such as 0+0=0; 0+1=1; 1+0=1; 0+2=2; 1+1=2; 2+0=2; and so on. The program could extract a rule for finding the next item in the progression, so that it could keep counting endlessly, and the Aborigine would acquire the same competence. Moreover, the program could map “+0=” to the operation of repeating the same item; “+1=” to the operation of outputting the next item; “+2=” to the operation of outputting the second-next item; and so on, so that it could perform “finger” addition and the Aborigine would acquire the same competence. Neglecting the fact that finger addition is not very efficient, this amounts to generating the “second-order knowledge” that addition is counting on from the first addend for as many steps as the second addend indicates. The claim that actions implemented in an artificial agent and involving the construction of object collections and motion along a path can serve as a source for conceptual metaphor mappings to the abstract objects and operations of arithmetic dates back to the system Cassie [Goldfain, 2007]. How attractive, however, is such a program? The program must have access to a specific, exhaustive representation of declarative knowledge, i.e. the number progression ideally starting from the number 0, in order to be able to display procedural knowledge of addition. There are few domains for which such an exhaustive specification of declarative knowledge can be provided, and natural language is certainly not among them (consider the requirement of containing all possible natural-language conversations of one word, of two words, ...). Therefore, such a program is of limited interest, because it can only give rise to mathematical understanding. Does understanding really require all the declarative knowledge to be specified beforehand?
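A minimal sketch of such a program follows (my illustration under the assumptions just stated: an explicitly stored progression, shortened here, and the mapping of “+k=” to counting on k items). It is this reduction of addition to counting that constitutes the second-order knowledge.

```python
# Minimal sketch of "finger addition" recovered from a stored number progression.
# Assumptions: the progression is given explicitly (shortened here to 0..999) and
# "+k=" is mapped to the operation of outputting the k-th next item.

PROGRESSION = [str(n) for n in range(1000)]                      # "0", "1", ..., "999"
NEXT = {PROGRESSION[i]: PROGRESSION[i + 1] for i in range(len(PROGRESSION) - 1)}

def successor(item: str) -> str:
    """'+1=' mapped to outputting the next item in the progression."""
    return NEXT[item]

def add(first: str, second: str) -> str:
    """Second-order knowledge: addition is counting on from the first addend
    for as many steps as the second addend indicates."""
    result = first
    for _ in range(PROGRESSION.index(second)):                   # how many times to count on
        result = successor(result)
    return result

print(add("7", "3"))   # prints 10, with no wired adder involved
```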

Intuitively, we would like to require the program to learn from examples. This is what the system Alice does [Nizamani et al., 2015]. Starting from samples of arithmetic facts, it can learn rules such as 1+1 Equals 2; 2+1 Equals 3; 2+9 Equals 1#1 (where # represents concatenation), before learning that (x#y)+z Equals x#(y+z) and that a#(b#c) Equals (a+b)#c. However, by executing this program, the Aborigine, despite being able to solve additions (and to complete numerical sequences) by syntactic manipulation, cannot be said to understand addition (the answer to (2) is “no”). The Aborigine simply does not know that the symbol 1 denotes the numerosity one and the symbol 2 the numerosity two.
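A minimal sketch (my illustration, not the Alice system itself) of purely syntactic addition with the quoted rewrite rules shows how far symbol manipulation alone can get:

```python
# Syntactic addition by rewriting, using only single-digit facts and the two rules
# (x#y)+z = x#(y+z) and a#(b#c) = (a+b)#c, where # is digit concatenation.
# (Restricted to adding a single digit, which is what the quoted rules cover.)

FACTS = {(a, b): str(a + b) for a in range(10) for b in range(10)}   # e.g. (2, 9) -> "11", i.e. 1#1

def add_single_digit(numeral: str, z: int) -> str:
    """Add one digit z to a decimal numeral by symbol manipulation only."""
    if len(numeral) == 1:
        return FACTS[(int(numeral), z)]                  # a single-digit fact
    x, y = numeral[:-1], int(numeral[-1])                # numeral = x#y
    s = FACTS[(y, z)]                                    # (x#y)+z = x#(y+z)
    if len(s) == 1:
        return x + s
    b, c = int(s[0]), s[1]                               # y+z produced a carry: s = b#c
    return add_single_digit(x, b) + c                    # x#(b#c) = (x+b)#c

print(add_single_digit("2", 9))    # prints 11
print(add_single_digit("99", 1))   # prints 100
```

The digits here remain uninterpreted marks: nothing in the rules connects the symbol 1 to the numerosity one, which is precisely why the Aborigine gains no understanding from executing them.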

Looking for an example-based learning system that enables understanding by generating “second-order knowledge”, I designed the Semiotic Cognitive Automaton, a.k.a. SCA [Targon, 2016]. When provided with an input of mathematical sentences and no a priori declarative knowledge of mathematical formalisms, SCA not only learns syntactic rules enabling it to solve arithmetic operations, but also learns, via second-order reasoning, the semantics of the arithmetic symbolism. The symbols used by SCA (for example, the paradigm of Arabic ciphers) are intrinsically grounded as a result of their semiotic definition, i.e. they are autonomously created semiotic symbols. Moreover, semiotic cognitive grounding of the second order assigns a further meaning to the created symbols, based on internal processes and structures of the automaton itself. The external symbol 1 gets interpreted as the numerosity one when moving from one item to the next in a list or when performing addition as repeated counting, in the way the Aborigine would learn what numbers are and what she can do with them.

One must then answer “yes” to (2). Even if syntax suffices for answering a quiz like

7+3=

or

8;11;14;

“second-order knowledge” is, however, necessary for answering

iii+iiiii=

or

1:7;2:12;3:17;n:

or other post-arithmetic problems.
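The following toy illustration (mine, not SCA’s actual procedure) shows why these quizzes resist the purely syntactic rules sketched above: answering them requires interpreting the marks as numerosities and the colon pairs as a position-to-value rule.

```python
# Toy illustration of second-order knowledge applied to the two post-arithmetic quizzes.
# Hypothetical grounding assumptions: each tally mark 'i' denotes one counted item, and the
# sequence 1:7; 2:12; 3:17 is explained by relating positions to values.

def numerosity(tally: str) -> int:
    """Interpret a tally numeral such as 'iii' as the numerosity it denotes."""
    return len(tally)

def add_tallies(a: str, b: str) -> str:
    """Addition as counting: produce a tally for the combined numerosity."""
    return "i" * (numerosity(a) + numerosity(b))

def sequence_rule(pairs: list[tuple[int, int]]) -> str:
    """Recover the rule behind 1:7; 2:12; 3:17 from the constant difference of values."""
    (x1, y1), (x2, y2) = pairs[0], pairs[1]
    step = (y2 - y1) // (x2 - x1)
    offset = y1 - step * x1
    return f"{step}*n+{offset}"

print(add_tallies("iii", "iiiii"))                # prints iiiiiiii
print(sequence_rule([(1, 7), (2, 12), (3, 17)]))  # prints 5*n+2
```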

The endeavor of Semiotic AI is to design appropriate programs (for signal processing, for natural language processing...) capable of generating second-order knowledge. Semiotic AI has to start from zero. Or, better said, it has already started from 'zero' (see how SCA could learn from examples the semantics associated with the symbol 0 in additions, subtractions, multiplications… in its own conceptual reality).

 

Bibliography

 

Searle, John R. (1980), "Minds, brains, and programs", Behavioral and Brain Sciences 3(3): 417-457.

Fisher, John A. (1988), "The wrong stuff: Chinese rooms and the nature of understanding", Philosophical Investigations 11(4): 279-299.

Levesque, Hector J. (2009), "Is it enough to get the behavior right?", Proc. of IJCAI-09, Pasadena, USA: 1439-1444.

Goldfain, Albert (2007), "A Case Study in Computational Math Cognition and Embodied Arithmetic", Proc. of the Twenty-Ninth Meeting of the Cognitive Science Society (CogSci2007), Nashville, USA: 293-298.

Nizamani, Abdul R., Juel, J., Persson, U., Strannegård, C. (2015), "Bounded cognitive resources and arbitrary domains", In: Bieger, J., Goertzel, B., Potapov, A. (eds.) AGI 2015. LNCS, vol. 9205: 166-176.

Targon, Valerio (2016), "Learning the Semantics of Notational Systems with a Semiotic Cognitive Automaton", Cognitive Computation 8(4): 555-576.