This article is inspired by my AGI 2020 paper Experience-specific AGI Paradigms


How can we break down the route to Artificial General Intelligence, if we are to develop AGI in the near future? Every researcher relies on certain assumptions when formulating a path from Narrow AI to AGI.

Some researchers use human development as a metaphor: going from infant-AGI, through toddler-AGI and schoolchild-AGI, to adult-AGI. Others refer to evolution: going from ant-level AGI, through bird-level AGI and chimp-level AGI, to human-level AGI. However, why should an AGI be subject to the constraints of human brain maturation? Why should an AGI be subject to constraints of brain size and composition? It seems to me that these researchers are rather assuming that the first AGI will be limited in its capabilities, just as a human whose brain is not yet mature, or an animal with a less developed brain, cannot reach certain mental abilities. The developmental metaphor implies that the first AGI may have some language skills but lack some forms of reasoning. The evolutionary metaphor implies that the first AGI may have some visuospatial ability but not problem solving, or may be capable of problem solving but lack language.

Recently, Ben Goertzel suggested in From Narrow AI to AGI via Narrow AGI? a path of developing Narrow AGIs on top of Narrow AIs, and AGI on top of Narrow AGIs. He describes a Narrow AGI as "biased in capability toward some particular domain" of science; the key capability of a Narrow AGI is that of combining application-specific narrow AIs, creating and training new such tools for its own purposes as needed. According to Goertzel, each Narrow AGI will exceed human level in one or two categories of intelligence - which are, according to Gardner’s theory of Multiple Intelligences: linguistic, logical-mathematical, musical, bodily-kinesthetic, spatial, interpersonal, intrapersonal, naturalist and existential - and more generality could be reached by cross-connecting different Narrow AGIs.

A truly general AI should only be restricted by the domain of operation captured by its design. An AGI designed for computer vision should be able to solve all or most problems in vision, not just those in a particular science such as biomedical research. An AGI designed for natural language processing should be able to solve all or most problems in any science, not just, for example, those in economics and finance. On the other hand, an AGI designed for autonomous control would probably be highly specialised, e.g. for car driving or home-service robotics, and we should not expect it to solve problems outside its specialisation. Instead of considering combinations of categories of intelligence, I would rather focus on the distinction between AGIs that can understand language and those that cannot.

In Experience-specific AGI Paradigms I suggest a path of developing “experience-specific” AGIs, each of them general-purpose but specific to a given domain of experience, definable as a class of input/output or, in some cases, input/action. I describe the first three types of “experience-specific” AGIs that can be developed, and three development paths to expand the capabilities of these first AGIs.

 

  • VIS-AGI based on visual experience --> passive LINK-AGI based on passive linkage experience. VIS-AGI, learning from images, videos and live cameras, will develop intuitive physics, make predictions potentially involving human behavior, detect anomalies, and produce simulations and virtual reality. VIS-AGI could be expanded into passive LINK-AGI thanks to simultaneous embedding with language, in the form of image tags and video captions.
  • SEMO-AGI based on sensorimotor experience + VIS-AGI based on visual experience --> active LINK-AGI based on active linkage experience. SEMO-AGI will develop purposeful behavior and navigation for autonomous robots or cars, learning from logs of human operation of these robots or cars. It could be merged with VIS-AGI and expanded into active LINK-AGI thanks to simultaneous embedding with language.
  • SYM-AGI based on symbolic experience --> passive and active LINK-AGI based on linkage experience. SYM-AGI will learn from electronic texts (digitised books, webpages, source code) and i/o interfaces to interact successfully with humans through language (any language) and other games, develop science through mathematics, and self-improve through machine programming, ultimately resulting in LINK-AGI.
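The notion of a domain of experience as a class of input/output (or input/action) channels, and of expansion by merging domains, can be sketched in code. This is a minimal illustrative model, not an implementation from the paper; all type names, channel labels and the `merge` helper are my own assumptions.

```python
from dataclasses import dataclass

# Hypothetical sketch: an experience domain modelled as a class of
# input and output (or action) channels. Channel labels are illustrative.

@dataclass(frozen=True)
class ExperienceDomain:
    name: str
    inputs: frozenset   # modalities the AGI perceives
    outputs: frozenset  # modalities it produces (outputs or actions)

VIS_AGI = ExperienceDomain(
    name="VIS-AGI",
    inputs=frozenset({"images", "videos", "live_cameras"}),
    outputs=frozenset({"predictions", "anomaly_reports", "simulations"}),
)

SEMO_AGI = ExperienceDomain(
    name="SEMO-AGI",
    inputs=frozenset({"sensor_logs", "live_cameras"}),
    outputs=frozenset({"actions"}),
)

def merge(a: ExperienceDomain, b: ExperienceDomain,
          name: str) -> ExperienceDomain:
    """Combining two domains yields a broader class of experience,
    as in the expansion of SEMO-AGI + VIS-AGI into active LINK-AGI."""
    return ExperienceDomain(name, a.inputs | b.inputs, a.outputs | b.outputs)

# Active LINK-AGI would further add language channels via simultaneous
# embedding; here we only show the union of the two source domains.
ACTIVE_LINK_AGI = merge(SEMO_AGI, VIS_AGI, "active LINK-AGI")
print(sorted(ACTIVE_LINK_AGI.inputs))
```

The point of the sketch is only that each AGI is fully general within its channel set, and that development paths correspond to unions of those sets (plus a new linkage channel), not to adding categories of intelligence.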

The first and third development paths described here may seem counter-intuitive, because passive vision and symbolic experience do not exist in nature; but an AGI can have types of experience that no agent in nature can have.