From symbolic AI to connectionist AI

Jean-Jacques Pluchart

The history of AI is marked by a tension between two alternately dominant approaches, symbolic and connectionist, as observed by Cardon, Cointet and Mazières (2018). Researchers following LeCun, Bengio and Hinton (2015) relaunched AI by processing massive data with so-called deep neural models (deep learning), following a logic borrowed from cybernetics. This approach, described as “generative”, “inductive” or “connectionist”, had long been marginalised after the launch of symbolic AI in 1956 at Dartmouth by John McCarthy and Marvin Minsky, which was followed by the development of expert systems before the emergence of machine learning in the 1980s.

Symbolic models were developed by a small number of leading researchers, organised in groups at MIT (Minsky, Papert), Carnegie Mellon (Simon, Newell) and Stanford University (McCarthy), who mainly responded to public tenders and engaged in more or less playful experiments: chess and Go games, simplified dynamic spaces, simulation of sets, semantic networks, truth functions, robotisation of behaviours, creation of new languages, etc.

While symbolic AI applies a model to data following hypothetico-deductive reasoning, connectionist AI follows an inductive logic, applying a learning method that makes predictions by iterating over massive data. While symbolic AI applies a model (a theory or a heuristic) to structured data in order to verify a result at a given horizon, connectionist AI produces original content by learning from data through appropriate questioning. While symbolic AI attempts to solve a predefined problem, connectionist AI induces meaningful representations from the interactions between social actors. This approach follows the logic of cybernetics initiated in 1948 by Norbert Wiener. The renaissance of connectionist AI is attributed in particular to the Parallel Distributed Processing (PDP) research group led by Rumelhart et al. (1986). The PDP group's work explores the deep mechanisms of knowledge by exploiting the metaphor of neurons (a network of connections) and assuming that knowledge is constructed by a binary activation mechanism.
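The binary activation mechanism behind the neuron metaphor can be sketched as a simple threshold unit in the spirit of McCulloch and Pitts: the unit outputs 1 when the weighted sum of its inputs reaches a threshold, and 0 otherwise. The weights and threshold below are illustrative values of my own, not taken from the article.

```python
def binary_unit(inputs, weights, threshold):
    """McCulloch-Pitts-style unit: fire (return 1) if the weighted
    sum of inputs reaches the threshold, otherwise return 0.
    Weights and threshold are illustrative, not from the source."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# An AND-like unit: it fires only when both inputs are active.
print(binary_unit([1, 1], [0.6, 0.6], threshold=1.0))  # → 1
print(binary_unit([1, 0], [0.6, 0.6], threshold=1.0))  # → 0
```

Networks of such all-or-nothing units, with weights adjusted by learning rather than set by hand, are the basic ingredient of the connectionist models the PDP group explored.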

For more than 60 years, this controversy among AI researchers has given rise to countless scientific works: according to Cardon, Cointet and Mazières (2018), the “symbolic” corpus totalled 65,522 publications between 1956 and 2018, while the “connectionist” corpus gathered 106,278 publications. This vast debate is part of a process of scientific construction and deconstruction theorised in particular by Latour (1988).

References

CARDON D., COINTET J.-P., MAZIÈRES A. (2018), « La revanche des neurones. L’invention des machines inductives et la controverse de l’intelligence artificielle », Réseaux, 2018/5 (n° 211).

LATOUR B. (1988), Science in Action: How to Follow Scientists and Engineers Through Society, Harvard University Press.

LECUN Y., BENGIO Y., HINTON G. (2015), « Deep learning », Nature, vol. 521, n° 7553.

RUMELHART D. E., McCLELLAND J. L. (1986), « PDP Models and General Issues in Cognitive Science », in PDP RESEARCH GROUP (1986), Parallel Distributed Processing. Explorations in the Microstructure of Cognition, Cambridge MA, MIT Press.

WIENER N. (2014), La cybernétique : Information et régulation dans le vivant et la machine, Seuil.