Numerical Evidence That the Power of Artificial Neural Networks Limits Strong AI

Roman Englert, Jörg Muschiol

Research output: Contribution to journal › Article › peer-review

Abstract

A well-known definition of AI distinguishes weak and strong AI, following McCarthy. An open question is how to characterize these terms, i.e., the transition from weak to strong AI. Hardly any research results exist for this complex and important question. In this paper we investigate how the size and structure of a Neural Network (NN) limit the learnability of a training sample and can thus be used to discriminate weak and strong AI (domains). Furthermore, the size of the training sample is a primary parameter for estimating the training effort in big-O notation. The number of required training repetitions may also limit the tractability of learning and is investigated as well. The results are illustrated with an analysis of a feedforward NN and a training sample for language with 1,000 words, including the effort for the training repetitions.
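The abstract's training-effort estimate can be illustrated with a minimal sketch. The layer sizes, epoch count, and the simple O(epochs × samples × parameters) cost model below are illustrative assumptions, not the paper's actual analysis; only the 1,000-word vocabulary comes from the abstract.

```python
# Hypothetical sketch: parameter count and a crude big-O training-cost
# estimate for a fully connected feedforward NN over a 1,000-word
# vocabulary. Layer sizes and epoch count are assumptions for illustration.

def param_count(layer_sizes):
    """Weights plus biases of a fully connected feedforward NN."""
    return sum(m * n + n for m, n in zip(layer_sizes, layer_sizes[1:]))

# One-hot input over a 1,000-word vocabulary, one hidden layer, 10 outputs.
layers = [1000, 64, 10]
params = param_count(layers)

# Rough effort estimate: O(epochs * samples * parameters)
# multiply-accumulate operations for plain gradient-descent training.
epochs, samples = 100, 1000
ops = epochs * samples * params

print(params)  # → 64714
print(ops)     # → 6471400000
```

The estimate makes the abstract's point concrete: the training-sample size enters the effort bound linearly, while the network's size enters through the parameter count.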

Original language: English
Pages (from-to): 338-346
Number of pages: 9
Journal: Advances in Artificial Intelligence and Machine Learning
Volume: 2
Issue number: 2
DOIs
State: Published - 1 Jan 2022
Externally published: Yes

Keywords

  • Betti numbers
  • Dimension of NN
  • NN
  • Power of AI
  • Training Repetitions of NN
  • Training Sample
  • Weak and strong AI

ASJC Scopus subject areas

  • Artificial Intelligence
