Artificial General Intelligence — A thought experiment

Victor Morgante
Nov 10, 2019


The general consensus within the Artificial Intelligence (AI) community is that we are years away from achieving Artificial General Intelligence (AGI), where machines can approximate or supersede the intelligence of humans across a wide range of criteria. It is hard to dispute this, as no demonstrated machine meets those requirements.

That does not stop us from pondering the architecture of such a machine. Indeed, a plethora of books have been written on the supposed workings of the human mind, setting out litanies of requirements for machines that would achieve AGI.

This article proposes a thought experiment that defines one such requirement of an AGI architecture. It takes the approach of considering what the human mind does with ease and which any AGI architecture must match.

When we speak of machines that will achieve AGI, and of their architecture, we can assume for our purposes that we mean computers embodying both hardware and software. The output of the thought experiment in this article is a suggested way forward for AGI architecture, focusing on the software that such a machine must embody.

Mostly, when we think of software, we view it as the instructions that tell computer hardware what to do. It is easily forgotten that all software operates over data, manipulating, storing and presenting that data in one fashion or another. The hardware, in that view, is merely a means of enabling the software to store, manipulate and present data. We take the view here that data, together with the software that defines the structure of that data, is itself part of the software.

And so our thought experiment begins:

People store vast compendiums of data about an equally large set of topics. We do so without even thinking about it. “A mobile phone screen has four corners”, “Tar is black”, “Horses have four legs” etc.

Setting aside that all this information is of an ‘in general’ form, it is indisputable that people store such facts in their minds, to be called upon when needed to do something with the information. For example, it is easy to imagine a horse with three legs, but we store generalised facts nonetheless: in general, a horse has four legs.

Sometimes we purposefully submit information to our data store, “My friend Peter’s birthday is in April”. We may do this consciously rather than unconsciously. Just the same, we store the information such that we can draw upon it in the future for some purpose.

“Oh, it’s April. I must remember to call Peter to wish him a happy birthday”, draws on data whether the fact that Peter was born in April was stored consciously or subconsciously.

What is interesting about the information stored is that the facts can range in their cardinality. “Tar is black” binds two pieces of information together: ‘tar’ and ‘black’. A fact such as “UHT milk is in aisle 7 in my local supermarket” binds three, and UHT milk may well be in aisle 8 in another store.

So the human mind is not limited in the cardinality of the facts that it stores, regardless of how it stores them.

Whatever the architecture of the human mind, facts may range over an unbounded variety of cardinalities, although it seems difficult for people to remember facts with a large number of terms. The mind is malleable as to the structure of the data it stores, and its architecture must be equally malleable in its capacity to store data in whatever variety of structures it decides on.
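The idea of a store that is indifferent to the cardinality of its facts can be sketched in a few lines of code. The sketch below is purely illustrative (the `FactStore` class and its method names are hypothetical, not from any real library): each fact is a named relation over any number of terms, so the binary “Tar is black” and the ternary supermarket fact live in the same structure.

```python
# A minimal sketch of a fact store whose facts may have any cardinality.
# All names here are illustrative, not taken from an existing library.

class FactStore:
    def __init__(self):
        self.facts = []  # each fact: (relation_name, tuple_of_terms)

    def assert_fact(self, relation, *terms):
        """Store a fact of any arity: binary, ternary, or larger."""
        self.facts.append((relation, terms))

    def query(self, relation):
        """Return the term tuples stored under a relation name."""
        return [terms for rel, terms in self.facts if rel == relation]

store = FactStore()
store.assert_fact("has_colour", "tar", "black")             # arity 2
store.assert_fact("born_in", "Peter", "April")              # arity 2
store.assert_fact("located_in", "UHT milk", "aisle 7",
                  "my local supermarket")                   # arity 3
```

Nothing in the store fixes the number of terms per fact in advance, which is the point: the structure accommodates whatever cardinality the next fact happens to have.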

If we call the structure of each fact stored in the mind a ‘model’ and the architecture supporting each model the metamodel, we arrive quickly at the output of our thought experiment.

An AGI must have a malleable metamodel within which to store various compendiums of information and from which to draw data over which to operate to produce intelligent thought comparable to that of people.

Another way to look at the problem is to consider the various database technologies that exist in the software realm, be they relational, hierarchical, graph-based or object-based. Humans use software databases to store data over which software operates to achieve an aim. That is, it is natural for modern humans to store data extraneous to themselves and use that data for a purpose. Our thought experiment holds: an AGI must have a malleable metamodel within which to store compendiums of information, from which to draw data over which to operate to produce intelligent thought. The metamodel may simply define the structure of the data over which the AGI operates, with the actual data separate from the AGI, in the same way that people hold a model in their mind of the database over which they operate with software. In that case, our thought experiment is more refined:

“An AGI must be able to form models of data structures over which to operate, and store that model and (likely) the data itself, over which to operate to produce intelligent thought.”
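One possible reading of this refined conclusion can be sketched in code (a sketch under assumed names, not a claim about how any particular system works): the model is a set of fact-type definitions created at runtime, and the data is kept separately but validated against those definitions.

```python
# Sketch: a malleable metamodel. Fact types (the 'model') are defined at
# runtime; the data (the 'population') is kept separate and checked
# against the model. All class and method names are hypothetical.

class FactType:
    def __init__(self, name, roles):
        self.name = name
        self.roles = roles  # e.g. ["animal", "leg count"]

class Metamodel:
    def __init__(self):
        self.fact_types = {}   # the model layer
        self.population = {}   # the data layer, stored separately

    def define(self, name, roles):
        """Extend the model with a new fact type of any arity."""
        self.fact_types[name] = FactType(name, roles)
        self.population[name] = []

    def populate(self, name, *values):
        """Add data, validated against the model's role count."""
        fact_type = self.fact_types[name]
        if len(values) != len(fact_type.roles):
            raise ValueError(f"{name} expects {len(fact_type.roles)} terms")
        self.population[name].append(values)

mm = Metamodel()
mm.define("has_legs", ["animal", "leg count"])          # binary fact type
mm.define("stocked_at", ["product", "aisle", "store"])  # ternary fact type
mm.populate("has_legs", "horse", 4)
mm.populate("stocked_at", "UHT milk", "aisle 7", "my local supermarket")
```

The design choice mirrors the refined statement above: the machine forms a model of a data structure at runtime (`define`) and then stores data conforming to that model (`populate`), with neither the arity nor the set of fact types fixed in advance.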

We arrive at this conclusion by observing that this is exactly what people do in producing what we perceive as intelligent thought: we operate over data, in the same way you mull over this sentence (as data) to produce a model in your mind of what an AGI would do in the production of thought.

Viev Pty Ltd (now FactEngine) produces Boston, conceptual Object-Role Modeling software whose data store holds a compendium of models, metamodels and actual data…the roots for future AGI.

Thank you for reading.

— — — — — — — — — — — — — — — —