The Secret of the Boston Meta(Meta)Model

Applied use of Ehrenfeucht-Fraïssé Games

Victor Morgante
5 min read · Oct 12, 2024

The time has come to share the secret of how the Boston conceptual modelling software meta(meta)model works, providing the means to have one metamodel (database/datastore) that can store multiple languages.

With the number of Boston customers growing, and with their growing need to use the metamodel of Object-Role Modeling for various tasks, including in artificial intelligence, it is clear that knowledge of how Boston works will be increasingly needed as more organisations adopt a 4-Layer Architecture for the management of multiple metamodels within the one metamodel.

Unless someone has reverse-engineered the Boston database, or the XML export of what I call Models, one could easily scratch one's head as to how, for instance, a Property Graph Schema for a Graph Database can have exactly the same metamodel as an Entity Relationship Diagram, achieving this:

Same Metamodel for Property Graph Schema and Entity Relationship Diagrams. Image by Victor Morgante.

Or the below, with 3 languages emanating from the same metamodel:

Variable interpretation of models within a common meta(meta)model. Image by Victor Morgante.

Or the below, with 5 languages emanating from the same metamodel:

Five languages within the one meta(meta)model. Image by Victor Morgante.

Ambiguous MetaModel

The basic premise behind how Boston works is that the metamodel of Object-Role Modeling is ambiguous without the establishment of something beyond Object-Role Modeling; ergo, Object-Role Modeling is itself ambiguous, because the metamodel of Object-Role Modeling can be expressed as an Object-Role Model.

This has been a controversial topic for some time, as Object-Role Modeling is largely touted as unambiguous.

However, the Boston software itself is proof, by the Curry-Howard Correspondence, that Object-Role Modeling and its metamodel can be ambiguous.

Because an Object-Role Model (graphical notation and verbalisations) can be expressed as theorems of a theory under Finite Model Theory, we can work backwards to achieve the result.

Ehrenfeucht-Fraïssé Games allow for variable interpretation of the theorems of a theory under Finite Model Theory (FMT), implying that a theory of First-Order Logic (FOL) under FMT is, and can be, largely ambiguous except between players playing the same game, i.e. sharing the same interpretation.
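To make the game concrete (this is an illustration of the standard k-round Ehrenfeucht-Fraïssé game over finite structures with one binary relation, not Boston code; all names are my own), the sketch below decides whether Duplicator has a winning strategy. Duplicator wins the k-round game exactly when the two structures agree on all first-order sentences of quantifier rank k, which is why two different finite structures can be indistinguishable to a bounded player:

```python
from itertools import product

def is_partial_iso(pairs, rel_a, rel_b):
    """Check that the chosen (a, b) pairs form a partial isomorphism
    between two structures each carrying one binary relation."""
    for (a1, b1), (a2, b2) in product(pairs, repeat=2):
        if (a1 == a2) != (b1 == b2):                     # map must be injective both ways
            return False
        if ((a1, a2) in rel_a) != ((b1, b2) in rel_b):   # map must preserve the relation
            return False
    return True

def duplicator_wins(dom_a, rel_a, dom_b, rel_b, rounds, pairs=()):
    """True iff Duplicator survives `rounds` more rounds of the EF game."""
    if not is_partial_iso(pairs, rel_a, rel_b):
        return False
    if rounds == 0:
        return True
    # Spoiler may pick any element of either structure; Duplicator must answer in the other.
    return all(
        any(duplicator_wins(dom_a, rel_a, dom_b, rel_b, rounds - 1, pairs + ((a, b),))
            for b in dom_b)
        for a in dom_a
    ) and all(
        any(duplicator_wins(dom_a, rel_a, dom_b, rel_b, rounds - 1, pairs + ((a, b),))
            for a in dom_a)
        for b in dom_b
    )

def linear_order(n):
    """A linear order on n elements: domain plus its strict-order relation."""
    return tuple(range(n)), {(i, j) for i in range(n) for j in range(n) if i < j}
```

For example, linear orders of sizes 3 and 4 are indistinguishable in the 2-round game (Duplicator wins), while sizes 2 and 3 are not: the rank-2 sentence "some element has something before it and something after it" separates them.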

If we remove (what I call) the arbitrary notion that theories of First-Order Logic limited to Finite Model Theory are bounded, with the data of their variables included (i.e. if we ignore or discount Finite Model Theory entirely), we are left simply with theorems of First-Order Logic, to which the Löwenheim-Skolem theorems can apply, providing unbounded interpretation of the theorems of a theory of FOL *otherwise* under FMT.

We know that theories of First-Order Logic may have higher-order interpretations. So, one way or another, we may extract Gödel numbers from the data of the theorems of a FOL theory and invoke the Incompleteness Theorems. We thus arrive at the notion that theories of Formal Logic are subjective as to their interpretation, and so all of logic is a game.

So, if all of logic is a game, then how do we communicate models (structures) between people unambiguously?

We set up a game.

We call that a Coherent Cooperative Game, and we simply say:

“You and I are going to communicate using this interpretation”

Why?

Because there is no other choice. Without setting up a Coherent Cooperative Game one has the potential for ambiguous interpretation, resulting in contention.
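The game setup can be sketched in a few lines (a hypothetical illustration, not Boston's implementation; `Player` and `communicate` are my own names): a structure is only exchanged once both players hold the same interpretation, and a mismatch is surfaced as the contention described above rather than silently producing an ambiguous reading.

```python
class Player:
    """A participant (human or computer) holding one interpretation."""
    def __init__(self, name, interpretation):
        self.name = name
        self.interpretation = interpretation

def communicate(sender, receiver, structure):
    """The Coherent Cooperative Game setup: refuse to transmit a model
    unless both players have agreed on a shared interpretation."""
    if sender.interpretation != receiver.interpretation:
        raise ValueError(
            f"Contention: {sender.name} reads '{sender.interpretation}', "
            f"{receiver.name} reads '{receiver.interpretation}'")
    # Unambiguous under the agreed game: the interpretation travels with the structure.
    return (receiver.interpretation, structure)
```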

And so we arrive at the Unification of Game Theory, Information/Communication Theory and Formal Logic (formalising Formal Logic under Game Theory, rather than having Game Theory as an adjunct to Formal Logic).

So, simply, the secret behind the Boston meta(meta)model is that it is the metamodel of Object-Role Modeling (see the earlier articles in this series), and we simply flag sub-sections (pages/models-within-models) with the language of their shared interpretation between players (human or computer). We do this when we put the data into the metamodel, and we choose the interpretation (which language) when we pull the data out.
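A minimal sketch of that mechanism (class names, renderer strings and the three-role simplification are all my own illustrative assumptions, not the actual Boston schema): fact types are stored once in the common metamodel, each model carries a language flag, and the interpretation is applied only when the data is pulled back out.

```python
from dataclasses import dataclass, field

@dataclass
class FactType:
    """One fact type stored once in the common metamodel,
    simplified here to (object type, predicate, object type)."""
    roles: tuple

@dataclass
class Model:
    """A page/model-within-model, flagged with its agreed language."""
    name: str
    language: str
    fact_types: list = field(default_factory=list)

# Interpretation is a choice of renderer, applied on the way out.
RENDERERS = {
    "ORM": lambda ft: f"{ft.roles[0]} {ft.roles[1]} {ft.roles[2]}",
    "ER":  lambda ft: f"{ft.roles[0]} --{ft.roles[1]}--> {ft.roles[2]}",
    "PGS": lambda ft: f"(:{ft.roles[0]})-[:{ft.roles[1].upper()}]->(:{ft.roles[2]})",
}

def pull(model):
    """Pull data out of the metamodel under the model's language flag."""
    render = RENDERERS[model.language]
    return [render(ft) for ft in model.fact_types]
```

The same stored fact type then reads as an ORM verbalisation, an ER relationship, or a property-graph schema edge, depending solely on the flag chosen at read time.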

I call this the Infinity Fountain:

“Infinity Fountain”. Image by Victor Morgante

I.e. the metamodel of Object-Role Modeling is infinitely variable in its interpretation, and one simply stores the language of (mutual) interpretation within the data itself (within the metamodel), so that one or more players can choose the correct interpretation when the data is pulled out of the metamodel to show multiple languages:

Pulling data out of the Infinity Fountain as Models. Image by Victor Morgante.

Thank you for reading. As time permits, I will write more on the Boston metamodel, metalogic and the Unification of Game Theory, Information/Communication Theory and Formal Logic.

=============End=============

Homage to Dr Terry Halpin, who possibly saw this weakness in formal logic as he put together his PhD thesis, accepted in 1989.

In section 4–6, Halpin writes [6, 4–6]:

Extract from the PhD Thesis of Dr Terry Halpin. Royalty Free image.

I.e. Boston simply ignores the formalisation of NIAM (and subsequently Object-Role Modeling), because interpreters are free to choose whatever interpretation they like, and logic is otherwise by extension, not intension. That is, we adopt a meta-logical view of Formal Logic itself to arrive at a result where relationships may be individuals if we so choose, finding infinitely variable benefit in Coherent Cooperative communication by understanding the very nature of ambiguity. We eliminate ambiguity, to whatever degree we can, by treating all of logic as a game, under Coherent Cooperative Games.

=================End================

Written by Victor Morgante

@FactEngine_AI. Manager, Architect, Data Scientist, Researcher at www.factengine.ai and www.perceptible.ai
