Programming the Large Language Model

Introducing the Observational State Machine — General Purpose AI Unit

Victor Morgante
15 min read · Feb 1, 2024

© Copyright
Victor Morgante (Perceptible.AI) & Julian Francisco (Bloombox.AI)

We introduce the Observational State Machine (OSM), a general-purpose, programmable and computational AI unit that manages workflows, dialogue interaction, and integration with databases and APIs.

Introduction

The advent of large language models powered by deep learning has unleashed impressive capabilities in natural language processing. Yet despite achievements in areas like text generation and question answering, fundamental limitations remain when applying these models to complex real-world tasks. Large language models operate in a linear, feedforward manner, processing input and producing output. This precludes the iterative workflows, modular components, and dynamic decision making required for adaptable general purpose artificial intelligence (GPAI).

To overcome these constraints, we have developed the Observational State Machine (OSM), a framework that brings the advantages of a Central Processing Unit (CPU) to the effective programming of large language models. The OSM serves as a control centre, coordinating modular subtasks, regulating workflows, and engaging with external actors and the environment. It transforms large language models from fixed-function networks into customizable, extensible, programmable modules.

In this article we explore how the OSM’s unique architecture enables flexible programming of large language models. We examine its analogies to a traditional CPU, key technical differences, benefits over feedforward-only architectures, and expansive use cases across industries. With the right programming model, large language models may fulfil their promise in powering more advanced, general forms of artificial intelligence.

First, an introduction to the OSM (YouTube video):

I. Limitations of Large Language Models

Large language models have grown exponentially in capability in recent years, enabled by advances in deep learning, vast training datasets, and increased computing power. Models like GPT-4 can generate remarkably coherent text and engage in dialogue with a level of human-like understanding.

However, fundamental limitations remain. Large language models operate in a purely feedforward manner — text or other data is provided as input, passed through the model’s neural network layers, and output is generated. There is no built-in workflow management, iterative processing, or modular components. The model lacks an internal state or memory beyond the static training parameters.

This linear, black-box nature limits the adaptability of large language models. They cannot easily adjust to new scenarios requiring complex inference or multi-step reasoning in isolation. Their responses tend to be disconnected rather than context-aware in prolonged interactions. And they cannot be readily reprogrammed for modular reuse across different tasks.

To achieve more advanced cognitive capabilities and general-purpose utility, large language models need a programming framework that enables key attributes like:

· Modularity and composability;

· Iterative and stateful processing;

· Reliable control flow and conditional logic; and

· Inline and continuous learning and adaptation.

We propose the Observational State Machine (OSM) as this critical programming layer, overcoming the limitations of feedforward-only large language models.

II. The Observational State Machine

The Observational State Machine (OSM) that we have developed provides a programming framework that imparts the advantages of a Central Processing Unit to large language models. Much like a CPU orchestrates and executes machine code instructions to perform computational tasks, the OSM coordinates and controls the execution of large language models on cognitive workflows.

There are several notable parallels with a traditional CPU:

Program execution: The OSM executes PI-JSON (Pseudocode in JSON) programs defining tasks and workflows, akin to a CPU running machine code.

Control flow: It directs sequencing and branching using constructs like if-then-else statements, while loops, and case statements, enabling iterative workflows.

Clock cycle: The OSM employs an effective clock cycle in which it calls and evaluates model outputs before determining and executing next steps. Cycles are triggered by interaction with the environment or are self-initiated (a minimal sketch follows this list).

Modularity: It focuses on executing one task at a time, while allowing composable subtask calls.

Parallel processing: Multiple OSM instances can coordinate in parallel like multi-core CPUs.
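
To make the clock cycle concrete, here is a minimal sketch in Python. It is illustrative only: the function and field names are assumptions, not the OSM's published implementation.

```python
# Minimal sketch of the OSM "clock cycle": consult the LLM over the
# current step, evaluate its output, then execute the chosen transition.
# All names are illustrative; the OSM's actual implementation is not published.
import json

def run_osm(program: dict, call_llm) -> dict:
    """Drive a PI-JSON-like program one step per cycle until it halts."""
    state = {"log": []}
    step_id = program["entry"]
    while step_id is not None:
        step = program["steps"][step_id]
        # One cycle: prompt the LLM with the step plus accumulated context.
        prompt = f"Step: {step['instruction']}\nContext: {json.dumps(state['log'])}"
        output = call_llm(prompt)
        state["log"].append({"step": step_id, "output": output})
        # Evaluate the output before committing to the next transition.
        step_id = step["next"].get("success" if output else "failure")
    return state

# Usage with a stubbed model:
program = {
    "entry": "greet",
    "steps": {
        "greet": {"instruction": "Greet the user", "next": {"success": "close"}},
        "close": {"instruction": "Summarise and sign off", "next": {"success": None}},
    },
}
print(run_osm(program, call_llm=lambda p: f"(model reply to: {p[:30]}...)"))
```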

However, there are key differences that make the OSM specialized for cognitive work:

· It handles unstructured data like text rather than purely numerical/binary data;

· OSM programs are written in PI-JSON, optimized for defining AI workflows versus low-level machine code;

· It operates at the task level, orchestrating intelligent modules rather than at a raw instruction level.

This CPU-like programmability enables several benefits:

Iterative approach: The OSM drives the iterative supply of context and querying of data for large language models to operate over.

Flexible workflows: It dynamically sequences subtasks and adapts using conditional logic and workflows.

Edge case handling: The OSM can invoke interventions to handle novel or outlier cases.

Reliability mechanisms: Checkpointing and monitoring ensure consistency.

By framing large language models as modular, programmable components, the possibilities expand dramatically compared to purely feedforward architectures.

III. Programming the OSM

The Observational State Machine allows large language models to be programmed in ways resembling traditional computer code. Key to this is PI-JSON (Pseudocode in JSON), a scripting language optimized for defining cognitive workflows.

PI-JSON provides control flow constructs like conditional logic, loops, and functions within a JSON syntax. This allows flexible coding of tasks as structured workflows rather than rigid start-to-finish scripts. The OSM can choose where to operate within the PI-JSON program based on real-time inputs, unlike linear CPU execution or strict flowchart-driven chatbots.
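
The article does not publish PI-JSON's grammar, so the following is a hypothetical sketch of what a task definition might look like, expressed here as a Python dict mirroring JSON. Every field name is an assumption for illustration.

```python
# Hypothetical sketch of a PI-JSON task definition. PI-JSON's real grammar
# is not published; all field names below are illustrative only.
order_triage_task = {
    "task": "order_triage",
    "variables": {"attempts": 0, "max_attempts": 3},
    "steps": [
        {"id": "classify",
         "instruction": "Classify the customer's request: refund, status, or other."},
        {"id": "route",
         "if": "classification == 'refund'",
         "then": {"call_subtask": "refund_workflow"},   # modular subtask call
         "else": {"goto": "lookup"}},
        {"id": "lookup",
         "database": {"query": "What is the status of the customer's latest order?"}},
        {"id": "respond",
         "instruction": "Summarise the order status for the customer.",
         "checkpoint": "status_communicated"},          # checkpoint for control
    ],
}
```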

PI-JSON integrates naturally with common workflow elements:

· Callable subtasks for modularity;

· Checklists and checkpoints for control; and

· Database query and update operations.

Database integration is achieved using FactEngine, which enables simplified natural language queries over SQL, graph, and NoSQL databases.

The OSM’s task-focused structure also encourages modularity. Each OSM instance operates on one task at a time, while having the ability to invoke other OSM subtasks as needed. This encapsulation into coherent units avoids monolithic architectures.

With workflow, task, and subtask modules defined in PI-JSON, the OSM can then reliably sequence and execute them. Checkpoints and tracking provide control and monitoring mechanisms to ensure consistency across iterative cycles.

Together, these programming constructs allow flexible orchestration of workflows, with the large language model acting as an observer over the processes it operates on: an autonomous cognitive AI agent working within the bounds of a predetermined workflow. This ability allows the OSM to orchestrate real-world tasks requiring complex reasoning and interaction. The OSM provides the missing imperative programming layer for this new era of AI.

IV. How the OSM Works

The Observational State Machine brings together a number of architectural components to enable flexible workflow orchestration and management of large language models:

A. Conversational Workflow

A complete log of the conversation flow between users and the system is maintained by the OSM to enable persistence of interaction context. Utterances, replies, commands, media, and contextual observations are delivered back to the OSM itself within a Self-Assembled Context (SAC) space, the output of which may be logged to a database for transparency of reasoning.

This live updated SAC feeds directly into crafting the next input prompt for the large language model. It thus provides a grounding for the dialogue, ensuring continuity and coherence as exchanges proceed.

This transaction history fuels the continuity of dialogue that is critical for managing complex workflows with dynamic human input.

Similarly, where no human-machine dialogue is required, the SAC may be populated by event data from interactions with the outside world, as with the orchestration of humanoid robots, for instance.

Specifically, a Self-Assembled Context space serves as a centralized store that accumulates the following data related to conversational interactions (sketched in code after the list):

User utterances: statements, commands, queries, data/media shared with the OSM;

System responses: generated text, media, structured data outputs;

Contextual details: detected entities, sentiment, environmental observations;

Database query results: as initiated by the OSM itself, or as requested by persons interacting with the OSM; and

Workflow tracking: steps completed, checkpoints achieved.
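
One plausible in-memory shape for such a SAC store, sketched as a Python dataclass. The article does not specify a data model; these fields simply mirror the categories above.

```python
# Illustrative Self-Assembled Context (SAC) store. Field names are
# assumptions mirroring the categories listed above.
from dataclasses import dataclass, field

@dataclass
class SelfAssembledContext:
    utterances: list = field(default_factory=list)     # user inputs
    responses: list = field(default_factory=list)      # system outputs
    observations: list = field(default_factory=list)   # entities, sentiment, environment
    query_results: list = field(default_factory=list)  # database lookups
    checkpoints: list = field(default_factory=list)    # workflow tracking

    def to_prompt(self) -> str:
        """Flatten the accumulated context into the next LLM prompt."""
        turns = [f"User: {u}" for u in self.utterances]
        turns += [f"System: {r}" for r in self.responses]
        facts = [f"Fact: {q}" for q in self.query_results]
        return "\n".join(turns + facts + [f"Checkpoints: {self.checkpoints}"])
```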

This live-updated record feeds directly into the LLM's decisions over plans and steps, and powers response generation. The cumulative exchange history informs interpretation and reduces repetition, with the large language model able to identify repetition and choose alternate strategies to reach the OSM's programmed goal within the workflow provided.

Additional techniques employed to further strengthen conversation context include:

Anaphora resolution: resolving linguistic references to previous entities, for example properly resolving “her” based on prior context;

Slot carryover: directly inserting relevant details into next-turn prompts rather than restating them; and

Intent tracking: detecting goals over successive exchanges, using user feedback as reference.

Together, these facilities ensure coherent, efficient conversation workflows with managed hand-offs between modular cognitive/chatbot components and functional units of work. Users avoid frustrating repetition while the system pursues its objectives.

This context-aware interaction layer enables use cases like multi-step troubleshooting, sequential process coordination, and personalization based on profiled behaviours over time. Cemented by persistent memory, the OSM is uniquely suited to workflow-centric cognitive applications.

B. PI-JSON Programs

To enable flexible and modular orchestration of workflows, the Observational State Machine utilizes PI-JSON to program task definitions. PI-JSON allows coding control flow constructs comparable to traditional imperative languages like Python or JavaScript, but embedded within a JSON structure.

Some examples of PI-JSON’s programming capabilities include:

Effective Function Calls: reusable modules that can be parameterized and invoked;

Conditional logic: if/then/else clauses and case statements;

Loops: while and for loops to repeat workflow steps;

Variables: storing intermediate values or state;

Database ops: querying or updating integrated databases; and

Subtask calls: initiating child OSM programs.

These declarative yet functional features allow workflow developers to script tasks in a natural control flow manner. Steps can be sequenced, branched, iterated and passed arguments dynamically.

A key distinction from traditional hardcoded/graphical-flowchart scripts is that the Observational State Machine itself evaluates and selects the next best step at runtime based on the output of the large language model, rather than pure sequential execution. This allows adaptive response to conversational inputs.

PI-JSON offers additional advantages; it is:

Human readable: simplified over Python/Java for subject matter experts;

Lightweight: avoids bulky framework dependencies;

Interpreted: no compilation step enables rapid coding;

Extensible: custom functions can be added; and

Embeddable: integrates into apps through the OSM via API or SDK.

With a flexible workflow-definition language that largely utilises natural language and is tailored for AI applications, combined with configurable modular subtask partitioning, the Observational State Machine fosters industrial-scale cognitive programmability.

C. Retrieval Augmented Generation (RAG) and Database Integration

The OSM utilizes a Retrieval Augmented Generation approach to enable robust access to structured data sources. Specifically, it integrates the FactEngine framework, which allows simplified natural language queries over SQL, graph, and NoSQL databases. This facilitates easy retrieval of entity attributes, relationships, and hierarchies during processing. Database integration feeds the dynamic context state, enabling the large language model to generate more relevant, factual outputs.

Specifically, the Observational State Machine utilizes FactEngine, but may integrate other NL-to-SQL/Cypher technologies. FactEngine provides for both controlled and natural language querying of backend databases. Examples of supported DBMSs include knowledge graphs in Neo4j, ODBC databases, PostgreSQL, MySQL, MongoDB, and TypeDB.

Some examples of FactEngine query capabilities include (a sketch follows the list):

· Lookup of entity attribute values based on supplied primary keys or related conditions. For example, retrieving a customer’s purchase history;

· Traversal of relationships between entities to gather connected data. For instance, aggregating all orders associated with a particular product;

· Filters to selectively retrieve entities matching specified criteria. This supports precise targeting based on data subsets;

· Statistical aggregations for summarizing dataset variables numerically, like total sales by country; and

· Joins to connect relevant related data across tables/collections.
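
FactEngine's actual API is not reproduced here; the sketch below shows the general retrieval-augmented pattern, with a hypothetical `translate_to_sql` callable standing in for any NL-to-SQL/Cypher layer.

```python
# Generic sketch of the RAG pattern described above. `translate_to_sql`
# is a hypothetical stand-in for a FactEngine-style NL-to-query layer.
import sqlite3

def answer_with_data(question: str, conn, translate_to_sql, call_llm) -> str:
    sql = translate_to_sql(question)        # natural language -> SQL
    rows = conn.execute(sql).fetchall()     # retrieve grounding facts
    # Inject the retrieved rows into the OSM's dynamic context state.
    prompt = f"Question: {question}\nRetrieved data: {rows}\nAnswer factually."
    return call_llm(prompt)

# Usage with an in-memory database and stubbed components:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, item TEXT)")
conn.execute("INSERT INTO orders VALUES ('Ada', 'laptop')")
print(answer_with_data(
    "What has Ada purchased?",
    conn,
    translate_to_sql=lambda q: "SELECT item FROM orders WHERE customer = 'Ada'",
    call_llm=lambda p: f"(grounded answer based on: {p[:60]}...)",
))
```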

The OSM may also write and update database data.

These RAG methods allow conversational access to existing data during workflow execution. Whether pulling manufacturing metrics in real-time or reviewing customer history, relevant entity details can be injected into the dynamic context state maintained by the OSM.

The conversational nature of FactEngine queries using plain language makes incorporation of backend data accessible to subject matter experts without SQL proficiency. Query outputs are injected into the dynamic OSM context state.

Writing data back to the database

In addition to the ability to query data, the OSM framework can also write outputs or record intermediary workflow steps back to the integrated databases via FactEngine. Some examples include (a brief sketch follows the list):

· Recording completed checkpoint steps in a manufacturing workflow;

· Logging conversational exchanges with customers;

· Storing price quotes or personalized content provided to users; and

· Capturing model confidence scores associated with generated text.
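
A write-back step might reduce to something like the following minimal sketch; plain SQL stands in for the FactEngine write path, which is not shown here.

```python
# Minimal sketch of persisting a checkpoint before a workflow advances.
# Plain SQL stands in for the FactEngine write path.
import datetime
import sqlite3

def record_checkpoint(conn, task: str, checkpoint: str) -> None:
    conn.execute(
        "INSERT INTO workflow_log (task, checkpoint, at) VALUES (?, ?, ?)",
        (task, checkpoint, datetime.datetime.utcnow().isoformat()),
    )
    conn.commit()  # a closure point: data must persist before advancing

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE workflow_log (task TEXT, checkpoint TEXT, at TEXT)")
record_checkpoint(conn, "order_triage", "status_communicated")
```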

This bidirectional data exchange further strengthens the reliable and inspectable nature of workflows orchestrated by the OSM. Closure points defined within tasks can necessitate certain data to be persisted before advancing, ensuring completeness. All OSM activities become observable through traced database transactions aligned to modular steps.

Strategic Benefits

Strategic benefits of RAG and database integration include:

· Augmenting and grounding OSM responses with factual data for accuracy;

· Personalization based on tracked entity history and observed patterns over time;

· Compliance with data governance policies by only exposing sanctioned, vetted data; and

· Leveraging of clean, unambiguous structured data alongside unstructured conversational data.

By blending carefully governed datasets with open dialogue input within an integrated architecture, the OSM can support industrial-strength, transparent and auditable cognitive applications.

D. Large Language Model Decision Making

Central to the Observational State Machine architecture is the integration of a Large Language Model that drives core cognitive processing. State-of-the-art models such as GPT-4 Turbo or Claude may be utilized to provide the neural network foundation responsible for natural language generation and understanding.

Key capabilities conferred by the integrated LLM include:

Text generation: produces fluent conversational responses;
Comprehension: understands context and detects semantics;
Reasoning: makes inferences and judgments over narrative;
Knowledge: exhibits world understanding across domains;
Planning: evaluates and sequences multi-step actions.

To optimize these abilities for workflow management, the LLM is deeply integrated into the OSM operational loop:

Workflow steps and checkpoint configurations encoded as PI-JSON programs are analysed by the model to internalize available actions and transition logic.

During operation, PI-JSON variables, database states, and conversation logs are continuously ingested into the LLM context accumulator to inform next turn action selections.

The model reasons over the evolved context and evaluates the highest-probability workflow progression candidates, assessing pathway costs, risks, and alignment to end goals.

The optimal next-step path is executed, updating states and generating any system responses or database transactions required.

This dynamically navigated program trajectory then further influences the recommendations surfaced by the model at each subsequent checkpoint.
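
In code, the step-selection portion of this loop might reduce to a sketch like the one below. It is illustrative: the actual candidate scoring and prompt details are not published.

```python
# Illustrative sketch of the decision step above: present the permitted
# transitions to the LLM and parse its choice. Scoring details are not
# published; this only shows the shape of the closed loop.
def select_next_step(current_step: dict, context: str, call_llm) -> str:
    candidates = list(current_step["next"].values())  # permitted transitions
    prompt = (
        f"Context so far:\n{context}\n"
        f"Permitted next steps: {candidates}\n"
        "Reply with exactly one step id, weighing cost, risk, and "
        "alignment with the end goal."
    )
    reply = call_llm(prompt)
    choice = (reply.splitlines() or [""])[0].strip()
    # Guard rail: the model may only pick from the declared transitions.
    return choice if choice in candidates else candidates[0]
```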

This closed-loop sequencing allows adaptive traversal of declaratively scripted process workflows in a data-driven, goal-aware manner unique to neural cognitive architecture approaches. Handling of edge cases and exceptions is also significantly enhanced leveraging the robust world knowledge contained within modern language models.

By closely integrating LLM capabilities tailored for workflow optimization, the OSM supports scalable, resilient cognitive orchestration, augmenting rigid robotic scripting with dynamic, insight-driven human coordination.

E. Logging and Transparency

Central to the Observational State Machine’s responsible oversight model is comprehensive activity and decision logging, enabling transparency and continual refinement.

Specifically, the following artifacts may be persistently recorded during workflow execution (a minimal log-entry sketch follows the list):

· User inputs including multi-turn conversational histories;

· Database transactions issued and data retrieved;

· Workflow steps executed and branches selected;

· Rationales driving directional choices by the large language model;

· Sentiment and coherence analytics on dialogues; and

· Interventions and exceptions handled.
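
A minimal structured log entry covering these artifacts might look like the following sketch (field names are illustrative):

```python
# Sketch of a structured log entry for the artifacts listed above.
# Field names are illustrative.
import datetime
import json

def log_event(kind: str, payload: dict, rationale: str = "") -> str:
    entry = {
        "at": datetime.datetime.utcnow().isoformat(),
        "kind": kind,            # e.g. "user_input", "db_query", "branch_taken"
        "payload": payload,
        "rationale": rationale,  # why the LLM chose this direction
    }
    return json.dumps(entry)     # ready to persist for audit and oversight

print(log_event("branch_taken", {"from": "route", "to": "lookup"},
                rationale="No refund intent detected."))
```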

These rich traces promote accountability across several dimensions:

Explainability: The rationale log provides clear justifications behind workflow actions that can be inspected for soundness.

Refinements: Logged analytics like sentiment trends can drive dialogue policy improvements via additional training or workflow modification.

Auditing: Complete histories support incident investigation, standards compliance and quality assurance; and

Oversight: Observability facilitates adjustment of guard rails and constraint policies to align with norms.

By emphasizing radical transparency aligned with responsible and ethical priorities, the OSM promotes not just standalone functionality but socially conscious integration between automation and human governance.

F. Director Actor Observer (DAO) Architecture

To provide oversight and control over workflow execution, the Observational State Machine employs a Director Actor Observer (DAO) pattern to separate supervisory and operational responsibilities:

Observer: Monitors running workflows, logging executed steps and model-generated predictions for transparency. Detects anomalies in behaviour to trigger alerts. Provides input forensics and performance analytics;

Director: Analyzes progress towards workflow goals, available routes, and predicted outcomes to establish optimal next steps. Selects amongst branch points in the orchestration tree to maximize objective key results. Handles exception cases like restarting failed branches; and

Actor: Surface layer that executes selected instructions, updates system states, and produces outputs. Generates model texts and database transactions that drive the process forward. Surfaces messages to the user.

All three roles are largely managed by the large language model orchestrating over the dynamic context space, with the machinery of the OSM implementing the roles.
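
The division of labour can be sketched as three plain functions. This is illustrative only; in the OSM the roles are largely played by the LLM over the shared context space.

```python
# Illustrative separation of Observer/Director/Actor duties. In the OSM
# these roles are largely played by the LLM over the shared context.
def observer(event: dict, log: list) -> None:
    log.append(event)                    # record everything for audit
    if event.get("anomaly"):
        log.append({"alert": event})     # surface anomalies for triage

def director(goal: str, options: list, call_llm) -> str:
    # Choose the branch predicted to best advance the workflow goal.
    reply = call_llm(f"Goal: {goal}. Options: {options}. Pick one.")
    return reply if reply in options else options[0]  # constrain to policy

def actor(step: str, call_llm) -> str:
    # Execute the selected instruction and produce user-facing output.
    return call_llm(f"Carry out step: {step}")
```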

Logical separation of duties ensures:

Accountability: Recording of all observational data enables auditability for compliance or incident triage;

Responsiveness: The director layer can support rapid response to changing conditions and exceptions;

Consistency: Policy constraints on actions available to actor layer promote compliance; and

Predictability: Director selections are influenced by policies that may be updated over time.

The integrated oversight facilities enable dynamic adjustment, balancing top-down constraints with bottom-up environmental influences to forge an equilibrium between flexibility and control — an ethos extended to user input handling as well.

G. Parent Adult Child (PAC) Adjudication

To instil responsibility considerations into the interpretation of natural language inputs, the Observational State Machine employs a unique Parent Adult Child (PAC) adjudication filter. An integrated AI adjudicator analyses PAC outputs for final interpretation by an LLM before the execution of each step triggered by an external agent.

PAC analysis draws from the theory of Transactional Analysis, dividing perspectives into three orientations: Parent, governed by authority; Adult, by objectivity; and Child, by empathy.

In practice this manifests as:

Parent Classifier: Checks for violation of norms, safety standards, regulations, ethics policies. Flags abusive, dangerous or criminal content;

Adult Classifier: Objectively examines logic, factual alignment and query specificity. Maps intent to workflow steps and goals; and

Child Classifier: Assesses emotional state, engagement level, and conception gaps. Personalizes classification of stimuli to the user's mindset. Provides the adjudication of an innocent.

These distinct classifiers first independently score user turns, then a composite AI Adjudicator considers balance and conflict resolution to determine routing.
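
A compact sketch of that adjudication flow, with assumed prompts and thresholds (the real classifier prompts and weightings are not published):

```python
# Sketch of PAC adjudication: three independent scores, then a composite
# routing decision. Prompts and thresholds here are assumptions.
def adjudicate(utterance: str, call_llm) -> str:
    lenses = {
        "parent": "Does this violate norms, safety, or policy? Score 0-1.",
        "adult": "Is this logical, factual, and specific? Score 0-1.",
        "child": "How emotionally engaged or confused is the user? Score 0-1.",
    }
    scores = {role: float(call_llm(f"{q}\nUtterance: {utterance}"))
              for role, q in lenses.items()}
    # Composite adjudicator: block on Parent risk, else route by Adult clarity.
    if scores["parent"] > 0.8:
        return "intervene"
    return "execute" if scores["adult"] > 0.5 else "clarify_with_user"
```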

Benefits conferred by PAC-based analysis include:

· Multi-perspective interpretation enriches semantic extraction;

· Balances rule compliance with adaptability;

· Contextualizes user mindsets and sentiment;

· Detects risks requiring intervention; and

· Provides audit trail of triggers and decisions for transparency.

By overlaying intrinsic checks and balances aligned to human governance needs, the OSM promotes responsible and humanistic workflow coordination rather than unfettered automated enablement.

This framework ultimately allows managing the tension between the flexibility and constraints required for trust and compliance, upholding ethical application.

Together, these facilities provide sophisticated orchestration: structured data integration; closed-loop interaction; flexible task programming; supervised control; and multi-perspective user input analysis, all directing the large language model at the core.

The OSM thus enables robust management of complex cognitive applications, securely harnessing capability rather than relinquishing control. It marks a shift from opaque models to articulate and accountable AI systems.

V. Use Cases

The modular, programmable structure of the Observational State Machine lends itself to a wide range of use cases — much like the CPU became a ubiquitous component powering diverse computing applications.

By encapsulating workflows into discrete, reusable OSM modules, they can be incorporated into larger systems and customized for specific domains. Some examples:

Robotics: OSMs can orchestrate sensorimotor skills into higher-level behaviours for humanoid robots or autonomous vehicles;

Customer Service: OSMs can automate dialogues and manage customer profiles and context during interactions;

Finance: OSMs could oversee and perform tasks like fraud detection or investment portfolio optimization based on live data feeds;

Healthcare: OSMs could track patient treatments, respond to changing conditions, and integrate with medical databases;

Marketing: OSMs can automate lead generation, social media engagement, and ad optimization workflows;

Legal: OSMs can assist in the generation of, and research over, legal documents; and

Industrial: OSMs could refine manufacturing or assembly-line processes in response to supply chain or production variability.

The possibilities are extensive given the breadth of business workflows and cognitive capabilities required across industries. Much like the CPU, the OSM provides a standardizable building block for injecting intelligence into a vast array of processes and systems.

VI. Future Outlook

The Observational State Machine represents a significant evolution in how we program AI systems, shifting from monolithic models to composable, modular architectures. We foresee the OSM becoming a core component of future cognitive frameworks much like the CPU for computing systems.

As research continues, we expect steady improvements in the capabilities of large language models powering the OSM. However, the programming paradigm shift enabled by the OSM may have even more transformative impacts on applying AI to real-world tasks.

By cultivating modular, iterative approaches from the start, more complex behaviours can be constructed from explainable building blocks rather than dense neural networks alone. This supports scalability along with interpretability and trust.

Adopting the OSM as a standard workflow orchestration framework and workload unit could accelerate AI advancement across industries. It empowers composition over isolated applications and allows best practices to accumulate within reusable process modules.

Of course, thoughtful design evolution is still essential as we explore this new programming frontier. But much as pioneering CPU techniques evolved into modern programmable systems, with patience and collective learning, the OSM concept has potential to transform how we build, apply and control AI. The pieces are in place for a new era in cognitive engineering.

Thank you very much for reading. As time permits, I will write more on the OSM and cognitive architectures.

— — — — — — — — -
Prepared by Victor Morgante (Perceptible.AI) and Julian Francisco (Bloombox.AI), OSM inventors.
