Large Language Models & The Library of Babel

Questions of Time, Predestination, and Paradox

Victor Morgante
4 min read · Jun 17, 2024
The Library of Babel. Royalty free image by DALL-E 3 and Victor Morgante

If a Large Language Model (LLM) produces a sequence of text, complete and replete with bullet points, logical argument and explanation, clear diction, no spelling mistakes, no revisions or editing, foresight into its presentation to an audience, and a logically robust conclusion without trailing or meandering…

…then the last word written was thought, it would seem, before the first word was written. So…

If an LLM writes as fast as it can, say 20 words a second, and the last word is already known, why not simply splat the entire result onto the screen at once?

We know the answer to that, of course: tokens must pass their way through the layers of the LLM (say 96 of them or more), be manipulated by mathematics, and have the transformer network feed information from the tokens already generated back into the forward-streaming sequence, shaping that stream to affect the final outcome, even though the model seems to have already made up its mind about what it aims to generate.
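That one-token-at-a-time discipline can be sketched in a few lines. The "model" below is a toy lookup table standing in for a trained transformer (an assumption for illustration, not any real architecture); what matters is the loop: each new token is predicted only after the entire context so far has been re-read, which is why text streams out word by word rather than landing all at once.

```python
# Toy next-token table standing in for a trained network (hypothetical).
NEXT_TOKEN = {
    ("the",): "library",
    ("the", "library"): "of",
    ("the", "library", "of"): "babel",
}

def generate(prompt, max_new_tokens=10):
    """Emit tokens one at a time, re-reading the full context each step."""
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        nxt = NEXT_TOKEN.get(tuple(tokens))  # full context -> one token
        if nxt is None:  # nothing more to predict
            break
        tokens.append(nxt)  # the new token becomes context for the next step
    return tokens

print(generate(["the"]))  # → ['the', 'library', 'of', 'babel']
```

Even here, the final word is implicit in the table from the start, yet it can only appear after every intermediate token has been produced in order.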

In many respects an LLM resembles an Enigma machine, its rotor wheels and plugs already set, waiting for a Turing to come along and find a way to quickly and easily find the…
