Adrian Tchaikovsky’s “The Expert System’s Champion” and LLMs as Perspective Simulacra Engines and/or Frankensteinian Sacks of Neural Meat
Like many, I’ve been playing with LLMs over the past two years, and my own understanding of these technologies continues to evolve the more I use them. There are obviously many problems with these technologies and how so-called AI is being developed, implemented, and used, but it’s also been a long time since I’ve been as fascinated by a technology or as impacted by its use. They are deeply weird and unpredictable, both powerful and macabre.
LLMs as Perspective Simulacra Engines
One way I’ve been thinking about LLMs is as engines for simulating different perspectives. LLMs are mathematical models trained on huge amounts of content, much of it writing from different people’s perspectives. This data is represented as connections with different weights. People, especially at first, tend to use very generic prompts, so what they get back is influenced by the “heaviest” weights, which tend to be pretty average/mid/beige. As you learn to use more specific prompts and give it more context, you trigger “lighter” or “deeper” connections, which results in more unique, weird, and useful output.
Don’t take any of this as a technically accurate account of how LLMs work.
What I’m often doing is giving the LLM some original content (an idea, proposal, etc.) and trying to get it to simulate a specific perspective through both the prompt and the content. My partner says this is a very librarian way of thinking about LLMs: I’m kind of using them as a “library” of simulated perspectives that I can “retrieve” and apply to my work (superficially this is like a library, but it’s also clearly not at all like a library, since there is nothing to collect, retrieve, or curate). For example, I’ve asked LLMs to ask me questions about a program I’m developing from the perspective of future attendees, and to review an idea from the perspective of someone passionate about our pedagogies or values. Prompts like this, given along with well-formed original documents, can be an extremely powerful way of rapidly testing and revising ideas.
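To make that pattern concrete, here’s a minimal sketch of what “retrieving” a simulated perspective might look like in code. It assumes the OpenAI Python SDK; the model name, the persona text, and the file name are all illustrative placeholders, not anything from my actual workflow.

```python
# A minimal sketch of "retrieving" a simulated perspective from an LLM.
# Assumes the OpenAI Python SDK (`pip install openai`) and an API key in
# the environment. The persona, file name, and model are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The original content you want examined (a proposal, idea, draft, etc.)
proposal = open("workshop-proposal.md").read()

# The perspective you want the model to simulate
persona = (
    "You are a first-time attendee at an academic library conference. "
    "You are curious but short on time and skeptical of jargon."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": persona},
        {
            "role": "user",
            "content": (
                "Read the workshop proposal below. Ask me the five "
                "questions you would most want answered before deciding "
                "to attend.\n\n" + proposal
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

The same pattern works in any chat interface; the code just makes the two moving parts explicit: the persona you ask the model to simulate, and the well-formed original document you give it to work against.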
There’s another way of seeing this metaphor.
LLMs as Frankensteinian Meat Sacks of Collected Neural Tissue
I recently finished “The Expert System’s Champion,” the second in a series by science fiction author Adrian Tchaikovsky. Tchaikovsky is very good at exploring what non-human sentience might be like, and his “Children of Time” series is particularly good in this respect. This book felt more mythic than his others, reminding me a little of Becky Chambers’ Monk & Robot series or Ursula Le Guin’s Earthsea series.
I won’t try to explain the whole plot, but the next two paragraphs do contain spoilers.
The book is set on a planet where colonization mostly failed due to a hostile biology. Generations later, what remains is something like medieval villages organized around engineered “hives” of “wasps” that pick individuals to fill roles like lawgiver and doctor, burrow into them, and turn them into “expert systems” that more or less take the person over to fulfill the role needed to manage the community. These wasps were one of the ways the original colonists managed to survive on the planet. All this has been forgotten, and the people who live in the villages have their own myths and understandings of these systems and the world they live in.
In this second book, the colonists encounter a species of giant snails with another sort of hive structure organized around a truly giant queen snail. This species survives by hybridizing with anything it encounters, integrating whatever it consumes into itself and the hive, including memories and, for lack of a better phrase, the ways of being of the things it consumes. There was a splinter group of colonists who were consumed by these snails shortly after they arrived.
“There is nothing of us but Leviathan and her dreams,” uttered the thing called Geordi. “I told you. Before she took us, I told you. Whatever she swallows becomes her mind. Whatever she takes within herself becomes a dream in her, and though she is slumberous and insensate, those dreams can think for her and advise her and give her a mind and a purpose. And so she calls to the Children, our Children, in the voice we give her, and they act as her hands and haul her across the land. You stand within her, and so you will be her dreams soon enough, and perhaps you will guide her purpose when she dreams you.”
Adrian Tchaikovsky in “The Expert System’s Champion”
In this quote, we meet one of these colonists, entombed in the wall of the queen, “explaining” what it is like to be part of her mind. The scare quotes are because it’s clear the consumed colonists don’t live anymore, but rather have become part of the snails. It’s not at all clear whether the snails themselves are “conscious” beyond the abilities they gain from those they consume.
This is another way of seeing LLMs: as giant meat sacks of neural tissue, harvested by a marauding evil doctor, spliced together and reanimated by the electrical sparks of your prompts.
I wish I could say all this was leading somewhere deeper than wanting to write that last paragraph.