Collapsing the “information wave function” with LLMs

Graphics: DALL-E/OpenAI

Large language models (LLMs) have taken the world by storm, generating everything from poetry to technical documents. They have been praised for their ability to draw unexpected connections and even surprise us with their observations. But do these models actually know anything? Or is the knowledge we attribute to them merely an illusion – a projection of our own interpretive power?

LLMs and the quantum superposition of meaning

This brings us to an intriguing conceptual analogy rooted in quantum mechanics: wave function collapse. In quantum mechanics, particles exist in a state of superposition, maintaining many possible states at once until they are observed. It is the act of observation that “collapses” this superposition into a specific outcome. Likewise, LLMs generate a wide range of potential responses and connections, existing in a kind of informational superposition. However, these outputs do not yet constitute knowledge; they are probabilities and patterns that hover in the realm of potential meaning.

When we engage with an LLM’s output, our minds act as the observer, collapsing this information wave function into something concrete and meaningful. An LLM generates text based on statistical associations in its training data, offering linguistic scaffolding – words and phrases that sketch out potential ideas. Only when a person reads, interprets and contextualizes the output does it become something more – knowledge.
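To make the metaphor a little more tangible, here is a minimal, purely illustrative sketch: before a token is chosen, a language model holds a probability distribution over many possible continuations – the “superposition” – and sampling picks exactly one of them. The tokens and probabilities below are invented for the example, not taken from any real model.

```python
import random

# Toy next-token distribution: the model's "informational superposition".
# The vocabulary and probabilities are made up purely for illustration;
# a real LLM spreads probability over tens of thousands of tokens.
next_token_probs = {
    "knowledge": 0.40,
    "meaning": 0.25,
    "noise": 0.20,
    "wisdom": 0.15,
}

def collapse(distribution: dict[str, float]) -> str:
    """Sample one token from the distribution - the moment the 'wave' collapses."""
    tokens, weights = zip(*distribution.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print("Superposition:", next_token_probs)
print("Collapsed to:", collapse(next_token_probs))
```

Of course, even this collapsed token is still just text; in the spirit of the analogy, the more important collapse happens only when a human reader interprets it.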

Partners, not substitutes, in knowledge creation

This interplay highlights an important truth about the relationship between humans and artificial intelligence. Although LLMs can reveal connections that might otherwise go unnoticed, they do not create knowledge in the sense that humans do. Their power lies in generating new combinations of “cognitive units” drawn from huge amounts of data. However, it is up to us to navigate these possibilities, sift through the noise and amplify the signal. In a sense, an LLM is like a cosmic linguistic soup filled with endless patterns, but it is the human observer who crystallizes one particular pattern into something meaningful.

So let’s stretch this analogy a little further. A recent article suggests that LLMs also influence collective intelligence – how groups create, access and process information. By providing a vast network of interconnected ideas, LLMs reshape not only individual knowledge creation, but also group decision-making. This collective dimension reinforces the quantum analogy: it is not just a single observer collapsing a single wave function, but an entire society collectively engaging with artificial intelligence to create shared knowledge. In this dance, LLMs become partners that enhance our collective capabilities, not mere substitutes for human cognition.

So is there knowledge inside LLMs themselves? Not exactly. What they hold is the potential for knowledge – a vast, swirling ocean of possibilities. It is our act of interpretation that turns these potentials into concrete insights, transforming raw output into something that has meaning, value, and sometimes even wisdom. However, we must not let ourselves be charmed by the eloquence of artificial intelligence, and we should be careful not to project metaphysical qualities onto these models. In this relationship we find the essence of what it means to be human in an increasingly digital world.

Perhaps my next story will be about a certain cat – one that is both alive and dead until you observe it!