
On Legibility

Markets don’t reward prediction. They reward the capacity to see what is already there but not yet articulable.


There’s a passage in Borges (“On Exactitude in Science”) where the cartographers of an empire create a map so detailed it eventually covers the entire territory it describes. The map becomes useless precisely at the moment of its completion. This is roughly the situation with information in markets.

The efficient market hypothesis, properly understood, makes no claim about prices being “correct.” It describes the economics of edge: any exploitable pattern, once legible, will be arbitraged away. The map devours itself.

This creates a peculiar epistemological condition. In most domains, knowledge accumulates. In markets, alpha decays. The half-life of an insight is inversely proportional to its clarity.
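That half-life claim can be made concrete with a toy decay model. This is a sketch under invented assumptions, not anything measured: treat “clarity” as a single number, and let the half-life of an insight’s edge be its reciprocal.

```python
def remaining_edge(initial_edge: float, clarity: float, years: float) -> float:
    """Toy model of alpha decay: once an insight is legible, its edge
    halves on a fixed schedule, with the half-life taken (by assumption)
    to be inversely proportional to clarity. All units are made up."""
    half_life = 1.0 / clarity
    return initial_edge * 0.5 ** (years / half_life)

# A crisp, easily replicated signal vs. a fuzzy, context-heavy one.
for clarity in (4.0, 0.25):
    print(f"clarity={clarity}: edge left after 1y = "
          f"{remaining_edge(1.0, clarity, years=1.0):.3f}")
```

The crisp signal keeps about 6% of its edge after a year; the fuzzy one keeps about 84%. The numbers are arbitrary, but the asymmetry is the point.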

We tend to think of market insight as knowing something others don’t. But this framing is subtly wrong. By the time something can be cleanly articulated as knowledge (a fact, a signal, a pattern) it’s usually already in the price.

A more useful frame: insight is the capacity to see what is already there but not yet articulable.

Consider what happens when an analyst develops conviction on a name. The thesis, when finally written, appears as a sequence of logical propositions. But the conviction preceded the articulation. Something was legible to them before they could explain why.

This pre-linguistic pattern recognition is not mysticism. It’s the accumulated weight of context, of thousands of hours spent in a particular corner of the market, crystallizing into judgment. The written thesis is archaeology. An after-the-fact excavation of a structure that was already built.

The interesting question for AI in this context: can machines accelerate the process by which latent patterns become legible to humans?

Note that this differs from automation. Automation replaces human judgment. Acceleration augments it. The distinction matters because markets are adversarial in a specific way. They punish legibility itself. Any fully automated trading signal is definitionally arbitrageable.

What remains valuable is the half-formed intuition, the not-quite-pattern, the thing that requires a human to squint at it and say “there’s something here.” The analyst’s job is to be the first to articulate what has not yet been articulated.

AI’s role, then, is to surface the pre-articulable faster.

There’s an irony in the name “Right Tail.” In probability, the right tail represents rare events of outsized positive magnitude. The defining feature of right-tail outcomes is that they cannot be predicted from base rates. By definition, they are the deviations.
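To make “outsized” concrete, here is a minimal sketch, using a lognormal as a generic stand-in for a heavy-tailed distribution of outcomes (the choice of distribution and parameters are illustrative, not a claim about market returns):

```python
import random

random.seed(0)

# Sample outcome magnitudes from a heavy-tailed (lognormal) distribution,
# then measure how much of the total the right tail carries.
draws = sorted(random.lognormvariate(0.0, 2.0) for _ in range(10_000))

total = sum(draws)
top_1pct = sum(draws[-100:])  # the 100 largest of 10,000 draws

print(f"median draw (the 'base rate'): {draws[len(draws) // 2]:.2f}")
print(f"share of total from top 1%:    {top_1pct / total:.1%}")
```

With a tail this heavy, a small fraction of draws carries a disproportionate share of the sum, which is exactly the information the median throws away.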

And yet the entire machinery of institutional research is organized around base rates. Coverage universes, consensus estimates, relative value. The infrastructure is optimized for the body of the distribution.

This is not a criticism. You cannot run a research operation on the assumption that everything is a special case. But it does suggest where the leverage is. The returns to better base-rate analysis are competed away. The returns to faster legibility of tail events are not.

We sometimes hear that AI will “democratize” access to information. True in a narrow sense. Anyone can now query an LLM about a 10-K. But the claim misunderstands where scarcity lies.

Information was never the bottleneck. Sell-side desks have been drowning in information for decades. The bottleneck is the conversion of information into something actionable before it becomes consensus.

If anything, the flood of AI-generated analysis raises the bar. When everyone has access to the same summarization tools, the marginal value of summarization collapses. What remains scarce is taste. The judgment to know which threads to pull, which anomalies matter, which patterns are noise.

AI doesn’t solve the taste problem. But it can compress the time between “something feels off here” and “here’s what’s actually happening.” That compression is worth quite a lot.

Markets have a way of punishing hubris. Any claim to have “solved” them is a claim to have solved a complex adaptive system that specifically evolves to defeat solutions. The correct posture is prepared opportunism. The readiness to recognize and act on legibility when it briefly appears.

This is, incidentally, the only honest framing for what tools like ours do. We are building a lens. The territory remains irreducibly complex. But the map can be drawn faster.

Whether that matters depends on what you do in the interval between seeing and consensus. That part is still up to you.