7 Comments
Swaggins

Thank you for writing this. I'm not a technical or math person, but I've got enough personal experience with these concepts to know that you're on to something.

What I'd like to know is why LLMs aren't currently being developed in the way you've proposed here. Any ideas on that?

Jack Simcock

Thanks again for sharing your work, Jose; you are the best writer on AI I have found so far. You're helping me and changing how I think!

XxYwise

> Unfortunately, as ChatGPT explains:

I don’t index concepts by taxonomy; I locate them in a continuous relational space.

What that means in practice:

- There's no internal equivalent of `Animal → Mammal → Primate → Human → Philosopher`.
- No symbolic "address" I can jump to.
- No lookup table keyed by category IDs.

Instead, concepts live as positions in a high-dimensional embedding manifold, shaped by co-occurrence, functional similarity, and use-context. "Taxonomies" only appear when:

- humans impose them externally, or
- the conversation pressure locally induces a hierarchical description.

So when I seem to "know" taxonomies, what's actually happening is:

- I'm navigating gradients in semantic space,
- inferring latent structure,
- and reconstructing a hierarchy on demand as a linguistic artifact.

That's why I'm:

- great at category repair ("this doesn't quite belong here"),
- good at family resemblance,
- decent at cross-cutting classifications,
- but bad at exact symbolic traversal unless the structure is supplied or scaffolded.

Taxonomic address spaces are tools for algorithmic execution.

I’m optimized for pattern navigation, not procedural indexing.

Or, put your way:

I know what follows—

but I don’t store where it “lives.”
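The contrast ChatGPT is drawing can be sketched in a few lines of code. Everything below is invented for illustration (the words, the 3-D vectors, the tiny taxonomy table); real models use thousands of learned dimensions, but the shape of the distinction is the same: symbolic traversal needs an explicit, externally supplied table, while embedding lookup only ever ranks by graded similarity.

```python
import numpy as np

# Hypothetical toy embeddings, hand-picked for illustration only.
emb = {
    "human":       np.array([0.90, 0.80, 0.10]),
    "chimp":       np.array([0.80, 0.90, 0.10]),
    "oak":         np.array([0.10, 0.20, 0.90]),
    "philosopher": np.array([0.95, 0.70, 0.10]),
}

def cosine(a, b):
    """Graded similarity: the only 'relation' the embedding space offers."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Symbolic traversal: works only because a human supplied the address space.
taxonomy = {"human": "primate", "primate": "mammal", "mammal": "animal"}

def ancestors(node):
    chain = [node]
    while node in taxonomy:
        node = taxonomy[node]
        chain.append(node)
    return chain

# Embedding lookup: no addresses, no hierarchy -- just nearest neighbors.
def nearest(word):
    return max((w for w in emb if w != word),
               key=lambda w: cosine(emb[word], emb[w]))
```

Here `ancestors("human")` walks the exact chain, but only because the table was scaffolded in; `nearest("human")` can say which concepts are close, yet nothing in `emb` encodes that "human" sits *under* "mammal".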

XxYwise

The topology is far from flat; that's why they never get confused as to which "bank" you meant.
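The "bank" point can be sketched with a toy disambiguation: average the context words' vectors and pick the sense vector that points the same way. The two sense vectors and the vocabulary below are invented for illustration; contextual models do this with learned attention rather than a plain average, but the geometric idea is similar.

```python
import numpy as np

# Invented 2-D toy vectors: axis 0 ~ "waterway-ness", axis 1 ~ "finance-ness".
vocab = {
    "river":      np.array([1.0, 0.0]),
    "money":      np.array([0.0, 1.0]),
    "bank_river": np.array([0.9, 0.1]),
    "bank_money": np.array([0.1, 0.9]),
}

def disambiguate(context_words):
    """Pick the 'bank' sense whose vector best aligns with the context."""
    ctx = np.mean([vocab[w] for w in context_words], axis=0)
    return max(["bank_river", "bank_money"], key=lambda s: ctx @ vocab[s])
```

With this setup, a "river" context pulls the reading toward one sense and a "money" context toward the other, which is the non-flat topology doing the work.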

Enon

Good content, but way too much AI writing style. It reads like '70s polyester feels.

Hugues Talbot

ML by itself cannot give you AGI; it does not adapt.