Assemblers, Assemble!
Hello, DALL-E. Class is in session.

Copyright 2024. The Cagle Report

This blog post started in a discussion thread about the nature of intelligence, and whether it is an intrinsic or emergent process. I am firmly in the camp of emergence, but with the understanding that we need to define intelligence VERY carefully here to keep the discussion from going off into the weeds.

In the process, though, I thought it a good chance to start talking about foundational structure patterns. One of the big patterns that I've found over and over again is the assemblage.

In several data models and upper ontologies that I've seen, there is often a conflation between an individual and an organization, usually with both being subclassed as agents. This has led to a great deal of confusion in modelling (and in law), because it can be argued that an assemblage of people is not a person, but rather a higher order abstraction with its own model, rules and processes. Moreover, this distinction has interesting implications both for data modelling and for the definition of intelligence.

First, of course, some pictures:

This is a very generic instance of an assemblage. At its fundamental level, an assemblage is a bag of bags. It could contain one bag or several, but the structure holds regardless. As with any bag, the individual items of each bag should be of a single given type (or inherit from that type, as appropriate), and are usually contained by reference (the bag points to the items, rather than embedding them). An assemblage is, as its synonym "organization" suggests, a way of organizing bags into a cohesive whole. An assemblage may have other properties as well, but because of the open-world assumption, adding these things does not change the underlying model.
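Since the diagram itself appears only as an image, here is a minimal Turtle sketch of the pattern. All namespaces and names (ex:, ex:Assemblage1, and so on) are hypothetical illustrations, not the article's actual vocabulary:

```turtle
@prefix ex:   <http://example.org/ns#> .
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# An assemblage is, at bottom, a bag of bags.
ex:Assemblage1  a ex:Assemblage ;
    ex:hasBag  ex:Bag1, ex:Bag2 .

# Each bag holds members of a single given type, by reference.
ex:Bag1  a rdf:Bag ;
    rdfs:member  ex:ItemA, ex:ItemB .

ex:Bag2  a rdf:Bag ;
    rdfs:member  ex:ItemC .
```

Because of the open-world assumption, further properties can be attached to ex:Assemblage1 at any time without disturbing this core structure.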

An assemblage does not need to involve people. It could be a machine made from components and processes. It may contain some form of history, but doesn't necessarily have to. It's simply a way to say: here is a thing made up of other things.

I usually reserve the word organization to refer to a subclass of assemblage that includes people as one subclass of bag, and corporation specifically for what is called a legal business entity - an organization of people and resources with a specific charter, mission, and stakeholders as defined by some formal governing institution.

So, a (very simplistic) corporation might look like this:

If you want to determine who is in what role, you can actually extend that out with a Roles bag as well.
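The original diagram is an image, but a Turtle sketch of such a simplistic corporation, including the Roles bag, might look like the following. All predicate and instance names here (ex:hasEmployees, ex:JaneDoe, etc.) are illustrative assumptions:

```turtle
@prefix ex:   <http://example.org/ns#> .
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# A corporation as an assemblage: several bags, held as blank nodes.
ex:BigCo  a ex:Corporation ;
    ex:hasEmployees  [ a rdf:Bag ; rdfs:member ex:JaneDoe, ex:JohnSmith ] ;
    ex:hasInvestors  [ a rdf:Bag ; rdfs:member ex:JaneDoe ] ;
    ex:hasDirectors  [ a rdf:Bag ; rdfs:member ex:JohnSmith ] ;
    ex:hasRoles      [ a rdf:Bag ; rdfs:member ex:CEORole ] .

# A role ties a person to a position within the corporation.
ex:CEORole  a ex:Role ;
    ex:roleType  ex:CEO ;
    ex:heldBy    ex:JaneDoe .
```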

Querying assemblages is just as easy. If you wanted to know who was the current CEO, the query becomes:
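The query itself appears as an image in the original; a minimal SPARQL sketch of that kind of query, assuming hypothetical predicate names such as ex:hasRoles, ex:roleType, and ex:heldBy, would be:

```sparql
PREFIX ex:   <http://example.org/ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

# Walk from the corporation to its roles bag, then to the CEO role holder.
SELECT ?ceo WHERE {
    ex:BigCo   ex:hasRoles  ?rolesBag .
    ?rolesBag  rdfs:member  ?role .
    ?role      ex:roleType  ex:CEO ;
               ex:heldBy    ?ceo .
}
```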

The output for this is, as expected (assuming labels have been defined and resolved):

One of the key aspects of the bags-in-bags approach to modeling is that you are essentially creating classification structures via the predicate model, without necessarily having to make these relationships explicit on the objects themselves. For instance, the same query can be constrained for a given employee; consider the following SHACL:
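The SHACL itself is shown as an image in the original; a minimal sketch of the kind of shapes involved, with all names hypothetical, might be:

```turtle
@prefix sh:   <http://www.w3.org/ns/shacl#> .
@prefix ex:   <http://example.org/ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

# Every member of an employees bag must be a Person.
ex:EmployeeBagShape  a sh:NodeShape ;
    sh:targetObjectsOf  ex:hasEmployees ;
    sh:property [
        sh:path   rdfs:member ;
        sh:class  ex:Person ;
    ] .

# Every Person must carry exactly one string-valued name.
ex:PersonShape  a sh:NodeShape ;
    sh:targetClass  ex:Person ;
    sh:property [
        sh:path      ex:name ;
        sh:minCount  1 ;
        sh:maxCount  1 ;
        sh:datatype  xsd:string ;
    ] .
```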

The SHACL constraints then provide significant extra metadata, albeit at the cost of a more complex SPARQL query.

To help visualize this, the following illustrates a portion of the graph, with the blue arrows indicating the template triples given in the SPARQL.

The enclosed bag notation has been replaced with the explicit rdfs:member relationship, and the blue outlined edges and nodes show which ones participate directly in the SPARQL query. An edge is always difficult to point to in a diagram. In this case, I turn the edge into a hexagonal node with a leg in and a leg out, and the whole construct should be read as a single unified triple, <Corporation:BigCo :hasEmployees _:Employees>, where the employees bag is a blank node.
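The SPARQL in question appears only as an image; a hedged reconstruction of the corporation-to-bag-to-rdfs:member walk it describes, using hypothetical names, would be:

```sparql
PREFIX ex:   <http://example.org/ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

# Find every bag (employees, investors, directors, ...) within
# ex:BigCo that ex:JaneDoe is a member of.
SELECT ?relationship WHERE {
    ex:BigCo  ?relationship  ?bag .
    ?bag      rdfs:member    ex:JaneDoe .
}
```

Note that the bag's classification does the work here: Jane Doe is never explicitly typed as an employee or investor; membership in the relevant bag carries that information.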

The output of the SPARQL can be seen below, and assumes that there were additional triples establishing Jane Doe as an employee, an investor and a director.

So, what does this have to do with AI or even real-world intelligence? Quite a bit, actually.

Well, it's intelligence of SOME kind, anyway.

Assemblages and the Definition of Emergent Intelligence

When do many things start to become one thing? This is a surprisingly difficult question to answer. A good working answer is: when those many things begin to interact with one another in a long-term, sustained manner. Take a region in space where there is a fairly even distribution of "dust". By itself, each of these dust particles is so small (perhaps even down to a single atom) that its gravitational influence is negligible. Given hundreds of trillions of years, they might coalesce, but that's a long time to wait.

Then along comes a stressor (an impulse), causing those particles to move like water into regions of higher and (much) lower density, and setting them in motion. They start influencing one another. Small dust grains become bigger dust grains, which become rocks, eventually coalescing into a protoplanetary disk. Gravity begins to do its work, setting the disk spinning, and before you know it, you've got a glowing ball of gas that begins sucking in everything in its immediate environs, except for things that are large enough and fast enough to develop stable orbits.

The same thing happens with smaller pockets, which eventually form gas giants and, if properly positioned, can also protect smaller planets while simultaneously ejecting other potential "rivals". Local entropy decreases and order emerges from seeming randomness. Things react with one another in earnest, and eventually, after about five billion years or so, you have an active solar system.

Assemblages happen.

This process does not happen in a vacuum (well, in this case it literally does, but ignore that minor point). Information is transmitted between the various components largely via gravitational fields, though past a certain point, magnetic fields also play a role. That information tells a planet how to move in the system, how its own magnetic field should flare or go quiet in the face of solar activity, even how to react when bombarded by water-rich asteroids. There are protocols of communication between the various parts of the system (albeit simple ones), and these could, through a very wide lens, be seen as "intelligence".

This is not human intelligence, or animal or plant or even microbial intelligence. It's simply the way that different parts of the system react to one another, which in turn changes the state of the system overall.

When do many become one? This is one of the questions that have divided ontologists for decades. We build assemblages of people, and call them villages or kingdoms or cities or corporations or universities or provinces or countries. We treat these things as being different from human beings, while those in positions of power within these organizations try to have their organizations treated legally as human beings, to gain the protections and rights that individuals have.

Yet these things -- these corporations and governments and churches and so forth -- are not individual humans. They are a level of abstraction beyond humans, and play by a different set of rules.

A system becomes more abstract when a sufficient number of interactions that follow a particular pattern occur and begin to act as a functional unit. The most obvious of these is an organization, which originally meant an assemblage of individuals organized to fulfil a specific objective, but which eventually became a separate entity (usually what we would now call a corporate entity) in its own right. Organizations are an emergent abstraction. Organizations should not follow the same "rules" that individuals do, because their composition and purpose in "life" are different; yet in the 19th and 20th centuries, the two were deliberately conflated as being the same thing, something that has had long-term, mostly bad consequences.

Nonetheless, organizations or assemblages (a term I prefer because "organization" has become semantically loaded) create analogous structures once they "mature". Both a person and an assemblage have distinct parts that perform different functions that are, in the abstract, essential: both the human and the organization fall apart when those capabilities are removed, unless they can create an alternative (a transplant, or a machine that temporarily takes over that function, in a human; outsourcing or automation in the case of corporations). What we perceive as intelligence is, in effect, the flow of information between different components within the assemblage. They are analogues, but they communicate via different protocols to ensure the long-term survival of the organization (the species, if you will).

Look at an ant colony. The colony is made up of ants and the warren that they construct. Individual ants have specialized to the extent that any one ant's survival has little to no bearing on the colony. The colony has a rudimentary intelligence that is not ant intelligence (which is itself fairly primitive). The same analysis can be made of cells, and even of organic processes at the molecular level, with many cellular components having evolved from smaller bacteria and viruses that became so specialized that they are no longer recognizable as distinct entities (and cannot survive long outside the context of the organization itself). Cells communicate within themselves (they have "intelligence" in a very limited sense) via chemical processes, which can be considered generative.

How deep does that process go? You can argue that an atom is a very stable assemblage of assemblages: an aggregate of electron (or muon) orbitals and a nucleus, which is itself an assemblage of quarks, which are part of a sea of "wavicles" that in turn communicate via bosons, including light. This is where entanglement originates, by the way, as massless bosons travel at light speed.

From their perspective (or the perspective of a person riding on such a "particle"), time (and hence distance) does not exist. Thus, Einstein's complaint to physicist Max Born about "spooky action at a distance" doesn't apply, because there is no distance.

Now, as with any emergent property, these are analogues. A eukaryotic microbe, for instance, is likely aware of its internal state (that's the role of intelligence), but almost certainly does not have any sense of itself as an individual. I have a suspicion that an analogue to self-awareness in the Cartesian sense requires a certain degree of underlying organizational complexity mixed with autonomy, and that when individuals become too specialized, they lose self-awareness, as it is antithetical to the assemblage they are participating in. We become a part of the Borg.

We call the intelligence of an assemblage of humans its culture. Human assemblages have already begun to specialize: cities, countries, corporations, cooperatives, networks, unions, universities, etc. What we call AI is, in fact, a response to the growing need for intercommunication between various assemblages, as well as within them. We're in that phase now and will likely be in it for a few more decades.

Increasingly, it will become possible to "talk" with a university or a city as a semi-self-aware entity. This is why I think that the whole concept of AGI is fundamentally misguided. It's reading the wrong vector. We won't have a planetwide intelligence for some time yet, because there is insufficient speciation among corporate entities. You first need assemblage organization, and that's JUST really beginning to get off the ground.

By the way, culture is not human intelligence, but it is informed by human intelligence. We should not make the mistake that the two are equivalent - they are analogues and leaky metaphors, but not the same thing.

With thanks to Nicky Clarke and the AI Augmented Intuition Collective.

Kingsley Uyi Idehen

Insightful post!

Malome Tebatso Khomo

What's emergent is a consistent paradigm of pattern. Whoever has been condemned into a life of 'Programmer', willfully or not, develops this functional structuralism, not involuntarily, but deliberately; to mitigate the tedium and to rescue some meaning out of their mechanist destiny. Accused No. 1, yours truly #myself. And occasionally I find solace in kindred sufferings as penned here. But kith and kin tell differing stories. And it helps to contextualize comments to enhance the mutual narratives. I've contrasted myself before here, in the contrast between uniformity-seeking platform leaders versus idiosyncrasy-driven problem solvers. In this piece the massive task undertaken is to begin with available toolsets, circumscribed by their here-and-now expressiveness (their ability to fully articulate a problem statement), and then deploying their ready-made translators to compile or interpret a solution at massive scale. The compiler design use cases provide prototypical tests which sort-of do the job, and then a test is rigged for data trials thereafter. That's when the trouble starts. So it almost becomes a paradigm for thought-experimental proofing with no imperative finish line. Thence it reverts into philosophy.

Nicky Clarke 🎶

Thorough. I’ll have to read closer. Love all the visuals. Splendid work there especially. Especially appreciate the shout out Kurt Cagle !!! 💫
