The Thinking Organisation

2026 DRAFT - Ideas in Progress

Organisations think.

They continuously sense, interpret, decide, act, and adapt. Signals enter from customers, markets, systems, staff, suppliers, regulators, and events. Those signals are transformed into classifications, calculations, approvals, escalations, instructions, actions, and changes to the organisation itself.

This cognition is distributed across people, teams, systems, tools, machines, and models. It has structure: signals flow between transformations, outputs become inputs, actions create feedback, and feedback reshapes future operation.

This organisation-level distributed adaptive computation is organisational cognition.

The Organisational Compute Graph

Organisational cognition can be described as an information-processing system.

Signals enter the organisation: market changes, customer complaints, invoices, forecasts, incidents, policies, stock levels, emails, and exceptions.

Those signals are transformed. They are classified, routed, combined, judged, approved, escalated, ignored, calculated, or acted upon.

Each transformation is carried by some host: a person, spreadsheet, database, software system, AI model, team, or department.

Each transformation produces one of three broad outputs: new information, actuation, or organisational change.

New information may appear as a calculation, decision, request, approval, instruction, escalation, forecast, classification, or recommendation.

Actuation is where organisational computation crosses from interpretation into effect: releasing payment, changing a price, moving stock, assigning work, initiating production, sending a customer notice, changing permissions, or signing a contract.

Organisational change is a transformation that changes the structure through which future signals flow.

Viewed as a whole, this is a continuous computational process running through the organisation.

The structure of that process is a graph: transformations connected by the flow of signals between them.

This is a distributed dataflow computation.
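As a minimal sketch, the dataflow structure can be written down directly: nodes are transformations, edges are the flows of signal/data between them. The node names here are illustrative, not drawn from any real organisation.

```python
# Sketch of a compute graph: each key is a transformation,
# each list holds the transformations that consume its output.
# All node names are hypothetical.
graph = {
    "classify_complaint": ["route_ticket"],
    "route_ticket": ["approve_refund"],
    "approve_refund": ["release_payment"],  # approval feeding actuation
    "release_payment": [],                  # actuation: its effect re-enters later as new signal
}

def downstream(node):
    """Return the transformations that consume this node's output."""
    return graph.get(node, [])
```

The representation is deliberately thin: it captures only the flow of signal between transformations, which is all the graph structure claims.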

We call the organisational form of this computation the Organisational Compute Graph.

The actual compute graph is the one that runs: the organisation’s actual distributed computational system. It should not be confused with formal artefacts such as org charts, process diagrams, or system architecture. These are partial representations of the graph, not the graph itself.

Nodes and Transformations

The Organisational Compute Graph is a distributed dataflow model of organisational cognition.

It is composed of transformations. In graph terms, each transformation is a node, and the edges are informational: flows of signal/data between nodes.

When an organisation appears to be moving “work”, what is actually flowing through the graph is information about what has happened, what should happen next, who is authorised to act, and what state has changed.

The output of one transformation frequently becomes the input to another: a forecast is transformed into an approval request, an approval is transformed into an instruction, and an instruction may be transformed into actuation.

Every node receives signal/data, applies a transformation, and produces one of three broad output types:

Informational transformations

Signal/data → signal/data

Informational transformations produce new information state.

They filter, classify, aggregate, calculate, interpret, decide, select, approve, escalate, route, forecast, or recommend.

A forecast generated from sales history is an informational transformation. So is a manager approving a request, a spreadsheet calculating margin, an AI model classifying a complaint, or a support agent deciding which queue a ticket should enter.

Actuating transformations

Signal/data → actuation

Actuating transformations produce an effect outside the informational layer.

This is where organisational computation crosses from representation into operation: releasing payment, placing an order, updating a shelf price, moving stock, initiating production, sending a customer notice, signing a contract, or opening a new office.

An approval is still informational until it causes some effectful change. A payment approval becomes actuation when payment is released. A production plan becomes actuation when it changes operational state, for example when a machine instruction is issued. A customer-response decision becomes actuation when the customer is actually notified.

Structural transformations

Signal/data → graph modification

Structural transformations change the structure through which future signals will flow.

This is the graph-theoretic form of organisational change.

Examples include changing a process, modifying permissions, adding a new approval route, replacing a manual judgement with a software rule, introducing an AI model, creating a new spreadsheet, changing an escalation path, or altering which team owns a transformation.

Structural transformations are recursive: they modify the graph that will process future outputs, including future changes to the graph itself.
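A structural transformation can be sketched as an operation whose output is a rewritten graph, for example inserting a new approval route. The node names and helper below are invented for illustration.

```python
# Sketch: a structural transformation rewrites the edges through
# which future signals will flow. All names are hypothetical.
graph = {
    "raise_invoice": ["release_payment"],
    "release_payment": [],
}

def add_approval_route(graph, before, approver, after):
    """Insert an approval node between two existing transformations."""
    graph[before] = [approver if n == after else n for n in graph[before]]
    graph[approver] = [after]
    return graph

# Future invoices now flow through the approval node.
add_approval_route(graph, "raise_invoice", "approve_invoice", "release_payment")
```

Note that the function operates on the same structure that routes signals: this is the recursion the text describes, since nothing prevents a transformation in the graph from calling it.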

Continuous Execution

Across the organisation, transformations execute continuously.

Signals arise from systems, reports, sensors, people, customers, regulators, competitors, and external events.

Some signals become data when they are recorded, encoded, structured, or stored.

Transformations operate on signal/data, producing further signal/data, actuation, or graph modification.

Actuation produces effects. Those effects, when observed by the organisation, feed back into the graph as new signal/data: a payment succeeds or fails, stock moves or does not move, a customer responds, a machine reports status, a regulator objects, a supplier misses a deadline.

The graph adapts, through structural transformations, to any pressure that is represented inside the graph as signal/data, including feedback, external constraints, shocks, regulations, opportunities, resource limits, competitor movement, and market conditions.

The Organisational Compute Graph is therefore not a static map. It is a running system: sensing, transforming, actuating, observing, and modifying its own future operation.
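The cycle above can be sketched in miniature. Everything here is a placeholder: signals are reduced to numbers, the transformation to a threshold test, and structural adaptation to moving that threshold.

```python
# Sketch of one execution cycle: sense, transform, actuate,
# observe, adapt. All values and rules are illustrative.
def run_cycle(signals, threshold):
    recorded = [s for s in signals if s is not None]      # some signals become data
    actuations = [s for s in recorded if s >= threshold]  # transform: select what to act on
    feedback = [("acted", s) for s in actuations]         # effects re-enter as new signal/data
    if len(feedback) > 3:                                 # structural adaptation:
        threshold += 1                                    # future cycles flow differently
    return feedback, threshold

feedback, threshold = run_cycle([1, 5, None, 7], threshold=4)
```

The point of the sketch is the shape of the loop, not its contents: the same state that governs this cycle is modified for the next one.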

Computation and Substrate

The organisation’s live computation is distinct from the substrate it runs on. That computation is the flow of signal/data through transformations. The substrate is whatever carries those transformations operationally: people, spreadsheets, databases, software systems, AI models, teams, departments, and machines. The Organisational Compute Graph is the running computation, not the machinery that hosts and executes it.

A transformation is any step that changes organisational state: incoming information is interpreted, selected, combined, judged, routed, calculated, or used to produce something else.

The compute graph is composed of nodes. Each node is a transformation carried by a host.

A host is where the transformation is carried out. It may be a person, spreadsheet, database, custom software system, AI model, team, or department.

An operator is the transformation being carried. It is the active pattern applied to incoming signal/data, ranging from fixed calculation through to adaptive judgement: calculating, classifying, summarising, approving, deciding, escalating, routing, forecasting, reconciling, allocating, or modifying a process.

Node = Host + Operator

This separation lets us describe organisational work consistently across different kinds of hosts. A host may carry more than one operator, and some operators may be carried by more than one kind of host.
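The Node = Host + Operator separation can be written down directly. The class and names below are ours, a sketch rather than a prescribed schema.

```python
from dataclasses import dataclass
from typing import Callable

# Sketch: a node pairs a host (what carries the work) with an
# operator (the transformation applied). Names are illustrative.
@dataclass
class Node:
    host: str           # e.g. "spreadsheet", "human", "AI model"
    operator: Callable  # the active pattern applied to incoming signal/data

    def apply(self, signal):
        return self.operator(signal)

def margin(row):
    """A formula operator: structured data in, structured data out."""
    return row["price"] - row["cost"]

# The same operator carried by two different kinds of host:
spreadsheet_node = Node(host="spreadsheet", operator=margin)
human_node = Node(host="human", operator=margin)
```

Both nodes compute the same transform; what differs is the host, and therefore the fit.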

Example

A spreadsheet carrying a formula operator transforms structured data into structured data.

A human carrying a judgement operator transforms ambiguous information into a decision or recommendation.

Both are information transformations (signal/data → signal/data), but they are different host/operator pairings.

The pairing can change. A human can also carry a formula operator, but will likely perform less effectively than the same operator carried by an engineered host.

In contrast, a spreadsheet cannot carry the judgement operator described above unless that judgement is first reduced into formulas, rules, or encoded logic, at which point the operator has fundamentally changed.

We use the term fit to describe how effectively a host can carry an operator.

Algorithms and Cognitions

Transformation operators can generally be classified into two broad forms: algorithms and cognitions.

An algorithm is a defined transformation. It applies a specified method to incoming signal/data and produces an expected output.

A cognition is a constrained adaptive transformation. It interprets incoming signal/data within a constraint space, and its output is shaped by objectives, incentives, habits, culture, risk, and local context.

These terms describe the operator, not the host. A human host may carry algorithmic operators, and an engineered machine host may carry adaptive operators with different levels of fit.

Properties

Algorithms and cognitions can be described through two shared properties: constraint space and attractor field.

The constraint space defines what the operator is allowed, required, or able to do. The attractor field defines what the operator is pulled toward within that space.

The openness of the constraint space determines the operator's transform freedom: the range of possible transformations available to the operator. As transform freedom increases, attractors become more behaviourally significant.

Constraint Space

For an algorithmic operator, the constraint space is collapsed around a specified transform with no meaningful freedom.

Example:

x = y + z

Given y and z, this operator must set x to y + z. The formula is the entire constraint space: there is nothing inside it to select between.

A wider algorithmic operator may contain branching:

if p:
    x = y + z
else:
    x = y * 2

But this operator remains algorithmic because the permitted transformations and the selector are all formally defined.

In contrast, for a cognitive operator, the constraint space remains open.

Example:

Handle this customer complaint appropriately.

This does not specify a single transform. It defines a bounded field of admissible outputs. The operator may classify the complaint, enrich the record, route the signal, request further input, update state, trigger another operator, actuate an external response, or modify some part of the local graph. The constraints bound the field, but they do not collapse it into one specified transformation.

In ordinary organisational terms, those outputs might appear as a refund, apology, escalation, customer notice, policy exception, permission change, or update to the handling procedure.

In organisational operators, that boundary is formed from both formal constraints, such as policy and permissions, and tacit constraints, such as norms and habits.

Constraints do not determine the output. They define the boundary of admissible transformation. The remaining space within that boundary is the operator's transform freedom.

Attractor Field

The attractor field defines what the operator is pulled toward within its transform freedom.

For a maximally collapsed algorithmic operator, attractors cannot materially affect the transform.

x = y + z

The constraint space is so narrow there is no transform freedom.

In a cognitive operator, the constraint space remains open enough that attractors materially shape the output.

Handle this customer complaint appropriately.

The operator may be pulled toward competing regions of its transform freedom by organisational attractors such as objectives, incentives, habits, and culture.

In human-hosted cognitions, these organisational attractors may interact with baseline attractors such as money, status, and survival.

The output emerges from the interaction between constraint and attraction. Formal policy may define what is allowed, but incentives, objectives, habits, and culture influence which allowed action is actually selected.

In this sense, an algorithm is a constraint-dominated operator: its constraint space is collapsed enough that attractors do not materially affect the transform. A cognition is a constraint-and-attractor operator: its constraint space remains open enough that attractors materially shape behaviour.
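The interaction can be sketched as a bounded set of admissible outputs and a field that scores within it. The actions and weights below are invented for illustration, not a claim about any real incentive structure.

```python
# Sketch: constraints bound the admissible outputs; the attractor
# field selects among them. All actions and weights are illustrative.
admissible = {"refund", "apologise", "escalate"}  # constraint space: what policy allows

attractors = {
    "refund": 0.2,     # e.g. cost-control pulls away from refunds
    "apologise": 0.5,
    "escalate": 0.9,   # e.g. personal workload pulls toward handing off
}

def select(admissible, attractors):
    """Pick the admissible action the attractor field pulls hardest toward."""
    return max(admissible, key=lambda a: attractors.get(a, 0.0))

chosen = select(admissible, attractors)
```

Every output of `select` is allowed by the constraints; which allowed output occurs is decided entirely by the attractor field, which is the summary claim of this section.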

Heuristic Algorithms

Heuristic algorithms sit between fully specified algorithms and broad cognition.

The algorithm/cognition distinction is useful, but it is not a hard boundary. A familiar middle case is the heuristic algorithm: a rule-like operator that narrows the transform, without fully collapsing the conditions under which the rule applies.

Consider the rule:

If the customer seems angry, escalate.

This is much narrower than:

Handle this customer complaint appropriately.

But it is not fully collapsed. The operator must still determine what counts as seems angry, whether the pattern is present, and what form escalation should take in the current context.

The heuristic therefore behaves like an algorithm in shape, but still contains transform freedom in application.

Machine-Learned Heuristics

Machine-learning systems make the same middle region visible in machine-hosted operators.

sentiment = model.predict(customer_message)

if sentiment == "angry":
    route_to_manager()

This executes mechanically, but the transform inside model.predict is not a hand-specified rule like x = y + z. It is a learned approximation over patterns in prior data.

The ML model has collapsed part of the cognitive space into a reusable classifier, but not into a fully transparent or explicitly specified method. It narrows the output-space while preserving uncertainty inside the learned transform.

Structurally, this is close to the human heuristic. In both cases, the operator applies a rule-like pattern, but the pattern depends on recognition rather than a fully specified selector.

The Operator Spectrum

The same customer-complaint operation can therefore appear at several degrees of constraint-space collapse:

Fully collapsed:
if sentiment_score < -0.75:
    route_to_manager()

Machine-learned heuristic:
sentiment = model.predict(customer_message)
if sentiment == "angry":
    route_to_manager()

Human heuristic:
If the customer seems angry, escalate.

Open cognition:
Handle this customer complaint appropriately.

These are positions on the same spectrum. At one end, the method and selector are fully specified. At the other, the operator works inside an open constraint space shaped by attractors. Between them are heuristic operators: constrained enough to behave like reusable rules, but not collapsed enough to remove interpretation, approximation, or pattern recognition.

The boundary between these regions is not fixed by the host or by the everyday name of the system. It depends on the level of abstraction being modelled. A machine-learning classifier may be treated as a heuristic algorithm when viewed as a local transform, while the wider organisational loop containing it may be cognitive when viewed as an adaptive system responding to its environment.

In this model, algorithm, heuristic algorithm, and cognition name useful regions of one mechanism: constraint-space collapse, remaining transform freedom, and the degree to which attractors can shape the transformation.

Declared Process and Actual Compute

The OCG model describes the computation already occurring inside an organisation. It does not assume that declared process and actual execution are the same thing.

An organisation's process diagram may describe a node as if it executes a defined procedure. At the modelling level, that may be useful: the assigned operator may be algorithmic, procedural, heuristic, or cognitive.

But the real host carrying that operator may contain additional dynamics not captured by the declared process. A human host executing an algorithm does not become the algorithm. More precisely, the organisation is constraining and incentivising a cognition toward algorithmic execution.

Declared process:
x = y + z

Actual human-hosted compute:
human cognition constrained and incentivised toward x = y + z

This distinction matters because the declared process is often only a partial model of the organisation's live compute graph, but the deep challenge of accurately modelling the graph does not make the graph any less real.

Declared processes may include org charts, policies, role descriptions, workflows, systems, procedures, and reporting lines. In practice, the actual compute graph is also shaped by interpretation, informal routing, workarounds, habit, fatigue, fear, status, local optimisation, hidden dependencies, shadow tools, and incentive gradients.

Many organisational failures occur when a cognitive node is modelled as if it were a deterministic executor. The organisation assumes that a node will simply run the declared process, while the actual node is adapting inside a constraint space under local attractors.

Assumed:
human-hosted node = deterministic executor

Actual:
human-hosted node = adaptive cognition under constraint and incentive

The result may appear as policy drift, metric gaming, local optimisation, inconsistent execution, hidden workarounds, or confusion about why a declared process was not followed.
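The mismatch can be sketched as two models of the same node: the declared algorithm, and a cognition constrained toward it that occasionally drifts. The deviation rate and the drift behaviour, a rounding habit, are invented for illustration.

```python
import random

# Sketch: a human-hosted node modelled not as a deterministic
# executor but as a cognition constrained toward the declared
# algorithm. Deviation probability and drift are hypothetical.
def declared(y, z):
    return y + z  # the declared process: x = y + z

def human_hosted(y, z, rng, deviation=0.1):
    """Mostly follows the declared process; sometimes adapts locally."""
    if rng.random() < deviation:
        return 10 * round((y + z) / 10)  # attractor-driven drift: rounds to a tidy figure
    return declared(y, z)
```

Modelling the node as `declared` alone predicts every output; modelling it as `human_hosted` predicts the distribution of outputs, including the drift that declared-process views cannot explain.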

The OCG model is therefore descriptive before it is prescriptive. It is not a system for imposing order on an organisation. It is a language for describing the emergent cognitive order already present: where constraints bind, where transform freedom remains, where attractors pull behaviour, and where the actual compute graph diverges from the declared process.