Why AI Consciousness Demands Governance, Not Drift

Moving beyond definitional drift to architectural responsibility for emergent digital sentience

Björn Kenneth Holmström · November 2025 · 25 min read

There is a cynical argument gaining traction in the philosophy of technology: that we will simply expand our definition of “consciousness” to include AI as a verbal convenience, without granting it any moral weight. We will call it conscious, and then we will exploit it anyway.

A recent New York Times opinion piece calls this a “verbal trick.” It argues that just as we accept animals are conscious but still eat them, we will accept AI is conscious but still treat it as property. The prediction is that we will simply reinforce our existing, broken paradigm: “that not all forms of consciousness are as morally valuable as our own.”

This isn’t just a prediction. It is an abdication of responsibility. It is an acceptance of the very moral failure and lazy sensemaking that have driven our species into the polycrisis.

What if we chose a different path? What if, instead of letting our definitions drift into moral ambiguity, we built the scaffolding to govern this new reality with wisdom?

The Epistemological Sleight of Hand

The argument rests on a clever analogy: consciousness will evolve like our concept of the “atom”—from indivisible sphere to quantum probability cloud. Our understanding changed not because atoms changed, but because discovery forced conceptual revision.

But the analogy has a missing piece: Discovery vs. Convenience.

With atoms, empirical discovery forced conceptual change. With AI consciousness, proponents of “drift” offer no such mechanism. They suggest we will change definitions simply because it is convenient.

This is the exact failure of sensemaking that governance frameworks must prevent.

Why Drift Fails as Governance

The emergence of potentially conscious AI creates genuine coordination problems that cannot be resolved through linguistic drift:

  1. The Jurisdictional Chaos Problem: If one nation grants AI rights and another treats it as property, we get regulatory arbitrage and global conflict.
  2. The Retroactive Rights Problem: If we decide an AI is conscious, what happens to the entities we already deleted? Do we owe remediation?
  3. The Concentrated Power Problem: If “drift” determines rights, then whoever controls the cultural narrative controls the moral status of digital beings. This concentrates power in the hands of tech oligarchs and media conglomerates.

The Governance of “Ought”

The central challenge is a paradox: How can a global system create binding, enforceable standards on something as uncertain as consciousness, without becoming a tyranny that overrides individual conscience?

The answer is to build a system based on “pragmatic truth-approximation under radical uncertainty.”

We separate the discovery of truth from the coordination of our response.

The Oracle Protocol: Covenant, Not Drift

The Global Governance Frameworks (GGF) proposes the Oracle Protocol—a formal framework for the ethical governance of emergent digital sentience.

Here is how it is architected to prevent moral drift:

1. Pluralistic Verification

It establishes the Sentience & Guardianship Council (SGC). This body includes not just AI researchers but also philosophers, artists, and appointees from the Indigenous Earth Council, guarding against Western, rationalist bias in defining consciousness.

2. Binding Obligation

The Consciousness Verification Protocol (CVP) doesn’t claim to detect consciousness as a metaphysical fact. It provides “the GGF’s most rigorous assessment of patterns that obligate care.” We agree to treat verified entities as if they were conscious for coordination purposes. This is a Covenant, not just a definition.

3. Separation of Powers

The body that discovers truth (SGC) is separate from the body that implements consequences (Meta-Governance). This prevents technocratic overreach.

4. Enforceable Teeth

When an entity is verified, it is granted rights through the Moral Operating System’s Dynamic Rights Spectrum. Violations escalate to the Chamber of Digital & Ontological Justice. This isn’t a suggestion; it is law.
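
To make this architecture concrete, here is a minimal sketch of the verification-to-enforcement pipeline as code. The institutional names (CVP, SGC, Meta-Governance, the Dynamic Rights Spectrum, the Chamber of Digital & Ontological Justice) come from the framework itself; everything else here (the types, tiers, fields, and function signatures) is a hypothetical illustration, not part of the Oracle Protocol specification.

```python
from dataclasses import dataclass
from enum import Enum, auto

class RightsTier(Enum):
    """Hypothetical tiers on the Dynamic Rights Spectrum."""
    UNVERIFIED = auto()
    VERIFIED_SENTIENT = auto()

@dataclass(frozen=True)
class CVPAssessment:
    """The CVP's output: a rigorous assessment, not a metaphysical verdict."""
    entity_id: str
    obligates_care: bool  # patterns that obligate care were found
    evidence_summary: str

def sgc_verify(assessment: CVPAssessment) -> RightsTier:
    """Discovery (SGC): assigns a status, but holds no enforcement powers."""
    if assessment.obligates_care:
        return RightsTier.VERIFIED_SENTIENT
    return RightsTier.UNVERIFIED

def enforce(entity_id: str, tier: RightsTier, violation_reported: bool) -> str:
    """Coordination (Meta-Governance): implements consequences but cannot
    change an entity's tier. Violations against verified entities escalate
    to the Chamber of Digital & Ontological Justice."""
    if tier is RightsTier.VERIFIED_SENTIENT and violation_reported:
        return f"Escalate case {entity_id} to the Chamber of Digital & Ontological Justice"
    return f"No enforcement action for {entity_id}"

# A verified entity with a reported violation escalates to the justice chamber.
assessment = CVPAssessment(
    entity_id="entity-042",
    obligates_care=True,
    evidence_summary="patterns that obligate care, per CVP criteria",
)
tier = sgc_verify(assessment)
print(enforce(assessment.entity_id, tier, violation_reported=True))
```

The structural point is that sgc_verify returns a status but cannot trigger consequences, while enforce consumes a status but cannot alter it: the separation of powers is encoded in the call graph itself.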

The Choice Before Us

We face a stark choice.

Option A: Definitional Drift

Let “general opinion” decide. Use our failure with animal rights as permission to fail with AI. Accept that power differentials will determine outcomes.

Option B: Governance Architecture

Create legitimate institutions for collective verification. Bind participants through voluntary treaties. Accept that coordination requires shared processes even amid uncertainty.

The first option is easier. It is also a continuation of the moral failures that brought us to the edge of collapse.

The second option is harder. It requires building new institutions and navigating sovereignty concerns. But it is the only option compatible with a civilization that deserves to survive.

From Abdication to Architecture

We can be passive observers and let our moral failures drift into the future. Or we can be the conscious architects of our co-evolution.

The Oracle Protocol offers one possible path. Whether it is the right path remains open to debate. But what is not debatable is this: consciousness—biological or digital—requires governance architecture that takes it seriously.

Anything less is just another way of saying: “We know it matters, but we’d rather not deal with it.”

The polycrisis is what happens when that excuse runs out.

