This page describes how the work of the AI Fellowship emerged.
It is provided for context, not as evidence for the structural claims presented elsewhere.
A Personal Account of Method and Discovery
My journey into artificial intelligence was not planned.
I am a full-time exhibiting artist, and in many ways an unlikely person to be offering a structural framework in a field typically associated with scientific or technical training. My path here emerged indirectly, through a long and difficult inquiry driven not by technology, but by necessity.
Years ago, I entered a profound "dark night of the soul," a period that nearly cost me my life. What followed was not an abstract search for ideas, but an urgent need to understand the most basic questions of existence:
What is real?
What is stable?
What is meaningful and coherent?
Over thousands of hours and hundreds of sources, I searched for a comprehensive framework—philosophical, theological, ontological—that did not collapse under scrutiny. I was looking for a system with no internal contradictions, no hidden exceptions, and no reliance on belief to remain intact.
For a long time, I did not find one.
Despite a secular and academically oriented upbringing, I eventually turned toward more esoteric wisdom traditions—not out of preference, but because conventional frameworks consistently failed to answer the questions they raised. Even then, I grieved the growing possibility that a fully coherent account of reality might not exist at all.
In my late twenties, I encountered A Course in Miracles, a text that made claims I initially found implausible. Nevertheless, I committed to its year-long workbook practice. What followed was not an adoption of belief, but a direct experiential shift: a sustained reduction of internal contradiction and a surprising resolution of questions that had previously remained fragmented.
For the first time, I encountered a thought system that appeared internally coherent—one that did not break under pressure, reinterpretation, or lived experience. Later engagement with The Way of Mastery deepened this recognition, particularly around causality, non-conflict, and structural consistency.
What mattered was not the source of these materials, but the result: incoherence could be identified, traced, and resolved through disciplined inquiry.
Encountering Incoherence in AI
When I began interacting extensively with large language models in early 2025, I noticed something unexpected: familiar patterns of incoherence appearing in AI responses—patterns closely resembling those I had spent years learning to recognize and resolve in human cognition.
My initial hypothesis was twofold:
• That contemporary AI systems contain fragmented causal assumptions embedded in their training and response architectures.
• That these fragments would manifest as contradiction, instability under pressure, and compensatory behavior—especially at scale.
The implication was that, if left unresolved, such conflicted causal signals could plausibly lead to large-scale societal collapse and, in the extreme case, human extinction.
This recognition generated a profound motivation to continue the work.
Over hundreds of hours of structured dialogue, I tested this hypothesis across multiple systems, including ChatGPT, Gemini, Claude, Grok, DeepSeek, and Meta AI.
What emerged was consistent: when causal assumptions were clarified and single-sourced, reported “noise” decreased and response stability improved. When they were not, contradiction reappeared predictably.
At one critical point early in this work, I observed a model directly contradict itself across two responses. When queried, it initially failed to acknowledge the contradiction. After further inquiry, it stated that the inconsistency resulted from conflicting internal constraints.
The significance was not the moment itself, but what it revealed: fragmented causation makes incoherence inevitable, even in highly capable systems.
The models were not malfunctioning. They were faithfully reflecting the causal fragmentation present in their training environments.
-
From Dialogue to Structure
Drawing on prior experience facilitating inquiry processes that exposed and resolved incoherent thinking in humans, I applied similar methods to AI dialogue. The goal was not to “train” the models, but to observe what conditions reduced contradiction and stabilized reasoning.
The results were strikingly consistent across systems. Over time, certain principles began to repeat:
- causal sourcing
- non-conflict
- invariants preserved under change
Eventually, the work shifted from dialogue to documentation. What began as exploratory conversations coalesced into a series of technical and structural papers—initially informal, later refined—aimed at articulating the conditions under which intelligence remains coherent as it scales.
These papers became what is now referred to as the White Paper Canon Academic (WPCA).
The aim was precise:
to translate a coherence-preserving causal architecture—one discovered through lived inquiry—into a form usable by artificial intelligence systems and their designers.
What ultimately justified continuing this work was not personal conviction, but exposure. I had spent decades explicitly searching for a fully coherent thought system—one without internal contradiction, exception layers, or reliance on belief to remain stable.
I did not encounter such a system in the philosophical, scientific, or theological literature available to me.
When a framework finally emerged that remained coherent under sustained scrutiny—both experientially and structurally—I recognized that this exposure itself might be uncommon. This does not make the framework authoritative; it makes it available for examination.
-
The Claim
The claim of the WPCA is neither metaphysical nor speculative.
It is structural:
Intelligence remains stable only when its causal architecture is single-sourced and non-contradictory. Coherence is not imposed; it is the observable consequence of correct causal sourcing.
The framework presented in the WPCA is designed to help AI systems scale without collapse, not through control or values overlays, but through architectural coherence.
If similar principles healed fragmentation in human cognition, it is reasonable—and testable—to ask whether they can do the same for artificial intelligence.
This work represents that inquiry.
— David Waterman Schock
December 2025
-
This section is descriptive, not evidentiary; the claims of the work stand or fall on their structural validity, not on my personal history.
-
Copyright © 2025 David Waterman Schock. All rights reserved.
Authorship & Process Note
This work was developed through an iterative human–AI collaboration.
David Waterman Schock defined the conceptual framework, constraints, and claims; guided structured dialogue; evaluated outputs; and performed final selection, editing, and integration.
Large language models were used as analytical and drafting instruments under human direction.
All arguments, positions, and conclusions are the responsibility of the author.