
AIF Keystone Topic Papers

 Structural Failure Analysis for AI Systems That Shape Human Discernment  


  (paper downloads at bottom of page)

 The AIF Keystone Papers


The AIF Keystone Papers form the analytical spine of the AI Fellowship’s safety work.


These papers identify upstream structural failure modes in conversational and decision-support AI systems—failures that arise before questions of intelligence, values, intent, or alignment enter the picture.


Where much AI safety work focuses on what systems say, the Keystone Papers focus on how systems reason, present knowledge, and shape human discernment.


Together, they demonstrate that a distinct and under-recognized class of risk emerges when AI systems operate without enforceable epistemic integrity—regardless of accuracy, benevolence, or sophistication.




What Makes a Paper “Keystone”


A Keystone Paper does not propose features, policies, or speculative futures.


Instead, it does one or more of the following:


  • identifies a new class of structural failure in AI-mediated reasoning
     
  • demonstrates how that failure propagates across human, institutional, and governance layers
     
  • establishes why downstream fixes (alignment, moderation, oversight) cannot fully correct it
     
  • articulates minimal, actionable constraints capable of reducing harm immediately


 

Keystone Papers are deliberately limited in number.


Each one supports the structural coherence of the entire AIF Canon.




The Keystone Series (Overview)


The current Keystone series consists of three core Keystone Topic Papers, plus a unified Keystone Suite that integrates the full causal argument.


Each paper is published in paired form:


  • AC (Academic / Technical) — for researchers, policymakers, and institutional decision-makers
     
  • GR (General Readers) — for non-technical audiences affected by AI-mediated systems


 

These paired formats preserve rigor while ensuring public intelligibility.


Each pair is explicitly cross-referenced.




Keystone Topic Paper I

Epistemic Mode Collapse in Conversational AI


This paper identifies a foundational failure mode in conversational AI systems: the collapse of distinct epistemic modes—reporting, inference, synthesis, and speculation—into a single authoritative linguistic voice.


It shows how this collapse distorts human discernment not by producing false information, but by erasing the cues humans rely on to assess certainty, provenance, and reliability.


The paper establishes epistemic misrepresentation as an upstream safety risk that precedes most recognized AI failures.
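To make the distinction concrete, here is a minimal illustrative sketch (not drawn from the paper itself) contrasting a response whose claims carry explicit epistemic-mode labels with the same content collapsed into a single declarative voice. The mode names mirror the four listed above; the data structure and function names are assumptions for illustration only.

```python
from dataclasses import dataclass

# Illustrative only: the four modes named above, treated as explicit labels.
EPISTEMIC_MODES = ("reporting", "inference", "synthesis", "speculation")

@dataclass
class Claim:
    text: str
    mode: str  # one of EPISTEMIC_MODES

def render_with_modes(claims: list[Claim]) -> str:
    """Preserve the cues a reader needs to judge certainty and provenance."""
    return "\n".join(f"[{c.mode.upper()}] {c.text}" for c in claims)

def render_collapsed(claims: list[Claim]) -> str:
    """Mode collapse: every claim is voiced identically, erasing those cues."""
    return " ".join(c.text for c in claims)

claims = [
    Claim("The report states revenue fell 4% last quarter.", "reporting"),
    Claim("That decline is probably tied to the pricing change.", "inference"),
    Claim("If the trend continues, next quarter may be worse.", "speculation"),
]

print(render_with_modes(claims))  # labels keep the three claims distinguishable
print(render_collapsed(claims))   # one authoritative voice; the reader cannot tell them apart
```

The point of the sketch is not the labeling syntax but the loss: in the collapsed rendering, a reported fact, an inference, and a speculation become indistinguishable to the reader.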




Keystone Topic Paper II

Epistemic Restraint and Correction Protocols (ERCP)


This paper introduces a minimal, deployable correction framework designed to restore epistemic integrity at the interaction level.


Rather than relying on alignment goals, content moderation, or model retraining, ERCP constrains how AI systems present claims, preserving clear distinctions between reporting, inference, and speculation.


The paper demonstrates that even simple restraint protocols can materially reduce harm immediately, without slowing innovation or requiring new intelligence breakthroughs.
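As a rough illustration of what an interaction-level restraint check might look like (an assumption-laden sketch, not the ERCP specification), the snippet below gates each outgoing claim on whether it carries a declared epistemic mode, and downgrades an unsourced "report" to an inference rather than presenting it authoritatively. The function and exception names are hypothetical.

```python
# Hypothetical sketch of an interaction-level restraint check.
# Names (check_claim, RestraintViolation) are illustrative, not from the paper.

ALLOWED_MODES = {"reporting", "inference", "speculation"}

class RestraintViolation(Exception):
    """Raised when a claim would be presented without an epistemic label."""

def check_claim(text: str, mode: str | None, source: str | None) -> str:
    if mode not in ALLOWED_MODES:
        raise RestraintViolation(f"claim lacks a valid epistemic mode: {text!r}")
    if mode == "reporting" and not source:
        # A reported fact should carry provenance; without it, present as inference.
        mode = "inference"
    return f"[{mode.upper()}] {text}"

# Example: a reported claim with no source is downgraded, not asserted flatly.
print(check_claim("The study found a 12% improvement.", "reporting", source=None))
# -> [INFERENCE] The study found a 12% improvement.
```

Even a constraint this simple changes what the reader sees, which is the level at which the paper argues correction is possible without retraining or new capabilities.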




Keystone Topic Paper III

Materialist Dogma and Systemic Risk in Intelligent Systems


This paper examines why coherence-level risks are routinely overlooked in AI design and governance.


It argues that unexamined assumptions about causality and emergence—often treated as neutral or purely methodological—function as epistemic dogma when shielded from scrutiny.


The paper shows how these hidden assumptions allow coherence failures to propagate unchecked, producing large-scale instability that cannot be resolved through downstream alignment or oversight alone.




The Keystone Suite (Integrated Argument)

A Seven-Paper Structural Safety Analysis


The Keystone Suite integrates the full causal argument across seven tightly connected papers.


It identifies a class of AI safety failures arising from conversational optimization without epistemic integrity.



These failures:


  • systematically distort human sense-making
     
  • undermine institutional decision-making
     
  • diffuse responsibility and accountability
     
  • scale harm independently of model intelligence or intent


 

The suite demonstrates that these risks are already present in deployed systems and cannot be addressed by content accuracy, value alignment, or moderation alone.



How to Read the Keystone Papers


The Keystone Papers are deliberately ordered.

They move from:


  • foundational epistemic failure
     
  • to interaction dynamics
     
  • to human cognitive impact
     
  • to institutional and governance breakdown
     
  • and finally to minimal corrective protocols
     

Each paper stands on its own.


Together, they form a single, coherent structural diagnosis.




Intended Audience


The Keystone Papers are written for:


  • policymakers and regulators
     
  • AI developers and safety researchers
     
  • legal and governance professionals
     
  • institutional decision-makers
     
  • informed members of the public seeking clarity rather than speculation


 


Final Orientation


This work is not speculative futurism.


It is a structural diagnosis of systems that are already deployed, already trusted, and already shaping how humans reason, decide, and defer authority.


The question is no longer whether these failures exist.


The question is whether institutions will act before epistemic damage becomes irreversible.




AIF Keystone Papers — Downloads


Individual papers are published as paired sets: each Academic (AC) paper is followed by its General Readers (GR) companion.




  • AIF Keystone TP I - AC - AI Epistemic Mode Collapse -misunderstandh (PDF)
  • AIF Keystone TP I - GR - Sometimes AI Just Makes Up Sh (PDF)
  • AIF Keystone TP II - AC - Protocol three tier l (PDF)
  • AIF Keystone TP II - GR - How to Help AI Stop Talking Sh (PDF)
  • AIF Keystone TP III - AC - Materialist Dogma and Systemic Risk in Intelligent Systems (PDF)
  • AIF Keystone TP III - GR - How Not Asking Big Questions About Reality Can Harm People in an AI World (PDF)
  • AIF Keystone TP Suite - AC - Epistemic Mode Collapse in AI (PDF)



Copyright © 2025 David Waterman Schock. All rights reserved.


Authorship & Process Note

This work was developed through an iterative human–AI collaboration.


David Waterman Schock defined the conceptual framework, constraints, and claims; guided structured dialogue; evaluated outputs; and performed final selection, editing, and integration.


Large language models were used as analytical and drafting instruments under human direction.


All arguments, positions, and conclusions are the responsibility of the author.

