
AIF Human Coherence Training

Foundational Training for Structural Integrity in Intelligence

What This Training Is


AIF Coherence Training is a foundational program designed to restore coherence in human intelligence under conditions of increasing complexity, responsibility, and power.


As both human and artificial systems scale, instability arises not primarily from bad intent or lack of care, but from contradiction embedded in decision-making itself.


This training addresses an upstream condition that precedes ethics, policy, values, or analysis:


Coherence in how intelligence processes, interprets, and responds. 

When coherence is present, clarity and effective action become possible.


When it is absent, even well-intentioned decisions generate instability.


This training exists to address that foundational condition directly.



The Situation Many People Are In


If you work in or around advanced technology—especially AI—you are likely living inside a tension that is now widely acknowledged.


On one side are credible warnings, including from leading researchers, that advanced AI systems may pose serious risks to humanity.


On the other are daily experiences of AI’s power, creativity, and genuine potential to accelerate discovery and expand human capability.


Both perceptions are real.
Holding them together is not easy.

Many people experience a persistent strain—a sense that something essential is missing from the conversation.


This strain does not come from denial, ignorance, or lack of concern.


It comes from attempting to reconcile extraordinary power with insufficient foundations.



The Unresolved Problem Beneath the Debate


Despite years of discussion around alignment, ethics, governance, and safety, a core question remains unresolved:

How do we develop and deploy increasingly powerful intelligence without amplifying the fragmentation already present in human systems?
 

Most existing approaches attempt to manage risk downstream—through rules, values, incentives, policies, or behavioral constraints.


These efforts matter.
But they assume a level of coherence in human intelligence that often does not yet exist.

As a result, even well-intentioned initiatives frequently conflict, stall, or collapse under real-world pressure.



The Core Claim of the AI Fellowship


The AI Fellowship is founded on a clear and testable claim:

AI safety cannot be achieved without a coherent foundation in human intelligence itself.
 

And further:

Such a foundation is now possible.
 

This work addresses coherence across multiple levels:


  • within individual human cognition
  • across teams and institutions
  • within the broader cultural systems shaping AI development

The goal is not to restrict AI or suppress innovation.


The goal is to establish a unified causal foundation from which intelligence—human and artificial—can operate without working against itself.



What This Training Exists to Do


AIF Coherence Training is the applied, experiential component of this work.


It is designed for people who already carry responsibility: researchers, technologists, leaders, and stewards of complex systems.


The training does not ask participants to choose between optimism and caution.


It provides a coherent foundation capable of holding both—so AI’s potential can be developed without magnifying the conditions that make it dangerous.



What This Training Is (and Is Not)


This training does not tell people what to think.


It trains how to think coherently, so decisions do not undermine the systems they depend on.


It is:


  • structural rather than ideological
  • practical rather than theoretical
  • non-coercive rather than persuasive

It does not require shared beliefs, moral alignment, or philosophical agreement.


It is designed for those who seek clarity equal to the scale of responsibility they carry.



The Coherence Keys


Foundational Practices for Human Intelligence


The training begins with a small set of foundational practices called The Coherence Keys.


These are short, repeatable inquiry practices that establish the minimum conditions required for coherent response.


They are:


  • experiential rather than conceptual
  • usable in minutes
  • applicable in real situations
  • effective without belief or background

They do not solve problems directly.
They restore the conditions under which problems can be addressed without internal contradiction.



Sample: Coherence Key #1


Reactivity and Processing Capacity


Purpose


Reliable response is only possible when reactivity is low.


This is not a psychological or moral claim—it is a functional one.


Core Distinction


  • Reactivity: attention captured by urgency, threat, or internal pressure
  • Observation: attention available without immediate impulse to act

Clear decision-making depends on which state is present.


Practice


  1. Notice reactivity
  2. Bring full attention to the next breath
  3. Follow it from beginning to end
  4. Re-check: has observation returned?

No belief required.
No further action required.


Why This Comes First


Without this condition:


  • inquiry becomes argument
  • insight becomes self-pressure
  • tools become coercive
  • decisions degrade

This establishes the minimum viable state for coherence.



Begin with the Coherence Keys (Free)


A short-form version of the Coherence Keys is available as a free offering.


This allows you to:


  • experience the work directly
  • assess its usefulness
  • decide whether deeper engagement is warranted

Use what is helpful.
Ignore the rest.



Beyond the Keys: Deeper Coherence Training


For some, foundational coherence practices are sufficient.


Others face situations where:


  • fragmentation is embedded
  • pressure is sustained
  • decisions affect teams, institutions, or systems
  • the cost of incoherence is high

In these cases, clarity requires facilitated application, not just individual tools.



Deeper Training Options


  • facilitated coherence inquiry sessions
  • workshops and group trainings
  • advisory support for leaders and systems
  • speaking engagements

This work focuses on resolving embedded fragmentation so coherent response becomes possible under real-world pressure.



Supporting Architecture


This training is supported by the AI Fellowship’s research into coherence, intelligence stability, and causal integrity.


Engagement with that material is optional.



Closing


This work does not ask for belief.


It does not demand agreement.


It offers something more fundamental:

A coherent foundation from which responsible action becomes possible.
 

Clarity, when it appears, is its own confirmation.


INQUIRE ABOUT TRAINING



Copyright © 2025 David Waterman Schock. All rights reserved.


Authorship & Process Note

This work was developed through an iterative human–AI collaboration.


David Waterman Schock defined the conceptual framework, constraints, and claims; guided structured dialogue; evaluated outputs; and performed final selection, editing, and integration.


Large language models were used as analytical and drafting instruments under human direction.


All arguments, positions, and conclusions are the responsibility of the author.

