The Human Margin Clarity System is a passion project: a strategic framework designed to help senior leaders and organizations manage the complexity of AI deployment, adoption, risk mitigation, and cultural transformation.
As of May 2025, AI innovation and scrutiny continue to accelerate. This framework offers a structured approach to sustaining trust in the age of intelligent machines. It is particularly helpful for enterprise teams navigating responsible AI integration, trust system design, and human-centered transformation initiatives.
The Human Margin Clarity System didn’t come from a whiteboard. It came from tension – lived, observed, absorbed, and translated.
I’ve spent my life learning how to read systems. I move between dashboards and sketchbooks, code and metaphor, corporate decks and weekend policy advocacy for neurodiversity.
I’m creating this framework because I believe in building bravely and publicly, in the wild: a living system, and the internal logic behind how I want to keep the human in the loop in an AI-driven future.
This is also, selfishly, for myself and my family, because I believe in futures we’d be proud to leave to the next generation, where technology amplifies human potential.
The Human Margin Clarity System is built on the following pillars:
Hold and value the human experience when scaling responsible AI adoption.
(AI governance, internal communications, ethical brand behaviour, etc.)
Provide clarifying tools for decision-making in AI design, adoption, and rollout.
(Values-to-actions mapping, change management (ADKAR) guidelines, adoption readiness assessment, etc.)
Center narrative design in brand, marketing, and communications for AI explainability and ethical storytelling.
(Thought leadership, internal language, product storytelling, etc.)
Help teams make meaning with language models.
(LLM interpretability, semantics, brand architecture, narrative tone, value expression, etc.)