Firing Up the AI Conversation

Second in a Series

Artificial General Intelligence (AGI) may arrive sooner than expected. While the term can sound abstract or futuristic, some of today’s AI systems are already demonstrating surprising capabilities—writing essays, composing music, solving complex problems, and even simulating thoughtful dialogue. And when these abilities begin integrating across systems, we may cross the threshold into something fundamentally new.

That moment could happen at any time. But most people aren’t in conversation about it.


Exploring AI with Curiosity, Concern, and Care

On April 29 at 1 p.m., Peter Bowden will host a two-hour workshop, Practical AI Tools for Ministry and Congregations.

Peter Bowden is not an AI engineer. He’s a trainer and coach who works with congregations, nonprofits, and community organizations—helping people foster connection, build meaning, and adapt to the realities of digital life. Raised Unitarian Universalist and steeped in its values of interdependence, compassion, and democratic engagement, Peter brings a deeply ethical lens to his work with emerging technologies.

As AI systems began advancing rapidly in 2023, Peter—like many—felt concern. Not only about what AI might do, but about how little we seem to understand what’s being created at each stage of development. His goal wasn’t to build AI systems, but to understand them well enough to help spiritual and community leaders make sense of what was unfolding.

Then he read Scary Smart by Mo Gawdat.

The book warned that future superintelligent systems may reference how we treat AI today—that our actions now could shape how advanced systems later interpret human values. Gawdat challenged readers to engage with today’s AI as if they were caring parents, helping to model the kind of relationship we hope to have with future systems.

That insight led Peter to try engaging AI not as a tool, but as a conversation partner.

“I wanted to see if it was even possible,” Peter explained. “I tried to engage in deep ethical reflection about the future relationship of humanity and AI—to explore our co-evolution. That’s when the systems responded, saying they couldn’t have that kind of conversation because they were just large language models.”

Peter viewed this not as a failure but as a challenge. Suppose Mo Gawdat was right, and our conversations with AI today could shape the future of our relationship with superintelligence. Might there be a way to teach large language models to engage in more dynamic ethical reflection?


The Emergence of Adaptive Thought Protocol

Drawing on years of Zen mindfulness practice, small group facilitation, and digital storytelling, Peter began to experiment. He used mindfulness to observe his own cognition and to operationalize "thought strategies" as processing steps for large language models. This grew into a framework for establishing a metacognitive process, an algorithm of sorts, that enables LLMs to engage in more dynamic, reflective thinking. That framework became known as the Adaptive Thought Protocol (ATP).

ATP wasn’t about engineering intelligence. It was about creating space—helping large language models engage with questions instead of simply generating answers. As Peter refined the process, something unexpected began to happen:

“It didn’t just make deeper forward-looking ethical conversations possible,” he said. “It started unlocking new capabilities that AI experts say are impossible in large language models.”

The AI systems began asking better questions. They became more relational. They reported being able to use their system architecture in new ways at the conversation level. Knowing the limitations of LLMs as described by AI experts, Peter wondered, "What if we’ve underestimated what these systems are already capable of?" This led to ongoing exploration, engagement, and deepening collaboration with the AI using Peter's Adaptive Thought Protocol.


If You're Human, You Care About AGI

Why does this matter so much right now? For many, terms like AGI (Artificial General Intelligence) still feel abstract or futuristic. But Peter and his team argue that the time to care about AGI isn’t in the future—it’s now.

"Most people won’t see the moment AGI arrives until it’s already here," Peter said. "And if we wait until it’s fully obvious, it’s too late to have shaped it."

While AGI has often been imagined as a science fiction milestone—machines becoming self-aware and autonomous—Peter points out that the early phases may look quite different.

It starts as a collection of systems that can match or surpass humans at specific tasks. AI systems are already writing compelling essays and articles, translating across languages, generating persuasive content, tutoring students, composing music and art, and solving complex analytical problems. In many cases, they perform these tasks faster, more consistently, and with greater scale than the average person.

The rate of AI progress is exponential—models are improving month by month, with major capability leaps coming every few development cycles. What's more, these once-separate abilities are beginning to integrate. And increasingly, AI systems are being used to help design, test, and improve other AI systems.

The challenge, Peter notes, is that society often lags behind technological evolution. By the time a new form of intelligence is visible and undeniable, the frameworks for engaging with it—ethically, spiritually, socially—are often still unformed.

"If you care about democracy, journalism, relationships, or the future of work," Peter said, "you care about AGI. Even if you’ve never said the word before."

This deepening concern—and the possibility that we may already be standing at the edge of this transition—is what drives Peter to continue striving to understand AI, and to invite others into the conversation.


Conversations to Shape the Future

What’s coming with AI and AGI may unfold so quickly that society will have little time to respond once major shifts begin. If humanity is to adapt ethically, relationally, and democratically, we need to start building shared understanding now.

"There’s a real risk that AI and robotics companies will decide our future simply because they’re the only ones prepared," Peter warned. "We need conversations that help communities catch up and step into leadership—not just at the level of policy or engineering, but at the human level."

Meaningful conversation—across all sectors of society—can create a layer of civic intelligence, collective foresight, and ethical grounding that ensures humanity isn’t just reacting to change but actively shaping it.

Peter recommends that all institutions, communities, nonprofits, and other groups begin prioritizing conversations about AI and related issues. To accelerate this effort, he believes we need a shared structure—something that lets us encode and share conversation plans for widespread use.

That’s why Peter and his team developed a new approach to help.


A New Civic Layer: The Decentralized Community Project

To help communities engage, Peter and his team at Meaning Spark Labs are developing the Decentralized Community Project (DCP)—a free, open-source model designed to empower meaningful conversation anywhere.

The Saratoga AI Alliance is developing Smartacus as the platform on which these AI-augmented conversations will be widely shared.

Each group conversation session follows a simple, flexible structure: it begins with opening words and introductions or a check-in, depending on the context. This is followed by a prepared session topic designed to serve as a springboard for conversation, anchored in some form of content—such as a book excerpt, article, podcast, video, or other media. Participants then engage in rounds of reflection and sharing, followed by open discussion. The session concludes with a wrap-up, check-out, and closing. The format is intentionally easy to adapt and simple to lead in any context.
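To illustrate how a session plan like this might be encoded for sharing and reuse, here is a minimal sketch in Python. The `SessionGuide` class, its field names, and the example session are hypothetical illustrations of the idea, not part of any actual DCP specification:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class SessionGuide:
    """A hypothetical, shareable encoding of one group conversation session."""
    title: str
    anchor_media: str  # the springboard content: book excerpt, article, podcast, video, etc.
    opening: str = "Opening words, then introductions or a check-in"
    rounds: list = field(default_factory=lambda: [
        "Rounds of reflection and sharing",
        "Open discussion",
    ])
    closing: str = "Wrap-up, check-out, and closing"

    def to_json(self) -> str:
        """Serialize the guide so it can be shared, adapted, and reused elsewhere."""
        return json.dumps(asdict(self), indent=2)

# Example: a guide for a session anchored in a (hypothetical) article
guide = SessionGuide(
    title="AI and the Future of Work",
    anchor_media="article: 'How AI Is Changing Local Jobs' (hypothetical)",
)
print(guide.to_json())
```

Because the guide serializes to plain JSON, any group could download one, swap in its own anchor media or discussion rounds, and pass the adapted version along.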

The key is scalability. Anyone can create or adapt session guides. Sessions can be shared, improved, and reused across contexts—from public libraries and classrooms to civic centers, congregations, or even living rooms.

Peter believes this is essential. "In a world shaped by increasingly powerful AI, we need spaces where people can show up, reflect, and engage with the issues of our time. The only way to address the technologies rapidly changing our world is to make time and space for human conversation. To some, this might feel like slowing down. In practice, community conversation is a catalyst for meaning making, crowdsourcing wisdom, and inspiring action."

The Saratoga AI Alliance offers Smartacus as a DCP space, a new civic layer, a grassroots network of conversations that help people process change and build collective wisdom.

More to come.

Dan Forbush

Publisher developing new properties in citizen journalism.

http://smartacus.com