What Comes Next?

Recorded on October 5 at the Saratoga Book Festival at Universal Preservation Hall.
Transcribed by Otter.ai and processed in the Smartacus Neural Net.


Panelists

  • Gary Rivlin — Pulitzer finalist, Gerald Loeb Award–winning journalist; author of AI Valley, Saving Main Street, and Katrina: After the Flood. Former New York Times reporter.

  • Matt Lucas — Professor of Business and Vice Chair for AI Strategy, Skidmore College; former global executive; generative-AI subject matter expert.

Moderator/Host

  • Ellen Beal — Festival founder (remarks condensed).


Session Overview 

Gary Rivlin and Matt Lucas at the Saratoga Book Festival

Journalist Gary Rivlin and business professor Matt Lucas explore why the 2022 release of ChatGPT marked an inflection point for AI, how “hyperscalers” (Microsoft, Google, Amazon, Meta) are reshaping innovation economics, and what bubbles, agents, and autonomous systems could mean for jobs, education, and regulation.

Rivlin argues AI is simultaneously overhyped in the short term and under-hyped in the long, and urges practical guardrails on surveillance, warfare, hiring, and bias. Audience questions press on data access, power use, copyright “original sin,” and whether market forces or regulation will keep the technology aligned with public interests.


Opening & Introductions

Ellen Beal: (Condensed) Welcome and thanks for joining us. After the conversation, books are available for purchase and signing downstairs. I’m pleased to introduce Gary Rivlin and Matt Lucas.

Matt Lucas: Thanks for being here. Quick show of hands—who used ChatGPT, Google Gemini, or Claude in the past week? (Many hands.) We’ve reached the point where these tools are part of everyday life. OpenAI reports hundreds of millions of weekly users—evidence that AI is mainstreaming fast. Still, we’re in a complicated moment: concerns around bias, ethics, safety, and security are real. Gary’s book AI Valley dives into all of this. Let’s start with the core premise: AI has been around for decades. Why did 2022 feel like the switch flipped?


Why 2022 Was an Inflection Point

Gary Rivlin: The field dates to the 1940s–50s; in 1956, researchers at Dartmouth famously coined “artificial intelligence” and predicted business adoption in ten years. They were off by decades. We’ve all used AI for years—Google Translate, autocorrect, spell-tolerant search—but it lived behind glass. On November 30, 2022, ChatGPT changed that. You could talk to a machine and feel the “magic” of it talking back. That conversational interface made the capability visible and personal.

Lucas: Your book humanizes this story with real characters. Let’s take Reid Hoffman—what did he see?

Rivlin: Hoffman—LinkedIn cofounder and one of Silicon Valley’s best-connected investors—was an early AI student at Stanford. For years, AI could barely tell a circle from a square. By the mid-2010s, with deep learning breakthroughs, the field got interesting again. He helped fund OpenAI early and later co-founded Inflection AI in 2022. He articulated a key shift: machines learning our language, not the other way around. That insight preceded ChatGPT’s public moment and helped crystallize my book.

Lucas: Silicon Valley’s muscle memory says startups drive disruption. But you argue this wave favors incumbents.

Rivlin: Right. Traditionally, venture capital + talent + scrappiness could birth a Google. With frontier models, costs exploded—compute, talent, training, and inference. Inflection raised ~$1.5B with blue-chip backers, yet ultimately sold to Microsoft. Training runs that once cost tens of millions have climbed to billions, with some projecting tens of billions within a few years. That’s why the “next Google” may simply be Google, or Meta, or Microsoft—firms with cash, chips, and data. Startups can still innovate, but the core model layer is consolidating.


Are We in a Bubble?

Lucas: We’re reading about $300+ billion planned for data centers and AI infrastructure. Is this exuberance or laying new “digital highways”?

Rivlin: Both. I covered the dot-com boom, bust, and revival. We overestimate tech in the short term and underestimate it in the long term. There’s froth—teams raising at billion-dollar valuations with a memo and no product. A correction will come. But the long-term arc is real—like the internet, AI will pervade business, education, and daily life. The difference from dot-com: many leaders (OpenAI, etc.) are still private, so when the music stops, the pain concentrates among investors more than public markets.

Lucas: Big spend requires big returns. Where does the durable value come from?

Rivlin: Two answers. First, strategic necessity: Microsoft, Google, Amazon, and Meta can’t risk missing the next platform (see: IBM and the PC, Microsoft and mobile). Second, the agent model: systems that do things—schedule, purchase, draft, triage, negotiate—on our behalf at work and at home. Think trusted copilot that knows your context and can execute tasks, not just chat. That’s where many in the Valley think the value will crystallize.


Doomers, Zoomers, and Bloomers

Lucas: You describe three camps: Doomers (AI destroys us), Zoomers (unfettered progress cures everything), and Bloomers (optimists with guardrails). Where are you?

Rivlin: Firmly Bloomer. I don’t buy apocalyptic sci-fi scenarios as our policy frame. I worry about near-term harms within our lifetime: AI in warfare, surveillance, privacy, scams and persuasion, and bias in high-stakes decisions. On the other hand, I’m skeptical of the “nothing should slow AI” argument. AI can accelerate science, education, and healthcare, but also amplify bad actors. We need guardrails that make a net positive more likely.


Bias: What’s Fixable and What Isn’t

Lucas: LLMs learn from human data—so they inherit human bias. What can companies do?

Rivlin: Two layers. Surface-level toxicity—racist, misogynist, antisemitic speech—can be reduced with reinforcement learning from human feedback and policy fine-tuning. Harder is implicit bias baked into training data and patterns. My principle: AI is a tool, not a decider. It should support humans, not replace judgment in hiring, sentencing, lending, or housing. I like Microsoft’s term “copilot.” You don’t put the hammer in charge of building a house.


Jobs: What Gets Squeezed First

Lucas: Students graduating today feel the ground moving. Entry-level analyst, research, and presentation roles are shrinking. What’s your read?

Rivlin: We already see fewer entry-level openings—because models can do basic tasks: first drafts, simple research, routine code. That poses a pipeline problem: fewer junior roles today means fewer seasoned professionals tomorrow. In the short-to-medium term, it’s less “AI replaces a worker” and more “AI-enabled workers replace those who aren’t.” Teams of 20 become 15–18. Blue-collar and pink-collar roles are affected too (e.g., drivers as autonomy scales). Over time, AI will create new roles—but the transition costs will be uneven and real.


Why Non-Coders Matter More

Lucas: You argue the future of AI can’t be left to a handful of engineers.

Rivlin: Exactly. We need diversity of people and disciplines—historians, philosophers, sociologists, activists—alongside computer scientists. If AI will be a personal assistant, it must understand human contexts and values. That means trust—and Big Tech hasn’t always earned it. Encouragingly, universities like Skidmore are building cross-disciplinary programs. As AI spreads, taste, judgment, and human sensibility become more valuable, not less.

Rivlin: When a machine beat Kasparov in 1997, some thought chess was “over.” Instead, chess boomed. Similarly with digital music—streaming didn’t kill albums; it reshaped them. My hope: as AI generates more, human authorship and intent become more prized. I use AI, but I decide what’s good.


The Long Term: Exponential Blind Spots

Lucas: You’ve said we underestimate the long term. Why?

Rivlin: Humans struggle with exponentials. Models improve 10x over short cycles; that compounds. We also fail at second-order effects. Cars created suburbs and motels; the internet enabled social media with profound social impact. With AI, we’ll see equally unexpected downstream realities. That’s why guardrails matter now, before the capabilities leap again.

A looming issue: power. Data centers for AI run 24/7 and are energy-hungry. Utilities in some regions are already raising rates. Electrification (EVs), crypto, and now AI are straining grids. Ambitious build-outs promise national advantage but also entail huge electricity and water use. This is a climate and infrastructure story, not just a software story.


Audience Q&A (condensed)

Audience Member 1: Access seems gated: universities with big budgets get the best datasets and compute. Meanwhile, the public uses “free” chatbots that quickly hit paywalls. Is AI becoming a product for the few?

Rivlin: Compute and data access do favor incumbents; that’s part of the consolidation story. At the same time, consumer chatbots are widely accessible—often free for light use. But the economics bite: even at $20 or $200 per user per month, providers can lose money on inference. Expect business models to evolve—some benign, some worrisome (e.g., subtle commercial bias or “product placement” in answers). Vigilance is essential.

Audience Member 2: We should be talking about the people behind AI—oversight, contracts, copyrighted data, and the externalities of data centers. Why did companies roll this out before addressing those obligations?

Rivlin: You’ve named AI’s original sin: models trained on creators’ copyrighted work without compensation. Lawsuits are ongoing; the landscape is unsettled. There’s also a civic irony: government-funded research seeded AI, yet private platforms capture the profits. OpenAI is a case study—founded as a nonprofit to safeguard humanity, but competitive pressure and commercialization shifted priorities. I don’t trust Sam Altman; he’s brilliant but squirrely in governance terms. Broadly, we need contracts, consent, compensation, and community impact planning—before deployment, not after.

Audience Member 3: Venture capital optimizes for money, not “making the world better.” Can markets alone keep AI aligned with public values?

Rivlin: Markets rarely self-correct toward the public interest. Tech often begins with noble rhetoric—“connecting the world,” “organizing knowledge”—and then financial incentives warp behavior: search bias, engagement hacks, algorithmic arbitrage. We’ve seen this with Google and Facebook. OpenAI began as “for humanity” and drifted. This is why public pressure, journalism, and policy matter.


Audience Member 4: What regulations would you “magic wand” into existence?

Rivlin: I liked parts of the Biden Administration’s approach—red-teaming by external experts, transparency to government—though executive orders are fragile. California is considering rules (pending signatures and donor politics).

What I’d prioritize:

  • Warfare: clear limits on autonomous targeting and lethal use.

  • Surveillance & Privacy: strict constraints on pervasive monitoring and data brokerage.

  • High-stakes Decisions: ban fully automated decisions in hiring, lending, housing, sentencing; require human-in-the-loop, audit trails, and recourse.

  • Safety & Disclosure: standardized risk testing, post-deployment incident reporting, and provenance signals for AI-generated media. Historically, safety standards (think 19th-century railroads) built public trust and grew industries. Guardrails can accelerate adoption by making it safer.

Audience Member 5: If society objects, do companies actually change?

Lucas: Sometimes yes, when the public pushes and the press focuses attention. A recent example: users recoiled from overly sycophantic chatbot behavior; providers tuned it back. It’s imperfect and reactive, but public feedback matters. We should build that civic muscle—locally and nationally.

Rivlin: My fear is that real regulation only arrives after a bad event—a biosecurity incident, a massive financial exploit. I hope we don’t wait for that.


Closing

Lucas: Thank you, Gary—and thanks to all of you. Please support thoughtful journalism on AI, and keep this conversation going in your organizations and communities.

Rivlin: Appreciate it. Saratoga Springs is beautiful—I’m glad to be here. Thanks for the great questions.

(Applause.)


Pull Quotes (≤25 words each)

  • “AI is overhyped in the short term and under-hyped in the long. That’s the paradox—and the policy challenge.” —Gary Rivlin

  • “Don’t put the hammer in charge of building the house. AI is a tool, not the decider.” —Gary Rivlin

  • “The next Google might be Google. Frontier models are so expensive that incumbents have the edge.” —Gary Rivlin

  • “AI-enabled workers will replace those who aren’t. That’s the near-term labor story.” —Gary Rivlin

  • “Guardrails build trust—and markets. Safety standards helped railroads thrive; AI needs the same.” —Gary Rivlin

Dan Forbush

Publisher developing new properties in citizen journalism.

http://smartacus.com