The educational publishing landscape is undergoing its most dramatic transformation in generations as artificial intelligence reshapes how students learn, how content is delivered, and what role institutions play in knowledge transfer. While major textbook publishers have begun pivoting toward adaptive courseware platforms, and frontier AI models race ahead with billions in funding and unprecedented capabilities, the higher education and K-12 ecosystems face a critical strategic question: should they license content to AI companies, build proprietary AI learning systems, or pursue some hybrid approach?

This article examines the rapid changes underway and proposes a pragmatic — if admittedly imperfect — framework for how educational institutions, publishers, and distributors might evolve to complement rather than compete with AI advancement. I acknowledge the naivety inherent in proposing any fixed strategy amid forces of this magnitude, yet I argue that inaction or defensive postures pose even greater risk.

The Current State

Already in Motion

Publishers Have Already Begun the Pivot

The shift from textbooks to courseware is not hypothetical — it is well underway and accelerating. McGraw Hill has moved to platform-based delivery with continuous content updates rather than edition-based textbook sales. Cengage MindTap positions itself as a complete learning platform integrating trusted content, interactive activities, and instructor dashboards. Pearson and others have invested in adaptive learning technologies that break content into atomic learning objects, generate personalized practice problems, and provide real-time feedback loops.

This transformation reflects a fundamental economic shift: from selling books as one-time purchases toward recurring institutional licensing and “First Day Access” programs. Publishers are betting that their moat lies in instructional design and data orchestration, not in the content itself.

Frontier AI Models Are Adding Educational Guardrails

Simultaneously, the leading AI labs have recognized that general-purpose chatbots optimized to “be helpful” can undermine learning by doing students’ thinking for them. The response has been to layer pedagogical constraints on top of base models. Khan Academy’s Khanmigo acts as a Socratic tutor — asking diagnostic questions, offering hints, guiding step-by-step reasoning, and explicitly withholding direct answers to homework problems. OpenAI’s ChatGPT study mode shifts the model’s behavior toward interactive problem-solving and personalized feedback.

The differentiator will not be raw AI capability. It will be how that capability is wrapped in validated curricula, aligned to standards, integrated with assessment systems, and connected to educator workflows.

The Strategic Question

License, Build, or Both?

Educational publishers and institutions now face a critical choice about how to position themselves relative to frontier AI labs. Broadly, three strategies are visible in the market:

Option 01
License Content to LLM Providers

Many scholarly and news publishers have signed deals allowing AI companies to train on their archives in exchange for licensing fees, API credits, or co-development opportunities. The upside is near-term revenue and potential influence over how content surfaces in AI-generated responses. The downside is disintermediation risk: if students can get “good enough” explanations from a generic chatbot trained on your textbooks, why would institutions pay for your courseware platform?

Option 02
Withhold Content and Build Proprietary AI Courseware

Some publishers are treating their structured curricula, assessment banks, and student performance data as proprietary assets and building AI-native courseware on top of licensed LLM APIs. This approach aims to retain control over pedagogy, user experience, and the direct relationship with institutions — but carries the risk of falling behind if frontier models advance faster than internal efforts can keep pace.

Option 03
Selective Licensing Plus Proprietary AI Products

A hybrid approach involves monetizing lower-differentiation or archival content via broad LLM licensing, while protecting flagship programs and learning data, and using licensed frontier models as infrastructure for proprietary courseware. For publishers seeking to avoid commoditization, the emerging consensus is clear: treat base LLMs as infrastructure, not as your primary customer.

A Proposal: The AI-Complement Model

Five Pillars for Thriving

Given the speed, capital, and talent concentrated in frontier AI labs — collectively deploying tens of billions of dollars annually — it seems implausible that educational publishers or individual institutions can out-innovate them on core model capabilities. The pragmatic path is to complement rather than compete.

Pillar 01
Treat Frontier LLMs as Infrastructure

Educational institutions and publishers should view GPT, Claude, Gemini, and future models as they once viewed Amazon Web Services: powerful, commoditized infrastructure that enables differentiated products but is not itself the product.

Concrete actions
Negotiate enterprise API agreements with multiple frontier labs to avoid single-vendor lock-in
Invest in prompt engineering and retrieval-augmented generation (RAG) systems that connect general models to specific curricula
Build portability into courseware architecture so underlying models can be swapped as capabilities and pricing evolve
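The first and third actions above can be sketched together: a thin provider interface keeps vendor SDKs out of courseware code, and a retrieval step grounds the model in approved curriculum. The Python below is a minimal illustration under stated assumptions — ModelProvider, StubProvider, and the toy keyword retrieval are hypothetical stand-ins, not real products or APIs; a production system would use embedding-based retrieval and real vendor adapters.

```python
from abc import ABC, abstractmethod

class ModelProvider(ABC):
    """Thin adapter so courseware code never calls a vendor SDK directly."""
    @abstractmethod
    def complete(self, system: str, user: str) -> str: ...

class StubProvider(ModelProvider):
    """Stand-in for a real vendor adapter (OpenAI, Anthropic, Google, ...)."""
    def complete(self, system: str, user: str) -> str:
        return f"[stub answer grounded in: {system[:40]}...]"

def retrieve_curriculum(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Toy keyword-overlap retrieval; real RAG would use embeddings."""
    terms = set(query.lower().split())
    scored = sorted(corpus.items(),
                    key=lambda kv: -len(terms & set(kv[1].lower().split())))
    return [text for _, text in scored[:k]]

def tutor_answer(provider: ModelProvider, query: str, corpus: dict[str, str]) -> str:
    """Retrieve approved curriculum, then ask whichever model is configured."""
    context = "\n".join(retrieve_curriculum(query, corpus))
    system = f"Tutor using only this approved curriculum:\n{context}"
    return provider.complete(system, query)

corpus = {
    "alg-1": "Solving linear equations: isolate the variable on one side.",
    "geo-3": "The Pythagorean theorem relates the sides of right triangles.",
}
print(tutor_answer(StubProvider(), "How do I solve a linear equation?", corpus))
```

Because only the adapter knows about a vendor's SDK, swapping one frontier model for another becomes a configuration change rather than a rewrite.
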

Pillar 02
Own the Pedagogy, Data, and Validated Outcomes

The defensible moat for educational organizations is not content ownership but rather validated learning systems: curricula aligned to standards, formative assessment design, adaptive sequencing algorithms trained on real student performance data, and credible evidence of efficacy.

Concrete actions
Shift investment from content creation (which AI can increasingly automate) toward learning science research
Build closed-loop data systems that capture student interaction data, measure learning gains, and use that telemetry to improve both content and AI tutor behavior
Publish efficacy studies and seek third-party validation to differentiate AI courseware that demonstrably works from generic chatbots
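As one concrete shape for the closed-loop idea, here is a minimal Python sketch of a de-identified interaction record plus Hake's normalized gain, a standard learning-science metric. The field names and objective identifier are illustrative assumptions, not a real schema.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass(frozen=True)
class InteractionEvent:
    """One de-identified learning event; field names are illustrative."""
    student_hash: str   # salted hash, never a raw identifier
    objective_id: str   # standards-aligned learning objective
    pre_score: float    # diagnostic score before the tutored session (0-1)
    post_score: float   # score on an equivalent post-assessment (0-1)

def normalized_gain(events: list[InteractionEvent]) -> float:
    """Average of Hake's normalized gain, (post - pre) / (1 - pre)."""
    gains = [(e.post_score - e.pre_score) / (1.0 - e.pre_score)
             for e in events if e.pre_score < 1.0]
    return mean(gains) if gains else 0.0

events = [
    InteractionEvent("a1f9", "linear-equations-1", pre_score=0.4, post_score=0.7),
    InteractionEvent("b2c4", "linear-equations-1", pre_score=0.5, post_score=0.8),
]
print(f"normalized gain: {normalized_gain(events):.2f}")  # prints 0.55
```

Telemetry in roughly this shape is what lets the same pipeline both tune AI tutor behavior and back the efficacy studies the third action calls for.
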

Pillar 03
Redefine Distribution as Integration and Orchestration

In a world where AI courseware is delivered through LMS platforms and personalized in real time, distribution is no longer about inventory management. It is about integration depth and data orchestration.

Concrete actions
Invest heavily in LTI and LMS-native integrations (Canvas, Blackboard, Moodle, D2L) so courseware feels native to instructor workflows
Partner with student information systems and institutional analytics platforms to connect courseware performance data to broader student success initiatives
Develop robust data privacy and compliance frameworks (FERPA, state privacy laws, accessibility standards) as a point of differentiation
Pillar 04
Collaborate on Shared Infrastructure

Rather than every publisher and institution building redundant AI infrastructure, there is an opportunity for consortia and industry bodies to establish shared services: data trusts that allow multiple publishers to contribute de-identified student interaction data, open-source courseware frameworks, and industry-wide AI safety and bias auditing to ensure AI tutors do not perpetuate inequities.

Pillar 05
Accept Model Velocity and Build for Continuous Evolution

Frontier AI models are advancing at a pace that makes traditional multi-year product roadmaps obsolete. Educational organizations have historically operated on semester or academic-year cycles. AI advancement will force a shift toward tech-company cadences: continuous deployment, real-time monitoring, and rapid iteration.

Concrete actions
Adopt continuous delivery and A/B testing practices: ship updates frequently, instrument everything, iterate based on real student outcomes
Build “model-agnostic” architectures so swapping from one frontier model to another requires configuration changes, not rewrites
Cultivate internal talent and partnerships with ed-tech accelerators and university research labs to stay current on model capabilities and emerging risks
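The A/B-testing practice in the first action can be as simple as deterministic hash bucketing, sketched below; the experiment and variant names are hypothetical.

```python
import hashlib

def assign_variant(student_hash: str, experiment: str,
                   variants: tuple[str, ...] = ("control", "treatment")) -> str:
    """Deterministic bucketing: the same student always sees the same
    variant of a given experiment, with no assignment table to store."""
    digest = hashlib.sha256(f"{experiment}:{student_hash}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Same inputs always yield the same arm, so outcome data stays attributable.
arm = assign_variant("a1f9", "hint-style-v2")
assert arm == assign_variant("a1f9", "hint-style-v2")
print(arm)
```

Keying the hash on the experiment name as well as the student means each new experiment re-randomizes students independently of earlier ones.
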

Acknowledging the Risks

What Could Go Wrong

I am keenly aware that this proposal may prove inadequate in the face of forces I cannot fully anticipate. Several scenarios could render this framework obsolete:

Risk 01
AGI or Near-AGI Emergence
If frontier labs achieve highly capable autonomous agents in the next 2–5 years, the notion of “educational courseware” may become as quaint as encyclopedias.
Risk 02
AI Model Consolidation
If one or two labs achieve decisive advantages and effectively monopolize capability, the “treat LLMs as commodity infrastructure” strategy collapses entirely.
Risk 03
Regulatory Disruption
Governments may impose restrictions on AI use in education, or conversely subsidize public AI tutoring systems that displace commercial courseware entirely.
Risk 04
Student and Faculty Rejection
Learners and educators may reject AI-mediated instruction en masse, preferring human interaction and traditional pedagogy. Cultural backlash remains possible.
Risk 05
Economic Collapse of Higher Ed Models
If AI enables credible self-directed learning and alternative credentialing, traditional degree programs may see enrollment declines that reshape the entire courseware market.
Assumption
The “Muddling Through” Scenario
My proposal assumes institutions, publishers, and AI labs co-evolve over a 5–10 year horizon with incremental transformation. If that assumption is wrong, more radical pivots will be necessary.

Conclusion

Inaction Is the Greater Risk

The race is not to out-innovate OpenAI or Anthropic on model capabilities.

Despite the acknowledged limitations and uncertainties in this proposal, the alternative — defensive postures aimed at protecting legacy business models — is far riskier. The textbook-to-courseware pivot is already underway, and frontier AI models are advancing with a speed and scale that educational incumbents cannot match through internal R&D alone.

The race is to out-execute on learning science, institutional trust, and validated outcomes while riding the wave of AI advancement rather than being crushed by it. Engaging with these forces — however imperfectly — beats pretending they can be ignored.

The worst outcome would be paralysis: watching from the sidelines as frontier labs, nimble ed-tech startups, or public-sector initiatives define the future of learning while traditional stakeholders cling to fading advantages.

More thoughts to come.

This is part of an ongoing series on education, technology strategy, and the world being reshaped by AI.
