The educational publishing landscape is undergoing its most dramatic transformation in generations as artificial intelligence reshapes how students learn, how content is delivered, and what role institutions play in knowledge transfer. While major textbook publishers have begun pivoting toward adaptive courseware platforms, and frontier AI models race ahead with billions in funding and unprecedented capabilities, the higher education and K-12 ecosystems face a critical strategic question: should they license content to AI companies, build proprietary AI learning systems, or pursue some hybrid approach?
This article examines the rapid changes underway and proposes a pragmatic — if admittedly imperfect — framework for how educational institutions, publishers, and distributors might evolve to complement rather than compete with AI advancement. I acknowledge the naivety inherent in proposing any fixed strategy amid forces of this magnitude, yet I argue that inaction or defensive postures pose even greater risk.
Already in Motion
Publishers Have Already Begun the Pivot
The shift from textbooks to courseware is not hypothetical — it is well underway and accelerating. McGraw Hill has moved to platform-based delivery with continuous content updates rather than edition-based textbook sales. Cengage MindTap positions itself as a complete learning platform integrating trusted content, interactive activities, and instructor dashboards. Pearson and others have invested in adaptive learning technologies that break content into atomic learning objects, generate personalized practice problems, and provide real-time feedback loops.
This transformation reflects a fundamental economic shift: from selling books as one-time purchases toward recurring institutional licensing and “First Day Access” programs. Publishers are betting that their moat lies in instructional design and data orchestration, not in the content itself.
Frontier AI Models Are Adding Educational Guardrails
Simultaneously, the leading AI labs have recognized that general-purpose chatbots optimized to “be helpful” can undermine learning by doing students’ thinking for them. The response has been to layer pedagogical constraints on top of base models. Khan Academy’s Khanmigo acts as a Socratic tutor — asking diagnostic questions, offering hints, guiding step-by-step reasoning, and explicitly withholding direct answers to homework problems. OpenAI’s ChatGPT study mode shifts the model’s behavior toward interactive problem-solving and personalized feedback.
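The "layering" described above can be made concrete with a small sketch: the pedagogy lives in a wrapper around the model call, not in the model weights. The prompt wording, function name, and message format below are my own illustrative assumptions, not Khanmigo's or ChatGPT study mode's actual design.

```python
# Minimal sketch of layering a pedagogical constraint on a base chat model.
# Prompt text and message structure are illustrative assumptions only.

SOCRATIC_SYSTEM_PROMPT = (
    "You are a tutor. Never state the final answer to a problem. "
    "Ask one diagnostic question at a time, offer a hint only after "
    "the student attempts a step, and guide them to check their own "
    "reasoning."
)

def build_tutor_messages(student_message, history=None):
    """Prepend the tutoring constraint to a chat-style message list.

    The returned list can be sent to any licensed base model; the
    Socratic behavior comes from the wrapper, not the model itself.
    """
    messages = [{"role": "system", "content": SOCRATIC_SYSTEM_PROMPT}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": student_message})
    return messages
```

The point of the sketch is strategic, not technical: the same base model serves a homework-answering chatbot or a Socratic tutor depending entirely on the constraint layer, which is where educational organizations can differentiate.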
The lesson: the differentiator will not be raw AI capability. It will be how that capability is wrapped in validated curricula, aligned to standards, integrated with assessment systems, and connected to educator workflows.

License, Build, or Both?
Educational publishers and institutions now face a critical choice about how to position themselves relative to frontier AI labs. Broadly, three strategies are visible in the market:
The first path is licensing: many scholarly and news publishers have signed deals allowing AI companies to train on their archives in exchange for licensing fees, API credits, or co-development opportunities. The upside is near-term revenue and potential influence over how content surfaces in AI-generated responses. The downside is disintermediation risk: if students can get “good enough” explanations from a generic chatbot trained on your textbooks, why would institutions pay for your courseware platform?
The second is building: some publishers are treating their structured curricula, assessment banks, and student performance data as proprietary assets and constructing AI-native courseware on top of licensed LLM APIs. This approach aims to retain control over pedagogy, user experience, and the direct relationship with institutions — but carries the risk of falling behind if frontier models advance faster than internal efforts can keep pace.
The third is a hybrid: monetize lower-differentiation or archival content via broad LLM licensing, while protecting flagship programs and learning data and using licensed frontier models as infrastructure for proprietary courseware. For publishers seeking to avoid commoditization, the emerging consensus is to treat base LLMs as infrastructure, not as your primary customer.
Five Pillars for How to Thrive
Given the speed, capital, and talent concentrated in frontier AI labs — collectively deploying tens of billions of dollars annually — it seems implausible that educational publishers or individual institutions can out-innovate them on core model capabilities. The pragmatic path is to complement rather than compete.
First, educational institutions and publishers should view GPT, Claude, Gemini, and future models as they once viewed Amazon Web Services: powerful, commoditized infrastructure that enables differentiated products but is not itself the product.
Second, the defensible moat for educational organizations is not content ownership but rather validated learning systems: curricula aligned to standards, formative assessment design, adaptive sequencing algorithms trained on real student performance data, and credible evidence of efficacy.
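To make "adaptive sequencing" concrete: one widely used building block is Bayesian Knowledge Tracing, which updates a per-skill mastery estimate after every student response and lets a sequencer choose what to practice next. The sketch below is a generic textbook version with illustrative parameter values, not any publisher's production algorithm; in real systems the slip, guess, and learn parameters are fit to actual student data, which is precisely why that data is a moat.

```python
def bkt_update(p_mastery, correct, slip=0.1, guess=0.2, learn=0.15):
    """One Bayesian Knowledge Tracing step for a single skill.

    slip:  P(wrong answer | skill mastered)
    guess: P(right answer | skill not mastered)
    learn: P(acquiring the skill during this practice opportunity)
    Parameter values here are illustrative, not fitted.
    """
    if correct:
        posterior = (p_mastery * (1 - slip)) / (
            p_mastery * (1 - slip) + (1 - p_mastery) * guess)
    else:
        posterior = (p_mastery * slip) / (
            p_mastery * slip + (1 - p_mastery) * (1 - guess))
    # Bayesian update on the evidence, then allow for learning.
    return posterior + (1 - posterior) * learn

def next_skill(mastery, threshold=0.95):
    """Sequence toward the least-mastered skill still below threshold."""
    below = {skill: p for skill, p in mastery.items() if p < threshold}
    return min(below, key=below.get) if below else None
```

A correct answer raises the mastery estimate, an incorrect one lowers it, and the sequencer keeps serving the weakest skill until every estimate clears the mastery threshold.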
Third, in a world where AI courseware is delivered through LMS platforms and personalized in real time, distribution is no longer about inventory management. It is about integration depth and data orchestration.
Fourth, rather than every publisher and institution building redundant AI infrastructure, there is an opportunity for consortia and industry bodies to establish shared services: data trusts that allow multiple publishers to contribute de-identified student interaction data, open-source courseware frameworks, and industry-wide AI safety and bias auditing to ensure AI tutors do not perpetuate inequities.
Fifth, frontier AI models are advancing at a pace that makes traditional multi-year product roadmaps obsolete. Educational organizations have historically operated on semester or academic-year cycles. AI advancement will force a shift toward tech-company cadences: continuous deployment, real-time monitoring, and rapid iteration.
What Could Go Wrong
I am keenly aware that this proposal may prove inadequate in the face of forces I cannot fully anticipate, and several plausible scenarios could render this framework obsolete.
Despite the acknowledged limitations and uncertainties in this proposal, the alternative — defensive postures aimed at protecting legacy business models — is far riskier. The textbook-to-courseware pivot is already underway, and frontier AI models are advancing with a speed and scale that educational incumbents cannot match through internal R&D alone.
The race is to out-execute on learning science, institutional trust, and validated outcomes while riding the wave of AI advancement rather than being crushed by it. Engaging with these forces — however imperfectly — beats pretending they can be ignored.
The worst outcome would be paralysis: watching from the sidelines as frontier labs, nimble ed-tech startups, or public-sector initiatives define the future of learning while traditional stakeholders cling to fading advantages.
More thoughts to come.
This is part of an ongoing series on education, technology strategy, and the world being reshaped by AI.