We are not the first generation to face a moment of educational disruption. The industrial revolution demanded universal literacy. The post-war era demanded mass access to university. The internet demanded digital fluency. And in each of those moments, the institutions that served students best were the ones that moved deliberately but without delay — holding fast to quality and equity while embracing what was genuinely new.
We are in one of those moments again. Artificial intelligence is reshaping how knowledge is produced, how skills are learned, and how individuals navigate careers and civic life. The question before every college, accreditor, professional society, and policymaker is not whether to respond. It is whether we will respond together, with the speed, seriousness, and care that students deserve.
This paper is a call to action. It is offered humbly, in recognition of the complexity of what I am proposing. But it is offered urgently, because the window to shape AI’s role in higher education — rather than merely react to it — is open right now.
The Opportunity We Cannot Afford to Miss
For decades, the promise of truly personalized education — learning that adapts to each student’s pace, background, and goals — remained out of reach at scale. The economics were simply prohibitive. A human tutor for every student is a luxury very few can afford.
Agentic AI systems, deployed thoughtfully, change that calculus. They can provide real-time, adaptive, personalized guidance across subjects and skill levels — not as a replacement for human guidance, but as a powerful amplifier of it. When a student struggles with a concept at 11 p.m. before an exam, an AI tutoring agent can scaffold understanding, ask Socratic questions, and check for genuine comprehension.
Early deployments across community colleges, research universities, and corporate training programs already show measurable gains in student engagement, course completion, and learning outcomes — particularly among students who historically have been underserved by traditional lecture-and-exam models.
But here is the risk: AI in education is arriving without the kind of trusted, broadly accepted, evidence-based standards that allow students, families, and employers to know what a credential means. Without those standards, we risk a fragmented landscape of uneven quality, inequitable access, and eroding public trust — precisely when trust in higher education is already under strain.
What “Good” Looks Like: Four Interlocking Elements
The vision this paper proposes is a disciplined application of principles that the best of American higher education has always held: that learning should be rigorous, equitable, transparent, and continuously improving. Applied to AI-assisted learning, those principles suggest a model with four interlocking elements:
AI-assisted courses should meet the same accreditation standards as any other course — with clearly defined learning outcomes, evidence of student achievement, faculty oversight, and continuous improvement cycles. Accreditors should develop — urgently but carefully — criteria that govern how AI-assisted personalization is designed, validated, and monitored.
Modern AI agents can guide students through multi-step problems, track progress over time, identify gaps in understanding, and connect students to human support when needed. But they must be scoped with care: agents should operate within well-defined pedagogical boundaries, with robust security safeguards and full transparency to students and faculty about what they are doing and why.
No AI system — however sophisticated — should operate in a teaching context without meaningful human oversight. Faculty must retain authority over learning objectives, assessment design, and the ultimate judgment of student mastery. The “teacher-in-the-loop” is not a compliance checkbox; it is the pedagogical and ethical spine of the entire model.
AI capabilities will continue to change, sometimes faster than traditional accreditation cycles can accommodate. Standards must therefore be designed for continuous, evidence-based evolution — with modular governance structures that allow specific domains to be updated on shorter cycles, while high-level principles remain stable.
A Federated AI in Education Council
The good news is that we do not need to start from scratch. The United States has, in ABET and its predecessor organizations, a proven template for how professional societies can federate around shared standards, build genuine expertise, and earn the trust of employers, institutions, and students alike. We can learn from it and compress the timeline significantly, without sacrificing legitimacy.
I propose the formation of a federated, multi-stakeholder AI in Education Council — an independent body convened by existing higher-education quality organizations, grounded in professional societies, and governed with explicit safeguards against undue industry influence.
CHEA (the Council for Higher Education Accreditation), a coalition of regional accreditors, and leading professional societies (ACM, IEEE, ASEE, and discipline-specific societies across the arts, sciences, and professions) should jointly convene a founding task force within the next six months. This task force should include faculty representatives, student voices, disability advocates, equity researchers, civil society organizations, and — in non-voting advisory roles — representatives from AI technology companies.
Rather than attempting to develop a single comprehensive standard, the Council should organize its work into focused modules: AI Tutoring and Personalized Learning; Assessment Integrity and Academic Honesty; Data Governance and Student Privacy; AI Literacy as a Graduate Outcome; and Faculty Roles and Development. Each module should aim to produce a version 1.0 standard within twelve months, subject to a 60-day public comment process.
Standards without evidence are just aspirations. The Council should partner with a network of “AI in Education Labs” — colleges and universities willing to pilot AI-assisted learning under rigorous ethical and research protocols — to generate the real-world evidence that drives standard refinement. Pilots should be transparent, publicly reported, and evaluated on student outcomes across demographic groups.
The Council’s standards should be explicitly designed to plug into, not replace, existing accreditation frameworks. Regional accreditors and specialized accreditors like ABET should be encouraged to adopt the Council’s AI-specific criteria as recognized supplementary standards — keeping core quality authority where it belongs while providing AI expertise they do not currently have the capacity to develop on their own.
Genuine Commitment, Not Performative Participation
A coalition of this kind will only succeed if each constituency shows up with genuine commitment — contributing candidly and concretely, not merely endorsing from the sidelines.
A Note on Urgency and Humility
I am aware that calls to action in education reform have a long and mixed record. Many urgent coalitions have been announced and quietly dissolved. I do not pretend this will be easy.
But I believe this moment is genuinely different. The technology is not waiting. AI tutoring systems, agentic learning tools, and automated assessment platforms are being deployed in American classrooms right now — with or without the standards I am proposing. The question is not whether change will come. It is whether the people most accountable for educational quality will help shape it.
The window to shape AI’s role in education — rather than merely react to it — is open right now. Students deserve better than a standards vacuum.
On funding: the most likely near-term path runs through major education-focused philanthropies — the Gates Foundation, Lumina Foundation, Hewlett Foundation, and Schmidt Futures all operate at the intersection of learning quality, credential reform, and emerging technology. Professional societies like ACM and IEEE already maintain standard-setting infrastructure that could stand up a credible task force within months. Industry funding is welcome but must be structurally bounded: a strict cap on total industry contributions (no more than 20% of the operating budget), full public disclosure of all contributions, and no governance role for funders in the standard-setting process.
The path to self-sufficiency follows the ABET model: philanthropic and in-kind funding for years one through three, transitioning to dues and review fees by year four or five as institutional membership grows.
Students in American higher education and vocational education deserve learning that adapts to their needs and prepares them for a world that will require lifelong learning. They also deserve assurance that the institutions awarding their credentials have held those credentials to honest, rigorous, independently verified standards.
Both things are possible. But only if the people with the most to contribute — educators, accreditors, researchers, students, and yes, technologists — choose to build this together, with the urgency the moment demands and the care that students deserve.
More thoughts to come.
This is part of an ongoing series on education, AI, and the future we are building together.