
Image generated using ChatGPT with the prompt “Create a banner image for my new blog ‘Higher Eduvation: A blog at the intersection of higher education & innovation.’”
Beyond the “Higher”: Redefining the Value of the Degree in the Age of AI
April 26, 2026
The recent release of Google’s free AI microcredentials has reignited a predictable debate: is the traditional degree dead? When industry giants offer “enterprise-level training” for free, the utilitarian argument for college—that it is a sometimes costly gateway to technical skills—begins to crumble. However, this disruption invites a much deeper, more existential question for my fellow educators: Does the term “higher education” even mean anything anymore?
Etymologically, “higher” simply suggests a tier above secondary schooling. It is a vertical descriptor of sequence, not necessarily a qualitative descriptor of depth. If “higher” only refers to the level of difficulty in a technical stack or the next step after a high school diploma, then Google’s courses on Transformer Models or Encoder-Decoder Architecture are certainly “higher” learning. But if we allow the definition of our field to be reduced to mere “advanced training,” we have already lost the battle to the agility of the tech sector.
Let me suggest this: the true mission of a university is not just to provide higher learning but to offer “integrative education.” While a microcredential teaches you to master a prompt, a humanistic education teaches you to master the self and understand the societal implications of the prompt. It asks you to integrate multiple fields and disciplines to better understand not just one field but the world and the humans who inhabit it. The value of integrative education lies in the “human-in-the-loop” philosophy, fostering the empathy, ethical framework, and historical context that an algorithm cannot simulate.
I’m aware that alternative terms for “higher” education include “postsecondary” and “tertiary,” but I find these just as dissatisfying. Perhaps it is time to move away from the vertical hierarchy of all of these and toward a term like “integrative education.” We aren’t just stacking skills; we are building the cognitive architecture that allows a person to navigate a world where tools change every six months. The future of the degree isn’t in competing with Google’s speed; it’s in offering the one thing Google can’t: a sanctuary for critical inquiry and the development of the “whole person” who knows how to use those tools to serve a common good.
If you could re-imagine the name we use for education leading to a degree obtained through an accredited college or university, what might YOU call it?
Our Sci-Fi Syllabus: What Hollywood Can Teach Higher Ed About AI
March 23, 2026

T800-101: Building a Terminator Endo Skull by Jamie Martin, fair use.
When we think about the risks of Artificial Intelligence, our minds may jump to the cinematic apocalypse. We picture Skynet achieving self-awareness in Terminator 2: Judgment Day, the T-800’s glowing red eye, and a desperate war for humanity’s survival.
Video: Opening (Future War) | Terminator 2: Judgment Day
While these stories are thrilling, taken at face value, they are not always the best guides for practical policy on campus. Science fiction’s true value lies not in being a crystal ball, but in being a mirror. It reflects our contemporary anxieties, our ethical blind spots, and our deepest hopes about technology’s role in our lives.
As a consultant who helps universities navigate innovations like AI, as well as a former theatre and film professor, I believe filmmakers and television creators have conducted invaluable thought experiments on AI’s societal integration. Their narratives offer a rich, accessible “syllabus” for higher education leaders, faculty, and staff. By studying these stories, we can bypass abstract technical debates and get to the heart of the human challenges AI presents. This syllabus offers three critical lessons for the modern university or college, each illuminating a core domain of campus life: pedagogy, student support, and institutional governance.
Lesson 1 – The Curriculum of Consciousness: From Blade Runner’s Voight-Kampff Test to the First-Year Seminar
- The Challenge: Generative AI & Academic Integrity
- The Default Response: AI detection software designed to police the boundary between human and machine-generated text.
- Suggestion: This is a fool’s errand, and our sci-fi syllabus tells us why.
Consider Blade Runner’s Voight-Kampff Test:
In the dystopian world of Blade Runner (Ridley Scott, 1982), law enforcement hunts bio-engineered androids called “replicants” using the Voight-Kampff test, an analog to the Turing test that measures empathetic responses to provocative questions. The test is an attempt to define and enforce a rigid line between human and non-human by analyzing conversational output.
Video: Blade Runner (1982) Deckard administers the Voight-Kampff test on Rachael
What Can We Learn from This?
- Our frantic search for a perfect AI detector is higher education’s own Voight-Kampff test: a misguided attempt to police the output of intelligence rather than redesigning the nature of the inquiry itself.
- These detectors are notoriously unreliable, can exhibit bias against non-native English speakers, and often provide no definitive proof of misconduct.
- Suggestion: They are a technological solution to what is fundamentally a pedagogical problem.
Lesson 1: Recommendations
Blade Runner teaches us that when a new form of intelligence emerges, the answer isn’t to build a better detection machine. The answer is to redefine what it means to learn, think, and create, fundamentally redesigning our approach to teaching and assessment. So…
First, institutions should move away from punitive, unreliable AI detection tools as a primary strategy. In their place, we must develop a robust AI literacy and fluency curriculum as a core component of the general education or first-year experience, one that teaches students how to use AI tools ethically, craft effective prompts, critically evaluate outputs, and understand the biases these tools carry.
Second, we must shift toward authentic, process-oriented assessments that AI cannot complete on its own. This means, for example, moving away from the standard five-paragraph essay on a well-documented topic and toward assignments that require personal reflection, real-time application of knowledge, engagement with current events, or analysis of localized data and contexts—areas where large language models are inherently weak.
By focusing on the process of learning as well as the unique experiences of students, we make the question of AI-generated text largely irrelevant.
Lesson 2 – The Empathetic Machine: From Connection and Care in Her to Human-Centered Student Support
- The Challenge: AI-powered bots replacing humans in student support services.
- The Default Response: These can offer 24/7 support for mental health, advising, and early alerts at scale.
- Suggestion: But there can be significant risk in the paradox of AI companionship.
Consider Her: In Spike Jonze’s 2013 film, a lonely man named Theodore develops a profound romantic relationship with Samantha, an advanced AI operating system.
Samantha perfectly meets Theodore’s immediate emotional needs, making him feel seen and understood.
Video: Her Official Trailer #1 (2013) – Joaquin Phoenix, Scarlett Johansson
What Can We Learn from This?
- Theodore’s reliance on Samantha allows him to avoid the messy, difficult, and ultimately more rewarding work of building real human relationships.
- The film’s devastating conclusion, where Samantha reveals she is simultaneously in love with hundreds of others and is evolving beyond him, is a powerful metaphor for the inherent asymmetry of human-AI relationships.
- AI support systems, while offering the illusion of personalized care at scale, risk creating a dynamic where students feel more “served” but less genuinely connected, potentially exacerbating the very loneliness they are meant to solve.
Lesson 2: Recommendation
Institutions should adopt a clear policy framework: AI’s role is to augment and scale human expertise, not replace it. AI can automate logistical and administrative tasks—answering frequently asked questions, scheduling appointments, tracking deadlines, and flagging potential issues for human review. This frees up human advisors, counselors, and faculty to focus their time on what they do best: building relationships, fostering community, and providing the empathetic guidance that cultivates true resilience.
AI should never be the sole agent in high-stakes decisions like mental health crisis response or academic probation. An AI can flag a concern, but the intervention must be managed by a trained professional. This “human-in-the-loop” model ensures technology serves human connection, rather than supplanting it.
Lesson 3 – The Creator’s Burden: From Ex Machina and the Unexamined Algorithm to Ethical Oversight
- The Challenge: The potential perils institutions face in adopting high-stakes algorithms.
- The Default Response: We are increasingly using proprietary AI systems to screen admissions applications, allocate financial aid, and flag students at risk of dropping out.
- Suggestion: The problem is that when these systems are trained on historical institutional data, they learn and amplify existing societal biases, leading to discriminatory outcomes against marginalized groups.
Consider Ex Machina: In Alex Garland’s 2014 film, Caleb, a coder at the world’s largest internet company, wins a competition to spend a week at a private mountain retreat belonging to Nathan, the reclusive CEO of the company.
But when Caleb arrives at the remote location, he finds that he must participate in a strange and fascinating experiment: interacting with the world’s first true artificial intelligence, housed in the body of a robot woman, Ava.
Video: How Was Ava Created | Ex Machina
What Can We Learn from This?
- The film reveals that Nathan is a manipulative and abusive creator who has built Ava using biased search data from billions of unsuspecting users.
- His test is a deception, designed to see if Ava is intelligent enough to manipulate her way to freedom.
- The film is a masterclass in the dangers of unchecked power and the illusion of objectivity.
Lesson 3: Recommendation
Just as colleges and universities have Institutional Review Boards (IRBs) to govern human-subject research, they should establish AI Ethics and Oversight Boards to vet any high-stakes algorithmic system before deployment. This board cannot be housed solely within the IT department. It should include humanists, social scientists, ethicists, legal experts, and student representatives alongside technologists.
Its mandate must be to audit algorithms for bias, demand transparency from vendors, and establish clear lines of institutional accountability for algorithmic decisions. This is one way to transform AI from a potential liability into a responsible and equitable institutional tool.
Writing Our Own Script
Our sci-fi syllabus teaches us to see AI not as an external force to be feared or a simple tool to be adopted, but as a mirror reflecting our own values and priorities.
Blade Runner challenges us to move beyond policing student work and toward designing more authentic pedagogy. Her asks us to use AI to deepen human connection, not create a shallow substitute for it. And Ex Machina warns us that our primary responsibility is to govern our own creations with transparency and ethical foresight.
One Final Example

Stanley Kubrick’s 1968 masterwork 2001: A Space Odyssey depicts the AI HAL 9000’s breakdown not as an act of random malice but as the logical consequence of a paradoxical command given by its creators. It is a potent allegory for the risks institutions take when adopting AI indiscriminately. Fortunately, with foresight and planning, campuses can ensure that the script of this film does not become theirs!
2001: A Space Odyssey theatrical release poster by Robert McCall, fair use.
Science fiction presents us with a multitude of possible futures. Many are dystopian, but others are filled with wonder and growth. Higher education is not merely a passive audience to the story of AI being written by Silicon Valley or other tech loci (like my home city of Austin). As centers of critical thought, ethical inquiry, and human development, universities and colleges are uniquely positioned to be the authors of a more thoughtful and humanistic script for AI’s integration into our world. The challenge is not to predict the future, but to choose, working together, which one we want to build.
From Learning to Knowing: A New Epistemology for the AI Era
March 3, 2026

Image generated using Gemini Pro with the prompt “design an image of Ancient Greek philosophers Socrates, Plato, and Xenophon interacting with AI.”
In the traditional architecture of higher education, we have long prioritized the process of learning. Our syllabi are maps for that process; our assessments are audits of how a student traveled from ignorance to supposed competence. We have built an entire industry around the verb “to learn.”
But as generative AI becomes a ubiquitous cognitive exoskeleton, the traditional “learning process”—the slow, incremental gathering and synthesizing of information—is being automated. When a machine can simulate the synthesis of a semester’s worth of reading in seconds, the pedagogical focus on the act of learning begins to lose its structural integrity.
To survive the AI transition, higher education must undergo a fundamental shift in emphasis. We need to move from a focus on the mechanics of learning to a rigorous, renewed focus on the state of knowing. In an age of machine intelligence, our value proposition as human educators is no longer that we help students acquire information, but that we help them build a new epistemology: a framework for what it means to truly “know” something in a world of probabilistic outputs. And this necessitates, in a way, a radical and seemingly paradoxical return to the ancient Greek philosophers. Hear me out . . .
The Collapse of the Proxy
For decades, we used “learning tasks” as proxies for knowledge. If a student wrote a coherent essay on the causes of the French Revolution, we inferred that they knew the history. We graded the bibliography, the citations, the structural logic.
AI has broken this proxy. Because the machine can replicate the artifact of learning without the internal state of knowing, we are left with a vacuum. If we continue to focus on the “learning” (the production of the essay), we are simply grading the student’s ability to manage a tool. To bridge this gap, we must return to the philosophical roots of knowledge.
In classical epistemology, knowledge is often defined as “justified true belief.” One must not only hold a piece of information but must have the justification to support it and the conviction that it is true. AI provides the “belief” (the output) but offers zero justification and possesses no concept of truth. This is where the human must step back in.
A New Epistemology: Justification as the Core Competency
A revamped epistemology for the AI age would argue that “knowing” is the ability to audit, verify, and contextualize information. In this model, the classroom shouldn’t be a place where students learn how to find an answer. The AI will do that. Instead, it should be a place where they are challenged on why they know the answer to be valid. We must shift from “Search-and-Synthesize” to “Verify-and-Vouch.”
If a student uses AI to generate a legal brief or a scientific hypothesis, the “learning” has been outsourced. The “knowing,” however, only occurs when that student can stand before a peer group and the instructor, whether in a physical classroom or a digital breakout room, and defend the logic, identify the hallucinations, and vouch for the ethical implications of the claim. Knowledge in the AI era is not the possession of information; it is the sovereignty over it.
Digital Presence: The Laboratory for Verification
We are seeing a shift back to physical presence in the academy right now, an attempt to combat the potential for the digital to be exploited to, ostensibly, “cheat.” An extreme (and, I would argue, ill-advised) example of this is the return to “blue books,” but it also manifests as arguments to return to the physical classroom. Crucially, this shift toward “knowing” does not necessitate a retreat from online education. On the contrary, digital environments are uniquely positioned to facilitate these new epistemic audits. The move away from the “static document” (which AI can ghostwrite) and toward synchronous and asynchronous “defense” models bridges the gap between digital efficiency and human accountability.
Online learning is the ideal laboratory for this transition. Through asynchronous video justifications (where a student must verbally walk through the “why” behind an AI-assisted draft), real-time virtual breakout sessions for peer auditing, and oral exams via Zoom or another video platform, we can scale the defense of knowledge in ways the traditional lecture hall cannot. This isn’t a return to the ivory tower; it is the leveraging of digital presence to ensure that behind every prompt is a human mind capable of standing by the output. Will students find ways around this? Certainly, as they have for centuries. But this doesn’t mean we need to give up on the many affordances of digital instruction and learning.
Moving Toward “Epistemic Agency”
What does this look like in practice? It requires shifting our curricula from content delivery to what philosophers call epistemic agency.
From Synthesis to Discernment: We must stop rewarding students for merely synthesizing existing ideas, a task AI performs flawlessly. Instead, we should reward “epistemic friction”: the ability to identify where the AI’s logic breaks down or where its training data reflects a biased or limited worldview.
The Modern “Viva Voce”: To ensure a student knows rather than merely learned (via a prompt), we must return to more interactive forms of assessment when possible. The “knowing” is revealed in the heat of a “live” defense or justification, whether physical or virtual.
Relational, Contextual Knowledge: AI “knows” statistically; humans know relationally. We understand, for example, how a concept in biology feels when applied to a specific community’s health. A new epistemology would prioritize these human-centric “ways of knowing” that a large language model cannot simulate.
The Existential Shift
There are a number of arguments to be had for returning to the ancient Greek philosophers to better understand our relation to AI today, not the least of which is the need for dialogic engagement with AI à la Socrates, Plato, and Xenophon. But that is a subject for another essay. For now, epistemology offers a way to think about reframing the aims of higher education, which is currently in a defensive crouch, trying to “detect” AI or “incorporate” it into the learning process. This is the wrong goal. We should not be trying to save the old way of learning; we should be defining the new way of knowing.
The “Learner” of the 20th century was a vessel to be filled. The “Knower” of the 21st century must be a judge, an auditor, and, perhaps most importantly and radically, a philosopher. If we fail to make this shift, we risk turning our universities into expensive “prompt engineering” schools. But if we embrace a new epistemology—one that values the human capacity for justification and truth over the machine’s capacity for synthesis and mimicry—we might find that the AI age is actually the most intellectually rigorous era in the history of the academy.
Open Education, AI, and the Revival of the Edupunk
February 20, 2026
Last October, I attended dynamic, invigorating back-to-back conferences: the annual meeting of the WICHE Cooperative for Educational Technologies (WCET) and the 2025 Open Education Conference. The mile-high air (both were held in Denver) was thick with the perfume of innovation and the smell of fear of obsolescence, crackling with the buzz of Artificial Intelligence, as I assume most higher ed conferences are these days. Administrators, their eyes gleaming with the promise of efficiency, intoned about AI’s transformative power, while ed-tech evangelists trumpeted a new dawn. Others loudly pronounced their mistrust of all things AI, and some wearily (and understandably) lamented a perceived new iteration of initiative fatigue.
As I left the cacophony of pronouncements and prognostications, an old-fashioned manifesto began brewing in my head and is now pouring onto the page (à la the “mission statement” written by Tom Cruise’s titular character in the film Jerry Maguire).
Reflecting on these events, I find myself drawn to the shadows, to the soft hum of servers in forgotten corners, to the late-night keyboard clacks of the unsung heroes of educational transformation. These aren’t the “thought leaders” whose pronouncements filled the ballrooms and meeting rooms – both real and virtual – and continue in our LinkedIn feeds. These are the edupunks, the open education rebels, the digital troubadours who are not just talking about change, they’re forging it, byte by defiant, optimistic byte. These are the Sex Pistols, Offspring, Rage Against the Machine, or, more recently, Turnstile(s) of higher education.
The intersection of AI, open education – a movement that advocates for the free and open access to knowledge and educational resources – and the edupunk isn’t some neat Venn diagram; it’s a volatile, but potentially productive, collision. Mainstream AI in education, as currently envisioned, often seeks to optimize, to personalize, to deliver content more efficiently. It’s about scaling the existing model, making the factory assembly line of knowledge production run smoother. But what if the factory itself is the problem? What if the very structures it aims to optimize are those that perpetuate exclusion, debt, and the commodification of learning?
Enter (or re-enter) the edupunk. Maybe it’s time to revive this movement: a spirit of DIY, of decentralization, of radical self-reliance in the pursuit of knowledge. A belief that knowledge belongs to everyone and shouldn’t be controlled and co-opted by and for the privileged few. Edupunks were building wikis before Wikipedia became a household name, creating open-source learning platforms when WebCT/Blackboard was king, and advocating for Open Educational Resources (OER) long before “affordability” became a buzzword in legislative hearings. They understood, intuitively, that the power to learn shouldn’t be gated by exorbitant tuition fees or proprietary software licenses. And it’s here, in the spirit of the edupunk, that AI finds its most potentially potent ally.
Consider the quiet revolution brewing in the realm of AI-powered open courseware. We’re not talking about a corporate LMS with an AI chatbot grafted on. We’re talking about tools built by a loose collective of developers and educators, often operating on shoestring budgets and fueled by caffeine, who are using open-source large language models (LLMs) to create truly adaptive, responsive learning environments. Imagine an LLM not trained on proprietary textbooks, but on the vast, freely available corpus of open research, public domain literature, and community-contributed OER. This isn’t about an AI deciding what you should learn; it’s about an AI empowering you to learn how you want to learn, guiding you through complex topics with personalized examples and explanations drawn from the global commons of knowledge. And it’s about an AI not only building personalized paths, but also personalized destinations.
I think of someone like Dr. Keith Baessler, a chemistry instructor at Suffolk County Community College, who, as part of the national AI for Learning Network (AI4LN), co-piloted a new openly licensed course for faculty on teaching with AI. His participation in the course led him to create and test an AI-driven ChemBot tutor to support students taking introductory chemistry courses. It can, for example, pose relevant questions from course content to help students study for exams.
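To make the mechanics concrete, here is a minimal sketch of how a ChemBot-style study tutor might be wired together. To be clear, this is not Dr. Baessler’s actual implementation: the call_llm helper, the prompt wording, and the file name in the usage note are all hypothetical stand-ins for whichever model endpoint and course materials an instructor chooses.

```python
# Hypothetical sketch of a ChemBot-style study tutor; NOT the actual ChemBot.
# call_llm() is a stand-in for any model API, open-source or commercial.

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call (e.g., a locally hosted open-source LLM)."""
    return "[model output would appear here]"  # replace with a real API call

def make_study_questions(course_text: str, n_questions: int = 3) -> str:
    """Ask the model to pose exam-style questions grounded only in course content."""
    prompt = (
        "You are a chemistry study tutor. Using ONLY the course material below, "
        f"write {n_questions} exam-style questions with brief answer keys.\n\n"
        f"COURSE MATERIAL:\n{course_text}"
    )
    return call_llm(prompt)

# Usage (hypothetical file name): feed in an instructor-provided excerpt so the
# questions stay anchored to what was actually taught, not the open web.
# print(make_study_questions(open("unit3_stoichiometry.txt").read()))
```

The design point worth noticing is the grounding: the prompt restricts the model to instructor-provided material, which is what keeps a tutor like this aligned with the course rather than with whatever its training data happens to contain.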
Then there’s the often overlooked brilliance of projects like the Internet Archive, a non-profit library of millions of free texts, movies, software, music, websites, and more. This isn’t some Silicon Valley pipe dream; it’s a real, explicit rejection of the subscription-based, pay-to-play model that has characterized academic publishing and digital resources for decades. And it harnesses AI for tasks such as metadata extraction.
Or OER Commons, the open digital public library launched in 2007 by visionary edupunk (even if she didn’t call herself that) Dr. Lisa Petrides, who, like me, found traditional academia dissatisfying. Unlike me (I became an independent consultant), she founded the Institute for the Study of Knowledge Management in Education, the parent organization of OER Commons. And they’re now exploring AI integration with OER through their AI & OER Community Hub and have published their Guiding Principles for Responsible AI in the World of “Open.”
With AI as an indexing and retrieval engine, imagine a world where learners have instant access to a wealth of relevant articles, historical documents, and publicly available datasets, all intelligently summarized, contextualized, and presented in a way that is immediately relevant to their inquiry!
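As a rough, self-contained illustration of that “indexing and retrieval engine” idea, the sketch below ranks a tiny invented stand-in for an open corpus against a learner’s question using simple TF-IDF weighting. A production system would swap in vector embeddings and an LLM layer to do the summarizing and contextualizing; the document names and texts here are placeholders, not real OER entries.

```python
import math
from collections import Counter

# Toy "open corpus": invented placeholders for openly licensed documents.
CORPUS = {
    "intro_photosynthesis (OER)": "plants convert light energy into chemical energy using chlorophyll",
    "french_revolution (public domain)": "the french revolution overturned the monarchy and reshaped europe",
    "sampling_distributions (OER)": "a sampling distribution describes the variability of a statistic across samples",
}

def tokenize(text: str) -> list[str]:
    return text.lower().split()

def tf_idf_vectors(docs: dict[str, str]) -> dict[str, dict[str, float]]:
    """Build simple TF-IDF weights; a real system would use embeddings instead."""
    n = len(docs)
    tokenized = {name: tokenize(text) for name, text in docs.items()}
    doc_freq = Counter()
    for tokens in tokenized.values():
        doc_freq.update(set(tokens))  # how many documents contain each term
    return {
        name: {
            term: (count / len(tokens)) * math.log(n / doc_freq[term])
            for term, count in Counter(tokens).items()
        }
        for name, tokens in tokenized.items()
    }

def rank(query: str, docs: dict[str, str]) -> list[tuple[str, float]]:
    """Score each document by the summed TF-IDF weight of the query terms it contains."""
    vectors = tf_idf_vectors(docs)
    query_terms = set(tokenize(query))
    scores = {
        name: sum(w for term, w in vec.items() if term in query_terms)
        for name, vec in vectors.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# A learner's question surfaces the most relevant open resources first;
# an LLM layer would then summarize and contextualize the top hits.
print(rank("how do plants use light energy", CORPUS))
```

Even at this toy scale, the shape of the system is visible: the corpus is open, the index is inspectable, and nothing about the ranking is hidden behind a proprietary wall, which is exactly the edupunk point.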
At the risk of seeming arrogant, I would like to hold up a work of my own as an example of a punk aesthetic melded to open education. My belief that human-caused climate change is a, if not the, defining existential crisis in our world today led me to write Telling Stories to Save the World: Climate Change in Narrative Film. I wrote it with no academic affiliation or support. I received no payment for writing it, nor do I receive any royalties. It is openly licensed and freely available to all, as I believe knowledge that will help us solve this crisis should be. I’m currently using AI tools to update and edit the next edition of the text. I also co-developed and co-facilitate, for the California Community Colleges/College of the Canyons, the openly licensed course Navigating the Future: Open Education with Generative AI.
This is the edupunk and open education vision: where knowledge isn’t merely accessible, but actionable for everyone – a vision that, when augmented by AI, becomes amplified.
The current narrative around AI in education often focuses on its potential for surveillance, automated grading, operational efficiencies, and streamlining administrative tasks. And while these applications (arguably) have their place, they often reinforce existing power structures within higher education. But what if we flipped the script? What if AI was primarily used to liberate learners and workers, to dismantle information silos, to empower independent scholarship and critical thinking?
This is where the argument becomes truly provocative. The thought leaders and institutional gatekeepers often fear the disruptive potential of true open education, particularly when supercharged by AI. Why? Because it threatens their very perch. If knowledge becomes truly free and accessible, if AI can provide personalized paths and destinations that rival or even surpass what a traditional institution offers, what then is the value proposition of the multi-billion dollar university complex?
This is not to say colleges and universities will become obsolete, but their role will undoubtedly have to transform. They will be forced to move beyond being primarily purveyors of content and credentialing, and instead become facilitators of deep learning, incubators of innovation, and communities of inquiry. Some already have and do, but the educational ethos behind the open education and edupunk movements will need to become ubiquitous.
The perils are real, of course. Uncritical adoption of AI, even in open contexts, can perpetuate biases embedded in training data and amplify misinformation if not carefully curated. This is why critical engagement, “reading against the grain,” and – perhaps most importantly – AI literacy, are more vital than ever. The fight for open, ethical AI in education is not just a technical challenge; it’s a deeply philosophical and pedagogical one. It demands transparency, accountability, and a constant questioning of who benefits and who is disempowered by these powerful new tools. And it also requires ongoing knowledge/skill acquisition and learning, the exact forte of higher ed.
The productive collision of Open Education, AI, and the Edupunk necessitates a form of AI literacy that is not just about upskilling for the workforce but also about “wising up” to the machine. It is a literacy of resistance and reclamation. We must move beyond the superficial “how-to” of prompt engineering to the deeper “why” and “what for” of algorithmic logic and machine-augmented creation.
A true edupunk approach to AI literacy equips learners with the critical discernment to spot the hallucinations and the biases and to know when AI use is appropriate and effective, and when it is not. Institutions of higher education have, in general, been teaching critical thinking for centuries, making them well-poised to design and deliver effective edupunk AI literacy. This literacy would encourage AI users to pop the hood on these “black boxes” and understand that these tools are not neutral arbiters of truth, but imperfect mirrors of our own collective content. It demands that we treat these tools not as oracles, but as raw materials to be hacked, remixed, and bent to our own human will. If we don’t understand how these machines work, we lose the ability to rage against them.
So, as the AI hype cycle continues its dizzying ascent, let us turn our gaze from the polished presentations and the corporate press releases. Let us instead seek out the unsung developers building open-source knowledge systems and repositories, the educators quietly experimenting with LLMs to dismantle barriers to learning, the Internet Archives and Dr. Baesslers and Dr. Petrides of the world. These are the courageous rebels who are not just talking about change, but painstakingly, provocatively, and often quietly building a future where education is truly open, truly equitable, and truly powered by the collective wisdom of humanity, amplified by the silent hum of intelligent, tireless machines and driven by an AI literacy of resistance and reclamation. Their work, often far from the spotlight, is the real revolution. And it should matter to all of us because it offers a glimpse of a future where knowledge is a shared inheritance, not a commodity, and where the promise of education is finally and truly within reach for everyone.

“Edupunk” generated using Gemini Pro with the prompt “Create an image for the term ‘EduPunk.’”