Bryan Alexander: Instructors after AI

Bryan Alexander is an internationally known futurist, researcher, writer, speaker, consultant, and teacher working in the field of how technology transforms education. Here, Bryan shares his insights into the future of artificial intelligence, and what it would mean for the teaching profession if the technology becomes truly successful. Read on to discover what college teachers will do in a post-LLM world, and read Bryan’s original blog post here.

How will the current wave of artificial intelligence change college teaching? I’ve been thinking about AI and education for years, and it has all come into sharp focus lately, due to the advent of large language model (LLM) bots like ChatGPT (previously). I’ve posed the question to academics and other people interested in higher education, and many answer by wondering what a post-AI instructor’s job looks like.

One extreme view holds that AI is failing now and will collapse soon, so our task is to protect higher education from damage.  Opposed to this is the fear that we might not need human instructors* at all, if these new technologies keep developing.

Today I’d like to chart a middle path and wonder what happens to the job of teaching in higher education if AI continues and improves.  What happens if generative AI gets better – not to the level of artificial general intelligence, but good enough to make content that large numbers of people value, somewhere between a calculator and a mad scientist’s assistant?  What does an instructor do in a post-LLM world?

Here’s the list I came up with in conversation with many people over the past month: students, Patreon supporters, Facebook pals, friends in person, and strangers online.  It’s not in any particular order, and there’s a lot of overlap between points.

  1. Teaching prompt engineering (showing how to use the tech: the best ways to write a prompt, how to iterate on results, how to go beyond simple content generation). This includes teaching learners how to interact with AI so that it teaches them best, apart from the human instructor. (A minimal code sketch of this iterative workflow appears after the list.)
  2. Instilling a critical stance about technology.  This should certainly include criticizing AI, which can take place in various ways and through different disciplines – i.e., science and technology studies, rhetoric and composition, computer science, etc.
  3. Offering students emotional support, both in the class context and in their lives.
  4. Facilitating group work.  Right now Bard, Bing, etc. are good at interacting with a single user, but don’t seem to have much capacity for wrangling clusters of students.
  5. Guiding students through a curriculum, or answering the “what to learn now?” question. Teachers do this within classes, as well as through advising.
  6. Related to #5: teaching students what they need to know and aren’t interested in.  This might be according to an instructor’s views, or what a larger authority (state government, community) prefers.  For one example, it could take the form of encouraging an arts-loving student to learn math.
  7. Structuring learning over the long haul. It’s easy now to learn something small on demand (what’s the French word for “cat”? what happens inside a biological cell?), but people have a harder time persisting in learning over weeks and years.
  8. Protecting students in their learning process. This can be defense against political attacks (example: studying evolution in a creationist context) as well as in terms of social, interpersonal issues.  (Related to #3 above)
  9. Nurturing curiosity.  Generative AI can satisfy one’s curiosity, but how to spark and support it?
  10. Teaching critical thinking. It’s not easy to find consensus among educators about what that means or how we do it, and I think we overstate how much we actually do this, but it’s something we tend to value highly. (#2 above is part of this.)
  11. Teaching, inspiring, and supporting creativity. There are other sources for this, but teachers can be good at helping students exercise and explore their creative sides.
  12. Modeling.  Former student and current teacher Justin Kirkes thoughtfully explained on Facebook: “Modeling vulnerability in the learning process, excitement in exploration, curiosity in the unknown. These aren’t behaviors that are always innate in learners, but can be called forth.”
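To make point 1 concrete, here is a minimal sketch of the prompt-then-refine loop an instructor might demonstrate. It assumes the official OpenAI Python client (the openai package) and an API key in the environment; the model name and the prompts are illustrative assumptions, not part of the original post.

```python
# A minimal sketch of iterative prompting (point 1 above).
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment;
# the model name and prompts are illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt, history=None):
    """Send a prompt, carrying prior turns so follow-ups can refine the answer."""
    history = list(history or [])
    history.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer, history


# First attempt: a bare prompt usually returns a generic answer.
draft, history = ask("Explain photosynthesis.")

# Iteration: add audience, length, and format constraints, then ask for a revision.
revised, history = ask(
    "Rewrite that for first-year biology students in under 150 words, "
    "with one concrete example.",
    history,
)
print(revised)
```

The design point is simply that the conversation history is carried forward, so each follow-up prompt refines the previous answer rather than starting over; that habit of iterating is what point 1 asks instructors to teach.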


It’s a messy list, with redundancies and parts I’m not sure about. And assembling it raises questions.  How much of this can be automated, either now or in the likely near future? How are we likely to value (culturally, financially, professionally) such counter-AI skills?

What do you think of this outline of the college teaching profession post-generative AI?

*I’m using the term “instructor” here to emphasize the college or university faculty member’s teaching mission.




Provided for OEB Global 2023 by Bryan Alexander.
