Publications and Podcasts

This page includes a selection of our recent publications and podcasts.


Hosted by Daniel Shea, with Mary Kalantzis, Bill Cope and guests.


Our recent research and writing has examined the challenges of AI in education. Here is a selection of our publications:

Paper 1: Literacy in the Time of Artificial Intelligence

The latest mutation of Artificial Intelligence, “Generative AI,” is more than anything a technology of writing. Generative AI is a machine that can write. In a world-historical frame, the significance of this cannot be overstated. It is a technology in which the unnatural language of code tangles with the natural language of everyday life. Its form of writing, moreover, is multimodal, able not only to write text as conventionally understood, but to “read” images by matching textual labels and “write” images from textual prompts. Within the scope of this peculiarly machinic writing are mathematics, actionable software procedure, and algorithm. This paper explores the consequences of Generative AI for literacy teaching and learning. In its first part, we speak theoretically and historically, suggesting that this development is perhaps as momentous for society and education as Bi Sheng’s invention of moveable type and Gutenberg’s printing press—and in its peculiar ways just as problematic. In the second part, we go on to propose that literacy in the time of AI requires a new way to speak about itself, a revised “grammar” of sorts. In a third part, we discuss an application we have developed that puts Generative AI to work in support of literacy and learning. We end with some broad-brushstroke implications for education.

[He] allowed himself to be swayed by his conviction that human beings are not born once and for all on the day their mothers give birth to them, but that life obliges them over and over again to give birth to themselves. Gabriel García Márquez, Love in the Time of Cholera.

Paper 2: A Multimodal Grammar of Artificial Intelligence: Measuring the Gains and Losses in Generative AI

This paper analyzes the scope of Artificial Intelligence (AI) from the perspective of a multimodal grammar. Its focal point is Generative AI, a technology that puts so-called Large Language Models to work. The first part of the paper analyzes Generative AI, based as it is on the statistical probability of one token (a word or part of a word) following another. If the relation of tokens is meaningful, this is circumstantial and no more, because its mechanisms of statistical analysis eschew any theory of meaning. This is the case not only for the written text that Generative AI leverages, but by extension for the image and multimodal forms of meaning that it can generate. The AI can only work with non-textual forms of meaning after applying language labels, and to that extent it is captive not only to the limits of probabilistic statistics but to the limits of written language as well. While acknowledging gains arising from the brute statistical power of Generative AI, in its second part the paper goes on to map what is lost in its statistical and text-bound approaches to multimodal meaning-making. Our measure of these gains and losses is guided by the concept of grammar, defined here as a theory of the elemental patterns of meaning in the world—not just written text and speech, but also image, space, object, body, and sound. Ironically, a good deal of what is lost by Generative AI is computable. The third and final part of the paper briefly discusses educational applications of Generative AI. Given both its power and intrinsic limitations, we have been experimenting with the application of Generative AI in educational settings and the ways it might be put to pedagogical use. How does a grammatical analysis help us to identify the scope of worthwhile application? Finally, if more of human experience is computable than can be captured in text-bound AI, how might it be possible at the level of code to create a synthesis in which grammatical and multimodal approaches complement Generative AI?

Paper 3: On Cyber-Social Learning: A Critique of Artificial Intelligence in Education

“Artificial Intelligence” is an idea that not only promises too much; it elides the irreducible differences between human intelligence and the electronic manipulation of binary notation. This chapter does three things. 1) In broad historical and philosophical brushstrokes, it critiques the idea of “artificial intelligence.” We discuss the development of computers in a long historical perspective, as well as recent developments in computing capacity, focusing particularly on Generative AI. 2) It examines human-computer interaction as a relationship between two fundamentally different kinds of “intelligence”—so different, in fact, that the words “human” and “artificial” barely warrant the right to describe the same thing. Computers can indeed automate a good deal of cognitive and communicative work. Like so many other technologies, they radically extend natural human capacities. But they do so in unnatural ways. 3) The chapter proposes an alternative orientation to understanding and using AI that we call “cyber-social learning.” This stands in contrast to the idea that #AI, reduced to an acronym and a hashtag, can be a replicant of human intelligence. We ask the question: what does this mean for the social project of education and the role of computers in learning? A concluding section proposes a program of action in a “manifesto for cyber-social learning.”

  • Read the full text at this link.
  • Forthcoming chapter: Cope, Bill and Mary Kalantzis, “On Cyber-Social Learning: A Critique of Artificial Intelligence in Education,” in Trust and Inclusion in AI-Mediated Education: Where Human Learning Meets Learning Machines, edited by Theodora Kourkoulou, Anastasia O. Tzirides, Bill Cope and Mary Kalantzis, Cham, Switzerland: Springer, 2023.