3. Postdigital dialogues for emancipation, peace, solidarity and democracy
Petar Jandrić
Abstract
This chapter explores postdigital transformations of dialogue. It portrays postdigital dialogue as the proverbial floating iceberg. The visible part of the iceberg includes new technological affordances, sociomaterial assemblages between humans and Artificial Intelligences, potentials for human-computer and human-machine collaboration, etc. The invisible part of the iceberg includes the ideology behind technology’s making and use, power relationships between the makers and users of technology, changing relationships between human and non-human entities, questions pertaining to human nature, environmental impacts of technology, changes in political economy, etc. Most uses of Artificial Intelligences support the powers that be by focusing on the visible parts of the iceberg, and it is crucial to develop critical scholarship that reaches the invisible parts. Postdigital dialogue is a socially engaged practice; a struggle for a better world; an important way of taking our destiny into our own hands and creating the world that we would like to live in.
Keywords: postdigital, dialogue, artificial intelligence, fake news, education, emancipation, peace, solidarity.
Résumé
Ce chapitre explore les transformations post-numériques du dialogue. Il dépeint le dialogue post-numérique comme l’iceberg flottant proverbial. La partie visible de l’iceberg comprend les nouvelles possibilités technologiques, les assemblages socio-matériels entre les humains et les intelligences artificielles, les possibilités de collaboration entre l’humain et l’ordinateur et entre l’humain et la machine, etc. La partie invisible de l’iceberg comprend l’idéologie qui sous-tend la fabrication et l’utilisation de la technologie, les relations de pouvoir entre les fabricant·e·s et les utilisat·eur·rice·s de la technologie, l’évolution des relations entre les entités humaines et non humaines, les questions relatives à la nature humaine, les incidences de la technologie sur l’environnement, les changements dans l’économie politique, etc. La plupart des utilisations des intelligences artificielles soutiennent les pouvoirs en place en se concentrant sur les parties visibles de l’iceberg, et il est crucial de développer une recherche critique qui atteigne les parties invisibles. Le dialogue post-numérique est une pratique socialement engagée, une lutte pour un monde meilleur, un moyen important de prendre notre destin en main et de créer le monde dans lequel nous aimerions vivre.
Mots-clés : post-numérique, dialogue, intelligence artificielle, fausses nouvelles, éducation, émancipation, paix, solidarité.
Resumen
Este capítulo explora las transformaciones del diálogo posdigital. Se retrata el diálogo posdigital como el proverbial iceberg flotante. La parte visible del iceberg incluye nuevas posibilidades tecnológicas, ensamblajes sociomateriales entre humanos y las inteligencias artificiales, potenciales para la colaboración entre humanos y computadoras o máquinas, etc. La parte invisible del iceberg incluye la ideología detrás de la creación y el uso de la tecnología, las relaciones de poder entre los creadores y usuarios de la tecnología, las relaciones cambiantes entre entidades humanas y no humanas, preguntas sobre la naturaleza humana, los impactos ambientales de la tecnología, los cambios en la economía política, etc. La mayoría de los usos de las inteligencias artificiales respaldan a los poderes establecidos al centrarse en las partes visibles del iceberg, y es crucial desarrollar una erudición crítica que alcance las partes invisibles. El diálogo posdigital es una práctica socialmente comprometida; una lucha por un mundo mejor; una forma importante de tomar nuestro destino en nuestras propias manos y crear el mundo en el que nos gustaría vivir.
Palabras clave: posdigital, diálogo, inteligencia artificial, noticias falsas, educación, emancipación, paz, solidaridad.
Introduction
Critical pedagogy is a tradition of dialogue (McLaren & Jandrić, 2020), yet dialogues of today bear little resemblance to dialogues of yesterday. Carried out in a postdigital mashup of human beings and information technologies, postdigital dialogues have some distinct features and pose new challenges (Jandrić et al., 2019). Crucially, postdigital dialogues are a mixed bag of interactions with human and non-human actors; the two are often hard to distinguish. This gives rise to a plethora of questions in diverse areas, including philosophy (Fuller & Jandrić, 2019), media studies (MacKenzie et al., 2021), and teaching and learning (Bozkurt et al., 2023; Peters et al., 2023). Who is responsible for the content of these dialogues? How are these dialogues being shaped? How can we orient our postdigital dialogues across humans and machines towards emancipation, peace, and solidarity? What is the linkage to a meaningful form of democracy?
Reimagining dialogue for a postdigital age
In an early attempt to systematically approach the humanities and social sciences in a postdigital way, Jandrić et al. (2018, p. 895) wrote: “The postdigital is hard to define; messy; unpredictable; digital and analog; technological and non-technological; biological and informational. The postdigital is both a rupture in our existing theories and their continuation.” Postdigital dialogue is a clear case of such a continuation and a rupture. On the side of a continuation, dialogue is still indispensable in knowledge making and emancipation. On the side of a rupture, dialogue has undergone fewer changes in the two millennia between Plato and Freire than in the two short decades between Freire’s death and the arrival of ChatGPT. To maintain the critical emancipatory edge, therefore, traditional approaches to dialogue are due for an urgent postdigital reimagining. Importantly, the wheel of progress does not stop; our current reimagining will soon be replaced by even newer deliberations. However, critical pedagogues always live in the world and with the world (Freire, 1972); in that sense, ephemerality is the critical pedagogue’s destiny.
Critical pedagogy has many faces. It is a set of classroom practices oriented to emancipation and justice, a philosophy that sees all people as deserving of equal opportunity, a field of social activism aimed at creating a (more) equal world, and much more. In the context of “the twenty-first-century techno-social transformations”, traditional mid- and late twentieth-century critical pedagogy has faced urgent new challenges and has increasingly become “ripe for reinvention” (Jandrić & Hayes, 2022, p. 321).
As a part of these efforts, the postdigital community has recently started reimagining dialogue in many different contexts. Jandrić et al. (2019, p. 180) wrote a multi-authored “experimental postdigital dialogue on postdigital dialogue”, hoping that their article “might serve as a practical and theoretical starting point for retooling our educational and research toolbox to adapt to and shape our postdigital reality”. Postdigital dialogue, as conceived in that article, “provides a space of and for learning, struggle and hope”. Its authors
are tentatively confident that this article produces more knowledge than the arithmetic sum of its constituent parts. Interstitial spaces between authors’ research interests offer important insights into the breadth and depth of the postdigital challenge; overlaps and reoccurring themes are good indicators of pressing issues raised by and through postdigital dialogue. (Jandrić et al., 2019, p. 163)
This is one of many examples of the genre of collective writing. The concept has a long history, yet its postdigital transformations have recently attracted considerable attention. Between 2016 and today, the community associated with Michael Peters and the Editors’ Collective [1] has published more than 100 collectively written articles. A representative example of the genre is the four-article series “Teaching in The Age of Covid-19” (Jandrić et al., 2020, 2021a, b, 2022), in which more than 80 authors from 19 countries (and nearly all continents) shared their experiences of teaching under sudden lockdown conditions. This example of democratic deliberative dialogue, where everyone with Internet access was invited through a public call and where all experiences were shared without polarization and conflict, has shown that postdigital approaches to dialogue may lead us closer to critically engaged democracy. While this approach still suffers from certain forms of hegemony (most obviously, it is available only to those with Internet access and sufficient writing skills to participate in the dialogue), postdigital dialogue does seem like an important step towards democracy.
Within a few short years, the body of collective articles has grown to a point that required some systematization. In 2023, a summary of these efforts, titled “Collective Writing: The Continuous Struggle for Meaning-Making” (Jandrić et al., 2023), responded to that need. The article underscores the many advantages of collective writing. It also shows that there is a lot of work left, especially in the areas of “new AI publishing technologies, automatic writing and editing, and data-driven autonomous science”. With the recent popularization of ChatGPT and Generative Artificial Intelligence (GAI), collective writing enters a whole new phase of dialogue between human beings and machines. When these technologies are used not just to format texts but also to create ideas, they are upgraded from ‘tools for writing’ to ‘partners in dialogue’. Postdigital theorists have written about this challenge for a long time (e.g., Jandrić et al., 2018), yet theory is very different from practice. In this context, an age-old open question gains new salience: What is the relationship between human beings and technologies in postdigital dialogue?
This strongly relates to existing scholarship on openness and collective intelligences. Collective intelligence is based on a simple idea of synergy: for instance, it could be argued that five people, sitting in the same room, will come up with more and better ideas than the same five people sitting separately in their own rooms. (Sometimes, this idea is simplistically represented as 1+1>2.) Things get messy when those five people do not sit in the same room and when their ideas are communicated through technology that is not a passive tool but an active agent. Associated challenges are usually addressed by various forms of (postdigital) critical media literacy (see Jandrić, 2019).
Collective intelligence can also be based on collaboration between humans and non-humans. Some years ago, Garry Kasparov (2017) pioneered centaur chess, a version of the game in which humans play together with machines. Kasparov showed that:
[t]he strongest player in centaur chess, by far, is the human being who plays with the assistance of the computer; even an average human player, supported by an average computer, beats the best human player or the best computer. The second place belongs to the computer, and the third place belongs to the human. (Jandrić, 2023b, p. 2)
Drawing on Kasparov’s work, Erik Brynjolfsson and Andrew McAfee (2011) argue in Race Against the Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy that the race against the machine should be replaced by the race with the machine.
Recently, therefore, synergies between human and non-human intelligences have started to attract renewed attention. Two fresh-off-the-press examples are Chitnarong Sirisathitkul’s “Slow Writing with ChatGPT: Turning the Hype into a Right Way Forward” (2023) and Alexios Brailas’ “In Dialogue with The Machine: Prolegomena to A Posthuman Nomadic Intelligence” (2023), where the authors explore “writing with the assistance of LLM-powered chatbots loosely based on Ulmer’s Writing Slow Ontology” (Sirisathitkul, 2023) and “a duo ethnography between a human and the ‘machine’” (Brailas, 2023).
As a downside, the authors point to ‘standard’ AI problems, such as data and algorithm bias. Data bias is caused by the fact that these tools are based on large databases that favour the writing of people from the Global North, predominantly White authors, and the English language (just to mention a few main biases). Algorithm bias arises from the fact that most of these technologies are developed, again, by people from the Global North, predominantly White, and in the English language. This implies, of course, that the technology itself is biased; and the use of such technology poses a big problem for democratic dialogues and processes (MacKenzie et al., 2021).
On a more positive note, Sirisathitkul (2023) and Brailas (2023) confirm Kasparov’s (2017) and Brynjolfsson and McAfee’s (2011) claims about the superiority of centaur teams and illustrate that thinking and writing with large language models such as ChatGPT can improve both authors’ ideas and their writing style. These technologies do more than help people do their tasks more efficiently; ‘[t]he sociomaterialist symmetry between people and machines, while subject to a lot of debate, opens up new venues for creative knowledge development’ (Jandrić, 2020). Shortcomings notwithstanding, we have entered the age of the race with the machine (Brynjolfsson & McAfee, 2011), and postdigital dialogues need to respond to that challenge.
The posthuman challenge
You are walking down the street of an unknown city, dragging your luggage and following Google’s instructions to your hotel. Suddenly, a stranger asks: What are you looking for? After exchanging a few pleasantries, she advises you to continue down the street and take the second turn to the right. You look at your phone, and the map advises you to take the second turn to the left. Whom will you trust: the human or the algorithm?
Humans can be wrong for various reasons: a well-intended stranger may remember an old hotel entrance that has moved due to reconstruction, while an ill-intended stranger might send you to an empty street where you will be robbed by her accomplices. Google Maps can also be wrong: it can lead you to the car entrance, which is not on the same street as the pedestrian entrance, or, in extreme cases, it can point to a street with the same name in a different city. Nevertheless, people tend to place different levels of trust in humans and machines. In the context of simple retrieval of information (such as checking the names of all basketball players on Croatia’s team), we tend to believe computers more than humans. In the context of complex questions (such as: what is love?), we tend to believe humans more than computers. The agency and power of human and non-human entities are inextricably linked to the question: Who or what, and under which circumstances, can be considered (equal to) a human? (Here, we could also consider evolving forms of culture that can change the sense, add emotion, or alter the strength and level of affirmation, but those important questions are too complex to cover in this book chapter.)
This question cannot be answered simply by looking at what the agents are made of (flesh or microchips). Not all people have full agency (for instance, persons with severe cognitive impairments or young children), and not all machines are mere calculators. Different levels of human agency are well explored in human rights literature, yet the emergence of new computing machines continually opens new questions. A few years before the ChatGPT hype, Lesley Gourlay (2021, p. 49) interviewed her laptop to “bring focus to a specific set of academic writing practices at a micro level”. How, if at all, is that different from Chitnarong Sirisathitkul’s (2023) and Alexios Brailas’ (2023) conversations with ChatGPT? What distinguishes the text processor used in writing this article from a conversational general artificial intelligence chatbot such as ChatGPT? And, indeed, what is to be done when these human-technology relationships become emotional and people, for example, marry robots (Haas, 2017)?
According to Steve Fuller, “we shouldn’t be sentimental about these questions” (Fuller, in Fuller & Jandrić, 2019, p. 207). For centuries, and even today in some countries, some classes of human beings have not been recognized as fully human: slaves, women, people of colour… For instance, slaves in the West were “not considered human, but just working tools that speak human language, a type of commodity for exchange between slave owners” (Trang & Quỳnh, 2021, p. 2541). Therefore, continues Fuller,
‘[h]uman’ began – and I believe should remain – as a normative not a descriptive category. It’s really about which beings that the self-described, self-organised ‘humans’ decide to include. So we need to reach agreement about the performance standards that a putative ‘human’ should meet that a ‘non-human’ does not meet. (Fuller in Fuller & Jandrić, 2019, p. 207).
This is easier said than done, as technological development continuously improves machine performance, and older performance criteria such as the Turing test require continuous updating (see Lukaszewicz & Fortuna, 2022). Referring to the latest technological hype (Bozkurt et al., 2023; Peters et al., 2023; Jandrić, 2023a), which performance standards should be applied to ChatGPT?
In some cases, ChatGPT can pass for a human interlocutor according to Turing’s test; by that criterion, therefore, ChatGPT is indeed human. Yet the chatbot’s responses have exhibited some particularly biased opinions, and even made-up sources (so-called ‘AI hallucinations’), so we want to scrutinize its power to make important decisions (see Eubanks, 2018). Or, perhaps more accurately, most people do not mind taking ChatGPT as a human when they use it to create a funny meme; but most people would have serious issues if ChatGPT, disguised as a human, were to influence political elections (see MacKenzie et al., 2021). According to Jandrić and Hayes (2020, p. 292), “[a]n important element of the postdigital challenge lies in sociomaterial reconfigurations of relationships between human beings and technologies”.
One way of circumventing the question of performance standards is to apply the concept of symmetry. This, according to Jones (2018, p. 47), is a sociomaterialist view of relationships between human beings and technologies that “conceptualise[s] knowledge and capacities as being emergent from the webs of interconnections between heterogeneous entities, both human and nonhuman”. Jones (2018, p. 51) argues that “all actors cannot be treated as completely symmetrical for research purposes because of the particular access that we have to accounts of experience from human actors”. Nevertheless, a symmetry between human and non-human actors allows us to analyse their relationships without defining exact performance standards. This symmetry is incomplete and relational: depending on context, ChatGPT can sometimes be considered human and sometimes non-human. Philosophy can tell us only so much; agency and power in human and non-human agents are largely a matter of practice.
The political challenge
Collaboration between humans and non-humans can improve chess performance (Kasparov, 2017) and work performance (Brynjolfsson & McAfee, 2011); in an academic context, it can improve the knowledge produced and the writing style (Jandrić et al., 2019; Brailas, 2023; Sirisathitkul, 2023). Nothing new under the sun: bicycles and cars have improved the human ability to move through space, while chess programs and ChatGPT have improved the human ability to create and present new ideas and knowledge. Bicycles and cars have also introduced negative consequences, such as increases in road accidents and pollution, while ChatGPT has exacerbated problems with plagiarism (Bozkurt et al., 2023; Peters et al., 2023). What is interesting, however, are the social implications of these developments. The social impact of bicycles and cars has been debated extensively for at least a century; when it comes to Artificial Intelligences, especially of the generative large language model ilk, the debate is much younger. And it seems that, just like many times before, hyped expectations of technology are much higher than reality (see Jandrić, 2023a).
Recent reports and studies of automation and Artificial Intelligence, such as Virginia Eubanks’ Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (2018) and ‘Making Sense of the Digital Automation of Education’ (Selwyn et al., 2023), have shown that automation largely reinforces existing social inequalities (and occasionally creates new ones). In ‘Generative AI and the Automating of Academia’, Watermeyer et al. (2023) confirm this conclusion in the context of generative Artificial Intelligences (GAIs) and show that “the digitalisation of higher education through GAI tools no more alleviates than extends the dysfunctions of neoliberal logic and deepens academia’s malaise”. While the authors identify some potentials that “could force academics to confront the extent of their drift from intellectual and pedagogical craft and their anchoring to mundane service functions”, their study shows that academics, at least in mid-2023, use GAIs to cope with existing problems rather than challenge those problems. In other words, academics use GAIs to reinforce, rather than challenge, the neoliberal system of higher education. This is a missed opportunity, and “now is the time to be drawing full attention to the reconfigurations of power that are taking place in the name of automation” (Selwyn et al., 2023, p. 12).
Admittedly, this call is not exactly revolutionary; much ink has already been spilled on describing and analysing complex relationships between automation and power. However, much less attention has been given to opportunities for reconfiguring existing relationships: how can we use automation to make a better world? This question reaches much deeper than using a computer to play better chess or using ChatGPT to write better academic articles. Consequently, it needs to be addressed at a deeper ideological level. What do we want from these technologies, and how can we make our wishes happen?
Obviously, this is a collective decision that requires a lot of inclusive postdigital dialogue and democratic deliberation. In the context of knowledge production, Michael Peters argues that we live in the age of knowledge capitalism and develops its counterpart in the concept of knowledge socialism. “[K]nowledge capitalism focuses on the economics of knowledge, emphasizing human capital development, intellectual property regimes, and efficiency and profit maximization”, while “knowledge socialism shifts emphasis towards recognition that knowledge and its value are ultimately rooted in social relations” (Peters et al., 2012, p. 88). Knowledge socialism runs deeper than political economy or even epistemology; it “refers to a new global collectivist society that is coming online based on communal aspects of digital culture, including sharing, cooperation, collaboration, peer production and collective intelligence” (Peters et al., 2020, p. 2). Knowledge socialism, again, is based on postdigital dialogue and democratic deliberation.
Like all social systems, knowledge socialism also has its ‘ideal citizen’. Knowledge capitalism’s main protagonist is the homo economicus, governed by “controlling assumptions of rationality, individuality and self-interest” (Peters & Jandrić, 2018, p. 82) and
committed to three assumptions that tend to run counter to the collective learning processes that characterize the digital environment. The assumption of individuality is counter posed by collective intelligence … The assumption of rationality is contradicted in a networked environment as the ontological basis is contained in the relations between entities … the assumption of self-interest again tends to be offset or decentred by forms of collective responsibility. (Peters & Jandrić, 2018, pp. 342–343)
Knowledge socialism’s main protagonist, by contrast, is the homo collaborans. Strong individuals do not easily blend into collective intelligences, and self-interest does not mix well with collective responsibility. Homo economicus thrives in knowledge capitalism, while homo collaborans thrives in knowledge socialism. However, knowledge capitalism and knowledge socialism, like homo economicus and homo collaborans, are black-and-white typologies that do not adequately represent reality. Every individual is a mix of homo economicus and homo collaborans, bringing this discussion to the eternal question of human nature and
the old dispute between Darwin’s theory of evolution and Kropotkin’s theory of mutual aid. … [T]he struggle between homo economicus and homo collaborans has always been there, but digital technologies have created a new battlefield and a new opportunity to challenge the traditional order of things. (Peters & Jandrić, 2018, p. 350)
The transition from knowledge capitalism to knowledge socialism comes down to the transition from homo economicus to homo collaborans. This transition is collective (it is based on democratic postdigital dialogue, which should shape both the transition and its goals) and pedagogical (we all need to unlearn toxic individualism and learn how to participate in truly egalitarian democratic deliberation). The next section, therefore, explores the pedagogical challenge of postdigital dialogue.
The pedagogical challenge
For years, I have been criticizing critical pedagogy for its lack of engagement with technology (see McLaren & Jandrić, 2020). Thankfully, the tides have turned, and these days more and more critical work actively engages in the critique of technology and its many uses in different (educational) contexts. However, many of those critiques still fall into traps identified by postdigital (and other) literature (see Jandrić & Knox, 2022, for a good overview).
Today’s technologies are more than tools: they are actors that stand in somewhat symmetrical relationships with human beings. Technologies are also not neutral instruments of our will, at least not in the same sense as, for instance, an extension of the human arm in the form of a hammer. “We shape our tools and thereafter our tools shape us” (Culkin, 1967). While I do appreciate studies that show how to use this or that technology in the service of critical pedagogy, this chapter clearly shows the need for a broader and deeper outlook. A postdigital dialogue with ChatGPT that helps an author write more articles in order to get promoted in neoliberal academia will not make a difference. First, it will not help the author stand out from the crowd, because everyone else has access to the same tool and can use it to the same end. Second, if everyone’s writing becomes quicker, neoliberalism will just raise the bar and demand more articles for promotion [see Hayes (2021) for a detailed elaboration of “Postdigital Perspectives on the McPolicy of Measuring Excellence”]. And third, ChatGPT, based on existing datasets, promotes Western, predominantly male and White values over, for instance, Indigenous ones, thus perpetuating hegemony. Nothing new under the sun; as it currently stands, ChatGPT is a compliant technology that supports the status quo.
Postdigital dialogues for emancipation, peace, and solidarity are linked to fundamental changes in political economy (from knowledge capitalism to knowledge socialism), relationships between human and non-human entities (from hierarchy to symmetry), and human nature (homo economicus vs homo collaborans). They are not about serving (and improving) (knowledge) capitalism; they are about challenging the existing order of things. This does not imply that we should stop exploring technological affordances. However, it means that we should look both into and beyond technological affordances. Postdigital dialogues are, above all, an important part of the struggle for a different world.
Let us explore what this could mean in practice. Returning to studies on dialogue with ChatGPT (e.g., Sirisathitkul, 2023; Brailas, 2023), an orientation to emancipation, peace, and solidarity implies the need to acknowledge the presence of the algorithm and the data used to train it; the legacy of those who designed, produced, and provided the software; the availability and price of the computer; the conditions of workers making the computer; the environmental cost of running the computer; and a myriad of other matters. Probably most importantly, human interlocutors (researchers and readers of the articles) should move beyond the homo economicus mindset aimed at making the most of the conversation for themselves, towards the homo collaborans mindset, which acknowledges that any dialogue between a human and ChatGPT should benefit everyone (directly and indirectly) involved: the human interlocutor, ChatGPT, the environment, workers who produce the technology… And all this, ideally, works towards a more democratic postdigital dialogue, which is a prerequisite for replacing knowledge capitalism with a knowledge socialist alternative.
How can this postdigital dialogue help us transform the currently prevailing false and illusory forms of democracy, or Pygmalion Democracy, into a (more) emancipatory project? As can be seen from the various attempts at regulating Artificial Intelligence, and especially their latest iterations focused on Generative Artificial Intelligence (e.g., Watermeyer et al., 2023), top-down solutions such as introducing new legislation are quick but rather ineffective. By contrast, the transition from knowledge capitalism to knowledge socialism, translated into the transition from homo economicus to homo collaborans, is a slow, bottom-up enterprise. Recognizing that human nature is a mix of homo economicus and homo collaborans, the question of improving democracy translates into the question of improving human beings by nurturing the homo collaborans aspect of our nature.
Postdigital dialogue for emancipation, peace, solidarity, and democracy is therefore a complex and multifaceted concept: a top-down democratic decision-making tool, a bottom-up pedagogical tool for nurturing the homo collaborans aspect of human nature, and an object of inquiry and improvement. Viewed as a democratic decision-making tool, postdigital dialogue offers improved theoretical and practical understandings of human and non-human agency, suitable for top-down decision making in our postdigital condition. Just as importantly, postdigital dialogue is a pedagogical tool suitable for the slow burn, bottom-up upbringing of new generations of homo collaborans focused on collective good (in our age of rapid climate change, that could easily translate into our species’ survival).
The transient, postdigital nature of this dialogue implies its permanent unfinishedness and change. Using postdigital dialogue for making democratic decisions of today will help us improve postdigital dialogue for making democratic decisions of tomorrow. Today’s pedagogy of postdigital dialogue will provide input into tomorrow’s pedagogy of postdigital dialogue. These practical inputs will then improve our theoretical views on postdigital dialogue, only to start another round of practical applications of the improved concept. Postdigital dialogue is a full-blooded Freirean (1972) praxis, where theory leads practice just as much as practice leads theory, and where bottom-up pedagogy aimed at tomorrow is inextricably linked to top-down political decisions of the day.
Conclusion
Postdigital dialogue can be portrayed, perhaps not very creatively but fairly accurately, as the proverbial floating iceberg with about 10 percent of its mass visible above water. The small, visible part of the iceberg includes new technological affordances (isn’t it amazing to talk to the computer?), sociomaterial assemblages between humans and Artificial Intelligences, potentials for human-computer and human-machine collaboration, and so on.
The large, invisible part of the iceberg includes the ideology behind technology’s making and use, power relationships between the makers and users of technology, changing relationships between human and non-human entities (from hierarchy to symmetry), questions pertaining to human nature (the eternal struggle between homo economicus and homo collaborans), environmental impacts of technology, fundamental changes in political economy (from knowledge capitalism to knowledge socialism), and much more.
The key difference between using new technology for dialogue [such as Sirisathitkul’s (2023) and Brailas’ (2023) work on improving academic writing using ChatGPT] and critical, emancipatory postdigital dialogue oriented to peace, solidarity, and democracy lies in postdigital dialogue’s non-complicity with the unjust power relationships and neoliberal principles that characterize knowledge capitalism. Postdigital dialogue is a socially engaged practice, a struggle for a better world. Unfortunately, most current uses of Generative Artificial Intelligences support the powers that be by focusing on the visible parts of the postdigital dialogue iceberg (Watermeyer et al., 2023). Therefore, it is crucial to develop critical scholarship that reaches the invisible parts of the iceberg and engages in a dialogic struggle for social change.
Critical pedagogues need to reach beyond their comfort zones in the humanities and the social sciences and engage with technology at a deep conceptual level. Technology does not make our destiny, but our destiny does strongly depend on technology. This is a pedagogical challenge and an important message for critical pedagogues, activists, change makers, and so on. However, democracy is relevant for, and should be practised by, literally all human beings. Postdigital dialogue simultaneously aims to improve today’s democratic processes, to nurture the voters of tomorrow, and to improve itself in the process. Therefore, it is an important way of taking our destiny into our own hands and co-creating the future through a collective process of democratic deliberation.
References
Bozkurt, A., Xiao, J., Lambert, S., Pazurek, A., Crompton, H., Koseoglu, S., Farrow, R., Bond, M., Nerantzi, C., Honeychurch, S., Bali, M., Dron, J., Mir, K., Stewart, B., Costello, E., Mason, J., Stracke, C. M., Romero-Hall, E., Koutropoulos, A., Toquero, C. M., Singh, L., Tlili, A., Lee, K., Nichols, M., Ossiannilsson, E., Brown, M., Irvine, V., Raffaghelli, J. E., Santos-Hermosa, G., Farrell, O., Adam, T., Thong, Y. L., Sani-Bozkurt, S., Sharma, R. C., Hrastinski, S., & Jandrić, P. (2023). Speculative Futures on ChatGPT and Generative Artificial Intelligence (AI): A collective reflection from the educational landscape. Asian Journal of Distance Education, 18(1), 53-130. https://doi.org/10.5281/zenodo.7636568.
Brailas, A. (2023). In Dialogue with The Machine: Prolegomena to A Posthuman Nomadic Intelligence. Postdigital Science and Education.
Brynjolfsson, E., & McAfee, A. (2011). Race against the machine: How the digital revolution is accelerating innovation, driving productivity, and irreversibly transforming employment and the economy. Digital Frontier Press.
Culkin, J. M. (1967). A schoolman’s guide to Marshall McLuhan. Saturday Review.
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.
Freire, P. (1972). Pedagogy of the Oppressed. Harmondsworth: Penguin Education Specials.
Fuller, S., & Jandrić, P. (2019). The Postdigital Human: Making the history of the future. Postdigital Science and Education, 1(1), 190-217. https://doi.org/10.1007/s42438-018-0003-x.
Gourlay, L. (2021). Posthumanism and the Digital University: Texts, Bodies and Materialities. Bloomsbury.
Haas, B. (2017, April 4). Chinese man ‘marries’ robot he built himself. The Guardian. https://www.theguardian.com/world/2017/apr/04/chinese-man-marries-robot-built-himself.
Hayes, S. (2021). Postdigital Perspectives on the McPolicy of Measuring Excellence. Postdigital Science and Education, 3(1), 1-6. https://doi.org/10.1007/s42438-020-00208-2.
Jandrić, P. (2019). The Postdigital Challenge of Critical Media Literacy. The International Journal of Critical Media Literacy, 1(1), 26-37. https://doi.org/10.1163/25900110-00101002.
Jandrić, P. (2020). Creativity and collective intelligence. In M. A. Peters (Ed.), Encyclopedia of Educational Innovation. Singapore: Springer. https://doi.org/10.1007/978-981-13-2262-4_65-1.
Jandrić, P. (2023a). On The Hyping of Scholarly Research (With A Shout-Out to ChatGPT). Postdigital Science and Education. https://doi.org/10.1007/s42438-023-00402-y.
Jandrić, P. (2023b). Postdigital Human Capital: What is Different This Time? International Journal of Educational Research. https://doi.org/10.1016/j.ijer.2023.102182.
Jandrić, P., & Hayes, S. (2020). Postdigital We-Learn. Studies in Philosophy and Education, 39(3), 285-297. https://doi.org/10.1007/s11217-020-09711-2.
Jandrić, P., & Hayes, S. (2022). Postdigital Critical Pedagogy. In A. A. Abdi & G. W. Misiaszek (Eds.), Palgrave Handbook on Critical Theories of Education (pp. 321-336). Palgrave Macmillan. https://doi.org/10.1007/978-3-030-86343-2_18.
Jandrić, P., & Knox, J. (2022). The Postdigital Turn: Philosophy, Education, Research. Policy Futures in Education, 20(7), 780-795. https://doi.org/10.1177/14782103211062713.
Jandrić, P., Bozkurt, A., McKee, M., & Hayes, S. (2021b). Teaching in the Age of Covid-19 – A Longitudinal Study. Postdigital Science and Education, 3(3), 743-770. https://doi.org/10.1007/s42438-021-00252-6.
Jandrić, P., Fuentes Martinez, A., Reitz, C., Jackson, L., Grauslund, D., Hayes, D., Lukoko, H. O., Hogan, M., Mozelius, P., Arantes, J. A., Levinson, P., Ozoliņš, J., Kirylo, J. D., Carr, P. R., Hood, N., Tesar, M., Sturm, S., Abegglen, S., Burns, T., Sinfield, S., Stewart, G. T., Suoranta, J., Jaldemark, J., Gustafsson, U., Monzó, L. D., Batarelo Kokić, I., Kihwele, J. E., Wright, J., Kishore, P., Stewart, P. A., Bridges, S. M., Lodahl, M., Bryant, P., Kaur, K., Hollings, S., Brown, J. B., Steketee, A., Prinsloo, P., Hazzan, M. K., Jopling, M., Mañero, J., Gibbons, A., Pfohl, S., Humble, N., Davidsen, J., Ford, D. R., Sharma, N., Stockbridge, K., Pyyhtinen, O., Escaño, C., Achieng-Evensen, C., Rose, J., Irwin, J., Shukla, R., SooHoo, S., Truelove, I., Buchanan, R., Urvashi, S., White, E. J., Novak, R., Ryberg, T., Arndt, S., Redder, B., Mukherjee, M., Komolafe, B. F., Mallya, M., Devine, N., Sattarzadeh, S. D., & Hayes, S. (2022). Teaching in the Age of Covid-19—The New Normal. Postdigital Science and Education, 4(3), 877-1015. https://doi.org/10.1007/s42438-022-00332-1.
Jandrić, P., Hayes, D., Truelove, I., Levinson, P., Mayo, P., Ryberg, T., Monzó, L.D., Allen, Q., Stewart, P.A., Carr, P.R., Jackson, L., Bridges, S., Escaño, C., Grauslund, D., Mañero, J., Lukoko, H.O., Bryant, P., Fuentes Martinez, A., Gibbons, A., Sturm, S., Rose, J., Chuma, M.M., Biličić, E., Pfohl, S., Gustafsson, U., Arantes, J.A., Ford, D.R., Kihwele, J.E., Mozelius, P., Suoranta, J., Jurjević, L., Jurčević, M., Steketee, A., Irwin, J., White, E.J., Davidsen, J., Jaldemark, J., Abegglen, S., Burns, T., Sinfield, S., Kirylo, J.D., Batarelo Kokić, I., Stewart, G.T., Rikowski, G., Lisberg Christensen, L., Arndt, S., Pyyhtinen, O., Reitz, C., Lodahl, M., Humble, N., Buchanan, R., Forster, D.J., Kishore, P., Ozoliņš, J., Sharma, N., Urvashi, S., Nejad, H.G., Hood, N., Tesar, M., Wang, Y., Wright, J., Brown, J.B., Prinsloo, P., Kaur, K., Mukherjee, M., Novak, R., Shukla, R., Hollings, S., Konnerup, U., Mallya, M., Olorundare, A., Achieng-Evensen, C., Philip, A.P., Hazzan, M.K., Stockbridge, K., Komolafe, B.F., Bolanle, O.F., Hogan, M., Redder, B., Sattarzadeh, S.D., Jopling, M., SooHoo, S., Devine, N., & Hayes, S. (2020). Teaching in The Age of Covid-19. Postdigital Science and Education, 2(3), 1069-1230. https://doi.org/10.1007/s42438-020-00169-6.
Jandrić, P., Knox, J., Besley, T., Ryberg, T., Suoranta, J., & Hayes, S. (2018). Postdigital Science and Education. Educational Philosophy and Theory, 50(10), 893-899. https://doi.org/10.1080/00131857.2018.1454000.
Jandrić, P., Luke, T. W., Sturm, S., McLaren, P., Jackson, L., MacKenzie, A., Tesar, M., Stewart, G. T., Roberts, P., Abegglen, S., Burns, T., Sinfield, S., Hayes, S., Jaldemark, J., Peters, M. A., Sinclair, C., & Gibbons, A. (2023). Collective Writing: The Continuous Struggle for Meaning-Making. Postdigital Science and Education, 5(3), 851-893. https://doi.org/10.1007/s42438-022-00320-5.
Jandrić, P., Ryberg, T., Knox, J., Lacković, N., Hayes, S., Suoranta, J., Smith, M., Steketee, A., Peters, M. A., McLaren, P., Ford, D. R., Asher, G., McGregor, C., Stewart, G., Williamson, B., & Gibbons, A. (2019). Postdigital Dialogue. Postdigital Science and Education, 1(1), 163-189. https://doi.org/10.1007/s42438-018-0011-x.
Jandrić, P., Hayes, D., Levinson, P., Lisberg Christensen, L., Lukoko, H. O., Kihwele, J. E., Brown, J. B., Reitz, C., Mozelius, P., Nejad, H. G., Fuentes Martinez, A., Arantes, J. A., Jackson, L., Gustafsson, U., Abegglen, S., Burns, T., Sinfield, S., Hogan, M., Kishore, P., Carr, P. R., Batarelo Kokić, I., Prinsloo, P., Grauslund, D., Steketee, A., Achieng-Evensen, C., Komolafe, B. F., Suoranta, J., Hood, N., Tesar, M., Rose, J., Humble, N., Kirylo, J. D., Mañero, J., Monzó, L. D., Lodahl, M., Jaldemark, J., Bridges, S. M., Sharma, N., Davidsen, J., Ozoliņš, J., Bryant, P., Escaño, C., Irwin, J., Kaur, K., Pfohl, S., Stockbridge, K., Ryberg, T., Pyyhtinen, O., SooHoo, S., Hazzan, M. K., Wright, J., Hollings, S., Arndt, S., Gibbons, A., Urvashi, S., Forster, D. J., Truelove, I., Mayo, P., Rikowski, G., Stewart, P. A., Jopling, M., Stewart, G. T., Buchanan, R., Devine, N., Shukla, R., Novak, R., Mallya, M., Biličić, E., Sturm, S., Sattarzadeh, S. D., Philip, A. P., Redder, B., White, E. J., Ford, D. R., Allen, Q., Mukherjee, M., & Hayes, S. (2021a). Teaching in the Age of Covid-19—1 Year Later. Postdigital Science and Education, 3(3), 1073-1223. https://doi.org/10.1007/s42438-021-00243-7.
Jones, C. (2018). Experience and networked learning. In N. Bonderup Dohn, S. Cranmer, J. A. Sime, M. de Laat, & T. Ryberg (Eds.), Networked learning: Reflections and challenges (pp. 39–56). Cham: Springer. https://doi.org/10.1007/978-3-319-74857-3_3.
Kasparov, G. (2017). Deep thinking: Where machine intelligence ends and human creativity begins. John Murray.
Lukaszewicz, A., & Fortuna, P. (2022). Towards Turing Test 2.0—Attribution of Moral Status and Personhood to Human and Non-Human Agents. Postdigital Science and Education, 4(3), 860–876. https://doi.org/10.1007/s42438-022-00303-6.
MacKenzie, A., Rose, J., & Bhatt, I. (Eds.). (2021). The Epistemology of Deceit in a Postdigital Era: Dupery by Design. Springer. https://doi.org/10.1007/978-3-030-72154-1.
McLaren, P., & Jandrić, P. (2020). Postdigital Dialogues on Critical Pedagogy, Liberation Theology and Information Technology. Bloomsbury.
Peters, M. A., & Jandrić, P. (2018). The Digital University: A Dialogue and Manifesto. Peter Lang.
Peters, M. A., Besley, T., Jandrić, P., & Zhu, X. (Eds.). (2020). Knowledge Socialism. The Rise of Peer Production: Collegiality, Collaboration, and Collective Intelligence. Springer. https://doi.org/10.1007/978-981-13-8126-3.
Peters, M. A., Jackson, L., Papastephanou, M., Jandrić, P., Lazaroiu, G., Evers, C. W., Cope, B., Kalantzis, M., Araya, D., Tesar, M., Mika, C., Chen, L., Wang, C., Sturm, S., Rider, S., & Fuller, S. (2023). AI and the future of humanity: ChatGPT-4, philosophy and education – Critical responses. Educational Philosophy and Theory. https://doi.org/10.1080/00131857.2023.2213437.
Peters, M. A., Liu, T. C., & Ondercin, D. J. (2012). The pedagogy of the open society: Knowledge and the governance of higher education. Sense.
Selwyn, N., Hillman, T., Bergviken-Rensfeldt, A., & Perrotta, C. (2023). Making Sense of the Digital Automation of Education. Postdigital Science and Education, 5(1), 1–14. https://doi.org/10.1007/s42438-022-00362-9.
Sirisathitkul, C. (2023). Slow Writing with ChatGPT: Turning the Hype into a Right Way Forward. Postdigital Science and Education. https://doi.org/10.1007/s42438-023-00441-5.
Trang, D. T. T., & Quỳnh, L. T. N. (2021). The historical conditions for the forming of Aristotle’s political thought. Linguistics and Culture Review, 5(4), 2535-2544. https://doi.org/10.21744/lingcure.v5nS4.2114.
Watermeyer, R., Phipps, L., Lanclos, D., & Knight, C. (2023). Generative AI and the Automating of Academia. Postdigital Science and Education. https://doi.org/10.1007/s42438-023-00440-6.