Looking Ahead to 2040 and Beyond: Some Hesitation

Thinking about forecasting: here’s a draft section from Bryan Alexander’s upcoming book, offered with some throat-clearing and/or humility:

If exploring the near- and medium-term futures of colleges and universities is both daunting and demanding of extensive analysis, then attempting to look further ahead should inspire true humility. Starting with the largest possible scale of analysis, planetary civilization, involves modelling the possibilities of climate change, which already necessitates an enormous scientific endeavour. That research has established the likelihood of a temperature rise of one or two degrees Celsius over the next few decades, a warming which could drive growing numbers of climate refugees across international borders. Similar movements over the past few years have already changed the face of politics in Europe; we can anticipate at least as much social and cultural stress, on top of large-scale human suffering. This temperature rise is likely to trigger agricultural crises stemming from crop failures due to excessive warmth or encroaching aridification; that, too, could inspire political, economic, and social unrest. Anticipating and mitigating these possibilities – should a given polity decide to do so – then stimulates yet another level of political, economic, and social change.

Looking at change drivers for geopolitical structures and events other than those caused by the Earth’s changing climate involves a small galaxy of possibilities. We have touched on several of these throughout this book. The growing age gap between developed and developing nations; rising and unevenly distributed income and wealth inequality; the tension between those seeking to extend and deepen globalization and the neonationalists and localists who resist them; growing illiberalism in many political environments; areas of rising religious belief and practice versus regions of growing religious unaffiliation; the battle between corruption and law enforcement; the continued struggle for women’s rights; trafficking in multiple illegal substances: all of these, and more, offer ways of shaking or reaffirming certain elements of the world order. Individual nations and regions provide myriad opportunities for change, too, from long-standing border tensions (Israel and its Arab neighbours; China and India; India and Pakistan) to political instability (sub-Saharan Africa, some of the Middle East) to transcontinental projects (China’s Belt and Road initiative). The possible alterations to the present-day political settlement ramify accordingly.

Alongside and intertwined with these forces is the ongoing technological revolution, a domain which offers yet another realm of colossal complexity. Attempting to forecast the digital world of 2035 from 2019, a gap of some sixteen years, runs risks along the lines of anticipating the technological environment of 2019 from that of 2001. In that year most Americans were slow to imagine the mobile revolution, even while mobile phones swept the rest of the world. The dot-com bubble had just burst, which chastened many formerly expansive imaginings. Virtual reality of the 1990s had failed massively, and few saw it returning. The web was growing rapidly, but remained largely in its noninteractive, document-centric mode; the more social, easier-to-publish environment of what would be dubbed “Web 2.0” was just beginning to surface.

If we wish to look beyond 2035, we would do well to augment our caution even further and imagine glimpsing 2019 from as far back as 1984. Visionaries of the Cold War’s last decade did manage to foresee certain features of our time. Futurists like Alvin Toffler augured a shift from manufacturing towards a post-industrial economy, which has largely transpired, at least in the developed world. Others focused on the looming threat of nuclear war. Most failed to pay attention to China’s recovery from its Cultural Revolution and turn towards modified capitalism, which would become one of the great stories of our time. Science fiction writers of the then-emergent cyberpunk school envisioned a deeply networked future world dominated by large corporations and suffering from threats to civil liberties, a vision that proved quite prescient. Those writers largely missed mobile devices, however, tended to overstate how far AI would actually advance, and did not foresee the world-changing World Wide Web. Looking back at such historical futuring should instill caution as we look forward now.

Cautiously, we can suggest some technological possibilities based on the frameworks that seem most durable now, and on extrapolating from some current initiatives. The Fourth Industrial Revolution model, for example, posits continued movement away from a classic manufacturing-based economy and towards a society reshaped by multiple forms of new technologies, most especially automation, from AI to robotics. Clearly there are many powerful forces driving such a transformation in the present, as we have discussed in chapters four and five. There is a great deal of cultural and financial capital invested in this revolution. Technological invention continues to progress. If we assume only incremental advances along these developmental lines, rather than chaotic disruptions, we should anticipate a transformed world, with a baseline of widespread DIY manufacturing, artificial assistants, robotics in professional and personal lives, and rich multimedia production and experience.

Naturally we should also anticipate a range of cultural responses to a Fourth Industrial Revolution. Automation alone offers multiple ways forward, assuming that set of technologies and practices succeeds on its own terms (and the possibility of a major automation crash is one we should keep in mind). For example, we have already imagined, and are presently working towards, various forms of recapitulating some degree of human identity in silicon. One concept involves starting from the mass of expressions a given person creates during decades of life – text messages, emails, phone calls, blog posts, Instagram photos, video appearances, and so on – then using software to determine that person’s distinct style, their expressive voice, in order to reproduce it after death in a kind of machine learning memorial. Extrapolated one step further, we can envision virtual advisors from the past who assist us in our work, or grief therapy programs based on living people interacting with mimetic forms of the deceased. Another concept sees software simulating a more generic or less representational human being, a virtual person living a digital life, which can then be put to various uses, from study to work; with some degree of autonomy such emulations could well develop their own society.
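As a purely illustrative aside, the "determine a person’s style, then reproduce it" idea can be sketched in a few lines of code. The toy Python below builds a simple word-level Markov model from a small, invented corpus and generates text that loosely echoes its phrasing; it is a minimal sketch over hypothetical data, not the far richer machine learning such memorial projects would actually require.

```python
# Toy sketch: learn word transitions from a hypothetical personal corpus,
# then generate new text loosely echoing that person's phrasing.
# This stands in, very crudely, for the "machine learning memorial" idea.
import random
from collections import defaultdict

def build_model(messages, order=2):
    """Map each run of `order` words to the words observed to follow it."""
    model = defaultdict(list)
    for text in messages:
        words = text.split()
        for i in range(len(words) - order):
            model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, length=20, seed=None):
    """Random-walk the model to produce text in the corpus's rough voice."""
    rng = random.Random(seed)
    state = rng.choice(list(model.keys()))
    output = list(state)
    for _ in range(length):
        followers = model.get(state)
        if not followers:                      # dead end: restart elsewhere
            state = rng.choice(list(model.keys()))
            followers = model[state]
        output.append(rng.choice(followers))
        state = tuple(output[-len(state):])
    return " ".join(output)

if __name__ == "__main__":
    # Hypothetical stand-in for decades of emails, posts, and messages.
    corpus = [
        "I think the future of learning will surprise us all",
        "I think the campus of 2040 will look nothing like today's campus",
        "the future of the campus is something we should all be thinking about",
    ]
    print(generate(build_model(corpus), length=15, seed=42))
```

Real memorial or advisor systems would of course involve vastly larger corpora and more capable models; the point of the sketch is only that the pipeline – gather a lifetime of expression, model its style, generate in that voice – is already conceptually simple.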

Looking at this historical transformation from a political and economic perspective, our present political and social arrangements and the historical record of the first through third industrial revolutions offer several different civilizational reconfigurations, as Peter Frase and others have suggested. If automation renders many jobs obsolete, human creativity could respond by creating new functions, jobs, and professions for carbon-based life to perform. After all, hardware and software need some degree of managing, and the social impacts of automation will transform current human needs while creating new ones, which emergent professions could meet. Alternatively, automation’s successes may be limited to ways of assisting rather than replacing people, enhancing work rather than outmoding workers. In this future we could work closely with machines in some form of cyborg relationship, either literally, through implants and ingested devices, or metaphorically, as we come to depend ever more intensively on software, data, networks, and hardware to perform our various tasks. Machines would intimately empower our work and lives.

We can also imagine a socio-economic elite powered by automation and related industries, dominating a society consisting largely of disempowered poor or working-class people kept in line through a mixture of rich entertainment and ubiquitous surveillance. This could become something nearly medieval in structure, with a social base of impoverished techno-peasantry and a vanishingly small middle class above. Social media by itself would perform that mixture of pleasant distraction and data-driven monitoring. Other technologies could be pressed into service: AI for more ambitious data-mining, robotics as a police force, and tiny networked devices for extensive surveillance. From this future we would look back and see the early 21st-century wave of dystopian literature and the warnings of Carr, Lanier, et al. as eerily prescient.

Alternatively, a less dystopian version of this automated-inequality world would see most workers freed from many historical drudgeries thanks to automation’s successes, and leading healthier lives. In fact, some futurists see such a world as one positively liberated by automation. This is where the universal basic income (UBI) idea enters discussions, based on various plans to guarantee all residents (or citizens, a crucial difference) of a given nation or region a cash transfer sufficient to maintain a basic existence. UBI proponents often pitch their idea as a response to automation’s capacity to render human workers obsolete and the possibility that we will not generate new professions. The average work week may fall from the classic 40 hours to 30 or 20. Alternatively, more people may alternate periods of full-time employment with seasons of unemployment. A UBI system would tide people over these compensation shortfalls. Moreover, without an existential requirement to work for pay, some of us may choose to pursue non-remunerative tasks, from writing a novel or learning a foreign language to spending more hours caring for loved ones or undertaking a religious pilgrimage. UBI could usher in a new dawn for human potential: quite the knock-on effect from automation’s potential triumph.

Automation could yield another range of possible mid-century worlds, wherein devices and software progress even further, augmenting the world with a posthuman ecosystem. Imagine machines handling many of today’s human tasks, but better: hauling cargo in redesigned vehicles, growing crops, diagnosing human and animal illnesses, building colonies on Mars and the moon, performing surgeries, all more safely and efficiently than humans could manage. Software produces nonfiction and creative art, manages the economy, patiently counsels and instructs humans. We have seen horrific versions of this in fiction, such as Čapek’s human-exterminating robots (1920), the Terminator movies’ genocidal Skynet (1984ff), and the “benevolent” tyranny of Colossus (1970), but popular culture has also produced more positive visions of a posthuman society, most notably Iain M. Banks’s far-future Culture sequence. This fiction brings to mind a deep question: faced with being kindly outmoded by our technological creations, how would humans react, psychologically and culturally? Would we rage against these devices, as Victor Frankenstein snarled against his much more articulate monster? Would we instead accept our new status and launch a society-wide vacation, a la WALL-E (2008)? This is a question the university is supremely well suited to explore, given the intellectual depth of our many disciplines. Imagine a curriculum built around the new, posthuman age, and how history, computer science, sociology, literature, philosophy, and economics might teach it.

Yet we must be cautious about these mid-century visions, since they are based primarily on certain possible ways we could restructure our world around only one technological domain, that of automation. Consider instead the futures driven by other technologies currently in development. A Facebook team seems to be making progress in developing a device that allows hearing-impaired people to experience audio communications as haptic vibrations, either returning to them the sense of sound or producing a new, sixth sense. How else might we enhance the lives of the disabled, or extend the range of human experience? Research into brain science has yielded early methods of physically intervening in human cognition, leading to explorations of altering mental states, connecting minds directly to computers, or linking minds together. The potential for torment and abuse here is vast, as are, once more, the possibilities for expanding what humans can do in the world, not to mention exploring the old dream of teaching by sending information directly into the brain, a la Neo in The Matrix (1999). Meanwhile, the long-running field of genetic engineering, frequently the source of dystopian imagination (Gattaca, 1997) and ethical conundrums, is developing new powers through CRISPR technology. We can, perhaps, redefine human and overall biological life on Earth. Add other technologies and practices to this mix – psychopharmaceuticals, advanced artificial limbs and organs, 3D-printed anatomy, the internet of things installed within bodies – and what it means to be a human being in 2045 becomes a radically different question than it was when posed in 2019. Once more, what other institution is better positioned to guide us through such extraordinary challenges than the academy? And to what extent will colleges and universities shape such a future through research, producing technologies, practices, and concepts?

At the same time the biological world may be further inflected by changes in large-scale materials science and new engineering projects. Ever-shrinking computational devices may lead us to mobile, networked machines small enough to be ingested, capable of conducting medical work inside the human body. Even at scales larger than the dreams of nanotechnology we can imagine transforming the physical world through the deployment of networked mites too small to be seen by the naked eye, perhaps leading to the advent of materials that can be addressed remotely or function autonomously, a/k/a “smart matter.” 3D printing could literally reshape aspects of our built environment, as might the use of new materials, like very strong and light graphene. New materials may well be needed, since among the proposals currently under consideration for mitigating climate change are massive geoengineering projects, such as adding saline to an entire ocean, building region-scale seawalls, altering the planetary atmosphere’s chemical composition, or installing a massive shade in deep space between the Earth and the sun. To reach space at all we currently use fairly risky rockets, the dangers of which have elicited experiments and designs for everything from reusable spacecraft to atmosphere-straddling space elevators. New entities are participating in a 21st-century space race that barely resembles that of the 20th century: corporations, billionaires, and nations building programs for the first time. These potential innovations or transformations could impact higher education in multiple ways, starting with altering a campus’s physical plant. College curricula and student career services would likely develop programs to support learners who seek to work in those new fields. The development of any or all of these projects will draw heavily on academic research and development. Further, many university departments, from political science to philosophy and sociology, will be able to contribute to the selection and critical assessment of such epochal projects.

All of these possibilities are based on trends that we can perceive in the present day. Meanwhile, beyond those evident change drivers, black swan possibilities also lurk ahead. Historical examples abound, such as a leader’s sudden death by accident or assassination that unravels a political order. A new religious sect or the vigorous reformation of an existing faith can win adherents and upend societies. Beyond political and social causes, a pandemic that exceeds our medical containment capacity could not only constitute a humanitarian disaster but also sap regimes, shock economies, and electrify cultures. On the other hand, a medical innovation might save or extend lives, such as a cure for congestive heart failure or a therapy that ends Alzheimer’s. Many natural disasters have so far been handled without disruption by our current national and international systems, but larger-scale ones, such as a cometary or asteroid impact, are quite possible and potentially devastating. Climate change proceeds slowly, yet an unlikely and sudden shock, such as the shutdown of the Atlantic Ocean’s thermohaline circulation system, could yield a range of powerful impacts. And, since by their very nature black swan events are challenging to anticipate, we may well be hit by a completely strange development, one which to us in 2019 is a Rumsfeldian unknown unknown.

Our digital world may be especially vulnerable to these low-probability, high-impact events. A solar coronal mass ejection of sufficient size could damage networks and devices over a large geographical range. An electromagnetic pulse (EMP) could remove a target completely from the internet and the electrically connected world for a short period of time, leading to potentially catastrophic results. Imagine a city or state not only forced offline (no banking, email, documents, voice calls) but cut off from electricity: no lighting, refrigeration, or air conditioning; no use of cars or aircraft. The immediate medical consequences, to pick but one result, would be dire. Digital attacks conducted by national governments and their military or intelligence agencies (cyberwar), by organized crime, by other non-state actors, or by organizations that have yet to appear could crash major networks. If any of these occurred at sufficient scale, a social disaster could unfold, given the deep dependency we now have on the digital world.

Amid all of these possibilities and others, and in all humility, we must consider the ways higher education might develop after 2040.

Written by Bryan Alexander.
