
Sylvester Johnson on Humanism in Our Technological Age

Sylvester Johnson

As a follow-up to Research Library Issues no. 299 on the ethics of artificial intelligence (AI), Association of Research Libraries (ARL) executive director Mary Lee Kennedy talked with author Sylvester Johnson, founding director of the Center for Humanities and assistant vice provost for the humanities at Virginia Tech. With Kennedy, Johnson discussed such ideas as the need for humanistic expertise to address radical inequality in a world shaped by technology. He also considered the library’s role, as a “lever for public good,” in managing the creation of knowledge by AI systems as well as by humans. The full interview follows.

Kennedy: What are the essential humanistic expertises and informed considerations with regard to our humanity today?

Johnson: I think radical inequality is the greatest challenge humanity faces today. This means that expertise in the human condition—broadly conceived—wedded to a cultivated understanding of how social power works, will become even more essential for shaping a viable human future. It is not enough to insist that knowledge should exist only for the sake of knowledge. All knowledge—in humanities, business, and STEM disciplines—has inherent value. So, no one should ever doubt the intrinsic value of learning. Ultimately, however, knowledge should exist for the sake of people—all people—and this means supporting well-being and human thriving. We should be wary, furthermore, of easy recourse to reductionist claims that humanities education “should not solve problems.” This assertion itself is in part a product of histories of power and privilege that would adopt a posture of learning about humanity while elevating as the ideal a radical disregard for the most vulnerable humans in our global society. It is no accident that areas such as gender and sexuality studies, critical ethnic studies, and similar disciplines created in recent decades are easily excluded from a “traditional humanities” paradigm. These newer disciplines emerged with a clear concern for addressing problems of human suffering, disparity, and structural constraints on the human condition. So, although there is no consensus among humanists about what lies at the core of humanities, I want to emphasize that actual humans and human-centered outcomes should be treated as a vital part of humanities scholarship.

So, what does this imply for expertise? Today’s innovation economy is bringing tremendous benefits, but it is also concentrating wealth at an unprecedented scale. This trend is intersecting with historical, institutional forms of injustice. As we focus on advancing the interests of human thriving, we must forge new opportunities for humanists to lead a society that is increasingly technological and that is being reshaped by innovation. This translates into many different approaches. Some examples are historical insights into how systems of power actually function; theoretical and ethnographic analyses of democracy; studies of such issues as race, gender, disability, and sexuality; and creativity, composition, performance, and other forms of communicating human experience that can inspire human flourishing. We must also continue to foster breadth of curiosity and analysis. As David Epstein has emphasized in a recent book, an expansive range of learning, which has been a historic feature of humanities education, will ensure that humanities experts excel in a hyper-specialized world.

Kennedy: What is the same or different about the need for these as compared to past technological innovations such as the internet?

Johnson: That’s a great question, because it probes the interface of the familiar and the new. As a human phenomenon, technology is classic. Humans have manipulated our environment and our own bodies throughout the history of our species. But recent digital technologies are introducing profoundly new quandaries, as we advance our ability to design machines that can, to a considerable degree, replace us. Machines are increasingly adept at performing cognitive tasks and, to a lesser extent, mechanical, tactile ones. High-end labor automation will continue to advance. “Cognitive” computing systems (AIs) are increasingly working alongside humans to make decisions that affect the lives of billions of people. Moreover, intelligent machine systems are advancing alongside the enhancement of humans with AI-enabled machine parts—prosthetic limbs and brain chips are key examples. Genetic engineering will eventually make the design of humans from the cellular level upward the most game-changing technology in human history. All of this means that we will be forced to reckon with profound questions in the coming years, such as determining who or what gets to count as human. Although we can look to histories of racism and colonialism to see precedent for debating or denying people’s humanity, these new technologies are creating truly novel, unprecedented challenges.

So, I think you can summarize the new challenge this way: Technology is becoming the essential means to shaping and governing human society at almost every level. It will be important that we not be distracted by any claims that nothing fundamentally new is happening and that “we have seen it all before.” Machines have never before made decisions about our lives and our society. That is now beginning to happen. In a world that is increasingly governed by non-human systems that humans have designed, we will need humanists who can govern technology—who can guide its design, implementation, regulation, and societal impact to achieve human-centered outcomes rooted in fairness.

Kennedy: Given what is the same and different, is there a way to “future proof” our humanity through consistent ethical practices across time?

Johnson: I think you’ve probably asked the most difficult question on this topic! Kai-Fu Lee, a venture capitalist and innovation expert, has devoted the past few years to addressing the question of where humans will fit in a future where human labor may very well be increasingly replaced by automation. He emphasizes that the challenges deriving from this possible future of labor will be compounded by the most uneven creation of wealth in human history. Economists anticipate the next decade will see a global net increase of approximately $100 trillion in gross domestic product (GDP) deriving from digital technologies; most of that will belong to a small number of private companies in concentrated regions of the United States and China.

So, I think one lesson for future-proofing humanity is learning from history. The United States’ carceral system is truly world class—we are the most carceral nation on the planet, specializing in treating millions of humans as expendable, compounding precarity among the poor and racial minorities with generational consequences. It is no exaggeration to say that our carceral system commoditizes cruelty by investing in institutional racism and classism. Unless we transform this political economy, we will likely extend this expendability to new populations for whom there is no place in an innovation economy.

This is one of the most important reasons why technology cannot be treated as simply a technical issue. Our humanists must lead efforts to shape a different possible future for capital. This also demonstrates why reductively insisting that “humanities should not solve problems” is a harmful strategy, one that will never “save the humanities.” We must celebrate the inherent value of learning across any and all disciplines—including STEM—while also seizing new opportunities for leadership and pathways to careers by preparing a new generation of humanists to shape a viable, human-centered future.

Kennedy: Where do you see the most need for leadership and progress? Are there exemplars that stand out?

Johnson: In the coming years, we will witness a steep demand for greater leadership from humanists in the public arena as well as the private sector. Contrary to years of pessimistic forecasts, a significant part of this will be driven by technology innovation. For decades, legislative assemblies throughout the nation have propelled efforts to devalue humanistic, comprehensive education. Various academic pundits have also joined in asserting that liberal arts education is increasingly irrelevant. Nothing could be further from the truth. We have witnessed some of our greatest societal challenges emerge at the nexus of technology and human society.

The moral of this story is that technology is not only a STEM issue. It is a human one that is thoroughly comprehensive. Humanists must participate in the leadership of technology by leveraging expertise in human complexity, interpreting culture, advancing thought-leadership, analyzing power to forge ethical outcomes, and so forth. This is inclusive of, but not reducible to, studying technology. We must govern technology. That is different from the past.

There are examples of humanists addressing this issue. Will.i.am is a creative artist who is now working to transform our society from one rooted in data monarchy—a few private companies controlling data and monetizing it exclusively—to data democracy, which would advance shared participation in the ownership and commoditization of data. He has argued compellingly that creative humanists must begin to create the technology that shapes our lives. Major philanthropists are driving a new vision for humanities leadership of a technological society. Witness Stephen Schwarzman’s recent gift to the University of Oxford to create a humanities center that will lead ethical governance of technology. The Ford Foundation has joined efforts with other major philanthropic organizations in partnership with the New America think tank to create a network of universities focusing on public-interest technology—developing new pathways for technology to focus on public good rather than exclusively benefiting private capital. Stanford University recently launched a $1 billion capital campaign to fund a new institute for human-centered AI. At Virginia Tech, we have launched a university-wide Tech for Humanity initiative to advance human-centered approaches to technology; as part of this effort, we are redefining what it means to be a technologist to include not only technical skills but also humanistic knowledge and skilled understanding of societal impact and the human condition.

Kennedy: What would you ask of the research library community in securing the “very future of humanity in a technological age”?

Johnson: That’s a great question. Libraries have been extraordinary repositories of information for centuries. Information has always been a commodity, but never before on the scale that we are witnessing today. Throughout history, the production of knowledge has been an exclusively human affair. One of the most difficult challenges we will face in the coming years will be managing the creation of knowledge by non-human agents—AI systems. AIs are already creating music and handling decision services across domains such as finance, transportation, and healthcare. This is going to expand into knowledge services—creating new research insights, drawing conclusions, participating in deciding public policy, and advancing education. Some of this challenge will be cultural—who can blame people for being unsettled by such a ground-shift? Other challenges will be logistical and architectural in the institutional sense.

Currently, the most powerful agents who control the relationship between repositories of information and capital are artificial people known as private corporations. Our society is experiencing tremendous benefits from the innovation that private capital is creating. But we are also reaching a point of existential crisis. In this new reality, our libraries will need to prepare to disseminate knowledge and make it accessible in new ways. They must also become major players participating in the value creation surrounding the production of knowledge (by humans and AIs) and its delivery. This will have to occur in an innovation environment where the revenue models in higher education and information services are changing drastically. In order for our colleges and universities to remain financially viable, for instance, libraries cannot simply become tenants in the platform ecosystem of private capital, handing over billions of dollars to a small number of data landlords in exchange for storage, access, and analytics services. Libraries will either become bankrupt in this new environment, or they will become active agents on the value-creation side, not as vehicles for private capital but as levers for public good.

This is just one way that libraries must play a central and active role in governing technology innovation. And it exemplifies the urgency for new models to institutionalize relations among private capital, public interest, and knowledge curation. This is a complicated challenge, but it is one that we must meet to help secure the future of humanity in a technological age.
