Late last year, the Academy of Science of South Africa (ASSAf) hosted a Presidential Roundtable themed ‘Frankenstein or Gods? The Impact of the New Technologies on What It Means to Be Human’. I was part of a panel that engaged with political scientist Professor Margaret Levi’s keynote address.
Prof Levi expressed concern over the future and governance of artificial intelligence (AI) technologies, and her overarching analysis considered the underlying social problems. She pointed to the problem of power inequality as it relates to corporate monopoly, the gap between programmers’ technical expertise and end users’ lack of it, and the dependence and addiction that lead humans to concede control. Furthermore, she identified the weaponisation of technology, the biased nature of algorithms, legal and illegal data/identity theft, and AI technology taking over human jobs among the fear-inspiring advances of technology.
Prof Levi argued that AI taking human jobs is impossible, as robots are “savant nerds” without emotional intelligence and self-consciousness. She maintained that the reservations humans display in response to AI technologies point to an anticipation of machines dominating humans, which is met with a strong human desire to dominate AI. Both the fear of being dominated and the desire to dominate machines operate within the wrong paradigm of domination. She recommended instead viewing the relation between AI and humans through the paradigm of collaboration.
While I agree with Prof Levi’s conclusion that talk of domination is the wrong paradigm with which to capture the relationality between AI and humans, I am not convinced that the alternative paradigm is collaboration.
Apart from anthropomorphising AI, the language of collaboration is value-laden and invokes AI agency. It is a category mistake to think that AI possesses the kind of agency that enables collaboration with humans. Collaboration would require empathy, ethics, tacit knowledge and other forms of comprehensive knowledge, among other capacities that enable equitable participation among agents.
Prof Levi rightly argued that AI technologies cannot learn ethics, compassion and other forms of comprehensive knowledge. They are instruments that cannot have agency in the way humans understand agency. While they might have mechanical autonomy, AI cannot be said to be praiseworthy (gods) or blameworthy (Frankensteins); it is the designers and program developers who can be gods or Frankensteins. Here I agree with Prof Levi’s suggestion that AI research and design should not be left to technocrats alone – it should necessarily involve transdisciplinary collaborations that create systems that respect and uphold ethical values and minimise harm to human lives.
What we ought to be mindful of in our interaction with and use of AI technologies is an acute consideration of our ethical futures. This will necessarily involve a culture change: considering the ethical and societal consequences of the design and design research of AI technologies; planning for and mitigating expected harms, including being able to conscientiously choose not to create a harmful product; and revising the culture of AI design by rethinking the faces of consequences. All aspects of our ethical futures, as espoused by Prof Levi, can be captured in three values: 1) inclusivity; 2) planning; and 3) revision – all of which seem to co-opt human beings and not the AI technologies.
Certainly, humans are entities whose humanity is dynamic in ways that are not replicable in AI technologies. Our fears about being dominated by AI are partly due to the overextension of the instrumental value of AI: we tend to confuse that instrumental value with complex human capabilities that stem from the dynamic aspects of human nature. Our fear of AI is misplaced precisely because we ascribe human capabilities to these technologies and wrongly assign them moral agency, simply because they can appear and act in ways that invite anthropomorphic interpretation. While their actions can have moral consequences, AI technologies remain amoral objects designed and created for specific tasks. They can be used and deployed for human benefit.
Ultimately, it is humans who need to collaborate to ensure ethically responsible use of AI, rather than granting AI honorary human status. Although we can grant AI technologies mechanical autonomy, it is those who design and create them who should remain accountable for their actions, and who should collaborate across disciplines to mitigate the abusive use of such technologies.
Prof Levi’s suggestion that humans and AI collaborate raises issues of moral relationality and the plausibility of co-authorship of the principles that govern interactions between humans and AI. In a society where humans constantly interact with AI technologies, both intentionally and unintentionally, collaboration may not be possible. What is possible is the morally responsible use and deployment of AI technologies. In short, I think we should be careful not to superimpose agency on AI technologies, as doing so indefensibly shifts moral responsibility away from their creators.