This is a guest post. The views expressed here are solely those of the author and do not represent the position of IEEE Spectrum, The Institute, or IEEE.
Many in the civilian artificial intelligence community seem unaware that today’s AI innovations could have significant impacts on international peace and security. However, AI practitioners – researchers, engineers, product developers, and industry managers – can play a key role in mitigating risks through the decisions they make throughout the lifecycle of AI technologies.
There are many ways in which advances in civilian AI could threaten peace and security. Some threats are direct, such as the use of AI-powered chatbots to create disinformation for political influence operations. Large language models could also be used to write code for cyberattacks or to facilitate the development and production of biological weapons.
Other threats are more indirect. For example, decisions by AI companies about whether and under what terms to open-source their software have geopolitical implications. These decisions could determine how states or non-state actors gain access to critical technologies that could be used to develop military AI applications, including autonomous weapons systems.
AI companies and researchers must become more aware of these challenges, and of their own capacity to address them.
Change must start with the education and career development of AI practitioners. On the technical side, the responsible-innovation toolbox offers many options that AI researchers can use to identify and mitigate the risks associated with their work. Researchers should be given the opportunity to learn about these options, including IEEE 7010: Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-Being, IEEE 7007-2021: Ontological Standard for Ethically Driven Robotics and Automation Systems, and the National Institute of Standards and Technology’s AI Risk Management Framework.
What Needs to Change in AI Education?
Responsible AI requires a range of competencies that are not typically covered in AI education. AI should no longer be treated as a pure STEM field, but as an interdisciplinary field that requires not only technical knowledge but also insights from the social sciences and humanities. There should be mandatory courses on the societal impacts of technology and responsible innovation, as well as specific training on AI ethics and governance.
These subjects should be part of the core curriculum at both undergraduate and graduate levels in all universities offering AI degrees.
If educational programs provide foundational knowledge about the societal impacts of technology and how technology governance works, AI practitioners can be empowered to innovate responsibly and become meaningful designers and implementers of AI regulation.
Changing AI education curricula is not an easy task. In some countries, changes to university curricula require ministry-level approval. Proposed changes may encounter internal resistance for cultural, bureaucratic, or financial reasons, and existing instructors may have limited expertise in the new topics.
However, an increasing number of universities, including Harvard, New York University, the Sorbonne, Umeå University, and the University of Helsinki, now offer these subjects as electives.
We don’t need a one-size-fits-all model of education, but we certainly need funding to hire and train dedicated staff.
Adding Responsible AI to Lifelong Learning
The AI community should develop continuing education courses on the societal impacts of AI research so that practitioners can continue to learn about such topics throughout their careers.
AI will undoubtedly evolve in unexpected ways, and new risks will emerge along with it. Identifying and mitigating those risks requires ongoing discussions that include not only researchers and developers, but also those who may be directly or indirectly affected by the use of AI. A comprehensive continuing education program will elicit insights from all stakeholders.
Some universities and private companies already have ethics review committees or policy teams that evaluate the impact of AI tools. These teams’ mandates do not typically include training, but they could be expanded to offer courses that everyone in the organization can take. Training in responsible AI research should not be left to individual initiative; it is something organizations should actively encourage.
Organizations such as the IEEE and the Association for Computing Machinery can play an important role in establishing continuing education courses because they are well positioned to gather information and foster dialogue, and thus establish codes of ethics.
Engagement With the Wider World
There is also a need for AI practitioners to share knowledge and stimulate discussion about potential risks beyond the AI research community.
Fortunately, there are already many active groups on social media discussing the risks of AI, including the misuse of civilian technologies by state and non-state actors. There is also a niche community of organizations focused on responsible AI that consider the geopolitical and security implications of AI research and innovation, among them the AI Now Institute, Centre for the Governance of AI, Data & Society, Distributed AI Research Institute, Montreal AI Ethics Institute, and Partnership on AI.
However, these communities are currently too small and insufficiently diverse, as their core members tend to share similar backgrounds. That lack of diversity can lead groups to overlook risks that affect under-represented populations.
Additionally, AI practitioners may need assistance and guidance on how to engage with people outside the AI research community, especially policymakers. Clearly expressing problems and recommendations in ways that non-technical people can understand is a necessary skill.
We need to find ways to grow our existing communities, make them more diverse and inclusive, and better engage with the rest of society. Large professional organizations like IEEE and ACM can help with this by creating dedicated working groups of experts or setting up tracks at AI conferences.
Universities and the private sector can also contribute by creating or expanding positions and departments focused on the societal impacts of AI and AI governance. Umeå University recently established an AI Policy Lab to address these issues, and companies such as Anthropic, Google, Meta, and OpenAI have established departments or units dedicated to such topics.
Movements to regulate AI are growing around the world. Recent initiatives include the establishment of the UN’s High-Level Advisory Body on Artificial Intelligence and the Global Commission on Responsible Artificial Intelligence in the Military Domain. G7 leaders issued a statement on the Hiroshima AI Process, and the UK government hosted the first AI Safety Summit last year.
A central question facing regulators is whether they can trust AI researchers and companies to develop the technology responsibly.
In our view, one of the most effective and sustainable ways to encourage AI developers to take responsibility for risks is to invest in education. Today’s and tomorrow’s practitioners must have the fundamental knowledge and tools to address the risks arising from their work if they are to be effective designers and implementers of future AI regulation.
Authors’ note: Authors are listed in order of their contribution. The authors were brought together by an initiative of the United Nations Office for Disarmament Affairs and the Stockholm International Peace Research Institute, launched with the support of the European Union Initiative on Responsible Innovation in AI for International Peace and Security.