Artificial Intelligence (AI) has been around for decades, but its rapid evolution is reshaping industries in unprecedented ways – and the security sector is no exception.
As AI’s capabilities grow, its potential applications are both exciting and concerning, creating a landscape of opportunities and challenges. AI-driven systems are transforming the security sector by optimizing resource allocation, reducing routine tasks, and enabling real-time threat detection.
But how can AI be aligned with democratic principles and human rights?
In recognition of the rising importance of technology in security, DCAF’s working group on new technologies brings together experts from across the organisation to explore current challenges and future possibilities in digitalisation, cybersecurity, and AI.
The goal? To ensure that technological innovation in the security sector advances alongside robust and responsible governance frameworks that uphold security and democratic values.
Bias in AI systems, lack of transparency, and other ethical concerns pose significant challenges for their use in security settings.
In the absence of a clear international framework to guide the development of AI, organisations like DCAF have a critical role to play in ensuring AI is used responsibly and ethically in the security sector. Applying the principles of good governance – accountability, transparency, and inclusivity – offers a solid foundation for making AI safe and effective.
Ultimately, as AI continues to evolve, the question remains: How can we ensure that its power is harnessed for good, and not misused? Collective efforts need to be made to figure this out now – before the technology outpaces our ability to govern it.
To learn more about how AI is shaping the security sector, we’ve compiled a few useful resources. Browse through them to unpack key insights on the impact of these new digital tools on security and more.
LEARN: Discover resources on artificial intelligence
The OECD’s AI Policy Observatory (aptly named OECD.AI) has a dedicated page of useful resources on AI. It walks you through key concepts and points to additional websites, videos, and courses from other organisations working on AI. Explore them to better understand what AI is and its policy implications.
WATCH: How do gender and other social biases filter into AI? What does this mean for military applications?
Drawing on UNIDIR’s report “Does Military AI Have Gender?”, this video unpacks gender bias in data collection, algorithms, and computer processing. It highlights what a gender-based approach to human-machine interactions should entail. Watch the video for an introduction and delve into the full report for more.
READ: Digitalization and SSG/R: Projections into the Future
Our report on digitalisation and security sector governance aims to shed light on this complex intersection, investigating the multifaceted challenges and opportunities of digitalisation. Read the full report for some useful recommendations to help navigate these technological advancements.
LISTEN: Robots and AI in frontline policing
We invited Brendan Schulman, Vice President of Policy and Government Relations at Boston Dynamics, to join us for a podcast episode that unpacks the potential and challenges of integrating robotics and AI into law enforcement. Tune in to hear his insights and learn about some of Boston Dynamics’ innovative robotics solutions.
READ: Autonomous weapon systems: what the law says – and does not say – about the human role in the use of force
This blog post, written by SIPRI, explores whether international humanitarian law provides sufficient guidance on autonomous weapon systems – weapons that, once triggered by sensors and software, apply force without human intervention. Read the full blog post, available in English, French, and Portuguese, for more insights on this complex human-machine interaction.
LEARN: The challenges of artificial intelligence
Like all disruptive technologies, AI presents major challenges, and its potential misuses – which could have serious consequences for international security, democracy, and society as a whole – are many. Learn about some of these challenges in this op-ed by GCSP, which seeks to bridge the gap between the scientific and technological community and the world of policymaking.
READ: How does AI intersect with democracy and multilateralism?
At a time when the use of AI is raising concerns about the resilience of democratic systems and principles, it is key to look at how multilateralism and current global efforts can regulate AI. This issue brief by the Geneva Graduate Institute and the Kofi Annan Foundation highlights the urgency of addressing systemic flaws, amplifying diverse voices, and adopting ethical approaches to decision-making. Download the brief to learn more about why global AI governance frameworks are needed to safeguard democracy.