AI, defence and the biggest bullies in the playground – Mike Gillespie posts

News and information from the Advent IM team.

“Research has shown that 30 countries want the introduction of an international treaty over such weapons systems, which would be able to select and engage targets without any meaningful human intervention.” From an article by Ian Sale for Sky News.

The ever-increasing desire to march towards automation, utilise Artificial Intelligence and remove human intervention has been widely seen across industry and security management. More recently it has become a topic of great interest and indeed, in some circles, consternation in the areas of militarisation and defence too.

If we are talking about the removal of any human decision making in selection, targeting and engagement, there is a dialogue required that seems to have been overlooked. As is too often the case, there is a rush to innovate and implement without pausing to consider the moral and, more importantly, ethical aspects of the proposed implementations.

Given that thirty countries have expressed the need for an international treaty on this critical area, you would think that this dialogue would happen, that a treaty would be developed and embraced, and that the world could at least be assured of some international regulation, some checks and balances in place to prevent the worst happening: a terrible catastrophe we cannot step back from.

However, what we find is exactly the same scenario we have been facing with cyber weapons: agreement by a large number of countries that Geneva Convention-style controls ought to be implemented, controls that would protect nation states from each other seeking to destroy critical infrastructure through cyberspace.

Instead, this is stalled by the US, Russia and China refusing to sign up to any such agreement. Coincidentally, these are the nation states that are really fond of using cyberspace to attack other nations. These genuinely are weapons of mass destruction: if you can halt water sanitation, power, and the effective delivery of health and banking services, to name but a few, imagine the chaos you can wreak on a nation…

So, what happens when everyone agrees but the biggest bullies in the playground? Is there any point having unilateral policies on either defence or cyber defence? If you effectively weaken your position by failing to ‘move with the times’ and pursue continual upgrade, stockpiling and deterrence, would you not simply be placing a large cyber target on your back? That’s before we even get to securing the automated weapons of mass destruction themselves.

We have some seriously polarised cultural and political issues in the world today that have been heightened through the pandemic by political posturing and warlike threats. The world we live in is mostly removed from a first-hand understanding of the horror of war, unlike the generations after WW1 and the wide-scale use of chemical warfare, or WW2 and the use of the atomic weapons of the day; there is even a fading memory now of the fear that escalated in the 1960s when the threat of nuclear war became so very real.

There is a reason why nations queued up to sign agreements first on chemical and then on biological weapons: they knew the true horror that came with the deployment of wide-scale weapons of mass destruction. The reluctance of the superpowers to do the same for autonomous AI weapons and for cyber weapons is very worrying, and it is something we should all be aware of.

This is not a suggestion to halt defence or progress. It is a suggestion that some weapon capabilities, although achievable, should not be in anyone’s arsenal, because history taught us through the chemical and biological agreements that they are a step too far and should not be tolerated. Agreements on limitations are there to protect people just as much as our defence industries.

We have talked about this topic many times but you can still read our three-part article in The Professional Security Officer Magazine.