3 Comments
Matthew Rosenquist

War is ugly, but leveraging tools in appropriate ways, given the context, is important. A minefield kills indiscriminately, yet an AI-enabled minefield could choose not to harm specific targets. A call for an artillery barrage on a location will harm everyone within its kill radius, but AI targeting can be far more specific. AI targeting is a tool. We must establish boundaries and oversight, but it does have its place.

I think the argument should not be about avoiding AI smart weapons, but about how we refine the tool so it is effective and efficient in achieving the objective while being properly tuned to reduce or avoid unnecessary harm. If we are responsible, we can significantly reduce harm to non-combatants.

Reid Blackman

Yeah, I think that's right. A lot of the objections I hear have to do with the reliability of the AI system. But the relevant ethical question is: 'If it's shown to be sufficiently reliable, is it ethically acceptable?' I suspect that, in some cases at least, it may even be morally required, since it would decrease the loss of innocent lives.

Matthew Rosenquist

Exactly! Using AI may contribute to less collateral damage (civilians, property, etc.), fewer long-term risks (like mines and unexploded ordnance remaining after the conflict), conformity to Rules of Engagement (ROE), and less risk of catastrophic unintended consequences (e.g., targeting of nuclear plants, industrial caustic-chemical storage, hospitals, etc.).

But like any tool, it needs to be tuned. In the case of AI, it must be trained.

There should still be accountability for the use of AI by aggressors, just as there is for the use of current weapons of war.
