Today, my Ethics course for chaplains here at St. Paul's University in Ottawa heard two presentations by representatives of the Canadian Forces' Defence Ethics Program and Army Ethics Program. These are great programs, carefully and sincerely designed to help Canada's young men and women learn to do the right thing on the morally complex and ambiguous battlefields of today and of the foreseeable future.
However, an article posted on slate.com summarizes a new generation of technology that will allow the US and allied militaries to deploy a variety of robotic systems, including autonomous robot fighting systems, in the near future.
Lethal military robots are currently deployed in Iraq, Afghanistan and Pakistan. Ground-based robots like QinetiQ's MAARS robot are armed with weapons to shoot insurgents, appendages to disarm bombs, and surveillance equipment to search buildings. Ronald Arkin, a professor of computer science at Georgia Tech, is in the first stages of developing a package of software and hardware that tells robots when and what to fire.
We're already familiar with weapons systems like the Predator drone, an unmanned aerial vehicle controlled by an operator who may be thousands of miles away. But what happens when these sorts of systems are married to cognition systems that allow them to act independently of humans once they are programmed and deployed in theatre? The ethics programs of the near future may well be aimed at computer programmers as much as at soldiers. Or so we can hope.