Robotics and War
Given our history (World War II, Korea, Vietnam, Kuwait, Afghanistan, Iraq), the American people no longer trust their government to make proper decisions, moral decisions. But the decision to go to war isn’t up to the American people; the issue is always decided by those whom the American people choose to lead them, or to represent them in Congress. Hmmm. Maybe voting responsibly is the key to this problem.
We are at a place now where America’s antiquated electrical grid is significantly vulnerable to enemy targeting. If an enemy shuts down our electrical power, they shut down America. Totally. Yet our country is now developing a strategy for employing robotic/autonomous technology to improve our national defense structure, a strategy that relies exclusively on electrical systems.
In fact, the United States has been using these technologies for quite a few years now, mostly aeronautical drones that navigate by global positioning systems and are controlled through high-altitude satellite links. We have used these devices to bomb enemy targets, including human targets.
Today, the Navy is in the process of developing autonomous ships that can resupply guided missile destroyers no matter where they are in the world, and unmanned submarines that can conduct undetected coastal reconnaissance.
America’s use of autonomous vehicles demands a national discussion about the ethics of programming computers to make life and death decisions. The discussion has been ongoing for the past 25 years.
Despite any of our concerns, the military services continue with robotic development, as they must. Sadly, after Iran shot down one of our aeronautical drones, it sent the wreckage to Russia for reverse engineering. Now that the “enemy” has acquired our technology, our own countrymen are in danger from such weapons.
But autonomous weapons are here and we must confront two possibilities: we are either doomed to suffer harm by an enemy who does not hesitate to use such weapons, or in using them ourselves we relinquish our humanity, the principles that have long set us apart from the enemy (whoever that is).
Given the danger robotic technology poses to us, the United States has no choice but to proceed with technological development. Better them than us … but we must proceed with the understanding that mistakes will be made, and innocent people will suffer from them. Unintended civilian casualties occur in every conflict, so our planners, programmers, technicians, and controllers must do all they can to minimize inadvertent carnage.
Two questions arise: (1) Will lethal robotic systems be more or less likely to make mistakes than human-operated systems? (2) Are civilian casualties at the hands of human operators less reprehensible than those imposed by a computer?
The answer to the first question must be that robotic systems will be no less likely to make mistakes, since computer systems are designed by humans. The real issue is the ability to discriminate between legitimate targets and unintended ones. Over time, automated systems will improve in this area, and of course computers process information quickly, never get fatigued, and don’t let emotions interfere with judgment.
In answer to the second question, there is no real difference between unintended collateral damage caused by human error and that caused by computer error, though some will argue that computers will become a better safeguard for innocent life. Beyond this, social attitudes change over time. With more young people playing computerized war games, it is likely that with the passage of time there will be fewer objections to autonomous warfare, not more.
It is doubtful that fully autonomous systems will run the show in any near-future conflicts. Robotics will remain a human-machine collaboration: significant autonomy with humans making the final “go/no-go” decision. But we should make no mistake about the interest in robotic warfare in China and Russia, two significant US adversaries. The new age is here. We either shape it in our own interests, or we suffer the consequences of falling behind our enemies.
If robotic technology makes our forces more lethal, if it increases their survivability rate, if it gives American troops an edge on the battlefield, then I’m in favor of autonomous systems. My opinion may not matter because this is the direction our military leaders are taking us. Questions do remain, however.
When the United States develops a robust robotic defense system, when our warriors can inflict more damage to the enemy while remaining relatively isolated from an enemy response, when or if we get to the point where we can send robots to kill the enemy and keep our young people home, will our president or congressional leaders be more or less likely to take our nation to war?
One point of view regards warfare as immoral. Its opposite holds that warfare in self-defense is necessary, obligatory, and moral. But there are many levels to these kinds of arguments. We assume that the individual with the responsibility and authority to commit our nation to war is the President of the United States, or collectively, the Congress of the United States.
But under what circumstances is a presidential or congressional decision for war justified? Franklin D. Roosevelt wanted war so badly that he did everything within his power to provoke the Japanese into attacking the United States.
Harry S. Truman’s inept foreign policy invited the North Korean invasion of South Korea in June 1950 and set into motion the first and second Indochina wars (1946–1954, 1960–1975). Lyndon Baines Johnson wanted a war in Southeast Asia so badly that he lied about the circumstances that prompted his decision to commit the United States to war in South Vietnam and against North Vietnam. Ours is not a foolproof system.
Meanwhile, our nation’s ability to protect our access to electrical power remains a concern.
What is your opinion?
Other than that, all is well in the swamp.