Robot Ethics: Gesture Without Motion


photo by UW News

Robot Ethics?  No such thing, of course.  But here’s the problem: there are robots, there are drones, there are killing machines and spying machines already in operation everywhere (yes, everywhere), some of which are in automated attack mode and make their own “strategic decisions.”  So there is, or must be, an ethics applied to the use of robots.  I’m going to go out on a limb here and call any ethic that doesn’t strongly state “STOP” the wrong ethic.  Feel free to argue for “medical” robotics or “exploration” robotics.  But the reality is that these “branches” of robotics spring from military applications–as does all US technological “advance.”

I don’t know any other way to say this: Forget about humanity.  Or rather, finally concede that you and you and you, all of us, are no longer a viable or sustainable feature of this world.  And intentionally so.  This is not Chicken Little territory, because the sky really is falling; and “wolf” is exactly the appropriate cry.  And you can thank the absolutely brilliant, hyper-rational, detached minds hunkered down in American universities, doing the dirty work via their attachment to government funding programs through the DOD, DARPA, the NSA, and any number of other agencies of which you and I will never be aware.  And not only do they make the killing technologies, they also offer the philosophical rationale for their use and justification.

This is a mode of human cogitation that to me is cognate with that of the scientists of the Third Reich, though to an exponentially more insidious degree.  All of these men and women are their own brand of Eichmann.  In fact, perhaps we are all versions of Eichmann these days, and this is why these things march ahead without a single bit of concern.  Oh well, this is the way the world works.

Maybe the massive loss of life in WWI, the first truly machine-driven war, simply made humanity irrelevant as a valued form of existence.  We became, as in Eliot’s haunting poem, “the hollow men.”  (Is there any way to beg the women of the world to rise up and save us?)

This piece in the Atlantic by a “philosopher”–seriously, he has a degree and a job and writes books–detailing his “briefing” to In-Q-Tel, the “venture capital” arm of the CIA, “Drone-Ethics Briefing: What a Leading Robot Expert Told the CIA,” honestly felt like some kind of prank to me.  I mean, In-Q-Tel is a “Bond” joke.  How absolutely juvenile is a group of men who name their killing ventures after 50s-era spy novels and movies, in other words after fantasies?  The US has messed up in the most immense and unfathomable ways over the last 70 years.  These are the rotting fruits of a deeply gnarled root.

First, how many things are wrong with the matter-of-fact existence of a “venture capital” arm of the CIA?  Remember, this is government activity with ZERO oversight (as if oversight matters in the least anymore).  But, as the drum beats continuously for our march to the killing cliff, why quibble over points of policy or constitutional government?  Irrelevant.

You really must read the piece to believe the absolute sincerity of this.  While the Nazis had Heidegger to offer a philosophical justification of Nazi “becoming” and Frege to provide a rationale of detached, merciless logic, we apparently have the likes of Patrick Lin.  Question: is Cal Poly the home of sophistry and apologetics in the service of the arms industry, er, technology ethics?  (Check out that creepy bar-code logo.)  Is it Philosophy University for the CIA?  Note to Doctors of Philosophy: you too will have souls burdened by murder.

It’s true that we might forgive Lin for this particular piece, as he goes through the motions of pointing out where there are “ethical” concerns.  But for the most part this is a done deal, as his rhetoric makes plain, and he makes it clear that he’s speaking from an “us/them” perspective.  Yes, it’s a war briefing, but he is no detached thinker…he is an “us,” and that makes it clear his work in ethics will be in the service of making philosophical language excuse the terrorizing machinations of US robotics.  It’s hard to choose specific sections to share with you, and again I’ll urge you to at least skim the piece to see for yourself.  Each section, each sentence brings fresh incredulity.  In fact, I find the very questions asked already ethically suspect.

Here is how Lin frames much of this piece (again, as a kind of “war” briefing)–never mind that the very acts we call wars are already immoral, unethical, and illegal by many of the conventions and laws he discusses:

Robots are replacing humans on the battlefield–but could they also be used to interrogate and torture suspects? This would avoid a serious ethical conflict between physicians’ duty to do no harm, or nonmaleficence, and their questionable role in monitoring vital signs and health of the interrogated. A robot, on the other hand, wouldn’t be bound by the Hippocratic oath, though its very existence creates new dilemmas of its own.

First, we have already discovered, via our health system and our use of physical and mental torture supervised by those virtuous, oath-taking physicians (which comes first–the chicken or the oath?), that humans do not follow the very ethical codes we might pretend to apply to robot use.  At least we might be honest in acknowledging that we’ll have no way to enforce these codes and rules.  “First do no harm” seems to me one of the cruelest jokes ever perpetrated on humans by another group of humans.  Anyway, these codes seem only to apply in an academic fashion…you know, as in, “it’s academic.”  Which implies, again, its irrelevance.  But Lin goes on to detail the “usual suspects” for our call to employ robots.

The usual reason why we’d want robots in the service of national security and intelligence is that they can do jobs known as the 3 “D”s: Dull jobs, such as extended reconnaissance or patrol beyond limits of human endurance, and standing guard over perimeters; dirty jobs, such as work with hazardous materials and after nuclear or biochemical attacks, and in environments unsuitable for humans, such as underwater and outer space; and dangerous jobs, such as tunneling in terrorist caves, or controlling hostile crowds, or clearing improvised explosive devices (IEDs).

But there’s a new, fourth “D” that’s worth considering, and that’s the ability to act with dispassion. (This is motivated by Prof. Ronald Arkin’s work at Georgia Tech, though others remain skeptical, such as Prof. Noel Sharkey at University of Sheffield in the UK.) Robots wouldn’t act with malice or hatred or other emotions that may lead to war crimes and other abuses, such as rape. They’re unaffected by emotion and adrenaline and hunger. They’re immune to sleep deprivation, low morale, fatigue, etc. that would cloud our judgment. They can see through the “fog of war”, to reduce unlawful and accidental killings. And they can be objective, unblinking observers to ensure ethical conduct in wartime. So robots can do many of our jobs better than we can, and maybe even act more ethically, at least in the high-stress environment of war.

Humans create surrogate life agents because life is dull, dirty, and dangerous, but further because our surrogates are “dispassionate”–i.e., they don’t care.  But currently all programs originate in the human mind, and those folks presumably do care, or at least have a motivation for their actions.

You’ve seen Blade Runner, right?  I’m not one to imagine the mysteries of life going haywire and “evolving” to create passionate robots (or “replicants”), but when it turns out that this is the real final solution to humanity (because, in truth, not even Hitler’s Aryan Nation can be humanly conceived as biologically pure, and so all must be exterminated), who will be responsible?  The Tyrell Corporation?

I really don’t know how to respond to things like the following, as every sentence is an ethical red flag:

More broadly, the public could be worried about whether we should be creating machines that intentionally deceive, manipulate, or coerce people. That’s just disconcerting to a lot of folks, and the ethics of that would be challenged. One example might be this: Consider that we’ve been paying off Afghani warlords with Viagra, which is a less-obvious bribe than money. Sex is one of the most basic incentives for human beings, so potentially some informants might want a sex-robot, which exist today. Without getting into the ethics of sex-robots here, let’s point out that these robots could also have secret surveillance and strike capabilities–a femme fatale of sorts.

As I said, the “matter-of-fact,” somewhat “folksy” tone bothers me about as much as the content.  I’m of a mind that our work as thinkers requires us to use arguments to say good or bad, right or wrong, and then defend those judgments.  Lin and his cohort seem to make arguments because that’s what they do, and then they try to be the ones who can do the most to justify corrupt and evil actions.  A kind of philosopher’s Addington and Yoo–the most hollow of men, yet likely not dispassionate.

Finally, let’s jump to Lin’s concluding paragraphs.

And if we are relying on robots more in the intelligence community, there’s a concern about technology dependency and a resulting loss of human skill. For instance, even inventions we love have this effect: we don’t remember as well because of the printing press, which immortalizes our stories on paper; we can’t do math as well because of calculators; we can’t recognize spelling errors as well because of word-processing programs with spell-check; and we don’t remember phone numbers because they’re stored in our mobile phones. In medical robots, some are worried that human surgeons will lose their skill in performing difficult procedures, if we outsource the job to machines. What happens when we don’t have access to those robots, either in a remote location or power outage? So it’s conceivable that robots in the service of our intelligence community, whatever those scenarios may be, could also have similar effects.

Sounds bad, let’s stop.

Even if the scenarios we’ve been considering end up being unworkable, the mere plausibility of their existence may put our enemies on point and drive their conversations deeper underground. It’s not crazy for people living in caves and huts to think that we’re so technologically advanced that we already have robotic spy-bugs deployed in the field. (Maybe we do, but I’m not privileged to that information.) Anyway, this all could drive an intelligence arms race–an evolution of hunter and prey, as spy satellites had done to force our adversaries to build underground bunkers, even for nuclear testing. And what about us? How do we process and analyze all the extra information we’re collecting from our drones and digital networks? If we can’t handle the data flood, and something there could have prevented a disaster, then the intelligence community may be blamed, rightly or wrongly.

Uh oh, our enemies may be listening and/or aware we’re listening.  Tinker Tailor Soldier Spy, same old, right?  We must be the worst FIRST to save the world from people just like US!

Us and Them…how will Lin make us right and them wrong?  He can do it, I’m sure, even if he’s not privileged to say how.

Related to this is the all-too-real worry about proliferation, that our adversaries will develop or acquire the same technologies and use them against us. This has borne out already with every military technology we have, from tanks to nuclear bombs to stealth technologies. Already, over 50 nations have or are developing military robots like we have, including China, Iran, Libyan rebels, and others.

Be afraid, but be ethically confident!

CONCLUSION

The issues above–from inherent limitations, to specific laws or ethical principles, to big-picture effects–give us much to consider, as we must. These are critical not only for self-interest, such as avoiding international controversies, but also as a matter of sound and just policy. For either reason, it’s encouraging that the intelligence and defense communities are engaging ethical issues in robotics and other emerging technologies. Integrating ethics may be more cautious and less agile than a “do first, think later” (or worse “do first, apologize later”) approach, but it helps us win the moral high ground–perhaps the most strategic of battlefields.

Apparently, the example that comes to mind when Lin writes of a concern regarding “inherent limitations” is a national “self-interest” and not a human one.  Can you imagine a “sound and just” policy coming out of the gang at Cal Poly after reading this?  But finally, my favorite rationale is the one he has the audacity and lack of scruple to end on: doing the ethical work will give us the moral high ground.

No more historical quibbling over US war atrocities like dropping exploding suns on the innocents.  Patrick Lin will have devised a sound and just argument for our National Interest.

Huzzah, humanity.  This is how the world ends, this is how the world ends, this is how the world ends…

 
