We are now entering the age of robots. Robotics will revolutionize not only our lives but also conflicts among countries. America is in the vanguard of the robot industry, especially in the military field. Before long we will face fighting robots with AI, and we will have to establish robot ethics. It reminds me of Asimov's First Law: a robot may not harm a human being or, through inaction, allow a human being to come to harm. Do you have any ideas on the subject?


Replies to This Discussion

Masahiro, this is a great topic. I recommend the book Wired for War by P. W. Singer. He spoke a couple of years ago at Carnegie Council, and you can find the video and transcript here.

Another important researcher on this topic is Ronald Arkin of Georgia Tech. Few people have thought about military robots more than he has -- from ethics, to international law, to operationalizing robot "rules of engagement" as a military consultant. Arkin is quoted in this recent article from the Chronicle of Higher Education: "Robots at War: Scholars Debate the Ethical Issues."

You might also find our most recent Carnegie Ethics Online column of interest. In it Global Ethics Fellow Nicholas Rengger and Caroline Kennedy discuss "The New Assassination Bureau: On the 'Robotic Turn' in Contemporary War."

I hope this is helpful, and I look forward to continuing the conversation. --Evan

Evan, thanks for your recommendation. I'll study the opinions of these people. We have a famous character in Japanese robot manga, "Astro Boy" (Tetsuwan Atomu, literally "Iron Arm Atom"), created in the 1950s by Osamu Tezuka. He depicted a future society in which robots and humans live together. Atomu is an intelligent robot who serves human beings. He helps people in great trouble and sometimes fights crime and evil. I think we are now facing the futuristic world Osamu Tezuka depicted. The manga is becoming real.

Assume the robot is made sufficiently intelligent to understand the secondary effects that its first-programmed task or role has on the human population. It will then be in a state of dilemma.

It is all very well having the job of collecting rubbish and disposing of it as if this were an automatic process, but it takes energy and certain chemicals from the biosphere, the expenditure of which has harmful effects too. Some of these may even directly affect the human population and individuals.

So the robot intelligence (and we too, for that matter) needs some kind of "awareness override" control that allows us to ignore any signals which initially say that what we are doing is not so good for everyone. Do we drive to the supermarket when the fuel burnt and the CO2 produced are harmful? Do we allow ourselves to consume our purchases knowing that their containers will become a health hazard once they are likely to be improperly disposed of? An incompletely programmed robot might think so!
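The "awareness override" idea above can be sketched in a few lines of code. This is purely illustrative (all names, weights, and the threshold mechanism are hypothetical, not any real robot architecture): the robot sums a task's known secondary harms, and an explicit override threshold decides when those harm signals may be ignored so the task can still proceed.

```python
# Hypothetical sketch of an "awareness override": a task carries a primary
# benefit and a set of known secondary harms; an override threshold says
# how much total harm the robot is permitted to ignore.

from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    benefit: float                                        # primary value of the task
    secondary_harms: dict = field(default_factory=dict)   # harm name -> severity

def should_proceed(task: Task, override_threshold: float) -> bool:
    """Proceed only if total secondary harm stays under the override threshold.

    A threshold of 0 means any known harm blocks the task (paralysis by
    dilemma); a high threshold means harm signals are effectively ignored.
    """
    total_harm = sum(task.secondary_harms.values())
    return total_harm <= override_threshold

rubbish = Task(
    name="collect and dispose of rubbish",
    benefit=1.0,
    secondary_harms={"fuel burnt": 0.3, "CO2 emitted": 0.2},
)

# With no tolerance for harm, the robot refuses the job:
print(should_proceed(rubbish, override_threshold=0.0))   # False
# With a modest override, it does the job despite known side effects:
print(should_proceed(rubbish, override_threshold=0.6))   # True
```

The design choice here mirrors the dilemma in the text: set the threshold too low and the robot (or the human) can justify doing nothing at all; set it too high and the "awareness" becomes decorative.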

Thanks, David. Your discussion reminds me of another dilemma.    

Robots work in the 3D workplaces (Dangerous, Dirty, and Dull). This has a deep significance, because humans used to do that work. What does the replacement mean? It is great, because humans are freed from dangerous, dirty, and dull jobs. On the other hand, it is sad, because they have lost their jobs. It is an antinomy. So what?

Thank you too.

What I was aiming for was an introduction to a robotic (and consequently a human) super-ego or conscience. However, you are saying something else which is related, but surely it is an older story. The introduction of automatic machinery in the 1800s caused many manual workers to lose their jobs. This was part of the Industrial Revolution, which today extends to computers and will finally include robots.

As far as I can tell, the displaced workers had no easy solution until, some years later, they were needed to maintain the faster-working machines. Then, after many trials and tribulations beginning with child-labor laws and ending with trade unionism, certain rules were established as to who had the right to perform what. But even this stability seems to be failing today, with globalised cheaper goods coming from abroad and the fear of unemployment at home driving down wages. Do we have to go back to the bad old days of customs duties on imports, protected industries, and higher prices?

Consequently, I foresee that a serious problem will develop with autonomous robots when organized groups of them begin to monopolize particular kinds of production and service provision. In order to cause the least harm to their human masters, these robots will need to be programmed to compete as individuals, without being allowed to join forces to create monopolies! Can we be sure that their industrial developments will be any more ethical than our own dismal failures here? (Asimov's robotic laws would need to comply with a human kind of ethics too. And of what would robot/robot ethics be comprised?)

Thanks. Surely it is important for us to continue to think about the ramifications of robotization. I agree with you that Asimov's robotic laws would need to comply with a human kind of ethics too. Asimov's First Law: a robot may not harm a human being or, through inaction, allow a human being to come to harm. It reminds me of the first principle of the Hippocratic Oath: I will prescribe regimens for the good of my patients according to my ability and my judgment, and never do harm to anyone. The robot is a product of technology, and humans develop and use that technology. If future robots could possess the same intelligence as humans do, they would be able to escape our control and become independent. Is that groundless nonsense?

