Summary of AI and robotics ethics, part I: laws and principles:
In this series:
1. Summary of AI and robotics ethics, part I: laws and principles
2. Summary of AI and robotics ethics, part II: reports, guidelines, strategies
3. Summary of AI and robotics ethics, part III: selected articles (coming)
All credit for this post goes to Alan Winfield, who did important work putting this collection together. This blog series started when I decided to translate into Finnish all the principles Alan picked.
Original text and collection is here: http://alanwinfield.blogspot.fi/2017/12/a-round-up-of-robotics-and-ai-ethics.html
Asimov's Three Laws of Robotics (1950)
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Murphy and Woods' Three Laws of Responsible Robotics (2009)
1. A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.
2. A robot must respond to humans as appropriate for their roles.
3. A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws.
EPSRC Principles of Robotics (2010)
1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
2. Humans, not robots, are responsible agents. Robots should be designed and operated as far as practicable to comply with existing laws, fundamental rights and freedoms, including privacy.
3. Robots are products. They should be designed using processes which assure their safety and security.
4. Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
5. The person with legal responsibility for a robot should be attributed.
Future of Life Institute Asilomar principles for beneficial AI (Jan 2017)
The ACM US Public Policy Council Principles for Algorithmic Transparency and Accountability (Jan 2017)
Japanese Society for Artificial Intelligence (JSAI) Ethical Guidelines (Feb 2017)
Draft principles of The Future Society's Science, Law and Society Initiative (Oct 2017)
Montréal Declaration for Responsible AI draft principles (Nov 2017)
Additional info and background: https://www.montrealdeclaration-responsibleai.com/the-declaration
IEEE General Principles of Ethical Autonomous and Intelligent Systems (Dec 2017)
The organization behind these principles: the IEEE Standards Association's Global Initiative on Ethics of Autonomous and Intelligent Systems
Background and additional info: http://standards.ieee.org/develop/indconn/ec/ead_general_principles_v2.pdf
See also "Why Principles Matter" by Mark Halverson, co-chair of the IEEE General Principles committee.