AI Ethics: Effects on human life

  • Posted on: 21 March 2019
  • By: Juho Vaiste

“These technologies and the systems they enable are rapidly shifting behaviours and creating new rules for human interaction by virtue of the incentives and boundaries built into their design.”
(World Economic Forum, 2018).

Even the applications of narrow AI will radically affect our lives and how we see our world. The initial discussion concerns how much power we want to give to technology and how much we let technology direct our society and its development. We still understand very little about the digitalization of society.

The borderline between a critical perspective on technological development and full-on technology optimism is thin. It is a question of how we want to understand human life and the value of humankind. The development of recent decades has made our world far more efficient, but it has also, irrefutably, changed and eliminated some parts of our lives, especially forms of social interaction.

When one follows the development of AI and robotics, it seems clear that social, care, and sex robots, as well as AIs capable of advanced conversation, will be a reality very soon. We should likely get used to seeing and meeting robots, even in our ordinary lives. Social robots will fulfill many of the needs in which we currently experience shortcomings. However, does that isolate people even further from each other? Is it right? Does it matter?

Chat and conversation bots are developing fast, and they will change how we think about human agency, communication, and relationships (Neff & Nagy, 2016). In 2014, the chatbot Eugene Goostman was claimed to have passed the Turing test by convincing 33% of the judges that it was a human being (Liljas, 2014).

One major societal risk to be considered in AI development is the risk that our democratic system will be weakened or will fail (O'Neil, 2016). The threat to democracy illustrates the nature of AI/ML-related ethical risks: many of the risks are strongly interlinked and involve multiple intertwined attributes and details. The threat to democracy arises from the risks of manipulation, overly powerful internet platforms, biased information, and low information literacy.


Added sources and references:

Sharkey, N., & Sharkey, A. (2010). The crying shame of robot nannies: an ethical appraisal. Interaction Studies, 11(2), 161-190. https://www.researchgate.net/publication/228785014_The_crying_shame_of_robot_nannies_An_ethical_appraisal

Neff, G., & Nagy, P. (2016). Automation, algorithms, and politics: Talking to bots: Symbiotic agency and the case of Tay. International Journal of Communication, 10, 17.

O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books. ISBN-13: 978-0553418811.

Liljas, P. (2014). Computer posing as teenager achieves artificial-intelligence milestone. Time. https://time.com/2846824/computer-posing-as-teenager-achieves-artificial-intelligence-milestone/

Ellis, D. A., Davidson, B. I., Shaw, H., & Geyer, K. (2019). Do smartphone usage scales predict behavior?. International Journal of Human-Computer Studies, 130, 86-92.

Ellis, D. A. (2019). Are smartphones really that bad? Improving the psychological measurement of technology-related behaviors. Computers in Human Behavior, 97, 60-66.

Orben, A., & Przybylski, A. K. (2019). The association between adolescent well-being and digital technology use. Nature Human Behaviour, 3(2), 173.

Susser, D. (2019). Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures. http://www.aies-conference.com/wp-content/papers/main/AIES-19_paper_54.pdf