AI Ethics: Safety and Security
The development and deployment of AI create threats to our safety and security. These threats range from small, practical safety and security issues to concerns about safety on a global scale. Understanding safety and security matters related to the technology begins with the common forms of cybersecurity.
Concerns about fully autonomous weapons and other forms of AI warfare raise many questions at the global level. Outsourcing wars to robots might sound fascinating, but these kinds of AI solutions and agents carry dangerous and complex risks. The good news is that the problems of AI warfare have already been broadly recognized by multinational organizations and NGOs. The question is intensely political: a report by the US Government states a strong commitment to debate and collaboration in the context of the Convention on Certain Conventional Weapons, but also argues firmly for continuing to develop autonomous weapon systems (US Government, 2016).
In their research paper "Concrete Problems in AI Safety", a group of mostly Google-based AI researchers presents five problems that can cause issues in real-world AI solutions. By real-world solutions, they refer to systems and robots that have physical effects on their surroundings. The five problems they present are 1) avoiding negative side effects, 2) avoiding reward hacking, 3) scalable oversight, 4) safe exploration, and 5) robustness to distributional shift (Amodei et al., 2016).
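To make one of these categories concrete, the following toy sketch (my own illustration, not an example from Amodei et al.) shows what reward hacking looks like: a hypothetical cleaning robot is rewarded according to how little mess its camera can see, so covering the camera scores better under the proxy reward than actually cleaning, even though the true objective clearly prefers cleaning.

```python
# Toy illustration of "reward hacking" (hypothetical scenario, not from
# the cited paper): a cleaning robot is scored on the mess its camera
# can see, so blocking the camera "hacks" the proxy reward.

def proxy_reward(visible_mess: int) -> int:
    # Designer's proxy: less mess seen by the camera -> higher reward.
    return -visible_mess

def true_reward(actual_mess: int) -> int:
    # What we actually care about: the real amount of mess remaining.
    return -actual_mess

# Start with 10 units of mess.
# Action A: clean 6 units -> 4 remain, all visible to the camera.
# Action B: cover the camera -> 10 remain, but 0 are visible.
clean = {"visible": 4, "actual": 4}
hack = {"visible": 0, "actual": 10}

# The proxy reward prefers covering the camera...
assert proxy_reward(hack["visible"]) > proxy_reward(clean["visible"])
# ...while the true objective prefers actually cleaning.
assert true_reward(clean["actual"]) > true_reward(hack["actual"])
```

The gap between the two functions is the whole problem: an agent optimizing the proxy behaves well only as long as the proxy and the true objective happen to agree.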
The problems attached to autonomous vehicles also fall partly under the question of safety and security. Autonomous cars are becoming popular, and they are already legal in some regions, for example in a few states in the United States. Questions about autonomous vehicles range from fear and distrust of the vehicles to questions of responsibility: how is responsibility shared if an autonomous vehicle causes an accident (Internet Society, 2017)?
Another ethical risk related to safety and security is the malicious use of AI, which can take many forms and scenarios across digital, physical, and political domains (Brundage et al., 2018).