TAST: Are all AI agents designed?
TAST - Against the "technology as a tool" discourse. This blog series presents my views against common claims that are made to understate the moral dimensions of AIs, robots and machine learning agents, and to frame AI technology as a mere tool. 1) All AI agents are designed 2) The view against computationalism 3) The anthropomorphized view of intelligence. The themes overlap each other, and I might add more themes to the list.
A common claim is that all AI systems, robots and machine learning solutions are designed: everything they do originates with a human designer and implementer.
My counterargument is that there are situations, or at least concrete thought experiments, where this is not the case.
1) Learning, and learning things far from what was intended: Computer scientists seem strangely at peace with learning algorithms, even though many of those algorithms are said to be impossible to understand even for their developers. No one argues that those systems aren't designed by their developers, but the developers' ability to design the goals and objectives seems questionable.
There are examples of AI agents having learnt things far from what was intended, approaching serious and dangerous impacts such as harming or killing people.
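The gap between the designed objective and the learnt behaviour can be shown even in a toy setting. The sketch below is my own illustration (the MDP, its states and its rewards are all invented, not taken from any real system): the designer intends the agent to reach a goal state, but a small "progress bonus" added to the reward makes a standard tabular Q-learner prefer loitering forever instead of finishing.

```python
import random

random.seed(0)

N_STATES = 5            # states 0..4; reaching state 4 is the *intended* goal
ADVANCE, STAY = 0, 1
GAMMA, ALPHA, EPS = 0.95, 0.5, 0.2

def step(s, a):
    """Toy MDP with a mis-specified 'progress bonus' at state 2."""
    s2 = min(s + 1, 4) if a == ADVANCE else s
    done = (s2 == 4)
    reward = 10.0 if done else 0.0
    if s2 == 2:         # the design flaw: +1 every step spent at state 2
        reward += 1.0
    return s2, reward, done

# Standard tabular Q-learning with epsilon-greedy exploration.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
for _ in range(2000):
    s = 0
    for _ in range(50):
        if random.random() < EPS:
            a = random.randrange(2)
        else:
            a = max((ADVANCE, STAY), key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        target = r if done else r + GAMMA * max(Q[s2])
        Q[s][a] += ALPHA * (target - Q[s][a])
        if done:
            break
        s = s2

# The learnt greedy policy loiters at state 2 instead of reaching the goal:
# the discounted value of the bonus loop exceeds the value of finishing.
policy = [max((ADVANCE, STAY), key=lambda a: Q[s][a]) for s in range(N_STATES)]
```

Nobody "designed" the loitering behaviour; it follows from the interaction between the reward specification and the learning algorithm, which is exactly the gap between designing a system and designing its behaviour.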
2) My thought experiment: The starting point is an advanced machine learning model. For simplicity, I use AlphaGo Zero as an example: an advanced reinforcement learning model and system. We take an open source distribution of AlphaGo Zero (e.g. https://github.com/gcp/leela-zero *) and install that RL system on an open server platform. The open server platform and the model implementation give us free hands for setting the RL goal.
Let's assume that all four of these steps are executed by different parties, and that no one is monitoring the system after the setup.
I already see this setup as problematic in terms of responsibility and of sharing the moral consequences of the system.
3a) Taking into account the anonymous and network-based nature of the internet, it seems impossible to trace the party who set the goal for the system (this might be the weak part of the idea above).
3b) Adding blockchain technology and other distribution effects: Now there is one AlphaGo Zero, but there will be others. Open source projects and distributions can be broadly distributed and developed. Storj and other projects are developing distributed computing further. The goal-setting, e.g. in the form of creating rules*, can easily be distributed to large numbers of people and algorithms.
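To make the distributed goal-setting idea concrete, here is a minimal sketch of what "creating rules" could look like in code. Everything here is invented for illustration (the rule names, weights and state fields are my assumptions, not part of any real project): many anonymous parties each submit a small scoring rule, and the system's effective goal is only the aggregate of the pool, authored by no single party.

```python
from typing import Callable, Dict, List

# Hypothetical: each anonymous party submits one scoring rule for a game state.
Rule = Callable[[Dict[str, float]], float]

def rule_territory(state: Dict[str, float]) -> float:
    # Submitted by party A: reward territory.
    return state.get("territory", 0.0) * 1.0

def rule_captures(state: Dict[str, float]) -> float:
    # Submitted by party B: reward captures, at half weight.
    return state.get("captures", 0.0) * 0.5

def rule_speed(state: Dict[str, float]) -> float:
    # Submitted by party C: penalize long games.
    return -state.get("moves_played", 0.0) * 0.01

def combined_reward(rules: List[Rule], state: Dict[str, float]) -> float:
    """The system's effective goal: no single party authored it."""
    return sum(rule(state) for rule in rules)

# Rules could arrive from many untraceable submitters over a network.
pool = [rule_territory, rule_captures, rule_speed]
score = combined_reward(pool, {"territory": 30, "captures": 4, "moves_played": 120})
```

If the pool grows and shrinks as anonymous submissions come and go, the question "who designed this system's goal?" has no clean answer, which is the point of 3a) and 3b).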
* Technical notes
- I don't have a deep technical understanding of reinforcement learning or of AlphaGo Zero's open source distribution. But being familiar with the open source approach and modular thinking, I see no reason why there couldn't be a fully functioning open source implementation of AlphaGo Zero that could replicate the results DeepMind achieved with it.
- RL is technically unfamiliar to me. I don't know how freely and easily you can set different goals for an RL model. But looking at it from an abstract level, shouldn't it be possible to set whatever goal for the model, as long as the goal is codified in a similar way and form as the original goal?
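At the abstract level, this intuition seems right: a typical RL training loop does not care what the goal is, only that the reward comes in the expected form. As a minimal sketch (a toy bandit learner of my own, not AlphaGo Zero's actual training code), the trainer below takes the reward function as a plain parameter, so "setting a different goal" is just passing in a different callable of the same shape:

```python
import random

def train_bandit(reward_fn, n_actions=3, steps=500, eps=0.1, seed=1):
    """Generic trainer: the 'goal' is whatever reward_fn says it is."""
    rng = random.Random(seed)
    value = [0.0] * n_actions
    count = [0] * n_actions
    for _ in range(steps):
        # Epsilon-greedy: mostly exploit the current value estimates.
        if rng.random() < eps:
            a = rng.randrange(n_actions)
        else:
            a = max(range(n_actions), key=value.__getitem__)
        r = reward_fn(a)
        count[a] += 1
        value[a] += (r - value[a]) / count[a]   # incremental mean
    return max(range(n_actions), key=value.__getitem__)

# Two goals codified in the same way and form: only the preferred action differs.
original_goal = lambda a: 1.0 if a == 0 else 0.0
swapped_goal  = lambda a: 1.0 if a == 2 else 0.0
```

Running `train_bandit` with each goal yields a different learnt behaviour from the identical training code, which supports the point: whoever controls the reward specification, not the original implementer, controls what the system pursues.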