AI Ethics: Employment and Economic Impacts
The change and challenge that AI poses for employment and the distribution of income is potentially enormous (Frey & Osborne, 2017). The share of tasks and jobs that can be automated and replaced is high, and under our current economic paradigm this possibility will be exploited by companies and the public sector alike. Critics of this view typically point to the Luddites and promise that plenty of new jobs will emerge.
In its report on the future of work, the World Economic Forum (2018) introduces eight possible scenarios. The scenarios are built on a framework of three variables: technological change, learning evolution, and talent mobility. Varying the level of each variable (steady vs. accelerated) and combining the levels yields eight scenarios for the future of work: workforce autarkies, mass movement, robot replacement, polarized world, empowered entrepreneurs, skilled flows, productive locals, and agile adapters.
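The count of eight scenarios follows directly from the combinatorics of the framework: three variables with two levels each give 2³ = 8 combinations. A minimal Python sketch of that enumeration (the "steady"/"accelerated" level labels follow the report's framing; which combination maps to which named scenario is not specified here):

```python
from itertools import product

# The three WEF (2018) framework variables, each with two levels.
variables = {
    "technological_change": ["steady", "accelerated"],
    "learning_evolution": ["steady", "accelerated"],
    "talent_mobility": ["steady", "accelerated"],
}

# Cartesian product of the level sets: 2 * 2 * 2 = 8 combinations,
# one per scenario in the report's framework.
combinations = list(product(*variables.values()))
print(len(combinations))  # 8

for combo in combinations:
    print(dict(zip(variables.keys(), combo)))
```

Each printed dictionary is one point in the framework's scenario space; the report assigns a named scenario (workforce autarkies, mass movement, and so on) to each such combination.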
Right now, it seems unlikely that the development of AI would not have a substantial impact on employment. Many jobs will be automated, and the number of new jobs and tasks created in their place is likely to be small. One essential reason is that the learning and know-how curve seems hard to climb for a significant share of the population (Krugman, 2013). Other scholars share this view from their own perspectives (Hanson, 2001; Brynjolfsson & McAfee, 2011).
The economic change is not necessarily negative. Another popular view imagines a world without work, in which people are supported by a basic income (Vice, 2015). Contemporary experiments with basic income models can be found in Kenya, Finland, the United States, and the Netherlands (Business Insider Nordic, 2017). The essence of the basic income model is to give “all citizens a modest, yet unconditional income, and let them top it up at will with income from other sources,” as one of its central defenders, Philippe Van Parijs, has defined it (Van Parijs, 2003).
Another perspective is that the capital share of gross production seems to be rising (Piketty, 2015). At the same time, however, the broader population may gain access to investing, and small capital incomes might become more common. In the spirit of basic income, building small capital flows might become more important, even a norm, for coping in the changing future economy.
If the share of work in human life decreases, the problem of meaning might be the next societal phenomenon. Western societies have a long tradition of building lives and communities around working life, but in these future scenarios a change of mindset might be needed. Unpleasant news about rising mental health problems and drug use may indicate that some of us are already struggling with the new order in our societies. Some weak signals in the literature point to a search for new spirituality in life (Harari, 2017). That might be what societies need, and what they are waiting for.
Brynjolfsson, McAfee, and Spence (2014) argue that progressing technology will create an even more centralized economy, in which technological progress benefits only a small group of people, especially rockstar innovators and executives. They present two examples: first, the classic Instagram business case, in which a multi-million-dollar business was created by only 14 employees and then sold for almost one billion dollars; second, executives in the United States have enjoyed relatively enormous salary raises, earning 70 times as much as other workers in 1990 and 300 times as much by 2005.
Economic impacts can also arise from the algorithm and data biases presented in the earlier chapter, Concrete Ethical Risks in Machine Learning Projects. Discriminatory algorithms may maintain discriminatory processes and even strengthen them as more decision processes are outsourced to algorithms. The trickiest impacts hide in processes that one does not even recognize as impactful decision processes or sources of discrimination: for example, what are the economic impacts of worse Google search results, mismatched recommendations from web services, or unfortunate dating-app experiences? Economically, these possible processes of algorithmic bias or discrimination should be traced back to the economic theories of discrimination (Goodman, 2016).