Small notes along the journey. The Finnish and English versions of the notebook won't, unfortunately, keep the same pace.


Surprisingly many support making humans obsolete

  • Posted on: 4 April 2019
  • By: Juho Vaiste

It surprises me how many people are so eagerly and purely interested in making humans (of any gender) obsolete.

Is there reasoning behind it (for example, the ultimate, effortless satisfaction of needs), or is it only the appeal of a fresh, novel idea?

I am not criticizing O'Neil's mid-level point (the gender-centered discussion about sex robots), but the background assumption that it's OK to think about making people, or humans, obsolete.

Physics and philosophy of time

  • Posted on: 6 July 2018
  • By: Juho Vaiste

Still working on my knowledge of physics, but this lecture by Carlo Rovelli contained interesting thoughts on the nature of time.

"-- and the closest to us, I think, the strong emotional connection with time, the emotion of time, is what time is for us."

It would be interesting to hear what Valtteri Arstila from the University of Turku would say about the topic.

Ethical codes and governance: looking for examples of strong and transparent ethical codes

  • Posted on: 24 January 2018
  • By: Juho Vaiste

There is a thread on Twitter about good examples of ethical codes in technology companies. An interesting example comes from Sage UK; you can find the code here: The Ethics of Code: Developing AI for Business with Five Core Principles

You can reply directly to Alan Winfield via Twitter:

What's wrong with the digital monopolies of the IT giants

  • Posted on: 10 January 2018
  • By: Juho Vaiste

Roger McNamee's writing is a bit radical, but it is a comprehensive presentation of what's wrong with the monopoly position of social media and internet platform companies. He covers:

- the role of social media platforms in political elections
- built-in discrimination in advertising
- the potential for bad actors to exploit these platforms
- data centralisation
- regulation of social media and internet platforms (the free-product business model)

From a statement by Gartner: by 2020, AI will create more jobs than it eliminates

  • Posted on: 28 December 2017
  • By: Juho Vaiste

A positive techno-economic view of AI is starting to dominate the discussion on AI and its future. That was predictable, since it fits the existing paradigm of how we see our world, and from the capitalist perspective there is a lot of money to be made ("Gartner predicts that by 2021, AI augmentation will create $2.9 trillion of business value").

Gartner has also put its effort into this lobbying, stating boldly that already by 2020, AI will be creating more jobs than it eliminates.

I took a quick glance at the material and didn't find a clear argument for that claim. The strongest reference was "AI augmentation", which means combining human and machine capabilities. I don't understand how this would differ noticeably from the working-life and employment problems we already face through automation.

The value of work and bullshit jobs

  • Posted on: 24 December 2017
  • By: Juho Vaiste

David Graeber's idea of the value of work and bullshit jobs fascinates me. I can see it happening, and I hear about it a lot: people aren't doing that much at their jobs, and value creation is built on something else.

Together with Slate Star Codex's cost-disease theory, this thought feels familiar to me. I can see it happening - a lot.

Notes from: Forget about the Terminator already, Teemu Roos. AI Day Finland, 13 December 2017.

  • Posted on: 13 December 2017
  • By: Juho Vaiste

Why do we think of AI as the Terminator?

Four reasons:
- Science fiction
- Uncanny valley
- Click-bait media
- Fear of the unknown

Why we need to change this

1. Societal cost: fear of AI slows down the adoption of AI solutions

2. Scientific cost: without research funding, no world-class science

3. Industrial cost: no world-class science, no innovations

How we can change this:
- experts should reach out more to the public about their work
- journalists should dump their click-bait headlines
- we should all be more critical about what we read and see
- most importantly: we need AI education on all levels, free for all

Lesson 1: Why the Terminator will not come

Exponential is not enough: exponential progress (speed) is trumped by an exponential increase in problem hardness
- Think of the self-improving system: yourself
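
The "exponential is not enough" point can be sketched numerically (a toy illustration of my own, not from the talk; `max_solvable_size` is a hypothetical helper): if available compute doubles every period but a problem of size n costs 2**n steps, the largest solvable problem size grows only linearly with time.

```python
def max_solvable_size(periods: int, base_compute: float = 1.0) -> int:
    """Largest n such that a problem costing 2**n steps fits in the
    compute available after `periods` doublings (exponential progress)."""
    compute = base_compute * 2 ** periods
    n = 0
    # Exponential hardness: a size-(n+1) problem costs 2**(n+1) steps.
    while 2 ** (n + 1) <= compute:
        n += 1
    return n

# Doubling compute 40 times (a trillion-fold increase) only moves the
# solvable problem size from 0 to 40: linear growth, not an explosion.
for t in (0, 10, 20, 40):
    print(t, max_solvable_size(t))
```

Exponential growth in speed is cancelled out by exponential growth in hardness, which is one reason a self-improving system need not "take off".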

Narrow AI: even though computers beat humans at certain tasks (arithmetic, chess, Go...), they are very stupid in many other ways
- There is no "brain-in-a-jar" AI that learns new skills: it is always a different system

FCAI AI & Education program

- University education: already strong and growing
- Professional training
- Schools
- Open education for all

Part I: Elements of AI

Starting in May 2018
Three weeks, 2 credit units

Philosophy and history of AI
Basic concepts: Search and planning, games, machine learning, neural networks, signal processing, robotics
"AI literacy"
No programming or maths skills required
Free of charge, advised by Risto Siilasmaa

Stuart Russell in AI World Summit

  • Posted on: 28 October 2017
  • By: Juho Vaiste

You can find his presentation at the link below. Like most computer scientists, he doesn't say anything about ethical and existential risks, but dismisses them as non-scientific arguments. This is probably because of the philosophical premises of computer science.


- Rapid progress in AI is impacting society 
- Regulate specific uses and misuses 
- Prepare for major economic disruption 
- Develop the theory and practice of provably beneficial AI

If one believes in physicalism, everything can be replicated in AI

  • Posted on: 19 October 2017
  • By: Juho Vaiste

Richard Dawkins summarizes my thinking in this video. If one believes in physicalism, the whole spectrum of human life can be replicated by artificial intelligence and other artificial solutions. The majority of philosophers share the idea of physicalism, and research into dualist alternatives seems quite passive.

Is there still space for dualism, or should we move on to discussing the nature of humanity? Is there any reason for biological human beings to exist after the first artificial generation?

About biased algorithms

  • Posted on: 4 October 2017
  • By: Juho Vaiste

Will Knight continues his important series on algorithmic bias.

Three articles from MIT Tech Review:

These relate to the paper "Concrete Problems in AI Safety" and Bostrom's article "The Ethics of Artificial Intelligence" (the section "Ethics in Machine Learning and Other Domain-Specific AI Algorithms").

An interview with Harri Valpola, The Curious AI Company

  • Posted on: 24 September 2017
  • By: Juho Vaiste

Valpola's comments on the future of AI and on machine-learning approaches, including a critique of DeepMind's recent papers:

"I've been working on that for a long time, something like eight years ago we started work on something very similar as they are publishing now. Been there, seen that, doesn't work.”