Who’s afraid of the Technological Singularity?
Reposted from LinkedIn Pulse, October 17, 2016
For the last few weeks I have been immersed in presentations, conversations and thoughts on Artificial Intelligence and the future of machine intelligence. I summarised my thoughts on whether AI is hype or reality in a blog post today on Pivigo, and through these conversations I was introduced to the “Technological Singularity”. I hadn’t heard of it before, but it sounded scary. The idea is that machine intelligence will soon reach human-level intelligence, at which point it will begin to improve itself and set off on an exponential evolutionary track that quickly leaves humans behind, thinking of us as we think of less intelligent creatures such as monkeys, dogs or even insects.
Two questions sprang to mind when I heard about this. First, is it true, are we nearly there? And second, would that be such a bad thing? On the first question, I think not. Sure, Google DeepMind’s victory at Go was a breakthrough. Computers and algorithms now surpassing human skill at classifying text or images, great! But all of these are ultimately very “logical” tasks. There is an element of randomness, but not chaos, in these tasks, and machines are far from being able to understand emotions, ask curious questions and deal with the chaos that is the Earth’s system. Until the machine can understand its own limitations, “think…