Getting excited about future technology is a terribly easy thing to do. Especially when you’re surrounded by the latest innovations, as we are in the Pioneers office. And big events like Pioneers’17, of course, are like our office run amok. For two days next week, the venerable windowpanes of the Hofburg will rattle with the buzz surrounding the latest in human progress.
But while the thrill of the new might be intoxicating, it should never stop us thinking about where all these novelties are taking us. We’re well aware of that. Which is why we’ve got experts like Jaan Tallinn speaking at Pioneers’17. The Future of Life Institute co-founder, who was also a founding engineer at Skype, will discuss technology and existential risk. Will tech empower or embattle humanity? A most important question, we think you’ll agree!
If one thing’s a no-brainer, it’s the fact that we need to be thinking ahead, particularly when it comes to controlling the ultimate possibilities (and dangers) of artificial intelligence. It’s something the Estonian thought leader is spending a growing share of his time doing. Powerful stuff is coming our way for sure. It’s how we manage it that’s critical.
“Smooth transitions are better than disruptive and catastrophic ones,” says Tallinn. “Even people who think AI is good no matter what seem to be acknowledging that disruptive changes are bad! So we definitely need to think about how to make upcoming changes smoother and more controllable.”
Worst case scenario for Tallinn, who counts Elon Musk among his fellow Future of Life Institute co-founders?
“Artificial intelligence is not just a tech, it’s a meta-tech,” he explains. “It’s a technology that can develop its own technology. And the AI concern I have comes when you remove humans from the research loop. In other words fully automated technological progress.”
It’s easy to agree that this sort of situation might not be optimal. Same goes for the idea of fully automated warfare, another undesirable Jaan also highlights. But it’s an awful lot harder to see how we’re going to avoid these worst-case scenarios in a free-for-all world economy with an unfortunately high proportion of destructive, malicious individuals. Do we need agreements like the Geneva Conventions or the Antarctic Treaty System to prevent things getting out of hand?
It’s a mite too soon for that, says Jaan. But we should definitely be getting things in place. “With the exception of military, where the time is ripe for international agreements, it seems too early to talk about concrete treaties when it comes to tech development.
“But I think it’s valuable to increase our ability to do that kind of co-ordination. If in five or ten years we learn that we really need to constrain certain scientific research avenues, it would be great if we had the ability to do so.
“We shouldn’t have more rules right now, but we should at least think about how we increase our ability to create more enforceable rules when it becomes clear that we need to do so.”
But it’s not just AI’s direct technical abilities that we should be keeping in mind. It’s also the indirect impact we might face. The fear of rampant unemployment in the medium term has made Universal Basic Income a hot topic – and one which will get plenty of attention at Pioneers’17 – but Jaan believes we need to think outside of the box defined by our current system of people, cash and jobs.
“We need to go beyond economic interventions to ensure the long term prosperity of humanity, because there’s uncertainty about how long a human economy will last at all. AI goes way beyond economics. Anything that doesn’t need human assistance is not going to be participating in human economy.”
And lest we get hung up exclusively on AI, it’s worth remembering other areas of progress that are basically good, but have the potential to get out of control.
“Synthetic biology is second on my list of potentially dangerous technology,” says Jaan. “I’m glad people are thinking more about bio-safety. I’m particularly interested in interventions that would help to stop, slow or reverse ageing…but the worst possible outcomes of biotech and AI tech are very similar. They would basically look to us humans like the planet becoming uninhabitable.”
One hopes it doesn’t come to that! And it really shouldn’t – as long as we keep a close eye on a couple of critical areas, and stand ready to limit things if we need to.
“With most of tech we don’t have to worry about in terms of anything catastrophic,” says Jaan.
And here’s to that!