This appeared on the CNBC-TV18 website.
One of the more familiar tropes in Hollywood is that of the intelligent machine that sees humankind as a threat to its survival and annihilates people. In the film 2001: A Space Odyssey, the AI HAL 9000 kills the humans who try to disconnect it. In the popular Terminator franchise, Skynet is an AI, created by a corporation, that turns on humanity to create a dystopian future. While we might write these off as the extreme imagination of highly creative and paranoid minds, we need to take the warnings of scientists and technologists rather more seriously.
As more and more of us go online and conduct ever-increasing numbers of transactions, engagements, and interactions, the great machines learn more about our minute likes and dislikes than we ourselves possibly know. And while that is a great convenience when it comes to shopping, it could have drastic consequences if used to manipulate us in other ways. As technology advances, and as gigantic corporations control ever more data about us, which they spin, churn, and analyse to learn still more, experts across the world have begun to question whether such data in the hands of a few giant technology monopolies is good for humankind at large, and for democracy.
The quantum of data we give away daily to companies that track our interests and engagement, and artificial intelligence, where machines make decisions based on various parameters, have usually been discussed separately. Yet every time we use Google to search, Amazon to shop, or Netflix to watch movies, we provide more data about ourselves, which helps these systems learn to offer us better options. Scientists and technologists have begun to worry about this. In their view, AI is not something in a distant future; we are already experiencing it, in a limited form, now. A recent documentary, "Do You Trust This Computer?", looks at the interconnected worlds of large corporations, big data, machine learning, and AI. In it, leading technologists, tech entrepreneurs, and academics call for reflection on where we are going, and what we will find when we get there.
Elon Musk has been one of the most active voices cautioning us against the dangers of machines growing too intelligent. He has said that AI is more dangerous than nuclear weapons, and that its use needs to be regulated. The late Stephen Hawking held similar views. In one of his last speeches, in 2017, he said: "Success in creating effective AI could be the biggest invention of this period of civilization, or the worst… We just don't know, so we cannot know if we will be infinitely helped by AI or ignored by it and side-lined, or conceivably destroyed by it." In 2015, Musk, Hawking, and many others working in the field of technology, including the founders of DeepMind, signed an open letter asking for research to be conducted on the societal impact of AI.
But it is not just about gigantic corporations using data and AI to manipulate you into using their products. It is also about the threat posed to constitutional democracy by the concentration of power in the hands of the internet giants who are researching AI. Paul Nemitz, a principal adviser at the European Commission, has sounded a warning bell about the impact of big data and AI on democracy. In a recent essay in a Royal Society journal, he argues that democracy is at risk from those who control large AI projects, the multinationals that are already harvesting our data for more intelligence. He accuses the internet giants of thriving on a culture of 'lawlessness and irresponsibility', and he believes that the discussion of AI has to go beyond ethics and move into the realm of regulation.
Right now, there are three main regulatory models governing how data is collected, what machines learn from that data about our behaviour, and how both are overseen. The first is the American model, where things are pretty much unregulated: digital is treated as a space for free enterprise, and the possibility of monopolies is ignored. The Cambridge Analytica scandal, and accusations that Russia manipulated data to influence the US elections, have woken Americans up to the possibility that data can be misused by corporations and countries. President Trump has threatened regulation, but hasn't followed through on it. Yet. The second is the Chinese model, where the Communist Party seems fully aware of what it can do with data and with the direction of AI research, and controls both with an iron fist, using data to keep tabs on the population and developing AI as a tool of aggression. And the third is the European model, where the EU has put some checks on the massive data stores that large multinationals like Facebook, Amazon, and Google are able to collect.
India has yet to decide how it will deal with data, and with the ways data can be used to manipulate us. As machines become faster and can process more data more quickly, and as more and more people get onto the internet and use it for a variety of activities, more data will be available, allowing those who control it to manipulate audiences better. There is, of course, the right to privacy. But when it comes to the digital footprints we leave behind daily, which give systems ever more insight into us, there is not much that can be done under the model we have today.
To assume that a system this powerful will continue without regulatory oversight is naïve. Regulation is coming; the only questions are when, and how strict. Given the nature of the internet, which crosses all borders, and the nature of data and machine learning, it may be more sensible to reach a worldwide agreement on what needs to be regulated and what needs to be kept free. And this needs to be hammered out before it is too late.