Cardiff Business Club interviews: Professor Sir Nigel Shadbolt
Date Posted: 22 February 2019
The event, kindly sponsored by Brooks Macdonald, was attended by over 200 people from across the Welsh capital.
Professor Shadbolt has had an extensive and impressive career as one of the originators of the interdisciplinary field of Web Science. He is Chairman of the Open Data Institute and Principal of Jesus College, Oxford.
Often labelled a pioneer of computer science, he discussed with us the impact robots will have on the workforce, and why our own stupidity is a greater threat than technology.
If you would prefer to listen to the podcast version of the interview, click here.
The media is awash with stories about robots replacing workers. Are we totally missing the point of what automation can do for us or are we just a nation of neg heads?
It’s understandable that people think new technology might threaten their jobs. Since the earliest industrial revolutions, we have seen jobs displaced. But ultimately, each one has resulted in the creation of new jobs - the machines supplement what we do.
We’re very creative as a species, so whilst there may be some jobs lost to technology there will always be new professions and new types of work that we haven’t thought of yet.
There are people now working in digital services, when these jobs just didn't exist before. I guarantee there'll eventually be a new type of accountant; they will be called an algorithmic accountant, and they will worry about the quality of the data going into the business.
You are quoted as saying that ‘it’s not artificial intelligence that should terrify you, it’s natural stupidity.’ Can you expand on that?
I think the worry is that AI will wake up and become self-aware, but that is the image Hollywood portrays. The view of self-aware machines as conniving, or evil, is just science fiction.
The much more pressing worry is poor human decision-making: not thinking about the consequences of putting algorithms that aren't self-aware and don't have moral judgement in charge - whether that's running critical infrastructure or making decisions in our financial industries. Of course, we're seeing increasing levels of automated decision-making, but it needs humans to make those critical decisions about what kind of quality we're after.
The workplace is evolving at a rate faster than we've ever experienced before, and Brexit will come into effect too. Are we transforming quickly enough to remain competitive once we leave the EU?
We're a very effective digital economy; we're an innovation nation. Our science and engineering are among the best in the world, and for every pound put into research we produce more in terms of outputs than our competitors.
Historically we've done very well out of the EU in terms of receiving research funds, and the government now assures us that this funding will be replaced going forward. Whatever the deal is, one hopes it won't be a disorderly exit. My expectation is that we will continue to play a leading role, because we have a great many experts in AI and data science.
Do you think London’s position as the leading tech hub in Europe could be under threat?
Well, I think that we will have to keep investing in talent to make sure that continues. Not just in London either - we have strong metropolitan centres like Edinburgh and Cardiff, too. It's about human talent at the end of the day, and making sure we invest enough in our young people's education.
As we hand over more tasks to AI, there are also some serious ethical questions that need to be asked regarding accountability - in the case of autonomous cars and drones, for example. Who is responsible at the end of the day?
We're still working that out. I think that people have suddenly become aware that computing and AI is a dual-use technology. It's the same as when, back in the day, we were developing chemical science, biological science and nuclear science - they all had applications which were for good.
However, they were all weaponised. They were turned into technologies that could do considerable harm and damage. In those cases, we had to think quite hard about limitation treaties and safeguards around how they were used. That is absolutely the case with modern AI and computing.
The UK government has published its final report into fake news, denouncing Facebook, and other similar platforms, as ‘digital gangsters’. Do you think that these companies need to take greater responsibility to protect the public?
I think that what we see and understand is that there has been this enormous concentration of power within a few big players. We have to question whether that concentration of power and data is always in our interest as consumers or citizens.
These platforms have to stand up and explain how they are going to be more accountable in the future. It's not that we want to tear down these organisations - they've been hugely beneficial - but as they have evolved, there are all sorts of consequences we didn't anticipate.
For some people, it’s around elections and for other people it’s about understanding if their data has been used appropriately or fairly. We need strong laws and regulations about what we think is acceptable.
In your 2008 book, The Spy in the Coffee Machine, you talked about how technology has eroded privacy. Eleven years on, do you think this message has got through to people, particularly in light of the Cambridge Analytica debacle?
We have to be very right-minded about what we want from these technologies: they shouldn't oppress us - they should empower us. If we go back to the big ideas that have driven the development of western democracies, it's all about freedom, autonomy, dignity and self-determination. We may have become a little too enraptured by the idea that we're just a transaction.
You stated earlier that you've been operating in the field for 40 years or so. Has anything surprised you in that time?
I started my PhD in about 1978, and since then there has been a million-fold increase in the power of computers. No other field of technology has undergone that kind of change.
You could say that AI is conquering the world's challenges, but of course it has also raised just as many questions as it has answered, and the rate of change is exciting. Looking ahead, I imagine a world where people who are paraplegic can control an AI-driven exoskeleton to move them around.
It also has the potential to help those who are physically impaired – to give them a new hope. Naturally there will be downsides that we’ll have to take into account and adapt to, too.