Editor’s Note: Abhishek Gupta is the founder of the Montreal AI Ethics Institute, a machine learning engineer at Microsoft, and an AI Ethics Researcher at McGill University. Last week, he won the Community Organizer of the Year award within Montreal’s startup community for hosting the official Montreal AI Ethics Meetup. The award is given to the individual who best mobilizes people in the community for good.
At McGill’s annual Trottier Public Symposium, this year’s theme was artificial intelligence. On day 1, there was a roundtable discussion including the likes of Dr. Doina Precup (Google DeepMind), Dr. Derek Ruths, Dr. Ian Kerr, Dr. Tal Arbel, as well as Alex Shee (ElementAI), where Dr. Joe Schwarcz asked them a series of thought-provoking questions related to the future of technology.
We decided to ask Abhishek those same questions, given that he is perfectly positioned to add uniquely valuable ethics-related insights. Abhishek has been invited to the G7 Multistakeholder Conference on Artificial Intelligence being held on December 6th.
What does the term “artificial intelligence” mean to you?
It’s an umbrella term that encompasses a lot of the things we hear about today, including machine learning, deep learning, and reinforcement learning. AI is an evolving term. If you asked someone what it meant in the ’70s or ’80s, the answer would be vastly different from today’s.
The key idea that underpins all of that is the ability to solve new challenges in a novel environment in a way that we would typically expect from humans. We don’t have an algorithm to solve everything. You can go learn to ski today, or cook Indian food, and you don’t necessarily have to be loaded with specific programs to do that – you can just innately learn new things in new environments.
There are many misconceptions about AI and where AI will take us. Which concerns do you think are legitimate? On what aspect of life do you think AI will have the greatest impact?
The biggest misconception is that AI is some amorphous entity that’s going to subsume everything that humans do and take all our jobs. At the end of the day, it’s just software — at least in the near future. People attribute more “intelligence” to these systems than they actually have.
Transparency, fairness, inclusion, ethics – these are legitimate concerns. Things like superintelligence and robots taking over the world are not relevant in the moment and probably don’t deserve too much attention.
Ultimately it’ll help us automate menial, unfulfilling tasks that feel like drudgery. That’ll be its biggest impact: allowing us to spend more time on work tasks that are fulfilling and satisfying. Everyone has a few things they hate in their job, and a few they love.
What level of sophistication do you think robots can eventually achieve? Is there a chance they can develop “a mind of their own” and operate outside of human control?
No one can predict this – humans are known to be terrible at predicting the future. Consciousness might just be a byproduct or an emergent property of inherently complex interactions within the brain. In that case, it seems theoretically possible. But in the near to medium term, that’s not a concern.
What role will “machine learning” play in the practice of medicine? Will medical education have to undergo a paradigm shift? What about general university education? Is the traditional lecture becoming a dinosaur?
Doctors can focus on complex cases once the simple, repetitive tasks are automated. In the developing world, medical staff are chronically stretched thin. And in some places, there are no doctors at all, so AI can democratize at least some aspects of medicine.
In terms of education, we should be focused on preparing the next generation for a very different future. This includes encouraging learning how to learn, building soft skills (interacting with people, being empathetic), and promoting a culture of learning quickly in new contexts. That’s what education needs to be all about.
Yes, the education system in its current state is a dinosaur – but disruption is already happening because of wider availability of information on the web. This means that instructors should be focused on helping students develop skills in the classroom instead of providing content.
The same goes for medical education: learn empathy, learn to work on complex cases, learn to co-exist and work with machines instead of working against them. People see it as a competition, but it’s really more of a collaborative process.
Algorithms are already being used for hiring, consumer purchase targeting, investment decisions, finding romantic partners, evaluating insurance risks, parole board decisions, determining crime “hot spots” and potential terrorist profiling. What privacy issues are raised and what other concerns come to mind?
The biggest privacy concern is pervasive data collection. Machine learning algorithms can operate on large datasets with or without consent. They may then draw inferences from the patterns they detect, sorting people into different buckets and making decisions based on attributes that are never explicitly stated but inferred from data. That means there’s potential to discriminate against people along many different axes without anyone being aware of why or how the system arrived at such a decision.
Smart phones have become ubiquitous, allowing around the clock access to Facebook, Instagram, Google, emails and text messaging. What impact has this had on society?
It has altered the way we communicate with each other. A huge portion of human interaction now happens online, asynchronously. This can have deeper long-term impacts: we may become less patient with each other, have less empathy, and develop shorter attention spans.
Online messaging is mostly agnostic to emotions, despite emojis. It just doesn’t jibe well with our biologically hard-wired way of interacting with each other, and we’re seeing that with the generation growing up with smartphones today. Those consequences will become clear in about 10 years, including diminished in-person communication skills.
Would you get into a self-driving car or fly in a plane with no pilot?
If it has been certified for safety, yes. That’s what we often don’t discuss: we need to be thinking about how we certify the robustness of these systems. If you hop into a car, you don’t actually know how the internal combustion engine works, but you trust that it’s safe and that it will get you from A to B. It’s the same thing with airlines — their safety record arose out of a cycle of iteration and improvement. Once we have those safety standards in place after rigorous testing and certification, I would be comfortable.
Devices such as the Apple watch can monitor heart function. Can this cause unnecessary anxiety or can it lead to lives being saved?
That’s a false dichotomy. Yes, its use as a tool for preventive healthcare can help people improve their health. But having that information all the time is not the best for anxiety levels.
When I call an Uber and watch where the car is, I don’t like wondering why it’s going so slowly or taking weird detours – that’s a frustrating experience. Constant monitoring can lead to anxiety.
How do you see space exploration unfolding since we have moved from the purely government-led era to one where technology, being more affordable, can be developed by private enterprises?
I think it’s going to accelerate the rate of innovation and the quality of products and services. I’m excited about initiatives like Relativity Space, whose innovative, autonomous 3D-printing rocket factory will redefine how we access space.
The speed at which we’ll be able to create rockets will increase by an order of magnitude. And it’ll be 100X cheaper.
If they’re subject to the same level of rigor and the same safety standards, I don’t see why we can’t have more private companies entering the space market.