Editor’s Note: On April 6th, 2017 the McGill Dobson Centre for Entrepreneurship and the McGill AI society hosted the Business of Artificial Intelligence event. We concluded our event with a panel discussion facilitated by Professor Jiro Kondo, Assistant Professor of Finance at the Desautels Faculty of Management, between Professor Doina Precup, Undergraduate Program Director at the School of Computer Science at McGill University, and Mr. Jean-Sebastien Cournoyer, co-founder and partner at Real Ventures and co-founder and board member at Element AI.
Professor Jiro Kondo (Professor JK): What role have Montreal, and more generally Canada, played in generating some of the hype around AI?
Professor Doina Precup (Professor DP): On the research side, there has been a lot of excitement around AI in Montreal in recent years, but just to give a little background, Artificial Intelligence has been around for more than 20 years. We are lucky to be in Canada because the government has invested, and continues to invest, in promoting basic research. For those who don’t know what basic research is, it is research driven by curiosity about a field or a question, where the end result does not necessarily lead to a new product. Additionally, researchers here in Canada decided to focus on learning and reinforcement learning at a time when the rest of the world did not really care about this field. Together, government support and this particular research focus helped us build a lot of strength in AI in Montreal and in other Canadian cities. This is the reason why there is so much of that hype in Canada, and especially here in Montreal.
Jean Sébastien Cournoyer (JSC): The existence of a strong academic setting, along with government support of this industry, as we have here in Montreal, has truly allowed for the growth of business in AI. If it weren’t for these few individuals, like Professor Yoshua Bengio, Professor Doina Precup and others, who decided to dedicate their lives to research and teaching, we wouldn’t have the density of AI businesses that we have here today. Being able to collaborate with professors, turning their fundamental research into applied research and then into software, has allowed us to build applications that can create new companies. We have truly gone from being world class at research to creating commercial value built on that strong foundation. As a result, there has been increased hype around this industry.
If you missed the event, check out our recorded livestream on Facebook.
Professor JK: Diving more into the details of the drivers of innovation in AI, the innovations seem to be driven by three main factors: innovation in data, innovation in algorithms and models, and innovation in computing power. When we think about the type of innovation in Canada, and more specifically Montreal, has it been in one specific area or in multiple domains? Are there other domains that drive innovation?
Professor DP: Most of our emphasis in research is on the algorithms and models used for machine learning, particularly learning and reinforcement learning. We have definitely benefitted from the collection and existence of more and more data and from increased computing power. However, algorithms are the core component in solving these problems. Where things make or break is not necessarily in the data or computational power we have, but rather in the algorithms and models themselves.
JSC: From a business perspective, the innovation in AI has been a result of the density and diversity of research in AI here in Canada. Within a month, we built a huge network of experts in every aspect of AI here in Montreal. You couldn’t have done that anywhere else in the world. The power of these people is not only that they are innovating based on the data and computing power they have, but that they know what is happening everywhere else in the world. The real value from research outputs and the implementation of AI comes from who can take the next major shift and bring it to market as quickly as possible. This is why innovation in algorithms and models is the key component of the ecosystem we are building here in Montreal.
Professor JK: Focusing more specifically on the data aspect of AI, one of the areas of concern when it comes to data is overfitting our models, which is when a model fits the random errors in a set of training data instead of the true underlying relationship, and so fails to generalise. What do you think about the notion of overfitting, and what are some measures taken to resolve it?
Professor DP: One of the major ways to reduce overfitting is cross-validation. This means testing the model on held-out data to see if it actually works. If you have lots of data, you set some aside and test the model on data it has never seen. For example, when you have a complex concept explained to a group of students through preparatory examples, you are not going to test the students on the preparatory questions to see if they understood the concept; rather, you will use different questions to test their understanding. It is the same with data in AI: you do not test a model on its own training data. This is one of the ways we prevent models from simply memorising cases.
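The idea Professor Precup describes can be sketched in a few lines of Python. This is a minimal illustration of k-fold cross-validation using only the standard library; the "model" is a deliberately trivial mean predictor (an assumption for illustration, not anything the panelists used), because the point is the train/test separation, not the model itself.

```python
# Minimal k-fold cross-validation sketch (standard library only).
# The "model" is a trivial mean predictor, chosen purely for illustration.

def k_fold_splits(n, k):
    """Yield (train_indices, test_indices) pairs for k folds of n items."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

def cross_validate(ys, k=5):
    """Average squared error of a mean predictor across k held-out folds."""
    errors = []
    for train, test in k_fold_splits(len(ys), k):
        # "Fit" on the training fold only: here, just the mean of y.
        prediction = sum(ys[i] for i in train) / len(train)
        # Evaluate on data the model never saw during "fitting".
        fold_err = sum((ys[i] - prediction) ** 2 for i in test) / len(test)
        errors.append(fold_err)
    return sum(errors) / len(errors)

data_y = [2.0 * x for x in range(10)]
score = cross_validate(data_y, k=5)
```

Testing on the training fold itself would report near-zero error for a model that has merely memorised its inputs; the held-out folds are what expose that failure.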
JSC: The notion of overfitting and these random errors makes it hard to predict or understand how a model really works. There are anomalies in the results, and not understanding why those anomalies occur is a huge limiting factor to fully deploying AI in enterprises and decision systems. More research is being done to understand these random errors, how these models work, how they come up with their solutions, and the anomalies that overfitting produces.
If you missed the event, check out our recap.
Professor JK: In order to understand how these models work, we need to understand the intermediate layers and processes where the anomalies arise and where the results that produce the outcome are formed. How important is it to give meaning to those intermediate layers in research, compared to in business?
Professor DP: There are three perspectives on this question in research, depending on the workability of the model. One: sometimes we don’t actually care about those intermediate layers; as long as the system works, it doesn’t really matter how the model works on the inside. A second perspective is to try to understand what the model does by doing an MRI sort of thing on our model – brain imaging, but for artificial brains. The third perspective is trying to understand what kinds of mistakes happen, how they happen, and asking whether there is something fundamentally wrong with the model. If there is something fundamentally wrong with the model or the data, then we go in and look at it case by case. We take individual nodes and plot their activity to understand how they activate and how the model works.
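The "brain imaging" perspective above can be sketched concretely: run a tiny network and record each hidden unit's activation so it can be inspected or plotted per unit. The weights below are arbitrary illustrative values I have made up for the sketch, not a trained model or any specific system the panelists describe.

```python
# Sketch of inspecting intermediate-layer activity: a hand-wired
# 2-input -> 3-hidden -> 1-output network whose forward pass records
# each hidden unit's activation. Weights are arbitrary illustrative values.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

W_hidden = [[0.5, -1.0], [1.5, 0.2], [-0.7, 0.9]]  # one row per hidden unit
W_output = [1.0, -2.0, 0.5]

def forward(x, record):
    """Forward pass that appends the hidden-layer activations to `record`."""
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W_hidden]
    record.append(hidden)  # the "MRI" snapshot of the artificial brain
    return sigmoid(sum(w * h for w, h in zip(W_output, hidden)))

activations = []
output = forward([1.0, 0.0], activations)
# Each entry of `activations` shows how strongly each hidden unit fired;
# collected across many inputs, these are what one would plot per unit.
```

Plotting such recorded activations across many inputs is one simple way to see which internal units respond to which inputs, which is the case-by-case analysis described above.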
JSC: For business, understanding those intermediate layers is a little more important. Putting AI in mission-critical situations is similar to putting a human who suffers from seizures into an intense atmosphere such as a trading floor without knowing what the outcome could be. This is not ideal. We can’t put an AI into a real-time decision-making situation without knowing what the outcome could be, and thus how those intermediate layers work. In most business situations, the products, the weather, the political sphere, and the market change constantly; you need to be able to understand how the AI will behave and react, which involves understanding the intermediate layers that produce the model.
Professor JK: Moving on to the social implications and vulnerabilities of AI: how do we deal with biases in the data, and thus biases in the results we produce? And, as a final note, what are the social implications of AI?
Professor DP: Biases in datasets have become an issue as algorithms have become better. We have noticed that an algorithm learns to predict exactly what you put in. If you put in garbage, you will get garbage. Therefore, if you put in data with biases, you will produce results with biases. Researchers are actively searching for ways to reduce, say, racial and gender biases: how we might detect these biases and how we might correct for them.
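One simple way such biases can be detected, sketched here as an illustration (the metric choice and the made-up data are my assumptions, not a method the panelists name), is to compare a model's positive-prediction rate across two groups, often called the demographic parity difference.

```python
# Sketch of bias detection via the demographic parity difference:
# the gap between positive-prediction rates for two groups.
# The predictions and group labels below are made-up illustrative data.

def positive_rate(predictions, groups, target_group):
    """Fraction of positive (1) predictions among members of `target_group`."""
    selected = [p for p, g in zip(predictions, groups) if g == target_group]
    return sum(selected) / len(selected)

def demographic_parity_gap(predictions, groups, group_a, group_b):
    """Absolute difference in positive-prediction rates between two groups."""
    return abs(positive_rate(predictions, groups, group_a)
               - positive_rate(predictions, groups, group_b))

# Hypothetical model outputs (1 = approved) and each applicant's group.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups, "A", "B")
```

A gap near zero means the model treats the two groups similarly on this metric; a large gap is a signal to investigate the data and the model, which is one concrete form the detection work described above can take.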
With regard to social implications, many of us researchers are motivated by social good. We view AI as a tool for improving healthcare, legal services, and so on. Montreal is special in this way because many of the people here are socially minded. Therefore, even though those biases exist, we will actively find ways to reduce them and work around them.
JSC: With AI, if we look at what happened with social networks, we allowed them to dramatically change the way we interact with one another, and we adapted. Businesses were built on profits alone and did not necessarily see the potential biased impact these mediums might have. We can’t do that with AI; we need to emphasize thinking about the implications it has on society, especially with regard to the biases it produces. As we create more wealth with AI, we need to find ways to redistribute some of that wealth without bias.
I personally think that in the long run, AI will bring us full circle, back to a time, like tribal living, when we did not have to do much tedious work and could focus on enjoying our relationships with the people around us.
Applications are now open for our 10-week summer intensive X-1 Accelerator program. Deadline is April 30, 2017. For more details, click here.