Insight researcher Dr Alessandra Mileo tells us about her work towards a more human-like AI that is explainable and trustworthy.
Two years ago, Dr Alessandra Mileo secured funding from Science Foundation Ireland (SFI), the Irish Research Council and Nokia Bell Labs for her research into deep learning and artificial intelligence (AI).
She leads a growing research team exploring the use of knowledge graphs to interpret deep learning models, and also examining how knowledge can both be extracted from and injected into these models.
So far, published results are promising, and Mileo’s team is already testing the application of this research in medical imaging analysis with the School of Medicine at University College Dublin and BarcelonaTech, a major engineering university in Spain.
A funded investigator across two SFI-supported research centres – Insight and I-Form – Mileo also serves as an assistant professor at Dublin City University.
‘I believe AI is not and never should be seen as technology that does things for you. Instead, it does things with you’
– DR ALESSANDRA MILEO
What inspired you to become a researcher?
I loved studying, learning and discovering new things, so I thought I’d start by studying for as long as possible. From there on, one thing led to another: university, PhD, postdoc, academic.
My AI research dates back to my last year as an undergraduate student and a project we did on Lego Mindstorms robots. I was so frustrated that we could only get the robots to do simple things, as the expressivity and complexity of the language were limited. I kept complaining to my project teammates that there was very little intelligence in there. That's where I found my challenge: finding better ways of expressing logical thinking and human intelligence to teach a robot (and machines in general) to do something smarter. That's how I got into symbolic artificial intelligence and automated reasoning.
How would you explain the research you are currently working on?
As humans, when we learn something new, we do not rely on seeing millions and millions of examples each time. We leverage a massive amount of experience or previous knowledge we have accumulated and synthesised through a portion of our brain that enables long-term memory (the hippocampus). We also select what is relevant and combine such structured knowledge with new data to adapt what we already know to the new task.
If we can systematically integrate structured knowledge (expressed as relationships between pieces of information) with data into deep learning models, we obtain a more human-like AI.
The interesting thing about structured knowledge is that it is organised as concepts and relationships among concepts. Deep learning models, however, are still mostly a black box. You throw in millions of data samples and magically get an outcome, and you have no idea how to justify it, where it came from or what relationship it has with what you already knew.
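To make the idea of structured knowledge concrete, here is a minimal sketch of a knowledge graph stored as subject-predicate-object triples in Python. It is purely illustrative, not the team's actual system: the medical concepts, relation names and the `related` helper are all invented for this example.

```python
# Minimal illustrative sketch: structured knowledge as a set of
# subject-predicate-object triples. The concepts and relations below
# are hypothetical examples, not taken from any real model.

triples = [
    ("tumour", "is_a", "lesion"),
    ("lesion", "appears_in", "mri_scan"),
    ("tumour", "has_property", "irregular_border"),
    ("irregular_border", "suggests", "malignancy"),
]

def related(concept, predicate):
    """Return every concept linked to `concept` by `predicate`."""
    return [obj for subj, pred, obj in triples
            if subj == concept and pred == predicate]

# An outcome such as "malignancy" can be traced back step by step:
print(related("tumour", "has_property"))        # ['irregular_border']
print(related("irregular_border", "suggests"))  # ['malignancy']
```

Because every hop in such a graph is an explicit, named relationship, a conclusion reached this way can be justified step by step, which is exactly what a black-box model cannot offer on its own.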
If we can combine the two aspects of machine learning and symbolic reasoning in AI, we are setting the foundations for building the next generation of intelligent machines. It is a very ambitious goal, but some approaches in this direction are becoming more and more popular in the AI community with neuro-symbolic computing coming back to life.
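As a loose illustration of that combination, the hypothetical sketch below checks a (mocked) neural network score against a hand-written rule base before accepting a prediction. This is not a description of Mileo's published approach: the `neural_score` stub, the rule base and the 0.5 threshold are all invented assumptions.

```python
# Hypothetical neuro-symbolic check: a statistical prediction is only
# accepted when it is consistent with explicit symbolic knowledge.

def neural_score(image_features):
    # Stand-in for a trained deep learning model; returns a confidence
    # that the input shows a malignant lesion.
    return 0.91

rules = {
    # "malignant" should only be predicted when a supporting property
    # was actually observed in the image.
    "malignant": {"irregular_border", "rapid_growth"},
}

def explainable_predict(image_features, observed_properties):
    score = neural_score(image_features)
    support = rules["malignant"] & observed_properties
    if score > 0.5 and support:
        return f"malignant (score {score:.2f}, supported by {sorted(support)})"
    if score > 0.5:
        return f"flagged for review: score {score:.2f} but no supporting evidence"
    return f"benign (score {score:.2f})"

print(explainable_predict(None, {"irregular_border"}))
```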
In your opinion, why is your research important?
Despite advances in deep learning for medical image analysis, there is still a lack of clinical adoption as it is difficult for humans to understand and therefore trust the results of the analysis. This research will reduce this gap in interpretability, leading to increased trust, greater patient empowerment and, ultimately, better outcomes, including improved diagnosis, wider clinical deployment and greater efficiency in time and cost.
There are many other areas where this research can be relevant, such as financial decision-making, digital forensics and crime prediction: essentially, any decision-making process where the inability to understand and correct errors and bias can result in a huge loss, and where reversing a wrong decision has a high cost.
Addressing the lack of transparency, explaining (and predicting) possible errors, and identifying bias are key enablers for the widespread adoption of modern deep learning.
This type of research can also play a key role in bringing the broader AI community together to solve complex problems. It is another step towards bridging the gap between symbolic and connectionist AI, combining the strengths of both worlds in a new version of AI that brings different aspects of human intelligence into artificial agents for truly explainable systems.
What commercial applications do you foresee for your research?
This is fundamental research and I believe open source has to be the choice in the short term. In the medium to long term, though, I think this research has the potential to create a shift in the business model of companies selling deep learning solutions for decision-making. They should not be profiting from their proprietary models without being accountable for the quality of their results.
There are many applications where AI relies on computer vision and language understanding. For computer vision beyond medical image analysis, I can think of surveillance, crime prevention and liability for autonomous vehicles. For language understanding, I can think of safety on social media and fairness in recruitment processes.
What are some of the biggest challenges you face as an AI researcher?
The research I am focusing on right now is quite new; there is no solid benchmark to compare against. This makes it difficult to understand whether you are going in the right direction, as most of the approaches that have been independently proposed in the last few years are not fully and directly comparable.
In addition to that, when you are trying to combine approaches with very different fundamental underpinnings, you need to grasp both or to engage in meaningful collaborations. This can be a challenge as it requires reaching out to experts in other AI communities. Often these communities have very few links and opportunities to get together as they usually publish at different venues and partner in projects and initiatives that are mostly disjointed.
When everybody thinks within their own bubble it can be hard to break through, but when we do, something good usually comes out of it.
Are there any common misconceptions about this area of research?
I think one common misconception is that if results (ie accuracy) are good, then your AI system is good. But we need to evaluate approaches beyond their quantitative accuracy in a more holistic way. When you deal with high-stakes decisions, aspects like transparency, trust and fairness are worth serious consideration, even though there is a trade-off between how confident your AI system is about its own decisions and how much you trust your AI system.
Another misconception is that when we reasonably trust an AI system, we can sit back and let it do the job. I believe AI is not and never should be seen as technology that does things for you. Instead, it does things with you. It is not replacing your ability to judge and decide, it is enhancing it. The human role is therefore always a key one and we should focus on AI supporting decisions as opposed to making decisions.
What are some of the areas of research you’d like to see tackled in the years ahead?
There is a lot of hype around interdisciplinary research, but I think there is still a lot of untapped potential within AI research alone. I would like to see a more systematic framework for the AI community to come together. We need to open up to new ways of engaging beyond isolated workshops and seminar events. I believe this is key in order to combine the strengths of symbolic and data-driven AI with a new, holistic perspective.