Miao works as an Assistant Professor in the Department of Computer Science at Aalborg University. He is also part of the DIREC workstream Advanced and Efficient Big Data Management and Analysis. The project focuses on developing new, efficient prototypes that enable the use of big data in industry. Miao focuses in particular on building efficient and explainable prototypes for different tasks and data in an automated manner.
Can you tell us about your background and why you settled down in Denmark as a computer scientist?
I am interested in machine learning, automated deep learning and explainable AI. I hope that I can introduce automated deep learning and explainable AI to the Danish data science community, since research on these topics is still rare.
Besides that, I chose to come to Aalborg because it is a young and very active university that provides many opportunities for young researchers. I have several friends working here, and they also recommended that I join their group, the Center for Data-Intensive Systems (DAISY), which has an international reputation. I believe I can learn a lot here.
I think the working environment in Denmark and Aalborg is very good. Working hours are flexible, so I can focus on my research. In addition, I think Aalborg is an environmentally friendly city, and I really enjoy life here.
Can you tell us about your research area?
I have broad research interests in machine learning and artificial intelligence, especially automated deep learning and explainable AI. I am interested in the automatic development of efficient, scalable and robust algorithms for machine learning, data mining, data management and deep learning applications, with formal theoretical guarantees and explanations. I see myself working on these problems for the foreseeable future of my research career.
What are the scientific challenges and perspectives of your project?
Although deep learning techniques have been applied in many areas, such as computer vision, face recognition, medical imaging, natural language processing, data mining and data management, designing deep learning systems is time-consuming, and the resulting systems remain black boxes: it is still hard to explain why a developed deep learning system works.
Automated deep learning is the process of building deep learning systems for different problems without human intervention. Explainable AI aims to explain why a developed system works, and it can also guide the design of deep learning systems. Both fields are still in their early stages: we still need to define research problems, improve efficiency, and explain why automatically designed systems work.
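To give a rough sense of what "building deep learning systems without human intervention" can mean in practice, the sketch below runs a simple random search over candidate network configurations and keeps the best one by cross-validated accuracy. It is a minimal illustration only, not the project's actual methods: it uses scikit-learn's MLPClassifier on synthetic data as a stand-in for a full deep learning stack, and all search ranges are assumptions chosen for the example.

```python
# Minimal sketch of automated model design via random search.
# Illustrative only: MLPClassifier stands in for a full deep learning stack,
# and random search is the simplest possible "automated" design strategy.
import random

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Synthetic classification data as a placeholder for a real task.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Assumed search space: candidate layer widths, depths and learning rates.
widths = [16, 32, 64, 128]
depths = [1, 2, 3]
learning_rates = [1e-2, 1e-3, 1e-4]

best_score, best_config = -1.0, None
random.seed(0)
for _ in range(10):  # evaluate 10 randomly sampled configurations
    config = {
        "hidden_layer_sizes": tuple([random.choice(widths)] * random.choice(depths)),
        "learning_rate_init": random.choice(learning_rates),
    }
    model = MLPClassifier(max_iter=300, random_state=0, **config)
    score = cross_val_score(model, X, y, cv=3).mean()
    if score > best_score:
        best_score, best_config = score, config

print(f"best config: {best_config}, cross-validated accuracy: {best_score:.3f}")
```

Real automated deep learning systems replace the random sampling above with far more efficient search strategies and larger architecture spaces, but the basic loop of proposing, evaluating and selecting configurations automatically is the same idea.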
How can your research make a difference for companies and communities?
Automated deep learning aims to build better deep learning systems in a data-driven, automated manner, so that most practitioners can build high-performance machine learning models without being experts in deep learning.
Automated deep learning can provide end-to-end deep learning solutions, and these solutions are often better than hand-designed deep learning systems. Such automated systems lower the barrier to entry and make it easier for everyone to use these techniques to solve their own problems.