Project type: Bridge Project
Today, manual visual inspection of grain remains one of the most important quality assurance procedures throughout the value chain that brings cereals from field to table. To improve the performance, robustness and consistency of this inspection, automated imaging-based solutions are needed to replace subjective manual inspection. To meet this need, FOSS has developed a multispectral imaging system called EyeFoss™. With this system, user-independent multispectral images of more than 10,000 individual kernels can be collected within minutes, in real time and on site. The EyeFoss™ applications currently cover wheat and barley grading.
To derive maximum value from the data, methods are needed for training algorithms that automatically provide industry with the best possible feedback on the quality of incoming materials. The purpose is to develop a framework that replaces the current feature-based models with deep learning methods. These methods have the potential to significantly reduce the labour needed to extend EyeFoss™ to new applications, e.g. maize or coffee, while at the same time increasing the accuracy and reliability with which the algorithms describe cereal quality.
This project aims to develop and validate, with industrial partners, a method that uses deep neural networks to monitor the quality of seeds and grains from multispectral image data. The method has the potential to provide the grain industry with a disruptive new tool for ensuring quality and optimising the value of agricultural commodities. The ambition of the project is to arrive at an operationally implemented deep learning framework for deploying EyeFoss™ in new industrial applications. To achieve this, the project will team up with DTU Compute as a strong competence centre on deep learning, as well as a major player within the European grain industry (to be selected).
The research aim of the project is to develop AI methods and tools that enable industry to build new solutions for automated image-based quality assessment. End-to-end learning of features and representations for object classification with deep neural networks can yield significant performance improvements. Several recent mechanisms have been developed that further improve performance and reduce the need for manual annotation work (labelling), including semi-supervised learning strategies and data augmentation.
Semi-supervised learning combines generative models trained without labels (unsupervised learning) and pre-trained networks (transfer learning) with supervised learning on small sets of labelled data. Data augmentation employs both knowledge-based transformations, such as translations and rotations, and more general learned transformations, such as parameterised “warps”, to increase variability in the training data and improve robustness to natural variation.
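One simple semi-supervised strategy is pseudo-labelling: fit a model on the small labelled set, let it label the unlabelled pool, and refit on both. The sketch below illustrates this loop on synthetic two-dimensional stand-ins for kernel features, using a deliberately minimal nearest-centroid classifier; the data, class count and model are illustrative assumptions, not the project's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: two well-separated classes in a 2-D feature space.
# Small labelled set (5 samples per class) and a larger unlabelled pool.
X_lab = np.vstack([rng.normal(0.0, 0.5, (5, 2)), rng.normal(3.0, 0.5, (5, 2))])
y_lab = np.array([0] * 5 + [1] * 5)
X_unlab = np.vstack([rng.normal(0.0, 0.5, (50, 2)), rng.normal(3.0, 0.5, (50, 2))])

def fit_centroids(X, y):
    """One centroid per class: the mean of that class's samples."""
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(X, centroids):
    """Assign each sample to the class of its nearest centroid."""
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

# 1) Fit on the small labelled set only.
centroids = fit_centroids(X_lab, y_lab)
# 2) Pseudo-label the unlabelled pool with the current model.
y_pseudo = predict(X_unlab, centroids)
# 3) Refit on labelled + pseudo-labelled data together.
centroids = fit_centroids(np.vstack([X_lab, X_unlab]),
                          np.concatenate([y_lab, y_pseudo]))
```

In practice the same loop is run with a deep network and confidence thresholds on the pseudo-labels, but the three-step structure is the same.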
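The knowledge-based transformations mentioned above can be sketched in a few lines: each augmented image is a rotated, mirrored and slightly shifted copy of the original, so the training set varies while the label stays valid. The image size and parameter ranges below are illustrative assumptions.

```python
import numpy as np

def augment(img, rng):
    """Apply random knowledge-based transformations to one image."""
    img = np.rot90(img, k=int(rng.integers(4)))   # random 90-degree rotation
    if rng.random() < 0.5:
        img = np.fliplr(img)                      # random horizontal mirror
    shift = rng.integers(-2, 3, size=2)           # small circular translation
    img = np.roll(img, shift, axis=(0, 1))
    return img

rng = np.random.default_rng(1)
base = np.arange(64, dtype=float).reshape(8, 8)   # stand-in "kernel image"
batch = np.stack([augment(base, rng) for _ in range(16)])
```

Because all three operations only rearrange pixels, every augmented copy contains exactly the original pixel values; learned augmentations such as parameterised warps would interpolate and thus change them.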
The scientific value of the project will be new methods and open source tools and associated knowledge of their performance and properties in an industrial setup.
For capacity building, the value of the project is to educate one PhD student in close collaboration with FOSS. The aim is that the student will be present at FOSS at least 40% of the time to secure close integration and knowledge exchange with the FOSS development team working on introducing EyeFoss™ to the market. Specific focus will also be on exchange at the faculty level: the aim is to have faculty from DTU Compute present at FOSS, and vice versa for the senior FOSS specialists who supervise the PhD student. This will secure better networking, anchoring and capacity building at the senior level as well. The PhD project will additionally be supported by a master-level programme already established between the universities and FOSS.
Specifically, the project aims to provide FOSS with new tools to assist in scaling the market potential of EyeFoss™ beyond its current potential of EUR 20 million/year. Adding applications for visual inspection of products such as maize, rice or coffee in a cost-efficient way has the potential to at least double this market. In addition, the contributions will be of generic relevance to companies developing image-based solutions for food quality and integrity assessment, and will provide other Danish companies with excellent application and AI-integration knowledge from commercial solutions already on the market.
The project involves the research themes of AI (WS2) and CyPhys (WS6) of DIREC.
October 1, 2020 – September 30, 2024 – 3.5 years
Total budget DKK 3.91 million / DIREC investment DKK 1.90 million
University of Copenhagen
Department of Computer Science
Technical University of Denmark