
Thesis defense – Thalita Drumond

3 December 2020 / 13:30 - 15:30

Link for online streaming: https://youtu.be/IzKFvw3CzBw


Interactions between hierarchical learning and visual system modeling: Image classification on small datasets

Language: English

Thesis supervisors: Frédéric ALEXANDRE and Thierry VIÉVILLE (IMN, team Mnemosyne)

Jury members
  • Philippe CARRÉ (Professor, Univ. de Poitiers) – Reviewer
  • Maria Jose ESCOBAR (Associate Professor, Univ. Técnica Federico Santa María, Chile) – Reviewer
  • Christophe JOUFFRAIS (CNRS Research Director, IPAL IRL2955, Singapore) – Examiner
  • Hélène SAUZÉON (Professor, Univ. de Bordeaux) – Examiner
  • Fernando José VON ZUBEN (Full Professor, Univ. Estadual de Campinas, Brazil) – Examiner

Abstract

Deep convolutional neural networks (DCNNs) have recently driven a revolution in large-scale object recognition. They have changed the usual computer vision practice of hand-engineered features through their ability to learn representative, hierarchical features from data, together with a pertinent classifier. Alongside hardware advances, they have made it possible to effectively exploit the ever-growing amounts of image data gathered online. However, in specific domains such as healthcare and industrial applications, data is much less abundant and expert labeling costs are higher than for general-purpose image datasets. This scarcity scenario leads to this thesis’ core question: can these limited-data domains profit from the advantages of DCNNs for image classification? This question is addressed throughout this work, based on an extensive literature study divided into two main parts, followed by the proposal of original models and mechanisms.

The first part reviews object recognition from an interdisciplinary double viewpoint. First, it examines the function of vision from a biological standpoint, comparing and contrasting it with DCNN models in terms of structure, function and capabilities. Second, a state-of-the-art review is established, aiming to identify the main architectural categories and innovations in modern-day DCNNs. This interdisciplinary basis fosters the identification of potential mechanisms, inspired by both biological and artificial structures, that could improve image recognition in difficult situations. Recurrent processing is a clear example: while not completely absent from the “deep vision” literature, it has mostly been applied to videos, due to their inherently sequential nature. From biology, however, it is clear that such processing plays a role in refining our perception of a still scene. This theme is further explored through a dedicated literature review focused on recurrent convolutional architectures used in image classification.

The second part carries on in the spirit of improving DCNNs, this time focusing more specifically on our central question: deep learning over small datasets. First, the work proposes a more detailed and precise discussion of the small-sample problem and its relation to learning hierarchical features with deep models. This discussion is followed by a structured view of the field, organizing and discussing the different possible paths towards adapting deep models to limited-data settings. Rather than a raw listing, this review aims to make sense of the myriad of approaches in the field, grouping methods with similar intent or mechanism of action, in order to guide the development of custom solutions for small-data applications. Second, this study is complemented by an experimental analysis, exploring small-data learning with the proposition of original models and mechanisms (previously published as a journal paper).

In conclusion, it is possible to apply deep learning to small datasets and obtain good results, if done in a thoughtful fashion. On the data path, one should try to gather more information from additional related data sources, if available. On the complexity path, architecture and training methods can be calibrated to profit the most from any available domain-specific side information. Proposals concerning both of these paths are discussed in detail throughout this document.
Overall, while there are multiple ways of reducing the complexity of deep learning with small data samples, there is no universal solution. Each method has its own drawbacks and practical difficulties and needs to be tailored specifically to the target perceptual task at hand.
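To make the data and complexity paths more concrete, the sketch below is one common illustration of them, not a method taken from the thesis: a transfer-learning recipe in which a convolutional backbone pre-trained on a large related dataset (ImageNet, via PyTorch/torchvision, an arbitrary library choice here) is frozen, and only a small classification head is trained on the limited target data. The five-class head and the dummy batch are hypothetical placeholders.

# Minimal sketch, assuming PyTorch/torchvision are available; not the thesis' model.
import torch
import torch.nn as nn
from torchvision import models

# Data path: reuse features learned on a large, related data source (ImageNet)
# by loading a pre-trained convolutional backbone and freezing its parameters.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False

# Complexity path: only a small classification head is trained, keeping the
# number of free parameters commensurate with the small target dataset
# (num_classes = 5 is a hypothetical target task).
num_classes = 5
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
logits = backbone(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()

A typical variant, when slightly more target data is available, is to unfreeze a few of the top convolutional layers and fine-tune them with a lower learning rate.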

Keywords

deep learning, image classification, small data learning, convolutional neural networks, transfer learning.

Related publications

  • Drumond TF, Viéville T and Alexandre F (2019) Bio-inspired Analysis of Deep Learning on Not-So-Big Data Using Data-Prototypes. Front. Comput. Neurosci. 12:100. doi: 10.3389/fncom.2018.00100
  • Drumond TF, Viéville T and Alexandre F (2017) Using prototypes to improve convolutional networks interpretability. NIPS 2017 – 31st Annual Conference on Neural Information Processing Systems: Transparent and interpretable machine learning in safety critical environments Workshop, Dec 2017, Long Beach, United States. hal-01651964

Popularized abstract in English

The field of artificial intelligence has made many advances in the last decade, especially with deep learning for image recognition. Despite being biologically inspired, these deep neural networks function quite differently from our natural vision. Essentially, such a network is a mathematical object with numerical parameters that can be adjusted automatically, using large sets of previously labeled images. Only after “seeing” thousands of cats, dogs, cars, trees, people, etc., will the network be able to recognize these elements in new images (without understanding their meaning). Having access to large databases of labeled images is unfortunately not possible in all fields of application. For specific industrial problems or in medical imaging, for example, it can be difficult or even impossible to obtain hundreds of different images of the same condition, patient, and so on. In addition, image labeling is more expensive, since it requires an expert opinion. This motivates the central question of this thesis: how can we take advantage of deep neural networks on small image datasets? This work takes a step towards an answer through an extensive literature review, complemented by an experimental study that includes the proposition of original models and mechanisms.

