Machine Learning and Optimisation Day


Recent advances in Machine Learning and Optimisation

This workshop day is funded by ARC 6 and endorsed by the GdR MaDICS. It continues the SIERRA (Signal et Image En Région Rhône Alpes) and LIMA (Loisirs et IMAges) workshop series, and is open to all colleagues interested in machine learning and optimisation.

Organisers: M. Clausel (Université de Grenoble Alpes, LJK), L. Condat (Gipsa-Lab and CNRS), J. Digne (LIRIS and CNRS) and R. Chaine (Université Lyon 1, LIRIS)

Date: 13 December 2016

Venue: Amphi CNRS, Lyon, Campus de la Doua. From the Lyon Part-Dieu railway station, take tram T1 towards IUT Feyssine and get off at the INSA Einstein stop.


  • 10:30-11:00: Welcome coffee
  • 11:00-12:00: L. Condat: Slides
  • 12:00-13:00: C. Wolf: Slides
  • 13:00-14:15: Lunch and discussions
  • 14:15-15:15: C. Bouveyron: Slides
  • 15:15-16:15: R. Emonet: Slides


  • C. Bouveyron (Université Paris Descartes, MAP5): The Stochastic Topic Block Model for the Clustering of Networks with Textual Edges.
Abstract: Due to the significant increase in communications between individuals via social media (Facebook, Twitter) or electronic formats (email, web, co-authorship) over the past two decades, network analysis has become an unavoidable discipline. Many random graph models have been proposed to extract information from networks based on person-to-person links only, without taking the contents into account. In this work, we develop the stochastic topic block model (STBM), a probabilistic model for networks with textual edges. We address the problem of discovering meaningful clusters of vertices that are coherent with respect to both the network interactions and the text contents. A classification variational expectation-maximization (C-VEM) algorithm is proposed to perform inference. Simulated data sets are considered in order to assess the proposed approach and highlight its main features. Finally, we demonstrate the effectiveness of our model on two real-world data sets: a communication network and a co-authorship network.
Reference: C. Bouveyron, P. Latouche and R. Zreik, The Stochastic Topic Block Model for the Clustering of Networks with Textual Edges, Statistics and Computing, in press, 2017.

  • L. Condat (Gipsa-Lab and CNRS): An introduction to proximal splitting algorithms for large-scale convex optimization.
Abstract: This short tutorial will be accessible to people without a background in optimization. I will present the principles behind some first-order proximal splitting methods well suited to large-scale convex optimization: the Douglas-Rachford, ADMM, and Chambolle-Pock algorithms. In addition, I will present a little-known fact: it is better to deal with quadratic functionals using preconditioning than using their gradient.
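To give a flavour of the methods named in the abstract, here is a minimal NumPy sketch (illustrative only, not material from the talk) of the Douglas-Rachford iteration applied to a small lasso problem min_x ½‖Ax−b‖² + λ‖x‖₁; the problem sizes and the parameters `gamma` and `lam` are arbitrary choices:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximity operator of t*||.||_1 (component-wise shrinkage)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[:3] = [1.0, -2.0, 0.5]          # sparse ground truth
b = A @ x_true
lam, gamma = 0.1, 1.0                  # illustrative parameters

# prox of gamma*(1/2)||Ax-b||^2 solves (I + gamma A^T A) x = v + gamma A^T b
M = np.eye(50) + gamma * A.T @ A

z = np.zeros(50)
for _ in range(500):
    x = np.linalg.solve(M, z + gamma * A.T @ b)  # prox of the quadratic term
    y = soft_threshold(2 * x - z, gamma * lam)   # prox of the l1 term at the reflected point
    z = z + y - x                                # Douglas-Rachford update

print(0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x)))
```

Note that the quadratic term is handled through its proximity operator (a pre-factorable linear solve) rather than its gradient, which is one way to read the remark on quadratic functionals at the end of the abstract.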

  • R. Emonet (Université Jean Monnet, LaHC) : A Tour of Probabilistic and Deep Approaches for Unsupervised Learning.
Abstract: The increasing amount of available data and the relatively high cost of labeling are bringing unsupervised learning (back) into the spotlight. After this presentation, the audience will have a better (or different) understanding of approaches to unsupervised learning, including recent advances around generative adversarial networks (GANs).
In the talk, we will start from a probabilistic formulation of unsupervised learning and draw parallels and links with related formulations, including hierarchical probabilistic models, auto-encoders and generative adversarial networks. We will successively focus on specific facets of the problem to better understand the properties of these formulations: how they are optimized, how they can learn deep representations, how they can handle spatial or temporal data, how much semantics they capture, etc.
While ill-defined, unsupervised learning aims at extracting structure, knowledge, patterns, latent properties or intermediate representations from a dataset without any labels. Here, we focus on automatically finding hidden/latent representations and patterns in data: the generic case will be used as much as possible, but we will also give illustrations with "topic models" and their convolutional and hierarchical variants.
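As a toy instance of the auto-encoder formulation mentioned in the abstract, the following NumPy sketch (illustrative only; all dimensions, initialisation scales and the learning rate are arbitrary choices, not from the talk) trains a linear auto-encoder by gradient descent to find a 2-D latent representation of 10-D data without any labels:

```python
import numpy as np

rng = np.random.default_rng(1)
# Unlabeled data: 200 points in 10-D lying near a 2-D subspace
Z = rng.standard_normal((200, 2))
X = Z @ rng.standard_normal((2, 10)) + 0.01 * rng.standard_normal((200, 10))
n = X.shape[0]

k, lr = 2, 0.01
W_enc = 0.1 * rng.standard_normal((10, k))   # encoder weights
W_dec = 0.1 * rng.standard_normal((k, 10))   # decoder weights

def loss(W_enc, W_dec):
    # Reconstruction error, averaged over samples
    R = X @ W_enc @ W_dec - X
    return np.sum(R ** 2) / n

loss0 = loss(W_enc, W_dec)
for _ in range(2000):
    H = X @ W_enc                       # latent codes
    R = H @ W_dec - X                   # reconstruction residual
    g_dec = 2 * H.T @ R / n             # gradient w.r.t. decoder
    g_enc = 2 * X.T @ (R @ W_dec.T) / n # gradient w.r.t. encoder
    W_enc -= lr * g_enc
    W_dec -= lr * g_dec

print(loss0, loss(W_enc, W_dec))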
  • C. Wolf (INSA de Lyon, LIRIS) : Deep Learning of human motion
Abstract: We will first give a brief introduction to common deep models for computer vision and machine learning, and to the main challenges of the field. The second part is devoted to developing learning methods that advance the automatic analysis and interpretation of human motion from different perspectives and based on various sources of information, such as images, video, depth, mocap data, audio and inertial sensors. We propose several models and associated training algorithms for supervised classification and semi-supervised and weakly-supervised feature learning, as well as modelling of temporal dependencies, and show their efficiency on a set of fundamental tasks, including detection, classification, parameter estimation and user verification.
Advances in several applications will be shown, including (i) gesture spotting and recognition based on multi-scale and multi-modal deep learning from visual signals (such as video, depth and mocap data), where we will present a training strategy for learning cross-modality correlations while preserving the uniqueness of each modality-specific representation; (ii) hand pose estimation through deep regression from depth images, based on semi-supervised and weakly-supervised learning; (iii) mobile biometrics, in particular the automatic authentication of smartphone users through deep learning from data acquired by inertial sensors.