Fwd: Colloquia Patavina Math.UniPD: June 8th, 2021 4 PM - Marc'Aurelio Ranzato (Facebook AI Research Lab, NY)
Hi Alberto, it only just occurred to me that this seminar today (!) might interest you. It's a technical talk on ML, but since it's a departmental colloquium it should be a bit more digestible. Bye, Silvia
Begin forwarded message:
From: Francesco Ranzato <ranzato@math.unipd.it>
Subject: Colloquia Patavina Math.UniPD: June 8th, 2021 4 PM - Marc'Aurelio Ranzato (Facebook AI Research Lab, NY)
Date: 7 June 2021 at 20:59:14 CEST
To: Francesco Ranzato <ranzato@math.unipd.it>
REMINDER: TOMORROW JUNE 8TH AT 4 PM
----------------------------------------------------------------------
Please find below the information for the next Colloquium Patavinum at the Math Department of the University of Padova.
https://www.math.unipd.it/news/the-curse-and-blessing-of-learning-from-non-s...
We look forward to your participation!
The Colloquia Committee (F.Ancona, L.Ballan, L.Caravenna, R.Colpi, A.Iovita, F.Ranzato, O.Tommasi) https://www.math.unipd.it/~ranzato/colloquia/colloquia.html
-------- LECTURER Marc'Aurelio Ranzato, Facebook AI Research Lab, New York
-------- WHEN June 8th, 2021 - From 4:00 pm to 5:00 pm
-------- WHERE Online: https://unipd.link/Zoom-Colloquia-Patavina
-------- TITLE The curse and blessing of learning from non-static datasets
-------- ABSTRACT In the classical empirical risk minimization framework of machine learning, learners observe all the samples of a dataset at once. In practice, datasets are seldom static; instead, data arrives a little at a time. New chunks of data may be added over time, and their distribution may even be non-stationary, as is typical in robotics applications, for instance.
This new learning setting is clearly more difficult to characterize, but it offers an unprecedented opportunity to reduce sample complexity. By leveraging knowledge acquired from previous chunks of data, the learner has the potential to adapt more quickly to new incoming data. Such a learning setting is related to continual learning and anytime learning, subfields of machine learning that have seen a recent surge of interest from researchers working on learning with limited supervision.
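To make the setting concrete, here is a minimal sketch (not from the talk; all names and numbers are hypothetical) of a learner that receives data one chunk at a time and warm-starts each chunk from the parameters learned on the previous ones, which is how earlier knowledge can reduce the effort needed on new data:

```python
import random

def sgd_step(w, b, x, y, lr=0.05):
    """One SGD step on squared error for a 1-d linear model."""
    err = (w * x + b) - y
    return w - lr * err * x, b - lr * err

def train_on_chunk(chunk, w=0.0, b=0.0, epochs=20):
    """Train on one chunk, starting from the given (possibly warm) params."""
    for _ in range(epochs):
        for x, y in chunk:
            w, b = sgd_step(w, b, x, y)
    return w, b

random.seed(0)
true_w, true_b = 2.0, -1.0

def make_chunk(n):
    """A fresh chunk of noiseless samples from the target function."""
    return [(x, true_w * x + true_b)
            for x in (random.uniform(-1, 1) for _ in range(n))]

# Data arrives a chunk at a time rather than all at once; the learner
# carries its parameters across chunks instead of restarting from scratch.
w, b = 0.0, 0.0
for chunk in (make_chunk(8) for _ in range(3)):
    w, b = train_on_chunk(chunk, w, b)
```

After the three chunks, (w, b) is close to the target (2.0, -1.0); a cold-started learner would have to redo that work on every chunk.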
In this talk, I will formalize this learning setting and propose metrics and benchmarks to test the ability of learning algorithms to transfer knowledge acquired in the past. I will also introduce a modular architecture that has proven to be very effective and efficient across a variety of data streams. This is a hierarchical mixture of experts that adds new experts over time to automatically adjust its capacity as more and more data is observed. These promising results indicate that it might be possible to efficiently leverage past experience to reduce the amount of supervision needed to learn a new task, and that non-static models can be highly effective at learning from non-static datasets, opening a new and exciting avenue of research.
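The growth mechanism behind such a mixture can be illustrated with a toy sketch (hypothetical, not the speaker's actual architecture): whenever no existing expert fits an incoming sample well, a new expert is spawned, so capacity tracks the number of distinct regimes seen in the stream.

```python
class Expert:
    """A constant predictor; stands in for a real trainable sub-network."""
    def __init__(self, value):
        self.value = value

    def predict(self):
        return self.value

class GrowingMixture:
    """Adds a new expert whenever no existing expert fits a sample well."""
    def __init__(self, threshold=1.0):
        self.experts = []
        self.threshold = threshold

    def best_error(self, y):
        # Error of the best-matching expert; infinite if there are none yet.
        return min((abs(e.predict() - y) for e in self.experts),
                   default=float("inf"))

    def observe(self, y):
        # Grow capacity only when the current experts fail on this sample.
        if self.best_error(y) > self.threshold:
            self.experts.append(Expert(y))

mix = GrowingMixture(threshold=1.0)
# A non-stationary stream: the data "mode" shifts twice.
stream = [0.0, 0.1, 0.2, 5.0, 5.1, -3.0]
for y in stream:
    mix.observe(y)
print(len(mix.experts))  # 3: one expert per mode of the stream
```

A real system would of course train the experts and learn a gating function, but the key design choice survives the simplification: the model's size is not fixed in advance, it grows with the diversity of the data actually observed.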
-------- SHORT BIO SPEAKER Marc’Aurelio Ranzato is a research scientist at the Facebook AI Research lab in New York City. His research interests are in the areas of unsupervised learning, continual learning, and transfer learning, with applications to vision, natural language understanding, and speech recognition. Marc’Aurelio is originally from Padova, Italy, where he graduated in Electronics Engineering. He earned a PhD in Computer Science at New York University under Yann LeCun’s supervision. After a post-doc with Geoffrey Hinton at the University of Toronto, he joined the Google Brain team in 2011. In 2013 he joined Facebook and was a founding member of the Facebook AI Research lab. Marc’Aurelio has served as program chair for ICLR 2017, ICLR 2018, and NeurIPS 2020. He is the general chair of NeurIPS 2021.
More info is available here: https://ranzato.github.io/