Model-Agnostic Meta-Learning
MAML differs from transfer learning: the goal is not to reuse a source model's knowledge on a new task, but to meta-learn good common initial parameters from which a model can adapt quickly to any task in a distribution.
- Data sampling across diverse tasks
- Inner Loop: For each task, take gradient steps on its sampled data points to obtain task-adapted parameters
- Outer Loop: Compute the gradient of each task's loss evaluated at its adapted parameters
- Aggregate gradients from multiple tasks to update initial parameters
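The loop structure above can be sketched in a minimal, self-contained way. The snippet below is a toy illustration, not the paper's implementation: it uses a one-parameter linear model, a hypothetical task family `y = a * x` with a random slope per task, and the first-order MAML approximation (full MAML would also differentiate through the inner update).

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad(theta, x, y):
    # Mean-squared error for a linear model y_hat = theta * x
    y_hat = theta * x
    loss = np.mean((y_hat - y) ** 2)
    grad = np.mean(2.0 * (y_hat - y) * x)
    return loss, grad

def sample_task():
    # Hypothetical task family: y = a * x with a random slope per task
    a = rng.uniform(0.5, 2.0)
    x = rng.uniform(-1.0, 1.0, size=10)
    return x, a * x

def maml_step(theta, n_tasks=5, inner_lr=0.1, outer_lr=0.01):
    meta_grad = 0.0
    for _ in range(n_tasks):
        x, y = sample_task()
        # Inner loop: one gradient step on the task's support set
        _, g = loss_and_grad(theta, x[:5], y[:5])
        theta_adapted = theta - inner_lr * g
        # Outer loop: gradient of the query-set loss at the adapted
        # parameters (first-order approximation)
        _, g_query = loss_and_grad(theta_adapted, x[5:], y[5:])
        meta_grad += g_query
    # Aggregate gradients across tasks to update the initialization
    return theta - outer_lr * meta_grad / n_tasks

theta = 0.0
for _ in range(500):
    theta = maml_step(theta)
```

After meta-training, `theta` drifts toward an initialization near the middle of the task distribution's slope range, so one inner-loop step can fit any sampled task well.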
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning...
https://arxiv.org/abs/1703.03400


Seonglae Cho