Related PapersΒΆ
- Hyperparameter Optimization with Approximate Gradient (HOAG)
- Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks (MAML)
- On First-Order Meta-Learning Algorithms (FMAML)
- Meta-SGD: Learning to Learn Quickly for Few-Shot Learning (Meta-SGD)
- Bilevel Programming for Hyperparameter Optimization and Meta-Learning (RHG)
- Truncated Back-propagation for Bilevel Optimization (TG)
- Gradient-Based Meta-Learning with Learned Layerwise Metric and Subspace (MT-net)
- Meta-Learning with Warped Gradient Descent (WarpGrad)
- DARTS: Differentiable Architecture Search (DARTS)
- A Generic First-Order Algorithmic Framework for Bi-Level Programming Beyond Lower-Level Singleton (BDA)