1 Introduction

In recent years, deep learning techniques have achieved remarkable successes on various tasks, including game-playing (Mnih et al. 2013; Silver et al. 2016), image recognition (Krizhevsky et al. 2012; He et al. 2015), machine translation (Wu et al. 2016), and automatic classification in biomedical domains (Goceri 2019a; Goceri and Karakas 2020; Iqbal et al. 2019a, b, 2020). Despite these advances and recent solutions (Goceri 2019b, 2020), ample challenges remain to be solved, such as the large amounts of data and training that are needed to achieve good performance. These requirements severely constrain the ability of deep neural networks to learn new concepts quickly, one of the defining aspects of human intelligence (Jankowski et al. 2011; Lake et al. 2017).

Meta-learning has been suggested as one strategy to overcome this challenge (Naik and Mammone 1992; Schmidhuber 1987; Thrun 1998). The key idea is that meta-learning agents improve their learning ability over time, or equivalently, learn to learn. The learning process is primarily concerned with tasks (sets of observations) and takes place at two different levels: an inner- and an outer-level. At the inner-level, a new task is presented, and the agent tries to quickly learn the associated concepts from the training observations. This quick adaptation is facilitated by knowledge that it has accumulated across earlier tasks at the outer-level. Thus, whereas the inner-level concerns a single task, the outer-level concerns a multitude of tasks.

Fig. 1 The accuracy scores of the covered techniques on 1-shot miniImageNet classification. The feature extraction backbone used by each technique is displayed on the x-axis. As one can see, there is a strong relationship between the network complexity and the classification performance

Historically, the term meta-learning has been used with various scopes. In its broadest sense, it encapsulates all systems that leverage prior learning experience in order to learn new tasks more quickly (Vanschoren 2018). This broad notion includes more traditional algorithm selection and hyperparameter optimization techniques for Machine Learning (Brazdil et al. 2008). In this work, however, we focus on a subset of the meta-learning field which develops meta-learning procedures to learn a good inductive bias for (deep) neural networks. Henceforth, we use the term Deep Meta-Learning to refer to this subfield of meta-learning.

The field of Deep Meta-Learning is advancing at a quick pace, but it lacks a coherent, unifying overview that provides detailed insights into the key techniques. Vanschoren (2018) has surveyed meta-learning techniques, where meta-learning was used in the broad sense, limiting its account of Deep Meta-Learning techniques. Moreover, many exciting developments in deep meta-learning have happened since that survey was published. A more recent survey by Hospedales et al. (2020) adopts the same notion of deep meta-learning as we do, but aims for a broad overview, omitting technical details of the various techniques.

We attempt to fill this gap by providing detailed explications of contemporary Deep Meta-Learning techniques, using a unified notation. More specifically, we cover modern techniques in the field for supervised and reinforcement learning that have achieved state-of-the-art performance, obtained popularity in the field, and presented novel ideas. Extra attention is paid to MAML (Finn et al. 2017) and related techniques, because of their impact on the field. We show how the techniques relate to each other, detail their strengths and weaknesses, identify current challenges, and provide an overview of promising future research directions. One of the observations that we make is that network complexity is highly related to few-shot classification performance (see Fig. 1). One might expect that in a few-shot setting, where only a few examples are available to learn from, the number of network parameters should be kept small to prevent overfitting. Clearly, the figure shows that this does not hold, as techniques that use larger backbones tend to achieve better performance. One important factor might be that, although each task contains only a few examples, the network is exposed to a large number of tasks during meta-training, and thus to a correspondingly large number of observations overall. This result suggests that the size of the network should be taken into account when comparing algorithms.

This work can serve as an educational introduction to the field of Deep Meta-Learning, and as reference material for experienced researchers in the field. Throughout, we will adopt the taxonomy used by Vinyals (2017), which identifies three categories of Deep Meta-Learning approaches: (i) metric-, (ii) model-, and (iii) optimization-based meta-learning techniques.

The remainder of this work is structured as follows. Sect. 2 builds a common foundation on which we will base our overview of Deep Meta-Learning techniques. Sections 3, 4, and 5 cover the main metric-, model-, and optimization-based meta-learning techniques, respectively. Section 6 provides a helicopter view of the field and summarizes the key challenges and open questions. Table 1 gives an overview of the notation that we will use throughout this paper.

Table 1 Notation used throughout this paper and its meaning

2 Foundation

In this section, we build the necessary foundation for investigating Deep Meta-Learning techniques in a consistent manner. To begin with, we contrast regular learning and meta-learning. Afterwards, we briefly discuss how Deep Meta-Learning relates to different fields, what the usual training and evaluation procedure looks like, and which benchmarks are often used for this purpose. We finish this section by describing the context and some applications of the meta-learning field.

2.1 The meta abstraction

In this subsection, we contrast base-level (regular) learning and meta-learning for two different paradigms, i.e., supervised and reinforcement learning.

2.1.1 Regular supervised learning

In supervised learning, we wish to learn a function \(f_{\varvec{\theta }}: X \rightarrow Y\) that maps inputs \({\varvec{x}}_{i} \in X\) to their corresponding outputs \(y_{i} \in Y\). Here, \(\varvec{\theta }\) are model parameters (e.g. weights in a neural network) that determine the function’s behavior. To learn these parameters, we are given a data set of m observations: \(D = \{({\varvec{x}}_{i}, y_{i})\}_{i=1}^{m}\). Thus, given a data set D, learning boils down to finding the setting of \(\varvec{\theta }\) that minimizes an empirical loss function \({\mathcal {L}}_{D}\), which captures how well the model is performing, such that appropriate adjustments to its parameters can be made. In short, we wish to find

$$\begin{aligned} \varvec{\theta }_{SL} := {\mathop {{{\,\mathrm{arg\,min}\,}}}\limits _{\varvec{\theta }}} \, {\mathcal {L}}_{D}(\varvec{\theta }), \end{aligned}$$
(1)

where SL stands for “supervised learning”. Note that this objective is specific to the data set D, meaning that our model \(f_{\varvec{\theta }}\) may not generalize to examples outside of D. To measure generalization, one could evaluate the performance on a separate test data set, which contains unseen examples. A popular way to do this is through cross-validation, where one repeatedly creates train and test splits \(D^{tr}, D^{test} \subset D\) and uses these to train and evaluate the model, respectively (Hastie et al. 2009).

Finding globally optimal parameters \(\varvec{\theta }_{SL}\) is often computationally infeasible. We can, however, approximate them, guided by pre-defined meta-knowledge \(\omega\) (Hospedales et al. 2020), which includes, e.g., the initial model parameters \(\varvec{\theta }\), choice of optimizer, and learning rate schedule. As such, we approximate

$$\begin{aligned} \varvec{\theta }_{SL} \approx g_{\omega }(D, {\mathcal {L}}_{D}), \end{aligned}$$
(2)

where \(g_{\omega }\) is an optimization procedure that uses pre-defined meta-knowledge \(\omega\), data set D, and loss function \({\mathcal {L}}_{D}\), to produce updated weights \(g_{\omega }(D, {\mathcal {L}}_{D})\) that (presumably) perform well on D.
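
To make this concrete, the following minimal sketch (in Python, with toy data and a linear model, both our own illustrative choices) approximates \(\varvec{\theta }_{SL}\) with plain gradient descent; the initialization, learning rate, and number of steps play the role of the pre-defined meta-knowledge \(\omega\).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                        # data set D: 100 observations, 3 features
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

theta = np.zeros(3)                                  # part of omega: the initial parameters
lr, steps = 0.1, 200                                 # part of omega: optimizer hyperparameters

for _ in range(steps):                               # g_omega(D, L_D): plain gradient descent
    grad = 2 * X.T @ (X @ theta - y) / len(y)        # gradient of the empirical (MSE) loss L_D
    theta -= lr * grad

print("approximate theta_SL:", theta)
```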

2.1.2 Supervised meta-learning

In contrast, supervised meta-learning does not assume that any meta-knowledge \(\omega\) is given, or pre-defined. Instead, the goal of meta-learning is to find the best \(\omega\), such that our (regular) base-learner can learn new tasks (data sets) as quickly as possible. Thus, whereas supervised regular learning involves one data set, supervised meta-learning involves a group of data sets. The goal is to learn meta-knowledge \(\omega\) such that our model can learn many different tasks well. Thus, our model is learning to learn.

More formally, we have a probability distribution of tasks \(p({\mathcal {T}})\), and wish to find optimal meta-knowledge

$$\begin{aligned} \omega ^{*} := {\mathop {{{\,\mathrm{arg\,min}\,}}}\limits _{\omega }} \, \underbrace{{\mathbb {E}}_{{\mathcal {T}}_{j} \sim p({\mathcal {T}})}}_\text {Outer-level} [ \underbrace{ {\mathcal {L}}_{{\mathcal {T}}_{j}}(g_{\omega }({\mathcal {T}}_{j}, {\mathcal {L}}_{{\mathcal {T}}_{j}}))}_\text {Inner-level}] . \end{aligned}$$
(3)

Here, the inner-level concerns task-specific learning, while the outer-level concerns multiple tasks. One can now easily see why this is meta-learning: we learn \(\omega\), which allows for quick learning of tasks \({\mathcal {T}}_{j}\) at the inner-level. Hence, we are learning to learn.
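
The bi-level structure of Eq. (3) can be illustrated with a small sketch. We make two illustrative assumptions here that are not part of the general formulation: the meta-knowledge \(\omega\) is taken to be the parameter initialization, and the outer-level update follows a simple Reptile-style rule (moving the initialization toward the task-adapted parameters); tasks are toy 1-D regression problems.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_task():
    """Sample a task T_j ~ p(T): regress y = a * x for a random slope a."""
    a = rng.uniform(0.5, 1.5)
    x = rng.normal(size=20)
    return x, a * x

def inner_loop(theta, x, y, lr=0.05, steps=10):
    """Inner-level: adapt theta to one task with a few gradient steps on the task loss."""
    for _ in range(steps):
        grad = 2 * np.mean((theta * x - y) * x)
        theta = theta - lr * grad
    return theta

omega = 0.0                                   # meta-knowledge: a shared initialization
for _ in range(1000):                         # outer-level: loop over sampled tasks
    x, y = sample_task()
    theta_task = inner_loop(omega, x, y)
    omega += 0.1 * (theta_task - omega)       # move the initialization toward the adapted weights

print("learned initialization omega:", omega)
```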

2.1.3 Regular reinforcement learning

In reinforcement learning, we have an agent that learns from experience. That is, it interacts with an environment, modeled by a Markov Decision Process (MDP) \(M = (S, A, P, r, p_{0}, \gamma , T)\). Here, S is the set of states, A the set of actions, P the transition probability distribution defining \(P(s_{t+1}| s_{t}, a_{t})\), \(r: S \times A \rightarrow {\mathbb {R}}\) the reward function, \(p_{0}\) the probability distribution over initial states, \(\gamma \in [0,1]\) the discount factor, and T the time horizon (maximum number of time steps) (Sutton and Barto 2018; Duan et al. 2016).

At every time step t, the agent finds itself in the state \(s_{t}\), in which the agent performs an action \(a_{t}\), computed by a policy function \(\pi _{\varvec{\theta }}\) (i.e., \(a_{t} = \pi _{\varvec{\theta }}(s_{t})\)), which is parameterized by weights \(\varvec{\theta }\). In turn, it receives a reward \(r_{t} = r(s_{t}, \pi _{\varvec{\theta }}(s_{t})) \in {\mathbb {R}}\) and a new state \(s_{t+1}\). This process of interactions continues until a termination criterion is met (e.g. fixed time horizon T reached). The goal of the agent is to learn how to act in order to maximize its expected reward. The reinforcement learning (RL) goal is to find

$$\begin{aligned} \varvec{\theta }_{RL} := {\mathop {{{\,\mathrm{arg\,max}\,}}}\limits _{\varvec{\theta }}} \, {\mathbb {E}}_{\mathrm{traj}} \sum _{t=0}^{T} \gamma ^{t}r(s_{t}, \pi _{\varvec{\theta }}(s_{t})), \end{aligned}$$
(4)

where we take the expectation over the possible trajectories \(\text{ traj } = (s_{0}, \pi _{\varvec{\theta }}(s_{0}), \ldots , s_{T}, \pi _{\varvec{\theta }}(s_{T}))\) due to the random nature of MDPs (Duan et al. 2016). Note that \(\gamma\) is a hyperparameter that can prioritize short- or long-term rewards by decreasing or increasing it, respectively.
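
As a small illustration of the quantity inside the expectation of Eq. (4), the snippet below computes the discounted return of a single sampled trajectory; the reward values are placeholders.

```python
import numpy as np

rewards = np.array([1.0, 0.0, 0.5, 1.0])      # r(s_t, pi_theta(s_t)) for t = 0, ..., T
gamma = 0.9                                   # discount factor

discounted_return = sum(gamma**t * r for t, r in enumerate(rewards))
print(discounted_return)                      # 1.0 + 0.0 + 0.81 * 0.5 + 0.729 * 1.0 = 2.134
```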

Also in the case of reinforcement learning, it is often infeasible to find the global optimum \(\varvec{\theta }_{RL}\), and thus we settle for approximations. In short, given a learning method \(\omega\), we approximate

$$\begin{aligned} \varvec{\theta }_{RL} \approx g_{\omega }({\mathcal {T}}_{j}, {\mathcal {L}}_{{\mathcal {T}}_{j}}), \end{aligned}$$
(5)

where again \({\mathcal {T}}_{j}\) is the given MDP, and \(g_{\omega }\) is the optimization algorithm, guided by pre-defined meta-knowledge \(\omega\).

Note that in a Markov Decision Process (MDP), the agent knows the state at any given time step t. When this is not the case, it becomes a Partially Observable Markov Decision Process (POMDP), where the agent receives only observations O and uses these to update its belief with regard to the state it is in (Sutton and Barto 2018).

2.1.4 Meta reinforcement learning

The meta abstraction has as its object a group of tasks, or Markov Decision Processes (MDPs) in the case of reinforcement learning. Thus, instead of maximizing the expected reward on a single MDP, the meta reinforcement learning objective is to maximize the expected reward over various MDPs, by learning meta-knowledge \(\omega\). Here, the MDPs are sampled from some distribution \(p({\mathcal {T}})\). So, we wish to find optimal meta-knowledge

$$\begin{aligned} \omega ^{*} := {\mathop {{{\,\mathrm{arg\,max}\,}}}\limits _{\omega }} \, \underbrace{{\mathbb {E}}_{{\mathcal {T}}_{j} \sim p({\mathcal {T}})}}_\text {Outer-level} \left[ \underbrace{{\mathbb {E}}_{traj} \sum _{t=0}^{T} \gamma ^{t}r(s_{t}, \pi _{g_{\omega }({\mathcal {T}}_{j}, {\mathcal {L}}_{{\mathcal {T}}_{j}})}(s_{t}))}_\text {Inner-level} \right] . \end{aligned}$$
(6)

2.1.5 Contrast with other fields

Now that we have provided a formal basis for our discussion for both supervised and reinforcement meta-learning, it is time to contrast meta-learning briefly with two related areas of machine learning that also have the goal to improve the speed of learning. We will start with transfer learning.

Transfer Learning In Transfer Learning, one tries to transfer knowledge of previous tasks to new, unseen tasks (Pan and Yang 2009; Taylor and Stone 2009), which can be challenging when a new task comes from a different distribution than the one used for training (Iqbal et al. 2018). The distinction between Transfer Learning and Meta-Learning has become increasingly blurred over time. A key property of meta-learning techniques, however, is their meta-objective, which explicitly aims to optimize performance across a distribution of tasks (as seen in previous sections, where the expected loss is taken over a distribution of tasks). This objective need not always be present in Transfer Learning techniques, e.g., when one pre-trains a model on a large data set and fine-tunes the learned weights on a smaller data set.

Multi-task learning Another, closely related field, is that of multi-task learning. In multi-task learning, a model is jointly trained to perform well on multiple fixed tasks (Hospedales et al. 2020). Meta-learning, in contrast, aims to find a model that can learn new (previously unseen) tasks quickly. This difference is illustrated in Fig. 2.

Fig. 2 The difference between multi-task learning and meta-learning. Adapted from https://meta-world.github.io/

2.2 The meta-setup

In the previous section, we have described the learning objectives for (meta) supervised and reinforcement learning. We will now describe the general setting that can be used to achieve these objectives. In general, one optimizes a meta-objective by using various tasks, which are data sets in the context of supervised learning, and (Partially Observable) Markov Decision Processes in the case of reinforcement learning. This is done in three stages: the (i) meta-train stage, (ii) meta-validation stage, and (iii) meta-test stage, each of which is associated with a set of tasks.

First, in the meta-train stage, the meta-learning algorithm is applied to the meta-train tasks. Second, the meta-validation tasks can then be used to evaluate the performance on unseen tasks, which were not used for training. Effectively, this measures the meta-generalization ability of the trained network, which serves as feedback to tune, e.g., hyper-parameters of the meta-learning algorithm. Third, the meta-test tasks are used to give a final performance estimate of the meta-learning technique.

Fig. 3 Illustration of N-way, k-shot classification, where \(N = 5\) and \(k = 1\). Meta-validation tasks are not displayed. Adapted from Ravi and Larochelle (2017)

2.2.1 N-way, k-shot learning

A frequently used instantiation of this general meta-setup is called N-way, k-shot classification (see Fig. 3). This setup is also divided into the three stages (meta-train, meta-validation, and meta-test), which are used for meta-learning, meta-learner hyperparameter optimization, and evaluation, respectively. Each stage has a corresponding set of disjoint labels, i.e., \(L^{tr}, L^{val}, L^{test} \subset Y\), such that \(L^{tr} \cap L^{val} = \emptyset , L^{tr} \cap L^{test} = \emptyset\), and \(L^{val} \cap L^{test} = \emptyset\). In a given stage s, tasks/episodes \({\mathcal {T}}_{j} = (D^{tr}_{{\mathcal {T}}_{j}}, D^{test}_{{\mathcal {T}}_{j}})\) are obtained by sampling examples \(({\varvec{x}}_{i}, y_{i})\) from the full data set \({\mathcal {D}}\), such that every \(y_{i} \in L^{s}\). Note that this requires access to a data set \({\mathcal {D}}\). The sampling process is guided by the N-way, k-shot principle, which states that every training data set \(D^{tr}_{{\mathcal {T}}_{j}}\) should contain exactly N classes and k examples per class, implying that \(|D^{tr}_{{\mathcal {T}}_{j}}| = N \cdot k\). Furthermore, the true labels of examples in the test set \(D_{{\mathcal {T}}_{j}}^{test}\) must be present in the train set \(D^{tr}_{{\mathcal {T}}_{j}}\) of a given task \({\mathcal {T}}_{j}\). \(D^{tr}_{{\mathcal {T}}_{j}}\) acts as a support set, literally supporting classification decisions on the query set \(D^{test}_{{\mathcal {T}}_{j}}\). Importantly, note that with this terminology, the query set (or test set) of a task is used during the meta-training phase. Furthermore, the fact that the labels across stages are disjoint ensures that we test the ability of a model to learn new concepts. A sketch of the episode sampling process is shown below.
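
The sketch below illustrates this sampling procedure. The representation of the data set as a mapping from class label to examples, as well as the function and parameter names, are our own illustrative assumptions.

```python
import random

def sample_episode(data_by_class, n_way=5, k_shot=1, query_per_class=15):
    """Sample one task T_j = (D_train, D_test) from the classes available in this stage."""
    classes = random.sample(list(data_by_class), n_way)
    support, query = [], []
    for label in classes:
        examples = random.sample(data_by_class[label], k_shot + query_per_class)
        support += [(x, label) for x in examples[:k_shot]]   # exactly N * k support examples
        query += [(x, label) for x in examples[k_shot:]]     # query labels also occur in the support set
    return support, query

# Toy usage: 20 classes available in the current stage, 30 dummy examples each.
data_by_class = {c: list(range(30)) for c in range(20)}
support_set, query_set = sample_episode(data_by_class)
print(len(support_set), len(query_set))       # 5 and 75
```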

The meta-learning objective in the training phase is to minimize the loss function of the model predictions on the query sets, conditioned on the support sets. As such, for a given task \({\mathcal {T}}_j\), the model ‘sees’ the support set, and extracts information from the support set to guide its predictions on the query set. By applying this procedure to different episodes/tasks \({\mathcal {T}}_j\), the model will slowly accumulate meta-knowledge \(\omega\), which can ultimately speed up learning on new tasks.

The simplest way to achieve this is to use a regular neural network, but as various authors have pointed out (see, e.g., Finn et al. 2017), more sophisticated architectures vastly outperform such networks. In the remainder of this work, we will review such architectures.

At the meta-validation and meta-test stages, or evaluation phases, the learned meta-information in \(\omega\) is fixed. The model is, however, still allowed to make task-specific updates to its parameters \(\varvec{\theta }\) (which implies that it is learning). After task-specific updates, we can evaluate the performance on the test sets. In this way, we test how well a technique performs at meta-learning.

N-way, k-shot classification is often performed for small values of k (since we want our models to learn new concepts quickly, i.e., from few examples). In that case, one can refer to it as few-shot learning.

2.2.2 Common benchmarks

Here, we briefly describe some benchmarks that can be used to evaluate meta-learning algorithms.

  • Omniglot (Lake et al. 2011): This data set presents an image recognition task. Each image corresponds to one out of 1 623 characters from 50 different alphabets. Every character was drawn by 20 people. Note that in this case, the characters are the classes/labels.

  • ImageNet (Deng et al. 2009): This is a large-scale image classification data set, containing more than 20K classes and over 14 million colored images. miniImageNet is a mini variant of the large ImageNet data set (Deng et al. 2009) for image classification, proposed by Vinyals et al. (2016) to reduce the engineering effort required to run experiments. The mini data set contains 60 000 colored images of size \(84 \times 84\). There are a total of 100 classes present, each with 600 examples. tieredImageNet (Ren et al. 2018) is another variation of the large ImageNet data set. It is similar to miniImageNet, but imposes a hierarchical structure: there are 34 higher-level classes, each with its own sub-classes.

  • CIFAR-10 and CIFAR-100 (Krizhevsky 2009): Two other image recognition data sets. Each one contains 60K RGB images of size \(32 \times 32\). CIFAR-10 and CIFAR-100 contain 10 and 100 classes respectively, with a uniform number of examples per class (6 000 and 600 respectively). Every class in CIFAR-100 also has a super-class, of which there are 20 in the full data set. Many variants of the CIFAR data sets can be sampled, giving rise to e.g. CIFAR-FS (Bertinetto et al. 2019) and FC-100 (Oreshkin et al. 2018).

  • CUB-200-2011 (Wah et al. 2011): The CUB-200-2011 data set contains roughly 12K RGB images of birds from 200 species. Every image has some labeled attributes (e.g. crown color, tail shape).

  • MNIST (LeCun et al. 2010): MNIST presents a hand-written digit recognition task, containing ten classes (for digits 0 through 9). In total, the data set contains 70K gray-scale images of hand-written digits, split into 60K train and 10K test images.

  • Meta-Dataset (Triantafillou et al. 2020): This data set comprises several other data sets such as Omniglot (Lake et al. 2011), CUB-200 (Wah et al. 2011), ImageNet (Deng et al. 2009), and more (Triantafillou et al. 2020). An episode is then constructed by sampling a data set (e.g. Omniglot), selecting a subset of labels to create train and test splits as before. In this way, broader generalization is enforced since the tasks are more distant from each other.

  • Meta-world (Yu et al. 2019): A meta reinforcement learning data set, containing 50 robotic manipulation tasks (control a robot arm to achieve some pre-defined goal, e.g. unlocking a door, or playing soccer). It was specifically designed to cover a broad range of tasks, such that meaningful generalization can be measured (Yu et al. 2019).

Fig. 4 Learning continuous robotic control tasks is an important application of Deep Meta-Learning techniques. Image taken from Yu et al. (2019)

2.2.3 Some applications of meta-learning

Deep neural networks have achieved remarkable results on various tasks, ranging from image recognition, text processing, and game playing to robotics (Silver et al. 2016; Mnih et al. 2013; Wu et al. 2016), but their success depends on the amount of available data (Sun et al. 2017) and computing resources. Deep meta-learning reduces this dependency by allowing deep neural networks to learn new concepts quickly. As a result, meta-learning widens the applicability of deep learning techniques to many application domains. Such areas include few-shot image classification (Finn et al. 2017; Snell et al. 2017; Ravi and Larochelle 2017), robotic control policy learning (Gupta et al. 2018; Nagabandi et al. 2019) (see Fig. 4), hyperparameter optimization (Antoniou et al. 2019; Schmidhuber et al. 1997), meta-learning learning rules (Bengio et al. 1991, 1997; Miconi et al. 2018, 2019), abstract reasoning (Barrett et al. 2018), and many more. For a larger overview of applications, we refer interested readers to Hospedales et al. (2020).

2.3 The meta-learning field

As mentioned in the introduction, meta-learning is a broad area of research, as it encapsulates all techniques that leverage prior learning experience to learn new tasks more quickly (Vanschoren 2018). We can identify two distinct communities in the field, each with a different focus: (i) algorithm selection and hyperparameter optimization for machine learning techniques, and (ii) search for a good inductive bias in deep neural networks. We will refer to these communities as group (i) and group (ii), respectively. Below, we give a brief description of the first community, and a historical overview of the second.

Group (i) uses a more traditional approach, to select a suitable machine learning algorithm and hyperparameters for a new data set \({\mathcal {D}}\) (Peng et al. 2002). This selection can for example be made by leveraging prior model evaluations on various data sets \(D'\), and by using the model which achieved the best performance on the most similar data set (Vanschoren 2018). Such traditional approaches require (large) databases of prior model evaluations, for many different algorithms. This has led to initiatives such as OpenML (Vanschoren et al. 2014), where researchers can share such information. The usage of these systems would limit the freedom in picking the neural network architecture as they would be constrained to using architectures that have been evaluated beforehand.

Driven by advances in neural networks, another approach, taken by group (ii), is to adopt the view of a self-improving agent, which improves its learning ability over time by finding a good inductive bias (a set of assumptions that guide predictions). We now present a brief historical overview of developments in this field of Deep Meta-Learning, based on Hospedales et al. (2020).

Pioneering work was done by Schmidhuber (1987) and Hinton and Plaut (1987). Schmidhuber developed a theory of self-referential learning, where the weights of a neural network can serve as input to the model itself, which then predicts updates (Schmidhuber 1987, 1993). In that same year, Hinton and Plaut (1987) proposed to use two weights per neural network connection, i.e., slow and fast weights, which serve as long- and short-term memory respectively. Later came the idea of meta-learning learning rules (Bengio et al. 1991, 1997). Meta-learning techniques that use gradient-descent and backpropagation were proposed by Hochreiter et al. (2001) and Younger et al. (2001). These two works have been pivotal to the current field of Deep Meta-Learning, as the majority of techniques rely on backpropagation, as we will see on our journey of contemporary Deep Meta-Learning techniques. We will now cover the three categories in turn: metric-, model-, and optimization-based techniques.

2.4 Overview of the rest of this work

In the remainder of this work, we will look in more detail at individual meta-learning methods. As indicated before, the techniques can be grouped into three main categories (Vinyals 2017), namely (i) metric-, (ii) model-, and (iii) optimization-based methods. We will discuss them in sequence.

To help give an overview of the methods, we draw your attention to the following tables. Table 2 summarizes the three categories and provides key ideas, strengths, and weaknesses of the approaches. The terms and technical details are explained more fully in the remainder of this paper. Table 3 contains an overview of all techniques that are discussed further on.

Table 2 A high-level overview of the three Deep Meta-Learning categories, i.e., (i) metric-, (ii) model-, and (iii) optimization-based techniques, and their main strengths and weaknesses
Table 3 Overview of the discussed Deep Meta-Learning techniques

3 Metric-based meta-learning

At a high level, the goal of metric-based techniques is to acquire, among other things, meta-knowledge \(\omega\) in the form of a good feature space that can be used for various new tasks. In the context of neural networks, this feature space is parameterized by the weights \(\varvec{\theta }\) of the network. Then, new tasks can be learned by comparing new inputs to example inputs (of which we know the labels) in the meta-learned feature space. The higher the similarity between a new input and an example, the more likely it is that the new input will have the same label as the example input.

Metric-based techniques are a form of meta-learning as they leverage their prior learning experience (meta-learned feature space) to ‘learn’ new tasks more quickly. Here, ‘learn’ is used in a non-standard way since metric-based techniques do not make any network changes when presented with new tasks, as they rely solely on input comparisons in the already meta-learned feature space. These input comparisons are a form of non-parametric learning, i.e., new task information is not absorbed into the network parameters.

More formally, metric-based learning techniques aim to learn a similarity kernel, or equivalently, attention mechanism \(k_{\varvec{\theta }}\) (parameterized by \(\varvec{\theta }\)), that takes two inputs \({\varvec{x}}_{1}\) and \({\varvec{x}}_{2}\), and outputs their similarity score. Larger scores indicate larger similarities. Class predictions for new inputs \({\varvec{x}}\) can then be made by comparing \({\varvec{x}}\) to example inputs \({\varvec{x}}_{i}\), of which we know the true labels \(y_{i}\). The underlying idea is that the larger the similarity between \({\varvec{x}}\) and \({\varvec{x}}_{i}\), the more likely it becomes that \({\varvec{x}}\) also has label \(y_{i}\).

Given a task \({\mathcal {T}}_{j} = (D^{tr}_{{\mathcal {T}}_{j}}, D^{test}_{{\mathcal {T}}_{j}})\) and an unseen input vector \({\varvec{x}} \in D^{test}_{{\mathcal {T}}_{j}}\), a probability distribution over classes Y is computed/predicted as a weighted combination of labels from the support set \(D^{tr}_{{\mathcal {T}}_{j}}\), using similarity kernel \(k_{\varvec{\theta }}\), i.e.,

$$\begin{aligned} p_{\varvec{\theta }}(Y|{\varvec{x}}, D^{tr}_{{\mathcal {T}}_{j}}) = \sum _{({\varvec{x}}_{i}, y_{i}) \in D^{tr}_{{\mathcal {T}}_{j}}} k_{\varvec{\theta }}({\varvec{x}}, {\varvec{x}}_{i})y_{i}. \end{aligned}$$
(7)

Importantly, the labels \(y_{i}\) are assumed to be one-hot encoded, meaning that they are represented by zero vectors with a ‘1’ on the position of the true class. For example, suppose there are five classes in total and our example \({\varvec{x}}_{1}\) has true class 4. Then, the one-hot encoded label is \(y_{1} = [0,0,0,1,0]\). Note that the probability distribution \(p_{\varvec{\theta }}(Y|{\varvec{x}}, D^{tr}_{{\mathcal {T}}_{j}})\) over classes is a vector of size |Y|, in which the i-th entry corresponds to the probability that input \({\varvec{x}}\) has class \(Y_{i}\) (given the support set). The predicted class is thus \({\hat{y}} = {{\,\mathrm{arg\,max}\,}}_{i=1,2,\ldots ,|Y|} p_{\varvec{\theta }}(Y|{\varvec{x}}, D^{tr}_{{\mathcal {T}}_{j}})_{i}\), where \(p_{\varvec{\theta }}(Y|{\varvec{x}}, D^{tr}_{{\mathcal {T}}_{j}})_{i}\) is the computed probability that input \({\varvec{x}}\) has class \(Y_{i}\).

Fig. 5 Illustration of our metric-based example. The blue vector represents the new input from the query set, whereas the red vectors are inputs from the support set which can be used to guide our prediction for the new input

3.1 Example

Suppose that we are given a task \({\mathcal {T}}_{j} = (D^{tr}_{{\mathcal {T}}_{j}}, D^{test}_{{\mathcal {T}}_{j}})\). Furthermore, suppose that \(D^{tr}_{{\mathcal {T}}_{j}} = \{ ([0,-4], 1), ([-2,-4],2), ([-2,4],3), ([6,0], 4) \}\), where a tuple denotes a pair \(({\varvec{x}}_{i},y_{i})\). For simplicity, the example will not use an embedding function, which maps example inputs onto an (more informative) embedding space. Our query set only contains one example \(D^{test}_{{\mathcal {T}}_{j}} = \{ ([4, 0.5], y) \}\). Then, the goal is to predict the correct label for new input [4, 0.5] using only examples in \(D^{tr}_{{\mathcal {T}}_{j}}\). The problem is visualized in Fig. 5, where red vectors correspond to example inputs from our support set. The blue vector is the new input that needs to be classified. Intuitively, this new input is most similar to the vector [6, 0], which means that we expect the label for the new input to be the same as that for [6, 0], i.e., 4.

Suppose we use a fixed similarity kernel, namely the cosine similarity, i.e., \(k({\varvec{x}}, {\varvec{x}}_{i}) = \frac{{\varvec{x}} \cdot {\varvec{x}}_{i}^{T}}{||{\varvec{x}}|| \cdot ||{\varvec{x}}_{i}||}\), where \(||{\varvec{v}}||\) denotes the length of vector \({\varvec{v}}\), i.e., \(||{\varvec{v}}|| = \sqrt{(\sum _{n}v_{n}^{2})}\). Here, \(v_{n}\) denotes the n-th element of placeholder vector \({\varvec{v}}\) (substitute \({\varvec{v}}\) by \({\varvec{x}}\) or \({\varvec{x}}_{i}\)). We can now compute the cosine similarity between the new input [4, 0.5] and every example input \({\varvec{x}}_{i}\), as done in Table 4, where we used the facts that \(||{\varvec{x}}|| = ||\, [4,0.5] \, || = \sqrt{4^{2}+0.5^{2}} \approx 4.03\), and \(\frac{{\varvec{x}}}{||{\varvec{x}}||} \approx \frac{[4,0.5]}{4.03} = [0.99,0.12]\).

From this table and Eq. 7, it follows that the predicted probability distribution \(p_{\varvec{\theta }}(Y|{\varvec{x}}, D^{tr}_{{\mathcal {T}}_{j}}) = -0.12y_{1} - 0.55y_{2} - 0.33y_{3} + 0.99y_{4} = -0.12 [1,0,0,0] - 0.55 [0,1,0,0] - 0.33[0,0,1,0] + 0.99[0,0,0,1] = [-0.12,-0.55,-0.33,0.99]\). Note that this is not really a probability distribution. That would require normalization such that every element is at least 0 and the sum of all elements is 1. For the sake of this example, we do not perform this normalization, as it is clear that class 4 (the class of the most similar example input [6, 0]) will be predicted.

Table 4 Example showing pair-wise input comparisons
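
The short script below reproduces this computation: it evaluates the cosine similarities between the query input and the four support inputs and forms the (unnormalized) weighted label combination of Eq. (7).

```python
import numpy as np

support_x = np.array([[0.0, -4.0], [-2.0, -4.0], [-2.0, 4.0], [6.0, 0.0]])
support_y = np.eye(4)                          # one-hot labels for classes 1, 2, 3, 4
query = np.array([4.0, 0.5])

# Fixed similarity kernel: cosine similarity between the query and each support input.
cosine = support_x @ query / (np.linalg.norm(support_x, axis=1) * np.linalg.norm(query))

scores = cosine @ support_y                    # Eq. (7): weighted combination of one-hot labels
print(np.round(scores, 2))
print("predicted class:", np.argmax(scores) + 1)   # class 4, the most similar support example
```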

One may wonder why such techniques are meta-learners, for we could take any single data set \({\mathcal {D}}\) and use pair-wise comparisons to compute predictions. At the outer-level, metric-based meta-learners are trained on a distribution of different tasks, in order to learn (among others) a good input embedding function. This embedding function facilitates inner-level learning, which is achieved through pair-wise comparisons. As such, one learns an embedding function across tasks to facilitate task-specific learning, which is equivalent to “learning to learn”, or meta-learning.

After this introduction to metric-based methods, we will now cover some key metric-based techniques.

3.2 Siamese neural networks

A Siamese neural network (Koch et al. 2015) consists of two neural networks \(f_{\varvec{\theta }}\) that share the same weights \(\varvec{\theta }\). Siamese neural networks take two inputs \({\varvec{x}}_{1}, {\varvec{x}}_{2}\), and compute two hidden states \(f_{\varvec{\theta }}({\varvec{x}}_{1}), f_{\varvec{\theta }}({\varvec{x}}_{2})\), corresponding to the activation patterns in the final hidden layers. These hidden states are fed into a distance layer, which computes a distance vector \({\varvec{d}} = |f_{\varvec{\theta }}({\varvec{x}}_{1}) - f_{\varvec{\theta }}({\varvec{x}}_{2})|\), where \(d_{i}\) is the absolute distance between the i-th elements of \(f_{\varvec{\theta }}({\varvec{x}}_{1})\) and \(f_{\varvec{\theta }}({\varvec{x}}_{2})\). From this distance vector, the similarity between \({\varvec{x}}_{1}, {\varvec{x}}_{2}\) is computed as \(\sigma (\varvec{\alpha }^{T} {\varvec{d}})\), where \(\sigma\) is the sigmoid function (with output range [0,1]), and \(\varvec{\alpha }\) is a vector of free weighting parameters, determining the importance of each \(d_{i}\). This network structure can be seen in Fig. 6.
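
A hedged PyTorch sketch of this architecture is given below: a shared embedding network \(f_{\varvec{\theta }}\), an element-wise absolute-difference layer, and a learned weighting \(\varvec{\alpha }\) followed by a sigmoid. The layer types and sizes are illustrative and not those used by Koch et al. (2015).

```python
import torch
import torch.nn as nn

class SiameseNet(nn.Module):
    def __init__(self, in_dim=784, hidden_dim=128):
        super().__init__()
        # Shared embedding network f_theta (both inputs pass through the same weights).
        self.f = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(),
                               nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
        self.alpha = nn.Linear(hidden_dim, 1, bias=False)   # free weighting parameters alpha

    def forward(self, x1, x2):
        d = torch.abs(self.f(x1) - self.f(x2))              # distance vector d
        return torch.sigmoid(self.alpha(d))                 # similarity score in [0, 1]

net = SiameseNet()
x1, x2 = torch.randn(1, 784), torch.randn(1, 784)
print(net(x1, x2).item())        # estimated probability that x1 and x2 belong to the same class
```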

Fig. 6 Example of a Siamese neural network. Source: Koch et al. (2015)

Koch et al. (2015) applied this technique to few-shot image recognition in two stages. In the first stage, they train the twin network on an image verification task, where the goal is to output whether two input images \({\varvec{x}}_{1}\) and \({\varvec{x}}_{2}\) have the same class. The network is thus stimulated to learn discriminative features. In the second stage, where the model is confronted with a new task, the network leverages its prior learning experience. That is, given a task \({\mathcal {T}}_{j} = (D^{tr}_{{\mathcal {T}}_{j}}, D^{test}_{{\mathcal {T}}_{j}})\), and previously unseen input \({\varvec{x}} \in D^{test}_{{\mathcal {T}}_{j}}\), the predicted class \({\hat{y}}\) is equal to the label \(y_{i}\) of the example \(({\varvec{x}}_{i},y_{i}) \in D^{tr}_{{\mathcal {T}}_{j}}\) which yields the highest similarity score to \({\varvec{x}}\). In contrast to other techniques mentioned further in this section, Siamese neural networks do not directly optimize for good performance across tasks (consisting of support and query sets). However, they do leverage learned knowledge from the verification task to learn new tasks quickly.

In summary, Siamese neural networks are a simple and elegant approach to perform few-shot learning. However, they are not readily applicable outside the supervised learning setting.

3.3 Matching networks

Matching networks (Vinyals et al. 2016) build upon the idea that underlies Siamese neural networks (Koch et al. 2015). That is, they leverage pair-wise comparisons between the given support set \(D^{tr}_{{\mathcal {T}}_{j}} = \{ ({\varvec{x}}_{i}, y_{i}) \}_{i=1}^{m}\) (for a task \({\mathcal {T}}_{j}\)), and new inputs \({\varvec{x}} \in D^{test}_{{\mathcal {T}}_{j}}\) from the query set which we want to classify. However, instead of assigning the class \(y_{i}\) of the most similar example input \({\varvec{x}}_{i}\), matching networks use a weighted combination of all example labels \(y_{i}\) in the support set, based on the similarity of inputs \({\varvec{x}}_{i}\) to new input \({\varvec{x}}\). More specifically, predictions are computed as follows: \({\hat{y}} = \sum _{i=1}^{m} a({\varvec{x}}, {\varvec{x}}_{i})y_{i}\), where a is a non-parametric (non-trainable) attention mechanism, or similarity kernel. This classification process is shown in Fig. 7. In this figure, the input to \(f_{\varvec{\theta }}\) has to be classified, using the support set \(D^{tr}_{{\mathcal {T}}_{j}}\) (input to \(g_{\varvec{\theta }}\)).

Fig. 7 The architecture of matching networks. Source: Vinyals et al. (2016)

The attention that is used consists of a softmax over the cosine similarity c between the input representations, i.e.,

$$\begin{aligned} a({\varvec{x}}, {\varvec{x}}_{i}) = \frac{e^{c( f_{\varvec{\phi }}({\varvec{x}}), g_{\varvec{\varphi }}({\varvec{x}}_{i}) )}}{\sum _{j=1}^{m} e^{c( f_{\varvec{\phi }}({\varvec{x}}), g_{\varvec{\varphi }}({\varvec{x}}_{j}) )}}, \end{aligned}$$
(8)

where \(f_{\varvec{\phi }}\) and \(g_{\varvec{\varphi }}\) are neural networks, parameterized by \(\varvec{\phi }\) and \(\varvec{\varphi }\), that map raw inputs to a (lower-dimensional) latent vector, which corresponds to the output of the final hidden layer of a neural network. As such, neural networks act as embedding functions. The larger the cosine similarity between the embeddings of \({\varvec{x}}\) and \({\varvec{x}}_{i}\), the larger \(a({\varvec{x}}, {\varvec{x}}_{i})\), and thus the influence of label \(y_{i}\) on the predicted label \({\hat{y}}\) for input \({\varvec{x}}\).
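
The attention mechanism of Eq. (8) can be sketched as follows, assuming the support and query inputs have already been embedded by \(g_{\varvec{\varphi }}\) and \(f_{\varvec{\phi }}\); random vectors act as stand-ins for these embeddings.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)
support_emb = rng.normal(size=(5, 16))         # g_varphi(x_i) for a 5-way, 1-shot support set
support_y = np.eye(5)                          # one-hot labels y_i
query_emb = rng.normal(size=16)                # f_phi(x)

sims = np.array([cosine(query_emb, e) for e in support_emb])
attention = np.exp(sims) / np.exp(sims).sum()  # softmax over the cosine similarities (Eq. 8)
y_hat = attention @ support_y                  # weighted combination of support labels
print("predicted class:", int(np.argmax(y_hat)))
```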

Vinyals et al. (2016) propose two main choices for the embedding functions. The first is to use a single neural network, granting us \(\varvec{\theta } = \varvec{\phi } = \varvec{\varphi }\) and thus \(f_{\varvec{\phi }} = g_{\varvec{\varphi }}\). This setup is the default form of matching networks, as shown in Fig. 7. The second choice is to make \(f_{\varvec{\phi }}\) and \(g_{\varvec{\varphi }}\) dependent on the support set \(D^{tr}_{{\mathcal {T}}_{j}}\) using Long Short-Term Memory networks (LSTMs). In that case, \(f_{\varvec{\phi }}\) is represented by an attention LSTM, and \(g_{\varvec{\varphi }}\) by a bidirectional one. This choice for embedding functions is called Full Context Embeddings (FCE), and yielded an accuracy improvement of roughly 2% on miniImageNet compared to the regular matching networks, indicating that task-specific embeddings can aid the classification of new data points from the same distribution.

Matching networks learn a good feature space across tasks for making pair-wise comparisons between inputs. In contrast to Siamese neural networks (Koch et al. 2015), this feature space (given by weights \(\varvec{\theta }\)) is learned across tasks, instead of on a distinct verification task.

In summary, matching networks are an elegant and simple approach to metric-based meta-learning. However, these networks are not readily applicable outside of supervised learning settings, and suffer from performance degradation when label distributions are biased (Vinyals et al. 2016).

3.4 Prototypical networks

Just like Matching networks (Vinyals et al. 2016), prototypical networks (Snell et al. 2017) base their class predictions on the entire support set \(D^{tr}_{{\mathcal {T}}_{j}}\). However, instead of computing the similarity between new inputs and examples in the support set, prototypical networks only compare new inputs to class prototypes (centroids), which are single vector representations of classes in some embedding space. Since the number of class prototypes is at most the number of examples in the support set, the number of required pair-wise comparisons decreases, saving computational costs.

The underlying idea of class prototypes is that for a task \({\mathcal {T}}_{j}\), there exists an embedding function that maps the support set onto a space where class instances cluster nicely around the corresponding class prototypes (Snell et al. 2017). Then, for a new input \({\varvec{x}}\), the class of the prototype nearest to that input will be predicted. As such, prototypical networks perform nearest centroid/prototype classification in a meta-learned embedding space. This is visualized in Fig. 8.

More formally, given a distance function \(d: X \times X \rightarrow [0, +\infty )\) (e.g. Euclidean distance) and embedding function \(f_{\varvec{\theta }}\), parameterized by \(\varvec{\theta }\), prototypical networks compute class probabilities \(p_{\varvec{\theta }}(Y | {\varvec{x}}, D^{tr}_{{\mathcal {T}}_{j}})\) as follows

$$\begin{aligned} p_{\varvec{\theta }}(y = k | {\varvec{x}}, D^{tr}_{{\mathcal {T}}_{j}}) = \frac{exp[-d(f_{\theta }({\varvec{x}}), {\varvec{c}}_{k})]}{\sum _{y_{i}} exp[-d(f_{\theta }({\varvec{x}}), {\varvec{c}}_{y_{i}}) ]}, \end{aligned}$$
(9)

where \({\varvec{c}}_{k}\) is the prototype/centroid for class k and the sum in the denominator runs over the classes \(y_{i}\) present in the support set \(D^{tr}_{{\mathcal {T}}_{j}}\). Here, the class prototype for class k is defined as the average of the embeddings \(f_{\theta }({\varvec{x}}_{i})\) of all support examples \({\varvec{x}}_{i}\) with \(y_{i} = k\). Thus, classes with prototypes that are nearer to the embedding of the new input \({\varvec{x}}\) obtain larger probability scores.

Snell et al. (2017) found that the squared Euclidean distance function as d gave rise to the best performance. With that distance function, prototypical networks can be seen as linear models. To see this, note that \(-d(f_{\theta }({\varvec{x}}), {\varvec{c}}_{k}) = -|| f_{\theta }({\varvec{x}}) - {\varvec{c}}_{k}||^{2} = - f_{\theta }({\varvec{x}})^{T}f_{\theta }({\varvec{x}}) + 2{\varvec{c}}_{k}^{T}f_{\theta }({\varvec{x}}) - {\varvec{c}}_{k}^{T}{\varvec{c}}_{k}\). The first term does not depend on the class k, and does thus not affect the classification decision. The remainder can be written as \({\varvec{w}}_{k}^{T}f_{\theta }({\varvec{x}}) + {\varvec{b}}_{k}\), where \({\varvec{w}}_{k} = 2{\varvec{c}}_{k}\) and \({\varvec{b}}_{k} = -{\varvec{c}}_{k}^{T}{\varvec{c}}_{k}\). Note that this is linear in the output of network \(f_\theta\), not linear in the input of the network \({\varvec{x}}\). Also, Snell et al. (2017) show that prototypical networks (coupled with Euclidean distance) are equivalent to matching networks in one-shot learning settings, as every example in the support set will be its own prototype.
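
A hedged sketch of prototypical-network classification with the squared Euclidean distance is given below; it assumes the embeddings \(f_{\theta }({\varvec{x}})\) have already been computed, and the shapes are illustrative.

```python
import numpy as np

def prototypes(support_emb, support_labels, n_classes):
    """Class prototypes c_k: the mean embedding of the support examples of class k."""
    return np.stack([support_emb[support_labels == k].mean(axis=0) for k in range(n_classes)])

def class_probabilities(query_emb, protos):
    """Eq. (9): softmax over negative squared Euclidean distances to the prototypes."""
    logits = -((query_emb[None, :] - protos) ** 2).sum(axis=1)
    p = np.exp(logits - logits.max())
    return p / p.sum()

rng = np.random.default_rng(0)
support_emb = rng.normal(size=(10, 16))              # 5-way, 2-shot support embeddings f_theta(x_i)
support_labels = np.repeat(np.arange(5), 2)
protos = prototypes(support_emb, support_labels, n_classes=5)
print(class_probabilities(rng.normal(size=16), protos))   # probabilities over the 5 classes
```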

Fig. 8 Prototypical networks for the case of few-shot learning. The \({\varvec{c}}_{k}\) are class prototypes for class k, computed by averaging the representations of inputs (colored circles) in the support set. Note that the representation space is partitioned into three disjoint areas, where each area corresponds to one class. The class with the closest prototype to the new input \({\varvec{x}}\) in the query set is then given as prediction. Source: Snell et al. (2017)

In short, prototypical networks save computational costs by reducing the required number of pair-wise comparisons between new inputs and the support set, by adopting the concept of class prototypes. Additionally, prototypical networks were found to outperform matching networks (Vinyals et al. 2016) in 5-way, k-shot learning for \(k=1,5\) on Omniglot (Lake et al. 2011) and miniImageNet (Vinyals et al. 2016), even though they do not use complex task-specific embedding functions. Despite these advantages, prototypical networks are not readily applicable outside of supervised learning settings.

3.5 Relation networks

Fig. 9 Relation network architecture. First, the embedding network \(f_{\varvec{\varphi }}\) embeds all inputs from the support set \(D^{tr}_{{\mathcal {T}}_{j}}\) (the five example inputs on the left), and the query input (below the \(f_{\varvec{\varphi }}\) block). All support set embeddings \(f_{\varvec{\varphi }}({\varvec{x}}_{i})\) are then concatenated to the query embedding \(f_{\varvec{\varphi }}({\varvec{x}})\). These concatenated embeddings are passed into a relation network \(g_{\varvec{\phi }}\), which computes a relation score for every pair \(({\varvec{x}}_{i}, {\varvec{x}})\). The class of the input \({\varvec{x}}_{i}\) that yields the largest relation score \(g_{\varvec{\phi }}([f_{\varvec{\varphi }}({\varvec{x}}), f_{\varvec{\varphi }}({\varvec{x}}_{i})])\) is then predicted. Source: Sung et al. (2018)

In contrast to previously discussed metric-based techniques, Relation networks (Sung et al. 2018) employ a trainable similarity metric, instead of a pre-defined one (e.g. cosine similarity as used in matching networks (Vinyals et al. 2016)). More specifically, Relation networks consist of two chained neural network modules: the embedding network/module \(f_{\varvec{\varphi }}\) which is responsible for embedding inputs, and the relation network \(g_{\varvec{\phi }}\) which computes similarity scores between new inputs \({\varvec{x}}\) and example inputs \({\varvec{x}}_{i}\) of which we know the labels. A classification decision is then made by picking the class of the example input which yields the largest relation score (or similarity). Note that Relation networks thus do not use the idea of class prototypes, and simply compare new inputs \({\varvec{x}}\) to all example inputs \({\varvec{x}}_{i}\) in the support set, as done by, e.g., matching networks (Vinyals et al. 2016).

More formally, we are given a support set \(D^{tr}_{{\mathcal {T}}_{j}}\) with some examples \(({\varvec{x}}_{i}, y_{i})\), and a new (previously unseen) input \({\varvec{x}}\). Then, for every combination \(({\varvec{x}}, {\varvec{x}}_{i})\), the Relation network produces a concatenated embedding \([f_{\varvec{\varphi }}({\varvec{x}}), f_{\varvec{\varphi }}({\varvec{x}}_{i})]\), which is the vector obtained by concatenating the respective embeddings of \({\varvec{x}}\) and \({\varvec{x}}_{i}\). This concatenated embedding is then fed into the relation module \(g_{\varvec{\phi }}\). Finally, \(g_{\varvec{\phi }}\) computes the relation score between \({\varvec{x}}\) and \({\varvec{x}}_{i}\) as

$$\begin{aligned} r_{i} = g_{\varvec{\phi }}([f_{\varvec{\varphi }}({\varvec{x}}), f_{\varvec{\varphi }}({\varvec{x}}_{i}) ]). \end{aligned}$$
(10)

The predicted class is then \({\hat{y}} = y_{{{\,\mathrm{arg\,max}\,}}_{i} r_{i}}\). This entire process is shown in Fig. 9. Remarkably enough, Relation networks are trained with the Mean-Squared Error (MSE) between the relation scores and the ground-truth matches (1 if a pair shares the same class, 0 otherwise), rather than the more standard cross-entropy loss. The MSE is then propagated backwards through the entire architecture (Fig. 9).
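
A hedged PyTorch sketch of this computation is shown below: both inputs are embedded by \(f_{\varvec{\varphi }}\), the embeddings are concatenated, and a trainable relation module \(g_{\varvec{\phi }}\) outputs a relation score. The layer sizes are illustrative and not those of Sung et al. (2018).

```python
import torch
import torch.nn as nn

embed_dim = 64
f_varphi = nn.Sequential(nn.Linear(784, embed_dim), nn.ReLU())            # embedding module
g_phi = nn.Sequential(nn.Linear(2 * embed_dim, 32), nn.ReLU(),
                      nn.Linear(32, 1), nn.Sigmoid())                     # trainable relation module

x = torch.randn(1, 784)                       # query input
support = torch.randn(5, 784)                 # 5-way, 1-shot support inputs

pairs = torch.cat([f_varphi(x).expand(5, -1), f_varphi(support)], dim=1)  # [f(x), f(x_i)] for every i
relation_scores = g_phi(pairs).squeeze(1)                                 # r_i of Eq. (10)
print("predicted class index:", relation_scores.argmax().item())
```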

The key advantage of Relation networks is their expressive power, induced by the use of a trainable similarity function. As a result, they tend to yield better performance than previously discussed techniques that use a fixed similarity metric.

3.6 Graph neural networks

Graph neural networks (Garcia and Bruna 2017) use a more general and flexible approach than previously discussed techniques for N-way, k-shot classification. As such, graph neural networks subsume Siamese (Koch et al. 2015) and prototypical networks (Snell et al. 2017). The graph neural network approach represents each task \({\mathcal {T}}_{j}\) as a fully-connected graph \(G = (V,E)\), where V is a set of nodes/vertices and E a set of edges connecting nodes. In this graph, nodes \({\varvec{v}}_{i}\) correspond to input embeddings \(f_{\varvec{\theta }}({\varvec{x}}_{i})\), concatenated with their one-hot encoded labels \(y_{i}\), i.e., \({\varvec{v}}_{i} = [f_{\varvec{\theta }}({\varvec{x}}_{i}), y_{i}]\). For inputs \({\varvec{x}}\) from the query set (for which we do not have the labels), a uniform prior over all N possible labels is used: \(y = [\frac{1}{N},\ldots ,\frac{1}{N}]\). Thus, each node contains an input and label section. Edges are weighted links that connect these nodes.
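
The node construction described above can be sketched as follows; random vectors stand in for the embeddings \(f_{\varvec{\theta }}({\varvec{x}}_{i})\), and the sizes are illustrative.

```python
import numpy as np

N, embed_dim = 5, 16
rng = np.random.default_rng(0)

support_emb = rng.normal(size=(N, embed_dim))                    # one support example per class
support_nodes = np.concatenate([support_emb, np.eye(N)], axis=1)             # [f_theta(x_i), one-hot y_i]

query_emb = rng.normal(size=(1, embed_dim))
query_nodes = np.concatenate([query_emb, np.full((1, N), 1.0 / N)], axis=1)  # uniform label prior

nodes = np.concatenate([support_nodes, query_nodes], axis=0)     # vertices V of the fully-connected task graph
print(nodes.shape)                                               # (6, 21)
```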

The graph neural network then propagates information in the graph using a number of local operators. The underlying idea is that label information can be transmitted from nodes of which we do have the labels, to nodes for which we have to predict labels. Which local operators are used is beyond the scope of this paper; the reader is referred to Garcia and Bruna (2017) for details.

By exposing the graph neural network to various tasks \({\mathcal {T}}_{j}\), the propagation mechanism can be altered to improve the flow of label information in such a way that predictions become more accurate. As such, in addition to learning a good input representation function \(f_{\varvec{\theta }}\), graph neural networks also learn to propagate label information from labeled examples to unlabeled inputs.

Graph neural networks achieve good performance in few-shot settings (Garcia and Bruna 2017) and are also applicable in semi-supervised and active learning settings.

3.7 Attentive recurrent comparators

Fig. 10 Processing in an attentive recurrent comparator. At every time step, the model takes a glimpse of a part of an image and incorporates this information into the hidden state \(h_t\). The final hidden state after taking various glimpses of a pair of images is then used to compute a class similarity score. Source: Shyam et al. (2017)

Attentive recurrent comparators (Shyam et al. 2017) differ from previously discussed techniques as they do not compare inputs as a whole, but by parts. This approach is inspired by how humans would make a decision concerning the similarity of objects. That is, we shift our attention from one object to the other, and move back and forth to take glimpses of different parts of both objects. In this way, information of two objects is fused from the beginning, whereas other techniques (e.g., matching networks (Vinyals et al. 2016) and graph neural networks (Garcia and Bruna 2017)) only combine information at the end (after embedding both images) (Shyam et al. 2017).

Given two inputs \({\varvec{x}}_{i}\) and \({\varvec{x}}\), we feed them in interleaved fashion repeatedly into a recurrent neural network (controller): \({\varvec{x}}_{i}, {\varvec{x}},\ldots ,{\varvec{x}}_{i},{\varvec{x}}\). Thus, the image at time step t is given by \(I_{t} = {\varvec{x}}_{i}\) if t is even, and \(I_{t} = {\varvec{x}}\) otherwise. Then, at each time step t, the attention mechanism focuses on a square region of the current image: \(G_{t} = attend(I_{t}, \varOmega _{t})\), where \(\varOmega _{t} = W_{g}h_{t-1}\) are attention parameters, which are computed from the previous hidden state \(h_{t-1}\). The next hidden state \(h_{t} = \text{ RNN }(G_{t}, h_{t-1})\) is given by the glimpse at time t, i.e., \(G_{t}\), and the previous hidden state \(h_{t-1}\). The entire sequence consists of g glimpses per image. After this sequence is fed into the recurrent neural network (indicated by RNN(\(\circ\))), the final hidden state \(h_{2g}\) is used as a combined representation of \({\varvec{x}}_{i}\) relative to \({\varvec{x}}\). This process is summarized in Fig. 10. Classification decisions can then be made by feeding the combined representations into a classifier. Optionally, the combined representations can be processed by bi-directional LSTMs before passing them to the classifier.
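
The interleaved glimpse loop can be sketched as follows. This is a highly simplified stand-in: the attend function here merely crops a square region at a location predicted from the hidden state, which only loosely mimics the attention mechanism of Shyam et al. (2017), and the controller and dimensions are illustrative.

```python
import torch
import torch.nn as nn

img_size, glimpse_size, hidden = 32, 8, 64
controller = nn.LSTMCell(glimpse_size * glimpse_size, hidden)   # the recurrent controller
W_g = nn.Linear(hidden, 2)                                       # Omega_t = W_g h_{t-1} (glimpse location)

def attend(image, omega):
    """Crude stand-in for attend(I_t, Omega_t): crop a square glimpse at the predicted location."""
    row, col = (omega.sigmoid() * (img_size - glimpse_size)).long().tolist()
    return image[row:row + glimpse_size, col:col + glimpse_size].reshape(1, -1)

x_i, x = torch.rand(img_size, img_size), torch.rand(img_size, img_size)
h, c = torch.zeros(1, hidden), torch.zeros(1, hidden)
g = 4                                                            # number of glimpses per image
for t in range(2 * g):
    image = x_i if t % 2 == 0 else x                             # interleave the two inputs
    omega = W_g(h).squeeze(0)
    h, c = controller(attend(image, omega), (h, c))

print(h.shape)     # final hidden state h_{2g}: joint representation of x_i relative to x
```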

The attention approach is biologically inspired, and biologically plausible. A downside of attentive recurrent comparators is their higher computational cost, while their performance is often not better than that of less biologically plausible techniques, such as graph neural networks (Garcia and Bruna 2017).

3.8 Metric-based techniques, in conclusion

In this section, we have seen various metric-based techniques. The metric-based techniques meta-learn an informative feature space that can be used to compute class predictions based on input similarity scores. Figure 11 shows the relationships between the various metric-based techniques that we have covered.

As we can see, Siamese networks (Koch et al. 2015) mark the beginning of metric-based, deep meta-learning techniques in few-shot learning settings. They are the first to use the idea of predicting classes by comparing inputs from the support and query sets. This idea was generalized in graph neural networks (GNNs) (Hamilton et al. 2017; Garcia and Bruna 2017) where the information flow between support and query inputs is parametric and thus more flexible. Matching networks (Vinyals et al. 2016) are directly inspired by Siamese networks as they use the same core idea (comparing inputs for making predictions), but directly train in the few-shot setting and use cosine similarity as a similarity function. Thus, the auxiliary, binary classification task used by Siamese networks is left out, and matching networks directly train on tasks. Prototypical networks (Snell et al. 2017) increase the robustness of input comparisons by comparing every query set input with a class prototype instead of individual support set examples. This reduces the number of required input comparisons for a single query input to N instead of \(k \cdot N\). Relation networks (Sung et al. 2018) replace the fixed, pre-defined similarity metrics used in matching and prototypical networks with a neural network, which allows for learning a domain-specific similarity function. Lastly, attentive recurrent comparators (ARCs) (Shyam et al. 2017) take a more biologically plausible approach by not comparing entire inputs but by taking multiple interleaved glimpses at various parts of the inputs that are being compared.

Key advantages of these metric-based techniques are that (i) the underlying idea of similarity-based predictions is conceptually simple, and (ii) they can be fast at test-time when tasks are small, as the networks do not need to make task-specific adjustments. However, when tasks at meta-test time become more distant from the tasks that were used at meta-train time, metric-learning techniques are unable to absorb new task information into the network weights. Consequently, performance may suffer.

Furthermore, when tasks become larger, pair-wise comparisons may become computationally expensive. Lastly, most metric-based techniques rely on the presence of labeled examples, which make them inapplicable outside of supervised learning settings.

Fig. 11 The relationships between the covered metric-based meta-learning techniques

4 Model-based meta-learning

A different approach to Deep Meta-Learning is the model-based approach. At a high level, model-based techniques rely upon an adaptive, internal state, in contrast to metric-based techniques, which generally use a fixed neural network at test-time.

More specifically, model-based techniques maintain a stateful, internal representation of a task. When presented with a task, a model-based neural network processes the support set in a sequential fashion. At every time step, an input enters and alters the internal state of the model. Thus, the internal state can capture relevant task-specific information, which can be used to make predictions for new inputs.

Because the predictions are based on internal dynamics that are hidden from the outside, model-based techniques are also called black-boxes. Information from previous inputs must be remembered, which is why model-based techniques have a memory component, either in- or externally.

Recall that the mechanics of metric-based techniques were limited to pair-wise input comparisons. This is not the case for model-based techniques, where the human designer has the freedom to choose the internal dynamics of the algorithm. As a result, model-based techniques are not restricted to meta-learning good feature spaces, as they can also learn internal dynamics, used to process and predict input data of tasks.

More formally, given a support set \(D^{tr}_{{\mathcal {T}}_{j}}\) corresponding to a task \({\mathcal {T}}_{j}\), model-based techniques compute a class probability distribution for a new input \({\varvec{x}}\) as

$$\begin{aligned} p_{\varvec{\theta }}(Y|{\varvec{x}}, D^{tr}_{{\mathcal {T}}_{j}}) = f_{\varvec{\theta }}({\varvec{x}}, D^{tr}_{{\mathcal {T}}_{j}}), \end{aligned}$$
(11)

where f represents the black-box neural network model, and \(\varvec{\theta }\) its parameters.

4.1 Example

Using the same example as in Sect. 3, suppose we are given a task support set \(D^{tr}_{{\mathcal {T}}_{j}} = \{ ([0,-4], 1), ([-2,-4],2), ([-2,4],3), ([6,0], 4) \}\), where a tuple denotes a pair \(({\varvec{x}}_{i},y_{i})\). Furthermore, suppose our query set only contains one example \(D^{test}_{{\mathcal {T}}_{j}} = \{ ([4, 0.5], 4) \}\). This problem has been visualized in Fig. 5 (in Sect. 3). For the sake of the example, we do not use an input embedding function: our model will operate on the raw inputs of \(D^{tr}_{{\mathcal {T}}_{j}}\) and \(D^{test}_{{\mathcal {T}}_{j}}\). As an internal state, our model uses an external memory matrix \(M \in {\mathbb {R}}^{4 \times (2+1)}\), with four rows (one for each example in our support set) and three columns (the dimensionality of the input vectors, plus one dimension for the correct label). Our model processes the support set in a sequential fashion, reading the examples from \(D^{tr}_{{\mathcal {T}}_{j}}\) one by one and storing the i-th example in the i-th row of the memory module. After processing the support set, the memory matrix contains all examples and, as such, serves as an internal task representation.

Given the new input [4, 0.5], our model could use many different techniques to make a prediction based on this representation. For simplicity, assume that it computes the dot product between \({\varvec{x}}\), and every memory M(i) (the 2-D vector in the i-th row of M, ignoring the correct label), and predicts the class of the input which yields the largest dot product. This would produce scores \(-2, -10, -6,\) and 24 for the examples in \(D^{tr}_{{\mathcal {T}}_{j}}\) respectively. Since the last example [6, 0] yields the largest dot product, we predict that class, i.e., 4.

Note that this example could be seen as a metric-based technique where the dot product is used as a similarity function. However, the reason that this technique is model-based is that it stores the entire task inside a memory module. This example was deliberately easy for illustrative purposes. More advanced and successful techniques have been proposed, which we will now cover.
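To make this toy workflow concrete, the following Python (NumPy) sketch implements it end-to-end; the variable names and the explicit memory layout are our own illustrative choices, not part of any published method.

```python
import numpy as np

# Toy model-based meta-learner from the example above: the "internal state" is a
# memory matrix that stores every support example (2 input features + label).
support = [([0, -4], 1), ([-2, -4], 2), ([-2, 4], 3), ([6, 0], 4)]
query = np.array([4.0, 0.5])

# Write phase: store the i-th support example in the i-th row of the memory.
memory = np.zeros((len(support), 3))
for i, (x, y) in enumerate(support):
    memory[i, :2] = x
    memory[i, 2] = y

# Read/predict phase: compare the query with every stored input via a dot product
# and predict the label of the best-matching row.
scores = memory[:, :2] @ query            # [-2., -10., -6., 24.]
prediction = int(memory[np.argmax(scores), 2])
print(scores, prediction)                 # predicts class 4
```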

4.2 Recurrent meta-learners

Recurrent meta-learners (Duan et al. 2016; Wang et al. 2016) are, as the name suggests, meta-learners based on recurrent neural networks. The recurrent network serves as dynamic task embedding storage. These recurrent meta-learners were specifically proposed for reinforcement learning problems, hence we will explain them in that setting.

The recurrence is implemented by e.g. an LSTM (Wang et al. 2016) or a GRU (Duan et al. 2016). The internal dynamics of the chosen Recurrent Neural Network (RNN) allows for fast adaptation to new tasks, while the algorithm used to train the recurrent network gradually accumulates knowledge about the task structure, where each task is modelled as an episode (or set of episodes).

The idea of recurrent meta-learners is quite simple. Given a task \({\mathcal {T}}_{j}\), we feed the (potentially processed) environment variables \([s_{t+1},a_{t},r_{t},d_{t}]\) (see Sect. 2.1.3) into an RNN at every time step t. Recall that \(s\), \(a\), \(r\), and \(d\) denote the state, action, reward, and termination flag, respectively. At every time step t, the RNN updates its hidden state \(h_{t}\) and, conditioned on it, outputs an action \(a_{t}\). The goal is to maximize the expected reward in each trial. See Fig. 12 for a visual depiction. From this figure, it also becomes clear why these techniques are model-based: they embed information from previously seen inputs in the hidden state.
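As an illustration, the following Python (PyTorch) sketch shows the forward pass of such a recurrent meta-learner; the use of a GRU cell, the layer sizes, and the sampling of actions from a categorical policy are our own illustrative choices, and the reinforcement learning algorithm used to train the network (e.g., a policy gradient method) is omitted.

```python
import torch
import torch.nn as nn

class RecurrentMetaLearner(nn.Module):
    """Minimal sketch of a recurrent meta-learner for RL (illustrative only).

    At every time step, the tuple (state, previous action, reward, done) is fed
    into a GRU; the hidden state carries task information across time steps."""

    def __init__(self, state_dim, n_actions, hidden_dim=64):
        super().__init__()
        self.rnn = nn.GRUCell(state_dim + n_actions + 2, hidden_dim)
        self.policy = nn.Linear(hidden_dim, n_actions)
        self.n_actions = n_actions

    def step(self, state, prev_action, reward, done, h):
        a_onehot = torch.zeros(self.n_actions)
        a_onehot[prev_action] = 1.0
        inp = torch.cat([state, a_onehot, torch.tensor([float(reward), float(done)])])
        h = self.rnn(inp.unsqueeze(0), h)            # update the internal task state
        logits = self.policy(h)                       # action conditioned on h_t
        action = torch.distributions.Categorical(logits=logits).sample()
        return action.item(), h

# Usage (illustrative): h = torch.zeros(1, 64) at the start of a trial; the hidden
# state is then carried from one time step to the next.
```

Note that, in this setup, the hidden state is typically carried over between episodes of the same task, so that experience gathered in early episodes (e.g., obtained rewards) can inform behaviour in later ones.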

Fig. 12
figure 12

Source: Duan et al. (2016)

Workflow of recurrent meta-learners in reinforcement learning contexts. As mentioned in Sect. 2.1.3, \(s_{t}, r_{t},\) and \(d_{t}\) denote the state, reward, and termination flag at time step t. \(h_{t}\) refers to the hidden state at time t.

Recurrent meta-learners have been shown to perform almost as well as asymptotically optimal algorithms on simple reinforcement learning tasks (Wang et al. 2016; Duan et al. 2016). However, their performance degrades in more complex settings, where temporal dependencies can span a longer horizon. Making recurrent meta-learners better at such complex tasks is a direction for future research.

4.3 Memory-augmented neural networks (MANNs)

The key idea of memory-augmented neural networks (MANNs) (Santoro et al. 2016) is to enable neural networks to learn quickly with the help of an external memory. The main controller (the recurrent neural network interacting with the memory) then gradually accumulates knowledge across tasks, while the external memory allows for quick task-specific adaptation. For this, Santoro et al. (2016) used Neural Turing Machines (Graves et al. 2014). Here, the controller is parameterized by \(\varvec{\theta }\) and acts as the long-term memory of the memory-augmented neural network, while the external memory module is the short-term memory.

The workflow of memory-augmented neural networks is displayed in Fig. 13. Note that the data from a task is processed as a sequence, i.e., data are fed into the network one by one. The support set is fed into the memory-augmented neural network first. Afterwards, the query set is processed. During the meta-train phase, training tasks can be fed into the network in arbitrary order. At time step t, the model receives input \({\varvec{x}}_{t}\) with the label of the previous input, i.e., \(y_{t-1}\). This was done to prevent the network from mapping class labels directly to the output (Santoro et al. 2016).

Fig. 13
figure 13

Source: Santoro et al. (2016)

Workflow of memory-augmented neural networks. Here, an episode corresponds to a given task \({\mathcal {T}}_j\). After every episode, the order of labels, classes, and samples should be shuffled to minimize dependence on arbitrarily assigned orders.

Fig. 14
figure 14

Source: Santoro et al. (2016)

Controller-memory interaction in memory-augmented neural networks.

The interaction between the controller and memory is visualized in Fig. 14. The idea is that the external memory module, containing representations of previously seen inputs, can be used to make predictions for new inputs. In short, previously obtained knowledge is leveraged to aid the classification of new inputs. Note that regular neural networks also attempt to do this; however, their prior knowledge is slowly accumulated into the network weights, whereas an external memory module can store such information directly.

Given an input \({\varvec{x}}_{t}\) at time t, the controller generates a key \({\varvec{k}}_{t}\), which can be stored in the memory matrix M and used to retrieve previous representations from it. When reading from memory, the aim is to produce a linear combination of the stored keys, giving greater weight to those that have a larger cosine similarity with the current key \({\varvec{k}}_{t}\). More specifically, a read vector \({\varvec{w}}^{r}_{t}\) is created, in which each entry i is the (softmax-normalized) cosine similarity between key \({\varvec{k}}_{t}\) and the memory (from a previous input) stored in row i, i.e., \(M_{t}(i)\). Then, the representation \({\varvec{r}}_{t} = \sum _{i}w_{t}^{r}(i)M(i)\) is retrieved, which is simply a linear combination of all keys (i.e., rows) in memory matrix M.

Predictions are made as follows. Given an input \({\varvec{x}}_{t}\), memory-augmented neural networks use the external memory to compute the corresponding representation \({\varvec{r}}_{t}\), which could be fed into a softmax layer, resulting in class probabilities. Across tasks, memory-augmented neural networks learn a good input embedding function \(f_{\varvec{\theta }}\) and classifier weights, which can be exploited when presented with new tasks.

To write input representations to memory, Santoro et al. (2016) propose a new mechanism called Least Recently Used Access (LRUA). LRUA either writes to the least, or the most recently used memory location. In the former case, it preserves recent memories, and in the latter, it updates recently obtained information. The writing mechanism works by keeping track of how often every memory location is accessed in a usage vector \({\varvec{w}}_{t}^{u}\), which is updated at every time step according to the following update rule: \({\varvec{w}}_{t}^{u} := \gamma {\varvec{w}}^{u}_{t-1} + {\varvec{w}}_{t}^{r} + {\varvec{w}}_{t}^{w}\), where superscripts u, w, and r refer to the usage, write, and read vectors, respectively. In words, the previous usage vector is decayed (using parameter \(\gamma\)), while current reads (\({\varvec{w}}_{t}^{r}\)) and writes (\({\varvec{w}}_{t}^{w}\)) are added to the usage. Let n be the number of reads to memory, and \(\ell u(n)\) (\(\ell u\) for ‘least used’) be the n-th smallest value in the usage vector \({\varvec{w}}^{u}_{t}\). Then, the least-used weights are defined as follows:

$$\begin{aligned} {\varvec{w}}^{\ell u}_{t}(i) = {\left\{ \begin{array}{ll} 0 &{} \text {if } w^{u}_{t}(i) > \ell u(n) \\ 1 &{} \text {otherwise} \end{array}\right. }. \end{aligned}$$

Then, the write vector \({\varvec{w}}_{t}^{w}\) is computed as \({\varvec{w}}^{w}_{t} = \sigma (\alpha ) {\varvec{w}}^{r}_{t-1} + (1 - \sigma (\alpha )){\varvec{w}}^{\ell u}_{t-1}\), where \(\alpha\) is a parameter that interpolates between the two weight vectors. As such, if \(\sigma (\alpha ) = 1\), we write to the most recently used memory, whereas when \(\sigma (\alpha ) = 0\), we write to the least recently used memory locations. Finally, writing is performed as follows: \(M_{t}(i) := M_{t-1}(i) + w_{t}^{w}(i){\varvec{k}}_{t}\), for all i.
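The following Python (NumPy) sketch summarizes one read/write step of this mechanism; it is a simplified illustration of the equations above (a single read head, a fixed scalar in place of the learnable gate \(\alpha\)) rather than a reimplementation of Santoro et al. (2016).

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def mann_step(M, k_t, w_r_prev, w_u_prev, alpha=0.0, gamma=0.95, n_reads=1):
    # --- Write (LRUA): interpolate between the previously read and least-used slots.
    nth_smallest = np.sort(w_u_prev)[n_reads - 1]
    w_lu_prev = (w_u_prev <= nth_smallest).astype(float)      # 1 for least-used slots
    sig = 1.0 / (1.0 + np.exp(-alpha))                        # sigma(alpha)
    w_w = sig * w_r_prev + (1.0 - sig) * w_lu_prev
    M = M + w_w[:, None] * k_t[None, :]                       # M_t(i) += w_w(i) * k_t
    # --- Read: softmax over cosine similarities between the key and all memory rows.
    w_r = softmax(np.array([cosine(k_t, M[i]) for i in range(len(M))]))
    r_t = w_r @ M                                             # retrieved representation
    # --- Usage: decay the previous usage and add the current reads and writes.
    w_u = gamma * w_u_prev + w_r + w_w
    return M, r_t, w_r, w_u
```

In the full model, the key \({\varvec{k}}_{t}\) is produced by the recurrent controller, and \(\alpha\) is a learnable gate parameter rather than a constant.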

In summary, memory-augmented neural networks (Santoro et al. 2016) combine external memory and a neural network to achieve meta-learning. The interaction between a controller, with long-term memory parameters \(\varvec{\theta }\), and memory M, may also be interesting for studying human meta-learning (Santoro et al. 2016). In contrast to many metric-based techniques, this model-based technique is applicable to both classification and regression problems. A downside of this approach is the architectural complexity.

4.4 Meta networks

Fig. 15
figure 15

Source: Munkhdalai and Yu (2017)

The architecture of a Meta Network.

Meta networks are divided into two distinct subsystems (consisting of neural networks), i.e., the base- and meta-learner (whereas in memory-augmented neural networks the base- and meta-components are intertwined). The base-learner is responsible for performing tasks, and for providing the meta-learner with meta-information, such as loss gradients. The meta-learner can then compute fast task-specific weights for itself and the base-learner, such that it can perform better on the given task \({\mathcal {T}}_{j} = (D^{tr}_{{\mathcal {T}}_{j}}, D^{test}_{{\mathcal {T}}_{j}})\). This workflow is depicted in Fig. 15.

The meta-learner consists of neural networks \(u_{\varvec{\phi }}, m_{\varvec{\varphi }}\), and \(d_{\varvec{\psi }}\). Network \(u_{\varvec{\phi }}\) is used as an input representation function. Networks \(d_{\varvec{\psi }}\) and \(m_{\varvec{\varphi }}\) are used to compute task-specific weights \(\varvec{\phi }^{*}\) and example-level fast weights \(\varvec{\theta }^{*}\). Lastly, \(b_{\varvec{\theta }}\) is the base-learner, which makes predictions for inputs. Note that we use the term fast weights throughout, which refers to task- or input-specific versions of the slow (initial) weights.

In a similar fashion to memory-augmented neural networks (Santoro et al. 2016), meta networks (Munkhdalai and Yu 2017) also leverage the idea of an external memory module. However, meta networks use the memory for a different purpose. The memory stores for each observation \({\varvec{x}}_i\) in the support set two components, i.e., its representation \({\varvec{r}}_i\) and the fast weights \(\varvec{\theta }_i^*\). These are then used to compute an attention-based representation and fast weights for new inputs, respectively.

Algorithm 1 The meta networks training procedure (Munkhdalai and Yu 2017)

The pseudocode for meta networks is displayed in Algorithm 1. First, a sample of the support set is created (line 1), which is used to compute task-specific weights \(\varvec{\phi }^{*}\) for the representation network \(u_{\varvec{\phi }}\) (lines 2-5). Note that \(u_{\varvec{\phi }}\) has two tasks: (i) it should compute representations for inputs \({\varvec{x}}_{i}\) (lines 10 and 15), and (ii) it needs to make predictions for inputs \({\varvec{x}}_{i}\) in order to compute a loss (line 3). To achieve both goals, a conventional neural network that makes class predictions can be used; the activations of its final hidden layer then serve as representations. Typically, the cross-entropy loss is computed over the predictions of the representation network \(u_{\varvec{\phi }}\). When there are multiple examples per class in the support set, an alternative is to use a contrastive loss function (Munkhdalai and Yu 2017).

Then, meta networks iterate over every example \(({\varvec{x}}_{i}, y_{i})\) in the support set \(D^{tr}_{{\mathcal {T}}_{j}}\). The base-learner \(b_{\varvec{\theta }}\) attempts to make class predictions for these examples, resulting in loss values \({\mathcal {L}}_{i}\) (line 7). The gradients of these losses are used to compute fast weights \(\varvec{\theta }^{*}\) for example i (line 8), which are then stored in the i-th row of memory matrix M (line 9). Additionally, input representations \({\varvec{r}}_{i}\) are computed and stored in memory matrix R (lines 10-11).

Now, meta networks are ready to address the query set \(D^{test}_{{\mathcal {T}}_{j}}\). They iterate over every example \(({\varvec{x}}, y)\), and compute a representation \({\varvec{r}}\) of it (line 15). This representation is matched against the representations of the support set, which are stored in memory matrix R. This matching gives us a similarity vector \({\varvec{a}}\), where every entry k denotes the similarity between input representation \({\varvec{r}}\) and the k-th row in memory matrix R, i.e., R(k) (line 16). A softmax over this similarity vector is performed to normalize the entries. The resulting vector is used to compute a linear combination of weights that were generated for inputs in the support set (line 17). These weights \(\varvec{\theta }^{*}\) are specific for input \({\varvec{x}}\) in the query set, and can be used by the base-learner b to make predictions for that input (line 18). The observed error is added to the task loss. After the entire query set is processed, all involved parameters can be updated using backpropagation (line 20).
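The following Python (PyTorch) sketch condenses the support and query phases described above into a single forward pass for a linear base-learner; the concrete architectures of u, m, and b, and the omission of the task-level weights \(\varvec{\phi }^{*}\) (network d), are simplifications on our part.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMetaNet(nn.Module):
    """Highly simplified meta network sketch: u embeds inputs, b is a linear
    base-learner, and m maps per-example loss gradients to fast weights."""

    def __init__(self, in_dim, emb_dim, n_classes):
        super().__init__()
        self.u = nn.Linear(in_dim, emb_dim)                  # representation network u
        self.b_slow = nn.Linear(emb_dim, n_classes)          # base-learner slow weights
        n_params = sum(p.numel() for p in self.b_slow.parameters())
        self.m = nn.Linear(n_params, n_params)               # gradient -> fast weights

    def b_forward(self, r, fast=None):
        out = self.b_slow(r)
        if fast is not None:                                  # combine slow and fast weights
            W, bias = fast
            out = out + F.linear(r, W, bias)
        return out

    def forward(self, x_support, y_support, x_query):
        theta_fast, R = [], []
        for x, y in zip(x_support, y_support):                # support phase (lines 6-12)
            r = torch.relu(self.u(x))
            loss = F.cross_entropy(self.b_forward(r).unsqueeze(0), y.unsqueeze(0))
            grads = torch.autograd.grad(loss, self.b_slow.parameters(), create_graph=True)
            theta_fast.append(self.m(torch.cat([g.reshape(-1) for g in grads])))
            R.append(r)
        R, theta_fast = torch.stack(R), torch.stack(theta_fast)
        preds = []
        for x in x_query:                                     # query phase (lines 14-19)
            r = torch.relu(self.u(x))
            a = F.softmax(F.cosine_similarity(R, r.unsqueeze(0)), dim=0)
            fast_flat = a @ theta_fast                        # attention-weighted fast weights
            W = fast_flat[: self.b_slow.weight.numel()].view_as(self.b_slow.weight)
            bias = fast_flat[self.b_slow.weight.numel():].view_as(self.b_slow.bias)
            preds.append(self.b_forward(r, fast=(W, bias)))
        return torch.stack(preds)
```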

Fig. 16
figure 16

Source: Munkhdalai and Yu (2017)

The layer augmentation setup used to combine slow and fast weights.

Note that some of the involved neural networks use both slow and fast weights at the same time. Munkhdalai and Yu (2017) use a so-called layer augmentation setup for this, as depicted in Fig. 16.

In short, meta networks rely on a reparameterization of the meta- and base-learner for every task. Despite the flexibility and applicability to both supervised and reinforcement learning settings, the approach is quite complex. It consists of many components, each with its own set of parameters, which can be a burden on memory usage and computation time. Additionally, finding the correct architecture for all the involved components can be time-consuming.

4.5 Simple neural attentive meta-learner (SNAIL)

Instead of an external memory matrix, SNAIL (Mishra et al. 2018) relies on a special model architecture to serve as memory. Mishra et al. (2018) argue that Recurrent Neural Networks are problematic for this purpose, as they have limited memory capacity and cannot pinpoint specific prior experiences. Hence, SNAIL uses a different architecture, consisting of 1D temporal convolutions (Oord et al. 2016) and a soft attention mechanism (Vaswani et al. 2017). The temporal convolutions allow for ‘high bandwidth’ memory access, and the attention mechanism allows one to pinpoint specific experiences. Figure 17 visualizes the architecture and workflow of SNAIL for supervised learning problems. From this figure, it becomes clear why this technique is model-based: model outputs are based upon the internal state, computed from earlier inputs.

Fig. 17
figure 17

Source: Mishra et al. (2018). (Color figure online)

Architecture and workflow of SNAIL for supervised and reinforcement learning settings. The input layer is red. Temporal Convolution blocks are orange; attention blocks are green.

SNAIL consists of three building blocks. The first is the DenseBlock, which applies a single 1D convolution to the input and concatenates the result to the input (in the feature/horizontal direction). The second is the TCBlock, which is simply a series of DenseBlocks with an exponentially increasing dilation rate of the temporal convolutions (Mishra et al. 2018). Note that the dilation rate is simply the temporal distance between two connected nodes in the network. For example, if we use a dilation of 2, a node at position p in layer L receives the activation from node \(p-2\) in layer \(L-1\). The third block is the AttentionBlock, which learns to focus on the important parts of prior experience.
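A minimal Python (PyTorch) sketch of these three building blocks is given below; the layer sizes, the plain ReLU activation in the DenseBlock (the original blocks use, e.g., gated activations), and the single-head attention are illustrative simplifications on our part.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseBlock(nn.Module):
    """Causal dilated 1D convolution whose output is concatenated to its input
    along the feature axis (simplified)."""
    def __init__(self, in_ch, filters, dilation):
        super().__init__()
        self.dilation = dilation
        self.conv = nn.Conv1d(in_ch, filters, kernel_size=2, dilation=dilation)

    def forward(self, x):                        # x: (batch, channels, time)
        padded = F.pad(x, (self.dilation, 0))    # left-pad so the conv stays causal
        return torch.cat([x, torch.relu(self.conv(padded))], dim=1)

class TCBlock(nn.Module):
    """Series of DenseBlocks with exponentially increasing dilation (1, 2, 4, ...)."""
    def __init__(self, in_ch, seq_len, filters):
        super().__init__()
        blocks, ch, d = [], in_ch, 1
        while d < seq_len:
            blocks.append(DenseBlock(ch, filters, d))
            ch += filters
            d *= 2
        self.blocks = nn.Sequential(*blocks)
        self.out_ch = ch

    def forward(self, x):
        return self.blocks(x)

class AttentionBlock(nn.Module):
    """Causal soft attention; the read-out is concatenated to the input features."""
    def __init__(self, in_ch, key_dim, value_dim):
        super().__init__()
        self.key, self.query = nn.Linear(in_ch, key_dim), nn.Linear(in_ch, key_dim)
        self.value = nn.Linear(in_ch, value_dim)
        self.scale = key_dim ** 0.5

    def forward(self, x):                        # x: (batch, channels, time)
        h = x.transpose(1, 2)                    # (batch, time, channels)
        logits = self.query(h) @ self.key(h).transpose(1, 2) / self.scale
        mask = torch.triu(torch.ones_like(logits), diagonal=1).bool()
        logits = logits.masked_fill(mask, float('-inf'))   # only attend to the past
        read = F.softmax(logits, dim=-1) @ self.value(h)
        return torch.cat([x, read.transpose(1, 2)], dim=1)
```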

In a similar fashion to memory-augmented neural networks (Santoro et al. 2016) (Sect. 4.3), SNAIL also processes task data in sequence, as shown in Fig. 17. However, the input at time t is accompanied by the label at time t, instead of \(t-1\) (as was the case for memory-augmented neural networks). SNAIL learns internal dynamics from seeing various tasks so that it can make good predictions on the query set, conditioned upon the support set.

A key advantage of SNAIL is that it can be applied to both supervised and reinforcement learning tasks. In addition, it achieves good performance compared to previously discussed techniques. A downside of SNAIL is that finding the correct architecture of TCBlocks and DenseBlocks can be time-consuming.

4.6 Conditional neural processes (CNPs)

Fig. 18
figure 18

Source: Garnelo et al. (2018)

A schematic view of how conditional neural processes work. Here, h denotes a network outputting a representation for an observation, a denotes an aggregation function for these representations, and g denotes a neural network that makes predictions for unlabelled observations, based on the aggregated representation.

In contrast to previous techniques, a conditional neural process (CNP) (Garnelo et al. 2018) does not rely on an external memory module. Instead, it aggregates the support set into a single latent representation. The general architecture is shown in Fig. 18. As we can see, the conditional neural process operates in three phases on task \({\mathcal {T}}_{j}\). First, it observes the support set \(D^{tr}_{{\mathcal {T}}_{j}}\), including the ground-truth outputs \(y_{i}\). Examples \(({\varvec{x}}_{i},y_{i}) \in D^{tr}_{{\mathcal {T}}_{j}}\) are embedded using a neural network \(h_{\varvec{\theta }}\) into representations \({\varvec{r}}_{i}\). Second, these representations are aggregated using operator a to produce a single representation \({\varvec{r}}\) of \(D^{tr}_{{\mathcal {T}}_{j}}\) (hence it is model-based). Third, a neural network \(g_{\varvec{\phi }}\) processes this single representation \({\varvec{r}}\) together with new inputs \({\varvec{x}}\), and produces predictions \({\hat{y}}\).

Let the entire conditional neural process model be denoted by \(Q_{\varvec{\varTheta }}\), where \(\varTheta\) is a set of all involved parameters \(\{ \varvec{\theta }, \varvec{\phi } \}\). The training process is different compared to other techniques. Let \({\varvec{x}}_{{\mathcal {T}}_{j}}\) and \({\varvec{y}}_{{\mathcal {T}}_{j}}\) denote all inputs and corresponding outputs in \(D_{{\mathcal {T}}_{j}}^{tr}\). Then, the first \(\ell \backsim U(0,\ldots , k \cdot N -1)\) examples in \(D^{tr}_{{\mathcal {T}}_{j}}\) are used as a conditioning set \(D^{c}_{{\mathcal {T}}_{j}}\) (effectively splitting the support set into a true training set and a validation set). Given a value of \(\ell\), the goal is to maximize the log-likelihood (or minimize the negative log-likelihood) of the labels \({\varvec{y}}_{{\mathcal {T}}_{j}}\) in the entire support set \(D^{tr}_{{\mathcal {T}}_{j}}\)

$$\begin{aligned} {\mathcal {L}}(\varvec{\varTheta }) = -{\mathbb {E}}_{{\mathcal {T}}_{j} \backsim p({\mathcal {T}})}\left[ {\mathbb {E}}_{\ell \backsim U(0,\ldots ,k \cdot N-1)} \left( \log Q_{\varvec{\varTheta }} ({\varvec{y}}_{{\mathcal {T}}_{j}} | D^{c}_{{\mathcal {T}}_{j}}, {\varvec{x}}_{{\mathcal {T}}_{j}}) \right) \right] . \end{aligned}$$
(12)

Conditional neural processes are trained by repeatedly sampling various tasks and values of \(\ell\), and propagating the observed loss backwards.
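To illustrate the three phases, the following Python (PyTorch) sketch implements a small conditional neural process for classification; the network sizes and the mean aggregation over one-hot-encoded labels are our own illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalNeuralProcess(nn.Module):
    """Minimal CNP sketch for few-shot classification: encode (x, y) pairs with h,
    aggregate by averaging (operator a), and condition the decoder g on the result."""

    def __init__(self, x_dim, n_classes, hidden=64):
        super().__init__()
        self.n_classes = n_classes
        self.h = nn.Sequential(nn.Linear(x_dim + n_classes, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden))              # encoder h
        self.g = nn.Sequential(nn.Linear(hidden + x_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, n_classes))           # decoder g

    def forward(self, x_support, y_support, x_query):
        y_onehot = F.one_hot(y_support, self.n_classes).float()
        r_i = self.h(torch.cat([x_support, y_onehot], dim=-1))   # per-example representations
        r = r_i.mean(dim=0, keepdim=True)                        # aggregation a (mean)
        r = r.expand(x_query.size(0), -1)                        # one copy per query input
        return self.g(torch.cat([r, x_query], dim=-1))           # class logits for the queries
```

During meta-training, one would repeatedly sample a task and a split point \(\ell\), condition the model on the first \(\ell\) support examples, and minimize the negative log-likelihood (cross-entropy) of all support labels, in line with Eq. 12.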

In summary, conditional neural processes use compact representations of previously seen inputs to aid the classification of new observations. Despite its simplicity and elegance, a disadvantage of this technique is that it is often outperformed in few-shot settings by other techniques such as matching networks (Vinyals et al. 2016) (see Sect. 3.3).

4.7 Neural statistician

Fig. 19
figure 19

Neural statistician architecture. Edges are neural networks. All incoming inputs to a node are concatenated

A neural statistician (Edwards and Storkey 2017) differs from earlier approaches as it learns to compute summary statistics, or meta-features, of data sets in an unsupervised manner. These latent embeddings (which make the approach model-based) can later be used to make predictions. Despite the broad applicability of the model, we discuss it in the context of Deep Meta-Learning.

A neural statistician performs both learning and inference. In the learning phase, the model attempts to produce generative models \({\hat{P}}_{i}\) for every data set \(D_{i}\). The key assumption that is made by Edwards and Storkey (2017) is that there exists a generative process \(P_{i}\), which conditioned on a latent context vector \({\varvec{c}}_{i}\), can produce a data set \(D_{i}\). At inference time, the goal is to infer a (posterior) probability distribution over the context \(q({\varvec{c}}|D)\).

The model uses a variational autoencoder, which consists of an encoder and decoder. The encoder is responsible for producing a distribution over latent vectors \({\varvec{z}}\): \(q({\varvec{z}}|{\varvec{x}}; \varvec{\phi })\), where \({\varvec{x}}\) is an input vector, and \(\varvec{\phi }\) are the encoder parameters. The encoded input \({\varvec{z}}\), which is often of lower dimensionality than the original input \({\varvec{x}}\), can then be decoded by the decoder \(p({\varvec{x}}|{\varvec{z}};\varvec{\theta })\). Here, \(\varvec{\theta }\) are the parameters of the decoder. To capture more complex patterns in data sets, the model uses multiple latent layers \({\varvec{z}}_{1},\ldots ,{\varvec{z}}_{L}\), as shown in Fig. 19. Given this architecture, the posterior over c and \({\varvec{z}}_{1},..,{\varvec{z}}_{L}\) (shorthand \({\varvec{z}}_{1:L}\)) is given by

$$\begin{aligned} q({\varvec{c}}, {\varvec{z}}_{1:L}| D; \varvec{\phi }) = q({\varvec{c}}|D;\varvec{\phi }) \prod _{{\varvec{x}} \in D} q({\varvec{z}}_{L}| {\varvec{x}}, {\varvec{c}};\varvec{\phi })\prod _{i=1}^{L-1} q({\varvec{z}}_{i} | {\varvec{z}}_{i+1},{\varvec{x}}, {\varvec{c}};\varvec{\phi }). \end{aligned}$$
(13)

The neural statistician is trained to minimize a three-component loss function, consisting of the reconstruction loss (how well it models the data), the context loss (how well the inferred context \(q({\varvec{c}}|D;\varvec{\phi })\) corresponds to the prior \(P({\varvec{c}})\)), and the latent loss (how well the inferred latent variables \({\varvec{z}}_{i}\) are modelled).

This model can be applied to N-way, few-shot learning as follows. Construct N data sets, one for each of the N classes, such that each data set contains only examples of that class. Then, the neural statistician is provided with a new input \({\varvec{x}}\), and has to predict its class. It computes a context posterior \(N_{{\varvec{x}}} = q({\varvec{c}}|{\varvec{x}};\varvec{\phi })\) conditioned on the new input \({\varvec{x}}\). In a similar fashion, context posteriors are computed for all of the data sets, \(N_{i} = q({\varvec{c}}|D_{i};\varvec{\phi })\). Lastly, it assigns the label i such that the difference between \(N_{i}\) and \(N_{{\varvec{x}}}\) is minimal.

In summary, the neural statistician (Edwards and Storkey 2017) allows for quick learning on new tasks through data set modeling. Additionally, it is applicable to both supervised and unsupervised settings. A downside is that the approach requires many data sets to achieve good performance (Edwards and Storkey 2017).

4.8 Model-based techniques, in conclusion

In this section, we have discussed various model-based techniques. Despite apparent differences, they all build on the notion of task internalization. That is, tasks are processed and represented in the state of the model-based system. This state can then be used to make predictions. Figure 20 displays the relationships between the covered model-based techniques.

Memory-augmented neural networks (Santoro et al. 2016) mark the beginning of deep model-based meta-learning techniques. They use the idea of feeding the entire support set in a sequential fashion into the model and then making predictions for the query set inputs using the internal state of the model. Such a model-based approach, where inputs sequentially enter the model, was also taken by recurrent meta-learners (RMLs) (Duan et al. 2016; Wang et al. 2016) in the reinforcement learning setting. Meta networks (Munkhdalai and Yu 2017) also use a large black-box solution but generate task-specific weights for every task that is encountered. SNAIL (Mishra et al. 2018) tries to improve the memory capacity and the ability to pinpoint memories, which are limited in recurrent neural networks, by using attention mechanisms coupled with special temporal convolution layers. Lastly, the neural statistician and conditional neural processes are two techniques that try to learn meta-features of data sets in an end-to-end fashion. The neural statistician uses the distance between meta-features to make class predictions, while the conditional neural process conditions classifiers on these features.

Advantages of model-based approaches include the flexibility of the internal dynamics of the systems, and their broader applicability compared to most metric-based techniques. However, model-based techniques are often outperformed by metric-based techniques in supervised settings (e.g. graph neural networks (Garcia and Bruna 2017); Sect. 3.6), may not perform well when presented with larger data sets (Hospedales et al. 2020), and generalize less well to more distant tasks than optimization-based techniques (Finn and Levine 2018). We discuss this optimization-based approach next.

Fig. 20
figure 20

The relationships between the covered model-based meta-learning techniques. The neural statistician and conditional neural processes form an island in the model-based approaches

5 Optimization-based meta-learning

Optimization-based techniques adopt a different perspective on meta-learning than the previous two approaches. They explicitly optimize for fast learning. Most optimization-based techniques do so by approaching meta-learning as a bi-level optimization problem. At the inner-level, a base-learner makes task-specific updates using some optimization strategy (such as gradient descent). At the outer-level, the performance across tasks is optimized.

More formally, given a task \({\mathcal {T}}_{j} = (D^{tr}_{{\mathcal {T}}_{j}}, D^{test}_{{\mathcal {T}}_{j}})\) with new input \({\varvec{x}} \in D^{test}_{{\mathcal {T}}_{j}}\) and base-learner parameters \(\varvec{\theta }\), optimization-based meta-learners return

$$\begin{aligned} P(Y|{\varvec{x}}, D^{tr}_{{\mathcal {T}}_{j}}) = f_{g_{\varvec{\varphi }}(\varvec{\theta }, D_{{\mathcal {T}}_{j}}^{tr}, {\mathcal {L}}_{{\mathcal {T}}_{j}})}({\varvec{x}}), \end{aligned}$$
(14)

where f is the base-learner, \(g_{\varvec{\varphi }}\) is a (learned) optimizer that makes task-specific updates to the base-learner parameters \(\varvec{\theta }\) using the support data \(D_{{\mathcal {T}}_{j}}^{tr}\) and the loss function \({\mathcal {L}}_{{\mathcal {T}}_{j}}\).

5.1 Example

Suppose we are faced with a linear regression problem, where every task is associated with a different function f(x). For this example, suppose our model only has two parameters: a and b, which together form the function \({\hat{f}}(x) = ax + b\). Suppose further that our meta-training set consists of four different tasks, i.e., A, B, C, and D. Then, according to the optimization-based view, we wish to find a single set of parameters \(\{a, b\}\) from which we can quickly learn the optimal parameters for each of the four tasks, as displayed in Fig. 21. In fact, this is the intuition behind the popular optimization-based technique MAML (Finn et al. 2017). By exposing our model to various meta-training tasks, we can update parameters a and b to facilitate quick adaptation.

Fig. 21
figure 21

Example of an optimization-based technique, inspired by Finn et al. (2017)
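To make the inner-level adaptation in this example concrete, the following Python (PyTorch) sketch adapts the two parameters a and b to a single task with a few gradient steps; the concrete task, learning rate, and number of steps are arbitrary, illustrative choices. The outer-level update of the shared initialization, which is what makes this a meta-learning algorithm, is discussed with MAML in Sect. 5.5.

```python
import torch

# Shared initialisation for the parameters a and b (values are arbitrary).
theta = torch.tensor([0.5, 0.0], requires_grad=True)

def adapt(theta, x, y, alpha=0.01, steps=3):
    """Inner level: adapt (a, b) to a single task with a few gradient steps."""
    params = theta
    for _ in range(steps):
        pred = params[0] * x + params[1]            # f_hat(x) = a * x + b
        loss = ((pred - y) ** 2).mean()
        grad, = torch.autograd.grad(loss, params, create_graph=True)
        params = params - alpha * grad
    return params                                    # task-specific (fast) parameters

# Illustrative task: f(x) = 2x + 1, observed through a small support set.
x = torch.tensor([0.0, 1.0, 2.0, 3.0])
y = 2 * x + 1
theta_prime = adapt(theta, x, y)
```

Because the adaptation uses create_graph=True, the adapted parameters remain differentiable with respect to the initialization, which is exactly what the outer-level optimization of MAML requires.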

We will now discuss the core optimization-based techniques in more detail.

5.2 LSTM optimizer

Standard gradient update rules have the form

$$\begin{aligned} \varvec{\theta }_{t+1} := \varvec{\theta }_{t} - \alpha \nabla _{\varvec{\theta }_{t}} {\mathcal {L}}_{{\mathcal {T}}_{j}}(\varvec{\theta }_{t}), \end{aligned}$$
(15)

where \(\alpha\) is the learning rate, and \({\mathcal {L}}_{{\mathcal {T}}_{j}}(\varvec{\theta }_{t})\) is the loss function with respect to task \({\mathcal {T}}_{j}\) and network parameters at time t, i.e., \(\varvec{\theta }_{t}\). The key idea underlying LSTM optimizers (Andrychowicz et al. 2016) is to replace the update term (\(- \alpha \nabla {\mathcal {L}}_{{\mathcal {T}}_{j}}(\varvec{\theta }_{t})\)) by an update proposed by an LSTM g with parameters \(\varvec{\varphi }\). Then, the new update becomes

$$\begin{aligned} \varvec{\theta }_{t+1} := \varvec{\theta }_{t} + g_{\varvec{\varphi }}(\nabla _{\varvec{\theta }_{t}} {\mathcal {L}}_{{\mathcal {T}}_{j}}(\varvec{\theta }_{t})). \end{aligned}$$
(16)
Fig. 22
figure 22

Source: Andrychowicz et al. (2016)

Workflow of the LSTM optimizer. Gradients can only propagate backwards through solid edges. \(f_{t}\) denotes the observed loss at time step t.

This new update allows the optimization strategy to be tailored to a specific family of tasks. Note that this is meta-learning, i.e., the LSTM learns to learn. As such, this technique learns an update policy.

The loss function used to train an LSTM optimizer is:

$$\begin{aligned} {\mathcal {L}}(\varvec{\varphi }) = {\mathbb {E}}_{{\mathcal {L}}_{{\mathcal {T}}_{j}}}\left[ \sum _{t=1}^{T}w_{t}{\mathcal {L}}_{{\mathcal {T}}_{j}}(\varvec{\theta }_{t}) \right] , \end{aligned}$$
(17)

where T is the number of parameter updates that are made, and \(w_{t}\) are weights indicating the importance of the performance after t steps. Note that generally, we are only interested in the final performance after T steps. However, the authors found that the optimization procedure was better guided by equally weighting the performance after each gradient descent step. As is often done, second-order derivatives (arising from the dependency between the updated weights and the LSTM optimizer) were ignored due to the computational expense of computing them. This loss function is fully differentiable and thus allows for training an LSTM optimizer (see Fig. 22). To prevent a parameter explosion, the same network is used for every coordinate/weight in the base-learner’s network, causing the update rule to be the same for every parameter. Of course, the update for each parameter still depends on its own history of values and gradients.
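The following Python (PyTorch) sketch illustrates this setup with a coordinate-wise LSTM; the hidden size, the unroll length, and detaching the base-learner gradients (mirroring the first-order simplification mentioned above) are illustrative choices, not the exact configuration of Andrychowicz et al. (2016).

```python
import torch
import torch.nn as nn

class LSTMOptimizer(nn.Module):
    """Coordinate-wise learned optimizer sketch: the same small LSTM is applied to
    every base-learner parameter, mapping its gradient to an additive update."""

    def __init__(self, hidden=20):
        super().__init__()
        self.cell = nn.LSTMCell(1, hidden)
        self.out = nn.Linear(hidden, 1)
        self.hidden = hidden

    def forward(self, grads, state):
        # grads: flat vector of base-learner gradients; one LSTM "batch" row per coordinate.
        h, c = self.cell(grads.unsqueeze(-1), state)
        return self.out(h).squeeze(-1), (h, c)

def unrolled_meta_loss(optimizer_net, theta, task_loss, T=5, w=1.0):
    """Sum of the base-learner loss after every learned update (equal weights, cf. Eq. 17)."""
    n = theta.numel()
    state = (torch.zeros(n, optimizer_net.hidden), torch.zeros(n, optimizer_net.hidden))
    total = 0.0
    for _ in range(T):
        loss = task_loss(theta)
        grad, = torch.autograd.grad(loss, theta, create_graph=True)
        update, state = optimizer_net(grad.detach(), state)   # detach: drop 2nd-order terms
        theta = theta + update
        total = total + w * task_loss(theta)                  # performance after this step
    return total
```

The LSTM optimizer itself would then be trained by sampling optimization problems, computing this unrolled meta-loss, and updating \(\varvec{\varphi }\) with a standard optimizer such as Adam.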

The key advantage of LSTM optimizers is that they can enable faster learning compared to hand-crafted optimizers, also on different data sets than those used to train the optimizer. However, Andrychowicz et al. (2016) did not apply this technique to few-shot learning. In fact, they did not apply it across tasks at all. Thus, it is unclear whether this technique can perform well in few-shot settings, where few data per class are available for training. Furthermore, the question remains whether it can scale to larger base-learner architectures.

5.3 LSTM meta-learner

Instead of having an LSTM predict gradient updates, Ravi and Larochelle (2017) embed the base-learner parameters into the cell state (the long-term memory component) of the LSTM, giving rise to LSTM meta-learners. As such, the base-learner parameters \(\varvec{\theta }\) are literally inside the LSTM memory component (cell state). In this way, cell state updates correspond to base-learner parameter updates. This idea was inspired by the resemblance between the gradient and cell state update rules. Gradient updates often have the form shown in Eq. 15. The LSTM cell state update rule, in contrast, looks as follows

$$\begin{aligned} {\varvec{c}}_{t} := f_{t} \odot {\varvec{c}}_{t-1} + \alpha _{t} \odot \bar{{\varvec{c}}}_{t}, \end{aligned}$$
(18)

where \(f_{t}\) is the forget gate (which determines which information should be forgotten) at time t, \(\odot\) represents the element-wise product, \({\varvec{c}}_{t}\) is the cell state at time t, and \(\bar{{\varvec{c}}}_{t}\) the candidate cell state for time step t, and \(\alpha _t\) the learning rate at time step t. Note that if \(f_{t} = {\varvec{1}}\) (vector of ones), \(\alpha _{t} = \alpha\), \({\varvec{c}}_{t-1} = \varvec{\theta }_{t-1}\), and \(\bar{{\varvec{c}}}_{t} = - \nabla _{\varvec{\theta }_{t-1}}{\mathcal {L}}_{{\mathcal {T}}_{t}}(\varvec{\theta }_{t-1})\), this update is equivalent to the one used by gradient-descent. This similarity inspired Ravi and Larochelle (2017) to use an LSTM as a meta-learner that learns to make updates for a base-learner, as shown in Fig. 23.

Fig. 23
figure 23

Source: Ravi and Larochelle (2017)

LSTM meta-learner computation graph. Gradients can only propagate backwards through solid edges. The base-learner is denoted as M. \((X_{t}, Y_{t})\) are training sets, whereas (XY) is the test set.

More specifically, the cell state of the LSTM is initialized with \(c_{0} = \varvec{\theta }_{0}\), which will be adjusted by the LSTM to a good common initialization point across different tasks. Then, to update the weights of the base-learner for the next time step \(t+1\), the LSTM computes \({\varvec{c}}_{t+1}\), and sets the weights of the base-learner equal to it. There is thus a one-to-one correspondence between \({\varvec{c}}_{t}\) and \(\varvec{\theta }_{t}\). The meta-learner’s learning rate \(\alpha _{t}\) (see Eq. 18) is set equal to \(\sigma ({\varvec{w}}_{\alpha } \cdot [\nabla _{\theta _{t-1}} {\mathcal {L}}_{{\mathcal {T}}_{t}}(\varvec{\theta }_{t-1}), {\mathcal {L}}_{{\mathcal {T}}_{t}}(\varvec{\theta }_{t}), \theta _{t-1}, \alpha _{t-1}] + {\varvec{b}}_{\alpha })\), where \(\sigma\) is the sigmoid function. Note that the output is a vector, with values between 0 and 1, which denote the learning rates for the corresponding parameters. Furthermore, \({\varvec{w}}_{\alpha }\) and \({\varvec{b}}_{\alpha }\) are trainable parameters that are part of the LSTM meta-learner. In words, the learning rate at any time depends on the loss gradients, the loss value, the previous parameters, and the previous learning rate. The forget gate, \(f_{t}\), determines what part of the cell state should be forgotten, and is computed in a similar fashion, but with different weights.

To prevent an explosion of meta-learner parameters, weight-sharing is used, in a similar fashion to the LSTM optimizers proposed by Andrychowicz et al. (2016) (Sect. 5.2). This implies that the same update rule is applied to every weight at a given time step. The exact update, however, depends on the history of that specific parameter in terms of the previous learning rate, loss, etc. For simplicity, second-order derivatives were ignored by assuming that the base-learner’s loss does not depend on the cell state of the LSTM meta-learner. Batch normalization was applied to stabilize and speed up the learning process.

In short, LSTM meta-learners learn to optimize a base-learner by maintaining a one-to-one correspondence over time between the base-learner’s weights and the LSTM cell state. This allows the LSTM to exploit commonalities in the tasks, allowing for quicker optimization. However, there are simpler approaches (e.g., MAML; Finn et al. 2017) that outperform this technique.

5.4 Reinforcement learning optimizer

Li and Malik (2018) proposed a framework that casts optimization as a reinforcement learning problem. Optimization can then be performed by existing reinforcement learning techniques. At a high level, an optimization algorithm g takes as input an initial set of weights \(\varvec{\theta }_{0}\) and a task \({\mathcal {T}}_{j}\) with the corresponding loss function \({\mathcal {L}}_{{\mathcal {T}}_{j}}\), and produces a sequence of new weights \(\varvec{\theta }_{1},\ldots ,\varvec{\theta }_{T}\), where \(\varvec{\theta }_{T}\) is the final solution found. On this sequence of proposed weights, we can define a loss function \({\mathcal {L}}\) that penalizes undesirable properties of the optimization trajectory (e.g., slow convergence or oscillations). The goal of learning an optimizer can then be formulated more precisely as follows: we wish to learn an optimal optimizer

$$\begin{aligned} g^{*} = argmin_{g} \, {\mathbb {E}}_{{\mathcal {T}}_{j} \backsim p({\mathcal {T}}), \varvec{\theta }_{0} \backsim p(\varvec{\theta }_{0})}[{\mathcal {L}}( g({\mathcal {L}}_{{\mathcal {T}}_{j}},\varvec{\theta }_{0}))] \end{aligned}$$
(19)

The key insight is that the optimization can be formulated as a Partially Observable Markov Decision Process (POMDP). Then, the state corresponds to the current set of weights \(\varvec{\theta }_{t}\), the action to the proposed update at time step t, i.e., \(\varDelta \varvec{\theta }_{t}\), and the policy to the function that computes the update. With this formulation, the optimizer g can be learned by existing reinforcement learning techniques. In their paper, Li and Malik (2018) use a recurrent neural network as the optimizer; at each time step, it is fed observation features that depend on the recent weights, loss gradients, and objective values, and it is trained with guided policy search.

In summary, Li and Malik (2018) made the first step towards general optimization through reinforcement learning optimizers, which were shown able to generalize across network architectures and data sets. However, the base-learner architecture that was used was quite small. The question remains whether this approach can scale to larger architectures.

5.5 MAML

Model-agnostic meta-learning (MAML) (Finn et al. 2017) uses a simple gradient-based inner optimization procedure (e.g. stochastic gradient descent), instead of more complex LSTM procedures or procedures based on reinforcement learning. The key idea of MAML is to explicitly optimize for fast adaptation to new tasks by learning a good set of initialization parameters \(\varvec{\theta }\). This is shown in Fig. 24: from the learned initialization \(\varvec{\theta }\), we can quickly move to the best set of parameters for the task \({\mathcal {T}}_{j}\), i.e., \(\varvec{\theta }^{*}_{j}\) for \(j=1,2,3\). The learned initialization can be seen as the inductive bias of the model, or simply the set of assumptions (encapsulated in \(\varvec{\theta }\)) that the model makes with respect to the overall task structure.

Fig. 24
figure 24

Source: Finn et al. (2017)

MAML learns an initialization point from which it can perform well on various tasks.

More formally, let \(\varvec{\theta }\) denote the initial parameters of a model. The goal is to quickly learn new concepts, which is equivalent to achieving a minimal loss in few gradient update steps. The number of gradient steps s has to be specified upfront, such that MAML can explicitly optimize for achieving good performance within that number of steps. Suppose we pick only one gradient update step, i.e., \(s=1\). Then, given a task \({\mathcal {T}}_{j} = (D^{tr}_{{\mathcal {T}}_{j}}, D^{test}_{{\mathcal {T}}_{j}})\), gradient descent would produce updated parameters (fast weights)

$$\begin{aligned} \varvec{\theta }'_{j} = \varvec{\theta } - \alpha \nabla _{\varvec{\theta }} {\mathcal {L}}_{D^{tr}_{{\mathcal {T}}_{j}} }(\varvec{\theta }), \end{aligned}$$
(20)

specific to task j. The meta-loss of quick adaptation (using \(s=1\) gradient steps) across tasks can then be formulated as

$$\begin{aligned} ML := \sum _{{\mathcal {T}}_{j} \backsim p({\mathcal {T}})} {\mathcal {L}}_{D^{test}_{{\mathcal {T}}_{j}}}(\varvec{\theta }'_{j}) = \sum _{{\mathcal {T}}_{j} \backsim p({\mathcal {T}})} {\mathcal {L}}_{D^{test}_{{\mathcal {T}}_{j}}}(\varvec{\theta } - \alpha \nabla _{\varvec{\theta }} {\mathcal {L}}_{D^{tr}_{{\mathcal {T}}_{j}}}(\varvec{\theta })), \end{aligned}$$
(21)

where \(p({\mathcal {T}})\) is a probability distribution over tasks. This expression contains an inner gradient (\(\nabla _{\varvec{\theta }} {\mathcal {L}}_{D^{tr}_{{\mathcal {T}}_{j}}}(\varvec{\theta })\)). As such, by optimizing this meta-loss using gradient-based techniques, we have to compute second-order gradients. One can easily see this in the computation below

$$\begin{aligned} \nabla _{\varvec{\theta }} ML&=\nabla _{\varvec{\theta }} \sum _{{\mathcal {T}}_{j} \backsim p({\mathcal {T}})} {\mathcal {L}}_{D^{test}_{{\mathcal {T}}_{j}}}(\varvec{\theta }'_{j}) \nonumber \\&= \sum _{{\mathcal {T}}_{j} \backsim p({\mathcal {T}})} \nabla _{\varvec{\theta }} {\mathcal {L}}_{D^{test}_{{\mathcal {T}}_{j}}}(\varvec{\theta }'_{j}) \nonumber \\&= \sum _{{\mathcal {T}}_{j} \backsim p({\mathcal {T}})} {\mathcal {L}}'_{D^{test}_{{\mathcal {T}}_{j}}}(\varvec{\theta }'_{j}) \nabla _{\varvec{\theta }}(\varvec{\theta }'_{j}) \nonumber \\&= \sum _{{\mathcal {T}}_{j} \backsim p({\mathcal {T}})} {\mathcal {L}}'_{D^{test}_{{\mathcal {T}}_{j}}}(\varvec{\theta }_{j}') \nabla _{\varvec{\theta }} ( \varvec{\theta } - \alpha \nabla _{\varvec{\theta }} {\mathcal {L}}_{D^{tr}_{{\mathcal {T}}_{j}}}(\varvec{\theta })) \nonumber \\&= \underbrace{\sum _{{\mathcal {T}}_{j} \backsim p({\mathcal {T}})} {\mathcal {L}}'_{D^{test}_{{\mathcal {T}}_{j}}}(\varvec{\theta }_{j}')}_\text {FOMAML} (\nabla _{\varvec{\theta }} \varvec{\theta } - \alpha \nabla _{\varvec{\theta }}^{2} {\mathcal {L}}_{D^{tr}_{{\mathcal {T}}_{j}}}(\varvec{\theta })), \end{aligned}$$
(22)

where we used \({\mathcal {L}}'_{D^{test}_{{\mathcal {T}}_{j}}}(\varvec{\theta }_{j}')\) to denote the derivative of the loss function with respect to the query set, evaluated at the post-update parameters \(\varvec{\theta }_{j}'\). The term \(\alpha \nabla _{\varvec{\theta }}^{2} {\mathcal {L}}_{D^{tr}_{{\mathcal {T}}_{j}}}(\varvec{\theta })\) contains the second-order gradients. The computation thereof is expensive in terms of time and memory costs, especially when the optimization trajectory is large (when using a larger number of gradient updates s per task). Finn et al. (2017) experimented with leaving out second-order gradients, by assuming \(\nabla _{\varvec{\theta }}\varvec{\theta }'_{j} = I\), giving us First-Order MAML (FOMAML, see Eq. 22). They found that FOMAML performed reasonably similarly to MAML. This means that updating the initialization using only first-order gradients \(\sum _{{\mathcal {T}}_{j} \backsim p({\mathcal {T}})} {\mathcal {L}}'_{D^{test}_{{\mathcal {T}}_{j}}}(\varvec{\theta }_{j}')\) is roughly equal to using the full gradient expression of the meta-loss in Eq. 22. One can extend the meta-loss to incorporate multiple gradient steps by substituting \(\varvec{\theta }_{j}'\) by a multi-step variant.

MAML is trained as follows. The initialization weights \(\varvec{\theta }\) are updated by continuously sampling a batch of m tasks \(B = \{{\mathcal {T}}_{j} \backsim p({\mathcal {T}})\}_{j=1}^{m}\). Then, for every task \({\mathcal {T}}_{j} \in B\), an inner update is performed to obtain \(\varvec{\theta }_{j}'\), granting an observed loss \({\mathcal {L}}_{D^{test}_{{\mathcal {T}}_{j}}}(\varvec{\theta }_{j}')\). These losses across a batch of tasks are used in the outer update

$$\begin{aligned} \varvec{\theta } := \varvec{\theta } - \beta \nabla _{\varvec{\theta }} \sum _{{\mathcal {T}}_{j} \in B} {\mathcal {L}}_{D^{test}_{{\mathcal {T}}_{j}}}(\varvec{\theta }_{j}'). \end{aligned}$$
(23)

The complete training procedure of MAML is displayed in Algorithm 2. At test-time, when presented with a new task \({\mathcal {T}}_{j}\), the model is initialized with \(\varvec{\theta }\), and performs a number of gradient updates on the task data. Note that the algorithm for FOMAML is equivalent to Algorithm 2, except for the fact that the update on line 8 is done differently. That is, FOMAML updates the initialization with the rule \(\varvec{\theta } = \varvec{\theta } - \beta \sum _{{\mathcal {T}}_{j} \backsim p({\mathcal {T}})} {\mathcal {L}}'_{D^{test}_{{\mathcal {T}}_{j}}}(\varvec{\theta }_{j}')\).

Algorithm 2 The MAML training procedure (Finn et al. 2017)
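For concreteness, the following Python (PyTorch) sketch performs one outer update of this procedure with a single inner gradient step; representing the model as a list of parameter tensors and the generic loss_fn interface are our own simplifications. Setting first_order=True corresponds to the FOMAML variant discussed above.

```python
import torch

def maml_outer_step(theta, tasks, loss_fn, alpha=0.01, beta=0.001, first_order=False):
    """One outer update of MAML with a single inner gradient step (sketch).

    theta:   list of parameter tensors (the shared initialisation, requires_grad=True)
    tasks:   iterable of (support, query) pairs
    loss_fn: loss_fn(params, data) -> scalar loss of the base-learner with `params`"""
    meta_loss = 0.0
    for support, query in tasks:
        # Inner update on the support set (Eq. 20).
        grads = torch.autograd.grad(loss_fn(theta, support), theta,
                                    create_graph=not first_order)
        theta_prime = [p - alpha * g for p, g in zip(theta, grads)]
        # Evaluate the adapted parameters on the query set (Eq. 21).
        meta_loss = meta_loss + loss_fn(theta_prime, query)
    # Outer update of the initialisation (Eq. 23); with first_order=True the
    # second-order terms are dropped, which corresponds to FOMAML.
    meta_grads = torch.autograd.grad(meta_loss, theta)
    with torch.no_grad():
        for p, g in zip(theta, meta_grads):
            p -= beta * g
    return float(meta_loss)
```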

Antoniou et al. (2019), in response to MAML, proposed many technical improvements that can improve training stability, performance, and generalization ability. Improvements include (i) updating the initialization \(\varvec{\theta }\) after every inner update step (instead of after all steps are done) to increase gradient propagation, (ii) using second-order gradients only after 50 epochs to increase the training speed, (iii) learning layer-wise learning rates to improve flexibility, (iv) annealing the meta-learning rate \(\beta\) over time, and (v) some Batch Normalization tweaks (keeping running statistics instead of batch-specific ones, and using per-step biases).

MAML has obtained great attention within the field of Deep Meta-Learning, perhaps due to its (i) simplicity (only requires two hyperparameters), (ii) general applicability, and (iii) strong performance. A downside of MAML, as mentioned above, is that it can be quite expensive in terms of running time and memory to optimize a base-learner for every task and compute higher-order derivatives from the optimization trajectories.

5.6 iMAML

Instead of ignoring higher-order derivatives (as done by FOMAML), which potentially decreases the performance compared to regular MAML, iMAML (Rajeswaran et al. 2019) approximates these derivatives in a way that is less memory-consuming.

Let \({\mathcal {A}}\) denote an inner optimization algorithm (e.g., stochastic gradient descent), which takes a support set \(D^{tr}_{{\mathcal {T}}_{j}}\) corresponding to task \({\mathcal {T}}_{j}\) and initial model weights \(\varvec{\theta }\), and produces new weights \(\varvec{\theta }'_{j} = {\mathcal {A}}(\varvec{\theta }, D^{tr}_{{\mathcal {T}}_{j}})\). MAML has to compute the derivative

$$\begin{aligned} \nabla _{\varvec{\theta }} {\mathcal {L}}_{D^{test}_{{\mathcal {T}}_{j}}} (\varvec{\theta }'_{j}) = {\mathcal {L}}_{D^{test}_{{\mathcal {T}}_{j}}}'(\varvec{\theta }'_{j})\nabla _{\varvec{\theta }}(\varvec{\theta }'_{j}) , \end{aligned}$$
(24)

where \(D^{test}_{{\mathcal {T}}_{j}}\) is the query set corresponding to task \({\mathcal {T}}_{j}\). This equation is a simple result of applying the chain rule. Importantly, note that \(\nabla _{\varvec{\theta }}(\varvec{\theta }_{j}')\) differentiates through \({\mathcal {A}}(\varvec{\theta }, D^{tr}_{{\mathcal {T}}_{j}})\), while \({\mathcal {L}}_{D^{test}_{{\mathcal {T}}_{j}}}'(\varvec{\theta }'_{j})\) does not, as it represents the gradient of the loss function evaluated at \(\varvec{\theta }'_{j}\). Rajeswaran et al. (2019) make use of the following lemma.

If \(({\varvec{I}} + \frac{1}{\lambda } \nabla ^{2}_{\varvec{\theta }}{\mathcal {L}}_{D^{tr}_{{\mathcal {T}}_{j}}}(\varvec{\theta }'_{j}))\) is invertible (i.e., \(({\varvec{I}} + \frac{1}{\lambda } \nabla ^{2}_{\varvec{\theta }}{\mathcal {L}}_{D^{tr}_{{\mathcal {T}}_{j}}}(\varvec{\theta }'_{j}))^{-1}\) exists), then

$$\begin{aligned} \nabla _{\varvec{\theta }}(\varvec{\theta }_{j}') = \left( {\varvec{I}} + \frac{1}{\lambda } \nabla ^{2}_{\varvec{\theta }}{\mathcal {L}}_{D^{tr}_{{\mathcal {T}}_{j}}}(\varvec{\theta }'_{j}) \right) ^{-1}. \end{aligned}$$
(25)

Here, \(\lambda\) is a regularization parameter: in iMAML, the inner-level objective is augmented with a term that penalizes the distance between the task-specific parameters \(\varvec{\theta }'_{j}\) and the initialization \(\varvec{\theta }\), and \(\lambda\) controls the strength of this penalty. Its role is discussed further below.

Combining Eqs. 24 and 25, we have that

$$\begin{aligned} \nabla _{\varvec{\theta }} {\mathcal {L}}_{D^{test}_{{\mathcal {T}}_{j}}} (\varvec{\theta }'_{j}) = {\mathcal {L}}'_{D^{test}_{{\mathcal {T}}_{j}}}(\varvec{\theta }'_{j}) \left( {\varvec{I}} + \frac{1}{\lambda } \nabla ^{2}_{\varvec{\theta }}{\mathcal {L}}_{D^{tr}_{{\mathcal {T}}_{j}}}(\varvec{\theta }'_{j}) \right) ^{-1} . \end{aligned}$$
(26)

The idea is to obtain an approximate gradient vector \({\varvec{g}}_{j}\) that is close to this expression, i.e., we want the difference to be small

$$\begin{aligned} {\varvec{g}}_{j} - {\mathcal {L}}'_{D^{test}_{{\mathcal {T}}_{j}}}(\varvec{\theta }'_{j}) \left( {\varvec{I}} + \frac{1}{\lambda } \nabla ^{2}_{\varvec{\theta }}{\mathcal {L}}_{D^{tr}_{{\mathcal {T}}_{j}}}(\varvec{\theta }'_{j}) \right) ^{-1} = \varvec{\epsilon }, \end{aligned}$$
(27)

for some small tolerance vector \(\varvec{\epsilon }\). If we multiply both sides by the inverse of the inverse, i.e., \(\left( {\varvec{I}} + \frac{1}{\lambda } \nabla ^{2}_{\varvec{\theta }}{\mathcal {L}}_{D^{tr}_{{\mathcal {T}}_{j}}}(\varvec{\theta }'_{j}) \right)\), we get

$$\begin{aligned} {\varvec{g}}_{j}^{T} \left( {\varvec{I}} + \frac{1}{\lambda } \nabla ^{2}_{\varvec{\theta }}{\mathcal {L}}_{D^{tr}_{{\mathcal {T}}_{j}}}(\varvec{\theta }'_{j}) \right) {\varvec{g}}_{j} - {\varvec{g}}_{j}^{T} {\mathcal {L}}'_{D^{test}_{{\mathcal {T}}_{j}}}(\varvec{\theta }'_{j}) = \varvec{\epsilon }', \end{aligned}$$
(28)

where \(\varvec{\epsilon }'\) absorbed the multiplication factor. We wish to minimize this expression for \({\varvec{g}}_{j}\), which can be done using optimization techniques such as the conjugate gradient algorithm (Rajeswaran et al. 2019). This algorithm does not need to store Hessian matrices, which decreases the memory cost significantly. In turn, this allows iMAML to work with more inner gradient update steps. Note, however, that one needs to perform explicit regularization in that case to avoid overfitting. Conventional MAML did not require this, as it uses only a small number of gradient steps (equivalent to an early stopping mechanism).

For every task \({\mathcal {T}}_{j}\) in a batch, iMAML computes the meta-gradient \({\varvec{g}}_{j}\) after the inner-level optimization has finished. After processing a batch of tasks, these gradients are averaged and used to update the initialization \(\varvec{\theta }\). Since iMAML does not differentiate through the optimization process, we are free to use any (even non-differentiable) inner optimizer.
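The following Python (PyTorch) sketch shows how such a per-task meta-gradient can be approximated with a few conjugate gradient iterations using only Hessian-vector products; the flattening of parameters and the fixed number of iterations are illustrative simplifications. Here, train_loss is assumed to be the (regularized) support-set loss evaluated at the adapted parameters \(\varvec{\theta }'_{j}\), and test_grad the query-set gradient \({\mathcal {L}}'_{D^{test}_{{\mathcal {T}}_{j}}}(\varvec{\theta }'_{j})\).

```python
import torch

def hvp(loss, params, vec):
    """Hessian-vector product of `loss` w.r.t. `params`, applied to the flat vector `vec`."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat = torch.cat([g.reshape(-1) for g in grads])
    return torch.autograd.grad(flat @ vec, params, retain_graph=True)

def imaml_meta_gradient(theta_prime, train_loss, test_grad, lam=1.0, cg_steps=5):
    """Approximate g such that (I + (1/lam) H_train) g = grad_test with conjugate
    gradient, without ever forming the Hessian explicitly (cf. Eq. 26)."""
    def A(v):  # matrix-vector product with (I + (1/lam) * Hessian of the train loss)
        hv = torch.cat([h.reshape(-1) for h in hvp(train_loss, theta_prime, v)])
        return v + hv / lam
    b = torch.cat([g.reshape(-1) for g in test_grad])
    x = torch.zeros_like(b)
    r = b - A(x)
    p = r.clone()
    for _ in range(cg_steps):
        Ap = A(p)
        step = (r @ r) / (p @ Ap)
        x = x + step * p
        r_new = r - step * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x   # approximate meta-gradient for this task
```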

In summary, iMAML reduces memory costs significantly as it need not differentiate through the optimization trajectory, also allowing for greater flexibility in the choice of the inner optimizer. Additionally, it can account for larger optimization paths. The computational costs stay roughly the same compared to MAML (Finn et al. 2017). Future work could investigate more inner optimization procedures (Rajeswaran et al. 2019).

5.7 Meta-SGD

Meta-SGD (Li et al. 2017), or meta-stochastic gradient descent, is similar to MAML (Finn et al. 2017) (Sect. 5.5). However, on top of learning an initialization, Meta-SGD also learns a learning rate for every model parameter in \(\varvec{\theta }\), building on the insight that the optimizer itself can be seen as a trainable entity.

Fig. 25
figure 25

Source: Li et al. (2017)

Meta-SGD learning process.

The standard SGD update rule is given in Eq. 15. The meta-SGD optimizer uses a more general update, namely

$$\begin{aligned} \varvec{\theta }_{j}' \leftarrow \varvec{\theta } - \varvec{\alpha } \odot \nabla _{\varvec{\theta }} {\mathcal {L}}_{D^{tr}_{{\mathcal {T}}_{j}}}(\varvec{\theta }), \end{aligned}$$
(29)

where \(\odot\) is the element-wise product. Note that the learning rate \(\varvec{\alpha }\) is now a vector (hence the bold font) instead of a scalar, which allows for greater flexibility in the sense that each parameter has its own learning rate. The goal is to learn the initialization \(\varvec{\theta }\) and the learning rate vector \(\varvec{\alpha }\) such that the generalization ability is as large as possible. More precisely, the learning objective is

$$\begin{aligned} min_{\varvec{\alpha }, \varvec{\theta }} {\mathbb {E}}_{{\mathcal {T}}_{j} \backsim p({\mathcal {T}})} [{\mathcal {L}}_{D^{test}_{{\mathcal {T}}_{j}}}(\varvec{\theta }_{j}')] = {\mathbb {E}}_{{\mathcal {T}}_{j} \backsim p({\mathcal {T}})} [{\mathcal {L}}_{D^{test}_{{\mathcal {T}}_{j}}}( \varvec{\theta } - \varvec{\alpha } \odot \nabla _{\varvec{\theta }} {\mathcal {L}}_{D_{{\mathcal {T}}_{j}}^{tr}}(\varvec{\theta }) )], \end{aligned}$$
(30)

where we used a simple substitution for \(\varvec{\theta }_{j}'\). \({\mathcal {L}}_{D_{{\mathcal {T}}_{j}}^{tr}}\) and \({\mathcal {L}}_{D_{{\mathcal {T}}_{j}}^{test}}\) are the losses computed on the support and query set respectively. Note that this formulation stimulates generalization ability (as it includes the query set loss \({\mathcal {L}}_{D^{test}_{{\mathcal {T}}_{j}}}\), which can be observed during the meta-training phase). The learning process is visualized in Fig. 25. Note that the meta-SGD optimizer is trained to maximize generalization ability after only one update step. Since this learning objective has a fully differentiable loss function, the meta-SGD optimizer itself can be trained using standard SGD.
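The following Python (PyTorch) sketch shows one outer update of Meta-SGD with a single inner step; as before, representing the model as a list of parameter tensors and the generic loss_fn interface are our own simplifications.

```python
import torch

def meta_sgd_outer_step(theta, alpha, tasks, loss_fn, beta=0.001):
    """One Meta-SGD outer update (sketch): both the initialisation `theta` and the
    per-parameter learning rates `alpha` (same shapes as theta) are meta-learned."""
    meta_loss = 0.0
    for support, query in tasks:
        grads = torch.autograd.grad(loss_fn(theta, support), theta, create_graph=True)
        # Inner update with element-wise learning rates (Eq. 29).
        theta_prime = [p - a * g for p, a, g in zip(theta, alpha, grads)]
        meta_loss = meta_loss + loss_fn(theta_prime, query)
    # Outer update of both theta and alpha (Eq. 30).
    meta_grads = torch.autograd.grad(meta_loss, theta + alpha)
    with torch.no_grad():
        for p, g in zip(theta + alpha, meta_grads):
            p -= beta * g
    return float(meta_loss)
```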

In summary, Meta-SGD is more expressive than MAML as it does not only learn an initialization but also learning rates per parameter. This, however, does come at the cost of an increased number of hyperparameters.

5.8 Reptile

Reptile (Nichol et al. 2018) is another optimization-based technique that, like MAML (Finn et al. 2017), solely attempts to find a good set of initialization parameters \(\varvec{\theta }\). The way in which Reptile attempts to find this initialization is quite different from MAML. It repeatedly samples a task, trains on the task, and moves the model weights towards the trained weights (Nichol et al. 2018). Algorithm 3 displays the pseudocode describing this simple process.

Algorithm 3 The Reptile algorithm (Nichol et al. 2018)

Nichol et al. (2018) note that it is possible to treat \(( \varvec{\theta } - \varvec{\theta }_{j}')/\alpha\) as gradients, where \(\alpha\) is the learning rate of the inner stochastic gradient descent optimizer (line 4 in the pseudocode), and to feed that into a meta-optimizer (e.g. Adam). Moreover, instead of sampling one task at a time, one could sample a batch of n tasks, and move the initialization \(\varvec{\theta }\) towards the average update direction \(\bar{\varvec{\theta }} = \frac{1}{n}\sum _{j=1}^{n}(\varvec{\theta }'_{j} - \varvec{\theta })\), granting the update rule \(\varvec{\theta } := \varvec{\theta } + \epsilon \bar{\varvec{\theta }}\).
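In Python (PyTorch), the serial version of Algorithm 3 can be sketched as follows; the inner optimizer, the number of inner steps, and the step sizes are illustrative choices.

```python
import copy
import torch

def reptile_step(model, support, loss_fn, inner_steps=5, alpha=0.01, epsilon=0.1):
    """One Reptile update (sketch): train a copy of the model on a sampled task with
    plain SGD, then move the initialisation towards the adapted weights."""
    x, y = support
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=alpha)
    for _ in range(inner_steps):                      # k steps of SGD on the task
        opt.zero_grad()
        loss_fn(adapted(x), y).backward()
        opt.step()
    with torch.no_grad():
        for p, p_adapted in zip(model.parameters(), adapted.parameters()):
            p += epsilon * (p_adapted - p)            # theta := theta + eps (theta'_j - theta)
```

The batched variant discussed above would instead average \(\varvec{\theta }'_{j} - \varvec{\theta }\) over n sampled tasks before applying the update.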

The intuition behind Reptile is that updating the initialization weights towards updated parameters will grant a good inductive bias for tasks from the same family. By performing Taylor expansions of the gradients of Reptile and MAML (both first-order and second-order), Nichol et al. (2018) show that the expected gradients differ in their direction. They argue, however, that in practice, the gradients of Reptile will also bring the model towards a point minimizing the expected loss over tasks.

A mathematical argument as to why Reptile works goes as follows. Let \(\varvec{\theta }\) denote the initial parameters, and \(\varvec{\theta }^{*}_{j}\) the optimal set of weights for the task \({\mathcal {T}}_{j}\). Lastly, let d be the Euclidean distance function. Then, the goal is to minimize the distance between the initialization point \(\varvec{\theta }\) and the optimal point \(\varvec{\theta }^{*}_{j}\), i.e.,

$$\begin{aligned} min_{\varvec{\theta }} \, {\mathbb {E}}_{{\mathcal {T}}_{j} \backsim p({\mathcal {T}})}[ \frac{1}{2}d(\varvec{\theta }, \varvec{\theta }^{*}_{j})^{2}]. \end{aligned}$$
(31)

The gradient of this expected distance with respect to the initialization \(\varvec{\theta }\) is given by

$$\begin{aligned} \nabla _{\varvec{\theta }} {\mathbb {E}}_{{\mathcal {T}}_{j} \backsim p({\mathcal {T}})} [\frac{1}{2} d(\varvec{\theta }, \varvec{\theta }^{*}_{j})^{2}]&= {\mathbb {E}}_{{\mathcal {T}}_{j} \backsim p({\mathcal {T}})} [\frac{1}{2} \nabla _{\varvec{\theta }} d(\varvec{\theta }, \varvec{\theta }^{*}_{j})^{2}] \nonumber \\&= {\mathbb {E}}_{{\mathcal {T}}_{j} \backsim p({\mathcal {T}})} [\varvec{\theta } - \varvec{\theta }^{*}_{j}], \end{aligned}$$
(32)

where we used the fact that the gradient of the squared Euclidean distance between two points \({\varvec{x}}_{1}\) and \({\varvec{x}}_{2}\) is the vector \(2({\varvec{x}}_{1} - {\varvec{x}}_{2})\). Nichol et al. (2018) go on to argue that performing gradient descent on this objective would result in the following update rule

$$\begin{aligned} \varvec{\theta }&= \varvec{\theta } - \epsilon \nabla _{\varvec{\theta }} \frac{1}{2}d(\varvec{\theta }, \varvec{\theta }^{*}_{j})^{2} \nonumber \\&= \varvec{\theta } - \epsilon (\varvec{\theta } - \varvec{\theta }^{*}_{j}). \end{aligned}$$
(33)

Since we do not know \(\varvec{\theta }^{*}_{j}\), one can approximate this term by k steps of gradient descent \(SGD({\mathcal {L}}_{{\mathcal {T}}_{j}}, \varvec{\theta }, k)\). In short, Reptile can be seen as gradient descent on the distance minimization objective given in Eq. 31. A visualization is shown in Fig. 26. The initialization \(\varvec{\theta }\) is moving towards the optimal weights for tasks 1 and 2 in an interleaved fashion (hence the oscillations).

Fig. 26
figure 26

Schematic visualization of Reptile’s learning trajectory. Here, \(\varvec{\theta }_{1}^*\) and \(\varvec{\theta }_{2}^*\) are the optimal weights for tasks \({\mathcal {T}}_{1}\) and \({\mathcal {T}}_{2}\) respectively. The initialization parameters \(\varvec{\theta }\) oscillate between these. Adapted from Nichol et al. (2018)

In conclusion, Reptile is an extremely simple meta-learning technique, which does not need to differentiate through the optimization trajectory like, e.g., MAML (Finn et al. 2017), saving time and memory costs. However, its theoretical foundation is weaker, since it does not directly optimize for fast learning as MAML does, and its performance may be somewhat worse than that of MAML in some settings.

5.9 LEO

Latent Embedding Optimization (LEO) was proposed by Rusu et al. (2018) to combat an issue of gradient-based meta-learners, such as MAML (Finn et al. 2017) (see Sect. 5.5), in few-shot settings (N-way, k-shot). These techniques operate in a high-dimensional parameter space using gradient information from only a few examples, which could lead to poor generalization.

Fig. 27
figure 27

Workflow of LEO, adapted from Rusu et al. (2018)

LEO alleviates this issue by learning a lower-dimensional latent embedding space, which indirectly allows us to learn a good set of initial parameters \(\varvec{\theta }\). Additionally, the embedding space is conditioned upon tasks, allowing for more expressivity. In theory, LEO could generate initial parameters for the entire base-learner network, but the authors only experimented with generating the parameters of the final layer.

The complete workflow of LEO is shown in Fig. 27. As we can see, given a task \({\mathcal {T}}_{j}\), the corresponding support set \(D^{tr}_{{\mathcal {T}}_{j}}\) is fed into an encoder, which produces hidden codes for each example in that set. These hidden codes are paired and concatenated in every possible manner, granting us \((Nk)^{2}\) pairs, where N is the number of classes in the training set, and k the number of examples per class. These paired codes are then fed into a relation network (Sung et al. 2018) (see Sect. 3.5). The resulting embeddings are grouped by class, and parameterize a probability distribution over latent codes \({\varvec{z}}_{n}\) (for class n) in a low dimensional space \({\mathcal {Z}}\). More formally, let \({\varvec{x}}^{\ell }_{n}\) denote the \(\ell\)-th example of class n in \(D^{tr}_{{\mathcal {T}}_{j}}\). Then, the mean \(\varvec{\mu }^{e}_{n}\) and variance \(\varvec{\sigma }^{e}_{n}\) of a Gaussian distribution over latent codes for class n are computed as

$$\begin{aligned} \varvec{\mu }_{n}^{e}, \varvec{\sigma }^{e}_{n} = \frac{1}{Nk^{2}}\sum _{\ell _{p}=1}^{k}\sum ^{N}_{m=1} \sum _{\ell _{q}=1}^{k} g_{\varvec{\phi }_{r}}\left( g_{\varvec{\phi }_{e}}({\varvec{x}}^{\ell _{p}}_{n}), g_{\varvec{\phi }_{e}}({\varvec{x}}^{\ell _{q}}_{m}) \right) , \end{aligned}$$
(34)

where \(\varvec{\phi }_{r}, \varvec{\phi }_{e}\) are parameters for the relation network and encoder respectively. Intuitively, the three summations ensure that every example of class n in \(D^{tr}_{{\mathcal {T}}_{j}}\) is paired with every example from all N classes. Given \(\varvec{\mu }_{n}^{e}\) and \(\varvec{\sigma }_{n}^{e}\), one can sample a latent code \({\varvec{z}}_{n} \backsim N(\varvec{\mu }_{n}^{e}, diag(\varvec{\sigma }_{n}^{e2}))\) for class n, which serves as a latent embedding of the task training data.

The decoder can then generate a task-specific initialization \(\varvec{\theta }_{n}\) for class n as follows. First, one computes a mean and variance for a Gaussian distribution using the latent code

$$\begin{aligned} \varvec{\mu }_{n}^{d}, \varvec{\sigma }_{n}^{d} = g_{\varvec{\phi }_{d}}({\varvec{z}}_{n}). \end{aligned}$$
(35)

These are then used to sample initialization weights \(\varvec{\theta }_{n} \backsim N(\varvec{\mu }^{d}_{n}, diag(\varvec{\sigma }^{d2}_{n}))\). The loss from the generated weights can then be propagated backwards to adjust the embedding space. In practice, generating such a high-dimensional set of parameters from a low-dimensional embedding can be quite problematic. Therefore, LEO uses pre-trained models, and only generates weights for the final layer, which limits the expressivity of the model.
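To illustrate the decoding step (Eq. 35) and the fact that adaptation takes place in the latent space rather than in parameter space, a heavily simplified sketch follows; the linear decoder, the single weight column per class, and the squared-error loss are assumptions made purely for illustration.

```python
import jax
import jax.numpy as jnp

def decode(phi_d, z):
    """Linear decoder mapping a class code z to a mean and log-variance over weights (Eq. 35)."""
    return phi_d['W_mu'] @ z, phi_d['W_logvar'] @ z

def sample_weights(key, phi_d, z):
    mu_d, logvar_d = decode(phi_d, z)
    eps = jax.random.normal(key, mu_d.shape)
    return mu_d + jnp.exp(0.5 * logvar_d) * eps           # theta_n ~ N(mu_d, diag(sigma_d^2))

def support_loss(z, key, phi_d, emb, targets):
    theta_n = sample_weights(key, phi_d, z)               # generated final-layer weights
    return jnp.mean((emb @ theta_n - targets) ** 2)       # illustrative loss on D_tr

def adapt_in_latent_space(z, key, phi_d, emb, targets, lr=0.1, steps=5):
    """Inner loop: adapt the low-dimensional code z, then decode the final weights."""
    for _ in range(steps):
        z = z - lr * jax.grad(support_loss)(z, key, phi_d, emb, targets)
    return sample_weights(key, phi_d, z)
```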

A key advantage of LEO is that it optimizes in a lower-dimensional latent embedding space, which aids generalization performance. However, the approach is more complex than e.g. MAML (Finn et al. 2017), and its applicability is limited to few-shot learning settings.

5.10 Online MAML (FTML)

Online MAML (Finn et al. 2019) is an extension of MAML (Finn et al. 2017) to make it applicable to online learning settings (Anderson 2008). In the online setting, we are presented with a sequence of tasks \({\mathcal {T}}_{t}\) with corresponding loss functions \(\{ {\mathcal {L}}_{{\mathcal {T}}_{t}} \}_{t=1}^{T}\), for some potentially infinite time horizon T. The goal is to pick a sequence of parameters \(\{ \varvec{\theta }_{t} \}_{t=1}^{T}\) that performs well on the presented loss functions. This objective is captured by the \(Regret_{T}\) over the entire sequence, which is defined by Finn et al. (2019) as follows

$$\begin{aligned} Regret_{T} = \sum _{t=1}^{T}{\mathcal {L}}_{{\mathcal {T}}_{t}}(\varvec{\theta }_{t}') - min_{\varvec{\theta }} \sum _{t=1}^{T}{\mathcal {L}}_{{\mathcal {T}}_{t}}(\varvec{\theta }'_{t}), \end{aligned}$$
(36)

where \(\varvec{\theta }_{t}'\) denotes the parameters resulting from a one-step gradient update on task t: in the left term this update starts from the parameters \(\varvec{\theta }_{t}\) chosen by the agent at time t, whereas in the right term it starts from a single fixed initialization \(\varvec{\theta }\) (just as in MAML). Thus, the left term reflects the loss incurred by the agent, and the right term presents the minimum obtainable loss (in hindsight) from one fixed set of initial parameters. Note that this setup assumes that the agent can update its chosen parameters (transform its initial choice at time t from \(\varvec{\theta }_{t}\) to \(\varvec{\theta }_{t}'\)).

Finn et al. (2019) propose FTML (Follow The Meta Leader), inspired by FTL (Follow The Leader) (Hannan 1957; Kalai and Vempala 2005), to minimize the regret. The basic idea is to set the parameters for the next time step (\(t+1\)) equal to the best parameters in hindsight, i.e.,

$$\begin{aligned} \varvec{\theta }_{t+1} := argmin_{\varvec{\theta }} \sum _{k=1}^{t}{\mathcal {L}}_{{\mathcal {T}}_{k}}(\varvec{\theta }_{k}'). \end{aligned}$$
(37)

The gradient to perform meta-updates is then given by

$$\begin{aligned} g_{t}(\varvec{\theta }) := \nabla _{\varvec{\theta }} {\mathbb {E}}_{{\mathcal {T}}_{k} \backsim p_{t}({\mathcal {T}})} {\mathcal {L}}_{{\mathcal {T}}_{k}}(\varvec{\theta }_{k}'), \end{aligned}$$
(38)

where \(p_{t}({\mathcal {T}})\) is a uniform distribution over tasks 1, ..., t (at time t).

Algorithm 4 contains the full pseudocode for FTML. In this algorithm, \(MetaUpdate\) performs a few (\(N_{meta}\)) meta-steps. In each meta-step, a task is sampled from B, together with train and test mini-batches to compute the gradient \(g_{t}\) in Eq. 38. The initialization \(\varvec{\theta }\) is then updated (\(\varvec{\theta } := \varvec{\theta } - \beta g_{t}(\varvec{\theta })\)), where \(\beta\) is the meta-learning rate. Note that the memory usage keeps increasing over time, as at every time step t, we append tasks to the buffer B, and keep task data sets in memory.

Algorithm 4 (FTML pseudocode; figure not reproduced here)
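As the pseudocode figure is not reproduced, a rough JAX sketch of the FTML loop is given below; the task representation, the single inner gradient step, and the uniform sampling from the buffer follow the description above, while the loss function and batch handling are illustrative simplifications.

```python
import random
import jax
import jax.numpy as jnp

def task_loss(theta, X, y):
    return jnp.mean((X @ theta - y) ** 2)       # illustrative base-level loss

def meta_grad(theta, task, alpha=0.01):
    """Gradient of the query loss after a one-step inner update on the support set (Eq. 38)."""
    (X_tr, y_tr), (X_te, y_te) = task
    def adapted_loss(t):
        t_prime = t - alpha * jax.grad(task_loss)(t, X_tr, y_tr)
        return task_loss(t_prime, X_te, y_te)
    return jax.grad(adapted_loss)(theta)

def ftml(theta, task_stream, n_meta=5, beta=0.01):
    buffer = []                                  # B: all tasks encountered so far
    for task in task_stream:                     # tasks arrive one at a time
        buffer.append(task)
        for _ in range(n_meta):                  # MetaUpdate: a few meta-steps
            sampled = random.choice(buffer)      # uniform over tasks 1..t (p_t)
            theta = theta - beta * meta_grad(theta, sampled)
    return theta
```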

In summary, Online MAML is a robust technique for online learning (Finn et al. 2019). A downside of this approach is that the computational and memory costs keep growing over time, as all encountered data are stored. Reducing these costs is a direction for future work. Also, one could experiment with how well the approach works when more than one inner gradient update step per task is used, as mentioned by Finn et al. (2019).

5.11 LLAMA

Grant et al. (2018) mold MAML into a probabilistic framework, such that a probability distribution over task-specific parameters \(\varvec{\theta }_{j}'\) is learned, instead of a single one. In this way, multiple potential solutions can be obtained for a task. The resulting technique is called LLAMA (Laplace Approximation for Meta-Adaptation). Importantly, LLAMA is only developed for supervised learning settings.

A key observation is that a neural network \(f_{\varvec{\theta }'_{j}}\), parameterized by updated parameters \(\varvec{\theta }'_{j}\) (obtained from a few gradient updates using \(D^{tr}_{{\mathcal {T}}_{j}}\)), outputs class probabilities \(P(y_{i}| {\varvec{x}}_{i}, \varvec{\theta }'_{j})\). To minimize the error on the query set \(D^{test}_{{\mathcal {T}}_{j}}\), the model must output large probability scores for the true classes. This objective is captured by the negative log-likelihood loss function

$$\begin{aligned} {\mathcal {L}}_{D^{test}_{{\mathcal {T}}_{j}}}(\varvec{\theta }'_{j}) = - \sum _{{\varvec{x}}_{i},y_{i} \in D^{test}_{{\mathcal {T}}_{j}}} log \, P(y_{i} | {\varvec{x}}_{i}, \varvec{\theta }'_{j}). \end{aligned}$$
(39)

Simply put, if we see a task j as a probability distribution over examples \(p_{{\mathcal {T}}_{j}}\), we wish to maximize the probability that the model predicts the correct class \(y_{i}\), given an input \({\varvec{x}}_{i}\). This can be done by plain gradient descent, as shown in Algorithm 5, where \(\beta\) is the meta-learning rate. Line 4 refers to ML-LAPLACE, which is a subroutine that computes task-specific updated parameters \(\varvec{\theta }'_{j}\), and estimates the negative log-likelihood (loss function) which is used to update the initialization \(\varvec{\theta }\), as shown in Algorithm 6. Grant et al. (2018) approximated the quadratic curvature matrix \({\hat{H}}\) using K-FAC (Martens and Grosse 2015).

The trick is that the initialization \(\varvec{\theta }\) defines a distribution \(p(\varvec{\theta }'_{j}|\varvec{\theta })\) over task-specific parameters \(\varvec{\theta }'_{j}\). This distribution was taken to be a diagonal Gaussian (Grant et al. 2018). Then, to sample solutions for a new task \({\mathcal {T}}_{j}\), one can simply generate possible solutions \(\varvec{\theta }'_{j}\) from the learned Gaussian distribution.

Algorithms 5 and 6 (LLAMA pseudocode; figures not reproduced here)
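To convey the flavour of the ML-LAPLACE subroutine, the sketch below uses a crude diagonal curvature estimate in place of the K-FAC approximation used by Grant et al. (2018); the classifier, the loss, and the curvature stand-in are illustrative assumptions on our part.

```python
import jax
import jax.numpy as jnp

def nll(theta, X, y):
    """Negative log-likelihood of a linear softmax classifier (illustrative)."""
    logp = jax.nn.log_softmax(X @ theta)
    return -jnp.mean(logp[jnp.arange(y.shape[0]), y])

def ml_laplace(theta, task, alpha=0.01, k=5, eps=1e-3):
    """Adapt to the task, then return a Laplace-style objective for the meta-update."""
    (X_tr, y_tr), (X_te, y_te) = task
    theta_j = theta
    for _ in range(k):                                     # inner adaptation on D_tr
        theta_j = theta_j - alpha * jax.grad(nll)(theta_j, X_tr, y_tr)
    # Crude diagonal stand-in for the curvature matrix H-hat (K-FAC in the original).
    h_diag = jax.grad(nll)(theta_j, X_te, y_te) ** 2 + eps
    # Laplace-approximated negative log marginal likelihood of the query set.
    return nll(theta_j, X_te, y_te) + 0.5 * jnp.sum(jnp.log(h_diag))
```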

In short, LLAMA extends MAML in a probabilistic fashion, such that one can obtain multiple solutions for a single task, instead of one. This does, however, increase the computational costs. On top of that, the used Laplace approximation (in ML-LAPLACE) can be quite inaccurate (Grant et al. 2018).

5.12 PLATIPUS

PLATIPUS (Finn et al. 2018) builds upon the probabilistic interpretation of LLAMA (Grant et al. 2018) but learns a probability distribution over initializations \(\varvec{\theta }\), instead of task-specific parameters \(\varvec{\theta }_{j}'\). Thus, PLATIPUS allows one to sample an initialization \(\varvec{\theta } \backsim p(\varvec{\theta })\), which can be updated with gradient descent to obtain task-specific weights (fast weights) \(\varvec{\theta }_{j}'\).

Algorithm 7 (PLATIPUS pseudocode; figure not reproduced here)

The approach is best explained by its pseudocode, as shown in Algorithm 7. In contrast to the original MAML, PLATIPUS introduces five more parameter vectors (line 1). All of these parameters are used to facilitate the creation of Gaussian distributions over prior initializations (or simply priors) \(\varvec{\theta }\). That is, \(\varvec{\mu }_{\varvec{\theta }}\) represents the vector mean of the distributions. \(\varvec{\sigma }^{2}_{{\varvec{q}}}\), and \({\varvec{v}}_{q}\) represent the covariances of train and test distributions respectively. \(\varvec{\gamma }_{x}\) for \(x = q,p\) are learning rate vectors for performing gradient steps on distributions q (lines 6 and 7) and P (line 11).

The key difference with the regular MAML is that instead of having a single initialization point \(\varvec{\theta }\), we now learn distributions over priors: q and P, which are based on query and support data sets of task \({\mathcal {T}}_{j}\) respectively. Since these data sets come from the same task, we want the distributions \(q(\varvec{\theta } | D^{test}_{{\mathcal {T}}_{j}})\), and \(p(\varvec{\theta } | D^{tr}_{{\mathcal {T}}_{j}})\) to be close to each other. This is enforced by the Kullback–Leibler divergence (\(D_{ KL }\)) loss term on line 12, which measures the distance between the two distributions. Importantly, note that q (line 7) and P (line 11) use vector means which are computed with one gradient update step using the query and support data sets respectively. The idea is that the mean of the Gaussian distributions should be close to the updated mean \(\varvec{\mu }_{\varvec{\theta }}\), because we want to enable fast learning. As one can see, the training process is very similar to that of MAML (Finn et al. 2017) (Sect. 5.5), with some small adjustments to allow us to work with the probability distributions over \(\varvec{\theta }\).

At test-time, one can simply sample a new initialization \(\varvec{\theta }\) from the prior distribution \(p(\varvec{\theta } | D^{tr}_{{\mathcal {T}}_{j}})\) (note that q cannot be used at test-time as we do not have access to \(D^{test}_{{\mathcal {T}}_{j}}\)), and apply a gradient update on the provided support set \(D^{tr}_{{\mathcal {T}}_{j}}\). Note that this allows us to sample multiple potential initializations \(\varvec{\theta }\) for the given task.
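As a concrete illustration of this test-time procedure, the sketch below samples several initializations from a Gaussian prior conditioned on the support set and adapts each one with a single gradient step; the mapping of the learned mean and variance vectors onto the symbols of Algorithm 7 is glossed over here, and the loss is illustrative.

```python
import jax
import jax.numpy as jnp

def support_loss(theta, X_tr, y_tr):
    return jnp.mean((X_tr @ theta - y_tr) ** 2)          # illustrative loss on D_tr

def platipus_test_time(key, mu_theta, prior_var, gamma_p, X_tr, y_tr,
                       alpha=0.01, n_samples=5):
    """Sample initializations from p(theta | D_tr) and adapt each to the support set."""
    # Condition the prior mean on the support set with one gradient step.
    mu_cond = mu_theta - gamma_p * jax.grad(support_loss)(mu_theta, X_tr, y_tr)
    solutions = []
    for _ in range(n_samples):
        key, sub = jax.random.split(key)
        theta = mu_cond + jnp.sqrt(prior_var) * jax.random.normal(sub, mu_cond.shape)
        solutions.append(theta - alpha * jax.grad(support_loss)(theta, X_tr, y_tr))
    return solutions
```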

The key advantage of PLATIPUS is that it is aware of its uncertainty, which greatly increases the applicability of Deep Meta-Learning in critical domains such as medical diagnosis (Finn et al. 2018). Based on this uncertainty, it can ask for labels of inputs it is unsure about (active learning). A downside to this approach, however, is the increased computational cost, and the fact that it is not applicable to reinforcement learning.

5.13 Bayesian MAML (BMAML)

Bayesian MAML (Yoon et al. 2018) is another probabilistic variant of MAML that can generate multiple solutions. However, instead of learning a distribution over potential solutions, BMAML simply keeps M possible solutions and jointly optimizes them. Recall that probabilistic MAMLs (e.g., PLATIPUS) attempt to maximize the data likelihood of task \({\mathcal {T}}_{j}\), i.e., \(p({\varvec{y}}^{test}_{j}| \varvec{\theta }'_{j})\), where \(\varvec{\theta }'_{j}\) are task-specific fast weights obtained by one or more gradient updates. Yoon et al. (2018) model this likelihood using Stein Variational Gradient Descent (SVGD) (Liu and Wang 2016).

To obtain M solutions, or equivalently, parameter settings \(\varvec{\theta }^{m}\), SVGD maintains a set of M particles \(\varvec{\varTheta } = \{ \varvec{\theta }^{m} \}_{m=1}^{M}\). At iteration t, every \(\varvec{\theta }_{t} \in \varvec{\varTheta }\) is updated as follows

$$\begin{aligned} \varvec{\theta }_{t+1}&= \varvec{\theta }_{t} + \epsilon (\phi (\varvec{\theta }_{t})) \end{aligned}$$
(40)
$$\begin{aligned} \text { where } \phi (\varvec{\theta }_{t})&= \frac{1}{M}\sum _{m=1}^{M} \left[ k(\varvec{\theta }^{m}_{t}, \varvec{\theta }_{t}) \nabla _{\varvec{\theta }^{m}_{t}} log \, p(\varvec{\theta }_{t}^{m}) + \nabla _{\varvec{\theta }_{t}^{m}}k(\varvec{\theta }^{m}_{t}, \varvec{\theta }_{t}) \right] . \end{aligned}$$
(41)

Here, \(k({\varvec{x}}, {\varvec{x}}')\) is a similarity kernel between \({\varvec{x}}\) and \({\varvec{x}}'\). The authors used a radial basis function (RBF) kernel, but in theory, any other kernel could be used. Note that the update of one particle depends on the gradients of the other particles. The first term in the summation (\(k(\varvec{\theta }^{m}_{t}, \varvec{\theta }_{t}) \nabla _{\varvec{\theta }^{m}_{t}} log \, p(\varvec{\theta }_{t}^{m})\)) moves the particle in the direction of the gradients of the other particles, based on particle similarity. The second term (\(\nabla _{\varvec{\theta }_{t}^{m}}k(\varvec{\theta }^{m}_{t}, \varvec{\theta }_{t})\)) ensures that particles do not collapse (repulsive force) (Yoon et al. 2018).
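A minimal sketch of one SVGD update of the particle set (Eqs. 40–41), using an RBF kernel with a fixed illustrative bandwidth, is shown below; the per-particle log-density gradient \(\nabla log \, p(\varvec{\theta })\) is assumed to be supplied by the caller.

```python
import jax
import jax.numpy as jnp

def rbf_kernel(a, b, h=1.0):
    return jnp.exp(-jnp.sum((a - b) ** 2) / h)

def svgd_step(particles, grad_log_p, epsilon=0.01, h=1.0):
    """One SVGD update of all M particles (Eqs. 40-41)."""
    M = len(particles)
    updated = []
    for theta in particles:
        phi = jnp.zeros_like(theta)
        for theta_m in particles:
            k = rbf_kernel(theta_m, theta, h)
            phi = phi + k * grad_log_p(theta_m)                  # attractive term
            phi = phi + jax.grad(rbf_kernel)(theta_m, theta, h)  # repulsive term
        updated.append(theta + epsilon * phi / M)
    return updated
```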

These particles can then be used to approximate the probability distribution of the test labels

$$\begin{aligned} p({\varvec{y}}^{test}_{j}| \varvec{\theta }'_{j}) \approx \frac{1}{M} \sum _{m=1}^{M} p({\varvec{y}}_{j}^{test} | \varvec{\theta }^{m}_{{\mathcal {T}}_{j}}), \end{aligned}$$
(42)

where \(\varvec{\theta }_{{\mathcal {T}}_{j}}^{m}\) is the m-th particle obtained by training on the support set \(D^{tr}_{{\mathcal {T}}_j}\) of task \({\mathcal {T}}_{j}\).

Yoon et al. (2018) proposed a new meta-loss to train BMAML, called the Chaser Loss. This loss relies on the insight that we want the approximated parameter distribution (obtained from the support set \(p^{n}_{{\mathcal {T}}_{j}}(\varvec{\theta }_{{\mathcal {T}}_{j}} | D^{tr}, \varvec{\varTheta }_{0})\)) and true distribution \(p^{\infty }_{{\mathcal {T}}_{j}}(\varvec{\theta }_{{\mathcal {T}}_{j}}|D^{tr} \cup D^{test})\) to be close to each other (since the task is the same). Here, n denotes the number of SVGD steps, and \(\varvec{\varTheta }_{0}\) is the set of initial particles, in a similar fashion to the initial parameters \(\varvec{\theta }\) seen by MAML. Since the true distribution is unknown, Yoon et al. (2018) approximate it by running SVGD for s additional steps, granting us the leader \(\varvec{\varTheta }^{n + s}_{{\mathcal {T}}_{j}}\), where the s additional steps are performed on the combined support and query set. The intuition is that as the number of updates increases, the obtained distributions become more like the true ones. \(\varvec{\varTheta }^{n}_{{\mathcal {T}}_{j}}\) in this context is called the chaser as it wants to get closer to the leader. The proposed meta-loss is then given by

$$\begin{aligned} {\mathcal {L}}_{BMAML}(\varvec{\varTheta }_{0}) = \sum _{{\mathcal {T}}_{j} \in B}\sum _{m=1}^{M} || \varvec{\theta }_{{\mathcal {T}}_{j}}^{n,m} - \varvec{\theta }_{{\mathcal {T}}_{j}}^{n+s,m} ||^{2}_{2}. \end{aligned}$$
(43)

The full pseudocode of BMAML is shown in Algorithm 8. Here, \(\varvec{\varTheta }^{n}_{{\mathcal {T}}_{j}}(\varvec{\varTheta }_{0})\) denotes the set of particles after n updates on task \({\mathcal {T}}_{j}\), and SG means “stop gradients” (we do not want the leader to depend on the initialization, as the leader must lead).

Algorithm 8 (BMAML pseudocode; figure not reproduced here)
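The chaser loss of Eq. 43 might be sketched as follows, where `run_svgd` stands for a routine such as the `svgd_step` sketch above applied for a given number of iterations, and the log-density gradients of the support set and of the combined support and query set are supplied by the caller; all names are illustrative.

```python
import jax
import jax.numpy as jnp

def chaser_loss(particles_0, run_svgd, grad_log_p_train, grad_log_p_joint, n=1, s=1):
    """Squared distance between chaser and leader particle sets (Eq. 43)."""
    # Chaser: n SVGD steps on the support set, starting from the initial particles.
    chaser = run_svgd(particles_0, grad_log_p_train, n)
    # Leader: s further SVGD steps on the combined support and query set.
    leader = run_svgd(chaser, grad_log_p_joint, s)
    leader = [jax.lax.stop_gradient(t) for t in leader]   # SG: the leader must lead
    return sum(jnp.sum((c - l) ** 2) for c, l in zip(chaser, leader))
```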

In summary, BMAML is a robust optimization-based meta-learning technique that can propose M potential solutions to a task. Additionally, it is applicable to reinforcement learning by using Stein Variational Policy Gradient instead of SVGD. A downside of this approach is that one has to keep M parameter sets in memory, which does not scale well. Reducing the memory costs is a direction for future work (Yoon et al. 2018). Furthermore, SVGD is sensitive to the selected kernel function, which was pre-defined in BMAML. However, Yoon et al. (2018) point out that it may be beneficial to learn the kernel function instead. This is another possibility for future research.

5.14 Simple differentiable solvers

Bertinetto et al. (2019) take a quite different approach. That is, they pick simple base-learners that have an analytical closed-form solution. The intuition is that the existence of a closed-form solution allows for good learning efficiency. They propose two techniques using this principle, namely R2-D2 (Ridge Regression Differentiable Discriminator), and LR-D2 (Logistic Regression Differentiable Discriminator). We cover both in turn.

Let \(g_{\varvec{\phi }}: X \rightarrow {\mathbb {R}}^{e}\) be an input embedding model (e.g. a CNN), which outputs embeddings with a dimensionality of e. Furthermore, assume that we use a linear predictor function \(f(g_{\varvec{\phi }}({\varvec{x}}_{i})) = g_{\varvec{\phi }}({\varvec{x}}_{i})W\), where W is an \(e \times o\) weight matrix and o is the output dimensionality (of the label). When using (regularized) Ridge Regression (done by R2-D2), one uses the optimal W, i.e.,

$$\begin{aligned} W^{*}&= {\mathop {{{\,\mathrm{arg\,min}\,}}}\limits _{W}} \, || XW - Y||^{2}_{2} + \gamma ||W||^{2} \nonumber \\&= (X^{T}X + \gamma I)^{-1}X^{T}Y, \end{aligned}$$
(44)

where \(X \in {\mathbb {R}}^{n \times e}\) is the input matrix, containing n rows (one for each embedded input \(g_{\varvec{\phi }}({\varvec{x}}_{i})\)), \(Y \in {\mathbb {R}}^{n \times o}\) is the output matrix with correct outputs corresponding to the inputs, and \(\gamma\) is a regularization term to prevent overfitting. Note that the analytical solution contains the term \((X^{T}X) \in {\mathbb {R}}^{e \times e}\), whose size is quadratic in the embedding dimensionality. Since e can become quite large when using deep neural networks, Bertinetto et al. (2019) use Woodbury’s identity

$$\begin{aligned} W^{*} = X^{T}(XX^{T} + \gamma I)^{-1} Y, \end{aligned}$$
(45)

where computing \(XX^{T} \in {\mathbb {R}}^{n \times n}\) is linear in the embedding size and quadratic in the number of examples, which is more manageable in few-shot settings, where n is very small. To make predictions with this Ridge Regression model, one can compute

$$\begin{aligned} {\hat{Y}} = \alpha X_{test}W^{*} + \beta , \end{aligned}$$
(46)

where \(\alpha\) and \(\beta\) are hyperparameters of the base-learner that can be learned by the meta-learner, and \(X_{test} \in {\mathbb {R}}^{m \times e}\) corresponds to the m test inputs of a given task. Thus, the meta-learner needs to learn \(\alpha , \beta , \gamma\), and \(\varvec{\phi }\) (embedding weights of the CNN).
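The base-learner of R2-D2 is small enough to write out in full; the sketch below solves Eq. 45 and applies the calibrated prediction of Eq. 46 on pre-computed embeddings, with illustrative default values for the hyperparameters.

```python
import jax.numpy as jnp

def r2d2_fit(X, Y, gamma=1.0):
    """Woodbury form of the ridge solution (Eq. 45): W* = X^T (X X^T + gamma I)^-1 Y."""
    n = X.shape[0]
    return X.T @ jnp.linalg.solve(X @ X.T + gamma * jnp.eye(n), Y)

def r2d2_predict(X_test, W, alpha=1.0, beta=0.0):
    """Calibrated predictions (Eq. 46); alpha and beta are meta-learned scalars."""
    return alpha * (X_test @ W) + beta

# Usage on a single few-shot task, given embeddings from g_phi:
#   X_tr (n, e): embedded support inputs, Y_tr (n, o): one-hot labels,
#   X_te (m, e): embedded query inputs.
#   W = r2d2_fit(X_tr, Y_tr); scores = r2d2_predict(X_te, W)
```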

The technique can also be applied to iterative solvers when the optimization steps are differentiable (Bertinetto et al. 2019). LR-D2 uses the Logistic Regression objective and Newton’s method as solver. Outputs \({\varvec{y}} \in \{-1,+1\}^{n}\) are now binary. Let \({\varvec{w}}\) denote a parameter row of our linear model (parameterized by W). Then, the i-th iteration of Newton’s method updates \({\varvec{w}}_{i}\) as follows

$$\begin{aligned} {\varvec{w}}_{i} = (X^{T}\text{ diag }({\varvec{s}}_{i})X + \gamma I)^{-1}X^{T}\text{ diag }({\varvec{s}}_{i}){\varvec{z}}_{i}, \end{aligned}$$
(47)

where \(\varvec{\mu }_{i} = \sigma ({\varvec{w}}^{T}_{i-1}X)\), \({\varvec{s}}_{i} = \varvec{\mu }_{i}(1 - \varvec{\mu }_{i})\), \({\varvec{z}}_{i} = {\varvec{w}}^{T}_{i-1}X + ({\varvec{y}} - \varvec{\mu }_{i})/{\varvec{s}}_{i}\), and \(\sigma\) is the sigmoid function. Since the term \(X^{T}\text{ diag }({\varvec{s}}_{i})X\) is a matrix of size \(e \times e\), and thus again quadratic in the embedding size, Woodbury’s identity is also applied here to obtain

$$\begin{aligned} {\varvec{w}}_{i} = X^{T}(XX^{T} + \gamma \text{ diag }({\varvec{s}}_{i})^{-1})^{-1}{\varvec{z}}_{i}, \end{aligned}$$
(48)

making it quadratic in the number of examples, which is not a big problem since n is small in the few-shot setting. The main difference compared to R2-D2 is that the base-solver has to be run for multiple iterations to obtain W.
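A sketch of the LR-D2 base-solver for a single binary output follows, iterating the Woodbury form of Eq. 48; the number of Newton iterations and the regularization value are illustrative, and the labels are assumed to follow the convention stated above.

```python
import jax
import jax.numpy as jnp

def lr_d2_fit(X, y, gamma=1.0, iterations=5):
    """Newton's method for regularized logistic regression in Woodbury form (Eq. 48)."""
    w = jnp.zeros(X.shape[1])
    for _ in range(iterations):
        mu = jax.nn.sigmoid(X @ w)               # predicted probabilities
        s = mu * (1.0 - mu)                      # per-example curvature weights
        z = X @ w + (y - mu) / s                 # working targets
        A = X @ X.T + gamma * jnp.diag(1.0 / s)  # n x n system, cheap when n is small
        w = X.T @ jnp.linalg.solve(A, z)
    return w
```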

In the few-shot setting, the base-level optimizers compute the weight matrix W for a given task \({\mathcal {T}}_{i}\). The obtained loss on the query set of a task \({\mathcal {L}}_{D_{test}}\) is then used to update the parameters \(\varvec{\phi }\) of the input embedding function (e.g. CNN) and the hyperparameters of the base-learner.

Lee et al. (2019) have done similar work to Bertinetto et al. (2019), but with linear Support Vector Machines (SVMs) as base-learner. Their approach is dubbed MetaOptNet, and achieved state-of-the-art performance on few-shot image classification.

In short, simple differentiable solvers are easy to implement and reasonably fast in terms of computation time, but limited to few-shot learning settings. Investigating the use of other simple base-learners is a direction for future work.

5.15 Optimization-based techniques, in conclusion

Optimization-based techniques aim to learn new tasks quickly through (learned) optimization procedures. Note that this closely resembles base-level learning, which also occurs through optimization (e.g., gradient descent). However, in contrast to base-level techniques, optimization-based meta-learners can learn the optimizer and/or are exposed to multiple tasks, which allows them to learn to learn new tasks quickly. Figure 28 shows the relationships between the covered optimization-based techniques.

Fig. 28
figure 28

The relationships between the covered optimization-based meta-learning techniques. As one can see, MAML has a central position in this network of techniques as it has inspired many other works

As we can see, the LSTM optimizer (Andrychowicz et al. 2016), which replaces hand-crafted optimization procedures such as gradient descent with a trainable LSTM, can be seen as the starting point for these optimization-based meta-learning techniques. Li and Malik (2018) also aim to learn the optimization procedure with reinforcement learning instead of gradient-based methods. The LSTM meta-learner (Ravi and Larochelle 2017) extends the LSTM optimizer to the few-shot setting by not only learning the optimization procedure but also a good set of initial weights. This way, it can be used across tasks. MAML (Finn et al. 2017) is a simplification of the LSTM meta-learner as it replaces the trainable LSTM optimizer by hand-crafted gradient descent. MAML has received considerable attention within the field of deep meta-learning, and has, as one can see, inspired many other works.

Meta-SGD is an enhancement of MAML that not only learns the initial parameters, but also the learning rates (Li et al. 2017). PLATIPUS (Finn et al. 2018) and online MAML (Finn et al. 2019) extend MAML to the active and online learning settings, respectively. LLAMA (Grant et al. 2018) and PLATIPUS are probabilistic interpretations of MAML, which allow them to sample multiple solutions for a given task and quantify their uncertainty. BMAML (Yoon et al. 2018) takes a more discrete approach as it jointly optimizes a discrete set of M initializations. iMAML (Rajeswaran et al. 2019) aims to overcome the computational expenses associated with the computation of second-order derivatives, which is needed by MAML. Through implicit differentiation, it also allows for the use of non-differentiable inner-loop optimization procedures. Reptile (Nichol et al. 2018) is an elegant first-order meta-learning algorithm for finding a set of initial parameters, which removes the need for computing higher-order derivatives. LEO (Rusu et al. 2018) tries to improve the robustness of MAML by optimizing in a lower-dimensional parameter space through the use of an encoder-decoder architecture. Lastly, R2-D2, LR-D2 (Bertinetto et al. 2019), and MetaOptNet (Lee et al. 2019) use simple classical machine learning methods (ridge regression, logistic regression, and SVMs, respectively) as a classifier on top of a learned feature extractor.

A key advantage of optimization-based approaches is that they can achieve better performance on wider task distributions than, e.g., model-based approaches (Finn and Levine 2018). However, optimization-based techniques optimize a base-learner for every task that they are presented with and/or learn the optimization procedure, which is computationally expensive (Hospedales et al. 2020).

Optimization-based meta-learning is a very active area of research. We expect future work to be done in order to reduce the computational demands of these methods and improve the solution quality and level of generalization. We think that benchmarking and reproducibility research will play an important role in these improvements.

6 Concluding remarks

In this section, we give a helicopter view of all that we discussed, and the field of Deep Meta-Learning in general. We will also discuss challenges and future research.

6.1 Overview

In recent years, there has been a shift in focus in the broad meta-learning community. Traditional algorithm selection and hyperparameter optimization for classical machine learning techniques (e.g. Support Vector Machines, Logistic Regression, Random Forests, etc.) have been augmented by Deep Meta-Learning, or equivalently, the pursuit of self-improving neural networks that can leverage prior learning experience to learn new tasks more quickly. Instead of training a new model from scratch for different tasks, we can use the same (meta-learning) model across tasks. As such, meta-learning can widen the applicability of powerful deep learning techniques to domains where less data is available and computational resources are limited.

Deep Meta-Learning techniques are characterized by their meta-objective, which allows them to maximize performance across various tasks, instead of a single one, as is the case in base-level learning objectives. This meta-objective is reflected in the training procedure of meta-learning methods, as they learn on a set of different meta-training tasks. The few-shot setting lends itself nicely towards this end, as tasks consist of few data points. This makes it computationally feasible to train on many different tasks, and it allows us to evaluate whether a neural network can learn new concepts from few examples. Task construction for training and evaluation does require some special attention. That is, it has been shown beneficial to match training and test conditions (Vinyals et al. 2016), and perhaps train in a more difficult setting than the one that will be used for evaluation (Snell et al. 2017).

On a high level, there are three categories of Deep Meta-Learning techniques, namely (i) metric-, (ii) model-, and (iii) optimization-based ones, which rely on (i) computing input similarity, (ii) task embeddings with states, and (iii) task-specific updates, respectively. Each approach has strengths and weaknesses. Metric-learning techniques are simple and effective (Garcia and Bruna 2017), but are not readily applicable outside of the supervised learning setting (Hospedales et al. 2020). Model-based techniques, on the other hand, can have very flexible internal dynamics, but lack generalization ability to more distant tasks than the ones used at meta-train time (Finn and Levine 2018). Optimization-based approaches have shown greater generalizability, but are in general computationally expensive, as they optimize a base-learner for every task (Finn and Levine 2018; Hospedales et al. 2020).

Table 2 provides a concise, tabular overview of these approaches. Many techniques have been proposed for each one of the categories, and the underlying ideas may vary greatly, even within the same category. Table 3, therefore, provides an overview of all methods and key ideas that we have discussed in this work, together with their applicability to supervised learning (SL) and reinforcement learning (RL) settings, key ideas, and benchmarks that were used for testing them. Table 5 displays an overview of the 1- and 5-shot classification performances (reported by the original authors) of the techniques on the frequently used miniImageNet benchmark. Moreover, it displays the used backbone (feature extraction module) as well as the final classification mechanism. From this table, it becomes clear that the 5-shot performance is typically better than the 1-shot performance, indicating that data scarcity is a large bottleneck for achieving good performance. Moreover, there is a strong relationship between the expressivity of the backbone and the performance. That is, deeper backbones tend to give rise to better classification performance. The best performance is achieved by MetaOptNet, yielding a 1-shot accuracy of 64.09% and a 5-shot accuracy of 80.00%. Note however that MetaOptNet used a deeper backbone than most of the other techniques.

Table 5 Comparison of the accuracy scores of the covered meta-learning techniques on 1- and 5-shot miniImageNet classification

6.2 Open challenges and future work

Despite the great potential of Deep Meta-Learning techniques, there are still open challenges, which we discuss here.

Figure 1 in Sect. 1 displays the accuracy scores of the covered meta-learning techniques on 1-shot miniImageNet classification. Techniques that were not tested in this setting by the original authors are omitted. As we can see, the performance of the techniques is related to the expressivity of the used backbone (ordered in increasing order on the x-axis). For example, the best-performing techniques, LEO and MetaOptNet, use the largest network architectures. Moreover, the fact that different techniques use different backbones poses a problem, as it makes it difficult to fairly compare their classification performance. An obvious question is to what degree the difference in performance is due to methodological improvements, as opposed to the choice of a better backbone architecture. For this reason, we think that it would be useful to perform a large-scale benchmark in which techniques are compared using the same backbones. This would also give us a clearer idea of how the expressivity of the feature extraction module affects the performance.

Another challenge of Deep Meta-Learning techniques is that they can be susceptible to the memorization problem (meta-overfitting), where the neural network has memorized tasks seen at meta-training time and fails to generalize to new tasks. More research is required to better understand this problem. Clever task design and meta-regularization may prove useful to avoid such problems (Yin et al. 2020).

Another problem is that most of the meta-learning techniques discussed in this work are evaluated on narrow benchmark sets. This means that the data that the meta-learner used for training are not too distant from the data used for evaluating its performance. As such, one may wonder how well these techniques are able to adapt to more distant tasks. Chen et al. (2019) showed that the ability to adapt to new tasks decreases as they become more distant from the tasks seen at training time. Moreover, a simple non-meta-learning baseline (based on pre-training and fine-tuning) can outperform state-of-the-art meta-learning techniques when meta-test tasks come from a different data set than the one used for meta-training.

In reaction to these findings, Triantafillou et al. (2020) have recently proposed the Meta-Dataset benchmark, which consists of various previously used meta-learning benchmarks such as Omniglot (Lake et al. 2011) and ImageNet (Deng et al. 2009). This way, meta-learning techniques can be evaluated in more challenging settings where tasks are diverse. Following Hospedales et al. (2020), we think that this new benchmark can prove to be a good means for the investigation and development of meta-learning algorithms for such challenging scenarios.

As mentioned earlier in this section, Deep Meta-Learning has the appealing prospect of widening the applicability of deep learning techniques to more real-world domains. For this, increasing the generalization ability of these techniques is very important. Additionally, the computational costs associated with the deployment of meta-learning techniques should be small. While these techniques can learn new tasks quickly, meta-training can be quite computationally expensive. Thus, decreasing the required computation time and memory costs of Deep Meta-Learning techniques remains an open challenge.

Some real-world problems demand systems that can perform well in online, or active learning settings. The investigation of Deep Meta-Learning in these settings (Finn et al. 2018; Yoon et al. 2018; Finn et al. 2019; Munkhdalai and Yu 2017; Vuorio et al. 2018) remains an important direction for future work.

Yet another direction for future research is the creation of compositional Deep Meta-Learning systems, which instead of learning flat and associative functions \({\varvec{x}} \rightarrow y\), organize knowledge in a compositional manner. This would allow them to decompose an input \({\varvec{x}}\) into several (already learned) components \(c_{1}({\varvec{x}}),...,c_{n}({\varvec{x}})\), which in turn could help the performance in low-data regimes (Tokmakov et al. 2019).

The question has been raised whether contemporary Deep Meta-Learning techniques actually learn how to perform rapid learning, or simply learn a set of robust high-level features, which can be (re)used for many (new) tasks. Raghu et al. (2020) investigated this question for the most popular Deep Meta-Learning technique MAML and found that it largely relies on feature reuse. It would be interesting to see whether we can develop techniques that rely more upon fast learning, and what the effect would be on performance.

Lastly, it may be useful to add more meta-abstraction levels, giving rise to, e.g., meta-meta-learning, meta-meta-...-learning (Hospedales et al. 2020; Schmidhuber 1987).