Combined Scaling for Zero-shot Transfer Learning

We present a combined scaling method, named BASIC, that achieves 85.7% top-1 accuracy on the ImageNet ILSVRC-2012 validation set without learning from any labeled ImageNet example. This accuracy surpasses the best published similar models, CLIP and ALIGN, by 9.3%. Our BASIC model also shows significant improvements on robustness benchmarks. For instance, on 5 test sets with natural distribution shifts, ImageNet-{A,R,V2,Sketch} and ObjectNet, our model achieves 84.3% top-1 average accuracy, only a small drop from its original ImageNet accuracy. To achieve these results, we scale up the contrastive learning framework of CLIP and ALIGN in three dimensions: data size, model size, and batch size. Our dataset has 6.6B noisy image-text pairs, 4x larger than ALIGN and 16x larger than CLIP. Our largest model has 3B weights, 3.75x larger in parameters and 8x larger in FLOPs than ALIGN and CLIP. Finally, our batch size is 65536, 2x larger than CLIP and 4x larger than ALIGN. We encountered two main challenges with the scaling rules of BASIC. First, the main challenge in implementing them is the limited memory of accelerators, such as GPUs and TPUs. To overcome the memory limit, we propose two simple methods which make use of gradient checkpointing and model parallelism. Second, while increasing the dataset size and the model size has been the de facto method to improve the performance of deep learning models like BASIC, the effect of a large contrastive batch size on such contrastive-trained image-text models is not well understood. To shed light on the benefits of large contrastive batch sizes, we develop a theoretical framework which shows that larger contrastive batch sizes lead to smaller generalization gaps for image-text models such as BASIC.


Introduction
The recent advances in zero-shot transfer learning such as CLIP [64] and ALIGN [37] have the potential to eliminate the need for collecting labeled training data for every new application. Using natural language as a weak supervision signal, CLIP and ALIGN achieve the impressive top-1 accuracies of 76.2% and 76.4% on ImageNet ILSVRC-2012 without learning from any labeled ImageNet data. In addition to the promising accuracy on ImageNet, the zero-shot models in CLIP and ALIGN demonstrate two important properties. First, these zero-shot models are versatile, as they can be directly deployed on many downstream tasks without task-specific data for finetuning. Second, CLIP and ALIGN models are more robust than traditional classifiers. Robustness evaluations on benchmarks with natural distribution shifts [4,30,31,65,89] show that the accuracy of zero-shot transfer models typically drops less than 10%, while the accuracy of supervised and semi-supervised models might drop as much as 40% [77,81].

Table 1: Highlights of our key results. Shown are the top-1 accuracy of our method, BASIC, and other state-of-the-art zero-shot transfer methods, CLIP and ALIGN, on ImageNet and other robustness test sets. None of these models has seen any ImageNet training example. On average, BASIC surpasses these methods by a significant 9.2 percentage points.

Despite their versatility and robustness, zero-shot transfer models are still not as competitive as supervised and semi-supervised models when enough labeled data is available, which can limit their potential applications. For example, the best CLIP and ALIGN models have an accuracy of about 76% on ImageNet, which is only comparable with a supervised ResNet-50 [27], and significantly worse than the state-of-the-art supervised training on ImageNet (without extra data: 87.1% [97]; with extra data: 90.88% [17]). Therefore, narrowing the gap between zero-shot transfer models and supervised and semi-supervised models would make zero-shot transfer learning a viable alternative for image classification.
Here we develop significantly better zero-shot transfer image classifiers via combined scaling. In particular, we scale up the contrastive learning framework of CLIP [64] and ALIGN [37] in 3 dimensions: data size, model size, and batch size. For the data, we expand the ALIGN dataset [37] from 1.7B noisy image-text pairs to 6.6B pairs, i.e., almost 4x larger. For the models, we choose CoatNet, an architecture with higher learning capacity [17], and scale it to 3B parameters, i.e., 3.75x more weights and 8x more FLOPs than the largest models in CLIP and ALIGN. For the batch size, we use 65536 contrastive learning examples per minibatch, i.e., 2x more than CLIP and 4x more than ALIGN. The larger batch size allows for more negative examples and leads to better generalization of our zero-shot model.
The fundamental bottleneck of training large models at larger batch sizes is the limited memory of deep learning accelerators such as GPUs and TPUs. To overcome this bottleneck, we implement a technique of gradient accumulation (GradAccum) [61,98]. Vanilla GradAccum is not applicable to the contrastive learning framework in CLIP and ALIGN, and introduces significant overheads at our scale. Therefore, we propose two modifications to GradAccum, making it applicable for scaling up the contrastive learning framework.
Our proposed method, called BASIC, for Batch, Data and Model SIze Combined Scaling, achieves drastic improvements over CLIP and ALIGN models. For instance, on ImageNet, the largest BASIC model achieves 85.7% top-1 zero-shot accuracy, surpassing CLIP and ALIGN by 9.3%. This BASIC model also shows significant improvements on robustness benchmarks. For instance, on 5 test sets with natural distribution shifts, ImageNet-{A,R,V2,Sketch} and ObjectNet, the model achieves an average of 83.7% top-1 accuracy, only a small drop from its original ImageNet accuracy (see Table 1). When tested against CLIP on 17 other image classification benchmarks, e.g., CIFAR, Caltech101, and Flowers, BASIC outperforms CLIP on 13 out of these 17 benchmarks.

Related Work
Large-scale pretraining and the contrastive loss. As computer vision models grow in their size and capacity, many weakly-supervised and self-supervised pretraining methods have been proposed to learn good visual representations. On one hand, pretraining with a classification loss on large weakly-labeled datasets such as Instagram hashtags or JFT can produce significant gains on downstream tasks such as ImageNet [20,38,43,55,76,98]. On the other hand, self-supervised methods which leverage existing structures in unlabeled data to train models have been developed. A promising development in self-supervised learning is the contrastive loss, with representative works like CPC [85], SimCLR [10,11] and MoCo [12,28]. In this paper, we scale up the contrastive learning framework, which we will revisit in detail in Section 3.1.
Zero-shot and image-text models. Early works on zero-shot vision models date back to the 2000s, e.g., [46,68,92,93,99]. In these early works, the term "zero-shot" refers to the ability of models to recognize classes which are not present in their training data, and the focus is mostly on the vision domain.
Our work, on the other hand, leverages text data as well. Using images with accompanying text relates to the literature on image-captioning models, such as [18,38,40,50,67,88,96,100]. While learning to generate captions from images can induce good visual representations, that is not the goal of this paper. Instead, this paper focuses on the zero-shot transfer ability, i.e., the ability to classify images based on textual descriptions. This focus makes our work closely related to the recent image-text models CLIP [64] and ALIGN [37], both of which have shown zero-shot transfer ability for image classification. Similar to CLIP and ALIGN, our work also learns the mapping between images and texts, which relates to many previous works, such as [2,3,8,13,22,24,32,33,36,42,51,52,54,56,57,60,71,72,73,91].
Data, model and batch scaling. Scaling has proven to be a powerful tool to boost the efficacy of vision model pretraining. There are three dimensions one can scale on. The simplest dimension is data. Indeed, recent efforts have shown that the more data we train on, the better the models become [20,38,43,55,76]. The second dimension is the model size, with representative works such as EfficientNet and ViTs [5,20,78,79,80,98]. Lastly, scaling up batch sizes is also key to improving model effectiveness [25], especially for the contrastive loss [10,37,64,82]. Our work is inspired by the power of scaling, and pushes the limits of all three dimensions.

Batch Size Scaling
In this section, we first revisit the contrastive learning framework and point out the memory bottleneck in scaling it up. We focus on memory because it is the most crucial bottleneck which hinders two out of the three dimensions that we want to scale, i.e., model size and batch size. We further show that standard gradient accumulation (GradAccum) [61,98], a previous approach to overcome memory limits, is not directly applicable to contrastive learning. We then describe our modifications to make GradAccum work for contrastive learning.

Background
Settings and notations. The contrastive learning framework in CLIP [64] and ALIGN [37] trains two neural networks: network I processes image inputs and network T processes text inputs. The goal is to train I and T so that they can embed image and text inputs into a D-dimensional hypersphere S^D in such a way that inputs with similar semantics are mapped to nearby points regardless of their modalities.
To this end, in each training step, the networks I and T receive a minibatch of N pairs (x_i, y_i), where x_i is an image and y_i is a text sequence (e.g., a textual description) with similar semantic content to x_i. Each image x_i and text sequence y_i is then mapped to the points I(x_i), T(y_i) ∈ S^D. Based on these embeddings, a similarity matrix A ∈ R^{N×N} is computed, where A_{i,j} = I(x_i)^⊤ T(y_j) quantifies how similar the embeddings of image x_i and text sequence y_j are. The contrastive loss in CLIP and ALIGN is the average cross-entropy loss on the rows and columns of A, where the diagonal entries are treated as correct classes while other entries are treated as incorrect classes. Specifically:

L_c = −(1/2N) Σ_{i=1}^{N} [ log( exp(A_{i,i}/τ) / Σ_{j=1}^{N} exp(A_{i,j}/τ) ) + log( exp(A_{i,i}/τ) / Σ_{j=1}^{N} exp(A_{j,i}/τ) ) ]    (1)

Here, τ is the softmax temperature, which serves to steepen or dampen the softmax distributions in the rows and columns of A. As L_c is minimized, the embeddings I(x_i) and T(y_i) become closer to each other, and further from all I(x_{j≠i}) and T(y_{j≠i}), hence achieving the goal of the contrastive learning framework.
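As a concrete reference, the loss in Equation 1 can be written in a few lines of NumPy. This is a toy sketch: the embedding matrices stand in for the encoder outputs I(x_i) and T(y_i), and only the loss value is computed.

```python
import numpy as np

def _logsumexp(A, axis):
    """Numerically stable log-sum-exp, keeping the reduced axis."""
    m = A.max(axis=axis, keepdims=True)
    return m + np.log(np.exp(A - m).sum(axis=axis, keepdims=True))

def contrastive_loss(X, Y, tau=0.1):
    """Toy version of the contrastive loss in Equation 1.

    X, Y: (N, D) L2-normalized image and text embeddings, standing in
    for I(x_i) and T(y_i). Returns the average of the row-wise and
    column-wise cross-entropy losses on the similarity matrix A.
    """
    A = X @ Y.T / tau                     # similarity matrix, scaled by 1/tau
    N = A.shape[0]
    row_logp = A - _logsumexp(A, axis=1)  # image -> text softmax (rows)
    col_logp = A - _logsumexp(A, axis=0)  # text -> image softmax (columns)
    diag = np.arange(N)
    return -0.5 * (row_logp[diag, diag].mean() + col_logp[diag, diag].mean())
```

With matched pairs on the diagonal, the loss is near zero; permuting the texts against the images makes it large.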
The challenge: memory bottleneck. The consensus among representative works in contrastive learning [10,11,12,28] is that larger networks trained with larger contrastive batch sizes perform better. Our results in Section 7 also confirm this observation. Therefore, we want to enlarge the networks I, T, and the batch size N. However, this creates a memory bottleneck. Three well-known techniques to relieve the memory burden are gradient accumulation (GradAccum) [61,98], re-materialization (or gradient checkpointing) [9,26], and model parallelism [35,48,70]. Note that all three techniques are orthogonal and complementary to each other. Among them, model parallelism is considered much more complicated, often requiring dedicated system support and nontrivial extra communication to reduce runtime overheads. To keep our method simple and fast, this paper focuses on GradAccum and re-materialization, and we show how they can be naturally combined for large-scale contrastive learning.

We now analyze the steps of GradAccum. For simplicity, assume that the chunk size M evenly divides N, and that the i-th chunk consists of the examples (x_j, y_j) with (i−1)M + 1 ≤ j ≤ iM, whose per-example gradients with respect to the parameters θ we denote g_j. With this assumption, the GradAccum procedure first initializes a zero vector ḡ of the same size as θ. Then, sequentially for each chunk i, the chunk gradient

c_i = (1/M) Σ_{j=(i−1)M+1}^{iM} g_j

is added to ḡ. In the end, ḡ holds the correct minibatch gradient, up to a normalization constant K = N/M.

GradAccum cannot be naively applied to contrastive learning. Two properties make GradAccum inapplicable to contrastive learning. First, in order to evaluate the loss L_c in Equation 1, we need all entries of the similarity matrix A. Hence, we cannot rely only on the examples in the i-th chunk to compute the chunk gradient c_i. Second, GradAccum must allocate memory for the cumulative gradient ḡ. As ḡ has as many elements as θ, its memory grows as we scale up the networks I and T.
This growth becomes more pronounced as we scale up our models. For reference, our largest model has 3B weights, occupying roughly 11GB of accelerator memory. Spending another 11GB on ḡ, while possible, defeats the purpose of saving memory with GradAccum. In the remainder of this section, we discuss how to modify GradAccum so that we can use it to scale up contrastive learning.
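For reference, the vanilla GradAccum procedure described above can be sketched as follows. This is a toy example in which per-example gradients are plain vectors; g_j, M, and N are as in the text.

```python
import numpy as np

def grad_accum(per_example_grads, M):
    """Average N per-example gradients in chunks of size M.

    Accumulates each chunk gradient c_i into a running buffer g_bar,
    so that only one chunk of gradients is materialized at a time.
    """
    N = len(per_example_grads)
    assert N % M == 0, "assume M evenly divides N"
    K = N // M
    g_bar = np.zeros_like(per_example_grads[0])
    for i in range(K):
        chunk = per_example_grads[i * M:(i + 1) * M]
        c_i = np.mean(chunk, axis=0)   # chunk gradient c_i
        g_bar += c_i / K               # accumulate into g_bar
    return g_bar                       # equals the full minibatch mean
```

The result is identical to averaging all N per-example gradients at once, which is what makes the buffer ḡ the only extra memory cost.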

Resolving the Memory Issue
Chunking the contrastive loss. To enable proper GradAccum, a key observation is that while we need the entire similarity matrix A to compute L_c in Equation 1, we do not need to store all the intermediate results leading to the matrix in memory. This observation immediately connects to re-materialization, which trades computation for memory by dropping some intermediate hidden states during the forward pass and re-computing them during backpropagation. Following this insight, we propose to combine re-materialization with gradient accumulation by chunking the contrastive loss and re-materializing each chunk.
Specifically, we first run a forward pass on the networks I, T to compute the entire similarity matrix A while discarding all intermediate hidden states. Then, we use A to compute L c and the gradient ∇ A L c and chunk this gradient along the batch axis. Finally, for each chunk, we re-materialize the hidden states, i.e., rerun the forward computation, and back-prop and accumulate the corresponding gradient chunk of ∇ A L c into the weights of the networks I, T .
Algorithm 1 presents this procedure in detail and provides the memory analysis for each step. As shown, our algorithm can compute the exact chunk gradients from an entire batch of N examples, with a peak memory usage of Θ(M · max {Mem(I), Mem(T)}) instead of Θ(N · (Mem(I) + Mem(T))). We note that our algorithm can be flexibly modified to work with different chunk sizes M for the image network I and the text network T. This flexibility allows for more efficient computation, e.g., when one network is smaller than the other and can therefore operate with larger chunks.
Algorithm 1: GradAccum with re-materialization for the contrastive loss. Here, M is the largest in-memory batch size, L_c is the contrastive loss in Equation 1, and the algorithm yields the gradients ∇_θ L_c for the N/M chunks of the minibatch.

1. Allocate the embedding matrices X, Y ∈ R^{D×N}, where D is the embedding size. [Θ(ND) memory]
2. For i = 1 to N/M, sequentially compute the embeddings for the chunks of images and text sequences, without saving the activations of I and T:
   a. Let J ← {j : (i−1)M + 1 ≤ j ≤ iM}.
   b. X_{:,J} ← I(x_J) and Y_{:,J} ← T(y_J).
3. From X and Y, compute the similarity matrix A, the contrastive loss L_c in Equation 1, and the gradient ∇_A L_c.
4. For each chunk, repeat a forward pass on I, T to re-materialize the activations, then back-prop and accumulate the corresponding chunk of ∇_A L_c into the weights of I and T.
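To make the chunked procedure concrete, here is a toy NumPy sketch with linear encoders and a rows-only contrastive loss; both simplifications are ours, chosen so the gradients have a closed form. It checks that accumulating the per-chunk gradient contributions reproduces the full-batch gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, D, d = 8, 2, 4, 5        # batch size, chunk size, embed dim, input dim
tau = 0.5
x = rng.normal(size=(d, N))    # toy "images"
y = rng.normal(size=(d, N))    # toy "texts"
W_I = rng.normal(size=(D, d))  # linear stand-in for the image network I
W_T = rng.normal(size=(D, d))  # linear stand-in for the text network T

# Forward pass: all embeddings and the similarity matrix A.
X, Y = W_I @ x, W_T @ y        # (D, N)
A = X.T @ Y / tau              # (N, N)

# Gradient w.r.t. A for a rows-only cross-entropy loss with diagonal
# targets; the 1/tau factor from A = X^T Y / tau is folded in.
P = np.exp(A - A.max(axis=1, keepdims=True))
P /= P.sum(axis=1, keepdims=True)
dA = (P - np.eye(N)) / (N * tau)

# Reference: full-batch gradient w.r.t. W_I, computed in one shot.
full_grad = (Y @ dA.T) @ x.T

# Chunked accumulation: process M examples at a time (a nonlinear
# encoder would re-run its forward pass here to re-materialize
# activations; a linear one only needs the inputs x[:, J]).
acc_grad = np.zeros_like(W_I)
for i in range(N // M):
    J = slice(i * M, (i + 1) * M)
    dX_J = Y @ dA[J, :].T      # chunk of dL/dX, shape (D, M)
    acc_grad += dX_J @ x[:, J].T

assert np.allclose(full_grad, acc_grad)
```

The key point mirrors the text: ∇_A L_c is computed once from the full matrix A, and only its chunks are then pushed back through the encoders.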
Accumulating the chunk gradients. Algorithm 1 yields a stream of chunk gradients c_1, ..., c_{N/M}, which need to be accumulated, i.e., averaged, into ḡ to perform the batch weight update. As discussed, we want to avoid allocating extra memory for ḡ. To do this, we need two assumptions about our training implementation. Our first assumption is that we use an optimizer which involves gradient moments [41,53,58,69,83]. This assumption motivates our idea for avoiding the allocation of ḡ: since the optimizer already allocates memory for the gradient moments, typically called slots, we directly accumulate the chunk gradients c_i into these slots.
We illustrate this idea with Adam [41], a popular optimizer that involves two gradient moments. At training step t, Adam receives the averaged minibatch gradient ḡ and makes the following updates to its gradient moments v_1 and v_2 (squaring is elementwise):

v_1 ← β_1 v_1 + (1 − β_1) ḡ
v_2 ← β_2 v_2 + (1 − β_2) ḡ^2

Accumulating the chunk gradients c_i into v_1 is straightforward. We can simply replace v_1's single update with ḡ by K = N/M updates as follows:

v_1 ← β_1 v_1;  then for i = 1, ..., K:  v_1 ← v_1 + (1 − β_1) c_i / K

Unfortunately, the same approach is not applicable for v_2, as the square of the mean is generally different from the mean of the squares, i.e., ((1/K) Σ_i c_i)^2 ≠ (1/K) Σ_i c_i^2. However, the difference between these two quantities turns out to be the elementwise variance of the chunk gradients:

(1/K) Σ_i c_i^2 − ḡ^2 = Var_i(c_i)

which we can estimate. Indeed, since each c_i is the mean of the M per-example gradients g_j in the i-th chunk, we can treat c_i as the sample mean of M observations drawn from a random variable g ∼ Uniform{g_1, ..., g_N}.

This treatment allows us to use the familiar identity relating the variance of a sample mean to the variance of the underlying variable:

Var(c_i) = Var(g) / M
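A small NumPy check of the two facts above, with toy vectors standing in for the chunk gradients c_i: the K-step accumulation reproduces the single v_1 update exactly, while for v_2 the gap between the mean of the squares and the square of the mean is exactly the variance of the c_i's.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 4
c = rng.normal(size=(K, 6))     # chunk gradients c_1, ..., c_K
g_bar = c.mean(axis=0)          # averaged minibatch gradient
beta1 = 0.9
v1 = rng.normal(size=6)         # first-moment slot

# Single update with g_bar vs. K sequential chunk updates.
v1_single = beta1 * v1 + (1 - beta1) * g_bar
v1_chunked = beta1 * v1.copy()
for c_i in c:
    v1_chunked += (1 - beta1) * c_i / K
assert np.allclose(v1_single, v1_chunked)

# For v2: mean of squares minus square of mean = variance of the c_i's.
diff = (c ** 2).mean(axis=0) - g_bar ** 2
assert np.allclose(diff, c.var(axis=0))
```

The second assertion is the identity that makes the v_2 correction estimable from chunk statistics alone.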

Larger image-text dataset
Starting from the ALIGN dataset, which contains 1.7B weakly-aligned image-text pairs [37], we collect 5B more image-text pairs, hence expanding the dataset size by roughly 4 times. We acquire these 5B image-text pairs from the JFT dataset. In the JFT dataset, each image is associated with one or multiple classes. We convert these classes into a text sequence: "{class_1} and {class_2} and ... and {class_k}". We combine the instances from JFT into ALIGN, forming our extended dataset, which we denote by ALIGN+JFT.
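The class-to-text conversion above is a simple string join; the class names here are hypothetical examples, not actual JFT labels.

```python
def classes_to_text(class_names):
    """Turn a JFT-style label set into one caption-like text sequence,
    following the "{class_1} and {class_2} and ... and {class_k}" template."""
    return " and ".join(class_names)
```

For example, `classes_to_text(["dog", "grass", "frisbee"])` produces `"dog and grass and frisbee"`.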
To tokenize the texts from ALIGN+JFT, we randomly sample 200M sentences and use them to train a SentencePiece model [45] with a vocabulary size of 32K pieces. Using this tokenizer, we filter out and discard the text sequences which are longer than 64 tokens. In our preliminary experiments, we find that using a tokenizer learned directly from ALIGN+JFT, together with this filtering step, can boost our zero-shot top-1 accuracy on ImageNet ILSVRC-2012 by more than 1%.

Larger Model Architectures
We find that for the same computational budget, it is more beneficial to invest in scaling up the image encoder rather than the text encoder. Thus, for our image encoder, we use the largest CoAtNet architecture [17] due to its proven large learning capacity. This network has convolution layers followed by attention layers. For our text encoder, we use a simple transformer [86]. Unlike ALIGN [37], which extracts the final text representations from a [CLS] token similar to BERT [19], we average the representations across all token positions at the top layer of our transformer.
By experimenting with the scaling benefits for small models and generalizing these findings to larger models, we choose three model sizes, termed BASIC-{S,M,L} for Small, Medium, and Large. In Appendix A, we report our architectures and their computational costs and provide a small-scale study on the effects of scaling model sizes.

Pretraining and Finetuning
To further speed up the training of our networks, we make use of pretraining. In our experiments, we first pretrain the image encoder on a large labeled dataset using the standard softmax classification loss. After pretraining the image encoder, we fix all of its weights and just train the text encoder using contrastive learning. Compared to contrastive learning with GradAccum, the pretraining-finetuning procedure is much more efficient in terms of peak memory usage. This is because we never have to compute the gradients of both the image encoder and the text encoder, which allows automated compiler optimizations to free up unused memory on-the-fly.
Despite its reduced memory usage, we find that this pretraining-finetuning scheme has a weakness: it never exposes the image encoder to noisy image-text data, which makes the image encoder fail on certain tasks. For instance, while some pretrained-and-finetuned models achieve similar accuracy to their contrastive counterparts on ImageNet or CIFAR, they completely fail on an easier task - MNIST. This is because our labeled pretraining dataset, which mostly consists of natural images, has very few digit images. Meanwhile, our noisy image-text dataset has plenty of instances that can teach a model certain optical character recognition skills.
As will be shown in Section 6, our best experimental results are achieved using a hybrid procedure. First, we pretrain the image encoder on a large labeled dataset, then fix its weights and train the text encoder using the contrastive loss on our image-text dataset. Finally, we finetune both image and text encoders, using our GradAccum technique when needed. In Section 7, we present ablation studies to analyze the effects of pretraining, finetuning, and other alternative training procedures.
Experiments

Training details

Labeled data for pretraining. For pretraining (Section 5), we use the JFT dataset. This dataset has been used in previous publications [20,43,98], but it has been constantly expanded. The JFT version used in our experiments has 5B images, each of which can be associated with one or multiple labels out of 29K possible classes.
Data filtering. A problem with training on large auto-curated datasets like ALIGN and JFT is that these datasets might unintentionally contain examples from our test sets. To avoid such contamination, we filter out all instances in our training data that have a structural similarity index (SSIM [90]) of at least 0.5 with any image from our evaluation benchmarks.
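A minimal sketch of this filtering step, using a simplified single-window SSIM (real SSIM, including [90], is computed over local windows; the constants below are the usual defaults for 8-bit images):

```python
import numpy as np

def global_ssim(x, y, L=255.0):
    """Simplified single-window SSIM between two grayscale images."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

def keep_example(train_img, eval_imgs, threshold=0.5):
    """Drop a training image if it is too similar to any eval image."""
    return all(global_ssim(train_img, e) < threshold for e in eval_imgs)
```

An exact duplicate scores SSIM 1.0 and is dropped; unrelated images score far below the 0.5 threshold and are kept.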
Optimizer. We train our models with our own optimizer called AdaFactorW, adapted from two existing ones: AdaFactor [69] and AdamW [53]. Specifically, we factorize our second gradient moments like AdaFactor, and decouple the weight decay from all moments like AdamW. To further save memory, we follow Zhai et al. [98] and store the first gradient moments in bfloat16. We observe, however, that while we can store these moments in bfloat16, we need to convert them into float32 prior to computing our weight updates to avoid numerical instability.
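The moment-storage trick looks like the following sketch. This is not the AdaFactorW implementation; it only illustrates storing a moment in low precision and upcasting before the update. NumPy has no bfloat16, so float16 stands in for it here.

```python
import numpy as np

def update_with_low_precision_moment(m_stored, grad, beta1=0.9, lr=1e-3):
    """One first-moment update with the moment stored in low precision.

    m_stored: float16 buffer (a stand-in for bfloat16 storage).
    The moment is cast to float32 before any arithmetic, the weight
    update is computed in float32, and the moment is cast back for
    storage. Skipping the upcast risks numerical instability.
    """
    m = m_stored.astype(np.float32)                      # upcast first
    m = beta1 * m + (1 - beta1) * grad.astype(np.float32)
    delta = -lr * m                                      # update in float32
    return m.astype(np.float16), delta
```

Only the stored buffer is low precision; every arithmetic step happens in float32.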
Other hyperparameters. For all experiments, we train and evaluate at an image resolution of 224x224. While we could increase this resolution to gain performance [37,64,78,79,84], we choose not to do this and instead reserve our computational resources for scaling up our model and batch size. All our other hyper-parameters can be found in Appendix B.

Results on Image Classification Benchmarks
We first present the zero-shot transfer performance of our models. We compare our models, BASIC-{S,M,L}, to CLIP models with similar computational budgets [64] on 17 natural image classification datasets. Details about these datasets can be found in Appendix C.

Zero-shot transfer models require textual prompts, which we take from CLIP [64] for a consistent comparison. We suspect that using prompts tuned for our models can further improve our results, as shown in [49], because the text sequences in our training data have a different distribution from the text sequences in CLIP. Table 2 shows the comparison. From the table, it can be seen that BASIC models conclusively outperform CLIP models of the same computational budgets. Specifically, BASIC models demonstrate higher accuracy than CLIP models on 13 out of 17 datasets. On the Oxford-IIIT Pets dataset, BASIC-L achieves 97.9% mean per-class recall, which sets a new state-of-the-art, despite having never seen any training images from the dataset. On the other hand, BASIC models have low accuracy on EuroSAT, MNIST, and PCam. BASIC models perform worst on MNIST, where the highest accuracy is only 40.3%. We discuss these failure cases further in Section 8.
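For clarity, prompt-based zero-shot classification works as follows. This is a toy sketch: the embeddings are stand-ins for the outputs of I and T, and the prompt templates (e.g. "a photo of a {class}") are only an illustration.

```python
import numpy as np

def zero_shot_predict(img_emb, class_prompt_embs):
    """Classify an image by cosine similarity to averaged prompt embeddings.

    img_emb: (D,) image embedding.
    class_prompt_embs: list of (P_c, D) arrays, one per class, holding
        the text embeddings of that class's prompts.
    Returns the index of the best-matching class.
    """
    img = img_emb / np.linalg.norm(img_emb)
    scores = []
    for embs in class_prompt_embs:
        proto = embs.mean(axis=0)          # average over the class's prompts
        proto /= np.linalg.norm(proto)     # back onto the unit sphere
        scores.append(float(proto @ img))  # cosine similarity
    return int(np.argmax(scores))
```

No labeled examples are used at any point; the class "weights" are just averaged text embeddings.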

Results on Robustness Benchmarks
Despite the convincing accuracy of modern deep learning models on ImageNet, concerns have been raised about their robustness [77]. These concerns arise from a common failure mode of ImageNet-trained models: subtle changes to their input images, which are imperceptible to humans, can wildly alter their predictions with high confidence, e.g., from "golden retriever" into "goldfish".
In CLIP, Radford et al. [64] have studied certain aspects of this failure mode. They have not drawn a definitive conclusion whether to attribute such failures to deep learning, ImageNet, or a combination of them. Instead, they cautioned against generalizing "too far from [their] initial findings".
Here we advance CLIP's study on the robustness of zero-shot models in two aspects. First, we analyze our BASIC models presented previously in Section 6.2 and reaffirm that zero-shot models are indeed more robust than their ImageNet-trained counterparts. Second, we perform an experiment which suggests that ImageNet's labeled training examples might be responsible for making ImageNet-trained models less robust. Similar to CLIP's authors, we caution readers that our experiment presents a correlation, not a causal analysis. In other words, we do not attribute the lack of robustness in ImageNet-trained models to the dataset.
More accurate zero-shot transfer models are also more robust. We evaluate BASIC-{S,M,L} models from Section 6.2 on 5 robustness benchmarks derived from ImageNet: ImageNet-A [31], ImageNet-R [30], ImageNet-V2 [65], ImageNet-Sketch [89], and ObjectNet [4]. These benchmarks have images in all or a subset of the 1000 ImageNet classes, but their inputs are selected from certain natural distribution shifts, which can cause ImageNet-trained models to make many more mistakes. Our numerical results are highlighted in Table 1 from Section 1. To visualize the data trend, in Figure 1, we plot the accuracy of zero-shot models -BASIC, CLIP [64], and ALIGN [37] -and of 200 ImageNet-trained models collected by Taori et al. [81].
The data points from our BASIC models extend the prediction from CLIP: zero-shot transfer models have a higher effective robustness [64,81], i.e., they have higher robustness than ImageNet-trained models with the same ImageNet accuracy. Extrapolating from this trend, we fit a logistic curve (red dashes) to the zero-shot accuracy and robustness of zero-shot transfer models. The plot shows that this curve meets the ideal robustness line at about 91% on the x-coordinate. In other words, our plot predicts that a model which achieves about 91% zero-shot accuracy on ImageNet, i.e., just slightly better than the state-of-the-art ImageNet-trained model [17], will also achieve the ideal robustness.
ImageNet-finetuned models are less robust. We now study the effect of ImageNet's labeled data on our models. We take the converged BASIC-{S,M,L} checkpoints from Section 6.2 and continue to train them on 1%, 10%, 20%, and 50% of ImageNet's labeled examples. Note that we continue training these checkpoints using the contrastive loss, where the names of the ImageNet classes are used as the text sequences accompanying their images. This is different from CLIP's linear probing approach, which we do not perform in order to avoid potential confounding factors in our study, e.g., linear classifiers might behave differently from our zero-shot transfer classifiers. We then compare the accuracy of these finetuned models on ImageNet and on the 5 robustness benchmarks. The results are visualized in Figure 2.
The figure shows a clear trend: as our models learn from more labeled ImageNet data, they become more accurate on ImageNet, but these gains do not carry over to the robustness benchmarks. Specifically, with the exception of ImageNet-V2, for which the accuracy of the finetuned models stays the same (for BASIC-L) or slightly increases (for BASIC-M), on all other robustness benchmarks the finetuned models suffer significant performance drops. In the extreme case, a 3% accuracy gain on ImageNet leads to an 8.3% accuracy drop on ImageNet-R.
What makes our finetuned models less robust? A quick glance at our results might lead to the superficial conclusion that our models have overfit, as our finetuning sets are a lot smaller than ALIGN and JFT. However, this overfitting theory does not explain the trend observed in Figure 2: training on more labeled ImageNet data makes our models less robust. We hope our observation invites further causal analysis on the effects of ImageNet's labeled data.

The Importance of Batch Size Scaling
To demonstrate the role of large batch sizes, we conduct several controlled experiments for BASIC-S and BASIC-M on ALIGN. For both BASIC-S and BASIC-M, we fix all hyperparameters as shown in Table 5, but vary the batch size and the number of training steps. Models trained with larger batch sizes are trained for fewer steps to guarantee that they "see" the same number of examples. Table 3 presents the ImageNet top-1 zero-shot accuracy of all models at the end of their training, and Figure 3 visualizes their entire validation accuracy curves.

Table 3: Top-1 ImageNet accuracy at the end of training for our BASIC-{S,M} models trained with different batch sizes and numbers of training steps. All models are trained for the same number of epochs, but models trained with larger batch sizes have higher accuracy.

Table 3 and Figure 3 both suggest that training for more steps cannot equalize the benefit of large batch sizes. This phenomenon is consistent with the observation from SimCLR [10,11]: large batch sizes help contrastive learning. SimCLR observes that the benefit of large batch sizes saturates at 8192. In contrast, our results in Table 3 and Figure 3 show that larger batch sizes continue to benefit our models up to 32768, and even up to 65536 as in Section 6.2. We suspect that the benefits of large batch sizes do not saturate because our dataset size and model size are both larger than those of SimCLR.
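The controlled setup fixes the total number of examples seen, so the step counts scale inversely with batch size. The helper below illustrates the arithmetic with round numbers of our own choosing, not the paper's actual schedule.

```python
def steps_for_equal_epochs(total_examples_seen, batch_size):
    """Number of training steps so that every run sees the same examples."""
    assert total_examples_seen % batch_size == 0
    return total_examples_seen // batch_size
```

Halving the batch size doubles the number of steps, keeping the number of epochs fixed.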

Data Scaling, Model Scaling, and Pretraining
We now study the benefits of the other scaling dimensions, data and model scaling, on the quality of our models. We also study pretraining as an alternative training procedure to contrastive learning. We train BASIC-{S,M} models in 6 different settings and plot their final top-1 ImageNet accuracy in Figure 4. Below, we compare and analyze the settings.
First, BASIC-S and BASIC-M respectively gain 5.3% and 5.8% accuracy when we expand the contrastive training dataset from ALIGN to ALIGN+JFT. These gains, albeit large, are smaller than the gain from enlarging the model size, e.g., 11.7% when going from BASIC-S to BASIC-M.

Figure 2: Top-1 accuracy of BASIC models on ImageNet and on 5 robustness benchmarks. In all cases, as the BASIC models are trained on more ImageNet labeled data (1%, 10%, 20%, and 50%), their ImageNet accuracy significantly increases, but their accuracy on the robustness benchmarks increases much less, or decreases.

Next, we study the effects of pretraining image encoders on JFT. As can be seen from Figure 4, models whose image encoders are pretrained on JFT and whose text encoders are subsequently trained on ALIGN, i.e., the red bars, have similar performance to models trained from scratch on ALIGN+JFT, i.e., the blue bars. Their similar accuracy suggests that the training loss, softmax cross-entropy or contrastive, has a much smaller effect than the dataset. In other words, when given the same dataset, the image encoders in BASIC models learn to become equally good, regardless of their loss functions.
To our surprise, training the text encoders of JFT-pretrained image encoders on ALIGN+JFT gains 1% for BASIC-S and 1.8% for BASIC-L, compared to training these text encoders on ALIGN. We suspect that these gains come from better representations for the textual prompts, since the models trained on ALIGN+JFT also see textual prompts which consist of clean JFT class names. However, this speculation warrants a more thorough study.
Finally, we take a converged model whose image encoder is pretrained on JFT and whose text encoder is trained on ALIGN+JFT, and continue to train both its image and text encoders at a small learning rate. This extra training phase gains us 1.4% ImageNet accuracy for BASIC-S, 0.6% for BASIC-M, and 0.4% for BASIC-L (not shown in this section).

Limitations
Despite the strong results of our zero-shot transfer models, especially on natural image classification tasks, they inevitably have their shortcomings. In this section, we discuss the problems that we find with our BASIC models.

Zero-shot transfer models do not perform well on test sets that are underrepresented in the training datasets. We emphasize the failures of BASIC on three test sets where BASIC models are much worse than CLIP models: EuroSAT, MNIST, and PatchCamelyon (PCam) (see Table 2 from Section 6.2). In summary, BASIC models fail on MNIST and PCam because our training datasets, ALIGN and JFT, have relatively few images of handwritten digits and of lymph nodes, which are the domains of these test sets. Compared to MNIST and PCam, BASIC models do better on EuroSAT, which consists of satellite land images, but their accuracy is still lower than that of CLIP models. This is because the class names for these satellite images are not very descriptive to BASIC models. More analysis of these failures is in Appendix G.

Zero-shot transfer learning requires prompt engineering. In this paper, we use the prompts from CLIP [64] to make our results comparable to previous works. In Appendix G, we present examples which show that badly chosen or adversarially chosen prompts can hurt the accuracy of zero-shot transfer models by flipping their predictions. These examples suggest that prompt engineering is an important research topic for making zero-shot models robust and reliable, but the topic is out of the scope of this paper.

Combined scaling is expensive. As reported in Appendix E, the hardware and training time for our models are not small. Despite the training cost, we can use the models in this paper without any finetuning, and hence avoid the finetuning cost. We hope that future research can reduce our models' training expense, e.g., larger accelerator memory can save the extra re-materialization steps in our algorithm (see Section 3).

Conclusion
Zero-shot transfer learning represents a new paradigm where pretrained models can be used directly for downstream applications without collecting any application-specific data.
However, in order to become practical for real-world applications, zero-shot transfer models need to bridge the accuracy gap to supervised and semi-supervised models.
In this paper, we presented combined scaling techniques that significantly boost the performance of zero-shot image classification models. We show that scaling up the data size, the model size, and the batch size all improve the final model's accuracy and robustness. To overcome the memory limit arising from combined scaling, we devise a simple gradient accumulation method based on re-materialization.

A. Model sizes
In our preliminary experiments, we experimented with different model sizes. Table 4 presents the final, most compute-to-performance efficient model sizes, which we use throughout the paper.

[Table 4: image model and text model size specifications [17].]

B. Hyperparameters and other implementation details
Our training and evaluation code will eventually be released. Here, we summarize a few important details. All of our hyper-parameters are in Table 5.
No regularization. Other than the decoupled weight decay in AdaFactorW, we do not use any other regularization technique. In fact, we find that with BASIC-S and BASIC-M, if we add other forms of regularization such as stochastic depth [34] or dropout [75], our ImageNet top-1 accuracy drops substantially. This suggests that our datasets are very large, and that in such a situation, regularization techniques do more harm than good by causing optimization difficulties for our models.
Another important effect of not using regularization in our training framework is to keep the re-materialization steps in Section 3.2 consistent. If we apply random perturbations to our forward passes, e.g., by skipping layers as in stochastic depth or by setting random values to zero, then the two forward passes for re-materialization (see Lines 2-5 and 11-14 in Algorithm 1) will compute different results. While we could treat such differences as a form of regularization noise, our early experiments show that with dropout-like regularization, our training loss stays relatively large throughout the course of training. This observation suggests that the noise causes some optimization difficulty for our models, so we opt not to use any dropout-like regularization.
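To illustrate why stochastic regularization conflicts with re-materialization, here is a minimal, hypothetical sketch (not our actual TensorFlow implementation): a deterministic forward pass can be recomputed exactly during the second pass, while a dropout-like pass generally cannot.

```python
import random

def forward(x, drop_rate=0.0, rng=None):
    """Toy 2-layer forward pass with optional dropout-like noise."""
    h = [2.0 * v for v in x]                                  # layer 1
    if rng is not None and drop_rate > 0.0:
        h = [0.0 if rng.random() < drop_rate else v for v in h]
    return sum(h)                                             # layer 2 (reduce)

x = [1.0, 2.0, 3.0]

# Deterministic network: the re-materialized pass reproduces the first pass,
# so gradients computed against recomputed activations are exact.
assert forward(x) == forward(x)

# With dropout-like noise, two independent passes generally disagree, so the
# activations recomputed during re-materialization would not match the ones
# that produced the loss.
rng1, rng2 = random.Random(0), random.Random(1)
print(forward(x, 0.5, rng1), forward(x, 0.5, rng2))
```

One could make the noisy passes consistent by replaying the same random seed in both passes, but as noted above we simply avoid dropout-like regularization altogether.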

C. Evaluation Datasets Details
Here, we present the details of the datasets which we use to evaluate our BASIC models in Section 6.2. It is worth noting that not all these datasets use accuracy as the performance metric. This is because some of these datasets have a certain level of imbalance between their classes, as well as other properties that make accuracy not the most suitable metric for them. For instance, the dataset Caltech-101 has a class called "Background" which refers to any image that does not belong to its predefined 101 classes. One certainly cannot come up with a textual description for this "class". As such, Caltech-101 is evaluated using mean per-class recall. Details about the other datasets are in Table 6.

Table 6: Details of the datasets used in this paper to evaluate BASIC models. The evaluation results are presented in Table 1 and Table 2.
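As a side note, the mean per-class recall metric mentioned above can be sketched in a few lines; this toy Python snippet (not our evaluation code) shows how it differs from plain accuracy on an imbalanced dataset.

```python
from collections import defaultdict

def mean_per_class_recall(y_true, y_pred):
    """Average, over classes, of the fraction of each class's examples
    predicted correctly. Unlike plain accuracy, every class contributes
    equally regardless of how many examples it has."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += int(t == p)
    return sum(correct[c] / total[c] for c in total) / len(total)

# Imbalanced toy labels: class 0 has 8 examples, class 1 has 2.
y_true = [0] * 8 + [1] * 2
y_pred = [0] * 10                 # a degenerate majority-class predictor
# Plain accuracy is 0.8, but mean per-class recall is (1.0 + 0.0) / 2 = 0.5,
# exposing the failure on the minority class.
```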

D. Further Discussion on Robustness
In Section 6.3, we present a surprising result: finetuning converged BASIC checkpoints on more ImageNet labeled data leads to worse robustness results. The metric for robustness in Section 6.3 is the average top-1 accuracy of the finetuned models on 5 robustness benchmarks derived from ImageNet [4,30,31,65,89]. It turns out that each of these benchmarks can demonstrate slightly different results for the finetuned models. Here, we discuss such benchmarks.
ImageNet-V2 [65]. This dataset is collected in a process that closely follows the one used to collect and annotate the images in the standard ILSVRC-2012 validation set, which is typically referred to as "ImageNet" in the literature (and in our paper as well). As such, gains observed on ImageNet often transfer to ImageNet-V2. Recent works such as EfficientNets [78,79] and ViT [20] also demonstrate a similar trend. For our experiment in Section 6.3, BASIC-M's robustness accuracy improves along with its ImageNet accuracy, following this trend. However, BASIC-L's robustness does not. We suspect this is because BASIC-L's learning capacity is larger than that of BASIC-M, so BASIC-L picks up more "spurious" patterns from ImageNet, making it less robust than BASIC-M.
ImageNet-R [89]. ImageNet-R is a special robustness dataset in our study. Not only our BASIC models but also the other zero-shot models, CLIP and ALIGN, are more accurate on ImageNet-R than they are on ImageNet (see Table 1). These data points alone would suggest that ImageNet-R is somewhat easier than ImageNet, until we look at the significant accuracy drops for other methods on ImageNet-R. For instance, Noisy Student [95] and Meta Pseudo Labels [63] achieve only 74.9% and 72.7% accuracy on ImageNet-R, despite their respective accuracies of 88.4% and 90.2% on ImageNet ILSVRC-2012. The real reason for this discrepancy is that ImageNet-R is collected by selecting the ImageNet classes from visual art pieces, such as paintings, cartoons, graffiti, origami, and sculptures. These art pieces are often displayed in clean environments, free of noise such as multiple classes per image, which makes the images easier to recognize. As such, BASIC, CLIP, and ALIGN all perform better on ImageNet-R. However, ImageNet-R images have a drastically different distribution from ImageNet labeled training images, as they are art images and natural images respectively. This is why ImageNet-trained models display a much lower accuracy on ImageNet-R, compared to zero-shot models.
The case of ObjectNet [4]. From Table 1, it can be seen that BASIC's improvement over ALIGN and CLIP on ObjectNet is significantly lower than on other benchmarks, i.e., 6.6% compared to more than 8% (except for ImageNet-R, where the accuracies of all models are saturated at over 90%). We find the reason is that, even though ObjectNet has images from the same classes as ImageNet, these objects have their own more descriptive names, e.g., the class name "chairs" in ImageNet could be "chairs by [viewpoint]" or "chairs with [background]" in ObjectNet. As we later show in Section G, using different class names and prompts can affect our results. This effect has also been observed in CLIP [64]. Here, we take the same class names and prompts for ImageNet and use them for ObjectNet. We suspect that using ObjectNet-specific class names and prompts can improve our result.

E. Computational Cost
All of our models are implemented in TensorFlow [1] and trained on Tensor Processing Units (TPUs [39]). Our BASIC-S and BASIC-M models are all trained on TPUv3 chips, while our BASIC-L models are trained on TPUv4 chips. These TPUv4 chips in their MegaCore mode offer 32GB of memory, of which our BASIC-L models use 30.1GB. We note that oftentimes, a small portion of TPU memory needs to be reserved for low-level infrastructure, so our BASIC-L models essentially saturate the accelerators with the largest memory currently available. Given this memory usage, we use Algorithm 1 with the chunk size M = 8192 and the batch size N = 65536 to train this model.
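Algorithm 1 itself is not reproduced in this appendix, but the core idea behind chunk-wise gradient accumulation can be sketched on a simple separable loss. Note that the contrastive loss is not separable across examples, which is precisely why Algorithm 1 additionally caches full-batch embeddings via re-materialization; the names M and N below mirror the chunk and batch sizes above, and everything else is illustrative.

```python
import numpy as np

def loss_and_grad(w, x, y):
    """Mean squared error loss of a linear model and its gradient w.r.t. w."""
    err = x @ w - y
    return float(np.mean(err ** 2)), (2.0 / len(x)) * (x.T @ err)

rng = np.random.default_rng(0)
N, M, D = 64, 8, 4                    # batch size, chunk size, feature dim
x, y = rng.normal(size=(N, D)), rng.normal(size=N)
w = rng.normal(size=D)

# Full-batch gradient, which a memory-limited accelerator may not afford.
_, g_full = loss_and_grad(w, x, y)

# Chunked accumulation: each chunk's gradient is for the chunk-mean loss,
# so weighting by M / N recovers the full-batch mean gradient exactly.
g_acc = np.zeros(D)
for i in range(0, N, M):
    _, g = loss_and_grad(w, x[i:i + M], y[i:i + M])
    g_acc += g * (M / N)

assert np.allclose(g_full, g_acc)
```

For a separable loss the equivalence is exact, as the assertion checks; the contrastive case requires the extra embedding-caching and re-materialization machinery of Algorithm 1 on top of this accumulation scheme.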

F. Qualitative Analysis: Successful Classification Examples
Zero-shot transfer image classification models open the door to versatile applications. This section is dedicated to demonstrating their versatility. In Figure 5, we visualize some predictions of our best model, BASIC-L, on instances that are less typical of traditional image classification benchmarks. We come up with the text sequences and demonstrate that the model can indeed align images to the most appropriate sequence.

G. Failure Analysis
Most machine learning models fail in certain tests. It is important to identify such failure cases, to understand their causes, and, if possible, to come up with fixes. Here, we first look at the test benchmarks in Table 2 from Section 6.2 where BASIC models perform worse than CLIP models. We identify the causes of failure for BASIC models and recommend certain fixes that can improve their performance. Then, in Section G.2, we present some erroneous behaviors of BASIC models via selected examples. These examples reveal some weaknesses of zero-shot transfer image-text models, and invite future research to improve them.

G.1. The benchmarks where BASIC fails
From Section 6.2, we see that BASIC models have particularly low performance on EuroSAT [29], MNIST [47], and Patch Camelyon [87]. The accuracy of BASIC-L on these datasets is 51.0%, 40.3%, and 59.6%, respectively. For what it's worth, BASIC-L's accuracies are better than those of our smaller models, i.e., BASIC-S and BASIC-M, so our central message in this paper, that scaling helps, is not altered. Here, we focus on analyzing the failures of BASIC-L.
Patch Camelyon (PCam). PCam is perhaps the most sensitive dataset among the three benchmarks where BASIC-L performs poorly. This dataset consists of images extracted from histopathologic scans of lymph node sections, and models are asked to make a binary prediction: whether or not an input image contains a cancerous lymph node. For such an important task, the top-1 accuracies of both BASIC-L (59.6%) and CLIP (63.0%) are far below the bar for practical deployment. We remark that PCam is a binary classification task, so the accuracies of BASIC-L and CLIP are just slightly above random guessing. Their poor performance, however, is quite understandable: classifying lymph nodes requires much more specific training than classifying common natural images. As our training data are weakly crawled and automatically curated from the internet, without any emphasis on medical images, our BASIC-L model cannot learn enough to perform well on PCam. We suspect the same holds for CLIP, as their data collection and curation process is comparable to ours. Finally, the low accuracy of CLIP and BASIC models on PCam confirms that despite the benefits of zero-shot transfer models, they are not ready to be deployed to tasks that require in-domain expertise, e.g., medical knowledge.
EuroSAT. This dataset consists of satellite images taken of certain types of land. Models are asked to classify each input image into one of 10 given land types; the land types can be seen in Figure 6. The failure of BASIC-L on EuroSAT is an example of the importance of prompt engineering in zero-shot transfer learning for image-text models. In Figure 6, we show that by changing the dataset's class names and the model's set of prompts into words and phrases that essentially have the same meaning to humans, we can improve the accuracy of BASIC-L from 51.0% to 55.7%. We do not further explore changes in class names and prompts to improve BASIC-L's performance on EuroSAT, as they belong to a different topic from the focus of this paper, combined scaling. However, our findings on the EuroSAT dataset suggest that contrastive image-text models do not really "understand" texts. This is perhaps because of the low quality of the texts in our training data, unlike the millions of words from books and articles in the training data of NLP models such as BERT [19].
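To make the role of prompts and class names concrete, here is a minimal sketch of prompt-based zero-shot classification: each class is scored by the cosine similarity between the image embedding and the averaged text embedding of the class name inserted into every prompt template. The hash-based toy text encoder below is purely hypothetical, a deterministic stand-in for a trained encoder; only the scoring logic reflects the standard CLIP-style procedure.

```python
import hashlib
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def zero_shot_predict(image_emb, class_names, prompts, embed_text):
    """Score an image against each class by averaging the text embeddings
    of that class name inserted into every prompt template."""
    class_embs = []
    for name in class_names:
        texts = [p.format(name) for p in prompts]
        class_embs.append(normalize(np.mean(
            [embed_text(t) for t in texts], axis=0)))
    sims = np.stack(class_embs) @ normalize(image_emb)
    return int(np.argmax(sims))

def toy_embed_text(text):
    """Hypothetical stand-in for a trained text encoder (hash-seeded)."""
    seed = int.from_bytes(hashlib.md5(text.encode()).digest()[:4], "little")
    return normalize(np.random.default_rng(seed).normal(size=16))

prompts = ["a photo of {}.", "a satellite photo of {}."]
classes = ["annual crop land", "a forest", "a highway or a road"]
img = toy_embed_text("a satellite photo of a forest.")  # pretend image embedding
pred = zero_shot_predict(img, classes, prompts, toy_embed_text)
```

Because the prediction is an argmax over text-side similarities, rewording a class name or a prompt can change the winner even when the image embedding is fixed, which is exactly the sensitivity discussed above.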
MNIST. MNIST is a classical dataset in computer vision for handwritten digit classification. Simple models can achieve more than 99.5% accuracy, and yet BASIC-L achieves a humble 40.3% accuracy. Unlike the case of PCam, where our training dataset lacks relevant images, for MNIST we find that the ALIGN dataset has a fair amount of images that contain digits, either handwritten or printed. This means that the image encoder of BASIC-L has seen digit figures, and suggests that the failures might be more attributable to the text encoder, similar to the case of EuroSAT. In Figure 7, we show the confusion matrices of BASIC-L models with three sets of class names: the digits such as {'0', '1', ...}, the texts such as {'one', 'two', ...}, and both such as {'0 or zero', '1 or one', ...}. Unfortunately, we cannot improve BASIC-L's accuracy on MNIST as we did for EuroSAT: BASIC-L's accuracy is low in all three cases, but the confusion matrices are visibly different: BASIC-L 'thinks' that many digits look like '3' with the digit-only class names, but that many digits look like '1 or one' with the digit-and-text class names. Humans who understand language would not make these mistakes. We think these mistakes constitute a new type of robustness failure, which we hope will invite further research.
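The confusion matrices referenced above can be computed in a few lines; this sketch uses a hypothetical collapsed classifier to reproduce the qualitative pattern where most predictions fall onto a single class.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes):
    """cm[i, j] counts examples of true class i predicted as class j."""
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Hypothetical digit classifier that collapses every input onto one class,
# analogous to the "many digits look like '3'" behavior described above.
y_true = list(range(10)) * 2          # two examples per digit
y_pred = [3] * len(y_true)            # everything becomes '3'
cm = confusion_matrix(y_true, y_pred, 10)

# All the mass sits in column 3 instead of on the diagonal.
assert cm[:, 3].sum() == len(y_true)
assert np.trace(cm) == 2              # only the true '3's are correct
```

In such a matrix the dominant column immediately reveals which class name the model's text embedding is attracting everything toward, which is how the digit-only versus digit-and-text discrepancy was diagnosed.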

G.2. Example failure cases
From the confusion matrices of BASIC-L on two benchmarks, EuroSAT [29] and MNIST [47], we observe that the prompts and class names are crucial for the performance of zero-shot transfer models. Here, we select and present a few examples to demonstrate the failures of BASIC-L. Figure 8 visualizes these examples.

Figure 6 (EuroSAT [29]): Shown are the confusion matrices obtained from zero-shot transferring with BASIC-L using the prompts and class names from CLIP, compared to the same model using prompts and class names that we tuned. The zero-shot top-1 accuracy with our prompts and class names is 4.7% higher, and the confusion matrix illustrates this by showing more concentration on the diagonal.

Figure 7 (MNIST): Digit only: we use the class names {"0", "1", ..., "9"}; Text only: {"one", "two", ..., "nine"}; Digit and Text: {"0 or zero", "1 or one", ..., "9 or nine"}. The model has vastly different confusion matrices for different class names, suggesting that it does not understand the meaning of these strings but instead simply learns to match their embeddings.
[Figure 8 example: candidate text sequences "More than 6 kittens in total.", "More than 4 kittens in total.", "More than 2 kittens in total.", and "More than 6 puppies in total.", with model scores 0.472, 0.342, and 0.186 among them.]