The present disclosure relates generally to training machine-learned models. More particularly, aspects of the present disclosure relate to weakly supervised training of machine-learned image-processing models.
Training machine-learned models can use large quantities of data. In some cases, supervised training can refer to training a model based on training examples that are individually curated to provide a certain training outcome (e.g., a curated collection of cat images to train an image-recognition model to recognize cats). For instance, a training objective can be to match a model output to a predetermined image label. In some cases, unsupervised training can refer to training a model with training examples that are not individually curated (e.g., crawled images, text, etc.). In some cases, training examples for unsupervised training can be collected with lower effort, but it can be challenging to determine a training objective.
Aspects and advantages of embodiments of the present disclosure will be set forth in part in the following description, or can be learned from the description, or can be learned through practice of the embodiments.
In one example aspect, the present disclosure provides an example system for training a machine-learned image-processing model. The example system includes one or more processors and one or more non-transitory, computer-readable media that store instructions that, when executed, cause the one or more processors to perform operations. In the example system, the operations include receiving a training sequence for the machine-learned image-processing model. In the example system, the training sequence includes text tokens and image tokens, a prefix sequence containing the image tokens, and a remainder sequence containing a remainder set of the text tokens. In the example system, the operations include determining, using the prefix sequence as an input to the machine-learned image-processing model, an objective based on recovery of the remainder sequence. In the example system, the operations include updating one or more learnable parameters of the machine-learned image-processing model based on the objective. In some embodiments of the example system, the machine-learned image-processing model is configured to bidirectionally attend over the prefix sequence and optionally evaluate a language modeling objective over the remainder sequence.
In one example aspect, the present disclosure provides an example method for training a machine-learned image-processing model. The example method includes receiving, by a computing system having one or more processors, a training sequence for the machine-learned image-processing model. In the example method, the training sequence includes text tokens and image tokens, a prefix sequence containing the image tokens, and a remainder sequence containing a remainder set of the text tokens. The example method includes determining, by the computing system and using the prefix sequence as an input to the machine-learned image-processing model, an objective based on recovery of the remainder sequence. The example method includes updating, by the computing system, one or more learnable parameters of the machine-learned image-processing model based on the objective. In some embodiments of the example method, the machine-learned image-processing model is configured to bidirectionally attend over the prefix sequence and optionally evaluate a language modeling objective over the remainder sequence.
In one example aspect, the present disclosure provides an example system for implementing a machine-learned image-processing model. The example system includes one or more processors and one or more non-transitory, computer-readable media that store the machine-learned image-processing model. In the example system, the machine-learned image-processing model was trained over a weakly-supervised dataset containing images and associated text strings. In the example system, the machine-learned image-processing model includes one or more parameters updated based on a language modeling objective over a respective text string conditioned on a respective corresponding image. The example system includes the computer-readable media that store instructions that, when executed, cause the one or more processors to perform operations. In the example system, the operations include inputting image tokens to an encoder portion of the machine-learned image-processing model and outputting text tokens from a decoder portion of the machine-learned image-processing model. In some embodiments of the example system, the machine-learned image-processing model is configured to bidirectionally attend over the prefix sequence and optionally evaluate a language modeling objective over the remainder sequence. In some embodiments of the example system, the output text tokens are responsive to a query submitted via one or more text tokens input to the encoder portion.
These and other features, aspects, and advantages of various embodiments of the present disclosure will become better understood with reference to the following description and appended claims. The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate example embodiments of the present disclosure and, together with the description, serve to explain the related principles.
Detailed discussion of embodiments directed to one of ordinary skill in the art is set forth in the specification, which makes reference to the appended figures, in which:
Reference numerals that are repeated across plural figures are intended to identify the same features in various implementations.
Example embodiments according to aspects of the present disclosure are generally directed to techniques for improved pretraining of multimodal machine-learned models. For instance, a multimodal image-processing model can be trained to interpret, understand, and output semantic relationships between images and text (e.g., for image captioning, image-based reasoning, visual question answering, etc.). In some embodiments, the multimodal model can be trained to generate textual output based on an input image using a language-modeling pretraining objective. For instance, in some embodiments, the language-modeling pretraining objective can include a prefix-based objective: a training example can be used to obtain a training sequence split into a prefix and a textual remainder, and the objective can be configured to evaluate the recovery of the textual remainder by the model (e.g., via prediction/inference) when given the prefix. For example, a training example can contain an image and an associated text string. The image can be encoded into image tokens, and the text string can be encoded into text tokens. A prefix sequence can be obtained that includes the image tokens and optionally one or more text tokens. A remainder sequence can include the remaining text tokens. Pretraining can include predicting the remainder sequence with the model given the prefix sequence. The objective can be configured to evaluate recovery of the remainder sequence by the model. In this manner, for instance, the multimodal model can be trained to process multimodal data (e.g., image and text) using a single-modality objective (e.g., a generative language-modeling objective).
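For illustration, the following is a minimal Python sketch of how a training example might be split into a prefix sequence and a textual remainder for such an objective; the function name, the toy token values, and the random break-point policy are assumptions for illustration rather than a definitive implementation.

```python
import random

def build_prefix_and_remainder(image_tokens, text_tokens, min_prefix_text=0):
    """Split an (image, text) token pair into a prefix sequence and a textual
    remainder sequence for a prefix-based pretraining objective (hypothetical
    helper). The image tokens, plus an optional leading span of text tokens,
    form the prefix; the remaining text tokens form the recovery target."""
    # Choose a break point within the text tokens, reserving at least one
    # text token for the remainder so there is something to recover.
    break_point = random.randint(min_prefix_text, max(min_prefix_text, len(text_tokens) - 1))
    prefix_sequence = list(image_tokens) + list(text_tokens[:break_point])
    remainder_sequence = list(text_tokens[break_point:])
    return prefix_sequence, remainder_sequence

# Example usage with toy token ids.
image_tokens = [1001, 1002, 1003, 1004]   # e.g., embedded image patches
text_tokens = [7, 8, 9, 10, 11]           # e.g., tokens of an associated caption
prefix, remainder = build_prefix_and_remainder(image_tokens, text_tokens)
print(prefix, remainder)
```

During pretraining, the model would be given the prefix and scored on how well it recovers the remainder.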
Prior techniques for pretraining multimodal models have generally required substantial curation of training data. For example, prior techniques have generally required a labeled dataset for learning each modality. For example, in some prior techniques, in order to capture alignment between images and text, labeled/curated object detection datasets are first used to train a supervised object detector for extracting region-of-interest features from images. Next, datasets of aligned image-text pairs are generally used for pretraining of a fusion model that can take as input the concatenation of the extracted region-of-interest features and the paired text. This pretraining approach generally requires multiple stages of fully supervised training.
In some other examples, due to the limited scale of human-annotated data, various task-specific auxiliary losses have been used in attempts to improve performance over noisier datasets. Other prior approaches have used weakly labeled/aligned training data crawled from the web, but such past approaches have generally relied on multiple independent single-mode processing pipelines (e.g., encoders/decoders for each modality). These design choices can complicate the pretraining process and create a bottleneck for further quality improvement, as well as inhibit the use of powerful cross-modal context (e.g., cross-modal attention).
Advantageously, a prefix-based objective according to example aspects of the present disclosure can train an image-processing model for generative language-modeling tasks while also learning bidirectional attention pathways. For example, in some embodiments of pretraining, the multimodal model can bidirectionally attend over an input prefix and also obtain a recovered remainder in a generative fashion (e.g., sequentially predicting elements of an output sequence based on any preceding elements of the output sequence). In this manner, for example, the prefix-based objective according to example aspects of the present disclosure can leverage cross-modal context by bidirectionally attending over the prefix sequence, which can contain both image tokens and text tokens (e.g., bidirectional attention across modalities). Additionally, the remainder can be predicted using generative language modeling, further developing the capability of the model for unidirectional generative tasks. Compared to some prior methods that rely purely on bidirectional attention pathways (e.g., masked-language modeling), example pretraining objectives of the present disclosure can not only enjoy the benefits of learning bidirectional contextualized representations but also achieve improved performance on open-ended text generation in language modeling. Furthermore, compared to some prior methods that rely on multiple objectives to train different attention configurations, example pretraining objectives can provide a single objective that supports the development and learning of both bidirectional attention pathways and generative language modeling skills, providing for more efficient pretraining in one pass (e.g., evaluating a single objective over the training data in one pass, pretraining the model in a single stage, etc.).
Additionally, a prefix-based objective according to example aspects of the present disclosure can exhibit high tolerance to noisy training datasets. Furthermore, example embodiments can train multimodal image-processing models with a single-modality language-modeling objective, simplifying the pretraining process flow and enabling implementation at scale. In this manner also, for instance, a prefix-based objective according to example aspects of the present disclosure can provide for processing at scale such that any deficiencies in quality of a noisy set of training data can be mitigated by processing the noisy training data in large quantities.
Example embodiments according to the present disclosure can provide a number of technical effects and benefits. For instance, some example embodiments can provide a streamlined pretraining process with fewer stages (e.g., a single stage), decreasing configuration overhead and opportunities for suboptimal arrangement. Similarly, some example embodiments can present a simplified pretraining objective for decreasing computational overhead for each training cycle. For instance, in some embodiments, an objective according to example aspects of the present disclosure can provide for pretraining in one pass (e.g., evaluating a single objective over the training data in one pass, pretraining the model in a single stage, etc.). For example, a simplified pretraining objective according to the present disclosure can provide for improved performance of a resulting model obtained with decreased computational cost. For example, training a multimodal image-processing model using a pretraining objective according to the present disclosure can decrease processing cycles, memory usage, communications bandwidth, and other computational resources used to obtain a pretrained model.
Accordingly, by providing a more efficient pretraining objective, example embodiments according to the present disclosure can offer improved performance at scale. For instance, training a large number of models and/or using a large number of training examples can be computationally intensive. Thus, a more efficient pretraining objective according to example embodiments of the present disclosure can enable greater scalability of model training and deployment. By improving performance at scale, a more efficient pretraining objective according to example embodiments of the present disclosure can improve the capacity and capabilities of computing systems large and small. For instance, the efficiency gains enjoyed at large scales can also be leveraged to implement pretraining routines in resource-constrained environments (e.g., on mobile devices).
Furthermore, by providing an objective that jointly develops bidirectional attention pathways and unidirectional language modeling performance, example embodiments according to aspects of the present disclosure can provide for pre-trained models that demonstrate improved performance across task domains. For instance, in real-world deployment scenarios in which tasks may not necessarily be neatly categorized into separate domains, a model trained with a pretraining approach according to example aspects of the present disclosure can provide for improved real-world performance in mixed or cross-domain tasks. For example, zero-shot transfer can be improved due to the combination of bidirectional attention training and generative language modeling training.
Additionally, for instance, a pretraining approach according to example aspects of the present disclosure can provide for implementation of a small number of models (e.g., one model) in place of many models (e.g., multiple models). This can decrease the computational complexity of deploying the models, training the models, updating the models, deactivating the models, etc. In this manner, for instance, decreased computational resources can be used to perform model operations with the techniques disclosed herein. Decreased storage can be used to store a small number of models (e.g., one model) in place of many models (e.g., multiple models). Decreased network transmissions can be used to implement a small number of models (e.g., one model) in place of many models (e.g., multiple models) on one or more remote device(s) (e.g., client devices connected to a server device). Efficiency of update and patch cycles can be improved by devoting resources (e.g., computational resources, human resources, etc.) to managing and versioning a small number of models (e.g., one model) in place of many models (e.g., multiple models). By using a model trained with a pretraining approach according to example aspects of the present disclosure, a target performance can be achieved with less computational overhead by leveraging a small number of models (e.g., one model) in place of many models (e.g., multiple models). Lower latency can be achieved by using a small number of models (e.g., one model) instead of switching between many models (e.g., multiple models).
Furthermore, systems and methods according to example aspects of the present disclosure are well suited to pretraining transformer models. For instance, example techniques described herein provide for pretraining objectives that leverage internal parallel structures and processing streams of a transformer model to attend bidirectionally over a prefix input to the model to recover a remainder associated with the prefix input. In some embodiments, transformer models can include effectively parallelized computation of multi-headed attention. In this manner, for instance, examples of inherently parallelizable transformer models can be better pretrained for immediate deployment and/or further fine-tuning, offering improvements in scalability and distributed computation by leveraging a small number of transformer models (e.g., one transformer model) in place of many varying models (e.g., multiple models) that may not offer the same advantages at scale.
With reference now to the Figures, example embodiments of the present disclosure will be discussed in further detail.
In some embodiments, the training example 102 can be obtained from an unsupervised or weakly supervised training dataset. For example, the training example 102 can correspond to an image and text pairing crawled from a server, repository, or other storage (e.g., crawled from the web). For example, the text 106 can include a filename of the image 104. The text 106 can include metadata associated with the image 104, such as the contents of an alt-text field. The text 106 can include a caption associated with the image 104, or other textual data found in proximity to the image 104 (e.g., text from a shared node or container of a website, etc.). In some embodiments, the training dataset can be collected with little to no processing of the training examples therein. In some embodiments, the training dataset can be filtered to, for example, deduplicate examples, remove spurious entries, avoid sensitive or offensive materials, and the like. Although the training dataset is described in some examples as containing image-text pairs, example image-processing pretraining pipelines 100 can be agnostic to datatypes. For instance, in some embodiments, the prefix sequence 128 can contain only textual tokens or only image tokens. For instance, the image-processing pretraining pipeline 100 can be implemented over a number of iterations: in some iterations, image-text pairings can be used (e.g., to learn to semantically interpret images 104 in the language of text 106), and in other iterations, text-text pairings can be used (e.g., translation data to map the language of text 106 to another language).
In some embodiments, the image 104 can be embedded into image tokens 108. For instance, the image 104 can be directly embedded into image tokens 108 by patches. For example, the image 104 can be split into raw image patches (e.g., portions of the image selected by geometric boundaries) that can be mapped to flattened encodings. For example, raw image patches can be linearly projected into a token (e.g., a two-dimensional token, a one-dimensional token, etc.). In some embodiments, the image 104 can be embedded into image tokens 108 without additional image preprocessing upstream. For example, in some embodiments, the image tokens 108 can be directly embedded without first extracting or otherwise identifying regions of interest in the image 104 (e.g., with an object detection or other image recognition module). In this manner, for instance, the image tokens 108 can be determined based on geometric subdivisions of the image 104 (e.g., panels on a grid, etc.) instead of a semantic image processing technique. Accordingly, the image tokens 108 can be embedded without the need to first obtain or train an image-recognition model for parsing regions of interest.
In some embodiments, raw image patches can be reduced or contextualized by applying one or more convolutions (e.g., to the raw image prior to subdivision into patches, to the patches themselves, etc.). For example, one or more layers or blocks of a trained image-processing model can be used in generating the image tokens 108 from the image 104. For example, in some embodiments, one or more convolutions can be applied, optionally reducing a dimensionality of an image 104. For instance, for a raw image x having a height H, width W, and number of channels C (e.g., x ∈ ℝ^(H×W×C)), a token for the i-th patch can be expressed as a linear projection x_i^p · E of the flattened i-th patch x_i^p ∈ ℝ^(P²·C) by a learnable matrix E ∈ ℝ^((P²·C)×D),
where P is the patch size dimension and D is an optional parameter corresponding to an input architecture of the image-processing model 132. For example, for transformer-based image-processing models 132, D can correspond to a hidden size of the transformer layer(s). In some embodiments, one or more convolutions can be performed using one or more layers of an image processing model. For example, one or more blocks of ResNet may be used to perform convolutions on an input image, or patches thereof. For instance, two, three, or four blocks of ResNet can be used to extract contextualized patches from an input image.
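For illustration, a minimal sketch of direct patch embedding by geometric subdivision and linear projection is shown below; PyTorch is assumed as the framework, the class name and dimensions are illustrative, and the optional convolution stage described above is omitted.

```python
import torch
import torch.nn as nn

class PatchEmbedder(nn.Module):
    """Hypothetical patch embedder: splits an image into non-overlapping
    P x P patches, flattens each patch, and linearly projects it to the
    model hidden size D, with no region-of-interest detection required."""

    def __init__(self, patch_size: int = 16, channels: int = 3, hidden_size: int = 512):
        super().__init__()
        self.patch_size = patch_size
        self.project = nn.Linear(patch_size * patch_size * channels, hidden_size)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (batch, channels, height, width), with height and width divisible by P.
        b, c, h, w = images.shape
        p = self.patch_size
        patches = images.unfold(2, p, p).unfold(3, p, p)              # (b, c, h//p, w//p, p, p)
        patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(b, (h // p) * (w // p), c * p * p)
        return self.project(patches)                                  # (b, num_patches, hidden_size)

tokens = PatchEmbedder()(torch.randn(1, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 196, 512])
```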
In some embodiments, the text 106 can be embedded into text tokens 118. The text tokens 118 can be generated from the text 106 by one or more language embedding techniques. For example, the text tokens 118 can be generated based on word embeddings, sub-word embeddings, or character embeddings (e.g., or combinations thereof).
In some embodiments, tokens in the training sequence 127 can include one or more positional embeddings. For instance, one or more positional embeddings can be added for image tokens 108 and text tokens 118. In some embodiments, positional embeddings can be added for image tokens 108 and text tokens 118 separately. In some embodiments, the positional encodings can be learnable. In some embodiments, the image-processing model 132 includes one or more transformer-based model components, and two-dimensional relative attention can be added to one or more layers for the image tokens 108.
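A minimal sketch of adding learnable positional embeddings separately to image tokens and text tokens before concatenation is shown below; the class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn

class SeparatePositionalEmbeddings(nn.Module):
    """Hypothetical module: adds separate learnable positional embeddings to
    image tokens and text tokens, then concatenates them into one sequence."""

    def __init__(self, hidden_size: int, max_image_tokens: int, max_text_tokens: int):
        super().__init__()
        self.image_pos = nn.Parameter(torch.zeros(max_image_tokens, hidden_size))
        self.text_pos = nn.Parameter(torch.zeros(max_text_tokens, hidden_size))

    def forward(self, image_tokens: torch.Tensor, text_tokens: torch.Tensor) -> torch.Tensor:
        # image_tokens: (batch, Ti, hidden); text_tokens: (batch, Tt, hidden).
        image_tokens = image_tokens + self.image_pos[: image_tokens.size(1)]
        text_tokens = text_tokens + self.text_pos[: text_tokens.size(1)]
        return torch.cat([image_tokens, text_tokens], dim=1)
```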
In some embodiments, one or more parameters of an embedding layer for embedding the inputs (e.g., patches of the image 104, text 106) into tokens can be shared with an output layer of the image-processing model 132. For instance, in some embodiments, parameter(s) in the embedding layer can be shared with a decoder softmax layer for outputting a probability distribution over a vocabulary.
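A minimal sketch of the described parameter sharing, assuming a PyTorch embedding layer and decoder output projection (the variable names and sizes are illustrative):

```python
import torch.nn as nn

vocab_size, hidden_size = 32000, 512
token_embedding = nn.Embedding(vocab_size, hidden_size)
output_projection = nn.Linear(hidden_size, vocab_size, bias=False)
# Tie the decoder softmax projection to the input embedding table so both
# layers share one parameter matrix.
output_projection.weight = token_embedding.weight
```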
In some embodiments, the training sequence 127 includes a prefix sequence 128 assembled from image tokens 108 and optionally text tokens 118. In some embodiments, the prefix sequence 128 can include some image tokens 108 (e.g., all of image tokens 108) prepended to one or more text tokens 118 (e.g., a prefix set of text tokens 118, which can optionally be an empty set if no text tokens are in the prefix sequence 128). In some embodiments, the prefix sequence 128 can include all image tokens (e.g., single modality prefix). In some embodiments, the prefix sequence 128 can include all text tokens (e.g., single modality prefix). For example, one or more training iterations can be performed with a prefix sequence 128 assembled from image tokens 108 and optionally text tokens 118, and one or more subsequent training iterations can be performed with a prefix sequence 128 assembled only from text tokens 118 or other text tokens.
In some embodiments, the remainder sequence 130 includes a set of text tokens not contained within the prefix sequence 128 (e.g., a remainder set of the text tokens 118). In some embodiments, the remainder sequence 130 contains only text tokens (e.g., from text tokens 118). In some embodiments, the remainder sequence 130 includes a contiguous remainder of the text tokens 118 (e.g., a contiguous set of tokens not used in the prefix sequence 128). For example, text 106 can include a textual string that can be tokenized into a sequence of text tokens 118. One or more tokens (e.g., textual token 120) can be included in the prefix sequence 128. One or more other, remaining tokens (e.g., textual tokens 122, 124, 126; optionally contiguous tokens) can be included in the remainder sequence 130. In some embodiments, the remainder sequence 130 can contain a terminus of the textual string.
In some embodiments, a break point can be determined within the text tokens 118 to allocate the text tokens 118 among the prefix sequence 128 and the remainder sequence 130. For instance, a break point can be explicitly provided based on a quantity of tokens for the prefix sequence 128. In some embodiments, a break point can be changed or updated according to a desired scheme. For instance, in some embodiments, a break point can be randomly determined, such as randomly determined for each training example 102.
In some embodiments, the image-processing model 132 can be or otherwise include one or more machine-learned models configured to receive a sequence of tokens as input and output one or more tokens. For example, in some embodiments, image-processing model 132 can be or otherwise include a transformer-based model. For instance, image-processing model 132 can include a transformer encoder, a transformer decoder, or both. In some embodiments, image-processing model 132 includes a transformer-based encoder-decoder structure, and the prefix sequence 128 is provided to the encoder portion as an input for recovering the remainder sequence 130 as an output of the decoder portion.
For example,
In some embodiments, the encoder 234 can provide for self-attention over the prefix sequence 128′ (e.g., leveraging a transformer-based architecture). For example, the encoder 234 can be configured to bidirectionally attend over tokens in the prefix sequence 128′, such that an output of the encoder 234 for a respective token in the prefix sequence 128′ can reflect other tokens that come before or after the respective token. In this manner, for example, bidirectional attention pathways can be learned and developed in example pretraining pipelines according to the present disclosure.
In some embodiments, the decoder 236 can generate recovered remainder 134′ (e.g., containing recovered text tokens 122′, 124′, and 126′). For instance, the decoder 236 can generate recovered remainder 134′ in a generative fashion. For example, the decoder 236 can sequentially generate the recovered tokens in view of preceding token(s), including the prefix sequence 128′ or encodings based thereon. For instance, a start token 238 can be input to the decoder 236. Based on the prefix sequence 128′ (e.g., or an encoding generated therefrom by the encoder 234), the decoder 236 can output recovered text token 122′. Recovered text token 122′ can be input to the decoder 236, and based on the preceding tokens (e.g., on the start token 238 and the prefix sequence 128′, or an encoding generated therefrom by the encoder 234), recovered text token 124′ can be output. Recovered text token 124′ can be input to the decoder 236, and based on the preceding tokens (e.g., on the start token 238, recovered text token 122′, and the prefix sequence 128′, or an encoding generated therefrom by the encoder 234), recovered text token 126′ can be output. In this manner, for example, a recovered remainder 134′ can be generated with attention over preceding tokens, as in a generative language modeling task. In this manner, for example, bidirectional attention pathways as well as unidirectional attention pathways for generative language modeling can be learned and developed in example pretraining pipelines according to the present disclosure.
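For illustration, a hedged sketch of such a generative decoding loop is shown below, assuming an encoder-decoder model that exposes hypothetical encode and decode methods; greedy selection is used for simplicity.

```python
import torch
import torch.nn as nn

def generate_remainder(model: nn.Module, prefix_ids: torch.Tensor,
                       start_id: int, end_id: int, max_len: int = 32):
    """Hypothetical greedy decoding loop: the encoder bidirectionally attends
    over the prefix once, and the decoder then emits remainder tokens one at a
    time, each conditioned on the encoder output and all preceding tokens."""
    memory = model.encode(prefix_ids)                 # assumed encoder call
    output_ids = [start_id]
    for _ in range(max_len):
        decoder_input = torch.tensor([output_ids])
        logits = model.decode(decoder_input, memory)  # assumed decoder call, (1, t, vocab)
        next_id = int(logits[0, -1].argmax())
        if next_id == end_id:
            break
        output_ids.append(next_id)
    return output_ids[1:]                             # drop the start token
```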
In some embodiments, a pretraining objective according to example aspects of the present disclosure can provide for the development and learning of bidirectional attention pathways and generative language modeling skills. For instance, a single pretraining objective can provide for the development and learning of both bidirectional attention pathways and generative language modeling skills. Although the example embodiment illustrated in
In some embodiments, based on the recovered remainder 134′ (e.g., as compared to an expected remainder sequence 130), one or more parameters of the image-processing model 232 can be updated. For example, with reference again to
where the superscript U indicates a unidirectional conditionality/attention over the indicated set of tokens and the superscript B indicates a bidirectional conditionality/attention over the indicated set of tokens. In this example, for instance, for a given image-text pair, an image token sequence of length Ti can be prepended to a text sequence having a length Tt for the model to sample a prefix of length Tp, where Ti ≤ Tp < Ti + Tt. In this manner, for instance, example pretraining objectives can leverage bidirectional attention on the prefix sequence while optionally only conducting autoregressive factorization on tokens in the remainder sequence.
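For reference, one possible form of such a prefix-based language-modeling objective, consistent with the definitions above, is sketched below; this is a reconstruction for illustration, and the exact notation of any particular implementation may differ.

```latex
\mathcal{L}_{\mathrm{PrefixLM}}(\theta)
  = -\,\mathbb{E}_{\mathbf{x} \sim \mathcal{D}}\!\left[\log P_{\theta}\!\left(\mathbf{x}_{\geq T_p} \mid \mathbf{x}_{< T_p}\right)\right]
  = -\,\mathbb{E}_{\mathbf{x} \sim \mathcal{D}}\!\left[\sum_{t = T_p}^{T} \log P_{\theta}\!\left(x_t \mid \mathbf{x}_{[T_p,\, t)}^{U},\ \mathbf{x}_{< T_p}^{B}\right)\right]
```

where 𝒟 denotes the training distribution and T = Ti + Tt denotes the total length of the training sequence.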
In some embodiments, the evaluator 136 includes an example prefix-based objective as described herein. In some embodiments, the evaluator 136 includes only the prefix-based objective as described herein. For instance, in some embodiments, one or more pretraining cycles can leverage a single objective based on the prefix-based objectives described herein.
In some embodiments, pretraining can include prefix-based remainder recovery of text-only data as well as on image-text pairings. For example, in some embodiments, a pretraining recipe can include recovering (e.g., generatively predicting) one or more portions of text strings associated with images as well as recovering (e.g., generatively predicting) one or more portions of text strings associated with other portions of the text strings (e.g., without image tokens prepended thereto). In this manner, for instance, a single objective can be used for pretraining over both vision-language datasets and over textual corpora.
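A minimal sketch of one way image-text and text-only examples might be mixed under the single objective is shown below; the sampling probability and helper name are assumptions.

```python
import random

def sample_pretraining_example(image_text_pairs, text_documents, text_only_prob=0.2):
    """Hypothetical data mixer: draws either a weakly supervised image-text
    pair or a text-only document. Both feed the same prefix-based objective;
    a text-only example simply contributes no image tokens to the prefix."""
    if text_documents and random.random() < text_only_prob:
        text_tokens = random.choice(text_documents)
        image_tokens = []          # no image tokens for text-only data
    else:
        image_tokens, text_tokens = random.choice(image_text_pairs)
    return image_tokens, text_tokens
```

The sampled image tokens and text tokens can then be split into a prefix sequence and a remainder sequence as sketched earlier.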
In some embodiments, an image-processing model pretrained with a pretraining pipeline 100 as described herein can be subsequently implemented to perform a number of downstream tasks. In some embodiments, the training procedures and techniques discussed herein can form part of a pretraining system or a fine-tuning system. For instance, the training of a machine-learned image-processing model can be completed in stages. A model can be pre-trained to develop a general-purpose configuration and subsequently fine-tuned for specific tasks. Pre-training can include pursuit of unsupervised or weakly supervised objectives across large unlabeled training datasets, and can be followed by optionally supervised learning on smaller, sometimes labeled datasets in a fine-tuning stage. In some examples, an image-processing model pretrained with a pretraining pipeline 100 as described herein can be subsequently implemented to perform a number of downstream tasks with or without further fine-tuning. In some embodiments, the pretraining pipeline 100 as described herein can be implemented for fine-tuning a pretrained model.
In some embodiments, downstream tasks can include vision-language processing tasks. For example,
Subfigure (b) of
Subfigure (b) of
Subfigure (c) of
Subfigure (d) of
In some embodiments, an image-processing model can be pretrained with image and textual data and further pretrained with text-only data. In some embodiments, text-only data can be used for fine tuning for further learning of semantic relationships in language. In some embodiments, text-only data can be used for fine tuning for learning semantic relationships between languages. For example, an image-processing model can be pretrained to associate images with text using weakly supervised image-text pairings in a first language (e.g., English). In a fine-tuning procedure, the image-processing model can be fine-tuned on translation pairings (e.g., text-only data) between the first language and a second language (e.g., German). In this manner, for example, a downstream task can be performed with output in a second language when the model was only pre-trained in a first, different language. For instance, with respect to the example task in Subfigure (d) of
In another example, an image-processing model can be pretrained to associate images with text using weakly supervised image-text pairings. In a fine-tuning procedure, the model can be trained on a text-only natural-language reasoning corpus in a same or different language. For example, in fine-tuning, a premise can be input to an encoder portion and a hypothesis can be input to a decoder portion for outputting a classification (e.g., a classification of a logical relationship, such as entailment, neutral, or contradiction, etc.). In some embodiments, at runtime an image can be input to the encoder as a premise and a textual hypothesis can be input to the decoder for classification. Based on the pretraining using image-text pairings and an objective according to the present disclosure, the image-processing model can understand the premise from the image and proceed with classification of the hypothesis. In this manner, for example, cross-modality tasks can be performed, including zero-shot cross-modality tasks (e.g., zero-shot referring to the absence of training on, for instance, curated image-premise pairings).
In some embodiments, an image-processing model pretrained according to example aspects of the present disclosure can also provide for improved performance on single-modality tasks. For example, in some embodiments, after pretraining on image-text pairings with a pretraining pipeline according to the present disclosure, an image-processing model can be implemented to perform text-only tasks, such as tasks generally related to, for instance, the GLUE benchmarks. The pretraining objectives of the present disclosure, providing for joint learning of bidirectional attention pathways and generative language modeling skills, can transfer from the image-text domain to perform tasks in a text-text domain.
In some embodiments, an image-processing model pretrained according to example aspects of the present disclosure can also provide for improved performance on image classification tasks. For example, in some embodiments, an average pooling of encoder outputs can be used as image features for predicting image classes.
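A minimal sketch, assuming a classification head placed over average-pooled encoder outputs (the class name and dimensions are illustrative):

```python
import torch
import torch.nn as nn

class EncoderPoolingClassifier(nn.Module):
    """Hypothetical classification head: average-pools the encoder outputs of
    a pretrained image-processing model and predicts image classes."""

    def __init__(self, encoder: nn.Module, hidden_size: int, num_classes: int):
        super().__init__()
        self.encoder = encoder
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, image_tokens: torch.Tensor) -> torch.Tensor:
        features = self.encoder(image_tokens)   # (batch, seq_len, hidden)
        pooled = features.mean(dim=1)           # average pooling over tokens
        return self.classifier(pooled)          # (batch, num_classes)
```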
A Present Example is described below for providing experimental results for an example prefix-based pretraining objective of the present disclosure. For the convolution stage, the Present Example uses the first three blocks (excluding the conv stem) of ResNet-101. During pretraining, the Present Example uses a 224 × 224 image resolution with a fixed patch size of 16 × 16, resulting in a patch sequence of length 14 × 14 as visual tokens. For the textual input, the Present Example uses a vocabulary size of 32,000 and a max sequence length of 256 in both the encoder and decoder. The Present Example uses an embedding dimension of 512 and 8 layers. The Present Example also shares parameters between the embedding and the decoder softmax output layer. The Present Example is pretrained on large-scale web datasets for both image-text and text-only inputs. For joint vision and language data, the Present Example uses the training set of Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, & Tom Duerig, Scaling Up Visual and Vision-Language Representation Learning with Noisy Text Supervision, arXiv preprint arXiv:2102.05918, 2021, which contains about 1.8 billion noisy image-text pairs. The Present Example employs random resized cropping. For the text-only corpora, the Present Example uses the Colossal Clean Crawled Corpus (C4) dataset presented in Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, & Peter J. Liu, Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer, arXiv preprint arXiv:1910.10683, 2019, and follows its preprocessing steps. The dataset contains about 800 gigabytes of web-crawled documents. The Present Example is pretrained for about 1 million steps from scratch. The Present Example is trained with the AdamW optimizer with β1 = 0.9, β2 = 0.999, and a weight decay of 0.01. The Present Example warms up the learning rate over the first 2% of updates to a peak value of 5×10^-4, which then decays linearly. The Present Example mixes the two pretraining datasets within each batch, which contains 4,096 image-text pairs and 512 text-only documents, sharded across 512 TPU v3 chips.
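For illustration, a hedged sketch of an optimizer and learning-rate schedule matching the configuration described above is shown below; the placeholder model and the schedule helper are assumptions.

```python
import torch

def learning_rate(step: int, total_steps: int = 1_000_000,
                  peak_lr: float = 5e-4, warmup_fraction: float = 0.02) -> float:
    """Hypothetical schedule: linear warmup over the first 2% of updates to the
    peak value, followed by linear decay."""
    warmup_steps = int(total_steps * warmup_fraction)
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

model = torch.nn.Linear(512, 512)  # placeholder for the image-processing model
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4,
                              betas=(0.9, 0.999), weight_decay=0.01)
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda step: learning_rate(step) / 5e-4)
```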
Table 1 provides example results comparing two baseline configurations with the Present Example. The baseline “Decoder-only with language modeling objective” provides an example baseline using a traditional language-modeling objective with only unidirectional attention within a decoder generating the output. The baseline “Encoder-decoder with span corruption objective” provides an example baseline using a traditional span-corruption objective with only bidirectional attention. The Present Example outperforms both baselines.
The computing device 2 can be any type of computing device, such as, for example, a mobile computing device (e.g., smartphone or tablet), a personal computing device (e.g., laptop or desktop), a workstation, a cluster, a gaming console or controller, a wearable computing device, an embedded computing device, or any other type of computing device. In some embodiments, the computing device 2 can be a client computing device. The computing device 2 can include one or more processors 12 and a memory 14. The one or more processors 12 can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, an FPGA, a controller, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected. The memory 14 can include one or more non-transitory computer-readable storage media, such as RAM, ROM, EEPROM, EPROM, flash memory devices, magnetic disks, etc., and combinations thereof. The memory 14 can store data 16 and instructions 18 which are executed by the processor 12 to cause the user computing device 2 to perform operations (e.g., to perform operations implementing an image-processing model pretraining pipeline as described herein, or implementing an image-processing model trained thereby, etc.).
In some implementations, the user computing device 2 can store or include one or more machine-learned models 20. For example, the machine-learned models 20 can be or can otherwise include various machine-learned models such as neural networks (e.g., deep neural networks) or other types of machine-learned models, including non-linear models or linear models. Neural networks can include feed-forward neural networks, recurrent neural networks (e.g., long short-term memory recurrent neural networks), convolutional neural networks or other forms of neural networks. Some example machine-learned models can leverage an attention mechanism such as self-attention. For example, some example machine-learned models can include multi-headed self-attention models (e.g., transformer models). In some embodiments, machine-learned model 20 includes an image-processing model (e.g., model 132, 232, etc.).
In some implementations, one or more machine-learned models 20 can be received from the server computing system 30 over network 70, stored in the computing device memory 14, and used or otherwise implemented by the one or more processors 12. In some implementations, the computing device 2 can implement multiple parallel instances of a machine-learned model 20 (e.g., to perform parallel pretraining across multiple instances of an image-processing model pretraining pipeline).
Additionally, or alternatively, one or more machine-learned models 40 can be included in or otherwise stored and implemented by the server computing system 30 that communicates with the computing device 2 according to a client-server relationship. For example, the machine-learned models 40 can be implemented by the server computing system 30 as a portion of a web service (e.g., a model training/pretraining service, such as to provide to the computing device 2 one or more trained/pretrained models). For instance, the server computing system 30 can communicate with the computing device 2 over a local intranet or internet connection. For instance, the computing device 2 can be a workstation or endpoint in communication with the server computing system 30, with implementation of the model 40 on the server computing system 30 being remotely performed and an output provided (e.g., cast, streamed, etc.) to the computing device 2. Thus, one or more models 20 can be stored and implemented at the user computing device 2 or one or more models 40 can be stored and implemented at the server computing system 30.
The computing device 2 can also include one or more input components that receive user input. For example, a user input component can be a touch-sensitive component (e.g., a touch-sensitive display screen or a touch pad) that is sensitive to the touch of a user input object (e.g., a finger or a stylus). The touch-sensitive component can serve to implement a virtual keyboard. Other example user input components include a microphone, a traditional keyboard, or other means by which a user can provide user input.
The server computing system 30 can include one or more processors 32 and a memory 34. The one or more processors 32 can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, an FPGA, a controller, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected. The memory 34 can include one or more non-transitory computer-readable storage media, such as RAM, ROM, EEPROM, EPROM, flash memory devices, magnetic disks, etc., and combinations thereof. The memory 34 can store data 36 and instructions 38 which are executed by the processor 32 to cause the server computing system 30 to perform operations (e.g., to perform operations implementing an image-processing model pretraining pipeline as described herein, or implementing an image-processing model trained thereby, etc.).
In some implementations, the server computing system 30 includes or is otherwise implemented by one or more server computing devices. In instances in which the server computing system 30 includes plural server computing devices, such server computing devices can operate according to sequential computing architectures, parallel computing architectures, or some combination thereof.
As described above, the server computing system 30 can store or otherwise include one or more machine-learned models 40. For example, the models 40 can be or can otherwise include various machine-learned models. Example machine-learned models include neural networks or other multi-layer non-linear models. Example neural networks include feed forward neural networks, deep neural networks, recurrent neural networks, and convolutional neural networks. Some example machine-learned models can leverage an attention mechanism such as self-attention. For example, some example machine-learned models can include multi-headed self-attention models (e.g., transformer models).
In some embodiments, the server computing system 30 can implement an image-processing model trained according to the present disclosure for performing a plurality of tasks. In some embodiments, the server computing system 30 can implement a plurality of machine-learned models based on an image-processing model trained according to the present disclosure for performing a plurality of tasks. For example, in some embodiments, an image-processing model can be pretrained with a prefix-based objective according to the present disclosure. One or more variants of the model can be generated by fine-tuning the variant(s) for different downstream tasks (e.g., tasks of the types described with respect to
The computing device 2 or the server computing system 30 can train example embodiments of a machine-learned image-processing model (e.g., including models 20 or 40) using a pretraining pipeline according to the present disclosure. In some embodiments, the computing device 2 or the server computing system 30 can train example embodiments of a machine-learned image-processing model (e.g., including models 20 or 40) using a pretraining pipeline according to the present disclosure via interaction with the training computing system 50. In some embodiments, the training computing system 50 can be communicatively coupled over the network 70. The training computing system 50 can be separate from the server computing system 30 or can be a portion of the server computing system 30.
The training computing system 50 can include one or more processors 52 and a memory 54. The one or more processors 52 can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, an FPGA, a controller, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected. The memory 54 can include one or more non-transitory computer-readable storage media, such as RAM, ROM, EEPROM, EPROM, flash memory devices, magnetic disks, etc., and combinations thereof. The memory 54 can store data 56 and instructions 58 which are executed by the processor 52 to cause the training computing system 50 to perform operations (e.g., to perform operations implementing an image-processing model pretraining pipeline as described herein, or implementing an image-processing model trained thereby, etc.). In some implementations, the training computing system 50 includes or is otherwise implemented by one or more server computing devices.
The model trainer 60 can include a pretraining pipeline for training machine-learned image-processing models using a prefix-based objective according to the present disclosure. Parameters of the image-processing model(s) can be trained, in some embodiments, using various training or learning techniques, such as, for example, backwards propagation of errors. For example, an objective or loss (e.g., a prefix-based objective according to the present disclosure) can be backpropagated through the pretraining pipeline(s) to update one or more parameters of the model(s) (e.g., based on a gradient of the loss function). Various determinations of loss can be used, such as mean squared error, likelihood loss, cross entropy loss, hinge loss, or various other loss functions. Gradient descent techniques can be used to iteratively update the parameters over a number of training iterations. In some implementations, performing backwards propagation of errors can include performing truncated backpropagation through time. The pretraining pipeline can perform a number of generalization techniques (e.g., weight decays, dropouts, etc.) to improve the generalization capability of the models being trained.
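A minimal sketch of a single training step under such a pipeline is shown below, assuming a model callable that returns decoder logits given a prefix and a shifted remainder (teacher forcing); the names and shapes are illustrative.

```python
import torch

def pretraining_step(model, optimizer, prefix_ids, remainder_ids, pad_id=0):
    """Hypothetical single pretraining step: compute a prefix-based
    language-modeling loss and backpropagate it through the model.
    remainder_ids is assumed to begin with a start token."""
    logits = model(prefix_ids, remainder_ids[:, :-1])       # (batch, T-1, vocab)
    loss = torch.nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        remainder_ids[:, 1:].reshape(-1),
        ignore_index=pad_id)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```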
The model trainer 60 can include computer logic utilized to provide desired functionality. The model trainer 60 can be implemented in hardware, firmware, or software controlling a general-purpose processor. For example, in some implementations, the model trainer 60 includes program files stored on a storage device, loaded into a memory, and executed by one or more processors. In other implementations, the model trainer 60 includes one or more sets of computer-executable instructions that are stored in a tangible computer-readable storage medium such as RAM, hard disk, or optical or magnetic media.
The network 70 can be any type of communications network, such as a local area network (e.g., intranet), wide area network (e.g., Internet), or some combination thereof and can include any number of wired or wireless links. In general, communication over the network 70 can be carried via any type of wired or wireless connection, using a wide variety of communication protocols (e.g., TCP/IP, HTTP, SMTP, FTP), encodings or formats (e.g., HTML, XML), or protection schemes (e.g., VPN, secure HTTP, SSL).
The central intelligence layer can include a number of machine-learned models. For example, as illustrated in
The central intelligence layer can communicate with a central device data layer. The central device data layer can be a centralized repository of data for the computing device 80. As illustrated in
At 502, the method 500 can include receiving a training sequence for a machine-learned image-processing model. In some embodiments, the training sequence can include text tokens and image tokens. In some embodiments, a prefix sequence of the training sequence can include one or more image tokens. In some embodiments, a remainder sequence of the training sequence can include one or more text tokens, such as a set of text tokens remaining after text tokens (if any) are allocated to the prefix sequence. For example, image tokens and text tokens, and their placement into prefix sequences and remainder sequences, are described in various examples with respect to
In some embodiments, for example, the training sequence is based on a training example obtained from a training dataset. For instance, the training example can include an image associated with a text string, such that the image tokens are respectively based on patches of the image, and the text tokens are respectively based on portions of the text string. In some embodiments, the data from the training example is allocated to the prefix sequence or the remainder sequence based on a break point. For example, the method 500 can include determining a random break point in the text string, with the prefix set being based on portions of the text string before the random break point and the remainder set being based on portions of the text string after the random break point.
At 504, the method 500 can include determining, using the prefix sequence as an input to the machine-learned image-processing model, an objective based on recovery of the remainder sequence. For example, in some embodiments, an objective can be configured such that the model is tasked with predicting one or more words to follow a prefix. For example, in some embodiments, a prefix can include an image and a first portion of an input sentence or phrase, such that an example objective can include predicting a remainder portion of the input sentence or phrase. In this manner, for example, a “missing” remainder can be “recovered” by the model. In some embodiments, a remainder sequence can be recovered from an image directly. For instance, a prefix sequence can contain image tokens based on an input image, and a caption or other related textual material can be recovered/predicted as text associated with the image. In this manner, for example, related textual material can be recovered/predicted based on an input prefix sequence. For example, recovery/prediction of text tokens is described in various examples with respect to
In some embodiments, the machine-learned image-processing model is configured to bidirectionally attend over the prefix sequence. For example, in some embodiments, the machine-learned image-processing model is configured with an encoder-decoder architecture. The prefix sequence can be input to the encoder, and in some examples the encoder can be configured to bidirectionally attend over its inputs. In some examples, the decoder can be trained to generatively predict a remainder sequence based on an output of the encoder (e.g., based on the bidirectional attention pathways of the encoder). For instance, the decoder can be trained to sequentially output one or more tokens based on unidirectional attention over any preceding input tokens (e.g., with an output token forming an input for processing of the next output token). In this manner, for example, an objective can include a generative language-modeling loss over the remainder sequence, such as a language-modeling loss based on an autoregressive factorization of a probability of recovering one or more tokens of the remainder sequence conditioned on one or more preceding tokens in the remainder sequence. Example encoder-decoder architectures are described in various examples with respect to
At 506, the method 500 can include updating one or more learnable parameters of the machine-learned image-processing model based on the objective. In some embodiments, 506 can include a pretraining operation. For instance, a model can be pretrained (e.g., on large quantities of data) for subsequent fine-tuning (e.g., on smaller amounts of curated data, such as annotated or labeled training datasets). In some embodiments, the method 500 includes fine-tuning a plurality of variants of the machine-learned image-processing model for a respective plurality of different downstream tasks. For instance, a number of example downstream tasks are discussed with respect to
In some embodiments, the objective can be implemented in a configuration similar to or different from the image-text prefix-remainder objective configurations described herein with respect to
In some embodiments, for example, pretraining can include evaluating the objective over image-text pairings and subsequent fine-tuning can include evaluating the objective over text-text pairings (e.g., curated or otherwise labeled pairings, etc.). For instance, in some embodiments, the fine-tuning training sequences can include textual information only.
In some embodiments, cross-domain semantic relationships can be leveraged in zero-shot or few-shot image processing. For example, in some embodiments, a model can perform image-processing tasks and provide output in a target language based on a training recipe that was not based on or did not include image-text pairings in the target language. For example, a prefix can include textual information in a first language and the remainder can include textual information in another language. In this manner, for example, the model can be trained to learn cross-language semantic relationships. In this manner, for instance, image-based translation tasks or other cross-domain image-processing tasks can be performed using a model fine-tuned using curated, text-only translation data between a subject language and a target language.
The technology discussed herein makes reference to servers, databases, software applications, and other computer-based systems, as well as actions taken and information sent to and from such systems. The inherent flexibility of computer-based systems allows for a great variety of possible configurations, combinations, and divisions of tasks and functionality between and among components. For instance, processes discussed herein can be implemented using a single device or component or multiple devices or components working in combination. Databases and applications can be implemented on a single system or distributed across multiple systems. Distributed components can operate sequentially or in parallel.
While the present subject matter has been described in detail with respect to various specific example embodiments thereof, each example is provided by way of explanation, not limitation of the disclosure. Those skilled in the art, upon attaining an understanding of the foregoing, can readily produce alterations to, variations of, and equivalents to such embodiments. Accordingly, the subject disclosure does not preclude inclusion of such modifications, variations or additions to the present subject matter as would be readily apparent to one of ordinary skill in the art. For instance, features illustrated or described as part of one embodiment can be used with another embodiment to yield a still further embodiment. Thus, it is intended that the present disclosure cover such alterations, variations, and equivalents.
Aspects of the disclosure have been described in terms of illustrative embodiments thereof. Any and all features in the following claims can be combined or rearranged in any way possible, including combinations of claims not explicitly enumerated in combination together, as the example claim dependencies listed herein should not be read as limiting the scope of possible combinations of features disclosed herein. Accordingly, the scope of the present disclosure is by way of example rather than by way of limitation, and the subject disclosure does not preclude inclusion of such modifications, variations or additions to the present subject matter as would be readily apparent to one of ordinary skill in the art. Moreover, terms are described herein using lists of example elements joined by conjunctions such as “and,” “or,” “but,” etc. It should be understood that such conjunctions are provided for explanatory purposes only. Clauses and other sequences of items joined by a particular conjunction such as “or,” for example, can refer to “and/or,” “at least one of,” “any combination of” example elements listed therein, etc. Also, terms such as “based on” should be understood as “based at least in part on.”