This application claims the benefit under 35 USC § 119(a) of Korean Patent Application No. 10-2021-0148290, filed on Nov. 1, 2021, and Korean Patent Application No. 10-2022-0030156, filed on Mar. 10, 2022, in the Korean Intellectual Property Office, the entire disclosures of which are incorporated herein by reference for all purposes.
The following description relates to a method and apparatus with data loading.
Training of a deep learning model may include updating a weight parameter of the deep learning model based on training data. The training data may be divided based on a batch size, which is an amount of data used for training at one time, and may be used for training of the deep learning model. Distributed training may include dividing operations for training a deep learning model among a plurality of graphics processing units (GPUs) and performing the divided operations. Data parallelism may be a distributed training method that divides training data among a plurality of GPUs for processing, and may include synchronizing results of the multiple GPUs. Synchronization may include calculating a final update result by synthesizing update results of the multiple GPUs each time the weight parameter of the deep learning model is updated. Since the synchronization may be performed after batch learning of each GPU is completed, the more uniform the size of the data processed during batch learning across the plurality of GPUs, the less waiting time is required for the synchronization.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
In one general aspect, a processor-implemented method with data loading includes: dividing a training data set into a plurality of subsets based on sizes of a plurality of data files included in the training data set; loading, from each of the plurality of subsets, a portion of data files in the subset to a plurality of processors based on a proportion of a number of data files in the subset to a total number of data files of the plurality of subsets and a batch size of distributed training; and reallocating, based on sizes of data files loaded to processors in a same group among the plurality of processors, the loaded data files to the processors in the same group.
The dividing of the training data set into the plurality of subsets may include: dividing a range of a data size corresponding to the training data set into a predetermined number of intervals, each having a predetermined size; and dividing the training data set into subsets corresponding to the divided intervals based on the sizes of the data files, and each of the subsets may include a data file of a size belonging to a corresponding interval.
The dividing of the training data set into the plurality of subsets may include dividing the training data set into a predetermined number of subsets based on a cumulative distribution function (CDF) for the sizes of the data files such that each of the subsets may include a same number of data files.
The reallocating of the loaded data files may include: sorting the data files loaded to the processors of the same group in an order of sizes; and distributing the sorted data files to the processors in the same group in a predetermined order.
The reallocating of the loaded data files may include: sorting the data files loaded to the processors in the same group in an order of sizes; and distributing, to the processors in the same group, a portion of the sorted data files in a first order determined in advance and another portion of the sorted data files in a second order that is a reverse order of the first order.
The distributing in the first order and the distributing in the second order may be repetitively performed within the batch size.
The loading, from each of the plurality of subsets, of the portion of data files in the subset to the plurality of processors may include: determining a number of data files to be extracted from the subset based on the proportion of the number of data files in the subset to the total number of data files of the plurality of subsets and the batch size; and arbitrarily extracting the determined number of data files from the subset and loading the extracted data files to the plurality of processors.
The plurality of processors may include a first processor and a second processor, the plurality of subsets may include a first subset, and a number of data files extracted from the first subset among data files loaded to the first processor may be equal to a number of data files extracted from the first subset among data files loaded to the second processor.
A number of the plurality of subsets may be determined based on any one or any combination of any two or more of a number of the plurality of processors, the batch size, and an input of a user.
The same group may include a set of processors in a same server.
The training data set may include either one or both of: natural language text data for training a natural language processing (NLP) model; and speech data for training the NLP model.
The processors may include a graphics processing unit (GPU).
In another general aspect, one or more embodiments include a non-transitory computer-readable storage medium storing instructions that, when executed by one or more processors, configure the one or more processors to perform any one, any combination, or all operations and methods described herein.
In another general aspect, an apparatus with data loading includes: one or more processors configured to: divide a training data set into a plurality of subsets based on sizes of a plurality of data files included in the training data set; load, from each of the plurality of subsets, a portion of data files in the subset to a plurality of training processors based on a proportion of a number of data files in the subset to a total number of data files of the plurality of subsets and a batch size of distributed training; and reallocate, based on sizes of data files loaded to training processors in a same group among the plurality of training processors, the loaded data files to the training processors in the same group.
For the dividing of the training data set into the plurality of subsets, the one or more processors may be configured to: divide a range of a data size corresponding to the training data set into a predetermined number of intervals, each having a predetermined size; and divide the training data set into subsets corresponding to the divided intervals based on the sizes of the data files, and each of the subsets may include a data file of a size belonging to a corresponding interval.
For the dividing of the training data set into the plurality of subsets, the one or more processors may be configured to divide the training data set into a predetermined number of subsets based on a cumulative distribution function (CDF) for the sizes of the data files such that each of the subsets may include a same number of data files.
For the reallocating of the loaded data files, the one or more processors may be configured to: sort the data files loaded to the training processors of the same group in an order of sizes; and distribute the sorted data files to the training processors in the same group in a predetermined order.
For the reallocating of the loaded data files, the one or more processors may be configured to: sort the data files loaded to the training processors in the same group in an order of sizes; and distribute, to the training processors in the same group, a portion of the sorted data files in a first order determined in advance and another portion of the sorted data files in a second order that is a reverse order of the first order.
The one or more processors may be configured to repetitively perform the distributing in the first order and the distributing in the second order within the batch size.
For the loading, from each of the plurality of subsets, of the portion of data files in the subset to the plurality of training processors, the one or more processors may be configured to: determine a number of data files to be extracted from the subset based on the proportion of the number of data files in the subset to the total number of data files of the plurality of subsets and the batch size; arbitrarily extract the determined number of data files from the subset; and load the extracted data files to the plurality of training processors.
In another general aspect, a processor-implemented method with data loading includes: dividing a training data set into subsets such that each of the subsets corresponds to a distinct range of data sizes of data files in the training data set; loading, from each of the subsets, a portion of data files in the subset to each of processors based on a batch size of distributed training; and reallocating the data files loaded to processors in a same group to the processors in the same group sequentially based on sizes of the loaded data files.
The dividing of the training data set may include dividing the training data set such that each of the subsets may include a same number of the data files.
The reallocating may include reversing, for each subsequent batch of the batch size, a size-based distribution order of the loaded data files to the processors in the same group.
The method may include performing, using the processors of the same group, one or more training operations of a deep learning model based on the reallocated data files.
Other features and aspects will be apparent from the following detailed description, the drawings, and the claims.
Throughout the drawings and the detailed description, unless otherwise described or provided, the same drawing reference numerals will be understood to refer to the same elements, features, and structures. The drawings may not be to scale, and the relative size, proportions, and depiction of elements in the drawings may be exaggerated for clarity, illustration, and convenience.
The following detailed description is provided to assist the reader in gaining a comprehensive understanding of the methods, apparatuses, and/or systems described herein. However, various changes, modifications, and equivalents of the methods, apparatuses, and/or systems described herein will be apparent after an understanding of the disclosure of this application. For example, the sequences of operations described herein are merely examples, and are not limited to those set forth herein, but may be changed as will be apparent after an understanding of the disclosure of this application, with the exception of operations necessarily occurring in a certain order. Also, descriptions of features that are known, after an understanding of the disclosure of this application, may be omitted for increased clarity and conciseness.
Although terms, such as “first,” “second,” and “third” may be used herein to describe various members, components, regions, layers, or sections, these members, components, regions, layers, or sections are not to be limited by these terms. Rather, these terms are only used to distinguish one member, component, region, layer, or section from another member, component, region, layer, or section. Thus, a first member, component, region, layer, or section referred to in the examples described herein may also be referred to as a second member, component, region, layer, or section without departing from the teachings of the examples.
Throughout the specification, when a component is described as being “connected to,” or “coupled to” another component, it may be directly “connected to,” or “coupled to” the other component, or there may be one or more other components intervening therebetween. In contrast, when an element is described as being “directly connected to,” or “directly coupled to” another element, there can be no other elements intervening therebetween. Likewise, similar expressions, for example, “between” and “immediately between,” and “adjacent to” and “immediately adjacent to,” are also to be construed in the same way. As used herein, the term “and/or” includes any one and any combination of any two or more of the associated listed items.
The terminology used herein is for describing various examples only and is not to be used to limit the disclosure. The articles “a”, “an”, and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. The term “and/or” includes any one and any combination of any two or more of the associated listed items. The terms “comprises,” “includes,” and “has” specify the presence of stated features, numbers, operations, members, elements, and/or combinations thereof, but do not preclude the presence or addition of one or more other features, numbers, operations, members, elements, and/or combinations thereof. The use of the term “may” herein with respect to an example or embodiment (for example, as to what an example or embodiment may include or implement) means that at least one example or embodiment exists where such a feature is included or implemented, while all examples are not limited thereto.
Unless otherwise defined, all terms including technical or scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains and based on an understanding of the disclosure of the present application. It will be further understood that terms, such as those defined in commonly-used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the disclosure of the present application, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Hereinafter, examples will be described in detail with reference to the accompanying drawings. Regarding the reference numerals assigned to the elements in the drawings, it should be noted that the same elements will be designated by the same reference numerals, and redundant descriptions thereof will be omitted.
Referring to
The distributed training system may include one or more servers (or nodes) including one or more processors that perform training operations of a deep learning model. A processor of the distributed training system may be an arithmetic processing module that updates a weight parameter of a deep learning model based on training data, and may be or include, for example, a graphics processing unit (GPU).
The training data set may include data files of various sizes for training the deep learning model. For example, when the deep learning model is a natural language processing (NLP) model, the training data set may include natural language text files and/or spoken speech files for training the NLP model.
For example,
Operation 110 may include an operation of dividing a training data set into a plurality of subsets based on sizes of a plurality of data files included in the training data set. A subset may be a portion of the training data set, and each data file included in the training data set may be included in one subset corresponding to a size of the data file.
Operation 110 may include an operation of dividing a range of a data size corresponding to the training data set into a predetermined number of intervals, each having a predetermined size, and an operation of dividing the training data set into subsets corresponding to the divided intervals based on the sizes of the data files. Each of the subsets may include a data file of a size belonging to a corresponding interval of the subset. For example, a subset may correspond to a portion obtained by dividing the entire training data set based on size.
The entire training data set may be divided into a predetermined number of subsets. The number (e.g., a total number and/or total quantity) of subsets may be determined based on any one or any combination of any two or more of the number (e.g., a total number and/or total quantity) of the plurality of processors, a batch size, and an input of a user. For example, the number of the plurality of subsets may be determined by the number input by the user. In addition, for example, the number of subsets may be determined to be less than the batch size or determined as a divisor of the batch size. Also, the number of subsets may be determined such that one or more data files are allocated to each of the plurality of processors in each subset.
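As a non-limiting sketch (not part of the claimed subject matter), the interval-based division described above may be expressed as follows. The function name, the representation of files by their sizes, and the choice of equal-width intervals are illustrative assumptions.

```python
def split_by_size_intervals(file_sizes, num_subsets):
    """Group file indices into subsets, where each subset corresponds to
    one equal-width interval of the overall range of file sizes."""
    lo, hi = min(file_sizes), max(file_sizes)
    width = (hi - lo) / num_subsets
    subsets = [[] for _ in range(num_subsets)]
    for idx, size in enumerate(file_sizes):
        # Map each file to the interval containing its size; clamp the
        # maximum size into the last interval.
        k = min(int((size - lo) / width), num_subsets - 1) if width > 0 else 0
        subsets[k].append(idx)
    return subsets
```

With this scheme the interval boundaries are fixed, so, as noted below, the number of files per subset depends on the size distribution of the data.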
For example,
Depending on the sequence length distribution of the data files, the number of data files included in each subset may not be the same. For example, referring to
Operation 110 may include an operation of dividing the training data set into the predetermined number of subsets based on a cumulative distribution function (CDF) for the sizes of the data files such that each subset includes the same number of data files.
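A minimal sketch of this equal-count division, cutting the empirical CDF of file sizes into slices of equal probability mass, might look like the following; the function name and the integer-slicing scheme are illustrative assumptions.

```python
def split_equal_count(file_sizes, num_subsets):
    """Split file indices into subsets holding (nearly) equal numbers of
    files: sort indices by file size (the empirical CDF ordering), then
    cut the sorted sequence into equal slices."""
    order = sorted(range(len(file_sizes)), key=lambda i: file_sizes[i])
    n = len(order)
    subsets = []
    for k in range(num_subsets):
        start = k * n // num_subsets
        end = (k + 1) * n // num_subsets
        subsets.append(order[start:end])
    return subsets
```

Here the per-subset file counts are uniform by construction, while the size intervals covered by the subsets vary with the data distribution.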
For example,
Depending on the sequence length distribution of the data file, lengths of intervals corresponding to subsets may not be the same. For example, referring to
Operation 120 may include an operation of loading a portion of the data files in each subset to the plurality of processors based on a proportion of the number of data files in the subset to the total number of data files of the plurality of subsets and a batch size of distributed training. The batch size may correspond to the number of data files to be used for training at a time by one processor. For example, when the batch size is 16, 16 data files may be loaded to a processor such that a weight parameter update for training the deep learning model is performed. The batch size may be determined in advance.
Operation 120 may include an operation of determining a number of data files to be extracted from each of the subsets based on the proportion of the number of data files in each subset to the total number of data files of the plurality of subsets and the batch size, and an operation of arbitrarily extracting the determined number of data files from each of the subsets and loading the extracted data files to the plurality of processors. For example, the number of data files included in a predetermined subset among the data files loaded to each of the plurality of processors may be uniform. For example, the plurality of processors may include a first processor and a second processor. In an example, when the plurality of subsets includes a first subset, the number of data files extracted from the first subset among the data files loaded to the first processor may be the same as the number of data files extracted from the first subset among the data files loaded to the second processor.
The number of data files loaded to each processor from a predetermined subset may be determined based on the batch size and a ratio of the number of data files included in the corresponding subset to the total number of data files included in the training data set. That is, within the batch size, a number of data files corresponding to the ratio of the number of data files included in each subset to the total number of data files included in the training data set may be loaded to each of the plurality of processors.
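The proportional per-subset loading may be sketched as follows. This is an illustration only: the function name, the rounding of the per-subset count, and the assumption that each subset holds enough files to serve every processor are not part of the described method.

```python
import random

def load_batches(subsets, batch_size, num_procs, seed=0):
    """Build one batch per processor by drawing, from each subset, a
    number of files proportional to that subset's share of the whole
    training data set.  Each processor draws the same count from a
    given subset, so per-processor batches have matching composition."""
    rng = random.Random(seed)
    total = sum(len(s) for s in subsets)
    batches = [[] for _ in range(num_procs)]
    for subset in subsets:
        # Files drawn from this subset for each processor's batch;
        # rounding means counts may not sum exactly to batch_size.
        count = round(batch_size * len(subset) / total)
        pool = list(subset)
        rng.shuffle(pool)  # arbitrary extraction within the subset
        for p in range(num_procs):
            take, pool = pool[:count], pool[count:]
            batches[p].extend(take)
    return batches
```

Because every processor receives the same number of files from each size-based subset, the size composition of the batches is similar across processors, which is the deviation-reducing effect described above.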
For example, as illustrated in
Meanwhile, as illustrated in
By extracting the same number of data files from a predetermined subset and loading the extracted data files to each processor, the data loading method of one or more embodiments may reduce a deviation in size of data files loaded to each processor when compared to a typical data loading method of arbitrarily extracting data files of the batch size from the entire training data set and loading the extracted data files to each processor.
Operation 130 may include an operation of reallocating, based on sizes of data files loaded to processors in a same group among the plurality of processors, the loaded data files to the processors in the same group. The same group may be a unit of processors with a low communication cost with each other and may include, for example, a set of processors in the same server. The processors of a group may share the size of the allocated data file through communication with other processors of the group, and may exchange the allocated data file with each other. A speed of communication between processors in the same server may be higher than a speed of communication between processors in different servers, and thus communication overhead may be small. By reallocating data files previously allocated to the processors in the same group based on the communication between the processors in the same group with less communication overhead, the data loading method of one or more embodiments may reduce a communication cost compared to a typical data loading method in which reallocation is performed based on communications between all processors.
Operation 130 of reallocating the loaded data files may include an operation of sorting the data files loaded to the processors of the same group in an order of sizes and an operation of distributing the sorted data files to the processors in the same group in a predetermined order.
For example,
The predetermined order among the plurality of processors may be changed for each round of distribution. A predetermined number of data files may be distributed to the plurality of processors in a predetermined first order among the plurality of processors, and then the predetermined number of data files may be distributed to the plurality of processors in a second order different from the first order. For example, operation 130 of reallocating the loaded data files may include an operation of sorting the data files loaded to the processors of the same group in the order of sizes and an operation of distributing a portion of the sorted data files to the processors in the same group in a predetermined first order and then distributing another portion of the sorted data files in a second order that is a reverse order of the first order. The operation of distributing in the first order and the operation of distributing in the second order may be repetitively performed within the batch size.
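The sort-then-alternate distribution described above may be sketched as follows, with each file represented simply by its size; the function name and this representation are illustrative assumptions.

```python
def reallocate_snake(loaded, num_procs):
    """Reallocate the files loaded to the processors of one group:
    sort all files by size, then deal them out round by round,
    reversing the processor order after each round (a forward-then-
    reverse, or "snake", order) so per-processor total sizes stay
    balanced.  Each file is represented here by its size."""
    files = sorted((f for per_proc in loaded for f in per_proc), reverse=True)
    out = [[] for _ in range(num_procs)]
    order = list(range(num_procs))
    for start in range(0, len(files), num_procs):
        for proc, f in zip(order, files[start:start + num_procs]):
            out[proc].append(f)
        order.reverse()  # alternate the dealing direction each round
    return out
```

Dealing the largest remaining files in one direction and the next-largest in the reverse direction compensates each processor that received a larger file in the previous round, so the sums of sizes per processor remain close.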
For example,
Referring to
Operation 130 of reallocating the loaded data files may include an operation of reallocating the loaded data files using various methods for allocating data files of a uniform size to the plurality of processors. For example, each time one data file is allocated to each processor in the order of the sizes of the data files, the processor to which the first data file is allocated may be changed. As another example, for each round of distribution, the order of the processors to which the data files are allocated in the order of sizes may be determined randomly.
For example, as the sums of the sizes of the data files loaded to the respective processors become more uniform, the periods of time in which the processors process the operations of the distributed training may become more uniform. When the sums of the sizes of the data files loaded to the respective processors are less uniform, a difference in learning operation processing speed between the processors may increase.
For example,
Referring to
The processor 901 may perform any one or more or all of the operations and methods described with reference to
The memory 903 may be a volatile memory or a non-volatile memory and store data related to the data loading method described with reference to
The communication module 905 may provide the apparatus 900 with a function to communicate with another electronic device or another server. For example, the apparatus 900 may be connected to an external device (for example, a terminal of a user, a server, or a network) through the communication module 905 and perform a data exchange. As an example, the apparatus 900 may transmit and receive data to and from one or more servers including one or more training processors for distributed training through the communication module 905. As another example, the apparatus 900 may transmit and receive data to and from a database in which a training data set for distributed training is stored through the communication module 905.
The memory 903 may store a program in which the data loading method described with reference to
The apparatus 900 may further include other components. As an example, the apparatus 900 may further include an input/output interface including an input device and an output device as a device for interfacing with the communication module 905. As another example, the apparatus 900 may further include other components such as a transceiver, various sensors, and a database.
The apparatuses, processors, memories, communication modules, apparatus 900, processor 901, memory 903, communication module 905, and other apparatuses, units, modules, devices, and components described herein with respect to
The methods illustrated in
Instructions or software to control computing hardware, for example, one or more processors or computers, to implement the hardware components and perform the methods as described above may be written as computer programs, code segments, instructions or any combination thereof, for individually or collectively instructing or configuring the one or more processors or computers to operate as a machine or special-purpose computer to perform the operations that are performed by the hardware components and the methods as described above. In one example, the instructions or software include machine code that is directly executed by the one or more processors or computers, such as machine code produced by a compiler. In another example, the instructions or software include higher-level code that is executed by the one or more processors or computers using an interpreter. The instructions or software may be written using any programming language based on the block diagrams and the flow charts illustrated in the drawings and the corresponding descriptions in the specification, which disclose algorithms for performing the operations that are performed by the hardware components and the methods as described above.
The instructions or software to control computing hardware, for example, one or more processors or computers, to implement the hardware components and perform the methods as described above, and any associated data, data files, and data structures, may be recorded, stored, or fixed in or on one or more non-transitory computer-readable storage media. Examples of a non-transitory computer-readable storage medium include read-only memory (ROM), random-access programmable read only memory (PROM), electrically erasable programmable read-only memory (EEPROM), random-access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), flash memory, non-volatile memory, CD-ROMs, CD-Rs, CD+Rs, CD-RWs, CD+RWs, DVD-ROMs, DVD-Rs, DVD+Rs, DVD-RWs, DVD+RWs, DVD-RAMs, BD-ROMs, BD-Rs, BD-R LTHs, BD-REs, Blu-ray or optical disk storage, hard disk drive (HDD), solid state drive (SSD), a card type memory such as multimedia card micro or a card (for example, secure digital (SD) or extreme digital (XD)), magnetic tapes, floppy disks, magneto-optical data storage devices, optical data storage devices, hard disks, solid-state disks, and any other device that is configured to store the instructions or software and any associated data, data files, and data structures in a non-transitory manner and provide the instructions or software and any associated data, data files, and data structures to one or more processors or computers so that the one or more processors or computers can execute the instructions. In one example, the instructions or software and any associated data, data files, and data structures are distributed over network-coupled computer systems so that the instructions and software and any associated data, data files, and data structures are stored, accessed, and executed in a distributed fashion by the one or more processors or computers.
While this disclosure includes specific examples, it will be apparent after an understanding of the disclosure of this application that various changes in form and details may be made in these examples without departing from the spirit and scope of the claims and their equivalents. The examples described herein are to be considered in a descriptive sense only, and not for purposes of limitation. Descriptions of features or aspects in each example are to be considered as being applicable to similar features or aspects in other examples. Suitable results may be achieved if the described techniques are performed in a different order, and/or if components in a described system, architecture, device, or circuit are combined in a different manner, and/or replaced or supplemented by other components or their equivalents.
Number | Date | Country | Kind |
---|---|---|---|
10-2021-0148290 | Nov 2021 | KR | national |
10-2022-0030156 | Mar 2022 | KR | national |