The present application is the U.S. National Stage Application of International Application No. PCT/CN2018/108100, filed Sep. 27, 2018, which claims priority to and the benefit of Chinese Application Serial No. 201710943280.3, filed Oct. 11, 2017.
The invention relates to the field of communication technology, and more particularly to a memory allocation method and a multi-core concurrent memory allocation method.
With the progress of audio and video encoding and decoding technologies, a large amount of contiguous memory must be reserved on embedded platforms for the GPU, the camera, HDMI and other devices. Because such memory must be reserved in advance, it is unavailable to other application programs even when idle, and resources are wasted. For this reason, memory is no longer simply reserved; instead, the CMA (Contiguous Memory Allocator) is used, so that the memory is available to other application programs whenever it is not occupied by the camera, HDMI or other devices. At present, CMA has become the preferred solution for drivers that use large amounts of memory on embedded platforms. Through a specially marked migration type, CMA can share the memory with application programs while the driver is not using it, and can assemble enough contiguous memory for the driver by means of the kernel's memory reclaim and migration mechanisms when the driver requires it. The memory needs of the application programs and of the drivers are thus well balanced, which greatly alleviates stuttering when memory runs low. However, the above-mentioned solution has the following shortcomings. When memory usage is high and CMA has shared memory with an application program, data provided by the application program may be passed to the driver layer of a block device for processing, so that a driver has to wait a long time when contiguous memory is allocated by CMA. In addition, when CMA has shared memory with an application program that needs to perform a large number of input and output operations, it may likewise take CMA a long time to allocate contiguous memory for the driver.
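The sharing-and-reclaim behavior described above can be pictured with a toy user-space model (the structure and function names are invented for illustration; the real CMA lives inside the kernel page allocator and works on movable pages, not on a simple counter):

```c
#include <stdio.h>

/* Toy model of a CMA region: pages lent to applications are movable,
 * so the driver can reclaim the whole region on demand. */
struct cma_region {
    unsigned long total_pages;
    unsigned long lent_pages;   /* currently borrowed by applications */
};

/* Lend free CMA pages to an application while no driver needs them. */
unsigned long cma_lend(struct cma_region *r, unsigned long want)
{
    unsigned long free_pages = r->total_pages - r->lent_pages;
    unsigned long got = want < free_pages ? want : free_pages;
    r->lent_pages += got;
    return got;
}

/* Driver requests contiguous memory: lent pages are migrated away
 * first, then the contiguous region is handed to the driver. */
unsigned long cma_reclaim_for_driver(struct cma_region *r, unsigned long need)
{
    if (need > r->total_pages)
        return 0;               /* request cannot be satisfied */
    r->lent_pages = 0;          /* movable pages are migrated out */
    return need;
}
```

The waiting problem described in the text arises precisely in the reclaim step: if the borrowing application is in the middle of heavy I/O, the migration modeled here by a single assignment can in reality take a long time.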
In order to solve the above-mentioned problem occurring in the prior art when CMA allocates contiguous memory for a driver, and to remove application programs which would occupy the contiguous memory allocated by the contiguous memory allocator for a long time, the present invention provides a memory allocation method and a multi-core concurrent memory allocation method, by which CMA can allocate contiguous memory for the drivers in a shorter time.
Detailed technical solutions are as follows:
A memory allocation method, applied to an embedded system, wherein a kernel module and a plurality of application programs are provided; wherein the memory allocation method comprises steps of:
Preferably, the screening marks comprise:
Preferably, the memory allocation request is a mark set, and the mark set is configured to specify marks for the memory allocation and a behavior for control of the memory allocation.
Preferably, the memory allocation method further comprises a plurality of drivers, when the kernel module acquires second memory allocation requests of the driver modules, performs steps of:
The present invention further provides a multi-core concurrent memory allocation method comprising the above-mentioned memory allocation method. The multi-core concurrent memory allocation method specifically comprises steps of:
Preferably, M=N.
Preferably, M=4, N=4.
The above-mentioned technical solutions have the following beneficial effects. By adopting the memory allocation method, the application programs which occupy the contiguous memory allocated by the contiguous memory allocator for a long time can be screened out and removed, so that contiguous memory can be allocated for the drivers in a shorter time; on the other hand, the corresponding contiguous memory can be allocated for the drivers through a plurality of processing units at the same time, so that the allocation efficiency is higher.
The accompanying drawings, together with the specification, illustrate exemplary embodiments of the present disclosure, and, together with the description, serve to explain the principles of the present invention.
The present invention will now be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” or “includes” and/or “including” or “has” and/or “having” when used herein, specify the presence of stated features, regions, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, regions, integers, steps, operations, elements, components, and/or groups thereof.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present disclosure, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As used herein, the term “plurality” means a number greater than one.
Hereinafter, certain exemplary embodiments according to the present disclosure will be described with reference to the accompanying drawings.
The technical solutions set forth in the present invention comprise a memory allocation method.
An embodiment of a memory allocation method, applied to an embedded system, wherein in the method, a kernel module and a plurality of application programs are provided; as shown in
In the prior art, consider the application programs which occupy the contiguous memory allocated by CMA for a long time, such as application programs that need to perform a large number of input and output operations, or application programs whose data must be processed by the driver layer of a block device. If contiguous memory needs to be allocated for a driver while such application programs occupy the contiguous memory allocated by CMA, problems arise. Because the application programs are occupying the contiguous memory allocated by CMA, and because migrating them immediately would cause errors in the application programs, the common practice is to wait for the application programs to complete their operations before CMA allocates the corresponding contiguous memory for the driver. In this case, CMA may need to wait a relatively long time, with direct consequences such as delays in the decoding process, which lowers the user experience.
In the present invention, the application programs which occupy the contiguous memory allocated by the contiguous memory allocator for a long time can be screened out and removed. It should be noted that screening and removing herein do not mean that the application program is not executed; rather, the application program will use memory allocated in other ways instead of the contiguous memory allocated by CMA. The beneficial effect of this operation is that CMA can quickly perform the allocation operation, without a long wait, whenever contiguous memory must be allocated to a driver.
In a preferred embodiment, the screening marks comprise:
In a preferred embodiment, the memory allocation request is a mark set, and the mark set may be a gfp_mask parameter configured to specify marks for the memory allocation and a behavior for control of the memory allocation.
In the above-mentioned technical solutions, when the kernel module receives a first memory allocation request from an application program, it first judges whether preset screening marks (i.e., the GFP_BDEV mark or the GFP_WRITE mark) are included in the gfp_mask parameter. If the preset screening marks exist in the gfp_mask parameter, this shows that the current application program will occupy the contiguous memory allocated by CMA for a long time.
It should be noted that an allocation mark parameter, referred to as gfp_mask, must be provided whenever memory is allocated to any application program or driver.
The gfp_mask may comprise one or more of the GFP_BDEV mark, the GFP_WRITE mark, the GFP_FS mark and the GFP_IO mark.
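The judging step described above can be sketched as a simple bitmask test. In this user-space sketch, the bit values assigned to the marks are invented for illustration (the real gfp flags are kernel-defined), and the routing result is reduced to a yes/no answer:

```c
/* Illustrative stand-ins for the gfp marks named in the text;
 * the bit positions here are invented for this sketch. */
typedef unsigned int gfp_t;
#define GFP_BDEV  (1u << 0)   /* data goes to a block-device driver layer */
#define GFP_WRITE (1u << 1)   /* a large amount of write I/O is expected */
#define GFP_FS    (1u << 2)
#define GFP_IO    (1u << 3)

/* Preset screening marks: a request carrying any of these would
 * occupy the contiguous memory allocated by CMA for a long time. */
#define SCREENING_MARKS (GFP_BDEV | GFP_WRITE)

/* Return 1 if the first memory allocation request must be screened
 * out of the CMA area and served from ordinary memory instead. */
int screened_out(gfp_t gfp_mask)
{
    return (gfp_mask & SCREENING_MARKS) != 0;
}
```

A request whose mask contains only GFP_FS or GFP_IO is not screened out and may still be served from CMA-managed memory.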
In a preferred embodiment, the memory allocation method further comprises a plurality of drivers, as shown in
In the above-mentioned technical solutions, the method for migrating the application program from the current contiguous memory allocated by the CMA to a newly-allocated memory can be performed by a corresponding migration mechanism.
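A minimal user-space analogy of this migration step is sketched below. Real page migration is performed inside the kernel and is transparent to the application; this sketch only shows the copy-then-release idea, with the function name and pool choice invented for illustration:

```c
#include <stdlib.h>
#include <string.h>

/* Move an application's data out of a CMA-backed buffer into a newly
 * allocated ordinary buffer, then release the CMA buffer so that the
 * allocator can hand the contiguous region to the driver. */
void *migrate_buffer(void *cma_buf, size_t len)
{
    void *new_buf = malloc(len);     /* memory allocated "in other ways" */
    if (!new_buf)
        return NULL;                 /* migration failed; old buffer kept */
    memcpy(new_buf, cma_buf, len);   /* copy contents before releasing */
    free(cma_buf);                   /* contiguous region becomes free */
    return new_buf;
}
```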
As shown in
The technical solutions set forth in the present invention comprise a multi-core concurrent memory allocation method.
In an embodiment of a multi-core concurrent memory allocation method, as shown in
In a preferred embodiment, M=N.
In a preferred embodiment, M=4, N=4.
In the above-mentioned technical solutions, a specific embodiment is shown for illustration. For example, when the number of processing units is 4 and the size of the contiguous memory that needs to be allocated to the driver is 500 M, it is preferable that the memory is divided into 4 equal parts, that is to say, 125 M for each part. When each of the processing units receives the second memory allocation request, contiguous memory of 125 M is allocated by each of the processing units at the same time, such that the contiguous memory of 500 M can be provided to the driver. In addition, simultaneous execution on multiple CPUs allows CMA to complete the allocation of contiguous memory faster.
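The division of the request among the processing units can be sketched as follows. The thread-per-unit structure is an illustrative user-space analogy for the per-CPU concurrent allocation; the worker body is a stand-in for the real CMA allocation:

```c
#include <pthread.h>
#include <stddef.h>

#define NUM_UNITS 4   /* N processing units, as in the example */

struct part {
    size_t size_mb;       /* share of the request given to this unit */
    size_t allocated_mb;  /* what the unit reports back */
};

/* Each "processing unit" satisfies its share concurrently. */
static void *alloc_worker(void *arg)
{
    struct part *p = arg;
    p->allocated_mb = p->size_mb;  /* stand-in for the CMA allocation */
    return NULL;
}

/* Split a driver request of total_mb into NUM_UNITS equal parts and
 * let one thread per unit allocate its part at the same time. */
size_t concurrent_alloc(size_t total_mb)
{
    pthread_t tid[NUM_UNITS];
    struct part parts[NUM_UNITS];
    size_t done = 0;

    for (int i = 0; i < NUM_UNITS; i++) {
        parts[i].size_mb = total_mb / NUM_UNITS;  /* 500 M -> 4 x 125 M */
        parts[i].allocated_mb = 0;
        pthread_create(&tid[i], NULL, alloc_worker, &parts[i]);
    }
    for (int i = 0; i < NUM_UNITS; i++) {
        pthread_join(tid[i], NULL);
        done += parts[i].allocated_mb;
    }
    return done;
}
```

This sketch assumes the total divides evenly among the units, as in the 500 M example; a real implementation would also have to place any remainder with one of the units.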
As shown in
The above descriptions are only the preferred embodiments of the invention and are not intended to limit the embodiments or scope of the invention. Those skilled in the art should realize that schemes obtained from the content of the specification and drawings of the invention are within the scope of the invention.
Number | Date | Country | Kind |
---|---|---|---|
201710943280.3 | Oct 2017 | CN | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/CN2018/108100 | 9/27/2018 | WO | 00 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2019/072094 | 4/18/2019 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
7516291 | van Riel | Apr 2009 | B2 |
7523284 | Wilson | Apr 2009 | B1 |
7814288 | Coon | Oct 2010 | B2 |
7937625 | Calinoiu | May 2011 | B2 |
8402249 | Zhu et al. | Mar 2013 | B1 |
8838928 | Robin | Sep 2014 | B2 |
8954707 | Gheith | Feb 2015 | B2 |
10019288 | Kung | Jul 2018 | B2 |
10795808 | Naccache | Oct 2020 | B2 |
20070118712 | van Riel | May 2007 | A1 |
20180074863 | Kung | Mar 2018 | A1 |
20180373624 | Naccache | Dec 2018 | A1 |
Number | Date | Country |
---|---|---|
102053916 | May 2011 | CN |
102156675 | Aug 2011 | CN |
102521184 | Jun 2012 | CN |
107220189 | Sep 2017 | CN |
Entry |
---|
Park et al. “GCMA: Guaranteed Contiguous Memory Allocator”, 2016, pp. 29-34. |
Number | Date | Country | |
---|---|---|---|
20210334140 A1 | Oct 2021 | US |