A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by any one of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyrights whatsoever.
This disclosure relates to the image decoding field, and more specifically, to an encoder, an encoding system and an encoding method.
Joint Photographic Experts Group (JPEG) and JPEG 2000 are common image encoding standards.
JPEG 2000 employs wavelet transformation and performs entropy encoding based on embedded block coding with optimized truncation (EBCOT). JPEG 2000 has a higher compression rate than JPEG, and supports progressive downloading and displaying.
The rate control (i.e., bit rate control) algorithm of a conventional JPEG 2000 encoder performs global optimization for an entire image. Thus, a high system bandwidth is required.
This disclosure provides an encoder, an encoding system and an encoding method, which can reduce the system bandwidth requirements of an encoding process.
According to an aspect of the present disclosure, an encoder is provided, including: a first interface circuit, configured to read pre-generated statistical information of a to-be-encoded image from an external storage device; a rate control circuit, configured to determine a target rate of an image tile in the to-be-encoded image based on the statistical information of the to-be-encoded image; a first encoding circuit, configured to perform tier-1 encoding on at least one code block of the image tile to obtain a bit stream of the image tile; and a second encoding circuit, configured to perform tier-2 encoding on the bit stream of the image tile based on the target rate to truncate the bit stream of the image tile.
According to another aspect of the present disclosure, an encoding system is provided, including: a preprocessing circuit, configured to calculate statistical information of a to-be-encoded image; a storage device, configured to store the to-be-encoded image and the statistical information; and an encoder, configured to read the to-be-encoded image and the statistical information from the storage device, where the encoder includes: a first interface circuit, configured to read the statistical information of the to-be-encoded image from the storage device, a rate control circuit, configured to determine a target rate of an image tile in the to-be-encoded image based on the statistical information of the to-be-encoded image, a first encoding circuit, configured to perform tier-1 encoding on at least one code block of the image tile to obtain a bit stream of the image tile, and a second encoding circuit, configured to perform tier-2 encoding on the bit stream of the image tile based on the target rate to truncate the bit stream of the image tile.
According to yet another aspect of the present disclosure, an encoding method is provided, including: reading pre-generated statistical information of a to-be-encoded image from an external storage device; determining a target rate of an image tile in the to-be-encoded image based on the statistical information of the to-be-encoded image; performing tier-1 encoding on a code block of the image tile to obtain a bit stream of the image tile; and performing tier-2 encoding on the bit stream of the image tile based on the target rate to truncate the bit stream of the image tile.
Thus, the statistical information of the image tiles in the to-be-encoded image is calculated, and the bit stream of each image tile is truncated according to the statistical information, thereby performing relatively independent rate control on each image tile and reducing the encoder's system bandwidth requirement.
This disclosure may be applied to the image encoding/decoding field, the video encoding/decoding field, the hardware video encoding/decoding field, the dedicated circuit video encoding/decoding field, and the real-time video encoding/decoding field.
An encoder provided by this disclosure may be configured to perform lossy compression on an image, or may be configured to perform lossless compression on an image. The lossless compression may be visually lossless compression, or may be mathematically lossless compression.
For ease of understanding, an encoding architecture of JPEG 2000 is first briefly described.
As shown in
The preprocessing module 12 may include a component transformation module 122 (the module is composed of one or more specific circuits) and a direct current level shift module 124 (the module is composed of one or more specific circuits).
The component transformation module 122 may perform a certain transformation on components of an image to reduce correlation between the components. For example, the component transformation module 122 may transform components of an image from a current color gamut to another color gamut.
The component transformation module 122 may support a plurality of color transformation modes. Therefore, the component transformation module 122 may be sometimes referred to as a multi-mode color transformation (MCT) module. For example, the component transformation module 122 may support irreversible color transformation (ICT) or reversible color transformation (RCT). It should be noted that the component transformation module 122 is optional. In an actual encoding process, alternatively, components of an image may not be transformed; instead, subsequent processing is directly performed.
The direct current level shift module 124 may be configured to perform a center shift on component values, so that the component values are symmetrically distributed about 0, to facilitate subsequent transformation operation.
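As a concrete illustration of the shift described above, unsigned B-bit samples can be centered about 0 by subtracting 2^(B-1). The following Python sketch (the function name is illustrative, not part of the standard) shows the mapping:

```python
def dc_level_shift(samples, bit_depth=8):
    """Center unsigned B-bit samples about 0 by subtracting 2^(B-1),
    as done before the wavelet transform in JPEG 2000."""
    offset = 1 << (bit_depth - 1)
    return [s - offset for s in samples]

# 8-bit samples in [0, 255] map to [-128, 127]
shifted = dc_level_shift([0, 128, 255])
```

For 8-bit data the shift maps the range [0, 255] onto [-128, 127], so the subsequent transform operates on values symmetrically distributed about 0.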
The transformation module 14 may use wavelet transformation to transform image blocks (hereinafter “image tiles”) in an image to obtain a wavelet coefficient of a sub-band. In some exemplary embodiments of this disclosure, the size of an image tile is not specifically limited, for example, 512×512 (in the unit of pixel).
The quantization module 16 may be configured to quantize the wavelet coefficient of a sub-band to obtain a quantized wavelet coefficient of the sub-band.
The EBCOT module 18 is an entropy encoding module of JPEG 2000, and is a core module of JPEG 2000.
The EBCOT module 18 may include a tier-1 encoding module 182 (the module is composed of one or more specific circuits), a tier-2 encoding module 184 (the module is composed of one or more specific circuits), and a rate control module 186 (the module is composed of one or more specific circuits) (e.g., a module for bit rate control). The tier-1 encoding module 182 may be configured to perform tier-1 encoding on a code block (a sub-band may be further divided into a plurality of independent code blocks). The tier-1 encoding may include bit plane encoding and arithmetic encoding. The tier-2 encoding module 184 is mainly responsible for bit stream organization, for example, may perform processing such as truncation on a bit stream of a code block based on a target rate provided by the rate control module 186.
JPEG 2000 mainly uses a post-compression rate-distortion optimization (PCRD) algorithm for rate control. When rate control is performed with the conventional JPEG 2000 technology, an optimal truncation point set of the bit streams of all code blocks in an image is calculated by traversal. In other words, rate control is performed on the entire image in the conventional JPEG 2000 technology. If a hardware encoder is expected to perform rate control on the entire image, massive intermediate data will be generated. In the case where the on-chip cache is limited, the encoder inevitably needs to perform massive data interactions with an external storage device (for example, a memory), and high system bandwidth is thus required.
The following describes the technical solutions of this disclosure with reference to
Some exemplary embodiments of this disclosure provide an encoding system. As shown in
As shown in
The statistical information of the to-be-encoded image may be information that can be used to perform rate control on an image tile in the to-be-encoded image. Therefore, in some exemplary embodiments, the statistical information of the to-be-encoded image may also be referred to as rate control information of a to-be-encoded image tile. The statistical information of the to-be-encoded image may include one or more types of the following information of the image tile in the to-be-encoded image: complexity, activity, and texture.
The statistical information of the to-be-encoded image may be calculated in different ways. In an example in which the statistical information of the to-be-encoded image is the complexity of the image tiles in the to-be-encoded image, the complexity of an image tile may be defined or calculated based on amplitudes of high-frequency components of pixels in the image tile. For example, the complexity of an image tile may be a cumulative sum of amplitudes of high-frequency components of pixels in an image tile region. When the texture of an image tile is complex, the corresponding cumulative sum of amplitudes of high-frequency components is also large, and it may be considered that the complexity of the image tile is high. Based on the image encoding theory, an encoded bit stream (or a quantity of bits consumed during encoding) corresponding to the image tile region with high complexity is also large. Specifically, the high-frequency components may be obtained by performing a filter operation based on pixel values of the pixels in the image tile region, and further, the complexity of the image tile is calculated.
In another example, the complexity of the image tile may be defined or calculated based on a mean square error (MSE) of the pixels in an image tile. If the MSE of the pixels in an image tile is large, it may be considered that the complexity of the image tile is high.
Certainly, the complexity of an image tile may also be defined in another way, or in a combination of the foregoing ways. This is not limited in the embodiments of this disclosure.
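The two definitions above can be sketched as follows. This is an illustrative example only: the horizontal first-difference stands in for a generic high-pass filter, and the function names are hypothetical.

```python
def complexity_highfreq(tile):
    """Cumulative sum of |high-frequency| responses over the tile,
    using a horizontal first-difference as a stand-in high-pass filter."""
    total = 0
    for row in tile:
        for a, b in zip(row, row[1:]):
            total += abs(b - a)
    return total

def complexity_mse(tile):
    """Mean squared deviation of pixel values from the tile mean."""
    pixels = [p for row in tile for p in row]
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

flat = [[10] * 4] * 4           # uniform tile: low complexity
busy = [[0, 255, 0, 255]] * 4   # alternating tile: high complexity
```

A uniform tile yields zero under both measures, while a tile with strong texture accumulates large high-frequency amplitudes, matching the intuition that complex tiles consume more bits when encoded.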
In some exemplary embodiments, the preprocessing circuit 4 may further include a component transformation circuit 44. The component transformation circuit 44 may be configured to perform the aforementioned component transformation operation. Performing component transformation on the to-be-encoded image in the process of calculating the statistical information of the to-be-encoded image is equivalent to removing the operation originally to be performed by the encoder 7 from the encoder 7, and incorporating the operation into the preprocessing circuit 4 for implementation, so that the complexity of the encoder 7 can be reduced. Certainly, in other embodiments, alternatively, the component transformation operation may not be performed by the preprocessing circuit 4, but is still performed by the encoder 7.
Still referring to
The encoder 7 may be a hardware encoder that supports the JPEG 2000 standard. As shown in
The first interface circuit 71 may be configured to read the pre-generated statistical information of a to-be-encoded image from the external storage device 5. The first interface circuit 71 may be further configured to read an image tile of the to-be-encoded image (this may be any image tile in the to-be-encoded image). The first interface circuit 71 may use a specific addressing mode to directly read the image tile of the to-be-encoded image stored in the storage device 5, without segmenting the to-be-encoded image. For example, the to-be-encoded image may be sequentially stored in the storage device 5, and the first interface circuit 71 may calculate a storage location of each image tile based on a location of the to-be-encoded image in the storage device 5, and then read a corresponding image tile in an address jumping mode; or the to-be-encoded image may be stored in the storage device 5 as per image tiles, and the first interface circuit 71 may read the image tiles based on a storage sequence of the image tiles. The first interface circuit 71 may read the image tile from the storage device 5 in a direct memory access (DMA) mode.
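The address-jump reading described above amounts to simple stride arithmetic on a row-major image buffer. The following sketch (parameter names and the byte-per-pixel value are illustrative assumptions) computes the start address of each line of one tile, which is the address sequence a DMA engine would follow:

```python
def tile_line_offsets(image_base, image_width, tile_x, tile_y,
                      tile_size=512, bytes_per_pixel=2):
    """Start address of each line of one tile inside a row-major
    image buffer, so the tile can be fetched with per-line address
    jumps instead of pre-segmenting the image."""
    stride = image_width * bytes_per_pixel          # bytes per image line
    origin = (image_base
              + tile_y * tile_size * stride         # skip rows of tiles above
              + tile_x * tile_size * bytes_per_pixel)  # skip tiles to the left
    return [origin + line * stride for line in range(tile_size)]
```

Each returned offset is `stride` bytes apart, so consecutive DMA bursts jump over the pixels belonging to neighboring tiles.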
The first interface circuit 71 may transmit the statistical information of the to-be-encoded image as rate control information to the rate control circuit 75, so that the rate control circuit 75 performs rate control on the encoding process.
In some exemplary embodiments, the first interface circuit 71 may be further configured to perform a direct current level shift on the image tile, that is, implement the function of the direct current level shift module 124.
The transformation circuit 72 may be configured to perform the operation previously performed by the transformation module 14, that is, perform the wavelet transformation on the image tile. After the wavelet transformation is performed on the image tile, a plurality of sub-bands may be obtained, and the wavelet coefficients of the image tile may be obtained as the wavelet coefficients of these sub-bands.
The quantization circuit 73 may be configured to quantize the wavelet coefficients to obtain quantized wavelet coefficients or wavelet coefficients of quantized sub-bands.
It should be noted that, to reduce the complexity of the encoder 7, some or all operations such as transformation and quantization may be performed by the signal processing apparatus 6 shown in
When the signal processing apparatus 6 participates in the encoding process of an image tile, the signal processing apparatus 6 and the encoder 7 may be considered as an encoding subsystem (the encoding subsystem is represented by a dashed line box on the right in
The first encoding circuit 74 may be configured to perform tier-1 encoding on a code block of the image tile to obtain a bit stream of the image tile. With reference to the above descriptions, it can be known that a wavelet coefficient of a sub-band may be obtained after the transformation and quantization. One sub-band may also be divided into one or more code blocks that may be independently encoded. Therefore, the code block of the image tile is a code block of a sub-band of the image tile.
The first encoding circuit 74 may be configured to perform the operation performed by the tier-1 encoding module 182 in
The rate control circuit 75 may be configured to determine a target rate of the image tile in the to-be-encoded image based on the statistical information of the to-be-encoded image.
In an example in which the statistical information of the to-be-encoded image is the complexity of the image tile in the to-be-encoded image, the rate control circuit 75 may assign a weight to each image tile based on the complexity of each image tile. The higher the complexity of an image tile, the larger its weight. Based on the weight of each image tile and a current network condition (for example, network bandwidth), the rate control circuit 75 may calculate the target rate of the image tile, so that the larger the weight of the image tile, the higher the target rate. In some exemplary embodiments, the statistical information of the to-be-encoded image that is output by the preprocessing circuit 4 may include the weight of each image tile, and the rate control circuit 75 may calculate the target rate by directly using the weight of the image tile.
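One minimal way to realize the weight-based allocation above is to split a frame's bit budget across tiles in proportion to their complexity weights. The sketch below is an illustrative assumption about how such an allocation could look, not the circuit's actual algorithm:

```python
def tile_target_rates(complexities, total_bit_budget):
    """Split a frame's bit budget across tiles in proportion to
    per-tile complexity weights: more complex tiles get a higher
    target rate."""
    total = sum(complexities)
    if total == 0:  # uniform frame: split the budget evenly
        return [total_bit_budget / len(complexities)] * len(complexities)
    return [total_bit_budget * c / total for c in complexities]

rates = tile_target_rates([100, 300, 100], 10_000)
```

In practice the budget itself would be derived from the current network bandwidth, and the proportional rule could be tempered (for example, with minimum per-tile rates) to avoid starving low-complexity tiles.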
The second encoding circuit 76 may be configured to implement the function of the foregoing tier-2 encoding module 184. For example, the second encoding circuit 76 may be configured to perform tier-2 encoding on the bit stream of the image tile based on the target rate to truncate the bit stream of the image tile.
The second encoding circuit 76 may include a rate distortion calculation circuit 762 (or referred to as a slope maker) and a truncation circuit 764 (or referred to as a truncator).
The rate distortion calculation circuit 762 may be configured to calculate a rate distortion slope of the bit stream output by the first encoding circuit 74. For example, the rate distortion calculation circuit 762 may calculate the rate distortion slope based on the rate and distortion of each bit stream (that is, the bit stream of each code block) output by the first encoding circuit 74. The rate distortion slope may be used to evaluate a contribution of the bit stream of the current code block in the entire image tile. The rate distortion slope may be used for subsequent bit stream organization, such as bit stream layering and truncation.
The truncation circuit 764 may be configured to process the bit stream of the image tile based on the target rate and the rate distortion slope. For example, the truncation circuit 764 may be configured to truncate the bit stream of the image tile based on the target rate and the rate distortion slope. Further, the truncation circuit 764 may be further used for bit stream reorganization, bit stream layering, and the like. In addition, in some exemplary embodiments, the truncation circuit 764 may be further configured to generate header information of the bit stream, and transmit the header information and the bit stream together to the next-stage bit stream writing circuit 77.
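The slope computation and truncation described above can be sketched as follows. This is a simplified illustration, assuming candidate truncation points are given as (rate, distortion) pairs with increasing rate and decreasing distortion; the real tier-2 encoder additionally enforces the convex hull of the rate-distortion curve.

```python
def rd_slopes(points):
    """Rate-distortion slopes between successive candidate
    truncation points (rate, distortion): the distortion reduction
    bought per extra bit."""
    slopes = []
    for (r0, d0), (r1, d1) in zip(points, points[1:]):
        slopes.append((d0 - d1) / (r1 - r0))
    return slopes

def truncate(points, target_rate):
    """Keep the deepest truncation point whose rate still fits the
    target rate (points are assumed ordered by increasing rate)."""
    best = points[0]
    for r, d in points:
        if r <= target_rate:
            best = (r, d)
    return best
```

A steep slope means the corresponding coding passes contribute strongly to quality per bit spent, which is why bit streams with small slopes are the first candidates for truncation.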
The bit stream writing circuit 77 may be configured to receive the bit stream organized and output by the truncation circuit 764, and write the bit stream to an external storage device. For example, the bit stream may be written to the external storage device through a bus. The bus may be, for example, an advanced extensible interface (AXI) bus. The bit stream writing circuit 77 may further append information such as an image tile header to the bit stream.
In some exemplary embodiments, the rate control circuit 75 may be further configured to generate status information of a rate control buffer (or referred to as buffer size) based on statistical information of the image tile. The first encoding circuit 74 may be further configured to control the tier-1 encoding based on the status information of the rate control buffer. The status information of the buffer may be used by the first encoding circuit 74 to pre-truncate the bit stream. For example, based on the status information of the buffer, the first encoding circuit 74 may delete a bit stream whose size exceeds a predetermined size, or delete a bit stream that does not comply with a requirement. Therefore, the status information of the buffer may be sometimes referred to as pre-truncation information. Further, in some exemplary embodiments, the rate control circuit 75 may further receive feedback about a size of the bit stream actually encoded by the first encoding circuit 74, and update pre-truncation information of the image tile under each resolution.
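The pre-truncation described above can be pictured as dropping the trailing coding passes that would overflow the rate control buffer, so they never reach tier-2 encoding. The following sketch is an illustrative assumption about that behavior (pass sizes in bits; the function name is hypothetical):

```python
def pretruncate(pass_sizes, buffer_bits):
    """Drop trailing coding passes whose cumulative size would
    overflow the rate-control buffer (pre-truncation)."""
    kept, used = [], 0
    for size in pass_sizes:
        if used + size > buffer_bits:
            break  # this pass and all later ones are pre-truncated
        kept.append(size)
        used += size
    return kept
```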
In some exemplary embodiments, the encoder 7 may further include an interface circuit (not shown in the figure) for software configuration. This interface circuit may be used to configure or change information in a register of the encoder 7, so as to control the encoding mode of the encoder 7.
In some exemplary embodiments of this disclosure, the statistical information of the image tile in the to-be-encoded image is pre-calculated, and the bit stream of the image tile is truncated based on the statistical information. Therefore, rate control is performed on each image tile relatively independently, there is no need to perform overall optimization on all code blocks in the to-be-encoded image, and massive intermediate data is not generated. Therefore, system bandwidth required by the encoder may be reduced in some exemplary embodiments of this disclosure. The entire encoding process of the to-be-encoded image may even be completely performed on the chip.
The conventional JPEG 2000 encoding system may be understood as an online encoding system. The online encoding system directly inputs a to-be-encoded image (for example, an image captured by the sensor 3 in
In some exemplary embodiments, a cache (on-chip cache) may be disposed in the transformation circuit 72 or at an output end of the transformation circuit 72, and configured to temporarily store an intermediate result output by the transformation circuit 72.
In some exemplary embodiments, a cache (on-chip cache) may be disposed in the truncation circuit 764 or at an output end of the truncation circuit 764, and configured to temporarily store an intermediate result output by the truncation circuit 764.
In some exemplary embodiments of this disclosure, since image tiles may be encoded relatively independently, massive intermediate data is not generated. The cache may be configured to temporarily store some intermediate results generated on the chip.
To improve encoding efficiency of the encoder 7, in some exemplary embodiments, two adjacent stage circuits in the encoder 7 may be rate-matched. For example, for two adjacent stage circuits, a circuit with a low processing speed may be set to a multi-path parallel structure; and then a mechanism may be used to control data transmission between the two stage circuits, so that the two stage circuits are fully pipelined.
In an example, the quantization circuit 73 and the first encoding circuit 74 may be rate-matched. Specifically, as shown in
The mode of group arbitration or free arbitration may be used between the quantization circuit 73 and the plurality of encoding units 742 to determine an encoding unit 742 corresponding to an intermediate result output by the quantization circuit 73. Group arbitration means always assigning a code block of a frequency component output by the quantization circuit 73 to a fixed group of encoding units (each group of encoding units may include several encoding units), and free arbitration means that each code block output by the quantization circuit 73 may be received by one of a plurality of parallel encoding units. An advantage of the group arbitration mode lies in its simple circuit connection in hardware implementation, while the free arbitration mode can improve utilization efficiency of the encoding units in some cases.
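The two arbitration modes above can be contrasted with a small sketch. This is an illustrative model only (function names and the modulo-based group selection are assumptions): group arbitration routes a code block to a fixed group and picks an idle unit inside it, while free arbitration may pick any idle unit.

```python
def group_arbitrate(block_id, num_groups, units_per_group, busy):
    """Group arbitration: a code block is always routed to a fixed
    group (here chosen by block id), then to an idle unit in that group."""
    group = block_id % num_groups
    for u in range(group * units_per_group, (group + 1) * units_per_group):
        if not busy[u]:
            return u
    return None  # whole group busy: the block must wait

def free_arbitrate(busy):
    """Free arbitration: any idle unit among all parallel units
    may accept the next code block."""
    for u, is_busy in enumerate(busy):
        if not is_busy:
            return u
    return None
```

Note how, when an entire group is busy, group arbitration stalls even if units in other groups sit idle; free arbitration avoids that stall at the cost of a fuller crossbar between the quantizer output and the encoding units.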
In another example, the first encoding circuit 74 and the rate distortion calculation circuit 762 may be rate-matched. For example, the rate distortion calculation circuit 762 may include a plurality of rate distortion slope calculation modules. The plurality of rate distortion slope calculation modules may be configured to calculate in parallel rate distortion slopes of bit streams output by the first encoding circuit 74. The group arbitration or free arbitration mode may also be used between the first encoding circuit 74 and the rate distortion calculation circuit 762 to determine a rate distortion calculation module corresponding to an intermediate result output by the first encoding circuit 74. Using group arbitration as an example, one rate distortion calculation module may correspond to one group of encoding units in the first encoding circuit 74. As one rate distortion calculation module corresponds to one group of encoding units, the entire circuit design becomes simpler.
Using a 512×512 image tile shown in
When three or four code blocks output by the transformation circuit 72 are connected to the first encoding circuit 74 featuring multi-path parallelism via the quantization circuit 73, the group arbitration mode may be used to determine respective encoding units 742 corresponding to the code blocks.
It is assumed that the first encoding circuit 74 includes three groups of encoding units: a group 0, a group 1, and a group 2. The group 0 includes encoding units u0 to u3. The group 1 includes encoding units u4 to u7. The group 2 includes encoding units u8 to u11. Each image tile may include four components (for example, R, Gr, Gb, and B). A mapping mode shown in the following table may be used to map code blocks of the components to the three groups of encoding units.
In the foregoing table, at a time point t5, encoding units u2 and u3 in the group 0 are in an idle state, and in this case, code blocks to be encoded at a time point t6 may be sent to the encoding units u2 and u3 in the group 0 in advance; at the time point t5, encoding units u5 to u7 in the group 1 are in the idle state, and in this case, code blocks to be encoded at the time point t6 may be sent to the encoding units u5 to u7 in the group 1 in advance; and at the time point t5, encoding units u9 to u11 in the group 2 are in the idle state, and in this case, code blocks to be encoded at the time point t6 may be sent to the encoding units u9 to u11 in the group 2 in advance. In this way, code blocks of components 0 and 2 and code blocks of components 1 and 3 are encoded with high efficiency in a ping-pong mode.
In addition, three rate distortion calculation modules may be disposed in the rate distortion calculation circuit 762, and a group arbitration mechanism may be used between the 12 encoding units and the three rate distortion calculation modules: u0 to u3 may be connected to the first rate distortion calculation module, u4 to u7 may be connected to the second rate distortion calculation module, and u8 to u11 may be connected to the third rate distortion calculation module.
The structure of the encoder 7 provided by some exemplary embodiments of this disclosure is described above with reference to
As shown in
The bit stream reading circuit 81 may be configured to read a to-be-decoded bit stream. For example, the bit stream reading circuit 81 may read a to-be-decoded bit stream from an external storage device (for example, a memory) through an advanced extensible interface (AXI).
The bit stream parsing circuit 82 may also be referred to as a bit stream header parsing circuit (header parser). The bit stream parsing circuit 82 may parse various types of header information in the bit stream to extract parameters and bit stream data related to decoding to be used by the next-stage decoding circuit 83.
The decoding circuit 83 may include one decoding unit, or may include a plurality of parallel decoding units (a specific quantity may be configured based on an actual requirement; for example, eight parallel decoding units may be provided). Each decoding unit in the decoding circuit 83 may independently decode one code block.
In some exemplary embodiments, a preprocessing circuit may be further disposed before the decoding circuit 83. The preprocessing circuit may be configured to assign the decoding parameters, bit stream data, and the like output by the bit stream parsing circuit 82, to the plurality of parallel decoding units.
In some exemplary embodiments, a post-processing circuit may be further disposed after the decoding circuit 83. The post-processing circuit may be configured to reorganize decoded data output by the decoding circuit 83, and output organized data to a next-stage circuit.
The inverse quantization circuit 84 may be configured to perform inverse quantization on the data obtained through decoding by the decoding circuit 83.
The inverse transformation circuit 85 may be configured to perform inverse transformation on data output by the inverse quantization circuit 84. The inverse transformation may be discrete wavelet inverse transformation.
The output circuit 86 may be configured to write data output by the inverse transformation circuit 85 to an external storage device. For example, the data output by the inverse transformation circuit 85 may be written to an external storage device through AXI.
In some exemplary embodiments, the decoder 8 may further include a software configuration interface. The software configuration interface may be used to configure or change information in a register in the decoder 8, to control the decoding mode of the decoder 8.
The decoder 8 provided by some exemplary embodiments of this disclosure may perform decoding in units of image tiles. After the decoder 8 reads a bit stream from an external storage device, the entire decoding process may be performed on a chip (since decoding may be performed in units of image tiles in some exemplary embodiments of this disclosure, intermediate data is not massive and thus can be temporarily stored in an on-chip cache), without interaction with the external storage device, to save system bandwidth. In addition, all stage circuits in the decoder 8 may operate in a pipeline mode to improve decoding efficiency.
Some exemplary embodiments of this disclosure further provide an encoding method. The encoding method may be performed by the encoder 7 or encoding system mentioned previously. As shown in
Step S72: Read pre-generated statistical information of a to-be-encoded image from an external storage device.
Step S74: Determine a target rate of an image tile in the to-be-encoded image based on the statistical information of the to-be-encoded image.
Step S76: Perform tier-1 encoding on a code block(s) in the image tile to obtain a bit stream of the image tile.
Step S78: Perform tier-2 encoding on the bit stream of the image tile based on the target rate to truncate the bit stream of the image tile.
In some exemplary embodiments, the method in
In some exemplary embodiments, the method in
In some exemplary embodiments, the method in
In some exemplary embodiments, the method in
In some exemplary embodiments, step S76 may include: performing tier-1 encoding in parallel on code blocks of the image tile by using a plurality of encoding units.
In some exemplary embodiments, the plurality of encoding units include a plurality of groups of encoding units, and different groups of encoding units are configured to perform tier-1 encoding on code blocks of different frequency components of the image tile.
In some exemplary embodiments, the method in
In some exemplary embodiments, step S78 may include: calculating a rate distortion slope of the bit stream after the tier-1 encoding; and truncating the bit stream of the image tile based on the target rate and the rate distortion slope.
In some exemplary embodiments, the calculating a rate distortion slope of the bit stream after the tier-1 encoding may include: calculating in parallel rate distortion slopes of bit streams after tier-1 encoding by using a plurality of rate distortion slope calculation modules.
In some exemplary embodiments, at least a part of transformation coefficients or quantized coefficients of the to-be-encoded image are generated based on an external signal processing apparatus; and the method in
In some exemplary embodiments, the statistical information of the to-be-encoded image includes complexity of the image tile in the to-be-encoded image.
In some exemplary embodiments, prior to reading pre-generated statistical information of a to-be-encoded image from an external memory, the method in
In some exemplary embodiments, prior to storing the statistical information of the to-be-encoded image in the memory, the method in
All or some of the foregoing exemplary embodiments may be implemented by software, hardware, firmware, or any combination thereof. When software is used to implement the exemplary embodiments, the exemplary embodiments may be implemented completely or partially in a form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedure or functions according to some exemplary embodiments of the present disclosure are all or partially generated. The computer may be a general-purpose computer, a dedicated computer, a computer network, or other programmable apparatuses. The computer instructions may be stored in a computer-readable storage medium or may be transmitted from a computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, a coaxial cable, an optical fiber, or a digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible by a computer, or may be a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital video disc (DVD)), a semiconductor medium (for example, a solid state disk (SSD)), or the like.
A person of ordinary skill in the art may appreciate that the units and algorithm steps in the examples described with reference to the exemplary embodiments disclosed in this specification can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether the functions are performed by hardware or software depends on particular applications and design constraints of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that the implementation goes beyond the scope of this disclosure.
In some exemplary embodiments provided in this disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiments are merely exemplary. For example, the unit division is merely logical function division and may be other division in actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or may not be performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented by using some interfaces. The indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network elements. Some or all of the units may be selected based on actual requirements to achieve the objects of the solutions of the embodiments.
In addition, functional units in the embodiments of this disclosure may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units are integrated into one unit.
The foregoing descriptions are merely specific implementations of this disclosure, but are not intended to limit the scope of protection of this disclosure. Any variation or replacement readily conceived of by a person skilled in the art within the technical scope disclosed in this disclosure shall fall within the scope of protection of this disclosure. Therefore, the scope of protection of this disclosure shall be subject to the scope of protection defined in the claims.
This application is a continuation application of PCT application No. PCT/CN2019/075746, filed on Feb. 21, 2019, and the content of which is incorporated herein by reference in its entirety.
Parent application: PCT/CN2019/075746, Feb. 2019, US. Child application: 17198105, US.