The present invention relates to an image data processing method, and more particularly to an image data processing method for an image processing device that displays many low-resolution moving pictures while decoding them with a hardware decoder.
In moving picture reproduction on current information appliances, encoded moving picture information, which has been encoded and stored in advance to save capacity, is read from a storage medium or downloaded via a network. The obtained information is decoded by processing corresponding to the encoding and supplied to a display device, so that the moving pictures are reproduced on a liquid crystal display or the like.
Because there is enough time to perform the encoding, the encoding processing can be achieved by software. For the decoding processing, however, a hardware circuit, in particular a dedicated image processing processor, is generally used to ensure that moving pictures are reproduced without delay even when they are high-definition.
That is, a configuration is applied in which a higher-order processing control unit merely issues a processing instruction to the dedicated image processing processor, and the image processing processor thereafter performs the image processing autonomously to increase processing efficiency.
The high-order CPU 100 issues a processing instruction, generally called an API (Application Programming Interface) call, to the image processing processor 300. The API is specific to the software platform adopted by each device. For example, DXVA2 (DirectX Video Acceleration 2) is provided as the API when a device runs Windows as the operating system (OS), and VDPAU (Video Decode and Presentation API for Unix) is provided when the OS is Linux®.
The high-order CPU 100 reads necessary encoded image data, which is desired to be displayed on the display unit 400 at a predetermined timing, from the image data ROM 200. Next, the high-order CPU 100 transmits the encoded image data along with an instruction on display (including a display timing, a display position, and the like) to the image processing processor 300. The image processing processor 300 having received this instruction and the corresponding encoded image data performs decoding processing corresponding to the encoding processing using an internal hardware decoder (not illustrated) to restore the original image. When the image processing processor 300 supplies the decoded image data to the display unit 400, the moving picture is reproduced on a liquid crystal screen or the like of the display unit 400.
H.264, the standard used for the moving picture information stored in the image data ROM 200, is briefly explained next. In MPEG-2 and MPEG-4 (except AVC), a hierarchical syntax structure is defined and information is arranged in the bitstream according to that hierarchy. In contrast, H.264 places fewer restrictions on the arrangement of parameter sets and of the slices that refer to them than standards such as MPEG-2.
The parameter sets include an SPS (Sequence Parameter Set) in which information associated with encoding of the entire sequence is stored, a PPS (Picture Parameter Set) indicating an encoding mode of the entire picture, and SEI (Supplemental Enhancement Information) that enables encoding of any information. Because the parameter sets such as the SPS and the PPS can be arranged for each frame in H.264, a plurality of sequences can be addressed in one bitstream.
Each slice, which is a VCL NAL unit, has a header area storing at least the coordinate of the slice's first macroblock (first_mb_in_slice). While a picture is the unit of encoding in MPEG-2 and the like, a slice is the unit of encoding in H.264. Therefore, for example, a picture can be divided into a plurality of slices so that the slices can be processed in parallel.
For example, Patent Literature 1 discloses a technique of handling one frame as a plurality of slices, that is, dividing one image into plural parts encoded as plural slices, in order to improve the encoding efficiency of motion vectors. Patent Literature 2 discloses a video distribution system in which tiles encoded as I pictures, which can be decoded without reference to other image information, are repositioned within a frame by a stream correcting unit, so that the view region of the video can be set freely and changed interactively.
However, the conventional technique represented by Patent Literature 1 divides each image of one moving picture into plural parts; it does not encode each of a plurality of moving pictures as a slice in order to avoid the decrease in decoding efficiency that occurs when a plurality of low-resolution images are handled. The invention described in Patent Literature 2 relates to freely changing the view region of video, and likewise does not avoid that decrease in decoding efficiency.
Patent Literature 1: Japanese Patent Application Laid-open No. 2012-191513
Patent Literature 2: International Publication No. WO2012/060459
Moving picture reproduction in a game machine such as a pachinko machine or a pachislot machine has characteristics significantly different from those of general moving picture reproduction, which reproduces a single moving picture.
As illustrated in
Focusing on decoding processing for images of different resolutions by a hardware decoder, the processing performance is not constant but depends on the resolution. The decoding time, however, is not proportional to the resolution: even if the resolution is halved, the decoding time is not halved.
The reason the processing performance drops considerably as the resolution decreases in decoding by a hardware decoder, as described above, is considered to be that the hardware decoder performs parallel processing in units of a predetermined number of pixels in the vertical direction of an image.
Generally, decoding processing is performed in units of a macroblock (MB) (16×16 pixels, for example). When parallel processing is not performed, one decoder core corresponding to one horizontal line of macroblocks decodes the macroblocks on that line sequentially as illustrated in
In contrast, when a hardware decoder has a parallel processing function, it includes a plurality of decoder cores, each assigned to one of a plurality of macroblocks, and these cores decode their respective macroblocks simultaneously in parallel as illustrated in
When intra prediction is adopted in this case as well, the processing typically progresses in a wavefront (wavelike) manner as illustrated in
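The wavefront progression described above can be sketched as follows. This is an illustrative simplification, not the decoder's actual implementation: it assumes each macroblock becomes decodable once its left neighbor and its upper-right neighbor are finished, so macroblock (row, col) is ready at step col + 2·row, and the set of simultaneously decodable macroblocks forms a diagonal wave whose width is capped by the number of macroblock rows.

```python
# Sketch of wavefront parallelism across macroblock (MB) rows.
# Assumption (simplified dependency model): an MB can be decoded once
# its left neighbor and its upper-right neighbor are done, so MB
# (row, col) becomes decodable at step col + 2 * row.

def wavefront_schedule(mb_rows, mb_cols):
    """Group macroblocks by the earliest step at which they can decode."""
    steps = {}
    for row in range(mb_rows):
        for col in range(mb_cols):
            steps.setdefault(col + 2 * row, []).append((row, col))
    return steps

# For a 4-row x 8-column MB grid, at most 4 MBs (one per row) are
# ever decodable simultaneously: parallelism is capped by mb_rows.
schedule = wavefront_schedule(4, 8)
peak = max(len(mbs) for mbs in schedule.values())
```

Under this model, an image with few macroblock rows can never keep many cores busy, which mirrors the performance behavior discussed in the text.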
An image having a sufficiently high resolution in the vertical direction can sufficiently benefit from the parallel processing by plural hardware decoder cores as illustrated in
It is assumed, for example, that a macroblock consists of 16×16 pixels and eight hardware cores each being provided in a one-to-one relation with a macroblock are provided (
As described above, when decoding processing of an image having a low resolution is performed, the degree of decrease in the processing performance depends on the number of decoder cores used in the parallel processing and the resolution of an image to be decoded. However, if a general-purpose hardware decoder that performs parallel processing with many decoder cores is used for the purpose of quickly performing decoding processing of an image having a high resolution, the processing performance relatively decreases in a case where decoding processing of an image having a low resolution is performed, as compared to a case where an image having a high resolution is processed.
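The dependence on resolution described above can be illustrated numerically. The macroblock size (16 pixels) matches the example in the text, while the eight-core figure is an illustrative assumption:

```python
MB_SIZE = 16    # macroblock height in pixels (per the example in the text)
NUM_CORES = 8   # assumed number of parallel decoder cores

def core_utilization(vertical_resolution):
    """Fraction of decoder cores a single image can keep busy, assuming
    one core per macroblock row up to NUM_CORES."""
    mb_rows = -(-vertical_resolution // MB_SIZE)  # ceiling division
    return min(mb_rows, NUM_CORES) / NUM_CORES

# A 1080-pixel-tall image (68 MB rows) can occupy all eight cores,
# while a 48-pixel-tall image (3 MB rows) can occupy at most three,
# leaving five cores idle throughout the decode.
```

This is why, under the stated assumptions, a low-resolution image decodes at well below the decoder's peak throughput.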
In addition to the decrease in processing performance of a general-purpose hardware decoder premised on parallel processing as described above, the processing speed is also reduced by the overhead that accompanies each low-resolution image. That is, a low-resolution image and a high-resolution image are each still one picture, and decoding one picture is always accompanied by associated processing, such as initialization of the decoder, in addition to the decoding of the image data itself.
Therefore, when a plurality of low-resolution images are to be displayed on a game machine or the like, the associated processing such as decoder initialization increases with the number of images, degrading the apparent decoding performance.
For the reasons described above, a device that displays many low-resolution moving pictures, especially in parallel, such as a game machine typified by a pachinko or pachislot machine, suffers a considerable reduction in processing speed.
The present invention has been achieved in view of the circumstances described above, and an object of the present invention is to provide an image data processing method that prevents a decrease in the decoding processing capability of an image processing device even when the image processing device is included in a game machine on which many low-resolution moving pictures are displayed.
In order to achieve the above object, the present invention provides an image data processing method using an image processing device that includes a high-order CPU that outputs encoded image data associated with a moving picture and issues an instruction associated with reproduction of the moving picture, an image processing processor that has a hardware decoder and decodes the encoded image data associated with the moving picture on the basis of the instruction to be input, and a display unit on which the moving picture is reproduced on the basis of image data decoded by the image processing processor, wherein in a case where a plurality of moving pictures are reproduced on the display unit on the basis of the instruction, the high-order CPU combines respective moving pictures at a level of slices being encoded image data to integrally configure encoded image data while considering the moving pictures as one picture of a plurality of slices, and supplies the encoded image data to the image processing processor, and the hardware decoder decodes the integrated encoded image data.
According to the image data processing method of the present invention, when a plurality of moving pictures having low resolutions are to be reproduced, a high-order CPU combines the moving pictures at a slice level and generates encoded image data of one picture composed of a plurality of slices, and a hardware decoder decodes one picture of plural slices including plural moving pictures combined together. Therefore, processing such as initialization of the hardware decoder necessary for each reproduction of moving pictures is rendered unnecessary, and the decoding processing speed can be improved.
Embodiments of the present invention will be described below in detail with reference to the accompanying drawings. An outline of a first embodiment in an image data processing method according to the present invention is described first with reference to
In brief, according to the present invention, processing of combining pieces of encoded image data of a plurality of moving pictures having low resolutions in the vertical direction, decoding the combined data, and separating original moving pictures from the resultant image data is performed to prevent decrease of the decoding processing capability even if there are many moving pictures having low resolutions in the vertical direction.
More specifically, in
The representation designer performs designing in such a manner that a moving picture X having a high vertical resolution is singly processed and a moving picture Y and a moving picture Z having low vertical resolutions can be combined together and be subjected to decoding processing as illustrated in
Next, a decoder decodes encoded data of the moving picture X singly, restores the moving picture X, and displays the restored moving picture X on a display unit of the image processing device at a predetermined timing. Meanwhile, the decoder combines encoded data of the moving picture Y and the moving picture Z together depending on respective display timings to decode the combined data, restores the moving picture Y and the moving picture Z, and further separates the moving pictures Y and Z from each other to display the moving pictures Y and Z on the display unit at the corresponding timings, respectively (step S3).
In the decoding processing design, the designer can use software that enables the designer to select moving pictures necessary for representation and input respective display timings thereof to sort out moving pictures to be combined together. Alternatively, a simultaneous reproduction processing capability of the image processing device can be input to software to enable control to select moving pictures that can be combined together based on the simultaneous reproduction processing capability.
A first embodiment that puts the outline described above into more concrete form is explained below in detail, including some predetermined conditions.
<Decoding Design Processing (step S1)>
Examined from the viewpoint of resolution, the conditions for sorting out moving pictures to be combined at the time of decoding are as follows.
Condition (1) indicates that an individual moving picture does not benefit from the parallel processing of plural decoder cores, as explained in connection with the conventional technique. Condition (2) indicates that, as a result of combining plural moving pictures, the sum of their vertical resolutions exceeds the number of pixels covered by the parallel processing of plural decoder cores, so that the combined moving pictures do benefit from the parallel processing. Condition (3) indicates that no more moving pictures are combined than the decoder can decode.
That is, a plurality of moving pictures that individually do not benefit from the parallel processing of plural decoder cores are combined, within the range the decoder can decode, so that they do benefit from it. Furthermore, from the viewpoint of the timings at which the moving pictures are displayed on the image processing device, the moving pictures to be combined should be as close as possible in display timing.
Moving pictures to be displayed at any timings can be combined as long as they meet the resolution conditions described above. However, the more distant the time of decoding and the time of display are from each other, the longer the decoded image data must be retained in the image data buffer where it is temporarily stored.
Therefore, the sorting of moving pictures to be combined from the viewpoint of display timing depends on the capacity of the image data buffer. In the case of a game machine, for example, it is preferable to perform the decoding design processing on moving pictures that are reproduced at the same time, or in the same period of one event, and that meet conditions (1) to (3) described above, as data to be combined together.
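The sorting logic of conditions (1) to (3) can be sketched as a greedy packing over the moving pictures reproduced in the same period. The threshold values and the greedy strategy below are illustrative assumptions, not values fixed by the invention:

```python
PARALLEL_HEIGHT = 128     # assumed pixels covered by the parallel decoder cores
MAX_DECODE_HEIGHT = 1088  # assumed decoder limit for one combined picture

def group_for_combination(videos):
    """videos: list of (name, vertical_resolution_px) pairs reproduced in
    the same period.  Keeps only pictures that fail to fill the parallel
    cores alone (condition 1) and packs them as tall as possible without
    exceeding the decoder's capacity (condition 3).  Whether a resulting
    group also satisfies condition (2) -- combined height above
    PARALLEL_HEIGHT -- can then be checked per group."""
    low = [(n, h) for n, h in videos if h < PARALLEL_HEIGHT]  # condition (1)
    groups, current, height = [], [], 0
    for name, h in sorted(low, key=lambda v: -v[1]):
        if height + h > MAX_DECODE_HEIGHT:                    # condition (3)
            groups.append(current)
            current, height = [], 0
        current.append(name)
        height += h
    if current:
        groups.append(current)
    return groups
```

A real design tool would additionally weigh the display-timing proximity and buffer capacity discussed above when forming the groups.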
<Encoding Processing (step S2)>
Encoding processing for moving pictures is explained next with reference to
In the case of a moving picture A displayed on the entire display screen as illustrated in
On the other hand, in the case of a moving picture B to a moving picture G (having a vertical resolution equal to or lower than 128 pixels, for example) as illustrated in
At step S23, the representation designer performs padding processing of the image data. The processes at steps S22 and S23 can be performed using the software for representation creation. That is, using software with which the designer selects the moving pictures necessary for the representation and inputs their display timings, the designer or the software can determine whether a selected moving picture is one to be combined in view of the above conditions, and padded data can be output as required according to the resolution of the moving picture to be combined.
For example, a case where the moving picture B and the moving picture F can be combined as a set, and the moving picture C, the moving picture D, and the moving picture E (the moving picture G) are to be combined as a set is assumed. The moving picture G is a moving picture that is to be reproduced after interruption of the moving picture E when the moving picture E is interrupted in the middle of reproduction, as illustrated in
Therefore, the representation designer or the software performs padding processing of image data with respect to the horizontal direction of the moving pictures to equalize the respective resolutions in the horizontal direction. At this time, the representation designer or the software can uniformly equalize the respective resolutions in the horizontal direction of the moving pictures to the maximum processing resolution in the horizontal direction of the decoder. This enables all sets to have the same maximum processing resolution and facilitates the padding processing itself.
However, for example, if the respective resolutions in the horizontal direction of moving pictures are uniformly equalized to the maximum processing resolution in the horizontal direction of the decoder in a case where only moving pictures having low resolutions in the horizontal direction, such as the moving picture C, the moving picture D, and the moving picture E, are combined together, pad data increases and the encoding efficiency lowers.
Therefore, when moving pictures are to be combined together, it is advantageous to set some references to be matched, such as 120, 240, 480, 960, and 1920 pixels in the horizontal direction. The references to be matched are determined based on the resolution of a moving picture having a highest horizontal resolution among the moving pictures to be combined together. Therefore, setting of the number of references or the values thereof depends on variation in the respective horizontal resolutions of the moving pictures to be combined together.
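Choosing the padded horizontal resolution for a set can be sketched as rounding the widest member up to the nearest reference value. The reference list mirrors the example values in the text; the sample widths are illustrative:

```python
H_REFERENCES = [120, 240, 480, 960, 1920]  # reference widths from the text

def padded_width(widths):
    """Pick the smallest reference width that fits the widest moving
    picture in the set to be combined; every member is then padded to
    that width rather than to the decoder's maximum."""
    widest = max(widths)
    for ref in H_REFERENCES:
        if widest <= ref:
            return ref
    return H_REFERENCES[-1]

# E.g. narrow moving pictures of 110, 96, and 104 pixels would all be
# padded to 120 pixels instead of 1920, keeping the pad data small.
```

This keeps the amount of pad data, and hence the loss of encoding efficiency, proportionate to the widest picture actually present in the set.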
In a case where some references are thus set in the horizontal resolution, the horizontal resolutions of moving pictures are also one of sorting conditions on the moving pictures to be combined, in addition to the conditions (1) to (3) described above.
At step S24, the encoder performs encoding processing of each of the moving pictures. The encoder performs encoding processing of each of the moving pictures by software processing based on the H.264 standard, for example. A condition at this time is that the encoding processing is performed in such a manner that the numbers of frames included in one GOP are equal (30, for example) and the orders of picture types of the frames of the respective moving pictures are the same at least in the moving pictures to be combined.
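The encoding condition above can be checked mechanically: streams to be combined must share the same GOP length and the same per-frame sequence of picture types. A minimal sketch, using one character per frame type:

```python
def gops_compatible(type_sequences, gop_len=30):
    """type_sequences: one picture-type string per stream for one GOP,
    e.g. "IPPP..." with one character per frame.  All streams to be
    combined must have the same GOP length and the same order of
    picture types, so every sequence must equal the first."""
    return all(
        len(seq) == gop_len and seq == type_sequences[0]
        for seq in type_sequences
    )
```

Streams passing this check yield slices of matching picture type at every frame position, which is what allows them to be merged into one picture later.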
A bitstream of each moving picture after encoding is explained next with reference to
Another characteristic point of the present invention is performing encoding processing where “the data size of the original image” (the resolution) (110×48 in the case of the moving picture C illustrated in
At step S25, the representation designer or the software determines whether there remains an unprocessed moving picture. When an unprocessed moving picture remains (it is determined as YES), the process returns to step S21 to repeat the processes described above. When processing of all moving pictures is completed, the process ends. In the first embodiment described above, the resolutions in the vertical direction of the moving pictures are kept, that is, different. However, it is also possible to equalize also the respective resolutions in the vertical direction of the moving pictures to be combined and subjected to decoding processing by the padding processing.
When also the resolutions in the vertical direction are equalized, replacement of the moving pictures to be combined together in the same set and subjected to decoding processing can be easily performed and processing in the encoding design processing described above is simplified. For example, because the resolution in the vertical direction of the moving picture C is 64 pixels, the padding processing is performed to change the resolutions in the vertical direction of the moving picture D, the moving picture E, and the moving picture G being targets of combination from 48 pixels to 64 pixels. When subsequently intending to display a moving picture H (having a vertical resolution of 64 pixels regardless of whether the padding processing has been performed, not illustrated) instead of the moving picture E, the representation designer can easily replace the moving picture E with the moving picture H.
<Image Data Processing in Image Processing Device Including Decoder (step S3)>
The image processing device B illustrated in
The high-order CPU 1 includes a pre-processing unit 11 and the pre-processing unit 11 includes a respective-moving-picture reproduction-timing-information generating unit 111 and an inter-moving-picture respective-picture combining unit 112. The image processing processor 3 includes a post-processing unit 31, a hardware decoder 32, and a video memory 33, and the post-processing unit 31 has a command interpreting unit 311 and a drawing control unit 312. A respective-moving-picture separating unit 3121, an original-image clipping unit 3122, and a respective-moving-picture display-timing control unit 3123 are included in the drawing control unit 312.
Data of each moving picture encoded by the <encoding processing (step S2)> described above is stored in the image data ROM 2. That is, data information of the original image size is stored in a non-VCL NAL unit such as SEI of a stream of each of moving pictures in a state where the horizontal resolutions of the moving pictures to be combined with other moving pictures are equalized or the horizontal resolutions and the vertical resolutions thereof are both equalized according to the reproduction timings of the moving pictures.
Image data processing is performed in the image processing device illustrated in
First, the high-order CPU 1 reads encoded image data associated with necessary moving pictures from the image data ROM 2 on the basis of representation designing in the image processing device B. The data read at this time is based on information related to each of moving pictures that can be combined together and are designed in the <decoding design processing (step S1)> described above, and the read data is combined by the inter-moving-picture respective-picture combining unit 112 of the pre-processing unit 11.
In the present embodiment, the SPS (Sequence Parameter Set), the PPS (Picture Parameter Set), and the SEI (Supplemental Enhancement Information) are included as a group, and the parameter sets and the like of the bitstream after combination are rewritten as necessary. For example, while the SPS includes information on the number of reference frames and the width and height of a frame, at least the height information is corrected because one picture after combination differs in height (vertical resolution) from the original individual pictures. The PPS includes information on the encoding method, the quantization coefficient, and the like; this information need not be corrected if the encoding method, the quantization coefficient, and the like are set to be common to the moving pictures at the time of encoding. When a moving picture to be combined has a different quantization coefficient, the coefficient is compensated for by rewriting the slice_qp_delta included in the header of the slice.
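The parameter rewriting just described can be sketched at a high level. Real SPS/PPS fields are Exp-Golomb coded inside the RBSP, so the flat dictionary representation and the field names below are purely illustrative, not a working bitstream editor:

```python
def rewrite_parameters(params, slices):
    """params: dict of H.264-style header fields (illustrative flat
    representation; real fields are Exp-Golomb coded in the RBSP).
    slices: one dict per moving picture to be combined, each carrying
    its height in macroblock rows and the QP it was encoded with."""
    params = dict(params)
    # One combined picture is as tall as the sum of its slices, so at
    # least the height field of the shared SPS must be corrected.
    params["pic_height_in_map_units_minus1"] = (
        sum(s["height_mb"] for s in slices) - 1
    )
    # A slice encoded with a different quantizer is compensated for by
    # rewriting its slice_qp_delta relative to the shared base QP.
    for s in slices:
        s["slice_qp_delta"] = s["qp"] - params["pic_init_qp"]
    return params, slices
```

The point of the sketch is that only a handful of header fields need touching; the VCL payload of each slice is carried over unmodified.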
Information on the data size of the original image is stored in the container such as the SEI in the bitstream of each moving picture before combination. The inter-moving-picture respective-picture combining unit 112 extracts the information of the original image data from each piece of the SEI of the bitstreams of the moving pictures to be combined and supplies the extracted information to the image processing processor 3. Information of the start macroblock coordinate is stored in the header of each of the slices to be combined together. In the example illustrated in
The respective slices of the moving pictures are extracted sequentially by the method described above and combined to form a new bitstream, which is supplied to the hardware decoder 32 of the image processing processor 3. Because the numbers of frames in the GOPs and the picture types of the respective frames are matched when the moving pictures are encoded, the picture types of the slices combined by the inter-moving-picture respective-picture combining unit 112 match each other.
Timings at which the moving pictures based on the combined encoded image data being composed of plural slices are to be displayed may be different. For example, as illustrated in
The hardware decoder 32 performs decoding processing of the combined encoded image data supplied from the high-order CPU 1 and supplies the result of the decoding to the video memory 33 as well as outputting a decoding completion notification (not illustrated) to the high-order CPU 1.
The high-order CPU 1 having received the decoding completion notification from the hardware decoder 32 issues an instruction on post-processing to the post-processing unit 31 of the image processing processor 3. The command interpreting unit 311 of the post-processing unit 31 supplies at least original image data information of each of the moving pictures associated with combination and information associated with respective reproduction timings of the moving pictures, which are included in the instruction, to the drawing control unit 312.
The respective-moving-picture separating unit 3121, the original-image clipping unit 3122, and the respective-moving-picture display-timing control unit 3123 included in the drawing control unit 312 perform predetermined post-processing for the decoded image data stored in the video memory 33 on the basis of the original image data information of each of the moving pictures and the information associated with the reproduction timing of each of the moving pictures, and the start macroblock coordinate data stored in the header of each slice, and then supplies an obtained result to the display unit 4 to be displayed as each of the moving pictures.
Contents of the processing performed by the pre-processing unit 11 and the post-processing unit 31 are further explained with reference to
The high-order CPU 1 acquires encoded image data from the image data ROM 2 on the basis of the issued instruction. For example, in the case of an instruction to “reproduce the moving picture A from a predetermined timing”, the high-order CPU 1 acquires encoded image data related to the moving picture A from the image data ROM 2. The high-order CPU 1 supplies the encoded image data related to the moving picture A to the hardware decoder 32 of the image processing processor 3. After the hardware decoder 32 stores decoded image data in the video memory 33, the high-order CPU 1 creates an image using the drawing control unit 312 and supplies the created image to the display unit 4 at a predetermined timing to enable reproduction of the moving picture A on the display unit 4.
When an instruction to reproduce a plurality of moving pictures, for example, the moving pictures C, D, E, and G is issued in the high-order CPU 1, the high-order CPU 1 reads necessary encoded image data from the image data ROM 2. The inter-moving-picture respective-picture combining unit 112 rewrites information of the first macroblock coordinate included in the header area of the slice included in the bitstream of each of the read moving pictures according to the combination situation of the moving pictures and combines the plural slices to supply the combined slices to the hardware decoder 32 of the image processing processor 3. At the time of reading the necessary encoded image data from the image data ROM 2, the high-order CPU 1 extracts the information related to the data size of the original image, which is stored in the SEI of the bitstream of each of the read moving pictures.
Rewriting of the coordinate information of the first macroblock included in the header area of the slice is performed as follows. Information related to the number of the first macroblock of the relevant slice is stored in the header area and information of “0” is initially stored because the first macroblock normally starts from the upper left part of the screen. However, in moving picture data to be combined, the coordinate of the first macroblock changes according to arrangement of the slices at the time of combination and thus the coordinate is changed to a desired value by the rewriting.
When the moving picture C, the moving picture D, and the moving picture E are to be combined collectively as one large image as illustrated in
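The first_mb_in_slice rewriting can be sketched as follows. Macroblock addresses count in raster order, so when slices are stacked top to bottom, each slice's first address equals the number of macroblocks above it. The sketch assumes the coded sizes are already padded to macroblock-aligned values (the 128-pixel width below is an illustrative MB-aligned figure, not a value from the text):

```python
MB = 16  # macroblock edge length in pixels

def first_mb_addresses(slice_heights_px, width_px):
    """Given the common (padded, MB-aligned) width and each slice's
    height, return the first_mb_in_slice value to write into each
    slice header when the slices are stacked top to bottom into one
    combined picture."""
    mb_per_row = width_px // MB
    addresses, mbs_above = [], 0
    for h in slice_heights_px:
        addresses.append(mbs_above)
        mbs_above += (h // MB) * mb_per_row
    return addresses

# Three 128-pixel-wide slices of heights 64, 48, and 48 pixels give
# first_mb_in_slice values of 0, 32, and 56 respectively.
```

Each individually encoded stream initially carries 0 in this field, so only the second and subsequent slices in the stack actually need rewriting.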
When receiving the encoded image data, the hardware decoder 32 recognizes that one picture is composed of plural slices (three slices in this example). Because composing one picture of plural slices is approved by the standard (H.264), the hardware decoder 32 can perform the decoding processing of the input encoded image data as it is.
However, the hardware decoder 32 does not recognize that the encoded image data is a combination of a plurality of moving pictures. In other words, it is unnecessary to add any alteration to the hardware decoder 32.
After the encoded image data is decoded by the hardware decoder 32 and is stored in the video memory 33, the high-order CPU 1 generates information for clipping individual images from the combined decoded image data, that is, information such as the data sizes of the respective original images of the moving pictures and the moving-picture display timings as instruction information, and outputs the generated instruction information to the image processing processor 3. This instruction information is interpreted by the command interpreting unit 311 of the image processing processor 3.
For example, when the interpreted instruction is to “reproduce the moving picture C from a predetermined timing”, “reproduce the moving picture D from a predetermined timing”, and “reproduce the moving picture E from a predetermined timing”, the command interpreting unit 311 instructs the drawing control unit 312 to clip the image associated with each of the moving pictures C, D, and E from the decoded image data stored in the video memory 33. That is, because the information for clipping individual images from the combined decoded image data, the data sizes of the respective original images, and information such as the moving-picture display timings are received from the pre-processing unit 11 of the high-order CPU 1, the command interpreting unit 311 outputs the received information to the drawing control unit 312. The drawing control unit 312 controls the respective-moving-picture separating unit 3121, the original-image clipping unit 3122, and the respective-moving-picture display-timing control unit 3123 to display the moving pictures corresponding to the instruction from the high-order CPU 1 on the display unit 4.
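The clipping performed by the drawing control unit can be sketched as plain sub-rectangle copies from the combined decoded frame: the vertical offsets mirror the slice positions chosen at combination time, and the original-image sizes come from the SEI. The rows-of-pixels representation is purely illustrative:

```python
def clip_original_images(frame, layout):
    """frame: decoded combined image as a list of pixel rows.
    layout: list of (name, y_offset, width, height) entries derived
    from the slice coordinates and the SEI original-size data.
    Returns one clipped sub-image per moving picture."""
    out = {}
    for name, y, w, h in layout:
        out[name] = [row[:w] for row in frame[y:y + h]]
    return out

# E.g. moving pictures C (top), D, and E (stacked below) are clipped
# out of one combined frame, discarding any padding on the right and
# bottom, before each is handed to the display unit at its timing.
```

Because the clip uses the original sizes rather than the padded sizes, the pad data added at encoding time never reaches the screen.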
Because the moving pictures are different in the reproduction periods, the decoding processing of a moving picture having a shorter reproduction period ends earlier as illustrated in
States of GOPs and respective slices in a case where the moving picture E is switched in the middle to the moving picture G as illustrated in
As described above, according to the first embodiment of the image data processing method of the present invention, moving pictures having low resolutions in the vertical direction are combined together and are subjected to decoding processing at the same time. Accordingly, even when parallel processing is performed with a plurality of decoder cores in the vertical direction, the function can be utilized.
Furthermore, the associated processing such as initialization of the decoder is decreased by reduction in the number of times of decoding processing due to combination of a plurality of moving pictures and thus the processing time is considerably shortened.
The moving pictures are combined in such a manner that plural slices of the moving pictures are brought together in units of pictures. Therefore, there is no need to alter the configuration or function of the hardware decoder 32.
A second embodiment of the present invention is explained next. In the first embodiment described above, to combine moving pictures, the orders of picture types of respective GOPs of the moving pictures are set in order and the numbers of frames therein are matched at the time of encoding. Therefore, switching of moving pictures needs to be performed at a timing when the picture types are all I pictures, which places a restriction on the switching timing of moving pictures. Furthermore, while moving pictures having reproduction timings that are close to each other are combined, the information for controlling the moving-picture display timing must be supplied from the pre-processing unit 11 of the high-order CPU 1 to the post-processing unit 31 of the image processing processor 3, and a data buffer for storing image data therein is required.
In the second embodiment, the method of the first embodiment is further improved and a method in which information for matching the orders of picture types of GOPs or for controlling the display timings is not supplied to the image processing processor is disclosed.
The general flow of the image data processing method is explained first. As illustrated in
In this case, the representation designer is a person that determines moving pictures to be reproduced at the same time in certain representation on a game machine such as a pachinko machine. For example, as illustrated in
The encoder encodes the moving picture X, the moving picture Y, and the moving picture Z (step S2). Next, the decoder decodes encoded data of the moving picture X singly, restores the moving picture X, and displays the moving picture X on the display unit of the image processing device at a predetermined timing. Meanwhile, depending on display timings, the decoder combines the encoded data of the moving picture Y and the moving picture Z together according to the respective display timings to decode the combined data, restores the moving picture Y and the moving picture Z, and further separates the moving pictures Y and Z from each other to display the moving pictures Y and Z on the display unit (step S3).
The second embodiment is different from the first embodiment in that the orders of picture types of respective GOPs of the moving pictures to be combined together do not need to be matched and the numbers of frames do not need to be equalized in the encoding at step S2. Therefore, moving pictures to be combined can be prepared with no regard for the picture types or the number of frames of other moving pictures to be combined therewith. The second embodiment is different from the first embodiment also in that encoded data of moving pictures are combined and decoded according to the respective display timings.
<Decoding Design Processing (Step S1)>
Conditions for sorting out moving pictures to be combined together at the time of decoding are as follows from the viewpoint of the resolution.
The condition (1) indicates that an individual moving picture does not benefit from the parallel processing of plural decoder cores as explained in association with the conventional technique. The condition (2)′ indicates that, as a result of combining plural moving pictures according to the display timings, the sum of the resolutions in the vertical direction exceeds the number of pixels required for the parallel processing of plural decoder cores, so that the moving pictures benefit from the parallel processing. Further, the condition (3)′ indicates that moving pictures are not combined beyond the number that the decoder can decode at any one time.
That is, it is indicated that a plurality of moving pictures that each normally do not benefit from the parallel processing of plural decoder cores are combined together in a range where the decoder can perform decoding processing, thereby benefiting from the parallel processing of plural decoder cores.
In the first embodiment, moving pictures having close reproduction timings are supplied to the decoder at the same time and therefore the display timings are not considered in the conditions (2) and (3). However, because moving pictures are combined according to the display timings in the second embodiment, the conditions (2)′ and (3)′ include a condition related to the display timings.
In the present embodiment, a concept is introduced in which a predetermined vertical resolution not exceeding “the maximum resolution” in the vertical direction of the decoder is defined, and moving pictures that meet the conditions (1) and (2)′ described above are appropriately combined together within the defined resolution range. That is, as in general memory management, addition or deletion of moving pictures according to the display timings is performed by managing an available region associated with the vertical resolution.
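As a minimal sketch, the available-region management described here can be modeled like a simple first-fit memory allocator over the frame's vertical extent. The class and method names (`CombinationFrame`, `allocate`, `release`) and the concrete height values are illustrative assumptions, not part of the embodiment itself.

```python
# Hypothetical sketch: manage the vertical free space of a combination image
# frame the way a simple first-fit allocator manages memory.
class CombinationFrame:
    """Tracks which vertical spans of the frame are occupied by which movie."""

    def __init__(self, max_height):
        self.max_height = max_height      # decoder's "maximum resolution"
        self.regions = {}                 # movie_id -> (y_offset, height)

    def _free_spans(self):
        """Return the free (y, height) spans between allocated regions."""
        used = sorted(self.regions.values())
        spans, y = [], 0
        for ry, rh in used:
            if ry > y:
                spans.append((y, ry - y))
            y = ry + rh
        if y < self.max_height:
            spans.append((y, self.max_height - y))
        return spans

    def allocate(self, movie_id, height):
        """First-fit allocation of a vertical span; None if it does not fit."""
        for y, h in self._free_spans():
            if h >= height:
                self.regions[movie_id] = (y, height)
                return y
        return None

    def release(self, movie_id):
        """Free the span when the movie's display ends."""
        self.regions.pop(movie_id, None)
```

With this model, a newly displayed movie can be placed into a span freed by a movie whose display has ended, mirroring the addition and deletion of moving pictures according to the display timings.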
From the viewpoint of timings when moving pictures are displayed on the image processing device, the moving pictures to be combined together have display timings that are close to each other.
In regard to this point, in the first embodiment, decoding processing is performed while temporal heads of respective pieces of encoded image data are aligned and the GOPs are also matched regardless of the display timings of moving pictures to be combined together. Therefore, a data buffer that temporarily stores therein image data after the decoding processing is required in the light of the display timings of the respective moving pictures. Accordingly, in the light of the limited capacity of the data buffer, moving pictures to be combined together are moving pictures having display timings as close as possible.
In contrast thereto, in the present embodiment, encoded image data of moving pictures are combined according to the respective display timings of the moving pictures as will be described later and thus a data buffer that stores therein image data after decoding processing as in the first embodiment is not required. However, to effectively use the video memory, it is preferable that combination is performed for “moving pictures that are reproduced at the same time or in the same period in one event” as targets as explained in the first embodiment.
This decoding design processing can be performed by the representation designer, or can be performed using software that enables the designer to select moving pictures necessary for representation and input the respective display timings, thereby sorting out moving pictures to be combined together. The sorting can also be controlled by inputting the simultaneous reproduction processing capability of the image processing device to the software, which then sorts out moving pictures to be combined together based on that capability. It is alternatively possible to set a combination image frame described later and to control the sorting of moving pictures using the frame.
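A hedged sketch of this sorting, under assumed threshold values: a movie is a combination candidate only if its vertical resolution alone is too small to benefit from parallel decoder cores (condition (1)), and a group is closed once adding another movie would exceed the decoder's maximum vertical resolution (condition (3)′). The names `PARALLEL_MIN`, `DECODER_MAX`, and `sort_out`, and both numeric values, are illustrative assumptions.

```python
# Illustrative thresholds, not taken from the embodiment.
PARALLEL_MIN = 256   # below this height, a single movie gains nothing
                     # from parallel decoder cores (condition (1))
DECODER_MAX = 1088   # decoder's maximum vertical resolution (condition (3)')

def sort_out(movies):
    """movies: movie_id -> vertical resolution. Returns groups to combine."""
    groups, current, height = [], [], 0
    for movie_id, v in movies.items():
        if v >= PARALLEL_MIN:
            continue                      # tall enough: decode it singly
        if height + v > DECODER_MAX:      # group full: close it out
            groups.append(current)
            current, height = [], 0
        current.append(movie_id)
        height += v
    if current:
        groups.append(current)
    return groups
```

In practice the software would also weigh the display timings of the candidates, per conditions (2)′ and (3)′; this sketch covers only the resolution side.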
<Encoding Processing (Step S2)>
The encoding processing for moving pictures is explained next. A procedure of the encoding processing is performed as illustrated in
The representation designer selects a moving picture to be encoded (step S21).
The representation designer or the software determines whether the selected moving picture is a moving picture to be combined with other moving pictures on the basis of the decoding design processing described above (step S22).
When the moving picture is a moving picture that is subjected to decoding processing singly, it is determined as NO at step S22. The representation designer or the software skips step S23 and the process proceeds to step S24.
On the other hand, when the moving picture is to be combined with any other moving picture and subjected to decoding processing, it is determined as YES at step S22 and the process proceeds to step S23.
Padding processing at step S23 is identical to that in the first embodiment and therefore explanations thereof are omitted. In the second embodiment, the image size defined by “the maximum resolution” in the vertical direction of the decoder and the horizontal resolution assumed by the padding processing is referred to as the “combination image frame” for the sake of convenience.
Next, the encoder performs encoding processing of the respective moving pictures at step S24. The encoder performs the encoding processing by software processing based on the H.264 standard, for example. In the first embodiment, the conditions at this time are defined to perform the encoding processing where the numbers of frames included in one GOP are equal (30, for example) and the orders of picture types are the same at least in moving pictures in the same set.
In contrast thereto, in the second embodiment, encoding processing is performed that prevents the respective decoding timings and the respective decoding output timings from differing, at least among moving pictures that are determined to belong to the same set at step S22. To prevent the decoding timings and the decoding output timings from differing, it suffices to manage the reference buffers identically in the respective moving pictures. For example, in the H.264 standard, encoding is performed with the POC Type set to 2, so that an image in the chronological future is not used as a reference image, and with the SPS (Sequence Parameter Set) and the PPS (Picture Parameter Set) made common in all items other than those related to the resolutions. Other specific examples of identical management of the reference buffers among moving pictures to be combined together are as follows.
By encoding moving pictures in conformance with these conditions on reference-buffer management, and by introducing the concept of a padding slice as will be explained later, the moving pictures can be combined and subjected to decoding processing without equalizing the GOP configurations to match the picture types as explained with reference to
A bitstream of each of the encoded moving pictures (see
At step S25, the representation designer or the software determines whether there remains an unprocessed moving picture. The process returns to step S21 to repeat the above processes when an unprocessed moving picture remains (it is determined as YES), and the process ends when processing of all moving pictures is completed.
In the padding processing for equalizing the horizontal resolutions of moving pictures illustrated in
<Image Data Processing by Actual Working of Image Processing Device Including Decoder (Step S3)>
The high-order CPU 1 includes a pre-processing unit 15 and the pre-processing unit 15 includes an inter-moving-picture respective-picture combining unit 16 and a respective-moving-picture combination-information generating unit 17. The inter-moving-picture respective-picture combining unit 16 has a padding-slice generating unit 18.
The image processing processor 3 includes the hardware decoder 32, the video memory 33, and a post-processing unit 35. The post-processing unit 35 includes a command interpreting unit 36 and a drawing control unit 37. The respective-moving-picture separating unit 3121 and the original-image clipping unit 3122 are included in the drawing control unit 37.
The image data ROM 2 has respective data of moving pictures encoded by the processing described above stored therein. That is, moving picture data to which the identical management on reference buffers is applied are stored in the image data ROM 2 to prevent the respective decoding timings and the respective decoding output timings from differing among the moving pictures.
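The identical reference-buffer management recalled here could be approximated, for illustration only, as a compatibility check over simplified SPS fields: two streams are treated as combinable when their parameters agree in everything except resolution and no chronologically future image is used as a reference (POC type 2). The dict keys are simplified stand-ins for real H.264 SPS syntax elements, not an actual bitstream parser.

```python
# Resolution-related fields are the only ones allowed to differ between
# streams that will be combined (names simplified from the H.264 SPS).
RESOLUTION_KEYS = {"pic_width_in_mbs", "pic_height_in_map_units"}

def combinable(sps_a, sps_b):
    """True if the two (simplified) SPS dicts differ only in resolution."""
    for key in (set(sps_a) | set(sps_b)) - RESOLUTION_KEYS:
        if sps_a.get(key) != sps_b.get(key):
            return False
    # pic_order_cnt_type == 2 ties output order to decoding order, i.e.
    # no image in the chronological future serves as a reference image
    return sps_a.get("pic_order_cnt_type") == 2
```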
The outline of the image data processing in the image processing device C illustrated in
The high-order CPU 1 reads encoded image data associated with necessary moving pictures from the image data ROM 2 on the basis of a design operation in the image processing device C. The high-order CPU 1 reads encoded image data related to a plurality of moving pictures that can be combined together from the image data ROM 2 on the basis of information associated with moving pictures that are designed in the <decoding design processing (step S1)> and can be combined together. The inter-moving-picture respective-picture combining unit 16 subsequently combines respective pictures of the moving pictures as slices according to timings when the moving pictures are to be displayed, and rewrites the information of the first macroblock coordinates included in the header areas of the slices to be combined. For a time period in which one of the combined moving pictures is not displayed, or a time period in which one of the moving pictures is temporarily stopped, the padding-slice generating unit 18 generates a padding slice and applies the generated padding slice to that time period. The inter-moving-picture respective-picture combining unit 16 supplies the combined encoded image data composed of plural slices to the hardware decoder 32 of the image processing processor 3.
The hardware decoder 32 decodes the supplied encoded image data. When the supplied encoded image data includes a plurality of moving pictures combined by the pre-processing unit 15, information of the rewritten first macroblock coordinate after combination is contained in the header of each slice in the supplied bitstreams of the moving pictures. Therefore, the hardware decoder 32 processes the encoded image data as an image of plural slices.
In the second embodiment, the inter-moving-picture respective-picture combining unit 16 combines moving pictures to be combined together at timings when the moving pictures are to be displayed. Therefore, there is no need to generate reproduction timing information associated with each of the moving pictures and notify the image processing processor 3 of the information as in the first embodiment.
However, the image processing processor 3 needs to be successively notified of information about which region in the vertical direction of the combination image frame is occupied by an image to be displayed or which region is available, and of the data size of the original image. When the bitstreams of the respective moving pictures are read from the image data ROM 2, the high-order CPU 1 extracts the information related to the data size of the original image from the NAL units, such as the SEI, of each of the streams, causes the respective-moving-picture combination-information generating unit 17 to generate information related to the used regions in the combination image frame at the time when the moving pictures are combined together (hereinafter, “original image and combined image information”), and supplies the generated information as an instruction, along with the original-image data-size information, to the image processing processor 3.
That is, the high-order CPU 1 supplies encoded image data to the hardware decoder 32 of the image processing processor 3 and simultaneously issues an instruction on decoding, including the original image and combined image information generated by the respective-moving-picture combination-information generating unit 17 described above, to the post-processing unit 35 of the image processing processor 3.
In the image processing processor 3, the hardware decoder 32 performs decoding processing of the encoded image data supplied from the high-order CPU 1 and supplies the decoding result to the video memory 33. At this time, the decoded image data supplied from the hardware decoder 32 always includes latest frames that are required for display of the corresponding moving pictures, respectively. The reason thereof is that the moving pictures are combined together according to the display timings in the high-order CPU 1. Therefore, in the second embodiment, a special buffer required in the first embodiment is not required.
Meanwhile, the command interpreting unit 36 of the post-processing unit 35 having received the instruction from the high-order CPU 1 interprets the received instruction. At this time, the post-processing unit 35 supplies the information associated with the horizontal resolution included in the original image and combined image information, the information indicating which region in the vertical direction is occupied by each of moving pictures associated with combination and which region is available, and the data size of the original image, which are included in the instruction, to the drawing control unit 37.
The respective-moving-picture separating unit 3121 included in the drawing control unit 37 separates the moving pictures from each other on the basis of the information indicating which region in the vertical direction is occupied by each of moving pictures associated with combination and which region is available. The original-image clipping unit 3122 clips out the original image to restore the image on the basis of the information associated with the horizontal resolution or the vertical resolution included in the original image and combined image information. The original-image clipping unit 3122 supplies the obtained result to the display unit 4 to display each moving picture.
Detailed processing contents of the pre-processing unit 15 and the post-processing unit 35 are explained with reference to
In the high-order CPU 1, a display instruction for moving pictures is issued on the basis of operation processing designed for the image processing device (in the case of a game machine, an operation to design progress of a game in a game machine in which the image processing device is incorporated).
For example, in an example illustrated in
The high-order CPU 1 acquires encoded image data from the image data ROM 2 on the basis of the operation processing design. For example, in the case of an instruction to “reproduce the moving picture A”, the high-order CPU 1 acquires encoded image data related to the moving picture A from the image data ROM 2 and supplies the acquired data to the hardware decoder 32 of the image processing processor 3. Subsequently, the hardware decoder 32 decodes the encoded image data, stores the decoded image data in the video memory 33, thereafter creates an image using the drawing control unit 37, and supplies the created image to the display unit 4. In this way, the moving picture A is reproduced on the display unit 4.
When the operation processing is reproducing a plurality of moving pictures, for example, the moving pictures C, D, E, and G, the high-order CPU 1 reads encoded image data of necessary moving pictures from the image data ROM 2 and extracts information of the data sizes of original images included in the respective bitstreams of the moving pictures.
The inter-moving-picture respective-picture combining unit 16 first determines the combination image frame described above, subsequently extracts slices constituting a part of the bitstreams of the moving pictures to be combined together, rewrites information of the first macroblock coordinates included in the respective header areas of the slices as necessary, combines a plurality of slices according to the display timings, generates compressed data after combination, and supplies the generated compressed data to the hardware decoder 32 of the image processing processor 3.
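The slice-stacking step just described can be sketched as follows, with each slice reduced to a dict. This is a structural illustration only: in a real H.264 bitstream, `first_mb_in_slice` is an Exp-Golomb coded field inside the slice header, so the rewrite would operate on the compressed bits rather than on a dict, and the field and function names here are assumptions.

```python
def combine_slices(slices, frame_width_mb):
    """Stack per-movie slices vertically into one picture.

    slices: list of {'height_mb': int, 'data': ...} in stacking order.
    frame_width_mb: width of the combination image frame in macroblocks.
    """
    combined, y_mb = [], 0
    for s in slices:
        new_slice = dict(s)
        # rewritten header field: index of the slice's first macroblock
        # within the combined picture (row offset * picture width in MBs)
        new_slice["first_mb_in_slice"] = y_mb * frame_width_mb
        combined.append(new_slice)
        y_mb += s["height_mb"]
    return combined
```

The first slice keeps coordinate 0, and each following slice starts at the macroblock index immediately after the rows above it, which is why the decoder accepts the result as one picture of plural slices.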
Combination of respective pieces of encoded image data with respect to the slices is identical to that in the first embodiment described above (
The inter-moving-picture respective-picture combining unit 16 of the pre-processing unit 15 organizes respective pieces of encoded image data of moving pictures at the slice level as described above to supply the encoded image data to the hardware decoder 32 while combining the moving pictures. At this time, upon reception of the encoded image data, the hardware decoder 32 recognizes that one picture is composed of a plurality of slices (three slices in this example). However, this is within the range of the processing defined by the standard (H.264) and the encoded image data can be subjected as it is to the decoding processing of the hardware decoder 32. The hardware decoder 32 does not recognize that the encoded image data is obtained by combining a plurality of moving pictures.
In other words, this configuration means that no alteration needs to be added to the hardware decoder 32. That is, also in the second embodiment, pre-processing and post-processing are performed to enable combination and separation of a plurality of moving pictures without adding any alteration to the hardware decoder 32.
In the second embodiment, insertion or replacement of moving pictures to be combined is performed using the combination image frame and moving pictures to be combined are selected according to the display timings. Therefore, there is an unused region in the combination image frame depending on display timings of moving pictures and the unused region needs to be controlled.
A padding slice is used for the unused region or for temporarily stopping moving pictures. The padding-slice generating unit 18 generates a necessary padding slice according to the display timing of the relevant moving picture and supplies the generated padding slice to the respective-moving-picture combination-information generating unit 17. A padding slice generated by the padding-slice generating unit 18 displays the previous frame as it is, being a slice configured by temporal-direction prediction that acquires the last frame with no change.
For example, in the H.264 standard, a padding slice is a P slice including all macroblocks (MB) marked as skip. However, in a start frame of the combination processing, a padding slice for a deficient region is configured by in-plane coding (intra coding). For example, in the H.264 standard, a padding slice for a deficient region is an IDR slice. A padding slice for a deficient region other than that in the start frame of the combination processing can also be configured by in-plane coding. For example, in the H.264 standard, this padding slice corresponds to a slice having the slice type of an I picture (hereinafter, “I slice”).
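A minimal sketch of this padding-slice choice, with returned dicts standing in for actual coded slices (an IDR slice, or a P slice whose macroblocks are all skipped); the function name and dict keys are illustrative assumptions.

```python
def make_padding_slice(width_mb, height_mb, is_start_frame):
    """Return a (simplified) padding slice covering width_mb x height_mb MBs."""
    mbs = width_mb * height_mb
    if is_start_frame:
        # the start frame of combination has no previous frame to repeat,
        # so the deficient region must be coded in-plane (intra): IDR slice
        return {"type": "IDR", "skip_run": 0, "mbs": mbs}
    # afterwards the region simply repeats its previous content: a P slice
    # in which every macroblock is flagged as skip (temporal prediction
    # copying the co-located macroblocks of the last frame unchanged)
    return {"type": "P", "skip_run": mbs, "mbs": mbs}
```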
Specific processing of the inter-moving-picture respective-picture combining unit 16 is explained with reference to
Combination processing for the moving picture B and the moving picture F is explained first with reference to
In this case, a bitstream obtained after the combination by the inter-moving-picture respective-picture combining unit 16 is composed of at least two slices. Specifically, at a time t8 illustrated in
Next, at the time t9 illustrated in
Next, combination of the moving picture F is canceled at the time t10 illustrated in
The moving picture C, the moving picture D, the moving picture E, and the moving picture G to be combined together by the inter-moving-picture respective-picture combining unit 16 are explained next. In this case, a bitstream after the combination by the inter-moving-picture respective-picture combining unit 16 is composed of at least two slices.
Specifically, at a time t1 in
Next, at the time t2 illustrated in
Next, at the time t3 illustrated in
Next, at the time t4 in
Next, at the time t5 illustrated in
Next, at the time t6 illustrated in
Next, at the time t7 illustrated in
However, because the region to which the moving picture C having a larger vertical resolution has been allocated is released as a free region at the time t7, the moving picture G can alternatively be allocated to the region to which the moving picture C has been allocated as illustrated in
A specific example of processing in the post-processing unit 35 is explained next with reference to
Meanwhile, in the post-processing unit 35, information associated with the horizontal resolution of the combination image frame, information indicating which regions in the vertical direction of the combination image frame are occupied by moving pictures associated with combination and which regions are unused, and information related to the data size of the original moving picture of each image are sequentially sent from the command interpreting unit 36 to the drawing control unit 37.
The respective-moving-picture separating unit 3121 included in the drawing control unit 37 separates the decoded image data from each other on the basis of the information indicating which regions in the vertical direction are occupied by the moving pictures associated with combination and which regions are unused. For example, the respective-moving-picture separating unit 3121 separates the decoded image data into the moving picture C, the moving picture D, and the moving picture E at any point of time from the time t3 to the time t4 illustrated in
Next, the original-image clipping unit 3122 included in the drawing control unit 37 deletes the padding data added in the <encoding processing (step S2)> with respect to each frame, on the basis of the information associated with the horizontal resolution of the combination image frame and the data size of the original image of each moving picture supplied from the high-order CPU 1, and reconstructs the original moving pictures as illustrated on the right parts of
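The separation-and-clipping performed by the drawing control unit can be illustrated as plain row and column cropping of the decoded frame, assuming the frame is a list of pixel rows and the region table mirrors the instruction information from the high-order CPU; all names in this sketch are hypothetical.

```python
def clip_movies(decoded_frame, regions):
    """Separate and clip each movie out of the combined decoded frame.

    decoded_frame: list of pixel rows (the decoded combination image frame).
    regions: movie_id -> (y_offset, original_height, original_width).
    """
    clipped = {}
    for movie_id, (y, h, w) in regions.items():
        # take the movie's vertical span (separation), then trim each row
        # to the original width, discarding the horizontal padding data
        clipped[movie_id] = [row[:w] for row in decoded_frame[y:y + h]]
    return clipped
```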
As described above, according to the second embodiment of the image data processing method of the present invention, moving pictures having low resolutions in the vertical direction are combined and are subjected to decoding processing at the same time. Therefore, even when parallel processing is performed by a plurality of decoder cores in the vertical direction, the function can be utilized.
Furthermore, the associated processing such as initialization of the decoder is reduced by combination of a plurality of moving pictures to reduce the number of streams to be decoded. Therefore, the processing time is considerably shortened.
According to the second embodiment, slices constituting respective moving pictures are extracted, the plural slices are brought together to generate one piece of encoded image data, so that the moving pictures are combined. Therefore, there is no need to alter the configuration or function of the hardware decoder 32. Furthermore, moving pictures associated with combination are encoded in such a manner that reference buffers in the respective moving pictures are identically managed to prevent the decoding timings and the decoding output timings from differing. Therefore, it is unnecessary to match the picture types at the time of combining the moving pictures at the slice level and the moving pictures can be combined according to the respective display timings.
<Summary of Operations and Effects of Aspects of Present Embodiment>
<First Aspect>
An image data processing method according to the present invention is an image data processing method using an image processing device that includes a high-order CPU that outputs encoded image data associated with a moving picture and issues an instruction associated with reproduction of the moving picture, an image processing processor that has a hardware decoder and decodes the encoded image data associated with the moving picture on the basis of the instruction to be input, and a display unit on which the moving picture is reproduced on the basis of image data decoded by the image processing processor, wherein in a case where a plurality of moving pictures are reproduced on the display unit on the basis of the instruction, the high-order CPU combines respective moving pictures at a level of slices being encoded image data to integrally configure encoded image data while considering the moving pictures as one picture of a plurality of slices, and supplies the encoded image data to the image processing processor, and the hardware decoder decodes the integrated encoded image data.
Accordingly, when a plurality of moving pictures having low resolutions are to be reproduced, the high-order CPU combines the moving pictures at the slice level and generates encoded image data of one picture composed of a plurality of slices, and the hardware decoder decodes one picture of plural slices including plural moving pictures combined together. Therefore, processing such as initialization of the hardware decoder necessary for each reproduction of moving pictures is rendered unnecessary, and the decoding processing speed can be improved.
<Second Aspect>
According to the image data processing method of the present invention, the high-order CPU requests the image processing processor to separate respective moving pictures from image data integrally decoded by the hardware decoder, and separated moving pictures are reproduced on the display unit. Therefore, a plurality of moving pictures can be reproduced by the same decoding processing and processing such as initialization of the hardware decoder necessary for each reproduction of moving pictures is rendered unnecessary, and the decoding processing speed can be improved.
<Third Aspect>
The high-order CPU extracts original image size information of moving pictures to be combined together from respective pieces of moving picture data and supplies the original image size information to an image processing processor, and the image processing processor separates respective moving pictures from integrally-decoded image data on the basis of the original image size information. Accordingly, even when many moving pictures having low resolutions in the vertical direction are to be reproduced, processing such as initialization of the hardware decoder necessary for each reproduction of moving pictures is rendered unnecessary and, when a hardware decoder having a parallel processing function is used, an advantage of the parallel processing function can be provided.
<Fourth Aspect>
Furthermore, encoded image data obtained by encoding in such a manner that reference buffers are identically managed is used for the moving pictures associated with the combining and read by the high-order CPU. This eliminates the need to match the picture types of respective slices of the moving pictures associated with the combining, and therefore restrictions on the reproduction timings of moving pictures are reduced.
<Fifth Aspect>
Pieces of encoded image data where the respective decoding timings and the respective decoding output timings do not differ are used and moving pictures are combined at a level of the slices according to timings at which the moving pictures are to be displayed on the display unit. Therefore, restrictions on the reproduction timings of moving pictures are reduced and a data buffer that stores therein decoded image data is not required.
<Sixth Aspect>
When moving pictures are to be combined together at a level of slices according to timings at which the moving pictures are displayed on the display unit, the high-order CPU can also apply a padding slice to time segments other than those associated with display and integrally configure encoded image data while considering the moving pictures as one picture of a plurality of slices including the padding slice. Accordingly, even when the number of reproduced moving pictures falling within a combination image frame is changed, application of a padding slice enables moving pictures to be reproduced without changing the decoding processing. Therefore, processing such as initialization of a hardware decoder is not required and the decoding processing speed can be improved.
<Seventh Aspect>
When combining moving pictures at the level of slices according to the timings at which they are to be displayed on the display unit, the high-order CPU applies a padding slice at the portion where a slice of a moving picture whose display has ended was located. Accordingly, even when the number of reproduced moving pictures falling within the combination image frame changes, application of the padding slice enables the moving pictures to be reproduced without changing the decoding processing. Therefore, processing such as initialization of the hardware decoder is not required and the decoding processing speed can be improved.
<Eighth Aspect>
When combining moving pictures at the level of slices according to the timings at which they are to be displayed on the display unit, the high-order CPU repeatedly applies, for a moving picture that is temporarily stopped in the middle of display, a slice configured by temporal-direction prediction that acquires the last frame with no change, throughout the period in which the moving picture is stopped. Accordingly, a single decoding process continues without any change to the format of the encoded image data. Therefore, procedures such as initialization required for the decoding processing can be reduced and the decoding processing speed can be improved.
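The per-stream slice selection during a pause can be sketched as below (the "skip" slice label and the stream dictionary are hypothetical names for illustration): while the stream is paused, a slice that copies the co-located area of the previous frame unchanged is emitted instead of advancing the stream.

```python
# Hypothetical sketch: choose the slice to emit for one stream in one
# combined picture. While paused, a "skip" slice (temporal-direction
# prediction acquiring the last frame with no change) is emitted, so
# the stream's own slices are not consumed and decoding continues
# uninterrupted.

def next_slice(stream, paused):
    if paused:
        return {"type": "skip", "lines": stream["lines"]}
    # Otherwise consume and return the stream's next real slice.
    return stream["slices"].pop(0)
```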
<Ninth Aspect>
When combining moving pictures at the level of slices according to the timings at which they are to be displayed on the display unit, the high-order CPU ends the padding slice and applies the first slice of a moving picture that is newly displayed. Accordingly, a single decoding process can be continued and procedures such as initialization required for the decoding processing can be reduced.
<Tenth Aspect>
Furthermore, when combining the moving pictures at the level of slices according to the timings at which they are to be displayed on the display unit, the high-order CPU defines a combination image frame associated with a predetermined resolution and includes, in the instruction, information indicating which position in the vertical direction of the combination image frame each of the moving pictures occupies. Therefore, a necessary portion of the moving pictures within the combination image frame can be appropriately clipped.
<Eleventh Aspect>
In the image data processing method of the present invention, for each of the moving pictures associated with the combining among the encoded image data, encoded image data obtained by encoding in which the GOP intervals are equal and the picture types are matched is used. Accordingly, the respective pieces of encoded image data of the moving pictures can be combined at the level of slices.
<Twelfth Aspect>
Moving pictures associated with the combining among the encoded image data are encoded with their horizontal resolutions matched with each other by padding of data in the horizontal direction. Accordingly, encoded image data combining a plurality of moving pictures can be decoded by a single decoder.
<Thirteenth Aspect>
Moving pictures associated with the combining among the encoded image data associated with the moving pictures have vertical resolutions equal to or lower than a predetermined value. Accordingly, encoded image data of a combination of a plurality of moving pictures can be decoded by a single decoder.
<Fourteenth Aspect>
The sum of respective vertical resolutions of moving pictures associated with the combining does not exceed a vertical processing capacity of the hardware decoder. Accordingly, encoded image data of a combination of a plurality of moving pictures can be decoded by a single decoder.
<Fifteenth Aspect>
Moving pictures associated with the combining among the encoded image data are encoded with their vertical resolutions matched with each other by padding of data in the vertical direction. Therefore, the moving pictures to be combined can be easily replaced.
<Sixteenth Aspect>
A plurality of reference values are set for the horizontal resolution, and each horizontal resolution is conformed to one of the reference values so that the horizontal resolutions are matched with each other. Accordingly, a sorting criterion for the moving pictures to be combined together is clarified and the images associated with the combining can be easily selected at the time of representation design.
<Seventeenth Aspect>
In the image data processing method of the present invention, encoded image data in which the original image size information is stored in the SEI of the bitstream or in a container such as MP4 is used. Accordingly, the high-order CPU extracts the original image size information when reading the moving pictures and supplies it to the image processing processor, and the image processing processor can delete the padding data from the decoded data that has been subjected to the padding processing, thereby extracting the image in the desired region.
<Eighteenth Aspect>
The image processing processor accumulates the decoded image data obtained from the hardware decoder in an image data storage unit to ensure the desired display timing of each of the moving pictures in units of frames when the moving pictures are reproduced on the display unit. Accordingly, the moving pictures can be reproduced at the desired timings on the basis of the moving-picture reproduction timing information.
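The accumulation step can be sketched as a timestamp-ordered buffer (the class and method names are hypothetical illustrations): decoded frames are held until their display times arrive, decoupling decoding speed from display timing.

```python
import heapq

# Hypothetical sketch: buffer decoded frames and release each only when
# its display timestamp has been reached, so frames decoded ahead of
# time are shown at the desired reproduction timing.

class FrameBuffer:
    def __init__(self):
        self._heap = []   # entries: (display_time, sequence, frame)
        self._seq = 0     # tie-breaker for equal timestamps

    def push(self, display_time, frame):
        heapq.heappush(self._heap, (display_time, self._seq, frame))
        self._seq += 1

    def pop_due(self, now):
        """Return all buffered frames whose display time has arrived."""
        due = []
        while self._heap and self._heap[0][0] <= now:
            due.append(heapq.heappop(self._heap)[2])
        return due
```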
In any of the aspects described above, even when many moving pictures having low resolutions in the vertical direction are to be reproduced, processing such as initialization of a hardware decoder is rendered unnecessary, a plurality of moving pictures can be reproduced at a high speed, and when a hardware decoder having a parallel processing function is used, an advantage of the parallel processing function can be provided.
In any of the aspects described above, moving pictures are combined in such a manner that plural slices of the moving pictures are brought together in units of pictures. Therefore, it is possible to improve the decoding processing speed without adding any alteration to the configuration or function of the hardware decoder itself.
The image data processing method according to the present invention can be adopted, for example, in a game machine such as a pachinko machine.
Number | Date | Country | Kind |
---|---|---|---|
JP2016-239477 | Dec 2016 | JP | national |
JP2017-226709 | Nov 2017 | JP | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2017/043285 | Dec. 1, 2017 | WO | 00 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2018/105515 | Jun. 14, 2018 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
20030048848 | Li | Mar 2003 | A1 |
20080165277 | Loubachevskaia | Jul 2008 | A1 |
20090067507 | Baird et al. | Mar 2009 | A1 |
20130215016 | Moriyoshi | Aug 2013 | A1 |
20130343663 | Sato | Dec 2013 | A1 |
20140294080 | Kurihara | Oct 2014 | A1 |
20150319452 | Lewis | Nov 2015 | A1 |
20160261880 | Lee et al. | Sep 2016 | A1 |
Number | Date | Country |
---|---|---|
2012-191513 | Oct 2012 | JP |
2017-225096 | Dec 2017 | JP |
0156293 | Aug 2001 | WO |
2009035936 | Mar 2009 | WO |
2012060459 | May 2012 | WO |
2015058719 | Apr 2015 | WO |
2015168150 | Nov 2015 | WO |
Entry |
---|
International Search Report issued in PCT/JP2017/043285 dated Jan. 23, 2018 (5 pages). |
Written Opinion of the International Searching Authority issued in PCT/JP2017/043285 dated Jan. 23, 2018 (6 pages). |
Extended European Search Report in counterpart European Application No. 17879606.6 dated Mar. 13, 2020 (10 pages). |
Number | Date | Country |
---|---|---|
20190327480 A1 | Oct 2019 | US |