The present disclosure relates to a generation device and a generation method, and a reproduction device and a reproduction method, and in particular, to a generation device and a generation method, and a reproduction device and a reproduction method enabled to provide a mapping that is applicable to an arbitrary mapping method and in which high resolution is set in a viewing direction.
Examples of a method of reproducing an omnidirectional image in which it is possible to look around 360 degrees in all directions of top, bottom, left, and right include: a method of viewing on a general display while changing the display direction by using a controller; a method of displaying on a mobile terminal screen held in a hand while changing the direction on the basis of posture information obtained from a gyro sensor incorporated in the terminal; a method of displaying an image on a head mounted display mounted on the head so that movement of the head is reflected; and the like.
A feature of a single-viewer reproduction device that reproduces an omnidirectional image by these reproduction methods is that, although input image data is prepared for the entire omnidirectional image, the image actually displayed is limited to a part of the omnidirectional image, since it is not necessary to display data in directions in which the viewer is not looking. If transmission and decoding processing of the part not being displayed can be reduced, line bandwidth efficiency can be increased.
However, in a general method of compression using a standard codec such as Moving Picture Experts Group phase 2 (MPEG2) or Advanced Video Coding (AVC), redundancy is used for compression in both the spatial direction and the time axis direction, so it is difficult to cut out and decode an arbitrary area, or to start reproduction from an arbitrary time.
Moreover, there is also a problem of server response delay in a case where data is distributed from a server via a network. Therefore, it is very difficult to switch data instantaneously by using a video codec, and even if the omnidirectional data is divided into multiple pieces of data, when the viewer suddenly changes the viewing direction, a data loss state occurs in which there is no data to be displayed until the switching actually occurs. Since reproduction quality is extremely degraded when this data loss state occurs, in construction of an actual omnidirectional distribution system, it is necessary to maintain a state in which at least minimum data exists for all directions, to prepare for sudden turning around.
Therefore, various methods have been devised to make an image high resolution and high quality in a direction in which the viewer is viewing while securing the minimum data for the omnidirectional image within a limited line bandwidth.
For example, Patent Document 1 devises a so-called divided hierarchical distribution system in which a low-resolution omnidirectional image of equirectangular projection and a high-resolution partially cut-out image are combined and viewed. In the divided hierarchical distribution system, a high-resolution partial image in the viewing direction is distributed while being switched depending on the direction of the viewer. In a case of drawing an area around the visual field that is not covered by the high-resolution partial image, or in a case where the stream cannot be switched in time due to sudden turning around, the low-resolution omnidirectional image is used, to prevent occurrence of a state in which there is no image to be displayed.
A problem with such a divided hierarchical method is that, in a portion where the high resolution and the low resolution overlap each other, the image is drawn by using the high resolution with good image quality, so that the low resolution side of the overlapping portion is wasted. Double transmission of the high resolution and the low resolution results in loss of line bandwidth.
There is also a method devised for eliminating the waste of double transmission of this overlapping portion.
For example, there is a method of using a mapping designed such that all directions of the omnidirectional image are included in one image, the area, in other words, the number of pixels allocated varies depending on the direction, and one specific direction has high resolution.
This method is described, for example, in the chapter “A.3.2.5 Truncated pyramid” of the Working Draft “WD on ISO/IEC 23000-20 Omnidirectional Media Application Format” published at the Geneva meeting of the MPEG meeting in June 2016 (see Non-Patent Document 1). Furthermore, a similar method is also described in Non-Patent Document 2. Facebook, Inc. publishes another mapping based on a similar idea, called “Pyramid Coding”, on the Web (see Non-Patent Document 3).
According to this method, the number of allocated pixels in the front area is large, and the resolution is high. A plurality of bit streams having different front directions is prepared, and transmission is performed while the bit streams are switched depending on the direction in which the viewer is viewing. Even if there is a difference in resolution, data in all directions are included, so that a sudden change in viewing direction does not cause data loss.
Such a mapping in which high resolution is set in the viewing direction front, and low resolution is set in the other directions, is called “viewport dependent projection mapping” in the MPEG meeting.
The above-described method of using a mapping in which high resolution is set in the viewing direction front and low resolution is set in the other directions, is a method using a dedicated mapping in which a mapping that causes a resolution difference is newly defined. The method using the dedicated mapping becomes incompatible with a normal mapping method, in other words, a mapping of displaying an omnidirectional image with uniform pixel density (resolution) in all directions. Therefore, a mapping is desired that can use a mapping designed for a general omnidirectional image as it is, and can generate an image with a high resolution in a specific direction.
The present disclosure has been made in view of such a situation, and is enabled to provide a mapping that is applicable to an arbitrary mapping method and in which high resolution is set in a viewing direction.
A generation device of a first aspect of the present disclosure includes a normalization unit that converts a first vector that maps a 360-degree image onto a predetermined 3D model into a second vector of a 3D model of a unit sphere.
A generation method of the first aspect of the present disclosure includes converting a first vector of a predetermined 3D model onto which a 360-degree image is mapped into a second vector of a 3D model of a unit sphere.
In the first aspect of the present disclosure, the first vector of the predetermined 3D model onto which the 360-degree image is mapped is converted into the second vector of the 3D model of the unit sphere.
A reproduction device of a second aspect of the present disclosure includes: a receiving unit that receives a 360-degree image generated by another device; and a normalization unit that converts a first vector that maps the 360-degree image onto a predetermined 3D model into a second vector of a 3D model of a unit sphere.
A reproduction method of the second aspect of the present disclosure includes receiving a 360-degree image generated by another device, and converting a first vector that maps the 360-degree image onto a predetermined 3D model into a second vector of a 3D model of a unit sphere.
In the second aspect of the present disclosure, the 360-degree image generated by the other device is received, and the first vector that maps the 360-degree image onto the predetermined 3D model is converted into the second vector of the 3D model of the unit sphere.
Note that, the generation device of the first aspect and the reproduction device of the second aspect of the present disclosure can be implemented by causing a computer to execute a program.
Furthermore, to implement the generation device of the first aspect and the reproduction device of the second aspect of the present disclosure, the program to be executed by the computer can be provided by being transmitted via a transmission medium or recorded on a recording medium.
The generation device and the reproduction device each may be an independent device or an internal block included in one device.
According to the first and second aspects of the present disclosure, it is enabled to provide a mapping that is applicable to an arbitrary mapping method and in which high resolution is set in a viewing direction.
Note that, the effect described here is not necessarily limited, and can be any effect described in the present disclosure.
The following is a description of a mode for carrying out a technology according to the present disclosure (the mode will be hereinafter referred to as the embodiment). Note that, description will be made in the following order.
1. First Embodiment
2. Direction of mapping of mapping processing unit of generation device and reproduction device
3. Second Embodiment
4. Relationship between eccentricity ratio k and resolution improvement ratio
5. Difference between first embodiment and second embodiment
6. Combination of first embodiment and second embodiment
7. Conclusion
8. Modifications
9. Computer configuration example
10. Application example
A distribution system 10 of
Specifically, the imaging device 11 of the distribution system 10 includes six cameras 11A-1 to 11A-6. Note that, in the following, in a case where it is not necessary to distinguish the cameras 11A-1 to 11A-6 in particular, they are referred to as cameras 11A.
Each of the cameras 11A captures a moving image. The imaging device 11 supplies, as captured images, moving images in six directions captured by the respective cameras 11A to the generation device 12. Note that, the number of cameras provided in the imaging device 11 is not limited to six, as long as it is plural.
The generation device 12 generates an omnidirectional image of 360 degrees around in the horizontal direction and 180 degrees around in the vertical direction, from the captured images supplied from the imaging device 11, with a method using equirectangular projection. The generation device 12 maps, onto a predetermined 3D model, the image data of the omnidirectional image of the equirectangular projection in which it is possible to look around 360 degrees in all directions of top, bottom, left, and right, and compresses and encodes the image data with a predetermined encoding method such as Advanced Video Coding (AVC) or High Efficiency Video Coding (HEVC)/H.265.
When mapping the omnidirectional image onto the predetermined 3D model, the generation device 12 sets a plurality of directions corresponding to gaze directions of the viewer, and converts the mapping so that high resolution is set for the resolution in the set directions and low resolution is set in the directions opposite to the set directions, for each direction. Then, the generation device 12 generates an encoded stream in which image data of the omnidirectional image generated by using the converted mapping is compressed and encoded.
For example, assuming that, with the center of a cube as a viewing position, six directions from the center in directions perpendicular to respective faces of the cube are set as the plurality of directions, the generation device 12 generates six encoded streams in which the image data of the omnidirectional image is compressed and encoded so that the encoded streams correspond to the six directions. The generated six encoded streams become image data of an omnidirectional image having different directions in which high resolution is set (hereinafter, also referred to as high resolution directions).
The generation device 12 uploads, to the distribution server 13, a plurality of encoded streams respectively having different high resolution directions. Furthermore, the generation device 12 generates auxiliary information for identifying the plurality of encoded streams, and uploads the auxiliary information to the distribution server 13. The auxiliary information is information that defines which direction the high resolution direction is, or the like.
The distribution server 13 connects to the reproduction device 15 via the network 14. The distribution server 13 stores the plurality of encoded streams and the auxiliary information uploaded from the generation device 12. In response to a request from the reproduction device 15, the distribution server 13 transmits the stored auxiliary information and at least one of the plurality of encoded streams to the reproduction device 15 via the network 14.
The reproduction device 15 requests the distribution server 13 via the network 14 to transmit the auxiliary information for identifying the plurality of encoded streams stored in the distribution server 13, and receives the auxiliary information transmitted in response to the request.
Furthermore, the reproduction device 15 incorporates a camera 15A and images a marker 16A attached to the head mounted display 16. Then, the reproduction device 15 detects the viewing position of the viewer in a coordinate system of the 3D model (hereinafter referred to as a 3D model coordinate system) on the basis of the captured image of the marker 16A. Moreover, the reproduction device 15 receives a detection result of a gyro sensor 16B of the head mounted display 16 from the head mounted display 16. The reproduction device 15 determines a gaze direction of the viewer in the 3D model coordinate system on the basis of the detection result of the gyro sensor 16B. The reproduction device 15 determines a visual field range of the viewer positioned inside the 3D model on the basis of the viewing position and the gaze direction.
Then, on the basis of the auxiliary information and the visual field range of the viewer, the reproduction device 15 requests of the distribution server 13 via the network 14 one of the plurality of encoded streams, and receives one encoded stream transmitted from the distribution server 13 in response to the request.
In other words, the reproduction device 15 determines, as an encoded stream to be acquired, an encoded stream in a high resolution direction closest to the gaze direction of the viewer among the plurality of encoded streams stored in the distribution server 13, and requests the distribution server 13 to transmit the encoded stream.
The reproduction device 15 decodes the received one encoded stream. The reproduction device 15 generates a 3D model image by mapping an omnidirectional image obtained as a result of decoding onto a predetermined 3D model.
Then, the reproduction device 15 generates an image of the visual field range of the viewer as the display image, by perspectively projecting the 3D model image onto the visual field range of the viewer with the viewing position as a focal point. The reproduction device 15 supplies the display image to the head mounted display 16.
The head mounted display 16 is mounted on the head of the viewer and displays the display image supplied from the reproduction device 15. To the head mounted display 16, the marker 16A imaged by the camera 15A is attached. Thus, the viewer can designate the viewing position by moving while the head mounted display 16 is mounted on the head. Furthermore, the head mounted display 16 incorporates a gyro sensor 16B, and a detection result of angular velocity by the gyro sensor 16B is transmitted to the reproduction device 15. Thus, the viewer can designate the gaze direction by rotating the head on which the head mounted display 16 is mounted.
The reproduction device 15 generates an image in the visual field range of the viewer while switching the plurality of encoded streams stored in the distribution server 13 depending on a direction in which the viewer is viewing, and causes the head mounted display 16 to display the image. In other words, the plurality of encoded streams stored in the distribution server 13 is dynamically switched so that the resolution is increased in the direction in which the viewer is viewing.
In the distribution system 10, any method may be used as a distribution method from the distribution server 13 to the reproduction device 15. In a case where the distribution method is, for example, a method using Moving Picture Experts Group-Dynamic Adaptive Streaming over HTTP (MPEG-DASH), the distribution server 13 is a HyperText Transfer Protocol (HTTP) server, and the reproduction device 15 is an MPEG-DASH client.
(Configuration Example of Generation Device)
The generation device 12 of
In the generation device 12, the same number of rotation processing units 22, mapping processing units 23, and encoders 24 are provided as the number of high resolution directions. In the present embodiment, in the generation device 12, with the center of a cube as a viewing position, six directions from the center in directions perpendicular to respective faces of the cube are set as the high resolution directions, indicated by arrows dir1 to dir6 in
In the following, the rotation processing units 22-1 to 22-6 are simply referred to as the rotation processing units 22 in a case where it is not necessary to distinguish them in particular. Similarly, the mapping processing units 23-1 to 23-6 and the encoders 24-1 to 24-6 are referred to as the mapping processing units 23 and the encoders 24 in a case where it is not necessary to distinguish them in particular.
The stitching processing unit 21 makes colors and brightness of captured images in the six directions supplied from the camera 11A of
The rotation processing unit 22 rotates (moves) the image center of the captured image (for example, the equirectangular image) supplied on a frame basis from the stitching processing unit 21 so that the high resolution direction is at the center of the image. The setting unit 25 indicates which direction is to be set as the high resolution direction. The rotation processing units 22-1 to 22-6 differ only in the direction indicated from the setting unit 25.
The mapping processing unit 23 generates an omnidirectional image in which high resolution is set in the direction indicated from the setting unit 25, by mapping the captured image converted so that the vicinity of the image center has high resolution onto a predetermined 3D model. In the generated omnidirectional image, resolution in a direction opposite to (directly behind) the direction indicated from the setting unit 25 is low resolution.
The encoder 24 (encoding unit) encodes the omnidirectional image supplied from the mapping processing unit 23 with a predetermined encoding method such as the MPEG2 method or the AVC method, to generate an encoded stream. The encoder 24 supplies the generated one encoded stream to the transmission unit 27.
At this time, since the six encoded streams generated by the encoders 24-1 to 24-6 are dynamically switched and reproduced, sync points such as the first picture of a Group of Pictures (GOP) and IDR pictures are made the same as each other between the six encoded streams generated by the encoders 24-1 to 24-6. The encoders 24-1 to 24-6 each supply the one generated encoded stream to the transmission unit 27.
The mapping processing unit 23 and the encoder 24 execute the same processing with a direction of the image center of the image supplied from the rotation processing unit 22 corresponding to the high resolution direction set by the setting unit 25, as the front direction. An image in which the high resolution direction is at the image center supplied from the rotation processing unit 22 is referred to as a front direction image.
The setting unit 25 determines high resolution directions treated as front directions by sets of the six rotation processing units 22, mapping processing units 23, and encoders 24 provided in parallel with respect to 360 degrees around in the horizontal direction and 180 degrees around in the vertical direction. The setting unit 25 supplies, to the rotation processing units 22-1 to 22-6, information for specifying one of the six high resolution directions determined. The high resolution directions respectively supplied to the rotation processing units 22-1 to 22-6 are different from each other.
Furthermore, the setting unit 25 determines a resolution improvement ratio indicating how high the resolution is to be set in the high resolution direction, as compared with a case where uniform resolution is set in all directions of 360 degrees around in the horizontal direction and 180 degrees around in the vertical direction, and supplies the resolution improvement ratio to the mapping processing units 23-1 to 23-6. The resolution improvement ratios supplied to the mapping processing units 23-1 to 23-6 may be different values in the mapping processing units 23-1 to 23-6, but the resolution improvement ratios are common values in the present embodiment.
In the present embodiment, the number of high resolution directions set is determined in advance to six, and a configuration is adopted in which six each are provided of the rotation processing units 22, the mapping processing units 23, and the encoders 24 in the generation device 12; however, a configuration may be adopted in which the number of pieces is variable of each of the rotation processing unit 22, the mapping processing unit 23, and the encoder 24, corresponding to an arbitrary number of high resolution directions determined by the setting unit 25.
Moreover, the setting unit 25 supplies, to the table generation unit 26, information specifying the six high resolution directions supplied to the rotation processing units 22-1 to 22-6 and information indicating the resolution improvement ratio supplied to the mapping processing units 23-1 to 23-6.
Note that, in
The table generation unit 26 generates a table in which the six high resolution directions and the resolution improvement ratio supplied from the setting unit 25 are collected for each high resolution direction, and supplies the generated table as auxiliary information to the transmission unit 27.
The transmission unit 27 uploads (transmits) a total of six encoded streams respectively supplied from the encoders 24-1 to 24-6, and the auxiliary information supplied from the table generation unit 26 to the distribution server 13 of
(Configuration Example of Mapping Conversion Unit)
The mapping processing unit 23 includes an omnidirectional mapping coordinate generation unit 41, a vector normalization unit 42, and a mapping correction unit 43.
An image is supplied from the rotation processing unit 22 to the mapping processing unit 23 for each frame, in which the image is one image (for example, an equirectangular image) having sufficient resolution, and the high resolution direction set by the setting unit 25 is at the image center.
In a case where an omnidirectional image is generated in which uniform resolution is set in all directions of 360 degrees around in the horizontal direction and 180 degrees around in the vertical direction, the mapping processing unit 23 includes only the omnidirectional mapping coordinate generation unit 41. To generate an omnidirectional image in which a predetermined direction is set as the high resolution direction as in the present embodiment, the vector normalization unit 42 and the mapping correction unit 43 are added to the mapping processing unit 23.
The mapping processing unit 23 performs processing of mapping one image (for example, an equirectangular image) supplied from the rotation processing unit 22 to a predetermined 3D model, and in fact, three types of processing of the omnidirectional mapping coordinate generation unit 41, the vector normalization unit 42, and the mapping correction unit 43 are collectively performed by mapping processing of one time.
Thus, the mapping processing by the mapping processing unit 23 will be described stepwise, defining a mapping performed by the omnidirectional mapping coordinate generation unit 41 alone as a mapping f, defining a mapping performed by a combination of the omnidirectional mapping coordinate generation unit 41 and the vector normalization unit 42 as a mapping f′, and defining a mapping performed by a combination of the omnidirectional mapping coordinate generation unit 41, the vector normalization unit 42, and the mapping correction unit 43 as a mapping f″, as illustrated in
The omnidirectional mapping coordinate generation unit 41 generates an omnidirectional image in which high resolution is set in the high resolution direction by mapping the front direction image supplied from the rotation processing unit 22 onto the predetermined 3D model.
For example, assuming that regular octahedron mapping is adopted for mapping onto a regular octahedron as the predetermined 3D model, the omnidirectional mapping coordinate generation unit 41 generates an omnidirectional image in which high resolution is set in the high resolution direction by mapping the front direction image supplied from the rotation processing unit 22 onto the regular octahedron.
Polyhedron mapping, including regular octahedron mapping, is a mapping method with which drawing by the GPU is relatively easy: a texture image is pasted onto a 3D polyhedron solid model including multiple triangle patches, and perspective projection is performed so that the faces of the polyhedron are looked around from the center of the 3D model.
A of
B of
A correspondence between the texture image (pixel) of plane coordinates (u, v) and a three-dimensional coordinate position (x, y, z) on the regular octahedron is defined by the mapping f, as (x, y, z)=f(u, v).
A of
As illustrated in A of
[Expression 1]
\vec{d} = f(u, v)   (1)
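As an illustration of the mapping f, the following is a minimal sketch of one commonly used octahedral parameterization. The way the square texture is folded onto the octahedron is an assumption made here for illustration; the actual correspondence is defined by the texture layout adopted by the omnidirectional mapping coordinate generation unit 41.

```python
import numpy as np

def octahedron_mapping_f(u, v):
    """One possible mapping f: texture coordinates (u, v) in [0, 1]^2 to the
    vector d = (x, y, z) on the octahedron surface |x| + |y| + |z| = 1.
    The unfolding convention is an assumption made for illustration."""
    ex, ez = 2.0 * u - 1.0, 2.0 * v - 1.0
    y = 1.0 - abs(ex) - abs(ez)
    if y >= 0.0:
        x, z = ex, ez                            # central diamond of the texture: upper half
    else:
        x = np.copysign(1.0 - abs(ez), ex)       # corner triangles fold onto the lower half
        z = np.copysign(1.0 - abs(ex), ez)
    return np.array([x, y, z])
```

Each (u, v) pair yields the vector d on the octahedron surface, which is the input to the normalization described next.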
Next, the vector d is normalized by an equation (2) below, whereby a vector p is calculated.

[Expression 2]
\vec{p} = \vec{d} / |\vec{d}|   (2)

The vector p represents a vector to a three-dimensional coordinate position (x, y, z) on the spherical surface of a sphere with a radius of 1 (hereinafter referred to as a unit sphere), as illustrated in B of
The mapping f′ performed by the combination of the omnidirectional mapping coordinate generation unit 41 and the vector normalization unit 42 is processing of mapping onto a 3D model of a unit sphere (spherical surface) by using the vector p.
Thus, the vector normalization unit 42 converts the predetermined 3D model (the regular octahedron model in the present embodiment) adopted by the omnidirectional mapping coordinate generation unit 41 into the 3D model of the unit sphere (spherical surface).
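A minimal sketch of the normalization of the equation (2), continuing the octahedron sketch above (octahedron_mapping_f is the illustrative function defined there):

```python
import numpy as np

def normalize_to_unit_sphere(d):
    """Equation (2): p = d / |d|. The direction of d is preserved, so the point
    on the original 3D model is projected onto the spherical surface of the unit sphere."""
    d = np.asarray(d, dtype=float)
    return d / np.linalg.norm(d)

def mapping_f_prime(u, v):
    """Mapping f': the mapping f followed by the normalization of the equation (2)."""
    return normalize_to_unit_sphere(octahedron_mapping_f(u, v))
```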
The mapping correction unit 43 adds a predetermined offset position vector to the vector p normalized by the vector normalization unit 42. When the offset position vector to be added to the vector p is a vector s1 and a vector corrected by the mapping correction unit 43 is a vector q1, the mapping correction unit 43 calculates the vector q1 by an equation (3) below.
[Expression 3]
\vec{q}_1 = \vec{p} - \vec{s}_1   (3)
The vector q1 is obtained by adding the predetermined offset position vector s1 (negative offset position vector s1) to the vector p as illustrated in C of
The mapping f″ performed by the combination of the omnidirectional mapping coordinate generation unit 41, the vector normalization unit 42, and the mapping correction unit 43 is processing of mapping onto the 3D model of the unit sphere (spherical surface) by using the vector q1.
The calculated vector q1 is equivalent to viewing the texture image arranged on the spherical surface of the unit sphere as the 3D model from a position shifted from the center (origin) by the offset position vector s1.
As illustrated in
The length (magnitude) of the offset position vector s1 in C of
When the length of the offset position vector s1 is referred to as an eccentricity ratio k, the eccentricity ratio k can take a value from the origin to the unit spherical surface, and therefore a possible value of the eccentricity ratio k is 0≤k<1. The larger the eccentricity ratio k (length of the offset position vector s1), the larger the difference in density between the pixels illustrated in
As described above, the mapping processing unit 23 executes the mapping processing of the mapping f″ corresponding to the vector q1 of the equation (3), thereby generating an omnidirectional image in which a specific direction is set as the high resolution direction from the front direction image supplied from the rotation processing unit 22.
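Putting the three steps together, the mapping f″ can be sketched as follows. The offset position vector s1 is taken here in the direction opposite to the high resolution direction, so that, seen from the center of the unit sphere, texture pixels are concentrated around the high resolution direction; this sign convention, the parameter names, and the default eccentricity ratio are assumptions for illustration.

```python
import numpy as np

def mapping_f_double_prime(u, v, k=0.5, high_res_dir=(0.0, 0.0, 1.0)):
    """Mapping f'': the mapping f, the normalization of the equation (2), and the
    correction q1 = p - s1 of the equation (3). k is the eccentricity ratio
    (0 <= k < 1), i.e. the length of the offset position vector s1."""
    p = mapping_f_prime(u, v)                    # unit-sphere vector from the earlier sketch
    h = np.asarray(high_res_dir, dtype=float)
    s1 = -k * h / np.linalg.norm(h)              # offset position vector (assumed sign convention)
    return p - s1                                # vertex position of the eccentric sphere model
```

For k = 0, the correction disappears and the mapping f″ coincides with the mapping f′, that is, an omnidirectional image with uniform resolution in all directions.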
In the above-described example, an example has been described in which the 3D model of the regular octahedron is adopted, as an example of the omnidirectional image processing by the omnidirectional mapping coordinate generation unit 41.
However, the 3D model adopted by the omnidirectional mapping coordinate generation unit 41 may be any 3D model. For example, cube mapping may be used, or an equidistant cylinder may be used. However, since the equidistant cylinder is usually defined by a sphere model, there is no change in the 3D model shape even if normalization processing is performed by the vector normalization unit 42 in the subsequent stage.
In other words, since the vector normalization unit 42 that converts an arbitrary 3D model into a unit sphere model is provided in the subsequent stage of the omnidirectional mapping coordinate generation unit 41, the arbitrary 3D model can be adopted in the omnidirectional mapping coordinate generation unit 41.
In
Although it cannot be expressed sufficiently on the drawing due to the limitation of the resolution of the drawing, the two-dimensional texture image by the mapping processing of the mapping f″ of
(Configuration Example of Auxiliary Information)
The table generation unit 26 generates, as the auxiliary information, a parameter table in which the information specifying the high resolution direction and the resolution improvement ratio are collected for each high resolution direction.
In the example of
The azimuth angle θ is an angle from a predetermined reference axis on the XZ plane that is a horizontal plane of the 3D model coordinate system, and can take a value from −180° to +180° (−180°≤θ≤180°). In the example of
The elevation angle φ is an angle in the vertical direction with the XZ plane of the 3D model coordinate system as a reference plane, and can take a value from −90° to +90° (−90°≤φ≤90°). In the example of
The rotation angle ψ is an angle around an axis, the axis being a line connecting the origin as a viewpoint and a point on the spherical surface, and in the example of
Note that, the positive or negative sign changes depending on which of the right-handed system or left-handed system of the orthogonal coordinate system is adopted, and which direction or rotation direction is used as the positive direction in the azimuth angle θ, the elevation angle φ, and the rotation angle ψ; however, there is no problem in defining it in any way.
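For illustration, the parameter table of the auxiliary information could be represented by a structure such as the following; the field names and the values (the six cube-face directions and a common eccentricity ratio of 0.5) are placeholders, and the actual syntax of the auxiliary information is not limited to this.

```python
from dataclasses import dataclass

@dataclass
class HighResolutionEntry:
    stream_id: int          # identifies one of the six encoded streams
    azimuth_deg: float      # azimuth angle theta, -180 to +180 degrees
    elevation_deg: float    # elevation angle phi, -90 to +90 degrees
    rotation_deg: float     # rotation angle psi around the gaze axis
    eccentricity_k: float   # eccentricity ratio k, 0 <= k < 1

# One entry per high resolution direction; the six cube-face directions and the
# common eccentricity ratio 0.5 are placeholder values.
parameter_table = [
    HighResolutionEntry(1,    0.0,   0.0, 0.0, 0.5),
    HighResolutionEntry(2,   90.0,   0.0, 0.0, 0.5),
    HighResolutionEntry(3,  180.0,   0.0, 0.0, 0.5),
    HighResolutionEntry(4,  -90.0,   0.0, 0.0, 0.5),
    HighResolutionEntry(5,    0.0,  90.0, 0.0, 0.5),
    HighResolutionEntry(6,    0.0, -90.0, 0.0, 0.5),
]
```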
Six omnidirectional images 611 to 616 illustrated in
In the six omnidirectional images 611 to 616 illustrated in
The six omnidirectional images 611 to 616 are rotated by the rotation processing unit 22 so that the high resolution direction specified by the parameter table in
(Description of Processing of Generation Device)
First, in step S11, the stitching processing unit 21 makes colors and brightness of captured images in the six directions supplied from the respective cameras 11A the same for each frame, and removes overlap and connects the captured images together, to convert the captured images to one captured image. The stitching processing unit 21 generates, for example, an equirectangular image as the one captured image, and supplies the equirectangular image on a frame basis to the rotation processing unit 22.
In step S12, the setting unit 25 determines six high resolution directions and resolution improvement ratios. The setting unit 25 supplies the determined six high resolution directions one by one to the rotation processing units 22-1 to 22-6, and supplies the determined resolution improvement ratios to the mapping processing units 23-1 to 23-6. Furthermore, the setting unit 25 also supplies the determined six high resolution directions and resolution improvement ratios to the table generation unit 26.
In step S13, the rotation processing unit 22 rotates the captured image (for example, the equirectangular image) on the frame basis supplied from the stitching processing unit 21 so that the high resolution direction indicated by the setting unit 25 is at the center of the image.
In step S14, the mapping processing unit 23 generates an omnidirectional image in which high resolution is set in the direction indicated from the setting unit 25 by mapping the captured image rotated so that the vicinity of the image center has high resolution onto a predetermined 3D model.
Specifically, the mapping processing unit 23 calculates the vector q1 of the equation (3), and executes processing of mapping the front direction image supplied from the rotation processing unit 22 onto a unit sphere by using the vector q1 obtained as a result of the calculation, thereby executing mapping processing of the mapping f″ in which 3D model mapping processing by the omnidirectional mapping coordinate generation unit 41, normalization processing by the vector normalization unit 42, and mapping correction processing by the mapping correction unit 43 are integrated together.
In step S15, the encoder 24 encodes the omnidirectional image supplied from the mapping processing unit 23 with a predetermined encoding method such as the MPEG2 method or the AVC method, to generate one encoded stream. The encoder 24 supplies the generated one encoded stream to the transmission unit 27.
In steps S13 to S15, the provided six each of the rotation processing units 22, the mapping processing units 23, and the encoders 24 perform processing in parallel on the captured images having the different high resolution directions (front directions).
In step S16, the table generation unit 26 generates, as auxiliary information, a parameter table in which information specifying the six high resolution directions and information indicating the resolution improvement ratios are collected for each high resolution direction, and supplies the auxiliary information to the transmission unit 27.
In step S17, the transmission unit 27 uploads a total of six encoded streams supplied from the six encoders 24, and the auxiliary information supplied from the table generation unit 26 to the distribution server 13.
(Configuration Example of Distribution Server and Reproduction Device)
The distribution server 13 includes a reception unit 101, a storage 102, and a transmission/reception unit 103.
The reception unit 101 receives six encoded streams and auxiliary information uploaded from the generation device 12 of
The storage 102 stores the six encoded streams and the auxiliary information supplied from the reception unit 101.
In response to a request from the reproduction device 15, the transmission/reception unit 103 reads the auxiliary information stored in the storage 102, and transmits the auxiliary information to the reproduction device 15 via the network 14.
Furthermore, in response to the request from the reproduction device 15, the transmission/reception unit 103 reads one predetermined encoded stream among the six encoded streams stored in the storage 102, and transmits the predetermined encoded stream to the reproduction device 15 via the network 14. The one encoded stream transmitted to the reproduction device 15 by the transmission/reception unit 103 is appropriately changed in response to the request from the reproduction device 15.
Note that, the change of the encoded stream to be transmitted is performed at a sync point. Thus, the change of the encoded stream to be transmitted is performed on several frames to several tens of frames basis. Furthermore, as described above, the sync points are the same as each other among the six encoded streams. Thus, the transmission/reception unit 103 can easily switch the captured image to be reproduced in the reproduction device 15 by switching the encoded stream to be transmitted at the sync point.
The reproduction device 15 includes the camera 15A, a transmission/reception unit 121, a decoder 122, a mapping processing unit 123, a rotation calculation unit 124, a receiving unit 125, a gaze detecting unit 126, a stream determination unit 127, and a drawing unit 128.
The transmission/reception unit 121 (receiving unit) of the reproduction device 15 requests the distribution server 13 to transmit auxiliary information via the network 14, and receives the auxiliary information transmitted from the transmission/reception unit 103 of the distribution server 13 in response to the request. The transmission/reception unit 121 supplies the acquired auxiliary information to the stream determination unit 127.
Furthermore, stream selection information is supplied from the stream determination unit 127 to the transmission/reception unit 121, the stream selection information indicating which one encoded stream is to be acquired among the six encoded streams that can be acquired from the distribution server 13.
The transmission/reception unit 121 requests the distribution server 13 via the network 14 to transmit the one encoded stream determined on the basis of the stream selection information, and receives the one encoded stream transmitted from the transmission/reception unit 103 of the distribution server 13 in response to the request. The transmission/reception unit 121 supplies the acquired one encoded stream to the decoder 122.
The decoder 122 (decoding unit) decodes the encoded stream supplied from the transmission/reception unit 121, to generate an omnidirectional image in which high resolution is set in a predetermined direction. The generated omnidirectional image is an omnidirectional image having a high resolution direction in a direction closest to a current gaze direction of the viewer among the high resolution directions of the respective six encoded streams that can be acquired from the distribution server 13. The decoder 122 supplies the generated omnidirectional image to the mapping processing unit 123.
The mapping processing unit 123 generates a 3D model image by mapping as a texture an omnidirectional image supplied from the decoder 122 onto the spherical surface of a unit sphere as a 3D model, and supplies the 3D model image to the drawing unit 128.
More specifically, the mapping processing unit 123, similarly to the mapping processing unit 23 of the generation device 12, executes mapping processing of mapping the omnidirectional image supplied from the decoder 122 by the mapping f″ using the vector q1 of the equation (3). The 3D model image obtained by the mapping is an image in which a predetermined direction close to the current gaze direction of the viewer is set as the high resolution direction.
The rotation calculation unit 124 acquires, from the stream determination unit 127, information specifying the high resolution direction of the one encoded stream received by the transmission/reception unit 121, generates rotation information corresponding to the high resolution direction, and supplies the rotation information to the drawing unit 128.
In other words, the six encoded streams acquired from the distribution server 13 are subjected to rotation processing by the rotation processing unit 22 of the generation device 12 so that the same (common) direction in the 3D model coordinate system is the high resolution direction, and then subjected to the mapping processing. Therefore, the rotation calculation unit 124 generates rotation information for returning to the original direction of the 3D model coordinate system on the basis of the information specifying the high resolution direction supplied from the stream determination unit 127, and supplies the rotation information to the drawing unit 128.
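A minimal sketch of how the rotation calculation unit 124 could derive the rotation information from the azimuth angle θ, the elevation angle φ, and the rotation angle ψ; the axis order and sign conventions are assumptions that depend on the definition of the 3D model coordinate system.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]], dtype=float)

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]], dtype=float)

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

def rotation_info(azimuth_deg, elevation_deg, rotation_deg):
    """Rotation information for the drawing unit 128: the inverse of the rotation
    that brought the high resolution direction to the image center on the
    generation side (axis order and signs are assumptions)."""
    th, ph, ps = np.radians([azimuth_deg, elevation_deg, rotation_deg])
    generation_side = rot_y(th) @ rot_x(ph) @ rot_z(ps)
    return generation_side.T                     # transpose = inverse of a rotation matrix
```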
The receiving unit 125 receives a detection result of the gyro sensor 16B in
The gaze detecting unit 126 determines a gaze direction of the viewer in the 3D model coordinate system on the basis of the detection result of the gyro sensor 16B supplied from the receiving unit 125, and supplies the determined direction to the stream determination unit 127. Furthermore, the gaze detecting unit 126 acquires a captured image of the marker 16A from the camera 15A, and detects a viewing position in the coordinate system of the 3D model on the basis of the captured image.
Furthermore, the gaze detecting unit 126 determines a visual field range of the viewer in the 3D model coordinate system on the basis of the viewing position and the gaze direction in the 3D model coordinate system. The gaze detecting unit 126 supplies the visual field range and the viewing position of the viewer to the drawing unit 128.
The stream determination unit 127 is supplied with the auxiliary information from the transmission/reception unit 121, and is supplied with the gaze direction of the viewer from the gaze detecting unit 126.
The stream determination unit 127 (selection unit) determines (selects) an encoded stream having a high resolution direction closest to the gaze direction of the viewer among the six encoded streams that can be acquired from the distribution server 13, on the basis of the gaze direction of the viewer and the auxiliary information. In other words, the stream determination unit 127 determines one encoded stream of which the front of the image perspectively projected onto the visual field range of the viewer has high resolution.
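The selection described above can be sketched as follows: each high resolution direction in the parameter table is converted into a unit vector, and the entry with the largest inner product with the gaze direction is chosen. The conversion assumes a right-handed coordinate system with the Y axis up, and reuses the HighResolutionEntry structure sketched earlier.

```python
import numpy as np

def direction_vector(azimuth_deg, elevation_deg):
    """Unit vector of a high resolution direction (Y-up, right-handed assumption)."""
    th, ph = np.radians(azimuth_deg), np.radians(elevation_deg)
    return np.array([np.cos(ph) * np.sin(th), np.sin(ph), np.cos(ph) * np.cos(th)])

def select_stream(parameter_table, gaze_dir):
    """Return the stream_id of the entry whose high resolution direction is
    closest to the gaze direction (largest inner product)."""
    gaze = np.asarray(gaze_dir, dtype=float)
    gaze = gaze / np.linalg.norm(gaze)
    best = max(parameter_table, key=lambda e: float(
        np.dot(direction_vector(e.azimuth_deg, e.elevation_deg), gaze)))
    return best.stream_id
```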
The stream determination unit 127 supplies stream selection information indicating the selected encoded stream to the transmission/reception unit 121.
Furthermore, the stream determination unit 127 supplies information specifying the high resolution direction of the selected encoded stream to the rotation calculation unit 124. Specifically, the stream determination unit 127 supplies the azimuth angle θ, the elevation angle φ, and the rotation angle ψ of the auxiliary information corresponding to the selected encoded stream to the rotation calculation unit 124. The rotation calculation unit 124 generates rotation information on the basis of the information specifying the high resolution direction supplied from the stream determination unit 127.
Note that, the stream determination unit 127 may also supply the stream selection information to the rotation calculation unit 124, and the rotation calculation unit 124 may generate the rotation information by referring to the parameter table of
The drawing unit 128 generates an image of the visual field range of the viewer as the display image, by perspectively projecting the 3D model image supplied from the mapping processing unit 123 onto the visual field range of the viewer, with the viewing position supplied from the gaze detecting unit 126 as a focal point.
Processing of the drawing unit 128 will be described with reference to
A of
As illustrated in
In the 3D model image supplied from the mapping processing unit 123, a predetermined direction of the spherical surface 142 of the unit sphere, for example, a direction indicated by an arrow 143 is the high resolution direction. The direction of the arrow 143 is a direction of the image center of the two-dimensional texture image, and is a direction common to six 3D model images corresponding to the six encoded streams, but is a direction unrelated to the gaze direction of the viewer.
The drawing unit 128 rotates the 3D model image in accordance with the rotation information supplied from the rotation calculation unit 124. In the example of
In other words, the drawing unit 128 rotates the high resolution portion of the 3D model image supplied from the mapping processing unit 123 so that the portion is in the original direction of the arrows dir1 to dir6 of
Next, the drawing unit 128 perspectively projects the rotated 3D model image onto a visual field range 146 of the viewer on the basis of the visual field range and viewing position of the viewer supplied from the gaze detecting unit 126. As a result, an image is generated as the display image, the image being mapped onto the unit sphere and viewed from the center point 141 that is the viewing position through the visual field range 146 of the viewer. The generated display image is supplied to the head mounted display 16.
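A minimal sketch of the perspective projection performed by the drawing unit 128 for a single vertex of the rotated 3D model image; the field of view and the display image size are placeholder values, and view_rotation is assumed to map the 3D model coordinate system into a camera frame whose +Z axis is the gaze direction.

```python
import numpy as np

def perspective_project(vertex, eye, view_rotation, fov_deg=90.0, width=1920, height=1080):
    """Project one vertex of the rotated 3D model image into pixel coordinates of
    the display image, with the viewing position 'eye' as the focal point."""
    v = view_rotation @ (np.asarray(vertex, dtype=float) - np.asarray(eye, dtype=float))
    if v[2] <= 0.0:
        return None                              # behind the viewer, outside the visual field
    focal = (width / 2.0) / np.tan(np.radians(fov_deg) / 2.0)
    x = width / 2.0 + focal * v[0] / v[2]
    y = height / 2.0 - focal * v[1] / v[2]
    return x, y
```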
(Description of Processing of Reproduction Device)
First, in step S31, the transmission/reception unit 121 requests the distribution server 13 to transmit auxiliary information, and receives the auxiliary information transmitted from the transmission/reception unit 103 of the distribution server 13 in response to the request. The transmission/reception unit 121 supplies the acquired auxiliary information to the stream determination unit 127.
In step S32, the receiving unit 125 receives a detection result of the gyro sensor 16B in
In step S33, the gaze detecting unit 126 determines a gaze direction of the viewer in the coordinate system of the 3D model on the basis of the detection result of the gyro sensor 16B supplied from the receiving unit 125, and supplies the gaze direction to the stream determination unit 127.
In step S34, the gaze detecting unit 126 determines a viewing position and a visual field range of the viewer in the coordinate system of the 3D model, and supplies the viewing position and the visual field range to the drawing unit 128. More specifically, the gaze detecting unit 126 acquires a captured image of the marker 16A from the camera 15A, and detects the viewing position in the coordinate system of the 3D model on the basis of the captured image. Then, the gaze detecting unit 126 determines the visual field range of the viewer in the 3D model coordinate system on the basis of the detected viewing position and the gaze direction.
In step S35, the stream determination unit 127 determines (selects) one encoded stream from among six encoded streams that can be acquired from the distribution server 13 on the basis of the gaze direction of the viewer and the auxiliary information. In other words, the stream determination unit 127 determines (selects) an encoded stream having a high resolution direction closest to the gaze direction of the viewer from among the six encoded streams. Then, the stream determination unit 127 supplies stream selection information indicating the selected encoded stream to the transmission/reception unit 121.
In step S36, the stream determination unit 127 supplies information specifying the high resolution direction of the selected encoded stream to the rotation calculation unit 124.
In step S37, the rotation calculation unit 124 generates rotation information on the basis of the information specifying the high resolution direction supplied from the stream determination unit 127, and supplies the rotation information to the drawing unit 128.
In step S38, the transmission/reception unit 121 requests the distribution server 13 via the network 14 to transmit one encoded stream corresponding to the stream selection information supplied from the stream determination unit 127, and receives the one encoded stream transmitted from the transmission/reception unit 103 of the distribution server 13 in response to the request. The transmission/reception unit 121 supplies the acquired one encoded stream to the decoder 122.
In step S39, the decoder 122 decodes the encoded stream supplied from the transmission/reception unit 121, generates an omnidirectional image in which high resolution is set in a predetermined direction, and supplies the omnidirectional image to the mapping processing unit 123.
The order of the processing of steps S36 and S37 and the processing of steps S38 and S39 may be reversed. Furthermore, the processing of steps S36 and S37 and the processing of steps S38 and S39 can be executed in parallel.
In step S40, the mapping processing unit 123 generates a 3D model image in which a two-dimensional texture image is pasted on the spherical surface of a unit sphere, by mapping as a texture the omnidirectional image supplied from the decoder 122 onto the unit sphere as the 3D model, and supplies the 3D model image to the drawing unit 128.
Specifically, the mapping processing unit 123 executes mapping processing of the mapping f″ using the vector q1 of the equation (3), and maps the omnidirectional image supplied from the decoder 122 onto the unit sphere.
In step S41, the drawing unit 128 rotates the 3D model image supplied from the mapping processing unit 123 in accordance with the rotation information supplied from the rotation calculation unit 124.
In step S42, the drawing unit 128 generates a display image by perspectively projecting the rotated 3D model image onto the visual field range of the viewer on the basis of the visual field range and viewing position of the viewer supplied from the gaze detecting unit 126.
In step S43, the drawing unit 128 transmits the display image to the head mounted display 16 to display the image.
In step S44, the reproduction device 15 determines whether or not reproduction is to be ended. For example, the reproduction device 15 determines to end the reproduction when operation to end the reproduction is performed by the viewer.
In a case where it is determined in step S44 that the reproduction is not to be ended, the processing returns to step S32, and the above-described processing of steps S32 to S44 is repeated. On the other hand, in a case where it is determined in step S44 that the reproduction is to be ended, the reproduction processing is ended.
A modification of the above-described first embodiment will be described with reference to
In the first embodiment described above, the reproduction device 15 generates the display image of when the viewer views the gaze direction with the center position of the sphere as the viewing position, by acquiring the omnidirectional image on which the mapping processing of the mapping f″ is executed from the distribution server 13, and mapping as a texture the omnidirectional image onto the unit sphere as the 3D model.
The equation for obtaining the vector q1 of the equation (3) corresponding to the mapping processing of the mapping f″ is simply vector addition in which the vector s1 (negative vector s1) is added to the vector p. The addition of the vector s1 is equivalent to operation of moving the viewing position of the 3D model of the unit sphere of the vector p on the reproduction side.
In other words, the reproduction processing described above is processing in which the reproduction device 15 pastes the omnidirectional image (two-dimensional texture image) 61 for the mapping processing of the mapping f″ onto the spherical surface 142 of the unit sphere as the 3D model by the mapping processing of the mapping f″, to generate, as the display image, an image viewed by the viewer from the center point 141 of the unit sphere, as illustrated in A of
On the other hand, the same display image can be generated even when the reproduction device 15 pastes the omnidirectional image (two-dimensional texture image) 61 for the mapping processing of the mapping f′ onto the spherical surface 142 of the unit sphere as the 3D model by the mapping processing of the mapping f′, to generate, as the display image, an image viewed by the viewer from a position 141′ offset by a predetermined amount from the center point 141 of the unit sphere (hereinafter referred to as the offset viewing position 141′), as illustrated in B of
Furthermore, an amount of offset from the center point 141 of the unit sphere to the offset viewing position 141′ corresponds to the vector s1 of the equation (3), and also corresponds to the eccentricity ratio k of the parameter table of
In a case where the reproduction device 15 executes the reproduction processing by viewpoint movement described with reference to B of
The mapping processing unit 123 executes the mapping processing of the mapping f′ using the vector p of the equation (2) on the omnidirectional image supplied from the decoder 122, and supplies a 3D model image obtained as a result of the execution to the drawing unit 128. The mapped 3D model image is an image in which a predetermined direction close to the current gaze direction of the viewer is set as the high resolution direction.
The drawing unit 128 rotates the 3D model image supplied from the mapping processing unit 123 in accordance with the rotation information supplied from the rotation calculation unit 124, and offsets the viewing position from the center of the unit sphere in accordance with the movement information.
Then, the drawing unit 128 generates an image of the visual field range of the viewer as the display image, by perspectively projecting the rotated 3D model image onto the visual field range of the viewer from the viewing position after the offset movement.
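The equivalence of the two drawing paths can be checked for a single texture point as follows; s1 is assumed to point away from the high resolution direction (+Z here), matching the convention used in the earlier mapping f″ sketch.

```python
import numpy as np

# One texture point p placed by the mapping f', and the offset position vector s1.
p = np.array([0.0, 0.6, 0.8])
s1 = np.array([0.0, 0.0, -0.5])

eye_a, vertex_a = np.zeros(3), p - s1    # first embodiment: mapping f'' model, viewed from the center
eye_b, vertex_b = s1, p                  # modification: mapping f' model, viewed from the offset position
dir_a = (vertex_a - eye_a) / np.linalg.norm(vertex_a - eye_a)
dir_b = (vertex_b - eye_b) / np.linalg.norm(vertex_b - eye_b)
assert np.allclose(dir_a, dir_b)         # the same viewing direction, hence the same display image
```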
A of
In A of
Note that, the offset viewing positions 1415′ and 1416′ are in the direction perpendicular to the page surface and are not illustrated.
B of
With the offset viewing positions 1411′ to 1416′, which are the viewing positions of the viewer, as references, parallel movement processing is performed, so that the spherical surfaces 1421 to 1426 of the unit spheres are all positioned at shifted positions.
In comparison with the reproduction processing of A of
In other words, the amount of offset from the center point 141 of the unit sphere to the offset viewing position 141′ corresponds to the length of the vector s1 of the equation (3) and the eccentricity ratio k of the parameter table of
Thus, switching of the six encoded streams is possible only by performing processing of moving the 3D model image obtained by executing the mapping processing of the mapping f′ by the amount of offset corresponding to the eccentricity ratio k. Therefore, when the switching of the six encoded streams becomes necessary, the encoded streams can be switched by only the reproduction device 15 without requesting the distribution server 13 to transmit a new encoded stream.
By the way, the image input to the mapping processing unit 23 of the generation device 12 is a captured image such as an equirectangular image as illustrated in
The image output by the mapping processing unit 23 of the generation device 12 is a two-dimensional texture image (eccentric spherical mapping image) mapped to be viewed from a position shifted by the offset position vector s1 from the center of the unit sphere, onto the spherical surface of the unit sphere as the 3D model, and a position on the two-dimensional texture image is defined by the plane coordinate system of the U-axis and the V-axis.
On the other hand, as illustrated in
Then, the image output from the drawing unit 128 of the reproduction device 15 is an image of the 3D model image perspectively projected onto the visual field range of the viewer, and is determined by the gaze direction including the azimuth angle θ and the elevation angle φ with the viewing position as a reference.
Thus, the coordinate system of the input/output is reversed between the mapping processing unit 23 of the generation device 12 and the mapping processing unit 123 of the reproduction device 15, and although basically the processing is performed in the opposite direction, the mapping processing unit 23 of the generation device 12 and the mapping processing unit 123 of the reproduction device 15 both execute the mapping processing of the mapping f″. The reason why the mapping processing unit 23 of the generation device 12 and the mapping processing unit 123 of the reproduction device 15 can perform processing without using an inverse mapping f″⁻¹ will now be described.
The mapping processing unit 23 of the generation device 12 performs backward mapping that loops the (u, v) coordinates of the two-dimensional texture image to be output, and calculates which position of the input captured image corresponds to each pixel of the two-dimensional texture image. In two-dimensional image processing, processing of calculating output pixel values one by one is normal processing.
On the other hand, the mapping processing unit 123 of the reproduction device 15 performs forward mapping that loops the (u, v) coordinates of the input two-dimensional texture image, and arranges vertices of the 3D model in accordance with information of the (u, v) coordinates and three-dimensional coordinate position (x, y, z) of the 3D model, for each pixel of the two-dimensional texture image. In three-dimensional CG processing, processing of arranging vertices of a model on a three-dimensional space is often performed and can be calculated by forward processing.
Thus, the generation device 12 side realizes the forward processing by a backward-mapping loop, and the reproduction device 15 side realizes the backward processing by a forward-mapping loop, so that the mapping used on both sides matches. As a result, implementation is possible by using only the mapping f″.
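The two loop structures described above can be sketched as follows. This is a minimal illustration: mapping_f2 is a placeholder standing in for the mapping f″, and the captured-image lookup helper is hypothetical.

```python
import numpy as np

def generate_texture(lookup_captured_pixel, width, height, mapping_f2):
    """Generation side (backward mapping): loop over the OUTPUT (u, v)
    pixels of the two-dimensional texture image and fetch the input
    captured-image pixel that corresponds to each of them."""
    texture = np.zeros((height, width, 3), dtype=np.uint8)
    for v in range(height):
        for u in range(width):
            x, y, z = mapping_f2(u / width, v / height)      # point on the 3D model
            texture[v, u] = lookup_captured_pixel(x, y, z)   # sample the captured image
    return texture

def build_model_vertices(width, height, mapping_f2):
    """Reproduction side (forward mapping): loop over the INPUT (u, v)
    pixels and arrange 3D model vertices that carry both the (u, v)
    texture coordinate and the (x, y, z) model position."""
    vertices = []
    for v in range(height):
        for u in range(width):
            x, y, z = mapping_f2(u / width, v / height)
            vertices.append(((u / width, v / height), (x, y, z)))
    return vertices
```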
Next, a second embodiment will be described of a distribution system to which the present disclosure is applied.
Note that, in the second embodiment, description will be made for only portions different from the first embodiment described above.
The generation device 12 of the distribution system 10 according to the second embodiment is different in part of the mapping processing unit 23 from that of the first embodiment described above. Therefore, only the mapping processing unit 23 will be described for the generation device 12.
The mapping processing unit 23 in
In other words, as illustrated in
A of
B of
C of
When an offset position vector in the second embodiment is a vector s2, the vector q2 corresponding to the mapping g is expressed by an equation (4) below.
[Expression 4]
\vec{q}_2 = t\vec{p} + \vec{s}_2   (4)
The vector q2 is a vector directed from the center of the unit sphere to the intersection of the unit spherical surface with a straight line extended in the direction of the vector p from the position indicated by the offset position vector s2.
Here, since the vector q2 is a point on the spherical surface of the unit sphere, the condition of an equation (5) is satisfied.
[Expression 5]
|\vec{q}_2| = 1   (5)
Furthermore, since the intersection is a point where a straight line extended in the same direction as the direction of the vector p intersects with the unit spherical surface, a parameter t satisfies the following.
t>0 (6)
Thus, the vector q2 of the equation (4) is obtained by adding the offset position vector s2 to the vector p multiplied by a positive constant.
By substituting the equation (4) into the equation (5), an equation (7) is obtained,
[Expression 6]
(t\vec{p} + \vec{s}) \cdot (t\vec{p} + \vec{s}) = 1   (7)
and an equation (8) is obtained that is a quadratic equation for t.
[Expression 7]
|\vec{p}|^2 t^2 + 2(\vec{s} \cdot \vec{p}) t + |\vec{s}|^2 - 1 = 0   (8)
In the equation (8), which is the quadratic equation for t, when a coefficient of t2, a coefficient of t, and a constant term are defined as
[Expression 8]
a = |\vec{p}|^2
b = 2(\vec{s} \cdot \vec{p})
c = |\vec{s}|^2 - 1
t can be obtained as follows from the formula of the solution of the quadratic equation, where the positive root is taken so that the condition (6) is satisfied.
[Expression 9]
t = \frac{-b + \sqrt{b^2 - 4ac}}{2a}   (9)
If t is obtained, the vector q2 can be obtained by the equation (4), so that mapping processing can be executed of the mapping g using the vector q2.
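Under the definitions above, the mapping g can be sketched as follows. The function name and the use of NumPy are assumptions made here for illustration, but the computation follows the equations (4) to (9) and the condition (6).

```python
import numpy as np

def mapping_g(p, s2):
    """Sketch of the mapping-g correction in the second embodiment:
    find the intersection q2 of the unit sphere with the ray that starts at
    the offset position s2 and points in the direction of p.
    p and s2 are assumed to be 3-element vectors with |s2| < 1."""
    p = np.asarray(p, dtype=float)
    s2 = np.asarray(s2, dtype=float)
    a = np.dot(p, p)                 # |p|^2
    b = 2.0 * np.dot(s2, p)          # 2 (s . p)
    c = np.dot(s2, s2) - 1.0         # |s|^2 - 1, negative when |s2| < 1
    t = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)   # positive root, t > 0
    return t * p + s2                # equation (4); |q2| = 1 by construction
```

For example, with p = (1, 0, 0) and s2 = (0.5, 0, 0), the positive root is t = 0.5 and q2 = (1, 0, 0), which indeed lies on the unit spherical surface.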
(Description of Processing of Generation Device)
Since generation processing of the generation device 12 in the second embodiment is similar to the processing described with reference to the flowchart in
However, in the generation processing of the second embodiment, instead of executing the mapping processing of the mapping f″ using the vector q1 of the equation (3) in step S314, the mapping processing is executed of the mapping g using the vector q2 of the equation (4).
(Description of Reproduction Device)
The reproduction device 15 will be described of the distribution system 10 according to the second embodiment.
The reproduction device 15 of the distribution system 10 according to the second embodiment is also different in part of the mapping processing unit 123 from that of the first embodiment described above.
Just as the mapping processing unit 123 in the first embodiment has a configuration similar to that of the mapping processing unit 23 of the generation device 12 in the first embodiment, the mapping processing unit 123 according to the second embodiment has a configuration similar to that of the mapping processing unit 23 of the generation device 12 in the second embodiment.
In other words, the mapping processing unit 123 according to the second embodiment has the same configuration as that of the mapping processing unit 23 illustrated in
(Description of Processing of Reproduction Device)
Since reproduction processing of the reproduction device 15 in the second embodiment is similar to the processing described with reference to the flowchart of
However, in the reproduction processing of the second embodiment, instead of executing the mapping processing of the mapping f″ using the vector q1 of the equation (3) in step S40, the mapping processing is executed of the mapping g using the vector q2 of the equation (4).
Note that, since the vector q2 of the equation (4) adopted in the second embodiment is not a simple addition of fixed vectors, it is not possible to apply the reproduction processing by viewpoint movement described as a modification in the first embodiment.
The relationship between the eccentricity ratio k and the resolution improvement ratio will be described for each of the first and second embodiments.
In the first embodiment, a point having the highest resolution is a point farthest from the spherical surface in the opposite direction to the vector s1, and the distance is (1+k), so that when the resolution improvement ratio is μ, the resolution improvement ratio μ becomes μ=(1+k) from the similarity relation of triangles. Since a possible value of the eccentricity ratio k is 0≤k<1, a possible value of the resolution improvement ratio μ is 1≤μ<2.
In the second embodiment, a point having the highest resolution is a point closest to the spherical surface in the same direction as the vector s2, and the distance is (1−k), so that when the resolution improvement ratio is μ, the resolution improvement ratio μ is expressed by an equation (10), μ = 1/(1 − k). Since a possible value of the eccentricity ratio k is 0≤k<1, the resolution improvement ratio μ can take values from 1 to infinity.
In the first embodiment, even when the eccentricity ratio k is changed from k=0.5 to 0.9, the resolution increases but does not become more than twice.
In the second embodiment, when the eccentricity ratio k is changed from k=0.5 to 0.9, the resolution improvement ratio becomes extremely large.
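A small sketch of the two relationships, assuming μ = 1 + k for the first embodiment and μ = 1/(1 − k) for the second embodiment as derived above:

```python
def resolution_ratio_first(k):
    """First embodiment: mu = 1 + k (always below 2 for 0 <= k < 1)."""
    return 1.0 + k

def resolution_ratio_second(k):
    """Second embodiment: mu = 1 / (1 - k), which grows without bound
    as k approaches 1 (inferred from the distance (1 - k) above)."""
    return 1.0 / (1.0 - k)

for k in (0.0, 0.5, 0.9):
    print(k, resolution_ratio_first(k), resolution_ratio_second(k))
# k = 0.5 gives 1.5 and 2.0; k = 0.9 gives 1.9 and 10.0
```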
A difference between the mapping processing of the first embodiment and the second embodiment will be described with reference to
In the first embodiment, the mapping processing unit 23 performs mapping correction of adding the offset position vector s1 after normalizing a vector generated by the omnidirectional mapping coordinate generation unit 41 so that the vector is arranged on the spherical surface of the unit sphere. Reproduction becomes possible by viewing the 3D model of the unit sphere, to which the two-dimensional texture image is pasted, from a viewpoint shifted by the offset position vector s1. The density is highest in the right direction, which is the front, and lowest in the left direction.
In the second embodiment, the mapping processing unit 23 performs mapping correction by setting, as new mapping coordinates, the position where the spherical surface intersects with a straight line extended, from the position shifted by the offset position vector s2 from the center of the unit sphere, along the vector generated by the omnidirectional mapping coordinate generation unit 41. Reproduction becomes possible by viewing the 3D model of the unit sphere, to which the two-dimensional texture image is pasted, from the center of the unit sphere. The density is highest in the right direction, which is the front, and lowest in the left direction.
In the mapping processing in the first embodiment, as illustrated in A of
On the other hand, in the mapping processing in the second embodiment, as illustrated in B of
In a case where the cube model is adopted as the 3D model, as illustrated in
The mapping processing of the first embodiment and the second embodiment may be executed independently as described above, or both types of the mapping processing may be combined and executed. A configuration of executing the mapping processing of both the first embodiment and the second embodiment will be referred to as a third embodiment.
A of
B of
C of
The vector q3 is expressed by an equation (11) in which the vector p of the equation (3) in the first embodiment is replaced with the vector q2 of the equation (4) in the second embodiment.
[Expression 11]
\vec{q}_3 = \vec{q}_2 - \vec{s}_1   (11)
Since the vector q2 of the equation (4) is normalized as indicated by the equation (5), it is not necessary to perform normalization processing corresponding to the equation (2).
When the vector q2 of the equation (11) is expanded by using the equation (4), an equation (12) is obtained.
[Expression 12]
\vec{q}_3 = t\vec{p} + \vec{s}_2 - \vec{s}_1   (12)
In the equation (12), t is the same value as t in the equation (4), and is obtained by solving the quadratic equation (8) with the equation (9). If t is determined, the vector q3 is obtained by the equation (12), so that the mapping processing of the mapping h can be executed using the vector q3.
Similarly to the second embodiment, the mapping processing unit 23 performs mapping correction by calculation of setting, as new mapping coordinates, a position where the spherical surface intersects with an extension straight line of the vector generated by the omnidirectional mapping coordinate generation unit 41 from a position shifted by the offset position vector s2 from the center of the unit sphere. Then, reproduction becomes possible by operation of viewing the 3D model of the unit sphere to which a two-dimensional texture image is pasted from a viewpoint shifted by the offset position vector s1, similarly to the first embodiment.
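A minimal sketch of the mapping h, combining the two corrections as in the equation (12); the function name and vector layout are assumptions for illustration, not the device's actual implementation.

```python
import numpy as np

def mapping_h(p, s1, s2):
    """Sketch of the mapping h of the third embodiment (equation (12)):
    first apply the second-embodiment correction to obtain the point on the
    unit sphere, then subtract the offset position vector s1 as in the
    first embodiment."""
    p = np.asarray(p, dtype=float)
    s1 = np.asarray(s1, dtype=float)
    s2 = np.asarray(s2, dtype=float)
    a = np.dot(p, p)
    b = 2.0 * np.dot(s2, p)
    c = np.dot(s2, s2) - 1.0
    t = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)   # same t as in equation (4)
    return t * p + s2 - s1                             # q3 = t p + s2 - s1
```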
According to the distribution system 10 of the present disclosure, the image of one encoded stream generated by the generation device 12 and transmitted from the distribution server 13 is an omnidirectional image corresponding to looking around 360 degrees in all directions of top, bottom, left, and right, in which high resolution is set in the direction (front direction) corresponding to the gaze direction of the viewer. As a result, it is possible to present the viewer with an image in which the image quality in the front direction is improved, while securing an image that can be drawn at the periphery of the visual field or at the time of sudden turning around. Compared with a distribution system that distributes an omnidirectional image whose resolution (pixel density) is uniform in all directions, an image with higher image quality on the viewing direction side can be presented to the viewer on the same band.
In the generation device 12 that generates the omnidirectional image in which high resolution is set in the predetermined direction corresponding to the front direction, in addition to the omnidirectional mapping coordinate generation unit 41 that distributes the omnidirectional image in which resolution is uniform in all directions, the vector normalization unit 42 and the mapping correction unit 43 are newly provided.
The vector normalization unit 42 converts (normalizes) the vector d corresponding to the predetermined 3D model (the regular octahedron model in the present embodiment) adopted by the omnidirectional mapping coordinate generation unit 41 into the vector p of the 3D model of the unit sphere.
The mapping correction unit 43 performs correction of the 3D model of the unit sphere by calculation of adding a predetermined offset position vector s to the vector p normalized by the vector normalization unit 42.
Some general omnidirectional mappings, such as cube mapping, are defined by a 3D model in which the distance from the center is not constant, and it has therefore been difficult to perform a resolution correction whose effect does not depend on the shape of the 3D model.
In the generation device 12, the predetermined 3D model adopted by the omnidirectional mapping coordinate generation unit 41 is temporarily converted into the 3D model of the unit sphere by the vector normalization unit 42, so that an arbitrary 3D model can be used for the omnidirectional mapping. In other words, an omnidirectional image in which high resolution is set in a specific direction can be generated for any kind of omnidirectional mapping. Furthermore, since the processing is always performed on the same unit sphere, the effect of the correction by the mapping correction unit 43 is constant.
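The roles of the vector normalization unit 42 and the mapping correction unit 43 can be sketched as follows; the function names, the sign of the offset, and the cube-model example are assumptions made for illustration, not the units' actual implementation.

```python
import numpy as np

def normalize_to_unit_sphere(d):
    """Vector normalization unit (sketch): convert the vector d of an
    arbitrary 3D model (cube, regular octahedron, ...) into the vector p
    on the unit sphere, so the later correction works the same way for
    any omnidirectional mapping."""
    d = np.asarray(d, dtype=float)
    return d / np.linalg.norm(d)

def mapping_correction(p, s):
    """Mapping correction unit (sketch): add the offset position vector s
    to the normalized vector p. The sign convention of s (toward or away
    from the high resolution direction) is an assumption here."""
    return np.asarray(p, dtype=float) + np.asarray(s, dtype=float)

# Example: a vertex of a cube model is first normalized, then corrected.
d = [1.0, 1.0, 1.0]                      # cube-model vector, length sqrt(3)
q = mapping_correction(normalize_to_unit_sphere(d), [-0.5, 0.0, 0.0])
```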
Thus, according to the distribution system 10, it is enabled to provide a mapping that is applicable to an arbitrary mapping method and in which high resolution is set in a viewing direction.
Since the eccentricity ratio k corresponds to the length of the offset position vector s added by the mapping correction unit 43, the resolution improvement ratio in the front direction can be set steplessly and can be chosen freely depending on the reproduction-side device, the imaging conditions, and the like.
Furthermore, when the eccentricity ratio k is set to 0, the correction processing is effectively not performed, and the system behaves the same as a system that distributes an omnidirectional image in which resolution is uniform in all directions, so that compatibility with such a distribution system can be maintained.
The reproduction device 15 also includes the mapping processing unit 123 having a configuration similar to that of the mapping processing unit 23 of the generation device 12, so that effects similar to those described above are obtained.
In the first embodiment, since the equation for obtaining the mapping-corrected vector q1 is represented by simple vector addition to the vector p, switching of the encoded stream corresponding to the front direction can also be handled by the reproduction device 15 alone, which only performs processing of moving by the amount of offset corresponding to the eccentricity ratio k.
In the second embodiment, the resolution improvement ratio μ in the high resolution direction can be set up to infinity.
A modification will be described that can be commonly applied to the above-described first to third embodiments.
(Another First Example of Encoded Stream)
In the first to third embodiments described above, a case has been described where the number of encoded streams stored in the distribution server 13 is six; however, the number of encoded streams is not limited to six. More than six, for example, 12 or 24, encoded streams corresponding to high resolution directions may be uploaded to the distribution server 13, and the encoded streams may be switched more finely in the gaze direction of the viewer.
Conversely, a configuration may also be adopted in which less than six encoded streams corresponding to the high resolution directions are uploaded to the distribution server 13.
For example, in a case where the omnidirectional image is generated from captured images of a concert hall, three encoded streams whose high resolution directions lie only in the stage direction and its vicinity, which are assumed to be important for the viewer, can be prepared in the distribution server 13, and in a case where another direction (for example, the direction opposite to the stage direction) is the gaze direction of the viewer, one encoded stream having uniform pixel density in all directions is selected.
In this case, the table generation unit 26 generates the parameter table illustrated in
The three encoded streams corresponding to ID1 to ID3 are encoded streams in which predetermined directions are set as high resolution directions, and the one encoded stream corresponding to ID4 is an encoded stream of an omnidirectional image having uniform pixel density in all directions, which is selected when the gaze direction of the viewer is not in any of the directions corresponding to the encoded streams of ID1 to ID3.
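A possible sketch of such a selection rule is shown below. The table format, the angular threshold, and the fallback to ID4 are hypothetical illustrations of the behavior described above, not the actual logic of the reproduction device 15.

```python
import numpy as np

def select_stream(gaze_direction, table, angle_threshold_deg=45.0):
    """Pick the encoded stream (ID1-ID3) whose high resolution direction is
    closest to the gaze direction; fall back to the uniform stream (ID4)
    when no high resolution direction is close enough."""
    g = np.asarray(gaze_direction, dtype=float)
    g = g / np.linalg.norm(g)
    best_id, best_angle = None, np.inf
    for stream_id, direction in table.items():          # {id: front direction}
        d = np.asarray(direction, dtype=float)
        d = d / np.linalg.norm(d)
        angle = np.degrees(np.arccos(np.clip(np.dot(g, d), -1.0, 1.0)))
        if angle < best_angle:
            best_id, best_angle = stream_id, angle
    if best_angle <= angle_threshold_deg:
        return best_id
    return "ID4"   # omnidirectional stream with uniform pixel density
```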
(Another Second Example of Encoded Stream)
In the first to third embodiments described above, an example has been described in which only one type is stored of the eccentricity ratio k of 0.5 of the encoded stream stored in the distribution server 13; however, a plurality of encoded streams may be stored for each of a plurality of types of eccentricity ratio k. For example, the above-described six encoded streams may be generated and stored for three types of eccentricity ratios k of k=0.2, 0.5, and 0.7. In this case, for example, the reproduction device 15 can select an encoded stream with an appropriate eccentricity ratio k depending on the viewing angle of the head mounted display 16 and request the distribution server 13 to transmit the encoded stream.
(Another Third Example of Encoded Stream)
In the embodiments described above, the omnidirectional image transmitted as an encoded stream is an omnidirectional image for a 2D image that displays the same images for right and left eyes of the viewer; however, the omnidirectional image may be an omnidirectional image for a 3D image in which an omnidirectional image for the left eye and an omnidirectional image for the right eye are combined (packed).
Specifically, as illustrated in A of
Furthermore, as illustrated in B of
The omnidirectional image 421 for the left eye is an image obtained by perspectively projecting an omnidirectional image of a viewpoint for the left eye mapped onto a sphere, with the center of the sphere as a focal point, onto a visual field range of the left eye. Furthermore, the omnidirectional image 422 for the right eye is an image obtained by perspectively projecting an omnidirectional image of a viewpoint for the right eye mapped onto a sphere, with the center of the sphere as a focal point, onto a visual field range of the right eye.
In a case where the omnidirectional image is a packing image, the mapping processing unit 123 of
As a result, in a case where 3D display is possible, the head mounted display 16 can perform 3D display of the display image by displaying display images of the viewpoint for the left eye and the viewpoint for the right eye respectively as an image for the left eye and an image for the right eye.
(Example of Live Distribution)
In each of the above-described embodiments, the plurality of encoded streams and auxiliary information generated by the generation device 12 are stored once in the storage 102 of the distribution server 13, and in response to a request from the reproduction device 15, the distribution server 13 transmits the encoded stream and the auxiliary information to the reproduction device 15.
However, one or more encoded streams and auxiliary information generated by the generation device 12 may be distributed in real time (live distribution) without being stored in the storage 102 of the distribution server 13. In this case, the data received by the reception unit 101 of the distribution server 13 is immediately transmitted from the transmission/reception unit 103 to the reproduction device 15.
(Others)
Moreover, in the embodiments described above, the captured image is a moving image, but the captured image may be a still image. Furthermore, in the embodiments described above, an example has been described of using an omnidirectional image; however, the technology according to the present disclosure can be applied to all 360-degree images in which 360 degrees (all directions) are imaged, including an all-sky image, an omni-azimuth image, a 360-degree panoramic image, and the like, in addition to the omnidirectional image.
The distribution system 10 may include a stationary display instead of the head mounted display 16. In this case, the reproduction device 15 does not include the camera 15A, and the viewing position and the gaze direction are input by the viewer operating a controller connected to the reproduction device 15 or the stationary display.
Furthermore, the distribution system 10 may include a mobile terminal instead of the reproduction device 15 and the head mounted display 16. In this case, the mobile terminal performs processing of the reproduction device 15 other than the camera 15A, and displays the display image on the display of the mobile terminal. The viewer inputs the viewing position and the gaze direction by changing the posture of the mobile terminal, and the mobile terminal acquires the input viewing position and gaze direction by causing the built-in gyro sensor to detect the posture of the mobile terminal.
A series of processing steps described above can be executed by hardware, or can be executed by software. In a case where the series of processing steps is executed by software, a program constituting the software is installed in a computer. Here, the computer includes a computer incorporated in dedicated hardware, and a computer capable of executing various functions by installation of various programs, for example, a general purpose personal computer, and the like.
In a computer 900, a central processing unit (CPU) 901, a read only memory (ROM) 902, and a random access memory (RAM) 903 are connected to each other by a bus 904.
Moreover, an input/output interface 905 is connected to the bus 904. The input/output interface 905 is connected to an input unit 906, an output unit 907, a storage unit 908, a communication unit 909, and a drive 910.
The input unit 906 includes a keyboard, a mouse, a microphone, and the like. The output unit 907 includes a display, a speaker, and the like. The storage unit 908 includes a hard disk, a nonvolatile memory, or the like. The communication unit 909 includes a network interface and the like. The drive 910 drives a removable medium 911 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
In the computer 900 configured as described above, for example, the CPU 901 loads the program stored in the storage unit 908 to the RAM 903 via the input/output interface 905 and the bus 904 to execute the above-described series of processing steps.
The program executed by the computer 900 (CPU 901) can be provided, for example, by being recorded in the removable medium 911 as a package medium or the like. Furthermore, the program can be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
In the computer 900, the program can be installed to the storage unit 908 via the input/output interface 905 by mounting the removable medium 911 to the drive 910. Furthermore, the program can be installed to the storage unit 908 by receiving with the communication unit 909 via the wired or wireless transmission medium. Besides, the program can be installed in advance to the ROM 902 and the storage unit 908.
Note that, the program executed by the computer 900 can be a program by which the processing is performed in time series along the order described herein, and can be a program by which the processing is performed in parallel or at necessary timing such as when a call is performed.
The technology according to the present disclosure can be applied to various products. The technology according to the present disclosure may be implemented as a device mounted on any type of mobile body, for example, a car, an electric car, a hybrid electric car, a motorcycle, a bicycle, a personal mobility, an airplane, a drone, a ship, a robot, a construction machine, an agricultural machine (tractor), and the like.
Each control unit includes a microcomputer that performs arithmetic processing in accordance with various programs, a storage unit that stores programs executed by the microcomputer, parameters used for various calculations, or the like, and a drive circuit that drives devices to be controlled. Each control unit includes a network I/F for communicating with other control units via the communication network 7010, and a communication I/F for communicating with devices inside and outside a vehicle, a sensor, or the like by wired communication or wireless communication.
The drive system control unit 7100 controls operation of devices related to a drive system of a vehicle in accordance with various programs. For example, the drive system control unit 7100 functions as a control device of a driving force generating device for generating driving force of the vehicle, such as an internal combustion engine or a driving motor, a driving force transmitting mechanism for transmitting driving force to wheels, a steering mechanism for adjusting a steering angle of the vehicle, a braking device for generating braking force of the vehicle, and the like. The drive system control unit 7100 may include a function as a control device, such as Antilock Brake System (ABS) or Electronic Stability Control (ESC).
The drive system control unit 7100 is connected to a vehicle state detecting unit 7110. The vehicle state detecting unit 7110 includes, for example, at least one of a gyro sensor that detects angular velocity of axis rotational motion of a vehicle body, an acceleration sensor that detects acceleration of the vehicle, or a sensor for detecting an operation amount of the accelerator pedal, an operation amount of the brake pedal, a steering angle of the steering wheel, engine speed or wheel rotation speed, or the like. The drive system control unit 7100 performs arithmetic processing using a signal input from the vehicle state detecting unit 7110, and controls the internal combustion engine, the driving motor, the electric power steering device, the brake device, or the like.
The body system control unit 7200 controls operation of various devices equipped on the vehicle body in accordance with various programs. For example, the body system control unit 7200 functions as a control device of a keyless entry system, a smart key system, a power window device, or various lamps such as a head lamp, a back lamp, a brake lamp, a turn signal lamp, and a fog lamp. In this case, to the body system control unit 7200, a radio wave transmitted from a portable device that substitutes for a key, or signals of various switches can be input. The body system control unit 7200 accepts input of these radio waves or signals and controls the door lock device, power window device, lamp, and the like of the vehicle.
The battery control unit 7300 controls a secondary battery 7310 that is a power supply source of the driving motor in accordance with various programs. For example, information such as a battery temperature, a battery output voltage, or a battery remaining capacity is input from a battery device including the secondary battery 7310 to the battery control unit 7300. The battery control unit 7300 performs arithmetic processing using these signals, and performs temperature adjustment control of the secondary battery 7310 or control of a cooling device or the like provided in the battery device.
The vehicle exterior information detection unit 7400 detects information regarding the outside of the vehicle on which the vehicle control system 7000 is mounted. For example, at least one of an imaging unit 7410 or a vehicle exterior information detecting unit 7420 is connected to the vehicle exterior information detection unit 7400. The imaging unit 7410 includes at least one of a Time Of Flight (ToF) camera, a stereo camera, a monocular camera, an infrared camera, or other cameras. To the vehicle exterior information detecting unit 7420, for example, at least one of an environmental sensor for detecting the current climate or weather, or a peripheral information detection sensor for detecting another vehicle, an obstacle, a pedestrian, or the like around the vehicle on which the vehicle control system 7000 is mounted, is connected.
The environmental sensor may be, for example, at least one of a raindrop sensor that detects rainy weather, a fog sensor that detects fog, a sunshine sensor that detects sunshine degree, or a snow sensor that detects snowfall. The peripheral information detection sensor may be at least one of an ultrasonic sensor, a radar device, or a Light Detection and Ranging (LIDAR) device (Laser Imaging Detection and Ranging (LIDAR) device). The imaging unit 7410 and the vehicle exterior information detecting unit 7420 may be provided as independent sensors or devices, respectively, or may be provided as a device in which a plurality of sensors or devices is integrated together.
Here,
Note that,
Vehicle exterior information detecting units 7920, 7922, 7924, 7926, 7928, and 7930 provided on the front, rear, sides, corners, and upper part of the windshield in the vehicle interior of the vehicle 7900 may be ultrasonic sensors or radar devices, for example. The vehicle exterior information detecting units 7920, 7926, and 7930 provided on the front nose, rear bumper, back door, and upper part of the windshield in the vehicle interior of the vehicle 7900 may be LIDAR devices, for example. These vehicle exterior information detecting units 7920 to 7930 are mainly used for detecting a preceding vehicle, a pedestrian, an obstacle, or the like.
Referring back to
Furthermore, the vehicle exterior information detection unit 7400 may perform distance detection processing or image recognition processing for recognizing a person, a car, an obstacle, a sign, a character on a road surface, or the like, on the basis of the received image data. The vehicle exterior information detection unit 7400 may perform processing such as distortion correction or alignment on the received image data, and synthesize the image data captured by different imaging units 7410 to generate an overhead image or a panoramic image. The vehicle exterior information detection unit 7400 may perform viewpoint conversion processing using the image data captured by different imaging units 7410.
The vehicle interior information detection unit 7500 detects information regarding the inside of the vehicle. The vehicle interior information detection unit 7500 is connected to, for example, a driver state detecting unit 7510 that detects a state of a driver. The driver state detecting unit 7510 may include a camera that captures an image of the driver, a biometric sensor that detects biological information of the driver, a microphone that collects sound in the vehicle interior, and the like. The biometric sensor is provided, for example, on a seat surface, a steering wheel, or the like, and detects biological information of an occupant sitting on a seat or a driver holding the steering wheel. The vehicle interior information detection unit 7500 may calculate a degree of fatigue or a degree of concentration of the driver on the basis of detected information input from the driver state detecting unit 7510, and may determine whether or not the driver is dozing. The vehicle interior information detection unit 7500 may perform noise canceling processing or the like on a collected sound signal.
The integrated control unit 7600 controls overall operation in the vehicle control system 7000 in accordance with various programs. The integrated control unit 7600 is connected to an input unit 7800. The input unit 7800 is implemented by a device, for example, a touch panel, a button, a microphone, a switch, a lever, or the like to which input operation by the occupant can be performed. Data obtained by performing voice recognition on the sound input by the microphone may be input to the integrated control unit 7600. The input unit 7800 may be, for example, a remote control device using infrared rays or other radio waves, or an external connection device such as a mobile phone or a personal digital assistant (PDA) adaptable to the operation of the vehicle control system 7000. The input unit 7800 may be a camera, for example, and in that case, the occupant can input information by gesture. Alternatively, data may be input obtained by detecting movement of a wearable device worn by the occupant. Moreover, the input unit 7800 may include, for example, an input control circuit or the like that generates an input signal on the basis of information input by the occupant or the like using the input unit 7800, and outputs the input signal to the integrated control unit 7600. By operating the input unit 7800, the occupant or the like inputs various data to the vehicle control system 7000 or gives an instruction to perform processing operation.
The storage unit 7690 may include Read Only Memory (ROM) that stores various programs executed by the microcomputer, and Random Access Memory (RAM) that stores various parameters, calculation results, sensor values, or the like. Furthermore, the storage unit 7690 may be implemented by a magnetic storage device such as a hard disc drive (HDD), a semiconductor storage device, an optical storage device, a magneto-optical storage device, or the like.
The general-purpose communication I/F 7620 is a general-purpose communication I/F that mediates communication with various devices existing in an external environment 7750. The general-purpose communication I/F 7620 may implement a cellular communication protocol such as Global System of Mobile communications (GSM) (registered trademark), WiMAX (registered trademark), Long Term Evolution (LTE) (registered trademark), or LTE-Advanced (LTE-A), or other wireless communication protocols such as a wireless LAN (also referred to as Wi-Fi (registered trademark)), and Bluetooth (registered trademark). For example, the general-purpose communication I/F 7620 may connect to a device (for example, an application server or a control server) existing on an external network (for example, the Internet, a cloud network, or a company specific network) via a base station or an access point. Furthermore, the general-purpose communication I/F 7620 may connect to a terminal existing in the vicinity of the vehicle (for example, a terminal of a driver, a pedestrian, or a shop, or a Machine Type Communication (MTC) terminal) by using a Peer To Peer (P2P) technology, for example.
The dedicated communication I/F 7630 is a communication I/F supporting a communication protocol formulated for use in vehicles. For example, the dedicated communication I/F 7630 may implement a standard protocol such as Wireless Access in Vehicle Environment (WAVE) that is a combination of IEEE 802.11p of the lower layer and IEEE 1609 of the upper layer, Dedicated Short Range Communications (DSRC), or a cellular communication protocol. The dedicated communication I/F 7630 typically performs V2X communication that is a concept including one or more of Vehicle to Vehicle communication, Vehicle to Infrastructure communication, Vehicle to Home communication, and Vehicle to Pedestrian communication.
For example, the positioning unit 7640 receives a Global Navigation Satellite System (GNSS) signal (for example, a Global Positioning System (GPS) signal from a GPS satellite) from a GNSS satellite to execute positioning, and generates position information including the latitude, longitude, and altitude of the vehicle. Note that, the positioning unit 7640 may specify the current position by exchanging signals with a wireless access point, or may acquire the position information from a terminal such as a mobile phone, a PHS, or a smartphone having a positioning function.
The beacon reception unit 7650 receives radio waves or electromagnetic waves transmitted from a wireless station or the like installed on a road, for example, and acquires information such as the current position, congestion, road closure, or required time. Note that, the function of the beacon reception unit 7650 may be included in the dedicated communication I/F 7630 described above.
The vehicle interior device I/F 7660 is a communication interface that mediates connection between the microcomputer 7610 and various vehicle interior devices 7760 existing in the vehicle. The vehicle interior device I/F 7660 may establish a wireless connection using a wireless communication protocol such as a wireless LAN, Bluetooth (registered trademark), near field communication (NFC), or Wireless USB (WUSB). Furthermore, the vehicle interior device I/F 7660 may establish a wired connection such as a Universal Serial Bus (USB), High-Definition Multimedia Interface (HDMI) (registered trademark), or Mobile High-definition Link (MHL) via a connection terminal (and a cable if necessary) not illustrated. The vehicle interior device 7760 may include, for example, at least one of a mobile device or a wearable device possessed by the occupant, or an information device carried in or attached to the vehicle. Furthermore, the vehicle interior device 7760 may include a navigation device that performs a route search to an arbitrary destination. The vehicle interior device I/F 7660 exchanges control signals or data signals with these vehicle interior devices 7760.
The in-vehicle network I/F 7680 is an interface that mediates communication between the microcomputer 7610 and the communication network 7010. The in-vehicle network I/F 7680 transmits and receives signals and the like in accordance with a predetermined protocol supported by the communication network 7010.
The microcomputer 7610 of the integrated control unit 7600 controls the vehicle control system 7000 in accordance with various programs on the basis of information acquired via at least one of the general-purpose communication I/F 7620, the dedicated communication I/F 7630, the positioning unit 7640, the beacon reception unit 7650, the vehicle interior device I/F 7660, or the in-vehicle network I/F 7680. For example, the microcomputer 7610 may calculate a control target value of the driving force generating device, the steering mechanism, or the braking device on the basis of acquired information inside and outside the vehicle, and output a control command to the drive system control unit 7100. For example, the microcomputer 7610 may perform cooperative control aiming for implementing functions of advanced driver assistance system (ADAS) including collision avoidance or shock mitigation of the vehicle, follow-up traveling based on an inter-vehicle distance, vehicle speed maintaining traveling, vehicle collision warning, vehicle lane departure warning, or the like. Furthermore, the microcomputer 7610 may perform cooperative control aiming for automatic driving or the like that autonomously travels without depending on operation of the driver, by controlling the driving force generating device, the steering mechanism, the braking device, or the like on the basis of acquired information around the vehicle.
The microcomputer 7610 may generate three-dimensional distance information between the vehicle and an object such as a surrounding structure or a person on the basis of information acquired via at least one of the general-purpose communication I/F 7620, the dedicated communication I/F 7630, the positioning unit 7640, the beacon reception unit 7650, the vehicle interior device I/F 7660, or the in-vehicle network I/F 7680, and create local map information including peripheral information of the current position of the vehicle. Furthermore, on the basis of the acquired information, the microcomputer 7610 may predict danger such as collision of a vehicle, approach of a pedestrian or the like, or entry into a road closed, and generate a warning signal. The warning signal may be, for example, a signal for generating a warning sound or for turning on a warning lamp.
The audio image output unit 7670 transmits an output signal of at least one of the audio or image to an output device capable of visually or aurally notifying an occupant in the vehicle or the outside of the vehicle of information. In the example of
Note that, in the example illustrated in
Note that, a computer program for implementing each function of the distribution system 10 described above can be implemented in any of the control units or the like. Furthermore, it is also possible to provide a computer readable recording medium in which such a computer program is stored. The recording medium is, for example, a magnetic disk, an optical disk, a magneto-optical disk, a flash memory, or the like. Furthermore, the above computer program may be distributed via, for example, a network without using a recording medium.
In a case where the distribution system 10 described above is applied to the vehicle control system 7000 described above, for example, the imaging device 11 of the distribution system 10 corresponds to at least a part of the imaging unit 7410. Furthermore, the generation device 12, the distribution server 13, and the reproduction device 15 are integrated together, and correspond to the microcomputer 7610 and the storage unit 7690. The head mounted display 16 corresponds to the display unit 7720. Note that, in a case where the distribution system 10 is applied to the integrated control unit 7600, the network 14, the camera 15A, the marker 16A, and the gyro sensor 16B are not provided, and the gaze direction and viewing position of the viewer are input by operation of the input unit 7800 by the occupant who is the viewer. As described above, by applying the distribution system 10 to the integrated control unit 7600 of the application example illustrated in
Furthermore, at least a part of the components of the distribution system 10 may be implemented in a module (for example, an integrated circuit module including one die) for the integrated control unit 7600 illustrated in
Furthermore, herein, a system means an aggregation of a plurality of constituents (device, module (component), and the like), and it does not matter whether or not all of the constituents are in the same cabinet. Thus, a plurality of devices that are accommodated in separate cabinets and connected to each other via a network, and one device that accommodates a plurality of modules in one cabinet, are both systems.
Note that, the advantageous effects described in the specification are merely examples, and the advantageous effects of the present technology are not limited to them and may include other effects.
Furthermore, the embodiment of the present disclosure is not limited to the embodiments described above, and various modifications are possible without departing from the gist of the present disclosure.
For example, the present disclosure can adopt a configuration of cloud computing that shares one function in a plurality of devices via a network to process in cooperation.
Furthermore, each step described in the above flowchart can be executed by sharing in a plurality of devices, other than being executed by one device.
Moreover, in a case where a plurality of pieces of processing is included in one step, the plurality of pieces of processing included in the one step can be executed by sharing in a plurality of devices, other than being executed by one device.
Note that, the present disclosure can also adopt the following configurations.
(1)
A generation device including
a normalization unit that converts a first vector that maps a 360-degree image onto a predetermined 3D model into a second vector of a 3D model of a unit sphere.
(2)
The generation device according to (1), further including
a correction unit that adds a predetermined offset position vector to the second vector of the 3D model of the unit sphere converted.
(3)
The generation device according to (2), in which
the correction unit adds the offset position vector that is negative to the second vector.
(4)
The generation device according to (2), in which
the correction unit adds the offset position vector to the second vector multiplied by a positive constant.
(5)
The generation device according to any of (1) to (4), further including
a generation unit that generates the first vector.
(6)
The generation device according to any of (1) to (5), further including
a rotation processing unit that rotates a captured image to be converted into the 360-degree image such that a predetermined direction is at an image center.
(7)
The generation device according to any of (1) to (6), in which
there is a plurality of high resolution directions each being a direction in which high resolution is set for the 360-degree image, and
a number of the normalization units are provided corresponding to a number of the high resolution directions.
(8)
The generation device according to any of (1) to (7), further including
a setting unit that determines a high resolution improvement ratio in a high resolution direction that is a direction in which high resolution is set for the 360-degree image.
(9)
The generation device according to any of (1) to (8), further including
a setting unit that generates information specifying a high resolution direction that is a direction in which high resolution is set for the 360-degree image.
(10)
A generation method including
converting a first vector of a predetermined 3D model onto which a 360-degree image is mapped into a second vector of a 3D model of a unit sphere.
(11)
A reproduction device including:
a receiving unit that receives a 360-degree image generated by another device; and
a normalization unit that converts a first vector that maps the 360-degree image onto a predetermined 3D model into a second vector of a 3D model of a unit sphere.
(12)
The reproduction device according to (11), further including
a correction unit that adds a predetermined offset position vector to the second vector of the 3D model of the unit sphere converted.
(13)
The reproduction device according to (12), in which
the correction unit adds the offset position vector that is negative to the second vector.
(14)
The reproduction device according to (12), in which
the correction unit adds the offset position vector to the second vector multiplied by a positive number.
(15)
The reproduction device according to any of (11) to (14), further including
a generation unit that generates the first vector.
(16)
The reproduction device according to any of (11) to (15), further including
a selection unit that selects one 360-degree image from among a plurality of 360-degree images having different high resolution directions, depending on a gaze direction of a viewer, in which
the receiving unit receives the 360-degree image selected by the selection unit from the other device.
(17)
The reproduction device according to any of (11) to (16), further including
a rotation calculation unit that generates rotation information on the 360-degree image on the basis of information specifying a high resolution direction of the 360-degree image.
(18)
The reproduction device according to (17), further including
a drawing unit that generates a display image by rotating a 3D model image onto which the 360-degree image is mapped in accordance with the second vector of the 3D model of the unit sphere on the basis of the rotation information, and perspectively projecting the 3D model image onto a visual field range of a viewer.
(19)
The reproduction device according to (18), in which
the rotation calculation unit calculates, as movement information, an amount of offset from a center of the unit sphere, and
the drawing unit generates the display image by rotating the 3D model image on the basis of the rotation information, offsetting a viewing position in accordance with the movement information, and perspectively projecting the 3D model image, from the viewing position after offset movement, onto the visual field range of the viewer.
(20)
A reproduction method including:
receiving a 360-degree image generated by another device, and
converting a first vector that maps the 360-degree image onto a predetermined 3D model into a second vector of a 3D model of a unit sphere.
Number | Date | Country | Kind
2017-123934 | Jun 2017 | JP | national

Filing Document | Filing Date | Country | Kind
PCT/JP2018/022303 | 6/12/2018 | WO | 00