The present disclosure is a National Stage Filing of PCT International Application No. PCT/CN2020/133330, filed on Dec. 2, 2020, which claims priority to Chinese Patent Application No. 201911231386.6, titled "Multi-viewpoint Naked Eye 3D Display Screen and Multi-viewpoint Naked Eye 3D Display Device" and filed with the China National Intellectual Property Administration on Dec. 5, 2019, the disclosures of which are hereby incorporated by reference.
The present disclosure relates to the technical field of 3D display, and for example, relates to a multi-viewpoint 3D display screen and a multi-viewpoint 3D display device.
At present, a 3D display device refracts light emitted by pixels through a lenticular grating to achieve a 3D display effect.
In a process of realizing embodiments of the present disclosure, at least the following problem was found in related technologies: a lenticular grating with fixed optical properties is matched with the pixels to form a plurality of views arranged horizontally along the display device; consequently, among a plurality of users at different viewing distances, some users may perceive a poor 3D effect or may be unable to perceive the 3D effect at all.
In order to provide a basic understanding of some aspects of the disclosed embodiments, a brief summary is given below. The summary is not intended to be an extensive overview, nor to identify key or critical elements or delimit the scope of protection of the embodiments, but to serve as a preface to the detailed description that follows.
Embodiments of the present disclosure provide a multi-viewpoint 3D display screen and a multi-viewpoint 3D display device, to solve the problem that users at different viewing distances cannot simultaneously view a 3D effect.
In some embodiments of the present disclosure, a multi-viewpoint 3D display screen is provided, comprising: a display panel, having a plurality of composite pixels, wherein each composite pixel in the plurality of composite pixels comprises a plurality of composite subpixels, and each composite subpixel in the plurality of composite subpixels comprises a plurality of subpixels arranged in array; and a plurality of spherical gratings, covering the plurality of composite subpixels.
In some embodiments, each composite subpixel is square in shape.
In some embodiments, each subpixel in the plurality of subpixels is square in shape.
In some embodiments, the plurality of subpixels are arranged in an i×j array, wherein i≥2 and j≥2.
In some embodiments, an aspect ratio of each subpixel in the plurality of subpixels is i/j.
In some embodiments, i≥3 and j≥3.
In some embodiments, the plurality of composite subpixels have different colors, and the composite subpixels of different colors are alternately arranged.
In some embodiments, the plurality of composite subpixels having different colors are arranged in a triangular shape.
In some embodiments, at least one spherical grating in the plurality of spherical gratings is a round spherical grating or an elliptical spherical grating.
In some embodiments, at least one spherical grating in the plurality of spherical gratings further comprises at least one side surface.
In some embodiments of the present disclosure, a multi-viewpoint 3D display device is provided, comprising: the above multi-viewpoint 3D display screen; and a 3D processing apparatus, configured to render subpixels in the plurality of composite subpixels in the multi-viewpoint 3D display screen.
In some embodiments, each composite subpixel comprises a plurality of subpixels arranged in an i×j array, wherein the plurality of subpixels in the i×j array correspond to i first direction viewpoints and j second direction viewpoints of the multi-viewpoint 3D display device.
In some embodiments, the multi-viewpoint 3D display device further comprises: an eye positioning data acquisition device, configured to acquire eye positioning data.
In some embodiments, the eye positioning data acquisition device is configured to acquire a transverse position of the eyes of a user to determine a first direction viewpoint where the eyes of the user are located.
In some embodiments, the 3D processing apparatus is configured to render the subpixels corresponding to the first direction viewpoint, among the plurality of subpixels in the array, based on the first direction viewpoint where the eyes of the user are located.
In some embodiments, the eye positioning data acquisition device is configured to acquire at least one of a depth position and a height position of the eyes of a user to determine a second direction viewpoint where the eyes of the user are located.
In some embodiments, the 3D processing apparatus is configured to render the subpixels corresponding to the second direction viewpoint, among the plurality of subpixels in the array, based on the second direction viewpoint where the eyes of the user are located.
The multi-viewpoint 3D display screen and the multi-viewpoint 3D display device provided by the embodiments of the present disclosure may achieve the following technical effects:
The pixels covered by the spherical gratings respectively present 3D images toward spatial positions, corresponding to a plurality of views, at different distances from the multi-viewpoint 3D display screen, thereby meeting the respective 3D viewing needs of users at different viewing distances and realizing high-quality 3D display.
The above general descriptions and the following descriptions are exemplary and explanatory only, and are not used to limit the present disclosure.
One or more embodiments are illustrated by the corresponding drawings, and the illustrations and drawings do not limit the embodiments. Elements having the same reference numerals in the drawings are shown as similar elements, and the drawings are not drawn to scale, wherein:
100: multi-viewpoint 3D display device; 110: multi-viewpoint 3D display screen; 111: display panel; 120: processor; 121: register; 130: 3D processing apparatus; 131: cache; 140: video signal interface; 150: eye positioning device; 160: eye positioning data interface; 190: spherical grating; 191: side section; 192: round spherical surface; 193: bottom plane; 400: composite pixel; 410: red composite subpixel; 420: green composite subpixel; 430: blue composite subpixel; 601: one of two images comprised in a video frame of a 3D video signal; 602: one of two images comprised in a video frame of a 3D video signal; 603: composite image.
For a more detailed understanding of the characteristics and technical contents of the embodiments of the present disclosure, the implementation of the embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings; the accompanying drawings are used for reference only and are not intended to limit the embodiments of the present disclosure.
A multi-viewpoint 3D display screen (such as a multi-viewpoint naked eye 3D display screen) is provided according to embodiments of the present disclosure, and may be applied to a multi-viewpoint 3D display device. The multi-viewpoint 3D display screen comprises a display panel and a plurality of spherical gratings. The display panel has a plurality of composite pixels, each composite pixel comprises a plurality of composite subpixels, and each composite subpixel is composed of subpixels in an i×j array, wherein i≥2 and j≥2. The plurality of spherical gratings cover the plurality of composite subpixels. In the subpixels of the i×j array, i corresponds to first direction views (such as row views, also called horizontal views) of the multi-viewpoint 3D display device, and j corresponds to second direction views (such as column views, also called height or depth views) of the multi-viewpoint 3D display device. In some embodiments, the subpixels in the i×j array in each composite subpixel are homochromatic subpixels, i.e., subpixels of the same color.
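By way of illustration only, the following sketch models the pixel hierarchy just described: each composite pixel holds one composite subpixel per color, and each composite subpixel holds an i×j array of homochromatic subpixels covered by one spherical grating. All names and the i = j = 6 default are hypothetical; the sketch is not part of the disclosed embodiments.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class CompositeSubpixel:
    color: str   # e.g. "R", "G" or "B"; shared by all i*j subpixels
    i: int       # i columns -> i first direction (horizontal) viewpoints
    j: int       # j rows -> j second direction (height/depth) viewpoints
    values: List[List[int]] = field(default_factory=list)  # [row][col] brightness

    def __post_init__(self) -> None:
        assert self.i >= 2 and self.j >= 2  # the disclosure requires i>=2, j>=2
        self.values = [[0] * self.i for _ in range(self.j)]

@dataclass
class CompositePixel:
    subpixels: Dict[str, CompositeSubpixel]  # one composite subpixel per color

def make_composite_pixel(i: int = 6, j: int = 6) -> CompositePixel:
    # one spherical grating would cover each of the three composite subpixels
    return CompositePixel({c: CompositeSubpixel(c, i, j) for c in "RGB"})
```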
In some embodiments, the spherical gratings and the composite subpixels in the multi-viewpoint 3D display screen are in a one-to-one correspondence.
In some embodiments, i≥3, and j≥3.
In some embodiments, each composite subpixel is square in shape. In the homochromatic subpixels of the i×j array in each composite subpixel, the aspect ratio of each subpixel is equal to i/j. In some embodiments, each subpixel in each composite subpixel is square in shape.
It may be contemplated that, in other embodiments, the composite subpixels and the subpixels may have other configurations. For example, each subpixel in the homochromatic subpixels of the i×j array is square in shape, in which case the aspect ratio i/j of each subpixel is equal to 1.
In some embodiments, the composite subpixels of different colors are alternately arranged in the display panel, and the plurality of composite subpixels of each composite pixel are arranged in a triangular shape.
In some embodiments, the display panel 111 of the multi-viewpoint 3D display screen 110 may comprise composite pixels in m columns and n rows (i.e., an m×n array), thereby defining an m×n display resolution. In some embodiments, the m×n display resolution may be a resolution of full high definition (FHD) or higher, including, but not limited to: 1920×1080, 1920×1200, 2048×1280, 2560×1440, 3840×2160, and the like.
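As a hedged worked example of the relation between the composite-pixel resolution and the native subpixel count of the panel (the 6×6 array size and three composite subpixels per composite pixel are illustrative assumptions, not values fixed by the disclosure):

```python
# Hypothetical figures: an FHD composite-pixel display (m x n = 1920 x 1080)
# in which each composite pixel has 3 composite subpixels (R, G, B), each
# being a 6 x 6 array of subpixels covered by one spherical grating.
m, n = 1920, 1080
i, j = 6, 6
colors = 3

composite_pixels = m * n                       # 2,073,600 composite pixels
native_subpixels = composite_pixels * colors * i * j
print(f"{composite_pixels:,} composite pixels -> {native_subpixels:,} subpixels")
# prints: 2,073,600 composite pixels -> 223,948,800 subpixels
```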
In the embodiments of the present disclosure, each composite subpixel has subpixels corresponding to the views. The plurality of subpixels of each composite subpixel are arranged in an array on the multi-viewpoint 3D display screen, and the subpixels in the array have the same color. Since the plurality of views of the 3D display device are roughly arranged along the horizontal and vertical directions of the multi-viewpoint 3D display screen, when the eyes are located at views of different azimuths due to back-and-forth and side-to-side movement of the user, the subpixels in each composite subpixel corresponding to those views need to be dynamically rendered accordingly. Since the homochromatic subpixels in each composite subpixel are arranged in an array, cross chrominance caused by persistence of vision can be avoided. In addition, due to refraction by the gratings, a part of the currently displayed subpixels may be seen at adjacent view positions; however, owing to the homochromatic arrangement, even if a part of the currently displayed subpixels is seen, no color mixture problem will occur.
In some embodiments, the plurality of spherical gratings are arranged on the surface of the display panel, each covering one composite subpixel. Each spherical grating in the plurality of spherical gratings may, for example, comprise a round spherical surface, so as to form a round spherical grating. In some other embodiments, each spherical grating comprises an elliptical spherical surface, so as to form an elliptical spherical grating. In some other embodiments, the spherical gratings comprise round spherical surfaces and side sections. In some other embodiments, the spherical gratings comprise elliptical spherical surfaces and side sections.
In other embodiments, the bottom planes of the spherical gratings may have other shapes, such as hexagonal, triangular, etc.
In some embodiments, a further refracting layer, having a refractive index different from that of the spherical grating, is arranged on the spherical surface side of the spherical grating; the surface of the further refracting layer facing the spherical grating is a concave surface attached to the spherical surface of the spherical grating in a concave-convex fit; and the surface facing away from the spherical grating is a plane, for example a plane parallel to the bottom plane of the spherical grating.
The multi-viewpoint 3D display screen 110 according to the embodiments of the present disclosure may be applied to the multi-viewpoint 3D display device. According to the embodiments of the present disclosure, the multi-viewpoint 3D display device comprises the multi-viewpoint 3D display screen, a video signal interface and a 3D processing apparatus. The video signal interface is configured to receive a video frame of a 3D video signal. The 3D processing apparatus is configured to render the relevant subpixels in each composite subpixel according to the received video frame of the 3D video signal.
In some embodiments, the 3D video signal comprises video frames.
In some embodiments, the 3D processing apparatus is an FPGA or ASIC, or an FPGA or ASIC chipset. In some embodiments, the multi-viewpoint 3D display device 100 may also be provided with more than one 3D processing apparatus 130, which process the rendering of the subpixels of each composite subpixel in each composite pixel of the multi-viewpoint 3D display screen 110 in parallel, in series, or in a series-parallel combination. Those skilled in the art will understand that the more than one 3D processing apparatus may be allocated in other manners and may, for example, process multiple rows and columns of composite pixels or composite subpixels of the multi-viewpoint 3D display screen 110 in parallel, which also falls within the scope of the embodiments of the present disclosure. In some embodiments, the 3D processing apparatus 130 may further optionally comprise a cache 131 for caching received video frames.
In some embodiments, the 3D processing apparatus is in communication connection with the multi-viewpoint 3D display screen. In some embodiments, the 3D processing apparatus is in communication connection with a drive device of the multi-viewpoint 3D display screen.
In some embodiments, the processor is comprised in the multi-viewpoint 3D display device. Based on this, the 3D signal interface 140 is an internal interface connecting the processor 120 with the 3D processing apparatus 130. The 3D display device 100 may be, for example, a mobile terminal, and the 3D signal interface 140 may be a MIPI interface, a mini-MIPI interface, an LVDS interface, a mini-LVDS interface or a Display Port interface.
In some embodiments, the homochromatic subpixels of the i×j array in each composite subpixel of the multi-viewpoint 3D display screen 110 correspond to i first direction viewpoints and j second direction viewpoints of the multi-viewpoint 3D display device. The first direction viewpoints may be row views, also called horizontal views, and correspond to view positions of the user in the horizontal direction (X-axis direction) of the multi-viewpoint 3D display screen. The second direction viewpoints may be column views, also called depth or height views, and correspond to view positions of the user in the vertical direction (Y-axis direction) and/or the depth direction (Z-axis direction) of the multi-viewpoint 3D display screen. In the embodiments of the present disclosure, the depth is defined by the distance of the user relative to the multi-viewpoint 3D display screen.
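The correspondence just described can be sketched as a simple index mapping; the zero-based convention below (column selecting the first direction viewpoint, row selecting the second) is an assumption for illustration and is not fixed by the disclosure.

```python
def subpixel_for_viewpoints(first_vp: int, second_vp: int,
                            i: int, j: int) -> tuple:
    """Return the (row, col) of the subpixel, within one composite
    subpixel's i x j homochromatic array, that serves the given pair of
    first direction (horizontal) and second direction (depth/height)
    viewpoints. Indices are zero-based."""
    if not (0 <= first_vp < i and 0 <= second_vp < j):
        raise ValueError("viewpoint index out of range")
    return second_vp, first_vp  # row from second direction, col from first
```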
Transmission and display of a 3D video signal inside the multi-viewpoint 3D display device in the embodiments of the present disclosure will be described below with reference to the accompanying drawings.
In some embodiments, the two images or the composite image may comprise images of different types and may be arranged in various forms.
Those skilled in the art will understand that, the embodiments shown in the drawings are merely illustrative. The two images or the composite image comprised in the video frame of the 3D video signal may comprise images of other types and may be arranged in other forms, which falls within the scope of the embodiments of the present disclosure.
In some embodiments, after receiving a video frame comprising the two images 601 and 602, the at least one 3D processing apparatus 130 renders at least one subpixel in each composite subpixel based on one of the two images, and renders at least one other subpixel in each composite subpixel based on the other of the two images.
In some embodiments, after receiving a video frame comprising the composite image, the at least one 3D processing apparatus 130 renders at least two subpixels in each composite subpixel based on the composite image. For example, at least one subpixel is rendered according to a first (partial) image in the composite image, and at least one other subpixel is rendered according to a second (partial) image.
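A simplified rendering sketch for the two-image case follows, reusing the hypothetical structures defined in the earlier sketches; the per-color image layout and the viewpoint arguments are assumptions, and a composite image would first be split into its two partial images before the same loop applies.

```python
def render_frame(screen, img_a, img_b, vp_a, vp_b):
    """screen: 2D grid (rows of CompositePixel); img_a/img_b: the two images
    of the video frame (e.g. 601 and 602) as color -> 2D brightness grids;
    vp_a/vp_b: (first_vp, second_vp) viewpoint pairs for the two eyes."""
    for y, pixel_row in enumerate(screen):
        for x, cpix in enumerate(pixel_row):
            for color, csub in cpix.subpixels.items():
                ra, ca = subpixel_for_viewpoints(*vp_a, csub.i, csub.j)
                rb, cb = subpixel_for_viewpoints(*vp_b, csub.i, csub.j)
                csub.values[ra][ca] = img_a[color][y][x]  # render one subpixel
                csub.values[rb][cb] = img_b[color][y][x]  # render another one
```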
In some embodiments, this rendering is, for example, dynamic rendering performed based on real-time eye positioning data.
In some embodiments, the multi-viewpoint 3D display device further comprises an eye positioning data acquisition device, such as an eye positioning device or an eye positioning data interface, configured to acquire eye positioning data. In some embodiments, the eye positioning data comprise spatial position information of the eyes of a user, such as the distance of the eyes or face of the user relative to the multi-viewpoint 3D display screen or the eye positioning device (also called the depth of the eyes or face of the user), the position of the eyes or face of the user in the vertical direction of the multi-viewpoint 3D display screen, the position of the eyes or face of the user in the horizontal direction of the multi-viewpoint 3D display screen, the view position of the eyes of the user, the viewing angle of the user, etc.
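The eye positioning data enumerated above may be gathered, purely for illustration, into a single record; the field names below are assumptions rather than terminology from the disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EyePositioningData:
    depth_m: Optional[float] = None        # distance to screen (Z direction)
    horizontal_m: Optional[float] = None   # position along screen X direction
    vertical_m: Optional[float] = None     # position along screen Y direction
    first_vp: Optional[int] = None         # first direction viewpoint index
    second_vp: Optional[int] = None        # second direction viewpoint index
    view_angle_deg: Optional[float] = None # viewing angle of the user
```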
In some embodiments, the eye positioning device comprises an eye positioning unit configured to capture a user image (such as an image of the user's face), an eye positioning image signal processor configured to determine an eye space position based on the captured user image, and an eye positioning data interface configured to transmit eye space position information indicating the eye space position.
In some embodiments, the eye positioning unit comprises a first camera configured to capture a first image and a second camera configured to capture a second image, and the eye positioning image signal processor is configured to identify the presence of eyes based on at least one of the first image and the second image, and to determine the eye view position based on the space position of the eyes in the first image and the second image.
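For the two-camera arrangement above, one well-known way to obtain the eye space position is standard stereo triangulation, depth = f·B/d; the sketch below illustrates only this general principle, with hypothetical numbers, and is not a method prescribed by the disclosure.

```python
def eye_depth_from_stereo(x_first_px: float, x_second_px: float,
                          focal_px: float, baseline_m: float) -> float:
    """Classic stereo triangulation: depth = f * B / d, where f is the focal
    length in pixels, B is the distance between the two cameras, and d is
    the horizontal disparity of the same eye between the two images."""
    disparity = x_first_px - x_second_px
    if disparity <= 0:
        raise ValueError("expected positive disparity for a visible eye")
    return focal_px * baseline_m / disparity

# Hypothetical numbers: f = 800 px, B = 6 cm, disparity = 12 px -> depth 4.0 m
print(eye_depth_from_stereo(412.0, 400.0, 800.0, 0.06))  # 4.0
```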
In some embodiments, the eye positioning unit comprises at least one camera configured to capture at least one image and at least one depth acquisition device configured to acquire depth information of the eyes of the user, and the eye positioning image signal processor is configured to identify the presence of eyes based on the captured at least one image, and to determine the view position of the eyes based on the positions of the eyes in the at least one image and the depth information of the eyes of the user.
In some embodiments, the eye positioning device acquires a transverse position of the eyes of the user in real time to determine the first direction viewpoint where the eyes of the user are located. The 3D processing apparatus renders the subpixels corresponding to that first direction viewpoint, in the homochromatic subpixels of the i×j array in each composite subpixel, based on the first direction viewpoint where the eyes of the user are located.
In some embodiments, the eye positioning device acquires a depth position of the eyes of the user in real time, or a height position, or both a height position and a depth position, to determine the second direction viewpoint where the eyes of the user are located. The 3D processing apparatus renders the subpixels corresponding to that second direction viewpoint, in the homochromatic subpixels of the i×j array in each composite subpixel, based on the second direction viewpoint where the eyes of the user are located.
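A minimal sketch of how such real-time positions could be quantized into viewpoint indices is given below; the even spacing, the ranges and the numbers are illustrative assumptions, and a real mapping would follow the optics of the spherical gratings.

```python
def viewpoint_from_position(pos: float, lo: float, hi: float, n: int) -> int:
    """Map a position within [lo, hi) onto one of n evenly spaced viewpoints."""
    frac = (pos - lo) / (hi - lo)
    return max(0, min(n - 1, int(frac * n)))

# Transverse eye position 0.1 m in a [-0.3, 0.3) m window, i = 6 viewpoints:
first_vp = viewpoint_from_position(0.1, -0.3, 0.3, 6)   # -> 4
# Eye depth 2.5 m in a [1.0, 4.0) m range, j = 6 viewpoints:
second_vp = viewpoint_from_position(2.5, 1.0, 4.0, 6)   # -> 3
```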
The multi-viewpoint 3D display device according to the embodiments of the present disclosure may be applied to video playback devices, and may be embodied, for example, as a mobile terminal (e.g., a mobile phone or a tablet computer), a TV, a mobile TV, a computer, a cinema viewing system or a family viewing system.
The above description and drawings sufficiently illustrate the embodiments of the present disclosure to enable those skilled in the art to practice them. Other embodiments may comprise structural, logical, electrical, process, and other changes. Unless expressly required, individual components and functions are optional, and the order of operations may be changed. Parts and features of some embodiments may be included in, or substituted for, parts and features of other embodiments. The scope of the embodiments of the present disclosure includes the full scope of the claims and all available equivalents of the claims. The terms used in the present disclosure are used to describe the embodiments only and not to limit the claims. When used in the present disclosure, the term "comprise", etc., refers to the presence of at least one of the stated features, but does not preclude the presence of other features.
Those skilled in the art may recognize that the units and algorithm steps of each example described in conjunction with the embodiments disclosed herein can be implemented in electronic hardware, or a combination of computer software and electronic hardware. Those skilled in the art may use different methods for implementing the described functions for each particular application, but such implementations should not be considered beyond the scope of the embodiments of the present disclosure.
In the embodiments disclosed herein, the disclosed method and product (including, but not limited to, the apparatus and the device) may be realized in other ways. For example, the device embodiments described above are merely schematic. For example, the division of the units may be merely a logical functional division, and there may be other division manners in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the displayed or discussed mutual coupling, direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms. The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units. The present embodiments may be implemented by selecting some or all of the units according to actual needs. In addition, each functional unit in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit.
The flowcharts and block diagrams in the drawings show architectures, functions and operations possibly implemented by systems, methods and computer program products according to the embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, program segment or part of code, which contains one or more executable instructions for implementing specified logical functions. In some alternative implementations, the functions marked in the blocks may also occur in an order different from the order marked in the drawings. For example, two consecutive blocks may actually be executed substantially concurrently, or sometimes may be executed in a reverse order, depending on the functions involved. In the descriptions corresponding to the flowcharts and block diagrams in the drawings, operations or steps corresponding to different blocks may also occur in different orders than those disclosed, and sometimes there is no specific order between different operations or steps. For example, two consecutive operations or steps may actually be performed substantially concurrently, or sometimes may be performed in the reverse order, depending on the functions involved. Each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by special hardware-based systems that perform the specified functions or actions, or by combinations of special hardware and computer instructions.
Number | Date | Country | Kind
---|---|---|---
201911231386.6 | Dec. 5, 2019 | CN | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/CN2020/133330 | Dec. 2, 2020 | WO |