The disclosure relates to an information display technology.
With the development of image processing technology and spatial positioning technology, the application of transparent displays has gradually received attention. This type of technology allows a display to be used in combination with a physical object, supplemented by related virtual information, to generate an interactive experience according to user needs, so that information is presented in a more intuitive way.
However, in current display systems, a user and a physical object are positioned by image recognition, and planar graphics data is then moved on a transparent display in accordance with the relative positions of the user's line of sight and the physical object. Because the postures of the user and the physical object are not considered, neither planar graphics data nor stereoscopic graphics data displayed on the display can be precisely overlaid with the physical object, and discomfort is caused to the user during viewing.
The disclosure provides a method, a processing device and a display system for information display.
In one exemplary embodiment of the disclosure, the above-mentioned system includes a display being light transmissive, at least one first information extraction device, at least one second information extraction device, and a processing device. The processing device is connected to the display, the first information extraction device and the second information extraction device. The first information extraction device is configured to extract position information and posture information of a user. The second information extraction device is configured to extract position information and posture information of a target object. The processing device is configured to perform coordinate transformation according to the position information and the posture information of the user as well as the position information and the posture information of the target object, to generate overlay information between the user and the target object, and display related stereoscopic information of the target object on the display according to the overlay information.
In one exemplary embodiment of the disclosure, the above-mentioned method for information display is adapted for a display system that includes a display being light transmissive, at least one first information extraction device, at least one second information extraction device and a processing device. The method for information display includes the following. Position information and posture information of a user are extracted by the first information extraction device. Position information and posture information of a target object are extracted by the second information extraction device. Coordinate transformation is performed according to the position information and the posture information of the user as well as the position information and the posture information of the target object, to generate overlay information between the user and the target object. Related stereoscopic information of the target object is displayed on the display according to the overlay information.
In one exemplary embodiment of the disclosure, the above-mentioned processing device is connected to a display, a first information extraction device and a second information extraction device. The first information extraction device is configured to extract position information and posture information of a user, and the second information extraction device is configured to extract position information and posture information of a target object. The processing device includes a memory and a processor. The memory is configured to store data. The processor is configured to acquire the position information and the posture information of the user that are extracted by the first information extraction device; acquire the position information and the posture information of the target object that are extracted by the second information extraction device; perform coordinate transformation according to the position information and the posture information of the user as well as the position information and the posture information of the target object, to generate overlay information between the user and the target object; and display related stereoscopic information of the target object on the display according to the overlay information.
Several exemplary embodiments accompanied with figures are described in detail below to further illustrate the disclosure.
Some exemplary embodiments of the disclosure will be hereinafter described in detail with reference to the accompanying drawings. In the following description, the same reference numerals in different drawings are regarded as the same or similar elements. These exemplary embodiments are only a part of the disclosure and do not disclose all of the possible implementations of the disclosure. More specifically, these exemplary embodiments are only examples of the method, the device and the system within the scope of the claims of the disclosure.
Referring to FIG. 1, a display system 100 according to an exemplary embodiment of the disclosure includes a display 110, at least one first information extraction device 120, at least one second information extraction device 130, and a processing device 140. The processing device 140 is connected to the display 110, the first information extraction device 120 and the second information extraction device 130.
The display 110 is configured to display information, and includes, for example, a penetrating type light transmissive display such as a liquid crystal display (LCD), a field sequential color LCD, a light emitting diode (LED) display or an electrowetting display, or a projection type light transmissive display. The content viewed by a user through the display 110 may be a physical scene overlaid with a virtual image.
The first information extraction device 120 is configured to extract position information and posture information of the user, and the second information extraction device 130 is configured to extract position information and posture information of a target object in the physical scene. The first information extraction device 120 may include, for example, at least one image sensor or at least one image sensor combined with at least one depth sensor to perform image recognition positioning on the target object. The image sensor may be a visible light sensor or an invisible light sensor such as an infrared sensor; the depth sensor may be a light field radar or an ultrasonic sensor. In addition, the first information extraction device 120 may further include, for example, an optical positioner, to perform optical spatial positioning on the user. Here, the optical positioner may include a positioner device and a positioning light ball. Generally, the positioner device may be an active device, and the positioning light ball may be a passive device. The disclosure is not limited thereto, and specific applications will be described later. That is, a device or a combination thereof falls within the scope of the first information extraction device 120 as long as it can obtain the user's position information and posture information by positioning.
The second information extraction device 130 may include, for example, at least one image sensor or at least one image sensor combined with at least one depth sensor to perform image recognition positioning on the target object, or an optical positioner to perform optical spatial positioning on the target object, or a magnetic positioner to perform electromagnetic navigation spatial positioning on the target object. In addition, in response to special scenario needs, the second information extraction device 130 may include a GPS positioner to position a target object (for example, a building, a vehicle, or a pedestrian) outside the display 110, or may include a combination of rangefinders such as a GPS positioner, a camera, and a radar, to extract position information and posture information of the target object. That is, a device or a combination thereof falls within the scope of the second information extraction device 130 as long as it can obtain the target object's position information and posture information by positioning.
Those of ordinary skill in the art should understand that the above-mentioned image sensor may be configured to capture an image and includes an imaging lens having a lens and a photosensitive element. The above-mentioned depth sensor may be configured to detect depth information, and may be implemented by active depth sensing technology or passive depth sensing technology. The active depth sensing technology may calculate the depth information by actively emitting a source light, an infrared ray, ultrasound, a laser or the like as a signal, in combination with a time difference ranging technology. The passive depth sensing technology may use two image sensors to respectively capture two images in front of them at different viewing angles, so as to calculate the depth information using the parallax between the two images. The above-mentioned optical positioner may operate with a luminous body (for example, a tracking light ball) disposed on the user and/or the target object in combination with an imaging lens or other optical detector, thereby providing the spatial coordinates of the user and/or the target object. These spatial coordinates may include three-degree-of-freedom (3-DOF) information or six-degree-of-freedom (6-DOF) information. The above-mentioned magnetic positioner (for example, an electromagnetic navigation probe) may penetrate deep into the target object to obtain the spatial coordinates of the target object according to a measured physical quantity (for example, a voltage or a current).
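As a concrete illustration of the passive depth sensing described above, the depth of a point can be recovered from the disparity between two rectified images through the relation Z = f·B/d. The following sketch is not part of the disclosure; the focal length, baseline and disparity values are hypothetical.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified-stereo relation Z = f * B / d.

    disparity_px: horizontal pixel shift of the same point between
                  the two images captured at different viewing angles.
    focal_px:     focal length of the imaging lens, in pixels.
    baseline_m:   distance between the two image sensors, in meters.
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    # Zero disparity corresponds to a point at infinity.
    return np.divide(focal_px * baseline_m, disparity_px,
                     out=np.full_like(disparity_px, np.inf),
                     where=disparity_px > 0)

# Hypothetical values: 800 px focal length, 6 cm baseline, 20 px disparity.
print(depth_from_disparity(20.0, 800.0, 0.06))  # -> 2.4 (meters)
```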
The processing device 140 is configured to control operation of the display system 100, and includes a memory 141 and a processor 142. The memory 141 may be any type of fixed or portable random access memory (RAM), read-only memory (ROM), flash memory, hard disk or other similar device, integrated circuit or a combination thereof. The processor 142 may be, for example, a central processing unit (CPU), an application processor (AP), or other programmable general purpose or special purpose microprocessor, a digital signal processor (DSP), an image signal processor (ISP), a graphics processing unit (GPU) or other similar device, or an integrated circuit or a combination thereof.
In this exemplary embodiment, the processing device 140 may be built in the display 110. The first information extraction device 120 and the second information extraction device 130 may respectively be disposed on opposite sides of the display 110, or on opposite sides, relative to the display 110, of the field in which the display system 100 is located, and are configured to position the user and the target object and transmit information to the processing device 140 in a wired or wireless manner through their respective communication interfaces. In another exemplary embodiment, the display system 100 may be a single integrated system, and may be implemented as a head-mounted display device, a smartphone, a tablet computer or the like. The disclosure is not limited thereto. In the following exemplary embodiments, implementation details of a method for information display will be described with reference to the elements of the display system 100.
Referring to FIG. 2, in conjunction with the elements of the display system 100, first, the first information extraction device 120 extracts position information and posture information of a user (step S202), and the second information extraction device 130 extracts position information and posture information of a target object (step S204).
Next, the processing device 140 performs coordinate transformation according to the position information and the posture information of the user as well as the position information and the posture information of the target object, to generate overlay information between the user and the target object (step S206). Here, the position information and the posture information of the user may include a user position coordinate and a user orientation angle of the user relative to the first information extraction device 120, and the position information and the posture information of the target object may include a target object position coordinate and a target object orientation angle of the target object relative to the second information extraction device 130. The processing device 140 may transform the user position coordinate, the user orientation angle, the target object position coordinate and the target object orientation angle to the same coordinate system (for example, a world coordinate system or a camera coordinate system). Next, the processing device 140 calculates positions and angles of the user and the target object relative to the display 110 according to the user position coordinate after transformation, the user orientation angle after transformation, the target object position coordinate after transformation and the target object orientation angle after transformation, to generate the overlay information suitable for the user to view the target object.
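A minimal sketch of the coordinate transformation described above, assuming that each information extraction device reports a position coordinate and an orientation angle in its own frame, and that the pose of each device in a shared world coordinate system is known from calibration. All names and numeric values here (pose_to_matrix, T_world_dev1, T_world_dev2 and the sample poses) are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def pose_to_matrix(position, angles_rad):
    """Build a 4x4 homogeneous transform from a position (x, y, z) and
    Z-Y-X Euler angles (yaw, pitch, roll) in radians."""
    cz, sz = np.cos(angles_rad[0]), np.sin(angles_rad[0])
    cy, sy = np.cos(angles_rad[1]), np.sin(angles_rad[1])
    cx, sx = np.cos(angles_rad[2]), np.sin(angles_rad[2])
    rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    t = np.eye(4)
    t[:3, :3] = rz @ ry @ rx
    t[:3, 3] = position
    return t

# Poses reported by each extraction device in its own frame (sample values).
user_in_dev1 = pose_to_matrix([0.1, 1.6, 0.8], [0.0, 0.05, 0.0])
target_in_dev2 = pose_to_matrix([-0.2, 1.2, 0.5], [0.3, 0.0, 0.0])

# Calibrated poses of the two devices in the shared world frame (assumed known).
T_world_dev1 = pose_to_matrix([0.0, 0.0, 1.0], [np.pi, 0.0, 0.0])
T_world_dev2 = pose_to_matrix([0.0, 0.0, -1.0], [0.0, 0.0, 0.0])

# Both poses expressed in the same world coordinate system.
user_in_world = T_world_dev1 @ user_in_dev1
target_in_world = T_world_dev2 @ target_in_dev2
```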
In one exemplary embodiment, the overlay information includes a display position, a display angle and a display size of the related stereoscopic information displayed on the display 110 when the user views the target object. Here, the related stereoscopic information may be stored in the memory 141 or be acquired from other databases through a communication interface (not shown). The display position and display angle of the related stereoscopic information presented on the display 110 are associated with a depth distance of the user relative to the display 110, a depth distance of the target object relative to the display 110, an offset angle of the user relative to a depth axis of the display 110, and an offset angle of the target object relative to the depth axis of the display 110. The display size of the related stereoscopic information on the display 110 is associated with the depth distance of the user relative to the display 110 and the depth distance of the target object relative to the display 110.
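One possible way to organize the overlay information described in this paragraph is as a small record holding the display position, display angle and display size; the field names below are illustrative, not taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class OverlayInfo:
    u: float         # display position along the horizontal (u) axis of the display
    v: float         # display position along the vertical (v) axis of the display
    theta_rx: float  # orientation angle of the stereoscopic information on the u axis
    theta_ry: float  # orientation angle of the stereoscopic information on the v axis
    scale: float     # scaling ratio R(z) relative to the original model size
```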
Referring first to the geometric relationship among a user U3, the display 110 and a target object T3, the processing device 140 may calculate the overlay information from the depth distances of the user U3 and the target object T3 relative to the display 110, in which x is an offset amount of the related stereoscopic information S3 on the plane u axis of the display 110, θrx is an orientation angle of the related stereoscopic information S3 on the plane u axis of the display 110, and R(z) is a scaling ratio of the related stereoscopic information S3. In addition, an offset amount y and an orientation angle θry of the related stereoscopic information S3 on the vertical v axis of the display 110 can also be calculated in a similar manner. The scaling ratio here may be a scaling ratio relative to the original size of the related stereoscopic information, and is adjusted according to the relative distance relationship between the user U3 and the target object T3.
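The disclosure's exact expressions are not reproduced above; the following sketch shows one plausible way to obtain x, y, an orientation angle and R(z) under a simple line-of-sight assumption: the display position is where the straight line from the user to the target object crosses the display plane, and the scaling ratio follows the ratio of the depth distances. All of this is an assumption for illustration, not the claimed formula.

```python
import numpy as np

def overlay_on_display(user_pos, target_pos):
    """Intersect the user->target line of sight with the display plane z = 0.

    user_pos:   (x, y, z) of the user, with z > 0 in front of the display.
    target_pos: (x, y, z) of the target object, with z < 0 behind the display.
    Returns (x, y, theta_rx, scale).
    """
    ux, uy, uz = user_pos
    tx, ty, tz = target_pos
    # Parameter s at which the line user + s * (target - user) crosses z = 0.
    s = uz / (uz - tz)
    x = ux + s * (tx - ux)  # offset amount on the display's u axis
    y = uy + s * (ty - uy)  # offset amount on the display's v axis
    # Angle of the line of sight measured from the display's depth axis.
    theta_rx = np.arctan2(tx - ux, uz - tz)
    # Plausible scaling ratio: how far along the line of sight the display
    # sits between the user and the target object.
    return x, y, theta_rx, s

# Illustrative check: user 1 m in front of the display, target 1 m behind.
print(overlay_on_display((0.0, 1.6, 1.0), (0.5, 1.2, -1.0)))
# -> (0.25, 1.4, 0.2449..., 0.5)
```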
Referring again to the same geometric relationship, when the offset angles of the user U3 and the target object T3 relative to the depth axis of the display 110 are further taken into account, the processing device 140 may calculate the overlay information in a similar manner, in which x is the offset amount of the related stereoscopic information S3 on the plane u axis of the display 110, θrx is the orientation angle of the related stereoscopic information S3 on the plane u axis of the display 110, and R(z) is the scaling ratio of the related stereoscopic information S3. In addition, the offset amount y and the orientation angle θry of the related stereoscopic information S3 on the vertical v axis of the display 110 can also be calculated in a similar manner.
Referring back to FIG. 2, after generating the overlay information, the processing device 140 displays the related stereoscopic information of the target object on the display 110 according to the overlay information (step S208).
To facilitate understanding, in the following embodiments, use scenarios and details of the process of FIG. 2 will be described in conjunction with different configurations of the display system 100.
Referring first to FIG. 4, in this use scenario, the first information extraction device 120 extracts position information (xu,yu,zu) of a user U4, and the second information extraction device 130 extracts position information (xt,yt,zt) and posture information (θtx,θty,θtz) of a target object T4 (steps S404A and S404B).
Next, the processing device 140 performs spatial coordinate transformation respectively on the position information (xt,yt,zt) and posture information (θtx,θty,θtz) of the target object T4 and the position information (xu,yu,zu) of the user U4 (steps S406A and S406B) to transform the information to the same coordinate system, and then performs an overlay operation between the target object T4 and the user U4 (step S408), so as to generate overlay information (that is, position information (u,v), orientation angle (θrx,θry), and scaling ratio R(z)). Next, the processing device 140 performs database access (step S410), to acquire the related stereoscopic information of the target object T4 from a database, and displays multi-angle related stereoscopic information S4 according to the overlay information (step S412). In addition, when the user U4 moves to another position, the process will return to steps S404A and S404B to recalculate different overlay information, and the user U4 will view the related stereoscopic information in different positions, different orientation angles, and different sizes on the display 110.
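Tying the steps of this scenario together, the sketch below runs one illustrative pass over steps S404A/S404B through S412 for two successive user positions, reusing pose_to_matrix, overlay_on_display and the calibration matrices T_world_dev1 and T_world_dev2 from the earlier sketches; the sensor readings and the database access are stubbed assumptions, not the disclosure's implementation.

```python
samples = [  # (user position, user posture) as the user U4 moves around
    ((0.0, 1.6, 1.0), (0.0, 0.0, 0.0)),
    ((0.4, 1.6, 1.2), (0.1, 0.0, 0.0)),
]
target_pose = ((0.5, 1.2, -1.0), (0.3, 0.0, 0.0))  # (xt, yt, zt), (θtx, θty, θtz)

for user_pos, user_rot in samples:                              # steps S404A/S404B
    user_w = T_world_dev1 @ pose_to_matrix(user_pos, user_rot)  # step S406A
    target_w = T_world_dev2 @ pose_to_matrix(*target_pose)      # step S406B
    u, v, theta_rx, scale = overlay_on_display(user_w[:3, 3], target_w[:3, 3])  # step S408
    model = {"id": "T4"}  # step S410: database access, stubbed here
    # Step S412: render the stereoscopic information with the overlay info.
    print(f"draw {model['id']} at ({u:.2f}, {v:.2f}), angle {theta_rx:.2f}, scale {scale:.2f}")
```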
Referring first to FIG. 5, in this use scenario, the first information extraction device 120 extracts position information (xu,yu,zu) and posture information (θux,θuy,θuz) of a user U5, and the second information extraction device 130 extracts position information (xt,yt,zt) and posture information (θtx,θty,θtz) of a target object T5 (steps S504A and S504B).
Next, the processing device 140 performs spatial coordinate transformation respectively on the position information (xt,yt,zt) and posture information (θtx,θty,θtz) of the target object T5 as well as the position information (xu,yu,zu) and posture information (θux,θuy,θuz) of the user U5 (steps S506A and S506B) to transform the information to the same coordinate system, and then performs an overlay operation between the target object T5 and the user U5 (step S508), so as to generate the overlay information (that is, position information (u,v), orientation angle (θrx,θry), and scaling ratio R(z)). Next, the processing device 140 performs database access (step S510), to acquire the related stereoscopic information of the target object T5 from a database, and displays multi-angle related stereoscopic information S5 according to the overlay information (step S512).
Referring first to FIG. 6, in this use scenario, the first information extraction device 120 extracts position information (xu,yu,zu) and posture information (θux,θuy,θuz) of a user U6, and the second information extraction device 130 extracts position information (xt,yt,zt) and posture information (θtx,θty,θtz) of a target object T6 (steps S604A and S604B).
Next, the processing device 140 performs spatial coordinate transformation respectively on the position information (xt,yt,zt) and posture information (θtx,θty,θtz) of the target object T6 as well as the position information (xu,yu,zu) and posture information (θux,θuy,θuz) of the user U6 (steps S606A and S606B) to transform the information to the same coordinate system, and then performs an overlay operation between the target object T6 and the user U6 (step S608), so as to generate the overlay information (that is, position information (u,v), orientation angle (θrx,θry), and scaling ratio R(z)). Next, the processing device 140 performs database access (step S610), to acquire the related stereoscopic information of the target object T6 from a database, and displays multi-angle related stereoscopic information S6 according to the overlay information (step S612).
Referring first to FIG. 7, in this use scenario, the first information extraction device 120 extracts position information (xu,yu,zu) and posture information (θux,θuy,θuz) of a user U7, and the second information extraction device 130 extracts position information (xt,yt,zt) and posture information (θtx,θty,θtz) of a target object T7 (steps S704A and S704B).
Next, the processing device 140 performs spatial coordinate transformation respectively on the position information (xt,yt,zt) and posture information (θtx,θty,θtz) of the target object T7 as well as the position information (xu,yu,zu) and posture information (θux,θuy,θuz) of the user U7 (steps S706A and S706B) to transform the information to the same coordinate system, and then performs an overlay operation between the target object T7 and the user U7 (step S708), so as to generate the overlay information (that is, position information (u,v), orientation angle (θrx,θry), and scaling ratio R(z)). Next, the processing device 140 performs database access (step S710), to acquire the related stereoscopic information of the target object T7 from a database, and displays multi-angle related stereoscopic information S7 according to the overlay information (step S712).
In the method, the processing device and the display system for information display as proposed in the exemplary embodiments of the disclosure, coordinate transformation is performed using the position information and posture information of both the target object and the user, and the overlay information between the target object and the user is calculated, thereby realizing display of multi-angle overlaid images and improving the accuracy with which virtual information is overlaid onto a real object. The disclosure is applicable to various technologies that require overlay of image or graphics data, such as those for medical surgery, education and entertainment, mobile vehicles, or the like.
It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the disclosed embodiments without departing from the scope or spirit of the disclosure. In view of the foregoing, it is intended that the disclosure cover modifications and variations of this disclosure provided they fall within the scope of the following claims and their equivalents.
This application claims the priority benefit of U.S. provisional application Ser. No. 63/050,072, filed on Jul. 9, 2020. The entirety of the above-mentioned patent application is hereby incorporated by reference herein and made a part of this specification.