The present application claims priority from Japanese Patent Application No. 2023-002947 filed on Jan. 12, 2023, the content of which is hereby incorporated by reference into this application.
The present invention relates to a transparent display apparatus.
In recent years, transparent display apparatuses (in other words, transparent displays) have been developed and provided. The transparent display apparatus displays images (in other words, video images or the like) in a light-transmissive display region made of a liquid crystal layer or the like. A user can visually recognize display images of the transparent display apparatus from a front surface side and a back surface side in a state where the images are superimposed on a background.
This transparent display apparatus is used for communication between people (for example, see Patent Document 1 (US Patent Application Publication No. 2018/0033171)). Patent Document 1 discloses that, when performing the communication via a transparent display apparatus, a camera detects a person's visual line and an image is displayed at a position corresponding to the visual line.
It is conceivable that the transparent display apparatus can be used for many use applications related to communication between people, and further technological development for this purpose is desired.
An object of the present invention is to provide a technique capable of expanding the use applications of the transparent display apparatus.
A transparent display apparatus that is one embodiment of the present invention includes: a first substrate having a first surface; a second substrate having a second surface on a side opposite to the first surface; a display layer arranged between the first substrate and the second substrate, and having pixels that can transition between a transparent state of transmitting background light and a non-transparent state of displaying an image; a display region provided in a region where the first substrate, the second substrate, and the display layer overlap; and a controller controlling a state of the pixels of the display layer, the image displayed in the display region being visible from the first surface side together with a background on the second surface side, and the image displayed in the display region being visible from the second surface side together with the background on the first surface side, in which a sensor device for detecting a viewpoint of a first user on the first surface side of the display region is provided, and the controller: displays a plurality of individually selectable selection images toward the first user in the display region; determines a gaze image, which is the selection image gazed at by the first user, from detection information of the sensor device; and controls the state of the pixels so that a corresponding image based on the gaze image is displayed to a second user on the opposite side of the first user in the display region.
Hereinafter, embodiments of the present disclosure will be described in detail with reference to the drawings. In the drawings, the same parts are denoted by the same reference numerals in principle, and a repetitive description thereof will be omitted. In the drawings, in order to facilitate understanding of the invention, components may be schematically represented with respect to the width, thickness, shape, and the like of each part in comparison with the actual aspect, but this is merely one example and is not intended to limit the interpretation of the present invention.
For the purpose of explanation, when describing processing performed by programs, the programs, functions, processing units, and the like may be described as the subject, but the hardware subject for these is a processor, a controller configured by the processor and the like, a device, a calculator, a system, or the like. The calculator executes the processing according to a program read onto a memory by the processor while appropriately using resources such as the memory and a communication interface. This realizes predetermined functions, processing units, and the like. The processor is configured by, for example, a semiconductor device such as a CPU/MPU or a GPU. The processing is not limited to software program processing, and can also be implemented by a dedicated circuit. As the dedicated circuit, an FPGA, an ASIC, a CPLD, and the like can be applied.
A program(s) may be installed in advance as data on a target calculator, or may be distributed as data from a program source to the target calculator. The program source may be a program distribution server on a communication network or a non-transitory computer-readable storage medium such as a memory card or a disk. A program may be configured by multiple modules. A computer system may be configured by multiple devices. The computer system may be configured as a client/server system, a cloud computing system, an IoT system, and the like. Various types of data and information are configured with, for example, a structure such as a table or a list, but are not limited thereto. Expressions such as identification information, identifier, ID, name, and number can be replaced with each other.
In reference to
Note that for the purpose of explanation, (X, Y, Z) and (x, y) shown in the drawings may be used as a coordinate system and a direction. An X axis/X direction and a Y axis/Y direction in
As shown in
The transparent display 1 includes a light-transmissive display panel 10, a controller 2 connected to or built into the transparent display panel 10, and an eye tracking device 3 including a first camera 30.
The transparent display panel 10 is, for example, a liquid crystal display panel. A screen 20 of the transparent display panel 10 is configured by a plurality of members. The transparent display panel 10 includes, for example, a first substrate 11, a second substrate 12, and a display layer 13 as members configuring the screen 20. The first substrate 11 is an opposite substrate, the second substrate 12 is an array substrate, and the display layer 13 is a liquid crystal layer. Pixels PIX of the display layer 13 emit light in all directions. Although details will be described later, the display layer 13 has a plurality of pixels PIX that configure a display region of the screen 20 (see
In the first embodiment, the transparent display (in other words, a liquid crystal display) 1 having a liquid crystal layer as the display layer 13 of the transparent display panel 10 will be described. Note that in the first embodiment, a liquid crystal panel realizing a transmittance of 84% or more is used as the transparent display panel 10 of the transparent display 1; this transmittance indicates a degree of transparency of the display region of the screen 20 and is about equal to the transmittance of a window glass.
The transparent display panel 10 has a first surface s1 on a first substrate 11 side and a second surface s2 on a second substrate 12 side. For the purpose of explanation, the first surface s1 is assumed to be a front surface (in other words, a front), and the second surface s2 is assumed to be a back surface (in other words, a back). By controlling the display layer 13, the transparent display 1 can not only display an image toward the first user U1 on the first surface s1 side but also display an image toward the second user U2 on the second surface s2 side. In the transparent display 1, when the image is displayed on the screen 20 of the transparent display panel 10 according to the control of the display layer 13, the image can be visually recognized by not only the first user U1 on the first surface s1 side but also the second user U2 on the second surface s2 side. Note that in
The controller 2 is electrically connected to the transparent display panel 10, and controls the display layer 13 included in the transparent display panel 10. The controller 2 causes the screen 20 to display the image by controlling a display state of the pixels of the display layer 13 which is a liquid crystal layer. The controller 2 may be built into the transparent display panel 10 or may be connected to an outside of the transparent display panel 10. For example, in addition to a drive circuit or the like, a control circuit configuring the controller 2 may be mounted on a portion of the first substrate 11 or the second substrate 12. In addition thereto, the controller 2 may be an external computer such as a personal computer (PC) connected to the transparent display panel 10. Although not shown, a microphone, a speaker, a lamp, and the like may be installed and connected to the transparent display panel 10.
The eye tracking device 3 is used, for example, when the transparent display 1 is used as a communication tool between the first user U1 and the second user U2, as described later. In the first embodiment, the eye tracking device 3 is a sensor device that detects a viewpoint of the first user U1, who is the first operator, on the first surface s1 side of the transparent display panel 10, and has the first camera 30 photographing an image for detecting the viewpoint of the first user U1. The first camera 30 is installed so as to face the first surface s1 side of the screen 20 and, by this first camera 30, an image for identifying movement (in particular, movement of pupils) of an eye of the first user U1 is photographed. The first camera 30 is provided, for example, at a top of the transparent display panel 10 so as to be able to photograph a face of the first user U1 from a front.
An installation location of the first camera 30 is not particularly limited as long as it can capture the movement of the eye of the first user U1, and the location does not necessarily have to be on the transparent display panel 10. The first camera 30 is a CCD camera or the like in this example, but is not limited to this, and may be any camera that can photograph the movement of the eye of the first user U1. Further, in this example, the first camera 30 transmits the photographed image to the controller 2, and the controller 2 performs an image processing based on the photographed image (in other words, camera image), thereby detecting the viewpoint of the first user U1. That is, the controller 2 also functions as part of the eye tracking device 3.
Note that the eye tracking device 3 may be an already-existing device(s), and its configuration is not particularly limited. As one example, the eye tracking device 3 analyzes the image photographed by the first camera 30, and detects the viewpoint of the first user U1 from the movement of the eyes (in particular, the movement of the pupils) of the first user U1. Further, the eye tracking device 3 may detect the viewpoint (in other words, visual line direction) of the first user U1 by using corneal reflection of infrared rays, for example.
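By way of illustration only, the viewpoint detection described above can be modeled as mapping a pupil position in the camera image to a gaze point on the screen 20. The following is a minimal Python sketch, assuming a hypothetical calibration obtained beforehand and a pupil position supplied by upstream image processing; it is not the eye tracking device 3 itself.

```python
# Minimal sketch of viewpoint (gaze point) detection.
# detect-pupil processing is assumed to happen upstream; a real eye tracking
# device could also use corneal reflection of infrared rays, as noted above.

from dataclasses import dataclass

@dataclass
class GazePoint:
    x: float  # horizontal position on the screen 20 (pixels)
    y: float  # vertical position on the screen 20 (pixels)

def estimate_gaze_point(pupil_xy, calibration):
    """Map a pupil center in camera coordinates to a point on the screen.

    pupil_xy:    (u, v) pupil center in the camera image (assumed given).
    calibration: affine coefficients obtained beforehand by having the user
                 look at known reference points (an assumption here).
    """
    u, v = pupil_xy
    ax, bx, cx, ay, by, cy = calibration
    return GazePoint(x=ax * u + bx * v + cx,
                     y=ay * u + by * v + cy)
```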
Furthermore, the transparent display 1 can switch at least between a transparent state of transmitting the background and a non-transparent state of displaying the image for each pixel on the screen 20 of the transparent display panel 10. In other words, as control of transparency of the transparent display panel 10, the transparent display 1 can switch between a transparent state where the transparency is in an on state and a non-transparent state where the transparency is in an off state. The non-transparent state is a state in which the image displayed on the screen 20 is easier for the user to visually recognize than the background, and the transparent state is a state in which the background is easier for the user to visually recognize than the image displayed on the screen 20. Furthermore, the transparent state may simply be referred to as an image non-display state and the non-transparent state as an image display state, or the transparent state may be replaced with a non-scattering state and the non-transparent state with a scattering state, as will be described later. Note that the control for each pixel of the screen 20 by the transparent display 1 is not limited to switching the degree of transparency (sometimes referred to as transparency) between the transparent state and the non-transparent state by using a binary value of on/off, and the degree may be changed in multiple values.
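As an illustration of the per-pixel switching just described, the sketch below models each pixel's degree of transparency as a value that may be binary or multi-valued. All names and the frame representation are assumptions for explanation, not the controller 2's actual interface.

```python
# Sketch of per-pixel transparency control. As noted above, the control need
# not be binary on/off and may take intermediate transparency levels.

TRANSPARENT = 0.0      # background fully visible (image non-display state)
NON_TRANSPARENT = 1.0  # image fully visible (image display state)

def set_pixel_state(frame, x, y, level):
    """Set the scattering level of pixel (x, y); 0.0..1.0, multi-valued."""
    if not 0.0 <= level <= 1.0:
        raise ValueError("transparency level must be within [0.0, 1.0]")
    frame[y][x] = level

# Example: a frame in which only one pixel displays image light.
frame = [[TRANSPARENT] * 4 for _ in range(3)]
set_pixel_state(frame, 2, 1, NON_TRANSPARENT)
```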
Such a transparent display 1 can be installed and used at any position. The transparent display 1 can be installed, for example, at a counter or window where people meet, a partition between people, or the like. Furthermore, the use of the transparent display 1 is not necessarily limited to being installed at a specific location. For example, by employing the relatively small transparent display panel 10, it is possible for the operator to use the transparent display 1 while holding it in an operator's hand.
As will be described in detail later, this transparent display 1 is used as the communication tool between the first user U1 and the second user U2. The transparent display 1 can be effectively used, for example, in a medical field for the communication between the first user U1 who is a patient with general paralysis and the second user U2 who is a doctor.
As described above, the transparent display 1 is a display device that: allows the first user U1 to visually recognize the display image DG displayed on the screen 20 of the transparent display panel 10 from the first surface s1 side of the screen 20; and also allows the second user U2 to visually recognize the display image DG displayed on the screen 20 of the transparent display panel 10 from the second surface s2 side of the screen 20.
The transparent display panel 10 has visible characteristics in which the display image DG displayed on the screen 20 and the second user U2 on the second surface s2 side can be visually recognized by the first user U1 on the first surface s1 side. The transparent display panel 10 also has visible characteristics in which the display image DG displayed on the screen 20 and the first user U1 on the first surface s1 side can be visually recognized by the second user U2 on the second surface s2 side. In the transparent display 1, when the image is displayed in the display region of the screen 20 toward the first user U1 on the first surface s1 side, the image is also visually recognized by the second user U2 on the second surface s2 side.
For example, as shown in
Meanwhile, the second user U2 views the screen 20 of the transparent display panel 10 in a direction (direction Y2) from the second surface s2 side to the first surface s1 side. The second user U2 can visually recognize the display image DG displayed on the screen 20, for example, the character image corresponding to image light DGL2, as shown in
However, the display image DG seen by the second user U2 is just the image viewed from the second surface s2 side of the screen 20, and is different from the display image DG viewed from the first surface s1 side. The display image DG viewed from the second user U2 is an image obtained by reversing the characters “ABC” in the right-left direction, as shown in
Note that at least the display region, in which the image is displayed, in the first surface s1 of the screen 20 has the above characteristics, in other words, background transparency. Similarly, at least the display region, in which the image is displayed, in the second surface s2 of the screen 20 has the above characteristics. A peripheral region (
A hardware configuration example of the transparent display 1 according to the first embodiment will be explained by using
In
As shown in
The first surface s1 of the transparent display panel 10 is provided with a display region DA and a peripheral region PFA corresponding to the screen 20. Note that in this example, the peripheral region PFA is also part of the screen 20, but the screen 20 may consist of only the display region DA. The display region DA of the screen 20 is located in a region where the first substrate 11, the second substrate 12, and the display layer 13 overlap as viewed in a plan view in the Y direction. The peripheral region PFA is present outside the display region DA. In
The display region DA is a region where the image is formed according to an input signal supplied from the outside. The display region DA is an effective region where the image is displayed when the first surface s1 or the second surface s2 is viewed in a plan view, for example, in the Y direction. A plurality of pixels PIX are formed in a matrix on the display layer 13 corresponding to the display region DA. The peripheral region PFA is a region including four sides around the display region DA, in other words, a frame region, and no image is displayed.
In this example, the second substrate 12 has a larger width in the X direction than the first substrate 11. The second substrate 12 has a region 40 extended on one side in the X direction. The light source unit 50 and the drive circuit 70 are mounted in the region 40.
In this example, the light source unit 50 is arranged along the peripheral region PFA on a right side with respect to the screen 20, as shown in
The drive circuit 70 generates electrical signals for driving the first substrate 11, the second substrate 12, the display layer 13, and the light source unit 50, and supplies them to each unit. In
Besides components shown in
An optical path of the light emitted from the light source unit 50, a state of the liquid crystal, or the like will be explained with reference to
The second substrate 12, which is an array substrate, has a front surface 12f opposing the liquid crystal layer LQL and the first substrate 11. The first substrate 11, which is the opposite substrate, has a back surface 11b opposing the liquid crystal layer LQL and the front surface 12f of the second substrate 12. The liquid crystal layer LQL containing the liquid crystal is located between the front surface 12f of the second substrate 12 and the back surface 11b of the first substrate 11. In other words, the liquid crystal layer LQL is an optical modulation element.
The second substrate 12 is an array substrate in which a plurality of transistors (in other words, transistor elements) as switching elements (in other words, active elements) described later are arranged in an array. The first substrate 11 means a substrate arranged opposite to the second substrate 12, which is an array substrate, and can be referred to as an opposite substrate.
The transparent display panel 10 has a function of modulating the light passing through the liquid crystal of the liquid crystal layer LQL by controlling a state of an electric field formed around the liquid crystal layer LQL via the above switching element. The display region DA is provided in a region overlapping with the liquid crystal layer LQL.
The first substrate 11 and the second substrate 12 are bonded together via a seal portion (in other words, a seal material) SLM. The seal portion SLM is arranged to surround the display region DA. The liquid crystal layer LQL is located inside the seal portion SLM. The seal portion SLM plays a role of sealing the liquid crystal between the first substrate 11 and the second substrate 12 and plays a role as an adhesive for bonding the first substrate 11 and the second substrate 12 together.
The light source unit 50 is arranged at a position of opposing one side surface 11s1 of the first substrate 11. In
The liquid crystal of the liquid crystal layer LQL is a polymer dispersed liquid crystal, and contains a liquid crystalline polymer and liquid crystal molecules. The liquid crystalline polymer is formed into stripes, and the liquid crystal molecules are dispersed in the gaps between the liquid crystalline polymers. Each of the liquid crystalline polymer and the liquid crystal molecules has optical anisotropy or refractive index anisotropy. Responsiveness of the liquid crystalline polymer to electric fields is lower than responsiveness of the liquid crystal molecules to electric fields. An orientation direction of the liquid crystalline polymer hardly changes regardless of presence or absence of the electric field.
Meanwhile, an orientation direction of the liquid crystal molecules changes depending on the electric field when a voltage equal to or higher than a threshold is applied to the liquid crystal. When no voltage is applied to the liquid crystal, respective optical axes of the liquid crystalline polymer and the liquid crystal molecules are parallel to each other, and the light source light L1 incident on the liquid crystal layer LQL is hardly scattered in the liquid crystal layer LQL and passes through it. Such a state may be referred to as a transparent state (non-scattering state).
When a voltage equal to or higher than the threshold is applied to the liquid crystal, the respective optical axes of the liquid crystalline polymer and the liquid crystal molecules intersect with each other, and the light source light L1 incident on the liquid crystal is scattered in the liquid crystal layer LQL. Such a state may be referred to as a scattering state (in other words, a non-transparent state).
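The voltage dependence described in the preceding two paragraphs can be summarized in a short sketch. The threshold value below is an arbitrary assumption for illustration; the text specifies only that a voltage equal to or higher than a threshold produces the scattering state.

```python
# Sketch of the polymer dispersed liquid crystal state versus applied voltage:
# below the threshold the optical axes stay parallel (transparent,
# non-scattering state); at or above it they intersect (non-transparent,
# scattering state).

THRESHOLD_VOLTAGE = 1.8  # volts; assumed value for illustration only

def liquid_crystal_state(applied_voltage):
    """Return the state of a pixel's liquid crystal for a given voltage."""
    if applied_voltage >= THRESHOLD_VOLTAGE:
        return "scattering"       # light source light L1 is scattered (image visible)
    return "non-scattering"       # L1 passes through (background visible)

assert liquid_crystal_state(0.0) == "non-scattering"
assert liquid_crystal_state(5.0) == "scattering"
```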
The drive circuit 70 provided in the transparent display panel 10 and the controller 2 as a control circuit connected to the drive circuit 70 control the display state of the screen 20 by controlling an orientation of the liquid crystal in the propagation path of the light source light L1. In the scattering state, the light source light L1 is emitted as the emission light L2 by the liquid crystal to the outside of the transparent display panel 10 from the first surface s1 side, which is the front surface 11f, and the second surface s2 side, which is the back surface 12b.
Further, the background light L3 incident from the second surface s2 of the transparent display panel 10, which is the back surface 12b, passes through the second substrate 12, the liquid crystal layer LQL, and the first substrate 11, and is emitted to the outside from the first surface s1 of the transparent display panel 10, which is the front surface 11f. The background light L4 incident from the first surface s1, which is the front surface 11f, passes through the first substrate 11, the liquid crystal layer LQL, and the second substrate 12, and is emitted to the outside from the second surface s2 of the transparent display panel 10, which is the back surface 12b.
As described above, the emission light L2 and the background light L3 are visually recognized by the first user U1 on the front side, which is the first surface s1 side. The emission light L2 corresponds to the image light DGL1, and the background light L3 corresponds to the background light BGL1. The first user U1 can recognize the emission light L2 and the background light L3 in combination. In other words, the first user U1 can recognize a state in which the emission light L2 is superimposed on the background light L3.
Similarly, the emission light L2 and the background light L4 are visually recognized by the second user U2 on the second surface s2 side, which is the back surface. The emission light L2 corresponds to the image light DGL2, and the background light L4 corresponds to the background light BGL2. The second user U2 can recognize the emission light L2 and the background light L4 in combination. In other words, the second user U2 can recognize a state in which the emission light L2 is superimposed on the background light L4.
In the example shown in
A configuration example of circuits included in the transparent display panel 10 will be described with reference to
As shown in
The drive circuit 70 includes a signal processing circuit 71, a pixel control circuit 72, a gate drive circuit 73, a source drive circuit 74, a common potential drive circuit 75, and a light source control unit 52. Further, the light source unit 50 includes, for example, a light emitting diode element 51r (for example, red), a light emitting diode element 51g (for example, green), and a light emitting diode element 51b (for example, blue).
The signal processing circuit 71 includes an input signal analysis unit 711, a storage unit 712, and a signal adjustment unit 713. An input signal VS is inputted to the input signal analysis unit 711 of the signal processing circuit 71 from the control unit 90 via a wiring path such as a flexible printed circuit board (not shown). The input signal analysis unit 711 performs an analysis processing based on the input signal VS inputted from the control unit 90 and generates an input signal VCS. The input signal VCS is, for example, a signal determining what kind of gradation value is given to each pixel PIX (
The signal adjustment unit 713 generates an input signal VCSA from the input signal VCS inputted from the input signal analysis unit 711. The signal adjustment unit 713 sends the input signal VCSA to the pixel control circuit 72 and sends a light source control signal LCSA to the light source control unit 52. The light source control signal LCSA is, for example, a signal containing information on a light amount of the light source unit 50 that is set according to the gradation value inputted to the pixels PIX.
The pixel control circuit 72 generates a horizontal drive signal HDS and a vertical drive signal VDS based on the input signal VCSA. For example, in this embodiment, the plurality of pixels PIX are driven in a field sequential method. Therefore, in the pixel control circuit 72, the horizontal drive signal HDS and the vertical drive signal VDS are generated for each color that the light source unit 50 can emit.
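A minimal sketch of the field sequential driving described above follows, assuming hypothetical pixel-control and light-source interfaces. It shows only the per-color ordering of scanning and LED lighting, not the actual drive circuit 70.

```python
# Sketch of the field sequential method: within one frame, the red, green,
# and blue fields are driven one after another in sync with the corresponding
# light emitting diode element (51r, 51g, 51b). Interfaces are assumptions.

FIELD_ORDER = ("red", "green", "blue")

def drive_one_frame(pixel_control, light_source, frame_rgb):
    """Drive one frame: for each color, scan the pixels, then light the LED.

    pixel_control: object with write_field(gradations) (assumed interface)
    light_source:  object with flash(color, amount)    (assumed interface)
    frame_rgb:     dict mapping color name -> list of per-pixel gradations
    """
    for color in FIELD_ORDER:
        gradations = frame_rgb[color]
        pixel_control.write_field(gradations)   # HDS/VDS scanning for this field
        amount = max(gradations) if gradations else 0
        light_source.flash(color, amount)       # light amount per the LCSA signal
```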
The gate drive circuit 73 sequentially selects the gate lines GL (in other words, signal wiring) of the transparent display panel within one vertical scanning period based on the horizontal drive signal HDS. The order of the selection of gate lines GL is arbitrary. As shown in
The source drive circuit 74 supplies a gradation signal corresponding to an output gradation value of each pixel PIX to each source line SL (in other words, signal wiring) of the transparent display panel within one horizontal scanning period based on the vertical drive signal VDS. As shown in
The switching element Tr is formed at each portion in which the gate lines GL and source lines SL are intersected with each other. The plurality of gate lines GL and the plurality of source lines SL correspond to a plurality of signal wirings that transmit drive signals for driving the liquid crystal of the liquid crystal layer LQL in
For example, a thin film transistor is used as the switching element Tr. The type of the thin film transistor is not particularly limited. One of the source electrode and the drain electrode of the switching element Tr is connected to the source line SL, the gate electrode is connected to the gate line GL, and the other of the source electrode and the drain electrode is connected to one end of the capacitance of the polymer dispersed liquid crystal LC (corresponding to the liquid crystal of the liquid crystal layer LQL of
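The matrix addressing described in the preceding paragraphs can be illustrated with a toy model: selecting a gate line GL turns on one row of switching elements, and a gradation signal on a source line SL is then stored by that row's pixel. The class below is an assumption for explanation, not the gate drive circuit 73 or the source drive circuit 74 themselves.

```python
# Sketch of matrix addressing: a switching element Tr sits at each
# intersection of a gate line GL and a source line SL.

class PixelMatrix:
    def __init__(self, rows, cols):
        self.charge = [[0] * cols for _ in range(rows)]  # held per pixel
        self.selected_row = None

    def select_gate_line(self, row):
        """Like the gate drive circuit 73: turn on one row's switching elements."""
        self.selected_row = row

    def drive_source_line(self, col, gradation):
        """Like the source drive circuit 74: the selected pixel stores the signal."""
        if self.selected_row is not None:
            self.charge[self.selected_row][col] = gradation

matrix = PixelMatrix(rows=4, cols=6)
matrix.select_gate_line(2)
matrix.drive_source_line(5, gradation=255)
```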
In the configuration example shown in
Furthermore, as described above, the controller 2 also functions as part of the eye tracking device 3. That is, the functions and the processing units realized by the processor 1001 include a viewpoint detection processing (in other words, an eye tracking processing) for detecting the viewpoint of the first user U1, and the like.
The memory 1002 stores a control program 1011, setting information 1012, image data 1013, and other data and information related to a processing(s). The control program 1011 is a computer program that implements the functions and the like. The setting information 1012 is system setting information and user setting information. The image data 1013 is data for displaying images and video images on the screen 20. The communication interface device 1003 is connected to the drive circuit 70 of the transparent display panel 10, the first camera 30 configuring the eye tracking device 3, other external devices, and the like, and performs a communication processing(s) by using a predetermined communication interface. An input device(s) and an output device(s) can be connected to the input/output interface device 1004.
The transparent display 1 can be used as a communication tool between the first user U1 and the second user U2. For example, in a medical field, the first user U1 who is a patient with general paralysis and the second user U2 who is a doctor can communicate via the image displayed on the screen 20 of the transparent display panel 10.
Hereinafter, use of the transparent display 1 as a communication tool will be described in detail with reference to
As shown in the flowchart of
In this state, the first user U1 can visually recognize each character image 100, which is the selection image displayed on the screen 20, from the first surface s1 side, and can also visually recognize the second user U2 via the screen 20 on which the character image 100 is displayed. Further, the second user U2 can visually recognize the character image 100 displayed on the screen 20 from the second surface s2 side, and can also visually recognize the first user U1 via the screen 20 on which the character image 100 is displayed. However, the character image 100 viewed from the second user U2 inevitably becomes an image viewed from the second surface s2 side of the screen 20. In other words, the character image 100 viewed from the second user U2 becomes an image in which the characters are reversed in the right-left direction, that is, becomes a so-called mirror character image.
Proceeding to step S2, the eye tracking device 3 included in the transparent display 1 detects the viewpoint of the first user U1 based on the image of the first user U1 photographed by the first camera 30, for example. That is, the eye tracking device 3 detects a destination on the screen 20 where the visual line of the first user U1 is directed (in other words, a gaze point). When the viewpoint of the first user U1 is detected, as shown in
Proceeding to step S3, the transparent display 1 determines whether the first user U1 gazes at one of the plurality of character images 100 from the gaze point of the first user U1 detected by the eye tracking device 3. In other words, the transparent display 1 determines whether the first user U1 intends to select one of the plurality of character images 100.
When the viewpoint of the first user U1 remains on one character image 100 for a predetermined period of time (for example, about several seconds) or more (step S3: YES), the transparent display 1 determines that the first user U1 gazes at the character image 100. More specifically, as shown in
In other words, when the cursor 110 displayed on the screen 20 moves into the setting region 200 set correspondingly to one character image 100 and remains in the setting region 200 for the above-mentioned predetermined period of time or more (step S3: YES), the transparent display 1 determines that the first user U1 has selected the character image 100. For example, when a time during which the cursor 110 displayed on the screen 20 overlaps the one setting region 200 becomes the above-mentioned predetermined period of time or more, the transparent display 1 determines that the character image 100 has been selected by the first user U1. For example, as shown in
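Steps S2 and S3 amount to a dwell-time test: the gaze point (cursor 110) must stay inside one setting region 200 for the predetermined period. A minimal sketch follows, assuming rectangular setting regions, a 60 Hz sampling rate, and a dwell time of about two seconds; these values and all helper names are illustrative assumptions.

```python
# Sketch of dwell-time selection: a character image is treated as selected
# when the gaze point stays inside its setting region 200 long enough.

import time

DWELL_SECONDS = 2.0  # "about several seconds"; assumed value

def region_at(gaze, regions):
    """Return the id of the setting region containing the gaze point, if any.

    gaze:    object with .x and .y attributes (see the earlier GazePoint sketch)
    regions: dict mapping region id -> (x0, y0, x1, y1) rectangle, assumed
    """
    for region_id, (x0, y0, x1, y1) in regions.items():
        if x0 <= gaze.x <= x1 and y0 <= gaze.y <= y1:
            return region_id
    return None

def detect_gazed_image(next_gaze, regions):
    """Poll gaze points until one region has been gazed at long enough."""
    current, entered = None, None
    while True:
        region = region_at(next_gaze(), regions)
        if region != current:
            current, entered = region, time.monotonic()
        elif region is not None and time.monotonic() - entered >= DWELL_SECONDS:
            return region  # e.g. the setting region of the character image "K"
        time.sleep(1 / 60)  # assumed 60 Hz sampling
```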
In this example, the setting region 200 corresponding to each character image 100 is set within a range of such a degree as to include each character image 100. However, the setting region 200 may be set within any range, and may be set as a region including pixels that do not form the character image 100. The setting region 200 corresponding to each character image 100 may be set within a range wider than a range of displaying each character image 100. Note that illustration of the setting region 200 is omitted except in
In this example, the plurality of character images 100 are displayed over a relatively wide range on the screen 20 of the transparent display panel 10. Consequently, an interval between the respective character images 100 can be made relatively wide, and the setting region 200 corresponding to each character image 100 can be set in a relatively wide range. This makes it easier to improve detection accuracy of the character image 100 that the first user U1 intends to select. Of course, the plurality of character images 100 may be displayed in a relatively narrow range on the screen 20. In this case, there is an advantage that the first user U1 and the second user U2 can easily visually recognize facial expressions of the other person via the transparent display panel 10.
Note that instead of setting the setting region 200, for example, the respective character images 100 of a predetermined character size may respectively be arranged at a setting position coordinate that is individually set within the screen 20. In this case, the transparent display 1 may determine which character image 100 the first user U1 intends to select, for example, based on the setting position coordinate of each character image 100 and the position coordinate of the cursor 110.
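The coordinate-based alternative just mentioned can be sketched as a nearest-position hit test between the cursor 110 and the setting position coordinate of each character image 100. The layout and selection radius below are illustrative assumptions.

```python
# Sketch of selection by position coordinates instead of setting regions:
# the selected image is the one whose coordinate is nearest the cursor 110,
# provided it is within an assumed selection radius.

import math

CHAR_POSITIONS = {"K": (120.0, 80.0), "A": (40.0, 80.0)}  # assumed layout
SELECT_RADIUS = 24.0  # assumed: about half a character size, in pixels

def image_under_cursor(cursor_xy):
    """Return the character image nearest the cursor, if close enough."""
    best = min(CHAR_POSITIONS, key=lambda c: math.dist(cursor_xy, CHAR_POSITIONS[c]))
    return best if math.dist(cursor_xy, CHAR_POSITIONS[best]) <= SELECT_RADIUS else None

assert image_under_cursor((118.0, 78.0)) == "K"
assert image_under_cursor((0.0, 0.0)) is None
```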
As described above, in step S3, if it is determined that the first user U1 is gazing at one of the character images 100, for example, the character image 100a of “K” (step S3: YES), the processing proceeds to step S4. In step S4, the transparent display 1 displays a corresponding image corresponding to the character image 100a, which is the gaze image, to the second user U2 on the opposite side of the screen 20.
Here, the “corresponding image” refers to an image obtained by appropriately changing the display state of the character image 100a, which is the gaze image, and “displaying the corresponding image to the second user U2” means to display the corresponding image so that the second user U2 can visually recognize it more easily than the first user U1 can. In other words, “displaying the corresponding image to the second user U2” means to display, on the screen 20, an image (corresponding image) whose display state has been changed from that of the character image 100a, which is the gaze image, so that the second user U2 can easily visually recognize it.
As described above, when determining that the character image 100a of “K” is gazed at by the first user U1, the transparent display 1 displays the corresponding image 120 corresponding to the character image 100a of “K” to the second user U2. That is, the corresponding image 120, which has been changed so that the second user U2 easily visually recognizes the display state of the character image 100a of “K”, is displayed on the screen 20. As shown in
As shown in
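Step S4's corresponding image 120, as described for this example, is the gaze image enlarged relative to the other selection images and reversed in the right-left direction so that the second user U2 reads it as a normal (non-mirror) character. The sketch below applies both transformations to a toy character bitmap; the bitmap representation is an assumption for illustration.

```python
# Sketch of generating the corresponding image 120: enlarge the gaze image
# and reverse it in the right-left direction for the second user U2.

def make_corresponding_image(glyph, scale=2):
    """glyph: list of strings forming a character bitmap (one cell per char)."""
    enlarged = []
    for row in glyph:
        wide = "".join(ch * scale for ch in row)  # enlarge horizontally
        enlarged.extend([wide] * scale)           # enlarge vertically
    return [row[::-1] for row in enlarged]        # mirror right-left for U2

glyph_k = ["X.X",
           "XX.",
           "X.X"]
for row in make_corresponding_image(glyph_k):
    print(row)
```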
Note that if it is determined in step S3 that the first user U1 is not gazing at one of the character images 100 (step S3: NO), the process returns to step S2 and continues detecting the viewpoint of the first user U1.
In this way, when one of the character images 100, which are the selection images, is gazed at by the first user U1, the transparent display 1 displays a corresponding image 120 corresponding to the gazed character image (that is, the gaze image) to the second user U2. This makes it easier for the first user U1 to convey to the second user U2 that, for example, the character image of “K” has been selected. Conversely, the second user U2 can easily recognize that the first user U1 has selected the character image 100a of “K”. By repeating such selections, the second user U2 can also easily recognize words and sentences.
As described above, according to the first embodiment, it is possible to use the transparent display 1 in a new way, and it becomes easier to communicate between the first user U1 and the second user U2. The first user U1 and the second user U2 can communicate while looking at each other's facial expressions face-to-face, which facilitates the communication. As described above, according to the present invention, it is possible to provide a technique that can expand the use applications of the transparent display apparatus that is the transparent display 1.
Note that the transparent display 1 may include a speaker, a voice synthesis system, and the like. In that case, for example, when the first user U1 gazes at one of the character images 100, which is the selection image, on the screen 20 of the transparent display panel 10, the transparent display 1 can convert character information corresponding to the character image 100 into audio and output it from the speaker. Thus, the second user U2 can communicate with the first user U1 while listening to the audio together with the character images on the screen 20.
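As one hedged illustration of this optional audio output, the sketch below uses the third-party pyttsx3 text-to-speech library. The actual voice synthesis system of the transparent display 1 is not specified in the text, so this library choice is purely an assumption.

```python
# Minimal sketch of converting the gazed character information to audio,
# assuming the pyttsx3 text-to-speech package is installed (pip install pyttsx3).

import pyttsx3

def speak_selected_character(character: str) -> None:
    """Convert the character information of the gaze image to audio output."""
    engine = pyttsx3.init()   # initialize the platform's speech engine
    engine.say(character)     # queue the selected character, e.g. "K"
    engine.runAndWait()       # play it through the speaker

speak_selected_character("K")
```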
Furthermore, in the above example, when the transparent display 1 determines that the first user U1 is gazing at one character image 100 for a predetermined period of time or more, it displays the corresponding image 120 to the second user U2. However, timing at which the transparent display 1 displays the corresponding image 120 to the second user U2 is not limited to this. For example, apart from the plurality of character images 100, a “confirm button” image for confirming the selection of the character image 100 may be displayed on the screen 20. Then, after determining that the first user U1 is gazing at one character image 100 for a predetermined period of time or more, when the first user U1 further gazes at and selects the “confirm button” image, the transparent display 1 may display the corresponding image 120 to the second user U2.
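The two-step confirm flow described above can be sketched as a small state machine: a gazed character becomes pending, and only a subsequent gaze at the "confirm button" image releases it for display as the corresponding image 120. The function below is an illustrative assumption, not the controller 2's actual logic.

```python
# Sketch of the optional two-step selection: character first, then confirm.

def confirm_flow(gazed_targets):
    """gazed_targets: iterable of dwell-confirmed targets, e.g. "K", "confirm"."""
    pending = None
    for target in gazed_targets:
        if target == "confirm":
            if pending is not None:
                yield pending     # now display the corresponding image 120
                pending = None
        else:
            pending = target      # candidate character, not yet confirmed

print(list(confirm_flow(["K", "confirm", "A", "B", "confirm"])))  # ['K', 'B']
```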
In the above example, an example has been explained in which the transparent display 1, as the corresponding image 120 corresponding to the gaze image, changes the display state of the character image 100a to enlarge it more than the other character images 100, and causes the screen 20 of the transparent display panel 10 to display the character image 100a reversed in the right-left direction. However, the corresponding image 120 is not limited to such an image.
For example, as shown in
Further, the corresponding image 120 does not have to be an image obtained by changing the display state of the character image 100a, which is the gaze image, at the position of the character image 100a. The corresponding image 120 may be an image obtained by causing a second region 220, which is different from the first region 210 in the screen 20, to display the character image 100a which is the gaze image.
For example, when the first user U1 gazes at the character image 100a of “K”, as shown in
Furthermore, for example, as shown in
Furthermore, in the first embodiment, the transparent display 1 displays the plurality of character images 100 as selection images on the screen 20 of the transparent display panel 10, but the selection images are not limited to the character images 100. For example, as shown in
A transparent display of a second embodiment will be explained by using
As shown in
Similarly to the first camera 30, an installation location of the second camera 31 is not particularly limited as long as it is a location where the movements of the eyes of the second user U2 can be photographed, and it does not necessarily have to be on the transparent display panel 10. Further, in this example, the second camera 31 is a CCD camera or the like, but is not limited to this, and may be any camera that can photograph the movements of the eyes of the second user U2. Moreover, in this example, the second camera 31 transmits the photographed image to the controller 2, and the controller 2 performs an image processing based on the photographed image (in other words, camera image), thereby detecting the viewpoint of the second user U2.
The transparent display 1 of the first embodiment described above is used, for example, in a medical field for communication between the first user U1 who is a patient with general paralysis and the second user U2 who is a doctor. The transparent display 1 of the second embodiment can also be used, for example, for communication between the first user U1 and the second user U2 who are both patients with general paralysis. That is, the transparent display 1 of the second embodiment can also be used, for example, for communication between patients suffering from general paralysis. In the transparent display 1 of the first embodiment, by the image displayed on the screen 20, an intention can be conveyed only from the first user U1 to the second user U2. In the transparent display 1 according to the second embodiment, by the image displayed on the screen 20, the intention can be conveyed in both directions between the first user U1 and the second user U2.
The transparent display 1 of the second embodiment can switch between a first state, in which a character image of “alphabet” as the selection image (corresponding to first selection image) is displayed toward the first user U1, and a second state in which a character image as the selection image (corresponding to second selection image) is displayed toward the second user U2 according to a request from the first user U1 or the second user U2.
As shown in the flowchart of
In this state, the transparent display 1 detects the viewpoints of the first user U1 and the second user U2 by using the eye tracking device 3 including the first camera 30 and the second camera 31 (step S12). Next, the transparent display 1 determines which of the first state and the second state is selected based on detection result(s). That is, the transparent display 1 determines whether the first state or the second state is requested by the first user U1 or the second user U2. First, in step S13, for example, it is determined whether the first user U1 or the second user U2 has selected the first state.
As shown in
In this case, the process proceeds to step S14, where the transparent display 1 sets the display of the transparent display panel 10 to the first state, and as shown in
In this first state, the first user U1 can visually recognize each character image 100 displayed on the screen 20 from the first surface s1 side, as shown in
Thereafter, similarly to the first embodiment, an intention communication processing based on the viewpoint of the first user U1 is started (see the flowchart and the like of
Meanwhile, if the first user U1 or the second user U2 does not select the first state (step S13: NO), the process proceeds to step S15, and whether the first user U1 or the second user U2 selects the second state is determined. As shown in
In this case, the process proceeds to step S16, where the transparent display 1 sets the display of the transparent display panel 10 to the second state and, as shown in
In this second state, the first user U1 can visually recognize each character image 100 displayed on the screen 20, as shown in
Thereafter, the intention communication processing based on the viewpoint of the second user U2 is performed in the same flow as in the first embodiment. That is, the transparent display 1 determines the second gaze image, which is the one character image 100 gazed at by the second user U2, from the detection result of the eye tracking device 3. Then, the transparent display 1 displays, to the first user U1, the corresponding image (corresponding to the second corresponding image) 120 corresponding to the character image 100 as the second gaze image.
Note that when the first user U1 gazes at the “OFF” image 312 of the first switch image 310 while the intention communication processing based on the viewpoint of the first user U1 is being performed, the display on the transparent display panel 10 is switched, for example, from the first state to the second state. That is, as shown in
Further, when the second user U2 gazes at the “OFF” image 322 of the second switch image 320 while the intention communication processing based on the viewpoint of the second user U2 is being performed, the display on the transparent display panel 10 is switched, for example, from the second state to the first state. That is, as shown in
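The switching behavior of steps S13 to S16 and the “OFF” handovers just described can be summarized as a small state transition table. The switch-image identifiers below follow the reference numerals in the text (310, 320); the dispatch logic itself is an illustrative assumption.

```python
# Sketch of first/second state switching: gazing at the "ON"/"OFF" images of
# the switch images 310 (for U1) and 320 (for U2) changes which user's
# selection images are displayed.

STATE_FIRST = "first"    # first user U1 selects the selection images
STATE_SECOND = "second"  # second user U2 selects the selection images

def next_state(state, gazed_switch):
    """gazed_switch: one of "310.ON", "310.OFF", "320.ON", "320.OFF"."""
    transitions = {
        ("310.ON", None): STATE_FIRST,            # U1 requests the first state
        ("320.ON", None): STATE_SECOND,           # U2 requests the second state
        ("310.OFF", STATE_FIRST): STATE_SECOND,   # U1 hands over to U2
        ("320.OFF", STATE_SECOND): STATE_FIRST,   # U2 hands over to U1
    }
    return transitions.get((gazed_switch, state),
                           transitions.get((gazed_switch, None), state))

state = next_state(None, "310.ON")    # -> "first" (step S14)
state = next_state(state, "310.OFF")  # -> "second" (handover to U2)
```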
According to the second embodiment described above, it is possible to use the transparent display 1 in a new way, and it becomes easier to communicate between the first user U1 and the second user U2. In the transparent display 1 of the second embodiment, the intention communication between the first user U1 and the second user U2 becomes possible in both directions by using the images displayed on the screen 20. Therefore, the range of use of the transparent display 1 is expanded.
Note that in the above example, the order of performing steps S13 and S14 and the order of performing steps S15 and S16 are not particularly limited, and steps S15 and S16 may be performed first. Furthermore, in the second embodiment, one example of a method for switching the display of the transparent display panel 10 between the first state and the second state according to the request from the first user U1 or the second user U2 has been described. However, this method is just one example and is not particularly limited.
In the second embodiment, the first user U1 and the second user U2 use the eye tracking device 3 as an input means for performing various inputs and selections, but the input means is not limited to this. As the input means, for example, an input device such as a physically operated switch (so-called physical switch) or a keyboard may be used in combination with the eye tracking device 3. If the hands and fingers of the first user U1 and the second user U2 move to some extent, the operability can be further improved by using the physical switch or the like.
Furthermore, in the second embodiment, according to the request from the first user U1 or the second user U2, the first state in which the first user U1 can select the selection image, and the second state in which the second user U2 can select the selection image are switched, but it is not necessary to switch between the first state and the second state. That is, the first user U1 and the second user U2 may be able to select the selection image at the same time. Specifically, as shown in
In this case, the transparent display 1 determines, from the detection result of the eye tracking device 3, the first selection image (that is, the first gaze image), which is the one character image 100A gazed at by the first user U1. Then, the transparent display 1 displays the first corresponding image 120A based on the character image 100A, which is the first gaze image, to the second user U2. Meanwhile, the transparent display 1 determines, from the detection result of the eye tracking device 3, the second selection image (that is, the second gaze image), which is the one character image 100B gazed at by the second user U2. Then, the transparent display 1 displays the second corresponding image 120B based on the character image 100B, which is the second gaze image, to the first user U1. By allowing the first user U1 and the second user U2 to select the selection images at the same time in this way, the convenience can be further improved.
Although the embodiments of the present invention have been specifically described above, the present invention is not limited to the above-described embodiments and can be modified in various ways without departing from the scope thereof. In each embodiment, components can be added, deleted, replaced, and the like except for essential components. Unless specifically limited, each component may be singular or plural. The present invention also includes a form obtained by combinations of the respective embodiments and their modification examples.
In the above-described embodiment, a liquid crystal display, which is a liquid crystal display device, has been described as one example of the transparent display apparatus of the present invention, but the present invention can also be applied to other display devices such as a self-luminous organic EL display device. The functions described in the embodiments are similarly applicable to any display device including the display layer (pixels) that can transition between the transparent state and the non-transparent state. Further, the size of the screen of the display device is applicable from a small type to a large type without particular limitation.
Further, in the above-described embodiment, the example in which the characteristic control is performed by the controller has been explained as the transparent display apparatus of the present invention, but the configuration of the transparent display apparatus of the present invention is not limited to this, and a computer system externally connected to the controller of the transparent display apparatus may perform similar characteristic control.