The disclosure relates to an electronic device and a method of controlling the electronic device. More particularly, the disclosure relates to an electronic device for displaying immersive content and a method of controlling the electronic device.
Recently, interest in the next-generation media environment that provides a user with a virtual environment similar to a real environment has increased. In particular, metaverse is in the spotlight as a representative service that provides a user with a virtual environment. The metaverse is a compound word of meta (virtual and abstract) and universe (the real world). The metaverse refers to a three-dimensional (3D) virtual world. The core technology of the metaverse is extended reality (XR) that encompasses virtual reality (VR), augmented reality (AR), and mixed reality (MR).
Immersive content including a metaverse environment may be implemented by immersive displays and wearable devices. For example, a metaverse environment may be implemented by a personal wearable display device (e.g., a head-mounted display (HMD)). Also, the market for non-wearing type displays that may provide immersion and presence to users without wearing devices has recently increased. Examples of the non-wearing type displays may include a 360° projector, a multi-faceted screen or a room-type screen using a large display panel, a hemispherical screen, and a 3D screen.
According to an aspect of the disclosure, an electronic device includes: at least one sensor; a memory storing at least one instruction; and at least one processor configured to execute the at least one instruction to: determine, through the at least one sensor, a location of a user, wherein the location includes a distance between the user and a screen on which an image output from the electronic device is displayed, and a direction of the user with respect to the electronic device; select a target object in a first area, which corresponds to a direction in which the user is located, among an entire area displayed on the screen; and control the target object to be enlarged, based on the distance between the user and the screen being within a first threshold distance.
According to an aspect of the disclosure, a method of controlling an electronic device, the method comprising: determining, through at least one sensor, a location of a user, wherein the location comprises a distance between the user and a screen on which an image output from the electronic device is displayed, and a direction of the user with respect to the electronic device; selecting a target object in a first area, which corresponds to a direction in which the user is located, among an entire area displayed on the screen; and controlling the target object to be enlarged, based on the distance between the user and the screen being within a first threshold distance.
The above and other aspects, features, and advantages of certain embodiments of the disclosure will be more apparent from the following description taken in conjunction with the accompanying drawings, in which:
The terms used herein will be briefly described, and the disclosure will be described in detail.
The terms used herein are general terms currently widely used in the art in consideration of functions in the disclosure, but the terms may vary according to the intention of one of ordinary skill in the art, precedents, or new technology in the art. Also, some of the terms used herein may be arbitrarily chosen by the present applicant, and in this case, these terms are defined in detail below. Accordingly, the specific terms used herein may be defined based on the unique meanings thereof and the whole context of the disclosure.
When a certain part “includes” a certain component, the part does not exclude another component but may further include another component, unless the context clearly dictates otherwise. Also, terms used in the embodiments such as “ . . . unit” or “ . . . module” indicate a unit for processing at least one function or operation, which may be implemented in hardware, software, or a combination of hardware and software.
Embodiments will now be described more fully with reference to the accompanying drawings for one of ordinary skill in the art to be able to perform the embodiments without any difficulty. However, the disclosure may be embodied in many different forms and is not limited to the embodiments set forth herein. For clarity, portions irrelevant to the descriptions of the disclosure are omitted in the drawings, and like components are denoted by like reference numerals throughout the specification.
The term “user” used herein refers to a person who controls a function or an operation of a computing device or an electronic device by using a control device, and may include a viewer, a manager, or an installation engineer.
The term “processor” may include various processing circuitry and/or multiple processors. For example, as used herein, including the claims, the term “processor” may include various processing circuitry, including at least one processor, wherein one or more of the at least one processor, individually and/or collectively in a distributed manner, may be configured to perform various functions described herein. As used herein, when “a processor”, “at least one processor”, and “one or more processors” are described as being configured to perform numerous functions, these terms cover situations, for example and without limitation, in which one processor performs some of the recited functions and another processor(s) performs others of the recited functions, and also situations in which a single processor may perform all of the recited functions. Additionally, the at least one processor may include a combination of processors performing various of the recited/disclosed functions, e.g., in a distributed manner. At least one processor may execute program instructions to achieve or perform various functions.
Hereinafter, the disclosure will be described in detail with reference to the attached drawings.
Referring to
The electronic device 100 according to an embodiment of the disclosure may display an image on a screen 200. The electronic device 100 may include a display panel providing the screen 200, or may include a projector configured to project light onto the screen 200 so that content is displayed on the screen 200. Here, the term “image” may be, but is not limited to, a 3D graphic image. For example, the electronic device 100 may display content including an image, text, and an application made of 3D graphics. For example, the electronic device 100 may display metaverse content. Although the electronic device 100 is illustrated as displaying an image on the screen 200 through a projector, the disclosure is not limited to the above embodiment.
The electronic device 100 according to an embodiment of the disclosure may be a projection device that enlarges light output from a light source and projects the light onto a wall or a screen through a projection lens. The electronic device 100 may output an image by using a projection display method (e.g., digital light processing (DLP)) using a digital micromirror device (DMD). In addition, the electronic device 100 may include a cathode ray tube (CRT) projector or a liquid crystal display (LCD) projector.
The electronic device 100 according to an embodiment of the disclosure may display an image on the screen 200 on one side, and may detect a gaze of a user or a location of the user on the other side opposite to the one side. For example, in
The electronic device 100 according to an embodiment of the disclosure may recognize and enlarge an object that the user is interested in among an entire area of the screen 200 through the gaze of the user and the location of the user.
Referring to
The enlargement 101 of
In an embodiment of the disclosure, the electronic device 100 may determine a first area 30 corresponding to a direction in which the user 20 is located among the entire area of the screen 200. For example, the electronic device 100 may determine a portion of the entire area of the screen 200 facing a gaze of the user 20 as the first area 30. The electronic device 100 may select a target object 40 to be enlarged from among at least one object (e.g., 40 and 50) included in the first area 30. For example, the electronic device 100 may recognize text, numbers, characters, and avatars included in the first area 30 and may determine one of them as the target object 40.
The enlargement 102 of
The electronic device 100 according to an embodiment of the disclosure may detect a location change of the user, and may determine the location of the user to enlarge the target object. The electronic device 100 may enhance user experience (UX) by automatically enlarging an object that the user is interested in.
Referring to
The sensor 120 according to an embodiment may detect a gaze of a user or may detect a location of the user. The location of the user may include a distance between the user and the electronic device 100 and a direction of the user with respect to the electronic device 100. The sensor 120 according to an embodiment may include at least one sensor. A sensing value related to the gaze of the user and the location of the user detected by the sensor 120 may be output to the processor 110.
The memory 130 according to an embodiment may store various data, a program, or an application for driving and controlling the electronic device 100. Also, the program stored in the memory 130 may include one or more instructions. The program (the one or more instructions) or the application stored in the memory 130 may be executed by the processor 110.
The memory 130 according to an embodiment may store an enlargement target selecting module 133, an enlargement determining module 134, and an enlargement providing module 135.
The processor 110 according to an embodiment may control an overall operation of the electronic device 100. The processor 110 may execute one or more instructions stored in the memory 130.
The processor 110 according to an embodiment may execute one or more instructions included in the enlargement target selecting module 133, the enlargement determining module 134, and the enlargement providing module 135. The processor 110 according to an embodiment may execute one or more instructions to determine a location of a user, including a distance between the user and a screen on which an image output from the electronic device 100 is displayed and a direction of the user with respect to the electronic device 100, through the at least one sensor 120.
The processor 110 according to an embodiment may execute one or more instructions included in the enlargement target selecting module 133 to select a target object included in a first area corresponding to a direction in which the user is located among an entire area of the screen. The processor 110 according to an embodiment may execute one or more instructions included in the enlargement determining module 134 to determine whether a distance between the user and the first area of the screen is within a first threshold distance. The processor 110 according to an embodiment may execute one or more instructions included in the enlargement providing module 135 to enlarge the target object, when the distance between the user and the first area of the screen is within the first threshold distance.
Referring to
The sensor 120 according to an embodiment may include an image sensor 121 and a position detection sensor 122. The image sensor 121 according to an embodiment may obtain an image frame such as a still image or a moving image through a camera. For example, the image sensor 121 may receive an image of a user within a camera recognition range. The image of the user captured through the image sensor 121 may be processed through the processor 110, and the processor 110 may analyze the image of the user to obtain information about a gaze change of the user, a gesture of the user, and a location of the user.
The position detection sensor 122 according to an embodiment may detect a location of the user. The position detection sensor 122 may measure a distance to an object and may measure a direction with respect to the object. For example, the position detection sensor 122 may include a time of flight (ToF) sensor or a millimeter wave (mmWave) sensor. In an embodiment, the position detection sensor 122 may measure a distance between a screen and the user, by measuring a distance between the user and the electronic device 100 and a distance between the electronic device 100 and the screen. The position detection sensor 122 may measure a direction of the user with respect to the electronic device 100. The location of the user measured through the position detection sensor 122 may be processed through the processor 110.
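As a non-limiting illustration of the above, a minimal Python sketch of how the distance between the user and the screen might be approximated from the two measured distances and the measured direction is shown below; the function name, the angle convention, and the numeric values are assumptions for illustration and are not part of the disclosure.

```python
import math

def user_to_screen_distance(d_user: float, d_screen: float, theta_deg: float) -> float:
    """Approximate the user-to-screen distance from two position readings.

    d_user:    measured distance from the electronic device to the user (m)
    d_screen:  measured distance from the electronic device to the screen (m)
    theta_deg: direction of the user, as an angle from the device's projection axis
    """
    theta = math.radians(theta_deg)
    # Law of cosines on the triangle formed by the device, the user, and the screen.
    return math.sqrt(d_user ** 2 + d_screen ** 2 - 2.0 * d_user * d_screen * math.cos(theta))

# Example: user 2.0 m from the device, 30 degrees off-axis, screen 3.0 m away.
print(round(user_to_screen_distance(2.0, 3.0, 30.0), 2))  # ~1.61 m
```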
The memory 130 according to an embodiment may store various data generated during an operation of the electronic device 100. The memory 130 may include a flash memory type, a hard disk type, a multimedia card micro type, or a card type memory (e.g., SD or XD memory), and may include a non-volatile memory including at least one of a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, and an optical disk, and a volatile memory such as a random-access memory (RAM) or a static random-access memory (SRAM).
The memory 130 according to an embodiment may store one or more instructions and/or a program so that the electronic device 100 operates to obtain information about an object. The memory 130 according to an embodiment may store the enlargement target selecting module 133, the enlargement determining module 134, the enlargement providing module 135, and an enlargement history managing module 136.
The memory 130 according to an embodiment may include a first DB 131 and a second DB 132. The first DB 131 and the second DB 132 may be configured as one device with the enlargement target selecting module 133, the enlargement determining module 134, the enlargement providing module 135, and the enlargement history managing module 136, or may be configured as separate storage devices to manage data.
The first DB 131 according to an embodiment may store information about one or more objects included in a 3D graphic image. The first DB 131 may include an attribute value of at least one of a text object, a number object, a character object, and an avatar object. The first DB 131 may store information about a priority of objects for each user.
The second DB 132 according to an embodiment may store information about one or more objects included in a 3D graphic image. The second DB 132 may include an attribute value of a user-customized object. The second DB 132 may store information about a priority of objects for each user.
The display 140 according to an embodiment may provide a screen on which a 3D graphic image is displayed. The display 140 generates a driving signal by converting an image signal, a data signal, an on-screen display (OSD) signal, and a control signal processed by the processor 110.
The display 140 may include a projector 141 and a display panel 142. Although the electronic device 100 includes both the projector 141 and the display panel 142 in
When the electronic device 100 according to an embodiment includes the projector 141, the electronic device 100 may be a projector display device. The electronic device 100 may display an image on the screen by projecting light onto the screen through the projector 141. The projector 141 may include a light source for projection, such as an LED or a laser. The projector 141 may include an optical system such as a lens. The projector 141 may output an image by using a projection display method (e.g., digital light processing (DLP)) using a digital micromirror device (DMD). The projector 141 may include a cathode ray tube (CRT) projector or a liquid crystal display (LCD) projector.
When the electronic device 100 according to an embodiment includes the display panel 142, the electronic device 100 may provide the screen through the display panel 142. The display panel 142 according to an embodiment may include a plasma display panel (PDP), an LCD, or an organic light-emitting diode (OLED) flexible display, or may include a 3D display.
The user interface 150 according to an embodiment may include an input interface and an output interface. The input interface may receive an input from the user and may transmit the input to the processor 110. The input interface may include, for example, a button, a remote controller, a touchscreen, and a voice input microphone. The output interface may output various information related to the electronic device 100. The output interface may include a display, a light-emitting diode (LED), or a speaker.
The driving module 160 according to an embodiment may control a direction, an angle, and a location of the electronic device 100 under the control of the processor 110. For example, the driving module 160 may control a direction of the projector 141 to rotate 360° under the control of the processor 110. For example, the driving module 160 may control the electronic device 100 to move in a left-right direction under the control of the processor 110.
The communication module 170 according to an embodiment may transmit and receive data or a signal to and from an external device or a server under the control of the processor 110. The communication module 170 may include, but is not limited to, a Bluetooth communication unit, a Bluetooth low energy (BLE) communication unit, a near field communication unit, a WLAN (Wi-Fi) communication unit, a Zigbee communication unit, an infrared data association (IrDA) communication unit, a Wi-Fi direct (WFD) communication unit, an ultra-wideband (UWB) communication unit, an Ant+ communication unit, or a microwave (uWave) communication unit according to the performance and structure of the electronic device 100.
The power supply module 180 according to an embodiment may supply power input from an external power source to elements in the electronic device 100 under the control of the processor 110. For example, the power supply module 180 may include a power cable or a battery located inside the electronic device 100.
The processor 110 according to an embodiment may execute programs stored in the memory 130 to control the sensor 120, the display 140, the user interface 150, the driving module 160, the communication module 170, and the power supply module 180. The processor 110 may include a separate neural processing unit (NPU) that performs an operation of an artificial intelligence model. Also, the processor 110 may include a central processing unit (CPU) and a graphics processing unit (GPU). In the disclosure, the processor 110 may include one or more processors.
The processor 110 according to an embodiment may determine a location of a user, including a distance between the user and a screen on which an image output from the electronic device is displayed and a direction of the user with respect to the electronic device through at least one sensor 120. The processor 110 according to an embodiment may determine whether there is a gaze change of the user, by tracking a gaze of the user through the image sensor 121. Based on the determination that there is a gaze change of the user, the processor 110 according to an embodiment may determine a location of the user by obtaining a distance and a direction of the user through the position detection sensor 122.
The processor 110 according to an embodiment may execute one or more instructions of a program stored in the memory 130 to control an object displayed on the screen to be enlarged. For example, the processor 110 may execute one or more instructions included in the enlargement target selecting module 133, the enlargement determining module 134, and the enlargement providing module 135.
The processor 110 according to an embodiment may execute one or more instructions of the enlargement target selecting module 133 to select a target object included in a first area corresponding to a direction in which the user is located among an entire area displayed on the screen.
The processor 110 according to an embodiment may execute one or more instructions included in the enlargement determining module 134 and the enlargement providing module 135 to control the target object to be enlarged, when the electronic device 100 (or the processor 110) determines that a distance between the screen and the user is within a first threshold distance.
The processor 110 according to an embodiment may execute one or more instructions of the enlargement target selecting module 133 to determine an area facing a gaze of the user at the location of the user as the first area of the screen. The processor 110 may select a target object included in the first area. The processor 110 may determine a coordinate value of the selected target object.
The processor 110 according to an embodiment may compare attribute values of objects stored in the memory 130 with an attribute value detected from at least one object included in the first area. When the electronic device 100 (or the processor 110) determines that at least one object matches any one of the objects stored in the memory 130, the processor 110 may select the at least one object as the target object. For example, each of the first DB 131 and the second DB 132 may include information about an object and an attribute value of the object.
When the electronic device 100 (or the processor 110) determines that the user is located within the first threshold distance from the first area of the screen for a certain period of time, the processor 110 according to an embodiment may control the target object included in the first area to be enlarged. The processor 110 according to an embodiment may store information about the target object in the memory 130.
The processor 110 according to an embodiment may execute one or more instructions of the enlargement history managing module 136 to update, in the memory 130, a history table including information about a distance change value between the user and the target object and a viewing time of the user for the target object.
The processor 110 according to an embodiment may execute one or more instructions of the enlargement history managing module 136 to calculate an enlargement priority of the objects stored in the memory 130 based on the updated history table. The processor 110 according to an embodiment may select a target object to be enlarged, based on the calculated priority from among objects included in the first area.
When the electronic device 100 (or the processor 110) determines that the distance between the screen and the user is within the first threshold distance, the processor 110 according to an embodiment may execute one or more instructions included in the enlargement providing module 135 to display a message asking the user whether to enlarge an object on the screen.
The processor 110 according to an embodiment may execute one or more instructions included in the enlargement providing module 135 to control the first area including the target object to be enlarged on a portion of the entire area displayed on the screen. When the electronic device 100 (or the processor 110) determines that the distance between the user and the screen is within a second threshold distance shorter than the first threshold distance based on the direction in which the user is located, the processor 110 according to an embodiment may control the first area including the target object to be enlarged on the entire area of the screen.
Referring to
In an embodiment, the processor 110 may determine whether there is a gaze change of the user through the image sensor 121. The processor 110 may determine a location of the user through the position detection sensor 122, based on the determination that there is a gaze change of the user.
In operation S420, the processor 110 according to an embodiment of the disclosure may execute one or more instructions included in the enlargement target selecting module 133 to select a target object included in a first area corresponding to a direction in which the user is located among an entire area displayed on the screen.
In an embodiment, the processor 110 may determine an area facing the gaze of the user at the location of the user as the first area of the screen. The processor 110 may select a target object included in the first area. The processor 110 may determine a coordinate value of the selected target object.
In an embodiment, the processor 110 may compare attribute values of objects stored in the memory 130 with an attribute value detected from at least one object included in the first area. When the electronic device 100 (or the processor 110) determines that at least one object matches any one of the objects stored in the memory 130, the processor 110 may select the at least one object as the target object.
In an embodiment, when the electronic device 100 (or the processor 110) determines that the user is located within a first threshold distance from the first area of the screen for a certain period of time, the processor 110 may control the target object included in the first area to be enlarged. The processor 110 may store information about the target object in the memory 130.
In an embodiment, the processor 110 may execute one or more instructions included in the enlargement history managing module 136 to update, in the memory 130, a history table including information about a distance change value between the user and the target object and a viewing time of the user for the target object.
In an embodiment, the processor 110 may calculate a priority of the objects stored in the memory, based on the updated history table. The processor 110 may select a target object to be enlarged, based on the calculated priority from among objects included in the first area.
In an embodiment, when the electronic device 100 (or the processor 110) determines that the distance between the user and the first area of the screen is within the first threshold distance, the processor 110 may display a message asking the user whether to enlarge an object on the screen.
In operation S430, when the electronic device 100 (or the processor 110) determines that the distance between the screen and the user is within the first threshold distance, the processor 110 according to an embodiment of the disclosure may control the target object to be enlarged. For example, the processor 110 may execute one or more instructions included in the enlargement determining module 134 to determine whether the distance between the user and the first area of the screen is within the first threshold distance. For example, the processor 110 may execute one or more instructions included in the enlargement providing module 135 to control the target object to be enlarged.
In an embodiment, the processor 110 may display an image that is enlarged around the target object on the screen. For example, the processor 110 may display an image in a state where the user approaches the target object. Accordingly, as the user approaches the target object, the target object and a surrounding background or a surrounding object of the target object within a field of view of the user may also be enlarged. In this case, the image enlarged by the processor 110 may be displayed as a continuous image on the screen.
In an embodiment, the processor 110 may control the first area including the target object to be enlarged on a portion of the entire area of the screen. When the electronic device 100 (or the processor 110) determines that the distance between the user and the screen is within a second threshold distance shorter than the first threshold distance based on the direction in which the user is located, the processor 110 may control the first area including the target object to be enlarged on the entire area of the screen.
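For illustration only, a minimal Python sketch of the overall flow of operations S410 through S430 is shown below. The sensor, screen, and object-database interfaces, the helper names, and the threshold value are hypothetical and merely stand in for the sensor 120, the display 140, and the memory 130 described above.

```python
FIRST_THRESHOLD_M = 2.0  # assumed value; the disclosure does not fix a specific distance

def select_target_object(objects, object_db):
    """Return the first detected object whose attributes match a stored object, if any."""
    for obj in objects:
        if object_db.matches(obj.attributes):
            return obj
    return None

def control_step(sensor, screen, object_db):
    """One pass of the hypothetical S410-S430 flow."""
    # S410: determine the user's location once a gaze change is detected.
    if not sensor.gaze_changed():
        return
    distance, direction = sensor.user_location()  # distance to the screen, direction to the device

    # S420: select a target object inside the first area facing the user.
    first_area = screen.area_facing(direction)
    target = select_target_object(first_area.objects(), object_db)

    # S430: enlarge the target when the user is within the first threshold distance.
    if target is not None and distance <= FIRST_THRESHOLD_M:
        screen.enlarge(target)
```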
Referring to
For example, a user 510 may look straight at the screen 200 from the center of the screen 200, and when an object 540 of interest is displayed on the screen 200, may change his/her gaze to a direction in which the object 540 is located. For example, as shown in
In the disclosure, because it is common for a user to look straight at the screen 200, a gaze direction of the user 510 before a gaze change is assumed to be perpendicular to the screen 200, but the disclosure is not limited to the above embodiment. For example, even before the gaze changes, the user may not look straight at the screen 200 but may look sideways.
Referring to
For example, a user 610 may look at a portion where an object 640 of interest (or target object 640) is located among an entire area displayed on the screen 200, and may move to look closer at the object of interest 640. For example, as shown in
The electronic device 100 according to an embodiment of the disclosure may select the target object 640 included in a first area 630 corresponding to a direction in which the user 620 is located among the entire area displayed on the screen 200. This operation of the electronic device 100 may correspond to operation S420 of
In operation S611, the electronic device 100 may determine an area facing a gaze of the user 620 at the location of the user 620 as the first area 630 that is an area of interest of the screen 200. For example, the processor 110 may determine a portion of an entire area of a 3D graphic image displayed on the screen 200 corresponding to the location of the user 620 as the first area 630. For example, the first area 630 may be an area perpendicular to the gaze of the user 620 looking straight at the screen 200, based on the location of the user 620.
In an embodiment, the processor 110 may determine a portion of the entire area displayed on the screen 200 corresponding to the location of the user 620 and the gaze of the user 620 as the first area 630. Embodiments of determining the first area 630 will be described below with reference to
The electronic device 100 may display a boundary line of the first area 630 through the screen 200, but the disclosure is not limited to the above embodiment. For example, the boundary line of the first area 630 may be a virtual line that is not visible to the user.
In operation S612, the electronic device 100 may select the target object 640 included in the first area 630. For example, when the first area 630, which is an area of interest, is determined, the electronic device 100 may select the target object 640 to be enlarged from among at least one object (e.g., 640 and 650) included in the first area 630. Although the at least one object (e.g., 640 and 650) includes two objects, the disclosure is not limited to the above embodiment.
In an embodiment, the electronic device 100 may select the target object 640, based on information about objects stored in the memory 130. Although the target object 640 is an avatar of the 3D graphic image, the disclosure is not limited to the above embodiment. For example, the memory 130 may include information about one or more objects included in the 3D graphic image of the screen 200. The memory 130 may include attribute values of the one or more objects, for example, an attribute value of at least one of letters, numbers, characters, and avatars. The memory 130 may include an attribute value of a user-customized object. The memory 130 may include enlargement history information of the user.
In an embodiment, the electronic device 100 may detect at least one object (e.g., 640 and 650) included in the first area 630, by using an information detection model. The electronic device 100 may detect an attribute value of the at least one object (e.g., 640 and 650). For example, the information detection model may use an optical character recognition (OCR) method, and may recognize general letters, numbers, special characters, and symbols. For example, the information detection model may include an object recognition model that recognizes objects such as characters or avatars.
In an embodiment, the electronic device 100 may compare an attribute value detected from the at least one object (e.g., 640 and 650) included in the first area 630 with attribute values of the objects stored in the memory 130. When the electronic device 100 (or the processor 110) determines that the at least one object (e.g., 640 and 650) matches any one of the stored objects, the electronic device 100 may select the at least one object (e.g., 640 and 650) as a target object, which will be described below with reference to
In an embodiment, the electronic device 100 may select the at least one object (e.g., 640 and 650) as a target object when a trigger condition is satisfied, even though the at least one object (e.g., 640 and 650) included in the first area 630 is not an object stored in the memory 130. For example, the electronic device 100 may determine how long the user is located at the same position. For example, when the electronic device 100 (or the processor 110) determines that the user 620 is located within a threshold distance for a certain period of time, the electronic device 100 may determine that the trigger condition is satisfied. Even when the at least one object (e.g., 640 and 650) does not match the objects stored in the memory 130, the electronic device 100 may determine the at least one object (e.g., 640 and 650) as an object that the user is interested in and may select the at least one object (e.g., 640 and 650) as a target object, which will be described below with reference to
In an embodiment, the electronic device 100 may include information about a priority of the objects stored in the memory 130. The electronic device 100 may determine one object considering an enlargement history of the user from among the at least one object (e.g., 640 and 650) as a target object, based on the information about the priority of the objects, which will be described below with reference to
In operation S613, the electronic device 100 may determine a coordinate value 641 of the selected target object 640. For example, the electronic device 100 may output information about key points indicating an edge of the target object 640 or coordinate values of the key points. In the disclosure, the term “coordinate value” may include plane coordinates of an object, and rotational coordinates of the object in a 3D space where a 3D shape of the object may be defined. The disclosure is not limited to the above embodiment, and the electronic device 100 may calculate various information for determining a location or a direction of the target object 640 in the first area 630.
The electronic device 100 according to an embodiment of the disclosure may output a message regarding whether to enlarge an object through the screen 200. For example, the electronic device 100 may display a user interface screen 601 saying “Would you like to use the metaverse-specific enlargement function?” on the screen 200. When the electronic device 100 receives a user input for “Yes”, the electronic device 100 may control the target object 640 to be enlarged. However, the disclosure is not limited to the above embodiment, and the electronic device may control the target object 640 to be automatically enlarged without a separate user interface screen. In another example, the electronic device 100 may output a message regarding whether to enlarge an object before selecting the target object 640.
Referring to
In the enlargement 701 of
In the enlargement 702 of
In an embodiment, the electronic device 100 may display an image that is enlarged around the target object 742 on the screen 200. For example, the electronic device 100 may enlarge the target object 742 as if a camera photographing the target object 742 is approaching the target object 742. Accordingly, as the user 710 approaches the target object 742, the target object 742 may be enlarged and enter a field of view of the user 710. In an embodiment, as the target object 742 is enlarged, another object 752 adjacent to the target object 742 may also be enlarged. In this case, the electronic device 100 may continuously display the target object 742 with a background image (e.g., a tree background).
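One possible way to realize this continuous, camera-like enlargement, shown only as a hedged sketch, is to map the user's remaining distance to a zoom factor that grows as the user approaches and is capped so the target and its background remain continuous; the threshold and maximum values below are assumptions.

```python
def zoom_scale(distance_m: float,
               first_threshold_m: float = 2.0,
               max_scale: float = 3.0) -> float:
    """Map the user's distance to the screen to a continuous zoom factor.

    At the first threshold distance the scale is 1.0 (no enlargement); as the
    user approaches, the scale grows and is capped at max_scale so the target
    object and its surrounding background stay continuous on the screen.
    """
    if distance_m >= first_threshold_m:
        return 1.0
    scale = first_threshold_m / max(distance_m, 0.1)  # guard against division by zero
    return min(scale, max_scale)

print(zoom_scale(2.5))  # 1.0 (outside the first threshold distance)
print(zoom_scale(1.0))  # 2.0
print(zoom_scale(0.4))  # 3.0 (clamped to max_scale)
```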
In an embodiment, a plurality of objects (e.g., 741 and 751) may be included in a first area 730 determined by the electronic device 100. The electronic device 100 may control the plurality of objects (e.g., 741 and 751) to be enlarged, based on determination that each of the plurality of objects (e.g., 741 and 751) matches objects stored in the memory 130. In another embodiment, when only one object 741 from among the plurality of objects (e.g., 741 and 751) matches the objects stored in the memory 130, the electronic device 100 may control only the one object 741 to be enlarged.
Because the electronic device 100 according to an embodiment of the disclosure displays 3D graphic images of objects and each of the objects includes an attribute value, the electronic device 100 may control each object that the user is interested in to be enlarged. The electronic device 100 may implement immersive content by enlarging an object displayed on the screen 200.
The electronic device 100 according to an embodiment of the disclosure may perform various operations in addition to an operation of enlarging a target object. The electronic device 100 according to an embodiment may perform an operation of automatically enlarging letters for a user who has a history of enlarging small letters. For example, when the electronic device 100 (or the processor 110) determines that the user's eyesight is low, the electronic device 100 may automatically enlarge letters in content with subtitles, such as news.
Also, the electronic device 100 according to an embodiment may enlarge a target object and may also provide additional information about the target object. For example, for an object with historical information such as a pyramid, the electronic device 100 may provide a YouTube link about the historical information while enlarging the object.
Referring to
In the enlargement 801 of
In the enlargement 802 of
A method by which the processor 110 of the electronic device 100 according to embodiments determines a first area will be described with reference to a first case 901 and a second case 902 of
Referring to the first case 901 of
For example, the processor 110 may determine whether a direction in which the user is located is a left side or a right side of the electronic device 100, and may divide an entire area of the screen 200 into two areas, that is, a left area and a right area. For example, when the electronic device 100 (or the processor 110) determines that the user 910 is on the left side of the electronic device 100, the processor 110 may determine the left area among the entire area of the screen 200 as a first area 931.
However, the disclosure is not limited to the above embodiment, and the processor 110 may divide the entire area of the screen 200 into n areas from a left area to a right area (n is a natural number of 3 or more) based on the electronic device 100.
Referring to the second case 902 of
A method by which the electronic device 100 according to an embodiment of the disclosure determines a first area is not limited to the above example. For example, the processor 110 may determine an arbitrary area corresponding to a location of the user and a gaze of the user as a first area.
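As a non-limiting sketch of the first and second cases above, the Python function below divides the horizontal extent of the screen into n areas and maps the user's direction to one of them; the field-of-view range and the angle convention are assumptions.

```python
def first_area_index(user_angle_deg: float, n_areas: int = 2, fov_deg: float = 120.0) -> int:
    """Map the user's direction to one of n vertical strips of the screen.

    user_angle_deg: direction of the user relative to the device, negative to
                    the left and positive to the right.
    n_areas:        number of strips (2 for a left/right split, or n >= 3).
    fov_deg:        assumed horizontal range covered by the position sensor.
    """
    half = fov_deg / 2.0
    clamped = max(-half, min(half, user_angle_deg))
    fraction = (clamped + half) / fov_deg  # normalize to [0, 1]
    return min(int(fraction * n_areas), n_areas - 1)  # 0 is the leftmost area

print(first_area_index(-40.0))             # 0 -> left area (user on the left side)
print(first_area_index(15.0, n_areas=4))   # 2 -> third strip from the left
```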
Hereinafter, a method by which the electronic device 100 according to embodiments selects a target object will be described with reference to
Referring to
Referring to
Referring to
In an embodiment, the memory 130 may include the first DB 131 in which information about at least one object included in a 3D graphic image of the screen 200 is stored. The first DB 131 may include an attribute value of at least one of a letter object, a number object, a character object, and an avatar object.
In an embodiment, the processor 110 may compare attribute values of objects stored in the first DB 131 with an attribute value detected from the at least one object (e.g., 1140 and 1150). For example, the processor 110 may compare the attribute values of the objects stored in the first DB 131 with an attribute value of a first object 1140. The processor 110 may compare the attribute values of the objects stored in the first DB 131 with an attribute value of a second object 1150.
In operation S1020, when the electronic device 100 (or the processor 110) determines that the at least one object (e.g., 1140 and 1150) included in the first area 1130 matches any one of the objects stored in the memory 130, the processor 110 may select the at least one object as a target object.
Referring to
Also, in an embodiment, the first DB 131 may not include information about the second object 1150. When the electronic device 100 (or the processor 110) determines that the objects included in the first DB 131 do not match the second object 1150 detected in the first area 1130, the processor 110 may not select the second object 1150 as a target object.
In an embodiment of the disclosure, the electronic device 100 may compare an attribute value of each of the first object 1140 and the second object 1150 with attribute values of the objects stored in the memory 130, and when the electronic device 100 (or the processor 110) determines that the first object 1140 matches any one of the objects stored in the memory 130, the electronic device 100 may select the first object 1140 as a target object 1140.
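A minimal Python sketch of operations S1010 and S1020 is shown below, assuming the first DB 131 is represented as a list of stored attribute records and that attribute values have already been detected from the objects in the first area (e.g., by OCR or an object recognition model); the data shapes and names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    object_id: str
    attributes: dict  # e.g., {"type": "avatar", "name": "Avatar 3"}

# Assumed shape of the first DB: attribute records for avatar and letter objects.
FIRST_DB = [
    {"type": "avatar"},
    {"type": "letter"},
]

def select_targets(detected, stored_db=FIRST_DB):
    """S1010/S1020: keep detected objects whose attributes match a stored record."""
    targets = []
    for obj in detected:
        for record in stored_db:
            # A match means every stored attribute appears in the detected object.
            if all(obj.attributes.get(key) == value for key, value in record.items()):
                targets.append(obj)
                break
    return targets

first_area = [DetectedObject("1140", {"type": "avatar", "name": "Avatar 3"}),
              DetectedObject("1150", {"type": "tree"})]
print([t.object_id for t in select_targets(first_area)])  # ['1140']
```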
Referring to
Referring to
In an embodiment, the trigger condition may be based on a distance between the user and the screen 200 and a viewing time of the user. For example, when the electronic device 100 (or the processor 110) determines that a user 1310 is located within a first threshold distance from the screen 200 and is watching a first area 1330 for a certain period of time (e.g., 5 seconds), the processor 110 may determine that the trigger condition is satisfied. The processor 110 may select an object 1350 included in the first area 1330 as a target object.
In an embodiment, even when information about the object 1350 included in the first area 1330 does not match objects stored in the first DB 131, the processor 110 may recognize that the object 1350 is a user-customized object that the user wants to enlarge. The processor 110 may control the object 1350, which is a user-customized object, to be enlarged.
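The trigger condition described above can be sketched, purely for illustration, as a dwell timer that fires when the user stays within the first threshold distance for the stated period; the threshold and dwell values below are assumptions (5 seconds matches the example given earlier).

```python
import time

class DwellTrigger:
    """Fires when the user stays within a threshold distance for dwell_s seconds."""

    def __init__(self, threshold_m: float = 2.0, dwell_s: float = 5.0):
        self.threshold_m = threshold_m
        self.dwell_s = dwell_s
        self._since = None  # time at which the user entered the threshold distance

    def update(self, distance_m: float, now: float | None = None) -> bool:
        now = time.monotonic() if now is None else now
        if distance_m > self.threshold_m:
            self._since = None  # user stepped back: reset the timer
            return False
        if self._since is None:
            self._since = now
        return (now - self._since) >= self.dwell_s

trigger = DwellTrigger()
print(trigger.update(1.5, now=0.0))  # False - the timer has just started
print(trigger.update(1.4, now=6.0))  # True  - 6 s within the threshold distance
```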
In operation S1220, the processor 110 may store information about the target object in the memory 130. The processor 110 may update an object that has a history of being enlarged by the user, to the memory 130.
Referring to
In an embodiment, when the processor 110 stores the information about the user-customized object in the second DB 132, the user-customized object displayed in a first area may be selected as a target object as shown in
Referring to
Referring to
Referring to
In an embodiment, the processor 110 may obtain a history table 1501 in which, for each object, a value obtained by multiplying a distance change between the user and the screen 200 including the objects by a viewing time of the user for the object is defined as a ‘score’, based on an enlargement history of the user. For example, the processor 110 may obtain the history table 1501 in which a score is assigned to each of Avatar 1, Avatar 2, and Avatar 3, based on an enlargement history of the user for each of Avatar 1, Avatar 2, and Avatar 3.
In detail, referring to the history table 1501, the processor 110 may assign 3 points to Avatar 1 based on a history in which the user moved 1 m to watch Avatar 1 and watched Avatar 1 for 3 seconds. Also, the processor 110 may assign 5 points to Avatar 2 based on a history in which the user moved 0.5 m to watch Avatar 2 and watched Avatar 2 for 10 seconds. Also, the processor 110 may assign 6 points to Avatar 3 based on a history in which the user moved 1.5 m to watch Avatar 3 and watched Avatar 3 for 4 seconds.
In an embodiment, the processor 110 may update the history table 1501 in the first DB 131 and the second DB 132. For example, when the processor 110 obtains, in the history table 1501, a score for an object (e.g., an avatar) stored in the first DB 131, the processor 110 may update the score in the first DB 131. For example, when the processor 110 obtains, in the history table 1501, a score for an object (e.g., a user-customized object) stored in the second DB 132, the processor 110 may update the score in the second DB 132.
In operation S1420, the processor 110 may calculate a priority of objects stored in the memory, based on the updated history table. For example, the processor 110 may calculate information about the priority (e.g., Avatar 3>Avatar 2>Avatar 1) 1502, based on the obtained history table 1501.
In operation S1430, the processor 110 may select a target object to be enlarged, based on the calculated priority from among objects included in a first area. For example, the processor 110 may select the first object 1540 recognized as ‘Avatar 3’ 1503 as a target object based on the calculated priority from among the objects (e.g., 1540, 1550, and 1560) included in the first area 1530.
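For illustration, the scoring and priority calculation of operations S1410 through S1430 can be reproduced with the worked numbers above, where each score is the distance change multiplied by the viewing time; the code below is a hedged sketch, and the data layout is an assumption.

```python
# Enlargement history: (object, distance the user moved in m, viewing time in s).
history = [
    ("Avatar 1", 1.0, 3),
    ("Avatar 2", 0.5, 10),
    ("Avatar 3", 1.5, 4),
]

# S1410: score each object as distance change x viewing time.
scores = {name: distance * seconds for name, distance, seconds in history}
# {'Avatar 1': 3.0, 'Avatar 2': 5.0, 'Avatar 3': 6.0}

# S1420: priority is the descending order of scores (Avatar 3 > Avatar 2 > Avatar 1).
priority = sorted(scores, key=scores.get, reverse=True)

# S1430: pick the highest-priority object among those detected in the first area.
first_area_objects = {"Avatar 3", "Avatar 2", "Tree"}
target = next(name for name in priority if name in first_area_objects)
print(target)  # Avatar 3
```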
A method by which the electronic device 100 according to an embodiment of the disclosure selects a target object is not limited to the above example. For example, the processor 110 may provide an automatic enlargement recommendation function by executing a learning and modeling model. For example, when the first object 1540 appears, if the user has a history of frequently coming close to and watching the first object 1540 for a long time, the processor 110 may automatically suggest whether to enlarge the first object 1540, even when there is no change in the gaze or the distance value of the user.
Hereinafter, another method by which an electronic device according to an embodiment enlarges an object will be described with reference to
Referring to
In operation S1620, the processor 110 may select a target object included in a first area corresponding to a direction in which the user is located among an entire area displayed on the screen 200. Operation S1620 may correspond to operation S420 of
In operation S1630, the processor 110 may output a message regarding whether to enlarge an object through the screen 200. Operation S1630 corresponds to the user interface screen 601 of
In operation S1640, the processor 110 may control the first area including the target object to be displayed on a portion of the screen. 1702 of
In an embodiment, when the electronic device 100 (or the processor 110) determines that a distance D1 between the user 1710 and the screen 200 is within a first threshold distance, the processor 110 may control the first area 1731 to be enlarged on the portion 1732. In this case, the electronic device 100 may enlarge and display the first area 1731 in the portion 1732 and may not enlarge the remaining area 1741. In this case, the remaining area 1742 displayed in the state 1702 after partial enlargement may be smaller than the remaining area 1741 displayed in the state 1701 before enlargement.
In operation S1650, the processor 110 may determine whether a distance D2 between the user 1720 and the screen 200 is within a second threshold distance. When the electronic device 100 (or the processor 110) determines that the distance D2 between the user 1720 and the screen 200 is within the second threshold distance, the processor 110 may perform operation S1660. When the electronic device 100 (or the processor 110) determines that the distance D2 between the user 1720 and the screen 200 is greater than the second threshold distance, the processor 110 may perform operation S1640.
In operation S1660, the processor 110 may control the first area 1731 including the target object to be displayed on an entire area 1733 of the screen 200. 1703 of
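A hedged sketch of the two-stage decision of operations S1640 through S1660 follows: within the first threshold distance the first area is enlarged on a portion of the screen, and within the shorter second threshold distance it fills the entire screen. The threshold values are assumptions.

```python
def enlargement_mode(distance_m: float,
                     first_threshold_m: float = 2.0,
                     second_threshold_m: float = 1.0) -> str:
    """Return how the first area should be displayed for the current user distance."""
    if distance_m <= second_threshold_m:
        return "full_screen"  # S1660: the first area fills the entire screen
    if distance_m <= first_threshold_m:
        return "partial"      # S1640: the first area is enlarged on a portion of the screen
    return "none"             # no enlargement yet

print(enlargement_mode(2.5))  # none
print(enlargement_mode(1.5))  # partial
print(enlargement_mode(0.8))  # full_screen
```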
Referring to
In an embodiment, the processor 110 may execute the at least one instruction included in the gesture determining module 137 to determine a gesture of a user obtained through a camera of the image sensor 121. The processor 110 may select a target object 1840 included in a first area 1830 through the determined gesture of the user.
For example, the image sensor 121 may receive an image of the user through the camera, and the image of the user captured through the image sensor 121 may be processed through the processor 110.
For example, the image sensor 121 may receive an image including a gesture of a user (e.g., 1810 or 1820) leaning forward and watching the first area 1830 closely. The processor 110 may execute one or more instructions included in the gesture determining module 137 to select the first area 1830 and the target object 1840 through the gesture of the user, and may control the target object 1840 included in the first area 1830 to be enlarged.
In the disclosure, the gesture of the user may include not only a motion of leaning the upper body forward but also all hand gestures, foot gestures, body gestures, etc. of the user who wants to enlarge and watch an image.
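Assuming body keypoints (e.g., a shoulder and a hip position) are available from the image captured by the image sensor 121, a lean-forward gesture could be detected as sketched below; the keypoint format and the angle threshold are assumptions for illustration only.

```python
import math

def is_leaning_forward(shoulder_xyz, hip_xyz, min_angle_deg: float = 20.0) -> bool:
    """Treat a forward tilt of the torso beyond min_angle_deg as an enlarge gesture.

    shoulder_xyz, hip_xyz: (x, y, z) keypoints in metres, with z pointing toward the screen.
    """
    dz = shoulder_xyz[2] - hip_xyz[2]        # how far the shoulders lead the hips
    dy = shoulder_xyz[1] - hip_xyz[1]        # shoulders should sit above the hips
    if dy <= 0:
        return False
    tilt = math.degrees(math.atan2(dz, dy))  # 0 deg = upright, positive = leaning in
    return tilt >= min_angle_deg

# Shoulders 0.15 m ahead of the hips and 0.45 m above them -> about 18 degrees.
print(is_leaning_forward((0.0, 1.35, 0.15), (0.0, 0.90, 0.0)))  # False
print(is_leaning_forward((0.0, 1.30, 0.30), (0.0, 0.90, 0.0)))  # True (~37 degrees)
```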
Referring to
When the electronic device 100-1 (or the processor 110) determines that a distance D1 between a user 1910 and the screen output through the display panel 142 is within a first threshold distance, the electronic device 100-1 according to an embodiment may control a first area 1930 corresponding to a direction in which the user 1910 is located to be enlarged. Alternatively, the electronic device 100-1 may control a target object included in the first area 1930 corresponding to the direction in which the user 1910 is located to be enlarged. For example, the electronic device 100-1 may control a letter object included in the first area 1930 to be enlarged.
Referring to
An avatar that may be manipulated by a user appears in the metaverse content. The user may manipulate the avatar to interact with other avatars or to perform actions suitable for various situations. That is, the user may be represented by an avatar displayed on the screen 200, rather than appearing as the actual user outside the electronic device 100. In this case, an object to be enlarged may be enlarged based on a gaze of the avatar and a location of the avatar.
2001 of
2002 of
Referring to
The metaverse providing server 2100 may include a communication module 2120 that may communicate with the electronic device 100, a processor 2130 that may process data received from the electronic device 100, and a memory 2110 (or DB) that may store data or a program for processing data.
The memory 2110 according to an embodiment may store information about one or more objects included in a 3D graphic image. The memory 2110 may include an attribute value of at least one of a letter object, a number object, a character object, and an avatar object. The memory 2110 may include an attribute value of a user-customized object.
The electronic device 100 may access the metaverse providing server 2100 and may display metaverse content. The electronic device 100 may communicate with the metaverse providing server 2100 through the communication module 170. For example, the processor 110 may receive information about objects stored in the memory 2110 of the server 2100 through the communication module 170. The processor 110 may execute one or more instructions included in the enlargement target selecting module 133 to select a target object from among at least one object included in a first area, based on the received information.
In the disclosure, the electronic device 100 may select a target object by using information about objects stored on-device in the memory 130 of the electronic device 100, as shown in
A machine-readable storage medium may be provided as a non-transitory storage medium. Here, ‘non-transitory’ means that the storage medium does not include a signal (e.g., an electromagnetic wave) and is tangible, but does not distinguish whether data is stored semi-permanently or temporarily in the storage medium. For example, the ‘non-transitory storage medium’ may include a buffer in which data is temporarily stored.
According to an embodiment, methods according to various embodiments of the disclosure may be provided in a computer program product. The computer program product may be a product purchasable between a seller and a purchaser. The computer program product may be distributed in the form of a machine-readable storage medium (e.g., a compact disc read-only memory (CD-ROM)), or distributed (e.g., downloaded or uploaded) online via an application store or between two user devices (e.g., smartphones) directly. When distributed online, at least part of the computer program product (e.g., a downloadable application) may be temporarily generated or at least temporarily stored in a machine-readable storage medium, such as a memory of a server of a manufacturer, a server of an application store, or a relay server.
| Number | Date | Country | Kind |
|---|---|---|---|
| 10-2022-0116682 | Sep 2022 | KR | national |
| 10-2022-0159492 | Nov 2022 | KR | national |
This application is a by-pass continuation application of International Application No. PCT/KR2023/010667, filed on Jul. 24, 2023, which is based on and claims priority to Korean Patent Application Nos. 10-2022-0116682, filed on Sep. 15, 2022, and 10-2022-0159492, filed on Nov. 24, 2022, in the Korean Intellectual Property Office, the disclosures of which are incorporated by reference herein in their entireties.
| | Number | Date | Country |
|---|---|---|---|
| Parent | PCT/KR2023/010667 | Jul 2023 | WO |
| Child | 19080454 | | US |