The following description relates to a wearable device, a method, and a non-transitory computer readable storage medium for displaying multimedia content.
With a view to providing an enhanced user experience, electronic devices capable of providing an augmented reality (AR) service, in which computer-generated information is displayed in association with an object in the real world, are being actively developed. Such an electronic device may be a wearable device that may be worn by a user. For example, the electronic device may be AR glasses.
According to an aspect of the present disclosure, a wearable device is described. The wearable device may comprise at least one camera; a display; a memory configured to store instructions; and a processor. The processor may be, when the instructions are executed, configured to receive an input for displaying multimedia content in a display area of the display. The processor may be, when the instructions are executed, configured to, based on the reception, identify whether brightness of an environment around the wearable device is greater than or equal to reference brightness. The processor may be, when the instructions are executed, configured to, based on identifying that the brightness is greater than or equal to the reference brightness, identify whether the multimedia content includes at least one area having a specified color. The processor may be, when the instructions are executed, configured to, based on identifying that the multimedia content includes the at least one area, obtain a first image for a portion of the environment corresponding to a position in which the multimedia content is to be displayed, via the at least one camera. The processor may be, when the instructions are executed, configured to obtain a second image in which color of the first image is converted. The processor may be, when the instructions are executed, configured to display, via the display, the multimedia content, as superimposed on the second image displayed in the position.
According to another aspect of the present disclosure, a method for operating a wearable device comprising at least one camera and a display is described. The method may comprise receiving an input for displaying multimedia content in a display area of the display. The method may comprise, based on the reception, identifying whether brightness of an environment around the wearable device is greater than or equal to reference brightness. The method may comprise, based on identifying that the brightness is greater than or equal to the reference brightness, identifying whether the multimedia content includes at least one area having a specified color. The method may comprise, based on identifying that the multimedia content includes the at least one area, obtaining a first image for a portion of the environment corresponding to a position in which the multimedia content is to be displayed, via the at least one camera. The method may comprise obtaining a second image in which color of the first image is converted. The method may comprise displaying, via the display, the multimedia content, as superimposed on the second image displayed in the position.
According to another aspect of the present disclosure, a non-transitory computer readable storage medium storing therein at least one program is described. The at least one program may comprise instructions to cause, when executed by at least one processor of a wearable device including at least one camera and a display, the wearable device to receive an input for displaying multimedia content in a display area of the display. The at least one program may comprise instructions to cause, when executed by the at least one processor of the wearable device, the wearable device to, based on the reception, identify whether brightness of an environment around the wearable device is greater than or equal to reference brightness. The at least one program may comprise instructions to cause, when executed by the at least one processor of the wearable device, the wearable device to, based on identifying that the brightness is greater than or equal to the reference brightness, identify whether the multimedia content includes at least one area having a specified color. The at least one program may comprise instructions to cause, when executed by the at least one processor of the wearable device, the wearable device to, based on identifying that the multimedia content includes the at least one area, obtain a first image for a portion of the environment corresponding to a position in which the multimedia content is to be displayed, via the at least one camera. The at least one program may comprise instructions to cause, when executed by the at least one processor of the wearable device, the wearable device to obtain a second image in which color of the first image is converted. The at least one program may comprise instructions to cause, when executed by the at least one processor of the wearable device, the wearable device to display, via the display, the multimedia content, as superimposed on the second image displayed in the position.
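For purposes of illustration only, the flow recited in the above aspects may be sketched as follows in Python. The function names, the choice of black as the specified color, simple per-channel inversion as the color conversion, and the numeric threshold are assumptions for the sketch, not limitations of the disclosure.

```python
REFERENCE_BRIGHTNESS = 128      # assumed 8-bit brightness threshold
SPECIFIED_COLOR = (0, 0, 0)     # assumed: black, expressed without light emission

def invert(pixel):
    # Per-channel inversion as one possible color conversion.
    r, g, b = pixel
    return (255 - r, 255 - g, 255 - b)

def environment_brightness(capture_environment):
    # Mean luma (ITU-R BT.601 weights) over a captured RGB frame.
    frame = capture_environment()
    return sum(0.299 * r + 0.587 * g + 0.114 * b for r, g, b in frame) / len(frame)

def handle_display_input(content_pixels, capture_environment, display):
    """content_pixels: list of (r, g, b) tuples of the multimedia content."""
    if environment_brightness(capture_environment) >= REFERENCE_BRIGHTNESS:
        if any(p == SPECIFIED_COLOR for p in content_pixels):
            first_image = capture_environment()               # camera capture
            second_image = [invert(p) for p in first_image]   # color conversion
            display(second_image)                             # second image first
    display(content_pixels)                                   # content on top
```

When the environment is dark, or the content includes no area having the specified color, the content is displayed directly without a second image, consistent with the branches recited above.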
Referring to
The wearable device 110 may be used to provide an augmented reality (AR) service. In order to provide the augmented reality service, the wearable device 110 may include at least one transparent display. Since the at least one transparent display is configured to transmit external light directed to a first surface of the at least one transparent display through a second surface of the at least one transparent display, the at least one transparent display may display a virtual object together with an external object (for example, a physical object) within the real-world. Throughout the present disclosure, the virtual object may be referred to as a visual object in terms of being viewable by a user. In an embodiment, in order to provide the augmented reality service, the wearable device 110 may include a camera used to recognize the external object, another camera used to track the eyes of the user wearing the wearable device 110, or a combination thereof. In an embodiment, in order to provide the augmented reality service, the wearable device 110 may include a communication circuit. The communication circuit may be used to obtain information on the external object from an external electronic device (e.g., a server or a smartphone), or may be used to obtain information for displaying the virtual object from an external electronic device.
In an embodiment, the wearable device 110 within the environment 100 may receive a user input to control a screen (e.g., multimedia content) displayed on the display of the wearable device 110. Since the screen is displayed along with an external object viewed within a display area 115 of the display, the user input may be defined as another input distinguished from a touch input to the display. For example, the user input may be a gesture input caused by a part of the body of the user wearing the wearable device 110, or a gaze input caused by the eyes of the user wearing the wearable device 110. However, the present disclosure is not limited thereto.
Referring to
The processor 210 may control the overall operations of the wearable device 110. For example, the processor 210 may write data to the memory 220 and read out data recorded in the memory 220. For example, the processor 210 may obtain an image via the camera 230. For example, the processor 210 may transmit a signal to or receive a signal from another electronic device via the communication circuit 240. For example, the processor 210 may display information through the display 250. According to various embodiments, the processor 210 may include multiple processors (for example, the wearable device 110 may comprise at least one processor). For example, the processor 210 may include an application processor (AP) to control an upper layer such as an application program, a communication processor (CP) to control communication, a display controller (e.g., display driving integrated circuitry) to control a screen displayed on the display 250, and the like.
The processor 210 may be configured to implement the procedures and/or methods proposed in the present disclosure.
The memory 220 may store instructions, commands, control command codes, control data, or user data for controlling the wearable device 110. For example, the memory 220 may store a software application, an operating system (OS), middleware, and/or a device driver.
The memory 220 may include one or more of volatile memory or non-volatile memory. The volatile memory may include, for example, a dynamic RAM (DRAM), a static RAM (SRAM), a synchronous DRAM (SDRAM), and the like. The non-volatile memory may include a read only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a flash memory, a phase-change RAM (PRAM), a magnetic RAM (MRAM), a resistive RAM (RRAM), a ferroelectric RAM (FeRAM), and the like.
The memory 220 may further include a non-volatile storage medium such as a hard disk drive (HDD), a solid-state drive (SSD), an embedded multimedia card (eMMC), a universal flash storage (UFS), and so on.
The memory 220 may be operably or operatively coupled with the processor 210. The memory 220 may store one or more programs. For example, the one or more programs may include instructions that, when executed by the processor 210 of the wearable device 110, cause the wearable device 110 to execute at least a portion of the operations of the wearable device 110 exemplified through the following descriptions.
For example, the one or more programs may be obtained from an external electronic device (e.g., a server or a smartphone). For example, the one or more programs stored in a non-volatile memory of the external electronic device may be provided from the external electronic device to the wearable device 110, in response to an input to the wearable device 110. For example, the one or more programs stored in the non-volatile memory of the external electronic device may be provided from the external electronic device to the wearable device 110, in response to an input to the external electronic device. However, the present disclosure is not limited thereto.
The camera 230 may be used to obtain an image of the environment viewed within a display area of the display 250 (e.g., the display area 115 shown in
The camera 230 may be further used to track the eyes of the user wearing the wearable device 110. For example, the camera 230 may be disposed to face the user's eyes so that the field of view of the camera 230 covers an area including the eyes of the user wearing the wearable device 110. The camera used to track the eyes of the user may be different from the camera disposed to face the environment; for example, the camera 230 may include a plurality of cameras, with at least one camera used to track the eyes of the user and at least one camera used to obtain an image of the environment.
The camera 230 may be operably or operatively coupled with the processor 210.
The communication circuit 240 may have a variety of communication functions (e.g., cellular communication, Bluetooth, NFC, Wi-Fi, etc.) for communication between the wearable device 110 and at least one external device (e.g., a smartphone, a server, etc.). In other words, the communication circuit 240 may establish communication between the wearable device 110 and the at least one external device.
The communication circuit 240 may be operably or operatively coupled with the processor 210.
The display 250 may include at least one transparent display so that a user wearing the wearable device 110 can view the real-world. For example, the display 250 may be configured to cause external light directed to a first surface to go through a second surface different from the first surface, and configured to display information on the second surface. For example, the second surface may be opposite to the first surface. The display 250 may display a graphical user interface (GUI) so that the user can interact with the wearable device 110. In certain embodiments, the display 250 may be partitioned into different areas or regions. In certain embodiments, the display 250 may comprise a plurality of displays.
The display 250 may be operably or operatively coupled with the processor 210.
In an embodiment, the processor 210 may display multimedia content on the display area of the display 250 along with an external object in the real world, viewed within the display area of the display 250 (e.g., the display area 115 shown in
In an embodiment, the processor 210 may obtain recognition information about an external object in the real world viewed within the display area of the display 250. The processor 210 may transmit information about an image including a visual object corresponding to the external object obtained through the camera 230, to another electronic device (e.g., a smartphone, a server, etc.) through the communication circuit 240, and obtain the recognition information on the external object from the other electronic device through the communication circuit 240. The processor 210 may obtain the recognition information on the external object by recognizing the image including the visual object corresponding to the external object, in a stand-alone state. For example, the processor 210 may obtain the recognition information on the external object by recognizing the image including the visual object corresponding to the external object without use of the other electronic device. However, the present disclosure is not limited thereto.
Referring to
In an embodiment, the display 250 including the first display 250-1 and the second display 250-2 may include, for example, a liquid crystal display (LCD), a digital mirror device (DMD), liquid crystal on silicon (LCoS), an organic light emitting diode (OLED), a micro-LED, or the like. In an embodiment, when the display 250 is configured of LCD, DMD, or LCoS, the wearable device 110 may include a light source (not shown in
In an embodiment, the wearable device 110 may further include a first transparent member 270-1 and a second transparent member 270-2. For example, each of the first transparent member 270-1 and the second transparent member 270-2 may be formed of a glass plate, a plastic plate, or a polymer. For example, each of the first transparent member 270-1 and the second transparent member 270-2 may be transparent or translucent.
In an embodiment, the wearable device 110 may include a waveguide 272. For example, the waveguide 272 may be used to transmit light generated by the display 250 to the eyes of a user wearing the wearable device 110. For example, the waveguide 272 may be formed of glass, plastic, or polymer. For example, the waveguide 272 may include a nano-pattern configured with a polygonal or curved lattice structure in the waveguide 272 or on a surface of the waveguide 272. For example, light incident to one end of the waveguide 272 may be transferred to the user through the nano-pattern. In an embodiment, the waveguide 272 may include at least one of at least one diffractive element (e.g., a diffractive optical element (DOE), a holographic optical element (HOE), etc.) or a reflective element (e.g., a reflective mirror). For example, the at least one diffractive element or the reflective element may be used to guide light to the user's eyes. In an embodiment, the at least one diffractive element may include an input optical member and/or an output optical member. In an embodiment, the input optical member may mean an input grating area used as an input terminal of light, and the output optical member may mean an output grating area used as an output terminal of light. In an embodiment, the reflective element may include a total internal reflection optical element or a total internal reflection waveguide for total internal reflection (TIR).
In an embodiment, the camera 230 in the wearable device 110 may include at least one first camera 230-1, at least one second camera 230-2, and/or at least one third camera 230-3.
In an embodiment, the at least one first camera 230-1 may be used for motion recognition or spatial recognition of three degrees of freedom (3DoF) or six degrees of freedom (6DoF). For example, the at least one first camera 230-1 may be used for head tracking or hand detection. For example, the at least one first camera 230-1 may be configured with a global shutter (GS) camera. For example, the at least one first camera 230-1 may be configured with a stereo camera. For example, the at least one first camera 230-1 may be used for gesture recognition.
In an embodiment, the at least one second camera 230-2 may be used to detect and track a pupil. For example, the at least one second camera 230-2 may be configured with a GS camera. For example, the at least one second camera 230-2 may be used to identify a user input defined by a user's gaze.
In an embodiment, the at least one third camera 230-3 may be referred to as a high resolution (HR) or photo video (PV) camera, and provide an auto-focusing (AF) function or an optical image stabilization (OIS) function. In an embodiment, the at least one third camera 230-3 may be configured with a GS camera or a rolling shutter (RS) camera.
In an embodiment, the wearable device 110 may further include an LED unit 274. For example, the LED unit 274 may be used to assist in tracking the pupil through at least one second camera 230-2. For example, the LED unit 274 may be configured with an infrared LED (IR LED). For example, the LED unit 274 may be used to compensate for brightness when the illuminance around the wearable device 110 is low.
In an embodiment, the wearable device 110 may further include a first PCB 276-1 and a second PCB 276-2. For example, each of the first PCB 276-1 and the second PCB 276-2 may be used to transmit an electrical signal to components of the wearable device 110, such as the camera 230 or the display 250. In an embodiment, the wearable device 110 may further include an interposer disposed between the first PCB 276-1 and the second PCB 276-2. However, the present disclosure is not limited thereto.
Such a wearable device may include a display (e.g., a transparent display) configured to transmit external light directed to a first surface through a second surface to provide an augmented reality service.
Meanwhile, multimedia content displayed via the display may include an area having a certain color (which may also be referred to as a predetermined, predefined, specific, specified, or set color). When displaying an area having the certain color via the display, the wearable device may express the area without any light emission of at least one light emitting element (or at least one light emitting device) for the area (where, for example, the at least one light emitting element may be included in the wearable device (for example, in the display thereof)). Since the area is expressed without any light emission of the at least one light emitting element, an external object may be visible through the area. Since the external object viewed (that is, being visible) through the area may deteriorate the quality of the multimedia content, a method for enhancing the displaying of the area may be required.
Referring to
The multimedia content may be configured with visual information. For example, the multimedia content may include at least one of an image including at least one visual object or at least one text. For example, referring to
In an embodiment, the image 400 may be related to an external object within the environment viewed through the display 250. For example, the processor 210 may obtain an image of the external object through the camera 230, and obtain the image 400 related to the external object based on recognition of the image. For example, the image 400 may include description information or attribute information about the external object. In the meantime, the image recognition may be executed in the wearable device 110, in an electronic device distinct from the wearable device 110, or based on interworking between the wearable device 110 and the electronic device. However, the present disclosure is not limited thereto. In an embodiment, the image 400 may be independent of the environment viewed through the display 250.
The multimedia content may be an emoji graphic object 410. For example, the emoji graphic object 410 may represent a user of the wearable device 110. For example, the emoji graphic object 410 may have a shape set to suit the user's intention, according to manipulation. For example, the emoji graphic object 410 may have a shape set based on recognizing an image of the user of the wearable device 110. For example, the emoji graphic object 410 may be obtained based on feature points extracted from a visual object in the image corresponding to the user (or the user's face). For example, the emoji graphic object 410 may indicate a service provider presented through the wearable device 110. However, the present disclosure is not limited thereto. In an embodiment, the emoji graphic object 410 may be configured with a two-dimensional (2D) visual object or a three-dimensional (3D) visual object. In an embodiment, the emoji graphic object 410 may be displayed as associated with an external object 420 (e.g., an air conditioner) within the environment viewed within the display area of the display 250. For example, the emoji graphic object 410 may make a gesture indicating the external object 420. For example, the emoji graphic object 410 may be positioned adjacent to the external object 420. However, the present disclosure is not limited thereto. In an embodiment, the emoji graphic object 410 may be associated with visual information 425 derived from the emoji graphic object 410. For example, the emoji graphic object 410 and the visual information 425 may be adjacent to each other or connected to each other. However, the present disclosure is not limited thereto.
For example, the visual information 425 may include information about an external object (e.g., the external object 420) that is identified by a user input and is viewed within the display area of the display 250, or include information on various functions executed according to control of the external object or under the control of the emoji graphic object 410. However, the present disclosure is not limited thereto.
Referring again to
For example, the input may be an input 520 for selecting one multimedia content from among a plurality of multimedia contents, displayed on a graphical user interface (GUI) of a software application. For example, in operation 302, the processor 210 may receive the input 520 to select one executable object from among executable objects 535 for playing each of the plurality of multimedia contents displayed within the GUI 530. For example, since selecting one executable object from among the plurality of executable objects 535 means displaying the multimedia content, the input received in operation 302 may include the input 520. In an example, the GUI 530 may be displayed via the display 250 of the wearable device (for example, a head-up display (HUD) arrangement), and an executable object provided within the GUI may be selected through an input (for example, a gesture input) to select a multimedia content. In another example, the input may be an input received from an external electronic device, where a multimedia content has been selected at the external electronic device (for example, via a GUI provided by the external electronic device such as a smartphone) and the selection is communicated to the wearable device and received as the input for displaying the multimedia content.
For example, the input may be an input 540 for selecting an external object viewed within the display area of the display 250. For example, in operation 302, the processor 210 may receive the input 540 for selecting an external object 550. For example, since selecting the external object 550 means that multimedia content related to the external object 550 is to be displayed, the input received in operation 302 may include the input 540.
In certain embodiments, the input may be a gesture input (for example, for selecting an object viewed or displayed within the display area of the display 250), a touch input (for example, for selecting a multimedia content displayed on a GUI of a software application which is output on a touchscreen), or a voice input (for example, where voice recognition is performed to identify an object or multimedia content indicated in a voice input).
Although not shown in
Referring back to
The brightness of the environment around the wearable device 110 may be identified using various methods.
In an embodiment, the processor 210 may obtain, through the camera 230, an image for the environment viewed within the display area of the display 250, based on receiving the input, and identify the brightness of the environment based on data indicating the brightness of the obtained image. For example, when the image is encoded based on a YUV attribute, a YUV format, or a YUV model, the data may be luma data. However, the present disclosure is not limited thereto.
In an embodiment, the processor 210 may obtain sensing data via an illuminance sensor of the wearable device 110 based on receiving the input, and identify the brightness of the environment based on the obtained sensing data. In an embodiment, the processor 210 may identify the brightness of the environment based on the sensing data and the data indicating the brightness of the image.
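As a non-limiting sketch, the brightness identification described above may combine the luma data of the captured image with the sensing data of the illuminance sensor. The helper names, the equal weighting, and the lux normalization below are assumptions made for illustration; the disclosure does not fix a particular combination.

```python
def mean_luma(y_plane):
    """Average the Y (luma) samples of a YUV-encoded frame (8-bit values)."""
    return sum(y_plane) / len(y_plane)

def combined_brightness(y_plane, lux, weight=0.5, max_lux=1000.0):
    """Blend camera-derived luma with an illuminance-sensor reading.

    The 50/50 weighting and the mapping of lux onto an 8-bit scale are
    illustrative assumptions, not limitations of the disclosure.
    """
    sensor_term = min(lux, max_lux) / max_lux * 255.0   # map lux to 0..255
    return weight * mean_luma(y_plane) + (1 - weight) * sensor_term
```

The resulting value may then be compared with the reference brightness in operation 304.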
Meanwhile, in an embodiment, the reference brightness may be set as a value to identify whether an external object is viewed through at least one area within the multimedia content having a specified color, which will be described later referring to operation 306. In an embodiment, the reference brightness may be set as a value to identify whether external light having an intensity greater than or equal to a specified intensity through the at least one area is received by the eyes of the user of the wearable device 110, which will be described later referring to operation 306. However, the present disclosure is not limited thereto.
In an embodiment, the processor 210 may execute operation 306 on condition that the brightness is equal to or greater than the reference brightness, or execute operation 314 on condition that the brightness is less than the reference brightness. In an embodiment, operation 306 or operation 314 may be performed, in a more general sense, based on a brightness of the environment (that is, without specification of a reference brightness).
In other embodiments, the processor 210 may alternatively or additionally (to operation 304) identify, or detect, or determine, whether a brightness of the multimedia content (for example, a brightness of any portion of a current image of a multimedia content) is less than or equal to another reference brightness, based on receiving the input. If so, the outcome is the same as if the outcome of operation 304 is positive; if not, the outcome is the same as if the outcome of operation 304 is negative. The another reference brightness may be set as a value to identify whether one or more areas within the multimedia content have the specified color. For example, the another reference brightness may be set as a value according to an identified brightness of the environment; for instance, the another reference brightness may be set to a higher value in a brighter environment than in a less-bright environment.
In operation 306, the processor 210 may identify (or determine, detect etc.) whether the multimedia content includes the at least one area having the specified color, based on the identifying that the brightness is equal to or greater than the reference brightness.
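The identification of operation 306 may be sketched, for illustration, as a per-pixel scan of the multimedia content; black as the specified color and the per-channel tolerance are assumptions of this sketch.

```python
def includes_specified_color(pixels, specified=(0, 0, 0), tolerance=8):
    """Return True when at least one pixel of the multimedia content matches
    the specified color within a small per-channel tolerance.

    Black as the specified color and the tolerance of 8 are illustrative
    assumptions; the disclosure names black only as an example.
    """
    return any(all(abs(c - s) <= tolerance for c, s in zip(p, specified))
               for p in pixels)
```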
The specified color may be a color expressed by the display 250 without light emission under the control of the processor 210. For example, the specified color may be black. However, the present disclosure is not limited thereto. For example, while the multimedia content is displayed on the display 250, at least one first light emitting element disposed for the at least one area having the specified color among the plurality of light emitting elements may be deactivated, as opposed to at least one second light emitting element disposed for the remaining area of the multimedia content having a color distinct from the specified color among the plurality of light emitting elements. For example, referring to
For example, referring to
Referring back to
In operation 308, the processor 210 may obtain a first image for at least a portion of the environment viewed within the display area of the display 250, via the camera 230, based on the identification that the multimedia content includes the at least one area. In an embodiment, the processor 210 may obtain the first image of a portion of the environment corresponding to a position in the display area where the multimedia content is to be displayed, based on the identification. In an embodiment, the processor 210 may obtain the first image for at least one portion of the environment corresponding to the at least one area of the multimedia content; for example, with the multimedia content displayed, via the display 250, as superimposed over the environment, or a part thereof, the at least one portion of the environment may correspond to a portion(s) of the environment over which the at least one area of the multimedia content is superimposed.
In an embodiment, operation 308 may be executed based on a user input. For example, referring to
For example, the processor 210 may display a message 850 via the display 250 based on the identification that the multimedia content includes the at least one area. For example, the message 850 may be displayed to identify whether to execute operations 308 to 312. For example, the message 850 may include a text to enquire as to whether to generate a second image (e.g., compensation image) in operation 310. For example, the message 850 may include an executable object 855 to indicate executing operations 308 to 312 and an executable object 860 to indicate refraining from executing operations 308 to 312. For example, the processor 210 may execute operation 308 based on receiving a user input 865 for the executable object 855.
The present disclosure is not limited to operation 308 being executed based on a user input. For example, operation 308 may be performed automatically, for instance in response to identifying that the multimedia content includes at least one area having the specified color.
Referring back to
Referring back to
Referring back to
Referring back to
In an embodiment, a color of the background layer 1000 may be changed according to the color of the environment 650 or the color of the at least one area 670 of the environment 650. For example, the processor 210 may identify the color of the environment 650 or a color of the at least one area 670 of the environment 650 at a designated time interval, and change, based on the identified color, the color of the background layer 1000. However, the present disclosure is not limited thereto.
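The periodic update of the background layer color may be sketched, for illustration, as follows; the helper names, the averaging of sampled pixels, and the one-second interval are assumptions of this sketch.

```python
import time

def average_color(pixels):
    """Mean RGB of the sampled environment area."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) // n for i in range(3))

def run_background_updates(sample_area, set_background_color,
                           interval_s=1.0, iterations=3):
    """Re-sample the environment at a designated time interval and update
    the background-layer color accordingly.  The sampling callback, the
    update callback, and the interval are illustrative assumptions.
    """
    for _ in range(iterations):
        set_background_color(average_color(sample_area()))
        time.sleep(interval_s)
```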
Referring back to
For example, displaying the multimedia content as superimposed on the second image may be executed by building up the second image 950 and the multimedia content 600 on different virtual layers, or virtual planes, in a virtual 3D space. In certain examples, the position of the different virtual layers relative to one another is based on a size of the multimedia content 600 as displayed via the display 250 (e.g., a size of the displayed multimedia content 600 in the FOV of a user wearing the wearable device), and/or a position of the displayed multimedia content 600 on the display 250 (e.g., the position of the displayed multimedia content 600 in the FOV of a user wearing the wearable device).
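The layered display described above can be read, for illustration, as a per-pixel composite in which the specified-color areas of the content emit no light, leaving the second image underneath visible; this per-pixel rule is an assumed reading, and black as the specified color is likewise an assumption.

```python
def composite(content, second_image, specified=(0, 0, 0)):
    """Per-pixel composite of the multimedia content over the second image.

    Where the content has the specified color (assumed black), the display
    emits no light for the content layer, so the converted second image
    underneath remains visible; elsewhere the content itself is shown.
    """
    return [bg if fg == specified else fg
            for fg, bg in zip(content, second_image)]
```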
Meanwhile, referring back to
Meanwhile, in operation 314, the processor 210 may display the multimedia content based on identifying that the brightness is less than the reference brightness. For example, the processor 210 may refrain from executing operation 306 and display the multimedia content. For example, the processor 210 may display the multimedia content without displaying the second image.
Meanwhile, in operation 316, the processor 210 may display the multimedia content based on identifying that the multimedia content does not include the at least one area. For example, the processor 210 may refrain from executing operations 308 to 312 and display the multimedia content. For example, the processor 210 may display the multimedia content without displaying the second image.
Although the foregoing description in relation to
As described above, the wearable device 110 can prevent, alleviate, or minimize the viewing of at least a portion of the external environment through the displayed multimedia content, by displaying the multimedia content as superimposed on the second image.
Referring to
In operation 1404, the processor 210 may identify whether a ratio of the size of the at least one area to the size of the multimedia content is equal to or greater than a reference value, based on the identification. For example, when the size of the at least one area is relatively small, the decrease in quality of the multimedia content owing to the at least one area may be relatively small, so the processor 210 may identify whether the ratio is equal to or greater than the reference value. For example, the processor 210 may execute operation 1404 to reduce resource consumption of the wearable device 110 by the execution of operation 308 and operation 310.
The processor 210 may execute operation 1406 on condition that the ratio is equal to or greater than the reference value, or execute operation 1408 on condition that the ratio is less than the reference value.
In operation 1406, the processor 210 may display the multimedia content as superimposed on the second image, based on identifying that the ratio is equal to or greater than the reference value. For example, the processor 210 may obtain the second image by executing operations 308 and 310 based on identifying that the ratio is greater than or equal to the reference value, and display the multimedia content as superimposed on the second image.
In operation 1408, the processor 210 may display the multimedia content based on identifying that the ratio is less than the reference value. For example, the processor 210 may display the multimedia content without displaying the second image.
As described above, through the execution of operation 1404, the wearable device 110 can adaptively execute obtaining the first image and the second image and displaying the multimedia content as superimposed on the second image. With such adaptive execution, the wearable device 110 can optimize the efficiency of resource use in the wearable device 110.
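Operation 1404's threshold test might look like the following sketch. Treating the content as a pixel grid and counting specified-color pixels is one possible way to obtain the size ratio; it is an assumption, not necessarily the disclosed method:

```python
def specified_area_ratio(content_pixels, specified_color):
    """Ratio of the size of the at least one area (pixels having the
    specified color) to the size of the multimedia content."""
    flat = [p for row in content_pixels for p in row]
    return flat.count(specified_color) / len(flat)

def needs_compensation(content_pixels, specified_color, reference_ratio):
    """Operation 1404: generate the first/second images only when the
    specified-color area is a large enough fraction of the content."""
    return specified_area_ratio(content_pixels, specified_color) >= reference_ratio
```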
Referring to
In operation 1504, the processor 210 may identify whether the position of the at least one area is within a center area in the display area of the display 250, based on the identification. For example, the center area may be an attention area of a user wearing the wearable device 110. For example, the center area may be an area within the display area of the display 250 that the user frequently views. For example, when the at least one area is positioned within a corner area in the display area distinct from the center area, the decrease in quality of the multimedia content owing to the at least one area is relatively small, so the processor 210 may identify whether the position of the at least one area is within the center area. For example, the processor 210 may execute operation 1504 to reduce resource consumption of the wearable device 110 by the execution of operations 308 and 310.
The processor 210 may execute operation 1506 on condition that the position of the at least one area is within the center area, or execute operation 1508 on condition that the position of the at least one area is outside the center area.
In operation 1506, the processor 210 may display the multimedia content as superimposed on the second image, based on identifying that the position of the at least one area is within the center area. For example, the processor 210 may obtain the second image by executing the operations 308 and 310, based on identifying that the position of the at least one area is within the center area, and display the multimedia content as superimposed on the second image.
In operation 1508, the processor 210 may display the multimedia content, based on identifying that the position of the at least one area is out of the center area. For example, the processor 210 may display the multimedia content without displaying the second image.
As described above, the wearable device 110 can adaptively execute, via the execution of operation 1504, obtaining the first image and the second image and displaying the multimedia content as superimposed on the second image. The wearable device 110 can optimize the efficiency of resource use of the wearable device 110 through such adaptive execution.
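Operation 1504 can be sketched as a rectangle-overlap test between the at least one area and a centered region of the display area. The `center_fraction` parameter below is an assumed way of defining the center area; the disclosure does not specify its extent:

```python
def area_within_center(area_bbox, display_size, center_fraction=0.5):
    """Operation 1504 sketch: does the specified-color area's bounding
    box (x, y, w, h) overlap a centered region occupying
    center_fraction of each display dimension?"""
    ax, ay, aw, ah = area_bbox
    dw, dh = display_size
    cx0 = dw * (1 - center_fraction) / 2
    cy0 = dh * (1 - center_fraction) / 2
    cx1, cy1 = dw - cx0, dh - cy0
    # Axis-aligned rectangle overlap test.
    return ax < cx1 and ax + aw > cx0 and ay < cy1 and ay + ah > cy0
```

An area in a corner of the display fails the test, so the device would skip operations 308 and 310 and display the content directly, as in operation 1508.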
Referring to
In operation 1604, the processor 210 may change color of the second image based on a color temperature of the first image. For example, since the second image is an image for compensating for a portion of the external environment (e.g., at least one area 670) to be viewed through the at least one area (e.g., at least one area 610), the processor 210 may estimate the color temperature of the portion of the external environment by identifying the color temperature of the first image in response to obtaining the first image. The processor 210 may change the color of the second image based on the estimated color temperature. For example, when the color temperature corresponds to the color temperature of blue light, the processor 210 may change the color of the second image by blending red with the second image. For example, when the color temperature corresponds to the color temperature of red light, the processor 210 may change the color of the second image by blending blue with the second image. However, the present disclosure is not limited thereto.
In operation 1606, the processor 210 may display the multimedia content as superimposed on the second image having the changed color. For example, the second image having the changed color, displayed through the display 250, may form the background layer. For example, the color of the background layer may be changed from the reference color by the second image having the changed color. For example, referring to
As described above, the wearable device 110 can enhance the quality of the multimedia content displayed through the display 250, by adaptively changing the color of the second image according to the color temperature of the environment in which the wearable device 110 is located.
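The color-temperature compensation of operation 1604 might be sketched as follows, estimating warm versus cool light from the mean red and blue channel values of the first image and blending the opposing primary into the second image. The estimation method and the blend strength are assumptions; the disclosure only states that red is blended for a blue color temperature and blue for a red one:

```python
def tint_second_image(first_image, second_image, strength=0.25):
    """Tint the second image against the first image's color temperature.
    Cool (bluish) light -> blend red; warm (reddish) light -> blend blue."""
    pixels = [p for row in first_image for p in row]
    mean_r = sum(p[0] for p in pixels) / len(pixels)
    mean_b = sum(p[2] for p in pixels) / len(pixels)
    if mean_b > mean_r:
        target = (255, 0, 0)   # cool light: blend in red
    elif mean_r > mean_b:
        target = (0, 0, 255)   # warm light: blend in blue
    else:
        return [list(row) for row in second_image]  # neutral: no change
    return [[tuple(round((1 - strength) * c + strength * t)
                   for c, t in zip(p, target)) for p in row]
            for row in second_image]
```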
Referring to
Referring back to
In an embodiment, operations 1802 and 1804 may be executed on condition that the multimedia content is a static image. For example, the processor 210 may execute operations 1802 and 1804 on condition that the multimedia content is a static image, or execute operations 310 and 312 on condition that the multimedia content is not a static image. However, the present disclosure is not limited thereto.
As described above, the wearable device 110 can adaptively obtain the portion of the second image, thereby reducing the resource consumption caused by displaying the second image.
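Extracting only the portion of the second image corresponding to the at least one area (operations 1802 and 1804) might be sketched with a boolean mask. The mask representation is illustrative; any encoding of the area's position would do:

```python
def extract_area_portion(second_image, area_mask):
    """Keep only the second-image pixels inside the specified-color
    area (mask True); everywhere else becomes transparent (None), so
    only that portion is displayed behind the static content."""
    return [[p if m else None for p, m in zip(prow, mrow)]
            for prow, mrow in zip(second_image, area_mask)]
```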
Referring to
In operation 2004, the processor 210 may identify whether there is an external object moving within a portion of the environment including the wearable device 110, the portion being hidden by displaying the multimedia content superimposed on the second image. For example, since the user wearing the wearable device 110 is not able to identify the movement of the external object owing to the displaying of the multimedia content, the user may not recognize that he or she is in an unexpected situation. In order to prevent such an unrecognized situation, the processor 210 may obtain images via the camera 230 while displaying the multimedia content superimposed on the second image, and based on the obtained images, may identify whether such a moving external object exists. The processor 210 may execute operation 2006 on condition that the moving external object exists, or keep executing operation 2004 while displaying the multimedia content superimposed on the second image on condition that it does not exist.
In operation 2006, the processor 210 may cease displaying the second image based on identifying that the external object exists. For example, ceasing displaying the second image may cause formation or provision of the background layer to be terminated. For example, referring to
Referring back to
Although
In operation 2010, the processor 210 may identify whether the movement of the external object has ceased, while ceasing to display the second image and/or displaying the multimedia content having the reduced opacity. For example, the processor 210 may identify whether the movement of the external object has ceased or whether the external object has moved out of the field of view of the camera 230, based on the images obtained through the camera 230. The processor 210 may execute operation 2012 on condition that the movement of the external object has ceased, or continue executing operation 2010 on condition that the movement of the external object is maintained.
In operation 2012, the processor 210 may resume displaying the second image and restore the opacity of the multimedia content, based on identifying that the movement of the external object is ceased. For example, the processor 210 may resume displaying the second image and restore the opacity of the multimedia content in order to enhance the quality of displaying of the multimedia content.
As described above, the wearable device 110 may execute operation 2004 so that the user wearing the wearable device 110 can recognize an external object moving around the wearable device 110 while displaying the multimedia content superimposed on the second image. For example, the wearable device 110 may execute operation 2004 so that the user can view the multimedia content in a safe environment.
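Operations 2004 to 2012 can be sketched as one step of a per-frame loop that uses simple frame differencing for motion detection. The differencing metric, the state dictionary, and the reduced-opacity value are all assumptions made for illustration:

```python
def on_frame(prev_frame, frame, state, motion_threshold=10):
    """One step of operations 2004-2012. Frames are 2D brightness grids
    covering the hidden portion of the environment. On motion: cease the
    second image (operation 2006) and reduce content opacity (operation
    2008); when motion ends: resume the second image and restore opacity
    (operation 2012). `state` holds 'show_second_image' and 'opacity'."""
    changed = sum(
        1 for pr, fr in zip(prev_frame, frame)
        for a, b in zip(pr, fr) if abs(a - b) > motion_threshold
    )
    if changed > 0:                          # external object is moving
        state['show_second_image'] = False   # operation 2006
        state['opacity'] = 0.4               # operation 2008 (assumed value)
    else:                                    # movement has ceased
        state['show_second_image'] = True    # operation 2012
        state['opacity'] = 1.0
    return state
```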
Referring to
In operation 2204, the processor 210 may identify the color of a second visual object to be displayed under the at least one first visual object, based on the color of the at least one first visual object or the color of the multimedia content. For example, the second visual object may be a visual object displayed under the at least one first visual object to enhance the quality of displaying of the at least one first visual object. For example, the second visual object may be a background of the at least one first visual object. However, the present disclosure is not limited thereto.
In an embodiment, on condition that the at least one first visual object has only at least one specified color, the processor 210 may identify the color of the second visual object based on the color of the multimedia content, from among the color of the at least one first visual object and the color of the multimedia content. (This "specified color" may be unrelated to the "specified color" previously described in connection with the at least one area within the multimedia content.) On condition that the at least one first visual object has a different color distinguished from the at least one specified color, the processor 210 may identify the color of the second visual object based on the color of the at least one first visual object, from among the color of the at least one first visual object and the color of the multimedia content. For example, the at least one specified color may be a color in which visibility of the at least one first visual object is ensured, independently (or irrespectively) of which color the color of the second visual object is identified as. For example, the at least one specified color may be black and white. However, the present disclosure is not limited thereto. For example, when the visibility of the at least one first visual object having only the at least one specified color is ensured, the processor 210 may identify the color of the second visual object based on the color of the multimedia content, for harmonizing with the multimedia content. For example, when the visibility of the at least one first visual object having the different color is not ensured, the processor 210 may identify the color of the second visual object as a complementary color to the color of the at least one first visual object, in order to enhance the visibility of the at least one first visual object. For example, referring to
Referring again to
As described above, while displaying the multimedia content as superimposed on the second image, the wearable device 110 may, based on detecting an event for displaying the at least one first visual object related to the multimedia content, identify the color of the second visual object to be displayed under the at least one first visual object based on the color of the at least one first visual object, or based on the color of the multimedia content. The wearable device 110 can thereby enhance the visibility of the at least one first visual object, or display the second visual object in harmony with the multimedia content.
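The color selection of operations 2202 to 2206 might be sketched as below, defaulting the specified colors to black and white as in the example given earlier, and computing the complementary color per channel. All names here are illustrative:

```python
def second_visual_object_color(first_object_colors, content_dominant_color,
                               specified_colors=((0, 0, 0), (255, 255, 255))):
    """If the first visual object uses only specified colors (whose
    visibility is ensured regardless of background), harmonize with the
    content color; otherwise use the complement of the first visual
    object's non-specified color for visibility."""
    if all(c in specified_colors for c in first_object_colors):
        return content_dominant_color
    other = next(c for c in first_object_colors if c not in specified_colors)
    return tuple(255 - ch for ch in other)   # per-channel complement
```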
An electronic device, a method, and a non-transitory computer-readable storage medium according to an embodiment can enhance the quality of the multimedia content by displaying multimedia content superimposed on a second image in which a first image obtained through the camera is converted in color.
As described above, a wearable device may comprise at least one camera, a display, a memory configured to store instructions, and a processor. The processor may be configured to execute the instructions to obtain a user request for displaying multimedia content in a display area of the display. The processor may be configured to execute the instructions to, based on the user request, identify whether brightness of an environment around the wearable device is greater than or equal to reference brightness. The processor may be configured to execute the instructions to, based on the brightness greater than or equal to the reference brightness, identify whether the multimedia content includes at least one area having specified color. The processor may be configured to execute the instructions to, based on the multimedia content including the at least one area, generate a first image for a portion of the environment corresponding to a position in which the multimedia content is to be displayed, via the at least one camera. The processor may be configured to execute the instructions to generate a second image in which color of the first image is converted. The processor may be configured to execute the instructions to display, via the display, the multimedia content, as superimposed on the second image displayed in the position.
According to an embodiment, the colors of the first image are inverted in the second image.
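The embodiment in which the second image inverts the colors of the first image can be sketched as a per-channel RGB inversion. The pixel-grid representation is illustrative:

```python
def invert_image(first_image):
    """Produce the second (compensation) image by inverting each RGB
    channel of the first image."""
    return [[tuple(255 - ch for ch in pixel) for pixel in row]
            for row in first_image]
```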
According to an embodiment, the processor may be configured to execute the instructions to, based on the brightness less than the reference brightness, refrain from identifying whether the multimedia content includes the at least one area.
According to an embodiment, the processor may be configured to execute the instructions to, based on the multimedia content not including the at least one area, refrain from generating the first image and the second image.
According to an embodiment, the processor may be configured to execute the instructions to, in response to the user request, identify the brightness of the environment, based on data indicating brightness of an image obtained via the at least one camera. According to an embodiment, the processor may be configured to execute the instructions to identify whether the brightness of the environment, identified based on the data, is greater than or equal to the reference brightness.
According to an embodiment, the wearable device may further comprise an illuminance sensor. According to an embodiment, the processor may be configured to execute the instructions to, in response to the user request, identify the brightness of the environment, further (or alternatively) based on data obtained via the illuminance sensor. According to an embodiment, the processor may be configured to execute the instructions to identify whether the brightness of the environment identified further (or alternatively) based on the data obtained via the illuminance sensor is greater than or equal to the reference brightness.
According to an embodiment, the processor may be configured to execute the instructions to identify a first virtual plane defined on a virtual three-dimensional (3D) space, based on the first image. According to an embodiment, the processor may be configured to execute the instructions to render the second image on the first virtual plane. According to an embodiment, the processor may be configured to execute the instructions to render the multimedia content on a second virtual plane defined on the virtual three-dimensional space and distinguished from the first virtual plane. According to an embodiment, the processor may be configured to execute the instructions to display the multimedia content as superimposed on the second image, by projecting the rendered second image and the rendered multimedia content onto a third virtual plane defined on the virtual 3D space, the third virtual plane corresponding to the display area of the display.
According to an embodiment, the processor may be further configured to execute the instructions to, based on the brightness less than the reference brightness and/or the multimedia content not including the at least one area, display the multimedia content by emitting, from among first light emitting elements for the at least one area and second light emitting elements for at least another area of the multimedia content having color distinct from the specified color, light from the second light emitting elements.
According to an embodiment, the processor may be further configured to execute the instructions to adjust opacity of the multimedia content. According to an embodiment, the processor may be further configured to execute the instructions to display, via the display, the multimedia content with the adjusted opacity, as superimposed on the second image displayed in the position.
According to an embodiment, the processor may be further configured to execute the instructions to change color of the second image, based on color temperature of the first image. According to an embodiment, the processor may be further configured to execute the instructions to display the multimedia content, as superimposed on the second image with the changed color.
According to an embodiment, the processor may be further configured to execute the instructions to, after the second image is generated, extract a portion of the second image corresponding to the at least one area. According to an embodiment, the processor may be further configured to execute the instructions to display the multimedia content, as superimposed on the extracted portion of the second image displayed in at least one position in the display area corresponding to the at least one area. According to an embodiment, the processor may be further configured to execute the instructions to display the at least one area of the multimedia content, as superimposed on the extracted portion of the second image displayed in at least one position in the display area corresponding to the at least one area.
According to an embodiment, the processor may be configured to execute the instructions to, based on the user request, identify whether the multimedia content is a static image. According to an embodiment, the processor may be further configured to execute the instructions to, based on the multimedia content that is a static image, display the multimedia content, as superimposed on the extracted portion of the second image. According to an embodiment, the processor may be further configured to execute the instructions to, based on the multimedia content that is not a static image, display the multimedia content, as superimposed on the second image.
According to an embodiment, the processor may be further configured to execute the instructions to, based on at least one image obtained via the at least one camera while the multimedia content superimposed on the second image is displayed, identify whether there exists an external object moving in a portion of the environment hidden by displaying the multimedia content superimposed on the second image. According to an embodiment, the processor may be further configured to execute the instructions to, based on the identification of the external object, cease displaying the second image. According to an embodiment, displaying of the multimedia content may be maintained while displaying of the second image is ceased.
According to an embodiment, the processor may be further configured to execute the instructions to, based on the identification of the external object, decrease opacity of the multimedia content displayed via the display. For example, the external object moving within the portion of the environment may be viewed through the display area, according to the decrease of the opacity of the multimedia content.
According to an embodiment, the processor may be further configured to execute the instructions to, based on at least one image obtained via the at least one camera while ceasing to display the second image and displaying the multimedia content, identify whether the movement of the external object is terminated. According to an embodiment, the processor may be further configured to execute the instructions to, based on termination of the movement of the external object, display the multimedia content superimposed on the second image by resuming displaying the second image in the position.
According to an embodiment, the user request may comprise an input for executing a software application used to play the multimedia content.
According to an embodiment, the processor may be further configured to execute the instructions to, while displaying the multimedia content superimposed on the second image, identify color of at least one first visual object to be displayed in association with the multimedia content. According to an embodiment, the processor may be further configured to execute the instructions to identify, based on color of the at least one first visual object or color of the multimedia content, color of a second visual object to be displayed under the at least one first visual object. According to an embodiment, the processor may be further configured to execute the instructions to display the at least one first visual object associated with the multimedia content, as superimposed on the second visual object with the identified color.
According to an embodiment, the processor may be configured to execute the instructions to, on condition that the at least one first visual object has only at least one specified color, identify the color of the second visual object, based on the color of the multimedia content from among the color of the at least one first visual object and the color of the multimedia content. According to an embodiment, the processor may be configured to execute the instructions to, on condition that the at least one first visual object has another color distinct from the at least one specified color, identify the color of the second visual object, based on the color of the at least one first visual object from among the color of the at least one first visual object and the color of the multimedia content.
According to an embodiment, the processor may be configured to execute the instructions to, based on the multimedia content including the at least one area, identify a ratio of size of the at least one area to size of the multimedia content. According to an embodiment, the processor may be configured to execute the instructions to, based on the ratio greater than or equal to a reference ratio, display the multimedia content superimposed on the second image. According to an embodiment, the processor may be configured to execute the instructions to, based on the ratio less than the reference ratio, refrain from generating the first image and the second image and display the multimedia content without displaying of the second image.
According to an embodiment, intensity of light passing through the at least one area may be greater than or equal to reference intensity, while displaying the multimedia content that is not superimposed on the second image, and may be less than the reference intensity, while displaying the multimedia content superimposed on the second image.
According to an embodiment, the size of the second image may be greater than or equal to the size of the multimedia content.
The electronic device according to various embodiments disclosed herein may be one of various types of electronic devices. The electronic devices may include, for example, a portable communication device (e.g., a smartphone), a computer device, a portable multimedia device, a portable medical device, a camera, a wearable device, or a home appliance. According to an embodiment of the disclosure, the electronic devices are not limited to those described above.
It should be appreciated that various embodiments of the present disclosure and the terms used therein are not intended to limit the technological features set forth herein to particular embodiments and include various changes, equivalents, or replacements for a corresponding embodiment. With regard to the description of the drawings, similar reference numerals may be used to refer to similar or related elements. It is to be understood that a singular form of a noun corresponding to an item may include one or more of the things, unless the relevant context clearly indicates otherwise. As used herein, each of such phrases as “A or B,” “at least one of A and B,” “at least one of A or B,” “A, B, or C,” “at least one of A, B, and C,” and “at least one of A, B, or C,” may include any one of, or all possible combinations of the items enumerated together in a corresponding one of the phrases. As used herein, such terms as “1st” and “2nd,” or “first” and “second” may be used to simply distinguish a corresponding component from another, and does not limit the components in other aspect (e.g., importance or order). It is to be understood that if an element (e.g., a first element) is referred to, with or without the term “operatively” or “communicatively”, as “coupled with,” “coupled to,” “connected with,” or “connected to” another element (e.g., a second element), it means that the element may be coupled with the other element directly (e.g., wiredly), wirelessly, or via a third element.
As used in connection with various embodiments of the disclosure, the term “module” may include a unit implemented in hardware, software, or firmware, and may be interchangeably used with other terms, for example, “logic,” “logic block,” “part,” or “circuitry”. A module may be a single integral component, or a minimum unit or part thereof, adapted to perform one or more functions. For example, according to an embodiment of the disclosure, the module may be implemented in a form of an application-specific integrated circuit (ASIC).
Various embodiments as set forth herein may be implemented as software including one or more instructions that are stored in a storage medium that is readable by a machine. For example, a processor of the machine may invoke at least one of the one or more instructions stored in the storage medium, and execute it, with or without using one or more other components under the control of the processor. This allows the machine to be operated to perform at least one function according to the at least one instruction invoked. The one or more instructions may include a code generated by a compiler or a code executable by an interpreter. The machine-readable storage medium may be provided in the form of a non-transitory storage medium. Herein, the term "non-transitory" simply means that the storage medium is a tangible device, and does not include a signal (e.g., an electromagnetic wave), but this term does not differentiate between where data is semi-permanently stored in the storage medium and where the data is temporarily stored in the storage medium.
According to an embodiment of the disclosure, a method according to various embodiments of the disclosure may be included and provided in a computer program product. The computer program product may be traded as a product between a seller and a buyer. The computer program product may be distributed in the form of a machine-readable storage medium (e.g., a compact disc read only memory (CD-ROM)), or be distributed (e.g., downloaded or uploaded) online via an application store (e.g., PlayStore™), or between two user devices (e.g., smart phones) directly. If distributed online, at least part of the computer program product may be temporarily generated or at least temporarily stored in the machine-readable storage medium, such as memory of the manufacturer's server, a server of the application store, or a relay server.
According to various embodiments of the disclosure, each component (e.g., a module or a program) of the above-described components may include a single entity or multiple entities, and some of the multiple entities may be separately disposed in different components. According to various embodiments of the disclosure, one or more of the above-described components may be omitted, or one or more other components may be added. Alternatively or additionally, a plurality of components (e.g., modules or programs) may be integrated into a single component. In such a case, according to various embodiments of the disclosure, the integrated component may still perform one or more functions of each of the plurality of components in the same or similar manner as they are performed by a corresponding one of the plurality of components before the integration. According to various embodiments of the disclosure, operations performed by the module, the program, or another component may be carried out sequentially, in parallel, repeatedly, or heuristically, or one or more of the operations may be executed in a different order or omitted, or one or more other operations may be added.
| Number | Date | Country | Kind |
|---|---|---|---|
| 10-2021-0151835 | Nov 2021 | KR | national |
| 10-2021-0170075 | Dec 2021 | KR | national |
This application is a continuation application, claiming priority under § 365(c), of an International application No. PCT/KR2022/014167, filed on Sep. 22, 2022, which is based on and claims the benefit of a Korean patent application number 10-2021-0151835, filed on Nov. 6, 2021, in the Korean Intellectual Property Office, and of a Korean patent application number 10-2021-0170075, filed on Dec. 1, 2021, in the Korean Intellectual Property Office, the disclosure of each of which is incorporated by reference herein in its entirety.
| Number | Date | Country | |
|---|---|---|---|
| Parent | PCT/KR2022/014167 | Sep 2022 | WO |
| Child | 18640606 | US |