The disclosure relates to a wearable device for providing a virtual object for guiding shooting and a method thereof.
In order to provide an enhanced user experience, electronic devices providing an augmented reality (AR) service, which displays computer-generated information in association with an external object in the real world, are being developed. The electronic device may be a wearable device that may be worn by a user. For example, the electronic device may be AR glasses and/or a head-mounted device (HMD).
The above-described information may be provided as related art for the purpose of helping to understand the present disclosure. No claim or determination is made as to whether any of the above-described information is applicable as prior art with regard to the present disclosure.
According to an example embodiment, a wearable device may comprise: a housing, a first display disposed, based on the wearable device being worn by a user, toward an eye of the user, a second display directed in a second direction opposite to a first direction in which the first display is directed, one or more cameras, memory storing instructions, comprising one or more storage media, and at least one processor comprising processing circuitry. At least one processor, individually or collectively, may be configured to execute the instructions and configured to cause the wearable device to: obtain images using the one or more cameras. At least one processor, individually or collectively, may be configured to cause the wearable device to control the first display to display a screen representing an environment adjacent to the wearable device using at least a portion of the images. At least one processor, individually or collectively, may be configured to cause the wearable device to, while displaying the screen, receive a first input to execute a camera application. At least one processor, individually or collectively, may be configured to cause the wearable device to, in response to the first input, control the first display to visually highlight, with respect to a remaining portion of the screen, a portion of the screen to be captured using the camera application. At least one processor, individually or collectively, may be configured to cause the wearable device to, while displaying the portion of the screen visually highlighted with respect to the remaining portion of the screen, receive a second input to capture the portion of the screen. At least one processor, individually or collectively, may be configured to cause the wearable device to, in response to the second input, capture the portion of the screen. At least one processor, individually or collectively, may be configured to cause the wearable device to control the second display to display an indicator providing a notification that shooting is being performed using the one or more cameras.
According to an example embodiment, a method of operating a wearable device may be provided. The wearable device may comprise a housing, a first display disposed, based on the wearable device being worn by a user, toward an eye of the user, a second display directed in a second direction opposite to a first direction in which the first display is directed, and one or more cameras. The method may comprise obtaining images using the one or more cameras. The method may comprise controlling the first display to display a screen representing an environment adjacent to the wearable device using at least a portion of the images. The method may comprise, while displaying the screen, receiving a first input to execute a camera application. The method may comprise, in response to the first input, controlling the first display to visually highlight, with respect to a remaining portion of the screen, a portion of the screen to be captured using the camera application. The method may comprise, while displaying the portion of the screen that is visually highlighted with respect to the remaining portion of the screen, receiving a second input to capture the portion of the screen. The method may comprise, in response to the second input, capturing the portion of the screen. The method may comprise controlling the second display to display an indicator providing a notification that shooting is being performed using the one or more cameras.
According to an example embodiment, a wearable device may comprise: a housing, a first display disposed on a first surface of the housing directed, based on the wearable device being worn by a user, toward a face of the user, a second display disposed on a second surface of the housing directed, based on the wearable device being worn by the user, toward an external environment of the wearable device, a plurality of cameras configured to obtain a plurality of images with respect to at least a portion of the external environment of the wearable device, memory storing instructions, comprising one or more storage media, and at least one processor comprising processing circuitry. At least one processor, individually or collectively, may be configured to execute the instructions and configured to cause the wearable device to: display, through the first display, a composite image with respect to at least a portion of the external environment generated based on the plurality of images, and a view finder object at least partially superimposed on the composite image. At least one processor, individually or collectively, may be configured to cause the wearable device to, in an image shooting mode, display, through the second display, a first visual notification corresponding to the image shooting mode while the composite image and the view finder object are displayed through the first display. At least one processor, individually or collectively, may be configured to cause the wearable device to, in a video shooting mode, display, through the second display, a second visual notification corresponding to the video shooting mode while the composite image and the view finder object are displayed through the first display. At least one processor, individually or collectively, may be configured to cause the wearable device to, in the image shooting mode or the video shooting mode, store at least a portion of the composite image corresponding to the view finder object in the memory in response to a user input.
According to an example embodiment, a wearable device may comprise: a housing, a display disposed on at least a portion of the housing and arranged in front of an eye of a user based on the user wearing the wearable device, a plurality of cameras configured to obtain images with respect to at least a portion of an external environment of the wearable device, memory storing instructions, and at least one processor comprising processing circuitry. At least one processor, individually and/or collectively, may be configured to execute the instructions and configured to cause the wearable device to: in response to a first input, display a view finder object on a composite image of the images, wherein the composite image may be displayed to represent a portion of the external environment beyond the display. At least one processor, individually or collectively, may be configured to cause the wearable device to, in response to a second input for moving or resizing the view finder object, change at least one of a position or a size of the view finder object, while displaying the view finder object on the composite image. At least one processor, individually or collectively, may be configured to cause the wearable device to, in response to a third input for shooting, store, in the memory, a portion of the composite image corresponding to the view finder object, while displaying the view finder object on the composite image.
According to an example embodiment, a method of operating a wearable device may be provided. The wearable device may comprise: a housing, a display disposed on at least a portion of the housing and arranged in front of an eye of a user wearing the wearable device, a plurality of cameras obtaining images with respect to at least a portion of an external environment of the wearable device, and memory. The method may comprise: in response to a first input, displaying a view finder object on a composite image of the images, wherein the composite image may be displayed to represent a portion of the external environment beyond the display. The method may comprise, in response to a second input for moving or resizing the view finder object, changing at least one of a position or a size of the view finder object, while displaying the view finder object on the composite image. The method may comprise, in response to a third input for shooting, storing, in the memory, a portion of the composite image corresponding to the view finder object, while displaying the view finder object on the composite image.
According to an example embodiment, a non-transitory computer-readable storage medium including instructions may be provided. The instructions, when executed by at least one processor, individually and/or collectively, of a wearable device comprising a housing, a display disposed on at least a portion of the housing and arranged in front of an eye of a user wearing the wearable device, a plurality of cameras configured to obtain images with respect to at least a portion of an external environment of the wearable device, and memory, may cause the wearable device to display, on the display, a view finder object superimposed on a composite image of the images. The instructions, when executed by the at least one processor, may cause the wearable device to, in response to receiving an input for shooting while displaying the view finder object at a first position of the display, store a first portion of the composite image corresponding to the first position in the memory. The instructions, when executed by the at least one processor, may cause the wearable device to, in response to receiving an input for shooting while displaying the view finder object at a second position of the display, store a second portion of the composite image corresponding to the second position in the memory.
According to an example embodiment, a wearable device may comprise: a housing, a display disposed on at least a portion of the housing and arranged in front of an eye of a user based on the user wearing the wearable device, a plurality of cameras configured to obtain images with respect to at least a portion of an external environment of the wearable device, memory storing instructions, and at least one processor comprising processing circuitry. At least one processor, individually and/or collectively, may be configured to execute the instructions and may be configured to cause the wearable device to: display, on the display, a view finder object superimposed on a composite image of the images. At least one processor, individually or collectively, may be configured to cause the wearable device to, in response to receiving an input for shooting while displaying the view finder object at a first position of the display, store a first portion of the composite image corresponding to the first position in the memory. At least one processor, individually or collectively, may be configured to cause the wearable device to, in response to receiving an input for shooting while displaying the view finder object at a second position of the display, store a second portion of the composite image corresponding to the second position in the memory.
The above and other aspects, features and advantages of certain embodiments of the present disclosure will be more apparent from the following detailed description, taken in conjunction with the accompanying drawings, in which:
Hereinafter, various example embodiments of the disclosure will be described with reference to the accompanying drawings.
The various embodiments of the present disclosure and the terms used herein are not intended to limit the technology described in the present disclosure to particular example embodiments, and should be understood to include various modifications, equivalents, or substitutes of the corresponding embodiment. In relation to the description of the drawings, a reference numeral may be used for a similar component. A singular expression may include a plural expression unless clearly meant otherwise in the context. In the present disclosure, an expression such as “A or B”, “at least one of A and/or B”, “A, B or C”, or “at least one of A, B and/or C”, and the like may include all possible combinations of the items listed together. Expressions such as “1st”, “2nd”, “first” or “second”, and the like may modify the corresponding components regardless of order or importance, are only used to distinguish one component from another component, and do not limit the corresponding components. When a (e.g., first) component is referred to as being “connected (functionally or communicatively)” to or “accessing” another (e.g., second) component, the component may be directly connected to the other component or may be connected through another component (e.g., a third component).
The term “module” used in the present disclosure may include a unit configured with hardware, software, or firmware, or any combination thereof, and may be used interchangeably with terms such as logic, logic block, component, or circuit, and the like, for example. The module may be an integrally configured component or a minimum unit or part thereof that performs one or more functions. For example, a module may be configured with an application-specific integrated circuit (ASIC).
According to an embodiment, the wearable device 101 may execute a function associated with augmented reality (AR) and/or mixed reality (MR). For example, in a state in which the user 110 wears the wearable device 101, the wearable device 101 may include at least one lens disposed adjacent to the user's 110 eye. The wearable device 101 may combine light emitted from a display of the wearable device 101 with ambient light passing through the lens. A displaying area of the display may be formed within the lens through which the ambient light passes. Since the wearable device 101 combines the ambient light and the light emitted from the display, the user 110 may look at an image in which a real object recognized by the ambient light and a virtual object formed by the light emitted from the display are mixed. The above-described augmented reality, mixed reality, and/or virtual reality may be referred to as extended reality (XR).
According to an embodiment, the wearable device 101 may execute a function associated with a video see-through (or visible see-through (VST)) and/or virtual reality (VR). For example, in a state in which the user 110 wears the wearable device 101, the wearable device 101 may include a housing covering the user's 110 eye. The wearable device 101 may include a display disposed on a first surface of the housing facing the eye in that state. The wearable device 101 may include at least one display that forms at least a portion of the housing of the wearable device 101 to be arranged in front of the eye of the user 110 wearing the wearable device 101.
The wearable device 101 may include a camera disposed on a second surface opposite to the first surface. The wearable device 101 may include one or more cameras obtaining images with respect to at least a portion of an external environment. The wearable device 101 may include a plurality of cameras exposed through at least a portion of the housing of the wearable device 101 to obtain images (or videos) of at least a portion of the external environment. Using the camera, the wearable device 101 may obtain an image and/or video representing ambient light. The wearable device 101 may output the image and/or video within the display disposed on the first surface so that the user 110 recognizes the ambient light through the display. A displaying area (or active area, displaying region, or active region) of the display disposed on the first surface may be formed by one or more pixels included in the display. By synthesizing a virtual object with an image and/or video output through the display, the wearable device 101 may enable the user 110 to recognize the virtual object together with a real object recognized by the ambient light.
Referring to
According to an embodiment, the wearable device 101 may display a user interface (UI) for controlling a camera that at least partially captures an external environment. For example, in the screen 130 including an image with respect to at least a portion of the external environment, the wearable device 101 may display a view finder object 150 superimposed on the image. An example operation of the wearable device 101 displaying the view finder object 150 will be described in greater detail below with reference to
For example, without an additional panel, window (or activity in, for example, Android operating system), and/or virtual object, the wearable device 101 may guide a portion of an external environment corresponding to an image file and/or video file, using the view finder object 150 superimposed on a background image provided for VST. The wearable device 101 may obtain the image 160 and/or video with respect to the external environment corresponding to the view finder object 150, using a user's 110 motion (e.g., the user's 110 head, gaze, hand gesture and/or speech) associated with the view finder object 150. The motion of the user 110 may include a hand gesture of the user 110 spaced apart from the wearable device 101, a motion of the user 110 detected by an external electronic device (e.g., a remote controller) connected to the wearable device 101, and/or a motion of the user 110 associated with a dial 109 and/or a button exposed to the outside through a housing of the wearable device 101.
The wearable device 101 may execute a function associated with a camera, using the user's 110 motion associated with the view finder object 150. For example, an operation of the wearable device 101 adjusting a focal length of the camera using the user's 110 gaze toward the inside of the view finder object 150 will be described with reference to
Hereinafter, an example hardware configuration of the wearable device 101 displaying the view finder object 150 will be described in greater detail with reference to
Referring to
According to an embodiment, the processor 210 of the wearable device 101 may include a hardware component for processing data based on one or more instructions. For example, a hardware component for processing data may include an arithmetic and logic unit (ALU), a field programmable gate array (FPGA), a central processing unit (CPU), and/or an application processor (AP). In an embodiment, the wearable device 101 may include one or more processors. The processor 210 may have a structure of a multi-core processor such as a dual core, a quad core, a hexa core, and/or an octa core. The multi-core processor structure of the processor 210 may include a structure (e.g., a big-little structure) based on a plurality of core circuits, which are divided by power consumption, clock, and/or computational amount per unit time. In an embodiment including the processor 210 having the multi-core processor structure, operations and/or functions of the present disclosure may be collectively performed by one or more cores included in the processor 210. The processor 210 may include various processing circuitry and/or multiple processors. For example, as used herein, including the claims, the term “processor” may include various processing circuitry, including at least one processor, wherein one or more of the at least one processor, individually and/or collectively in a distributed manner, may be configured to perform various functions described herein. As used herein, when “a processor”, “at least one processor”, and “one or more processors” are described as being configured to perform numerous functions, these terms cover situations, for example and without limitation, in which one processor performs some of the recited functions and another processor (or processors) performs others of the recited functions, and also situations in which a single processor may perform all of the recited functions. Additionally, the at least one processor may include a combination of processors performing various ones of the recited/disclosed functions, e.g., in a distributed manner. At least one processor may execute program instructions to achieve or perform various functions.
According to an embodiment, the memory 215 of the wearable device 101 may include a hardware component for storing data and/or instructions, which are inputted to the processor 210 and/or output from the processor 210. For example, the memory 215 may include a volatile memory such as a random-access memory (RAM) and/or a non-volatile memory such as a read-only memory (ROM). For example, the volatile memory may include at least one of a dynamic RAM (DRAM), a static RAM (SRAM), a Cache RAM, and a pseudo SRAM (PSRAM). For example, the non-volatile memory may include at least one of a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), a flash memory, a hard disk, a compact disk, and an embedded multi-media card (eMMC). In an embodiment, the memory 215 may be referred to as storage.
In an embodiment, the display 220 of the wearable device 101 may output visualized information to a user (e.g., the user 110 of
In an embodiment, the camera 225 of the wearable device 101 may include optical sensors (e.g., a charge-coupled device (CCD) sensor, a complementary metal oxide semiconductor (CMOS) sensor) that generate an electrical signal indicating color and/or brightness of light. The camera 225 may be referred to as an image sensor, and may be included in the sensor 230 of
According to an embodiment, the wearable device 101 may include, as an example of the camera 225, a plurality of cameras disposed in different directions. Referring to
For example, the outward camera may be disposed toward the front (e.g., a direction faced by both eyes of the user) of the user wearing the wearable device 101. The wearable device 101 may include a plurality of outward cameras. The disclosure is not limited thereto, and the outward camera may be disposed toward an external space. The processor 210 may identify an external object using an image and/or a video obtained from the outward camera. For example, the processor 210 may identify a position, shape, and/or gesture (e.g., hand gesture) of a hand of the user (e.g., the user 110 of
According to an embodiment, the sensor 230 of the wearable device 101 may generate electronic information capable of being processed and/or stored by the processor 210 and/or the memory 215 of the wearable device 101 from non-electronic information associated with the wearable device 101. The information may be referred to as sensor data. The sensor 230 may include a global positioning system (GPS) sensor for detecting a geographic position of the wearable device 101, an image sensor, an audio sensor (e.g., a microphone and/or a microphone array including a plurality of microphones), an illuminance sensor, an inertial measurement unit (IMU) (e.g., an acceleration sensor, a gyro sensor and/or a geomagnetic sensor), and/or a time-of-flight (ToF) sensor (or ToF camera). The wearable device 101 may include a sensor configured to detect a distance between the wearable device 101 and an external object, such as the ToF sensor. The sensor detecting the distance between the wearable device 101 and the external object may be referred to as a depth sensor.
In an embodiment, the depth sensor included in the wearable device 101 may include the ToF sensor and/or a structured light (SL) sensor. The SL sensor may be referred to as an SL camera. The ToF sensor may be referred to as a ToF camera. The SL sensor may emit or output a light pattern (e.g., a plurality of dots) of a specific wavelength (e.g., an infrared wavelength). When an external object reflects the light pattern, the light pattern may be distorted by embossing of a surface of the external object. By detecting reflected light with respect to the light pattern, the SL sensor and/or the processor 210 connected to the SL sensor may recognize the distortion.
The ToF sensor may emit light of a specific wavelength (e.g., an infrared wavelength) in units of nanoseconds. The ToF sensor may measure a time during which light reflected by the external object propagates back to the ToF sensor. Using the measured time, the ToF sensor and/or the processor 210 may calculate or determine the distance between the external object and the wearable device 101. Using the ToF sensor, the processor 210 may output light in different directions. Using the times during which the reflected light for each of the output lights propagates to the ToF sensor, the processor 210 may detect distances between the wearable device 101 and external objects disposed in each of the directions. A two-dimensional distribution of the distances may be referred to as a depth map.
In an embodiment where the wearable device 101 includes the ToF sensor and the SL sensor, the processor 210 may use the SL sensor to detect an external object spaced apart from the wearable device 101 by less than a specified distance (e.g., 10 m), and may use the ToF sensor to detect an external object spaced apart from the wearable device 101 by the specified distance or more. However, the disclosure is not limited thereto.
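The round-trip measurement and the distance-based sensor selection described above can be illustrated with a minimal Kotlin sketch. This is a hedged illustration rather than the device's actual firmware; the names distanceFromRoundTripSeconds, SensorChoice, and chooseDepthSensor are hypothetical, while the 10 m threshold comes from the example above.

```kotlin
// Speed of light in vacuum (m/s); a ToF sensor measures the round trip, so the one-way
// distance is c * t / 2.
const val SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

fun distanceFromRoundTripSeconds(roundTripSeconds: Double): Double =
    SPEED_OF_LIGHT_M_PER_S * roundTripSeconds / 2.0

enum class SensorChoice { STRUCTURED_LIGHT, TIME_OF_FLIGHT }

// Use the SL sensor below the specified distance and the ToF sensor at or above it.
fun chooseDepthSensor(estimatedDistanceMeters: Double, thresholdMeters: Double = 10.0): SensorChoice =
    if (estimatedDistanceMeters < thresholdMeters) SensorChoice.STRUCTURED_LIGHT
    else SensorChoice.TIME_OF_FLIGHT

fun main() {
    val t = 66.7e-9 // ~66.7 ns round trip corresponds to roughly 10 m
    val d = distanceFromRoundTripSeconds(t)
    println("distance ~ %.2f m, sensor = %s".format(d, chooseDepthSensor(d)))
}
```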
In an embodiment, the communication circuit 235 of the wearable device 101 may include a circuit for supporting transmission and/or reception of an electrical signal between the wearable device 101 and an external electronic device. For example, the communication circuit 235 may include at least one of a MODEM, an antenna, and an optic/electronic (O/E) converter. The communication circuit 235 may support the transmission and/or reception of the electrical signal based on various types of protocols, such as Ethernet, a local area network (LAN), a wide area network (WAN), wireless fidelity (Wi-Fi), Bluetooth, Bluetooth low energy (BLE), ZigBee, long term evolution (LTE), 5G new radio (NR), 6G, and/or above-6G. In an embodiment, the communication circuit 235 may be referred to as a communication processor and/or a communication module.
According to an embodiment, in the memory 215 of the wearable device 101, one or more instructions (or commands), which indicate data to be processed and calculations and/or operations to be performed by the processor 210 of the wearable device 101, may be stored. A set of one or more instructions may be referred to as a program, firmware, an operating system, a process, a routine, a sub-routine, and/or a software application (hereinafter, “application”). For example, when a set of a plurality of instructions distributed in the form of an operating system, firmware, a driver, a program, and/or an application is executed, the wearable device 101 and/or the processor 210 may perform at least one of operations of
Referring to
For example, programs (e.g., a position tracker 271, a space recognizer 272, a gesture tracker 273, and/or a gaze tracker 274) designed to target at least one of the hardware abstraction layer 280 and/or the application layer 240 may be included in the framework layer 250. The programs included in the framework layer 250 may provide an application programming interface (API) capable of being executed (or invoked (or called)) based on another program.
For example, a program designed for a user of the wearable device 101 may be included in the application layer 240. As an example of programs classified into the application layer 240, an extended reality (XR) system user interface (UI) 241 and/or an XR application 242 are illustrated, but the disclosure is not limited thereto. For example, programs (e.g., a software application) included in the application layer 240 may call an application programming interface (API) to cause execution of a function supported by programs classified into the framework layer 250.
For example, the wearable device 101 may display, on the display 220, one or more visual objects for performing interaction with the user, based on the execution of the XR system UI 241. A visual object may refer, for example, to an object that is deployable within a screen for transmitting and/or interacting with information, such as text, an image, an icon, a video, a button, a check box, a radio button, a text box, a slider, and/or a table. The visual object may be referred to as a visual guide, a virtual object, a visual element, a UI element, a view object, and/or a view element. The wearable device 101 may provide functions available in a virtual space to a user, based on the execution of the XR system UI 241.
Referring to
For example, based on the execution of the lightweight renderer 243, the wearable device 101 may obtain a resource (e.g., API, system process and/or library) used to define, generate, and/or execute a rendering pipeline capable of being partially changed. The lightweight renderer 243 may be referred to as a lightweight render pipeline in terms of defining the rendering pipeline capable of being partially changed. The lightweight renderer 243 may include a renderer (e.g., a prebuilt renderer) built before execution of a software application. For example, the wearable device 101 may obtain a resource (e.g., API, system process, and/or library) used to define, generate, and/or execute the entire rendering pipeline, based on the execution of the XR plug-in 244. The XR plug-in 244 may be referred to as an open XR native client in terms of defining (or setting) the entire rendering pipeline.
For example, the wearable device 101 may display a screen representing at least a portion of a virtual space on the display 220, based on the execution of the XR application 242. The XR plug-in 244-1 included in the XR application 242 may include instructions that support a function similar to the XR plug-in 244 of the XR system UI 241. Among descriptions of the XR plug-in 244-1, a description overlapping those of the XR plug-in 244 may be omitted. The wearable device 101 may cause execution of a virtual space manager 251 based on the execution of the XR application 242.
According to an embodiment, the wearable device 101 may provide a virtual space service based on the execution of the virtual space manager 251. For example, the virtual space manager 251 may include a platform for supporting a virtual space service. The wearable device 101 may identify, based on the execution of the virtual space manager 251, a virtual space formed based on a user's position indicated by data obtained through the sensor 230, and may display at least a portion of the virtual space on the display 220. The virtual space manager 251 may be referred to as a composition presentation manager (CPM).
For example, the virtual space manager 251 may include a runtime service 252. For example, the runtime service 252 may be referred to as an OpenXR runtime module (or an OpenXR runtime program). The wearable device 101 may execute at least one of a user's pose prediction function, a frame timing function, and/or a space input function, based on the execution of the runtime service 252. As an example, the wearable device 101 may perform rendering for a virtual space service to a user based on the execution of the runtime service 252. For example, based on the execution of the runtime service 252, a function associated with a virtual space, executable by the application layer 240, may be supported.
For example, the virtual space manager 251 may include a pass-through manager 253. While a screen (e.g., the screen 130 of
For example, the virtual space manager 251 may include an input manager 254. The wearable device 101 may identify data (e.g., sensor data) obtained by executing one or more programs included in a perception service layer 270, based on execution of the input manager 254. The wearable device 101 may identify a user input associated with the wearable device 101, using the obtained data. The user input may be associated with a motion (e.g., hand gesture), gaze and/or speech of the user, which are identified by the sensor 230 and/or the camera 225 (e.g., the outward camera). The user input may be identified based on an external electronic device connected (or paired) through the communication circuit 235.
For example, a perception abstract layer 260 may be used for data exchange between the virtual space manager 251 and the perception service layer 270. The perception abstract layer 260 may be referred to as an interface in terms of being used for data exchange between the virtual space manager 251 and the perception service layer 270. For example, the perception abstract layer 260 may be referred to as OpenPX. The perception abstract layer 260 may be used for a perception client and a perception service.
According to an embodiment, the perception service layer 270 may include one or more programs for processing data obtained from the sensor 230 and/or the camera 225. The one or more programs may include at least one of a position tracker 271, a space recognizer 272, a gesture tracker 273, and/or a gaze tracker 274. The type and/or number of one or more programs included in the perception service layer 270 is not limited to those illustrated in
For example, the wearable device 101 may identify a posture of the wearable device 101 using the sensor 230, based on the execution of the position tracker 271. The wearable device 101 may identify a 6 degrees of freedom (6 DoF) pose of the wearable device 101 using data obtained using the camera 225 and/or the IMU (e.g., gyro sensor, acceleration sensor, and/or geomagnetic sensor), based on the execution of the position tracker 271. The position tracker 271 may be referred to as a head tracking (HeT) module (or a head tracker, a head tracking program).
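A minimal Kotlin sketch of the kind of 6 DoF pose the position tracker may produce is shown below, assuming a simple position-plus-quaternion representation; Vec3, Quaternion, Pose6Dof, and deviceToWorld are hypothetical names used only for illustration, not an API of the disclosure.

```kotlin
data class Vec3(val x: Float, val y: Float, val z: Float) {
    operator fun plus(o: Vec3) = Vec3(x + o.x, y + o.y, z + o.z)
    fun scaled(s: Float) = Vec3(x * s, y * s, z * s)
}

fun cross(a: Vec3, b: Vec3) = Vec3(
    a.y * b.z - a.z * b.y,
    a.z * b.x - a.x * b.z,
    a.x * b.y - a.y * b.x,
)

data class Quaternion(val w: Float, val x: Float, val y: Float, val z: Float)

// 6 DoF pose of the wearable device: 3 translational + 3 rotational degrees of freedom.
data class Pose6Dof(val position: Vec3, val orientation: Quaternion)

// Rotate v by a unit quaternion q: v' = v + 2w(u x v) + 2(u x (u x v)), with u = (x, y, z).
fun rotate(q: Quaternion, v: Vec3): Vec3 {
    val u = Vec3(q.x, q.y, q.z)
    val t = cross(u, v).scaled(2f)
    return v + t.scaled(q.w) + cross(u, t)
}

// Transform a point expressed in device coordinates into world coordinates using the pose.
fun deviceToWorld(pose: Pose6Dof, pointInDevice: Vec3): Vec3 =
    rotate(pose.orientation, pointInDevice) + pose.position
```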
For example, the wearable device 101 may obtain information for providing a 3-dimensional virtual space corresponding to an environment (e.g., external space) adjacent to the wearable device 101 (or a user of the wearable device 101), based on the execution of the space recognizer 272. Based on the execution of the space recognizer 272, the wearable device 101 may reconstruct the environment adjacent to the wearable device 101 in 3-dimensions, using data obtained using an outward camera. The wearable device 101 may identify at least one of a plane, a slope, and a step, based on the environment adjacent to the wearable device 101, which is reconstructed in 3-dimensions based on the execution of the space recognizer 272. The space recognizer 272 may be referred to as a scene understanding (SU) module (or a scene recognition program).
For example, the wearable device 101 may identify (or recognize) a pose and/or a gesture of a hand of the user of the wearable device 101, based on the execution of the gesture tracker 273. For example, the wearable device 101 may identify the pose and/or the gesture of the user's hand, using data (or an image) obtained from an outward camera, based on the execution of the gesture tracker 273. The gesture tracker 273 may be referred to as a hand tracking (HaT) module (or a hand tracking program) and/or a gesture tracking module.
For example, the wearable device 101 may identify (or track) movement of an eye of the user of the wearable device 101, based on the execution of the gaze tracker 274. For example, the wearable device 101 may identify the movement of the user's eye using data obtained from a gaze tracking camera, based on the execution of the gaze tracker 274. The gaze tracker 274 may be referred to as an eye tracking (ET) module (or eye tracking program) and/or a gaze tracking module.
In an embodiment, the processor 210 of the wearable device 101 may display an image (e.g., the background image of the screen 130 of
A software application (e.g., a camera application) for obtaining an image and/or a video using a view finder object may be installed in the memory 215 of the wearable device 101. Hereinafter, an example operation of the wearable device 101 associated with a user input for executing the software application will be described in greater detail with reference to
Referring to
In the example screen 301 of
In an embodiment, the wearable device 101 may receive an input for executing a camera application. The input may include an input for selecting the icon 320 representing the camera application in the screen 301. The input for selecting the icon 320 may include a hand gesture performed by a hand 112 of the user 110. For example, the wearable device 101 may obtain an image and/or video of a body part including the hand 112 of the user 110, using a camera (e.g., the camera 225 of
Referring to
Referring to
For example, in response to a pinch gesture detected while displaying the screen 301 including the virtual object 342 extending toward the icon 320, the wearable device 101 may execute a camera application corresponding to the icon 320. An example operation of executing a camera application using a hand gesture such as a pinch gesture is described, but the disclosure is not limited thereto. For example, the wearable device 101 may execute the camera application in response to a user's speech (e.g., “Let's run the camera application”) and/or rotation and/or pressing of a dial (e.g., the dial 109 of
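The pinch-based selection described above can be sketched in Kotlin. This is a hedged illustration rather than the device's actual hand-tracking API: a pinch is assumed to be recognized when the thumb tip and the index tip are within a small threshold, and the targeted icon is assumed to be found by casting a ray from the hand along its pointing direction (the direction visualized by the virtual object 342). Vec3, HandPose, and selectIconByPinch are hypothetical names.

```kotlin
import kotlin.math.sqrt

data class Vec3(val x: Float, val y: Float, val z: Float) {
    operator fun minus(o: Vec3) = Vec3(x - o.x, y - o.y, z - o.z)
    fun length() = sqrt(x * x + y * y + z * z)
}

data class HandPose(val thumbTip: Vec3, val indexTip: Vec3, val rayOrigin: Vec3, val rayDirection: Vec3)

// A pinch is assumed when the thumb tip and index tip are closer than the threshold.
fun isPinching(hand: HandPose, thresholdMeters: Float = 0.02f): Boolean =
    (hand.thumbTip - hand.indexTip).length() < thresholdMeters

// Returns the id of the first icon whose bounding sphere (center, radius) the hand ray hits,
// or null when no pinch is detected or nothing is hit.
fun selectIconByPinch(hand: HandPose, icons: Map<String, Pair<Vec3, Float>>): String? {
    if (!isPinching(hand)) return null
    val len = hand.rayDirection.length()
    if (len == 0f) return null
    val dir = Vec3(hand.rayDirection.x / len, hand.rayDirection.y / len, hand.rayDirection.z / len)
    for ((id, sphere) in icons) {
        val (center, radius) = sphere
        val toCenter = center - hand.rayOrigin
        // Distance along the ray to the point closest to the icon's center.
        val t = toCenter.x * dir.x + toCenter.y * dir.y + toCenter.z * dir.z
        if (t < 0f) continue // the icon is behind the hand
        val closest = Vec3(hand.rayOrigin.x + dir.x * t, hand.rayOrigin.y + dir.y * t, hand.rayOrigin.z + dir.z * t)
        if ((center - closest).length() <= radius) return id
    }
    return null
}
```

In practice, the pinch threshold and the icon bounds would be tuned to the hand-tracking accuracy of the device.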
Referring to
In an embodiment, the wearable device 101 may display a visual object 152 associated with a focal length inside the view finder object 150. The wearable device 101 may display a control handle 330 at a position adjacent to the view finder object 150. Although the control handle 330 adjacent to a right side of the view finder object 150 is illustrated, a position, a size, and/or a shape of the control handle 330 are not limited thereto. Functions executable using the control handle 330 will be described in greater detail below with reference to
In an embodiment, the wearable device 101 may execute a plurality of software applications substantially simultaneously (e.g., multitasking). Referring to
In order to display the view finder object 150 superimposed on the composite image, the wearable device 101 may cease to display at least one of the virtual objects 351, 352, and 353, which was displayed before executing the camera application (or before displaying the view finder object 150), or may hide it. For example, in response to an input for executing a camera application, a virtual object 352, which is disposed at a position where the view finder object 150 provided by the camera application is to be displayed, may be removed from the screen 304, or may be displayed with specified transparency (or opacity) (e.g., transparency less than or equal to 100%). For example, a portion of the virtual object 353 superimposed on the view finder object 150, may be removed from the screen 304, or may be displayed with specified transparency (e.g., transparency less than or equal to 100%). When the view finder object 150 is moved in the screen 304, the wearable device 101 may remove another virtual object superimposed on the view finder object 150, or may adjust (or set) transparency of the other virtual object to specified transparency.
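The occlusion behavior described above, in which virtual objects overlapping the view finder object are hidden or dimmed to a specified transparency, can be sketched as follows. ScreenBounds, VirtualObject, and applyViewFinderOcclusion are hypothetical names, and the dim value is an illustrative assumption rather than a value taken from the disclosure.

```kotlin
data class ScreenBounds(val left: Float, val top: Float, val right: Float, val bottom: Float) {
    fun overlaps(other: ScreenBounds): Boolean =
        left < other.right && other.left < right && top < other.bottom && other.top < bottom
}

data class VirtualObject(
    val id: String,
    val bounds: ScreenBounds,
    var alpha: Float = 1f,
    var visible: Boolean = true,
)

// Hide, or render with a specified transparency, every virtual object whose bounds overlap
// the view finder rectangle; restore objects that no longer overlap (e.g., after the view
// finder is moved).
fun applyViewFinderOcclusion(
    viewFinder: ScreenBounds,
    objects: List<VirtualObject>,
    dimAlpha: Float = 0.3f,
    hideInstead: Boolean = false,
) {
    for (obj in objects) {
        if (viewFinder.overlaps(obj.bounds)) {
            if (hideInstead) obj.visible = false else obj.alpha = dimAlpha
        } else {
            obj.visible = true
            obj.alpha = 1f
        }
    }
}
```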
Referring to
Hereinafter, an example operation of the wearable device 101 displaying the view finder object 150 will be described in greater detail with reference to
Referring to
In an embodiment, using the image on which the view finder object is overlaid, the processor may guide at least a portion of the image of the real environment to be captured by a user input. The input of operation 410 may be identified or detected by a hand gesture (e.g., a pinch gesture) associated with a virtual object (e.g., the icon 320 of
Referring to
Before receiving the input of operation 420, the processor may detect another input for executing a function associated with the view finder object. The other input may include an input for obtaining depth information (e.g., a depth map) using a depth sensor. The other input may include an input for combining or synthesizing one or more virtual objects displayed on the display to an image and/or video with respect to an external environment obtained through the camera. The other input may include an input for obtaining an image and/or video associated with an external object by tracking the external object.
Referring to
Hereinafter, an example operation of a wearable device displaying an image associated with the pass-through of operation 410 and a view finder object superimposed on the image will be described in greater detail with reference to
Referring to
Referring to an example composite image 520 of
The composite image 520 generated by the wearable device 101 may be displayed on a display (e.g., the display 220 of
The disclosure is not limited thereto, and in an embodiment including one outward camera, the wearable device 101 may display a screen 530 having an image of the one outward camera as a background image. In an embodiment including one outward camera, the outward camera may be disposed toward the front of the user 110 wearing the wearable device 101, or may be disposed toward at least a portion of an external space including the front.
In the screen 530 of
As described above, according to an embodiment, the wearable device 101 may customize at least a portion to be stored in the image 540 (or video) within the composite image 520, using the view finder object 150. Hereinafter, an example operation of the wearable device 101 adjusting a focal length of at least one camera included in the wearable device 101 before receiving a shooting input will be described in greater detail with reference to
Referring to
Referring to
Referring to
As described above, the wearable device 101 may change a focal length of at least one camera driven to provide an external space, using a direction of a gaze facing the point p1 within the view finder object 150. Hereinafter, referring to
Referring to
Referring to
Referring to
In response to a shooting input received after adjusting the focal length of operation 730, the processor of the wearable device may store a portion of a composite image corresponding to the view finder object in an image and/or video format. In response to the shooting input, the processor may store an image and/or video having the adjusted focal length of operation 730.
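One plausible way to realize the gaze-based focus adjustment of operations 710 to 730 is sketched below, under the assumption that a depth map (as described elsewhere in the disclosure) is available so the distance at the gaze point inside the view finder can be looked up and used as the focus distance. CameraController, DepthMap, and focusOnGazePoint are hypothetical names, not the device's actual camera driver interface.

```kotlin
// Assumed wrapper around the camera driver; the real device may instead focus on a detected
// external object in the gaze direction.
interface CameraController {
    fun setFocusDistanceMeters(distanceMeters: Float)
}

// Row-major depth map in meters, as produced by a depth sensor.
class DepthMap(private val depth: FloatArray, val width: Int, val height: Int) {
    fun depthAt(x: Int, y: Int): Float =
        depth[y.coerceIn(0, height - 1) * width + x.coerceIn(0, width - 1)]
}

// Look up the depth at the gaze point inside the view finder and focus the camera there.
fun focusOnGazePoint(gazeX: Int, gazeY: Int, depthMap: DepthMap, camera: CameraController) {
    val distance = depthMap.depthAt(gazeX, gazeY)
    if (distance > 0f) camera.setFocusDistanceMeters(distance)
}
```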
Hereinafter, an example operation of a wearable device adjusting a position and/or a size of a view finder object within a display will be described in greater detail with reference to
Referring to
The view finder object 150 may be anchored at a specific point (e.g., a center point of the display) of the screen 801 (or the display). For example, when a direction d1 of the user's 110 head changes, a portion of the composite image superimposed with the view finder object 150 anchored to a specific point of the display may be changed, and a position and/or size of the view finder object 150 may be maintained within the display. Referring to
For example, within the screen 802 displayed while the user 110 looks in the direction d2, the wearable device 101 may display the view finder object 150 centered on the screen 802. Referring to the screens 801 and 802, while the head of the user 110 rotates, the wearable device 101 may maintain the position and/or size of the view finder object 150 on the display. For example, the view finder object 150 may have zero degrees of freedom (DoF) and follow the FoV of the user 110.
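A minimal sketch of this display-anchored (zero-DoF) behavior is shown below, assuming the view finder rectangle is expressed in view angles; ScreenRect and coveredWorldYawRange are hypothetical names. The screen rectangle stays fixed while the slice of the surrounding environment it covers shifts with the head yaw.

```kotlin
// View finder expressed in view angles relative to the center of the display.
data class ScreenRect(val centerXDeg: Float, val widthDeg: Float)

// World yaw range currently covered by the screen-anchored view finder.
fun coveredWorldYawRange(headYawDeg: Float, viewFinder: ScreenRect): ClosedFloatingPointRange<Float> {
    val centerWorldYaw = headYawDeg + viewFinder.centerXDeg
    return (centerWorldYaw - viewFinder.widthDeg / 2f)..(centerWorldYaw + viewFinder.widthDeg / 2f)
}

fun main() {
    val vf = ScreenRect(centerXDeg = 0f, widthDeg = 40f)
    println(coveredWorldYawRange(0f, vf))  // looking in direction d1
    println(coveredWorldYawRange(30f, vf)) // after turning the head toward d2
}
```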
A reference position and/or size of the view finder object 150 on the display may be changed by a user input associated with the view finder object 150. Referring to
Referring to
Referring to
The wearable device 101 may move the view finder object 150 three-dimensionally, along a path of the hand 112 maintaining the pinch gesture. For example, the wearable device 101 may cause the view finder object 150 to move away from or approach the wearable device 101, by adjusting the binocular disparity and/or a depth value, as well as a horizontal direction and/or a vertical direction of the display. When the view finder object 150 is moved away from the wearable device 101 by the hand 112 maintaining the pinch gesture, the size of the view finder object 150 displayed through the display may be reduced. When the view finder object 150 approaches the wearable device 101 by the hand 112 maintaining the pinch gesture, the size of the view finder object 150 displayed through the display may be enlarged.
While moving the view finder object 150 using a direction of the virtual object 342 corresponding to the hand 112 maintaining the pinch gesture, the wearable device 101 may detect or determine whether a shape of the hand 112 is changed to a shape different from the pinch gesture. For example, when two fingers in contact with each other for the pinch gesture are separated, the wearable device 101 may cease moving the view finder object 150 along the path of the hand 112.
The view finder object 150 may be anchored to a position in the display at a time point at which ceasing of the pinch gesture is identified. For example, when the user's 110 head rotates after the view finder object 150 is anchored back to a specific position at the time point, the wearable device 101 may maintain a position of the view finder object 150 in the display at the specific position.
In an embodiment, the wearable device 101 may detect or receive an input for resizing the view finder object 150. Referring to the example screen 801 of
Referring to
Adjusting the size of the view finder object 150 using the position and/or direction of the hand 112 having the pinch gesture may be performed below the maximum size. For example, in order to maintain visibility of a boundary line of the view finder object 150 in the display, the maximum size may be set. The wearable device 101 may change the size of the view finder object 150 below the maximum size. For example, the maximum size may correspond to a specified ratio (e.g., 90%) of the FoV of the display.
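The clamping of the resize operation to the maximum size can be sketched as follows; SizeDeg and clampViewFinderSize are hypothetical names, while the 90% ratio of the display FoV is the specified ratio given as an example above.

```kotlin
data class SizeDeg(val width: Float, val height: Float)

// Limit a requested view finder size to a specified ratio of the display field of view so
// that the boundary line of the view finder remains visible.
fun clampViewFinderSize(requested: SizeDeg, displayFov: SizeDeg, maxRatio: Float = 0.9f): SizeDeg =
    SizeDeg(
        width = minOf(requested.width, displayFov.width * maxRatio),
        height = minOf(requested.height, displayFov.height * maxRatio),
    )
```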
In an embodiment, the wearable device 101 may provide a user experience disconnected from an external space, such as VR. For example, the wearable device 101 may not display a composite image with respect to the external space. The wearable device 101 may execute a software application associated with VR to display at least a portion of a virtual space provided from the software application. For example, the entire displaying area of the display may be occupied by the virtual space.
Referring to
The wearable device 101 may display an outer image (e.g., the composite image 520 of
Referring to
As described above, the wearable device 101 may adjust the position and/or size of the view finder object 150 displayed on the display. The wearable device 101 displaying the view finder object 150 at a first position of the display may store, in memory, a first portion of a composite image corresponding to the first position, in response to receiving an input for shooting. Similarly, while displaying the view finder object 150 at a second position of the display different from the first position, the wearable device 101 may store a second portion of the composite image corresponding to the second position in response to receiving the input for shooting. For example, a portion of an external space corresponding to the image and/or video stored in the memory may be associated with a position and/or size of the view finder object 150 in the display at a time point of obtaining the image and/or video. For example, the wearable device 101 may obtain or store an image and/or video of a portion of an external space bounded by the position and/or size of the view finder object 150 within the display.
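The capture behavior described above can be sketched with standard Android graphics classes, assuming the composite image is available as a Bitmap and the view finder position/size is tracked as a pixel Rect; captureViewFinderRegion is a hypothetical name. On a shooting input, only the portion of the composite image bounded by the view finder would be stored.

```kotlin
import android.graphics.Bitmap
import android.graphics.Rect

// Crop the composite image to the region currently bounded by the view finder.
fun captureViewFinderRegion(composite: Bitmap, viewFinder: Rect): Bitmap {
    // Clip to the composite bounds so a partially off-screen view finder still works.
    val left = viewFinder.left.coerceIn(0, composite.width)
    val top = viewFinder.top.coerceIn(0, composite.height)
    val right = viewFinder.right.coerceIn(left, composite.width)
    val bottom = viewFinder.bottom.coerceIn(top, composite.height)
    require(right > left && bottom > top) { "view finder does not overlap the composite image" }
    return Bitmap.createBitmap(composite, left, top, right - left, bottom - top)
}
```

Moving or resizing the view finder changes the Rect, so a later shooting input stores a different portion of the composite image, matching the first-position/second-position behavior described above.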
Hereinafter, an example operation of the wearable device 101 described with reference to
Referring to
Referring to
Referring to
Referring to
Referring to
Referring to
Referring to
For example, a user may change a size and/or position of a view finder object displayed on a display by performing the input described with reference to
Referring to
Referring to
In an example state of displaying the screen 302 of
In an embodiment, the wearable device 101 may record a video based on the view finder object 150. Referring to the screen 302 of
Referring to
Referring to the screen 1002 of
In response to an input (e.g., a pinch gesture of the hand 112 having a direction corresponding to the indicator 1022 and/or the virtual object 1023) of selecting the indicator 1022 and/or the virtual object 1023, the wearable device 101 may display the view finder object 150 again. For example, the wearable device 101 may display the view finder object 150 again at a position of the view finder object 150, which was displayed before receiving an input for executing a software application corresponding to the virtual object 1021.
After receiving the input for selecting the indicator 1022 and/or the virtual object 1023, the wearable device 101 may confirm whether to cease displaying the virtual object 1021 before displaying the view finder object 150 again. For example, the wearable device 101 may display a virtual object for confirming whether to minimize or reduce a size of the virtual object 1021. In response to an input indicating selection of that virtual object, the wearable device 101 may cease displaying the virtual object 1021 and display the view finder object 150 again.
The wearable device 101 that obtains a video corresponding to the view finder object 150 in response to an input for video recording may further receive an input of ceasing the video recording. The input may also be detected by a pinch gesture of the hand 112 having a direction corresponding to a point outside the view finder object 150, similar to the input for video recording. The disclosure is not limited thereto, and while recording a video, a visual object (e.g., a shutter) for receiving an input for ceasing the video recording may be further displayed. The wearable device 101 may cease recording the video, in response to an input (e.g., an input indicating selection of the visual object using a pinch gesture) associated with the visual object.
Referring to
For example, the user 110 wearing the wearable device 101 may look at the screen 302 using a display (e.g., the display 220 of
An example of displaying the visual object 1032 including a portion of a composite image corresponding to the view finder object 150 is described, but the disclosure is not limited thereto. For example, in response to a shooting input, the wearable device 101 may display an image, text, and/or icon indicating the shooting input on the display 1030. For example, while recording a video, the wearable device 101 may display the indicator 1022 of
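The outward notification behavior can be sketched as follows; SecondDisplay, ShootingEvent, and ShootingNotifier are hypothetical names standing in for whatever mechanism drives the outward-facing display 1030, and are not an API defined by the disclosure.

```kotlin
// Assumed wrapper for the outward-facing (second) display of the wearable device.
interface SecondDisplay {
    fun showIndicator(iconResId: Int)
    fun clearIndicator()
}

enum class ShootingEvent { PHOTO_CAPTURED, VIDEO_STARTED, VIDEO_STOPPED }

// Show an indicator on the second display so people adjacent to the user are notified that
// shooting is being performed, and clear it when video recording stops.
class ShootingNotifier(
    private val secondDisplay: SecondDisplay,
    private val photoIcon: Int,
    private val videoIcon: Int,
) {
    fun onShootingEvent(event: ShootingEvent) {
        when (event) {
            ShootingEvent.PHOTO_CAPTURED -> secondDisplay.showIndicator(photoIcon)
            ShootingEvent.VIDEO_STARTED -> secondDisplay.showIndicator(videoIcon)
            ShootingEvent.VIDEO_STOPPED -> secondDisplay.clearIndicator()
        }
    }
}
```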
Hereinafter, an example operation of the wearable device 101 described with reference to
Referring to
Referring to
Referring to
Referring to
Referring to
Referring to
Referring to
Hereinafter, an operation of the wearable device associated with a control handle (e.g., the control handle 330 of
Referring to
While displaying the screen 302, the wearable device 101 detecting a body part including a hand 112 from an image and/or video obtained from an outward camera may display a virtual object 340 corresponding to the hand 112 within the screen 302. The wearable device 101 may display a virtual object 342 representing a direction of the hand 112, together with the virtual object 340. In response to a pinch gesture of the hand 112 detected while a direction of the hand 112 represented by the virtual object 342 faces a point p1 on the control handle 330, the wearable device 101 may detect or receive an input indicating selection of the control handle 330.
In response to the input indicating selection of the control handle 330, the wearable device 101 may display visual objects 1221, 1222, 1223, 1224, 1225, 1226, 1227, 1228, 1229, 1230, and 1231 corresponding to each of functions associated with shooting, along a direction of an edge of the view finder object 150 having a rectangular shape. Referring to
Referring to
The visual object 1227 may correspond to a function for obtaining a depth map using a depth sensor, together with an image and/or video using a composite image. An example operation of the wearable device 101 executing the function in response to an input indicating selection of the visual object 1227 will be described in greater detail with reference to
The visual object 1228 may correspond to a function for combining or synthesizing a virtual object displayed on a display, within an image and/or video using a composite image. An example operation of the wearable device 101 receiving an input indicating selection of the visual object 1228 will be described in greater detail with reference to
The visual object 1229 may correspond to a function for obtaining an image and/or video associated with a specific external object. An example operation of the wearable device 101 executing the function in response to selection of the visual object 1229 will be described in greater detail with reference to
The visual object 1230 may be referred to as a shutter. In response to an input (e.g., a pinch gesture of the hand 112 having a direction corresponding to the visual object 1230) associated with the visual object 1230, the wearable device 101 may obtain an image and/or video. For example, the shooting input may include an input indicating selection of the visual object 1230.
The visual object 1231 may correspond to a function for browsing an image and/or video stored in the memory (e.g., the memory 215 of
Hereinafter, an example operation of the wearable device 101 associated with the control handle 330 will be described in greater detail with reference to
Referring to
Referring to
Referring to
Referring to
Referring to
Referring to
Referring to
Within the screen 1402 of
For example, the visual object 1417 corresponding to the external object 1411 positioned closest to the wearable device 101 among the external objects 1411, 1412, and 1413 may be displayed in bright blue. For example, the visual object 1418 corresponding to the external object 1412 positioned farther from the wearable device 101 than the external object 1411 may be displayed in blue. For example, the visual object 1419 corresponding to the external object 1413 positioned farther from the wearable device 101 than the external object 1412 may be displayed in dark blue.
Referring to
In an embodiment, the depth map 1422 stored with the image 1421 may be used to display the image 1421 in three dimensions. For example, in response to an input for displaying the image 1421, the wearable device 101 may set binocular disparity of each of pixels of the image 1421, using a depth value individually allocated to the pixel by the depth map 1422. The wearable device 101 may three-dimensionally display the image 1421 including visual objects 1417, 1418, and 1419 corresponding to each of the external objects 1411, 1412, and 1413, using the example depth map 1422 of
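Under a pinhole stereo model, the per-pixel binocular disparity can be derived from the stored depth map as disparity = baseline * focalLengthPx / depth. The sketch below is a hedged illustration of that relation; the 6.3 cm baseline and the focal length in pixels are illustrative assumptions, not values from the disclosure.

```kotlin
// Disparity (in pixels) for a pixel at the given depth, using a pinhole stereo model.
fun disparityPixels(depthMeters: Float, baselineMeters: Float = 0.063f, focalLengthPx: Float = 1200f): Float {
    require(depthMeters > 0f) { "depth must be positive" }
    return baselineMeters * focalLengthPx / depthMeters
}

// Per-pixel disparity map computed from a row-major depth map, usable to render the stored
// image (e.g., the image 1421) with a three-dimensional effect.
fun disparityMap(depth: FloatArray): FloatArray =
    FloatArray(depth.size) { i -> if (depth[i] > 0f) disparityPixels(depth[i]) else 0f }
```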
In an embodiment, the wearable device 101 may receive an input for adjusting a maximum distance and/or a minimum distance to be measured using a depth sensor. Referring to the example screen 1403 of
Referring to the example screen 1403 of
Referring to
Referring to
Referring to
As described above, according to an embodiment, the wearable device 101 may obtain or store information (e.g., the depth map 1422 of
Hereinafter, an example operation of the wearable device 101 described with reference to
Referring to
Referring to
For example, in response to the input, activation of the option may be toggled. When the option of operation 1520 is activated, the processor may perform operation 1530.
Referring to
Referring to
Referring to
Hereinafter, an example operation of a wearable device obtaining an image and/or video to track an external object will be described in greater detail with reference to
Referring to
Referring to
Referring to
Before receiving an input for obtaining an image and/or video including a specific external object, the wearable device 101 may display the view finder object 150 anchored at a specific point of the display. Referring to
For example, when the user 110 who was looking in the direction d1 looks in a direction d2, the wearable device 101 may display a screen 1602 including a composite image corresponding to the direction d2. Within the screen 1602, the wearable device 101 may display the view finder object 150, so that the visual object 1611 corresponding to the pot 1610 is disposed inside the view finder object 150. The wearable device 101 may display the virtual object 1612 adjacent to the visual object 1611, together with the view finder object 150.
The virtual object 1612 may be displayed while the wearable device 101 moves a position of the view finder object 150 using a position within the composite image of the visual object 1611. In response to an additional input (e.g., a pinch gesture of a hand having a direction associated with the virtual object 1612) associated with the virtual object 1612, the wearable device 101 may cease displaying the virtual object 1612 or change an external object linked with the virtual object 1612 into another external object. For example, an input representing selection of the virtual object 1612 may be mapped to a function that ceases moving the view finder object 150 using a position of the visual object 1611 corresponding to the virtual object 1612 within the composite image. The wearable device 101 receiving the input may cease displaying the virtual object 1612.
As described above, in response to an input for tracking an external object (e.g., the pot 1610), the wearable device 101 may change a position of the view finder object 150 on a composite image, using a position of the external object associated with the composite image. When receiving a shooting input, the wearable device 101 may obtain or store an image and/or video associated with a portion of a composite image corresponding to the view finder object 150 including a visual object (e.g., the visual object 1611) corresponding to an external object. While recording a video in response to the shooting input, the wearable device 101 may obtain or store an image and/or video including the visual object 1611 corresponding to an external object in a composite image, independently of rotation and/or movement of a head of the user 110 wearing the wearable device 101.
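One way to realize the tracking behavior above is to re-center the view finder object on the tracked object's projected position every frame, independently of head motion. The data types and helper names below are hypothetical; the sketch assumes a tracker that reports the object's pixel coordinates within the composite image.

```python
from dataclasses import dataclass

@dataclass
class ViewFinder:
    cx: float   # center x in composite-image pixels
    cy: float   # center y
    w: float    # width
    h: float    # height

def follow_tracked_object(view_finder, obj_center, image_size):
    """Re-center the view finder on a tracked object, clamped inside the image.

    obj_center: (x, y) position of the tracked external object within the
    composite image, e.g. as reported by an object tracker each frame.
    """
    img_w, img_h = image_size
    half_w, half_h = view_finder.w / 2, view_finder.h / 2
    # Keep the view finder fully inside the composite image.
    view_finder.cx = min(max(obj_center[0], half_w), img_w - half_w)
    view_finder.cy = min(max(obj_center[1], half_h), img_h - half_h)
    return view_finder

# Example: the tracked object moves to the right; the view finder follows it.
vf = ViewFinder(cx=960, cy=540, w=600, h=400)
print(follow_tracked_object(vf, obj_center=(1500, 700), image_size=(1920, 1080)))
```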
As described above with reference to
Hereinafter, an example operation of the wearable device 101 described with reference to
Referring to
Referring to
Referring to
Referring to
Hereinafter, an example operation of a wearable device associated with a virtual object will be described in greater detail with reference to
Referring to
Referring to the example screen 1801 of
While displaying the screen 1801 of
Based on the portion 1811 displayed using the specified transparency, the wearable device 101 may receive an input for selecting an area 1820 of the virtual object 1810 to be combined with a portion of the composite image corresponding to the view finder object 150. For example, the wearable device 101 may receive an input for selecting the area 1820, in response to a drag gesture (e.g., a path in which the hand 112 maintaining a pinch gesture is moved) within the portion 1811. The wearable device 101 receiving the input may display the area 1820 of the virtual object 1810 corresponding to the input within the view finder object 150, as in the screen 1802. For example, the wearable device 101 may display the area 1820 opaquely, using 0% transparency.
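The selective combination described above can be approximated by blending only the drag-selected area of the virtual object into the captured portion. In the sketch below, the rectangle-from-drag helper and the blending weight are assumptions for illustration.

```python
import numpy as np

def rect_from_drag(start, end):
    """Turn a drag gesture (start and end points) into a selection rectangle."""
    x0, y0 = min(start[0], end[0]), min(start[1], end[1])
    x1, y1 = max(start[0], end[0]), max(start[1], end[1])
    return x0, y0, x1, y1

def composite_selected_area(capture, virtual, rect, alpha=1.0):
    """Blend the virtual-object layer into the captured frame inside the rect.

    Inside the selected rectangle the virtual object is fully opaque
    (0% transparency); outside the rectangle it is not blended at all.
    """
    out = capture.astype(np.float32).copy()
    x0, y0, x1, y1 = rect
    region = (slice(y0, y1), slice(x0, x1))
    out[region] = alpha * virtual[region] + (1.0 - alpha) * out[region]
    return out.astype(capture.dtype)

# Example with tiny dummy frames: the selected region takes the virtual layer's value.
cap = np.zeros((4, 6, 3), dtype=np.uint8)
virt = np.full((4, 6, 3), 200, dtype=np.uint8)
rect = rect_from_drag(start=(1, 1), end=(4, 3))
print(composite_selected_area(cap, virt, rect)[:, :, 0])
```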
While displaying the area 1820 like the screen 1802 of
An example operation of the wearable device 101 associated with the virtual object 1810 having a two-dimensional panel shape is described, but the disclosure is not limited thereto. Referring to the example screen 1803 of
While displaying the screen 1803 of
In response to the shooting input received while displaying the screen 1804 of
As described above, according to an embodiment, the wearable device 101 may combine a virtual object displayed on the display with an image and/or video with respect to a composite image representing an external space.
An operation in which the wearable device 101 displays the view finder object 150 according to an input of executing a camera application and stores an image and/or video associated with the view finder object 150 in response to a shooting input has been described, but the disclosure is not limited thereto. For example, the wearable device 101 may support a function of automatically obtaining an image and/or video. Hereinafter, an example operation of the wearable device 101 that automatically stores an image and/or video according to a specific time and/or specific condition will be described in greater detail with reference to
In an embodiment, the wearable device 101 may determine whether to obtain an image and/or video by checking a preset condition. The condition may be preset by a user input associated with a camera application. For example, the preset condition may be associated with a time and/or a period reserved by the user 110. For example, when the user 110 wears the wearable device 101 at a time associated with the preset condition, the wearable device 101 may automatically obtain an image and/or video. In this example, the wearable device 101 may repeatedly obtain an image and/or video during a period associated with the preset condition.
For example, the preset condition may be associated with a geographical location set by the user 110. Using a geofence and/or GPS coordinates, the wearable device 101 may compare a geographical location included in the preset condition with a current location of the wearable device 101. When, while being worn by the user 110, the current location of the wearable device 101 corresponds to a position included in the preset condition, the wearable device 101 may obtain an image and/or a video.
For example, the preset condition may be associated with whether an external object of a specific type (or a specific category) set by the user 110 is detected. When detecting an external object of a specific type set by the user 110, the wearable device 101 may obtain or store an image and/or video including the external object.
For example, the preset condition may be associated with whether a specific user registered by the user 110 is detected. When the specific user is detected, the wearable device 101 may obtain or store an image and/or video including the specific user.
For example, the preset condition may be associated with a sound detected by the wearable device 101. For example, the wearable device 101, which receives a sound having a volume exceeding a specified volume (e.g., a volume expressed in decibels), may initiate obtaining an image and/or video.
For example, the preset condition may be associated with motion detected through an outward camera. For example, when an external object moving beyond a specified moving distance and/or faster than a specified speed is detected using the outward camera, the wearable device 101 may initiate obtaining an image and/or video.
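As a hedged sketch of how such preset conditions might be evaluated before an automatic capture is started, the check below combines a few of the examples above (a reserved time window, a geofence, detected object categories, and a sound level). All field names, thresholds, and the geofence radius are assumptions, not values from the disclosure.

```python
from dataclasses import dataclass, field
from datetime import datetime, time
from math import hypot

@dataclass
class CaptureCondition:
    start: time = time(9, 0)                       # reserved time window (assumed)
    end: time = time(10, 0)
    geofence_center: tuple = (37.5665, 126.9780)   # assumed lat/lon
    geofence_radius_deg: float = 0.001             # crude degree-based radius (assumed)
    object_types: set = field(default_factory=lambda: {"pot", "person"})
    min_sound_db: float = 80.0

def should_auto_capture(cond, now, location, detected_types, sound_db):
    """Return True if any of the preset conditions for automatic shooting is met."""
    in_time = cond.start <= now.time() <= cond.end
    dist = hypot(location[0] - cond.geofence_center[0],
                 location[1] - cond.geofence_center[1])
    in_geofence = dist <= cond.geofence_radius_deg
    has_object = bool(cond.object_types & set(detected_types))
    loud_enough = sound_db >= cond.min_sound_db
    return in_time or in_geofence or has_object or loud_enough

# Example: the reserved time window is satisfied, so capture would start.
cond = CaptureCondition()
print(should_auto_capture(cond, datetime(2024, 9, 6, 9, 30),
                          (37.5666, 126.9781), ["pot"], sound_db=65.0))
```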
Referring to
After displaying the virtual object 1910 as illustrated in the screen 1901, based on expiration of a specified duration (e.g., a few seconds), the wearable device 101 may display an indicator 1920 notifying obtainment of a video, as illustrated in the screen 1902, instead of displaying the virtual object 1910. The indicator 1920 of
Hereinafter, an example operation of the wearable device 101 described with reference to
Referring to
Referring to
Referring to
Referring to
Referring to
As described above, according to an embodiment, the wearable device may capture a portion of an image of an external environment displayed through at least one display covering two eyes of a user, using a view finder object. The view finder object may be moved on the image displayed through a display. A size of the view finder object may be adjusted by a user input. The wearable device may change a position and/or size of the view finder object based on a gesture (e.g., a hand gesture) of the user wearing the wearable device. The wearable device may obtain an image and/or video based on the user's gesture. The wearable device may obtain a depth map together with the image and/or the video, based on the user's gesture. The wearable device may obtain an image and/or video with respect to a specific external object included in the external environment based on the user's gesture, independently of motion of the user wearing the wearable device. The wearable device may obtain an image and/or video including at least one virtual object displayed through a display and the external environment, based on the user's gesture.
Hereinafter, an example exterior of the wearable device 101 described with reference to
Referring to
According to an embodiment, the wearable device 2100 may be wearable on a portion of the user's body. The wearable device 2100 may provide augmented reality (AR), virtual reality (VR), or mixed reality (MR) combining the AR and the VR to a user wearing the wearable device 2100. For example, the wearable device 2100 may display a virtual reality image provided from at least one optical device 2182 and 2184 of
According to an embodiment, the at least one display 2150 may provide visual information to a user. For example, the at least one display 2150 may include a transparent or translucent lens. The at least one display 2150 may include a first display 2150-1 and/or a second display 2150-2 spaced apart from the first display 2150-1. For example, the first display 2150-1 and the second display 2150-2 may be disposed at positions corresponding to the user's left and right eyes, respectively.
Referring to
According to an embodiment, the at least one display 2150 may include at least one waveguide 2133 and 2134 that transmits light from the at least one optical device 2182 and 2184 to the user by diffracting the light. The at least one waveguide 2133 and 2134 may be formed based on at least one of glass, plastic, or polymer. A nano pattern may be formed on at least a portion of the outside or inside of the at least one waveguide 2133 and 2134. The nano pattern may be formed based on a grating structure having a polygonal or curved shape. Light incident to an end of the at least one waveguide 2133 and 2134 may be propagated to another end of the at least one waveguide 2133 and 2134 by the nano pattern. The at least one waveguide 2133 and 2134 may include at least one of at least one diffraction element (e.g., a diffractive optical element (DOE) or a holographic optical element (HOE)) and a reflection element (e.g., a reflection mirror). For example, the at least one waveguide 2133 and 2134 may be disposed in the wearable device 2100 to guide a screen displayed by the at least one display 2150 to the user's eyes. For example, the screen may be transmitted to the user's eyes through total internal reflection (TIR) generated in the at least one waveguide 2133 and 2134.
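For reference, the total internal reflection mentioned above follows the standard critical-angle condition of geometric optics; the relation below is the general formula, not a parameter of the disclosed waveguide.

```latex
% Standard total internal reflection (TIR) condition, stated for illustration:
% light remains guided when the internal angle of incidence exceeds the
% critical angle set by the refractive indices of the waveguide core (n_1)
% and its surroundings (n_2), with n_1 > n_2.
\[
  \theta_c = \arcsin\!\left(\frac{n_2}{n_1}\right), \qquad
  \text{TIR holds for } \theta > \theta_c .
\]
```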
The wearable device 2100 may analyze an object included in a real image collected through a photographing camera 2160-4, combine it with a virtual object corresponding to an object that becomes a subject of augmented reality provision among the analyzed objects, and display the result on the at least one display 2150. The virtual object may include at least one of text and images for various information associated with the object included in the real image. The wearable device 2100 may analyze the object based on a multi-camera, such as a stereo camera. For the object analysis, the wearable device 2100 may execute space recognition (e.g., simultaneous localization and mapping (SLAM)) using the multi-camera and/or time-of-flight (ToF). The user wearing the wearable device 2100 may watch an image displayed on the at least one display 2150.
According to an embodiment, a frame may be configured with a physical structure in which the wearable device 2100 may be worn on the user's body. According to an embodiment, the frame may be configured so that when the user wears the wearable device 2100, the first display 2150-1 and the second display 2150-2 may be positioned corresponding to the user's left and right eyes. The frame may support the at least one display 2150. For example, the frame may support the first display 2150-1 and the second display 2150-2 to be positioned at positions corresponding to the user's left and right eyes.
Referring to
For example, the frame may include a first rim 2101 surrounding at least a portion of the first display 2150-1, a second rim 2102 surrounding at least a portion of the second display 2150-2, a bridge 2103 disposed between the first rim 2101 and the second rim 2102, a first pad 2111 disposed along a portion of the edge of the first rim 2101 from one end of the bridge 2103, a second pad 2112 disposed along a portion of the edge of the second rim 2102 from the other end of the bridge 2103, the first temple 2104 extending from the first rim 2101 and fixed to a portion of one of the wearer's ears, and the second temple 2105 extending from the second rim 2102 and fixed to a portion of the wearer's opposite ear. The first pad 2111 and the second pad 2112 may be in contact with a portion of the user's nose, and the first temple 2104 and the second temple 2105 may be in contact with a portion of the user's face and a portion of the user's ear. The temples 2104 and 2105 may be rotatably connected to the rim through hinge units 2106 and 2107 of
According to an embodiment, the wearable device 2100 may include hardware (e.g., hardware described above based on the block diagram of
According to an embodiment, the microphone (e.g., the microphones 2165-1, 2165-2, and 2165-3) of the wearable device 2100 may obtain a sound signal, by being disposed on at least a portion of the frame. The first microphone 2165-1 disposed on the bridge 2103, the second microphone 2165-2 disposed on the second rim 2102, and the third microphone 2165-3 disposed on the first rim 2101 are illustrated in
According to an embodiment, the at least one optical device 2182 and 2184 may project a virtual object on the at least one display 2150 in order to provide various image information to the user. For example, the at least one optical device 2182 and 2184 may be a projector. The at least one optical device 2182 and 2184 may be disposed adjacent to the at least one display 2150 or may be included in the at least one display 2150 as a portion of the at least one display 2150. According to an embodiment, the wearable device 2100 may include a first optical device 2182 corresponding to the first display 2150-1, and a second optical device 2184 corresponding to the second display 2150-2. For example, the at least one optical device 2182 and 2184 may include the first optical device 2182 disposed at a periphery of the first display 2150-1 and the second optical device 2184 disposed at a periphery of the second display 2150-2. The first optical device 2182 may transmit light to the first waveguide 2133 disposed on the first display 2150-1, and the second optical device 2184 may transmit light to the second waveguide 2134 disposed on the second display 2150-2.
In an embodiment, a camera 2160 may include the photographing camera 2160-4, an eye tracking camera (ET CAM) 2160-1, and/or the motion recognition cameras 2160-2 and 2160-3. The photographing camera 2160-4, the eye tracking camera 2160-1, and the motion recognition cameras 2160-2 and 2160-3 may be disposed at different positions on the frame and may perform different functions. The eye tracking camera 2160-1 may output data indicating a position of an eye or the gaze of the user wearing the wearable device 2100. For example, the wearable device 2100 may detect the gaze from an image including the user's pupil obtained through the eye tracking camera 2160-1.
The wearable device 2100 may identify an object (e.g., a real object and/or a virtual object) focused by the user, using the user's gaze obtained through the eye tracking camera 2160-1. The wearable device 2100 identifying the focused object may execute a function (e.g., gaze interaction) for interaction between the user and the focused object. The wearable device 2100 may represent a portion corresponding to an eye of an avatar representing the user in a virtual space, using the user's gaze obtained through the eye tracking camera 2160-1. The wearable device 2100 may render an image (or a screen) displayed on the at least one display 2150, based on the position of the user's eye.
For example, the visual quality (e.g., resolution, brightness, saturation, grayscale, and pixels per inch (PPI)) of a first area associated with the gaze within the image may be different from the visual quality of a second area distinguished from the first area. Using foveated rendering, the wearable device 2100 may obtain an image having the visual quality of the first area matching the user's gaze and the visual quality of the second area. For example, when the wearable device 2100 supports an iris recognition function, user authentication may be performed based on iris information obtained using the eye tracking camera 2160-1. An example in which the eye tracking camera 2160-1 is disposed toward the user's right eye is illustrated in
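A minimal sketch of the gaze-dependent quality split described above: pixels near the gaze point are rendered at full quality, while peripheral pixels use a reduced scale. The foveal radius and the scale factors are assumed values for illustration.

```python
def region_quality(pixel, gaze, fovea_radius_px=200, full_scale=1.0, peripheral_scale=0.5):
    """Return a rendering scale for a pixel based on its distance to the gaze point.

    Pixels within the foveal radius get full quality; peripheral pixels are
    rendered at a lower scale, as in foveated rendering.
    """
    dx, dy = pixel[0] - gaze[0], pixel[1] - gaze[1]
    inside_fovea = dx * dx + dy * dy <= fovea_radius_px ** 2
    return full_scale if inside_fovea else peripheral_scale

print(region_quality((960, 540), gaze=(1000, 560)))   # near the gaze  -> 1.0
print(region_quality((100, 100), gaze=(1000, 560)))   # periphery      -> 0.5
```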
In an embodiment, the photographing camera 2160-4 may photograph a real image or background to be matched with a virtual image in order to implement the augmented reality or mixed reality content. The photographing camera 2160-4 may be used to obtain an image having a high resolution based on a high resolution (HR) or a photo video (PV). The photographing camera 2160-4 may photograph an image of a specific object existing at a position viewed by the user and may provide the image to the at least one display 2150. The at least one display 2150 may display one image in which a virtual image provided through the at least one optical device 2182 and 2184 is overlapped with information on the real image or background including an image of the specific object obtained using the photographing camera. The wearable device 2100 may compensate for depth information (e.g., a distance between the wearable device 2100 and an external object obtained through a depth sensor), using an image obtained through the photographing camera 2160-4. The wearable device 2100 may perform object recognition through an image obtained using the photographing camera 2160-4. The wearable device 2100 may perform a function (e.g., auto focus) of focusing an object (or subject) within an image and/or an optical image stabilization (OIS) function (e.g., an anti-shaking function) using the photographing camera 2160-4. While displaying a screen representing a virtual space on the at least one display 2150, the wearable device 2100 may perform a pass through function for displaying an image obtained through the photographing camera 2160-4 overlapping at least a portion of the screen. In an embodiment, the photographing camera may be disposed on the bridge 2103 disposed between the first rim 2101 and the second rim 2102.
By tracking the gaze of the user wearing the wearable device 2100, the eye tracking camera 2160-1 may implement a more realistic augmented reality by matching the user's gaze with the visual information provided on the at least one display 2150. For example, when the user looks ahead, the wearable device 2100 may naturally display environment information associated with the front of the user on the at least one display 2150, at the place where the user is located. The eye tracking camera 2160-1 may be configured to capture an image of the user's pupil in order to determine the user's gaze. For example, the eye tracking camera 2160-1 may receive gaze detection light reflected from the user's pupil and may track the user's gaze based on the position and movement of the received gaze detection light. In an embodiment, the eye tracking camera 2160-1 may be disposed at positions corresponding to the user's left and right eyes. For example, the eye tracking camera 2160-1 may be disposed in the first rim 2101 and/or the second rim 2102 to face the direction in which the user wearing the wearable device 2100 is positioned.
The motion recognition camera 2160-2 and camera 2160-3 may provide a specific event to the screen provided on the at least one display 2150 by recognizing the movement of the whole or portion of the user's body, such as the user's torso, hand, or face. The motion recognition camera 2160-2 and camera 2160-3 may obtain a signal corresponding to motion by recognizing the user's motion (e.g., gesture recognition), and may provide a display corresponding to the signal to the at least one display 2150. The processor may identify a signal corresponding to the operation and may perform a preset function based on the identification. The motion recognition camera 2160-2 and 2160-3 may be used to perform SLAM for 6 degrees of freedom pose (6 dof pose) and/or a space recognition function using a depth map. The processor may perform a gesture recognition function and/or an object tracking function, using the motion recognition cameras 2160-2 and 2160-3. In an embodiment, the motion recognition camera 2160-2 and camera 2160-3 may be disposed on the first rim 2101 and/or the second rim 2102.
The camera 2160 included in the wearable device 2100 is not limited to the above-described eye tracking camera 2160-1 and the motion recognition camera 2160-2 and 2160-3. For example, the wearable device 2100 may identify an external object included in the FoV using a camera disposed toward the user's FoV. That the wearable device 2100 identifies the external object may be performed based on a sensor for identifying a distance between the wearable device 2100 and the external object, such as a depth sensor and/or a time of flight (ToF) sensor. The camera 2160 disposed toward the FoV may support an autofocus function and/or an optical image stabilization (OIS) function. For example, in order to obtain an image including a face of the user wearing the wearable device 2100, the wearable device 2100 may include the camera 2160 (e.g., a face tracking (FT) camera) disposed toward the face.
Although not illustrated, the wearable device 2100 according to an embodiment may further include a light source (e.g., LED) that emits light toward a subject (e.g., user's eyes, face, and/or an external object in the FoV) photographed using the camera 2160. The light source may include an LED having an infrared wavelength. The light source may be disposed on at least one of the frame, and the hinge units 2106 and 2107.
According to an embodiment, the battery module 2170 may supply power to electronic components of the wearable device 2100. In an embodiment, the battery module 2170 may be disposed in the first temple 2104 and/or the second temple 2105. For example, the battery module 2170 may include a plurality of battery modules 2170, and the plurality of battery modules 2170 may be disposed on the first temple 2104 and the second temple 2105, respectively. In an embodiment, the battery module 2170 may be disposed at an end of the first temple 2104 and/or the second temple 2105.
The antenna module 2175 may transmit the signal or power to the outside of the wearable device 2100 or may receive the signal or power from the outside. In an embodiment, the antenna module 2175 may be disposed in the first temple 2104 and/or the second temple 2105. For example, the antenna module 2175 may be disposed close to one surface of the first temple 2104 and/or the second temple 2105.
A speaker 2155 may output a sound signal to the outside of the wearable device 2100. A sound output module may be referred to as a speaker. In an embodiment, the speaker 2155 may be disposed in the first temple 2104 and/or the second temple 2105 in order to be disposed adjacent to the ear of the user wearing the wearable device 2100. For example, the speaker 2155 may include a second speaker 2155-2 disposed adjacent to the user's left ear by being disposed in the first temple 2104, and a first speaker 2155-1 disposed adjacent to the user's right ear by being disposed in the second temple 2105.
The light emitting module (not illustrated) may include at least one light emitting element. In order to visually provide information on a specific state of the wearable device 2100 to the user, the light emitting module may emit light of a color corresponding to the specific state or may emit light in a manner corresponding to the specific state. For example, when the wearable device 2100 requires charging, it may emit red light at a constant cycle. In an embodiment, the light emitting module may be disposed on the first rim 2101 and/or the second rim 2102.
Referring to
According to an embodiment, the wearable device 2100 may include at least one of a gyro sensor, a gravity sensor, and/or an acceleration sensor for detecting the posture of the wearable device 2100 and/or the posture of a body part (e.g., a head) of the user wearing the wearable device 2100. Each of the gravity sensor and the acceleration sensor may measure gravity acceleration, and/or acceleration based on preset 3-dimensional axes (e.g., x-axis, y-axis, and z-axis) perpendicular to each other. The gyro sensor may measure angular velocity of each of preset 3-dimensional axes (e.g., x-axis, y-axis, and z-axis). At least one of the gravity sensor, the acceleration sensor, and the gyro sensor may be referred to as an inertial measurement unit (IMU). According to an embodiment, the wearable device 2100 may identify the user's motion and/or gesture performed to execute or stop a specific function of the wearable device 2100 based on the IMU.
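As a hedged illustration of how IMU readings might be used to detect a simple head gesture (e.g., a quick nod) that executes or stops a function, the sketch below thresholds the gyroscope's pitch-axis angular velocity. The axis convention and thresholds are assumptions for illustration.

```python
def detect_nod(gyro_samples, pitch_axis=0, threshold_rad_s=2.0):
    """Detect a quick nod from gyroscope samples.

    gyro_samples: list of (x, y, z) angular velocities in rad/s.
    A nod is assumed to appear as a downward spike followed by an upward
    spike on the pitch axis within the sampled window.
    """
    pitch = [s[pitch_axis] for s in gyro_samples]
    went_down = any(v < -threshold_rad_s for v in pitch)
    came_back = any(v > threshold_rad_s for v in pitch)
    return went_down and came_back

# Example: a downward spike followed by an upward spike registers as a nod.
samples = [(0.1, 0, 0), (-2.5, 0, 0), (-0.3, 0, 0), (2.8, 0, 0), (0.2, 0, 0)]
print(detect_nod(samples))   # True
```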
Referring to
According to an embodiment, the wearable device 2200 may include cameras 2160-1 for photographing and/or tracking two eyes of the user adjacent to each of the first display 2150-1 and the second display 2150-2. The cameras 2160-1 may be referred to as the gaze tracking camera 2160-1 of
Referring to
For example, using cameras 2160-11 and 2160-12, the wearable device 2200 may obtain an image and/or video to be transmitted to each of the user's two eyes. The camera 2160-11 may be disposed on the second surface 2220 of the wearable device 2200 to obtain an image to be displayed through the second display 2150-2 corresponding to the right eye among the two eyes. The camera 2160-12 may be disposed on the second surface 2220 of the wearable device 2200 to obtain an image to be displayed through the first display 2150-1 corresponding to the left eye among the two eyes. The cameras 2160-11 and 2160-12 may correspond to the photographing camera 2160-4 of
According to an embodiment, the wearable device 2200 may include the depth sensor 2230 disposed on the second surface 2220 in order to identify a distance between the wearable device 2200 and the external object. Using the depth sensor 2230, the wearable device 2200 may obtain spatial information (e.g., a depth map) about at least a portion of the FoV of the user wearing the wearable device 2200. Although not illustrated, a microphone for obtaining sound output from the external object may be disposed on the second surface 2220 of the wearable device 2200. The number of microphones may be one or more according to embodiments.
In an embodiment, a method of obtaining an image and/or video using a user interface (UI) that is adjustable by a user may be required. As described above, according to an example embodiment, a wearable device (e.g., the wearable device 101 of
For example, at least one processor, individually and/or collectively, may be configured to cause the wearable device to display a pointer object (e.g., the virtual object 342 of
For example, at least one processor, individually and/or collectively, may be configured to cause the wearable device to, in response to the second input detected while displaying the pointer object facing an edge of the boundary line having a rectangular shape, change the position of the view finder object using a direction of the pointer object.
For example, at least one processor, individually and/or collectively, may be configured to cause the wearable device to, in response to the second input detected while displaying the pointer object facing a vertex (e.g., the vertex 150-2 of
For example, at least one processor, individually and/or collectively, may be configured to cause the wearable device to, in response to the second input, change the size to less than or equal to a maximum size, which is specified to maintain visibility of the boundary line within the display.
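Taken together, the pointer-driven behaviors above amount to: translate the view finder when the pointer faces an edge, resize it when the pointer faces a vertex, and clamp the result to a maximum size that keeps the boundary visible. The sketch below is illustrative; the geometry helper and the maximum-size values are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float   # top-left x
    y: float   # top-left y
    w: float   # width
    h: float   # height

MAX_W, MAX_H = 1600, 900   # assumed maximum size that keeps the boundary visible

def apply_pointer_drag(rect, target, delta):
    """Move or resize a rectangular view finder based on what the pointer faces.

    target: "edge" translates the rectangle; "vertex" resizes it.
    delta:  (dx, dy) displacement of the pointer while the gesture is held.
    """
    dx, dy = delta
    if target == "edge":            # pointer facing an edge -> move
        rect.x += dx
        rect.y += dy
    elif target == "vertex":        # pointer facing a vertex -> resize
        rect.w = min(max(rect.w + dx, 1), MAX_W)
        rect.h = min(max(rect.h + dy, 1), MAX_H)
    return rect

vf = Rect(x=400, y=200, w=800, h=450)
print(apply_pointer_drag(vf, "edge", (50, -20)))
print(apply_pointer_drag(vf, "vertex", (10_000, 10_000)))   # clamped to the maximum size
```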
For example, at least one processor, individually and/or collectively, may be configured to cause the wearable device to detect the third input using the shape of the hand, while displaying a pointer object facing another portion different from the portion of the composite image specified by the view finder object.
For example, the wearable device may include another camera disposed to face the eye of the user. At least one processor, individually and/or collectively, may be configured to cause the wearable device to determine a direction of the eye using the other camera. At least one processor individually and/or collectively, may be configured to cause the wearable device to change a focal length of at least one of the plurality of cameras using a portion of the external environment corresponding to the direction, based on the direction facing the portion of the composite image specified by the view finder object.
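One plausible way to realize the gaze-driven focus change above is to sample a depth map around the gaze point and use that distance as the focus target; sampling a depth map is not stated here and is used only for illustration, and the function and parameter names are hypothetical.

```python
def focus_distance_from_gaze(depth_map, gaze_px, window=5):
    """Estimate a focus distance (meters) from depth values around the gaze point.

    Averages a small window of the depth map centered on the gaze pixel to
    reduce sensor noise; the window size is an assumed tuning parameter.
    """
    h, w = len(depth_map), len(depth_map[0])
    gx, gy = gaze_px
    half = window // 2
    values = [depth_map[y][x]
              for y in range(max(0, gy - half), min(h, gy + half + 1))
              for x in range(max(0, gx - half), min(w, gx + half + 1))]
    return sum(values) / len(values)

# Example: a 3x4 toy depth map; the gaze falls on the nearer region (1.2 m).
depth = [[3.0, 3.0, 1.2, 1.2],
         [3.0, 3.0, 1.2, 1.2],
         [3.0, 3.0, 1.2, 1.2]]
print(focus_distance_from_gaze(depth, gaze_px=(3, 1), window=3))
```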
For example, at least one processor, individually and/or collectively, may be configured to cause the wearable device to, in response to a fourth input indicating to select a handle object (e.g., the control handle 330 of
For example, the wearable device may include a depth sensor. At least one processor, individually and/or collectively, may be configured to cause the wearable device to, in response to the third input, store, in the memory, a depth map obtained using the depth sensor and corresponding to the portion, together with the portion of the composite image.
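A minimal sketch of storing the captured portion together with its depth map as one record follows; the file format and key names are hypothetical, and a real device might instead embed the depth data inside the image container.

```python
import numpy as np

def save_capture_with_depth(path, image_rgb, depth_map):
    """Store a captured image portion and its depth map in a single archive.

    Uses a compressed .npz file so the depth map stays paired with the image.
    """
    assert image_rgb.shape[:2] == depth_map.shape, "depth map must match the image size"
    np.savez_compressed(path, image=image_rgb, depth=depth_map)

def load_capture_with_depth(path):
    data = np.load(path)
    return data["image"], data["depth"]

# Example with dummy data.
img = np.zeros((480, 640, 3), dtype=np.uint8)
depth = np.full((480, 640), 2.5, dtype=np.float32)   # meters
save_capture_with_depth("capture.npz", img, depth)
print(load_capture_with_depth("capture.npz")[1].mean())
```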
For example, at least one processor, individually and/or collectively, may be configured to cause the wearable device to display a virtual object representing a measurable range by the depth sensor, inside the view finder object.
For example, at least one processor, individually and/or collectively, may be configured to cause the wearable device to, before detecting the third input, display a virtual object representing the depth map obtained from the depth sensor, superimposed on the portion of the composite image specified by the view finder object.
For example, at least one processor, individually and/or collectively, may be configured to cause the wearable device to, in response to the first input for executing a first software application, display the view finder object. At least one processor individually and/or collectively, may be configured to cause the wearable device to, in response to a fourth input for executing a second software application different from the first software application that is detected while obtaining a video for the portion of the composite image in response to the third input, cease to display the view finder object and display an indicator (e.g., the indicator 1022 of
For example, at least one processor, individually and/or collectively, may be configured to cause the wearable device to, in response to a fifth input indicating to select the indicator, display the view finder object at a position of the view finder object, which was displayed before receiving the fourth input.
For example, at least one processor, individually and/or collectively, may be configured to cause the wearable device to: in response to the first input detected while displaying a virtual object superimposed on the composite image, display the virtual object in which at least a portion superimposed on the view finder object is masked, together with the view finder object. At least one processor individually and/or collectively, may be configured to cause the wearable device to, in response to a fourth input for capturing the portion of the composite image together with the virtual object, display, in the view finder object, the masked at least portion of the virtual object using a specified transparency.
For example, at least one processor, individually and/or collectively, may be configured to cause the wearable device to detect a fourth input indicating tracking of an external object in the portion of the composite image specified by the view finder object, while displaying the view finder object on the composite image. At least one processor individually and/or collectively, may be configured to cause the wearable device to, in response to the fourth input, change the position of the view finder object on the composite image using a position of the external object associated with the composite image.
As described above, according to an example embodiment, a method of a wearable device may be provided. The wearable device may comprise a housing, a display disposed on at least a portion of the housing and arranged in front of an eye of a user wearing the wearable device, a plurality of cameras obtaining images with respect to at least a portion of an external environment of the wearable device. The method may comprise: in response to a first input, displaying a view finder object on a composite image of the images, wherein the composite image may be displayed to represent a portion of the external environment beyond the display. The method may comprise, in response to a second input for moving or resizing the view finder object, changing at least one of a position or a size of the view finder object, while displaying the view finder object on the composite image. The method may comprise, in response to a third input for shooting, storing, in the memory, a portion of the composite image corresponding to the view finder object, while displaying the view finder object on the composite image.
For example, the changing may comprise displaying a pointer object extended from a point of the composite image associated with a body part including a hand. The method may comprise detecting the second input using a shape of the hand, while displaying the pointer object facing a boundary line of the view finder object.
For example, the changing may comprise, in response to the second input detected while displaying the pointer object facing an edge of the boundary line having a rectangular shape, changing the position of the view finder object using a direction of the pointer object.
For example, the changing may comprise, in response to the second input detected while displaying the pointer object facing a vertex of the boundary line having a rectangular shape, changing the size of the view finder object using the direction of the pointer object.
For example, the changing may comprise, in response to the second input, changing the size to less than or equal to a maximum size, which is specified to maintain visibility of the boundary line within the display.
For example, the storing may comprise detecting the third input using the shape of the hand, while displaying a pointer object facing another portion different from the portion of the composite image specified by the view finder object.
For example, the method may comprise determining a direction of the eye by using another camera disposed to face the eye of the user. The method may comprise changing a focal length of at least one of the plurality of cameras using a portion of the external environment corresponding to the direction, based on the direction facing the portion of the composite image specified by the view finder object.
For example, the method may comprise, in response to a fourth input indicating to select a handle object displayed in the display together with the view finder object, displaying visual objects corresponding to each of functions associated with shooting, along a direction of an edge of the view finder object having a rectangular shape.
For example, the storing may comprise, in response to the third input, storing, in the memory, a depth map obtained using a depth sensor of the wearable device and corresponding to the portion, together with the portion of the composite image.
For example, the displaying may comprise displaying a virtual object representing a measurable range by the depth sensor, inside the view finder object.
For example, the method may comprise, before detecting the third input, displaying a virtual object representing the depth map obtained from the depth sensor, superimposed on the portion of the composite image specified by the view finder object.
For example, the displaying may comprise, in response to the first input for executing a first software application, displaying the view finder object. The method may comprise, in response to a fourth input for executing a second software application different from the first software application, detected while obtaining a video for the portion of the composite image in response to the third input, ceasing to display the view finder object and displaying an indicator indicating recording of the video.
The method according to an example embodiment may comprise, in response to a fifth input indicating to select the indicator, displaying the view finder object at a position of the view finder object, which was displayed before receiving the fourth input.
For example, the displaying may comprise, in response to the first input detected while displaying a virtual object superimposed on the composite image, displaying the virtual object in which at least a portion superimposed on the view finder object is masked, together with the view finder object. The method may comprise, in response to a fourth input for capturing the portion of the composite image together with the virtual object, displaying, in the view finder object, the masked at least portion of the virtual object using a specified transparency.
The method according to an example embodiment may comprise detecting a fourth input indicating tracking of an external object in the portion of the composite image specified by the view finder object, while displaying the view finder object on the composite image. The method may comprise, in response to the fourth input, changing the position of the view finder object on the composite image using a position of the external object associated with the composite image.
As described above, according to an example embodiment, a non-transitory computer-readable storage medium including instructions may be provided. The instructions, when executed by at least one processor, individually and/or collectively, of a wearable device comprising: a housing, a display disposed on at least a portion of the housing and arranged in front of an eye of a user based on wearing the wearable device, and a plurality of cameras configured to obtain images with respect to at least a portion of an external environment of the wearable device, may cause the wearable device to display, on the display, a view finder object, superimposed on a composite image of the images. The instructions, when executed by the processor, may cause the wearable device to, in response to receiving an input for shooting while displaying the view finder object at a first position of the display, store a first portion of the composite image corresponding to the first position in the memory. The instructions, when executed by the processor, may cause the wearable device to, in response to receiving an input for shooting while displaying the view finder object at a second position of the display, store a second portion of the composite image corresponding to the second position in the memory.
As described above, according to an example embodiment, a wearable device (e.g., the wearable device 101 of
As described above, according to an example embodiment, a wearable device may comprise: a housing, a first display configured, when worn by a user, to be disposed toward an eye of the user, a second display directed to a second direction opposite to a first direction to which the first display is directed, one or more cameras, memory storing instructions, comprising one or more storage media, and at least one processor comprising processing circuitry. At least one processor, individually and/or collectively, may be configured to execute the instructions and may be configured to cause the wearable device to: obtain images using the one or more cameras. At least one processor individually and/or collectively, may be configured to cause the wearable device to, control the first display to display a screen representing environment adjacent to the wearable device using at least portion of the images. At least one processor individually and/or collectively, may be configured to cause the wearable device to, while displaying the screen, receive a first input to execute a camera application. At least one processor individually and/or collectively, may be configured to cause the wearable device to, in response to the first input, control the first display to visually highlight, with respect to a remaining portion of the screen, a portion of the screen to be captured using the camera application. At least one processor individually and/or collectively, may be configured to cause the wearable device to, while displaying the portion of the screen that is visually highlighted with respect to the remaining portion of the screen, receive a second input to capture the portion of the screen. At least one processor individually and/or collectively, may be configured to cause the wearable device to, in response to the second input, capture the portion of the screen. At least one processor individually and/or collectively, may be configured to cause the wearable device to control the second display to display an indicator to notify performing shooting using the one or more cameras.
For example, at least one processor individually or collectively, may be configured to cause the wearable device to control the first display to display executable objects associated with the shooting, in a position adjacent to the portion of the screen that is visually highlighted with respect to the remaining portion of the screen.
For example, the executable objects may be superimposed on the screen.
For example, at least one processor, individually or collectively, may be configured to cause the wearable device to control the first display to further display a visual object for a focal point on the portion of the screen based on displaying the portion of the screen visually highlighted with respect to the remaining portion of the screen.
For example, at least one processor individually or collectively, may be configured to cause the wearable device to: in response to the second input, store the portion of the screen and a depth map with respect to the portion.
For example, the wearable device may comprise a button at least partially visible through the housing. At least one processor individually or collectively, may be configured to cause the wearable device to receive the second input through the button.
As described above, according to an example embodiment, a wearable device may comprise: a housing, a first display disposed on a first surface of the housing that, based on the wearable device being worn by a user, faces a face of the user, a second display disposed on a second surface of the housing that, based on the wearable device being worn by the user, faces an external environment of the wearable device, a plurality of cameras configured to obtain a plurality of images with respect to at least portion of the external environment of the wearable device, memory storing instructions, comprising one or more storage media, and at least one processor comprising processing circuitry. At least one processor individually or collectively, may be configured to execute the instructions and may be configured to cause the wearable device to: display, through the first display, a composite image with respect to at least portion of the external environment generated based on the plurality of images, and a view finder object at least partially superimposed on the composite image. At least one processor individually or collectively, may be configured to cause the wearable device to, in an image shooting mode, display, through the second display, a first visual notification corresponding to the image shooting mode while the composite image and the view finder object are displayed through the first display. At least one processor individually or collectively, may be configured to cause the wearable device to, in a video shooting mode, display, through the second display, a second visual notification corresponding to the video shooting mode while the composite image and the view finder object are displayed through the first display. At least one processor individually or collectively, may be configured to cause the wearable device to, in the image shooting mode or the video shooting mode, store at least portion of the composite image corresponding to the view finder object in the memory in response to a user input.
For example, at least one processor individually or collectively, may be configured to cause the wearable device to display, through the first display, a control handle at a position adjacent to the view finder object.
For example, at least one processor individually or collectively, may be configured to cause the wearable device to display, through the first display, the control handle including, within the control handle, at least one visual object corresponding to a function to browse an image or a video stored in the memory. At least one processor individually or collectively, may be configured to cause the wearable device to, based on an input to select the visual object, display, through the first display, a list including the image or the video.
For example, at least one processor individually or collectively, may be configured to cause the wearable device to control the first display to display a visual object associated with a focal point positioned within the view finder object.
For example, the wearable device may comprise: a button at least partially visible through the housing. At least one processor individually or collectively, may be configured to cause the wearable device to receive the input through the button.
For example, at least one processor individually or collectively, may be configured to cause the wearable device to, in response to the input, store the at least portion of the composite image and a depth map with respect to the at least portion.
For example, at least one processor individually or collectively, may be configured to cause the wearable device to, in response to the input, display, through the second display, at least one of the first visual notification or the second visual notification.
For example, at least one processor individually or collectively, may be configured to cause the wearable device to display, through the first display, executable objects associated with the image shooting mode or the video shooting mode.
For example, at least one processor individually or collectively, may be configured to cause the wearable device to display the view finder object by visually highlighting a portion of a screen displayed on the first display, corresponding to the at least portion of the composite image to be stored by the input, with respect to another portion of the screen.
As described above, according to an example embodiment, a method of operating a wearable device may be provided. The wearable device may comprise: a housing, a first display configured, based on being worn by a user, to be disposed toward an eye of the user, a second display directed to a second direction opposite to a first direction to which the first display is directed, and one or more cameras. The method may comprise obtaining images using the one or more cameras. The method may comprise controlling the first display to display a screen representing environment adjacent to the wearable device using at least portion of the images. The method may comprise, while displaying the screen, receiving a first input to execute a camera application. The method may comprise, in response to the first input, controlling the first display to visually highlight, with respect to a remaining portion of the screen, a portion of the screen to be captured using the camera application. The method may comprise, while displaying the portion of the screen that is visually highlighted with respect to the remaining portion of the screen, receiving a second input to capture the portion of the screen. The method may comprise, in response to the second input, capturing the portion of the screen. The method may comprise controlling the second display to display an indicator to notify performing shooting using the one or more cameras.
For example, the controlling the first display may comprise controlling the first display to display executable objects associated with the shooting, in a position adjacent to the portion of the screen visually highlighted with respect to the remaining portion of the screen.
For example, the executable objects may be superimposed on the screen.
For example, the controlling the first display may comprise controlling the first display to further display a visual object for a focal point on the portion of the screen based on displaying the portion of the screen visually highlighted with respect to the remaining portion of the screen.
For example, the capturing may comprise, in response to the second input, storing the portion of the screen and a depth map with respect to the portion.
The device described above may be implemented as a hardware component, a software component, and/or a combination of a hardware component and a software component. For example, the devices and components described in the various example embodiments may be implemented using one or more general purpose computers or special purpose computers, such as a processor, controller, arithmetic logic unit (ALU), digital signal processor, microcomputer, field programmable gate array (FPGA), programmable logic unit (PLU), microprocessor, or any other device capable of executing and responding to instructions. The processing device may run an operating system (OS) and one or more software applications executed on the operating system.
In addition, the processing device may access, store, manipulate, process, and generate data in response to the execution of the software. For convenience of understanding, a single processing device may be described as being used; however, a person of ordinary skill in the relevant technical field will appreciate that the processing device may include a plurality of processing elements and/or a plurality of types of processing elements. For example, the processing device may include a plurality of processors, or one processor and one controller. In addition, another processing configuration, such as a parallel processor, is also possible.
The software may include a computer program, code, instruction, or a combination of one or more thereof, and may configure the processing device to operate as desired or may command the processing device independently or collectively. The software and/or data may be embodied in any type of machine, component, physical device, computer storage medium, or device, to be interpreted by the processing device or to provide commands or data to the processing device. The software may be distributed on network-connected computer systems and stored or executed in a distributed manner. The software and data may be stored in one or more computer-readable recording medium.
The method according to an example embodiment may be implemented in the form of a program command that may be performed through various computer means and recorded on a computer-readable medium. In this case, the medium may store a program executable by the computer or may temporarily store the program for execution or download. In addition, the medium may be various recording means or storage means in the form of a single hardware component or a combination of several hardware components; it is not limited to a medium directly connected to a certain computer system, and may be distributed over a network. Examples of the media may include those configured to store program instructions, including a magnetic medium such as a hard disk, a floppy disk, and a magnetic tape, an optical recording medium such as a CD-ROM and a DVD, a magneto-optical medium such as a floptical disk, and ROM, RAM, flash memory, and the like. In addition, examples of other media may include recording media or storage media managed by app stores that distribute applications, sites that supply or distribute various software, servers, and the like.
While the disclosure has been illustrated and described with reference to various example embodiments, it will be understood that the various example embodiments are intended to be illustrative, not limiting. It will be further understood by those skilled in the art that various changes in form and detail may be made without departing from the full scope of the disclosure, including the appended claims and their equivalents. It will also be understood that any of the embodiment(s) described herein may be used in conjunction with any other embodiment(s) described herein.
Number | Date | Country | Kind
---|---|---|---
10-2023-0170072 | Nov. 2023 | KR | national
10-2023-0193656 | Dec. 2023 | KR | national
This application is a continuation of International Application No. PCT/KR2024/013510 designating the United States filed on Sep. 6, 2024, in the Korean Intellectual Property Receiving Office and claiming priority to Korean Patent Application Nos. 10-2023-0170072, filed on Nov. 29, 2023, and 10-2023-0193656, filed on Dec. 27, 2023, in the Korean Intellectual Property Office, the disclosures of each of which are incorporated by reference herein in their entireties.
 | Number | Date | Country
---|---|---|---
Parent | PCT/KR2024/013510 | Sep. 2024 | WO
Child | 18917183 | | US