This disclosure relates generally to pairing of devices and, in particular, to auto-pairing through rotation vectors.
Rotation vector (RV) can be described as a quaternion parameterization of a device's orientation in the earth's frame of reference. For example, a fixed orientation reference system may be defined by the directions east (E), north (N), and up (U). An RV may be specified with one or more values indicating by how many degrees, and with respect to which axis (E, N, U), a device has rotated. Electronic devices (e.g., smart phones, mobile terminals, smart glasses, wearables, etc.) can be equipped with inertial measurement unit (IMU) sensors (e.g., gyroscope, accelerometer, magnetometer) so that the device can calculate its own RV.
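As a concrete illustration of the quaternion parameterization in an E/N/U frame, the following Python sketch (purely illustrative and not part of this disclosure; the `(w, x, y, z)` quaternion layout, function name, and axis assignments are assumptions) rotates a reference "north" vector by a unit quaternion to recover the direction a device is pointing:

```python
import math

def rotate_vector(q, v):
    """Rotate 3-vector v by unit quaternion q = (w, x, y, z),
    using v' = v + w*t + r x t, where r = (x, y, z) and t = 2*(r x v)."""
    w, x, y, z = q
    tx = 2.0 * (y * v[2] - z * v[1])
    ty = 2.0 * (z * v[0] - x * v[2])
    tz = 2.0 * (x * v[1] - y * v[0])
    return (v[0] + w * tx + (y * tz - z * ty),
            v[1] + w * ty + (z * tx - x * tz),
            v[2] + w * tz + (x * ty - y * tx))

# A device that has rotated 90 degrees about the "up" (U) axis turns a
# camera initially pointing north (N) toward west (-E):
half = math.radians(90.0) / 2.0
yaw90 = (math.cos(half), 0.0, 0.0, math.sin(half))
e, n, u = rotate_vector(yaw90, (0.0, 1.0, 0.0))  # approximately (-1, 0, 0)
```

An identity quaternion (no rotation) leaves the reference vector unchanged, which corresponds to a device that has not rotated with respect to the E/N/U frame.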
RV has traditionally been used for positioning purposes, e.g., to determine a position of a device, or to determine a change in the device's position. However, the use of RV may be extended beyond that of positioning.
The following presents a simplified summary relating to one or more aspects and/or examples associated with the apparatus and methods disclosed herein. As such, the following summary should not be considered an extensive overview relating to all contemplated aspects and/or examples, nor should the following summary be regarded to identify key or critical elements relating to all contemplated aspects and/or examples or to delineate the scope associated with any particular aspect and/or example. Accordingly, the following summary has the sole purpose to present certain concepts relating to one or more aspects and/or examples relating to the apparatus and methods disclosed herein in a simplified form to precede the detailed description presented below.
An exemplary first device is disclosed. The first device may comprise a memory, a communicator, and a processor communicatively connected to the memory and the communicator. The processor may be configured to determine, utilizing an inertial measurement unit (IMU), a first rotation vector (RV) of a first camera of the first device. The processor may also be configured to receive one or more RVs from one or more devices including a second RV from a second device. The second RV may be an RV of a second camera of the second device. The processor may further be configured to determine whether the second RV is aligned with the first RV. The processor may yet be configured to auto-pair with the second device when the second RV is aligned with the first RV.
An exemplary method of a first device is disclosed. The method may comprise determining, utilizing an inertial measurement unit (IMU), a first rotation vector (RV) of a first camera of the first device. The method may also comprise receiving one or more RVs from one or more devices including a second RV from a second device. The second RV may be an RV of a second camera of the second device. The method may further comprise determining whether the second RV is aligned with the first RV. The method may yet comprise auto-pairing with the second device when the second RV is aligned with the first RV.
Another exemplary first device is disclosed. The first device may comprise means for determining, utilizing an inertial measurement unit (IMU), a first rotation vector (RV) of a first camera of the first device. The first device may also comprise means for receiving one or more RVs from one or more devices including a second RV from a second device. The second RV may be an RV of a second camera of the second device. The first device may further comprise means for determining whether the second RV is aligned with the first RV. The first device may yet comprise means for auto-pairing with the second device when the second RV is aligned with the first RV.
A non-transitory computer-readable medium storing computer-executable instructions for a first device is disclosed. The computer-executable instructions may comprise one or more instructions instructing the first device to determine, utilizing an inertial measurement unit (IMU), a first rotation vector (RV) of a first camera of the first device. The computer-executable instructions may also comprise one or more instructions instructing the first device to receive one or more RVs from one or more devices including a second RV from a second device. The second RV may be an RV of a second camera of the second device. The computer-executable instructions may further comprise one or more instructions instructing the first device to determine whether the second RV is aligned with the first RV. The computer-executable instructions may yet comprise one or more instructions instructing the first device to auto-pair with the second device when the second RV is aligned with the first RV.
An exemplary device is disclosed. The device may comprise a memory, a communicator, and a processor communicatively connected to the memory and the communicator. The processor may be configured to render a virtual scene based on a password of a user. The password may comprise a sequence of one or more symbols. The one or more symbols may comprise one or more visual symbols, one or more sound symbols, or both. The processor may also be configured to determine a selected vector sequence selected by the user within the virtual scene. The selected vector sequence may comprise a sequence of one or more vectors. Each vector may be a rotation vector (RV) or a game rotation vector (GRV). The processor may further be configured to determine whether the selected vector sequence matches the password. The processor may yet be configured to authenticate the user when the selected vector sequence matches the password.
An exemplary method of a device is disclosed. The method may comprise rendering a virtual scene based on a password of a user. The password may comprise a sequence of one or more symbols. The one or more symbols may comprise one or more visual symbols, one or more sound symbols, or both. The method may also comprise determining a selected vector sequence selected by the user within the virtual scene. The selected vector sequence may comprise a sequence of one or more vectors. Each vector may be a rotation vector (RV) or a game rotation vector (GRV). The method may further comprise determining whether the selected vector sequence matches the password. The method may yet comprise authenticating the user when the selected vector sequence matches the password.
Another exemplary device is disclosed. The device may comprise means for rendering a virtual scene based on a password of a user. The password may comprise a sequence of one or more symbols. The one or more symbols may comprise one or more visual symbols, one or more sound symbols, or both. The device may also comprise means for determining a selected vector sequence selected by the user within the virtual scene. The selected vector sequence may comprise a sequence of one or more vectors. Each vector may be a rotation vector (RV) or a game rotation vector (GRV). The device may further comprise means for determining whether the selected vector sequence matches the password. The device may yet comprise means for authenticating the user when the selected vector sequence matches the password.
A non-transitory computer-readable medium storing computer-executable instructions for a device is disclosed. The computer-executable instructions may comprise one or more instructions instructing the device to render a virtual scene based on a password of a user. The password may comprise a sequence of one or more symbols. The one or more symbols may comprise one or more visual symbols, one or more sound symbols, or both. The computer-executable instructions may also comprise one or more instructions instructing the device to determine a selected vector sequence selected by the user within the virtual scene. The selected vector sequence may comprise a sequence of one or more vectors. Each vector may be a rotation vector (RV) or a game rotation vector (GRV). The computer-executable instructions may further comprise one or more instructions instructing the device to determine whether the selected vector sequence matches the password. The computer-executable instructions may yet comprise one or more instructions instructing the device to authenticate the user when the selected vector sequence matches the password.
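The vector-sequence authentication described above can be sketched as follows. This is a hypothetical Python illustration only; the function names, the per-symbol direction vectors, and the tolerance angle are assumptions rather than details of this disclosure. Each password symbol is assumed to have a known direction in the rendered scene, and the user's selected vectors must point at the corresponding symbols, in order, within a tolerance:

```python
import math

def angle_between(v1, v2):
    """Angle in degrees between two 3-vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    c = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(c))

def sequence_matches(selected, password_dirs, tolerance_deg=10.0):
    """True if each selected vector points at the corresponding password
    symbol's direction, in order, within the tolerance angle."""
    if len(selected) != len(password_dirs):
        return False
    return all(angle_between(s, p) <= tolerance_deg
               for s, p in zip(selected, password_dirs))

# Password symbols rendered to the east and straight up:
password_dirs = [(1.0, 0.0, 0.0), (0.0, 0.0, 1.0)]
ok = sequence_matches([(0.99, 0.05, 0.0), (0.02, 0.0, 1.0)], password_dirs)
bad = sequence_matches([(0.0, 1.0, 0.0), (0.0, 0.0, 1.0)], password_dirs)
```

The tolerance plays the same role as the threshold angle used for pairing: it absorbs IMU measurement error while still rejecting selections that point at the wrong symbol.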
Other features and advantages associated with the apparatus and methods disclosed herein will be apparent to those skilled in the art based on the accompanying drawings and detailed description.
A more complete appreciation of aspects of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings which are presented solely for illustration and not limitation of the disclosure.
Other objects and advantages associated with the aspects disclosed herein will be apparent to those skilled in the art based on the accompanying drawings and detailed description. In accordance with common practice, the features depicted by the drawings may not be drawn to scale. Accordingly, the dimensions of the depicted features may be arbitrarily expanded or reduced for clarity. In accordance with common practice, some of the drawings are simplified for clarity. Thus, the drawings may not depict all components of a particular apparatus or method. Further, like reference numerals denote like features throughout the specification and figures.
Aspects of the present disclosure are illustrated in the following description and related drawings directed to specific embodiments. Alternate aspects or embodiments may be devised without departing from the scope of the teachings herein. Additionally, well-known elements of the illustrative embodiments herein may not be described in detail or may be omitted so as not to obscure the relevant details of the teachings in the present disclosure.
In certain described example implementations, instances are identified where various component structures and portions of operations can be taken from known, conventional techniques, and then arranged in accordance with one or more exemplary embodiments. In such instances, internal details of the known, conventional component structures and/or portions of operations may be omitted to help avoid potential obfuscation of the concepts illustrated in the illustrative embodiments disclosed herein.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As indicated above, many devices (e.g., smart phones, mobile terminals, smart glasses, etc.) may be able to calculate their own RVs based on measurements from IMU sensors (e.g., gyroscope, accelerometer, magnetometer, etc.). The gyroscope may provide an instantaneous rotation (e.g., angle, angular velocity) measurement. That is, the gyroscope may measure how fast the device is rotating.
An accelerometer typically provides the gravity direction within the measurement frame. That is, the accelerometer can provide the direction of gravity with respect to the current orientation of the device. Thus, the accelerometer can be used to verify and/or correct the orientation change reported by the gyroscope with respect to gravitational information.
A magnetometer typically provides the orientation of the device with respect to magnetic north.
Alternatively or in addition thereto, if the magnetometer (or the device itself) is calibrated, orientation with respect to true north may be provided. Thus, the magnetometer may be used to verify and/or correct the orientation change reported by the gyroscope with respect to the earth's north direction. Note that if only a change with respect to the north direction is of interest, the difference between true and magnetic north may not be of concern. Note that devices may also calculate game RVs (GRVs) instead of or in addition to their RVs. In a GRV, the Y axis need not point to north, but may point to a direction in some other reference frame.
It will be appreciated that the components may be implemented in different types of apparatuses in different implementations (e.g., in an ASIC, in a System-on-Chip (SoC), etc.). The illustrated components may also be incorporated into other apparatuses in a communication system. For example, other apparatuses in a system may include components similar to those described to provide similar functionality. Also, a given apparatus may contain one or more of the components. For example, an apparatus may include multiple transceiver components that enable the apparatus to operate on multiple carriers and/or communicate via different technologies.
The apparatuses 110, 120 may each include at least one communicator (represented by communicators 111, 121) for communicating with other devices. The communicators 111, 121 may be capable of communicating through wired and/or wireless protocols (e.g., Wi-Fi, Bluetooth, LTE, New Radio (NR), etc.). The communicator 111 may include at least one transmitter (represented by transmitter 112) for encoding and transmitting signals (e.g., messages, indications, information, and so on) and at least one receiver (represented by receiver 113) for receiving and decoding signals (e.g., messages, indications, information, pilots, and so on). The communicator 111 may also be referred to as a transceiver. The communicator 121 may include at least one transmitter (represented by transmitter 122) for transmitting signals (e.g., messages, indications, information, pilots, and so on) and at least one receiver (represented by receiver 123) for receiving signals (e.g., messages, indications, information, and so on). The communicator 121 may also be referred to as a transceiver.
A transmitter and a receiver may comprise an integrated device (e.g., embodied as a transmitter circuit and a receiver circuit of a single communicator) in some implementations, may comprise a separate transmitter device and a separate receiver device in some implementations, or may be embodied in other ways in other implementations. In an aspect, a transmitter may include a plurality of antennas, such as an antenna array, that permits the respective apparatus to perform transmit “beamforming,” as described further herein. Similarly, a receiver may include a plurality of antennas, such as an antenna array, that permits the respective apparatus to perform receive beamforming, as described further herein. In an aspect, the transmitter and receiver may share the same plurality of antennas, such that the respective apparatus can only receive or transmit at a given time, not both at the same time. A wireless communicator (e.g., one of multiple wireless communicators) of the apparatus 120 may also comprise a Network Listen Module (NLM) or the like for performing various measurements.
The apparatuses 110, 120 may also include other components used in conjunction with the operations as disclosed herein. The apparatus 110 may include a processing system 114 for providing functionality relating to, for example, communication with other devices, authentication, rotation vector functions, AR/XR functions, object detection, etc. The apparatus 120 may include a processing system 124 for providing functionality relating to, for example, communication with other devices, authentication, rotation vector functions, AR/XR functions, object detection, etc. In an aspect, the processing systems 114, 124 may each include, for example, one or more general purpose processors, multi-core processors, ASICs, digital signal processors (DSPs), field programmable gate arrays (FPGA), other programmable logic devices, processing circuitry, or any combination thereof.
The apparatuses 110, 120 may include measurement components 116 and 126, respectively, for obtaining RV measurements. The measurement component 116 may measure rotation vectors associated with the apparatus 110. The measurement component 116 may comprise a gyroscope, an accelerometer, a magnetometer, or any combination thereof. Similarly, the measurement component 126 may measure rotation vectors associated with the apparatus 120. The measurement component 126 may comprise a gyroscope, an accelerometer, a magnetometer, or any combination thereof. The measurement components 116, 126 may also be referred to as inertial measurement units (IMU) of the apparatuses 110, 120.
The apparatuses 110, 120 may include memory components 115 and 125 (e.g., each including a memory device), respectively, for maintaining information (e.g., information indicative of reserved resources, thresholds, parameters, and so on). In various implementations, the memory 115 may comprise a computer-readable medium storing one or more computer-executable instructions where the one or more instructions instruct the apparatus 110 (e.g., the processing system 114 in combination with other aspects of the apparatus 110) to perform any of the methods of
In addition, the apparatuses 110, 120 may include user interfaces 117 and 127, respectively, for providing indications (e.g., audible, visual, and/or haptic indications) to a user and/or for receiving user input (e.g., upon user actuation of a sensing device such as a keypad, a touch screen, a microphone, haptic actuators, and so on).
The apparatuses 110, 120 may respectively include camera components 118 and 128 for providing views, e.g., to take still pictures and/or record videos. In one aspect, the camera component 118 may be housed within the apparatus 110 (e.g., a camera of a mobile phone). Alternatively or in addition thereto, the camera component 118 may be housed in a separate unit, and a communication link (wired or wireless) may be established between the camera component 118 and the apparatus 110. Similarly, the camera 128 may be housed within the apparatus 120 (e.g., a camera of a mobile phone). Alternatively or in addition thereto, the camera 128 may be housed in a separate unit, and a communication link (wired or wireless) may be established between the camera 128 and the apparatus 120.
For convenience, the apparatuses 110, 120 are shown in
The apparatus 110 may transmit and receive messages via a link 160, which may be wireless, with the apparatus 120, the messages including information related to various types of communication (e.g., voice, data, multimedia services, associated control signaling, etc.). The wireless link 160 may operate over a communication medium of interest, shown by way of example in
In an aspect, it is proposed to use RV as a protocol for device pairing, e.g., auto-pairing of devices.
For illustration purposes, RVs associated with various devices are shown in
The first device may recognize that RV2 is aligned with RV1. For example, the first device may confirm that RV2 is in an opposite direction to RV1 (plus or minus a threshold angle). In other words, the first and second devices (or at least their cameras) may be facing each other. Alternatively or in addition thereto, the first device may confirm that RV2 is in the same direction as RV1 (again plus or minus the threshold angle). Note that the direction of pairing may be a designer choice. The threshold angle may be based on accuracies of the measurement components of the devices, security requirements, etc.
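The opposite-direction and same-direction checks with a threshold angle can be sketched as below. This is an illustrative Python fragment, not part of this disclosure; it assumes each RV has already been reduced to a unit pointing-direction vector for the camera, and the 15-degree threshold is an arbitrary example value:

```python
import math

def is_aligned(rv1_dir, rv2_dir, threshold_deg=15.0, opposite=True):
    """Check whether two camera pointing directions (unit 3-vectors
    derived from each device's RV) are aligned within a threshold angle.

    opposite=True  -> cameras facing each other (RV2 approx. -RV1)
    opposite=False -> cameras looking the same way (RV2 approx. RV1)
    """
    dot = sum(a * b for a, b in zip(rv1_dir, rv2_dir))
    if opposite:
        dot = -dot  # compare against the reversed direction
    # For unit vectors, dot = cos(angle); the vectors are within the
    # threshold iff angle <= threshold, i.e. dot >= cos(threshold).
    return dot >= math.cos(math.radians(threshold_deg))

# Two users holding their cameras face to face:
facing = is_aligned((0.0, 1.0, 0.0), (0.0, -1.0, 0.0))
# A camera rotated 45 degrees away is outside the 15-degree threshold:
skewed = is_aligned((0.0, 1.0, 0.0), (0.7071, 0.7071, 0.0))
```

Widening `threshold_deg` tolerates noisier IMUs at the cost of looser pairing, which mirrors the trade-off between measurement accuracy and security requirements noted above.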
Upon determining that the RV1 and RV2 are aligned, the first device may automatically pair with the second device. Once paired, the first and second devices may exchange information with each other. For example, the first device may send a view to the second device. The view may be a first camera view (view of its camera) and/or a rendered view, e.g., by processing the first camera view with AR and/or XR rendering. Alternatively or in addition thereto, the first device may receive a view from the second device. This view may be a second camera view (view of the camera of the second device) and/or a rendered second camera view. In an aspect, the first device may further process the view received from the second device.
One area in which the proposed auto-pairing may be used is in user-to-user (U2U) authentication, for example, in AR/XR situations. Enabling AR/XR use cases (e.g., gaming, navigation, business collaboration) can be of great value. Various form factors (e.g., smartphone, wearables, etc.) may be supported.
While the scenario illustrated in
Similarly, device 2 (or second device 420) may include the following: connectivity system 421, RV/GRV system 423, AR/XR system 424, object detection 425, IMU 426, decision module 427, camera 428 and selector 429. Each system or module 421, 423, 424, 425, 426, 427, 428 and 429 may be implemented in hardware or in a combination of hardware and software. For example, each system or module 421, 423, 424, 425, 426, 427, 428 and 429 may be implemented through a hardware circuitry or through one or more components of apparatus 120 of
In
The object detection 415 may perform computer vision processing of views from the camera 418. For example, the object detection 415 may analyze what is in front of the camera 418 (e.g., what is in front of the AR/XR smart glasses) to detect one or more objects of interest. The objects of interest may include a wearable unit such as smart glasses (e.g., AR/XR glasses of the second user), a human face (e.g., face of the second user), a mobile device (e.g., the second device 420 held by the second user), etc. In an aspect, the object detection 415 may be implemented through a processor (e.g., processing system 114) and/or a memory (e.g., memory component 115). The camera 418 may be implemented through a camera component (e.g., camera component 118).
In decision module 417, it may be determined whether the object of interest is detected. If so (i.e., object is in view of the camera 418), then the first RV (i.e., RV of the first device 410) may be provided to the connectivity system 411 (e.g., through a mixer or selector 419). The connectivity system 411 may broadcast the first RV to other devices including to the second device 420 as part of a network protocol. The connectivity system 411 may also receive the second RV from the second device 420. The connectivity system 411 may then auto-pair the first device 410 with the second device 420 if the first and second RVs are aligned with each other. In general, auto-pairing may be viewed as pairing of the first and second devices automatically taking place upon determination that the devices are aligned with each other. In an aspect, the decision module 417 may be implemented through a processor (e.g., processing system 114) and/or a memory (e.g., memory component 115). The connectivity system 411 may be implemented through a processor (e.g., processing system 114), a memory (e.g., memory component 115), and/or a communicator (e.g., communicator 111). The selector 419 may be implemented through a processor (e.g., processing system 114) and/or a memory (e.g., memory component 115).
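The broadcast/receive/auto-pair flow just described can be sketched as follows. This is a simplified, hypothetical Python simulation (the class, the direct method calls standing in for the network broadcast, and the 15-degree threshold are all assumptions for illustration); it shows one device pairing only with the neighbor whose RV opposes its own:

```python
import math

THRESHOLD_DEG = 15.0  # example threshold angle

class Device:
    """Minimal sketch: each device broadcasts its camera RV direction,
    compares incoming RVs against its own, and auto-pairs when the two
    cameras are facing each other within the threshold angle."""

    def __init__(self, name, rv_dir):
        self.name = name
        self.rv_dir = rv_dir       # unit vector derived from the RV
        self.paired_with = None

    def broadcast(self, neighbors):
        # Stand-in for the connectivity system's network broadcast.
        for other in neighbors:
            other.on_rv_received(self)

    def on_rv_received(self, sender):
        # Opposite-direction check: negate the dot product so facing
        # cameras yield a value near +1.
        dot = -sum(a * b for a, b in zip(self.rv_dir, sender.rv_dir))
        if dot >= math.cos(math.radians(THRESHOLD_DEG)):
            self.paired_with = sender.name  # auto-pair

alice = Device("alice", (0.0, 1.0, 0.0))
bob = Device("bob", (0.0, -1.0, 0.0))     # facing alice
carol = Device("carol", (1.0, 0.0, 0.0))  # facing elsewhere
alice.broadcast([bob, carol])
```

After the broadcast, only the facing device pairs; a real implementation would additionally gate pairing on object detection and a dwell time, as discussed further below in connection with the decision blocks.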
In block 510, the first device (e.g., RV/GRV system 413, IMU 416) may determine a first rotation vector (RV) of a first camera (e.g., camera 418) of the first device. Means for performing block 510 may include the measurement component 116, the processing system 114, and/or the memory component 115 of the apparatus 110. The camera component 118 of the apparatus 110 may be an example of the first camera.
In block 520, the first device (e.g., connectivity system 411) may receive one or more RVs from one or more devices. Means for performing block 520 may include the communicator 111, the processing system 114, and/or the memory component 115 of the apparatus 110. Among the one or more RVs may be a second RV from a second device (e.g., apparatus 120, 420). The second RV may be an RV of a second camera (e.g., camera 428) of the second device.
In block 530, the first device (e.g., RV/GRV system 413) may determine whether the second RV is aligned with the first RV. Means for performing block 530 may include the processing system 114 and/or the memory component 115 of the apparatus 110.
In one aspect, the orientations of the first and second RVs may be deemed comparable if they are in opposite orientations. For example, the cameras of the first and second devices may be facing each other. This is the situation illustrated in
Alternatively, the orientations of the first and second RVs may be deemed comparable if they are in a same orientation. For example, the cameras of the first and second devices may be looking in a same direction. Again, measurement errors may be taken into account. That is, the first device may determine that the first and second RVs are aligned when they are in a same orientation with each other within the threshold angle tolerance.
If the first device determines that the first and second RVs do not have comparable orientations (‘N’ branch from block 610), then in block 650, the first device may determine that the first and second RVs are not aligned. Means for performing block 650 may include the processing system 114 and/or the memory component 115 of the apparatus 110.
On the other hand, if the first device determines that the first and second RVs do have comparable orientations (‘Y’ branch from block 610), then in block 640, the first device may determine that the first and second RVs are aligned. Means for performing block 640 may include the processing system 114 and/or the memory component 115 of the apparatus 110.
But in an aspect, it may be desirable to verify that the users have intended the alignment of the RVs. One way for the users to show intent is to maintain the alignment of the RVs for a threshold time, e.g., two seconds. For example, in
In this aspect, if the first device determines that the first and second RVs do have comparable orientations (‘Y’ branch from block 610), then in block 620, the first device may determine whether the threshold time has passed. Means for performing block 620 may include the processing system 114 and/or the memory component 115 of the apparatus 110.
If the first device determines that the threshold time has not yet passed (‘N’ branch from block 620), the first device may proceed back to block 610 to determine whether the orientations of the first and second RVs remain comparable. This implies that the first and/or the second devices may continually monitor and broadcast their respective RVs. That is, blocks 510 and 520 may continually be performed. If the first device determines that the threshold time has passed (‘Y’ branch from block 620), the first device may proceed to block 640 to determine that the first and second RVs are aligned.
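The threshold-time loop of blocks 610 and 620 can be sketched as below. This is an illustrative Python fragment, not part of this disclosure; it assumes the alignment comparison is re-run continually and that each observation carries a timestamp, with the two-second value taken from the example above:

```python
def alignment_confirmed(samples, threshold_time=2.0):
    """Decide whether alignment has been held for threshold_time seconds.

    samples: list of (timestamp_seconds, aligned_bool) observations in
    time order, as produced by continually re-running the orientation
    comparison while both devices keep broadcasting their RVs.
    """
    held_since = None
    for t, aligned in samples:
        if not aligned:
            held_since = None        # alignment broke; restart the clock
        elif held_since is None:
            held_since = t           # alignment just (re)started
        elif t - held_since >= threshold_time:
            return True              # held long enough: RVs aligned
    return False

steady = [(0.0, True), (1.0, True), (2.1, True)]
interrupted = [(0.0, True), (1.5, False), (1.6, True), (3.0, True)]
```

Note that a momentary loss of alignment restarts the timer, so an accidental sweep of one camera past the other does not trigger pairing.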
Alternatively, a further check may be performed to determine whether the first and second RVs are aligned. In this alternative aspect, in addition to the first and second RVs having comparable orientations, it may also be required to determine that the first and second users are actually facing each other. To state it another way, it may also be required that objects associated with the devices be visible to each other.
This is illustrated in
In block 630, the first device (e.g., object detection 415, camera 418) may detect whether an object associated with the second device is within a view of the first camera. In other words, the first device may determine whether an object of interest is within the view of the first camera. For ease of reference, this view may also be referred to as the first camera view. Such objects may include a face (e.g., a face of user), a wearable unit (e.g., smart glasses), a mobile device (e.g., the second device itself), etc. Means for performing block 630 may include the camera component 118, the processing system 114, and/or the memory component 115 of the apparatus 110.
If the first device determines that the first and second RVs do not have comparable orientations (‘N’ branch from block 610) or determines that the object associated with the second device is not within the first camera view (‘N’ branch from block 630), then in block 650, the first device may determine that the first and second RVs are not aligned. Means for performing block 650 may include the processing system 114 and/or the memory component 115 of the apparatus 110.
On the other hand, if the first device determines that the first and second RVs do have comparable orientations (‘Y’ branch from block 610) and determines that the object associated with the second device is within the first camera view (‘Y’ branch from block 630), then in block 640, the first device may determine that the first and second RVs are aligned. Means for performing block 640 may include the processing system 114 and/or the memory component 115 of the apparatus 110.
Note that in
In an aspect, the first device may perform blocks 610 and 630 in parallel, perform block 610 followed by block 630, or perform block 630 followed by block 610. If blocks 610 and 630 are performed in parallel, execution may be faster relative to performing the blocks serially. For example, if both blocks 610 and 630 evaluate to true, performing these blocks in parallel should be faster than performing them serially. Of course, if block 610 (630) evaluates to false, performance of block 630 (610) may be stopped.
In one aspect, if block 610 is performed first, then block 630 may be performed only when block 610 determines that there is a second RV aligned with the first RV. That is, block 630 may serve as a confirmation of block 610. In another aspect, if block 630 is performed first, then block 610 may be performed only when block 630 detects that an object of interest is within the first camera view. That is, block 610 may serve as a confirmation of block 630. In this instance, a low resolution camera may be sufficient. If blocks 610 and 630 are performed sequentially, there may be some power savings relative to performing blocks 610 and 630 in parallel. For example, if block 610 (630) is performed first and evaluates to false, then other block 630 (610) need not be performed at all.
In an aspect, it may again be desirable to verify that the users have intended the alignment of the RVs, e.g., by maintaining the alignment of the RVs within each other's view for a threshold time. In this aspect, if the first device determines that the first and second RVs do have comparable orientations (‘Y’ branch from block 610) and that the object associated with the second device is within the first camera view (‘Y’ branch from block 630), then in block 620, the first device may determine whether the threshold time has passed. Means for performing block 620 may include the processing system 114 and/or the memory component 115 of the apparatus 110.
If the first device determines that the threshold time has not yet passed (‘N’ branch from block 620), the first device may proceed back to block 610 (to determine whether the orientations of the first and second RVs remain comparable) and block 630 (to determine whether the object associated with the second device remains in the first camera view). This again implies that the first and/or the second devices may continually monitor and broadcast their respective RVs. If the first device determines that the threshold time has passed (‘Y’ branch from block 620), the first device may proceed to block 640 to determine that the first and second RVs are aligned.
Referring back to
Alternatively, while not shown, the first device may request the user permission to pair with the second device upon determining that the first and second devices are aligned in block 530. In this alternative aspect, the pairing may take place when an input from the user indicates that the permission to pair is granted.
In block 710, the first device may determine a first rotation vector (RV) of a first camera of the first device. Block 710 may be assumed to be similar to block 510. Therefore, a detailed description thereof will be omitted for sake of brevity.
In block 715, the first device (e.g., connectivity system 411) may broadcast the first RV, e.g., to other devices within a neighborhood of the first device. Means for performing block 715 may include the communicator 111, the processing system 114, and/or the memory component 115 of the apparatus 110.
In block 720, the first device may receive one or more RVs from one or more devices, including the second RV from the second device. Block 720 may be assumed to be similar to block 520. Therefore, a detailed description thereof will be omitted for sake of brevity. Note that blocks 710, 715 and 720 may be continually performed.
In block 730, the first device may determine whether the second RV is aligned with the first RV. Block 730 may be assumed to be similar to block 530 including blocks of
When it is determined that the first and second RVs are aligned, the first device in block 740 may auto-pair with the second device. Block 740 may be assumed to be similar to block 540. Therefore, a detailed description thereof will be omitted for sake of brevity.
After auto-pairing with the second device, the first device (e.g., AR/XR system 414, camera 418, connectivity system 411) in block 750 may share information with the second device. Means for performing block 750 may include the communicator 111, the camera component 118, the processing system 114, and/or the memory component 115 of the apparatus 110.
The shared information may include a first shared view. In an aspect, the first shared view may simply be a view of the first camera, e.g., the view captured by the first camera without any augmentations or extensions. Such view may also be referred to as the first camera view. Alternatively or in addition thereto, the first shared view may be a rendered version of the first camera view, which also may be referred to as the first rendered view. For example, the first rendered view may be an augmented reality view of the first camera view, an extended reality view of the first camera view, or both.
Instead of or in addition to sharing the first shared view, the first device (e.g., AR/XR system 414, connectivity system 411) in block 760 may display a second shared view received from the second device. Block 760 may be performed after auto-pairing with the second device in block 740. Means for performing block 760 may include the communicator 111, the user interface 117, the processing system 114, and/or the memory component 115 of the apparatus 110.
The second shared view may simply be a view of the second camera of the second device, e.g., the view captured by the second camera without any augmentations or extensions, which may also be referred to as the second camera view. Alternatively or in addition thereto, the second shared view may be a rendered version of the second camera view, which also may be referred to as the second rendered view. For example, the second rendered view may be an augmented reality view of the second camera view, an extended reality view of the second camera view, or both. Note that the first device may render the second camera view and/or further render the second rendered view.
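The overall flow of blocks 710 through 740 might be sketched as below. The names `broadcast`, `received_rvs`, and `is_aligned` are hypothetical placeholders standing in for the connectivity system, the RVs received from neighboring devices, and the alignment test, respectively; they are not identifiers from this disclosure.

```python
def auto_pair_flow(first_rv, broadcast, received_rvs, is_aligned):
    """Sketch of blocks 710-740: broadcast own RV, scan received RVs,
    and auto-pair with the first device whose RV is aligned."""
    broadcast(first_rv)                                # block 715
    for device_id, second_rv in received_rvs.items():  # block 720
        if is_aligned(first_rv, second_rv):            # block 730
            return device_id                           # block 740: pairing target
    return None                                        # no aligned device found
```

In practice these blocks would repeat continually, as noted above for blocks 710, 715 and 720.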
As described with respect to
In an AR/XR scene, the device may render a virtual scene that includes the user's PIN and another set of randomly generated different characters. In the virtual scene, the characters may be placed randomly in space. The device may compute the RV/GRV (game rotation vector). The user may select a password (i.e., PIN) sequence by turning sequentially to each character and maintaining the orientation on each character for a short while. The device may compare the on-device RV/GRV logs with the ground truth used during rendering to provide a pass/fail result. In an aspect, the virtual scene may be bigger than a viewable scene. For example, the virtual scene may be greater than a field of view (FOV) of the AR/XR glasses. The user may pan across different portions of the virtual scene when the device's FOV is less than the virtual scene.
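The pass/fail comparison of on-device RV/GRV logs against the rendering ground truth could look roughly like the sketch below. It assumes each rendered character's direction and each dwelled orientation are given as unit vectors, and the 15-degree tolerance is an illustrative choice, not a value from this disclosure.

```python
import math

def _angle(u, v):
    """Angle in degrees between two 3-D unit vectors."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
    return math.degrees(math.acos(dot))

def nearest_symbol(logged_dir, ground_truth, threshold_deg=15.0):
    """Map one logged facing direction to the character rendered nearest
    to it, provided it falls within the threshold angle."""
    best = min(ground_truth, key=lambda sym: _angle(logged_dir, ground_truth[sym]))
    return best if _angle(logged_dir, ground_truth[best]) <= threshold_deg else None

def check_pin(logged_dirs, ground_truth, pin):
    """Pass/fail: the sequence of dwelled directions must decode to the PIN."""
    entered = [nearest_symbol(d, ground_truth) for d in logged_dirs]
    return entered == list(pin)
```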
While
Alternatively or in addition thereto, if spatial audio is available in the device, the password may include one or more pre-defined sounds. This is illustrated in
In block 1010, the device (e.g., AR/XR system 414) may render a virtual scene based on a password of a user. The password may be pre-registered with the device and may comprise a sequence of one or more symbols. The symbols of the password may comprise one or more visual symbols, one or more sound symbols, or both. Means for performing block 1010 may include the user interface 117, the processing system 114, and/or the memory component 115 of the apparatus 110. It should be noted that for visual symbols, a visual symbol may be differentiated from another visual symbol based on one or more characteristics such as color, font (if symbol is character), size, and so on.
Optionally, in block 1120, the device (e.g., RV/GRV system 413, AR/XR system 414) may distribute one or more visual symbols that are not included in the password throughout the virtual scene. Means for performing block 1120 may include the user interface 117, the processing system 114, and/or the memory component 115 of the apparatus 110. In an aspect, these non-password visual symbols may be randomly generated. For example, the non-password visual symbols generated in one authentication attempt may not be the same as the non-password visual symbols generated in another authentication attempt. Alternatively or in addition thereto, these non-password visual symbols may be randomly distributed. For example, the distribution of the non-password visual symbols in one authentication attempt may be different from the distribution of the non-password visual symbols in another authentication attempt.
In block 1125, for at least one sound symbol of the password, the device (e.g., RV/GRV system 413, AR/XR system 414) may render another sound symbol in another RV or another GRV determined for the another sound symbol. Means for performing block 1125 may include the user interface 117, the processing system 114, and/or the memory component 115 of the apparatus 110.
The at least one sound symbol may be different from the another sound symbol. For example, the at least one sound symbol may be a waterfall sound symbol and the another sound symbol may be a glass-breaking sound symbol. Also, the RV or the GRV may be different from the another RV or the another GRV. For example, if the at least one sound symbol is rendered as originating from the left, the another sound symbol may be rendered as originating from the right.
Further, the at least one sound symbol and the another sound symbol may be rendered contemporaneously. For example, the at least one sound symbol and the another sound symbol may be rendered simultaneously, or at least such that renderings of the at least one sound symbol and the another sound symbol overlap with each other at least partially. More generally, there may be a window of time defined in which both the at least one sound symbol and the another sound symbol will be rendered, and the user may choose between the sound symbols (e.g., by turning towards the rendered sound) during the window of time or immediately after it has passed.
Referring back to
In block 1020, the user may change the orientation of the device to enter the password within the virtual space. Note that block 1020 may apply to when the password includes one or more visual symbols or to when the password includes one or more sound symbols.
In block 1230, the device may log the vector in the selected vector sequence. Means for performing block 1230 may include the processing system 114 and/or the memory component 115 of the apparatus 110.
In block 1240, the device may determine whether the vector sequence selection has finished. If not (‘N’ branch from block 1240), the device may go back to block 1210. Otherwise, (‘Y’ branch from block 1240), the device may exit the process of implementing block 1020. Means for performing block 1240 may include the processing system 114 and/or the memory component 115 of the apparatus 110.
In an aspect, it may be desirable to verify that the user has intended the selection of the symbol within the virtual scene. One way for the user to show intent is to explicitly indicate (not shown), e.g., through a user interface, the selection of the symbol. For example, if the virtual scene is displayed on a display of a device such as a touch screen of a mobile device, the user may indicate by tapping the selected symbol on the screen. As another example, if the virtual scene is displayed on smart glasses such as AR/XR glasses, then the user may orient the glasses to center the selected symbol within view and tap a button input.
Another way is for the user to orient the device on the selected symbol and maintain the orientation for a threshold time, e.g., such as two seconds. Thus, after block 1210, the device (e.g., RV/GRV system 413, IMU 416) in block 1220 may determine whether the vector is held for the threshold time. Means for performing block 1220 may include the measurement component 116, the processing system 114, and/or the memory component 115 of the apparatus 110.
If the vector is held for the threshold time (‘Y’ branch from block 1220), then the device may proceed to block 1230 to log the vector. Otherwise (‘N’ branch from block 1220), the device may proceed to block 1240 to determine whether the vector sequence selection process is finished.
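The dwell-and-log loop of blocks 1210-1240 can be illustrated with the following sketch. It assumes, for illustration only, that the IMU delivers a stream of facing-direction samples as unit vectors, and it treats "held for the threshold time" as a run of consecutive samples staying within a small angle of where the run began.

```python
import math

def _angle(u, v):
    """Angle in degrees between two 3-D unit vectors."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
    return math.degrees(math.acos(dot))

def select_vector_sequence(samples, hold_threshold_deg=5.0, hold_count=3):
    """Sketch of blocks 1210-1240: a direction is logged (block 1230)
    once it is held within hold_threshold_deg of the start of its run
    for hold_count consecutive samples (block 1220)."""
    logged, run_start, run_len = [], None, 0
    for direction in samples:                # block 1210: determine vector
        if run_start is not None and _angle(direction, run_start) <= hold_threshold_deg:
            run_len += 1
            if run_len == hold_count:        # held for the threshold time
                logged.append(run_start)     # block 1230: log it once
        else:
            run_start, run_len = direction, 1  # new candidate dwell
    return logged                            # block 1240: selection finished
```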
Referring back to
Recall that the symbols distributed in the virtual scene include the symbols of the password. The device may then determine, for each symbol (visual or sound) of the password, a corresponding vector within the virtual scene. In an aspect, the password vector sequence may be the sequence of RVs or GRVs randomly generated in block 1110 and/or in block 1115. Thus, the password vector sequence may be generated in block 1310.
In block 1320, the device may determine whether a number of vectors in the password vector sequence and a number of vectors in the selected vector sequence are equal. Means for performing block 1320 may include the processing system 114 and/or the memory component 115 of the apparatus 110.
If the number of vectors of the password and selected vector sequences are not equal (‘N’ branch from block 1320), the device in block 1340 may determine that the selected vector sequence does not match the password. Means for performing block 1340 may include the processing system 114 and/or the memory component 115 of the apparatus 110.
If the number of vectors of the password and selected vector sequences are equal (‘Y’ branch from block 1320), then in block 1330, the device may determine whether all vectors of the password vector sequence match corresponding vectors of the selected vector sequence within a threshold angle. For example, for a vector of the selected vector sequence to match a corresponding vector of the password vector sequence, the vector of the selected vector sequence should be within the threshold angle of the corresponding vector of the password vector sequence. The threshold angle may be set according to a desired level of security. For example, if the security requirement is high, the threshold angle may be set low, i.e., set to be narrow. This implies that a greater precision is required from the user when the selected vector sequence is generated. Means for performing block 1330 may include the processing system 114 and/or the memory component 115 of the apparatus 110.
If not all vectors of the password vector sequence match the corresponding vectors of the selected vector sequence (‘N’ branch from block 1330), the device may proceed to block 1340 to determine that the selected vector sequence does not match the password. Means for performing block 1340 may include the processing system 114 and/or the memory component 115 of the apparatus 110.
If all vectors of the password vector sequence do match the corresponding vectors of the selected vector sequence (‘Y’ branch from block 1330), then in block 1350, the device may determine that the selected vector sequence does match the password. Means for performing block 1350 may include the processing system 114 and/or the memory component 115 of the apparatus 110.
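Blocks 1320 through 1350 amount to a length check followed by a per-vector angular comparison. A minimal sketch, assuming the vectors are reduced to unit direction vectors and using an illustrative 10-degree threshold angle:

```python
import math

def _angle(u, v):
    """Angle in degrees between two 3-D unit vectors."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
    return math.degrees(math.acos(dot))

def sequences_match(password_vecs, selected_vecs, threshold_deg=10.0):
    """Blocks 1320-1350: equal length, and every password vector matched
    by its corresponding selected vector within the threshold angle."""
    if len(password_vecs) != len(selected_vecs):    # block 1320
        return False                                # block 1340: no match
    return all(_angle(p, s) <= threshold_deg        # block 1330
               for p, s in zip(password_vecs, selected_vecs))
```

As noted above, narrowing `threshold_deg` demands more precision from the user and corresponds to a higher security setting.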
For visual symbols, each symbol of the selected symbol sequence may be a visual symbol located within a threshold angle of a position in the virtual scene indicated by a corresponding vector of the selected vector sequence. For sound symbols, each symbol of the selected symbol sequence may be a sound symbol rendered within the threshold angle within the virtual scene. The device may then determine, for each vector of the selected vector sequence, a symbol (visual or sound) located at the position in the virtual scene within the threshold angle. Thus, the selected symbol sequence may be generated in block 1410. Again, the threshold angle may be set based on a desired level of security.
In block 1420, the device may determine whether a number of symbols in the password and a number of symbols in the selected symbol sequence are equal. Means for performing block 1420 may include the processing system 114 and/or the memory component 115 of the apparatus 110.
If the number of symbols in the password and in the selected symbol sequence are not equal (‘N’ branch from block 1420), the device in block 1440 may determine that the selected vector sequence does not match the password. Means for performing block 1440 may include the processing system 114 and/or the memory component 115 of the apparatus 110.
If the number of symbols in the password and in the selected symbol sequence are equal (‘Y’ branch from block 1420), then in block 1430, the device may determine whether all symbols of the password match corresponding symbols of the selected symbol sequence.
If not all symbols of the password match the corresponding symbols of the selected symbol sequence (‘N’ branch from block 1430), the device may proceed to block 1440 to determine that the selected vector sequence does not match the password.
If all symbols of the password do match the corresponding symbols of the selected symbol sequence (‘Y’ branch from block 1430), then in block 1450, the device may determine that the selected vector sequence does match the password. Means for performing block 1450 may include the processing system 114 and/or the memory component 115 of the apparatus 110.
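The symbol-based variant of blocks 1410-1450 first decodes each selected vector into the nearest scene symbol and then compares symbol sequences. The sketch below assumes, for illustration, that symbol positions in the scene are given as unit direction vectors keyed by symbol.

```python
import math

def _angle(u, v):
    """Angle in degrees between two 3-D unit vectors."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
    return math.degrees(math.acos(dot))

def decode_symbols(selected_vecs, scene, threshold_deg=10.0):
    """Block 1410: map each selected vector to a scene symbol whose
    position lies within the threshold angle, or None if there is none."""
    decoded = []
    for vec in selected_vecs:
        hits = [sym for sym, pos in scene.items() if _angle(vec, pos) <= threshold_deg]
        decoded.append(hits[0] if hits else None)
    return decoded

def symbols_match(password, selected_vecs, scene, threshold_deg=10.0):
    """Blocks 1420-1450: the decoded symbol sequence must equal the
    password in both length and content."""
    selected = decode_symbols(selected_vecs, scene, threshold_deg)
    return len(selected) == len(password) and selected == list(password)
```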
Referring back to
Implementation examples are described in the following numbered clauses:
Clause 1: A method of a first device, the method comprising: determining, utilizing an inertial measurement unit (IMU), a first rotation vector (RV) of a first camera of the first device; receiving one or more RVs from one or more devices including a second RV from a second device, the second RV being an RV of a second camera of the second device; determining whether the second RV is aligned with the first RV; and auto-pairing with the second device when the second RV is aligned with the first RV.
Clause 2: The method of clause 1, wherein determining whether the second RV is aligned with the first RV comprises: determining that the second RV is aligned with the first RV when the first and second RVs have comparable orientations, the first and second RVs having comparable orientations if an orientation of the first RV is opposite to an orientation of the second RV within a threshold angle, or the orientation of the first RV is same as the orientation of the second RV within the threshold angle.
Clause 3: The method of clause 2, wherein determining whether the second RV is aligned with the first RV further comprises: determining that the second RV is aligned with the first RV when the orientations of the first and second RVs remain comparable for a threshold time.
Clause 4: The method of clause 1, wherein determining whether the second RV is aligned with the first RV comprises: determining whether the first and second RVs have comparable orientations, the first and second RVs having comparable orientations if an orientation of the first RV is opposite to an orientation of the second RV within a threshold angle, or the orientation of the first RV is same as the orientation of the second RV within the threshold angle; and determining whether an object associated with the second device is detected within a first camera view, the first camera view being a view of the first camera, wherein it is determined that the second RV is aligned with the first RV when the first and second RVs have comparable orientations and the object associated with the second device is detected within the first camera view.
Clause 5: The method of clause 4, wherein the object associated with the second device is any one or more of a face, a wearable unit, and a mobile device.
Clause 6: The method of clause 5, wherein the wearable unit comprises smart glasses.
Clause 7: The method of any of clauses 1-6, further comprising: broadcasting the first RV.
Clause 8: The method of any of clauses 1-7, further comprising: sharing, subsequent to auto-pairing with the second device, a first shared view with the second device, the first shared view being a first camera view or a first rendered view, the first camera view being a view of the first camera, and the first rendered view being a view after rendering the first camera view.
Clause 9: The method of clause 8, wherein the first rendered view is an augmented reality (AR) view of the first camera view, an extended reality (XR) view of the first camera view, or both.
Clause 10: The method of any of clauses 1-9, further comprising: displaying, subsequent to auto-pairing with the second device, a second shared view received from the second device, the second shared view being a second camera view or a second rendered view, the second camera view being a view of the second camera, and the second rendered view being a view after rendering the second camera view.
Clause 11: A method of a device, the method comprising: rendering a virtual scene based on a password of a user, the password comprising a sequence of one or more symbols, the one or more symbols comprising one or more visual symbols, one or more sound symbols, or both; determining a selected vector sequence selected by the user within the virtual scene, the selected vector sequence comprising a sequence of one or more vectors, each vector being a rotation vector (RV) or a game rotation vector (GRV); determining whether the selected vector sequence matches the password; and authenticating the user when the selected vector sequence matches the password.
Clause 12: The method of clause 11, wherein the password comprises the one or more visual symbols, and wherein rendering the virtual scene comprises: distributing the one or more visual symbols of the password throughout the virtual scene.
Clause 13: The method of clause 12, wherein rendering the virtual scene further comprises: distributing one or more visual symbols that are not included in the password throughout the virtual scene.
Clause 14: The method of any of clauses 11-13, wherein the password comprises the one or more sound symbols, and wherein rendering the virtual scene comprises: rendering, for each sound symbol of the password, the sound symbol in an RV or a GRV determined for the sound symbol; and rendering, for at least one sound symbol of the password, another sound symbol in another RV or another GRV determined for the another sound symbol, the at least one sound symbol and the another sound symbol being rendered contemporaneously, the at least one sound symbol being different from the another sound symbol, and the RV or the GRV being different from the another RV or the another GRV.
Clause 15: The method of any of clauses 11-14, wherein determining the selected vector sequence comprises: determining a vector of the device, the vector being an RV or a GRV; and logging the vector in the selected vector sequence, wherein determining and logging the vector repeats until a vector sequence selection process is finished.
Clause 16: The method of clause 15, wherein determining the selected vector sequence further comprises: logging the vector in the selected vector sequence when the vector is held for a threshold time.
Clause 17: The method of any of clauses 11-16, wherein determining whether the selected vector sequence matches the password comprises: generating a password vector sequence based on the password and the virtual scene, the password vector sequence comprising one or more vectors, each vector being an RV or a GRV; determining whether a number of vectors in the password vector sequence and a number of vectors in the selected vector sequence are equal; determining whether all vectors of the password vector sequence match corresponding vectors of the selected vector sequence within a threshold angle; determining that the selected vector sequence does not match the password when it is determined that the number of vectors in the password vector sequence and the number of vectors in the selected vector sequence are not equal, or not all vectors of the password vector sequence match the corresponding vectors of the selected vector sequence within the threshold angle, or both; and determining that the selected vector sequence does match the password when it is determined that the number of vectors in the password vector sequence and the number of vectors in the selected vector sequence are equal, and all vectors of the password vector sequence match the corresponding vectors of the selected vector sequence within the threshold angle.
Clause 18: The method of clause 17, wherein the threshold angle is set based on a level of security.
Clause 19: The method of any of clauses 11-16, wherein determining whether the selected vector sequence matches the password comprises: generating a selected symbol sequence comprising one or more symbols based on the selected vector sequence, each symbol of the selected symbol sequence being a symbol located within a threshold angle of a position in the virtual scene indicated by a corresponding vector of the selected vector sequence; determining whether a number of symbols in the password and a number of symbols in the selected symbol sequence are equal; determining whether all symbols of the password match corresponding symbols of the selected symbol sequence; determining that the selected vector sequence does not match the password when it is determined that the number of symbols in the password and the number of symbols in the selected symbol sequence are not equal, or not all symbols of the password match the corresponding symbols of the selected symbol sequence, or both; and determining that the selected vector sequence does match the password when it is determined that the number of symbols in the password and the number of symbols in the selected symbol sequence are equal, and all symbols of the password match the corresponding symbols of the selected symbol sequence.
Clause 20: The method of clause 19, wherein the threshold angle is set based on a level of security.
Clause 21: A first device comprising at least one means for performing a method of any of clauses 1-10.
Clause 22: A first device comprising a memory and a processor communicatively connected to the memory, the processor being configured to perform a method of any of clauses 1-10.
Clause 23: A non-transitory computer-readable medium storing code for a first device comprising a memory and a processor communicatively connected to the memory, and instructions stored in the memory and executable by the processor to cause the first device to perform a method of any of clauses 1-10.
Clause 24: A first device comprising at least one means for performing a method of any of clauses 11-20.
Clause 25: A first device comprising a memory and a processor communicatively connected to the memory, the processor being configured to perform a method of any of clauses 11-20.
Clause 26: A non-transitory computer-readable medium storing code for a first device comprising a memory and a processor communicatively connected to the memory, and instructions stored in the memory and executable by the processor to cause the first device to perform a method of any of clauses 11-20.
As used herein, the terms “user equipment” (or “UE”), “user device,” “user terminal,” “client device,” “communication device,” “wireless device,” “wireless communications device,” “handheld device,” “mobile device,” “mobile terminal,” “mobile station,” “handset,” “access terminal,” “subscriber device,” “subscriber terminal,” “subscriber station,” “terminal,” and variants thereof may interchangeably refer to any suitable mobile or stationary device that can receive wireless communication and/or navigation signals. These terms include, but are not limited to, a music player, a video player, an entertainment unit, a navigation device, a communications device, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, a computer, a wearable device, a laptop computer, a server, an automotive device in an automotive vehicle, and/or other types of portable electronic devices typically carried by a person and/or having communication capabilities (e.g., wireless, cellular, infrared, short-range radio, etc.). These terms are also intended to include devices which communicate with another device that can receive wireless communication and/or navigation signals such as by short-range wireless, infrared, wireline connection, or other connection, regardless of whether satellite signal reception, assistance data reception, and/or position-related processing occurs at the device or at the other device. In addition, these terms are intended to include all devices, including wireless and wireline communication devices, that are able to communicate with a core network via a radio access network (RAN), and through the core network the UEs can be connected with external networks such as the Internet and with other UEs. Of course, other mechanisms of connecting to the core network and/or the Internet are also possible for the UEs, such as over a wired access network, a wireless local area network (WLAN) (e.g., based on IEEE 802.11, etc.) and so on. 
UEs can be embodied by any of a number of types of devices including but not limited to printed circuit (PC) cards, compact flash devices, external or internal modems, wireless or wireline phones, smartphones, tablets, tracking devices, asset tags, and so on. A communication link through which UEs can send signals to a RAN is called an uplink channel (e.g., a reverse traffic channel, a reverse control channel, an access channel, etc.). A communication link through which the RAN can send signals to UEs is called a downlink or forward link channel (e.g., a paging channel, a control channel, a broadcast channel, a forward traffic channel, etc.). As used herein the term traffic channel (TCH) can refer to either an uplink/reverse or downlink/forward traffic channel.
The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any detail described herein as “exemplary” is not to be construed as advantageous over other examples. Likewise, the term “examples” does not mean that all examples include the discussed feature, advantage or mode of operation. Furthermore, a particular feature and/or structure can be combined with one or more other features and/or structures. Moreover, at least a portion of the apparatus described herein can be configured to perform at least a portion of a method described herein.
It should be noted that the terms “connected,” “coupled,” or any variant thereof, mean any connection or coupling, either direct or indirect, between elements, and can encompass a presence of an intermediate element between two elements that are “connected” or “coupled” together via the intermediate element unless the connection is expressly disclosed as being directly connected.
Any reference herein to an element using a designation such as “first,” “second,” and so forth does not limit the quantity and/or order of those elements. Rather, these designations are used as a convenient method of distinguishing between two or more elements and/or instances of an element. Also, unless stated otherwise, a set of elements can comprise one or more elements.
Those skilled in the art will appreciate that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
Nothing stated or illustrated in this application is intended to dedicate any component, action, feature, benefit, advantage, or equivalent to the public, regardless of whether the component, action, feature, benefit, advantage, or the equivalent is recited in the claims.
In the detailed description above it can be seen that different features are grouped together in examples. This manner of disclosure should not be understood as an intention that the claimed examples have more features than are explicitly mentioned in the respective claim. Rather, the disclosure may include fewer than all features of an individual example disclosed. Therefore, the following claims should hereby be deemed to be incorporated in the description, wherein each claim by itself can stand as a separate example. Although each claim by itself can stand as a separate example, it should be noted that, although a dependent claim can refer in the claims to a specific combination with one or more claims, other examples can also encompass or include a combination of said dependent claim with the subject matter of any other dependent claim or a combination of any feature with other dependent and independent claims. Such combinations are proposed herein, unless it is explicitly expressed that a specific combination is not intended. Furthermore, it is also intended that features of a claim can be included in any other independent claim, even if said claim is not directly dependent on the independent claim.
It should furthermore be noted that methods, systems, and apparatus disclosed in the description or in the claims can be implemented by a device comprising means for performing the respective actions and/or functionalities of the methods disclosed.
Furthermore, in some examples, an individual action can be subdivided into one or more sub-actions or contain one or more sub-actions. Such sub-actions can be contained in the disclosure of the individual action and be part of the disclosure of the individual action.
While the foregoing disclosure shows illustrative examples of the disclosure, it should be noted that various changes and modifications could be made herein without departing from the scope of the disclosure as defined by the appended claims. The functions and/or actions of the method claims in accordance with the examples of the disclosure described herein need not be performed in any particular order. Additionally, well-known elements will not be described in detail or may be omitted so as to not obscure the relevant details of the aspects and examples disclosed herein. Furthermore, although elements of the disclosure may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.