The present disclosure relates to the field of virtual reality in general. More particularly, and without limitation, the disclosed embodiments relate to electronic devices, systems, and methods for text input in a virtual environment.
Virtual reality (VR) systems or VR applications create a virtual environment and artificially immerse a user or simulate a user's presence in the virtual environment. The virtual environment is typically displayed to the user by an electronic device using a suitable virtual reality or augmented reality technology. For example, the electronic device may be a head-mounted display, such as a wearable headset, or a see-through head-mounted display. Alternatively, the electronic device may be a projector that projects the virtual environment onto the walls of a room or onto one or more screens to create an immersive experience. The electronic device may also be a personal computer.
VR applications are becoming increasingly interactive. In many situations, text data input at certain locations in the virtual environment is useful and desirable. However, traditional means of entering text data into an operating system, such as a physical keyboard or a mouse, are not well suited for text data input in the virtual environment. This is because a user immersed in the virtual reality environment typically does not see his or her hands, which may at the same time be holding a controller to interact with the objects in the virtual environment. Using a keyboard or mouse to input text data may require the user to leave the virtual environment or release the controller. Therefore, a need exists for methods and systems that allow for easy and intuitive text input in virtual environments without compromising the user's concurrent immersive experience.
The embodiments of the present disclosure include electronic systems and methods that allow for text input in a virtual environment. The exemplary embodiments use a hand-held controller and a text input processor to input text at suitable locations in the virtual environment based on one or more gestures detected by a touchpad and/or movements of the hand-held controller. Advantageously, the exemplary embodiments allow a user to input text through interacting with a virtual text input interface generated by the text input processor, thereby providing an easy and intuitive approach for text input in virtual environments and improving user experience.
According to an exemplary embodiment of the present disclosure, an electronic system for text input in a virtual environment is provided. The electronic system includes at least one hand-held controller, a detection system to determine the spatial position and/or movement of the at least one hand-held controller, and a text input processor to perform operations. The at least one hand-held controller includes a light blob, a touchpad to detect one or more gestures, and electronic circuitry to generate electronic instructions corresponding to the gestures. The detection system includes at least one image sensor to acquire one or more images of the at least one hand-held controller and a calculation device to determine the spatial position based on the acquired images. The operations include: receiving the spatial position and/or movement, such as rotation, of the at least one hand-held controller from the detection system; generating an indicator in the virtual environment at a coordinate based on the received spatial position and/or movement of the at least one hand-held controller; entering a text input mode when the indicator overlaps a text field in the virtual environment and upon receiving a trigger instruction from the at least one hand-held controller; receiving the electronic instructions from the at least one hand-held controller; and performing text input operations based on the received electronic instructions in the text input mode.
According to a further exemplary embodiment of the present disclosure, a method for text input in a virtual environment is provided. The method includes receiving, using at least one processor, a spatial position and/or movement of at least one hand-held controller. The at least one hand-held controller includes a light blob, a touchpad to detect one or more gestures, and electronic circuitry to generate one or more electronic instructions corresponding to the gestures. The method further includes generating, by the at least one processor, an indicator at a coordinate in the virtual environment based on the received spatial position and/or movement of the at least one hand-held controller; entering, by the at least one processor, a text input mode when the indicator overlaps a text field or a virtual button in the virtual environment and upon receiving a trigger instruction from the at least one hand-held controller; receiving, by the at least one processor, the electronic instructions from the at least one hand-held controller; and performing, by the at least one processor, text input operations based on the received electronic instructions in the text input mode.
According to a yet further exemplary embodiment of the present disclosure, a method for text input in a virtual environment is provided. The method includes: determining a spatial position and/or movement of at least one hand-held controller. The at least one hand-held controller includes a light blob, a touchpad to detect one or more gestures, and electronic circuitry to generate one or more electronic instructions based on the gestures. The method further includes generating an indicator at a coordinate in the virtual environment based on the spatial position and/or movement of the at least one hand-held controller; entering a standby mode ready to perform text input operations; entering a text input mode from the standby mode upon receiving a trigger instruction from the at least one hand-held controller; receiving the electronic instructions from the at least one hand-held controller; and performing text input operations based on the received electronic instructions in the text input mode.
The details of one or more variations of the subject matter disclosed herein are set forth below and in the accompanying drawings. Other features and advantages of the subject matter disclosed herein will be apparent from the detailed description below, the accompanying drawings, and the claims.
Further modifications and alternative embodiments will be apparent to those of ordinary skill in the art in view of the disclosure herein. For example, the systems and the methods may include additional components or steps that are omitted from the diagrams and description for clarity of operation. Accordingly, the detailed description below is to be construed as illustrative only and is for the purpose of teaching those skilled in the art the general manner of carrying out the present disclosure. It is to be understood that the various embodiments disclosed herein are to be taken as exemplary. Elements and structures, and arrangements of those elements and structures, may be substituted for those illustrated and disclosed herein, objects and processes may be reversed, and certain features of the present teachings may be utilized independently, all as would be apparent to one skilled in the art after having the benefit of the disclosure herein.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the present disclosure, and together with the description, serve to explain the principles of the disclosure.
This description and the accompanying drawings that illustrate exemplary embodiments should not be taken as limiting. Various mechanical, structural, electrical, and operational changes may be made without departing from the scope of this description and the claims, including equivalents. Similar reference numbers in two or more figures represent the same or similar elements. Furthermore, elements and their associated features that are disclosed in detail with reference to one embodiment may, whenever practical, be included in other embodiments in which they are not specifically shown or described. For example, if an element is described in detail with reference to one embodiment and is not described with reference to a second embodiment, the element may nevertheless be claimed as included in the second embodiment.
The disclosed embodiments relate to electronic systems and methods for text input in a virtual environment created by a virtual reality or augmented reality technology. The virtual environment may be displayed to a user by a suitable electronic device, such as a head-mounted display (e.g., a wearable headset or a see-through head-mounted display), a projector, or a personal computer. Embodiments of the present disclosure may be implemented in a VR system that allows a user to interact with the virtual environment using a hand-held controller.
According to an aspect of the present disclosure, an electronic system for text input in a virtual environment includes a hand-held controller. The hand-held controller may include a light blob that emits visible and/or infrared light. For example, the light blob may emit visible light of one or more colors, such as red, green, and/or blue, and infrared light, such as near infrared light. According to another aspect of the present disclosure, the hand-held controller may include a touchpad that has one or more sensing areas to detect gestures of a user. The hand-held controller may further include electronic circuitry in connection with the touchpad that generates text input instructions based on the gestures detected by the touchpad.
According to another aspect of the present disclosure, a detection system is used to track the spatial position and/or movement of the hand-held controller. The detection system may include one or more image sensors to acquire one or more images of the hand-held controller. The detection system may further include a calculation device to determine the spatial position based on the acquired images. Advantageously, the detection system allows for accurate and automated identification and tracking of the hand-held controller by utilizing the visible and/or infrared light from the light blob, thereby allowing for text input at positions in the virtual environment selected by moving the hand-held controller by the user.
According to another aspect of the present disclosure, the spatial position of the hand-held controller is represented by an indicator at a corresponding position in the virtual environment. For example, when the indicator overlaps a text field or a virtual button in the virtual environment, the text field may be configured to display text input by the user. In such instances, electronic instructions based on gestures detected by a touchpad and/or movement of the hand-held controller may be used for performing text input operations. Advantageously, the use of gestures and the hand-held controller allows the user to input text in the virtual environment at desired locations via easy and intuitive interaction with the virtual environment.
Reference will now be made in detail to embodiments and aspects of the present disclosure, examples of which are illustrated in the accompanying drawings. Where possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. Those of ordinary skill in the art in view of the disclosure herein will recognize that features of one or more of the embodiments described in the present disclosure may be selectively combined or alternatively used.
Touchpad 122 includes one or more tactile sensing areas that detect gestures applied by at least one finger of the user. For example, touchpad 122 may include one or more capacitive-sensing or pressure-sensing sensors that detect motions or positions of one or more fingers on touchpad 122, such as tapping, clicking, scrolling, swiping, pinching, or rotating. In some embodiments, as shown in
In some embodiments, as shown in
In some embodiments, the images acquired by both image sensors 210a and 210b may be further processed by image processing device 220 before being used for extracting spatial position and/or movement information of hand-held controller 100. Image processing device 220 may receive the acquired images directly from image acquisition device 210 or through communication device 240. Image processing device 220 may include one or more processors selected from a group of processors, including, for example, a microcontroller (MCU), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a complex programmable logic device (CPLD), a digital signal processor (DSP), an ARM-based processor, etc. Image processing device 220 may perform one or more image processing operations stored in a non-transitory computer-readable medium. The image processing operations may include denoising, one or more types of filtering, enhancement, edge detection, segmentation, thresholding, dithering, etc. The processed images may be used by calculation device 230 to determine the position of light blob 110 in the processed images and/or acquired images. Calculation device 230 may then determine the spatial position and/or movement of hand-held controller 100 and/or light blob 110 in a 3-D space or on a 2-D plane based on the position of hand-held controller 100 in the images and one or more parameters. These parameters may include the focal lengths and/or focal points of the image sensors, the distance between two image sensors, etc.
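By way of non-limiting illustration only, the following minimal sketch shows how a calculation device such as calculation device 230 might triangulate the 3-D position of a tracked light blob from its pixel coordinates in two images, assuming a rectified stereo pair and a pinhole-camera simplification; the function name, parameter names, and numeric values are assumptions for illustration and not the disclosed implementation.

```python
# Illustrative sketch only: triangulate the light blob's 3-D position from a
# rectified stereo image pair (pinhole-camera simplification, hypothetical names).

def triangulate_blob(u_left, v_left, u_right, focal_px, baseline_m, cx, cy):
    """Return (x, y, z) in meters in the left camera's frame.

    u/v are the blob centroid pixel coordinates found by image processing
    (e.g., thresholding on the blob's color); focal_px is the focal length in
    pixels, baseline_m the distance between the two image sensors in meters,
    and (cx, cy) the principal point of the left image.
    """
    disparity = u_left - u_right           # horizontal pixel shift between the two views
    if disparity <= 0:
        raise ValueError("blob must be visible in both images with positive disparity")
    z = focal_px * baseline_m / disparity  # depth from similar triangles
    x = (u_left - cx) * z / focal_px
    y = (v_left - cy) * z / focal_px
    return x, y, z

# Example: blob centroid at (652, 410) in the left image and (612, 410) in the right.
print(triangulate_blob(652, 410, 612, focal_px=800.0, baseline_m=0.12, cx=640, cy=360))
```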
In some embodiments, calculation device 230 may receive movement data acquired by IMU 130 of
In some embodiments, the coordinate of indicator 420 in virtual environment 400 changes with the spatial position of hand-held controller 100. Therefore, a user may select a desired text field 410 to input text by moving hand-held controller 100 in a direction such that indicator 420 moves towards the desired text field 410. When indicator 420 overlaps the desired text field 410 in virtual environment 400, text input processor 300 of
As described herein, a current coordinate of indicator 420 in virtual environment 400 can be determined from the spatial position and/or movement of hand-held controller 100 based on a selected combination of parameters. For example, one or more measurements of the spatial position, orientation, linear motion, and/or rotation may be used to determine a corresponding coordinate of indicator 420. This in turn improves the accuracy with which the coordinate of indicator 420 in virtual environment 400 represents the spatial position of hand-held controller 100.
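Purely as an illustration of combining such parameters, the following sketch derives an indicator coordinate from a tracked position plus an orientation-dependent offset; the blending scheme, class and field names, and scale factors are assumptions made for illustration only.

```python
# Illustrative sketch: derive an indicator coordinate in the virtual environment
# from the controller's tracked position and rotation (all names hypothetical).
from dataclasses import dataclass
import math

@dataclass
class ControllerState:
    position: tuple      # (x, y, z) of the light blob in meters, from the detection system
    yaw: float           # rotation about the vertical axis, in radians, e.g. from an IMU
    pitch: float         # rotation about the lateral axis, in radians

def indicator_coordinate(state: ControllerState, scale=1.5, reach=2.0):
    """Map controller pose to a 3-D coordinate for the indicator.

    The tracked position is scaled into virtual-world units and then offset along
    the direction the controller points, so small wrist rotations move the
    indicator without requiring large arm movements.
    """
    px, py, pz = (c * scale for c in state.position)
    dx = math.cos(state.pitch) * math.sin(state.yaw)
    dy = math.sin(state.pitch)
    dz = math.cos(state.pitch) * math.cos(state.yaw)
    return (px + reach * dx, py + reach * dy, pz + reach * dz)

print(indicator_coordinate(ControllerState((0.1, 1.2, 0.3), yaw=0.2, pitch=-0.1)))
```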
In some embodiments, a user may select a desired text field 410 to input text by moving hand-held controller 100 in a direction such that indicator 420 overlaps a virtual button (not shown), such as a virtual TAB key, in virtual environment 400. Additionally or alternatively, a trigger instruction may be generated by a gesture, such as a clicking, sliding, or tapping gesture, for selecting the desired text field 410 for text input. The clicking, sliding, or tapping gesture can be detected by touchpad 122 of hand-held controller 100.
In the standby mode, upon receiving a trigger instruction, text input processor 300 may enter a text input mode, in which text input interface generator 310 of
In some embodiments, when indicator 420 moves away from text field 410 or a virtual button, e.g., due to movement of hand-held controller 100, text input processor 300 may remain in the text input mode. However, when further operations are performed while indicator 420 is away from text field 410, text input processor 300 may exit the text input mode.
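As one possible illustration of the mode handling described above, the following sketch models the standby and text-input transitions, including remaining in the text input mode when the indicator merely drifts away and exiting when further operations occur while away from the text field; the event names and class structure are assumptions, not the disclosed implementation.

```python
# Illustrative sketch of the mode transitions described above (hypothetical names).

class TextInputModeController:
    def __init__(self):
        self.mode = "idle"            # "idle" -> "standby" -> "text_input"
        self.overlapping = False      # whether the indicator currently overlaps a text field

    def on_indicator_update(self, overlaps_text_field: bool):
        if self.mode == "idle" and overlaps_text_field:
            self.mode = "standby"
        elif self.mode == "standby" and not overlaps_text_field:
            self.mode = "idle"
        self.overlapping = overlaps_text_field   # text_input mode is kept even if overlap is lost

    def on_trigger_instruction(self):
        if self.mode == "standby":
            self.mode = "text_input"

    def on_text_operation(self):
        # Performing further operations while the indicator is away from the field exits the mode.
        if self.mode == "text_input" and not self.overlapping:
            self.mode = "idle"

ctrl = TextInputModeController()
ctrl.on_indicator_update(True)
ctrl.on_trigger_instruction()
ctrl.on_indicator_update(False)
ctrl.on_text_operation()
print(ctrl.mode)   # -> "idle"
```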
Exemplary text input operations performed by text input processor 300 based on the exemplary embodiment of text input interface 430 of
As shown in
First virtual interface 432 may use any suitable type or layout of virtual keypad and is not limited to the examples described herein. Instructions to select and/or input text strings or text characters may be generated by various interactive movements of hand-held controller 100 suitable for the selected type or layout of virtual keypad. In some embodiments, first virtual interface 432 may be a ray-casting keyboard, where one or more rays simulating laser rays may be generated and pointed towards keys in the virtual keyboard in virtual environment 400. Changing the orientation and/or position of hand-held controller 100 may direct the laser rays to keys that have the desired characters. Additionally or alternatively, one or more virtual drumsticks or other visual indicators may be generated and pointed towards keys in the virtual keyboard in virtual environment 400. Changing the orientation and/or position of hand-held controller 100 may direct the drumsticks to touch or tap keys that have the desired characters. In other embodiments, first virtual interface 432 may be a direct-touching keyboard displayed on a touch screen or surface. Clicking or tapping the keys in the keyboard allows for the selection of the desired characters represented by the keys.
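By way of non-limiting illustration of the ray-casting approach mentioned above, the following sketch intersects a ray derived from the controller pose with a keyboard plane and maps the hit point to a key; the 3-by-3 key layout, the plane placement, and all names are assumptions for illustration only.

```python
# Illustrative sketch of ray-casting key selection (hypothetical layout and names):
# a ray from the controller is intersected with the keyboard plane z = plane_z,
# and the intersection point is mapped to a key of a 3-by-3 grid.

def pick_key(ray_origin, ray_dir, plane_z, grid_origin, key_size, keys):
    """Return the key label hit by the ray, or None if it misses the keyboard."""
    ox, oy, oz = ray_origin
    dx, dy, dz = ray_dir
    if dz == 0:
        return None                       # ray is parallel to the keyboard plane
    t = (plane_z - oz) / dz
    if t <= 0:
        return None                       # keyboard is behind the controller
    hx, hy = ox + t * dx, oy + t * dy     # intersection point within the plane
    col = int((hx - grid_origin[0]) // key_size)
    row = int((grid_origin[1] - hy) // key_size)   # rows counted downward from the top edge
    if 0 <= row < 3 and 0 <= col < 3:
        return keys[row][col]
    return None

KEYS = [["abc", "def", "ghi"], ["jkl", "mno", "pqr"], ["stu", "vwx", "yz_"]]
print(pick_key((0, 1.5, 0), (0.05, -0.1, 1.0), plane_z=2.0,
               grid_origin=(-0.3, 1.6), key_size=0.2, keys=KEYS))
```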
Second virtual interface 434 may display one or more candidate text strings based on the characters selected by the user from first virtual interface 432 (The exemplary candidate text strings “XYZ” shown in
In some embodiments, text input interface 430 may include a third virtual interface 436. Third virtual interface 436 may include one or more functional keys, such as modifier keys, navigation keys, and system command keys, to perform functions, such as switching between lowercase and uppercase or switching between traditional or simplified characters. A functional key in third virtual interface 436 may be selected by moving indicator 420 towards the key such that indicator 420 overlaps the selected key. In such instances, the function of the selected key may be activated when text input processor 300 receives an electronic instruction corresponding to a clicking, sliding, or tapping gesture detected by touchpad 122. Other suitable gestures may be used to select a suitable functional key in third virtual interface 436. For example, the electronic instruction for activating a functional key in third virtual interface 436 may be generated by pressing a control button (not shown) on hand-held controller 100.
As shown in
In some embodiments, states a0 to an correspond to different layouts or types of the virtual keypad. The description below uses a plurality of 3-by-3 grid layouts of the keypad as an example. Each grid layout of first virtual interface 432 may display a different set of characters. For example, a first grid layout in state a0 may display the letters of the Latin alphabet, letters of other alphabetical languages, or letters or root shapes of non-alphabetical languages (e.g., Chinese), a second grid layout in state a1 may display numbers from 0 to 9, and a third grid layout in state a2 may display symbols and/or signs. In such instances, a current grid layout for text input may be selected from the plurality of grid layouts of first virtual interface 432 based on one or more electronic instructions from hand-held controller 100.
In some embodiments, text input interface generator 310 may switch first virtual interface 432 from a first grid layout in state a0 to a second grid layout in state a1 based on an electronic instruction corresponding to a first sliding gesture detected by touchpad 122. For example, the first sliding gesture, represented by A in
Additionally, as shown in
In each of the states a0 to an, one of the grid layouts of first virtual interface 432 is displayed, and a character may be selected for text input by selecting the key having that character in the virtual keypad. For example, the sensing areas of touchpad 122 may have a corresponding 3-by-3 grid keypad layout such that a character-selection gesture of the user detected by a sensing area of touchpad 122 may generate an electronic instruction for text input processor 300 to select one of the characters in the corresponding key in first virtual interface 432. The character-selection gesture, represented by B in
As described herein, text input processor 300 may switch between the operational states a0 to an in the character-selection mode based on electronic instructions received from hand-held controller 100, such as the sliding gestures described above. Alternatively, electronic instructions from hand-held controller 100 may be generated based on the movement of hand-held controller 100. Such movement may include rotating, leaping, tapping, rolling, tilting, jolting, or other suitable movement of hand-held controller 100.
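To make the state switching concrete, here is a minimal, non-limiting sketch of a character-selection mode that cycles between grid layouts on opposite-direction sliding gestures and selects a key from the current layout on a grid tap; the layout contents, gesture names, and class structure are illustrative assumptions rather than the disclosed implementation.

```python
# Illustrative sketch of the character-selection mode (hypothetical names):
# states a0..an correspond to 3-by-3 grid layouts, and opposite-direction
# sliding gestures move to the previous or the next layout.

LAYOUTS = [
    [["abc", "def", "ghi"], ["jkl", "mno", "pqr"], ["stu", "vwx", "yz"]],  # state a0: letters
    [["1", "2", "3"], ["4", "5", "6"], ["7", "8", "9"]],                   # state a1: digits
    [["!", "?", "."], [",", ";", ":"], ["@", "#", "%"]],                   # state a2: symbols
]

class CharacterSelectionMode:
    def __init__(self):
        self.state = 0                                    # index into LAYOUTS (state a0)

    def on_slide(self, direction: str):
        # "left" plays the role of the first sliding gesture, "right" the second.
        step = -1 if direction == "left" else 1
        self.state = (self.state + step) % len(LAYOUTS)

    def on_grid_tap(self, row: int, col: int) -> str:
        # A tap on the corresponding sensing area of the touchpad selects the key.
        return LAYOUTS[self.state][row][col]

mode = CharacterSelectionMode()
mode.on_slide("right")                 # a0 -> a1 (digits)
print(mode.on_grid_tap(0, 2))          # -> "3"
```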
Advantageously, in the character-selection mode of text input processor 300, a user may select one or more characters from one or more grid layouts of first virtual interface 432 by applying intuitive gestures on touchpad 122 while being immersed in virtual environment 400. Text input interface generator 310 may display one or more candidate text strings in second virtual interface 434 based on the characters selected by the user.
Text input processor 300 may switch from the character-selection mode to the string-selection mode when one or more candidate text strings are displayed in second virtual interface 434. For example, based on an electronic instruction corresponding to a third sliding gesture detected by touchpad 122, text input processor 300 may switch from any of the states a0 to an to state X, where second virtual interface 434 is activated for string selection. The third sliding gesture, represented by C1 in
As shown in
Text input processor 300 may switch from any of states S1 to Sn to state X, i.e., the string-selection mode, based on an electronic instruction corresponding to the third sliding gesture, C1, detected by touchpad 122. In state X, second virtual interface 434 is activated for string selection. However, in state 00, when no strings are displayed in second virtual interface 434 or when second virtual interface 434 is closed, text input processor 300 may not switch from state 00 to state X.
As described herein, text input processor 300 may switch between the operational states S1 to Sn in the string-selection mode based on electronic instructions received from hand-held controller 100. As described above, electronic instructions from hand-held controller 100 may be generated based on the gestures detected by touchpad 122 and/or movement of hand-held controller 100. Such movement may include rotating, leaping, tapping, rolling, tilting, jolting, or other types of intuitive movement of hand-held controller 100.
In some embodiments, when text input processor 300 is in state X or operates in the string-selection mode, each of the sensing areas of the 3-by-3 grid layout of touchpad 122 and/or each of the virtual keys of first virtual interface 432 may be assigned a number, e.g., a number selected from 1 to 9. As shown in
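As an illustrative sketch only, the numbering scheme described above might map the nine sensing areas of the 3-by-3 touchpad grid to the numbered candidate text strings as follows; the function names and the example candidates are assumptions made for illustration.

```python
# Illustrative sketch: in the string-selection mode each of the nine sensing
# areas of the 3-by-3 touchpad grid is assigned a number 1..9, and tapping an
# area selects the candidate text string displayed with that number.

def area_to_number(row: int, col: int) -> int:
    """Map a (row, col) sensing area of the 3-by-3 grid to a number from 1 to 9."""
    return row * 3 + col + 1

def select_candidate(candidates, row, col):
    """Return the candidate string assigned to the tapped sensing area, if any."""
    index = area_to_number(row, col) - 1
    return candidates[index] if index < len(candidates) else None

candidates = ["XYZ", "XYZA", "XYZB"]                 # numbered 1, 2, 3 in the candidate list
print(select_candidate(candidates, row=0, col=1))    # area number 2 -> "XYZA"
```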
In some embodiments, more than one desired text string may be selected in sequence from the candidate text strings displayed in second virtual interface 434. Additionally or alternatively, one or more characters may be added to or removed from the candidate text strings after a desired text string is selected. In such instances, text input interface generator 310 may update the candidate text strings and/or the numbering of the candidate text strings displayed in second virtual interface 434. As shown in
In some embodiments, when second virtual interface 434 is closed or deactivated, e.g., based on an electronic instruction corresponding to a gesture detected by touchpad 122, a user may edit the text input already in text field 410. For example, when an electronic instruction corresponding to a backspace operation is received, text input interface generator 310 may delete a character in text field 410 before cursor 412, e.g., a character Z in the text string XYZ, as shown in
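By way of illustration only, the backspace behaviour described above might be modelled as in the following sketch, which assumes hypothetical names: while the candidate interface is open the last selected character is removed from the pending selection, and once the interface is closed the character just before the cursor in the text field is deleted.

```python
# Illustrative sketch of the backspace behaviour (hypothetical names).

def backspace(text_field: str, cursor: int, selected_chars: list, interface_open: bool):
    if interface_open and selected_chars:
        selected_chars.pop()                       # shrink the pending character selection
        return text_field, cursor, selected_chars
    if not interface_open and cursor > 0:
        text_field = text_field[:cursor - 1] + text_field[cursor:]
        cursor -= 1                                # e.g. "XYZ" with the cursor at 3 becomes "XY"
    return text_field, cursor, selected_chars

print(backspace("XYZ", 3, [], interface_open=False))   # -> ('XY', 2, [])
```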
As described herein, the operational states of text input processor 300 described in reference to
In some embodiments, two hand-held controllers 100 may be used to increase the efficiency and convenience with which a user performs the above-described text input operations. For example, text input interface 430 may include two first virtual interfaces 432, each corresponding to a hand-held controller 100. One hand-held controller 100 may be used to input text based on a first 3-by-3 grid layout of first virtual interfaces 432 while the other hand-held controller 100 may be used to input text based on a second 3-by-3 grid layout. Alternatively, one hand-held controller 100 may be used to select one or more characters based on first virtual interfaces 432 while the other hand-held controller 100 may be used to select one or more text strings from second virtual interfaces 434.
As shown in
As shown in
A character may be selected from circular keypad 438 based on one or more gestures applied on touchpad 122 of hand-held controller 100. For example, text input processor 300 may receive an electronic instruction corresponding to a circular motion applied on touchpad 122. The circular motion may be partially circular. Electronic circuitry of hand-held controller 100 may convert the detected circular or partially circular motions into an electronic signal that contains the information of the direction and traveled distance of the motion. Text input interface generator 310 may rotate circular keypad 438 in a clockwise direction or a counterclockwise direction based on the direction of the circular motion detected by touchpad 122. The number of virtual keys 438a traversed during the rotation of circular keypad 438 may depend on the traveled distance of the circular motion. Accordingly, circular keypad 438 may rotate as needed until pointer 440 overlaps or selects a virtual key 438a representing one or more characters to be selected. When text input processor 300 receives an electronic instruction corresponding to a clicking, sliding, or tapping gesture detected by touchpad 122, one or more characters from the selected virtual key may be selected to add to the candidate text strings.
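The rotation behaviour described above can be illustrated by the following non-limiting sketch, in which a circular-motion instruction carrying a direction and a travelled distance advances the circular keypad by a whole number of keys under the pointer; the per-key arc length, the key set, and all names are assumptions made for illustration.

```python
# Illustrative sketch of circular keypad rotation (hypothetical names): the
# electronic instruction carries the direction and travelled distance of the
# circular motion on the touchpad; the keypad rotates one key per fixed arc
# length until the pointer sits over the desired virtual key.

KEYS = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ")    # one character per virtual key
ARC_PER_KEY = 0.03                           # assumed travelled distance per key, in meters

class CircularKeypad:
    def __init__(self):
        self.pointer_index = 0               # index of the key currently under the pointer

    def on_circular_motion(self, clockwise: bool, distance: float):
        keys_traversed = int(distance / ARC_PER_KEY)
        step = 1 if clockwise else -1
        self.pointer_index = (self.pointer_index + step * keys_traversed) % len(KEYS)

    def on_tap(self) -> str:
        return KEYS[self.pointer_index]      # a tap selects the key under the pointer

pad = CircularKeypad()
pad.on_circular_motion(clockwise=True, distance=0.10)   # traverses 3 keys
print(pad.on_tap())                                      # -> "D"
```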
Two circular keypads 438, each corresponding to a hand-held controller 100, may be used to increase the efficiency and convenience with which a user performs text input operations. In some embodiments, as shown in
As shown in
Similarly as in
System 10 of
As shown in
In step 514, text input processor 300 may generate indicator 420 at a coordinate in virtual environment 400 of
In some embodiments, when indicator 420 overlaps text field 410, text input processor 300 may proceed to step 517 and enter a standby mode ready to perform operations to enter text in text field 410. In step 518, text input processor 300 determines whether a trigger instruction, such as an electronic signal corresponding to a tapping, sliding, or clicking gesture detected by touchpad 122 of hand-held controller 100, has been received via data communication module 320. If a trigger instruction is not received, text input processor 300 may stay in the standby mode to await the trigger instruction or return to step 512. In step 520, when text input processor 300 receives the trigger instruction, text input processor 300 may enter a text input mode. Operating in the text input mode, in steps 522 and 524, text input processor 300 may proceed to receive further electronic instructions and perform text input operations. The electronic instructions may be sent by communication interface 140 of hand-held controller 100 and received via data communication module 320. The text input operations may further include the steps as described below in reference to
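Purely as an illustration of the flow of steps 512 through 524, a compact sketch follows; the event representation, function name, and payloads are assumptions and not the disclosed implementation.

```python
# Illustrative sketch of steps 512-524 (hypothetical names): receive the controller
# pose, place the indicator, wait in standby while it overlaps a text field, enter
# the text input mode on a trigger instruction, then process further instructions.

def run_text_input(events):
    mode = "tracking"
    for event in events:                                   # events from the hand-held controller
        if event["type"] == "pose":                        # steps 512-514: pose and indicator update
            overlapping = event["indicator_overlaps_field"]
            if mode == "tracking" and overlapping:
                mode = "standby"                           # step 517: enter standby mode
            elif mode == "standby" and not overlapping:
                mode = "tracking"                          # step 518: no trigger yet, return to 512
        elif event["type"] == "trigger" and mode == "standby":
            mode = "text_input"                            # step 520: enter text input mode
        elif event["type"] == "instruction" and mode == "text_input":
            yield event["payload"]                         # steps 522-524: perform the operation

ops = run_text_input([
    {"type": "pose", "indicator_overlaps_field": True},
    {"type": "trigger"},
    {"type": "instruction", "payload": "select character"},
])
print(list(ops))   # -> ['select character']
```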
In step 534, text input processor 300 may make a selection of a character based on an electronic instruction corresponding to a gesture detected by touchpad 122 and/or movement of hand-held controller 100. The electronic instruction may be sent by communication interface 140 and received by data communication module 320. In some embodiments, text input processor 300 may select a plurality of characters from first virtual interface 432 based on a series of electronic instructions. In some embodiments, one or more functional keys of third virtual interface 436 may be activated prior to or between the selection of one or more characters.
When at least one character is selected, text input processor 300 may perform step 536. In step 536, text input processor 300 may display one or more candidate text strings in second virtual interface 434 based on the selected one or more characters in step 534. In some embodiments, text input processor 300 may update the candidate text strings already in display in second virtual interface 434 based on the selected one or more characters in step 534. Upon receiving an electronic instruction corresponding to a backspace operation, in step 538, text input processor 300 may delete a character in the candidate text strings displayed in second virtual interface 434. Text input processor 300 may repeat or omit step 538 depending on the electronic instructions sent by hand-held controller 100.
In step 540, text input processor 300 may determine whether an electronic instruction corresponding to a gesture applied on touchpad 122 for switching the current layout of first virtual interface 432 is received. If not, text input processor 300 may return to step 534 to continue to select one or more characters. If an electronic instruction corresponding to a first sliding gesture is received, in step 542, text input processor 300 may switch the current layout of first virtual interface 432, such as a 3-by-3 grid layout, to a prior layout. Alternatively, if an electronic instruction corresponding to a second sliding gesture is received, text input processor 300 may switch the current layout of first virtual interface 432 to a subsequent layout. The direction of the first sliding gesture is opposite to that of the second sliding gesture. For example, the direction of the first sliding gesture may be sliding from right to left horizontally while the direction of the second sliding gesture may be sliding from left to right horizontally, or vice versa.
In step 556, text input processor 300 may make a selection of a text string from the candidate text strings in second virtual interface 434 based on an electronic instruction corresponding to a gesture detected by touchpad 122 and/or movement of hand-held controller 100. The electronic instruction may be sent by communication interface 140 and received by data communication module 320. As described above, touchpad 122 may have one or more sensing areas, and each sensing area may be assigned a number corresponding to a candidate text string displayed in second virtual interface 434. In some embodiments, text input processor 300 may select a plurality of text strings in step 556.
In step 558, text input processor 300 may display the selected one or more text strings in text field 410 in virtual environment 400. The text strings may be displayed before cursor 412 in text field 410 such that cursor 412 moves towards the end of text field 410 as more text strings are added. In some embodiments, in step 560, the selected text strings are deleted from the candidate text strings in second virtual interface 434. In step 562, text input processor 300 may determine whether there is at least one candidate text string in second virtual interface 434. If yes, text input processor 300 may proceed to step 564 to update the remaining candidate text strings and/or their numbering. After the update, text input processor 300 may return to step 556 to select more text strings to input to text field 410. Alternatively, text input processor 300 may switch back to character-selection mode based on an electronic instruction generated by a control button or a sliding gesture received from hand-held controller 100, for example. If no candidate text string remains, text input processor 300 may proceed to step 566, where text input processor 300 may close second virtual interface 434. Text input processor 300 may return to character-selection mode or exit text input mode after step 566.
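For illustration only, steps 556 through 566 might proceed as in the following sketch: a selected candidate is appended to the text field, removed from the candidate list, the remaining candidates are renumbered, and the interface is closed once no candidate remains; the list handling and names are assumptions.

```python
# Illustrative sketch of steps 556-566 (hypothetical names).

def select_string(text_field: str, candidates: list, number: int):
    chosen = candidates.pop(number - 1)            # step 556: pick a candidate by its assigned number
    text_field += chosen                           # step 558: cursor moves toward the end of the field
    numbering = {i + 1: s for i, s in enumerate(candidates)}   # step 564: renumber the remainder
    interface_open = bool(candidates)              # steps 562/566: close the interface when empty
    return text_field, numbering, interface_open

print(select_string("", ["XYZ", "XYZA"], number=1))
# -> ('XYZ', {1: 'XYZA'}, True)
```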
As described herein, the sequence of the steps of method 500 described in reference to
A portion or all of the methods disclosed herein may also be implemented by an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a complex programmable logic device (CPLD), a printed circuit board (PCB), a digital signal processor (DSP), a combination of programmable logic components and programmable interconnects, a single central processing unit (CPU) chip, a CPU chip combined on a motherboard, a general purpose computer, or any other combination of devices or modules capable of providing text input in a virtual environment as disclosed herein.
The foregoing description has been presented for purposes of illustration. It is not exhaustive and is not limited to precise forms or embodiments disclosed. Modifications and adaptations of the embodiments will be apparent from consideration of the specification and practice of the disclosed embodiments. For example, the described implementations include hardware and software, but systems and methods consistent with the present disclosure can be implemented as hardware or software alone. In addition, while certain components have been described as being coupled or operatively connected to one another, such components may be integrated with one another or distributed in any suitable fashion.
Moreover, while illustrative embodiments have been described herein, the scope includes any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of aspects across various embodiments), adaptations and/or alterations based on the present disclosure. The elements in the claims are to be interpreted broadly based on the language employed in the claims and not limited to examples described in the present specification or during the prosecution of the application, which examples are to be construed as nonexclusive. Further, the steps of the disclosed methods can be modified in any manner, including reordering steps and/or inserting or deleting steps.
Instructions or operational steps stored by a computer-readable medium may be in the form of computer programs, program modules, or code. As described herein, computer programs, program modules, and code based on the written description of this specification, such as those used by the controller, are readily within the purview of a software developer. The computer programs, program modules, or code can be created using a variety of programming techniques. For example, they can be designed in or by means of Java, C, C++, assembly language, or any other such programming language. One or more of such programs, modules, or code can be integrated into a device system or existing communications software. The programs, modules, or code can also be implemented or replicated as firmware or circuit logic.
The features and advantages of the disclosure are apparent from the detailed specification, and thus, it is intended that the appended claims cover all systems and methods falling within the true spirit and scope of the disclosure. As used herein, the indefinite articles “a” and “an” mean “one or more.” Similarly, the use of a plural term does not necessarily denote a plurality unless it is unambiguous in the given context. Words such as “and” or “or” mean “and/or” unless specifically directed otherwise. Further, since numerous modifications and variations will readily occur from studying the present disclosure, it is not desired to limit the disclosure to the exact construction and operation illustrated and described, and accordingly, all suitable modifications and equivalents may be resorted to, falling within the scope of the disclosure.
In some aspects, methods consistent with disclosed embodiments may exclude disclosed method steps, or may vary the disclosed sequence of method steps or the disclosed degree of separation between method steps. For example, method steps may be omitted, repeated, or combined, as necessary, to achieve the same or similar objectives. In various aspects, non-transitory computer-readable media may store instructions for performing methods consistent with disclosed embodiments that exclude disclosed method steps, or vary the disclosed sequence of method steps or disclosed degree of separation between method steps. For example, non-transitory computer-readable media may store instructions for performing methods consistent with disclosed embodiments that omit, repeat, or combine, as necessary, method steps to achieve the same or similar objectives. In certain aspects, systems need not necessarily include every disclosed part, and may include other undisclosed parts. For example, systems may omit, repeat, or combine, as necessary, parts to achieve the same or similar objectives.
Other embodiments will be apparent from consideration of the specification and practice of the embodiments disclosed herein. It is intended that the specification and examples be considered as example only, with a true scope and spirit of the disclosed embodiments being indicated by the following claims.
Number | Date | Country | Kind
---|---|---|---
PCT/CN2017/091262 | Jun 2017 | CN | national