The present disclosure, in at least some embodiments, is directed to systems, methods, and apparatuses for providing a user interface, and in particular to such systems, methods, and apparatuses for providing a user interface with a wearable device.
“Hands-free” VR (virtual reality) apps have become more common thanks to very inexpensive VR devices such as Google Cardboard. However, these devices allow only very limited interactions or means for a user interface, as the user has no physical button to press.
In that case, the classical way to interact with a widget (e.g., to select a button) is to gaze at the desired widget for a few seconds. In other words, the current state of the art requires staring at the same widget and waiting until the system “understands” that the user wants to select it. This provides a poor user experience, as the fluidity of interaction is broken, especially when the user wants to browse a vast quantity of items (videos, music, articles, and the like) and must wait several seconds to move from one page to another.
Thus, a need exists for methods, apparatuses, and systems that can dynamically determine the location of a mobile device, dynamically determine the nature of the mobile device's environment, and can efficiently determine actions for the mobile device to take based on the dynamically-determined information.
Embodiments of the present disclosure include systems, methods and apparatuses for performing optical analysis in order to provide a fluid UI (user interface) for a virtual environment, such as a VR (virtual reality) environment or an AR (augmented reality) environment, for example. Optical analysis is performed on movement data obtained from a sensor. Preferably the sensor is attached to a body part of the user, for example through a wearable device. The data may optionally comprise visual data or inertial data. The visual data may optionally comprise video data, for example as a series of frames. If visual data is captured, the sensor may be implemented as a camera, for example an RGB, color, grayscale or infrared camera, a charge-coupled device (CCD), a CMOS sensor, a depth sensor, and/or the like. Optical analysis is performed on the visual data to determine an indication provided by the user, such as a movement by the user. The determined indication is then matched to a UI function, such as selecting an action to be performed through the UI.
Non-limiting examples of optical analysis algorithms include differential methods for optical flow estimation, phase correlation, block-based methods for optical flow estimation, discrete optimization methods, simultaneous localization and mapping (SLAM) or any type of 6 DOF (degrees of freedom) algorithm. Differential methods for optical flow estimation include but are not limited to Lucas-Kanade method, Horn-Schunck method, Buxton-Buxton method, Black-Jepson method and derivations or combinations thereof.
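By way of non-limiting illustration only, the following Python sketch shows how one such differential method, the pyramidal Lucas-Kanade algorithm as exposed by the OpenCV library, might be used to estimate a dominant motion vector between two frames; the helper name and parameter choices are assumptions for illustration and are not part of the disclosed embodiments.

```python
# Illustrative sketch only: estimating a dominant motion vector between two
# frames with pyramidal Lucas-Kanade optical flow (OpenCV). The helper name
# `dominant_motion` and its parameters are assumptions, not part of the disclosure.
import cv2
import numpy as np

def dominant_motion(prev_gray, curr_gray, max_corners=200):
    # Detect good features to track in the previous frame.
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                 qualityLevel=0.01, minDistance=7)
    if p0 is None:
        return np.zeros(2)
    # Track those features into the current frame (pyramidal Lucas-Kanade).
    p1, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None,
                                                winSize=(21, 21), maxLevel=2)
    good = status.flatten() == 1
    if not good.any():
        return np.zeros(2)
    # Median displacement of tracked features approximates the camera/head motion.
    flow = (p1[good] - p0[good]).reshape(-1, 2)
    return np.median(flow, axis=0)
```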
According to some non-limiting embodiments, tracking of the movement of the user's head or other body part can be performed through SLAM (“Simultaneous Localization and Mapping”). SLAM was initially applied to problems of independent movement of a mobile robot (device). In some such systems, both the location of the mobile device (e.g., robot) on a map of the environment and the map of that environment itself are necessary, so that the mobile device can determine its relative location within that environment. In some known systems, however, these tasks cannot be performed simultaneously, which results in substantial delays when processing mobile device location information.
SLAM can be performed with sensor data from a number of different sensor types. Visual SLAM refers to the use of visual data from a visual sensor, such as for example a camera, to perform the SLAM process. In some cases, only such visual data is used for the SLAM process (see, for example, Fuentes-Pacheco et al., “Visual Simultaneous Localization and Mapping: A Survey,” Artificial Intelligence Review 43(1), November 2015).
Various types of sensors and the use of their data in the SLAM process are described in C. Cadena et al., “Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age,” in IEEE Transactions on Robotics, vol. 32, no. 6, pp. 1309-1332, Dec. 2016 (available at https://arxiv.org/pdf/1606.05830.pdf). This article also describes the importance of the “pose,” or position and orientation, for the SLAM process. The pose relates to the position and orientation of the robot or other entity to which the sensor is attached, while the map describes the environment for that robot.
Additionally, some known systems cannot dynamically determine the nature of the mobile device's environment, and therefore, cannot dynamically determine navigation instructions, and/or other information. For example, in some known systems, a navigator for the mobile device can input pre-determined environment data into the known system to provide a description of the environment. Such known systems, however, cannot modify the description of the environment substantially in real-time, based on new environmental information, and/or the like.
In some embodiments, an optical-based UI system is provided for a wearable device, including without limitation, a head-mounted wearable device that optionally includes a display screen. Such systems, methods and apparatuses can be configured to accurately (and in some embodiments, quickly) determine a motion of the wearable device, e.g., through computations performed with a computational device. A non-limiting example of such a computational device is a smart cellular phone or other mobile computational device. Such a motion may then be correlated to a UI function, for example by allowing the user to make a selection with a relevant motion of the wearable device.
If inertial measurements are used, they may be provided through an IMU (inertial measurement unit) and/or other sensors. If the sensor is implemented as an IMU, the sensor can be an accelerometer, a gyroscope, a magnetometer, and/or the like. One drawback of an IMU is its poor ability to track the position of an object over time because of drift. That is, an IMU can detect the motion and acceleration of an object quite well, but cannot reliably detect slow or subtle movements or the precise position of an object.
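By way of non-limiting illustration of the drift problem described above, the following sketch double-integrates purely noisy accelerometer samples (zero true motion) and shows that the resulting position estimate wanders over time; the noise level and sample rate are arbitrary assumptions.

```python
# Illustrative sketch: double-integrating noisy accelerometer samples shows how
# an IMU-only position estimate drifts even when the device is stationary.
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                                       # 100 Hz sample rate (assumed)
accel_noise = rng.normal(0.0, 0.02, size=1000)  # m/s^2; true acceleration is zero

velocity = np.cumsum(accel_noise) * dt          # first integration: acceleration -> velocity
position = np.cumsum(velocity) * dt             # second integration: velocity -> position

print(f"position error after {len(accel_noise) * dt:.0f} s: {position[-1]:.4f} m")
```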
Optionally a combination of inertial measurements and visual data may be used, for example by having an IMU and a camera.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. The materials, methods, and examples provided herein are illustrative only and not intended to be limiting.
Various embodiments of the methods, systems and apparatuses of the present disclosure can be implemented by hardware and/or by software or a combination thereof. For example, as hardware, selected steps of methodology according to some embodiments can be implemented as a chip and/or a circuit. As software, selected steps of the methodology (e.g., according to some embodiments of the disclosure) can be implemented as a plurality of software instructions being executed by a computer (e.g., using any suitable operating system). Accordingly, in some embodiments, selected steps of methods, systems and/or apparatuses of the present disclosure can be performed by a processor (e.g., executing an application and/or a plurality of instructions).
Although embodiments of the present disclosure are described with regard to a “computer” and/or a “computer network,” it should be noted that optionally any device featuring a processor and the ability to execute one or more instructions is within the scope of the disclosure, and may be referred to herein simply as a computer or a computational device. Such devices include (but are not limited to) any type of personal computer (PC), a server, a cellular telephone, an IP telephone, a smartphone or other type of mobile computational device, a PDA (personal digital assistant), a thin client, a smartwatch, a head-mounted display, or another wearable that is able to communicate, wired or wirelessly, with a local or remote device. To this end, any two or more of such devices in communication with each other may comprise a “computer network.”
Embodiments of the disclosure are herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of the various embodiments of the present disclosure only, and are presented in order to provide what is believed to be a useful and readily understood description of the principles and conceptual aspects of the various embodiments of the inventions disclosed herein.
In accordance with preferred embodiments, wearable device 105 can itself be a mobile computational device or computational device 172 can be a mobile computational device. In some preferred embodiments camera 103 can be the camera of a mobile computational device.
In accordance with some preferred embodiments, optical analyzer 104 is configured to operate an optical analysis process to determine a location of wearable device 105 within a computational device-generated map, as well as being configured to determine a map of the environment surrounding wearable device 105. In accordance with other preferred embodiments, optical analyzer 104 is configured to operate an optical analysis process to determine movement of wearable device 105 across a plurality of sets of image data, and not within a map of the environment in which wearable device 105 is moving. For example, as described in further detail below in connection with
In some implementations, because the preprocessed sensor data is abstracted from the specific sensor(s) (e.g., one or more cameras or types of cameras), the optical analyzer 104, therefore, can be sensor-agnostic, and can perform various actions without knowledge of the particular sensors from which the sensor data was derived.
As a non-limiting example, camera 103 can be a digital camera with a resolution of, for example, 640×480 or greater, at any frame rate including, for example, 60 fps, such that movement information may be determined by optical analyzer 104 according to a plurality of images from the camera. For such an example, video preprocessor 102 preprocesses the images before optical analyzer 104 performs the analysis. Such preprocessing can include converting images to grayscale or some other color reduction technique, image compression or size reduction, or some other technique to reduce the processing load. In some embodiments, preprocessing can include computing a Gaussian pyramid for one or more images, also known as a MIPMAP (multum in parvo map), in which the pyramid starts with a full resolution image and the image is operated on multiple times, such that each time the image is half the size and half the resolution of the result of the previous operation. In a preferred embodiment a 2- or 3-level Gaussian pyramid is generated.
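By way of non-limiting illustration only, the following sketch shows one possible preprocessing step of the kind described above: grayscale conversion followed by a 3-level Gaussian pyramid, in which each level is half the width and height of the previous level; the helper name is an assumption for illustration.

```python
# Illustrative sketch: grayscale conversion followed by a 3-level Gaussian
# pyramid (each level half the width and height of the previous one).
# The helper name `preprocess_frame` is an assumption for illustration.
import cv2

def preprocess_frame(frame_bgr, levels=3):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    pyramid = [gray]
    for _ in range(levels - 1):
        pyramid.append(cv2.pyrDown(pyramid[-1]))  # blur + downsample by 2
    return pyramid
```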
Optical analyzer 104 receives image data, which may or may not be preprocessed, to determine motion in the image. Optical analyzer 104 may perform a wide variety of different variations on various optical analysis algorithms, including but not limited to differential methods for optical flow estimation, phase correlation, block-based methods for optical flow estimation, discrete optimization methods, simultaneous localization and mapping (SLAM) or any type of 6 DOF (degrees of freedom) algorithm. Differential methods for optical flow estimation include but are not limited to the Lucas-Kanade method, the Horn-Schunck method, the Buxton-Buxton method, the Black-Jepson method, and derivations or combinations thereof.
Skilled artisans can appreciate that each of the above methods can be used to determine whether a series of images shows movement of a user (or movement in the environment of a user) and the type of that movement. In particular, the skilled artisan can appreciate that some optical analysis methods work well for large movements but not for minor movements. Thus, to the extent the UI interactions correlate to minor body movements (e.g., head nod, hand movement with a stationary arm, etc.), either alone or in combination with larger body movements (e.g., shoulder or body lean, arm raise, etc.), embodiments that use, for example, Lucas-Kanade (which is suited to small motions), either alone or in combination with, for example, Horn-Schunck (which is sensitive to noise in the images and thus is preferable for larger movements), can be more suitable. Likewise, where UI interactions correlate to larger body movements only, algorithms best suited to recognizing larger displacements within an image can be more suitable. Embodiments including SLAM preferably include no other optical analysis methods because of the computational cost, while embodiments that implement a differential method for optical flow estimation preferably include no more than two method types.
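By way of non-limiting illustration of selecting an analysis approach according to movement scale, the following sketch classifies a frame pair as showing a minor or a larger movement from the median magnitude of a dense optical flow field; the Farneback method is used here only because it is readily available in OpenCV (Horn-Schunck is not part of core OpenCV), and the pixel threshold is an arbitrary assumption.

```python
# Illustrative sketch: classifying a frame pair as a minor or larger movement
# from dense optical flow magnitude (Farneback, used here only as a readily
# available dense method; the 2-pixel threshold is an arbitrary assumption).
import cv2
import numpy as np

def classify_motion_scale(prev_gray, curr_gray, threshold_px=2.0):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    magnitude = np.linalg.norm(flow, axis=2)
    return "large" if np.median(magnitude) > threshold_px else "minor"
```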
In other embodiments, an IMU (inertial measurement unit) can be included to help detect movement. In yet other embodiments, an IMU can be used in lieu of optical analysis. In embodiments including an IMU, sensor data related to the movement inertia of a user is received by computational device 172. Embodiments using only an IMU are limited in the types of movements that can be correlated to a UI object or action, given that an IMU will detect movement and the cessation of movement but not a static pose. For example, in some embodiments, a user can lean a shoulder to the right or left and hold the lean to communicate to the UI a scrolling action to the right or left, respectively. An IMU can have difficulty detecting position over time because of drift. Thus, in preferred embodiments in which UI interactions correlate to positions or static poses of the user, a sensor that captures visual or image data is useful, either alone or in combination with an IMU. Furthermore, it should be understood that embodiments that rely only on sensor data from an IMU or other non-visual sensor would not require optical analyzer 104.
In preferred embodiments, optical analyzer 104 can detect static poses of a user. In other words, optical analyzer 104 can detect in image data the movement of an identified body or body part to a position, and that the body or body part then remains in that position (i.e., does not move again out of the position). For example, optical analyzer 104 can detect a shoulder lean movement where the user holds the lean. A static pose with which the user can interact with the UI can include any static pose at the end of a movement. For example, a static pose can include a shoulder lean, holding an arm in a certain position, tilting the head up, down, or to the side, and the like. A static pose is a continuation and culmination of a movement gesture that operates in conjunction with the movement to communicate a UI action. In accordance with preferred embodiments, a UI interaction can be correlated to the movement and the static pose together; a first UI interaction can be correlated to the movement and a second UI interaction, to continue the first UI interaction, can be correlated to the static pose; or a first UI interaction can be correlated to the movement and a second UI interaction, to cease the first UI interaction, can be correlated to a movement away from the static pose.
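By way of non-limiting illustration only, the following sketch treats a tracked body part as holding a static pose when its position stays within a small radius for a number of consecutive frames; the class name, radius and frame count are assumptions for illustration.

```python
# Illustrative sketch: treating a body part as holding a static pose when its
# tracked position stays within a small radius for several consecutive frames.
import numpy as np

class StaticPoseDetector:
    def __init__(self, hold_frames=15, radius=5.0):
        self.hold_frames = hold_frames   # frames the position must stay put
        self.radius = radius             # allowed wobble (e.g., pixels)
        self.anchor = None
        self.count = 0

    def update(self, position):
        position = np.asarray(position, dtype=float)
        if self.anchor is None or np.linalg.norm(position - self.anchor) > self.radius:
            self.anchor = position       # movement detected: restart the hold timer
            self.count = 0
        else:
            self.count += 1
        return self.count >= self.hold_frames   # True while the pose is held
```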
Static poses can be difficult for some users to hold, or to hold steady enough to maintain the UI activity. For example, a scroll through a list using a shoulder lean can be disrupted if the user fails to hold still during the lean or does not maintain the lean far enough. To maintain a scroll through a list, preferred embodiments can use fuzzy logic (discussed below in connection with
In accordance with preferred embodiments, optical analyzer 104 is not limited to detecting movements within image data that are correlated only to instructions to interact with an already-selected UI object. Optical analyzer 104 can also detect a movement within image data where the movement is correlated to an instruction to activate a UI object. In some embodiments, correlating a movement to UI object activation can be done by itself or in conjunction with the classical gaze technique for UI object activation. In such embodiments, the position of a reticle or cursor over an object during a movement can trigger activation. In yet other embodiments, the position of a body part during movement of the body part against a computer-generated image map can be translated to the UI, so that an object in the UI that has a position corresponding to the position of the body part can be activated. In embodiments that correlate a movement within image data to the activation of a UI object, again it is preferable to apply fuzzy logic to avoid accidental activation.
Still referring to
In some implementations, the wearable device 105 can be operatively coupled to the camera 103 and the computational device 172 (e.g., wired, wirelessly). The wearable device 105 can be a device (such as an augmented reality (AR) and/or virtual reality (VR) headset, and/or the like) configured to receive sensor data, so as to track a user's movement when the user is wearing the wearable device 105. It should be understood that embodiments of the present invention are not limited to headset wearable devices and that wearable device 105 can be attached to another part of the body, such as the hand, wrist, torso, and the like such that movements by other parts of the body are used to interact with the UI. The wearable device 105 can be configured to send sensor data from the camera 103 to the computational device 172, such that the computational device 172 can process the sensor data to identify and/or contextualize the detected user movement.
In some implementations, the camera 103 can be included in wearable device 105 and/or separate from wearable device 105. In embodiments including a camera 103 included in wearable device 105, optical analysis can be performed on the visual data of the user's environment to determine the user's movement. In embodiments including a camera 103 separate from wearable device 105, optical analysis can be performed on visual data of the user or a portion of the user's body to determine the user's movement. Camera 103 can be a video camera or still-frame camera and can be one of an RGB, color, grayscale or infrared camera or some other sensor that uses technology to capture image data such as a charge-coupled device (CCD), a CMOS sensor, a depth sensor, and the like.
At step 151, a calibration step is optionally performed. Preferably, such a calibration step is performed in embodiments using optical flow methods which do not feature 6 DOF; for those types of optical flow methods, a calibration step is optionally and preferably performed for greater accuracy in determining movements of the user. By contrast, SLAM in 3D does feature 6 DOF, which means that, in some preferred embodiments using SLAM, no calibration is performed. Calibration is discussed further below in connection
At step 152, the user moves the body part on which the wearable device is mounted, for example by moving the user's head, shoulders, arm, etc. At step 154, the camera records visual data, such as video data for example.
At step 156, the optical analyzer detects movement of the user's body part, such as the user's head, shoulders, arm, etc., according to movement of the wearable device within the user's environment, by analyzing the visual data of the environment. The optical analyzer may optionally perform such analysis according to any of the above optical algorithms. Again, for embodiments in which camera 103 is separate from wearable device 105, the optical analyzer detects movement of the user's body part by analyzing visual data of the body part itself rather than of the environment.
At step 158, the optical analyzer determines which UI action corresponding to the movement has been selected by the user. For example, as described in greater detail below, a movement of the user's head or shoulders could optionally be related to a UI action of scrolling through a plurality of choices in the UI. Such choices could be visually or audibly displayed to the user. A different movement of the user's head could relate to selecting one of the choices. For some UI interactions, the interaction can be continuous (e.g., scrolling through images or a list). In preferred embodiments, the optical analyzer detects that the user has moved from a first position to a second position, and as long as the user remains in the second position, the continuous interaction remains active (e.g., scrolling continues). In such embodiments, the optical analyzer determines whether the user has moved back to or near the first position, or to a third position, either of which can indicate an instruction to cease the continuous interaction (e.g., cease scrolling). In other words, in preferred embodiments, the user need not make another, distinct gesture to further interact with the UI. Instead, the user can maintain a static pose to interact with the UI, and when the user moves, for example, back to the original position, the interaction can cease. The optical analyzer can also ascertain the speed of the user's movement to the second position and determine a quality of the UI interaction accordingly (e.g., faster user movement corresponds to faster scrolling). In yet other embodiments, the optical analyzer can determine a quality of the UI interaction, such as scrolling speed, from further movement in the same direction. For example, if the optical analyzer determines that the user has leaned right, which correlates to a scroll-to-the-right command, the optical analyzer can then determine whether the user leans further to the right, which can correlate to faster scrolling. It should be understood that preferred embodiments can use other types of UI interaction in conjunction with the movement-based interaction, such as audible or tactile interaction (e.g., mouse, keyboard, etc.).
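By way of non-limiting illustration only, the following sketch shows one possible continuous-scroll behavior of the kind described above: scrolling begins when a lean passes a start threshold, continues (and speeds up with a further lean) while the lean is held, and ceases when the user returns near the neutral position; all thresholds, units and the gain are assumptions for illustration.

```python
# Illustrative sketch: a continuous scroll driven by a held lean, with
# hysteresis between a start threshold and a "back near neutral" stop
# threshold. Thresholds, units and gain are assumptions for illustration.
class LeanScrollController:
    def __init__(self, start_threshold_cm=4.0, stop_threshold_cm=1.5, gain=2.0):
        self.start = start_threshold_cm
        self.stop = stop_threshold_cm
        self.gain = gain
        self.active = False

    def update(self, lean_offset_cm):
        magnitude = abs(lean_offset_cm)
        if not self.active and magnitude >= self.start:
            self.active = True           # lean detected: begin scrolling
        elif self.active and magnitude <= self.stop:
            self.active = False          # back near neutral: cease scrolling
        if not self.active:
            return 0.0
        # Leaning further in the same direction scrolls faster.
        direction = 1.0 if lean_offset_cm > 0 else -1.0
        return direction * self.gain * magnitude   # items per second (assumed units)
```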
If the choices are provided audibly, for example by announcing a name or title of a choice, then the UI system would not be limited to a virtual environment, or at least not to such a virtual environment that is based solely or mainly on display of visual information.
At step 160, the application that supports the UI executes a UI action, based upon determining which UI action corresponds to which movement. In preferred embodiments, an optical analyzer will assign a classification to a movement, and the system will have a data store that maintains a correlation between a UI interaction and the movement classification. The particular classifications of movement and correlations can depend on the nature and complexity of the UI, as well as on the type of sensors used (e.g., mounted on a wearable and directed at the environment vs. directed at the user, the type and number of wearables on which sensors are mounted, etc.). In one preferred embodiment, as shown in
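By way of non-limiting illustration only, the following sketch shows one possible form of such a data store: a mapping from movement classifications, as might be produced by an optical analyzer, to UI actions, together with a dispatcher that executes the correlated action; the classification names and actions are assumptions for illustration.

```python
# Illustrative sketch: a data store correlating movement classifications with
# UI actions, and a dispatcher that executes the correlated action.
# All names are assumptions for illustration.
from typing import Callable, Dict

correlation_store: Dict[str, Callable[[], None]] = {
    "lean_right_hold": lambda: print("scroll right"),
    "lean_left_hold":  lambda: print("scroll left"),
    "head_nod":        lambda: print("select current item"),
    "return_neutral":  lambda: print("stop scrolling"),
}

def execute_ui_action(movement_classification: str) -> None:
    action = correlation_store.get(movement_classification)
    if action is not None:
        action()   # run the UI action correlated with this movement
```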
In some implementations, calibration processor 202 can be configured to calibrate the camera input, such that the input from individual cameras and/or from different types of cameras can be calibrated. As an example of the latter, if a camera's type and/or model is known and has been analyzed in advance, calibration processor 202 can be configured to provide the camera abstraction interface 200 with information about device type calibration requirements (for example), so that the camera abstraction interface 200 can abstract the data correctly and in a calibrated manner. For example, the calibration processor 202 can be configured to include information for calibrating known makes and models of cameras, and/or the like. Skilled artisans can appreciate that calibration can be necessary for different types or models of cameras because of the differing types and levels of distortion, and the like, among them. Calibration processor 202 can also be configured to perform a calibration process to calibrate each individual camera separately, e.g., at the start of a session (upon a new use, turning on the system, and the like) using that camera. The user (not shown), for example, can take one or more actions as part of the calibration process, including but not limited to displaying printed material on which a pattern is present. The calibration processor 202 can receive the input from the camera(s) as part of an individual camera calibration, such that calibration processor 202 can use this input data to calibrate the camera input for each individual camera. The calibration processor 202 can then send the calibrated data from camera abstraction interface 200 to camera data preprocessor 204, which can be configured to perform data preprocessing on the calibrated data, including but not limited to reducing and/or eliminating noise in the calibrated data, normalizing incoming signals, and/or the like. The video preprocessor 102 can then send the preprocessed camera data to an optical analyzer 104.
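By way of non-limiting illustration only, the following sketch shows one possible individual-camera calibration of the kind described above, using OpenCV's standard checkerboard-based routines; the board size and the surrounding flow are assumptions for illustration.

```python
# Illustrative sketch: per-camera calibration from views of a printed
# checkerboard pattern, using OpenCV's standard calibration routines.
# The board size and surrounding flow are assumptions for illustration.
import cv2
import numpy as np

def calibrate_camera(gray_views, board_size=(9, 6)):
    # Planar coordinates of the checkerboard corners (one set per view).
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)

    object_points, image_points = [], []
    for gray in gray_views:
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            object_points.append(objp)
            image_points.append(corners)

    # Intrinsic matrix and distortion coefficients for this individual camera.
    _, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
        object_points, image_points, gray_views[0].shape[::-1], None, None)
    return camera_matrix, dist_coeffs
```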
Optical analyzer 104, according to at least some embodiments, may include a tracking processor 210 and a mapping processor 212. Optionally, only tracking processor 210 is featured, depending upon the type of optical algorithm performed. For example, for some types of optical algorithms, only tracking of the user's movements through the wearable device would be needed and/or performed, such that optionally only tracking processor 210 is featured. For other types of optical algorithms, in addition to tracking, the relative location of the movement of the wearable device on a map that corresponds to the UI would also be determined; for such embodiments, mapping processor 212 is also included. Optionally a localization processor is also included (for example, for SLAM implementations).
In some implementations, the mapping processor 212 can be configured to create and update a map of an environment surrounding the wearable device (not shown). Mapping processor 212, for example, can be configured to determine the geometry and/or appearance of the environment, e.g., based on analyzing the preprocessed sensor data received from the video preprocessor 102. Mapping processor 212 can also be configured to generate a map of the environment based on the analysis of the preprocessed data. In some implementations, the mapping processor 212 can be configured to send the map to the localization processor 206 to determine a location of the wearable device within the generated map.
Tracking processor 210 would then track the location of the wearable device on the map generated by mapping processor 212.
In some implementations, tracking processor 210 can determine the current location of the wearable device 105 according to the last known location of the device on the map and input information from one or more sensor(s), so as to track the movement of the wearable device 105. Tracking processor 210 can use algorithms such as a Kalman filter, or an extended Kalman filter, to account for the probabilistic uncertainty in the sensor data.
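By way of non-limiting illustration only, the following sketch shows a simple constant-velocity Kalman filter smoothing a noisy one-dimensional position measurement, as one way such probabilistic uncertainty might be accounted for; the noise parameters and frame rate are assumptions for illustration.

```python
# Illustrative sketch: a constant-velocity Kalman filter smoothing a noisy 1-D
# position measurement. All noise values and the frame rate are assumptions.
import numpy as np

class ConstantVelocityKalman1D:
    def __init__(self, dt=1 / 60, process_var=1e-3, measurement_var=1e-2):
        self.x = np.zeros(2)                         # state: [position, velocity]
        self.P = np.eye(2)                           # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model
        self.H = np.array([[1.0, 0.0]])              # only position is measured
        self.Q = process_var * np.eye(2)
        self.R = np.array([[measurement_var]])

    def update(self, measured_position):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Correct with the new measurement.
        y = measured_position - (self.H @ self.x)[0]
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K.flatten() * y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]                             # filtered position
```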
In some implementations, the tracking processor 210 can track the wearable device 105 in a way so as to reduce jitter. In other words, a UI cursor or a UI interaction (e.g., scrolling) can track so closely to a user's movement that if the user's velocity frequently changes over the course of a larger movement, the UI can reflect each sudden change thus causing some disorientation to the user. For example, when a user moves an arm up in interaction with the UI, the user can have a shoulder muscle twitch or can struggle raising the arm past horizontal smoothly so that there are sudden movements of the arm from side to side or hitches during the movement. Because the user will not move smoothly or because of the potential for lag in the tracking, a direct correlation of a UI object movement and the user movement may result in jitter (i.e., non-fluid movement of the UI object). To alleviate such jitter, the tracking processor 210 could estimate an error of movement at each step of the process to modulate the mapping process results. However, such processing can be computationally intensive given its frequency. Therefore, preferred embodiments can determine an estimated error value or constant to modulate the results of the optical analysis and the mapping process and use this error value over multiple periods of the process which is discussed further in connection with
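By way of non-limiting illustration only, the following sketch applies a fixed estimated error value as a dead-band, combined with exponential smoothing, so that the UI follows only changes larger than the estimated error; the constant and smoothing factor are assumptions for illustration.

```python
# Illustrative sketch: reducing jitter with a fixed error constant used as a
# dead-band plus exponential smoothing, instead of re-estimating the error at
# every step. The constant and smoothing factor are assumptions.
import numpy as np

class JitterFilter:
    def __init__(self, error_constant=2.0, smoothing=0.3):
        self.error_constant = error_constant   # ignore changes smaller than this
        self.smoothing = smoothing             # 0..1, higher follows input faster
        self.output = None

    def update(self, raw_position):
        raw_position = np.asarray(raw_position, dtype=float)
        if self.output is None:
            self.output = raw_position
        elif np.linalg.norm(raw_position - self.output) > self.error_constant:
            # Only follow changes larger than the estimated error value.
            self.output = (1 - self.smoothing) * self.output + self.smoothing * raw_position
        return self.output
```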
In some implementations, the output of tracking processor 210 can be sent to mapping processor 212, and the output of mapping processor 212 can be sent to tracking processor 210, so that the determination by each of the location of the wearable device 105 and the map of the surrounding environment can inform the determination of the other.
Computational device 302 preferably operates a camera preprocessor 316, which may optionally operate as previously described for other camera preprocessors. Preferably, camera preprocessor 316 receives input data from one or more cameras 318 and processes the input data to a form which is suitable for use by movement analyzer 314. Movement analyzer 314 may operate as previously described for other optical analyzers.
Movement analyzer 314 is preferably in contact with a UI mapper 320, which correlates each movement determined by movement analyzer 314 with a particular UI function or action. UI mapper 320 can include a repository of UI functions or actions with identifiers of the movements to which they correlate. In some embodiments, UI mapper 320 can be customized by a user. For example, a user may have difficulty with a particular movement or gesture that is correlated to a UI function or action. In that case, the user may adjust UI mapper 320 so that a modified movement or gesture, or a different movement or gesture, is correlated to the UI function or action. Correlation data store 326 can be used to maintain correlations of UI functions and movement information. The movement information can be an identifier of a classification of movement or of a specific movement, as generated by movement analyzer 314 (or optical analyzer 104 from
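By way of non-limiting illustration only, the following sketch shows one possible form of a user-customizable mapper in which a gesture-to-function correlation can be overridden, for example when a particular gesture is difficult for a user to perform; the default correlations and names are assumptions for illustration.

```python
# Illustrative sketch: a UI mapper whose gesture-to-function correlations can
# be overridden per user. Default correlations and names are assumptions.
class UIMapper:
    def __init__(self):
        self.correlations = {
            "head_tilt_left": "scroll_left",
            "head_tilt_right": "scroll_right",
            "head_nod": "select",
        }

    def remap(self, movement_id, ui_function):
        # Remove any movement currently correlated with the function, then
        # correlate the user's preferred movement with it instead.
        self.correlations = {m: f for m, f in self.correlations.items()
                             if f != ui_function}
        self.correlations[movement_id] = ui_function

    def lookup(self, movement_id):
        return self.correlations.get(movement_id)

# Example: a user who struggles with head nods selects with an upward head tilt instead.
mapper = UIMapper()
mapper.remap("head_tilt_up", "select")
```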
As shown in
First, the user 400 places the cursor (here, a red circle, shown as 404) on top of the desired item using the classical gaze system, by gazing at the desired item in a display 402. In preferred embodiments the user can invoke a gesture to activate a cursor 404 in display 402. For example, the user can invoke a gesture (e.g., head nod, head tilt, arm raise, etc.) to activate a cursor and invoke a second gesture (e.g., tilt head to the left, lean body to the left, swipe arm to the left, etc.) to move the cursor over the item. Skilled artisans can appreciate that different gestures or movements can be used to activate and interact with UI objects. Side, top and screen views are shown in
Turning now to
At step 426, the optical analyzer detects movement of the user's body part, such as the user's head, according to movement of the wearable device, by analyzing the visual data. The optical analyzer may optionally perform such analysis according to any of the above optical algorithms. At step 428, the optical analyzer determines which UI action corresponding to movement has been selected by the user. For example, as described in greater detail below, a movement of the user's head could optionally be related to a UI action of scrolling through a plurality of choices in the UI. Such choices could be visually or audibly displayed to the user. A different movement of the user's head could relate to selecting one of the choices.
If the choices are provided audibly, for example by announcing a name or title of a choice, then the UI system would not be limited to a virtual environment, or at least not to such a virtual environment that is based solely or mainly on display of visual information.
At step 430, the application that supports the UI causes a new menu to be displayed, the display of the new menu being the UI action that is performed in response to a particular movement or set of movements. The method then optionally continues as follows: at step 432A, the user selects a menu item, preferably with some type of movement or combination of movements. In other words, the camera records another movement, similar to step 424; the optical analyzer detects the movement in the image data received from the camera, similar to step 426; and the analyzer determines that a selection UI action corresponds to the movement, similar to step 428. Alternatively, if the user does not make such a selection (for example, in order to see more selections from the menu), that is, if the analyzer determines that no relevant movement by the user is indicated in the image data received from the camera, the process returns to step 420.
The boundary thresholds could be determined based on the user's age, size, or some other physical characteristics. In some embodiments, a boundary threshold could be set through calibration by the user. For example, the user can hold one or more static poses and communicate to the system when the user was holding the static pose, when the user had completed the static pose, or both. For example, the user can issue an audible command or select an option on a calibration interface to begin or end a calibration exercise. The system could record movement data and generate a value indicating the distance from boundary to boundary from the movement data. Additionally, multiple fuzzy logic boundaries can be calibrated depending on the type of movement. Larger movements can be associated with larger boundary distances and smaller movements can be associated with smaller boundary distances. For example, in some embodiments, a movement error of 2 cm can be used for a lean gesture. That is, after the user has completed the movement of the lean and is holding the lean to maintain the UI activity, the user can move 2 cm in any direction and maintain the UI activity (e.g., list scroll) without disruption to or change in the UI activity. A static pose for an arm gesture, a gesture more easily controlled by most users, can have an error value of 1.5 cm and a static pose for a hand, even more controllable, can have an error value of 1 cm. In some embodiments, a default value can be used before or in place of calibration or receipt of user-specific data.
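By way of non-limiting illustration only, the following sketch applies per-gesture movement-error boundaries of the kind described above (the centimetre values follow the examples given), maintaining the UI activity while the user's drift from the held position stays within the boundary; the function and variable names are assumptions for illustration.

```python
# Illustrative sketch: per-gesture movement-error boundaries used to keep a UI
# activity (e.g., a list scroll) active while the user holds a static pose,
# tolerating small drift around the held position. Names are assumptions.
import numpy as np

BOUNDARY_CM = {"lean": 2.0, "arm": 1.5, "hand": 1.0}

def ui_activity_maintained(gesture_type, held_position_cm, current_position_cm):
    """Return True while the user stays within the boundary for this gesture."""
    boundary = BOUNDARY_CM.get(gesture_type, 1.0)   # default before calibration
    drift = np.linalg.norm(np.asarray(current_position_cm, float) -
                           np.asarray(held_position_cm, float))
    return drift <= boundary
```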
Any and all references to publications or other documents, including but not limited to, patents, patent applications, articles, webpages, books, etc., presented in the present application, are herein incorporated by reference in their entirety.
Example embodiments of the devices, systems and methods have been described herein. As noted elsewhere, these embodiments have been described for illustrative purposes only and are not limiting. Other embodiments are possible and are covered by the disclosure, which will be apparent from the teachings contained herein. Thus, the breadth and scope of the disclosure should not be limited by any of the above-described embodiments but should be defined only in accordance with claims supported by the present disclosure and their equivalents. Moreover, embodiments of the subject disclosure may include methods, systems and apparatuses which may further include any and all elements from any other disclosed methods, systems, and apparatuses. In other words, elements from one or another disclosed embodiment may be interchangeable with elements from other disclosed embodiments. In addition, one or more features/elements of disclosed embodiments may be removed and still result in patentable subject matter (and thus, resulting in yet more embodiments of the subject disclosure). Correspondingly, some embodiments of the present disclosure may be patentably distinct from one and/or another reference by specifically lacking one or more elements/features. In other words, claims to certain embodiments may contain negative limitations to specifically exclude one or more elements/features, resulting in embodiments which are patentably distinct from the prior art which includes such features/elements.
Number: 62476965; Date: Mar 2017; Country: US