The present invention generally relates to augmented reality, and particularly to providing visual assistance for performing manual tasks that require accurate alignment of an axis of a tool with a reference axis.
Many surgical procedures, like insertion of an external ventricular drain (EVD) into a ventricle of the brain, insertion of a screw into the pedicle of a vertebra, and insertion of a biopsy or ablation needle into a lung or liver tumor, require visual assistance for accurately aligning the surgical instrument with a reference trajectory and steadily advancing the instrument along the trajectory without deviation. For example, a crucial step of the EVD insertion procedure involves advancing a rigid, needle-like instrument called a stylet through a burr hole in the skull and into a patient's brain until it reaches the anterior horn of a lateral ventricle. The procedure is typically performed free-hand by a neurosurgeon using surface landmarks, and inaccuracy in positioning and advancing the stylet can result in sub-optimal placement of the EVD. Corrective revision procedures are reported in 40% of cases, each revision adding to patient morbidity and procedural costs. In cases of distorted ventricular anatomy or unusually small ventricles in particular, providing a means of guiding the EVD stylet safely into the ventricle is critically important. Some existing methods provide visual assistance in the form of real-time image guidance. For example, surgical navigation used for EVD insertion shows a real-time display of orthogonal Magnetic Resonance Imaging (MRI)/Computed Tomography (CT) image slices corresponding to the real-time position and orientation of the EVD stylet. Projections of the pre-planned reference trajectory and the real-time trajectory of the EVD stylet are drawn on the image slices as graphical lines of two distinct colours.
The viewer is expected to manually adjust the EVD stylet using freehand movements to achieve overlap between the two differently coloured lines. Perfect overlap indicates accurate alignment between the real-time trajectory and the reference trajectory. However, it is cumbersome and time consuming to discover a position and orientation of the EVD stylet that achieves perfect overlap while looking only at projections of the 3D space on a 2D display. This problem is further exacerbated because the perspective and orientation of the display do not generally match those of the surgeon, making the relation between physical hand movements and the corresponding changes in the displayed lines unintuitive. Moreover, if inadvertent movement causes even a small deviation between the two trajectories, it is cumbersome to realign them; if this inadvertent movement happens after the tissue has been penetrated, there is a risk of damaging the tissue in the process of realigning the trajectories. Thus, there is a need for visual assistance for aligning an axis of a physical object with a pre-determined virtual reference trajectory such that alignment between the trajectories is quick and intuitive, thereby reducing the probability of inadvertent off-trajectory movements and enabling quick, intuitive course correction if inadvertent movements occur.
An aspect of the present invention is to address at least the above-mentioned problems and/or disadvantages and to provide at least the advantages described below.
Accordingly, one aspect of the present invention relates to a method for indicating alignment of a body-fixed axis (300) with a reference axis (301) of a pre-determined reference pose, the method comprising: acquiring a real-time measurement of the body-fixed axis (300) predefined in a coordinate frame of a physical object (101); rendering a first surface (103) with an intersection point (304) of the reference axis (301) on the first surface (103) using a three-dimensional display device (100); rendering a second surface (305) at an offset from the intersection point (304) of the reference axis (301) present on the first surface (103); rendering a plurality of sets of feature graphics on the first surface (103) and the second surface (305) in one or more visual states, wherein at least one set of feature graphics of the plurality of sets of feature graphics comprises reference feature graphics that are positionally distributed along the reference axis (301) of the pre-determined reference pose; updating the positions of another set of feature graphics of the plurality of sets of feature graphics based on a current position of the physical object (101), wherein the other set of feature graphics comprises dynamic feature graphics that are positionally distributed along the body-fixed axis (300) of the physical object (101); and modifying the visual states of the plurality of sets of feature graphics based on the extent of alignment between the body-fixed axis (300) and the reference axis (301).
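By way of illustration only, the following is a minimal Python sketch of one iteration of such a guidance loop; the tracker, surface, display, and feature-graphic objects and all of their names are hypothetical and do not come from the present disclosure, and alignment is judged here by how closely the dynamic intersection points approach the reference intersection points on the two surfaces.

```python
import numpy as np

def update_guidance(tracker, display, first_surface, second_surface,
                    ref_graphics, dyn_graphics, tol=2.0):
    """One iteration of the alignment-guidance loop (hypothetical API)."""
    # Acquire the real-time body-fixed axis of the tracked physical object.
    origin, direction = tracker.get_body_fixed_axis()  # point, unit vector

    # Re-position the dynamic feature graphics where the body-fixed axis
    # pierces the first and the second surface.
    dyn_graphics[0].center = first_surface.intersect(origin, direction)
    dyn_graphics[1].center = second_surface.intersect(origin, direction)

    # Extent of alignment: the two axes coincide when the dynamic intersection
    # points fall on the reference intersection points on both surfaces.
    aligned = all(
        np.linalg.norm(d.center - r.center) < tol
        for d, r in zip(dyn_graphics, ref_graphics)
    )

    # Modify the visual states of all feature graphics accordingly.
    for g in list(ref_graphics) + list(dyn_graphics):
        g.state = "aligned" if aligned else g.initial_state
    display.render(list(ref_graphics) + list(dyn_graphics))
```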
Another aspect of the present invention relates to a visual guidance system for indicating alignment of a physical object (101) with a reference axis (301) of a pre-determined reference pose, the visual guidance system comprising one or more processors coupled and configured with components of the visual guidance system for indicating alignment of the physical object (101) with the pre-determined reference axis (301), the system comprising: a three-dimensional display device (100) for rendering a first surface (103) with an intersection point (304) of the reference axis (301) on the first surface (103); a physical object (101) for performing an action; a tracking system (102) for tracking the position and orientation of the physical object (101); and a memory device comprising the reference axis (301) of the pre-determined reference pose; the three-dimensional display device (100) further rendering a body-fixed axis (300) based on the tracked position and orientation of the physical object, and a plurality of sets of feature graphics on the first surface (103) and the second surface (305) in one or more visual states, wherein at least one set of feature graphics of the plurality of sets of feature graphics is positionally distributed along the pre-determined reference axis (301); and the three-dimensional display device (100) rendering modified visual states of the plurality of sets of feature graphics based on the extent of alignment between the body-fixed axis (300) and the reference axis (301).
Other aspects, advantages, and salient features of the invention will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the annexed drawings, discloses exemplary embodiments of the invention.
The above and other aspects, features, and advantages of certain exemplary embodiments of the present invention will be more apparent from the following description taken in conjunction with the accompanying drawings in which:
Persons skilled in the art will appreciate that elements in the figures are illustrated for simplicity and clarity and may not have been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help improve understanding of various exemplary embodiments of the present disclosure. Throughout the drawings, it should be noted that like reference numbers are used to depict the same or similar elements, features, and structures.
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of exemplary embodiments of the invention as defined by the claims and their equivalents. It includes various specific details to assist in that understanding but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness.
The terms and words used in the following description and claims are not limited to their bibliographical meanings, but are merely used by the inventor to enable a clear and consistent understanding of the invention. Accordingly, it should be apparent to those skilled in the art that the following description of exemplary embodiments of the present invention is provided for illustration purposes only and not for the purpose of limiting the invention as defined by the appended claims and their equivalents.
It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a component surface” includes reference to one or more of such surfaces.
By the term “substantially” it is meant that the recited characteristic, parameter, or value need not be achieved exactly, but that deviations or variations, including for example, tolerances, measurement error, measurement accuracy limitations and other factors known to those of skill in the art, may occur in amounts that do not preclude the effect the characteristic is intended to provide.
A virtual three-dimensional (3D) environment is an immersive computer-generated environment which the user can perceive and interact with. Augmented reality (AR) is a technology that is used to generate and present a virtual 3D environment, where the user perceives computer-generated graphics to be a part of the real environment. One of the applications of AR is in providing visual guidance, in the form of graphical elements overlaid on the tools used for performing complex or safety-critical tasks. These graphical elements, perceived by the user as physical extensions of the tools, enhance hand-eye co-ordination, as all directions perceived by the user in the physical space map to the same set of directions in the virtual 3D environment. The visual guidance is provided to the user through a three-dimensional display device, which could be a stereoscopic optical or video see-through head-mounted display, a head-mounted virtual reality display, or any other three-dimensional display device, such as a light-field or holographic display, which need not be head-mounted.
AR visual guidance can assist several medical applications where an instrument must access a lesion in the patient without impairing healthy anatomy. The intended position and orientation of the instrument is its reference pose, which the user wants to achieve. The reference pose could be a linear trajectory, as used for advancing EVD stylets, setting up biopsy needle holders, etc. The reference pose could be a linear trajectory with a preferred depth along the trajectory, as used for introducing biopsy needles, inserting K-wires into vertebrae, fine needle aspiration, introducing ablation needles, dispensing bone cement for vertebroplasty, positioning electrodes for deep-brain stimulation, administering nerve blocks, positioning orthopedic implants, etc. The reference pose could also be a linear trajectory with a preferred depth along the trajectory and an orientation about the trajectory, as used for positioning imaging equipment, positioning instrument holders, etc. In these cases, the linear trajectory used to define the reference pose is the reference axis, the preferred depth along the trajectory to be achieved by the instrument is captured by the reference point, and the preferred orientation about the trajectory is captured by the reference direction.
AR visual guidance can also assist non-medical applications where an object must be precisely positioned and oriented relative to another. A reference pose containing only a linear trajectory could be used for positioning a visual inspection instrument relative to the specimens being inspected. A reference pose containing a linear trajectory with a preferred depth along the trajectory could be used on an assembly line to guide a mechanical arm driving fasteners into a chassis. A reference pose containing a linear trajectory with a preferred depth and an orientation about the trajectory could be used to guide a glue-dispensing mechanism to follow a complex lip-groove contour on a product. As in the medical cases, the linear trajectory used to define the reference pose is the reference axis, the preferred depth along the trajectory to be achieved by the instrument is captured by the reference point, and the preferred orientation about the trajectory is captured by the reference direction.
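Purely as an illustration of this decomposition, the three components of a reference pose could be collected in a small data structure; the Python sketch below and all of its names are hypothetical, with the optional fields left unset when a task needs only the axis.

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class ReferencePose:
    """Illustrative container for the three parts of a reference pose."""
    axis_point: np.ndarray                            # a point on the reference axis
    axis_direction: np.ndarray                        # unit vector along the reference axis
    reference_point: Optional[np.ndarray] = None      # preferred depth along the axis
    reference_direction: Optional[np.ndarray] = None  # preferred orientation about the axis

# An EVD trajectory needs only the axis; a biopsy needle would also set
# reference_point; an instrument holder would set reference_direction as well.
evd_pose = ReferencePose(axis_point=np.array([0.0, 0.0, 0.0]),
                         axis_direction=np.array([0.0, 0.0, 1.0]))
```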
The virtual 3D environment provides one or more user interface features that allow the surgeon to use the physical object 101 for rotating and scaling the virtual patient model 103. In an embodiment, the orientation of the virtual patient model 103 is the same as that of the real patient 104. Displaying the virtual patient model 103 in the same orientation enhances hand-eye co-ordination, as all directions perceived by the viewer, that is, the surgeon, in the physical space map to the same set of directions in the virtual 3D environment. To present the virtual 3D environment to the user in the user's perspective, the user's eye position relative to the head-mounted display 100 is assumed to be constant. In an embodiment, the position as well as the orientation of the virtual patient model 103 is the same as that of the real patient 104; this requires estimating the user's eye position relative to the three-dimensional display device 100 using a calibration step such as the single point active alignment method (SPAAM). In another embodiment, the user's eye position relative to the three-dimensional display device is tracked in real time and used as the projection point.
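As a minimal sketch of how such an eye estimate would be used (the function and frame names below are assumptions, not part of this disclosure), the calibrated or tracked eye offset is composed with the tracked display pose to obtain the projection point in world coordinates:

```python
import numpy as np

def eye_position_world(display_R, display_t, eye_offset_display):
    """World-space eye position used as the projection point (sketch).

    display_R (3x3 rotation) and display_t (translation) are the tracked pose
    of the display; eye_offset_display is the viewer's eye position relative
    to the display, either assumed constant, obtained from a SPAAM-style
    calibration, or tracked in real time."""
    return display_R @ eye_offset_display + display_t
```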
Upon locking both the position and the orientation, the virtual patient model 103 is perceived to completely overlap the real patient 104. This is the most intuitive mode of visualization for the highest-accuracy hand-eye coordination, as it enables true augmentation, where virtual objects behave as graphical extensions of the real objects.
The visual guidance system comprises one or more processors and one or more computer-readable storage media. The one or more processors are coupled and configured with the components of the visual guidance system, that is, the three-dimensional display device 100, the tracking system 102, and the physical object 101, for indicating alignment of an axis of the physical object 101 with the pre-determined reference axis 301. The methods and algorithms corresponding to the visual guidance system may be implemented in a computer-readable storage medium appropriately programmed for general purpose computers and computing devices. Typically the processor, for example, one or more microprocessors, receives instructions from a memory or like device and executes those instructions, thereby performing one or more processes defined by those instructions. Further, programs that implement such methods and algorithms may be stored and transmitted using a variety of media, for example, computer-readable storage media, in a number of manners. A “processor” means any one or more microprocessors, Central Processing Unit (CPU) devices, computing devices, microcontrollers, digital signal processors, or like devices.
The term “computer-readable storage medium” refers to any medium that participates in providing data, for example instructions, that may be read by a computer, a processor, or a like device. Such a medium may take many forms, including but not limited to non-volatile media and volatile media. Non-volatile media include, for example, optical or magnetic disks and other persistent memory. Volatile media include Dynamic Random Access Memory (DRAM), which typically constitutes the main memory. Transmission media include coaxial cables, copper wire, and fiber optics, including the wires that comprise a system bus coupled to the processor and the computer-readable storage media for providing the data. Common forms of computer-readable storage media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic medium, a Compact Disc-Read Only Memory (CD-ROM), a Digital Versatile Disc (DVD), any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a Random Access Memory (RAM), a Programmable Read Only Memory (PROM), an Erasable Programmable Read Only Memory (EPROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a flash memory, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read. In general, the computer-readable programs may be implemented in any programming language. Some examples of languages that can be used include C, C++, C#, or Java. The programs may use various security, encryption, and compression techniques to enhance the overall user experience. The software programs may be stored on or in one or more mediums as object code. A computer program product comprising computer executable instructions embodied in a computer-readable medium comprises computer-parsable code for the implementation of the processes of various embodiments.
The method and the visual guidance system disclosed herein can be configured to work in a network environment comprising one or more computers that are in communication with one or more devices via a network. In an embodiment, the computers communicate with the devices directly or indirectly, via a wired medium or a wireless medium such as the Internet, a local area network (LAN), a wide area network (WAN), the Ethernet, a token ring, or via any appropriate communications medium or combination of communications mediums. Each of the devices comprises processors, examples of which are disclosed above, that are adapted to communicate with the computers. In an embodiment, each of the computers is equipped with a network communication device, for example, a network interface card, a modem, or another network connection device suitable for connecting to a network. Each of the computers and the devices executes an operating system, examples of which are disclosed above. While the operating system may differ depending on the type of computer, the operating system provides the appropriate communications protocols to establish communication links with the network. Any number and type of machines may be in communication with the computers.
In an embodiment, the visual guidance system indicates alignment of the physical object 101 with the reference axis 301 of the pre-determined reference pose, the visual guidance system comprising one or more processors coupled and configured with components of the visual guidance system for indicating alignment of the physical object 101 with the pre-determined reference axis 301. The system comprises the three-dimensional display device 100 for rendering the first surface 103 with the intersection point 304 of the reference axis 301 on the first surface 103, the physical object 101 for performing an action, the tracking system 102 for tracking the position and orientation of the physical object 101, and the memory device comprising the reference axis 301 of the pre-determined reference pose. The three-dimensional display device 100 renders the body-fixed axis 300 based on the tracked position and orientation of the physical object, and the plurality of sets of feature graphics on the first surface 103 and the second surface 305 in one or more visual states, wherein at least one set of feature graphics of the plurality of sets of feature graphics is positionally distributed along the pre-determined reference axis 301. The three-dimensional display device 100 renders modified visual states of the plurality of sets of feature graphics based on the extent of alignment between the body-fixed axis 300 and the reference axis 301. In an embodiment, rendering comprises providing and/or displaying the first surface and the second surface. The set of feature graphics distributed along the reference axis 301 comprises the first reference feature graphic 302 and the second reference feature graphic 307, and the other set of feature graphics comprises the first dynamic feature graphic 310 and the second dynamic feature graphic 312. The position and orientation of the first surface 103 are the same as the position and orientation of the real environment object 104. The second surface 305 rendered by the three-dimensional display device 100 is transparent. The tracking system 102 also tracks the position and orientation of the three-dimensional display device 100 in real time.
The line 300 is the body-fixed axis of the physical object 200, intersecting the first surface 103 at the intersection point 308 and intersecting the second surface 305 at the intersection point 309. In real time, as the user moves the physical object 200, the body-fixed axis 300, the intersection point 308, and the intersection point 309 are updated.
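A minimal sketch of this update, under the assumption that both surfaces are treated as planes and that the tracking system supplies the object pose as a rotation and a translation (all names below are illustrative, not part of this disclosure):

```python
import numpy as np

def body_fixed_axis_world(R, t, axis_origin_obj, axis_dir_obj):
    """Transform the body-fixed axis, predefined in the physical object's
    coordinate frame, into world coordinates using the tracked pose (R, t)."""
    return R @ axis_origin_obj + t, R @ axis_dir_obj

def plane_intersection(origin, direction, plane_point, plane_normal):
    """Point where the axis pierces a planar surface; None if parallel."""
    denom = float(np.dot(direction, plane_normal))
    if abs(denom) < 1e-9:
        return None
    s = float(np.dot(plane_point - origin, plane_normal)) / denom
    return origin + s * direction

# On every tracking update, both intersection points would be recomputed:
# o, d = body_fixed_axis_world(R, t, axis_origin_obj, axis_dir_obj)
# p308 = plane_intersection(o, d, first_surface_pt, first_surface_n)
# p309 = plane_intersection(o, d, second_surface_pt, second_surface_n)
```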
The first dynamic feature graphic 310 could be any symmetric shape drawn on the first surface 103 coupled to the intersection point 308. In an embodiment, the first dynamic feature graphic 310 is a filled circle centered about the intersection point 308. The second dynamic feature graphic could be any symmetric shape drawn on the second surface 305 coupled to the intersection point 309, for example, an annular ring 312 centered at the intersection point 309. The initial visual state of both the first dynamic feature graphic 310 and the second dynamic feature graphic 312 is the second visual state 311, a yellow colour in this example. Although the first and the second dynamic feature graphics share the same colour here, they could also have different colours. The first dynamic feature graphic 310 and the second dynamic feature graphic 312 have the same dimensions as the first reference feature graphic 302 and the second reference feature graphic 307, respectively.
There are two features of the visualization that enhance the viewer's hand-eye coordination. First, the virtual patient model 103 and the body-fixed axis 300 are rendered to the user in a perspective and orientation close to the user's perception of the physical object 200 and the real patient 104. Second, the relative pose between the physical object 200 and the real patient 104 is kept the same as the relative pose between the body-fixed axis 300 and the virtual patient model 103, thereby enabling the user to perceive the body-fixed axis 300 as mimicking the motions of the physical object 200 in the real environment.
The extent of alignment is governed by the alignment of the body-fixed axis 300 with the reference axis 301 and by the alignment of the third reference feature graphic 600 with the third dynamic feature graphic 502. As the alignment error between the body-fixed axis 300 and the reference axis 301 decreases, the area of overlap between the reference feature graphics and the dynamic feature graphics on the first surface 103 and the second surface 305 increases. The decreased error between the body-fixed axis 300 and the reference axis 301 leads to modification of the visual states of the areas of overlap. As the spatially tracked physical object 101 advances along the reference axis 301 and approaches the intended depth, the distance between the reference point and the body-fixed point decreases, and thereby the distance between the third reference feature graphic 600 and the third dynamic feature graphic 502 decreases. In response to achieving the intended depth along the reference axis within a threshold, both the third reference feature graphic 600 and the third dynamic feature graphic 502 are brought into the same modified visual state.
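The following Python fragment sketches this state logic under stated assumptions (circular feature graphics of equal radius, a planar distance test per surface, and a scalar depth tolerance); the function and parameter names are illustrative only.

```python
import numpy as np

def update_visual_states(p_ref1, p_dyn1, p_ref2, p_dyn2, radius,
                         reference_point, body_fixed_point, depth_tol):
    """Derive visual-state changes from the extent of alignment (sketch).

    Feature graphics on a surface begin to overlap once the reference and
    dynamic intersection points are closer than the graphic diameter."""
    overlaps_first = np.linalg.norm(p_dyn1 - p_ref1) < 2 * radius
    overlaps_second = np.linalg.norm(p_dyn2 - p_ref2) < 2 * radius
    axes_aligned = overlaps_first and overlaps_second

    # Depth component: the third feature graphics share a modified visual
    # state once the body-fixed point is within the threshold distance of
    # the reference point.
    depth_reached = np.linalg.norm(body_fixed_point - reference_point) <= depth_tol
    return axes_aligned, depth_reached
```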
In an embodiment, the first surface 103 is one of transparent, translucent, and opaque, or a combination thereof. In an embodiment, the real environment object 104 is a patient or any body part of the patient. In another embodiment, the real environment object 104 is any physical object that exists in the real world. In an embodiment, the first surface 103 is a three-dimensional visualization of the real environment object 104. In another embodiment, the first surface 103 is a plane rendered at an offset from the real environment object 104. In an embodiment, an action is a medical procedure. In another embodiment, an action is a non-medical procedure. In an embodiment, the first visual state 303 and the second visual state 311 are distinct, and the third visual state 403 is distinct from the first visual state 303 and the second visual state 311. In another embodiment, the first visual state 303 and the second visual state 311 are not distinct, and the third visual state 403 is distinct from the first visual state 303 and the second visual state 311.
The method further comprises rendering 904 the plurality of sets of feature graphics on the first surface 103 and the second surface 305 in one or more visual states, wherein at least one set of feature graphics of the plurality of sets of feature graphics comprises reference feature graphics that are positionally distributed along the reference axis 301 of the pre-determined reference pose. The method further comprises rendering the first reference feature graphic 302 of the first visual state 303 on the first surface 103 coupled to the point of intersection 304 of the reference axis 301 with the first surface 103, and rendering the second reference feature graphic 307 of the first visual state 303 on the second surface 305 coupled to the point of intersection 306 of the reference axis 301 with the second surface 305, wherein the positions of the second reference feature graphic 307, the first reference feature graphic 302, and the reference axis 301 are static. The method further comprises rendering the first dynamic feature graphic 310 of a second visual state 311 on the first surface 103 coupled to the point of intersection 308 of the body-fixed axis 300 with the first surface 103, and rendering the second dynamic feature graphic 312 of the second visual state 311 on the second surface 305 coupled to the point of intersection 309 of the body-fixed axis 300 with the second surface 305, wherein the positions of the first dynamic feature graphic 310 and the second dynamic feature graphic 312 are updated in real time based on the position and the orientation of the physical object 101. The set of feature graphics distributed along the reference axis 301 comprises the first reference feature graphic 302 and the second reference feature graphic 307, and the other set of feature graphics comprises the first dynamic feature graphic 310 and the second dynamic feature graphic 312.
The dimension of the first dynamic feature graphic 310 is equal to the dimension of the first reference feature graphic 302, and the dimension of the second dynamic feature graphic 312 is equal to the dimension of the second reference feature graphic 307. Upon intersection of the first reference feature graphic 302 with the first dynamic feature graphic 310 and of the second reference feature graphic 307 with the second dynamic feature graphic 312, the portions of intersection 402, 404 are displayed in a third visual state 403 distinct from the first visual state 303 and the second visual state 311. In an embodiment, the first visual state 303 is a first colour, the second visual state 311 is a second colour, and the third visual state 403 is a third colour. In another embodiment, the visual states are shapes: the first visual state 303 of the reference feature graphics is a first shape, the second visual state 311 of the dynamic feature graphics is a second shape, and the modified visual state 403 is a third shape. The position and orientation of the first surface 103 are the same as the position and orientation of the real environment object 104. The perspective of the user is tracked, and the measurement of the perspective of the user is used for displaying the virtual three-dimensional environment in the same orientation as that of the real environment object 104. The method further comprises updating the orientation of the other set of feature graphics of the plurality of sets of feature graphics based on a current position and orientation of the physical object 101.
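Because the matching reference and dynamic graphics have equal dimensions, the portion rendered in the third visual state grows as the intersection points converge. For two equal circles of radius r whose centers are a distance d apart, the overlap (lens) area follows the standard plane-geometry formula A = 2r²·cos⁻¹(d/(2r)) − (d/2)·√(4r² − d²), sketched below as a hypothetical helper.

```python
import math

def equal_circle_overlap_area(r, d):
    """Lens area where two equal circles of radius r overlap, given the
    distance d between their centers (standard plane geometry)."""
    if d >= 2 * r:
        return 0.0             # graphics fully separated: no third-state region
    if d <= 0:
        return math.pi * r**2  # perfect alignment: complete overlap
    return 2 * r**2 * math.acos(d / (2 * r)) - (d / 2) * math.sqrt(4 * r**2 - d**2)
```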
The method further comprises updating 905 the positions of the other set of feature graphics of the plurality of sets of feature graphics based on a current position and orientation of the physical object 101, wherein the other set of feature graphics comprises dynamic feature graphics that are positionally distributed along the body-fixed axis 300 of the physical object 101, and modifying 906 the visual states of the plurality of sets of feature graphics based on the extent of alignment between the body-fixed axis 300 and the reference axis 301. The tracking system 102 provides an input to the three-dimensional display device 100, based on the tracking of the position and orientation of the physical object 101, for creating the real-time body-fixed axis 300 and updating the positions of the other set of feature graphics. The real environment object 104 is spatially tracked, and the reference pose is static with respect to the real environment object 104.
The pre-determined reference pose comprises the reference direction non-parallel to the reference axis and/or a reference point on the reference axis. The modification of the visual states of the reference feature graphics and the dynamic feature graphics, based on the extent of alignment of the body-fixed axis 300 with the reference axis 301 and of a body-fixed direction with the reference direction, is performed by acquiring a real-time measurement of the predefined body-fixed direction non-parallel to the body-fixed axis 300. The method further comprises acquiring a real-time measurement of a body-fixed point on the body-fixed axis 300, rendering the third reference feature graphic at the reference point along the reference axis 301 in an initial visual state, rendering the third dynamic feature graphic coupled to the body-fixed point in an initial visual state, and modifying the visual states of the third reference feature graphic and the third dynamic feature graphic based on the distance between the body-fixed point and the reference point.
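One plausible way to quantify this orientation-about-the-trajectory component, sketched here with illustrative names and under the assumption that both directions are unit vectors non-parallel to the axis, is to compare the two directions after projecting out their components along the reference axis:

```python
import numpy as np

def roll_alignment_deg(ref_axis, ref_direction, body_direction):
    """Angle between the reference and body-fixed directions, measured in the
    plane perpendicular to the reference axis (illustrative sketch)."""
    def perpendicular_part(v):
        v = v - np.dot(v, ref_axis) * ref_axis  # remove the axial component
        return v / np.linalg.norm(v)
    cos_angle = np.clip(
        np.dot(perpendicular_part(ref_direction), perpendicular_part(body_direction)),
        -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_angle)))
```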
The method and the visual guidance system disclosed herein are not limited to a particular computer system platform, processor, operating system, or network. The method and the visual guidance system disclosed herein are not limited to be executable on any particular system or group of systems, and are not limited to any particular distributed architecture, network, or communication protocol.
In an embodiment, the computer programs that implement the methods and algorithms disclosed herein are stored and transmitted using a variety of media, for example, the computer-readable media, in a number of manners. In an embodiment, hard-wired circuitry or custom hardware is used in place of, or in combination with, software instructions for implementing the processes of various embodiments. Therefore, the embodiments are not limited to any specific combination of hardware and software. The computer program codes comprising computer executable instructions can be implemented in any programming language. Examples of programming languages that can be used comprise C, C++, C#, Java®, JavaScript®, Fortran, Ruby, Perl®, Python®, Visual Basic®, hypertext preprocessor (PHP), Microsoft® .NET, Objective-C®, etc. Other object-oriented, functional, scripting, and/or logical programming languages can also be used. In an embodiment, the computer program codes or software programs are stored on or in one or more mediums as object code. In another embodiment, various aspects of the method and the visual guidance system disclosed herein are implemented in a non-programmed environment comprising documents created, for example, in a hypertext markup language (HTML), an extensible markup language (XML), or another format that renders aspects of a graphical user interface (GUI) or performs other functions when viewed in a visual area or a window of a browser program. In another embodiment, various aspects of the method and the visual guidance system disclosed herein are implemented as programmed elements, or non-programmed elements, or any suitable combination thereof.
The foregoing examples have been provided merely for the purpose of explanation and are in no way to be construed as limiting the method and the visual guidance system disclosed herein. While the method and the visual guidance system have been described with reference to various embodiments, it is understood that the words which have been used herein are words of description and illustration, rather than words of limitation. Furthermore, although the method and the visual guidance system have been described herein with reference to particular means, materials, and embodiments, they are not intended to be limited to the particulars disclosed herein; rather, the method and the visual guidance system extend to all functionally equivalent structures, methods, and uses that are within the scope of the appended claims. While multiple embodiments are disclosed, it will be understood by those skilled in the art, having the benefit of the teachings of this specification, that the method and the visual guidance system disclosed herein are capable of modifications, and that other embodiments may be effected and changes may be made thereto without departing from the scope and spirit of the method and the system disclosed herein.
Those skilled in this technology can make various alterations and modifications without departing from the scope and spirit of the invention. Therefore, the scope of the invention shall be defined and protected by the following claims and their equivalents.
In the foregoing detailed description of embodiments of the invention, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the invention require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description of embodiments of the invention, with each claim standing on its own as a separate embodiment.
It is understood that the above description is intended to be illustrative, and not restrictive. It is intended to cover all alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention as defined in the appended claims. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of the invention should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the terms “comprising” and “wherein,” respectively.
Number | Date | Country | Kind
201821030732 | Aug 2018 | IN | national
Filing Document | Filing Date | Country | Kind
PCT/IN2019/050602 | 8/16/2019 | WO | 00