The present subject matter relates, in general, to fundus imaging systems and, more particularly but not exclusively, to a method and an apparatus for wider-angle imaging of the fundus of an eye of a subject.
It is often useful, typically for medical reasons, to examine the fundus of the eye of a human or animal. The fundus is the interior surface of the eye opposite the lens, and includes the retina, optic disc, fovea, etc. Visible abnormalities in the eye may harm the eye, or reveal the state of other organs of the body.
Most often, with a patient able to communicate, such examination involves directing the patient to look at something, standardizing the orientation and state of the eye. The gaze of a small child can be briefly captured by showing an interesting object, but this does not allow for a prolonged study. It is hard or impossible to capture the gaze of an animal or a baby, particularly if premature, or the gaze of a human of any age who is unconscious or otherwise unresponsive, for example due to trauma. This can make the non-contact scheme used in fundus imaging and in prescribing glasses impractical.
Hence, the medical practitioner needs to apply an ophthalmic coupling gel as an optical bridge between the cornea and the lens of the imaging device, and hold the imaging device against the eye. This contact increases discomfort for the patient, and hence there is a need to finish the procedure quickly. Further, the imaging device holds the eye in position, obviating the need to manage gaze.
Another factor in eye examination is the size of the pupil, which varies with the state of the iris. It is easier to see more through a wide window than a narrow one. In mature humans and in infants, the pupil enlarges in dim light; eye examinations therefore normally avoid bright lighting. This reflex, however, is lacking in a premature baby. Suitable eye-drops enlarge the pupil in most patients, though not necessarily in cases of trauma or drug effects. The drops take up to half an hour to act, and hours to dissipate, and they cause discomfort even in adults. There is thus a range of situations where the fundus must be examined through a narrow pupil. This gives a further reason for contact examination, since one can see more through a small window from very close than from a larger distance.
Even with an imaging system in contact with the cornea, the field of view is limited, as illustrated in
It is usually required to see the view from more than one direction, so the device 100 is rotated, while neither losing contact with the cornea 150 nor applying unsafe pressure, and while remaining aligned on the pupil. This is easier to achieve with direct manual control than with a mechanical mounting. Where the imaging device 200 is an indirect ophthalmoscope, a head-mounted video camera sharing what the physician sees constitutes a Video Indirect Ophthalmoscope (VIO). Alternatively, one may omit the direct optical view by including an image sensor in a hand-held device 100, and viewing its output on a fixed display.
Most commonly the immediate output is live video, which enables the user to explore fundus regions of interest by turning the imaging device 200. The view may be directed toward the fovea, i.e. the centre of the fundus where vision is most acute, or toward the optic disc, i.e. to the nose side of the fovea where glaucoma and other pathologies are most visible, or toward other points of interest. The user may also sweep through different viewpoints systematically, so as to see everything within a certain distance of the fovea. This is valuable in diagnosis and screening, and the digital output can be stored for comparison, sharing, and so on. A video record of such a sweep, however, requires a large amount of storage, and is cumbersome to search and revisit. It is more helpful to select from the video record a set of still images which between them cover a wide region, and digitally stitch them together to create a single view of that region. Unfortunately, many individual VIO frames have poor quality, and one study reports that only 24% of such videos can be used for Retinopathy of Prematurity (ROP) evaluation with the ROP tool. A few well-chosen stills are of better quality, with correspondingly better stitched wide-angle results, but it remains the responsibility of the user to move smoothly enough, through a wide enough range of directions, for the stitching to cover the desired region. Only a user with long familiarity with retinal geography, and practice in moving a device's direction of view around it, can produce adequate results. The cost of such imaging devices is quite high. Further, the usage of such imaging devices is limited by insufficient pools of expertise.
Manual or automated selection of stills from the video record must be based not only on individual quality, but also on their collective overlap and lacunae. Moreover, live or recorded video requires continuous illumination of the retina, which is harmful to eyes at the developmental stage of a premature infant.
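Such a selection could be sketched as follows. This is a hypothetical illustration, not a selection criterion prescribed by the disclosure; it assumes each frame carries a scalar quality score and a unit viewing-direction vector, and keeps high-quality frames whose directions are mutually separated so the kept stills add coverage rather than duplicating one another.

```python
import math

def _angle(u, v):
    """Angle in degrees between two unit vectors."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
    return math.degrees(math.acos(dot))

def select_stills(frames, quality_min=0.5, min_separation=20.0):
    """Greedily pick frames: drop those below the quality threshold,
    then keep the best-quality frames whose viewing directions differ
    by at least min_separation degrees from every frame already kept."""
    usable = [f for f in frames if f["quality"] >= quality_min]
    usable.sort(key=lambda f: f["quality"], reverse=True)
    chosen = []
    for f in usable:
        if all(_angle(f["dir"], c["dir"]) >= min_separation for c in chosen):
            chosen.append(f)
    return chosen
```

A real system would replace the scalar threshold with an image-quality metric and the angular test with an explicit overlap-and-lacunae analysis; the greedy structure, however, captures the stated requirement that both quality and collective coverage drive the selection.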
In an aspect of the present disclosure, a method for wider-angle imaging of the fundus of an eye of a subject is provided. The method comprises guiding placement of an imaging device at a set of predetermined locations on the eye; receiving a set of images of the fundus of the eye upon determining the imaging device to be at at least one of the set of predetermined locations, wherein the set of images is captured by the imaging device upon receiving instructions from a computing unit; determining quality of the set of images; and concatenating the one or more images to provide a wider-angle target image of the fundus of the eye.
In an embodiment of the present disclosure, a computing unit for wider-angle imaging of the fundus of an eye of a subject is provided. The computing unit comprises a processor and a memory communicatively coupled to the processor. The memory stores processor-executable instructions which, on execution, cause the processor to guide placement of an imaging device at a set of predetermined locations on the eye; receive a set of images of the fundus of the eye upon determining the imaging device to be at the set of predetermined locations, wherein the set of images is captured by the imaging device upon receiving instructions from the computing unit; determine quality of the set of images; and concatenate the one or more images to provide a target image of the fundus of the eye.
The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the figures to reference like features and components. Some embodiments of system and/or methods in accordance with embodiments of the present subject matter are now described, by way of example only, and with reference to the accompanying figures, in which:
In the present document, the word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment or implementation of the present subject matter described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments.
While the disclosure is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described in detail below. It should be understood, however, that it is not intended to limit the disclosure to the particular forms disclosed, but on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the spirit and the scope of the disclosure.
The terms “comprises”, “comprising”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, device or method that comprises a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or device or method. In other words, one or more elements in a system or apparatus preceded by “comprises . . . a” does not, without more constraints, preclude the existence of other elements or additional elements in the system or apparatus.
Embodiments of the present disclosure are related to a method and an apparatus for wider-angle imaging of the fundus of an eye of a subject. In particular, the present disclosure relates to a method for capturing images of the retina of an unresponsive subject. The method provides a guidance scheme for using the apparatus. Also, the method provides for constructing therefrom a wide-angle view of the fundus of the eye.
In the following detailed description of the embodiments of the disclosure, reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific embodiments in which the disclosure may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure, and it is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the present disclosure. The following description is, therefore, not to be taken in a limiting sense.
The term “subject” refers to any human being, including an infant, a child, a patient or any other person.
In an embodiment, the imaging device 200 is a hand-held unit intended to be used near the eye or touching the eye. A tip 203 of the imaging device 200 is shaped to be held against the cornea and coupled to the eye using ophthalmic gel. The coupling is done for optical advantages and to stabilise the eye. However, a person skilled in the art would understand that uncoupled embodiments are also possible. The hand-held unit 200 includes a light emitting unit 205 for delivering light through the cornea and lens of the eye. The hand-held unit 200 further comprises a light collecting unit 207 for collecting the light reflected from the fundus and directing the collected light to an image sensor 209, which is capable of obtaining still images. The sensor 209 is connected to the computing unit 212 through a communication network (not shown). In an embodiment, the sensor 209 may be connected wirelessly or through a wired connection to the computing unit 212. The communication network may include, without limitation, a direct interconnection, local area network (LAN), wide area network (WAN), wireless network (e.g., using Wireless Application Protocol), the Internet, etc.
The imaging device 200 receives instructions for capturing images from the computing unit 212. Then, the imaging device 200 transmits the captured images to the computing unit 212 for further processing. In an embodiment, the image sensor 209 captures still images of high resolution and quality. In an embodiment, the image sensor 209 may also capture a video. As an example, the image sensor 209 may be a hyper-spectral sensor.
The images may be acquired with the imaging device 200 held still for each image, or at moments selected by the computing unit 212. In the latter case, the imaging device 200 is kept in continuous, un-paused motion, and the computing unit 212 selects each moment when the device is detected to be at an appropriate position.
In an embodiment, the imaging device 200 is moved by a human user. The method of the present disclosure guides the user either to bring the device to the selected positions as successive static locations, or to move continuously through them. Feedback is also provided when a re-take is needed.
Suppose that the patient's eye is directed vertically upward. The eye axis then corresponds to the vertical arrow 301 in
In the image acquired by the imaging device 200, what is seen through the iris is approximately a disk centred on the ‘straight forward’ point of the imaging device 200, so that at all points of interest a rotation in software can compensate for rotation about the unit axis.
In an exemplary embodiment, one direction 302 is along the eye axis, with the others grouped around it. Each axis direction gives a set of through-iris visibility directions, which is approximately a cone centred at the tip 203 of the imaging device 200. The focusing effect of the eye leads to a more complicated dependence of visibility on the orientation of the unit, but for clarity of exposition, a common centre for all views is considered. Further, it is considered that through-iris visibility exists in a right circular cone whose axis is the optical axis of the imaging device 200.
The present disclosure requires images of retinal regions that overlap so as to create a larger region without gaps, and the approximation allows us to consider cones of visibility from a shared point, and their overlaps. These may be visualised using the boundary circles of their intersection with a unit sphere.
It is not sufficient, however, to ensure that the angle between viewing directions satisfies the condition below:
β ≤ 2α (1)
which holds in
For small angles, condition (2) above requires approximately that
β ≤ α√3 (3)
The condition is like the relation between the sides of an equilateral triangle and the distance sufficient to cover its centre from a corner. However, on the sphere of possible directions, there are only a few networks of points forming equiangular triangles, and the angles are not small. The points are necessarily vertices of a regular polyhedron, of which only the tetrahedron, octahedron and icosahedron have triangular faces. The smallest such angle β between neighbouring vertices, measured from the centre, is that of the icosahedron, approximately 63.43°, and condition (2) requires that the visibility cone angle α be at least 37.38°. If a particular design for the unit enables such an α, six of the twelve vertex directions of the icosahedron can be used (the other six being the same ones, backward).
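These figures can be checked numerically. The sketch below is illustrative only: it computes the central angle β between adjacent icosahedron vertices, and then the minimum cone angle α, taking the exact form of condition (2) to be that the cone reach the spherical circumcentre of the equilateral triangle formed by three neighbouring directions.

```python
import math

# Adjacent unit vertices of a regular icosahedron have dot product
# 1/sqrt(5), so the central angle between neighbouring vertex
# directions is:
beta = math.degrees(math.acos(1 / math.sqrt(5)))

# Spherical circumradius r of an equilateral spherical triangle of
# side beta: the three vertices sit 120 degrees apart around the
# circumcentre, and the spherical law of cosines gives
#   cos(beta) = cos^2(r) + sin^2(r) * cos(120 deg)
# which rearranges to cos^2(r) = (2 * cos(beta) + 1) / 3.
cos_r = math.sqrt((2 * math.cos(math.radians(beta)) + 1) / 3)
alpha_min = math.degrees(math.acos(cos_r))
```

Running this yields β close to 63.43° and a required α close to 37.38°, consistent with the small-angle approximation α ≥ β/√3 of condition (3), which gives roughly 36.6°.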
An alternative use,
Different selections of direction sets are appropriate in different embodiments of the present disclosure, because different medical goals give rise to different needs, both in the size of the retinal region imaged and in its positioning. In some contexts, the most important areas of the retina are the fovea, where sensitivity to detail is most acute, and the surrounding macula. The fovea lies opposite the eye's iris and lens, so one may centre an image on it by alignment with the eye axis; as an example, for a composite image, by aligning 701, 800 or analogous directions with it. Further, the macula may be successfully imaged by a three-region overlap. For a wider view, a set of five or six directions around a central direction like
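As a hypothetical illustration of such a set, the following sketch generates a central direction plus a ring of n directions on a cone of half-angle beta around it; the count and the angle are free parameters for illustration, not values fixed by the disclosure.

```python
import math

def ring_directions(n, beta_deg, include_centre=True):
    """Unit vectors for a direction set: an optional central direction
    along +z, plus n directions evenly spaced on a cone of half-angle
    beta_deg around that axis."""
    b = math.radians(beta_deg)
    dirs = [(0.0, 0.0, 1.0)] if include_centre else []
    for k in range(n):
        t = 2 * math.pi * k / n  # azimuth of the k-th ring direction
        dirs.append((math.sin(b) * math.cos(t),
                     math.sin(b) * math.sin(t),
                     math.cos(b)))
    return dirs
```

Aligning the +z axis of such a set with the eye axis centres the composite on the fovea; rotating the whole set re-centres it on the optic disc or another point of interest.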
Even within the context of ROP, then, the medical need may be for a combined image of part of the retina centred on the optic disc, or for a wide overview that includes Zone III and, for efficiency in acquisition, should centre on the fovea. Correspondingly, the direction set to be used may be symmetrical about an axis through the optic disc or an axis through the fovea, with or without alignment of one of the set directions with the symmetry axis.
As illustrated in
The order in which the method is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method. Additionally, individual blocks may be deleted from the methods without departing from the spirit and scope of the subject matter described herein. Furthermore, the method can be implemented in any suitable hardware, software, firmware, or combination thereof.
In an embodiment, the subject is placed in a supine position, with gaze upward. A user of a device following current art needs considerable experience with retinal imaging to choose directions by what is seen through the device.
Referring back to
The process of guiding the placement of the imaging device 200 using a reflection method is described herein. In an embodiment, a detachable curved reflective surface 270 is added to the imaging device 200, and a target light or other bright source is included at a fixed point in the environment. If the viewpoint of the user of the imaging device 200 is fixed at another point in the environment, the point on the surface 270 at which the user sees the reflection of the bright source depends on the orientation of the imaging device 200. If the user moves the imaging device 200 (maintaining contact of the tip 203 with the eye) to make this reflection point coincide with a marking on the surface 270, this constrains the possible orientations of the imaging device 200. If, further, the bright source has a visible direction (for instance, as a short narrow strip of light), and the marking on the surface similarly has a direction, aligning the reflection and its direction with the marking completely determines the orientation of the imaging device 200. A set of such directional markings, each corresponding to an orientation in which the imaging device 200 produces a view in one of the desired set of directions, thus enables the user to place the imaging device 200 in each such orientation in turn. A change of the surface 270 to one with different markings enables a different set.
Another technique for guiding the placement of the imaging device 200 is a gravitational technique. The curved surface 270 may be transparent and display a bubble beneath it, in the manner of a spirit level. For each orientation, a particular point on the surface 270 is highest, so that the bubble moves to that point.
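The relation between device tilt and bubble position can be sketched as follows, assuming an idealised spherical cap rigidly attached to the device; this is an illustrative model, not a prescribed implementation.

```python
import math

def bubble_offset(device_axis):
    """Angular offset of the bubble from the pole of a spherical cap
    attached to the device.  The bubble rises to the highest point of
    the cap, i.e. where the outward normal is closest to the world
    'up' direction, so its offset from the cap's pole equals the tilt
    of the device axis from vertical (device_axis is a unit vector)."""
    up = (0.0, 0.0, 1.0)
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(device_axis, up))))
    return math.degrees(math.acos(dot))
```

Marking the cap with rings and radial lines at the offsets computed this way would let the user steer the bubble onto the marking that corresponds to each target orientation.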
Another alternative embodiment for guiding the placement of the imaging device 200 uses a digital technique. The position and orientation of the imaging device 200 may be found at successive moments by digital processing of electronically acquired data. For example, one or more digital video cameras may acquire fixed-viewpoint images of an object with landmark features, from which algorithmic recognition enables computation of the location and orientation of the imaging device 200. In another example, sensors within an object may measure its translational and rotational acceleration, enabling numerical integration to deduce its current location and orientation. Any system for achieving this, current or developed in the future, may be used within the embodiments of the present disclosure.
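The inertial variant might be sketched as below: a minimal dead-reckoning of position from translational acceleration samples by repeated Euler integration. A practical tracker would also integrate gyroscope rates into an orientation estimate and correct the inevitable drift, which this illustration omits; all names and the gravity-compensated acceleration assumption are hypothetical.

```python
def integrate_pose(samples, dt, p0=(0.0, 0.0, 0.0), v0=(0.0, 0.0, 0.0)):
    """Dead-reckon device position from gravity-compensated
    translational acceleration samples (world frame) by integrating
    acceleration into velocity and velocity into position."""
    p, v = list(p0), list(v0)
    for a in samples:
        for i in range(3):
            v[i] += a[i] * dt  # velocity update
            p[i] += v[i] * dt  # position update
    return tuple(p)
```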
In an embodiment, the computing unit 212 can display to the user the current orientation of the imaging device 200 relative to a target orientation. In such an embodiment, the computing unit 212 continually recalculates the location of the tip of the imaging device 200. The tip should be on the cornea throughout the procedure. Any substantial change signals that the imaging device 200 has slipped or that the tracking system is wrong, either of which is signalled to the user as a fault.
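The tip check can be sketched as follows, assuming the tracking system yields the device position and a 3x3 row-major rotation matrix; the function names, offset and threshold are illustrative, not part of the disclosure.

```python
def tip_position(device_pos, device_rot, tip_offset):
    """World position of the tip: the device position plus the fixed
    tip offset rotated from the device frame into the world frame."""
    return tuple(device_pos[i]
                 + sum(device_rot[i][j] * tip_offset[j] for j in range(3))
                 for i in range(3))

def tip_slipped(tip_now, tip_ref, tolerance):
    """Flag a fault when the computed tip has drifted from the corneal
    contact point established at the start of the procedure."""
    d2 = sum((a - b) ** 2 for a, b in zip(tip_now, tip_ref))
    return d2 > tolerance ** 2
```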
At block 1620, receiving, by the computing unit, a set of images of the fundus of the eye upon determining the imaging device to be at at least one of the set of predetermined locations. In an embodiment, the set of images is captured by the imaging device 200 upon receiving instructions from the computing unit 212.
In an exemplary embodiment, since the computing unit 212 can assess whether the imaging device 200 is in an appropriate position, it is not necessary for the user to take a separate action to trigger the acquisition of an image. The computing unit 212 can highlight each mark 1211 or 1311 in turn, wait until the imaging device 200 is within tolerance of the required direction, acquire an image, and highlight the next mark. If the image is acquired by a brief flash, motion artefacts may be absent, so the user needs only to make the marker 1455 follow a curve sufficiently close to a trajectory 1400 that each mark 1411 is visited in turn. This can be done faster than by watching video output of a retinal image, with its need to fixate on landmarks individual to the subject, and it can be trained without the need of an artificial eyeball with an internal pattern.
In the exemplary embodiment illustrated in
In an embodiment, the user of the imaging device 200 selects a region, such as “Zone 2”, to be imaged, and a choice of left or right eye, which determines the direction set to be used. As described above, the patient is placed in a supine position, with gaze upward. The user places the imaging device 200 against the cornea of the chosen eye, optically coupled to it by gel.
The computing unit 212 in or connected to the imaging device 200 uses a display, on the imaging device 200 or separate, to show markers corresponding to the current orientation and to those in the selected direction set, with the first one highlighted. While the user moves the imaging device 200, maintaining contact with the eye, the computing unit 212 repetitively tests whether the optical axis of the imaging device 200 is within a tolerance ‘ε’ of the next required direction (and, optionally, a required angle around that direction). If it is not, the highlight remains on the current target option while the user continues to move the imaging device 200 in a direction intended to bring its axis closer to the target. If it is within tolerance, the computing unit 212 triggers a flash and acquires the corresponding image.
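The tolerance test might be realised as a simple angular comparison between unit vectors; the sketch below is one reasonable assumption about that test, not the disclosure's mandated implementation.

```python
import math

def within_tolerance(axis, target, eps_deg):
    """True when the device's optical axis is within eps_deg of the
    target direction (both given as unit vectors)."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(axis, target))))
    return math.degrees(math.acos(dot)) <= eps_deg
```

An optional required angle around the target direction could be tested the same way, by comparing a reference vector fixed in the device frame against its required world direction.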
At block 1630, determining, by the computing unit, quality of the set of images. If the quality of an image is not adequate, the highlight remains on the current target, and the user continues to handle the imaging device 200 with the same target until the image quality is acceptable. If the image quality is adequate, the computing unit 212 checks 1609 whether any directions remain uncovered. If all the directions are not yet covered, the computing unit 212 advances to the next target and highlights it. In the alternative, if all the directions are covered, the method proceeds to block 1640.
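The flow at blocks 1620 and 1630 can be sketched as the following loop, with callbacks standing in for the tracking, camera and quality-assessment components; all names are illustrative, and a practical version would poll the tracker at a bounded rate rather than busy-wait.

```python
def acquire_set(targets, is_aligned, capture, quality_ok):
    """For each target direction in turn: wait until the device axis
    is within tolerance, capture an image (with a flash), and retake
    until its quality is adequate, then advance to the next target."""
    images = []
    for target in targets:
        while True:
            if not is_aligned(target):
                continue            # user still steering toward the mark
            img = capture()
            if quality_ok(img):
                images.append(img)  # advance highlight to the next mark
                break
    return images
```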
At block 1640, concatenating, by the computing unit, the one or more images to provide a wider-angle target image of the fundus of the eye. The separate images overlap in their coverage of fundus regions. The computing unit 212 stitches the images together into a single wide-angle fundus image.
Each image is non-linearly distorted relative to the spherical shape of the retina and, more importantly, relative to each other image.
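Real stitching involves feature matching and non-linear warping of each image into a common spherical parameterisation of the retina. The sketch below is purely illustrative and shows only the final step: accumulating already-warped images, here represented as sparse maps from (lat, lon) bins to intensity, into one mosaic and averaging the overlaps.

```python
def blend_mosaic(images):
    """Merge overlapping pre-warped views into one wide-angle map.
    Each image is a dict mapping (lat, lon) bins to intensity; bins
    covered by several images are averaged."""
    total, count = {}, {}
    for img in images:
        for cell, value in img.items():
            total[cell] = total.get(cell, 0.0) + value
            count[cell] = count.get(cell, 0) + 1
    return {cell: total[cell] / count[cell] for cell in total}
```

In practice the per-image warp, exposure compensation and seam blending dominate the difficulty; the averaging step stands in for those here only to make the overlap requirement concrete.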
The processor 1802 may be disposed in communication with one or more input/output (I/O) devices (1811 and 1812) via I/O interface 1801. The I/O interface 1801 may employ communication protocols/methods such as, without limitation, audio, analog, digital, monoaural, RCA, stereo, IEEE-1394, serial bus, universal serial bus (USB), infrared, PS/2, BNC, coaxial, component, composite, digital visual interface (DVI), high-definition multimedia interface (HDMI), RF antennas, S-Video, VGA, IEEE 802.11a/b/g/n/x, Bluetooth, cellular (e.g., code-division multiple access (CDMA), high-speed packet access (HSPA+), global system for mobile communications (GSM), long-term evolution (LTE), WiMax, or the like), etc.
Using the I/O interface 1801, the computer system 1800 may communicate with one or more I/O devices (1811 and 1812). For example, the input device 1811 may be an antenna, keyboard, mouse, joystick, (infrared) remote control, camera, card reader, fax machine, dongle, biometric reader, microphone, touch screen, touchpad, trackball, stylus, scanner, storage device, transceiver, video device/source, etc. The output device 1812 may be a printer, fax machine, video display (e.g., cathode ray tube (CRT), liquid crystal display (LCD), light-emitting diode (LED), plasma, Plasma display panel (PDP), Organic light-emitting diode display (OLED) or the like), audio speaker, etc.
In some embodiments, the processor 1802 may be disposed in communication with a communication network 1809 via a network interface 1803. The network interface 1803 may communicate with the communication network 1809. The network interface 1803 may employ connection protocols including, without limitation, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), transmission control protocol/internet protocol (TCP/IP), token ring, IEEE 802.11a/b/g/n/x, etc. The communication network 1809 may include, without limitation, a direct interconnection, local area network (LAN), wide area network (WAN), wireless network (e.g., using Wireless Application Protocol), the Internet, etc. Using the network interface 1803 and the communication network 1809, the computer system 1800 may communicate with security system 1810.
In some embodiments, the processor 1802 may be disposed in communication with a memory 1805 (e.g., RAM, ROM, etc. not shown in
The memory 1805 may store a collection of program or database components, including, without limitation, user interface application 1806, an operating system 1807, web server 1808 etc. In some embodiments, computer system 1800 may store user/application data 1806, such as the data, variables, records, etc. as described in this disclosure. Such databases may be implemented as fault-tolerant, relational, scalable, secure databases such as Oracle or Sybase.
The operating system 1807 may facilitate resource management and operation of the computer system 1800. Examples of operating systems include, without limitation, Apple Macintosh OS X, Unix, Unix-like system distributions (e.g., Berkeley Software Distribution (BSD), FreeBSD, NetBSD, OpenBSD, etc.), Linux distributions (e.g., Red Hat, Ubuntu, Kubuntu, etc.), IBM OS/2, Microsoft Windows (XP, Vista/7/8, etc.), Apple iOS, Google Android, Blackberry OS, or the like. User interface 1817 may facilitate display, execution, interaction, manipulation, or operation of program components through textual or graphical facilities. For example, user interfaces may provide computer interaction interface elements on a display system operatively connected to the computer system 1800, such as cursors, icons, check boxes, menus, scrollers, windows, widgets, etc. Graphical user interfaces (GUIs) may be employed, including, without limitation, Apple Macintosh operating systems' Aqua, IBM OS/2, Microsoft Windows (e.g., Aero, Metro, etc.), Unix X-Windows, web interface libraries (e.g., ActiveX, Java, Javascript, AJAX, HTML, Adobe Flash, etc.), or the like.
In some embodiments, the computer system 1800 may implement a web browser 1808 stored program component. The web browser may be a hypertext viewing application, such as Microsoft Internet Explorer, Google Chrome, Mozilla Firefox, Apple Safari, etc. Secure web browsing may be provided using HTTPS (secure hypertext transport protocol), secure sockets layer (SSL), Transport Layer Security (TLS), etc. Web browsers may utilize facilities such as AJAX, DHTML, Adobe Flash, JavaScript, Java, application programming interfaces (APIs), etc. In some embodiments, the computer system 1800 may implement a mail server 1819 stored program component. The mail server may be an Internet mail server such as Microsoft Exchange, or the like. The mail server may utilize facilities such as ASP, ActiveX, ANSI C++/C#, Microsoft .NET, CGI scripts, Java, JavaScript, PERL, PHP, Python, WebObjects, etc. The mail server may utilize communication protocols such as Internet Message Access Protocol (IMAP), Messaging Application Programming Interface (MAPI), Microsoft Exchange, Post Office Protocol (POP), Simple Mail Transfer Protocol (SMTP), or the like. In some embodiments, the computer system 1800 may implement a mail client stored program component. The mail client may be a mail viewing application, such as Apple Mail, Microsoft Entourage, Microsoft Outlook, Mozilla Thunderbird, etc.
Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include Random Access Memory (RAM), Read-Only Memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
Advantages of the embodiments of the present disclosure are illustrated herein.
In an embodiment, the positions for imaging are recognized by the user and the computing unit without immediate reference to what the imaging device sees. Therefore, it is not necessary to illuminate the retina for navigational purposes, or to train the user in recognition of retinal features. This reduces total incident light, and simplifies the provision of a wider pool of users qualified to obtain images for screening purposes.
In an embodiment, owing to the stillness of the device, the choice of still photography, and the immediate quality control, the captured images are superior to those extracted from a video record.
In an embodiment, operator-selected still imaging enables the image to be focused suitably before acquisition and storage in a definite ordered sequence. This makes image stitching easier when making the wide-angle image.
In an embodiment, the present disclosure provides a reduced storage requirement compared to current video equipment, for any type of compression.
The described operations may be implemented as a method, system or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof. The described operations may be implemented as code maintained in a “non-transitory computer readable medium”, where a processor may read and execute the code from the computer readable medium. The processor is at least one of a microprocessor and a processor capable of processing and executing the queries. A non-transitory computer readable medium may comprise media such as magnetic storage medium (e.g., hard disk drives, floppy disks, tape, etc.), optical storage (CD-ROMs, DVDs, optical disks, etc.), volatile and non-volatile memory devices (e.g., EEPROMs, ROMs, PROMs, RAMs, DRAMs, SRAMs, Flash Memory, firmware, programmable logic, etc.), etc. Further, non-transitory computer-readable media comprise all computer-readable media except for transitory signals. The code implementing the described operations may further be implemented in hardware logic (e.g., an integrated circuit chip, Programmable Gate Array (PGA), Application Specific Integrated Circuit (ASIC), etc.).
Still further, the code implementing the described operations may be implemented in “transmission signals”, where transmission signals may propagate through space or through a transmission media, such as an optical fiber, copper wire, etc. The transmission signals in which the code or logic is encoded may further comprise a wireless signal, satellite transmission, radio waves, infrared signals, Bluetooth, etc. The transmission signals in which the code or logic is encoded is capable of being transmitted by a transmitting station and received by a receiving station, where the code or logic encoded in the transmission signal may be decoded and stored in hardware or a non-transitory computer readable medium at the receiving and transmitting stations or devices. An “article of manufacture” comprises non-transitory computer readable medium, hardware logic, and/or transmission signals in which code may be implemented. A device in which the code implementing the described embodiments of operations is encoded may comprise a computer readable medium or hardware logic. Of course, those skilled in the art will recognize that many modifications may be made to this configuration without departing from the scope of the invention, and that the article of manufacture may comprise suitable information bearing medium known in the art.
The terms “an embodiment”, “embodiment”, “embodiments”, “the embodiment”, “the embodiments”, “one or more embodiments”, “some embodiments”, and “one embodiment” mean “one or more (but not all) embodiments of the invention(s)” unless expressly specified otherwise.
The terms “including”, “comprising”, “having” and variations thereof mean “including but not limited to”, unless expressly specified otherwise.
The enumerated listing of items does not imply that any or all of the items are mutually exclusive, unless expressly specified otherwise.
The terms “a”, “an” and “the” mean “one or more”, unless expressly specified otherwise.
A description of an embodiment with several components in communication with each other does not imply that all such components are required. On the contrary a variety of optional components are described to illustrate the wide variety of possible embodiments of the invention.
When a single device or article is described herein, it will be readily apparent that more than one device/article (whether or not they cooperate) may be used in place of a single device/article. Similarly, where more than one device or article is described herein (whether or not they cooperate), it will be readily apparent that a single device/article may be used in place of the more than one device or article or a different number of devices/articles may be used instead of the shown number of devices or programs. The functionality and/or the features of a device may be alternatively embodied by one or more other devices which are not explicitly described as having such functionality/features. Thus, other embodiments of the invention need not include the device itself.
The illustrated operations of
Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based here on. Accordingly, the disclosure of the embodiments of the invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.
Number | Date | Country | Kind |
---|---|---|---|
1233/CHE/2015 | Mar 2015 | IN | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/IB2015/053668 | 5/19/2015 | WO | 00 |