Various types of hearing prostheses provide persons with different types of hearing loss with the ability to perceive sound. Hearing loss may be conductive, sensorineural, or some combination of both conductive and sensorineural. Conductive hearing loss typically results from a dysfunction in any of the mechanisms that ordinarily conduct sound waves through the outer ear, the eardrum, and/or the bones of the middle ear. Sensorineural hearing loss typically results from a dysfunction in the inner ear, such as in the cochlea where sound or acoustic vibrations are converted into neural signals, or any other part of the ear, auditory nerve, or brain that may process the neural signals.
Persons with some forms of conductive hearing loss may benefit from hearing prostheses, such as acoustic hearing aids or vibration-based hearing devices. An acoustic hearing aid typically includes a small microphone to detect sound, an amplifier to amplify certain portions of the detected sound, and a small speaker to transmit the amplified sound into the person's ear. Vibration-based hearing devices typically include a small microphone to detect sound and a vibration mechanism to apply vibrations, which represent the detected sound, directly or indirectly to a person's bone or teeth, thereby causing vibrations in the person's inner ear and bypassing the person's auditory canal and middle ear.
Vibration-based hearing devices include, for example, bone conduction devices, direct acoustic cochlear stimulation devices, and other vibration-based devices. A bone conduction device typically utilizes a surgically implanted mechanism or a passive connection through the skin or teeth to transmit vibrations via the skull. Similarly, a direct acoustic cochlear stimulation device typically utilizes a surgically implanted mechanism to transmit vibrations, but bypasses the skull and more directly stimulates the inner ear. Other vibration-based hearing devices may use similar vibration mechanisms to transmit acoustic signals via direct or indirect vibration applied to teeth or other cranial or facial structures.
Persons with certain forms of sensorineural hearing loss may benefit from implanted prostheses, such as cochlear implants and/or auditory brainstem implants. Generally, cochlear implants and auditory brainstem implants electrically stimulate auditory nerves in the cochlea or the brainstem to enable persons with sensorineural hearing loss to perceive sound. For example, a cochlear implant uses a small microphone to convert sound into a series of electrical signals, and uses the series of electrical signals to stimulate the auditory nerve of the recipient via an array of electrodes implanted in the cochlea. An auditory brainstem implant can use technology similar to cochlear implants, but instead of applying electrical stimulation to a person's cochlea, the auditory brainstem implant applies electrical stimulation directly to a person's brainstem, bypassing the cochlea altogether.
In addition, some persons may benefit from a bimodal hearing prosthesis that combines one or more characteristics of acoustic hearing aids, vibration-based hearing devices, cochlear implants, or auditory brainstem implants to enable the person to perceive sound.
The present disclosure relates to a user interface that utilizes a magnetic field to control a device, such as a hearing prosthesis. More particularly, the user interface utilizes a magnetic field sensor that generates a signal that is indicative of the position of the magnetic field sensor in the magnetic field. A processor is configured to process signals from the magnetic field sensor and to use the processed signals to control one or more parameters or operational settings of the device.
In one embodiment, the magnetic field is an asymmetric magnetic field that is characterized by different magnitudes and/or directions at different points in the magnetic field. In contrast, a single bar magnet has a symmetric magnetic field along an axis extending through the north and south poles. The asymmetric magnetic field may be a rotationally asymmetric magnetic field, which is generally a magnetic field that is characterized by different magnitudes and/or directions at different points about an axis of the magnetic field. In one example, the axis extends perpendicularly from a plane, and the rotationally asymmetric magnetic field has different magnitudes and/or directions at different points that are spaced radially from the axis and parallel to the plane. In this example, a magnetic field sensor may be spaced radially from the axis and may be moved generally parallel to the plane. The magnetic field sensor generates different signals (that are indicative of the magnetic field) as the sensor is moved through the magnetic field, and a processor is configured to interpret these different signals generated by the sensor as user inputs that are used to control operational settings of the device.
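The angle-recovery idea can be modeled in a few lines: a sensor at a fixed radius from the axis reads in-plane field components that are unique to each angular position, so a processor can invert a reading back to an angle. The sketch below is an illustrative assumption, not the disclosure's implementation; the field model and field strength are hypothetical.

```python
import math

def field_at(angle_rad):
    # Hypothetical rotationally asymmetric field sampled at a fixed
    # radius from the axis: the in-plane components vary with angular
    # position, so every angle yields a distinct (bx, by) reading.
    bx = 40e-3 * math.cos(angle_rad)  # tesla, illustrative magnitude
    by = 40e-3 * math.sin(angle_rad)
    return bx, by

def angle_from_reading(bx, by):
    # Invert a sensor reading back to an angular position in degrees.
    return math.degrees(math.atan2(by, bx)) % 360.0
```

Because the field differs at every angular position, the processor can treat each reading as an unambiguous position report rather than needing to integrate motion over time.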
Illustratively, the device can be a hearing prosthesis that includes a first component and a second component. In use, the first component may be at least partially implanted in a recipient and the second component may be external to the recipient. Further, the first component can include a first magnetic field source that generates a first asymmetric magnetic field, and the second component can include a second magnetic field source that generates a second asymmetric magnetic field that is complementary to the first magnetic field. The first component can be coupled to the second component by the first and second complementary magnetic fields.
In addition, the second component may include a magnetic field sensor that generates signals that are indicative of the position of the magnetic field sensor in the first asymmetric magnetic field. The second component can also include a processor that is configured to process signals from the magnetic field sensor to detect movement of the second component relative to the first component. More particularly, the processor can process the signals from the magnetic field sensor to detect changes in an angular configuration between the first and second components.
The processor can then use these detected changes to control operational settings of the device. In one example, a volume change action can be initiated by pressing a button of the second component while simultaneously rotating the second component with respect to the first component, and then releasing the button. The magnetic field sensor will generate a signal that is indicative of the rotated angle of the second component. The processor can then change the volume of the device in accordance with the rotated angle. Illustratively, a clockwise rotation of the second component can increase the volume of the hearing prosthesis, while a counter-clockwise rotation can decrease the volume.
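A minimal sketch of the angle-to-volume mapping described above; the step size, volume range, and units are illustrative assumptions, not values from the disclosure:

```python
def volume_after_rotation(current_volume, rotated_degrees,
                          step_per_degree=0.1, vol_min=0.0, vol_max=10.0):
    # Positive (clockwise) rotation raises the volume; negative
    # (counter-clockwise) rotation lowers it. The result is clamped
    # to the device's volume range. All values are illustrative.
    new_volume = current_volume + rotated_degrees * step_per_degree
    return max(vol_min, min(vol_max, new_volume))
```

For example, a 20-degree clockwise rotation at this step size would raise a volume of 5.0 to 7.0, while the same rotation counter-clockwise would lower it to 3.0.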
In another example, the rotation of the second component with respect to the first component can be used to control other operational settings, such as switching between different user stimulation maps or programs. In this example, the rotation of the second component with respect to the first component may be accompanied by pressing a button in order to initiate the program change.
In another embodiment, the magnetic field may be a symmetric magnetic field, and a plurality of magnetic field sensors can be moved through the symmetric magnetic field. In combination, the plurality of magnetic field sensors generate different signals as the sensors are moved through the magnetic field, and the processor is configured to interpret these different signals generated by the sensors as user inputs that are used to control operational settings of the device.
Generally, the use of a magnetic field and a magnetic field sensor, as disclosed herein, provides a user interface that allows for finer user inputs than pushbuttons alone, for example. Further, the user interface disclosed herein is a simple, intuitive design that may also utilize components that are already present in some devices (e.g., magnetic coupling components). In addition, the present disclosure is directed to a user interface that avoids adding buttons or dial switches to devices that are already designed to have a small form factor.
The following detailed description sets forth various features and functions of the disclosed embodiments with reference to the accompanying figures. In the figures, similar reference numbers typically identify similar components, unless context dictates otherwise. The illustrative embodiments described herein are not meant to be limiting. Aspects of the disclosed embodiments can be arranged and combined in a variety of different configurations, all of which are contemplated by the present disclosure. For illustration purposes, some features and functions are described with respect to medical devices, such as hearing prostheses. However, the features and functions disclosed herein may also be applicable to other types of devices, including other types of medical and non-medical devices.
Referring now to
In
In use, the first component 22 may be coupled to the second component 24 by the coupling components 39, 53. These coupling components 39, 53 may each include magnets that have complementary magnetic fields that exert attractive forces to couple the first and second components 22, 24. As will be described in more detail hereinafter, one or both of the coupling components 39, 53 can include a plurality of magnets (or other magnetic field sources) that in combination generates an asymmetric magnetic field, which may generally be characterized by magnetic field lines having different magnitudes and/or directions at different positions throughout a plane that intersects the asymmetric magnetic field. In another example, one of the coupling components may include a magnetic field source that generates an asymmetric magnetic field, and the other coupling component may include a magnetic material that is attracted to the asymmetric magnetic field. In further alternate examples, one of the coupling components could be a ferrous material, such as an iron plate or iron bar.
In one embodiment, the sensor 38 is a magnetic field sensor that generates signals that are indicative of the asymmetric magnetic field generated by the coupling component 53. When the sensor 38 is moved through the asymmetric magnetic field, the sensor generates different signals that are indicative of the asymmetric magnetic field at different positions. The processor 30 can interpret these signals from the sensor 38 as user inputs to control one or more operational settings of the device, such as increasing and decreasing a volume of the device, turning the device on and off, switching between auditory stimulation settings (e.g., different user stimulation maps or programs that are defined generally by threshold and comfort hearing levels), switching between different listening modes (e.g., directional or omnidirectional microphone modes, telephone mode, music mode, direct audio input port mode, and wireless streaming), and the like. Further, different user stimulation maps or programs can have settings that are optimized for different listening modes, and the user interface described herein can be used to switch between such user stimulation maps.
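As one illustration, a recognized user input could cycle through listening modes. The mode names below mirror the examples in the text, but the ordered-list cycling behavior is a hypothetical sketch, not a mechanism stated in the disclosure:

```python
LISTENING_MODES = ["omnidirectional", "directional", "telephone",
                   "music", "direct_audio_input", "wireless_streaming"]

def next_listening_mode(current):
    # Advance to the next listening mode, wrapping around at the end
    # of the list. The ordering here is an illustrative assumption.
    i = LISTENING_MODES.index(current)
    return LISTENING_MODES[(i + 1) % len(LISTENING_MODES)]
```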
Alternatively or in conjunction, the sensor 52 can function similarly to generate signals that are indicative of the asymmetric magnetic field generated by the coupling component 53 as the sensor 52 is moved relative to the asymmetric magnetic field, and the processor 44 can interpret these signals as user inputs to control operational settings of the device. Generally, the sensors 38, 52 may include one or more sensors, such as Hall-effect sensors, search-coil sensors, magnetotransistor sensors, magnetodiode sensors, magneto-optical sensors, giant magnetoresistive sensors, and the like, and may be configured to sense the magnetic field (magnitude and direction) in one or more axes.
The transducer 28 may include a microphone that is configured to receive external audible sounds 60. Further, the microphone may include combinations of one or more omnidirectional or directional microphones that are configured to receive background sounds and/or to focus on sounds from a specific direction, such as generally in front of the prosthesis recipient. Alternatively or in addition, the transducer 28 may include telecoils or other sound transducing components that receive sound and convert the received sound to electronic signals. Further, the device 20 may be configured to receive sound information from other sound input sources, such as electronic sound information received through the data interface 26 of the first component 22 or from the communication electronics 42 of the second component 24.
In one example, the processor 30 of the first component 22 is configured to convert or encode the audible sounds 60 (or other electronic sound information) into encoded electronic signals that include audio data that represents sound information, and to apply the encoded electronic signals to the communication electronics 32. In the present example, the communication electronics 32 of the first component 22 are configured to transmit the encoded electronic signals as electronic output signals 62 to the communication electronics 42 of the second component 24. Illustratively, the communication electronics 32, 42 can include magnetically coupled coils that establish an RF link between the units 22, 24. Accordingly, the communication electronics 32 can transmit the output signals 62 encoded in a varying or alternating magnetic field over the RF link between the components 22, 24.
Generally, the communication electronics 32, 42 can include an RF inductive transceiver system or circuit. Such a transceiver system may further include an RF modulator, a transmitting/receiving coil, and associated driver circuitry for driving the coil to radiate the output signals 62 as electromagnetic RF signals. Illustratively, the RF link can be an On-Off Keying (OOK) modulated 5 MHz RF link, although different forms of modulation and signal frequencies can be used in other examples.
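On-Off Keying can be sketched in a few lines: the carrier is gated on for a '1' bit and suppressed for a '0' bit. The sample rate and samples-per-bit below are illustrative assumptions for the sketch, not parameters taken from the disclosure:

```python
import math

def ook_modulate(bits, carrier_hz=5e6, samples_per_bit=8,
                 sample_rate=40e6):
    # On-Off Keying: emit the sinusoidal carrier during '1' bits and
    # silence during '0' bits. At these illustrative settings, one
    # bit spans exactly one carrier cycle.
    samples = []
    for i, bit in enumerate(bits):
        for n in range(samples_per_bit):
            t = (i * samples_per_bit + n) / sample_rate
            carrier = math.sin(2 * math.pi * carrier_hz * t)
            samples.append(carrier if bit else 0.0)
    return samples
```

A receiver can recover the bits simply by detecting carrier presence or absence in each bit interval, which is part of what makes OOK attractive for low-power inductive links.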
Each of the power supplies 36, 50 provides power to various components of the first and second components 22, 24, respectively. In another variation of the system 20 of
Further, the data storage 34, 48 may be any suitable volatile and/or non-volatile storage components. The data storage 34, 48 may store computer-readable program instructions and perhaps additional data. In some embodiments, the data storage 34, 48 stores data and instructions used to perform at least part of the processes disclosed herein and/or at least part of the functionality of the systems described herein. Although the data storage 34, 48 in
As mentioned above, the processor 30 is configured to convert the audible sounds 60 into encoded electronic signals, and the communication electronics 32 are configured to transmit the encoded electronic signals as the output signals 62 to the communication electronics 42. In particular, the processor 30 may utilize configuration settings, auditory processing algorithms, and a communication protocol to convert the audible sounds 60 into the encoded electronic signals that are transmitted as the output signals 62. One or more of the configuration settings, auditory processing algorithms, and communication protocol information can be stored in the data storage 34. Illustratively, the auditory processing algorithms may utilize one or more of speech algorithms, filter components, or audio compression techniques. The output signals 62 can also be used to supply power to one or more components of the second component 24. Generally, the encoded electronic signals themselves include power that can be supplied to the second component 24. Additional power signals can also be added to the encoded electronic signals to supply power to the second component 24.
The second component 24 can then apply the encoded electronic signals to the stimulation electronics 46 to allow a recipient to perceive the electronic signals as sound. Generally, the stimulation electronics 46 can include a transducer or actuator that provides auditory stimulation to the recipient through one or more of electrical nerve stimulation, audible sound production, or mechanical vibration of the cochlea, for example.
In the present example, the communication protocol defines how the encoded electronic signals are transmitted from the first component 22 to the second component 24. For example, the communication protocol can be an RF protocol that the first component applies after generating the encoded electronic signals, to define how the encoded electronic signals will be represented in a structured signal frame format of the output signals 62. In addition to the encoded electronic signals, the communication protocol can define how power signals are supplied over the structured signal frame format to provide a more continuous power flow to the second component 24 to charge the power supply 50, for example. Illustratively, the structured signal format can include output signal data frames for the encoded electronic signals and additional output signal power frames.
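The interleaving of data frames and power frames could look like the following sketch, in which dedicated power frames are inserted among data frames at a fixed ratio so that the second component sees a more continuous power flow. The frame layout and the ratio are illustrative assumptions, not details from the disclosure:

```python
def build_frames(audio_chunks, power_every=4):
    # Build a structured frame sequence: each audio chunk becomes a
    # data frame, and a dedicated power frame is inserted after every
    # power_every data frames. The ratio is an illustrative choice.
    frames = []
    for i, chunk in enumerate(audio_chunks):
        frames.append(("DATA", chunk))
        if (i + 1) % power_every == 0:
            frames.append(("POWER", None))
    return frames
```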
Once the encoded electronic signals and/or power signals are converted into the structured signal frame format using the communication protocol, the encoded electronic signals and/or power signals can be provided to the communication electronics 32, which can include an RF modulator. The RF modulator can then modulate the encoded electronic signals and/or power signals with a carrier signal, e.g., a 5 MHz carrier signal, and the modulated signals can then be transmitted over the RF link from the communication electronics 32 to the communication electronics 42. In various examples, the modulations can include OOK or frequency-shift keying (FSK) modulations based on RF frequencies between about 100 kHz and 50 MHz.
The second component 24 may then receive the output signals 62 via the communication electronics 42. In one example, the communication electronics 42 include a receiving coil and associated circuitry for receiving electromagnetic RF signals, such as the output signals 62. The processor 44 is configured to then decode the output signals 62 and extract the encoded electronic signals. The processor 44 can then apply the encoded electronic signals and the included audio data to the recipient via the stimulation electronics 46 to allow the recipient to perceive the electronic signals as sound. Generally, the stimulation electronics 46 can include a transducer or actuator that provides auditory stimulation to the recipient through one or more of electrical nerve stimulation, audible sound production, or mechanical vibration of the cochlea, for example. Further, when the output signals 62 include power signals, the communication electronics 42 are configured to apply the received output signals 62 to charge the power supply 50.
As described generally above, the communication electronics 32 can be configured to transmit data and power to the communication electronics 42. Likewise, the communication electronics 42 can be configured to transmit signals to the communication electronics 32, and the communication electronics 32 can be configured to receive signals from the second component 24 or other devices or components.
Referring back to the stimulation electronics 46 of
For embodiments where the hearing prosthesis 20 is a bone conduction device, the microphone 28 and the processor 30 are configured to receive, analyze, and encode audible sounds 60 (or other electronic sound information) into the output signals 62. The communication electronics 42 receive the output signals 62, and the processor 44 applies the output signals to the bone conduction device recipient's skull via the stimulation electronics 46. In this embodiment, the stimulation electronics 46 may include an auditory vibrator to transmit sound to the recipient via direct bone vibrations, for example.
In addition, for embodiments where the hearing prosthesis 20 is an auditory brain stem implant, the microphone 28 and the processor 30 are configured to receive, analyze, and encode the audible sounds 60 (or other electronic sound information) into the output signals 62. The communication electronics 42 receive the output signals 62, and the processor 44 applies the output signals to the auditory brain stem implant recipient's auditory nerve via the stimulation electronics 46 that, in the present example, includes one or more electrodes.
In embodiments where the hearing prosthesis 20 is a cochlear implant, the microphone 28 and the processor 30 are configured to receive, analyze, and encode the external audible sounds 60 (or other electronic sound information) into the output signals 62. The communication electronics 42 receive the output signals 62, and the processor 44 applies the output signals to an implant recipient's cochlea via the stimulation electronics 46. In this example, the stimulation electronics 46 includes or is otherwise connected to an array of electrodes.
Further, in embodiments where the hearing prosthesis 20 is an acoustic hearing aid or a combination electric and acoustic bimodal hearing prosthesis, the microphone 28 and the processor 30 are configured to receive, analyze, and encode audible sounds 60 (or other electronic sound information) into output signals 62. The communication electronics 42 receive the output signals 62, and the processor 44 applies the output signals to a recipient's ear via the stimulation electronics 46 comprising a speaker, for example.
The device 20 illustrated in
In general, the computing device 70 and the link 72 are used to operate the device 20 in various modes. In a first example mode, the computing device 70 is used to develop and/or load a recipient's configuration data to the device 20, such as through the data interface 26. In another example mode, the computing device 70 is used to upload other program instructions and firmware upgrades, for example, to the device 20. In yet other example modes, the computing device 70 is used to deliver data (e.g., sound information or the predetermined orientation data) and/or power to the device 20 to operate the components thereof and/or to charge the power supplies 36, 50. Still further, the computing device 70 and the link 72 can be used to implement various other modes of operation of the prosthesis 20.
The computing device 70 can further include various additional components, such as a processor and a power source. Further, the computing device 70 can include a user interface or input/output devices, such as buttons, dials, a touch screen with a graphical user interface, and the like, that can be used to turn the one or more components of the device 20 on and off, adjust the volume, switch between one or more operating modes and user stimulation maps, adjust or fine tune the configuration data, etc. Various modifications can be made to the device 20 illustrated in
In the embodiment illustrated in
In one embodiment, the external component 102 and the implantable component 104 may include components for coupling the external component with the implantable component. In one example, the coupling mechanism may use one or more magnets or other magnetic field sources 112 that are included in one or more of the external component 102 or the implantable component 104. Illustratively, the external component 102 may include magnets 112A, 112B, and the implantable component may include magnets 112C, 112D. In this example, the magnet 112A represents a north pole and the magnet 112B represents a south pole. Similarly, the magnet 112C represents a north pole and the magnet 112D represents a south pole. This arrangement of magnets provides one example of an asymmetric magnetic field, as illustrated in
In the example of
In
In one embodiment, the processor may interpret the electrical signals from the sensor 114 only after the user presses the pushbutton 116. In one example, a volume change action can be initiated by pressing the pushbutton 116 while simultaneously rotating the external component with respect to the implantable component, and then releasing the button. The sensor 114 generates a signal that is indicative of the rotated angle of the external component. The processor can then change the volume of the hearing prosthesis in accordance with the rotated angle. Illustratively, a clockwise rotation of the external component can increase the volume of the hearing prosthesis, while a counter-clockwise rotation can decrease the volume. In another example, the processor is configured to interpret the electrical signals from the sensor 114 for a predetermined time period after the pushbutton 116 is pressed. Generally, the use of the pushbutton 116 to trigger the processor to interpret the signals from the sensor 114 can be helpful to distinguish user inputs from inadvertent movements of the sensor 114.
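The press-rotate-release interaction can be sketched as a small state holder: the angular position at button press is latched, and the rotation reported at release is the difference. This is a hypothetical illustration of the behavior described above, not the disclosure's implementation:

```python
class ButtonGatedDial:
    # Sketch: sensor angles are interpreted only between a button
    # press and its release, which helps reject inadvertent movement
    # of the external component.
    def __init__(self):
        self._start_angle = None

    def press(self, current_angle):
        # Latch the angle at the moment the button is pressed.
        self._start_angle = current_angle

    def release(self, current_angle):
        # Report the net rotation since the press; a release without
        # a preceding press is ignored (reports zero rotation).
        if self._start_angle is None:
            return 0.0
        rotated = current_angle - self._start_angle
        self._start_angle = None
        return rotated
```

A positive result could then be mapped to a volume increase and a negative result to a decrease, as in the clockwise/counter-clockwise example above.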
In other examples, a rotation of the sensor 114 (with or without pressing a pushbutton) can be used to turn on and off the device or to switch between auditory stimulation settings or listening modes of the device. The movement of the sensor 114 can also be used to control other operational settings of the device. Further, in other embodiments, the processor can utilize a signal analysis algorithm to monitor the signal from the magnetic sensor and to identify user-input movements, as distinguished from other non-user-input movements. In these embodiments, the pushbutton 116 may be omitted.
In yet another example, a rotation of the external component with respect to the implantable component can be used as a volume control and to switch between user stimulation programs. In this example, the rotation of the external component together with pressing a pushbutton can control one of the volume or the user stimulation program, and the rotation of the external component without pressing the pushbutton can control the other of the volume or the user stimulation program. Other examples of movements of the external component with respect to the internal component, with or without pressing the pushbutton, can be used to individually control different operational settings.
Generally, as seen in
As mentioned above, the sensor 114 may include one or more sensors, such as Hall-effect sensors, search-coil sensors, magnetotransistor sensors, magnetodiode sensors, magneto-optical sensors, giant magnetoresistive sensors, and the like, and may also be configured to sense the magnetic field along one or more axes. In one example, one or more sensors are used that may each be configured to sense the magnetic field along a single axis, and these single-axis sensor(s) may be aligned so that the sensing axis is parallel with the XY-plane or orthogonal to the Z-axis (referring to
Referring to
Referring now to
The external component 152 may combine various components illustrated in
Further, the external component 152 in this embodiment includes one or more magnetic field sensors 160 and a pushbutton 162.
As the external component 152 is moved with respect to the asymmetric magnetic field of the second coupling component 156 (e.g., rotated with respect to the second coupling component or moved up/down/left/right with respect to the coupling component), the magnetic field sensor 160 generates different electrical signals that are indicative of the asymmetric magnetic field. A processor (such as the processor 30) coupled to the external component may interpret the electrical signals from the sensor 160 as user inputs to control operational settings of the hearing prosthesis. As similarly discussed above, in one embodiment, the processor may interpret the electrical signals from the sensor 160 only after the recipient presses the pushbutton 162 (e.g., while the pushbutton is depressed and/or for a predetermined time period after the pushbutton is pressed).
Referring now to
The method 200 can be performed using the devices 20, 100, and 150 described above, for example, or some other device that is configured to detect movements of the device with respect to a magnetic field. In the method 200, at block 202, a magnetic field sensor generates signals that are indicative of a magnetic field. At block 204, a processor identifies user-input movements of the device based on the magnetic field signals and, at block 206, the processor controls device settings in response to the identified user-input movements.
More particularly, in the method 200, a recipient of a hearing prosthesis device, such as any of the devices described herein, may move a component of the hearing prosthesis in relation to a magnetic field that is generated by the hearing prosthesis. As discussed above, the component may include the magnetic field sensor and the magnetic field may be a rotationally asymmetric magnetic field. As the recipient moves the magnetic field sensor through the rotationally asymmetric magnetic field, the sensor generates changing electrical signals that are indicative of the changing magnitudes and directions at different locations of the magnetic field.
Generally at block 204, the processor can process these changing electrical signals to identify specific user-input movements of the device. In one embodiment, the processor processes the changing electrical signals after a user presses a pushbutton, as described above. In another embodiment, the processor can utilize a signal analysis algorithm to monitor the signal from the magnetic sensor and to identify user-input movements, as distinguished from other non-user-input movements.
For example, the signal analysis algorithm may determine an initial or preset position of the magnetic field sensor with respect to the magnetic field, and then may monitor the signal from the magnetic sensor to detect movements away from the initial position. The signal analysis algorithm may also utilize a movement threshold and/or a time delay to help identify user-input movements. For example, the signal analysis algorithm may only identify a user-input movement that is greater than a given threshold (e.g., greater than about 5 mm). The signal analysis algorithm may also require a user-input movement to be characterized by moving the sensor away from an initial position and then holding the sensor stationary for greater than a given time delay. Such a time delay may be useful in some of the embodiments disclosed herein, where the magnetic forces between the coupling components tend to re-align the device toward the initial position. In some embodiments, the magnetic forces re-align the components into an optimal configuration after the recipient releases the external component.
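The threshold-plus-hold logic could be sketched as follows: a movement counts as a user input only if the displacement from the initial position exceeds the threshold and then stays there for several consecutive readings. The threshold, hold duration, and sampling scheme are all illustrative assumptions:

```python
def detect_user_movement(samples, threshold_mm=5.0, hold_samples=3):
    # samples: displacement (mm) from the initial position over time.
    # Recognize a user input only when the displacement exceeds the
    # threshold AND remains there for hold_samples consecutive
    # readings, filtering out brief inadvertent motion while the
    # coupling magnets re-align the components. Values illustrative.
    run = 0
    for s in samples:
        if s > threshold_mm:
            run += 1
            if run >= hold_samples:
                return True
        else:
            run = 0
    return False
```

A displacement that briefly spikes past the threshold and snaps back (as the magnets re-align the component) would thus be rejected, while a deliberate move-and-hold would be accepted.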
At block 204, the processor may also be configured to determine characteristics of the user-input movement, such as a direction and/or magnitude of the movement. At block 206, the processor can use these movement characteristics to control operational settings of the device, such as increasing or decreasing a volume, turning the device on or off, adjusting hearing thresholds, switching between operating modes, and the like. In one example, the processor uses the direction of movement to determine whether to increase or decrease the volume, and uses the magnitude of the movement to determine an amount of volume increase or decrease.
Each block 202-206 may represent a module, a segment, or a portion of program code, which includes one or more instructions executable by a processor for implementing specific logical functions or steps in the process. The program code may be stored on any type of computer-readable medium or storage device including a disk or hard drive, for example. The computer-readable medium may include non-transitory computer-readable media, such as computer-readable media that store data for short periods of time like register memory, processor cache, and Random Access Memory (RAM). The computer-readable medium may also include non-transitory media, such as secondary or persistent long-term storage, like read-only memory (ROM), optical or magnetic disks, compact-disc read-only memory (CD-ROM), etc. The computer-readable media may also include any other volatile or non-volatile storage systems. The computer-readable medium may be considered a computer-readable storage medium, for example, or a tangible storage device. In addition, one or more of the blocks 202-206 may represent circuitry that is wired to perform the specific logical functions of the method 200.
While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope being indicated by the following claims.
The present application is a continuation application of U.S. patent application Ser. No. 14/802,437, filed on Jul. 17, 2015, which in turn claims priority to U.S. Provisional Application No. 62/025,742 filed on Jul. 17, 2014. The entire contents of these applications are incorporated by reference herein.