The present application is a national stage application of the international application titled, “SELF-ADJUSTING HEAD-MOUNTED AUDIO DEVICE,” filed on Jul. 31, 2015 and having application number PCT/US2015/043099. The subject matter of this related application is hereby incorporated herein by reference.
Embodiments of the present invention relate generally to audio systems and, more specifically, to a self-adjusting head-mounted audio device.
Various technological advancements in the consumer electronics industry have dramatically increased the degree to which audio devices, such as media players, communications devices, and computers, are integrated into the daily lives of users. In order to avoid disturbing others and/or to attenuate external noise, many users listen to audio devices using a head-mounted device, such as a pair of headphones. For example, many users listen to mobile audio devices via circumaural headphones that isolate the user from distracting external noise and prevent others from hearing the audio stream to which the user is listening. Similarly, a commercial pilot may use an aviation headset to block out engine noise while communicating with co-pilots and air traffic control.
Many head-mounted audio devices include a variety of adjustment mechanisms that enable each device to comfortably and securely fit a wide variety of head shapes and sizes. As an example, many circumaural and supra-aural headphones include an adjustable headband that enables the height of the headphones to be modified. In addition, some head-mounted audio devices enable the location of the headphone speakers to be adjusted relative to various components of the head support (e.g., headband) associated with the headphones.
Although such adjustment mechanisms enable a head-mounted audio device to be worn by multiple users, making adjustments each time the head-mounted audio device is used can be onerous for the user(s). For example, an aviation headset that is shared between multiple pilots may need to be adjusted each time a new pilot enters the cockpit. In addition, even when a particular head-mounted audio device has only a single user, the user usually needs to repeatedly adjust the device over the course of time, such as when the device is stored in and later removed from a carrying case, or when the device is expanded to be worn around the user's neck and later readjusted when placed back on the user's head.
As the foregoing illustrates, more effective techniques for adjusting head-mounted audio devices would be useful.
One embodiment of the present invention sets forth a system that includes a head-mounted audio device that includes at least one speaker. The system further includes at least one actuator coupled to the head-mounted audio device and a processor coupled to the at least one actuator. The processor is configured to receive an indication that the head-mounted audio device has been placed on a head of a user and, in response, cause the at least one actuator to transition the head-mounted audio device from a first state to a second state. The first state corresponds to a first set of physical parameters associated with the head-mounted audio device, and the second state corresponds to a second set of physical parameters associated with the head-mounted audio device.
Further embodiments provide, among other things, a non-transitory computer-readable storage medium and a method configured to implement various aspects of the system set forth above.
At least one advantage of the disclosed techniques is that a head-mounted audio device may be automatically adjusted to comfortably and securely fit the head of a user. Accordingly, a user does not need to make manual adjustments to a head-mounted device each time the device is placed on or removed from his or her head.
So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
In the following description, numerous specific details are set forth to provide a more thorough understanding of the embodiments of the present invention. However, it will be apparent to one of skill in the art that the embodiments of the present invention may be practiced without one or more of these specific details.
The first sensors 120 may include touch sensors (e.g., capacitive sensors), proximity sensors (e.g., infrared, laser, or ultrasound sensors), pressure sensors, and/or thermal sensors that are capable of detecting whether the head-mounted audio device 110 is being worn on the head of a user. The first sensors 120 may further determine the distance from various components of the head-mounted audio device 110 to the head and/or ears of the user. For example, and without limitation, the first sensors 120 may determine the distance from the head support 114 to the top of the head and/or determine whether the ears are aligned with the speakers 112, such as by determining the distance from each ear to a corresponding speaker 112. The second sensors 122 include pressure sensors capable of detecting whether the head-mounted audio device 110 is being worn on the head of a user as well as how much force the speakers 112 are exerting on the ears of the user. The first sensors 120 and/or second sensors 122 may further detect whether the head-mounted audio device 110 is being worn around the neck of the user, stored in a carrying case, or being carried, but not worn, by the user.
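By way of illustration only (the disclosure itself contains no program code), the following Python sketch shows one way a proximity reading from the first sensors 120 and a pressure reading from the second sensors 122 could be combined to classify the wear state. All threshold values and names are hypothetical.

```python
# Illustrative sketch only: a hypothetical helper that combines a proximity
# reading from the first sensors 120 with a pressure reading from the second
# sensors 122 to classify the wear state. All threshold values are invented.
from enum import Enum, auto

class WearState(Enum):
    ON_HEAD = auto()
    AROUND_NECK = auto()
    STOWED = auto()

def classify_wear_state(proximity_mm: float, ear_pressure_n: float) -> WearState:
    """Classify the wear state from one proximity and one pressure reading."""
    if ear_pressure_n > 0.5 and proximity_mm < 10.0:
        return WearState.ON_HEAD      # pads pressing, speakers near the ears
    if ear_pressure_n > 0.5:
        return WearState.AROUND_NECK  # pads in contact, but far from the ears
    return WearState.STOWED           # no contact detected
```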
The head support 114 may include a headband, as shown in the figures.
The head support 114 includes one or more adjustable regions 140 that enable the position(s) of one or more components of the head support 114 to be modified relative to the position(s) of the speakers 112. For example, and without limitation, the adjustable regions 140 shown in the figures enable the distance between the speakers 112 and the upper region 115 of the head support 114 to be modified.
The actuators 130 may include various types of devices that are capable of modifying various parameters of the adjustable regions 140. Some non-limiting examples of actuators 130 that may be implemented with the head-mounted audio device 110 include mechanical motors, hydraulic and pneumatic actuators, thermal actuators, and piezoelectric actuators. The actuator(s) 130 are positioned proximate to components of the head support 114 and/or speakers 112 in order to modify the physical dimensions and relative locations of these components. For example, and without limitation, the actuators 130 illustrated in the figures are positioned within the adjustable regions 140 of the head support 114.
Although the sensors 120, 122, actuators 130, and speakers 112 are shown in the figures at particular locations, in other embodiments, these components may be positioned at any technically feasible locations on the head-mounted audio device 110.
Computing device 150 includes a processing unit 160, input/output (I/O) devices 170, and a memory unit 180. Memory unit 180 includes an adjustment application 182 configured to interact with a database 184. The computing device 150 is coupled to the sensors 120, 122, the actuators 130, and/or the speakers 112.
Processing unit 160 may include a central processing unit (CPU), a digital signal processing unit (DSP), and so forth. In various embodiments, the processing unit 160 is configured to execute the adjustment application 182 to analyze data acquired by the sensor(s) 120, 122 and to determine biometric data and locations, distances, orientations, etc. of the speakers 112, components of the head support 114, and/or head and ears of a user. The biometric data and locations, distances, orientations, etc. of components and/or the user may be stored in the database 184. The processing unit 160 is further configured to execute the adjustment application 182 to control the operation of the actuators 130. For example, and without limitation, the processing unit 160 may receive data from the sensors 120, 122 and process the data to determine whether the head support 114 is in contact with the head of the user and/or whether the speakers 112 are properly aligned with the ears of the user. Then, based on the data received from the sensors 120, 122, the processing unit 160 causes adjustments to be made to the adjustable regions 140 of the head support 114 via one or more actuators 130.
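As a hedged sketch of this control behavior, the loop below drives the actuators 130 until each speaker 112 is aligned with an ear. The sensor and actuator interfaces (`ear_to_speaker_offsets_mm`, `move_mm`) are assumed stand-ins, not APIs from the disclosure.

```python
# Hedged sketch of the control behavior described above. The sensor and
# actuator interfaces are assumed stand-ins, not APIs from the disclosure.
def adjust_until_aligned(sensors, actuators, tolerance_mm: float = 1.0) -> None:
    """Drive the adjustable regions 140 until each speaker 112 sits within
    tolerance_mm of its corresponding ear."""
    while True:
        offsets = sensors.ear_to_speaker_offsets_mm()  # one value per speaker
        if all(abs(offset) <= tolerance_mm for offset in offsets):
            break                                      # properly aligned
        for actuator, offset in zip(actuators, offsets):
            actuator.move_mm(offset)                   # close the remaining gap
```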
I/O devices 170 may include input devices, output devices, and devices capable of both receiving input and providing output. For example, and without limitation, I/O devices 170 may include wired and/or wireless communication devices that send data to and/or receive data from the sensor(s) 120, 122, the speakers 112, and/or various types of audio devices (e.g., media players, smartphones, computers, radios, and the like) to which the system 100 may be coupled. Further, in some embodiments, the I/O devices 170 include one or more wired or wireless communication devices that receive (e.g., via a network, such as a local area network and/or the Internet) biometric data associated with one or more users and/or audio streams that are to be reproduced by the speakers 112.
Memory unit 180 may include a memory module or a collection of memory modules. Adjustment application 182 within memory unit 180 may be executed by processing unit 160 to implement the overall functionality of the computing device 150, and, thus, to coordinate the operation of the system 100 as a whole. The database 184 may store biometric data, location data, orientation data, algorithms, audio streams, object recognition data, etc.
Computing device 150 as a whole may be a microprocessor, an application-specific integrated circuit (ASIC), a system-on-a-chip (SoC), a mobile computing device such as a tablet computer or cell phone, a media player, and so forth. In other embodiments, the computing device 150 may be coupled to, but separate from, the system 100. In such embodiments, the system 100 may include a separate processor that receives data (e.g., biometric data, actuator 130 states, audio streams) from and transmits data (e.g., sensor data) to the computing device 150, which may be included in a consumer electronic device, such as a smartphone, portable media player, personal computer, vehicle head unit, navigation system, etc. For example, and without limitation, the computing device 150 may communicate with an external device that provides additional processing power. However, the embodiments disclosed herein contemplate any technically feasible system configured to implement the functionality of the system 100.
In operation, the first sensors 120 and/or the second sensors 122 track whether the head-mounted audio device 110 has been placed on the head of a user. When the first sensors 120 and/or the second sensors 122 detect that the head-mounted audio device 110 has been placed on the head of a user, the sensors 120, 122 transmit an indication to the processing unit 160. The processing unit 160 then causes the actuators 130 to begin transitioning the head-mounted audio device 110 from a first state to a second state. For example, and without limitation, if the head-mounted audio device 110 is removed from storage (e.g., a carrying case) and placed on the head of the user, the head support 114 could initially be in a collapsed state (e.g., with each adjustable region 140 at a minimum setting). Consequently, when the head support 114 is placed in contact with the top of the user's head, the speakers 112 would not be properly aligned with the user's ears. Accordingly, the processing unit 160 would cause the actuators 130 to modify the adjustable regions 140 to increase the distance between the speakers 112 and the upper region 115 of the head support 114 and align the speakers 112 with the user's ears.
Alternatively, the head support 114 could be in an elongated state (e.g., with one or more adjustable regions 140 at or near a maximum setting) prior to being placed on the head of the user. Then, when the user places the speakers 112 over his or her ears, the head support 114 may not be in contact with the top of the user's head. Accordingly, the processing unit 160 would cause the actuators 130 to modify one or more of the adjustable regions 140 to decrease the distance between the speakers 112 and the upper region 115 of the head support 114 so that the headband rests securely on the top of the user's head.
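The two scenarios above (collapsed and elongated) reduce to a single signed correction: extend the adjustable regions 140 when the speakers 112 sit above the ears, and retract them when the headband floats above the crown. A minimal sketch follows; the per-step caps are invented.

```python
# Sketch of the signed correction described above; step caps are invented.
def band_extension_step_mm(speaker_above_ear_mm: float,
                           crown_gap_mm: float) -> float:
    """Positive result: extend the adjustable regions 140 (collapsed state,
    speakers above the ears). Negative result: retract them (elongated state,
    headband floating above the crown). Zero: properly fitted."""
    if speaker_above_ear_mm > 0:
        return min(speaker_above_ear_mm, 2.0)   # extend, capped per step
    if crown_gap_mm > 0:
        return -min(crown_gap_mm, 2.0)          # retract, capped per step
    return 0.0
```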
In some embodiments, the head-mounted audio device 110 transitions between two or more states (e.g., from the first state to the second state) based on sets of physical parameters that are stored in the database 184. For example, and without limitation, biometric data associated with a particular user could be stored in the database 184 and used to determine a set of physical parameters, such as the distances between components of the head support 114 and the speakers 112, distances between various components of the head support 114, and/or orientations of components of the head support 114 relative to the speakers 112. Then, when a user puts the head-mounted audio device 110 on his or her head, the head-mounted audio device 110 could transition to a state that is associated with the user's biometric data and set of physical parameters.
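A minimal sketch of such per-user parameter storage appears below. The schema and field names are hypothetical; the disclosure only requires that the database 184 associate a user with a set of physical parameters.

```python
# Hypothetical schema for the per-user parameters stored in database 184.
from dataclasses import dataclass

@dataclass
class FitParameters:
    band_extension_mm: float   # distance between speakers 112 and upper region 115
    speaker_angle_deg: float   # speaker orientation relative to the head support
    clamp_force_n: float       # force of the ear pads against the ears

database_184: dict[str, FitParameters] = {
    "user-123": FitParameters(band_extension_mm=28.0,
                              speaker_angle_deg=4.5,
                              clamp_force_n=3.2),
}

def recall_state(user_id: str) -> FitParameters | None:
    """Return the stored state for a user, or None for an unknown user."""
    return database_184.get(user_id)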
Additionally, in some embodiments, the head-mounted audio device 110 transitions between two or more states based on feedback received from the first sensors 120 and/or the second sensors 122. For example, and without limitation, when a user puts the head-mounted audio device 110 on his or her head, the first sensors 120 could transmit sensor data to the processing unit 160, which would then determine, based on the sensor data, the distance between components of the head support 114 and the user's head, whether the ears are properly aligned with the speakers 112, whether the speakers 112 are at the proper angle relative to the user's ears/head, whether the speaker(s) 112 are exerting a proper amount of force on the user's ear(s), etc. The processing unit 160 could then adjust the actuators 130, based on the feedback received from the sensors 120, 122, until the head support 114 is properly fitted to the user's head, until the speakers 112 are properly aligned with the user's ear(s), until the speakers 112 are oriented properly relative to the user, and/or until an appropriate amount of force is being placed on the user's ear(s) by the speaker(s) 112.
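As a hedged illustration of this feedback behavior, the loop below iterates until all fit criteria reported by the sensors 120, 122 are simultaneously satisfied; every interface shown is an assumed placeholder.

```python
# Assumed-interface sketch of the feedback loop described above.
def fit_with_feedback(sensors, actuators, max_iterations: int = 50) -> bool:
    """Iterate until alignment, orientation, and force criteria are all met."""
    for _ in range(max_iterations):
        reading = sensors.read()  # distances, angles, and forces in one sample
        if reading.ears_aligned and reading.angle_ok and reading.force_ok:
            return True           # properly fitted
        for actuator in actuators:
            actuator.step_toward_target(reading)  # small corrective move
    return False                  # bounded effort: stop rather than hunt forever
```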
In some embodiments, the user may press a button (e.g., on the head-mounted audio device 110 itself or on a remote control, such as a smartphone application) to indicate, via an I/O device 170, that the head-mounted audio device 110 should transition to a particular state. For example, and without limitation, after placing the head-mounted audio device 110 on his or her head, the user could transmit an identifier to the head-mounted audio device 110, such as by logging into an application associated with the head-mounted audio device 110. Biometric data and/or a set of physical parameters (e.g., distances, orientations, angles, pressures, etc.) associated with the identifier would then be retrieved from the database 184 (or from a remote database), and the head-mounted audio device 110 would transition to a state associated with the biometric data and/or physical parameters.
Additionally, a user may press a button to store a set of physical parameters in the database 184. For example, and without limitation, a user could put the head-mounted audio device 110 on his or her head and adjust the head support 114 so that the head-mounted audio device 110 fits comfortably. The user could then press a button to store the state of the head-mounted audio device 110 (e.g., to store a set of physical parameters that correspond to the preferred state of the head-mounted audio device 110) in the database 184 or in a remote database (e.g., cloud storage). In some embodiments, the set of physical parameters may be stored in conjunction with a user identifier. Then, the next time the user puts the head-mounted audio device 110 on his or her head, the head-mounted audio device 110 would automatically return to the preferred state (e.g., when the user presses a button or when the adjustment application 182 detects that the head-mounted audio device 110 has been placed on a user's head).
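The save-and-recall flow of the two preceding paragraphs might look like the following sketch, in which the button callbacks, device handle, and storage layer are all hypothetical.

```python
# Hypothetical button callbacks for the save-and-recall flow described above.
def on_save_button(device, database_184: dict, user_id: str) -> None:
    """Capture the current, manually tuned state as the user's preference."""
    database_184[user_id] = device.current_physical_parameters()

def on_wear_detected(device, database_184: dict, user_id: str) -> None:
    """Return the device to the stored preferred state, if one exists."""
    parameters = database_184.get(user_id)
    if parameters is not None:
        device.transition_to(parameters)
```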
In some embodiments, the adjustment application 182 may identify the user via one or more sensors that detect biometric information associated with the user. After identifying the user, the adjustment application 182 then accesses a set of physical parameters associated with the user (e.g., associated with a user identifier) and adjusts the head-mounted audio device 110 to the preferred state. For example, and without limitation, the adjustment application 182 could retrieve the set of physical parameters from the database 184 and/or download the set of physical parameters from a remote server (e.g., by downloading dimensions of the user's head that were measured and/or stored by an online service). Additionally, if a new user is identified, the adjustment application 182 may store a set of physical parameters (e.g., in the database 184 and/or on a remote server) based on the specific physical adjustments the new user makes to the head-mounted audio device 110.
For example, and without limitation, the adjustment application 182 could identify a new or existing user via a fingerprint sensor while the user is holding the head-mounted audio device 110. The adjustment application 182 could then adjust the head-mounted audio device 110 to a preferred state associated with the user. In other non-limiting examples, the head-mounted audio device 110 could include a heartbeat sensor that enables the adjustment application 182 to identify a new or existing user by detecting specific characteristics of the user's heartbeat and/or a microphone that enables the adjustment application 182 to identify a new or existing user by detecting specific characteristics of the user's voice (e.g., a voiceprint or voice identifier). In addition, after adjusting the head-mounted audio device 110 to a preferred state associated with a user, the adjustment application 182 may detect adjustments the user makes to the head-mounted audio device 110 and update the set of physical parameters associated with the corresponding user identifier. In general, any technically feasible sensor for detecting biometric information associated with a user may be implemented with the head-mounted audio device 110.
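An entirely hypothetical dispatch over these biometric options is sketched below; the matching logic behind each `matches_*` call is elided.

```python
# Hypothetical dispatch over the biometric options described above; the
# matching logic behind each matches_* call is elided.
def identify_user(known_users: dict, fingerprint=None,
                  heartbeat=None, voiceprint=None) -> str | None:
    """Return a user identifier if any supplied biometric matches a profile."""
    for user_id, profile in known_users.items():
        if fingerprint is not None and profile.matches_fingerprint(fingerprint):
            return user_id
        if heartbeat is not None and profile.matches_heartbeat(heartbeat):
            return user_id
        if voiceprint is not None and profile.matches_voiceprint(voiceprint):
            return user_id
    return None  # new user: store parameters after manual adjustment
```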
The head-mounted audio device 110 may further include additional states, such as a storage state, an around-the-neck state, etc. For example, and without limitation, the head-mounted audio device 110 could transition to a storage state, in which the actuators 130 adjust each of the adjustable regions 140 to a minimum position (e.g., a fully collapsed state). A transition to the storage state may be initiated when the sensors 120, 122 detect that the head-mounted audio device 110 is being put into storage and/or when the user presses a button indicating that the head-mounted audio device 110 is being put into storage.
In another non-limiting example, the head-mounted audio device 110 could transition (e.g., in response to sensor data and/or a button press) to an around-the-neck state, such as when the user has removed the head-mounted audio device 110 to engage in a conversation, listen to the environment, take a break from listening to music, etc. When transitioning to the around-the-neck state, the actuators 130 could adjust each of the adjustable regions 140 towards a maximum position (e.g., an expanded state) in order to prevent the head-mounted audio device 110 from uncomfortably squeezing the neck or face of the user. The head-mounted audio device 110 may then transition from the storage state or the around-the-neck state back to the appropriate state when placed back on the head of the user. Accordingly, the user does not need to manually adjust the head-mounted audio device 110 each time the device is put on and removed from the head of the user.
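One hedged way to represent these additional states in software is a simple enumeration with per-state targets for the adjustable regions 140; the numeric settings below are invented placeholders.

```python
# Enumeration of the device states described above; numeric targets for the
# adjustable regions 140 are invented placeholders.
from enum import Enum

class DeviceState(Enum):
    FITTED = "fitted"            # per-user parameters from database 184
    AROUND_NECK = "around_neck"  # regions expanded toward maximum
    STORAGE = "storage"          # regions fully collapsed

STATE_TARGETS_MM = {
    DeviceState.STORAGE: 0.0,        # minimum setting for every region
    DeviceState.AROUND_NECK: 40.0,   # near-maximum setting to avoid squeezing
}
```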
Although exemplary adjustments are shown in the figures, in other embodiments, any other technically feasible adjustments may be made to the adjustable regions 140 of the head-mounted audio device 110.
Additionally, in some embodiments, the head-mounted audio device 110 could include noise isolation characteristics (e.g., passive or active noise cancellation) and an externally mounted microphone that detects noise levels in the surrounding environment. Then, in response to detecting elevated noise levels in the environment, the head-mounted audio device 110 may automatically increase the force between the ear pad of the speaker(s) 112 and the ear(s) of the user (e.g., from F1 to F2), increasing the degree to which external noises are attenuated. The head-mounted audio device 110 may further automatically decrease the force between the ear pad of the speaker(s) 112 and the ear(s) of the user (e.g., from F2 to F1) once the external noise level has fallen below a threshold level.
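A minimal sketch of this noise-adaptive behavior appears below. The separate raise and lower thresholds form a hysteresis band, an added detail to prevent oscillation near a single threshold; only the F1/F2 labels come from the passage above, and all numeric values are placeholders.

```python
# Sketch of the noise-adaptive clamping described above, with an added
# hysteresis band; F1, F2, and all numeric values are placeholders.
F1_N, F2_N = 2.5, 4.0             # nominal and elevated clamp forces (newtons)
RAISE_DB, LOWER_DB = 75.0, 65.0   # enter/exit thresholds for "noisy"

def target_clamp_force(noise_db: float, current_force_n: float) -> float:
    """Pick the clamp force to command given the ambient noise level."""
    if noise_db >= RAISE_DB:
        return F2_N               # elevated noise: press the pads harder
    if noise_db <= LOWER_DB:
        return F1_N               # quiet again: relax to the nominal force
    return current_force_n        # inside the hysteresis band: hold steady
```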
In some embodiments, an adjustable region 140 of the head support 114 could be expanded via the actuator 130 to increase the height of a headband or retracted to decrease the height of the headband. Although the actuator 130 illustrated in the figures is a shape-memory material actuator, any other type of actuator may be implemented in other embodiments.
In operation, upon receiving an indication that the head-mounted audio device 110 should be transitioned between states, the processing unit 160 may cause an external stimulus, such as a voltage or temperature change, to be applied to the shape-memory material actuator 130. In response, the length of the layer 150 decreases, causing the headband to fold or bow inward. Accordingly, the head-mounted audio device 110 transitions from a first state associated with a first distance 505-1 between the speakers 112 to a second state associated with a second distance 505-2 between the speakers 112. Additionally, application of a different stimulus to the shape-memory material actuator 130 may cause the length of the layer 150 to increase, causing the headband to fold or bow outward. Thus, when the head-mounted audio device 110 is being worn by a user, application of a stimulus may increase or decrease the force between the ear pad(s) of the speaker(s) 112 and the ear(s) of the user.
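As a speculative sketch only, one common way to apply a voltage stimulus to a shape-memory element is resistive heating under pulse-width modulation; the `pwm_channel` interface below is entirely hypothetical and not part of the disclosure.

```python
# Speculative sketch: resistive heating of a shape-memory element under
# pulse-width modulation. The pwm_channel interface is entirely hypothetical.
import time

def apply_stimulus(pwm_channel, duty: float, duration_s: float) -> None:
    """Apply a heating stimulus for a fixed interval, then remove it."""
    pwm_channel.set_duty_cycle(duty)  # heat the shape-memory layer
    time.sleep(duration_s)            # hold while the layer contracts
    pwm_channel.set_duty_cycle(0.0)   # remove the stimulus; the layer relaxes

def fold_inward(pwm_channel) -> None:
    """Shorten the layer so the headband bows inward (distance 505-1 to 505-2)."""
    apply_stimulus(pwm_channel, duty=0.6, duration_s=1.5)
```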
The processing unit 160 may receive an indication that the head-mounted audio device 110 should be transitioned between states via any of the techniques described herein. In one non-limiting example, after a user removes the head-mounted audio device 110 from his or her head, a button may be pressed to apply a stimulus to the shape-memory material actuator 130, causing the head-mounted audio device 110 to transition to a storage state or an around-the-neck state. Additionally, when a user would like to put the head-mounted audio device 110 on his or her head, a button may be pressed to apply a different stimulus to the shape-memory material actuator 130, causing the head-mounted audio device 110 to return to a preferred state, such as one defined by a set of physical parameters associated with a particular user identifier.
As shown, a method 600 begins at step 610, where the adjustment application 182 executing on the processing unit 160 determines whether an indication has been received that the head-mounted audio device 110 has been placed on a head of a user. As described herein, an indication that the head-mounted audio device 110 has been placed on a head of a user may be received via a button press (e.g., a physical button on the head-mounted audio device 110, a virtual button in a software application, etc.). Additionally, in some embodiments, the indication may be received in response to analyzing data acquired by the first sensors 120 and/or second sensors 122 and determining, based on the sensor data, that the head-mounted audio device 110 has been placed on a head of a user.
If the adjustment application 182 determines that an indication has not been received, then the method 600 remains at step 610 and continues to wait for an indication. If the adjustment application 182 determines that an indication has been received, then the method 600 proceeds to step 620, where the adjustment application 182 causes at least one actuator 130 to transition the head-mounted audio device from a first state associated with a first set of physical parameters (e.g., a storage state, an undefined state, a state associated with a different user, etc.) to a second state associated with a second set of physical parameters. As described herein, the adjustment application 182 could cause the actuator(s) 130 to transition the head-mounted audio device 110 to the second state based on a set of physical parameters that are stored in the database 184. For example, and without limitation, biometric data associated with a particular user could be stored in the database 184 and retrieved by the adjustment application 182 to determine physical parameters, such as the distances between components of the head support 114 and the speakers 112, the orientations/angles of the speakers 112 relative to the head support, distances/orientations between various components of the head support 114 itself, forces between components of the head support 114 and the user's head/ears, and/or forces between the speakers 112 and the user's head/ears.
Additionally, in some embodiments, the adjustment application 182 could cause the actuator(s) 130 to transition the head-mounted audio device 110 to the second state based on feedback received from the first sensors 120 and/or the second sensors 122. For example, and without limitation, the first sensors 120 could transmit sensor data to the adjustment application 182, which would then determine, based on the sensor data, the distance between components of the head support 114 and the user's head, whether the ears are properly aligned with the speakers 112, whether the speakers 112 are at the proper angle relative to the user's ears/head, whether an appropriate amount of force is being applied to the user's head and/or ears, etc.
Next, at step 630, the adjustment application 182 determines whether an indication has been received that the head-mounted audio device 110 has been removed from the head of the user, positioned around the neck of the user, or placed in storage. As described above, such indications may be received via a button press and/or in response to analyzing data acquired by the first sensors 120 and/or second sensors 122. If the adjustment application 182 determines that an indication has not been received, then the method 600 remains at step 630 and continues to wait for an indication. If the adjustment application 182 determines that an indication has been received, then the method 600 proceeds to step 640.
At step 640, the adjustment application 182 causes at least one actuator 130 to transition the head-mounted audio device 110 from the second state to an around-the-neck state, to a storage state, or back to the first state. As described above, the adjustment application 182 could cause the actuator(s) 130 to transition the head-mounted audio device 110 to an around-the-neck state, to a storage state, or back to the first state based on a set of physical parameters that are stored in the database 184. Additionally, the adjustment application 182 could cause the actuator(s) 130 to transition the head-mounted audio device 110 to an around-the-neck state, to a storage state, or back to the first state based on feedback received from the first sensors 120 and/or the second sensors 122. The method 600 then returns to step 610, previously described herein.
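Steps 610 through 640 of method 600 can be summarized as a polling loop, sketched below with assumed interfaces rather than disclosed APIs.

```python
# Steps 610-640 of method 600 as a polling loop; interfaces are assumed.
def method_600(app) -> None:
    while True:
        app.wait_for_on_head_indication()             # step 610
        app.transition_to(app.fitted_parameters())    # step 620
        event = app.wait_for_removal_indication()     # step 630: "removed",
                                                      # "around_neck", or "stored"
        app.transition_to(app.parameters_for(event))  # step 640, then loop
```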
In sum, the adjustment application receives an indication that the head-mounted audio device has been placed on the head of the user and, in response, causes the actuators to adjust the head-mounted audio device based on a particular set of physical parameters. Then, when the adjustment application receives an indication that the head-mounted audio device has been removed from the head of the user, the adjustment application causes the actuators to adjust the head-mounted audio device based on a different set of physical parameters (e.g., parameters associated with an around-the-neck state, a storage state, or a different user state).
At least one advantage of the techniques described herein is that a head-mounted audio device may be automatically adjusted to comfortably and securely fit the head of a user. Accordingly, a user does not need to make manual adjustments to a head-mounted device each time the device is placed on or removed from his or her head. Additionally, the head-mounted device may automatically modify the attenuation of external noise by increasing or decreasing the force between the speakers and the ears of the user.
The descriptions of the various embodiments have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments.
Aspects of the present embodiments may be embodied as a system, method or computer program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such processors may be, without limitation, general purpose processors, special-purpose processors, application-specific processors, or field-programmable processors.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The invention has been described above with reference to specific embodiments. Persons of ordinary skill in the art, however, will understand that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. For example, and without limitation, although many of the descriptions herein refer to specific types of actuators, sensors, and head supports, persons skilled in the art will appreciate that the systems and techniques described herein are applicable to other types of actuators, sensors, and head supports. The foregoing description and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
While the preceding is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2015/043099 | 7/31/2015 | WO | 00

Publishing Document | Publishing Date | Country | Kind
---|---|---|---
WO2017/023243 | 2/9/2017 | WO | A
Number | Date | Country
---|---|---
20200084534 A1 | Mar 2020 | US