Some audio systems—such as headphones—include speaker elements that are worn close to users' ears. As a result, these speaker elements may output audio at a comparatively low volume that may enable users wearing such audio systems to enjoy media without disturbing others close by. For users that desire to listen to audio with one or more other users, some audio systems include speaker elements that are configured to output audio at a volume that may be heard by a group of nearby users (e.g., in the same room). However, current audio systems typically are not configured to operate selectively as both a personal-listening system (e.g., headphones) and as a group-listening system (e.g., a public-address system). As a result, a user may need to utilize one audio system for personal listening and a second, separate audio system for group listening.
The foregoing embodiments and many of the attendant advantages will become more readily appreciated as the same become better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings, wherein:
Various embodiments of the attachment apparatus may be described with reference to certain anatomical features of a human user's ear. For ease of reference, the anatomical features of a user's ear may be referred to in this disclosure using the following terms. The term “root of an ear” refers to a portion of the ear that is proximal to the user's head. Specifically, the root of a user's ear may be a portion or structure of the ear that secures the ear to the user's head. Also, as used herein, the term “outer ear” refers to the portion of the ear that is distal to the user's head as compared to the root of the ear. The outer ear may include or otherwise be defined by at least a portion of the ear's auricle, helix, and/or lobule. Typically, the perimeter of the outer ear is greater than the perimeter of the root of the ear. The term “upper root portion of the ear” generally refers to a portion of the root of the ear that is proximal to the top of the user's head. In contrast, the term “lower root portion of the ear” refers to a portion of the root of the ear that is distal to the top of the user's head. Further, the terms “front of an ear” and “anterior portion of an ear” are used interchangeably and refer to a portion of the ear that is proximal to a user's face and distal to the back of the user's head. The front of the ear may include portions of the helix, the antihelix, tragus, and antitragus that are proximal to the user's face. The term “anterior root portion of the ear” generally refers to a portion of the root of the ear corresponding to the anterior portion of the ear. The terms “back of an ear” and “posterior portion of an ear” are used interchangeably and refer to a portion of the ear that is proximal to the back of the user's head and distal to the user's face. The back of the ear may include portions of the helix and the antihelix proximal to the back of the user's head.
Similarly, the term “posterior root portion of the ear” generally refers to a portion of the root of the ear corresponding to the posterior portion of the ear. The term “interior portion of an ear” refers to a portion of the outer ear proximal to, but not including, the ear canal. The interior portion of an ear may include, without limitation, at least part of one or more of the concha, antihelix, antitragus, and tragus. Further descriptions and references to the foregoing terms are provided herein.
As used herein, the terms “speaker” and “loud speaker” are used interchangeably and generally refer to an electroacoustic transducer that is configured to convert an electrical signal into audible sound. The term “personal-listening speaker” refers to a speaker that is configured to play out audio at a volume that is suitable for use as a personal listening device. By way of a non-limiting example, a personal-listening speaker may be included in headphone or earphone devices configured to output audio close to a user's ear without damaging the user's hearing. The term “group-listening speaker” refers to a speaker that is configured to output audio at a volume that is suitable for use as a group-listening device. In a non-limiting example, a group-listening speaker may be included in a portable loud speaker, such as a portable Bluetooth® speaker, and may be configured to play out audio having a volume that is audible to a group of individuals close to the group-listening speaker. As used herein, the term “back volume” generally refers to a volume of air on a rearward-facing side of a speaker driver, and the term “front volume” generally refers to another volume of air on a frontward-facing side of a speaker driver, as would be known by one of ordinary skill in the art.
As used herein, the term “full-range speaker” refers to a speaker that is configured to generate sound frequencies at least substantially within the human-hearing range (~20 Hz to 20,000 Hz). The term “low-range speaker” refers herein to a speaker that is configured to generate sound frequencies primarily (or exclusively) in a range that is at least substantially lower than a mid-to-high-range speaker. By way of a non-limiting example, the frequency range of a low-range speaker may be from 20 Hz to 2,000 Hz. In another non-limiting example, the low-range speaker may be configured as a woofer, mid-woofer, or subwoofer, as would be known by one skilled in the art. The term “high-range speaker” refers herein to a speaker that is configured to generate sound frequencies primarily (or exclusively) in a range that is at least substantially higher than the range of frequencies produced by a low-range speaker. By way of a non-limiting example, a high-range speaker may produce frequencies between 2,000 Hz and 20,000 Hz. In another non-limiting example, a high-range speaker may be configured as a tweeter, as would be known by one skilled in the art.
In overview, aspects of the present disclosure include audio systems that feature improvements over current audio systems, such as those described above. In some embodiments, an audio system may include a first audio device that includes a first speaker and a second speaker. The first speaker may be selectively configurable to operate as either a full-range, personal-listening speaker or a low-range, group-listening speaker. The second speaker may be configured to be inactive (or in a lower power state) while the first speaker is configured as a full-range, personal-listening speaker or to be configured as a high-range, group-listening speaker while the first speaker is configured to operate as a low-range, group-listening speaker. In some embodiments, the first audio device may be secured to a user's ear so that the first speaker is positioned near the user's ear. While secured on the user's ear, the first audio device may be configured to operate in a personal-listening mode whereby the first speaker is configured to operate as a full-range, personal-listening speaker. Specifically, because the first speaker is positioned near the user's ear, the first speaker may be configured to output sound in a wide range of frequencies and at a relatively low volume so that the user may comfortably enjoy a full range of sound coming from the first speaker. While the first audio device is configured in a personal-listening mode, the second speaker may not be used and, in some embodiments, may be caused to operate in a low power state.
In some embodiments, the first audio device may be configured to operate in a group-listening mode in which the first and second speakers of the audio device are configured to operate as group-listening speakers. The first audio device may not be secured to the user's ear (so as to avoid damaging the user's hearing). In some embodiments in which the first audio device is configured to operate in a group-listening mode, the first speaker may be configured to operate as a low-range, group-listening device. Specifically, the first speaker may be configured to generate sounds having frequencies in a lower portion of the range of human hearing (e.g., without limitation, frequencies between 20 Hz and 2,000 Hz). The first speaker may be configured to generate these sounds at a volume that may be experienced by users within the immediate area of the first audio device. The second speaker may be configured to generate sounds having frequencies in a higher portion of the range of human hearing (e.g., without limitation, frequencies between 2,000 Hz and 20,000 Hz). The second speaker may be configured to generate these sounds at a volume that may also be experienced by users within the immediate area of the first audio device.
In some embodiments, the first audio device may be powered by a battery. Accordingly, to achieve an increase in power usage efficiency, the first speaker may have one or more characteristics that may enable the first speaker to generate low-range frequencies more efficiently than the second speaker. By way of a non-limiting example, the first speaker may be larger than the second speaker so that the first speaker may generate lower frequencies using less energy than the second speaker. The second speaker may be configured to have one or more characteristics that enable the second speaker to generate high-range frequencies more efficiently than the first speaker. In a non-limiting example, the second speaker may be a micro-speaker with a form factor that is smaller than the first speaker. Due to the second speaker's smaller form factor, the second speaker may generate high-range frequencies using less power than the power required for the first speaker to generate the same high-range frequencies. Further, in some embodiments in which the audio device is portable, the combination of the smaller form factor of the second speaker and the larger form factor of the first speaker may enable the audio device to produce a high-quality sound using comparatively less power while keeping the overall weight of the first audio device down.
In some embodiments in which the first audio device is configured to operate in a group-listening mode, a speaker control service running on one or more processors included on the first audio device (e.g., one or more CPUs or DSPs) may coordinate and synchronize output of sound via the first and second speakers. For example, the speaker control service may cause an audio signal representing sound to be split between the first and second speakers. In such an example, low-range frequencies may be directed to the first speaker, and high-range frequencies may be directed to the second speaker. Accordingly, output of the audio signal as sound via the first and second speakers may be synchronized so that the full range of frequencies represented in the audio signal is included in the combined sound generated from the first and second speakers.
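By way of a non-limiting illustration, the frequency split described above resembles a conventional crossover filter. The following Python sketch (all function and variable names are illustrative assumptions, not part of this disclosure) splits a stream of audio samples into complementary low and high bands using a first-order crossover at 2,000 Hz:

```python
import math

def split_bands(samples, sample_rate, crossover_hz=2000.0):
    """Split samples into (low_band, high_band) with a one-pole crossover.

    Illustrative sketch only: a real speaker control service would likely
    use a steeper (e.g., Linkwitz-Riley) crossover implemented on a DSP.
    """
    # One-pole low-pass coefficient for the chosen crossover frequency.
    alpha = 1.0 - math.exp(-2.0 * math.pi * crossover_hz / sample_rate)
    low, lp_state = [], 0.0
    for x in samples:
        lp_state += alpha * (x - lp_state)  # low-pass state update
        low.append(lp_state)
    # The high band is the complement, so low + high reconstructs the input.
    high = [x - l for x, l in zip(samples, low)]
    return low, high
```

In this sketch, the low band would be routed to the first (low-range) speaker and the high band to the second (high-range) speaker; because the two bands sum to the original signal, the combined acoustic output preserves the full range of frequencies.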
In some embodiments, the first audio device may be configured to form an acoustic chamber when the first audio device is secured to a user's ear or when the first audio device is placed against a surface (e.g., a table top). In such embodiments, the first speaker may output sound into the acoustic chamber whether the first audio device is configured to operate in a personal-listening mode or in a group-listening mode. The acoustic chamber may function as a front volume for the first speaker, enabling the first speaker to use the air in the acoustic chamber to generate sound relatively efficiently. When the first audio device is secured to the user and operating in a personal-listening mode, full-range sound generated from the first speaker is directed to the user's ear. However, when the first audio device is operating in a group-listening mode, sound generated from the first and second speakers may be directed outward so that one or more nearby users can hear the sound without placing the first audio device near their ear.
In some embodiments in which the first audio device is placed on a surface, the first audio device may be configured to form an acoustic opening along a portion of the first audio device. In such embodiments, while the first audio device is configured to operate in a group-listening mode, low-range sounds generated by the first speaker may be directed from the acoustic chamber through the acoustic opening into ambient air, and the acoustic chamber and acoustic opening may thereby function essentially as a front volume and an acoustic horn that collectively improve impedance matching, bass response, and power consumption while also effectively directing sound away from the first audio device into the ambient air. In some additional (or alternative) embodiments, the acoustic chamber and acoustic opening may function as a Helmholtz resonator, thereby enabling the first speaker to generate low-frequency sounds effectively and with less power. At the same time the first speaker is generating low-frequency sounds, the second speaker may be configured to generate synchronized, high-frequency sound that is directed away from the first audio device.
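For reference, the resonant frequency of a Helmholtz resonator is given by the standard acoustics relation (general physics background, not a limitation of this disclosure):

```latex
f_0 = \frac{c}{2\pi} \sqrt{\frac{A}{V \, L_{\mathrm{eff}}}}
```

where $c$ is the speed of sound, $A$ is the cross-sectional area of the acoustic opening, $V$ is the volume of the acoustic chamber, and $L_{\mathrm{eff}}$ is the effective length of the opening's neck. Tuning $f_0$ toward the low end of the first speaker's frequency range is one way such a chamber-and-opening geometry may reinforce low-frequency output without additional electrical power.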
In some embodiments, an audio system may include the first audio device and a second audio device. The second audio device may be configured as a mirror image of the first audio device. The second audio device may include, inter alia, a third speaker configured as a mirror image of the first speaker of the first audio device and may include a fourth speaker configured as a mirror image of the second speaker of the first audio device. The second audio device may be selectively configured to operate in a personal-listening mode or a group-listening mode as generally described with reference to the first audio device. In some embodiments, the first and second audio devices may be collectively configured to operate in a personal-listening mode or a group-listening mode at the same time. In some embodiments, the first and second audio devices may be configured to output sound in concert in either a personal-listening mode using the first and third speakers or a group-listening mode using the first, second, third, and fourth speakers. In some embodiments, the first audio device and the second audio device may output different portions (e.g., channels) of an audio stream. For example, the first audio device may output sound represented in a left channel of an audio stream, and the second audio device may output sound represented in a right channel of the same audio stream. In various embodiments, the first, second, third, and fourth speakers may be coordinated to play out audio in concert (e.g., synchronized).
As described, the first audio device may be configured to form an acoustic chamber near the first speaker when secured to a user or when coupled to a surface. As the second audio device may be configured as a mirror image of the first audio device in some embodiments, the second audio device may also be configured to form an acoustic chamber near the third speaker. In some embodiments, the first audio device and the second audio device may be configured so that they are selectively coupled to each other via one or more coupling devices (e.g., interlocking components, magnets, or the like). While coupled together, the acoustic chamber formed by the first audio device and the acoustic chamber formed by the second audio device may collectively form/define a combined acoustic chamber. In this configuration, each of the first audio device and the second audio device may collectively utilize the combined acoustic chamber to generate sound suitable for group listening. While the first audio device and the second audio device are decoupled, the combined acoustic chamber may be unformed, and the first audio device and the second audio device may be individually configured to generate sound suitable for personal listening (or group-listening) as described above.
In some embodiments, when the first audio device is coupled to the second audio device, the first and second audio devices may be collectively configured to form an acoustic opening that enables sound to exit from the acoustic chamber. In some embodiments, the acoustic opening may direct low-range frequency sounds down towards a surface on which the first and second audio devices are resting, and these sounds may reflect off the surface into the ambient air. In some embodiments, vibrations generated by the first and third speakers may pass through the first and second audio devices into the surface on which the first and second audio devices are resting, thereby causing the surface to act as a resonator and increasing the perceived volume of the sound generated by the first and third speakers.
In some embodiments, the first speaker of the first audio device and the third speaker of the second audio device may collectively utilize the combined acoustic chamber as a front volume in order to generate sound suitable for group listening. In such embodiments, the frontward side of the first speaker of the first audio device may be configured to face the combined acoustic chamber and to direct sound into the combined acoustic chamber. Similarly, the frontward side of the third speaker of the second audio device may also be configured to face the combined acoustic chamber and to direct sound into the combined acoustic chamber at or about the same time as the first speaker of the first audio device directs sound into the combined acoustic chamber. The combined acoustic chamber may have a shape that is suitable for mixing, combining, blending, concentrating, acoustically/passively amplifying, and/or directing the sound output from the first audio device and/or the second audio device. In some embodiments, the first speaker may generate sound within the combined acoustic chamber that is in phase with sound generated by the third speaker. The combined acoustic chamber may enable this in-phase sound to create high sound pressure levels and improved frequency extension down to bass frequencies without requiring additional power consumption by the first and second audio devices. Thus, by coupling together the first and second audio devices, the perceived volume of sound produced from the speakers of the first and second audio device may be increased and/or the characteristics of the sound may be modified, such as by improving the bass response of such sound. According to such embodiments, coupling the first and second audio devices together may enable or improve the ability of the audio system to function as a group-listening device.
In some embodiments, one or more speakers included in the first audio device may be configured to operate as personal-listening speakers while the first audio device is not coupled to the second audio device (or, in some embodiments, while also not coupled to a base device). For example, while the first audio device is not coupled to the second audio device, the second speaker included in the first audio device may be deactivated or disabled and the first speaker included in the first audio device may be activated or enabled and configured to operate as a personal-listening speaker. Upon coupling the first audio device to the second audio device (or to the base device or other surface), one or more of the speakers included in the first audio device may be configured to operate as group-listening speakers. In a non-limiting example, in response to coupling the first audio device to the second audio device (or to the base device or other surface), the second speaker included in the first audio device may be activated or enabled and the first speaker included in the first audio device may be configured to operate as a group-listening speaker. In some embodiments, coupling or decoupling the first audio device from the second audio device or the base device may cause one or more speakers included in the first audio device to transition from operating as group-listening speakers to personal-listening speakers, or vice versa. Accordingly, in such embodiments, the first speaker included in the first audio device may selectively function as either a group-listening speaker or a personal-listening speaker. 
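By way of a non-limiting illustration, the coupling-driven role selection described above may be sketched as a simple state machine. In the following Python sketch, all class, attribute, and role names are hypothetical and serve only to illustrate the transitions between personal-listening and group-listening configurations:

```python
from enum import Enum, auto

class Mode(Enum):
    PERSONAL = auto()  # device worn on the user's ear
    GROUP = auto()     # device coupled to its counterpart, a base device, or a surface

class SpeakerController:
    """Illustrative controller that assigns speaker roles based on coupling state."""

    def __init__(self):
        # Default to personal listening: first speaker full-range, second inactive.
        self.mode = Mode.PERSONAL
        self.first_speaker_role = "full-range personal-listening"
        self.second_speaker_role = "inactive"

    def on_coupling_changed(self, coupled):
        """Transition speaker roles when the device is coupled or decoupled."""
        if coupled:
            self.mode = Mode.GROUP
            self.first_speaker_role = "low-range group-listening"
            self.second_speaker_role = "high-range group-listening"
        else:
            self.mode = Mode.PERSONAL
            self.first_speaker_role = "full-range personal-listening"
            self.second_speaker_role = "inactive"
```

In this sketch, the coupling event (however detected, e.g., via magnets, interlocks, or sensors) drives the same role reassignment described above: decoupling restores the first speaker to a full-range, personal-listening role and deactivates the second speaker.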
The second audio device may be configured similarly to the first audio device (e.g., configured as a mirror-image of the first audio device) and thus may include one or more speakers configured to operate as personal-listening speakers while the second audio device is not coupled to the first audio device (or to the base device) and configured to operate as group-listening speakers while coupled to the first audio device (or to the base device).
Various embodiments will be described in detail with reference to the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. References made to examples and implementations are for illustrative purposes and are not intended to limit the scope of the invention or the claims.
The first audio device 102a and the second audio device 102b may communicate with each other via a wireless communication link 113, such as a Wi-Fi Direct, Bluetooth®, near-field magnetic induction, or similar communication link. In some embodiments, the first audio device 102a and the second audio device 102b may maintain a master-slave relationship in which one of the first audio device 102a or the second audio device 102b (the “master” device) coordinates activities, operations, and/or functions between the devices 102a, 102b via the wireless communication link 113. The other of the first audio device 102a or the second audio device 102b (the “slave” device) may receive commands from and may provide information or confirmations to the master device via the communication link 113. By way of a non-limiting example, the first audio device 102a may be the master device and may provide audio data and timing/synchronization information to the second audio device 102b to enable the second audio device 102b to begin output of the audio data in sync with output of the audio data by the first audio device 102a. In this example, the first audio device 102a may provide a data representation of a song and timing information to the second audio device 102b to enable the second audio device 102b and the first audio device 102a to play the song at the same time via one or more of their respective speakers. Alternatively, the first audio device 102a and the second audio device 102b may be peer devices in which each of the devices 102a, 102b shares information, sensor readings, data, and the like and coordinates activities, operations, functions, or the like between the devices 102a, 102b without one device directly controlling the operations of the other device.
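By way of a non-limiting illustration, one simple way the master device might synchronize playout is to distribute a start timestamp slightly in the future, with each device waiting until that time before beginning output. In the Python sketch below, the function names are hypothetical, and the local monotonic clock stands in for the shared clock reference that real devices would establish over the communication link:

```python
import time

def master_schedule_start(lead_seconds=0.05):
    """Master picks a start timestamp slightly in the future and would send it,
    along with the audio data, to the slave device over the link.

    Assumption: both devices share a common clock reference; here the local
    monotonic clock stands in for that shared reference.
    """
    return time.monotonic() + lead_seconds

def wait_until(start_time):
    """Each device blocks until the agreed start time, then begins playout."""
    remaining = start_time - time.monotonic()
    if remaining > 0:
        time.sleep(remaining)
```

Because both devices begin output at the same agreed timestamp, the song plays in sync across their respective speakers; the lead time gives the slave device a margin to receive and buffer the audio data before playout begins.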
The first audio device 102a and/or the second audio device 102b may be in communication with the base device 103, for example, via wired or wireless communication links (e.g., wireless links 112, 114). In some embodiments, the base device 103 may provide information or other data (e.g., audio data) to each of the first audio device 102a and the second audio device 102b. By way of a non-limiting example, the base device 103 may provide audio data and/or timing data to the first audio device 102a and the second audio device 102b to enable the devices 102a, 102b to play out the audio data at the same or nearly the same time. In some embodiments, the base device 103 may be in communication with only one of the first audio device 102a and the second audio device 102b (e.g., the “master” device, as described), and information or data provided from the base device 103 to the master device may be shared with the other one of the first audio device 102a and the second audio device 102b (e.g., the “slave” device, as described).
In some embodiments, at least one device of the audio system 101 (e.g., one of the first audio device 102a, the second audio device 102b, or the base device 103) may be in communication with one or more computing devices outside of the audio system 101 and may send and receive information and other data to and from these computing devices. In the non-limiting example illustrated in
Additionally (or alternatively), at least one device of the audio system 101 may be in direct or indirect communication with one or more servers 116 via at least one network 121. For example, at least one of the devices in the audio system 101 may establish a wireless communication link 115 (e.g., a Wi-Fi link, a cellular LTE link, or the like) to a wireless access point, a cellular base station, and/or another intermediary device that may be directly or indirectly in communication with the one or more servers 116. In such embodiments, at least one of the devices in the audio system 101 may communicate indirectly with the one or more servers 116 via one or more intermediary devices. In another example, the first audio device 102a and/or the second audio device 102b may send, via the network 121, a request for a stream of audio data from the one or more servers 116, and the one or more servers 116 may respond to the request by providing the first audio device 102a and/or the second audio device 102b with the requested stream of data via a communication link 117 with the network 121. In some embodiments, at least one device of the audio system 101 may include a microphone configured to receive an analog source of sound 104 (e.g., a human).
Each of the communication links 110, 111, 112, 113, 114, 115, 117 described herein may be a communication path through one or more networks (not shown), which may include wired networks, wireless networks, or a combination thereof (e.g., the network 121). Such networks may include personal area networks, local area networks, wide area networks, over-the-air broadcast networks (e.g., for radio or television), cable networks, satellite networks, cellular telephone networks, or a combination thereof. In some embodiments, the networks may be private or semi-private networks, such as corporate or university intranets. The networks may also include one or more wireless networks, such as a Global System for Mobile Communications (GSM) network, a Code Division Multiple Access (CDMA) network, a Long Term Evolution (LTE) network, or some other type of wireless network. Protocols and components for communicating via the Internet or any of the other aforementioned types of communication networks are well known to those skilled in the art and, thus, are not described in more detail herein.
For ease of description, the audio system 101 is illustrated in
As illustrated, the first audio device 102a may include (or be coupled to) an input/output device interface 122, a network interface 118, at least one optional microphone 156, a memory 124, a processing unit 126, a power source 128, an optional display 170, a first speaker 132, a second speaker 134, and a computer-readable-medium drive 160, all of which may communicate with one another by way of a communication bus. The network interface 118 may provide connectivity to one or more networks or computing systems, and the processing unit 126 may receive and/or send information and instructions from/to other computing systems or services via the network interface 118. For example (as illustrated in
The processing unit 126 may communicate to and from memory 124 and may provide output information for an optional display 170 via the input/output device interface 122. In some embodiments, the memory 124 may include RAM, ROM, and/or other persistent, auxiliary or non-transitory computer-readable media. The memory 124 may store an operating system 164 that provides computer program instructions for use by the processing unit 126 in the general administration and operation of the first audio device 102a. In some embodiments, the memory 124 may contain digital representations of audio data 162 or electronic audio signals (e.g., digital copies of songs or videos with audio). In such embodiments, the processing unit 126 may obtain the audio data 162 or electronic audio signals from the memory 124 and may provide electronic audio signals to the first speaker 132 and/or the second speaker 134 for playout as sound.
In some embodiments, the memory 124 may further include computer program instructions and other information for implementing aspects of the present disclosure. For example, in some embodiments, the memory 124 may include a speaker control service 166, which may be executed by the processing unit 126 to perform various operations. In some embodiments, the speaker control service 166 may implement various aspects of the present disclosure, for example, by utilizing sensor, input, or other information to determine whether to configure the first speaker 132 to operate as a group-listening speaker or as a personal-listening speaker and to determine whether to configure the second speaker 134 to operate as a group-listening speaker or to cause the second speaker 134 to become inactive or enter a low-power state. The processes that the speaker control service 166 utilizes to enable personal-listening mode or group-listening mode selectively are further described with reference to
In some embodiments, the input/output device interface 122 may also receive input from an input device 172, such as a keyboard, mouse, digital pen, microphone, touch screen, gesture recognition system, voice recognition system, image recognition through an imaging device (which may capture eye, hand, head, body tracking data and/or placement), gamepad, accelerometer, gyroscope, or another input device known in the art. In some embodiments, the at least one microphone 156 may be configured to receive sound from an analog sound source (e.g., the analog sound source 104 described with reference to
In some embodiments, the one or more sensors 150 may include, but are not limited to, one or more biometric sensors, heat sensors, chronological/timing sensors, geolocation sensors, gyroscopic sensors, accelerometers, pressure sensors, force sensors, light sensors, or the like. In such embodiments, the one or more sensors 150 may be configured to obtain sensor information from a user of the first audio device 102a and/or from an environment in which the first audio device 102a is utilized by the user. The processing unit 126 may receive sensor readings from the one or more sensors 150 and may generate one or more outputs based on these sensor readings. For example, the processing unit 126 may configure a light-emitting diode included on the audio system (not shown) to flash according to a preconfigured pattern based on the sensor readings.
With reference to the examples illustrated in
The hooking body 302 of the first audio device 102a may be configured to have a shape that approximates a profile of a root of a posterior portion of a human ear. This shape may be referred to generally as a C-shape. When the hooking body 302 is secured to the user's ear (e.g., as illustrated in
The first audio device 102a may include a hinge 330. In some embodiments, the device body 306 may be coupled to the hooking body 302 via the hinge 330. For example, the hinge 330 may be one of various types of hinges (e.g., a tension hinge). The hinge 330 may be configured to couple the device body 306 to the hooking body 302 so that movement of the device body 306 and the hooking body 302 relative to each other is limited. In some embodiments (not shown), the hooking body 302 and the device body 306 may each include complementary magnetic elements that maintain the hooking body 302 and the device body 306 in the closed configuration. As such, as the device body 306 is moved towards the hooking body 302, the complementary magnetic elements may pull towards each other, thereby urging the device body 306 and the hooking body 302 towards each other.
The hinge 330 may be formed from one or more portions of the hooking body 302 and the device body 306. In some embodiments, the hinge 330 may additionally include one or more other structural features. In a non-limiting example, the hinge 330 may be formed at least in part by a portion of the hooking body 302, a portion of the device body 306, a spring, a first anchor device configured to couple the portion of the hooking body 302 to the spring, and a second anchor device configured to couple the portion of the device body 306 to the spring. In some alternative (or additional) embodiments, the hinge 330 may be a separate structural feature that is separately coupled to the hooking body 302 and the device body 306. In a non-limiting example, the hinge 330 may include a housing configured to couple to a portion of the hooking body 302 and a portion of the device body 306 such that, while the hooking body 302 and the device body 306 are coupled to the hinge 330, the hinge 330 governs the movement of the hooking body 302 and the device body 306 in relation to one another.
The hinge 330 may be configured to enable the device body 306 to be moved (e.g., swung, rotated, or pivoted) away from the hooking body 302 to cause the first audio device 102a to transition from a closed configuration to an open configuration by rotating about a rotational axis (not shown). The hinge 330 may also be configured to enable the device body 306 to be moved (e.g., swung, rotated, or pivoted) back towards the hooking body 302, for example, to transition the first audio device 102a from an open configuration to a closed configuration by rotating in the opposite direction about the rotational axis.
In various embodiments described herein, the first audio device 102a may be described as transitioning from a closed configuration to an open configuration. However, the first audio device 102a may, in some additional or alternative embodiments, be configured to transition from an open configuration to a closed configuration in a manner opposite of the manner described above with reference to transitioning from a closed configuration to an open configuration.
In some embodiments, the device body 306 may include or be coupled to an edge member 318. The edge member 318 may include or be made from one or more materials that are suitable for physically engaging a user's ear and/or portions of the user's face. In such embodiments, while the first audio device 102a is secured to a user's ear (e.g., as illustrated in
In some embodiments, the ear pad may be coupled to, attached to, or positioned towards a back-facing side of the device body 306. The ear pad may include or may be made from one or more materials, such as one or more soft, pliable materials suitable for physically engaging a human ear. In some embodiments, while the first audio device 102a is configured in an open configuration, a posterior portion of the user's ear may be inserted between the hooking body 302 and the device body 306 (e.g., as described above). When the first audio device 102a transitions from an open configuration to a closed configuration, the device body 306 may move towards the hooking body 302, thereby causing the ear to occupy at least a portion of the recessed area 320.
In some embodiments, the device body 306 may include a touch-sensitive sensor or sensors (not shown). By way of a non-limiting example, the touch-sensitive sensor or sensors may be a capacitive touch sensor or one or more other touch-sensitive sensors known in the art. In such embodiments, the device body 306 may be made from a material suitable for enabling the touch-sensitive sensor or sensors to measure changes in electrical properties, such as when a user's finger touches the device body 306.
In some embodiments (e.g., as illustrated in
As described, in some embodiments, the first audio device 102a may be secured to a user's outer ear 202. While secured on the user's outer ear 202, the first audio device 102a may be configured to operate in a personal-listening mode whereby the first speaker 132 is configured to operate as a full-range, personal-listening speaker. Specifically, because the first speaker 132 is positioned near an interior portion 220 of the user's outer ear 202, the first speaker 132 may be configured to output sound in a wide range of frequencies and at a relatively low volume so that the user 201 may comfortably enjoy a full range of sound coming from the first speaker 132. While the first audio device 102a is configured in a personal-listening mode, the second speaker 134 may not be used to output sound and, in some embodiments, may instead be caused to operate in a low-power state.
With reference to the example illustrated in
In some embodiments, the hinge 330 (not shown) may urge the device body 306 and the hooking body 302 towards each other, and the device body 306 and the hooking body 302 may collectively apply a compressive force to the posterior portion 208 of the outer ear 202 that may ensure that the first audio device 102a is secured to the outer ear 202.
The hooking body 302 and the device body 306 of the first audio device 102a may be configured collectively so that the first audio device 102a may be worn on and secured to the outer ear 202. The first audio device 102a may be configured in an open configuration (e.g., by moving the hooking body 302 away from the device body 306 via the hinge 330) so that a space or gap is present between the hooking body 302 and the device body 306. The first audio device 102a may then be placed on the outer ear 202 by hooking, hanging, or otherwise positioning the hooking body 302 along the root of the upper portion 204 of the outer ear 202 and by rotating the hooking body 302 until the hooking body 302 engages the root of the posterior portion 208 of the outer ear 202. Because the first audio device 102a features a space or gap between the hooking body 302 and the device body 306 while the first audio device 102a is in an open configuration, the posterior portion 208 of the outer ear 202 may move, at least partially, into such space or gap and remain in such space or gap once the hooking body 302 engages the root of the posterior portion 208 of the outer ear 202 (e.g., as shown in the example illustrated in
While the hooking body 302 is hooked onto the outer ear 202 and while the first audio device 102a is configured in an open configuration, the device body 306 may be moved (e.g., swung) towards the hooking body 302. As the device body 306 continues moving towards the hooking body 302, the space or gap between the hooking body 302 and the device body 306 may decrease in at least one dimension until the device body 306 physically contacts at least the posterior portion 208 of the outer ear 202. In some embodiments, once the device body 306 contacts the posterior portion 208 of the outer ear 202, the device body 306 may begin pressing the posterior portion 208 against the hooking body 302, generating a compressive force that secures the posterior portion 208 of the outer ear 202 between the device body 306 and the hooking body 302. For ease of description, the first audio device 102a may be described herein as being configured in a partially closed configuration while the posterior portion 208 of the outer ear 202 is secured between the device body 306 and the hooking body 302.
When the device body 306 is moved (e.g., swung) so that the first audio device 102a transitions to the closed position, the mid-ear portion 324 of the device body 306 may move into proximity of the interior portion 220 of the outer ear 202. In some embodiments, the first speaker 132 may move nearer to the interior portion 220 of the outer ear 202, thereby enabling the user 201 to experience sound generated from the first speaker 132. For example, when the first audio device 102a is secured to the user's ear, the first speaker 132 may be positioned in proximity to the interior portion of the ear (e.g., close to the meatus of the user's ear canal) so that audio played through the first speaker 132 is directed towards the ear canal. In such embodiments, the first speaker 132 may be positioned at a predetermined angle so that sound outputted from the first speaker 132 is directed towards the meatus of the user's ear canal when the first audio device 102a is secured to the user's ear.
With reference to the examples illustrated in
As described, the first audio device 102a may be powered by a battery (not shown). To achieve an increase in power usage efficiency of the first audio device 102a and thus a longer battery life, the first speaker 132 may be configured to have one or more characteristics that may enable the first speaker 132 to generate low-range frequencies more efficiently than the second speaker 134. By way of a non-limiting example, the first speaker 132 may be larger than the second speaker 134 (e.g., a 40 mm speaker driver vs. a micro-speaker) so that the first speaker 132 may generate lower frequencies using less energy than the energy the second speaker 134 would require to generate the lower-frequency sounds. The first speaker 132 may also, or alternatively, be configured to generate lower-frequency sounds with less distortion than lower-frequency sounds that could be generated by the second speaker 134.
In some embodiments, the second speaker 134 may be configured to have one or more characteristics that enable the second speaker 134 to generate high-range frequencies more efficiently than the first speaker 132. In a non-limiting example, the second speaker 134 may be a micro-speaker with a form factor that is smaller than the first speaker 132, which may, in this example, be a 40 mm speaker. Due to the smaller form factor of the second speaker 134, the second speaker 134 may generate high-range frequencies using less power than the power required for the first speaker 132 to generate the same high-range frequencies. Further, in some embodiments in which the first audio device 102a is portable, the combination of the smaller form factor of the second speaker 134 and the larger form factor of the first speaker 132 may enable the audio device to produce a high-quality sound using comparatively less power while keeping the overall weight of the first audio device 102a down.
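The division of labor between the two speakers described above amounts to a two-way crossover: low-range content to the larger first speaker, high-range content to the micro-speaker. A minimal sketch follows; the 2 kHz crossover point and all function names are assumptions for illustration only, not values from this disclosure.

```python
# Illustrative two-way band split between a larger low-range driver
# ("first_speaker") and a micro-speaker ("second_speaker").
# The crossover frequency is a hypothetical example value.
CROSSOVER_HZ = 2000

def route_band(component_hz: float) -> str:
    """Route one frequency component to the appropriate speaker."""
    return "first_speaker" if component_hz < CROSSOVER_HZ else "second_speaker"

def split_spectrum(components_hz):
    """Partition a list of frequency components between the two speakers."""
    low = [f for f in components_hz if route_band(f) == "first_speaker"]
    high = [f for f in components_hz if route_band(f) == "second_speaker"]
    return low, high
```

In a real device, this split would be performed by a filter network or DSP rather than by inspecting discrete frequency values, but the routing decision is the same.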
With reference to the examples illustrated in
With reference to
In some embodiments, the first speaker 132 may output sound into the acoustic chamber 323. The acoustic chamber 323 may function as a front volume for the first speaker 132, enabling the first speaker 132 to use the air in the acoustic chamber 323 to generate sound relatively efficiently. When the first audio device 102a is operating in a group-listening mode such that the first speaker 132 is configured to operate as a low-range, group-listening speaker, low-frequency sound generated from the first speaker 132 may be directed into the acoustic chamber 323, and the sound may exit the acoustic opening 321 into the ambient air (e.g., in a direction indicated by dotted line 614). In some embodiments, the acoustic chamber 323 and the acoustic opening 321 (and possibly the surface of the object 602) may collectively function essentially as an acoustic horn that improves impedance matching, bass response, and power consumption of the first speaker 132 while also effectively directing sound away from the first audio device into the ambient air. In some additional (or alternative) embodiments, the acoustic chamber 323 and acoustic opening 321 may function as a Helmholtz resonator, thereby enabling the first speaker 132 to generate low-frequency sounds effectively and with less power. At the same time the first speaker 132 is generating low-frequency sounds, the second speaker 134 may be configured to generate synchronized, high-frequency sound that is directed away from the first audio device via the opening 354 (e.g., in a direction indicated by dotted line 612).
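The Helmholtz resonance mentioned above has a well-known resonant frequency, f = (c / 2π) · √(A / (V · L)), where A is the opening (neck) area, V the cavity volume, L the neck length, and c the speed of sound. The sketch below computes this frequency; the dimensions used are assumed example values, not dimensions of the acoustic chamber 323.

```python
import math

def helmholtz_frequency(neck_area_m2: float,
                        cavity_volume_m3: float,
                        neck_length_m: float,
                        speed_of_sound: float = 343.0) -> float:
    """Resonant frequency (Hz) of a Helmholtz resonator:
    f = (c / 2*pi) * sqrt(A / (V * L))."""
    return (speed_of_sound / (2 * math.pi)) * math.sqrt(
        neck_area_m2 / (cavity_volume_m3 * neck_length_m))
```

As the formula shows, enlarging the cavity volume or lengthening the neck lowers the resonant frequency, which is consistent with using the chamber to reinforce low-frequency output.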
In some embodiments (e.g., as illustrated in
In some embodiments, the first audio device 102a may be configured as described above (e.g., with reference to
The processing unit 126a may read from and write to the memory 124a. In some embodiments, the memory 124a may include RAM, ROM, and/or other persistent, auxiliary or non-transitory computer-readable media. The memory 124a may store an operating system 164a that provides computer program instructions for use by the processing unit 126a in the general administration and operation of the second audio device 102b. In some embodiments, the memory 124a may contain digital representations of audio data 162a or electronic audio signals (e.g., digital copies of songs or videos with audio). In such embodiments, the processing unit 126a may obtain the audio data 162a or electronic audio signals from the memory 124a and may provide electronic audio signals to the first speaker 132a and/or the second speaker 134a for playout as sound.
In some embodiments, the memory 124a may further include computer program instructions and other information for implementing aspects of the present disclosure. For example, in some embodiments, the memory 124a may include a speaker control service 166a, which may be executed by the processing unit 126a to perform various operations. In some embodiments, the speaker control service 166a may implement various aspects of the present disclosure, for example, by utilizing sensor, input, or other information to determine whether to configure the first speaker 132a to operate as a group-listening speaker or as a personal-listening speaker and to determine whether to configure the second speaker 134a to operate as a group-listening speaker or to cause the second speaker 134a to become inactive or enter a low-power state. The processes that the speaker control service 166a utilizes to enable personal-listening mode or group-listening mode selectively are further described with reference to
In some embodiments, the input/output interface of the second audio device 102b may also receive input from an input device in communication with the second audio device 102b, such as a keyboard, mouse, digital pen, microphone, touch screen, gesture recognition system, voice recognition system, image recognition through an imaging device (which may capture eye, hand, head, body tracking data and/or placement), gamepad, accelerometer, gyroscope, or another input device known in the art. In some embodiments, the at least one microphone of the second audio device 102b may be configured to receive sound from an analog sound source.
In some embodiments, the one or more sensors 150 of the first audio device 102a and the one or more sensors 150a of the second audio device 102b may include one or more sensors that may detect when the first audio device 102a is coupled to the second audio device 102b. By way of a non-limiting example, the sensors 150, 150a may include proximity sensors, Hall effect sensors paired with magnetic elements on the other audio device, or the like. In some embodiments, the speaker control service 166 may cause the first audio device 102a to operate in (or may enable) a personal-listening mode in response to determining that the first audio device 102a is not coupled to the second audio device 102b. The speaker control service 166 may cause the first audio device 102a to operate (or may otherwise enable) a group-listening mode in response to determining that the first audio device 102a is coupled to the second audio device 102b, for example, by determining that one or more of the sensors 150 (e.g., a Hall-effect sensor) has detected a magnetic field generated by a component of the second audio device 102b.
In some embodiments, the one or more sensors 150 of the first audio device 102a and the one or more sensors 150a of the second audio device 102b may include one or more sensors that may detect when the first audio device 102a and/or the second audio device 102b are in the closed configuration, in the open configuration or in a partially closed configuration (such as when the audio devices 102a, 102b are secured to a user's ear). By way of a non-limiting example, the sensors 150, 150a may include proximity sensors or Hall effect sensors to sense a configuration of the audio devices 102a, 102b. In some embodiments, the speaker control service 166 may cause the first audio device 102a to operate in (or may enable) a personal-listening mode in response to determining that the first audio device 102a is in the partially closed configuration associated with the first audio device 102a being secured to a user's ear. The speaker control service 166 may cause the first audio device 102a to operate (or may otherwise enable) a group-listening mode in response to determining that the first audio device 102a is in the completely closed configuration.
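The mode-selection logic described in the two paragraphs above can be summarized as a small decision function: coupling to the other audio device (or a fully closed configuration) implies group listening, while a partially closed configuration (secured to an ear) implies personal listening. This is a minimal sketch under those assumptions; the parameter names and mode strings are illustrative, not identifiers from any actual implementation.

```python
# Illustrative sketch of the speaker control service's mode selection
# from sensed state (e.g., Hall-effect/proximity sensor readings).

def select_mode(coupled_to_other_device: bool, configuration: str) -> str:
    """Pick a listening mode from sensed device state.

    configuration: "open", "partially_closed" (secured to a user's ear),
    or "closed".
    """
    if coupled_to_other_device or configuration == "closed":
        return "group-listening"
    if configuration == "partially_closed":
        return "personal-listening"
    return "idle"
```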
With reference to
The audio devices 102a, 102b may be configured to be coupleable together. In some embodiments, the audio devices 102a, 102b may be configured to include one or more coupling devices in their respective hooking bodies 302, 802. Specifically, in the example illustrated in
In some embodiments, the audio devices 102a, 102b may be in electronic communication with each other (e.g., via a wireless communication signal, such as Bluetooth or near-field magnetic induction). In such embodiments, respective processing units (not shown) of the audio devices 102a, 102b may coordinate to synchronize the sound played from at least one of the speakers 132, 134 with the sound played from at least one of the speakers 132a, 134a. In some embodiments, the respective processing units of the audio devices 102a, 102b may communicate a state of their respective audio devices 102a, 102b that may enable those processing units to cause their respective audio devices 102a, 102b to operate in the same state. For example, a processing unit of the first audio device 102a may notify a processing unit of the second audio device 102b that the first audio device 102a has begun operating in a group-listening mode, and the processing unit of the second audio device 102b may then cause the second audio device 102b to begin operating in a group-listening mode. The processing units of the audio devices 102a, 102b may similarly coordinate with respect to operating in a personal-listening mode. As a result, the playout of the speakers 132, 134 of the first audio device 102a may be synchronized or at least coordinated with playout of the speakers 132a, 134a of the second audio device 102b.
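The state-mirroring behavior described above can be sketched as a simple peer-notification pattern: when one device changes mode, it notifies its peer, which adopts the same mode. The classes and method names below are illustrative stand-ins, not elements of any actual device software.

```python
# Illustrative sketch of two paired audio devices keeping their
# listening modes in the same state via peer notification.

class AudioDevice:
    def __init__(self, name: str):
        self.name = name
        self.mode = "personal-listening"
        self.peer = None

    def pair(self, other: "AudioDevice") -> None:
        """Establish two-way communication between the devices."""
        self.peer, other.peer = other, self

    def set_mode(self, mode: str, notify_peer: bool = True) -> None:
        """Change this device's mode and mirror it onto the peer."""
        self.mode = mode
        # Notify the peer once so both devices operate in the same state
        # (notify_peer=False on the peer's side prevents infinite recursion).
        if notify_peer and self.peer is not None:
            self.peer.set_mode(mode, notify_peer=False)
```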
In some embodiments, the first audio device 102a and the second audio device 102b may, respectively, include sensors 150, 150a. Each of the sensors 150, 150a may be configured to detect the presence of the other sensor or another element. The sensors 150, 150a may be in communication with a processing unit on their respective audio devices 102a, 102b. In some embodiments, when the sensors 150, 150a detect each other (or another element in the other audio device), the sensors 150, 150a may send a signal indicating that the audio devices 102a, 102b are coupled together. In response, the processing units may selectively change the behavior of features or components on their respective audio devices 102a, 102b, such as the speakers 132, 132a. For example, the speaker systems 132, 132a may be playing out sound as full-range, personal-listening speakers while the audio devices 102a, 102b are not coupled together (e.g., when the sensors 150, 150a do not detect the presence of each other), but the processing units may cause the speaker systems 132, 132a to operate as low-range, group-listening speakers when the audio devices 102a, 102b are coupled together (e.g., when the sensors 150, 150a do detect the presence of each other) and, optionally, in response to receiving an input (e.g., from the mobile device 106 in communication with at least one of the audio devices 102a, 102b). In some embodiments, the processing units may selectively activate features or components on their respective audio devices 102a, 102b when the sensors 150, 150a do not detect the presence of each other. By way of a non-limiting example, the audio devices 102a, 102b may be in a low-power or “standby” state while they are coupled to each other, but upon decoupling, the processing units may activate or resume operations, activities, functions, features, etc.
For example, in response to determining that the sensors 150, 150a no longer detect each other, the processing units may resume communications with each other (and/or another electronic device) and may resume playing out sound via the speaker system 132 in the first audio device 102a and a similarly situated speaker system 132a in the second audio device 102b.
In some embodiments, at least one of the audio devices 102a, 102b may communicate information indicating whether the audio devices 102a, 102b are coupled together to a computing device in communication with at least one of the audio devices 102a, 102b (e.g., the mobile computing device 106). The mobile computing device 106 may use the information to enable a group-listening option presented to a user (e.g., a user input element, such as a virtual button, toggle, or the like). In response to receiving a user input selecting the group-listening option, the mobile computing device 106 may send a signal to at least one of the audio devices 102a, 102b that may cause the audio devices 102a, 102b to begin operating in a group listening mode. Similarly, the mobile computing device 106 may send a signal to at least one of the audio devices 102a, 102b (e.g., in response to receiving a user input selecting a personal-listening mode) that may cause the audio devices 102a, 102b to begin operating in a personal-listening mode.
In some embodiments (e.g., as illustrated in
In some embodiments, sound that is played out from the first speaker 132 of the first audio device 102a and the first speaker 132a of the second audio device 102b may enter the combined acoustic chamber 840 and may mix and/or combine in the combined acoustic chamber 840. The audio played out from the first speaker 132 and the first speaker 132a may be configured to have a power, volume, or gain having a first value. The sound from each of the first speakers 132, 132a may mix in the combined acoustic chamber 840 and may be passively amplified through audio signal addition, constructive interference, and/or sound reinforcement. The resulting sound may have a power, volume, or gain having a second value greater than the first value. In some embodiments, the first speaker 132 and the first speaker 132a may be configured such that first audio played from the first speaker 132 is in phase with second audio played from the first speaker 132a. As a result, the first audio may combine with the second audio via constructive interference to produce a resulting audio having a higher amplitude/volume than the first audio or the second audio individually. In some embodiments, the speakers 132, 132a may be configured to operate as low-range, group-listening speakers, and low-frequency sounds generated by the speakers 132, 132a may be amplified as described above in the combined acoustic chamber 840.
As described, the first speaker 132 of the first audio device 102a and the first speaker 132a of the second audio device 102b may be configured to play out audio into the combined acoustic chamber 840. In some embodiments, the first speaker 132 and the first speaker 132a may be respectively oriented within the first audio device 102a and the second audio device 102b such that audio 950 that is played out from the first speaker 132 and audio 952 that is played out from the first speaker 132a are both directed along a second direction that intersects the first direction within the combined acoustic chamber 840. For example, the audio 950, 952 may be in phase with each other and may combine in the combined acoustic chamber 840 via a process of constructive interference. Accordingly, the combined audio may have a volume, gain, and/or energy that is greater than the same for either of the audio 950 or 952 individually. By way of another example, the audio 950 and 952 may be separate audio portions of the same audio output (e.g., separate monophonic sounds, such as a left channel and a right channel). In this example, the audio 950 and 952 may blend within the combined acoustic chamber 840 such that the combined audio formed from the audio 950 and 952 includes a more complete audio output (e.g., stereophonic sound).
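The constructive-interference effect described above follows from the standard superposition formula for two sinusoids: the combined amplitude is √(a₁² + a₂² + 2·a₁·a₂·cos Δφ), so two in-phase, equal-amplitude sources (Δφ = 0) sum to double the amplitude, while a half-cycle offset (Δφ = π) cancels them. A small sketch of this relationship:

```python
import math

def combined_amplitude(a1: float, a2: float, phase_diff_rad: float) -> float:
    """Amplitude of the superposition of two sinusoids of amplitudes
    a1 and a2 with phase difference phase_diff_rad:
    sqrt(a1^2 + a2^2 + 2*a1*a2*cos(dphi))."""
    return math.sqrt(a1**2 + a2**2 + 2 * a1 * a2 * math.cos(phase_diff_rad))
```

This is why the disclosure emphasizes keeping the audio 950 and 952 in phase: any phase offset between the two speakers reduces the passive amplification obtained in the combined acoustic chamber 840.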
With reference to
In some embodiments, the speakers 132, 132a may output sound into the combined acoustic chamber 840. The combined acoustic chamber 840 may function as a front volume for the speakers 132, 132a, enabling the speakers 132, 132a to use the air in the combined acoustic chamber 840 to generate sound relatively efficiently. When the audio devices 102a, 102b are operating in a group-listening mode such that the speakers 132, 132a are configured to operate as low-range, group-listening speakers, low-frequency sounds generated from the speakers 132, 132a may be directed into the combined acoustic chamber 840, and the sound may exit into the ambient air via the combined acoustic opening 842 (e.g., in a direction indicated by dotted line 918). In some embodiments, the combined acoustic chamber 840, the combined acoustic opening 842, and (in some embodiments, e.g., as illustrated in
At the same time the speakers 132, 132a are generating low-frequency sounds, the speakers 134, 134a may be configured to generate synchronized, high-frequency sound that is directed away from the first audio device 102a and the second audio device 102b, respectively, via the opening 354 (e.g., in a direction indicated by dotted line 912a) and via the opening 854 (e.g., in a direction indicated by dotted line 912b).
In some embodiments (e.g., as illustrated in
The speaker control service 166 may begin performing the operations of the method 1000 by causing the first speaker 132 of the first audio device 102a to transition to an active state, in block 1002. In some embodiments of the operations performed in block 1002, the speaker control service 166 may cause the first speaker 132 to transition to a high-power state, for example, from a low-power or standby state.
In determination block 1004, the speaker control service 166 may determine whether to configure the first audio device to operate in a personal-listening mode. In some embodiments, the speaker control service 166 may determine whether a user input has been received that indicates a user's desire to activate the personal-listening mode (e.g., by receiving a command signal from a computing device in communication with the first audio device 102a as a result of the user's selection of a personal-listening mode option on the computing device). In some embodiments, the speaker control service 166 may determine to configure the first audio device 102a to operate in a personal-listening mode in response to determining that the first audio device 102a is decoupled from the second audio device 102b (e.g., as determined based on sensor readings from the sensors 150 indicating that the devices 102a, 102b are decoupled).
In response to determining to configure the first audio device 102a to operate in a personal-listening mode (i.e., determination block 1004=“YES”), the speaker control service 166 may cause the first speaker to operate as a full-range, personal-listening speaker, in block 1006. In some embodiments, the speaker control service 166 may perform the operations of block 1006 by causing one or more processing units on the first audio device 102a to send full-range audio signals to the first speaker 132 to output as full-range sound.
In some embodiments in which the first audio device 102a is in communication with the second audio device 102b and plays out audio in conjunction with the second audio device 102b (e.g., synchronized audio output), the speaker control service 166 may also cause the first speaker 132a of the second audio device 102b to operate as a full-range, personal-listening speaker, for example, by sending a signal to the speaker control service 166a operating on the second audio device 102b.
In optional block 1008, the speaker control service 166 may cause the second speaker 134 to transition to an inactive state, for example, in the event that the second speaker 134 was in an active state. Specifically, the speaker control service 166 may perform the operations in optional block 1008 in order to reduce the amount of power used by the second speaker 134, thereby prolonging battery life in some embodiments in which the first audio device 102a is battery powered.
In response to determining not to configure the first audio device 102a in a personal-listening mode (i.e., determination block 1004=“NO”) or after causing the first speaker to operate as a full-range, personal-listening speaker in block 1006, the speaker control service 166 may determine whether to configure the first audio device 102a to operate in a group-listening mode, in determination block 1010. In some embodiments, the speaker control service 166 may determine to configure the first audio device 102a to operate in a group-listening mode in response to receiving a signal from a computing device in connection with the first audio device 102a indicating that a user has selected an option enabling the group-listening mode. In some embodiments, the speaker control service 166 may determine to configure the first audio device 102a to operate in a group-listening mode in response to determining that the first audio device 102a is coupled to the second audio device 102b. In some embodiments, the speaker control service 166 may determine to configure the first audio device 102a to operate in a group-listening mode in response to determining that the first audio device 102a is coupled to the second audio device 102b and determining that a user selection of a group-listening mode has been made (e.g., via selection of a graphical user element—such as a virtual button—on a computing device in communication with the first audio device 102a).
In response to determining to configure the first audio device to operate in a group-listening mode (i.e., determination block 1010=“YES”), the speaker control service 166 may cause the first speaker 132 of the first audio device 102a to operate as a low-range, group-listening speaker. In some embodiments, the speaker control service 166 may cause one or more processing units to provide audio signals to the first speaker 132 that include audio frequencies in a low range (e.g., bass frequencies). In some embodiments, the speaker control service 166 may cause the first speaker 132a of the second audio device 102b to operate as a low-range, group-listening speaker by sending a signal to the speaker control service 166a of the second audio device 102b indicating as much.
In optional block 1014, the speaker control service 166 may cause the second speaker 134 of the first audio device 102a to transition to an active state, for example, in the event the second speaker 134 was operating in a lower-power or inactive state (e.g., as a result of the speaker control service 166's performing the operations of block 1008). In block 1016, the speaker control service 166 may cause the second speaker 134 to operate as a high-range, group-listening speaker. In some embodiments, the speaker control service 166 may cause one or more processing units to provide audio signals to the second speaker 134 that include high-range sound frequencies. The speaker control service 166 may cause the speaker control service 166a of the second audio device 102b to cause the second speaker 134a to operate as a high-range, group-listening speaker by sending the speaker control service 166a a signal indicating the same.
In response to determining not to configure the first audio device to operate in a group-listening mode (i.e., determination block 1010=“NO”) or after causing the second speaker to operate as a high-range, group-listening speaker in block 1016, the speaker control service 166 may determine whether to configure the first audio device to operate in a low-power mode, in determination block 1018. For example, the speaker control service 166 may determine whether a user input or user inactivity (as determined by a timer) indicates that the first audio device 102a should be put in a low-power or standby mode.
In response to determining not to configure the first audio device 102a in a low-power mode (i.e., determination block 1018=“NO”), the speaker control service 166 may repeat the operations performed above in a loop starting in determination block 1004. In response to determining to configure the first audio device 102a to operate in a low-power mode (i.e., determination block 1018=“YES”), the speaker control service 166 may cause the first speaker 132 and the second speaker 134 to transition from an active state to an inactive or standby state in block 1020. The speaker control service 166 may then cease performing the operations of the method 1000.
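The decision flow of determination blocks 1004, 1010, and 1018 and the resulting speaker configurations can be summarized as a simple state machine. The sketch below is illustrative only; the role names, event flags, and the `run_mode_loop` function are hypothetical stand-ins for the determination blocks and are not part of the disclosed implementation:

```python
from enum import Enum, auto

class SpeakerRole(Enum):
    """Illustrative roles a speaker may be assigned in the various modes."""
    FULL_RANGE_PERSONAL = auto()
    LOW_RANGE_GROUP = auto()
    HIGH_RANGE_GROUP = auto()
    INACTIVE = auto()

def run_mode_loop(events):
    """Walk the decision blocks of the method for a sequence of events.

    Each event is a dict of flags standing in for the outcomes of
    determination blocks 1004, 1010, and 1018. Returns the final
    (first speaker, second speaker) role assignment.
    """
    speaker1 = speaker2 = SpeakerRole.INACTIVE
    for event in events:
        if event.get("personal_mode"):       # determination block 1004 = "YES"
            speaker1 = SpeakerRole.FULL_RANGE_PERSONAL  # block 1006
            speaker2 = SpeakerRole.INACTIVE             # optional block 1008
        if event.get("group_mode"):          # determination block 1010 = "YES"
            speaker1 = SpeakerRole.LOW_RANGE_GROUP      # bass frequencies
            speaker2 = SpeakerRole.HIGH_RANGE_GROUP     # block 1016
        if event.get("low_power"):           # determination block 1018 = "YES"
            speaker1 = speaker2 = SpeakerRole.INACTIVE  # block 1020
            break                            # cease performing the method
    return speaker1, speaker2
```

In this sketch, the loop back to determination block 1004 corresponds to consuming the next event, and the `break` corresponds to ceasing performance of the method after block 1020.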
While the operations of the method 1000 are described above as being performed by the first audio device 102a (e.g., by the speaker control service 166 operating on the first audio device 102a), in some embodiments, the second audio device 102b, the base device 103, and/or another computing device in communication with the first audio device 102a and the second audio device 102b (e.g., the mobile computing device 106 as described with reference to
In the above descriptions, audio devices are referred to as a “first” audio device and as a “second” audio device. Such references are merely for ease of reference and do not limit an audio device to being solely a “first” audio device or a “second” audio device. Similarly, in some embodiments, speakers are referred to as a “first” speaker and as a “second” speaker. Such references are merely for ease of reference and do not limit a speaker device to being solely a “first” speaker or a “second” speaker.
Although the terms group-listening speaker and group-listening mode are used herein, it is to be understood that such a group-listening speaker or group-listening mode is not necessarily limited to sound output functionality (or listening by a user). Rather, it is appreciated that “group-listening speaker” and “group-listening mode” may encompass use of the audio devices (and speakers thereof) described herein as a two-way speakerphone with a suitable microphone for receiving sound from a user or group of users. Accordingly, a group-listening speaker and a group-listening mode may also be considered and/or referred to as a group-communication speaker and a group-communication mode, respectively.
The audio systems and methods described herein may also utilize various electronic filter circuitry to minimize distortion and reduce power consumption. For example, in some implementations, the audio systems and methods may utilize a crossover filter in combination with a notch filter that is precisely matched to the resonance of one of the speakers (e.g., the second speaker, which may be a micro-speaker configured to efficiently generate high-range frequencies), as illustrated, for example, in
It is to be understood that not necessarily all objects or advantages may be achieved in accordance with any particular embodiment described herein. Thus, for example, those skilled in the art will recognize that certain embodiments may be configured to operate in a manner that achieves or optimizes one advantage or group of advantages as taught herein without necessarily achieving other objects or advantages as may be taught or suggested herein.
Conditional language such as, among others, “can,” “could,” “might” or “may,” unless specifically stated otherwise, is otherwise understood within the context as used in general to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.
Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is otherwise understood within the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
Unless otherwise explicitly stated, articles such as “a” or “an” should generally be interpreted to include one or more described items. Accordingly, phrases such as “a device configured to” are intended to include one or more recited devices. Such one or more recited devices can also be collectively configured to carry out the stated recitations. For example, “a processor configured to carry out recitations A, B and C” can include a first processor configured to carry out recitation A working in conjunction with a second processor configured to carry out recitations B and C.
It should be emphasized that many variations and modifications may be made to the above-described embodiments, the elements of which are to be understood as being among other acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.
This application claims the benefit of U.S. Provisional Patent Application No. 62/858,035, filed on Jun. 6, 2019, which application is incorporated herein by reference in its entirety.