The subject matter disclosed herein relates to personal electronic multi-media devices. More particularly, the subject matter disclosed herein relates to the addition of a physical and technological ‘layer’ to the design of a laptop-type computer, netbook computer, ultrabook computer, or tablet-like computer (hereafter, each being referred to as a “laptop-type computer” for descriptive convenience) that provides enhanced audio output. This added layer will hereafter be referred to as an “acoustic layer.”
As personal electronic devices become smaller and provide more multi-media entertainment features and capabilities, one of the disadvantages that accompanies the trend toward smaller size is that the audio speakers contained in such a compact laptop-type computer also tend to be smaller, thereby providing a less-than-satisfactory audio experience. Also, inadequate attention has been paid to the design of an intentional audio space as part of the product's audio output.
The subject matter disclosed herein is illustrated by way of example and not by limitation in the accompanying figures in which like reference numerals indicate similar elements.
As used throughout this application, the word “exemplary” means “serving as an example, instance, or illustration.” Any embodiment described herein as “exemplary” is not to be construed as necessarily preferred or advantageous over other embodiments. Additionally, for simplicity and/or clarity of illustration, elements illustrated in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for illustrative clarity. Further, in some figures only one or two of a plurality of similar elements are indicated by reference characters for illustrative clarity of the figure, whereas not all of the similar elements may be indicated by reference characters. Further still, it should be understood that although some portions of components and/or elements of the subject matter disclosed herein have been omitted from the figures for illustrative clarity, good engineering, construction, and assembly practices are intended.
The terms “pad,” “electronic pad-type device,” “pad-type device,” “tablet,” “tablet-type device,” “multi-media computing device,” “smartphone,” “smartphone-type device,” “personal multi-media electronic tablet,” “personal multi-media electronic device,” and “electronic pad device” are intended to be interchangeable terms throughout this application, and are intended to refer to similar type devices. Exemplary pad-type devices include, but are not limited to, pad-type computing devices (e.g., those sold under the APPLE Corporation trademark ‘IPAD,’ etc.), mobile phone devices (e.g., those sold under the APPLE Corporation trademark ‘IPHONE,’ etc.), a media player, a handheld-computing device, or a handheld multimedia device, numerous variations of any of which device types are available from alternate manufacturers, and in various sizes, as an ordinarily skilled artisan will readily recognize.
The acoustic layer device 100 provides robust stereo audio output with enhanced bass for a pad-type device, while also providing a protective cover for the pad-type device. In particular, the acoustic layer device 100 comprises a case or housing 101 that is adapted to receive a pad-type device (not shown) in a recessed-well region 102 that is formed on the top side of acoustic layer device 100, and best shown in
Exemplary case 101 encloses an audio processing device, such as an audio amplifier with functional controls, two audio transducers (i.e., speakers), an audio-enhancement acoustic waveguide structure, and a power source. The audio processing device drives the audio transducers in a well-known manner to generate an audio output that is projected from the front side of the audio transducers and through apertures 103a, 103b. According to the subject matter disclosed herein, the audio output that is generated from the back side of each transducer is channeled through an acoustic waveguide structure, as shown in
In an exemplary embodiment, case 101 is formed by a top cover 106 and a bottom cover 107. Top cover 106 is releasably hinged to bottom cover 107 along an axis 108 so that top cover 106 and bottom cover 107 open and close in a clam-shell manner along axis 108, thereby making the internal components of the acoustic layer accessible. The hinging (not shown) is releasable so that top cover 106 can be conveniently separated from bottom cover 107. In another exemplary embodiment, top cover 106 comprises an integral protective screen cover (not shown) that protects a pad-type device when the pad-type device is received into recessed-well region 102. In one exemplary embodiment, the protective screen cover provides a see-through window that permits the display of the pad-type device to be seen and provides openings through which the audio output from the acoustic layer device can pass. In one exemplary embodiment, the protective screen cover provides an opaque cover to the pad-type device and/or openings through which the audio output from the acoustic layer device can pass. In another exemplary embodiment, the integral protective screen cover is hinged at or near axis 108 and can be rotated from a closed position and positioned at a selected angle with respect to the bottom of the acoustic layer device, thereby permitting a user to view the pad-type device at a selected angle.
In an alternative exemplary embodiment, the integral protective screen cover is hinged at or near front edge 115.
In one exemplary embodiment, acoustic layer device 100 includes a camera lens piece 113 that provides a lens function for a camera contained in a pad-type device. In another exemplary embodiment of acoustic layer device 100, camera lens piece 113 also provides a release mechanism to mechanically release a pad-type device from the acoustic layer device. For the lens function, camera lens piece 113 comprises a lens that allows light to pass from the bottom of the acoustic layer device to the lens of a camera of a pad-type device. For the release mechanism, lens piece 113 can be depressed from the bottom side of acoustic layer 100 by a user and a cylindrical member containing the lens moves toward the top of the acoustic layer device, thereby lifting a pad-type device contained in recessed-well region 102 and allowing a user to grip the edges of the pad-type device. It should be understood that the exemplary embodiment of camera lens piece 113 is merely an example and other embodiments are contemplated. In another exemplary embodiment, the camera lens piece 113 can be replaced by an aperture that provides a viewing port for the lens of a camera of a pad-type device.
Audio processing device 120 is coupled to and drives audio transducers 130a, 130b in a well-known manner to generate an audio output that is projected from the front side of transducers 130a, 130b, and out through apertures 103a, 103b. The audio output that is generated from the back side of each transducer 130a, 130b is contained by the acoustic waveguide structure 140 and channeled through aperture 104.
Power source 160 is coupled to and provides power to audio processing device 120 in a well-known manner. In one exemplary embodiment, audio processing device 120 is coupled to an audio transducer, such as audio speakers 181 and/or headphones 182, through a wireless adapter 180 that provides an optical and/or a radio frequency (RF) link 183, such as, but not limited to, a Bluetooth-type link and/or a WiFi-type link, to audio speakers 181 and/or headphones 182. In another exemplary embodiment, the link between wireless adapter 180 and audio speakers 181 and/or headphones 182 is a bi-directional link. In still another exemplary embodiment, the link between wireless adapter 180 and headphones 182 is an output-directive link in which the output from the acoustic layer device is directed to headphones 182. In yet another exemplary embodiment, wireless adapter 180 provides a bi-directional wireless link between acoustic layer device 100 and an external device, such as but not limited to a data source and/or an Internet connection. It should also be understood that the spaces for the various functional components depicted in
In one exemplary embodiment, acoustic waveguide structure 140 comprises walls 141 that are configured to form chambers 142a, 142b, waveguides 143a, 143b, an acoustic waveguide mixing region 144, and an acoustic output channel 145, which is fluidly coupled to bass output aperture 104. Chambers 142a, 142b are configured so that a length L and a width W of each chamber enhance the bass response of the audio transducers. In one exemplary embodiment, walls 141 are joined to bottom cover 107 so that there is a smooth radius of curvature where each wall 141 joins bottom cover 107 in order to minimize air turbulence and provide optimum and efficient audio enhancement. Acoustic waveguide mixing region 144 is configured to couple the respective audio signals from chambers 142a, 142b.
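By way of illustration only, and assuming (without limitation) that waveguides 143a, 143b behave approximately as quarter-wave transmission lines, the relationship between an effective waveguide length and the bass frequency it reinforces can be sketched as follows; the quarter-wave assumption and the example lengths are illustrative and are not taken from the disclosure.

```python
# Hypothetical illustration: if a waveguide behaves approximately as a
# quarter-wave transmission line, its effective acoustic length sets the
# fundamental frequency that reinforces the transducers' bass output.

SPEED_OF_SOUND_M_S = 343.0  # speed of sound in air at roughly 20 degrees C

def quarter_wave_tuning_hz(effective_length_m: float) -> float:
    """Fundamental resonance of a quarter-wave waveguide of the given length."""
    return SPEED_OF_SOUND_M_S / (4.0 * effective_length_m)

def length_for_tuning_m(target_hz: float) -> float:
    """Effective waveguide length needed to reinforce a target bass frequency."""
    return SPEED_OF_SOUND_M_S / (4.0 * target_hz)

# Example: a folded waveguide with ~0.85 m effective path length
print(round(quarter_wave_tuning_hz(0.85), 1))   # ~100.9 Hz
print(round(length_for_tuning_m(80.0), 3))      # ~1.072 m for ~80 Hz tuning
```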
It should be understood that the exemplary configuration of acoustic waveguide structure 140 and the arrangement of audio processing device 120, transducers 130a, 130b, and power source 160 depicted in
In one exemplary embodiment, the acoustic layer device according to the subject matter disclosed herein comprises a microphone 121 that detects audio signals that are then processed by, for example, audio processing device 120. In another exemplary embodiment, the acoustic layer device according to the subject matter disclosed herein comprises at least two microphones 121 configured in a spatial-diversity microphone arrangement that pass their respective signals through optional amplifiers (not shown) and then to digitizers that are part of, for example, audio processing device 120. The digitized microphone signals are then digitally processed by, for example, a digital signal processor (DSP), to determine and extract speaker-positional information and/or room acoustical details, such as but not limited to room reverberation, room echo, room noise, room acoustical delay, and room frequency response, thereby providing a directive sound enhancement and a focusable directive sound-capture ability.
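As a hedged illustration of how speaker-positional information might be extracted from two spatially diverse microphones, the following sketch estimates a time difference of arrival by cross-correlating the two digitized microphone signals and converts the delay into an angle of arrival; the microphone spacing, sample rate, and far-field model are illustrative assumptions and are not taken from the disclosure.

```python
# A minimal sketch (not the disclosed implementation) of estimating the
# direction of a talker from a two-microphone spatial-diversity arrangement.
import numpy as np

SPEED_OF_SOUND_M_S = 343.0

def estimate_angle_of_arrival(mic_a: np.ndarray,
                              mic_b: np.ndarray,
                              sample_rate_hz: float,
                              mic_spacing_m: float) -> float:
    """Return the estimated angle of arrival in degrees (0 = broadside)."""
    # Full cross-correlation; the peak location gives the inter-microphone delay.
    corr = np.correlate(mic_a, mic_b, mode="full")
    lag_samples = np.argmax(corr) - (len(mic_b) - 1)
    delay_s = lag_samples / sample_rate_hz
    # Far-field model: delay = spacing * sin(angle) / c
    sin_angle = np.clip(delay_s * SPEED_OF_SOUND_M_S / mic_spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_angle)))
```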
Additionally, the extracted audio information can be used to enhance the intelligibility of an intentionally generated audio signal in a room, such as when the acoustic layer device is being used as a speaker phone. That is, the acoustic layer device can be configured to provide enhanced speakerphone capability by providing room de-reverberation, noise cancelling, equalization, and other possible features, such as but not limited to speaker identification or speaker positional information. In one exemplary embodiment, the acoustic layer device may also provide voice-recognition capabilities, thereby allowing transcription and/or voice-activated control of the functional aspects of the acoustic layer device, such as but not limited to volume, equalization, muting, or any aspect of the performance of the hardware, firmware, or an application running on the personal multi-media electronic device. Generally, digital signal processing can be added to further voice the acoustic layer output sound to change the equalization, spatialization (for example, stereo separation), phase linearization, or other acoustic properties of the delivered sound experience.
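As one hedged example of such voicing, the following sketch adjusts spatialization (stereo separation) with simple mid/side processing; the width parameter is an illustrative assumption, as the disclosure does not specify a particular voicing algorithm.

```python
# One possible "voicing" step: scale the side (L-R) component to widen (>1)
# or narrow (<1) the stereo image. Illustrative only.
import numpy as np

def adjust_stereo_separation(left: np.ndarray,
                             right: np.ndarray,
                             width: float = 1.5) -> tuple[np.ndarray, np.ndarray]:
    """Return a (left, right) pair with modified stereo separation."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right) * width
    return mid + side, mid - side
```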
In one exemplary embodiment, muting effectuated by voice command, referred to herein as “smart-muting,” mutes only the audio signal that is ultimately passed along to a listener at the other end of a conversation, while the acoustic layer device remains capable of listening for and processing subsequent voice commands, such as but not limited to “unmute.”
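A minimal sketch of this smart-muting behavior, assuming a hypothetical voice-command recognizer, is shown below; only the outgoing conversation audio is suppressed, while the command path continues to listen.

```python
# Smart-muting sketch: a "mute" command suppresses the audio sent to the far
# end of a call, but the local command-recognition path keeps running so a
# later "unmute" command is still heard and acted on. The recognizer is a
# hypothetical stand-in for whatever voice-recognition engine is used.
class SmartMute:
    def __init__(self, recognizer):
        self.recognizer = recognizer  # hypothetical: returns "mute", "unmute", or None
        self.muted = False

    def process_frame(self, mic_frame: bytes) -> bytes:
        """Return the audio frame to transmit to the far-end listener."""
        command = self.recognizer(mic_frame)  # the command path is never muted
        if command == "mute":
            self.muted = True
        elif command == "unmute":
            self.muted = False
        # Only the outgoing conversation audio is muted; silence is sent instead.
        return b"\x00" * len(mic_frame) if self.muted else mic_frame
```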
Generally, microphones 121 configured in a spatial-diversity arrangement, in conjunction with DSP, can be used to improve the intelligibility of any intentionally generated user input or environmentally ambient sound that might be used by an application running on the acoustic layer device, the encased personal multi-media electronic device, or combinations thereof. A plurality of microphones configured in a spatial-diversity arrangement can also be used to record sound from the room and/or to calibrate room acoustics, thereby providing information to the DSP that makes it possible to apply specific equalization for enhancing a listening experience, such as but not limited to removing variations in the frequency response of a room, linearizing the phase of the acoustic signal delivered to a listener, and/or removing unwanted sounds, such as ambient and/or background noise. In an exemplary embodiment, the spatial-diversity microphone configuration can be configured to provide a monaural modality.
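As a hedged illustration of room-acoustics calibration, the following sketch measures a per-band magnitude response from a calibration recording and derives limited inverse gains that flatten variations in the room's frequency response; the band edges, gain limits, and use of a Welch spectral estimate are illustrative assumptions rather than the disclosed method.

```python
# Room-correction sketch: estimate per-band room levels from a calibration
# recording and compute clipped inverse gains that flatten the response.
import numpy as np
from scipy.signal import welch

def room_correction_gains(calibration_recording: np.ndarray,
                          sample_rate_hz: float,
                          band_edges_hz=(60, 250, 1000, 4000, 12000),
                          max_boost_db: float = 6.0):
    """Return per-band correction gains (dB) that flatten the measured response."""
    freqs, psd = welch(calibration_recording, fs=sample_rate_hz, nperseg=4096)
    band_levels_db = []
    for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        band = psd[(freqs >= lo) & (freqs < hi)]
        band_levels_db.append(10.0 * np.log10(np.mean(band) + 1e-12))
    target_db = float(np.mean(band_levels_db))  # flatten toward the average level
    return [float(np.clip(target_db - level, -max_boost_db, max_boost_db))
            for level in band_levels_db]
```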
In an exemplary embodiment, a portion of audio processing device 120 provides two-dimensional and/or three-dimensional tactile and/or haptic feedback 122 to a user such as, but not limited to, vibration that could be generated by, for example, one or more piezo-electric devices, electro-static devices, magneto-static devices, and/or a speaker motor, or any other device that creates a physical motion in the case that can be sensed by a user as a vibration, impulse, or jerk. The vibration generated by a tactile/haptic portion 122 of audio processing device 120 could also provide haptic capabilities for any soft button, hard button, control input, or on-screen touch of any sort, or combinations thereof. The vibration can also be used to enhance a user's experience of an application, such as but not limited to a video game, a movie, or audio playback.
Further, vibration can be used to alert a user to any aspect of the operation of the personal multi-media electronic device and/or the acoustic layer device, or even in response to some sound that the microphones have picked up, either with or without DSP being applied. Vibration can also be used as part of an application itself. Examples include, but are not limited to, massage, an alarm clock, a stimulus for some sort of measurement, or a trigger of additional hardware or of the environment.
In an exemplary embodiment, power source 160 (
The battery discharging/charging technique used by the acoustic layer device monitors the current state-of-charge (SOC) of each battery, measures the rate of change of the energy of the batteries over time, and then uses these data to create two discharge curves predicting the end of playback for each device. The technique then charges the battery of the acoustic layer device and/or the battery of the pad-type device so that discharge of the respective batteries occurs at substantially the same time. Once charging has compensated for any initial differences in discharge time so that the predicted playback times are substantially equal, both batteries are charged in the appropriate proportions to maintain equal playback time until both batteries are fully charged. In another exemplary embodiment, the battery discharge/charge functionality is provided by a component other than power source 160, such as, but not limited to, processing device 120.
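A simplified sketch of this balancing technique, assuming a linear discharge model and an illustrative charge-apportioning rule (neither of which is specified by the disclosure), is shown below.

```python
# Sketch only: track each battery's state-of-charge (SOC) over time, fit a
# linear discharge rate, predict when each device would stop playing, and
# report how charging current might be apportioned so the predicted
# end-of-playback times converge.
def predict_runtime_hours(soc_percent: float, soc_history) -> float:
    """Predict remaining playback time from (time_h, soc_percent) samples."""
    (t0, s0), (t1, s1) = soc_history[0], soc_history[-1]
    discharge_rate = (s0 - s1) / (t1 - t0)        # percent per hour
    return float("inf") if discharge_rate <= 0 else soc_percent / discharge_rate

def charge_split(layer_runtime_h: float, pad_runtime_h: float) -> tuple[float, float]:
    """Fraction of charge current for (acoustic layer device, pad-type device)."""
    if layer_runtime_h < pad_runtime_h:
        return 0.75, 0.25   # favor the battery predicted to deplete first
    if pad_runtime_h < layer_runtime_h:
        return 0.25, 0.75
    return 0.5, 0.5         # equal predictions: charge in equal proportion
```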
While the description above pertains to use of the conceived acoustic layer device 100 with a pad-type device, the embodiments likewise include acoustic layer device embodiments configured and beneficially employed for enhancing the audio output performance of other devices, such as but not limited to laptop-type computing devices.
Instead of using a hinge for attachment, the design shown in
An example of a structure similar to 800 exists when a tablet device (such as an APPLE IPAD) is used in conjunction with a Bluetooth keyboard/case. In that example, most of the computer's electronic components are located in the display layer rather than the keyboard layer.
For each of the depicted existing laptop computer configurations 600, 700, and 800, the emphasis on compact size has led to a computer design that places dramatic restrictions on the quality of any acoustic performance that the computer will attempt to produce, because no intentional layer is included to adequately reproduce the sounds that the laptop may create while a user is enjoying music, audio books, movies, video games, and other applications with audio content.
The shape of the acoustic layer 901 shown in
The performance improvements that the inclusion of an intentional acoustic layer brings to the various multi-media functions of a laptop computer are many. Such improvements include but are not limited to much-higher audio power output, waveguide acoustic design to greatly enhance the bass response, advanced DSP functions such as equalization, increased LEFT/RIGHT channel separation, bass-enhancement algorithms, dynamic range algorithms (such as compression), and advanced support for speakerphone operation including such capabilities as spatial rendering of the physical location of various speakers in the room and de-reverberation of room acoustics. Some of these capabilities may be greatly improved through the inclusion of two microphones in the design.
While the features of the acoustic layer are described as including speaker drivers, power supplies, audio amplifiers, DSP, microphones, back-wave speaker ports, front-wave speaker ports, an acoustic waveguide structure, and various interconnect, it is not necessary that all of these constituents be physically located inside the confines of that acoustic layer. Some of these components may be integrated into other layers (e.g., the keyboard layer or the display layer) because it may be more economical to do so, or because doing so may improve performance in some aspect. What is important is that the addition of these acoustic-layer features to a conventional laptop computer is a major improvement to the laptop computer. It is possible to create a laptop-type device that includes an acoustic layer but is missing the hinge structure and/or one of the other layers (e.g., the keyboard layer or the display layer). In such a case, the non-apparent layer is likely integrated into one of the other layers. An exemplary embodiment is a tablet computer that integrates the keyboard and display layers into a single integrated layer. It is possible to add an acoustic layer, as described in this disclosure, to such an integrated structure, or to one without a hinge.
According to the subject matter disclosed herein, the audio output generated from the back side of each transducer is channeled through an acoustic waveguide structure adapted to enhance the bass response of the audio transducers. The output of the acoustic waveguide structure is through a bass output aperture 905. The acoustic waveguide structure provides a richer, fuller-sounding audio output in comparison to the audio output from only the front side of the audio transducers.
The internal structure and components of the acoustic layer 904 (
For the purpose of this disclosure, surface 102 in
Acoustic port 104 is shown on the top surface of recessed-well region 102, such as in an embodiment in which the acoustic layer is the topmost surface of the laptop-type device. If the acoustic layer is an inner layer, acoustic port 104 would more likely exit through one of the side surfaces, such as the front (depicted as 905 in
The process of monitoring the discharge levels of the batteries starts at 1101 of
If a difference in discharge levels is determined, flow continues to 1104 where power source 160 selects the battery having the higher charge level to power both the acoustic layer device and the pad-type device, thereby balancing the discharge levels of the batteries so that the battery operating times for the acoustic layer device and the pad-type device are substantially equal. Flow then continues from 1104 back to 1102. If, at 1103, no difference in discharge level is detected, flow continues to 1105 where it is determined whether the batteries have been depleted. If, at 1105, it is determined that the batteries have not been depleted, flow returns to 1102. If, at 1105, it is determined that the batteries have been depleted, flow continues to 1106 where the acoustic layer device shuts down both the acoustic layer device and the pad-type device.
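The discharge-balancing flow described above may be sketched as follows, using hypothetical battery and power-source interfaces for illustration only.

```python
# Sketch of the discharge flow (1102-1106): monitor both batteries, power both
# devices from whichever battery holds the higher charge, and shut both devices
# down when the batteries are depleted. Interfaces are hypothetical.
import time

def balance_discharge(layer_batt, pad_batt, power_source, poll_s: float = 60.0):
    while True:                                          # 1102: monitor levels
        layer_level, pad_level = layer_batt.level(), pad_batt.level()
        if layer_level != pad_level:                     # 1103: difference detected?
            higher = layer_batt if layer_level > pad_level else pad_batt
            power_source.power_both_from(higher)         # 1104: balance discharge
        elif layer_level <= 0 and pad_level <= 0:        # 1105: batteries depleted?
            power_source.shutdown_both()                 # 1106: shut down both devices
            return
        time.sleep(poll_s)                               # return to 1102
```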
Referring now to
If, at 1203, it is determined that more than a trickle charge is needed to charge the batteries, flow continues to 1206 where power source 160 monitors the charge level of the battery of the acoustic layer device and the battery of the pad-type device. Flow continues to 1207 where it is determined whether there is a difference in charge level between the battery of the acoustic layer device and the battery of the pad-type device. If a difference in charge level is determined at 1207, flow continues to 1208 where the charge rate of each battery is adjusted so that the battery detected as having the lower charge level receives a higher rate of charge.
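The charge-balancing steps 1206 through 1208 described above may be sketched as follows, again using hypothetical battery and charger interfaces for illustration only.

```python
# Sketch of the charging steps (1206-1208): while charging is needed, monitor
# both batteries and, if their charge levels differ, give the battery with the
# lower charge level the higher charge rate. Interfaces are hypothetical.
import time

def balance_charging(layer_batt, pad_batt, charger, poll_s: float = 30.0):
    while not (layer_batt.is_full() and pad_batt.is_full()):
        layer_level = layer_batt.level()                 # 1206: monitor levels
        pad_level = pad_batt.level()
        if layer_level != pad_level:                     # 1207: difference detected?
            lower = layer_batt if layer_level < pad_level else pad_batt
            charger.set_higher_rate(lower)               # 1208: lower battery charges faster
        else:
            charger.set_equal_rates()                    # no difference detected
        time.sleep(poll_s)
```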
In one exemplary embodiment, the battery that is determined to be farther to the right (i.e., lower in charge) along the corresponding curve in
Flow continues from 1208 to 1205 where, periodically, such as about every 15 minutes, charge is applied to only one battery so that the charge level of the other battery can be monitored to determine where it lies along its charge-level curve (
If, at 1207, no difference in charge levels is detected, flow continues to 1205 where, periodically, such as about every 15 minutes, charge is applied to only one battery so that the charge level of the other battery can be monitored to determine where it lies along its charge-level curve (
In one exemplary embodiment, the acoustic layer device comprises a keyboard (not shown) that is integral to the acoustic layer device. In another exemplary embodiment, the acoustic layer device comprises a keyboard (not shown) that is removably coupled to the acoustic layer device. In still another exemplary embodiment, the acoustic layer device comprises a keyboard (not shown) that is wirelessly coupled to the acoustic layer device, such as through an RF link and/or an infrared link.
Although the foregoing disclosed subject matter is described in some detail for purposes of clarity of understanding, it will be apparent to an ordinarily skilled artisan that certain changes and modifications may be practiced that are within the scope of the appended claims. Accordingly, the present embodiments are to be considered as illustrative and not restrictive, and the subject matter disclosed herein is not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims.
This application is a continuation of U.S. patent application Ser. No. 14/231,664, filed Mar. 31, 2014; which is a continuation-in-part of and claims the benefit of priority to co-pending Patent Cooperation Treaty application number PCT/US2012/069692, filed Dec. 14, 2012 by Avnera Corporation, which in turn claims priority to U.S. Provisional Patent Application Ser. No. 61/576,863, filed Dec. 16, 2011 and now expired; and this application is also a continuation-in-part of and claims the benefit of priority to co-pending U.S. Non-Provisional patent application Ser. No. 13/419,222, filed Mar. 13, 2012, now U.S. Pat. No. 9,204,211, issued Dec. 1, 2015; which in turn also claims priority to U.S. Provisional Patent Application Ser. No. 61/576,863; and this application also claims priority to pending U.S. Provisional Patent Application Ser. No. 61/806,786 filed Mar. 29, 2013; the entire contents of each of which are expressly incorporated in this application by this reference.
Number | Date | Country
---|---|---
61806786 | Mar 2013 | US
61576863 | Dec 2011 | US

Parent | Date | Country | Child
---|---|---|---
14231664 | Mar 2014 | US | 15601566

Parent | Date | Country | Child
---|---|---|---
PCT/US2012/069692 | Dec 2012 | US | 14231664
13419222 | Mar 2012 | US | 14231664