The present invention relates generally to conferencing and, more specifically, to automatically customizing a conferencing system based on proximity of a participant.
Conferencing may be used to allow two or more participants at remote locations to communicate using audio. Additionally, videoconferencing may be used to allow such participants to communicate using both video and audio. Each participant location may include a conferencing system for audio and/or video communication with other participants. Some of the conferencing systems may be customized by participants, e.g., where the conferencing system has a participant's desired conferencing settings, contact list, etc. However, the participant's custom settings are localized only to those conferencing systems that the participant has manually configured. Accordingly, improvements in conferencing are desired.
Various embodiments are presented of a system and method for customizing a conferencing system based on proximity of a participant.
Initially, the method may detect that a participant is in proximity to a conferencing system. According to various embodiments, the conferencing system may be an audio conferencing system or a videoconferencing system. Additionally, the detection may be performed manually or automatically, as desired. For example, the participant may provide user input to “check in” to the conferencing system, e.g., by entering a personal identification number of the participant, logging in to the conferencing system, etc. Alternatively, the conferencing system (or some device associated with the conferencing system) may be configured to automatically detect the participant, without receiving user input identifying the participant. For example, the conferencing system may be configured to detect a personal device (e.g., mobile communication device such as a cell phone or smart phone, or other types of devices, such as personal digital assistants (PDAs), netbooks, tablets, laptops, etc.) of the participant. In some embodiments, the personal device may be detected via a short range communication protocol, e.g., via 802.11x, Bluetooth, near field communication (NFC) protocols, etc. As another example, the conferencing system may detect a geographic position of the personal device (e.g., which may report that geographic position, such as GPS coordinates, to a server) and compare the position to its own, known position. Thus, the method may detect that a participant is proximate to the conferencing system.
Based on the detection that the participant is in proximity to the conferencing system, the method may automatically customize the conferencing system. Customizing the conferencing system may include loading content associated with the participant. In some embodiments, the content may already be stored on the conferencing system, or it may be automatically downloaded from a server (e.g., over a local area network (LAN) or wide area network (WAN), such as the Internet). Once downloaded, the content may be loaded onto the conferencing system so as to customize the conferencing system for the participant. The content may be any of various settings or other information that is already associated with the participant. For example, the content may include a contact list associated with the participant. Thus, even though the participant may be at a new conferencing system, his contact list may be loaded and available at the new conferencing system based on the automatic customization. Similarly, other content may be loaded, such as recording settings, camera settings, conferencing layout settings (e.g., for videoconferences), presentation settings, background images, menu layouts, etc. Further, a conference schedule associated with the participant may be loaded.
In one embodiment, the participant may have a scheduled conference call that is associated with a different conferencing system. However, in response to detecting that the participant is proximate to the conferencing system above, the method may automatically associate the upcoming conference with the new conferencing system. Accordingly, the participant may use the new conferencing system for the conference even though it was originally scheduled with a different conferencing system. Further, the method may automatically release or free the original conferencing system, since the conference is now being performed by the new conferencing system.
Additionally, where multiple participants are detected in proximity to the conferencing system, the method may perform various actions. For example, in one embodiment, the customization may occur for the participant that arrived first, for a participant having the highest priority or seniority, or via other automatic selections. Alternatively, or additionally, the customization may occur for the participant that manually checks in (e.g., via the methods described above, among others). For example, in response to detecting the multiple participants, the conferencing system may display a log in or check in screen for one of the participants to check in, rather than automatically selecting one of the participants.
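By way of non-limiting illustration, such a selection policy might be sketched as follows. This is a minimal Python sketch; the class, field names, and the specific tie-breaking order are illustrative assumptions rather than anything prescribed by this disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DetectedParticipant:
    name: str
    arrival_order: int  # lower values arrived earlier
    priority: int       # higher values indicate greater seniority
    checked_in: bool    # True if the participant manually checked in

def select_participant(detected: list[DetectedParticipant]) -> Optional[DetectedParticipant]:
    """Pick which detected participant's content should customize the system."""
    if not detected:
        return None
    # A manual check-in overrides any automatic selection.
    manual = [p for p in detected if p.checked_in]
    if manual:
        return manual[0]
    # Otherwise prefer highest priority, breaking ties by earliest arrival.
    return max(detected, key=lambda p: (p.priority, -p.arrival_order))
```

Equally, the function could return None whenever several participants are detected and none has checked in, causing the system to display the log in or check in screen instead of selecting automatically.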
A better understanding of the present invention may be obtained when the following detailed description is considered in conjunction with the following drawings, in which:
While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present invention as defined by the appended claims. Note the headings are for organizational purposes only and are not meant to be used to limit or interpret the description or claims. Furthermore, note that the word “may” is used throughout this application in a permissive sense (i.e., having the potential to, being able to), not a mandatory sense (i.e., must). The term “include”, and derivations thereof, mean “including, but not limited to”. The term “coupled” means “directly or indirectly connected”.
U.S. patent application titled “Video Conferencing System Transcoder”, Ser. No. 11/252,238, which was filed Oct. 17, 2005, whose inventors are Michael L. Kenoyer and Michael V. Jenkins, is hereby incorporated by reference in its entirety as though fully and completely set forth herein.
U.S. patent application titled “Virtual Decoders”, Ser. No. 12/142,263, which was filed Jun. 19, 2008, whose inventors are Keith C. King and Wayne E. Mock, is hereby incorporated by reference in its entirety as though fully and completely set forth herein.
U.S. patent application titled “Video Conferencing Device which Performs Multi-way Conferencing”, Ser. No. 12/142,340, whose inventors are Keith C. King and Wayne E. Mock, is hereby incorporated by reference in its entirety as though fully and completely set forth herein.
U.S. patent application titled “Conferencing System Utilizing a Mobile Communication Device as an Interface”, Ser. No. 12/692,915, whose inventors are Keith C. King and Matthew K. Brandt, is hereby incorporated by reference in its entirety as though fully and completely set forth herein.
U.S. patent application Ser. No. 13/093,948, titled “Recording a Videoconference Based on Recording Configurations”, filed Apr. 26, 2011, whose inventors are Ashish Goyal and Binu Kaiparambil Shanmukhadas, is hereby incorporated by reference in its entirety as though fully and completely set forth herein.
U.S. patent application Ser. No. 12/724,226, titled “Automatic Conferencing Based on Participant Presence”, filed on Mar. 15, 2010, whose inventor is Keith C. King, is hereby incorporated by reference in its entirety as though fully and completely set forth herein.
The following is a glossary of terms used in the present application:
Memory Medium—Any of various types of memory devices or storage devices. The term “memory medium” is intended to include an installation medium, e.g., a CD-ROM, floppy disks, or tape device; a computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; or a non-volatile memory such as magnetic media, e.g., a hard drive, or optical storage. The memory medium may comprise other types of memory as well, or combinations thereof. In addition, the memory medium may be located in a first computer in which the programs are executed, or may be located in a second, different computer which connects to the first computer over a network, such as the Internet. In the latter instance, the second computer may provide program instructions to the first computer for execution. The term “memory medium” may include two or more memory mediums which may reside in different locations, e.g., in different computers that are connected over a network.
Carrier Medium—a memory medium as described above, as well as a physical transmission medium, such as a bus, network, and/or other physical transmission medium that conveys signals such as electrical, electromagnetic, or digital signals.
Computer System—any of various types of computing or processing systems, including a personal computer system (PC), mainframe computer system, workstation, network appliance, Internet appliance, personal digital assistant (PDA), smart phone, television system, grid computing system, or other device or combinations of devices. In general, the term “computer system” can be broadly defined to encompass any device (or combination of devices) having at least one processor that executes instructions from a memory medium.
Automatically—refers to an action or operation performed by a computer system (e.g., software executed by the computer system) or device (e.g., circuitry, programmable hardware elements, ASICs, etc.), without user input directly specifying or performing the action or operation. Thus the term “automatically” is in contrast to an operation being manually performed or specified by the user, where the user provides input to directly perform the operation. An automatic procedure may be initiated by input provided by the user, but the subsequent actions that are performed “automatically” are not specified by the user, i.e., are not performed “manually”, where the user specifies each action to perform. For example, a user filling out an electronic form by selecting each field and providing input specifying information (e.g., by typing information, selecting check boxes, radio selections, etc.) is filling out the form manually, even though the computer system must update the form in response to the user actions. The form may be automatically filled out by the computer system where the computer system (e.g., software executing on the computer system) analyzes the fields of the form and fills in the form without any user input specifying the answers to the fields. As indicated above, the user may invoke the automatic filling of the form, but is not involved in the actual filling of the form (e.g., the user is not manually specifying answers to fields but rather they are being automatically completed). The present specification provides various examples of operations being automatically performed in response to actions the user has taken.
FIGS. 1 and 2—Exemplary Participant Locations
In some embodiments, the participant location may include camera 104 (e.g., an HD camera) for acquiring images (e.g., of participant 114) of the participant location. Other cameras are also contemplated. The participant location may also include display 101 (e.g., an HDTV display). Images acquired by the camera 104 may be displayed locally on the display 101 and/or may be encoded and transmitted to other participant locations in the videoconference. In some embodiments, images acquired by the camera 104 may be encoded and transmitted to a multipoint control unit (MCU), which then provides the encoded stream to other participant locations (or videoconferencing endpoints).
The participant location may further include one or more input devices, such as the computer keyboard 140. In some embodiments, the one or more input devices may be used for the videoconferencing system 103 and/or may be used for one or more other computer systems at the participant location, as desired.
The participant location may also include a sound system 161. The sound system 161 may include multiple speakers including left speakers 171, center speaker 173, and right speakers 175. Other numbers of speakers and other speaker configurations may also be used. The videoconferencing system 103 may also use one or more speakerphones 105/107 which may be daisy chained together.
In some embodiments, the videoconferencing system components (e.g., the camera 104, display 101, sound system 161, and speakerphones 105/107) may be coupled to a system codec 109. The system codec 109 may be placed on a desk or on the floor. Other placements are also contemplated. The system codec 109 may receive audio and/or video data from a network, such as a LAN (local area network) or the Internet. The system codec 109 may send the audio to the speakerphone 105/107 and/or sound system 161 and the video to the display 101. The received video may be HD video that is displayed on the HD display. The system codec 109 may also receive video data from the camera 104 and audio data from the speakerphones 105/107 and transmit the video and/or audio data over the network to another conferencing system, or to an MCU for provision to other conferencing systems. The conferencing system may be controlled by a participant or user through the user input components (e.g., buttons) on the speakerphones 105/107 and/or input devices such as the keyboard 140 and/or the remote control 150. Other system interfaces may also be used.
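As a non-limiting illustration of the routing role described above, the following minimal Python sketch dispatches received media to the local outputs. The packet structure and the stand-in output callbacks are assumptions for illustration only; nothing here is a prescribed codec interface.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MediaPacket:
    kind: str      # "audio" or "video"
    payload: bytes

def route_incoming(packet: MediaPacket,
                   render_video: Callable[[bytes], None],
                   play_audio: Callable[[bytes], None]) -> None:
    """Route a packet received from the network to the local display or sound system."""
    if packet.kind == "video":
        render_video(packet.payload)   # e.g., HD video shown on the HD display
    elif packet.kind == "audio":
        play_audio(packet.payload)     # e.g., sent to the speakerphones/sound system

# Usage with stand-in sinks:
route_incoming(MediaPacket("video", b"\x00" * 16),
               render_video=lambda b: print("display:", len(b), "bytes"),
               play_audio=lambda b: print("speakers:", len(b), "bytes"))
```

The outgoing direction would mirror this dispatch, sending captured camera frames and speakerphone audio to the network or MCU.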
In various embodiments, the codec 109 may implement a real time transmission protocol. In some embodiments, the codec 109 (which may be short for “compressor/decompressor” or “coder/decoder”) may comprise any system and/or method for encoding and/or decoding (e.g., compressing and decompressing) data (e.g., audio and/or video data). For example, communication applications may use codecs for encoding video and audio for transmission across networks, including compression and packetization. Codecs may also be used to convert an analog signal to a digital signal for transmitting over various digital networks (e.g., network, PSTN, the Internet, etc.) and to convert a received digital signal to an analog signal. In various embodiments, codecs may be implemented in software, hardware, or a combination of both. Some codecs for computer video and/or audio may utilize MPEG, Indeo™, and Cinepak™, among others.
In some embodiments, the videoconferencing system 103 may be designed to operate with normal display or high definition (HD) display capabilities. The videoconferencing system 103 may operate with network infrastructures that support T1 capabilities or less, e.g., 1.5 mega-bits per second or less in one embodiment, and 2 mega-bits per second in other embodiments.
Note that the videoconferencing system(s) described herein may be dedicated videoconferencing systems (i.e., whose purpose is to provide videoconferencing) or general purpose computers (e.g., IBM-compatible PC, Mac, etc.) executing videoconferencing software (e.g., a general purpose computer for using user applications, one of which performs videoconferencing). A dedicated videoconferencing system may be designed specifically for videoconferencing, and is not used as a general purpose computing platform; for example, the dedicated videoconferencing system may execute an operating system which may be typically streamlined (or “locked down”) to run one or more applications to provide videoconferencing, e.g., for a conference room of a company. In other embodiments, the videoconferencing system may be a general use computer (e.g., a typical computer system which may be used by the general public or a high end computer system used by corporations) which can execute a plurality of third party applications, one of which provides videoconferencing capabilities. Videoconferencing systems may be complex (such as the videoconferencing system shown in FIG. 1) or simple (e.g., a user computer system with a video camera, microphone, and/or speakers).
The videoconferencing system 103 may execute various videoconferencing application software that presents a graphical user interface (GUI) on the display 101. The GUI may be used to present an address book, contact list, list of previous callees (call list) and/or other information indicating other videoconferencing systems that the participant may desire to call to conduct a videoconference.
Note that the videoconferencing system shown in FIG. 1 may be modified to be an audioconferencing system.
FIGS. 3A and 3B—Coupled Conferencing Systems
FIG. 4—Customizing a Conferencing System Based on Proximity of a Participant
In 402, the method may detect that a participant is in proximity to a conferencing system. As used herein “in proximity to” or general descriptions related to a participant being “proximate to” a conferencing system refers to a participant that is close enough in distance to participate in a conference using the conferencing system. Generally, a conferencing system may be within a conference room, so a participant being proximate to such a conferencing system may simply refer to the participant being within the same conference room as the conferencing system. However, where a room is very large, a participant may need to be closer to the conferencing system than simply being within the same room to be considered “proximate to” the conferencing system.
The detection that the participant is proximate to the conferencing system may be performed manually or automatically, as desired. For example, the participant may manually provide user input to “check in” to the conferencing system. The user input may be provided in a variety of ways. For example, the conferencing system may include a log in screen and the participant may provide a user name and password in order to check in to the conferencing system. As another example, the participant may provide a personal identification number (PIN) that is associated with the participant to the conferencing system, e.g., via a remote control. As another example, the participant may be able to provide audible commands to the conferencing system in order to check in, such as by speaking the participant's name, providing a log in phrase, etc. In further embodiments, the participant may be able to provide a visual gesture in order to check in to the conferencing system. For example, the participant may provide a gesture that is unique to the participant. Thus, in some embodiments, the unique detection or determination of the participant may be performed by the participant manually providing unique identification information.
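As a non-limiting sketch of the manual check-in paths (log in screen and PIN entry), the following Python fragment validates credentials against a participant directory. The directory layout and all names are illustrative assumptions; in practice such a directory might be stored on a server reachable by the conferencing system.

```python
import hashlib
from typing import Optional

# Hypothetical participant directory (illustrative only).
DIRECTORY = {
    "alice": {"pin": "4321",
              "password_sha256": hashlib.sha256(b"s3cret").hexdigest()},
}

def check_in_with_pin(pin: str) -> Optional[str]:
    """Return the participant whose PIN matches the one entered via the remote control."""
    for name, record in DIRECTORY.items():
        if record["pin"] == pin:
            return name
    return None

def check_in_with_password(name: str, password: str) -> Optional[str]:
    """Validate a log in screen submission (user name and password)."""
    record = DIRECTORY.get(name)
    if record and hashlib.sha256(password.encode()).hexdigest() == record["password_sha256"]:
        return name
    return None

print(check_in_with_pin("4321"))                  # -> alice
print(check_in_with_password("alice", "s3cret"))  # -> alice
```

A voice or gesture check-in could feed the same directory lookup once a recognition step (discussed below) has produced a candidate participant identity.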
Alternatively, or additionally, the detection of the participant may be performed automatically. More specifically, the conferencing system (or some device associated with the conferencing system) may be configured to automatically detect the participant, without receiving user input identifying the participant. For example, the conferencing system may be configured to detect a personal device (e.g., mobile communication device such as a cell phone or smart phone, or other types of devices, such as personal digital assistants (PDAs), netbooks, tablets, laptops, etc.) of the participant. In some embodiments, the personal device may be detected via a short range communication protocol, e.g., via 802.11x, Bluetooth, near field communication (NFC) protocols, etc. Thus, the conferencing system may detect the presence of the personal device, e.g., via a short range protocol, and then determine the participant associated with the personal device. In one embodiment, the conferencing system may store or be able to access (e.g., on a remote server) associations between personal devices and participants. For example, a MAC address of the personal device may be associated with the participant in a database, e.g., stored on a server that is accessible by the conferencing system. Alternatively, the personal device may be configured to provide identification information of the participant during communication. In another embodiment, the conferencing system may detect a geographic position of the personal device (e.g., which may report that geographic position, such as GPS coordinates, to a server) and compare the position to its own, known position. The geographic position may be determined via GPS, WiFi triangulation, cell tower triangulation, and/or any method for determining the position of the personal device. Accordingly, when the two positions are within a threshold, the conferencing system may detect that the participant is proximate to the conferencing system. Two or more of these detection methods may also be used in combination to increase the accuracy of identifying the participant.
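The two automatic mechanisms just described, device discovery with a stored device-to-participant association and geographic position comparison against a threshold, might be sketched as follows. This is a minimal Python illustration; the MAC table, the coordinates, and the 15-meter threshold are assumptions, not values taken from this disclosure.

```python
import math
from typing import Optional, Tuple

# Hypothetical association table mapping personal-device MAC addresses to
# participants, e.g., stored in a database on a server.
MAC_TO_PARTICIPANT = {"aa:bb:cc:dd:ee:ff": "alice"}

def participant_for_device(mac_address: str) -> Optional[str]:
    """Identify the participant from a device discovered via 802.11x/Bluetooth/NFC."""
    return MAC_TO_PARTICIPANT.get(mac_address.lower())

def distance_m(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    """Great-circle distance between two GPS fixes (haversine formula), in meters."""
    (lat1, lon1), (lat2, lon2) = a, b
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371000.0 * math.asin(math.sqrt(h))

def is_proximate(device_fix, system_fix, threshold_m: float = 15.0) -> bool:
    """Compare the device's reported position against the system's known position."""
    return distance_m(device_fix, system_fix) <= threshold_m

print(participant_for_device("AA:BB:CC:DD:EE:FF"))             # -> alice
print(is_proximate((30.2672, -97.7431), (30.2673, -97.7431)))  # -> True (~11 m apart)
```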
In further embodiments, the conferencing system may be configured to automatically detect the participant via image recognition, such as face recognition. Additionally, or alternatively, the conferencing system may be configured to automatically detect the participant via voice recognition, e.g., automatically identifying the participant when the participant speaks. For example, the image or voice recognition may be performed whenever the participant speaks or is within visible range of the conferencing system. Alternatively, the recognition may be performed in response to user input, e.g., when using a phrase for checking in, such as “check in” or “log me in” and/or when using a visual gesture for checking in, such as waving at the conferencing system.
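By way of illustration only, the trigger-phrase variant of recognition-based check-in might look like the following. The recognition backend is represented by a stand-in callable, since this disclosure does not prescribe any particular face- or voice-recognition method.

```python
from typing import Callable, Optional

TRIGGER_PHRASES = {"check in", "log me in"}

def maybe_check_in(utterance_text: str,
                   identify_speaker: Callable[[], Optional[str]]) -> Optional[str]:
    """Run speaker identification only when a check-in phrase is heard."""
    if utterance_text.strip().lower() in TRIGGER_PHRASES:
        return identify_speaker()  # stand-in for a voice/face recognition backend
    return None

# Usage with a stand-in recognizer:
print(maybe_check_in("Log me in", identify_speaker=lambda: "alice"))  # -> alice
```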
Further manual and automatic methods for detecting the participant are envisioned. Thus, the method may detect that a participant is proximate to the conferencing system.
In 404, based on the detection that the participant is in proximity to the conferencing system, the method may automatically customize the conferencing system. Customizing the conferencing system may include loading content associated with the participant. In some embodiments, the content may already be stored on the conferencing system, or it may be automatically downloaded from a server (e.g., over a local area network (LAN) or wide area network (WAN), such as the Internet). Accordingly, the content may be loaded onto the conferencing system so as to customize the conferencing system for the participant. The content may be any of various settings or other information that is already associated with the participant. For example, the content may include a contact list associated with the participant. Thus, even though the participant may be at a new conferencing system, his contact list may be loaded and available at the new conferencing system based on the automatic customization. Similarly, other content may be loaded, such as system or room identification settings (e.g., to rename the system to indicate the presence of or customization for the participant), lighting settings, recording settings, camera settings or presets, conferencing layout settings (e.g., for videoconferences), presentation settings, background images, menu layouts, etc.
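A minimal sketch of this download-and-apply step follows. The profile URL, the JSON field names, and the ConferencingSystem attributes are all illustrative assumptions rather than a prescribed format; the disclosure only requires that content be downloadable from a server over a LAN or WAN.

```python
import json
import urllib.request
from dataclasses import dataclass, field

@dataclass
class ConferencingSystem:
    display_name: str = "Conference Room"
    contacts: list = field(default_factory=list)
    layout: str = "default"
    camera_presets: dict = field(default_factory=dict)
    schedule: list = field(default_factory=list)

# Hypothetical profile endpoint (illustrative only).
PROFILE_URL = "http://conference-server.example/profiles/{participant}.json"

def load_profile(participant: str) -> dict:
    """Download the participant's stored content (contact list, settings, ...)."""
    with urllib.request.urlopen(PROFILE_URL.format(participant=participant)) as resp:
        return json.load(resp)

def customize(system: ConferencingSystem, profile: dict) -> None:
    """Apply downloaded content so the system behaves like the participant's own."""
    system.contacts = profile.get("contact_list", [])
    system.layout = profile.get("layout_settings", system.layout)
    system.camera_presets = profile.get("camera_presets", {})
    system.display_name = profile.get("room_name", system.display_name)
    system.schedule = profile.get("conference_schedule", [])

# Usage with an already-fetched profile:
room = ConferencingSystem()
customize(room, {"contact_list": ["bob", "carol"], "room_name": "Alice's Room"})
print(room.display_name, room.contacts)  # -> Alice's Room ['bob', 'carol']
```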
Further, a conference schedule associated with the participant may be loaded. For example, the participant may be able to select an upcoming conference and initiate the conference using the schedule. Additionally, some of the customizations described above may also be based on an upcoming conference. For example, if the upcoming conference is for a single person in the conferencing room, the camera settings or presets may be customized for having a single person. Similarly, the loaded layout settings may be customized based on the expected number of participants or other endpoints in an upcoming conference.
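For instance, the camera-preset customization based on the expected number of local participants might reduce to a simple policy like the following; the preset names are hypothetical.

```python
def pick_camera_preset(expected_local_participants: int) -> str:
    """Choose a camera preset based on how many people the upcoming conference expects."""
    if expected_local_participants <= 1:
        return "single-speaker-closeup"
    if expected_local_participants <= 4:
        return "small-group"
    return "full-room"

print(pick_camera_preset(1))  # -> single-speaker-closeup
```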
Accordingly, in one embodiment, the conferencing system may be customized based on the proximity of the participant to effectively convert the conferencing system into a personal conferencing system of the participant. In a further embodiment, the method may also broadcast the participant's presence status (e.g., presence in the meeting room or proximity to the conferencing system) to all interested users or participants.
In 406, a conference may be initiated or performed between a plurality of participants at respective participant locations. More specifically, the conference may be initiated between the participant using the conferencing system (e.g., an endpoint at a first participant location) and a plurality of other participants using other conferencing systems (e.g., at other participant locations). The conference may be established according to any of a variety of methods, e.g., the one described in patent application Ser. No. 11/252,238, which was incorporated by reference above. The conference may utilize an instant messaging service or conferencing service over the Internet, as desired. In some embodiments, a multipoint control unit (MCU) may perform or control the conference between the plurality of conferencing systems. For example, in a videoconference, one of the conferencing systems may act as the MCU and may perform decoding and encoding operations on video information transmitted in the first videoconference between the plurality of videoconferencing endpoints. In some embodiments, the conference may be initiated automatically, as described in U.S. patent application Ser. No. 12/724,226, which was incorporated by reference above.
FIG. 5—Using a New Conferencing System Based on Proximity of a Participant
In 502, the method may detect that a participant is in proximity to a conferencing system similar to 402 above. Additionally, the conferencing system may be customized in the manner described above.
In 504, an upcoming conference for the participant may be determined. For example, the conferencing system may determine a schedule of conferences associated with the participant, e.g., via communication with a server that stores the participant's schedule. For example, conferences associated with the participant may include any conferences the participant has organized or agreed to join. Such scheduling may be performed in any number of ways, e.g., via appointments within email clients, scheduling programs associated with conferencing, via websites, etc. In one embodiment, the conferencing system may send a request to the server to determine if there is an upcoming conference within a threshold of time of the current time. Alternatively, the conferencing system may download the schedule and automatically perform the time comparison. In various embodiments, the threshold of time may be 5 minutes, 10 minutes, 15 minutes, 30 minutes, etc.
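The threshold check might be expressed as follows; this is a minimal Python sketch whose data shape and 15-minute default are assumptions drawn from the examples above.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class ScheduledConference:
    conference_id: str
    start: datetime
    reserved_system: str  # the conferencing system originally booked

def upcoming_conference(schedule: List[ScheduledConference],
                        now: datetime,
                        threshold: timedelta = timedelta(minutes=15)
                        ) -> Optional[ScheduledConference]:
    """Return the soonest conference starting within the threshold, if any."""
    candidates = [c for c in schedule if now <= c.start <= now + threshold]
    return min(candidates, key=lambda c: c.start) if candidates else None

now = datetime(2011, 4, 26, 10, 0)
sched = [ScheduledConference("conf-42", datetime(2011, 4, 26, 10, 10), "room-a")]
print(upcoming_conference(sched, now))  # starts within 15 minutes -> conf-42
```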
In 506, the conferencing system may be associated with or reserved for the upcoming conference for the participant. For example, the upcoming conference may have been previously associated with a different conferencing system. However, in response to the participant being proximate to the conferencing system within the threshold of time, the conferencing system may be used for the conference instead of the previously reserved conferencing system. In some embodiments, the method may automatically unassociate the previous conferencing system from the conference so that it may be used for other conferences. For example, the conferencing system may provide a message to release the previously reserved conferencing system from the scheduled conference. Alternatively, this release may be performed automatically by a server managing the conferencing systems in response to the association of the conferencing system with the upcoming conference in 506.
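A minimal sketch of the reassociate-and-release step follows; the Reservation shape and the release callback are illustrative assumptions, with the callback standing in for the message to the scheduling server described above.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Reservation:
    conference_id: str
    reserved_system: Optional[str]

def move_conference(res: Reservation, new_system: str,
                    release: Callable[[str], None]) -> None:
    """Associate the conference with the system the participant is at,
    releasing the originally reserved system for other conferences."""
    original = res.reserved_system
    res.reserved_system = new_system
    if original and original != new_system:
        release(original)  # e.g., a message to the server managing the systems

move_conference(Reservation("conf-42", "room-a"), "room-b",
                release=lambda s: print("released", s))  # -> released room-a
```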
Thus, in one embodiment, the participant may have a scheduled conference call that is associated with a different conferencing system. However, in response to detecting that the participant is proximate to the conferencing system above, the method may automatically associate the upcoming conference with the new conferencing system. Accordingly, the participant may use the new conferencing system for the conference even though it was originally scheduled with a different conferencing system. Further, the method may automatically release or free the original conferencing system, since the conference is now being performed by the new conferencing system.
In 508, the conference may be performed using the conferencing system, similar to 406 above. Also similar to above, in some embodiments, the conference may be initiated automatically, as described in U.S. patent application Ser. No. 12/724,226, which was incorporated by reference above.
Embodiments of a subset or all (and portions or all) of the above may be implemented by program instructions stored in a memory medium or carrier medium and executed by a processor.
In some embodiments, a computer system at a respective participant location may include a memory medium(s) on which one or more computer programs or software components according to one embodiment of the present invention may be stored. For example, the memory medium may store one or more programs that are executable to perform the methods described herein. The memory medium may also store operating system software, as well as other software for operation of the computer system.
Further modifications and alternative embodiments of various aspects of the invention may be apparent to those skilled in the art in view of this description. Accordingly, this description is to be construed as illustrative only and is for the purpose of teaching those skilled in the art the general manner of carrying out the invention. It is to be understood that the forms of the invention shown and described herein are to be taken as embodiments. Elements and materials may be substituted for those illustrated and described herein, parts and processes may be reversed, and certain features of the invention may be utilized independently, all as would be apparent to one skilled in the art after having the benefit of this description of the invention. Changes may be made in the elements described herein without departing from the spirit and scope of the invention as described in the following claims.
This application is a continuation in part of U.S. patent application Ser. No. 13/093,948, titled “Recording a Videoconference Based on Recording Configurations”, filed Apr. 26, 2011 now U.S. Pat. No. 8,717,404, whose inventors were Ashish Goyal and Binu Kaiparambil Shanmukhadas, which claims benefit of priority of Indian Patent Application No. 1004/DEL/2010 titled “Recording a Videoconference Using a Streaming Server” filed Apr. 27, 2010, whose inventors were Keith C. King, Binu Kaiparambil Shanmukhadas, Ashish Goyal, and Sunil George, which are both hereby incorporated by reference in their entirety as though fully and completely set forth herein.