The present invention relates to wireless communication of video information and in particular to wireless communication of uncompressed video information.
In conventional wireless communication systems for transmission of compressed video information, typically multiple compressed video frames transmitted from a transmitter are buffered at a receiver to overcome wireless channel bandwidth fluctuation, reduce transmission jitter and facilitate error concealment. Such buffering at the receiver may be appropriate for compressed video since the total required memory buffer size to store multiple compressed video frames is small and can be accommodated by available buffer size on typical wireless chips.
However, for uncompressed video information such as high definition (HD) video, typical wireless chips do not include sufficient memory to buffer even a single uncompressed video frame. For example, in the 1080p video format, each video frame comprises active picture information of 1080 lines with 1920 pixels per line, with each pixel having 24 bits, so that each video frame comprises about 6 Mbytes of video information. Due to the size and cost constraints of millimeter-wave (mmWave) wireless chips, usually a receiver with an mmWave wireless chip can buffer only a portion of an uncompressed video frame. Therefore, commonly used error concealment schemes (such as copying information from a previous frame to recover errors in a current frame) are not applicable to mmWave wireless chips.
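The buffering constraint can be illustrated with simple arithmetic (a sketch; the 60 frames-per-second rate is an assumed example for illustration, not a figure stated above):

```python
# Frame size and raw data rate for uncompressed 1080p video.
BITS_PER_PIXEL = 24           # e.g., 8 bits for each of three color components
LINES_PER_FRAME = 1080
PIXELS_PER_LINE = 1920

bytes_per_frame = LINES_PER_FRAME * PIXELS_PER_LINE * BITS_PER_PIXEL // 8
print(bytes_per_frame)        # 6220800 bytes, i.e., about 6 Mbytes per frame

# At an assumed rate of 60 frames per second, the raw stream is ~3 Gbps,
# far more than a small on-chip buffer can absorb while a beam is blocked.
FRAMES_PER_SECOND = 60
bits_per_second = bytes_per_frame * 8 * FRAMES_PER_SECOND
print(bits_per_second)        # 2985984000, i.e., about 3 Gbps
```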
At a 60 GHz frequency band, there is more free space loss than at 2 or 5 GHz since free space loss increases quadratically with frequency. In principle, this higher free space loss can be compensated for using antennas with more pattern directivity while maintaining small antenna dimensions. When such antennas are used, however, obstacles can easily cause a substantial drop of received power and block the channel for several seconds. Conventional solutions such as dynamic beam searching can be used to relieve this blocking effect and reduce the blocking time. However, for uncompressed video streaming applications, due to a very limited available buffer size compared to the very high data rate, such conventional solutions for reducing blocking time still significantly degrade uncompressed video reception quality such as Quality of Service (QoS).
The present invention provides a method and system for communication of uncompressed video information in wireless systems. One embodiment involves receiving a frame of video pixel information, partitioning spatially correlated pixels into different pixel partitions, and allocating pixel information from the pixel partitions across one or more of multiple wireless channels based on channel conditions. The allocated pixel information is then transmitted on the one or more channels.
Allocating pixel information may further include evenly allocating the pixel information from the pixel partitions across said multiple channels, based on the channel conditions. Allocating pixel information may further include unevenly allocating the pixel information from the pixel partitions across said multiple channels, based on the channel conditions. Allocating pixel information may further include allocating pixel information from the pixel partitions to a subset of said multiple channels, and allocating error correction information for said pixel information to another subset of said multiple channels, based on the channel conditions. Further, allocating pixel information may include adaptively allocating pixel information from the pixel partitions to said multiple channels based on changing channel conditions.
Transmitting the allocated pixel information may further include transmitting the pixel information on each channel by beamforming transmission over multiple antennas. Transmitting the allocated pixel information may further include transmitting the pixel information on each channel by directional transmission via directional antennas. Transmitting the allocated pixel information on the one or more multiple channels may further include transmitting the allocated pixel information by time-division switched multi-beam transmission over said multiple channels. Transmitting the allocated pixel information by time-division switched multi-beam transmission over said multiple channels may further include transmitting the allocated pixel information by multi-beam transmission over said multiple channels in a time-division manner with a weighted round-robin pattern.
These and other features, aspects and advantages of the present invention will become understood with reference to the following description, appended claims and accompanying figures.
The present invention provides a method and system for communication of uncompressed video information in wireless systems. One embodiment involves performing spatial partitioning of uncompressed video frames for transmission over wireless channels, such as millimeter-wave (mmWave). The spatial partitioning process utilizes multiple beams or multiple sectored antennas (in a time-division manner or in parallel), for transmitting different portions of a video frame (such as different partitions of video pixels in an uncompressed video frame). Such spatial partitioning reduces the channel blocking problem and may improve Quality of Service (QoS) for uncompressed video over, e.g., 60 GHz wireless channels.
Adaptive Spatial Partitioning Transmission
An adaptive spatial partitioning process for wireless transmission of uncompressed video, according to the present invention, meets the QoS requirements of streaming over mmWave wireless channels. The adaptive spatial partitioning process includes pixel partitioning and spatial partitioning transmission, as described in more detail below.
Pixel Partitioning
In an uncompressed video frame, spatially neighboring (spatially correlated) pixels usually have very similar, or even identical, values. Regardless of how pixel partitioning is performed, so long as spatially neighboring pixels are partitioned into different packets for transmission, then if pixel information in a received packet is corrupted (i.e., lost or damaged), one or more other packets which contain pixels spatially related to the corrupt pixel(s) can be used to recover (compensate for) the corrupt pixel information.
Preferably, partitioning is performed such that pixels with minimal spatial distance are placed into different packets for transmission over a wireless channel. Further, partitioning can be performed by distributing y number of spatially correlated pixels into z number of different packets, wherein y≠z. In one example, y can be greater than z, whereby at least one of the packets includes two or more spatially correlated (neighboring) pixels from a partition. It is also possible to split pixels vertically. However, for an interlaced format, since two neighboring lines are already split into two separate fields, it is preferable to partition horizontally for each field if only two partitions are required.
Specifically, for the type 0 pixels (i.e., partition 0), the indices i and j are even numbers (i.e., i=0, 2, 4, . . . , etc., and j=0, 2, 4, . . . , etc.), and the type 0 pixels are placed in Packet 0. For the type 1 pixels (i.e., partition 1), the index i is odd (i.e., i=1, 3, 5, . . . , etc.), the index j is even (i.e., j=0, 2, 4, . . . , etc.), and the type 1 pixels are placed in Packet 1. For the type 2 pixels (i.e., partition 2), the index i is even (i.e., i=0, 2, 4, . . . , etc.), the index j is odd (i.e., j=1, 3, 5, . . . , etc.), and the type 2 pixels are placed in Packet 2. For the type 3 pixels (i.e., partition 3), the indices i and j are odd numbers (i.e., i=1, 3, 5, . . . , etc., and j=1, 3, 5, . . . , etc.), and the type 3 pixels are placed in Packet 3. A cyclic redundancy check (CRC) value for each packet may be appended at the end of the packet before transmission to a receiver over a wireless channel.
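The four-way partitioning above can be sketched as follows (a minimal illustration; the packet format and the CRC polynomial are assumptions, with zlib's CRC-32 standing in for whatever CRC the transmitter actually uses):

```python
import zlib

def partition_frame(frame):
    """Split a frame (a 2D list of pixel values, indexed [i][j]) into four
    packets by the parity of the row index i and column index j:
      type 0: i even, j even -> Packet 0
      type 1: i odd,  j even -> Packet 1
      type 2: i even, j odd  -> Packet 2
      type 3: i odd,  j odd  -> Packet 3
    """
    packets = [[], [], [], []]
    for i, line in enumerate(frame):
        for j, pixel in enumerate(line):
            packets[(i % 2) + 2 * (j % 2)].append(pixel)
    return packets

def append_crc(payload_bytes):
    """Append a CRC-32 value (stand-in) to a packet before transmission."""
    return payload_bytes + zlib.crc32(payload_bytes).to_bytes(4, "big")
```

For a 4x4 block, each of the four packets receives four pixels, and no two horizontally or vertically adjacent pixels land in the same packet.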
If during transmission, a pixel in one packet (e.g., Packet 0) is corrupted, then spatially related pixels in the other three packets (e.g., Packets 1, 2 or 3) can be used at the receiver to compensate for the corrupted pixel. As such, if pixel information in position P in a packet (e.g., Packet 0) is corrupted, then the pixel information in position P in other spatially related packets (e.g., Packets 1, 2 or 3) can be used to compensate for the corrupted information.
Different packets can be transmitted on a single channel or on different channels/paths. In addition to improving robustness, in the case where one channel/path cannot meet the bandwidth requirement of an uncompressed video stream, spatial pixel partitioning can take advantage of multiple channels/paths to transmit all the data of the uncompressed video stream.
In general, square/rectangular blocks 104 (each block including multiple pixels therein), can be used for partitioning the multiple pixels in each block into corresponding multiple packets, wherein for each block, preferably each pixel in that block is placed in a different packet for transmission.
As such, the neighboring pixels in a video frame are partitioned into different packets, and each packet is transmitted separately over a lossy wireless channel. If one packet is lost or erroneous, data in the other packets carrying the neighboring pixels can be used to recover the pixels in the lost or erroneous packet. There are at least two approaches to recovering a lost or erroneous pixel. The simplest approach involves copying the corresponding pixel from a neighboring partition packet. The second approach involves using the average value of the corresponding pixel in all other neighboring partition packets.
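The two recovery approaches can be sketched as follows (an illustration under the assumption that corrupted pixels are flagged as None and that all partition packets hold pixels at matching positions):

```python
def recover_copy(packets, k, p):
    """Approach 1: replace the corrupted pixel at position p of packet k by
    copying the pixel at the same position from a neighboring partition packet."""
    for other in range(len(packets)):
        if other != k and packets[other][p] is not None:
            return packets[other][p]
    return None  # no uncorrupted neighbor available

def recover_average(packets, k, p):
    """Approach 2: replace the corrupted pixel with the average of the pixels
    at position p in all other (uncorrupted) partition packets."""
    neighbors = [packets[o][p] for o in range(len(packets))
                 if o != k and packets[o][p] is not None]
    return sum(neighbors) / len(neighbors) if neighbors else None
```

Averaging generally gives a smoother estimate than copying, at the cost of one extra read per neighboring packet.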
Spatial Partitioning Transmission
Usually, multiple beams or directional paths can be found between a mmWave wireless transmitter and a receiver 10 in a video streaming wireless system 150.
In conventional dynamic beam-steering approaches, the transmitter and the receiver always attempt to find the best directional beam for their directional transmissions. If the best beam is blocked by an object, the transmitter and receiver perform dynamic beam searching to determine a new best directional beam. Such dynamic beam-searching and beam-tracking approaches require long durations to detect whether a current beam is of sufficient quality for transmission. An additional time period is required to determine another best beam. As such, the overall duration for recovering (recovery delay) from a blocked best beam (best channel/path) to finding another beam is substantial (e.g., at least 1 ms). In case a beam is blocked, the limited sizes of the transmitter and receiver buffers do not allow for buffering an incoming uncompressed video stream while the transmitter and receiver engage in a typically lengthy beam-searching to find another beam. As such, when a mmWave channel is blocked, video streaming is interrupted while the transmitter and receiver establish another beam for transmission therebetween. As a result, conventional dynamic beam-searching and beam-tracking approaches cannot meet QoS requirements of uncompressed video streaming.
The spatial partitioning transmission process according to the present invention provides switched multi-beam transmission of spatially partitioned pixel information between a transmitter and receiver, to reduce the possibility of video streaming interruption when a mmWave channel is blocked.
At the antenna training stage, a beam candidate table (TxB) 311 is generated at the Tx 302 and a corresponding beam candidate table (RxB) 312 is generated at the Rx 304. Each beam candidate entry in a beam candidate table includes a beam index number and related antenna configuration information. Example entries of the beam candidate tables TxB and RxB are shown in Tables 1 and 2, respectively.
The beam candidates (BCs) in the tables are ordered according to beam quality (e.g., a beam indexed as TxB_m, RxB_m has better channel quality than a beam indexed as TxB_n, RxB_n, if m&lt;n). The beam candidate tables at the transmitter/receiver are updated periodically to reflect the dynamic beam channel conditions between the transmitter and the receiver. The TxB table and the RxB table have corresponding entries.
According to the TxB and RxB table entries, the antenna configuration information for each candidate beam specifies a set (combination) 303 of the transmitter antennas 153, and a set (combination) 305 of receiver antennas 155. This provides multiple logical or physical antenna sets 303 at the transmitter side, including: Tx Antenna Set 1 (TAS 1), Tx Antenna Set 2 (TAS 2), . . . , Tx Antenna Set N (TAS N). There are also multiple logical or physical antenna sets 305 at the receiver side, including: Rx Antenna Set 1 (RAS 1), Rx Antenna Set 2 (RAS 2), . . . , Rx Antenna Set N (RAS N).
If the antenna configuration information (ACI) can be determined in a short time period (e.g., less than about 10 to 20 microseconds), the same Tx antenna combinations can be used by the different sets 303, and the same Rx antenna combinations can be used by the sets 305. Otherwise, the antennas 153 and 155 are physically divided and assigned to different antenna sets 303 and 305, respectively.
For design simplicity and cost reduction at the receiver side, a switched multi-beam transmission process may be used in place of the above-described parallel multi-beam transmission process. Specifically, different pairs of antenna sets 303 and 305 operate in a time-division manner with a weighted round-robin pattern.
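A weighted round-robin switching schedule of the kind described can be sketched as follows (the specific weights and the two-beam configuration are illustrative assumptions):

```python
import itertools

def weighted_round_robin(beam_weights):
    """Yield an endless time-division schedule of beam-pair indices.

    beam_weights maps a beam-pair index to the number of consecutive time
    slots it receives in each round-robin cycle.
    """
    while True:
        for beam, weight in beam_weights.items():
            for _ in range(weight):
                yield beam

# Example: the primary beam pair (index 1) gets three time slots for each
# slot given to the alternative pair (index 2).
schedule = list(itertools.islice(weighted_round_robin({1: 3, 2: 1}), 8))
print(schedule)   # [1, 1, 1, 2, 1, 1, 1, 2]
```

Weighting the schedule toward the better beam keeps most channel time on the primary path while still exercising the alternative path, so a switch-over does not start from a cold, untrained beam.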
The following example assumes two sets 303 of antennas 153 at the transmitter (Tx) side, and two sets 305 of antennas 155 at the receiver (Rx) side. During an antenna training stage, TAS 1 and RAS 1 find a best beam (first beam path) with each other; then TAS 2 and RAS 2 find a good alternative beam (second beam path) with each other, under the constraint that the alternative beam points in a direction far from that of the best beam. Depending on the conditions of the first and second beam paths, there are several possible ways to allocate the video data on the two beam paths, as described by the examples below.
If the first beam path (Beam 1) as the primary path can provide sufficient bandwidth for all partitions of pixels, even when sharing channel time with the second beam path (Beam 2), then the second beam can serve as the alternative or redundant path. This primary/alternative arrangement is illustrated by a first allocation example 400.
A second allocation example 500 involves allocating the pixel partitions evenly to the two beam paths, Beam 1 and Beam 2.
A third allocation example 600 involves allocating the pixel partitions unevenly to the two beam paths, Beam 1 and Beam 2.
The main purpose of the antenna switching control functions 307 and 309 is to switch between different beams in a time-division manner. The functions 307 and 309 may also switch between allocations. For example, during the video transmission stage, the pixel partition allocations to the two beam paths can be dynamically adjusted using the antenna switching control functions 307 and 309 (e.g., switching between primary-redundant allocation, even allocation and uneven allocation), according to the actual channel conditions of the two beam paths.
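The three allocation modes and the adaptive switching among them can be sketched as follows (a simplified model; the bandwidth-based decision thresholds are hypothetical, since the actual switching criteria are not specified above):

```python
def allocate_partitions(beam1_bw, beam2_bw, stream_bw, partitions=(0, 1, 2, 3)):
    """Choose an allocation of pixel partitions to two beam paths based on
    measured channel conditions (bandwidths, e.g., in Gbps).

    - Primary/redundant: Beam 1 alone can carry the whole stream.
    - Even: the beams are of comparable quality, so split partitions equally.
    - Uneven: the stronger beam carries more partitions than the weaker one.
    """
    if beam1_bw >= stream_bw:
        return {"Beam 1": list(partitions), "Beam 2": []}
    if abs(beam1_bw - beam2_bw) < 0.1 * stream_bw:   # assumed "comparable" threshold
        half = len(partitions) // 2
        return {"Beam 1": list(partitions[:half]), "Beam 2": list(partitions[half:])}
    strong, weak = ("Beam 1", "Beam 2") if beam1_bw > beam2_bw else ("Beam 2", "Beam 1")
    return {strong: list(partitions[:3]), weak: list(partitions[3:])}
```

Re-running this decision whenever fresh channel measurements arrive gives the dynamic adjustment described above.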
A frame structure may be used for data transmission between wireless stations. Frame aggregation can be used in a Media Access Control (MAC) layer and a PHY layer. The MAC layer obtains a MAC Service Data Unit (MSDU) and attaches a MAC header thereto, in order to construct a MAC Protocol Data Unit (MPDU), for transmission. The MAC header includes information such as a source address (SA) and a destination address (DA). The MPDU is a part of a PHY Service Data Unit (PSDU) and is transferred to a PHY layer in the transmitter to attach a PHY header (i.e., PHY preamble) thereto to construct a PHY Protocol Data Unit (PPDU). The PHY header includes parameters for determining a transmission scheme including a coding/modulation scheme. Before transmission as a packet from a transmitter to a receiver, a preamble is attached to the PPDU, wherein the preamble can include channel estimation and synchronization information.
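The MSDU-to-MPDU-to-PPDU layering described above can be sketched as follows (the header contents and sizes are illustrative assumptions, not the actual frame format):

```python
def build_mpdu(msdu, source_addr, dest_addr):
    """MAC layer: prepend a MAC header (source address SA, destination
    address DA) to the MSDU to form an MPDU."""
    mac_header = source_addr + dest_addr          # 6-byte addresses assumed
    return mac_header + msdu

def build_ppdu(psdu, phy_header, preamble):
    """PHY layer: attach the PHY header and a preamble (carrying channel
    estimation and synchronization information) ahead of the PSDU."""
    return preamble + phy_header + psdu

# Hypothetical payload and header bytes, for illustration only.
msdu = b"pixel partition payload"
mpdu = build_mpdu(msdu, b"\x01" * 6, b"\x02" * 6)
ppdu = build_ppdu(mpdu, phy_header=b"\xaa" * 4, preamble=b"\x55" * 8)
```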
The sender 202 includes a PHY layer 206, a MAC layer 208 and an application layer 210. The PHY layer 206 includes a radio frequency (RF) communication module 207 which transmits/receives signals under control of a baseband process module 230, via wireless channels 201. The baseband module 230 may include a low-rate (LR) channel communication module 203 for communicating control information, and a high-rate (HR) channel communication module 205 for communicating video information.
The application layer 210 includes an audio/visual (A/V) pre-processing module 211 for pixel partitioning and packetizing streams as described above.
The receiver 204 includes a PHY layer 214, a MAC layer 216 and an application layer 218. The PHY layer 214 includes an RF communication module 213 which transmits/receives signals under control of a baseband process module 231. The baseband module 231 may include an LR channel communication module 215 and an HR channel communication module 217. The application layer 218 includes an A/V post-processing module 219 for de-partitioning and de-packetizing back into streams the video information in the MAC packets received by the MAC layer 216. The application layer 218 further includes an AV/C control module 220 which handles stream control and channel access. Beamforming transmissions are performed over the HR channels. The MAC/PHY layers perform antenna training and beam switching control.
An example implementation of the present invention for mmWave wireless, such as a 60 GHz frequency band wireless network, can be useful with Wireless HD (WiHD) applications. Wireless HD is an industry-led effort to define a wireless digital network interface specification for wireless HD digital signal transmission on the 60 GHz frequency band, e.g., for consumer electronics (CE) and other electronic products. An example WiHD network utilizes 60 GHz-band mmWave technology to support a physical (PHY) layer data transmission rate of multiple Gbps (gigabits per second), and can be used for transmitting uncompressed high definition television (HDTV) signals wirelessly. Another example application is ECMA TC32-TG20, a wireless radio standard for very high data rate short range communications (ECMA, the European Computer Manufacturers Association, is an international standards association for information and communication systems). The present invention is useful with other wireless communication systems as well, such as IEEE 802.15.3c.
In addition to mmWave wireless applications discussed above, a spatial partitioning process according to the present invention is applicable to other wireless technologies which use beam-forming or directional transmission. Such other wireless technologies include an IEEE 802.11n wireless local area network (WLAN). Further, the present invention is applicable to certain compressed video streams in the format of multiple description coding (MDC) and layered coding.
Using the spatial pixel partitioning, partition packet allocation, and antenna training and beam switching control processes according to the present invention, the recovery delay from a blocked channel/beam in wireless communication of video information is reduced, thereby improving the overall QoS for video streaming, such as uncompressed video streaming.
As is known to those skilled in the art, the aforementioned example architectures described above, according to the present invention, can be implemented in many ways, such as program instructions for execution by a processor, as logic circuits, as an application specific integrated circuit, as firmware, etc. The present invention has been described in considerable detail with reference to certain preferred versions thereof; however, other versions are possible. Therefore, the spirit and scope of the appended claims should not be limited to the description of the preferred versions contained herein.
This application claims priority from U.S. Provisional Patent Application Ser. No. 60/961,541, filed on Jul. 20, 2007, incorporated herein by reference.
Publication: US 2009/0021646 A1, Jan. 2009, United States.

Priority: U.S. Provisional Application Ser. No. 60/961,541, Jul. 2007, United States.