The present invention relates to the field of communications, and in particular, to a method and an apparatus for processing video image data, a videoconferencing system and a videoconferencing terminal.
With the development of coding and information compression technologies and the rapid development of digital networks, videoconferencing systems have emerged and entered the market. Since the first series of international standards for videoconferencing systems (H.320) was approved and implemented in the early 1990s, videoconferencing systems have been applied more and more widely. Meanwhile, close attention is paid to the voice experience and the video experience provided by a videoconferencing system: the voice experience is required to evolve towards high-fidelity voice reproduction, and the video experience is required to evolve towards a high resolution and a wide viewing angle.
In an existing videoconferencing television system, a videoconferencing terminal at a transmitting end captures images by using a single high definition camera, where resolutions of the captured high definition videos are generally 720p30f, 720p60f, 1080i30f, 1080i60f, 1080p30f, and 1080p60f; the captured videos are then compressed and coded to generate a video code stream; the video code stream is transmitted, through a data transmission network, to a videoconferencing terminal at a receiving end; and the videoconferencing terminal at the receiving end decodes the received video code stream to obtain the high definition video image from the transmitting end and displays the image.
Such a videoconferencing terminal can provide a higher video resolution than a standard definition videoconferencing terminal and can bring better visual experience to users. However, the viewing angle of the provided video image is limited.
The Cisco TelePresence System solves the foregoing problem to some extent. As shown in
However, the inventor finds that the foregoing solution has at least the following problem.
This solution imposes strict requirements on the decoration and layout of a conference room, especially on the position of the camera set and the distance between a user and the camera set; otherwise, an overlapping phenomenon may occur in the image displayed on a display device. These strict requirements make installation of the system complex.
In view of the foregoing description, embodiments of the present invention provide a method and an apparatus for processing video image data, a videoconferencing system, and a videoconferencing terminal, to solve the problem in the prior art that installation of a system is complex.
The embodiments of the present invention are implemented as follows.
A method for processing video image data includes:
obtaining multiple channels of correlative video image data and correlative information, where the correlative information includes: information that is used to indicate a physical position of video image data, and captured timestamp information of the video image data;
combining the multiple channels of correlative video image data into a single channel of panoramic video image data by using the correlative information; and
after recombining the panoramic video image data into multiple channels of video image data satisfying a display requirement, sending the recombined video image data to a display device for display.
An apparatus for processing video image data includes:
a data input interface, configured to obtain multiple channels of correlative video image data and correlative information, where the correlative information includes: information that is used to indicate a physical position of video image data, and captured timestamp information of the video image data;
a data combining unit, configured to combine the multiple channels of correlative video image data into a single channel of panoramic video image data by using the correlative information;
a data recombining unit, configured to recombine the panoramic video image data into multiple channels of video image data satisfying a display requirement; and
multiple data output interfaces, connected to an external display device, and configured to transmit the video image data processed and obtained by the data recombining unit to the display device.
An apparatus for processing video image data includes:
a data input interface, configured to obtain multiple channels of correlative video image data and correlative information that are collected by multiple cameras, where the correlative information includes: information that is used to indicate a physical position of video image data, and captured timestamp information of the video image data;
a data combining unit, configured to combine the multiple channels of correlative video image data into a single channel of panoramic video image data by using the correlative information;
a data recombining unit, configured to recombine the panoramic video image data into multiple channels of video image data satisfying a display requirement; and
a data sending unit, configured to send, through a communication network, the multiple channels of video image data processed and obtained by the data recombining unit to a remote videoconferencing device, so that the videoconferencing device displays the video image data through a corresponding display device.
An apparatus for processing video image data includes:
a data input interface, configured to obtain multiple channels of coded video image data;
multiple data decoders, configured to simultaneously decode the multiple channels of coded video image data, where multiple channels of decoded video image data include multiple sub-images obtained by partitioning a panoramic video image, and corresponding synchronous information and reconstruction information;
a data synchronizing unit, configured to classify decoded sub-images according to the corresponding synchronous information;
a data reconstructing unit, configured to reconstruct, according to the reconstruction information, the classified sub-images, to obtain multiple channels of video image data, where each channel of video image data is arranged according to a position of each channel of video image data in the panoramic video image data; and
multiple data output interfaces, connected to multiple external display devices, and configured to respectively transmit each channel of video image data processed and obtained by the data reconstructing unit to a corresponding display device.
A videoconferencing system includes:
a data input interface, configured to obtain multiple channels of correlative video image data and correlative information, where the correlative information includes: information that is used to indicate a physical position of video image data, and captured timestamp information of the video image data;
a data combining unit, configured to process the multiple channels of correlative video image data into a single channel of panoramic video image data by using the correlative information;
a data sending unit, configured to send, through a communication network, the panoramic video image data after the panoramic video image data is encoded;
a data receiving unit, configured to receive the panoramic video image data that is carried on the communication network;
a data recombining unit, configured to recombine, after decoding, the panoramic video image data received by the data receiving unit into multiple channels of video image data satisfying a display requirement; and
multiple data output interfaces, connected to multiple external display devices, and configured to respectively transmit each channel of video image data processed and obtained by the data recombining unit to a corresponding display device.
A videoconferencing system includes:
a data input interface, configured to obtain multiple channels of correlative video image data and correlative information, where the correlative information includes: information that is used to indicate a physical position of video image data, and captured timestamp information of the video image data;
a data sending unit, configured to send, through a communication network, the multiple channels of correlative video image data and the correlative information after the multiple channels of correlative video image data are encoded;
a data receiving unit, configured to receive the multiple channels of correlative video image data and correlative information that are carried on the communication network;
a data combining unit, configured to process, after decoding, the multiple channels of correlative video image data into a single channel of panoramic video image data by using the correlative information;
a data recombining unit, configured to recombine the panoramic video image data into multiple channels of video image data satisfying a display requirement; and
multiple data output interfaces, connected to multiple external display devices, and configured to respectively transmit each channel of video image data processed and obtained by the data recombining unit to a corresponding display device.
A videoconferencing system includes:
a data input interface, configured to obtain multiple channels of correlative video image data and correlative information, where the correlative information includes: information that is used to indicate a physical position of video image data, and captured timestamp information of the video image data;
a data combining unit, configured to process the multiple channels of correlative video image data into a single channel of panoramic video image data by using the correlative information;
a data recombining unit, configured to recombine the panoramic video image data into multiple channels of video image data satisfying a display requirement;
a data sending unit, configured to send, through a communication network, the multiple channels of video image data after the multiple channels of video image data are encoded, where the multiple channels of video image data are processed and obtained by the data recombining unit;
a data receiving unit, configured to receive the multiple channels of video image data that are carried on the communication network; and
multiple data output interfaces, connected to multiple external display devices, and configured to respectively transmit, after decoding, each channel of video image data received by the data receiving unit to a corresponding display device.
A videoconferencing terminal includes:
a data input interface, configured to obtain multiple channels of correlative video image data and correlative information, where the correlative information includes: information that is used to indicate a physical position of video image data, and captured timestamp information of the video image data;
a data combining unit, configured to process the multiple channels of correlative video image data into a single channel of panoramic video image data by using the correlative information;
a data transceiving unit, configured to send the panoramic video image data to a remote videoconferencing device through a communication network, and receive a single channel of panoramic video image data sent by the videoconferencing device through the communication network;
a data recombining unit, configured to recombine the panoramic video image data received by the data transceiving unit into multiple channels of video image data satisfying a display requirement; and
multiple data output interfaces, connected to multiple external display devices, and configured to respectively transmit each channel of video image data processed and obtained by the data recombining unit to a corresponding display device.
A videoconferencing terminal includes:
a data input interface, configured to obtain multiple channels of correlative video image data and correlative information, where the correlative information includes: information that is used to indicate a physical position of video image data, and captured timestamp information of the video image data;
a data transceiving unit, configured to send the multiple channels of video image data and the correlative information to a remote videoconferencing device through a communication network, and receive multiple channels of video image data and correlative information that are sent by the videoconferencing device through the communication network;
a data combining unit, configured to process the multiple channels of correlative video image data into a single channel of panoramic video image data by using the correlative information;
a data recombining unit, configured to recombine the panoramic video image data processed and obtained by the data combining unit into multiple channels of video image data satisfying a display requirement; and
multiple data output interfaces, connected to multiple external display devices, and configured to respectively transmit each channel of video image data processed and obtained by the data recombining unit to a corresponding display device.
A videoconferencing terminal includes:
a data input interface, configured to obtain multiple channels of correlative video image data and correlative information, where the correlative information includes: information that is used to indicate a physical position of video image data, and captured timestamp information of the video image data;
a data combining unit, configured to process the multiple channels of correlative video image data into a single channel of panoramic video image data by using the correlative information;
a data recombining unit, configured to recombine the panoramic video image data processed and obtained by the data combining unit into multiple channels of video image data satisfying a display requirement;
a data transceiving unit, configured to send, through a communication network, the multiple channels of video image data processed and obtained by the data recombining unit to a remote videoconferencing device, and receive multiple channels of recombined video image data sent by the videoconferencing device through the communication network; and
multiple data output interfaces, connected to multiple external display devices, and configured to respectively transmit each channel of video image data received by the data transceiving unit to a corresponding display device.
It can be seen from the foregoing technical solutions that, compared with the prior art, in the embodiments of the present invention, after the multiple channels of video image data collected by the multiple cameras are obtained, the multiple channels of video image data are processed into the single channel of panoramic video image data, and the single channel of panoramic video image data is recombined into several channels of video image data according to a display requirement for display. In this process, the operation of processing the multiple channels of video image data into the single channel of panoramic video image data may eliminate any overlapping between the channels of video image data. Therefore, overlapping between the channels of video image data collected by the cameras is allowed, so that the requirements on the position where the camera set is placed and on the distance between a user and the camera set are lowered, and installation of the system is simplified.
The accompanying drawings described herein are used to provide a further understanding of the present invention and constitute a part of this application, but are not intended to limit the present invention. In the accompanying drawings:
In order to make the objectives, technical solutions, and advantages of the present invention more comprehensible, the present invention is described in further detail in the following with reference to the embodiments and the accompanying drawings. Here, the exemplary embodiments of the present invention and descriptions of the embodiments are only used to explain the present invention, but are not intended to limit the present invention.
For the purpose of reference and clarity, the technical terms, short forms, or abbreviations used in this specification are summarized as follows:
H.320: ITU-T Recommendation H.320, Narrow-band visual telephone systems and terminal equipment, a standard defined by the International Telecommunication Union Telecommunication Standardization Sector, which specifies a multimedia communication system based on a narrow-band switching system;
H.323: ITU-T Recommendation H.323, Packet-based Multimedia Communications Systems, a standard defined by the International Telecommunication Union Telecommunication Standardization Sector, which specifies an architecture of a multimedia communication system based on a packet switching system;
IP: Internet Protocol, that is, network protocol;
ISDN: Integrated Services Digital Network, that is, integrated services digital network;
ITU-T: International Telecommunication Union Telecommunication Standardization Sector;
RTP: Real-time Transport Protocol, real-time transport protocol;
MCU: Multipoint Control Unit, multipoint control unit;
UDP: User Datagram Protocol, user datagram protocol;
YPbPr: luminance (Y) and color difference (Pb/Pr);
DVI: Digital Visual Interface, digital visual interface;
HDMI: High Definition Multimedia Interface, high definition multimedia interface;
VGA: Video Graphics Array, video graphics array;
MPEG: Moving Picture Experts Group, where MPEG1, MPEG2, and MPEG4 are all MPEG standards;
video images correlative to each other (referred to as correlative video images hereinafter in order to facilitate description): video images obtained by multiple cameras in the same scenario, where, since the cameras are placed at arbitrary positions, an overlapping area generally exists between these images, as shown in
image combination: combining multiple small-sized (small viewing-angle) images from the same scenario into a large-sized (wide viewing-angle) image; and processing the overlapping area between the correlative images during combination, for example, the image 21 and the image 22 shown in
image recombination: partitioning and filtering a large-sized video image to form multiple small-sized video images.
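By way of a non-limiting illustration, the following sketch shows one possible way to perform the combination and recombination operations defined above on raw frames, assuming that the correlative images are already ordered from left to right, share the same height, are H x W x 3 arrays, and overlap by a known, constant number of pixel columns; the function names combine_pair, combine_panorama, and recombine are merely illustrative and are not part of the claimed solutions.

```python
import numpy as np

def combine_pair(left: np.ndarray, right: np.ndarray, overlap: int) -> np.ndarray:
    """Combine two correlative images of equal height into one wider image,
    blending the known overlapping strip with a linear ramp so that the
    overlapping area is processed rather than shown twice."""
    # Images are assumed to be H x W x 3 arrays.
    alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]   # weight of the left image
    blended = alpha * left[:, -overlap:] + (1.0 - alpha) * right[:, :overlap]
    return np.hstack([left[:, :-overlap], blended.astype(left.dtype), right[:, overlap:]])

def combine_panorama(images: list, overlap: int) -> np.ndarray:
    """Image combination: fold a left-to-right list of small viewing-angle
    images into a single wide viewing-angle image."""
    panorama = images[0]
    for image in images[1:]:
        panorama = combine_pair(panorama, image, overlap)
    return panorama

def recombine(panorama: np.ndarray, parts: int) -> list:
    """Image recombination: partition the wide image into `parts` smaller images."""
    return np.array_split(panorama, parts, axis=1)
```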
The technical solutions in the embodiments of the present invention are clearly and fully described in the following with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the embodiments to be described are only a part rather than all of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative efforts shall fall within the protection scope of the present invention.
An embodiment of the present invention discloses a method for processing video image data, where obtained multiple channels of correlative video image data are combined into a single channel of panoramic video image data, the panoramic video image data is recombined, according to a display requirement, into one or more channels of video image data (equal in number to the display devices), and the video image data is displayed by the display devices.
There may be multiple display devices, and after the combined panoramic video image data is recombined into multiple channels of video image data, the recombined video image data may be respectively sent to each display device. The display devices perform display according to the position of each channel of video image data in the panoramic video image data, so as to provide wide viewing-angle visual experience for users. As shown in
Step S41: Obtain multiple channels of correlative video image data and correlative information between each channel of video image data.
The multiple channels of correlative video image data are from multiple cameras that are disposed in the same scenario, and these cameras are placed at different positions in the scenario.
The correlative information includes: information that is used to indicate a physical position of video image data, and captured timestamp information of the video image data.
Step S42: Combine the multiple channels of video image data into a single channel of panoramic video image data according to the correlative information.
Specifically, the multiple channels of video image data are combined according to physical position information and captured timestamp information of each channel of video image data, to form the single channel of panoramic video image data.
Step S43: Recombine the panoramic video image data into multiple channels of video image data satisfying a display requirement.
The panoramic video image data is recombined and filtered into the corresponding number of channels of video image data according to the number of display devices, a supported size of a frame, and a supported format of the video image data.
Step S44: Send each channel of recombined video image data to each display device for display respectively.
If the number of the display devices is four, as shown in
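Purely as an illustration of steps S42 and S43, the following sketch combines synchronous correlative frames into a panoramic frame according to their physical positions and then partitions the panoramic frame into one channel per display device. The Channel type, the simple horizontal concatenation (which omits overlap removal), and the equal-width partitioning are assumptions made for brevity, not requirements of the method.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Channel:
    position: int        # physical position of the capturing camera, left to right
    timestamp: int       # captured timestamp of this frame
    frame: np.ndarray    # one decoded frame, assumed H x W x 3

def combine_and_recombine(channels: list, num_displays: int) -> list:
    """Step S42: combine the correlative channels into one panoramic frame
    according to physical position and captured timestamp; step S43: recombine
    the panoramic frame into one channel of video image data per display."""
    ts = channels[0].timestamp
    assert all(c.timestamp == ts for c in channels), "frames are not synchronous"
    ordered = sorted(channels, key=lambda c: c.position)
    # Overlap removal between adjacent frames is omitted here for brevity.
    panorama = np.hstack([c.frame for c in ordered])
    # One equally wide sub-frame per display device.
    return np.array_split(panorama, num_displays, axis=1)
```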
It should be noted that, in the foregoing step S41, the type of a video interface for obtaining video images may be any one or several of the following: a YPbPr interface, a DVI interface, an HDMI interface, and a VGA interface; that is, the video input interfaces provided by the cameras may be the same or different. In the foregoing step S43, the format of the recombined video image data is consistent with the video image format supported by a display device and is determined according to the video image format supported by the display device.
Furthermore, it should be noted that the video input interface type in the foregoing step S41 and the video output interface type in step S44 may be the same (for example, the video input interface is a YPbPr interface and the video output interface is also a YPbPr interface) or different (for example, the video input interface is a YPbPr interface and the video output interface is an HDMI interface). When the video interface types and the video image data formats supported by the display devices are different, after the single channel of panoramic video image data is recombined into multiple channels of video image data, the format of each channel of video image data also needs to be converted respectively according to the video interface type and the video image format supported by the corresponding display device, and then the video image data is sent to the corresponding display device.
In this embodiment of the present invention, after the multiple channels of video image data collected by the multiple cameras are obtained, the multiple channels of video image data are combined into the single channel of panoramic video image data, recombined into several channels of video image data according to a display requirement, and then sent to display devices for display. The displayed video images can be combined into a wide viewing-angle video image by merely arranging the display devices according to the positions of the video images in the panoramic video image data, so as to provide better visual experience for users. Moreover, in this embodiment of the present invention, the process of combining the multiple channels of video image data into the single channel of panoramic video image data may eliminate any overlapping between the channels of video image data. Therefore, an overlapping phenomenon between the images obtained by the cameras may be allowed, which means that no particularly strict requirement is imposed on the position where a camera is placed or on the distance between a user and the camera set, so that installation complexity of the cameras is lowered.
The display device may also be a display device that is adaptive to the panoramic video image data, and in this case, the number of display devices may be one. After the combined panoramic video image data is recombined into multiple channels of video image data, the multiple channels of video image data are respectively sent to the display device according to the positions of the multiple channels of video image data in the panoramic video image data, and the display device combines the channels of video image data into panoramic video image data for display.
This embodiment of the present invention may be applied to a remote panoramic videoconferencing process, where each party taking part in a conference may send its own video image data to an opposite party (that is, a video image data sending process), and receive and display video image data sent by the opposite party (that is, a video image data receiving process).
As shown in
Step S61: Obtain video image data collected by multiple cameras placed at a local conference site and correlative information between each channel of video image data.
Each camera is placed at a different position, but obtained video image data is correlative, and the correlative information includes a physical position and a captured timestamp of each channel of video image data.
Step S62: Combine the multiple channels of video image data into a single channel of panoramic video image data according to a physical position and a captured timestamp of each channel of video image data.
Step S63: Send the panoramic video image data through a communication network.
Persons skilled in the art may understand that, in the foregoing step S61, the processes of obtaining the video image data collected by the multiple cameras and obtaining the correlative information between each channel of video image data are implemented simultaneously. Obviously, to enable a user in front of multiple displays to view frames that are captured by the cameras at the same time, it must be ensured that the multiple cameras collect scenario images synchronously. In addition, to ensure integrity of the transmitted video images, it must be ensured that no gap exists between the scenario images shot by adjacent cameras, and an overlapping area is preferred, where the overlapping area may be removed in the image combining process.
In this embodiment, the network interface of the communication network may be an ISDN interface, an E1 interface, or a V35 interface, which are based on circuit switching; an Ethernet interface, which is based on packet switching; or a wireless port, which is based on a wireless connection.
Corresponding to the foregoing video image sending process, as shown in
Step S71: Obtain panoramic video image data sent from a remote conference site through a communication network.
Step S72: Recombine the panoramic video image data into multiple channels of video image data satisfying a display requirement.
Step S73: Send each channel of video image data to a corresponding display device for display.
In other embodiments, the video image data sending process may also be: after obtaining multiple channels of video image data and correlative information between each channel of video image data, directly sending the obtained video image data and correlative information through a communication network. Correspondingly, the video image data receiving process is: after receiving the multiple channels of video image data and the correlative information, combining the multiple channels of video image data into a single channel of panoramic video image data according to the correlative information, and recombining the panoramic video image data into multiple channels of video image data according to the number of display devices and sending the recombined video image data to corresponding display devices for display. It should be noted that, in these embodiments, the correlative information between the multiple channels of video image data may be embedded in the video image data (or compressed video image data) for transmission, for example, when the communication network is the Ethernet, the correlative information may be embedded in a video RTP packet for transmission, which facilitates synchronization between the correlative information and the video image data. Definitely, the correlative information may also be transmitted separately, for example, transmitted through an independent data channel.
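The embodiment above states only that the correlative information may be embedded in a video RTP packet; the concrete layout below is an assumption made for illustration. The sketch builds an RTP packet per RFC 3550 whose header extension carries a camera position and a captured timestamp, using a made-up, profile-defined extension identifier (0x7E57) and a hypothetical field layout that is not specified by this embodiment.

```python
import struct

def rtp_packet_with_correlative_info(payload: bytes, seq: int, rtp_ts: int, ssrc: int,
                                     camera_position: int, capture_ts: int,
                                     payload_type: int = 96) -> bytes:
    """Build an RTP packet (RFC 3550) whose header extension carries the
    correlative information of one video frame: the physical position of the
    capturing camera and its captured timestamp."""
    # Extension body: 16-bit camera position + 64-bit capture timestamp,
    # padded to a multiple of 32 bits (12 bytes = 3 words).
    ext_body = struct.pack("!HQ2x", camera_position, capture_ts)
    # 0x7E57 is a made-up, profile-defined identifier used only for this sketch.
    ext_header = struct.pack("!HH", 0x7E57, len(ext_body) // 4)
    # First header byte: version 2, no padding, X=1 (extension present), CC=0.
    header = struct.pack("!BBHII", (2 << 6) | 0x10, payload_type & 0x7F, seq & 0xFFFF,
                         rtp_ts & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)
    return header + ext_header + ext_body + payload
```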
In other embodiments, the video image data sending process may further be: after obtaining multiple channels of video image data and correlative information between each channel of video image data, combining the multiple channels of video image data into a single channel of panoramic video image data according to the correlative information, and after recombining the panoramic video image data into multiple channels of video image data according to the number of display devices (the display devices at the remote conference site), sending the recombined video image data through a communication network. Correspondingly, the video image data receiving process is: receiving the multiple channels of recombined video image data, and directly sending the received video image data to display devices at a local conference site for display.
In addition, in the foregoing embodiment, a transmitting end may send the panoramic video image data directly, and may also send the panoramic video image data after coding. The coding manner may be: H.261, H.263, H.264, MPEG1, MPEG2, or MPEG4. Correspondingly, the panoramic video image data received by a receiving end may be uncoded raw data or coded data. It should be noted that the size of a combined image is generally several times larger than the size of an original image, and in this case, even if a coder is used for coding, the amount of transmitted data is still large, which imposes a strict requirement on the capability of the coder. Based on the foregoing description, in other embodiments of the present invention, multiple coders are adopted for parallel processing; furthermore, due to randomness of the image data, synchronization of the sequence of coded data cannot be ensured, and to ensure that the images displayed by the multiple displays at a display end are shot at the same time, the coded data needs to be synchronized.
Specifically, the foregoing process of recombining the panoramic video image data is actually an image partitioning process, which includes the following steps.
a. Partition the panoramic video image into multiple sub-images, and meanwhile obtain multiple pieces of synchronous information for generating the multiple sub-images, where each sub-image is corresponding to one piece of the synchronous information.
The synchronous information may specifically be a timestamp of the received panoramic video image data, or a self-defined sequence number. The manner for defining the sequence number needs to ensure that the sequence numbers of multiple sub-images obtained by partitioning the same panoramic video image data meet a preset rule; for example, the sequence numbers may be the same or consecutive.
b. Allocate reconstruction information for a partitioning manner of each sub-image, where the reconstruction information is used for recording the partitioning manner of each sub-image.
c. Send each sub-image and corresponding synchronous information and reconstruction information of each sub-image to another device.
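As a non-limiting sketch of steps a and b, the following code partitions a panoramic frame into a grid of sub-images, attaches the same synchronous information (here, the frame timestamp) to every sub-image, and records the partitioning manner of each sub-image as its reconstruction information. The SubImage type and the grid-based partitioning manner are assumptions made for illustration only.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SubImage:
    tile: np.ndarray       # pixel data of one sub-image
    sync_info: int         # synchronous information: timestamp of the panoramic frame
    recon_info: tuple      # reconstruction information: (row, col, rows, cols)

def partition(panorama: np.ndarray, rows: int, cols: int, timestamp: int) -> list:
    """Steps a and b: partition the panoramic frame into rows x cols sub-images,
    giving every sub-image the same synchronous information and recording its
    partitioning manner as reconstruction information."""
    h, w = panorama.shape[:2]
    sub_images = []
    for r in range(rows):
        for c in range(cols):
            tile = panorama[r * h // rows:(r + 1) * h // rows,
                            c * w // cols:(c + 1) * w // cols]
            sub_images.append(SubImage(tile, timestamp, (r, c, rows, cols)))
    return sub_images
```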
Therefore, the foregoing method for processing video image data further includes a synchronization process and a reconstruction process, which are respectively introduced as follows.
The synchronization process is as follows:
Receive each sub-image and the corresponding synchronous information and reconstruction information of each sub-image, where each sub-image and the corresponding synchronous information and reconstruction information are sent by another device, and then classify the sub-images according to the synchronous information to find the multiple sub-images obtained by partitioning the same panoramic video image data, that is, image information obtained at the same time.
A device for implementing the foregoing method includes a receiving buffer, a reconstruction buffer, and a sending buffer. The receiving buffer receives the partitioned sub-images, where the synchronous information of sub-images that belong to the same panoramic image meets a preset rule, for example, the synchronous information is the same or consecutive; the reconstruction buffer stores the sub-images to be reconstructed; and the sending buffer stores a reconstructed image.
The reconstruction information may be a partitioning manner, and the reconstruction process is: reconstructing the classified sub-images according to the partitioning manner to obtain multiple channels of video image data, where each channel of video image data is arranged according to a position of each channel of video image data in the panoramic video image data.
The synchronization process and the reconstruction process may specifically include:
Step a: Implement an initialization operation, that is, determine minimum synchronous information MinSyinfo.
The “minimum synchronous information” in this step may not be minimum, and may be randomly selected and assumed to be the “minimum synchronous information”.
Step b: Take an unselected sub-image from the receiving buffer, and obtain synchronous information CurrSyinfo.
Step c: Judge whether the MinSyinfo is greater than the CurrSyinfo, and if the MinSyinfo is greater than the CurrSyinfo, proceed to step d; otherwise, proceed to step e.
Step d: Determine the CurrSyinfo as the MinSyinfo, and return to step b.
Step e: Perform CDT (check delay time) processing, and if the delay time is greater than a specified delay, proceed to step f; otherwise, proceed to step g.
Step f: Directly output an image stored in the sending buffer, and return to step a.
Step g: Judge whether the MinSyinfo is smaller than the CurrSyinfo, and if the MinSyinfo is smaller than the CurrSyinfo, return to step b; otherwise, proceed to step h.
Step h: Perform CDT processing, and if the delay time is greater than a specified delay, proceed to step f; otherwise, proceed to step i.
Step i: Store the sub-image in the reconstruction buffer.
Step j: Judge whether an unselected sub-image exists in the receiving buffer, and if an unselected sub-image exists in the receiving buffer, return to step b; and if no unselected sub-image exists in the receiving buffer, proceed to step k.
Step k: Reconstruct the sub-image stored in the reconstruction buffer according to the reconstruction information, store a reconstructed image to the sending buffer, and proceed to step f.
It may be understood that, after the sending buffer sends data in step f, the buffer is not released at once, so that when the process proceeds to step f from step e or step h, the image stored in the sending buffer is the previous frame of image that was successfully reconstructed. The image stored in the sending buffer is updated in step k, where the update may be implemented in a data overwriting manner, in a manner of releasing the sending buffer and then storing data in the sending buffer, or in another data updating manner.
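The following is a condensed, single-threaded sketch of the synchronization and reconstruction loop of steps a to k, under several assumptions: the sub-images in the receiving buffer carry a numeric sync_info attribute (as in the partitioning sketch above), a caller-supplied reconstruct callable performs step k, and the CDT check simply compares elapsed wall-clock time with a specified delay. When a smaller synchronous value is found, the sketch restarts the reconstruction group with that value, which condenses steps c, d, and i into one pass.

```python
import time

class FrameSynchronizer:
    """Groups received sub-images by their synchronous information, reconstructs
    the group with the minimum value, and falls back to re-sending the previously
    reconstructed frame when the delay budget is exceeded (steps a to k)."""

    def __init__(self, max_delay_s: float, reconstruct):
        self.max_delay_s = max_delay_s    # the "specified delay" used by the CDT check
        self.reconstruct = reconstruct    # callable: list of sub-images -> one frame (step k)
        self.receiving_buffer = []        # sub-images waiting to be grouped
        self.sending_buffer = None        # last successfully reconstructed frame

    def push(self, sub_image) -> None:
        """Place a decoded sub-image into the receiving buffer."""
        self.receiving_buffer.append(sub_image)

    def pull_frame(self):
        """Run one pass of steps a to k and return the frame to be displayed."""
        start = time.monotonic()
        min_syinfo = None                 # step a: any value may be assumed to be the minimum
        reconstruction_buffer = []
        for sub in self.receiving_buffer:                      # steps b and j
            if time.monotonic() - start > self.max_delay_s:    # steps e and h (CDT) -> step f
                return self.sending_buffer
            if min_syinfo is None or sub.sync_info < min_syinfo:   # steps c and d
                min_syinfo = sub.sync_info
                reconstruction_buffer = [sub]                  # restart the group with the new minimum
            elif sub.sync_info == min_syinfo:                  # step i
                reconstruction_buffer.append(sub)
            # else (step g): the sub-image belongs to a later frame; keep it for a later pass
        if reconstruction_buffer:
            # Step k: reconstruct the selected sub-images and update the sending buffer.
            self.sending_buffer = self.reconstruct(reconstruction_buffer)
            self.receiving_buffer = [s for s in self.receiving_buffer
                                     if s.sync_info != min_syinfo]
        return self.sending_buffer                             # step f
```

In such a sketch, the reconstruct callable could use the reconstruction information recorded during partitioning (for example, the (row, col, rows, cols) tuples assumed above) to place each sub-image back at its position in the panoramic frame.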
In other embodiments, in the recombining process, before the multiple sub-images and the corresponding synchronous information and reconstruction information are sent, the multiple sub-images and the corresponding synchronous information and reconstruction information are coded, where the coding manner may be a compression standard code stream format that meets various current mainstream standards, such as H.261, H.263, H.263++, MPEG1, MPEG2, or MPEG4.
Correspondingly, in the synchronization process, after the sub-images and the corresponding synchronous information and reconstruction information are received, decoding is performed first, and, corresponding to the multiple coders in the recombining process, multiple decoders may also be provided. Afterward, the decoded sub-images are classified according to the synchronous information to find the multiple sub-images obtained by partitioning the same panoramic video image data, that is, image information obtained at the same time.
An embodiment of the present invention further discloses an apparatus for processing video image data, which may implement the method disclosed in the foregoing embodiment.
A structural form of the apparatus for processing video image data is shown in
There are multiple data input interfaces 83, which are respectively connected to multiple cameras, and configured to obtain multiple channels of video image data and correlative information between each channel of video image data, where the correlative information includes: information that is used to indicate a physical position of the video image data, and captured timestamp information of the video image data.
The data combining unit 81 is configured to combine the multiple channels of video image data into a single channel of panoramic video image data according to the correlative information. Specifically, according to a physical position and capture time of each channel of video image data, the multiple channels of video image data are combined into the single channel of panoramic video image data.
The data recombining unit 82 is configured to, according to the number and the size of display devices and a video image format supported by the display devices, recombine the single channel of panoramic video image data into multiple channels of video image data satisfying a display requirement of multiple display devices.
The data output unit 84 is configured to send, through a communication network, the multiple channels of video image data processed and obtained by the data recombining unit 82 to a remote videoconferencing device (which may be a terminal, and may also be an MCU).
Therefore, the remote videoconferencing device may arrange the multiple channels of video image data according to the positions of the video images in the panoramic video image, and then transmit the video image data to the multiple display devices. The video images displayed by all the display devices may be combined into a wide viewing-angle video image, so as to bring panoramic visual experience to users.
The data input interface 83 may be a YPbPr interface, a DVI interface, an HDMI interface, or a VGA interface.
It should be noted that, in order to reduce the amount of data to be transmitted and ensure transmission safety, another structure of the apparatus for processing video image data may further include a functional unit for compression and coding. As shown in
Functions of the data combining unit 91, the data recombining unit 92, the data input interface 93, and the data output unit 94 are basically the same as functions of the data combining unit 81, the data recombining unit 82, the data input interface 83, and the data output unit 84 respectively.
The data coder 95 is configured to obtain multiple channels of video image data recombined and obtained by the data recombining unit 92, and after the obtained video image data is coded, provide the coded video image data for the data output unit 94. A coding manner may be: H.261, H.263, H.264, MPEG1, MPEG2, or MPEG4.
In order to accelerate the data processing speed to ensure real-time data transmission, in other embodiments, multiple data coders may be adopted to simultaneously perform coding processing on the multiple channels of video image data recombined and obtained by the data recombining unit 92. In this case, each channel of video image data processed and obtained by the data recombining unit 92 includes: a sub-image obtained by recombining the panoramic video image, and the corresponding synchronous information and reconstruction information of the sub-image. After the data output unit 94 outputs the multiple channels of video image data coded by the multiple coders, a device that receives the multiple channels of video image data may perform a synchronization process and a reconstruction process according to the synchronous information and the reconstruction information, where for specific content of the synchronization process and the reconstruction process, reference may be made to the description of the foregoing method, which is not repeated here.
The device that receives the multiple channels of video image data is another structural form of the apparatus for processing video image data, which includes multiple data input interfaces and multiple data output interfaces, and further includes data decoders, a data synchronizing unit, and a data reconstructing unit.
The data input interface is configured to obtain multiple channels of coded video image data.
There are multiple data decoders, which are configured to simultaneously decode the multiple channels of coded video image data, where the multiple channels of decoded video image data include multiple sub-images obtained by partitioning the panoramic video image and the corresponding synchronous information and reconstruction information of the multiple sub-images.
The data synchronizing unit is configured to classify the decoded sub-images according to the corresponding synchronous information of the decoded sub-images, and for a specific process, reference may be made to the description in the foregoing method embodiment.
The data reconstructing unit is configured to reconstruct the classified sub-images according to the reconstruction information to obtain multiple channels of video image data, and provide the obtained video image data to the data output interface, where each channel of video image data is arranged according to a position of each channel of video image data in the panoramic video image data.
Another structural form of the apparatus for processing video image data is shown in
Functions of the data combining unit 101 and the data recombining unit 102 are basically the same as the functions of the data combining unit 81 and the data recombining unit 82 respectively.
A difference between this structure and the structure shown in
Specifically, the data input interface 103 may be formed by a network interface and a data receiving unit, where the network interface is configured to establish a connection with the communication network, and the data receiving unit is configured to receive, through the network interface, video image data transmitted by another device through the communication network.
The network interface may be an ISDN interface, an E1 interface, or a V35 interface, which are based on circuit switching; an Ethernet interface, which is based on packet switching; or a wireless port, which is based on a wireless connection.
In addition, if multiple channels of video image data and correlative information between each channel of video image data are coded, where the multiple channels of video image data and the correlative information between each channel of video image data are received by the data input interface 103, another structure of the apparatus for processing video image data needs to include a functional unit for decoding, as shown in
Functions of the data combining unit 111, the data recombining unit 112, the data input interface 113, and the data output interface 114 are basically the same as the functions of the data combining unit 101, the data recombining unit 102, the data input interface 103, and the data output interface 104 respectively.
The data decoder 115 is configured to decode multiple channels of video image data and correlative information between each channel of video image data, where the multiple channels of video image data and the correlative information between each channel of video image data are obtained by the data input interface 113, and provide the decoded multiple channels of video image data and correlative information between each channel of video image data for the data combining unit 111.
In addition, an embodiment of the present invention further provides a videoconferencing system, and a specific structure of the system is shown in
The data combining unit 121, the data sending unit 122, and the data input interfaces 125 are located at a videoconferencing site at one side. The multiple data input interfaces 125 obtain multiple channels of video image data and correlative information between each channel of video image data, the data combining unit 121 combines the multiple channels of video image data into a single channel of panoramic video image data according to the correlative information, and then the data sending unit 122 sends the single channel of panoramic video image data to a remote conference site at the other side through a communication network.
The data receiving unit 123, the data recombining unit 124, and the data output interfaces 126 are located at the remote conference site at the other side. The data receiving unit 123 receives the single channel of panoramic video image data that is carried on the communication network, and then provides the panoramic video image data to the data recombining unit 124, the data recombining unit 124 recombines, according to the number of display devices, a supported size of a frame, and a supported video image format, the single channel of panoramic video image data into multiple channels of video image data satisfying a display requirement of multiple display devices, and the data output interfaces 126 provide the video image data to corresponding display devices.
The display devices are placed according to positions of the video images in the panoramic video image, and the video images displayed by all the display devices may be combined into a wide viewing-angle video image, so as to bring panoramic visual experience to users.
It should be noted that, the data input interface 125 and the data output interface 126 may be YPbPr interfaces, DVI interfaces, HDMI interfaces, or VGA interfaces. In addition, the types of the data input interface 125 and the data output interface 126 may be different, and the format of the video image data obtained by the data input interface 125 may be converted according to the type of the data output interface 126 during recombination by the data recombining unit 124. For example, if the data input interface 125 is a DVI interface, the obtained video image data is in a DVI format, and the data output interface 126 is an HDMI interface, then when the data recombining unit 124 recombines the video image data, the video image data in the DVI format needs to be converted into video image data in an HDMI format.
The conference sites at both sides are required to play roles of a transmitter and a receiver at the same time, that is, to send video image data from a local conference site through the data input interfaces 125, the data combining unit 121, and the data sending unit 122, and receive and process video image data from a remote conference site through the data receiving unit 123, the data recombining unit 124, and the data output interfaces 126.
It should be noted that, in a system with another structural form, a transmitter only needs to obtain multiple channels of video image data and correlative information between each channel of video image data through the data input interfaces 125, and then send the obtained video image data and correlative information through a communication network to a remote conference site. A receiver obtains the multiple channels of video image data and the correlative information between each channel of video image data from the communication network, and then performs operations such as combination and recombination.
The data sending unit 132 and the multiple data input interfaces 135 are located at a conference site at one side, and the data receiving unit 133, the data combining unit 131, the data recombining unit 134, and the multiple data output interfaces 136 are located at a conference site at the other side.
The multiple data input interfaces 135 obtain multiple channels of video image data and correlative information between each channel of video image data, and the data sending unit 132 sends the multiple channels of video image data and the correlative information between each channel of video image data to the conference site at the other side through a communication network. The data receiving unit 133 at the conference site at the other side receives the multiple channels of video image data and the correlative information between each channel of video image data, and then provides the received video image data and correlative information for the data combining unit 131. The data combining unit 131 combines the multiple channels of video image data into a single channel of panoramic video image data according to the correlative information, and then provides the single channel of panoramic video image data for the data recombining unit 134. The data recombining unit 134 recombines, according to the number of display devices, a supported size of a frame, and a supported video image format, the single channel of panoramic video image data into multiple channels of video image data satisfying a display requirement of multiple display devices, and the data output interfaces 136 provide the multiple channels of video image data for corresponding display devices.
It should be noted that, since the data sending units (the data sending units 122 and 132) send the video image data through the communication network, in order to reduce the amount of data to be transmitted and ensure transmission safety, data sent by the data sending units may be coded. Correspondingly, after receiving the data sent through the communication network, the data receiving units (the data receiving units 123 and 133) decode the data.
It should be noted that, in a system with another structural form, after receiving multiple channels of video image data and correlative information, a transmitter combines the multiple channels of video image data into a single channel of panoramic video image data according to the correlative information, then recombines the single channel of panoramic video image data into several channels of video image data, and sends the several channels of video image data; and a receiver receives the several channels of video image data, and then provides the several channels of video image data for display devices at a local conference site for display. A specific structural form is shown in
Functions of the units are basically the same as the units in
In addition, another structure may further include data coders, data decoders, a data synchronizing unit, and a data reconstructing unit.
There are multiple data coders, which are disposed at the conference site where the data recombining unit 144 is located, and configured to simultaneously code the multiple channels of video image data recombined and obtained by the data recombining unit 144, where each channel of video image data recombined and obtained by the data recombining unit 144 includes a sub-image obtained by partitioning the panoramic video image, and the corresponding synchronous information and reconstruction information of the sub-image.
The number of the data decoders is the same as the number of the data coders. The data decoders are disposed at the conference site where the data receiving unit 143 is located, and configured to simultaneously decode multiple channels of coded video image data received by the data receiving unit 143.
The data synchronizing unit is configured to classify, according to the corresponding synchronous information of the sub-images, the sub-images decoded by the data decoders, and for a specific process, reference may be made to the description in the foregoing method embodiment.
The data reconstructing unit is configured to reconstruct the classified sub-images according to the reconstruction information to obtain multiple channels of video image data, and provide the obtained multiple channels of video image data for the data output interface, where each channel of video image data is arranged according to a position of each channel of video image data in the panoramic video image data.
It can be seen that, this embodiment of the present invention is suitable for a situation in which the videoconferencing sites at both sides are configured identically (that is, the number of display devices, the supported size of a frame, and the supported video image format are the same).
Corresponding to the foregoing method and apparatus for processing video image data and the videoconferencing system, an embodiment of the present invention further discloses a videoconferencing terminal. Since the roles played by both sides of a video conference are mutual (that is, both act as a transmitter and a receiver at the same time), a specific structure of the videoconferencing terminal is shown in
The network interface 153 is configured to establish a connection with an external communication network, and the data transceiving unit 152 is configured to obtain data sent from the communication network and send data to the communication network.
For the functions of the other functional units, such as the data combining unit 151, the data recombining unit 154, the data input interfaces 155, and the data output interfaces 156, reference may be made to the content of the apparatus for processing video image data and the videoconferencing system in the foregoing description.
As a transmitter, the videoconferencing terminal needs to obtain multiple channels of video image data and correlative information between each channel of video image data at a local conference site, combine the obtained video image data into a single channel of panoramic video image data according to the correlative information, and send the single channel of panoramic video image data to a remote conference site through the communication network; meanwhile, as a receiver, the videoconferencing terminal needs to receive panoramic video image data sent from the remote conference site through the communication network, recombine the panoramic video image data into multiple channels of video image data, and then transmit the multiple channels of video image data to display devices at the local conference site.
Functions of the data combining unit 161, the data transceiving unit 162, the network interface 163, the data recombining unit 164, the data input interfaces 165, and the data output interfaces 166 are basically the same as the functions of the data combining unit 151, the data transceiving unit 152, the network interface 153, the data recombining unit 154, the data input interfaces 155, and the data output interfaces 156 respectively.
The data coder 167 codes data before the data transceiving unit 162 sends the data, and the data decoder 168 decodes the data after the data transceiving unit 162 receives the data.
Another structure of the videoconferencing terminal is shown in
Functions of the units are basically the same as the functions of the units in
A difference lies in that, as a transmitter, the videoconferencing terminal obtains multiple channels of video image data and correlative information between each channel of video image data at a local conference site, and then directly sends the obtained video image data and correlative information to a remote conference site through a communication network. Meanwhile, as a receiver, the videoconferencing terminal receives multiple channels of video image data and correlative information between each channel of video image data at the remote conference site through the communication network, combines the received video image data and correlative information into a single channel of panoramic video image data, recombines the single channel of panoramic video image data into multiple channels of video image data, and transmits the multiple channels of video image data to display devices at the local conference site.
Functions of the units are basically the same as the functions of the units in
A difference lies in that, as a transmitter, the videoconferencing terminal obtains multiple channels of video image data and correlative information between each channel of video image data at a local conference site, and codes and sends the obtained video image data and correlative information directly to a remote conference site through a communication network. Meanwhile, as a receiver, the videoconferencing terminal receives multiple channels of video image data and correlative information between each channel of video image data that are sent from the remote conference site through the communication network, decodes and combines the received video image data and correlative information into a single channel of panoramic video image data, then recombines the single channel of panoramic video image data into multiple channels of video image data, and transmits the multiple channels of video image data to display devices at the local conference site.
In other embodiments, as a transmitter, the videoconferencing terminal obtains multiple channels of video image data and correlative information between each channel of video image data at a local conference site, combines the obtained video image data into a single channel of panoramic video image data according to the correlative information, recombines, according to the number of display devices, a supported size of a frame, and a supported video image format at a remote conference site, the single channel of panoramic video image data into multiple channels of video image data satisfying a display requirement of multiple display devices, and sends the multiple channels of video image data to the remote conference site through a communication network (or through the communication network after coding). Meanwhile, as a receiver, the videoconferencing terminal receives multiple channels of video image data sent from the other side through the communication network, and provides the received video image data for display devices at the local conference site for display (after decoding, if the data is coded). It should be noted that, in this case, during coding, multiple coders may be adopted to simultaneously code the multiple channels of recombined video image data, and during decoding, multiple decoders are adopted to simultaneously decode the multiple channels of coded video image data; furthermore, a synchronization process and a reconstruction process are performed, that is, classifying, according to the corresponding synchronous information of the sub-images, the sub-images decoded by the data decoders, and reconstructing the classified sub-images according to the reconstruction information to obtain multiple channels of video image data, where each channel of video image data is arranged according to a position of each channel of video image data in the panoramic video image data.
The embodiments in this specification are described in a progressive manner, each embodiment emphasizes a difference from the other embodiments, and for the identical or similar parts between the embodiments, reference may be made to each other. Since the apparatuses disclosed in the embodiments correspond to the methods disclosed in the embodiments, the description of the apparatuses is relatively simple, and for relevant parts, reference may be made to the description of the methods.
Persons skilled in the art may understand that information, a message, and a signal may be represented by using any one of many different techniques and technologies. For example, the message and information mentioned in the foregoing description may be represented as a voltage, a current, an electromagnetic wave, a magnetic field or a magnetic particle, an optical field, or any combination of the foregoing.
Persons skilled in the art may further realize that the units and steps of the algorithms described in the embodiments disclosed by the present invention can be implemented by electronic hardware, computer software, or a combination of the two. In order to clearly describe the interchangeability of hardware and software, the compositions and steps of the embodiments are generally described above according to their functions. Whether these functions are executed by hardware or software depends upon the specific applications and design constraints of the technical solutions. Persons skilled in the art may use different methods for each specific application to implement the described functions, and such implementation should not be construed as a departure from the scope of the present invention.
Persons of ordinary skill in the art may understand that all or a part of the steps in the methods of the foregoing embodiments may be accomplished through a program instructing relevant hardware. The program may be stored in a computer readable storage medium, and the storage medium may include a ROM, a RAM, a magnetic disk, or an optical disk.
The objectives, technical solutions, and beneficial effects of the present invention have been described in further detail through the foregoing specific embodiments. It should be understood that the foregoing descriptions are merely specific embodiments of the present invention, but are not intended to limit the protection scope of the present invention. Any modification, equivalent replacement, or improvement made without departing from the spirit and principle of the present invention shall fall within the protection scope of the present invention.
This application is a continuation of International Application No. PCT/CN2010/076763, filed on Sep. 9, 2010, which claims priority to Chinese Patent Application No. 200910161963.9, filed on Sep. 10, 2009, both of which are hereby incorporated by reference in their entireties.