This application claims priority under 35 U.S.C. §119(a)-(d) of United Kingdom Patent Application No. 1209683.0, filed on May 31, 2012 and entitled “Method, device, computer program and information storage means for transmitting a source frame into a video display system”. The above cited patent application is incorporated herein by reference in its entirety.
The invention relates to the field of video display systems, more particularly to multi-projector video systems.
The invention further relates to the transmission of a source frame into such a system and the division and processing of the source frame by the system.
The present invention relates to a video display system comprising a multi-projector (MP) arrangement capable of distributing sub-frames of images between a plurality of video projectors (VPs).
Consider a video projection system comprising multiple projectors arranged to generate adjacent, partially overlapping, images on a projection screen. The goal is to display high-definition (HD) video offering the user high image quality. Each individual video projector generates an image with a given definition and a size determined by the projector lens focal length, the size of the projector's light modulation device (e.g. an LCD panel) and the distance between the projector and the screen. Increasing the projection distance yields a larger, but also darker, image since the brightness decreases with the square of the distance. Covering a very large projection screen with sufficient definition and brightness usually requires aggregating several video projectors in such a manner that they cover adjacent, partially overlapping, zones of the total screen area. In the overlapping zones, a technique known as blending ensures a smooth transition between adjacent projectors in a manner that can accommodate small displacements introduced e.g. by vibrations or thermal expansion. Blending consists of continuously decreasing the brightness of the image generated by one projector towards the border of the zone covered by said projector and, complementarily, increasing the brightness of the image generated by the adjacent projector, so as to obtain uniform brightness after superposition. Such a technique is well known and is described in the prior art.
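By way of illustration only (this Python sketch is not part of the original disclosure; the linear ramp and the pixel positions are assumptions), complementary blending weights over an overlap zone can be expressed as follows:

```python
# Illustrative sketch of complementary blending ramps between two
# horizontally adjacent projectors: inside the overlap zone the left
# projector's weight falls from 1 to 0 while the right projector's
# weight rises from 0 to 1, so the superposed brightness stays uniform.

def blend_weights(x, overlap_start, overlap_end):
    """Return (left_weight, right_weight) for horizontal pixel position x."""
    if x < overlap_start:              # covered by the left projector only
        return 1.0, 0.0
    if x > overlap_end:                # covered by the right projector only
        return 0.0, 1.0
    t = (x - overlap_start) / (overlap_end - overlap_start)
    return 1.0 - t, t                  # the two weights always sum to 1

# Example: overlap between pixel columns 1800 and 1920
print(blend_weights(1860, 1800, 1920))   # -> (0.5, 0.5)
```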
Patent application US 2011/0019108 discloses how to extract sub-frames illuminated by each projector, overlay areas of illumination and define blending parameters. The partitioning and blending of the image to be projected by a multi-projector system has to be carefully adjusted to give the user the impression of a perfectly matching, continuous image over the whole screen area. Such a calibration process is typically carried out during system installation, or during power-up, and generally requires some user interaction, e.g. taking digital calibration photos of the screen.
Each video projector (VP) in the system displays a sub-image of the overall video image to be displayed by the complete multi-projector (MP) system.
In order to make the system easy to install and rearrange, or to make it look sleeker, communication and data transfer between video projectors are done using a synchronous wireless network (i.e. network elements share a common network clock). This kind of network is well adapted to constant-bandwidth transfers, such as video and audio. The system can be designed for the display of uncompressed high-quality video; the associated wireless network then typically uses a 60 GHz frequency band in order to provide the required bandwidth.
With an aim of matching as many video formats and video frame ratios as possible, each video projector of the system may be rotated in various orientations, according to the screen size, image shape, mounting environment, etc. This results in a situation where the projectors constituting the system do not all have the same orientation. Some might be horizontal (projecting an image in landscape orientation), while others might be vertical (projecting an image in portrait orientation).
For video display, a video frame is conventionally displayed line by line, from the top to the bottom, in landscape mode (i.e. long side of the frame is horizontal). The instant corresponding to the beginning of a video frame display process is marked by a signal called Vertical Synchronization (Vsync) signal.
When using a multi-projector, i.e. multi-display, video projection system, each video projector is in charge of displaying a sub-frame of the global frame of the image to display. It is important that the display of all sub-frames, associated with all video projectors in the system, is well synchronized, in order to avoid visual artifacts. This means that the Vsync signals marking the beginning of a sub-frame display on each slave video projector are arranged to occur simultaneously.
Patent application US2005/0174482 A1 describes an example of the synchronization of multiple video output data in a multi display system. In particular, the document describes a multi-screen video reproducing system capable of performing synchronized video reproduction for a long time by using a simple system without recognizing an absolute time. The multi-screen video reproducing system comprises a LAN (local area network) functioning as a network.
In classical multi-projection systems, the device in charge of splitting and distributing the video sub-frame data (e.g. the master projector) buffers the full video frame. Then it splits it into sub-frames and distributes them line by line to the corresponding displays. This device also distributes synchronization signals. Display devices (e.g. the slave projectors) receive data line by line and display them, in accordance with the synchronization signals.
As the full source video frame is stored before processing, this classical method generates a latency corresponding to a full frame buffering time before video line data start to be fed to the video projectors. This latency may be critical for real-time applications such as video gaming, flight simulators, etc., and constitutes a problem of the video display system.
The invention sets out to address the problem of latency, while keeping a correct synchronization between all video projectors constituting the video projection system.
According to a first aspect, this is achieved by provision of a method of transmitting a source frame into a video display system comprising a plurality of projectors, the video display system further arranged to produce a projected image from the source frame, each projector arranged to display a sub-frame of the projected image,
the method comprising the steps of:
The proposed invention comprises the determination of an optimized latency between the beginning of a source video frame and the beginning of sub-frame display by projectors comprised in the system. This optimized latency determination is based on the orientation and position of the projectors, as some configurations (for instance, if all projectors displaying the bottom edge of a global frame from the video source are set in landscape position) allow the start of a frame display before all video data has been received by the video projectors.
A main advantage of this method is the removal of superfluous latency between the video source data delivery from the video source device and the display of those data on the screen: the latency is optimized, the optimization taking account of the multi-projection system configuration. At the same time, a tight and correct synchronization is kept between all projector displays, thereby avoiding visual artifacts.
In a further embodiment of the invention, the plurality of projectors comprises a master projector and a plurality of slave projectors, the master projector being arranged to implement the steps of the method.
A (master) projector can be arranged to be in charge of global source video data reception and video sub-frame distribution to the other projectors, and also to host functions to split, rotate and store sub-frame data. It should be noted, however, that these functions may also be hosted in e.g. the source device.
In a further embodiment of the invention, the method further comprises the step of:
Buffering means are determined according to the optimized latency. Video synchronization signals, used to drive the beginning of frame display on the projectors, are generated so that a time interval equal to the optimized latency is maintained between the video source synchronization signals and the projectors' video synchronization signals.
An advantage of this method is that a sub-frame rotation process, when needed, is done on the fly during video data reception. Thus, this processing does not generate extra latency.
In a further embodiment of the invention, the buffering comprises
With such a method, the buffering need is optimized.
In a further embodiment of the invention, the method further comprises the step of:
This embodiment of the invention takes advantage of the projectors' video flip capability to optimize end-to-end display latency.
In a further embodiment of the invention, the method further comprises the step of:
In a further embodiment of the invention, the method further comprises the step of:
In a further embodiment of the invention, the method further comprises the step of:
In another aspect of the invention, there is provided a method of operating a video display system comprising a plurality of projectors, the video display system further arranged to cooperate with a video source to produce a projected image from a source frame, each projector arranged to display a sub-frame of the projected image comprising lines,
the method comprising the steps of:
In another aspect of the invention, there is provided a method of operating a video display system comprising a plurality of projectors, the video display system further arranged to cooperate with a video source to produce a projected image from a source frame, each projector arranged to display a sub-frame of the projected image comprising lines,
the method comprising the steps of:
In a further embodiment of the invention, the plurality of projectors comprises a master projector (102) and a plurality of slave projectors (103, 104, 105), the master projector being arranged to implement the steps of the method.
In a further embodiment of the invention, the method further comprises the step of:
In a further embodiment of the invention, the method further comprises the step of:
In another aspect of the invention, there is provided a computer program comprising instructions for carrying out each step of the method according to any embodiment of any previous aspect of the invention when the program is loaded and executed by a programmable apparatus.
In another aspect of the invention, there is provided an information storage means, readable by a computer or a microprocessor, storing instructions of a computer program that make it possible to implement the method according to any embodiment of any previous aspect of the invention.
In another aspect of the invention, there is provided a device for transmitting a source frame into a video display system comprising a plurality of projectors, the video display system further arranged to produce a projected image from the source frame, each projector arranged to display a sub-frame of the projected image, the device comprising:
The proposed invention comprises the determination of an optimized latency between the beginning of a source video frame and the beginning of sub-frame display by projectors comprised in the system. This optimized latency determination is based on the orientation and position of the projectors, as some configurations (for instance, if all projectors displaying the bottom edge of a global frame from the video source are set in landscape position) allow the start of a frame display before all video data has been received by the video projectors.
A main advantage of this method is the removal of superfluous latency between the video source data delivery from the video source device and the display of those data on the screen: the latency is optimized, the optimization taking account of the multi-projection system configuration. At the same time, a tight and correct synchronization is kept between all projector displays, thereby avoiding visual artifacts.
In a further embodiment of the invention the device further comprises:
Buffering means are determined according to the optimized latency. Video synchronization signals, used to drive the beginning of frame display on the projectors, are generated so that a time interval equal to the optimized latency is maintained between the video source synchronization signals and the projectors' video synchronization signals.
An advantage of this method is that a sub-frame rotation process, when needed, is done on the fly during video data reception. Thus, this processing does not generate extra latency.
In a further embodiment of the invention the device further comprises:
In a further embodiment of the invention the device further comprises:
This embodiment of the invention takes advantage of the projectors' video flip capability to optimize end-to-end display latency.
In a further embodiment of the invention, the plurality of projectors comprises a master projector and a plurality of slave projectors.
a illustrates timings of control signals which facilitate the synchronous display of video in a multi-projector system, according to common practice.
b illustrates timings of control signals which facilitate the synchronous display of video in a multi-projector system, according to an embodiment of the invention.
Using such a system, the user is able to display high-quality video (for example 4k2k video, i.e. an image of 3840×2160 pixels) on a large area with standard video projector devices (able to support 1080p HD video, i.e. 1920×1080 pixels).
In the exemplary system described, video input (4k2k video) is provided by a video source device 100 to one video projector of the system. This video projector, referred to as the master video projector 102, is able to manage the cutting of the video, e.g. comprising a global source frame 110S, into four 1080p HD streams or sub-frames, and the creation of a blending area, comprising an overlapping area, in order to compensate for chrominance and luminance differences between the video projectors 102, 103, 104 and 105. The master projector 102 is also in charge of video display synchronization signal generation and distribution to the slave projectors. (Alternatively, all or part of the cutting, synchronization or distribution functions of the master projector 102 can be implemented in the source device 100.) This results in a final display which appears as if issued from a single projector. The video source 100 is connected to the master video projector 102 of the multi-projector system through connection 101, either through a wired HDMI connection, or through a specific wireless connection able to sustain the high bit rate of the video without degradation.
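As a minimal sketch of this cutting step (the 2×2 layout and the plain quadrant split are assumptions for illustration; the real system additionally widens each region by the blending overlap), the division of a 3840×2160 source frame into four 1920×1080 sub-frames may look as follows:

```python
# Hypothetical quadrant split of a 4k2k source frame into four 1080p
# sub-frames, one per projector; blending overlap handling is omitted.

SRC_W, SRC_H = 3840, 2160
SUB_W, SUB_H = 1920, 1080

def subframe_rect(col, row):
    """(x, y, width, height) of the quadrant at grid position (col, row)."""
    return col * SUB_W, row * SUB_H, SUB_W, SUB_H

for row in range(2):
    for col in range(2):
        print((col, row), subframe_rect(col, row))
```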
In the multi-projection video system 10, connections between projectors are based on wireless technology, such as 60 GHz technology, able to provide several Gbps (gigabits per second) of data throughput, which may be used for video exchange without need of compression technology. As an alternative embodiment, connections between projectors are based on wired technology.
In the exemplary multi-projection video system 10, an image capture device (such as a camera, not represented) is connected in a preferred mode through a wired connection to a video projector, preferably the master video projector, and enables the capture of the full display projection area 106. In another embodiment, this connection could also be achieved through wireless radio means to one of the video projectors of the system.
In such a multi-projection video system 10, each video projector 102, 103, 104 and 105 is in charge of projecting a sub-frame 106A, 106B, 106C, 106D, respectively, of the global source frame 110S, and the aggregation of all those sub-frame projections is the global frame projection 110P.
A video source device 100 provides a global video frame data 110S line by line, starting from the top left of the source frame.
The video projector displays frames by means of an LCD (liquid crystal display) grid, crossed by light coming from a powerful lamp, each intersection of the grid being arranged to represent a pixel. This LCD grid is always in landscape orientation (i.e. when the video projector is set upright on a table, the picture it displays is wider than it is high), and video data is set in the LCD grid line by line.
If a video projector has to display a global frame sub-part in landscape orientation, lines in the global frame correspond to lines in the video projector LCD grid. The LCD grid can thus be filled with video data in the same order as this video data is provided to the video projector, and the display process of the frame sub-part can start as soon as video data is provided to the video projector.
In the case of a projector (most often a slave projector) set in portrait mode, lines of the global frame correspond to columns of the video projector LCD (as the video projector, and thus the LCD grid, has been rotated by 90 or 270 degrees with respect to the lens axis). All lines of the sub-frame corresponding to the rotated video projector (i.e. all lines of the sub-frame the video projector has to display) have to be provided to the video projector before a line of the video projector LCD grid can be completed. This means that a video projector in portrait orientation has to receive and store (in a buffer) the full video data of the sub-frame it has to display before actually starting to display it (i.e. filling data in its LCD grid), as the toy sketch below makes concrete.
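The following sketch (Python; the 90° convention and the 4×6 sub-frame size are illustrative assumptions) shows the portrait case: a source line fills a single LCD column, and the first LCD line is only complete once the very last source pixel has arrived.

```python
H, W = 4, 6                            # toy sub-frame: 4 lines of 6 pixels

def lcd_cell_90(r, c):
    """LCD (line, column) receiving sub-frame pixel (r, c) under one
    possible 90-degree rotation convention."""
    return W - 1 - c, r

# Source line 0 spreads over column 0 of every LCD line:
print([lcd_cell_90(0, c) for c in range(W)])
# Completing LCD line 0 requires pixel (H-1, W-1), the last source pixel:
print(lcd_cell_90(H - 1, W - 1))       # -> (0, 3)
```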
Such a situation illustrates that if any video projector allocated to display the lowest parts of the global frame is in portrait orientation, then the last data of the global frame (bottom right corner) need to be received before starting the display process. As all video projectors have to start displaying sub-frames corresponding to a same global frame at the same time to avoid visual artifacts, this means that the full global frame data has to be received from the source and stored before being processed. This corresponds to a full frame latency (i.e. a frame is being received from the video source while the previous frame is being displayed).
a illustrates a full frame latency situation and the associated timings. As described in video standards specifications (such as the HDMI 1.4 standard), video frame data presentation is driven by a set of synchronization signals (or clocks). In the specifications, there is provided a vertical synchronization signal, here referred to as Vsync, which marks each new frame start (see the HDMI standard). The video source and the video projectors all follow the same synchronization signal Vsync 200. During two subsequent occurrences of Vsync 200A and 200B, as viewed by a slave video projector, the video source is delivering global video frame data n+1 230, while the video projectors are displaying video frame sub-part data corresponding to global video frame n 220. Thus the latency between a global frame delivery by the source and its actual display corresponds to the duration of a global frame (about 16.7 ms if the video is at 60 frames per second). In other words, the latency as indicated 201 equals the full image buffering time.
Conversely, as illustrated in
This latter situation, referred to as latency optimization, is illustrated in
b illustrates an example of this “display delay”, i.e. latency. The video source follows Vsync signal 210, while all video projectors (master and slaves) follow Vsync signal 211. Vsync signals 210 and 211 have the same period (one frame duration), but the delay between an occurrence of source Vsync 210 and the following occurrence of the video projectors' Vsync 211 is reduced in accordance with the “start of display” threshold, and corresponds to a ‘display_delay’ or latency 212. It follows that the start of display of a frame can take place earlier than in the conventional display technique of
The method begins by following steps 400 to 404, which are run for each video projector present in the system, including the master video projector. These steps comprise:
During step 401, the video projector is arranged to display a specific pattern which facilitates determination of the video projector orientation, for instance an arrow. While this pattern is being displayed, a snapshot of the global screen is taken 402 by means of a digital camera, here connected to the master video projector. During steps 403 and 404 the snapshot is used to determine the projector orientation and the coordinates of the sub-frame area displayed by the video projector (the coordinates being defined relative to the global display area). Many techniques for determining such coordinates have been described in the prior art, so this is not described further here. Once steps 400 to 404 have been effected for each video projector present in the system, the display_delay value (as defined above) is determined 405. The display_delay is based on the position and orientation information previously determined. This display_delay value determination is detailed further in
In the affirmative, the start-of-display (SOD) threshold is defined as the (pixel) line following the lowest bottom border of the sub-frames located immediately above the lowest sub-frames 503.
In an alternative embodiment, if sub-frames located immediately above the lowest sub-frames are all in landscape orientation, the SOD threshold is the first (pixel) line of the lowest sub-frame.
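A hedged sketch of these SOD threshold rules (Python; the data layout, with each sub-frame given as a ((top_line, bottom_line), orientation) pair, is an assumption, and the handling of upside-down projectors of steps 506-507 is omitted) could read:

```python
def sod_threshold(subframes, src_vres):
    """Global-frame line at which display may start (SOD threshold)."""
    bottom_edge = max(rect[1] for rect, _ in subframes)
    lowest = [s for s in subframes if s[0][1] == bottom_edge]
    upper = [s for s in subframes if s[0][1] < bottom_edge]

    # Step 501: a portrait projector at the bottom edge needs the whole
    # frame, so display may only start once the last source line arrives.
    if any(orient == "portrait" for _, orient in lowest):
        return src_vres - 1

    # Step 503: landscape bottom row; wait for the sub-frames immediately
    # above it to be complete.
    if upper and not all(orient == "landscape" for _, orient in upper):
        return max(rect[1] for rect, _ in upper) + 1

    # Alternative embodiment: rows above (if any) are all landscape too,
    # so display may start at the first line of the lowest sub-frames.
    return min(rect[0] for rect, _ in lowest)

# Example: one portrait projector in the upper row, landscape bottom row.
layout = [((0, 1139), "portrait"), ((0, 1139), "landscape"),
          ((1020, 2159), "landscape"), ((1020, 2159), "landscape")]
print(sod_threshold(layout, 2160))     # -> 1140
```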
Then a display_delay value is determined based on the SOD threshold. Step 504: determine display_delay based on SOD. Indeed, the display_delay can be defined as the delay between the Vsync signal corresponding to the beginning of the current frame and a Hsync signal marking the beginning of the line corresponding to the SOD threshold. As the period and resolution of the frame are known, the display_delay (in seconds) can be obtained by:
display_delay = (number of lines in top vertical blanking + active line number of SOD threshold) / (total number of lines per frame × number of frames per second)
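Concretely (a Python sketch; the 45 top blanking lines and the 2160-line source timing are assumed values for illustration, not timing taken from the disclosure):

```python
def display_delay_s(sod_active_line, top_blanking_lines, lines_per_frame, fps):
    """Delay in seconds between source Vsync and the projectors' Vsync."""
    line_period = 1.0 / (lines_per_frame * fps)    # duration of one line
    return (top_blanking_lines + sod_active_line) * line_period

# SOD threshold at active line 1080 of a 2160-line source at 60 fps:
print(display_delay_s(1080, 45, 2160 + 45, 60))    # ~0.0085 s (8.5 ms)
```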
Finally, buffering means to store data to be displayed by each video projector is determined (in step 505: determine buffering means based on SOD and projector position and orientation (for each projector)). More details are provided with reference to
If step 502 returns a negative, i.e. if at least one (lowest) video projector is upside-down, an optional request 506 may be sent to the upside-down video projector, to flip the video to be displayed (so that this video would be displayed upright). If all upside-down video projectors confirm they have the ability to process such a flip (request OK? 507), the display_delay value can be determined from step 503, previously described, with steps 504 and 505 following sequentially. If the video projectors do not have the ability to flip the image, the display_delay value is determined by step 501.
This step is an embodiment of step 505, discussed previously.
Video data provided by the video source has to be buffered, e.g. by the master video projector, for a time period equivalent to the display_delay previously determined.
If the video projector is in portrait mode, or in landscape mode but not at the bottom of the global display (i.e. does not display parts of the global frame bottom edge), the sub-frame to be displayed by the video projector needs to be fully stored (step 601: buffer is a full frame capacity memory), because all data constituting this sub-frame will be received by the master video projector before the SOD threshold has been reached. A memory twice the size of the sub-frame data is required. Usually such a memory is of a “ping pong” type, meaning that data corresponding to a sub-frame are stored in a first half of the memory while data corresponding to the previous sub-frame are read from the second half of the memory.
buff_above_SOD = SOD_offset − offset of sub-frame first line
(where SOD_offset is the offset of the SOD threshold in number of lines; offsets are defined with reference to the global frame first line).
Moreover, all the video projectors have the same frame rate as the video source, but individual video projector vertical resolution is in some cases lower than the video source resolution. This means data will be delivered by the video source faster than they are transmitted to the video projectors (same frame rate, but fewer lines to deliver). Thus, some extra buffer capacity is needed to store these data. The number of lines to be stored in this extra buffer is calculated by:
buff_below_SOD = (src_vres − SOD_offset) × (1 − (VP_vres / src_vres))
where src_vres is the vertical resolution of the video source frame (global frame) and VP_vres is the vertical resolution of the video projector.
Data corresponding to the top of the sub-part is read (in order to be transferred to the video projector) while data corresponding to the bottom of the sub-part is written, and thus a memory of the First In First Out (FIFO) type is needed. Step 603 indicates the buffer choice: the buffer is a FIFO memory with capacity for buff_above_SOD + buff_below_SOD. The memory size is arranged so that the memory can accommodate at least buff_above_SOD + buff_below_SOD lines.
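Putting the two contributions together (a Python sketch with assumed resolutions; variable names follow the formulas above):

```python
def fifo_capacity_lines(sod_offset, first_line_offset, src_vres, vp_vres):
    """FIFO depth in lines: buff_above_SOD + buff_below_SOD."""
    buff_above_sod = sod_offset - first_line_offset
    buff_below_sod = (src_vres - sod_offset) * (1 - vp_vres / src_vres)
    return buff_above_sod + buff_below_sod

# Example: 2160-line source, 1080-line projector whose sub-frame starts
# at global line 1020, SOD threshold at line 1140:
print(fifo_capacity_lines(1140, 1020, 2160, 1080))   # 120 + 510 = 630.0
```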
Video projectors have to fill their LCD matrix with video data line by line. But if a video projector is in portrait position, a line of the sub-frame it has to display becomes a column in the video projector LCD matrix. Thus, sub-frame video data have to be rotated according to the video projector orientation before being transmitted to the video projector. This rotation process is a reordering of the sub-frame pixel data. In the present invention, this pixel data reordering is done before storage in the master video projector, so that data can be transmitted to the video projector in the same order as they are read from the buffering memory. Once the storage means capacity has been defined, the order in which data have to be stored is determined, depending on the orientation of the video projector that will display those data.
This process is implemented according to the method provided in
If the angle between the horizontal and the horizontal axis of the video projector is 90° 703 (video projector in portrait orientation, with its right side at the top), a reordering is needed, so the storage mode is set to “mode 2” (step 704). Data are stored in memory so that the sub-frame is rotated 90° (i.e. first line first column pixel data is stored in the memory address corresponding to the last line first column pixel, first line second column pixel data is stored in the memory address corresponding to the last but one line, first column pixel, and so on).
If the angle between the horizontal and the horizontal axis of the video projector is 180° 705 (video projector in landscape orientation, upside-down), a reordering is needed, so the storage mode is set to “mode 3” (step 706). Data are stored in memory so that the frame sub-part is rotated 180° (i.e. first line first column pixel data is stored in the memory address corresponding to the last line last column pixel, first line second column pixel data is stored in the memory address corresponding to the last line, last but one column pixel, and so on).
If the angle between the horizontal and the horizontal axis of the video projector is 270° 707 (video projector in portrait orientation, with its right side at the bottom), a reordering is needed, so the storage mode is set to “mode 4” (step 708). Data are stored in memory so that the frame sub-part is rotated 270° (i.e. first line first column pixel data is stored in the memory address corresponding to the first line last column pixel, first line second column pixel data is stored in the memory address corresponding to the second line, last column pixel, and so on).
Other angle values are not allowed, so in such a case the method ends with error in step 709.
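The four storage modes amount to the following address mappings (an illustrative Python sketch; a sub-frame has H lines and W columns, and each function returns the (line, column) cell a source pixel (r, c) occupies after reordering, following the mode descriptions above):

```python
def mode_1(r, c, H, W):    # 0 deg: no reordering
    return r, c

def mode_2(r, c, H, W):    # 90 deg: stored image is W lines x H columns
    return W - 1 - c, r

def mode_3(r, c, H, W):    # 180 deg: stored image is H lines x W columns
    return H - 1 - r, W - 1 - c

def mode_4(r, c, H, W):    # 270 deg: stored image is W lines x H columns
    return c, H - 1 - r

# First two pixels of the first source line under mode 2, for a toy
# 4x6 sub-frame: last line then last-but-one line, first column.
H, W = 4, 6
print(mode_2(0, 0, H, W), mode_2(0, 1, H, W))   # (5, 0) (4, 0)
```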
When a pixel data is received (step 800: Rx pixel data), a loop is followed, parsing all global frame sub-frames 800A. For each sub-frame 801, it is checked whether the pixel belongs to the sub-frame (step 802). This check is done by comparing the pixel coordinates in the global frame with the sub-frame vertical and horizontal border offsets determined previously. If the pixel does not belong to the current sub-frame, the loop is reprocessed 801 with the next sub-frame. If the pixel belongs to the current sub-frame, the storage mode of the sub-frame is checked (step 803: storage mode is 1?). If the storage mode is “mode 1” (upright video projector, no pixel data reordering), the pixel data is stored in the FIFO memory 805 corresponding to the sub-frame. If the storage mode is any other mode, the pixel data is stored in the “ping pong” memory corresponding to the sub-frame, following the storage mode rule corresponding to this sub-frame. Step 804: store pixel data in memory corresponding to sub-frame based on storage mode and pixel index in the frame.
Once the loop has been run over all sub-frames, the next received pixel data is processed.
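A minimal sketch of this dispatch loop (Python; the sub-frame record with "rect", "mode", "fifo", "pingpong" and "remap" fields is an assumed data structure, "remap" being one of the mode mappings sketched above):

```python
def dispatch_pixel(x, y, value, subframes):
    """Route one global-frame pixel to every sub-frame memory it belongs to
    (a pixel may fall in several sub-frames within blending zones)."""
    for sf in subframes:                     # loop 800A over all sub-frames
        left, top, right, bottom = sf["rect"]
        if not (left <= x <= right and top <= y <= bottom):
            continue                         # step 802: pixel outside sub-frame
        r, c = y - top, x - left             # pixel index within the sub-frame
        if sf["mode"] == 1:                  # step 803: upright projector
            sf["fifo"].append(value)         # step 805: FIFO, source order
        else:                                # step 804: reorder, then store
            H, W = bottom - top + 1, right - left + 1
            line, col = sf["remap"](r, c, H, W)
            sf["pingpong"][line][col] = value
```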
The figure illustrates the synchronization mechanism used to ensure that all video projectors are synchronized for the display of the global frame sub-frames. On all video projectors, pixels are provided to the video projector LCD matrix following a clock referred to as the pixel clock. The Vsync signal is generated as the first pixel of a video frame (including blanking pixels) is processed. The pixel clock can be adjusted using phase-locked loops (PLL) so that the time delimited by two consecutive Vsync occurrences matches a targeted frame rate.
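As a back-of-the-envelope illustration of the relation the PLL must maintain (the raster dimensions below are assumed, 1080p-like values), the pixel clock fixes the Vsync period through pixel_clock = pixels_per_line × lines_per_frame × frame_rate:

```python
def pixel_clock_hz(pixels_per_line, lines_per_frame, fps):
    """Pixel clock needed so two consecutive Vsyncs delimit 1/fps seconds."""
    return pixels_per_line * lines_per_frame * fps

# Example: a 2200 x 1125 total raster at 60 fps (1080p-like timing)
print(pixel_clock_hz(2200, 1125, 60))   # 148500000 Hz, i.e. 148.5 MHz
```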
In
From the slave video projector perspective in
The master VP reads out data from the buffering memories, in order to transmit them to the slave video projectors and to its own display means, for its own sub-frame.
b illustrates the “ping pong” memory case, used if a storage mode other than “mode 1” has been chosen for a video projector. Step 1010 illustrates the wait for the local Vsync signal occurrence. Once this local Vsync occurrence is detected 1010A, the memory bank of the “ping pong” memory that was used to read data is switched to write mode, so that pixel data corresponding to the next frame will be stored in this bank; and the memory bank of the “ping pong” memory that was used to write data is switched to read mode, so that pixel data corresponding to the frame now to be displayed will be read from this bank (step 1011: flip memory storage bank). Then, pixel data are read out from the read bank of the ping pong memory and either provided to the master VP display means, following the local pixel clock, if the data has to be displayed by the master VP, or transmitted to a slave VP through the wireless network adapter. (This is represented in the figure by step 1012: start reading pixel data from memory and transmit them to destination projector.) In this latter case, depending on the communication protocols run on the wireless network, pixel data might be regrouped into packets before transmission. Such techniques are well known and are not further described here.
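A hedged sketch of this ping-pong mechanism (Python; the class shape and names are illustrative assumptions, and transport over the wireless network is omitted):

```python
class PingPongMemory:
    """Two banks: one written with the incoming frame while the other is
    read out for display; roles swap on each local Vsync (step 1011)."""

    def __init__(self, lines, cols):
        self.banks = [[[0] * cols for _ in range(lines)] for _ in range(2)]
        self.write_bank = 0                   # bank receiving the next frame

    def on_local_vsync(self):                 # step 1011: flip banks
        self.write_bank ^= 1

    def write(self, line, col, value):        # store a reordered pixel
        self.banks[self.write_bank][line][col] = value

    def read_frame(self):                     # step 1012: stream the read bank
        for line in self.banks[self.write_bank ^ 1]:
            yield line
```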
A processor, e.g. CPU 1201, is present in order to manage most of the configuration tasks, and implement several methods, such as those described in relation to
A video source interface (IF) module 1204 receives the video data and synchronization information from the video source (not shown). For example, the video source interface can be an HDMI adapter. This video source IF outputs the video data (not shown) to a video splitter module 1205, and also outputs synchronization signals to the synchronization controller 1209. The synchronization controller 1209 is responsible for generating the local and ref Vsync signals (i.e. the display frame synchronization signals), and the pixel clock signal, according to methods as presented in
Modules comprised in the area delimited by box 1211 are modules which have been added to a classical video projector device, in order to implement an embodiment of the invention. In an alternative embodiment of the invention, a part or the totality of the modules included in area 1211 can be integrated in a different device such as the video source device.