The invention concerns the field of rendering video, specifically the display of video on a display device.
When a user watches video on a display device, a menu or other type of banner may appear in the display area of the device if the user performs an operation such as a channel selection or channel change. Typically, the generated menu is overlaid over the video picture of the program that the user is watching, as shown in
It is possible that another video source (such as a set top box) has its own menu or other type of object that is also shown on the display device, as shown in
A method and apparatus are disclosed for modifying the display area of a display device. In one illustrative embodiment of the present invention, the display device moves an object rendered with an on screen display from a first area to a second area when an object collision takes place in the first area.
A method and apparatus are disclosed for modifying the display area of a display device. In another illustrative embodiment of the present invention, the display device detects an area of the display screen that is subject to a text crawl. In response to this detection, the display device scales the video of said display area to remove the area subject to the text crawl.
The present invention is directed towards the modification of a display area of a display device in view of objects (such as an on screen display (OSD) generated menu, text, a channel banner, closed captioning data, a user selectable option, and a text crawl) that may interfere with the display of the video programming. It is understood that the present invention may be implemented in various forms of hardware, software, firmware, special purpose processors, or a combination thereof. Preferably, the present invention is implemented as a combination of hardware and software. Moreover, the software is preferably implemented as an application program tangibly embodied on a program storage device. Such an application program may be capable of running on an operating system such as Windows CE™, a Unix based operating system, and the like, where the application program is able to manipulate video information from a video signal.
The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units (CPU), a random access memory (RAM), and input/output (I/O) interface(s). The computer platform also includes an operating system and microinstruction code. The various processes and functions described herein may either be part of the microinstruction code or part of the application program (or a combination thereof) that is executed via the operating system. The application program primarily provides video data controls for recognizing the attributes of a video signal and for rendering video information provided from a video signal.
The application program may also control the operation of the OSD embodiments described in this application, the application program being run on a computer processor such as a Pentium™ III, as an example of a type of processor. The application program also may operate with a communications program (for controlling a communications interface) and a video rendering program (for controlling a display processor). Alternatively, all of these control functions may be integrated into the processor for the operation of the embodiments described for this invention.
It is to be further understood that, because some of the constituent system components and method steps depicted in the accompanying Figures are preferably implemented in software, the actual connections between the system components (or the process steps) may differ depending upon the manner in which the present invention is programmed. Given the teachings herein, one of ordinary skill in the related art will be able to contemplate these and similar implementations or configurations of the present invention.
The operation of the invention with the OSD displaying menu or text information works with a display processor that displays video signals in different display formats. Video signals that are processed by the display processor are received terrestrially, by cable, DSL, satellite, the Internet, or any other means capable of transmitting a video signal. Preferably, video signals comport to a video standard such as DVB, ATSC, MPEG, NTSC, or another known video signal standard.
Similarly, the display OSD operates with a processor coupled to a communications interface such as a cable modem, DSL modem, phone modem, satellite interface, or other type of communications interface capable of handling bi-directional communications. Preferably, the processor is capable of receiving data communicated via a communications interface, such communicated data representing web page data that is encoded with a formatting language such as HTML, or other type of formatting commands. Additionally, the processor is capable of decoding data transmitted as an MPEG based transmission, graphics data, audio data, or textual data that are able to be rendered using a display processor, OSD, or audio processing unit such as a SoundBlaster™ card. Such communicated data is decoded and rendered via the processor. In the case of HTML data, a format parser (such as a web browser) is used with the graphics processor to display HTML data representing a web page, although other types of formatted data may be rendered as well.
Video decoder 25 is conditioned to scale the attributes of a decoded video signal. For example, video decoder 25 zooms in to a specific area of a decoded video signal or video decoder 25 reduces the size of a decoded video signal relative to the display area such a signal will be rendered on. Other scaling functions are available, depending upon the needs of illustrative embodiments of the present invention.
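As a rough illustration of these scaling operations, the following sketch treats a decoded frame as a NumPy array; the function names and the nearest-neighbour resampling are hypothetical stand-ins for the internal scaling functions of video decoder 25, not a description of its actual implementation.

```python
import numpy as np

def zoom_to_region(frame: np.ndarray, top: int, left: int,
                   height: int, width: int) -> np.ndarray:
    """Zoom in to a specific area of the decoded frame: crop the region of
    interest and stretch it back to the full frame size."""
    region = frame[top:top + height, left:left + width]
    # Nearest-neighbour resampling is an assumed, simplified scaling filter.
    rows = np.linspace(0, height - 1, frame.shape[0]).astype(int)
    cols = np.linspace(0, width - 1, frame.shape[1]).astype(int)
    return region[rows][:, cols]

def reduce_frame(frame: np.ndarray, factor: int = 2) -> np.ndarray:
    """Reduce the size of the decoded frame relative to the display area by
    simple decimation."""
    return frame[::factor, ::factor]
```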
In other input data modes, units 72, 74 and 78 provide interfaces for Internet streamed video and audio data from telephone line 18, satellite data from feed line 11 and cable video from cable line 14 respectively. The processed data from units 72, 74 and 78 is appropriately decoded by unit 17 and is provided to decoder 100 for further processing in similar fashion to that described in connection with the terrestrial broadcast input via antenna 10.
A user selects for viewing either a TV channel or an on-screen menu, such as a program guide, by using a remote control unit 70. Processor 60 uses the selection information provided from remote control unit 70 via interface 65 to appropriately configure the elements of
The transport stream provided to decoder 100 comprises data packets containing program channel data and program specific information. Unit 22 directs the program specific information packets to processor 60, which parses, collates and assembles this information into hierarchically arranged tables. Individual data packets comprising the user selected program channel are identified and assembled using the assembled program specific information. The program specific information contains conditional access, network information and identification and linking data enabling the system of
Processor 60 assembles received program specific information packets into multiple hierarchically arranged and inter-linked tables. The hierarchical table arrangement includes a Master Guide Table (MGT), a Channel Information Table (CIT) as well as Event Information Tables (EITs) and optional tables such as Extended Text Tables (ETTs). The hierarchical table arrangement also incorporates new service information (NSI) according to the invention. The resulting program specific information data structure formed by processor 60 via unit 22 is stored within internal memory of unit 60.
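One way to picture the resulting data structure is the following sketch, which models the hierarchically arranged and inter-linked tables as nested Python dataclasses; the field names are illustrative assumptions rather than the actual table syntax stored within the internal memory of unit 60.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

# Illustrative field names only; not the actual table layout.
@dataclass
class ExtendedTextTable:            # ETT (optional table)
    event_id: int
    text: str

@dataclass
class EventInformationTable:        # EIT
    event_id: int
    title: str
    start_time: int
    ett: Optional[ExtendedTextTable] = None

@dataclass
class ChannelInformationTable:      # CIT
    channel_number: int
    channel_name: str
    events: List[EventInformationTable] = field(default_factory=list)

@dataclass
class MasterGuideTable:             # MGT: top of the hierarchy
    version: int
    channels: Dict[int, ChannelInformationTable] = field(default_factory=dict)
```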
If a user selects option 630, the display device is configured to have an OSD generated object placed in a location that would not interfere with the placement of OSD text from a video source, such as a set top box. As shown previously in
Upon the recognition by video decoder 25 that an OSD generated object is already located in a display area, video decoder 25 moves the OSD object that it creates to a second location in the display area. As shown in
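This collision handling can be sketched as a simple rectangle-overlap test followed by relocation; the fallback location chosen below (mirroring the object to the opposite corner of the display area) is only an assumed placement policy, since the embodiment merely requires that the second location avoid the existing object.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

    def overlaps(self, other: "Rect") -> bool:
        # Axis-aligned rectangle intersection test.
        return (self.x < other.x + other.w and other.x < self.x + self.w and
                self.y < other.y + other.h and other.y < self.y + self.h)

def place_osd_object(new_osd: Rect, existing_osd: Rect,
                     display_w: int, display_h: int) -> Rect:
    """If the new OSD object would collide with an object already on screen,
    move it to a second, non-overlapping location."""
    if not new_osd.overlaps(existing_osd):
        return new_osd
    # Hypothetical fallback: mirror the object to the opposite corner.
    return Rect(display_w - new_osd.x - new_osd.w,
                display_h - new_osd.y - new_osd.h,
                new_osd.w, new_osd.h)
```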
A display device can be configured to recognize the presence of text crawling across a display area and eliminate such text. By analyzing successive video frames of decoded video, a display device determines a bounded region of a display area that is occupied by the video crawl text inserted by a broadcaster.
The inventors recognize that a video crawl text region is typically located at the lower extremity of a display area. This region lends itself to the removal of the text crawl from the display area by excising the horizontal lines occupied by the text crawl from the display area. Preferably, this operation is accomplished by scaling the video display area by use of video decoder 25 (from
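A minimal sketch of this scaling step, assuming the decoded picture is available as a NumPy array and that the top pixel line of the crawl region is already known, might look as follows; the nearest-neighbour stretch is an assumption standing in for whatever scaling filter video decoder 25 actually applies.

```python
import numpy as np

def excise_crawl_region(frame: np.ndarray, crawl_top: int) -> np.ndarray:
    """Drop the horizontal lines at the lower extremity of the frame that are
    occupied by the text crawl, then stretch the remaining picture back to
    the full display height."""
    remaining = frame[:crawl_top]                 # picture above the crawl
    # Assumed nearest-neighbour vertical rescale back to full height.
    rows = np.linspace(0, crawl_top - 1, frame.shape[0]).astype(int)
    return remaining[rows]
```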
Specifically, a text crawl can be detected by using motion detection techniques and/or OCR devices. Optical characters or block motion vectors within a crawl area exhibit a horizontal motion whose magnitude is consistent with the motion of the text crawl, where such text moves at a relatively uniform horizontal velocity across a display area. Once such conditions are detected, the bounded area described by this activity is defined and the horizontal lines occupied by the text crawl are identified. This area of text crawl is then excised from a rendered display area.
The operation of using motion detection to detect a text crawl begins with the process shown in
In order to determine the motion vectors corresponding to a text crawl, video decoder 25 performs a motion compensation operation to detect the rectilinear motion of a present frame relative to a preceding frame. Changes in the vertical and horizontal directions of the blocks that constitute a video frame are detected and used to predict the corresponding blocks of the present frame. The horizontal motion of a text crawl is detected by analysis and comparison of horizontal motion vectors in a particular region of the video area relative to the horizontal motion vectors throughout the whole video area. A resultant horizontal motion vector for each row of blocks is computed by using vector addition, as shown in display area 1200 of
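In sketch form, and assuming the per-macroblock motion vectors are exposed as a (rows, cols, 2) array of (dx, dy) values, the row-wise vector addition and a horizontal-motion test might look like this; the thresholds are hypothetical.

```python
import numpy as np

def row_resultants(motion_vectors: np.ndarray) -> np.ndarray:
    """Vector addition across each row of macroblocks: motion_vectors has
    shape (rows, cols, 2) holding (dx, dy) per block; the result has shape
    (rows, 2), one resultant vector per block row."""
    return motion_vectors.sum(axis=1)

def is_horizontal_crawl_candidate(resultant: np.ndarray, blocks_per_row: int,
                                  dy_tolerance: float = 1.0) -> bool:
    """A row whose resultant is strongly horizontal (negligible vertical
    component, assumed average horizontal displacement above one pixel per
    block) is treated as a crawl candidate."""
    dx, dy = resultant
    return abs(dy) <= dy_tolerance and abs(dx) > blocks_per_row
```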
The process continues with a bifurcated process in which, in step 1315, each row of macroblocks for a particular frame is compared against a second row of macroblocks from a previous frame. This operation helps determine a series of vectors that correspond to the horizontal motion of such macroblock rows. Then in step 1325, the resultant of such vectors is determined to correspond to a text crawl if a number of resultant vectors have close to the same magnitude and point in the same direction, as defined above.
Step 1320 proceeds in a similar fashion to step 1315, but instead of calculating resultant motion vectors between at least two frames for rows of macroblocks, motion vectors corresponding to rows of macroblocks representing an average are calculated. Then in step 1330, the resultant of such average vectors is determined to correspond to a text crawl if the resulting vectors have close to the same magnitude and point in the same direction.
If either step 1325 or step 1330 results in a determination that macroblocks corresponding to a certain row or rows represent a text crawl, information is stored in step 1335 that identifies the macroblock rows and frames that have been associated with a text crawl. In step 1340, video decoder 25 determines which rows of macroblocks have resultant vectors that have been identified as being associated with a text crawl. In step 1350, video decoder 25 defines the crawl boundaries and excises such a region from the display area, either by removing the rows corresponding to such a region or by applying a video scaling function.
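Tying steps 1315 through 1350 together, a hedged sketch of the whole pass might look as follows; the macroblock size, tolerances, and the median-based agreement test are assumptions, since the embodiment only requires that the flagged rows have close to the same magnitude and direction.

```python
import numpy as np

def detect_crawl_rows(motion_vectors: np.ndarray,
                      magnitude_tol: float = 0.1,
                      dy_tol: float = 1.0) -> list:
    """Sketch of steps 1315/1325: form the resultant motion vector for each
    row of macroblocks and keep the rows whose resultants are nearly equal in
    magnitude and point in the same horizontal direction."""
    resultants = motion_vectors.sum(axis=1)            # shape (rows, 2)
    candidates = [(row, dx) for row, (dx, dy) in enumerate(resultants)
                  if abs(dy) <= dy_tol and abs(dx) > 0]
    if not candidates:
        return []
    # Assumed agreement test: compare each row against the median resultant.
    ref_dx = float(np.median([dx for _, dx in candidates]))
    return [row for row, dx in candidates
            if np.sign(dx) == np.sign(ref_dx)
            and abs(dx - ref_dx) <= magnitude_tol * abs(ref_dx)]

def crawl_boundaries(crawl_rows: list, block_height: int = 16) -> tuple:
    """Sketch of steps 1340/1350: translate the flagged macroblock rows into
    the top and bottom pixel lines of the crawl region, which can then be
    excised with the scaling sketch shown earlier."""
    # block_height of 16 pixels is an assumed macroblock size.
    return (min(crawl_rows) * block_height,
            (max(crawl_rows) + 1) * block_height)
```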
The present invention may be embodied in the form of computer-implemented processes and apparatus for practicing those processes. The present invention may also be embodied in the form of computer program code embodied in tangible media, such as floppy diskettes, read only memories (ROMs), CD-ROMs, hard drives, high density disk, or any other computer-readable storage medium, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing the invention. The present invention may also be embodied in the form of computer program code, for example, whether stored in a storage medium, loaded into and/or executed by a computer, or transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing the invention. When implemented on a general-purpose processor, the computer program code segments configure the processor to create specific logic circuits.