1. Field
The present invention is directed to a system for enhancing video.
2. Description of the Related Art
Some events (e.g., sporting events or other types of events) are difficult to follow on television. For example, participants or objects used in the events are difficult to see, or the area in which an event takes place cannot be properly viewed on television.
In other instances, the skills or talent of a participant are not easily appreciated by the lay person. Spectators would enjoy the event more if they understood the intricacies of what was happening in the event.
In the past, broadcasters have deployed a varied repertoire of technologies to highlight various aspects of events for viewers. However, many of the technologies utilized by broadcasters are limited due to various constraints. For example, some broadcasters have inserted virtual graphics into video during post production in order to show the skills of star athletes. While such enhanced video is interesting, many viewers prefer to see the enhancements made to video during the event.
Broadcasters have also begun inserting virtual graphics into live video. However, systems that insert graphics into live video have not provided the full degree of freedom that some producers would like.
A system is proposed that can enhance video captured by a mobile camera capable of changing location and orientation. In one embodiment, the camera is mounted on an aircraft (e.g., helicopter, airplane, balloon, glider, etc.) so that the camera can be moved anywhere the aircraft can fly. In one example implementation, the camera is mounted such that its orientation can be changed (e.g., panned, tilted, rolled) with respect to the aircraft (which can also change its orientation). Sensors are used to automatically determine an instantaneous location and orientation of the camera. Various moving objects within the field of view of the camera can also be equipped with sensors to measure location and orientation. The information from the above-described sensors is used to create graphics and add those graphics to video from the camera in proper perspective, in real time. The concept of real time can include small delays for processing, but is not meant to include post processing for the insertion of a graphic into the video of an event after the event is over.
In one example implementation, the graphics are created as a function of the location and/or orientation of one or more of the moving objects. The graphics can also be created as a function of one or more atmospheric conditions (e.g., wind speed, wind direction, humidity, precipitation, temperature, etc.). The location and/or orientation of the camera on the aircraft is used to transform the graphic to an image in the video from the camera.
On the shore there is a Production System 40 which is in communication with a Video Communication system 42 (or multiple Video Communication systems), Base Station 44 (or multiple Base Stations) and Reference Station 46 (or multiple Reference Stations). Production system 40 will receive video from camera apparatus 22, enhance the video as described herein, and output the video for broadcast and/or storage.
In one embodiment, camera apparatus 22 includes various electronics to automatically sense and determine, in real time, the location and orientation of the video camera capturing video of the sailboat race. Sailboat 2, sailboat 4, buoy 10, buoy 12, power boat 14 and power boat 16 also include electronics for automatically determining the location and orientation, in real time, of the respective objects. Note that although buoy 10 and buoy 12 may be anchored to the bottom of the sea (or anchored to another object), the buoys will be able to move due to the tide. In some embodiments, the system will not include orientation sensing for buoys 10 and 12.
The video from camera apparatus 22 is wirelessly transmitted to Video Communication system 42 using means known in the art. Upon being received at Video Communication system 42, the video is provided to Production System 40, where a time code will be added to the video and the video will subsequently be enhanced, as described herein. In another embodiment, the time code can be added by Video Communication system 42 prior to transmission to Production System 40.
Each of sailboat 2, sailboat 4, buoy 10, buoy 12, power boat 14, power boat 16 and camera apparatus 22 wirelessly transmits its sensor data (location and/or orientation sensor data) to Base Station 44. Any suitable means known in the art for wireless transmission of this type of data can be used. In one embodiment, the data is communicated using a TDMA protocol in the 2.5 GHz band. In one embodiment, Ethernet can also be used. Base Station 44 can transfer the received information to Production System 40 so that the video from camera apparatus 22 can be enhanced based on the received sensor data. Additionally, Production System 40 can also provide information to Base Station 44 for transmission to each of the moving objects (e.g., boats, buoys and helicopter) at sea.
In one embodiment, location sensing is performed using the Global Positioning System (GPS). Reference Station 46 includes a GPS Receiver and is surveyed so that its precise location is accurately known. Reference Station 46 will receive GPS information from GPS satellites and determine differential GPS error correction information, as is known in the art. This differential GPS error correction information is communicated from Reference Station 46 to Base Station 44 via Production System 40 (or directly from Reference Station 46 to Base Station 44) for retransmission to the GPS Receivers (or accompanying computers) on sailboat 2, sailboat 4, buoy 10, buoy 12, power boat 14, power boat 16 and camera apparatus 22. In another embodiment, the system can use pseudolites to provide additional data to the GPS Receivers instead of or in combination with differential GPS error correction information.
In operation, the sensors described above will be used to determine the location and orientation of the various components of the system. Based on this location and orientation information, various metrics, performance information and statistics can be determined. In addition, based on that location and orientation information, one or more graphics are created and inserted into the video captured by camera apparatus 22. The insertion of the graphics into the video is performed by Production System 40. More details will be provided below.
Camera system 102 includes a high definition camera mounted to a camera base such that the camera can move with respect to the base along multiple axes. The camera base is mounted to the aircraft such that the camera base will not move with respect to the aircraft, while the camera itself will move with respect to the camera base. Sensors are used to detect movement of the camera and provide information on the orientation of the camera to computer 100. One example of a suitable camera system is the Cineflex V14HD gyro-stabilized airborne camera system by Axsys Technologies, of General Dynamics Advanced Information Systems. Camera system 102 includes five axes of motion, remote steering and fine correctional movements for stabilization to a sub-pixel level. In one implementation, the capture device is a Sony HDC-1500 1080p professional broadcast camera. This camera has three CCDs and outputs 1080p high definition video at an aspect ratio of 16:9. The camera is mounted such that it can move along two axes with respect to the base. These two axes will be referred to as an inner ring and an outer ring.
The output of camera system 102 provides the following information to computer 100: pan (also called azimuth) of the outer ring (referred to below as PanOuter), tilt (also called elevation) of the outer ring (referred to below as TiltOuter), pan of the inner ring (referred to below as PanInner), tilt of the inner ring (referred to below as TiltInner), roll (referred to below as RollInner), zoom (a voltage level indicating how far the camera lens is zoomed), a measured focus value (a voltage indicating the position of the focus ring), and a measured value of the 2× Extender (e.g., an on/off value indicating whether the 2× Extender is turned on or off). This data received from camera system 102 provides information to determine the orientation of the camera with respect to the camera base.
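For illustration only, the per-frame readings listed above might be gathered into a single record along the lines of the following sketch; the CameraSample class and its field names are hypothetical and are not part of the described system.

```python
from dataclasses import dataclass

@dataclass
class CameraSample:
    """One per-frame reading from camera system 102 (hypothetical field names)."""
    pan_outer: float    # azimuth of the outer ring, degrees
    tilt_outer: float   # elevation of the outer ring, degrees
    pan_inner: float    # azimuth of the inner ring, degrees
    tilt_inner: float   # elevation of the inner ring, degrees
    roll_inner: float   # roll of the inner ring, degrees
    zoom_volts: float   # voltage indicating how far the lens is zoomed
    focus_volts: float  # voltage indicating the position of the focus ring
    extender_2x: bool   # whether the 2x Extender is turned on
    time_code: str      # time code used to match this sample to a video frame
```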
GPS Receiver 104 is a real time kinematic (RTK) GPS Receiver from NovAtel, Inc. (www.novatel.com). GPS Receiver 104 will receive signals from multiple GPS satellites to determine a location of the GPS Receiver. Differential GPS error correction information will be used to reduce error in the GPS derived location. That information is provided to computer 100 and/or to IMU 106.
IMU 106 automatically detects its orientation. One suitable IMU 106 is the AIRINS Geo-referencing and Orientation System from IXSEA. In one embodiment, IMU 106 will include 6 axes: 3 closed loop fiberoptic gyros and 3 accelerometers. Other forms of an IMU can also be used. IMU 106 can determine true heading in degrees and roll/pitch in degrees. In one embodiment, IMU 106 is programmed by inputting the relative difference in location between IMU 106 and GPS Receiver 104. In this manner, IMU 106 can receive the GPS derived location from GPS Receiver 104 and determine its own location based on and as a function of the location of GPS Receiver 104. Similarly, IMU 106 can be programmed by inputting the difference in location between IMU 106 and the camera base of camera system 102 so that IMU 106 can also calculate the location of the camera base of camera system 102. This location information can be provided to computer 100 for transmission to Production System 40 via transceiver 108. Computer 100, or a computer in Production System 40, can be programmed to know the difference in orientation between IMU 106 and the camera base of camera system 102. Therefore, when IMU 106 reports its orientation information, computer 100 (or another computer in Production System 40) can easily translate that orientation information to the orientation of the camera base of camera system 102. Note that the locations determined by GPS Receiver 104 and IMU 106 are in world space.
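The lever-arm arithmetic described above can be sketched as follows. This is a simplified illustration that assumes the offsets are measured in the IMU's body frame and that heading, pitch and roll are applied in that order; the AIRINS unit performs the equivalent computation internally, so the helper functions below are hypothetical.

```python
import numpy as np

def rotation_from_heading_pitch_roll(heading_deg, pitch_deg, roll_deg):
    """Body-to-world rotation built from IMU angles (z-y-x order assumed)."""
    h, p, r = np.radians([heading_deg, pitch_deg, roll_deg])
    rz = np.array([[np.cos(h), -np.sin(h), 0], [np.sin(h), np.cos(h), 0], [0, 0, 1]])
    ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    return rz @ ry @ rx

def camera_base_location(gps_position, imu_angles_deg, gps_to_imu_offset, imu_to_base_offset):
    """Translate the GPS-derived location to the camera base using measured offsets.

    gps_position is a 3-vector in world space; the offsets are 3-vectors
    measured in the IMU's body frame (an assumption for this sketch).
    """
    R = rotation_from_heading_pitch_roll(*imu_angles_deg)
    imu_position = gps_position + R @ gps_to_imu_offset
    return imu_position + R @ imu_to_base_offset
```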
In step 144, IMU 106 will determine the location and orientation of the camera base of camera system 102, as discussed above. In step 146, IMU 106 will transmit the location and orientation of the camera base to computer 100. In step 148 computer 100 will add time code to the location and orientation information received from IMU 106. That location and orientation information, with the time code, will be stored by computer 100. In other embodiments, the time code can be added to the data by Production System 40, or another component of the overall system. The process of
Communication Control computer 420 is connected to Communication Interface 422 (e.g., network card, modem, router, wireless access point, etc.), which is in communication with Base Station 44 and Reference Station 46. Via Communication Interface 422, Communication Control computer 420 receives the sensor data from camera apparatus 22 mounted to helicopter 20, the sailboats, the power boats, the buoys and other sources of data. Communication Control computer 420 synchronizes and stores the sensor data (locally or with another computer). Communication Control computer 420 also receives differential GPS error correction information from GPS Reference Station 46 and sends that data to the various GPS Receivers described above.
Vertical Interval Time Code (VITC) inserter 406 receives program video (from helicopter 20 via Video Communication System 42) and adds a time code. Race computer 404 receives the video from VITC inserter 406 and sensor data from Communication Control computer 420. Race computer 404 uses the sensor data described herein to calculate/update metrics and performance data, and to determine how/where to create graphics. For example, Race computer 404 may determine where in world coordinates lay lines should be (see discussion below) and then transform the world coordinates of the lay lines to positions in a video image. Race computer 404 uses the time code in the video to identify the appropriate sensor data (including camera data, boat position/orientation data and atmospheric data).
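As one plausible sketch of the time code lookup performed by Race computer 404, the helper below selects, for a given frame time, the stored sensor sample whose time stamp is nearest; the data layout and function name are assumptions for illustration.

```python
import bisect

def sensor_sample_for_frame(frame_time, sample_times, samples):
    """Return the stored sensor sample whose time stamp is nearest to frame_time.

    sample_times must be sorted ascending; samples[i] corresponds to sample_times[i].
    """
    i = bisect.bisect_left(sample_times, frame_time)
    if i == 0:
        return samples[0]
    if i == len(sample_times):
        return samples[-1]
    before, after = sample_times[i - 1], sample_times[i]
    return samples[i] if (after - frame_time) < (frame_time - before) else samples[i - 1]
```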
Although it is Race computer 404 that determines the graphics to be inserted into the video, it is Render computer 408 that actually draws the graphics to be inserted into the video. Race computer 404 sends to Render computer 408 a description of the graphics to draw. Render computer 408 uses the information from Race computer 404 to create appropriate key and fill signals, which are sent to keyer 410. Keyer 410 uses the key signal from Render computer 408 to blend the graphics defined by the fill signal with the program video. The program video is provided to keyer 410 from video delay 412, which receives the program video from VITC inserter 406. Video delay 412 is used to delay the video to account for the processing of Race computer 404 and Render computer 408. A video is still considered live if it is delayed a small number of frames (or a small amount of time); for example, live video may include video that was delayed a few seconds.
Booth UI computer 432 has a monitor with a mouse (or other pointing device) or a touch screen (or other type of user interface) which displays the available graphics that the system can add to the video. An operator can touch the screen to choose a particular graphic. This selection is sent to Communication Control computer 420 and Race computer 404.
Race computer 404 presents feedback to Booth UI computer 432, which transforms it into a visual representation of confidence-of-measure and availability on a per-GPS-Receiver basis. Race computer 404 smooths small gaps in the data via interpolation. Race computer 404 also stores data for use in replay. Render computer 408 can interpolate the 2D coordinates of the objects in video between frames since (in one embodiment) Race computer 404 only computes positions once per frame. In one embodiment, the functions of Race computer 404 and Render computer 408 can be combined into one computer. In other embodiments, other computers or components of
Tsync computer 434 is used to synchronize video time to GPS time. Tsync computer 434 is connected to GPS Receiver 436, VITC reader 435 and VITC inserter 406. VITC reader 435 is also connected to the output of VITC inserter 406.
In step 560, the graphics created in step 558 are added to the video without drawing over (occluding) images of real world objects. Once it is determined where to add a graphic to the video, the system needs to make sure not to draw the graphic over objects that should not have a graphic drawn over them. In one embodiment, the system will blend the graphic using a keyer or similar system. A graphic and video are blended by controlling the relative transparency of corresponding pixels in the graphic and in the video through the use of blending coefficients. One example of a blending coefficient is an alpha signal used in conjunction with a keyer. The value of a blending coefficient for a pixel in the graphic is based on the luminance and chrominance characteristics of that pixel, or of a neighborhood of pixels in the video. Inclusions and exclusions can be set up which define which pixels can be drawn over and which pixels cannot be drawn over based on colors or other characteristics. For example, U.S. Pat. No. 6,229,550, incorporated herein by reference in its entirety, provides one example of how to blend a graphic using keying based on color.
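A minimal sketch of the blending operation, assuming 8-bit video and fill images and a per-pixel blending coefficient between 0 and 1, is shown below; in the described system a hardware keyer performs the equivalent operation on the key and fill signals.

```python
import numpy as np

def blend_graphic(video_frame, fill, alpha):
    """Blend a rendered graphic (fill) over a video frame.

    video_frame, fill: H x W x 3 uint8 images.
    alpha: H x W array of blending coefficients (0 = keep video, 1 = show graphic).
    Setting alpha to 0 over excluded pixels lets real-world objects occlude the graphic.
    """
    a = alpha[..., np.newaxis]
    out = a * fill.astype(np.float32) + (1.0 - a) * video_frame.astype(np.float32)
    return out.astype(np.uint8)
```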
In another embodiment, geometric keying can be used. In this embodiment, the system will know the locations of real world objects based on the GPS information and orientation information. The system will model where those objects are and make sure not to draw over those locations. Either type of keying can be used to make sure that the graphics do not occlude real world objects. Rather, using this type of keying will allow the real world objects to occlude the graphics for a more realistic effect.
Step 556 includes transforming locations in world coordinates to positions in the video. The task is to calculate the screen coordinates, (sx, sy), given the world coordinates (world space) of a point. In practice, the point in world space might correspond to a physical object like a boat location, or a part of a geometrical concept, like a lay line, but in general can be any arbitrary point. One example method is to break the overall mapping into three separate mappings:
When composed together, the three mappings create a mapping from world coordinates into screen coordinates:
Each of the three mappings noted above will now be described in more detail.
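As an illustration of how the three mappings compose, the sketch below chains them in code. The names world_to_screen, T_wtc, K and distort are ours; illustrative versions of the individual stages appear later in this section.

```python
import numpy as np

def world_to_screen(X_world, T_wtc, K, distort):
    """Map a 3D world-space point to distorted (pixel) screen coordinates.

    T_wtc   : 4x4 world-to-camera rigid-body transform.
    K       : 4x4 projection matrix.
    distort : function mapping undistorted (sx, sy) to distorted pixel coordinates.
    """
    Xw = np.append(np.asarray(X_world, dtype=float), 1.0)   # homogeneous world point
    Su = K @ (T_wtc @ Xw)                                    # homogeneous undistorted screen point
    sx_u, sy_u = Su[0] / Su[3], Su[1] / Su[3]                # divide by the 4th element
    return distort(sx_u, sy_u)
```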
The mapping from 3D world coordinates to 3D camera centered coordinates (TWTC) will be implemented using 4×4 homogeneous matrices and 4×1 homogeneous vectors. The simplest way to convert a 3D world point into a 3D homogeneous vector is to add a 1 into the 4th element of the 4×1 homogeneous vector:
The way to convert from a 3D homogeneous vector back to a 3D inhomogeneous vector is to divide the first 3 elements of the homogeneous vector by the 4th element. Note that this implies there are infinitely many ways to represent the same inhomogeneous 3D point with a 3D homogeneous vector, since multiplication of the homogeneous vector by a constant does not change the inhomogeneous 3D point due to the division required by the conversion. Formally, we can write the correspondence between one inhomogeneous vector and infinitely many homogeneous vectors as:
for any k≠0.
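A short sketch of the conversions just described (the helper names are ours):

```python
import numpy as np

def to_homogeneous(x):
    """Append a 1 as the 4th element to form a 4x1 homogeneous vector."""
    return np.append(np.asarray(x, dtype=float), 1.0)

def from_homogeneous(xh):
    """Divide the first 3 elements by the 4th; any nonzero scale k gives the same point."""
    return xh[:3] / xh[3]
```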
In general the mapping TWTC can be expressed with a 4×4 matrix:
which can be expressed using row vectors as:
Finally, if we use homogeneous vectors for both the world point in world coordinates, Xw, and the same point expressed in camera centered coordinates, Xc, the mapping between the two is given by matrix multiplication using TWTC:
Xc = TWTC Xw (6)
If we want the actual inhomogeneous coordinates of the point in the camera centered coordinate system, we just divide by the 4th element of Xc. For example, if we want the camera centered x-component of a world point we can write:
To build the matrix TWTC, we start in the world coordinate system (world space), which is a specific UTM zone, and apply the following transformations:
Thus, the final rigid-body transform TWTC, which converts points expressed in world coordinates to points expressed in the camera centered coordinate system (suitable for multiplication by a projection transform), is given by:
The forms of the three rotation matrices Rx, Ry and Rz suitable for use with 4×1 homogeneous vectors are given below. Here the rotation angle specifies the rotation between the basis vectors of the two coordinate systems.
The matrix representation of the translation transform that operates on 4×1 homogeneous vectors is given by:
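The rotation and translation forms referred to above can be written out in code as follows. This is a generic construction of 4×4 homogeneous rotation and translation matrices; the specific sequence of factors that makes up TWTC depends on the aircraft and camera geometry described earlier.

```python
import numpy as np

def Rx(a):
    """Rotation about the x-axis by angle a (radians), for 4x1 homogeneous vectors."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]], dtype=float)

def Ry(a):
    """Rotation about the y-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]], dtype=float)

def Rz(a):
    """Rotation about the z-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)

def T(dx, dy, dz):
    """Translation transform for 4x1 homogeneous vectors."""
    m = np.eye(4)
    m[:3, 3] = [dx, dy, dz]
    return m

# TWTC is a product of factors of these forms, e.g. rotations for the aircraft,
# camera base and camera rings, and a translation for the camera location in the
# UTM zone; the exact order and angles come from the sensors described above.
```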
The mapping of camera centered coordinates to undistorted screen coordinates (K) can also be expressed as a 4×4 matrix which operates on homogeneous vectors in the camera centered coordinate system. In this form, the mapping from homogeneous camera centered points, Xc, to homogeneous screen points, Su, is expressed:
To get the actual undistorted screen coordinates from the 4×1 homogeneous screen vector, we divide the first three elements of Su by the 4th element.
Note further that we can express the mapping from homogeneous world points to homogeneous undistorted screen points via matrix multiplication.
One embodiment uses a pinhole camera model for the projection transform K. If the camera centered coordinate system is oriented so that its x-axis is parallel to the sx screen coordinate axis and its y-axis is parallel to the sy screen coordinate axis (which runs from the bottom of an image to the top of an image), then K can be expressed as:
Ny = number of pixels in the vertical screen direction
φ = vertical field of view
par = pixel aspect ratio
uo, vo = optical center
A, B = clipping plane parameters (17)
The clipping plane parameters, A and B, do not affect the projected screen location, (sx, sy), of a 3D point. They are used for the details of rendering graphics and are typically set ahead of time. The number of vertical pixels, Ny, and the pixel aspect ratio, par, are predetermined by the video format used by the camera. The optical center, (uo, vo), is determined as part of a calibration process. The remaining parameter, the vertical field of view φ, is the parameter that varies dynamically.
The screen width, height and pixel aspect ratio are known constants for a particular video format: for example Nx=1920, Ny=1080 and par=1 for 1080i. The values of uo, vo are determined as part of a calibration process. That leaves only the field of view, φ, which needs to be specified before K is known.
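One way to assemble K from these parameters is sketched below. The sign conventions (camera looking along the positive z-axis, sy increasing upward) and the default values are assumptions for illustration; the actual values of uo, vo and the field of view come from calibration and the per-frame measurements.

```python
import numpy as np

def projection_matrix(fov_v_deg, Ny=1080, par=1.0, u0=960.0, v0=540.0, A=1.0, B=0.0):
    """Pinhole projection matrix for 4x1 homogeneous camera-centered points.

    Assumes the camera looks along +z and that par is pixel width / pixel height.
    The default u0, v0 are placeholders for a 1920x1080 image; real values come
    from calibration. A and B are clipping-plane parameters and do not affect (sx, sy).
    """
    fy = (Ny / 2.0) / np.tan(np.radians(fov_v_deg) / 2.0)  # vertical focal length, pixels
    fx = fy / par                                          # horizontal focal length, pixels
    return np.array([
        [fx,  0.0, u0,  0.0],
        [0.0, fy,  v0,  0.0],
        [0.0, 0.0, A,   B  ],
        [0.0, 0.0, 1.0, 0.0],  # 4th element of K @ Xc carries the depth used for the divide
    ])
```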
The field of view is determined on a frame by frame basis using the following steps:
One field of view mapping curve is required per possible 2× Extender state. The field of view mapping curves are determined ahead of time and are part of a calibration process.
One mapping between measured zoom, focus and 2× Extender and the focus expansion factor is required per possible 2× Extender state. The focus expansion factor mappings are determined ahead of time and are part of a calibration process.
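A hedged sketch of the per-frame field-of-view lookup follows. The calibration numbers are placeholders, and applying the focus expansion factor as a multiplier on the field of view is an assumption; the real curves and their use come from the calibration process described above.

```python
import numpy as np

# Placeholder calibration data: measured zoom voltages and the corresponding
# vertical field of view (degrees), one curve per 2x Extender state.
FOV_CURVES = {
    False: (np.array([0.0, 2.5, 5.0]), np.array([60.0, 20.0, 2.0])),   # extender off
    True:  (np.array([0.0, 2.5, 5.0]), np.array([30.0, 10.0, 1.0])),   # extender on
}

def field_of_view(zoom_volts, focus_expansion, extender_on):
    """Interpolate the calibrated zoom curve, then apply the focus expansion factor."""
    zoom_axis, fov_axis = FOV_CURVES[extender_on]
    base_fov = np.interp(zoom_volts, zoom_axis, fov_axis)
    return base_fov * focus_expansion  # multiplicative use of the factor is assumed
```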
The mapping (f) from undistorted screen coordinates to distorted screen coordinates (pixels) is not (in one embodiment) represented as a matrix. In one example, the model used accounts for radial distortion. The steps to compute the distorted screen coordinates from undistorted screen coordinates are:
The two constants k1, k2 are termed the distortion coefficients of the radial distortion model. An offline calibration process is used to measure the distortion coefficients, k1, k2, for a particular type of lens at various 2× Extender states and zoom levels. Then, at run time, the measured values of zoom and 2× Extender are used to determine the values of k1 and k2 to use in the distortion process. If the calibration process cannot be completed, the default values of k1=k2=0 are used, which correspond to a camera with no distortion. In this case the distorted screen coordinates are the same as the undistorted screen coordinates.
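One common two-coefficient radial distortion model consistent with the description above is sketched below; normalizing the radius by the image half-height and measuring it from the optical center are assumptions for illustration.

```python
def distort_point(sx_u, sy_u, k1, k2, u0, v0, Ny=1080):
    """Apply a two-coefficient radial distortion model to undistorted screen coordinates.

    With k1 = k2 = 0 the output equals the input (no distortion), matching the
    default used when calibration is unavailable.
    """
    scale = Ny / 2.0                      # assumed normalization of the radius
    dx = (sx_u - u0) / scale
    dy = (sy_u - v0) / scale
    r2 = dx * dx + dy * dy
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return u0 + factor * dx * scale, v0 + factor * dy * scale
```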
The above discussion provides one set of examples for tracking objects and enhancing video from a mobile camera based on that tracking. The technology for accommodating mobile cameras can also be used in conjunction with other systems for tracking and enhancing video, such as the systems described in U.S. Pat. No. 5,912,700; U.S. Pat. No. 5,862,517; U.S. Pat. No. 5,917,553; U.S. Pat. No. 6,744,403; and U.S. Pat. No. 6,657,584. All five of these listed patents are incorporated herein by reference in their entirety.
A lay line is a line made up of all points from which a boat can sail directly to a mark (e.g., buoy 10) without having to tack, for a given wind speed and wind direction. If the wind speed or wind direction changes, the lay lines will also change. For a given wind speed and direction, there are two lay lines (e.g., lay line 622 and lay line 624). Optimally, a boat will sail parallel to one of the lay lines until it reaches the other lay line, at which point the boat will tack and follow that lay line to the mark.
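The lay-line geometry lends itself to a brief sketch: given the mark position, the direction the wind is blowing from and an assumed tacking angle (45 degrees here, a placeholder), the two lay lines extend from the mark on either side of the downwind direction. The function below is illustrative only.

```python
import numpy as np

def lay_lines(mark_xy, wind_from_deg, tack_angle_deg=45.0, length_m=2000.0):
    """Return the endpoints of the two lay lines as rays extending from the mark.

    mark_xy        : (x, y) position of the mark in world coordinates (e.g., UTM metres).
    wind_from_deg  : compass bearing the wind is blowing from.
    tack_angle_deg : assumed angle a boat sails off the wind (placeholder value).
    """
    mark = np.asarray(mark_xy, dtype=float)
    ends = []
    for sign in (-1.0, +1.0):
        bearing = np.radians(wind_from_deg + 180.0 + sign * tack_angle_deg)
        # Compass bearing to unit vector: x = east = sin(bearing), y = north = cos(bearing)
        direction = np.array([np.sin(bearing), np.cos(bearing)])
        ends.append((mark, mark + length_m * direction))
    return ends  # [(mark, end of lay line 1), (mark, end of lay line 2)]
```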
Isochrons (also called ladder lines) are perpendicular to the mark. Every boat on an isochron is effectively the same amount of time and distance away from the mark, regardless of how close, in straight-line distance, the boats are to the mark or to the lay lines. Typically, isochrons are drawn so as to indicate the distance or time between adjacent isochrons.
In one embodiment, isochrons 626, 628 and 630 are drawn at predetermined fixed intervals from each other (e.g., interval x and interval y). It is also possible to create custom isochrons at the bow of each boat. For example,
If the viewer were looking at video that shows the boats in the orientation of
Although the above examples are given with respect to sailing, the technology can be used with other events, too. For example, the same technology can be used with automobile racing. In one example, a GPS tracking system for automobile racing is disclosed in U.S. Pat. No. 6,744,403 (the '403 patent). The technology described above can be added to the system of the '403 patent to enhance the GPS tracking system for enhancing video. The technology described above can also be used with respect to foot racing, soccer, tracking automobiles for a fleet (or other purpose), military applications, tracking hikers, tracking people at cultural events (e.g., concerts, festivals such as Burning Man, carnivals, etc.). The technology is not intended to be restricted to sailing.
One embodiment includes automatically sensing a location of a movable camera that can change locations, receiving position data from a sensor for an object, converting a location in world space to a position in a video image of the camera based on the sensed location of the camera (the location in world space is based on the sensed location of the camera), and enhancing the video image based on the position.
In some embodiments, sensing the location of the camera includes sensing the location of the camera while the camera is changing location and/or while the camera is unrestrained in a local space.
Some embodiments further include determining an orientation of the camera, with the location in world space being converted to the position in the video image of the camera based on the sensed location of the camera and the determined orientation of the camera.
One embodiment includes a first set of one or more sensors that sense location information for a movable camera that can change locations, a second set of one or more sensors that sense position information for one or more objects, and one or more processors in communication with the first set of one or more sensors and the second set of one or more sensors. The one or more processors obtain a location in world space (e.g., world coordinates) based on the position information from the second set of one or more sensors. The one or more processors convert the location in world space to a position in a video image of the camera based on the sensed location of the camera and enhance the video image based on the position in the video image.
One embodiment includes a first set of one or more sensors that sense location information for a movable camera that is unrestrained in a local space, a second set of one or more sensors that sense orientation information for the camera with the first set of sensors and the second set of sensors being co-located with the camera on an aircraft, a third set of one or more sensors that concurrently sense location information for multiple moving objects, one or more communication stations, and one or more processors in communication with the one or more communication stations. The one or more communication stations are also in communication with the first set of one or more sensors, the second set of one or more sensors and the third set of one or more sensors. The one or more processors receive video from the camera. The one or more processors convert locations of the moving objects into positions in a video image from the camera based on the location information for the camera and the orientation information for the camera. The one or more processors create one or more graphics based on the positions in the video image and add the one or more graphics to the video image.
Note that the flow charts depicted in the drawings show steps in a sequential manner. However, it is not always required that the steps be performed in the same order as in the flow charts. Furthermore, many of the steps can also be performed concurrently.
The foregoing detailed description of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. The described embodiments were chosen in order to best explain the principles of the invention and its practical application to thereby enable others skilled in the art to best utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims appended hereto.
This application claims priority to provisional application 61/515,836, filed on Aug. 5, 2011, incorporated herein by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
3580993 | Sandorf | May 1971 | A |
3595987 | Vlahos | Jul 1971 | A |
3840699 | Bowerman | Oct 1974 | A |
3973239 | Kakumoto | Aug 1976 | A |
4064528 | Bowerman | Dec 1977 | A |
4067015 | Mogavero | Jan 1978 | A |
4084184 | Crain | Apr 1978 | A |
4100569 | Vlahos | Jul 1978 | A |
4179704 | Moore | Dec 1979 | A |
4319266 | Bannister | Mar 1982 | A |
4344085 | Vlahos | Aug 1982 | A |
4386363 | Morrison | May 1983 | A |
4409611 | Vlahos | Oct 1983 | A |
4420770 | Rahman | Dec 1983 | A |
4521196 | Briard | Jun 1985 | A |
4589013 | Vlahos | May 1986 | A |
4591897 | Edelson | May 1986 | A |
4612666 | King | Sep 1986 | A |
4625231 | Vlahos | Nov 1986 | A |
4674125 | Carlson | Jun 1987 | A |
4700306 | Wallmander | Oct 1987 | A |
4811084 | Belmares-Sarabia | Mar 1989 | A |
4817171 | Stentiford | Mar 1989 | A |
4855822 | Narendra | Aug 1989 | A |
4924507 | Chao | May 1990 | A |
4950050 | Pernick | Aug 1990 | A |
4970666 | Welsh | Nov 1990 | A |
4975770 | Troxell | Dec 1990 | A |
4999709 | Yamazaki | Mar 1991 | A |
5063603 | Burt | Nov 1991 | A |
5100925 | Watson | Mar 1992 | A |
5150895 | Berger | Sep 1992 | A |
5179421 | Parker | Jan 1993 | A |
5184820 | Keating | Feb 1993 | A |
5191341 | Gouard et al. | Mar 1993 | A |
5202829 | Geier | Apr 1993 | A |
5207720 | Shepherd | May 1993 | A |
5249039 | Chaplin | Sep 1993 | A |
5264933 | Rosser | Nov 1993 | A |
5305107 | Gale | Apr 1994 | A |
5313304 | Chaplin | May 1994 | A |
5343252 | Dadourian | Aug 1994 | A |
5353392 | Luquet | Oct 1994 | A |
5398075 | Freytag | Mar 1995 | A |
5423549 | Englmeier | Jun 1995 | A |
5436672 | Medioni | Jul 1995 | A |
5452262 | Hagerty | Sep 1995 | A |
5459793 | Naoi | Oct 1995 | A |
5465308 | Hutcheson | Nov 1995 | A |
5469536 | Blank | Nov 1995 | A |
5488675 | Hanna | Jan 1996 | A |
5491517 | Kreitman | Feb 1996 | A |
5517205 | van Heyningen | May 1996 | A |
5543856 | Rosser | Aug 1996 | A |
5564698 | Honey | Oct 1996 | A |
5566251 | Hanna | Oct 1996 | A |
5592236 | Rosenbaum | Jan 1997 | A |
5610653 | Abecassis | Mar 1997 | A |
5627915 | Rosser | May 1997 | A |
5642285 | Woo et al. | Jun 1997 | A |
5668629 | Parker | Sep 1997 | A |
5731788 | Reeds | Mar 1998 | A |
5742521 | Ellenby | Apr 1998 | A |
5808695 | Rosser | Sep 1998 | A |
5862517 | Honey | Jan 1999 | A |
5881321 | Kivolowitz | Mar 1999 | A |
5892554 | DiCicco | Apr 1999 | A |
5903317 | Sharir | May 1999 | A |
5912700 | Honey | Jun 1999 | A |
5917553 | Honey | Jun 1999 | A |
5923365 | Tamir | Jul 1999 | A |
5953076 | Astle | Sep 1999 | A |
5977960 | Nally | Nov 1999 | A |
6014472 | Minami | Jan 2000 | A |
6031545 | Ellenby | Feb 2000 | A |
6037936 | Ellenby | Mar 2000 | A |
6072571 | Houlberg | Jun 2000 | A |
6100925 | Rosser | Aug 2000 | A |
6122013 | Tamir | Sep 2000 | A |
6154174 | Snider | Nov 2000 | A |
6191825 | Sprogis | Feb 2001 | B1 |
6201579 | Tamir | Mar 2001 | B1 |
6208386 | Wilf | Mar 2001 | B1 |
6229550 | Gloudemans | May 2001 | B1 |
6252632 | Cavallaro | Jun 2001 | B1 |
6271890 | Tamir | Aug 2001 | B1 |
6292227 | Wilf | Sep 2001 | B1 |
6297853 | Sharir | Oct 2001 | B1 |
6304298 | Steinberg | Oct 2001 | B1 |
6307556 | Ellenby | Oct 2001 | B1 |
6354132 | Van Heyningen | Mar 2002 | B1 |
6380933 | Sharir | Apr 2002 | B1 |
6384871 | Wilf | May 2002 | B1 |
6438508 | Tamir | Aug 2002 | B2 |
6559884 | Tamir | May 2003 | B1 |
6567038 | Granot | May 2003 | B1 |
6690370 | Ellenby | Feb 2004 | B2 |
6714240 | Caswell | Mar 2004 | B1 |
6728637 | Ford | Apr 2004 | B2 |
6738009 | Tsunoda | May 2004 | B1 |
6744403 | Milnes | Jun 2004 | B2 |
6864886 | Cavallaro | Mar 2005 | B1 |
6965297 | Sandahl | Nov 2005 | B1 |
7075556 | Meier | Jul 2006 | B1 |
7313252 | Matei et al. | Dec 2007 | B2 |
7341530 | Cavallaro | Mar 2008 | B2 |
7565155 | Sheha et al. | Jul 2009 | B2 |
7732769 | Snider et al. | Jun 2010 | B2 |
7773116 | Stevens | Aug 2010 | B1 |
7916138 | John et al. | Mar 2011 | B2 |
7934983 | Eisner | May 2011 | B1 |
7948518 | Baker et al. | May 2011 | B1 |
20040006424 | Joyce | Jan 2004 | A1 |
20040224740 | Ball et al. | Nov 2004 | A1 |
20060215027 | Nonoyama et al. | Sep 2006 | A1 |
20080278314 | Miller et al. | Nov 2008 | A1 |
20090040305 | Krajec | Feb 2009 | A1 |
20100310121 | Stanfill et al. | Dec 2010 | A1 |
20110007150 | Johnson | Jan 2011 | A1 |
20130027555 | Meadow | Jan 2013 | A1 |
20130054138 | Clark | Feb 2013 | A1 |
Number | Date | Country |
---|---|---|
4101156 | Jan 1991 | DE |
2 794 524 | Dec 2000 | FR |
1659078 | Jun 1991 | SU |
WO9510915 | Apr 1995 | WO |
WO9510919 | Apr 1995 | WO |
Entry |
---|
PCT International Search Report dated Oct. 23, 2012, PCT Patent Application No. PCT/US2012/048872. |
PCT Written Opinion of the International Searching Authority dated Oct. 23, 2012, PCT Patent Application No. PCT/US2012/048872. |
Ann Eisenberg, "The America's Cup, Translated for Television," The New York Times, Jun. 18, 2011, pp. 1/3-3/3, XP002684843, URL: http://www.nytimes.com/2011/06/19/business/19novel.html?_r=0. |
Replay 2000—The Ultimate Workstation for Sport Commentators and Producers, Orad Hi-Tec Systems, Apr. 1995. |
SailTrack, GPS Tracking System for Animated Graphics Broadcast Coverage of the America's Cup Races, 1992. |
SailTrack Technical Overview, 1992. |
Sail Viz Software Documentation, 1992. |
Airins Georeferencing and Orientation System, iXSea, www.ixsea.com, Jan. 2011. |
Cineflex V14 HD, Gyro-Stabilized Airborne Camera Systems, Axsys Technologies, General Dynamics Advanced Information Systems, www.axsys.com, Apr. 2009. |
Valdes, “How the Predator UAV Works,” http://science.howstuffworks.com/predator.htm/printable, Jan. 2007. |
Virtual Eye Sailing, Virtual Eye, Animation Research Ltd., Nov. 2009. |
European Response to Office Action dated Sep. 22, 2014, European Patent Application No. 12754115.9. |
Number | Date | Country | |
---|---|---|---|
20130033598 A1 | Feb 2013 | US |
Number | Date | Country | |
---|---|---|---|
61515836 | Aug 2011 | US |