1. Field of the Invention
The present invention is directed to a system for using camera attitude sensors.
2. Description of the Related Art
The remarkable, often astonishing, physical skills and feats of great athletes draw millions of people every day to follow sports. In particular, the number of people watching sports on television and the amount of advertising revenue received for televised sports have increased significantly. To satisfy the increased demand for televised sports, broadcasters have deployed a varied repertoire of technologies to highlight these exciting events for viewers. For example, broadcasters have started adding graphical enhancements to the video of sporting events. Examples of graphical enhancements have included highlighting moving objects, highlighting portions of a playing field (e.g., a first down line), adding virtual advertisements and adding other graphics to the video of the event.
The systems being employed for providing graphical enhancements to video have generally fallen into two categories. The first category of systems uses pattern recognition to recognize certain features in the video in order to accurately place the graphic into the video. A second category of systems uses sensors to measure the attitude of the camera capturing the video and then uses the measured camera attitude information to accurately insert the graphic into the video. It has been found that prior systems that only use pattern recognition have not been robust enough to account for rapid movement of the camera during the event and may be too slow for live events. Some systems that use pattern recognition have attempted to compensate for these deficiencies by using camera attitude sensors in combination with pattern recognition.
Systems that rely on camera attitude information require precise measurements of the orientation of a camera at any given time. Certain situations beyond the broadcaster's control can interfere with and be a source of error when measuring camera attitude information. For example, cameras at a sporting event typically are located at predesignated camera locations. Sometimes the camera location has a floor that can sag or wobble. As a heavy camera is panned and tilted, the weight distribution of the camera and/or operator may cause the floor to sag or wobble. A camera operator moving at the camera location may also cause the floor to sag or wobble. Additionally, during an event, the tripod holding the camera can be kicked or moved. The floor of the camera location can also vibrate at either a high frequency or low frequency because of other activity in the stadium, for example, fans jumping, fans stomping their feet, etc. Additionally, mechanical compliance of the various parts of the tripod and mount can also hinder an accurate camera attitude reading.
Thus, there is a need for an improved camera attitude measurement system to better measure camera attitude in light of the sources of error described above.
The present invention is directed to an improved system for using attitude sensors with a camera. The camera can be part of a camera assembly which includes a movable portion and a fixed portion. One example of a camera assembly includes a tripod base, a tripod head interface mounted on the tripod base, a tripod head mounted on the tripod head interface and a camera mounted on the tripod head. In one embodiment, the system includes a first sensor coupled to the camera assembly and a first inclinometer coupled to the camera assembly. Instead of, or in addition to, the first inclinometer, the system could have a first gyroscope (“gyro”) coupled to the camera assembly. The first sensor measures the position of the movable portion of the camera assembly relative to the fixed portion of the camera assembly. In one embodiment, the first sensor is an optical encoder. In one alternative, the system includes two optical encoders, two inclinometers and three gyros. Data from the camera attitude sensors are combined to describe the orientation of the camera. One means for describing the orientation of the camera includes setting up one or more transformation matrices. Alternatively, the data from the various camera attitude sensors can be combined to result in a set of angles describing the orientation of the camera. This information can be displayed on a monitor, printed, stored on a computer readable storage medium or passed to a software process.
The outputs of the camera attitude sensors are typically communicated to a camera sensor electronics package which receives the camera attitude data and packages the data for communication to graphics production equipment. In one embodiment, the data from the sensors is encoded on an audio signal and sent to the graphics production equipment (usually located remotely from the camera) via an audio line (or microphone line) from the camera. In one use of the present invention, the graphics production equipment receives the sensor data, demodulates the audio and uses the camera attitude data to add a graphic to a video image from the camera. In one alternative, the graphic corresponds to a three dimensional location within a field of view of the camera. The three dimensional location corresponds to a first position in the video image, and the graphic is added to the video image at the first position. In one embodiment, the three dimensional location is converted to the first position in the video image using one or more transformation matrices.
One method for practicing the present invention includes sensing data from a first sensor, sensing data from a second sensor and combining the data from the two sensors. In one embodiment, the second sensor can be a gyro or an inclinometer. The first sensor measures relative position of the movable portion of the camera assembly with respect to the fixed portion of the camera assembly.
Portions of the above-described process are performed using the foregoing sensors in combination with various hardware and software. The software for implementing the present invention can be stored on processor readable storage media. Examples of suitable processor readable storage media include RAM, ROM, hard disks, floppy disks, CD-ROMs, flash memory, etc. In another alternative, the method can be performed on specialized hardware designed specifically to perform the functionality described herein.
The hardware and software described to perform the present invention can be used for purposes of adding one or more graphics to live or delayed video of a sporting event. Alternatively, the hardware and/or software of the present invention can be used to determine attitude information for other purposes, for example, enhancing video of non-sporting events and for determining attitude for purposes other than enhancement of video.
These and other objects and advantages of the present invention will appear more clearly from the following description in which the preferred embodiment of the invention has been set forth in conjunction with the drawings.
Tilt encoder 20 has a shaft that is coupled to top platform 86 of tripod head 8.
One example of a suitable inclinometer uses liquid between a pair of plates, and measures change of capacitance. Another example is an electrolyte varying the conductance between two conductors. In one embodiment, a suitable inclinometer indicates an absolute angle (relative to gravity or other acceleration). In one example, the inclinometer can indicate angles up to plus or minus one degree, plus or minus 1.5 degrees, or plus or minus six degrees. Other suitable ranges can also be used. An example of a suitable inclinometer is the Ceramic Tilt Sensor SH50054 from Spectron, 595 Old Willets Path, Hauppauge, N.Y. 11788, (516) 582-5600. Other suitable inclinometers can also be used with the present invention.
Mounted on one surface of block 50 is inclinometer 30. Mounted on a second surface of block 50 is a second inclinometer 28. The surface that inclinometer 28 is mounted on is orthogonal to the surface that inclinometer 30 is mounted on. Mounted in front of inclinometer 30 is PC board 84. Inclinometer 28 and inclinometer 30 are both connected to PC board 84. In one embodiment, PC board 84 includes electronics that are in communication with camera sensor electronics 16. Block 50 includes four holes 72, 74, 76 and 96.
As described above, one embodiment that simplifies the math includes mounting the gyros on plate 80.
Gyro 24 is connected to interface board 220, which is connected to analog to digital converter 214. Interface board 220 comprises electronics for receiving a signal from gyro 24 and presenting the information to analog to digital converter 214. The electronics of board 220 include a differential amplifier and other circuitry that can reject common mode noise and amplify the signal from the gyro. The output of gyro 26 is connected to interface board 222. Interface board 222 operates in the same manner as interface board 220 and is also connected to analog to digital converter 214.
Signal 224 represents the electrical output of the zoom lens potentiometer of the camera and is connected to analog to digital converter 214. Signal 226 represents the electrical output of the 2X extender of the camera and is connected to analog to digital converter 214. Signal 228 represents the connection to the lens of the camera, provides the value of the focus of the camera and is connected to analog to digital converter 214.
The output of inclinometer 28 is connected to interface board 230. The output of inclinometer 30 is connected to interface board 232. The outputs of interface board 230 and interface board 232 are both connected to analog to digital converter 214. Analog to digital converter 214 converts the input analog signals to digital signals, and sends the output digital signals to FPGA 212. FPGA 212 includes a register for each of the sensors. In one embodiment, the electronics of interface boards 230 and 232 are included on PC board 84. In one alternative, PC board 84 can include electronics and LEDs to indicate when tripod head 8 is level.
Processor 216 is in communication with data memory 236 for storing data and program memory 238 for storing program code. In one alternative, memory 238 is a flash memory and memory 236 is a static RAM. In one embodiment, processor 216 is an 8032 processor from Intel. Processor 216 also receives an output signal from sync decoder 240. Sync decoder 240 receives a video signal 250 from camera 2. Sync decoder 240 generates a sync signal so that the data from the sensors can be synchronized to the video. In one embodiment, the video is transmitted at 30 frames per second. Other video rates can also be used. Processor 216 assembles data from each of the sensors into a packet and sends the data to modulator 244. Processor 216 assembles the data using the sync signal so that data is collected and sent in synchronization with the video from the camera. For example, data can be sent for every field, every video frame, every other video frame, every third video frame, etc. In one embodiment, the packet of data sent from processor 216 does not include time code or any type of synchronization signal.
Modulator 244 receives the packet of data from processor 216 and encodes data for transmission on an audio frequency signal. The output of modulator 244 is sent to audio driver 246 and coax driver 248. Most broadcast cameras have a microphone input channel. The output of audio driver 246 is sent to the microphone input channel for camera 2. The camera then combines the audio input channel with the video and sends a combined signal to the production equipment. If the audio signal is needed on a coax cable, then that signal is received from coax driver 248. In one embodiment, there can also be an RS232 or RS422 output directly from processor 216.
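By way of illustration, the following sketch shows one way such a sensor packet could be assembled and keyed onto an audio-band signal. The packet layout, tone frequencies, bit length and audio sample rate are illustrative assumptions and are not specified by the foregoing description.

```python
# Hypothetical sketch: packing one video field's sensor readings into a byte
# packet and keying it onto an audio-band FSK carrier. All constants and the
# packet layout are illustrative assumptions, not taken from the patent text.
import struct
import numpy as np

AUDIO_RATE = 48_000           # audio samples per second (assumed)
F_MARK, F_SPACE = 2400, 1200  # FSK tones for bits 1 and 0 (assumed)
BIT_SAMPLES = 40              # audio samples per data bit (assumed)

def build_packet(pan, tilt, roll, pitch, zoom, focus, extender):
    """Pack one set of sensor readings as little-endian 16-bit words."""
    return struct.pack("<7h", pan, tilt, roll, pitch, zoom, focus, extender)

def fsk_modulate(packet: bytes) -> np.ndarray:
    """Turn packet bytes into an audio-band FSK waveform."""
    bits = np.unpackbits(np.frombuffer(packet, dtype=np.uint8))
    t = np.arange(BIT_SAMPLES) / AUDIO_RATE
    chunks = [np.sin(2 * np.pi * (F_MARK if b else F_SPACE) * t) for b in bits]
    return np.concatenate(chunks)

# One field's worth of (made-up) readings keyed onto the audio channel:
audio = fsk_modulate(build_packet(1200, -340, 12, -5, 512, 300, 1))
```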
The combined data from the sensors is sent to computer 406. Computer 406, computer 408, delay 410 and keyer 412 are used to enhance live video from a chosen camera. The present invention works with various systems for enhancing the video. For example, suitable systems are described in the following patents/applications: U.S. Pat. No. 5,912,700, A System for Enhancing the Television Presentation of an Object at a Sporting Event, U.S. Pat. No. 5,917,553, Method And Apparatus For Enhancing The Broadcast of a Live Event, U.S. patent application Ser. No. 09/041,238, System For Determining The Position Of An Object, filed Jan. 6, 1998, U.S. patent application Ser. No. 09/160,534, A System For Enhancing a Video Presentation of a Live Event, filed Sep. 24, 1998, all of which are incorporated herein by reference.
Computer 406 receives the sensor data, key data and time codes from concentrator 402. Computer 406 also receives the video signal, including VITC. In one embodiment, computer 406 is used to choose a location on the playing field of a sporting event. The location can be chosen using any suitable means including a pointing device, a keyboard, or a software process. The three dimensional coordinates associated with the chosen position are determined using any of a number of means known in the art, including using a model, prestoring locations, manually entering the locations, sensors (infrared, radar, etc.) and so on. Using the data from the camera attitude sensors, computer 406 converts the three dimensional location(s) of the chosen location to two dimensional position(s) in the frame or field of video from the chosen camera. The two dimensional position(s) are sent to computer 408, which draws (or sets up) a field or frame of video with the graphic. Computer 408 then sends instructions to a keyer to combine the graphic with the video from the camera. The video from the chosen camera is first sent to delay 410 in order to delay the video a number of frames to allow for the processing of the camera attitude information and the other processing performed by computers 406 and 408. After being delayed, the video is sent from delay 410 to keyer 412 for combining with the graphic(s) generated by computer 408. The output of keyer 412 can be sent for broadcast or recorded for future use. In the embodiment described above, the components operate in real time and enhance live video. In another embodiment, the camera attitude information can be used to enhance pre-stored video. Examples of graphics include a line added to video of a football game, a virtual advertisement, a cloud to show the location of a moving object, or any other suitable graphic.
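As an illustration of the conversion performed by computer 406, the following sketch applies a four-by-four transformation matrix (of the kind developed in the discussion of steps 710 and 712 below) to a world-coordinate point, using the row-vector convention of that discussion. The function name is hypothetical.

```python
import numpy as np

def world_to_screen(M_wc: np.ndarray, world_pt) -> tuple:
    """Map a 3-D world point to a 2-D screen position with a 4x4 matrix.

    M_wc plays the role of [Mw,c]: it is applied to the homogeneous row
    vector (x, y, z, 1) and the result is divided by its w component.
    """
    x, y, z = world_pt
    sx, sy, _, w = np.array([x, y, z, 1.0]) @ M_wc
    return sx / w, sy / w

# e.g., with [Mw,c] built as described below:
#   sx, sy = world_to_screen(M_wc, (25.0, 50.0, 0.0))
```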
In one embodiment, a third computer can be added to computers 406 and 408. The third computer can be used to provide a user interface which, among other things, allows an operator to choose which colors can be replaced with a graphic. This third computer would supply the key data to concentrator 402. In this embodiment, computer 406 determines where within a given field or frame a graphic should be inserted and computer 408 draws the graphic and synchronizes the graphic with the appropriate field or frame.
The reason there are samples ahead in time is that the video is delayed by frame delay 410. Sample windows other than 30 frames or 30 fields can also be used.
In step 564, the values read from the inclinometers are converted to a camera attitude parameter for use by the production equipment. For example, the voltages can be converted to angles. One means for converting a voltage to an angle is to use a look up table. In another embodiment, a scaling factor can be applied to the voltage to convert the voltage to an angle. The angle can be expressed in degrees, radians or another unit of measure. Rather than converting the voltage to an angle, the voltage can be converted to a different form of attitude parameter (e.g., a variable in a matrix).
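A minimal sketch of step 564 follows, showing both conversion options: a look up table with linear interpolation, and a single scaling factor. The calibration values and the scale factor are illustrative assumptions.

```python
import numpy as np

# Assumed calibration table: inclinometer output voltage -> angle in degrees.
CAL_VOLTS   = np.array([-2.5, -1.25, 0.0, 1.25, 2.5])
CAL_DEGREES = np.array([-6.0, -3.0,  0.0, 3.0,  6.0])

def volts_to_degrees(v: float) -> float:
    """Look-up-table conversion with linear interpolation between entries."""
    return float(np.interp(v, CAL_VOLTS, CAL_DEGREES))

def volts_to_degrees_scaled(v: float, scale: float = 2.4) -> float:
    """Alternative conversion: apply a single scaling factor (degrees/volt)."""
    return v * scale
```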
In step 568, the system reads the encoder. Note that step 568 may be performed at the same time as step 560. Step 568 includes FPGA 212 receiving data from one of the encoders and providing that data to processor 216. Processor 216 will read the data in accordance with the sync signal from sync decoder 240. In step 570, the data from the encoder is converted to a camera attitude parameter for use by the production equipment. For example, the voltage can be converted to an angle. In step 572, the parameter from the encoder is combined with the parameter(s) from the inclinometer(s). Recall that the encoder measures the amount of rotation of the camera with respect to the base. In one embodiment, the inclinometers measure the attitude of the base. Thus, the actual orientation is determined by using information from both encoders and both inclinometers. The inclinometers can be thought of as measuring roll and pitch of the tripod, while the pan encoder measures the pan angle of the camera with respect to the base of the tripod or in relation to the tripod head interface. The pan axis is moved from a completely vertical axis to a different orientation based on roll and/or pitch. A similar analysis applies for tilt. One method for performing step 572 is to create one or more transformation matrices (to be used in step 712, described below).
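The following sketch illustrates one way step 572 could combine the encoder and inclinometer parameters as rotation matrices. The particular order in which the rotations are composed is an assumption; the description above only requires that the pan and tilt rotations be applied about axes re-oriented by the measured roll and pitch.

```python
import numpy as np

def rot_x(a):  # rotation about the x-axis by a radians
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):  # rotation about the y-axis by a radians
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):  # rotation about the z-axis by a radians
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def camera_orientation(roll, pitch, pan, tilt):
    """Compose base attitude (inclinometers) with pan/tilt (encoders).

    All angles in radians. The composition order is an assumption of this
    sketch: first the tripod attitude from the inclinometers, then the
    encoder pan about the (re-oriented) pan axis, then the encoder tilt.
    """
    base = rot_y(roll) @ rot_x(pitch)        # tripod attitude (assumed order)
    return base @ rot_z(pan) @ rot_x(tilt)   # then encoder pan, then tilt
```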
Gyro 610 is a fiber optic gyro which measures angular rate. It does not have any reference to absolute angle, but it can accurately measure relative angular change. The output of gyro 610 is integrated using a digital or analog integrator 614. The output of integrator 614 will represent an angle. The output of integrator 614 is scaled (block 624). We can describe the measured output of the integrated gyro signal as A_G = A + e_G, where A_G is the integrated angle, A is the actual angle through which gyro 610 was rotated and e_G is the error induced by gyro 610. The error e_G is largely due to the offset drift of the gyro and has only low frequency components.
Inclinometer 604 can accurately measure the true pitch or roll of the camera mount but is subject to acceleration errors. Gravity and acceleration of a reference frame are indistinguishable, so it is difficult to tell the difference between gravity indicating which way is “down” and acceleration imposed on the sensor. Imagine that the camera mount is not disturbed and is on a very stable platform. The inclinometer will accurately read the angle of the camera mount because the only force acting on the inclinometer is gravity. If the camera mount is tilted, it will accurately measure the new pitch or roll. During the transient when rotating the camera mount, acceleration will be induced on the sensor unless the axis of rotation passes precisely through the inclinometer sensor and the pan axis is nearly frictionless. The axis of rotation will typically not pass through the inclinometer, so changes in pitch or roll will induce a transient error in the inclinometer reading. In addition to this error, the device being used has a slow response to transients. Even if the device is rotated about the axis so as not to induce acceleration errors, the response time is about one second. If we think about the inclinometer signal output errors in the frequency domain, we can say that the low frequency errors are very small because the average acceleration will be near zero as long as we are not translating the inclinometer to a new position. Most of the inclinometer errors will be high frequency, due to transient accelerations. Let A_I = A + e_I, where A_I is the measured angle from the inclinometer, A is the actual angle through which inclinometer 604 was rotated and e_I is the inclinometer measurement error due to acceleration and sensor response. The error e_I will have very little low frequency content.
Summer 616 will subtract A_G (the output of scale block 624) from A_I (the output of inclinometer 604), yielding e_I − e_G. This signal is passed through low pass filter (LPF) 618. The cutoff frequency of LPF 618 is chosen to pass the gyro error signal e_G but reject the inclinometer error signal e_I. A typical cutoff frequency is 0.2 Hz. The output of LPF 618 will be −e_G. Summer 620 will add signal A_G from scale block 624 to −e_G from LPF 618. The result is signal A, the desired actual angular rotation.
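A minimal sketch of the integrator/summer/LPF arrangement follows, using a one-pole low pass filter in place of LPF 618. The 0.2 Hz cutoff comes from the description above; the filter form and sample-based integration are implementation assumptions.

```python
import numpy as np

def complementary_fuse(gyro_rate, incl_angle, dt, f_cut=0.2):
    """Fuse gyro rate and inclinometer angle per the summer/LPF scheme.

    gyro_rate and incl_angle are equal-length sample arrays; dt is the
    sample period in seconds. The one-pole filter stands in for LPF 618.
    """
    alpha = dt / (dt + 1.0 / (2 * np.pi * f_cut))  # one-pole LPF coefficient
    a_g = 0.0        # integrated gyro angle   (A_G = A + e_G)
    lpf = 0.0        # low-pass state, converges to -e_G
    fused = []
    for rate, a_i in zip(gyro_rate, incl_angle):
        a_g += rate * dt                 # integrator 614 (scale assumed unity)
        diff = a_i - a_g                 # summer 616: e_I - e_G
        lpf += alpha * (diff - lpf)      # LPF 618 keeps only -e_G
        fused.append(a_g + lpf)          # summer 620: A_G + (-e_G) = A
    return np.array(fused)
```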
An enhancement to the method is to adaptively set the cutoff frequency of LPF 618. If the system is not experiencing any acceleration, it is advantageous to raise the cutoff frequency to reduce errors due to drift in the gyro. If the system is experiencing accelerations, it is advantageous to lower the cutoff frequency to reduce the errors due to the inclinometer. Acceleration errors will be seen as a high frequency signal at the output of summer 616. The output of summer 616 is sent to high pass filter (HPF) 626. The output of HPF 626 is then sent to a fast attack, slow decay detector 628. The output of detector 628 is used to set the cutoff frequency of LPF 618.
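The following sketch suggests one way HPF 626 and detector 628 could drive the cutoff of LPF 618. The crude one-sample high pass filter, the attack and decay constants, and the mapping from envelope to cutoff are all illustrative assumptions.

```python
def adaptive_cutoff(summer_out, f_lo=0.05, f_hi=1.0, decay=0.995):
    """Sketch of HPF 626 plus fast-attack / slow-decay detector 628.

    summer_out is the sample stream from summer 616. A large envelope
    (acceleration present) drives the LPF cutoff toward f_lo; a quiet
    signal lets the cutoff rise toward f_hi. All constants are assumed.
    """
    env, prev, cutoffs = 0.0, 0.0, []
    for d in summer_out:
        hp = d - prev                  # crude one-sample high pass (HPF 626)
        prev = d
        mag = abs(hp)
        if mag > env:
            env += 0.5 * (mag - env)   # fast attack
        else:
            env *= decay               # slow decay
        # Map the envelope onto a cutoff between f_hi (quiet) and f_lo (busy).
        cutoffs.append(max(f_lo, f_hi / (1.0 + 50.0 * env)))
    return cutoffs
```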
In another enhancement, a gyro may be placed on the stationary portion of the camera assembly (e.g. tripod, tripod head interface) so it is sensitive to the pan axis. If the camera is panned very quickly, the camera mount may twist. The pan encoder will not measure the amount of twist, but a gyro will measure this twist and a correction can be applied to the resulting pan angle.
In step 708, the attitude information is extracted from the audio signals. In one embodiment, attitude information for all of the cameras is extracted from the respective audio signals by audio demodulator(s) 400 and sent to concentrator 402, which then sends the combined signal to computer 406. In another embodiment, only the data from the tallied camera is extracted. In step 710, the system determines the attitude of the tallied camera. Determining the attitude can include the sensor fusion teachings described above.
The following discussion provides more detail in regard to steps 710 and 712 for an exemplary system using inclinometers and/or gyros together with encoders. To convert a three dimensional location to a two dimensional position, a four-by-four transformation matrix [Mw,c] maps a four dimensional row vector representing the three dimensional location in world coordinates into a two dimensional position in camera coordinates.
There are four coordinate systems to consider: world coordinates, three dimensional camera coordinates, two dimensional camera (screen) coordinates and roll/pitch coordinates. The roll/pitch coordinates define the coordinate system with the inclinometers at or near the origin. The pan axis may or may not be vertical. The tilt axis may or may not be horizontal. Roll and pitch will describe the direction vector of the pan axis and the tilt axis.
The world coordinates are the coordinates of the playing field or event. In the system for enhancing a football game, the world coordinates can be a system with the origin in the corner of the football field. The positive x-axis in world coordinates is defined by the sideline closest to the cameras, the positive y-axis in world coordinates is defined by the goal line to the left of the direction of the cameras and the positive z-axis points upward, perpendicular to the x-axis and y-axis. If the positive y-axis in world coordinates is rotated counter-clockwise Φ degrees around the positive z-axis so that the new positive y-axis is pointing in the same direction as the camera when the tilt of the camera is level (perpendicular to the direction of gravity) and the camera's pan encoder measures zero, then the pan offset for that particular camera is Φ. For the analysis below, assume that the roll is represented by the variable ρ and the pitch is represented by the variable ψ. The values ρ and ψ are obtained as the output A of summer 620, one such output for each axis. The components ν₁, ν₂ and ν₃ of the unit vector along the pan axis then satisfy:
(ν₁, ν₂, ν₃) · (1, 0, 0) = ν₁ = cos(90° − ρ) = sin(ρ)
(ν₁, ν₂, ν₃) · (0, 1, 0) = ν₂ = −cos(90° − ψ) = −sin(ψ)
ν₁² + ν₂² + ν₃² = 1
(0, 0, 1) · (ν₁, ν₂, ν₃) = ν₃
T = arccos(ν₃)
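A direct transcription of these equations into code might read as follows; the sign conventions and the use of the unit-length constraint to recover ν₃ follow the equations above, while the function name is hypothetical.

```python
import numpy as np

def pan_axis_vector(roll_deg, pitch_deg):
    """Direction vector of the pan axis from measured roll and pitch.

    Implements the equations above: v1 = sin(rho), v2 = -sin(psi), v3 from
    the unit-length constraint; T is the angle between the pan axis and
    vertical, in degrees.
    """
    rho, psi = np.radians(roll_deg), np.radians(pitch_deg)
    v1 = np.sin(rho)
    v2 = -np.sin(psi)
    v3 = np.sqrt(max(0.0, 1.0 - v1**2 - v2**2))
    T = np.degrees(np.arccos(v3))
    return (v1, v2, v3), T
```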
Additionally, assume that the origin of the roll/pitch axis (the location of a fixed point near the inclinometers and/or gyros) in world coordinates is (rpx, rpy, rpz). Using these variables, the matrix [Mw,c] is created by multiplying a first matrix [Mw,rp] by a second matrix [Mrp,c]. The matrix [Mw,rp] represents a transformation from world coordinates to roll/pitch coordinates. The matrix [Mrp,c] represents a transformation from roll/pitch coordinates to two dimensional camera coordinates. The matrices are defined as follows:
[Mw,c] = [Mw,rp] × [Mrp,c].
The matrix [Mw,rp] is defined by
[Mw,rp] = [T−rp][Rz,−Φ][RT]
where the component matrices are described below.
The matrix [Mrp,c] is defined by
[Mrp,c] = [Rz,Φ][Trp][K]
where the component matrices are described below.
The matrix [Rz,Φ] is the four-by-four matrix corresponding to a counter-clockwise rotation by Φ degrees around the positive z-axis. The matrix [Trp] denotes a four-by-four matrix corresponding to a translation by (rpx, rpy, rpz). The matrix [RT] is the inverse of the matrix which provides a transformation from roll/pitch coordinates to world coordinates. The matrix [K] represents a transformation matrix for transforming a three dimensional location in world coordinates to a two dimensional position in camera coordinates for a system that uses encoders but does not use inclinometers or gyros, and is similar to the transformation matrices described in U.S. patent application Ser. No. 09/160,534 and U.S. Pat. No. 5,912,700, both of which are incorporated by reference. The matrix [K] is defined as:
[K]=[T][A][D][B][C][G]
where the matrix [G] is the lens projection matrix described below.
The matrix [G] models the effective focal length of the lens as a function of zoom, focus, and 2X Extender settings. The variables n and f are the distances to the mathematical near and far clipping planes, which are only important in assigning a useful range for z-buffered graphics drawing; therefore, nominal values of n = 1 yard and f = 100 yards are used. The variable fh is the effective horizontal focal length of the lens. The variable fv is the effective vertical focal length of the lens. The aspect ratio, which is constant, is fv/fh. A software routine is used to convert the appropriate zoom factor and aspect ratio to fh and fv.
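The matrix itself is not reproduced above, so the following is only a plausible stand-in: a standard row-vector perspective projection parameterized by fh, fv, n and f as just described. The actual [G] used may differ.

```python
import numpy as np

def make_G(fh, fv, n=1.0, f=100.0):
    """A plausible stand-in for [G]: perspective projection with separate
    horizontal/vertical focal lengths and near/far planes (in yards).

    This is the standard row-vector form of such a projection, offered as a
    sketch only; the patent's actual matrix entries are not reproduced here.
    """
    return np.array([
        [fh,  0.0, 0.0,               0.0],
        [0.0, fv,  0.0,               0.0],
        [0.0, 0.0, f / (f - n),       1.0],
        [0.0, 0.0, -f * n / (f - n),  0.0],
    ])
```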
The matrix [A] is defined as:
The matrix [B] is defined as:
The matrix [C] is defined as:
The matrix [D] is defined as:
The matrix [T] is defined as
The parameters in the above described matrices are discussed below.
The pan parameter is defined as pan = pan_reg − ptzfdit.pan, where ptzfdit.pan is measured with the pan optical encoder during the event. The variable pan_reg is determined using the pan optical encoder prior to the event. First, the camera's optical center is pointed at a known fiducial. A known fiducial is a marking or location whose coordinates are known by accurately measuring the coordinates in relation to the origin. The coordinates of a fiducial can be measured using a laser plane, a tape measure, and/or other suitable methods. The pan encoder reading in degrees (θ) is noted. The x, y coordinates of the fiducial (x1, y1) are noted. The x, y coordinates of the camera (x2, y2) are noted. An angle α is determined as:
α = tan⁻¹((y1 − y2)/(x1 − x2)).
The pan registration variable is computed as:
pan_reg = 180° − θ − α
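In code, the registration computation might look as follows. atan2 is used in place of tan⁻¹ so that the quadrant of α is resolved automatically; this substitution, like the function name, is an implementation choice rather than part of the description above.

```python
import math

def pan_registration(theta_deg, fiducial_xy, camera_xy):
    """Compute pan_reg from one sighting of a known fiducial.

    theta_deg is the pan encoder reading (degrees) while the optical center
    is pointed at the fiducial; coordinates are world x, y in yards.
    """
    x1, y1 = fiducial_xy
    x2, y2 = camera_xy
    alpha = math.degrees(math.atan2(y1 - y2, x1 - x2))  # quadrant-safe tan^-1
    return 180.0 - theta_deg - alpha

# Example: encoder read 30 degrees while sighted on a fiducial at (60, 20)
# from a camera at (10, -40) (made-up coordinates):
pan_reg = pan_registration(30.0, (60.0, 20.0), (10.0, -40.0))
```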
The tilt parameter is defined as tilt = ptzfdit.tilt − level_tilt, where ptzfdit.tilt is measured with the tilt optical encoder during the event. The variable level_tilt represents the output of the tilt optical encoder at level tilt. Level tilt is the tilt of the camera when the optical axis is perpendicular to the force of gravity. Level tilt is found by setting a laser plane next to the camera at the level of the camera's lens. A stick or other object on which the laser plane's marking can be viewed should be placed across the stadium at a height to receive the beam. By pointing the optical center of the camera at the point illuminated on the stick by the laser plane across the stadium, the camera is brought to level tilt. The level_tilt parameter is the encoder reading, in degrees (or radians), at level tilt.
The twist parameter is determined by pointing the camera at the field (or other portion of the environment) and sending the output of the camera to a computer. The image of the camera is superimposed over a transformed image of a model of the environment. A slider on a graphical user interface (GUI) is used to alter the twist of the camera image so that it completely aligns with the image of the model. The degree of alignment correction is recorded as the twist parameter. Note that the transformation of the image of the model is performed with the best parameters known at the time.
The nodal_dist variable (used below) is the distance from the pan axis to the nodal point of the camera model. The distance is positive in the direction of the camera along the optical axis through the front piece of glass on the lens of the camera. The nodal point is the position of the camera's virtual point of view, measured as a distance forward of the pan axis when the camera is in the horizontal position. The variable nodal_dist changes for different zoom percentages and extender settings of the camera. The manufacturer of the lens can provide values that determine nodal_dist at different zoom percentages and extender settings. In one example, the manufacturer of the lens can provide a table of the distance of the nodal point from the front piece of glass on the lens for each extender setting and a range of zoom percentages. For example, if the distance of the nodal point from the front piece of glass on the lens is dp yards, and the length of the lens is lens_len yards, then nodal_dist = lens_len − dp, where nodal_dist is measured in yards. If data from the manufacturer of the lens is not available, the information can be measured on an optical bench and a lookup table built as a function of zoom position, focus, and 2X Extender setting. The information for the lookup table is measured by placing two targets in the view of the camera, off-center, one farther away than the other, so that they appear in line through the viewfinder. Where a line extended through those targets intersects the optical axis of the camera is the position of the nodal point, or virtual point of view.
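A sketch of a nodal_dist lookup follows, interpolating an assumed manufacturer table for one extender setting. The table values and lens length are illustrative, not real lens data.

```python
import numpy as np

# Assumed manufacturer data: nodal-point distance dp (yards, measured from
# the front glass) tabulated against zoom percentage, one table per extender
# setting. These numbers are illustrative only.
ZOOM_PCT = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
DP_1X    = np.array([0.30, 0.28, 0.25, 0.21, 0.16])
LENS_LEN = 0.45  # lens length in yards (assumed)

def nodal_dist(zoom_pct: float, dp_table=DP_1X) -> float:
    """nodal_dist = lens_len - dp, with dp interpolated from the table."""
    dp = np.interp(zoom_pct, ZOOM_PCT, dp_table)
    return LENS_LEN - dp
```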
The coordinates (cx, cy, cz) are the world coordinates of the location of the camera, which is defined as the intersection of the pan axis and the optical axis when the tilt is level and the pan measures zero on the pan encoder. The coordinates (lx, ly, lz) are the world coordinates of the nodal point of the camera model for a given tilt, pan, zoom, and extender setting of the camera. The coordinates (lx, ly, lz) are defined by (lx, ly, lz) = (cx, cy, cz) + (nx, ny, nz), where (nx, ny, nz, 1) = (0, nodal_dist, 0, 1)[Rx,tilt][Ry,pan]. The matrix [Rx,tilt] is defined as:
and the matrix [Ry,pan] is defined as
After using the transformation matrices, the system takes into account lens distortion. That is, each two-dimensional pixel position is evaluated in order to determine if the two-dimensional position should change due to lens distortion. For a given two-dimensional pixel position, the magnitude of a radius from the optical center to the two-dimensional pixel position is determined. Lens distortion is accounted for by moving the pixel's position along that radius by an amount ΔR:
ΔR = K·R²
where K is the lens distortion factor for the current zoom (discussed below) and R is the magnitude of the radius from the optical center to the two-dimensional pixel position.
At a fixed focus, the distortion factor is measured at a number of zoom values using a GUI slider to align the model to the video. These values are used to generate a distortion curve. During operation, the distortion factor at the current zoom is interpolated from the curve and applied to all transformed two-dimensional pixel positions. The distortion data can also be obtained from the lens manufacturer or can be measured by someone skilled in the art.
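Applying the interpolated distortion factor to a pixel position might look as follows; the optical center coordinates, the function name, and the sign convention of k are assumptions of this sketch.

```python
import math

def apply_distortion(px, py, cx, cy, k):
    """Move a pixel along its radius from the optical center by delta_R = K*R^2.

    (cx, cy) is the optical center in pixels and k the distortion factor
    interpolated for the current zoom; its sign and units depend on the
    calibration and are assumed here.
    """
    dx, dy = px - cx, py - cy
    r = math.hypot(dx, dy)
    if r == 0.0:
        return px, py                 # pixel at the optical center: no shift
    delta_r = k * r * r               # delta_R = K * R^2
    scale = (r + delta_r) / r         # move the pixel along its radius
    return cx + dx * scale, cy + dy * scale
```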
The foregoing detailed description of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed, and obviously many modifications and variations are possible in light of the above teaching. The described embodiments were chosen in order to best explain the principles of the invention and its practical application to thereby enable others skilled in the art to best utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated. The invention is, thus, intended to be used with many different types of live events including various sporting events and non-sporting events. It is intended that the scope of the invention be defined by the claims appended hereto.
This Application claims the benefit of U.S. Provisional Application No. 60/166,725, Measuring Camera Attitude, filed on Nov. 22, 1999. That Provisional Application is incorporated herein by reference.