System and method for adjusting an image for a vehicle mounted camera

Abstract
A system and method provides an image that adjusts in response to at least one vehicle mounted sensor.
Description
TECHNICAL FIELD

The present invention is generally related to vehicle mounted cameras. More particularly, example embodiments of the present invention are related to systems and methods for adjusting an image, e.g., an image horizon, for a vehicle mounted camera.


BACKGROUND OF THE INVENTION

Vehicle mounted cameras are utilized in a variety of applications, from personal use recording street, track or flight performance to professional use in racecars.


Referring to Prior Art FIGS. 1 and 2, a traditional camera image in NASCAR is illustrated generally at 10, with FIGS. 1 and 2 illustrating a fixed image horizon (note virtual image horizon line 12 provided across the image to show the fixed perspective of the image) relative to the hood 14 of the racecar between a straightaway and a turn. However, this virtual line 12 shows a change in horizon relative to the sky 16 due to a change in angle of the track.


What is needed in the art is a system and method that permits adjustment of an image from a vehicle mounted camera in a desired fashion.


SUMMARY OF THE INVENTION

The present system and method for adjusting an image for a vehicle mounted camera overcomes and alleviates the problems and disadvantages in the prior art by providing an adjustable image that adjusts in response to at least one vehicle mounted sensor.


In exemplary embodiments, telemetry of a vehicle from a plurality of sensors may be used to automatically adjust an image, e.g. an image horizon, in a desired way.


In other exemplary embodiments, data from at least one sensor is used to automatically adjust an image horizon to match a skyline horizon during tilting of a vehicle.


In other exemplary embodiments, both image horizon and zoom are automatically adjusted during tilting of a vehicle.


In exemplary embodiments, such image horizon adjustment may be provided as a digital video effect, alleviating the need to actually adjust the angle of a camera during vehicle tilt.


The above and other exemplary embodiments will be discussed in more detail below in the detailed description of the invention.





BRIEF DESCRIPTION OF THE DRAWINGS

Many aspects of the invention can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present invention. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views. In the FIGURES:


PRIOR ART FIG. 1 is a view of a racecar camera image with a fixed image horizon on a racetrack straightaway;


PRIOR ART FIG. 2 is a view of the racecar camera image with a fixed image horizon on a banked turn of a racetrack;



FIG. 3 is a flowchart of an exemplary method for adjusting the image horizon of a vehicle mounted camera;



FIG. 4 is a view of a racecar camera image with an adjustable image horizon on a racetrack straightaway;



FIG. 5 is a view of a racecar camera image with an adjusted image horizon on a racetrack turn with zoom remaining constant;



FIG. 6 is a view of a racecar camera image with an adjustable image horizon on a racetrack straightaway;



FIG. 7 is a view of a racecar camera image with an adjusted image horizon on a racetrack turn with adjusted zoom;



FIG. 8 is an exemplary system for adjusting an image for a vehicle mounted camera;


PRIOR ART FIG. 9 is a diagram comparing relative pixel dimensions of high definition and greater than high definition images;



FIG. 10 is an exemplary graphical user interface of a 4K captured image with a 720p selectable extraction window;



FIG. 11 is an exemplary first system for capturing and transporting a 4K image to an offsite processor and graphical user interface; and



FIG. 12 is an exemplary second system for capturing and processing a 4K image onsite, followed by transport of a high definition image offsite.





DETAILED DESCRIPTION OF THE INVENTION

Further to the brief description provided above and associated textual detail of each of the FIGURES, the following description provides additional details of example embodiments of the present invention. It should be understood, however, that there is no intent to limit example embodiments to the particular forms and particular details disclosed, but to the contrary, example embodiments are to cover all modifications, equivalents, and alternatives falling within the scope of example embodiments and claims. Like numbers refer to like elements throughout the description of the FIGURES.


It will be understood that, although the terms first, second, etc. may be used herein to describe various steps or calculations, these steps or calculations should not be limited by these terms. These terms are only used to distinguish one step or calculation from another. For example, a first calculation could be termed a second calculation, and, similarly, a second step could be termed a first step, without departing from the scope of this disclosure. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.


As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises”, “comprising”, “includes” and/or “including”, when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.


It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the FIGURES. For example, two FIGURES shown in succession, or steps illustrated within any given FIGURE, may in fact be executed substantially concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.


Hereinafter, exemplary embodiments of the present invention are described in detail.


As noted above, the present invention relates to adjusting an image, e.g., an image horizon, for a vehicle mounted camera by providing an image that adjusts in response to at least one vehicle mounted sensor.


In exemplary embodiments, telemetry of a vehicle from a plurality of sensors may be used to automatically adjust an image horizon in a desired way. Sensor data may include any convenient type of data, including gyro data, vehicle angle, attitude, altitude, speed, acceleration, traction, etc., data, navigational data, or the like. Sensor data may also comprise data that describes environmental conditions for the vehicle, such as weather, sensed track conditions, wind, including turbulence, shear, etc., temperature, and others, including any sensed data that may be useful in adjusting an image.
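

By way of illustration only, the mapping from raw telemetry to a horizon-correction angle might be sketched as follows. This is a minimal Python sketch under assumed conventions; the field names, sign convention and smoothing constant are illustrative assumptions, not taken from the specification:

```python
from dataclasses import dataclass

@dataclass
class TelemetrySample:
    """Hypothetical per-frame telemetry packet from vehicle mounted sensors."""
    roll_deg: float    # vehicle roll (bank) angle in degrees, e.g., from a gyro/IMU
    pitch_deg: float   # vehicle pitch angle in degrees
    speed_mps: float   # vehicle speed in meters per second

class HorizonEstimator:
    """Smooths noisy roll telemetry into a horizon-correction angle."""

    def __init__(self, smoothing: float = 0.5):
        # Exponential smoothing keeps the digital effect from jittering with
        # high-frequency vibration picked up by the sensor.
        self.smoothing = smoothing
        self._roll = 0.0

    def update(self, sample: TelemetrySample) -> float:
        self._roll = self.smoothing * self._roll + (1.0 - self.smoothing) * sample.roll_deg
        # Counter-rotate the image by the smoothed vehicle roll so the image
        # horizon stays aligned with the skyline horizon.
        return -self._roll

# As the car banks toward 12 degrees, the correction angle trends toward -12 degrees.
estimator = HorizonEstimator()
correction = 0.0
for roll in (0.0, 4.0, 8.0, 12.0, 12.0, 12.0):
    correction = estimator.update(TelemetrySample(roll, 0.0, 80.0))
print(f"correction after banking: {correction:.1f} degrees")
```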


Such adjusting of an image may include, as in specific examples described below, adjustment of an image horizon, or another type of image adjustment, such as crop, selection of image portions, tracking of objects of interest in images, rendering selective high definition images from greater than high definition cameras, selective capture of image points of interest, adjustment of the image responsive to environmental conditions, etc. Examples are described by co-pending U.S. patent application Ser. No. 13/567,323 to the present inventor, filed Aug. 6, 2012 and claiming priority to U.S. Patent Application Ser. Nos. 61/515,549, filed Aug. 5, 2011 and 61/563,126, filed Nov. 23, 2011, the entire contents of which are incorporated herein by reference. A selection from Ser. No. 13/567,323 relating to selective capture and presentation of native image portions follows:


Common image or video formats are typically referred to either in terms of vertical resolution or horizontal resolution. Prior Art FIG. 9 shows an example of relative pixel dimensions at a 2.39:1 aspect ratio, with 720p and 1080p formats being letterboxed.


Examples of vertical high resolution designators are 720p (1280×720 pixels), 1080i (utilizing an interlace of two fields of 1920×540 pixels for a total resolution of 1920×1080 pixels) or 1080p (representing a progressive scan of 1920×1080 pixels).


Examples of horizontal high resolution designators, which are more common to digital cinema terminology, include 2K (2048 pixels wide) and 4K (4096 pixels wide). Overall resolution would depend on the image aspect ratio, e.g., a 2K image with a Standard or Academy ratio of 4:3 would have an overall resolution of 2048×1536 pixels, whereas an image with a Panavision ratio of 2.39:1 would have an overall resolution of 2048×856 pixels. PRIOR ART FIG. 9 illustrates a comparison of relative pixel dimensions for 720p, 1080p, 2K and 4K captured images.
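

The resolution arithmetic above is straightforward to verify; a short sketch (assuming the fractional pixel count is simply truncated, which matches the figures quoted in the text):

```python
def frame_size(width_px: int, aspect_w: float, aspect_h: float) -> tuple[int, int]:
    """Return (width, height) for a given horizontal resolution and aspect ratio."""
    return width_px, int(width_px * aspect_h / aspect_w)

print(frame_size(2048, 4, 3))      # 2K at Academy 4:3   -> (2048, 1536)
print(frame_size(2048, 2.39, 1))   # 2K at Panavision    -> (2048, 856)
print(frame_size(4096, 2.39, 1))   # 4K at Panavision    -> (4096, 1713)
```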


Currently, technologies exist for greater than high definition capture for digital cinema, e.g. up to 2K, 4K and beyond. However, for consumer home viewing of the captured digital cinema, the captured image is compressed down at the distributing studio to a version that is specific to traditional usable consumer high definition formats for broadcast or other distribution, e.g., at 720p, 1080i or 1080p.


Also, while digital cinema has utilized large resolution capture, traditional broadcast capture has not. This broadcast capture is performed at the desired consumer display resolution, e.g., 1080p, both due to limitations at the consumer display device as well as to bandwidth restrictions of broadcast carriers. Thus, in scenarios calling for magnification of the broadcast image, for example to better show selected detail within an image, the display resolution is considerably less than the native image captured at the venue.


In exemplary embodiments related to selective capture, a first image or video is captured at a first resolution, which resolution is greater than high definition and higher than a predetermined broadcast display resolution. A desired portion of the first image or video is then displayed at a second, lower resolution, which resolution is less than and closer to the predetermined broadcast display resolution. Accordingly, a selected portion of the captured image may be displayed at or near the predetermined broadcast display resolution (i.e., minimizing or eliminating loss of image detail relative to the predetermined broadcast display resolution).


An example of this is illustrated at FIG. 10, which shows a screenshot of a full-raster 4K moving video image 1110. A portion of the 4K image, illustrated as a 720p moving video selectable extraction window 1112, is then selected for presentation. Thus, native image capture occurs at a greater than high definition resolution, and portions of that greater than high definition image are selected for presentation via the 720p extraction window. While FIG. 10 specifically illustrates 4K capture and a 720p extraction window, it should be recognized that both or either of the captured image and extraction window may be provided at or sized to other resolutions.
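

As a rough illustration of the extraction step, the window can be thought of as a 1:1 crop out of the native raster, so no scaling (and hence no loss of captured detail) is involved. A minimal sketch, assuming NumPy arrays for frames; the function name and clamping behavior are illustrative assumptions:

```python
import numpy as np

def extract_window(frame: np.ndarray, x: int, y: int,
                   out_w: int = 1280, out_h: int = 720) -> np.ndarray:
    """Crop a native-resolution frame to an extraction window.

    Because the window is cut 1:1 from the larger raster (no scaling),
    the extracted 720p image keeps the full pixel detail of the capture.
    """
    h, w = frame.shape[:2]
    # Clamp so the window never leaves the captured raster.
    x = max(0, min(x, w - out_w))
    y = max(0, min(y, h - out_h))
    return frame[y:y + out_h, x:x + out_w]

# A full-raster 4K frame (4096 x 2160 here, as an illustrative size).
frame_4k = np.zeros((2160, 4096, 3), dtype=np.uint8)
window = extract_window(frame_4k, x=1500, y=900)
print(window.shape)  # (720, 1280, 3)
```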


Also, while one extraction window is illustrated in FIG. 10, the present disclosure contemplates simultaneous multiple extraction windows that may be applied to the same captured image.


In further exemplary embodiments, the selectable extraction window (1112 in FIG. 10) is provided at a graphical user interface (“GUI”) (1114 in FIGS. 11 and 12) that is configured to allow an operator to navigate within a captured image and select portions of the captured image for presentation. In exemplary embodiments, the extraction window is configured to allow the operator to adjust the size and position of the extraction window. In other exemplary embodiments, the extraction window is configured to track or scan across moving images, e.g., to follow a play or subject of interest during a sporting event. In other exemplary embodiments, plural operators may extract from the same images via the same or via plural GUIs.


Referring now to FIGS. 11 and 12, processing of the captured images may occur either offsite (FIG. 11) or onsite (FIG. 12). Referring to FIG. 11, an exemplary system is illustrated wherein a camera 1116 captures 4K images onsite, e.g., at a field (shown generally at 1118) for a sporting event. A transport mechanism 1120, e.g. a fiber capable of transporting a full bandwidth 4K video, transports the captured images to an operations base (“OB”) (shown generally at 1122), e.g., a production truck away from the field 1118.


An image recorder 1124 records the captured images, e.g., as a data stream on a server, and is configured to allow an operator to go back in time relative to the recording and examine selected portions of the captured image as described above. Such control is provided to an operator via the GUI 1114 through a processor 1126 interfacing with the GUI 1114 and recorder 1124. In exemplary embodiments, the recorder, processor and GUI are configured to allow the operator to go back instantaneously or near-instantaneously to select portions of the recorded image for presentation.


For example, with regard to FIG. 10, an operator in a truck would use a GUI to navigate the full raster 4K image and maneuver the selective 16:9 extraction window, in a manner similar to a cursor, to select an area of interest. In exemplary embodiments, the GUI is configured such that the extraction window may select an area of interest in one or both of live and recorded video. Also, as has been noted above, the present disclosure contemplates sizing and zooming capabilities for the extraction window. In other exemplary embodiments, the system is configured to mark keyframes and establish mapping for desired moves, e.g., pans and zooms, among others, around the image.
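

The keyframe-and-mapping idea mentioned above could, for example, be reduced to interpolating operator-marked window states into a per-frame path. A hedged sketch; the tuple layout, frame rate and linear interpolation are assumptions rather than the disclosed implementation:

```python
def plan_move(keyframes, fps: float = 59.94):
    """Expand operator-marked keyframes into a per-frame pan/zoom path.

    `keyframes` is a list of (time_s, x, y, zoom) tuples marked at the GUI;
    intermediate frames are linearly interpolated, which is enough to
    pre-program a smooth pan or push-in around the captured raster.
    """
    path = []
    for (t0, x0, y0, z0), (t1, x1, y1, z1) in zip(keyframes, keyframes[1:]):
        steps = max(1, int((t1 - t0) * fps))
        for i in range(steps):
            a = i / steps
            path.append((x0 + a * (x1 - x0),
                         y0 + a * (y1 - y0),
                         z0 + a * (z1 - z0)))
    path.append(keyframes[-1][1:])
    return path

# Two-second pan from the left of the 4K raster to a zoomed view on the right.
moves = plan_move([(0.0, 200, 700, 1.0), (2.0, 2600, 700, 1.6)])
print(len(moves), moves[0], moves[-1])
```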


Referring again to FIG. 11, in exemplary embodiments, the output 1128 of the system (e.g., a 720p/59.94 output relative to a 4K capture) is provided to a router 1130 that allows the output to be taken live to a switcher 1132 or to be ingested at a server 1134 (“EVS”) for later playout. Also, in exemplary embodiments, a resulting image can be slowed down for replay or rendered as a still image, if desired, either at the server 1134 or at the operator's position (via processor 1126).



FIG. 12 provides an alternate exemplary embodiment, wherein capture, transport and recording of the native image (in this example, 4K images) occurs onsite, e.g., at the field 1118 of a sporting event. An onsite processor 1126 provides or interfaces with an operator GUI 1114 in an operations base 1122 (e.g., a truck, though the GUI could be accessed from any convenient location) and provides a reference video 1138 of the image to allow the operator to navigate the image via the extraction window. The output 1128 is then transported from the field to an offsite router 1130.


In another embodiment, at least one GUI is accessed by a tablet controller as a navigation tool for the system. Such a tablet controller may be wireless and portable to allow for a flexible primary or supplemental navigation tool.


In other exemplary embodiments, multiple cameras may be positioned to capture images from different points of view, and extraction windows may be provided relative to the multiple image captures in a system for selectively displaying portions of native images from different points of view.


Further exemplary embodiments provide real time or near real time tracking of subjects of interest (e.g., identified, selected or pre-tagged players of interest, or automatic tracking of a ball in a game). Additional exemplary embodiments also provide virtual directing of operated and automatically tracked subjects of interest for cutting into a full live broadcast, utilizing backend software and tracking technology to provide a virtual viewfinder that operates in a manner similar to a human camera operator. Such processes may also use artificial intelligence technology for simple tracking, e.g., of a single identified object, or for more complex operations approximating motions utilized by human camera operators, e.g., pan, tilt and zoom of the extraction window. For those examples using 4K (or the like) capture, camera capture could utilize a specifically designed 4K camera. A camera may also use wider lensing to capture more of the subject, with possible reconstituting or flattening in post production. Also, different lensing can be used specific to different applications.
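

For the simple tracking case, one could imagine a follower that re-aims the extraction window at a reported subject position each frame, with a rate limit so the motion resembles an operator's pan rather than a hard snap. A sketch under those assumptions; the rate-limit constant, window size and clamping are illustrative choices:

```python
class WindowTracker:
    """Pans a fixed-size extraction window toward a subject of interest."""

    def __init__(self, raster_w: int, raster_h: int,
                 win_w: int = 1280, win_h: int = 720, max_step: int = 40):
        self.raster_w, self.raster_h = raster_w, raster_h
        self.win_w, self.win_h = win_w, win_h
        self.max_step = max_step          # pixels per frame, keeps motion operator-like
        self.x, self.y = 0, 0             # top-left corner of the window

    def follow(self, subject_x: float, subject_y: float) -> tuple[int, int]:
        """Move toward centering the subject, clamped to the captured raster."""
        target_x = subject_x - self.win_w / 2
        target_y = subject_y - self.win_h / 2
        self.x += int(max(-self.max_step, min(self.max_step, target_x - self.x)))
        self.y += int(max(-self.max_step, min(self.max_step, target_y - self.y)))
        self.x = max(0, min(self.x, self.raster_w - self.win_w))
        self.y = max(0, min(self.y, self.raster_h - self.win_h))
        return self.x, self.y

# Follow a ball reported by a tracker as it moves across a 4K raster.
tracker = WindowTracker(4096, 2160)
for ball_x in range(1800, 2600, 200):
    pos = tracker.follow(ball_x, 1100)
print(pos)
```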


Such processes may use the above-described multiple cameras and/or multiple extraction windows, or may run with specific regard to one camera and/or one extraction window. In such a way, an artificial intelligence can automatically capture, extract and display material for broadcast, utilizing the extraction window(s) as virtual viewfinders.


Additional exemplary embodiments also provide for virtual 3D extraction, e.g., via a single camera at 4K or 8K with a two-window output.


In other exemplary embodiments, an increased image capture frame rate relative to a broadcast frame rate is utilized along with, or in lieu of, an increased image capture resolution, as has been discussed above.


In such embodiments, a first video is captured at a first frame rate, which frame rate is higher than a predetermined broadcast frame rate. A desired portion of the first video is then displayed at a second, lower frame rate, which frame rate is less than and closer to the predetermined broadcast frame rate. The desired portion of the first video is captured by an extraction window that extracts frames across the native captured video. In such a way, the extracted video provides smooth and clear video, without edgy or blurred frames. Such captured first video may be at any frame rate that is above the predetermined broadcast frame rate.
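

One plausible way to read the frame extraction described above is as an even selection of captured frames at the broadcast cadence; a small sketch under that assumption (the function name and selection rule are illustrative, not the claimed method):

```python
def extract_broadcast_frames(capture_fps: float, broadcast_fps: float, n_captured: int):
    """Pick captured-frame indices to play out at the broadcast frame rate."""
    if capture_fps < broadcast_fps:
        raise ValueError("capture rate must be at least the broadcast rate")
    step = capture_fps / broadcast_fps
    n_out = int(n_captured / step)
    # Frames are picked evenly across the native capture so motion stays smooth.
    return [int(round(i * step)) for i in range(n_out)]

# 180 fps "supermotion" capture played out at 60 fps: every third captured frame.
print(extract_broadcast_frames(180.0, 60.0, 12))  # [0, 3, 6, 9]
```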


In further exemplary embodiments, the first video is captured at a first frame rate that is in super motion or hyper motion. In traditional video, this equates to approximately 180 frames per second (“supermotion”) or above (“hypermotion” or “ultramotion”) in a progressive frame rate. In exemplary embodiments, hypermotion is recorded in discrete times sufficient to capture a triggered instance of an action of a camera subject for playback. In other exemplary embodiments, the present system performs a full time record of a camera in hypermotion, e.g., of sufficient length for replay playback archiving, such as more than fifteen minutes, more than thirty minutes, more than an hour, more than an hour and a half, or more than two hours, among others.
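

The storage implication of such a full time hypermotion record can be estimated with simple arithmetic; the per-frame size used below is purely an assumed figure, since real numbers depend entirely on codec, resolution and content:

```python
def record_size_gb(fps: float, mbit_per_frame: float, minutes: float) -> float:
    """Rough storage needed for a continuous high-frame-rate record.

    `mbit_per_frame` is an assumed compressed size per frame; actual figures
    vary with codec, resolution and picture content.
    """
    total_mbit = fps * 60 * minutes * mbit_per_frame
    return total_mbit / 8 / 1024  # megabits -> gigabytes

# e.g., 180 fps at an assumed ~1 Mbit per compressed frame for a two-hour record.
print(round(record_size_gb(180, 1.0, 120), 1), "GB")
```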


In other exemplary embodiments, raw data from at least one camera is manipulated to adjust the image quality (make it “paintable”) to broadcast specifications. In exemplary embodiments, broadcast “handles” may be integrated into the system to affect the raw data in a manner that is more germane to broadcast color temperatures, hues and gamma variables.


The present disclosure thus advantageously provides systems and methods for selective capture and presentation of native image portions, for broadcast production or other applications. By providing exemplary embodiments using a selectable extraction window through a GUI, an operator has complete control over the portions within the native images that the operator desires for presentation. Also, by providing exemplary embodiments with image capture greater than high definition (e.g., 4K), desired portions of the image selected by an operator may be presented at or relatively near high definition quality (i.e., without relative degradation of image quality). Further, by providing exemplary embodiments with image capture frame rates greater than a predetermined broadcast frame rate, extracted video therefrom provides smooth and clear video, without edgy or blurred frames. Finally, various exemplary embodiments utilizing enhanced GUI features, such as automatic tracking of subjects of interest, or plural GUIs or extraction windows for one or plural (for different points of view) captured images, provide advantageous production flexibility.


Referring now to FIG. 3, an exemplary method for adjusting an image for a vehicle mounted camera is illustrated generally at 20, including receiving image data from a vehicle mounted camera (described at box 22), receiving data from at least one vehicle mounted sensor (described at box 24), and adjusting the image horizon utilizing the data received from the at least one vehicle mounted sensor (described at box 26).
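

Tying the three boxes of FIG. 3 together, one way the adjustment could be realized purely as a digital video effect is sketched below. OpenCV is assumed for the affine warp, and the rotate-by-negative-roll rule, function name and sign convention are illustrative assumptions rather than the claimed implementation:

```python
import cv2
import numpy as np

def adjust_horizon(frame: np.ndarray, roll_deg: float, zoom: float = 1.0) -> np.ndarray:
    """Apply a sensed vehicle roll as a digital video effect.

    Box 22: `frame` is the image received from the vehicle mounted camera.
    Box 24: `roll_deg` comes from at least one vehicle mounted sensor.
    Box 26: the image horizon is adjusted by counter-rotating (and optionally
    zooming) the frame, so the camera itself never has to move.
    """
    h, w = frame.shape[:2]
    center = (w / 2.0, h / 2.0)
    # Counter-rotate by the vehicle roll so the image horizon tracks the skyline.
    matrix = cv2.getRotationMatrix2D(center, -roll_deg, zoom)
    return cv2.warpAffine(frame, matrix, (w, h))

# Example: a frame captured mid-turn with the car banked 12 degrees.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
corrected = adjust_horizon(frame, roll_deg=12.0, zoom=1.35)
print(corrected.shape)  # (1080, 1920, 3)
```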


In exemplary embodiments, such adjusting of the image horizon may be applied as a digital video effect, such that actual manipulation of a vehicle mounted camera is unnecessary. Further, any type of image horizon adjustment is contemplated, whether or not such adjustment results in matching image horizon with a skyline horizon.


Additionally, it should be recognized that some or all of the image adjustment may be performed on the vehicle. For example, an on-board (on the vehicle) processor may perform some or all of the image adjustment based upon data from the at least one sensor. Allocating processing power to the vehicle may be particularly useful, e.g., in wireless transmission applications where a reduced data package helps to work within bandwidth limitations. Further, in exemplary embodiments, an operator can communicate with an on-board processor over a separate channel, leaving one or more wireless transmission channels from the vehicle substantially dedicated to video output.


Additionally, exemplary embodiments contemplate automatic adjustment of image horizon based upon received vehicle telemetry data.


In other exemplary embodiments, data from at least one sensor is used to automatically adjust an image horizon to match a skyline horizon during tilting of a vehicle, for example as a racecar banks around a turn off of a straightaway. Reference is made to FIGS. 4 and 5, as compared to PRIOR ART FIGS. 1 and 2. FIGS. 4 and 5 show exemplary adjustment of an image horizon 12 such that it matches a skyline horizon (shown as line 28) during tilting of a racecar as it transitions from a straightaway to a banked turn.


In other exemplary embodiments, both image horizon and zoom are automatically adjusted during tilting of a vehicle. Reference is made to FIGS. 6 and 7, as compared to PRIOR ART FIGS. 1 and 2. FIGS. 6 and 7 show exemplary adjustment of an image horizon 12 such that it matches a skyline horizon (shown as line 28) during tilting of a racecar as it transitions from a straightaway to a banked turn, with an increase in zoom during the turn.
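

One geometric reason horizon rotation and zoom go hand in hand is that counter-rotating a rectangular frame exposes blank corners unless the image is simultaneously magnified. The minimum zoom factor follows from simple trigonometry; this is a standard derivation offered as illustration, not taken from the specification:

```python
import math

def min_zoom_for_rotation(width: int, height: int, roll_deg: float) -> float:
    """Smallest magnification that keeps a counter-rotated frame free of blank corners.

    Rotating a width x height frame about its center by roll_deg leaves empty
    triangles in the corners of the output raster; scaling by at least this
    factor fills them again.
    """
    theta = math.radians(abs(roll_deg))
    aspect = width / height
    return math.cos(theta) + max(aspect, 1.0 / aspect) * math.sin(theta)

# A 16:9 image counter-rotated 12 degrees (roughly a banked-turn correction)
# needs about 1.35x zoom to stay full-frame.
print(round(min_zoom_for_rotation(1920, 1080, 12.0), 2))
```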



FIG. 8 illustrates an exemplary system for adjusting an image horizon from a vehicle mounted camera. The system 100 may include a server 101. The server 101 may include a variety of information, including but not limited to vehicle telemetry information, static and continuous video images from a vehicle mounted camera, algorithms and processing modules, and other data storage. The server 101 may be in communication with a network 106 via a communication channel 110.


Additionally, the system 100 may access or interface with additional, third party data sources or servers 103. Third party sources of data 103 may be in communication with the network 106 via a communication channel 111. It is noted that although illustrated as separate, the source 103 may include a server substantially similar to server 101. The server 101 or source 103 may include a data service provider, for example, a cellular service provider, a business information provider, or any other suitable provider or repository. The server 101 or source 103 may also include an application server providing applications and/or computer executable code implementing any of the interfaces/methodologies described herein. The server 101 or source 103 may present a plurality of application defaults, choices, set-ups, and/or configurations such that a device may receive and process the application accordingly. The server 101 or source 103 may present any application on a viewer interface or web-browser of a device for relatively easy selection by a viewer of the device.


Alternately, another server component or local computer apparatus, e.g., 104, 105 and/or 106, may produce the viewer interface and control connectivity to the server 101 or source 103. Also, the server 101 or one or more of the local computer apparatus 104, 105 and 106 may be configured to periodically access the source 103 and cache data relevant to data used in embodiments of the present invention.


The network 106 may be any suitable network, including the Internet, wide area network, and/or a local network. The server 101 and the source 103 may be in communication with the network 106 over communication channels 110, 111. The communication channels 110, 111 may be any suitable communication channels including wireless, satellite, wired, or otherwise.


An exemplary system 100 further includes computer apparatus 105 in communication with the network 106, over communication channel 112. The computer apparatus 105 may be any suitable computer apparatus including a personal computer (fixed location), a laptop or portable computer, a personal digital assistant, a cellular telephone, a portable tablet computer, a portable audio player, or otherwise. For example, the system 100 may include computer apparatuses 104 and 106, which are embodied as a portable cellular telephone and a tablet, respectively. The apparatuses 104 and 106 may include display means 141, 161, and/or buttons/controls 142. The controls 142 may operate independently or in combination with any of the controls noted above.


Further, the apparatuses 104, 105, and 106 may be in communication with each other over communication channels 115, 116 (for example, wired, wireless, Bluetooth channels, etc); and may further be in communication with the network 106 over communication channels 112, 113, and 114.


Therefore, the apparatuses 104, 105, and 106 may all be in communication with one or both of the server 101 and the source 103, as well as each other. Each of the apparatuses may be in severable communication with the network 106 and each other, such that the apparatuses 104, 105, and 106 may be operated without constant communication with the network 106 (e.g., using data connection controls of an interface). For example, if there is no data availability or if a viewer directs an apparatus to work offline, the data used by any of the apparatuses 104, 105, and 106 may be based on stored or cached information/parameters. It follows that each of the apparatuses 104, 105, and 106 may be configured to perform the methodologies described in the various exemplary embodiments.


Furthermore, using any of the illustrated communication mediums, the apparatuses 104, 105, and 106 may manipulate, share, transmit, and/or receive different data previously or currently produced at any one of the illustrated elements of the system 100. For example, data may be available on the server 101 and/or the source 103. Moreover, viewers of any of the devices 104, 105, and 106 may independently manipulate, transmit, etc., data, e.g., to separately determine a current value of the index at a given time. Thus, any suitable device may be utilized to use vehicle telemetry data from at least one vehicle sensor to adjust image horizon from a vehicle mounted camera.


It should be emphasized that the above-described embodiments of the present invention, particularly, any detailed discussion of particular examples, are merely possible examples of implementations, and are set forth for a clear understanding of the principles of the invention. Many variations and modifications may be made to the above-described embodiment(s) of the invention without departing from the spirit and scope of the invention. All such modifications and variations are intended to be included herein within the scope of this disclosure and the present invention and protected by the following claims.

Claims
  • 1. A method for adjusting a broadcast image for a vehicle mounted camera, comprising: providing a camera mounted in a vehicle, the camera configured to provide an image for a broadcast; providing at least one sensor in said vehicle, the sensor detecting a change in tilt of said vehicle upon entry of the vehicle into a banked position during a turn; and automatically adjusting an image horizon and zoom in response to said detected change in tilt of said vehicle.
  • 2. A method in accordance with claim 1, wherein said image horizon automatically adjusts in response to said detected change in tilt of said vehicle.
  • 3. A method in accordance with claim 2, wherein said image horizon is adjusted as a digital video effect.
  • 4. A method in accordance with claim 1, wherein said detected change in tilt is provided as vehicle telemetry data.
  • 5. A method in accordance with claim 2, wherein said image horizon is adjusted to match or approximate a skyline horizon during tilting of said vehicle.
  • 6. A method in accordance with claim 2, wherein said image horizon is adjusted with a variation in zoom of the image.
  • 7. A method in accordance with claim 1, further comprising: capturing a first image or video at a first resolution, which resolution is greater than high definition and higher than a predetermined second, output display resolution; selecting a first desired portion of the captured, native first image or video, wherein said first portion is at a resolution lower than that of the captured first image or video; and displaying said selected first portion at said second, output resolution.
  • 8. A method in accordance with claim 7, wherein said selecting of a desired first portion of the first image or video is provided by a graphical user interface having a selectable extraction window.
  • 9. A method in accordance with claim 8, wherein said extraction window is configured to allow an operator to navigate within said captured image or video and select portions thereof for presentation.
  • 10. A system for adjusting a broadcast image for a vehicle mounted camera, comprising: a camera mounted in a vehicle, the camera configured to provide an image for a broadcast; at least one sensor in said vehicle, the sensor configured to detect a change in tilt of said vehicle; a processor configured to access camera image data and data indicating tilt of said vehicle upon entry of the vehicle into a banked position during a turn; and a digital video effects component, the digital video effects component configured to automatically adjust an image horizon and zoom in response to said detected change in tilt of said vehicle.
  • 11. A system in accordance with claim 10, wherein said digital video effects component is configured to automatically adjust image horizon in response to said detected change in tilt of said vehicle.
  • 12. A system in accordance with claim 11, wherein said detected change in tilt is provided as vehicle telemetry data.
  • 13. A system in accordance with claim 11, wherein said image horizon is adjusted to match or approximate a skyline horizon during tilting of said vehicle.
  • 14. A system in accordance with claim 10, wherein said image horizon is adjusted with a variation in zoom of the image.
  • 15. A system in accordance with claim 10, wherein said camera is configured to capture a first image or video at a first resolution, which resolution is greater than high definition and higher than a predetermined second, output display resolution, the system further comprising: a processor in communication with a graphical user interface, said interface configured to select a first desired portion of the native, first image or video, wherein said first portion is at a resolution lower than that of the captured first image or video; and an output mechanism configured to transport said selected first portion to a router, switcher or server at said second, output resolution.
  • 16. A system in accordance with claim 15, wherein said graphical user interface has a selectable extraction window.
  • 17. A system in accordance with claim 16, wherein said extraction window is configured to allow an operator to navigate within said captured image or video and select portions thereof for presentation.
  • 18. A method for adjusting a broadcast image for a vehicle mounted camera, comprising: providing a camera mounted in a vehicle, the camera configured to provide an image for a broadcast; providing at least one sensor in said vehicle, the sensor detecting data of interest relative to said vehicle upon entry of the vehicle into a banked position during a turn; and automatically adjusting an image inclusive of image horizon and zoom in response to said detected data of said vehicle.
  • 19. A method in accordance with claim 18, wherein said sensor data includes one or more of: gyro data; vehicle angle; attitude; altitude; speed; acceleration; traction; and navigational data.
  • 20. A method in accordance with claim 18, wherein said sensor data includes environmental conditions for the vehicle, including one or more of: weather; sensed track conditions; wind; and temperature.
  • 21. A method in accordance with claim 18, wherein said image adjustment includes one or more of: adjustment of an image horizon; adjustment of image crop; selection of image portions; tracking of objects of interest in images; rendering selective high definition images from greater than high definition cameras; selective capture of image points of interest; or adjustment of the image responsive to environmental conditions.
  • 22. A method in accordance with claim 21, wherein said image adjustment is provided as a digital video effect.
  • 23. A method in accordance with claim 22, wherein at least a portion of said image adjustment is performed by an on-board vehicle processor.
  • 24. A method in accordance with claim 23, wherein said adjusted image is transmitted via wireless protocol to an external computing device.
  • 25. A method in accordance with claim 18, further comprising: capturing a first image or video at a first resolution, which resolution is greater than high definition and higher than a predetermined second, output display resolution; selecting a first desired portion of the captured, native first image or video, wherein said first portion is at a resolution lower than that of the captured first image or video; and displaying said selected first portion at said second, output resolution.
  • 26. A method in accordance with claim 25, wherein said selecting of a desired first portion of the first image or video is provided by a graphical user interface having a selectable extraction window.
  • 27. A method in accordance with claim 26, wherein said extraction window is configured to allow an operator to navigate within said captured image or video and select portions thereof for presentation.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of Ser. No. 14/207,998 filed Mar. 13, 2014, which claims the benefit of U.S. provisional patent application Ser. No. 61/778,641 filed Mar. 13, 2013, which is a continuation-in-part of U.S. patent application Ser. No. 13/567,323, filed Aug. 6, 2012, now U.S. Pat. No. 10,939,140, and claims priority to U.S. Patent Application Ser. Nos. 61/515,549, filed Aug. 5, 2011 and 61/563,126, filed Nov. 23, 2011, the entire contents of which are incorporated herein by reference.

Related Publications (1)
Number Date Country
20210306598 A1 Sep 2021 US
Provisional Applications (3)
Number Date Country
61778641 Mar 2013 US
61563126 Nov 2011 US
61515549 Aug 2011 US
Continuations (1)
Number Date Country
Parent 14207998 Mar 2014 US
Child 17346693 US
Continuation in Parts (1)
Number Date Country
Parent 13567323 Aug 2012 US
Child 14207998 US