The invention relates to systems for autonomous vehicles.
A number of companies have manufactured autonomous (i.e., self-driving) vehicles. The state of Nevada has declared that self-driving cars and trucks may legally use the roads. It is possible that other states will follow Nevada's lead and allow any number of autonomous taxis, tractor-trailers, private luxury cars, and other such vehicles onto the roads.
Unfortunately, autonomous vehicles are subject to many of the same limitations as traditional cars. For example, even if a vehicle is driven carefully, there is a risk of an accident that comes with driving in an unpredictable environment. Additionally, physical limits such as visibility or traction apply to autonomous vehicles just as to traditional cars. Sudden darkness, indiscernible roadway markers, intense cloudbursts or white-out snow conditions, as well as dark clothing or dark-colored animals at night are all examples of things that can interfere with the ability of a vehicle to safely navigate the streets. Even autonomous vehicles that use LIDAR and RADAR in combination with cameras are susceptible to accidents in a variety of poor conditions. There is a need, therefore, for improved technology to aid in broad implementation and use of autonomous vehicles.
The invention provides systems for autonomous vehicles that make use of real-time, high-dynamic range (HDR) cameras. An HDR camera for use in the invention comprises pipeline processing of pixel values from multiple image sensors to provide a view of a vehicle's environment in real-time, in a frame independent manner, as the vehicle operates. As pixel values are provided by the camera's image sensors, those values are streamed directly through a pipeline processing operation and on to the HDR system without any requirement to wait and collect entire images, or frames, before using the video information. The pipeline operates to merge images taken at different light levels by replacing saturated parts of an image with corresponding parts of a lower-light image to stream a video with a dynamic range that extends to include very low-light and very intensely lit parts of a scene. Because the dynamic range is high, the vehicle detects dim, hard to discern features, even if a scene is dominated by bright light such as oncoming vehicle headlights or the sun. The HDR video camera may be the primary road-viewing system of the vehicle or it may work in conjunction with other detection systems such as panoramic cameras or detection and range-finding systems like LIDAR or RADAR.
By using a real-time, streaming HDR video camera, the HDR system can detect and interpret features in the environment rapidly enough that the vehicle can be controlled in response to those features. Not only can poorly lit road signs, for example, be read by the system, but unexpected hazards can also be seen and processed in time for accidents to be avoided. Because the camera is HDR, hazards may be detected even where the environment would make human visual detection difficult or impossible. Since multiple sensors are operating at different light levels, even where a blinding sun appears in-scene, a low exposure sensor can form an image of obstacles on the road. Since the camera streams the HDR video through to the control system in real time, the control system can respond to sudden changes in the environment. For example, the vehicle can apply the brakes if an object unexpectedly appears in the roadway. Since the vehicle detects and interprets difficult-to-see objects, and since the vehicle is able to react to unexpected features in real time, costly crashes can be avoided. The operation of autonomous vehicles will be safer, making those vehicles suitable for a wide range of commercial and recreational uses.
In certain aspects, the invention provides an HDR system for a vehicle. The system includes an HDR camera operable to produce a real-time HDR video and a processing system. The processing system communicates with the HDR camera and a control system of a vehicle. Using the HDR video, the processing system determines an appearance of an item in an environment of the vehicle and issues to the control system an instruction that directs a change in the operation of the vehicle based on the appearance of the item.
The HDR system can be an installed, OEM part of a vehicle as the vehicle is shipped from a factory. The HDR system can be a component sold to OEM vehicle manufacturers, e.g., to be integrated into a vehicle on the assembly line or added as a dealer option. The HDR system can be an after-market accessory sold, for example, to a consumer to be used with an existing vehicle.
The HDR system offers functionality that may be employed in fully autonomous vehicles, as part of a driver assistance feature, or to provide accessory functionality outside of the primary driving functions, such as by augmenting a vehicle's navigational, safety, or entertainment systems. The HDR system may read lane markings and assist in keeping a vehicle in-lane. The system may, for example, use an HDR camera to read street signs or other landmarks to provide navigational assistance. Additionally or alternatively, the HDR camera may be used to detect and interpret road conditions such as dips, bumps, potholes, construction, metal plates, etc., and set up a car's electronic suspension damping for such features. The system may use one or more HDR cameras to collect information and feed the information to a server for, for example, larger cartographical projects, such as building a road and business database for a navigational or emergency service system. In preferred embodiments, the HDR camera-based system is used to improve the utility and safety of fully autonomous vehicles. As discussed in greater detail herein, a fully autonomous vehicle can use the HDR video camera-based system to fully see and interpret all manner of detail in the road and environment, providing for optimized safety and efficiency in operation.
The HDR system may be provided as part of, or for use in, an automobile, such as a consumer's “daily driver” or in a ride-sharing or rental car. Such a vehicle will typically have 2 to 7 seats and a form factor such as a sedan, compact SUV or CUV, SUV, wagon, coupe, small truck, roadster, or sports car. Additionally or alternatively, the HDR system may be provided as part of, or for use in, a cargo truck, semi truck, bus, or other load carrying vehicle. The HDR system may be provided as part of, or for use in, a military or emergency response vehicle, such as a HUMVEE, tank, jeep, fire truck, police vehicle, ambulance, bomb squad vehicle, troop transport, etc. The HDR system may be provided as part of, or for use in, a utility vehicle such as a forklift, warehouse robot in a distribution facility, office mail cart, golf cart, personal mobility device, autonomous security vehicle, Hollywood movie dolly, amusement park ride, tracked or trackless mine cart, or others. The HDR system may be provided as part of, or for use in, a non-road-going vehicle, such as a boat, plane, train or submarine. In fact, it may be found that the HDR camera offers particular benefits for vehicles that operate in lighting conditions not well suited to the human eye, such as in the dark, among rapidly flashing lights, extremely bright lights, unexpected or unpredictable lighting changes, flashing emergency lights, light filtered through gels or other devices, night-vision lighting, etc. Thus, compared to vehicles controlled solely by a human, a vehicle using the control system may perform better in environments such as night, underground, Times Square, lightning storms, house fires or forest fires, emergency road conditions, military battles, deep-sea dives, mines, etc.
The HDR system may be provided as part of, or for use in, a military or emergency vehicle. The real-time HDR video camera provides the ability to detect and respond to a variety of inputs that a human would have difficulty processing, such as large numbers of inputs in a busy environment, or hard to detect inputs, such as very small things far away. As but one example, a squadron of airplanes using the HDR systems could detect and respond to each other as well as to ambient clouds, birds, topography, etc., to fly in perfect formation for long distances, e.g., and even maintain a formation while flying beneath some critical altitude over varying topography. In some embodiments, the HDR system is for a military or emergency vehicle and provides an autopilot or assist functionality. An operator can set the system to control the vehicle for a time. Additionally or alternatively, the system can be programmed to step in for an operator should the operator lose consciousness, get distracted, hit a panic button, etc. For example, the system can be connected to an eye tracker or physiological sensor such as a heart rate monitor, and can initiate a backup operation mode should such a sensor detect values outside a certain range (e.g., an extremely low or elevated heart rate; exaggerated or suppressed eye movements or eye movements not directed towards an immediate path of travel). The system can be operated to place a vehicle in a holding pattern, e.g., fly in a high-altitude circle for a few hours while a pilot sleeps. It will be appreciated that a wide variety of features and functionality may be provided by the vehicle.
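For illustration only, the following Python sketch shows the kind of threshold logic described above; the sensor names, numeric limits, and the engagement hook are hypothetical placeholders rather than part of any particular embodiment.

```python
# Hypothetical sketch: decide when to engage a backup (autopilot) operation mode
# based on operator physiological readings. Names and limits are placeholders.

def should_engage_backup(heart_rate_bpm: float,
                         eyes_on_path: bool,
                         hr_low: float = 40.0,
                         hr_high: float = 150.0) -> bool:
    """Return True if the readings suggest the system should step in for the operator."""
    hr_abnormal = heart_rate_bpm < hr_low or heart_rate_bpm > hr_high
    return hr_abnormal or not eyes_on_path

# Example: elevated heart rate with gaze directed away from the path of travel
if should_engage_backup(heart_rate_bpm=172.0, eyes_on_path=False):
    print("backup operation mode engaged")  # placeholder for the control-system handoff
```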
In preferred embodiments, the system includes the real-time HDR video camera and a processing system that communicates with the camera and a control system of a vehicle. The control system of the vehicle will typically include a vehicle's OEM electronic control unit (ECU), e.g., a hardware unit including memory coupled to a processor that is installed in a vehicle (e.g., bolted to the firewall) and controls functions such as fuel injection mapping, torque sensing/torque vectoring, steering, etc. The HDR camera's processing system is programmed to “talk to” the vehicle ECU. It is understood that vehicles may have one or more units providing ECU functionality. As used herein, ECU may be taken to refer to all such units operating together on a vehicle.
In preferred embodiments, the HDR camera has a plurality of image sensors coupled to a processing device (which in turn may be linked to the processing system). The HDR camera streams pixel values from the image sensors in a frame-independent manner through a pipeline on the processing device. The pipeline includes a kernel operation that identifies saturated pixel values and a merge module to merge the pixel values to produce the HDR video in real-time.
The camera may be mounted stationary on the vehicle. The camera may look in one direction, or it may move. For example, the camera may rotate in 360 degrees. The HDR camera may be a 360-degree camera that captures a 360-degree view around the vehicle, e.g., either a stationary 360-degree camera that captures a ring-shaped image, or a directional camera that rotates. In embodiments wherein the captured 360-degree view is ring-shaped, the processing system may perform a de-warping process to convert the 360-degree view into a rectangular panoramic image (e.g., for display to a human).
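As a rough illustration of such a de-warping step, the following Python sketch unwraps a ring-shaped capture into a rectangular panorama, assuming the ring is centered in the frame and using simple nearest-neighbor sampling; the geometry and parameter names are illustrative assumptions, not a required method.

```python
import numpy as np

def dewarp_ring(ring_img: np.ndarray, r_inner: int, r_outer: int,
                out_width: int = 1024) -> np.ndarray:
    """Unwrap a centered, ring-shaped 360-degree capture into a rectangular panorama
    using nearest-neighbor sampling (illustrative only)."""
    h, w = ring_img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    out_height = r_outer - r_inner
    theta = np.linspace(0.0, 2.0 * np.pi, out_width, endpoint=False)   # one column per bearing
    radius = np.linspace(r_outer, r_inner, out_height)                 # one row per radius
    rr, tt = np.meshgrid(radius, theta, indexing="ij")
    xs = np.clip((cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    ys = np.clip((cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    return ring_img[ys, xs]

# Example: unwrap a synthetic 480x480 ring capture into a 170x1024 panorama
pano = dewarp_ring(np.zeros((480, 480, 3), dtype=np.uint8), r_inner=60, r_outer=230)
print(pano.shape)  # (170, 1024, 3)
```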
In certain embodiments, the system is operable to work in conjunction with, or include, a detection and ranging sensor (generally a RADAR or LIDAR sensor). The processing system may be operable to detect an object with the detection and ranging sensor, detect the object with the HDR camera, and correlate an image of the object in the HDR video with a detected range of the object determined via the detection and ranging system.
The system provides a variety of features and benefits. For example, an HDR camera system may be particularly adept at operating in high-glare conditions. When driving through a city during evening rush hour, for example, the processing system may be operable to detect glare in the environment within the 360-degree view and use the HDR camera to capture an HDR image of a portion of the environment affected by the glare. The system may be particularly adept at responding to situations where light levels provide important information. For example, the HDR camera can detect an appearance of an item such as a taillight of another vehicle and detect an illumination status of the taillight. Even where the HDR video includes a direct or reflected view of the sun, the HDR camera can detect the presence of a moving object in scene.
The HDR camera itself preferably includes a lens and at least one beamsplitter. The plurality of image sensors includes at least a high exposure (HE) sensor and a middle exposure (ME) sensor. The HE sensor, the ME sensor, the lens and the at least one beamsplitter may be arranged to receive an incoming beam of light and split the beam of light into at least a first path that impinges on the HE sensor and a second path that impinges on the ME sensor. The beamsplitter directs a majority of the light to the first path and a lesser amount of the light to the second path. In preferred embodiments, the first path and the second path impinge on the HE and the ME sensor, respectively, to generate images that are optically identical but for light level. The processing device of the HDR camera may be a field-programmable gate array or an application-specific integrated circuit that includes the pipeline. In some embodiments, the kernel operation operates on pixel values as they stream from each of the plurality of image sensors by examining, for a given pixel on the HE sensor, values from a neighborhood of pixels surrounding the given pixel, finding saturated values in the neighborhood of pixels, and using information from a corresponding neighborhood on the ME sensor to estimate a value for the given pixel. Optionally, the pipeline may include—in the order in which the pixel values flow: a sync module to synchronize the pixel values as the pixel values stream onto the processing device from the plurality of image sensors; the kernel operation; the merge module; a demosaicing module; and a tone-mapping operator.
In certain aspects, the invention provides a vehicle that includes an HDR camera operable to produce a real-time HDR video, a control system configured for operation of the vehicle; and a processing system. The processing system is operable to determine, based on the HDR video, an appearance of an item in an environment of the vehicle and cause the control system to make a change in the operation of the vehicle based on the appearance of the item. Preferably, the HDR camera includes a plurality of image sensors coupled to a processing device, with the HDR camera being configured to stream pixel values from each of the plurality of image sensors in a frame-independent manner through a pipeline on the processing device, wherein the pipeline includes a kernel operation that identifies saturated pixel values and a merge module to merge the pixel values to produce the HDR video in real-time.
The vehicle may include a 360-degree camera that captures a 360-degree view around the vehicle. In some embodiments, the captured 360-degree view is ring-shaped and the processing system performs a de-warping process to convert the 360-degree view into a rectangular panoramic image. The 360-degree camera may itself be a real-time HDR video camera that performs the pipeline processing. Additionally or alternatively, the HDR video camera may complement the operation of the 360-degree camera. For example, the processing system may detect glare in the environment within the 360-degree view and use the HDR camera to capture an HDR image of a portion of the environment affected by the glare. Systems of the invention may also include separate linked cameras placed at discrete positions on the vehicle.
In some embodiments, the vehicle includes a detection and ranging sensor, such as a RADAR or LIDAR device. The processing system detects an object with both the detection and ranging sensor as well as with the HDR camera, and correlates an image of the object in the HDR video with a detected range of the object determined via the detection and ranging system.
The processing system and the control system make a change in the operation of the vehicle based on the appearance of the item. In one example, the item is a taillight of another vehicle and determining the appearance includes detecting an illumination status of the taillight. In another example, the HDR video includes the sun, the item is a moving object, and determining the appearance includes determining that the item is present.
In preferred embodiments, the HDR camera uses multiple image sensors and a single lens. The image sensors all capture images that are identical (e.g., in composition and exposure time) but for light level. The HDR camera may include a lens and at least one beamsplitter. The plurality of image sensors preferably includes at least a high exposure (HE) sensor and a middle exposure (ME) sensor. The HE sensor, the ME sensor, the lens and the at least one beamsplitter may be arranged to receive an incoming beam of light and split the beam of light into at least a first path that impinges on the HE sensor and a second path that impinges on the ME sensor. The beamsplitter directs a majority of the light to the first path and a lesser amount of the light to the second path. The first path and the second path impinge on the HE and the ME sensor, respectively, to generate images that are optically identical but for light level.
In certain embodiments, the processing device comprises a field-programmable gate array or an application-specific integrated circuit that includes the pipeline. The kernel operation may operate on pixel values as they stream from each of the plurality of image sensors by examining, for a given pixel on the HE sensor, values from a neighborhood of pixels surrounding the given pixel, finding saturated values in the neighborhood of pixels, and using information from a corresponding neighborhood on the ME sensor to estimate a value for the given pixel. In some embodiments, the pipeline includes—in the order in which the pixel values flow: a sync module to synchronize the pixel values as the pixel values stream onto the processing device from the plurality of image sensors; the kernel operation; the merge module; a demosaicing module; and a tone-mapping operator. The pipeline may further include one or more of a color-correction module; an HDR conversion module; and an HDR compression module.
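For illustration, a much-simplified, software-only analogue of this stage ordering is sketched below in Python; generator functions stand in for the hardware sync, kernel, and merge modules, and the 12-bit range, 90% saturation level, and 12.2× HE/ME ratio are taken from the description elsewhere herein rather than from any required implementation.

```python
# Illustrative only: a software analogue of the pipeline stages described above.
# Each stage consumes and yields pixel values as they arrive (frame-independent),
# never holding a complete frame; only sync, kernel, and merge are shown here.

SATURATION = 0.9 * 4095   # 12-bit sensors assumed, saturation at 90% of full scale
EXPOSURE_RATIO = 12.2     # HE receives roughly 12.2x the light of ME (described elsewhere herein)

def sync(streams):
    """Interleave per-sensor pixel streams so corresponding values arrive together."""
    yield from zip(*streams)

def kernel_operation(pixels):
    """Tag each (HE, ME) pair with whether the HE value is saturated."""
    for he, me in pixels:
        yield he, me, he >= SATURATION

def merge(tagged):
    """Use the HE value unless saturated; otherwise fall back to the scaled ME value."""
    for he, me, saturated in tagged:
        yield me * EXPOSURE_RATIO if saturated else he

def run_pipeline(he_stream, me_stream):
    return merge(kernel_operation(sync([he_stream, me_stream])))

# Example with two tiny pixel streams
print(list(run_pipeline([100, 4095, 200], [9, 350, 17])))  # [100, 4270.0, 200]
```

In the actual pipeline, the demosaicing and tone-mapping stages would follow the merge in the same streaming fashion.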
Aspects of the invention provide a method for operating a vehicle. The method includes receiving light via an HDR camera on a vehicle. A beamsplitter splits the light onto a plurality of image sensors that capture values for each of a plurality of pixels on the sensors. As well as the splitting step, the method preferably includes streaming the pixel values to a processing device that uses a kernel operation to identify saturated pixel values and a merge module to merge the pixel values to produce the HDR video in real-time. The method may include demosaicing the video. The method includes determining, by the processing system and based on the HDR video, an appearance of an item in an environment of the vehicle and causing the control system to make a change in the operation of the vehicle based on the appearance of the item. The vehicle is preferably an autonomous vehicle.
Autonomous vehicles use a variety of digital sensors as part of an overall Advanced Driver Assistance Systems (ADAS). ADAS relies on inputs from multiple data sources and sensors in order to make driving decisions. ADAS are included in a vehicle to automate, adapt, or enhance vehicle systems for safety and better driving. ADAS include features designed to avoid collisions and accidents by offering technologies that alert the driver to potential problems, or to avoid collisions by implementing safeguards and taking over control of the vehicle. Adaptive features may automate lighting, provide adaptive cruise control, automate braking, incorporate GPS or traffic warnings, connect to smartphones, alert the driver to other cars or dangers, keep the vehicle in the correct lane, or monitor what is in “blind spots”. As used herein, ADAS can be taken to refer to all of the components that operate together to control the vehicle (e.g., cameras, plus ECUs, plus detectors, etc.). Components of ADAS may be built into cars, added as aftermarket add-on packages, or combinations of both. ADAS may use inputs from multiple data sources, including automotive imaging, LiDAR, radar, image processing, computer vision, and in-car networking. Additional inputs are possible from other sources separate from the primary vehicle platform, such as other vehicles, referred to as Vehicle-to-vehicle (V2V), or Vehicle-to-Infrastructure (such as mobile telephony or Wi-Fi data network) systems.
Here is provided a control system for an autonomous vehicle with a real-time HDR video camera that can be integrated into the ADAS. The use of real-time HDR video may increase the quality of the data from all the imaging sensors throughout an ADAS by increasing contrast ratio, color information, and critical details needed for safety functions while keeping bandwidth and latency low (real-time).
The HDR system offers functionality that may be employed in fully autonomous vehicles, as part of a driver assistance feature, or to provide accessory functionality outside of the primary driving functions, such as by augmenting a vehicle's navigational, safety, or entertainment systems. The system may, for example, use an HDR camera to read street signs or other landmarks to provide navigational assistance. Additionally or alternatively, the HDR camera may be used to detect and interpret road conditions such as dips, bumps, potholes, construction, metal plates, etc., and set up a car's electronic suspension damping for such features. The system may use one or more HDR cameras to collect information and feed the information to a server for, for example, larger cartographical projects, such as building a road and business database for a navigational or emergency service system. In preferred embodiments, the HDR camera-based system is used to improve the utility and safety of fully autonomous vehicles. As discussed in greater detail herein, a fully autonomous vehicle can use the HDR video camera-based system to fully see and interpret all manner of detail in the road and environment, providing for optimized safety and efficiency in operation.
Embodiments of the invention provide an HDR system for a military vehicle. The system includes an HDR camera operable to produce a real-time HDR video and a processing system. The processing system communicates with the HDR camera and a control system of a vehicle. Using the HDR video, the processing system determines an appearance of an item in an environment of the vehicle and issues to the control system an instruction that directs a change in the operation of the vehicle based on the appearance of the item. The processing system can be programmed to interface with a weapons or target-tracking system, a navigational system, the control system, or combinations thereof. The vehicle may be, for example, a Humvee or troop transport that uses the real-time HDR video camera to essentially see in difficult lighting conditions and drive through hostile terrain. The HDR video camera can provide a display for a human operator, the vehicle can be autonomous, or the vehicle can have autonomous systems assist a human operator. Because the camera is HDR, sudden flashes of bright light such as explosions do not impair the ability of the HDR system to see and navigate the environment.
Embodiments of the invention provide an HDR system for a boat. The system includes an HDR camera operable to produce a real-time HDR video and a processing system. The processing system communicates with the HDR camera and a control system of the boat. Using the HDR video, the processing system determines an appearance of an item in an environment of the boat and issues to the control system an instruction to control the boat based on the appearance of the item. Boats are essentially surrounded by water and do not offer the same visual cues as roadways. Swells and valleys among waves, with breaking crests and sea foam in the air, can present a scene of sudden bright sun glints and rapidly changing contrast without the types of anchor points the human eye expects to see. A human may be so consumed attempting to navigate a harbor as to lack the residual attention to understand the water's surges. Moreover, reading the crests and troughs and what all the buoys signify may be difficult for a human due essentially to strange patterns (both spatial and temporal) of visual contrast in the great volume of the sea. Additionally, the ceaseless roiling of endless waves may afford no purchase to the balancing tools of the inner ear, causing a human operator to lack the kinesthetic sense necessary to correctly perceive an absolute frame of reference including down and up and left and right. The HDR system may read surfaces of the waves, detect and interpret buoys and other navigational markers, and aid in controlling the boat. Optionally, the HDR system may communicate with a global positioning system, compass, level, or other such instruments to maintain an absolute reference frame. The system may interact with, or include, sonar systems that can read the ocean floor or look forward to obstacles. The processing system can synthesize this information and offer such useful benefits as, for example, an autopilot mode that drives a boat from slip to sea, navigating out of the harbor.
Embodiments of the invention provide an HDR system that provides an autopilot mode for a vehicle such as a boat, plane, or road-going vehicle. The system includes an HDR camera operable to produce a real-time HDR video and a processing system. The processing system communicates with the HDR camera and a control system of a vehicle. Using the HDR video, the processing system identifies items in an environment, navigational goals, landmarks, etc., and controls operation of the vehicle without human participation.
The HDR system can be an installed, OEM part of a vehicle as the vehicle is shipped from a factory. The HDR system can be a component sold to OEM vehicle manufacturers, e.g., to be integrated into a vehicle on the assembly line or added as a dealer option. The HDR system can be an after-market accessory sold, for example, to a consumer to be used with an existing vehicle. Once installed on a vehicle, the HDR system can be considered to be part of the vehicle's ADAS.
The ADAS include the control system 125 and the processing system 113. The vehicle 101 optionally includes as part of the ADAS a detection and ranging sensor 131, such as a RADAR device, a LIDAR device, others, or combinations thereof. The processing system 113 determines, based on the HDR video, an appearance of an item in an environment of the vehicle and causes the control system to make a change in the operation of the vehicle based on the appearance of the item. In preferred embodiments, the HDR camera 201 comprises a plurality of image sensors coupled to a processing device, and the HDR camera 201 is configured to stream pixel values from each of the plurality of image sensors in a frame-independent manner through a pipeline on the processing device. As discussed in greater detail below, the pipeline includes a kernel operation that identifies saturated pixel values and a merge module to merge the pixel values to produce the HDR video in real-time. The vehicle 101 may also include a 360-degree camera 129 that captures a 360-degree view around the vehicle.
The HDR system may be provided as part of, or for use in, any suitable vehicle including road-going cars and trucks such as consumer automobiles and work vehicles (e.g., semi trucks, buses, etc.). In certain embodiments, the HDR system is provided as part of, or for use in, a military or emergency response vehicle (e.g., jeep, ambulance, troop transport, HUMVEE). The HDR system may be provided as part of, or for use in, a utility vehicle such as a forklift or personal mobility device. The HDR system may be provided as part of, or for use in, a non-road-going vehicle, such as a boat, plane, train or submarine.
The HDR system offers benefits in “seeing” across a very high dynamic range including over a dynamic range greater than what can be perceived by the human eye and mind. Not only can the HDR system detect items across a greater dynamic range than a human, but the system is not subject to perception problems caused by limits in human consciousness and thought processes. For example, where some remarkable spectacle lies in the periphery of human perception, such as a bad car crash on the side of the road, it is a human tendency to turn attention to that spectacle at the expense of attention to the upcoming road. The HDR system is not subject to that phenomenon. Where a human pays attention to a spectacle, he or she may pay residual attention to the road ahead, but features in the road characterized by limited contrast (a white truck crossing the street with a bright sky background) may not cross the threshold for human perception. Where a vehicle is equipped with a traditional camera, a white truck against a bright white sky background may not be perceived due to the limited contrast presented. Thus an HDR system addresses road-travel concerns presented by traditional cameras and limits of human perception. For those reasons, an HDR system may have particular benefit in environments with exaggerated lighting conditions, environments packed with stimulus, or other environments that include unexpected and hard-to-anticipate content.
For example, it may be found that the HDR camera offers particular benefits for vehicles that operate in lighting conditions not well suited to the human eye, such as in the dark, among rapidly flashing lights, extremely bright lights, unexpected or unpredictable lighting changes, flashing emergency lights, light filtered through gels or other devices, night-vision lighting, etc. Thus, compared to vehicles controlled solely by a human, a vehicle using the control system may perform better in environments such as night, underground, Times Square, lightning storms, house fires or forest fires, emergency road conditions, military battles, deep-sea dives, mines, etc. To give one example, it is a known issue in law enforcement that motorists strike police cars with unusual frequency. Without being bound by any mechanism, it may be theorized that the flashing lights of police vehicles, when sitting alongside the road, create a non-constant signal that defies the ability of human perception to extrapolate from. Where a traditional vehicle drives towards a stationary feature, the human mind projects the likely relative positioning of that feature in the immediate future. The fact that police lights flash may defy the ability to make such a mental projection. Thus a driver may not be able to anticipate a path of travel that steers clear of a stopped vehicle with flashing lights. The HDR system is not subject to that limitation. The processing system 113 detects the police car lights regardless of brightness or flashing rate. Because the system is HDR, it is of minimal importance that the police car may be off to the side of the road in the dark. Because the HDR camera operates in real time, the system can project the relative positioning of the vehicles immediately in the future and avoid a collision.
By extension, it may be particularly valuable to include an HDR system on a military or emergency vehicle because those vehicles may be expected to operate near flashing lights frequently. The HDR system may be beneficial in any situation with extremes of light including extremes of timing (frequent pulses or flashes) or extremes of intensity.
Camera system 1425 may be used in traffic sign recognition, parking assistance, a surround view, and/or for lane departure warning. A short-range radar system 1429 (another detection and range-finding system) may be used to provide cross-traffic alerts. An ultrasound system 1435 may be included for, for example, parking assistance. The HDR camera 201 may be included in any of these systems. In preferred embodiments, the HDR camera is included as a component for either or both of the LIDAR system 1413 and the camera system 1425.
While the HDR camera 201 may include a “standard” field of view, in some embodiments, an HDR camera is used for the 360-degree camera 129. While various embodiments are within the scope of the invention, in some embodiments, the 360-degree camera 129 streams a real-time HDR video that possesses substantially a “ring” shape.
The processing system 113 may contribute to a variety of features and functionality that integrate the HDR camera with the ADAS.
In some embodiments of the vehicle 101, one or more components of the ADAS can operate to detect objects and determine a distance (or range of distances) to those objects. For example, the long range radar system 1407 may operate as a detection and ranging sensor 131, able to detect other vehicles in the roadway and determine a range for those vehicles. The HDR camera 201 may be used to supplement or complement the information provided by the detection and ranging capabilities. For example, the processing system 113 may be operable to detect an object with the detection and ranging sensor, detect the object with the HDR camera 201, and correlate an image of the object in the HDR video with a detected range of the object determined via the detection and ranging system.
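One way to picture the correlation step is sketched below; the bearing-based matching rule and the data structures are assumptions made for illustration and do not describe the actual ADAS interfaces.

```python
# Hypothetical sketch: associate a radar detection (bearing + range) with the
# image-space detection whose bearing is closest, producing fused records.

def correlate(radar_detections, camera_detections, max_bearing_error_deg=2.0):
    """radar_detections: list of (bearing_deg, range_m) tuples;
    camera_detections: list of (bearing_deg, label) tuples.
    Returns fused (label, bearing_deg, range_m) tuples."""
    fused = []
    for cam_bearing, label in camera_detections:
        nearest = min(radar_detections,
                      key=lambda r: abs(r[0] - cam_bearing),
                      default=None)
        if nearest is not None and abs(nearest[0] - cam_bearing) <= max_bearing_error_deg:
            fused.append((label, cam_bearing, nearest[1]))
    return fused

print(correlate([(1.2, 43.0), (-15.0, 80.5)], [(1.0, "vehicle"), (30.0, "sign")]))
# -> [('vehicle', 1.0, 43.0)]
```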
As another example, the processing system 113 may be operable to detect glare in the environment within the 360-degree view and use the HDR camera 201 to capture an HDR image of a portion of the environment affected by the glare.
One benefit offered by the use of the HDR camera 201 is that such an instrument is particularly well-suited to making meaningful interpretations of scenes in which valuable information is provided primarily by a difference in light levels. As an example, a difference between a dark brake light and an illuminated brake light provides significant information to the operation of motor vehicles, but manifests primarily as a difference in light levels. Thus, in some embodiments, the HDR camera 201 is helpful in determining the appearance of an item such as a taillight, turn signal, or brake light on another vehicle and detecting an illumination status of the taillight.
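A minimal sketch of such an illumination-status test follows, assuming a linear HDR frame and comparing a candidate lamp region against its surroundings; the region coordinates and the brightness ratio are illustrative placeholders.

```python
import numpy as np

# Hypothetical sketch: classify a candidate taillight region as lit or unlit by
# comparing its mean luminance against a surrounding region of a linear HDR frame.

def taillight_is_lit(hdr_frame: np.ndarray, region, surround, ratio: float = 4.0) -> bool:
    """region and surround are (y0, y1, x0, x1) crops of a single-channel HDR frame."""
    ry0, ry1, rx0, rx1 = region
    sy0, sy1, sx0, sx1 = surround
    region_mean = hdr_frame[ry0:ry1, rx0:rx1].mean()
    surround_mean = hdr_frame[sy0:sy1, sx0:sx1].mean()
    return region_mean > ratio * surround_mean

frame = np.full((100, 100), 0.01)
frame[40:50, 40:50] = 0.5                              # bright patch standing in for a lit lamp
print(taillight_is_lit(frame, (40, 50, 40, 50), (0, 100, 0, 100)))  # True
```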
At the core of the functionality offered by the system is the HDR camera 201 that operates to produce a real-time HDR video. The HDR camera 201 is connected to a control system 125 configured for operation of the vehicle through a processing system 113. In preferred embodiments, the HDR camera 201 comprises a plurality of image sensors coupled to a processing device, and the HDR camera 201 is configured to stream pixel values from each of the plurality of image sensors in a frame-independent manner through a pipeline on the processing device. The pipeline includes a kernel operation that identifies saturated pixel values and a merge module to merge the pixel values to produce the HDR video in real-time. The vehicle 101 may also include a 360-degree camera 129 that captures a 360-degree view around the vehicle.
The kernel operation 413 operates on pixel values 501 as they stream from each of the plurality of image sensors 265 by examining, for a given pixel on the HE sensor 213, values from a neighborhood 601 of pixels surrounding the given pixel, finding saturated values in the neighborhood 601 of pixels, and using information from a corresponding neighborhood 601 on the ME sensor 211 to estimate a value for the given pixel.
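A simplified, array-based software sketch of this neighborhood examination is shown below; indexing into whole arrays stands in for the streaming hardware kernel, and the 3×3 neighborhood, 12-bit range, 90% saturation level, and 12.2× exposure ratio follow the values used elsewhere in this description.

```python
import numpy as np

SATURATION = 0.9 * 4095   # 12-bit sensors assumed, saturation at 90% of full scale

def estimate_pixel(he: np.ndarray, me: np.ndarray, y: int, x: int,
                   exposure_ratio: float = 12.2, k: int = 1) -> float:
    """Estimate an HDR value for pixel (y, x) using its (2k+1)x(2k+1) HE neighborhood.
    If the HE pixel or any neighbor is saturated, fall back to the scaled ME value."""
    hood = he[max(0, y - k):y + k + 1, max(0, x - k):x + k + 1]
    if he[y, x] < SATURATION and not (hood >= SATURATION).any():
        return float(he[y, x])                       # HE data fully trustworthy
    return float(me[y, x]) * exposure_ratio          # fall back to the scaled ME value

he = np.full((5, 5), 1000.0)
he[2, 2] = 4095.0                                    # one saturated HE pixel
me = np.full((5, 5), 300.0)
print(estimate_pixel(he, me, 2, 2))                  # 3660.0 (300 * 12.2)
print(estimate_pixel(he, me, 0, 0))                  # 1000.0
```

This is a coarser stand-in for the blended, case-by-case merge detailed further below.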
Various components of the HDR camera 201 may be connected via a printed circuit board 205. The HDR camera 201 may also include memory 221 and optionally a processor 227 (such as a general-purpose processor like an ARM microcontroller). HDR camera 201 may further include or be connected to one or more of an input-output device 239 or a display 267. Memory can include RAM or ROM and preferably includes at least one tangible, non-transitory medium. The processor 227 may be any suitable processor known in the art, such as the processor sold under the trademark XEON E7 by Intel (Santa Clara, CA) or the processor sold under the trademark OPTERON 6200 by AMD (Sunnyvale, CA). Input/output devices according to the invention may include a video display unit (e.g., a liquid crystal display or LED display), keys, buttons, a signal generation device (e.g., a speaker, chime, or light), a touchscreen, an accelerometer, a microphone, a cellular radio frequency antenna, port for a memory card, and a network interface device, which can be, for example, a network interface card (NIC), Wi-Fi card, or cellular modem. The HDR camera 201 may include or be connected to a storage device 241. The plurality of sensors is preferably provided in an arrangement that allows multiple sensors 265 to simultaneously receive images that are identical except for light level.
The HDR camera 201 may include a lens 311 and at least one beamsplitter 301. The HE sensor 213, the ME sensor 211, the lens 311 and the at least one beamsplitter 301 are arranged to receive an incoming beam of light 305 and split the beam of light 305 into at least a first path that impinges on the HE sensor 213 and a second path that impinges on the ME sensor 211. In a preferred embodiment, the HDR camera 201 uses a set of partially-reflecting surfaces to split the light from a single photographic lens 311 so that it is focused onto three imaging sensors simultaneously. In a preferred embodiment, the light is directed back through one of the beamsplitters a second time, and the three sub-images are not split into red, green, and blue but instead are optically identical except for their light levels. This design, shown in
In some embodiments, the optical splitting system uses two uncoated, 2-micron thick plastic beamsplitters that rely on Fresnel reflections at air/plastic interfaces so their actual transmittance/reflectance (T/R) values are a function of angle. Glass is also a suitable material option. In one embodiment, the first beamsplitter 301 is at a 45° angle and has an approximate T/R ratio of 92/8, which means that 92% of the light from the camera lens 311 is transmitted through the first beamsplitter 301 and focused directly onto the high-exposure (HE) sensor 213. The beamsplitter 301 reflects 8% of the light from the lens 311 upwards (as shown in
Of the 8% of the total light that is reflected upwards, 94% (or 7.52% of the total light) is transmitted through the second beamsplitter 319 and focused onto the medium-exposure (ME) sensor 211. The other 6% of this upward-reflected light (or 0.48% of the total light) is reflected back down by the second beamsplitter 319 toward the first beamsplitter 301 (which is again at 45°), through which 92% (or 0.44% of the total light) is transmitted and focused onto the low-exposure (LE) sensor 261. With this arrangement, the HE, ME and LE sensors capture images with 92%, 7.52%, and 0.44% of the total light gathered by the camera lens 311, respectively. Thus a total of 99.96% of the total light gathered by the camera lens 311 has been captured by the image sensors. Therefore, the HE and ME exposures are separated by 12.2× (3.61 stops) and the ME and LE are separated by 17.0× (4.09 stops), which means that this configuration is designed to extend the dynamic range of the sensor by 7.7 stops.
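The percentages and stop separations above can be checked directly; the short calculation below simply reproduces that arithmetic.

```python
from math import log2

t1, r1 = 0.92, 0.08          # first beamsplitter: transmits 92%, reflects 8%
t2, r2 = 0.94, 0.06          # second beamsplitter: transmits 94%, reflects 6%

he = t1                      # 92% of the light reaches the HE sensor
me = r1 * t2                 # 8% x 94% = 7.52% reaches the ME sensor
le = r1 * r2 * t1            # 8% x 6% x 92% = 0.44% reaches the LE sensor

print(f"HE {he:.2%}, ME {me:.2%}, LE {le:.2%}, captured {he + me + le:.2%}")
print(f"HE/ME separation {he / me:.1f}x ({log2(he / me):.2f} stops)")
print(f"ME/LE separation {me / le:.1f}x ({log2(me / le):.2f} stops)")
print(f"extended dynamic range {log2(he / me) + log2(me / le):.1f} stops")
```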
This beamsplitter arrangement makes the HDR camera 201 light efficient: a negligible 0.04% of the total light gathered by the lens 311 is wasted. It also allows all three sensors to “see” the same scene, so all three images are optically identical except for their light levels. Of course, in the apparatus of the depicted embodiment 201, the ME image has undergone an odd number of reflections and so it is flipped left-right compared to the other images, but this is fixed easily in software. In preferred embodiments, the three sensors independently stream incoming pixel values directly into a pipeline that includes a synchronization module. This synchronization module can correct small phase discrepancies in data arrival times to the system from multiple sensors.
Thus it can be seen that the beamsplitter 301 directs a majority of the light to the first path and a lesser amount of the light to the second path. Preferably, the first path and the second path impinge on the HE sensor 213 and the ME sensor 211, respectively, to generate images that are optically identical but for light level. In the depicted embodiment, the HDR camera 201 includes a low exposure (LE) sensor.
In preferred embodiments, pixel values stream from the HE sensor 213, the ME sensor 211, and the LE sensor 261 in sequences directly to the processing device 219. Those sequences may not be synchronized as they arrive onto the processing device 219.
The HDR camera 201 (1) captures optically-aligned, multiple-exposure images simultaneously that do not need image manipulation to account for motion, (2) extends the dynamic range of available image sensors (by over 7 photographic stops in one embodiment), (3) is inexpensive to implement, (4) utilizes a single, standard camera lens 311, and (5) efficiently uses the light from the lens 311. The HDR camera also optionally (1) combines images separated by more than 3 stops in exposure, (2) spatially blends pre-demosaiced pixel data to reduce unwanted artifacts, (3) produces HDR images that are radiometrically correct, and (4) uses the highest-fidelity (lowest quantized-noise) pixel data available.
In operation, light enters the HDR camera 201 through a lens and meets one or more beamsplitters that split the light into different paths that impinge upon multiple image sensors. Each image sensor then captures a signal in the form of a pixel value for each pixel of the sensor. Each sensor includes an array of pixels. Any suitable size of pixel array may be included. In some embodiments, one or more of the sensors has 1920×1080 pixels. As light impinges on the sensor, pixel values stream off of the sensor to a connected processing device. The pixel values stream from each of multiple sensors in a frame-independent manner through a pipeline on a processing device 219. The pipeline 231 includes a kernel operation 135 that identifies saturated pixel values. The pixel values 501 are merged 139. Typically, the merged image will be demosaiced 145 and this produces an HDR image that can be displayed, transmitted, stored, or broadcast 151. In operation of the vehicle 101, the multiple image sensors all capture 125 images simultaneously through a single lens 311. The pipeline 231 and kernel operation 135 may be provided by an integrated circuit such as a field-programmable gate array or an application-specific integrated circuit. Each of the image sensors may include a color filter array 307. In preferred embodiments, the HDR image is demosaiced 145 after the merging step 139. The multiple image sensors preferably capture images that are optically identical except for light level.
A feature is that the pixel values 501 are pipeline processed in a frame-independent manner. Sequences of pixel values 501 are streamed 129 through the processing device 219 and merged 139 without waiting to receive pixel values 501 from all pixels on the image sensors. This means that the obtaining 125, streaming 129, and merging 139 steps may be performed by streaming 129 the sequences of pixel values 501 through the pipeline 231 on the processing device 219 such that no location on the processing device 219 stores a complete image. Because the pixel values are streamed through the pipeline, the final HDR video signal is produced in real-time. Real-time means that HDR video from the camera may be displayed essentially simultaneously as the camera captures the scene (e.g., at the speed that the signal travels from sensor to display minus a latency no greater than a frame of video). There is no requirement for post-processing the image data and no requirement to capture, store, compare, or process entire “frames” of images.
The output is an HDR video signal because the HDR camera 201 uses multiple sensors at different exposure levels to capture multiple isomorphic images (i.e., identical but for light level) and merge them. Data from a high exposure (HE) sensor are used where portions of an image are dim and data from a mid-exposure (ME) (or lower) sensor(s) are used where portions of an image are more brightly illuminated. The HDR camera 201 merges the HE and ME (and optionally LE) images to produce an HDR video signal. Specifically, the HDR camera 201 identifies saturated pixels in the images and replaces those saturated pixels with values derived from sensors of a lower exposure. In preferred embodiments, a first pixel value from a first pixel on one of the image sensors is identified as saturated if it is above some specified level, for example at least 90% of a maximum possible pixel value.
The bottom portion of
Streaming the pixel values 501 through the kernel operation 413 includes examining values from a neighborhood 601 of pixels surrounding a first pixel 615 on the HE sensor 213, finding saturated values in the neighborhood 601 of pixels, and using information from a corresponding neighborhood 613 from the ME sensor 211 to estimate a value for the first pixel 615. This will be described in greater detail below. To accomplish this, the processing device must make comparisons between corresponding pixel values from different sensors. It may be useful to stream the pixel values through the kernel operation in a fashion that places the pixel under consideration 615 adjacent to each pixel from the neighborhood 601 as well as adjacent to each pixel from the corresponding neighborhood on another sensor.
The neighborhood comparisons may be used in determining whether to use a replacement value for a saturated pixel and what replacement value to use. An approach to using the neighborhood comparisons is discussed further down after a discussion of the merging. A replacement value will be used when the sequences 621 of pixel values 501 are merged 139 by the merge module 421. The merging 139 step excludes at least some of the saturated pixel values 501 from the HDR image.
Previous algorithms for merging HDR images from a set of LDR images with different exposures typically do so after demosaicing the LDR images and merge data pixel-by-pixel without taking neighboring pixel information into account.
To capture the widest dynamic range possible with the smallest number of camera sensors, it is preferable to position the LDR images further apart in exposure than with traditional HDR acquisition methods. Prior art methods yield undesired artifacts because of quantization and noise effects, and those problems are exacerbated when certain tone mapping operators (TMOs) are applied. Those TMOs amplify small gradient differences in the image to make them visible when the dynamic range is compressed, amplifying merging artifacts as well.
For illustration, the system is simplified with 4-bit sensors (as opposed to the 12-bit sensors as may be used in HDR camera 201), which measure only 16 unique brightness values and the sensors are separated by only 1 stop (a factor of 2) in exposure. Since CMOS sensors exhibit an approximately linear relationship between incident exposure and their output value, the values from the three sensors are graphed as a linear function of incident irradiance instead of the traditional logarithmic scale.
Merging images by prior art algorithms that always use data from all three sensors with simple weighting functions, such as that of Debevec and Malik, introduces artifacts. In the prior art, data from each sensor is weighted with a triangle function as shown by the dotted lines, so there are non-zero contributions from the LE sensor at low brightness values (like the sample illumination level indicated), even though the data from the LE sensor is quantized more coarsely than that of the HE sensor.
Methods of the invention, in contrast, use data from the higher-exposure sensor as much as possible and blend in data from the next darker sensor when near saturation.
In certain embodiments, the HDR camera 201 not only examines individual pixels when merging the LDR images, but also takes into account neighboring pixels 601 (see
One aspect of merging 139 according to the invention is to use pixel data exclusively from the brightest, most well-exposed sensor possible. Therefore, pixels from the HE image are used as much as possible, and pixels in the ME image are only used if the HE pixel is close to saturation. If the corresponding ME pixel is below the saturation level, it is multiplied by a factor that adjusts it in relation to the HE pixel based on the camera's response curve, given that the ME pixel receives 12.2× less irradiance than the HE pixel.
It may be found that merging by a “winner take all” approach that exclusively uses the values from the HE sensor until they become saturated and then simply switches to the next sensor results in banding artifacts where transitions occur. To avoid such banding artifacts, the HDR camera 201 optionally transitions from one sensor to the next by spatially blending pixel values between the two sensors. To do this, the HDR camera 201 scans a neighborhood 601 around the pixel 615 being evaluated (see
The HDR camera 201 performs merging 139 prior to demosaicing 145 the individual Bayer color filter array images because demosaicing can corrupt colors in saturated regions. For example, a bright orange section of a scene might have red pixels that are saturated while the green and blue pixels are not. If the image is demosaiced before being merged into HDR, the demosaiced orange color will be computed from saturated red-pixel data and non-saturated green/blue-pixel data. As a result, the hue of the orange section will be incorrectly reproduced. To avoid these artifacts, the HDR camera 201 performs HDR-merging prior to demosaicing.
Since the images are merged prior to the demosaicing step, the HDR camera 201 preferably works with pixel values instead of irradiance. To produce a radiometrically-correct HDR image, the HDR camera 201 matches the irradiance levels of the HE, ME, and LE sensors using the appropriate beamsplitter transmittance values for each pixel color, since these change slightly as a function of wavelength. Although the HDR camera 201 uses different values to match each of the color channels, for simplicity the process is explained with average values. A pixel value is converted through the camera response curve 901, where the resulting irradiance is adjusted by the exposure level ratio (average of 12.2× for HE/ME), and this new irradiance value is converted back through the camera response curve 901 to a new pixel value.
The camera response curve 901 can be measured by taking a set of bracketed exposures and solving for a monotonically-increasing function that relates exposure to pixel value (to within a scale constant in the linear domain). The curve may be computed from the raw camera data, although a curve computed from a linear best-fit could also be used. A camera response curve shows how the camera converts scene irradiance into pixel values. To compute what the ME pixel value should be for a given HE value, the HE pixel value (1) is first converted to a scene irradiance (2), which is next divided by our HE/ME attenuation ratio of 12.2. This new irradiance value (3) is converted through the camera response curve into the expected ME pixel value (4). Although this graph is approximately linear, it is not perfectly so because it is computed from the raw data, without significant smoothing or applying a linear fit. With the irradiance levels of the three images matched, the merging 139 may be performed.
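A small sketch of the four-step conversion just described (pixel value to irradiance, attenuation by the HE/ME ratio, and conversion back to a pixel value) appears below; a gamma-style curve is assumed as a stand-in for the measured camera response curve 901.

```python
# Assumed stand-in response curve: a simple 2.2-gamma mapping scaled to 12 bits.
# The embodiment described above measures curve 901 from bracketed exposures; this
# sketch only illustrates the pixel -> irradiance -> attenuate -> pixel conversion.

MAX_PV = 4095.0
ATTENUATION = 12.2                                    # HE/ME irradiance ratio

def pixel_to_irradiance(pv: float) -> float:
    return (pv / MAX_PV) ** 2.2

def irradiance_to_pixel(irr: float) -> float:
    return (irr ** (1.0 / 2.2)) * MAX_PV

def expected_me_value(he_pixel_value: float) -> float:
    irradiance = pixel_to_irradiance(he_pixel_value)  # steps (1) -> (2)
    attenuated = irradiance / ATTENUATION             # step  (2) -> (3)
    return irradiance_to_pixel(attenuated)            # step  (3) -> (4)

# The ME pixel value one would expect to observe for an HE reading of 3000
print(round(expected_me_value(3000.0)))
```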
In an illustrative example of merging 139, two registered LDR images (one high-exposure image IHE and a second medium-exposure image IME) are to be merged 139 into an HDR image IHDR. The merging 139 starts with the information in the high-exposure image IHE and then combines in data from the next darker-exposure image IME, as needed. To reduce the transition artifacts described earlier, the HDR camera 201 works on each pixel location (x, y) by looking at the information from the surrounding (2k+1)×(2k+1) pixel neighborhood 601, denoted as N(x,y).
In some embodiments as illustrated in
In certain embodiments, the merging 139 includes a specific operation for each of the four cases for the pixel 615 on sensor 213 and its neighborhood 601 (see
Case 1: The pixel 615 is not saturated and the neighborhood 601 has no saturated pixels, so the pixel value is used as-is.
Case 2: The pixel 615 is not saturated, but the neighborhood 601 has 1 or more saturated pixels, so blend between the pixel value at IHE(x, y) and the one at the next darker-exposure IME(x, y) depending on the amount of saturation present in the neighborhood.
Case 3: The pixel 615 is saturated but the neighborhood 601 has 1 or more non-saturated pixels, which can be used to better estimate a value for IHE(x,y): calculate the ratios of pixel values in the ME image between the unsaturated pixels in the neighborhood and the center pixel, and use this map of ME ratios to estimate the actual value of the saturated pixel under consideration.
Case 4: The pixel 615 is saturated and all pixels in the neighborhood 601 are saturated, so there is no valid information from the high-exposure image, use the ME image and set IHDR(x, y)=IME(x, y).
When there are three LDR images, the process above is simply repeated in a second iteration, substituting IHDR for IHE and ILE for IME. In this manner, data is merged 139 from the higher exposures while working toward the lowest exposure, and data is only used from lower exposures when the higher-exposure data is at or near saturation.
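The four cases can be summarized in the following array-based Python sketch; the 3×3 neighborhood, 90% saturation level, and 12.2× HE/ME scaling follow the description above, while the linear blending weight used for Case 2 is one reasonable reading of "depending on the amount of saturation present in the neighborhood" and is not the only possible choice.

```python
import numpy as np

SAT = 0.9 * 4095.0           # saturation level, 12-bit sensors assumed
RATIO = 12.2                 # HE receives roughly 12.2x the irradiance of ME

def merge_he_me(he: np.ndarray, me: np.ndarray, k: int = 1) -> np.ndarray:
    """Merge one HE and one ME image of the same size, following the four cases above."""
    me_scaled = me * RATIO
    hdr = he.astype(float)
    h, w = he.shape
    for y in range(h):
        for x in range(w):
            y0, x0 = max(0, y - k), max(0, x - k)
            hood = he[y0:y + k + 1, x0:x + k + 1]
            sat_frac = float((hood >= SAT).mean())
            if he[y, x] < SAT:
                if sat_frac == 0.0:                           # Case 1: use the HE value as-is
                    hdr[y, x] = he[y, x]
                else:                                         # Case 2: blend HE and scaled ME
                    hdr[y, x] = (1 - sat_frac) * he[y, x] + sat_frac * me_scaled[y, x]
            else:
                unsat = hood < SAT
                if unsat.any():                               # Case 3: estimate from ME ratios
                    me_hood = me[y0:y + k + 1, x0:x + k + 1]
                    ratios = me_hood[unsat] / max(float(me[y, x]), 1e-6)
                    hdr[y, x] = float(np.mean(hood[unsat] / np.maximum(ratios, 1e-6)))
                else:                                         # Case 4: no valid HE information
                    hdr[y, x] = me_scaled[y, x]
    return hdr

he = np.array([[3000., 3000., 4095.], [3000., 4095., 4095.], [3000., 4095., 4095.]])
me = np.array([[240., 240., 330.], [240., 330., 330.], [240., 330., 330.]])
print(merge_he_me(he, me).round(1))
```

With a third, lower exposure available, the same function can be applied in a second iteration, with the merged result in place of the HE image and the LE image in place of the ME image, as described above.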
This produces an HDR image that can be demosaiced 145 and converted from pixel values to irradiance using a camera response curve similar to that of
The HDR camera 201 may be implemented using three Silicon Imaging SI-1920HD high-end cinema CMOS sensors mounted in a camera body. Those sensors have 1920×1080 pixels (5 microns square) with a standard Bayer color filter array, and can measure a dynamic range of around 10 stops (excluding noise). The sensors are aligned by aiming the camera at small pinhole light sources, locking down the HE sensor and then adjusting setscrews to align the ME and LE sensors.
The camera body may include a Hasselblad lens mount to allow the use of high-performance, interchangeable commercial lenses. For beamsplitters, the apparatus may include uncoated pellicle beamsplitters, such as the ones sold by Edmund Optics [part number NT39-482]. Preferably, the multiple image sensors include at least a high exposure (HE) sensor 213 and a middle exposure (ME) sensor 211, and the merging includes using HE pixel values 501 that are not saturated and ME pixel values 501 corresponding to the saturated pixel values. The multiple sensors may further include a low exposure (LE) sensor 261, and the kernel operation may identify saturated pixel values 501 originating from both the HE sensor 213 and the ME sensor 211. Because the pixel values stream through a pipeline, it is possible that at least some of the saturated pixel values 501 are identified before receiving values from all pixels of the multiple image sensors at the processing device 219 and the merge operation may begin to merge 139 portions of the sequences while still streaming 129 later-arriving pixel values 501 through the kernel operation 413.
It is understood that optical components such as beamsplitters, lenses, or filters—even if labeled “spectrally neutral”—may have slight wavelength-dependent differences in the amounts of light transmitted. That is, each image sensor may be said to have its own “color correction space” whereby images from that sensor need to be corrected out of that color correction space to true color. The optical system can be calibrated (e.g., by taking a picture of a calibration card) and a color correction matrix can be stored for each image sensor. The HDR video pipeline can then perform the counter-intuitive step of adjusting the pixel values from one sensor towards the color correction of another sensor—which may in some cases involve nudging the colors away from true color. This may be accomplished by multiplying a vector of RGB values from the one sensor by the inverse color correction matrix of the other sensor. After this color correction to the second sensor, the streams are merged, and the resulting HDR video signal is color corrected to truth (e.g., by multiplying the RGB vectors by the applicable color correction matrix). This color correction process accounts for spectral differences of each image sensor.
The color correction process 1001 converts one sensor's data from its color correction space to the color correction space of another sensor, before merging the images from the two sensors. The merged image data can then be converted to the color correction space of a third sensor, before being combined with the image data from that third sensor. The process may be repeated for as many sensors as desired. After all sensors' images have been combined, the final combined image may be demosaiced 145 and then may be color corrected to truth.
The color correction process 1001 allows images from multiple sensors to be merged, in stages where two images are merged at a time, in a way that preserves color information from one sensor to the next. For example purposes, in
The basic principle guiding the color correction process 1001 is to first convert a dark image to the color correction space of the next brightest image, and then to merge the two “non-demosaiced” (or Color Filter Array [CFA] Bayer-patterned) images together.
The color correction process 1001, for an HDR camera 201 with an HE sensor, an ME sensor, and an LE sensor, includes three general phases: an LE color correction space (CCS) phase, an ME color correction space phase, and an HE color correction space phase. The color correction process begins with the LE color correction space phase, which comprises first demosaicing 1045 the LE pixel values and then transforming 1051 the resulting vectors into the color correction space of the ME image. The demosaicing process 1045 yields a full-color RGB vector value for each pixel.
After it has been demosaiced 1045, the LE image data is next transformed 1051 into the ME color correction space. The purpose is to match the color of the LE pixels (now described by RGB vectors) to the color of the ME array (with all of the ME array's color imperfections). To perform the transformation 1051, the LE RGB vectors are multiplied by a color correction matrix. For example, Equations 1-3 show how to use the color correction matrices to correct the color values for the HE, ME, and LE sensors, respectively. Equation 1 shows how to use the color correction matrix to correct the color values of the HE sensor, where the 3×3 matrix coefficients, including values A1-A9, represent coefficients selected to strengthen or weaken the pixel value, and the RGB vector (RHE, GHE, and BHE) represents the demosaiced RGB output signal from the HE sensor. In some cases, the 3×3 matrix coefficients can be derived by comparing the demosaiced output against expected (or so-called “truth”) values. For example, the 3×3 matrix coefficients can be derived by least-squares polynomial modeling between the demosaiced RGB output values and reference values from a reference color chart (e.g., a Macbeth chart). Similarly, Equation 2 shows how to use the color correction matrix to correct the color values of the ME sensor, where the RGB vector (RME, GME, and BME) represents the demosaiced RGB output signal from the ME sensor, and Equation 3 shows how to use the color correction matrix to correct the color values of the LE sensor, where the RGB vector (RLE, GLE, and BLE) represents the demosaiced RGB output signal from the LE sensor.
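Based on the description above, each of Equations 1-3 takes the general form of a 3×3 color correction matrix applied to a demosaiced RGB vector. For the HE sensor (Equation 1), with the corrected values written with primes, this general form is:

```latex
\begin{bmatrix} R'_{HE} \\ G'_{HE} \\ B'_{HE} \end{bmatrix}
=
\begin{bmatrix} A_1 & A_2 & A_3 \\ A_4 & A_5 & A_6 \\ A_7 & A_8 & A_9 \end{bmatrix}
\begin{bmatrix} R_{HE} \\ G_{HE} \\ B_{HE} \end{bmatrix}
```

Equations 2 and 3 have the same form, with the ME and LE sensors' own coefficient matrices (written [B] and [C] below, the HE matrix being [A]) applied to the (RME, GME, BME) and (RLE, GLE, BLE) vectors, respectively.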
To convert an image from a first color correction space (CCS) to a second color correction space, the color correction matrices from one or more sensors can be used. This process may be referred to as converting between color correction spaces or calibrating color correction spaces. Neither the first color correction space nor the second color correction space accurately reflects the true color of the captured image. The first and the second color correction space both have inaccuracies, and those inaccuracies are, in general, different from one another. Thus RGB values from each sensor must be multiplied by a unique color correction matrix for those RGB values to appear as true colors.
The present invention includes a method for converting an image from the LE sensor's color correction space to the ME sensor's color correction space, as illustrated in Equation 4 below:
In Equation 4, the LE sensor's pixel values (R, G, B) are multiplied by the LE sensor's correction matrix, [C], and then multiplied by the inverse of the ME sensor's correction matrix, [B]. The result is a set of pixel values (R, G, B) that are in the ME sensor's color correction space.
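Spelled out in that notation, the conversion described amounts to the following matrix product:

```latex
\begin{bmatrix} R \\ G \\ B \end{bmatrix}_{ME\ CCS}
=
[B]^{-1}\,[C]\,
\begin{bmatrix} R \\ G \\ B \end{bmatrix}_{LE}
```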
Methods of the invention allow the color correction space of the second sensor to be matched to the color correction space of the first sensor so that the images from the two sensors may be accurately combined, or merged. The method of applying all the inaccuracies of the second color correction space to the first color correction space, prior to combining images from the two into an HDR image, was previously unknown. Typical methods for combining data from multiple CFA sensors rely on color-correcting each sensor's data to the “truth” values measured from a calibrated color card prior to combining the images. This is problematic in an HDR system, where it is known that the brighter sensor's image will have significant portions that are saturated, portions that should actually be taken from the darker sensor's image when combining. Color correcting an image that has color information based on saturated pixels will cause colors to be misidentified. Therefore, in an HDR system, color-correcting the brighter image (for example, to “truth” color values) prior to combining images will lead to colors being misidentified because of the use of saturated pixel data in creating colors from a mosaic-patterned image. For this reason, we specify that (1) the darker image have its color information transformed to match the color space of the brighter image, (2) this transformed darker image be combined with the brighter image, and then (3) the final combined image be color-transformed to “truth” color values.
The solution provided in the present invention avoids this saturated-pixel color misidentification problem by performing the steps of (a) demosaicing 1045, (b) color correcting 1051, and (c) mosaicing 1057 the data from the darker sensor, thereby ensuring all data is accurately returned to its non-demosaiced state prior to merging the darker sensor's data with the brighter sensor's data.
Furthermore, prior to merging the images from two sensors, the present invention matches the color correction spaces of the two sensors. This transformation ensures that the two images (from the first and second color correction space sensors) can be accurately merged, pixel-for-pixel, in non-demosaiced format. It may at first seem counterintuitive to change the color correction space of one sensor to match the color correction space of a second sensor, especially when the second sensor's color correction space is known to differ from the “true” color correction space. However, it is an important feature in ensuring that (1) the brighter sensor's color information not be demosaiced prior to merging, and (2) the color data from both sensors is matched together, prior to merging the images. The color correction process 1001 uses matrices that may themselves be implemented as kernels in the pipeline 231 on the processing device 219. Thus the color correction process 1001 is compatible with an HDR pipeline workflow because the kernels are applied as they receive the pixel values.
After the LE information is transformed 1051 from the LE color correction space to the ME color correction space, the transformed values are mosaiced 1057 (i.e., the demosaicing process is reversed). The transformed scalar pixel values are now comparable with the Bayer-patterned scalar ME pixel values detected by the ME sensor, and the process 1001 includes merging 1061 the LE and ME non-demosaiced (i.e., scalar) sensor data.
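Mosaicing reverses the demosaic: at each photosite, only the color channel dictated by the CFA pattern is kept from the transformed RGB vector. A minimal sketch, assuming an RGGB Bayer layout (the actual layout depends on the sensor):

```python
import numpy as np

def mosaic_rggb(rgb):
    """Collapse an H x W x 3 RGB image back to a scalar RGGB Bayer mosaic.

    rgb: array of shape (H, W, 3); H and W are assumed even.
    Returns an (H, W) array holding one channel value per photosite.
    """
    h, w, _ = rgb.shape
    cfa = np.empty((h, w), dtype=rgb.dtype)
    cfa[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows, even columns
    cfa[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even rows, odd columns
    cfa[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd rows, even columns
    cfa[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows, odd columns
    return cfa
```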
The merged non-demosaiced image within the ME color correction space is then demosaiced 1067. This demosaicing 1067 is similar to the demosaicing 1045 described above, except the CFA pixel values undergoing the demosaicing process are now associated with the ME color correction space. The demosaicing 1067 produces RGB vectors in the ME color correction space. Those RGB vectors are transformed 1071 into the HE color correction space while also being color corrected ([A]⁻¹[B][RGB]). Equation 2 shows how to use the color correction matrix to correct the color values of the ME sensor. The ME information is thereby transformed 1071 from the ME color correction space to the HE color correction space by multiplying by the ME color correction matrix and then by the inverse of the HE color correction matrix.
After the ME information is transformed 1071 from the ME color correction space to the HE color correction space, the transformed vectors are mosaiced 1075 (i.e., the demosaicing process is reversed). This allows the transformed ME CFA Bayer-patterned pixel values to merge 1079 with the HE pixel values detected by the HE sensor. At this point in the color correction process 1001, the transformed color information from the LE and ME sensors has been calibrated to match the color information detected by the HE sensor. This newly merged data set now represents color values within the HE color correction space 205.
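The staged processing of the color correction process 1001 can be outlined as follows (an illustrative sketch only, not the pipeline 231 itself; demosaic, mosaic, and merge stand in for the operations 1045/1067, 1057/1075, and 1061/1079, and ccm_le, ccm_me, ccm_he stand in for the calibrated matrices [C], [B], and [A]):

```python
import numpy as np

def to_other_ccs(rgb, ccm_src, ccm_dst):
    """Move demosaiced RGB vectors from one sensor's color correction space
    into another's: apply the source matrix, then the inverse of the
    destination matrix (the pattern of Equation 4)."""
    m = np.linalg.inv(ccm_dst) @ ccm_src
    return rgb @ m.T  # rgb is (H, W, 3); the product applies m per pixel

def color_correction_process(le_cfa, me_cfa, he_cfa,
                             ccm_le, ccm_me, ccm_he,
                             demosaic, mosaic, merge):
    """Outline of process 1001: darker data is moved into the next brighter
    sensor's color correction space before each merge, and only the final
    merged image is corrected to true color."""
    # LE phase: demosaic LE, move it into the ME color correction space,
    # re-mosaic, and merge with the still-scalar ME data.
    le_in_me = mosaic(to_other_ccs(demosaic(le_cfa), ccm_le, ccm_me))
    merged_me = merge(le_in_me, me_cfa)

    # ME phase: repeat, moving the merged result into the HE space.
    me_in_he = mosaic(to_other_ccs(demosaic(merged_me), ccm_me, ccm_he))
    merged_he = merge(me_in_he, he_cfa)

    # HE phase: demosaic the final merge and correct it to true color.
    return demosaic(merged_he) @ ccm_he.T
```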
After the color processing and tone-mapping, the pipeline has produced an HDR video signal.
One or more of the HDR cameras 201 can be used in conjunction with single-lens systems that can simultaneously, and without the need for stitching multiple cameras' views together, view 360 degrees in real-time. HDR is preferred for 360-degree simultaneous viewing because it is common to have the sun or another bright source in the field of view. HDR's extended light range capabilities ensure that the scene is visible from the darkest shadows to the most brightly lit areas. The processing system 113 can perform 360° unwrapping and unwarping in real-time for data subsets that may be relevant for sensor fusion and radar/lidar hand-off. Additionally or alternatively, a portion of the 360° view may be presented to a driver or passenger for basic display purposes (unlike radar). This sensor can easily be calibrated with respect to the direction and heading of the car to provide unique location and/or bearing data.
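One common way to realize such unwrapping is a fixed polar-to-rectangular lookup. The sketch below (assumed annular geometry and nearest-neighbor sampling; not the processing system 113 itself) maps a single-lens 360° capture to a panorama:

```python
import numpy as np

def unwrap_annulus(img, r_min, r_max, out_w=1024):
    """Unwrap an annular 360-degree image (e.g., from a single-lens
    catadioptric system) into a rectangular panorama by sampling along
    rays from the image center."""
    h, w = img.shape[:2]
    cy, cx = h / 2.0, w / 2.0
    theta = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    r = np.arange(int(r_min), int(r_max))
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    ys = np.clip((cy + rr * np.sin(tt)).round().astype(int), 0, h - 1)
    xs = np.clip((cx + rr * np.cos(tt)).round().astype(int), 0, w - 1)
    return img[ys, xs]  # one output row per radius, one column per bearing
```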
In some embodiments, the pipeline may include a module for subtraction that, in real-time, subtracts the SDR signal from the HDR signal (HDR−SDR=residual). What flows from the subtraction module is a pair of streams: the SDR video signal and the residual signal. Preferably, all of the color information is in the SDR signal. At this stage the residual signal may be subject to compression by a suitable operation (e.g., JPEG or similar). The pair of streams includes the 8-bit SDR signal and the compressed HDR residual signal, which together provide for HDR display.
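A minimal sketch of such a subtraction module, with hypothetical tone_map and compress callables standing in for the tone-mapping and compression operations mentioned above:

```python
import numpy as np

def split_sdr_residual(hdr_frame, tone_map, compress=None):
    """Produce the pair of streams described above: a tone-mapped SDR frame
    plus the residual needed for HDR reconstruction (HDR - SDR = residual)."""
    sdr = tone_map(hdr_frame)                    # 8-bit SDR rendition
    residual = hdr_frame - sdr.astype(hdr_frame.dtype)
    if compress is not None:                     # e.g., a JPEG-like operation
        residual = compress(residual)
    return sdr, residual
```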
The pixel values are streamed 1729 to the processing device 219, which uses a kernel operation to identify 1735 saturated pixel values and a merge module to merge 1739 the pixel values to produce the HDR video in real-time. The video is preferably demosaiced. The processing system 113 determines 1745, based on the HDR video, an appearance of an item in an environment of the vehicle 101 and causes the control system to make a change 1751 in the operation of the vehicle 101 based on the appearance of the item. The vehicle is preferably an autonomous vehicle.
Determining 1745 the appearance of an item in the environment of the vehicle 101 may cause the control system to make a change 1751 in the operation of the vehicle 101, and such changes may take a variety of forms. For example, the ADAS can determine that some item has newly appeared on the roadway (e.g., a dog has run out onto the road) and change the operation of the vehicle by operating the brakes. In another example, the ADAS can determine that a road appears wet or snowy and can cause the vehicle to cautiously reduce speed. As a further example, the ADAS can determine that an object appears in the scene even where that object would be very difficult to detect by the human eye or an SDR camera (e.g., a white truck against a bright sky) and apply a steering and/or braking correction to cause the vehicle 101 to avoid the object. In a further example, the ADAS can use the HDR camera to determine that the driveway behind the vehicle 101 appears free of obstacles, even in very low-light conditions, and can cause the vehicle to turn on its engine and back out of a garage. Other determinations and operations will be evident to one of skill in the art.
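Purely to illustrate the mapping from a determined appearance to a change in vehicle operation, a hypothetical dispatch might look like the following (all names are illustrative; detection and the control interface are outside the scope of this sketch):

```python
from dataclasses import dataclass

@dataclass
class DetectedItem:
    kind: str              # e.g., "obstacle", "road_surface", "clear_driveway"
    in_path: bool = False
    low_contrast: bool = False
    condition: str = ""

def respond_to_detection(item, control):
    """Map an item detected in the HDR video to a change in vehicle operation."""
    if item.kind == "obstacle" and item.in_path:
        control.apply_brakes()                   # e.g., a dog runs onto the road
    elif item.kind == "road_surface" and item.condition in ("wet", "snow"):
        control.reduce_speed(factor=0.8)         # cautious speed reduction
    elif item.kind == "obstacle" and item.low_contrast:
        control.steer_and_brake(away_from=item)  # e.g., white truck, bright sky
    elif item.kind == "clear_driveway":
        control.start_engine_and_reverse()       # low-light backing maneuver
```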
The HDR system may be provided as part of, or for use in, a military or emergency vehicle. The real-time HDR video camera provides the ability to detect and respond to a variety of inputs that a human would have difficulty processing, such as large numbers of inputs in a busy environment, or hard-to-detect inputs, such as very small things far away. As but one example, a squadron of airplanes using the HDR systems could detect and respond to each other as well as to ambient clouds, birds, topography, etc., to fly in perfect formation for long distances, and could even maintain a formation while flying beneath some critical altitude over varying topography. In some embodiments, the HDR system is for a military or emergency vehicle and provides an autopilot or assist functionality. An operator can set the system to control the vehicle over a period. Additionally or alternatively, the system can be programmed to step in for an operator should the operator lose consciousness, get distracted, hit a panic button, etc. For example, the system can be connected to an eye tracker or a physiological sensor such as a heart rate monitor, and can initiate a backup operation mode should such a sensor report values outside a set range (e.g., an extremely low or elevated heart rate; exaggerated or suppressed eye movements, or eye movements not directed toward the immediate path of travel). The system can be operated to place a vehicle in a holding pattern, e.g., to fly in a high-altitude circle for a few hours while a pilot sleeps. It will be appreciated that a wide variety of features and functionality may be provided by the vehicle.
References and citations to other documents, such as patents, patent applications, patent publications, journals, books, papers, and web content, have been made throughout this disclosure. All such documents are hereby incorporated herein by reference in their entirety for all purposes.
Various modifications of the invention and many further embodiments thereof, in addition to those shown and described herein, will become apparent to those skilled in the art from the full contents of this document, including references to the scientific and patent literature cited herein. The subject matter herein contains important information, exemplification and guidance that can be adapted to the practice of this invention in its various embodiments and equivalents thereof.