The present technology relates to image projector systems.
Movies and other content may be enjoyed by projecting images onto a screen for viewing. One factor that strongly affects the realism and impact of projected images is dynamic range (the difference between the highest and lowest light intensity in an image). There is a demand for high resolution, high dynamic range (HDR) projectors capable of projecting images that include high intensity highlights. Such projectors can provide significantly enhanced viewing experiences.
One technology that may be used to realize HDR projectors is light steering. Light steering involves concentrating light that would otherwise be directed to darker areas of an image to provide image highlights. The image highlights may have light intensities that are many times higher than a full frame uniform intensity achievable by the same projector. As such, light steering technology may help to achieve both high dynamic range and high maximum intensity in highlights for a vivid viewing experience.
Especially when light steering is involved, an image forming device may be illuminated non-uniformly. The nature of this intentional non-uniformity is such that substantially the whole illumination power of the HDR projector may be concentrated on one or more regions that are significantly smaller than the total area that is normally illuminated.
One problem with projectors capable of very high light intensity output, whether achieved using light steering or other technologies, is the danger that a person's vision could be damaged if the person looks toward the output of the projector. To avoid this problem, there are standards that specify limits on the maximum radiant exposure that the projector can deliver at a fixed distance from an output of the projector. Examples of such standards include IEC 60825-1:2014, published by the International Electrotechnical Commission, and 21 CFR 1040.10 and 1040.11.
Algorithms for controlling an HDR projector may be designed to avoid output of light having irradiance that exceeds safe levels. However, such algorithms may not always be reliable, especially where light steering is involved.
As the demand for more vivid and compelling viewing experiences increases, there is an increased demand for projectors whose output is capable of exceeding safe radiant exposures. The inventors have determined that there is a need for projector systems that include output monitoring to ensure safety and compliance with applicable regulations.
This invention has a number of aspects including, without limitation:
One aspect of the invention provides a light projection system (10) comprising: a light source (12) operative to emit a light field; output optics (16) arranged to project the light of the light field; and an irradiance monitor (20). The irradiance monitor comprises: a light sampling element (21) arranged to redirect a fraction of the light from the light field onto a first portion (22-1) of a light sensor (22); a calibration light source (24) comprising one or more calibration light emitters (25) operative to emit calibration light (24A), the calibration light source (24) arranged to direct the calibration light (24A) to illuminate a second portion (22-2) of the light sensor (22); and a processor (26) connected to receive image data (27) from the light sensor (22) and to process the image data (27) to determine whether light incident on the first portion of the light sensor (22) has an irradiance in excess of an irradiance threshold based on a response of the light sensor (22) to the calibration light (24A) and the fraction of the light from the light field.
The light projection system (10) may comprise a modulation stage (14) operative to modulate the light field from the light source (12) to yield a modulated light field (15) wherein the light sampling element (21) samples the modulated light field (15).
The modulation stage (14) may comprise a light steering unit (14A) operative to concentrate light from the light source (12) into regions having irradiance greater than irradiance of light incident on the light steering unit (14A) from the light source (12).
The light source (12) may comprise a plurality of light emitters (13) each operative to emit light in a corresponding one of a first plurality of narrow wavelength bands.
The first plurality of narrow wavelength bands may be made up of a first red (R) band, a first green (G) band and a first blue (B) band.
The calibration light source (24) may comprise a plurality of light emitters (25) which each emit light in a corresponding one of a second plurality of wavelength bands.
The second plurality of wavelength bands may be made up of a second red (R) band, a second green (G) band and a second blue (B) band.
The calibration light source (24) may comprise one or more broadband light emitters.
The broadband light emitters may comprise white light emitters.
The one or more calibration light emitters (25) may comprise one or more light emitting diodes (LEDs).
The light sensor (22) may comprise an imaging light sensor (22) operative to output the image data (27).
The light sensor (22) may comprise an RGB light sensor (22).
The projected light may include frames projected at a frame rate and the light sensor (22) may measure the light from the light field at a rate that is at least twice the frame rate.
The light sensor (22) may measure the light from the light field at a rate that is at least once every 5 milliseconds.
The light sensor (22) may measure the light from the light field at a rate that is at least once every 3 milliseconds.
The light projection system (10) may be configured to apply a colour transform to the image data (27) to yield transformed image data in which cross talk between colour channels of the image data is reduced.
Values in the transformed image data may be indicative of irradiance.
The colour transform may be representable as a 3×3 matrix.
The light sensor (22) may comprise an RGB sensor that is part of a colour camera (23), the colour transform may be performed by the camera (23) and the processor (26) may receive the transformed image data from the camera (23).
The processor (26) may be configured to, in response to determining that the light incident on the first portion (22-1) of the light sensor (22) has an irradiance in excess of the irradiance threshold, shut off or dim the light source (12) and/or operate a shutter to block light from being projected by the output optics (16).
The light projection system (10) may further comprise an intrusion detection system operative to detect intrusion of persons or objects into a region that includes a beam of light projected by projection optics (16) wherein the processor (26) is configured to, in response to determining that the light incident on the first portion (22-1) of the light sensor (22) has an irradiance in excess of the irradiance threshold and receiving an input from the intrusion detection system indicating an intrusion, shut off or dim the light source (12) and/or operate a shutter to block light from being projected by the output optics (16).
The intrusion detection system may comprise a range finder operative to determine whether any detected persons or objects are closer than a threshold distance to the projection optics (16). The processor (26) may be configured to, in response to determining that the light incident on the first portion (22-1) of the light sensor (22) has an irradiance in excess of the irradiance threshold and receiving an input from the intrusion detection system indicating an intrusion of a person or object that is closer to the projection optics (16) than the threshold distance, shut off or dim the light source (12) and/or operate a shutter to block light from being projected by the projection optics (16).
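The combined condition described above (excess irradiance together with a sufficiently close intrusion) may, for example, be sketched as follows. This is an illustrative sketch only; the function and parameter names are assumptions and do not appear in this disclosure:

```python
def should_block_output(irradiance, irradiance_threshold,
                        intrusion_detected, intrusion_distance,
                        distance_threshold):
    """Return True if the projector should shut off, dim, or shutter.

    Light is blocked only when the monitored irradiance exceeds the
    irradiance threshold AND an intrusion has been detected closer to
    the projection optics than the threshold distance, mirroring the
    combined condition described in the text above.
    """
    over_limit = irradiance > irradiance_threshold
    close_intrusion = intrusion_detected and (
        intrusion_distance < distance_threshold)
    return over_limit and close_intrusion
```

Expressed this way, the range finder input gates the protective action so that high irradiance alone, with no person or object in the beam, does not interrupt projection.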
The light source (12) may comprise a plurality of laser light emitters (13).
The laser light emitters (13) may each emit light having bandwidth of 15 nm or less.
The processor (26) being configured to process the image data (27) to determine whether light incident on the first portion of the light sensor (22) has an irradiance in excess of an irradiance threshold may comprise the processor being configured to evaluate per pixel:
where R, G and B are R, G, and B output values from the light sensor (22) monitoring the light field (15) and Rref, Gref and Bref are output values from the imaging light sensor (22) monitoring red, green and blue components of the calibration light (24A).
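The per-pixel expression itself is not reproduced in this text. One plausible form, in which each channel output is normalized by the corresponding calibration response and the channel estimates are summed, is sketched below. The formula, names and the single shared calibration irradiance are assumptions for illustration only, not the disclosed expression:

```python
def pixel_exceeds_threshold(R, G, B, Rref, Gref, Bref,
                            cal_irradiance, threshold):
    """Illustrative per-pixel irradiance check.

    Each channel output (R, G, B) is scaled by the sensor's response
    to calibration light of known irradiance (Rref, Gref, Bref), the
    per-channel estimates are summed, and the total is compared to the
    irradiance threshold. This reconstruction assumes each calibration
    channel corresponds to the same known irradiance, cal_irradiance.
    """
    estimate = (R / Rref + G / Gref + B / Bref) * cal_irradiance
    return estimate > threshold
```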
The light sampling element (21) may comprise a beam splitter.
The light sampling element (21) may redirect less than 5% of the light from the light field onto the light sensor.
The light sampling element may redirect less than 1% of the light from the light field onto the light sensor.
The calibration light source (24) may illuminate the light sensor (22) uniformly.
A refresh rate of the light sensor (22) may be significantly higher than a refresh rate of the modulation stage (14).
A refresh of the light sensor (22) may be coordinated with a refresh of the modulation stage (14) such that the light sensor (22) captures irradiance of the modulated light field (15) shortly after the modulation stage (14) is refreshed.
The light sensor (22) may comprise one or more optics for spectrally separating the light from the light field.
The output optics (16) may comprise a zoom lens that is adjustable to provide different throw ratios. The irradiance monitor (20) may be configured to receive an input identifying a zoom setting of the zoom lens.
The input identifying the zoom setting may be provided from plural redundant zoom position sensors.
In response to changes in the zoom setting, the irradiance monitor (20) may: adjust a trip level of the irradiance monitor (20) to compensate for differences in the zoom setting; if the current zoom setting is larger than a threshold, inhibit operation of the light source (12) operative to emit the light field and/or operate the light source (12) at a lower power setting and/or introduce an optical attenuator into a light path of the light projection system (10) and/or disable light steering and/or control light steering to at least partially dump light from the light field and/or issue a warning signal; and/or change the zoom setting of the output optics (16).
The calibration light source (24) may comprise redundant light emitters.
The light emitters (25) of the calibration light source (24) may be used in rotation.
The light emitters (25) of the calibration light source (24) may be housed in a temperature controlled environment.
The light emitters (25) of the calibration light source (24) may be maintained at a temperature slightly greater than a maximum expected ambient temperature.
The calibration light source (24) may comprise one or more reference sets of light emitters that are used sparingly to determine aging compensation for other sets of light emitters of the calibration light source (24).
The aging compensation may comprise adjusting driving currents for the other sets of light emitters of the calibration light source (24) so that light outputs of the other sets of light emitters match light outputs of corresponding ones of the reference sets of light emitters.
Another aspect of the invention provides a calibration method comprising: developing a colour transform for a light source (12) and/or a calibration light source (24); and at least partially determining residual crosstalk and/or scaling to absolute irradiance levels.
The calibration method may comprise any of the features, combinations of features and/or sub-combinations of features discussed above.
Another aspect of the invention provides a calibration method for the light projection system described above. The method comprises: developing a colour transform for the light source (12) and/or the calibration light source (24) based at least in part on the image data (27); and at least partially determining residual crosstalk and/or scaling to absolute irradiance levels.
The calibration method may comprise any of the features, combinations of features and/or sub-combinations of features discussed above.
Other aspects of the invention provide apparatus having any new and inventive feature, combination of features, or sub-combination of features as described herein.
Other aspects of the invention provide methods having any new and inventive steps, acts, combination of steps and/or acts or sub-combination of steps and/or acts as described herein.
The following description and the accompanying drawings describe a wide variety of ways to implement the above noted methods and apparatus as well as different ways to implement individual components of such methods and apparatus.
It is emphasized that the invention relates to all combinations of the features, described and illustrated in this document even if these are recited in different claims, shown in different drawings, described in different sections, paragraphs or sentences.
Further aspects and example embodiments are illustrated in the accompanying drawings and/or described in the following description.
The accompanying drawings illustrate non-limiting example embodiments of the invention.
Throughout the following description, specific details are set forth in order to provide a more thorough understanding of the invention. However, the invention may be practiced without these particulars. In other instances, well known elements have not been shown or described in detail to avoid unnecessarily obscuring the invention. Accordingly, the specification and drawings are to be regarded in an illustrative, rather than a restrictive sense.
One aspect of the present technology provides a projector system that includes a monitor of output irradiance.
In
Modulation stage 14 is controlled by controller 30 to modulate light from light source 12 according to image data 17 to yield a modulated light field 15 which is directed by output optics 16 toward a screen or other surface for viewing. In some embodiments, modulation stage 14 includes light steering technology.
In some embodiments an amplitude modulator 18 (e.g., a LCD panel, LCOS spatial light modulator, DMD or the like) further spatially modulates modulated light field 15, for example to add high spatial frequency details in a projected image.
Projection system 10 includes a monitor 20 that operates to monitor output irradiance produced by projector system 10 as described herein. In some embodiments, monitor 20 operates as a quasi-independent safety function that continuously monitors an illumination pattern (e.g., modulated light field 15) of projector system 10.
In some applications of the present technology modulation stage 14 is not required or present. In such applications, monitor 20 may monitor a light field produced by light source 12 without additional modulation.
Monitor 20 includes a light sampling element 21 that redirects a small, set portion of light 15A from modulated light field 15 onto a sensor 22. Light sampling element 21 images modulated light field 15 onto sensor 22. Light sampling element 21 may, for example, comprise a beam splitter.
Light sampling element 21 may, for example, redirect less than 5% of the light in modulated light field 15 to sensor 22. In an example embodiment, light sampling element 21 redirects less than 1% (e.g., 0.5% or 0.3%) of the light from modulated light field 15 to sensor 22.
Light sampling element 21 may include an attenuating element 21A such as a neutral density filter to reduce the intensity of the light redirected by sampling element 21 that reaches sensor 22 by a desired factor. Light sampling element 21 may be designed so that the intensity of light from modulated light field 15 that reaches sensor 22 is between the black level and saturation level of sensor 22. In some embodiments the light incident on sensor 22 has an intensity (illuminance) that is a factor of 10⁵ or more less than the intensity of modulated light field 15.
Light sampling element 21 may include one or more lenses 21B to image modulated light field 15 onto sensor 22.
It is an option to integrate monitor 20 with projection optics 16. For example, light sampling element 21 may be integrated into a projection lens.
Monitor 20 also includes a calibration light source 24 operative to direct calibration light 24A onto sensor 22. Preferably calibration light 24A is directed to illuminate areas of sensor 22 that are outside of a region of sensor 22 onto which light from modulated light field 15 is directed. This advantageously allows calibration light 24A and light sampled from modulated light field 15 to be monitored simultaneously by sensor 22. Where the performance of sensor 22 is spatially uniform (e.g., the pixels in different areas of sensor 22 have the same responsiveness) or any spatial variation of the performance of sensor 22 is known then it is not necessary for calibration light 24A to illuminate the same pixels of sensor 22 as light from modulated light field 15.
It is not necessary for sensor 22 to have extremely high spatial resolution. For example, the resolution of sensor 22 may be selected so that sensor 22 oversamples the finest detail that modulated light field 15 may have. Where modulation stage 14 provides modulation of modulated light field 15 by a light steering unit for which the smallest feature is about 10% of screen height, a sensor 22 of 1 megapixel or more may have adequate spatial resolution. Sensor 22 may have a spatial resolution higher than necessary; however, this comes at the cost of more time or more complex hardware needed to output sensor image data from sensor 22 and to process that sensor image data.
In some embodiments, calibration light source 24 is configured to illuminate sensor 22 uniformly with calibration light 24A; in other embodiments, calibration light source 24 is configured to illuminate only a specific part of sensor 22 with calibration light 24A.
Calibration light source 24 may, for example, comprise one or more light emitting diodes (LEDs) 25 and a driver circuit 29 that operates the one or more LEDs 25 to provide a desired output of calibration light 24A.
Driver circuit 29 may, for example, drive each of LEDs 25 with a corresponding constant DC electrical current. The magnitude (I) of the DC electrical current may be set so that calibration light 24A illuminates sensor 22 with a desired irradiance.
In the illustrated embodiment, calibration light source 24 is operative to emit light of a plurality of colours. For example, calibration light source 24 may be operative to emit light of the same number of different colours as are included in the light of modulated light field 15 (three in the illustrated example).
In
In some embodiments, calibration light 24A includes:
In some embodiments calibration light emitters 25 comprise broadband light emitters such as, for example, LEDs that emit white light (“white LEDs”). As described in more detail below, broadband light emitters such as white LEDs may be substituted for R, G and B LEDs without overall loss of function. White LEDs may, for example, be phosphor based.
Monitor 20 also includes a processor/controller 26 that operates to process sensor image data 27 output by sensor 22 and to determine whether irradiance of modulated light field 15 exceeds a threshold. Processor 26 may be implemented using any of a wide range of technologies such as field programmable gate arrays (FPGAs) or other configurable hardware, graphics processing units (GPUs), software-controlled data processors, and/or purpose specific logic circuits. Processor 26 may be a stand-alone processor/controller or may be integrated into a controller (e.g., controller 30) that controls one or more other aspects of projector system 10 or controls projector system 10 overall.
Processor 26 is operable to monitor irradiance of modulated light field 15 in real time and to take a suitable action if the irradiance in an area of light field 15 exceeds a threshold. Example actions that processor 26 may trigger include, without limitation:
For certain applications it can be desirable for system 10 to react exceedingly quickly (e.g., within a few milliseconds) in the case that monitor 20 detects irradiation above a safety threshold. Sensor 22 may operate at a frame rate that is high enough to ensure that irradiance exceeding a set threshold can be identified within a desired time. For example, if system 10 should react to irradiance exceeding the set threshold within at most 4 to 5 milliseconds, sensor 22 may be refreshed and the sensor image data from sensor 22 processed as described herein at least once every 4 milliseconds (e.g., once every 2 or 3 milliseconds). This typically means that the refresh rate of sensor 22 will be significantly higher than the refresh rate of modulation stage 14 and/or a spatial light modulator of projection system 10 (which, at example frame rates of 120 Hz or less, may be refreshed at a rate of once every 8 milliseconds or longer). In some embodiments refresh of sensor 22 is coordinated with refresh of modulation stage 14 such that sensor 22 captures irradiance of modulated light field 15 shortly after any active modulation elements in modulation stage 14 have been refreshed.
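The timing budget described above can be sketched as a simple calculation: given a required reaction deadline and an allowance for processing the sensor image data, the minimum sensor refresh rate follows directly. The function name and the processing allowance are illustrative assumptions:

```python
def min_sensor_rate_hz(reaction_deadline_ms, processing_ms):
    """Minimum sensor refresh rate (Hz) so that one capture interval
    plus the time to process the resulting sensor image data fits
    within the required reaction deadline (illustrative sketch)."""
    capture_budget_ms = reaction_deadline_ms - processing_ms
    if capture_budget_ms <= 0:
        raise ValueError("processing alone exceeds the reaction deadline")
    return 1000.0 / capture_budget_ms

# Example consistent with the text: a 4 ms reaction target with an
# assumed 1 ms of processing leaves a 3 ms capture interval, i.e. a
# refresh rate of roughly 333 Hz, well above a 120 Hz frame rate.
```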
In some embodiments system 10 is configured to commence taking action (e.g., an action as described above) to reduce the likelihood that projected light will have an irradiance that exceeds the safety threshold in response to monitor 20 detecting irradiance that exceeds a lower threshold (e.g., 90% of the safety threshold). In some embodiments, monitor 20 outputs an irradiance level signal that indicates how close the maximum monitored irradiation of modulated light field 15 is to a tripping point. Other parts of projector system 10 may use the irradiance level signal to make adjustments that tend to limit maximum irradiance of projected light (e.g., by controlling a light steering unit using a less aggressive setting that reduces concentration of light into small highlights).
Sensor 22 is operative to detect light from light sources 12 and calibration light source 24. Sensor 22 may take a variety of forms. For example, sensor 22 may comprise one or more imaging light sensors (i.e. sensors that produce as output a two dimensional (2D) map (image) representing a measured quality of light as a function of position in the 2D image). Imaging sensors may be fixed (e.g., provided by an array of pixel sensors which each measures light in a fixed corresponding point or area of modulated light field 15) or scanning (e.g., provided by one or more sensors and an optical path that includes one or more scanning elements such that the position in modulated light field 15 which corresponds to the light sampled by sensor 22 changes as a function of time). In applications where the projected light is uniform (e.g., flat field) or has a known structure, sensor 22 may be provided by one or more discrete light sensors (such as photodiodes, charge coupled devices (CCDs) or the like).
Sensor 22 may include optics for spectrally separating light (e.g., spectral filtering). Such optics may be arranged, for example, to deliver R, G, and B components of the light to different light detectors.
In some embodiments, sensor 22 is provided by light detectors of a camera 23. A wide variety of suitable colour cameras are commercially available. One example is the model GigEPRO GP4136C camera available from NET GmbH, of Finning, Germany. This camera has a model EV76C560 image sensor available from e2v GmbH, of Grobenzell, Germany.
Camera 23 may comprise a colour camera. For example, sensor 22 may comprise an RGB imaging sensor. Sensor 22 may be of the same general type as used in general purpose cameras (e.g., sensor 22 may be optimized to provide an RGB output that allows for acceptable color reproduction of general real-world scenery under more or less common illumination circumstances). It is not necessary for sensor 22 to be designed for accuracy, reliability, stability or to output absolute luminance or irradiance levels.
As described herein, monitor 20 may determine a relative response of sensor 22 to light from modulated light field 15 and to light from calibration light source 24. Monitor 20 may use this relative response together with the known irradiance of calibration light source 24 to evaluate the irradiance of light field 15.
In some embodiments, calibration light source 24 provides light of substantially the same wavelengths as light sources 12. In such embodiments the response of sensor 22 to light from calibration light source 24 can be used directly to determine the irradiance of light in modulated light field 15. However, it is not always convenient to construct a calibration light source 24 that emits light that has the same wavelengths as light sources 12.
In some embodiments the light emitted by calibration light source 24 contains light of different wavelengths and/or different linewidths as compared to the light in modulated light field 15. For example, light source 12 may comprise lasers or banks of lasers that emit light having narrow linewidths (e.g., 10 nm or less) while calibration light source 24 comprises LEDs that emit light having linewidths in the range of about 15 to 40 nm. In an example embodiment, light sources 12 comprise lasers that respectively emit R, G, and B light and calibration light source 24 includes LEDs that respectively emit R, G, and B light. The LEDs may, for example, emit light having linewidths of about 20 nm, 30 nm and 25 nm for R, G and B respectively.
One complication in using an RGB sensor (e.g., a camera imaging sensor) for determining irradiance of light is that there tends to be cross-talk between the R, G, and B channels of the sensor (i.e. illumination of the sensor by light of a particular wavelength may cause non-negligible outputs in two or more of the R, G, and B, channels). The response of a sensor having M output channels to light made up of a set of N discrete wavelengths may be expressed as a N×M matrix. Where M=N the off-diagonal terms each represent cross-talk between a pair of the channels. For a RGB sensor (M=3) and light substantially made up of three wavelengths (N=3) the matrix is a 3×3 matrix.
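For the M=N=3 case described above, crosstalk can be removed by inverting the 3×3 response matrix: the vector of true per-wavelength light levels is recovered by solving the linear system formed by the measured R, G and B outputs. The sketch below does this with Cramer's rule to stay self-contained; the response matrix values are hypothetical and the function names are assumptions:

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def decouple_channels(response, rgb):
    """Recover per-wavelength light levels from crosstalk-contaminated
    R, G, B sensor outputs by solving response @ x = rgb via Cramer's
    rule (illustrative; a library linear solver would also work)."""
    d = det3(response)
    levels = []
    for k in range(3):
        mk = [row[:] for row in response]
        for r in range(3):
            mk[r][k] = rgb[r]  # replace column k with the measurements
        levels.append(det3(mk) / d)
    return levels
```

With a hypothetical response matrix whose off-diagonal entries represent a few percent of crosstalk, `decouple_channels` returns the underlying channel levels that, when mixed through the matrix, would produce the observed sensor outputs.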
Monitor 20 may perform a calibration procedure that allows irradiance of modulated light field 15 to be determined based on the output from sensor 22. The calibration procedure may be conceptualized as comprising a first step which separates three components of the illumination and/or calibration light into three categories and a second step which tags each of the categories with a corresponding energy-based weight. The resulting sensor outputs may be interpreted as components of the total optical power levels and summed. This procedure strongly reduces the impact of the mutual balances of the illumination light and the calibration light.
In cases where the light from light source 12 and calibration light source 24 have different spectral content, the residual crosstalk can be substantial and typically depends on the RGB balance of the light. Crosstalk distorts the mapping of the energy-based weights and therefore interferes with the accuracy of irradiance obtained by summing the sensor outputs. Methods as described herein may include steps for reducing crosstalk between the colour channels to reduce or eliminate this distortion.
In some embodiments the calibration procedure has two main steps:
The colour transform may serve two functions:
It is beneficial to determine the colour transform for only one of light source 12 and calibration light source 24. This avoids the complexity of working with two different colour transforms.
The irradiance of light projected by projector system 10 at any distance from projection optics 16 will depend on characteristics of projection optics 16. For example, projection optics 16 may be characterized by a throw ratio D/W where D is a distance between projection optics 16 and a screen and W is a width of the projected image on the screen. Changing projection optics 16 (either by replacement or adjustment) to have a higher throw ratio tends to increase the irradiance of light in a projected beam at any distance in front of projection optics 16. Changing projection optics 16 to have a smaller throw ratio has the opposite effect.
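Since a larger throw ratio concentrates the same projected power into a narrower beam, the irradiance at a given distance can be expected to grow roughly with the square of the throw ratio. The sketch below makes that scaling explicit; the quadratic model and the function name are illustrative assumptions, not taken from this disclosure:

```python
def relative_irradiance(throw_ratio, reference_throw_ratio):
    """Relative change in beam irradiance at a fixed distance when the
    throw ratio D/W changes, assuming (illustratively) that the beam
    cross-sectional area scales inversely with the square of the
    throw ratio while projected power is held constant."""
    return (throw_ratio / reference_throw_ratio) ** 2
```

Under this model, doubling the throw ratio quadruples the irradiance in the beam, which is why the monitor must account for lens changes and zoom adjustments.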
In some embodiments monitor 20 is configured to receive input that identifies projection optics 16 installed on projector system 10. Monitor 20 may store identification information for projection optics 16 at the time of calibration. If at a later time monitor 20 detects that current projection optics 16 does not match the stored identification information then monitor 20 may take actions to maintain safe operation of projector system 10. The actions may, by way of non-limiting example, comprise one or more of:
In some embodiments projection optics 16 comprise a zoom lens that is adjustable to provide different throw ratios in a range of throw ratios. In some such embodiments monitor 20 receives an input identifying a zoom setting of the zoom lens. The input may be provided from plural redundant zoom position sensors to enhance reliability. In response to changes in the zoom setting monitor 20 may take actions to maintain safe operation of projector system 10. The actions may, by way of non-limiting example, comprise one or more of:
In some embodiments projector system 10 includes a table which relates zoom settings to corresponding trip levels for monitor 20. Processor 26 may use the table to compensate for changes in zoom settings.
Monitoring for changes in projection optics 16 may, for example, be performed by processor 26 executing suitable firmware or software. In some embodiments processor 26 has access to a table that includes characteristics for a number of lenses that may be interchangeably included in projection optics 16. The table may, for example, expressly include compensation factors for adjusting a tripping point of monitor 20 or information (such as throw ratios) based on which processor 26 may calculate compensation factors for adjusting the tripping point of monitor 20 to compensate for different lenses being used in projection optics 16.
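A table relating zoom settings to trip levels, as described above, can be consulted with simple interpolation between the calibrated points. The table values, names and linear interpolation below are hypothetical, chosen only to illustrate the lookup:

```python
# Hypothetical calibration table: (zoom setting, trip level) pairs.
# A larger zoom setting concentrates the beam, so the trip level drops.
ZOOM_TRIP_TABLE = [
    (1.0, 100.0),
    (1.5, 60.0),
    (2.0, 35.0),
]

def trip_level_for_zoom(zoom, table=ZOOM_TRIP_TABLE):
    """Linearly interpolate the monitor trip level for a zoom setting.
    Settings outside the calibrated range are clamped to the nearest
    table endpoint (illustrative sketch)."""
    if zoom <= table[0][0]:
        return table[0][1]
    if zoom >= table[-1][0]:
        return table[-1][1]
    for (z0, t0), (z1, t1) in zip(table, table[1:]):
        if z0 <= zoom <= z1:
            frac = (zoom - z0) / (z1 - z0)
            return t0 + frac * (t1 - t0)
```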
Calibration light source 24 may be made more reliable by including redundant calibration light emitters 25. In some embodiments calibration light source 24 includes three or more sets of light emitters 25 that respectively illuminate different parts of sensor 22. Each set of light emitters 25 may include one or more light emitters 25 of each wavelength present in calibration light 24A.
Processor 26 may monitor the light detected in regions (e.g. regions 22-2) of sensor 22 corresponding to each of the light emitters 25 and may coordinate corrective action in the event that any of the calibration light emitters 25 is not working or appears to be producing the wrong amount of light (because it is producing more or less light than the other calibration light emitters of the same wavelength).
In some embodiments different combinations of sets of light emitters 25 are used in rotation. For example, where there are three sets A, B and C of calibration light emitters 25, these sets may be used in rotation in the combinations A-B, B-C and C-A.
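The rotation of emitter-set pairs described above may be sketched as a simple generator (names are illustrative assumptions):

```python
from itertools import cycle

def emitter_rotation(sets):
    """Yield successive pairs of calibration emitter sets so that each
    set rests for one turn in every len(sets) turns, e.g. A-B, B-C,
    C-A for three sets A, B and C (illustrative sketch)."""
    n = len(sets)
    for i in cycle(range(n)):
        yield (sets[i], sets[(i + 1) % n])
```

Rotating which sets are active spreads wear evenly across the emitters and keeps an idle set available for comparison at every point in the cycle.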
The optical output of LEDs and light emitters of some other types can be temperature dependent. To prevent ambient temperature from affecting the operation of monitor 20, light emitters 25 of calibration light source 24 may be housed in a temperature controlled environment. For example, when monitor 20 is operating, the environment of light emitters 25 may be maintained at a temperature slightly greater than a maximum expected ambient temperature. In some embodiments a temperature of sensor 22 is also controlled.
The light output of LEDs and some other types of light emitters may change as the light emitters age. Aging tends to be accelerated with use. In some embodiments calibration light source 24 includes one or more reference sets of light emitters 25 that are used sparingly to determine aging compensation for other sets of light emitters 25 in calibration light source 24. For example, when monitor 20 is powered on, outputs of the reference set(s) of light emitters 25 may be measured and compared to outputs of other sets of light emitters 25. An aging compensation procedure may involve adjusting driving currents for light emitters 25 in the other sets of light emitters 25 so that the light outputs of the other light emitters 25 match that of corresponding one(s) of the reference light emitters 25.
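The aging compensation described above may be sketched as follows, assuming (illustratively) that light output is roughly proportional to drive current near the operating point. The function and parameter names are assumptions:

```python
def compensated_current(current_mA, measured_output, reference_output):
    """Scale an emitter's drive current so that its light output
    matches that of a sparingly-used reference emitter, under the
    illustrative assumption that output is approximately proportional
    to drive current near the operating point."""
    if measured_output <= 0:
        raise ValueError("no measurable output; emitter may have failed")
    return current_mA * (reference_output / measured_output)
```

For example, an aged emitter producing 80% of the reference output at 20 mA would have its drive current raised to 25 mA under this proportional model.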
As described herein, the present technology may be implemented in a wide variety of different ways which may use different components and/or be configured for different modes of operation. Failure modes, effects, and diagnostic analysis (FMEDA) may be applied in designing such systems to enhance reliability and safety.
In addition to providing a safety function, monitor 20 may output information regarding measured irradiance of modulated light field 15 that may be applied by other parts of projector system 10 to check their operation (e.g., by comparing a maximum irradiance measured by monitor 20 to a predicted irradiance predicted from control inputs such as light steering settings) and/or to better control their operation (e.g., by using the information from monitor 20 as a feedback signal in a control loop such as a control loop that sets power levels for light emitters 13 of light source 12).
The following sections provide example implementations. These examples include details applicable to the case where:
This example implementation begins by developing a colour transform for light source 12 and then proceeds to determine residual crosstalk and scaling to absolute irradiance levels for light source 12. This choice is advantageous based on the characteristics of light from light source 12 and calibration light source 24 respectively. In particular:
Notwithstanding the benefits of generating the colour transform for light source 12, this is not a mandatory selection. In another example implementation (see Example Implementation 2) the colour transform is generated for calibration light source 24.
At step S201 the responsiveness of sensor 22 to light from light source 12 is measured. Step S201 attempts to reduce crosstalk between the RGB output values of sensor 22 when illuminated by light from light source 12. This reduction in crosstalk can enable better accuracy in the following parts of method 200.
At step S202, lumped coefficients for residual crosstalk are calculated. Step S202 may involve, for example, steps S202A and S202B. Step S202A sets the power balance of calibration light source 24. This step attempts to set the RGB balance of calibration light source 24 in line with the typical RGB balance of light source 12. Step S202A can help to optimize the used range of sensor 22 with respect to the available usable (dynamic) range of sensor 22. The used range of sensor 22 should be above the noise floor of sensor 22 and below a saturation or clipping level of sensor 22. Appropriate control of the power balance of calibration light source 24 can enable better accuracy in step S202B.
Step S202B establishes a common scale factor (based on the power balance found in step S202A). The common scale factor links the response of sensor 22 to light from light source 12 to the response of sensor 22 to the light from calibration light source 24. The common scale factor may be applied to determine total irradiance of light from light source 12 (e.g., light from modulated light field 15). The total irradiance of light from light source 12 may be compared to a threshold.
Step S201 may, for example, comprise the following procedure. For each colour (e.g., R, G, B), drive light source 12 to produce light of that colour at a corresponding power level. This causes light source 12 to output light of an unknown optical power Ee,col (where col is an index indicating the colour).
The power level may be the same or different for different colours. Ratios of the power levels for the different colours may be selected to correspond to a desired white point. The desired white point may be selected to be a “typical” white point for light source 12.
Set camera 23 to capture an image of the sampled light with suitable exposure settings and then record the resulting R,G,B output values. In some embodiments a colour transform built into camera 23 is set to a unity matrix for this step. The resulting RGB output values may be recorded in matrix form as follows:
Correct the recorded values to compensate for the power balance (i.e. the values of Ee,col may not be the same for different colours). This may be done by finding the inverse of the matrix:
and multiplying the matrix of Equation 1 by the inverse of Equation 2 to yield a sensitivity matrix S as follows:
Calculate the inverse matrix T = S⁻¹ and scale matrix T to yield a matrix Tscaled so that the maximum value on the diagonal of Tscaled has a value suitable for further processing (such as a value of 1.0). Note that scaling is not the same as normalization. The matrix Tscaled can then be applied as a colour transform to the output of camera 23.
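The construction of S and Tscaled can be sketched numerically as follows; the matrix entries and power levels are made-up stand-ins for real measurements, not values from the described system:

```python
import numpy as np

# Raw RGB outputs of sensor 22: column "col" holds the (R, G, B) response
# recorded when light source 12 emits only colour "col" at power E_e,col.
M = np.array([[9.0, 0.5, 0.2],
              [0.6, 8.0, 0.7],
              [0.1, 0.4, 7.0]])

# Optical powers used for each colour (need not be equal).
E = np.array([1.0, 0.9, 1.1])

# Compensate for the power balance: divide each column by its E_e,col
# (equivalent to multiplying by the inverse of the diagonal power matrix).
S = M / E  # sensitivity matrix

# Invert, then scale so the largest diagonal entry of T_scaled is 1.0.
T = np.linalg.inv(S)
T_scaled = T / np.max(np.diag(T))

# Applying T_scaled to a raw measurement strongly suppresses crosstalk:
corrected = T_scaled @ (M[:, 0] / E[0])  # response to pure R illumination
# corrected is proportional to (1, 0, 0): the G and B channels are ~zero
```

This mirrors the verification step described below: after applying Tscaled, the output channels not corresponding to the illuminating colour should be close to zero.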
In some embodiments camera 23 includes a colour transform unit and Tscaled may be uploaded into camera 23 so that the output from sensor 22 is automatically multiplied by Tscaled in camera 23. In general, Tscaled may be applied to the output from sensor 22 either in camera 23 or elsewhere along the signal path of the RGB output channels.
The effectiveness of Tscaled may be verified by operating light source 12 and camera 23 to obtain images for each colour individually. In each case the output colour channels (after processing by Tscaled) not corresponding to the current colour (e.g., when the current colour is R, the colour channels for G and B) should have values close to zero.
Applying the colour transform Tscaled significantly reduces crosstalk in the output channels of camera 23. This in turn facilitates summing of the levels under observation to yield total irradiance values.
The procedure above provides outputs that correspond to responsiveness of sensor 22 to the different colours of light from light source 12 and that are proportional to irradiance. This allows total irradiance to be determined by summing the corrected R, G and B output values.
Step S202 may, for example, comprise finding a correspondence between power levels for light emitters 25 of calibration light source 24 and power levels for light emitters 13 of light source 12 and then establishing a relationship between outputs of camera 23 and totaled optical power. The following procedure is an example of how this may be done. In this example an irradiance meter 40 (which may use any technology to measure optical power, e.g., an optical power meter such as a bolometer, a spectrometer system, a photosensor based irradiance meter, etc.) is applied to measure irradiance of light. Irradiance meter 40 may, for example, be arranged to measure irradiance of light at a specified location ("calibration location"). The calibration location may be, for example, a specified distance in front of projection optics 16. Irradiance meter 40 is only required for initial calibration and does not need to be present when projector system 10 is subsequently operated.
For each light emitter 13 (i.e., each primary colour) of light source 12, operate the light emitter 13 to emit light onto a sensor of the irradiance meter 40 and measure the irradiance of the light using the irradiance meter 40. It is preferable to adjust the optical power of the light detected by the irradiance meter 40 so that a specific calibration irradiance is detected by the irradiance meter 40. The calibration irradiance may be different for different primary colours.
The irradiance at the calibration location may be adjusted by one or more of adjusting the power level of the light emitter 13 and adjusting settings of modulation stage 14 (especially where modulation stage 14 includes a light steering unit). These adjustments are made so that the irradiance meter 40 indicates the calibration irradiance. In some embodiments, projector system 10 may be controlled to display a test pattern while the irradiance is being measured in step S202. The test pattern may, for example, include a high intensity spot at the calibration location. In some embodiments the high-intensity spot has an irradiance significantly higher at the calibration location than could be achieved by the light emitter 13 without light steering. The test pattern may, for example, comprise a pattern that has high intensity spots located at the ANSI points. Such test patterns may also be used for measuring luminance uniformity.
The calibration irradiance may, for example, be:
In some embodiments the calibration irradiance is, for example, a specified fraction of an allowed maximum irradiance weighted by the contribution of the primary colour at the desired white point. For example, where at the desired white point, R, G, and B respectively contribute 32%, 33% and 35% of the total irradiance, the power level for the R primary may be set to yield an irradiance that is 0.32 times the specified fraction of the allowed maximum irradiance. In some embodiments the specified fraction of the allowed maximum irradiance is in the range of 50% to 100%, for example 90%. The specified fraction is preferably in the range of 85% to 99% to provide a safety factor in the range of 1% to 15%.
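The per-primary targets described above amount to a simple weighting, which can be sketched with the example numbers from the text (the allowed maximum of 100 units is an assumed placeholder, not a value from any standard):

```python
def calibration_irradiances(max_irradiance, fraction, white_point_weights):
    """Per-primary calibration irradiance: the specified fraction of the
    allowed maximum irradiance, weighted by each primary's contribution
    at the desired white point."""
    return {colour: fraction * max_irradiance * weight
            for colour, weight in white_point_weights.items()}

targets = calibration_irradiances(
    max_irradiance=100.0,   # assumed allowed maximum (placeholder units)
    fraction=0.90,          # 90% of the maximum => 10% safety factor
    white_point_weights={"R": 0.32, "G": 0.33, "B": 0.35},
)
# targets is approximately {"R": 28.8, "G": 29.7, "B": 31.5};
# the three targets sum to 90% of the allowed maximum.
```

The weights must sum to 1 so that the combined white-point output lands exactly at the specified fraction of the allowed maximum.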
While the primary colour is being delivered at the set power level, use camera 23 (more generally sensor 22) to measure the light of the primary colour at a location on sensor 22 corresponding to the calibration location to yield a corresponding sensor output value. For example, sensor output values may be determined for each of R, G, and B light.
In some embodiments the above parts of S202 are performed for a plurality of calibration locations. This can assist in improving reliability of monitor 20.
The corresponding light emitter 25 of calibration light source 24 may then be operated to emit light of the corresponding colour and the power level for the light emitter 25 adjusted to find the power level at which the corresponding output for camera 23 is the same as the previously determined sensor output value for that colour. The power levels for each of light emitters 25 and/or the balance between the power levels for light emitters 25 may be recorded.
Next, light source 12 may be operated to emit light with light emitters 13 operating together with power levels for light emitters 13 set according to the desired white balance. The power levels for light emitters 13 may then be scaled up or down as necessary until the irradiance measured by the irradiance meter 40 is the specified fraction of the allowed maximum irradiance.
Next, calibration light source 24 may be operated to emit light with light emitters 25 operating together with their relative power levels set according to the balance determined above. The drive current for light emitters 25 may be scaled up or down as necessary while maintaining the balance until the output of camera 23 indicates an output equal to the specified fraction of the allowed maximum irradiance. The drive currents for light emitters 25 when this equality is satisfied may be saved as reference drive currents.
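The scaling loop described above (scale all drive currents up or down, holding the balance fixed, until the camera output matches the target) can be sketched as follows; `measure_output` stands in for driving light emitters 25 and reading camera 23, and all names and values are illustrative:

```python
def find_reference_currents(balance, target_output, measure_output,
                            tolerance=1e-3, max_iterations=50):
    """Scale the drive currents for the calibration light emitters up or
    down, keeping their mutual balance fixed, until the summed camera
    output matches the target (the specified fraction of the allowed
    maximum).  Returns the reference drive currents."""
    scale = 1.0
    for _ in range(max_iterations):
        output = measure_output([scale * c for c in balance])
        if abs(output - target_output) <= tolerance * target_output:
            break
        scale *= target_output / output  # proportional correction
    return [scale * c for c in balance]

# Toy stand-in: camera output is 2.5x the summed drive current.
reference = find_reference_currents(
    balance=[0.32, 0.33, 0.35],
    target_output=90.0,
    measure_output=lambda currents: 2.5 * sum(currents),
)
# The balance ratios are preserved; only the overall scale changes.
```

Because the balance is never altered, any crosstalk contributions to the camera output are absorbed into the single scale factor, consistent with the lumped-coefficient treatment described below.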
Once the reference drive currents for light emitters 25 have been established, monitor 20 may continuously monitor the output of camera 23 to evaluate the ratio:

(R + G + B)/(Rref + Gref + Bref)   (Eqn. 4)
where R, G and B are the R, G, and B output values from camera 23 monitoring modulated light field 15 and Rref, Gref and Bref are the output values from camera 23 monitoring calibration light source 24 when light emitters 25 are driven with the reference drive currents. Eqn. (4) may be evaluated per pixel in a region of light sensor 22 that receives light from modulated light field 15.
For example, monitor 20 may be tripped to perform an action if the value of Eqn. (4) exceeds 1, which indicates that the irradiance of light field 15 exceeds the specified fraction of the allowable maximum irradiance.
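The per-pixel check of Eqn. (4) can be sketched as follows; the camera read-out arrays are synthetic stand-ins for output from camera 23:

```python
import numpy as np

def irradiance_trip_mask(rgb, rgb_ref):
    """Evaluate Eqn. (4) per pixel: the ratio of the summed R, G, B
    outputs for the modulated light field to the summed reference outputs
    recorded at the calibration reference drive currents.  Pixels where
    the ratio exceeds 1 exceed the specified fraction of the allowed
    maximum irradiance and should trip the monitor."""
    ratio = rgb.sum(axis=-1) / rgb_ref.sum(axis=-1)
    return ratio > 1.0

# Synthetic 2x2-pixel example: the top row exceeds the calibrated level.
rgb = np.array([[[10.0, 12.0, 11.0], [40.0, 42.0, 41.0]],
                [[5.0, 6.0, 5.0],    [9.0, 9.0, 9.0]]])
rgb_ref = np.full((2, 2, 3), 10.0)  # reference sums to 30 per pixel

trip = irradiance_trip_mask(rgb, rgb_ref)
# trip == [[True, True], [False, False]]
```

Any True entry in the mask would cause the monitor to take corrective action; summing channels before dividing is what allows crosstalk terms to be lumped into the calibration, as explained next.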
It is noteworthy that the divisor of Eqn. (4) may include substantial contributions from crosstalk (off-diagonal) terms. For example, Rref, Gref and Bref may be expressed as follows:
where λij is the crosstalk term indicating the contribution to output channel i, i ∈ {R, G, B}, by light jref, jref ∈ {Rref, Gref, Bref}, from calibration light source 24. However, where the white balance of calibration light source 24 is fixed, the mutual balance between Ee,Rref, Ee,Gref and Ee,Bref does not change, and the value of the divisor in Equation (4) scales with the irradiance sum Ee,Rref + Ee,Gref + Ee,Bref, which is the sum that is calibrated. Hence any contributions to the divisor of Equation (4) from crosstalk terms can be considered to be lumped in one scale factor that is included in the calibration of the irradiance sum.
As mentioned above, in some embodiments calibration light emitters 25 comprise broadband light sources such as white LEDs. With this choice of light emitter 25 there will almost certainly be more residual crosstalk than in the case where light emitters 25 are narrower band light emitters such as R, G, B LEDs. This residual crosstalk may still be cancelled in method 200.
For example, for the case where all colours in calibration light 24A are emitted by white LEDs (in which different colour components cannot be individually controlled), Eqn. (6) could be restated as:
where Ee,ref is the radiance of the broadband calibration light emitter 25.
The ratio of response of sensor 22 to red light emitter 13 to that produced by red reference light emitter 25 may be expressed as:
for the case where calibration light source 24 has one or more separate light emitters 25 which emit light having irradiance Ee,Rref corresponding to the red light emitter 13 which emits light having irradiance Ee,R. For the case where light emitters 25 of calibration light source 24 are broadband light emitters equation 10 becomes:
The denominator of Eqn. 11 clearly includes more residual crosstalk as compared to Eqn. 10. However, that does not matter since the denominator of Eqn. (11) is included in one scale factor that is used in the calibration of the irradiance sum so the remaining calculations are not affected. The calculations for the other primary colours (e.g., G, and B) work the same way.
A consequence of using light emitters 25 that have a fixed colour balance in calibration light source 24 is that fine tuning of the colour balance of calibration light 24A is not possible. However, that turns out not to be a significant problem in most cases. Broadband light emitters 25 may be selected so that the spectrum of emitted light at the different wavelengths is well within the useful range of sensor 22.
It can be appreciated that the method described above exploits the fact that the optical energy of the light from light source 12 and calibration light source 24 is almost entirely concentrated at a few specific wavelengths (for example 6 specific wavelengths). With this discontinuous spectrum the response curves of sensor 22 are in fact "sampled" in 6 discrete and more or less narrow ranges of wavelengths. Hence, instead of having to consider the entirety of the spectral response curves R, G and B, 6 values are sufficient. Moreover, the 6 values are closely linked per colour (i.e. calibration light 24A can consist of light having a plurality of discrete wavelengths that are each close to a corresponding discrete wavelength of light from light source 12). This allows ratios of responses of sensor 22 to the three pairs of corresponding wavelengths to be used in the calibration.
Another benefit of the calibration method as described herein is that the calibration can automatically take account of the conversion to radiometric units Ee (optical energy) from light intensity Ev. Since we are concerned with discrete wavelengths of light in narrow bands, for the source primaries of light source 12: Ee,col = Cλ,col·Ev,col where Cλ,col is a constant that has a value that is different for each wavelength of light. Similarly, for calibration light source 24: Ee,Ref,col = Cλ,Ref,col·Ev,Ref,col.
Because the calibration procedure described above makes use of relative comparisons between the irradiance of light from light source 12 and the irradiance of calibration light 24A, the ratios of these constants may be considered to be of primary importance. Therefore, the numerical values of the constants Cλ,col and Cλ,Ref,col do not need to be known. The calibration outcome from the above process incorporates the factors necessary to compare optical energy of calibration light 24A and light from light source 12. Even the typically small deviation between Cλ,col and Cλ,Ref,col is incorporated in the calibrated driving currents for calibration light source 24.
Thus, calibration as described above can establish a relative scaling applied to the RGB response of camera 23 so that the results have the correct proportions to represent relative Ee values. Relative Ee values may be further scaled to absolute Ee values up to a calibration factor for the effect of projection optics 16.
One application of the technology described above is improving the safety of high power projector systems. The output of a monitor 20 that checks for irradiance exceeding a threshold (whether or not the monitor 20 operates in any of the ways described herein) may be applied as described above to limit the maximum irradiance of light output by a projector so that it does not exceed some threshold. The threshold may be chosen to be at a level that is acceptably safe (e.g., a level that complies with accepted safety standards).
For some applications higher optical power outputs may be desired. As mentioned above, highlights, even small highlights, that are high in optical power can help to provide a very vibrant high-impact viewing experience. Such highlights may be safe to view on a screen. However, generating such highlights in images projected onto the screen may require a projector to output light that has irradiance levels above a safe threshold at locations between the projector and the screen (e.g., such that a person could damage their eyes by looking into the projector instead of at the screen).
One way to address this problem is to prevent people from entering the region through which light passes from the projector to the screen (e.g., with a rear projection screen or by providing physical barriers). This solution is often not practical or optimal.
In some embodiments, operation of a monitor 20 to limit irradiance of the light output by a projector system is inhibited as long as no person (or no object) intrudes into the region through which light passes from the projector to the screen. An intrusion detector, for example, a LIDAR system, an optical curtain, or the like, may be provided and operated to detect unauthorized entry into the region through which the light passes.
Another way to detect intrusion into the region is to detect shadows on the screen (e.g., by observing the screen with a camera (e.g., an infrared camera) and processing images from the camera to detect shadows which show that a person or object has intruded into the projector beam). This is illustrated in
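The shadow-based detection described above can be sketched as follows, assuming an expected (unobstructed) screen image is available for comparison; the thresholds, array shapes, and names are illustrative assumptions:

```python
import numpy as np

def detect_shadow(expected, observed, darkening_threshold=0.5,
                  min_shadow_pixels=20):
    """Flag an intrusion when enough pixels of the observed screen image
    are much darker than expected, i.e. a shadow is being cast on the
    screen.  expected/observed are grayscale intensity arrays captured by
    a camera (e.g. an infrared camera) observing the screen."""
    eps = 1e-6
    darkening = observed / (expected + eps)  # ~1.0 where unobstructed
    shadow_pixels = np.count_nonzero(darkening < darkening_threshold)
    return shadow_pixels >= min_shadow_pixels

expected = np.full((64, 64), 200.0)   # bright, unobstructed screen
observed = expected.copy()
observed[20:30, 20:30] = 40.0         # a 10x10 dark patch: a cast shadow
# detect_shadow(expected, observed) flags the intrusion
```

A practical implementation would also need to account for the projected image content itself (e.g. by predicting the expected frame from the video signal) so that dark image regions are not mistaken for shadows.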
If monitor 20 detects that the light being projected by the projector system has an irradiance that exceeds a threshold value and also the intrusion detector indicates an intrusion into the region then corrective action may be taken. The corrective action may, by way of non-limiting example, comprise one or more of:
In some embodiments the intrusion detector has a ranging capability (i.e. is operable to determine whether any person or object in the region through which light passes from the projector to the screen is closer to projection optics 16 than a set distance). In such embodiments the corrective action may optionally be taken only in cases where the intruding person or object is closer to projection optics 16 than the set distance.
Where a component (e.g., a software module, processor, assembly, device, circuit, etc.) is referred to herein, unless otherwise indicated, reference to that component (including a reference to a “means”) should be interpreted as including as equivalents of that component any component which performs the function of the described component (i.e., that is functionally equivalent), including components which are not structurally equivalent to the disclosed structure which performs the function in the illustrated exemplary embodiments of the invention.
Embodiments of the invention may be implemented using specifically designed hardware, configurable hardware, programmable data processors configured by the provision of software (which may optionally comprise “firmware”) capable of executing on the data processors, special purpose computers or data processors that are specifically programmed, configured, or constructed to perform one or more steps in a method as explained in detail herein and/or combinations of two or more of these. Examples of specifically designed hardware are: logic circuits, application-specific integrated circuits (“ASICs”), large scale integrated circuits (“LSIs”), very large scale integrated circuits (“VLSIs”), and the like. Examples of configurable hardware are: one or more programmable logic devices such as programmable array logic (“PALs”), programmable logic arrays (“PLAs”), and field programmable gate arrays (“FPGAs”). Examples of programmable data processors are: microprocessors, digital signal processors (“DSPs”), embedded processors, graphics processors, math co-processors, general purpose computers, server computers, cloud computers, mainframe computers, computer workstations, and the like. For example, one or more data processors in a control circuit for a device may implement methods as described herein by executing software instructions in a program memory accessible to the processors.
Processing may be centralized or distributed. Where processing is distributed, information including software and/or data may be kept centrally or distributed. Such information may be exchanged between different functional units by way of a communications network, such as a Local Area Network (LAN), Wide Area Network (WAN), or the Internet, wired or wireless data links, electromagnetic signals, or other data communication channel.
The invention may also be provided in the form of a program product. The program product may comprise any non-transitory medium which carries a set of computer-readable instructions which, when executed by a data processor, cause the data processor to execute a method of the invention. Program products according to the invention may be in any of a wide variety of forms. The program product may comprise, for example, non-transitory media such as magnetic data storage media including floppy diskettes, hard disk drives, optical data storage media including CD ROMs, DVDs, electronic data storage media including ROMs, flash RAM, EPROMs, hardwired or preprogrammed chips (e.g., EEPROM semiconductor chips), nanotechnology memory, or the like. The computer-readable signals on the program product may optionally be compressed or encrypted.
In some embodiments, the invention may be implemented in software. For greater clarity, “software” includes any instructions executed on a processor, and may include (but is not limited to) firmware, resident software, microcode, code for configuring a configurable logic circuit, applications, apps, and the like. Both processing hardware and software may be centralized or distributed (or a combination thereof), in whole or in part, as known to those skilled in the art. For example, software and other modules may be accessible via local memory, via a network, via a browser or other application in a distributed computing context, or via other means suitable for the purposes described above.
Software and other modules may reside on servers, workstations, personal computers, tablet computers, and other devices suitable for the purposes described herein.
Unless the context clearly requires otherwise, throughout the description and the claims:
Words that indicate directions such as “vertical”, “transverse”, “horizontal”, “upward”, “downward”, “forward”, “backward”, “inward”, “outward”, “left”, “right”, “front”, “back”, “top”, “bottom”, “below”, “above”, “under”, and the like, used in this description and any accompanying claims (where present), depend on the specific orientation of the apparatus described and illustrated. The subject matter described herein may assume various alternative orientations. Accordingly, these directional terms are not strictly defined and should not be interpreted narrowly.
Where a range for a value is stated, the stated range includes all sub-ranges of the range. It is intended that the statement of a range supports the value being at an endpoint of the range as well as at any intervening value to the tenth of the unit of the lower limit of the range, as well as any subrange or sets of sub ranges of the range unless the context clearly dictates otherwise or any portion(s) of the stated range is specifically excluded. Where the stated range includes one or both endpoints of the range, ranges excluding either or both of those included endpoints are also included in the invention.
Certain numerical values described herein are preceded by "about". In this context, "about" provides literal support for the exact numerical value that it precedes, the exact numerical value ±5%, as well as all other numerical values that are near to or approximately equal to that numerical value. Unless otherwise indicated, a particular numerical value is included in "about" a specifically recited numerical value where the particular numerical value provides the substantial equivalent of the specifically recited numerical value in the context in which the specifically recited numerical value is presented. For example, a statement that something has the numerical value of "about 10" is to be interpreted as the set of statements:
Specific examples of systems, methods and apparatus have been described herein for purposes of illustration. These are only examples. The technology provided herein can be applied to systems other than the example systems described above. Many alterations, modifications, additions, omissions, and permutations are possible within the practice of this invention. This invention includes variations on described embodiments that would be apparent to the skilled addressee, including variations obtained by: replacing features, elements and/or acts with equivalent features, elements and/or acts; mixing and matching of features, elements and/or acts from different embodiments; combining features, elements and/or acts from embodiments as described herein with features, elements and/or acts of other technology; and/or omitting features, elements and/or acts from described embodiments.
As will be apparent to those of skill in the art upon reading this disclosure, each of the individual embodiments described and illustrated herein has discrete components and features which may be readily separated from or combined with the features of any other described embodiment(s) without departing from the scope of the present invention.
Any aspects described above in reference to apparatus may also apply to methods and vice versa.
Any recited method can be carried out in the order of events recited or in any other order which is logically possible. For example, while processes or blocks are presented in a given order, alternative examples may perform routines having steps, or employ systems having blocks, in a different order, and some processes or blocks may be deleted, moved, added, subdivided, combined, and/or modified to provide alternative or subcombinations. Each of these processes or blocks may be implemented in a variety of different ways. Also, while processes or blocks are at times shown as being performed in series, these processes or blocks may instead be performed in parallel, simultaneously or at different times.
Various features are described herein as being present in “some embodiments”. Such features are not mandatory and may not be present in all embodiments. Embodiments of the invention may include zero, any one or any combination of two or more of such features. All possible combinations of such features are contemplated by this disclosure even where such features are shown in different drawings and/or described in different sections or paragraphs. This is limited only to the extent that certain ones of such features are incompatible with other ones of such features in the sense that it would be impossible for a person of ordinary skill in the art to construct a practical embodiment that combines such incompatible features. Consequently, the description that “some embodiments” possess feature A and “some embodiments” possess feature B should be interpreted as an express indication that the inventors also contemplate embodiments which combine features A and B (unless the description states otherwise or features A and B are fundamentally incompatible). This is the case even if features A and B are illustrated in different drawings and/or mentioned in different paragraphs, sections or sentences.
It is therefore intended that the following appended claims and claims hereafter introduced are interpreted to include all such modifications, permutations, additions, omissions, and sub-combinations as may reasonably be inferred. The scope of the claims should not be limited by the preferred embodiments set forth in the examples, but should be given the broadest interpretation consistent with the description as a whole.
This application claims priority from, and for the purposes of the United States of America, the benefit under 35 USC § 119 in connection with, U.S. application No. 63/265,019 filed 6 Dec. 2021, which is hereby incorporated herein by reference.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2022/084581 | 12/6/2022 | WO |
Number | Date | Country | |
---|---|---|---|
63265019 | Dec 2021 | US |