The present invention relates to an improved sensor apparatus and improved method of sensing. In particular the present invention relates to an improved particle detector and method of detecting particles.
There are a number of ways of detecting smoke in a region, such as a room, building, enclosure, or open space. Some methods involve sampling air from the region and passing the sampled air through a detection chamber, whereby particles are detected and an estimation is made of the amount of smoke in the region of interest. Such an apparatus is exemplified in aspirated smoke detectors like LaserPLUS™ sold by the applicant. Other detectors are placed in the region of interest, and use a sensor to detect particles adjacent the sensor. An example of such a detector is a point detector, in which air passes between an emitter and a sensor, and the smoke is detected directly in the region of interest.
In both cases if the smoke does not enter a sampling point (of the aspirated detector) or pass between the sensor and emitter of the point detector, no smoke will be detected. As many buildings employ air handling means for extracting air from a region, such as air-conditioning, there is no guarantee that smoke will be detected rather than pass out of the region via the air handling ducts. It can be very difficult to use the aforementioned methods of detecting smoke in outdoor areas or very large indoor arenas where there may not be appropriate locations to place a point detector or a sample point and connecting tubing.
Other devices used to detect smoke include the detector disclosed in U.S. Pat. No. 3,924,252, (Duston) which uses a laser and a photodiode to detect light scattered from particles. This device uses a corner reflector to reflect the light back at the emitter. Duston requires a feedback circuit to detect whether the beam is emitted or blocked.
Another type of detector is known as a “Beam Detector”, which measures the attenuation of the intensity of a signal from a projected light source caused by smoke particles suspended in the projected light. These detectors have relatively low sensitivity and are only capable of measuring the total attenuation within the illuminated region.
Any discussion of documents, devices, acts or knowledge in this specification is included to explain the context of the invention. It should not be taken as an admission that any of the material forms a part of the prior art base or the common general knowledge in the relevant art in Australia or elsewhere on or before the priority date of the disclosure and claims herein.
In one form the present invention provides a method of detecting particles including emitting a beam of radiation into a monitored region and detecting a variation in images of the region indicating the presence of the particles.
With respect to the above method, further steps embodying the method and features of preferred embodiments may include identifying an area of interest in the images which represents a corresponding zone of the monitored region. Scattered radiation within the zone may be represented in one or more segments of a corresponding image, which allows the location of the particles in the region to be identified. The location of the particles may be determined in accordance with a geometric relationship between the location of a source of emitted radiation, the direction of the emitted radiation and a point of image detection, wherein the geometric relationship is determined from the images. The detected variation may be an increase in scattered radiation intensity. The increase in scattered radiation intensity may be assessed with reference to a threshold value. The threshold value may be calculated by averaging integrated intensity values from the images. The method may comprise assigning different threshold values for different spatial positions within the region. The method may comprise directing the radiation along a path and identifying a target in the images, the target representing a position at which the radiation is incident on an objective surface within the region. A location of the target in the images may be monitored, and the emission of radiation may be ceased in response to a change in the location of the target. The method may comprise identifying a location of an emitter in the images. Further, the method may comprise determining an operating condition of the emitter based on radiation intensity at the identified location of the emitter. The images may be processed as frames which are divided into sections which represent spatial positions within the monitored region.
Also, the method may comprise monitoring intensity levels in associated sections of the images and assigning different threshold values for different spatial positions within the region which correspond to the associated sections.
In another aspect, the present invention provides apparatus for monitoring a region, comprising:
an emitter for directing a beam of radiation comprising at least one predetermined characteristic into the region;
an image capture device for obtaining at least one image of the region; and
a processor for analysing the at least one image to detect variation of the at least one characteristic between the images, indicating presence of particles within the region.
The processor may be adapted to determine the location of particles in accordance with a geometric relationship between the locations of the emitter, the directed beam of radiation and the image capture device, wherein the geometric relationship is determined from the analysed images. The apparatus may comprise a plurality of emitters, arranged to direct radiation along different respective beam paths. The apparatus may further comprise one or more filters for adapting the image capture device to capture radiation from the emitter in preference to radiation from other sources. The filters may be one or more, or a combination, of:
a temporal filter;
a spatial filter;
a band-pass filter; and
a polarising filter.
The image capture device preferably comprises an attenuator. The attenuator may comprise a variable aperture device. A plurality of image-capturing devices may be used. Preferably, the image capture device comprises a camera. It is also preferable that the emitter comprises a laser.
In a further aspect, the present invention provides a method of detecting particles comprising the steps of: determining a path of a beam of radiation, comprising placing a first image capturing device to view a source of the radiation and at least a part of the path of the beam of radiation; communicating the position of the source to a processor; placing a second image capturing device to view an impact point of the beam of radiation; communicating related position information of the impact point to the processor; and determining the path of the beam in accordance with a geometric relationship between the position of the source and the position information of the impact point.
In yet another aspect, the present invention provides a method of detecting particles comprising the steps of: determining a region of interest containing a path of a beam of radiation, comprising locating a first point, being the position of a source of the beam, using an image capturing device; locating a second point, being the intersection of the beam of radiation with a field of view of the image capturing device; determining the path of the beam in accordance with the first and second points; and calculating a region of interest containing the determined beam path.
The step of locating the second point may be performed with at least one substantially transparent probe, which is preferably removed from the beam path once the point has been located.
In still another aspect, the present invention provides a method of determining the level of smoke at one or more subregions in a region of interest comprising: directing a beam of radiation within the region, selecting a view of at least a portion of a path of the beam with an image capture device, determining the location of the source of the radiation relative to the image capture device, determining the direction of the beam relative to the image capture device, dividing the beam of radiation into segments, determining a geometric relationship between the segments and the image capture device, adjusting a level of light received by the image capture device of each segment so as to allow for the geometric relationship. The segments may comprise at least one pixel and the segments are preferably grouped to form the subregions for smoke detection.
In a further aspect the present invention provides apparatus adapted to detect particles, said apparatus comprising processor means adapted to operate in accordance with a predetermined instruction set, said apparatus, in conjunction with said instruction set, being adapted to perform the method as disclosed herein.
In embodiments of the present invention there is provided a computer program product comprising a computer usable medium having computer readable program code and computer readable system code embodied on said medium for detecting particles within a data processing system, said computer program product comprising computer readable code within said computer usable medium for performing the method steps of the methods described herein.
Other aspects, advantages and features are disclosed in the specification and/or defined in the appended claims, forming a part of the description of the invention.
Further scope of applicability of the present invention will become apparent from the detailed description given hereinafter. However, it should be understood that the detailed description and specific examples, while indicating preferred embodiments of the invention, are given by way of illustration only, since various changes and modifications within the spirit and scope of the invention will become apparent to those skilled in the art from this detailed description.
Further disclosure, improvements, advantages, features and aspects of the present application may be better understood by those skilled in the relevant art by reference to the following description of preferred embodiments taken in conjunction with the accompanying drawings, which are given by way of illustration only, and thus are not limiting to the scope of the present invention, and in which:
An image capture device 14 views at least a portion of the region 12, comprising a portion that contains electromagnetic radiation from the emitter 16. The image capture device 14 may be a camera or one or more devices forming a directionally sensitive electromagnetic receiver, such as photodiodes or CCDs. In the preferred embodiment, the image capture device 14 is a camera. In the present embodiment, the camera 14 uses full frame capture to capture the images, sending analogue video information along communications link 18 to a processor 20. It is not necessary to use full frame capture; however, it is preferable for engineering simplicity in obtaining images, for performance, and for minimising installation restrictions. As would be understood by the person skilled in the art, other image capture devices 14, such as line transfer cameras, may be used, and methods to compensate for the efficiency of full frame capture may be employed. Another communication link 22 connects the emitter 16 to the processor 20. The processor 20 controls the output of emitter 16, and/or receives information about the output of emitter 16, through the communications link 22. Alternatively, the state of the emitter 16 may be sensed by the camera 14 or determined automatically as disclosed below. In the preferred embodiment, the emitter 16 is a laser producing visible, infra-red or other suitable radiation. The laser 16 may incorporate a lens 21 and a spatial filter such as a field-of-view restrictor 23. When a beam of light travels through a homogeneous medium there is no scattering; the beam scatters only when irregularities are present. Therefore, in the presence of particles such as smoke particles, the laser beam will scatter. Furthermore, in accordance with the preferred embodiment, the laser 16 may be modulated, e.g. "laser on", "laser off", in a given sequence.
When no smoke is present, the intensity of pixels in a captured image including the laser beam is the same regardless of the state of the laser. When smoke is present, there is a difference between the intensity of a captured image when the laser 16 is on (due to scattering), compared to the intensity when the laser 16 is turned off.
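The on/off comparison above can be sketched as a simple frame difference. The frames, intensity values, and helper name below are illustrative assumptions, not the patent's implementation:

```python
def scatter_signal(frame_on, frame_off):
    """Per-pixel difference between a laser-on and a laser-off frame.

    With no smoke the two frames match and the difference is zero;
    scattering from particles raises pixel values in the laser-on frame.
    """
    return [on - off for on, off in zip(frame_on, frame_off)]

# Hypothetical 1-D slices of pixel intensity along the beam path.
clear_on, clear_off = [10, 11, 10, 12], [10, 11, 10, 12]
smoke_on, smoke_off = [10, 25, 30, 12], [10, 11, 10, 12]

assert sum(scatter_signal(clear_on, clear_off)) == 0   # no smoke: no change
assert sum(scatter_signal(smoke_on, smoke_off)) > 0    # smoke: extra scattered light
```

In practice the difference image would also locate the smoke along the beam, since only the pixels covering the particle cloud brighten.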
Optional filters are shown in
Other filtering methods comprise modulation of the laser and use of positional information with regard to the systems components as described below.
The image capture device may employ an attenuator for controlling the radiation received. A controllable neutral density filter arrangement may be used. Alternatively, the attenuator may take the form of a variable aperture controlling the intensity. An optional, adjustable iris 24a can be used to control exposure levels. It can be manually set at the time of installation, or the system could automatically set the exposure according to incident light levels. The purpose is to minimise or avoid camera saturation, at least in the parts of the field of view that are used in subsequent processing. The iris 24a could be a mechanical iris, an LCD iris, or any other means of reducing the amount of light entering the camera. Some electronic cameras incorporate an electronic shutter, in which case the shutter time can be used to control exposure instead of an iris 24a. A spatial filter 24b is also shown, which may for example comprise a slit for effectively masking the light incident on the camera 14. For example, a slit may mask the received light so that it conforms generally to the shape of the laser beam as it would be projected in the plane of the camera 14 lens. Items 26, 24a, 24b and 24 can be physically located in a variety of orders or combinations.
In use, electromagnetic radiation, such as a red laser light from emitter 16, passes through the region 12 and impacts on a wall or an absorber 28. The field of view of the camera 14 comprises at least part of the path of the laser, and optionally, the impact point of the laser on the wall, which in this case impacts on an absorber 28. Particles in the air in the region that intersect the laser, in this case represented by particle cloud 30, will cause laser light to scatter. Some of the light scattered from particles will fall on the sensor of the camera 14, and be detected.
In the embodiment shown in
In other embodiments it is possible to use a camera 14 which captures the data and transmits it digitally to the processor 20 without the need for a video capture card 32. Further, the camera 14, filters 24, 26, processor 20 and light source 16 could be integrated into a single unit or units. Also, embedded systems may be employed to provide the functions of at least the processor 20.
A number of camera 14 configurations are able to be used in this application, provided image information in the form of data can be supplied to the processor 20.
In the example shown in
The detector 10 may be set to wait until the measured scattering exceeds a given threshold for a predetermined period of time, before indicating an alarm or pre-alarm condition. The manner for determining an alarm or pre-alarm condition for the detector 10 may be similar to the methods used in aspirated smoke detectors using a laser in a chamber, such as the VESDA™ LaserPLUS™ smoke detector sold by Vision Fire and Security Pty Ltd.
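A threshold-and-hold alarm of the kind just described can be sketched minimally; the readings, threshold, and hold period below are hypothetical:

```python
def alarm(readings, threshold, hold_frames):
    """Raise an alarm only after the measured scattering exceeds the
    threshold for hold_frames consecutive frames, suppressing transients."""
    run = 0
    for value in readings:
        run = run + 1 if value > threshold else 0
        if run >= hold_frames:
            return True
    return False

assert alarm([1, 5, 5, 5], threshold=4, hold_frames=3) is True
assert alarm([5, 1, 5, 1, 5], threshold=4, hold_frames=3) is False  # never sustained
```

Resetting the run counter on every sub-threshold frame is one simple policy; a real detector might instead use a leaky integrator, as chamber-based detectors often do.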
Physical System Variations
It is desirable in some circumstances to use a number of emitters in a system. This may be to comply with regulations, provide back up, or to assist in covering a larger area than could be covered with a single emitter.
If coverage of a large area is required, it is possible to employ a number of emitters so that smoke may be detected in a number of different locations within a region.
The emitters 54 and 55 do not all need to intersect on targets 56 and 57, and may be distributed along a number of targets, or cross over each other onto other targets.
An alternative is shown in
In
In
In the present embodiment, the processor 20 comprises a personal computer with a Pentium 4 chip running the Windows 2000 operating system.
An important aspect of the present embodiments is signal processing, which is discussed in detail below with reference to
Laser State Determination
At step 401 of
A small region of interest is assigned that includes the laser source radiation. The centre of the region is set to an initial position of the laser source spot. The average pixel value in the region is computed and compared with a threshold value to decide whether the image records the laser as on or off.
The threshold value is the average of the outputs of a peak detector and a trough detector that are fed by the average. Each detector executes an exponential decay back to the current average in the case that a new peak or trough has not been made. The time constant is set in terms of frames, preferably with values of about 10.
This technique has proven to be fairly robust. An alternative method is to look for one or more pixels that exceed the average in the rectangle by a fixed threshold.
In an implementation where the laser on/off switching is more closely coupled to frame acquisition, this function may not be required. However, it can still serve as a double check that the laser source is not obscured and is of the correct intensity.
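The peak/trough threshold described above might be sketched as follows. The ~10-frame time constant comes from the text; the latching and decay details beyond that are assumptions:

```python
class PeakTroughThreshold:
    """Adaptive laser on/off threshold: the midpoint of a peak detector and
    a trough detector, each fed by the region's average pixel value and
    decaying exponentially back toward the current average when no new
    extreme is made (sketch; details beyond the text are assumed)."""

    def __init__(self, initial, decay_frames=10):
        self.peak = self.trough = float(initial)
        self.alpha = 1.0 / decay_frames   # per-frame decay rate

    def update(self, avg):
        # New extremes latch immediately; otherwise decay toward avg.
        self.peak = max(avg, self.peak - self.alpha * (self.peak - avg))
        self.trough = min(avg, self.trough + self.alpha * (avg - self.trough))
        return (self.peak + self.trough) / 2.0   # threshold = midpoint

    def laser_on(self, avg):
        return avg > self.update(avg)
```

With the laser modulating between bright and dark region averages, the threshold settles midway between the two levels and classifies each frame correctly.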
Laser Position
At step 401 of
More precisely, the threshold established in the previous step (laser state determination) is subtracted from the image and negatives are clipped to zero. The centre of gravity of the same rectangle used in the state determination then yields (x,y) coordinates of the laser spot. In this calculation, the pixel values are treated as weight.
An alternative technique is to treat the previously described area as an image and calculate an average of a large number (about 50) of known "emitter off" images, then subtract the average from the latest image that is known to have been captured with the emitter on. The previously described centre-of-gravity algorithm is then applied to the image data to estimate the position of the spot.
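The thresholded centre-of-gravity computation common to both variants can be sketched as follows; the patch values are hypothetical:

```python
def spot_centroid(pixels, threshold):
    """Intensity-weighted centre of gravity of an image patch.

    The threshold is subtracted from each pixel and negatives are clipped
    to zero, then the remaining values are treated as weights, as in the
    text. Returns (x, y) in pixel coordinates, or None if nothing remains.
    """
    total = sx = sy = 0.0
    for y, row in enumerate(pixels):
        for x, value in enumerate(row):
            w = max(value - threshold, 0)
            total += w
            sx += w * x
            sy += w * y
    if total == 0:
        return None   # no spot above threshold
    return (sx / total, sy / total)

# Hypothetical 4x3 patch with a bright 2x2 spot centred at (1.5, 1.5).
patch = [
    [0, 0, 0, 0],
    [0, 9, 9, 0],
    [0, 9, 9, 0],
]
assert spot_centroid(patch, 5) == (1.5, 1.5)
```

The weighting gives sub-pixel resolution: a spot straddling several pixels lands between them rather than snapping to the brightest one.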
Compute Regions of Interest & Background Cancellation
At step 403 of
The integration region 102 contains the emitted radiation path, while the areas to each side, background regions 101 and 103, are used during background cancellation. The regions are generally triangular, that is, wider further away from the laser source. This is necessary because, while the exact location of the radiation spot is known, the exact angle of the path is not, so a greater tolerance is needed at the other end of the path when the camera cannot see where the radiation terminates. There is more noise in a fatter section of the integration region because it contains more pixels; fortunately, each pixel there represents a shorter length of the path, so the larger number of samples per unit length allows more averaging. If the camera can see the radiation termination point, there is less uncertainty in its position and the regions of interest need not diverge as much as shown in
Two background regions 101, 103 are chosen for interpolation of the brightness compensation factor, correcting for temporal variations in background lighting on either side of the radiation path in the laser-off images, for example, changes due to two different, independently temporally varying light sources on either side of the path. This principle could be further extended to allow for variations along the path, not just to either side of it, by subdividing the three areas 101, 102, 103 into segments along the length of the radiation path and performing the calculations for each subdivision.
The background cancelling algorithm sums n "on" frames and m "off" frames; the sequence of these frames is arbitrary. Prior to the subtraction of the "emitter off" frames from the "emitter on" frames, the "emitter off" frames are scaled by a factor, f, to compensate for variations in illumination levels between the images. This may be useful with artificial lighting, the intensity of which varies rapidly. The resultant image contains any differences between the n "emitter on" and m "emitter off" images. This is shown graphically in
The scaling factor f is determined by interpolation, using the ratios of background variation between the laser on and laser off frames.
where:
μ is the average value of pixel intensity in a given background region in either a laser on or laser off frame as designated by the subscripts.
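The interpolation formula for f is not reproduced above. One plausible reading, offered purely as an assumption, is that f averages the on/off ratios of mean background brightness in the two regions 101 and 103:

```python
def scale_factor(mu_on_101, mu_off_101, mu_on_103, mu_off_103):
    """Hypothetical form of f: the average of the per-region ratios of mean
    background brightness between laser-on and laser-off frames. The exact
    interpolation in the source is not reproduced here."""
    return 0.5 * (mu_on_101 / mu_off_101 + mu_on_103 / mu_off_103)

def cancel_background(frame_on, frame_off, f):
    """Subtract the scaled laser-off frame from the laser-on frame; what
    remains is predominantly scattered light plus residual noise."""
    return [on - f * off for on, off in zip(frame_on, frame_off)]

# Both background regions 10% brighter in the on frames -> f = 1.1.
f = scale_factor(11.0, 10.0, 22.0, 20.0)
assert abs(f - 1.1) < 1e-9
```

Scaling before subtraction means a room light that flickers between the on and off exposures does not masquerade as scattering.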
If the processor is not fast enough to keep up with the full frame rate, a scheme is needed to allow a random selection of frames to be processed. Since n laser-on and m laser-off frames are used for the background cancellation, any excess laser-on or laser-off frames can be discarded while waiting to accumulate this number of frames.
Alternatively, a lock-step synchronisation technique could be used, so that the computer is fed information about the state of the laser with respect to the captured image. In any case, a minimum of one on frame and one off frame is required for the technique to work.
An alternative to the cancellation scheme described above is simply to subtract the laser-off frames from the laser-on frames. Many on frames and off frames can be summed, averaged or low-pass filtered, with the summing, averaging or filtering performed before and/or after the subtraction.
The result of the background cancellation is an image that is predominantly composed of scattered light from the emitter, and some residual background light and noise.
Frame Integration
At step 405 of
With reference to
Scatter Vs Radius Computation
At step 406 of
Compute Geometry
At step 408 of
A corrected intensity of pixels corresponding to a given radius, r, is then determined for a real world system, in which the intensity of pixels is multiplied by a predetermined scattering gain value, discussed below under Scattering Angle Correction, corresponding to the given radius and a given scattering angle relative to a lossless isotropic scattering calculation. A resultant scattered data array is formed.
Scattering Angle Correction
A correction for scatter angle is logically determined in accordance with step 409 of
At each scattering angle as determined during the above geometry computation, the gain for every scattering angle is derived. The data from the input scattering data file is linearly interpolated so that for every scattering angle an approximation of the forward gain can be calculated.
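The linear interpolation of gain from sparse scattering data might look like the following sketch; the table values are invented for illustration and do not come from the patent:

```python
def gain_at(angle, table):
    """Linearly interpolate the forward-scatter gain for an arbitrary
    scattering angle from a sparse angle -> gain table (degrees).
    Angles outside the table are clamped to the nearest endpoint."""
    angles = sorted(table)
    if angle <= angles[0]:
        return table[angles[0]]
    if angle >= angles[-1]:
        return table[angles[-1]]
    for lo, hi in zip(angles, angles[1:]):
        if lo <= angle <= hi:
            t = (angle - lo) / (hi - lo)
            return table[lo] + t * (table[hi] - table[lo])

# Hypothetical empirical data: strongly forward-biased scattering.
gains = {0: 10.0, 45: 4.0, 90: 1.0, 180: 0.5}
assert gain_at(45, gains) == 4.0
assert gain_at(67.5, gains) == 2.5   # halfway between the 45 and 90 entries
```

A denser table, or Mie-theory-derived values for an assumed particle size, could be substituted without changing the interpolation.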
Compute Smoke Vs Radius
A determination of smoke for a given radius of the beam is performed at step 407 of
For each element in the previously described Scatter vs Radius array, the values of L, θ and r, as shown in
Integrate Along Beam to Obtain Obscuration
At step 410 of
Likewise at the camera end, due to the geometry of the set up, the field of view of the camera allows the beam to be viewed to within a few meters of the camera.
In order to provide a smooth transition between sector boundaries, a simple moving average filter is implemented. In fact, the beam is divided into n+1 segments, and then a moving average is applied (of length two segments) resulting in n sectors.
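The length-two moving average that turns n+1 segments into n sectors can be sketched directly; the segment values are hypothetical:

```python
def sectors_from_segments(segments):
    """Apply a moving average of length two over n+1 segment values,
    yielding n sector values whose boundaries blend smoothly."""
    return [(a + b) / 2.0 for a, b in zip(segments, segments[1:])]

segs = [1.0, 3.0, 5.0, 7.0]                              # n+1 = 4 segments
assert sectors_from_segments(segs) == [2.0, 4.0, 6.0]    # n = 3 sectors
```

Each sector shares a segment with its neighbour, so a smoke plume crossing a boundary contributes to both sectors rather than producing a step discontinuity.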
Each pixel along the beam in the captured image corresponds to a physical length along the beam; see
For example, to determine the obscuration, O, over the whole beam, given a sector spanning pixel radii r from n to m,
where S is scattered light and L is given above.
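Since the formula itself is not reproduced above, here is a hedged sketch assuming the obscuration accumulates the scattered light S[r] weighted by the physical beam length L[r] that each pixel subtends:

```python
def obscuration(S, L, n, m):
    """Sum scattered light S[r] scaled by the beam length L[r] per pixel,
    over pixel radii n..m inclusive (a sketch; the exact weighting in the
    source formula is not reproduced here)."""
    return sum(S[r] * L[r] for r in range(n, m + 1))

S = [0.0, 0.2, 0.4, 0.1]   # hypothetical scattered-light readings per pixel
L = [1.0, 1.0, 0.5, 0.5]   # metres of beam per pixel (shrinks nearer camera)
assert abs(obscuration(S, L, 1, 3) - 0.45) < 1e-9
```

Weighting by L[r] matters because, as the text notes, pixels nearer the camera cover progressively shorter stretches of beam.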
As noted above, the beam length is divided into a number of segments to determine individual smoke levels for each segment, effectively simulating a number of point detectors. The output of these notional point detectors can be provided to an addressable fire panel. This is based on the theory that light scattered from each segment of the emitted radiation will provide a different light output for a given particle density, depending on the angle from the radiation path to the camera and the number of pixels per segment. As the path of the emitted radiation comes closer to the camera, that is, as r increases in
Array 94 contains the actual radius of the light captured by the pixels. Array 96 comprises the length of the segment of the emitted radiation encompassed by, in this case, one horizontal pixel in the captured image in the frame of the camera. This information is used to ascertain the volume of the emitted radiation and is used to assist in the calculation of the radiation intensity. Also, array 96 contains data on the smoke intensity at each point r, defined as smoke [r].
Alarm State
Finally with reference to
The same method is used for the zone alarm level, except that final zone output is the highest sector or the zone level, whichever is higher.
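The sector/zone decision described above reduces to a maximum; a one-line sketch with hypothetical smoke levels:

```python
def zone_output(sector_levels, zone_level):
    """Final zone output: the highest individual sector level or the
    whole-zone level, whichever is greater."""
    return max(max(sector_levels), zone_level)

assert zone_output([0.1, 0.5, 0.2], 0.3) == 0.5   # a single sector dominates
assert zone_output([0.1, 0.2], 0.4) == 0.4        # diffuse smoke: zone level wins
```

Taking the maximum means a dense local plume and thin smoke spread along the whole beam both raise the zone output, whichever signature appears first.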
Fault Detection
The system may have provision for the detection of a fault condition, which is essentially the absence of the laser spot in the image. The laser on/off signal duty cycle may be checked to be within 33% to 66% over the number of frames used in one background cancellation cycle.
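The duty-cycle check follows directly from the stated 33%-66% band; the frame flags below are hypothetical:

```python
def duty_cycle_fault(on_flags, low=0.33, high=0.66):
    """Signal a fault if the fraction of laser-on frames over one
    background cancellation cycle falls outside the 33%-66% band,
    e.g. because the spot is obscured or the laser has failed."""
    duty = sum(on_flags) / len(on_flags)
    return not (low <= duty <= high)

assert duty_cycle_fault([1, 0, 1, 0]) is False   # 50% duty: healthy modulation
assert duty_cycle_fault([0, 0, 0, 0]) is True    # spot never seen: fault
```

A stuck-on laser (100% duty) is flagged just as readily as an absent one, since both indicate the modulation is no longer being observed.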
Alternative Embodiments
A number of alternative embodiments are available, depending on application and desired features. Unless otherwise specified, the general principles of operation as described above apply to the implementation of the following variations. For example, fault detection may be carried out in a number of ways.
In another application, the system described above could be used in applications where measurement of obscuration was important, such as airports where fog may cause planes to divert if visibility falls below a certain level. The system does not require ambient light to operate, and can therefore operate at night without additional lighting. An infrared camera could also be used with an infrared light source, where the light source, if of similar frequency to the detecting light, could be cycled so that the processor ignores frames illuminated for security purposes.
A typical security camera may take 25 images or frames per second. Smoke detection may only require 1 frame per second or less, so the remaining frames can be used for security purposes.
To give increased sensitivity, video processing software operating within the detection sub-system (6,7) may be used to eliminate the contribution of nuisance changes in video signals which are not in the location known to be occupied by the light beam. Software based systems which perform a similar function of processing distinct areas of a video image are known, for example in video-based security systems such as Vision System's ADPRO™ products.
The emitter may be a laser, emitting polarised radiation. The laser may emit visible, infrared or ultraviolet radiation. Selection of the wavelength of the radiation may depend on the characteristics of the particles to be detected, as well as the characteristics of the apparatus and method to be employed in the detection of the particles. Other types of radiation emitter may comprise a xenon flash tube, other gas discharge tubes, a laser diode or a light emitting diode. The light is preferably collimated to at least some degree, but if the optional area segregation using regions of interest is employed, a broader radiation beam may be emitted.
A further embodiment is shown in
While this invention has been described in connection with specific embodiments thereof, it will be understood that it is capable of further modification. This application is intended to cover any variations, uses or adaptations of the invention following, in general, the principles of the invention and comprising such departures from the present disclosure as come within known or customary practice in the art to which the invention pertains and as may be applied to the essential features hereinbefore set forth.
As the present invention may be embodied in several forms without departing from the spirit of the essential characteristics of the invention, it should be understood that the above described embodiments are not to limit the present invention unless otherwise specified, but rather should be construed broadly within the spirit and scope of the invention as defined in the appended claims. Various modifications and equivalent arrangements are intended to be included within the spirit and scope of the invention and appended claims. Therefore, the specific embodiments are to be understood to be illustrative of the many ways in which the principles of the present invention may be practiced. In the following claims, means-plus-function clauses are intended to cover structures as performing the defined function and not only structural equivalents, but also equivalent structures. For example, although a nail and a screw may not be structural equivalents in that a nail employs a cylindrical surface to secure wooden parts together, whereas a screw employs a helical surface to secure wooden parts together, in the environment of fastening wooden parts, a nail and a screw are equivalent structures.
“Comprises/comprising” when used in this specification is taken to specify the presence of stated features, integers, steps or components but does not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof.
Number | Date | Country | Kind |
---|---|---|---|
2003902319 | May 2003 | AU | national |
This is a Continuation application of U.S. application Ser. No. 14/636,840 filed Mar. 3, 2015, which is a Continuation application of U.S. application Ser. No. 14/089,315, filed Nov. 25, 2013 now a U.S. Pat. No. 9,002,065, which is a Continuation application of U.S. application Ser. No. 13/775,577, filed Feb. 25, 2013, now U.S. Pat. No. 8,620,031 issued Dec. 31, 2013, which is a Continuation application of U.S. application Ser. No. 13/164,123, filed Jun. 20, 2011, now U.S. Pat. No. 8,406,471 issued Mar. 26, 2013, which is a Continuation application of U.S. application Ser. No. 10/556,807, filed Nov. 9, 2006, now U.S. Pat. No. 7,983,445 issued Jul. 19, 2011, which is a U.S. National Stage application of PCT/AU2004/000637 filed May 14, 2004, which claims priority to Australian Provisional Patent Application No. 2003902319, filed May 14, 2003 and entitled “Laser Video Detector”. The above-noted applications are incorporated herein by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
3429243 | Boyle | Feb 1969 | A |
3688298 | Miller et al. | Aug 1972 | A |
3727056 | Enemark | Apr 1973 | A |
3737858 | Turner | Jun 1973 | A |
3788742 | Garbunny | Jan 1974 | A |
3901602 | Gravatt | Aug 1975 | A |
3915575 | Sick | Oct 1975 | A |
3924252 | Duston | Dec 1975 | A |
4594581 | Matoba | Jun 1986 | A |
5189631 | Suzuki | Feb 1993 | A |
5225810 | Inoue et al. | Jul 1993 | A |
5266798 | Borden et al. | Nov 1993 | A |
5381130 | Thuillard et al. | Jan 1995 | A |
5502434 | Minowa et al. | Mar 1996 | A |
5530433 | Morita | Jun 1996 | A |
5576697 | Nagashima et al. | Nov 1996 | A |
5646390 | Wang et al. | Jul 1997 | A |
5696379 | Stock | Dec 1997 | A |
5751785 | Moorman et al. | May 1998 | A |
5912619 | Vogt | Jun 1999 | A |
5923260 | Endo et al. | Jul 1999 | A |
6091345 | Howard et al. | Jul 2000 | A |
6119055 | Richman | Sep 2000 | A |
6204768 | Kosugi et al. | Mar 2001 | B1 |
6292683 | Gupta et al. | Sep 2001 | B1 |
6509832 | Bauer et al. | Jan 2003 | B1 |
6658203 | Oster | Dec 2003 | B1 |
6813303 | Matsuda et al. | Nov 2004 | B2 |
7983445 | Knox et al. | Jul 2011 | B2 |
8154724 | Mitchell et al. | Apr 2012 | B2 |
8406471 | Knox et al. | Mar 2013 | B2 |
8427642 | Mitchell et al. | Apr 2013 | B2 |
8508376 | Knox et al. | Aug 2013 | B2 |
8620031 | Knox et al. | Dec 2013 | B2 |
20020070854 | Bartholomew et al. | Jun 2002 | A1 |
20020080040 | Schneider et al. | Jun 2002 | A1 |
20020118352 | Ohzu et al. | Aug 2002 | A1 |
20020135490 | Opitz et al. | Sep 2002 | A1 |
20020153499 | Oppelt et al. | Oct 2002 | A1 |
20030189487 | Mathews et al. | Oct 2003 | A1 |
20040017505 | Yanauchi | Jan 2004 | A1 |
20040051791 | Hashimoto | Mar 2004 | A1 |
20040056765 | Anderson et al. | Mar 2004 | A1 |
20040080618 | Norris et al. | Apr 2004 | A1 |
20040085448 | Goto et al. | May 2004 | A1 |
20050207655 | Chopra et al. | Sep 2005 | A1 |
20050259255 | Williams et al. | Nov 2005 | A1 |
20060170787 | Bentkovski | Aug 2006 | A1 |
20060202847 | Oppelt et al. | Sep 2006 | A1 |
20070024459 | Cole | Feb 2007 | A1 |
20070064980 | Knox et al. | Mar 2007 | A1 |
20080061250 | Perel et al. | Mar 2008 | A1 |
20080297360 | Knox et al. | Dec 2008 | A1 |
20110058167 | Knox et al. | Mar 2011 | A1 |
20110221889 | Knox et al. | Sep 2011 | A1 |
20110243389 | Knox et al. | Oct 2011 | A1 |
20120038768 | Fujimori | Feb 2012 | A1 |
20120140231 | Knox et al. | Jun 2012 | A1 |
20130121546 | Guissin | May 2013 | A1 |
20130170705 | Knox et al. | Jul 2013 | A1 |
20140022547 | Knox et al. | Jan 2014 | A1 |
20140028989 | Butscher et al. | Jan 2014 | A1 |
Foreign Patent Documents
Number | Date | Country |
---|---|---|
1300816 | Apr 2003 | EP |
S5387283 | Aug 1978 | JP |
S5622932 | Mar 1981 | JP |
362153780 | Jul 1987 | JP |
H03245296 | Oct 1991 | JP |
5020563 | Jan 1993 | JP |
H06109631 | Apr 1994 | JP |
H0712724 | Jan 1995 | JP |
10154284 | Jun 1998 | JP |
H10232196 | Sep 1998 | JP |
H11503236 | Mar 1999 | JP |
11339150 | Dec 1999 | JP |
2000019112 | Jan 2000 | JP |
2000180349 | Jun 2000 | JP |
2002250769 | Sep 2002 | JP |
2004257876 | Sep 2004 | JP |
2004102498 | Nov 2004 | WO |
2006050570 | May 2006 | WO |
Other Publications
Entry |
---|
Communication dated Mar. 20, 2015, issued by the European Patent Office in European Application No. 08 849 716.9. |
Translation of communication dated May 26, 2015, issued by the Japan Patent Office in Japanese Application No. 2014-148142. |
European Search Report dated Jan. 24, 2013, issued in counterpart European Patent Application No. 12182832.1. |
Office Action issued by the US Patent and Trademark Office in counterpart U.S. Appl. No. 11/719,226 dated Jan. 27, 2012. |
Office Action issued by the US Patent and Trademark Office in counterpart U.S. Appl. No. 11/719,226 dated May 29, 2012. |
Notice of Allowance issued by the US Patent and Trademark Office in counterpart U.S. Appl. No. 11/719,226 dated Apr. 8, 2013. |
Office Action issued by the US Patent and Trademark Office in counterpart U.S. Appl. No. 12/743,171 dated Feb. 15, 2012. |
Office Action issued by the US Patent and Trademark Office in counterpart U.S. Appl. No. 12/743,171 dated Jan. 14, 2014. |
European Search Report issued by the European Patent Office in European Patent Application No. 12183197.8 dated May 10, 2013. |
European Search Report issued by the European Patent Office in European Patent Application No. 12183106.9 dated May 13, 2013. |
European Search Report issued by the European Patent Office in European Patent Application No. 12183148.1 dated Jun. 5, 2013. |
European Search Report issued by the European Patent Office in European Patent Application No. 12183185.3 dated Jun. 20, 2013. |
European Search Report issued by the European Patent Office in European Patent Application No. 12183207.5 dated Jul. 2, 2013. |
European Search Report issued by the European Patent Office in European Patent Application No. 08849716.9 dated Nov. 1, 2011. |
Office Action dated Jan. 7, 2010 issued in U.S. Appl. No. 10/556,807. |
Office Action dated Mar. 17, 2009 issued in U.S. Appl. No. 10/556,807. |
Office Action dated Aug. 8, 2010 issued in U.S. Appl. No. 10/556,807. |
Allowance dated Mar. 15, 2011 issued in U.S. Appl. No. 10/556,807. |
Office Action dated Oct. 19, 2011 issued in U.S. Appl. No. 13/164,123. |
Office Action dated May 14, 2012 issued in U.S. Appl. No. 13/164,123. |
Allowance dated Nov. 23, 2012 issued in U.S. Appl. No. 13/164,123. |
Office Action dated May 8, 2013 issued in U.S. Appl. No. 13/775,577. |
Allowance dated Aug. 23, 2013 issued in U.S. Appl. No. 13/775,577. |
Communication dated Jun. 3, 2014 from The Japanese Patent Office in counterpart Japanese Patent Application No. 2013-055559. |
Communication dated Jun. 3, 2014 from The Japanese Patent Office in counterpart Japanese Patent Application No. 2010-196936. |
Communication dated Jun. 10, 2014 from The Japanese Patent Office in counterpart Japanese Patent Application No. 2013-096833. |
Related Publications
Number | Date | Country |
---|---|---|
20160153906 A1 | Jun 2016 | US |
Related U.S. Application Data
Relation | Number | Date | Country |
---|---|---|---|
Parent | 14636840 | Mar 2015 | US |
Child | 15015787 | US | |
Parent | 14089315 | Nov 2013 | US |
Child | 14636840 | US | |
Parent | 13775577 | Feb 2013 | US |
Child | 14089315 | US | |
Parent | 13164123 | Jun 2011 | US |
Child | 13775577 | US | |
Parent | 10556807 | US | |
Child | 13164123 | US |