System and method for high-resolution storage of images

Information

  • Patent Grant
  • Patent Number
    10,334,249
  • Date Filed
    Thursday, November 2, 2017
  • Date Issued
    Tuesday, June 25, 2019
Abstract
An image-creation method includes capturing an image as digital data, locating an area of interest of the captured image, extracting, from the digital data, at least some data corresponding to the located area of interest, digitally magnifying the extracted at least some data to yield digitally magnified data, and combining the digitally magnified data with at least some of the digital data of the captured image to yield combined data.
Description
TECHNICAL FIELD

This application relates generally to video surveillance and more particularly to systems and methods for high-resolution storage of images.


BACKGROUND

Many police cars now include a video camera to capture activities transpiring both outside and inside the vehicle. One use of the video captured by these cameras is as evidence in a criminal trial. In order for the videos to be used as evidence, the images must be clearly identifiable by, for example, a jury or an expert witness. Police cars and their video-recording equipment often remain in use for extended periods of time, for example, when an officer stays out on patrol overnight. It is therefore often necessary to compress the video being recorded in order to store such large volumes of data.


In order to store the large amount of data captured by the video camera over long periods of time, compression algorithms are normally used to compress the data. The compression algorithms currently in use for video fall into two broad categories: lossless and lossy. In a lossy algorithm, some visual quality is lost in the compression process and cannot be restored. The various compression algorithms utilize a combination of techniques for compressing the data, such as downsampling or subsampling, block splitting, pixelating, and lowering resolution. A few examples of compression algorithms include the MPEG family of algorithms, such as MPEG-2 and MPEG-4.


SUMMARY OF THE INVENTION

An image-creation method includes capturing an image as digital data, locating an area of interest of the captured image, extracting, from the digital data, at least some data corresponding to the located area of interest, digitally magnifying the extracted at least some data to yield digitally magnified data, and combining the digitally magnified data with at least some of the digital data of the captured image to yield combined data.


An article of manufacture for image creation includes at least one computer readable medium, and processor instructions contained on the at least one computer readable medium. The processor instructions are configured to be readable from the at least one computer readable medium by at least one processor and thereby cause the at least one processor to operate to capture an image as digital data, locate an area of interest of the captured image, extract, from the digital data, at least some data corresponding to the located area of interest, digitally magnify the extracted at least some data to yield digitally magnified data, and combine the digitally magnified data with at least some of the digital data of the captured image to yield combined data.





BRIEF DESCRIPTION OF THE DRAWINGS

A more complete understanding of various embodiments of the present invention may be obtained by reference to the following Detailed Description when taken in conjunction with the accompanying Drawings wherein:



FIG. 1 is a flow chart of a process for compressing and storing data;



FIG. 2 is a diagram of a system for capturing and storing video data;



FIG. 3 is an illustrative view of a video image with an insert in a corner of the image;



FIG. 4 is an illustrative view of a video image with a magnified area inserted into the image; and



FIG. 5 is an illustrative view of a video image with a fixed zoom box.





DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS OF THE INVENTION

Various embodiments of the present invention contemplate identifying areas of importance in a video image that would be desirable to save in a high-resolution format. Some embodiments contemplate magnifying areas of interest before compressing video of the areas of interest so that deleterious effects of compression losses of some video data may be lessened. In that way, large volumes of data can be stored while minimizing the loss of clarity of the areas of interest. The above summary of the invention is not intended to represent each embodiment or every aspect of the present invention.


In some cameras, a data stream coming from the camera may contain both chroma (i.e., color information) and luma (i.e., brightness or black-and-white information). Most of the resolution may be contained in the black-and-white content of the video, even though the images may be color images. In some video streams, the resolution of the black-and-white pixels may be, for example, four times that of the color. Of the luma and chroma information, the luma may therefore carry most of the information determining how much detail is visible in a given image.
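To make that sampling relationship concrete: in the common 4:2:0 scheme, for example, each 2×2 block of luma samples shares a single pair of chroma samples, so the luma plane holds four times as many samples as either chroma plane. A minimal sketch in Python with NumPy (the frame dimensions are illustrative assumptions, not values from this disclosure):

```python
import numpy as np

# Illustrative 480-line frame stored as YUV 4:2:0 planes.
height, width = 480, 704

luma = np.zeros((height, width), dtype=np.uint8)                # full-resolution Y plane
chroma_u = np.zeros((height // 2, width // 2), dtype=np.uint8)  # quarter-size U plane
chroma_v = np.zeros((height // 2, width // 2), dtype=np.uint8)  # quarter-size V plane

# The Y plane carries 4x the samples of each chroma plane, which is why
# fine detail such as license-plate characters lives mostly in the luma.
print(luma.size / chroma_u.size)  # -> 4.0
```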


Clarity is often critical to police work, for example, to identify a particular vehicle or suspect to a jury. Therefore, it is important that recorded images remain clear even after video of the images has been compressed. For example, when video of a traffic stop is recorded, it is important that the clarity of the license-plate portion of the video be preserved so that the letters and numbers of the license plate remain identifiable. However, almost all compression algorithms lower the clarity and sharpness of the images being compressed, making things like facial features, letters, and numbers harder to identify. For example, some cameras may capture data at the equivalent of approximately 480 horizontal lines while the data may be stored at only, for example, 86 horizontal lines. The resolution of the recorded video may therefore be, for example, one-fourth the resolution of the video the camera actually captured. By accessing the video data while it is still at its captured resolution, the effects of compression algorithms and lower resolution settings on the readability of license plates or other images can be minimized.



FIG. 1 is a flow chart showing a process 100 for automatic image magnification. The process 100 begins at step 102. At step 102, an image capture device, such as a camera, captures video images in a field of view of the camera. In some embodiments, the camera may be mounted in a police car and adapted to capture video while the police car is moving and also when the police car is stopped, for example, during a traffic stop.


From step 102, execution proceeds to step 104. At step 104, the captured video images are sent to a buffer. At step 106, areas of interest in the captured and buffered data are located. In some embodiments, raw data of the captured images may be read and an automatic license-plate locator algorithm run to identify, for example, whether one or more license plates are in the field of view. In some embodiments, only the luma information, and not the chroma information, may be examined, since the luma information may have more resolution than the chroma information. In some embodiments, a CPU may, for example, run a facial-feature location algorithm to identify whether there are people in the field of view. In various embodiments, the license-plate location algorithm runs on raw data coming from the camera. It is also contemplated that the license-plate location algorithm may run on compressed data, with a feedback signal sent to indicate the location of the pixels to be extracted from the uncompressed data.
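The disclosure does not tie the locating step to any particular algorithm. As a stand-in, the sketch below scans only the luma plane for tiles with high horizontal-edge density, a crude signature of plate-like text; the function name, tile size, and threshold are all illustrative assumptions:

```python
import numpy as np

def locate_plate_candidates(luma, tile=32, thresh=18.0):
    """Return (x, y, w, h) boxes for luma tiles whose horizontal-edge
    density suggests plate-like text. Heuristic stand-in only."""
    edges = np.abs(np.diff(luma.astype(np.int16), axis=1))  # horizontal gradients
    boxes = []
    for y in range(0, luma.shape[0] - tile, tile):
        for x in range(0, luma.shape[1] - 1 - tile, tile):
            if edges[y:y + tile, x:x + tile].mean() > thresh:
                boxes.append((x, y, tile, tile))
    return boxes
```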


Once the one or more areas of interest (e.g., license plates or faces) have been located, at step 108 the location information relative to each of the areas of interest is sent to an extractor. For example, the location information sent may be one or more sets of coordinates, such as, for example, coordinates corresponding to the four corners of a license plate. Information related to, for example, the size of the one or more license plates may also be sent. At step 109, the location information is used to extract data corresponding to the areas of interest (e.g., the areas of the license plates). At step 110, the extracted data may be altered so that the pixels represented thereby are magnified relative to the original image to yield a magnified image. At step 112, the data representing the magnified image is combined with at least some of the data captured by the video camera at step 102. The combined data, including the raw data and the magnified-image data, is compressed at step 114 to yield combined compressed data. At step 116, the combined compressed data is stored on a recordable medium such as, for example, a digital video disc (DVD).
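A compact sketch of steps 109 through 112: extract a located box from the raw frame, magnify it by nearest-neighbor upscaling, and paste the result into a corner of the frame ahead of compression. The 2x factor and bottom-left placement are illustrative assumptions:

```python
import numpy as np

def extract_magnify_combine(frame, box, factor=2):
    """Steps 109-112: extract the area of interest, magnify it, and
    combine it with the original frame as a picture-in-picture insert."""
    x, y, w, h = box
    region = frame[y:y + h, x:x + w]                                  # step 109: extract
    magnified = region.repeat(factor, axis=0).repeat(factor, axis=1)  # step 110: magnify
    out = frame.copy()                                                # step 112: combine
    mh, mw = magnified.shape[:2]
    out[-mh:, :mw] = magnified                                        # paste in bottom-left corner
    return out
```

The combined frame would then be handed to the codec (step 114) and written to the recordable medium (step 116).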


Referring now to FIG. 2, a system 200 for magnifying areas of interest is shown. A camera 202 captures raw video data. The raw video data is sent by the camera 202 to a buffer 204. The camera 202 may be pointing through, for example, the windshield of a police car. Data corresponding to a zoom area (e.g., a zoom box) subset of the raw video data from the camera is extracted from the buffer 204 by an extractor 206. The extracted data is digitally magnified by a magnifier 208 to yield a magnified zoom area. The digitally magnified zoom area is inserted into the raw data by a combiner 210 to yield a combined data output. The combined data is output by the combiner 210, compressed by a compressor 212, and stored on a storage medium 214. Each of the buffer 204, the extractor 206, the magnifier 208, the combiner 210, the compressor 212, and the storage medium 214 may be hardware- or software-based.
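In software, the FIG. 2 dataflow can be mirrored as a chain of single-purpose stages, any of which could later be replaced by a hardware implementation. The class and parameter names below are illustrative assumptions, not terms from the disclosure:

```python
from collections import deque

class FrameBuffer:
    """Software stand-in for buffer 204: holds recent raw frames."""
    def __init__(self, maxlen=64):
        self.frames = deque(maxlen=maxlen)

    def push(self, frame):
        self.frames.append(frame)

    def latest(self):
        return self.frames[-1]

def run_pipeline(buffer, extract, magnify, combine, compress, store):
    """Wire the stages of FIG. 2: 204 -> 206 -> 208 -> 210 -> 212 -> 214."""
    raw = buffer.latest()
    zoom = extract(raw)                  # extractor 206
    magnified = magnify(zoom)            # magnifier 208
    combined = combine(raw, magnified)   # combiner 210
    store(compress(combined))            # compressor 212, storage medium 214
```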


Those having skill in the art will recognize that the embodiment shown in FIG. 2 does not necessarily require an automatic license-plate locator algorithm because, for example, the zoom area may be a fixed area within the field of view of the camera. To ensure the zoom area encompasses a license plate or other area of interest, in some embodiments an officer can point the camera in a particular direction so that the fixed zoom area captures the area of interest. For example, the officer may pull up behind a car and steer the police car so that the zoom area captures the license plate of the car. Additionally, the officer may, for example, rotate the camera so that the zoom area encompasses the area of interest. In some embodiments, the zoom area may be moved within the field of view by a user interface, such as a touch screen or directional buttons, to change the area being magnified without needing to aim the camera's lens.
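Repositioning the zoom area under user control reduces to shifting the box origin and clamping it to the frame bounds; a minimal sketch, with the step size as an illustrative assumption:

```python
def move_zoom_box(box, dx, dy, frame_w, frame_h, step=16):
    """Shift an (x, y, w, h) zoom box by directional input (dx, dy in {-1, 0, 1}),
    clamped so the box never leaves the field of view."""
    x, y, w, h = box
    x = min(max(x + dx * step, 0), frame_w - w)
    y = min(max(y + dy * step, 0), frame_h - h)
    return (x, y, w, h)

# Example: nudge a 128x64 box one step to the right inside a 704x480 frame.
# box = move_zoom_box((320, 240, 128, 64), dx=1, dy=0, frame_w=704, frame_h=480)
```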


Referring now to FIG. 3, an image 300 is shown. The image 300 has a magnified insert 302. An area of interest has been highlighted by a zoom box 304 in the image 300. In some embodiments, an automatic license-plate locator algorithm may have been run to locate the license plate in the image 300. In some embodiments, a user may have moved the zoom box 304 to position it over an area of interest. In the embodiment shown, the magnified insert 302 has been placed on top of a different area of the image 300, for example, in a bottom corner as a picture-in-picture (PIP) view.


In some embodiments, an indication of where the zoom box 304 was originally located before extraction is included in the stored data. For example, a thin colored line may encompass the license plate of the car whose plate has been magnified. In some embodiments, multiple license plates in one field of view may be identified and magnified in rotation, zooming each of the various license plates in sequence. As each license plate is in turn magnified, an indicator, for example, a thin red line, may encompass or highlight the license plate being magnified. In some embodiments, a plurality of different indicators, for example, thin lines of different colors, may be used so that a plurality of license plates can be magnified at the same time, with each differently colored indicator showing the location of the corresponding magnified license plate relative to the original image.
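Painting such indicators amounts to writing a thin border in a distinct color around each source box; a sketch for RGB frames, with the palette and thickness as illustrative assumptions:

```python
import numpy as np

PALETTE = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]  # one color per magnified plate

def draw_indicators(frame, boxes, thickness=2):
    """Outline each (x, y, w, h) source box in its own color so a viewer can
    match every magnified insert back to its original location."""
    out = frame.copy()
    for i, (x, y, w, h) in enumerate(boxes):
        color = PALETTE[i % len(PALETTE)]
        out[y:y + thickness, x:x + w] = color          # top edge
        out[y + h - thickness:y + h, x:x + w] = color  # bottom edge
        out[y:y + h, x:x + thickness] = color          # left edge
        out[y:y + h, x + w - thickness:x + w] = color  # right edge
    return out
```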


Referring now to FIG. 4, another way of inserting an image into a field of view is shown. In an image 400, a zoom box 402 containing a magnified image has been inserted back into the image 400 in approximately the same position from which the original image was removed. In the embodiment shown, the magnified image overlaps a larger area than the one originally removed. In some embodiments, the zoom box 402 may be limited to the original size of the part of the image 400 that was removed. For example, the magnified image of the zoom box 402 may be placed back over the part of the image 400 in which the original license plate was located. In various embodiments, the face of a driver or other information may be within the area of interest that is magnified, for example, by increasing the size of the image and inserting it back into the field of view on top of the original image.
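Re-inserting the enlarged image over its source location means anchoring it at the same center and clipping whatever spills past the frame edges; a sketch under the same NumPy frame assumptions used above:

```python
import numpy as np

def insert_over_source(frame, box, magnified):
    """Paste a magnified image back over its source (x, y, w, h) box,
    centered on that box and clipped to the frame bounds (FIG. 4 style)."""
    x, y, w, h = box
    mh, mw = magnified.shape[:2]
    top = max(y + h // 2 - mh // 2, 0)
    left = max(x + w // 2 - mw // 2, 0)
    bottom = min(top + mh, frame.shape[0])
    right = min(left + mw, frame.shape[1])
    out = frame.copy()
    out[top:bottom, left:right] = magnified[:bottom - top, :right - left]
    return out
```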


Referring now to FIG. 5, a fixed region, for example, a strip across the middle of a video image captured by a camera, may be magnified and inserted along the bottom of the video to be stored. Oftentimes, the hood of the police car is in the field of view of the camera. Since video of the hood does not typically need to be recorded, it may be desirable to insert the magnified video image along the bottom of the field of view. Similarly, oftentimes the sky is recorded; in that case, it may be desirable to insert the strip of magnified video across the top of the field of view, where the sky is usually located. In other embodiments, a vertical strip may be magnified and inserted. In various embodiments, a user has the option of selecting the size and shape of the area being magnified and where the magnified images should be inserted. Those having skill in the art will appreciate that various embodiments may permit magnified still images to be recorded, or magnified video to be recorded, or both.
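Per frame, the FIG. 5 arrangement is a pair of slicing operations: read a centered strip, upscale it, and overwrite the bottom band that would otherwise show the hood. The strip height and 2x factor below are illustrative assumptions:

```python
import numpy as np

def insert_magnified_strip(frame, strip_h=40, factor=2):
    """Magnify a strip from mid-frame and overwrite the bottom band
    (typically the hood of the police car) with the result."""
    h, w = frame.shape[:2]
    y0 = h // 2 - strip_h // 2
    x0 = (w - w // factor) // 2
    strip = frame[y0:y0 + strip_h, x0:x0 + w // factor]   # centered source strip
    magnified = strip.repeat(factor, axis=0).repeat(factor, axis=1)
    out = frame.copy()
    out[h - magnified.shape[0]:, :magnified.shape[1]] = magnified
    return out
```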


In some embodiments, the video output saved to the storage device may be at the same video quality and the same compression level, and may take up approximately the same amount of storage space, but, due to the magnification, the areas of interest are more readable. While the area of interest is typically readable without need for an optical zoom of the camera, in some embodiments an optical zoom may be used to further enhance the resolution of areas of interest. Similarly, some embodiments contemplate use of a digital zoom. In some embodiments, the process differs from a typical digital zoom because the video may be accessed while it is still in raw form. In that way, a high-resolution image may be extracted before the compression algorithm has been implemented and before reduced video-quality settings have been applied.


Although various embodiments of the method and system of the present invention have been illustrated in the accompanying Drawings and described in the foregoing Detailed Description, it will be understood that the invention is not limited to the embodiments disclosed, but is capable of numerous rearrangements, modifications and substitutions without departing from the spirit of the invention as set forth herein.

Claims
  • 1. An image-creation method comprising: capturing an image as digital data; locating an area of interest of the captured image; extracting, from the digital data, at least some data corresponding to the located area of interest; digitally magnifying the extracted at least some data corresponding to the located area of interest to yield a magnified area of interest image; and combining the digitally magnified extracted data with at least some of the digital data of the captured image to yield combined data, wherein the combining results in a combined image corresponding to the magnified area of interest image being located in an area of low interest of the combined image.
  • 2. The image-creation method of claim 1, further comprising buffering the digital data.
  • 3. The image-creation method of claim 2, further comprising retrieving the captured data from the buffer.
  • 4. The image-creation method of claim 1, wherein the locating step is performed before the capturing step.
  • 5. The image-creation method of claim 1, further comprising identifying data corresponding to the located area of interest.
  • 6. The image-creation method of claim 1, wherein the extracting step comprises copying the at least some data corresponding to the located area of interest and not modifying the data corresponding to the located area of interest as present in the captured image.
  • 7. The image-creation method of claim 1, further comprising compressing the combined data.
  • 8. The image-creation method of claim 1, wherein the combining step results in a combined image corresponding to the digitally magnified extracted data being located in the area of low interest of the combined image represented by the combined data.
  • 9. The image-creation method of claim 1, further comprising using the combined data to display a modified image.
  • 10. The image-creation method of claim 1, further comprising: repeating the recited steps a plurality of times; and saving modified video data formed therefrom.
  • 11. An article of manufacture for image creation, the article of manufacture comprising: at least one computer readable medium; processor instructions contained on the at least one computer readable medium, the processor instructions configured to be readable from the at least one computer readable medium by at least one processor and thereby cause the at least one processor to operate as to perform the following steps: capturing an image as digital data; locating an area of interest of the captured image; extracting, from the digital data, at least some data corresponding to the located area of interest; digitally magnifying the extracted at least some data corresponding to the located area of interest to yield a magnified area of interest image; and combining the digitally magnified extracted data with at least some of the digital data of the captured image to yield a combined image corresponding to the magnified area of interest image being located in an area of low interest of the combined image.
  • 12. The article of manufacture of claim 11, wherein the processor instructions are configured to cause the at least one processor to operate as to perform the following step: buffering the digital data.
  • 13. The article of manufacture of claim 12, wherein the processor instructions are configured to cause the at least one processor to operate as to perform the following step: retrieving the captured data from the buffer.
  • 14. The article of manufacture of claim 11, wherein the processor instructions are configured to cause the at least one processor to operate as to perform the following step: identifying the at least some data corresponding to the located area of interest.
  • 15. The article of manufacture of claim 11, wherein the extracting step comprises copying the at least some data corresponding to the located area of interest and not modifying the data corresponding to the located area of interest as present in the captured image.
  • 16. The article of manufacture of claim 11, wherein the processor instructions are configured to cause the at least one processor to operate as to perform the following step: compressing the combined data.
  • 17. The article of manufacture of claim 11, further comprising identifying the area of low interest of the image.
  • 18. The article of manufacture of claim 11, wherein the processor instructions are configured to cause the at least one processor to operate as to perform the following step: using the combined data to display the modified image.
  • 19. The article of manufacture of claim 11, wherein the processor instructions are configured to cause the at least one processor to operate as to perform the following steps: repeating the recited steps a plurality of times; and saving modified video data formed therefrom.
  • 20. The article of manufacture of claim 11, wherein, in the combined image, a portion of the captured image is replaced by the magnified area of interest image.
CROSS-REFERENCE TO RELATED APPLICATIONS

This patent application is a continuation application of U.S. patent application Ser. No. 12/371,189, filed on Feb. 13, 2009. U.S. patent application Ser. No. 12/371,189 claims priority from, and incorporates by reference for any purpose the entire disclosure of, U.S. Provisional Patent Application No. 61/029,101, filed Feb. 15, 2008. In addition, U.S. patent application Ser. No. 12/371,189 claims priority from and incorporates by reference U.S. Provisional Patent Application No. 61/029,092, also filed Feb. 15, 2008. U.S. patent application Ser. No. 12/371,189 also incorporates by reference U.S. Patent Application Publication No. 2006/0158968, filed on Oct. 12, 2005 and U.S. Patent Application Publication No. 2009/0213218, filed on Feb. 13, 2009.

US Referenced Citations (339)
Number Name Date Kind
3752047 Gordon et al. Aug 1973 A
4258421 Juhasz et al. Mar 1981 A
4389706 Gomola et al. Jun 1983 A
4420238 Felix Dec 1983 A
4688244 Hannon et al. Aug 1987 A
4754255 Sanders et al. Jun 1988 A
4786900 Karasawa et al. Nov 1988 A
4789904 Peterson Dec 1988 A
4831438 Bellman, Jr. et al. May 1989 A
4843463 Michetti Jun 1989 A
4949186 Peterson Aug 1990 A
4992943 McCracken Feb 1991 A
4993068 Piosenka et al. Feb 1991 A
5027104 Reid Jun 1991 A
5111289 Lucas et al. May 1992 A
5136655 Bronson Aug 1992 A
5164827 Paff Nov 1992 A
5185667 Zimmermann Feb 1993 A
5204536 Vardi Apr 1993 A
5225882 Hosokawa et al. Jul 1993 A
5430431 Nelson Jul 1995 A
5485611 Astle Jan 1996 A
5491464 Carter et al. Feb 1996 A
5491511 Odle Feb 1996 A
5515042 Nelson May 1996 A
5539454 Williams Jul 1996 A
5570127 Schmidt Oct 1996 A
5579239 Freeman et al. Nov 1996 A
5651075 Frazier et al. Jul 1997 A
5677979 Squicciarini et al. Oct 1997 A
5682133 Johnson et al. Oct 1997 A
5689442 Swanson et al. Nov 1997 A
5703604 McCutchen Dec 1997 A
5708780 Levergood et al. Jan 1998 A
5726450 Peterson et al. Mar 1998 A
5734337 Kupersmit Mar 1998 A
5742336 Lee Apr 1998 A
5784023 Bluege Jul 1998 A
5787367 Berra Jul 1998 A
5799083 Brothers et al. Aug 1998 A
5809161 Auty et al. Sep 1998 A
5815093 Kikinis Sep 1998 A
5818864 van Goor et al. Oct 1998 A
5844599 Hildin Dec 1998 A
5852664 Iverson et al. Dec 1998 A
5857159 Dickrell et al. Jan 1999 A
5890079 Levine Mar 1999 A
5898866 Atkins et al. Apr 1999 A
5917405 Joao Jun 1999 A
5920338 Katz Jul 1999 A
5926210 Hackett et al. Jul 1999 A
5936683 Lin Aug 1999 A
5963248 Ohkawa et al. Oct 1999 A
6008841 Charlson Dec 1999 A
6028528 Lorenzetti et al. Feb 2000 A
6037977 Peterson Mar 2000 A
6076026 Jambhekar et al. Jun 2000 A
6092008 Bateman Jul 2000 A
6121898 Moetteli Sep 2000 A
6141611 Mackey et al. Oct 2000 A
6151065 Steed et al. Nov 2000 A
6211907 Scaman et al. Apr 2001 B1
6215519 Nayar et al. Apr 2001 B1
6252989 Geisler et al. Jun 2001 B1
6259475 Ramachandran et al. Jul 2001 B1
6282462 Hopkins Aug 2001 B1
6326714 Bandera Dec 2001 B1
6330025 Arazi et al. Dec 2001 B1
6332193 Glass et al. Dec 2001 B1
6335789 Kikuchi Jan 2002 B1
6345219 Klemens Feb 2002 B1
6373962 Kanade et al. Apr 2002 B1
6389340 Rayner May 2002 B1
6421080 Lambert Jul 2002 B1
6430488 Goldman et al. Aug 2002 B1
6445824 Hieda Sep 2002 B2
6456321 Ito et al. Sep 2002 B1
6490513 Fish et al. Dec 2002 B1
6518881 Monroe Feb 2003 B2
6542076 Joao Apr 2003 B1
6545601 Monroe Apr 2003 B1
6546119 Ciolli Apr 2003 B2
6546363 Hagenbuch Apr 2003 B1
6553131 Neubauer et al. Apr 2003 B1
6556905 Mittelsteadt et al. Apr 2003 B1
6559769 Anthony et al. May 2003 B2
6631522 Erdelyi Oct 2003 B1
6636256 Passman et al. Oct 2003 B1
6684137 Takagi et al. Jan 2004 B2
6696978 Trajkovic et al. Feb 2004 B2
6704281 Hourunranta et al. Mar 2004 B1
6707489 Maeng et al. Mar 2004 B1
6734911 Lyons May 2004 B1
6754663 Small Jun 2004 B1
6801574 Takeuchi et al. Oct 2004 B2
6812835 Ito et al. Nov 2004 B2
6831556 Boykin Dec 2004 B1
6914541 Zierden Jul 2005 B1
6950013 Scaman et al. Sep 2005 B2
6950122 Mirabile Sep 2005 B1
6959122 McIntyre Oct 2005 B2
6965400 Haba et al. Nov 2005 B1
7023913 Monroe Apr 2006 B1
7119674 Sefton Oct 2006 B2
7119832 Blanco et al. Oct 2006 B2
7131136 Monroe Oct 2006 B2
7180407 Guo et al. Feb 2007 B1
7190882 Gammenthaler Mar 2007 B2
7215876 Okada et al. May 2007 B2
7262790 Bakewell Aug 2007 B2
7272179 Siemens et al. Sep 2007 B2
7363742 Nerheim Apr 2008 B2
7373395 Brailean et al. May 2008 B2
7382244 Donovan et al. Jun 2008 B1
7405834 Marron et al. Jul 2008 B1
7471334 Stenger Dec 2008 B1
7495579 Sirota et al. Feb 2009 B2
7570158 Denny et al. Aug 2009 B2
7570476 Nerheim Aug 2009 B2
7574131 Chang et al. Aug 2009 B2
7583290 Enright et al. Sep 2009 B2
7646312 Rosen Jan 2010 B2
7702015 Richter et al. Apr 2010 B2
7711150 Simon May 2010 B2
7768548 Silvernail et al. Aug 2010 B2
7787025 Sanno et al. Aug 2010 B2
7804426 Etcheson Sep 2010 B2
7880766 Aoki et al. Feb 2011 B2
7894632 Park et al. Feb 2011 B2
7920187 Sanno et al. Apr 2011 B2
7929010 Narasimhan Apr 2011 B2
7944676 Smith et al. May 2011 B2
7973853 Ojima et al. Jul 2011 B2
7995652 Washington Aug 2011 B2
8022874 Frieaizen Sep 2011 B2
8026945 Garoutte et al. Sep 2011 B2
8037348 Wei et al. Oct 2011 B2
8050206 Siann et al. Nov 2011 B2
8228364 Cilia Jul 2012 B2
8446469 Blanco et al. May 2013 B2
8487995 Vanman et al. Jul 2013 B2
8570376 Sharma et al. Oct 2013 B1
8594485 Brundula Nov 2013 B2
8599368 Cilia et al. Dec 2013 B1
8630497 Badawy Jan 2014 B2
8736680 Cilia et al. May 2014 B1
8781292 Ross et al. Jul 2014 B1
8805431 Vasavada et al. Aug 2014 B2
8819686 Memik et al. Aug 2014 B2
8837901 Shekarri et al. Sep 2014 B2
8964054 Jung et al. Feb 2015 B2
8982944 Vanman et al. Mar 2015 B2
9058499 Smith Jun 2015 B1
9134338 Cilia et al. Sep 2015 B2
9159371 Ross et al. Oct 2015 B2
9253452 Ross et al. Feb 2016 B2
9262800 Cilia Feb 2016 B2
9325950 Haler Apr 2016 B2
9331997 Smith May 2016 B2
9377161 Hanchett et al. Jun 2016 B2
9432298 Smith Aug 2016 B1
9456131 Tran Sep 2016 B2
9584710 Marman et al. Feb 2017 B2
9615062 Sablak et al. Apr 2017 B2
9716913 Sivasankaran Jul 2017 B2
9756279 Vanman et al. Sep 2017 B2
9973711 Yang et al. May 2018 B2
10186012 Newman et al. Jan 2019 B2
10230866 Townsend et al. Mar 2019 B1
20010052137 Klein Dec 2001 A1
20020040475 Yap et al. Apr 2002 A1
20020064314 Comaniciu et al. May 2002 A1
20020135679 Scaman Sep 2002 A1
20020140924 Wangler et al. Oct 2002 A1
20020141618 Ciolli Oct 2002 A1
20020141650 Keeney et al. Oct 2002 A1
20020149476 Ogura Oct 2002 A1
20020180759 Park et al. Dec 2002 A1
20020183905 Maeda et al. Dec 2002 A1
20020186148 Trajkovic et al. Dec 2002 A1
20020186297 Bakewell Dec 2002 A1
20030025599 Monroe Feb 2003 A1
20030025812 Slatter Feb 2003 A1
20030052798 Hanson Mar 2003 A1
20030071891 Geng Apr 2003 A1
20030080878 Kirmuss May 2003 A1
20030086000 Siemens et al. May 2003 A1
20030095338 Singh et al. May 2003 A1
20030112133 Webb et al. Jun 2003 A1
20030142209 Yamazaki et al. Jul 2003 A1
20030151663 Lorenzetti et al. Aug 2003 A1
20030154009 Basir et al. Aug 2003 A1
20030172123 Polan et al. Sep 2003 A1
20030185419 Sumitomo Oct 2003 A1
20030210329 Aagaard et al. Nov 2003 A1
20030210806 Shintani et al. Nov 2003 A1
20030212567 Shintani et al. Nov 2003 A1
20030214585 Bakewell Nov 2003 A1
20040008255 Lewellen Jan 2004 A1
20040017930 Kim et al. Jan 2004 A1
20040021852 DeFlumere Feb 2004 A1
20040056779 Rast Mar 2004 A1
20040080615 Klein et al. Apr 2004 A1
20040096084 Tamoto et al. May 2004 A1
20040119869 Tretter et al. Jun 2004 A1
20040150717 Page et al. Aug 2004 A1
20040189804 Borden et al. Sep 2004 A1
20040201765 Gammenthaler Oct 2004 A1
20040218099 Washington Nov 2004 A1
20040221311 Dow et al. Nov 2004 A1
20040223058 Richter et al. Nov 2004 A1
20040252193 Higgins Dec 2004 A1
20040258149 Robinson et al. Dec 2004 A1
20050083404 Pierce et al. Apr 2005 A1
20050090961 Bonk et al. Apr 2005 A1
20050099273 Shimomura et al. May 2005 A1
20050100329 Lao et al. May 2005 A1
20050101334 Brown et al. May 2005 A1
20050128064 Riesebosch Jun 2005 A1
20050151671 Bortolotto Jul 2005 A1
20050151852 Jomppanen Jul 2005 A1
20050196140 Moteki Sep 2005 A1
20050206773 Kim et al. Sep 2005 A1
20050212912 Huster Sep 2005 A1
20050243171 Ross et al. Nov 2005 A1
20050258942 Manasseh et al. Nov 2005 A1
20060010199 Brailean Jan 2006 A1
20060012683 Lao et al. Jan 2006 A9
20060028547 Chang Feb 2006 A1
20060033813 Provinsal et al. Feb 2006 A1
20060098843 Chew May 2006 A1
20060126932 Eschbach Jun 2006 A1
20060132604 Lao et al. Jun 2006 A1
20060133476 Page et al. Jun 2006 A1
20060152636 Matsukawa et al. Jul 2006 A1
20060158968 Vanman et al. Jul 2006 A1
20060159325 Zeineh et al. Jul 2006 A1
20060187305 Trivedi et al. Aug 2006 A1
20060193384 Boyce Aug 2006 A1
20060209189 Simpson Sep 2006 A1
20060244826 Chew Nov 2006 A1
20060269265 Wright et al. Nov 2006 A1
20060274166 Lee et al. Dec 2006 A1
20070013776 Venetianer et al. Jan 2007 A1
20070024706 Brannon et al. Feb 2007 A1
20070029825 Franklin et al. Feb 2007 A1
20070035612 Korneluk et al. Feb 2007 A1
20070058856 Boregowda et al. Mar 2007 A1
20070069921 Sefton Mar 2007 A1
20070097212 Farneman May 2007 A1
20070109411 Jung et al. May 2007 A1
20070122000 Venetianer et al. May 2007 A1
20070188612 Carter Aug 2007 A1
20070200933 Watanabe et al. Aug 2007 A1
20070217761 Chen et al. Sep 2007 A1
20070219686 Plante Sep 2007 A1
20070221233 Kawano et al. Sep 2007 A1
20070222678 Ishio et al. Sep 2007 A1
20070222859 Chang et al. Sep 2007 A1
20070225550 Gattani et al. Sep 2007 A1
20070230943 Chang et al. Oct 2007 A1
20070260363 Miller Nov 2007 A1
20070268370 Sanno et al. Nov 2007 A1
20070274705 Kashiwa et al. Nov 2007 A1
20070291104 Petersen et al. Dec 2007 A1
20070296817 Ebrahimi et al. Dec 2007 A1
20080002028 Miyata Jan 2008 A1
20080007438 Segall et al. Jan 2008 A1
20080036580 Breed Feb 2008 A1
20080100705 Kister et al. May 2008 A1
20080129844 Cusack et al. Jun 2008 A1
20080167001 Wong Jul 2008 A1
20080175479 Sefton et al. Jul 2008 A1
20080218596 Hoshino Sep 2008 A1
20080240616 Haering et al. Oct 2008 A1
20080285803 Madsen Nov 2008 A1
20080301088 Landry et al. Dec 2008 A1
20090002491 Haler Jan 2009 A1
20090046157 Cilia et al. Feb 2009 A1
20090049491 Karonen et al. Feb 2009 A1
20090088267 Shimazaki et al. Apr 2009 A1
20090102950 Ahiska Apr 2009 A1
20090129672 Camp, Jr. et al. May 2009 A1
20090195655 Pandey Aug 2009 A1
20090207248 Cilia et al. Aug 2009 A1
20090207252 Raghunath Aug 2009 A1
20090213218 Cilia et al. Aug 2009 A1
20090237529 Nakagomi et al. Sep 2009 A1
20090251530 Cilia Oct 2009 A1
20090259865 Sheynblat et al. Oct 2009 A1
20090295919 Chen et al. Dec 2009 A1
20090300692 Mavlankar et al. Dec 2009 A1
20090320081 Chui et al. Dec 2009 A1
20100026802 Titus et al. Feb 2010 A1
20100118147 Dorneich et al. May 2010 A1
20100165109 Lang Jul 2010 A1
20100208068 Elsemore Aug 2010 A1
20100225817 Sheraizin et al. Sep 2010 A1
20100238327 Griffith et al. Sep 2010 A1
20100245568 Wike, Jr. et al. Sep 2010 A1
20100265331 Tanaka Oct 2010 A1
20100321183 Donovan et al. Dec 2010 A1
20110042462 Smith Feb 2011 A1
20110044605 Vanman et al. Feb 2011 A1
20110052137 Cowie Mar 2011 A1
20110053654 Petrescu et al. Mar 2011 A1
20110074580 Mercier et al. Mar 2011 A1
20110110556 Kawakami May 2011 A1
20110134141 Swanson et al. Jun 2011 A1
20110157376 Lyu et al. Jun 2011 A1
20110211810 Vanman et al. Sep 2011 A1
20110234749 Alon Sep 2011 A1
20110242277 Do et al. Oct 2011 A1
20110249153 Hirooka et al. Oct 2011 A1
20110267499 Wan et al. Nov 2011 A1
20110285845 Bedros et al. Nov 2011 A1
20110292287 Washington Dec 2011 A1
20110310435 Tsuji et al. Dec 2011 A1
20120040650 Rosen Feb 2012 A1
20120069224 Cilia et al. Mar 2012 A1
20120092522 Zhang et al. Apr 2012 A1
20130150004 Rosen Jun 2013 A1
20130279757 Kephart Oct 2013 A1
20130287090 Sasaki et al. Oct 2013 A1
20130336634 Vanman et al. Dec 2013 A1
20140059166 Mann et al. Feb 2014 A1
20140139680 Huang et al. May 2014 A1
20140192192 Worrill et al. Jul 2014 A1
20140201064 Jackson et al. Jul 2014 A1
20140226952 Cilia et al. Aug 2014 A1
20140240500 Davies Aug 2014 A1
20140355951 Tabak Dec 2014 A1
20150050923 Tu et al. Feb 2015 A1
20150051502 Ross Feb 2015 A1
20150054639 Rosen Feb 2015 A1
20150063776 Ross et al. Mar 2015 A1
20160035391 Ross et al. Feb 2016 A1
20160073025 Cilia Mar 2016 A1
20170215971 Gattani et al. Aug 2017 A1
Foreign Referenced Citations (18)
Number Date Country
707297 Apr 1996 EP
2698596 Feb 1995 FR
2287152 Sep 1995 GB
2317418 Mar 1998 GB
2006311039 Nov 2006 JP
10-1050897 Jul 2011 KR
WO-1993020655 Oct 1993 WO
WO-1994019212 Sep 1994 WO
WO-1995028783 Oct 1995 WO
WO-1996022202 Jul 1996 WO
WO-1997038526 Oct 1997 WO
WO-1998052358 Nov 1998 WO
WO-1999018410 Apr 1999 WO
WO-01097524 Dec 2001 WO
WO-2004036926 Apr 2004 WO
WO-2007104367 Sep 2007 WO
WO-2013100993 Jul 2013 WO
WO-2016089918 Jun 2016 WO
Non-Patent Literature Citations (44)
Entry
Copenheaver, Blaine R., International Search Report for PCT/US2009/000930 as dated Apr. 9, 2009, (4 pages).
Young, Lee W., International Search Report for PCT/US2009/000934 as dated Apr. 29, 2009, (3 pages).
Copenheaver, Blaine R., International Search Report for PCT/US2010030861 as dated Jun. 21, 2010, (4 pages).
Nhon, Diep T., International Search Report for PCT/US05/36701 as dated Oct. 25, 2006, 5 pages.
Copenheaver, Blaine R., International Search Report for PCT/US2009/032462 as dated Mar. 10, 2009 (3 pages).
Kortum, P. et al., “Implementation of a foveated image coding system for image bandwidth reduction”, SPIE Proceedings, vol. 2657, 1996, pp. 350-360, XP-002636638.
Geisler, Wilson S. et al., “A real-time foveated multiresolution system for low-bandwidth video communication”, Proceedings of the SPIE—The International Society for Optical Engineering, SPIE—Int. Soc. Opt. Eng. USA, vol. 3299, 1998, pp. 294-305, XP-002636639.
Rowe, Lawrence A., et al.; “Indexes for User Access to Large Video Databases”; Storage and Retrieval for Image and Video Databases II, IS&T/SPIE Symp. on Elec. Imaging Sci. & Tech.; San Jose, CA; Feb. 1994; 12 pages.
Polybius; “The Histories,” vol. III: Books 5-8; Harvard University Press; 1923; pp. 385 & 387.
Crisman, P.A. (editor); “The Compatible Time-Sharing System: A Programmer's Guide,” second edition; The M.I.T. Press, Cambridge, Massachusetts; 1965; 587 pages.
Kotadia, Munir; “Gates Predicts Death of the Password”; http://www.cnet.com/news/gates-predicts-death-of-the-password/?ftag=CADe856116&bhid=; Feb. 25, 2004; 3 pages.
Morris, Robert, et al.; “Password Security: A Case History”; Communications of the ACM, vol. 22, No. 11; Nov. 1979; pp. 594-597.
Cranor, Lorrie Faith, et al.; “Security and Usability: Designing Secure Systems that People Can Use”; O'Reilly Media; Aug. 2005; pp. 3 & 104.
Chirillo, John; “Hack Attacks Encyclopedia: A Complete History of Hacks, Cracks, Phreaks, and Spies Over Time”; John Wiley & Sons, Inc.; 2001; pp. 485-486.
Stonebraker, Michael, et al.; “Object-Relational DBMSs: Tracking the Next Great Wave”; Second Ed.; Morgan Kaufmann Publishers, Inc.; 1999; pp. 3, 173, 232-237, 260.
Stonebraker, Michael, et al.; “Object-Relational DBMSs: The Next Great Wave”; Morgan Kaufmann Publishers, Inc.; 1996; pp. 105, 107, 166-168.
Barwick, Sandra; “Two Wheels Good, Four Wheels Bad”; The Independent; http://www.independent.co.uk/voices/two-wheels-good-four-wheels-bad-1392034.html; Feb. 4, 1994; 11 pages.
Mcfee, John E., et al.; “Multisensor Vehicle-Mounted Teleoperated Mine Detector with Data Fusion”; SPIE Proceedings, vol. 3392; Apr. 13, 1998; 2 pages.
Malcolm, Andrew H.; “Drunken Drivers Now Facing Themselves on Video Camera”; The New York Times; http://www.nytimes.com/1990/04/21/us/drunken-drivers-now-facing-themselves-on-video-camera.html; Apr. 21, 1990; 3 pages.
Kaplan, A.E., et al.; “An Internet Accessible Telepresence”; Multimedia Systems, vol. 5, Issue 2; Mar. 1997; Abstract only; 2 pages.
Sabatini, Richard V.; “State Police Cars in Pa. Get Cameras Patrol Stops Will be Videotaped. The Units will Benefit Citizens and Police, Authorities Say”; http://articles.philly.com/1996-03-30/news/25637501_1_patrol-car-state-police-commissioner-paul-j-evanko; Mar. 30, 1996; 2 pages.
Stockton, Gregory R., et al.; “Why Record? (Infrared Video)”; Infraspection Institute's IR/INFO '98 Symposium, Orlando, Florida; Jan. 25-28, 1998; 5 pages.
Racepak LLC; “Racepak DataLink Software”; http://www.racepak.com/software.php; Feb. 3, 2008; 4 pages.
Pavlopoulos, S., et al.; “A Novel Emergency Telemedicine System Based on Wireless Communication Technology—AMBULANCE”; IEEE Trans Inf Technol Biomed, vol. 2, No. 4; Dec. 1998; Abstract only; 2 pages.
Horne, Mike; “Future Video Accident Recorder”; http://www.iasa.com.au/folders/Publications/pdf_library/horne.pdf; May 1999; 9 pages.
Townsend & Taphouse; “Microsentinel I”; http://www.blacksheepnetworks.com/security/resources/encyclopedia/products/prod19.htm; Jul. 5, 2003; 1 page.
Security Data Networks, Inc.; “Best of Show Winner at CES Consumer Electronics Show is MicroSentinel(R) Wireless Internet Based Security System that Allows Users to Monitor their Home, Family, or Business using any Internet or E-Mail Connection”; PR Newswire; http://www.prnewswire.com/news-releases/best-of-show-winner-at-ces-consumer-electronics-show-is-microsentinelr-wireless-internet-based-security-system-that-allows-users-to-monitor-their-home-family-or-business-using-any-internet-or-e-mail-connection-73345197.html; Jan. 7, 1999; 3 pages.
Draper, Electra; “Mission Possible for Loronix”; Denver Post; http://extras.denverpost.com/business/top100b.htm; Aug. 13, 2000; 2 pages.
“Choosing the Right Password”; The Information Systems Security Monitor (ISSM); vol. 2, No. 2; Apr. 1992; 2 pages.
Aref, Walid G., et al.; “A Video Database Management System for Advancing Video Database Research”; In Proc. of the Int Workshop on Management Information Systems; Nov. 2002; 10 pages.
ICOP Extreme Wireless Mic, Operation Supplement; 2008.
Raytheon JPS Communications, Raytheon Model 20/20-W, Raytheon 20/20 Vision Digital In-Car Video Systems; Feb. 2010.
Product Review: ICOP Model 20/20-W; May 19, 2009.
State of Utah Invitation to Bid State Cooperative Contract, Contract No. MA503; Jul. 3, 2008.
X26 TASER; date unknown.
TASER X26 Specification Sheet; 2003.
Affidavit of Christopher Butler of Internet Archive attesting to the public availability of the 20/20-W Publication on Nov. 25, 2010.
International Association of Chiefs of Police Digital Video System Minimum Specifications; Nov. 21, 2008.
City of Pomona Request for Proposal for Mobile Video Recording System for Police Vehicles; Apr. 4, 2013.
“TASER Int'l (TASR) Challenges to Digital Ally's (DGLY) Patents”, StreetInsider.com, http://www.streetinsider.com/Corporate+News/TASER+Intl+(TASER)+Challenges+to+Digital+Allys+(DGLY)+Patents/12302550.html; Dec. 1, 2016.
Carlier, Axel, et al.; “Combining Content-Based Analysis and Crowdsourcing to Improve User Interaction with Zoomable Video”; Proceedings of the 19th International Conference on Multimedia 2011; Nov. 28-Dec. 1, 2011; pp. 43-52.
Mavlankar, Aditya, et al.; “Region-of-Interest Prediction for Interactively Streaming Regions of High Resolution Video”; Proceedings of the 16th IEEE International Packet Video Workshop; Nov. 2007; 10 pages.
Sony Corporation; “Network Camera: User's Guide: Software Version 1.0: SNC-RZ25N/RZ25P”; 2004; 81 pages.
Vanman, Robert, “U.S. Appl. No. 15/481,214”, filed Apr. 6, 2017.
Related Publications (1)
Number Date Country
20180103255 A1 Apr 2018 US
Provisional Applications (2)
Number Date Country
61029092 Feb 2008 US
61029101 Feb 2008 US
Continuations (1)
Number Date Country
Parent 12371189 Feb 2009 US
Child 15801801 US