Systems and methods for multiple-resolution storage of media streams

Information

  • Patent Grant
  • Patent Number
    10,341,605
  • Date Filed
    Thursday, April 6, 2017
  • Date Issued
    Tuesday, July 2, 2019
Abstract
In an embodiment, a method includes continuously receiving, from a camera, raw video frames at an initial resolution. The method also includes, for each raw video frame, as the raw video frame is received: downscaling the raw video frame to a first resolution to yield a first scaled video frame; downscaling the raw video frame to a second resolution distinct from the first resolution to yield a second scaled video frame; identifying a location of a target; cropping at least one video frame based, at least in part, on the location of the target; and storing the first scaled video frame, the second scaled video frame, and information related to the cropped at least one video frame as part of a first video stream, a second video stream, and a third video stream, respectively.
Description
BACKGROUND
Technical Field

The present disclosure relates generally to media capture and more particularly, but not by way of limitation, to systems and methods for multiple-resolution storage of media streams.


History of Related Art

Capture devices such as video cameras may capture video for storage and playback. The computational and storage expense of video storage and playback increases in proportion to the resolution of the video provided by the video cameras.


SUMMARY OF THE INVENTION

In an embodiment, a method is performed by a computer system. The method includes continuously receiving, from a camera, raw video frames at an initial resolution. The method also includes, for each raw video frame, as the raw video frame is received: downscaling the raw video frame to a first resolution to yield a first scaled video frame; downscaling the raw video frame to a second resolution distinct from the first resolution to yield a second scaled video frame; identifying a location of a target in at least one of the raw video frame, the first scaled video frame, and the second scaled video frame; cropping at least one video frame selected from among the raw video frame, the first scaled video frame, and the second scaled video frame based, at least in part, on the location of the target; and storing the first scaled video frame, the second scaled video frame, and information related to the cropped at least one video frame as part of a first video stream, a second video stream, and a third video stream, respectively.


In an embodiment, a system includes a computer processor and memory. The computer processor and memory in combination are operable to implement a method. The method includes continuously receiving, from a camera, raw video frames at an initial resolution. The method also includes, for each raw video frame, as the raw video frame is received: downscaling the raw video frame to a first resolution to yield a first scaled video frame; downscaling the raw video frame to a second resolution distinct from the first resolution to yield a second scaled video frame; identifying a location of a target in at least one of the raw video frame, the first scaled video frame, and the second scaled video frame; cropping at least one video frame selected from among the raw video frame, the first scaled video frame, and the second scaled video frame based, at least in part, on the location of the target; and storing the first scaled video frame, the second scaled video frame, and information related to the cropped at least one video frame as part of a first video stream, a second video stream, and a third video stream, respectively.


In one embodiment, a computer-program product includes a non-transitory computer-usable medium having computer-readable program code embodied therein. The computer-readable program code is adapted to be executed to implement a method. The method includes continuously receiving, from a camera, raw video frames at an initial resolution. The method also includes, for each raw video frame, as the raw video frame is received: downscaling the raw video frame to a first resolution to yield a first scaled video frame; downscaling the raw video frame to a second resolution distinct from the first resolution to yield a second scaled video frame; identifying a location of a target in at least one of the raw video frame, the first scaled video frame, and the second scaled video frame; cropping at least one video frame selected from among the raw video frame, the first scaled video frame, and the second scaled video frame based, at least in part, on the location of the target; and storing the first scaled video frame, the second scaled video frame, and information related to the cropped at least one video frame as part of a first video stream, a second video stream, and a third video stream, respectively.





BRIEF DESCRIPTION OF THE DRAWINGS

A more complete understanding of the method and apparatus of the present invention may be obtained by reference to the following Detailed Description when taken in conjunction with the accompanying Drawings wherein:



FIG. 1 illustrates an example of a system for intelligent, multiple-resolution storage of media streams.



FIG. 2 illustrates an example of a process for recording video from video cameras.



FIG. 3 illustrates an example of a process for multiple-resolution storage of video.





DETAILED DESCRIPTION

In certain embodiments, capture devices, such as video cameras, can be integrated with a video-recording system and produce recordings of live media streams. A media stream can include video, audio, combinations of same, and/or the like. A captured media stream typically includes an audio and/or video recording.


In some embodiments, the video-recording system can include a collection of video cameras that are arranged to provide a 360-degree view relative to a point of reference. For example, four video cameras with 120-degree fields of view could be strategically arranged around a vehicle, such as a police car, to at least partially overlap and cover a 360-degree view from a perspective of the vehicle. The video cameras can be configured to record video upon certain triggers such as, for example, emergency-light activation, siren activation, a detected speed in excess of a threshold, excessive g-force events (e.g., collisions), manual activation of an individual video camera, combinations of same, and/or the like. The video-recording system can also include or be communicably coupled to mobile devices, such as wearable video cameras and other cameras, that can be configured, for example, to each make their own individual decisions on whether to record additional video from their respective vantage points.


Video from multiple video cameras can be highly advantageous for accurately depicting what has occurred over a given time period. While higher-resolution video is generally preferable to lower-resolution video from the standpoint of accurately depicting events, the storage, transmission, and playback costs of high-resolution video can be prohibitively expensive, particularly when numerous cameras are involved. For example, if a particular source video camera were to provide raw video at a resolution of 3960 by 2160, each uncompressed frame could have an approximate file size of 25.7 MB (RGB 8-bit frame) or 51.3 MB (RGB 16-bit frame). If that source video camera were to provide such video frames at thirty frames per second (fps), the corresponding bit rate could be approximately 6.2 gigabits per second (Gbps) for raw RGB 8-bit frames or 12.3 Gbps for raw RGB 16-bit frames. The bit rate multiplies quickly in proportion to the number of cameras supplying video.
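
By way of illustration, the arithmetic behind the figures above can be reproduced with a few lines of Python (decimal megabytes and gigabits are assumed):

```python
# Rough arithmetic for the raw-video figures cited above.
def frame_bytes(width, height, bytes_per_channel, channels=3):
    """Uncompressed frame size in bytes."""
    return width * height * channels * bytes_per_channel

def bit_rate_gbps(width, height, bytes_per_channel, fps=30):
    """Raw bit rate in gigabits per second."""
    return frame_bytes(width, height, bytes_per_channel) * fps * 8 / 1e9

for bpc, label in ((1, "RGB 8-bit"), (2, "RGB 16-bit")):
    size_mb = frame_bytes(3960, 2160, bpc) / 1e6
    rate = bit_rate_gbps(3960, 2160, bpc)
    print(f"{label}: {size_mb:.1f} MB per frame, {rate:.1f} Gbps at 30 fps")
# RGB 8-bit: 25.7 MB per frame, 6.2 Gbps at 30 fps
# RGB 16-bit: 51.3 MB per frame, 12.3 Gbps at 30 fps
```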


In many implementations, video recordings are produced on a continual basis over a period of time. For example, in a police implementation, video recordings of various lengths (e.g., five minutes, ten minutes, etc.) may be created over the course of a shift or a longer period (e.g., in emergency situations). Particularly in a mobile or portable storage environment such as a vehicle, storage resources are not always suitable for storing raw video of the type described above, from multiple video cameras, for hours or more at a time. Likewise, it is often not practical to transmit high-resolution video of the type described above over a network to a central storage location due to bandwidth limitations. These are technical problems related to how data is transmitted and stored.


One way to address the above problems might be to encode video in a compressed and/or lower-resolution format. However, this approach would typically result in the loss of video detail, which detail might prove important to demonstrating what took place at a given time. Although this disadvantage might be mitigated by minimizing the amount of compression and/or resolution lowering that is performed, such mitigation would also reduce the resultant storage and transmission savings.


The present disclosure describes examples of intelligent, multiple-resolution storage of video data, for example, in mobile or portable video-storage environments. The intelligent, multiple-resolution storage can occur in real-time as raw video frames are received from video cameras. For purposes of this patent application, raw video or a raw video frame refers to a video or video frame, respectively, that is in its original format as provided by a capture source such as a video camera. In certain embodiments, a media system, such as an in-vehicle media system, can enhance its real-time knowledge of live media streams by automatically identifying targets and/or regions of interest in one or more fields of view. For example, the media system can downscale the raw video frames to multiple resolutions on a frame-by-frame basis. In addition, in some embodiments, the media system can perform additional processing on the identified regions of interest and store selected video at a relatively higher resolution.



FIG. 1 illustrates an example of a system 100 for intelligent, multiple-resolution storage of media streams. The system 100 can include an in-vehicle media system (IVMS) 102, a mobile device 104, and a media storage system 106. Although the IVMS 102, the mobile device 104, and the media storage system 106 are each shown singly, it should be appreciated that, in some embodiments, each can be representative of a plurality of such components. For example, the mobile device 104 can be representative of a plurality of mobile devices and the media storage system 106 can be representative of a plurality of storage locations that are available over a network.


In certain embodiments, the IVMS 102 can be communicably coupled to the mobile device 104 and the media storage system 106 via a communication link 108 and a communication link 110, respectively. In addition, in certain embodiments, the mobile device 104 can be communicably coupled to the media storage system 106 via a communication link 114. The communication links 108, 110 and 114 can be representative of wired and/or wireless communication. In some cases, the communication links 108, 110 and 114 can represent links that are periodically established, for example, in order to transfer captured media therebetween (e.g., from the mobile device 104 to the IVMS 102, from the mobile device 104 to the media storage system 106 and/or from the IVMS 102 to the media storage system 106).


The IVMS 102 is typically operable to receive, process, and store media such as audio and/or video as it is received from a source. An example of functionality that the IVMS 102 can include is described in U.S. Pat. No. 8,487,995 (“the '995 patent”). The '995 patent is hereby incorporated by reference. In general, the mobile device 104 can capture the media and, in some cases, provide same to the IVMS 102 in a continuous, ongoing fashion. The media storage system 106 can, in some embodiments, be implemented as a central storage system that stores captured media from multiple mobile devices similar to the mobile device 104 and/or from multiple media systems similar to the IVMS 102.


The IVMS 102, mobile device 104, and media storage system 106 may each include one or more portions of one or more computer systems. In particular embodiments, one or more of these computer systems may perform one or more steps of one or more methods described or illustrated herein. In particular embodiments, one or more computer systems may provide functionality described or illustrated herein. In particular embodiments, encoded software running on one or more computer systems may perform one or more steps of one or more methods described or illustrated herein or provide functionality described or illustrated herein.


The components of IVMS 102, mobile device 104, and media storage system 106 may comprise any suitable physical form, configuration, number, type and/or layout. As an example, and not by way of limitation, IVMS 102, mobile device 104, and/or media storage system 106 may comprise an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, or a combination of two or more of these. Where appropriate, IVMS 102, mobile device 104, and/or media storage system 106 may include one or more computer systems; be unitary or distributed; span multiple locations; span multiple machines; or reside in a cloud, which may include one or more cloud components in one or more networks.


In the depicted embodiment, IVMS 102, mobile device 104, and media storage system 106 each include their own respective processors 111, 121, and 131; memory 113, 123, and 133; storage 115, 125, and 135; interfaces 117, 127, and 137; and buses 119, 129, and 139. Although a system is depicted having a particular number of particular components in a particular arrangement, this disclosure contemplates any system having any suitable number of any suitable components in any suitable arrangement. For simplicity, similar components of IVMS 102, mobile device 104, and media storage system 106 will be discussed together while referring to the components of IVMS 102. However, it is not necessary for these devices to have the same components, or the same type of components. For example, processor 111 may be a general purpose microprocessor and processor 121 may be an application specific integrated circuit (ASIC).


Processor 111 may be a microprocessor, controller, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable to provide, either alone or in conjunction with other components (e.g., memory 113), wireless networking functionality. Such functionality may include providing various features discussed herein. In particular embodiments, processor 111 may include hardware for executing instructions, such as those making up a computer program. As an example and not by way of limitation, to execute instructions, processor 111 may retrieve (or fetch) instructions from an internal register, an internal cache, memory 113, or storage 115; decode and execute them; and then write one or more results to an internal register, an internal cache, memory 113, or storage 115.


In particular embodiments, processor 111 may include one or more internal caches for data, instructions, or addresses. This disclosure contemplates processor 111 including any suitable number of any suitable internal caches, where appropriate. As an example and not by way of limitation, processor 111 may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (TLBs). Instructions in the instruction caches may be copies of instructions in memory 113 or storage 115 and the instruction caches may speed up retrieval of those instructions by processor 111. Data in the data caches may be copies of data in memory 113 or storage 115 for instructions executing at processor 111 to operate on; the results of previous instructions executed at processor 111 for access by subsequent instructions executing at processor 111, or for writing to memory 113, or storage 115; or other suitable data. The data caches may speed up read or write operations by processor 111. The TLBs may speed up virtual-address translations for processor 111. In particular embodiments, processor 111 may include one or more internal registers for data, instructions, or addresses. Depending on the embodiment, processor 111 may include any suitable number of any suitable internal registers, where appropriate. Where appropriate, processor 111 may include one or more arithmetic logic units (ALUs); be a multi-core processor; include one or more processors 111; or any other suitable processor.


Memory 113 may be any form of volatile or non-volatile memory including, without limitation, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), flash memory, removable media, or any other suitable local or remote memory component or components. In particular embodiments, memory 113 may include random access memory (RAM). This RAM may be volatile memory, where appropriate. Where appropriate, this RAM may be dynamic RAM (DRAM) or static RAM (SRAM). Moreover, where appropriate, this RAM may be single-ported or multi-ported RAM, or any other suitable type of RAM or memory. Memory 113 may include one or more memories 113, where appropriate. Memory 113 may store any suitable data or information utilized by IVMS 102, including software embedded in a computer readable medium, and/or encoded logic incorporated in hardware or otherwise stored (e.g., firmware). In particular embodiments, memory 113 may include main memory for storing instructions for processor 111 to execute or data for processor 111 to operate on. In particular embodiments, one or more memory management units (MMUs) may reside between processor 111 and memory 113 and facilitate accesses to memory 113 requested by processor 111.


As an example and not by way of limitation, IVMS 102 may load instructions from storage 115 or another source (such as, for example, another computer system) to memory 113. Processor 111 may then load the instructions from memory 113 to an internal register or internal cache. To execute the instructions, processor 111 may retrieve the instructions from the internal register or internal cache and decode them. During or after execution of the instructions, processor 111 may write one or more results (which may be intermediate or final results) to the internal register or internal cache. Processor 111 may then write one or more of those results to memory 113. In particular embodiments, processor 111 may execute only instructions in one or more internal registers or internal caches or in memory 113 (as opposed to storage 115 or elsewhere) and may operate only on data in one or more internal registers or internal caches or in memory 113 (as opposed to storage 115 or elsewhere).


In particular embodiments, storage 115 may include mass storage for data or instructions. As an example and not by way of limitation, storage 115 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive or a combination of two or more of these. Storage 115 may include removable or non-removable (or fixed) media, where appropriate. Storage 115 may be internal or external to IVMS 102, where appropriate. In particular embodiments, storage 115 may be non-volatile, solid-state memory. In particular embodiments, storage 115 may include read-only memory (ROM). Where appropriate, this ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory or a combination of two or more of these. Storage 115 may take any suitable physical form and may comprise any suitable number or type of storage. Storage 115 may include one or more storage control units facilitating communication between processor 111 and storage 115, where appropriate.


In particular embodiments, interface 117 may include hardware, encoded software, or both providing one or more interfaces for communication (such as, for example, packet-based communication) among IVMS 102, mobile device 104, media storage system 106, any networks, any network devices, and/or any other computer systems. As an example and not by way of limitation, communication interface 117 may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or other wire-based network and/or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network.


In some embodiments, interface 117 comprises one or more radios coupled to one or more physical antenna ports 116. Depending on the embodiment, interface 117 may be any type of interface suitable for any type of network with which the system 100 is used. As an example and not by way of limitation, the system 100 can include (or communicate with) an ad-hoc network, a personal area network (PAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), or one or more portions of the Internet or a combination of two or more of these. One or more portions of one or more of these networks may be wired or wireless. As an example, the system 100 can include (or communicate with) a wireless PAN (WPAN) (such as, for example, a BLUETOOTH WPAN), a WI-FI network, a WI-MAX network, an LTE network, an LTE-A network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network), or any other suitable wireless network or a combination of two or more of these. IVMS 102 may include any suitable interface 117 for any one or more of these networks, where appropriate.


In some embodiments, interface 117 may include one or more interfaces for one or more I/O devices. One or more of these I/O devices may enable communication between a person and IVMS 102. As an example and not by way of limitation, an I/O device may include a keyboard, keypad, microphone, monitor, mouse, printer, scanner, speaker, still camera, stylus, tablet, touchscreen, trackball, video camera, another suitable I/O device or a combination of two or more of these. An I/O device may include one or more sensors. Particular embodiments may include any suitable type and/or number of I/O devices and any suitable type and/or number of interfaces 117 for them. Where appropriate, interface 117 may include one or more drivers enabling processor 111 to drive one or more of these I/O devices. Interface 117 may include one or more interfaces 117, where appropriate.


Bus 119 may include any combination of hardware, software embedded in a computer readable medium, and/or encoded logic incorporated in hardware or otherwise stored (e.g., firmware) to couple components of IVMS 102 to each other. As an example and not by way of limitation, bus 119 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, or any other suitable bus or a combination of two or more of these. Bus 119 may include any number, type, and/or configuration of buses 119, where appropriate. In particular embodiments, one or more buses 119 (which may each include an address bus and a data bus) may couple processor 111 to memory 113. Bus 119 may include one or more memory buses.


Herein, reference to a computer-readable storage medium encompasses one or more tangible computer-readable storage media possessing structures. As an example and not by way of limitation, a computer-readable storage medium may include a semiconductor-based or other integrated circuit (IC) (such as, for example, a field-programmable gate array (FPGA) or an application-specific IC (ASIC)), a hard disk, an HDD, a hybrid hard drive (HHD), an optical disc, an optical disc drive (ODD), a magneto-optical disc, a magneto-optical drive, a floppy disk, a floppy disk drive (FDD), magnetic tape, a holographic storage medium, a solid-state drive (SSD), a RAM-drive, a SECURE DIGITAL card, a SECURE DIGITAL drive, a flash memory card, a flash memory drive, or any other suitable tangible computer-readable storage medium or a combination of two or more of these, where appropriate.


Particular embodiments may include one or more computer-readable storage media implementing any suitable storage. In particular embodiments, a computer-readable storage medium implements one or more portions of processor 111 (such as, for example, one or more internal registers or caches), one or more portions of memory 113, one or more portions of storage 115, or a combination of these, where appropriate. In particular embodiments, a computer-readable storage medium implements RAM or ROM. In particular embodiments, a computer-readable storage medium implements volatile or persistent memory. In particular embodiments, one or more computer-readable storage media embody encoded software.


Herein, reference to encoded software may encompass one or more applications, bytecode, one or more computer programs, one or more executables, one or more instructions, logic, machine code, one or more scripts, or source code, and vice versa, where appropriate, that have been stored or encoded in a computer-readable storage medium. In particular embodiments, encoded software includes one or more application programming interfaces (APIs) stored or encoded in a computer-readable storage medium. Particular embodiments may use any suitable encoded software written or otherwise expressed in any suitable programming language or combination of programming languages stored or encoded in any suitable type or number of computer-readable storage media. In particular embodiments, encoded software may be expressed as source code or object code. In particular embodiments, encoded software is expressed in a higher-level programming language, such as, for example, C, Perl, or a suitable extension thereof. In particular embodiments, encoded software is expressed in a lower-level programming language, such as assembly language (or machine code). In particular embodiments, encoded software is expressed in JAVA. In particular embodiments, encoded software is expressed in Hyper Text Markup Language (HTML), Extensible Markup Language (XML), or other suitable markup language.


Referring more specifically to the IVMS 102, the IVMS 102 can include media capture components 120a. The media capture components 120a can include video-capture hardware and/or software (e.g., video cameras), audio-capture hardware and/or software (e.g., microphones), combinations of same and/or the like. More particularly, in certain embodiments, the media capture components 120a can include an arrangement of video cameras in or around a vehicle. In an example, the media capture components 120a can include video cameras arranged around an exterior of the vehicle so as to capture a 360-degree field of view. For example, the 360-degree field of view can be captured by front, left, right, and rear-facing video cameras that each individually have, for example, a 120-degree field of view. In addition, or alternatively, the media capture components 120a can include one or more video cameras positioned inside the vehicle. Additionally, in some embodiments, at least some of the video cameras of the media capture components 120a can be video cameras configured for use in low lighting (e.g., night-vision cameras).


Referring now more specifically to the mobile device 104, the mobile device 104 can include a media capture component 120b and a battery 118. The media capture component 120b can include video-capture hardware and/or software (e.g., a camera), audio-capture hardware and/or software (e.g., a microphone), combinations of same, and/or the like. In a typical embodiment, the media capture component 120b enables the mobile device 104 to capture the live media stream for processing and storage. The battery 118 typically provides a limited power source to the mobile device 104.


Furthermore, the IVMS 102, the mobile device 104 and the media storage system 106 can include a media processor 112(1), a media processor 112(2) and a media processor 112(3), respectively (collectively, media processor(s) 112). The media processor(s) 112 can include software and/or hardware to process a live media stream and store the live media stream in memory in the form of a database (e.g., in the storage 115, 125 and/or 135). For example, in some embodiments, metadata related to each media stream can be stored in relation to the media stream as a database record. It should be appreciated that the media processor(s) 112 are shown for illustrative purposes.
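
The disclosure does not prescribe a particular schema for such records; purely as one minimal sketch of the idea, a stream's metadata could be stored alongside its file location in a SQLite table (the table and column names below are hypothetical):

```python
import sqlite3

# Hypothetical schema: one metadata row per stored media stream.
conn = sqlite3.connect("media_streams.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS media_streams (
        stream_id   INTEGER PRIMARY KEY,
        camera_id   TEXT NOT NULL,     -- which capture component produced it
        resolution  TEXT NOT NULL,     -- e.g. '1080p', '480p'
        cropped     INTEGER NOT NULL,  -- 1 if this is a cropped/target stream
        started_at  TEXT NOT NULL,     -- ISO-8601 timestamp
        file_path   TEXT NOT NULL      -- location of the stream in storage
    )
""")
conn.execute(
    "INSERT INTO media_streams (camera_id, resolution, cropped, started_at, file_path) "
    "VALUES (?, ?, ?, ?, ?)",
    ("front", "480p", 0, "2017-04-06T14:00:00Z", "/storage/front_480p_0001.mp4"),
)
conn.commit()
```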


In various embodiments, some of the media processor(s) 112 can be omitted. For example, in some embodiments, processing and storage of live media streams can occur entirely on the IVMS 102. In these embodiments, the media processor 112(2) of the mobile device 104 and/or the media processor 112(3) of the media storage system 106 can be omitted. In addition, in some embodiments, processing and storage of the media stream can occur on two or more of the IVMS 102, the mobile device 104 and the media storage system 106. In these embodiments, the functionality described herein that is attributed to the media processor(s) 112 can be distributed among two or more of the IVMS 102, the mobile device 104 and the media storage system 106. In addition, or alternatively, the media processor 112(1), the media processor 112(2) and the media processor 112(3) can perform at least some of the same functionality in parallel. In general, it should be appreciated that the particular arrangement of the IVMS 102, the mobile device 104 and the media storage system 106 is illustrative in nature. In various implementations, more, fewer or different components can implement the functionality of the system 100.


In certain embodiments, the media processor(s) 112 can implement intelligent, multiple-resolution storage of video data. In an example, the media processor 112(1) can continuously receive raw video frames from video cameras represented in the media capture components 120a and strategically process and downscale the raw video frames in real-time as the raw video frames are received. In certain embodiments, the media processor 112(1) can automatically identify targets within each raw video frame and optimize the downscaling based on any targets that are identified. In various embodiments, the media processor 112(1) can perform the above-described functionality for a particular camera, selected cameras, or all available cameras. For example, in some embodiments, the media processor 112(1) can perform the above-described multiple-resolution functionality exclusively for a front camera, exclusively for a rear camera, for all exterior cameras involved in a 360-degree field of view, combinations of the foregoing and/or the like. The media processor 112(2), for example, can perform similar multiple-resolution functionality with respect to video received from the media capture component 120b. In addition, or alternatively, the media processor 112(3) can perform similar functionality with respect to video streams stored on the storage 135. Example functionality of the media processor(s) 112 will be described in greater detail with respect to FIGS. 2-3.


When, for example, certain video cameras of the media capture components 120a are arranged to form a 360-degree view of a point of reference such as the vehicle, the media processor 112(1) can also blend together video streams from such video cameras into a single viewable 360-degree stream. In some cases, the media processor 112(1) can create multiple 360-degree streams at multiple resolutions (e.g., one such stream for each resolution at which video frames are retained). In various embodiments, the media processor 112(1), or another component, can enable users to navigate within the 360-degree streams and save additional views, for example, to the storage 115 or other memory.
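
Blending overlapping views into a seamless 360-degree stream generally involves warping and blending the overlapping regions; as a deliberately naive sketch of the simplest case (and not a representation of the disclosed blending), frames from four cameras that already share a common height could merely be concatenated side by side:

```python
import numpy as np

def naive_panorama(frames):
    """Join per-camera frames of equal height into one wide frame.

    This ignores lens distortion and the overlap between adjacent fields of
    view; a production stitcher would warp and feather-blend instead.
    """
    heights = {f.shape[0] for f in frames}
    if len(heights) != 1:
        raise ValueError("frames must share a common height")
    return np.hstack(frames)

# Example: four 120-degree views, already downscaled to 480p, -> one wide strip.
views = [np.zeros((480, 854, 3), dtype=np.uint8) for _ in range(4)]
panorama = naive_panorama(views)   # shape (480, 3416, 3)
```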



FIG. 2 illustrates an example of a process 200 for recording video from video cameras. For example, the process 200, in whole or in part, can be implemented by one or more of the IVMS 102, the mobile device 104, the media storage system 106, the media processor 112(1), the media processor 112(2), the media processor 112(3), the media capture components 120a, and/or the media capture component 120b. The process 200 can also be performed generally by the system 100. Although any number of systems, in whole or in part, can implement the process 200, to simplify discussion, the process 200 will be described in relation to the IVMS 102 and components thereof.


At block 202, the media processor 112(1) monitors video cameras of the media capture components 120a. At decision block 204, the media processor 112(1) determines whether a new raw video frame has been received. If not, the process 200 returns to block 202, where the media processor 112(1) continues to monitor the video cameras. Otherwise, if it is determined at decision block 204 that one or more raw video frames have been received from one or more video cameras, the process 200 proceeds to block 206 and executes in parallel for each raw video frame that is received. For ease of description, blocks 206-208 of the process 200 will be described with respect to a raw video frame received from a particular video camera of the media capture components 120a.


At block 206, the media processor 112(1) processes the raw video frame for multiple-resolution storage. In general, block 206 can include the media processor 112(1) downscaling the raw video frame to one or more resolutions so as to yield one or more downscaled video frames. In addition, or alternatively, the block 206 can include identifying a location of a target or region of interest in the raw video frame. For example, the media processor 112(1), or a component in communication with the media processor 112(1), can identify a person, a vehicle, a license plate, combinations of same and/or the like. In some embodiments, the identified location can be expressed as two-dimensional coordinates that represent a centroid of the target. In embodiments in which the block 206 includes target detection, the block 206 can further include, for example, automatically cropping an area of interest that includes the centroid and, in some cases, downscaling the cropped area of interest to one or more resolutions so as to yield additional scaled video frames. Some or all video frames resulting from the block 206 can be stored, for example, in the storage 115. Examples of functionality that can be included in the block 206 will be described in greater detail with respect to FIG. 3.
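
By way of illustration and not limitation, the per-frame flow of block 206 might be organized as in the following sketch; the injected helper functions are hypothetical stand-ins for the downscaling, detection, cropping, and storage steps detailed with respect to FIG. 3:

```python
def process_raw_frame(raw, downscale, detect_target, crop_on_target, store):
    """One iteration of block 206 for a single camera (hypothetical helpers).

    downscale(frame, name) -> scaled frame at the named resolution
    detect_target(frame)   -> (cx, cy) centroid of a target, or None
    crop_on_target(frame, centroid) -> cropped area of interest
    store(stream_name, frame)       -> appends the frame to the named stream
    """
    scaled_1080 = downscale(raw, "1080p")
    scaled_480 = downscale(raw, "480p")

    centroid = detect_target(raw)        # detection on the raw frame, per the text
    if centroid is not None:
        cropped = crop_on_target(raw, centroid)
        store("cropped", cropped)

    store("1080p", scaled_1080)
    store("480p", scaled_480)
```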


At block 208, the media processor 112(1) performs post-processing optimization. For example, in certain embodiments, the media processor 112(1) can cause the particular video camera that provided the raw video frame to pan and/or tilt so as to center an identified target in its field of view. In some embodiments, the block 208 can be omitted. From block 208, the media processor 112(1) returns to block 202 and proceeds as described above. In general, the process 200 can iteratively execute, for example, at 20 fps, 30 fps, 40 fps, etc. for each video camera of the media capture components 120a. The process 200 can be terminated when the IVMS 102 is shut down, when all video cameras of the media capture components 120a stop recording, upon manual termination by a user, or whenever other suitable termination criteria are satisfied.
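
One way the centering at block 208 could be approximated is by converting the target centroid's offset from the frame center into pan and tilt angles using the camera's field of view; the field-of-view values and the camera-control call below are assumptions for illustration only:

```python
def centering_command(centroid, frame_w, frame_h, h_fov_deg=120.0, v_fov_deg=67.5):
    """Approximate pan/tilt angles (degrees) needed to center the target.

    Assumes angle varies roughly linearly across the field of view, which is
    only a first-order approximation for a wide-angle lens.
    """
    cx, cy = centroid
    pan = ((cx - frame_w / 2) / frame_w) * h_fov_deg    # positive = pan right
    tilt = ((frame_h / 2 - cy) / frame_h) * v_fov_deg   # positive = tilt up
    return pan, tilt

pan_deg, tilt_deg = centering_command((3000, 500), 3960, 2160)
# camera.move(pan_deg, tilt_deg)  # hypothetical camera-control API
```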



FIG. 3 illustrates an example of a process 300 for multiple-resolution storage of video. In certain embodiments, an instance of the process 300 can be executed in real-time for each video frame that is received during the process 200 of FIG. 2. For example, with respect to some or all of the cameras of the media capture components 120a (e.g., front camera, rear camera, side cameras, combinations of foregoing and/or the like), the process 300 can repeatedly execute at approximately 20 fps, 30 fps, 40 fps, etc. In these embodiments, the process 300 represents an example of functionality that can be performed with respect to a particular camera at blocks 204-206 of FIG. 2. For example, the process 300, in whole or in part, can be implemented by one or more of the IVMS 102, the mobile device 104, the media storage system 106, the media processor 112(1), the media processor 112(2), the media processor 112(3), the media capture components 120a, and/or the media capture component 120b. The process 300 can also be performed generally by the system 100. Although any number of systems, in whole or in part, can implement the process 300, to simplify discussion, the process 300 will be described in relation to the IVMS 102 and components thereof.


At block 302, the media processor 112(1) receives a raw video frame from a video camera represented in the media capture components 120a. At block 304, the media processor 112(1) downscales the raw video frame to a plurality of resolutions so as to yield a plurality of scaled video frames. For example, if the resolution of the raw video frame is 3960 by 2160, the media processor 112(1) could downscale the raw video frame to 1080p, 480p, another suitable resolution, etc. In that way, the downscaling at block 304 can yield a scaled video frame for each resolution to which the raw video frame is downscaled.
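
A minimal sketch of the downscaling at block 304 using OpenCV follows; the 1920x1080 and 854x480 target sizes are merely example framings of 1080p and 480p, and aspect-ratio handling (e.g., letterboxing) is omitted for brevity:

```python
import cv2

TARGET_SIZES = {"1080p": (1920, 1080), "480p": (854, 480)}

def downscale_all(raw_frame):
    """Block 304: one scaled copy of the raw frame per configured resolution."""
    return {
        name: cv2.resize(raw_frame, size, interpolation=cv2.INTER_AREA)
        for name, size in TARGET_SIZES.items()
    }
```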


At block 306, the media processor 112(1) can identify a target in the raw video frame or in one of the scaled video frames resulting from block 304. The target can represent, for example, a vehicle, a license plate, a person, etc. In certain embodiments, the target can be located, for example, by a software component in communication with the media processor 112(1), such that the identification at block 306 involves receiving coordinates of a centroid of the target from the software component. In some embodiments, the block 306 can include selecting from among a plurality of potential targets such as, for example, a vehicle, a license plate, a person, etc. In certain cases, the media processor 112(1) can make this selection, for example, by prioritizing identification of people, vehicles, and license plates, sometimes in that order.
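
If the detection component reports several candidates, the prioritization just described (people, then vehicles, then license plates) can be expressed as a simple ranked selection; the detection tuple format below is an assumption, not part of the disclosure:

```python
# Assumed detection format: (label, (cx, cy), confidence) tuples from a detector.
PRIORITY = {"person": 0, "vehicle": 1, "license_plate": 2}

def select_target(detections):
    """Block 306: pick the highest-priority, highest-confidence detection."""
    ranked = sorted(
        (d for d in detections if d[0] in PRIORITY),
        key=lambda d: (PRIORITY[d[0]], -d[2]),
    )
    return ranked[0] if ranked else None

best = select_target([
    ("vehicle", (1800, 1200), 0.91),
    ("person", (2600, 900), 0.87),
])
# best -> ("person", (2600, 900), 0.87): people outrank vehicles in this scheme
```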


At block 308, the media processor 112(1) automatically crops one or more of the raw video frame and the scaled video frames based on the identified location of the target. In certain embodiments, the block 308 can include cropping the chosen video frame to have a narrower field of view. In an example, if the raw video frame provides a 120-degree field of view, the cropped video frame could have a 60-degree field of view that is centered on the identified location of the target. In some embodiments, the automatic cropping can be performed on the raw video frame regardless of which video frame was used to identify the target at block 306. In addition, or alternatively, the automatic cropping can be performed on one or more of the scaled video frames regardless of which video frame was used to identify the target at block 306. Further, in some embodiments, the automatic cropping can be performed on the same video frame in which the target was identified at block 306. In various embodiments, the block 308 can also include performing a digital zoom on the identified location of the target.
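
A sketch of the cropping at block 308 is shown below: a window of roughly half the original width (the 120-degree to 60-degree example above) is centered on the target centroid and clamped so that it stays within the frame. The cropped window can then be downscaled with the same resize step used at block 304 (block 310), and displaying or upscaling it amounts to the digital zoom mentioned above.

```python
def crop_on_target(frame, centroid, crop_w=None, crop_h=None):
    """Block 308: crop a window centered on the target, clamped to the frame."""
    h, w = frame.shape[:2]
    crop_w = crop_w or w // 2   # e.g. roughly 60 of a 120-degree field of view
    crop_h = crop_h or h // 2
    cx, cy = centroid
    x0 = min(max(int(cx) - crop_w // 2, 0), w - crop_w)
    y0 = min(max(int(cy) - crop_h // 2, 0), h - crop_h)
    return frame[y0:y0 + crop_h, x0:x0 + crop_w]
```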


At block 310, the media processor 112(1) can downscale the automatically cropped video frame to one or more resolutions. For example, if the media processor 112(1) has automatically cropped the raw video frame at block 308, the media processor 112(1) can downscale the automatically cropped raw video frame to 1080p, 480p, a combination of the foregoing and/or the like. By way of further example, if the media processor 112(1), at block 308 described above, automatically cropped a scaled video frame (e.g., a scaled video frame that resulted from block 304 described above), the media processor 112(1) can further downscale the cropped and scaled video frame to one or more other resolutions such as, for example, 480p, another suitable resolution, combinations of the foregoing and/or the like. In some embodiments, such as the scenario in which the automatic cropping at block 308 was performed on a scaled video frame, additional downscaling at the block 310 may be omitted. In general, the downscaling at block 310 can yield a scaled video frame for each resolution to which the automatically cropped video frame is downscaled.


At block 312, the media processor 112(1) stores each video frame as part of a corresponding video stream in the storage 115. For example, if the media processor 112(1) is creating a first video stream at 1080p, a second video stream at 480p, a third video stream that represents a 1080p crop and a fourth video stream that represents a 480p crop, the block 312 can include storing a scaled 1080p video frame as part of the first video stream, a scaled 480p video frame as part of the second video stream, a scaled and cropped 1080p video frame as part of the third video stream, and a scaled and cropped 480p video frame as part of the fourth video stream. In that way, each iteration of the process 300 can result in video frames being added to respective video streams in the storage 115.
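
One way block 312 could be realized is with one writer per named stream, appending each newly produced frame as it arrives; OpenCV's VideoWriter is used here purely for illustration (a deployed system might instead use a hardware encoder and container of its choosing):

```python
import cv2

class StreamSet:
    """Maintains one VideoWriter per named stream and appends frames to it."""

    def __init__(self, directory, fps=30):
        self.directory, self.fps, self.writers = directory, fps, {}

    def store(self, name, frame):
        h, w = frame.shape[:2]
        if name not in self.writers:
            # Each stream has a fixed resolution, so the writer is sized once.
            self.writers[name] = cv2.VideoWriter(
                f"{self.directory}/{name}.mp4",
                cv2.VideoWriter_fourcc(*"mp4v"), self.fps, (w, h))
        self.writers[name].write(frame)

    def close(self):
        for writer in self.writers.values():
            writer.release()
```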


Advantageously, in certain embodiments, the process 300, when iteratively executed as part of an overall video-recording process such as the process 200 of FIG. 2, can result in improved recording, storage, and playback decisions. Consider an example in which raw video from a given video camera results in four stored video streams: an uncropped 1080p video stream, an uncropped 480p video stream, a cropped 1080p video stream, and a cropped 480p video stream. According to this example, the uncropped 480p video stream and/or the cropped 480p video stream may be particularly suitable for live streaming to the media storage system 106 or another computer system such as a mobile device, with the cropped 480p video stream providing a greater opportunity for transmission efficiency. Also according to this example, the cropped and/or uncropped 1080p video streams may be particularly suitable for evidentiary use at a later time (e.g., after a later, non-real-time transmission to the media storage system 106 or another component). Furthermore, in some embodiments, as storage resources are depleted and/or on a periodic basis, the IVMS 102 can strategically delete higher-resolution video streams (e.g., the cropped and/or uncropped 1080p video streams) that are not marked as events or that correspond to certain types of low-priority events (e.g., traffic stops). In addition, or alternatively, users such as police officers can make informed decisions as to which streams to retain and which streams to delete.
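
A pruning policy of the kind described could be as simple as deleting high-resolution streams whose metadata does not mark them as high-priority events once free storage drops below a threshold; the record fields used below are a hypothetical shape for the stream metadata discussed with respect to FIG. 1:

```python
import os
import shutil

HIGH_RES = {"1080p", "1080p_cropped"}
LOW_PRIORITY_EVENTS = {None, "traffic_stop"}

def prune_high_res(records, storage_root, min_free_bytes):
    """Delete deletable high-resolution streams until enough space is free.

    Each record is assumed to be a dict with 'resolution', 'event_type', and
    'file_path' keys; updating the metadata store afterward is omitted here.
    """
    for rec in records:
        if shutil.disk_usage(storage_root).free >= min_free_bytes:
            break
        if rec["resolution"] in HIGH_RES and rec["event_type"] in LOW_PRIORITY_EVENTS:
            os.remove(rec["file_path"])
```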


As further technical advantages, in some embodiments, the media processor(s) 112 can create more detailed and more relevant video via target and region-of-interest detection as described above. In these embodiments, maintaining raw video frames from video cameras may not be feasible due to the limited storage resources of a mobile or portable storage environment. However, as described above, in certain cases, target identification can be performed in real-time on the raw video frames before the raw video frames are discarded. By performing the target identification on the raw video frames in real-time, analysis, cropping and storage can be based on the greater video detail contained within the raw video frames. In these embodiments, target identification and multiple-resolution storage based thereon can be better facilitated. For example, even if only a cropped, lower-resolution video stream, such as a 480p video stream, is ultimately retained, the cropped, lower-resolution 480p video stream can be an automatic result of a real-time, frame-by-frame analysis and strategic pruning of the raw video frames before access to the raw video frames is lost. Consequently, the cropped, lower-resolution 480p video stream can more accurately represent a relevant portion of video using fewer storage resources as compared, for example, to a straight 480p scaling of the raw video frames.


For illustrative purposes, the processes 200 and 300 are described with respect to raw video frames received by the media processor 112(1) of the IVMS 102 of FIG. 1. It should be appreciated that the processes 200 and 300 can also be performed, for example, by the media processor 112(2) and/or the media processor 112(3), although video resolutions different from the examples described above may be used. For example, with respect to the mobile device 104 of FIG. 1, the media processor 112(2) may perform processes similar to those of the processes 200 and 300 with respect to raw video supplied by the media capture component 120b. By way of further example, with respect to the media storage system 106 of FIG. 1, the media processor 112(3) may perform processes similar to those of the processes 200 and 300 with respect to raw video and/or scaled video supplied by the IVMS 102 over the communication link 110, although the video resolution of such video may be somewhat lower (e.g., 1080p or 480p) to account for the cost of transmitting over the communication link 110.


Depending on the embodiment, certain acts, events, or functions of any of the algorithms described herein can be performed in a different sequence, can be added, merged, or left out altogether (e.g., not all described acts or events are necessary for the practice of the algorithms). Moreover, in certain embodiments, acts or events can be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially. Although certain computer-implemented tasks are described as being performed by a particular entity, other embodiments are possible in which these tasks are performed by a different entity.


Conditional language used herein, such as, among others, “can,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or states. Thus, such conditional language is not generally intended to imply that features, elements and/or states are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without author input or prompting, whether these features, elements and/or states are included or are to be performed in any particular embodiment.


While the above detailed description has shown, described, and pointed out novel features as applied to various embodiments, it will be understood that various omissions, substitutions, and changes in the form and details of the devices or algorithms illustrated can be made without departing from the spirit of the disclosure. As will be recognized, the processes described herein can be embodied within a form that does not provide all of the features and benefits set forth herein, as some features can be used or practiced separately from others. The scope of protection is defined by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims
  • 1. A method comprising, by a computer system: continuously receiving, from a plurality of cameras, raw video frames at an initial resolution, wherein the plurality of cameras are arranged to provide a 360-degree view relative to a point of reference; for each camera of the plurality of cameras, for each raw video frame, as the raw video frame is received: downscaling the raw video frame to a first resolution to yield a first scaled video frame; downscaling the raw video frame to a second resolution distinct from the first resolution to yield a second scaled video frame; identifying a location of a target in at least one of the raw video frame, the first scaled video frame, and the second scaled video frame; cropping at least one video frame selected from among the raw video frame, the first scaled video frame, and the second scaled video frame based, at least in part, on the location of the target; downscaling the cropped at least one video frame to a third resolution to yield a third scaled video frame; and storing the first scaled video frame, the second scaled video frame, and information related to the cropped at least one video frame as part of a first video stream, a second video stream, and a third video stream, respectively; and blending together a video stream of each of the plurality of cameras into a 360-degree video stream, wherein the video stream of each of the plurality of cameras comprises at least one of the first video stream, the second video stream, and the third video stream.
  • 2. The method of claim 1, wherein, for at least one camera of the plurality of cameras, the identifying comprises identifying the location of the target in the raw video frame.
  • 3. The method of claim 1, wherein the third resolution is the same as at least one of the first resolution and the second resolution.
  • 4. The method of claim 1, comprising: downscaling the cropped at least one video frame to a fourth resolution to yield a fourth scaled video frame; and storing the fourth scaled video frame as part of a fourth video stream.
  • 5. The method of claim 1, wherein the storing the information related to the cropped at least one video frame comprises storing the cropped at least one video frame as part of the third video stream.
  • 6. The method of claim 1, wherein the cropping comprises cropping the at least one video frame to a narrower field of view.
  • 7. The method of claim 1, wherein the at least one video frame is the raw video frame.
  • 8. The method of claim 1, wherein the target is selected from the group consisting of person, vehicle, and license plate.
  • 9. The method of claim 1, wherein the identifying comprises selecting from among a plurality of potential targets.
  • 10. The method of claim 9, wherein the selecting prioritizes identification of people over other potential targets.
  • 11. The method of claim 1, comprising causing at least one of the plurality of cameras to at least one of pan and tilt based, at least in part, on the location of the target.
  • 12. A system comprising a processor and memory, wherein the processor and memory in combination are operable to implement a method comprising: continuously receiving, from a plurality of cameras, raw video frames at an initial resolution, wherein the plurality of cameras are arranged to provide a 360-degree view relative to a point of reference; for each camera of the plurality of cameras, for each raw video frame, as the raw video frame is received: downscaling the raw video frame to a first resolution to yield a first scaled video frame; downscaling the raw video frame to a second resolution distinct from the first resolution to yield a second scaled video frame; identifying a location of a target in at least one of the raw video frame, the first scaled video frame, and the second scaled video frame; cropping at least one video frame selected from among the raw video frame, the first scaled video frame, and the second scaled video frame based, at least in part, on the location of the target; downscaling the cropped at least one video frame to a third resolution to yield a third scaled video frame; and storing the first scaled video frame, the second scaled video frame, and information related to the cropped at least one video frame as part of a first video stream, a second video stream, and a third video stream, respectively; and blending together a video stream of each of the plurality of cameras into a 360-degree video stream, wherein the video stream of each of the plurality of cameras comprises at least one of the first video stream, the second video stream, and the third video stream.
  • 13. The system of claim 12, wherein, for at least one camera of the plurality of cameras, the identifying comprises identifying the location of the target in the raw video frame.
  • 14. The system of claim 12, wherein the third resolution is the same as at least one of the first resolution and the second resolution.
  • 15. The system of claim 12, the method comprising: downscaling the cropped at least one video frame to a fourth resolution to yield a fourth scaled video frame; and storing the fourth scaled video frame as part of a fourth video stream.
  • 16. The system of claim 12, wherein the storing the information related to the cropped at least one video frame comprises storing the cropped at least one video frame as part of the third video stream.
  • 17. The system of claim 12, wherein the identifying comprises selecting from among a plurality of potential targets.
  • 18. The system of claim 17, wherein the selecting prioritizes identification of people over other potential targets.
  • 19. The system of claim 12, the method comprising causing at least one of the cameras to at least one of pan and tilt based, at least in part, on the location of the target.
  • 20. A computer-program product comprising a non-transitory computer-usable medium having computer-readable program code embodied therein, the computer-readable program code adapted to be executed to implement a method comprising: continuously receiving, from a plurality of cameras, raw video frames at an initial resolution, wherein the plurality of cameras are arranged to provide a 360-degree view relative to a point of reference; for each camera of the plurality of cameras, for each raw video frame, as the raw video frame is received: downscaling the raw video frame to a first resolution to yield a first scaled video frame; downscaling the raw video frame to a second resolution distinct from the first resolution to yield a second scaled video frame; identifying a location of a target in at least one of the raw video frame, the first scaled video frame, and the second scaled video frame; cropping at least one video frame selected from among the raw video frame, the first scaled video frame, and the second scaled video frame based, at least in part, on the location of the target; downscaling the cropped at least one video frame to a third resolution to yield a third scaled video frame; and storing the first scaled video frame, the second scaled video frame, and information related to the cropped at least one video frame as part of a first video stream, a second video stream, and a third video stream, respectively; and blending together a video stream of each of the plurality of cameras into a 360-degree video stream, wherein the video stream of each of the plurality of cameras comprises at least one of the first video stream, the second video stream, and the third video stream.
CROSS-REFERENCE TO RELATED APPLICATIONS

This patent application claims priority from, and incorporates by reference the entire disclosure of, U.S. Provisional Patent Application No. 62/319,364 filed on Apr. 7, 2016.

Provisional Applications (1)
Number Date Country
62319364 Apr 2016 US