Systems, apparatuses and methods for multiplexing and synchronizing audio recordings

Information

  • Patent Grant
  • Patent Number
    10,152,859
  • Date Filed
    Tuesday, February 21, 2017
  • Date Issued
    Tuesday, December 11, 2018
Abstract
Techniques for multiplexing audio recordings are disclosed, including systems and methods for multiplexing and synchronizing audio recordings using data markers in the recorded files.
Description
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH

Not applicable.


TECHNICAL FIELD OF THE INVENTION

This disclosure relates generally to techniques for processing audio recordings. More particularly, but not by way of limitation, this disclosure relates to systems and methods for multiplexing and synchronizing audio recordings.


BACKGROUND

Today's law enforcement officers have a wide range of technology at their disposal to perform their tasks. However, while technology has given officers powerful tools to do their jobs, it has also added complexity for officers on patrol. Officers are typically burdened with wearing and maintaining various pieces of gear while on patrol. This gear weighs the officer down, and the electronic components generate heat that creates discomfort, particularly in hot summer conditions. Recently, officers have begun to use body-worn cameras (BWCs) to capture on-scene video while on patrol.


The BWCs used by officers are often paired with a separate microphone to transmit on-scene audio. The captured audio is typically transmitted wirelessly to a receiver in the officer's patrol car. This transmission from the body-worn microphone to the patrol car is subject to signal interference, and to signal loss when the microphone is out of range. These limitations can hinder the use of the collected audio as evidence in legal proceedings.


A need remains for consolidation of wearable equipment and improved techniques to collect, multiplex, and synchronize audio recordings for law enforcement purposes and other functions.


SUMMARY

In view of the aforementioned problems and trends, embodiments of the present invention provide systems and methods for multiplexing and synchronizing audio recordings.


According to an aspect of the invention, a method includes recording audio data using a first device, wherein the first device is portable, transferring the recorded audio data from the first device to a second device configured to receive audio data, and multiplexing the transferred audio data with at least one data file in the second device to synchronize the transferred audio data with the at least one data file.


According to another aspect of the invention, a system includes a first device configured to record audio data, wherein the first device is portable, and a second device configured: (a) to hold or store at least one data file, and (b) to receive audio data from the first device, wherein the second device is configured to multiplex audio data the second device receives from the first device with at least one data file held by the second device, to synchronize the received audio data with the at least one data file.


According to another aspect of the invention, a method includes recording audio data using a first device, wherein the first device is portable, transferring the recorded audio data from the first device to a second device containing at least one data file, detecting at least one marker in either of the transferred audio data or the at least one data file, and using the detected at least one marker, synchronizing the transferred audio data with the at least one data file.


According to another aspect of the invention, a method includes recording audio data using a first device, wherein the first device is portable, simultaneously with the recording audio data using the first device, wirelessly transmitting the recorded audio data from the first device to a second device configured to receive audio data, and recording the transmitted audio data using the second device.


Other aspects of the embodiments described herein will become apparent from the following description and the accompanying drawings, illustrating the principles of the embodiments by way of example only.





BRIEF DESCRIPTION OF THE DRAWINGS

The following figures form part of the present specification and are included to further demonstrate certain aspects of the present claimed subject matter, and should not be used to limit or define the present claimed subject matter. The present claimed subject matter may be better understood by reference to one or more of these drawings in combination with the description of embodiments presented herein. Consequently, a more complete understanding of the present embodiments and further features and advantages thereof may be acquired by referring to the following description taken in conjunction with the accompanying drawings, in which like reference numerals may identify like elements, wherein:



FIG. 1, in accordance with some embodiments of the present disclosure, depicts a portable camera device;



FIG. 2, in accordance with some embodiments of the present disclosure, depicts the portable camera device of FIG. 1 in use as a body-worn-camera;



FIG. 3, in accordance with some embodiments of the present disclosure, depicts a schematic of a portable camera device docked in a docking module;



FIG. 4, in accordance with some embodiments of the present disclosure, depicts a vehicle with an onboard computer and camera devices;



FIG. 5, in accordance with some embodiments of the present disclosure, depicts a processing flow chart for audio data processing;



FIG. 6 is a flow chart depicting, at a top level, a method in accordance with some embodiments of the present disclosure;



FIG. 7 is a flow chart depicting, at a top level, another method in accordance with some embodiments of the present disclosure; and



FIG. 8 is a flow chart depicting, at a top level, another method in accordance with some embodiments of the present disclosure.





NOTATION AND NOMENCLATURE

Certain terms are used throughout the following description and claims to refer to particular system components and configurations. As one skilled in the art will appreciate, the same component may be referred to by different names. This document does not intend to distinguish between components that differ in name but not function. In the following discussion and in the claims, the terms “including” (and the like) and “comprising” (and the like) are used in an open-ended fashion, and thus should be interpreted to mean “including, but not limited to . . . ” Also, the term “couple,” “coupled,” or “linked” is intended to mean either an indirect or direct electrical, mechanical, or wireless connection. Thus, if a first device couples to or is linked to a second device, that connection may be through a direct electrical, mechanical, or wireless connection, or through an indirect electrical, mechanical, or wireless connection via other devices and connections.


As used throughout this disclosure the term “computer” encompasses special purpose microprocessor-based devices such as a digital video surveillance system primarily configured for executing a limited number of applications, and general purpose computers such as laptops, workstations, or servers which may be configured by a user to run any number of off the shelf or specially designed software applications. Computer systems and computer devices will generally interact in the same way with elements and aspects of disclosed embodiments. This disclosure also refers to memory or storage devices and storage drives interchangeably. In general, memory or a storage device/drive represents a medium accessible by a computer (via wired or wireless connection) to store data and computer program instructions. It will also be appreciated that use of the term “microprocessor” in this disclosure encompasses one or more processors.


As used throughout this disclosure the term “record” is interchangeable with the term “store” and refers to the retention of data in a storage medium designed for long-term retention (e.g., solid state memory, hard disk, CD, DVD, memory card, etc.), as compared to the temporary retention offered by conventional memory means such as volatile RAM. The temporary retention of data, audio data or otherwise, is referred to herein as the “holding” of data or as data being “held.”


The terms “multiplex” and “multiplexing” refer to the incorporation or combination of a specified file, audio track (i.e. audio communication signal), and/or data with another file, audio track, or other data.


As used throughout this disclosure the terms “video data” and “visual data” refer to still image data, moving image data, or both still and moving image data, as traditionally understood. The term “audiovisual data” encompasses not only video or visual data but also audio data and/or metadata. That is, audiovisual data may include visual or video data, audio data, metadata, or any combination of these three. This audiovisual data may be compressed using industry standard compression technology (e.g., Motion Picture Expert Group (MPEG) standards, Audio Video Interleave (AVI), etc.) or another proprietary compression or storage format. The terms “camera,” “camera device,” and the like are understood to encompass devices configured to record or capture audiovisual data. Such devices may also be referred to as video recording devices, or the like. Metadata may be included in the files containing the audiovisual (or audio, or video) data or in separate, associated data files, that may be configured in a structured text format such as eXtensible Markup Language (XML).


The term “metadata” refers to information associated with the recording of audio, video, or audiovisual data, or information included in the recording of such data, and metadata may contain information describing attributes associated with one or more acts of actual recording of audio, video, or audiovisual data. That is, the metadata may describe who (e.g., officer ID) or what (e.g., manual or automatic trigger) initiated or performed the recording. The metadata may also describe where the recording was made. For example, location may be obtained using global positioning system (GPS) information. The metadata may also describe why the recording was made (e.g., event tag describing the nature of the subject matter recorded). The metadata may also describe when the recording was made, using timestamp information obtained in association with GPS information or from an internal clock, for example. Metadata may also include information relating to the device(s) used to capture or process information (e.g. a unit serial number, mac address, etc.). Metadata may also include telemetry or other types of data. From these types of metadata, circumstances that prompted the recording may be inferred and may provide additional information about the recorded information. This metadata may include useful information to correlate recordings from multiple distinct recording systems as disclosed herein. This type of correlation information may assist in many different functions (e.g., query, data retention, chain of custody, precise synchronization and so on).
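

By way of illustration only, the Python sketch below shows how metadata of the kinds listed above might be written to an XML sidecar file associated with a recording. The element names, attribute names, and values are assumptions chosen for the example, not a schema prescribed by this disclosure.

```python
# Illustrative sketch only: field names and structure are assumptions, not a
# prescribed schema. Writes the kinds of metadata described above (who, what,
# where, when, why, plus device information) to an XML sidecar file.
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def write_metadata_sidecar(path: str) -> None:
    root = ET.Element("recording_metadata")
    ET.SubElement(root, "officer_id").text = "1234"        # who performed the recording
    ET.SubElement(root, "trigger").text = "manual"         # what initiated it (manual/automatic)
    ET.SubElement(root, "unit_serial").text = "BWC-00042"  # capturing device
    gps = ET.SubElement(root, "gps")                       # where the recording was made
    gps.set("lat", "29.7604")
    gps.set("lon", "-95.3698")
    ET.SubElement(root, "timestamp").text = datetime.now(timezone.utc).isoformat()  # when
    ET.SubElement(root, "event_tag").text = "traffic_stop" # why (nature of subject matter)
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)

write_metadata_sidecar("recording_0001.xml")
```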


As used throughout this disclosure the term “portable” refers to the ability to be easily carried or moved. The term encompasses a wearable device (i.e. a device that can be worn or carried by a person or an animal).


The term “cloud” refers to an area or environment generally accessible across a communication network (which may or may not be the Internet) that provides shared computer storage and/or processing resources and/or data to computers and other devices. A “cloud” may refer to a public cloud, private cloud, or combination of a public and private cloud (e.g., hybrid cloud). The term “public cloud” generally refers to a cloud storage environment or area that is maintained by an unrelated third party but still has certain security measures in place to ensure that access is only allowed to authorized users. The term “private cloud” generally refers to a cloud storage environment or area that is maintained by a related entity or that is maintained on physical computer resources that are separate from any unrelated users.


DETAILED DESCRIPTION

The foregoing description of the figures is provided for the convenience of the reader. It should be understood, however, that the embodiments are not limited to the precise arrangements and configurations shown in the figures. Also, the figures are not necessarily drawn to scale, and certain features may be shown exaggerated in scale or in generalized or schematic form, in the interest of clarity and conciseness. The same or similar parts may be marked with the same or similar reference numerals.


While various embodiments are described herein, it should be appreciated that the present invention encompasses many inventive concepts that may be embodied in a wide variety of contexts. The following detailed description of exemplary embodiments, read in conjunction with the accompanying drawings, is merely illustrative and is not to be taken as limiting the scope of the invention, as it would be impossible or impractical to include all of the possible embodiments and contexts of the invention in this disclosure. Upon reading this disclosure, many alternative embodiments of the present invention will be apparent to persons of ordinary skill in the art. The scope of the invention is defined by the appended claims and equivalents thereof.


Illustrative embodiments of the invention are described below. In the interest of clarity, not all features of an actual implementation are necessarily described for each embodiment disclosed in this specification. In the development of any such actual embodiment, numerous implementation-specific decisions may need to be made to achieve the design-specific goals, which may vary from one implementation to another. It will be appreciated that such a development effort, while possibly complex and time-consuming, would nevertheless be a routine undertaking for persons of ordinary skill in the art having the benefit of this disclosure. It will also be appreciated that the parts and component dimensions of the embodiments disclosed herein may not be drawn to scale.



FIG. 1 depicts an embodiment of a first device 10 in accordance with this disclosure. In this embodiment, the first device 10 comprises a portable wearable camera 12. The camera 12 is configured with a fixed-focus or auto-focus lens 16, a microphone 18, and a record activation button 20, which permits a user to manually activate or deactivate the camera 12 to record audiovisual data (i.e., audio data captured via the microphone 18 and video data captured via the lens 16). Some embodiments may be configured to allow for separate manual activation/deactivation of the microphone 18 (audio recording functionality) and lens 16 (video recording functionality), respectively, permitting a user to capture only audio data or only video data as desired. Thus, camera 12 is configurable/operable to record/store/hold solely audio data, solely video data, or a combination thereof, which may be referred to as audiovisual data; in all of these cases, camera 12 is also operable/configurable, but not required, to record/store/hold metadata. As described below, where audiovisual data is recorded/stored/held, this may be with or without a separate audio and/or video track. Some embodiments may also include a programmable function button 21 that provides a user the ability to select among different programmed/programmable modes of operation. The camera 12 may be configured with an internal buffer 22 (e.g. RAM) to temporarily hold captured audio, video and/or audiovisual data, and memory 24 (e.g. flash drive) to store captured audio, video and/or audiovisual data. The camera 12 may be configured to automatically create the respective data files (i.e. audio, video, audiovisual) as the data is captured/recorded. Some embodiments may also include an audio speaker 25 to provide voice messages and/or an audible indication during various modes of operation. For example, the speaker 25 may be configured for activation: when camera 12 recording starts or stops, to provide a camera low battery alert, to provide a camera memory full alert, to indicate successful camera pairing with another device, to provide warning beeps that may be sent from another device, etc. In some embodiments, the audible indication may indicate generically an event or may indicate the specific event (not merely that one of several types of events occurred). That is, different audible sounds (including voice messages) may be generated for respective different event types (e.g., one beep for event type x, two beeps for event type y; a buzzer sound for event type x, a ring tone for event type y, a voice message for event type z; a voice message for event type x, a different voice message for event type y; etc.). It will be appreciated by those skilled in the art that camera 12 embodiments of this disclosure can be implemented with various types of additional sensors to capture/store desired information (e.g. temperature) and with conventional data storage means as known in the art. Embodiments of the camera 12 are also equipped with internal Bluetooth® circuitry and associated electronics or other wireless technology to permit wireless communication and/or signal transfer to or from the camera 12. Bluetooth® pairing may be manually activated by a button 26 on the camera 12. Some embodiments may also include a light-emitting diode (LED) 27 to indicate when the camera 12 is recording or performing other functions.
Some embodiments are also equipped with Global Positioning System and real-time clock circuitry 28 to provide GPS data and current time information. Suitable camera devices 12 that may be used to implement embodiments of this disclosure include the devices commercially available from COBAN Technologies, Inc., in Houston, Tex. (http://www.cobantech.com).



FIG. 2 depicts the first device 10 camera 12 being worn by a user 30 (e.g. a police officer) as a BWC. This implementation provides a user the ability to capture on-scene audiovisual data with the camera 12. For law enforcement, the wearable camera 12 provides the officer complete freedom to move while the camera 12 records audio, video, or audiovisual data as desired. The audio, video, and/or audiovisual data is stored in the camera 12 memory 24. The camera 12 may be configured to automatically include metadata (e.g., time stamp, watermark, GPS data, unit ID, officer ID, unique identifier, etc.) in the data recorded.


In some embodiments, the first device 10 camera 12 may be configured to wirelessly sync (e.g., via Bluetooth®, RuBee, Wi-Fi, 3G, 4G, LTE, etc.) with other data gathering/telemetry devices within a set range or proximity. Such other devices may include, for example, biometric data sensors, geospatial, distancing and orientation data devices (apart from that provided by GPS), environmental telemetry devices, etc. In such embodiments, the camera 12 can wirelessly receive data transmitted from the other devices and store the data in memory 24 as metadata. The data from the other devices can be recorded in sync with the recording of audio/video or independently (e.g. when the camera 12 is not holding/storing audio/video). The camera 12 may be configured to sync with other devices automatically or via manual activation. All of this additional data from other devices can be multiplexed and synchronized with selected data using the techniques disclosed herein.


In some embodiments, the audio, video, and/or audiovisual data captured by the camera 12 is temporarily held in the buffer 22 in a continuous circulating stream to perform “pre-event” circular buffering, without storing the data to memory 24 until the camera 12 is activated to store the data to memory 24 by a wireless command or by manual activation via the record “on/off” button 20. This “smart buffering” feature provides a circular buffer that temporarily holds the captured data in configurable file segment sizes (e.g. 1-5 minute chunks) until activated to store the data to memory 24 or the data is automatically deleted as new data is captured and streamed into the buffer 22 in a continuous manner. When activated to store the data, time points are marked in the data files. In some embodiments, if the camera 12 is activated or triggered to store the data, the camera 12 can be configured to export the data (in the above-mentioned file segments) to a removable media and/or a separate folder in the memory 24 sector where the circular recording is written. In some embodiments, the pre-event buffering can optionally be configured to continually write directly to memory 24 in a circulating stream.
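

A minimal sketch of this pre-event circular buffering concept is shown below, assuming fixed-size captured segments and a simple in-memory store; the segment sizing, trigger wiring, and storage interface are illustrative assumptions rather than the actual device firmware.

```python
# Minimal sketch of "pre-event" circular buffering: the most recent segments
# are held in a bounded buffer and older segments are discarded as new data
# streams in, until a trigger stores the held segments with time points marked.
from collections import deque
from datetime import datetime, timezone

class PreEventBuffer:
    def __init__(self, max_segments: int = 5):
        # Bounded deque: when full, appending silently drops the oldest segment.
        self._segments = deque(maxlen=max_segments)

    def capture_segment(self, data: bytes) -> None:
        # Each held segment is tagged with a capture-time marker.
        self._segments.append((datetime.now(timezone.utc).isoformat(), data))

    def flush_to_storage(self, store: list) -> None:
        # On a record trigger (button press or wireless command), the held
        # segments are written out with their time points, then cleared.
        store.extend(self._segments)
        self._segments.clear()
```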


When not being worn, the camera 12 can be docked into a docking module 32, as depicted in FIG. 3. Suitable docking modules 32 that may be used to implement embodiments of this disclosure include the devices commercially available from COBAN Technologies, Inc., in Houston, Tex. (http://www.cobantech.com).


For law enforcement applications, the docking module 32 can be mounted in a police vehicle 34, as depicted in FIG. 4. Embodiments can be implemented with the docking module 32 disposed on any type of vehicle. In such embodiments, the docking module 32 is coupled to a second device 36 comprising an in-car video (ICV) camera 40 that is disposed in the vehicle 34. The ICV camera 40 is configured to record on-scene audio, video, or audiovisual data. The audio, video, and/or audiovisual data captured by the ICV camera 40 is stored on internal and/or removable storage media. In some embodiments, the ICV camera 40 is configured to temporarily hold the recorded audio, video, and/or audiovisual data in a buffer in a continuous circulating stream to perform “pre-event” circular buffering, without storing the data until triggered to store the data by a command signal or by manual user activation. Similar to the “smart buffering” feature provided by some embodiments of the first device 10 camera 12, this feature also provides a circular buffer that temporarily holds the captured data in configurable file segment sizes (e.g. 1-5 minute chunks) until the ICV camera 40 is activated to store the data to memory or the data is automatically deleted as new data is captured and streamed into the buffer in a continuous manner. When activated to store the data, time points are marked in the data files. In some embodiments, if the ICV camera 40 is activated or triggered to store the data, the ICV camera 40 can be configured to export the data (in the above-mentioned file segments) to the removable media and/or a separate folder in the memory sector where the circular recording is written. In some embodiments, the pre-event buffering can optionally be configured to continually write directly to memory in a circulating stream.


In operation, the first device 10 camera 12 can be used to record desired audio data (whether the audio is captured as audiovisual data or solely as audio data). In some situations, the ICV camera 40 will capture relevant audio data via a wired or wireless microphone source, while in other situations it may not (e.g., when the officer is outside of the vehicle 34 performing a traffic stop), and the first device 10 camera 12 may record the on-scene audio data when worn by the officer as a BWC. Thus, on-scene video data and audio data are obtained by both the ICV camera 40 and the first device 10 camera 12. However, separate playback of the recorded data files from the ICV camera 40 and the first device 10 camera 12 may not be in sync. This mismatch in synchronization may be particularly exacerbated when the ICV camera 40 and the first device 10 camera 12 are each activated at different times or intermittently during a recording event. The embodiments of this disclosure provide a solution to this problem.


Embodiments of the second device 36 are implemented with software configured to extract and/or multiplex the audio data recorded by the first device 10 with the file container of the data file(s) in the second device 36 memory (e.g. ICV camera 40). The data file(s) stored in the second device 36 may include audio, video, metadata, and/or audiovisual files. Some embodiments of the second device 36 are configured to multiplex the audio data recorded by the first device 10 to synchronize the audio data with the relevant data file(s) (i.e., audio data, video data, metadata, and/or audiovisual data) in the file container of the second device 36. It will be appreciated by those skilled in the art that data files (audio data, video data, metadata, and/or audiovisual data) can be multiplexed and synchronized with multiple devices 10, 36 and other audiovisual sources, and in some cases linked to several devices and/or sources, that were on the scene, for later synchronization. Such embodiments provide for enhanced audio and video data review and may also be used to identify and create a map of the location where the devices/sources were located during an event.



FIG. 5 depicts a flow chart 50 of the audio data processing according to some embodiments of this disclosure. At module 52, as previously described, the first device 10 camera 12 records audio data. The audio data may be captured in an audiovisual recording or solely as an audio data recording. At module 54, the recorded audio data is uploaded or transferred to the second device 36 (e.g. ICV camera 40). In some embodiments, the docking module 32 is configured with POGO pins that mate with a receptacle in the first device 10 camera 12 to facilitate the data transfer between the camera 12 and the second device 36 when the camera 12 is docked in the docking module 32. In other embodiments, the first device 10 camera 12 is configured to wirelessly transfer the audio data file to the second device 36 using a conventional communication standard (e.g., RuBee, Wi-Fi, 3G, 4G, LTE, etc.). In yet other embodiments, the first device 10 camera 12 is configured for direct transfer of the audio data via a cable connection (e.g. a USB direct connect) with the second device 36. At module 56, the second device 36 analyzes the transferred audio data and one or more data files in the second device's 36 file container to search for and detect markers present in the respective data. For example, with embodiments wherein the second device 36 comprises an ICV camera 40, the camera processor is configured to execute instructions to analyze the audio data transferred from the first device 10 and the file container data file(s) (e.g., containing audio, video or audiovisual data) captured by ICV camera 40. The markers may consist of: the time stamps, a watermark, other metadata such as unit ID, officer ID, a unique identifier (“event identifier”) for the recording event (e.g., DUI, speeding, etc.), GPS data, telemetry, etc. These markers may be automatically embedded in the respective recorded data and/or a separate XML file by the portable camera 12 and the ICV camera 40. Customized or event-specific markers (e.g., accident scene, DUI, etc.) can also be added to the data by the officer by (e.g., manual) entry via an ICV camera 40 touch display 42, a keyboard, a smartphone, or a tablet device. These added markers can be automatically associated with any device/file that is to be synchronized. Some portable camera 12 embodiments may also be configured with software to allow a user to add customized/event-specific markers to the camera 12 audio data file. In some embodiments, customized/event-specific markers may be preprogrammed and selectable via the programmable button or the aforementioned methods. Clock synchronization between the first device 10 and the second device 36 may be performed by the GPS circuitry/clock 28, by direct sync via the USB connection to the second device 36, or by sending real-time clock (RTC) sync signals similar to Network Time Protocol (NTP) time servers.
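

The sketch below illustrates one way the marker-detection step of module 56 might be realized, assuming markers have already been extracted from each recording (or its associated XML) into simple kind/value pairs; the Marker type and the particular marker kinds shown are assumptions made for the example.

```python
# Hypothetical sketch of marker detection: collect the markers embedded in (or
# associated with) each recording, then find markers common to both.
from typing import NamedTuple

class Marker(NamedTuple):
    kind: str    # e.g. "timestamp", "officer_id", "unit_id", "event_identifier", "gps"
    value: str

def detect_common_markers(bwc_markers: set, icv_markers: set) -> set:
    # Markers present in both recordings are candidate synchronization points.
    return bwc_markers & icv_markers

bwc = {Marker("timestamp", "2017-02-21T14:05:00Z"), Marker("event_identifier", "DUI")}
icv = {Marker("timestamp", "2017-02-21T14:05:00Z"), Marker("unit_id", "ICV-40")}
print(detect_common_markers(bwc, icv))  # prints the shared timestamp marker
```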


At module 58, once the markers have been detected in the respective data files, the audio data transferred from the first device 10 camera 12 is multiplexed and synchronized with the data file(s) in the second device ICV camera 40 file container. In some embodiments, the data files are multiplexed by linking the files together (via the software) such that opening or “playing” one file simultaneously opens/plays the linked file. In some embodiments, the synchronization is performed by: (a) selecting one of the data files (i.e., either the transferred audio data file or a data file in the ICV camera 40 file container); (b) rolling back in the selected file to a specific marker point (e.g. the earliest time mark); and (c) automatically synchronizing the files by marking points in the selected file where markers match with the data in the other data file. In an application, an officer can record the on-scene audio with the portable camera 12 affixed to his vest as a BWC. After the event is over, the officer can immediately transfer the audio data recorded with the camera 12 to the ICV camera 40, as described herein, or the officer can perform the data transfer at a later time (e.g. upon return to the station at the end of his shift). After the recorded audio data has been transferred from the camera 12 to the ICV camera 40 storage, the ICV camera 40 can roll back the transferred audio data to the proper time stamp and automatically multiplex and synchronize the data files by marking points in the transferred audio data where the markers match the data in the audiovisual file stored in the ICV camera 40. At module 60, the ICV camera 40 may create a unique identifier to identify the multiplexed data so that the synchronized data files can be logged in an audit trail and stored as desired. This way, when either data file is searched (i.e. the audio data recorded with the portable camera 12 or the data recorded with the ICV camera 40), the associated data file is automatically linked to be played back simultaneously and in sync if needed. Synchronous play from multiple data files can then be activated as desired. It will be appreciated by those skilled in the art that embodiments of this disclosure may be implemented using conventional software platforms and coding configured to perform the techniques and processes as disclosed herein.
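

A sketch of the roll-back-and-match synchronization of module 58, together with the unique-identifier step of module 60, follows; the marker-to-offset mapping and helper names are assumptions introduced for illustration, not the disclosed software itself.

```python
# Illustrative sketch: roll the selected file back to its earliest marker,
# record the offsets at which its markers match markers in the other file,
# and tie the multiplexed pair together with a unique identifier.
import uuid

def synchronize(selected_markers: dict, other_markers: dict) -> dict:
    """Each argument maps a marker value (e.g. a time stamp) to the offset,
    in seconds, at which that marker occurs in the corresponding file."""
    # (b) roll back the selected file to its earliest marker point.
    start = min(selected_markers.values())
    # (c) mark points in the selected file where its markers match the other file.
    sync_points = [
        {"marker": m, "selected_offset": offset - start, "other_offset": other_markers[m]}
        for m, offset in sorted(selected_markers.items(), key=lambda kv: kv[1])
        if m in other_markers
    ]
    # Module 60: a unique identifier ties the files together so the synchronized
    # pair can be logged in an audit trail and played back simultaneously.
    return {"multiplex_id": str(uuid.uuid4()), "sync_points": sync_points}
```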


In some embodiments where the first device 10 camera 12 and the ICV camera 40 are each configured to provide the “pre-event” circular buffering described above, the synchronization step of module 58 may be performed in a slightly different manner. With such embodiments, the selected data file that is rolled back (step (b) above) is the data file with the shortest recording time (duration). In other words, the selecting is performed based on comparing respective recording durations of (i) the audio data file transferred from the first device 10 camera 12 (BWC) and (ii) the at least one data file in the ICV camera. In this scenario the files may get synchronized starting points while maintaining the original starting points for each file. This ensures that the multiplexed data files are synced to the longest event of interest.
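

For this pre-event buffered case, the selection rule can be sketched as follows; the file representation (a name plus a duration in seconds) is an assumption for illustration.

```python
def select_file_to_roll_back(bwc_file: dict, icv_file: dict) -> dict:
    # The file rolled back (step (b) above) is whichever recording is shorter,
    # so the multiplexed result stays synced to the longest event of interest.
    return min((bwc_file, icv_file), key=lambda f: f["duration_s"])

shorter = select_file_to_roll_back({"name": "bwc.wav", "duration_s": 300},
                                   {"name": "icv.mp4", "duration_s": 900})
```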


In some embodiments, the first device 10 is configured to simultaneously record and transmit audio data to the second device 36. The received audio transmission can be recorded in the second device 36 in real-time. For example, an embodiment of the first device 10 camera 12 could be used to record audio data as described herein, and simultaneously transmit (e.g., via RuBee, Wi-Fi, 3G, 4G, LTE, etc.) the audio data to an ICV camera 40. The ICV camera 40 can then store the transmitted audio data in the file container of the ICV camera 40 data file. Once stored in the ICV camera 40, the transmitted audio data may be multiplexed and synchronized with the data file(s) in the ICV camera 40 as disclosed herein.
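

A very simple receive-and-record loop at the second device might look like the sketch below; the use of UDP datagrams, the port, and the empty-datagram end-of-stream convention are all assumptions for illustration, since this disclosure does not specify a transport protocol.

```python
# Hypothetical sketch: record streamed audio chunks at the second device as
# they arrive, in real time, writing them to storage.
import socket

def receive_and_record(port: int, out_path: str, chunk_size: int = 4096) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    with open(out_path, "wb") as out:
        while True:
            chunk, _addr = sock.recvfrom(chunk_size)
            if not chunk:      # empty datagram signals end of stream in this sketch
                break
            out.write(chunk)   # real-time recording of the transmitted audio
```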


In some embodiments, the audio data transferred from the first device 10 is used to replace audio data in a data file in the second device 36. For example, in a situation where the audio data captured by the ICV camera 40 is of such poor quality that it is difficult to discern (e.g. the audio signal goes faint as the officer walks away from the vehicle 34), the system software may be configured to automatically replace the poor-quality audio data in the data file from the ICV camera 40 with the audio data recorded by the first device 10. In some embodiments, only portions of audio data in the second device 36 data file are replaced in this manner. In other embodiments, the audio data transferred from the first device 10 is established as the audio data for the data file in the second device 36, such that when the multiplexed files are played, the only audio signal heard is that from the transferred audio data. For example, if the data file in the second device 36 contains only video data, without audio, the audio data recorded with first device 10 may be used as the audio data once the audio data is multiplexed into the file container of the second device data file. Other embodiments may combine and synchronize audio data captured by a separate body-worn source (e.g., a separate body-worn wireless microphone linked to the second device 36) with audio data from the first device 10, to produce a higher quality resultant audio file. Embodiments of this disclosure also encompass the multiplexing and synchronization of data (audio, video, audiovisual) obtained by multiple first devices 10 and/or second devices 36. Such embodiments provide for the synchronization of multiple audio data files to non-audio carrying video.
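

One way the audio-replacement behavior might look is sketched below, assuming the recordings have already been synchronized into parallel, time-aligned segments and that a per-segment quality score is available; both assumptions are illustrative and are not specified by this disclosure.

```python
def merge_audio(icv_segments: list, bwc_segments: list, quality, threshold: float = 0.5) -> list:
    """icv_segments and bwc_segments are parallel, time-aligned audio chunks;
    quality(chunk) returns a score in [0, 1], higher being better."""
    merged = []
    for icv_chunk, bwc_chunk in zip(icv_segments, bwc_segments):
        # Keep the in-car audio unless it falls below the quality threshold,
        # in which case the body-worn audio for that segment is used instead.
        merged.append(icv_chunk if quality(icv_chunk) >= threshold else bwc_chunk)
    return merged
```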


Although the examples presented above describe embodiments using a time stamp as a starting marker for the synchronization process, any marker or combination of markers in the data files may be used to synchronize the data sets.


It will also be appreciated by those skilled in the art having the benefit of this disclosure that embodiments may be implemented wherein the second device 36 that receives the recorded audio data from the first device 10 is a remote computer (e.g. a server at headquarters), a smartphone, a wearable device (e.g. another BWC), etc. Any of these second devices 36 may be implemented with electronics, microprocessors, and software configured to perform the techniques and processes disclosed herein. It will also be appreciated that the first device 10 may be, or include, a device configured to record and/or transmit audio data and metadata, and optionally video data.


Other embodiments may be implemented wherein the first device 10 is configured to create and store an audio track (i.e. containing solely a captured audio communication signal). The audio track can be created as a solely recorded file, i.e., without the creation of visual data, or simultaneously with creating and storing a separate audiovisual track, or non-simultaneously with creating and storing an audiovisual track. For example, an embodiment of the portable camera 12 can be configured to record an audiovisual data file of captured video and audio data, while simultaneously creating and storing a separate audio track containing only the captured audio data. In such embodiments, the markers (described above) may be automatically inserted in either or both of the audiovisual data file and the separate audio track. As another example, the portable camera 12 can be configured to create and store a separate audio track, containing only the captured audio data, at a later time after an audiovisual data file of captured video and audio data has been stored. Thus, camera 12 is configurable/operable to create/store/hold solely an audio track (file), solely a video data file, solely an audiovisual data file, or a combination thereof (such combination may be created simultaneously or non-simultaneously). With embodiments including an audio track, the transfer of only the recorded audio track (containing the audio data of interest) to the second device 36 is streamlined as audio signal data files typically entail less data and transfer at a faster rate (depending on system bandwidth) compared to audiovisual data. In all embodiments, the audio track can also be stored with automatically embedded markers (e.g., time stamp, watermark, metadata, unique identifier, GPS data, telemetry, etc.). In other embodiments, the first device 10 is configured to wirelessly transmit and stream (e.g., via the Internet, Cloud, radio network, Bluetooth, Wi-Fi, 3G, 4G, LTE, satellite, etc.) the captured audio data to a remote second device 36, in addition to recording the audio data to memory as described herein. The second device 36 is configured with a speaker to allow the received streamed audio data to be heard (e.g., in real-time or later), and the second device 36 is also operable to record the received streamed audio data to memory/storage (either or both of these functions, as desired). For example, for law enforcement applications this would allow an officer in the vehicle 34 to listen, in real-time, to the audio wirelessly streaming from his partner's BWC 12 and to manually select (e.g. by pushing a button) to record the data to the memory of the second device 36 (e.g., ICV camera 40). These features can be used as backup functions.


Among the benefits of the functionality provided by the disclosed embodiments is the elimination of the range-based limitations encountered by conventional wireless audio data transmission. Since on-scene audio of interest is recorded with the first device 10 and subsequently transferred from the first device to the second device 36, there are no longer any issues regarding wireless signal transfer range or signal interference. The embodiments also provide the ability to multiplex and/or synchronize the audio data files at a later time, after the video and/or audio files have been produced. In implementations where all files are transferred to a server, the multiplexing, synchronization, unique identifier coding, or a combination thereof, can be done at a later time as desired. For example, once the files are obtained, audio files from the first device 10 may be multiplexed and synced, or played separately yet in sync, with video files from the second device 36.


The recorded/stored/held data (audio, video, or audiovisual data) acquired by any device(s) can also be sent to the cloud in real-time, where the disclosed extraction, multiplexing, and/or synchronization techniques can be performed. For example, once uploaded to the cloud, audio data recorded by a first device 10 can be synchronized with the data file(s) (i.e., audio data, video data, metadata, and/or audiovisual data) uploaded to the cloud from a second device 36. Cloud processing can be performed concurrently with the disclosed techniques or as stand-alone processing of the data. Such cloud processing provides for rapid accessibility (e.g. by remote locations such as headquarters) and flexible scalability.



FIG. 6 is a flow chart depicting a method 100 according to an embodiment of this disclosure. At step 110, audio data is recorded using a first device, wherein the first device is portable. The first device can be any of the devices as described herein. At step 120, the recorded audio data is transferred from the first device to a second device configured to receive audio data. The audio data can be transferred via any of the means disclosed herein. At step 130, the transferred audio data is multiplexed with at least one data file in the second device to synchronize the transferred audio data with the at least one data file. This method may be implemented using the techniques and embodiments disclosed herein.


In a variant of the embodiment depicted in FIG. 6, another embodiment entails the steps of method 100 and concurrently includes: uploading the audio data to the cloud; uploading the at least one data file from the second device to the cloud; and, in the cloud, multiplexing the audio data with the at least one data file to synchronize the audio data with the at least one data file. The data processing may be performed by a virtual server in the cloud and the resultant file(s) may be stored in the same server or downloaded as desired.



FIG. 7 is a flow chart depicting a method 200 according to an embodiment of this disclosure. At step 210, audio data is recorded using a first device, wherein the first device is portable. The first device can be any of the devices as described herein. At step 220, the recorded audio data is transferred from the first device to a second device containing at least one data file. The audio data can be transferred via any of the means disclosed herein. At step 230, at least one marker is detected in either of the transferred audio data or the at least one data file. At step 240, using the detected marker(s), the transferred audio data is synchronized with the at least one data file. This method may be implemented using the techniques and embodiments disclosed herein.



FIG. 8 is a flow chart depicting a method 300 according to an embodiment of this disclosure. At step 310, audio data is recorded using a first device, wherein the first device is portable. The first device can be any of the devices as described herein. At step 320, simultaneously with the recording audio data using the first device, the recorded audio data from the first device is wirelessly transmitted to a second device configured to receive audio data. At step 330, the transmitted audio data is recorded using the second device. The audio data can be transmitted via any of the means disclosed herein. This method may be implemented using the techniques and embodiments disclosed herein.


With regard to FIGS. 6-8, any of the data files mentioned may include audio data, video data, or audiovisual data. Any of the data files may also include metadata. Similarly, with respect to FIGS. 6-8, the “audio data” mentioned may in fact be audio data, visual data, or audiovisual data, and it may also include metadata.


In light of the principles and example embodiments described and depicted herein, it will be recognized that the example embodiments can be modified in arrangement and detail without departing from such principles. Also, the foregoing discussion has focused on particular embodiments, but other configurations are also contemplated. In particular, even though expressions such as “in one embodiment,” “in another embodiment,” or the like are used herein, these phrases are meant to generally reference embodiment possibilities, and are not intended to limit the invention to particular embodiment configurations. As used herein, these terms may reference the same or different embodiments that are combinable into other embodiments. As a rule, any embodiment referenced herein is freely combinable with any one or more of the other embodiments referenced herein, and any number of features of different embodiments are combinable with one another, unless indicated otherwise.


Similarly, although example processes have been described with regard to particular operations performed in a particular sequence, numerous modifications could be applied to those processes to derive numerous alternative embodiments of the present invention. For example, alternative embodiments may include processes that use fewer than all of the disclosed operations, processes that use additional operations, and processes in which the individual operations disclosed herein are combined, subdivided, rearranged, or otherwise altered. This disclosure describes one or more embodiments wherein various operations are performed by certain systems, applications, modules, components, etc. In alternative embodiments, however, those operations could be performed by different components. Also, items such as applications, modules, components, etc., may be implemented as software constructs stored in a machine accessible storage medium, such as an optical disk, a hard disk drive, etc., and those constructs may take the form of applications, programs, subroutines, instructions, objects, methods, classes, or any other suitable form of control logic; such items may also be implemented as firmware or hardware, or as any combination of software, firmware and hardware, or any combination of any two of software, firmware and hardware.


This disclosure may include descriptions of various benefits and advantages that may be provided by various embodiments. One, some, all, or different benefits or advantages may be provided by different embodiments.


In view of the wide variety of useful permutations that may be readily derived from the example embodiments described herein, this detailed description is intended to be illustrative only, and should not be taken as limiting the scope of the invention. What is claimed as the invention, therefore, are all implementations that come within the scope of the following claims, and all equivalents to such implementations.

Claims
  • 1. A method, comprising: recording audio data using a first device comprising a body-worn camera (BWC); transferring the recorded audio data from the first device to a second device comprising an in-car video (ICV) camera and configured to receive audio data; multiplexing the transferred audio data with at least one data file contained in the second device; synchronizing the transferred audio data with the at least one data file; and creating an identifier associating the transferred audio data with the at least one data file; wherein the synchronizing the transferred audio data with the at least one data file comprises (a) selecting either the transferred audio data file or the at least one data file, (b) rolling back in time the selected file to a specific marker point, and (c) automatically synchronizing the transferred audio data file and the at least one data file by marking points in the selected file where markers in the selected file match with data in the file that was not selected; and wherein the selecting is performed based on comparing recording durations of the transferred audio data file and the at least one data file, the selected file being whichever of the transferred audio data file and the at least one data file has a shortest recording duration.
  • 2. The method of claim 1, wherein the at least one data file comprises audiovisual data.
  • 3. The method of claim 1, wherein the synchronizing the transferred audio data with the at least one data file comprises synchronizing the audio data using one or more markers selected from the group consisting of: (i) a time stamp; (ii) a watermark; (iii) an event identifier; (iv) GPS data; and (v) other metadata.
  • 4. The method of claim 1, wherein the multiplexing the transferred audio data with the at least one data file comprises replacing audio data from the at least one data file with audio data from the transferred audio data.
  • 5. The method of claim 1, wherein the multiplexing the transferred audio data with the at least one data file comprises establishing the transferred audio data as the audio data for the at least one data file.
  • 6. The method of claim 1, wherein the recording audio data comprises creating a file in the first device containing visual data recorded by the first device and audio data recorded by the first device.
  • 7. The method of claim 1, wherein the recording audio data comprises creating an audiovisual data file and a separate audio track of the audio data.
  • 8. The method of claim 1, wherein the recording audio data comprises creating an audio track of the audio data.
  • 9. The method of claim 8, wherein the audio track comprises one or more markers configured for use to synchronize audio data transferred from the audio track with the at least one data file.
  • 10. The method of claim 9, wherein the markers are selected from the group consisting of: (i) a time stamp; (ii) a watermark; (iii) an event identifier; (iv) GPS data; and (v) other metadata.
  • 11. The method of claim 1, further comprising wirelessly transmitting the recorded audio data to a remote device.
  • 12. The method of claim 1, further comprising: uploading the audio data to the cloud; uploading the at least one data file from the second device to the cloud; and in the cloud, multiplexing the audio data with the at least one data file to synchronize the audio data with the at least one data file.
  • 13. A system, comprising: a first device comprising a body-worn camera (BWC) configured to record audio data; and a second device comprising an in-car video (ICV) camera and configured to: (a) hold or store at least one data file, (b) receive audio data from the first device, (c) multiplex the audio data received from the first device with the at least one data file held or stored by the second device, (d) synchronize the received audio data with the at least one data file held or stored by the second device, and (e) create an identifier associating the audio data received from the first device with the at least one data file held or stored by the second device; wherein the received audio data is synchronized with the at least one data file by: (a) selecting either the received audio data file or the at least one data file, (b) rolling back in time the selected file to a specific marker point, and (c) automatically synchronizing the received audio data file and the at least one data file by marking points in the selected file where markers in the selected file match with data in the file that was not selected; and wherein the selecting is performed based on comparing recording durations of the received audio data file and the at least one data file, the selected file being whichever of the received audio data file and the at least one data file has a shortest recording duration.
  • 14. The system of claim 13, wherein the at least one data file held or stored by the second device comprises audiovisual data.
  • 15. The system of claim 13, wherein the second device is configured to synchronize the audio data received from the first device with the at least one data file held or stored by the second device using one or more markers selected from the group consisting of: (i) a time stamp; (ii) a watermark; (iii) an event identifier; (iv) GPS data; and (v) other metadata.
  • 16. The system of claim 13, wherein the second device is configured to multiplex the audio data received from the first device with the at least one data file held or stored by the second device to replace audio data from the at least one data file held or stored by the second device with audio data from the received audio data.
  • 17. The system of claim 13, wherein the second device is configured to multiplex the audio data received from the first device with the at least one data file held or stored by the second device to establish the received audio data as the audio data for the at least one data file held or stored by the second device.
  • 18. The system of claim 13, wherein the first device is configured to create a file containing visual data recorded by the first device and audio data recorded by the first device.
  • 19. The system of claim 13, wherein the first device is configured to create an audiovisual data file and a separate audio track of the audio data recorded by the first device.
  • 20. The system of claim 13, wherein the first device is configured to create an audio track of the audio data.
  • 21. The system of claim 20, wherein the audio track comprises one or more markers configured for use to synchronize audio data transferred from the audio track with the at least one data file held or stored by the second device.
  • 22. The system of claim 13, wherein the first device is configured to wirelessly transmit the recorded audio data to the second device or to another device.
  • 23. A method, comprising: recording audio data using a first device comprising a body-worn camera (BWC); transferring the recorded audio data from the first device to a second device comprising an in-car video (ICV) camera and configured to receive audio data; multiplexing the transferred audio data with at least one data file contained in the second device; and synchronizing the transferred audio data with the at least one data file, wherein the synchronizing the transferred audio data with the at least one data file comprises (a) selecting either the transferred audio data file or the at least one data file, (b) rolling back in time the selected file to a specific marker point, and (c) automatically synchronizing the transferred audio data file and the at least one data file by marking points in the selected file where markers in the selected file match with data in the file that was not selected, and wherein the selecting is performed based on comparing recording durations of the transferred audio data file and the at least one data file, the selected file being whichever of the transferred audio data file and the at least one data file has a shortest recording duration.
  • 24. A method, comprising: recording audio data using a first device comprising a body-worn camera (BWC); simultaneously with the recording audio data using the first device, wirelessly transmitting the audio data from the first device to a second device comprising an in-car video (ICV) camera and configured to receive audio data; recording the transmitted audio data using the second device; rendering the transmitted audio data audible using a speaker associated with the second device; and synchronizing the transmitted audio data with at least one data file, wherein the synchronizing the transmitted audio data with the at least one data file comprises (a) selecting either the transmitted audio data file or the at least one data file, (b) rolling back in time the selected file to a specific marker point, and (c) automatically synchronizing the transmitted audio data file and the at least one data file by marking points in the selected file where markers in the selected file match with data in the file that was not selected; and wherein the selecting is performed based on comparing recording durations of the transmitted audio data file and the at least one data file, the selected file being whichever of the transmitted audio data file and the at least one data file has a shortest recording duration.
  • 25. The method of claim 24, further comprising multiplexing the transmitted audio data with at least one data file contained in the second device to synchronize the transmitted audio data with the at least one data file.
  • 26. The method of claim 25, wherein the synchronizing the transmitted audio data with the at least one data file comprises synchronizing the audio data using one or more markers selected from the group consisting of: (i) a time stamp; (ii) a watermark; (iii) an event identifier; (iv) GPS data; and (v) other metadata.
  • 27. The method of claim 1, wherein the recording audio data comprises recording an audiovisual data file and a separate audio file of the audio data, the audiovisual data file comprising audio data and visual data.
  • 28. The method of claim 1, wherein the first device and the second device were activated at respective different times to record the audio data and the at least one data file, respectively.
  • 29. The system of claim 13, wherein the first device is configured to record an audiovisual data file and a separate audio file of the audio data, the audiovisual data file comprising audio data and visual data.
  • 30. The system of claim 13, wherein the first device and the second device were activated at respective different times to record the audio data and the at least one data file, respectively.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority to U.S. Provisional Patent Application No. 62/333,818, filed on May 9, 2016, titled “Systems, Apparatuses and Methods for Creating, Identifying, Enhancing, and Distributing Evidentiary Data.” The entire disclosure of Application No. 62/333,818 is hereby incorporated herein by reference.

9307317 Chang et al. Apr 2016 B2
9325950 Haler Apr 2016 B2
9471059 Wilkins Oct 2016 B1
9589448 Schneider et al. Mar 2017 B1
9665094 Russell May 2017 B1
10074394 Ross et al. Sep 2018 B2
20020003571 Schofield et al. Jan 2002 A1
20020051061 Peters et al. May 2002 A1
20020135679 Scaman Sep 2002 A1
20030052970 Dodds et al. Mar 2003 A1
20030080878 Kirmuss May 2003 A1
20030081122 Kirmuss May 2003 A1
20030081127 Kirmuss May 2003 A1
20030081128 Kirmuss May 2003 A1
20030081934 Kirmuss May 2003 A1
20030081935 Kirmuss May 2003 A1
20030095688 Kirmuss May 2003 A1
20030103140 Watkins Jun 2003 A1
20030151663 Lorenzetti et al. Aug 2003 A1
20030197629 Saunders et al. Oct 2003 A1
20040008255 Lewellen Jan 2004 A1
20040051793 Tecu et al. Mar 2004 A1
20040107030 Nishira et al. Jun 2004 A1
20040146272 Kessel et al. Jul 2004 A1
20040177253 Wu et al. Sep 2004 A1
20050007458 Benattou Jan 2005 A1
20050078195 VanWagner Apr 2005 A1
20050083404 Pierce et al. Apr 2005 A1
20050088521 Blanco et al. Apr 2005 A1
20050122397 Henson et al. Jun 2005 A1
20050154907 Han et al. Jul 2005 A1
20050158031 David Jul 2005 A1
20050185936 Lao et al. Aug 2005 A9
20050243171 Ross, Sr. et al. Nov 2005 A1
20050286476 Crosswy et al. Dec 2005 A1
20060055521 Blanco et al. Mar 2006 A1
20060072672 Holcomb et al. Apr 2006 A1
20060077256 Silvernail et al. Apr 2006 A1
20060078046 Lu Apr 2006 A1
20060130129 Dai et al. Jun 2006 A1
20060133476 Page et al. Jun 2006 A1
20060165386 Garoutte Jul 2006 A1
20060270465 Lee et al. Nov 2006 A1
20060274116 Wu Dec 2006 A1
20070005609 Breed Jan 2007 A1
20070064108 Haler Mar 2007 A1
20070086601 Mitchler Apr 2007 A1
20070111754 Marshall et al. May 2007 A1
20070124292 Kirshenbaum et al. May 2007 A1
20070217761 Chen et al. Sep 2007 A1
20070219685 Plante Sep 2007 A1
20080005472 Khalidi et al. Jan 2008 A1
20080030782 Watanabe Feb 2008 A1
20080129825 DeAngelis et al. Jun 2008 A1
20080165250 Ekdahl et al. Jul 2008 A1
20080186129 Fitzgibbon Aug 2008 A1
20080208755 Malcolm Aug 2008 A1
20080294315 Breed Nov 2008 A1
20080303903 Bentley et al. Dec 2008 A1
20090017881 Madrigal Jan 2009 A1
20090022362 Gagvani et al. Jan 2009 A1
20090074216 Bradford et al. Mar 2009 A1
20090076636 Bradford et al. Mar 2009 A1
20090118896 Gustafsson May 2009 A1
20090195651 Leonard et al. Aug 2009 A1
20090195655 Pandey Aug 2009 A1
20090213902 Jeng Aug 2009 A1
20100026809 Curry Feb 2010 A1
20100030929 Ben-Yacov et al. Feb 2010 A1
20100057444 Cilia Mar 2010 A1
20100081466 Mao Apr 2010 A1
20100131748 Lin May 2010 A1
20100136944 Taylor et al. Jun 2010 A1
20100180051 Harris Jul 2010 A1
20100238009 Cook et al. Sep 2010 A1
20100274816 Guzik Oct 2010 A1
20100287545 Corbefin Nov 2010 A1
20100289648 Ree Nov 2010 A1
20100302979 Reunamaki Dec 2010 A1
20100309971 Vanman et al. Dec 2010 A1
20110016256 Hatada Jan 2011 A1
20110044605 Vanman et al. Feb 2011 A1
20110092248 Evanitsky Apr 2011 A1
20110142156 Haartsen Jun 2011 A1
20110233078 Monaco et al. Sep 2011 A1
20110234379 Lee Sep 2011 A1
20110280143 Li et al. Nov 2011 A1
20110280413 Wu et al. Nov 2011 A1
20110299457 Green, III et al. Dec 2011 A1
20120014534 Bodley et al. Jan 2012 A1
20120078397 Lee et al. Mar 2012 A1
20120083960 Zhu et al. Apr 2012 A1
20120119894 Pandy May 2012 A1
20120163309 Ma et al. Jun 2012 A1
20120173577 Millar et al. Jul 2012 A1
20120266251 Birtwhistle et al. Oct 2012 A1
20120300081 Kim Nov 2012 A1
20120307070 Pierce Dec 2012 A1
20120310394 El-Hoiydi Dec 2012 A1
20120310395 El-Hoiydi Dec 2012 A1
20130114849 Pengelly et al. May 2013 A1
20130135472 Wu et al. May 2013 A1
20130163822 Chigos et al. Jun 2013 A1
20130201884 Freda et al. Aug 2013 A1
20130218427 Mukhopadhyay et al. Aug 2013 A1
20130223653 Chang Aug 2013 A1
20130236160 Gentile et al. Sep 2013 A1
20130242262 Lewis Sep 2013 A1
20130251173 Ejima et al. Sep 2013 A1
20130268357 Heath Oct 2013 A1
20130287261 Lee et al. Oct 2013 A1
20130302758 Wright Nov 2013 A1
20130339447 Ervine Dec 2013 A1
20130346660 Kwidzinski et al. Dec 2013 A1
20140037142 Bhanu et al. Feb 2014 A1
20140038668 Vasavada et al. Feb 2014 A1
20140078304 Othmer Mar 2014 A1
20140085475 Bhanu et al. Mar 2014 A1
20140092251 Troxel Apr 2014 A1
20140100891 Turner et al. Apr 2014 A1
20140114691 Pearce Apr 2014 A1
20140143545 McKeeman et al. May 2014 A1
20140162598 Villa-Real Jun 2014 A1
20140184796 Klein Jul 2014 A1
20140236414 Droz et al. Aug 2014 A1
20140236472 Rosario Aug 2014 A1
20140278052 Slavin et al. Sep 2014 A1
20140280584 Ervine Sep 2014 A1
20140281498 Bransom et al. Sep 2014 A1
20140297687 Lin Oct 2014 A1
20140309849 Ricci Oct 2014 A1
20140321702 Schmalstieg Oct 2014 A1
20140355951 Tabak Dec 2014 A1
20140375807 Muetzel et al. Dec 2014 A1
20150012825 Rezvani et al. Jan 2015 A1
20150032535 Li et al. Jan 2015 A1
20150066349 Chan et al. Mar 2015 A1
20150084790 Arpin et al. Mar 2015 A1
20150086175 Lorenzetti Mar 2015 A1
20150088335 Lambert et al. Mar 2015 A1
20150103159 Shashua et al. Apr 2015 A1
20150161483 Allen et al. Jun 2015 A1
20150211868 Matsushita et al. Jul 2015 A1
20150266575 Borko Sep 2015 A1
20150294174 Karkowski et al. Oct 2015 A1
20160023762 Wang Jan 2016 A1
20160035391 Ross et al. Feb 2016 A1
20160042767 Araya et al. Feb 2016 A1
20160062762 Chen et al. Mar 2016 A1
20160062992 Chen et al. Mar 2016 A1
20160063642 Luciani et al. Mar 2016 A1
20160064036 Chen et al. Mar 2016 A1
20160065908 Chang et al. Mar 2016 A1
20160144788 Perrin et al. May 2016 A1
20160148638 Ross et al. May 2016 A1
20160285492 Vembar et al. Sep 2016 A1
20160332747 Bradlow Nov 2016 A1
20170032673 Scofield et al. Feb 2017 A1
20170053169 Cuban et al. Feb 2017 A1
20170053674 Fisher Feb 2017 A1
20170059265 Winter et al. Mar 2017 A1
20170066374 Hoye Mar 2017 A1
20170076396 Sudak Mar 2017 A1
20170085829 Waniguchi et al. Mar 2017 A1
20170113664 Nix Apr 2017 A1
20170178422 Wright Jun 2017 A1
20170178423 Wright Jun 2017 A1
20170193828 Holtzman et al. Jul 2017 A1
20170253330 Saigh et al. Sep 2017 A1
20170324897 Swaminathan et al. Nov 2017 A1
Foreign Referenced Citations (40)
Number Date Country
2907145 May 2007 CN
101309088 Nov 2008 CN
102355618 Feb 2012 CN
102932703 Feb 2013 CN
202957973 May 2013 CN
103617005 Mar 2014 CN
1148726 Oct 2001 EP
1655855 May 2006 EP
2107837 Oct 2009 EP
2391687 Nov 2004 GB
2003150450 May 2003 JP
2005266934 Sep 2005 JP
2009169922 Jul 2009 JP
2012058832 Mar 2012 JP
1997038526 Oct 1997 WO
2000013410 Mar 2000 WO
2000021258 Apr 2000 WO
2000045587 Aug 2000 WO
2000072186 Nov 2000 WO
2002061955 Aug 2002 WO
2004066590 Aug 2004 WO
2004111851 Dec 2004 WO
2005053325 Jun 2005 WO
2005054997 Jun 2005 WO
2007114988 Oct 2007 WO
2009058611 May 2009 WO
2009148374 Dec 2009 WO
2012001143 Jan 2012 WO
2012100114 Jul 2012 WO
2012116123 Aug 2012 WO
2013020588 Feb 2013 WO
2013074947 May 2013 WO
2013106740 Jul 2013 WO
2013107516 Jul 2013 WO
2013150326 Oct 2013 WO
2014057496 Apr 2014 WO
2016033523 Mar 2016 WO
2016061516 Apr 2016 WO
2016061525 Apr 2016 WO
2016061533 Apr 2016 WO
Non-Patent Literature Citations (42)
Entry
Office Action issued in U.S. Appl. No. 15/413,205 dated Mar. 17, 2017, 7 pages.
Office Action issued in U.S. Appl. No. 11/369,502 dated Mar. 16, 2010, 10 pages.
Office Action issued in U.S. Appl. No. 11/369,502 dated Sep. 30, 2010, 12 pages.
Office Action issued in U.S. Appl. No. 11/369,502 dated Jul. 14, 2011, 17 pages.
Office Action issued in U.S. Appl. No. 11/369,502 dated Jan. 31, 2012, 18 pages.
Examiner's Answer (to Appeal Brief) issued in U.S. Appl. No. 11/369,502 dated Oct. 24, 2012, 20 pages.
Office Action issued in U.S. Appl. No. 13/723,747 dated Mar. 22, 2013, 6 pages.
Office Action issued in U.S. Appl. No. 13/723,747 dated Jun. 26, 2013, 6 pages.
Office Action issued in U.S. Appl. No. 13/723,747 dated Sep. 10, 2013, 7 pages.
Advisory Action issued in U.S. Appl. No. 13/723,747 dated Feb. 24, 2014, 4 pages.
Office Action issued in U.S. Appl. No. 13/723,747 dated Mar. 20, 2014, 6 pages.
Office Action issued in U.S. Appl. No. 13/723,747 dated Nov. 10, 2014, 9 pages.
Notice of Allowance and Fees Due issued in U.S. Appl. No. 13/723,747 dated Mar. 30, 2015, 6 pages.
First Action Interview Pilot Program Pre-Interview Communication issued in U.S. Appl. No. 14/588,139 dated May 14, 2015, 4 pages.
Office Action issued in U.S. Appl. No. 14/593,853 dated Apr. 20, 2015, 30 pages.
Office Action issued in U.S. Appl. No. 14/593,956 dated May 6, 2015, 10 pages.
PCT International Search Report and Written Opinion issued in Application No. PCT/US07/63485 dated Feb. 8, 2008, 10 pages.
Chapter 5: “Main Memory,” Introduction to Computer Science course, 2004, 20 pages, available at http://www2.cs.ucy.ac.cy/˜nicolast/courses/lectures/MainMemory.pdf.
Sony Corporation, Digital Still Camera (MVC-CD200/CD300), Operation Manual, 2001, 108 pages, Sony, Japan.
Steve's Digicams, Kodak Professional DCS 620 Digital Camera, 1999, 11 pages, United States, available at: http://www.steves-digicams.com/dcs620.html.
Gregory J. Allen, “The Feasibility of Implementing Video Teleconferencing Systems Aboard Afloat Naval Units” (Master's Thesis, Naval Postgraduate School, Monterey, California), Mar. 1990, 143 pages.
Bell-Northern Research Ltd., “A Multi-Bid Rate Interframe Movement Compensated Multimode Coder for Video Conferencing” (Final Report prepared for DARPA), Apr. 1982, 92 pages, Ottawa, Ontario, Canada.
Xiaoqing Zhu, Eric Setton, Bernd Girod, “Rate Allocation for Multi-Camera Surveillance Over an Ad Hoc Wireless Network,” 2004, 6 pages, available at http://msw3.stanford.edu/˜zhuxq/papers/pcs2004.pdf.
Office Action issued in U.S. Appl. No. 14/593,722 dated Sep. 25, 2015, 39 pages.
Office Action issued in U.S. Appl. No. 14/593,853 dated Sep. 11, 2015 (including Summary of Interview conducted on May 9, 2015), 45 pages.
Notice of Allowance issued in U.S. Appl. No. 14/593,956 dated Oct. 26, 2015, 10 pages.
“IEEE 802.1X,” Wikipedia, Aug. 23, 2013, 8 pages, available at: http://en.wikipedia.org/w/index.php?title=IEEE_802.1X&oldid=569887090.
Notice of Allowance issued in U.S. Appl. No. 14/588,139 dated Aug. 14, 2015, 19 pages.
“Near Field Communication,” Wikipedia, Jul. 19, 2014, 8 pages, available at: https://en.wikipedia.org/w/index.php?title=near_field_communication&oldid=617538619.
PCT International Search Report and Written Opinion issued in Application No. PCT/US15/47532 dated Jan. 8, 2016, 22 pages.
Office Action issued in U.S. Appl. No. 14/686,192 dated Apr. 8, 2016, 19 pages.
Office Action issued in U.S. Appl. No. 14/715,742 dated Aug. 21, 2015, 13 pages.
Office Action issued in U.S. Appl. No. 14/715,742 dated Mar. 11, 2016, 14 pages.
Office Action issued in U.S. Appl. No. 14/593,722 dated Apr. 10, 2015, 28 pages.
Office Action issued in U.S. Appl. No. 14/686,192 dated Dec. 24, 2015, 12 pages.
“Portable Application,” Wikipedia, Jun. 26, 2014, 4 pages, available at: http://en.wikipedia.org/index.php?title=Portable_application&oldid=614543759.
“Radio-Frequency Identification,” Wikipedia, Oct. 18, 2013, 31 pages, available at: http://en.wikipedia.org/w/index.php?title=Radio-frequency_identification&oldid=577711262.
Advisory Action issued in U.S. Appl. No. 14/715,742 dated May 20, 2016 (including Summary of Interview conducted on May 12, 2016), 4 pages.
Advisory Action issued in U.S. Appl. No. 14/715,742 dated Jun. 14, 2016, 3 pages.
Office Action issued in U.S. Appl. No. 14/715,742 dated Sep. 23, 2016, 17 pages.
Office Action issued in U.S. Appl. No. 15/412,044 dated Jun. 1, 2017, 10 pages.
Office Action issued in U.S. Appl. No. 15/467,924 dated May 8, 2017, 10 pages.
Related Publications (1)
Number Date Country
20170323663 A1 Nov 2017 US
Provisional Applications (1)
Number Date Country
62333818 May 2016 US