The recording of personally significant events through video and/or audio recording devices is a common occurrence. Recordings of organized events, such as a surprise party or a piano recital, may be planned in advance, with the recording devices prepared ahead of time for the best position and an amount of recording media appropriate for the event. However, many everyday events tend not to take place on a schedule or begin on command. Also, the significance of a given event may not be recognized until after that event has already started to unfold. To keep track of everything that happens to or around a person on a given day, one could carry a camera and attempt to bring that camera out quickly to capture a particular moment or event. With spontaneous events, however, the recording device is often brought out too late or sits at a location just out of reach. Alternatively, a recording device may be positioned at a location that seems likely to capture interesting events and left on continuously during the hours that something of interest might occur. This latter approach to capturing more spontaneous events runs into the problems of not having the recording device in quite the right location, the expense of the large amounts of recording media needed, and the large amount of time needed to manually review the unattended recording.
According to one aspect, a video recording apparatus is disclosed. The video recording apparatus may comprise a video recorder configured to record video data, a memory configured to store the video data, and a processor in communication with the memory. The processor may be configured to control storage of the video data in a video segment on the memory, determine whether a trigger event is detected during storage of the video data in the video segment, and in response to detecting the trigger event during storage of the video data in the video segment, flag at least a portion of the video segment for preservation, and implement a trigger event protocol.
According to another aspect, a method for recording continuous video segments is disclosed. The method may comprise controlling a video recorder to record video data, and controlling a processor to store the video data in a video segment on a memory, determine whether a trigger event is detected during storage of the video data in the video segment, and in response to detecting the trigger event during storage of the video data in the video segment, flag at least a portion of the video segment for preservation and implement a trigger event protocol.
The present disclosure may be better understood with reference to the following drawings and description. Non-limiting and non-exhaustive embodiments are described with reference to the following drawings. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating principles. In the figures, like reference numerals may refer to like parts throughout the different figures unless otherwise specified.
The methods, devices, and systems discussed below may be embodied in a number of different forms. Not all of the depicted components may be required, however, and some implementations may include additional, different, or fewer components than those expressly described in this disclosure. Variations in the arrangement and type of the components may be made without departing from the spirit or scope of the claims as set forth herein.
In order to capture spontaneous events without the distraction of manually handling a camera, and without the risk of missing the beginning of an event, a wearable recording device is disclosed. According to some embodiments, the wearable recording device is a pair of glasses including a digital video recording component for recording events and objects from the user's point of view through the glasses. The wearable glasses embodiment is advantageous for recording events and objects from the user's own viewing field. In other words, events that are viewable by the user may be recorded by the digital video recording component of the wearable glasses. According to other embodiments, the wearable recording device may be attached to other locations on the user's body to enable the digital video recording component to record events and objects surrounding the user.
The components of wearable recording device 100 may be integrated into wearable recording device 100, or may be discrete elements removably attached to wearable recording device 100. The electrical components of wearable recording device 100 (e.g., computing system 102, display unit 103, and camera unit 104) may be configured to communicate with each other via wired or wireless communication protocols.
In particular, arm portions 101 may be configured to wrap, at least in part, around a user's ears. Computing system 102 may include one or more components described in computer 700 illustrated in
Camera unit 104 may support capture of image data digitally, in analog, and/or according to any number of lossy or lossless image or video formats, such as Joint Photographic Experts Group (jpeg or jpg), Tagged Image File Format (tiff), Portable Network Graphics (png), Graphics Interchange Format (gif), Moving Picture Experts Group (mpeg, mpeg-2), or any other image or video file format. Camera unit 104 may also support capture of image data for various forms of imaging, such as ultra-violet images, infrared images, night vision, thermal scans, and more.
Referring to
Smart watch 210 may be configured to support wireless communication (e.g. cellular, Wi-Fi, Bluetooth, NFC, etc.) capabilities and data processing and memory abilities. The components of smart watch 210 may include one or more components described in computer 700 illustrated in
Smart phone 220 may be configured to support wireless communication (e.g. cellular, Wi-Fi, Bluetooth, NFC, etc.) capabilities and data processing and memory abilities. The components of smart phone 220 may include one or more components described in computer 700 illustrated in
When communication between wearable recording device 100 and secure server 240 is established via network 230, data being captured or received by wearable recording device 100 may be uploaded to secure server 240 to be stored on a memory of secure server 240. For example, digital video recorded by camera unit 104 may be uploaded to secure server 240 and stored on a memory of secure server 240. The components of secure server 240 may include one or more components described in the computer 700 illustrated in
Communication between devices within the user-based network system 200 may be through any of a number of standard wireless communication protocols. For example, Bluetooth communication, RF communication, NFC communication, telecommunication network communication, or any other wireless mechanism may be used.
As described in greater detail below, the present disclosure describes wearable recording device 100 that supports real-time recording of objects and events occurring within a user's field of view. In order to make the most of limited on-board memory storage capabilities, a data storage strategy is disclosed that allows for recording continuous video segments by utilizing data overwriting strategies to overwrite previously recorded video segments. The disclosure further describes a data storage strategy that detects specific trigger events (e.g., sounds, objects, facial recognition, biometric readings, acceleration readings, orientation readings) during the recording of a video segment that may cause the video segment to be stored separately and saved from the overwriting feature. This allows for the smart conservation of memory and storage of video segments that may include significant events warranting longer term data storage.
The video recording strategies, data storing strategies, and other related processes described herein may be implemented by a data storage strategy tool. The data storage strategy tool may be, for example, software, hardware, firmware, or some combination thereof, running on one or more of the devices that comprise the user-based network system 200 or secure server 240.
Referring back to timeline 300 illustrated in
A subsequent third video segment 3 recording events occurring during a third time period T3 lasting from time t3 to t4, may be stored to overwrite all, or substantially all, of the first video segment 1 by storing the third video segment 3 over first data block 1. Similarly, a subsequent fourth video segment 4 recording events occurring during a fourth time period T4 lasting from time t4 to t5, may be stored to overwrite all, or substantially all, of the second video segment 2 by storing the fourth video segment 4 over second data block 2. In this way, the data storage strategy tool is configured to overwrite previously recorded video segments as a strategy for efficiently conserving limited memory storage space. For example, due to the limits of on-board memory available on wearable recording device 100, without a data storage strategy that includes overwriting previously recorded video segments, the ability to continuously record video segments would be severely limited.
With the data storage strategy described by timeline 300, video segments may continue to be recorded and stored, each for at least a known period of time before it is overwritten. This enables the continuous recording of events occurring within the field of view of camera unit 104 while also limiting the memory dedicated to the storage of such video recording to a fixed (i.e., known) limit.
Each of the time periods T1-T4 may be designed to last the same length of time. For example, one or more of time periods T1-T4 may last for 15 minutes or some other predetermined length of time. Further, although timeline 300 is described as repeating the memory overwriting cycle every two time periods, the data storage strategy tool may be configured to repeat the memory overwriting cycle according to other intervals. For example, the data storage strategy tool may control the memory overwriting cycle so that each subsequent video recording segment is stored to overwrite the previously recorded video segment.
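By way of illustration only, the baseline overwrite cycle of timeline 300 may be sketched in a few lines of Python. The segment length, block count, and class name below are assumptions made for the sketch, not part of the disclosure:

```python
SEGMENT_SECONDS = 15 * 60  # illustrative 15-minute time period (e.g., T1)
NUM_BLOCKS = 2             # two alternating data blocks, as in timeline 300

class CircularSegmentStore:
    """Store each new video segment over the oldest data block, so that
    segment N+2 overwrites segment N (the baseline overwrite cycle)."""

    def __init__(self, num_blocks=NUM_BLOCKS):
        self.blocks = [None] * num_blocks  # data block index -> segment frames
        self.next_block = 0                # data block to be overwritten next

    def store_segment(self, segment_frames):
        self.blocks[self.next_block] = segment_frames
        self.next_block = (self.next_block + 1) % len(self.blocks)
```

Under this sketch, the memory dedicated to continuous recording stays at a fixed limit of NUM_BLOCKS segments, matching the behavior described for timeline 300.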
The data storage strategy described by timeline 300 may be desirable and effective for continuously recording video segments that do not include a significant event that may be considered for more permanent storage. However, when a significant event (e.g., a predetermined trigger event) is detected during the recording of a video segment, a different data overwriting strategy may be desirable. Accordingly, the identification of a trigger event may initiate the data storage strategy tool to implement a trigger event protocol that may, for example, separately save a video segment lasting a predetermined length of time prior to the triggering event and/or lasting a predetermined length of time following the triggering event. In addition or alternatively, the identification of a triggering event may initiate the data storage strategy tool to continue recording the video segment past a default time period until a subsequent trigger event is detected. The timelines illustrated in
According to the first data storage strategy, after a trigger event is detected, the data storage strategy tool may automatically save a first video portion lasting a first predetermined length of time prior to the trigger event. In addition or alternatively, after the trigger event is detected, the data storage strategy tool may automatically save a second video portion lasting a second predetermined length of time following the trigger event. The first predetermined length of time may be equal to the second predetermined length of time (e.g., 7.5 minutes). Alternatively, the first predetermined length of time may be different from the second length of time.
According to timeline 410, a first video segment 1 may be recorded to capture events occurring during a first time period T1 lasting from time t1 to t2. As illustrated in timeline 410, a trigger event is not detected during recording of the first video segment 1 during the first time period T1. Therefore, the data storage strategy tool controls storage of the first video segment 1 into a first data block 1.
Following storage of the first video segment 1 into first data block 1, the data storage strategy tool commences recording a second video segment 2 starting at time t2. During the recording of the second video segment 2, a trigger event may be detected at time t3. According to the first data storage strategy, the detection of the trigger event at time t3 may automatically initiate the data storage strategy tool to save a first video portion of previously recorded video content that goes back a first predetermined length of time from the trigger event detection time at t3. In addition or alternatively, according to the first data storage strategy, the detection of the trigger event at time t3 may also initiate the data storage strategy tool to continue storing video content (i.e., a second video portion) following the detection of the trigger event at time t3 for a second predetermined length of time. Timeline 410 illustrates the first predetermined length of time lasting from time t2 to time t3, and the second predetermined length of time lasting from time t3 to time t5. In the embodiment illustrated by timeline 410, the first predetermined time length is equal to the second predetermined time length. However, in alternative embodiments the first predetermined time length may be different from the second predetermined time length, where the first predetermined time length may be longer or shorter than the second predetermined time length.
Thus, whereas the second video segment 2 may have lasted from time t2 to time t4 under the baseline data storage strategy described in timeline 300, because the trigger event was detected during the second time period T2 at time t3 in timeline 410, the second video segment 2 is recorded from time t2 to time t5 according to the first data storage strategy. The combination of the first video portion (lasting from time t2 to time t3 prior to the trigger event) and the second video portion (lasting from time t3 to time t5 following the trigger event) may then be stored as second video segment 2 (i.e., trigger event video segment) into data block 2. Alternatively, according to some embodiments the first video portion may be stored as the second video segment 2 into data block 2. Alternatively, according to some embodiments the second video portion may be stored as the second video segment 2 into data block 2.
A subsequent third video segment 3 may be recorded to capture events occurring during a third time period T3 lasting from time t5 to t6. The data storage strategy tool allows the third video segment 3 to overwrite all, or substantially all, of the first video segment 1 by storing the third video segment 3 over first data block 1. The data storage strategy tool allows for first video segment 1 to be overwritten in first data block 1 because a trigger event was not detected in first video segment 1.
A subsequent fourth video segment 4 may be recorded to capture events occurring during a fourth time period T4 lasting from time t6 to t7. Whereas under the baseline data storage strategy described in timeline 300 the fourth video segment 4 was stored to overwrite second video segment 2 in second data block 2, because the trigger event was detected during the recording of the second video segment 2 in timeline 410, the data storage strategy tool will not overwrite the second video segment 2 in data block 2. Rather, the data storage strategy tool controls storage of the fourth video segment 4 into a third data block 3, where the third data block 3 may be different from the first data block 1 and/or second data block 2. In this way, the second video segment 2 including the trigger event will be saved from overwriting and preserved until a later action to overwrite it is taken.
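A hedged sketch of this preservation behavior, extending the CircularSegmentStore sketch above: a segment in which a trigger event was detected is flagged so the overwrite cycle skips its data block. The flagging mechanism and the allocation of a fresh data block are illustrative assumptions:

```python
class TriggerAwareStore(CircularSegmentStore):
    """Baseline store plus preservation: a data block holding a trigger
    event segment is saved from overwriting until a later action is taken."""

    def __init__(self, num_blocks=NUM_BLOCKS):
        super().__init__(num_blocks)
        self.preserved = set()  # data block indices flagged for preservation

    def store_segment(self, segment_frames, had_trigger=False):
        if self.next_block in self.preserved:
            # The target block holds a trigger event segment; allocate a
            # fresh block instead of overwriting (data block 3 in timeline 410).
            self.blocks.append(None)
            self.next_block = len(self.blocks) - 1
        self.blocks[self.next_block] = segment_frames
        if had_trigger:
            self.preserved.add(self.next_block)
        self.next_block = (self.next_block + 1) % len(self.blocks)
```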
According to the second data storage strategy, after a trigger event is detected, the data storage strategy tool may automatically save a first video portion lasting a first predetermined length of time prior to the trigger event. In addition or alternatively, after the trigger event is detected, the data storage strategy tool may automatically save a second video portion lasting a second predetermined length of time following the trigger event. The first predetermined length of time may be equal to the second predetermined length of time (e.g., 10 minutes). Alternatively, the first predetermined length of time may be different from the second length of time.
According to timeline 420, a first video segment 1 may be recorded to capture events occurring during a first time period T1 lasting from time t1 to t2. As illustrated in timeline 420, a trigger event is not detected during recording of the first video segment 1 during the first time period T1. Therefore, the data storage strategy tool controls storage of the first video segment 1 into a first data block 1.
Following storage of the first video segment 1 into first data block 1, the data storage strategy tool commences recording of a second video segment 2 starting at time t2. During the recording of the second video segment 2, a trigger event may be detected at time t3. According to the second data storage strategy, the detection of the trigger event at time t3 may automatically initiate the data storage strategy tool to save a first video portion of previously recorded video content that goes back a first predetermined length of time from the trigger event detection time at t3. In addition or alternatively, according to the second data storage strategy, the detection of the trigger event at time t3 may also automatically initiate the data storage strategy tool to continue storing video content (i.e., a second video portion) following the detection of the trigger event at time t3 for a second predetermined length of time. Timeline 420 illustrates the first predetermined length of time lasting from time t1-2 to time t3, where time t1-2 extends back into the first time period T1. The second predetermined length of time lasts from time t3 to time t5. In the embodiment illustrated by timeline 420, the first predetermined time length is equal to the second predetermined time length. However, in alternative embodiments the first predetermined time length may be different from the second predetermined time length, where the first predetermined time length may be longer or shorter than the second predetermined time length.
Thus, whereas the second video segment 2 may have lasted from time t2 to time t4 under the baseline data storage strategy described in timeline 300, because the trigger event was detected during the second time period T2 at time t3 in timeline 420, the second video segment 2 is recorded from time t1-2 to time t5 according to the second data storage strategy. The combination of the first video portion (lasting from time t1-2 to time t3 prior to the trigger event) and the second video portion (lasting from time t3 to time t5 following the trigger event) may then be stored as second video segment 2 (i.e., trigger event video segment 2) into data block 2. Alternatively, according to some embodiments the first video portion may be stored as the second video segment 2 into data block 2. Alternatively, according to some embodiments the second video portion may be stored as the second video segment 2 into data block 2.
A subsequent third video segment 3 may be recorded to capture events occurring during a third time period T3 lasting from time t5 to t6. The data storage strategy tool controls the storage of the third video segment 3 into a third data block 3. The third data block 3 may include portions of the first data block 1 that were not utilized to store portions of the second video segment 2 (i.e., the video segment where a trigger event was identified) as well as any additional data block portions to allow for storage of the third video segment 3. It follows that the data storage strategy tool allows the third video segment 3 to overwrite portions of the first video segment 1 that were not saved as part of the second video segment 2.
A subsequent fourth video segment 4 may be recorded to capture events occurring during a fourth time period T4 lasting from time t6 to t7. Whereas under the baseline data storage strategy described in timeline 300 the fourth video segment 4 was stored to overwrite second video segment 2 in second data block 2, because the trigger event was detected during recording of the second video segment 2 in timeline 420, the data storage strategy tool will not overwrite the second video segment 2 in data block 2. Rather, the data storage strategy tool controls storage of the fourth video segment 4 into a fourth data block 4, where the fourth data block 4 may be different from the first data block 1, second data block 2, and/or third data block 3.
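The distinguishing feature of the second strategy, a pre-trigger window that reaches back into the previously recorded segment, might be sketched as follows. The frame-count interface is an assumption made for illustration, with frame counts standing in for the predetermined lengths of time:

```python
def build_trigger_segment(prev_segment, current_segment, trigger_idx,
                          pre_frames, post_frames):
    """Assemble a trigger event segment whose pre-trigger window may extend
    back into the previously recorded segment, as in timeline 420.

    trigger_idx indexes the trigger frame within current_segment."""
    start = trigger_idx - pre_frames
    if start < 0:
        # The window reaches back to time t1-2 inside the previous segment:
        # keep its trailing frames; the rest of that data block remains
        # available for overwriting by a later segment.
        pre = prev_segment[start:] + current_segment[:trigger_idx]
    else:
        pre = current_segment[start:trigger_idx]
    post = current_segment[trigger_idx:trigger_idx + post_frames]
    return pre + post
```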
According to the third data storage strategy, after a trigger event is detected, the data storage strategy tool may automatically save a first video portion lasting a first predetermined length of time prior to the trigger event. In addition or alternatively, after the trigger event is detected, the data storage strategy tool may automatically save a second video portion lasting from the trigger event at time t3 until a subsequent trigger event is detected. The subsequent trigger event may be a predetermined sound or voice command recognized by a speech recognition device operated by the data storage strategy tool, a predetermined object recognized by a video recognition device operated by the data storage strategy tool, a predetermined person recognized by a facial recognition device operated by the data storage strategy tool, a predetermined gesture command recognized by a video recognition device operated by the data storage strategy tool, a predetermined user input command received via an input device and recognized by the data storage strategy tool, a predetermined acceleration measurement measured by an accelerometer and recognized by the data storage strategy tool, a predetermined orientation measurement detected by a gyroscope and recognized by the data storage strategy tool, a user's biometric reading obtained by one or more biometric sensors and recognized by the data storage strategy tool, or some other trigger event that may be detected by the data storage strategy tool from the current video segment being recorded. The detection of the subsequent trigger event may be accomplished by the data storage strategy tool running on the wearable recording device 100. Alternatively, the detection of the subsequent trigger event may be accomplished by the data storage strategy tool running, at least in part, on another device in communication with the wearable recording device 100 (e.g., smart watch 210, smart phone 220, or secure server 240).
According to timeline 430, a first video segment 1 may be recorded to capture events occurring during a first time period T1 lasting from time t1 to t2. As illustrated in timeline 430, a trigger event is not detected during recording of the first video segment 1. Therefore, the data storage strategy tool may control storage of the first video segment 1 into a first data block 1.
Following storage of the first video segment 1 into first data block 1, the data storage strategy tool commences recording a second video segment 2 starting at time t2. During the recording of the second video segment 2, a trigger event may be detected at time t3. According to the third data storage strategy, the detection of the trigger event at time t3 may automatically initiate the data storage strategy tool to save a first video portion of previously recorded video content that goes back a first predetermined length of time from the trigger event detection time at t3. In addition or alternatively, according to the third data storage strategy, the detection of the trigger event at time t3 may also automatically initiate the data storage strategy tool to continue storing video content (i.e., a second video portion) following the detection of the trigger event at time t3 until a subsequent trigger event is identified. Timeline 430 illustrates the first predetermined length of time lasting from time t2 to time t3. Timeline 430 also illustrates the recording of the second video portion from the time of the trigger event detection at time t3 and continuing until a subsequent trigger event is detected at time t5. The subsequent trigger event may be any one or more of the subsequent trigger events described herein.
Thus, whereas the second video segment 2 may have lasted from time t2 to time t4 under the baseline data storage strategy described in timeline 300, because the trigger event was detected during the second time period T2 at time t3 in timeline 430, the data storage strategy tool's implementation of the third data storage strategy results in the second video segment 2 being recorded from time t2 until the subsequent trigger event is detected at time t5. The combination of the first video portion (lasting from time t2 to time t3 prior to the trigger event) and the second video portion (lasting from time t3 to time t5 following the trigger event) may then be stored as second video segment 2 (i.e., trigger event video segment) into data block 2. Alternatively, according to some embodiments the first video portion may be stored as the second video segment 2 into data block 2. Alternatively, according to some embodiments the second video portion may be stored as the second video segment 2 into data block 2.
A subsequent third video segment 3 may be recorded to capture events occurring during a third time period T3 lasting from time t5 to t6. The data storage strategy tool allows the third video segment 3 to overwrite all, or substantially all, of the first video segment 1 by storing the third video segment 3 over first data block 1. The data storage strategy tool allows for first video segment 1 to be overwritten in first data block 1 because a trigger event was not detected in first video segment 1.
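A minimal sketch of this open-ended recording behavior, assuming simple camera and stop-trigger interfaces (camera.read_frame() and detect_stop_trigger() are hypothetical, not part of the disclosure):

```python
def record_until_stop_trigger(camera, detect_stop_trigger, pre_buffer):
    """Third strategy: keep the first video portion (pre_buffer, e.g., t2 to
    t3), then record open-ended until a subsequent trigger is detected."""
    segment = list(pre_buffer)          # first video portion
    while True:
        frame = camera.read_frame()
        segment.append(frame)           # second video portion (t3 onward)
        if detect_stop_trigger(frame):  # e.g., voice command at time t5
            return segment
```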
At 501, the data storage strategy tool may control recording of a first video segment into a first data block. The recording of the first video segment may correspond to any one of the processes for recording a first video segment described herein.
At 502, the data storage strategy tool determines whether a trigger event is detected during the recording of the first video segment. The detection of the trigger event may correspond to any of the processes for detecting a trigger event described herein. More particularly, 502 may include detecting a user input to the wearable recording device 100, and recognizing the user input as a manual trigger event for identifying the trigger event. For example, during recording of the first video segment, a user may input a command into wearable recording device 100 signaling that a trigger event has been manually detected.
In addition or alternatively, 502 may include detecting an object or person from the first video segment, and recognizing the detected object or person as a predetermined trigger event. For example, the user's child may be recognized during recording of the first video segment, where the recognition of the user's child within a recorded video segment is known to be a predetermined trigger event. Similarly, a specific object such as a birthday cake or famous landmark may be recognized during recording of the first video segment, where recognition of the special object within a recorded video segment is known to be a predetermined trigger event.
In addition or alternatively, 502 may include detecting an audible user command or sound, and recognizing the audible user command or sound as a predetermined trigger event. For example, a specific audible command such as “trigger event detected” from the user may be detected and recognized as a predetermined command for identifying a trigger event. In addition, a specific sound such as tires screeching, a baby's cry, a gunshot, or sirens may be detected and recognized as a predetermined trigger event.
In addition or alternatively, 502 may include detecting an acceleration, deceleration, and/or orientation measurement of wearable recording device 100 (or another device in communication with wearable recording device 100 as described herein), and recognizing the acceleration, deceleration, and/or orientation measurement as a predetermined trigger event. For example, detection of a sudden acceleration or deceleration may correspond to a traffic accident scenario, and therefore detection of a sudden acceleration or deceleration may be recognized as a predetermined trigger event. In addition, a prolonged stillness (i.e., lack of acceleration, lack of deceleration, or no change in orientation) may correspond to a health issue where the user cannot move, and therefore detection of a prolonged stillness may be recognized as a predetermined trigger event.
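For illustration, one plausible threshold-based reading of these motion triggers; the threshold values, window length, and function name are hypothetical assumptions:

```python
ACCEL_SPIKE_G = 4.0   # assumed threshold for a sudden acceleration/deceleration
STILL_G = 0.05        # assumed near-zero motion threshold
STILL_SECONDS = 300   # assumed prolonged-stillness window

def motion_trigger(accel_history, sample_hz):
    """Recognize the two motion triggers described above. accel_history
    holds recent acceleration magnitudes (gravity removed), newest last."""
    if abs(accel_history[-1]) > ACCEL_SPIKE_G:
        return "sudden_motion"          # possible traffic accident scenario
    window_len = int(STILL_SECONDS * sample_hz)
    window = accel_history[-window_len:]
    if len(window) >= window_len and all(abs(a) < STILL_G for a in window):
        return "prolonged_stillness"    # possible health issue
    return None                         # no motion trigger detected
```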
In addition or alternatively, 502 may include detecting certain biometric levels and recognizing certain biometric levels as being predetermined trigger events. For example, if a high heart rate is measured, this may correspond to the user being in a stressful, or even dangerous, situation. Therefore, the detection of certain biometric measurements that are known to indicate potential harm to the user may be recognized as predetermined trigger events.
If a trigger event is detected at 502, then at 503 the data storage strategy tool implements a trigger event protocol. The recognition of a trigger event indicates that the current video segment being recorded will be saved rather than made available for overwriting by a subsequent video segment recording. The trigger event protocol may correspond to any one of the data storage strategies (i.e., data overwriting strategies) initiated by the data storage strategy tool based on the detection of a trigger event described herein.
In addition to the trigger event protocol, according to some embodiments an additional action may be implemented by the data storage strategy tool based on the type of trigger event that is recognized. For example, if the trigger event corresponds to a traffic accident scenario (e.g., sudden increase in acceleration or sudden deceleration), the data storage strategy tool may additionally control smart phone 220 to call a predetermined number (e.g., emergency contact identified in smart phone 220, known emergency services number). As another example, if the trigger event corresponds to a potentially harmful situation for the user, the data storage strategy tool may additionally control smart phone 220 to call a predetermined number (e.g., emergency contact identified in smart phone 220, known emergency services number) and/or control wearable recording device 100 to transmit the first video segment including the recognized trigger event to emergency services.
In addition to the trigger event protocol, according to some embodiments an additional action may be implemented by the data storage strategy tool based on the detection of the trigger event. For example, the detection of the trigger event may cause the data storage strategy tool to begin uploading or saving video data to a new remote system and/or database.
If a trigger event is not detected at 502, then at 504 the data storage strategy tool continues to record the first video segment.
At 505, the data storage strategy tool determines whether a recording period for the first video segment has ended. If it is determined that the recording period for the first video segment has ended, then at 506 the data storage strategy tool controls the recording of a second video segment. If it is determined that the recording period for the first video segment has not ended, then the data storage strategy tool may revert back to 502 to again determine whether a trigger event has occurred during the recording of the first video segment.
At 507, the data storage strategy tool determines whether a trigger event is detected during the recording of the second video segment. The detection of the trigger event may correspond to any of the processes for detecting a trigger event described herein.
If a trigger event is detected at 507, then at 508 the data storage strategy tool implements a trigger event protocol. The trigger event protocol may correspond to any one of the data storage strategies initiated by the data storage strategy tool based on the detection of a trigger event described herein.
If a trigger event is not detected at 507, then at 509 the data storage strategy tool continues to record the second video segment.
At 510, the data storage strategy tool determines whether a recording period for the second video segment has ended. If it is determined that the recording period for the second video segment has ended, then at 511 the data storage strategy tool controls the recording of a third video segment to overwrite the first video segment, or some other previously recorded video segment. Controlling the recording of the third video segment to overwrite a previously recorded video segment may correspond to any one of the data storage strategies (i.e., data overwriting strategies) initiated by the data storage strategy tool based on the detection of a trigger event described herein.
If it is determined that the recording period for the second video segment has not ended, then the data storage strategy tool may revert back to 507 to again determine whether a trigger event has occurred during the recording of the second video segment.
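Taken together, the flow at 501 through 511 might be sketched as a single loop, assuming the camera and trigger-detector interfaces are as in the earlier sketches and that store is a TriggerAwareStore; the step numbers are noted in comments:

```python
def recording_loop(camera, detect_trigger, store, segment_seconds, fps):
    """Record fixed-length segments, checking for a trigger event while
    each segment records (sketch of 501-511; interfaces are assumed)."""
    frames_per_segment = int(segment_seconds * fps)
    while True:                                        # 501/506/511: next segment
        segment, had_trigger = [], False
        for _ in range(frames_per_segment):            # 504/509: keep recording
            frame = camera.read_frame()
            segment.append(frame)
            if not had_trigger and detect_trigger(frame):  # 502/507: check
                had_trigger = True                     # 503/508: trigger protocol
        # 505/510: recording period ended; store the segment, which may
        # overwrite a prior unprotected data block.
        store.store_segment(segment, had_trigger=had_trigger)
```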
According to some embodiments, certain trigger events may be recognized as a command to cease recording of the current video segment and/or to delete the current video segment being recorded. For example, if an input prompt that asks for personal or private information (e.g., personal identification number (PIN), social security number, passwords, documents labeled as being confidential) is detected from the current video segment being recorded by a video recognition device, the data storage strategy tool may recognize this as a predetermined trigger event for ceasing the recording of the current video segment and/or deleting the current video segment recording. It follows that the recognition of such a predetermined trigger event will cause the data storage strategy tool to implement a trigger event protocol that ceases the recording of the current video segment and/or deletes the current video segment recording.
In addition, the user's location may be tracked via the wearable recording device 100 (or another device in communication with the wearable recording device), such that the detection of the user at certain predetermined locations (e.g., banks or other financial institutions) may be recognized by the data storage strategy tool as a trigger event for ceasing the recording of the current video segment and/or deleting the current video segment recording.
In addition, facial recognition of certain predetermined people, or the audible recognition of certain people, may be recognized by the data storage strategy tool as a trigger event for ceasing the recording of the current video segment and/or deleting the current video segment recording.
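A minimal sketch of such a cease-and-delete protocol, assuming hypothetical event labels produced by the recognition and location tracking described above:

```python
PRIVACY_TRIGGERS = {"pin_prompt", "confidential_document",
                    "bank_location", "protected_person"}  # assumed labels

def apply_privacy_protocol(event, recorder, current_segment):
    """Cease recording and/or delete the current segment when an assumed
    privacy-related trigger event is recognized."""
    if event in PRIVACY_TRIGGERS:
        recorder.stop()           # cease recording the current video segment
        current_segment.clear()   # and/or delete what was already captured
        return True
    return False
```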
At 601, the data storage strategy tool may save a first video portion going back a first predetermined time period prior to the detection of a trigger event. The first video portion may go back into a previously recorded video segment, or the first video portion may go back to a time within the video segment currently being recorded.
At 602, the data storage strategy tool may continue to record and save a second video portion starting from the time when the trigger event was detected. The second video portion may continue to record for a second predetermined time period following the detection of the trigger event.
At 603, the data storage strategy tool may save the first video portion and the second video portion as a trigger event video segment. The data storage strategy tool may further control a data storage strategy to prevent future video segment recordings from overwriting the trigger event video segment for at least a predetermined time period or until a user provides instructions allowing for the overwriting of the trigger event video segment.
At 604, the data storage strategy tool may commence recording subsequent video segments according to any of the data storage strategies described herein.
At 611, the data storage strategy tool may save a first video portion going back a predetermined time period prior to the detection of a trigger event. The first video portion may go back into a previously recorded video segment, or the first video portion may go back to a time within the video segment currently being recorded.
At 612, the data storage strategy tool may continue to record and save a second video portion starting from the time when the trigger event was detected. The second video portion may continue recording until a subsequent trigger event signifying the end of the significant event corresponding to the initial trigger event is detected.
At 613, the data storage strategy tool determines whether the subsequent trigger event is detected. If the subsequent trigger event is detected, then at 614 the data storage strategy tool ceases recording the second video portion. However, if the subsequent trigger event is not detected, the data storage strategy tool may revert back to 612 and continue recording the second video portion.
At 615, the data storage strategy tool may save the first video portion and the second video portion as a trigger event video segment. The data storage strategy tool may further control a data storage strategy to prevent future video segment recordings from overwriting the trigger event video segment for at least a predetermined time period or until a user provides instructions allowing for the overwriting of the trigger event video segment.
At 616, the data storage strategy tool may commence recording subsequent video segments according to any of the data storage strategies described herein.
Although description has been provided for the data storage strategy tool controlling storage of video data (e.g., image and sound data), it is within the scope of this disclosure for the data storage strategy tool to control storage of additional data concurrently with the video data. For example, the data storage strategy tool may control storage of smell data, weather data, humidity data, gyroscope measurement data, and other sensor data obtained by sensors accessible by the data storage strategy tool according to any one or more of the data storage strategies described herein. Further, when the data storage strategy tool is configured to control the storage of video data and additional data, the data storage strategy tool may select which data to store and which data not to store in accordance with any one or more of the data storage strategies described herein.
Referring to
In one implementation, the wearable recording device 100, smart watch 210, or smart phone 220 may communicate directly with the secure server 240. In other implementations, the wearable recording device 100, smart watch 210, or smart phone 220 may communicate with the secure server 240 only after downloading an authenticating software application or token from the secure server 240. Thus, computer-executable instructions, such as program modules, being executed by a processor or other computing capability on one or more of the devices illustrated in the system of
With reference to
The processor 702 represents a central processing unit of any type of architecture, such as a CISC (Complex Instruction Set Computing), RISC (Reduced Instruction Set Computing), VLIW (Very Long Instruction Word), or a hybrid architecture, although any appropriate processor may be used. The processor 702 executes instructions and includes portions of the computer 700 that control the operation of the entire computer 700. The processor 702 may also represent a controller that organizes data and program storage in memory and transfers data and other information between the various parts of the computer 700.
The processor 702 is configured to receive input data and/or user commands from the input device 712. The input device 712 may be a keyboard, mouse or other pointing device, trackball, scroll, button, touchpad, touch screen, keypad, microphone, speech recognition device, video recognition device, or any other appropriate mechanism for the user to input data to the computer 700 and control operation of the computer 700 and/or operation of the data storage strategy tool described herein. Although only one input device 712 is shown, in another embodiment any number and type of input devices may be included. For example, input device 712 may include an accelerometer, a gyroscope, and a global positioning system (GPS) transceiver.
The processor 702 may also communicate with other computers via the network 726 to receive instructions 724, where the processor may control the storage of such instructions 724 into any one or more of the main memory 704, such as random access memory (RAM), static memory 706, such as read only memory (ROM), and the storage device 716. The processor 702 may then read and execute the instructions 724 from any one or more of the main memory 704, static memory 706, or storage device 716. The instructions 724 may also be stored onto any one or more of the main memory 704, static memory 706, or storage device 716 through other sources. The instructions 724 may correspond to, for example, instructions that make up the data storage strategy tool described herein.
Although computer 700 is shown to contain only a single processor 702 and a single bus 708, the disclosed embodiment applies equally to computers that may have multiple processors and to computers that may have multiple busses with some or all performing different functions in different ways.
The storage device 716 represents one or more mechanisms for storing data. For example, the storage device 716 may include a computer readable medium 722 such as read-only memory (ROM), RAM, non-volatile storage media, optical storage media, flash memory devices, and/or other machine-readable media. In other embodiments, any appropriate type of storage device may be used. Although only one storage device 716 is shown, multiple storage devices and multiple types of storage devices may be present. Further, although the computer 700 is drawn to contain the storage device 716, it may be distributed across other computers, for example on a server.
The storage device 716 may include a controller (not shown) and a computer readable medium 722 having instructions 724 capable of being executed by the processor 702 to carry out the functions as previously described herein with reference to the data storage strategy tool. In another embodiment, some or all of the functions are carried out via hardware in lieu of a processor-based system. In one embodiment, the controller is a web browser, but in other embodiments the controller may be a database system, a file system, an electronic mail system, a media manager, an image manager, or may include any other functions capable of accessing data items. The storage device 716 may also contain additional software and data (not shown), which is not necessary to understand the other features.
Output device 710 is configured to present information to the user. For example, the output device 710 may be a display such as a liquid crystal display (LCD), a gas or plasma-based flat-panel display, or a traditional cathode-ray tube (CRT) display or other well-known type of display in the art of computer hardware. Accordingly, in some embodiments the output device 710 displays a user interface. In other embodiments, the output device 710 may be a speaker configured to output audible information to the user. In still other embodiments, any combination of output devices may be represented by the output device 710.
Network interface device 720 provides the computer 700 with connectivity to the network 726 through any suitable communications protocol. The network interface device 720 sends and/or receives data from the network 726 via a wireless or wired transceiver 714. The transceiver 714 may be a cellular frequency, radio frequency (RF), infrared (IR) or any of a number of known wireless or wired transmission systems capable of communicating with a network 726 or other computer device having some or all of the features of computer 700. Bus 708 may represent one or more busses, e.g., USB, PCI, ISA (Industry Standard Architecture), X-Bus, EISA (Extended Industry Standard Architecture), or any other appropriate bus and/or bridge (also called a bus controller).
Computer 700 may be implemented using any suitable hardware and/or software, such as a personal computer or other electronic computing device. In addition to the various types of wearable devices described herein, computer 700 may also be a portable computer, laptop, tablet or notebook computer, PDA, pocket computer, appliance, telephone, or mainframe computer. Network 726 may be any suitable network and may support any appropriate protocol suitable for communication to the computer 700. In an embodiment, network 726 may support wireless communications. In another embodiment, network 726 may support hard-wired communications, such as a telephone line or cable. In another embodiment, network 726 may support the Ethernet IEEE (Institute of Electrical and Electronics Engineers) 802.3x specification. In another embodiment, network 726 may be the Internet and may support IP (Internet Protocol). In another embodiment, network 726 may be a LAN or a WAN. In another embodiment, network 726 may be a hotspot service provider network. In another embodiment, network 726 may be an intranet. In another embodiment, network 726 may be a GPRS (General Packet Radio Service) network. In another embodiment, network 726 may be any appropriate cellular data network or cell-based radio network technology. In another embodiment, network 726 may be an IEEE 802.11 wireless network. In still another embodiment, network 726 may be any suitable network or combination of networks. Although one network 726 is shown, in other embodiments any number of networks (of the same or different types) may be present.
According to some embodiments, certain public use modes of the wearable recording device 100 in combination with at least one other wearable recording device 800 are disclosed. The wearable recording device 800 may include any combination of the components illustrated in
In the party event scenario illustrated in
Another public use mode available when multiple wearable recording devices are communicating with each other may include a 3D capability for later viewing an event in three dimensions by combining the multiple different perspective views recorded by at least three different wearable recording devices of a same event. This may be a post-processing procedure implemented by secure server 240 after receiving video segment recordings from at least three different wearable recording devices.
Another public use mode available when multiple wearable recording devices are communicating with each other may include the ability of, for example, wearable recording device 100 being able to view video being recorded by wearable recording device 800. The video being recorded by wearable recording device 800 may be transmitted to wearable recording device 100 and displayed on display unit 103 of wearable recording device 100. Similarly, video being recorded by wearable recording device 100 may be transmitted to wearable recording device 800 and displayed on display unit 803 of wearable recording device 800. In this way, a public use mode may allow a first wearable recording device to view video being recorded by another wearable recording device to provide different viewing perspectives of a common event. This public use mode may be useful to give users an option of switching to someone else's view replay if their own view is blocked.
It should be understood that the various techniques described herein may be implemented in connection with hardware or software or, where appropriate, with a combination of both. Thus, the methods and apparatus of the presently disclosed subject matter, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the presently disclosed subject matter. In the case of program code execution on programmable computers, the computing device generally includes a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. One or more programs may implement or use the processes described in connection with the presently disclosed subject matter, e.g., through the use of an API, reusable controls, or the like. Such programs may be implemented in a high level procedural or object-oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language and it may be combined with hardware implementations.
Although exemplary embodiments may refer to using aspects of the presently disclosed subject matter in the context of one or more stand-alone computer systems, the subject matter is not so limited, but rather may be implemented in connection with any computing environment, such as a network or distributed computing environment. Still further, aspects of the presently disclosed subject matter may be implemented in or across a plurality of processing chips or devices, and storage may similarly be spread across a plurality of devices. Such devices might include personal computers, network servers, and handheld devices, for example.
This application is a continuation of U.S. patent application Ser. No. 16/674,366, filed Nov. 5, 2019, which is a continuation of U.S. patent application Ser. No. 15/378,710 (now U.S. Pat. No. 10,506,281), filed on Dec. 14, 2016, which claims benefit to U.S. Provisional Patent Application No. 62/271,057, filed Dec. 22, 2015, the entirety of all of which are hereby incorporated by reference herein.
Number | Name | Date | Kind |
---|---|---|---|
5890141 | Carney et al. | Mar 1999 | A |
5903904 | Peairs | May 1999 | A |
5930501 | Neil | Jul 1999 | A |
6005623 | Takahashi | Dec 1999 | A |
6195452 | Royer | Feb 2001 | B1 |
6315195 | Ramachandran | Nov 2001 | B1 |
6668372 | Wu | Dec 2003 | B1 |
6930718 | Parulski et al. | Aug 2005 | B2 |
7296734 | Pliha | Nov 2007 | B2 |
7647897 | Jones | Jan 2010 | B2 |
7766223 | Mello | Aug 2010 | B1 |
7865425 | Waelbroeck | Jan 2011 | B2 |
7873556 | Dolan | Jan 2011 | B1 |
7974869 | Sharma | Jul 2011 | B1 |
8009931 | Li | Aug 2011 | B2 |
8064729 | Li | Nov 2011 | B2 |
8118654 | Nicolas | Feb 2012 | B1 |
8131636 | Viera et al. | Mar 2012 | B1 |
8396623 | Maeda et al. | Mar 2013 | B2 |
RE44274 | Popadic et al. | Jun 2013 | E |
8483473 | Roach | Jul 2013 | B2 |
8531518 | Zomet | Sep 2013 | B1 |
8582862 | Nepomniachtchi et al. | Nov 2013 | B2 |
8768836 | Acharya | Jul 2014 | B1 |
8818033 | Liu | Aug 2014 | B1 |
8824772 | Viera | Sep 2014 | B2 |
8929640 | Mennie et al. | Jan 2015 | B1 |
9195986 | Christy et al. | Nov 2015 | B2 |
9235860 | Boucher et al. | Jan 2016 | B1 |
9270804 | Dees et al. | Feb 2016 | B2 |
9384409 | Ming | Jul 2016 | B1 |
9387813 | Moeller et al. | Jul 2016 | B1 |
9524269 | Brinkmann et al. | Dec 2016 | B1 |
9613467 | Roberts et al. | Apr 2017 | B2 |
9613469 | Fish et al. | Apr 2017 | B2 |
9824453 | Collins et al. | Nov 2017 | B1 |
10157326 | Long et al. | Dec 2018 | B2 |
10210767 | Johansen | Feb 2019 | B2 |
10217375 | Waldron | Feb 2019 | B2 |
10402944 | Pribble et al. | Sep 2019 | B1 |
10460295 | Oakes, III et al. | Oct 2019 | B1 |
10482432 | Oakes, III et al. | Nov 2019 | B1 |
10818282 | Clauer Salyers | Oct 2020 | B1 |
10956879 | Eidson | Mar 2021 | B1 |
11030752 | Backlund | Jun 2021 | B1 |
11042940 | Limas | Jun 2021 | B1 |
11042941 | Limas | Jun 2021 | B1 |
11062130 | Medina, III | Jul 2021 | B1 |
11062131 | Medina, III | Jul 2021 | B1 |
11062283 | Prasad | Jul 2021 | B1 |
11064111 | Prasad | Jul 2021 | B1 |
11068976 | Voutour | Jul 2021 | B1 |
11070868 | Mortensen | Jul 2021 | B1 |
11121989 | Castinado | Sep 2021 | B1 |
11182753 | Oakes, III et al. | Nov 2021 | B1 |
11222315 | Prasad et al. | Jan 2022 | B1 |
11232517 | Medina et al. | Jan 2022 | B1 |
11250398 | Prasad et al. | Feb 2022 | B1 |
11288898 | Moon | Mar 2022 | B1 |
11328267 | Medina, III | May 2022 | B1 |
20010020949 | Gong et al. | Sep 2001 | A1 |
20010030695 | Prabhu et al. | Oct 2001 | A1 |
20010051965 | Guillevic | Dec 2001 | A1 |
20020075380 | Seeger et al. | Jun 2002 | A1 |
20020116335 | Star | Aug 2002 | A1 |
20020145035 | Jones | Oct 2002 | A1 |
20020152169 | Dutta | Oct 2002 | A1 |
20020154815 | Mizutani | Oct 2002 | A1 |
20020172516 | Aoyama | Nov 2002 | A1 |
20030046223 | Crawford | Mar 2003 | A1 |
20030051138 | Maeda et al. | Mar 2003 | A1 |
20030081121 | Swan | May 2003 | A1 |
20030097592 | Adusumilli | May 2003 | A1 |
20030119478 | Nagy et al. | Jun 2003 | A1 |
20030200107 | Allen et al. | Oct 2003 | A1 |
20030213841 | Josephson et al. | Nov 2003 | A1 |
20040010803 | Berstis | Jan 2004 | A1 |
20040061913 | Takiguchi | Apr 2004 | A1 |
20040066419 | Pyhalammi | Apr 2004 | A1 |
20040136586 | Okamura | Jul 2004 | A1 |
20040193878 | Dillinger et al. | Sep 2004 | A1 |
20040201695 | Inasaka | Oct 2004 | A1 |
20040202349 | Erol et al. | Oct 2004 | A1 |
20040217170 | Takiguchi et al. | Nov 2004 | A1 |
20040238619 | Nagasaka et al. | Dec 2004 | A1 |
20040267665 | Nam et al. | Dec 2004 | A1 |
20050001924 | Honda | Jan 2005 | A1 |
20050015341 | Jackson | Jan 2005 | A1 |
20050034046 | Berkmann | Feb 2005 | A1 |
20050078192 | Sakurai | Apr 2005 | A1 |
20050102208 | Gudgeon | May 2005 | A1 |
20050128333 | Park | Jun 2005 | A1 |
20050133586 | Rekeweg et al. | Jun 2005 | A1 |
20050144131 | Aziz | Jun 2005 | A1 |
20050157174 | Kitamura et al. | Jul 2005 | A1 |
20050165641 | Chu | Jul 2005 | A1 |
20050198364 | Val et al. | Sep 2005 | A1 |
20050205660 | Munte | Sep 2005 | A1 |
20050216409 | McMonagle et al. | Sep 2005 | A1 |
20050238257 | Kaneda et al. | Oct 2005 | A1 |
20050273430 | Pliha | Dec 2005 | A1 |
20050281450 | Richardson | Dec 2005 | A1 |
20060026140 | King | Feb 2006 | A1 |
20060171697 | Nijima | Feb 2006 | A1 |
20060071950 | Kurzweil et al. | Apr 2006 | A1 |
20060077941 | Alagappan et al. | Apr 2006 | A1 |
20060112013 | Maloney | May 2006 | A1 |
20060124728 | Kotovich et al. | Jun 2006 | A1 |
20060186194 | Richardson | Aug 2006 | A1 |
20060229987 | Leekley | Oct 2006 | A1 |
20060255124 | Hoch | Nov 2006 | A1 |
20060270421 | Phillips | Nov 2006 | A1 |
20070005467 | Haigh et al. | Jan 2007 | A1 |
20070030363 | Cheatle et al. | Feb 2007 | A1 |
20070058874 | Tabata et al. | Mar 2007 | A1 |
20070116364 | Kleihorst et al. | May 2007 | A1 |
20070118747 | Pintsov et al. | May 2007 | A1 |
20070130063 | Jindia | Jun 2007 | A1 |
20070136078 | Plante | Jun 2007 | A1 |
20070244811 | Tumminaro | Oct 2007 | A1 |
20080013831 | Aoki | Jan 2008 | A1 |
20080040280 | Davis | Feb 2008 | A1 |
20080069427 | Liu | Mar 2008 | A1 |
20080140552 | Blaikie | Jun 2008 | A1 |
20080192129 | Walker | Aug 2008 | A1 |
20080250196 | Mori | Oct 2008 | A1 |
20090171723 | Jenkins | Jul 2009 | A1 |
20090176511 | Morrison | Jul 2009 | A1 |
20090222347 | Whitten | Sep 2009 | A1 |
20090240574 | Carpenter | Sep 2009 | A1 |
20100008579 | Smirnov | Jan 2010 | A1 |
20100016016 | Brundage et al. | Jan 2010 | A1 |
20100038839 | DeWitt et al. | Feb 2010 | A1 |
20100069093 | Morrison | Mar 2010 | A1 |
20100069155 | Schwartz | Mar 2010 | A1 |
20100076890 | Low | Mar 2010 | A1 |
20100112975 | Sennett | May 2010 | A1 |
20100161408 | Karson | Jun 2010 | A1 |
20100201711 | Fillion et al. | Aug 2010 | A1 |
20100262607 | Vassilvitskii | Oct 2010 | A1 |
20100287250 | Carlson | Nov 2010 | A1 |
20110015963 | Chafle | Jan 2011 | A1 |
20110016109 | Vassilvitskii | Jan 2011 | A1 |
20110054780 | Dhanani | Mar 2011 | A1 |
20110082747 | Khan | Apr 2011 | A1 |
20110083101 | Sharon | Apr 2011 | A1 |
20110105092 | Felt | May 2011 | A1 |
20110112985 | Kocmond | May 2011 | A1 |
20120036014 | Sunkada | Feb 2012 | A1 |
20120052874 | Kumar | Mar 2012 | A1 |
20120098705 | Yost | Apr 2012 | A1 |
20120109793 | Abeles | May 2012 | A1 |
20120113489 | Heit et al. | May 2012 | A1 |
20120150767 | Chacko | Jun 2012 | A1 |
20120230577 | Calman et al. | Sep 2012 | A1 |
20120296768 | Fremont-Smith | Nov 2012 | A1 |
20130021651 | Popadic et al. | Jan 2013 | A9 |
20130155474 | Roach et al. | Jun 2013 | A1 |
20130191261 | Chandler | Jul 2013 | A1 |
20130201534 | Carlen | Aug 2013 | A1 |
20130324160 | Sabatelli | Dec 2013 | A1 |
20130332004 | Gompert et al. | Dec 2013 | A1 |
20130332219 | Clark | Dec 2013 | A1 |
20130346306 | Kopp | Dec 2013 | A1 |
20130346307 | Kopp | Dec 2013 | A1 |
20140010467 | Mochizuki et al. | Jan 2014 | A1 |
20140032406 | Roach et al. | Jan 2014 | A1 |
20140037183 | Gorski et al. | Feb 2014 | A1 |
20140156501 | Howe | Jun 2014 | A1 |
20140197922 | Stanwood et al. | Jul 2014 | A1 |
20140203508 | Pedde | Jul 2014 | A1 |
20140207673 | Jeffries | Jul 2014 | A1 |
20140207674 | Schroeder | Jul 2014 | A1 |
20140244476 | Shvarts | Aug 2014 | A1 |
20140313335 | Koravadi | Oct 2014 | A1 |
20140351137 | Chisholm | Nov 2014 | A1 |
20140374486 | Collins, Jr. | Dec 2014 | A1 |
20150134517 | Cosgray | May 2015 | A1 |
20150235484 | Kraeling et al. | Aug 2015 | A1 |
20150244994 | Jang et al. | Aug 2015 | A1 |
20150294523 | Smith | Oct 2015 | A1 |
20150348591 | Kaps | Dec 2015 | A1 |
20160026866 | Sundaresan | Jan 2016 | A1 |
20160034590 | Endras et al. | Feb 2016 | A1 |
20160142625 | Weksler et al. | May 2016 | A1 |
20160189500 | Kim et al. | Jun 2016 | A1 |
20160335816 | Thoppae et al. | Nov 2016 | A1 |
20170039637 | Wandelmer | Feb 2017 | A1 |
20170068421 | Carlson | Mar 2017 | A1 |
20170132583 | Nair | May 2017 | A1 |
20170146602 | Samp et al. | May 2017 | A1 |
20170229149 | Rothschild | Aug 2017 | A1 |
20170263120 | Durie, Jr. et al. | Sep 2017 | A1 |
20170337610 | Beguesse | Nov 2017 | A1 |
20180108252 | Pividori | Apr 2018 | A1 |
20180197118 | McLaughlin | Jul 2018 | A1 |
20190026577 | Hall et al. | Jan 2019 | A1 |
20190122222 | Uechi | Apr 2019 | A1 |
20210097615 | Gunn, Jr. | Apr 2021 | A1 |
Number | Date | Country |
---|---|---|
2619884 | Mar 2007 | CA |
1897644 | Jan 2007 | CN |
1967565 | May 2007 | CN |
0984410 | Mar 2000 | EP |
2004-23158 | Jan 2004 | JP |
3708807 | Oct 2005 | JP |
WO 9614707 | May 1996 | WO |
WO 0161436 | Aug 2001 | WO |
WO 2005043857 | May 2005 | WO |
WO 2007024889 | Mar 2007 | WO |
Entry |
---|
Bieniecki, Wojciech et al., “Image Preprocessing for Improving OCR Accuracy”, Computer Engineering Department, Technical University of Lodz, al. Politechniki 11, Lodz, Poland, May 23, 2007. |
Shaikh, Aijaz Ahmed et al., “Auto Teller Machine (ATM) Fraud—Case Study of Commercial Bank in Pakistan”, Department of Business Administration, Sukkur Institute of Business Administration, Sukkur, Pakistan, Aug. 5, 2012. |
Tiwari, Rajnish et al., “Mobile Banking as Business Strategy”, IEEE Xplore, Jul. 2006. |
Lyn C. Thomas, “A survey of credit and behavioural scoring: forecasting financial risk of lending to consumers”, International Journal of Forecasting, (Risk) (2000). |
Non-Final Office Action issued on U.S. Appl. No. 14/293,159 dated Aug. 11, 2022. |
Non-Final Office Action issued on U.S. Appl. No. 16/455,024 dated Sep. 7, 2022. |
Non-Final Office Action issued on U.S. Appl. No. 17/071,678 dated Sep. 14, 2022. |
Non-Final Office Action issued on U.S. Appl. No. 17/180,075 dated Oct. 4, 2022. |
Non-Final Office Action issued on U.S. Appl. No. 17/511,822 dated Sep. 16, 2022. |
Non-Final Office Action issued on U.S. Appl. No. 17/568,849 dated Oct. 4, 2022. |
Yong Gu Ji et al., “A Usability Checklist for the Usability Evaluation of Mobile Phone User Interface”, International Journal of Human-Computer Interaction, 20(3), 207-231 (2006). |
Printout of news article dated Feb. 13, 2008, announcing a Nokia phone providing audio cues for capturing a document image. |
IPR Petition 2022-01593, Truist Bank v. United Services Automobile Association, filed Oct. 11, 2022. |
Fletcher, Lloyd A., and Rangachar Kasturi, “A robust algorithm for text string separation from mixed text/graphics images”, IEEE Transactions on Pattern Analysis and Machine Intelligence 10.6 (1988): 910-918. |
IPR 2022-00076 filed Nov. 17, 2021 on behalf of PNC Bank N.A., 98 pages. |
IPR 2022-00075 filed Nov. 5, 2021 on behalf of PNC Bank N.A., 90 pages. |
IPR 2022-00050 filed Oct. 22, 2021 on behalf of PNC Bank N.A., 126 pages. |
IPR 2022-00049 filed Oct. 22, 2021 on behalf of PNC Bank N.A., 70 pages. |
About Network Servers, GlobalSpec (retrieved from https://web.archive.org/web/20051019130842/http://globalspec.com:80/LearnMore/Networking_Communication_Equipment/Networking_Equipment/Network_Servers) (“GlobalSpec”). |
FDIC: Check Clearing for the 21st Century Act (Check21), Fed. Deposit Ins. Corp., Apr. 25, 2016 (retrieved from https://web.archive.org/web/20161005124304/https://www.fdic.gov/consumers/assistance/protection/check21.html) (“FDIC”). |
Andrew S. Tanenbaum, Modern Operating Systems, Second Edition (2001). |
Arnold et al., The Java Programming Language, Fourth Edition (2005). |
Consumer Assistance & Information—Check 21 https://www.fdic.gov/consumers/assistance/protection/check21.html (FDIC). |
Halonen et al., GSM, GPRS, and EDGE Performance: Evolution Towards 3G/UMTS, Second Edition (2003). |
Heron, Advanced Encryption Standard (AES), 12 Network Security 8 (2009). |
Immich et al., Performance Analysis of Five Interprocess Communication Mechanisms Across UNIX Operating Systems, 68 J. Sys. & Software 27 (2003). |
Leach, et al., A Universally Unique Identifier (UUID) URN Namespace, (Jul. 2005) retrieved from https://www.ietf.org/rfc/rfc4122.txt. |
N. Ritter & M. Ruth, The GeoTIFF Data Interchange Standard for Raster Geographic Images, 18 Int. J. Remote Sensing 1637 (1997). |
Pbmplus, an image file format conversion package, retrieved from https://web.archive.org/web/20040202224728/https://www.acme.com/software/pbmplus/. |
Petition filed by PNC Bank N.A. for Inter Partes Review of Claims 1-23 of U.S. Pat. No. 10,482,432, dated Jul. 14, 2021, IPR2021-01071, 106 pages. |
Petition filed by PNC Bank N.A. for Inter Partes Review of Claims 1-7, 10-21 and 23 of U.S. Pat. No. 10,482,432, dated Jul. 14, 2021, IPR2021-01074. |
Petition filed by PNC Bank N.A. for Inter Partes Review of Claims 1-18 of U.S. Pat. No. 10,621,559, dated Jul. 21, 2021, IPR2021-01076, 111 pages. |
Petition filed by PNC Bank N.A. for Inter Partes Review of Claims 1-18 of U.S. Pat. No. 10,621,559, filed Jul. 21, 2021, IPR2021-01077; 100 pages. |
Petition filed by PNC Bank N.A. for Inter Partes Review of Claims 1-30 of U.S. Pat. No. 10,013,681, filed Aug. 27, 2021, IPR2021-01381, 127 pages. |
Petition filed by PNC Bank N.A. for Inter Partes Review of U.S. Pat. No. 10,013,605, filed Aug. 27, 2021, IPR2021-01399, 113 pages. |
Readdle, Why Scanner Pro is Way Better Than Your Camera? (Jun. 27, 2016) retrieved from https://readdle.com/blog/why-scanner-pro-is-way-better-than-your-camera. |
Santomero, The Evolution of Payments in the U.S.: Paper vs. Electronic (2005) retrieved from https://web.archive.org/web/20051210185509/https://www.philadelphiafed.org/publicaffairs/speeches/2005_santomero9.html. |
Schindler, Scanner Pro Review (Dec. 27, 2016) retrieved from https://www.pcmag.com/reviews/scanner-pro. |
Sing Li & Jonathan Knudsen, Beginning J2ME: From Novice to Professional, Third Edition (2005), ISBN (pbk): 1-59059-479-7, 468 pages. |
Wang, Ching-Lin et al., “Chinese document image retrieval system based on proportion of black pixel area in a character image”, The 6th International Conference on Advanced Communication Technology, vol. 1, IEEE, 2004. |
Zaw, Kyi Pyar and Zin Mar Kyu, “Character Extraction and Recognition for Myanmar Script Signboard Images using Block based Pixel Count and Chain Codes”, 2018 IEEE/ACIS 17th International Conference on Computer and Information Science (ICIS), IEEE, 2018. |
Jung et al., “Rectangle Detection based on a Windowed Hough Transform”, IEEE Xplore, 2004, 8 pgs. |
Craig Vaream, “Image Deposit Solutions: Emerging Solutions for More Efficient Check Processing”, Nov. 2005, 16 pages. |
Certificate of Accuracy related to Article entitled, “Deposit checks by mobile” on webpage: https://www.elmundo.es/navegante/2005/07/21/empresas/1121957427.html signed by Christian Paul Scrogum (translator) on Sep. 9, 2021. |
Bruno-Britz, Maria, “Mitek Launches Mobile Phone Check Capture Solution,” Bank Systems & Technology, Information Week (Jan. 24, 2008). |
V User Guide, https://www.lg.com/us/support/manualsdocuments?customerModelCode=%20LGVX9800&csSalesCode=LGVX9800, select “VERIZON(USA) en”; The V_UG_051125.pdf. |
MING Phone User Manual, 2006. |
Patel, Kunur, “How Mobile Technology is Changing Banking's Future” AdAge, Sep. 21, 2009, 4 pages. |
Spencer, Harvey, “Controlling Image Quality at the Point of Capture” Check 21, Digital Check Corporation & HSA 2004. |
Moseik, Celeste K., “Customer Adoption of Online Restaurant Services: A Multi-Channel Approach”, Order No. 1444649, University of Delaware, 2007, Ann Arbor: ProQuest, Web, Jan. 10, 2022 (Year: 2007). |
ANS X9.100-160-1-2004, Part 1: Placement and Location of Magnetic Ink Printing (MICR), American National Standard for Financial Services, approved Oct. 15, 2004. |
Clancy, Heather, “Turning cellphones into scanners”, The New York Times, Feb. 12, 2005; https://www.nytimes.com/2005/02/12/business/worldbusiness/turning-cellphones-into-scanners.html. |
Consumer Guide to Check 21 and Substitute Checks, The Federal Reserve Board, The Wayback Machine, Oct. 28, 2004, https://web.archive.org/web/20041102233724/http://www.federalreserve.gov. |
Curtin, Dennis P., A Short Course in Digital Photography, Chapter 7: Graphic File Formats. |
Dance, Christopher, “Mobile Document Imaging”, Xerox, Research Centre Europe, XRCE Image Processing Area, Nov. 2004. |
Digital Photography Now, Nokia N73 Review, Oct. 28, 2006. |
Federal Reserve System, 12 CFR Part 229, Regulation CC: Docket No. R-1176, Availability of Funds and Collection of Checks, Board of Governors of the Federal Reserve System Final rule. |
Financial Services Policy Committee, Federal Reserve Banks Plan Black-and-White Image Standard and Quality Checks, May 18, 2004. |
MICR-Line Issues Associated With The Check 21 Act and the Board's Proposed Rule, Prepared by Federal Reserve Board Staff, Apr. 27, 2004. |
Microsoft Computer Dictionary, Fifth Edition, Copyright 2002. |
HTTP Over TLS, Network Working Group, May 2000, Memo. |
Nokia N73—Full phone specifications. |
Ranjan, Amit, “Using a Camera with Windows Mobile 5”, Jul. 21, 2006. |
Reed, John, “FT.com site: Mobile users branch out”, ProQuest, Trade Journal, Oct. 6, 2005. |
Weiqi Luo et al., “Robust Detection of Region-Duplication Forgery in Digital Image”, Guoping Qiu, School of Computer Science, University of Nottingham, NG8 1BB, UK, Jan. 2006. |
Final Written Decision relating to U.S. Pat. No. 8,699,779, IPR2021-01070, Jan. 19, 2023. |
Final Written Decision relating to U.S. Pat. No. 8,877,571, IPR2021-01073, Jan. 19, 2023. |
Final Written Decision relating to U.S. Pat. No. 10,621,559, IPR2021-01077, Jan. 20, 2023. |
Number | Date | Country | |
---|---|---|---|
62271057 | Dec 2015 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 16674366 | Nov 2019 | US |
Child | 17349240 | US | |
Parent | 15378710 | Dec 2016 | US |
Child | 16674366 | US |