The present invention relates generally to solutions for activating video recording based on sensor inputs and for broadcasting the recorded video.
Systems for recording the operation of a vehicle or of a person operating a vehicle typically include an array of cameras, an array of sensors, and storage for the recorded data. In one implementation of such systems, the cameras record data continuously and the recorded data is saved together with the sensors' information. The content saved in the storage is analyzed off-line by a person to detect events that may have caused, for example, a road accident.
In another implementation of such a vehicle recording system, the recording is managed by a central element that determines, based on the sensors' inputs, when to retrieve content from the cameras. The retrieved content is collected by the central element and then sent to a remote location for analysis. The disadvantage of such a system is that while the central element retrieves data from the cameras it stops monitoring the sensors, so valuable information may not be gathered. As a result, such a system typically includes only one sensor monitored by the central element. Furthermore, the conventional vehicle recording system is limited to determining when to collect video information from the cameras. Any information that was not previously collected is unavailable for the off-line analysis.
As a result, conventional vehicle recording systems are limited to applications related to road accidents or the safety of a car driver. Furthermore, due to the lack of detailed information recorded during the operation of the vehicle, the recorded information must be analyzed by people who are uniquely trained for such tasks.
It would therefore be advantageous to provide a solution that overcomes the deficiencies of conventional vehicle recording systems.
Certain embodiments disclosed herein include a system for monitoring the operation of a vehicle. The system comprises a plurality of sensors; a plurality of optical capture devices; a memory in which at least one recording schema is stored, the at least one recording schema containing rules for operating at least one of the plurality of optical capture devices responsive to at least one of the plurality of sensors and respective of at least one activity to be captured; and a recorder coupled to the plurality of sensors, the plurality of optical capture devices, and the memory, wherein the recorder determines, based on the at least one recording schema and responsive to at least an input from at least one of the plurality of sensors, which of the plurality of optical capture devices to operate.
Certain embodiments disclosed herein include a method for determining the operation of a plurality of optical capturing devices mounted on a vehicle. The method comprises receiving sensory inputs from a plurality of sensors mounted on the vehicle; determining which of a plurality of recording schemas stored in memory are to be activated responsive to at least an input from the sensory inputs; operating at least one optical capturing device based at least on a determined recording schema from the plurality of recording schemas, wherein a recording schema contains rules for operating at least one of the plurality of optical capture devices responsive to at least one of the plurality of sensors and respective of at least one activity to be captured; and recording at least an information segment by the at least one optical capturing device responsive to the determined schema.
The subject matter that is regarded as the invention is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other features and advantages of the invention will be apparent from the following detailed description taken in conjunction with the accompanying drawings.
It is important to note that the embodiments disclosed are only examples of the many advantageous uses of the innovative teachings herein. In general, statements made in the specification of the present disclosure do not necessarily limit any of the various claimed inventions. Moreover, some statements may apply to some inventive features but not to others. In general, unless otherwise indicated, singular elements may be in plural and vice versa with no loss of generality. In the drawings, like numerals refer to like parts throughout the several views.
As will be described in detail below, the data recorded by the system 100 can be utilized in many applications including, for example, flight emulation, training of a person operating the vehicle (e.g., pilots, skippers, car drivers, truck drivers, racing car drivers, and so on), detection of driver fatigue, monitoring and alerting of medical conditions of a person operating the vehicle or thereon, tracking supply chains, and so on.
As depicted in
The vehicle recording system 100 captures visual data, audio data, and motion data based on one or more recording schemas and the state of one or more of the sensors. As shown in
The cameras 101 capture visual data (images and video) and audio, and are operated under the control of the recorder 102. The cameras 101 may include, but are not limited to, video cameras, still cameras, infrared cameras, smart phones, or any computing device having a built-in optical capture device with which the recorder 102 can interface. The cameras 101 may be mounted in various places inside and/or outside the vehicle, either permanently or temporarily. A camera 101 can also include an audio capturing device that captures, for example, the plane radio, intercom audio, and cockpit audio.
The sensors 103 may include, but are not limited to, an accelerometer (or any other type of motion sensor), a GPS (or any other type of position sensor), a heart rate sensor, a temperature sensor, instrumentation sensors (e.g., revolutions per minute (RPM), engine status, tire pressure, etc.), a gyro, a compass, and so on. The recorder 102 includes a processor and internal memory (not shown) configured to perform the recording process as will be described below. The recorder 102 also includes a removable media card, e.g., a flash memory card (not shown in
The recorder 102 includes a network interface (not shown) to interface with the network 115. A user can access the recorder 102 through a user interface (e.g., a web browser) to control the configuration of the recorder 102 and/or to upload and/or modify the recording schemas. In one embodiment, the recorder 102 may be realized in a computing device including, for example, a personal computer, a smart phone, a tablet computer, a laptop computer, and the like.
According to certain embodiments disclosed herein, the recorder 102 controls each of the cameras 101 based on the inputs received from the plurality of sensors 103 and at least one recording schema. That is, the recorder 102 can instruct each of the cameras 101 to start/stop capturing visual/audio data, or to change the zoom and/or view angle, resolution, shutter speed, frame rate of the captured video signal, optical configurations, and so on. This can be useful, for example and without limitation, when the sensory input indicates an event in a particular area around or inside the vehicle. An emphasis of recording can then be determined by the recorder 102, causing the cameras 101 to adapt the recording to best suit the event detected by the sensors 103.
In an embodiment, a recording schema defines segments that should be recorded during the operation of the vehicle. Specifically, for each segment the recording schema defines one or more sensors' inputs that trigger the beginning and the end of the recording, one or more cameras 101 that should capture the visual/audio data, and the state (e.g., zoom, frame rate, view angle, etc.) of each such camera. The settings of each segment are based on the activity that should be captured. The recording schema can also define which of the segments should be tagged as “high interest”. Such “high interest” segments can then be uploaded, preferably in real-time or near real-time, by the web server 110 or the recorder 102 to video-sharing web sites, social media web sites, and the like. A recording schema includes rules defining the operation of each of the cameras 101 responsive to sensory inputs from one or more of the sensors 103. A recording schema is associated with a certain activity to be captured. Various examples of such recording schemas are provided below.
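A recording schema of the kind described above can be pictured as a plain data structure. The sketch below is an illustrative assumption: the field names (`start_trigger`, `cameras`, `high_interest`, and so on) are not taken from the specification, and a real schema would likely be richer.

```python
# Hypothetical shape of a recording schema: per-segment triggers, camera
# settings, and a "high interest" flag. All field names are illustrative
# assumptions, not part of the specification.
landing_schema = {
    "activity": "landing",
    "segments": [
        {
            "start_trigger": {"sensor": "speed", "below": 70},  # knots
            "end_trigger": {"sensor": "speed", "below": 5},
            "cameras": {
                "cam1": {"zoom": 1.0, "frame_rate": 30},
                "cam7": {"zoom": 2.0, "frame_rate": 60},
            },
            "high_interest": True,
        }
    ],
}

def cameras_for_segment(schema, index):
    """Return the camera settings defined for one segment of a schema."""
    return schema["segments"][index]["cameras"]
```

Keeping the schema declarative like this lets new activities be added by uploading data rather than code, which matches the user-upload path described above.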
According to an embodiment disclosed herein, the recorder 102 constantly monitors the inputs received from the sensors 103 and compares the inputs to the settings in the recording schema. In an embodiment, the inputs of the sensors 103 may be compared to predefined thresholds (e.g., speed, location, G-force), checked for abnormal measurements (e.g., high heart rate, low heart rate variability), and more.
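The comparison step can be sketched as a small predicate over the latest sensor readings. This is a minimal sketch under assumed rule formats (`above`/`below` bounds per sensor); the specification does not prescribe this representation.

```python
# Minimal sketch of the recorder's monitoring step: compare the latest
# sensor readings against per-sensor threshold rules from a schema.
# The rule format ({"above": x} / {"below": x}) is an assumption.
def should_start_recording(readings, triggers):
    """Return True if any sensor reading crosses its configured threshold."""
    for sensor, rule in triggers.items():
        value = readings.get(sensor)
        if value is None:
            continue  # sensor not reporting; skip its rule
        if "above" in rule and value > rule["above"]:
            return True
        if "below" in rule and value < rule["below"]:
            return True
    return False

triggers = {"g_force": {"above": 2.5}, "heart_rate": {"below": 45}}
print(should_start_recording({"g_force": 3.1}, triggers))  # True
```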
If it is determined that a recording should start, the recorder 102 selects one or more cameras for recording the segment and configures these cameras according to the settings in the recording schema. Data captured by the cameras 101 is saved in the flash memory. It should be noted that while data is captured, the recorder 102 continues to monitor the sensors' inputs to determine if recording by the currently active cameras should be stopped, if the settings of the currently active cameras should be changed (e.g., a zoom change), if additional camera(s) should be activated for the current segment, or if a recording of a new segment should begin.
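The continued monitoring during a segment amounts to repeatedly diffing the set of cameras that are active against the set the schema currently wants. A hedged sketch, with illustrative camera names:

```python
# Sketch of re-evaluating camera state mid-segment: compare the cameras
# currently recording against the set the schema now calls for, and
# derive which to start and which to stop. Names are illustrative.
def update_active_cameras(active, desired):
    """Return (cameras to start, cameras to stop) as two sets."""
    to_start = desired - active
    to_stop = active - desired
    return to_start, to_stop

start, stop = update_active_cameras({"cam1", "cam2"}, {"cam1", "cam3"})
```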
Once the recording of a current segment is completed, the recorder 102 determines if the segment should be tagged as a “high-interest” segment. Such determination is made based on the value(s) of one or more sensors and according to settings defined in the recording schema. For example, if the accelerometer sensor shows unexpected motion, the respective segment may be tagged as a high-interest segment. In another embodiment, segments can be tagged as “high interest” by a user of the recorder 102 through a user interface. It should be understood that such a recording schema may include adding, subtracting, and/or adjusting the recording parameters of one or more of the cameras 101.
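The automatic half of that tagging decision can be sketched as a check of per-segment sensor statistics against schema-defined bounds. The statistic and rule names here are assumptions for illustration:

```python
# Illustrative sketch: tag a finished segment "high interest" when a
# sensor statistic gathered during the segment exceeds a bound defined
# in the schema. Statistic names are hypothetical.
def tag_segment(segment_stats, rules):
    """Return the tag for a completed segment based on its sensor peaks."""
    for sensor, limit in rules.items():
        if segment_stats.get(sensor, 0) > limit:
            return "high_interest"
    return "normal"

tag = tag_segment({"accel_peak": 4.2}, {"accel_peak": 3.0})
```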
In yet another embodiment, metadata information can be saved or associated with each segment recorded in the memory. The metadata may include, for example, vehicle information, user's information (e.g., a driver, a trainer, etc.), date and time, weather information, values measured by one or more predefined sensors during the segment, and so on.
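One possible shape for such per-segment metadata is shown below; every field name is an illustrative assumption, chosen only to mirror the categories listed above (vehicle, user, date and time, weather, sensor values).

```python
import datetime

# Hypothetical per-segment metadata record mirroring the categories named
# in the text; all field names are assumptions for illustration.
segment_metadata = {
    "vehicle_id": "truck-17",
    "operator": {"name": "J. Doe", "role": "driver"},
    "recorded_at": datetime.datetime(2011, 1, 25, 9, 30),
    "weather": "clear",
    "sensor_peaks": {"speed_kts": 68, "g_force": 1.2},
}
```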
In S240, one or more of the recording schemas determined to require execution responsive to the sensory inputs received are executed by the recorder 102. It should be understood that the recorder 102 may apply all of the schemas identified as relevant, or only a portion thereof as determined by the schemas available. For example, it may not be necessary to activate one schema if another schema is to be activated, and a hierarchy between schemas may be established. A recording schema defines the operation of each of the cameras 101 responsive to sensory inputs from one or more of the sensors 103. A recording schema is associated with a certain activity to be captured.
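The hierarchy between schemas can be sketched as a priority filter: when several schemas match the current sensor inputs, only the highest-ranked ones run. The `priority` field and ranking convention are assumptions, not part of the specification:

```python
# Sketch of a schema hierarchy: of all schemas matching the current sensory
# inputs, keep only those at the highest rank (lower number = higher rank).
# The "priority" field is an illustrative assumption.
def select_schemas(matching):
    """Drop matching schemas that are superseded by a higher-priority one."""
    if not matching:
        return []
    top = min(s["priority"] for s in matching)
    return [s for s in matching if s["priority"] == top]

chosen = select_schemas([{"name": "crash", "priority": 0},
                         {"name": "routine", "priority": 2}])
```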
The execution of a recording schema includes operating the one or more cameras 101 to capture the visual/audio data for the activity defined in the schema. For example, the operation of the one or more cameras 101 includes starting the recording by one or more of the cameras 101 (different cameras 101 may be activated at different times), changing the zoom and/or recording rate (frames per second) of the cameras 101 during the recording, and so on. Thus, the recorder 102 fully controls the operation of one or more cameras 101 during the recording of a segment based on the sensory information and the rules of the recording schemas.
In S250, it is checked whether information based on the operation of the schema is to be provided, for example by uploading a segment to a website, as discussed hereinabove in greater detail. If it is necessary to provide such information, execution continues with S260; otherwise, execution continues with S270. In S260, information resulting from the schema(s) processing in S240 is provided to the desired target by either a push or a pull mechanism, i.e., it is actively sent to the desired destination, or made available on the system 100, for example in memory, for the destination to retrieve on its own initiative. In S270, it is checked if the operation of the system 100 should continue, and if so execution continues with S220; otherwise, execution terminates, for example responsive to a shutdown trigger provided to the system 100 and/or a user of the system.
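The push/pull distinction in S260 can be sketched in a few lines; the queue and storage objects below are stand-in assumptions for a real transport and file system:

```python
# Minimal sketch of S260's push-vs-pull delivery choice: push sends the
# segment out immediately; pull parks it where the destination can fetch
# it later. Both containers are stand-ins for real transports/storage.
outbox = []          # stand-in for a network transport (push)
retrievable = {}     # stand-in for on-system storage (pull)

def provide(segment_id, data, mode):
    """Deliver a recorded segment either actively (push) or on demand (pull)."""
    if mode == "push":
        outbox.append((segment_id, data))   # actively sent to the destination
    else:
        retrievable[segment_id] = data      # destination retrieves it itself

provide("seg-1", b"...", "push")
provide("seg-2", b"...", "pull")
```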
Following are non-limiting examples of the embodiments described above. It should be appreciated, however, that the operation of the system 100 and other embodiments of the invention are not limited to the examples provided below.
In one example, the system 100 is installed in a truck. The cameras include side-door, rear-door, or above-vehicle cameras, and the sensors include the likes of power-take-off (PTO), proximity, RFID, and RuBee sensors. In such a configuration, according to this example, the driver stops the vehicle and turns off the engine. According to a recording schema, the side-door, rear-door, below-vehicle, or above-vehicle cameras begin recording when a signal is received from either a power-take-off (PTO) or proximity sensor that indicates a door has been opened. The recording continues until the PTO sensor indicates the door has been closed. Recording of the open door(s) continues and is coordinated with events that, for example, indicate whether RFID or RuBee tagged goods have been removed from the vehicle or have been returned to the vehicle.
Front-facing and in-cab-facing cameras are switched off, as the schema determines that these are not necessary under the current conditions. The technician opens a side or rear door of the vehicle, signaling the side or rear-door cameras to be activated. Video recording of the door opening continues, and the recording is tagged with the time, date, and sensory data received in association with the RFID, RuBee, or proximity-sensor tagged item being removed or replaced.
In another example, the embodiments described herein are implemented in a system that is used for a supply chain application to document goods and products that are carried on a vehicle and have been removed or replaced. Here, the recording schema may define video events, time, date, and sensor information that can be used to trigger a re-supply chain event or to notify a technician or supervisor that an item has been forgotten or lost. It may not be necessary to record the goods while it is determined that the vehicle is in motion, or it may be sufficient, according to a specific schema, to merely take a periodic still photograph of the goods.
Proximity sensors may be used to indicate if a bucket-up condition is sensed, or to indicate that tools or equipment stored on the roof of the vehicle have been removed. The proximity sensors respectively trigger recording by the camera associated with the roof-storage area or with the operation of the bucket. Below-vehicle video and sensors can indicate if an obstruction is present or if a vehicle maintenance event, such as a flat tire, has occurred. When the recorder receives a door-closed event, a bucket-down event, or an object replacement event from the PTO, proximity sensor, or identification tag, video recording by the associated camera is stopped.
The recording schema may also define that when a vehicle is in forward motion, two or more cameras look to the front of the vehicle and at the driver's activities (covering hands, wheel, dashboard, face, etc.). When a vehicle is in reverse motion, two or more cameras may look from the rear of the vehicle as well as at the driver's activities in the vehicle cabin (covering hands, wheel, dashboard, face, and so on).
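This motion-direction rule reduces to a simple lookup from driving direction to camera group; the camera names are illustrative assumptions:

```python
# Sketch of the forward/reverse motion rule as a lookup table from the
# sensed direction of motion to the camera group a schema activates.
# Camera group names are illustrative assumptions.
MOTION_CAMERAS = {
    "forward": {"front_cam", "driver_cam"},
    "reverse": {"rear_cam", "cabin_cam"},
}

def cameras_for_motion(direction):
    """Return the cameras a schema activates for the sensed motion."""
    return MOTION_CAMERAS.get(direction, set())
```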
Another example of sensory input used to monitor driver safety is the HRV (Heart Rate Variability) signal and the ratio between sympathetic and parasympathetic power, which is known to correlate well with driver fatigue. Monitoring HR (Heart Rate) is known and can be done in a variety of ways, using either contact or non-contact methods in the car to measure the driver's HR. The HRV is also well known and can be measured easily by a small computerized device such as a smartphone or similar device. According to an aspect of the invention, an HR sensor input is used, and as the trend line for driver fatigue is recognized, the recorder 102 triggers the operation of the camera looking at the driver's face and eyes, in one embodiment an IR camera, to better determine whether this is actually driver fatigue. Using the camera looking out in the driving direction, and the sensors monitoring the vehicle movement, it is established whether the driver is fatigued, and if so, the appropriate alarm is generated and action is taken.
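The trend-line recognition step can be sketched as a rolling average over the sympathetic/parasympathetic power ratio that fires the face camera when it stays elevated. The window size and threshold below are illustrative assumptions, not clinically grounded values:

```python
from collections import deque

# Hedged sketch of the HRV fatigue trigger: keep a rolling window of the
# sympathetic/parasympathetic power ratio and fire once the windowed
# average exceeds a bound. Window size and threshold are assumptions.
class FatigueMonitor:
    def __init__(self, window=5, threshold=2.0):
        self.ratios = deque(maxlen=window)
        self.threshold = threshold

    def update(self, power_ratio):
        """Feed one ratio sample; return True when the trend crosses the bound."""
        self.ratios.append(power_ratio)
        if len(self.ratios) < self.ratios.maxlen:
            return False  # not enough history for a trend yet
        return sum(self.ratios) / len(self.ratios) > self.threshold

mon = FatigueMonitor()
triggered = [mon.update(r) for r in (1.0, 1.5, 2.2, 2.6, 3.0)]
```

On a trigger, the recorder would activate the driver-facing (IR) camera as described above.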
Another application for the embodiments discussed herein is a flight training school that uses the system with its recording devices on planes, connected to sensors from the plane instrumentation and engine monitors, or to mobile sensors independently connected or part of the recorder device. The sensors include GPS, accelerometers, a gyro, a compass, and others.
Three cameras are mounted in this case in the cockpit: one looking out over the cowling (Cam 1), one looking over the shoulder of the student pilot to capture pilot manipulation and the instrument panel (Cam 2), and a third camera with an infrared (IR) option looking at the pilot's face (Cam 3). Additional cameras may be mounted under the plane body, looking at the plane's underside and at the retractable wheels (Cam 4), and under the two wings, looking back to capture the wing flaps and covering the rear 180 degrees (Cam 5 and Cam 6). An additional camera may be positioned near the runway to capture the landing from a ground view (Cam 7). In this scenario, two to four cameras operate at a time based on sensory input.
In the above configuration, one or more recording schemas may define how the recorder 102 should operate the various cameras based on the sensory input when a student pilot is practicing landings with instrument approaches with an instructor. According to this example, the plane engine starts and the plane taxies at a speed above 5 knots for more than 5 seconds, resulting in the operation of Cam 1 and Cam 2. The plane accelerates for takeoff, and when at a speed of over 20 knots for 5 seconds, the appropriate schema causes Cam 1, Cam 3, Cam 4, and Cam 5 to operate, while Cam 2 ceases operation, until the plane levels off and maintains altitude or does not descend at more than 200 feet/minute. At that time, Cam 1, Cam 2, and Cam 3 operate while Cam 4 and Cam 5 cease operation according to the appropriate schema. It should be noted that Cam 3 adds the capture of the pilot scanning the instruments.
The first part therefore captures the takeoff and departure with views of the pilot's manipulation, the centerline of the airstrip and the relative airplane positioning, the wheels folding, and other points of interest for the IFR (Instrument Flight Rules) student. The second part may include the incoming IFR procedure, where Cam 1, Cam 2, and Cam 3 are recording, with Cam 4 added when the wheels-out airspeed (say, under 100 knots) is reached in descent or when the wheels-down button sensor is activated. Finally, at an airspeed below 70 knots, Cam 1, Cam 2, Cam 4, and Cam 7 operate and capture the landing. Cam 7 may also be activated by the plane's GPS position triggering the ground video camera, which transmits a short video segment, for example wirelessly (WiFi), to the main recorder device. Finally, at a speed of under 40 knots for a period of at least 5 seconds (landing completed), Cam 1 and Cam 2 operate until the plane speed is under 5 knots for more than 5 minutes.
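The flight-phase logic above can be paraphrased as a speed-and-phase lookup that returns the camera set a schema would activate. The bands below only paraphrase this example and omit the dwell-time conditions (e.g., "for 5 seconds") for brevity:

```python
# Sketch of the flight-training schema as a speed/phase camera table.
# The bands paraphrase the example above; dwell-time conditions are
# omitted for brevity, so this is a simplified assumption.
def flight_cameras(speed_knots, phase):
    """Return the camera set a schema activates for a flight phase/speed."""
    if phase == "taxi" and speed_knots > 5:
        return {"cam1", "cam2"}
    if phase == "takeoff" and speed_knots > 20:
        return {"cam1", "cam3", "cam4", "cam5"}
    if phase == "landing" and speed_knots < 70:
        return {"cam1", "cam2", "cam4", "cam7"}
    return set()
```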
It should be understood that each of these cases describes a schema that is based on the sensory inputs received and triggers the appropriate use of the optical capturing device, for example a camera, to capture the necessary images in concert with and responsive to the sensory input(s) received. Moreover, additional scenarios and corresponding schemas may be developed without departing from the scope of the invention.
As noted above, the system can tag as “high interest” certain segments captured by the recorder. Such tagging may be done respective of a timeline. The tagged points and areas of interest, which may be uploaded to a desired location on the web or in the cloud, can then be easily reviewed by going directly to the tagged areas or by reviewing only the “areas of interest memory”. This allows a user to avoid reviewing lengthy recordings and rather home in on areas of interest or tagged parts of the activity. Such areas of interest may be automatically annotated by the schema, for example by an indication such as “loss of altitude beyond boundaries”. A person of ordinary skill in the art will readily appreciate the advantage of such tagging for automatic, self-, or assisted debriefing.
The various embodiments disclosed herein can be implemented as hardware, firmware, software, or any combination thereof. Moreover, the software is preferably implemented as an application program tangibly embodied on a program storage unit or non-transitory computer readable medium. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPUs”), a memory, and input/output interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU, whether or not such computer or processor is explicitly shown. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit.
All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the principles of the invention and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.
This application claims the benefit of U.S. provisional application No. 61/436,106 filed on Jan. 25, 2011, the contents of which are herein incorporated by reference.
Number | Date | Country
---|---|---
61436106 | Jan 2011 | US