The present invention relates to the processing of data, and in particular to the processing of video and sensor data recorded by a video camera.
Digital video cameras and the processing of digital media data recorded by such cameras have become commonplace in recent years. The video data, and often audio data, recorded by such cameras are typically written to a digital media container, such as the AVI container format or the MP4 container format. These container formats allow the video and audio data, and in many cases other data such as subtitles and still images, to be stored in a digital media file, but also allow the data to be broadcast live, i.e. streamed over the Internet.
Digital media containers are used to identify and interleave different data types, and comprise a plurality of portions including a payload (or data) portion and one or more metadata portions. The payload (or data) portion includes the media data, typically with each of the data types, e.g. video, audio, etc, in the container being interleaved (or multiplexed). The one or more metadata portions contain data about the container and the media data (or content) contained therein. For example, the one or more metadata portions can include data such as: the number of streams (or tracks), e.g. video, audio, etc; the format of each stream, e.g. the type of compression, if any, used to encode each stream; and the duration of the media data; all of which are required to read the data in the container and to subsequently provide the content. The one or more metadata portions can also include information about the content in the container, such as a title, an artist name, etc. Digital media containers typically have a hierarchical structure, with the one or more metadata portions often being positioned at the start of the container. This is not always the case, however, and in some instances one or more metadata portions can be positioned at the start of the container, and one or more other metadata portions can be positioned at the end of the container.
In the case of the MP4 container, each of the portions of the container is typically referred to as an ‘atom’. The payload portion of an MP4 container is called the mdat atom, and the metadata portions include the moov atom, which acts as the index for the container and defines the timescale, duration and display characteristics of the media data in the container, together with information for each track in the container, and often one or more uuid atoms, or so-called user defined atoms. The moov atom must be accessed before the media content in an MP4 container can be played, and the position of the moov atom is therefore typically dependent on the manner in which the container is going to be delivered, e.g. progressive download, streaming or local playback. For local playback, the position of the moov atom in the container is not important, since the entire file is available immediately. Accordingly, the moov atom will typically be found at the end of the container, which can be beneficial since the content, and thus the size, of the moov atom is not known until the media data has been added to the container. However, for progressive download or streaming, if the moov atom were positioned at the end of the container, then the entire file would need to be downloaded before it could be played (or a second communication channel, separate from the communication channel used to stream the media content of the file, would be needed to obtain the moov atom). Accordingly, in such instances it is desirable for the moov atom to be positioned at the start of the container.
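By way of illustration only, the following minimal sketch (written here in Python) shows how the top-level atoms of an MP4 container might be enumerated, for example to check whether the moov atom precedes or follows the mdat atom. It assumes the standard 8-byte atom header (a 32-bit big-endian size followed by a 4-character type code) and, for simplicity, does not handle extended 64-bit sizes; it is not a definitive implementation of any particular camera or library.

    import struct

    def list_top_level_atoms(path):
        """Enumerate (type, offset, size) of the top-level atoms of an MP4 file.

        Sketch only: assumes each atom begins with a 32-bit big-endian size
        followed by a 4-byte type code (e.g. 'ftyp', 'moov', 'mdat', 'uuid');
        extended 64-bit sizes are not handled.
        """
        atoms = []
        with open(path, 'rb') as f:
            offset = 0
            while True:
                header = f.read(8)
                if len(header) < 8:
                    break
                size, atom_type = struct.unpack('>I4s', header)
                atoms.append((atom_type.decode('ascii', 'replace'), offset, size))
                if size < 8:
                    break              # malformed or extended size; stop for this sketch
                offset += size
                f.seek(offset)         # jump to the next top-level atom
        return atoms

A listing in which 'moov' appears before 'mdat' indicates a container laid out for progressive download or streaming, whereas 'moov' at the end indicates a container written for local playback.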
An overview of certain digital media processing techniques will now be described, with reference to
A first technique is that of writing a digital media file, often generally called “encoding”, and is shown in
A second technique is that of reading a digital media file, often generally called “decoding”, and is shown in
A third technique is that of transcoding, and is shown in
Digital video cameras that use such digital media processing techniques, either on the camera itself or on associated editing software for use on computing devices, such as desktop or laptop computers, smartphones and the like, are increasingly being used in outdoor and sports settings. Such video cameras, which are often referred to as “action cameras”, are commonly attached to a user, sports equipment or a vehicle and are operated to capture video data, and typically also audio data, during a sports session with minimal user interaction.
It is also known to integrate a number of additional sensor devices into such action cameras. For example, WO 2011/047790 A1 discloses a video camera comprising some or all of an integrated GPS device, speed or acceleration measuring device, time measuring device, temperature measuring device, heart rate measuring device, barometric altitude measuring device and an electronic compass. These sensors can be integrated in the camera itself, or can be remote from the camera and operably connected to the camera using a wired or wireless connection. It is further described that the data from these additional sensor devices, i.e. sensor data, can be stored separately from the digital media file containing the recorded video and audio data, but also that the sensor data can be stored in the same digital media file as the recorded video and audio data, such as by storing the sensor data in the payload (or data) portion of the media file. In this latter case, the sensor data is multiplexed with the video and audio data, and can, for example, be stored in the subtitle track of the media file.
WO 2011/047790 further discloses that the sensor data can be added as a digital overlay over the video data when it is played and displayed on a display device, such that viewers can see, for example, the changing speed, acceleration, position, elevation, etc of the user or their equipment simultaneously with the video. It is also disclosed that such digital overlays can be integrated permanently into the video data through a transcoding process, such that the recorded video can then be uploaded to a video sharing site, such as YouTube®.
While such techniques are advantageous in their own right, the Applicants believe that there remains scope for improvements to techniques for processing video image data, and in particular to techniques for processing integrated video image and sensor data.
According to an aspect of the present invention, there is provided a method of storing data collected by a digital video camera having one or more sensor devices associated therewith, the method comprising:
The present invention extends to a system, preferably a digital video camera, for carrying out a method in accordance with any of the aspects or embodiments of the invention herein described.
Thus, in accordance with another aspect of the invention, there is provided a system for storing data collected by a digital video camera having one or more sensor devices associated therewith, the system comprising:
The present invention further extends to a digital media file created using the method described above. The media file therefore comprises: video image data indicative of data received from an image sensor of a digital video camera during a recording event, i.e. a period of time between receipt of an input (or instruction) to start recording and an input (or instruction) to stop recording, in a payload portion of the media file; and sensor data indicative of data received from one or more sensor devices associated with the camera during the recording event in a metadata portion of the media file.
As will be appreciated by those skilled in the art, these further aspects of the present invention can, and preferably do, include any one or more or all of the preferred and optional features of the invention described herein in respect of any of the other aspects of the invention, as appropriate. Accordingly, even if not explicitly stated, the system of the present invention may comprise means for carrying out any step described in relation to the method of the invention in any of its aspects or embodiments, and vice versa.
The present invention is a computer implemented invention, and any of the steps described in relation to any of the aspects or embodiments of the invention may be carried out by a set of one or more processors that execute software comprising computer readable instructions stored on a non-transitory computer readable medium.
The present invention, at least in some aspects and embodiments, is concerned with methods and systems for storing data collected by a digital video camera having one or more sensor devices associated therewith. The one or more sensor devices, as will be discussed in more detail below, can include sensors integral with the camera, i.e. within the housing of the camera, but can also include sensors remote from the camera and which are operably connected to the camera using a wired or wireless connection.
In the present invention, video image data is stored in a payload portion of a media file, as is conventional in the art. However, in contrast with known methods, such as that described in WO 2011/047790 A1, the sensor data is stored in a metadata portion of the same media file, rather than in the payload portion of the media file. The Applicants have found that this allows the sensor data to be accessed and used more quickly and easily to provide other functionality, such as in the generation of highlights as will be described in more detail below, since the sensor data does not need to be demultiplexed from the other data, e.g. the video image data, audio data, etc, in the payload portion of the file before it can be used.
In accordance with the invention, a first input is received to cause the digital video camera to start recording. The first input can be a manual input by a user, such as the user actuating a user input, e.g. button, slide, etc, of the camera, or of a remote control device that is in communication with the camera using a wired or wireless connection. The manual input by the user could additionally or alternatively include a touch or gesture input, such as the selection of a virtual button presented on a touch sensitive display screen, and/or an audible input, such as a voice command by the user that is detected and interpreted by automatic speech recognition (ASR) software. The first input could also be an automatic input, such as a command to start recording after a predetermined period of time has elapsed and/or based on data from one or more of the sensor devices. For example, the first input could be automatically generated when a predetermined speed or acceleration is detected, e.g. based on data output by a global navigation satellite system (GNSS) receiver, an accelerometer or the like.
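Purely as an illustrative, hypothetical sketch of such an automatic trigger (the speed threshold, polling interval and the read_gnss_speed and start_recording helpers are assumptions made for illustration, not features of any particular camera):

    import time

    SPEED_THRESHOLD_MS = 3.0   # assumed threshold in metres per second
    POLL_INTERVAL_S = 0.5      # assumed polling interval in seconds

    def wait_for_auto_start(read_gnss_speed, start_recording):
        """Generate the 'first input' automatically once a predetermined speed is reached.

        read_gnss_speed: hypothetical callable returning the current GNSS speed in m/s.
        start_recording: hypothetical callable that causes the camera to start recording.
        """
        while True:
            if read_gnss_speed() >= SPEED_THRESHOLD_MS:
                start_recording()
                return
            time.sleep(POLL_INTERVAL_S)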
A digital media container is opened on a first memory based on receipt of the first input. Any suitable digital media container can be used, such as MP4, AVI, etc. The first memory preferably comprises a non-volatile memory device for storing the data collected by the video camera, and may comprise a removable non-volatile memory device that is attachable to and detachable from the video camera. For example, the first memory may comprise a memory card such as, for example, an SD card or the like.
As will be appreciated, the video camera comprises an image sensor that generates raw, i.e. uncompressed, image data. While the camera can be used in any situation as desired, the image data generated by the image sensor, and thus collected by the video camera, is preferably data collected during an outdoor or sports session or the like, preferably while the video camera is attached to a user, sports equipment or a vehicle. The video camera also comprises a video processing device, including at least an encoder, to process the raw image data and generate an encoded (video) stream. As will be described in more detail below, the video processing device preferably further includes a decoder to decode an encoded (video) stream, and which can preferably be used together with the at least one encoder to perform transcoding. The video processing device preferably comprises a system on chip (SOC) comprising cores (or blocks) for encoding, decoding and transcoding video and audio data. The video processing device is therefore preferably implemented in hardware, e.g. without using embedded processors.
The raw image data from the image sensor can be encoded using any suitable compression technique as desired, e.g. lossless compression or lossy compression, and could be, for example, an intraframe compression technique or an interframe compression technique. As known in the art, intraframe compression techniques function by compressing each frame of the image data individually, whereas interframe compression techniques function by compressing a plurality of neighbouring frames together (based on the recognition that a frame can be expressed in terms of one or more preceding and/or succeeding frames). However, in preferred embodiments, the raw image data is processed to generate at least one stream encoded using an interframe compression technique, such as H.264. The properties of the encoded (video) stream, such as the frame rate and resolution, can be selected by the user, e.g. by using a user interface of the camera and/or a remote control device. For example, the user can select to record video image data at one or more of the following resolutions: 720p; 1080p; 2.7K and 4K, and/or at one or more of the following frame rates: 15 frames per second (fps); 30 fps; and 60 fps; although it will be appreciated that such values are merely exemplary.
In preferred embodiments, the first memory is within a housing of the video camera, and is preferably connected to the image sensor and video processing device of the camera using a wired connection. It is also contemplated, however, that the first memory could be remote from the video camera, and be connected to the image sensor and video processing device of the camera using a wireless connection.
In the present invention, video image data based on data received from the image sensor of the camera, preferably the encoded stream output by the video processing device, e.g. an H.264 encoded stream, is written to the payload portion of the digital media container opened on the first memory. As known in the art, the digital media container can comprise a plurality of tracks, such as one or more video tracks, one or more audio tracks, one or more subtitle tracks, etc; the data in each of these tracks being multiplexed, i.e. placed into packets and interleaved, and stored in the payload portion of the container. Accordingly, in preferred embodiments, the video image data is interleaved with other data, such as audio data, other video image data, etc, as will be discussed in more detail below, and written to the payload portion of the digital media container. The digital video camera therefore preferably comprises a multiplexer to interleave a plurality of encoded media streams, e.g. one or more video streams, one or more audio streams, etc, into a single interleaved encoded stream, together with a demultiplexer to separate the single interleaved encoded stream back into its constituent plurality of encoded media streams.
In preferred embodiments, the video camera further comprises a microphone that generates raw, i.e. uncompressed, audio data, and an audio processing system, including at least an encoder, to process the raw audio data and generate an encoded (audio) stream. The raw audio data can be encoded using any suitable compression technique as desired, e.g. lossless compression or lossy compression. For example, in preferred embodiments, the raw audio data is processed to generate a stream encoded using the AAC compression technique. The microphone can be within the housing of the video camera, in which case the housing comprises an opening to the external environment. In other embodiments, an external (or remote) microphone can be used, which is connected to the camera using a wired or wireless connection. In preferred embodiments, the audio data, e.g. the encoded stream output by the audio processing system, e.g. an AAC encoded stream, is written to the payload portion of the digital media container opened on the first memory, preferably with the audio data being multiplexed with the video image data.
Additionally, or alternatively, the video processing device can comprise a first encoder to generate a first encoded stream from data received from the image sensor of the camera, preferably using an interframe compression technique, and a second encoder to generate a second encoded stream from data received from the image sensor of the camera, preferably using an intraframe compression technique. The first encoded stream can comprise an H.264 encoded stream, and, as discussed above, may be at one or more of the following resolutions: 720p; 1080p; 2.7K and 4K, and/or at one or more of the following frame rates: 15 fps; 30 fps; and 60 fps. The second encoded stream, which will typically be a lower quality stream than the first encoded stream, e.g. a stream with a lower resolution and/or frame rate, can comprise a stream wherein each frame is compressed as a jpeg image. Each jpeg image can be at a resolution of 768×432 pixels (px), and the stream may have a frame rate of 30 fps; although it will be appreciated that such values are merely exemplary. In preferred embodiments, both the first and second encoded streams are written to the payload portion of the digital media container opened on the first memory, preferably with the first encoded stream being multiplexed with the second encoded stream.
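The two encoder outputs might, for example, be configured along the following lines; this is a conceptual sketch only (in Python), in which the VideoStreamConfig type is an illustrative assumption and the values simply restate the exemplary figures given above.

    from dataclasses import dataclass

    @dataclass
    class VideoStreamConfig:
        codec: str     # compression technique used for the stream
        width: int     # frame width in pixels
        height: int    # frame height in pixels
        fps: int       # frame rate in frames per second

    # First (main) stream: interframe compression, user-selectable quality.
    main_stream = VideoStreamConfig(codec='h264', width=1920, height=1080, fps=60)

    # Second (preview) stream: intraframe compression, each frame compressed as a jpeg image.
    preview_stream = VideoStreamConfig(codec='mjpeg', width=768, height=432, fps=30)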
The presence of both encoded streams in the media file (created when the container is closed) is advantageous in that the first encoded video stream can be used in a normal manner to provide a high quality video, whereas the second encoded video stream can be streamed to a remote device, such as a smartphone or other mobile computing device, such that a user can immediately view the recorded video in the media file (or preview a video before finalising edits to a video to be created from the first encoded video stream). The use of an intraframe compression technique with the second encoded stream, in contrast to the use of a more complex interframe compression technique, allows the video to be played, in practice, by any smartphone or other mobile computing device without the need for specific software and hardware.
It is believed that the storage of a first video stream encoded using an interframe compression technique and a second video stream encoded using an intraframe compression technique is new and advantageous in its own right.
Thus, in accordance with another aspect of the present invention, there is provided a method of storing data collected by a digital video camera having an image sensor and a video processing device, the method comprising:
The present invention extends to a system, preferably a digital video camera, for carrying out a method in accordance with any of the aspects or embodiments of the invention herein described.
Thus, in accordance with another aspect of the invention, there is provided a system for storing data collected by a digital video camera having an image sensor and a video processing device, the system comprising:
The present invention further extends to a digital media file created using the method described above. The media file therefore comprises two sets of video image data indicative of data received from an image sensor of a digital video camera during a recording event, i.e. a period of time between receipt of an input (or instruction) to start recording and an input (or instruction) to stop recording; a first set of video image data being encoded using an interframe compression technique, and a second set of video image data being encoded using an intraframe compression technique. These two sets of video image data are preferably multiplexed and stored in a payload portion of the media file.
As will be appreciated by those skilled in the art, these further aspects of the present invention can, and preferably do, include any one or more or all of the preferred and optional features of the invention described herein in respect of any of the other aspects of the invention, as appropriate. Accordingly, even if not explicitly stated, the system of the present invention may comprise means for carrying out any step described in relation to the method of the invention in any of its aspects or embodiments, and vice versa.
The first and second encoders can be formed as separate devices. However, in preferred embodiments, the video processing device comprises two output channels; one for the first encoded stream, and the other for the second encoded stream.
As discussed above, both the first and second encoded streams are preferably written to the payload portion of the digital media container opened on the first memory, preferably with the first encoded stream being multiplexed with the second encoded stream. The first encoded stream is preferably written to a video track of the payload portion of the container, and data is preferably added to a metadata portion of the resultant file, such that the first encoded stream is identified as video image data. This allows the first encoded stream to be played (after being demultiplexed and decoded) by conventional video playing and editing hardware and/or software of a computing device. The second encoded stream, despite also being video image data, is preferably written to a non-video track of the payload portion of the container, such as a text track, e.g. the subtitle track. Since the second encoded stream is written to a non-video track, e.g. the subtitle track, data is preferably not added to a metadata portion of the resultant file identifying the second encoded stream as video image data. This means that the second encoded stream will not be identifiable, and thus not playable, by conventional video playing and editing hardware and/or software of a computing device.
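The resulting track layout of the container can be pictured as in the following conceptual sketch (in Python), in which the Track type and the identified_as_video flag are illustrative assumptions rather than fields of any particular container format:

    from dataclasses import dataclass

    @dataclass
    class Track:
        track_type: str            # track the stream is written to ('video', 'text', ...)
        codec: str                 # compression technique of the stream in the track
        identified_as_video: bool  # whether the file metadata identifies the stream as video image data

    tracks = [
        # First encoded stream: written to a video track and identified as video image data,
        # so it is playable by conventional video playing and editing software.
        Track(track_type='video', codec='h264', identified_as_video=True),
        # Second encoded stream: written to a text (e.g. subtitle) track and not identified as
        # video image data, so conventional players ignore it while dedicated software can read it.
        Track(track_type='text', codec='mjpeg', identified_as_video=False),
    ]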
Accordingly, in preferred embodiments, the method includes adding data to a metadata portion of the digital media container based on receipt of the second input (to cause the camera to stop recording) to identify only the first encoded video stream, and thus not the second encoded video stream, as video image data.
It is believed that the storage of a first video stream in a video track of a digital media container and a second video stream in a text track of the digital media container is new and advantageous in its own right.
Thus, in accordance with another aspect of the present invention, there is provided a method of storing data collected by a digital video camera having an image sensor and a video processing device, the method comprising:
The present invention extends to a system, preferably a digital video camera, for carrying out a method in accordance with any of the aspects or embodiments of the invention herein described.
Thus, in accordance with another aspect of the invention, there is provided a system for storing data collected by a digital video camera having an image sensor and a video processing device, the system comprising:
The present invention further extends to a digital media file created using the method described above. The media file therefore comprises two sets of video image data indicative of data received from an image sensor of a digital video camera during a recording event, i.e. a period of time between receipt of an input (or instruction) to start recording and an input (or instruction) to stop recording. One of the sets of video image data is stored in a video track of the media file, and the other set of video image data is stored in a text track of the media file, such as a subtitle track.
As will be appreciated by those skilled in the art, these further aspects of the present invention can, and preferably do, include any one or more or all of the preferred and optional features of the invention described herein in respect of any of the other aspects of the invention, as appropriate. Accordingly, even if not explicitly stated, the system of the present invention may comprise means for carrying out any step described in relation to the method of the invention in any of its aspects or embodiments, and vice versa. Furthermore, the first encoded video stream that is written to the video track of the container is preferably encoded using an interframe compression technique, and the second encoded video stream that is written to the text track of the container is preferably encoded using an intraframe compression technique. The first and second encoded video streams are also preferably multiplexed and stored in a payload portion of the media file.
As discussed above, in some aspects and embodiments of the present invention, data is received from one or more sensor devices associated with the camera, and this data, or data derived therefrom, is stored in a second memory. The second memory is preferably different from the first memory. For example, where the first memory is preferably a removable memory, such as an SD card, the second memory is preferably a non-removable memory within the camera. As will be appreciated, the sensor data is preferably contemporaneous with the video image data, and optionally audio data, collected by the video camera, and is preferably data collected during an outdoor or sports session or the like, preferably while the video camera is attached to a user, sports equipment or a vehicle. The sensor data therefore preferably comprises data collected substantially continually or at, preferably regular, intervals during the time in which video image data is recorded.
The one or more sensor devices are preferably used to measure at least one of: movements or other physical parameters of the user, camera, item of sports equipment and/or vehicle, such as position, speed, acceleration, etc (e.g. during the outdoor or sports session while video image data is being recorded); environmental conditions around the user and/or camera, such as temperature, pressure, etc; and physiological properties of the user, such as heart rate, VO2 max, etc. Accordingly, in embodiments, the sensor data may relate to parameters such as any one or more of: position; speed; acceleration; altitude; cadence; heart rate; temperature; bearing (or heading); light level; pressure; and orientation. Moreover, in embodiments, the one or more sensor devices preferably include one or more of: a position determining device, such as a global navigation satellite system (GNSS) receiver; an accelerometer (preferably a 3-axis accelerometer); a gyroscope; a magnetometer; a pressure sensor, i.e. a barometer; a temperature sensor, i.e. a thermometer; an audio measurement device, such as a microphone; an electronic compass; a light sensor; and a heart rate monitor. One or some or all of the sensor devices can be located within a housing of the camera, and are preferably operably coupled to a processor of the camera using a wired connection. Additionally, or alternatively, one or some or all of the sensor devices can be remote from the camera, e.g. configured to be worn by or attached to the user of the video camera and/or to sports equipment or a vehicle being used by the user. Such remote sensor devices are preferably operably coupled to a processor of the camera using a wireless connection, such as WiFi, Bluetooth, etc.
In embodiments, each of the one or more sensor devices has an associated sample rate that dictates the frequency at which data is received from the sensor device. The sample rate can be the same for all of the sensor devices, although typically at least some of the sensor devices will have different sample rates. The data received from the one or more sensor devices is preferably stored in the second memory in association with data indicative of the time at which the data was determined by the sensor device, e.g. a time stamp. The received data from the one or more sensor devices is preferably stored in the second memory according to a data structure comprising: a time stamp; a sensor device type; and a payload including the received sample from the sensor device.
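A minimal sketch of such a record (in Python, and assuming a hypothetical fixed binary layout whose field widths are purely illustrative) is:

    import struct
    import time

    # Assumed layout: 8-byte time stamp (ms), 2-byte sensor device type, 2-byte payload
    # length, followed by the raw sample bytes received from the sensor device.
    HEADER_FORMAT = '>QHH'
    HEADER_SIZE = struct.calcsize(HEADER_FORMAT)

    def pack_sensor_record(sensor_type, payload, timestamp_ms=None):
        """Serialise one sensor sample as: time stamp; sensor device type; payload."""
        if timestamp_ms is None:
            timestamp_ms = int(time.time() * 1000)
        return struct.pack(HEADER_FORMAT, timestamp_ms, sensor_type, len(payload)) + payload

    def unpack_sensor_record(record):
        """Recover (timestamp_ms, sensor_type, payload) from a packed record."""
        timestamp_ms, sensor_type, length = struct.unpack_from(HEADER_FORMAT, record, 0)
        return timestamp_ms, sensor_type, record[HEADER_SIZE:HEADER_SIZE + length]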
In embodiments, the sensor data stored in the second memory can include one or more datasets in respect of a variable obtained directly from the one or more sensor devices, and/or one or more datasets in respect of a variable obtained indirectly from the one or more sensor devices, i.e. a variable derived from data obtained from the one or more sensor devices. For example, acceleration can be determined from speed as determined from a GNSS receiver.
In accordance with the invention, a second input is received to cause the digital video camera to stop recording. The second input can be a manual input by a user, such as the user actuating a user input, e.g. button, slide, etc, of the camera, or of a remote control device that is in communication with the camera using a wired or wireless connection. The user input to stop recording can be the same user input actuated to start recording, although in preferred embodiments the two user inputs are different. The manual input by the user could additionally or alternatively include a touch or gesture input, such as the selection of a virtual button presented on a touch sensitive display screen, and/or an audible input, such as a voice command by the user that is detected and interpreted by automatic speech recognition (ASR) software. The second input could also be an automatic input, such as a command to stop recording after a predetermined period of time has elapsed and/or based on data from one or more of the sensor devices.
Based on the receipt of this second input, the sensor data stored in the second memory is added to a metadata portion of the digital media container, and the container is then closed to create a digital media file stored in the first memory.
As will be appreciated, the sensor data will typically only form part of the total metadata that is added to the container in order to close the file. For example, the one or more metadata portions will also include data identifying the duration of the file, the type of compression used to encode the video and audio data, the number of media tracks in the file, the resolution of the video data, the frame rate of the video data, etc, which are required to allow the file to be played, e.g. displayed, by a media player of a computing device. In addition, and as will be discussed in more detail below, the one or more metadata portions may also include data identifying one or more times (or moments) in the recorded video image data which have been determined to potentially be of interest to the user, referred to herein as “tags” or “highlights”, and which can be used to facilitate playback and/or editing of the video image data.
The one or more metadata portions may also include data linking multiple media files, e.g. identifying that a particular media file contains video data that immediately precedes or succeeds another media file. For example, preferably there is a predetermined maximum size of a media file, which will often correspond to a particular recording duration using a particular video format. Therefore, in embodiments, when this maximum size (or recording duration) is reached, the media container that is currently open is closed by adding the relevant one or more metadata portions, and a new media container is opened. The one or more metadata portions therefore preferably include an association and order between the plurality of files, such that a user can view and edit the entire recorded video as though from a single media file (without the user having to manually select all of the media files that make up the sequence of media files). In other words, the one or more metadata portions of a media file preferably include information that identifies that particular media file's relationship to the other media file(s) in the sequence, i.e. that indicates the position of the media file in a sequence of related media files.
In embodiments of the invention, the one or more metadata portions of the media file are located before, i.e. in front of, the payload portion of the media file. As discussed above, structuring the media file in this manner allows the media data in the file to be streamed for playing and display on a remote computing device using a single communication channel (without needing to first transfer (or download) the entire media file to the remote computing device). Accordingly, in preferred embodiments, when opening the digital media container on the first memory, a predetermined amount of memory is reserved before the payload portion of the container into which the one or more metadata portions of the container are added when closing the container to create the media file. The amount of space in memory that is reserved can be the same for all containers that are opened, e.g. based on knowledge of a predetermined maximum size of the file. However, in other embodiments, the amount of space to be reserved can vary between containers based, for example, on the format, e.g. resolution, frame rate, etc, of the video image data to be written to the file.
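One way of reserving the space is sketched below (in Python), under the assumption that an unused 'free' atom is written as a placeholder when the container is opened and is overwritten by the moov atom when the container is closed; the helper names and sizes are illustrative only, not a definitive implementation.

    import struct

    def write_placeholder(f, reserved_size):
        """Reserve space for the metadata by writing a 'free' atom of the chosen size."""
        f.write(struct.pack('>I4s', reserved_size, b'free'))
        f.write(b'\x00' * (reserved_size - 8))

    def close_container(f, placeholder_offset, reserved_size, moov_bytes):
        """On the second input (stop recording), overwrite the placeholder with the moov atom.

        If the moov atom fits in the reserved region it is written in place and the
        remainder is re-marked as a smaller 'free' atom; if it does not fit it would
        instead have to be appended at the end of the file (not shown in this sketch).
        """
        if len(moov_bytes) > reserved_size:
            raise ValueError('reserved region too small for the moov atom')
        f.seek(placeholder_offset)
        f.write(moov_bytes)
        remainder = reserved_size - len(moov_bytes)
        if remainder >= 8:
            f.write(struct.pack('>I4s', remainder, b'free'))
            f.write(b'\x00' * (remainder - 8))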
In some aspects and embodiments of the invention, highlight data identifying one or more times of interest in the video image data is stored in a memory, such as the second memory mentioned above and that is used to store the sensor data, and the stored highlight data is then added to a metadata portion of the digital media container, e.g. opened on the first memory, based on a receipt of an input to cause the camera to stop recording, i.e. the second input. The digital media container is then closed to create a digital media file stored in the first memory. Accordingly, since the highlight data is stored in a metadata portion of the media file, the data can be easily read and accessed, without needing to read the payload portion of the file, which allows the highlight data to be used to facilitate playback and/or editing of the video image data as will be discussed in more detail below.
It is believed that the storage of highlight data in a metadata portion of a media file comprising video image data in its payload portion, said highlight data identifying one or more times of interest in the video image data, is new and advantageous in its own right.
Thus, in accordance with another aspect of the present invention, there is provided a method of storing data identifying one or more times of interest in video image data collected by a digital video camera, the method comprising:
The present invention extends to a system, preferably a digital video camera, for carrying out a method in accordance with any of the aspects or embodiments of the invention herein described.
Thus, in accordance with another aspect of the invention, there is provided a system for storing data identifying one or more times of interest in video image data collected by a digital video camera, the system comprising:
The present invention further extends to a digital media file created using the method described above. The media file therefore comprises: video image data indicative of data received from an image sensor of a digital video camera during a recording event, i.e. a period of time between receipt of an input (or instruction) to start recording and an input (or instruction) to stop recording, in a payload portion of the media file; and highlight data identifying one or more times of interest, e.g. to a user, in the video image data recorded during the recording event in a metadata portion of the media file.
As will be appreciated by those skilled in the art, these further aspects of the present invention can, and preferably do, include any one or more or all of the preferred and optional features of the invention described herein in respect of any of the other aspects of the invention, as appropriate. Accordingly, even if not explicitly stated, the system of the present invention may comprise means for carrying out any step described in relation to the method of the invention in any of its aspects or embodiments, and vice versa. For example, the first memory preferably comprises a non-volatile memory device for storing the data collected by the video camera, and may comprise a removable non-volatile memory device that is attachable to and detachable from the video camera, e.g. a memory card such as, for example, an SD card or the like. The second memory is preferably different from the first memory. For example, where the first memory is preferably a removable memory, such as an SD card, the second memory is preferably a non-removable memory within the camera.
The highlight data identifies one or more times of interest in the video image data, e.g. for use by the user when editing the video image data. The highlight data can comprise one or more single times in the video image data and/or can comprise one or more time periods in the video image data. For the sake of clarity, each single time is referred to herein as a “tag”, whereas each time period is referred to herein as a “highlight”.
In embodiments, a highlight can be based on, and preferably includes, a tag. For example, the time period of a highlight can comprise a time window based on the time of the tag. The position and/or length of the time window of the highlight relative to the time of the tag can be the same for all highlights, or can be different for at least some of the highlights, e.g. dependent on the manner in which the tag was created, can be adjusted by the user, etc (as will be discussed in more detail below). Accordingly, a highlight, and preferably each highlight, preferably comprises a “tag time”, i.e. a time of tag, and a time window, which can be defined as a start time and an end time. Typically, the tag time will be between the start and end times of the time window. It is also contemplated, in some embodiments, that the tag time can be the same as the start time or the same as the end time, or the tag time can be before the start time or after the end time. Each time can be defined as an absolute time, e.g. as a UTC (Coordinated Universal Time) value or using the time zone of the country or region in which the video image data in the media file was recorded. Preferably, however, each time is defined as a relative time, e.g. as an offset relative to the beginning and/or end of the video image data. As will be appreciated, a “highlight” with at least a tag time, but without information defining start and end times of a time window, will constitute a “tag”.
A highlight, and preferably each highlight, comprises one or more of the following items of information: (i) a unique identifier (which can be unique for just the media file in the memory of the camera, for all media files in the memory of the camera, or for all media files in the memory of all cameras); (ii) a type identifying the type of tag or highlight, e.g. whether the tag was generated automatically or based on a manual input; (iii) a tag time identifying the time when the tag was generated, e.g. as an offset from the beginning of the video image data; (iv) a start time identifying the start of the highlight, e.g. as an offset from the beginning of the video image data; (v) an end time identifying the end of the highlight, e.g. as an offset from the beginning of the video image data; and (vi) additional information, which can be based on the type of tag or highlight, e.g. the name of a user, a location of the tag, information from a sensor device at the tag time, etc. As will be appreciated, the additional information can form metadata for the highlight.
In embodiments, each highlight preferably has the same size (or length) in memory. This allows the highlights stored in a media file to be accessed, read, edited, etc more easily, especially when the media file is stored on a memory card, such as an SD card, having limited read and write speeds. Accordingly, and since it is desirable to have the one or more metadata portions of a media file located before the payload portion as discussed above, in some embodiments there is a predetermined maximum number of highlights that can be stored with a media file.
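A minimal sketch of a fixed-size highlight record holding the items (i) to (vi) above is given below (in Python); the field widths and the 128-byte record size are illustrative assumptions, chosen only so that every record occupies the same amount of space in memory.

    import struct

    RECORD_SIZE = 128                # assumed fixed size of every highlight record
    FIXED_FORMAT = '>16sBIII'        # identifier, type, tag/start/end times (ms offsets)
    EXTRA_SIZE = RECORD_SIZE - struct.calcsize(FIXED_FORMAT)

    def pack_highlight(uid, hl_type, tag_ms, start_ms, end_ms, extra=b''):
        """Serialise one highlight: (i) identifier, (ii) type, (iii) tag time,
        (iv) start time, (v) end time, (vi) additional information, padded so
        that every record has the same size."""
        extra = extra[:EXTRA_SIZE].ljust(EXTRA_SIZE, b'\x00')
        return struct.pack(FIXED_FORMAT, uid, hl_type, tag_ms, start_ms, end_ms) + extra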
The one or more times of interest in the video image data can be determined based on received data indicative of a manual input, i.e. a user input. For example, a time of interest can be identified based on a user actuating a user input, e.g. button, slide, etc, of the camera, or of a remote device that is in communication with the camera using a wired or wireless connection. The manual input by the user could additionally or alternatively include a touch or gesture input, such as the selection of a virtual button presented on a touch sensitive display screen, and/or an audible input, such as a voice command by the user that is detected and interpreted by automatic speech recognition (ASR) software.
The use of a remote device, i.e. separate from the camera, to generate data indicative of a manual input allows someone watching a person, who is performing an activity and is carrying or is associated with the camera, to identify one or more times of interest in the video image data being recorded. This can be advantageous since it frees the person performing the activity from needing to provide their own manual inputs, whilst still maintaining the ability to generate and store highlight data. The remote device may comprise a display device arranged to display, preferably substantially in real-time, the video image data being recorded by the camera; the video image data being streamed over a wireless connection from the camera to the remote device.
Similarly, in embodiments, data indicative of a manual input can be determined or received from multiple devices and/or multiple users. For example, a first user, who may be performing an activity and is carrying or is associated with the camera, can actuate a user input on the camera or a remote device, such as one worn on the user's wrist, while one or more second users, who may be watching the first user performing the activity, can actuate a user input on their own remote control devices (and that are in wireless communication with the camera). As will be appreciated, data indicative of such manual inputs will be received during the recording of the video image data.
Additionally, or alternatively, the one or more times of interest in the video image data can be automatically determined, i.e. not based on a manual input. For example, a time of interest can be identified based on an analysis of the sensor data received from the one or more sensor devices associated with the video camera. The sensor data can be used in this manner, since it is likely that a user will be interested in times of “extreme” events during the performance of an outdoor activity or sports session, e.g. so that they can quickly find and play these moments and/or share the moment with others. A time of interest may also, or alternatively, be determined based on an analysis of the recorded video image data and/or audio data. For example, the recorded video image data can be analysed using a facial recognition algorithm to identify times in the video image data when a person's face, or a particular person's face, can be seen. Similarly, the recorded audio data can be analysed, for example, to identify times when a particular sound is heard or a particular word or phrase is spoken. In other examples, one or more times of interest can be determined based on data received over a wireless connection from one or more wireless beacons or sensors. For example, a wireless beacon or sensor could detect when a user, who is performing an activity and is carrying or is associated with the camera, is within a predetermined distance of the beacon, and, in response to detecting the user, transmit data, e.g. the time at which the user was detected, to the video camera. This would allow, for example, a user to place a wireless beacon on a finish line or at a certain location on a course being followed, e.g. a jump or other obstacle, and for the camera to create a tag and/or highlight based on the time at which the user is detected by the wireless beacon.
In other examples, and as will be discussed in more detail below, highlight data for one digital media file, i.e. one or more times of interest, such as a tag or highlight, can be generated based on highlight data for another digital media file, e.g. based on a start time and an end time of a highlight associated with the digital media file. This is useful, for example, when a plurality of digital video cameras have each recorded video image data during at least the same time period, and preferably relating to the same event. For example, a single user, e.g. performing a ski run or mountain biking course, may have a plurality of cameras organised to record an event, e.g. with one or more cameras being carried or worn by the user (i.e. mobile cameras) and/or one or more cameras set up at a certain location along the course (i.e. static cameras). In other examples, a plurality of users, e.g. who are running, biking or skiing the same course, can each carry or wear one or more cameras. The generation of highlight data in this manner may be carried out in response to a user input, e.g. the selection of a highlight based on which a highlight in another digital media file is desired to be generated, and/or automatically, e.g. such that a corresponding highlight in another digital media file is generated in relation to certain types of highlight.
The analysis of the sensor data to identify the one or more times of interest can occur, e.g. substantially continuously or at periodic intervals, during the recording of the video image data, i.e. after receipt of the first input to cause the camera to start recording and before receipt of the second input to cause the camera to stop recording. Preferably, however, the analysis is performed after receipt of the input to stop recording, such that the sensor data recorded during the entire recording event, e.g. for the entire performance of the activity or sports session, can be analysed and compared. This allows for the most extreme events during the recording event itself to be identified, rather than just general extreme events. This is because, as will be appreciated, an extreme event experienced while cycling, for example, will typically appear very differently in the sensor data than an extreme event experienced while undertaking a motorsport; yet ideally both extreme events would be identified.
Accordingly, in embodiments, the sensor data stored in the second memory is analysed using a highlight identification algorithm, after receipt of the second input to cause the camera to stop recording, to generate highlight data identifying one or more times of interest in the video image data, and the sensor data and the highlight data are then added to a metadata portion of the digital media container so as to create the media file. As will be appreciated, there may already be highlight data stored in the second memory, e.g. determined from one or more manual inputs, before receipt of the input to stop recording, and in such embodiments the highlight data added to the metadata portion of the digital media container comprises first highlight data identifying one or more times of interest determined based on received manual inputs, i.e. referred to herein as “manual highlights”, and second highlight data identifying one or more times of interest based on an analysis of the sensor data, i.e. referred to herein as “automatic highlights”.
The highlight identification algorithm preferably comprises analysing a plurality of datasets obtained from the plurality of sensor devices associated with the video camera. Each of the datasets comprises a plurality of data values for a plurality of times during a time period in which video image data is collected, i.e. a recording event between an input causing the camera to start recording and an input causing the camera to stop recording. The datasets are analysed by identifying extrema, e.g. maxima and/or minima, in each of the datasets, and determining, for each of the identified extrema, if the time of an extremum is within a first predetermined time of the time of another of the extrema. The first predetermined time can be, for example, between 1 and 5 seconds, such as 3 seconds. A plurality of clusters are generated based on the determination, wherein each cluster comprises a plurality of extrema, and wherein the time of each extremum in the cluster is within the first predetermined time of the time of another extremum in the cluster. Each cluster, as will be appreciated, has a start time and an end time, which together define a duration for the cluster. The start time for the cluster corresponds to the earliest of the times of the extrema in the cluster, while the end time corresponds to the latest of the times of the extrema in the cluster. One or more of the generated clusters are then used to create highlights for the video image data, with each highlight typically being based on one of the clusters. It is contemplated, however, that if two or more clusters are close to each other in time, e.g. within a second predetermined time, then the clusters can be combined to create a single cluster, such that a highlight would be based on the two or more clusters that are combined. The second predetermined time can be, for example, between 1 and 5 seconds, such as 2 seconds.
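A minimal sketch of this clustering step (in Python, assuming that the extrema of each dataset have already been identified and that only their times, in seconds from the start of the recording, are needed) is set out below; the 3-second and 2-second windows simply restate the exemplary values given above.

    FIRST_WINDOW_S = 3.0    # extrema within this time of another extremum join the same cluster
    SECOND_WINDOW_S = 2.0   # clusters closer together than this are combined

    def build_clusters(extrema_times):
        """Group extrema times (from all datasets) into clusters of (start time, end time).

        An extremum joins a cluster if it lies within FIRST_WINDOW_S of another extremum
        already in that cluster; isolated extrema are discarded so that each cluster
        contains a plurality of extrema.
        """
        times = sorted(extrema_times)
        clusters, current = [], times[:1]
        for t in times[1:]:
            if t - current[-1] <= FIRST_WINDOW_S:
                current.append(t)
            else:
                if len(current) > 1:
                    clusters.append((current[0], current[-1]))
                current = [t]
        if len(current) > 1:
            clusters.append((current[0], current[-1]))
        return merge_close_clusters(clusters)

    def merge_close_clusters(clusters):
        """Combine clusters whose gap is no more than SECOND_WINDOW_S."""
        merged = []
        for start, end in clusters:
            if merged and start - merged[-1][1] <= SECOND_WINDOW_S:
                merged[-1] = (merged[-1][0], end)
            else:
                merged.append((start, end))
        return merged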
It is believed that the analysis of sensor data collected while recording video image data to identify one or more times of interest in the video image data is new and advantageous in its own right.
Thus, in accordance with another aspect of the present invention, there is provided a method of identifying one or more times of interest in video image data collected by a digital video camera during a time period, said digital video camera having a plurality of sensor devices associated therewith, the method comprising:
The present invention extends to a system, preferably a digital video camera, for carrying out a method in accordance with any of the aspects or embodiments of the invention herein described.
Thus, in accordance with another aspect of the invention, there is provided a system for identifying one or more times of interest in video image data collected by a digital video camera during a time period, said digital video camera having a plurality of sensor devices associated therewith, the system comprising:
As will be appreciated by those skilled in the art, these further aspects of the present invention can, and preferably do, include any one or more or all of the preferred and optional features of the invention described herein in respect of any of the other aspects of the invention, as appropriate. Accordingly, even if not explicitly stated, the system of the present invention may comprise means for carrying out any step described in relation to the method of the invention in any of its aspects or embodiments, and vice versa. For example, the method is preferably performed on the digital video camera, and is preferably performed on the sensor data stored in the second memory, e.g. after receipt of the input to cause the camera to stop recording and before it is added to a metadata portion of the resultant media file comprising the video image data. It is contemplated, however, that the method can be performed on a computing device, separate from the video camera, using the sensor data stored in the media file.
The plurality of datasets that are analysed can include one or more datasets in respect of a variable obtained from a single sensor device, such as speed as determined from a global navigation satellite system (GNSS) receiver, heart rate as determined from a heart rate sensor, etc. Additionally, or alternatively, the plurality of datasets that are analysed can include one or more datasets in respect of a variable from a plurality of sensor devices, such as absolute or relative altitude as determined from a pressure sensor and a temperature sensor, acceleration as determined from an accelerometer, gyroscope and compass, etc.
The plurality of datasets that are analysed can include one or more datasets in respect of a variable obtained directly from the one or more sensor devices, e.g. and as will typically be stored in the second memory of the camera. Additionally, or alternatively, the plurality of datasets that are analysed can include one or more datasets in respect of a variable obtained indirectly from the one or more sensor devices, e.g. acceleration can be determined from speed as determined from a GNSS receiver. Such variables can be determined upon receipt of the data from the associated one or more sensor devices, and stored as the sensor data in the second memory. Alternatively, such variables can be determined when the sensor data is accessed from the second memory for use by the highlight identification algorithm.
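For example, a minimal sketch (in Python) of deriving an acceleration dataset from time-stamped GNSS speed samples by finite differences is:

    def derive_acceleration(speed_samples):
        """Derive acceleration (m/s^2) from (time_s, speed_m_per_s) GNSS samples
        by differencing consecutive samples."""
        acceleration = []
        for (t0, v0), (t1, v1) in zip(speed_samples, speed_samples[1:]):
            if t1 > t0:
                acceleration.append((t1, (v1 - v0) / (t1 - t0)))
        return acceleration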
In aspects and embodiments of the invention, extrema are identified in each of the plurality of datasets. The extrema can include maxima (e.g. peaks) and/or minima (e.g. troughs), e.g. based on the particular variable of the dataset being analysed. In embodiments, the datasets can be filtered, e.g. using a Kalman filter, and/or smoothed, e.g. using a moving average or similar technique. Such filtering and/or smoothing removes noise and other phenomena from the datasets, and can facilitate the process of identifying the extrema. In embodiments, an extremum is identified only when it passes a certain threshold, i.e. is above or below a predetermined value as appropriate. The threshold value may be, and typically will be, dependent on the particular variable of the dataset being analysed. The threshold value may also be dynamically determined based on the data in the dataset. In other words, a different threshold value may be used between different datasets from the same sensor device.
In embodiments, the plurality of datasets comprise one or more of: speed (e.g. determined from a GNSS receiver); heart rate (e.g. determined from a heart rate sensor); acceleration (e.g. determined from a GNSS receiver); vertical speed (e.g. determined from a barometer); rotation (e.g. determined from a gyroscope); and G-force (e.g. determined from an accelerometer). Each of these datasets can be used to determine a respective extreme value, e.g. a maximum speed, maximum heart rate, maximum acceleration and/or maximum deceleration, maximum vertical speed, maximum rotation or maximum G-force, of the user, camera or equipment (based on the location of the sensor device) during the time period in which the video image data was recorded.
In embodiments, a score is determined for each of the identified extrema. The score is preferably determined using the data value of the identified extremum, together with the data values in the other datasets at the same time as the identified extremum. Preferably, each of the data values used to determine the score is normalised with respect to the other data values in its respective dataset. For example, each data value used to determine the score is preferably divided by the maximum data value in its respective dataset. The score for an identified extremum is then preferably determined by calculating a mean of the normalised data values from each of the plurality of datasets at the time of the identified extremum. Accordingly, each identified extremum is associated with a time and a data value, and optionally a type identifying the dataset in which the extremum was identified and/or a score.
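A minimal sketch of this scoring (in Python, assuming that each dataset is a list of (time, value) samples and that the sample nearest in time to the extremum contributes on behalf of each dataset) is:

    def score_extremum(extremum_time, datasets):
        """Score an identified extremum as the mean of the normalised data values of
        all datasets at (nearest in time to) the extremum.

        datasets: mapping of dataset name -> list of (time, value) samples.
        Each contributing value is normalised by the maximum (absolute) value in
        its own dataset, so that datasets with different units are comparable.
        """
        normalised = []
        for samples in datasets.values():
            if not samples:
                continue
            max_value = max(abs(v) for _, v in samples)
            if max_value == 0:
                continue
            _, value = min(samples, key=lambda s: abs(s[0] - extremum_time))
            normalised.append(abs(value) / max_value)
        return sum(normalised) / len(normalised) if normalised else 0.0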
As discussed above, a plurality of clusters are generated based on a determination, for each of the identified extrema, of whether the time of an extremum is within a predetermined time of the time of another of the extrema. Each cluster has a start time and an end time, which together define a duration for the cluster. The start time for the cluster corresponds to the earliest of the times of the extrema in the cluster, while the end time corresponds to the latest of the times of the extrema in the cluster. Each cluster preferably further comprises a cluster score, wherein the cluster score is preferably based on the scores of the individual identified extrema in the cluster. For example, the cluster score can be a mean, optionally a weighted mean, of the individual scores. The use of a weighted mean allows the different individual datasets to have a different impact on the cluster score.
In embodiments, two or more clusters can be combined to create a single cluster, e.g. if the two or more clusters are within a predetermined time of each other, such as 2 seconds. For example, a first cluster can be combined with a second cluster if the end time of the first cluster is within a predetermined time of the start time of the second cluster. The resultant single cluster will preferably have a set of properties based on the properties of the clusters that were combined to create it. For example, if a first cluster is combined with a second cluster, the first cluster being earlier in time than the second cluster, then the resultant cluster will have the start time of the first cluster and the end time of the second cluster. The score of the resultant cluster will preferably be based on the scores of the clusters that are combined, e.g. as a mean of the scores of the combined clusters.
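The generation of clusters from the identified extrema, and the combination of clusters that lie close together in time, could for example be implemented as in the following illustrative Python sketch. Each extremum is assumed to be a simple record with time, value, type and score fields, and the 3 second and 2 second windows are exemplary values only.

    # Illustrative sketch only: grouping extrema into clusters when they fall
    # within a predetermined time of one another, and combining clusters whose
    # start/end times are within a (here 2 second) gap. Each extremum is assumed
    # to be a dict with "time", "value", "type" and "score" keys.

    def cluster_score(extrema):
        scores = [e["score"] for e in extrema]
        return sum(scores) / len(scores)                 # optionally a weighted mean

    def build_clusters(extrema, window=3.0):
        clusters = []
        for ex in sorted(extrema, key=lambda e: e["time"]):
            if clusters and ex["time"] - clusters[-1]["end"] <= window:
                clusters[-1]["end"] = ex["time"]         # extend the existing cluster
                clusters[-1]["extrema"].append(ex)
            else:
                clusters.append({"start": ex["time"], "end": ex["time"], "extrema": [ex]})
        for cluster in clusters:
            cluster["score"] = cluster_score(cluster["extrema"])
        return clusters

    def combine_clusters(clusters, gap=2.0):
        combined = []
        for cluster in sorted(clusters, key=lambda c: c["start"]):
            if combined and cluster["start"] - combined[-1]["end"] <= gap:
                combined[-1]["end"] = cluster["end"]     # merge into the previous cluster
                combined[-1]["extrema"] += cluster["extrema"]
                combined[-1]["score"] = cluster_score(combined[-1]["extrema"])
            else:
                combined.append({**cluster, "extrema": list(cluster["extrema"])})
        return combined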
Some or all of the clusters, whether original clusters or clusters resulting from a combination of clusters, are used to create highlights identifying time periods of interest, e.g. to the user, in the video image data. As will be appreciated, each cluster is preferably used to create an individual highlight, which is then preferably stored, as discussed above, in a metadata portion of the digital media file. In embodiments, the clusters are ranked (or sorted) based on their cluster scores, and only some of the clusters are used in the creation of highlights. For example, only a predetermined number of clusters may be used to create highlights, e.g. due to the need to reserve memory such that the one or more metadata portions can be located at the start of the media file. The predetermined number of clusters can be a fixed number, e.g. only 10 automatic highlights are created, or can be a variable number, e.g. based on the number of manual highlights, such that only a maximum number of highlights are created and added to a media file. Additionally, or alternatively, only those clusters with a cluster score above a predetermined value are preferably used in the creation of highlights.
In embodiments, the time window associated with a highlight created from a cluster, i.e. an automatic highlight, is preferably of at least a predetermined size, such as 6 seconds. This can help improve, for example, the display of such highlights on a computing device as will be discussed in more detail below. Accordingly, in some embodiments, if the duration of a cluster is less than the predetermined time, e.g. 6 seconds, then the start and/or end times of the cluster can be modified, such that the resultant highlight has a time window of the predetermined size.
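Purely by way of example, the ranking and selection of clusters, and the expansion of short clusters to a minimum highlight duration, might be implemented as follows. The cap of 10 automatic highlights and the 6 second minimum duration are exemplary values, and the clamping of the window to the video duration is a simplification made for illustration.

    # Illustrative sketch only: ranking clusters by cluster score, limiting the
    # number of automatic highlights, and expanding short clusters so that each
    # highlight spans at least a minimum duration. The cap and minimum duration
    # are exemplary values.

    MIN_DURATION = 6.0          # exemplary minimum highlight window, in seconds
    MAX_AUTO_HIGHLIGHTS = 10    # exemplary fixed cap on automatic highlights

    def clusters_to_highlights(clusters, video_duration):
        ranked = sorted(clusters, key=lambda c: c["score"], reverse=True)
        highlights = []
        for cluster in ranked[:MAX_AUTO_HIGHLIGHTS]:
            start, end = cluster["start"], cluster["end"]
            if end - start < MIN_DURATION:
                centre = (start + end) / 2.0             # widen symmetrically about the centre
                start = max(0.0, centre - MIN_DURATION / 2.0)
                end = min(video_duration, start + MIN_DURATION)
            highlights.append({"start": start, "end": end, "score": cluster["score"]})
        return highlights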
As discussed above, a highlight, and preferably each highlight, preferably comprises one or more of the following items of information: (i) a unique identifier; (ii) a type identifying the type of tag or highlight; (iii) a tag time identifying the time when the tag was generated; (iv) a start time identifying the start of the highlight; (v) an end time identifying the end of the highlight; and (vi) additional information. The type for an automatic highlight, i.e. a highlight created from a cluster, can include information identifying a dataset with an extremum that led to the creation of the highlight, and preferably identifying the dataset of the extremum with the highest score of the extrema in the cluster. Therefore, for example, the type for an automatic highlight can be one of the following: speed; G-force; rotation (or spin); acceleration; deceleration; vertical speed; and heart rate. The start and end times for an automatic highlight are preferably determined as described above. In embodiments, an artificial tag time is associated with an automatic highlight, e.g. for use when displaying and using highlights on a computing device; the tag time is artificial since an automatic highlight is typically not derived from a single point in time, in contrast to a manual tag. The tag time for an automatic highlight can be any time between the start and end times of the highlight, and can even be one of the start and end times. However, in preferred embodiments, the tag time is the central time of the highlight, i.e. equidistant from the start and end times of the highlight.
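The following Python sketch shows one exemplary in-memory representation of such a highlight record; the field names are assumptions made for illustration and do not correspond to any particular on-disk format of the metadata portion.

    # Illustrative sketch only: one possible in-memory representation of a
    # highlight record holding the items of information listed above.

    from dataclasses import dataclass
    from typing import Optional
    import uuid

    @dataclass
    class Highlight:
        identifier: str                   # (i) unique identifier
        kind: str                         # (ii) e.g. "manual", "speed", "g_force", ...
        tag_time: float                   # (iii) time at which the tag was generated
        start_time: float                 # (iv) start of the highlight window
        end_time: float                   # (v) end of the highlight window
        extra: Optional[dict] = None      # (vi) additional information

    def automatic_highlight(kind, start, end, extra=None):
        """Create an automatic highlight whose artificial tag time is the central time."""
        return Highlight(str(uuid.uuid4()), kind, (start + end) / 2.0, start, end, extra)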
In some aspects and embodiments of the invention, the highlight data, e.g. comprising one or more tags and/or highlights, that identifies one or more times of interest in video image data is stored in a metadata portion of a digital media file including the video image data. The storage of the highlight data in this manner is advantageous in that it allows efficient read access to the highlight data, e.g. by a computing device remote from the camera, and also efficient write access to the highlight data, e.g. to allow the information associated with a highlight, such as the start time and/or the end time, to be modified in a post-processing step, e.g. based on a user input, after the recordal of the video image data (and thus after the creation of the digital media file).
Accordingly, in embodiments, the highlight data of a media file can be modified after the creation of the file. The modification of the highlight data can include the addition of new tags and/or highlights to the highlight data; such new tags and highlights preferably being manual highlights created by a user watching the video image data on a computing device. Additionally, or alternatively, the modification of the highlight data can include the modification of an existing tag and/or highlight in the highlight data, such as changing the tag time of a tag (or highlight), changing the start time and/or end time of a highlight, etc. Additionally, or alternatively, the modification of the highlight data can include the deletion of existing tags and/or highlights in the highlight data, e.g. to remove manual highlights that were added accidentally, or to remove automatic highlights that relate to times in the video image data that are of no interest to the user. The modification of the highlight data in a media file is preferably performed on the computing device with the memory storing the media file. The modification can therefore be performed by the video camera, or by any other computing device as desired. The modification of the highlight data can be performed based on instructions (or commands) generated on the computing device that performs the modification, or based on instructions (or commands) generated on a remote computing device and transmitted, e.g. via a wired or wireless connection, to the computing device performing the modification. For example, and as will be discussed in more detail below, the modification instructions can be generated on a mobile computing device, such as a smartphone, and wirelessly transmitted to the video camera, with the highlight data in the media file being modified on the video camera according to the received instructions.
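One exemplary, non-limiting form for such modification instructions, and their application to the highlight data, is sketched below in Python, reusing the illustrative Highlight record from the sketch above; the action names and fields are assumptions made for illustration.

    # Illustrative sketch only: applying an exemplary add/update/delete
    # instruction to a list of Highlight records (see the sketch above).
    # The action names and fields are assumptions made for illustration.

    def apply_modification(highlights, instruction):
        action = instruction["action"]
        if action == "add":
            highlights.append(instruction["highlight"])
        elif action == "update":
            for h in highlights:
                if h.identifier == instruction["id"]:
                    h.start_time = instruction.get("start_time", h.start_time)
                    h.end_time = instruction.get("end_time", h.end_time)
                    h.tag_time = instruction.get("tag_time", h.tag_time)
        elif action == "delete":
            highlights[:] = [h for h in highlights if h.identifier != instruction["id"]]
        return highlights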
In some embodiments of the invention, new tags and/or highlights can be created based on highlight data, such as manual tags and/or highlights, created by other users. In other words, manual tags and/or highlights can be crowdsourced, and used to suggest new tags and/or highlights to a user. Indeed, it is believed that the crowdsourcing of manual tags from a plurality of users to create highlights is new and advantageous in its own right.
Thus, in accordance with another aspect of the present invention, there is provided a method of identifying one or more times of interest in video image data collected by a digital video camera during a time period, said digital video camera having a position determining device associated therewith, the method comprising:
The present invention extends to a system, such as a computing device, and preferably a mobile computing device, and/or a digital video camera, for carrying out a method in accordance with any of the aspects or embodiments of the invention herein described.
Thus, in accordance with another aspect of the invention, there is provided a system for identifying one or more times of interest in video image data collected by a digital video camera during a time period, said digital video camera having a position determining device associated therewith, the system comprising:
As will be appreciated by those skilled in the art, these further aspects of the present invention can, and preferably do, include any one or more or all of the preferred and optional features of the invention described herein in respect of any of the other aspects of the invention, as appropriate. Accordingly, even if not explicitly stated, the system of the present invention may comprise means for carrying out any step described in relation to the method of the invention in any of its aspects or embodiments, and vice versa. For example, the first position data preferably forms part of the sensor data, which is, in preferred embodiments, stored in a metadata portion of the media file; the video image data in contrast being stored in a payload portion of the media file.
The first position data is representative of the change in position of the digital video camera, e.g. of the position determining device associated with the camera. In other words, the first position data comprises data indicative of the position of the camera collected substantially continually or at, preferably regular, intervals during the time in which the video image data is recorded. In embodiments, the first position data comprises a set of geographic coordinates, such as latitude, longitude and elevation, e.g. as obtained from a GNSS sensor, and optionally a pressure sensor, associated with the camera, together with a time stamp for each geographic coordinate. The time stamp is indicative of the time at which the geographic coordinate was recorded, and can be in the form of an absolute value, e.g. as a UTC value, or a relative value, e.g. as an offset from the beginning and/or end of the video image data.
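Purely by way of illustration, the first position data may be represented as a sequence of time-stamped coordinate samples, for example as in the following Python sketch; the field names and values are exemplary assumptions only.

    # Illustrative sketch only: an exemplary representation of the first position
    # data as time-stamped geographic coordinates.

    from dataclasses import dataclass

    @dataclass
    class PositionSample:
        timestamp: float     # UTC time, or an offset from the start of the video image data
        latitude: float      # degrees, e.g. from a GNSS receiver
        longitude: float     # degrees
        elevation: float     # metres, e.g. from a GNSS receiver and/or pressure sensor

    # Samples collected at (preferably regular) intervals during recording
    first_position_data = [
        PositionSample(0.0, 52.3680, 4.9036, 2.0),
        PositionSample(1.0, 52.3681, 4.9040, 2.1),
    ]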
In some aspects and embodiments of the invention, the first position data is transmitted to a server. The first position data can comprise all of the position data collected and stored during the time period in which the video image data was recorded, i.e. the entire position dataset within the sensor data stored in the media file. In other embodiments, the first position data can comprise a portion of the position dataset within the sensor data. The first position data can be transmitted to the server, together with other datasets from the sensor data stored in the media file, or appropriate portions thereof. For example, and as described in WO 2013/037860 A1, the entire content of which is incorporated herein by reference, information concerning the speed, heading and/or acceleration of the digital video camera and/or the user of the camera, and/or information concerning the quality of the position data, e.g. based on the accuracy of a GNSS receiver used to determine the position, can be used to improve the estimate of the path taken by the camera during the time period.
The first position data received at the server is processed to determine second position data identifying one or more positions of interest in the first position data. The positions of interest are preferably based on manual highlights created by other users. As discussed above, at least some of the highlights for a media file can be generated based on a manual input, e.g. during the recording of the associated video image data, such as by actuating a button on the digital video camera, or after the recording, such as by interaction with a computing device when reviewing video image data.
The server preferably receives a plurality of datasets from a plurality of different users in relation to different recording events. Each dataset includes at least data identifying the position of one or more, and preferably all, manual highlights created by a user. In embodiments, the received dataset includes the position of the camera at the tag time associated with a highlight (and which can be found, in some embodiments, in the highlight data associated with a media file). In other embodiments, the received dataset includes the position of the camera at the start time and end time of the time period represented by the highlight. Preferably, however, the received dataset includes data identifying the changing position of the camera in the time period represented by the highlight (and which can be found, in some embodiments, in the sensor data associated with a media file). The server preferably processes the plurality of received datasets to identify a plurality of positions of interest, e.g. as point locations, line locations and/or area locations; such positions of interest being geographical locations where users have found it desirable to create manual highlights, and where it can therefore be assumed that there is a landform, i.e. a natural feature of the Earth's surface, e.g. a hill, valley, etc, and/or a man-made feature that users have wanted to mark for later viewing.
The server thus preferably comprises, or has access to, a database of stored positions of interest, e.g. as point geographic locations, line geographic locations and/or area geographic locations. When first position data is received at the server, the server preferably compares the first position data to the stored positions of interest in order to identify one or more positions of interest in the first position data. The comparison is preferably based on one or more of a distance measure and a heading measure. For example, the comparison may comprise defining a bounding area based on the first position data; where the bounding area identifies a geographic region within which the camera moved during the time period. The bounding area can be any area as desired, provided it encompasses all of the first position data, such as, for example, a rectangle bounding the trace formed by the first position data. The server uses the bounding area to identify any stored positions of interest that are within the bounding area. By comparing the first position data to the stored positions of interest within the bounding area, e.g. by determining if a position falls within a stored area of interest, by determining if a position is within a predetermined distance of a stored point or line of interest, etc, it is possible to identify one or more positions of the first position data that relate to stored positions of interest, and which form the second position data.
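By way of illustration, the comparison of the first position data against stored positions of interest might proceed along the following lines, reusing the exemplary PositionSample record from the earlier sketch. The bounding box, the equirectangular distance approximation and the 50 metre threshold are assumptions made for illustration; stored lines and areas of interest would be compared analogously.

    # Illustrative sketch only: comparing the first position data against stored
    # point positions of interest using a bounding box and a distance measure.
    # Stored lines and areas of interest would be compared analogously.

    import math

    def bounding_box(samples):
        lats = [s.latitude for s in samples]
        lons = [s.longitude for s in samples]
        return min(lats), min(lons), max(lats), max(lons)

    def distance_m(lat1, lon1, lat2, lon2):
        """Approximate distance in metres between two coordinates (small separations)."""
        dlat = math.radians(lat2 - lat1)
        dlon = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2.0))
        return 6371000.0 * math.hypot(dlat, dlon)

    def second_position_data(samples, stored_points, threshold_m=50.0):
        """Return the position samples that lie close to a stored point of interest."""
        min_lat, min_lon, max_lat, max_lon = bounding_box(samples)
        candidates = [(lat, lon) for lat, lon in stored_points
                      if min_lat <= lat <= max_lat and min_lon <= lon <= max_lon]
        return [s for s in samples
                if any(distance_m(s.latitude, s.longitude, lat, lon) <= threshold_m
                       for lat, lon in candidates)]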
In some aspects and embodiments of the invention, the second position data is transmitted from the server to the camera or associated computing device. The second position data identifies one or more positions of interest in the first position data. The second position data can be of any form that identifies point locations within the first position data. For example, the second position data could comprise one or other, or both, of the time stamp and geographic coordinate of the first position data sent to the server. In preferred embodiments, the second position data comprises one or more time values indicative of times within the time period of the video image data.
The received second position data, e.g. one or more geographic coordinates and/or time stamps, are used to generate tags and/or highlights for the digital media file. For example, the time stamps can be used as the time for a tag, and thus as the basis for the time window defining a highlight. Alternatively, the geographic coordinates can be used to determine a corresponding time of interest within the time period of the video image data, e.g. by comparing with the sensor data associated with the media file, and the determined time of interest is used as the time for a tag, and thus as the basis for the time window defining a highlight.
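For example, a position of interest received from the server could be converted into a time of interest by finding the closest position sample in the sensor data, as in the following illustrative sketch (reusing the exemplary distance helper from the earlier sketch); the returned time can then be used as the tag time for a suggested highlight.

    # Illustrative sketch only: converting a position of interest received from
    # the server into a time of interest within the video image data, using the
    # position samples in the sensor data (and the distance helper sketched above).

    def time_of_interest(samples, poi_lat, poi_lon):
        nearest = min(samples, key=lambda s: distance_m(s.latitude, s.longitude, poi_lat, poi_lon))
        return nearest.timestamp     # used as the tag time for a suggested highlight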
The highlight data can be generated automatically based on the received position data, or, in embodiments, the highlight data can be generated based on an input received from a user. For example, in embodiments, a computing device can be used to display a representation of the potential highlight data to a user on a display device of the computing device and/or to play (or preview) the video image data associated with the potential highlight; as described in more detail below. The user, after viewing the potential highlight, can then accept the highlight, or accept an amended version of the highlight, e.g. by adjusting the start and/or end times of the highlight, and cause the highlight to be added to the highlight data in the metadata portion of the media file.
As will be appreciated, the computing device and/or digital video camera that is arranged to perform the above described invention is preferably also arranged to transmit one or more datasets including data identifying the position of one or more, and preferably all, manual highlights that are created for one or more, and preferably all, recorded digital media files. In other words, the computing device and/or digital video camera is preferably arranged to provide data indicative of manual tags for use in generating the database of stored positions of interest, in addition to being arranged to make use of the database for suggesting tags and/or highlights.
The present invention, in at least some aspects and embodiments, also extends to a computing device, e.g. a server, that is arranged to perform the above described method of generating a database of stored positions of interest based on a plurality of received datasets indicative of manual tags from a plurality of different users in relation to different recording events and/or the above described method of receiving first position data from a computing device and transmitting, in response to the received first position data, second position data identifying one or more positions of interest in the first position data. In accordance with at least some aspects and embodiments of the invention, and as discussed above, a digital video camera is used to create one or more media files stored on a memory, and wherein each of the one or more media files includes one or more sets of video image data, and optionally audio data, in a payload portion of the file, together with highlight data and/or sensor data in one or more metadata portions of the file. The sensor data is based on data received from one or more sensor devices that are associated with the video camera, and is contemporaneous with the one or more sets of video image data. The highlight data identifies one or more times of interest in the one or more sets of video image data. Preferably, the one or more media files stored on the memory are accessed by a computing device, such as a desktop computer, laptop computer, tablet computer, smartphone, etc, so as to allow the media content in the files to be played and displayed to a user and/or to allow a user to edit the media content in the files.
The one or more media files can be accessed in any desired manner. For example, the one or more media files can be transferred from the memory to a separate memory of the computing device. Additionally, or alternatively, the memory storing the one or more media files could be a removable memory, such as a memory card, and the memory itself is transferred and added to, i.e. installed on, the computing device.
In some embodiments, the media content in the files can be transferred over a wireless connection, i.e. streamed, to the computing device, e.g. from the video camera. As discussed above, the one or more media files are preferably structured such that the one or more metadata portions are located at the front of the file, so as to facilitate streaming of the media content in the file.
Entire media files, rather than just the content contained therein, can also, in some embodiments, be transferred to the computing device. As will be discussed in more detail below, however, such media files (that are transferred over a wireless connection) will typically not be media files created during the recording of data by the digital video camera, but instead are preferably media files that are generated, e.g. in a transcoding process, from the media files created during the recording of data.
The computing device, once having accessed the one or more media files, preferably displays a representation of each of the one or more media files to a user on a display device of the computing device. This representation can be selected by the user, e.g. using a computer pointing device, such as a computer mouse, via a touch on a touchscreen display, etc, to allow the media file associated with the representation to be played, deleted and/or manipulated, e.g. renamed, moved to a new location in memory, etc. The representation can include a thumbnail image, which is preferably a frame from the video image data contained in the respective media file. The representation can also include information, preferably superimposed over the thumbnail image, identifying one or more of: the type of video image data in the file (e.g. data indicative of the resolution and/or frame rate of the video image data); a duration of the media file; and the number of times of interest identified in the highlight data for the file (e.g. the number of highlights).
In embodiments, the computing device, once having accessed the one or more files, can additionally, or alternatively, display a representation of the highlight data in the one or more media files to a user on a display device of the computing device. The highlight data, as described above, preferably comprises a plurality of highlights, each having a start time and an end time with respect to the associated video image data. Thus, in embodiments, the representation of the highlight data displayed to the user preferably comprises a representation of one or more or all highlights in the one or more media files. This representation can be selected by the user, e.g. using a computer pointing device, such as a computer mouse, via a touch on a touchscreen display, etc, to allow the highlight associated with the representation to be played, deleted and/or selected, as discussed in more detail below, for combining with other highlights to create a highlights video (e.g. a summary or story of interesting and/or exciting moments in the video image data recorded by the video camera). The representation can include a thumbnail image, which is preferably a frame from the video image data contained in the respective media file for the time window defined by the respective highlight. The representation can also include information, preferably superimposed over the thumbnail image, identifying one or more of: the origin of the highlight, e.g. whether the highlight is a manual highlight or an automatic highlight; the type of automatic highlight; and a value associated with the highlight, such as, for an automatic highlight, the value of an extremum that led to the creation of the highlight (e.g. maximum speed, maximum acceleration, etc), and, for a manual highlight, the tag time of the highlight.
The representation of the media file and/or highlight, e.g. thumbnail image, can include a single frame from the relevant video image data (i.e. a static image). Additionally, or alternatively, in some embodiments, when a pointer of a pointing device, such as a computer mouse or a finger in the case of a touchscreen display, is positioned on the thumbnail image, the thumbnail image may cycle through a series of predetermined frames from the relevant video image data, so as to show a brief summary (or preview) of the video image data contained in the associated media file to the user. Additionally, or alternatively, in some embodiments, when a pointer of a pointing device is moved or transitions across the thumbnail image, the thumbnail image may show a preview of the video image data contained in the associated media file to the user, wherein the position of the pointer is used to select the frame on which the displayed thumbnail image is based. In other words, the relative position of the cursor or finger along a timeline defined relative to the thumbnail image can be used to select a relevant frame from the video image data, which is then used to generate the displayed thumbnail image. It is believed that the previewing of video image data by moving a pointer across a display window is new and advantageous in its own right.
Thus, in accordance with another aspect of the present invention, there is provided a method of previewing video image data using a computing device, the method comprising:
The present invention extends to a system, such as a computing device, e.g. a desktop computer, laptop, tablet, mobile phone, etc, for carrying out a method in accordance with any of the aspects or embodiments of the invention herein described.
Thus, in accordance with another aspect of the invention, there is provided a system for previewing video image data using a computing device, the system comprising:
As will be appreciated by those skilled in the art, these further aspects of the present invention can, and preferably do, include any one or more or all of the preferred and optional features of the invention described herein in respect of any of the other aspects of the invention, as appropriate. Accordingly, even if not explicitly stated, the system of the present invention may comprise means for carrying out any step described in relation to the method of the invention in any of its aspects or embodiments, and vice versa. For example, the video image data of the media file that is previewed using the above technique can be the entire video image data of the file or can be the video image data associated with a highlight. In the former case, the first and second positions on the boundary, i.e. the start and end of the timeline, represent, respectively, the start and end of the video track in the file. Whereas, in the latter example, the first and second positions on the boundary represent, respectively, the start and end times of the highlight (with respect to the start and/or end of the video track in the file).
In some aspects and embodiments of the invention, one or more, and preferably a plurality of, media files comprising video image data are accessed by the computing device. A media file can be stored in a memory of the computing device and thus accessed using a wired connection, or alternatively can be stored in a memory of the video camera and accessed over a wireless connection. The video image data in the media file may be stored in a payload portion of the file and encoded using one or other of an intraframe compression technique and an interframe compression technique. As will be appreciated, it is less computationally expensive, i.e. requires less processing, to access frames of the video image data that is encoded using an intraframe compression technique, since each frame is compressed without reference to another frame. Accordingly, as discussed in more detail below, each frame within the video image data can be accessed and, if required, appropriately scaled for use as a displayed thumbnail image. The method can, however, be applied using video image data that is encoded using an interframe compression technique; for example by accessing key frames (i.e. complete images stored within the data stream) and using these frames, again appropriately scaled if required, for use as the displayed thumbnail image.
In preferred embodiments, the digital media file has a payload portion comprising a first track in which the video image data is encoded using an interframe compression technique and a second track in which the video image data is encoded using an intraframe compression technique. The first track is preferably used when playing or scrubbing (e.g. reviewing) the file, while the second, typically lower resolution, track can be used when previewing the file, e.g. as discussed above.
A representation of each of the one or more media files is displayed in a display window on a display device. Each representation comprises a thumbnail image, such as one based on a frame at a predetermined position between the start time and end time of the video image data, e.g. 10%, 20%, etc from the beginning of the video image data. As discussed above, each thumbnail image may have additional information relating to the media file, video image data and/or the highlight data superimposed over the thumbnail image. When representations of a plurality of media files are displayed in the display window, then the thumbnail images may form a grid. Each thumbnail image can be individually selected, or can be selected as a group, by a user, e.g. based on a received touch or using a computer mouse, to initiate one or more actions concerning the media file and/or highlight data associated with the thumbnail image, such as moving the media file from one memory to another (e.g. from a memory card to a local hard disk), deleting the media file from memory, causing the video image data to be played, etc. Each thumbnail image can be of any shape as desired, such as a rectangle, square, circle, etc.
A timeline is defined that extends from a first position on a boundary of the thumbnail image to a second position on the boundary of the thumbnail image. The timeline represents the length of the video image data, e.g. the entire video or the highlight, such that the first position is indicative of a start time of the video image data and the second position is indicative of an end time of the video image data. The timeline is used to divide at least a portion of a first area defined by the boundary into a plurality of second areas, such that each of the second areas is representative of a different time period between the start time and the end time. In an embodiment, the first area corresponds to the area of the thumbnail image, and thus for example could be rectangular or circular. The first area could, in other embodiments however, correspond to only a portion of the thumbnail image. For example, the area could be a strip following an edge or other portion of the boundary. In a preferred embodiment, wherein the shape of the thumbnail image is rectangular, the timeline may extend from one edge to an opposite edge.
The first area may be divided into the plurality of second areas by defining normal lines to the timeline. Normal lines may be defined based on the frame rate of the video image data in the case of intraframe encoded video image data, or the rate of key frames in the case of interframe encoded video image data, such that each second area relates to a different frame from the video image data.
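An exemplary mapping from the position of the pointer within a rectangular thumbnail image to a frame of the video image data, implementing the division of the first area into second areas described above, is sketched below in Python; the selected frame index would then be used to generate the displayed thumbnail image, as described in the following paragraph. The parameter names and values are assumptions made for illustration.

    # Illustrative sketch only: mapping a pointer position within a rectangular
    # thumbnail image onto a frame index, by dividing the first area along a
    # left-to-right timeline into one second area per frame (or per key frame).

    def frame_for_pointer(pointer_x, thumb_left, thumb_width, duration_s, frame_rate):
        """Return the index of the frame whose second area contains the pointer."""
        fraction = (pointer_x - thumb_left) / float(thumb_width)   # 0.0 at the start, 1.0 at the end
        fraction = min(max(fraction, 0.0), 1.0)
        total_frames = int(duration_s * frame_rate)
        return min(int(fraction * total_frames), total_frames - 1)

    # Exemplary usage: a 320 px wide thumbnail previewing a 90 s highlight at 30 fps
    print(frame_for_pointer(pointer_x=240, thumb_left=0, thumb_width=320, duration_s=90, frame_rate=30))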
A frame is selected from the video image data based on the time period associated with the second area on which a pointer of a pointing device is located. The pointer may be a location indicated by a finger, stylus, etc or a cursor associated with a computer mouse, touchpad, etc. The selected frame is used to generate a new thumbnail image for display in the display window, i.e. in place of the current thumbnail image. The thumbnail image may correspond to the entire selected frame, or it may be a portion of the selected frame; optionally with the frame being scaled to fit within the first area as defined by the boundary of the thumbnail image. As will be appreciated, the frame that is used to generate the thumbnail image is based on the particular second area in which the pointer is located, such that movement of the pointer from one second area to another, e.g. in response to the receipt of an input from a user, causes the displayed thumbnail image to change. Thus, for example, when moving from one second area to an adjacent second area toward the end time of the timeline, the thumbnail image will change to a next frame of the video image data, i.e. the succeeding frame. Similarly, for example, when moving from one second area to an adjacent second area toward the start time of the timeline, the thumbnail image will change to a previous frame of the video image data, i.e. the preceding frame. Thus, by moving the pointer along the portion of the first area through each second area in turn from the start of the timeline to the end of the timeline, the thumbnail images will change to permit the user to preview the video image data.
Accordingly, in embodiments of the invention, the computing device sends a request to the camera to play (or preview) media data, such as a video or highlight, stored on a memory of the camera. The request can be generated following receipt of a user input on the computing device selecting one or more videos and/or highlights to be played, e.g. through the selection of a displayed representation as described above. The request can also be generated automatically by the computing device, or software running thereon, e.g. if multiple videos and/or highlights are requested to be played in succession, either randomly or in a particular order. After receipt of such a request, the video camera transfers (or streams) the requested media data to the computing device, e.g. over a connection established by a wireless communication device in the camera and a corresponding wireless communication device in the computing device. The media data that is transferred is preferably an encoded media stream. The encoded media stream can be, for example, an encoded video stream, such that the computing device displays only video image data, i.e. the requested video or highlight video. Alternatively, the encoded media stream can be an interleaved stream comprising, for example, video image data and audio data.
Accordingly, in some embodiments, the encoded media stream transferred (or streamed) to the computing device can be the payload portion of a media file, or relevant portion thereof when a highlight is requested, e.g. when the media file only includes a video track, or a video track and an audio track. In other embodiments, and for example wherein the payload portion of the media file includes first and second video image data, e.g. one in a video track and another in a text track as described above, the payload portion of the media file may first need to be demultiplexed into its constituent encoded streams, e.g. by a demultiplexer of the camera. The encoded media stream transferred to the computing device may therefore comprise one of the encoded streams output by the demultiplexer, e.g. the video image data from the text track of the media file, or a plurality of the encoded streams output by the demultiplexer, e.g. the video image data from the text track of the media file and the audio data from the audio track of the media file, and which have been interleaved into a single encoded stream by a multiplexer of the camera.
The computing device, e.g. a smartphone, therefore preferably comprises at least one decoder to decode the data in the encoded media stream, such that the decoded data can then be shown to the user on the display device of the computing device. Preferably, however, the computing device comprises a demultiplexer and a plurality of decoders, such that when the computing device receives an interleaved encoded media stream from the camera, e.g. with both video and audio data, the computing device can separate the encoded streams and decode the audio and video data contained therein, such that the user is able to preview a video or highlight video with audio.
In embodiments in which the video camera communicates with the (remote) computing device using a wireless connection, the two devices are preferably capable of communicating with each other using two different communication protocols, preferably short-range communication protocols. For example, the video camera, and likewise the computing device, comprises a first wireless communication device capable of communicating using a first communication protocol and a second wireless communication device capable of communicating using a second communication protocol. The first communication device and associated protocol is preferably used as a control channel allowing the computing device to, for example, trigger status and operational changes in the video camera. The second communication device and associated protocol meanwhile is preferably used as a data channel allowing for the exchange of data between the camera and the computing device, such as data from one or more media files stored in a memory of the video camera. As will be appreciated, the control channel is typically a low bandwidth channel, whereas the data channel is typically a high bandwidth channel.
The first communication device preferably comprises a Bluetooth Low Energy (BLE) transceiver. As known in the art, BLE is a low power communication protocol that is designed for applications requiring low data rates and short duty cycles (in comparison to classical Bluetooth). In BLE, and other similar low energy communication protocols, a connection is only established between devices when there is data to be transferred. This is in contrast to communication protocols, such as classical Bluetooth and WiFi, wherein a connection is maintained between devices even when there is no data to be transferred. For this reason, however, BLE and other similar communication protocols typically have a limitation on the size of the data packets that can be transferred between connected devices. The second communication device, meanwhile, preferably comprises a WiFi transceiver. Such connection-orientated communication protocols can be used to exchange large quantities of data between devices in a frequent or continuous manner, which the limited data packet size of low energy communication protocols does not allow, or at least makes inefficient. While the first communication device is described herein primarily with regard to the BLE communication protocol, it will be appreciated that any suitable low power communication protocol can be used, such as ANT and ZigBee. Similarly, while the second communication device is described herein primarily with regard to the WiFi communication protocol, it will be appreciated that any suitable connection-orientated communication protocol can be used, such as classical Bluetooth.
In embodiments of the present invention, the control channel (provided by the first communication device of the camera) is used by the computing device to activate (or turn on) the second communication device in the camera. This allows, for example, a connection between the camera and the computing device to be established only when required, e.g. to allow the computing device to access and obtain data from media files stored on the video camera. Once the data has been transferred, the computing device can send another command over the control channel to deactivate (or turn off) the second communication device in the camera.
It is believed that a video camera capable of connecting to a remote computing device using two different wireless communication protocols, one for use as a control channel and the other as a data channel, and activating the communication device used to establish the data channel only upon receipt of a command over the control channel, is new and advantageous in its own right.
Thus, in accordance with another aspect of the present invention, there is provided a method of transmitting data from a digital video camera to a remote computing device, said video camera having: a first wireless communication device capable of communicating using a first communication protocol with a remote computing device; and a second wireless communication device capable of communicating using a second communication protocol with the remote computing device, the method comprising:
The present invention extends to a system, preferably a digital video camera, for carrying out a method in accordance with any of the aspects or embodiments of the invention herein described.
Thus, in accordance with another aspect of the invention, there is provided a system for transmitting data from a digital video camera to a remote computing device, said video camera having: a first wireless communication device capable of communicating using a first communication protocol with a remote computing device; and a second wireless communication device capable of communicating using a second communication protocol with the remote computing device, the system comprising:
As will be appreciated by those skilled in the art, these further aspects of the present invention can, and preferably do, include any one or more or all of the preferred and optional features of the invention described herein in respect of any of the other aspects of the invention, as appropriate. Accordingly, even if not explicitly stated, the system of the present invention may comprise means for carrying out any step described in relation to the method of the invention in any of its aspects or embodiments, and vice versa. For example, the first wireless communication device preferably comprises a BLE transceiver, or a device using a similar low energy communication protocol, such as ANT or ZigBee, wherein data is exchanged using broadcast advertising packets or over a temporarily established connection (that is broken by the master device as soon as the relevant data has been exchanged). The second wireless communication device preferably comprises a WiFi transceiver, or device using a similar connection-orientated communication protocol, such as classical Bluetooth, wherein data is exchanged using an established connection.
In some aspects and embodiments of the invention, a first command is received at the first communication device from the computing device, said first command being an instruction to activate (or turn on) the second communication device. A second command is also received at the first communication device from the computing device, said second command being an instruction to deactivate (or turn off) the second communication device. This allows the second communication device to be deactivated whenever it is not needed for the exchange of data with the computing device, thereby reducing power consumption on the camera.
The first communication device can operate as an “observer” that continually scans for advertising data packets broadcast by a corresponding communication device in the computing device. Thus, in embodiments, the first and second commands are contained in advertising data packets. In such embodiments, a connection is not established between the camera and the computing device by the first communication device, and the control channel formed using the first communication device is unidirectional (from the computing device to the camera).
In other embodiments, the first communication device can operate as a “peripheral” (to the corresponding communication device in the computing device), such that a connection is established between the computing device and the camera. The first and second commands are preferably therefore contained in data packets transmitted over the connection. In such embodiments, the control channel formed using the first communication device is bidirectional. This allows, for example, the camera to notify, or confirm to, the computing device when the second communication device is activated and/or deactivated, e.g. after receipt of the first and/or second command, such that a suitable notification can be provided to the user on the computing device, e.g. on a display device thereof.
In embodiments, the first command to activate the second wireless communication device in the video camera is generated following a user input on the remote computing device. The user input can be a request for the computing device to act as a viewfinder for the camera, i.e. to display the view as seen by the image sensor of the camera, and thus the view that will form the video image data recorded by the camera. The use of the computing device, such as a smartphone, as a viewfinder for the camera allows the user to suitably adjust the position of the camera on their body or piece of sports equipment without needing to see a display device on the camera, if there even is one. The user input can additionally, or alternatively, be a request for the computing device to play (or preview) the video image data in a media file stored in the memory of the camera. In such embodiments, the request can be the selection of a representation of a media file, e.g. as described above through the selection of a thumbnail image, to cause the video image data of the selected media file to be played. Alternatively, the request can be the selection of a representation of a highlight of a media file, e.g. as described above through the selection of a thumbnail image, to cause the video image data associated with the selected highlight to be played (i.e. the video image data between the start time and end time of the highlight). The user input can additionally, or alternatively, be a request for sensor data, e.g. stored in a metadata portion of a media file stored in the memory of the camera.
Additionally, or alternatively, the first command to activate the second wireless communication device in the video camera is generated automatically by the computing device, e.g. by software running thereon. For example, upon execution of the software, e.g. of an application (or app) installed on the computing device to allow the viewing and editing of video image data, the software may synchronise with the camera, so as to obtain data indicative of the one or more media files, and preferably their associated highlight data. This allows the computing device to display representations of the various videos and highlights stored on the camera, e.g. as described above.
The second communication device is activated, i.e. turned on, following the receipt of the first command by the first communication device, and a connection is established between the video camera and the computing device. The established connection is then used by the computing device to send a request for data to the video camera. The requested data will be dependent on the action that triggered the generation of the first command, e.g. on the user input on the computing device.
For example, when the request is for the computing device to act as a viewfinder for the camera, then the request transmitted over the established connection is a request for video image data received from an image sensor of the camera. In such embodiments, the data that is transferred (or streamed) to the computing device is preferably an encoded video stream output from the video processing device of the camera (and generated from data received from the image sensor of the camera), and preferably a video stream encoded using an intraframe compression technique. As discussed above, such an encoded video stream preferably comprises a stream wherein each frame is compressed as a jpeg image. Each jpeg image can be at a resolution of 768×432 px, and the stream may have a frame rate of 30 fps; although it will be appreciated that the values are merely exemplary. The computing device, as discussed above, preferably comprises a decoder to decode the received encoded video stream, and display the resultant video image data on the display device of the computing device.
In other embodiments, when the request is for the computing device to play (or preview) a video or a highlight, then the request transmitted over the established connection is a request for video image data, and optionally audio data, in a media file stored in the memory of the camera. In such embodiments, the data that is transferred (or streamed) to the computing device is preferably an encoded media stream. As discussed above, the encoded media stream can be, for example, an encoded video stream, such that the computing device displays only video image data, i.e. the requested video or highlight video. Alternatively, the encoded media stream can be an interleaved stream comprising, for example, video image data and audio data. In some embodiments, the data that is transferred to the computing device can also include the sensor data for the media file, or for the highlight, such that the sensor data can be displayed simultaneously with the video or highlight as it is played by the computing device.
In other embodiments, when the request is for the computing device to obtain information about the one or more media files stored on the memory of the video camera, then the request transmitted over the established connection can be a request for the number of media files stored on the memory, and optionally, for each of the files, one or more of the following: the time at which the file was created; the size of the file; the duration of the video image data in the file; the number of tags and/or highlights stored in the file; the resolution of the video image data in the file; the frame rate of the video image data in the file; and the aspect ratio of the video image data in the file.
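An exemplary, non-limiting form of the response to such a file information request is sketched below; the key names, file name and values are assumptions made purely for illustration.

    # Illustrative sketch only: an exemplary response to a request for information
    # about the media files stored on the camera. All names and values are
    # assumptions made purely for illustration.

    file_info_response = {
        "file_count": 1,
        "files": [
            {
                "name": "CLIP0001.MP4",           # hypothetical file name
                "created": "2016-07-14T09:31:02Z",
                "size_bytes": 1234567890,
                "duration_s": 843.2,
                "highlight_count": 4,
                "resolution": "3840x2160",
                "frame_rate": 30,
                "aspect_ratio": "16:9",
            },
        ],
    }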
In some aspects and embodiments of the invention, preferably following the receipt of the requested data by the mobile computing device, e.g. after the user closes the viewfinder, finishes playing a video or highlight, closes the app, etc, a second command is generated by the computing device. The second command is transmitted by a communication device in the computing device, e.g. in a broadcast advertising data packet or in a data packet transmitted over an established connection, and received by the first communication device in the camera. The second communication device is deactivated based on receipt of the second command.
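Purely by way of illustration, the camera-side handling of the control channel commands described above, and of a data request received over the established connection, might take the following form. The wifi, ble and camera objects and their method names are assumptions standing in for the camera's actual communication devices and firmware, and do not correspond to any particular BLE or WiFi API.

    # Illustrative sketch only: camera-side handling of the control channel
    # (e.g. BLE) and data channel (e.g. WiFi). The wifi, ble and camera objects
    # and their method names are assumptions standing in for the camera's actual
    # communication devices and firmware.

    def handle_control_command(command, wifi, ble):
        if command == "ACTIVATE_DATA_CHANNEL":        # first command, received over the control channel
            wifi.power_on()
            ble.notify("DATA_CHANNEL_READY")          # optional confirmation on a bidirectional channel
        elif command == "DEACTIVATE_DATA_CHANNEL":    # second command, received over the control channel
            wifi.power_off()

    def handle_data_request(request, camera):
        """Serve a request received over the established data channel connection."""
        if request["type"] == "viewfinder":
            return camera.live_encoded_stream()       # e.g. intraframe (jpeg) encoded frames
        if request["type"] == "play":
            return camera.media_stream(request["file"], request.get("highlight"))
        if request["type"] == "file_info":
            return camera.describe_media_files()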
While the control channel between the camera and the computing device formed by the first communication device using the first wireless communication protocol, e.g. BLE, has been described with respect to the activation and deactivation of the second communication device to form a data channel, the control channel can also be used to provide additional functionality. For example, the control channel can be used to adjust settings of the camera based on commands received from the computing device, which may be generated, for example, based on an input from a user. For example, the control channel can also be used to cause changes or adjustments to one or more of the following settings of the camera: resolution; frame rate; white balance (i.e. to adjust the overall colour tone of the video image data); colour (i.e. to adjust the colour profile of the video image data); gain or ISO limit (i.e. to adjust the sensitivity of the camera in low light environments); sharpness (i.e. to adjust the sharpness of the video image data); and exposure (i.e. to correct for environments with contrasting light conditions).
As described above, in embodiments of the invention, the computing device, e.g. desktop, laptop, smartphone, etc, is arranged to access one or more media files and to play (or preview) at least video image data from the one or more files, e.g. a video, a highlight video, etc. The video image data being played is preferably displayed in a display window on a display device of the computing device. A timeline (or playback bar) is preferably also displayed simultaneously with the video image data. One end of the timeline indicates the start of the video currently being played, with the other end of the timeline indicating the end of the video currently being played. Thus, the timeline can be said to form a schematic representation of the video image data over the duration of the video. The timeline may, in some embodiments, be formed as a straight line. The timeline preferably further includes an icon (or slider) that moves along the timeline as the video is played, so as to show the location along the timeline of the video image data currently being displayed. The icon (or slider) can preferably be manipulated by the user, i.e. by moving the icon along the timeline, so as to allow the user to select the video image data being displayed. The process of the user manipulating the icon in this manner is referred to as “scrubbing”, and is often used in video editing to allow a user to select one or more portions of a video to be retained or deleted in the creation of an edited video.
It has been recognised that sensor data based on data received from one or more sensor devices associated with the camera, and which is contemporaneous with video image data recorded by the camera, can be advantageously used in the scrubbing of a video. Thus, in some aspects and embodiments of the invention, the timeline comprises a representation of the sensor data, and preferably a representation of one or more datasets in respect of a variable obtained from the one or more sensor devices. Indeed, it is believed that the use of contemporaneous sensor data to scrub video image data is new and advantageous in its own right.
Thus, in accordance with another aspect of the present invention, there is provided a method of reviewing video image data collected by a digital video camera during a time period using a computing device, said digital video camera having one or more sensor devices associated therewith, the method comprising:
The present invention extends to a system, such as a computing device, and preferably a mobile computing device, for carrying out a method in accordance with any of the aspects or embodiments of the invention herein described.
Thus, in accordance with another aspect of the invention, there is provided a system for reviewing video image data collected by a digital video camera during a time period using a computing device, said digital video camera having one or more sensor devices associated therewith, the system comprising:
As will be appreciated by those skilled in the art, these further aspects of the present invention can, and preferably do, include any one or more or all of the preferred and optional features of the invention described herein in respect of any of the other aspects of the invention, as appropriate. Accordingly, even if not explicitly stated, the system of the present invention may comprise means for carrying out any step described in relation to the method of the invention in any of its aspects or embodiments, and vice versa. For example, as described above, the one or more sensor devices, preferably a plurality of sensor devices, are preferably used to measure at least one of: movements or other physical parameters of the user, camera, item of sports equipment and/or vehicle, such as position, speed, acceleration, etc (e.g. during the outdoor or sports session while video image data is being recorded); environmental conditions around the user and/or camera, such as temperature, pressure, etc; and physiological properties of the user, such as heart rate, VO2 max, etc. Furthermore, the media file is preferably structured such that the video image data is stored in a payload portion of the file and the sensor data is stored in a metadata portion of the file. The sensor data also preferably includes one or more datasets, and preferably a plurality of datasets, wherein each dataset is in respect of a variable obtained, directly or indirectly, from the one or more sensor devices.
In some aspects and embodiments of the invention, a media file comprising video image data and sensor data is accessed by the computing device. The media file can be stored in a memory of the computing device, or alternatively can be stored in a memory of the video camera and is accessed over a wireless connection. In these latter embodiments, the sensor data is preferably downloaded to a memory of the computing device (such that all the sensor data is present in a memory of the computing device), whereas the video image data is streamed over the wireless connection as required for display on the display device of the computing device.
The video image data, whether streamed or present in a local memory of the computing device, is displayed in a first display window on the display device. Meanwhile, a timeline is simultaneously displayed in a second display window on the display device, together with an icon having a location relative to the timeline to show the video image data currently being displayed in the first display window. In embodiments, the first display window may be separate from the second display window. However, in some embodiments, the display windows may at least partially overlap, and may even be the same, such that the timeline is superimposed over the video image data. The location of the icon relative to the timeline can be changed by the user, through an input on an input device, such that the video image data being displayed in the first display window is changed to correspond to that of the new location of the icon.
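Purely by way of illustration, the following Python sketch (using hypothetical function and object names that do not form part of the invention) shows one way in which the location of the icon relative to the timeline could be mapped to a playback position, so that moving the icon scrubs the video image data shown in the first display window:

def scrub_to(icon_x, timeline_x0, timeline_x1, duration_s, player):
    """Seek the video shown in the first display window to the time implied
    by the horizontal position of the icon on the displayed timeline."""
    # Clamp the icon position to the drawn extent of the timeline.
    icon_x = max(timeline_x0, min(icon_x, timeline_x1))
    # Linear mapping: start of timeline -> 0 s, end of timeline -> full duration.
    fraction = (icon_x - timeline_x0) / (timeline_x1 - timeline_x0)
    target_time_s = fraction * duration_s
    player.seek(target_time_s)  # 'player' is a hypothetical playback object
    return target_time_s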
As discussed above, the timeline comprises a representation of the sensor data, such that the user can view the sensor data, and use this information to select a desired location of the icon (i.e. so as to scrub the video). The representation of the sensor data preferably comprises a representation showing how the data values of a variable obtained, e.g. directly or indirectly, from the one or more sensor devices change over the time period in which video image data was collected. For example, the representation of the sensor data may show how the speed, or the acceleration, or the heart rate of the user, etc, changed over the time period. In embodiments, the data from a plurality of datasets may be displayed simultaneously. Alternatively, the data from only a single dataset may be displayed, together with one or more selectable options that can be used by the user to display the data from another of the datasets.
In embodiments, the icon may be arranged to move along the representation of the sensor data. Alternatively, a second timeline formed as a straight line with one end representing the start of the time period and the other end representing the end of the time period may be displayed in the first or second display windows, or optionally in a third display window, and the icon is preferably arranged to move along this second timeline. In such embodiments, a marker may be displayed on the representation of the sensor data corresponding to the location of the icon on the second timeline, e.g. such that the user can easily identify the relevant data values of the sensor data for the current location of the icon.
In a particular embodiment, the representation of the sensor data comprises a path showing the change in position of the user (or camera, or equipment, dependent on the location of the sensor) over the time period, e.g. based on position data determined by a GNSS receiver. This representation of the path can be superimposed over a representation of a digital map showing the terrain and/or navigable network, e.g. roads, paths, etc, over which the user travelled. As will be appreciated, this representation of the path can be displayed simultaneously with data from one or more other datasets comprising the sensor data.
In further embodiments, and wherein the media file further comprises highlight data, e.g. in a metadata portion of the file as described above, one or more markers can be displayed on the display device together with the one or more timelines, e.g. a straight line, a representation of sensor data, or a combination thereof, wherein the one or more markers are based on the highlight data and identify the location of the one or more times of interest on the one or more timelines. These markers allow the user to easily move the video image data being displayed to the time associated with the marker. The one or more markers can be displayed on, e.g. superimposed over, at least one of the timelines, each marker being located on the timeline based on the time of interest associated with the marker. Alternatively, the one or more markers can be displayed adjacent to the location on at least one of the timelines for the respective time of interest.
The one or more markers can be associated with a single point in time on the timeline, e.g. a tag, or can be associated with a period of time on the timeline, e.g. a highlight having an associated time window. In these latter embodiments, the one or more markers may be displayed at a single point in time on the timeline, e.g. corresponding to the tag time of a highlight, despite each marker still being associated with a highlight, as will be discussed in more detail below. Alternatively, the one or more markers may be displayed together with an indication on the timeline showing the time window of the highlight.
As discussed above, highlights and/or tags can be generated from multiple sources, e.g. manually based on a user input and automatically based on sensor data, such as speed, G-force, rotation (or spin), acceleration, deceleration, vertical speed and heart rate. In preferred embodiments, the one or more markers include an identifier showing the source of the tag and/or highlight associated with each marker.
The one or more markers are preferably selectable by a user, e.g. via a touch selection when the computing device comprises a touchscreen display, so as to allow the user to change the video image data being displayed to a time associated with the marker. For example, when a marker is associated with a single point in time, e.g. a tag, then selection of the marker causes the video image data being displayed to change to the video image data associated with that point in time. In other embodiments, and when a marker is associated with a period of time, e.g. a highlight, the selection of the marker can cause the video image data being displayed to change to the video image data for a time based on the period of time, e.g. a time based on the time window of the highlight. For example, the video image data displayed after selection of the marker may correspond to the video image data for a start time of the highlight or a tag time of the highlight.
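As an illustration only, a minimal sketch of how the selection of a marker could be resolved to a new playback time is given below; the marker attributes are assumed to mirror the tag and highlight information described above, and the names are merely exemplary:

def on_marker_selected(marker, player):
    # A tag marks a single point in time: jump straight to it.
    if marker.kind == "tag":
        player.seek(marker.tag_time_s)
    # A highlight has a time window: jump to a time based on that window,
    # here the start time (the tag time of the highlight could equally be used).
    elif marker.kind == "highlight":
        player.seek(marker.start_time_s)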
It is believed that the use of one or more selectable markers associated with a period of time, e.g. a highlight, and displayed relative to a timeline, together with displayed video data, to scrub video image data is new and advantageous in its own right.
Thus, in accordance with another aspect of the present invention, there is provided a method of reviewing video image data collected by a digital video camera using a computing device, the method comprising:
The present invention extends to a system, such as a computing device, and preferably a mobile computing device, for carrying out a method in accordance with any of the aspects or embodiments of the invention herein described.
Thus, in accordance with another aspect of the invention, there is provided a system for reviewing video image data collected by a digital video camera using a computing device, the system comprising:
As will be appreciated by those skilled in the art, these further aspects of the present invention can, and preferably do, include any one or more or all of the preferred and optional features of the invention described herein in respect of any of the other aspects of the invention, as appropriate. Accordingly, even if not explicitly stated, the system of the present invention may comprise means for carrying out any step described in relation to the method of the invention in any of its aspects or embodiments, and vice versa. For example, as described above, the media file is preferably structured such that the video image data is stored in a payload portion of the file and the highlight data is stored in a metadata portion of the file. Furthermore, a highlight, and preferably each highlight, preferably comprises one or more of the following information: (i) a unique identifier; (ii) a type identifying the type of tag or highlight; (iii) a tag time identifying the time when the tag was generated; (iv) a start time identifying the start of the highlight; (v) an end time identifying the end of the highlight; and (vi) additional information.
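By way of example only, one possible in-memory representation of such a highlight record, using the fields (i) to (vi) listed above, is sketched below in Python; the field names are assumptions and no particular storage format is implied:

from dataclasses import dataclass, field

@dataclass
class Highlight:
    uid: str                 # (i) unique identifier
    kind: str                # (ii) type of tag or highlight, e.g. "manual", "speed"
    tag_time_s: float        # (iii) time at which the tag was generated
    start_time_s: float      # (iv) start time of the highlight
    end_time_s: float        # (v) end time of the highlight
    extra: dict = field(default_factory=dict)  # (vi) additional information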
In some aspects and embodiments of the invention, a media file comprising video image data and highlight data is accessed by the computing device. The media file can be stored in a memory of the computing device, or alternatively can be stored in a memory of the video camera and is accessed over a wireless connection. In these latter embodiments, the highlight data is preferably downloaded to a memory of the computing device (such that all the highlight data is present in a memory of the computing device), whereas the video image data is streamed over the wireless connection as required for display on the display device of the computing device.
The video image data, whether streamed or present in a local memory of the computing device, is displayed in a first display window on the display device. Meanwhile, a timeline is simultaneously displayed in a second display window on the display device, together with the one or more selectable markers, each selectable marker being associated with a highlight and having a location relative to the timeline corresponding to a time within the time window of the highlight. The timeline can be any one or more of the timelines described above, e.g. a straight line, a representation of sensor data, or a combination thereof. In embodiments, the first display window may be separate from the second display window. However, in some embodiments, the display windows may at least partially overlap, and may even be the same, such that the timeline is superimposed over the video image data.
In some aspects and embodiments of the invention, a selection of a marker is received from a user, e.g. using a computer pointing device, such as a computer mouse, via a touch on a touchscreen display, etc, and the video image data being displayed in the first display window is changed to correspond to the video image data for a time based on the time window of the highlight associated with the selected marker.
In embodiments, the selection of the marker can cause the displayed video image data to change to video image data for a time within the time window of the highlight, e.g. the start time of the highlight. In other words, the selection of the marker can cause the highlight video to be played.
Additionally, or alternatively, the selection of the marker can cause a change in the video image data being displayed in the first display window and also a change in the timeline displayed in the second display window. For example, the selection can cause the timeline to be modified so as to “zoom in” to the highlight. In other words, whereas the original timeline provides a schematic representation of a time period corresponding to the duration of the media file, the new timeline provides a schematic representation of a different time period based on the duration of the highlight. This different time period can correspond to the duration of the highlight, but in preferred embodiments is larger than the duration of the highlight. For example, the time period represented by the new timeline can correspond to the duration of the highlight and an additional period of time. This additional period of time can comprise a predetermined amount of time, e.g. 4 seconds, or it can comprise a predetermined proportion of the duration of the highlight, e.g. 50% of the duration of the highlight. In embodiments, one end of the new timeline represents a first predetermined period of time before the start time of the highlight and the other end represents a second predetermined period of time after the end time of the highlight. The first and second predetermined periods of time can be different, but are preferably the same and combine to equal the above described additional period of time. As will be appreciated, the video image data shown in the first display window after selection of the marker will preferably comprise the video image data corresponding to the start of the new timeline, and the user can then cause the video image data corresponding to the new timeline to be played. In embodiments, an indication is displayed on the new timeline showing the current time window for the highlight, and preferably a first indicator showing the start time of the highlight and a second indicator showing the end time of the highlight. The user can preferably interact with the indication, and preferably with the first and/or second indicators, to modify the start time and/or end time of the highlight, e.g. by moving the first and/or second indicators along the new timeline. As discussed above, once modified in this manner, the computing device will transfer data indicative of the change in start and/or end time to the camera, such that the highlight data in the metadata portion of the media file is modified (or updated) accordingly.
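The calculation of the time period represented by the new (“zoomed in”) timeline can, for example, be sketched as follows; the 4 second and 50% figures are merely the examples given above, and the helper name is hypothetical:

def zoomed_timeline_bounds(start_s, end_s, extra_s=4.0, extra_fraction=None):
    """Return the start and end of the new timeline for a highlight window."""
    duration = end_s - start_s
    # Additional period: a fixed amount of time, or a proportion of the highlight.
    additional = duration * extra_fraction if extra_fraction is not None else extra_s
    # Split the additional period equally before the start and after the end.
    pad = additional / 2.0
    return max(0.0, start_s - pad), end_s + pad

# e.g. a 10 s highlight from 120 s to 130 s with 50% padding:
# zoomed_timeline_bounds(120.0, 130.0, extra_fraction=0.5) -> (117.5, 132.5)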
In embodiments, the video image data that is displayed in the first display window following selection of the marker can differ depending on the type of selection. For example, if the selection is made in a first manner, e.g. a single tap on a touchscreen device or a single click using a computer mouse, then the displayed video image data can be caused to change to video image data for a time within the time window of the highlight, e.g. the start time of the highlight. Furthermore, if the selection is made in a second manner, e.g. a long hold on a touchscreen device or a double click using a computer mouse, then the video image data and timeline can be caused to change, i.e. to “zoom in” to the highlight, e.g. as described above.
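A minimal sketch of dispatching on the manner of selection, reusing the hypothetical zoomed_timeline_bounds helper from the preceding sketch, might look as follows:

def on_marker_input(marker, gesture, player, timeline_view):
    if gesture in ("single_tap", "single_click"):
        # First manner: jump to a time within the highlight's time window.
        player.seek(marker.start_time_s)
    elif gesture in ("long_hold", "double_click"):
        # Second manner: "zoom in" to the highlight, changing both the
        # displayed timeline and the displayed video image data.
        lo, hi = zoomed_timeline_bounds(marker.start_time_s, marker.end_time_s)
        timeline_view.set_period(lo, hi)  # hypothetical timeline widget API
        player.seek(lo)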
The new timeline displayed when zooming into a highlight can be displayed together with a further selectable marker, additionally or alternatively to the indication showing the current time window for the highlight. The further selectable marker can, as discussed above with respect to the original timeline, include an identifier showing the source of the highlight associated with the marker. The further marker can be selected by the user, e.g. via a touch selection when the computing device comprises a touchscreen display, so as to allow the user to play (or preview) the video image data associated with the highlight, e.g. to display the video image data corresponding to the start time of the highlight. Additionally, or alternatively, the further marker can be selected by the user to cause a change in the video image data being displayed in the first display window and also a change in the timeline displayed in the second display window. For example, the selection can cause the new timeline displayed in the second display window to be modified so as to “zoom out”, i.e. return, to the original timeline providing a schematic representation of a time period corresponding to the duration of the media file.
In embodiments, the video image data that is displayed in the first display window following selection of the marker can differ depending on the type of selection. For example, if the selection is made in a first manner, e.g. a single tap on a touchscreen device or a single click using a computer mouse, then the video image data associated with the highlight will be played in the first display window. Furthermore, if the selection is made in a second manner, e.g. a long hold on a touchscreen device or a double click using a computer mouse, then the video image data and timeline can be caused to change back, i.e. to “zoom out”, to the original timeline and the original video image data.
As described above, in many aspects and embodiments of the invention, each of one or more media files stored in memory, e.g. on the video camera, comprises video image data and highlight data, wherein the highlight data comprises at least one highlight identifying a time period of interest in the video image data of the respective media file. It has been recognised that these highlights can be used to automatically create a media file with the most interesting parts of recorded video image data, and which, potentially after being reviewed and modified by the user, can then be quickly shared with other users via a video sharing platform, such as YouTube®. It is believed that the use of highlight data in this manner is new and advantageous in its own right.
Thus, in accordance with another aspect of the present invention, there is provided a method of creating a first digital media file, comprising:
The present invention extends to a system, such as a computing device, and preferably a mobile computing device, for carrying out a method in accordance with any of the aspects or embodiments of the invention herein described.
Thus, in accordance with another aspect of the invention, there is provided a system for creating a first digital media file, comprising:
As will be appreciated by those skilled in the art, these further aspects of the present invention can, and preferably do, include any one or more or all of the preferred and optional features of the invention described herein in respect of any of the other aspects of the invention, as appropriate. Accordingly, even if not explicitly stated, the system of the present invention may comprise means for carrying out any step described in relation to the method of the invention in any of its aspects or embodiments, and vice versa. Furthermore, preferably each highlight, in addition to a start time and end time (e.g. defined as an offset from the beginning of the video image data), additionally comprises one or more of: a unique identifier; a type identifying the type of tag or highlight; a tag time identifying the time when the tag was generated; and additional information.
The one or more second media files can be stored in a memory of the computing device, or alternatively can be stored in a memory of the video camera and are accessed over a wireless connection. In these latter embodiments, the highlight data from the one or more second media files is preferably downloaded to a memory of the computing device (such that all the highlight data is present in a memory of the computing device).
In embodiments, the one or more media files, and the highlight data associated therewith, may be accessed upon receipt of an input from the user on the computing device, which can be a mobile computing device such as a smartphone. The input from the user can be the selection of a virtual button presented on the mobile computing device, or alternatively the input could be a predetermined movement of the computing device by the user, e.g. the shaking of the computing device.
In accordance with some aspects and embodiments of the invention, a plurality of highlights are selected from the one or more second media files. The selection of the highlights can be manual, e.g. through the user selecting individual highlights from a representation of the highlight data displayed on the computing device, such as in a manner as described above, but preferably the selection of the highlights occurs at least partially automatically. For example, highlights may be selected from the second media files that have most recently been recorded, and which have preferably been created within a predetermined time of the current time (since it is assumed that the user will typically want to create the first media file relatively soon after recording their activity or sport). In embodiments, these automatically selected highlights can be combined with one or more highlights that have been manually selected by the user. Alternatively, all of the highlights may be selected automatically. In embodiments, a plurality of highlights may be selected up to a predetermined number and/or such that the total duration of the selected highlights, i.e. the sum of the duration of each individual highlight, does not exceed a predetermined time value.
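One possible, purely illustrative, form of the at least partially automatic selection described above is sketched below; the age, count and duration limits are arbitrary example values and the attribute names are assumptions:

def auto_select(highlights, now_s, max_age_s=24 * 3600, max_count=10, max_total_s=60.0):
    """Select recent highlights up to a maximum number and total duration."""
    # Prefer highlights created within a predetermined time of the current time.
    recent = [h for h in highlights if now_s - h.created_at_s <= max_age_s]
    recent.sort(key=lambda h: h.created_at_s, reverse=True)
    selected, total_s = [], 0.0
    for h in recent:
        duration = h.end_time_s - h.start_time_s
        if len(selected) >= max_count or total_s + duration > max_total_s:
            break
        selected.append(h)
        total_s += duration
    return selected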
The selected highlights, whether selected manually, automatically or a combination thereof, are placed into an ordered sequence. The sequence may be ordered based on the creation date of the one or more second media files, such that the oldest recorded highlights are positioned first in the ordered sequence. Alternatively, the sequence may be based on information in the highlight data, or in sensor data, e.g. such that the highlights generated from more extreme moments are positioned first in the ordered sequence or possibly at periodic intervals in the sequence.
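Two of the ordering strategies mentioned above could, for instance, be sketched as follows (the "peak_value" field standing in for whatever measure of extremity is stored with a highlight; both names are assumptions):

def order_by_recording_date(selected):
    # Oldest recorded highlights are positioned first in the ordered sequence.
    return sorted(selected, key=lambda h: h.created_at_s)

def order_by_extremity(selected):
    # Highlights generated from the most extreme moments are positioned first.
    return sorted(selected, key=lambda h: h.extra.get("peak_value", 0.0), reverse=True)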
In embodiments, the computing device can display a representation of the selected highlights in the ordered sequence. The representation can include a thumbnail image, which is preferably a frame from the video image data contained in the respective media file for the time window defined by the respective highlight. The representation can also include information, preferably superimposed over the thumbnail image, identifying one or more of: the origin of the highlight, e.g. whether the highlight is a manual highlight or an automatic highlight; the type of automatic highlight; a value associated with the highlight, such as, for an automatic highlight, the value of an extremum that led to the creation of the highlight; and a duration of the highlight.
Preferably, the user is able to select the representation of a highlight to play (or preview) the video image data corresponding to the highlight. Additionally, or alternatively, the user is able to: add new highlights, e.g. at any desired position in the ordered sequence; delete an existing selected highlight; and move an already existing selected highlight to another position in the ordered sequence. This allows, for example, the user to modify the highlights that are selected and/or to modify the position in the order of any highlights. Additionally, or alternatively, the user is able to change the start and/or end times of a selected highlight, e.g. by selecting the representation so as to play the highlight, and manipulating first and second indicators on a displayed timeline, wherein the first indicator is representative of the start time of the highlight and the second indicator is representative of the end time of the highlight. As will be appreciated, when the second media files are stored remotely from the computing device, e.g. in a memory of the video camera, any changes to the highlight data are sent to the video camera, such that the highlight data for the relevant second media file can be updated as required.
In embodiments, once a user is satisfied with the selected highlights and/or sequence order, a third media file is obtained for each highlight in the ordered sequence, wherein each third media file comprises video image data obtained from the second media file based on the start time and end time of the associated highlight. The third media files may be obtained by the computing device from the video camera, e.g. when the second media files are stored in a memory of the camera. Alternatively, in embodiments where the second media files are stored in a memory of the computing device, then the third media files can be obtained from the memory of the computing device. Preferably, however, a request is sent from the computing device to the video camera over a wireless communication channel.
Each third media file comprises video image data obtained from a second media file based on the start time and end time of the associated highlight. The third media files can comprise solely video image data, e.g. if requested by the user, such that the user can add music to the first media file, or can comprise video image data and audio data. In either event, the third media files will typically be obtained by processing the relevant second media file in a transcoding operation. In other words, the payload portion of the second media file is demultiplexed, and then the relevant portion of the one or more resultant encoded streams (based on the start and end times of the highlight) are multiplexed and added to a new container, which is then closed to create the third media file. In embodiments, the transcoding operation, which preferably occurs on the video camera, can further include the decoding of at least the encoded video stream (preferably as stored in the video track of the file, and not the video image data in the text track of the file), and the subsequent re-encoding of the video image data, such that all third media files have the same properties, preferably resolution and frame rate. This allows, for example, the first media file to be formed simply by the concatenation of the third media files ordered in accordance with the sequence, without the need to perform a further transcoding operation on the computing device.
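As a purely illustrative stand-in for the camera's on-board transcoder, the following sketch uses the ffmpeg command-line tool to cut the highlight window out of a second media file and re-encode it to a common resolution and frame rate; the actual pipeline used on the camera may of course differ:

import subprocess

def make_highlight_clip(src, dst, start_s, end_s,
                        width=1920, height=1080, fps=30, include_audio=True):
    """Extract [start_s, end_s] from src and re-encode to common properties."""
    cmd = ["ffmpeg", "-y", "-i", src,
           "-ss", str(start_s), "-to", str(end_s),     # highlight time window
           "-vf", f"scale={width}:{height}",           # common resolution
           "-r", str(fps),                             # common frame rate
           "-c:v", "libx264"]
    if not include_audio:
        cmd.append("-an")                              # video image data only
    cmd.append(dst)
    subprocess.run(cmd, check=True)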
It is believed that the creation of multiple highlight videos with the same properties is new and advantageous in its own right.
Thus, in accordance with another aspect of the present invention, there is provided a method of creating a plurality of first digital media files from one or more second digital media files, each second digital media file comprising video image data and highlight data identifying one or more times of interest in the video image data, said highlight data comprising one or more highlights each having a start time and end time with respect to the video image data, the method comprising:
The present invention extends to a system, such as a digital video camera, for carrying out a method in accordance with any of the aspects or embodiments of the invention herein described.
Thus, in accordance with another aspect of the invention, there is provided a system for creating a plurality of first digital media files from one or more second digital media files, each second digital media file comprising video image data and highlight data identifying one or more times of interest in the video image data, said highlight data comprising one or more highlights each having a start time and end time with respect to the video image data, the system comprising:
As will be appreciated by those skilled in the art, these further aspects of the present invention can, and preferably do, include any one or more or all of the preferred and optional features of the invention described herein in respect of any of the other aspects of the invention, as appropriate. Accordingly, even if not explicitly stated, the system of the present invention may comprise means for carrying out any step described in relation to the method of the invention in any of its aspects or embodiments, and vice versa. For example, the selection of a plurality of highlights is preferably received by the video camera from the computing device over a wireless communication channel, and the plurality of first media files that are generated are preferably transmitted to the computing device from the video camera. The highlight data for the second media file is also preferably stored in a metadata portion of the file, whereas the video image data is stored in the payload portion of the file.
In embodiments, and as discussed above, the transcoding, i.e. at least demultiplexing and multiplexing streams, and optionally decoding and re-encoding of data, is performed such that the plurality of first digital media files have the same properties. The same properties can be the same video properties, e.g. at least the same resolution and frame rate, and/or the same audio properties.
Accordingly, in some aspects and embodiments of the invention, the computing device receives, or otherwise obtains, a plurality of third digital media files. These third digital media files correspond to the first digital media files mentioned above, and which have therefore preferably been created such that they each have the same properties, e.g. resolution, frame rate, etc.
Thus, in embodiments, the creation of the first digital media file from the plurality of third digital media files in accordance with the ordered sequence can comprise the concatenation of the plurality of third media files, e.g. since the video properties of each of the files are the same. In other embodiments, the user may decide to add music and/or a graphical overlay showing sensor data (e.g. as obtained from the sensor data in the metadata portion of the second media files, again based on the start and end times of the highlight). In these latter embodiments, the plurality of third digital media files undergo a further transcoding operation, albeit preferably in the computing device rather than the video camera, so as to create the first digital media file.
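Because the third media files share the same properties, the simple concatenation referred to above can be illustrated, for example, with ffmpeg's concat demuxer operating in stream-copy mode (again only a sketch of the idea, not a description of any particular device firmware):

import os, subprocess, tempfile

def concatenate(clip_paths, dst):
    """Concatenate clips with identical properties without re-encoding."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        for path in clip_paths:                     # ordered sequence of clips
            f.write(f"file '{os.path.abspath(path)}'\n")
        list_path = f.name
    try:
        subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                        "-i", list_path, "-c", "copy", dst], check=True)
    finally:
        os.remove(list_path)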
The first digital media file, once created, is preferably then uploaded by the user to a video sharing website, such as YouTube®, e.g. using a communication device of the computing device, such as WiFi or a mobile telecommunications transceiver. In other words, the first digital media file is preferably sent from the computing device to a remote server computer.
The present invention also extends to embodiments in which video image data obtained from multiple video cameras may be used. It has been recognised that it would be desirable for a user to be presented with the opportunity to view video footage captured by different video cameras, but relating to the same time period. Such footage may provide a different perspective of the same time of interest, i.e. highlight. In accordance with some further aspects and embodiments, the present invention provides the ability to allow a user to select footage relating to one or more times of interest, e.g. highlights, from one or more different video cameras for viewing, in addition, or as an alternative, to viewing such footage from their own camera. Alternatively or additionally, in accordance with its further aspects and embodiments, the present invention enables a first media file to be created based on footage relating to a time of interest captured by multiple video cameras. For example, the footage might be obtained from video cameras associated with different members of a group of skiers traversing a ski run. Alternatively, footage might be obtained from a static camera recording a ski jump, and a camera associated with the person performing the jump.
Thus, in accordance with another aspect of the present invention there is provided a method of creating a first digital media file, comprising:
The present invention extends to a system, such as a computing device, and preferably a mobile computing device, for carrying out a method in accordance with any of the aspects or embodiments of the invention herein described. It is envisaged that any computing device may be used, such as a desktop computing device. However, embodiments in which the computing device is a mobile computing device, such as a tablet or phone device, are advantageous, in that this permits editing to be carried out “on the fly”, e.g. where an activity is being performed. Nonetheless, it will be appreciated that the steps of the method in accordance with these further aspects and embodiments of the invention, in which overlapping highlights are obtained, may be carried out by any one or ones of a plurality of devices involved in the system of capturing and using the video image data, e.g. any one of a plurality of video cameras and/or a computing device, whether or not a mobile device.
In accordance with another aspect of the invention, there is provided a system for creating a first digital media file, the system comprising:
As will be appreciated by those skilled in the art, these further aspects of the present invention can, and preferably do, include any one or more or all of the preferred and optional features of the invention described herein in respect of any of the other aspects of the invention, as appropriate. Accordingly, even if not explicitly stated, the system of the present invention may comprise means for carrying out any step described in relation to the method of the invention in any of its aspects or embodiments, and vice versa. Furthermore, preferably each highlight, in addition to a start time and end time (e.g. defined as an offset from the beginning of the video image data) additionally comprises one or more of: a unique identifier; a type identifying the type of tag or highlight; a tag time identifying the time when the tag was generated; and additional information.
Each one of the plurality of second digital media files comprises video image data relating at least to the same period of time. Preferably the video image data relates to the same event. In this way, the video image data may provide multiple perspectives of the same event. The event may be any type of event. In preferred embodiments the event is an activity performed by one or more persons. For example, the event may be a sporting activity, such as skiing. It will be appreciated that the video image data from the different second digital media files need not have the same start time and end time, provided that there is at least some temporal overlap between the video image data from the different files, such that they include video image data relating to at least the same period of time.
Each of the second digital media files is preferably obtained from a different source of video image data. Each source of video image data is preferably a video camera. Where the second digital media files are obtained from a set of a plurality of different video cameras, and relate to an event which is an activity performed by one or more persons, one or more of the set of video cameras may be associated with a person participating in the activity. Such video camera(s) are mobile cameras. One or more of the set of video cameras may be associated with each of one or more persons participating in the activity. The or each such camera may be carried, e.g. worn, by a person. For example, the cameras may be associated with each member of a group of skiers, runners, cyclists, etc. Alternatively or additionally, one or more of the video cameras may be a static camera. For example, the static video camera may be set up to record a ski jump. Where a static camera is used, and the event is an activity performed by one or more persons, preferably at least one camera associated with a person participating in an activity is also used. By way of further example, a single user, e.g. performing a ski run or mountain biking course, may have a plurality of cameras organised to record an event, e.g. with one or more cameras being carried or worn by the user (i.e. mobile cameras) and/or one or more cameras set up at a certain location along the course (i.e. static cameras). In other examples, a plurality of users, e.g. who are running, biking or skiing the same course, can each carry or wear one or more cameras.
In embodiments in which the method is performed by a computing device, and each source of video image data is a video camera, the respective second media files can be stored in a memory of the computing device, or more preferably are stored in a respective memory of the or each, or a particular (e.g. master as described below) video camera and accessed over a wireless connection. In the latter case, highlight data from the one or more second media files may be downloaded to a memory of the computing device (such that all the highlight data is present in a memory of the computing device).
As in the earlier embodiments, in these multiple camera (or “multi-cam”) aspects and embodiments of the invention, the one or more second media files, and the highlight data associated therewith, may be accessed upon receipt of an input from the user on the computing device, which can be a mobile computing device such as a smartphone. The input from the user can be the selection of a virtual button presented on the mobile computing device, or alternatively the input could be a predetermined movement of the computing device by the user, e.g. the shaking of the computing device. The input may initiate the automatic performance of the method. Thus, a predetermined movement, or other input by the user, may initiate the automatic creation of a first digital media file in accordance with the invention in these further aspects or embodiments. These embodiments are advantageous, in that the user need only perform a simple action, e.g. shaking the computing device, to prompt the device to automatically generate a first media file comprising video image data from multiple sources, e.g. video cameras, and hence perspectives, relating to a period of particular interest in the footage, i.e. a highlight.
The number of second digital media files which are accessed and considered in obtaining overlapping second highlights in performing the method of the invention in these further aspects or embodiments may be selected as desired. For example, all available such second digital media files may be used, e.g. from video image data sources forming part of a network as described below, a predetermined number thereof and/or a number selected such that the total duration of the video image data does not exceed a predetermined time value. The user may be able to select the number of data sources, e.g. video cameras, from which data should be used. This may be an option that may be selected when providing an input to initiate the process of accessing the second digital media files to create the first digital media file. This is similar to the way in which the number of highlights may be limited when creating a first digital media file in the earlier embodiments of the invention.
One or more of the plurality of second digital media files comprise highlight data identifying one or more times of interest in the video image data of the respective second digital media file. The highlight data comprises one or more highlights each having a start time and end time with respect to the video image data. The start time and end time of a first highlight in a given one of the second digital media files is used to obtain one or more second highlights in different second digital media files, wherein the one or more second highlights temporally overlap with the first highlight. Thus, the start time and end time of a first highlight in a given one of the second digital media files is used to obtain one or more second highlights, where each of the second highlights is in a different second digital media file (each different second digital media file also being different to the given one of the second digital media files).
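The identification of temporally overlapping second highlights can be illustrated with the following minimal sketch, in which each highlight is assumed to expose start and end times as described above and the container and attribute names are merely exemplary:

def overlaps(first, second):
    # Two highlights overlap if each starts before the other ends.
    return (first.start_time_s < second.end_time_s and
            second.start_time_s < first.end_time_s)

def find_overlapping(first_highlight, other_files):
    """Return (file, highlight) pairs that temporally overlap the first highlight."""
    matches = []
    for media_file in other_files:
        for h in media_file.highlights:
            if overlaps(first_highlight, h):
                matches.append((media_file, h))
    return matches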
The given one of the second digital media files to which the first highlight relates may be any suitable one of the plurality of second digital media files. In preferred embodiments, the second digital media files comprise video image data obtained from each one of a plurality of video cameras, and the given one of the second digital media files is a file comprising video image data obtained from a master video camera from the plurality of video cameras. The method may thus further comprise designating one of the video cameras to be a master video camera. The ways in which a master video camera may be selected will be described in more detail below.
The given second digital media file may comprise multiple highlights, or may comprise only a single highlight. Where the file comprises multiple highlights, the identification, e.g. selection of the first highlight in the given one of the second digital media files may proceed in any suitable manner. The identification can be manual, e.g. through the user selecting an individual highlight from a representation of the highlight data displayed on the computing device, such as in a manner as described above. However, in other embodiments, the selection of the highlight occurs automatically. For example, the highlight may be a most recent highlight of the second media file, or may be selected randomly from highlights of the second media file where the file comprises multiple highlights, or in accordance with any suitable algorithm for determining for which highlights temporally overlapping highlights are to be obtained. These options are discussed in more detail below.
It is envisaged that each of the plurality of second digital media files may comprise such highlight data. However, this need not be the case. It is sufficient that a given one of the files contains such data, which may then enable a temporally overlapping highlight to be generated in one of the other second digital media files. The step of obtaining one or more second highlights using the start time and end time of the first highlight may comprise causing one or more second highlights to be generated in the or each applicable different second digital media file based on the start time and end time of the first highlight. Such a step may be carried out by a computing device. The computing device may not necessarily perform the step of generating the second highlight, in particular where a mobile device is used. The step of causing a highlight to be generated may comprise sending an instruction to a remote device to generate the highlight, e.g. to a video camera associated with the applicable second digital media file. The step of causing a second highlight to be generated comprises causing highlight data associated with the applicable second digital media file to be generated, the highlight data being indicative of a highlight having a start time and an end time with respect to the video image data of the second digital media file. In these embodiments, the start time and end time of a generated second highlight may typically be caused to correspond to the start time and end time of the first highlight.
The step of causing one or more second highlights to be generated may be triggered in response to a user input. For example, the method may comprise a step of receiving a selection of the first highlight from a user, and causing one or more second highlights in different second digital media files to be generated based on the start time and end time of the first highlight. A user may select a first highlight, e.g. on the computing device, such as a mobile computing device, to indicate a wish to see corresponding footage relating to the same time period, obtained from one or more other video cameras.
Alternatively or additionally, one or more second highlights may be caused to be generated automatically. This may be particularly appropriate where the editing process involved in creating the first digital media file is to be carried out automatically, e.g. once triggered by an appropriate user input. Any suitable algorithm may be used to determine which first highlights are used to create corresponding overlapping second highlights. For example, second highlights may only be generated in respect of a certain type of first highlight, e.g. a manually created highlight, or a highlight generated using sensor data. Such types of highlights may be assumed to be of greater interest to a user.
In some embodiments, the first highlight and the one or more second highlights that temporally overlap with the first highlight have been generated independently of one another. In other words, the one or more second highlights have been created in the different second digital media files, and happen to overlap the first highlight in the given one of the second digital media files. In these embodiments each second digital media file will already comprise highlight data. The step of obtaining a second highlight using the start time and end time of a first highlight may then comprise identifying the one or more second highlights using the start time and end time of the first highlight. The first and second highlights may have been created in any of the manners described in relation to the earlier aspects and embodiments of the invention. It will be appreciated that where highlights have been independently generated in footage captured, e.g. by multiple cameras, and relating to the same time period, this may be strongly indicative that the relevant time period is indeed of interest. For example, if two or more video cameras have identified the same portion of an event as being of interest, and justifying a highlight, it is likely that this portion of the event is of considerable importance. In embodiments in which the first and second highlights have been independently created, it will be appreciated that the start time and the end time of the highlights may not match. Thus, while there is a time period of overlap, the first and second highlights may additionally include times before and/or after the overlapping time period. The step of obtaining one or more second highlight may, in these embodiments, comprise receiving a selection of the first highlight from a user, and identifying the one or more second highlights using the start time and end time of the first highlight. For example, the user may provide a selection of a first highlight for which corresponding footage from other cameras is desired to be obtained, via a computing device, such as a mobile computing device. The method may comprise displaying to the user in association with the first highlight, an indication that one or more overlapping second highlights exist. Such an indication may be displayed automatically, or in response to an input by the user, e.g. tapping a representation of the first highlight.
The method may involve maintaining a list of overlapping highlights from available media files.
It will be appreciated that where multiple second highlights are obtained, the step of obtaining each second highlight may involve obtaining the highlights in the same, or differing manners, from the options set out above, e.g. causing a second highlight to be generated, or identifying an already generated second highlight.
The start and end times of a second highlight may or may not correspond to the start and end time of the first highlight, provided that there is temporal overlap between the highlights. The method may comprise carrying out a step of adjusting the start and/or end times of the first highlight and the or each second highlight as required so that each of the first and second highlights is in respect of the same time period. This may be appropriate in particular where the first and second highlights have been independently generated, as there is then a greater likelihood that the start times and end times will differ. For example, the start and end times of each second highlight may be caused to conform to the start and end times of the first highlight, or the start time of all highlights may be caused to conform to the earliest start time among the highlights, and the end time to the latest end time among the highlights, etc. Any suitable technique may be used to cause the start time and end time of each highlight to match. It will be appreciated that it may not always be necessary to cause the start and end times of the first and second highlights to match, depending upon the manner in which the highlight data is to be used. Where a second highlight has its start and/or end time adjusted, it will be appreciated that the second highlight referred to in the remainder of the method may be the second highlight after any such adjustment (or other editing) has been carried out, rather than the initially obtained second highlight.
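Two of the adjustment strategies mentioned above are sketched below purely for illustration; either could be used to bring independently generated highlights into respect of the same time period (the attribute names follow the earlier sketch and are assumptions):

def conform_to_first(first, second_highlights):
    # Force each second highlight to the time window of the first highlight.
    for h in second_highlights:
        h.start_time_s, h.end_time_s = first.start_time_s, first.end_time_s

def conform_to_union(first, second_highlights):
    # Alternatively, use the earliest start and latest end among all highlights.
    all_highlights = [first] + list(second_highlights)
    start = min(h.start_time_s for h in all_highlights)
    end = max(h.end_time_s for h in all_highlights)
    for h in all_highlights:
        h.start_time_s, h.end_time_s = start, end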
The method comprises obtaining a selection of at least one of the first highlight and the one or more second highlights which are to be used in creating the first digital media file. The selection may be obtained manually, i.e. from a user, or automatically. The method may comprise displaying the first highlight and the or each second highlight to a user to enable the user to make the selection of at least one of the first highlight and the one or more second highlights. The selection may be of a single highlight, e.g. the first highlight, or, more usually, a second highlight, or may be of a plurality of highlights, which may or may not include the first highlight. For example, where the selection is made by a user, the user may choose to replace the first highlight with a second highlight, i.e. one of the highlights obtained from a different source, or may choose to retain multiple overlapping highlights, which may include the original first highlight, e.g. from their own camera, and one or more second highlights from other camera(s).
The method may involve obtaining highlight data associated with video image data from a single source of video image data as described in relation to the earlier aspects and embodiments, e.g. a user's video camera, and providing an indication as to where overlapping highlights may exist. The user may then view the overlapping highlights, and decide whether to add any of them to the original “story” (or list of highlights) or use them to replace any of the highlights in the original story in order to create a new story. In other embodiments, the selection of highlights to include in the story may be made automatically, for example, selecting a different one of multiple video cameras from which to obtain an overlapping highlight for replacing a first highlight at random each time.
In a similar manner to the earlier aspects and embodiments of the invention, where multiple highlights are selected, the method may comprise placing the selected highlights into an ordered sequence. The sequence may be ordered in any desired manner. This may be done manually or automatically, or some combination thereof. Preferably the ordering is performed automatically. Sequencing may be based on various measures, alone, or in combination, e.g. based on a source of the video image data, etc. Where highlights have different start and/or end times, the ordering may be based at least partially on a timing of the highlights. Alternatively or additionally, the sequence may be based on information associated with the highlights, or any sensor data associated therewith, as described in earlier embodiments.
It is envisaged that the user may be presented with a representation of the selected highlights. For example, the representation may include a thumbnail image as in the earlier embodiments. The highlights may be represented in an ordered sequence. The user may be able to (re)order the highlights, perform some editing, e.g. to adjust the start and/or end times of highlights, etc. The user may be able to perform any of the actions described in relation to the earlier aspects and embodiments of the invention, involving obtaining a first digital media file based on multiple highlights from a single source. Where the second digital media files are stored remotely from the computing device, e.g. in the memory of one or more of the video cameras, any changes to the highlight data are sent to the respective video camera or cameras, such that the highlight data for the relevant second media file can be updated as required.
The method involves obtaining a third media file for each of the one or more selected highlights, each third media file comprising video image data obtained from the respective second digital media file based on the start time and the end time of the associated highlight. This step is carried out once the user is satisfied with the selected highlights and/or sequence order, where the user is able to provide input in this regard. The third media files may be obtained by the computing device from a video camera associated with the applicable second digital media file, e.g. when the second media files are stored in a memory of the camera. In other embodiments, second digital media files obtained from a plurality of different video cameras are stored in a memory of a given one of the cameras, i.e. a master camera, and the method may comprise the computing device obtaining third media files in respect of multiple video cameras from the given camera. Alternatively, in embodiments where the second media files are stored in a memory of the computing device, then the third media files can be obtained from the memory of the computing device. Preferably, however, a request is sent from the computing device to one or more video cameras (i.e. a master camera, or a plurality of cameras where the second digital media files are stored by individual cameras) over a wireless communication channel.
The method then comprises creating the first digital media file using at least the one or more third digital media files. The method may comprise creating the first digital media file from the plurality of third digital media files in accordance with an ordered sequence where the highlights have been put into such a sequence. It will be appreciated that the first digital media file is created from a plurality of digital media files. In addition to the or each third digital media file for each of the one or more selected highlights that is obtained in accordance with the invention, the first digital media file may be created using one or more fourth digital media files. This may be the case, for example, although not by limitation, where only a single third media file is obtained based on a selection of a single highlight. A fourth digital media file may be any suitable digital media file including video image data obtained from a second digital media file based on the start time and the end time of an associated highlight, which highlight is not one of the defined first or one or more second overlapping highlights.
A first media file may be created in which the or each of the third digital media files (or any fourth digital media file included) are concatenated, e.g. ordered in accordance with the determined sequence.
Each third media file comprises video image data obtained from a second media file based on a start time and end time of a highlight. While the start and end times of each of the selected highlights may be identical, this need not be the case, provided that there is at least some overlap. The video image data obtained from the second media file will typically be in respect of the entire duration of the highlight defined between the start time and the end time. However, it is envisaged that the video image data may only be obtained for a part of the highlight, provided that this includes a portion of overlap with the first highlight. The third media files can comprise solely video image data, e.g. if requested by the user, or can comprise video image data and audio data. In either event, the third media files will typically be obtained by processing the relevant portion of the second media file in a transcoding operation. In other words, the payload portion of the second media file is demultiplexed, and then the relevant portion of the one or more resultant encoded streams (based on the start and end times of the relevant highlight) are multiplexed and added to a new container, which is then closed to create the third media file. In embodiments, the transcoding operation, which preferably occurs on a video camera, can further include the decoding of at least the encoded video stream (preferably as stored in the video track of the file, and not the video image data in the text track of the file), and the subsequent re-encoding of the video image data, such that all third media files have the same properties, preferably resolution and frame rate. This allows, for example, the first media file to be formed simply by the concatenation of the third media files ordered in accordance with the sequence, without the need to perform a further transcoding operation on the computing device. The transcoding operation may occur on the video camera which obtained the video image data to which the second media file relates, or another video camera which stores the second media file, e.g. a master video camera. Of course, in other embodiments, where the computing device stores the second media files, the computing device may perform the transcoding operation. However, in particular where the computing device is a mobile device, it is preferred that transcoding is carried out by a video camera or cameras. Where the first digital media file is created additionally using one or more fourth digital media files, the or each fourth digital media file is preferably similarly arranged to have the same properties as the third digital media files. Thus, each fourth digital media file is preferably obtained by processing a relevant portion of a second digital media file in a transcoding operation. The fourth digital media file may therefore be obtained in the same manner described in relation to any of the embodiments involving a third digital media file. The only difference is that the video image data obtained from the second digital media file, in this case, while still obtained based on the start time and end time of a highlight, does not relate to a highlight that overlaps the first highlight.
In embodiments, and as discussed above, the transcoding, i.e. at least demultiplexing and multiplexing streams, and optionally decoding and re-encoding of data, is performed such that the plurality of third (and where used, fourth) digital media files have the same properties. Thus, in embodiments, the creation of the first digital media file from the plurality of third digital media files can comprise the concatenation of the plurality of third digital media files (and where present, fourth digital media files). The user may decide to add music and/or a graphical overlay showing sensor data as in the earlier embodiments. In these latter embodiments, the plurality of third digital media files (and, where present, fourth digital media files) undergo a further transcoding operation, albeit preferably in the computing device rather than a video camera, so as to create the first digital media file.
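The concatenation itself may be illustrated, again purely by way of non-limiting example and assuming ffmpeg is available, as follows; because the clips have already been transcoded to share the same properties, they can be joined without re-encoding (the file names are hypothetical).

```python
import subprocess
import tempfile

def concatenate_clips(ordered_clips, out_file):
    """Concatenate already-transcoded clips (identical codec, resolution and
    frame rate) into a single media file without re-encoding, using ffmpeg's
    concat demuxer."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        for clip in ordered_clips:
            f.write(f"file '{clip}'\n")
        list_path = f.name
    subprocess.run(
        ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
         "-i", list_path, "-c", "copy", out_file],
        check=True,
    )

# e.g. concatenate_clips(["highlight_cam1.mp4", "highlight_cam2.mp4"], "story.mp4")
```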
The first digital media file, once created, may be uploaded by the user to a video sharing website, such as YouTube®, e.g. using a communication device of the computing device, such as WiFi or a mobile telecommunications transceiver. In other words, the first digital media file is preferably sent from the computing device to a remote server computer.
In any of the embodiments relating to the multi-cam implementation of the invention in its further aspects, the method may be carried out by a mobile computing device, such as a smartphone, and the second digital media files are preferably obtained by a plurality of video cameras. The method may be implemented via an app run on the mobile computing device. Preferably a network is created between the mobile computing device and each video camera. The network is preferably established automatically, e.g. upon receipt of a request by a user. This enables the mobile computing device to access the second digital media files of the video cameras as required to enable the method to be performed. The method may comprise designating one of the plurality of video cameras as a “master” video camera, and the or each other video camera as a client (or “slave”) camera. The first highlight may then be a highlight in the video image data of the second digital media file associated with the master camera. The master camera is preferably a camera that is connected to the mobile computing device. The other (client) video camera(s) forming part of the network may then be other camera(s) in proximity to the master camera. The client video camera(s) may be connected to the mobile computing device only through the master video camera.
The method of creating the network may comprise a first video camera connecting to the mobile computing device that is arranged to perform the method of creating the first digital media file (e.g. that is running an applicable app). The first video camera is then designated as a master video camera. One or more additional video cameras in proximity to the master camera may then be connected to the network as client video cameras. For example, the client video cameras may be any cameras in proximity to the master camera that are discoverable by the master camera and/or to which the master camera is discoverable. In order to become a client video camera, it may be necessary that a multi-cam operating mode has been enabled on the video camera. Each potential client camera may be arranged to request connection to any master camera determined by the potential client camera to be present within a detectable range of the camera, and/or a master camera may request connection to any potential client camera determined by the master camera to be present within a detectable range of the camera. A potential client camera may be a camera on which the multi-cam mode has been enabled. Connection between master and client cameras may be automatic, or may require a user of the client and/or master camera to confirm acceptance of the connection. Any suitable wireless connection may be used; for example, a wireless communication protocol such as classical Bluetooth, WiFi, or more preferably, Bluetooth Low Energy (BLE) as described earlier, or combinations thereof may be used.
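The following is a schematic, non-limiting sketch of the pairing flow described above; it merely models the roles and the acceptance step and does not implement any particular wireless protocol (all class, function and field names are hypothetical).

```python
from dataclasses import dataclass, field

@dataclass
class Camera:
    """Schematic model of a camera taking part in the ad hoc multi-cam network."""
    name: str
    multicam_enabled: bool = False     # multi-cam operating mode enabled on the camera
    role: str = "unpaired"             # "master", "client" or "unpaired"
    clients: list = field(default_factory=list)

def user_confirms(cam: Camera) -> bool:
    # Placeholder for a confirmation prompt on the master and/or client camera.
    return True

def designate_master(camera: Camera) -> Camera:
    # The first camera to connect to the mobile computing device becomes the master.
    camera.role = "master"
    return camera

def connect_clients(master: Camera, cameras_in_range, auto_accept=True):
    # Any discoverable camera in range with multi-cam mode enabled may join as a
    # client, either automatically or after user confirmation.
    for cam in cameras_in_range:
        if cam is master or not cam.multicam_enabled:
            continue
        if auto_accept or user_confirms(cam):
            cam.role = "client"
            master.clients.append(cam)
    return master.clients
```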
It will be appreciated that these techniques may enable an ad hoc network to be set up to enable multi-cam footage of an event to be created. For example, after traversing a ski run, one of a group of skiers may connect their camera to their smartphone, so as to become a master camera. Other members of the group may then allow their cameras to connect to the master camera as client cameras. A digital media file containing footage relating to one or more times of interest in the ski run from one or more of the cameras may then be created.
In some cases, it has been recognised that it would be desirable to create a first digital media file, which, rather than including a plurality of highlights one after the other, includes multiple highlights obtained from different sources shown simultaneously, or otherwise in a combined manner. Such methods may also be implemented using a multi-cam network as described above.
In accordance with a further aspect of the invention, there is provided a method of creating a first digital media file, comprising:
In accordance with a further aspect of the invention, there is provided a system for creating a first digital media file, comprising:
As will be appreciated by those skilled in the art, these further aspects of the present invention can, and preferably do, include any one or more or all of the preferred and optional features of the invention described herein in respect of any of the other aspects of the invention, as appropriate. Accordingly, even if not explicitly stated, the system of the present invention may comprise means for carrying out any step described in relation to the method of the invention in any of its aspects or embodiments, and vice versa.
Preferably the steps of obtaining the selection of the highlights and of the editing effect, and of creating the first digital media file, are carried out by a computing device, and preferably a mobile computing device, such as a smartphone. The step of obtaining the overlapping highlights may similarly be carried out by the computing device, which then provides the selection of the highlights (whether made manually or automatically) to the camera.
In these further aspects of the invention, the steps of accessing the plurality of second digital media files, and using the start time and end time of a first highlight in a given one of the second digital media files to obtain overlapping second highlights, may be carried out in the manner described in relation to the earlier aspects and embodiments of the invention. Thus, the highlights may be selected automatically or manually i.e. by a user input. In embodiments in which the method is performed by a video camera, the selection of the at least two highlights may be received from a computing device.
In accordance with these further aspects of the invention, a selection of at least two of the first highlight and the one or more second highlights (i.e. the overlapping highlights) is obtained. Thus, the aim of this technique is to be able to combine overlapping highlights, whether a first highlight and one or more second highlights, or multiple second highlights, so that the user may view the highlights in a simultaneous or combined manner. The difference occurs in the manner in which the obtained selection of highlights is then used to create a first digital media file.
The method comprises the step of obtaining a selection of an editing effect to allow combined or simultaneous viewing of the selected highlights. The selection of the editing effect may be obtained manually, e.g. by means of a user input. Alternatively, the selection may be an automatic selection of a predefined effect, e.g. a default effect. An editing effect to allow simultaneous viewing of the highlights may be an editing effect to allow viewing of the highlights side-by-side, in a split screen format, or in a picture-within-picture format. An editing effect to allow combined viewing of the highlights may be an effect which transitions from one highlight to the other highlight and back again, e.g. within a time period associated with the highlights. The transition may occur multiple times. This is in contrast to the earlier aspects and embodiments of the invention in which highlights may be presented sequentially, such that after presentation of one highlight, the view does not return to the original highlight. Any other effect may be used which does not merely involve a simple transition from one highlight to the next, without returning to the first highlight.
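Two of the editing effects mentioned above may be illustrated as follows, again purely by way of non-limiting example and assuming the ffmpeg command-line tool is available; audio handling is omitted for brevity and all file names are hypothetical.

```python
import subprocess

def side_by_side(clip_a, clip_b, out_file):
    """Combine two overlapping highlight clips for simultaneous viewing side by
    side (assumes both clips have the same height and duration)."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", clip_a, "-i", clip_b,
         "-filter_complex", "[0:v][1:v]hstack=inputs=2[v]",
         "-map", "[v]", out_file],
        check=True,
    )

def picture_in_picture(main_clip, inset_clip, out_file):
    """Overlay a scaled-down inset of one highlight on the other, placed 20
    pixels in from the top-right corner (a picture-within-picture format)."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", main_clip, "-i", inset_clip,
         "-filter_complex",
         "[1:v]scale=iw/3:ih/3[pip];[0:v][pip]overlay=W-w-20:20[v]",
         "-map", "[v]", out_file],
        check=True,
    )
```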
It will be appreciated that in any of these further aspects or embodiments of the invention, a plurality of first digital media files may be obtained, each based on a respective selection of at least two of a first highlight and one or more second highlights. The method may then comprise creating a further digital media file based on the plurality of first digital media files. For example, this may be formed from a concatenation of the files, or similar. This step may be carried out by a computing device based on first digital media files obtained by a video camera e.g. a master video camera.
Once a first digital media file has been obtained, the method may comprise transmitting the first digital media file to a computing device e.g. a mobile computing device. The device is then provided with a first digital media file to allow combined or simultaneous viewing of overlapping highlights, without needing to carry out further processing itself. Where multiple first digital media files are received, the computing device may simply concatenate the files as required. Each file may be treated in a similar manner to the third digital media files of the earlier aspects and embodiments, and may be e.g. placed in a desired sequence. However, rather than containing data relating to a single highlight, each such file contains data relating to multiple highlights to permit viewing in a combined or simultaneous manner.
The first digital media file, or a further digital media file obtained using one or more first digital media files, once created, may be uploaded by the user to a video sharing website, such as YouTube®, e.g. using a communication device of the computing device, such as WiFi or a mobile telecommunications transceiver. In other words, the first or further digital media file is preferably sent from the computing device to a remote server computer.
In accordance with a further aspect of the invention there is provided a method of creating a first digital media file from one or more second digital media files, each second digital media file comprising video image data relating to at least the same period of time, and highlight data identifying one or more times of interest in the video image data, said highlight data comprising one or more highlights each having a start time and end time with respect to the video image data, the method comprising:
The video image data that is transcoded may be obtained directly from the second digital media files. Alternatively, the method may further comprise obtaining a third media file for one or more or each of the selected highlights, each third media file comprising video image data obtained from the respective second digital media file based on the start and end time of the associated highlight. The video image data that is transcoded in this embodiment includes the one or more third media files, as needed, and thus is obtained indirectly from the second digital media files.
The selected highlights preferably include at least two of a first highlight and one or more second highlights, wherein the or each second highlight has been obtained using a start time and end time of a first highlight in a given one of the second digital media files.
In accordance with a further aspect of the invention there is provided a system for creating a first digital media file from one or more second digital media files, each second digital media file comprising video image data relating to at least the same period of time, and highlight data identifying one or more times of interest in the video image data, said highlight data comprising one or more highlights each having a start time and end time with respect to the video image data, the system comprising:
The system is preferably a video camera e.g. a master video camera. Thus, in these further aspects, the invention relates to the method performed by a master video camera in the multi-cam system in which highlights are combined using an editing effect.
As will be appreciated by those skilled in the art, these further aspects of the present invention can, and preferably do, include any one or more or all of the preferred and optional features of the invention described herein in respect of any of the other aspects of the invention, as appropriate. Accordingly, even if not explicitly stated, the system of the present invention may comprise means for carrying out any step described in relation to the method of the invention in any of its aspects or embodiments, and vice versa.
The method aspects and embodiments of the present invention as described herein are preferably computer implemented methods. The apparatus aspects and embodiments of the present invention can be configured to carry out any or all of the method steps as described herein, and vice versa. It should be noted that the phrase “associated with” as used herein should not be interpreted to require any particular restriction on data storage locations. The phrase only requires that the features are identifiably related. Therefore association may, for example, be achieved by means of a reference to a file, potentially located in a remote server.
The present invention can be implemented in any suitable system, such as a suitably configured micro-processor based system. In a preferred embodiment, the present invention is implemented in a computer and/or micro-processor based system. The present invention is particularly, but not exclusively, suitable for use in low power and portable devices. Thus, in a preferred embodiment, the computing device comprises a portable device, such as a mobile telephone or PDA. The present invention is applicable to any suitable form or configuration of video camera.
The present invention accordingly also extends to a video camera and/or a video camera system, that includes the system or apparatus of the present invention, e.g. a computing device, and preferably a mobile computing device.
The various functions of the present invention can be carried out in any desired and suitable manner. For example, the functions of the present invention can be implemented in hardware or software, as desired. Thus, for example, unless otherwise indicated, the various functional elements and “means” of the invention may comprise a suitable processor or processors, controller or controllers, functional units, circuitry, processing logic, microprocessor arrangements, etc., that are operable to perform the various functions, etc., such as appropriately dedicated hardware elements and/or programmable hardware elements that can be programmed to operate in the desired manner.
It should also be noted here that, as will be appreciated by those skilled in the art, the various functions, etc., of the present invention may be duplicated and/or carried out in parallel on a given processor. Equally, the various processing stages may share processing circuitry, etc., if desired.
It will also be appreciated by those skilled in the art that all of the described aspects and embodiments of the present invention can, and preferably do, include, as appropriate, any one or more or all of the preferred and optional features described herein.
The methods in accordance with the present invention may be implemented at least partially using software, e.g. computer programs. It will thus be seen that when viewed from further aspects the present invention provides computer software specifically adapted to carry out the methods herein described when installed on one or more data processors, a computer program element comprising computer software code portions for performing the methods herein described when the program element is run on one or more data processors, and a computer program comprising code adapted to perform all the steps of a method or of the methods herein described when the program is run on a data processing system. The one or more data processors may be a microprocessor system, a programmable FPGA (field programmable gate array), etc.
The invention also extends to a computer software carrier comprising such software which when used to operate a video camera system comprising one or more data processors causes in conjunction with said one or more data processors said system to carry out the steps of the methods of the present invention. Such a computer software carrier could be a physical storage medium such as a ROM chip, CD ROM, RAM, flash memory, or disk, or could be a signal such as an electronic signal over wires, an optical signal or a radio signal such as to a satellite or the like.
It will further be appreciated that not all steps of the methods of the invention need be carried out by computer software and thus from a further broad aspect the present invention provides computer software and such software installed on a computer software carrier for carrying out at least one of the steps of the methods set out herein.
The present invention may accordingly suitably be embodied as a computer program product for use with a computer system. Such an implementation may comprise a series of computer readable instructions either fixed on a tangible, non-transitory medium, such as a computer readable medium, for example, diskette, CD ROM, ROM, RAM, flash memory, or hard disk. It could also comprise a series of computer readable instructions transmittable to a computer system, via a modem or other interface device, over either a tangible medium, including but not limited to optical or analogue communications lines, or intangibly using wireless techniques, including but not limited to microwave, infrared or other transmission techniques. The series of computer readable instructions embodies all or part of the functionality previously described herein.
Those skilled in the art will appreciate that such computer readable instructions can be written in a number of programming languages for use with many computer architectures or operating systems. Further, such instructions may be stored using any memory technology, present or future, including but not limited to, semiconductor, magnetic, or optical, or transmitted using any communications technology, present or future, including but not limited to optical, infrared, or microwave. It is contemplated that such a computer program product may be distributed as a removable medium with accompanying printed or electronic documentation, for example, shrink wrapped software, pre-loaded with a computer system, for example, on a system ROM or fixed disk, or distributed from a server or electronic bulletin board over a network, for example, the Internet or World Wide Web.
Various aspects of the teachings of the present invention, and arrangements embodying those teachings, will hereafter be described by way of illustrative example with reference to the accompanying drawings, in which:
A number of preferred embodiments of the present invention will now be described. The preferred embodiments relate to a video camera system, in which video image data and contemporaneous sensor data is recorded and processed.
As illustrated by
In use, the video camera system operates to record video image data, audio data and contemporaneous sensor data from one or more of the sensors. The recorded data is then stored in a detachable memory device (“Batt-Stick”) of the video camera.
The recorded data is processed by the video camera and/or the external mobile computing device or desktop computing devices, so as to post-process the data, before the post-processed data is stored in the memory device and/or uploaded to the internet.
Although not shown in
One or more tags may be automatically added after the data has been recorded, i.e. the video camera system and/or computing device can place automatic tags which require no user input. Automatic tags can be generated “on-the-fly” (i.e. during recording of the data), but may also be generated at a later time, for example if the amount of data that needs to be processed is relatively high.
One or more tags can be automatically added on the basis of the video image data. For example, tags may be generated for portions of the video image data where the video image data comprises one or more faces (e.g. using face recognition technology), and/or where the video image data indicates that the lens of the video camera has been covered for a certain period of time (e.g. based on the light levels in the video image data).
One or more tags can be automatically added on the basis of the contemporaneous sensor data. For example, one or more tags can be added where the contemporaneous sensor data comprises an “extreme” value or an “extreme” change in value. For example, as shown in
Tags can be generated for portions of the video image data where the sensor data indicates a maximum speed (e.g. based on GPS data), maximum heart rate (e.g. based on heart rate sensor data), maximum cadence (e.g. based on cadence sensor data), maximum acceleration (e.g. based on accelerometer data), maximum impact G-force (e.g. based on accelerometer data), maximum sustained G-force (e.g. based on accelerometer data), maximum barometer reading, maximum vertical speed (e.g. based on accelerometer data), maximum jump time/airtime (e.g. based on barometer data), a certain degree of rotation such as 360 degrees (e.g. based on accelerometer data), a crash or fall (e.g. based on accelerometer data), the start/stop of physical activity/movement (e.g. based on GPS and/or accelerometer data), maximum volume (e.g. based on audio data), a particular voice or word (e.g. someone's name) (e.g. based on analysis, e.g. voice recognition, of the audio data), and/or a certain (e.g. popular) location (e.g. based on GPS data).
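Purely by way of non-limiting illustration, automatic tagging on the basis of an "extreme" sensor value might proceed along the following lines; the sample format, threshold and tag structure are hypothetical.

```python
def tag_extreme_values(samples, threshold_g=6.0):
    """Given contemporaneous accelerometer samples as (timestamp_s, g_force)
    pairs, place an automatic tag at the maximum reading and at any impact
    exceeding an "extreme" threshold."""
    tags = []
    if not samples:
        return tags
    # Tag the single maximum impact G-force in the recording.
    t_max, g_max = max(samples, key=lambda s: s[1])
    tags.append({"time": t_max, "type": "max_impact_g", "value": g_max})
    # Additionally tag any other sample exceeding the threshold.
    for t, g in samples:
        if g >= threshold_g and t != t_max:
            tags.append({"time": t, "type": "impact", "value": g})
    return tags

# e.g. tag_extreme_values([(12.4, 2.1), (37.8, 7.3), (51.0, 3.0)])
```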
One or more tags may be generated based upon criteria derived using data analysis of manual tagging performed by plural other users. Suggested tags can be presented to a user, e.g. based on combinations of sensor data that are used by other users to tag moments.
One or more additional tags may be manually added during playback of the recorded data by a user pressing a button or making a selection, e.g. using the computing device. Thus, a user can manually add “missing” tags.
The data is then post-processed using the tags.
In the present embodiment, this involves translating the one or more tags into “highlights”. “Highlights” are clips of video image data derived from individual tags. For example, a highlight may comprise the preceding 5 seconds of video image data and the following 5 seconds of video image data relative to the time associated with the tag. Other time periods would, of course, be possible. Highlight clips can then be used to give users a quick and effortless overview of the most interesting moments in the recordings they made.
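A minimal, non-limiting sketch of this translation, using hypothetical names and the 5-second offsets mentioned above, might be:

```python
def tag_to_highlight(tag_time_s, recording_duration_s,
                     before_s=5.0, after_s=5.0):
    """Translate a tag into a highlight spanning the video image data 5 seconds
    before and 5 seconds after the tagged moment, clamped to the recording."""
    start = max(0.0, tag_time_s - before_s)
    end = min(recording_duration_s, tag_time_s + after_s)
    return {"start": start, "end": end}

# e.g. tag_to_highlight(37.8, 120.0) -> {'start': 32.8, 'end': 42.8}
```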
A given recording may comprise multiple highlights and/or different types of highlights, i.e. depending on the number of tags added to the video image data and/or depending on the types of tags (i.e. depending on what it was that caused the tag to be generated).
As illustrated in
The highlight clips are presented to the user on the computing device, e.g. in the form of a “Highlight Wall” for selection. The “raw” recordings can also be displayed to the user.
The highlight clips or the raw data can then be further edited by the user.
In “timeline view”, as shown in
In “graph view”, as shown in
In “trace view”, as shown in
The user can select and view highlights as desired. The user can further edit (e.g. trim, etc.) each of the highlights, if desired. This is shown, for example, in
The highlight clips can then be manipulated by a user to create a “movie”, e.g. comprising several highlight clips. One or more visual effects, music, etc., can be added to the raw data files, highlight clips, and/or edited movies using the mobile device. One or more parts of the sensor data may be incorporated into a post-processed video file. This is shown in
The final post-processed/edited movie is stored. The movie can be stored in the desktop computer's memory. In “on-the-go” contexts, the movie can be transferred to and stored on the camera's SD card, which will typically have much more capacity than the mobile device. In an alternative embodiment, editing indications are transferred from the mobile device to the camera, which then performs the appropriate transcoding, and stores the final movie on the SD card. This reduces the required memory bandwidth.
The post-processed/edited data can then be presented to the user of the mobile device for selection.
The data can be shared. The user can select one or more raw data files, highlight clips, edited movies, and export the files, either for downloading to the mobile device or to an external web server, email, etc.
One way in which a network, such as that shown in
Once a network has been established, the user of the mobile device is provided with the ability to select between footage captured by different ones of the network, and to create a story including footage from multiple cameras, or at least a different camera to their own camera. The number of cameras which may connect to the mobile device to form the network may be limited to keep the amount of footage to a reasonable level. For example, the number of cameras may be limited to four, or any other suitable number.
It will be appreciated that this is only one example of a way in which multiple video cameras may be connected into a network with a mobile device for performing the method of these further embodiments of the invention. A network need not be created in such a manner, or indeed at all, and the method need not be implemented via a mobile device. While the creation of a network is advantageous in enabling a story video to be produced on the fly, the invention is not limited to such arrangements. For example, different sets of video footage may be combined at a later stage, e.g. using a desktop computing device. Furthermore, rather than all being mobile cameras, at least some, or even all of the video cameras may be static, provided that they capture footage relating to the same time period. For example, a static camera and a camera carried by a ski jumper may both provide footage of a ski jump performed by the ski jumper.
Some examples of embodiments of methods using a system as shown in
The mobile phone may present the user with a library, showing thumbnails of the highlights available for viewing from any of the cameras in the network, including the user's own camera (the master camera), or any of the other cameras (client cameras). It will be appreciated that the mobile phone has access only to metadata indicative of a highlight. The actual video image data may be stored on the video camera that captured the footage, or, alternatively, video image data from all available cameras may be transferred to the user's own camera, i.e. the master camera, for storage. If the user chooses to play a highlight, the video image data will be streamed wirelessly from the applicable camera.
Where one or more temporally overlapping highlight from a different camera exists in relation to any one of the available highlights, an icon is superposed on the thumbnail to alert the user to this. An overlapping highlight may arise for a number of reasons. In some cases, the user may select a highlight shot by their camera, in order to see if overlapping footage from other cameras exists. The mobile device may then search for any footage relating to the same time period, and cause a highlight to be created in that footage, typically having the same start time and end time as the selected highlight. The phone may send an instruction to the video camera storing the relevant video image data file to create the highlight. The highlight data of the mobile device will then be updated to reflect the new highlight, and the highlight shown in the library. Once this overlapping highlight has been generated, an icon will be shown associated with the original highlight indicating that an overlapping highlight exists.
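The check for a temporally overlapping highlight can be expressed very simply; the following non-limiting sketch assumes the start and end times of the highlights are expressed with respect to a common clock (the data structures are hypothetical).

```python
def overlaps(h1, h2):
    """Two highlights (dicts with 'start' and 'end' times in seconds) overlap
    if each starts before the other ends."""
    return h1["start"] < h2["end"] and h2["start"] < h1["end"]

def overlapping_highlights(selected, other_camera_highlights):
    """Return the highlights from another camera that temporally overlap the
    selected highlight, e.g. so that an icon can be superposed on its thumbnail."""
    return [h for h in other_camera_highlights if overlaps(selected, h)]
```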
Alternatively, or additionally, a highlight associated with footage shot by another camera may be caused by the mobile phone to be generated automatically. For example, the phone may cause such highlights to be generated in relation to particular types of highlights captured by the user's own camera. The phone may cause a highlight to be created in any footage shot by another camera which overlaps such types of highlights. This may be carried out in relation to highlights which have been created manually, or using sensor data, etc.
Alternatively or additionally, a highlight associated with footage shot by another camera may be a highlight that has been independently generated, e.g. by the other camera, and which just happens to overlap the highlight in the footage of the user's camera. This is an indicator that the particular time period to which the overlapping highlights relate is an important part of the event, e.g. a ski run. In this situation, the start and end times of the highlights are less likely to match, and the mobile phone may carry out an operation to cause the start time and end time associated with each highlight to match, if this is desirable for further processing. It is envisaged that the user may also be able to adjust the start time and end time of highlights in their library.
A story video may now be created, in a similar manner to the embodiments described earlier in relation to
Once the user is happy with their selection, and has performed any desired editing, the user makes an appropriate input to the phone. The phone now obtains a digital media file from the camera(s) which store the video image data associated with the highlights to be included. These may be the cameras which captured the footage, or all footage may be stored on the master camera. The or each camera creates a digital media file including the video footage, which has been transcoded, and provides it to the mobile phone. Each such file is of the same format. The phone then simply concatenates the files in the desired order, to obtain a story video. This may be achieved in the same way that a story video is obtained in the earlier embodiments using transcoded digital media files relating to multiple highlights obtained from a single camera.
In other embodiments, the selection of overlapping highlights, and their ordering, may be selected automatically by the phone for providing the story video. In these embodiments, the user may provide an input, such as a predetermined movement of the phone e.g. a shake, to initiate automatic creation of a story video using available overlapping highlight footage. The cameras from which highlight footage is obtained may be selected at random, or in any suitable manner, in these embodiments.
It will be appreciated that, whether the story video is created with or without user input, the transition between the footage from different cameras may be seamless, or any desired effect may be used.
Once the single “story” video has been created, it may be shared by a user, by uploading to a suitable site, such as YouTube®.
Some examples of user interfaces provided by the mobile phone to enable the user to carry out the above described processes will now be described.
The user may navigate to a library page, as shown in
In accordance with some further embodiments, rather than creating a story video in which overlapping highlights are presented sequentially, it may be desirable to show such footage simultaneously, or in a combined manner. For example, the overlapping highlights may be shown in a split screen, side-by-side, or picture-in-picture format, etc. Alternatively, a story video may be desired which switches alternately between the overlapping highlights. This may proceed in a similar manner to the embodiments described above. The user may select on the mobile phone the highlights to be included, together with a desired editing effect indicating how the highlights are to be used, e.g. displayed side by side, combined so that the story video switches between the highlights, etc. The mobile phone may then send an indication of the editing effect and the selected highlights to the master camera. In other embodiments, the highlights and/or editing effect may be automatically selected and an indication thereof sent to the master video camera. For example, a default editing effect may be used.
If the master camera has stored the video image data for the relevant highlights, the camera creates a digital media file using the video image data corresponding to each of the highlights, combined using the appropriate editing effect. The camera transcodes the file and sends it back to the mobile phone for display to the user. The user may then view the story video if desired, and share the video as in the earlier embodiments.
If the master camera has not stored some or all of the required video image data for the highlights, the master camera requests a file containing the data from the or each respective client camera storing the data. Each client camera creates a digital media file containing the video image data for the time period corresponding to the highlight, and sends the file to the master video camera. The camera then creates a video image file, as in embodiments where it stores the video image data itself, using the video image data from the received files, and using the selected editing effect. The camera transcodes the file and sends it back to the mobile phone for display to the user. The user may then view the story video if desired, and share the video as in the earlier embodiments.
Implementing such methods in this way avoids the mobile phone or other mobile computing device needing to carry out transcoding, or to create the single file based on the video image data corresponding to the highlights combined using the selected editing effect. However, in other arrangements, the steps may be carried out by any suitable device. For example, a computing device might perform the method.
Filing Document: PCT/EP2017/078013; Filing Date: 11/2/2017; Country: WO; Kind: 00
Number: 62416693; Date: Nov 2016; Country: US