The present invention generally relates to the field of video technology. More specifically, the present invention relates to a system for zooming in a video recording.
The recording of videos, especially by the use of handheld devices, is constantly gaining in popularity. It will be appreciated that a majority of today's smartphones are provided with a video recording function, and as the number of smartphone users may be in the vicinity of 3 billion in a few years' time, the market for functions and features related to video recording, especially for devices such as smartphones, is ever-increasing.
The possibility to zoom when recording a video is one example of a function which is often desirable. In case the video is recorded by a device having a touch-sensitive screen, a zoom may often be performed by the user's touch on the screen. However, manual zoom functions of this kind may suffer from several drawbacks, especially when considering that the user may often need to perform the zooming whilst being attentive to the motion of the (moving) object(s). For example, when performing a manual zoom during a video recording session, the user may be distracted by this operation such that he or she loses track of the object(s) and/or the object(s) move(s) out of the zoomed view. Another problem of performing a manual zoom of this kind is that the user may unintentionally move the device during the zooming, which may result in a video where the object(s) is (are) not rendered in a desired way.
Hence, alternative solutions are of interest which are able to provide a convenient zoom function and/or a zoom function by which one or more zoomed objects may be rendered in an appealing and/or convenient way in a video recording.
It is an object of the present invention to mitigate the above problems and to provide a convenient zoom function and/or a zoom function by which one or more zoomed objects may be rendered in an appealing and/or convenient way in a video recording.
This and other objects are achieved by providing a system, a method and a computer program having the features in the independent claims. Preferred embodiments are defined in the dependent claims.
Hence, according to a first aspect of the present invention, there is provided a system for zooming of a video recording. The system is configured to detect at least one object in a first view of the video recording, track the detected at least one object, and select at least one of the tracked at least one object. The system is further configured to define the selected at least one object by at least one first boundary and define a second boundary, wherein at least one of the at least one first boundary is provided within the second boundary. Furthermore, the system is configured to define a third boundary and define a second view of the video recording corresponding to a view of the video recording defined by the third boundary. The system is further configured to perform at least one of an in-zooming of the video recording relative the first view of the video recording, by changing the third boundary such that the third boundary coincides with the second boundary, whereby the second view of the video recording, played in the size of the first view of the video recording, constitutes the in-zooming of the video recording, and an out-zooming of the video recording relative the second view of the video recording corresponding to a view of the video recording defined by the third boundary of decreased size relative the first view of the video recording, by changing the third boundary such that the third boundary coincides with the first view, whereby the second view of the video recording, played in the size of the first view of the video recording, constitutes the out-zooming of the video recording. 
The system is further configured to, in case at least one predetermined event occurs during the video recording: stop a performed in-zooming or out-zooming of the video recording, track the selected at least one object, re-define the selected at least one object by the at least one first boundary, re-define the second boundary, wherein at least one of the at least one first boundary is provided within the second boundary, and change the third boundary, whereby the second view of the video recording corresponds to a view of the video recording defined by the third boundary.
According to a second aspect of the present invention, there is provided a method for zooming of a video recording. The method comprises the steps of detecting at least one object in a first view of the video recording, tracking the detected at least one object, and selecting at least one of the tracked at least one object. The method further comprises defining the selected at least one object by at least one first boundary and defining a second boundary, wherein at least one of the at least one first boundary is provided within the second boundary. The method further comprises defining a third boundary and defining a second view of the video recording corresponding to a view of the video recording defined by the third boundary. The method further comprises performing at least one of the steps of: in-zooming of the video recording relative the first view of the video recording, by changing the third boundary such that the third boundary coincides with the second boundary, whereby the second view of the video recording, played in the size of the first view of the video recording, constitutes the in-zooming of the video recording, and out-zooming of the video recording relative the second view of the video recording corresponding to a view of the video recording defined by the third boundary of decreased size relative the first view of the video recording, by changing the third boundary such that the third boundary coincides with the first view, whereby the second view of the video recording, played in the size of the first view of the video recording, constitutes the out-zooming of the video recording.
The method further comprises the steps of, in case at least one predetermined event occurs during the video recording: stopping a performed at least one of an in-zooming and an out-zooming of the video recording, tracking the selected at least one object, re-defining the selected at least one object by the at least one first boundary, re-defining the second boundary, wherein at least one of the at least one first boundary is provided within the second boundary, and changing the third boundary, whereby the second view of the video recording corresponds to a view of the video recording defined by the third boundary.
According to a third aspect of the present invention, there is provided a computer program comprising computer readable code for causing a computer to carry out the steps of the method according to the second aspect of the present invention when the computer program is carried out on the computer.
Thus, the present invention is based on the idea of providing a system for zooming of a video recording. The system may detect, track and select one or more objects in a first view of the video recording. The system may thereafter automatically provide an in-zooming or out-zooming of the selected object(s). The performed in-zooming or out-zooming of the video recording may be stopped in case a predetermined event occurs, such as an interrupted tracking of at least one of the selected at least one object, a de-selection of at least one of the selected at least one object, and/or a selection of at least one object in the first view, separate from the selected at least one object. After the stopping of the in-zooming or out-zooming, the system may track the object(s), re-define the first and second boundaries accordingly and change the third boundary. The system may thereafter resume an in-zooming or out-zooming of the video recording.
It will be appreciated that the system of the present invention is primarily intended for a real-time zooming of a video recording, wherein the in- and/or out-zooming of the video recording is performed during the actual and ongoing video recording. However, the system of the present invention may alternatively be configured for a post-processing of the video recording, wherein the system may generate in- and/or out-zooming operations on a previously recorded video.
The present invention is advantageous in that the zooming of the object(s) during the video recording by the device is provided automatically by the system, thereby avoiding drawbacks related to manual zooming. The automatic zoom may conveniently zoom in on (or zoom out of) selected objects, often resulting in a more even, precise and/or smooth zooming of the video recording compared to a manual zooming operation. For example, an attempt of a manual zooming of one or more objects during a video recording may lead to a user losing track of the object(s) and/or that the object(s) move(s) out of the zoomed view. Furthermore, during a manual zooming, the user may unintentionally move the device which may result in a video where the object(s) is (are) not rendered in a desired way in the video recording. Furthermore, in case one or more events occur during the video recording, such as an interrupted tracking of one or more of the selected object(s), a de-selection of one or more of the selected at least one object, and/or a selection of one or more object(s) in the first view, separate from the selected object(s), the system may conveniently re-define the first and second boundaries and change the third boundary, which may lead to a convenient in-zooming or out-zooming of the video recording.
The present invention is further advantageous in that the system may provide a smooth and convenient in- and/or out-zooming of a video recording, leading to an esthetically appealing appearance of the resulting video recording. Furthermore, the experience of the video recording may be modified by changing the speed of the in- and/or out-zooming of the video recording.
It will be appreciated that the mentioned advantages of the system of the first aspect of the present invention also hold for the method according to the second aspect of the present invention.
According to the first aspect of the present invention, there is provided a system for zooming of a video recording. By the term “zooming”, it is here meant an in-zooming and/or an out-zooming of a first view of the video recording. The system is configured to detect at least one object in a first view of the video recording. By the term “first view”, it may hereby be meant a full view, a primary (unchanged, unzoomed) view, or the like, of the video recording. The system is configured to track the detected object(s). By the term “track”, it is here meant an automatic following of the detected object(s). The system is further configured to select one or more of the tracked object(s). Hence, the system may be configured to select none, all, or a subset of the tracked object(s).
The system is further configured to define the selected at least one object by at least one first boundary. Hence, each selected object may be defined by a first boundary, i.e. each selected object may be provided within a first boundary. The first boundary may also be referred to as a “tracker boundary”, or the like. The system is further configured to define a second boundary, wherein at least one of the at least one first boundary is provided within the second boundary. In other words, one or more of the first boundaries may be enclosed by a second boundary. The second boundary may also be referred to as a “target boundary”, or the like.
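The relation between the first and second boundaries may be sketched in illustrative code. The following sketch is purely illustrative and rests on assumptions not prescribed above: boundaries are modeled as axis-aligned rectangles (x0, y0, x1, y1), and the second boundary is taken as the smallest rectangle enclosing the selected first boundaries, with an optional margin:

```python
# Illustrative sketch (assumptions: rectangle representation, enclosing-
# rectangle construction, margin parameter). Not part of the claimed matter.

def enclosing_boundary(first_boundaries, margin=0):
    """Return the smallest rectangle enclosing all given rectangles."""
    x0 = min(b[0] for b in first_boundaries) - margin
    y0 = min(b[1] for b in first_boundaries) - margin
    x1 = max(b[2] for b in first_boundaries) + margin
    y1 = max(b[3] for b in first_boundaries) + margin
    return (x0, y0, x1, y1)

# Two tracker boundaries around two selected objects yield one target boundary:
second = enclosing_boundary([(10, 10, 30, 40), (50, 20, 80, 60)], margin=5)
print(second)  # (5, 5, 85, 65)
```

In this reading, re-defining the second boundary after a predetermined event simply amounts to recomputing the enclosing rectangle from the re-defined first boundaries.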
The system is further configured to define a third boundary and define a second view of the video recording corresponding to a view of the video recording defined by the third boundary. In other words, the second view corresponds to the resulting view of the video recording, i.e. the view of the video recording when the video recording is (re)played. The third boundary may also be referred to as a “zoom boundary”, or the like.
Furthermore, the system is configured to perform an in-zooming and/or an out-zooming. During an in-zooming of the video recording relative the first view of the video recording, the system is configured to change the third boundary such that the third boundary coincides with the second boundary, whereby the second view of the video recording, played in the size of the first view of the video recording, constitutes the in-zooming of the video recording. Analogously, during an out-zooming of the video recording relative the second view of the video recording corresponding to a view of the video recording defined by the third boundary of decreased size relative the first view of the video recording, the system is configured to change the third boundary such that the third boundary coincides with the first view, whereby the second view of the video recording, played in the size of the first view of the video recording, constitutes the out-zooming of the video recording. Hence, the third boundary is automatically moved, changed, shifted, increased, decreased and/or resized such that it coincides with the second boundary. Furthermore, as the second view corresponds to a view of the video recording defined by the third boundary, the move, change and/or resizing of the third boundary implies an in-zooming or out-zooming of the video recording relative the first view of the video recording.
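One way of realizing the change of the third boundary is a stepwise linear interpolation of its edges toward the second boundary; this particular interpolation scheme, and the rectangle representation, are assumptions of the following illustrative sketch rather than features required by the invention:

```python
# Illustrative sketch (assumption: per-frame linear interpolation of each
# rectangle edge). The view defined by `zoom`, played in the size of the
# first view, would constitute the in-zooming.

def step_toward(zoom, target, t):
    """Move each rectangle edge a fraction t toward the target edge."""
    return tuple(z + (g - z) * t for z, g in zip(zoom, target))

zoom = (0.0, 0.0, 100.0, 100.0)    # third boundary starts as the full first view
target = (20.0, 20.0, 60.0, 60.0)  # second boundary to zoom in on
for _ in range(3):                 # a few animation frames
    zoom = step_toward(zoom, target, 0.5)
print(zoom)  # (17.5, 17.5, 65.0, 65.0)
```

An out-zooming follows analogously by interpolating the third boundary back toward the rectangle of the first view.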
Furthermore, it will be appreciated that one or more events may occur during the video recording. For example, a tracking of at least one of the selected at least one object may be interrupted, at least one of the selected at least one object may be de-selected, and/or at least one object in the first view, separate from the selected at least one object, may be selected. Then, the system is configured to perform the following: firstly, stop a performed in-zooming or out-zooming of the video recording. Hence, if an in-zooming or out-zooming is performed by the system, this zooming is interrupted. Secondly, track the selected at least one object. Thirdly, re-define the selected at least one object by the at least one first boundary. Fourthly, re-define the second boundary, wherein at least one of the at least one first boundary is provided within the second boundary, and fifthly, change the third boundary, whereby the second view of the video recording corresponds to a view of the video recording defined by the third boundary.
According to an embodiment of the present invention, the at least one first boundary may be provided within the second boundary, and the second boundary may be provided within the third boundary. The system may further be configured to decrease the size of the third boundary such that the third boundary coincides with the second boundary, whereby the second view of the video recording, played in the size of the first view of the video recording, constitutes an in-zooming of the video recording.
According to an embodiment of the present invention, the system may further be configured to detect the at least one object based on pattern recognition. The present embodiment is advantageous in that pattern recognition is a convenient and efficient manner of recognizing one or more objects.
According to an embodiment of the present invention, the system may be configured to define at least one predetermined criteria for selection of the at least one object, and select the tracked at least one object according to the at least one predetermined criteria. By the term “criteria”, it is here meant one or more criteria which may be linked to one or more characteristic features of an object, such as the size and/or speed of an object. The present embodiment is advantageous in that the system may conveniently select one or more tracked object(s) according to predetermined criteria and/or characteristics of the object(s).
According to an embodiment of the present invention, at least one of the at least one predetermined criteria is associated with the size of the at least one object, and the system is configured to select only the largest object of the detected at least one object.
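An illustrative sketch of the size-based selection, assuming (without limitation) that each detected object carries an axis-aligned bounding rectangle and that “largest” means largest bounding-box area:

```python
# Illustrative sketch (assumptions: dict-based object records with a "box"
# rectangle; "largest" interpreted as largest bounding-box area).

def area(box):
    x0, y0, x1, y1 = box
    return (x1 - x0) * (y1 - y0)

def select_largest(objects):
    """Pick the detected object with the largest bounding-box area."""
    return max(objects, key=lambda obj: area(obj["box"]))

detected = [
    {"id": "a", "box": (0, 0, 10, 10)},   # area 100
    {"id": "b", "box": (5, 5, 40, 30)},   # area 875
]
print(select_largest(detected)["id"])  # b
```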
According to an embodiment of the present invention, at least one of the at least one predetermined criteria is an action performed by the at least one object, the system further being configured to identify an action performed by at least one object, and associate the identified action with at least one of the at least one predetermined criteria, and select the at least one object performing the action. Hence, the system may be configured to match an action by one or more objects with a predetermined object action, and select the object(s) accordingly. By the term “action”, it is here meant substantially any movement performed by the object(s), such as running, walking, jumping, etc. The present embodiment is advantageous in that the system may efficiently and conveniently identify object(s) performing an action which may be desirable to emphasize in the video recording.
According to an embodiment of the present invention, the system may be configured to de-select at least one of the selected at least one object. By the term “de-select”, it is here meant a deletion, removal and/or deselection of one or more objects. The present embodiment is advantageous in that the system may de-select any object(s) which is of no interest to zoom into.
According to an embodiment of the present invention, the system may be configured to, in case no object is marked, increase the size of the third boundary such that the third boundary coincides with the first view, whereby the second view of the video recording, played in the size of the first view of the video recording, constitutes an out-zooming of the video recording relative the second view of the video recording corresponding to a view of the video recording defined by the third boundary of decreased size relative the first view of the video recording. In other words, if the system de-selects the (or all) object(s), the size of the third boundary increases. As the second view of the video recording corresponds to a view of the video recording defined by the third boundary, the second view constitutes an out-zooming of the video recording. The present embodiment is advantageous in that the system may interrupt the zooming and return to the first view of the video recording.
According to an embodiment of the present invention, the system is further configured to change the speed of at least one of the in-zooming and out-zooming of the video recording. The present embodiment is advantageous in that the video recording may be rendered in an even more dynamic manner. For example, the system may be configured to have a relatively high speed of the zooming for a livelier experience. Conversely, the system may be configured to have a relatively low and/or moderate speed of the zooming for a calmer experience.
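A change of zooming speed may, for example, be realized by varying the number of interpolation frames used to reach the second boundary; the duration-based parameterization below is one assumed, non-limiting realization:

```python
# Illustrative sketch (assumptions: speed expressed as a zoom duration in
# seconds, fixed frame rate, linear interpolation from start to target).

def frames_for(duration_s, fps=30):
    """Number of animation frames for a zoom of the given duration."""
    return max(1, round(duration_s * fps))

def zoom_path(start, target, duration_s, fps=30):
    """Yield intermediate rectangles from start to target at the given speed."""
    n = frames_for(duration_s, fps)
    for i in range(1, n + 1):
        t = i / n
        yield tuple(s + (g - s) * t for s, g in zip(start, target))

fast = list(zoom_path((0, 0, 100, 100), (20, 20, 60, 60), duration_s=0.1))
slow = list(zoom_path((0, 0, 100, 100), (20, 20, 60, 60), duration_s=1.0))
print(len(fast), len(slow))  # 3 30
print(slow[-1])              # (20.0, 20.0, 60.0, 60.0)
```

A short duration gives the livelier, high-speed zoom; a long duration gives the calmer, moderate-speed zoom.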
According to an embodiment of the present invention, there is provided a user interface, UI, comprising a system according to any one of the preceding embodiments, for zooming of a video recording by a device comprising a screen. The UI is configured to be used in conjunction with the device, wherein the device is configured to display the video recording on the screen.
According to an embodiment of the present invention, the user interface may be configured to display, on the screen, at least one of the at least one first boundary, the second boundary and the third boundary. The present embodiment is advantageous in that a user may see the conditions of the zooming operation of the UI, and, optionally, change one or more of the conditions. For example, if the UI is configured to display the one or more first boundary, a user may see which objects have been tracked and selected. Furthermore, if the UI is configured to display the second boundary, a user may see which boundary the UI intends to zoom towards by the third boundary. Furthermore, if the UI is configured to display the third boundary, a user may see how the zooming by the third boundary towards the second boundary may render the second view (i.e. the zoomed view) of the video recording.
According to an embodiment of the present invention, the user interface may be configured to display on the screen, at least one indication of a center portion of at least one of the at least one first boundary, the second boundary and the third boundary. The present embodiment is advantageous in that the center portion indication(s) may facilitate the user's conception of the center(s) of the boundary or boundaries, and consequently, the conception of the resulting video recording.
According to an embodiment of the present invention, the user interface may be a touch-sensitive user interface. By the term “touch-sensitive user interface”, it is here meant a UI which is able to receive an input by a user's touch, such as by one or more fingers of a user touching the UI. The present embodiment is advantageous in that a user, in an easy and convenient manner, may mark, indicate and/or select an object by touch, e.g. by the use of one or more fingers.
According to an embodiment of the present invention, the user interface may be configured to select at least one object based on a marking by a user on the screen on the at least one object, and subsequently, track the selected at least one object. The marking may comprise at least one tapping by the user on the screen on the at least one object. By the term “tapping”, it is here meant a relatively fast pressing of one or more fingers on the screen. The present embodiment is advantageous in that a user may conveniently mark an object visually appearing on the screen.
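Selection by tapping may, for instance, be realized as a hit test of the touch point against the displayed tracker boundaries; the data model in the following sketch is an assumption of this illustration:

```python
# Illustrative sketch (assumptions: tap given as an (x, y) screen point,
# objects carried as dicts with a "box" rectangle in screen coordinates).

def object_at(tap, objects):
    """Return the first object whose boundary contains the tap point."""
    x, y = tap
    for obj in objects:
        x0, y0, x1, y1 = obj["box"]
        if x0 <= x <= x1 and y0 <= y <= y1:
            return obj
    return None

objects = [{"id": "runner", "box": (40, 10, 70, 60)}]
print(object_at((55, 30), objects)["id"])  # runner
print(object_at((5, 5), objects))          # None
```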
According to an embodiment of the present invention, the marking by a user on the screen of the at least one object may comprise an at least partially encircling marking of the at least one object on the screen. By the term “an at least partially encircling marking”, it is here meant a circular, circle-like, rectangular or quadratic marking of the user around one or more objects appearing on the screen. The present embodiment is advantageous in that a user may intuitively and conveniently mark an object appearing on the screen.
According to an embodiment of the present invention, the user interface further comprises a user input function configured to associate at least one user input with at least one object on the screen, wherein the user input is selected from a group consisting of eye movement, face movement, hand movement and voice, and wherein the user interface is configured to select at least one object based on the user input function. In other words, the user input may comprise one or more eye movements, face movements (e.g. facial expression, grimace, etc.), hand movements (e.g. a gesture) and/or voice (e.g. voice command), and the user input function may hereby associate the user input with one or more objects on the screen. The present embodiment is advantageous in that the user interface is relatively versatile related to the selection of object(s), leading to a user interface which is even more user-friendly.
According to an embodiment of the present invention, the user input function is an eye-tracking function configured to associate at least one eye movement of a user with at least one object on the screen, and wherein the user interface is configured to select at least one object based on the eye-tracking function. The present embodiment is advantageous in that the eye-tracking function even further contributes to the efficiency and/or convenience of the operation of the user interface related to the selection of one or more objects.
According to an embodiment of the present invention, the user interface may further be configured to register an unmarking by a user on the screen of at least one of the at least one object, and de-select the at least one unmarked at least one object.
According to an embodiment of the present invention, the user interface may be configured to register at least one gesture by a user on the screen, and to associate the at least one gesture with a change of the second boundary. The user interface may furthermore be configured to display, on the screen, the change of the second boundary. By the term “gesture”, it is here meant a movement, a touch, a pattern created by the touch of at least one finger top, or the like, by the user on a touch-sensitive screen of a device. The present embodiment is advantageous in that the second boundary may be changed in an easy and intuitive manner. Furthermore, as the UI is configured to display the change (i.e. the move, re(sizing), or the like) of the second boundary, the user is provided with feedback from the change.
According to an embodiment of the present invention, the user interface may be configured to associate the at least one gesture with a change of size of the second boundary. In other words, the user may make the second boundary smaller or larger by a gesture registered on the screen. For example, the gesture may be a “pinch” gesture, whereby two or more fingers are brought towards each other.
According to an embodiment of the present invention, the user interface may further be configured to register a plurality of input points by a user on the screen, and scale the size of the second boundary based on the plurality of input points. By the term “input points”, it is here meant one or more touches, indications, or the like, by the user on the touch-sensitive screen. The present embodiment is advantageous in that the second boundary may be changed in an easy and intuitive manner.
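One possible, non-limiting interpretation of scaling by a plurality of input points is to derive a scale factor from the spread of the points and apply it to the second boundary about its center, as sketched below (the spread measure and the data representation are assumptions of this sketch):

```python
# Illustrative sketch (assumptions: spread measured as the larger of the
# horizontal and vertical extents of the input points; scaling applied
# symmetrically about the boundary center).

def spread(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return max(max(xs) - min(xs), max(ys) - min(ys))

def scale_boundary(box, factor):
    """Scale a rectangle about its center by the given factor."""
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    hw, hh = (x1 - x0) / 2 * factor, (y1 - y0) / 2 * factor
    return (cx - hw, cy - hh, cx + hw, cy + hh)

start_points = [(40, 40), (60, 60)]  # fingers touch down
end_points = [(30, 30), (70, 70)]    # fingers move apart ("pinch out")
factor = spread(end_points) / spread(start_points)
print(scale_boundary((20, 20, 60, 60), factor))  # (0.0, 0.0, 80.0, 80.0)
```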
According to an embodiment of the present invention, the user interface may further be configured to associate the at least one gesture with a re-positioning of the second boundary on the screen.
According to an embodiment of the present invention, the user interface may further be configured to register the at least one gesture as a scroll gesture by a user on the screen. By the term “scroll gesture”, it is here meant a gesture of a “drag-and-drop” type, or the like.
According to an embodiment of the present invention, the user interface may further be configured to estimate a degree of probability that the selected at least one object is moving out of the first view of the video recording. In case the degree of probability exceeds a predetermined probability threshold value, the user interface may be configured to generate at least one indicator for a user, and alert the user by the at least one indicator. The present embodiment is advantageous in that the UI may alert a user during a video recording that the object(s) that are selected are moving out of the first view of the video recording, such that the user may move and/or turn the video recording device to be able to continue to record the objects.
According to an embodiment of the present invention, the user interface may be configured to estimate the degree of probability based on at least one of a location, an estimated velocity and an estimated direction of movement of the at least one object. The present embodiment is advantageous in that the inputs of location, velocity and/or estimated direction of movement of the object(s) may further improve the estimate of the degree of probability that object(s) are about to move out of the first view of the video recording.
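A simple heuristic of this kind may, for example, project the object's position forward by its estimated velocity and map the projected distance outside the first view to a probability value; the heuristic below, including its horizon and falloff parameters, is an assumption of this illustration and not a feature defined above:

```python
# Illustrative sketch (assumptions: constant-velocity projection over a
# look-ahead horizon; probability grows linearly with the projected distance
# outside the first-view rectangle, saturating at 1.0).

def leave_probability(pos, vel, view, horizon=1.0, falloff=50.0):
    """Crude estimate that an object leaves the view within `horizon` seconds."""
    x, y = pos[0] + vel[0] * horizon, pos[1] + vel[1] * horizon
    x0, y0, x1, y1 = view
    # Distance from the projected point to the view rectangle (0 if inside).
    dx = max(x0 - x, 0, x - x1)
    dy = max(y0 - y, 0, y - y1)
    dist = (dx * dx + dy * dy) ** 0.5
    return min(1.0, dist / falloff)

view = (0, 0, 100, 100)
print(leave_probability((90, 50), (30, 0), view))  # 0.4
print(leave_probability((50, 50), (0, 0), view))   # 0.0
```

When the returned value exceeds the predetermined probability threshold value, the indicator of the preceding embodiment would be generated.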
According to an embodiment of the present invention, the user interface may further be configured to, in case the degree of probability exceeds the predetermined probability threshold value, display at least one visual indicator on the screen as a function of at least one of the location, the estimated velocity and the estimated direction of movement of the at least one object. The present embodiment is advantageous in that the user may be conveniently guided by the visual indicator(s) on the screen to move and/or turn the video recording device if necessary.
According to an embodiment of the present invention, the at least one visual indicator comprises at least one arrow.
According to an embodiment of the present invention, the device is configured to generate a tactile alert, and in case the degree of probability exceeds the predetermined probability threshold value, cause the device to generate a tactile alert. By the term “tactile alert”, it is here meant e.g. a vibrating alert.
According to an embodiment of the present invention, the device is configured to generate an auditory alert, and in case the degree of probability exceeds the predetermined probability threshold value, cause the device to generate an auditory alert. By the term “auditory alert”, it is here meant e.g. a signal, an alarm, or the like.
According to an embodiment of the present invention, the user interface is configured to display, on a peripheral portion of the screen, the second view of the video recording. By the term “peripheral portion”, it is here meant a portion at an edge portion of the screen. The present embodiment is advantageous in that the user may be able to see the second view of the video recording, which constitutes a zooming of the video recording relative the first view of the video recording, at the peripheral portion of the screen.
According to an embodiment of the present invention, the user interface is configured to, in case a performed in-zooming or out-zooming of the video recording is stopped, generate at least one indicator for a user, and alert the user by the at least one indicator. For example, the indicator may comprise a visual indicator, and the user interface may be configured to display the visual indicator on the screen. According to other examples, the indicator may comprise a tactile alert (e.g. a vibration), an auditory alert (e.g. an alarm), etc.
According to an embodiment of the present invention, there is provided a device for video recording, comprising a screen and a user interface according to any one of the preceding embodiments.
According to an embodiment of the present invention, there is provided a mobile device comprising a device for video recording, wherein the screen of the device is a touch-sensitive screen.
Further objectives of, features of, and advantages with, the present invention will become apparent when studying the following detailed disclosure, the drawings and the appended claims. Those skilled in the art will realize that different features of the present invention can be combined to create embodiments other than those described in the following.
This and other aspects of the present invention will now be described in more detail, with reference to the appended drawings showing embodiment(s) of the invention.
The system 100 is configured to detect an object 150 in a first view 110 in the video recording on the screen 120, and to track (i.e. to follow) the detected object 150. In other words, the system 100 is able to track (follow) the detected object 150 during a movement of the object 150. It will be appreciated that a tracking function is known by the skilled person, and is not described in more detail.
The system 100 is further configured to select one or more of the tracked object(s) 150. The system 100 may be configured to select none, all, or a subset of the tracked object(s) 150. Furthermore, the system 100 may be configured to select one or more tracked object(s) 150 according to one or more predetermined criteria. For example, one predetermined criteria may be associated with the size of the object(s) 150, and the system 100 may hereby be configured to select only the largest object 150 of a plurality of detected objects 150. Alternatively, the system 100 may be configured to identify an action performed by the object 150, associate the identified action with at least one of the at least one predetermined criteria, and select the object 150 performing the action. For example, the system 100 may be configured to identify the action of the object in
The system 100 is further configured to define the tracked object 150 by at least one first boundary 270. In other words, the object 150 may be enclosed by the first boundary 270. Here, the first boundary 270 is exemplified as a rectangle which encloses (defines) the object 150. It will be appreciated that there may be more than one object 150 on the screen, and hence, there may be a plurality of first boundaries 270, each defining an object 150.
The system 100 is further configured to define a second boundary 280, wherein one or more of the first boundary(ies) 270 is provided within the second boundary 280. Hence, if there is more than one first boundary 270, some or all of these first boundaries 270 may be enclosed by the second boundary 280. For example, the system 100 may select at least one first boundary 270 to be provided within the second boundary 280. The center portion of the second boundary 280 is indicated by a marker 285. In one embodiment of the system 100, the second boundary 280 is displayed on the screen 120.
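One straightforward way to realize a second boundary 280 that encloses the selected first boundary(ies) 270 is to take the smallest axis-aligned rectangle containing them. The sketch below is an assumption for illustration; the margin value, box format and helper names are not from the specification.

```python
# Illustrative sketch (assumption): a second boundary enclosing all selected
# first boundaries, padded by a margin. Boxes are (x, y, width, height) tuples.

def enclosing_boundary(boxes, margin=10):
    """Smallest axis-aligned rectangle enclosing all boxes, padded by `margin`."""
    if not boxes:
        raise ValueError("at least one first boundary is required")
    x0 = min(x for x, y, w, h in boxes) - margin
    y0 = min(y for x, y, w, h in boxes) - margin
    x1 = max(x + w for x, y, w, h in boxes) + margin
    y1 = max(y + h for x, y, w, h in boxes) + margin
    return (x0, y0, x1 - x0, y1 - y0)

def center(box):
    """Center point of a box, e.g. the position of a marker such as 285."""
    x, y, w, h = box
    return (x + w / 2, y + h / 2)
```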
The system 100 is further configured to define a third boundary 290, provided within the first view 110 of the video recording, and to define a second view of the video recording corresponding to a view of the video recording defined by the third boundary 290. In other words, it is the second view of the video recording which may constitute the resulting video recording. The center portion of the third boundary 290 is indicated by a marker 295.
Furthermore, the system 100 is configured to automatically change and/or move the third boundary 290, as indicated by the schematic arrows at the corners of the third boundary 290, such that the third boundary 290 coincides with (i.e. adjusts to) the second boundary 280.
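The gradual adjustment of the third boundary toward the second boundary can be sketched as a per-frame interpolation. This is an illustrative assumption: the specification only requires that the third boundary comes to coincide with the second boundary, and the interpolation factor and box format below are choices made for the sketch.

```python
# Illustrative sketch (assumption): moving/resizing the third boundary a
# fraction of the way toward the second boundary on each frame, so that it
# gradually comes to coincide with it. Boxes are (x, y, width, height) tuples.

def step_toward(third, second, factor=0.1):
    """Linearly interpolate each component of `third` toward `second`."""
    return tuple(t + factor * (s - t) for t, s in zip(third, second))
```

Calling `step_toward` once per frame converges the third boundary onto the second boundary; a larger `factor` corresponds to a faster adjustment.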
The system 100 may be configured to stabilize the first view 110 and/or second view of the video recording. It will be appreciated that a stabilizing function of this kind is known by the skilled person, and is not described in more detail.
The system 100 is further configured to track 202 the detected at least one object, and select 203 at least one of the tracked at least one object. The system 100 may be configured to select 203 at least one of the tracked object(s) according to one or more predetermined criteria. For example, at least one of the at least one predetermined criteria may be associated with the size of the at least one object, and the system 100 may hereby be configured to select 203 only the largest object of the detected objects. Alternatively, the system 100 may select the object(s) based on an identified action of the object(s).
The system 100 is further configured to define 204 the selected at least one object by at least one first boundary, to define 205 a second boundary, wherein at least one of the at least one first boundary is provided within the second boundary. The system 100 is further configured to define 206 a third boundary and define a second view of the video recording corresponding to a view of the video recording defined by the third boundary.
The system 100 may further be configured to perform an in-zooming 207 of the video recording relative to the first view of the video recording. The in-zooming 207 may be performed by changing the third boundary such that the third boundary coincides with the second boundary, whereby the second view of the video recording, played in the size of the first view of the video recording, constitutes the in-zooming of the video recording.
Alternatively, the system 100 may be configured to perform an out-zooming 208 of the video recording relative to the second view of the video recording, i.e. the view of the video recording defined by a third boundary of decreased size relative to the first view of the video recording. The out-zooming 208 may be performed by changing the third boundary such that the third boundary coincides with the first view, whereby the second view of the video recording, played in the size of the first view of the video recording, constitutes the out-zooming of the video recording. It will be appreciated that the system 100 may be configured to change the speed of the in-zooming 207 and/or out-zooming 208 of the video recording.
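Since the second view is always played back at the size of the first view, the effective magnification follows directly from the ratio of the two widths. The sketch below illustrates this relationship; the function name and box format are assumptions for illustration.

```python
# Illustrative sketch (assumption): the second view, defined by the third
# boundary, is played back at the size of the first view, so the effective
# zoom factor is the ratio of the widths. Boxes are (x, y, width, height).

def zoom_factor(first_view, third_boundary):
    """Magnification of the second view when shown at the first view's size."""
    first_width = first_view[2]
    third_width = third_boundary[2]
    return first_width / third_width
```

A third boundary narrower than the first view gives a factor above 1 (in-zooming); growing the third boundary back toward the first view brings the factor back to 1 (out-zooming). Changing how fast the third boundary is changed per frame corresponds to changing the zoom speed.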
During the in-zooming 207 or the out-zooming 208 performed by the system 100, one or more events or scenarios may occur. For example, a tracking 209 of at least one of the selected at least one object may, possibly, be interrupted. Furthermore, at least one of the selected at least one object may be de-selected 210 during the in-zooming 207 or the out-zooming 208 performed by the system 100. Yet another event or scenario may be that at least one object in the first view, separate from the selected at least one object, is selected 211 during the in-zooming 207 or the out-zooming 208 performed by the system 100. If one or more of the interrupted tracking 209, the de-selection 210 and the selection 211 as described occurs, the system 100 is configured to perform the following: stop 212 a performed in-zooming 207 or out-zooming 208 of the video recording, track 213 the selected at least one object, re-define 214 the selected at least one object by the at least one first boundary, re-define 215 the second boundary, wherein at least one of the at least one first boundary is provided within the second boundary, and change 216 the third boundary, whereby the second view of the video recording corresponds to a view of the video recording defined by the third boundary. After changing 216 the third boundary, the system 100 may either keep the present state of the video recording, perform an in-zooming 207 or perform an out-zooming 208, as indicated by the iterative (feedback) line of
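The event handling of steps 209-216 can be sketched as a simple state transition: any of the three interrupting events stops an ongoing zoom, after which the boundaries are re-defined. The state and event names below are assumptions made for illustration.

```python
# Illustrative sketch (assumption) of the event handling in steps 209-216:
# an interrupted tracking (209), a de-selection (210) or a new selection (211)
# stops an ongoing in-zooming or out-zooming (212), after which the first and
# second boundaries are re-defined (213-215) and the third boundary changed (216).

INTERRUPTING_EVENTS = {"tracking_interrupted", "object_deselected", "object_selected"}

def handle_event(state, event):
    """Return the next zoom state given an event during the video recording."""
    if event in INTERRUPTING_EVENTS and state in {"in_zooming", "out_zooming"}:
        # Steps 212-216: stop the zoom; re-tracking and boundary re-definition
        # would follow before a new in-zooming or out-zooming is started.
        return "stopped"
    return state
```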
The tracking 209 of one or more objects may be interrupted due to a movement of the object(s) out of the second view. In case one or more objects return into the second view, the system 100 may be configured to recognize that the same object(s) has (have) returned into the second view and continue a performed in-zooming or out-zooming.
During the state of in-zooming 302 and/or out-zooming 304 by the system 100, one or more interruptions, changes, or the like, may occur during the video recording. For example, as described in
The person skilled in the art realizes that the present invention is by no means limited to the preferred embodiments described above. On the contrary, many modifications and variations are possible within the scope of the appended claims. For example, it will be appreciated that the figures are merely schematic views of a user interface according to embodiments of the present invention. Hence, any functions and/or elements of the UI 500, such as one or more of the first 270, second 280 and/or third 290 boundaries, may have different dimensions, shapes and/or sizes than those depicted and/or described.
1. A system (100) for zooming of a video recording, the system being configured to:
2. The system according to embodiment 1, wherein the at least one predetermined event is selected from a group consisting of
3. The system according to embodiment 1 or 2, wherein the at least one first boundary is provided within the second boundary, and the second boundary is provided within the third boundary, the system further being configured to
4. The system according to any one of the preceding embodiments, further being configured to:
5. The system according to any one of the preceding embodiments, further being configured to define at least one predetermined criteria for selection of the at least one object, and select the tracked at least one object according to the at least one predetermined criteria.
6. The system according to embodiment 5, wherein at least one of the at least one predetermined criteria is associated with the size of the at least one object, and wherein the system is configured to select only the largest object of the detected at least one object.
7. The system according to embodiment 5, wherein at least one of the at least one predetermined criteria is an action performed by the at least one object, the system further being configured to
8. The system according to any one of the preceding embodiments, further being configured to de-select at least one of the selected at least one object.
9. The system according to embodiment 8, further being configured to, in case there is no selected at least one object,
10. The system according to any one of the preceding embodiments, further being configured to change the speed of at least one of the in-zooming and out-zooming of the video recording.
11. A user interface (500), UI, comprising
12. The user interface according to embodiment 11, further being configured to display, on the screen, at least one of the at least one first boundary, the second boundary and the third boundary.
13. The user interface according to embodiment 12, further being configured to display, on the screen, at least one indication (285, 295) of a center portion of at least one of the at least one first boundary, the second boundary and the third boundary.
14. The user interface according to any one of the embodiments 11-13, wherein the user interface is a touch-sensitive user interface.
15. The user interface according to embodiment 14, further being configured to select at least one object based on a marking by a user on the screen on the at least one object, and subsequently, track the selected at least one object.
16. The user interface according to embodiment 15, wherein the marking by a user on the screen of at least one object comprises at least one tapping by the user on the screen on the at least one object.
17. The user interface according to embodiment 15 or 16, wherein the marking by a user on the screen of the at least one object comprises an at least partially encircling marking of the at least one object on the screen.
18. The user interface according to any one of the embodiments 11-17, further comprising a user input function configured to associate at least one user input with at least one object on the screen, wherein the user input is selected from a group consisting of eye movement, face movement, hand movement and voice, and wherein the user interface is configured to select at least one object based on the user input function.
19. The user interface according to embodiment 18, wherein the user input function is an eye-tracking function configured to associate at least one eye movement of a user with at least one object on the screen, and wherein the user interface is configured to select at least one object based on the eye-tracking function.
20. The user interface according to any one of the embodiments 11-19, further being configured to:
21. The user interface according to any one of the embodiments 11-20, further being configured to:
22. The user interface according to embodiment 21, further being configured to:
23. The user interface of embodiment 22, further being configured to:
24. The user interface according to any one of the embodiments 19-23, further being configured to associate the at least one gesture with a re-positioning of the second boundary on the screen.
25. The user interface according to embodiment 24, further being configured to register the at least one gesture as a scroll gesture by a user on the screen.
26. The user interface according to any one of the embodiments 11-25, further being configured to:
27. The user interface according to embodiment 26, further being configured to:
28. The user interface according to embodiment 26 or 27, further being configured to:
29. The user interface according to embodiment 28, wherein the at least one visual indicator comprises at least one arrow.
30. The user interface according to any one of the embodiments 26-29, and wherein the device is configured to generate a tactile alert, the user interface further being configured to:
31. The user interface according to any one of the embodiments 26-30, and wherein the device is configured to generate an auditory alert, further being configured to:
32. The user interface according to any one of the embodiments 11-31, further being configured to display, on a peripheral portion of the screen, the second view of the video recording.
33. The user interface according to any one of the embodiments 11-32, further being configured to, in case a performed in-zooming or out-zooming of the video recording is stopped,
34. The user interface according to embodiment 33, wherein the at least one indicator comprises a visual indicator, and wherein the user interface is configured to, in case a performed in-zooming or out-zooming of the video recording is stopped,
35. The user interface according to embodiment 33 or 34, wherein the at least one indicator comprises a tactile alert, and wherein the user interface is configured to:
36. The user interface according to any one of embodiments 33-35, wherein the at least one indicator comprises an auditory alert, and wherein the user interface is configured to:
37. A device for video recording, comprising
38. A mobile device (300), comprising
39. A method for zooming of a video recording, the method comprising the steps of:
40. The method according to embodiment 39, wherein the at least one predetermined event is selected from a group consisting of
41. A computer program comprising computer readable code for causing a computer to carry out the steps of the method according to embodiment 39 or 40 when the computer program is carried out on the computer.
Number | Date | Country | Kind |
---|---|---|---|
16171711.1 | May 2016 | EP | regional |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2017/061348 | 5/11/2017 | WO | 00 |