The present invention relates to a data encoding device.
Existing motion conversion arrangements translate motion from a video or other type of media into movement of a mechanical output device, such that the mechanical output device moves synchronously with events portrayed in the video. For example, in 4D movie theaters, theater seats include motors that move the seats in response to objects moving in the associated film. These known systems rely on a file containing data that corresponds to movement of objects shown in the associated video. Existing motion detection systems are disclosed in U.S. Pat. Nos. 4,458,266 and 8,378,794, which are incorporated by reference as if fully set forth herein.
Creating files that link motion of objects in a video with a mechanical output is a time-consuming and labor-intensive process. It typically requires a human operator to watch the video and replicate the movement of objects on the screen. The operator's manual input is captured and synchronized with the movie. This process demands prolonged, concentrated attention and labor, and it results in an imprecise translation of the movement in the video to the movement of the output device.
Known techniques parameterize the movement of objects depicted in video data. These techniques analyze frames in a video and compare image data to determine whether parts of the image move to a different location from one frame to the next. However, the movement analysis techniques of existing systems are not suitable for analyzing the motion of specific objects in a video. Current systems analyze movement throughout an entire image and generate overall data for a scene shown in the video; they cannot generate data for specific objects in the video.
It would be desirable to provide an improved arrangement for encoding and extracting data from motion that is less labor-intensive than known systems and provides precise data encoding and extraction.
An improved system and method for extraction of data associated with motion in media is provided. The system and method disclosed herein provide automated or semi-automated extraction of data related to movement in a media file for the purpose of moving mechanical devices in synchrony with events portrayed in the media file. The system disclosed herein allows interactive selection of regions of interest related to objects, followed by automated detection of movement of those objects through automatic analysis of changing image patterns or morphology around a tracked object. The extracted data may be used to operate or otherwise provide movement of a remote device. The extracted data may also be used to synchronize the motion in the media with the movement of a remote device.
In one embodiment, a video tracking method is disclosed. The method includes: (a) acquiring video images including a plurality of frames; (b) selecting a first frame of the plurality of frames; (c) positioning a cursor on the first frame and selecting an area that is a region of interest of the first frame; (d) analyzing the area to detect parameters associated with movement of the area of the first frame and a surrounding region of the area; and (e) tracking the area in subsequent frames of the plurality of frames. Data associated with movement of the area can be synchronized with the video images. The data associated with movement of the area can be used to control or drive movement of a remote device.
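The disclosure does not tie steps (a) through (e) to any particular library, but a minimal sketch using OpenCV conveys the flow; the CSRT tracker, the window title, and the file name sample.mp4 are illustrative assumptions rather than part of the disclosed method.

```python
# Illustrative sketch of steps (a)-(e); OpenCV's CSRT tracker stands in
# for the unspecified tracking algorithm. Requires opencv-contrib-python.
import cv2

cap = cv2.VideoCapture("sample.mp4")            # (a) acquire video images
ok, first_frame = cap.read()                    # (b) select a first frame
if not ok:
    raise RuntimeError("could not read video")

# (c) position a cursor on the first frame and select a region of interest
bbox = cv2.selectROI("select region of interest", first_frame)

# (d) analyze the area and its surrounding region; CSRT internally models
# the selected patch together with a search window around it
tracker = cv2.TrackerCSRT_create()
tracker.init(first_frame, bbox)

positions = []                                   # movement data, per frame
while True:                                      # (e) track subsequent frames
    ok, frame = cap.read()
    if not ok:
        break
    found, (x, y, w, h) = tracker.update(frame)
    if found:
        positions.append((x + w / 2, y + h / 2))  # center of the tracked area
cap.release()
```

Because each entry in positions corresponds to one frame, the movement data is inherently synchronized with the video images and can later be replayed against the frame timestamps.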
The methods, systems, and algorithms disclosed herein allow a user to extract data from a media file or video related to motion within frames of the media file or video. The user can select a portion of the frame, which can vary in shape and size, and reliably track the portion of the frame in subsequent frames. Data associated with this portion of the frame can then be used to provide an input signal to a device, such as a sex toy device, that imitates or mimics motion captured from the media file or video, or otherwise moves in response to the data corresponding to motion captured from the media file or video.
A more detailed understanding may be had from the following description, given by way of example in conjunction with the accompanying drawings.
According to one embodiment, a portion of an image or screen, which may be referred to as a “specific object,” is identified in frames of a media file, such as a video file. The specific object is followed throughout the video data while a movement detection algorithm detects and tracks the specific object and its movement. The specific object can also be referred to herein as a target area or area of interest. According to one embodiment, a method for extracting data from a specific object in a media file includes acquiring video image data, interactively tracking objects of interest through an input device controlled by a user, and generating movement data through image processing code based on the data created by the user and by tracking the video images. According to one embodiment, a method for tracking objects by a user identifies the location of a specific moving object and quantifies a rate of motion for the specific moving object.
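As a rough illustration of quantifying a rate of motion, the following hypothetical helper converts per-frame center positions (such as the positions list in the sketch above) into speeds; the function name and the pixels-per-second unit are assumptions for the sketch.

```python
# Hypothetical helper: turn per-frame tracked centers into a rate of
# motion. `fps` is the frame rate of the source video.
def rate_of_motion(positions, fps):
    rates = []
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        displacement = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5  # pixels/frame
        rates.append(displacement * fps)                          # pixels/second
    return rates
```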
Throughout the description, the general concept of combining a media file with an output file is described. The embodiments can produce a single data file that includes media, i.e., a video portion, as well as a tracking portion that synchronizes an output signal with the media. The timing of the visual media portions of the file and the output signal can be synchronized through a variety of known methods, such as those described in U.S. Pat. No. 8,378,794.
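The disclosure does not fix a file format for the tracking portion, but one plausible sketch stores timestamped samples alongside the video so the output signal can be lined up with playback; the JSON layout and field names here are illustrative only.

```python
# One possible container for the tracking portion: timestamped samples
# serialized next to the video. Field names are illustrative.
import json

def write_tracking_portion(path, positions, fps):
    samples = [
        {"t": i / fps, "x": x, "y": y}    # time in seconds, tracked center
        for i, (x, y) in enumerate(positions)
    ]
    with open(path, "w") as f:
        json.dump({"fps": fps, "samples": samples}, f)
```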
The encoder 10 can be connected to a network 2. In one embodiment, objects of interest are tracked interactively through an input device 11, and video data related to the subject 9 is processed by a motion detecting algorithm in the processor 3. In one embodiment, the input device 11 is a mouse, but one of ordinary skill in the art would recognize that any type of input device can be used. A user can focus on specific objects in the recorded image of the subject 9 by manipulating a position of the input device 11, which is tracked on a display 4. The display 4 overlays a position of a cursor of the input device 11 over the recorded image data of the subject 9. The user can then manipulate specific portions of the recorded image data of the subject 9 to generate motion-dependent data for those portions. Motion-dependent data is transmitted to an output device 13, including an output device processor 6 that causes a motor 5 of an output device 7 to actuate an object 8, wherein movement of the object 8 is related to movement of the subject 9.
In one embodiment, an alternative system 13 can be provided that includes only the processor 3, the display 4, the input device 11, and the output device 7. In this embodiment, the subject 9 is provided completely separate from the system 13. The system 13 can be used in conjunction with any type of video or media file, wherein a user can play the video or media file on the display 4. As the user plays the video or media file, the user can manipulate the input device 11 to focus a cursor 4′ on the display 4 on a specific region of action in the video or media file. The cursor 4′ can have any shape, and its shape can be modified such that a user can choose a shape suited to a specific region of action on the display 4.
A user can then select a next region as the search region. A first step size is set at Smax. The method includes comparing a neighborhood of areas of interest in sequential images. As used herein, a neighborhood includes a surrounding region. In one embodiment, the neighborhood is an area concentrically arranged around the search region.
The method 200 includes searching a neighborhood of the area of interest. The method can include searching immediately subsequent frames to find locations in the neighborhood that are similar in morphology to the location of the area of interest. A center is moved to a location of lowest cost. The algorithm adaptively changes the search step size and extends the search away from the center of the location of the area of interest.
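A minimal sketch of such an adaptive search, assuming a sum-of-absolute-differences cost (the disclosure does not name a particular cost function); the helper names, the five-point neighborhood, and the halving of the step size are illustrative choices.

```python
import numpy as np

def sad(frame, ref, cx, cy):
    """Sum of absolute differences between the reference patch and the
    same-sized patch of `frame` centered at (cx, cy)."""
    h, w = ref.shape[:2]
    y0, x0 = cy - h // 2, cx - w // 2
    if y0 < 0 or x0 < 0 or y0 + h > frame.shape[0] or x0 + w > frame.shape[1]:
        return np.inf                      # candidate falls off the frame
    patch = frame[y0:y0 + h, x0:x0 + w]
    return np.abs(patch.astype(int) - ref.astype(int)).sum()

def adaptive_search(frame, ref, cx, cy, s_max):
    step = s_max                           # first step size is set at Smax
    while step >= 1:
        # compare the center with its neighborhood at the current step
        candidates = [(cx, cy), (cx + step, cy), (cx - step, cy),
                      (cx, cy + step), (cx, cy - step)]
        best = min(candidates, key=lambda c: sad(frame, ref, *c))
        if best == (cx, cy):
            step //= 2                     # adaptively shrink the step size
        else:
            cx, cy = best                  # move center to lowest cost
    return cx, cy
```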
According to the flowchart of FIG. 2, a method 200 for tracking an area of interest is illustrated. As shown in FIG. 2, the method 200 begins by selecting an area of interest and establishing a search region around it.
Once the search region is established, the method sets the initial location of the search at the center of the search region at step 225, and sets the step size at the maximum size to be used in the search at step 230. A series of analysis steps are then carried out for the search region. These steps can include any known type of image analysis, such as vector analysis, object-based image analysis, segmentation, classification, and spatial, spectral, and temporal scale analysis. One of ordinary skill in the art would understand that alternative types of image analysis can be implemented in this algorithm.
Motion capture analysis and motion detection can be carried out according to a variety of methods and algorithms. In one embodiment, analysis of the frames is carried out by obtaining a reference image from a first frame, and then comparing this reference frame to a subsequent frame. In one embodiment, the algorithm counts the number of pixels that change from one frame or region of a frame to a subsequent frame or region of a subsequent frame. The algorithm continuously analyzes the series of frames to determine whether the number of changed pixels exceeds a predetermined value. If the predetermined value is exceeded, a triggering event occurs. The analysis used in the algorithms disclosed herein also allows for adjustments based on sensitivity and ratio/percentage settings. Other types of motion detection and tracking algorithms can be used in any of the embodiments disclosed herein.
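The pixel-counting approach described above can be sketched as follows, assuming two same-sized frames; the sensitivity and ratio parameters stand in for the adjustable sensitivity and ratio/percentage settings mentioned in the text, and their names and defaults are illustrative.

```python
import cv2
import numpy as np

def motion_triggered(reference, frame, sensitivity=25, ratio=0.05):
    """Return True if enough pixels changed to constitute a triggering
    event. `reference` and `frame` must have the same shape."""
    diff = cv2.absdiff(reference, frame)             # per-pixel change
    changed = np.count_nonzero(diff > sensitivity)   # pixels that changed
    return changed / diff.size > ratio               # predetermined value exceeded?
```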
The encoding system 504 includes multiple sub-components. The encoding system 504 includes a recorder 508. The recorder 508 is preferably a hand-held device. The recorder 508 can include an image recording device, such as a camera. The recorder 508 projects a beam or cone onto the media source 502 to record relative motion from the media source 502. In one embodiment, the recorder 508 is connected to a CPU 510. In one embodiment, the CPU 510 includes a processor 512, a memory unit 514, and a transmitter/receiver unit 516. The CPU 510 can include any other known computing or processing component for receiving data from the recorder 508. The encoding system 504 receives an input of data associated with motion detected by the recorder 508, and outputs a signal representative of that data. A user can adjust the recorder 508 relative to the media source 502 in a variety of ways. For example, the user can manually move the recorder 508 to focus on different regions of the media source 502. The user can adjust a size of the beam or cone of the recorder 508 to record a larger or smaller region of the media source 502. The user can also adjust a shape of the beam or cone of the recorder 508 projected onto the media source 502.
As shown in FIG. 5, the encoding system 504 communicates with the output arrangement 506 via a wireless network 520.
The output arrangement 506 includes a transmitter/receiver unit 522. The transmitter/receiver unit 522 receives a signal from the encoding system 504 via the wireless network 520. The output arrangement 506 includes a motor 524. The motor 524 is configured to provide a driving motion based on signals received from the encoding system 504. The motor 524 drives an output device 526. In one embodiment, the output device 526 is a phallic sex toy device. One of ordinary skill in the art would recognize from the present disclosure that alternative outputs can be provided with varying shapes, sizes, dimensions, profiles, etc.
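The disclosure does not specify how received signals are scaled to drive the motor 524, so the following only sketches one plausible mapping from a tracked rate of motion to a bounded drive level; the 0-255 range and the function name are assumptions.

```python
# Hypothetical scaling step: map a rate of motion onto a motor drive
# level. The 0-255 intensity range is an assumed property of the device.
def to_motor_intensity(rate, max_rate):
    rate = max(0.0, min(rate, max_rate))   # clamp to the expected range
    return int(255 * rate / max_rate)      # linear map to a drive level
```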
Another embodiment is illustrated in FIGS. 6A and 6B, which show sequential frames 602a, 602b of a video or media file.
The user manipulates a position of the cursor 610 to create a region of interest 612 that focuses on any portion of the frame 602a. The region of interest 612 contains the object to be tracked, i.e., the hand 620, and does not include objects that are not to be tracked, i.e., the foot 630. The term cursor is used generically to refer to element 610. One of ordinary skill in the art would understand that the cursor 610 can include a brush or pointer and can have any type of shape or dimension. The cursor 610 can be moved interactively by a user to select a specific region of interest to the user for data encoding. In one embodiment, the cursor 610 is a plain pointer. In another embodiment, the cursor 610 is a brush-shaped icon or cloud, analogous to the brush region described above. In another embodiment, the cursor 610 is a spray paint icon.
The user can move a mouse or other object to manipulate a position of the cursor 610 relative to the frame 602a. Once in a desired position on the frame, the user can select a specific region of the frame 602a, and the cursor 610 marks the specific region of the frame 602a. This marking can occur by a variety of methods, such as discoloring the specific region or otherwise differentiating the specific region from adjacent pixels and surrounding colors. This selecting/marking step does not affect the underlying video file or frames 602a, 602b; instead, it is an overlay image, pattern, marking, or indicator that is used by the algorithm for tracking purposes.
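A sketch of this non-destructive marking, assuming a color (H, W, 3) frame and a rectangular selection; keeping the selection in a separate mask and tinting only a display copy is one way to leave the source frames untouched, and all names here are illustrative.

```python
import numpy as np

def mark_region(frame, bbox):
    """Build an overlay mask for (x, y, w, h) and a green-tinted display
    copy; the original `frame` is never modified."""
    x, y, w, h = bbox
    mask = np.zeros(frame.shape[:2], dtype=bool)
    mask[y:y + h, x:x + w] = True              # overlay, not part of the video
    display = frame.copy()                      # source frame stays intact
    display[mask] = (0.5 * display[mask] + (0, 127, 0)).astype(frame.dtype)
    return mask, display
```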
The tracking algorithm automatically detects that the hand 620 moved from a raised position in the frame 602a to a lowered position in the frame 602b.
The embodiments disclosed herein allow a user to extract motion or movement data from any video or media file. The embodiments disclosed herein can be embodied as software or another computer program, wherein a user downloads or installs the program. The program can be run on any known computing device. The video or media file can be played within a window on the user's computer. The program can include a toolbox or other menu function to allow the user to adjust the cursor or brush region, control playback of the media file or video, and issue other commands. The user can manipulate an input device, such as a mouse, to move the cursor or brush region relative to a selected frame. The user can activate the input device to select a specific region of the frame. The cursor can allow the user to draw a closed shape around a specific region to focus on for analysis.
It will be appreciated that the foregoing is presented by way of illustration only and not by way of any limitation. It is contemplated that various alternatives and modifications may be made to the described embodiments without departing from the spirit and scope of the invention. Having thus described the present invention in detail, it is to be appreciated and will be apparent to those skilled in the art that many physical changes, only a few of which are exemplified in the detailed description of the invention, could be made without altering the inventive concepts and principles embodied therein. It is also to be appreciated that numerous embodiments incorporating only part of the preferred embodiment are possible which do not alter, with respect to those parts, the inventive concepts and principles embodied therein. The present embodiment and optional configurations are therefore to be considered in all respects as exemplary and/or illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all alternate embodiments and changes to this embodiment which come within the meaning and range of equivalency of said claims are therefore to be embraced therein.
The following documents are incorporated by reference as if fully set forth: U.S. Provisional Patent Application 62/447,354 filed Jan. 17, 2017; and U.S. Non-Provisional patent application Ser. No. 15/873,373, filed Jan. 17, 2018.