The present specification generally relates to the field of image processing and, in particular, describes an advanced image recognition and editing system which enables selection, recognition and modification of at least a portion of an image in standalone images or video files. More specifically, some image processing steps may be executed in a social network mobile application using a customized virtual keyboard.
There exist several systems and computer applications which are used for image recognition and modification. Some advanced tools, such as Adobe Photoshop® and Adobe Elements®, enable the users to modify images, but in a limited manner. For example, users are able to draw lines over the images and make modifications. In addition, users can change the contrast level and colors of images to change the overall look and feel of an image. The examples provided above, however, are difficult to use and require that a user spend significant amounts of time learning a program and perfecting a modified image.
Popular mobile applications, such as Instagram®, allow people to select images and apply various types of filters on them to create multiple effects. Most of these filters enable a user to change the look and feel of the image; for example, one filter in Instagram® allows for the creation of a faded image while another filter allows for the creation of a more vibrant image. All of the abovementioned filters, however, are very limited in that they only modify color combinations to change the overall look and feel of the complete image.
Most of the applications known in the prior art allow basic level modifications to an image. Further, the above applications do not allow for any modification of images in a video file.
There is a need for applications, particularly social network mobile applications, which can modify images in a much more advanced manner, including separating videos and images into modifiable portions and adding or removing components to or from an image in standalone image files and video files. In other words, there is a need, within social networking, for applications which enable users to perform complex image and video editing via an easy-to-use and intuitive interface on their mobile devices.
The present specification discloses a method for advanced image processing comprising: identifying at least one portion of an input image based on user instructions; comparing pixels corresponding to the at least one portion of the input image with pixels corresponding to other portions of the input image to detect the entire section corresponding to the identified at least one portion; and, creating a new image comprising the detected section.
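The pixel-comparison step described above can be sketched as a simple region-growing (flood-fill) pass: starting from a user-identified seed pixel, neighboring pixels whose values fall within a tolerance are absorbed into the detected section. The function name, grayscale representation, and tolerance value below are illustrative assumptions, not part of the specification.

```python
from collections import deque

def detect_section(image, seed, tolerance=10):
    """Grow a region from the user-selected seed pixel, absorbing
    4-connected neighbors whose grayscale values are within
    `tolerance` of the seed value. Returns the set of (row, col)
    coordinates forming the detected section."""
    rows, cols = len(image), len(image[0])
    seed_value = image[seed[0]][seed[1]]
    section = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in section
                    and abs(image[nr][nc] - seed_value) <= tolerance):
                section.add((nr, nc))
                queue.append((nr, nc))
    return section

# A 4x4 grayscale image with a bright 2x2 block in the top-left corner.
image = [
    [200, 205, 20, 20],
    [198, 202, 20, 20],
    [20, 20, 20, 20],
    [20, 20, 20, 20],
]
section = detect_section(image, seed=(0, 0))
```

Here the user's single touch at the top-left corner expands to the full bright block, which would then be copied out as the new image.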
The new images created by the methods described in this specification may also be referred to as stickers or emojis.
Optionally, an edge detection process is used to identify start and end points of the section to be detected.
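The edge-detection step can be illustrated along a single scanline: the start and end points of a section show up as large jumps between adjacent pixel values. A gradient-threshold sketch follows; the threshold value is an assumption for illustration.

```python
def find_edges(scanline, threshold=50):
    """Return indices where the absolute difference between adjacent
    pixels exceeds `threshold`; these mark candidate start and end
    points of a section along the scanline."""
    return [i for i in range(1, len(scanline))
            if abs(scanline[i] - scanline[i - 1]) > threshold]

# Background value 20, section pixels valued 200 spanning indices 3..6.
scanline = [20, 20, 20, 200, 200, 200, 200, 20, 20]
edges = find_edges(scanline)
```

In this example the section starts at index 3 and the trailing edge is detected at index 7, immediately after the last section pixel.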
Still optionally, image resolution is normalized before processing.
The normalization may be conducted at a remote server location.
Optionally, normalization is conducted in parallel at a client device as well as at a remote server location.
The normalization may be conducted at the client device.
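The resolution-normalization step, whether run on the client or the server, can be sketched as resampling every input to a fixed target size. Nearest-neighbor sampling is used here purely for illustration; the specification does not mandate a particular resampling method.

```python
def normalize_resolution(image, target_rows, target_cols):
    """Resample a 2-D image to a fixed target resolution using
    nearest-neighbor sampling, so all inputs reach the processing
    pipeline at a uniform size."""
    rows, cols = len(image), len(image[0])
    return [
        [image[r * rows // target_rows][c * cols // target_cols]
         for c in range(target_cols)]
        for r in range(target_rows)
    ]

# Downsample a 4x4 image to 2x2.
small = normalize_resolution(
    [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [9, 10, 11, 12],
     [13, 14, 15, 16]],
    2, 2)
```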
Optionally, the new image is stored in an image gallery located at a client device or at a remote server location. The new image may be used as a personalized emoticon while communicating with other users over internal or external platforms.
The new image may be shared with other users using an image processing system. The new image may also be shared with other users via external social networking platforms or messaging applications.
Optionally, the method further comprises superimposing the new image comprising the detected section over a similar type of section in a target image selected by the user, thus forming a modified image. Metadata related to the modified image may be stored at a remote server location for faster image processing. The metadata may comprise at least one of the following fields: name/location of target image; properties of the target image, such as size/width; name/location of the new image; properties of the new image, such as size/width; location of the new image on the target image; time stamp of creation of the modified image; name of the user who created the modified image.
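The metadata record for a modified image might be assembled as follows; the field names mirror the list above but the exact schema, function name, and sample values are illustrative assumptions.

```python
from datetime import datetime, timezone

def build_metadata(target_name, target_path, new_name, new_path,
                   position, user):
    """Assemble a metadata record for a modified image, covering the
    fields listed in the specification: target and new image
    name/location, placement position, creation time, and creator."""
    return {
        "target_image": {"name": target_name, "location": target_path},
        "new_image": {"name": new_name, "location": new_path},
        "position_on_target": position,  # (x, y) placement coordinates
        "created_at": datetime.now(timezone.utc).isoformat(),
        "created_by": user,
    }

# Hypothetical file names and coordinates for illustration.
record = build_metadata("beach.jpg", "/gallery/beach.jpg",
                        "face_bom.png", "/gallery/face_bom.png",
                        (120, 80), "alice")
```

Storing such a record server-side lets the modified image be re-rendered from its parts without reprocessing the pixels.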
Optionally, a second image is superimposed over the new image, wherein said second image acts as a watermark.
Optionally, the method further comprises tagging a user, via a user profile, with the new image; notifying the user that his profile has been tagged with the new image; storing the new image in the image gallery corresponding to said tagged user with his permission.
Various processing steps of the methods of the present specification may be executed using a computer application which comprises a virtual keyboard embedded within said computer application.
The virtual keyboard may be customized for each user such that each user can access newly updated images or stickers in his or her network through the virtual keyboard.
The present specification also discloses a computer program product configured to enable a data processing apparatus to perform operations comprising: identifying at least one portion of an input image based on user instructions; comparing pixels corresponding to the at least one portion of the input image with pixels corresponding to other portions of the input image to detect the entire section corresponding to the identified at least one portion; and, creating a new image comprising the detected section.
Optionally, the computer program product further comprises a virtual keyboard accessible to a user to execute various instructions for image processing. The virtual keyboard can be optionally customized for a user. The virtual keyboard may provide access to a gallery of images which is customized for each user. Optionally, a user can share his virtual keyboard with other users over a network.
The present specification also discloses a method for advanced image processing comprising: selecting a target image; identifying a section in the target image; selecting a new image from a gallery of images, wherein each of said new images in the gallery comprise a section which is of similar type as the identified section in the target image; and superimposing the new image over the identified section in the target image.
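The superimposition step can be sketched as pasting the new image onto the target at the coordinates of the identified section, clipping at the target's borders. This is a minimal sketch: real compositing would also blend edges, which is omitted here.

```python
def superimpose(target, overlay, top, left):
    """Return a copy of `target` with `overlay` pasted at (top, left),
    clipping any overlay pixels that fall outside the target."""
    result = [row[:] for row in target]
    for r, row in enumerate(overlay):
        for c, value in enumerate(row):
            if 0 <= top + r < len(target) and 0 <= left + c < len(target[0]):
                result[top + r][left + c] = value
    return result

modified = superimpose(
    [[0] * 4 for _ in range(4)],   # 4x4 blank target image
    [[9, 9], [9, 9]],              # 2x2 new image (the overlay)
    1, 1)                          # placement at row 1, column 1
```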
Various steps of the method may be executed through a computer application. Optionally, the computer application comprises a virtual keyboard which can be customized for each user. The virtual keyboard may provide access to a gallery of images which is customized for each user.
The gallery of images may comprise images related to the current location of the user device.
The present specification also discloses a method for processing video to be shared on an on-line social network, comprising: selecting a reference frame from an input video file; receiving user instructions to identify sections in said reference frame which are to be retained and/or removed from the complete video file; modifying said reference frame based on said user instructions; analyzing other frames in the video file to identify relevant frames comprising sections similar to the sections which are identified by the user in said reference frame; modifying all relevant frames based on the instructions received from the user for said reference frame; and creating a new video file comprising the modified frames, wherein said video processing is performed according to instructions input by a user via an application running on a mobile device.
The process of identifying said sections in video frames may comprise identifying at least one portion of the video frame based on user instructions and comparing pixels corresponding to the at least one portion of the video frame with pixels corresponding to other portions of the video frame to detect the entire section corresponding to the identified at least one portion.
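One way to sketch the frame-analysis step: take the pixel coordinates the user marked in the reference frame and flag every other frame whose pixels at those coordinates fall within a similarity tolerance, i.e. frames that still contain the identified section. The function name and tolerance are illustrative assumptions.

```python
def find_relevant_frames(frames, reference_index, section, tolerance=10):
    """Return indices of frames whose pixels at the coordinates in
    `section` are within `tolerance` of the reference frame's pixels,
    i.e. frames that still contain the user-identified section."""
    reference = frames[reference_index]
    relevant = []
    for i, frame in enumerate(frames):
        if all(abs(frame[r][c] - reference[r][c]) <= tolerance
               for r, c in section):
            relevant.append(i)
    return relevant

frames = [
    [[200, 20], [20, 20]],   # reference frame: bright pixel at (0, 0)
    [[198, 20], [20, 20]],   # section still present (within tolerance)
    [[20, 20], [20, 20]],    # section gone from this frame
]
relevant = find_relevant_frames(frames, 0, {(0, 0)})
```

The user's edit (removal, retention, or replacement) would then be applied only to the frames in `relevant`.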
Optionally, said reference frame comprises the first frame of the input video file.
Optionally, said video file is converted to an animated .GIF format before processing.
Still optionally, said video file is preprocessed to normalize it as per the requirement of a computer application executing the various steps of said method.
The preprocessing may comprise modifying the length of the video, modifying the frames per second in said video, modifying the resolution in said video, or modifying the format of said video.
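Of the preprocessing operations listed above, the frames-per-second change can be sketched as keeping every n-th frame; this simple subsampling stands in for whatever resampling the application actually performs.

```python
def resample_fps(frames, source_fps, target_fps):
    """Reduce a frame list from `source_fps` to `target_fps` by keeping
    every (source_fps / target_fps)-th frame; a simple stand-in for
    the frames-per-second normalization step."""
    step = source_fps / target_fps
    return [frames[int(i * step)] for i in range(int(len(frames) / step))]

# A 30 fps clip of 6 frames resampled down to 10 fps keeps every 3rd frame.
kept = resample_fps(list(range(6)), source_fps=30, target_fps=10)
```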
Various steps of said method may be executed at a client device.
At least one of the steps of said method may be executed at a remote server location.
Optionally, the image section removed from various frames of said video file is replaced with a new image in all such frames. Optionally, an edge detection process is used to identify start and end points of said sections.
The new video file may be stored in an image gallery located at a client device or at a remote server location.
Optionally, the method further comprises sharing the new video file with other users of the computer application used for executing the steps of said method. Optionally, the method further comprises sharing the new video file over external social networking platforms or messaging applications.
Metadata related to the new video file may be stored at a remote server location for faster processing. The metadata may comprise at least one of the following fields: name/location of the video file; properties of the video file, such as size and resolution; time stamp of creation; name of the user who created the modified file.
Optionally, the method further comprises providing a computer application to execute the steps of said video processing and providing a virtual keyboard embedded in said computer application. The virtual keyboard may be customized for each user and is updated based on the newly created image or video files accessible to said user. A user may share his virtual keyboard with other users. Optionally, said modified reference frame is stored in said virtual keyboard.
The present specification also discloses a method for processing video to be shared on an on-line social network, comprising: selecting a reference frame from an input video file; receiving user instructions to identify sections in said reference frame which are to be modified in the complete video file; modifying said sections in said reference frame based on said user instructions; analyzing other frames in the video file to identify relevant frames comprising sections similar to the sections which are identified by the user in said reference frame; modifying all relevant frames based on the instructions received from the user for said reference frame; and creating a new video file comprising the modified frames, wherein said video processing is performed according to instructions input by a user via an application running on a mobile device.
The present specification also discloses a method for video file processing comprising: selecting a reference frame in said input video file; receiving user instructions for identifying a specific section in said reference frame; modifying said reference frame by superimposing a new image over said identified section; analyzing other frames in the video file to identify relevant frames comprising sections similar to the specific section identified by the user in said reference frame; modifying all relevant frames by superimposing said new image on said specific sections in said relevant frames; and creating a new video file comprising the modified frames.
The process of identifying said sections in video frames may comprise identifying at least one portion of the video frame based on user instructions and comparing pixels corresponding to the at least one portion of the video frame with pixels corresponding to other portions of the video frame to detect the entire section corresponding to the identified at least one portion.
Optionally, said reference frame comprises the first frame of the input video file.
Optionally, said video file is converted to an animated .GIF format before processing.
Still optionally, said video file is preprocessed to normalize it as per the requirement of a computer application executing the various steps of said method.
The preprocessing may comprise modifying the length of the video, modifying the frames per second in said video, modifying the resolution in said video, or modifying the format of said video.
Various steps of said method may be executed at a client device.
At least one of the steps of said method may be executed at a remote server location.
Optionally, an edge detection process is used to identify start and end points of said sections.
The new video file may be stored in an image gallery located at a client device or at a remote server location.
Optionally, the method further comprises sharing the new video file with other users of the computer application used for executing the steps of said method. Optionally, the method further comprises sharing the new video file over external social networking platforms or messaging applications.
Metadata related to the new video file may be stored at a remote server location for faster processing. The metadata may comprise at least one of the following fields: name/location of the video file; properties of the video file, such as size and resolution; time stamp of creation; name of the user who created the modified file.
Optionally, the method further comprises providing a computer application to execute the steps of said video processing and providing a virtual keyboard embedded in said computer application. The virtual keyboard may be customized for each user and is updated based on the newly created image or video files accessible to said user. A user may share his virtual keyboard with other users. Optionally, said modified reference frame is stored in said virtual keyboard.
The present specification also discloses a method for processing a video file and posting said processed video file to an on-line social network, comprising: selecting a reference frame from said video file; receiving a user instruction identifying sections in said reference frame which are to be retained or removed from the video file; modifying said reference frame based on said user instruction; analyzing a plurality of other frames in the video file to identify similar frames comprising sections similar to the sections identified by the user in said reference frame; modifying all similar frames based on the user instruction; and creating a new video file comprising the modified frames, wherein said video processing is performed according to instructions input by a user via an application running on a mobile device.
Optionally, said user instruction identifying sections in said reference frame which are to be retained or removed from the video file is performed by physically touching a portion of a screen of a mobile device, said portion of the screen being associated with pixels of the reference frame which are to be retained or removed from the video file.
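The mapping from a touched point on the screen to the corresponding pixel of the displayed reference frame can be sketched by scaling screen coordinates to image coordinates. The assumption that the frame fills the screen, and all dimensions below, are illustrative.

```python
def touch_to_pixel(touch_x, touch_y, screen_size, image_size):
    """Map a touch point on the screen to the corresponding pixel in
    the displayed frame, assuming the frame is shown full-screen."""
    screen_w, screen_h = screen_size
    image_w, image_h = image_size
    return (touch_x * image_w // screen_w, touch_y * image_h // screen_h)

# A touch at (540, 960) on a 1080x1920 screen showing a 640x480 frame.
pixel = touch_to_pixel(540, 960, (1080, 1920), (640, 480))
```

The resulting pixel coordinate would seed the section-detection pass described above.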
Optionally, the process of analyzing the plurality of other frames in the video file to identify frames comprising sections similar to the sections identified by the user in said reference frame is performed by comparing the pixels of the reference frame which are to be retained or removed from the video file with pixels of the plurality of other frames in the video file and identifying those pixels of the plurality of other frames in the video file having similar characteristics to the pixels of the reference frame which are to be retained or removed from the video file.
Optionally, said video file comprises a plurality of frames in sequential order wherein the reference frame is a first frame in said sequential order.
Optionally, said video file is preprocessed to normalize it as per a requirement of a computer application executing said method, wherein said preprocessing comprises at least one of a) modifying a length of the video file, b) modifying a number of frames per second in said video file, c) modifying a resolution of said video file, and d) modifying a format of said video file.
Optionally, metadata related to the new video file is stored at a remote server location, wherein said metadata comprises at least one of a) a field describing a name of the new video file, b) a field describing a location of the new video file, c) a field describing properties of the new video file, d) a field describing a size of the new video file, e) a field describing a resolution of the new video file, f) a field describing a creation time stamp of the new video file, and g) a field describing a name of the user who created the new video file.
The aforementioned and other embodiments of the present invention shall be described in greater depth in the drawings and detailed description provided below.
These and other features and advantages of the present invention will be appreciated, as they become better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
The present specification is directed towards multiple embodiments. The following disclosure is provided in order to enable a person having ordinary skill in the art to practice the invention. Language used in this specification should not be interpreted as a general disavowal of any one specific embodiment or used to limit the claims beyond the meaning of the terms used therein. The general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the invention. Also, the terminology and phraseology used is for the purpose of describing exemplary embodiments and should not be considered limiting. Thus, the present invention is to be accorded the widest scope encompassing numerous alternatives, modifications and equivalents consistent with the principles and features disclosed. For purpose of clarity, details relating to technical material that is known in the technical fields related to the invention have not been described in detail so as not to unnecessarily obscure the present invention.
The present specification describes a method and application for advanced image processing, preferably within the context of a social network. For purposes of this specification, a social network is an on-line community defined by a first set of data, organized into an account in a mobile application or a set of web pages, that is controlled by a first user and defines the interests, profile, images, video, audio, or other information of that first user (collectively, first user data), and a second set of data, organized into an account in a mobile application or a set of web pages, that is controlled by a second user and defines the interests, profile, images, video, audio, or other information of that second user (collectively, second user data), where the first user can selectively grant the second user access to the first user data and/or the second user can selectively grant the first user access to the second user data. It should be appreciated that such selective granting of data access can be applied by any number of first users by and among any number of second users. It should further be appreciated that when a first user grants the second user access to the first user data, the first user is “connected” to the second user. A social networking application is a self-contained software program, typically operating on a mobile computing device, that can be used to access an on-line community, as defined above.
In an embodiment, the application enables recognition of specific sections of an image and allows performing multiple modifications/operations on those specific sections. For the purpose of having proper reference for different types of images created using methods described in the present specification, in an embodiment, images are classified into three categories as per the following nomenclature: TARGETS are photographs, graphics, stock images, or other background images which are used as the source for a BOM or FOTOBOM; BOMS are images created from specific sections of TARGETS; FOTOBOMS are new images created by superimposing one or more BOMS relative to at least one TARGET.
In some embodiments, the BOM and FOTOBOM images created using the methods described in the present specification are also referred to as stickers or emojis. One of ordinary skill in the art can appreciate that the above nomenclature is used for reference and that there are multiple ways in which the images can be referred to without departing from the spirit and scope of the present specification.
In various embodiments, the BOM and FOTOBOM images are created within the context of a social network, as defined above, via an application on a mobile device. In some embodiments, the application includes an easy to use user interface comprising a virtual keyboard incorporating icons of modified images and/or video frames to allow for quick user access.
In an embodiment, the application described in the present specification is used to select an existing TARGET image from memory or any other external source and is provided instructions to detect and highlight the specific sections of this image which are of interest to the user. For example, the TARGET image may be accessed from the memory of a mobile device, such as the internal memory of a cell phone or an SD card of said phone. The TARGET image may also be accessed from an external source, such as a social network. For example, the TARGET image may be accessed by selecting and/or downloading an image from social network applications such as Facebook®, Instagram®, Twitter®, Whatsapp®, Gtalk®, etc. In various embodiments, a user may log in to a social networking application using their login credentials, either directly through the social networking application, or through the application of the present specification, and select, with the option of saving, a TARGET image for modification. In various embodiments, selecting the TARGET image involves touching, swiping, clicking, or pressing and holding the touchscreen of the mobile device over the desired image whereupon the user is prompted with a series of options, including saving the image to local memory and copying the image. In various embodiments, a copied image may then be pasted into the application of the present specification for modification. The method of the present specification, via the above mentioned application interface, processes the TARGET image with the help of advanced algorithms to recognize and expand the sections which are of interest to the user and displays the detected sections as a final output image or BOM on the screen of a user device running the above application.
In an embodiment, the application further enables the users to perform multiple actions using the above created BOM images comprising only specific sections detected by the application. Users can store these images in a file or gallery in memory of a local device or at a remote server for later use. In an embodiment, the users can also share these pictures with other people through messaging applications and social networks. In another embodiment, the application is fully integrated with popular social networking applications and messaging applications such as Facebook®, Instagram®, Twitter®, Whatsapp®, Gtalk® etc. so that the images can be easily shared. In various embodiments, a user may log in to a social networking application using their login credentials, either directly through the social networking application, or through the application of the present specification, and upload the created BOM and/or FOTOBOM image for sharing. The uploaded images can be viewed and, using the application of the present specification, further modified by other users.
In an embodiment, the application described in the present specification is used to select an existing TARGET image from a memory or any other external source, such as a social networking application, and is provided instructions to modify the TARGET image by using a BOM image. A user creates a new BOM image or selects an existing one from an image gallery stored in local device memory or at a remote server location and provides instructions to the application to place this BOM image over a specific area on the TARGET image to create a new image, which is referred to as a FOTOBOM image, in an embodiment. In an embodiment, the application enables a user to perform multiple actions using a FOTOBOM. A user can store a FOTOBOM in image galleries on the local device memory or at a remote server location and can also share the same with other people through social networking platforms (by uploading the created images) and messaging applications integrated with the application as described in the present specification.
One of ordinary skill in the art can appreciate that there may be multiple embodiments through which a user can highlight a portion without departing from the spirit and scope of this invention. In an embodiment comprising a touch screen device on which the application is run, the user can touch, swipe, or click a portion of the section of interest, and the application will detect the entire section using methods disclosed in the present specification. In an alternate embodiment, the application allows the user to provide information on both the sections which are to be included in the BOM image and the sections which are to be removed from it. The application accordingly processes this information to detect the sections to be included in the BOM image.
In an embodiment, the application is configured to receive additional information from the user to process specific portions of an image as required. The availability of this additional information enables more accurate detection of the specific sections of the image. In an embodiment, the user provides instructions to highlight the portions of the image that comprise the border or edges of the section to be included in the BOM image. In another embodiment, the user provides instructions to apply specific filters to change the look and feel of the image. In another embodiment, the user can provide instructions to smooth, blend, or apply a glow effect to specific portions of the image.
As shown in the accompanying drawings, once the user has selected both TARGET and BOM images, the user provides instructions to the application, through an application interface, regarding the placement coordinates of the BOM over the TARGET. These instructions can be provided in multiple ways. In an embodiment, the user can drag the selected BOM image and drop it over the selected TARGET image at the desired location with the help of a computer key or mouse, or by using the touchscreen of a touchscreen-enabled device. In an embodiment, the application provides an option for the user to supply the exact coordinates of the TARGET image over which the BOM is to be placed. The user might use this option to fine-tune the positioning. Generally, while creating the FOTOBOM, BOM images will be placed over the section of the TARGET image which falls in the same category as the section displayed in the BOM image. For example, the BOM image might represent the face of a person, and a user will generally place it over the face section of another image. However, one of ordinary skill in the art can appreciate that the methods disclosed in this specification do not impose any such limitation, and it is up to the creativity of a user how he wants to combine BOM and TARGET images to create FOTOBOMS. In various embodiments, the application provides a library of pre-existing TARGET and BOM images falling in various categories which can be used. For example, in an embodiment, to enable creating funny characters using images of various animals, the application has a library of BOMS comprising faces of various types of animals. The users can select any of these pre-existing BOMS and place them over the face sections of their friends, etc., to create funny images which can be shared over a social network with mutual friends.
Once the user provides instructions regarding placement coordinates of the BOM on the TARGET image, as shown in step 118, the BOM is superimposed over the TARGET to create a FOTOBOM. Subsequently, in step 119, the image is fine-tuned, as the BOM might not fit accurately over the section of the TARGET image which it is to overlay. In an embodiment, the methods disclosed in this specification use pixel-by-pixel comparisons and edge detection methods, as shown in steps 120 and 121, to integrate the BOM with the TARGET in a seamless manner. In an embodiment, the application might also adjust the dimensions of edges for seamless integration of the BOM with the TARGET. Steps 122 and 123 depict the options available to a user once the newly created FOTOBOM is ready and displayed on the screen of the user device running the application. As shown in step 122, the user has the option to store the FOTOBOM in local device memory or at a remote location and also to define its properties, such as name, category, privacy settings, etc. In step 123, the user is provided with an option to share the FOTOBOM with other people over social networking platforms (by logging into the social networking platform and uploading the FOTOBOM, as described above) and messaging applications integrated with the application, as described in the present specification.
In the embodiments above, although the methods of the present specification have been disclosed in the form of an application or a computer program which can be used on any user device, such as a mobile phone, tablet computer, laptop, or desktop computer, one of ordinary skill in the art can appreciate that there could be multiple other embodiments through which to practice the invention. In an embodiment, a remote server and a web-based interface are used to implement and practice the methods disclosed herein, and no application is loaded on the user device. The user can visit the webpage to access this system.
In an embodiment, the invention as described in the present specification comprises a web-based computer application or mobile application through which a user can create an account and interact with other users of the same application. The application acts like a social network over which users can capture images, modify them in the advanced ways described in this specification, and share them over the network with other users. The user account may include basic information provided by the user, such as photographs, a brief introduction, location, a friends list, image galleries (both public and private), and security settings, among others.
The application of the present specification, in an embodiment, includes advanced tools that allow users to capture pictures, search for pictures from internal or external sources (such as social networks), name them, store them in galleries, modify them, and share them with selected people in their social networks. Users can also share the images from their accounts with other people through various types of external communication platforms. In an embodiment, the system is integrated with direct messaging platforms such as Gtalk®, Whatsapp®, etc. to make this process smooth and convenient. In one embodiment, the user logs in to the direct messaging platform through the application and, after creating a BOM or FOTOBOM image, is provided an option, via the application interface, to upload and share the created image via the direct messaging platform. The computer application is also integrated with social media networks such as Twitter®, Facebook®, etc., and a user can directly share the images with his wider network on these platforms. In one embodiment, the user logs in to the social media network through the application and, after creating a BOM or FOTOBOM image, is provided an option, via the application interface, to upload and share the created image via the social media network. In an embodiment, users can search for pictures from the image galleries of other users on social networks and use them for further modification. In an embodiment, a user can shortlist some of the best pictures he has created and charge a fee to other users for using these images. In another embodiment, the application keeps track of all the images in any user account and, in case any image from a user's gallery is accessed by other users or shared over external networks, the concerned user is notified accordingly. In an embodiment, the application provides the user with the option of blocking certain images from access by other users on the social network.
In an embodiment, the users can search for pictures corresponding to specific categories, which may have been previously stored in the system library or may be sourced from external sources in real time. The user can subsequently modify these images as required.
In an embodiment, based on the demographic profile of a user, the application automatically recommends images to the user for modification through advanced methods. For example, if the user is a teenager, the application might recommend images of the user's classmates, which the user might modify in advanced ways to create interesting or funny images.
In an embodiment, the application allows the users to take part in various contests conducted through the system. The users are required to modify images of their classmates, coworkers, friends, etc. or images related to any other given theme and submit their entry. In an embodiment, the entries submitted by users are rated by various other users and the best rated entry is declared the winner.
In an embodiment, based on the demographic profile and interests of a user, the system might show the user targeted advertisements.
In an embodiment, the application described in the present specification provides the user with the functionality of accessing or enabling a virtual keyboard within the application interface. In some embodiments, upon receiving user instruction, the native or default keyboard provided within the application can be replaced by a virtual keyboard which contains shortcuts and tools for accessing and manipulating images. In some embodiments, the virtual keyboard is customized.
In an embodiment, the customized virtual keyboard is a separate application which the users have an option to download, either separately or with the FOTOBOM application on their device. In an embodiment, users can share their customized virtual keyboard with other users in a network. In another embodiment, the virtual keyboard can be shared across other applications. In an embodiment, the virtual keyboard is a separate application which is compatible across various applications on multiple platforms such as iOS, Android, Windows, etc. and can be used across multiple applications in addition to the FOTOBOM application.
In an embodiment, the images, such as BOMS and FOTOBOMS as described in the present specification, are also referred to as stickers or emojis. The virtual keyboard contains a gallery of such stickers or emojis which can be accessed by the user.
In an embodiment, the virtual keyboard described in the present specification is dynamic in nature such that the various stickers or emojis linked to the virtual keyboard of a user change based on the settings for the corresponding user. In an embodiment, the images linked to a virtual keyboard change when new images are posted or uploaded by other users in the network. In another embodiment, the virtual keyboard is constantly populated with new images corresponding to specific themes (preselected by the user) which are posted or uploaded in the application.
In an embodiment, the application described in the present specification further provides the functionality to modify or process video files in multiple ways. In an embodiment, the application allows recognition of specific sections of an image in a plurality of frames in a video file based on the user feedback and allows modifications/operations to be performed on these specific sections of images in all the image frames based on the feedback received only for said plurality of image frames. In an embodiment, the application provides a very convenient feature wherein a video file is separated into multiple image frames and modifications made by the user in a single image frame are automatically applied to all image frames in which similar modifications would be applicable. In an embodiment, when a video file is selected, the first frame of the video is opened in the application described in the present specification and the user is required to input all changes required in the first frame. Once the user completes the changes in the first frame, the system automatically applies similar changes to all other relevant frames in the video file in which such changes are possible. In case the user wants to keep certain sections in an image and remove certain other sections, the user is required to highlight, in the first frame only, the sections he wants to keep or remove. The application records the input provided by the user and, one by one, analyzes all frames to identify relevant frames containing sections similar to the sections highlighted by the user and accordingly modifies all relevant frames as per the user feedback received for the first frame.
One of ordinary skill in the art would appreciate that a user can highlight a section in an image frame for performing multiple operations, such as removing the section from the file, changing the size, color, contrast or brightness of that portion, superimposing that section with some other image, or changing some other parameter in that section. In an embodiment, once the user provides input regarding the exact change required in the highlighted section in any single frame, the application applies a similar change to all image frames in which such a change would be applicable. In an embodiment, the application analyzes all frames in a video file to identify relevant frames in which such a change would be applicable. In another embodiment, the application searches the frames in a sequential manner until it encounters the first frame in which such a change would not be applicable. For example, the user may provide input for the first frame to remove a certain kind of background image from the frame. In the above embodiment, the system will sequentially search all frames and remove similar background images until it encounters a frame which does not contain the similar background image.
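The sequential variant above can be sketched in Python. This is an illustrative assumption, not the actual implementation: the function names are hypothetical, and a simple mean-absolute-difference over the highlighted mask stands in for whatever matching the application actually performs.

```python
import numpy as np

def region_matches(frame, ref_frame, mask, tol=30):
    """Naive pixel-by-pixel check: does the section highlighted in the
    reference frame still appear, largely unchanged, in this frame?"""
    diff = np.abs(frame.astype(int) - ref_frame.astype(int))
    return diff[mask].mean() < tol

def propagate_removal(frames, mask, fill_value=0):
    """Apply a removal marked in frames[0] to subsequent frames, stopping
    at the first frame where the highlighted section is no longer found
    (the sequential variant described above)."""
    ref = frames[0]
    edited, stopped = [], False
    for frame in frames:
        if stopped or not region_matches(frame, ref, mask):
            stopped = True                   # leave this and all later frames untouched
            edited.append(frame)
        else:
            out = frame.copy()
            out[mask] = fill_value           # remove the highlighted section
            edited.append(out)
    return edited
```

The exhaustive variant would simply drop the `stopped` flag and edit every frame that matches.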
In an embodiment, the application allows removing images of specific objects from a plurality of image frames contained in a video file as described above.
One of ordinary skill in the art can appreciate that there may be multiple embodiments through which a user can highlight a portion without departing from the spirit and scope of this invention. In an embodiment comprising a touchscreen device on which the application is running, the user can touch or swipe or click a portion of the section which is of interest and the application will detect the entire section using the methods disclosed in the present specification.
In an embodiment, the application is configured to receive additional information from the user to process specific portions of an image as required. The availability of this additional information enables more accurate detection of the specific sections of the image. In an embodiment, the user provides instructions to highlight the portions of the image that comprise the border or edges of the section to be retained in the Reference Video Frame. In another embodiment, the user provides instructions to apply specific filters to change the look and feel of the image. In an embodiment, the user can provide instructions to smooth, blend, or add a glow to specific portions of the image.
Subsequently, in step 133, the application analyzes all other image frames in the video file to identify relevant image frames containing sections similar to the sections which were retained or removed in the Reference Video Frame as described above. At step 134, the application creates a new video by modifying all such relevant frames, retaining or removing those sections from these frames which were retained or removed from the Reference Video Frame.
Steps 135 and 136 depict the options available to a user once the newly created video is ready and displayed on the screen of the user device running this application. As shown in step 135, the user has the option to store the new video in local device memory or at a remote location and also define its properties such as name, category, privacy settings, etc. In step 136, the user is provided with an option to share the new video with other people over social networking platforms (by logging into the social networking platform and uploading the FOTOBOM, as described above) and messaging applications integrated with the application, as described in the present specification.
In an embodiment, the user can create FOTOBOM video files similar to the FOTOBOM image files described in this specification in
Once the user has selected both the Reference Video Frame and BOM image, the user provides instructions to the application, through an application interface, regarding placement coordinates of the BOM over the Reference Video Frame. These instructions could be provided in multiple ways. In an embodiment, the user can drag the selected BOM image and drop it over the selected Reference Video Frame at the desired location with the help of a computer key or mouse or by using the touchscreen of a touchscreen enabled device. In an embodiment, the application provides an option to the user to provide exact coordinates of the Reference Video Frame at which the BOM is to be placed. The user might use this option to fine tune the positioning. Generally, while creating a FOTOBOM video, BOM images are placed over that section of the TARGET video which falls in the same category as the section displayed in the BOM image. For example, a BOM image might represent the face of a person and a user will generally place it over the face section of a video. However, one of ordinary skill in the art would appreciate that the methods disclosed in this specification do not impose any such limitation and it is up to the creativity of a user how he wants to combine BOM images and TARGET videos to create FOTOBOM videos.
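The placement step can be illustrated with a short NumPy sketch. The helper name `place_bom` and the assumption that the BOM carries an alpha channel are hypothetical; the sketch simply alpha-composites the BOM over the frame at the user-supplied coordinates.

```python
import numpy as np

def place_bom(frame: np.ndarray, bom: np.ndarray, x: int, y: int) -> np.ndarray:
    """Superimpose an RGBA BOM image onto an RGB frame at (x, y).

    Transparent BOM pixels leave the frame untouched, so only the
    cut-out section covers the underlying video frame."""
    out = frame.astype(float).copy()
    h, w = bom.shape[:2]
    alpha = bom[:, :, 3:4] / 255.0                       # per-pixel opacity, shape (h, w, 1)
    region = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = alpha * bom[:, :, :3] + (1 - alpha) * region
    return out.astype(np.uint8)
```

A drag-and-drop interface would ultimately resolve to the same `(x, y)` coordinates passed here.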
Once the user provides instructions regarding placement coordinates of the BOM on the Reference Video Frame, as shown in step 143, the BOM is superimposed over the Reference Video Frame to create a FOTOBOM. Subsequently, in step 144, the new image frame is fine-tuned as the BOM might not fit accurately over the section in the Reference Video Frame which is to be superimposed. In an embodiment, the methods disclosed in this specification use pixel by pixel comparisons and edge detection methods as shown in steps 145 and 146 respectively, to integrate the BOM with the Reference Video Frame in a seamless manner. In an embodiment, the application might also change the dimensions of edges for seamless integration of the BOM with the Reference Video Frame. Subsequently, in step 147, the application analyzes all other image frames in the video file to identify relevant image frames containing sections similar to the sections which were superimposed with a BOM in the Reference Video Frame as described above. At step 148, the application modifies all such relevant frames based on the feedback received from the user for the single Reference Video Frame by placing the BOM image over the corresponding sections in these frames. Steps 149 and 150 depict the options available to a user once the newly created FOTOBOM video is ready and displayed on the screen of the user device running this application. As shown in step 149, the user has the option to store the FOTOBOM video in local device memory or at a remote location and also define its properties, such as name, category, privacy settings, etc. In step 150, the user is provided with an option to share the FOTOBOM video with other people over social networking platforms (by logging into the social networking platform and uploading the FOTOBOM, as described above) and messaging applications integrated with the application, as described in the present specification.
In another embodiment, the user is provided with the option to provide inputs for more than one image frame for scenarios wherein the video file is of relatively long duration and the user wants to modify multiple image sections which are not displayed together in any single image frame in the video. In such a case, the user can browse through various image frames in a video file and then select two or more image frames. Subsequently, the user is required to provide inputs for the selected image frames. In an embodiment, the application analyzes all the image frames in a video file and implements the suggestions provided by the user for the selected image frames on other image frames containing the relevant sections on which the user has provided feedback.
In some embodiments, the video file is processed at a client or user device. In another embodiment, the video file is processed at a remote server such that the video is initially uploaded to a remote server location and subsequently, after the video is processed to generate a new video file as described in
In an embodiment, based on the available bandwidth, memory and processing power of the system running the FOTOBOM application, the size of the video file that can be processed by the application is restricted. In an embodiment, the application only processes video files between 3 and 10 seconds in length.
In another embodiment, another tool is used to first crop the selected video file to a size compatible with the FOTOBOM application requirements. In some embodiments, various parameters such as length, resolution, frames per second and other relevant parameters of the selected video are modified using this tool to preprocess the selected video file and make it compatible with the FOTOBOM application requirements.
In some embodiments, the tool used for preprocessing the video file is integrated with the FOTOBOM application.
In an embodiment, wherein the selected video file is of a different format, the video file is first converted to a format supported by the FOTOBOM application. In an embodiment, the FOTOBOM application supports only a single video format, such as an animated .GIF format, and all selected video files are first converted to the supported format before processing them using the FOTOBOM application.
In an embodiment, the methods of the present specification are implemented in the form of an application which a user can load on his device, such as a mobile phone or computer. A user first selects the application, which may require a download, and then activates the application on a device.
In an embodiment, the application first evaluates the image pixels corresponding to the portion highlighted by the user. Subsequently, these pixels are compared with pixels corresponding to all other sections of the image on multiple parameters. After comparison, the application finds the pixels which are similar to the pixels corresponding to the area highlighted by the user to recognize the entire section representing hair 202. To fine tune the image, the application further uses edge detection processes to find the exact start and end points of the hair section. The user can subsequently use this BOM to create a FOTOBOM or can store it or share it over the network.
In another embodiment of the application described in the present specification, the user can select multiple sections/subparts to create multiple BOMS from a single base image.
In an embodiment described in the present specification, the application allows the user to create multiple BOM images and store them in a file on the user device or at a remote server location for future use. In an embodiment, the user can create a library of specific types of BOMs (such as hats, hairstyles, or lips, etc.) in separate files for future use.
In an embodiment, the application as described in the present specification enables a user to superimpose or place the BOM image over a TARGET image selected by the user. Now referring to
In an embodiment, on receiving instructions from the user, the application can detect the entire section of the TARGET image which is to be covered by the BOM image and can remove this section before placing a BOM over it. Now referring to
In another embodiment, a user can drag and drop alternate images from a digital list or library to simultaneously remove cropped sections and replace the cropped sections in any image.
In some of the above embodiments, when a user highlights or touches a section of an image to generate a target image of that specific section, the application recognizes all pixels associated with that section to detect the entire section and allows the user to replace or modify it in a plurality of ways. The application described in the present specification uses advanced processing techniques to modify images instead of merely applying color filters. A pixel by pixel comparison and boundary detection are conducted to determine exactly where the highlighted section begins and ends so that entire image sections can be lifted and modified in advanced ways. In an embodiment, the application uses a gradient based approach wherein the differences in values of pixels corresponding to different portions of the image are analyzed to detect different sections and their corresponding edges in an image.
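The gradient based approach can be sketched as follows. This NumPy snippet is an illustrative assumption (the function name is hypothetical): it computes a gradient magnitude for a grayscale image and normalizes it to the 0-255 range, so that flat regions stay near 0 while section boundaries show up as bright pixels.

```python
import numpy as np

def gradient_map(gray):
    """Compute a simple gradient magnitude for a grayscale image and
    map it to the 0-255 range; section edges appear as high values."""
    gy, gx = np.gradient(gray.astype(float))   # per-axis finite differences
    mag = np.hypot(gx, gy)                     # combined gradient magnitude
    if mag.max() > 0:
        mag = mag / mag.max() * 255            # normalize into a 0-255 grayscale map
    return mag.astype(np.uint8)
```

A production implementation might use a Sobel or Canny operator instead, but the principle (pixel-value differences marking section boundaries) is the same.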
In another embodiment, the application receives three inputs: a source image on which various editing operations are to be performed, sections of the source image that are of interest to the user (referred to here as Keep_data), and sections of the source image that are not of interest to the user (referred to here as Remove_data). The user touches/swipes or clicks on specific portions of the source image to identify the sections corresponding to Keep_data and Remove_data. The system expands the above-mentioned sections (Remove_data and Keep_data) through pixel by pixel comparison to generate the complete sections which are to be removed from or inserted into the final target image. In another embodiment, the user also provides information to identify the portions that comprise the border sections. In an embodiment, the user also provides information on sections which are to be blended or smoothed. The system accordingly uses this information to generate a more accurate image of the sections which are to be included in the final image.
In an embodiment, the system first expands the Remove_data section to generate the entire section which has to be removed from the final target image. This process first detects the edges by calculating the gradients of the image. These gradients are then mapped to a 0-255 range to create a grayscale image. The user input Remove_data is mapped onto this grayscale image and this Remove_data section is then expanded by recursively checking the neighbors. If the gradient value of the neighbor is less than a preset number such as 5, the neighboring pixel is added to the Remove_data section. This process is repeatedly performed until no further pixels can be added. The removed section generated by expanding the Remove_data is subsequently used for generation of a target image corresponding to the Keep_data section.
The system expands the Keep_data section to generate the entire section which will be part of a target image. This process first detects the edges by calculating the gradients of the image. These gradients are then mapped to a 0-255 range to create a grayscale image. The user input Keep_data is mapped onto this grayscale image and this Keep_data section is then expanded by recursively checking the neighbors. If the gradient value of the neighbor is less than a pre-set number, such as but not limited to 5, the neighboring pixel is added to the Keep_data section. This process is repeated until no further pixels can be added. During the search and expansion of the Keep_data section, the algorithm checks that each pixel is excluded from the Remove_data section to ensure that the Keep_data section does not merge into the Remove_data section. After the generation and expansion of the Keep_data section, the Keep_data section is returned to the user as the target image.
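The expansion steps described above can be sketched as an iterative region-growing routine. Names, the 4-connected neighborhood, and the explicit stack (standing in for the recursion, to avoid depth limits) are illustrative assumptions; the same function serves both sections, with the already-expanded Remove_data passed as the exclusion set when growing Keep_data.

```python
import numpy as np

GRADIENT_THRESHOLD = 5  # the pre-set number from the description above

def expand_section(grad, seeds, exclude=None):
    """Grow user-marked seed pixels over a 0-255 gradient map.

    A 4-connected neighbor joins the section when its gradient value is
    below the threshold; pixels in `exclude` (e.g. an already-expanded
    Remove_data section) are never added, so Keep_data cannot merge
    into Remove_data."""
    h, w = grad.shape
    exclude = exclude if exclude is not None else set()
    section = set(seeds)
    stack = list(seeds)
    while stack:
        y, x = stack.pop()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w
                    and (ny, nx) not in section
                    and (ny, nx) not in exclude
                    and grad[ny, nx] < GRADIENT_THRESHOLD):
                section.add((ny, nx))
                stack.append((ny, nx))
    return section

# Typical use: expand Remove_data first, then Keep_data with the removed
# pixels excluded, and return the Keep_data pixels as the target image.
```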
One of ordinary skill in the art can appreciate that the thresholds for defining the “similar” pixels vary based on the images and detection of Keep_data or Remove_data sections.
In an embodiment, the present specification describes a mobile/computer application which can be used to perform all the operations described above. The user can download the application on their mobile devices and/or computing platforms.
In the case where a user selects option 702, he is directed to a new page shown in
Button 909 is used to display, when selected, a list of the user's friends and details corresponding to those friends. Button 910, when selected, is used to display a “Secret Stash” page, which, in an embodiment, is a collection of BOMS, FOTOBOMS and TARGETS that can only be seen by the user and are not shared with any other user on that user's FOTOBOM network.
In an embodiment, to create a new BOM image, a user first selects a background image from available sources, including, but not limited to, local and remote image galleries and social networking platforms. In an embodiment, when a user selects the button “NEW BOM” corresponding to icon 803 from main menu shown in
In an embodiment, when the user selects camera roll 1002 in
When the user selects image 1007, the application is redirected to the screen shown in
The above embodiment describes one specific method through which a user can highlight areas of an image the user wants to keep or remove in a BOM; however, one can appreciate that there could be multiple ways in which the system can take instructions from the user. In an embodiment, the user can touch or swipe or click on a portion of the section which is to be included in the image and the system conducts a pixel by pixel comparison of this portion with other areas in the image to detect the entire section corresponding to this portion.
In an embodiment, the application described in the present specification is configured to receive additional instructions from the user for more accurate detection of images. In an embodiment, the BOM editor tool screen in
In the example shown, the user creates a BOM comprising the hat and nose sections of image 1007 in
In the above embodiment, if the user chooses the option “BOM it” in 1014, he is redirected back to the screen shown in
Once the user has completed the FOTOBOM, he selects button 1125 in
Once a user clicks on bubble box 1204, the user is redirected to a new screen shown in
In an embodiment, the application allows a user to tag other users with the specific BOMS or FOTOBOMS created by the user. The application subsequently notifies the tagged users that their profile has been tagged with a specific BOM or FOTOBOM created by another user. In an embodiment, the BOMS or FOTOBOMS with which a user has been tagged are stored in the STASH/image gallery of the respective user with his permission. The tagged user can subsequently share these BOMS or FOTOBOMS with other users in his network.
In an embodiment, the saved FOTOBOMS can be used as personalized emoticons while communicating with other users over various internal or external messaging applications. The emoticons are, in an embodiment, a pictorial representation of a facial expression or other expression which serves to lend tone to a sender's written communication, defining its interpretation. Usually, in messaging applications such as Facebook®, Gtalk®, Whatsapp®, Wechat®, etc., a library of standard emoticons is embedded in the application and is accessible to the users. Emoticons are often used in communication over the messaging applications to emphasize a point. In this embodiment, the user can access, through various internal or external messaging applications, a library of personalized emoticons created with the help of BOMS and FOTOBOMS and use them in his communication with other users.
In another embodiment, the application enables the creation of a new virtual keyboard connected to the operating system running on the user device and comprising a library of personalized emoticons. Access to a virtual keyboard comprising the personalized emoticons allows the users to share emoticons as part of a text line while communicating on the messaging applications instead of accessing a separate image file to access each emoticon. In an embodiment, the user can activate the keyboard through the settings menu in the operating system. In various embodiments, while within the social network of the application of the present specification, or while within another social networking platform, such as Instagram® or Facebook®, the user can access the virtual keyboard to share the customized emoticons with other users. Therefore, in various embodiments, the virtual keyboard provides quick user access to the emoticons created from modified images and/or video frames by the application.
In another embodiment, the application described in the present specification enables the creation of a closed group of users on a direct messaging platform, wherein each member of the group can access the library of personalized emoticons stored in the STASH/image library of other members in the group.
In an embodiment, the application described in the present specification provides the user with the functionality of accessing or enabling a virtual keyboard within the application interface. In some embodiments, upon receiving user instruction, the native or default keyboard provided within the application can be replaced by a virtual keyboard which contains shortcuts and tools for accessing and manipulating images as well as saved BOMS and FOTOBOMS, including created emoticons.
In some embodiments, the virtual keyboard is customized. In an embodiment, the customized virtual keyboard is a separate application which the users have an option to download, either separately or with the FOTOBOM application, on their device. In an embodiment, users can share their customized virtual keyboard with other users in a network. In another embodiment, the virtual keyboard can be shared across other applications. In an embodiment, the virtual keyboard is a separate application which is compatible across various applications, such as Instagram® and Facebook®, on multiple platforms such as iOS, Android, Windows, etc. and can be used across multiple applications in addition to the FOTOBOM application.
In an embodiment, the images, such as BOMS and FOTOBOMS as described in the present specification, are also referred to as stickers or emojis. The virtual keyboard contains a gallery of such stickers or emojis which can be accessed by the user.
In an embodiment, the virtual keyboard described in the present specification is dynamic in nature such that the various stickers or emojis linked to a user's virtual keyboard change based on the settings for the corresponding user. In an embodiment, the images linked to a virtual keyboard change when new images are posted or uploaded by other users in the network. In another embodiment, the virtual keyboard is constantly populated with new images corresponding to specific themes, which may be preselected by the user, and which are posted or uploaded in the application.
In an embodiment, the various stickers or emojis are stored on a remote server and are accessed by the user device through the virtual keyboard. In an alternate embodiment, some of these stickers are stored on the user device itself for quick access. In an embodiment, the stickers available for quick access to a user through the virtual keyboard comprise various categories, such as stickers previously stored by that specific user in his or her stash, stickers linked to the location of the user, and the like.
In an alternate embodiment, the user can search for stickers or emojis related to any subject and the application provides access to stickers in the entire application network which are related to that subject. In another embodiment, the application is integrated with at least one internet search engine so that the user can search the internet for locating or potentially creating new stickers from various image sources on the internet. In another embodiment, the user can buy stickers from a gallery of stickers within the application itself. In another embodiment, the users can buy stickers from other users in the network. In an embodiment, a marketplace interface is provided within the application for the purchase and trading of stickers among various users, and the application may charge a fee or commission for such transactions.
In an embodiment, the user can enable or access the virtual keyboard of this application while running other applications such as messaging applications and social networking applications to access, modify and share stickers provided in this application over these applications.
In an embodiment, the virtual keyboard contains various editing tools to modify the images. In an embodiment, the editing tools include common functions such as, but not limited to, rotate, resize, drag, drop, copy, paste and save, as well as the ability to perform color modifications on the images.
In another embodiment, the stickers or emojis available within the application network may be rated by various users on a standardized scale such as on a scale of 1 to 10. The various parameters such as average rating and number of views related to a specific sticker are displayed alongside the sticker to showcase its current popularity on the network. In an embodiment, while searching for stickers on any subject, a user can sort the search results using various filters. In an embodiment, the user can sort the search result based on the user rating for each sticker to view the best rated stickers in any category. In an alternate embodiment, each sticker is stored along with its metadata which comprises parameters such as, but not limited to, sticker category, size, resolution, etc. In an embodiment, the user can filter the search results based on various metadata parameters.
Upon selection of a category in the navigation menu 1303, the stickers corresponding to that specific category are displayed in the image display section 1304. In the embodiment shown in
In an embodiment, the user device is a touch screen device and various inputs such as instructions to select a specific category in the navigation menu 1303 or to select a specific image can be provided through a touch or tap on the screen. In an embodiment, the navigation menu 1303 can be scrolled in either direction to see all available menu options.
In an embodiment, the navigation menu 1303 comprises a menu option 1308 which is used to enable or access an image editing tool. In another embodiment, the navigation menu 1303 comprises a menu option 1309 which is used to enable a keyboard such as a QWERTY keyboard used to input any text. In an embodiment, the user can enable the keyboard by tapping on menu option 1309 or alternatively on the text input box 1313.
As shown in
Reference is now made to
In an embodiment, as shown in
As shown in
In another embodiment, as shown in
One of ordinary skill in the art would appreciate that the options shown in the various navigation menus depicted here are for illustration purposes only, and that the number of navigation menus and the respective options in each navigation menu can be customized in multiple other ways to provide maximum options to the user.
In an embodiment, once the user selects the font and color of the text to be included in the text box 1425, the application 1400 directs the user to a new screen as shown in
In another embodiment, referring to
Once the user has performed all edits or changes as shown in
In an embodiment, BOMS and FOTOBOMS can be created using the virtual custom keyboard by directly accessing the BOM editor tool explained in earlier embodiments. Using the virtual keyboard provides a more convenient method to create new BOMS or stickers as it provides quick access to several system features through shortcuts as described in some of the above embodiments.
In an embodiment shown in
Once the user has indicated the portions to be removed or included in the final BOM image, he can indicate the same to the application through a button 1806 shown in
In an embodiment, the BOM image can be subsequently used through the virtual keyboard.
Similarly,
In an embodiment, in addition to the static image categories, the application enables the user to explore images corresponding to his current location. In an embodiment, when the user tries to explore images corresponding to his location, the application detects the location of the user through a GPS tracking mechanism present on the user device. Subsequently, the application displays images corresponding to the detected location. For example, if a user is at Disneyland®, the application enables the user to explore BOMS and FOTOBOM images corresponding to various Disneyland® characters. This may include the images stored in the application library or those created and shared by other users.
In another embodiment, the application described in the present specification is configured such that the information contained in a user's account may be shared with a dynamic program such as a computer or a mobile gaming application.
In an embodiment, the application described in the present specification provides the functionality to modify or process video files in multiple ways. In an embodiment, the application allows recognition of specific sections of an image in a plurality of frames in a video file based on user feedback and allows modifications/operations to be performed on these specific sections of images in all the video image frames based on feedback received for only said plurality of image frames. In an embodiment, the application provides a very convenient feature wherein a video file is separated into multiple image frames and modifications made by the user in a single image frame are automatically applied to all image frames in which similar modifications are applicable. In an embodiment, when a video file is selected, a first frame of the video is opened in the application described in the present specification and the user is required to input all changes required in the first frame. Once the user completes the changes in the first frame, the system automatically applies similar changes to all other relevant frames in the video file in which such changes are possible. In case the user wants to keep certain sections in an image and remove certain other sections, the user is required to highlight the sections he wants to keep, or the sections he wants to remove, only in the first frame. The application records the input provided by the user and, one by one, analyzes all frames to identify relevant frames containing sections similar to those highlighted by the user, and accordingly modifies all relevant frames as per the user feedback received for the first frame.
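The propagation of first-frame edits to later frames can be illustrated with a minimal sketch. The specification does not disclose an implementation, so the names (`propagate_mask`, `TRANSPARENT`) and the matching rule (remove a masked pixel in a later frame only where it still holds the same value as in the reference frame) are assumptions for illustration only:

```python
# Hypothetical sketch: propagate a removal mask, marked on the first
# (reference) frame, to every frame whose pixels still match it.
# Frames are modeled as 2D lists of pixel values.

TRANSPARENT = None  # placeholder for a removed pixel


def propagate_mask(frames, remove_mask):
    """Apply the first-frame removal mask to every frame, removing a
    masked pixel only where its value matches the reference frame."""
    reference = frames[0]
    edited = []
    for frame in frames:
        new_frame = [row[:] for row in frame]
        for y, mask_row in enumerate(remove_mask):
            for x, marked in enumerate(mask_row):
                # only remove where the content matches the reference frame
                if marked and frame[y][x] == reference[y][x]:
                    new_frame[y][x] = TRANSPARENT
        edited.append(new_frame)
    return edited
```

A real implementation would compare regions rather than individual pixel values, but the control flow (record feedback once, then analyze each frame in turn) follows the embodiment above.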
The above embodiment is described with reference to
Once the user selects the TARGET video file shown in
In an embodiment, the buttons 2008 and 2009, corresponding to “REMOVE” and “KEEP”, respectively, are used to modify the image 2007 to create a new video image frame. In an embodiment, selection of buttons 2008 and 2009 launches a highlight tool that allows a user to highlight portions of an image. To highlight the sections which are of interest, the user first selects or presses keep button 2009. Subsequently, the user highlights the portions which are of interest and the application fills these portions with a first color. To highlight the sections which are not of interest, the user selects or presses remove button 2008. Subsequently, the user highlights the portions which are not of interest and the system fills these portions with a second color. In an embodiment, the first color 2013 is green, which depicts the portions of the image to be included in the video file, and the second color 2011 is red, which depicts the portions of the image to be excluded from the video file. If at any time the user wants to undo the previous command, the same can be done by pressing the button 2014, which undoes the last command. After providing all information, the user selects the “Done” button 2012, which signals to the application that scanning is complete. The application subsequently generates a new image frame by keeping those sections identified by the first color and removing those sections identified by the second color. In the above embodiment, the user has highlighted the person's image in the first color to retain it in the video file and has highlighted the background behind the person in the second color to remove this background from the video file. It should be understood by those of ordinary skill in the art that the use of colors to differentiate areas is by way of example only and that any demarcation may be used to differentiate these areas.
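The keep/remove marking flow with undo can be sketched as a small editor. All names here (`MarkEditor`, `mark`, `undo`, `apply`) are illustrative assumptions, not part of the specification; the sketch models marks on a pixel grid, an undo stack for the button 2014 behavior, and a final pass that blanks the removed sections:

```python
# Illustrative sketch of the keep/remove highlight tool with undo.
# Images are modeled as 2D lists; None represents a removed pixel.

class MarkEditor:
    KEEP, REMOVE = "keep", "remove"

    def __init__(self, width, height):
        # None = unmarked; otherwise KEEP or REMOVE
        self.marks = [[None] * width for _ in range(height)]
        self.history = []  # stack of (x, y, previous_mark) for undo

    def mark(self, x, y, label):
        """Record the previous state, then mark pixel (x, y)."""
        self.history.append((x, y, self.marks[y][x]))
        self.marks[y][x] = label

    def undo(self):
        """Revert the most recent mark, if any (the button 2014 behavior)."""
        if self.history:
            x, y, previous = self.history.pop()
            self.marks[y][x] = previous

    def apply(self, image):
        """Produce the new frame: blank REMOVE pixels, keep the rest."""
        return [
            [None if self.marks[y][x] == self.REMOVE else image[y][x]
             for x in range(len(image[0]))]
            for y in range(len(image))
        ]
```

A production tool would mark whole highlighted regions per stroke rather than single pixels, but the record/undo/apply cycle is the same.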
The above embodiment describes one specific method through which a user can highlight the areas of an image the user wants to keep or remove in a video frame. However, one can appreciate that there could be multiple ways in which the system can take instructions from the user. In an embodiment, the user can touch, swipe or click on a portion of the section which is to be included in the image, and the system conducts a pixel-by-pixel comparison of this portion with other areas in the image to detect the entire section corresponding to this portion.
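One common way to grow a tapped portion into a full section is region growing (flood fill) over similar pixel values. The specification does not name an algorithm, so this sketch, including the `flood_select` name and the `tolerance` parameter, is an assumption offered only to make the pixel-comparison idea concrete:

```python
# Hypothetical sketch: starting from the tapped pixel, grow the selection
# to all 4-connected pixels whose values are close to the tapped value.

from collections import deque


def flood_select(image, start, tolerance=0):
    """Return the set of (x, y) coordinates connected to `start` whose
    values differ from the tapped pixel by at most `tolerance`."""
    sx, sy = start
    target = image[sy][sx]
    height, width = len(image), len(image[0])
    selected = {(sx, sy)}
    queue = deque([(sx, sy)])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < width and 0 <= ny < height
                    and (nx, ny) not in selected
                    and abs(image[ny][nx] - target) <= tolerance):
                selected.add((nx, ny))
                queue.append((nx, ny))
    return selected
```

On real photographs the comparison would run over color distances with a nonzero tolerance; the zero-tolerance grayscale case shown here keeps the sketch self-contained.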
In an embodiment, the application described in the present specification is configured to receive additional instructions from the user for more accurate detection of images. In an embodiment, the video frame editor tool screen in
In an embodiment, the application subsequently scans all other video frames of the video file depicted in
In the above embodiment, the new video file in which the background behind the person's image has been removed is illustrated in
In another embodiment, the user is provided with the option to provide inputs for more than one image frame, for scenarios wherein the video file is of relatively long duration and the user wants to modify multiple image sections which are not displayed together in any single image frame in the video. In such a case, the user can browse through the various image frames in a video file and then select two or more image frames. Subsequently, the user is required to provide inputs for the selected image frames. In an embodiment, the application analyzes all the image frames in the video file and implements the suggestions provided by the user for the selected image frames on other image frames containing the relevant sections on which the user has provided feedback.
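When feedback comes from several reference frames, one simple way to apply it uniformly is to merge the per-frame removal masks before propagation. The `merge_masks` helper below is an illustrative assumption, not a disclosed method; it combines masks cell by cell with a logical OR:

```python
# Hypothetical sketch: combine removal masks drawn on several selected
# reference frames into one mask that can be applied across the video.

def merge_masks(masks):
    """OR together same-sized boolean masks, cell by cell."""
    merged = [row[:] for row in masks[0]]
    for mask in masks[1:]:
        for y, row in enumerate(mask):
            for x, marked in enumerate(row):
                merged[y][x] = merged[y][x] or marked
    return merged
```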
In some embodiments, the video file is processed at a client or user device. In another embodiment, the video file is processed at a remote server such that the video is initially uploaded to a remote server location and subsequently, after the video is processed to generate a new video file as depicted in
In some embodiments, the modified first frame, or reference frame, is stored in a virtual keyboard similar to the virtual keyboard described with reference to
In an embodiment, based on the available bandwidth, memory, and processing power of the system running the FOTOBOM application, the size of the video file that can be processed by the application is restricted. In an embodiment, the application only processes video files of a length between 3 and 10 seconds.
In another embodiment, another tool is used to first trim the selected video file to make it compatible with the FOTOBOM application requirements. In some embodiments, various parameters such as length, resolution, frames per second and other relevant parameters of the selected video are modified using this tool to preprocess the selected video file and make it compatible with the FOTOBOM application requirements.
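A preprocessing pass of this kind can be sketched as two operations on a list of frames: subsample to a target frame rate and trim to the supported duration. The function name, the 10-second cap and the 10 fps target are assumptions chosen to match the embodiments above, not disclosed values:

```python
# Hypothetical preprocessing sketch: subsample a frame list to a target
# frame rate, then trim it to the maximum supported duration.

def preprocess(frames, fps, max_seconds=10, target_fps=10):
    """Drop frames to approximate `target_fps`, then cap the duration."""
    step = max(1, fps // target_fps)      # keep every `step`-th frame
    subsampled = frames[::step]
    max_frames = max_seconds * target_fps  # frame budget after subsampling
    return subsampled[:max_frames]
```

Resolution changes would be handled by a separate scaling step (see the normalization discussion below in the specification); only the temporal parameters are modeled here.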
In some embodiments, the tool used for preprocessing the video file is integrated with the FOTOBOM application.
In an embodiment, wherein the selected video file is of a different format, the video file is first converted into a format supported by the FOTOBOM application. In an embodiment, the FOTOBOM application supports only a single video format, such as an animated .GIF format, and all selected video files are first converted to the supported format before processing them using the FOTOBOM application.
While the application described in the present specification can work with images of any resolution, in an embodiment, the resolution of images is normalized before combining them to create high quality pictures. There could be cases wherein a large mismatch between the resolution and size of TARGET images and BOM images might create a problem. For example, if the TARGET image is very large and the BOM image is very small, the resolution of the FOTOBOM image created by combining the TARGET with the BOM might not be acceptable. In an embodiment, a standard resolution range is defined and the system requires the TARGET and BOM images to be normalized to fall within this range before combining them.
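The normalization rule can be made concrete with a small sketch. The specification does not state the standard range or the scaling policy, so the 256-1024 pixel bounds and the longest-dimension rule below are illustrative assumptions only:

```python
# Hypothetical sketch: scale an image's dimensions so its longest side
# falls inside an assumed standard resolution range.

STANDARD_MIN, STANDARD_MAX = 256, 1024  # assumed range, not from the spec


def normalize_size(width, height):
    """Return (width, height) scaled so max(width, height) is in range."""
    longest = max(width, height)
    if longest < STANDARD_MIN:
        scale = STANDARD_MIN / longest   # upscale a too-small image
    elif longest > STANDARD_MAX:
        scale = STANDARD_MAX / longest   # downscale a too-large image
    else:
        return width, height             # already within the range
    return round(width * scale), round(height * scale)
```

Normalizing both the TARGET and the BOM through the same rule keeps their relative scale sensible before they are combined.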
The normalization of pictures to make them compatible with the system standard has to be done in a very fast and efficient manner to avoid any lag in the user experience. In an embodiment, the system performs the normalization process at a remote server location based on instructions received from the client application running on the user/client device. In an embodiment, as the user selects an image for creating a BOM or a FOTOBOM, the client application described in the present specification sends the image, or a web link corresponding to that image, to the server for pre-processing. The server retrieves the image and pre-processes it, which includes the steps of normalizing the size/resolution of the image and changing the file types and file names to standardized formats. In an embodiment, a copy of the normalized image is stored on the server so that it can be accessed easily for further processing, including the creation of BOMS and FOTOBOMS. Once the pre-processing is complete at the server side, the image is sent to the client device, which displays it on the user screen to receive further instructions. In case the user is creating a new BOM, user instructions would comprise information related to sections of the image to be included in the BOM and/or sections of the image to be excluded from the BOM. In case the user is creating a new FOTOBOM, user instructions would comprise information related to the existing BOM to be used, its location, placement details on the TARGET image, etc. On receiving user instructions, the client device sends these instructions to the server, which accordingly processes the image based on the user feedback and sends the final completed image to the client device.
In another embodiment, both the client device and the server are provided with information about the normalization algorithm, and if the client device has sufficient processing capacity, it normalizes the images itself. This is beneficial in speeding up the client device's response and sometimes bypasses the need to send the image to the server. In another embodiment, both the server and the client device perform the normalization in order to speed up response times, storage times, etc. The final images normalized at the two locations sync with each other as both applications use the same normalization algorithm.
In an embodiment of the present specification, the entire normalization and processing of the image is conducted by the client application. In this embodiment, the processing power of the client application is configured such that the client application is independently capable of processing the image without compromising the user experience. In addition, in an embodiment, the client application can access other BOM and FOTOBOM images stored on the server and retrieve the same in case the same are required for any processing step.
In an embodiment, the above described method also makes it possible for the user to remotely create BOM or FOTOBOM images while the client device is not connected with the server. In another embodiment, when the user device connects with the FOTOBOM server, the data corresponding to a user account stored on the client application synchronizes with the data stored on the server corresponding to that user, such that any modifications done remotely through the client application are updated on the server. In an embodiment, in order to optimize system performance, it is imperative that images (TARGETS and BOMS) in their normalized formats are stored on the server as much as possible. Storage on the server provides the following benefits:
In an embodiment, the metadata information related to FOTOBOMS that is stored on the server includes: name and/or location of the TARGET image; properties of the TARGET image (width, height, other); name and/or location of the BOM image; properties of the BOM image (width, height, other); location of the BOM image within a TARGET image; timestamp of BOM placement within a TARGET image; and username of the person who placed the BOM.
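The metadata fields listed above can be collected into a single record. The field list follows the specification, but the record type itself, its name, and its field names are illustrative assumptions:

```python
# Illustrative record for the FOTOBOM metadata fields listed above.

from dataclasses import dataclass


@dataclass
class FotobomMetadata:
    target_name: str        # name and/or location of the TARGET image
    target_size: tuple      # (width, height) of the TARGET image
    bom_name: str           # name and/or location of the BOM image
    bom_size: tuple         # (width, height) of the BOM image
    bom_position: tuple     # location of the BOM within the TARGET image
    placed_at: str          # timestamp of BOM placement
    placed_by: str          # username of the person who placed the BOM
```

Storing placement as metadata rather than flattening the pixels is what allows, per the next embodiment, FOTOBOM images to be recreated at different positions and points in time.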
In another embodiment, the metadata information contains multiple image locations, positions and timestamps to recreate FOTOBOM images at different places in time.
In an embodiment, the application allows the users to create a “Special BOM” which could be used as a watermark on all the images created by the users. Many users want to sign/mark their creations and they can place this “Special BOM” on their work. In an embodiment, this “Special BOM” is designed by the users using standard templates.
One of ordinary skill in the art can appreciate that there could be multiple formats or types of file systems which can be used to create and store the BOM or FOTOBOM images described in above embodiments. The methods described in the present specification are not limited to any specific file type.
The above examples are merely illustrative of the many applications of the methods of the present specification. Although only a few embodiments of the present invention have been described herein, it should be understood that the present invention might be embodied in many other specific forms without departing from the spirit or scope of the invention. Therefore, the present examples and embodiments are to be considered as illustrative and not restrictive, and the invention may be modified within the scope of the appended claims.
In the description and claims of the application, each of the words “comprise”, “include” and “have”, and forms thereof, are not necessarily limited to members in a list with which the words may be associated.
The present specification relies on, for priority, U.S. Patent Provisional Application No. 62/105,293, filed on Jan. 20, 2015, U.S. Patent Provisional Application No. 62/050,916, filed on Sep. 16, 2014, and U.S. Patent Provisional No. 61/970,258, filed on Mar. 25, 2014, all of which are herein incorporated by reference.
Number | Date | Country
---|---|---
62105293 | Jan 2015 | US
62050916 | Sep 2014 | US
61970258 | Mar 2014 | US