This application claims the benefit under 35 U.S.C. § 119(a) of a Korean patent application filed on Jan. 10, 2014 in the Korean Intellectual Property Office and assigned Serial number 10-2014-0003142, the entire disclosure of which is hereby incorporated by reference.
The present disclosure relates to an electronic device that may add an object to an image and a display method thereof. More particularly, the present disclosure relates to an electronic device capable of providing various forms of objects that reflect a user's surroundings and personality when adding related information to an image, and a display method thereof.
People take pictures to share their memories, such as those of travels and anniversaries. Recently, since portable terminal devices, such as smartphones and tablet Personal Computers (PCs), are equipped with cameras, taking pictures in everyday life has become common. Additionally, images that users capture are being more frequently shared through Social Network Service (SNS).
In the case of old-fashioned film cameras, it is not possible to edit an image that has already been captured, but with the advent of digital cameras, it has become possible to freely delete and edit captured images.
Accordingly, applications that reflect a user's personality by adding related information or icons to captured images are provided.
As mentioned above, although taking pictures has become ubiquitous and various applications for adding information thereto are provided, such techniques merely synthesize information or icons provided in advance by capturing applications. Therefore, the surroundings and personality of a user may not be fully reflected.
Therefore, a need exists for an electronic device capable of providing various forms of objects that reflect a user's surroundings and personality when adding related information to an image, and a display method thereof.
The above information is presented as background information only to assist with an understanding of the present disclosure. No determination has been made, and no assertion is made, as to whether any of the above might be applicable as prior art with regard to the present disclosure.
Aspects of the present disclosure are to address at least the above-mentioned problems and/or disadvantages and to provide at least the advantages described below. Accordingly, an aspect of the present disclosure is to provide an electronic device capable of providing various forms of objects that reflect a user's surroundings and personality when adding related information to an image, and a display method thereof.
Another aspect of the present disclosure is to provide an electronic device that manages an image to which an object is applied and regenerates it as another form of content, and a display method thereof.
In accordance with an aspect of the present disclosure, an electronic device is provided. The electronic device includes an input unit configured to receive a selection on an object theme including at least one object from a user, an information collection unit configured to collect information corresponding to the object theme, a storage unit configured to divide the collected information into variable information or invariable information and store the information, and a display unit configured to, when new variable information is collected according to a user instruction for reselecting the object theme, add an object to an image by using the stored information and the new variable information and display the image.
The display unit may add an object based on the stored variable information and the new variable information to the image.
The display unit may change at least one of a type, size and position of the object added to the image according to the variable information.
When new variable information is collected according to a user instruction for reselecting the object theme, the storage unit may link stored information and the new variable information and may store the linked information.
The information collection unit may generate new information by using the collected information.
The display unit may add a new object to the image having the object added thereto according to a user instruction.
The display unit may change at least one of a position and size of the object added to the image according to a user instruction.
The display unit may delete the object added to the image according to a user instruction.
When a slide show instruction for an image having the same object theme applied thereto is inputted, the display unit may align images based on one of the variable information and may sequentially display the aligned images.
The display unit may change a position of the object added to the image to correspond to an order of a currently displayed image among the aligned images and display the image.
In accordance with another aspect of the present disclosure, a display method is provided. The display method includes receiving a selection on an object theme including at least one object from a user, collecting information corresponding to the object theme, dividing the collected information into variable information or invariable information and storing the information, when the object theme is reselected by a user, collecting new variable information, adding an object to an image by using the stored information and the new variable information, and displaying the image having the object added thereto.
The adding of the object to the image may include adding an object based on the stored variable information and the new variable information to the image.
The collecting of the information may include collecting information through a web server or a terminal device or collecting information inputted from a user, and generating new information by using the collected information.
The method may further include receiving a user instruction for adding a new object to the image having the object added thereto, collecting information corresponding to the new object, and adding the new object to the image by using the collected information and displaying the image.
The method may further include receiving a user instruction for editing the object added to the image, and changing at least one of a position and size of the object added to the image according to the user instruction and displaying the image.
The method may further include receiving a user instruction for deleting the object added to the image, and deleting the object added to the image according to the user instruction and displaying the image.
The method may further include receiving a selection on at least one object from a user, generating an object theme including the selected object, and storing the object theme.
The method may further include receiving a slide show instruction for an image having the same object theme applied thereto, aligning images having the same object theme applied thereto based on one of the variable information, and sequentially displaying the aligned images.
The sequentially displaying of the aligned images may include changing a position of the object added to the image to correspond to an order of a currently displayed image among the aligned images and displaying the image.
In accordance with another aspect of the present disclosure, a non-transitory computer-readable recording medium having a program recorded thereon and implementing the method is provided.
Other aspects, advantages, and salient features of the disclosure will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the annexed drawings, discloses various embodiments of the present disclosure.
The above and other aspects, features, and advantages of certain embodiments of the present disclosure will be more apparent from the following description taken in conjunction with the accompanying drawings, in which:
Throughout the drawings, it should be noted that like reference numbers are used to depict the same or similar elements, features, and structures.
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of various embodiments of the present disclosure as defined by the claims and their equivalents. It includes various specific details to assist in that understanding but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the various embodiments described herein can be made without departing from the scope and spirit of the present disclosure. In addition, descriptions of well-known functions and constructions may be omitted for clarity and conciseness.
The terms and words used in the following description and claims are not limited to the bibliographical meanings, but, are merely used by the inventor to enable a clear and consistent understanding of the present disclosure. Accordingly, it should be apparent to those skilled in the art that the following description of various embodiments of the present disclosure is provided for illustration purpose only and not for the purpose of limiting the present disclosure as defined by the appended claims and their equivalents.
It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a component surface” includes reference to one or more of such surfaces.
By the term “substantially” it is meant that the recited characteristic, parameter, or value need not be achieved exactly, but that deviations or variations, including, for example, tolerances, measurement error, measurement accuracy limitations, and other factors known to those skilled in the art, may occur in amounts that do not preclude the effect the characteristic was intended to provide.
Referring to
The input unit 110 may receive a user instruction. For example, the input unit 110 may receive a user instruction selecting an object theme including at least one object. Thereafter, the input unit 110 may receive a user instruction for adding a new object to an object-added image, changing at least one of the position and size of an added object, or deleting an added object.
The electronic device 100 may provide an object theme including a certain object. Thereafter, a user may generate a new object theme by editing a provided object theme or selecting at least one object.
The input unit 110 may be implemented with at least one of a touch screen or a touch pad operated by a user's touch input, a keypad or a keyboard including various function keys, numeric keys, special keys, and character keys, a remote controller, a mouse, a motion recognition sensor recognizing a user's motion, and a voice recognition sensor recognizing the user's voice.
The input unit 110 may be variously implemented according to the type and feature of the electronic device 100. For example, when the electronic device 100 is implemented with a smartphone, the input unit 110 may be implemented with a touch screen or a voice recognition sensor. When the electronic device 100 is implemented with a TV, the input unit 110 may be implemented with a remote controller, a motion recognition sensor, or a voice recognition sensor. Additionally, when the electronic device 100 is implemented with a notebook PC, the input unit 110 may be implemented with a keypad or a touch pad.
Once an object theme is selected according to a user instruction, the information collection unit 120 may collect information corresponding to the object theme. For example, the information collection unit 120 may collect information according to the type of an object theme.
The information collection unit 120 may collect information from a web server. For example, the information collection unit 120 may collect information, such as a weather forecast, a temperature, a humidity, a UV intensity, a sunrise time, a sunset time, a weather-related icon, and the like, from a web server providing weather service. Additionally, the information collection unit 120 may collect information, such as a friends list, profile pictures of friends, a number of times that content is shared, and comments of friends, from a web server providing Social Network Service (SNS). For this, the information collection unit 120 may include a communication module that connects to and communicates with various web servers.
Additionally, the information collection unit 120 may access various modules in the electronic device 100 and may then collect information. For example, the information collection unit 120 may collect a current date and time, the current location of an electronic device, and the name of an electronic device by accessing a system module or may collect a current temperature, a humidity, and a pressure from a sensor module. Alternatively, the information collection unit 120 may collect information, such as the exposure time, flash on/off, ISO sensitivity, focus, and white balance of a camera by accessing a camera module. Additionally, the information collection unit 120 may collect information, such as the number of steps of a user, a user name, a user weight, an exercise amount of a user, a total exercise time, an exercise distance, food intake calories, a food intake time, and various icons by accessing an application module (for example, a health management application). Additionally, the information collection unit 120 may collect the tag name, capturing location, capturing date and time, and tag information of a picture by accessing a picture management module.
Thereafter, the information collection unit 120 may collect information inputted from a user through the input unit 110. For example, when a user inputs information, such as a birthday, a weight and a name through the input unit 110, the information collection unit 120 may collect the inputted information.
Moreover, the information collection unit 120 may generate new information by using the collected information. For example, when collecting the birthday of a user, the information collection unit 120 may generate a current age or age information at a past specific point by using the user's birthday. As another example, the information collection unit 120 may generate remaining time information until sunrise by using sunrise time information and current time information.
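The derivations described above, an age from a birthday and the remaining time before sunrise, can be sketched as follows. This is an illustrative Python sketch only; the disclosure does not prescribe any implementation, and the function names are hypothetical.

```python
from datetime import date, datetime

def derive_age(birthday: date, reference: date) -> int:
    """Derive an age from a stored birthday and a reference date
    (e.g. the current date or the date an image was captured)."""
    years = reference.year - birthday.year
    # Subtract one year if the birthday has not yet occurred this year.
    if (reference.month, reference.day) < (birthday.month, birthday.day):
        years -= 1
    return years

def time_until_sunrise(sunrise: datetime, now: datetime):
    """Derive the remaining time before sunrise from the collected
    sunrise time and the current time."""
    return sunrise - now

print(derive_age(date(2010, 6, 15), date(2014, 1, 10)))  # 3
```

Passing a past capture date as the reference yields the age information "at a past specific point" mentioned above.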
Further, when a previously selected object theme is reselected, the information collection unit 120 may collect new information corresponding to the object theme. In particular, the information collection unit 120 may newly collect the variable information corresponding to the object theme.
The storage unit 130 stores information collected by the information collection unit 120. The storage unit 130 may classify the collected information as variable information or invariable information and may then store it. The variable information is information of which values are changed over time and the invariable information is information of which values are not changed over time. Even the same type of information may be classified as variable information or invariable information according to the type of an object theme.
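As an illustration of the theme-dependent classification described above, the following Python sketch keys the variable/invariable split by object theme, so that the same type of information can fall in either class depending on the theme. The key tables and function names are hypothetical.

```python
# Hypothetical tables: which collected keys count as variable
# (values change over time) or invariable, per object theme.
VARIABLE_KEYS = {
    "age": {"date", "age"},          # changes as time passes
    "weight": {"weight"},
    "health": {"burned_calories", "steps"},
}
INVARIABLE_KEYS = {
    "age": {"name", "birthday"},     # fixed once entered
    "health": {"target_calories"},
}

def classify(theme: str, collected: dict):
    """Split collected information into variable and invariable parts."""
    variable = {k: v for k, v in collected.items()
                if k in VARIABLE_KEYS.get(theme, set())}
    invariable = {k: v for k, v in collected.items()
                  if k in INVARIABLE_KEYS.get(theme, set())}
    return variable, invariable

var, inv = classify("age", {"name": "Kim", "birthday": "2010-06-15", "age": 3})
# var holds only the age; inv holds the name and birthday
```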
Thereafter, if a preselected object theme is reselected and new information is collected, the storage unit 130 may link stored information and new information and then store it. For example, the storage unit 130 may manage an object theme separate from an object-added image.
The display unit 140 may add an object to an image and display it by using information stored in the storage unit 130. For example, the display unit 140 may display a stored image or an image captured in a camera capturing mode, and if an object theme is selected according to a user instruction, an object may be added to a displayed image.
Even when an object is added to an image, the electronic device 100 may manage the image and the added object separately. For example, a new image is not generated by synthesizing an object with a displayed image itself and an image and an object are linked to each other and managed. Thereafter, when the image and the object are displayed on a display screen, they may be displayed together. Accordingly, each time an object theme is selected, new information is added, so that an object theme reflecting a user's personality may be generated.
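The link-rather-than-merge management described above can be illustrated with a minimal sketch in which an object theme record merely points at unmodified image files; the class and field names are hypothetical.

```python
class ObjectTheme:
    """A theme record that links images to object information
    without ever modifying the image files themselves."""
    def __init__(self, name):
        self.name = name
        self.entries = []            # (image_path, object_info) pairs

    def apply(self, image_path, object_info):
        # Linking, not synthesizing: the image file stays untouched.
        self.entries.append((image_path, object_info))

    def images(self):
        return [path for path, _ in self.entries]

theme = ObjectTheme("age")
theme.apply("img_001.jpg", {"name": "Kim", "age": 3})
theme.apply("img_002.jpg", {"name": "Kim", "age": 4})
# Both images share one theme; each selection adds a new entry.
```

Because the pairs are kept separate, the same image can later be redisplayed with updated objects, and all images sharing a theme can be retrieved together.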
When an object theme is reselected and new variable information is collected and stored, the display unit 140 may add an object to an image by using stored information and new information. For example, the display unit 140 may add stored variable information and new variable information to an image and may then display it. Alternatively, the display unit 140 may add stored invariable information and new variable information to an image and may then display it.
Thereafter, when an object is added and displayed, the display unit 140 may change at least one of the type, size, and position of the object added to an image according to variable information. For example, the display unit 140 may change at least one of the type, size, and position of the object according to the number of pieces of variable information or an information value.
The display unit 140 may add a new object according to a user instruction while an object is added to an image, or may change at least one of the position and size of the added object, or may delete the added object. Accordingly, a user may change an object theme according to the user's preference.
The control unit 150 controls the overall operations of the electronic device 100. The control unit 150 may add an object to an image and display it by separately controlling the input unit 110, the information collection unit 120, the storage unit 130, and the display unit 140 according to various embodiments of the present disclosure.
Hereinafter, various embodiments adding an object to an image are described with reference to
Referring to
Once the object theme icon 210 is selected, as shown in
Once a user selects one object 230 of the object themes, as shown in
Moreover, once one object 230 of the object themes is selected, the information collection unit 120 may collect information corresponding to the selected object theme. For example, the information collection unit 120 may collect temperature and humidity information from a web server providing weather service or a sensor included in the electronic device 100. Additionally, the information collection unit 120 may collect an icon corresponding to the current time by accessing the current time information.
The storage unit 130 may classify temperature and humidity information, current time information, and icons as variable information and may then store the information. Thereafter, as shown in
As shown in
Moreover, in the edit mode, according to a user instruction, a new object may be added to an image. For example, once a menu 320 for adding a new object is selected as shown in
Once a user selects one object 340 of the objects, as shown in
Moreover, in the edit mode, according to a user instruction, the position of an object added to an image may be changed. Once a user instruction for moving a new object 350 is inputted as shown in
Referring to
Referring to
Once a user selects the age object theme 520, as shown in
Moreover, once the age object theme 520 is selected, the information collection unit 120 may collect current date information (when a camera module is used) or image-captured date information (when a stored image is used). Thereafter, once the name and the birthday are inputted from a user, the information collection unit 120 may collect the name and birthday information and may generate age information by using the current date information or the image-captured date information and the birthday information.
The storage unit 130 may store the name and birthday information as invariable information and may store the date information and the age information as variable information. Thereafter, the display unit 140 may add an object representing the name and age to an image by using the stored information as shown in
A user may select the age object theme again with respect to a different image. Referring to
When the age object theme 520 is selected again, the information collection unit 120 may collect current date information (when a camera module is used) or image-captured date information (when a stored image is used), that is, variable information. Unlike the case in which an object theme is first selected, the name and birthday, that is, invariable information, are not changed, and thus the name and birthday information is not collected. Once the current date information or image-captured date information is newly collected, the information collection unit 120 may generate new age information by using stored birthday information.
The storage unit 130 may link the newly collected date information with the stored date information and may then store the newly collected date information. Thereafter, the display unit 140 may add an object representing the name and age to an image by using the stored name information and the newly collected age information as shown in
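The reselection flow described above, in which the stored invariable information (name and birthday) is reused and only new date information is collected, may be sketched as follows; the structure of the stored record is an assumption.

```python
from datetime import date

def reselect_age_theme(stored, capture_date: date):
    """On reselection, the invariable information (name, birthday) is
    reused from storage; only the date is newly collected, and the
    age is re-derived from it."""
    birthday = stored["birthday"]
    years = capture_date.year - birthday.year
    if (capture_date.month, capture_date.day) < (birthday.month, birthday.day):
        years -= 1
    # Link the newly collected variable information to what is stored.
    stored.setdefault("dates", []).append(capture_date)
    return {"name": stored["name"], "age": years}

stored = {"name": "Kim", "birthday": date(2010, 6, 15)}
print(reselect_age_theme(stored, date(2015, 7, 1)))  # {'name': 'Kim', 'age': 5}
```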
Moreover, a user may later add a new object to an image having an object added thereto. When an image having an object added thereto is called, as shown in
Once a user selects one object 560 of the objects, as shown in
Referring to
Once a user selects the weight object theme 620, an object for receiving a weight is added to an image and then displayed. Once the user inputs the weight, as shown in
Moreover, once the weight object theme 620 is selected and the weight is inputted from the user, the information collection unit 120 may collect weight information and the collected weight information may be stored as variable information in the storage unit 130. Thereafter, as shown in
A user may select the weight object theme again with respect to a different image. Referring to
The position of the weight object may be determined according to the weight inputted by the user. For example, when an initially inputted weight is 75 kg and a newly inputted weight is 80 kg, as shown in
Moreover, once a user selects the weight object theme again with respect to a different image, as shown in
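One plausible way to determine the object position from the inputted weight, as described above, is a linear mapping of the weight onto a vertical screen coordinate. The scale bounds, the screen height, and the choice that heavier values place the object higher on the screen are all illustrative assumptions, not part of the disclosure.

```python
def object_y_position(weight, min_w=50, max_w=100, screen_height=800):
    """Hypothetical mapping from a weight value to a vertical pixel
    position; heavier values place the object higher (y = 0 is the
    top of the screen)."""
    weight = max(min_w, min(max_w, weight))          # clamp to the scale
    fraction = (weight - min_w) / (max_w - min_w)    # 0.0 .. 1.0
    return round(screen_height * (1.0 - fraction))

print(object_y_position(75))  # 400
print(object_y_position(80))  # 320
```

With this mapping, reselecting the theme with a new weight of 80 kg would place the object above where the initial 75 kg object appeared, visually reflecting the change in the variable information.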
Referring to
Once a user selects the health object theme 720, as shown in
Moreover, once the health object theme 720 is selected, the information collection unit 120 may collect a user's burned calories according to the user's exercise by accessing a health management application module. Additionally, the information collection unit 120 may collect information on the user's target calories. The storage unit 130 may then store the burned calories as variable information and may store the target burned calories as invariable information. Thereafter, as shown in
A user may select the health object theme again with respect to a different image. Once a user selects the health object theme again with respect to a different image, as shown in
The position of an object representing burned calories may be determined according to burned calories. For example, when burned calories collected from a previous image are 1772 cal and newly collected burned calories are 2032 cal, as shown in
Additionally, a new object may be added and displayed according to a user's instruction. Referring to
Once a user instruction for adding an object representing the number of steps of a user is inputted, the information collection unit 120 may collect information on the number of steps of the user by accessing a health management application module. The storage unit 130 may store the information on the number of steps as variable information and may store the target burned calories as invariable information. Thereafter, as shown in
Once a user selects the health object theme again with respect to a different image, as shown in
Moreover, as shown in
Referring to
Once the sunrise object theme is selected, the information collection unit 120 may collect sunrise time information, current time information, and sun-shaped icon information. Additionally, the information collection unit 120 may generate remaining time information until sunrise by using the sunrise time information and the current time information. The storage unit 130 may store the sunrise time information as invariable information and may store the current time information, the remaining time information before sunrise, and icon information as variable information. Thereafter, the display unit 140 may add the sun-shaped icon object 810 and the object 820 representing the sunrise time to an image by using the stored information as shown in
Here, the color, shape, and position of a sun-shaped icon object may be determined according to the remaining time information before sunrise. For example, as shown in
Moreover, when a capturing time of an image is p.m., as shown in
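The dependence of the sun-shaped icon's color, shape, and position on the remaining time before sunrise might be realized with a simple threshold rule such as the following; the thresholds and appearance values are purely illustrative.

```python
def sun_icon_state(minutes_to_sunrise):
    """Hypothetical rule mapping the remaining time before sunrise
    to the icon's appearance; y is a fraction of screen height
    (0.0 = top), so the sun rises as sunrise approaches."""
    if minutes_to_sunrise <= 0:
        return {"shape": "full_sun", "color": "yellow", "y": 0.2}
    if minutes_to_sunrise <= 30:
        return {"shape": "half_sun", "color": "orange", "y": 0.5}
    return {"shape": "glow", "color": "dark_orange", "y": 0.8}

print(sun_icon_state(20))
```

A capture time in the p.m. would fall outside these branches in a real implementation and could suppress the sunrise object entirely, consistent with the p.m. case described above.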
Referring to
Here, the weather icon object 910 may vary according to current weather information. For example, as shown in
Referring to
Here, the type, position, and size of an added object may vary according to current weather information. For example, when the weather information is ‘sunny’, objects representing the weather, the wind, and the humidity may not be added or may be displayed in a small size. Additionally, as the objects representing the weather, the wind, and the humidity change, the positions and sizes of the remaining objects may be changed.
Referring to
Thereafter, according to a user instruction, once the food object theme is selected again, an object representing a meal time, food calories, and the remaining calories (based on the recommended daily calories) may be added according to the above process. The remaining calorie information may be generated based on the stored food calories and currently collected food calorie information and then displayed as an object.
Referring to
Moreover, when a face is included in an image, the electronic device 100 may recognize a specific face by detecting a face area through a face detection algorithm or may receive the name information from a user. Accordingly, the electronic device 100 may add an object including information on people included in the image.
Moreover, when an object theme is applied to an image according to the process described with reference to
Once a slide show instruction for an image having the same object theme applied thereto is inputted, the display unit 140 may align images based on one of the variable information and may then display them sequentially. At this point, an object added to each image may be displayed together. In addition, the position of an object added to an image may be changed and displayed in order to correspond to the order of a currently displayed image among the aligned images. Additionally, if the variable information includes position information, a map and the capturing location of an image may be displayed together and according to the capturing location of a displayed image, an object representing the capturing location may be sequentially moved.
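The alignment step described above amounts to sorting the images that share an object theme by one piece of variable information; the following sketch uses hypothetical record fields.

```python
def align_for_slideshow(entries, key):
    """Sort images applied with the same object theme by one piece
    of variable information (e.g. age, weight, or capture time)."""
    return sorted(entries, key=lambda e: e[key])

images = [
    {"path": "img_b.jpg", "age": 5},
    {"path": "img_a.jpg", "age": 3},
    {"path": "img_c.jpg", "age": 4},
]
for i, entry in enumerate(align_for_slideshow(images, "age")):
    # The object position may reflect the image's order in the sequence.
    entry["order"] = i
    print(entry["path"], entry["age"])
```

Sorting on a different key, such as a capture timestamp or a weight, yields the other orderings mentioned above without changing the display logic.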
Hereinafter, a display process of an image having an object added thereto is described with reference to
Referring to
Here, the images having the same object theme 1320 applied thereto may be aligned based on one of the variable information according to a user instruction. For example, in the case of an image having an age object theme applied thereto, the image may be aligned based on age information or the capturing time of the image. As another example, in the case of an image having a weight object theme applied thereto, the image may be aligned based on a weight or the capturing time of the image. The aligned images may be generated as one video file.
As shown in
An image aligned based on one of variable information is aligned in one direction of a display screen as shown in
When aligned images are sequentially displayed, in the case that variable information includes position information, as shown in
Referring to
Once an object theme is selected, information corresponding to the object theme is collected in operation S1620. For example, information may be collected from a web server or the electronic device 100, or information inputted from a user may be collected. For example, information, such as a weather forecast, a temperature, a humidity, a UV intensity, a sunrise and sunset time, a weather-related icon, and the like, may be collected from a web server providing weather service. Additionally, information, such as a friends list, profile pictures of friends, the number of times that content is shared, and comments of friends, may be collected from a web server providing SNS.
Additionally, information may be collected by accessing various modules in the electronic device 100. For example, the current date and time, the current location of an electronic device, and the name of an electronic device may be collected by accessing a system module or a current temperature, a humidity, and a pressure may be collected from a sensor module. Alternatively, information, such as the exposure time, flash on/off, ISO sensitivity, focus, and white balance of a camera may be collected by accessing a camera module. Additionally, information, such as the number of steps of a user, a user name, a user weight, an exercise amount of a user, a total exercise time, an exercise distance, a food intake calorie, a food intake time, and various icons may be collected by accessing an application module (for example, a health management application). Additionally, the tag name, capturing location, capturing date and time, and tag information of a picture may be collected by accessing a picture management module.
Additionally, information inputted from a user may be collected. For example, once a birthday, a weight and a name are inputted, the inputted information may be collected.
Thereafter, new information may be generated by using the collected information. For example, when the birthday of a user is collected, a current age or age information at a past specific point may be generated by using the user's birthday. As another example, the remaining time information until sunrise may be generated by using sunrise time information and current time information.
Once the information is collected, the electronic device 100 may classify the collected information as variable information or invariable information and may then store it in operation S1630. The variable information is information of which value is changed over time and the invariable information is information of which value is not changed over time. Even the same type of information may be classified as variable information or invariable information according to the type of an object theme.
Thereafter, when the object theme is re-selected by a user in operation S1640, new variable information is collected in operation S1650. The new variable information may be linked with stored information and then stored.
In addition, an object is added to an image by using the stored information and the new variable information in operation S1660. An object may be added to a stored image or an image captured in a camera capturing mode. When an object is added to an image, stored variable information and new variable information may be added together. Alternatively, stored invariable information and new variable information may be added to an image.
Here, the type, size, and position of an object added to an image may vary according to variable information. For example, at least one of the type, size, and position of an object may be changed according to the number of pieces of variable information or an information value.
Thereafter, an image having an object added thereto is displayed in operation S1670.
While an object is added to an image, according to a user instruction, the object may be edited or deleted or a new object may be added. Once a user instruction for adding a new object to an image having an object added thereto is inputted, information corresponding to a new object may be collected. Thereafter, a new object may be added by using the collected information and then displayed. Furthermore, while an object is added to an image, once a user instruction for editing or deleting the object is inputted, at least one of the position and size of the object added to the image may be changed or deleted and then displayed. Since the addition, deletion, and editing of an object are described with reference to
Moreover, once an object theme is applied to an image, the image and the object theme added thereto may be linked and then stored. A user may select and appreciate an image having the same object theme added thereto or may search for an image that satisfies a specific condition.
Once a slide show instruction for an image having the same object theme applied thereto is inputted, images may be aligned based on one of the variable information and then displayed sequentially. At this point, an object added to each image may be displayed together. In addition, the position of an object added to an image may be changed and displayed in order to correspond to the order of a currently displayed image among the aligned images. Additionally, if the variable information includes position information, a map and the capturing location of an image may be displayed together and according to the capturing location of a displayed image, an object representing the capturing position may be sequentially moved. This is described with reference to
According to the above-mentioned various embodiments of the present disclosure, when related information is added to an image, an object that reflects a user's personality may be provided. In addition, various forms of objects may be provided according to a user environment by dynamically managing information accumulated over time. Thereafter, an image having an object added thereto may be managed and also may be generated as a new form of content.
Moreover, a display method according to various embodiments of the present disclosure may be implemented by a program executable in an electronic device. Such a program may be stored in various types of recording media and then used.
For example, a program code for performing the above methods may be stored in various types of nonvolatile memory recording media, such as a flash memory, a Read Only Memory (ROM), an Erasable Programmable ROM (EPROM), an Electronically Erasable and Programmable ROM (EEPROM), a hard disk, a removable disk, a memory card, a Universal Serial Bus (USB) memory, and a CD-ROM.
While the present disclosure has been shown and described with reference to various embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present disclosure as defined by the appended claims and their equivalents.
Foreign Application Priority Data

Number | Date | Country | Kind
---|---|---|---
10-2014-0003142 | Jan 2014 | KR | national

U.S. Patent Documents

Number | Name | Date | Kind
---|---|---|---
8543622 | Giblin | Sep 2013 | B2
20090315869 | Sugihara et al. | Dec 2009 | A1
20100042926 | Bull | Feb 2010 | A1
20110177914 | Park | Jul 2011 | A1
20110234626 | Seong | Sep 2011 | A1
20120239661 | Giblin | Sep 2012 | A1
20140059053 | Giblin | Feb 2014 | A1

Foreign Patent Documents

Number | Date | Country
---|---|---
2369829 | Sep 2011 | EP

Publication Data

Number | Date | Country
---|---|---
20150199120 A1 | Jul 2015 | US