The disclosure relates to an electronic device, a method, and a non-transitory computer-readable storage medium for processing text contained within a text input portion of a user interface.
An electronic device may display a user interface including a text input portion through a display of the electronic device or a display of an external electronic device. For example, the text input portion may be a space in which text identified based on a user input is inputted. For example, a virtual keyboard or a handwriting input field may be used to input the text to the text input portion.
The above information is presented as background information only to assist with an understanding of the disclosure. No determination has been made, and no assertion is made, as to whether any of the above might be applicable as prior art with regard to the disclosure.
Aspects of the disclosure are to address at least the above-mentioned problems and/or disadvantages and to provide at least the advantages described below. Accordingly, an aspect of the disclosure is to provide an electronic device, a method, and a non-transitory computer-readable storage medium for processing text contained within a text input portion of a user interface.
Additional aspects will be set forth in part in the description which follows and, in part, will be apparent from the description, or may be learned by practice of the presented embodiments.
An electronic device is described. The electronic device may comprise a display and a processor. The processor may be configured to, based on a recognition of media content stored in the electronic device, obtain information regarding the media content. The processor may be configured to identify an event providing the media content from a first software application to a second software application. The processor may be configured to identify attribute information of a text input portion in a user interface of the second software application. The processor may be configured to obtain text indicating at least a portion of the information, based on the attribute information. The processor may be configured to display, via the display, a text input portion including the text, with the media content, in the user interface of the second software application executed in response to the event.
A method for an electronic device including a display is described. The method may comprise, based on a recognition of media content stored in the electronic device, obtaining information regarding the media content. The method may comprise identifying an event providing the media content from a first software application to a second software application. The method may comprise identifying attribute information of a text input portion in a user interface of the second software application. The method may comprise obtaining text indicating at least a portion of the information, based on the attribute information. The method may comprise displaying, via the display, a text input portion including the text, with the media content, in the user interface of the second software application executed in response to the event.
A non-transitory computer-readable storage medium including one or more programs is described. The one or more programs may comprise instructions which, when executed by a processor of an electronic device including a display, cause the electronic device to, based on a recognition of media content stored in the electronic device, obtain information regarding the media content. The one or more programs may comprise instructions which, when executed by the processor of the electronic device, cause the electronic device to identify an event providing the media content from a first software application to a second software application. The one or more programs may comprise instructions which, when executed by the processor of the electronic device, cause the electronic device to identify attribute information of a text input portion in a user interface of the second software application. The one or more programs may comprise instructions which, when executed by the processor of the electronic device, cause the electronic device to obtain text indicating at least a portion of the information, based on the attribute information. The one or more programs may comprise instructions which, when executed by the processor of the electronic device, cause the electronic device to display, via the display, a text input portion including the text, with the media content, in the user interface of the second software application executed in response to the event.
Other aspects, advantages, and salient features of the disclosure will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the annexed drawings, discloses various embodiments of the disclosure.
The above and other aspects, features, and advantages of certain embodiments of the disclosure will be more apparent from the following description taken in conjunction with the accompanying drawings, in which:
The same reference numerals are used to represent the same elements throughout the drawings.
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of various embodiments of the disclosure as defined by the claims and their equivalents. It includes various specific details to assist in that understanding but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the various embodiments described herein can be made without departing from the scope and spirit of the disclosure. In addition, descriptions of well-known functions and constructions may be omitted for clarity and conciseness.
The terms and words used in the following description and claims are not limited to the bibliographical meanings, but are merely used by the inventor to enable a clear and consistent understanding of the disclosure. Accordingly, it should be apparent to those skilled in the art that the following description of various embodiments of the disclosure is provided for illustration purposes only and not for the purpose of limiting the disclosure as defined by the appended claims and their equivalents.
It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a component surface” includes reference to one or more of such surfaces.
It should be appreciated that the blocks in each flowchart and combinations of the flowcharts may be performed by one or more computer programs which include computer-executable instructions. The entirety of the one or more computer programs may be stored in a single memory device or the one or more computer programs may be divided with different portions stored in different multiple memory devices.
Any of the functions or operations described herein can be processed by one processor or a combination of processors. The one processor or the combination of processors is circuitry that performs processing and includes circuitry such as an application processor (AP) (e.g., a central processing unit (CPU)), a communication processor (CP) (e.g., a modem), a graphics processing unit (GPU), a neural processing unit (NPU) (e.g., an artificial intelligence (AI) chip), a Wi-Fi chip, a Bluetooth® chip, a global positioning system (GPS) chip, a near field communication (NFC) chip, connectivity chips, a sensor controller, a touch controller, a fingerprint sensor controller, a display driver integrated circuit (IC), an audio CODEC chip, a universal serial bus (USB) controller, a camera controller, an image processing IC, a microprocessor unit (MPU), a system on chip (SoC), an IC, or the like.
Referring to
The electronic device 100 may include components including a processor 110, volatile memory 120, nonvolatile memory 130, a display 140, an image sensor 150, communication circuitry 160, and a sensor 170. The components are merely exemplary. For example, the electronic device 100 may include another component (e.g., a power management integrated circuit (PMIC), audio processing circuitry, or an input/output interface). For example, some components may be omitted from the electronic device 100.
The processor 110 may be implemented with one or more integrated circuit (IC) chips and may perform various data processing operations. For example, the processor 110 may be implemented as a system on chip (SoC) (e.g., a single chip or chipset). The processor 110 may include sub-components including a central processing unit (CPU) 111, a graphics processing unit (GPU) 112, a neural processing unit (NPU) 113, an image signal processor (ISP) 114, a display controller 115, a memory controller 116, a storage controller 117, a communication processor (CP) 118, and/or a sensor interface 119. The sub-components are merely exemplary. For example, the processor 110 may further include other sub-components. For example, some sub-components may be omitted from the processor 110.
The CPU 111 may be configured to control the sub-components based on execution of instructions stored in the volatile memory 120 and/or the nonvolatile memory 130. The GPU 112 may include circuitry configured to execute parallel operations (e.g., rendering). The NPU 113 may include circuitry configured to execute operations (e.g., convolution computation) for an artificial intelligence model. The ISP 114 may include circuitry configured to process a raw image obtained through the image sensor 150 in a suitable format for a component in the electronic device 100 or a sub-component in the processor 110. The display controller 115 may include circuitry configured to process an image obtained from the CPU 111, the GPU 112, the ISP 114, or the volatile memory 120 in a suitable format for the display 140. The memory controller 116 may include circuitry configured to control reading data from the volatile memory 120 and writing data to the volatile memory 120. The storage controller 117 may include circuitry configured to control reading data from the nonvolatile memory 130 and writing data to the nonvolatile memory 130. The CP 118 may include circuitry configured to process data obtained from a sub-component in the processor 110 in a suitable format for transmitting the data to another electronic device through the communication circuitry 160, or process data obtained through the communication circuitry 160 from another electronic device in a suitable format for processing by a sub-component in the processor 110. The sensor interface 119 may include circuitry configured to process data, obtained through the sensor 170, regarding a state of the electronic device 100 and/or a state around the electronic device 100 in a suitable format for a sub-component in the processor 110.
The processor 110 may display media content stored in the volatile memory 120 and/or the nonvolatile memory 130 through the display 140. For example, the media content may be displayed within a user interface of a software application. For example, the term “media content” used in this document may include data, digital code, text, sound, audio, image, graphics, video, or any other similar material. For example, the term “media content” used in this document may further include multimedia content in which two or more different media contents are combined. For example, in this document, displaying the media content may include displaying the media content itself, displaying a visual object representing the media content, and displaying an executable object for displaying the media content.
For example, the user interface may further include a text input portion. For example, the text input portion may be a space to which text identified based on a user input is inputted. The text input portion may be linked with a virtual keyboard or a handwriting input field. The text input portion may include text identified based on a user input received through the virtual keyboard or the handwriting input field.
For example, the media content may be displayed with the text input portion. For example, the media content may be displayed within the text input portion. For example, the media content may be displayed within another region of the user interface distinct from a region of the user interface in which the text input portion is displayed. However, it is not limited thereto.
For example, the text input portion displayed with the media content may be used to input text including information regarding the media content. For example, the text may be illustrated through a description of
Referring to
For example, a user interface 220 of a software application used to manage a schedule may include media content 221, a text input portion 222-1, and a text input portion 222-2. For example, the text input portion 222-1 may be used to input a title of a schedule to be registered. For example, text included in, inputted to, or inserted into the text input portion 222-1 based on a user input may include information regarding the media content 221. For example, the text may include a word indicating or representing at least one user's name indicated by at least one visual object 223 within the media content 221. However, it is not limited thereto. For example, the text input portion 222-2 may be used to input a memo associated with the schedule to be registered. For example, text included in, inputted to, or inserted into the text input portion 222-2 based on a user input may include information regarding the media content 221. The text may include more detailed information than the text included in the text input portion 222-1. However, it is not limited thereto.
For example, the user interface 230 of a software application used for health management may include media content 231 and a text input portion 232. For example, the text input portion 232 may be used to input calories of food consumed by a user. For example, text included in, inputted to, or inserted into the text input portion 232 based on a user input may include information regarding the media content 231. For example, the text may include a word indicating or representing calories of food indicated by one of the visual objects 233 within the media content 231. For example, the text may include a word indicating or representing calories of food corresponding to a visual object 233-1, consumed by a user indicated by a visual object 234 within the media content 231. However, it is not limited thereto.
For example, a user interface 240 of a software application used for displaying and/or searching media content may include at least one media content 241 and a text input portion 242. For example, the text input portion 242 may be used to input at least one keyword used to identify or search for the at least one media content 241 from among a plurality of media contents capable of being displayed using the software application. For example, text included in, inputted to, or inserted into the text input portion 242 based on a user input may include information regarding the at least one media content 241. For example, the text may include a word indicating or representing a category of the at least one media content 241. For example, the text may include a word indicating or representing a tag set or identified for the at least one media content 241. However, it is not limited thereto.
Referring back to
For example, since text to be included within the text input portion is associated with the media content, a user input to include the text within the text input portion may be simplified through the analysis regarding the media content. For example, obtaining or identifying text to be included in the text input portion based on the analysis may reduce the number of user inputs to respectively input characters constituting the text. For example, obtaining text to be included in the text input portion based on the analysis may enhance the user experience associated with the text input portion.
For example, the processor 110 may obtain the text by executing at least a portion of operations exemplified through a table of contents “1. Method of obtaining text” below. For example, the electronic device 100 may have an ability to execute at least a portion of the operations illustrated through the table of contents “1. Method of obtaining text” below.
1. Method of Obtaining Text
1.1 Method of Obtaining Text Based on Resources Used to Identify or Search for Media Content
For example, the processor 110 may obtain the text to be included in the text input portion based on resources used to identify or search for the media content displayed with the text input portion, from among a plurality of media contents capable of being provided through the electronic device 100. For example, the processor 110 may obtain the text including words respectively indicating or respectively representing the resources. For example, the processor 110 may identify a condition, a word, a character, a category, and/or other media content used to identify or search for the media content, and obtain the text based on the condition, the word, the character, and/or the other media content. For example, the resources, such as the condition, the word, the character, and/or the other media content, may be inputted for the identification of the media content through a user input for selecting at least one item, from among items respectively indicating a plurality of candidate resources. For example, the plurality of candidate resources may be recommended resources for the search, identified based on an autocomplete function. For example, the plurality of candidate resources may be recommended resources for the search, identified based on a use history of the electronic device 100. For example, the resources may be inputted through a user input for the identification of the media content. However, it is not limited thereto.
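As an illustrative, non-limiting sketch of the operation above, the text may be derived from the resources used to search for the media content. The Kotlin code below is hypothetical; the type SearchResource and the function textFromSearchResources are names introduced only for this illustration.

// Hypothetical sketch: derive the text for the text input portion from the
// resources (e.g., keywords, categories) used to identify or search for the media content.
data class SearchResource(val kind: String, val value: String)

fun textFromSearchResources(resources: List<SearchResource>): String =
    resources.joinToString(" ") { it.value }

fun main() {
    val used = listOf(
        SearchResource("category", "food"),
        SearchResource("keyword", "Seoul"),
        SearchResource("keyword", "dinner"),
    )
    println(textFromSearchResources(used)) // "food Seoul dinner"
}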
1.2 Method of Obtaining Text Based on a Category and/or a Keyword of Media Content
For example, the processor 110 may identify at least one category of the media content and/or at least one keyword of the media content based on recognition of the media content displayed with the text input portion or to be displayed with the text input portion, and obtain the text to be included in the text input portion based on the at least one category and/or the at least one keyword. For example, the processor 110 may obtain the text by identifying at least one category including a place (e.g., a place where the media content is obtained) identified based on metadata of the media content and/or at least one keyword indicating the place. For example, the processor 110 may obtain the text by identifying at least one category including a place (e.g., a place that the media content represents) identified based on feature points in the media content and/or at least one keyword indicating the place. For example, the processor 110 may obtain the text by identifying at least one category including date information (e.g., a date on which the media content is obtained) identified based on metadata of the media content and/or feature points in the media content, and/or at least one keyword indicating the date information. For example, the processor 110 may obtain the text by identifying at least one category including an external object corresponding to a visual object in the media content identified based on recognition of the media content and/or at least one keyword indicating the external object. For example, the processor 110 may obtain the text by identifying at least one category including a state of a visual object in the media content identified based on recognition of the media content and/or at least one keyword indicating the state. For example, the processor 110 may obtain the text by identifying at least one category including a context identified based on recognition of the media content and/or at least one keyword indicating the context. For example, the processor 110 may obtain the text by identifying at least one category including a theme of the media content identified based on recognition of the media content and/or at least one keyword indicating the theme. For example, the processor 110 may obtain the text including a word indicating the at least one category and/or the at least one keyword. However, it is not limited thereto.
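As an illustrative, non-limiting sketch, the categories and keywords described above may be collected from metadata and from recognition results. The Kotlin code below uses hypothetical types (MediaMetadata, Recognition) introduced only for this illustration.

// Hypothetical sketch: combine metadata-derived and recognition-derived
// descriptors into keywords from which the text may be obtained.
data class MediaMetadata(val place: String?, val date: String?)
data class Recognition(val objects: List<String>, val context: String?)

fun keywordsFor(meta: MediaMetadata, rec: Recognition): List<String> = buildList {
    meta.place?.let { add(it) }   // place identified based on metadata
    meta.date?.let { add(it) }    // date on which the media content is obtained
    addAll(rec.objects)           // external objects recognized in the media content
    rec.context?.let { add(it) }  // context identified based on recognition
}

fun main() {
    val keywords = keywordsFor(
        MediaMetadata(place = "Seoul", date = "2024-05-01"),
        Recognition(objects = listOf("soup", "Susan"), context = "dinner"),
    )
    println(keywords.joinToString(" ")) // "Seoul 2024-05-01 soup Susan dinner"
}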
1.3 Method of Obtaining Text Based on Other Media Content Displayed with Media Content
For example, the media content may be provided from a first software application to a second software application. For example, the processor 110 may display the media content provided from the first software application to the second software application within a user interface of the second software application, together with the text input portion. For example, the processor 110 may identify other media content displayed with the media content within a user interface of the first software application, based on an event in which the media content is provided from the first software application to the second software application. For example, the other media content may be identified based on processing executed before identifying the event. For example, the other media content may be identified based on processing executed in response to the event. For example, a type of the other media content may be the same as a type of the media content. For example, the type of the other media content may be different from the type of the media content. For example, the media content may be an image, and the other media content may be text. For example, the processor 110 may obtain the text to be included in the text input portion based on information regarding the other media content. For example, the information regarding the other media content may be obtained by executing at least a portion of operations exemplified through other sub-table of contents of the table of contents 1. with respect to the other media content. For example, since the other media content displayed with the media content may be associated with the media content, the processor 110 may obtain the text to be included in the text input portion based on the other media content. For example, when the other media content is text including words, the processor 110 may obtain the text including at least a portion of the words. However, it is not limited thereto.
1.4 Method of Obtaining Text Based on a Name of a Folder Including an Executable Object
For example, the media content may be obtained through the first software application or displayed on the display 140 through the first software application. For example, the processor 110 may display the media content with the text input portion, within a user interface of the second software application different from the first software application. For example, the processor 110 may identify the first software application used to obtain the media content or display the media content, based on an event for displaying the media content within the user interface of the second software application. For example, the processor 110 may identify a location of an executable object (e.g., an icon) for executing the identified first software application. For example, the processor 110 may identify the executable object included in a first folder among folders defined in the electronic device 100, based on the identification. For example, the processor 110 may obtain the text to be included in the text input portion based on a name of the first folder. For example, the name of the first folder may be identified based on processing executed before identifying the event. For example, the name of the first folder may be identified based on processing executed in response to the event. For example, the processor 110 may obtain the text including the name of the first folder or a word indicating the name of the first folder. However, it is not limited thereto.
1.5 Method of Obtaining Text Based on Information Obtained Through Another Software Application Distinct from a Software Application Used to Display Media Content
For example, the processor 110 may identify an event displaying the media content together with the text input portion, within the user interface of the first software application. For example, the processor 110 may identify the information regarding the media content based on the event. For example, the information regarding the media content may be identified based on processing executed before identifying the event. For example, the information regarding the media content may be identified based on processing executed in response to the event. For example, the information regarding the media content may be obtained through at least a portion of operations exemplified through other sub-table of contents of the table of contents 1. For example, the processor 110 may identify whether the information regarding the media content is associated with information obtained through software applications different from the first software application. For example, the processor 110 may identify that the information regarding the media content is associated with information obtained through a second software application different from the first software application. For example, the information regarding the media content being associated with the information obtained through the second software application may be identified based on processing executed before identifying the event. For example, the information regarding the media content being associated with the information obtained through the second software application may be identified based on processing executed in response to the event. For example, when the first software application is a software application for transmitting and receiving a message, and the second software application is a software application for managing a schedule, the processor 110 may identify that the second software application includes contents (e.g., the information obtained through the second software application) regarding a first schedule on a date (e.g., the information regarding the media content) on which the media content is obtained. For example, the processor 110 may obtain the text to be included in the text input portion based on the information obtained through the second software application, based on the identification. For example, the processor 110 may obtain the text including at least one word to provide at least a portion of the contents regarding the first schedule. However, it is not limited thereto.
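As an illustrative, non-limiting sketch, the cross-application association described above may be modeled as a lookup of schedule contents by the date on which the media content is obtained. The Kotlin code below uses a hypothetical Schedule type introduced only for this illustration.

import java.time.LocalDate

// Hypothetical sketch: find contents regarding a schedule registered in a second
// software application for the date on which the media content was obtained.
data class Schedule(val date: LocalDate, val contents: String)

fun textFromSchedules(captureDate: LocalDate, schedules: List<Schedule>): String? =
    schedules.firstOrNull { it.date == captureDate }?.contents

fun main() {
    val schedules = listOf(Schedule(LocalDate.of(2024, 5, 1), "Team dinner at 7 pm"))
    println(textFromSchedules(LocalDate.of(2024, 5, 1), schedules)) // "Team dinner at 7 pm"
}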
1.6 Method for Obtaining Text Based on a Category and/or a Keyword Including at Least a Portion of Objects within Media Content
For example, the processor 110 may identify an event that displays the media content together with the text input portion within a user interface of a software application. For example, the processor 110 may identify categories (and/or keywords) of a plurality of objects constituting the media content, based on the event. For example, the processor 110 may identify a category (and/or keyword) including the most objects of the media content from among the categories (and/or the keywords). For example, the category (and/or the keyword) may be identified based on processing executed before identifying the event or processing executed in response to the event. For example, the processor 110 may obtain the text to be included in the text input portion, based on the identified category (and/or the keyword). For example, the processor 110 may obtain the text including a word indicating or representing the identified category (and/or the keyword). However, it is not limited thereto. In an embodiment, the operations exemplified through the table of contents 1.6 may be included within the operations exemplified through the table of contents 1.2. However, it is not limited thereto.
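As an illustrative, non-limiting sketch, selecting the category that includes the most objects may be implemented as a simple counting step. The Kotlin function below (dominantCategory, a hypothetical name) illustrates one possible form.

// Hypothetical sketch: given a mapping from each recognized object to its
// category, return the category covering the most objects in the media content.
fun dominantCategory(categoryByObject: Map<String, String>): String? =
    categoryByObject.values
        .groupingBy { it }
        .eachCount()
        .maxByOrNull { it.value }
        ?.key

fun main() {
    val byObject = mapOf("pasta" to "food", "soup" to "food", "vase" to "decoration")
    println(dominantCategory(byObject)) // "food" -> basis for the obtained text
}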
1.7 Method for Obtaining Text Based on a State of at Least a Portion of Objects within Media Content
For example, the processor 110 may identify an event that displays the media content together with the text input portion within a user interface of a software application. For example, the processor 110 may identify a state of each of the objects within the category (and/or the keyword) (e.g., a category including the most objects of the media content), exemplified through the table of contents 1.6, based on the event. For example, the state of each of the objects may be identified based on processing executed before identifying the event or processing executed in response to the event. For example, the state of each of the objects may be identified based on feature points of each of the objects within the category (and/or the keyword). For example, the processor 110 may obtain the text to be included in the text input portion, based on the identified state. For example, the processor 110 may obtain the text including a word indicating or representing the state of each of the objects. However, it is not limited thereto. In an embodiment, the operations exemplified through the table of contents 1.7 may be included within the operations exemplified through the table of contents 1.6. However, it is not limited thereto.
1.8 Method for Obtaining Text Based on at Least One Object Located within a Predetermined Region of Media Content
For example, the processor 110 may identify an event that displays the media content together with the text input portion within a user interface of a software application. For example, the processor 110 may identify at least one object located within a predetermined region of the media content, based on the event. For example, the at least one object may be identified based on processing executed before identifying the event or processing executed in response to the event. For example, when the media content is an image, the processor 110 may identify at least one visual object located within a center region of the media content as the at least one object. For example, when the media content is a document, the processor 110 may identify text included in the header or footer of the document as the at least one object. However, it is not limited thereto. For example, the processor 110 may obtain the text to be included in the text input portion, based on information regarding the at least one object. For example, the information regarding the at least one object may be identified based on processing executed before identifying the event or processing executed in response to the event. For example, the information regarding the at least one object may be obtained by executing at least a portion of the operations exemplified through other sub-table of contents of the table of contents 1. with respect to the at least one object. For example, the processor 110 may obtain the text including a word indicating or representing a keyword of the at least one object or a category of the at least one object.
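As an illustrative, non-limiting sketch, identifying at least one object located within a predetermined region (e.g., a center region of an image) may be modeled as a bounding-box test. The Kotlin code below uses a hypothetical Box type introduced only for this illustration.

// Hypothetical sketch: keep only the objects whose bounding-box centers fall
// inside a predetermined center region of the image.
data class Box(val x: Float, val y: Float, val w: Float, val h: Float)

fun inCenterRegion(box: Box, imageW: Float, imageH: Float, margin: Float = 0.25f): Boolean {
    val cx = box.x + box.w / 2
    val cy = box.y + box.h / 2
    return cx in imageW * margin..imageW * (1 - margin) &&
        cy in imageH * margin..imageH * (1 - margin)
}

fun main() {
    val objects = mapOf("cake" to Box(400f, 300f, 200f, 200f), "logo" to Box(0f, 0f, 80f, 40f))
    val central = objects.filterValues { inCenterRegion(it, imageW = 1000f, imageH = 800f) }
    println(central.keys) // [cake] -> the at least one object used to obtain the text
}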
1.9 Method of Obtaining Text Based on a Classification (or a Storage Region) Including Media Content
For example, the media content may be included in a first classification (or a first storage region) among classifications (or storage regions) defined within the first software application. For example, the media content included in the first classification (or the first storage region) may be provided from the first software application to the second software application. For example, the processor 110 may display the media content provided from the first software application to the second software application together with the text input portion, within a user interface of the second software application. For example, the processor 110 may identify the first classification (or the first storage region) including the media content, based on an event providing the media content from the first software application to the second software application. For example, the first classification (or the first storage region) may be identified based on processing executed before identifying the event or processing executed in response to the event. For example, the processor 110 may obtain the text based on the first classification (or the first storage region). For example, the processor 110 may obtain the text including a word indicating or representing the first classification (or the first storage region). However, it is not limited thereto.
1.10 Method of Obtaining Text Based on Privacy Information within Media Content
For example, the media content may include privacy information. For example, when the text to be included in the text input portion is obtained based on the information regarding the media content, the text may include the privacy information. For example, the privacy information may include information indicating a location associated with a user, information indicating a user's name, user identification information, a user's physical information, and/or a user's phone number. However, it is not limited thereto. For example, the processor 110 may obtain the information regarding the media content through at least a portion of the operations exemplified through other sub-table of contents of the table of contents 1. and identify that the privacy information is included in the obtained information. For example, the processor 110 may remove the privacy information from the obtained information based on the identification, and obtain the text to be included in the text input portion based on the information from which the privacy information has been removed. For example, the processor 110 may obtain the text to be included in the text input portion, in response to an event for displaying the media content within a user interface of a software application together with the text input portion. For example, the processor 110 may identify whether a service provided through the software application is a service exposing the privacy information, obtain the text from which the privacy information has been removed on a condition that the service is a service exposing the privacy information, and obtain the text including the privacy information on a condition that the service is a service not exposing the privacy information. For example, whether a service provided through the software application is a service exposing the privacy information may be identified based on processing executed before identifying the event or processing executed in response to the event. However, it is not limited thereto.
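As an illustrative, non-limiting sketch, the privacy-dependent branch described above may be modeled as a filtering step conditioned on whether the target service exposes the text. The Kotlin code below uses a hypothetical InfoItem type introduced only for this illustration.

// Hypothetical sketch: remove privacy-related items from the information
// regarding the media content when the service would expose the text.
data class InfoItem(val value: String, val isPrivate: Boolean)

fun textFor(info: List<InfoItem>, serviceExposesText: Boolean): String =
    info.filter { !serviceExposesText || !it.isPrivate }
        .joinToString(" ") { it.value }

fun main() {
    val info = listOf(
        InfoItem("dinner", isPrivate = false),
        InfoItem("Susan", isPrivate = true), // e.g., a user's name
        InfoItem("Seoul", isPrivate = false),
    )
    println(textFor(info, serviceExposesText = true))  // "dinner Seoul"
    println(textFor(info, serviceExposesText = false)) // "dinner Susan Seoul"
}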
1.11 Method of Obtaining Text Based on a Service Provided Through a Software Application
For example, text to be included in the text input portion displayed with the media content within a user interface of a software application may correspond to a context in which the media content is displayed through the software application. For example, the processor 110 may identify a service provided through the software application, based on an event in which the media content is displayed with the text input portion, so that the text corresponds to the context. For example, the processor 110 may obtain the text to be included in the text input portion based on the service. For example, the processor 110 may identify data corresponding to the service from the information regarding the media content, and obtain the text based on the data. For example, the data may be identified based on processing executed before identifying the event or processing executed in response to the event. However, it is not limited thereto.
1.12 Method for Obtaining Text Based on a Relationship Between a User Associated with Media Content and a User of an External Electronic Device
For example, the processor 110 may display the text input portion, together with the media content, within a user interface of a software application used to transmit the media content to an external electronic device. For example, the software application may be a software application used to transmit and receive a message, which provides the user interface 210 illustrated in
1.13 Method of Obtaining Text Based on an Attribute of a Text Input Portion
For example, the text input portion displayed with the media content within a user interface of a software application may have an attribute corresponding to a function or service provided through the software application.
For example, in case that the software application is a software application for transmitting and receiving a message, the text input portion may be a field that provides a function for inputting a recipient of a message transmitted through the software application. For example, the text input portion may have an attribute (or function) linked to a software application for a contact. For example, the processor 110 may identify a user associated with the media content from the information regarding the media content, identified through at least a portion of the operations exemplified through other sub-table of contents of the table of contents 1. The processor 110 may identify a word indicating or representing the user, by searching the software application for the contact using data regarding the identified user. The processor 110 may obtain the text including the word.
For example, in case that the software application is a software application for transmitting and receiving a message, the text input portion may be a field for inputting a message. For example, text to be included in the text input portion may be a description regarding the media content. For example, the text input portion may have an attribute associated with a context or state represented through the media content. For example, the processor 110 may identify the context or state represented through the media content from the information regarding the media content, identified through at least a portion of the operations exemplified through other sub-table of contents of the table of contents 1. The processor 110 may obtain the text including at least one word indicating or representing the context or the state. For example, the text input portion may have a maximum number of characters capable of being inputted into the text input portion. For example, the processor 110 may obtain the text including the at least one word, based on the maximum number of characters.
For example, in case that the software application is a software application for transmitting and receiving an email, the text input portion may be a field that provides a function for inputting a recipient of an email transmitted through the software application. For example, the text input portion may have an attribute linked to the software application for the contact. For example, the processor 110 may identify a user associated with the media content from the information regarding the media content, identified through at least a portion of the operations exemplified through other sub-table of contents of the table of contents 1. The processor 110 may identify a word indicating or representing the user, by searching the software application for the contact using data regarding the identified user. The processor 110 may obtain the text including the word. For example, the text obtained to be included in a text input portion within a user interface of the software application for transmitting and receiving an email may be different from text obtained to be included in a text input portion within a user interface of the software application for transmitting and receiving a message. For example, the text obtained to be included in a text input portion of a user interface of the software application for transmitting and receiving an email may include the user's email address obtained through the software application for the contact, and text obtained to be included in a text input portion in a user interface of the software application for transmitting and receiving a message may include the user's phone number obtained through the software application for the contact.
For example, in case that the software application is a software application for transmitting and receiving an email, the text input portion may be a field that provides a function for inputting a title or content of the email transmitted through the software application. For example, in case that the text input portion is a field that provides a function for inputting a title of the email, text to be included in the text input portion may be a theme of the media content. For example, the processor 110 may identify the theme of the media content from the information regarding the media content, identified through at least a portion of the operations exemplified through other sub-table of contents of the table of contents 1. The processor 110 may obtain the text including at least one word indicating or representing the theme. For example, in case that the text input portion is a field that provides a function for inputting content of the email, text to be included in the text input portion may be a description of the media content. For example, the processor 110 may identify a context or state represented through the media content from the information regarding the media content, identified through at least a portion of the operations exemplified through other sub-table of contents of the table of contents 1. The processor 110 may obtain the text including at least one word indicating or representing the context or the state. For example, a length of the text to be included in the text input portion, which is a field that provides a function for inputting content of the email, may be longer than a length of text to be included in the text input portion, which is a field that provides a function for inputting a title of the email. For example, a length of the text to be included in the text input portion, which is a field that provides a function for inputting content of the email, may be longer than a length of text to be included in the text input portion of a software application for transmitting and receiving a message.
Meanwhile, in case that a plurality of text input portions is displayed with the media content, such as in a user interface of a software application for email, the processor 110 may obtain text to be included in each of the plurality of text input portions. However, it is not limited thereto.
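As an illustrative, non-limiting sketch, the attribute-dependent behavior described through the table of contents 1.13 may be modeled as a dispatch on the attribute of each text input portion. The Kotlin code below uses hypothetical types (FieldAttribute, Recipient, Body, MediaInfo) introduced only for this illustration.

// Hypothetical sketch: choose the text to obtain from the attribute of the
// text input portion, e.g., a recipient field versus a body field.
sealed interface FieldAttribute
data class Recipient(val channel: String) : FieldAttribute // "email" or "message"
data class Body(val maxChars: Int) : FieldAttribute

data class MediaInfo(
    val email: String?,
    val phone: String?,
    val description: String,
)

fun textForField(attr: FieldAttribute, info: MediaInfo): String = when (attr) {
    is Recipient -> if (attr.channel == "email") info.email.orEmpty() else info.phone.orEmpty()
    is Body -> info.description.take(attr.maxChars) // respect the maximum number of characters
}

fun main() {
    val info = MediaInfo(
        email = "susan@example.com",
        phone = "010-1234-5678",
        description = "Susan eating delicious soup in Seoul in the afternoon",
    )
    println(textForField(Recipient("email"), info))  // recipient text for an email field
    println(textForField(Body(maxChars = 20), info)) // truncated description for a body field
}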
1.14 Method for Obtaining Text Included in a Text Input Portion Displayed with a Plurality of Media Contents
For example, a plurality of media contents may be displayed with the text input portion. For example, in case that the plurality of media contents is displayed with the text input portion, text to be included in the text input portion may indicate or represent common information of the plurality of media contents. For example, the processor 110 may identify categories of the plurality of media contents, based on an event that displays the plurality of media contents together with the text input portion. For example, the processor 110 may identify categories of first media content and identify categories of second media content, based on the event. For example, the categories of the first media content and the categories of the second media content may be identified through at least a portion of the operations exemplified through other sub-table of contents of the table of contents 1. For example, the categories of the first media content and the categories of the second media content may be identified based on processing executed before identifying the event or processing executed in response to the event. For example, the processor 110 may identify a common category of the first media content and the second media content through comparison between the categories of the first media content and the categories of the second media content. For example, the processor 110 may obtain the text based on the common category. For example, the processor 110 may identify that there is no common category of the first media content and the second media content through comparison between the categories of the first media content and the categories of the second media content. For example, the processor 110 may identify upper categories of the categories of the first media content and identify upper categories of the categories of the second media content, based on the identification. For example, the processor 110 may identify a common category through comparison between the upper categories regarding the first media content and the upper categories regarding the second media content. For example, the processor 110 may obtain the text based on the common category. However, it is not limited thereto.
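As an illustrative, non-limiting sketch, the common-category comparison described above, including the fallback to upper categories, may be modeled as set intersection. The Kotlin function below (commonCategory, a hypothetical name) illustrates one possible form.

// Hypothetical sketch: find a category shared by two media contents; when no
// direct match exists, compare their upper (parent) categories instead.
fun commonCategory(
    a: Set<String>,
    b: Set<String>,
    parentOf: Map<String, String>,
): String? {
    val direct = a intersect b
    if (direct.isNotEmpty()) return direct.first()
    val upperA = a.mapNotNull { parentOf[it] }.toSet()
    val upperB = b.mapNotNull { parentOf[it] }.toSet()
    return (upperA intersect upperB).firstOrNull()
}

fun main() {
    val parents = mapOf("pasta" to "food", "soup" to "food")
    println(commonCategory(setOf("pasta"), setOf("soup"), parents)) // "food"
}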
1.15 Method of Obtaining Text Based on a User Input for Items Indicating Recommended Words
For example, the processor 110 may identify the information regarding the media content through at least a portion of the operations exemplified through other sub-table of contents of the table of contents 1, and identify recommended words (e.g., keywords or categories) associated with the media content based on the information. For example, the processor 110 may display items respectively indicating the recommended words together with the text input portion. For example, the items may be adjacent to the text input portion. For example, the processor 110 may identify at least one word indicated by the at least one item, based on a user input for at least one item from among the items. For example, the processor 110 may obtain the text to be included in the text input portion based on the at least one word. However, it is not limited thereto.
1.16 Method of Obtaining Text by Arranging Recommended Words According to a Sentence Format
For example, the processor 110 may identify the information regarding the media content through at least a portion of the operations exemplified through other sub-table of contents of the table of contents 1 and identify recommended words (e.g., keywords or categories) associated with the media content based on the information. For example, the processor 110 may obtain the text to be included in the text input portion by classifying the recommended words and arranging the recommended words according to a sentence format based on the classification. For example, the processor 110 may identify, among the recommended words, at least one first word indicating a place, at least one second word indicating a time, at least one third word indicating a context (or occasion), at least one fourth word indicating at least one user, and at least one fifth word indicating a type of the media content. For example, the processor 110 may obtain the text by arranging the recommended words in the order of the at least one first word, the at least one second word, the at least one third word, the at least one fourth word, and the at least one fifth word, in the Korean language. For example, the processor 110 may arrange the at least one third word in consideration of a sentence format. For example, the processor 110 may obtain the text by arranging the at least one third word in the order of a word capable of being an adjective, a word capable of being an object, and a word capable of being a complement, in the Korean language. For example, in case that the recommended words are “in the afternoon”, “photo”, “in Seoul”, “soup”, “delicious”, “Susan”, and “eating”, the processor 110 may obtain text, which is “In Seoul, in the afternoon, Susan eating delicious soup photo” in the exemplified order in the Korean language. However, it is not limited thereto.
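As an illustrative, non-limiting sketch, the arrangement described above may be modeled as sorting classified words by a fixed role order. The Kotlin code below uses a hypothetical Role enumeration introduced only for this illustration; the role order shown follows the Korean-language example above.

// Hypothetical sketch: arrange classified recommended words in the order
// place, time, context, user, media type.
enum class Role { PLACE, TIME, CONTEXT, USER, MEDIA_TYPE }

fun arrange(words: List<Pair<String, Role>>): String =
    words.sortedBy { it.second.ordinal }.joinToString(" ") { it.first }

fun main() {
    val classified = listOf(
        "photo" to Role.MEDIA_TYPE,
        "in Seoul" to Role.PLACE,
        "Susan eating delicious soup" to Role.CONTEXT,
        "in the afternoon" to Role.TIME,
    )
    println(arrange(classified)) // "in Seoul in the afternoon Susan eating delicious soup photo"
}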
1.17 Method of Changing at Least a Portion of Text Based on a User Input
For example, the processor 110 may display a text input portion including text obtained through at least a portion of the operations exemplified through other sub-table of contents of the table of contents 1. together with the media content. For example, the processor 110 may receive a user input for changing at least a portion of the text while displaying the text within the text input portion. For example, in response to a user input selecting at least one of the words within the text, the processor 110 may identify candidate words replacing the at least one word from the information regarding the media content, identified through at least a portion of the operations exemplified through other sub-table of contents of the table of contents 1. For example, the processor 110 may display items respectively indicating the candidate words, together with the text input portion including the text including the at least one word selected by the user input. For example, the displayed items may be adjacent to the text input portion. For example, the processor 110 may change the at least one word into at least one other word indicated by the at least one item, in response to a user input for selecting at least one of the items. For example, the processor 110 may obtain text in which the at least one other word is substituted for the at least one word.
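As an illustrative, non-limiting sketch, replacing a selected word with a candidate word may be modeled as a token-level substitution. The Kotlin function below (replaceWord, a hypothetical name) illustrates one possible form.

// Hypothetical sketch: replace a word selected by a user input with a
// candidate word indicated by a selected item.
fun replaceWord(text: String, selected: String, replacement: String): String =
    text.split(" ").joinToString(" ") { if (it == selected) replacement else it }

fun main() {
    val text = "Susan eating delicious soup photo"
    println(replaceWord(text, "soup", "pasta")) // "Susan eating delicious pasta photo"
}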
As described above, in case that the media content is displayed with the text input portion, the electronic device 100 may obtain the text to be included in the text input portion through at least a portion of the operations exemplified through the table of contents 1. For example, since obtaining the text in this way may bypass or replace at least a portion of inputting the text through a text input means such as a virtual keyboard, the electronic device 100 may provide a service that simplifies a user input for inputting the text through the text input means. For example, since the time taken to receive the user input for inputting the text through the text input means is reduced by obtaining the text, the electronic device 100 may provide faster responsiveness.
For example, the text obtained through at least a portion of the operations exemplified through the table of contents 1. may be displayed within the text input portion via various methods. For example, the processor 110 may display the text within the text input portion displayed with the media content, by executing at least a portion of operations to be exemplified through a table of contents “2. Method of displaying text within a text input portion” below. For example, the electronic device 100 may have an ability to execute at least a portion of the operations exemplified through the table of contents “2. Method of displaying text within a text input portion” below.
2. Method of Displaying Text within a Text Input Portion
2.1 Method of Displaying Text within a Text Input Portion in Response to Obtaining Text
For example, the processor 110 may display the text within the text input portion, in response to obtaining the text through at least a portion of the operations exemplified through the table of contents 1. For example, the processor 110 may display the text within the text input portion before receiving a user input confirming that the text is displayed within the text input portion, in response to obtaining the text. In an embodiment, representation of the text may correspond to representation of the media content. For example, in case that the media content includes text, a font of the text displayed within the text input portion may correspond to a font of the text within the media content. For example, in case that the media content is an image, a color of the text displayed within the text input portion may correspond to a color of a visual object within the media content. However, it is not limited thereto.
2.2 Method of Displaying Text within a Text Input Portion after Displaying Items Indicating Candidate Texts
For example, the processor 110 may obtain a plurality of texts through at least a portion of the operations exemplified through the table of contents 1. For example, the processor 110 may display items respectively indicating the plurality of texts, before displaying the text within the text input portion. In response to a user input of selecting an item from among the items, the processor 110 may display text indicated by the item selected by the user input.
2.3 Method of Displaying Text within a Text Input Portion Together with Items Respectively Indicating Recommended Words
For example, in response to obtaining the text, the processor 110 may display items respectively indicating second words replacing first words in the text, together with the text input portion including the text. For example, in response to a user input selecting at least one of the items, the processor 110 may replace at least one word among the first words displayed in the text input portion with at least one other word indicated by the at least one selected item.
In an embodiment, the items may be displayed based on a predetermined user input. For example, the processor 110 may display the text within the text input portion, in response to obtaining the text. For example, the processor 110 may identify third words associated with the word among the second words in response to a user input selecting one word in the text within the text input portion, and display items respectively indicating the third words. In an embodiment, in case that the items have a hierarchical relationship, the items may be displayed as a knowledge graph. For example, in case that the third words include a word indicating a first category and a word indicating a second category, which is an upper or lower category of the first category, the word indicating the first category and the word indicating the second category may be displayed as a knowledge graph. However, it is not limited thereto. Meanwhile, in response to a user input selecting at least one of the items, the processor 110 may replace at least one word among the first words displayed in the text input portion with at least one other word indicated by the at least one selected item.
2.4 Method of Displaying Text within a Text Input Portion Together with an Item for Identifying Whether to Add at Least One Word Based on a User Input
For example, in response to obtaining the text, the processor 110 may display an item for identifying whether to add at least one word to the text, together with the text input portion including the text. For example, in response to a user input for the item, the processor 110 may display the text further including the at least one word indicated by the item within the text input portion. For example, the at least one word may include privacy information. However, it is not limited thereto. Meanwhile, displaying the item may be ceased. For example, displaying the item may be ceased, in response to the display of the text further including the at least one word, or the user input. For example, displaying the item may be ceased, in response to another user input confirming the text before the user input is received. For example, displaying the item may be ceased after a predetermined time has elapsed from a time at which the item is displayed. However, it is not limited thereto.
2.5 Method of Processing Text Displayed within a Text Input Portion According to a Location of a Pointer
For example, the processor 110 may display the text input portion including the text in response to obtaining the text. For example, the text may be displayed with a pointer (e.g., a cursor) within the text input portion. For example, the processor 110 may cease displaying the text within the text input portion, according to a change in a location of the pointer. For example, the processor 110 may cease displaying the text within the text input portion, in response to a user input to move the pointer located at an end portion of the text to a beginning portion of the text. However, it is not limited thereto.
In an embodiment, a plurality of text input portions may be displayed with the media content within a user interface. For example, the processor 110 may display text obtained for the text input portion within a text input portion where the pointer is located among the plurality of text input portions. For example, remaining text input portions among the plurality of text input portions may be in an empty state, unlike the text input portion where the pointer is located. However, it is not limited thereto. For example, alternatively, the processor 110 may display a plurality of texts obtained respectively for the plurality of text input portions, within all the plurality of text input portions.
2.6 Method of Applying Text within a Text Input Portion to Another Portion
For example, the processor 110 may display the text input portion including the text, based on obtaining the text. For example, the processor 110 may receive a user input confirming the text while displaying the text input portion including the text. For example, in case that a user interface including the text input portion is a user interface of a software application for transmitting and receiving a message, the user input may be a touch input with respect to an executable object for transmitting the text to an external electronic device. For example, in case that a user interface including the text input portion is a user interface of a software application for managing a schedule, the user input may be a touch input with respect to an executable object for storing a schedule or registering a schedule in a calendar. However, it is not limited thereto.
For example, the processor 110 may apply the text to another region (or portion) distinct from the text input portion, in response to the user input.
For example, the processor 110 may display the text within another region of the user interface having a function of changing information. For example, in case that the user interface is a user interface of a software application for a social network service, the processor 110 may display a tag including at least a portion of the text in the user interface.
For example, the processor 110 may apply the text outside the user interface. For example, the processor 110 may set a name of a file for the media content as at least a portion of the text. For example, the processor 110 may include at least a portion of the text in metadata of the media content. However, it is not limited thereto.
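As a sketch of applying the text outside the user interface, the following illustrative Kotlin code derives a file name and a metadata entry from obtained text. The sanitization rule, length limits, and field names are assumptions for illustration only.

```kotlin
// Hypothetical sketch: set a file name for the media content from the text,
// and include a portion of the text in metadata (modeled as a simple map).

private val illegal = Regex("""[\\/:*?"<>|]""")  // characters unsafe in file names

fun fileNameFrom(text: String, extension: String, maxLength: Int = 60): String {
    val base = illegal.replace(text, " ")
        .replace(Regex("\\s+"), " ")  // collapse runs of whitespace
        .trim()
        .take(maxLength)
    return "$base.$extension"
}

fun main() {
    val text = "Birthday dinner: Jenny & Soo"
    println(fileNameFrom(text, "jpg"))  // "Birthday dinner Jenny & Soo.jpg"

    // Including at least a portion of the text in metadata.
    val metadata = mutableMapOf("description" to text.take(120))
    println(metadata)
}
```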
In an embodiment, obtaining the text through at least a portion of the operations exemplified in section 1 above may be executed based on providing the media content from a first software application to a second software application that provides a user interface including the text input portion. For example, the media content may be provided to the second software application in order to use at least one other function distinct from functions supported by the first software application. For example, the media content may be provided to the second software application for transmission to an external electronic device. However, it is not limited thereto.
For example, various inputs may be used to provide the media content from the first software application to the second software application.
For example, the processor 110 may provide the media content from the first software application to the second software application, based on an input with respect to an executable object in a user interface of the first software application. For example, the executable object, which is an object for providing the media content, may be an object for executing a function provided through a framework.
For example, in response to a user input moving the media content displayed within a user interface of the first software application to a user interface of the second software application displayed with the user interface of the first software application, the processor 110 may provide the media content from the first software application to the second software application. However, it is not limited thereto.
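The provide-media-content event can be modeled abstractly as below. This is not a real platform API but an illustrative Kotlin sketch, in which the receiving application exposes the attribute information of its text input portion and receives prefilled text together with the content; every type and name here is an assumption.

```kotlin
// Hypothetical sketch of providing media content from one application to another,
// with the target reporting its text input portion's attribute information.

data class MediaContent(val uri: String, val tags: List<String>)

data class TextInputAttributes(val maxChars: Int, val linkedApp: String?)

interface ShareTarget {
    val name: String
    fun textInputAttributes(): TextInputAttributes
    fun receive(content: MediaContent, prefilledText: String)
}

fun provide(content: MediaContent, target: ShareTarget) {
    val attrs = target.textInputAttributes()
    // Obtain text indicating a portion of the recognition info, based on attributes.
    // Naive truncation is used here purely for illustration.
    val text = content.tags.joinToString(" ").take(attrs.maxChars)
    target.receive(content, text)
}

fun main() {
    val messenger = object : ShareTarget {
        override val name = "messenger"
        override fun textInputAttributes() = TextInputAttributes(maxChars = 20, linkedApp = null)
        override fun receive(content: MediaContent, prefilledText: String) =
            println("$name shows ${content.uri} with '$prefilledText'")
    }
    provide(MediaContent("content://photo/1", listOf("birthday", "dinner", "river")), messenger)
}
```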
As described above, a method of obtaining text to be included in a text input portion displayed with media content and a method of displaying the text in the text input portion may be implemented as the following examples. However, it should be noted that combinations of the above-described operations are not limited to the following examples.
The user interfaces illustrated in FIG. 3 may be displayed through the display of the electronic device 100.
Referring to FIG. 3, while displaying media content 311 within a user interface of a first software application, the processor 110 may identify an event providing the media content 311 to a second software application.
For example, the processor 110 may identify the event by identifying a user input 313 to an executable object 312 for executing the second software application, which is a software application for transmitting and receiving a message. In response to the event, the processor 110 may display a first text input portion 321 and a second text input portion 322 together with the media content 311 within a user interface 320 of the second software application. For example, the first text input portion 321 may include first text 323 obtained based on attribute information of the first text input portion 321. For example, the first text 323 may be obtained from a third software application for managing a contact. For example, the second text input portion 322 may include second text 324 obtained based on attribute information of the second text input portion 322. For example, the second text 324 may have a length identified based on the maximum number of characters capable of being displayed within the second text input portion 322. For example, the second text 324 may indicate a context or state represented by the media content 311. For example, the second text 324 may be at least partially different from the first text 323. However, it is not limited thereto.
For example, the processor 110 may identify the event by identifying a user input 315 to an executable object 314 for executing the second software application, which is a software application for transmitting and receiving an email. In response to the event, the processor 110 may display a first text input portion 331 and a second text input portion 332 together with the media content 311 within a user interface 330 of the second software application. For example, the first text input portion 331 may include first text 333 obtained based on attribute information of the first text input portion 331. For example, the first text 333 may be obtained from a third software application for managing a contact. For example, since the first text 333 is obtained based on attribute information of the first text input portion 331, representation of the first text 333 corresponds to representation of the first text 323, but the first text 333 may be text for an email address while the first text 323 may be text for a phone number. For example, the second text input portion 332 may include second text 334 obtained based on attribute information of the second text input portion 332. For example, since the maximum number of characters capable of being displayed within the second text input portion 332 is greater than the maximum number of characters capable of being displayed within the second text input portion 322, and since a size of the second text input portion 332 is larger than a size of the second text input portion 322, the second text 334 may include more detailed information than the second text 324.
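A small illustrative sketch of the length-adaptive selection just described: candidate descriptions of increasing detail are prepared, and the most detailed candidate that fits the portion's maximum character count is chosen. The candidate strings and limit values are assumptions.

```kotlin
// Hypothetical sketch: pick the longest candidate description that still fits
// the text input portion's maximum displayable character count.

fun pickText(candidates: List<String>, maxChars: Int): String =
    candidates.sortedByDescending { it.length }
        .firstOrNull { it.length <= maxChars }
        ?: (candidates.minByOrNull { it.length } ?: "").take(maxChars)

fun main() {
    val candidates = listOf(
        "Dinner",
        "Dinner by the river",
        "Dinner by the river with a sunset view at the rooftop grill"
    )
    println(pickText(candidates, maxChars = 30))   // smaller portion: shorter text
    println(pickText(candidates, maxChars = 120))  // larger portion: more detail
}
```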
For example, the processor 110 may identify the event by identifying a user input 317 for an executable object 316 to execute the second software application, which is a software application for managing a schedule. In response to the event, the processor 110 may display a first text input portion 341, a second text input portion 342, and a third text input portion 343 together with the media content 311 within a user interface 340 of the second software application. For example, the first text input portion 341 may include first text 344 obtained based on attribute information of the first text input portion 341. For example, a length of the first text 344 may be identified based on a size of the first text input portion 341. For example, the first text 344 may be identified based on a context represented through the media content 311. For example, the second text input portion 342 may include second text 345 obtained based on attribute information of the second text input portion 342. For example, the second text 345 may be obtained by searching a third software application for managing locations, using a keyword obtained based on information regarding the media content 311. Although not shown in FIG. 3, the third text input portion 343 may include third text obtained based on attribute information of the third text input portion 343.
In an embodiment, the processor 110 may obtain data represented as shown in Table 1 below, by analyzing the media content 311 stored (or at least temporarily stored) in the electronic device 100.
In an embodiment, the processor 110 may predetermine information (e.g., text) to be included in a text input portion (or input portion) within a user interface of each of one or more software applications, based on the data represented as shown in Table 1. For example, the predetermined information may be represented as shown in Table 2 below.
For example, in Table 2, the first software application may be a software application for managing a schedule, the second software application may be a software application for transmitting and receiving a message, the third software application may be a software application for transmitting and receiving an email, and the fourth software application may be a software application for health management. For example, text to be included in a text input portion of the first software application may be identified based on identifying that the first software application has a date input portion. For example, text to be included in a text input portion within each of the second software application and the third software application may be identified by changing date information of the data. However, it is not limited thereto.
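Since Tables 1 and 2 are not reproduced here, the following Kotlin sketch only illustrates the kind of per-application mapping they describe; the field names, application keys, and date formats are assumptions for illustration.

```kotlin
// Hypothetical sketch of per-application text generation: recognition data is
// mapped to per-portion text, with the date field reformatted per application.

import java.time.LocalDate
import java.time.format.DateTimeFormatter

data class RecognitionData(val title: String, val place: String, val date: LocalDate)

fun textFor(app: String, data: RecognitionData): Map<String, String> = when (app) {
    // Schedule app: has a date input portion, so the date is passed separately.
    "schedule" -> mapOf(
        "title" to data.title,
        "location" to data.place,
        "date" to data.date.format(DateTimeFormatter.ISO_LOCAL_DATE)
    )
    // Message/email apps: date information is changed into conversational text.
    "message" -> {
        val month = data.date.month.name.lowercase().replaceFirstChar { it.uppercase() }
        mapOf("body" to "${data.title} at ${data.place} on $month ${data.date.dayOfMonth}")
    }
    else -> mapOf("body" to data.title)
}

fun main() {
    val data = RecognitionData("Team dinner", "Gangnam", LocalDate.of(2022, 5, 29))
    println(textFor("schedule", data))
    println(textFor("message", data))
}
```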
As described above, the electronic device 100 may adaptively obtain text to be displayed in a text input portion, based on attribute information of the text input portion displayed together with media content. For example, through this adaptive acquisition, the electronic device 100 may provide different usage environments according to the type of software application providing a user interface including a text input portion displayed with the media content.
The user interfaces illustrated in FIG. 4 may be displayed through the display of the electronic device 100.
Referring to FIG. 4, the processor 110 may display a plurality of text input portions together with media content within a user interface 420, and may obtain text 424 for the media content by using resources for searching for the media content.
In an embodiment, the processor 110 may display the text 424 within the second text input portion 422, or display the text 424 within the third text input portion 423.
In an embodiment, the processor 110 may display a window 425 including the text 424, superimposed on the user interface 420, in response to obtaining the text 424. For example, the processor 110 may display the text 424 in the second text input portion 422, in response to a user input 426 that inserts the window 425 into the second text input portion 422. For example, the processor 110 may display the text 424 within the third text input portion 423, in response to a user input 427 that inserts the window 425 into the third text input portion 423.
As described above, the electronic device 100 may identify text to be included in a text input portion displayed together with media content, by using resources for searching for the media content. Since the identification of the text may simplify a user input for inputting the text, the electronic device 100 may provide an enhanced user experience. For example, in case that a plurality of text input portions is displayed together with media content, the electronic device 100 may provide an item for setting a location where the text 424 will be included, such as the window 425.
The user interfaces illustrated in FIG. 5 may be displayed through the display of the electronic device 100.
Referring to FIG. 5, the processor 110 may identify text to be included in a text input portion displayed together with media content, based on information regarding other media content adjacent to the media content.
As described above, the electronic device 100 may identify text to be included in a text input portion displayed together with media content, based on information regarding other media content adjacent to the media content. Since the adjacency of the other media content to the media content may mean that the other media content is associated with the media content, the electronic device 100 may identify the text corresponding to a context indicated by the media content. For example, since the identification of the text may simplify a user input for inputting the text, the electronic device 100 may provide an enhanced user experience.
The user interfaces illustrated in FIG. 6 may be displayed through the display of the electronic device 100.
Referring to FIG. 6, the processor 110 may identify text to be included in a text input portion displayed together with at least one media content, based on a name of a folder including at least one executable object for at least one software application associated with the at least one media content.
As described above, the electronic device 100 may identify, based on a name of a folder including at least one executable object for at least one software application associated with at least one media content, text to be included in a text input portion displayed together with the at least one media content. Since the name of the folder may correspond to a service provided through the at least one software application, the text may correspond to the at least one media content. For example, since the identification of the text may simplify a user input for inputting the text, the electronic device 100 may provide an enhanced user experience.
The user interfaces illustrated in FIG. 7 may be displayed through the display of the electronic device 100.
Referring to FIG. 7, the processor 110 may display text 715 within a text input portion, together with a window 721 including first items 722, a second item 723, and a third item 724 for editing the text 715, and may receive a user input for at least one of the first items 722, the second item 723, and the third item 724.
In an embodiment, the processor 110 may learn a pattern of a keyword (or category) used for editing the text 715, based on the user input for the at least one of the first items 722, the second item 723, and the third item 724. For example, based on a pattern identified based on the learning, the processor 110 may display a portion of the first items 722, the second item 723, and the third item 724 within the window 721, or visually highlight a portion of the first items 722, the second item 723, and the third item 724 within the window 721. For example, the processor 110 may display the first items 722, the second item 723, and the third item 724 within the window 721 in an arrangement corresponding to the pattern.
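A minimal sketch of the pattern learning described above, assuming a simple per-category selection counter; the class and category names are illustrative, not the disclosed training method.

```kotlin
// Hypothetical sketch: count selections per keyword category, then arrange
// items by frequency and pick the most-used one for visual highlighting.

class ItemPattern {
    private val counts = mutableMapOf<String, Int>()

    fun recordSelection(category: String) {
        counts.merge(category, 1, Int::plus)
    }

    // Arrange items so the most frequently used categories come first.
    fun arrange(items: List<String>): List<String> =
        items.sortedByDescending { counts[it] ?: 0 }

    // The item to highlight, if any category has been used at least once.
    fun highlighted(items: List<String>): String? =
        items.maxByOrNull { counts[it] ?: 0 }?.takeIf { (counts[it] ?: 0) > 0 }
}

fun main() {
    val pattern = ItemPattern()
    repeat(3) { pattern.recordSelection("place") }
    pattern.recordSelection("mood")

    val items = listOf("person", "mood", "place")
    println(pattern.arrange(items))      // [place, mood, person]
    println(pattern.highlighted(items))  // place
}
```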
As described above, the electronic device 100 may provide an enhanced user experience through the display of the first items 722, the second item 723, and the third item 724. For example, since the electronic device 100 provides not only the first items 722 but also the second item 723 and the third item 724, the electronic device 100 may provide various options for the text 715.
The user interfaces illustrated in FIG. 8 may be displayed through the display of the electronic device 100.
Referring to FIG. 8, the processor 110 may display a plurality of items 813 usable for editing text within a user interface 810, and may receive a user input for at least one of the plurality of items 813.
In an embodiment, the processor 110 may learn a usage pattern of the plurality of items 813, based on the user input. Based on the usage pattern identified based on the learning, the processor 110 may display a portion of the plurality of items 813 within the user interface 810, or may visually highlight a portion of the plurality of items 813 within the user interface 810. For example, the processor 110 may display the plurality of items 813 within the user interface 810 in an arrangement corresponding to the usage pattern. As described above, the electronic device 100 may provide an enhanced user experience through the display of the plurality of items 813.
The user interfaces illustrated in FIG. 9 may be displayed through the display of the electronic device 100.
Referring to FIG. 9, the processor 110 may display media content 911 included in a classification 917 defined within a software application, together with a text input portion 913 and items 914 obtained based on a name of the classification 917.
For example, the name of the classification 917 may be applied not only to text to be included in a text input portion but also to objects displayed around the text input portion 913, such as the items 914. For example, the processor 110 may include the name of the classification 917 (e.g., interior 922) within text to be included in a text input portion 921 displayed with the media content 911 within a user interface 920 of a third software application for reminders.
As described above, the electronic device 100 may obtain text to be included in a text input portion displayed with media content included in a classification defined within a software application, based on a name of the classification. For example, the electronic device 100 may provide a service corresponding to a user intention by obtaining the text.
The user interfaces illustrated in FIG. 10 may be displayed through the display of the electronic device 100.
Referring to FIG. 10, the processor 110 may identify categories of each of a plurality of media contents displayed with a text input portion 1025, and may identify a category common to the plurality of media contents and an upper category of the identified categories.
For example, the processor 110 may, based on the identifications, identify “food”, “person”, and “photo” as words to be included in the text input portion 1025. The processor 110 may display text 1026 obtained by arranging or disposing the words, within the text input portion 1025.
As described above, when a plurality of media contents is displayed with a text input portion, the electronic device 100 may identify a common category and an upper category of categories of the plurality of media contents, and identify text to be inserted into the text input portion based on the upper category and the common category. For example, since the text is identified based on the upper category and the common category, the text may correspond to each of the plurality of media contents.
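The common/upper category logic can be sketched as a longest-common-prefix computation over category paths, assuming each media content carries a path from a root taxonomy; the taxonomy and words below are illustrative assumptions.

```kotlin
// Hypothetical sketch: the deepest category shared by all media contents is the
// longest common prefix of their category paths in a taxonomy tree.

fun commonCategory(paths: List<List<String>>): List<String> {
    if (paths.isEmpty()) return emptyList()
    var prefix = paths.first()
    for (path in paths.drop(1)) {
        prefix = prefix.zip(path).takeWhile { (a, b) -> a == b }.map { it.first }
    }
    return prefix
}

fun main() {
    // Category paths for two media contents shown with one text input portion.
    val photoOfPasta = listOf("photo", "food", "pasta")
    val photoOfFriends = listOf("photo", "person", "group")

    val shared = commonCategory(listOf(photoOfPasta, photoOfFriends))
    println(shared)  // [photo], the upper category common to both

    // Words for the text: the common upper category plus each content's own category.
    val words = shared + listOf("food", "person")
    println(words.joinToString(" "))  // "photo food person"
}
```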
The user interfaces illustrated in FIG. 11 may be displayed through the display of the electronic device 100.
Referring to FIG. 11, in response to a user input changing a word in text displayed within a text input portion, the processor 110 may identify additional keywords based on the changed word and display items indicating the additional keywords.
As described above, the electronic device 100 may enhance a user experience of changing text within a text input portion, by identifying additional keywords based on the changed word in text within the text input portion.
The user interfaces illustrated in FIG. 12 may be displayed through the display of the electronic device 100.
Referring to FIG. 12, the processor 110 may receive a user input 1222 selecting a word within text displayed in a text input portion 1221.
For example, the processor 110 may display a window 1240 based on the user input 1222. For example, the window 1240 may include items 1241 indicating upper categories (e.g., meat and food) of a category of the word and lower categories (e.g., tenderloin, sirloin, and T-bone) of the category of the word. For example, the items 1241 may be displayed in the window 1240 in a form of a knowledge graph. For example, the processor 110 may receive a user input 1242 for the first item 1241-1 among the items 1241. The processor 110 may add, to the text, another word (e.g., "sirloin") corresponding to a category indicated by the first item 1241-1, in response to the user input 1242. For example, the processor 110 may display the text to which the other word is added within the text input portion 1221.
As described above, in response to a user input for setting a word within a text input portion, the electronic device 100 may identify upper categories and/or lower categories of a category of the word identified by the user input. The electronic device 100 may simplify a user input for setting the word by displaying items indicating the upper categories and/or the lower categories in association with the text input portion.
The user interfaces illustrated in FIG. 13 may be displayed through the display of the electronic device 100.
Referring to FIG. 13, the processor 110 may display a user interface 1310 including a text input portion 1312.
For example, the user interface 1310 may include the text input portion 1312 including media content 1321. For example, the processor 110 may display text 1323 identified based on information regarding the media content 1321 within the text input portion 1312. For example, representation of the text 1323 may be identified based on information regarding the media content 1321. For example, a font of the text 1323 may be a cursive font, which is a font of text 1325 in the media content 1321.
As described above, the electronic device 100 may display text in a text input portion displayed with media content with a representation corresponding to a representation of the media content. The electronic device 100 may indicate association between the text and the media content through such a display.
The user interfaces illustrated in FIG. 14 may be displayed through the display of the electronic device 100.
Referring to FIG. 14, the processor 110 may identify text to be included in a text input portion displayed with media content, based on a relationship between a first user associated with the media content and a second user of an external electronic device to which the media content is to be transmitted.
As described above, the electronic device 100 may identify, based on a relationship between a user associated with media content and a user of an external electronic device, text in a text input portion displayed with the media content. The electronic device 100 may display the text corresponding to a context within the text input portion, through such identification.
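An illustrative sketch of the relationship-aware wording above; the relationship table and the fallback rule are assumptions for illustration, not the disclosed method.

```kotlin
// Hypothetical sketch: the word chosen to indicate a person in the media content
// depends on how that person relates to the recipient of the content.

data class Person(val name: String)

// Relationship of the first user (in the media content) to the second user (recipient).
fun wordFor(subject: Person, relationToRecipient: String?): String = when (relationToRecipient) {
    "mother" -> "Mom"
    "self" -> "you"
    null -> subject.name      // no known relationship: fall back to the name
    else -> "your $relationToRecipient"
}

fun main() {
    val jenny = Person("Jenny")
    println("A photo of ${wordFor(jenny, "mother")}")  // "A photo of Mom"
    println("A photo of ${wordFor(jenny, null)}")      // "A photo of Jenny"
    println("A photo of ${wordFor(jenny, "sister")}")  // "A photo of your sister"
}
```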
The user interfaces illustrated in FIG. 15 may be displayed through the display of the electronic device 100.
Referring to FIG. 15, the processor 110 may display a text input portion 1512 including text 1513, obtained by excluding privacy information, together with media content within a user interface 1510.
In an embodiment, the processor 110 may further display an object 1514 for identifying whether to include privacy information not included in the text 1513 around the text input portion 1512. For example, the object 1514 may represent or indicate at least one word (e.g., Woosung Apartment in Gangnam Station) including the privacy information. For example, the processor 110 may further display a visual element 1515 for guiding a location where the at least one word represented by the object 1514 is included. However, it is not limited thereto. The processor 110 may receive a user input 1516 for the object 1514. The processor 110 may display the text input portion 1512 including text 1517 further including the at least one word represented by the object 1514, in response to the user input 1516.
For example, the processor 110 may identify a function or a service of a software application that provides a user interface including the text input portion and the media content when obtaining the text to be included in the text input portion, and identify whether to include the privacy information within the text, based on the identification. For example, unlike for the user interface 1510, the processor 110 may identify text 1560 including the privacy information, in response to an event displaying media content 1551 and a text input portion 1552 within a user interface 1550 of a software application providing information to a specific user. For example, the processor 110 may identify the text 1560 based on identifying that the recipient of an email including the text 1560 is the specific user. For example, the processor 110 may display, within the user interface 1550, the text input portion 1552 including the text 1560 that includes the privacy information, together with the media content 1551.
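A minimal sketch of the privacy decision above, assuming words are tagged as privacy-bearing and a recipient-based trust check; the tagging scheme, trusted set, and sample words are illustrative assumptions.

```kotlin
// Hypothetical sketch: privacy-bearing words are excluded from the composed text
// unless the service addresses a specific, trusted recipient.

data class Word(val text: String, val isPrivacy: Boolean = false)

fun composeText(words: List<Word>, recipient: String?, trusted: Set<String>): String {
    val includePrivacy = recipient != null && recipient in trusted
    return words.filter { !it.isPrivacy || includePrivacy }
        .joinToString(" ") { it.text }
}

fun main() {
    val words = listOf(
        Word("Housewarming"), Word("party"),
        Word("at Woosung Apartment", isPrivacy = true)
    )
    val trusted = setOf("soo@example.com")

    // Broadcast-style service (no specific recipient): privacy words excluded.
    println(composeText(words, recipient = null, trusted = trusted))

    // Email to a specific, known recipient: privacy words included.
    println(composeText(words, recipient = "soo@example.com", trusted = trusted))
}
```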
As described above, the electronic device 100 may identify whether to disclose the privacy information according to a type of a software application, and identify text to be included in a text input portion displayed with media content based on the identification. The electronic device 100 may protect the user's privacy through such identification.
The user interfaces illustrated in FIG. 16 may be displayed through the display of the electronic device 100.
Referring to FIG. 16, the processor 110 may display a text input portion 1612 together with media content 1611 within a user interface 1610 for a service associated with a document.
In an embodiment, text 1613 may include at least one word (e.g., “Jenny”, or “Jenny” and “Soo”) that indicates a writer of the media content 1611. For example, the processor 110 may obtain text 1613 including the at least one word indicating the writer of the media content 1611, based on metadata within the media content 1611. For example, the processor 110 may display the text input portion 1612 including the text 1613.
In an embodiment, the text 1613 may include a word indicating the writer (e.g., "Jenny") of the media content 1611 and a word indicating a user (e.g., "Soo") of the electronic device 100 displaying the user interface 1610. For example, the processor 110 may display the text input portion 1612 including the text 1613.
As described above, the electronic device 100 may enhance the convenience of managing a document, by including user information such as a writer of media content in text to be included in a text input portion displayed for a service associated with the document.
The user interfaces illustrated in FIG. 17 may be displayed through the display of the electronic device 100.
Referring to FIG. 17, the processor 110 may adaptively change a type of text to be included in a text input portion, according to a type of a software application providing a user interface including the text input portion.
For example, in response to an event displaying media content 1711 and a text input portion 1752 within a user interface 1750, the processor 110 may identify text 1753 to be included in the text input portion 1752 based on a type of a second software application providing the user interface 1750. For example, the processor 110 may obtain text 1753 including an executable object for accessing a web page, based on identifying that the second software application supports a function of displaying a web page on its own. For example, the processor 110 may display the text input portion 1752 including the text 1753. For example, the text 1753 may be identified based on performing a web search based on information regarding the media content 1711. However, it is not limited thereto.
As described above, the electronic device 100 may adaptively change a type of text in a text input portion, according to a type of a software application. For example, the electronic device 100 may provide text having an attribute corresponding to a characteristic of a software application. For example, the electronic device 100 may identify the text through not only a result of recognition of media content, but also additional processing of the result of the recognition. For example, the electronic device 100 may provide various types of information through the text, through such identification.
The user interfaces illustrated in FIG. 18 may be displayed through the display of the electronic device 100.
Referring to FIG. 18, while displaying a text input portion including text, the processor 110 may identify at least one media content associated with at least one word in the text from among a plurality of media contents stored in the electronic device 100, and may display at least one item indicating the at least one media content within a predetermined region.
As described above, the electronic device 100 may not only obtain text to be included in a text input portion displayed with media content based on the media content, but may also obtain media content to be displayed within a predetermined region based on the text included in the text input portion.
The user interfaces illustrated in FIG. 19 may be displayed through the display of the electronic device 100.
Referring to FIG. 19, the processor 110 may identify an event transmitting schedule information having a first format, registered through a first software application, to an external electronic device through a second software application.
For example, data sets in the schedule information having the first format may include at least one first data set supported within the second software application and at least one second data set not supported within the second software application. For example, the at least one first data set may be transmitted to the external electronic device through the second software application, while the at least one second data set may not be transmitted to the external electronic device.
Meanwhile, the external electronic device may receive a portion of the schedule information including the at least one first data set using the second software application installed in the external electronic device. For example, due to a limitation of the second software application, the external electronic device may not receive another portion of the schedule information including the at least one second data set. For example, in response to a user input received at the external electronic device, the external electronic device may register the portion of the schedule information in the first software application installed in the external electronic device, based on the at least one first data set. For example, without the at least one second data set, the schedule information registered in the electronic device 100 may not be completely transferred to the external electronic device.
For example, the processor 110 may identify a third software application that supports processing of the at least one second data set for the complete transfer of the schedule information, based on identifying that the second software application does not support processing of the at least one second data set. For example, in response to execution of the third software application, the processor 110 may transmit the at least one second data set to the external electronic device, based on the information regarding the external electronic device identified through the second software application. For example, the execution of the third software application may be in a background state, unlike the second software application executed in a foreground state. For example, the at least one second data set transmitted through the third software application may be transparent to a user.
Meanwhile, the external electronic device may receive the at least one second data set through the third software application installed in the external electronic device. Based on a user input for registering the portion of the schedule information in the first software application installed in the external electronic device through the second software application installed in the external electronic device, the external electronic device may identify the at least one first data set received through the second software application and the at least one second data set received through the third software application. Based on the identification, the external electronic device may restore the schedule information to the schedule information registered in the electronic device 100. The external electronic device may completely register the schedule information in the first software application based on the restoration. However, it is not limited thereto.
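The split-and-restore scheme above can be sketched as below, assuming schedule information is a flat list of key-value data sets and an illustrative set of messenger-supported keys; the merge on the receiving side models the restoration. All field names are assumptions.

```kotlin
// Hypothetical sketch: schedule fields are split into a set the messaging
// application supports and a remainder sent through a third application,
// then merged back into the complete schedule on the receiving side.

data class DataSet(val key: String, val value: String)

val supportedByMessenger = setOf("title", "date", "location")

fun split(schedule: List<DataSet>): Pair<List<DataSet>, List<DataSet>> =
    schedule.partition { it.key in supportedByMessenger }

// Receiving side: restore the complete schedule from both channels.
fun restore(viaMessenger: List<DataSet>, viaThirdApp: List<DataSet>): Map<String, String> =
    (viaMessenger + viaThirdApp).associate { it.key to it.value }

fun main() {
    val schedule = listOf(
        DataSet("title", "Team dinner"),
        DataSet("date", "2022-05-29"),
        DataSet("location", "Gangnam"),
        DataSet("reminder", "30 minutes before"),  // not supported by the messenger
        DataSet("attendees", "Jenny, Soo")         // sent via the third application
    )
    val (first, second) = split(schedule)
    println("messenger (foreground): $first")
    println("third app (background): $second")
    println("restored: ${restore(first, second)}")
}
```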
As described above, on a condition that the second software application does not completely support transmission of information, which includes media content and text within a text input portion, registered through the first software application, the electronic device 100 may provide a complete transfer by assisting the second software application using the third software application.
The operations exemplified above may also be combined as follows.
According to an embodiment, an electronic device (e.g., the electronic device 100) may comprise a display and a processor. According to an embodiment, the processor may be configured to, based on a recognition of media content stored in the electronic device, obtain information regarding the media content. According to an embodiment, the processor may be configured to identify an event providing the media content from a first software application to a second software application. According to an embodiment, the processor may be configured to identify attribute information of a text input portion in a user interface of the second software application. According to an embodiment, the processor may be configured to obtain text indicating at least a portion of the information, based on the attribute information. According to an embodiment, the processor may be configured to display, via the display, a text input portion including the text, with the media content, in the user interface of the second software application executed in response to the event.
According to an embodiment, the attribute information may include data indicating that the text input portion is associated with a third software application different from the first software application and the second software application. According to an embodiment, the text may be obtained by searching, using the at least a portion of the information, a database stored in a storage region allocated for the third software application identified based on the data.
According to an embodiment, the attribute information may include data indicating a maximum number of characters capable of being input in the text input portion. According to an embodiment, the text may be obtained by identifying the at least a portion of the information based on the data.
According to an embodiment, the processor may be configured to identify, from among a plurality of visual objects, a visual object corresponding to at least another portion of the information. According to an embodiment, the processor may be configured to display, via the display, in the user interface, with the media content and the text input portion that includes the text, the visual object.
According to an embodiment, the processor may be configured to display, via the display, in a user interface of the first software application, the media content, with an executable object for a function provided via a framework. According to an embodiment, the processor may be configured to, based at least in part on an input on the executable object, identify the event. According to an embodiment, the processor may be configured to, using the second software application executed in response to the event, identify the attribute information. According to an embodiment, the processor may be configured to, using the second software application, obtain the text.
According to an embodiment, the processor may be configured to display, via the display, in a user interface of the first software application, items respectively indicating categories of the media content, with the media content identified from among a plurality of media contents stored in the electronic device.
According to an embodiment, the processor may be configured to, in response to the event identified while displaying the items with the media content, further based on at least a portion of the categories, obtain the text.
According to an embodiment, the processor may be configured to display, via the display, in a user interface of the first software application, with the media content, another text. According to an embodiment, the processor may be configured to, in response to the event identified while displaying the other text with the media content, further based on at least a portion of the other text, obtain the text.
According to an embodiment, the processor may be configured to, while displaying the media content in a user interface of the first software application, identify the event. According to an embodiment, the processor may be configured to, in response to the event, further based on a name of a folder including an executable object for executing the first software application used for displaying the media content, obtain the text.
According to an embodiment, the processor may be configured to identify that other information is at least partially associated with the information, the other information obtained via a third software application before the event is identified. According to an embodiment, the processor may be configured to, further based on the other information, obtain the text.
According to an embodiment, the processor may be configured to identify that other information is at least partially associated with the information, the other information obtained via a third software application before the event is identified. According to an embodiment, the processor may be configured to display, via the display, in the user interface of the second software application executed in response to the event, with the media content, the text input portion including the text, and items respectively indicating keywords identified based on the other information. According to an embodiment, the processor may be configured to receive a user input regarding at least one item from among the items. According to an embodiment, the processor may be configured to, in response to the user input, based on at least one keyword indicated by the at least one item, change at least a portion of the text displayed in the user interface.
According to an embodiment, the processor may be configured to display, via the display, in a user interface of the first software application, items respectively indicating categories of a plurality of media contents stored in the electronic device and at least a portion of the plurality of media contents. According to an embodiment, the processor may be configured to identify the event while the media content, identified from among the plurality of media contents based on a user input regarding at least one item from among the items, is displayed in the user interface of the first software application. According to an embodiment, the processor may be configured to, based on the event, obtain the text further based on at least one category indicated by the at least one item.
According to an embodiment, the at least one item selected by the user input may be visually highlighted relative to remaining items from among the items.
According to an embodiment, the processor may be configured to obtain the information by identifying categories of objects in the media content through the recognition of the media content. According to an embodiment, the processor may be configured to identify, from among the categories, a category including the largest number of objects. According to an embodiment, the processor may be configured to, further based on the identified category, obtain the text.
According to an embodiment, the processor may be configured to, based on the recognition of the media content, obtain the information including data regarding at least one object included in a predetermined region from among objects in the media content. According to an embodiment, the processor may be configured to, further based on the data regarding the at least one object, obtain the text.
According to an embodiment, the processor may be configured to, while displaying, in a user interface of the first software application, the media content included in a classification from among classifications used in the first software application, identify the event. According to an embodiment, the processor may be configured to, further based on a name of the classification including the media content, obtain the text.
According to an embodiment, the processor may be configured to identify the event providing the media content and other media content from the first software application to the second software application. According to an embodiment, the processor may be configured to identify first text for the media content based on the information. According to an embodiment, the processor may be configured to identify second text for the other media content, based on other information obtained based on recognition of the other media content. According to an embodiment, the processor may be configured to identify an upper category including a category including a word in the first text and a category including a word in the second text. According to an embodiment, the processor may be configured to obtain the text including at least a portion of the information and at least a portion of the other information, further based on the upper category. According to an embodiment, the processor may be configured to display the text input portion including the text, together with the media content and the other media content, through the display, within the user interface of the second software application.
According to an embodiment, the processor may be further configured to display the text input portion including the text and items respectively indicating keywords identified based on the information, through the display, in the user interface of the second software application. According to an embodiment, the processor may be further configured to display at least one item indicating at least one keyword indicating the changed portion of the text through the display together with the items, in response to a first user input for changing a portion of the text. According to an embodiment, the processor may be further configured to display other text within the text input portion through the display, based on the at least one item and a second user input for selecting at least a portion of the items.
According to an embodiment, the processor may be further configured to receive a first user input for selecting a word of the words in the text displayed in the user interface of the second software application. According to an embodiment, the processor may be further configured to display items respectively indicating a category of the word, an upper category of the category, and a lower category of the category through the display, within the user interface of the second software application. According to an embodiment, the processor may be further configured to display the text in which the word is changed into at least another word, through the display within the text input portion, in response to a second user input for at least one of the items.
According to an embodiment, the electronic device may further include communication circuitry. According to an embodiment, the second software application may be a software application usable to transmit the media content to an external electronic device. According to an embodiment, the second software application may be a software application that does not provide a function of processing at least another portion of the information different from the at least a portion of the information. According to an embodiment, the processor may be further configured to transmit the at least another portion of the information to the external electronic device through the communication circuitry, by using a third software application. According to an embodiment, when the media content is provided from the second software application to the third software application or a fourth software application within the external electronic device, the at least another portion of the information may be transmitted from the electronic device to provide the at least another portion of the information within the external electronic device.
According to an embodiment, the user interface of the second software application may include the text input portion and another text input portion. According to an embodiment, the processor may be further configured to obtain the text, based on identifying that the text input portion from among the text input portion and the other text input portion is focused in response to the event. According to an embodiment, the processor may be further configured to obtain other text indicating at least a portion of the information based on attribute information of the other text input portion, in response to a user input of moving a pointer from the text input portion to the other text input portion while displaying the user interface of the second software application including the text input portion that includes the text. According to an embodiment, the processor may be further configured to display the other text input portion including the other text through the display within the user interface of the second software application.
According to an embodiment, representation of the text may correspond to representation of the media content or representation of an object in the media content.
According to an embodiment, the second software application may be a software application usable to transmit the media content to an external electronic device. According to an embodiment, the processor may be configured to obtain the information including data regarding a first user associated with the media content, based on the recognition of the media content. According to an embodiment, the processor may be configured to obtain data regarding a second user associated with the external electronic device. According to an embodiment, the processor may be configured to identify a word indicating the first user based on a relationship between the first user and the second user. According to an embodiment, the processor may be configured to obtain the text including the word.
According to an embodiment, the second software application may be a software application usable to transmit the media content to an external electronic device. According to an embodiment, the processor may be configured to identify the at least a portion of the information, further based on excluding privacy data from the information.
According to an embodiment, the processor may be further configured to display an item for including other text indicating the privacy data within the text input portion through the display within the user interface of the second software application, together with the text input portion including the text. According to an embodiment, the processor may be further configured to display the text input portion further including the other text through the display within the user interface of the second software application, in response to a user input for the item, and cease displaying the item within the user interface of the second software application.
According to an embodiment, the processor may be configured to identify a service provided through the second software application in response to the event. According to an embodiment, the processor may be configured to obtain the text further based on the service.
According to an embodiment, the processor may be further configured to identify another event of displaying a plurality of items respectively indicating a plurality of media contents stored in the electronic device, while displaying the text input portion including the text within the user interface of the second software application. According to an embodiment, the processor may be further configured to identify at least one media content associated with at least one word in the text from among the plurality of media contents. According to an embodiment, the processor may be further configured to display at least one item indicating the at least one media content through the display within a region displayed in response to the other event.
It should be appreciated that various embodiments of the disclosure and the terms used therein are not intended to limit the technological features set forth herein to particular embodiments and include various changes, equivalents, or replacements for a corresponding embodiment. With regard to the description of the drawings, similar reference numerals may be used to refer to similar or related elements. It is to be understood that a singular form of a noun corresponding to an item may include one or more of the things unless the relevant context clearly indicates otherwise. As used herein, each of such phrases as “A or B,” “at least one of A and B,” “at least one of A or B,” “A, B, or C,” “at least one of A, B, and C,” and “at least one of A, B, or C,” may include any one of or all possible combinations of the items enumerated together in a corresponding one of the phrases. As used herein, such terms as “1st” and “2nd,” or “first” and “second” may be used to simply distinguish a corresponding component from another, and do not limit the components in other aspects (e.g., importance or order). It is to be understood that if an element (e.g., a first element) is referred to, with or without the term “operatively” or “communicatively”, as “coupled with,” or “connected with” another element (e.g., a second element), it means that the element may be coupled with the other element directly (e.g., wiredly), wirelessly, or via a third element.
As used in connection with various embodiments of the disclosure, the term “module” may include a unit implemented in hardware, software, or firmware, and may interchangeably be used with other terms, for example, “logic,” “logic block,” “part,” or “circuitry”. A module may be a single integral component, or a minimum unit or part thereof, adapted to perform one or more functions. For example, according to an embodiment, the module may be implemented in a form of an application-specific integrated circuit (ASIC).
Various embodiments as set forth herein may be implemented as software including one or more instructions that are stored in a storage medium that is readable by a machine. For example, a processor of the machine (e.g., the electronic device 100) may invoke at least one of the one or more instructions stored in the storage medium, and execute it, with or without using one or more other components under the control of the processor. This allows the machine to be operated to perform at least one function according to the at least one instruction invoked. The one or more instructions may include a code generated by a compiler or a code executable by an interpreter. The machine-readable storage medium may be provided in the form of a non-transitory storage medium. Wherein, the term “non-transitory” simply means that the storage medium is a tangible device, and does not include a signal (e.g., an electromagnetic wave), but this term does not differentiate between a case in which data is semi-permanently stored in the storage medium and a case in which the data is temporarily stored in the storage medium.
According to an embodiment, a method according to various embodiments of the disclosure may be included and provided in a computer program product. The computer program product may be traded as a product between a seller and a buyer. The computer program product may be distributed in the form of a machine-readable storage medium (e.g., compact disc read only memory (CD-ROM)), or be distributed (e.g., downloaded or uploaded) online via an application store (e.g., PlayStore™), or between two user devices (e.g., smart phones) directly. If distributed online, at least part of the computer program product may be temporarily generated or at least temporarily stored in the machine-readable storage medium, such as memory of the manufacturer's server, a server of the application store, or a relay server.
According to various embodiments, each component (e.g., a module or a program) of the above-described components may include a single entity or multiple entities, and some of the multiple entities may be separately disposed in different components. According to various embodiments, one or more of the above-described components may be omitted, or one or more other components may be added. Alternatively or additionally, a plurality of components (e.g., modules or programs) may be integrated into a single component. In such a case, according to various embodiments, the integrated component may still perform one or more functions of each of the plurality of components in the same or similar manner as they are performed by a corresponding one of the plurality of components before the integration. According to various embodiments, operations performed by the module, the program, or another component may be carried out sequentially, in parallel, repeatedly, or heuristically, or one or more of the operations may be executed in a different order or omitted, or one or more other operations may be added.
It will be appreciated that various embodiments of the disclosure according to the claims and description in the specification can be realized in the form of hardware, software or a combination of hardware and software.
Any such software may be stored in non-transitory computer readable storage media. The non-transitory computer readable storage media store one or more computer programs (software modules), the one or more computer programs include computer-executable instructions that, when executed by one or more processors of an electronic device individually or collectively, cause the electronic device to perform a method of the disclosure.
Any such software may be stored in the form of volatile or non-volatile storage such as, for example, a storage device like read only memory (ROM), whether erasable or rewritable or not, or in the form of memory such as, for example, random access memory (RAM), memory chips, device or integrated circuits or on an optically or magnetically readable medium such as, for example, a compact disk (CD), digital versatile disc (DVD), magnetic disk or magnetic tape or the like. It will be appreciated that the storage devices and storage media are various embodiments of non-transitory machine-readable storage that are suitable for storing a computer program or computer programs comprising computer-executable instructions that, when executed, implement various embodiments of the disclosure. Accordingly, various embodiments provide a program comprising code for implementing apparatus or a method as claimed in any one of the claims of this specification and a non-transitory machine-readable storage storing such a program.
While the disclosure has been shown and described with reference to various embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims and their equivalents.
| Number | Date | Country | Kind |
|---|---|---|---|
| 10-2022-0065727 | May 2022 | KR | national |
| 10-2022-0078983 | Jun 2022 | KR | national |
This application is a continuation application, claiming priority under 35 U.S.C. § 365 (c), of an International application No. PCT/KR2023/004827, filed on Apr. 10, 2023, which is based on and claims the benefit of a Korean patent application number 10-2022-0065727, filed on May 29, 2022, in the Korean Intellectual Property Office, and of a Korean patent application number 10-2022-0078983, filed on Jun. 28, 2022, in the Korean Intellectual Property Office, the disclosure of each of which is incorporated by reference herein in its entirety.
| Number | Date | Country | |
|---|---|---|---|
| Parent | PCT/KR2023/004827 | Apr 2023 | WO |
| Child | 18964184 | US |