This application is based on and claims priority under 35 U.S.C. § 119(a) to Indian Patent Application No. 201741005717 (PS), which was filed in the Indian Patent Office on Feb. 17, 2017, and Indian Patent Application No. 201741005717 (CS), which was filed in the Indian Patent Office on Sep. 26, 2017, the disclosure of each of which is incorporated by reference herein in its entirety.
The present disclosure relates generally to content management, and more particularly, to a method and an electronic device for managing information of an application.
In general, a user of an electronic device performs a series of steps in order to complete a task within an application. Although user-interface designs have evolved, performing the series of steps within the application is still required for completing most tasks.
Accordingly, the present disclosure is designed to address at least the problems and/or disadvantages described above and to provide at least the advantages described below.
In accordance with an aspect of the present disclosure, a method is provided for managing information of an application in an electronic device. The method includes displaying a first user interface of the application on a screen of the electronic device, where the first user interface displays a first level of information of at least one data item of the application, detecting a first gesture input performed on the first user interface, determining a second level of information of the at least one data item based on a context of the at least one data item displayed in the first user interface, and displaying a second user interface including the second level of information of the at least one data item on the screen of the electronic device.
In accordance with another aspect of the present disclosure, a method is provided for managing information of an application in an electronic device. The method includes displaying a first user interface of the application on a screen of the electronic device, where the first user interface displays at least one first data item of the application, detecting a gesture input performed on the first user interface, determining at least one second data item based on a context of the at least one data item displayed in the first user interface, and displaying a second user interface including the at least one second data item of the application on the screen of the electronic device.
In accordance with another aspect of the present disclosure, an electronic device is provided for managing information of an application. The electronic device includes a memory storing the application; and a processor coupled to the memory. The processor is configured to control displaying of a first user interface of the application on a screen of the electronic device, where the first user interface displays a first level of information of at least one data item of the application, detect a first gesture input performed on the first user interface, determine a second level of information of the at least one data item based on a context of the at least one data item displayed in the first user interface, and control displaying of a second user interface including the second level of information of the at least one data item on the screen of the electronic device.
In accordance with another aspect of the present disclosure, an electronic device is provided for managing information of an application. The electronic device includes a processor and a memory storing the application. The processor is configured to control displaying of a first user interface of the application on a screen of the electronic device, where the first user interface displays at least one first data item of the application; and detect a first gesture input performed on the first user interface, determine at least one second data item based on a context of at least one data item displayed in the first user interface, and control displaying of a second user interface including the at least one second data item of the application on the screen of the electronic device.
The above and other aspects, features, and advantages of certain embodiments of the present disclosure will be more apparent from the following description taken in conjunction with the accompanying drawings, in which:
Various embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. In the following description, specific details such as detailed configuration and components are merely provided to assist the overall understanding of these embodiments of the present disclosure. Therefore, it should be apparent to those of ordinary skill in the art that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness.
Also, the various embodiments described herein are not necessarily mutually exclusive, as some embodiments can be combined with one or more other embodiments to form new embodiments.
The term “or”, as used herein, refers to a non-exclusive or, unless otherwise indicated. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein can be practiced and to further enable those of ordinary skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments described herein.
As is traditional in the field, embodiments may be described and illustrated in terms of blocks which carry out a described function or functions. These blocks, which may be referred to herein as units, engines, managers, modules, etc., are physically implemented by analog and/or digital circuits such as logic gates, integrated circuits, microprocessors, microcontrollers, memory circuits, passive electronic components, active electronic components, optical components, hardwired circuits, etc., and may optionally be driven by firmware and/or software. The circuits may, for example, be embodied in one or more semiconductor chips, or on substrate supports such as printed circuit boards, etc. The circuits constituting a block may be implemented by dedicated hardware, or by a processor (e.g., one or more programmed microprocessors and associated circuitry), or by a combination of dedicated hardware to perform some functions of the block and a processor to perform other functions of the block. Each block of the embodiments may be physically separated into two or more interacting and discrete blocks without departing from the scope of the disclosure. Likewise, the blocks of the embodiments may be physically combined into more complex blocks without departing from the scope of the disclosure.
Accordingly, the embodiments herein provide a method of managing information of an application in an electronic device. The method includes displaying a first user interface of the application on a screen of the electronic device, where the first user interface displays a first level of information of at least one data item of the application; and detecting a slide input performed on the first user interface. Further, the method includes determining a second level of information of the at least one data item based on a context of the at least one data item displayed in the first user interface and displaying a second user interface comprising the second level of information of the at least one data item on the screen of the electronic device.
Accordingly, the embodiments herein provide a method of managing information of an application in an electronic device. The method includes displaying a first user interface of the application on a screen of the electronic device, where the first user interface displays at least one first data item of the application; and detecting a slide input performed on the first user interface. Further, the method includes determining at least one second data item based on a context of the at least one data item displayed in the first user interface and displaying a second user interface comprising the at least one second data item of the application on the screen of the electronic device.
Referring to
Further, if the user wishes to read the entire content of the message 2, then the user may have to navigate back to the user interface of the messaging application 10 and select the message 2 to access it. Likewise, similar steps are performed by the user in order to explore the contents in the message 3.
Referring to
Conventionally, a portion of the content in the message 1 and the message 2 can be displayed to the user. Otherwise, in order to access and explore the entire content or more than the portion of the content in each of the message 1 and the message 2, the user still has to perform the aforementioned steps, thus degrading user experience while using the messaging application, or while using any other application of the electronic device in a similar manner.
Thus, it is desired to address the above mentioned disadvantages or other shortcomings or at least provide a useful alternative.
Unlike conventional methods and systems (e.g., as detailed in
Referring to
Referring to
The electronic device includes a processor 140, a memory 160 and a display 180.
The processor 140 can be configured to display the first user interface of the application on the screen of the electronic device. The first user interface displays the first level of information of at least one data item of the application.
For example, in a scenario in which the user of the electronic device accesses the messaging application displaying a list of messages received from various contacts, one of the messages (e.g., message 1) reads “Hi friend. How are you? What are your plans for the evening? Let's catch up at 6!”
In the above scenario, the user will be able to view only the first level of information, i.e., “Hi friend. How are you?”, of the message without opening the message. The proposed method can be used to provide the second user interface comprising the second level of information, without opening the message 1, as detailed below.
According to the proposed method, the processor 140 detects the first gesture provided by the user on the first user interface. In response to the detected first gesture, the processor 140 determines the availability of the second level of information of the message 1. Upon determining the availability of the second level of information, the processor 140 can control display of the second user interface comprising the second level of information of the message 1 on the screen of the electronic device.
As illustrated in the
Further, on detecting a subsequent gesture input, i.e., the second gesture input, the processor 140 may determine and control to display the third level of information of the message 1 on the screen of the electronic device. For example, the third level of information “Let's catch up at 6!” is displayed. Thus, the processor 140 can be configured to determine and control to display the entire content of the message 1, “Hi friend. How are you? What are your plans for the evening? Let's catch up at 6!” (based on the subsequent gestures), without requiring the user to access (navigate within) the message 1 displayed on the user interface of the messaging application. The first gesture input and the second gesture input may differ from each other in direction and/or gesture type, e.g., a slide gesture, a rail gesture, a swipe gesture, a flick gesture, a scroll gesture, a flex gesture, a tapping gesture, etc.
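Purely as an illustrative aid (and not as part of the claimed method), the following Kotlin sketch models such level-by-level disclosure of a message, assuming the message text is split into sentences and that each subsequent gesture reveals one more sentence; the Message type and the levelOfInformation function are names introduced here for clarity only.

```kotlin
// Minimal sketch: progressive disclosure of a message as the gesture count grows.
// Message and levelOfInformation are illustrative names, not from the disclosure.
data class Message(val sentences: List<String>)

// Returns the text revealed at a given level (1 = first level, 2 = second level, ...).
fun levelOfInformation(message: Message, level: Int): String =
    message.sentences.take(level.coerceIn(1, message.sentences.size)).joinToString(" ")

fun main() {
    val message = Message(listOf("Hi friend. How are you?",
                                 "What are your plans for the evening?",
                                 "Let's catch up at 6!"))
    var gestureCount = 1
    println(levelOfInformation(message, gestureCount))    // first level, before any gesture
    println(levelOfInformation(message, ++gestureCount))  // second level, after the first gesture
    println(levelOfInformation(message, ++gestureCount))  // third level, after the second gesture
}
```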
In another example, if the user accesses a live camera mode of a camera application, the field of view of the live camera displays a first level of information i.e., view of a street in which objects such as banks, stores, grocery shops, restaurants, etc., are displayed on a first user interface. When the processor 140 detects the gesture on the first user interface, then the processor 140 invokes the second user interface detailing a second level of information of the objects in the field of view of the live camera. The second level of information can include, for example, additional information of the objects in the field of view of the live camera mode such as offers currently running in the stores, menu details of the restaurants, review/ratings of the restaurant, details about the contacts who have checked in to the restaurants, etc., e.g., as illustrated in
The proposed method can also be used to automatically switch from a live camera mode to an augmented reality (AR) mode based on the context of the objects present in the field of view of the live camera mode of the camera application.
The processor 140 can be configured to interact with the hardware components in the electronic device to perform the functionalities of the corresponding hardware components.
The memory 160 may include non-volatile storage elements. Examples of such non-volatile storage elements may include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories. In addition, the memory 160 may, in some examples, be considered a non-transitory storage medium. The term “non-transitory” may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term “non-transitory” should not be interpreted to mean that the memory 160 is non-movable. In some examples, the memory 160 can be configured to store larger amounts of information than a volatile memory. In certain examples, a non-transitory storage medium may store data that can, over time, change (e.g., in Random Access Memory (RAM) or cache).
The display 180, based on receipt of a control command from the processor 140, manages the display of information in the first user interface and the second user interface displayed on the screen of the electronic device. The screen can include, for example, a touch screen, which may use a liquid crystal display (LCD) technology, a light emitting polymer display (LPD) technology, an organic light emitting diode (OLED) technology, or an organic electro luminescence (OEL) device, although other display technologies may be used in other embodiments.
Referring to
The first user interface displays the first level of information of the data item of the application. The application can include, for example, the messaging application, an instant messaging/chat application, a camera application, a browser, an address book, a contact list, an email application, location determination capability (such as that provided by the global positioning system (GPS)), a social networking service (SNS) application, etc. The first level of information can include, for example, a portion of the content associated with the data item, i.e., a single line of the text message in the case of the messaging application, the contact numbers in the case of the contact list, a captured picture in the case of the camera application, etc.
The gesture detector 122 is configured to receive the gesture performed by the user on the screen of the electronic device. The gesture can be, for example, a slide gesture, a rail gesture, a swipe gesture, a flick gesture, a scroll gesture, a flex gesture, etc. The gesture can be user defined, Original Equipment Manufacturer (OEM) defined, or defined by an operating system running in the electronic device.
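As a hedged illustration of how a gesture detector might distinguish such gesture types from raw touch deltas, the following Kotlin sketch classifies a gesture by its horizontal and vertical displacement; the Gesture names and the touch-slop threshold are assumptions made for this example and are not OEM or operating-system definitions.

```kotlin
// Illustrative sketch only: classifying a gesture from raw touch displacements.
// The thresholds and Gesture names are assumptions, not part of the disclosure.
enum class Gesture { SLIDE_RIGHT, SLIDE_LEFT, SCROLL_UP, SCROLL_DOWN, TAP }

fun classify(dx: Float, dy: Float, touchSlopPx: Float = 24f): Gesture = when {
    kotlin.math.abs(dx) < touchSlopPx && kotlin.math.abs(dy) < touchSlopPx -> Gesture.TAP
    kotlin.math.abs(dx) >= kotlin.math.abs(dy) ->
        if (dx > 0) Gesture.SLIDE_RIGHT else Gesture.SLIDE_LEFT
    else -> if (dy > 0) Gesture.SCROLL_DOWN else Gesture.SCROLL_UP
}

fun main() {
    println(classify(dx = 120f, dy = 8f))  // SLIDE_RIGHT
    println(classify(dx = 3f, dy = 2f))    // TAP
}
```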
The context determination unit 124 can be configured to determine the context of the at least one data item displayed in the first user interface and the second user interface. In an embodiment, the context determination unit 124 comprises a natural language processor (NLP) 1241, an object recognition unit 1243 and an application selector 1245.
The NLP 1241 can be configured to parse the first level of information and determine whether any additional information in the context of the first level of information is available. Upon determining that additional information in the context of the first level of information is available, the NLP 1241 fetches the additional information from the context of the first level of information. The additional information can be, for example, additional content of the text message in the case of the messaging application, the contact number along with SNS data or any other data associated with the contact number in the case of the contact list, the captured picture with the SNS data in the case of the camera application, etc., which are based on the context of the data item displayed in the first user interface. Further, the additional information is displayed on the second user interface of the electronic device.
For example, when the first user interface of a call application displays the details of the call log featuring the contact details (i.e., the first level of information), the user can provide a pre-defined gesture on a pre-defined portion of the call application to invoke the second user interface. Thus, the NLP 1241 can be configured to identify the contacts present within the second user interface and determine whether any contextual information (i.e., the second level of information) associated with the contacts is available. The contextual information associated with the contacts can be, for example, SNS data associated with the contact, tags associated with the contact, etc.
Upon determining that contextual information associated with at least one contact is available, the NLP 1241 fetches the contextual information associated with the at least one contact and displays it in the second user interface of the electronic device, e.g., as illustrated in
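The following Kotlin sketch illustrates, under assumed names (Contact, ContextualInfo, lookupContextualInfo), how second-level contextual information such as an SNS status or tags might be attached to a call-log entry only when such information is available, leaving the first-level entry unchanged otherwise.

```kotlin
// Hedged sketch: enriching call-log rows with second-level contextual information
// only when it exists. All type and function names here are hypothetical.
data class Contact(val name: String, val number: String)
data class ContextualInfo(val snsStatus: String?, val tags: List<String>)

fun lookupContextualInfo(contact: Contact): ContextualInfo? =
    if (contact.name == "Alice") ContextualInfo("At the airport", listOf("work")) else null

fun secondUserInterfaceRows(callLog: List<Contact>): List<String> = callLog.map { c ->
    val info = lookupContextualInfo(c)
    if (info == null) "${c.name}  ${c.number}"                        // no transition for this row
    else "${c.name}  ${c.number}  [${info.snsStatus}] ${info.tags}"   // enriched second-level row
}

fun main() {
    secondUserInterfaceRows(listOf(Contact("Alice", "555-0134"),
                                   Contact("Bob", "555-0188"))).forEach(::println)
}
```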
The object recognition unit 1243 can be configured to determine the objects present in the first data item. The objects can be, for example, the objects in the field of view of the live camera mode of the camera application, objects in the gallery application, etc. Further, the object recognition unit 1243 determines information related to the objects present in the first data item. The information related to the objects present in the first data item can be, for example, text extracted from the picture (object), accessories identified in the picture, etc.
The application selector 1245 can be configured to determine a relevant application suitable to perform a relevant task, e.g., as illustrated in
For example, consider the first user interface of an application in the electronic device displaying an object (i.e., a first data item) including data items such as contact details, an address, an e-mail, etc. Based on the gesture detected on the first user interface, the object recognition unit 1243 can automatically determine the context (contact details, address, e-mail, etc.). Further, the application selector 1245 can be configured to automatically provide a relevant application (e.g., a call application, i.e., the second data item) to perform at least one action based on the determined context. The action can include, but is not limited to, launching a call log application and displaying the contact number on a dialer window of the call log application. The NLP 1241, the object recognition unit 1243, and the application selector 1245 may be implemented as at least one hardware processor.
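The following Kotlin sketch illustrates one possible application-selector behavior under the assumptions made here (the regular expressions and Action names are illustrative only): a recognized context such as a phone number or e-mail address is mapped to a relevant action.

```kotlin
// Illustrative sketch of an application selector: a recognized context (phone
// number, e-mail, address) is mapped to a relevant application/action.
enum class Action { DIAL, COMPOSE_EMAIL, OPEN_MAP, NONE }

fun selectAction(recognizedText: String): Pair<Action, String> {
    val phone = Regex("""\+?\d[\d\s-]{6,}\d""").find(recognizedText)?.value
    val email = Regex("""[\w.+-]+@[\w-]+\.[\w.]+""").find(recognizedText)?.value
    return when {
        phone != null -> Action.DIAL to phone
        email != null -> Action.COMPOSE_EMAIL to email
        recognizedText.contains("street", ignoreCase = true) -> Action.OPEN_MAP to recognizedText
        else -> Action.NONE to recognizedText
    }
}

fun main() {
    println(selectAction("Contact us at +1 415 555 0134"))  // (DIAL, +1 415 555 0134)
    println(selectAction("Write to support@example.com"))   // (COMPOSE_EMAIL, support@example.com)
}
```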
Referring to
In operation 404, the electronic device detects the gesture input such as a slide input performed on the first user interface. For example, in the electronic device as illustrated in the
In operation 406, the electronic device determines the additional information of the at least one data item based on the context of the at least one data item displayed in the first user interface. For example, in the electronic device as illustrated in the
In operation 408, the electronic device displays the second user interface comprising the additional information of the at least one data item on the display screen. For example, in the electronic device as illustrated in the
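As a non-limiting sketch of the flow of operations 404-408, the following Kotlin example wires a detected slide gesture to a context-based lookup of the second level of information and to its display; the Screen interface, the InformationManager class, and the secondLevelFor lookup are names assumed here for illustration.

```kotlin
// Hedged sketch of operations 404-408: on a detected slide gesture, look up the
// second level of information for the displayed item and, if found, display it.
interface Screen { fun show(ui: String) }

class InformationManager(
    private val screen: Screen,
    private val secondLevelFor: (String) -> String?   // context-based lookup (operation 406)
) {
    fun onFirstLevelDisplayed(firstLevel: String) = screen.show(firstLevel)

    fun onSlideGesture(firstLevel: String) {           // operation 404: gesture detected
        val secondLevel = secondLevelFor(firstLevel)
        if (secondLevel != null) screen.show(secondLevel)   // operation 408: display it
    }
}

fun main() {
    val screen = object : Screen { override fun show(ui: String) = println(ui) }
    val manager = InformationManager(screen) { first ->
        if (first.startsWith("Hi friend")) "What are your plans for the evening?" else null
    }
    manager.onFirstLevelDisplayed("Hi friend. How are you?")
    manager.onSlideGesture("Hi friend. How are you?")
}
```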
Referring to
In operation 504, the electronic device detects the gesture input such as a slide input performed on the first user interface. For example, in the electronic device as illustrated in the
In operation 506, the electronic device determines at least one second data item based on a context of the at least one data item displayed in the first user interface. For example, in the electronic device as illustrated in the
In operation 508, the electronic device displays the second user interface comprising the at least one second data item of the application on the display screen. For example, in the electronic device as illustrated in the
Referring to the
In operation 604, the electronic device allows the user to provide a gesture input such as a slide input to invoke the second user interface in addition to the first user interface. For example, in the electronic device as illustrated in the
In operation 606, the electronic device checks the background data of the application for availability of the second level of information. For example, in the electronic device as illustrated in the
In operation 608, upon determining that the background data of the application is unavailable, the display 180 displays the original list items and does not show any transition of specific list items with additional data in the second user interface. In another embodiment, the display 180 can be controlled to provide an indication (e.g., an error message, a graphical representation, etc.) indicating unavailability of the second level of information.
In operation 610, upon determining that the background data of the application is available, the electronic device fetches the second level of information. Further, the display 180 displays the second level of information as a transition of the existing first level of information to reveal additional data of the respective list items in the second user interface.
In operation 612, the electronic device allows the user to provide a repeated gesture input to invoke a third user interface (i.e., an update to the second user interface) in addition to the second user interface. For example, in the electronic device as illustrated in the
In operation 614, the electronic device checks the background data of the application for availability of additional information. For example, in the electronic device as illustrated in the
In operation 616, upon determining that additional information of the application is unavailable, the display 180 displays the original list items and does not show any transition of specific list items with action data in the third user interface.
In operation 618, upon determining that additional information of the application is available, the electronic device fetches the additional information. Further, the display 180 displays the additional information as a transition of the existing data to reveal a contextual action in the respective list items in the third user interface.
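The following Kotlin sketch illustrates the availability check of operations 606-618 under assumed names (ListItem, render): when no background data exists, the original list item is shown without any transition; a first gesture reveals the second level of information and a further gesture reveals a contextual action, when available.

```kotlin
// Hedged sketch: tiered reveal of a list item depending on data availability.
// ListItem and render are hypothetical names introduced for this example.
data class ListItem(val firstLevel: String,
                    val secondLevel: String? = null,   // background data, if any
                    val action: String? = null)        // contextual action, if any

fun render(item: ListItem, gestureCount: Int): String = when {
    gestureCount >= 2 && item.action != null ->
        "${item.firstLevel} | ${item.secondLevel ?: ""} | ${item.action}"
    gestureCount >= 1 && item.secondLevel != null ->
        "${item.firstLevel} | ${item.secondLevel}"
    else -> item.firstLevel                            // unavailable: no transition for this item
}

fun main() {
    val item = ListItem("Hi friend. How are you?",
                        "What are your plans for the evening?", "Reply")
    (0..2).forEach { println(render(item, it)) }
}
```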
In a scenario in which the user of the electronic device accesses the messaging application displaying the plurality of messages within the first user interface 704 of the messaging application, the proposed method can be used to provide the additional information (if any) associated with each message from the plurality of messages without requiring the user to access each message in order to view the additional information (e.g., extra lines of text for each message, attachments in the message, option to respond directly from the grid view, etc.) present therein.
The gesture detector 122 can be configured to detect the first gesture input 702 on the first user interface 704 (as illustrated in
Further, the gesture detector 122 can be configured to detect the second gesture input 708 on the second user interface 706 (as illustrated in
In one or more embodiments, the user may be able to define an area to be covered by the second user interface 706 on the display screen of the electronic device.
In a scenario in which the user of the electronic device accesses the call log application displaying call details within a first user interface 804, the proposed method can be used to provide the additional information (if any) related to each of the contacts in the call log application without requiring the user to access each of the contacts to explore the additional information (e.g., contact number, call details, contact's presence in social networking sites, chat applications, messaging application, etc.) present therein.
The gesture detector 122 can be configured to detect the gesture input 802 on the first user interface 804 (as illustrated in the
In a scenario in which a lock screen of the electronic device displays a plurality of notification messages in a first user interface 904, the proposed method can be used to provide the additional information (if any) related to the plurality of notification messages without requiring the user to unlock the lock screen and access the notifications messages to view the additional information (e.g., notification messages with extra details).
The gesture detector 122 can be configured to detect the gesture input 902 on the first user interface 904 (as illustrated in
In a scenario in which icons of a plurality of applications are displayed within a first user interface 1004 of the home screen, the proposed method can be used to provide the additional information (e.g., latest notifications of the applications, etc.) (if any) related to the plurality of applications without requiring the user to access the plurality of applications thereof.
The gesture detector 122 can be configured to detect a gesture input 1002 on the icon of at least one application displayed within the first user interface 1004 (as illustrated in
In a scenario in which the user of the electronic device accesses the gallery application displaying a plurality of images within a first user interface 1104, the proposed method can be used to provide the additional information (if any) of the plurality of images without requiring the user to access each image in order to retrieve the additional information (e.g., size of the image, image type, social networking presence, etc.) thereof.
The gesture detector 122 can be configured to detect the gesture input 1102 on the first user interface 1104 (as illustrated in
In a scenario in which the user of the electronic device accesses the gallery application in which an object (e.g., image) is displayed in a first user interface 1204, the proposed method can be used to provide the additional information (if any) about the image without requiring the user to browse for the additional information (e.g., size of the image, image type, etc.) thereof.
The gesture detector 122 can be configured to detect a first gesture input 1202 on the first user interface 1204 (as illustrated in
Furthermore, the gesture detector 122 can be configured to detect the second gesture input 1208 on the second user interface 1206 (as illustrated in
Referring to
In operation 1304, the electronic device allows the user to provide a gesture input such as a sliding input on the first user interface. For example, in the electronic device as illustrated in the
In operation 1306, the electronic device determines the availability of the second level of information. For example, in the electronic device as illustrated in the
In operation 1308, upon determining that the second level of information is not available, the display 180 displays the first user interface and does not transform it into a more consolidated second user interface.
In operation 1310, upon determining that the second level of information is available, the display 180 transforms the first user interface into a more consolidated second user interface.
In a scenario in which the user of the electronic device accesses the home screen displaying icons of a plurality of applications within the first user interface 1404, the proposed method can be used to provide the additional information (if any) of the plurality of applications without requiring the user to access each application in order to retrieve the additional information (e.g., recent notifications, etc.) thereof.
The gesture detector 122 can be configured to detect the gesture input 1402 on the first user interface 1404 (as illustrated in
In a scenario in which the user of the electronic device accesses the camera application in which a plurality of objects are displayed within the first user interface 1504, the proposed method allows the user to access the drawing tools within the camera application.
The gesture detector 122 can be configured to detect a gesture input 1502 on the first user interface 1504 (as illustrated in
In a scenario in which the user of the electronic device accesses the gallery application displaying an image in the first user interface 1604, the proposed method can be used to identify and provide the images with the same context without requiring the user to browse for the images with the same context (e.g., all images with a sunset background are extracted and presented).
The gesture detector 122 can be configured to detect the gesture input 1602 on the first user interface 1604 (as illustrated in
In a scenario in which the user of the electronic device accesses a contact displaying details such as call history, text messages, instant messages, an image of the contact, etc., the proposed method can be used to identify and provide the additional information related to the contact without requiring the user to browse for the additional information in various applications (e.g., SNS data related to the user, messaging application status, etc.).
The gesture detector 122 can be configured to detect a gesture input 1702 on the first user interface 1704 (as illustrated in
Referring to
In operation 1804, the electronic device allows the user to provide a gesture input such as a sliding input on the first user interface. For example, in the electronic device as illustrated in the
In operation 1806, the electronic device determines the availability of coupons in the messaging and email applications. For example, in the electronic device as illustrated in the
In operation 1808, upon determining that the coupon is not available, the display 180 displays the original application screen and does not show any transition in the second user interface.
In operation 1810, upon determining that the coupon is available, the display 180 displays contextual coupons in the second user interface.
In operation 1812, the electronic device applies the contextual coupon from the second user interface to the application context in the first user interface.
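As a hedged illustration of operations 1806-1812, the following Kotlin sketch scans message/e-mail text for coupon-like codes and selects one that matches the current application's context; the coupon pattern and the keyword-based context matching are assumptions made for this example.

```kotlin
// Hedged sketch: find coupon codes in message/e-mail text and pick the one that
// matches the current application context. The pattern and matching are assumed.
data class Coupon(val code: String, val sourceText: String)

fun findCoupons(messages: List<String>): List<Coupon> {
    val pattern = Regex("""\b[A-Z]{3,}\d{2,}\b""")   // e.g., "CAB50", "SHOE30"
    return messages.flatMap { msg ->
        pattern.findAll(msg).map { Coupon(it.value, msg) }.toList()
    }
}

fun couponForContext(coupons: List<Coupon>, contextKeyword: String): Coupon? =
    coupons.firstOrNull { it.sourceText.contains(contextKeyword, ignoreCase = true) }

fun main() {
    val messages = listOf("Use code CAB50 on your next cab ride",
                          "Flat 30% off shoes with SHOE30")
    val coupons = findCoupons(messages)
    println(couponForContext(coupons, "cab")?.code)  // CAB50
}
```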
In a scenario in which the user of the electronic device accesses the cab application, the user enters the pickup and drop-off locations, and confirms the trip in the first user interface 1904. The proposed method can be used to extract contextual coupons associated with the cab application and apply them when the user makes the payment.
The gesture detector 122 can be configured to detect the gesture input 1902 on the first user interface 1904 (as illustrated in
Referring to
In operation 2004, the electronic device allows the user to provide a gesture input to invoke the second user interface on top of the first user interface. For example, in the electronic device as illustrated in the
In operation 2006, the electronic device checks whether data related to the first data item in the application is available. For example, in the electronic device as illustrated in the
In operation 2008, upon determining that the data related to the first data item in the application is unavailable, the display 180 displays the first user interface of the application and does not display any transition to the second user interface.
In operation 2010, upon determining the data related to the first data item in the application is available, the processor 140 fetches the data related to the first data item in the application. Further, the display 180 displays the data related to the first data item in a transitioned second user interface.
In operation 2012, the electronic device allows the user to provide a repeated gesture input to invoke a third user interface on top of the second user interface. For example, in the electronic device as illustrated in the
In operation 2014, the electronic device checks whether additional information related to the second data item is available. For example, in the electronic device as illustrated in the
In operation 2016, upon determining that additional information related to the second data item is unavailable, the display 180 displays data related to the first data item of the application in the first user interface and does not display any transition to the third user interface.
In operation 2018, upon determining that additional information related to the second data item is available, the processor 140 fetches the additional information related to the second data item. Further, the display 180 displays the additional information related to the second data item in a transitioned third user interface.
In a scenario in which the user of the electronic device accesses the camera application in which the view of the street is displayed within a first user interface 2104, the view of the street may display objects such as banks, stores, grocery shops, restaurants, etc. (as illustrated in
In a scenario in which the user of the electronic device accesses the camera application in which an object containing some text is displayed within a first user interface 2204, the user may wish to translate the text to another language. According to the proposed method, the user may then provide a gesture input 2202 on the first user interface 2204 (as illustrated in
In response to the gesture input 2202, the electronic device extracts the text and provides the text in an editable form in a second user interface 2212 (as illustrated in
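Purely as an illustration of the extract-edit-translate sequence (the OCR step itself is not shown), the following Kotlin sketch makes the extracted text editable and translates it word by word; the tiny in-memory dictionary is a hypothetical stand-in for a real translation service.

```kotlin
// Illustrative sketch only: the text has already been extracted from the camera
// frame; it is made editable and then translated with a hypothetical dictionary.
fun translate(extractedText: String, dictionary: Map<String, String>): String =
    extractedText.split(" ").joinToString(" ") { word ->
        dictionary[word.lowercase()] ?: word   // leave unknown words unchanged
    }

fun main() {
    var editableText = "bienvenue mes amis"              // extracted text, now editable
    editableText = editableText.replace("mes", "chers")  // the user edits the text
    val frToEn = mapOf("bienvenue" to "welcome", "chers" to "dear", "amis" to "friends")
    println(translate(editableText, frToEn))             // welcome dear friends
}
```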
In a scenario in which the user of the electronic device accesses the gallery application, an image in the gallery application includes an object containing some text. The proposed method can be used to extract information from the image and place a call with respect thereto.
The gesture detector 122 can be configured to detect the first gesture input 2302 on the first user interface 2304 (as illustrated in
Further, the gesture detector 122 can be configured to detect the second gesture input 2306 on the second user interface 2308 (as illustrated in
In a scenario in which the user of the electronic device accesses the camera application, the live camera is the first user interface 2404. The field of view of the live camera includes a plurality of objects, e.g., a group of people, accessories, etc. The proposed method can be used to identify the emotions of the people in the group. Further, the proposed method can also be used to identify the objects and provide matching e-commerce information from various e-commerce applications thereof.
The gesture detector 122 can be configured to detect the first gesture input 2402 on the first user interface 2404 (as illustrated in
Further, the gesture detector 122 can be configured to detect the second gesture input 2406 on the second user interface 2408 (as illustrated in
In response to detecting the second gesture input 2406, the electronic device can be configured to invoke the third user interface 2412 (as illustrated in
Further, the gesture detector 122 can be configured to detect the third gesture input 2410 on the third user interface 2412 (as illustrated in
In response to detecting the third gesture input 2410, the electronic device can be configured to update the third user interface 2412 (as illustrated in
In a scenario in which the user of the electronic device accesses the home screen, which is the first user interface 2504, the home screen has a wallpaper and a theme. The proposed method can be used to change the wallpaper and the theme by invoking the intelligent layer (i.e., the second user interface) thereof.
The gesture detector 122 can be configured to detect the first gesture input 2502 on the first user interface 2504 (as illustrated in
Further, the gesture detector 122 can be configured to detect the second gesture input 2508 on the second user interface 2506 (as illustrated in
In response to detecting the second gesture input 2508, the electronic device can be configured to invoke the third user interface 2510 (as illustrated in
In a scenario in which the user of the electronic device accesses the map application displaying a location map in a map view, the location map in the map view is displayed in the first user interface 2604. The proposed method can be used to identify and provide the additional information (if any) related to the location in a suitable mode (e.g., satellite mode, 3D mode, etc.) thereof.
The gesture detector 122 can be configured to detect the first gesture input 2602 on the first user interface 2604 (as illustrated in
Further, the gesture detector 122 can be configured to detect the second gesture input 2608 on the second user interface 2606 (as illustrated in
In a scenario in which the user of the electronic device accesses the gallery application displaying a plurality of images, the plurality of images are displayed in the first user interface 2704. The plurality of images are categorized into various image folders (e.g., camera roll, saved images, downloaded images, screen shot images, received images, images from instant messaging applications, etc.).
The gesture detector 122 can be configured to detect the first gesture input 2702 on the first user interface 2704 (as illustrated in
In response to detecting the first gesture input 2702, the electronic device can be configured to invoke the second user interface 2706 (as illustrated in
The processor 140 can be configured to navigate from one image folder to the other image folder (e.g., from gallery folder to the camera roll folder) (as illustrated in
Further, the gesture detector 122 can be configured to detect the second gesture input 2708 on the second user interface 2706 (as illustrated in
In response to detecting the second gesture input 2708, the electronic device can be configured to invoke the updated second user interface 2706 (as illustrated in
In a scenario in which the user of the electronic device accesses the calendar application, the first user interface 2804 provides a calendar with a list of tasks and reminders for each date (if any). The proposed method can be used to extract information related to an appointment, a meeting, an event based notification, etc., and add the information to the calendar thereof.
The gesture detector 122 can be configured to detect the first gesture input 2802 on the first user interface 2804 (as illustrated in
In response to detecting the first gesture input 2802, the electronic device determines information related to appointments, meetings, events, etc., from messages/emails. Further, the processor 140 can be configured to add the information related to the appointment, the meeting, the event based notification, etc., to the calendar and display it in the third user interface 2808 (as illustrated in
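As a hedged illustration of extracting an appointment from message text and adding it to the calendar, the following Kotlin sketch parses a date and time with an assumed format; the Appointment type and the regular expression are illustrative only.

```kotlin
// Hedged sketch: pull a date and time out of message text and build a calendar
// entry. The date format, regex, and Appointment type are assumptions.
import java.time.LocalDateTime
import java.time.format.DateTimeFormatter

data class Appointment(val title: String, val start: LocalDateTime)

fun extractAppointment(message: String): Appointment? {
    val match = Regex("""(\d{2}/\d{2}/\d{4}) at (\d{2}:\d{2})""").find(message) ?: return null
    val (date, time) = match.destructured
    val start = LocalDateTime.parse("$date $time", DateTimeFormatter.ofPattern("dd/MM/yyyy HH:mm"))
    return Appointment(message.substringBefore(" on "), start)
}

fun main() {
    println(extractAppointment("Dentist appointment on 21/03/2017 at 09:30"))
    // Appointment(title=Dentist appointment, start=2017-03-21T09:30)
}
```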
Certain aspects of the present disclosure can also be embodied as computer readable code on a non-transitory computer readable recording medium. A non-transitory computer readable recording medium is any data storage device that can store data which can be thereafter read by a computer system. Examples of the non-transitory computer readable recording medium include a Read-Only Memory (ROM), a Random-Access Memory (RAM), Compact Disc-ROMs (CD-ROMs), magnetic tapes, floppy disks, and optical data storage devices. The non-transitory computer readable recording medium can also be distributed over network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion. In addition, functional programs, code, and code segments for accomplishing the present disclosure can be easily construed by programmers of ordinary skill in the art to which the present disclosure pertains.
The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation.
While the present disclosure has been particularly shown and described with reference to certain embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present disclosure as defined by the following claims and their equivalents.
Number | Date | Country | Kind
---|---|---|---
201741005717 PS | Feb 2017 | IN | National
201741005717 CS | Sep 2017 | IN | National