This application claims priority under 35 U.S.C. §119(a) to a Korean Patent Application filed on Mar. 4, 2013 in the Korean Intellectual Property Office and assigned Serial No. 10-2013-0023044, the contents of which are incorporated herein by reference.
1. Field of the Invention
The present invention relates generally to an electronic device and more particularly, to a data processing method and apparatus of an electronic device for manipulating data presented on a page displayed on a display.
2. Description of the Related Art
With the advance of digital technologies, various types of electronic devices capable of communicating and processing data have evolved into multifunctional devices that integrate various functions, following the trend of mobile convergence. For example, a recent electronic device integrates various functions including voice and video telephony, messaging including Short Message Service/Multimedia Message Service (SMS/MMS) and email, navigation, document editing (e.g. memo and word processor), picture-taking, broadcast playback, multimedia (video and audio) playback, Internet access, messenger, and Social Networking Service (SNS) functions.
Touchscreen-enabled electronic devices have recently overcome the limits of legacy input means and have facilitated user manipulation of the electronic devices. The recent growth of touchscreen-based input manipulation has spawned the development of various text input applications capable of editing data on a page of a text edit application. However, there is a need for a simpler and more intuitive way of inserting data into a document being edited, to reduce confusion and enhance user convenience in such electronic devices.
The present invention has been made to address at least the above problems and/or disadvantages and to provide at least the advantages described below. Accordingly, an aspect of the present invention provides a data processing method and apparatus of an electronic device that is capable of inserting data into a page of a document being composed in a simple and intuitive manner.
In accordance with an aspect of the present invention, a data processing method of an electronic device includes displaying a page configured to receive a user input, forming an area on the page in response to the user input, providing, on a perimeter of the formed area, at least one mark configured to adjust the formed area, and displaying data in the formed area.
In accordance with another aspect of the present invention, disclosed is a non-transitory computer-readable storage medium having recorded thereon a data processing method of an electronic device, the method comprising displaying a page for receiving a user input, forming an area on the page in response to the user input, providing, on a perimeter of the formed area, at least one mark configured to adjust the formed area, and displaying data in the area when the area is formed.
The above and other aspects, features, and advantages of the present invention will be more apparent from the following detailed description when taken in conjunction with the accompanying drawings, in which:
Embodiments of the present invention are described with reference to the accompanying drawings in detail. The same reference numbers are used throughout the drawings to refer to the same or like parts. Detailed description of well-known functions and structures incorporated herein may be omitted for the sake of clarity and conciseness.
The present invention relates to an electronic device and method for supporting document composition thereof. Embodiments of the present invention disclose a data processing method and apparatus for processing data input in composing a document with an application capable of displaying an editable page and receiving various data, such as text, image, pictures, and video.
Embodiments of the present invention disclose a method and apparatus that is capable of inserting internal and/or external data into the page of the document currently being edited by the application operating on the electronic device in a simple and intuitive manner. According to embodiments of the present invention, it is possible to provide a frame area (e.g. window) for presenting data on the current page in response to a user's gesture input, and inserting data selected from the frame area into the page. According to embodiments of the present invention, it is possible to acquire various data from inside and outside of the electronic device, according to the data acquisition mode, and display the data as a preview when generating the frame area.
In the embodiments of the present invention, the application may be any of the various applications capable of composing a document (based on text and image), such as a word processor, memo, email, message, and travel organizer application. In embodiments of the present invention, the application may be any of the various types of applications capable of providing an editable page and inputting various data through the page.
In embodiments of the present invention, the data may include image, picture, emoticon, map data, and Rich Site Summary or Really Simple Syndication (RSS) feed data, as well as various other data formats capable of being inserted into the page, including internal data and external data. The internal data may be stored in the electronic device or acquired through picture-taking of the electronic device, and the external data may be acquired from outside of the electronic device. The data may also include still and motion pictures acquired from inside and outside of the electronic device.
In embodiments of the present invention, the outside of the electronic device may indicate a server, a cloud, or other electronic device connectable through a wired or wireless network. The external data may include all types of data acquirable from outside of the electronic device. The inside of the electronic device may include various components implemented in the electronic device, such as a storage unit and image sensor. The internal data may include all types of data acquired from inside of the electronic device.
In embodiments of the present invention, the data may be stored in the memory functionally connected inside or outside of the electronic device. The functionally connected memory may include the storage unit configured in the electronic device and external storage medium connected through an interface provided in the electronic device. As described above, the functionally connected memory may include the server, cloud, and other electronic devices connectable through wired or wireless network, or storage unit or database of the devices.
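For purposes of illustration only, the selection of a functionally connected memory as the data source may be sketched as follows; the source names and the `preview_data` function are hypothetical and not part of the invention:

```python
# Minimal sketch of selecting a "functionally connected" memory as the
# data source for the frame-area preview; names are illustrative only.
def preview_data(source, providers):
    """Fetch the data to preview in the frame area from the chosen source."""
    if source not in providers:
        raise ValueError("no functionally connected memory named %r" % source)
    return providers[source]()

# Stand-ins for the storage unit inside the device, the image sensor,
# and a cloud reachable through the network.
providers = {
    "storage_unit": lambda: ["memo.txt", "photo.jpg"],
    "camera":       lambda: ["live_preview_frame"],
    "cloud":        lambda: ["shared_album/1.png"],
}

print(preview_data("cloud", providers))  # ['shared_album/1.png']
```

Such a dispatch merely illustrates that internal and external sources can be treated uniformly once they are functionally connected.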
Referring to
According to embodiments of the present invention, the internal data may be acquired from the storage unit 150 attached to the electronic device 100. The internal data presented in the frame area 600 as a preview may be all data stored in the storage unit 150 or partial data extracted (retrieved) in correspondence to a keyword input by the user. The internal data may be acquired by the camera module 170 embodied in the electronic device 100. The internal data presented in the frame area 600 as a preview may be the data taken by use of the camera module 170.
The external data may be acquired from any of the server 410, the cloud 420, and another electronic device 430 to which the electronic device 100 is connected through the network 400. The external data may be downloaded from a specific server 410, such as an integration server, content server, provider server, and internet server, to which the electronic device 100 is connected through the network 400. The external data also may be downloaded from the cloud 420 to which the electronic device 100 is connected through the network 400, the cloud 420 being associated with the user. The external data may be received from another electronic device 430 to which the electronic device 100 is connected through the network 400.
The external data presented through the frame area 600 as a preview may be the data retrieved from the server 410, the cloud 420, and another electronic device 430 based on the keyword input by the user. The external data presented through the frame area 600 as a preview may also be all of the data that may be acquired from the server 410, the cloud 420, and another electronic device 430 without any keyword input by the user.
In the following description, the server 410, the cloud 420, and another electronic device 430 are referred to as an external server, for conciseness. Although not shown in
In embodiments of the present invention, the network 400 may be any type of wired communication network, such as a Universal Serial Bus (USB) and data cable network, and a wireless communication network, such as a short-range communication network, cellular communication network, and Wireless Local Area Network (WLAN).
As shown in
The radio communication unit 110 may include at least one module responsible for radio communication of the electronic device 100 with a wireless communication system or another electronic device 430. For example, the radio communication unit 110 may include a cellular communication module 111, a WLAN module 113, a short-range communication module 115, a location calculation module 117, and a broadcast reception module 119.
The cellular communication module 111 may communicate radio signals with at least one of a base station, another terminal, and various servers (e.g. integration server, provider server, content server, Internet server, and cloud server) on a cellular communication network. The radio signals may carry voice telephony, video conference, and text/multimedia message data. In embodiments of the present invention, the radio signals may carry the external data, such as image, picture, emoticon, map data, and RSS feed, transmitted by various servers. The cellular communication module 111 may receive the external data corresponding to a keyword input by the user through a cellular communication channel established with any of the various servers.
The WLAN module 113 is for wireless Internet access and establishing a WLAN link with another electronic device 430, and may be embodied in the electronic device or implemented as an external component. The wireless Internet technology may be any of Wireless Fidelity (Wi-Fi), Wireless Broadband (WiBro), World Interoperability for Microwave Access (WiMAX), and High Speed Downlink Packet Access (HSDPA). The WLAN module 113 may establish a WLAN link with another electronic device 430 to transmit data selected by the user and receive external data. The WLAN module 113 may establish a WLAN link with any of various servers to download external data corresponding to the keyword input by the user.
The short-range communication module 115 is for short-range communication. There are many short-range communication technologies including Bluetooth®, Bluetooth Low Energy (BLE), Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), ZigBee®, and Near Field Communication (NFC). The short-range communication module 115 may establish a short-range communication link with another electronic device 430 to transmit data selected by the user and receive external data. The short-range communication module 115 may remain turned on, or may be turned on and off, according to the user configuration or in response to a user input.
The location calculation module 117 is for acquiring the location of the electronic device 100, and is representatively a Global Positioning System (GPS) module. The location calculation module 117 calculates distances from three or more base stations and time information, and performs triangulation with the calculated information to acquire the current location defined by latitude, longitude, and altitude. The location calculation module 117 may also receive the location information of the electronic device 100 from three or more satellites in real time to calculate the current location of the electronic device 100, using various methods.
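As a conceptual illustration of the triangulation described above, the following sketch solves a 2-D position from three anchor points and measured distances; actual GPS positioning works in 3-D with pseudoranges from four or more satellites, so this is a deliberate simplification:

```python
# Illustrative 2-D trilateration: subtracting the three circle equations
# pairwise eliminates the quadratic terms and leaves a 2x2 linear system.
def trilaterate(p1, r1, p2, r2, p3, r3):
    """Solve for (x, y) given three anchor points and measured distances."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = a1 * b2 - a2 * b1          # nonzero when anchors are not collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Three base stations and the distances measured to each; the true
# position in this worked example is (1, 1).
x, y = trilaterate((0, 0), 2**0.5, (4, 0), 10**0.5, (0, 4), 10**0.5)
print(round(x, 6), round(y, 6))  # 1.0 1.0
```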
The broadcast reception module 119 may receive a broadcast signal, such as a TV, radio, or data broadcast signal, and/or broadcast-related information, such as information on the broadcast channel, broadcast program, and broadcast service provider, through a broadcast channel.
The input unit 120 may generate input data corresponding to the user input for controlling the electronic device 100. The input unit 120 may include at least one of a keypad, a dome switch, a touchpad (resistive/capacitive), a jog wheel, a jog switch, and sensors such as speech, proximity, luminance, acceleration, and gyro sensors. The input unit 120 may be implemented as external buttons on the outside of the electronic device 100 or virtual keys presented on the touch panel. The input unit 120 may receive a user input for inserting internal or external data presented in the frame area 600 into the page currently being edited, and generate an input signal corresponding to the user input.
The touchscreen 130 is an input/output means responsible for receiving user input and displaying information, and includes a display panel 131 and a touch panel 133. Particularly, if a touch gesture is made by the user on the touch panel 133 when displaying a screen associated with an operation of the electronic device 100, the touchscreen 130 generates an input signal corresponding to the touch gesture and transfers it to the control unit 180. The control unit 180 identifies the touch gesture and controls the electronic device 100 to perform the operation corresponding to the touch gesture.
The display panel 131 may display (output) information processed by the electronic device 100. When the electronic device operates in the telephony mode, the display panel 131 is capable of displaying a UI or GUI related to the telephony mode. Particularly, the display panel 131 may display an execution screen of the application (e.g. page of the document being composed by the user) along with the frame area 600 generated by the user. The display panel 131 may present the data acquired from the inside or the outside of the electronic device 100 (internal or external data) in the frame area 600 of the presented page. If specific data is selected by the user from the data presented in the frame area 600, the display panel 131 may display the page in which the selected data is inserted.
The display panel 131 may display a scrolling screen on which the data presented in the frame area 600 is changing (switching), in response to the user's scroll control event. The display panel 131 may present the data (e.g. preview image) input through the camera module 170 in the frame area 600 on the current page. The display panel 131 may display the screen in a landscape mode or a portrait mode and switch the screen between the landscape and portrait modes according to the rotation direction (or orientation) of the electronic device 100.
The display panel 131 may be implemented as any of a Liquid Crystal Display (LCD), Thin Film Transistor LCD (TFT LCD), Light Emitting Diode (LED), Organic LED (OLED), Active Matrix OLED (AMOLED), flexible display, bended display, and 3-Dimensional (3D) display. The display panel 131 may be implemented as a transparent display panel through which the light penetrates.
The touch panel 133 may be placed on the display panel 131 to detect the user's touch gesture made on the surface of the touchscreen 130 (e.g. single touch gesture, multi-touch gesture, camera shooting gesture, and data input gesture). If the user's touch gesture is detected on the surface of the touchscreen 130, the touch panel 133 extracts the coordinates at the position of the touch gesture and transfers the coordinates to the control unit 180. The touch panel 133 detects the touch gesture made by the user and transfers a signal corresponding to the touch gesture to the control unit 180, which executes a function according to the signal transmitted by the touch panel 133 in association with the position where the touch gesture is detected.
The touch panel 133 may be configured to convert the pressure applied at a specific position of the display panel 131, or the change of capacitance at a specific position of the display panel 131, to an electrical input signal. The touch panel 133 is capable of measuring the pressure of the touch input as well as the position and size of the touch. If a touch input is detected, the touch panel 133 transfers corresponding signal(s) to a touch controller (not shown), which is capable of processing the signal(s) and transferring the corresponding data to the control unit 180. In this manner, the control unit 180 is capable of checking the touched area on the touchscreen 130.
The audio processing unit 140 relays the audio signal input from the control unit 180 to the speaker (SPK) 141 and the audio signal including voice input through the microphone (MIC) 143 to the control unit 180. The audio processing unit 140 processes the voice/sound data to output through the speaker 141 as an audible sound wave under the control of the control unit 180, and converts the audio signal including voice input through the microphone 143 into a digital signal to be output to the control unit 180.
The speaker 141 is capable of outputting the audio data received by means of the radio communication unit 110, or stored in the storage unit 150, in the telephony, document editing, messaging, messenger, audio/video recording, speech recognition, broadcast reception, media content (audio file and video file) playback, and photo shooting modes. The speaker 141 is also capable of outputting an audio signal related to the function executed in the electronic device 100, such as inbound call reception, outbound call placing, data input, picture-taking, and media content playback.
The microphone 143 is capable of processing the sound input in the aforementioned various modes to generate electrical audio data. The processed audio data may be converted into a format capable of being transmitted to a mobile communication base station by means of the cellular communication module 111. The microphone 143 may be implemented with various noise cancellation algorithms for removing noise generated while receiving the outside sound.
The storage unit 150 temporarily stores programs associated with information processing and control function of the control unit 180 and input/output data (e.g. internal and external data, contact information, document, picture, messaging and chat data, and media contents including audio and video). The storage unit 150 is also capable of storing information of usage frequencies (e.g. application usage frequency, data usage frequency, keyword usage frequency, and multimedia content usage frequency), weights, and priorities, and data of various patterns of vibration and sound effects output in response to touch inputs made on the touchscreen 130.
The storage unit 150 stores, semi-persistently or temporarily, the Operating System (OS) of the electronic device 100, programs for controlling touchscreen-based input, displaying input data, and inserting data into the page of the document currently being composed in real time, and data generated by the applications. The storage unit 150 also may store various configuration information associated with the creation of the frame area 600, in shape and size, in correspondence to the drawing gesture of the user.
The configuration information may include information on whether to maintain the shape drawn in correspondence to the user input or to automatically convert the shape to a fixed diagram, and information on the user input method for forming the frame area 600. The configuration information may include the information on the memory from which the data is acquired, and may be configured according to the source of the data (e.g. the source functionally connected to the electronic device 100 to provide the data). For example, the configuration information may include the information on the internal data acquisition from the storage unit 150 or camera module 170 and the external data acquisition from an external server. The configuration information may be defined by the user or set to default at the manufacturing state of the electronic device 100.
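A minimal sketch of how such configuration information might be modeled is given below; the field names are illustrative assumptions, not terms defined by the invention:

```python
# Illustrative model of the stored frame-area configuration information;
# field names are hypothetical stand-ins for the options described above.
from dataclasses import dataclass

@dataclass
class FrameAreaConfig:
    keep_drawn_shape: bool = False   # keep the freehand outline as drawn
    snap_to_rectangle: bool = True   # auto-convert the shape to a fixed diagram
    data_source: str = "storage"     # "storage", "camera", or an external server
    gesture: str = "drawing"         # user-input method that forms the frame area

# A user-defined configuration overriding the factory defaults.
cfg = FrameAreaConfig(data_source="camera")
print(cfg.data_source, cfg.snap_to_rectangle)  # camera True
```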
The storage unit 150 may be implemented with a storage medium of at least one type including flash memory, hard disk, micro-type, and card-type (e.g. Secure Digital (SD) and eXtreme Digital (XD) card) memories, and Random Access Memory (RAM), Dynamic RAM (DRAM), Static RAM (SRAM), Read-Only Memory (ROM), Programmable ROM (PROM), Electrically Erasable PROM (EEPROM), Magnetic RAM (MRAM), magnetic disk, and optical disk type memories. The electronic device 100 may interoperate with a web storage operating as the storage unit 150 on the Internet.
The interface unit 160 provides the interface for the external devices connectable to the electronic device 100. The interface unit 160 may transfer the data or power from the external devices to the internal components of the electronic device 100 and transfer the internal data to the external devices. For example, although not shown, the interface unit 160 may be provided with a wired/wireless headset port, external charging port, wired/wireless data port, memory card slot, identity module slot, audio input/output port, video input/output port, and earphone jack.
The camera module 170 is responsible for the photo shooting function of the electronic device 100. The camera module 170 may shoot a still or motion image of a scene. The camera module 170 may shoot a picture of a scene and output the video data of the picture to the display panel 131 and the control unit 180 under the control of the control unit 180. The camera module 170 may include an image sensor (or camera sensor) (not shown) for converting an optical signal to an electric signal and a video signal processor (not shown) for converting the electric signal received from the image sensor to digital video data. The image sensor may be a Charge-Coupled Device (CCD) or a Complementary Metal-Oxide-Semiconductor (CMOS) sensor. The camera module 170 may provide an image processing function for supporting photo shooting according to various shooting options set by the user (e.g. zooming, screen aspect ratio, visual effect (e.g. sketch, mono, sepia, vintage, mosaic effect, etc.), frame, etc.).
The control unit 180 controls the electronic device 100. For example, the control unit 180 may control voice telephony, data communication, and video conference. The control unit 180 may include an Application Processor (AP) (not shown) for processing the document composition function and related aspects of the application. In an embodiment of the present invention, the application processor may be embedded in the control unit 180 or independently implemented. The control unit 180 may include a data processing module having a page display module 184, a frame area display module 186, and a data display module 188. The additional information on the page display module 184, frame area display module 186, and data display module 188 is provided in association with
The page display module 184 may display an editable page. The frame area display module 186 may create the frame area 600 on the corresponding page in page view mode of a specific application (e.g. document editor, email editor, and web browser), and may determine a data acquisition mode in creating the frame area 600.
The data display module 188 acquires the internal or external data according to the determined data acquisition mode and controls such that the acquired data is presented in the frame area 600. The data display module 188 may control such that the data (e.g. preview image) input through the camera module 170 is displayed in the frame area 600 when creating the frame area 600.
The data display module 188 may detect an insertion event for inserting data when displaying the acquired data in the frame area 600. When the insertion event is detected, the control unit 180 may process the data presented in the frame area 600 to generate the insertion data to be inserted into the page in response to the insertion event, and display the data as inserted into the corresponding page.
The page display module 184 may control operations based on the object input mode and data insertion mode of the application. In embodiments of the present invention, the object input mode may indicate the mode for inputting an object (e.g. text, diagram, and picture input by the user). The data insertion mode may be capable of generating the frame area 600, presenting the internal or external data through the frame area 600, editing the frame area 600 (size adjustment, location movement, and rotation), and configuring options (e.g. zoom-in/out, visual effect, cutting, decorating, framing, and rotating) on the data being presented in the frame area 600. For example, the data insertion mode may be capable of generating the frame area 600, presenting acquired data in the frame area 600, and inserting the selected data.
The page display module 184 may control switching between the object input mode and the data insertion mode, and may wait for the user's gesture input while maintaining the page screen of the application when the mode is switched from the object input mode to the data insertion mode. The control unit 180 may control this switching in response to the selection of the insertion item for activating (executing or switching to) the data insertion mode when the electronic device 100 is operating in the object input mode.
The frame area display module 186 may generate the frame area 600 at the location of the user gesture input in the data insertion mode, with a shape and size corresponding to the user gesture, and display the area 600 on the page. For example, the control unit 180 may distinguish, according to whether the electronic device 100 is in the object input mode or the data insertion mode, between the user input for inputting an object on the page and the user input for forming the frame area 600. The user input made in the data insertion mode is processed as the input for forming the frame area 600. If the user input does not fulfill the condition for forming the frame area 600, the user input may be ignored to prevent the user input from unintentionally inserting an object.
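One possible realization of forming the frame area 600 from a drawing gesture, including the mode check and a hypothetical minimum-size condition for rejecting unintended input, may be sketched as follows:

```python
# Sketch of forming the frame area from a drawing gesture; the minimum-size
# condition and mode names are illustrative assumptions.
MIN_SIZE = 20  # a gesture smaller than this is ignored rather than processed

def form_frame_area(mode, points):
    """Return (x, y, w, h) for the frame area, or None if the input
    should not form one (wrong mode, or gesture too small)."""
    if mode != "data_insertion":      # object-input gestures are left alone
        return None
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    if w < MIN_SIZE or h < MIN_SIZE:  # condition for forming the area not met
        return None
    # Snap the freehand outline to its bounding rectangle (fixed diagram).
    return (min(xs), min(ys), w, h)

stroke = [(40, 50), (180, 55), (175, 160), (42, 158)]
print(form_frame_area("data_insertion", stroke))  # (40, 50, 140, 110)
print(form_frame_area("object_input", stroke))    # None
```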
The data display module 188 may operate the camera module 170 in the background when the mode switches from the object input mode to the data insertion mode. Alternatively, the data display module 188 may operate the camera module 170 when the area creation event is input, and may control such that, when the frame area 600 is created, the data (e.g. preview image) acquired through the camera module 170 is presented in the area 600.
According to an embodiment of the present invention, the control unit 180 may control various functions of the electronic device 100 as well as the above-described functions. The control unit 180 is capable of receiving input signals corresponding to various touch-based events supported on the touch-based input interface (e.g. touchscreen 130) and controlling functions in response to the input signal. The control unit 180 is also capable of controlling transmission/reception of various data through wired or wireless communication channels.
The power supply 190 supplies the power from an external or internal power source to the internal components of the electronic device.
As described above, the electronic device, according to embodiments of the present invention, may be any of the well-known types of information communication and multimedia devices equipped with at least one of an Application Processor (AP), Graphics Processing Unit (GPU), and Central Processing Unit (CPU). For example, the electronic device may be any of a cellular communication terminal operating with various communication protocols corresponding to the communication systems, a tablet Personal Computer (PC), smartphone, digital camera, Portable Multimedia Player (PMP), media player (e.g. MP3 player), portable game console, Personal Digital Assistant (PDA), etc. Also, the gesture-based control method, according to one of the various embodiments of the present invention, may be applied to various display devices, such as a digital television (TV), Digital Signage (DS), Large Format Display (LFD), laptop computer, and desktop computer.
The gesture-based data processing method, according to embodiments of the present invention may be implemented in software, hardware, or combination of both and stored in a computer-readable storage medium. In hardware implementation, the gesture-based data processing method, according to one of the embodiments of the present invention may be implemented with at least one of Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, and other electrical units which perform certain tasks.
Some embodiments of the present invention may be implemented by the control unit 180. When implemented in software, the procedures and functions described in the embodiments of the present invention may be implemented with software modules (e.g. the page display module 184, frame area display module 186, and data display module 188), which may perform at least one of the above-described functions.
The storage medium may be any of the computer-readable storage media storing program commands for displaying a page for receiving a user input, forming the frame area 600 on the page in response to a user input, and presenting data in the frame area 600.
In
As shown in
The optional items may include items for selecting various tools (options) for use in composing a document. For example, the optional items may include edit items 211 for selecting object color, boldness, erase, and rollback functions, insertion item 213 for inserting data, and sharing items for sharing (transmitting) the composed document with external devices.
The user may command to form the frame area 600 by means of the insertion item 213, present data acquired from inside or outside of the electronic device 100, and switch the mode to the data insertion mode for inserting the data presented through the frame area 600 into the page 200. If the data insertion mode is activated in response to the user input made on the insertion item 213, the electronic device 100 regards the user input made in the document composition area 230 as the input for forming the frame area 600, rather than as the input of an object for document composition.
Although the embodiment of
In
The user may select the insertion item 213 to initiate the data insertion mode and determine the position and size of the data to be inserted on the page 200. The user may make a gesture (area generation gesture such as drawing gesture) for forming the frame area 600 at the determined position.
For example, the user may make a gesture drawing the frame area 600 in the shape of a square having the determined size at the determined position.
The electronic device 100 acquires the data (e.g. internal and external data) according to a predetermined method when forming the frame area 600 and displays the acquired data through the frame area 600 as a preview, as shown in
The embodiment of
The frame area 600 may be created in correspondence to the position and size of the user gesture (drawing) made on the page 200, along with adjustment marks 650 on the boundary line that allow the user to intuitively adjust the frame area 600 (e.g. size, position, rotation). The user may adjust, i.e. extend or shrink, the size of the frame area 600 in one direction (e.g. up, down, left, and right directions) or multiple directions (e.g. top left, top right, bottom left, and bottom right directions) using the adjustment marks 650. The user may move the frame area 600 within the boundary of the page 200 by making a movement (drag) gesture in a direction while holding the boundary or an adjustment mark 650 on the boundary of the frame area 600.
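A possible sketch of adjusting the frame area 600 with an adjustment mark, including clamping the area within the page boundary, is given below; the handle names and minimum size are illustrative assumptions:

```python
# Sketch of resizing the frame area by dragging one adjustment mark and
# keeping the result inside the page; handle names are illustrative.
def drag_mark(rect, handle, dx, dy, page_w, page_h, min_size=20):
    """Apply a drag of (dx, dy) on one corner handle of rect = (x, y, w, h)."""
    x, y, w, h = rect
    if handle == "bottom_right":
        w, h = w + dx, h + dy
    elif handle == "top_left":
        x, y, w, h = x + dx, y + dy, w - dx, h - dy
    # Enforce a minimum size, then keep the area within the page boundary.
    w, h = max(w, min_size), max(h, min_size)
    x = min(max(x, 0), page_w - w)
    y = min(max(y, 0), page_h - h)
    return (x, y, w, h)

rect = (40, 50, 140, 110)
print(drag_mark(rect, "bottom_right", 30, -20, 400, 600))  # (40, 50, 170, 90)
```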
After the activation of the frame area 600 as shown in
As shown in
In
At this time, the electronic device 100 retrieves the data (e.g. internal and external data) based on the input object (e.g. CAR), according to the method configured in creating the frame area 600, and displays the retrieved data in the frame area 600. That is, when the frame area 600 is created in the data insertion mode after the object (e.g. CAR) has been input, the electronic device 100 retrieves the data to be displayed in the frame area 600 based on the input object.
The electronic device 100 performs text recognition on the input object to generate a keyword (e.g. CAR) and retrieves the internal and/or external data matching the keyword according to a data acquisition mode. For example, the electronic device 100 may perform text recognition on the input object to acquire the keyword “CAR” and retrieve the data matching the keyword “CAR” from the internal and/or external data. The retrieved data is buffered and then processed so as to be presented in the frame area 600.
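The keyword-based retrieval described above can be illustrated with a short sketch. This is only an assumption-laden outline of the described behavior, not the device's actual code: the function names, the substring matching, and the `mode` switch are all hypothetical stand-ins for the text recognition and data acquisition mode.

```python
def extract_keyword(recognized_text):
    """Stand-in for text recognition on the input object (e.g. 'CAR')."""
    text = recognized_text.strip()
    return text.upper() if text else None

def retrieve_matching_data(keyword, internal, external, mode="internal"):
    """Filter the configured data source by the keyword, per the acquisition mode."""
    source = internal if mode == "internal" else external
    # a real device would query file metadata or a server; here, substring match
    return [item for item in source if keyword.lower() in item.lower()]
```

The retrieved items would then be buffered and rendered in the frame area 600 as a preview.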
As shown in the screen displays of
The electronic device 100 then may present the object “WINE?” at a portion of the onscreen page 200 where the user has made the insertion. The object may be a series of objects “W,” “I,” “N,” “E,” and “?” sequentially inserted.
In
The user may select the insertion item 213 to execute the data input mode and determine the position and size of the data shot by the camera module 170 in the onscreen page 200. The user may make a gesture (e.g. drawing gesture) for creating the frame area 600.
For example, the user may make a gesture drawing a rectangular frame area 600 having a size at a certain part of the onscreen page as shown in
As shown in
As shown in
Once the insertion data is inserted into the onscreen page of the document currently being composed, the camera module 170 is no longer needed. Accordingly, after the insertion data is generated and inserted into the onscreen page, the camera module 170 may be turned off. However, the present invention is not limited thereto, and may be embodied to maintain the active state of the camera module 170 until the end of the application.
The user may input an object (e.g. text and diagram) for the data inserted in the page 200 in the screen state of
As shown in
In embodiments of the present invention, the data presented in the frame area 600 may be zoomed in/out according to the user input. For example, the user may make a single touch having a contact point within the frame area 600 and then drag the contact point upward to zoom in or downward to zoom out. The user may perform a multi-touch having two contact points within the frame area 600 and then pinch out to broaden the distance between the contact points for zoom-in, or pinch in to narrow the distance for zoom-out. The user may also perform the zooming with a hovering gesture of a touch pen (e.g. stylus pen). Such user input gestures may be used for the various options aforementioned as well as for zooming. In embodiments of the present invention, the option adjustments for the frame area 600 formed on the page 200 may be performed through various methods.
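The gesture-to-zoom mapping above can be sketched as follows. This is a hypothetical illustration: the gesture names, the drag-to-scale conversion, and the clamping range are assumptions, not details given in the description.

```python
def apply_zoom_gesture(scale, gesture, amount):
    """
    Map the described zoom gestures to a new zoom scale.
    gesture: 'drag_up'/'drag_down' for a single-touch drag,
             'pinch_out'/'pinch_in' for a multi-touch pinch.
    amount: drag distance in pixels, or the ratio of new to old finger distance.
    """
    if gesture == "drag_up":        # single touch dragged upward -> zoom in
        scale *= 1.0 + amount / 100.0
    elif gesture == "drag_down":    # single touch dragged downward -> zoom out
        scale /= 1.0 + amount / 100.0
    elif gesture in ("pinch_out", "pinch_in"):
        scale *= amount             # multi-touch: scale by the distance ratio
    return max(0.1, min(scale, 10.0))  # clamp to an assumed sane zoom range
```

A pen hovering gesture would feed the same function through a different input path.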
The user may select a portion of the boundary of the frame area 600, move the frame area downward while holding the selection, and release the selection at a certain position as shown in
Then the electronic device 100 displays the frame area 600 at the position where the frame area 600 has been moved according to the user drag & drop touch gesture input. For example, the screen display of
In embodiments of the present invention, the size adjustment of the frame area 600 may have different reactions according to whether the frame area 600 displays a still or motion image or a preview image. The distinct cases are described with reference to
As shown in
The electronic device 100 displays the frame area 600 of which size has been adjusted (enlarged) in response to the size adjustment (enlargement) touch gesture of the user as shown in
In embodiments of the present invention, the scroll of the data displayed in the frame area 600 may be performed by user's touch gesture (e.g. sweep, flick, and drag) as shown in
Then the electronic device 100 navigates the data in the frame area 600 in response to the user's scroll gesture, such that a navigated portion of the data is displayed in the frame area 600. For example, the data displayed in the frame area 600 may be replaced by previous data or next data, according to the direction of the user's scroll gesture.
Illustration 2201 of
Then the electronic device 100 processes the scroll operation on the data displayed in the frame area 600 in response to the user's scroll gesture as shown in illustration 2203 or 2205 such that the data is replaced by other data. For example, the current data displayed in the frame area 600 may be replaced by the previous data or the next data according to the direction of the user's scroll gesture.
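The scroll behavior above, replacing the displayed data with the previous or next item, can be sketched briefly. The function name and the clamp-at-the-ends behavior are assumptions for illustration; a device might instead wrap around.

```python
def scroll_frame_data(items, index, direction):
    """Replace the data shown in the frame area per the scroll direction."""
    if direction == "next":
        index = min(index + 1, len(items) - 1)   # stop at the last item
    elif direction == "previous":
        index = max(index - 1, 0)                # stop at the first item
    return items[index], index
```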
As described with reference to
In embodiments of the present invention, the recognition of the shape drawn by the user for forming the frame area 600 on the onscreen page may be performed using various diagram recognition algorithms. The user input gesture for forming the frame area may be made and detected on the touchscreen 130 according to various touch gesture recognition mechanisms, such as with the user's finger(s) or a touch pen, in which case the touch gesture may be replaced by a hovering gesture.
Referring to
The control unit 180 may detect a user input at step 3503. For example, the control unit 180 may detect a signal corresponding to the user input generated in the frame area 600 of the onscreen page. The signal corresponding to the user input may be generated with at least one user gesture.
The frame area display module 186 may control to display the frame area in correspondence to the user input at step 3505. For example, the frame area display module 186 may create the frame area 600 at a certain position in a certain size on the onscreen page in correspondence to the user input. In embodiments of the present invention, displaying the frame area 600 is performed by forming an area determined according to at least one user gesture into a specified shape that is adjusted automatically. In embodiments of the present invention, displaying the frame area 600 may include acquiring data to be displayed in the frame area 600 in response to the signal detection. The data may include internal and external data, as described above. The data may include all the data stored in the memory functionally connected to the electronic device 100. For example, the data may include the data stored in the internal memory and the data received from the external memory connected to the electronic device 100 through the network 400.
The data display module 188 may control to display the data in the frame area 600 along with the display of the frame area 600 on the page at step 3507. For example, the data display module 188 may acquire the data to be displayed in the frame area in response to the signal detection, and display the acquired data in the frame area 600 as described above.
Although not shown in
Displaying the data may include displaying the data (e.g. object) on the page in response to the user input, changing at least one of the size and position of the frame area 600 according to at least one user input, and applying at least one style selected among several styles or input by the user.
The frame area 600 may be divided into various sub-areas in response to a user input. Displaying the frame area 600 may also include displaying various sub-areas including a first subarea and a second subarea. The data may be composed of various data including first data and second data at least in correspondence to the subareas, and displaying the data may include displaying the first data in the first subarea and the second data in the second subarea.
In embodiments of the present invention, the first data may be received from one of first and second image sensors functionally connected to the electronic device 100, and the second data may be received from the other image sensor. For example, among the first image sensor (e.g. rear camera module) and the second image sensor (e.g. front camera module) of the camera module 170 functionally connected to the electronic device 100, the first data is acquired through the first image sensor, and the second data is acquired through the second image sensor.
In embodiments of the present invention, the frame area 600 may include various subareas, and the data displayed in one subarea selected among the plurality of subareas may be stored.
Referring to
If the insertion item 213 is selected, the data display module 188 may control to switch from the object input mode to the data insertion mode at step 2203. For example, the data display module 188 may switch the object input mode for receiving an object to a data insertion mode for inserting data, and then wait for user input for generating the frame area 600.
If a user's gesture input for generating the frame area 600 is detected at step 2205 after the entry to the data insertion mode, the frame area display module 186 may recognize the frame area 600 in correspondence to the user's gesture at step 2207. The control unit 180 may control to generate the frame area 600 on the onscreen page, according to the user's gesture at step 2209.
For example, the user may determine the position and size of the internal or external data to be inserted on the page and make a drawing gesture to form the frame area 600. Then the frame area display module 186 recognizes the area drawn in accordance with the user's gesture, as described above, and forms the frame area 600 corresponding to the drawn area. In embodiments of the present invention, forming the frame area 600 is performed by creating the frame area 600 in an atypical or typical shape according to the user configuration. The data display module 188 may control the camera module 170 to operate in the background when switching to the data insertion mode or detecting the user's gesture input. The control unit 180 also may control to display the subject's data (preview image) input through the camera module 170 in the frame area 600 as a preview.
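The regularization of a freehand drawing gesture into a frame area can be sketched as below. This is one simple reading of the description, assuming the "specified shape" is the bounding rectangle of the stroke; real diagram recognition algorithms would be more elaborate.

```python
def recognize_frame_area(stroke):
    """
    Convert a freehand drawing gesture (a list of (x, y) touch points) into
    a regularized rectangular frame area: (x, y, width, height).
    """
    xs = [p[0] for p in stroke]
    ys = [p[1] for p in stroke]
    x, y = min(xs), min(ys)
    # the bounding box of the stroke becomes the automatically adjusted shape
    return x, y, max(xs) - x, max(ys) - y
```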
Referring to
In embodiments of the present invention, the touch gesture may be the input gesture for writing the object (keypad-based text, writing-based text, and diagram, etc.), selecting the edit item 211 among the options items provided in the item area 210, or selecting the sharing item 215 for sharing (transmitting) the written document with (to) other devices. The application may be any type capable of generating an editable document (text and image-based document) such as word processor, memo, email, message, and travel organizer applications.
The frame area display module 186 may detect a user gesture for forming the frame area 600 in the object input mode of the application at step 2103. For example, the user may determine the position and size of an area for inserting specific data on the onscreen page 200 in the object input mode. The user may draw a diagram having the determined position and size on the page to designate the frame area 600. The user may select the insertion item 213 before making the user gesture in order to switch from the object input mode to the data insertion mode.
The frame area display module 186 may create the frame area 600 in response to the user's gesture input on the onscreen page 200 (e.g. document currently being composed) and display the frame area 600 on the page 200. In embodiments of the present invention, displaying the frame area 600 may include acquiring the data to be displayed in the frame area 600, and the data may include the internal and external data as described above. The data may include all the types of data stored in the memory functionally connected to the electronic device 100.
For example, the data may include the data stored in the internal memory and the data received from an external memory connected through the network 400. In embodiments of the present invention, when the data is acquired from the camera module 170, the control unit 180 may control the camera module 170 to operate in the background in creating the frame area 600 in response to the user's gesture input and then acquire the subject's data (preview image).
The data display module 188 may display the acquired internal or external data in the frame area 600 formed on the onscreen page 200 as a preview at step 2107. For example, the data display module 188 may control such that specific data is displayed in the frame area 600 having the position and size determined by the user's gesture input.
The data display module 188 may control such that the data is displayed in the frame area 600 in response to at least one user's gesture input. The control unit 180 may differentiate among the user's inputs to restrict certain user inputs. The control unit 180 also may control to display at least one menu (e.g. option selection menu) corresponding to the function for controlling the data in the frame area 600 or a predetermined area on the page 200.
The frame area display module 186 may change at least one of the size and position of the frame area according to at least one user's input made when the data is displayed in the frame area 600. The control unit 180 also may apply at least one style selected among various styles or selected by the user's input to at least a portion of the frame area 600 when the data is displayed in the frame area 600.
The frame area display module 186 may display the frame area 600 as divided into various subareas in response to the user's input when the data is displayed in the frame area 600. Displaying the data in the frame area 600 may include displaying the various sub-areas including at least the first and second subareas. The data may include at least first and second data in correspondence to the subareas constituting the frame area 600, and displaying the data in the frame area 600 may include displaying the first data at the first subarea and the second data at the second subarea.
The data display module 188 may detect the insertion event when the data is displayed in the frame area 600 at step 2109. For example, the control unit 180 may detect the user's input gesture for commanding to convert the data displayed in the frame area 600 into the insertion data and insert the insertion data on the page 200. For example, the insertion command may be input as a touch gesture (e.g. tap) or by manipulating the input unit 120.
The data display module 188 may generate the insertion data in response to the insertion command at step 2111 and insert the insertion data on the onscreen page 200 of the document currently being composed at step 2113. For example, the data display module 188 may control to generate the insertion data equal in size to the frame area 600 and display the inserted data at the same position as the frame area 600 on the page 200. The page display module 184 may switch the mode to the object input mode or maintain the data input mode according to a preconfigured setting, and wait for a user input.
The control unit 180 may perform an operation corresponding to the user's request when the insertion data is inserted on the corresponding page 200 at step 2115. For example, when the data is inserted in the page 200, the control unit 180 may insert additional data (e.g. object) onto the inserted data in response to the user input, generate a new frame area 600 to add additional data in response to an area insertion command, store and close the composed document including the corresponding page in response to the user request, or transmit (share) the composed document including the corresponding page to the outside of the electronic device 100 in response to the user request.
Referring to
The frame area display module 186 may detect a user input for creating the frame area 600 at step 2903. For example, the user may determine the position and size of the area for inserting specific data on the page and make a drawing gesture for designating the frame area 600 at the determined position with the determined size on the page 200.
The frame area display module 186 may create the frame area 600 and display the frame area 600 on the page in response to the user input at step 2905.
The frame area display module 186 may determine a data acquisition mode when displaying the frame area 600 at step 2907. In embodiments of the present invention, displaying the frame area 600 may include acquiring the data to be displayed in the frame area 600, and the data may include the internal and/or external data.
The data display module 188 may acquire the internal or external data according to the determined data acquisition mode at step 2909. For example, the data display module 188 may acquire the data stored in the memory functionally connected to the electronic device 100 according to the data acquisition mode. The control unit 180 may acquire the data stored in the internal memory or the external memory connected through the network 400. The control unit 180 may control the camera module 170 to operate in the background and acquire the subject's data (preview image) input through the camera module 170.
The data display module 188 may display the acquired internal or external data in the frame area 600 as a preview at step 2911. For example, the control unit 180 may control such that specific data is displayed in the frame area 600 having the position and size determined according to the user's gesture input on the page 200 of the document currently being composed.
The data display module 188 may monitor to detect a user input and, if a user input is detected, determine whether the user input is a scroll command for scrolling the data displayed in the frame area 600 at step 2913.
If the user input is the scroll command, the data display module 188 may scroll the data in the frame area 600 and display the scrolled data in response to the user's scroll command input at step 2915. If the user input is not the scroll command input, the data display module 188 may determine whether the user input is an insertion command input at step 2917.
If the user input is not the insertion command, the control unit 180 may perform the step corresponding to the user input at step 2919. For example, the data display module 188 may input an object on the page in response to the user input, or change the data input method.
If the user input is the insertion command, the data display module 188 may generate insertion data at step 2921. For example, the control unit 180 may detect an insertion command input when the data is displayed in the frame area 600 and convert the data displayed in the frame area 600 into the insertion data to be inserted on the page.
The data display module 188 may insert the created insertion data on the onscreen page of the document currently being composed at step 2923. For example, the control unit 180 may generate the insertion data equal in size to the frame area 600 and insert the insertion data at the same area (position) as the frame area 600.
When the insertion data is inserted on the onscreen page, the control unit 180 may perform an operation corresponding to the user input at step 2925. For example, the control unit 180 may switch from the data insertion mode to the object input mode, form a new frame area 600 in response to a user input, or display an object on the page in response to the user input.
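The dispatch among scroll, insertion, and other inputs in steps 2913 to 2925 can be outlined in code. This is a hypothetical sketch of the control flow only; the function name, the `state` dictionary, and the wrap-around scroll are illustrative assumptions.

```python
def handle_frame_input(user_input, state):
    """
    Dispatch a user input while preview data is shown in the frame area,
    mirroring steps 2913-2925: scroll, insert, or other handling.
    `state` holds 'items' (buffered data), 'index', and 'page' (inserted data).
    """
    if user_input == "scroll":                        # steps 2913/2915
        state["index"] = (state["index"] + 1) % len(state["items"])
        return "scrolled"
    if user_input == "insert":                        # steps 2917/2921/2923
        state["page"].append(state["items"][state["index"]])
        return "inserted"
    return "other"                                    # steps 2919/2925
```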
Referring to
The frame area display module 186 may determine whether the data acquisition mode is an internal data acquisition mode at step 3003. If the data acquisition mode is the internal data acquisition mode, the frame area display module 186 determines whether the internal data acquisition mode is keyword-based internal data acquisition mode at step 3005.
If the internal data acquisition mode is not the keyword-based internal data acquisition mode, the data display module 188 may acquire all the internal data at step 3013 and display the acquired data in the frame area 600 on the page as a preview.
If the internal data acquisition mode is the keyword-based data acquisition mode, the data display module 188 extracts the keyword at step 3007. For example, the control unit 180 may analyze the object on the page to extract a keyword.
The data display module 188 determines whether there is any keyword on the page at step 3009. If there is no keyword on the page, the data display module 188 controls to acquire all the internal data at step 3013 and displays the acquired internal data in the frame area 600 as a preview at step 3015.
If there is any keyword on the page, the data display module 188 may retrieve the internal data matching the keyword at step 3011. The control unit 180 may prompt the user to confirm whether to perform a keyword-based search (e.g. whether the extracted keyword is correct). If a user input requesting the keyword-based search is received, the control unit 180 may retrieve the internal data matching the corresponding keyword. The control unit 180 may control such that the retrieved internal data is displayed in the frame area 600 on the page as a preview.
If the data acquisition mode is not the internal data acquisition mode (i.e. if the data acquisition mode is the external data acquisition mode) at step 3003, the control unit 180 may extract a keyword at step 3021. For example, the control unit 180 may analyze the object on the page to extract a keyword.
The data display module 188 may determine whether there is any keyword on the page at step 3023. If there is any keyword on the page, the control unit 180 may prompt the user, with a popup message, to check whether the corresponding keyword is correct. The popup message may inquire whether to perform a data search with the extracted keyword.
If there is no keyword in the page, the control unit 180 may control to display a search window at step 3027. For example, the control unit 180 may display a search window prompting the user to enter a keyword to search for external data. The control unit 180 may receive the keyword entered by the user through the search window at step 3029.
At step 3031, the control unit 180 may receive the user input requesting for search with the keyword confirmed at step 3025 or entered at step 3029.
The control unit 180 may control to establish a communication link in response to the search request of the user at step 3033. For example, the control unit 180 may activate a communication module capable of communication with the data source (e.g. external server) for acquiring the external data, and establish a connection to the data source through the network 400.
The control unit 180 may request the connected data source (e.g. external server) for the external data matching the keyword confirmed by the user at step 3035. The control unit 180 may receive the external data from the data source through the network at step 3037. The control unit 180 may control to display the received external data in the frame area 600 on the page as a preview.
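The internal/external acquisition flow of steps 3003 to 3037 can be condensed into a short sketch. This is an assumption-laden outline: `fetch_external` is a hypothetical stand-in for the connection to the external server over the network 400, and the search-window fallback is simplified to a placeholder keyword.

```python
def acquire_frame_data(mode, keyword_based, page_keyword, internal, fetch_external):
    """
    Choose internal vs external acquisition, optionally filtered by a
    keyword extracted from the page (steps 3003-3037).
    """
    if mode == "internal":
        if not keyword_based or page_keyword is None:   # steps 3005/3009 -> 3013
            return list(internal)
        # step 3011: retrieve internal data matching the confirmed keyword
        return [d for d in internal if page_keyword.lower() in d.lower()]
    # external mode: a keyword is required; if none is on the page, the
    # device would show a search window for the user to enter one (3027/3029)
    keyword = page_keyword or "user-entered keyword"
    return fetch_external(keyword)                      # steps 3033-3037
```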
The user may make a gesture (split command) for splitting the frame area 600 into at least two subareas. The split command may be input with a gesture of splitting the frame area 600 in a vertical, horizontal, or diagonal direction, and distinguished from the input for adjusting the option (or style) of the data in the frame area 600.
The frame area 600 is split into two subareas as shown in
The user may input an insertion command for selecting one of the two split subareas of the frame area 600 (e.g. first subarea 610) and insert the first data (e.g. the data displayed in the frame area 600) into the selected subarea. The electronic device 100 saves the first data and displays the first data in the first subarea 610 of the frame area 600 in response to the user's insertion command. The second subarea 630 of the frame area 600 may continue displaying the data as a preview, as in
As shown in
In
Although
As described with reference to
According to embodiments of the present invention, the frame area may be split into various subareas according to the user input. The data processing method may include splitting the frame area 600 into various subareas including the first and second subareas. The data also may be split into various data including at least the first and second data in correspondence to the subareas. In splitting the frame area 600, the data displayed in the respective subareas of the frame area 600 may be different from each other, and the different data may be combined into a single piece of data to be inserted into the page.
When the frame area 600 is split into the first and second subareas 610 and 630, the first data may be displayed in the first subarea 610 and the second data in the second subarea 630. When the frame area 600 is split into the first to third subareas 4510, 4520, and 4530, the first data may be displayed in the first subarea 4510, the second data in the second subarea 4520, and the third data in the third subarea 4530.
In embodiments of the present invention, the data acquired for the respective split subareas may be identical or different from each other, as described with reference to Table 1.
Table 1 is directed to the case in which the frame area 600 is split into three subareas and three pieces of data are stored for the three subareas. As shown in Table 1, the first to third data is mapped to the first to third subareas, and may be identical to or different from each other.
For example, the first to third data may be processed to generate the same data as the data displayed in the frame area 600, displayed in different forms according to the shapes of the split subareas. Alternatively, the first to third data may be different pieces of internal data acquired in response to insertion command inputs made on different data, or internal, external, and shot data acquired from different memories. When the frame area 600 is split into various subareas, the insertion data may be generated in various data combinations.
In embodiments of the present invention, the first data may be acquired from one of the first image sensor (e.g. rear camera module) and the second image sensor (e.g. front camera module) of the camera module 170 functionally connected to the electronic device 100, and the second data may be acquired from the other of the first and second image sensors. The third data may be received from the external memory through the network 400.
In embodiments of the present invention, the frame area 600 may include various subareas. If one of the subareas is selected, the data displayed in the selected subarea is stored. For example, the control unit 180 acquires the first data input from the first image sensor in selection of the first subarea, the second data in selection of the second subarea, and the third data in selection of the third subarea. Once the data for all the subareas is acquired, the control unit 180 combines the first to third data and inserts the combined data into the corresponding page.
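The per-subarea capture and combination step can be sketched as follows. The representation of the combined insertion data as an ordered tuple is a deliberate simplification; a real device would composite the subarea images into one inserted object.

```python
def combine_subarea_data(subarea_data):
    """
    Combine the data captured per subarea (e.g. Table 1's first to third data)
    into a single piece of insertion data once every subarea is filled.
    `subarea_data` maps a subarea number to its stored data (or None if empty).
    """
    if any(d is None for d in subarea_data.values()):
        raise ValueError("all subareas must have data before insertion")
    # order the pieces by subarea number and merge them into one insertion datum
    return tuple(subarea_data[k] for k in sorted(subarea_data))
```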
In the screen display of
As shown in
When the motion picture data is inserted on the onscreen page as shown in
According to embodiments of the present invention, each module may be implemented in the form of software, firmware, hardware, or any combination thereof. Some or all of the modules may be implemented as one entity responsible for the functions of the corresponding modules.
According to embodiments of the present invention, the individual steps may be performed sequentially, repeatedly, or in parallel. Some steps may be omitted or performed along with other steps. The individual steps may be executed with corresponding modules described in the present invention.
The above-described data processing method, according to embodiments of the present invention, can be implemented as computer-executable program commands and stored in a computer-readable storage medium. The computer-readable storage medium may store the program commands, data files, and data structures in individual or combined forms. The program commands recorded in the storage medium may be designed and implemented for various embodiments of the present invention or used by those skilled in the computer software field.
The computer-readable storage medium includes magnetic media such as a floppy disk and a magnetic tape, optical media including a Compact Disc (CD) Read-Only Memory (ROM) and a Digital Video Disc (DVD) ROM, magneto-optical media, and hardware devices designed for storing and executing program commands, such as ROM, Random Access Memory (RAM), and flash memory. The program commands include language code executable by computers using an interpreter as well as machine language code created by a compiler. The aforementioned hardware devices can be implemented with one or more software modules for executing the various embodiments of the present invention.
Although embodiments of the present invention have been described using specific terms, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense in order to help understand the present invention. It is obvious to those skilled in the art that various modifications and changes can be made thereto without departing from the broader spirit and scope of the present invention.
Number | Date | Country | Kind |
---|---|---|---|
10-2013-0023044 | Mar 2013 | KR | national |