The disclosure relates to an electronic device and a method for improving a force touch operation thereof and, for example, to an electronic device and a method for improving a force touch operation based on artificial intelligence.
Electronic devices including touch screens are in widespread use. These electronic devices may accurately identify a touch input through the touch screens, thereby providing users with a satisfying experience.
Such a touch input may be classified into various types of inputs, such as a force touch, a long touch, a tap touch (or short touch), and a multi-touch. A force touch may indicate a touch made by a user applying a pressure of a predetermined intensity or higher with a finger, and a long touch may indicate a touch maintained for a predetermined time or longer regardless of pressure intensity.
Touch input technologies for distinguishing between a force touch and a long touch include 3D touch and haptic touch. 3D touch is a technology of recognizing pressure intensity using a pressure sensor (e.g., a hardware component), and haptic touch may be a technology of recognizing, by means of software, how long a user maintains a touch.
However, since a haptic touch method uses software to recognize how long a user maintains a touch, the force applied to the screen is unrecognizable, and thus it may be difficult to provide a function of taking a two-step action by pressing the screen harder. For 3D touch, how hard the user presses the screen is detected from a change in the gap between the backlight and the display. In this case, there should be no error in the linear display gap, the sensor required to detect the gap is expensive, and, due to the characteristics of the display (e.g., OLED), adding extra sensors may lead to design challenges. Additionally, 3D touch is difficult to detect unless the user presses the exact part of the screen hard enough, and the user may need to rely on sensory judgment to perform a force touch input and a long touch input, which could be a drawback.
In addition, while the intensity and duration of a touch may vary for each user, electronic devices typically determine the pressure intensity or touch time using a uniform pressure and/or fixed value, which could lead to relative recognition errors in operation. Moreover, no user interface (UI) is provided that is capable of distinguishing between a long touch and a force touch.
Embodiments of the disclosure provide a method of analyzing the variation (or skin compression based on a touch area) of a user's skin area without a separate pressure sensor to determine whether a force touch is recognized, so as to improve the recognition rate of a force touch.
Embodiments of the disclosure provide a method of dynamically learning a touch recognition area according to a form factor state and/or a grip state of an electronic device to reflect a user characteristic so as to improve the recognition rate of a force touch.
Embodiments of the disclosure provide a method of providing an intuitive state change for a long touch and a force touch so as to resolve the ambiguous distinction between a long touch and a force touch.
Embodiments of the disclosure provide a method of, when a force touch action based on a force touch is performed, reconfiguring a screen layout based on application features so as to improve usability.
An electronic device according to an example embodiment comprises: a display; a touch sensor; a memory storing instructions; and at least one processor comprising processing circuitry, wherein the at least one processor, individually and/or collectively, is configured to execute the instructions and to cause the electronic device to: determine, based on touch data received from the touch sensor, whether a force touch having a gradually increasing variation of skin compression according to a touch area is recognized within a long touch detection time; execute a force touch action function mapped to the force touch based on force touch recognition being maintained for a specified period of time; learn a malfunction situation for the force touch; and display a sensitivity correction UI on the display based on sensitivity adjustment of the force touch being required.
A method for improving a force touch operation according to an example embodiment comprises: receiving touch data from a touch sensor; based on the touch data, determining whether a force touch having a gradually increasing variation of skin compression according to a touch area is recognized within a long touch detection time; based on recognition of the force touch, executing a force touch action function mapped to the force touch; at a time of the recognition of the force touch, learning a malfunction situation for the force touch; and, based on sensitivity adjustment of the force touch being required, displaying a sensitivity correction UI on a display.
By an electronic device and a method according to an example embodiment, a touch recognition area according to a form factor state (e.g., form factor change) and/or a grip state of the electronic device may be learned and a force touch recognition algorithm (e.g., a reference area or a threshold value) may be dynamically changed so as to improve the recognition rate of a force touch and usability.
By an electronic device and a method according to an example embodiment, an intuitive situation feedback for recognition of a long touch and a force touch may be provided so as to resolve the ambiguous distinction between a long touch and a force touch.
By an electronic device and a method according to an example embodiment, a force touch recognized differently from a user's intent may be identified, and user feedback related to touch sensitivity adjustment may be induced to resolve the misrecognition, so as to improve force touch recognition intensity (sensitivity).
The effects obtained from the present disclosure are not limited to those mentioned above, and other effects not explicitly stated herein may be clearly understood by those skilled in the art to which the present disclosure pertains based on the description provided below.
The above and other aspects, features and advantages of certain embodiments of the present disclosure will be more apparent from the following detailed description, taken in conjunction with the accompanying drawings, in which:
The electronic device according to various embodiments may be one of various types of electronic devices. The electronic devices may include, for example, a portable communication device (e.g., a smartphone), a computer device, a portable multimedia device, a portable medical device, a camera, a wearable device, a home appliance, or the like. According to an embodiment of the disclosure, the electronic devices are not limited to those described above.
Referring to
The processor 120 may include various processing circuitry and/or multiple processors. For example, as used herein, including the claims, the term “processor” may include various processing circuitry, including at least one processor, wherein one or more of at least one processor, individually and/or collectively in a distributed manner, may be configured to perform various functions described herein. As used herein, when “a processor”, “at least one processor”, and “one or more processors” are described as being configured to perform numerous functions, these terms cover situations, for example and without limitation, in which one processor performs some of recited functions and another processor(s) performs other of recited functions, and also situations in which a single processor may perform all recited functions. Additionally, the at least one processor may include a combination of processors performing various of the recited/disclosed functions, e.g., in a distributed manner. At least one processor may execute program instructions to achieve or perform various functions. The processor 120 may execute, for example, software (e.g., a program 140) to control at least one other component (e.g., a hardware or software component) of the electronic device 101 coupled with the processor 120, and may perform various data processing or computation. According to an embodiment, as at least part of the data processing or computation, the processor 120 may load a command or data received from another component (e.g., the sensor module 176 or the communication module 190) in volatile memory 132, process the command or the data stored in the volatile memory 132, and store resulting data in non-volatile memory 134. According to an embodiment, the processor 120 may include a main processor 121 (e.g., a central processing unit (CPU) or an application processor (AP)), and an auxiliary processor 123 (e.g., a graphics processing unit (GPU), an image signal processor (ISP), a sensor hub processor, or a communication processor (CP)) that is operable independently from, or in conjunction with, the main processor 121. Additionally or alternatively, the auxiliary processor 123 may be adapted to consume less power than the main processor 121, or to be specific to a specified function. The auxiliary processor 123 may be implemented as separate from, or as part of the main processor 121.
The auxiliary processor 123 may control at least some of the functions or states related to at least one component (e.g., the display module 160, the sensor module 176, or the communication module 190) among the components of the electronic device 101, instead of the main processor 121 while the main processor 121 is in an inactive (e.g., sleep) state, or together with the main processor 121 while the main processor 121 is in an active state (e.g., executing an application). According to an embodiment, the auxiliary processor 123 (e.g., an image signal processor or a communication processor) may be implemented as part of another component (e.g., the camera module 180 or the communication module 190) functionally related to the auxiliary processor 123. According to an embodiment, the auxiliary processor 123 (e.g., the neural processing unit) may include a hardware structure specified for artificial intelligence model processing. An artificial intelligence model may be generated by machine learning. Such learning may be performed, e.g., by the electronic device 101 where the artificial intelligence is performed or via a separate server (e.g., the server 108). Learning algorithms may include, but are not limited to, e.g., supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning. The artificial intelligence model may include a plurality of artificial neural network layers. The artificial neural network may be a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), a restricted Boltzmann machine (RBM), a deep belief network (DBN), a bidirectional recurrent deep neural network (BRDNN), a deep Q-network, or a combination of two or more thereof, but is not limited thereto. The artificial intelligence model may, additionally or alternatively, include a software structure other than the hardware structure.
The memory 130 may store various data used by at least one component (e.g., the processor 120 or the sensor module 176) of the electronic device 101. The various data may include, for example, software (e.g., the program 140) and input data or output data for a command related thereto. The memory 130 may include the volatile memory 132 or the non-volatile memory 134.
The program 140 may be stored in the memory 130 as software, and may include, for example, an operating system (OS) 142, middleware 144, or an application 146.
The input module 150 may receive a command or data to be used by another component (e.g., the processor 120) of the electronic device 101, from the outside (e.g., a user) of the electronic device 101. The input module 150 may include, for example, a microphone, a mouse, a keyboard, a key (e.g., a button), or a digital pen (e.g., a stylus pen).
The sound output module 155 may output sound signals to the outside of the electronic device 101. The sound output module 155 may include, for example, a speaker or a receiver. The speaker may be used for general purposes, such as playing multimedia or playing a recording. The receiver may be used for receiving incoming calls. According to an embodiment, the receiver may be implemented as separate from, or as part of, the speaker.
The display module 160 may visually provide information to the outside (e.g., a user) of the electronic device 101. The display module 160 may include, for example, a display, a hologram device, or a projector and control circuitry to control a corresponding one of the display, hologram device, and projector. According to an embodiment, the display module 160 may include a touch sensor adapted to detect a touch, or a pressure sensor adapted to measure the intensity of force incurred by the touch.
The audio module 170 may convert a sound into an electrical signal and vice versa. According to an embodiment, the audio module 170 may obtain the sound via the input module 150, or output the sound via the sound output module 155 or a headphone of an external electronic device (e.g., an electronic device 102) directly (e.g., wiredly) or wirelessly coupled with the electronic device 101.
The sensor module 176 may detect an operational state (e.g., power or temperature) of the electronic device 101 or an environmental state (e.g., a state of a user) external to the electronic device 101, and then generate an electrical signal or data value corresponding to the detected state. According to an embodiment, the sensor module 176 may include, for example, a gesture sensor, a gyro sensor, an atmospheric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor.
The interface 177 may support one or more specified protocols to be used for the electronic device 101 to be coupled with the external electronic device (e.g., the electronic device 102) directly (e.g., wiredly) or wirelessly. According to an embodiment, the interface 177 may include, for example, a high definition multimedia interface (HDMI), a universal serial bus (USB) interface, a secure digital (SD) card interface, or an audio interface.
A connecting terminal 178 may include a connector via which the electronic device 101 may be physically connected with the external electronic device (e.g., the electronic device 102). According to an embodiment, the connecting terminal 178 may include, for example, an HDMI connector, a USB connector, an SD card connector, or an audio connector (e.g., a headphone connector).
The haptic module 179 may convert an electrical signal into a mechanical stimulus (e.g., a vibration or a movement) or electrical stimulus which may be recognized by a user via their tactile sensation or kinesthetic sensation. According to an embodiment, the haptic module 179 may include, for example, a motor, a piezoelectric element, or an electric stimulator.
The camera module 180 may capture a still image or moving images. According to an embodiment, the camera module 180 may include one or more lenses, image sensors, image signal processors, or flashes.
The power management module 188 may manage power supplied to the electronic device 101. According to an embodiment, the power management module 188 may be implemented as at least part of, for example, a power management integrated circuit (PMIC). The battery 189 may supply power to at least one component of the electronic device 101. According to an embodiment, the battery 189 may include, for example, a primary cell which is not rechargeable, a secondary cell which is rechargeable, or a fuel cell.
The communication module 190 may support establishing a direct (e.g., wired) communication channel or a wireless communication channel between the electronic device 101 and the external electronic device (e.g., the electronic device 102, the electronic device 104, or the server 108) and performing communication via the established communication channel. The communication module 190 may include one or more communication processors that are operable independently from the processor 120 (e.g., the application processor (AP)) and support a direct (e.g., wired) communication or a wireless communication. According to an embodiment, the communication module 190 may include a wireless communication module 192 (e.g., a cellular communication module, a short-range wireless communication module, or a global navigation satellite system (GNSS) communication module) or a wired communication module 194 (e.g., a local area network (LAN) communication module or a power line communication (PLC) module). A corresponding one of these communication modules may communicate with the external electronic device via the first network 198 (e.g., a short-range communication network, such as Bluetooth™, wireless-fidelity (Wi-Fi) direct, or infrared data association (IrDA)) or the second network 199 (e.g., a long-range communication network, such as a legacy cellular network, a 5G network, a next-generation communication network, the Internet, or a computer network (e.g., LAN or wide area network (WAN))). These various types of communication modules may be implemented as a single component (e.g., a single chip), or may be implemented as multiple components (e.g., multiple chips) separate from each other. The wireless communication module 192 may identify and authenticate the electronic device 101 in a communication network, such as the first network 198 or the second network 199, using subscriber information (e.g., international mobile subscriber identity (IMSI)) stored in the subscriber identification module 196.
The wireless communication module 192 may support a 5G network, after a 4G network, and next-generation communication technology, e.g., new radio (NR) access technology. The NR access technology may support enhanced mobile broadband (eMBB), massive machine type communications (mMTC), or ultra-reliable and low-latency communications (URLLC). The wireless communication module 192 may support a high-frequency band (e.g., the mmWave band) to achieve, e.g., a high data transmission rate. The wireless communication module 192 may support various technologies for securing performance on a high-frequency band, such as, e.g., beamforming, massive multiple-input and multiple-output (massive MIMO), full dimensional MIMO (FD-MIMO), array antenna, analog beam-forming, or large scale antenna. The wireless communication module 192 may support various requirements specified in the electronic device 101, an external electronic device (e.g., the electronic device 104), or a network system (e.g., the second network 199). According to an embodiment, the wireless communication module 192 may support a peak data rate (e.g., 20 Gbps or more) for implementing eMBB, loss coverage (e.g., 164 dB or less) for implementing mMTC, or U-plane latency (e.g., 0.5 ms or less for each of downlink (DL) and uplink (UL), or a round trip of 1 ms or less) for implementing URLLC.
The antenna module 197 may transmit or receive a signal or power to or from the outside (e.g., the external electronic device) of the electronic device 101. According to an embodiment, the antenna module 197 may include an antenna including a radiating element including a conductive material or a conductive pattern formed in or on a substrate (e.g., a printed circuit board (PCB)). According to an embodiment, the antenna module 197 may include a plurality of antennas (e.g., array antennas). In such a case, at least one antenna appropriate for a communication scheme used in the communication network, such as the first network 198 or the second network 199, may be selected, for example, by the communication module 190 (e.g., the wireless communication module 192) from the plurality of antennas. The signal or the power may then be transmitted or received between the communication module 190 and the external electronic device via the selected at least one antenna. According to an embodiment, another component (e.g., a radio frequency integrated circuit (RFIC)) other than the radiating element may be additionally formed as part of the antenna module 197.
According to various embodiments, the antenna module 197 may form a mmWave antenna module. According to an embodiment, the mmWave antenna module may include a printed circuit board, a RFIC disposed on a first surface (e.g., the bottom surface) of the printed circuit board, or adjacent to the first surface and capable of supporting a designated high-frequency band (e.g., the mmWave band), and a plurality of antennas (e.g., array antennas) disposed on a second surface (e.g., the top or a side surface) of the printed circuit board, or adjacent to the second surface and capable of transmitting or receiving signals of the designated high-frequency band.
At least some of the above-described components may be coupled mutually and communicate signals (e.g., commands or data) therebetween via an inter-peripheral communication scheme (e.g., a bus, general purpose input and output (GPIO), serial peripheral interface (SPI), or mobile industry processor interface (MIPI)).
According to an embodiment, commands or data may be transmitted or received between the electronic device 101 and the external electronic device 104 via the server 108 coupled with the second network 199. Each of the electronic devices 102 or 104 may be a device of a same type as, or a different type, from the electronic device 101. According to an embodiment, all or some of operations to be executed at the electronic device 101 may be executed at one or more of the external electronic devices 102, 104, or 108. For example, if the electronic device 101 should perform a function or a service automatically, or in response to a request from a user or another device, the electronic device 101, instead of, or in addition to, executing the function or the service, may request the one or more external electronic devices to perform at least part of the function or the service. The one or more external electronic devices receiving the request may perform the at least part of the function or the service requested, or an additional function or an additional service related to the request, and transfer an outcome of the performing to the electronic device 101. The electronic device 101 may provide the outcome, with or without further processing of the outcome, as at least part of a reply to the request. To that end, a cloud computing, distributed computing, mobile edge computing (MEC), or client-server computing technology may be used, for example. The electronic device 101 may provide ultra low-latency services using, e.g., distributed computing or mobile edge computing. In an embodiment, the external electronic device 104 may include an internet-of-things (IoT) device. The server 108 may be an intelligent server using machine learning and/or a neural network. According to an embodiment, the external electronic device 104 or the server 108 may be included in the second network 199. The electronic device 101 may be applied to intelligent services (e.g., smart home, smart city, smart car, or healthcare) based on 5G communication technology or IoT-related technology.
It should be appreciated that various embodiments of the present disclosure and the terms used therein are not intended to limit the technological features set forth herein to particular embodiments and include various changes, equivalents, or replacements for a corresponding embodiment. With regard to the description of the drawings, similar reference numerals may be used to refer to similar or related elements. It is to be understood that a singular form of a noun corresponding to an item may include one or more of the things, unless the relevant context clearly indicates otherwise. As used herein, each of such phrases as “A or B,” “at least one of A and B,” “at least one of A or B,” “A, B, or C,” “at least one of A, B, and C,” and “at least one of A, B, or C,” may include any one of, or all possible combinations of the items enumerated together in a corresponding one of the phrases. As used herein, such terms as “1st” and “2nd,” or “first” and “second” may be used to simply distinguish a corresponding component from another, and do not limit the components in other aspects (e.g., importance or order). It is to be understood that if an element (e.g., a first element) is referred to, with or without the term “operatively” or “communicatively”, as “coupled with,” “coupled to,” “connected with,” or “connected to” another element (e.g., a second element), the element may be coupled with the other element directly (e.g., wiredly), wirelessly, or via a third element.
As used in connection with various embodiments of the disclosure, the term “module” may include a unit implemented in hardware, software, or firmware, or any combination thereof, and may interchangeably be used with other terms, for example, “logic,” “logic block,” “part,” or “circuitry”. A module may be a single integral component, or a minimum unit or part thereof, adapted to perform one or more functions. For example, according to an embodiment, the module may be implemented in a form of an application-specific integrated circuit (ASIC).
Various embodiments as set forth herein may be implemented as software (e.g., the program 140) including one or more instructions that are stored in a storage medium (e.g., internal memory 136 or external memory 138) that is readable by a machine (e.g., the electronic device 101). For example, a processor (e.g., the processor 120) of the machine (e.g., the electronic device 101) may invoke at least one of the one or more instructions stored in the storage medium, and execute it, with or without using one or more other components under the control of the processor. This allows the machine to be operated to perform at least one function according to the at least one instruction invoked. The one or more instructions may include a code generated by a compiler or a code executable by an interpreter. The machine-readable storage medium may be provided in the form of a non-transitory storage medium. Wherein, the “non-transitory” storage medium is a tangible device, and may not include a signal (e.g., an electromagnetic wave), but this term does not differentiate between where data is semi-permanently stored in the storage medium and where the data is temporarily stored in the storage medium.
According to an embodiment, a method according to various embodiments of the disclosure may be included and provided in a computer program product. The computer program product may be traded as a product between a seller and a buyer. The computer program product may be distributed in the form of a machine-readable storage medium (e.g., compact disc read only memory (CD-ROM)), or be distributed (e.g., downloaded or uploaded) online via an application store (e.g., PlayStore™), or between two user devices (e.g., smart phones) directly. If distributed online, at least part of the computer program product may be temporarily generated or at least temporarily stored in the machine-readable storage medium, such as memory of the manufacturer's server, a server of the application store, or a relay server.
According to various embodiments, each component (e.g., a module or a program) of the above-described components may include a single entity or multiple entities. According to various embodiments, one or more of the above-described components may be omitted, or one or more other components may be added. Alternatively or additionally, a plurality of components (e.g., modules or programs) may be integrated into a single component. In such a case, according to various embodiments, the integrated component may still perform one or more functions of each of the plurality of components in the same or similar manner as they are performed by a corresponding one of the plurality of components before the integration. According to various embodiments, operations performed by the module, the program, or another component may be carried out sequentially, in parallel, repeatedly, or heuristically, or one or more of the operations may be executed in a different order or omitted, or one or more other operations may be added.
Referring to
The sizes or ratios of the displays or display regions of the electronic devices 101 illustrated in
For example, as illustrated in
For another example, as illustrated in
For another example, as illustrated in
In the case of the slidable electronic device or the rollable electronic device, there may be a closed state in which the display is not expanded, an intermediate state, or an open state in which the display 2220 or 2320 is expanded, according to a direction in which the second housing 2215 or 2315 moves with respect to the first housing 2210 or 2310. The open state may be defined as a state where the display region is expanded compared to the closed state, and may provide display regions of various areas according to a movement position of the second housing 2215 or 2315.
Hereinafter, the electronic device 101 according to various embodiments disclosed herein is illustrated as an example of an electronic device (e.g., foldable electronic device) having a form factor of a foldable type, but the electronic device 101 and an operation thereof according to various embodiments are not limited thereto. For example, the electronic device 101 may have various form factors, such as a bar type or plate type, a rollable type, and/or a slidable type and may also operate according thereto.
According to an embodiment, the electronic device 101 of various form factors may provide a force touch management function based on artificial intelligence. For example, the electronic device 101 may determine whether a force touch is recognized based on the variation of a skin area (or a skin area/touch area according to skin compression), using artificial intelligence software without a physical pressure sensor.
According to an embodiment, the electronic device 101 of various form factors may provide a force touch situation user interface (UI), based on force touch recognition.
According to an embodiment, the electronic device 101 of various form factors may provide a function (e.g., force touch action function) of mapping an action executable by a force touch according to a user configuration. The force touch action function may include an action function designatable for each application and/or a global action function applicable to an electronic device system.
According to an embodiment, the electronic device 101 of various form factors may learn a touch recognition area according to a form factor state (e.g., a first state, a second state, or an intermediate state) and a grip state (e.g., grip by one hand or grip by two hands), and dynamically change (or adjust) a force touch recognition algorithm (e.g., a reference touch area or a threshold value).
According to various embodiments, the electronic device 101 of various form factors may learn whether a force touch consistent with the user's intent is recognized, and when there is need to adjust force touch sensitivity according to a learning situation, provide a sensitivity correction user interface (UI).
Referring to
The application layer may include an application 310.
The application 310 may be an application which is stored in the memory 130, is executable by the processor 120, or is installed in the electronic device 101. The application 310 may include, for example, a force touch action app 315, app1, app2, app3, a system user interface (UI), and/or various applications executable in the electronic device 101, and the type thereof is not limited.
The force touch action app 315 according to an embodiment may be an application that manages a force touch function based on artificial intelligence and provides an interaction with the user to configure a force touch action. For example, the force touch action app 315 may be an application capable of providing a force touch action configuration function, a force touch learning function (e.g., a force touch sensitivity adjustment function and a touch area adjustment function), and a function of changing a screen layout according to a force touch action. The force touch action app 315 and a management operation thereof will be described with reference to the drawings described below.
The system user interface (UI) may manage a system of the electronic device 101, such as a screen related to a notification bar or a quick view.
The framework layer may provide various functions to the application 310 so that the application 310 uses a function or information provided from at least one resource of the electronic device 101.
The framework layer may include, for example, an input device manager 320, a sensor manager 323, a view system 325, and an activity manager 327, but is not limited thereto.
The input device manager 320 may determine whether a force touch is recognized, based on a signal generated from a touch sensor (or touch data transferred through the sensor manager 323), and transfer force touch recognition information (or a force touch event) to the force touch action app 315. For example, the input device manager 320 may observe whether a force touch occurs during a force touch observation interval (e.g., 300 ms) within a period (e.g., 500 ms) configured for determining a long touch, and may then perform a pressure calculation (e.g., based on a skin area change) for the force touch to determine whether the force touch is recognized.
The sensor manager 323 may control a sensor, based on a configuration of the application 310. The sensor manager 323 may collect and control sensor information, based on the usability of a sensor module. For example, when a user's touch input occurs, the sensor manager 323 may control a touch sensor to instruct same to generate touch data. For example, the sensor manager 323 may generate touch data corresponding to a user's touch and transfer same to the input device manager 320.
The view system 325 may include a set of expandable views used to create an application user interface. According to an embodiment, the view system 325 may be a program for drawing at least one layer, based on the display region resolution of the display. According to an embodiment, the application 310 may use a view (e.g., a drawing library) to draw at least one layer based on the display region resolution of the display.
The activity manager 327 may manage a life cycle of activity. According to an embodiment, the activity manager 327 may manage execution and termination of the application 310.
The hardware abstraction layer is an abstracted layer between multiple hardware modules (e.g., the display module 160 and the sensor module 176 in
The event hub 330 may be an interface in which an event occurring in a touch circuit and a sensor circuit is standardized. The event hub 330 may be included in an abstracted layer (hardware abstraction layer, HAL) between multiple hardware modules included in the hardware layer and software of the electronic device.
The surface flinger 335 may synthesize multiple layers. For example, the surface flinger 335 may provide data indicating synthesized multiple layers to a display controller 345. A display controller (display driver IC, DDI) 345 may indicate a graphic display controller or a display drive circuit controller.
The kernel layer may include various drivers for controlling various hardware modules (e.g., the display module 160 and the sensor module 176) included in the electronic device 101. For example, the kernel layer may include a sensor driver 340 including an interface module that controls a sensor controller 350 connected to the sensor module 176, and the display controller (display driver IC, DDI) 345 that controls the display panel 355 connected to the display module 160.
The sensor driver 340 may connect an operating system to a sensor, and include information on a driving method, a characteristic, and/or a function of the sensor. The sensor driver 340 may include an interface module that controls the sensor controller connected to the sensor. The display controller 345 may receive data representing synthesized multiple layers from the surface flinger 335, and may correspond to a display drive circuit.
The hardware layer may include a hardware module or element (e.g., the sensor controller 350 and the display panel 355) included in the electronic device 101, but is not limited thereto and may include the elements illustrated in
Referring to
The electronic device 101 may distinguish between a long touch and a force touch, based on the variation of skin compression based on a touch area.
For example, as indicated by reference numeral <401>, it may be noted that when a user performs a long touch 410 on a touch screen, the variation of skin compression based on a touch area during a touch time (e.g., 300 ms) maintains the same size after a predetermined time, whereas, in a case of a force touch 420 of pressing the touch screen hard, the variation of skin compression based on a touch area gradually increases.
As indicated by reference numeral <402>, the electronic device may, when the variation of skin compression based on a touch area for a touch occurring for a predetermined time increases, classify the touch as the force touch 420, and when the variation of skin compression based on the touch area is constant, classify the touch as the long touch 410.
In this case, the long touch 410 and the force touch 420 may have the same initial touch time. The electronic device 101 may observe whether the force touch 420 is generated within a long touch detection time (e.g., LT_a) serving as a criterion to determine the long touch 410, and then determine force touch recognition in a force touch determination interval (FT_b). For example, the electronic device 101 may divide the detection time into a force touch observation interval (FT_a), the force touch determination interval (FT_b), and a long touch determination interval (LT_a), and determine whether a touch is the long touch 410 or the force touch 420 according to a touch characteristic. As indicated by reference numeral <403>, the electronic device 101 may determine a second touch input as the force touch 420 because the touch pressure thereof exceeds a reference value (e.g., 300 gf/cm2) of a touch pressure in the force touch observation interval. The electronic device 101 may not recognize a first touch input as a force touch because the touch pressure thereof is lower than the reference value in the force touch observation interval, and may recognize the first touch input as the long touch 410 in the long touch determination interval.
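For illustration only, the interval-based distinction described above might be sketched as follows; the interval lengths, the area-growth ratio, and the sample format are assumptions for the sketch and are not values taken from the disclosure.

```python
# Minimal sketch (not the disclosed implementation): classify one touch from a
# time series of touch-area samples using assumed observation intervals.
FT_OBSERVE_MS = 300   # force touch observation interval (FT_a), assumed
FT_DECIDE_MS = 350    # end of the force touch determination interval (FT_b), assumed
LT_DETECT_MS = 500    # long touch detection time (LT_a), assumed
GROWTH_RATIO = 1.15   # assumed late-to-early area ratio indicating a force touch

def classify_touch(samples):
    """samples: list of (timestamp_ms, touch_area) tuples for a single touch."""
    early = [area for t, area in samples if t <= FT_OBSERVE_MS]
    late = [area for t, area in samples if FT_OBSERVE_MS < t <= FT_DECIDE_MS]
    if not early or not late:
        return "undetermined"
    # A force touch shows a gradually increasing skin-compression area, while a
    # long touch keeps roughly the same area after the initial contact.
    if max(early) > 0 and max(late) / max(early) >= GROWTH_RATIO:
        return "force_touch"
    duration = samples[-1][0] - samples[0][0]
    return "long_touch" if duration >= LT_DETECT_MS else "tap"
```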
Referring to
The touchscreen display 510 may include a display 5110, a touch sensor 5113, and a touch sensor IC 5115.
According to an embodiment, the display 5110 may display various images under the control of the processor 520. The display 5110 may be implemented as one of a liquid crystal display (LCD), a light-emitting diode (LED) display, a micro LED display, a quantum dot (QD) display, or an organic light-emitting diode (OLED) display, and is not limited thereto.
According to an embodiment, the touchscreen display 510 may be at least partially flexible, and may be implemented as a foldable display, a rollable display, a slidable display, or a stretchable display.
The touchscreen display 510 may detect a touch and/or proximity touch (or hovering) input made using a part (e.g., a finger) of the user's body or an input device (e.g., a stylus pen).
The touch sensor 5113 may convert touch inputs made through the user's finger or an input device into touch signals, and transmit the touch signals to the touch sensor IC 5115. The touch sensor may be implemented as, for example, one of an electrode sensor (conductivity sensor), a capacitive touch sensor, a resistive touch sensor, a surface touch sensor, a projected capacitive (PCAP) touch sensor, or an ultrasonic (surface acoustic wave) touch sensor, and is not limited thereto.
The touch sensor IC 5115 may control the touch sensor to detect a touch input on at least one position. The touch sensor IC 5115 may generate touch data (e.g., a touch position, a touch area, touch video data, or a touch time) for a touch input detected based on a change of a detected signal (e.g., voltage, light quantity, resistance, or charge amount).
The memory 530 may store various instructions executable by the processor 520. Such instructions may include control commands such as arithmetic and logical operations, data transfer, or input/output which are recognizable by the processor 520. The memory 530 may include a volatile memory (e.g., the volatile memory 132 in
The processor 520 may include various processing circuitry operatively, functionally, and/or electrically connected to elements (e.g., the touchscreen display 510 and the memory 530) of the electronic device 101 so as to perform calculation or data processing related to control and/or communication of the elements. The processor 520 may include at least some of elements and/or functions of the processor 120 in
According to an embodiment, the processor 520 may detect various touch gestures based on touch signals, and execute a touch action corresponding to a touch gesture. For example, when a touch is maintained for a long touch detection time (LT_a) (e.g., approximately 500 ms), the processor 520 may detect a long touch.
According to an embodiment, the processor 520 may process (or perform) an operation related to a force touch management function based on artificial intelligence and a function (e.g., force touch action function) of mapping an action executable by a force touch according to a user configuration. The processor 520 may store, in a memory (e.g., a queue or a buffer register), touch data (e.g., a touch position, a touch area, touch video data, or a touch time) received from the touch sensor IC 5115. The processor 520 may align touch data in a queue and determine whether a force touch is recognized, based on the touch data aligned in the queue. There may be no limit to calculation and data processing functions implementable by the processor 520 on the electronic device 101. However, hereinafter, operations related to a force touch will be described.
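As a way to picture the queue-based buffering of touch data mentioned above, here is a minimal sketch; the field names and the buffer length are assumptions, not the actual driver or framework format.

```python
# Illustrative sketch: bounded queue of touch frames consumed by the force
# touch decision logic. Field names and sizes are assumed for the example.
from collections import deque

class TouchQueue:
    def __init__(self, maxlen=64):
        self.frames = deque(maxlen=maxlen)   # oldest frames drop automatically

    def push(self, position, area, video_frame, timestamp_ms):
        self.frames.append({"pos": position, "area": area,
                            "video": video_frame, "t": timestamp_ms})

    def aligned(self):
        # Return the buffered frames ordered by timestamp so the pressure
        # calculation sees a monotonically ordered sequence for one touch.
        return sorted(self.frames, key=lambda frame: frame["t"])
```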
According to an embodiment, the processor 520 may, in relation to a force touch operation, include a calculation condition determination module 5210, a force touch learning module 5230, a force touch determination module 5220, and an execution control module 5240, and each module may include various circuitry and/or executable program instructions and be operated under the control of the processor 520.
The calculation condition determination module 5210 may observe whether there is a force touch, when a finger (or stylus pen) touches at least one point of the display for a configured reference time (e.g., approximately 300 ms) (e.g., a force touch observation interval (FT_a)) or longer, based on touch data. For example, the calculation condition determination module 5210 may, as illustrated in
The force touch determination module 5220 may determine whether the force touch 620 exists, within a long touch detection period (LT_a) illustrated in
The force touch determination module 5220 may, when the calculation condition related to the force touch is satisfied, determine whether a force touch is recognized, based on an artificial intelligence network using touch data in a pressure calculation interval (e.g., 300 ms-350 ms). For example, the force touch determination module 5220 may read touch data aligned in a queue with respect to a finger touch having a high priority. The force touch determination module 5220 may, after the force touch observation interval (FT_a), perform a pressure calculation for a touch being maintained in the pressure calculation interval (FT_b) to determine whether a force touch is recognized. The pressure calculation may be a process of, based on an artificial intelligence network, receiving pieces of touch data (or input data) and outputting pieces of force data (or output data). The touch data may indicate touch video data and the force data may indicate virtual force data.
The force touch determination module 5220 may analyze the variation (or skin compression based on a touch area) of the user's skin area obtained based on touch data, so as to distinguish between a force touch and a long touch. For example, the force touch determination module 5220 may recognize the touch as a force touch when a result of force touch observation indicates that a change in the skin area obtained based on the touch data gradually increases. The force touch determination module 5220 may determine the touch as a long touch when a result of force touch observation indicates that a change in the skin area does not gradually increase and the skin area is maintained at a constant size.
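The disclosure describes the pressure calculation as an artificial intelligence network that takes touch (video) data and outputs virtual force data. The sketch below only illustrates that input/output relationship with a simple area-times-intensity proxy; the proxy, the reference value, and the monotonic-increase check are assumptions and not the disclosed model.

```python
# Stand-in for the pressure calculation: map touch video frames (2D capacitance
# grids) to virtual force values, then decide whether they indicate a force touch.
def virtual_force(frames, gain=1.0):
    """frames: list of 2D lists of capacitance samples within one touch window."""
    forces = []
    for frame in frames:
        active = [cell for row in frame for cell in row if cell > 0]
        area = len(active)                      # cells covered by compressed skin
        intensity = sum(active) / area if area else 0.0
        forces.append(gain * area * intensity)  # proxy for "virtual force" output
    return forces

def is_force_touch(frames, reference=300.0):
    forces = virtual_force(frames)
    # Treat a monotonically rising virtual force that exceeds the assumed
    # reference value by the end of the window as a recognized force touch.
    rising = all(b >= a for a, b in zip(forces, forces[1:]))
    return bool(forces) and rising and forces[-1] >= reference
```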
The execution control module 5240 may display a force touch situation UI on the display 5110 when the touch is recognized as a force touch. The force touch situation UI may be omitted according to a display configuration. The execution control module 5240 may execute a force touch action function mapped to (or configured for) a force touch when the recognition as the force touch is maintained for a configured time or is maintained while a force touch situation UI is displayed.
The execution control module 5240 may, when executing the force touch action function, reconfigure a screen layout to correspond to the force touch action function and/or an application characteristic to execute the force touch action function. The force touch action function may include an action function designatable for each application and/or a global action function applicable to an electronic device system. For example, the force touch action function may include, but is not limited to, a pop-up window execution function, a volume panel control function, a flash memo function, a function for creating and then sharing a screenshot, a clipboard function, a flashlight function, an auto-rotation function, and/or a mute control function.
According to an embodiment, the force touch learning module 5230 may control a touch sensitivity learning and/or touch area learning operation in relation to a force touch.
For example, the force touch learning module 5230 may record a malfunction situation in which a force touch different from the user's intent is recognized, when a force touch is released while a force touch situation UI is displayed or when a force touch is released after being recognized. For example, in a situation where a force touch situation UI is displayed for a configured time (e.g., n sec), based on an input within a force touch determination interval (e.g., 300-350 ms) being recognized as a force touch, when the user's touch input on a different region is received before the configured time is ended, the force touch learning module 5230 may consider that an unintended force touch has occurred and record this situation as a malfunction situation.
The force touch learning module 5230 may learn a malfunction situation for a force touch to perform force touch sensitivity correction. For example, when a situation where a force touch is released is repeated a configured number of times (N) or more, the force touch learning module 5230 may display a sensitivity correction UI on the display 5110 to induce sensitivity correction. For another example, when a touch sensitivity configuration mode (or configuration screen) is entered according to a user request (e.g., manual adjustment), the force touch learning module 5230 may learn the user's touch data through the touch sensitivity configuration mode to adjust the sensitivity of a force touch.
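A rough sketch of the malfunction-counting behavior described above; the repetition threshold N, the reset rule after a confirmed force touch, and the correction-UI callback are illustrative assumptions.

```python
# Sketch: count force touches that the user releases early (treated as
# unintended) and prompt sensitivity correction after N repetitions.
class ForceTouchLearner:
    def __init__(self, n_threshold=3, show_correction_ui=lambda: None):
        self.n_threshold = n_threshold
        self.malfunctions = 0
        self.show_correction_ui = show_correction_ui

    def on_early_release(self):
        # Called when a recognized force touch is released while the force
        # touch situation UI is shown, or a touch lands on another region.
        self.malfunctions += 1
        if self.malfunctions >= self.n_threshold:
            self.show_correction_ui()   # induce user sensitivity adjustment
            self.malfunctions = 0

    def on_confirmed(self):
        # Assumed reset rule: an intended force touch clears the count.
        self.malfunctions = 0
```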
For another example, the force touch learning module 5230 may learn a touch recognition area (or force touch recognition area) according to a form factor state and/or a grip state of the electronic device, and dynamically change a force touch recognition algorithm (e.g., a reference area or a threshold value). For example, when a touch of a finger is made according to various angles, the force touch learning module 5230 may store the dynamic area of the touch in a queue and train a force touch recognition model with the dynamic area of the touch to update (or customize) the model.
The intensity of a touch or the area of a finger making the touch may differ for each user. In addition, the touch area may vary according to the user's thumb, middle finger, index finger, ring finger, or little finger, and the touch area may differ according to the angles of the left and right hands. For example, the touch area of the thumb may dynamically change within a rotation radius of the thumb.
The force touch learning module 5230 may learn a recognition area (hereinafter, a touch recognition area) in which a force touch is made and the variation thereof according to a form factor state (e.g., a first state, a second state, or an intermediate state) and a grip state (e.g., grip by one hand or grip by two hands) of the electronic device 101.
For example, when a foldable electronic device is used as an example, the force touch learning module 5230 may store, in a queue, a touch recognition area within the rotation radius of the thumb of the right or left hand when one hand is used to grip the device in a folded state. The force touch learning module 5230 may store, in a queue, a touch recognition area of the left or right hand in a two-hand grip state. The force touch learning module 5230 may store, in a queue, a touch recognition area for the index finger of the right hand while the foldable electronic device is gripped by the left hand. The touch recognition area stored in the queue may be learned (e.g., machine learning or deep learning), based on an artificial intelligence network so as to increase a recognition rate.
An electronic device may, when a touch recognition area is reduced due to the rotation or tilt of the finger angle, have difficulty in recognizing a force touch, but may learn a touch area and a variation thereof to update a force touch recognition algorithm (e.g., a force touch recognition model) so as to improve the recognition rate of a force touch.
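One way such a dynamically adjusted reference area could be organized is sketched below; the form factor states, grip states, scale factors, and baseline value are assumptions for illustration and are not taken from the disclosure.

```python
# Sketch: select a reference touch area per form factor and grip state, with a
# bias term standing in for what the learned force touch recognition model adds.
BASE_REFERENCE_AREA = 120.0   # assumed baseline reference area (sensor cells)

AREA_SCALE = {
    ("folded", "one_hand"):   0.8,   # thumb moving within its rotation radius
    ("unfolded", "one_hand"): 0.9,
    ("unfolded", "two_hand"): 1.0,
}

def reference_area(form_factor_state, grip_state, learned_bias=0.0):
    scale = AREA_SCALE.get((form_factor_state, grip_state), 1.0)
    # learned_bias would come from training on the user's recorded touch
    # recognition areas (e.g., per-finger, per-angle data stored in the queue).
    return BASE_REFERENCE_AREA * scale + learned_bias
```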
An electronic device (e.g., the electronic device 101 in
According to an example embodiment, the display may include a flexible display having a variable display region in which visual information is displayed.
According to an example embodiment, at least one processor, individually and/or collectively, may be configured to: receive an electronic device form factor state and/or grip state from a sensor module and learn a touch recognition area according to the electronic device form factor state and/or grip state to dynamically adjust a threshold value of a force touch recognition model comprising circuitry configured to determine skin compression.
According to an example embodiment, the force touch recognition model may be configured to receive touch data as input data and output force touch data, based on an artificial intelligence network.
According to an example embodiment, at least one processor, individually and/or collectively, may be configured to display a force touch situation UI on the display, based on the recognition of the force touch.
According to an example embodiment, at least one processor, individually and/or collectively, may be configured to, based on a user's touch release occurring in a state where the force touch situation UI is displayed or after the force touch is recognized, record a force touch malfunction situation and, based on the force touch malfunction situation being repeated a configured number of times (N) or more, recognize that sensitivity adjustment of the force touch is required.
According to an example embodiment, at least one processor, individually and/or collectively, may be configured to observe whether a force touch occurs in a force touch observation interval before a long touch detection period, and perform a pressure calculation in a force touch determination interval to determine whether a force touch is recognized.
According to an example embodiment, at least one processor, individually and/or collectively, may be configured to, based on a variation of skin compression based on a touch area gradually increasing over time, observe that a corresponding touch is a force touch, and based on a variation of skin compression based on a touch area maintaining the same size over time, observe that a corresponding touch is a long touch.
According to an example embodiment, the force touch action function may include an action function designatable for each application and/or a global action function applicable to an electronic device system.
According to an example embodiment, at least one processor, individually and/or collectively, may be configured to receive information on an application being executed and, based on the force touch being recognized in a state where the application is executed, reconfigure a screen layout of the force touch action function, based on the information on the application to dynamically change and display a screen.
According to an example embodiment, the sensitivity correction UI may include a touch controller and may be configured to enable touch sensitivity adjustment according to a position of the touch controller.
In the embodiment below, operations may be sequentially performed, but sequential performance is not necessarily required. For example, the sequence of operations may be changed, and at least two operations may be performed in parallel.
Referring to
The application information may include at least one of an app type, PACKAGE NAME, description information, app state information (e.g., background information, PIP information based on screen transition, attribute information (e.g., screen window information (e.g., popup, split state, rotation, position, or size information))), and landscape/portrait mode support information.
According to an embodiment, the processor 520 may receive form factor state information and holding information from at least one sensor (e.g., a gyro sensor, an acceleration sensor, a hinge angle (Hall IC) sensor, or a grip sensor), based on a change of a form factor state.
According to an embodiment, operation 710 may be omitted.
In operation 720, the processor 520 may determine whether a force touch is recognized, based on reception of touch data.
The processor 520 may analyze a feature of touch data from a time point at which a touch is started, to observe whether a force touch is generated and determine whether a force touch is recognized, before long touch determination. For example, the processor 520 may observe whether a force touch is generated during a force observation interval within a configured long touch detection period (e.g., 500 ms), and perform a pressure calculation for the touch after the force observation interval to determine whether a force touch exists. The processor 520 may perform long touch determination for the touch after force touch determination.
According to an embodiment, in the process of determining a force touch, the electronic device 101 may learn pieces of touch data to adjust touch sensitivity and adjust a threshold value for a touch area so as to improve a touch recognition rate. For example, the processor 520 may receive form factor state information and holding information of the electronic device and learn the user's touch recognition area according to the form factor state information and the holding information to dynamically adjust a threshold value for a touch area used to determine a force touch (or update a force touch recognition model).
According to an embodiment, the processor 520 may learn a situation where a force touch unintended by the user is repeatedly recognized, and when it is determined that sensitivity adjustment is required, may induce sensitivity adjustment by providing a touch correction UI. Hereinafter, a force touch learning operation will be described in greater detail with reference to
In operation 730, the processor 520 may display a force touch situation UI on the display, based on recognition of a force touch.
The force touch situation UI may include at least one of a message and an animation object notifying the user of the occurrence of a force touch, and the type and shape thereof are not limited. When the user recognizes, through the force touch situation UI, that an intended force touch has occurred, the force touch may be maintained, and when the user recognizes that an unintended force touch has been recognized, the touch may be released.
According to an embodiment, operation 730 may be omitted.
In operation 740, the processor 520 may execute a mapped (or configured) force touch action function, based on the force touch recognition being maintained. For example, when the force touch is maintained while the force touch situation UI is displayed or when the force touch recognition is maintained, the processor 520 may execute a force touch action function mapped to the force touch.
In operation 750, the processor 520 may dynamically change a screen layout of the force touch action function and display same on the display. The processor 520 may reconfigure a screen layout, based on at least one of the form factor state of the electronic device and/or information on an application being executed, and display a screen for the force touch action function on the display.
For example, when a pop-up window execution function is mapped as the force touch action function, the processor 520 may execute a force touch action function of switching a full screen being currently displayed into a pop-up window screen, based on the recognition of a force touch, and may reconfigure a screen layout and determine the size of a pop-up window, based on a feature (e.g., watching YouTube on the internet or reproducing a video) of an application (e.g., a foreground application) being currently executed, so as to display the pop-up window on the display. In this case, an icon (e.g., a menu icon) included in the screen layout may be processed using a hiding function for screen optimization.
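As a hedged illustration of this layout reconfiguration idea, the sketch below selects a pop-up window size and an icon-hiding flag from an assumed content type of the foreground application. AppContentType, PopupLayout, and the dimension values are hypothetical and are not taken from the disclosure.

```kotlin
// Assumed coarse classification of the foreground application's content.
enum class AppContentType { VIDEO, BROWSER, OTHER }

data class PopupLayout(val widthDp: Int, val heightDp: Int, val hideMenuIcons: Boolean)

fun buildPopupLayout(contentType: AppContentType): PopupLayout = when (contentType) {
    // Video-centric apps get a 16:9 pop-up with menu icons hidden for screen optimization.
    AppContentType.VIDEO -> PopupLayout(widthDp = 320, heightDp = 180, hideMenuIcons = true)
    // Browsing apps get a taller pop-up, still with chrome hidden.
    AppContentType.BROWSER -> PopupLayout(widthDp = 300, heightDp = 400, hideMenuIcons = true)
    // Other apps keep their icons and get a neutral square pop-up.
    AppContentType.OTHER -> PopupLayout(widthDp = 280, heightDp = 280, hideMenuIcons = false)
}
```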
Referring to
The screen 810 for configuring a force touch action function may include an on/off toggle item 820 enabling selection of use/non-use of a force touch action, a touch sensitivity adjustment bar 830 for adjusting touch sensitivity, and force touch action function items 840.
In the example of
When the user selects one of the force touch action function items, the electronic device 101 may execute the force touch action function that is selected by the user (or is mapped or configured), based on recognition of a force touch.
Referring to
A user may input a force touch 920 while watching an image through an image reproduction screen (e.g., application execution screen) 910, as illustrated in a screen 901.
Although not illustrated in the example of
The electronic device 101 may, as illustrated in a screen 902, provide a volume panel adjustment bar 930 mapped to the force touch to the image reproduction screen 910, based on the recognition of the force touch. When the user maintains a drag 925 in a second direction while maintaining the force touch 920, the electronic device 101 may adjust the volume of the video according to the drag direction. As described above, as in the example of
Referring to
A screen 1001 may show an example of a state where a home screen 1010 is displayed on the display. A user may select a media content app icon on the home screen 1010 to execute a media content application.
A user may input a force touch 1030 while the electronic device is displaying a media streaming reproduction screen (e.g., application execution screen) 1020 on the display, as shown in a screen 1002. The electronic device 101 may, before long touch recognition, analyze pieces of touch data to determine whether a force touch is recognized.
As illustrated in a screen 1003, the electronic device 101 may display a force touch situation UI 1040, based on recognition of a force touch. The force touch situation UI 1040 may be omitted. Alternatively, the electronic device 101 may notify the user of the recognition of a force touch using an effect such as screen flickering or a haptic effect.
A screen 1004 shows an example of a state where, when it is determined that a force touch is recognized in the streaming reproduction screen displayed on the display, a pop-up window function is executed as a force touch action function. In response to the force touch, the electronic device 101 may display the streaming reproduction screen in a pop-up window 1025 overlapping the home screen 1010.
A screen 1005 may be an example in which the electronic device 101 reconfigures a screen layout of the pop-up window 1025 in consideration of a characteristic of the media content application and displays the reconfigured pop-up window 1027 on the display. For example, the electronic device 101 may process a top layout and a bottom layout of the streaming reproduction screen using a hiding function, and process only a streaming image to be displayed.
In the embodiment below, operations may be sequentially performed, but sequential performance is not necessarily required. For example, the sequence of operations may be changed, and at least two operations may be performed in parallel.
Referring to
For example, the processor 520 may observe whether a force touch is generated during a force observation interval within a configured long touch detection period (e.g., 500 ms), and perform a pressure calculation for the touch after the force observation interval to determine whether a force touch exists. The pressure calculation may be a process of, based on an artificial intelligence network, receiving pieces of touch data (or input data) and outputting pieces of force data (or output data). The touch data may indicate touch video data and the force data may indicate virtual force data.
The processor 520 may determine whether a force touch is recognized through the pressure calculation, and may perform long touch determination for the touch after determining whether a force touch exists.
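A minimal sketch of the data flow described above is given below, assuming a hypothetical ForceModel interface that maps touch video frames to virtual force values; the actual artificial intelligence network, its training, and its thresholding are not reproduced here.

```kotlin
// Stand-in for the recognition model: touch video frames in, virtual force values out.
interface ForceModel {
    // Returns one virtual force value per input frame.
    fun infer(touchVideo: List<FloatArray>): FloatArray
}

// A force touch is declared here if the virtual force rises monotonically and the
// final value crosses the threshold; the real decision rule may differ.
fun isForceTouch(model: ForceModel, touchVideo: List<FloatArray>, threshold: Float): Boolean {
    val force = model.infer(touchVideo)
    if (force.isEmpty()) return false
    val nonDecreasing = force.toList().zipWithNext().all { (a, b) -> b >= a }
    return nonDecreasing && force.last() >= threshold
}
```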
In operation 1120, the processor 520 may recognize a long touch when a force touch is not recognized and the touch is maintained for a configured long touch detection time (no in operation 1110).
According to an embodiment, in the process of determining a force touch, the electronic device 101 may learn pieces of touch data to adjust touch sensitivity and adjust a touch area so as to improve a touch recognition rate. For example, the processor 520 may receive form factor state information and holding information of the electronic device and learn the user's touch recognition area according to the form factor state information and the holding information to dynamically adjust a threshold value (or reference touch area) for a force touch (or update a force touch recognition model). The electronic device 101 may learn the user's pieces of touch data to adjust a touch recognition area varying according to the current form factor state and grip state so as to improve a touch recognition rate.
In operation 1130, the processor 520 may display a force touch situation UI, based on recognition of a force touch.
According to an embodiment, operation 1130 may be omitted.
In operation 1140, the processor 520 may determine whether touch release is generated.
In operation 1150, the processor 520 may execute a force touch action function mapped to the force touch, based on touch release not being generated and the force touch being maintained (no in operation 1140).
In operation 1160, the processor 520 may reconfigure a screen layout of the force touch action function and dynamically change and display same on the display. For example, the processor 520 may determine a screen layout of the force touch action function, based on at least one of the form factor state of the electronic device and/or information on an application being executed.
In operation 1170, when touch release is generated in a situation where the force touch situation UI is displayed or when touch release is generated after a force touch is recognized (yes in operation 1140), the processor 520 may record malfunction situation information of the force touch.
In operation 1180, the processor 520 may determine whether the malfunction of a force touch repeatedly occurs N times, and if the malfunction occurs a number of times less than N (no in operation 1180), may return to operation 1110 to determine again whether a force touch is observed.
In operation 1190, if the malfunction of a force touch is repeated N or more times (yes in operation 1180), the processor 520 may perform a force touch learning operation for touch sensitivity. The force touch learning operation will be described in greater detail with reference to
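The malfunction bookkeeping of operations 1170 to 1190 could be reduced to a counter such as the one sketched below. The callback name, the reset on an intended force touch, and the class structure are assumptions made for illustration only.

```kotlin
class MalfunctionTracker(
    private val repeatLimit: Int,                 // the configured N
    private val onLearningRequired: () -> Unit    // e.g. start the sensitivity learning flow
) {
    private var count = 0

    // Called when touch release occurs while, or right after, a force touch is recognized.
    fun recordMalfunction() {
        count++
        if (count >= repeatLimit) {
            onLearningRequired()
            count = 0
        }
    }

    // Assumption for this sketch: a force touch completed as intended resets the counter.
    fun recordIntendedForceTouch() {
        count = 0
    }
}
```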
Referring to
The processor 120 may process a recognized gesture through a framework layer and an application layer. For example, an interrupt signal for gesture recognition may be transmitted to an input device manager 1230. For example, if a volume panel control function is mapped as a force touch recognition function, the input device manager 1230 may request an audio service 1231 of the framework layer for volume panel control. The input device manager 1230 may transfer a force touch gesture event to a force touch app 1240 (e.g., the force touch action app 315 in
The force touch app 1240 may transfer a force touch gesture event (e.g., an interrupt) to an action dispatcher 1242, based on reception of the event through a force touch event receiver 1241. The action dispatcher 1242 may designate a force touch action function execution routine corresponding to the force touch. The force touch event receiver 1241 may transfer an execution task for the force touch action function execution routine to an activity manager 1250.
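A loose sketch of this dispatch path (event receiver, action dispatcher, hand-off to the activity manager) is shown below. The interfaces and names are illustrative stand-ins rather than actual framework APIs.

```kotlin
// A routine that executes one force touch action function (e.g., volume panel control).
fun interface ActionRoutine {
    fun execute()
}

class ActionDispatcher(private val routines: Map<String, ActionRoutine>) {
    // Designate the execution routine that corresponds to the received force touch gesture.
    fun dispatch(gesture: String): ActionRoutine? = routines[gesture]
}

class ForceTouchEventReceiver(
    private val dispatcher: ActionDispatcher,
    private val activityManager: (ActionRoutine) -> Unit   // hands the execution task over
) {
    fun onForceTouchGesture(gesture: String) {
        dispatcher.dispatch(gesture)?.let { activityManager(it) }
    }
}
```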
The processor 120 may, when touch release is generated in a state where a force touch gesture (or interrupt occurrence) is recognized, record a malfunction situation for the force touch, and when the same situation is repeatedly recorded N or more times, may perform a force touch learning procedure for touch sensitivity.
In the embodiment below, operations may be sequentially performed, but sequential performance is not necessarily required. For example, the sequence of operations may be changed, and at least two operations may be performed in parallel.
Referring to
When touch sensitivity correction is required in a data processing process based on a force touch, the processor 520 may perform operations 1320 to 1345 to perform a touch sensitivity learning procedure. When the form factor state and/or grip state of the electronic device is changed, the processor 520 may perform operations 1350 to 1375 to perform a touch area learning procedure.
In the touch sensitivity learning procedure, in operation 1320, the processor 520 may, after force touch recognition, determine whether a malfunction situation (or a situation where a force touch is recognized differently from the user's intent) caused by touch release repeatedly occurs N times. The processor 520 may, after force touch recognition, when a malfunction situation caused by touch release is not repeated N times, terminate the process of
In operation 1325, the processor 520 may display a sensitivity correction UI when a malfunction situation repeatedly occurs N times. For example, the sensitivity correction UI may be provided using a pop-up window, but is not limited thereto.
In operation 1330, the processor 520 may, when the user selects the sensitivity correction UI, enter a sensitivity correction configuration mode, and in operation 1335, the processor 520 may determine a touch method, based on an input obtained in the sensitivity correction configuration mode.
In operation 1340, the processor 520 may move a touch controller included in a sensitivity adjustment bar to the user's touch position. In operation 1345, the processor 520 may change a configuration of touch sensitivity according to the position of the touch controller. The changed configuration of touch sensitivity may be applied to touch gesture learning of the electronic device.
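For example, the mapping from the touch controller position on the adjustment bar to a touch sensitivity value could be a simple normalization such as the following sketch; the sensitivity range used here is an assumption, not a value from the disclosure.

```kotlin
data class SensitivityConfig(val value: Float)

fun sensitivityFromControllerPosition(
    position: Float,                 // normalized controller position: 0.0 = left end, 1.0 = right end
    min: Float = 0.2f,               // assumed lower bound of sensitivity
    max: Float = 1.0f                // assumed upper bound of sensitivity
): SensitivityConfig {
    val p = position.coerceIn(0f, 1f)
    // Linear interpolation between the assumed bounds.
    return SensitivityConfig(min + p * (max - min))
}
```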
In the touch area learning procedure, in operation 1350, the processor 520 may identify pieces of touch data stored in a database (DB) while the electronic device is in use.
In operation 1355, the processor 520 may identify a form factor state and a grip state of the electronic device. For example, the processor 520 may identify whether the electronic device is operating. The processor 520 may identify a form factor state (e.g., folded/unfolded/intermediate state). The processor 520 may determine whether the electronic device is gripped by one hand or gripped by two hands using a grip sensor. The processor 520 may determine the current inclination state of the electronic device using acceleration and gyro sensors.
In operation 1360, the processor 520 may determine whether a touch recognition area of touch data being currently processed is included in a range of a minimum value to a maximum value of the pieces of touch data stored in the database. The threshold of the minimum value or maximum value may be dynamically adjusted through a learning model. For example, since there are various cases, such as children, adults, men, women, people with large hand areas, and people with small hand areas, the electronic device 101 may support a function of adjusting a threshold for each user through touch area learning.
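Operation 1360 could be sketched as a range check against the stored touch areas, as below; the margin parameter stands in for the dynamically adjusted minimum/maximum thresholds and is an assumption made for illustration.

```kotlin
// Returns true when the touch area being processed falls inside the learned range
// derived from the user's stored touch data, widened by an assumed tolerance.
fun isWithinLearnedRange(
    currentArea: Float,
    storedAreas: List<Float>,
    margin: Float = 0.1f             // assumed tolerance applied to both bounds
): Boolean {
    val low = storedAreas.minOrNull() ?: return true
    val high = storedAreas.maxOrNull() ?: return true
    return currentArea in (low * (1 - margin))..(high * (1 + margin))
}
```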
In operation 1365, the processor 520 may perform filtering for an area corresponding to a designated value in each frame.
In operation 1370, the processor 520 may determine a recognition model based on a dynamic area and touch data stored in the database (DB), and in operation 1375, the processor 520 may update the recognition model based on a dynamic area.
Referring to
A screen 1401 may be an example of illustrating a home screen 1410 of the electronic device 101. As illustrated in the screen 1401, a user may perform a long touch input 1420. The electronic device 101 may, before long touch detection, determine whether a force touch is recognized. For example, even though the user has intended a long touch input, the electronic device 101 may recognize the input as a force touch by determining that a skin area gradually increases after a predetermined time, based on a touch area.
As shown in a screen 1402, the electronic device 101 may display a force touch situation UI 1430 on the display, based on recognition of a force touch.
As shown in a screen 1403, the user may identify, through the force touch situation UI, that a force touch unintended by the user is recognized, and perform a touch release operation. When touch release is generated in a situation where the force touch situation UI is displayed, the electronic device 101 may recognize a force touch malfunction situation unintended by the user and record the malfunction situation.
A screen 1404 may show an example in which, after the electronic device 101 recognizes a force touch 1423, a user experience for a force touch malfunction, such as the occurrence of touch release, has repeatedly occurred N times.
As shown in a screen 1405, the electronic device 101 may display a sensitivity correction UI 1440 of recommending force touch sensitivity adjustment to the user on the display, after the occurrence of the force touch malfunction N times. The sensitivity correction UI 1440 may be implemented as a pop-up window including a touch sensitivity adjustment bar, but is not limited thereto.
As shown in a screen 1406, the user may enter a sensitivity correction configuration mode through the sensitivity correction UI 1440, and change a touch sensitivity configuration by adjusting an adjustment bar 1450.
Although an example of adjusting a touch sensitivity configuration by a user through a sensitivity correction UI has been described with reference to the example screens of
Referring to
A screen 1501 shows an example of a touch area learning UI 1510 according to a user's grip by two hands in a first state of a foldable electronic device. The user may input a force touch 1520 on various positions while gripping the electronic device with two hands.
A screen 1502 shows an example of a touch area learning UI 1515 according to a user's grip by one hand in a second state of a foldable electronic device. The user may input a force touch 1525 on various positions while gripping the electronic device by one hand. The electronic device 101 may provide the screen 1501 or the screen 1502 according to a user's request to configure force touch area learning, to learn the user's force touch area and adjust a threshold value for a touch area.
In the examples of
Hereinafter, examples of force touch action execution for each force touch action configuration are illustrated in
The screens of
The screens of
The electronic device 101 may, as shown in a screen 1702, based on recognition of a force touch 1720, execute a configured force touch action function, for example, a live caption function, and reconfigure and display, on the display, a layout screen 1730 based on the live caption function. For example, the electronic device 101 may convert audio being played in the background into text and convert the screen into the layout screen 1730 for displaying converted text 1735. The text 1735 displayed on the screen 1702 may support a copy/paste function.
The screens of
The screens of
The screens of
Although not illustrated in the drawings, a force touch action function may be variously configured. For example, the electronic device may provide, as a force touch action function, a function of generating a screenshot and then sharing same with another electronic device, a function of searching for a specific word in a text screen using a dictionary, a function of selecting a file and then previewing same, a function of selecting a schedule and then creating a new calendar, a function of changing the fast-forward or rewind speed during image reproduction, or a function of editing a file name, but the disclosure is not limited thereto.
Referring to
A method for improving a force touch operation according to an example embodiment may include: receiving touch data from a touch sensor; based on the touch data, determining whether a force touch having a gradually increasing variation of skin compression according to a touch area is recognized within a long touch detection time; based on recognition of the force touch, executing a force touch action function mapped to the force touch; at a time of the recognition of the force touch, learning a malfunction situation for the force touch; and, based on sensitivity adjustment of the force touch being required, displaying a sensitivity correction UI on a display.
The display according to an example embodiment comprises a flexible display having a variable display region in which visual information is displayed.
The method according to an example embodiment comprises determining whether the force touch is recognized.
The method according to an example embodiment comprises receiving an electronic device form factor state and/or grip state from a sensor module.
The method according to an example embodiment comprises learning a touch recognition area according to the electronic device form factor state and/or grip state to dynamically adjust a threshold value of a force touch recognition model configured to determine skin compression, to determine whether the force touch is recognized.
The force touch recognition model according to an example embodiment is configured to receive touch data as input data and output force touch data, based on an artificial intelligence network.
The determining of whether the force touch is recognized according to an example embodiment comprises displaying a force touch situation UI on the display, based on the recognition of the force touch.
The displaying of the sensitivity correction UI on the display according to an example embodiment comprises, based on a user's touch release occurring in a state where the force touch situation UI is displayed or after the force touch is recognized, recording the malfunction situation, recognizing, based on the malfunction situation being repeated a configured N or more times, a state where sensitivity adjustment of the force touch is required, and displaying the sensitivity correction UI.
The determining of whether the force touch is recognized according to an example embodiment comprises observing whether a force touch occurs in a force touch observation interval within a configured long touch detection period, and performing a pressure calculation in a force touch determination interval to determine whether a force touch is recognized.
The observing of whether the force touch occurs in the force touch observation interval according to an example embodiment comprises, based on a variation of skin compression based on a touch area increasing over time, observing a corresponding touch as a force touch, and based on a variation of skin compression based on a touch area being maintained at a constant size over time, observing a corresponding touch as a long touch.
The force touch action function according to an example embodiment comprises an action function designatable for each application and/or a global action function applicable to an electronic device system.
The method according to an example embodiment further comprises: receiving information on an application being executed, wherein the receiving of the information on the application being executed comprises, based on the force touch being recognized in a state where the application is executed, reconfiguring, based on the information on the application, a screen layout of the force touch action function to dynamically change and display a screen.
The method according to an example embodiment further comprises receiving a user input for adjusting a touch controller displayed on the sensitivity correction UI and changing a touch sensitivity configuration according to a position of the touch controller.
While the disclosure has been illustrated and described with reference to various example embodiments, it will be understood that the various example embodiments are intended to be illustrative, not limiting. It will be further understood by those skilled in the art that various changes in form and detail may be made without departing from the true spirit and full scope of the disclosure, including the appended claims and their equivalents. It will also be understood that any of the embodiment(s) described herein may be used in conjunction with any other embodiment(s) described herein.
Number | Date | Country | Kind |
---|---|---|---|
10-2022-0129975 | Oct 2022 | KR | national |
10-2022-0178409 | Dec 2022 | KR | national |
This application is a continuation of International Application No. PCT/KR2023/013792 designating the United States, filed on Sep. 14, 2023, in the Korean Intellectual Property Receiving Office and claiming priority to Korean Patent Application Nos. 10-2022-0129975, filed on Oct. 11, 2022, and 10-2022-0178409, filed on Dec. 19, 2022, in the Korean Intellectual Property Office, the disclosures of each of which are incorporated by reference herein in their entireties.
 | Number | Date | Country
---|---|---|---
Parent | PCT/KR2023/013792 | Sep 2023 | WO
Child | 19098534 | | US