This application relates to the field of electronic devices, and more specifically, to a screen display control method and an electronic device.
As foldable electronic devices enter people's lives, split-screen use of electronic devices has also become common. When a foldable electronic device is in a folded state, the foldable electronic device may separately perform displaying in display areas on two sides of a folding line. Because the foldable electronic device has usable display areas on both sides, a user may change a used display area.
Currently, when the user changes from facing one display area to facing the other display area for viewing, content currently viewed by the user is still displayed in the original display area. This is inconvenient for the user to view and operate.
This application provides a screen display control method and an electronic device, so that when a user changes from facing one display area to facing the other display area for viewing, content currently viewed by the user can be displayed in the other display area. This is convenient for the user to view and operate.
According to a first aspect, this application provides a screen display control method. The method is performed by an electronic device provided with a foldable screen that is divided into a first area and a second area when the screen is folded, where the first area corresponds to a first sensor, and the second area corresponds to a second sensor. The method includes: displaying an interface of a first application in the first area; detecting first user identification information by using the first sensor; storing a correspondence between the first application and the first user identification information; and if the first user identification information is detected by using the second sensor, displaying the interface of the first application in the second area based on the correspondence between the first application and the first user identification information.
It should be understood that the first sensor and the second sensor may be any sensor that can detect user identification information, for example, may be a fingerprint sensor, an iris sensor, or a structured light sensor.
Disposing positions of the first sensor and the second sensor are not specifically limited in this application, provided that the first sensor can detect user identification information entered by a user in the first area and the second sensor can detect user identification information entered by a user in the second area.
For example, the first sensor may be disposed in the first area, and the second sensor may be disposed in the second area.
For another example, the first sensor and the second sensor may also be disposed on a same side, but are respectively configured to detect the user identification information entered by the user in the first area and the user identification information entered by the user in the second area.
The user identification information is information that can uniquely determine a user identity. For example, the user identification information may be face information of a user collected by the structured light sensor, fingerprint information of a user collected by the fingerprint sensor, or iris information of a user collected by the iris sensor.
In the foregoing technical solution, an application is bound to user identification information. In this way, when a screen facing a user changes, the electronic device may display an interface of an application bound to the user on a screen currently used by the user. This is convenient for the user to view and operate.
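For illustration only, the following minimal Java sketch shows one way such a correspondence could be kept and applied. The class and member names (ScreenFollowController, appByUserId, onUserDetected) are assumptions of this sketch rather than part of the method itself.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Minimal sketch: bind an application to the user identification information
// detected in the area where the application is displayed, then move the
// application's interface to whichever area later detects the same user.
public class ScreenFollowController {

    public enum Area { FIRST, SECOND }

    // Correspondence between user identification information (for example, a
    // face, fingerprint, or iris template identifier) and the bound application.
    private final Map<String, String> appByUserId = new HashMap<>();

    // Application currently displayed in each area.
    private final Map<Area, String> appByArea = new HashMap<>();

    // Display the application in the given area and store the correspondence.
    public void bind(Area area, String appName, String userId) {
        appByArea.put(area, appName);
        appByUserId.put(userId, appName);
    }

    // The sensor of `area` detected `userId`: if that user is bound to an
    // application shown elsewhere, display that application here instead.
    public void onUserDetected(Area area, String userId) {
        String boundApp = appByUserId.get(userId);
        if (boundApp != null && !Objects.equals(appByArea.get(area), boundApp)) {
            appByArea.put(area, boundApp);
            // The vacated area may be turned off or may display a desktop
            // interface, as in the possible implementation described below.
        }
    }
}
```

With such a model, detecting the first user identification information by using the second sensor reduces to a lookup in appByUserId followed by a display update.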
In a possible implementation, the method further includes: if the first user identification information is detected by using the second sensor, displaying the interface of the first application in the second area, and turning off the first area or displaying a desktop interface in the first area.
In the foregoing technical solution, when the screen facing the user changes, the interface of the application bound to the user is displayed on the screen currently used by the user, and the screen originally facing the user is turned off. This helps reduce power consumption of the electronic device.
In a possible implementation, the method further includes: displaying an interface of a second application in the second area; detecting second user identification information by using the second sensor; storing a correspondence between the second application and the second user identification information; and if the second user identification information is detected by using the first sensor but the first user identification information is not detected, displaying the interface of the second application in the first area based on the correspondence between the second application and the second user identification information.
In the foregoing technical solution, when a plurality of users use the electronic device in split-screen mode, an application is bound to user identification information. In this way, when a screen facing a user changes, the electronic device may display an interface of an application bound to the user on a screen currently used by the user. This is convenient for the user to view and operate.
In a possible implementation, the method further includes: if the second user identification information is detected by using the first sensor, and the first user identification information is detected by using the second sensor, displaying the interface of the first application in the second area, and displaying the interface of the second application in the first area.
In the foregoing technical solution, when a plurality of users use the electronic device in split-screen mode, an application is bound to user identification information. In this way, when a screen facing a user changes, the electronic device may display an interface of an application bound to the user on a screen currently used by the user. This is convenient for the user to view and operate.
In a possible implementation, the method further includes: turning off the first area if no user identification information is detected by using the first sensor, or if the user identification information detected by using the first sensor does not correspond to any application in the electronic device.
In the foregoing technical solution, when the first sensor does not detect any user identification information, or the detected user identification information does not correspond to any application in the electronic device, that is, when the user no longer uses the first area, the electronic device turns off the first area. This helps reduce power consumption of the electronic device.
In a possible implementation, the method further includes: if the first user identification information and the second user identification information are detected by using the first sensor, displaying the interface of the first application in the first area.
In other words, when two users using the electronic device in split-screen mode change from respectively using the first area and the second area to jointly using the first area, the interface of the first application is still displayed in the first area.
In a possible implementation, the method further includes: if the first user identification information and third user identification information are detected by using the first sensor, and the third user identification information does not correspond to any application in the electronic device, displaying the interface of the first application in the first area.
In the foregoing technical solution, when a new user uses the first area, because an original user still uses the first area, the interface of the first application is still displayed in the first area.
In a possible implementation, the method further includes: prompting the user to choose whether to store a correspondence between the first application and the third user identification information; detecting a first operation in the first area; and in response to the first operation, storing the correspondence between the first application and the third user identification information.
In a possible implementation, the method further includes: if the first user identification information is detected by using both the first sensor and the second sensor, displaying the interface of the second application in the first area, and displaying the interface of the first application in the second area; or displaying the interface of the first application in the first area, and displaying the interface of the second application in the second area.
That is, when user identification information is detected in both the first area and the second area, the electronic device may exchange content displayed in the first area and content displayed in the second area, or may not exchange content displayed in the first area and content displayed in the second area.
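A minimal sketch of this choice, assuming a simple map from display area to displayed application (the class name, the map, and the swapEnabled flag are all illustrative):

```java
import java.util.Map;

// Sketch: when the same user identification information is detected in both
// areas, the device may exchange the two areas' contents or leave them as-is.
final class SwapOnBothSides {
    enum Area { FIRST, SECOND }

    static void maybeSwap(Map<Area, String> appByArea, boolean swapEnabled) {
        if (!swapEnabled) {
            return; // leaving both areas unchanged is equally permitted
        }
        String first = appByArea.get(Area.FIRST);
        appByArea.put(Area.FIRST, appByArea.get(Area.SECOND));
        appByArea.put(Area.SECOND, first);
    }
}
```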
In a possible implementation, the method further includes: detecting a second operation in the first area; and in response to the second operation, closing the second application, and displaying, in the first area, a desktop interface or an interface displayed before the second application was started.
In a possible implementation, after the closing of the second application, the method further includes: detecting a third operation in the first area; in response to the third operation, starting a third application and displaying an interface of the third application in the first area; and storing a correspondence between the third application and the second user identification information.
According to the foregoing technical solution, even if a user changes a used application, “a screen change following a user” can still be implemented. This is convenient for the user to view and operate.
In a possible implementation, the first user identification information and the second user identification information include face information, fingerprint information, or iris information.
In a possible implementation, before the detecting first user identification information by using the first sensor, the method further includes: prompting the user to enter user identification information corresponding to the first application.
In a possible implementation, the first application is an application displayed in the first area before the first user identification information is detected by using the first sensor, or an application selected by the user from at least two applications currently displayed in the first area.
In a possible implementation, before the detecting first user identification information by using the first sensor, the method further includes: determining that the electronic device is in a folded form or a support form.
According to a second aspect, this application provides a screen display control apparatus. The apparatus is included in an electronic device, and the apparatus has a function of implementing behavior of the electronic device in the foregoing aspect and the possible implementations of the foregoing aspect. The function may be implemented by hardware, or may be implemented by hardware by executing corresponding software. The hardware or the software includes one or more modules or units corresponding to the foregoing function, for example, a display module or unit and a detection module or unit.
According to a third aspect, this application provides an electronic device, including a foldable screen, one or more sensors, one or more processors, one or more memories, and one or more computer programs. The processor is coupled to the sensor, the foldable screen, and the memory. The one or more computer programs are stored in the memory. When the electronic device runs, the processor executes the one or more computer programs stored in the memory, so that the electronic device performs the screen display control method according to any possible implementation of the foregoing aspect.
According to a fourth aspect, this application provides a computer storage medium, including computer instructions. When the computer instructions are run on an electronic device, the electronic device is enabled to perform the screen display control method according to any possible implementation of the foregoing aspect.
According to a fifth aspect, this application provides a computer program product. When the computer program product is run on an electronic device, the electronic device is enabled to perform the screen display control method according to any possible implementation of the foregoing aspect.
The following describes implementations of embodiments in detail with reference to accompanying drawings. In descriptions of embodiments of this application, "/" means "or" unless otherwise specified. For example, A/B may represent A or B. In this specification, "and/or" describes only an association relationship between associated objects and represents that three relationships may exist. For example, A and/or B may represent the following three cases: only A exists, both A and B exist, and only B exists. In addition, in the descriptions in embodiments of this application, "a plurality of" means two or more than two.
The following terms “first” and “second” are merely intended for description, and shall not be understood as an indication or implication of relative importance or implicit indication of a quantity of indicated technical features. Therefore, a feature limited by “first” or “second” may explicitly or implicitly include one or more features.
A screen display control method provided in embodiments of this application may be performed by an electronic device having a flexible screen, such as a mobile phone, a tablet computer, a notebook computer, an ultra-mobile personal computer (UMPC), a handheld computer, a netbook, a personal digital assistant (PDA), a wearable device, or a virtual reality device. This is not limited in embodiments of this application.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a radio frequency module 150, a communications module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display 194, a subscriber identification module (SIM) card interface 195, and the like.
It may be understood that the structure shown in embodiments of this application does not constitute a specific limitation on the electronic device 100. In some other embodiments of this application, the electronic device 100 may include more or fewer components than the components shown in the figure, some components may be combined, or some components may be split, or different component arrangements may be used. The components shown in the figure may be implemented through hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). The controller may be a nerve center and a command center of the electronic device 100. The controller may generate an operation control signal based on instruction operation code and a time sequence signal, to complete control of instruction reading and instruction execution. The memory may be further disposed in the processor 110, and is configured to store instructions and data. In some embodiments, the processor 110 may include one or more interfaces.
The charging management module 140 is configured to receive a charging input from a charger.
The power management module 141 is configured to connect to the battery 142, the charging management module 140, and the processor 110. The power management module 141 receives an input of the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, an external memory, the display 194, the camera 193, the communications module 160, and the like.
A wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the radio frequency module 150, the communications module 160, the modem processor, the baseband processor, and the like.
The antenna 1 and the antenna 2 are configured to transmit and receive electromagnetic wave signals. In some other embodiments, the antenna may be used in combination with a tuning switch. The radio frequency module 150 may provide a solution that is applied to the electronic device 100 and that includes wireless communications technologies such as 2G, 3G, 4G, and 5G. The modem processor may include a modulator and a demodulator. The modulator is configured to modulate a to-be-sent low-frequency baseband signal into a medium-high frequency signal. The demodulator is configured to demodulate a received electromagnetic wave signal into a low-frequency baseband signal. Then, the demodulator transmits the low-frequency baseband signal obtained through demodulation to the baseband processor for processing. The communications module 160 may provide a wireless communication solution that is applied to the electronic device 100 and that includes a wireless local area network (WLAN) (for example, a Wi-Fi network), Bluetooth (BT), a global navigation satellite system (GNSS), frequency modulation (FM), a near-field communication (NFC) technology, and an infrared (IR) technology. The communications module 160 may be one or more components integrating at least one communication processor module. The communications module 160 receives an electromagnetic wave through the antenna 2, performs frequency modulation and filtering processing on an electromagnetic wave signal, and sends a processed signal to the processor 110. The communications module 160 may further receive a to-be-sent signal from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into an electromagnetic wave for radiation through the antenna 2.
In some embodiments, the antenna 1 of the electronic device 100 is coupled to the radio frequency module 150, and the antenna 2 is coupled to the communications module 160, so that the electronic device 100 may communicate with a network and another device by using a wireless communications technology. The wireless communications technology may include a Global System for Mobile Communications (GSM), a General Packet Radio Service (GPRS), code-division multiple access (CDMA), wideband code-division multiple access (WCDMA), time-division code-division multiple access (TD-SCDMA), Long-Term Evolution (LTE), new radio (NR) in a 5th generation (5G) mobile communications system, a future mobile communications system, BT, a GNSS, a WLAN, NFC, FM, an IR technology, and/or the like. The GNSS may include a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The electronic device 100 implements a display function by using the GPU, the display 194, the application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is configured to perform mathematical and geometric calculation, and render an image. The processor 110 may include one or more GPUs, which execute program instructions to generate or change display information. Optionally, the display 194 may include a display panel and a touch panel. The display panel is configured to output display content to the user, and the touch panel is configured to receive a touch event entered by the user on the flexible display 194.
The display 194 is configured to display an image, a video, or the like. The display 194 includes a display panel. The display panel may be a liquid-crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini-LED, a micro-LED, a micro-OLED, a quantum dot light emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include one or more displays 194.
In some embodiments, when the display panel is made of a material such as an OLED, an AMOLED, or an FLED, the display 194 shown in
For an electronic device configured with a foldable display, the foldable display of the electronic device may be switched between a small screen in a folded form and a large screen in an expanded form at any time.
A mobile phone is used as an example. As shown in
As shown in
It should be noted that, after the user folds the flexible display 194 along the folding line AB, the first area and the second area may be disposed opposite to each other, or the first area and the second area may be disposed back to back. As shown in
In some embodiments, as shown in
It should be understood that the folding line AB may alternatively be distributed horizontally, and the display 194 may be folded up and down. In other words, the first area and the second area of the display 194 may correspond to upper and lower sides of the middle folding line AB. In this application, an example in which the first area and the second area are distributed left and right is used for description.
For example, as shown in
Embodiments of this application provide a method for controlling display on the first area and the second area. The third area may be independently used for display, or may be used for display following the first area or the second area, or may not be used for display. This is not specifically limited in embodiments of this application.
Because the display 194 can be folded, a physical form of the electronic device may also change accordingly. For example, when the display 194 is fully expanded, a physical form of the electronic device may be referred to as an expanded form. When a part of an area of the display 194 is folded, a physical form of the electronic device may be referred to as a folded form. It may be understood that, in the following embodiments of this application, a physical form of the display 194 may refer to a physical form of the electronic device.
After the user folds the display 194, there is an included angle between the first area and the second area that are obtained by division.
In some embodiments, based on a size of the included angle between the first area and the second area, the display 194 of the electronic device may include at least three physical forms: an expanded form, a folded form, and a half-folded form (or referred to as a support form) in which the display is folded at a specific angle.
Expanded form: When the display 194 is in the expanded form, the display 194 may be shown in
Folded form: When the display 194 is in the folded form, the display 194 may be shown in
Support form: When the display 194 is in the support form, the display 194 may be shown in
In addition, the support form of the display 194 may further include an unstable support form and a stable support form. In the stable support form, the second angle β satisfies a4 ≤ β ≤ a3, where a4 is less than or equal to 90 degrees, and a3 is greater than or equal to 90 degrees and less than 180 degrees. Any support form of the display 194 other than the stable support form is the unstable support form.
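As a sketch of how these forms might be distinguished in code, the following Java fragment classifies the form from the included angle β. Only the stable-support bounds (a4 ≤ 90 ≤ a3 < 180) come from the text; the concrete threshold values and the expanded/folded cutoffs are assumptions for illustration.

```java
// Hypothetical classification of the physical form from the included angle
// beta, in degrees, between the first area and the second area.
enum PanelForm { EXPANDED, FOLDED, STABLE_SUPPORT, UNSTABLE_SUPPORT }

final class FormClassifier {
    static final double A3 = 150.0;           // assumed upper bound of the stable support form
    static final double A4 = 60.0;            // assumed lower bound of the stable support form
    static final double FOLDED_MAX = 15.0;    // assumed: screen nearly closed
    static final double EXPANDED_MIN = 170.0; // assumed: screen nearly flat

    static PanelForm classify(double beta) {
        if (beta >= EXPANDED_MIN) return PanelForm.EXPANDED;
        if (beta <= FOLDED_MAX) return PanelForm.FOLDED;
        return (beta >= A4 && beta <= A3) ? PanelForm.STABLE_SUPPORT
                                          : PanelForm.UNSTABLE_SUPPORT;
    }
}
```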
In some other embodiments, a physical form of the display 194 may be divided into only a folded form and an expanded form. As shown in
It should be understood that division of physical forms of the display 194 and a definition of each physical form are not limited in this application.
The sensor module 180 may include one or more of a pressure sensor, a gyroscope sensor, a barometric pressure sensor, a magnetic sensor (for example, a Hall effect sensor), an acceleration sensor, a distance sensor, an optical proximity sensor, a fingerprint sensor, a structured light sensor, an iris sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like. This is not limited in embodiments of this application.
The pressure sensor is configured to sense a pressure signal, and may convert the pressure signal into an electrical signal. When a touch operation is performed on the display 194, the electronic device 100 detects the strength of the touch operation by using the pressure sensor. The electronic device 100 may also calculate a touch position based on a detection signal of the pressure sensor.
The gyroscope sensor may be configured to determine a motion posture of the electronic device 100. In embodiments of this application, a gyroscope sensor on each screen may also determine the included angle between the first area and the second area after the electronic device 100 is folded, to determine a physical form of the electronic device 100.
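The text leaves the angle computation itself open. One plausible approach, sketched below under the assumption that sensor fusion on each half yields a screen-normal vector, is to take the angle between the two normals; under an inward-folding convention, the included angle is 180 degrees minus that value.

```java
// Hedged sketch: estimate the included angle from the screen normals n1 and
// n2 of the two halves (3-vectors assumed to come from per-half gyroscope
// and accelerometer fusion).
final class HingeAngle {
    static double angleBetweenNormalsDegrees(double[] n1, double[] n2) {
        double dot = n1[0] * n2[0] + n1[1] * n2[1] + n1[2] * n2[2];
        double m1 = Math.sqrt(n1[0] * n1[0] + n1[1] * n1[1] + n1[2] * n1[2]);
        double m2 = Math.sqrt(n2[0] * n2[0] + n2[1] * n2[1] + n2[2] * n2[2]);
        double cos = Math.max(-1.0, Math.min(1.0, dot / (m1 * m2))); // clamp for acos
        return Math.toDegrees(Math.acos(cos));
    }

    // Convention assumed here: when the screen is flat, the normals coincide
    // (0 degrees) and the included angle is 180 degrees; when folded shut
    // inward, the normals oppose each other and the included angle is 0.
    static double includedAngleDegrees(double[] n1, double[] n2) {
        return 180.0 - angleBetweenNormalsDegrees(n1, n2);
    }
}
```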
The fingerprint sensor is configured to collect a fingerprint. The electronic device 100 may use a feature of the collected fingerprint to implement fingerprint-based unlocking, application lock access, fingerprint-based photographing, fingerprint-based call answering, and the like. In embodiments of this application, the electronic device 100 may collect fingerprint information of users in the first area and the second area by using fingerprint sensors, to determine a user that currently uses a screen on this side.
The structured light sensor may be configured to collect face information of the user. The electronic device 100 may use the collected face information to implement face-based unlocking, application lock access, photo beautification, and the like. In embodiments of this application, the electronic device 100 may collect face information of users in the first area and the second area by using structured light sensors, to determine a user that currently uses a screen on this side.
The iris sensor may be configured to collect iris information of the user. The electronic device 100 may use the collected iris information to implement iris-based unlocking, application lock access, iris-based photographing, and the like. In embodiments of this application, the electronic device 100 may collect iris information of users in the first area and the second area by using iris sensors, to determine a user that currently uses a screen on this side.
The touch sensor is also referred to as a "touch panel". The touch sensor may be disposed on the display 194, and the touch sensor and the display 194 form a touchscreen. The touch sensor is configured to detect a touch operation performed on or near the touch sensor. The touch sensor may transfer the detected touch operation to the application processor, to determine a type of a touch event. A visual output related to the touch operation may be provided on the display 194. In some other embodiments, the touch sensor may alternatively be disposed on a surface of the electronic device 100, at a position different from that of the display 194.
It should be understood that the foregoing merely shows some sensors in the electronic device 100 and functions of the sensors. The electronic device may include more or fewer sensors. For example, the electronic device 100 may further include an acceleration sensor, a gravity sensor, and the like. In embodiments of this application, a foldable electronic device may include a first area and a second area that form a particular angle in a folded form. The electronic device may determine a folding direction of the electronic device and an included angle between the first area and the second area after folding by using an acceleration sensor and a gravity sensor.
The electronic device 100 can implement a photographing function by using the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is configured to process data fed back by the camera 193. In some embodiments, the ISP may be disposed in the camera 193. The camera 193 is configured to capture a static image or a video. In some embodiments, the electronic device 100 may include one or N cameras 193, where N is a positive integer greater than 1. The digital signal processor is configured to process a digital signal, and may process another digital signal in addition to a digital image signal.
The video codec is configured to compress or decompress a digital video. The electronic device 100 may support one or more video codecs.
The external memory interface 120 may be configured to connect to an external storage card such as a micro SD card, to extend a storage capability of the electronic device 100. The internal memory 121 may be configured to store computer-executable program code. The executable program code includes instructions.
The processor 110 performs various functional applications and data processing of the electronic device 100 by running the instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application required by at least one function (for example, a sound playing function and an image playing function), and the like. The data storage area may store data (such as audio data and a phone book) created during use of the electronic device 100. In addition, the internal memory 121 may include a high-speed random access memory, or may include a nonvolatile memory, for example, at least one magnetic disk storage device, a flash memory, or a universal flash storage (UFS).
The electronic device 100 may implement an audio function such as music playing and recording through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and the like.
The button 190 includes a power button, a volume button, and the like. The button 190 may be a mechanical button, or may be a touch button. The electronic device 100 may receive a button input, and generate a button signal input related to user setting and function control of the electronic device 100.
The motor 191 may generate a vibration prompt. The motor 191 may be configured to produce an incoming call vibration prompt and a touch vibration feedback.
The indicator 192 may be an indicator light, and may be configured to indicate a charging status and a power change, or may be configured to indicate a message, a missed call, a notification, and the like.
The SIM card interface 195 is configured to connect to a SIM card. The SIM card may be inserted into the SIM card interface 195 or removed from the SIM card interface 195, to implement contact with or separation from the electronic device 100. The electronic device 100 may support one or N SIM card interfaces, where N is a positive integer greater than 1. In some embodiments, the electronic device 100 uses an eSIM, namely, an embedded SIM card. The eSIM card may be embedded in the electronic device 100, and cannot be separated from the electronic device 100.
A layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture may be used for a software system of the electronic device 100. It should be understood that the screen display control method provided in embodiments of this application is applicable to systems such as Android and iOS, and the method has no dependency on a system platform of a device. In embodiments of this application, an Android system with a layered architecture is used as an example to describe a software structure of the electronic device 100.
In the layered architecture, software is divided into several layers, and each layer has a clear role and task. The layers communicate with each other through a software interface. In some embodiments, an Android system is divided into four layers from top to bottom: an application layer, an application framework layer, an Android runtime and system library, and a kernel layer.
The application layer may include a series of application packages. As shown in
The application framework layer provides an application programming interface (API) and a programming framework for an application at the application layer. The application framework layer includes some predefined functions. As shown in
The keyguard service may be used to obtain, from an underlying display system, user identification information detected on the first area side and user identification information detected on the second area side. Further, the keyguard service may generate or update, based on the obtained user identification information, a binding relationship stored in a directory of the keyguard service, and determine specific content displayed in the first area and the second area. Further, the keyguard service may display, in the first area and the second area by using the window manager, content corresponding to the user identification information detected on the sides.
The binding relationship may be a correspondence between user identification information, screen content, and a display area. The user identification information is information that can uniquely determine a user identity. For example, the user identification information may be face information of a user collected by the structured light sensor, fingerprint information of a user collected by the fingerprint sensor, or iris information of a user collected by the iris sensor.
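For illustration, such a binding relationship could be represented by a small record; the field names below are assumptions of this sketch, mirroring the three elements described above.

```java
// Hypothetical representation of one binding relationship maintained by the
// keyguard service: user identification information, screen content, and
// display area.
final class BindingRelationship {
    final String userIdInfo;    // face, fingerprint, or iris template identifier
    final String screenContent; // for example, "application 1 interface"
    final String displayArea;   // for example, "first area" or "second area"

    BindingRelationship(String userIdInfo, String screenContent, String displayArea) {
        this.userIdInfo = userIdInfo;
        this.screenContent = screenContent;
        this.displayArea = displayArea;
    }
}
```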
The system library, the kernel layer, and the like below the application framework layer may be referred to as an underlying system. The underlying system includes the underlying display system configured to provide a display service. For example, the underlying display system includes a display driver at the kernel layer and a surface manager in the system library.
The Android runtime includes a core library and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
The core library includes two parts: a function that needs to be called in the Java language, and a core library of Android.
The application layer and the application framework layer run on the virtual machine. The virtual machine executes Java files of the application layer and the application framework layer as binary files. The virtual machine is configured to implement functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of function modules, for example, a surface manager, a media library, a three-dimensional graphics processing library (for example, OpenGL for Embedded Systems (OpenGL ES)), and a two-dimensional (2D) graphics engine (for example, Scalable Graphics Library (SGL)).
The surface manager is configured to manage a display subsystem and provide fusion of 2D and three-dimensional (3D) layers for a plurality of applications.
The media library supports playing and recording of a plurality of commonly used audio and video formats, static image files, and the like. The media library may support a plurality of audio and video coding formats such as MPEG-4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphics processing library is configured to implement three-dimensional graphics drawing, image rendering, compositing, layer processing, and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer includes at least a display driver, a camera driver, an audio driver, a sensor driver, and the like. This is not limited in embodiments of this application.
For ease of understanding, in the following embodiments of this application, a mobile phone having the structures shown in
As described in the background, because a foldable electronic device has usable areas on two sides, a user may change a used display area. Currently, when the user changes from facing one display area to facing the other display area for viewing, content currently viewed by the user is still displayed in the original display area. This is inconvenient for the user to view and operate.
In embodiments of this application, when the electronic device is in a support form or a folded form, a display includes at least two areas, and the two areas may display content of different applications. The electronic device may bind applications corresponding to the two areas to user identification information collected by using the sensor module, to display, in an area currently used by the user, content that matches the user. This implements “a screen change following a user”, and is convenient for the user to view and operate.
For example, when the electronic device is in the folded form, the display is divided into a first area and a second area shown in
For another example, when the electronic device is in the support form, the display is divided into a first area and a second area shown in
The technical solutions in embodiments of this application may be used in a scenario in which the electronic device is used in split-screen mode, for example, a scenario in which the electronic device is in the folded form or the support form. The following describes the technical solutions in embodiments of this application in detail with reference to accompanying drawings.
In some embodiments, if a user wants to use a screen switching function, the user needs to set an electronic device in advance.
It should be understood that the foregoing interfaces may include more or fewer setting icons or options. This is not specifically limited in embodiments of this application.
In some embodiments, the user may simultaneously enable a plurality of the screen switching manners shown in
It may be understood that the interfaces shown in
It may be understood that the user may perform the setting operation before using the electronic device in split-screen mode, or may perform the setting operation on a screen on one side when using the electronic device in split-screen mode. This is not specifically limited in embodiments of this application.
When the electronic device is in the support form or the folded form, the electronic device may automatically start a screen switching process. A manner of determining a form of the electronic device by the electronic device is not specifically limited in embodiments of this application. For example, the electronic device may determine the form of the electronic device based on an included angle between the first area and the second area.
After starting the screen switching process, the electronic device determines whether the screen switching function is enabled.
When determining that the screen switching function of the electronic device is enabled, the electronic device may pop up a selection interface in the first area and/or the second area of the display, to prompt the user to perform screen binding.
In some embodiments, the electronic device may automatically pop up a selection interface, to prompt the user to perform screen binding.
In an example, when detecting that only one application is displayed in the first area, the electronic device automatically pops up a selection interface in the first area; and/or when detecting that only one application is displayed in the second area, the electronic device automatically pops up a selection interface in the second area, to prompt the user to perform screen binding. For example, when the electronic device detects that an application 1 is displayed in full screen in the first area, the electronic device automatically pops up a selection interface in the first area; and/or when the electronic device detects that an application 2 is displayed in full screen in the second area, the electronic device automatically pops up a selection interface in the second area. For example, when the application 1 is displayed in full screen in the first area, the electronic device may pop up a selection interface shown in
Similar to the first area, when the application 2 is displayed in full screen in the second area, a user 2 facing the second area may also be prompted to perform screen binding. The electronic device may generate a binding relationship shown in the second row in Table 1. The second row in Table 1 indicates that second user identification information is detected in the second area, the second user identification information is bound to the application 2, and the application 2 is an application currently displayed in full screen in the second area.
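Using the hypothetical BindingRelationship sketch from the software-structure description above, the two rows of Table 1 would look roughly as follows; the string values are illustrative placeholders.

```java
import java.util.List;

final class Table1Example {
    // Illustrative construction of the two bindings described for Table 1.
    static List<BindingRelationship> table1() {
        return List.of(
                new BindingRelationship("first user identification information",
                        "application 1 interface", "first area"),
                new BindingRelationship("second user identification information",
                        "application 2 interface", "second area"));
    }
}
```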
After the foregoing binding process is completed, the first area and/or the second area may display a prompt window shown in
It should be noted that the electronic device may directly display the interface shown in
In another example, when determining that the screen switching function of the electronic device is enabled, the electronic device may automatically pop up the selection interface shown in
In some other embodiments, after receiving a binding instruction of the user, the electronic device may pop up a selection interface, to prompt the user to perform screen binding.
In an example, when determining that the electronic device starts the screen switching process and enables the screen switching function, the electronic device may display a binding button in the first area and/or the second area, and the user may indicate, by using the button, the electronic device to start binding. For example, the electronic device may display a binding start button shown in
For example, after starting, in the first area, an application 1 to be bound, the user 1 may tap the binding start button. After receiving a binding start instruction from the user 1, the electronic device may pop up a selection interface shown in
For another example, when the electronic device has a plurality of bindable applications in the first area, after receiving a binding start instruction from the user, the electronic device may pop up a selection interface shown in
It should be noted that the foregoing screen binding process may alternatively be in another sequence. This is not specifically limited in embodiments of this application. For example, the electronic device may further prompt the user to enter user identification information, and then prompt the user to select a to-be-bound application.
It should be understood that forms of interfaces, windows, prompts, and binding relationships shown in
The following describes a screen switching method in embodiments of this application by using the electronic device in the support form as an example.
For example, the electronic device controls, based on collected face information, switching of display interfaces of areas on two sides of the display.
After the electronic device enters the support form, the screen switching process is started, and the user 1 may be prompted, in a manner shown in
As shown in
As shown in
The electronic device may control, based on the updated binding relationship, the second area to display the interface of the application 1. In this case, the first area may enter a screen-off state, or may continue to display another interface, for example, a desktop interface. This is not specifically limited in embodiments of this application.
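A minimal sketch of this switch, built on the assumed BindingRelationship model above; showInterface and turnOffOrShowDesktop are placeholders standing in for window-manager operations, and the bindings list is assumed to be mutable.

```java
import java.util.List;

// Sketch: when a sensor reports face information that is bound to a
// different area, update the binding's display area, move the interface,
// and turn off (or show a desktop on) the vacated area.
final class FaceFollowSwitcher {
    void onFaceDetected(String faceId, String detectingArea,
                        List<BindingRelationship> bindings) {
        for (int i = 0; i < bindings.size(); i++) {
            BindingRelationship b = bindings.get(i);
            if (b.userIdInfo.equals(faceId) && !b.displayArea.equals(detectingArea)) {
                String vacated = b.displayArea;
                bindings.set(i, new BindingRelationship(
                        b.userIdInfo, b.screenContent, detectingArea));
                showInterface(detectingArea, b.screenContent);
                turnOffOrShowDesktop(vacated);
            }
        }
    }

    // Placeholders for the underlying display operations.
    void showInterface(String area, String content) {
        System.out.println(area + " -> " + content);
    }

    void turnOffOrShowDesktop(String area) {
        System.out.println(area + " -> screen-off or desktop");
    }
}
```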
For example, the electronic device controls, based on collected fingerprint information, switching of display interfaces of areas on two sides of the display.
After the electronic device enters the support form, the screen switching process is started, and the user 1 may be prompted, in a manner shown in
As shown in
After collecting fingerprint information of the user 1, the electronic device may generate a binding relationship similar to that in Table 2, except that the user identification information is the fingerprint information.
As shown in
Similar to controlling, based on the collected face information, switching of display interfaces of display areas on two sides of the display, the electronic device may further control, based on collected iris information, switching of display interfaces of areas on two sides of the display. Specifically, as shown in
In this way, when a location of the user relative to the electronic device changes, the electronic device may display, on a screen currently used by the user, content associated with the user, and the user does not need to perform an additional operation to perform screen switching. This is convenient for the user to view and operate.
In addition, it should be understood that when the same user identification information is detected in both the first area and the second area, the electronic device may switch display of the first area and the second area, or may not switch display of the first area and the second area. This is not specifically limited in embodiments of this application. For example, the user may switch display of the first area and display of the second area of the electronic device by using two fingers whose fingerprint information has been entered in advance.
In this application, when the user performs an operation in the first area or the second area to close an application bound to the user, the electronic device may, after detecting the user's operation of closing the application, delete the binding relationship shown in Table 2 or Table 3. In this case, the electronic device may control the first area or the second area to display an interface displayed before the user opened the application 1, or the electronic device may control the first area or the second area to display a specific interface, for example, a desktop interface.
Further, the user 1 opens the application 2.
In some embodiments, after the electronic device detects the opening operation, a binding prompt may pop up in the first area or the second area, to prompt the user 1 to perform screen binding again. For example, a prompt window shown in
In some other embodiments, the electronic device may automatically perform screen binding again without prompting the user, to generate the binding relationship shown in Table 4 or Table 5.
In this way, even if the user changes an application, “a screen change following a user” can still be implemented, to improve viewing and operation experience of the user.
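A sketch of this re-binding, again on the assumed BindingRelationship model (the class and method names are illustrative):

```java
import java.util.List;

// Sketch: after the user closes one bound application and opens another,
// drop the stale binding and record the new one (the Table 4 / Table 5
// update described above). The bindings list must be mutable.
final class Rebinder {
    static void onApplicationOpened(List<BindingRelationship> bindings,
                                    String userIdInfo, String newContent,
                                    String area) {
        bindings.removeIf(b -> b.userIdInfo.equals(userIdInfo)); // delete old binding
        bindings.add(new BindingRelationship(userIdInfo, newContent, area));
    }
}
```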
For example, the electronic device controls, based on collected face information, switching of display interfaces of areas on two sides of the display.
As shown in
After the electronic device enters the support form, the screen switching process is started, and a user 1 and a user 2 may be prompted, in a manner shown in
As shown in
The electronic device updates the binding relationships based on a status of the collected face information.
In a possible case shown in
Based on the updated binding relationships, the electronic device may control the first area to display the interface of the application 2, and the second area to display the interface of the application 1.
In another possible case shown in
In some embodiments, a selection interface may pop up in the second area, to prompt the user to perform screen binding again. For example, a selection interface shown in
The electronic device may control, based on the updated binding relationships, the second area to display the interface of the application 2. In this case, the first area may enter a screen-off state, or continue to display the interface of the application 1. This is not limited in embodiments of this application.
In some other embodiments, because the first face information has been bound to the interface of the application 1, when the user 1 faces the second area, the electronic device may determine that the first face information has a binding relationship, and does not bind the first face information again. That is, the electronic device does not update the binding relationships, and the binding relationships are still those shown in Table 6. In this way, the electronic device still controls, based on the binding relationships, the second area to display the interface of the application 2. Optionally, because the electronic device does not detect the first face information in the first area, the electronic device may pause and exit a process of an application corresponding to the first area, and control the first area to enter the screen-off state. Optionally, the electronic device may also control the first area to continue to display the interface of the application 1.
That is, if a current display area is bound to a user, when a new user appears on the side of the display area, even if the new user is bound to an interface of an application, display content of the display area is not switched to the interface of the application bound to the new user. In other words, the content displayed in the display area does not change.
In still another possible case shown in
Based on the updated binding relationships, the electronic device may control the first area to display the interface of the application 1, and the second area to display the interface of the application 2.
When a new user (for example, the user 1 in
Similarly, as shown in
Similarly, as shown in
It should be understood that forms of interfaces, windows, and prompts shown in
In this way, when a plurality of users use the foldable electronic device in split-screen mode and a location of a user relative to the electronic device changes, the electronic device may display, on a screen currently used by the user, content associated with the user, and the user does not need to perform an additional operation to switch screens. This improves viewing and operation experience of the user.
It may be understood that, that the location of the user changes in this embodiment of this application means that a location of the user relative to the electronic device changes. In other words, the location of the user may change, or a location or a direction of the electronic device may change. For example, that the user 1 moves from the side of the first area to the side of the second area may be that the user 1 changes a location, or may be that the user 1 rotates the electronic device, so that the second area faces the user 1. For another example, that the user 1 and the user 2 exchange locations may be that the user 1 moves to a location of the user 2 and the user 2 moves to a location of the user 1, or may be that the user rotates the electronic device, so that the first area faces the user 2 and the second area faces the user 1.
In this embodiment of this application, the electronic device may further determine a status of the user, such as “present” or “absent”, based on whether a sensor collects user identification information, to control screen display.
For example, the electronic device controls, based on collected face information, switching of display interfaces of areas on two sides of the display.
After the electronic device enters the support form, the screen switching process is started, and the user 1 may be prompted, in a manner shown in
As shown in
As shown in
There are many manners in which the electronic device determines that the user 1 is absent. For example, when the first face information is not detected in the first area, it is determined that the user 1 is absent. For another example, when the first face information is not detected in the first area within a preset period of time, it is determined that the user 1 is absent. For another example, when the first face information is detected in neither the first area nor the second area, it is determined that the user 1 is absent. For another example, when the first face information is detected in neither the first area nor the second area within a preset period of time, it is determined that the user 1 is absent. For another example, when a quantity of periods in which the first face information is not detected in the first area is greater than or equal to a preset value, it is determined that the user 1 is absent. For another example, when a quantity of periods in which the first face information is detected in neither the first area nor the second area is greater than or equal to a preset value, it is determined that the user 1 is absent. For another example, when no face information is detected in the first area, or the detected face information does not correspond to any application in the electronic device, it is determined that the user 1 is absent.
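One of these tests, the counted-periods variant, is easy to sketch; the period length and the preset value are implementation choices assumed here.

```java
// Sketch: declare the user absent once the bound face has gone undetected
// for a preset number of consecutive detection periods.
final class AbsenceDetector {
    private final int presetPeriods; // threshold, an implementation choice
    private int missedPeriods = 0;

    AbsenceDetector(int presetPeriods) {
        this.presetPeriods = presetPeriods;
    }

    // Called once per detection period; returns true when the user should
    // be treated as "absent".
    boolean update(boolean faceDetectedThisPeriod) {
        missedPeriods = faceDetectedThisPeriod ? 0 : missedPeriods + 1;
        return missedPeriods >= presetPeriods;
    }
}
```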
When the electronic device determines that the user 1 leaves, the electronic device updates the user status to “absent”, as shown in Table 11.
When the user status is “absent”, the electronic device may control the first area to turn off the screen, and pause a process of the application corresponding to the first area. For example, when the user 1 plays a video by using the electronic device, the electronic device pauses video playing, and controls the first area to turn off the screen.
The electronic device continues to detect the face information of the user 1.
In a possible case shown in
The electronic device may turn on and unlock the first area, and continue each process of the application corresponding to the first area. For example, when the user 1 plays a video by using the electronic device, the electronic device turns on the first area, and continues to play the video.
In another possible case shown in
The electronic device may turn on and unlock the second area, and continue each process of an application corresponding to the second area. For example, when the user plays a video by using the electronic device, the electronic device turns on the second area, and continues to play the video.
When the electronic device is set to a fingerprint-based screen switching manner or an iris-based screen switching manner, a manner of determining whether a user is present or absent and a screen display manner of the electronic device are similar to those shown in
In this way, when the user is absent, the electronic device may turn off the screen. This helps reduce power consumption of the electronic device. When the user is present again, content previously viewed by the user is automatically displayed, and the user does not need to perform an additional operation. This helps improve viewing and operation experience of the user.
It should be further understood that a disposing position of the sensor in
The foregoing describes, by using
With reference to
It should be understood that the first sensor and the second sensor may be any sensor that can detect user identification information, for example, may be a fingerprint sensor, an iris sensor, or a structured light sensor.
Disposing positions of the first sensor and the second sensor are not specifically limited in this application, provided that the first sensor can detect user identification information entered by a user in the first area and the second sensor can detect user identification information entered by a user in the second area.
For example, the first sensor may be disposed in the first area, and the second sensor may be disposed in the second area.
For another example, the first sensor and the second sensor may also be disposed on a same side, but are respectively configured to detect the user identification information entered by the user in the first area and the user identification information entered by the user in the second area.
The user identification information is information that can uniquely determine a user identity. For example, the user identification information may be face information of a user collected by the structured light sensor, fingerprint information of a user collected by the fingerprint sensor, or iris information of a user collected by the iris sensor.
The method 2000 includes the following steps.
2010: Display an interface of a first application in the first area.
For example, as shown in
For example, the first application is an application displayed in the first area before first user identification information is detected by using the first sensor.
For example, the first application is an application selected by the user from at least two applications currently displayed in the first area.
2020: Detect the first user identification information by using the first sensor.
Examples of detecting the first user identification information by using the first sensor are shown in the accompanying figures.
Optionally, before the first user identification information is detected by using the first sensor, it is determined that the electronic device is in a folded form or a support form, and a screen switching process is started.
Optionally, before the first user identification information is detected by using the first sensor, the electronic device is set to enable a screen switching function.
An example of enabling the screen switching function is shown in the accompanying figures.
Optionally, when the screen switching process is started and it is determined that the screen switching function of the electronic device is enabled, the electronic device may pop up a selection interface in the first area of the display, to prompt the user to perform screen binding.
An example of the selection interface used to prompt the user to perform screen binding is shown in the accompanying figures.
2030: Store the correspondence between the first application and the first user identification information.
In some scenarios, the second area is also used by a user. For the second area, a correspondence between a second application and second user identification information may also be generated and stored by using steps similar to the foregoing steps, and details are not described herein again.
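A minimal sketch of storing such correspondences, assuming the detected identification information has already been reduced to a comparable string; the `BindingStore` class and its method names are illustrative, not part of this application:

```python
class BindingStore:
    """Illustrative sketch: store correspondences between user
    identification information and applications (cf. step 2030)."""

    def __init__(self):
        self._bindings = {}  # user identification -> application

    def bind(self, user_id: str, app: str) -> None:
        """Store (or overwrite) the correspondence for one user."""
        self._bindings[user_id] = app

    def app_for(self, user_id: str):
        """Return the bound application, or None when the detected
        information corresponds to no application."""
        return self._bindings.get(user_id)


# Usage: bind user 1 in the first area and user 2 in the second area.
store = BindingStore()
store.bind("user1_face", "first_application")
store.bind("user2_face", "second_application")
```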
Because a correspondence between an application and user identification information has been stored, when a screen facing a user changes, based on user identification information detected by the first sensor and the second sensor, an interface of an application corresponding to the user can be displayed on a screen currently used by the user.
2040: Control display of the first area and the second area based on the user identification information detected by the first sensor and the second sensor.
Examples of controlling display of the first area and the second area in different cases are shown in the accompanying figures.
For example, if the first user identification information is detected by using both the first sensor and the second sensor, an interface of the second application is displayed in the first area, and the interface of the first application is displayed in the second area; or the interface of the first application is displayed in the first area, and the interface of the second application is displayed in the second area.
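For the common case in which each sensor detects a different user, step 2040 can be sketched as a simple dispatch; handling the case in which both sensors detect the same user would additionally require a policy choice, as the preceding paragraph notes. The function name and data layout below are assumptions:

```python
def control_display(first_seen, second_seen, bindings: dict) -> dict:
    """Illustrative sketch of step 2040: map each area to the interface
    of the application bound to the user currently facing that area.

    first_seen / second_seen: user identification detected by the first
    and second sensor (or None); bindings: user -> application.
    """
    layout = {}
    if first_seen in bindings:
        layout["first_area"] = bindings[first_seen]
    if second_seen in bindings:
        layout["second_area"] = bindings[second_seen]
    return layout


# If user 1 now faces the second area and user 2 faces the first area,
# the two interfaces are swapped accordingly:
bindings = {"user1": "first_application", "user2": "second_application"}
print(control_display("user2", "user1", bindings))
# {'first_area': 'second_application', 'second_area': 'first_application'}
```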
The method 2000 further includes: detecting a second operation in the first area; in response to the second operation, closing the second application, and displaying, in the first area, a desktop interface or an interface displayed before the second application is started; after the second application is closed, detecting a third operation in the first area; in response to the third operation, starting a third application and displaying an interface of the third application in the first area; and storing a correspondence between the third application and the second user identification information.
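The rebinding at the end of this step can be sketched in a few lines; `rebind_after_switch` is a hypothetical helper, not part of this application:

```python
def rebind_after_switch(bindings: dict, user_id: str, new_app: str) -> dict:
    """Illustrative sketch: after a user closes one application and
    starts another in the same area, bind the new application to that
    user (cf. the correspondence between the third application and the
    second user identification information)."""
    bindings[user_id] = new_app
    return bindings


# The second user closes the second application and starts a third one:
print(rebind_after_switch({"user2": "second_application"}, "user2",
                          "third_application"))
# {'user2': 'third_application'}
```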
It may be understood that, to implement the foregoing functions, the electronic device includes corresponding hardware and/or software modules for performing the functions. Algorithm steps in examples described with reference to embodiments disclosed in this specification can be implemented in a form of hardware or a combination of hardware and computer software in this application. Whether a function is performed by hardware or hardware driven by computer software depends on particular applications and design constraints of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application with reference to embodiments, but it should not be considered that the implementation goes beyond the scope of this application.
In embodiments, the electronic device may be divided into functional modules based on the foregoing method examples. For example, each functional module corresponding to each function may be obtained through division, or two or more functions may be integrated into one processing module. The integrated module may be implemented in a form of hardware. It should be noted that, in embodiments, division into modules is an example and is merely logical function division. During actual implementation, there may be another division manner.
When each functional module is obtained through division based on each corresponding function, the electronic device 2100 may include a display unit 2110, a detection unit 2120, and a storage unit 2130, as shown in the accompanying figures.
The display unit 2110 may be configured to support the electronic device 2100 in performing step 2010, step 2040, and/or another process of the technology described in this specification.
The detection unit 2120 may be configured to support the electronic device 2100 in performing step 2020 and/or another process of the technology described in this specification.
The storage unit 2130 may be configured to support the electronic device 2100 in performing step 2030 and/or another process of the technology described in this specification.
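For illustration, the division into functional modules might be composed as follows; the method names are assumptions, while the unit numbers follow the description above:

```python
class DisplayUnit:       # cf. display unit 2110 (steps 2010 and 2040)
    def show(self, area: str, interface: str) -> None:
        print(f"{interface} displayed in {area}")


class DetectionUnit:     # cf. detection unit 2120 (step 2020)
    def detect(self, sensor: str):
        return None      # would return detected user identification


class StorageUnit:       # cf. storage unit 2130 (step 2030)
    def __init__(self):
        self.bindings = {}  # user identification -> application


class ElectronicDevice2100:
    """Illustrative sketch: the electronic device 2100 composed of the
    three functional modules described above."""
    def __init__(self):
        self.display = DisplayUnit()
        self.detection = DetectionUnit()
        self.storage = StorageUnit()
```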
It should be noted that the related content of the steps in the foregoing method embodiments may be cited in function descriptions of corresponding functional modules. Details are not described herein again.
A person of ordinary skill in the art may be aware that, in combination with the examples described in embodiments disclosed in this specification, units and algorithm steps may be implemented by electronic hardware or a combination of computer software and the electronic hardware. Whether the functions are performed by hardware or software depends on particular applications and design constraints of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that the implementation goes beyond the scope of this application.
A person skilled in the art may clearly understand that, for the purpose of convenient and brief description, for detailed working processes of the foregoing system, apparatus, and unit, refer to corresponding processes in the foregoing method embodiments, and details are not described herein again.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the foregoing apparatus embodiment is merely an example. For example, division into the units is merely logical function division and may be other division during actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual coupling or direct coupling or communication connection may be implemented by using some interfaces. The indirect coupling or communication connection between the apparatuses or units may be implemented in electrical, mechanical, or another form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one location, or may be distributed on a plurality of network units. Some or all of the units may be selected based on actual requirements to achieve the objectives of the solutions of embodiments.
In addition, functional units in embodiments of this application may be integrated into one processing unit, each of the units may exist alone physically, or two or more units may be integrated into one unit.
When the functions are implemented in a form of a software function unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or some of the technical solutions may be implemented in a form of a software product. The computer software product is stored in a storage medium, and includes several instructions for instructing a computer device (which may be, for example, a personal computer, a server, or a network device) to perform all or some of the steps of the methods described in embodiments of this application. The foregoing storage medium includes: any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or a compact disc.
The foregoing descriptions are merely specific implementations of this application, but are not intended to limit the protection scope of this application. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.
Number | Date | Country | Kind
---|---|---|---
201911377589.6 | Dec 2019 | CN | national
This is a continuation of International Patent Application No. PCT/CN2020/130138 filed on Nov. 19, 2020, which claims priority to Chinese Patent Application No. 201911377589.6 filed on Dec. 27, 2019. The disclosures of the aforementioned applications are hereby incorporated by reference in their entireties.
Relation | Number | Date | Country
---|---|---|---
Parent | PCT/CN2020/130138 | Nov 2020 | US
Child | 17848827 | | US