This application relates to the field of in-vehicle device technologies, and in particular, to a control method and an electronic device.
With the rapid development of intelligent vehicle technologies, more displays are configured in a vehicle to provide a user with a more personalized and user-friendly experience. For example, the displays configured in the vehicle include not only a central control display ("central display screen" for short below) but also a front passenger display ("front passenger screen" for short below) and displays disposed in other locations, for example, a rear left seat display ("rear left screen" or "rear-row left screen" for short below) or a rear right seat display ("rear right screen" or "rear-row right screen" for short below). When riding in the vehicle, a passenger can watch videos, play games, or the like on a display at the passenger's seat. In addition, interaction such as interface transfer and collaborative operations may also be performed between a driver and a passenger, or between passengers, through the displays.
Interaction between in-vehicle displays supported by an existing intelligent in-vehicle terminal (“in-vehicle terminal” for short below) is usually full-screen cross-screen transfer. For example, as shown in
This application provides a control method and an electronic device, to provide, in a multi-task scenario, a user with efficient intelligent interface display that meets user requirements and habits.
According to a first aspect, an embodiment of this application provides a control method. The method may be applied to an interface transfer process of an electronic device such as an in-vehicle terminal, where the in-vehicle terminal includes a plurality of displays (for example, a first screen (or referred to as a first display) and a second screen (or referred to as a second display)). The method includes: When receiving a first operation for transferring a first interface that is on the first screen, the in-vehicle terminal transfers the first interface based on an intention of the first operation, where the transfer includes one-screen transfer and cross-screen transfer; and in the case of the cross-screen transfer, after the transfer of the first interface is completed, a display type of the first interface on the second screen is related to task information before the transfer of the first interface and/or screen task information of the second screen.
The task information before the transfer of the first interface indicates a display type and/or classification information of the first interface before the transfer of the first interface, and the classification information indicates whether the first interface is a preset focused application interface of the first screen. The screen task information of the second screen indicates a display type and/or classification information of a task interface on the second screen before the transfer of the first interface, and the classification information indicates whether an interface is a preset focused application interface of a screen.
According to the solution provided in the first aspect, when receiving an operation performed by a user for interface transfer, the in-vehicle terminal determines, by analyzing the specific intention of the operation, whether to perform cross-screen transfer or one-screen transfer, to provide a better intelligent interface display service for the user. For example, when determining that the intention of the first operation is cross-screen transfer, the in-vehicle terminal may display the task interface transferred from the original screen on the target screen in an appropriate display type, selected based on the actual display situation of the first interface before the transfer and/or the interface display situation on the second screen. According to the method, the transferred task interface may be displayed with the most eye-catching display effect without interrupting the task that the user is currently focusing on on the target screen. This reduces subsequent operations of the user and provides more user-friendly interface transfer experience.
In a possible design, the display type includes any one of an application window, a floating window, a floating icon, a floating bubble, picture-in-picture, a widget, a control, or a notification. The solution provided in this application is applicable to transfer of first interfaces of various display types, and the display type of an interface on the second screen before the transfer is not limited. In any of these scenarios, the solution provided in this application can display the transferred task interface with the most eye-catching display effect without affecting the task that the user is currently focusing on on the target screen. Applicability and practicability are high.
In a possible design, the first operation is an operation in which one or more fingers slide on the first interface at a sliding speed greater than a preset threshold and then lift off the screen, and the first operation is used to transfer the first interface across screens. It may be understood that a one-finger or multi-finger sliding operation complies with the operation habits of most users and is easy to memorize and perform. Therefore, specific interface transfer intentions, such as a cross-screen transfer intention and a one-screen transfer intention, are identified based on different sliding speeds (or sliding accelerations). This facilitates memorization and operation by the user, and can further improve interaction between the in-vehicle terminal and the user, to improve user experience.
In a possible design, that the in-vehicle terminal transfers the first interface based on an intention of the first operation includes: The in-vehicle terminal determines, based on a sliding direction of the first operation and location relationships between the first screen and the plurality of displays, that the second screen is a target screen for the cross-screen transfer, where the sliding direction points to the second screen; and the in-vehicle terminal transfers the first interface to the second screen across screens. When the in-vehicle terminal includes the plurality of displays (for example, the first screen and the second screen), the in-vehicle terminal knows the relative location relationships between the plurality of displays. Similarly, when the plurality of displays belong to different in-vehicle terminals, the in-vehicle terminal to which the first screen belongs may know these relative location relationships, for example, by storing the specific locations of the plurality of displays. In view of this, in this embodiment of this application, the in-vehicle terminal may determine, based on the sliding direction of the user's finger and the relative location relationships between the plurality of displays, the target screen for the cross-screen transfer of the first interface.
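For illustration only, the gesture-intent logic described above (a speed threshold separating cross-screen from one-screen transfer, and direction matching against stored display locations) can be sketched in Python as follows. All names, coordinates, and threshold values are assumptions made for this sketch, not the application's actual implementation.

```python
import math

FLING_SPEED_THRESHOLD = 1000.0  # px/s, the preset threshold (assumed value)

# Assumed stored relative locations (x, y) of the in-vehicle displays.
SCREEN_LOCATIONS = {
    "driver": (0.0, 0.0),
    "front_passenger": (1.0, 0.0),
    "rear_left": (0.0, 1.5),
    "rear_right": (1.0, 1.5),
}

def classify_intent(sliding_speed: float) -> str:
    """Cross-screen transfer if the sliding speed exceeds the threshold,
    one-screen transfer otherwise."""
    return "cross_screen" if sliding_speed > FLING_SPEED_THRESHOLD else "one_screen"

def pick_target_screen(source: str, direction: tuple) -> str:
    """Return the display whose direction from the source screen has the
    highest cosine similarity with the sliding direction."""
    sx, sy = SCREEN_LOCATIONS[source]
    dx, dy = direction
    norm = math.hypot(dx, dy)
    if norm == 0:
        raise ValueError("sliding direction must be nonzero")
    best, best_cos = source, -2.0
    for name, (x, y) in SCREEN_LOCATIONS.items():
        if name == source:
            continue
        vx, vy = x - sx, y - sy
        cos = (vx * dx + vy * dy) / (math.hypot(vx, vy) * norm)
        if cos > best_cos:
            best, best_cos = name, cos
    return best

# Example: a fast rightward swipe on the driver screen targets the
# front passenger screen.
if classify_intent(sliding_speed=1500.0) == "cross_screen":
    print(pick_target_screen("driver", direction=(1.0, 0.0)))  # front_passenger
```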
In a possible design, a display type of the first interface after the transfer is the same as a display type of the first interface before the transfer. For example, if the first interface is displayed in full screen before the transfer, the first interface may be kept displayed in full screen after the interface transfer, to ensure eye-catching display of the first interface. For another example, if the first interface is displayed in a floating window before the transfer, the first interface may be kept displayed in a floating window after the interface transfer. This does not affect an interface on the target screen. In view of this, same interface display experience before and after the interface transfer can be maintained.
In a possible design, the display type of the first interface on the first screen before the transfer of the first interface is full-screen display; and after the transfer of the first interface is completed, the display type of the first interface on the second screen is full-screen display. In view of this, same eye-catching interface display experience before and after the interface transfer can be maintained.
In a possible design, the display type of the first interface after the transfer is related to the classification information of the first interface before the transfer, where the first interface before the transfer is a preset focused application interface of the first screen; and after the transfer of the first interface is completed, the display type of the first interface on the second screen is full-screen display. It may be understood that if the first interface before the transfer is the preset focused application interface of the first screen, the interface is of relatively high concern or importance to the user, and its display needs to be preferentially ensured. In view of this, displaying the first interface in full screen after the transfer can ensure eye-catching display of the first interface, to facilitate user operations.
In a possible design, the display type of the first interface after the transfer is also related to the screen task information of the second screen, where before the transfer of the first interface, no task interface is displayed on the second screen, or no task interface is displayed in full screen, or a preset focused application interface of the second screen is not displayed on the second screen. It may be understood that if, before the transfer, no task interface is displayed on the second screen, no task interface is displayed in full screen, or the preset focused application interface of the second screen is not displayed, the second screen is not executing a task of relatively high concern or importance to the user. In view of this, displaying the first interface in full screen after the transfer can ensure eye-catching display of the first interface, to facilitate user operations.
In a possible design, the display type of the first interface after the transfer is related to the screen task information of the second screen, where before the transfer of the first interface, no task interface is displayed on the second screen, or no task interface is displayed in full screen, or a preset focused application interface of the second screen is not displayed on the second screen; and after the transfer of the first interface is completed, the display type of the first interface is full-screen display. It may be understood that, as described above, these conditions indicate that the second screen is not executing a task of relatively high concern or importance to the user. In view of this, displaying the first interface in full screen after the transfer can ensure eye-catching display of the first interface, to facilitate user operations.
In a possible design, the display type of the first interface after the transfer is related to the screen task information of the second screen, where before the transfer of the first interface, a task interface is displayed in full screen on the second screen or a preset focused application interface of the second screen is displayed on the second screen; and after the transfer of the first interface is completed, the first interface and the other task interface on the second screen are displayed in split screen. It may be understood that if, before the transfer, a task interface is displayed in full screen or the preset focused application interface of the second screen is displayed, the second screen is executing a task of relatively high concern or importance to the user. In view of this, the first interface is displayed in split screen after the transfer, to avoid interrupting the task interface that the user is focusing on on the second screen, and provide more user-friendly transfer experience.
In a possible design, the display type of the first interface after the transfer is related to both the screen task information of the second screen and the task information before the transfer of the first interface, where before the transfer of the first interface, the display type of the first interface is full-screen display or the first interface is a preset focused application interface of the first screen, and a task interface is displayed in full screen on the second screen or a preset focused application interface of the second screen is displayed on the second screen; and after the transfer of the first interface is completed, the first interface and the other task interface on the second screen are displayed in split screen. It may be understood that the first interface being displayed in full screen, or being the preset focused application interface of the first screen, indicates that it is of relatively high concern or importance to the user; and the second screen's full-screen task interface or preset focused application interface indicates that the second screen is also executing such a task. In view of this, displaying the first interface in split screen after the transfer ensures eye-catching display of the first interface while avoiding interrupting the task interface that the user is focusing on on the second screen, and provides more user-friendly transfer experience.
In a possible design, the display type of the first interface after the transfer is related to the screen task information of the second screen, where before the transfer of the first interface, a task interface is displayed in full screen on the second screen or a preset focused application interface of the second screen is displayed on the second screen; and after the transfer of the first interface is completed, the display type of the first interface is split-screen display, a floating window, a floating icon, a floating bubble, picture-in-picture, a widget, or a control. It may be understood that these conditions indicate that the second screen is executing a task of relatively high concern or importance to the user. In view of this, after the transfer, the first interface is displayed in a non-full-screen form (for example, a floating window or picture-in-picture), to avoid interrupting the task interface that the user is focusing on on the second screen, and provide more user-friendly transfer experience.
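For illustration, the foregoing designs can be consolidated into a single decision rule. The following Python sketch uses assumed field names (display_type, is_focused_app) and an assumed fallback form; it is one plausible reading of these designs, not the actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Assumed task-information fields; the application's actual task information
# and rule set may differ.
@dataclass
class TaskInfo:
    display_type: str     # e.g. "full_screen", "floating_window", ...
    is_focused_app: bool  # is this a preset focused application interface?

def display_type_after_transfer(source: TaskInfo,
                                target: Optional[TaskInfo]) -> str:
    """Decide how the transferred interface is shown on the target screen."""
    # Target screen idle, non-full-screen, and not showing its preset focused
    # application: the transferred interface may take the whole screen.
    if target is None or (target.display_type != "full_screen"
                          and not target.is_focused_app):
        return "full_screen"
    # Target screen busy with a full-screen or focused task: if the source
    # interface was itself full-screen or focused, split the screen so both
    # stay visible; otherwise use a non-intrusive form.
    if source.display_type == "full_screen" or source.is_focused_app:
        return "split_screen"
    return "floating_window"  # or picture-in-picture, a widget, ...

# A focused full-screen interface arriving at a busy target screen is split.
print(display_type_after_transfer(TaskInfo("full_screen", True),
                                  TaskInfo("full_screen", False)))  # split_screen
```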
In a possible design, the display type of the first interface after the transfer is the same as the display type of the first interface before the transfer. For example, if the first interface before the transfer is displayed on the first screen in a floating window, the first interface after the transfer may be displayed on the second screen in a floating window.
In a possible design, after the transfer of the first interface is completed, a floating window, a floating icon, a floating bubble, an application icon, or a widget corresponding to the first interface is displayed on the first screen. This can help a user of the first screen still view the first interface or perform an operation on the first interface, and can avoid interference of the first interface with another focused task on the first screen.
In a possible design, when the transfer is the one-screen transfer, the first operation is an operation in which one or more fingers slide for a distance on the first interface at a sliding speed less than or equal to a preset threshold and then lift off the screen, and the first operation is used to transfer the first interface on one screen. This complies with the operation habits of most users, makes the operation that triggers the one-screen transfer easy to memorize and perform, and can improve interaction between the in-vehicle terminal and the user, to improve user experience.
In a possible design, that the in-vehicle terminal transfers the first interface based on an intention of the first operation includes: The in-vehicle terminal transfers the first interface to a target location on one screen, where the target location is an end location of the first operation on the first screen, or the target location is obtained by the in-vehicle terminal through calculation based on the first operation, or the target location is a preset location on an edge of the first screen. This can implement one-screen transfer functions such as pinning the first interface to a location on a screen and moving the location of the first interface, to meet diversified requirements of the user.
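As a minimal sketch of the three ways the design above can obtain the one-screen target location (the gesture's end point, a point calculated from the gesture, or a preset edge location for pinning): the velocity-projection calculation below is one plausible interpretation under assumed geometry, since the application does not specify the calculation.

```python
import math

# Assumed screen geometry and preset edge locations for pinning.
SCREEN_W, SCREEN_H = 1920, 720
EDGE_SLOTS = [(40, 40), (40, SCREEN_H - 40)]

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def target_from_velocity(end, velocity, decel=2000.0):
    """Project where a decelerating fling would stop (simple kinematics:
    travel = v^2 / (2 * decel)), clamped to the screen bounds."""
    vx, vy = velocity
    speed = math.hypot(vx, vy)
    if speed == 0:
        return end  # no fling: the end location itself is the target
    travel = speed * speed / (2 * decel)
    x = clamp(end[0] + vx / speed * travel, 0, SCREEN_W)
    y = clamp(end[1] + vy / speed * travel, 0, SCREEN_H)
    return (x, y)

def nearest_edge_slot(point):
    """Snap to the closest preset edge location (pinning)."""
    return min(EDGE_SLOTS,
               key=lambda s: (s[0] - point[0]) ** 2 + (s[1] - point[1]) ** 2)

print(target_from_velocity(end=(900, 300), velocity=(400, 0)))  # (940.0, 300.0)
print(nearest_edge_slot((900, 600)))                            # (40, 680)
```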
In a possible design, before the transfer of the first interface, the first screen displays the first interface and a second interface in split screen, and the first operation is an operation in which one or more fingers of a user slide for a distance on the first interface toward the second interface at a sliding speed less than or equal to the preset threshold and then lift off the screen; and after the transfer of the first interface is completed, the in-vehicle terminal exchanges the locations of the first interface and the second interface. This can conveniently and quickly exchange the locations of split-screen interfaces.
In a possible design, after the first interface is transferred to the second screen across screens, the method further includes: When receiving an operation for transferring the first interface to the first screen across screens, the in-vehicle terminal transfers the first interface back to the first screen across screens, where after the first interface is transferred back to the first screen, a display type of the first interface is related to task information of the first interface displayed on the second screen and/or screen task information of the first screen when the in-vehicle terminal receives the operation. This application supports arbitrary transfer of an interface between a plurality of screens, including reverse transfer.
In a possible design, the in-vehicle terminal further includes a third screen, and after the first interface is transferred to the second screen across screens, the method further includes: When receiving an operation for transferring the first interface to the third screen across screens, the in-vehicle terminal transfers the first interface to the third screen across screens, where after the first interface is transferred to the third screen, a display type of the first interface is related to task information of the first interface displayed on the second screen and/or screen task information of the third screen when the in-vehicle terminal receives the operation. This application supports arbitrary transfer of an interface between a plurality of screens, including relay transfer.
In a possible design, when the first operation performed by the user for transferring the first interface on the first screen is received, one or more first task interfaces are displayed on the first screen, and the one or more first task interfaces include the first interface. The solutions provided in this application are applicable to a multi-task scenario.
In a possible design, the first screen is any one of the following screens of a vehicle: a driver screen, a front passenger screen, a rear left screen, or a rear right screen. In an example, the solution provided in this application is applicable to interface transfer on any screen of the driver screen, the front passenger screen, the rear left screen, or the rear right screen in the vehicle.
In a possible design, the second screen is any one of the following screens of the vehicle: the driver screen, the front passenger screen, the rear left screen, or the rear right screen, and the second screen is different from the first screen. In an example, the solution provided in this application is applicable to transferring an interface to any screen in the vehicle.
According to a second aspect, an embodiment of this application provides a control method. In the method, a plurality of displays may collaboratively display an image, to provide a user with immersive viewing experience. The method may be applied to an electronic device including a first display, a second display, and a third display, where the second display and the third display are respectively located on two sides of the first display. The method may include: when displaying a first interface on the first display, displaying a second interface on the second display, and displaying a third interface on the third display, where the second interface is an interface obtained after first effect processing is performed on a partial interface that is of the first interface and that is close to a side of the second display, the third interface is an interface obtained after the first effect processing is performed on a partial interface that is of the first interface and that is close to a side of the third display, and the first effect processing is any one of Gaussian blur processing, solid color gradient processing, or particle animation processing.
In the method, when the electronic device having a plurality of displays displays the first interface on the first display, it displays, on the second display, a special effect interface corresponding to the partial interface that is of the first interface and that is close to the side of the second display, and displays, on the third display, a special effect interface corresponding to the partial interface that is of the first interface and that is close to the side of the third display. In this way, the second display and the third display simultaneously display different special effect interfaces related to the first interface, so that one interface is collaboratively displayed by the plurality of displays, providing richer display effects and improving the immersive viewing experience of the user.
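For the Gaussian-blur variant of the first effect processing, one way to build the side interfaces can be sketched with the Pillow imaging library as below; the strip ratio, blur radius, and file names are assumptions for this sketch only.

```python
from PIL import Image, ImageFilter

def side_effect_interfaces(frame, strip_ratio=0.25, blur_radius=24):
    """Build the second and third interfaces from the edge strips of the
    first interface: crop the strip nearest each side display and blur it."""
    w, h = frame.size
    strip_w = int(w * strip_ratio)
    left_strip = frame.crop((0, 0, strip_w, h))       # part near the left display
    right_strip = frame.crop((w - strip_w, 0, w, h))  # part near the right display
    blur = ImageFilter.GaussianBlur(radius=blur_radius)
    return left_strip.filter(blur), right_strip.filter(blur)

# Assumed capture of the first interface rendered on the first display.
frame = Image.open("main_display_frame.png")
second_interface, third_interface = side_effect_interfaces(frame)
second_interface.save("second_display.png")  # shown on the second display
third_interface.save("third_display.png")    # shown on the third display
```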
In a possible design, sizes of the second interface and the third interface may be the same or may be different.
In the method, a size relationship between the second interface and the third interface may be flexibly set, which helps improve flexibility of interface display.
In a possible design, the method further includes: when receiving a first operation for transferring the first interface on the first display, transferring the first interface based on an intention of the first operation, where the transfer includes one-screen transfer and cross-screen transfer. After the first operation for transferring the first interface on the first display is received, the method further includes: when determining that the transfer is the cross-screen transfer, skipping displaying the second interface on the second display, and skipping displaying the third interface on the third display.
In the method, when the first interface displayed on the first display is transferred across screens, the electronic device may stop displaying, on displays on the two sides of the first display, special effect interfaces related to the first interface, so that special effects can be disabled in a timely manner, to avoid unnecessary power consumption.
In a possible design, the electronic device includes a first OS and a second OS, the first OS is used to control the first display, and the second OS is used to control the second display. The displaying a second interface on the second display includes: determining a fourth interface by using the first OS, where the fourth interface is the first interface, or the partial interface that is of the first interface and that is close to the side of the second display, or an interface obtained after a part or all of a process of the first effect processing is performed on the partial interface that is of the first interface and that is close to the side of the second display; storing the fourth interface into a first memory by using the first OS; obtaining the fourth interface from the first memory by using the second OS; determining the second interface based on the fourth interface by using the second OS; and displaying the second interface on the second display by using the second OS.
In the method, different displays may be controlled by different OSes. When a special effect interface corresponding to the first interface is displayed, a process of generating the special effect interface may be completed by an OS to which any display belongs, or may be jointly completed by different OSes. Therefore, flexibility of the method is relatively high, and an OS that performs the process of generating the special effect interface can be flexibly selected based on processing capabilities or processing pressure of different OSes, thereby improving processing efficiency. In addition, different OSes may share a memory, and each OS may read data from the shared memory, or may store processed data into the shared memory. This can provide relatively high access efficiency, and save data storage space.
In a possible design, the determining the second interface based on the fourth interface by using the second OS includes: when the fourth interface is the first interface, determining, by using the second OS based on the fourth interface, the partial interface that is of the first interface and that is close to the side of the second display, and performing the first effect processing on the partial interface that is of the first interface and that is close to the side of the second display, to obtain the second interface; or when the fourth interface is the partial interface that is of the first interface and that is close to the side of the second display, performing the first effect processing on the fourth interface by using the second OS, to obtain the second interface; or when the fourth interface is an interface obtained after a part of the process of the first effect processing is performed on the partial interface that is of the first interface and that is close to the side of the second display, performing the remaining part of the process of the first effect processing on the fourth interface by using the second OS, to obtain the second interface; or when the fourth interface is an interface obtained after all of the process of the first effect processing is performed on the partial interface that is of the first interface and that is close to the side of the second display, determining the fourth interface as the second interface by using the second OS.
In the method, if an interface that is from the first OS and that is obtained by the second OS is an original interface on which special effect processing is not performed, the second OS may perform complete special effect processing on the interface. If the interface that is from the first OS and that is obtained by the second OS is an interface on which partial special effect processing is performed, the second OS may perform the remaining special effect processing on the interface. If the interface that is from the first OS and that is obtained by the second OS is an interface on which complete special effect processing is performed, the second OS may directly use the interface. Therefore, the second OS may perform corresponding subsequent processing based on the processing progress of the first OS, and flexibility is relatively high. When the processing pressure of the first OS is relatively high, the second OS may undertake a part or all of special effect processing tasks. This can improve processing efficiency.
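The staged handoff through the shared first memory can be illustrated, by analogy, on a single machine with Python's multiprocessing shared memory; the writer stands in for the first OS and the reader for the second OS, and all names are placeholders rather than the real inter-OS mechanism.

```python
from multiprocessing import shared_memory

FRAME = bytes(range(256)) * 16  # stand-in for the fourth interface's pixel data

# "First OS": create the shared block (the first memory) and store the frame.
writer = shared_memory.SharedMemory(create=True, size=len(FRAME), name="frame_buf")
writer.buf[:len(FRAME)] = FRAME

# "Second OS": attach to the same block by name and read the frame out.
reader = shared_memory.SharedMemory(name="frame_buf")
received = bytes(reader.buf[:len(FRAME)])
assert received == FRAME  # the reader sees exactly what the writer stored

# The second OS would now perform the remaining part of the effect processing
# on `received` and display the result on the second display.

reader.close()
writer.close()
writer.unlink()  # release the shared block once both sides are done
```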
In a possible design, the first display, the second display, and the third display belong to a same OS.
In the method, the first display, the second display, and the third display belong to the same OS. In this case, the OS can directly and quickly control the three displays, thereby improving control efficiency.
According to a third aspect, an embodiment of this application provides a control method. In the method, a plurality of displays may collaboratively display an image, to provide a user with immersive viewing experience. The method may be applied to an electronic device including a first display, a second display, and a third display, where the second display and the third display are respectively located on two sides of the first display. The method may include: when displaying a first interface on the first display, displaying a second interface on the second display, and displaying the second interface on the third display, where the second interface is an interface obtained after particle animation processing is performed on the first interface.
In the method, when displaying the first interface on the first display, the electronic device having a plurality of displays can collaboratively display one interface on the plurality of displays by displaying a special effect interface corresponding to the first interface on both the second display and the third display, to further provide richer display effects and improve immersive viewing experience of the user.
In a possible design, the method further includes: when receiving a first operation for transferring the first interface on the first display, transferring the first interface based on an intention of the first operation, where the transfer includes one-screen transfer and cross-screen transfer. After the first operation for transferring the first interface on the first display is received, the method further includes: when the transfer is the cross-screen transfer, skipping displaying the second interface on the second display, and skipping displaying the second interface on the third display.
In the method, when the first interface displayed on the first display is transferred across screens, display of a special effect interface related to the first interface may be stopped on another display, so that special effects can be disabled in a timely manner, to avoid unnecessary power consumption.
In a possible design, the electronic device includes a first OS and a second OS, where the first OS is used to control the first display, and the second OS is used to control the second display; and the displaying a second interface on the second display includes: determining a third interface by using the first OS, where the third interface is the first interface, or an interface obtained after a part or all of a process of the particle animation processing is performed on the first interface; storing the third interface into a first memory by using the first OS; obtaining the third interface from the first memory by using the second OS; determining the second interface based on the third interface by using the second OS; and displaying the second interface on the second display by using the second OS.
In the method, different displays may be controlled by different OSes. When a special effect interface corresponding to the first interface is displayed, a process of generating the special effect interface may be completed by an OS to which any display belongs, or may be jointly completed by different OSes. Therefore, flexibility of the method is relatively high, and an OS that performs the process of generating the special effect interface can be flexibly selected based on processing capabilities or processing pressure of different OSes, thereby improving processing efficiency. In addition, different OSes may share a memory, and each OS may read data from the shared memory, or may store processed data into the shared memory. This can provide relatively high access efficiency, and save data storage space.
In a possible design, the first display, the second display, and the third display belong to a same OS.
In the method, the first display, the second display, and the third display belong to the same OS. In this case, the OS can directly and quickly control the three displays, thereby improving control efficiency.
According to a fourth aspect, an embodiment of this application provides a control method. In the method, a content interface and a special effect interface corresponding to the content interface may be simultaneously displayed on a display, to provide a user with immersive viewing experience. The method may be applied to an electronic device including a first display, and the method may include: displaying a first interface in a first area on the first display, and displaying a second interface in a second area on the first display, where the second area is an area other than the first area on the first display, and the second interface is an interface obtained after particle animation processing is performed on the first interface.
In the method, when displaying the first interface on the first display, the electronic device displays, in the display area other than the first interface on the first display, a special effect interface corresponding to the first interface, so that the content interface and the special effect interface can be simultaneously displayed on a same display. Therefore, richer display effects can be provided, and the immersive viewing experience of the user can be improved.
In a possible design, the method further includes: when receiving a first operation for transferring the first interface on the first display, transferring the first interface based on an intention of the first operation, where the transfer includes one-screen transfer and cross-screen transfer. After the first operation for transferring the first interface on the first display is received, the method further includes: when the transfer is the cross-screen transfer, skipping displaying the second interface on the first display.
In the method, when the first interface displayed on the first display is transferred across screens, the first display may stop displaying a special effect interface related to the first interface, so that special effects can be disabled in a timely manner, to avoid unnecessary power consumption.
According to a fifth aspect, an embodiment of this application provides a control method. In the method, a plurality of displays may collaboratively display an image, to provide a user with immersive viewing experience. The method may be applied to a first electronic device including a first display and a second display. The method may include: displaying a first interface on the first display, and displaying a second interface on the second display; and sending a third interface to a second electronic device, so that the second electronic device displays the third interface on a third display, or sending a fourth interface to a second electronic device, so that the second electronic device generates the third interface based on the fourth interface and then displays the third interface on the third display. The second electronic device includes the third display, the second display and the third display are respectively located on two sides of the first display, the second interface is an interface obtained after first effect processing is performed on a partial interface that is of the first interface and that is close to a side of the second display, the third interface is an interface obtained after the first effect processing is performed on a partial interface that is of the first interface and that is close to a side of the third display, the fourth interface is the partial interface that is of the first interface and that is close to the side of the third display or an interface obtained after a part or all of a process of the first effect processing is performed on the partial interface that is of the first interface and that is close to the side of the third display, and the first effect processing is any one of Gaussian blur processing, solid color gradient processing, or particle animation processing.
In the method, when displaying the first interface on the first display, the first electronic device may control the second display of the first electronic device to display a special effect interface corresponding to the first interface. In addition, the first electronic device sends, to the second electronic device, either the fully processed special effect interface or an interface on which a part or none of the special effect processing has been performed, so that the second electronic device can determine the special effect interface corresponding to the first interface and display it on the third display of the second electronic device. Therefore, the method can support the first electronic device and the second electronic device in collaboratively displaying the first interface on the first display and displaying the special effect interfaces related to the first interface on the second display and the third display, to provide richer display effects and improve the immersive viewing experience of the user.
In a possible design, sizes of the second interface and the third interface may be the same or may be different.
In the method, a size relationship between the second interface and the third interface may be flexibly set, which helps improve flexibility of interface display.
According to a sixth aspect, an embodiment of this application provides a control method. In the method, a plurality of displays may collaboratively display an image, to provide a user with immersive viewing experience. The method may be applied to a first electronic device including a first display. The method may include: when displaying a first interface on the first display, sending a second interface and a third interface to a second electronic device, so that the second electronic device separately displays the second interface and the third interface on a second display and a third display; or when displaying a first interface on the first display, sending the first interface to a second electronic device, so that the second electronic device separately generates a second interface and a third interface based on the first interface and then separately displays the second interface and the third interface on a second display and a third display. The second electronic device includes the second display and the third display, the second display and the third display are respectively located on two sides of the first display, the second interface is an interface obtained after first effect processing is performed on a partial interface that is of the first interface and that is close to a side of the second display, the third interface is an interface obtained after the first effect processing is performed on a partial interface that is of the first interface and that is close to a side of the third display, and the first effect processing is any one of Gaussian blur processing, solid color gradient processing, or particle animation processing.
In the method, when displaying the first interface on the first display, the first electronic device sends the first interface or a first interface on which a part or all of special effect processing is performed to the second electronic device, so that the second electronic device can determine a special effect interface corresponding to the first interface, and display the special effect interface corresponding to the first interface on the second display and the third display of the second electronic device. Therefore, the method can support the first electronic device and the second electronic device in collaboratively implementing an effect of displaying the first interface on the first display and displaying the special effect interface related to the first interface on the second display and the third display, to further provide richer display effects and improve immersive viewing experience of the user.
In a possible design, sizes of the second interface and the third interface may be the same or may be different.
In the method, a size relationship between the second interface and the third interface may be flexibly set, which helps improve flexibility of interface display.
According to a seventh aspect, an embodiment of this application provides a control method. In the method, a plurality of displays may collaboratively display an image, to provide a user with immersive viewing experience. The method may be applied to a first electronic device including a first display. The method may include: when displaying a first interface on the first display, sending a second interface to a second electronic device, so that the second electronic device displays the second interface on a second display, or sending a third interface to a second electronic device, so that the second electronic device generates a second interface based on the third interface and then displays the second interface on a second display; and when displaying the first interface on the first display, sending a fourth interface to a third electronic device, so that the third electronic device displays the fourth interface on a third display, or sending a fifth interface to a third electronic device, so that the third electronic device generates a fourth interface based on the fifth interface and then displays the fourth interface on a third display. The second electronic device includes the second display, the third electronic device includes the third display, the second display and the third display are respectively located on two sides of the first display, the second interface is an interface obtained after first effect processing is performed on a partial interface that is of the first interface and that is close to a side of the second display, the third interface is the partial interface that is of the first interface and that is close to the side of the second display or an interface obtained after a part or all of a process of the first effect processing is performed on the partial interface that is of the first interface and that is close to the side of the second display, the fourth interface is an interface obtained after the first effect processing is performed on a partial interface that is of the first interface and that is close to a side of the third display, the fifth interface is the partial interface that is of the first interface and that is close to the side of the third display or an interface obtained after a part or all of the process of the first effect processing is performed on the partial interface that is of the first interface and that is close to the side of the third display, and the first effect processing is any one of Gaussian blur processing, solid color gradient processing, or particle animation processing.
In the method, when displaying the first interface on the first display, the first electronic device separately sends the first interface or a first interface on which a part or all of special effect processing is performed to the second electronic device and the third electronic device, so that the second electronic device and the third electronic device can determine a special effect interface corresponding to the first interface, and display the special effect interface corresponding to the first interface on the displays of the second electronic device and the third electronic device. Therefore, the method can support the first electronic device in collaborating with the second electronic device and the third electronic device to implement an effect of displaying the first interface on the first display and displaying the special effect interface related to the first interface on the second display and the third display, to further provide richer display effects and improve immersive viewing experience of the user.
In a possible design, sizes of the second interface and the third interface may be the same or may be different.
In the method, a size relationship between the second interface and the third interface may be flexibly set, which helps improve flexibility of interface display.
In a possible design, in the method according to any one of the second aspect to the seventh aspect, the method further includes: when displaying the first interface on the first display, adjusting, based on a volume of an audio corresponding to the first interface, a control parameter of an air conditioning device in the space in which the first display is located, where the control parameter is positively related to the volume, and the control parameter includes at least one of an air exhaust volume, an air exhaust speed, or an air exhaust duration.
In the method, the electronic device may determine the control parameter of the air conditioning device based on a displayed interface, and control the air conditioner based on the determined control parameter. Therefore, the air conditioner can provide auxiliary immersive experience when a content interface is displayed, offering more diversified immersive services and improving the immersive experience of the user.
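The design only requires the control parameter to be positively related to the volume; a simple linear mapping such as the following sketch would satisfy that, with assumed volume and level ranges.

```python
def airflow_level(volume, v_min=0.0, v_max=100.0, level_min=1, level_max=10):
    """Map an audio volume in [v_min, v_max] linearly to an air exhaust
    level in [level_min, level_max]; louder audio gives stronger airflow."""
    v = max(v_min, min(v_max, volume))
    frac = (v - v_min) / (v_max - v_min)
    return round(level_min + frac * (level_max - level_min))

print(airflow_level(30.0))  # quiet scene -> level 4 (gentle airflow)
print(airflow_level(95.0))  # loud scene  -> level 10 (strong airflow)
```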
In a possible design, in the method according to any one of the second aspect to the seventh aspect, the method further includes: when displaying the first interface on the first display, determining, based on a preset correspondence between picture scenarios and temperatures, a target temperature corresponding to the picture scenario of the first interface; and adjusting a temperature of an air conditioning device in the space in which the first display is located to the target temperature.
In the method, the electronic device may determine a temperature parameter of the air conditioning device based on a displayed interface, and adjust a temperature of an air conditioner based on the determined temperature parameter, so that the user feels a temperature corresponding to interface content seen by the user, to improve immersive experience of the user.
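A sketch of the preset scenario-to-temperature correspondence follows; the scenario label would come from scene recognition on the first interface, and the table entries are invented examples rather than values from the application.

```python
# Assumed preset correspondence between picture scenarios and temperatures
# (degrees Celsius).
SCENARIO_TEMPERATURE = {
    "snow": 18.0,    # cold scene -> cooler cabin, matching what the user sees
    "desert": 27.0,  # hot scene  -> warmer cabin
    "ocean": 22.0,
}
DEFAULT_TEMPERATURE = 24.0

def target_temperature(picture_scenario):
    return SCENARIO_TEMPERATURE.get(picture_scenario, DEFAULT_TEMPERATURE)

print(target_temperature("snow"))  # air conditioning device set to 18.0
```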
In a possible design, in the method according to any one of the second aspect to the seventh aspect, the method further includes: when displaying the first interface on the first display, determining a first target color based on at least one color on the first interface, where the first target color is any color of the at least one color, or the first target color is an average color of some or all of the at least one color; and adjusting, based on the first target color, a lighting color and/or brightness of a first lighting device in the space in which the first display is located.
In the method, the electronic device may determine a control parameter such as a color and brightness of a lighting device based on a displayed interface, and control the lighting device based on the determined control parameter. Therefore, when displaying a content interface, the electronic device can provide auxiliary immersive experience by using the lighting device, offering more diversified immersive services and improving the immersive experience of the user.
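For the average-color option, one compact computation with Pillow is sketched below; the file name and the final lighting call are placeholders, since the application does not specify the lighting-device interface.

```python
from PIL import Image

def average_color(path):
    """First target color as the average color of a frame: resizing to 1x1
    with a box filter averages every pixel in a single step."""
    img = Image.open(path).convert("RGB")
    return img.resize((1, 1), Image.Resampling.BOX).getpixel((0, 0))

color = average_color("main_display_frame.png")  # assumed frame capture
print(f"set ambient lighting to RGB {color}")    # placeholder for the real call
```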
According to an eighth aspect, an embodiment of this application provides a control method. The method can improve flexibility and practicability of controlling an audio output apparatus to play an audio in a multi-audio-output-apparatus scenario. The method may be applied to a first electronic device. The method may include: when displaying first content on a first display, playing a first audio corresponding to the first content by using a first audio output apparatus, where the first display is any display of a plurality of displays located in a first space area; skipping displaying the first content on the first display, and displaying the first content on a second display in response to a received first operation, where the second display is included in the plurality of displays; and in response to the first operation, skipping playing the first audio by using the first audio output apparatus, and playing the first audio by using a second audio output apparatus, or continuing to play the first audio by using the first audio output apparatus, or playing the first audio by using an audio output apparatus of a specified type in the first space area. The first audio output apparatus is associated with a first sound zone, and the second audio output apparatus is associated with a second sound zone; the first sound zone is a candidate sound zone associated with the first display in a plurality of candidate sound zones, and the second sound zone is a candidate sound zone associated with the second display in the plurality of candidate sound zones; each candidate sound zone in the plurality of candidate sound zones is associated with one or more audio output apparatuses in the first space area; and/or the first audio output apparatus includes at least one audio output apparatus of a same type, and the second audio output apparatus includes at least one audio output apparatus of a same type, where the type includes at least one of a status type indicating a moving/static state of an audio output apparatus, a location type indicating a location range of an audio output apparatus, or a device type of an audio output apparatus. An interface on which the first content is located is a first interface.
In the method, in a scenario in which there are a plurality of displays and a plurality of audio output apparatuses in a space area, when displaying content on a display and playing a corresponding audio, an electronic device may select, from the plurality of candidate sound zones, a sound zone associated with the display, and play the audio by using an audio output apparatus associated with that sound zone. When content on one display is transferred to another display for display, the electronic device may also flexibly select the audio output apparatus used to play the audio after the transfer. In this manner, the electronic device may partition the displays and the audio output apparatuses in the space area by sound zone, and flexibly select, with the sound zones as a reference, an audio output apparatus for playing the audio corresponding to the content on each display, without manual intervention. Therefore, control efficiency is relatively high, audio control requirements in more common scenarios can be met, and universality and practicability are relatively high.
In a possible design, if the first audio is a media-type audio or a call-type audio, or a service that provides the first audio is a media-type service or a call-type service, in response to the first operation, the first audio is not played by using the first audio output apparatus, and the first audio is played by using the second audio output apparatus.
In the method, when watching media-type or call-type content, a user has a relatively high requirement for listening to the corresponding audio. Therefore, when determining that the content transferred from the first display to the second display is media-type or call-type content, or that the audio corresponding to the content is a media-type or call-type audio, or that the service providing the audio is a media-type or call-type service, the electronic device may switch the audio output apparatus that plays the audio of the content. This implements an effect that the media-type or call-type audio is transferred along with the image, and improves the audio listening experience of the user.
In a possible design, if the first audio is any one of the following audios: a navigation audio, a notification audio, a system audio, or an alarm audio, or if the service that provides the first audio is any one of the following services: a navigation service, a notification service, a system service, or an alarm service, or if a display type corresponding to the first content is any one of a floating window, picture-in-picture, a control, or a widget, in response to the first operation, the first audio continues to be played by using the first audio output apparatus.
In this method, the navigation audio, the notification audio, the system audio, and the alarm audio are generally listened to mainly by the user in the driving position of a vehicle. Therefore, when the content corresponding to this type of sound is transferred, the sound does not follow the transfer, which better meets the actual requirement of the user, is more applicable to an audio control scenario in the vehicle, and ensures relatively high listening experience of the user in the control process. When viewing content such as a floating window, picture-in-picture, a control, or a widget, the user has a relatively low requirement for listening to the corresponding audio, and the time for which the content is transferred may be relatively short. Therefore, when such content is transferred, the corresponding audio is not transferred along with the content, so that audio control complexity can be reduced, with relatively small impact on user experience.
In a possible design, if the first audio is the call-type audio, in response to the first operation, the first audio is played by using all audio output apparatuses of the specified type in the first space area.
In the method, when determining that the audio corresponding to the content transferred from the first display to the second display is the call-type audio, the electronic device switches a playing manner of the audio to playing the audio by using an audio output apparatus of a specified type in the space area. For example, the electronic device switches a playing manner of the audio to playing the audio by using all or some audio output apparatuses of the specified type in the space area. For example, when a video call screen is transferred from a central display screen to a front passenger screen for display, a call audio may be played by using a full-vehicle loudspeaker. This can ensure that all users in a space area can listen to the call audio, thereby meeting a requirement of the users in the space area for participating in a call, and improving user experience. For another example, when a video call screen is transferred from a central display screen to a front passenger screen for display, front-row loudspeakers (a driver loudspeaker and a front passenger loudspeaker) may be used to play the call audio, to meet call experience of front-row users.
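The routing rules in the preceding designs can be consolidated into a small decision function, sketched below; the labels and the broadcast flag are assumptions used only to illustrate the decision, not the application's actual audio framework.

```python
# Assumed audio-type and display-type labels for illustration.
FOLLOWS_TRANSFER = {"media", "call"}
STAYS_PUT = {"navigation", "notification", "system", "alarm"}
NON_FOCUSED_FORMS = {"floating_window", "picture_in_picture", "control", "widget"}

def route_audio(audio_type: str, display_type: str,
                broadcast_calls: bool = False) -> str:
    """Pick an audio route when content moves from the first display to the
    second display."""
    # Non-focused content forms and driver-oriented audio stay where they are.
    if display_type in NON_FOCUSED_FORMS or audio_type in STAYS_PUT:
        return "keep_first_audio_output"
    # Call audio may be widened to all apparatuses of a specified type,
    # e.g. full-vehicle loudspeakers.
    if audio_type == "call" and broadcast_calls:
        return "all_apparatuses_of_specified_type"
    # Media and call audio otherwise follow the image to the second sound zone.
    if audio_type in FOLLOWS_TRANSFER:
        return "switch_to_second_audio_output"
    return "keep_first_audio_output"

print(route_audio("media", "full_screen"))                       # follows transfer
print(route_audio("navigation", "full_screen"))                  # stays put
print(route_audio("call", "full_screen", broadcast_calls=True))  # broadcast
```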
In a possible design, the method further includes: in response to a first voice indication of a first user, playing a third audio by using an audio output apparatus associated with the first sound zone, where a space area in which the first user is located is a space area associated with the first sound zone; and in response to a second voice indication of a second user, playing a fourth audio by using an audio output apparatus associated with the second sound zone, where a space area in which the second user is located is a space area associated with the second sound zone.
In the method, when the electronic device determines that the location of the user who sends a voice indication changes (for example, users at different locations in the vehicle send voice indications), the audio output apparatus may be switched to play the audio that responds to the user indication, so that continuity and smoothness of listening to the response audio can be ensured, thereby improving user experience.
In a possible design, before the skipping displaying the first content on the first display, and displaying the first content on a second display in response to a received first operation, the method further includes: displaying a second content on the second display, and playing a second audio corresponding to the second content by using a third audio output apparatus, where the third audio output apparatus is associated with the second sound zone. The displaying the first content on the second display includes: displaying the first content and the second content on the second display in split screen; or displaying the first content in a first window on the second display, where the first window is overlaid on a window in which the second content is located, and a size of the first window is less than a size of the window in which the second content is located. After the skipping displaying the first content on the first display, and displaying the first content on the second display in response to the received first operation, the method further includes: continuing to play the second audio by using the third audio output apparatus.
In the method, before content on the first display is transferred to the second display, the second display may display the content and play a corresponding audio. After the content on the first display is transferred to the second display, the second display may display the transferred content in split screen or in a window. In this case, content originally displayed on the second display is still partially or completely visible, and the electronic device may continue to play the audio corresponding to the content. This can avoid a case in which an image does not match the audio and ensure video and audio consistency for the user.
In a possible design, before the skipping displaying the first content on the first display, and displaying the first content on a second display in response to a received first operation, the method further includes: displaying a second content on the second display, and playing a second audio corresponding to the second content by using a third audio output apparatus, where the third audio output apparatus is associated with the second sound zone. The displaying the first content on the second display includes: displaying the first content and skipping displaying the second content on the second display; and after the skipping displaying the first content on the first display, and displaying the first content on the second display in response to the received first operation, the method further includes: skipping playing the second audio by using the third audio output apparatus.
In the method, before content on the first display is transferred to the second display, the second display may display the content and play a corresponding audio. After the content on the first display is transferred to the second display, the second display may display the transferred content in full screen on the second display. In this case, content originally displayed on the second display is invisible to the user, and the electronic device stops playing the audio corresponding to the content. This can avoid a case in which an image does not match the audio and ensure video and audio consistency for the user.
In a possible design, after the skipping playing the second audio by using the third audio output apparatus, the method further includes: in response to a received second operation, skipping displaying the first content on the second display, and displaying the second content on the second display; and continuing to play the second audio by using the third audio output apparatus.
In the method, when the second display no longer displays the transferred content, the electronic device may continue to play, by using an audio output apparatus for playing the audio corresponding to the content on the second display before the content is transferred, the audio corresponding to the content originally displayed on the second display, to continue a service before the content is transferred, thereby improving continuity of content display and audio playing on the second display, and further improving user experience.
In a possible design, after the skipping playing the second audio by using the third audio output apparatus, the method further includes: in response to a received third operation, displaying the first content on the first display, skipping displaying the first content on the second display, and displaying the second content on the second display; and continuing to play the first audio by using the first audio output apparatus, and continuing to play the second audio by using the third audio output apparatus.
In the method, after the content on the first display is transferred to the second display for display, and after the content that is displayed on the second display and that comes from the first display exits display, the electronic device may switch back to an audio playing manner before the content is transferred, to continue a service before the content is transferred, thereby improving continuity of content display and audio playing on the second display, and improving user experience.
In a possible design, after the skipping displaying the first content on the first display, and displaying the first content on a second display in response to a received first operation, the method further includes: when a display area of the first content on the second display is a partial area of the second display, displaying the first content in full screen on the second display in response to a received fourth operation; and playing the first audio by using the audio output apparatus associated with the second sound zone, and skipping playing the first audio by using the audio output apparatus associated with the first sound zone.
In the method, after the content on the first display is transferred to the second display and is not displayed in full screen, the electronic device may not switch the audio playing manner. After the content displayed in non-full screen on the second display changes to be displayed in full screen, the electronic device may switch the audio playing manner and play only audio corresponding to the content. Based on the method, audio control can be performed according to a change of a scenario. This improves flexibility of audio control and user experience.
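The several designs above describe one transfer lifecycle: a non-full-screen transfer (split screen or a small window) changes no audio routing, a full-screen transfer moves the first audio to the second sound zone and mutes the hidden second audio, and exiting or sending back the transferred content restores the pre-transfer playback. A consolidated sketch under those assumptions; the class and attribute names are illustrative, not taken from this application.

```python
class TransferAudioState:
    """Tracks which screen's audio is playing across the transfer lifecycle."""

    def __init__(self):
        self.first_audio_on_first_zone = True  # first screen plays its own audio
        self.second_audio_playing = True       # second screen plays its own audio

    def transfer_in(self, full_screen: bool):
        # Full-screen transfer: the first audio switches to the second zone and
        # the now-hidden second content's audio is muted. A non-full-screen
        # transfer switches nothing yet.
        self.first_audio_on_first_zone = not full_screen
        self.second_audio_playing = not full_screen

    def send_back(self):
        # The transferred content returns to the first display: both screens
        # resume their pre-transfer playback, keeping image and audio matched.
        self.first_audio_on_first_zone = True
        self.second_audio_playing = True

s = TransferAudioState()
s.transfer_in(full_screen=False)
assert s.first_audio_on_first_zone and s.second_audio_playing
s.transfer_in(full_screen=True)
assert not s.first_audio_on_first_zone and not s.second_audio_playing
s.send_back()
assert s.first_audio_on_first_zone and s.second_audio_playing
```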
In a possible design, before the playing a first audio corresponding to the first content by using a first audio output apparatus, the method further includes: determining the first audio output apparatus; and the determining the first audio output apparatus includes: determining the first sound zone based on the first display; and selecting an audio output apparatus with a highest priority from the at least one audio output apparatus as the first audio output apparatus. In the method, when displaying content on a display, the electronic device may play a corresponding audio by using an audio output apparatus with a highest priority in an associated sound zone on the display. Based on the method, the electronic device may flexibly select an audio output apparatus for playing an audio corresponding to content on different displays, without manual intervention. Therefore, control efficiency is relatively high.
In a possible design, the selecting an audio output apparatus with a highest priority from the at least one audio output apparatus as the first audio output apparatus includes: obtaining a priority order of the at least one audio output apparatus associated with the first sound zone; and selecting the audio output apparatus with the highest priority from the at least one audio output apparatus as the first audio output apparatus based on the priority order of the at least one audio output apparatus.
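A minimal sketch of this priority-based selection, assuming a per-zone priority table; ZONE_PRIORITY and the apparatus identifiers are hypothetical.

```python
ZONE_PRIORITY = {
    # ordered from highest to lowest priority
    "driver_zone": ["bt_headset_driver", "headrest_driver", "spk_driver"],
}

def pick_apparatus(zone: str, available: set) -> str:
    """Return the highest-priority apparatus of the zone that is available."""
    for apparatus in ZONE_PRIORITY[zone]:
        if apparatus in available:
            return apparatus
    raise LookupError(f"no available audio output apparatus in {zone}")

# The Bluetooth headset is absent, so the headrest speaker is selected.
assert pick_apparatus("driver_zone",
                      {"headrest_driver", "spk_driver"}) == "headrest_driver"
```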
In a possible design, the determining the first sound zone based on the first display includes: selecting, from the plurality of candidate sound zones based on a specified association relationship between a display and a candidate sound zone, a candidate sound zone associated with the first display as the first sound zone; or determining the first sound zone based on a received sound zone selection operation, where the sound zone selection operation is used to select a candidate sound zone from the plurality of candidate sound zones as the first sound zone.
In the method, the electronic device may select, based on a preconfigured association relationship between a display and a candidate sound zone, a sound zone associated with the display, or may select, based on a user indication, a sound zone associated with the display. On one hand, manual intervention can be reduced and efficiency can be improved, and on the other hand, manual intervention is also allowed, and flexibility is relatively high.
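The two determination manners could be sketched as follows, with a preconfigured display-to-zone table and an optional user selection overriding it (all identifiers are assumptions):

```python
DISPLAY_ZONE = {
    "central_display": "driver_zone",
    "front_passenger_display": "front_passenger_zone",
    "rear_left_display": "rear_left_zone",
}

def determine_sound_zone(display, user_selected_zone=None):
    if user_selected_zone is not None:   # manual intervention is allowed
        return user_selected_zone
    return DISPLAY_ZONE[display]         # default: preconfigured association

assert determine_sound_zone("central_display") == "driver_zone"
assert determine_sound_zone("central_display", "rear_left_zone") == "rear_left_zone"
```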
In a possible design, the obtaining a priority order of the at least one audio output apparatus associated with the first sound zone includes: selecting, from a plurality of pieces of priority information based on a specified correspondence between an audio type and priority information, target priority information corresponding to an audio type of the first audio, where each of the plurality of pieces of priority information indicates a priority order of the at least one audio output apparatus associated with the first sound zone, and different priority information corresponds to different audio types; and determining the priority order of the at least one audio output apparatus based on the target priority information.
In the method, for at least one audio output apparatus associated with a same sound zone, when different types of audios are played, a priority order of the at least one audio output apparatus is different. Therefore, the electronic device may select, based on an audio type of a to-be-played audio, a more appropriate audio output apparatus to play the audio, thereby improving listening experience of the user, and performing audio control with relatively high flexibility.
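A sketch of audio-type-specific priority information, assuming one priority list per audio type for the same sound zone; the table values are illustrative only.

```python
TYPE_PRIORITY = {
    # audio type -> priority order of the zone's apparatuses (highest first)
    "media":      ["bt_headset_driver", "headrest_driver", "spk_driver"],
    "navigation": ["spk_driver", "headrest_driver", "bt_headset_driver"],
}

def pick_by_audio_type(audio_type: str, available: set) -> str:
    order = TYPE_PRIORITY[audio_type]   # the target priority information
    return next(a for a in order if a in available)

available = {"headrest_driver", "spk_driver"}
# The same zone picks different apparatuses for different audio types.
assert pick_by_audio_type("media", available) == "headrest_driver"
assert pick_by_audio_type("navigation", available) == "spk_driver"
```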
In a possible design, before the skipping playing the first audio by using the first audio output apparatus, and playing the first audio by using a second audio output apparatus, the method further includes: determining the second audio output apparatus, where the determining the second audio output apparatus includes: determining the second sound zone based on the second display; and selecting an audio output apparatus with a highest priority from the at least one audio output apparatus associated with the second sound zone as the second audio output apparatus.
In a possible design, the selecting an audio output apparatus with a highest priority from the at least one audio output apparatus associated with the second sound zone as the second audio output apparatus includes: obtaining a priority order of the at least one audio output apparatus associated with the second sound zone; and selecting, based on the priority order of the at least one audio output apparatus associated with the second sound zone, an audio output apparatus with a highest priority from the at least one audio output apparatus associated with the second sound zone as the second audio output apparatus.
In a possible design, the first space area is a space area in a vehicle cockpit, and any audio output apparatus includes at least one of an in-vehicle loudspeaker, a headrest speaker, or a Bluetooth headset.
According to a ninth aspect, an embodiment of this application provides a control method. The method can improve flexibility and practicability of controlling an audio output apparatus to play an audio in a multi-audio output apparatus scenario. The method may be applied to a first electronic device. The method may include: when displaying first content on a first display, playing a first audio corresponding to the first content by using a first audio output apparatus, where the first display is any display of a plurality of displays located in a first space area; displaying the first content on the first display, and displaying the first content on a second display in response to a received first operation; or displaying first sub-content on the first display, and displaying second sub-content on a second display in response to a received second operation, where the first content includes the first sub-content and the second sub-content; and playing the first audio by using a second audio output apparatus and a third audio output apparatus; or playing the first audio by using an audio output apparatus of a specified type located in the first space area. The second display is included in the plurality of displays, the first audio output apparatus is associated with a first sound zone, the second audio output apparatus is associated with the first sound zone, the third audio output apparatus is associated with a second sound zone, the first sound zone is a candidate sound zone associated with the first display in a plurality of candidate sound zones, and the second sound zone is a candidate sound zone associated with the second display in the plurality of candidate sound zones; each candidate sound zone in the plurality of candidate sound zones is associated with one or more audio output apparatuses in the first space area; and/or the first audio output apparatus includes at least one audio output apparatus of a same type, the second audio output apparatus includes at least one audio output apparatus of a same type, and the third audio output apparatus includes at least one audio output apparatus of a same type, where the type includes at least one of a status type indicating a moving/static state of an audio output apparatus, a location type indicating a location range of an audio output apparatus, or a device type of an audio output apparatus. An interface on which the first content is located is a first interface.
In the method, in a scenario in which there are a plurality of displays and a plurality of audio output apparatuses in a space area, when displaying content on a display and playing a corresponding audio, an electronic device may select, from the plurality of sound zones, a sound zone associated with the display, and play the audio by using an audio output apparatus associated with the sound zone. When content on one display is copied or partially transferred to another display for display, the electronic device may also flexibly select an audio output apparatus for playing an audio that is used after the transfer. In this manner, a control device may partition the displays and the audio output apparatuses in the space area by using a sound zone, and flexibly select, by using the sound zone as a reference, an audio output apparatus configured to play an audio corresponding to content on each display, without manual intervention. Therefore, control efficiency is relatively high. Audio control requirements in more common scenarios can also be met, and universality and practicability are relatively high.
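For illustration, a sketch of how the audio sinks might be collected after a copy or splice operation, covering both the two-zone branch and the specified-type branch of the ninth aspect (the function and field names are assumptions of this description):

```python
def sinks_after_operation(mode, zone_sinks, specified_type=None, apparatuses=None):
    """zone_sinks maps each involved sound zone to its chosen apparatus."""
    if specified_type is not None:
        # Alternative branch: every apparatus of the specified type in the area.
        return [a["id"] for a in apparatuses if a["type"] == specified_type]
    if mode in ("copy", "splice"):
        # Both displays show (part of) the content: play on both zones' sinks.
        return list(zone_sinks.values())
    raise ValueError(mode)

zone_sinks = {"driver_zone": "spk_driver",
              "front_passenger_zone": "spk_front_passenger"}
assert sinks_after_operation("copy", zone_sinks) == \
       ["spk_driver", "spk_front_passenger"]
```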
In a possible design, the second audio output apparatus is the same as the first audio output apparatus; and/or a type of the second audio output apparatus is the same as a type of the third audio output apparatus.
In the method, when the content on one display is partially transferred to the other display for display, the electronic device may continue to play an audio corresponding to the content by using an audio output apparatus used before the transfer, and at the same time, play the audio corresponding to the content by using an audio output apparatus corresponding to the display to which the content is transferred, which can ensure listening experience of users corresponding to different displays. Alternatively, the electronic device may switch to a new audio output apparatus of a same type for playing, to unify an audio playing effect and perform unified management.
In a possible design, after the displaying the first content on the first display, and displaying the first content on a second display in response to a received first operation, the method further includes: in response to a received third operation, skipping displaying the first content on the first display, and continuing to display the first content on the second display; and playing the first audio by using only an audio output apparatus associated with the second sound zone, and skipping playing the first audio by using an audio output apparatus associated with the first sound zone; or in response to a received fourth operation, continuing to display the first content on the first display, and skipping displaying the first content on the second display; and playing the first audio by using an audio output apparatus associated with the first sound zone, and skipping playing the first audio by using an audio output apparatus associated with the second sound zone.
In the method, in a scenario of displaying content in a copying manner, two displays display same content, and audio output apparatuses associated with the two displays play an audio corresponding to the content. When either of the displays no longer displays the content, the electronic device may disable an audio output apparatus associated with the display, and may continue to display the content by using the other display and continue to play the corresponding audio by using an audio output apparatus associated with the other display. This can meet a requirement of a user corresponding to each display, and improve user experience.
In a possible design, after the displaying first sub-content on the first display, and displaying second sub-content on a second display in response to a received second operation, the method further includes: in response to a received fifth operation, skipping displaying the second sub-content on the second display, and displaying the first content on the first display; and playing the first audio by using an audio output apparatus associated with the first sound zone, and skipping playing the first audio by using an audio output apparatus associated with the second sound zone; or in response to a received sixth operation, skipping displaying the first sub-content on the first display, and displaying the first content on the second display; and playing the first audio by using an audio output apparatus associated with the second sound zone, and skipping playing the first audio by using an audio output apparatus associated with the first sound zone.
In the method, in a scenario of displaying content in a splicing manner, two displays display same content in a splicing manner, and audio output apparatuses associated with the two displays play an audio corresponding to the content. When either of the displays no longer displays the content, the electronic device may disable an audio output apparatus associated with the display, and may display the complete content by using the other display and continue to play the corresponding audio by using an audio output apparatus associated with the other display. This can meet a requirement of a user corresponding to each display, and improve user experience.
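A short follow-on sketch for the copy/splice exit designs, reusing the assumed names from the sketch above: when one display stops showing the content, its zone's apparatus is removed and the other zone continues alone.

```python
def sinks_after_exit(zone_sinks, exited_zone):
    """Disable the exited display's zone; the other zone keeps playing."""
    return [sink for zone, sink in zone_sinks.items() if zone != exited_zone]

zone_sinks = {"driver_zone": "spk_driver",
              "front_passenger_zone": "spk_front_passenger"}
# The first display exits: only the second zone's apparatus keeps playing.
assert sinks_after_exit(zone_sinks, "driver_zone") == ["spk_front_passenger"]
```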
In a possible design, before the playing a first audio corresponding to the first content by using a first audio output apparatus, the method further includes: determining the first audio output apparatus; and the determining the first audio output apparatus includes: determining the first sound zone based on the first display; and selecting an audio output apparatus with a highest priority from the at least one audio output apparatus as the first audio output apparatus.
In a possible design, the selecting an audio output apparatus with a highest priority from the at least one audio output apparatus as the first audio output apparatus includes: obtaining a priority order of the at least one audio output apparatus associated with the first sound zone; and selecting the audio output apparatus with the highest priority from the at least one audio output apparatus as the first audio output apparatus based on the priority order of the at least one audio output apparatus.
In a possible design, the determining the first sound zone based on the first display includes: selecting, from the plurality of candidate sound zones based on a specified association relationship between a display and a candidate sound zone, a candidate sound zone associated with the first display as the first sound zone; or determining the first sound zone based on a received sound zone selection operation, where the sound zone selection operation is used to select a candidate sound zone from the plurality of candidate sound zones as the first sound zone.
In a possible design, the obtaining a priority order of the at least one audio output apparatus associated with the first sound zone includes: selecting, from a plurality of pieces of priority information based on a specified correspondence between an audio type and priority information, target priority information corresponding to an audio type of the first audio, where each of the plurality of pieces of priority information indicates a priority order of the at least one audio output apparatus associated with the first sound zone, and different priority information corresponds to different audio types; and determining the priority order of the at least one audio output apparatus based on the target priority information.
In a possible design, before the playing the first audio by using a second audio output apparatus and a third audio output apparatus, the method further includes: determining the second audio output apparatus, and determining the third audio output apparatus; and the determining the third audio output apparatus includes: determining the second sound zone based on the second display; obtaining a priority order of the at least one audio output apparatus associated with the second sound zone; and selecting, based on the priority order of the at least one audio output apparatus associated with the second sound zone, an audio output apparatus with a highest priority from the at least one audio output apparatus associated with the second sound zone as the third audio output apparatus.
In a possible design, the first space area is a space area in a vehicle cockpit, and any audio output apparatus includes at least one of an in-vehicle loudspeaker, a headrest speaker, or a Bluetooth headset.
According to a tenth aspect, this application provides an electronic device. The electronic device includes a display, a memory, and one or more processors, where the memory is configured to store computer program code, and the computer program code includes computer instructions; and when the computer instructions are executed by one or more processors, the electronic device is enabled to perform the method according to any one of the first aspect to the ninth aspect or any possible design of any one of the first aspect to the ninth aspect.
According to an eleventh aspect, this application provides a computer-readable storage medium. The computer-readable storage medium stores a computer program, and when the computer program is run on a computer, the computer is enabled to perform the method according to any one of the first aspect to the ninth aspect or any possible design of any one of the first aspect to the ninth aspect.
According to a twelfth aspect, this application provides a computer program product. The computer program product includes a computer program or instructions. When the computer program or the instructions are run on a computer, the computer is enabled to perform the method according to any one of the first aspect to the ninth aspect or any possible design of any one of the first aspect to the ninth aspect.
According to a thirteenth aspect, this application provides a chip system. The chip system includes a processor and a memory, and the memory stores instructions. When the instructions are executed by the processor, the method according to any possible implementation of any one of the first aspect to the ninth aspect is implemented. The chip system may include a chip, or may include a chip and another discrete component.
For beneficial effects of the tenth aspect to the thirteenth aspect, refer to descriptions of beneficial effects of corresponding content in the first aspect to the ninth aspect. Details are not described herein again.
To make the objectives, technical solutions, and advantages of embodiments of this application clearer, the following further describes embodiments of this application in detail with reference to the accompanying drawings. In descriptions of embodiments of this application, the terms “first” and “second” below are merely intended for a purpose of description, and shall not be understood as an indication or implication of relative importance or an implicit indication of a quantity of indicated technical features. Therefore, a feature limited by “first” or “second” may explicitly or implicitly include one or more features.
It should be understood that in embodiments of this application, “at least one” means one or more, and “a plurality of” means two or more. The term “and/or” describes an association relationship of associated objects, and indicates that three relationships may exist. For example, A and/or B may indicate the following three cases: Only A exists, both A and B exist, and only B exists. A and B may be singular or plural. The character “/” usually indicates an “or” relationship between the associated objects. “At least one of the following” or a similar expression thereof indicates any combination of these items, including a single item or any combination of a plurality of items. For example, at least one item (piece) of a, b, or c may represent: a, b, c, a and b, a and c, b and c, or a, b, and c, where a, b, and c may be singular or plural.
Currently, more displays are configured in a vehicle, and a user has an increasing requirement for cross-screen interaction (for example, cross-screen transfer and a collaborative operation) on different displays. However, currently, interaction between in-vehicle displays is usually full-screen cross-screen transfer, which cannot meet diversified interaction requirements of the user. Therefore, how to efficiently perform interface transfer to provide the user with intelligent display that meets user requirements and habits is a problem to be studied.
In addition, as more displays are configured in the vehicle, a multi-screen-based immersive technology also develops. Currently, the multi-screen-based immersive technology mainly includes two forms: copying and displaying a same image by using a plurality of screens or splicing and displaying a same image by using a plurality of screens. More applications that provide immersive experience are developed based on multi-screen splicing. For example, after a passenger gets on the vehicle, the plurality of displays may display a startup animation in a splicing manner; in a vehicle traveling process, the plurality of displays may display a navigation interface in the splicing manner; and when a video is played, the plurality of displays may display a video interface in the splicing manner. These methods for displaying an image in the splicing manner have a relatively high requirement on displayed content, where the displayed content is usually of a specific size customized by a specific manufacturer. Therefore, the methods can be applied only to specific scenarios (for example, preset scenarios such as a startup animation, a nap mode, a map, and a music rhythm effect), and can provide limited immersive experience, resulting in relatively low practicability.
In addition to a display apparatus, more audio output apparatuses (such as speakers) are currently configured in the vehicle. A vehicle cockpit is used as an example. Audio output apparatuses currently configured in the vehicle cockpit include an in-vehicle speaker, a headrest speaker, a Bluetooth headset, and the like. In the vehicle cockpit, users at different locations may have different requirements for listening to an audio. Therefore, an audio output apparatus configured to output the audio needs to be flexibly selected based on a user requirement. In addition, content displayed on displays at different locations may be different, and corresponding audios to be played may also be different. Therefore, when different displays display content, a used audio output apparatus also needs to be correspondingly adjusted. For audio control in the vehicle cockpit, a currently used solution is mainly as follows: In a plurality of set audio playing modes, a user may manually select an audio playing mode that needs to be adjusted; and after the user selects the audio playing mode, an audio output apparatus in the vehicle cockpit may play an audio in the audio playing mode selected by the user. Different audio playing modes correspond to different audio output apparatuses or audio output manners. In the foregoing audio control solution, manual intervention is required, the audio playing modes that can be selected by the user are limited, and control can be performed only in a preset fixed mode. Therefore, an audio playing control process is relatively inflexible, resulting in relatively low flexibility and practicability of audio control.
In view of the foregoing problems, embodiments of this application provide a control method. The method is used to control a content display manner and an audio playing manner in a multi-screen scenario, to improve control flexibility and practicability.
The method provided in embodiments of this application includes, in a multi-screen scenario, an interface transfer method, a multi-screen collaborative display method, an audio control method, and the like. The interface transfer method may be used to implement cross-screen transfer of a display interface between different displays. In some embodiments, the method may further implement cross-area (or cross-location) one-screen transfer of a display interface on a same display. The multi-screen collaborative display method is used to implement an effect of collaboratively displaying an image on a plurality of displays, to provide a user with immersive viewing experience. The audio control method is used to control a playing manner of an audio corresponding to a display, and is further used to implement audio transfer between different displays.
For ease of understanding, the following first briefly describes some names or terms that may appear in embodiments of this application.
In embodiments of this application, the “split-screen display” means that a display is divided into a plurality of areas, and each area displays an application window of a task interface. At a same moment, the plurality of areas of the display may separately display application windows of a plurality of task interfaces.
For example, refer to
In embodiments of this application, the “full-screen display” means that a display displays an application window of a task interface in full screen. Displaying the task interface in full screen on the display means displaying the task interface in all display areas that can be used for display on the display.
It should be noted that in embodiments of this application, displaying the task interface in full screen on the display does not indicate that the display does not display another interface element at the same time. For example, when displaying the task interface in full screen, the display may further display a status bar. The status bar may display information such as a network identifier (for example, a wireless fidelity (wireless fidelity, Wi-Fi) network identifier) and a remaining power identifier of an in-vehicle terminal. Certainly, when displaying the task interface in full screen, the display can display not only the status bar but also another interface element. Details are not described herein in embodiments of this application.
For example, refer to
In embodiments of this application, the “application window” is a basic unit that is set in a graphical user interface (graphical user interface, GUI) for an application to use data. The application may be an application (for example, Bluetooth or Gallery) integrated in an operating system of a device, or may be an application (for example, a map application, a music application, a video application, an email application, or a shopping application) installed by a user. This is not limited in embodiments of this application.
In embodiments of this application, the application window is, for example, a video application window, a navigation application window, or a music application window.
For example, the application window includes a full-screen application window and a split-screen application window. For example, refer to
In embodiments of this application, the “floating window” is a movable window displayed on a display in a floating manner. For example, the floating window may be displayed in a floating manner at an upper layer of an application window displayed on the display, and any operation (for example, an operation on the application window) performed by a user at a lower layer does not affect display (including a display location, a display type, and the like) of the floating window.
For example, refer to
In embodiments of this application, the “floating icon” is a movable icon displayed on a display in a floating manner. For example, the floating icon may be displayed in a floating manner at an upper layer of an application window displayed on the display, and any operation performed by a user on the application window does not affect display (including a display location, a display type, and the like) of the floating icon.
For example, refer to
In embodiments of this application, the “picture-in-picture” means that a video image (a “sub-image” for short) is independently displayed on another interface (for example, another video image (a “main image” for short)) in a form of overlay, where the sub-image may always be overlaid at an upper layer of the main image, and any operation performed by a user on the main image does not affect display (including a display location, a display type, and the like) of the sub-image.
For example, refer to
In embodiments of this application, the “widget” is also referred to as a “service widget”, and means that some important interface information or operation entries are preset in the widget, to provide a direct service.
In a possible form, the widget may be independently displayed on a display.
In another possible form, the widget may be embedded into another application as a part of an interface of the application, and supports a function such as pulling up a page.
For example, refer to
In embodiments of this application, the “notification” is notification information about an application, a received message, or the like that is displayed on a display. For example, in embodiments of this application, the notification is a WeChat® message notification, an incoming call notification, or the like.
For example, refer to
The in-vehicle terminal in embodiments of this application is a terminal device disposed on a vehicle. For example, the in-vehicle terminal may be integrated into the vehicle. Optionally, the in-vehicle terminal may alternatively be independent of the vehicle and installed on the vehicle.
In embodiments of this application, the vehicle does not refer to a specific type of transportation means. Optionally, the vehicle may be a ground-based transportation means, for example, a car, a bus, a subway, or a high-speed railway. Optionally, the vehicle may alternatively be a water surface-based transportation means, for example, a ship, a cushion ship, or a submarine. Optionally, the vehicle may alternatively be an air transportation means, for example, an airplane or a helicopter.
The electronic device may be a device having a display function. Optionally, the electronic device may be a device having one or more displays.
In some embodiments of this application, the electronic device may be an in-vehicle terminal.
In some other embodiments of this application, the electronic device may be a portable device, for example, a mobile phone, a tablet computer, a wearable device with a wireless communication function (for example, a watch, a band, a helmet, or a headset), an augmented reality (augmented reality, AR)/virtual reality (virtual reality, VR) device, a notebook computer, an ultra-mobile personal computer (ultra-mobile personal computer, UMPC), a netbook, or a personal digital assistant (personal digital assistant, PDA). Alternatively, the electronic device may be a smart home device (for example, a smart television or a smart sound box), a smart car, a smart robot, a workshop device, a wireless terminal in self driving (Self Driving), a wireless terminal in remote medical surgery (Remote Medical Surgery), a wireless terminal in a smart grid (Smart Grid), a wireless terminal in transportation safety (Transportation Safety), a wireless terminal in a smart city (Smart City), a wireless terminal in a smart home (Smart Home), a flight device (for example, a smart robot, a hot air balloon, an uncrewed aerial vehicle, or an airplane), or the like.
In some embodiments of this application, the electronic device may alternatively be a portable terminal device that further includes another function like a personal digital assistant and/or a music player function. An example embodiment of the portable terminal device includes but is not limited to a portable terminal device using iOS®, Android®, Microsoft®, or another operating system. Alternatively, the portable terminal device may be another portable terminal device, for example, a laptop computer (Laptop) with a touch-sensitive surface (for example, a touch panel). It should be further understood that in some other embodiments of this application, the electronic device may alternatively be a desktop computer with a touch-sensitive surface (for example, a touch panel), instead of the portable terminal device.
The following describes, by using an example, a structure of a device to which the method provided in embodiments of this application is applicable.
In an example, refer to
As shown in
It may be understood that the electronic device 900 shown in
The processor 910 may include one or more processing units. For example, the processor 910 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). The different processing units may be independent components, or may be integrated into one or more processors.
The controller may be a nerve center and a command center of the electronic device 900. The controller may generate an operation control signal based on instruction operation code and a time sequence signal, to complete control of instruction fetching and instruction execution. The digital signal processor is configured to process a digital signal, for example, a digital image signal; it may further process another digital signal. For example, when the electronic device performs frequency selection, the digital signal processor is configured to perform Fourier transform on frequency energy, or the like. The video codec is configured to: compress or decompress a digital video. The electronic device may support one or more types of video codecs. In this way, the electronic device may play or record videos in a plurality of coding formats, for example, moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, and MPEG-4. The NPU is a neural-network (NN) computing processor. The NPU quickly processes input information by drawing on a structure of a biological neural network, for example, by drawing on a transfer mode between human brain neurons, and may further continuously perform self-learning. The NPU can implement intelligent cognition applications of the electronic device, for example, image recognition, facial recognition, speech recognition, and text understanding.
A memory may be further disposed in the processor 910, and is configured to store instructions and data. In some embodiments, the memory in the processor 910 is a cache. The memory may store instructions or data just used or cyclically used by the processor 910. If the processor 910 needs to use the instructions or the data again, the processor may directly invoke the instructions or the data from the memory. This avoids repeated access and reduces waiting time of the processor 910, thereby improving system efficiency.
Execution of the control method provided in embodiments of this application may be controlled by the processor 910 or completed by invoking another component, for example, by invoking a processing program in this embodiment of this application stored in the internal memory 921, or by invoking, through the external memory interface 920, a processing program in this embodiment of this application stored in a third-party device, to control the wireless communication module 960 to perform data communication with another device. This implements intelligence and convenience of the electronic device 900, and improves user experience. The processor 910 may include different components. For example, when a CPU and a GPU are integrated, the CPU and the GPU may cooperate with each other to perform the control method provided in embodiments of this application. For example, some algorithms in the control method are executed by the CPU, and some algorithms are executed by the GPU, to obtain relatively high processing efficiency.
In some embodiments, the processor 910 may include one or more interfaces. The interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identification module interface, a universal serial bus interface, and/or the like.
The I2C interface is a two-way synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 910 may include a plurality of groups of I2C buses. The processor 910 may be separately coupled to the touch sensor, the microphone, the camera 993, and the like through different I2C bus interfaces.
For example, in this embodiment of this application, the processor 910 may be coupled to the touch sensor through the I2C interface, so that the processor 910 communicates with the touch sensor through the I2C bus interface, to implement a touch function of the electronic device. Optionally, the processor 910 may be coupled to the camera 993 through the I2C interface, so that the processor 910 communicates with the camera 993 through the I2C bus interface, to implement an image obtaining function of the electronic device.
In some embodiments of this application, the processor 910 may obtain, through the I2C bus interface, a touch operation that is detected by the touch sensor and that is performed by a user on a display, for example, a tap operation, a touch and hold operation, a preset gesture operation, or a drag operation, to determine a specific intention corresponding to the touch operation, and respond to the touch operation to perform, for example, cross-screen interface transfer, one-screen interface transfer, or audio transfer.
Further, optionally, when the touch sensor detects the touch operation performed by the user on the display, the processor 910 may obtain, through the I2C bus interface, image information obtained by the camera 993, to recognize the user who inputs the touch operation, so as to perform corresponding interface transfer based on an identity of the user. For example, when receiving an operation performed by the user for starting an application on a driver screen and recognizing, based on the image information obtained by the camera 993, that the user is a front passenger, the electronic device may directly display an interface of the application on a front passenger screen in response to the operation for starting the application.
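A speculative sketch of this dispatch flow, in which a gesture detected by the touch sensor is mapped to a transfer intent and a camera-recognized seat may redirect the target screen; the gesture names, the intent table, and the seat strings are all assumptions of this description, not interfaces defined in this application.

```python
INTENT_TABLE = {
    "drag_to_edge": "cross_screen_transfer",
    "drag_in_screen": "one_screen_transfer",
    "three_finger_swipe": "audio_transfer",
}

def dispatch_touch(gesture, source_screen, recognized_user_seat=None):
    """Map a touch gesture to an intent; redirect the target screen when the
    camera recognizes that a non-driver user operated the source screen."""
    intent = INTENT_TABLE.get(gesture, "ignore")
    target = source_screen
    if recognized_user_seat and recognized_user_seat != "driver":
        # e.g. a front passenger operates the driver screen: open/transfer the
        # interface on the front passenger screen instead.
        target = f"{recognized_user_seat}_screen"
    return intent, target

assert dispatch_touch("drag_to_edge", "driver_screen",
                      recognized_user_seat="front_passenger") == \
       ("cross_screen_transfer", "front_passenger_screen")
```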
The I2S interface may be configured to perform audio communication. In some embodiments, the processor 910 may include a plurality of groups of I2S buses. The processor 910 may be coupled to the audio module 970 through the I2S bus, to implement communication between the processor 910 and the audio module 970. In some embodiments, the audio module 970 may transmit an audio signal to the wireless communication module 960 through the I2S interface, to implement a function of answering a call through a Bluetooth headset.
The PCM interface may also be configured to perform audio communication, and sample, quantize, and code an analog signal. In some embodiments, the audio module 970 may be coupled to the wireless communication module 960 through a PCM bus interface. In some embodiments, the audio module 970 may also transmit an audio signal to the wireless communication module 960 through the PCM interface, to implement a function of answering a call through a Bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus, and is configured to perform asynchronous communication. The bus may be a two-way communication bus. The bus converts to-be-transmitted data between serial communication and parallel communication. In some embodiments, the UART interface is usually configured to connect the processor 910 to the wireless communication module 960. For example, the processor 910 communicates with a Bluetooth module in the wireless communication module 960 through the UART interface, to implement a Bluetooth function. In some embodiments, the audio module 970 may transmit an audio signal to the wireless communication module 960 through the UART interface, to implement a function of playing music through the Bluetooth headset.
The MIPI interface may be configured to connect the processor 910 to a peripheral device like the display 994 or the camera 993. The MIPI interface includes a camera serial interface (camera serial interface, CSI), a display serial interface (display serial interface, DSI), and the like. In some embodiments, the processor 910 communicates with the camera 993 through the CSI, to implement an image obtaining function of the electronic device. The processor 910 communicates with the display 994 through the DSI interface, to implement a display function of the electronic device.
The GPIO interface may be configured by software. The GPIO interface may be configured for control signals or data signals. In some embodiments, the GPIO interface may be configured to connect the processor 910 to the camera 993, the display 994, the wireless communication module 960, the audio module 970, the sensor module 980, or the like. The GPIO interface may alternatively be configured as the I2C interface, the I2S interface, the UART interface, the MIPI interface, or the like.
The USB interface 930 is an interface that conforms to a USB standard specification, and may be specifically a mini USB interface, a micro USB interface, a USB Type-C interface, or the like. The USB interface 930 may be configured to exchange data between the electronic device and a peripheral device (for example, a mobile phone or a sound box), or may be configured to connect to a headset for playing an audio through the headset. The interface may be further configured to connect to another electronic device like a game device.
It should be understood that an interface connection relationship between modules illustrated in embodiments of this application is merely an illustrative description, and does not constitute a limitation on a structure of the electronic device. In some other embodiments of this application, the electronic device may alternatively use an interface connection manner different from that in the foregoing embodiment, or use a combination of a plurality of interface connection manners.
The display 994 is configured to display an image, a video, and the like. The display 994 includes a display panel. The display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini-LED, a micro-LED, a micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 900 may include one or N displays 994, where N is a positive integer greater than 1. The display 994 may be configured to display information entered by the user or information provided for the user and various graphical user interfaces (GUIs). For example, the display 994 may display a photo, a video, a web page, a file, or the like.
In embodiments of this application, the display 994 may be an integrated flexible display, or may be a spliced display including two rigid screens and one flexible screen located between the two rigid screens.
In embodiments of this application, the electronic device may include a plurality of displays 994. For example, as shown in
The electronic device implements a display function by using the graphics processing unit (GPU), the display 994, the application processor, and the like. The GPU is an image processing microprocessor, and is connected to the display 994 and the application processor. The GPU is configured to perform data and geometric computation for graphic rendering. The processor 910 may include one or more GPUs that execute program instructions to generate or change display information.
In embodiments of this application, the GPU may be configured to perform interface rendering. The display 994 may be configured to display an interface. For example, the interface may include but is not limited to an interface like an application interface (for example, a browser interface, an office application interface, a mailbox interface, a news application interface, a map application interface, or a social application interface), a floating window, a floating icon, a floating bubble, picture-in-picture, a widget, a notification, or an applet interface.
The camera 993 (a front-facing camera, a rear-facing camera, or a camera that may serve as both a front-facing camera and a rear-facing camera) is configured to capture a static image or a video. Usually, the camera 993 may include a photosensitive element, for example, a lens group and an image sensor. The lens group includes a plurality of lenses (convex lenses or concave lenses), and is configured to: collect an optical signal reflected by a to-be-shot object, and transfer the collected optical signal to the image sensor. The image sensor generates an original image of the to-be-shot object based on the optical signal. For example, an optical image of an object is generated through a lens, and is projected to the photosensitive element. The photosensitive element may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts an optical signal into an electrical signal, and then transmits the electrical signal to the ISP for converting the electrical signal into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format like red green blue (RGB) or YUV.
In embodiments of this application, the electronic device may include one or more cameras 993.
The electronic device may implement an image obtaining function (for example, photographing or image shooting) through the image signal processor (ISP), the camera 993, the video codec, the GPU, the display 994, the application processor, and the like. In this application, the camera 993 may be an optical zoom lens or the like. This is not limited in this application.
The ISP is configured to process data fed back by the camera 993. For example, during image shooting, a shutter is pressed, a ray of light is transmitted to a photosensitive element of the camera through a lens, and an optical signal is converted into an electrical signal. The photosensitive element of the camera transmits the electrical signal to the ISP for processing, to convert the electrical signal into a visible image. The ISP may further perform algorithm optimization on noise, brightness, and complexion of the image. The ISP may further optimize parameters such as exposure and a color temperature of an image shooting scenario. In some embodiments, the ISP may be disposed in the camera 993.
The external memory interface 920 may be configured to connect to an external storage card, for example, a micro SD card, to extend a storage capability of the electronic device. The external storage card communicates with the processor 910 through the external memory interface 920, to implement a data storage function. For example, files such as music, videos, and pictures are stored in the external storage card.
The internal memory 921 may be configured to store computer-executable program code, and the executable program code includes instructions. The processor 910 runs the instructions stored in the internal memory 921 to perform various function applications of the electronic device 900 and data processing. The internal memory 921 may include a program storage area and a data storage area. The program storage area may store an operating system, code of an application required by at least one function, and the like. The data storage area may store data (for example, a task widget) created in a use process of the electronic device 900, and the like.
The internal memory 921 may further store one or more computer programs corresponding to the algorithm of the control method provided in embodiments of this application. The one or more computer programs are stored in the internal memory 921 and are configured to be executed by the one or more processors 910. The one or more computer programs include instructions, and the instructions may be used to perform the steps in the following embodiments.
In addition, the internal memory 921 may include a high-speed random access memory, and may further include a nonvolatile memory, for example, at least one magnetic disk storage device, a flash memory device, or a universal flash storage (universal flash storage, UFS).
Certainly, the code of the algorithm of the control method provided in embodiments of this application may be further stored in an external memory. In this case, the processor 910 may run, through the external memory interface 920, the code of the algorithm that is of the control method and that is stored in the external memory.
The touch sensor is also referred to as a “touch panel”. The touch sensor may be disposed on the display 994, and the touch sensor and the display 994 constitute a touchscreen, which is also referred to as a “touch screen”. The touch sensor is configured to detect a touch operation performed on or near the touch sensor. The touch sensor may transfer the detected touch operation (including information such as a touch location, touch strength, a contact area, and touch duration) to the application processor to determine a type of the touch event. A visual output related to the touch operation may be provided through the display 994. In some other embodiments, the touch sensor may also be disposed on a surface of the electronic device 900 at a location different from that of the display 994.
In embodiments of this application, the touch operation detected by the touch sensor may be an operation performed by the user on or near the touchscreen by using a finger, or may be an operation performed by the user on or near the touchscreen by using a stylus, a touch stylus, a touch ball, or another touch auxiliary tool. This is not limited in this application.
A wireless communication function of the electronic device 900 may be implemented through the antenna 1, the antenna 2, the mobile communication module 950, the wireless communication module 960, the modem processor, the baseband processor, and the like.
The antenna 1 and the antenna 2 are configured to: transmit and receive electromagnetic wave signals. Each antenna in the electronic device 900 may be configured to cover one or more communication frequency bands. Different antennas may be further multiplexed to improve antenna utilization. For example, the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In some other embodiments, the antenna may be used in combination with a tuning switch.
The mobile communication module 950 may provide a wireless communication solution that is applied to the electronic device 900 and that includes 2G, 3G, 4G, 5G, and the like. The mobile communication module 950 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communication module 950 may receive an electromagnetic wave through the antenna 1, perform processing such as filtering and amplification on the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 950 may further amplify a signal modulated by the modem processor, and convert the signal into an electromagnetic wave for radiation through the antenna 1. In some embodiments, at least some functional modules of the mobile communication module 950 may be disposed in the processor 910. In some embodiments, at least some functional modules of the mobile communication module 950 and at least some modules of the processor 910 may be disposed in a same device. In embodiments of this application, the mobile communication module 950 may be further configured to exchange information with another device.
The modem processor may include a modulator and a demodulator. The modulator is configured to modulate a to-be-sent low-frequency baseband signal into a medium/high-frequency signal. The demodulator is configured to demodulate a received electromagnetic wave signal into a low-frequency baseband signal. Then, the demodulator transfers the low-frequency baseband signal obtained through demodulation to the baseband processor for processing. The low-frequency baseband signal is processed by the baseband processor, and then transferred to the application processor. The application processor outputs a sound signal through an audio apparatus (not limited to the speaker 970A, the receiver 970B, and the like), and displays an image or a video through the display 994. In some embodiments, the modem processor may be an independent component. In some other embodiments, the modem processor may be independent of the processor 910, and is disposed in a same device as the mobile communication module 950 or another functional module.
The wireless communication module 960 may provide a wireless communication solution that includes a wireless local area network (WLAN) (for example, a wireless fidelity (Wi-Fi) network), Bluetooth (BT), a global navigation satellite system (GNSS), frequency modulation (FM), a near field communication (NFC) technology, an infrared (IR) technology, or the like and that is applied to the electronic device 900. The wireless communication module 960 may be one or more components integrating at least one communication processing module. The wireless communication module 960 receives an electromagnetic wave through the antenna 2, performs frequency modulation and filtering processing on an electromagnetic wave signal, and sends a processed signal to the processor 910. The wireless communication module 960 may further receive a to-be-sent signal from the processor 910, perform frequency modulation and amplification on the signal, and convert the signal into an electromagnetic wave for radiation through the antenna 2. In embodiments of this application, the wireless communication module 960 is configured to establish a connection to another electronic device to perform data exchange. Alternatively, the wireless communication module 960 may be configured to access an access point device, send a control instruction to another electronic device, or receive data sent by another electronic device.
In addition, the electronic device 900 may implement an audio function, for example, music playing and recording, by using the audio module 970, the speaker 970A, the receiver 970B, the microphone 970C, the headset jack 970D, the application processor, and the like.
The audio module 970 is configured to convert digital audio information into an analog audio signal for output, and is further configured to convert an analog audio input into a digital audio signal. The audio module 970 may be further configured to code and decode an audio signal. In some embodiments, the audio module 970 may be disposed in the processor 910, or some functional modules of the audio module 970 may be disposed in the processor 910.
The speaker 970A, also referred to as a "loudspeaker", is configured to convert an electrical audio signal into a sound signal. The electronic device may be used by the user to listen to audio or answer a hands-free call through the speaker 970A.
The receiver 970B, also referred to as an “earpiece”, is configured to convert an electrical audio signal into a sound signal.
The microphone 970C, also referred to as a "mike" or a "mic", is configured to convert a sound signal into an electrical signal. When making a call or sending speech information, a user may place the mouth of the user near the microphone 970C to make a sound, to input a sound signal to the microphone 970C. In this application, at least two microphones 970C, for example, a local microphone and a wireless microphone, may be disposed in the electronic device. In some other embodiments, three, four, or more microphones 970C may alternatively be disposed in the electronic device, to collect a sound signal, reduce noise, and the like. In embodiments of this application, the electronic device may collect a sound signal in the real world through the microphone 970C.
The electronic device 900 may receive an input through the button 990, and generate a key signal input related to a user setting and function control of the electronic device 900. The electronic device 900 may generate a vibration prompt (for example, an incoming call vibration prompt) by using the motor 991. The indicator 992 in the electronic device 900 may be an indicator light, may be configured to indicate a charging state and a battery level change, and may be further configured to indicate a message, a missed call, a notification, and the like. The SIM card interface 995 in the electronic device 900 is configured to connect to a SIM card. The SIM card may be inserted into the SIM card interface 995 or removed from the SIM card interface 995, to implement contact with or separation from the electronic device 900.
The power management module 940 is configured to supply power to the processor 910, the internal memory 921, the display 994, the camera 993, the wireless communication module 960, and the like.
It may be understood that, the structure shown in this embodiment of this application does not constitute a specific limitation to the electronic device. During actual application, the electronic device 900 may include more or fewer components than those shown in
A software system of the electronic device 900 may use a layered architecture, an event-driven architecture, a microkernel architecture, a micro service architecture, or a cloud architecture. In embodiments of this application, the Android system of the layered architecture is used as an example to describe the software structure of the electronic device.
In the layered architecture, software is divided into several layers, and each layer has a clear role and task. The layers communicate with each other through a software interface. As shown in
The application layer is the top layer of an operating system and includes native applications of the operating system, such as Camera, Gallery, Calendar, Bluetooth, Music, Videos, and Messaging. An application in embodiments of this application is a software program that can implement one or more specific functions. Usually, a plurality of applications may be installed in the electronic device, for example, a camera application, a mailbox application, and a smart home device control application. An application mentioned below may be a system application installed when the electronic device is delivered from the factory, or may be a third-party application downloaded by a user from a network or obtained by the user from another electronic device during use of the electronic device.
Certainly, a developer may also write an application and install the application at this layer. In a possible implementation, an application may be developed in the Java language, and is implemented by invoking an application programming interface (API) provided by the application framework layer. The developer may interact with a bottom layer (for example, the kernel layer) of the operating system by using the application framework, to develop an application of the developer.
The application framework layer provides an application programming interface (API) and a programming framework for applications at the application layer. The application framework layer may include some predefined functions. The application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like.
The window manager is configured to manage a window program. The window manager may obtain a size of the display, determine whether there is a status bar, perform screen locking, take a screenshot, and the like.
The content provider is configured to: store and obtain data, and enable the data to be accessed by an application. The data may include information, for example, a file (for example, a document, a video, an image, or an audio) and a text.
The view system includes visual controls, for example, controls that display content such as texts, pictures, and documents. The view system may be configured to construct an application. An interface in a display window may include one or more views. For example, a display interface including an SMS message notification icon may include a text display view and a picture display view.
The phone manager is configured to provide a communication function of the electronic device. The notification manager enables an application to display notification information in a status bar, and may be configured to transmit a notification-type message. The displayed information may automatically disappear after a short pause without user interaction.
The runtime includes a core library and a virtual machine. The runtime is responsible for scheduling and management of the Android system.
The core library includes two parts: one part is functions that need to be called by the Java language, and the other part is the core library of the Android system. The application layer and the application framework layer run on the virtual machine. Java is used as an example. The virtual machine executes Java files of the application layer and the application framework layer as binary files. The virtual machine is configured to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules, for example, a surface manager, a media library, a three-dimensional graphics processing library (for example, OpenGL ES), a two-dimensional graphics engine (for example, SGL), and an image processing library. The surface manager is configured to manage a display subsystem and provide fusion of two-dimensional and three-dimensional graphics layers for a plurality of applications. The media library supports playback and recording in a plurality of commonly used audio and video formats, and static image files. The media library may support a plurality of audio and video encoding formats such as MPEG-4, H.264, MP3, AAC, AMR, JPG, and PNG. The three-dimensional graphics processing library is configured to implement three-dimensional graphics drawing, image rendering, compositing, layer processing, and the like. The two-dimensional graphics engine is a drawing engine for two-dimensional drawing.
The kernel layer provides a core system service of the operating system. For example, security, memory management, process management, a network protocol stack, and a driver model are all based on the kernel layer. The kernel layer is also used as an abstraction layer between hardware and a software stack. The layer has many drivers related to the electronic device, mainly including a display driver, a driver of a keyboard used as an input device, a flash driver that is based on a memory technology device, a camera driver, an audio driver, a Bluetooth driver, a Wi-Fi driver, and the like.
It should be understood that the function service described above is merely an example. During actual application, the electronic device may alternatively be divided into more or fewer function services based on other factors, or the functions of each service may be divided in another manner, or the electronic device may not be divided into function services but works as a whole.
The following separately describes the interface transfer method, the multi-screen collaborative display method, and the audio control method that are provided in embodiments of this application.
According to the interface transfer method provided in embodiments of this application, a task interface on a first screen can be transferred across screens to a second screen for display. For example, the task interface may be understood as an interface corresponding to an application running on an in-vehicle terminal.
In some embodiments, a plurality of task interfaces may be displayed on the first screen. Based on the interface transfer method provided in embodiments of this application, one of the plurality of task interfaces displayed on the first screen can be transferred to the second screen for display, or more than one (for example, two) of the plurality of task interfaces displayed on the first screen can be simultaneously transferred to the second screen for display.
In some other embodiments, one task interface may be displayed on the first screen. Based on the interface transfer method provided in embodiments of this application, the task interface displayed on the first screen can be transferred across screens to the second screen for display.
Optionally, in some other embodiments of this application, based on the interface transfer method provided in embodiments of this application, the task interface on the first screen can be transferred on one screen from a first location to a second location on the first screen for display.
Further, after the task interface on the first screen is transferred across screens to the second screen for display, based on the interface transfer method provided in embodiments of this application, the task interface transferred to the second screen may be further transferred back to the first screen for display, or the task interface transferred to the second screen may be transferred to a third screen for display.
Alternatively, further, after the task interface on the first screen is transferred on one screen from the first location to the second location on the first screen for display, based on the interface transfer method provided in embodiments of this application, the task interface transferred to the second location on the first screen may be further transferred across screens to the second screen for display, or the task interface transferred to the second location on the first screen may be transferred on one screen to a third location on the first screen for display. The third location may be the same as or different from the first location. This is not limited in embodiments of this application.
Based on the interface transfer method provided in embodiments of this application, when performing interface transfer, the in-vehicle terminal may determine, based on task information of a target screen (for example, the second screen) before the interface transfer, a display type on the target screen after the interface transfer. The task information of the target screen before the interface transfer may include a display type and/or classification information of a task interface displayed on the target screen before the interface transfer.
Optionally, based on the interface transfer method provided in embodiments of this application, when performing the interface transfer, the in-vehicle terminal may determine, based on task information of the first interface displayed on an original screen (for example, the first screen) before the interface transfer, the display type on the target screen after the interface transfer. The task information of the first interface displayed on the original screen (for example, the first screen) before the interface transfer includes a display type and/or classification information of the first interface displayed on the original screen (for example, the first screen) before the interface transfer.
Optionally, based on the interface transfer method provided in embodiments of this application, when performing the interface transfer, the in-vehicle terminal may further determine, based on task information of the first interface displayed on an original screen (for example, the first screen) before the interface transfer and/or task information of a target screen (for example, the second screen) before the interface transfer, a display type on the target screen after the interface transfer.
A display type of a task interface indicates a representation form of the task interface. For example, the display type of the task interface may include but is not limited to any one of full-screen display, split-screen display, a floating window, a floating icon, a floating bubble, picture-in-picture, a service widget, a control, a notification, or the like.
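For ease of understanding only, the foregoing display types may be modeled as a simple enumeration. The following Java sketch is illustrative; the type and value names are assumptions and do not correspond to a real system API.

    // Hypothetical enumeration of the display types described above.
    public enum DisplayType {
        FULL_SCREEN,        // the task interface occupies the entire screen
        SPLIT_SCREEN,       // the screen is shared with another task interface
        FLOATING_WINDOW,
        FLOATING_ICON,
        FLOATING_BUBBLE,
        PICTURE_IN_PICTURE,
        SERVICE_WIDGET,
        CONTROL,
        NOTIFICATION
    }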
Classification information corresponding to the task interface indicates whether the task interface is a preset focused application interface. The preset focused application interface is an interface that is of an application with a relatively high priority and that is preset by a user. It may be understood that, in an in-vehicle scenario, some applications attract relatively high attention from the user or are of relatively high importance, and display of these applications needs to be preferentially ensured. Therefore, the user may set a preset focused application based on a specific requirement of the user. For the preset focused application interface, when displaying the interface, including displaying the interface before and after the interface transfer, the in-vehicle terminal preferentially ensures eye-catching display (for example, full-screen display) of the interface, to avoid interference from another interface to the interface, and facilitate viewing and/or operation of the user.
For example, the preset focused application is a map application, a navigation application, or another application related to driving safety, or is a video application, a social application, an entertainment application (like a game application, a music application, or an office application), or another application that is frequently used by the user. The preset focused application is not specifically limited in embodiments of this application.
In embodiments of this application, different displays of the in-vehicle terminal may correspond to different preset focused applications. For example, a preset focused application corresponding to a driver screen may be a map application, a navigation application, or another application related to driving safety. A preset focused application corresponding to a front passenger screen, a rear left screen, or a rear right screen may be a video application, a social application, an entertainment application (like a game application, a music application, or an office application), or another application that is frequently used by the user.
It should be noted that specific display types of task interfaces on the original screen and the target screen before the interface transfer are not limited in embodiments of this application.
In a possible structure, the original screen and the target screen in embodiments of this application may belong to one in-vehicle terminal, and the original screen and the target screen may share modules such as a processor and a memory of the in-vehicle terminal. In other words, the in-vehicle terminal may include a plurality of displays, and the plurality of displays may work independently, for example, may separately perform audio and video playing, an entertainment game operation, and the like. A communication connection is established between the plurality of displays. For example, the plurality of displays may communicate with each other through a communication bus interface. For another example, the plurality of displays may communicate with each other through an in-vehicle local area network.
In another possible structure, the original screen and the target screen in embodiments of this application may separately belong to different in-vehicle terminals, and a communication connection is established between the in-vehicle terminals. In embodiments of this application, the cross-screen transfer of the task interface is cross-device interface transfer actually. For example, the in-vehicle terminals communicate with each other through a communication bus interface, or communicate with each other through an in-vehicle local area network.
It should be noted that, in embodiments of this application, a screen or a display screen may also be referred to as a display. For example, the first screen may also be referred to as a first display, the original screen may also be referred to as an original display, and the target screen may also be referred to as a target display.
The following describes in detail the interface transfer method provided in embodiments of this application with reference to a specific embodiment by using an example in which the in-vehicle terminal includes a plurality of displays (for example, including a first screen, a second screen, and a third screen).
In some embodiments, it is assumed that one or more task interfaces (each denoted as a first task interface) are displayed on an original screen (for example, the first screen). Based on the interface transfer method provided in embodiments of this application, one or more of the task interfaces may be transferred across screens to a target screen (for example, the second screen or the third screen) for display.
S1001: An in-vehicle terminal displays one or more first task interfaces through a first screen.
In some embodiments, the in-vehicle terminal displays one first task interface through the first screen.
In some other embodiments, the in-vehicle terminal displays a plurality of first task interfaces through the first screen.
In this embodiment of this application, the one or more first task interfaces may be displayed on the first screen of the in-vehicle terminal in any display type.
If the in-vehicle terminal displays the one first task interface through the first screen, the one first task interface may be displayed on the first screen in full screen (as shown in
If the in-vehicle terminal displays the plurality of first task interfaces through the first screen, in some examples, all the plurality of first task interfaces may be displayed on the first screen in a form of non-full screen. For example, as shown in
In some other examples, one of the plurality of first task interfaces may be displayed on the first screen in full screen, and the other interfaces are displayed on the first screen in a form of floating window, floating icon, floating bubble, picture-in-picture, widget, control, notification, or the like. For example, as shown in
In embodiments of this application, the in-vehicle terminal may display the plurality of first task interfaces on the first screen in response to an operation performed by the user for starting the plurality of first task interfaces. For example, as shown in
Optionally, if an application to which the interface 1101 shown in
Alternatively, optionally, if an application to which the interface 1101 shown in
Alternatively, optionally, if an application to which the interface 1101 shown in
Optionally, as shown in
In an example, if the interface 1103 blocks a focused area (or a focus area) on the interface 1101, the in-vehicle terminal may perform adaptive location adjustment on the focused area (or the focus area) on the interface 1101 based on an area blocked by the interface 1103 on the interface 1101, so that the focused area (or the focus area) on the interface 1101 is not blocked by the interface 1103. The focused area (or the focus area) on the interface 1101 may be determined by the in-vehicle terminal based on one or more of the following: a function of information displayed on the interface 1101, an application function corresponding to the interface 1101, a type corresponding to a plurality of layers (Layer) on the interface 1101, a user-defined setting, and the like. This is not limited in embodiments of this application. For example, when the interface 1101 is an interface of a navigation application, the focused area on the interface 1101 may be an area on which a navigation route is currently displayed.

Optionally, the in-vehicle terminal may also display, on the first screen in response to an operation performed by the user for starting a task combination, the plurality of first task interfaces corresponding to the task combination. Alternatively, the in-vehicle terminal may perform another process until the plurality of first task interfaces are displayed on the first screen. This is not limited in embodiments of this application.
It should be noted that
S1002: The in-vehicle terminal receives a first operation performed by the user on a first interface. The first interface is one of the one or more first task interfaces. The first operation is used to transfer the first interface.
The first operation is a transfer operation performed by the user on the first interface in the one or more first task interfaces displayed on the first screen. The first operation may include but is not limited to a preset sliding operation, a preset touch operation, a voice operation, a preset body action, and the like. This is not limited in this application.
For example, the preset sliding operation is an operation that a finger leaves after quickly sliding on the first interface, an operation that a finger leaves after slowly sliding for a distance on the first interface, an operation that a plurality of fingers (for example, two fingers or three fingers) leave after quickly sliding on the first interface, or an operation that a plurality of fingers (for example, two fingers or three fingers) leave after slowly sliding for a distance on the first interface. For example, the preset touch operation is a touch and hold operation (for example, an operation whose pressing duration meets a time threshold t), a multi-tap operation (for example, a double-tap operation or a triple-tap operation), a multi-finger pinch-in operation (for example, a two-finger pinch-in operation or a three-finger pinch-in operation), a multi-finger pinch-out operation (for example, a two-finger pinch-out operation or a three-finger pinch-out operation), or the like. For example, the preset body action is a preset air gesture operation or a preset facial expression.
In embodiments of this application, the first operation received by the in-vehicle terminal may be used to transfer the first interface across screens, or may be used to transfer the first interface on one screen.
User intentions corresponding to different operations such as the preset sliding operation, the preset touch operation, the voice operation, and the preset body action may be preset and associated by the user, or predefined by a system.
For example, if the user sets that a preset operation that two fingers leave after quickly sliding on the first interface is associated with the cross-screen transfer, when the in-vehicle terminal receives an operation that two fingers of the user leave after quickly sliding on the first interface, the in-vehicle terminal may determine that an intention of the user is to transfer the first interface across screens. For another example, if the user sets that a preset operation that two fingers leave after slowly sliding for a distance on the first interface is associated with the one-screen transfer, when the in-vehicle terminal receives an operation that two fingers of the user leave after slowly sliding for a distance on the first interface, the in-vehicle terminal may determine that an intention of the user is to transfer the first interface on one screen.
Optionally, to facilitate an operation of the user and provide more efficient interface transfer experience for the user, in some embodiments, the in-vehicle terminal may associate an interface transfer intention with a sliding operation of one or more fingers at different speeds (or accelerations). For example, the cross-screen transfer may be associated with an operation that one or more fingers leave after quickly sliding, and the one-screen transfer may be associated with an operation that one or more fingers leave after slowly sliding for a distance. Alternatively, the cross-screen transfer may be associated with an operation that one or more fingers leave after slowly sliding for a distance, and one-screen transfer may be associated with an operation that one or more fingers leave after quickly sliding. This is not specifically limited in this application.
In embodiments of this application, quick and slow are relative concepts. For example, if a sliding speed (or a sliding acceleration) is greater than a preset threshold, the in-vehicle terminal may consider that the sliding is quick sliding; or if a sliding speed (or a sliding acceleration) is less than or equal to a preset threshold, the in-vehicle terminal may consider that the sliding is slow sliding. Alternatively, if a sliding speed (or a sliding acceleration) is greater than or equal to a preset threshold, the in-vehicle terminal may consider that the sliding is quick sliding; or if a sliding speed (or a sliding acceleration) is less than a preset threshold, the in-vehicle terminal may consider that the sliding is slow sliding.
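For example, the quick/slow determination may be sketched as follows in Java. The threshold value is merely an assumed example; the actual value and comparison direction are implementation-specific, as described above.

    // Minimal sketch: classify sliding as quick or slow against a preset threshold.
    // The threshold value here is an assumed example (pixels per second).
    static final float SPEED_THRESHOLD = 1500f;

    static boolean isQuickSliding(float slidingSpeedPxPerSec) {
        // Quick sliding if the speed is greater than the preset threshold;
        // otherwise the sliding is considered slow sliding.
        return slidingSpeedPxPerSec > SPEED_THRESHOLD;
    }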
For example, refer to
It may be understood that a one-finger or multi-finger sliding operation complies with operation habits of most users, and helps the users memorize and perform the operation. Therefore, specific interface transfer intentions such as a cross-screen transfer intention and a one-screen transfer intention are identified based on different sliding speeds (or sliding accelerations). This can facilitate memory and operation of the user, and can further improve performance of interaction between the in-vehicle terminal and the user, to improve user experience.
In embodiments of this application, in an example, the in-vehicle terminal may determine a sliding speed of a finger of the user on the display based on received MotionEvent (for example, MotionEvent.ACTION_MOVE (“ACTION_MOVE event” for short below)). The sliding speed of the finger of the user on the display may be represented by a pixel at which the finger of the user slides on the display in a unit time (for example, 1 second).
It may be understood that MotionEvent represents an event sequence related to a user touch. An operation (for example, a sliding operation) performed by a finger of the user on the display corresponds to a sequence of touch events. For the multi-finger sliding operation, a MotionEvent.ACTION_DOWN event ("ACTION_DOWN event" for short below) is generated when a first finger of the user touches the display; an ACTION_POINTER_DOWN event is generated when a second finger or one or more next fingers touch the display; the ACTION_MOVE event is generated in a multi-finger sliding process; an ACTION_POINTER_UP event is generated when a non-last finger leaves the display; and a MotionEvent.ACTION_UP event (ACTION_UP event for short below) is generated when a last finger leaves the display.
The ACTION_DOWN event indicates a start of the multi-finger sliding operation. To be specific, when the in-vehicle terminal detects that a touch point on the display is pressed, the ACTION_DOWN event is triggered.
The ACTION_MOVE event is triggered by an operation that a plurality of fingers of the user press the display and slide. To be specific, when the in-vehicle terminal detects that a plurality of touch points on the display are pressed and move, the ACTION_MOVE event is triggered.
The ACTION_UP event indicates an end of the multi-finger sliding operation. To be specific, when the in-vehicle terminal detects that no touch point on the display is pressed by the user, the ACTION_UP event is triggered.
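Based on the standard Android MotionEvent API, the foregoing event sequence may be observed in a touch listener roughly as follows. This is a sketch only: the handling logic is omitted and the class name is an assumption.

    import android.view.MotionEvent;
    import android.view.View;

    class TransferGestureListener implements View.OnTouchListener {
        @Override
        public boolean onTouch(View v, MotionEvent event) {
            switch (event.getActionMasked()) {
                case MotionEvent.ACTION_DOWN:         // the first finger touches the display
                    break;
                case MotionEvent.ACTION_POINTER_DOWN: // a subsequent finger touches the display
                    break;
                case MotionEvent.ACTION_MOVE:         // one or more fingers slide on the display
                    break;
                case MotionEvent.ACTION_POINTER_UP:   // a non-last finger leaves the display
                    break;
                case MotionEvent.ACTION_UP:           // the last finger leaves the display
                    break;
            }
            return true; // keep receiving subsequent events of this gesture
        }
    }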
Based on the foregoing touch event mechanism, in a possible implementation, the in-vehicle terminal may determine, based on a method shown in
As shown in S1401 in
For example, when receiving the ACTION_DOWN event, the in-vehicle terminal may determine and store a finger index of a first finger that is of the user and that touches the display and initial location coordinates (initial coordinates for short) of the finger on the display. For example, the in-vehicle terminal may determine an initial location of the finger of the user on the display based on the finger index, store the finger index into ArrayList, and store the initial location coordinates (namely, the initial coordinates) into HashMap based on the finger index.
Similarly, when receiving the ACTION_POINTER_DOWN event, the in-vehicle terminal may determine and store finger indexes of one or more other fingers that are of the user and that touch the display and initial coordinates of the fingers on the display. For example, the in-vehicle terminal may determine initial locations of the one or more other fingers of the user on the display based on the finger indexes, store the finger indexes into ArrayList, and store the initial coordinates into HashMap based on the finger indexes.
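For example, the bookkeeping described above may be sketched as follows, where finger (pointer) identifiers are kept in ArrayList and initial coordinates in HashMap. Field and method names are merely illustrative.

    import android.graphics.PointF;
    import android.view.MotionEvent;
    import java.util.ArrayList;
    import java.util.HashMap;

    class PointerTracker {
        private final ArrayList<Integer> fingerIndexes = new ArrayList<>();
        private final HashMap<Integer, PointF> initialCoordinates = new HashMap<>();

        // Called on ACTION_DOWN and ACTION_POINTER_DOWN.
        void onFingerDown(MotionEvent event) {
            int index = event.getActionIndex();        // index of the finger that just touched
            int pointerId = event.getPointerId(index); // stable identifier for this finger
            fingerIndexes.add(pointerId);
            initialCoordinates.put(pointerId,
                    new PointF(event.getX(index), event.getY(index)));
        }
    }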
As shown in S1402 in
For example, when receiving the ACTION_MOVE event, the in-vehicle terminal calculates an average sliding speed of one or more fingers in real time. For example, the average sliding speed of the one or more fingers may include a speed of the one or more fingers in an x-axis direction and a speed of the one or more fingers in a y-axis direction. The in-vehicle terminal may separately compare the speed of the one or more fingers in the x-axis direction and the speed of the one or more fingers in the y-axis direction with preset thresholds, to determine whether the sliding speed is greater than a corresponding preset threshold, and further mark whether the sliding speed is greater than the preset threshold.
Optionally, the ACTION_MOVE event is generated a plurality of times in a one-finger or multi-finger sliding process. Therefore, a mark indicating whether the sliding speed is greater than the preset threshold is also refreshed a plurality of times.
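In a possible implementation (a sketch only; the embodiments do not prescribe a specific API), Android's VelocityTracker may be used to obtain the per-axis speeds during each ACTION_MOVE event and to refresh the mark. The threshold values below are assumed examples.

    import android.view.MotionEvent;
    import android.view.VelocityTracker;

    class SpeedMonitor {
        private static final float THRESHOLD_X = 1500f; // assumed, pixels per second
        private static final float THRESHOLD_Y = 1500f; // assumed, pixels per second

        private final VelocityTracker tracker = VelocityTracker.obtain();
        private boolean quickSliding; // refreshed on every ACTION_MOVE event

        void onMove(MotionEvent event) {
            tracker.addMovement(event);
            tracker.computeCurrentVelocity(1000); // speeds in pixels per second
            float vx = Math.abs(tracker.getXVelocity());
            float vy = Math.abs(tracker.getYVelocity());
            // Mark whether the speed in either axis direction exceeds its threshold.
            quickSliding = vx > THRESHOLD_X || vy > THRESHOLD_Y;
        }
    }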
Further, as shown in S1403 in
For example, if the sliding speed of the finger of the user on the display is greater than (or greater than or equal to) the preset threshold, the in-vehicle terminal may consider that the sliding is quick sliding; or if the sliding speed of the finger of the user on the display is less than or equal to (or less than) the preset threshold, the in-vehicle terminal may consider that the sliding is slow sliding.
It should be noted that cross-screen sliding can be actually implemented only when a speed in the sliding direction is greater than the preset threshold. For example, sliding speeds in the right direction and the left direction need to be determined based on whether the speed in the x-axis direction is greater than the preset threshold, and sliding speeds in the forward direction and the backward direction need to be determined based on whether the speed in the y-axis direction is greater than the preset threshold. Optionally, for another direction, whether the speed is greater than the preset threshold may be determined by selecting either the speed in the x-axis direction or the speed in the y-axis direction.
Based on the foregoing mechanism, the in-vehicle terminal may determine whether the interface transfer intention of the user is cross-screen transfer or one-screen transfer.
For example, as shown in
S1404-1: The in-vehicle terminal determines sliding directions of the plurality of fingers of the user on the display based on a received ACTION_POINTER_UP event and the received ACTION_UP event.
For example, when receiving the ACTION_POINTER_UP event, the in-vehicle terminal may store end location coordinates (end coordinates for short) of a corresponding finger into HashMap based on a finger index of the corresponding finger. Similarly, when receiving the ACTION_UP event, the in-vehicle terminal may store end coordinates of the last finger leaving the display into HashMap based on a finger index.
For example, the in-vehicle terminal may calculate a sliding direction of each finger based on initial coordinates (for example, (startPointX, startPointY)) and end coordinates (for example, (endPointX, endPointY)) that are of the plurality of fingers and that are stored in HashMap. For example, the sliding direction may be calculated based on Calculation formula 1:
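The exact form of Calculation formula 1 is not reproduced here. A plausible reconstruction, consistent with the angle-based direction determination described below, computes the angle between the sliding trajectory and the positive direction of the horizontal axis:

    // Plausible reconstruction of Calculation formula 1 (an assumption, since the
    // original formula is not shown): angle of the trajectory from the initial
    // point to the end point, relative to the positive x-axis, in (-180, 180] degrees.
    static double slidingDirectionDegrees(float startPointX, float startPointY,
                                          float endPointX, float endPointY) {
        return Math.toDegrees(Math.atan2(endPointY - startPointY,
                                         endPointX - startPointX));
    }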
In an example, for a multi-finger sliding operation, when calculating a sliding speed, the in-vehicle terminal may calculate a sliding speed of each finger, obtain a final sliding speed by calculating an average of sliding speeds of a plurality of fingers, and use the final sliding speed as the sliding speed of the first operation, or the in-vehicle terminal may select a sliding speed of any finger as the sliding speed of the first operation. This is not limited in embodiments of this application. Similarly, when determining a sliding direction of the first operation, the in-vehicle terminal may first determine a sliding direction of each finger, and then use a consistent sliding direction of the plurality of fingers as the sliding direction of the first operation, or the in-vehicle terminal may select a sliding direction of any finger as the sliding direction of the first operation. This is not limited in embodiments of this application. In addition, when determining a sliding distance of the first operation, the in-vehicle terminal may first determine a sliding distance of each finger, and calculate an average value as the sliding distance of the first operation, or the in-vehicle terminal may select a sliding distance of any one finger as the sliding distance of the first operation. This is not limited in embodiments of this application.
S1405-1: The in-vehicle terminal determines a target screen for cross-screen transfer based on the sliding directions of the fingers of the user on the display.
In a possible implementation, the in-vehicle terminal may determine the target screen based on sliding directions in which one or more fingers of the user quickly slide on the first interface and location relationships between an original screen and a plurality of displays. The target screen may be a display to which the sliding direction points.
Alternatively, for example, as shown in
S1404-2: The in-vehicle terminal determines sliding distances and sliding directions of the plurality of fingers of the user on the display based on a received ACTION_POINTER_UP event and the received ACTION_UP event.
For example, the in-vehicle terminal may calculate a sliding distance and a sliding direction of each finger based on initial coordinates (for example, (startPointX, startPointY)) and end coordinates (for example, (endPointX, endPointY)) that are of the plurality of fingers and that are stored in HashMap. For example, the sliding direction may be calculated based on Calculation formula 1, and the sliding distance may be calculated based on Calculation formula 2:
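As with Calculation formula 1, the exact form of Calculation formula 2 is not reproduced; a plausible reconstruction is the Euclidean distance between the stored initial coordinates and end coordinates:

    // Plausible reconstruction of Calculation formula 2 (an assumption):
    // straight-line distance between the initial point and the end point.
    static double slidingDistance(float startPointX, float startPointY,
                                  float endPointX, float endPointY) {
        return Math.hypot(endPointX - startPointX, endPointY - startPointY);
    }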
S1405-2: The in-vehicle terminal determines a target location for one-screen transfer based on the sliding distances and the sliding directions of the fingers of the user on the display.
Optionally, for the one-screen transfer, the in-vehicle terminal may further determine whether a sliding distance is greater than a preset threshold after obtaining the sliding distance of a finger on the display through calculation. If the sliding distance is greater than the preset threshold, the in-vehicle terminal performs S1405-2. If the sliding distance is less than or equal to the preset threshold, the in-vehicle terminal may keep the interface displayed at an original location.
Optionally, in this embodiment of this application, in a process in which the user slides one or more fingers on the first interface, the first interface may move along with the fingers of the user. For example, in the process in which the user slides one or more fingers on the first interface, the in-vehicle terminal may invoke a Rect object (for example, mWindowDragBounds) that indicates a window location in a terminal system, to implement an effect that the first interface moves along with the fingers of the user.
A sliding direction in which the user slides one or more fingers on the first interface may be represented by an angle between a sliding trajectory of the finger of the user and a positive direction of a horizontal axis of a preset coordinate system. For example, finger coordinates of the user are coordinates of a finger of the user in the preset coordinate system. The preset coordinate system is, for example, an xOy coordinate system, where an origin O of the xOy coordinate system may be an upper left corner of the first screen, an x-axis may be an upper edge of the first screen, a right direction is a +x-axis direction (that is, the positive direction of the horizontal axis is rightward), a y-axis may be a left edge of the first screen, and a downward direction is a +y-axis direction (that is, the positive direction of the vertical axis is downward). Specific settings of the preset coordinate system xOy are not limited in embodiments of this application.
For example, if the angle is within a range of [−22.5°, 22.5°) shown in
For another example, if the angle is within a range of [−67.5°, −22.5°) shown in
For another example, if the angle is within a range of [−112.5°, −67.5°) shown in
For another example, if the angle is within a range of [−157.5°, −112.5°) shown in
For another example, if the angle is within a range of [157.5°, −157.5°) shown in
For another example, if the angle is within a range of [112.5°, 157.5°) shown in
For another example, if the angle is within a range of [67.5°, 112.5°) shown in
For another example, if the angle is within a range of [22.5°, 67.5°) shown in
For example, if the original screen is the driver screen, it may be determined that the target screen is a rear right screen. It should be noted that the correspondence between the sliding direction and the target screen shown in
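For ease of understanding, the eight 45-degree sectors described above may be mapped to sector indexes as follows. Which sector corresponds to which target screen depends on the screen layout and the stored correspondence, and is therefore left abstract in this sketch.

    // Minimal sketch: map a sliding angle (in degrees, relative to the positive
    // x-axis) to one of eight 45-degree sectors. Sector 0 is [-22.5, 22.5),
    // sector 1 is [22.5, 67.5), and so on.
    static int sectorIndex(double angleDegrees) {
        // Shift by 22.5 degrees and normalize to [0, 360) so that each sector
        // [k*45 - 22.5, k*45 + 22.5) maps to index k (0..7).
        double normalized = ((angleDegrees + 22.5) % 360 + 360) % 360;
        return (int) (normalized / 45.0);
    }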
In some other embodiments, the in-vehicle terminal may alternatively determine, based on another mechanism, whether the interface transfer intention of the user is the cross-screen transfer or the one-screen transfer. For example, when receiving the first operation performed by the user on the first interface, the in-vehicle terminal may determine, based on a location of the first interface before the transfer and a location relationship between displays, whether the interface transfer intention of the user is the cross-screen transfer or the one-screen transfer, and a displayed location of the first interface after the transfer. For example, as shown in
Similarly, the display 2 is located on the right side of the display 1. For example, the first screen is the display 2. If the location of the first interface before the transfer is close to the left side of the display 2, and the first operation is an operation of sliding the first interface leftward, the in-vehicle terminal may determine that the interface transfer intention of the user is the cross-screen transfer, for example, the first interface is transferred from the display 2 to the display 1. If the location of the first interface before the transfer is close to the left side of the display 2, and the first operation is an operation of sliding the first interface rightward, the in-vehicle terminal may determine that the interface transfer intention of the user is the one-screen transfer, for example, the first interface is transferred from the left side of the display 2 to the right side of the display 2.
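A minimal sketch of this location-based determination is shown below for the display 2 case, assuming that "close to a side" is judged against the midline of the display; the actual criterion and names are assumptions and are not limited in embodiments of this application.

    enum TransferIntent { CROSS_SCREEN, ONE_SCREEN }

    // Sketch for display 2 (display 1 is on its left); the midline criterion
    // is an assumed example.
    static TransferIntent intentOnDisplay2(float interfaceCenterX, int screenWidth,
                                           boolean slidingLeftward) {
        boolean closeToLeftSide = interfaceCenterX < screenWidth / 2f;
        if (closeToLeftSide && slidingLeftward) {
            return TransferIntent.CROSS_SCREEN; // transfer from display 2 to display 1
        }
        return TransferIntent.ONE_SCREEN;       // move within display 2
    }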
For example, as shown in
S1003-1: The in-vehicle terminal determines that the target screen is the second screen.
In some embodiments of this application, the in-vehicle terminal may determine the target screen based on the received first operation for cross-screen transfer of the first interface.
For example, the first operation for cross-screen transfer of the first interface is an operation that one or more fingers of the user leave after quickly sliding. In some embodiments, the in-vehicle terminal may determine the target screen based on the sliding direction of the finger of the user. For a specific method and process of determining the target screen, refer to the foregoing specific descriptions. Details are not described herein again.
It may be understood that, when the in-vehicle terminal includes the plurality of displays (for example, the first screen and the second screen), the in-vehicle terminal definitely knows relative location relationships between the plurality of displays. Similarly, when the plurality of displays (for example, the first screen and the second screen) belong to different in-vehicle terminals, an in-vehicle terminal to which the first screen belongs may know the relative location relationships between the plurality of displays. For example, the in-vehicle terminal stores specific locations of the plurality of displays. In view of this, in this embodiment of this application, the in-vehicle terminal may determine, based on the sliding direction of the finger of the user and the relative location relationships between the plurality of displays, the target screen for the cross-screen transfer of the first interface.
In an example, when the plurality of displays (for example, the first screen and the second screen) belong to different in-vehicle terminals, the in-vehicle terminal to which the first screen belongs may obtain the relative location relationships between the plurality of displays based on an automatic positioning technology. For example, the automatic positioning technology is an ultrasonic positioning technology. The in-vehicle terminal to which the first screen belongs may transmit an ultrasonic signal through a speaker disposed in the in-vehicle terminal based on the ultrasonic positioning technology, and receive an echo signal of the ultrasonic signal from another in-vehicle terminal through a microphone disposed in the in-vehicle terminal. Further, the in-vehicle terminal to which the first screen belongs may determine, based on a triangulation positioning technology, specific locations of the plurality of in-vehicle terminals in the vehicle by using a transmission path of the sent signal and a transmission path of the received signal with reference to a relative location relationship between the speaker and the microphone, to determine relative location relationships between the plurality of in-vehicle terminals, namely, the relative location relationships between the plurality of displays.
Alternatively, when the plurality of displays (for example, the first screen and the second screen) belong to different in-vehicle terminals, the in-vehicle terminal to which the first screen belongs may alternatively determine specific locations of the plurality of in-vehicle terminals in the vehicle based on related configuration information such as in-vehicle system configuration information or by using another method, to determine relative location relationships between the plurality of in-vehicle terminals, namely, the relative location relationships between the plurality of displays. A specific method and process are not specifically limited in embodiments of this application.
In some embodiments of this application, based on the method provided in embodiments of this application, one-to-one interface transfer can be implemented.
For example, based on the method provided in embodiments of this application, any interface on any screen can be transferred to any other screen across screens. For example, for a car, an interface can be transferred across screens from a driver screen to a front passenger screen, the driver screen to a rear left screen, the driver screen to a rear right screen, the front passenger screen to the driver screen, the front passenger screen to the rear left screen, the front passenger screen to the rear right screen, the rear left screen to the driver screen, the rear left screen to the front passenger screen, the rear left screen to the rear right screen, the rear right screen to the driver screen, the rear right screen to the front passenger screen, and the rear right screen to the rear left screen.
Refer to
For example, if the display 1 (namely, the first screen) receives an operation that is shown in
For another example, if the display 1 (namely, the first screen) receives an operation 5 that is shown in
For another example, if the display 1 (namely, the first screen) receives an operation that is shown in
In some other embodiments of this application, based on the method provided in embodiments of this application, one-to-many interface transfer can be implemented.
For example, based on the method provided in embodiments of this application, any interface on any screen can be transferred to a plurality of other screens across screens. For example, for a car, an interface can be transferred across screens from a driver screen to a rear-row screen (including a rear left screen and a rear right screen) and from the rear-row screen (for example, the rear left screen or the rear right screen) to a front-row screen (including a driver screen and a front passenger screen).
For example, as shown in
It should be noted that the sliding operations for the cross-screen transfer shown in
In addition,
S1004-1: The in-vehicle terminal displays the first interface through the second screen in a first display type. The first display type is related to task information before the transfer of the first interface and/or screen task information of the second screen.
The task information before the transfer of the first interface indicates a display type and/or classification information of the first interface displayed on the first screen before the interface transfer. The display type of the first interface displayed on the first screen indicates a representation form of the first interface displayed on the first screen. For example, the display type of the first interface before the transfer may include but is not limited to any one of full-screen display, split-screen display, a floating window, a floating icon, a floating bubble, picture-in-picture, a widget, a control, a notification, or the like. The classification information of the first interface displayed on the first screen indicates whether the first interface is a preset focused application interface.
The screen task information of the second screen indicates a display type and/or classification information of a task interface on the second screen before the interface transfer. The display type of the task interface on the second screen indicates a representation form of the task interface on the second screen. For example, the display type of the task interface on the second screen may include but is not limited to any one of full-screen display, split-screen display, a floating window, a floating icon, a floating bubble, picture-in-picture, a widget, a control, a notification, or the like. The classification information of the task interface on the second screen indicates whether the task interface on the second screen is a preset focused application interface.
In some embodiments, no task interface is displayed on the second screen of the in-vehicle terminal. For example, a display desktop, a notification center, a control center, and the like are displayed on the second screen of the in-vehicle terminal. Optionally, when a system page (for example, a setting page) is displayed on the second screen of the in-vehicle terminal, it may also be considered that no task interface is displayed on the second screen of the in-vehicle terminal.
In some other embodiments, one task interface (for example, one second task interface) is displayed on the second screen of the in-vehicle terminal.
In some other embodiments, a plurality of task interfaces (for example, a plurality of second task interfaces) are displayed on the second screen of the in-vehicle terminal.
In embodiments of this application, one or more second task interfaces may be displayed on the second screen of the in-vehicle terminal in any display type.
If one second task interface is displayed on the second screen of the in-vehicle terminal, the one second task interface may be displayed on the second screen in full screen (as shown in
If a plurality of second task interfaces are displayed on the second screen of the in-vehicle terminal, in some examples, the plurality of second task interfaces may be displayed on the second screen in split screen, as shown in
Similarly, if a plurality of second task interfaces are displayed on the second screen of the in-vehicle terminal, the plurality of task interfaces may be displayed on the second screen after the in-vehicle terminal responds to an operation performed by the user for opening the plurality of second task interfaces; or the plurality of task interfaces may be displayed on the second screen after the in-vehicle terminal responds to an operation performed by the user for opening a task combination. A specific process in which the in-vehicle terminal displays the plurality of second task interfaces through the second screen is not limited in embodiments of this application.
It may be understood that the second screen is the target screen. When the first interface is transferred from the first screen to the second screen, to provide intelligent display that can meet user requirements and habits, a display type of the first interface displayed on the second screen may be affected by a task interface that is being displayed on the second screen. For example, if the in-vehicle terminal does not consider the task interface on the second screen, but directly displays the first interface on the second screen in full screen, a task interface that is being focused on by the user on the second screen may be forcibly interrupted, affecting user experience. For another example, if the in-vehicle terminal does not consider the task interface on the second screen, but directly displays the first interface on the second screen in non-full screen, when the second screen does not display the task interface originally, the user may need to manually adjust the non-full-screen display of the first interface to full-screen display, affecting user experience.
In addition, that the user transfers the first interface on the first screen to the second screen is usually intended to use the first interface as a focused task of the second screen. Therefore, to provide the intelligent display that can meet the user requirements and habits, the display type of the first interface displayed on the second screen may also be affected by the display type or the classification information of the first interface displayed on the first screen. For example, if the in-vehicle terminal does not consider information that the first interface is previously displayed on the first screen in full screen or displayed as a focused task interface, but directly displays the first interface on the second screen in non-full screen, when the second screen does not display the task interface originally, the user may need to manually adjust the non-full-screen display of the first interface to full-screen display, affecting user experience.
Therefore, in embodiments of this application, during cross-screen interface transfer, the in-vehicle terminal determines, based on the obtained task information before the transfer of the first interface and/or the obtained screen task information of the second screen, a specific display type of the first interface on the second screen after the transfer, to display the first interface on the second screen with a more proper display effect after the interface transfer, to improve user experience.
With reference to specific examples, the following describes specific display effects of displaying, by the in-vehicle terminal on the second screen, the first interface transferred from the first screen by using Case (A) to Case (C).
Case (A): A display effect of displaying, by the in-vehicle terminal on the second screen, the first interface transferred from the first screen is related to the screen task information of the second screen before the transfer.
In some embodiments, the in-vehicle terminal displays the first interface in full screen through the second screen (that is, the first display type is full-screen display), and the first display type is determined by the in-vehicle terminal based on the screen task information of the second screen.
For example, the screen task information of the second screen indicates that no task interface is displayed on the second screen, and the in-vehicle terminal may determine, based on the screen task information of the second screen, to display the first interface in full screen through the second screen after the transfer.
For example, the second screen may display the desktop, the notification center, the control center, the system interface (for example, the setting page), or the like.
The second screen is located on the right side of the first screen. As shown in
In the example shown in
For example, in
Alternatively, for example, the screen task information of the second screen indicates that the task interface on the second screen is displayed in a form of non-full screen before the transfer. For example, the display type of the task interface on the second screen is any one of a split-screen window, a floating window, a floating icon, a floating bubble, a widget, a control, or a notification. The in-vehicle terminal may determine, based on the screen task information of the second screen, to display the first interface in full screen through the second screen after the transfer.
For example, in
In
Alternatively, for example, if the screen task information of the second screen indicates that the task interface on the second screen before the transfer does not include the preset focused application interface, the in-vehicle terminal may determine, based on the screen task information of the second screen, to display the first interface in full screen through the second screen after the transfer. For example, it is assumed that an interface B shown in
It may be understood that, in embodiments of this application, when the second screen displays no task interface, displays no task interface in full screen, or displays no preset focused application interface, the in-vehicle terminal may directly display the first interface in full screen through the second screen after the interface transfer. This ensures that the first interface is eye-catchingly displayed without manual adjustment by the user, avoids interference from another interface, and helps the user view the first interface and/or perform an operation on the first interface through the second screen, providing the user with more efficient and user-friendly interface transfer experience.
In some other embodiments, the in-vehicle terminal displays, through the second screen in a form of non-full screen (for example, the first display type is split-screen display, a floating window, a floating icon, a floating bubble, picture-in-picture, a widget, or a notification), the first interface transferred from the first screen. The first display type is determined by the in-vehicle terminal based on the screen task information of the second screen.
For example, if the screen task information of the second screen indicates that the second screen includes a task interface displayed in full screen, the in-vehicle terminal determines, based on the screen task information of the second screen, that the display type of the first interface after the transfer is split-screen display, a floating window, picture-in-picture, or the like.
For example, in
In
Alternatively, for example, if the screen task information of the second screen indicates that the preset focused application interface is displayed on the second screen, the in-vehicle terminal determines, based on the screen task information of the second screen, that a display type of the first interface after the transfer is split-screen display, a floating window, picture-in-picture, or the like.
For example, the interface C shown in
When the preset focused application interface on the second screen and the first interface are displayed on the second screen in split screen after the transfer, an interface ratio of the first interface to the preset focused application interface on the second screen may be 1:1 as shown in
It may be understood that, in embodiments of this application, when there is a task interface displayed in full screen on the second screen, or there is a preset focused application interface displayed on the second screen, the in-vehicle terminal may display the first interface in a form of split-screen display, floating window, picture-in-picture, or the like through the second screen after the interface transfer, to avoid interrupting the task interface that is being focused on by the user on the second screen, and provide the user with more user-friendly interface transfer experience.
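To make the Case (A) decision concrete, the following minimal sketch chooses the first display type from the screen task information of the second screen alone. All names (ScreenTaskInfo, DisplayType, and the function itself) are illustrative assumptions for this description rather than an API of any real in-vehicle system:

```python
from dataclasses import dataclass
from enum import Enum, auto

class DisplayType(Enum):
    FULL_SCREEN = auto()
    SPLIT_SCREEN = auto()
    FLOATING_WINDOW = auto()
    PICTURE_IN_PICTURE = auto()

@dataclass
class ScreenTaskInfo:
    has_task_interface: bool         # any task interface shown on the screen?
    has_full_screen_task: bool       # a task interface shown in full screen?
    has_focused_app_interface: bool  # a preset focused application interface shown?

def display_type_on_second_screen(second: ScreenTaskInfo) -> DisplayType:
    # A full-screen task or a preset focused application interface on the
    # second screen must not be interrupted: show the transferred interface
    # in a non-full-screen form (split screen here; a floating window or
    # picture-in-picture would serve equally well).
    if second.has_full_screen_task or second.has_focused_app_interface:
        return DisplayType.SPLIT_SCREEN
    # Otherwise (no task interface, only non-full-screen task interfaces, or
    # no preset focused application interface): show it in full screen.
    return DisplayType.FULL_SCREEN
```

For example, display_type_on_second_screen(ScreenTaskInfo(False, False, False)) returns full-screen display, matching the desktop-only example above.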
Case (B): A display effect of displaying, by the in-vehicle terminal on the second screen, the first interface transferred from the first screen is related to the task information before the transfer of the first interface.
In some embodiments, after the interface transfer, the in-vehicle terminal displays the first interface in full screen through the second screen (that is, the first display type is full-screen display), and the first display type is determined by the in-vehicle terminal based on the task information of the first interface displayed on the first screen before the transfer.
For example, the in-vehicle terminal may display the first interface on the second screen in a display type that is the same as that used when the first interface is displayed on the first screen. For example, if the display type of the first interface displayed on the first screen is full-screen display, the in-vehicle terminal may determine to display the first interface in full screen through the second screen after the transfer.
As shown in
It may be understood that a transfer requirement of an interface is usually based on communication between people in the vehicle. Therefore, it may be considered that the transfer is intended to hand over a focused task of the first screen to the second screen as a focused task of the second screen. In view of this, to provide the user with more efficient and user-friendly interface transfer experience, the in-vehicle device may display, in full screen through the second screen after the interface transfer, the first interface that was displayed in full screen on the first screen before the transfer. This ensures that the first interface is eye-catchingly displayed without manual adjustment by the user, avoids interference from another interface, and helps the user view the first interface and/or perform an operation on the first interface through the second screen.
For example, when the first interface displayed on the first screen is the preset focused application interface, the in-vehicle terminal may determine to display the first interface in full screen through the second screen after the transfer. This ensures that an interface that the user is highly concerned with, or that is of relatively high importance, is eye-catchingly displayed, avoids interference from another interface, and helps the user view the first interface and/or perform an operation on the first interface through the second screen.
In some other embodiments, the in-vehicle terminal displays the first interface in a form of non-full screen (for example, the first display type is split-screen display, a floating window, picture-in-picture, or the like) through the second screen. The first display type is determined by the in-vehicle terminal based on the task information of the first interface displayed on the first screen before the transfer.
For example, the in-vehicle terminal may display the first interface on the second screen in a display type that is the same as that used when the first interface is displayed on the first screen before the transfer. For example, as shown in
Similarly, when the first interface is displayed on the first screen in a form of picture-in-picture before the transfer, the in-vehicle terminal may also determine to display the first interface in a form of picture-in-picture through the second screen after the transfer (refer to
Case (C): A display effect of displaying, by the in-vehicle terminal on the second screen, the first interface transferred from the first screen is related to both the task information before the transfer of the first interface and the screen task information of the second screen before the transfer.
In some embodiments, the in-vehicle terminal may analyze the screen task information of the second screen. If the screen task information of the second screen indicates that the second screen includes a task interface displayed in full screen or the preset focused application interface is displayed on the second screen, the in-vehicle terminal may determine to display the first interface in a form of non-full screen after the transfer.
Further, after the in-vehicle terminal determines, based on the screen task information of the second screen, to display the first interface in a form of non-full screen after the transfer, the in-vehicle terminal may determine a specific display type (namely, the first display type) of the first interface on the second screen after the transfer.
For example, if the first interface is displayed on the first screen in split screen before the transfer, the in-vehicle terminal may determine that the first interface is displayed on the second screen in split screen after the transfer. If the first interface is displayed on the first screen in full screen before the transfer, the in-vehicle terminal may determine that the first interface is displayed on the second screen in a form of split-screen display, floating window, or picture-in-picture after the transfer. For another example, if the first interface is displayed on the first screen in a form of floating window before the transfer, the in-vehicle terminal may determine that the first interface is displayed on the second screen in a form of floating window after the transfer. Similarly, if the first interface is displayed on the first screen in a form of picture-in-picture before the transfer, the in-vehicle terminal may determine that the first interface is displayed on the second screen in a form of picture-in-picture after the transfer. These correspondences are recorded in the sketch below.
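The examples above amount to a simple mapping from the pre-transfer display type on the first screen to the post-transfer display type on the second screen. The following sketch records that mapping, reusing the hypothetical DisplayType enum from the earlier sketch; choosing split-screen display as the non-full-screen fallback for a full-screen interface is only one of the admissible options named above:

```python
# Pre-transfer type on the first screen -> post-transfer type on the second
# screen, applied once a non-full-screen form has been decided for Case (C).
POST_TRANSFER_TYPE = {
    DisplayType.SPLIT_SCREEN: DisplayType.SPLIT_SCREEN,
    # A full-screen interface falls back to a non-full-screen form; split
    # screen, a floating window, or picture-in-picture are all admissible.
    DisplayType.FULL_SCREEN: DisplayType.SPLIT_SCREEN,
    DisplayType.FLOATING_WINDOW: DisplayType.FLOATING_WINDOW,
    DisplayType.PICTURE_IN_PICTURE: DisplayType.PICTURE_IN_PICTURE,
}
```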
For example, as shown in
Still as shown in
In some other embodiments, if the screen task information of the second screen indicates that the second screen does not include a task interface displayed in full screen and there is no preset focused application interface on the second screen, the in-vehicle terminal may further analyze the task information before the transfer of the first interface. If the task information before the transfer of the first interface indicates that the first interface is displayed in full screen on the first screen before the transfer and/or the first interface is the preset focused application interface, the in-vehicle terminal may determine to display the first interface in full screen through the second screen after the transfer (that is, the first display type is full-screen display).
For example, as shown in
Similarly, if the screen task information of the second screen indicates that the second screen does not include a task interface displayed in full screen and there is no preset focused application interface on the second screen, when the first interface is the preset focused application interface and the display type of the first interface on the first screen is a floating window, a floating icon, a floating bubble, picture-in-picture, a widget, a control, or a notification, the in-vehicle terminal may also determine to display the first interface in full screen through the second screen after the transfer. For details, refer to the foregoing or following examples. Details are not listed one by one herein.
It should be noted that,
Optionally, in some embodiments, the in-vehicle terminal may also determine, based on a user setting, a system setting, or the like, a display type of the first interface after the transfer. Alternatively, the in-vehicle terminal may determine, based on another related factor, a specific display type of the first interface after the transfer. This is not specifically limited in embodiments of this application.
In addition, it should be noted that a specific location at which the first interface is displayed on the second screen in a form of non-full screen like a floating window or picture-in-picture after the interface transfer is not limited in embodiments of this application. The specific location depends on specific device settings or the interface layout.
In addition, it should be noted that, in the foregoing embodiments of this application, an example in which the first screen no longer displays information related to the first interface after the first interface is transferred from the first screen to the second screen is merely used. In some other embodiments of this application, after the first interface is transferred from the first screen to the second screen, a small interface like an application icon or a floating icon corresponding to the first interface may be further displayed on the first screen. This can help a user of the original screen still view the first interface or perform an operation on the first interface, and can avoid interference of the first interface to another focused task on the first screen.
For example, as shown in
Alternatively, when the first interface is displayed on the first screen in full screen or in split screen before the transfer, after the first interface is transferred from the first screen to the second screen, the first interface may be further displayed on the first screen in a form of floating window, picture-in-picture, widget, or the like that occupies a small display area. This can help a user of the original screen still view the first interface or perform an operation on the first interface, and can avoid interference of the first interface to another focused task on the first screen. For example, as shown in
For example, as shown in
Alternatively, optionally, if the interface C and/or the interface D are/is the preset focused application interface, after the interface A (namely, the first interface) shown in
As shown in
In addition, it should be further noted that, in the foregoing embodiments of this application, cross-screen transfer of one task interface (namely, the first interface) is merely used as an example. The method provided in embodiments of this application may further support cross-screen transfer of a plurality of task interfaces at a time. For example, if the user performs, on a display area in which a plurality of task interfaces are located on the first screen, an operation in which a plurality of fingers leave the screen after quickly sliding, the in-vehicle terminal simultaneously transfers the plurality of task interfaces to the second screen across screens. It may be understood that, based on the interface transfer method provided in embodiments of this application, the user may trigger cross-screen transfer of a task interface from an original screen to a target screen by using a cross-screen transfer operation that meets user operation habits and is easy to memorize and operate, to meet diversified requirements of the user, for example, transferring the task interface to another passenger for use or performing a related operation with assistance of another passenger.
For example, based on the interface transfer method provided in embodiments of this application, the in-vehicle terminal may display a navigation interface (namely, the first interface) through a front passenger screen or another display in response to an operation performed by a driver for transferring the navigation interface (namely, the first interface) across screens, so that a passenger at a corresponding location assists the driver in performing a navigation-related operation, for example, adding a waypoint or modifying a destination. This eliminates a security risk when the driver performs the navigation-related operation, ensures traveling safety of the vehicle, and improves user experience.
For another example, based on the interface transfer method provided in embodiments of this application, the in-vehicle terminal may display a music interface (namely, the first interface) through a front passenger screen or another display in response to an operation performed by a driver for transferring the music interface (namely, the first interface) across screens, so that a passenger at a corresponding location assists the driver in performing a music-related operation, for example, music searching or music switching. This eliminates a security risk when the driver performs the music-related operation, ensures traveling safety of the vehicle, and improves user experience.
For another example, based on the interface transfer method provided in embodiments of this application, the in-vehicle terminal may display a to-be-answered video call screen (namely, the first interface) through a front passenger screen or another display in response to an operation performed by a driver for transferring the to-be-answered video call screen (namely, the first interface) across screens, so that a passenger at a corresponding location replaces the driver to answer the video call or assists the driver in answering the video call. In this way, the driver can focus on driving of the vehicle, traveling safety of the vehicle is ensured, and user experience is improved.
For another example, based on the interface transfer method provided in embodiments of this application, the in-vehicle terminal may transfer, in response to an operation of a driver, an animation playing interface (namely, the first interface) to a display (for example, a rear right screen or a rear left screen) at a location of a child in the vehicle, so that the child can watch an animation. This resolves the problem that the child cannot watch the animation because the child cannot independently perform related operations such as animation searching and animation playing, and an adult can monitor and take care of the child's activity through the display, thereby improving user experience.
In addition, based on the interface transfer method provided in embodiments of this application, the in-vehicle terminal may further analyze target screen task information and/or task information of a to-be-transferred interface, to display, on the target screen in an appropriate display type after interface transfer, the task interface transferred from the original screen. According to the method, the task interface transferred from the original screen may be displayed with a most eye-catching display effect without affecting a task currently concerned by the user on the target screen. This reduces subsequent operations of the user and provides the user with more efficient and user-friendly interface transfer experience.
For example, when there is a user-focused task on the target screen, the in-vehicle terminal may display, on the target screen in a form of split-screen display, floating window, picture-in-picture, or the like that does not block the user-focused task, the task interface transferred from the original screen.
For another example, when there is no user-focused task on the target screen, the in-vehicle terminal may display, in an eye-catching full-screen display form, the task interface transferred from the original screen, to reduce operations such as manually operating the task interface by the user.
In addition, in a process of cross-screen transfer of the task interface, animation processing may be performed on the task interface, to improve visual experience of the user. For example, when a task displayed in full screen is transferred from the original screen (for example, a central display screen) to the target screen (for example, a front passenger screen), if a width of the central display screen is the same as a width of the front passenger screen, the user obtains a relatively coherent visual effect. However, when the width of the central display screen differs from the width of the front passenger screen, a size of the task interface displayed on the central display screen differs from a size of the task interface displayed on the front passenger screen, affecting visual experience of the user. Therefore, in embodiments of this application, animation processing may be performed on the transferred task interface. In a possible implementation, after the task interface is transferred to the target screen in full screen or split screen, the task interface may be adaptively arranged. To be specific, the task interface is processed based on a size of the target screen, so that a display effect of the task interface adapts to the size of the target screen. In this manner, the task interface may be quickly transferred to the target screen by first translating and then zooming the task interface. In another possible implementation, in a process in which the task interface is transferred to the target screen in full screen or split screen, for example, in a process in which the task interface traverses the original screen and the target screen, the task interface may be transformed and transferred simultaneously. In this manner, visual coherence can be improved by zooming the task interface in or out while translating it, as in the sketch below.
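The second manner can be pictured as a simultaneous interpolation of position and size while the task interface traverses the two screens. The following sketch is an assumption-level illustration: the shared coordinate space spanning both displays, the rectangle format, and the lerp helper are not taken from this application.

```python
def lerp(a: float, b: float, t: float) -> float:
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

def transfer_frame(t: float,
                   src_rect: tuple[float, float, float, float],
                   dst_rect: tuple[float, float, float, float]):
    """rects are (x, y, width, height) in one coordinate space spanning both
    displays; returns the window rect at animation progress t."""
    return tuple(lerp(s, d, t) for s, d in zip(src_rect, dst_rect))

# Example: a 1920x1080 full-screen task on the central display screen moving
# to a narrower 1280x720 front passenger screen placed to its right; the
# window translates and shrinks at the same time.
for step in range(11):
    print(transfer_frame(step / 10, (0, 0, 1920, 1080), (1920, 180, 1280, 720)))
```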
The foregoing embodiments describe a case of the cross-screen transfer. As shown in
S1003-2: The in-vehicle terminal determines a target location.
In embodiments of this application, the in-vehicle terminal may determine the target location based on the received first operation for performing one-screen transfer of the first interface.
For example, the first operation for one-screen transfer of the first interface is an operation that one or more fingers of the user leave after slowly sliding for a distance. In some embodiments, the in-vehicle terminal may determine the target location based on the sliding distance of the finger of the user. For a specific method and process of determining the target location, refer to the foregoing specific description. Details are not described herein again.
S1004-2: The in-vehicle terminal displays, in a second display type, the first interface at the target location based on screen task information of the first screen. The second display type is related to the screen task information of the first screen.
The screen task information of the first screen indicates a display type and/or classification information of a task interface on the first screen before the interface transfer. The screen task information of the first screen includes task information before the transfer of the first interface.
In some embodiments, it is assumed that the first interface is displayed on the first screen in full screen before the transfer, and in this case, the in-vehicle terminal may determine that the second display type is a preset display type, for example, a floating window, a floating icon, a floating bubble, picture-in-picture, or a widget. This specifically depends on settings (including factory settings or manual settings) of an in-vehicle device. A specific size of the floating window, the floating icon, the floating bubble, the picture-in-picture, or the widget also depends on the settings (including the factory settings or the manual settings) of the in-vehicle device.
For example, as shown in
In some other embodiments, it is assumed that the first interface is displayed at an original location of the first screen in a form of non-full screen before the transfer, and no other task interface is displayed at the target location. In this case, the in-vehicle terminal may determine that the second display type is the display type of the first interface before the transfer. In other words, the in-vehicle terminal may keep the original display type unchanged and move the first interface from the original location to the target location.
For example, as shown in
When the first interface is displayed at the original location of the first screen in a form of picture-in-picture, floating icon, widget, or the like before the transfer, the in-vehicle terminal may also keep the original display type of the first interface unchanged, and move the first interface from the original location to the target location. Details are not listed one by one herein.
In some other embodiments, it is assumed that the first interface and another interface (for example, a second interface) are displayed on the first screen in split screen before the transfer, and the target location is a task interface (namely, the second interface) displayed in split screen with the first interface. In this case, the in-vehicle terminal may determine to exchange the first interface and the second interface (that is, a location of the second interface before the transfer is the target location of the first interface).
For example, as shown in
It may be understood that, based on the interface transfer method provided in embodiments of this application, the user may trigger, by using a one-screen transfer operation that meets user operation habits and is easy to memorize and operate, one-screen transfer of a task interface from a location (namely, the original location) to another location (namely, a target location) on a screen, to meet diversified requirements of the user. For example, a full-screen interface is pinned to a location on the screen, a location of the task interface is moved, or task interfaces displayed in split screen are exchanged.
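The one-screen transfer rules of the embodiments above can be summarized in a short sketch. The Window structure, the preset pinned type, and the swap condition are illustrative assumptions; DisplayType reuses the hypothetical enum from the earlier sketch:

```python
from dataclasses import dataclass

@dataclass
class Window:
    display_type: DisplayType
    position: tuple[int, int]

PRESET_PINNED_TYPE = DisplayType.FLOATING_WINDOW  # device setting (assumed)

def one_screen_transfer(win: Window, target_pos: tuple[int, int],
                        split_peer: Window | None = None) -> None:
    if win.display_type == DisplayType.FULL_SCREEN:
        # A full-screen interface is pinned at the target location in a
        # preset non-full-screen form whose size depends on device settings.
        win.display_type = PRESET_PINNED_TYPE
        win.position = target_pos
    elif split_peer is not None and split_peer.position == target_pos:
        # Split-screen case: exchange the locations of the two interfaces.
        win.position, split_peer.position = split_peer.position, win.position
    else:
        # Non-full-screen case: keep the original display type and move.
        win.position = target_pos
```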
Optionally, based on the interface transfer method provided in embodiments of this application, the in-vehicle terminal may further support reverse transfer of the first interface that has been transferred from the first screen to the second screen.
For example, as shown in
For example, in a reverse transfer process, the in-vehicle terminal may determine, based on a first operation (for example, an operation that one or more fingers leave after quickly sliding) performed by the user on the first interface on the second screen, that the target screen is the first screen. The in-vehicle terminal may then determine, based on screen task information of the first screen and/or task information of the first interface at a current moment, a display type of the first interface on the first screen after the transfer, and display the first interface in the determined display type through the first screen, implementing reverse transfer of the first interface to the first screen. In the reverse transfer process, the second screen is the original screen, and the first screen is the target screen. For a specific reverse transfer process, refer to the specific process in which the first interface is transferred from the first screen to the second screen across screens.
Alternatively, for example, as shown in
Alternatively, optionally, based on the interface transfer method provided in embodiments of this application, after the in-vehicle terminal transfers the first interface from the first screen to the second screen across screens, the in-vehicle terminal may further support one-screen transfer of the first interface on the second screen. For example, the in-vehicle terminal may transfer, from the original location to the target location based on a one-screen transfer operation of the user, the first interface that is transferred from the first screen to the second screen. For the one-screen transfer of the first interface on the second screen, refer to a specific process of the one-screen transfer of the first interface on the first screen in the foregoing embodiment. Details are not described herein again.
Alternatively, optionally, based on the interface transfer method provided in embodiments of this application, after the in-vehicle terminal transfers the first interface from the first screen to the second screen across screens, the in-vehicle terminal may further support transferring the first interface to another display (for example, a third screen) across screens again. For cross-screen transfer of the first interface from the second screen to the third screen, refer to a specific process of cross-screen transfer of the first interface from the first screen to the second screen in the foregoing embodiments. Details are not described herein again.
It should be noted that, in the foregoing embodiments of this application, the in-vehicle terminal transfers, based on the cross-screen transfer operation of the user, the first interface displayed on the original screen (for example, the first screen) to the target screen (for example, the second screen) across screens. In some embodiments, the in-vehicle terminal may further display the first interface on the target screen (for example, the second screen) based on the cross-screen transfer operation of the user when a process corresponding to the first interface is started but the first interface is not displayed. In this case, the cross-screen transfer operation of the user is an operation performed by a passenger at a target screen location for opening the first interface on the first screen.
For example, as shown in
Alternatively, for example, it is assumed that the first interface is a lower-level interface of another interface (for example, a third interface). In response to detecting an operation that the user accesses the first interface from the third interface and determining that an identity of the user is a passenger at a second screen location, the in-vehicle terminal draws the first interface and displays the first interface through the second screen. Optionally, after the in-vehicle terminal displays the first interface through the second screen, the first screen may further continue to display the third interface. Alternatively, optionally, after the in-vehicle terminal displays the first interface through the second screen, the first screen may no longer display the third interface. This is not limited in embodiments of this application.
Alternatively, it is assumed that the first interface is a detailed information interface (namely, the first interface) corresponding to a widget, and an application corresponding to the widget is currently not running. In response to detecting an operation of tapping the widget by the user on the first screen and determining that an identity of the user is a passenger at a second screen location, the in-vehicle terminal starts the application corresponding to the widget, draws the detailed information interface (namely, the first interface) of the widget, and displays the first interface through the second screen. Optionally, after the in-vehicle terminal displays the first interface through the second screen, the first screen may further display the first interface in a form of small interface like a floating window, a floating icon, or a floating bubble, to avoid interference to another focused task on the first screen. This is not limited in embodiments of this application.
In a possible implementation, when receiving the cross-screen transfer operation, the in-vehicle terminal may capture image information in a viewfinder frame through a camera. It may be understood that the viewfinder frame of the camera usually includes an image of the user that inputs the cross-screen transfer operation. Further, the in-vehicle terminal may recognize, based on the image information of the user captured by the camera, the identity of the user that triggers the cross-screen transfer operation. For a specific method and process of determining the identity of the user based on the image information of the user, refer to a conventional technology. Details are not described herein again.
In another possible implementation, when receiving the cross-screen transfer operation, the in-vehicle terminal may draw, based on information such as a touch location, a touch force, and a touch area of the cross-screen transfer operation on the first screen, a touch profile corresponding to the cross-screen transfer operation. The touch profile corresponding to the cross-screen transfer operation indicates a contact area between a finger of the user and a screen. Further, the in-vehicle terminal may determine, based on the touch profile, the identity of the user that triggers the cross-screen transfer operation.
For example, the first screen is a driver screen. It may be understood that, as shown in
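One plausible way to act on the touch profile is to estimate the tilt of the contact ellipse: a finger reaching across from an adjacent seat tends to contact the screen at an angle toward that seat. The sketch below is purely illustrative; the sampled-point input, the covariance-based orientation estimate, and the mapping from tilt sign to seat side are all assumptions rather than the method of this application:

```python
import math

def seat_side_from_touch(points: list[tuple[float, float]]) -> str:
    """points: sampled (x, y) coordinates within one touch profile.
    Returns a guess at which side the touching hand reaches from."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    # Covariance of the contact area; its principal axis approximates the
    # tilt of the contact ellipse.
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    angle = 0.5 * math.atan2(2.0 * sxy, sxx - syy)  # major-axis angle
    # Sign convention (assumed): positive tilt suggests a hand from the
    # left seat, negative tilt a hand from the right seat.
    return "left-seat user" if angle > 0 else "right-seat user"
```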
Alternatively, the in-vehicle terminal may determine, by using another method, the identity of the user that triggers the cross-screen transfer operation. This is not specifically limited in embodiments of this application.
It may be understood that, based on the interface transfer method provided in embodiments of this application, the in-vehicle terminal may not only trigger, based on the cross-screen transfer operation of the user, the cross-screen transfer of the task interface from the original screen to the target screen, but also trigger, based on a one-screen transfer operation of the user, one-screen transfer of a task interface from a location (namely, the original location) to another location (namely, a target location) on a screen, to meet diversified requirements of the user, for example, transferring the task interface to another passenger for use or performing a related operation with assistance of another passenger, and for another example, pinning the full-screen interface to a location on the screen or moving the location of the task interface.
In addition, based on the interface transfer method provided in embodiments of this application, the in-vehicle terminal may further analyze target screen task information and/or task information of a to-be-transferred interface, to display, on the target screen in an appropriate display type after interface transfer, the task interface transferred from the original screen. According to the method, the task interface transferred from the original screen may be displayed with a most eye-catching display effect without affecting a task currently concerned by the user on the target screen. This reduces subsequent operations of the user and provides the user with more efficient and user-friendly interface transfer experience.
According to the multi-screen collaborative display method provided in embodiments of this application, a plurality of displays may collaboratively display an image, to provide a user with immersive viewing experience. Different displays may display different content or display content in different manners. Based on the multi-screen collaborative display method provided in embodiments of this application, when one display displays a content interface, another display may display content based on the content on that display, so that a plurality of displays collaboratively display one image, thereby enhancing the atmosphere and providing a better display effect.
The multi-screen collaborative display method provided in embodiments of this application may be applied to a vehicle cockpit scenario. In this case, the first display, the second display, and the third display may be displays configured in the cockpit. Optionally, the first display, the second display, and the third display may belong to a same in-vehicle terminal, and the control mechanism of the first display, the second display, and the third display is a single-core multi-screen mechanism (including the single-core single-OS mechanism and the single-core multi-OS mechanism). Alternatively, the first display, the second display, and the third display may belong to different in-vehicle terminals, and the control mechanism of the first display, the second display, and the third display is a multi-core multi-screen mechanism (including the multi-core multi-OS mechanism).
In some embodiments of this application, the second display and the third display may be respectively located on two sides of the first display in terms of space locations. For example, the second display and the third display may be respectively located on a left side and a right side of the first display. Certainly, a relative location relationship among the first display, the second display, and the third display may also be another location relationship. The relative locations of the first display, the second display, and the third display are not limited in embodiments of this application.
In some embodiments of this application, the first display may be any display described in the foregoing content of Part 1 (namely, content of the interface transfer method part) or the following content of Part 3 (namely, content of the audio control method part), and the second display and the third display may be displays located on two sides of the any display. In some embodiments of this application, in a scenario in which the user enables the immersive display mode, the multi-screen collaborative display method provided in embodiments of this application may be used to display content.
The following describes, with reference to a specific application scenario, the multi-screen collaborative display method provided in embodiments of this application. A scenario in which the second display and the third display are respectively located on the left and right sides of the first display is used as an example for description. For an implementation in another relative location scenario, refer to the implementation in this scenario. Details are not described in embodiments of this application.
In this scenario, the first display, the second display, and the third display belong to a same electronic device (for example, a same in-vehicle terminal), only one OS is configured in the electronic device, and a control mechanism of the first display, the second display, and the third display is a single-core single-OS control mechanism.
For example,
The display management module is configured to control and manage the first display, the second display, and the third display. The effect processing module is configured to perform effect processing on a first interface displayed on the first display, so that the display management module can display, on the second display and the third display, a first interface obtained after the effect processing.
Optionally, the control system may further include a first application. The first application may be configured to draw the first interface, and control, by using the display management module, the first display to display the first interface. The first application is an application that is installed in the electronic device and that runs on the first display, and the first interface may be any interface of the first application.
Optionally, when the electronic device to which the control system belongs is implemented by using the software architecture shown in
It should be understood that the functional modules in the control system are merely an example. During actual application, the electronic device may alternatively be divided into more or fewer functional modules, the functions may be allocated among the modules in another manner, or the electronic device may not be divided into functional modules but work as a whole.
Refer to
S4301: After drawing a to-be-displayed image based on to-be-displayed content, the first application in the electronic device sends drawn image data to the display management module in the electronic device.
The first application may draw the to-be-displayed image at each refresh cycle of the first display.
S4302: The display management module synthesizes the image data to obtain a first interface to be displayed.
S4303: The display management module sends content of the first interface to the effect processing module in the electronic device.
S4304: The effect processing module performs rendering processing on the content of the first interface to obtain a second interface and a third interface.
The rendering processing may be any one of Gaussian blur processing, solid color gradient processing, or particle animation processing. For a specific processing process, refer to related descriptions in the following embodiments. Details are not described herein.
S4305: The effect processing module sends content of the second interface and content of the third interface to the display management module.
S4306: After receiving the content of the second interface and the content of the third interface, the display management module displays the first interface on the first display, displays the second interface on the second display, and displays the third interface on the third display.
Based on the foregoing method, when displaying the first interface on the first display, the electronic device having the plurality of displays may control the second display and the third display to separately display a special effect interface related to the first interface, to provide richer display effects and improve immersive viewing experience of the user.
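Steps S4301 to S4306 form a small pipeline in which a single display management module drives all three displays. The following runnable sketch stands in byte strings for real image buffers; every class and method name is an illustrative assumption:

```python
class EffectProcessingModule:
    def render(self, first_interface: bytes) -> tuple[bytes, bytes]:
        # S4304: rendering processing (e.g. Gaussian blur, solid color
        # gradient, or particle animation) yields the second and third
        # interfaces from the content of the first interface.
        return b"effect-left:" + first_interface, b"effect-right:" + first_interface

class DisplayManagementModule:
    def __init__(self, effects: EffectProcessingModule):
        self.effects = effects

    def on_image_data(self, image_data: bytes) -> None:
        first_interface = self.synthesize(image_data)          # S4302
        second, third = self.effects.render(first_interface)   # S4303-S4305
        self.show("first display", first_interface)            # S4306
        self.show("second display", second)
        self.show("third display", third)

    def synthesize(self, image_data: bytes) -> bytes:
        return image_data  # composition elided in this sketch

    def show(self, display: str, content: bytes) -> None:
        print(display, "<-", content)

# S4301: the first application sends drawn image data each refresh cycle.
DisplayManagementModule(EffectProcessingModule()).on_image_data(b"frame-0")
```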
In this scenario, the first display, the second display, and the third display belong to a same electronic device (for example, a same in-vehicle terminal), a plurality of OSes are configured in the electronic device, and a control mechanism of the first display, the second display, and the third display is a single-core multi-OS control mechanism.
For example,
In an optional implementation, in Case 1 shown in
In another optional implementation, in Case 2 shown in
In still another optional implementation, in Case 3 shown in
In the foregoing manners, each display management module in the control system is configured to control and manage a corresponding display. Each effect processing module is configured to perform effect processing on a first interface displayed on the first display, so that a display management module in an OS in which the effect processing module is located may display, on a corresponding display, a first interface obtained after the effect processing.
Optionally, the first OS may further include the first application. The first application may be configured to draw the first interface, and control, by using the first display management module, the first display to display the first interface. The first interface may be any interface of the first application.
In the single-core multi-OS scenario, a plurality of OSes (the first OS and the second OS, or the first OS, the second OS, and the third OS) in the control system may share a memory, and functional modules in the OSes may read data from the shared memory, or may store processed data into the shared memory. Therefore, relatively high access efficiency can be provided, and data storage space of the control system can be saved.
It should be understood that the functional modules in the control system are merely an example. During actual application, the electronic device may alternatively be divided into more or fewer functional modules, the functions may be allocated among the modules in another manner, or the electronic device may not be divided into functional modules but work as a whole.
Refer to
S4501: After drawing a to-be-displayed image based on to-be-displayed content, the first application in the first OS of the electronic device sends drawn image data to the first display management module in the first OS.
The first application may draw the to-be-displayed image at each refresh cycle of the first display.
S4502: The first display management module synthesizes the image data to obtain a first interface to be displayed.
S4503: The first display management module determines target content based on content of the first interface, and stores the target content into the shared memory of the first OS and the second OS of the electronic device, where the target content is the content of the first interface or content obtained after the first effect processing module performs a part or all of rendering processing on the content of the first interface.
The rendering processing may be any one of Gaussian blur processing, solid color gradient processing, or particle animation processing. For a specific processing process, refer to related descriptions in the following embodiments. Details are not described herein.
In some embodiments of this application, after obtaining the first interface, the first display management module may directly use the content of the first interface as the target content and store the target content into the shared memory; or after obtaining the first interface, the first display management module may perform a part of a rendering processing process (namely, a part of a complete rendering processing process) on the content of the first interface by using the first effect processing module, and use the obtained content as the target content and store the target content into the shared memory; or after obtaining the first interface, the first display management module may perform all of a rendering processing process (namely, a complete rendering processing process) on the content of the first interface by using the first effect processing module, use the obtained content as the target content, and store the target content into the shared memory.
A manner in which the first display management module determines the target content may be dynamically adjusted based on performance of the first OS and performance of the second OS. For example, when usage of a central processing unit (central processing unit, CPU) or a graphics processing unit (graphics processing unit, GPU) allocated to the first OS is greater than a specified value, the first display management module may directly use the content of the first interface as the target content, or may use, as the target content, the content obtained after the first effect processing module performs a part of rendering processing on the content of the first interface. In this manner, when the CPU or the GPU of the first OS is overloaded, the first OS performs no rendering processing or only a part of rendering processing on the content of the first interface, and may leave the remaining rendering work to the second OS for execution, thereby avoiding or reducing pressure on the CPU or the GPU of the first OS and improving overall processing efficiency. For another example, when usage of the CPU or the GPU allocated to the second OS is greater than a specified value, the first OS may use, as the target content, the content obtained after a part or all of rendering processing is performed on the content of the first interface. In this manner, when the CPU or the GPU of the second OS is overloaded, the first OS shares a part of the rendering work, which can reduce or avoid pressure of the rendering process on the CPU or the GPU of the second OS, thereby improving overall processing efficiency. A minimal sketch of this load-based split is provided after this procedure.
S4504: The first display management module displays the first interface on the first display.
S4505: The second display management module in the second OS of the electronic device obtains the target content from the shared memory.
S4506: The second display management module determines whether the target content is content on which all of rendering processing is performed. If yes, perform step S4507; otherwise, perform step S4508.
S4507: The second display management module obtains a second interface after synthesizing the target content. Step S4509 is performed.
S4508: The second display management module obtains a second interface after performing rendering processing on the target content by using the second effect processing module. Step S4509 is performed.
When the target content is content on which rendering processing is not performed, the second effect processing module may perform a complete rendering processing process on the target content to obtain the second interface. When the target content is content on which a part of rendering processing is performed, the second effect processing module may perform a remaining rendering processing process on the target content to obtain the second interface.
S4509: The second display management module displays the second interface on the second display.
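The dynamic adjustment described under step S4503 is in essence a small load-balancing decision. The following minimal sketch assumes a single usage threshold and collapses the "part or all" choices to one option per branch; the usage probes, the threshold, and the render callables are assumptions (the flag telling the second OS how much rendering remains, checked in S4506, is omitted here):

```python
CPU_GPU_USAGE_LIMIT = 0.8  # the "specified value" (threshold assumed)

def choose_target_content(first_interface: bytes,
                          first_os_usage: float,
                          second_os_usage: float,
                          render_part,
                          render_all) -> bytes:
    if first_os_usage > CPU_GPU_USAGE_LIMIT:
        # First OS overloaded: store the raw content in the shared memory
        # and leave the rendering work to the second OS.
        return first_interface
    if second_os_usage > CPU_GPU_USAGE_LIMIT:
        # Second OS overloaded: the first OS performs all rendering itself.
        return render_all(first_interface)
    # Balanced case: the first OS performs a part of the rendering and the
    # second OS finishes the remaining part.
    return render_part(first_interface)
```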
The foregoing procedure shows a method for separately displaying an interface on the first display and the second display by the first OS and the second OS in the electronic device. For the third display, there are the following three cases:
(1) When the third display belongs to the first OS, a method for displaying an interface on the third display by the first OS may be: After obtaining the first interface, the first display management module sends the content of the first interface to the first effect processing module; the first effect processing module performs rendering processing on the content of the first interface to obtain a third interface; the first effect processing module sends content of the third interface to the first display management module; and the first display management module displays the third interface based on the received content.
(2) When the third display belongs to the second OS, for a method for displaying an interface on the third display by the second OS, refer to the method for displaying the second interface on the second display by the second OS in steps S4505 to S4509 in the foregoing procedure. Details are not described herein again.
(3) When the third display belongs to the third OS, for a method for displaying an interface on the third display by the third OS, refer to the method for displaying the second interface on the second display by the second OS in steps S4505 to S4509 in the foregoing procedure. Details are not described herein again.
Based on the foregoing method, a plurality of OSes in the electronic device may collaboratively implement an effect of displaying the first interface on the first display and displaying, on both the second display and the third display, a special effect interface related to the first interface, to provide richer display effects and improve immersive viewing experience of the user.
In this scenario, the first display, the second display, and the third display belong to different electronic devices (for example, different in-vehicle terminals), one or more OSes may be configured in each electronic device, each OS may include one or more displays, and a control mechanism of the first display, the second display, and the third display is a multi-core multi-OS control mechanism.
For example,
Optionally, the first electronic device may include only the first OS, or may include more OSes. Similarly, the second electronic device may include only the second OS, or may include more OSes.
In an optional implementation, the third display (not shown in
In another optional implementation, the third display may belong to a third OS, and the third OS includes a third display management module, a third effect processing module, and the third display. In a possible case, the third OS may belong to the first electronic device or the second electronic device. In another possible case, the third OS may belong to a third electronic device. In this case, the control system further includes the third electronic device. For a manner in which the plurality of OSes in the first electronic device, the second electronic device, or the third electronic device control a display, refer to the control manner of the OS in the control system shown in
In the foregoing manners, each display management module in the control system is configured to control and manage a corresponding display. Each effect processing module is configured to perform effect processing on a first interface displayed on the first display, so that a display management module in an OS in which the effect processing module is located may display, on a corresponding display, a first interface obtained after the effect processing.
Optionally, the first OS may further include the first application. The first application may be configured to draw the first interface, and control, by using the first display management module, the first display to display the first interface. The first interface may be any interface of the first application.
In the multi-core multi-OS scenario, the plurality of OSes that belong to a same electronic device in the control system may share a memory. Different electronic devices (or OSes of different electronic devices) cannot share a memory, but can receive and send data through a network stream.
It should be understood that the functional modules in the control system are merely an example. During actual application, the electronic device may alternatively be divided into more or fewer functional modules, the functions may be allocated among the modules in another manner, or the electronic device may not be divided into functional modules but work as a whole.
Refer to
S4701: After drawing a to-be-displayed image based on to-be-displayed content, the first application in the first OS of the first electronic device sends drawn image data to the first display management module in the first OS.
The first application may draw the to-be-displayed image at each refresh cycle of the first display.
S4702: The first display management module synthesizes the image data to obtain a first interface to be displayed.
S4703: The first display management module determines target content based on content of the first interface, and sends the target content to the second electronic device, where the target content is the content of the first interface or content obtained after the first effect processing module performs a part or all of rendering processing on the content of the first interface.
For the rendering processing, the target content, and the manner of determining the target content, refer to the related descriptions of step S4503 in the foregoing embodiment. Details are not described herein again. A sketch of the network transfer in this step is provided after the procedure below.
S4704: The second display management module in the second OS of the second electronic device determines whether the target content received by the second electronic device is content on which all of rendering processing is performed. If yes, perform step S4705; otherwise, perform step S4706.
S4705: The second display management module obtains a second interface after synthesizing the target content. S4707 is performed.
S4706: The second display management module obtains a second interface after performing rendering processing on the target content by using the second effect processing module. Step S4707 is performed.
When the target content is content on which rendering processing is not performed, the second effect processing module may perform a complete rendering processing process on the target content to obtain the second interface. When the target content is content on which a part of rendering processing is performed, the second effect processing module may perform a remaining rendering processing process on the target content to obtain the second interface.
S4707: The second display management module displays the second interface on the second display.
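Unlike the single-core case, the target content here crosses a device boundary through a network stream (step S4703). The following sketch frames the payload with a fully-rendered flag so that the receiving side can branch as in step S4704; the TCP transport, the framing format, and the flag are assumptions of this illustration:

```python
import socket
import struct

def send_target_content(host: str, port: int,
                        target_content: bytes, fully_rendered: bool) -> None:
    # S4703: the first electronic device sends the target content.
    with socket.create_connection((host, port)) as s:
        header = struct.pack("!?I", fully_rendered, len(target_content))
        s.sendall(header + target_content)

def recv_target_content(conn: socket.socket) -> tuple[bool, bytes]:
    # S4704: the second electronic device checks whether all rendering
    # processing has already been performed on the received content.
    header = conn.recv(struct.calcsize("!?I"))
    fully_rendered, length = struct.unpack("!?I", header)
    payload = b""
    while len(payload) < length:
        payload += conn.recv(length - len(payload))
    return fully_rendered, payload
```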
The foregoing procedure shows a method for separately displaying an interface on the first display and the second display by the first OS of the first electronic device and the second OS of the second electronic device. For the third display, there are the following five cases:
(1) When the third display belongs to the first OS, a method for displaying an interface on the third display by the first OS may be: After obtaining the first interface, the first display management module sends the content of the first interface to the first effect processing module; the first effect processing module performs rendering processing on the content of the first interface to obtain a third interface; the first effect processing module sends content of the third interface to the first display management module; and the first display management module displays the third interface based on the received content.
(2) When the third display belongs to the second OS, for a method for displaying an interface on the third display by the second OS, refer to the method for displaying the second interface on the second display by the second OS in steps S4704 to S4707 in the foregoing procedure. Details are not described herein again.
(3) When the third display belongs to the third OS and the third OS belongs to the first electronic device, for a method for displaying an interface on the third display by the third OS, refer to the method for displaying the third interface on the third display by the first OS in the case (1) in the procedure. Details are not described herein again.
(4) When the third display belongs to the third OS and the third OS belongs to the second electronic device, for a method for displaying an interface on the third display by the third OS, refer to the method for displaying the second interface on the second display by the second OS in steps S4704 to S4707 in the procedure. Details are not described herein again.
(5) When the third display belongs to the third OS and the third OS belongs to the third electronic device, for a method for displaying an interface on the third display by the third OS, refer to the method described in steps S4703 to S4707 in the procedure. Details are not described herein again.
Based on the foregoing method, a plurality of OSes in the plurality of electronic devices may collaboratively implement an effect of displaying the first interface on the first display and displaying, on both the second display and the third display, a special effect interface related to the first interface, to provide richer display effects and improve immersive viewing experience of the user.
The following describes the rendering processing method in the foregoing embodiment with reference to a specific example.
Refer to
S4801: The effect processing module determines the first interface based on the content of the first interface received from the display management module.
The effect processing module and the display management module may be respectively the effect processing module and the display management module in any OS described in the foregoing embodiments.
S4802: The effect processing module copies an interface of a specified size that is on the first interface and that is close to a side of a target display, to obtain a first target interface, where the target display is any display in the OS in which the effect processing module is located.
Optionally, the target display may be the second display or the third display in the foregoing embodiments.
In some embodiments of this application, the display management module may obtain a location relationship between the target display and another display in advance. The effect processing module may determine a location relationship between the target display and the first display based on the location relationship obtained by the display management module, and then determine, based on the location relationship, a location of the first target interface captured from the first interface. For example, when the target display is located on the left side of the first display, the effect processing module may crop and copy an interface of a specified size on the left side of the first interface as the first target interface. For another example, when the target display is located on the right side of the first display, the effect processing module may crop and copy an interface of a specified size on the right side of the first interface as the first target interface.
For example, as shown in (a) in
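As an illustration of step S4802, the following minimal sketch, written in Python and assuming the Pillow imaging library, crops the strip of the first interface that faces the target display; the function name, the "left"/"right" encoding of the location relationship, and the strip width are assumptions for illustration, not elements of this application.

```python
from PIL import Image

def crop_target_strip(first_interface: Image.Image,
                      target_side: str,
                      strip_width: int) -> Image.Image:
    """Copy the strip of the first interface on the side facing the target display."""
    w, h = first_interface.size
    if target_side == "left":
        box = (0, 0, strip_width, h)       # target display sits to the left
    else:
        box = (w - strip_width, 0, w, h)   # target display sits to the right
    return first_interface.crop(box)
```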
S4803: The effect processing module performs Gaussian blur processing on the first target interface to obtain a second target interface.
For example, when the target display is the second display, the second target interface may be an interface 3 shown in (c) in
S4804: The effect processing module zooms in the second target interface along a target direction to a specified width, and superimposes a transparency gradient on the second target interface, where the target direction is a direction from the first display to the target display, and in the zoomed-in second target interface, interface transparency shows a descending trend along the target direction.
For example, the specified width may be a width of the target display. Certainly, the width may be another width. This is not limited in embodiments of this application.
For example, when the target display is the second display, the zoomed-in second target interface may be an interface 5 shown in (d) in
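Steps S4803 and S4804 can be pictured with the following hedged sketch, again assuming Pillow; the blur radius and helper name are illustrative. It blurs the copied strip, stretches it to the specified width along the target direction, and overlays an alpha ramp so that transparency descends along that direction, as described in step S4804.

```python
from PIL import Image, ImageFilter

def blur_zoom_fade(strip: Image.Image, target_width: int,
                   blur_radius: int = 12) -> Image.Image:
    # S4803: Gaussian blur on the first target interface.
    blurred = strip.filter(ImageFilter.GaussianBlur(blur_radius))
    # S4804: zoom in along the target direction to the specified width.
    zoomed = blurred.resize((target_width, strip.height)).convert("RGBA")
    # Superimpose a transparency gradient: per S4804, transparency shows a
    # descending trend along the target direction, so alpha rises from the
    # edge nearest the first display toward the far edge.
    ramp = Image.new("L", (target_width, 1))
    ramp.putdata([int(255 * x / max(target_width - 1, 1))
                  for x in range(target_width)])
    zoomed.putalpha(ramp.resize(zoomed.size))
    return zoomed
```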
Based on the foregoing method, when a user interface is displayed on the first display, a Gaussian blur rendering effect may be displayed on both the second display and the third display, and immersive viewing experience can be provided for the user in a combined display manner of the three displays.
Refer to
S5001: The effect processing module determines a first interface based on content of the first interface from the display management module.
S5002: The effect processing module copies an interface of a specified size that is on the first interface and that is close to a side of a target display, to obtain a first target interface, where the target display is any display in the OS in which the effect processing module is located.
For specific implementations of steps S5001 and S5002, refer to the method described in steps S4801 and S4802 in the foregoing embodiment. Details are not described herein again.
For example, as shown in (a) in
S5003: The effect processing module determines a target color based on a color on the first target interface.
For example, the target color may be any color selected from colors of the first target interface, or the target color may be an average color of all colors of the first target interface. Alternatively, the target color may be a color selected in another manner. This is not limited in this embodiment of this application.
S5004: The effect processing module generates a second target interface based on the target color, where an interface color gradually changes from the target color to a specified color along a target direction on the second target interface, and the target direction is a direction from the first display to the target display.
For example, the specified color may be black.
For example, when the target display is the second display, the second target interface may be an interface 3 shown in (c) in
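A minimal sketch of steps S5003 and S5004 follows, assuming Pillow and taking the average color of the copied strip as the target color (one of the options named in step S5003); the helper name is illustrative, and black as the specified color follows the example above.

```python
from PIL import Image, ImageStat

def solid_color_gradient(strip: Image.Image, size: tuple,
                         end_color=(0, 0, 0)) -> Image.Image:
    # S5003: take the average color of the copied strip as the target color.
    target = tuple(int(v) for v in ImageStat.Stat(strip.convert("RGB")).mean)
    # S5004: fade from the target color to the specified color (black here)
    # along the direction from the first display to the target display.
    w, h = size
    row = []
    for x in range(w):
        t = x / max(w - 1, 1)   # 0 at the edge nearest the first display
        row.append(tuple(int(c0 + (c1 - c0) * t)
                         for c0, c1 in zip(target, end_color)))
    out = Image.new("RGB", size)
    out.putdata(row * h)        # repeat the gradient row for every line
    return out
```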
Based on the foregoing method, when a user interface is displayed on the first display, a solid color gradient rendering effect may be displayed on both the second display and the third display, and immersive viewing experience can be provided for the user in a combined display manner of the three displays.
Refer to
S5201: The effect processing module determines a first interface based on content of the first interface from the display management module.
The effect processing module and the display management module may be respectively the effect processing module and the display management module in any OS described in the foregoing embodiments.
S5202: The effect processing module generates a corresponding particle animation interface based on the first interface, where a color of a particle on the particle animation interface is determined based on a color on the first interface.
In an optional implementation, the particle animation interface may display a particle animation corresponding to the entire first interface. In this manner, both the second display and the third display that are located on two sides of the first display may display a same particle animation interface. Optionally, in this manner, colors of particles at different locations on the particle animation interface may be determined based on colors of corresponding locations on the first interface, or may be determined in another manner. This is not limited in embodiments of this application. For example, the effect processing module may perform Gaussian blur processing on the first interface, and then select a color from different locations on the processed interface as the color of the particle corresponding to the location. For another example, the effect processing module may use a color at a specified location on the first interface as the color of the particle. Optionally, in this manner, a size of the particle animation interface may be the same as a size of the first interface.
For example, as shown in (a) in
In another optional implementation, the particle animation interface may display a particle animation corresponding to a partial interface of the first interface. The partial interface may be an interface of a specified size that is on the first interface and that is close to the target display, and the target display is a display configured to display the particle animation interface. In this manner, the second display and the third display that are located on two sides of the first display may display different particle animation interfaces. Optionally, in this manner, colors of particles at different locations on the particle animation interface may be determined based on colors of corresponding locations on the partial interface, or may be determined in another manner. This is not limited in this embodiment of this application. For example, this may be determined based on the method in the foregoing manner. Optionally, a size of the particle animation interface may be the same as or different from a size of the partial interface.
For example, as shown in (a) in
Optionally, when the first interface is an interface (for example, a video interface) of media content, in the foregoing two manners, a dynamic change amplitude (including height and depth change amplitudes) of a particle on the particle animation interface may be determined based on factors such as a playing rhythm and a volume of the media content; and a density of particles on the particle animation interface may be determined based on a color feature or an audio feature of the media content. The color feature may be a depth of an interface color, or the like, and the depth of the interface color may be distinguished based on parameters such as saturation, a color parameter, and brightness of an interface. The audio feature may be an audio rhythm or the like, and whether the audio rhythm is soothing or tense may be determined based on parameters such as a frequency of the audio. For example, when it is determined that the color on the first interface belongs to a light color range, a relatively low particle density may be set; and when it is determined that the color on the first interface belongs to a dark color range, a relatively high particle density may be set. For another example, when it is determined that the audio rhythm corresponding to the media content on the first interface belongs to a soothing range, a relatively low particle density may be set; and when it is determined that the audio rhythm corresponding to the media content on the first interface belongs to a tense range, a relatively high particle density may be set.
In a specific implementation process, for the interface color, a color parameter range corresponding to the light color range and a color parameter range corresponding to the dark color range may be preset; for the audio rhythm, an audio parameter range corresponding to the soothing range and an audio parameter range corresponding to the tense range may be preset. Particle density parameters corresponding to different color depth ranges and/or audio rhythm ranges may also be preset. Therefore, when the particle animation interface is displayed, a corresponding color depth range and/or audio rhythm range may be determined based on a color parameter and/or an audio parameter corresponding to the interface, to determine the particle density parameter of the particle animation interface.
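The preset-range lookup described above might look like the following sketch; the brightness threshold, the tempo bound used to stand in for the audio parameter, and the density values are all illustrative assumptions.

```python
from typing import Optional

# Illustrative preset ranges and density values (assumptions, not values
# defined by this application).
LIGHT_BRIGHTNESS_MIN = 0.5     # boundary between light and dark color ranges
SOOTHING_BPM_MAX = 100         # boundary between soothing and tense rhythms
LOW_DENSITY, HIGH_DENSITY = 40, 160   # particles per interface

def particle_density(brightness: Optional[float] = None,
                     tempo_bpm: Optional[float] = None) -> int:
    """Map a color parameter and/or an audio parameter to a particle density."""
    candidates = []
    if brightness is not None:   # color feature: light colors -> low density
        candidates.append(LOW_DENSITY if brightness >= LIGHT_BRIGHTNESS_MIN
                          else HIGH_DENSITY)
    if tempo_bpm is not None:    # audio feature: soothing rhythm -> low density
        candidates.append(LOW_DENSITY if tempo_bpm < SOOTHING_BPM_MAX
                          else HIGH_DENSITY)
    return max(candidates) if candidates else LOW_DENSITY
```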
In some embodiments of this application, when only the first interface is displayed on the first display, and a display area of the first interface on the first display does not occupy a complete display area of the first display, the first display management module may generate, by using the first effect processing module, a particle animation displayed outside the first interface, and display the particle animation in a display area that is not occupied by the first interface on the first display. For a manner of generating the particle animation by using the first effect processing module, refer to the foregoing method.
For example, as shown in
Based on the foregoing method, when a user interface is displayed on the first display, a particle animation rendering effect may be displayed on both the second display and the third display, and immersive viewing experience can be provided for the user in a combined display manner of the three displays.
In some embodiments of this application, in a scenario in which the electronic device in the foregoing embodiments is an in-vehicle terminal, the control system in the foregoing embodiments may be deployed in a vehicle cockpit. In addition to the components such as the device, the system, the functional module, and the hardware apparatus in the foregoing embodiments, the control system may further include a lighting device (for example, an atmosphere light), a seat, an air conditioning device, and the like shown in
The first OS may control the lighting device, the seat, and the air conditioning device in the control system based on content displayed on the first interface. Specifically, the following cases may be included (a combined control sketch follows the three cases):
(1) When an audio is being played while the first interface is displayed, the first display management module in the first OS may notify the air conditioner management module of a detected event that the playing volume of the audio increases by a specified number of decibels within a specified duration (for example, a millisecond-level duration such as 5 milliseconds). After receiving the notification, the air conditioner management module may increase the air volume of an air exhaust vent of the air conditioning device by a specified value, extend the air exhaust time by a specified duration, or increase the air speed of an air exhaust vent by a specified value, so that the user can experience this change in an immersive manner. Optionally, the first display management module may further detect, by using the seat management module, the seat taken by a user, so that the air conditioner management module adjusts, in the foregoing manner, only an air conditioner corresponding to the seat taken by the user. Optionally, the first display management module may further control, by using the seat management module, the seat to shake, and the like, to provide the user with more immersive experience.
(2) The first display management module in the first OS may determine a target color according to a color on the first interface, and indicate the target color to the lighting management module. The lighting management module may adjust a color and brightness of the lighting device based on the target color, to control a display manner of the lighting device in real time, and assist in providing immersive viewing experience. For a manner of determining the target color based on the color on the first interface, refer to the method described in step S5003 in the foregoing embodiment. Details are not described herein again.
(3) The first display management module in the first OS may perform artificial intelligence (artificial intelligence, AI) analysis on content of the first interface to recognize a picture scenario, and determine, based on a preset correspondence between different scenarios and air conditioner temperatures, an air conditioner temperature corresponding to a current scenario. In this way, the temperature of the air conditioning device can be adjusted through the air conditioner management module. For example, when the recognized scenario is a cold glacier scenario or the like, the temperature of the air conditioner may be lowered; when the recognized scenario is a hot environment or the like, the temperature of the air conditioner may be increased; or when the recognized scenario is a natural scenario like flowers, grass, or trees, a corresponding aroma may be released.
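The combined control sketch below illustrates cases (1) to (3) as simple event handlers; the class name, method names, and scenario-to-temperature values are assumptions, and a real system would call the air conditioner, lighting, and seat management modules instead of printing.

```python
SCENE_TEMPERATURE_C = {"glacier": 18, "desert": 27}   # case (3), assumed values

class CockpitPeripherals:
    def on_volume_spike(self, seat: str, delta_db: float) -> None:
        # Case (1): a fast volume rise boosts the air vent at the user's seat.
        print(f"air conditioner: +{delta_db:.0f} dB spike, raise airflow at {seat}")

    def on_target_color(self, rgb: tuple) -> None:
        # Case (2): drive the atmosphere light with the color derived from
        # the first interface (see step S5003 for one way to pick it).
        print(f"lighting: set atmosphere light color to {rgb}")

    def on_scene_recognized(self, scene: str) -> None:
        # Case (3): adjust the cabin temperature for the recognized scenario.
        temp = SCENE_TEMPERATURE_C.get(scene)
        if temp is not None:
            print(f"air conditioner: set temperature to {temp} degC for '{scene}'")
```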
Based on the foregoing method, an immersive service may be provided together with peripheral elements when content is displayed, thereby improving immersive experience of the user.
In the multi-screen collaborative display method provided in embodiments of this application, the plurality of displays may simultaneously display an image of an application and different related special effects (including Gaussian blur, solid color gradient, particle animation, and the like), to implement a multi-screen simultaneous display effect and provide richer immersive content outputs. In this way, relatively high flexibility and practicability are provided, and immersive experience of the user can be improved. In addition, according to the multi-screen collaborative display method provided in embodiments of this application, a service provided by a peripheral device of a display, for example, a light, an air conditioner, or a seat, may be further adjusted based on output content (including image content, sound content, and the like). For example, a color and brightness of the light, a temperature of the air conditioner, an air direction and air volume of an air exhaust outlet, and a seat status may be adjusted. This can further provide more diversified immersive services and improve immersive experience of the user.
According to the audio control method provided in embodiments of this application, audio playing corresponding to a display can be controlled in a scenario in which a plurality of screens and a plurality of audio output apparatuses are included, to improve flexibility and usability of audio playing control.
The audio control method provided in embodiments of this application may be applied to an audio control system that includes a control device, a plurality of displays, and a plurality of audio output apparatuses. The control device is configured to control the plurality of displays and the plurality of audio output apparatuses. The plurality of displays may be displays of the control device, or may be displays of other electronic devices. In the plurality of displays, different displays may belong to a same device, or may belong to different devices. The audio output apparatuses and the displays may belong to a same electronic device or different electronic devices. According to solutions provided in embodiments of this application, an appropriate audio output apparatus can be flexibly allocated to a display based on specific running statuses of the plurality of displays in the system to play an audio, so that flexibility, accuracy, and efficiency of audio control can be improved, and user experience of listening to an audio can be improved.
In some embodiments of this application, the control device may be the in-vehicle terminal described in the foregoing content of Part 1 (namely, the content of the interface transfer method part), or may be the electronic device described in the foregoing content of Part 2 (namely, the content of the multi-screen collaborative display method part). The plurality of displays may be the plurality of displays provided in the foregoing content of Part 1, or may be the plurality of displays provided in the foregoing content of Part 2.
Optionally, the electronic device shown in
In some embodiments of this application, the audio control system may be a system including a control device, and a plurality of displays and a plurality of audio output apparatuses that are located in a same space environment. The space environment may be, for example, a cockpit interior or an indoor environment. Based on the solution provided in this embodiment of this application, the plurality of audio output apparatuses in the space environment may provide an audio playing service for a user under unified control of the control device.
The solution provided in this embodiment of this application mainly includes audio control methods in two scenarios. The two scenarios are respectively a non-content cross-screen transfer scenario and a content cross-screen transfer scenario. The non-content cross-screen transfer scenario means a scenario in which content interaction does not exist between different displays in an audio control system. The content cross-screen transfer scenario means a scenario in which content interaction exists between different displays in the audio control system. Display content interaction means that a service/application running on a display may be shared to another display for running, or content displayed on a display may be shared to another display for display. The following describes audio control methods in various scenarios.
In this scenario, there is no content transfer between different displays, and therefore audio transfer is not involved. When a control device controls audio playing, interference between audio playing processes corresponding to content on different displays is relatively small. Therefore, the control device may relatively independently control the content displayed on each display and the corresponding audio playing process. Specifically, in this scenario, the electronic device may use a partition management mechanism when performing audio control. The partition management mechanism means that the electronic device may perform sound zone division on the space environment corresponding to the audio control system. A sound zone defines an area range in which an audio output apparatus is located in the space environment, different sound zones are different area ranges in the space environment, and each area defined by a sound zone includes at least one display and at least one audio output apparatus. Each display may be associated with the sound zone in which the display is located, and audio data corresponding to content displayed on each display may be played by using an audio output apparatus in the sound zone associated with the display. When displaying content on a display, the electronic device may select the associated sound zone based on the display, and play an audio by using an audio output apparatus in that sound zone.
In some embodiments of this application, the sound zone may be classified into a primary available sound zone and a secondary available sound zone. The primary available sound zone may be used as a default sound zone. When multi-sound zone division is not performed, each display may be associated with the primary available sound zone, and may be associated with a newly obtained sound zone after the multi-sound zone division is performed. For the space environment corresponding to the audio control system, the primary available sound zone may be the entire space environment, and audio output apparatuses corresponding to the primary available sound zone are all audio output apparatuses in the entire space environment. In other words, if the primary available sound zone is selected during audio playing, all audio output apparatuses in the space environment may be used to play an audio. The secondary available sound zone is a partial area in the space environment, and an audio output apparatus corresponding to the secondary available sound zone is an audio output apparatus in a partial area corresponding to the secondary available sound zone. There may be a plurality of secondary available sound zones.
In this embodiment of this application, one sound zone may be associated with a plurality of displays. All the displays associated with a sound zone may play audios by using the audio output apparatuses in that sound zone. Conversely, one display can play an audio only by using an audio output apparatus in the one sound zone associated with the display. Sound zone division may be performed automatically by the control device according to a specified rule, or may be performed manually by the user.
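As a data-model sketch of the partition management mechanism, the mapping below associates each display with one sound zone and each sound zone with its audio output apparatuses, with the primary available sound zone as the default; all zone and device names are illustrative assumptions.

```python
# Zone and device names are assumptions for illustration only.
SOUND_ZONES = {
    "primary": ["vehicle_loudspeaker", "driver_headrest", "passenger_headrest"],
    "zone_1":  ["driver_loudspeaker", "driver_headrest"],             # driver area
    "zone_2":  ["front_passenger_loudspeaker", "passenger_headrest"],
}
DISPLAY_TO_ZONE = {
    "central_display": "zone_1",
    "front_passenger_screen": "zone_2",
}

def apparatuses_for_display(display_id: str) -> list:
    """Audio output apparatuses usable for audio of content on this display."""
    # Before multi-sound-zone division, every display falls back to the
    # primary available sound zone, i.e. the whole space environment.
    zone = DISPLAY_TO_ZONE.get(display_id, "primary")
    return SOUND_ZONES[zone]
```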
For example, the audio control system may be a system including a control device, a plurality of displays, and a plurality of audio output apparatuses that are deployed in a vehicle cockpit, and the space environment corresponding to the audio control system may be space in the vehicle cockpit. As shown in
It should be noted that a quantity of sound zones, a sound zone division manner, an association relationship between a display and a sound zone, and the like shown in
Refer to
The system layer includes a plurality of displays (for example, the central display screen, the front passenger screen, and the rear-row screen shown in
The media framework layer may include an audio control module (for example, shown in
The HAL is an interface layer located between a kernel layer (which may be the kernel layer shown in
It should be noted that, a connection manner between modules or apparatuses such as the display, the audio control module, the sound zone module, the bus, and the interface corresponding to the audio output apparatus shown in
In an optional implementation, different buses may be respectively connected to interfaces corresponding to different audio output apparatuses, and each bus is connected to an interface corresponding to only one audio output apparatus. In this case, the audio control module can select only one audio output apparatus through one bus. After selecting an audio output apparatus, the audio control module may send audio data to a bus corresponding to the selected audio output apparatus, and transmit the audio data to the selected audio output apparatus through an interface connected to the bus and the bus. For example, as shown in
It should be noted that, for ease of viewing, for a connection manner between the sound zone module and the bus,
In another optional implementation, each bus may be connected to interfaces corresponding to a plurality of audio output apparatuses, and the audio control module may select a plurality of audio output apparatuses through one bus. In addition, each audio output apparatus may also be connected to a plurality of buses. For example, the driver headrest speaker, the driver loudspeaker, the front passenger loudspeaker, and the front passenger headrest speaker shown in
It should be noted that the software and hardware layer structure shown in
For example, when the control device runs a first application and displays application content of the first application on a first display, the audio control module may select, based on the association relationship between a display and a sound zone, the sound zone associated with the first display, and further select, from the sound zone based on an audio type, an audio output apparatus for playing an audio. The audio control module may then transmit, through a bus corresponding to the selected audio output apparatus, the audio corresponding to the application content to an interface at the HAL, configure an audio output manner of the audio output apparatus, and transmit the audio corresponding to the application content to the corresponding audio output apparatus for playing. When the control device runs a second application and displays application content of the second application on a second display, a corresponding audio playing method is similar.
As shown in
S5901: An electronic device obtains and stores an association relationship between a display identifier and a sound zone identifier.
The electronic device may be the control device shown in
In some implementations, the association relationship between the display identifier and the sound zone identifier may be obtained and stored by using a sound zone management module of the electronic device. The association relationship between the display identifier and the sound zone identifier may be set by a user. Alternatively, the sound zone management module may automatically perform, according to a specified rule, sound zone division on ambient space corresponding to an audio control system, and after obtaining a location of a display in the ambient space, associate the display identifier of the display with the sound zone identifier of a sound zone in which the display is located, to obtain the association relationship between the display identifier and the sound zone identifier.
For example, based on the sound zone division manner shown in
S5902: When displaying first content on a first display by using the first application, the electronic device sends an identifier of the first display, a first audio corresponding to the first content, and an audio type of the first audio to an audio control module.
The first application may obtain, from a display management module, a display identifier of the first display on which the first application is located. The display management module may be configured to manage a plurality of displays. The display management module may be deployed at the application framework layer shown in
For example, when the first display is the central display screen, that is, the first application runs and displays on the central display screen, and the first audio played by the first application is a media-type audio, the first application may obtain a display identifier of the central display screen from the display management module. The first application may send the display identifier of the central display screen, the first audio, and the audio type (for example, a media-type audio type) of the first audio to the audio control module.
S5903: The electronic device obtains, from the sound zone management module based on the identifier of the first display, a target sound zone associated with the first display.
For example, the target sound zone associated with the first display may be obtained from the sound zone management module based on the identifier of the first display by using the audio control module of the electronic device. The audio control module may obtain, from the sound zone management module, an identifier of the target sound zone associated with the identifier of the first display. The sound zone management module determines, based on the received identifier of the first display and the stored association relationship between the display identifier and the sound zone identifier, the identifier of the target sound zone associated with the received identifier of the first display, and provides the identifier of the target sound zone to the audio control module, so that the audio control module determines the corresponding target sound zone.
For example, based on the foregoing example, the audio control module may send the display identifier of the central display screen to the sound zone management module. Because the central display screen is associated with the sound zone 1, the sound zone management module may determine the sound zone 1 as the target sound zone, and return a sound zone identifier of the sound zone 1 to the audio control module.
S5904: The electronic device selects a target audio output apparatus in the target sound zone based on the audio type of the first audio.
For example, the target audio output apparatus may be selected in the target sound zone by using the audio control module of the electronic device. After selecting the target sound zone, the audio control module may play an audio of the first application by using an audio output apparatus in the target sound zone, that is, use the target sound zone as the audio sound zone of the first application. The audio control module may obtain an audio output apparatus included in the target sound zone from the sound zone management module.
In some embodiments of this application, the audio control module may select, based on a preset correspondence between an audio type and an audio output apparatus and a priority of an audio output apparatus, an audio output apparatus corresponding to the audio type of the first audio, and play the first audio by using the selected audio output apparatus. Audio output apparatuses corresponding to different audio types may be the same or different. Priorities of audio output apparatuses corresponding to a same audio type in different sound zones may be the same or different. Priorities of audio output apparatuses corresponding to different audio types in a same sound zone may be the same or different. The priority of the audio output apparatus may be determined based on a location (for example, a driver seat or a front passenger seat), an audio type (for example, a media audio) of a played audio, an audio playing mode (for example, a private mode), a display mode (for example, copying display or splicing display), or the like, or may be determined based on a user indication (for example, a voice indication or an operation indication). For example, when the central display screen and the front passenger screen display video content in a splicing manner, a priority of the vehicle loudspeaker may be set to be the highest. When the front passenger screen displays a video in the private mode, a priority of a front passenger Bluetooth headset may be set to be higher than a priority of a front passenger loudspeaker. For another example, a priority of an audio output apparatus manually selected by the user may be the highest. Optionally, the audio control module may obtain the priority of the audio output apparatus from a configuration file of the system.
In some embodiments of this application, the audio control module may preferentially select an audio output apparatus corresponding to an audio type to play an audio of the corresponding audio type. In a plurality of audio output apparatuses corresponding to the audio type, the audio control module may preferentially select one or more audio output apparatuses with a higher priority to play the audio of the corresponding audio type. For example, based on the foregoing example 1, the preset correspondence between the audio type and the audio output apparatus may be: The media-type audio corresponds to a Bluetooth headset, the vehicle loudspeaker, each seat loudspeaker, each seat headrest speaker, and the like; and the navigation-type audio corresponds to the driver Bluetooth headset, the vehicle loudspeaker, the driver loudspeaker, the driver headrest speaker, and the like. When a same type of audio corresponds to a plurality of audio output apparatuses, different audio output apparatuses have different priorities. For example, there may be cases described in the following examples:
Case 1: In a case of media-type audio, priorities of audio output apparatuses corresponding to the sound zone module 1 associated with the central display screen in
Case 2: In a case of navigation-type audio, priorities of audio output apparatuses corresponding to the sound zone module 1 associated with the central display screen in
Case 3: In a case of media-type audio, priorities of audio output apparatuses corresponding to the sound zone module 2 associated with the front passenger screen in
For example, as shown in
Certainly, in the foregoing example, the priorities of the audio output apparatuses in each sound zone may alternatively follow other orders, for example, a user-specified order. The priority order is not limited to the foregoing examples.
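Step S5904 can be sketched as a priority-table lookup; the table below encodes orders in the spirit of the cases above, and the zone, type, and device names are assumptions rather than values defined by this application.

```python
# (zone, audio type) -> apparatuses from highest to lowest priority;
# the concrete orders are illustrative assumptions.
PRIORITY_TABLE = {
    ("zone_1", "media"):      ["driver_bt_headset", "vehicle_loudspeaker",
                               "driver_loudspeaker", "driver_headrest"],
    ("zone_1", "navigation"): ["driver_bt_headset", "driver_headrest",
                               "driver_loudspeaker", "vehicle_loudspeaker"],
    ("zone_2", "media"):      ["passenger_bt_headset",
                               "front_passenger_loudspeaker", "passenger_headrest"],
}

def select_target_apparatus(zone: str, audio_type: str, available: list) -> str:
    """S5904: pick the highest-priority available apparatus for this audio type."""
    for candidate in PRIORITY_TABLE.get((zone, audio_type), []):
        if candidate in available:
            return candidate
    return available[0]   # fall back to any apparatus in the zone
```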
S5905: The electronic device sends the first audio to the sound zone management module, and indicates the sound zone management module to transmit the first audio to the target audio output apparatus.
For example, the first audio may be sent to the sound zone management module by using the audio control module of the electronic device, and the sound zone management module is indicated to transmit the first audio to the target audio output apparatus.
S5906: The electronic device sends the first audio to the target audio output apparatus through a bus between the sound zone management module and the target audio output apparatus.
For example, the sound zone management module in the electronic device may send the first audio to the target audio output apparatus through the bus between the sound zone management module and the target audio output apparatus.
S5907: The target audio output apparatus plays the first audio after receiving the first audio.
For example, based on the scenario shown in
In the sound zone division method provided in the foregoing embodiment, sound zone division is performed based on a space area range. However, a sound zone division rule applicable to the solution of this application is not limited thereto. In some embodiments of this application, sound zone division may alternatively be performed based on another feature of an audio apparatus. For example, for audio apparatuses in a vehicle, sound zone division may be performed based on a feature of an audio output apparatus, for example, a status type (for example, a static type or a moving type), a location range (for example, an area inside the vehicle, an area outside the vehicle, a front-row area, or a rear-row area), or a device type (for example, a speaker or a Bluetooth headset). A specific division rule may include at least one of the following (a classification sketch follows these rules):
1. For a static audio output apparatus, sound zone division may be performed based on a device location. The static audio output apparatus in this embodiment of this application may be an audio output apparatus at a fixed position, for example, may include a fixed audio output apparatus configured on a vehicle.
In this manner, audio output apparatuses located in a same location area may be classified into a same sound zone, and audio output apparatuses located in different location areas may be classified into different sound zones. A sound zone obtained through division in this manner is a static sound zone.
For example, an out-of-vehicle audio output apparatus configured outside a cockpit of the vehicle may be classified into an out-of-vehicle sound zone, and an audio output apparatus configured inside the cockpit of the vehicle may be classified into an in-vehicle sound zone.
2. For a static audio output apparatus, sound zone division may be performed based on a device type.
In this manner, audio output apparatuses of a same device type may be classified into a same sound zone, and audio output apparatuses of different device types may be classified into different sound zones. A sound zone obtained through division in this manner is a static sound zone.
For example, speakers configured in the cockpit may be classified into a speaker sound zone.
3. For a static audio output apparatus, sound zone division may be performed based on a device location and a device type.
In this manner, audio output apparatuses located in different location areas may be directly classified into different sound zones; and in audio output apparatuses located in a same location area, audio output apparatuses of a same device type may be classified into a same sound zone, and audio output apparatuses of different device types may be classified into different sound zones. In other words, the audio output apparatuses in different location areas belong to different sound zones, the audio output apparatuses of different device types in a same location area belong to different sound zones, and the audio output apparatuses of a same device type in a same location area belong to a same sound zone. A sound zone obtained through division in this manner is a static sound zone.
4. For a moving audio output apparatus, each audio output apparatus may be classified into one sound zone. The moving audio output apparatus in this embodiment of this application may be an audio output apparatus whose location is not fixed, for example, may include a mobile playing device like a Bluetooth headset carried by the user into the vehicle cockpit.
A sound zone obtained through division in this manner is a moving sound zone.
For example, a Bluetooth headset that is carried by a front passenger and that is connected to the front passenger screen in the vehicle cockpit may be classified into a Bluetooth sound zone 1, and a Bluetooth headset that is carried by the user in the rear-row left seat and that is connected to the rear-row left screen in the vehicle cockpit may be classified into a Bluetooth sound zone 2.
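The classification sketch below applies rule 3 for static apparatuses (rules 1 and 2 being its single-feature special cases) and rule 4 for moving apparatuses; the record fields and the naming scheme for zone identifiers are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class AudioApparatus:
    name: str
    status: str     # "static" or "moving"
    location: str   # e.g. "in_vehicle", "out_of_vehicle"
    dev_type: str   # e.g. "speaker", "bt_headset"

def divide_sound_zones(apparatuses):
    zones = {}
    for a in apparatuses:
        if a.status == "moving":
            zone_id = f"moving:{a.name}"                   # rule 4: one zone each
        else:
            zone_id = f"static:{a.location}:{a.dev_type}"  # rule 3: location + type
        zones.setdefault(zone_id, []).append(a.name)
    return zones
```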
In some embodiments of this application, the foregoing sound zone division rules (namely, the foregoing rules 1 to 3) based on which the static audio output apparatus is classified into a sound zone may be used as a head unit audio configuration file to be preconfigured in a head unit system. The electronic device to which the displays in the cockpit belong may perform sound zone division according to the foregoing sound zone division rules after obtaining the foregoing sound zone division rules.
In some embodiments of this application, for the static audio output apparatus in the vehicle, the user may perform static sound zone division on the static audio output apparatus in advance, and store, into the head unit system, sound zone information of each obtained static sound zone as an audio configuration file of a head unit. In this case, the electronic device to which the displays in the cockpit belong may directly determine, based on the audio configuration file stored in the head unit system, the sound zone information of the divided static sound zone.
In some embodiments of this application, for the moving audio output apparatus, after establishing a communication connection to the moving audio output apparatus, the electronic device may perform moving sound zone division on the moving audio output apparatus according to the preset sound zone division rule (namely, the foregoing rule 4), so that the moving sound zone can be updated based on a change status of the moving audio output apparatus in the cockpit, thereby ensuring accuracy of the divided moving sound zone.
The sound zone information in this embodiment of this application includes at least the sound zone identifier and an identifier of an audio output apparatus included in a sound zone, may further include type information such as a status type, a location range, or a device type of the audio output apparatus included in the sound zone, and may further include an audio output channel corresponding to the sound zone, and audio transmission-related configuration information such as an audio format, a sampling rate, or a volume range supported by the audio output channel.
In some embodiments of this application, when the audio configuration file of the head unit includes sound zone information of a plurality of static sound zones, the electronic device may further prioritize the plurality of static sound zones based on a user indication or a user-specified rule, and select a used sound zone based on the priority order. Audio output apparatuses included in a same sound zone belong to a same type. Therefore, prioritizing the plurality of static sound zones may also be understood as prioritizing the plurality of different types of audio output apparatuses. The priority order indicates the priority of selecting a sound zone or an audio output apparatus for playing an audio. For example, for audio output apparatuses in the vehicle, a possible priority order may be that a Bluetooth headset has a higher priority than an in-vehicle speaker, and the in-vehicle speaker has a higher priority than an out-of-vehicle speaker. Correspondingly, a priority of a Bluetooth sound zone is higher than a priority of an in-vehicle speaker sound zone, and the priority of the in-vehicle speaker sound zone is higher than a priority of an out-of-vehicle speaker sound zone. Optionally, when a same display is associated with different Bluetooth sound zones, priorities of the different Bluetooth sound zones may be determined based on a time of connection between a Bluetooth headset in a Bluetooth sound zone and the display, where a priority of the Bluetooth sound zone to which a Bluetooth headset connected to the display earlier belongs is higher than a priority of the Bluetooth sound zone to which a Bluetooth headset connected to the display later belongs.
In some embodiments of this application, different types of displays may correspond to different priorities. In this case, the electronic device may separately sort, based on the priorities corresponding to the different displays, the audio output apparatuses available to each display, and further select an audio output apparatus with a highest priority from the audio output apparatuses available to a display, to play the audio corresponding to the content on that display. For example, when the cockpit includes speakers and Bluetooth headsets, the priority order corresponding to the central display screen and an instrument screen in the cockpit may be that a speaker has a higher priority than a Bluetooth headset, and the priority order corresponding to the front passenger screen and the rear-row screen in the cockpit may be that a Bluetooth headset has a higher priority than a speaker.
It should be understood that the foregoing priority order is merely an example, and an actually available priority order is not limited thereto. Specifically, the priority order may be determined based on a setting of the user. This is not specifically limited in this embodiment of this application.
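A per-display priority order might be realized as in the following sketch; the two orders encoded are simply the example given above, and, as noted, the actual order may come from a user setting.

```python
# Per-display apparatus-type preference; the orders below follow the
# example above and are configurable in practice.
DISPLAY_TYPE_ORDER = {
    "central_display":        ["speaker", "bt_headset"],
    "instrument_screen":      ["speaker", "bt_headset"],
    "front_passenger_screen": ["bt_headset", "speaker"],
    "rear_row_screen":        ["bt_headset", "speaker"],
}

def best_apparatus(display_id: str, available: list) -> str:
    """available: (name, device_type) pairs; returns the preferred name."""
    order = DISPLAY_TYPE_ORDER.get(display_id, ["speaker", "bt_headset"])
    def rank(item):
        return order.index(item[1]) if item[1] in order else len(order)
    return min(available, key=rank)[0]
```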
In some embodiments of this application, when the in-vehicle terminal described in Part 1 displays an interface, the in-vehicle terminal may control an audio device in a vehicle to play a corresponding audio according to the audio control method in the non-content transfer scenario. When the electronic device described in Part 2 is an electronic device in a cockpit, an audio device in the cockpit may be controlled to play a corresponding audio according to the audio control method in the non-content transfer scenario when content is displayed.
In Scenario 1 (namely, the non-content transfer scenario), when the user triggers transfer of content displayed on one display to another display for display, the scenario changes to a content transfer scenario, and audio control may be performed according to the method described in this part of content.
In this embodiment of this application, in the content cross-screen transfer scenario, the control device may determine, based on at least one piece of information of an application type, an audio type, a user interface (user interface, UI) type, or a content transfer manner, whether to perform audio transfer, that is, determine whether to adjust a playing manner of an audio corresponding to transferred content. In this embodiment of this application, adjusting an audio playing manner may be: adjusting an audio output apparatus used when the audio is played (for example, switching the used audio output apparatus), and/or adjusting an audio output manner of an audio output apparatus used when the audio is played (for example, switching a sound channel of the audio output apparatus).
For example, refer to
In this embodiment of this application, in the content cross-screen transfer scenario, an audio transfer rule may include one or more of the following (a rule-table sketch follows the rules):
Rule 1: If media-type content is transferred, a corresponding media-type audio is transferred along with the media content. To be specific, when the media-type content is displayed on a specific display, the media-type audio is played by using an audio output apparatus in a sound zone corresponding to the specific display.
For example, an audio output apparatus that corresponds to the front passenger screen and that has a highest priority for playing a media-type audio may be the front passenger Bluetooth headset. After media-type content is transferred to the front passenger screen for display, a media-type audio corresponding to the media-type content needs to be played by using the front passenger Bluetooth headset.
Rule 2: A navigation audio, a notification audio, a system audio, or an alarm audio may not be transferred along with content that is transferred across screens.
Optionally, the navigation audio, the system audio, or the alarm audio may be played by preferentially using an audio output apparatus that is mainly used by a driver, for example, a driver loudspeaker, a driver headrest speaker, a driver Bluetooth headset, or a vehicle loudspeaker.
Optionally, the notification audio may be played by using an audio output apparatus in a sound zone in which a user receiving the notification audio is located.
Rule 3: If content of a voice call or a video call is transferred, a corresponding call audio can be transferred along with the content.
Optionally, after the call audio is transferred along with the content of the voice/video call, the call audio may be switched to be played preferentially through the vehicle loudspeaker, and certainly, may alternatively be switched to be played by using an audio output apparatus in a sound zone corresponding to a display on which the content after the transfer is located.
Rule 4: An audio for responding to a voice control command can be transferred along with the location of the user who delivers the command.
For example, when the user is in a driver seat, after the user delivers the voice control command, the control device may display content corresponding to the voice control command on the driver screen, and play, by using an audio output apparatus in a sound zone associated with the driver screen, the audio for responding to the voice control command. When the user moves from the driver seat to a front passenger seat, the control device may display, on the front passenger screen based on a location change of the user, the content corresponding to the voice control command of the user, and play, by using an audio output apparatus in a sound zone associated with the front passenger screen, the audio for responding to the voice control command. Optionally, the control device may recognize the location change of the user in a manner of face detection, positioning, or the like. This is not limited in this application.
Rule 5: When a control or widget is transferred, a corresponding audio may not be transferred accordingly.
Rule 6: For a floating window, picture-in-picture, or the like, it can be determined based on interface content or a content transfer manner whether an audio is to be transferred.
When determining, based on the interface content, whether to perform transfer, the control device may determine, based on an audio type of the audio corresponding to the interface content, whether to transfer the audio that corresponds to the floating window, the picture-in-picture, or the like during the transfer. For example, whether to perform the transfer may be determined based on Rule 1 to Rule 4. For a method for determining, by the control device based on the content transfer manner, whether to perform the transfer, refer to the following description of related content.
In the foregoing rules, if the audio is transferred along with the content, after the content is transferred, the control device may select an audio output apparatus for playing the audio transferred along with the content by using the method described in the content of Scenario 1 (namely, the non-content transfer scenario). Optionally, if the selected audio output apparatus is playing an audio, the control device may control the audio output apparatus to pause the audio that is being played currently and switch to playing the transferred audio. Subsequently, when it is determined that the audio output apparatus does not need to play the transferred audio, the audio output apparatus may be controlled to continue to play the audio that is originally played by the audio output apparatus. In some implementations, prompt information may alternatively be output to the user to prompt the user to perform a selection. Based on the selection of the user, the audio output apparatus outputs the originally played audio or the transferred audio.
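The rule-table sketch below condenses Rule 1 to Rule 6 into a lookup plus a floating-window branch; treating a "move" as taking the audio along while a "copy" does not is an assumption used to make Rule 6 concrete, and the type names are illustrative.

```python
# Whether the audio follows content across screens, keyed by audio type.
FOLLOWS_CONTENT = {
    "media": True,           # Rule 1
    "navigation": False,     # Rule 2
    "notification": False,   # Rule 2
    "system": False,         # Rule 2
    "alarm": False,          # Rule 2
    "call": True,            # Rule 3: voice/video call audio
    "voice_response": True,  # Rule 4: follows the commanding user's location
    "control": False,        # Rule 5: controls and widgets
}

def audio_transfers(audio_type: str, ui_type: str = "full_interface",
                    transfer_manner: str = "move") -> bool:
    if ui_type in ("floating_window", "picture_in_picture"):
        # Rule 6: decide from interface content or the transfer manner.
        return transfer_manner == "move" and FOLLOWS_CONTENT.get(audio_type, False)
    return FOLLOWS_CONTENT.get(audio_type, False)   # Rules 1 to 5
```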
The foregoing transfer rule may be applied to the interface transfer scenario described in the interface transfer method in Part 1 in the foregoing embodiments. For example, in the scenarios shown in
The following describes the foregoing transfer rules with reference to specific embodiments.
1. When content is transferred across screens, an audio playing manner is switched accordingly.
In some embodiments of this application, when content displayed on a source display is shared to a target display, and an audio corresponding to the content is a media-type audio or a call-type audio (a voice call audio, a video call audio, or the like), a control device may adjust a playing manner of the audio corresponding to the content along with cross-screen transfer of the content. A specific adjustment manner may be any one of the following:
(1) Switch to playing the audio corresponding to the shared content by using an audio output apparatus in a sound zone associated with the target display.
The control device may select, according to the method described in the content of Scenario 1 (namely, the non-content transfer scenario), an audio output apparatus that needs to be used after the switching from the audio output apparatus in the sound zone associated with the target display. Details are not described herein again.
(2) Switch to playing the audio corresponding to the shared content by using an audio playing apparatus in a sound zone associated with a source display and an audio playing apparatus in a sound zone associated with the target display.
The control device may continue to use the audio playing apparatus in the sound zone associated with the currently used source display, or may reselect, according to the method described in the content of Scenario 1 (namely, the non-content transfer scenario), an audio output apparatus that needs to be used after the switching from the audio output apparatus in the sound zone associated with the source display. The control device may select, according to the method described in the content of Scenario 1 (namely, the non-content transfer scenario), the audio output apparatus that needs to be used after the switching from the audio playing apparatus in the sound zone associated with the target display. Details are not described herein.
The following provides description based on specific scenarios and examples.
In this scenario, the control device may shift (or referred to as transfer) the content displayed on the source display to the target display for display based on a user indication. If the audio corresponding to the content is the media-type audio or the call-type audio, after the content is transferred to the target display for display, the control device may switch the audio playing manner to playing the audio corresponding to the content by using the audio playing apparatus in the sound zone associated with the target display. The control device may select, according to the method described in the content of Scenario 1 (namely, the non-content transfer scenario), the audio playing apparatus in the sound zone to which the target display belongs, and perform corresponding audio playing control. Details are not described herein again.
For example, based on the scenario shown in
In some embodiments, the operation performed by the user for triggering the video playing interface displayed on the central display screen to be displayed on the front passenger screen may be a gesture operation (for example, the two-finger right sliding operation shown in
Optionally, as shown in
For example, based on the scenario shown in
For example, based on the scenario shown in
It should be understood that the scenario in which the content transfer result is the transferred content and the audio is transferred along with the content includes but is not limited to the scenarios shown in
For example, based on the scenario shown in
Optionally, in Scenario 1, after the image is transferred from the source display to the target display for display, the source display may further display a small-sized window (for example, a floating window, a floating bubble, or picture-in-picture) corresponding to the transferred content. Optionally, the small-sized window may display an image corresponding to the transferred content. Further, the user may operate the small-sized window on the source display, to transfer the image from the target display to the source display for display. When the image is transferred from the target display to the source display for display, for a method for transferring an audio along with the image, refer to the descriptions in the embodiment shown in Scenario 1.
In this scenario, the control device may copy, based on a user indication, the content displayed on the source display to the target display for display. If the audio corresponding to the content is the media-type audio or the call-type audio, after the content is transferred to the target display for display, the control device switches the audio playing manner to playing the audio corresponding to the content by using the audio playing apparatus in the sound zone associated with the source display and the audio playing apparatus in the sound zone associated with the target display. The control device may select, according to the method described in the content of Scenario 1 (namely, the non-content transfer scenario), the audio playing apparatus in the sound zone to which the target display belongs, and perform corresponding audio playing control. Details are not described herein again.
For example, based on the scenario shown in
For example, based on the scenario shown in
In this scenario, the control device may share, based on a user indication, some content displayed on the source display to the target display for display, and display the remaining content on the source display. In this case, the source display and the target display collaboratively display the same content. If an audio corresponding to the content is a media-type audio or a call-type audio, the control device may switch an audio playing manner to playing the audio corresponding to the content by using an audio playing apparatus in a sound zone associated with the source display and an audio playing apparatus in a sound zone associated with the target display. The control device may select, according to the method described in the content of Scenario 1 (namely, the non-content transfer scenario), the audio playing apparatus in the sound zone to which the target display belongs, and perform corresponding audio playing control. Details are not described herein again.
For example, based on the scenario shown in
For example, based on the example shown in
In some embodiments of this application, when playing an audio by using an audio output apparatus, the control device may simultaneously set a sound field mode used when the audio output apparatus plays the audio. The control device may preset an output manner used by each audio output apparatus in different sound field modes, for example, outputting a mono audio or outputting a stereo audio. When playing an audio, the control device may select a sound field mode according to a user indication or an actual scenario, and control the audio output apparatus to play the audio in the sound field mode. For example, the sound field mode may include a theater mode, a front-row surround mode, a driver surround mode, a private mode, and the like. For example, the theater mode may mean playing an audio through the vehicle loudspeaker, the front-row surround mode may mean playing an audio through the front-row loudspeaker, the driver surround mode may mean playing an audio through the driver loudspeaker, and the private mode may mean playing an audio through a loudspeaker at a specific location.
For example, based on the scenarios described in the examples in
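For illustration only, the correspondence between a sound field mode and the audio output apparatuses it uses may be pictured as a simple preset configuration table. The following minimal Python sketch uses hypothetical mode and loudspeaker names that are not identifiers of this application; it merely shows one way such a preset configuration could be looked up:

    # Hypothetical sound field configuration: each mode selects a set of
    # loudspeakers and an output format (for example, mono or stereo).
    SOUND_FIELD_MODES = {
        "theater": {"speakers": ["driver", "front_passenger", "rear_left", "rear_right"],
                    "format": "stereo"},              # whole vehicle loudspeaker
        "front_row_surround": {"speakers": ["driver", "front_passenger"],
                               "format": "stereo"},   # front-row loudspeaker
        "driver_surround": {"speakers": ["driver"], "format": "stereo"},
        "private": {"speakers": ["rear_left"], "format": "mono"},  # one location
    }

    def select_sound_field(mode: str) -> dict:
        """Return the preset output configuration for the requested mode."""
        if mode not in SOUND_FIELD_MODES:
            raise ValueError(f"unknown sound field mode: {mode}")
        return SOUND_FIELD_MODES[mode]

    # Example: the theater mode plays through all vehicle loudspeakers.
    print(select_sound_field("theater"))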
In some embodiments of this application, when content displayed on the source display is shared to the target display, and an application corresponding to the content is a media-type application or a call-type application, the control device may adjust a playing manner of the audio corresponding to the content along with cross-screen transfer of the content. A specific adjustment manner may be: switching to playing the audio corresponding to the shared content by using the audio playing apparatus in the sound zone associated with the target display, or switching to playing the audio corresponding to the shared content by using the audio playing apparatus in the sound zone associated with the source display and the audio playing apparatus in the sound zone associated with the target display. For a specific implementation of the solution, refer to the method in the foregoing embodiments. Details are not described herein again.
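As a rough sketch of this adjustment (with hypothetical names, and simplifying the foregoing rules), the following Python function selects the sound zones that play a media-type or call-type audio after its content is shared across screens; either the target zone alone or both the source zone and the target zone may be selected:

    # Hypothetical sketch: adjust the playback zones of an audio when its
    # content is shared from a source display to a target display.
    def zones_after_share(audio_type: str, source_zone: str, target_zone: str,
                          play_on_both: bool = True) -> list[str]:
        if audio_type not in ("media", "call"):
            return [source_zone]  # other audio types are not adjusted here
        if play_on_both:
            return [source_zone, target_zone]  # both zones play the audio
        return [target_zone]  # or switch to the target zone only

    # Example: a media audio shared from the central display to the front
    # passenger display plays in both associated sound zones.
    print(zones_after_share("media", "source_zone", "target_zone"))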
In some embodiments of this application, after content displayed on a source display is shared to a target display, a control device may not adjust an audio playing manner, but still play, by using an audio output apparatus used before the content is shared, an audio corresponding to the shared content. The following provides description based on specific scenarios and examples.
Scenario 1: Content is transferred, and a playing manner of a navigation audio and a notification audio (a message audio) is not switched (an audio is not transferred).
In this scenario, the control device displays specified content on the source display, and plays an audio corresponding to the specified content by using an audio output apparatus in a sound zone associated with the source display. The specified content and the audio corresponding to the specified content may be any one of the following: navigation content and a navigation audio, system information and a system audio, and alarm information and an alarm audio. After the control device shares, according to a user indication, the specified content to the target display for display, the control device may not switch a playing manner of the audio corresponding to the specified content, that is, continue to play the audio corresponding to the specified content by using the audio output apparatus in the sound zone associated with the source display. When the control device shares, according to the user indication, the specified content displayed on the source display to the target display for display, if content is being displayed on the target display and a corresponding audio is being played, the control device may continue playing the original audio on the target display, or may pause playing the audio. Optionally, when displaying the content from the source display, the target display may display the content in a floating window.
For example, the audio is a system audio. When a full-screen window, a split-screen window, a floating window, picture-in-picture, a control, a widget, or the like is transferred between displays, the control device may play a corresponding system audio (for example, a sound effect like “shout” or “wow”). For example, the system audio may be configured by a system or customized by a user. Optionally, the system audio may be played by preferentially using an audio output apparatus that is mainly used by a driver, for example, a driver loudspeaker, a driver headrest speaker, a driver Bluetooth headset, or a vehicle loudspeaker. Alternatively, the system audio may be played by using an audio output apparatus in a sound zone associated with the target display for content transfer.
For example, the audio is an alarm audio. Alarm information corresponding to the alarm audio may be displayed on a central display screen by default. The alarm audio may be played by preferentially using an audio output apparatus that is mainly used by a driver, for example, a driver loudspeaker, a driver headrest speaker, a driver Bluetooth headset, or a vehicle loudspeaker.
For example, the specified content is navigation content, and the corresponding audio is the navigation audio. The control device may display the navigation content on the source display, and play, by using the audio output apparatus in the sound zone associated with the source display, the navigation audio corresponding to the navigation content. After the control device shares, according to a user indication, the navigation content displayed on the source display to the target display for display, the source display may no longer display the navigation content, and the target display displays the navigation content. In this case, the control device does not switch a playing manner of the navigation audio, and may still play the navigation audio corresponding to the navigation content by using the audio output apparatus in the sound zone associated with the source display. If, when the navigation content is shared to the target display, content is already displayed on the target display and a corresponding audio is being played, in an optional implementation, the control device may display, on the target display in split screen, the navigation content from the source display and the content originally displayed on the target display. Because the content originally displayed on the target display is still visible, the navigation content does not affect viewing of that content. In this case, the audio playing manner does not need to be switched, that is, the audio output apparatus in the sound zone associated with the source display still plays the navigation audio corresponding to the navigation content, and the audio output apparatus in the sound zone associated with the target display still plays the audio corresponding to the content originally displayed on the target display. In another optional implementation, the control device may display, on the target display in full screen, the navigation content from the source display. In this case, the control device may continue to play, by using the audio output apparatus in the sound zone associated with the target display, the audio corresponding to the content originally displayed on the target display; alternatively, because the content originally displayed on the target display is blocked by the navigation content, the control device may pause playing the audio corresponding to that content.
When the control device displays the navigation content from the source display on the target display in full screen, refer to
S7101: The control device displays the navigation content on the source display, and plays, by using a first audio output apparatus, the navigation audio corresponding to the navigation content.
Optionally, the control device may display other content on the target display, and play, by using a second audio output apparatus, an audio corresponding to the other content.
The first audio output apparatus is the audio output apparatus in the sound zone associated with the source display, and the second audio output apparatus is the audio output apparatus in the sound zone associated with the target display.
For example, as shown in
S7102: The control device shares, to the target display in response to a received first sharing operation, the navigation content displayed on the source display.
For example, the first sharing operation may be a two-finger transverse rightward sliding operation that is shown in
S7103: The control device stops displaying the navigation content on the source display, and displays the navigation content on the target display.
For example, as shown in
S7104: The control device continues to play, by using the first audio output apparatus, the navigation audio corresponding to the navigation content.
For example, as shown in
Optionally, before step S7102, the method may further include step S7100, and after step S7104, the method may further include steps S7105 to S7108.
S7100: The control device displays other content on the target display, and plays, by using the second audio output apparatus, an audio corresponding to the other content.
S7105: The control device continues to play, by using the second audio output apparatus, the audio corresponding to the other content, or the control device stops playing, by using the second audio output apparatus, the audio corresponding to the other content.
For example, as shown in
S7106: The control device returns, to the source display in response to a received second sharing operation, the navigation content displayed on the target display.
For example, as shown in
Optionally, in the scenario shown in
S7107: The control device displays the navigation content on the source display, and stops displaying the navigation content on the target display.
For example, as shown in
S7108: The control device continues to play, by using the first audio output apparatus, the navigation audio corresponding to the navigation content, and continues to play, by using the second audio output apparatus, the audio corresponding to the other content.
For example, as shown in
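Purely as a walk-through of steps S7100 to S7108, the following Python sketch models the display and audio states with hypothetical content and apparatus names; it is a simplified illustration, not an implementation of this application:

    # Hypothetical walk-through of S7100 to S7108: the navigation content
    # moves between displays while the navigation audio stays on the
    # first (source-zone) audio output apparatus.
    state = {
        "source_display": "navigation_content",   # S7101
        "target_display": "other_content",        # S7100
        "first_output": "navigation_audio",       # source-zone apparatus (S7101)
        "second_output": "other_audio",           # target-zone apparatus (S7100)
    }

    def share_navigation(state: dict, pause_other_audio: bool = False) -> dict:
        # S7102/S7103: the navigation content leaves the source display
        # and is displayed on the target display.
        state["source_display"] = None
        state["target_display"] = "navigation_content"
        # S7104: the navigation audio is not switched; it stays on the
        # first audio output apparatus, so "first_output" is unchanged.
        if pause_other_audio:          # S7105: either keep or pause it
            state["second_output"] = None
        return state

    def return_navigation(state: dict) -> dict:
        # S7106/S7107: the navigation content returns to the source display.
        state["source_display"] = "navigation_content"
        state["target_display"] = "other_content"
        # S7108: both audios play on their original apparatuses again.
        state["second_output"] = "other_audio"
        return state

    print(return_navigation(share_navigation(state)))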
Scenario 2: A floating window, picture-in-picture, a control, or a widget is transferred, and an audio playing manner is not switched.
In this scenario, the control device may display first content (for example, display the first content in full screen) on the source display, and the control device may display second content in a window (for example, a floating window or picture-in-picture) on the source display. The window in which the second content is located may be overlaid on the window in which the first content is located. The control device may play audios corresponding to the first content and the second content by using an audio output apparatus in a sound zone associated with the source display. After the control device shares, according to a user indication, the window in which the second content is located on the source display to the target display for display, the source display no longer displays the window, and the target display displays the window. In this case, when playing the audios, the control device may not switch a playing manner of the audios corresponding to the first content and the second content, that is, still play the audios corresponding to the first content and the second content by using the audio output apparatus in the sound zone associated with the source display. When the control device shares, according to the user indication, the second content displayed on the source display to the target display for display, if third content is being displayed on the target display and a corresponding audio is being played, the control device may continue playing the audio corresponding to the third content, or may pause playing the audio corresponding to the third content.
For example, the window in which the second content is located is a floating window. Refer to
S7301: The control device displays the first content on the source display in full screen, displays the second content in a floating window on the source display, and plays, by using a third audio output apparatus, audios corresponding to the first content and the second content; and the control device displays third content on the target display, and plays, by using a fourth audio output apparatus, an audio corresponding to the third content.
The third audio output apparatus is the audio output apparatus in the sound zone associated with the source display, and the fourth audio output apparatus is an audio output apparatus in a sound zone associated with the target display.
For example, as shown in
S7302: The control device shares, to the target display in response to a received third sharing operation, the floating window displayed on the source display.
For example, the third sharing operation may be an operation of selecting the floating window and then transversely sliding rightward with two fingers that is shown in
S7303: The control device displays the first content on the source display in full screen, and displays the third content and the floating window on the target display.
For example, as shown in
S7304: The control device continues to play the audios corresponding to the first content and the second content by using the third audio output apparatus.
For example, as shown in
S7305: The control device continues to play the audio corresponding to the third content by using the fourth audio output apparatus, or the control device stops playing the audio corresponding to the third content by using the fourth audio output apparatus.
For example, as shown in
Optionally, the control device may determine, according to an audio playing status of the second content, whether to pause playing the audio corresponding to the third content. For example, when the music content in the floating window is not being played, the control device may continue to play the audio corresponding to the third content by using the fourth audio output apparatus; and when the music content in the floating window is being played, the control device may stop playing the audio corresponding to the third content by using the fourth audio output apparatus.
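This optional check can be condensed into a single decision, sketched below in Python under the assumption that the playing status of the floating window is known; the names are hypothetical:

    # Hypothetical sketch of the optional check in S7305: the target
    # display's own (third) audio keeps playing only if the transferred
    # floating window is not actively playing content.
    def keep_playing_third_audio(floating_window_is_playing: bool) -> bool:
        return not floating_window_is_playing

    print(keep_playing_third_audio(False))  # window idle -> keep the third audio
    print(keep_playing_third_audio(True))   # window playing -> pause the third audio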
Optionally, after step S7305, the method may further include steps S7306 and S7307.
S7306: The control device displays the content in the floating window in full screen on the target display in response to a received window adjustment operation.
The window adjustment operation is used to indicate to switch the floating window to full-screen display, that is, to display content in the floating window in full screen on the target display.
For example, as shown in
S7307: The control device plays, by using the third audio output apparatus, the audio corresponding to the first content, and plays, by using the fourth audio output apparatus, the audio corresponding to the second content.
For example, as shown in
In some embodiments of this application, when the control device returns, based on a user operation, the floating window on the target display to the source display for display, the control device may continue to perform step S7301.
In some other embodiments, in the procedure shown in
In still some embodiments, in the procedure shown in
Further, when the control device switches, according to a user indication, the incoming call notification control displayed on the front passenger screen to full-screen display or starts an application corresponding to the incoming call notification control, as shown in
Optionally, in the foregoing embodiments, the manner of switching the audio output apparatus used when the control device plays an audio (namely, the audio transfer manner used by the control device) may be a default manner, for example, a manner configured by the system by default or manually configured by the user. Even if the control device automatically determines, according to a preconfigured rule, the audio output apparatus used when an audio is played, the user may still manually adjust the audio output apparatus. When playing an audio, the control device preferentially uses the audio output apparatus manually selected by the user.
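One way to read this precedence is that a manual selection by the user outranks the automatically determined apparatus, which in turn outranks the system default. A minimal Python sketch under this assumption, with hypothetical apparatus names:

    # Hypothetical precedence: the user's manual choice outranks the
    # rule-based (automatic) choice, which outranks the system default.
    def resolve_output(user_choice: str | None = None,
                       rule_choice: str | None = None,
                       default: str = "vehicle_loudspeaker") -> str:
        return user_choice or rule_choice or default

    print(resolve_output(rule_choice="front_passenger_loudspeaker"))
    print(resolve_output(user_choice="bluetooth_headset",
                         rule_choice="front_passenger_loudspeaker"))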
Optionally, among the audio transfer rules provided in the foregoing embodiments of this application, the control device may preferentially perform control according to the rule that an audio is transferred along with content when content transfer is performed. When a service in the control device does not support control according to this rule, the control device may perform control according to the other rules provided in the foregoing embodiments, or perform control according to a system configuration or a user-defined rule.
It should be noted that both the interface transfer method provided in the content of Part 1 and the audio control method provided in the content of Part 3 relate to interface display in an interface transfer process. Where the interface display methods provided in the two parts repeat or are similar, the descriptions may be cross-referenced; where they differ, a manner may be flexibly selected according to the related descriptions or an actual scenario. The cases are not enumerated one by one in embodiments of this application.
It should be noted that the specific implementation procedure provided in each embodiment is merely an example of a method procedure applicable to embodiments of this application. For specific implementation, refer to the descriptions in the foregoing embodiments. In addition, an execution sequence of the steps in each implementation procedure may be adjusted based on an actual requirement, other steps may be added, some steps may be removed, and the like.
In the foregoing embodiments, the method provided in embodiments of this application is described by using an example in which the display is controlled by using the one-core multi-screen mechanism. However, the method provided in embodiments of this application is not limited to a scenario in which the display is controlled by using the one-core multi-screen mechanism, but may be further applied to a scenario in which the display is controlled by using a multi-core multi-screen mechanism.
When the solution provided in embodiments of this application is applied to the multi-core multi-screen mechanism or a scenario in which displays are controlled by a plurality of devices, the plurality of displays in the audio control system in the foregoing embodiments may belong to a plurality of control devices. In an implementation, a control device (for example, a vehicle) may be a vehicle that uses the multi-core multi-screen mechanism. In another implementation, communication connections may be established between the plurality of devices. For example, a vehicle may establish a communication connection to an external device like a tablet or a large screen, and the displays of the vehicle, the tablet, and the large screen jointly form the plurality of displays in the audio control system. In this scenario, each audio output apparatus is connected to (or associated with) only one of the plurality of control devices and is controlled by that control device. For a given control device, an audio output apparatus connected to the control device may or may not exist in the audio control system. If an audio output apparatus connected to the control device exists in the audio control system, the audio output apparatus may be used as an independent audio output apparatus of the control device, and the control device may display content by using a display of the control device and play an audio by using the connected audio output apparatus. If no audio output apparatus connected to the control device exists in the audio control system, the control device can only display content through a display of the control device; if an audio needs to be played, an audio output apparatus of another control device needs to be used. For example, the to-be-played audio may be sent to another control device, and that control device plays the audio by using its connected audio output apparatus.
In the foregoing multi-core multi-screen or multi-device control scenario, the audio control system may include the plurality of control devices, each control device includes at least one display, and displays of the plurality of control devices form the plurality of displays in the audio control system. Each control device may not be connected to an audio output apparatus or may be connected to at least one audio output apparatus. Audio control methods used by the plurality of control devices are the same.
For example, in the scenario of Example 1, if the multi-core multi-screen mechanism is used as a control mechanism of the plurality of displays, the central display screen and the front passenger screen may belong to a first control device, and the rear-row screen may belong to a second control device. For a layer structure of the first control device and the second control device, refer to the structure shown in
In an example scenario, if the audio output apparatus connected to the second control device includes the rear-row speaker, when content displayed on the central display screen of the first control device is transferred to the rear-row screen of the second control device and the audio is transferred along with the content, the first control device may stop playing, by using its connected audio output apparatus, the audio corresponding to the transferred content, and the second control device may play, by using its connected audio output apparatus, namely, the rear-row speaker, the audio corresponding to the transferred content.
In another example scenario, if the audio output apparatus connected to the second control device includes a Bluetooth headset, when content displayed on the central display screen of the first control device is transferred to the rear-row screen of the second control device and the audio is transferred along with the content, the first control device may stop playing, by using its connected audio output apparatus, the audio corresponding to the transferred content, and the second control device may play, by using its connected audio output apparatus, namely, the Bluetooth headset, the audio corresponding to the transferred content.
In yet another example scenario, if the audio output apparatus connected to the first control device includes a driver loudspeaker and a front passenger loudspeaker, and the audio output apparatus connected to the second control device includes a rear-row loudspeaker, the second control device may display content on the rear-row screen and play a corresponding audio through the rear-row loudspeaker. When the second control device needs to play the audio through the vehicle loudspeaker, the second control device may send the audio to the first control device after establishing a communication connection to the first control device (for example, through a Bluetooth connection or a frequency modulation (frequency modulation, FM) broadcast). The first control device may then play the audio through the driver loudspeaker and the front passenger loudspeaker, and the second control device continues to play the audio through the rear-row loudspeaker. In this way, an effect of playing an audio through the vehicle loudspeaker can be implemented under the multi-core multi-screen control mechanism.
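As an illustration of this multi-device hand-off, the following Python sketch (hypothetical device and apparatus names; the inter-device transport such as Bluetooth or FM is abstracted away) lets a control device play through its own audio output apparatuses and forward the audio to a connected peer for the apparatuses it lacks:

    # Hypothetical multi-core multi-screen routing: a control device plays
    # through the outputs it owns and forwards the audio to a connected
    # peer control device for the outputs it does not own.
    class ControlDevice:
        def __init__(self, name, outputs, peer=None):
            self.name = name
            self.outputs = outputs      # audio output apparatuses it owns
            self.peer = peer            # another control device, if connected

        def play(self, audio, wanted=None):
            usable = [o for o in self.outputs if wanted is None or o in wanted]
            if usable:
                print(f"{self.name} plays {audio} on {usable}")
            if wanted and self.peer:
                missing = [o for o in wanted if o not in self.outputs]
                if missing:
                    # Forward the audio over the established connection.
                    self.peer.play(audio, wanted=missing)

    first = ControlDevice("first", ["driver_loudspeaker", "front_passenger_loudspeaker"])
    second = ControlDevice("second", ["rear_row_loudspeaker"], peer=first)
    # The rear device plays locally and forwards to the front device, which
    # approximates playing through the whole vehicle loudspeaker set.
    second.play("movie_audio", wanted=["rear_row_loudspeaker",
                                       "driver_loudspeaker",
                                       "front_passenger_loudspeaker"])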
For a content display manner, an audio playing manner, and an effect of the method provided in this embodiment of this application when the method is applied to the multi-core multi-screen mechanism or the multi-device control display scenario, refer to the content display manner, the audio playing manner, and the effect in the one-core multi-screen mechanism control display scenario in the foregoing embodiments. For example, in the scenario shown in
Based on the foregoing embodiments and a same concept, an embodiment of this application further provides an audio control method. As shown in
S7601: When displaying first content on a first display, an electronic device plays a first audio corresponding to the first content by using a first audio output apparatus, where the first display is any display of a plurality of displays located in a first space area, the first audio output apparatus is associated with a first sound zone, the first sound zone is a candidate sound zone associated with the first display in a plurality of candidate sound zones, and each candidate sound zone in the plurality of candidate sound zones is associated with one or more audio output apparatuses in the first space area.
In some embodiments of this application, the electronic device may be the control device in the foregoing embodiments. The first space area may be the space area in the vehicle cockpit in the foregoing embodiments. Each audio output apparatus may include at least one of an in-vehicle loudspeaker (for example, the driver loudspeaker, the front passenger loudspeaker, or the rear-row loudspeaker in the foregoing embodiments), a headrest speaker (for example, the driver headrest speaker, the front passenger headrest speaker, or the rear-row headrest speaker in the foregoing embodiments), or a Bluetooth headset (for example, the Bluetooth headset connected to each display in the foregoing embodiments). The apparatuses specifically included in each audio output apparatus may be of a same type or of different types, and each audio output apparatus may include one or more in-vehicle loudspeakers, headrest speakers, or Bluetooth headsets. The first audio output apparatus is used as an example: the first audio output apparatus may be the vehicle loudspeaker described in the foregoing embodiments, and specifically includes a plurality of loudspeakers such as the driver loudspeaker, the front passenger loudspeaker, and the rear-row loudspeaker; for another example, the first audio output apparatus may include the driver loudspeaker, the driver headrest speaker, and the like.
For example, the first display may be any display in the vehicle cockpit in the foregoing embodiments.
In some embodiments of this application, before playing the first audio corresponding to the first content by using the first audio output apparatus, the electronic device needs to first determine the first audio output apparatus. Specifically, the electronic device may determine the first sound zone based on the first display, obtain a priority order of the at least one audio output apparatus associated with the first sound zone, and select, based on the priority order, an audio output apparatus with the highest priority from the at least one audio output apparatus as the first audio output apparatus. In some embodiments of this application, the electronic device may select, from the plurality of candidate sound zones based on a specified association relationship between a display and a candidate sound zone, the candidate sound zone associated with the first display as the first sound zone; or the electronic device may determine the first sound zone based on a received sound zone selection operation, where the sound zone selection operation is used to select a candidate sound zone from the plurality of candidate sound zones as the first sound zone. The plurality of candidate sound zones, the audio output apparatus associated with each candidate sound zone, the correspondence between the display and the candidate sound zone, and the like may be preconfigured by a system or a user, and the sound zone selection operation may be an operation of selecting a sound zone by the user. In some embodiments of this application, the electronic device may select, from a plurality of pieces of priority information based on a specified correspondence between an audio type and priority information, target priority information corresponding to an audio type of the first audio, where each of the plurality of pieces of priority information indicates a priority order of the at least one audio output apparatus associated with the first sound zone, and different priority information corresponds to different audio types; and then determine the priority order of the at least one audio output apparatus based on the target priority information. The plurality of pieces of priority information, the priority order of the audio output apparatuses indicated by each piece of priority information, the specified correspondence between an audio type and priority information, and the like may be preconfigured by the system or the user.
In an example, when the first space area is the space in the vehicle cockpit in the foregoing embodiments, the plurality of candidate sound zones may be the sound zones 1 to 3 obtained by dividing the vehicle cockpit shown in
It should be noted that the audio output apparatus in the sound zone in embodiments of this application may also be understood as an audio output apparatus associated with the sound zone.
In some embodiments of this application, a same display may correspond to one or more pieces of priority information, and in a plurality of pieces of priority information corresponding to a same display, different priority information may correspond to different audio types. In some embodiments of this application, a same audio type may correspond to one or more pieces of priority information, and in a plurality of pieces of priority information corresponding to a same audio type, different displays may correspond to different priority information.
For example, the priority order of the audio output apparatus indicated by the priority information may be the priority order described in Case 1 to Case 3 in the foregoing embodiments, or certainly may be another priority order.
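To make the selection flow concrete, the following minimal Python sketch, with hypothetical display, sound zone, and priority data, performs the three steps described above: map the display to its sound zone, pick the priority list matching the audio type, and take the highest-priority apparatus that is available:

    # Hypothetical configuration (not from this application): display-to-zone
    # association and per-audio-type priority lists for each sound zone.
    DISPLAY_TO_ZONE = {"central_display": "zone1",
                       "front_passenger_screen": "zone2",
                       "rear_row_screen": "zone3"}
    ZONE_PRIORITIES = {
        ("zone1", "media"): ["bluetooth_headset", "driver_headrest_speaker",
                             "driver_loudspeaker"],
        ("zone1", "navigation"): ["driver_loudspeaker", "driver_headrest_speaker"],
        ("zone2", "media"): ["bluetooth_headset", "front_passenger_loudspeaker"],
    }

    def pick_output(display: str, audio_type: str, available: set[str]) -> str | None:
        zone = DISPLAY_TO_ZONE[display]                   # step 1: sound zone
        priorities = ZONE_PRIORITIES[(zone, audio_type)]  # step 2: priority list
        for apparatus in priorities:                      # step 3: highest priority
            if apparatus in available:
                return apparatus
        return None

    # Example: no headset is connected, so the headrest speaker is chosen.
    print(pick_output("central_display", "media",
                      {"driver_headrest_speaker", "driver_loudspeaker"}))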
S7602: The electronic device does not display the first content on the first display and displays the first content on a second display in response to a received first operation, where the second display is included in the plurality of displays.
In some embodiments of this application, the first display may be the source display in the foregoing embodiments, and the second display may be the target display in the foregoing embodiments. For example, the first display and the second display may be the displays in the vehicle cockpit in the foregoing embodiments. For example, the first operation may be the operation (for example, the two-finger sliding operation) for indicating to transfer the content displayed on the source display to the target display for display in the foregoing embodiments.
In an embodiment, the target display for the transfer may be determined based on a direction of the first operation. For example, the central display screen is located on the left side of the front passenger screen. An operation of sliding rightward with two fingers on the central display screen may be used to transfer the content displayed on the central display screen to the front passenger screen, and an operation of sliding leftward with two fingers on the front passenger screen may be used to transfer the content displayed on the front passenger screen back to the central display screen. In some other embodiments, the target display for the transfer may be determined based on a type of the first operation. For example, when the source display is the central display screen, a two-finger sliding operation may be used to transfer the content displayed on the central display screen to the front passenger screen, and a three-finger sliding operation may be used to transfer the content displayed on the central display screen to the rear-row screen. In still some embodiments, the user may select, on the central display screen, one or more target displays for content transfer. The first operation is not limited in this application.
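By way of illustration only, the mapping from an operation to a target display could be table-driven, as in the following Python sketch; the gesture and screen names are hypothetical placeholders:

    # Hypothetical operation-to-target mapping: the source display together
    # with the direction or finger count of the sliding operation selects
    # the target display for the transfer.
    OPERATION_TARGETS = {
        ("central_display", "two_finger_slide_right"): "front_passenger_screen",
        ("front_passenger_screen", "two_finger_slide_left"): "central_display",
        ("central_display", "three_finger_slide"): "rear_row_screen",
    }

    def target_for(source_display: str, operation: str) -> str | None:
        return OPERATION_TARGETS.get((source_display, operation))

    print(target_for("central_display", "two_finger_slide_right"))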
A scenario in which the electronic device does not display the first content on the first display and displays the first content on the second display in response to the received first operation may be a scenario in which a content transfer result is transferred content in the foregoing embodiments. For example, the first display may be the central display screen shown in
Optionally, before the electronic device stops displaying the first content on the first display in response to the received first operation and displays the first content on the second display, the second display may display second content, and the electronic device may play a second audio corresponding to the second content by using a third audio output apparatus, where the third audio output apparatus is associated with the second sound zone. In this scenario, when displaying the first content on the second display, the electronic device may display the first content in full screen or split screen on the second display, or display the first content in a floating window or picture-in-picture on the second display. Specifically, the electronic device may display the first content and the second content on the second display in split screen; or display the first content in a first window on the second display, where the first window is overlaid on the window in which the second content is located, and a size of the first window is less than a size of the window in which the second content is located, that is, the first window occupies a part of an area of the second display, for example, the first window is a floating window or picture-in-picture. Alternatively, the electronic device may display the first content on the second display without displaying the second content, for example, display the first content in full screen on the second display.
In the foregoing method, for a manner in which the electronic device determines the third audio output apparatus, refer to the method described in the content of Scenario 1 (namely, the non-content transfer scenario) in the foregoing embodiments. Details are not described herein again.
S7603: In response to the received first operation, the electronic device does not play the first audio by using the first audio output apparatus and plays the first audio by using a second audio output apparatus; or continues to play the first audio by using the first audio output apparatus; or plays the first audio by using a part or all of audio output apparatuses of a specified type in the first space area, where the second audio output apparatus is associated with a second sound zone, and the second sound zone is a candidate sound zone associated with the second display in the plurality of candidate sound zones.
In some embodiments of this application, the electronic device does not display the first content on the first display in response to the received first operation, and after displaying the first content on the second display, may play the first audio corresponding to the first content in any one of the following manners:
(1) The electronic device does not play the first audio by using the first audio output apparatus, and plays the first audio by using the second audio output apparatus.
This manner corresponds to the processing manner in the scenario in which the audio is transferred along with the content in the foregoing embodiments (corresponding to Rule 1 and Rule 3 in the foregoing embodiments). In this manner, after the first content is transferred from the first display to the second display, the audio output apparatus for playing the first audio is switched from the audio output apparatus in the sound zone associated with the first display to the audio output apparatus in the sound zone associated with the second display. For a specific implementation, refer to the implementation described in the corresponding scenario in the foregoing embodiments. Details are not described herein again.
For example, the first content may be displayed content corresponding to a service in the electronic device, and the first audio may be an audio of a service corresponding to the first content. For example, a video playing interface corresponding to the video service is the first content, and an audio corresponding to the video playing interface is the first audio. It may be understood that, after the first content is transferred from the first display to the second display for display, specific content of the first content and the first audio may change. For example, after the video playing interface is transferred from the first display to the second display for display, the video may continue to be played. Correspondingly, the first content and the first audio may change in real time with playing progress of the video. For second content, a second audio, and the like, refer to the foregoing descriptions.
In some embodiments of this application, if the first content is media-type content or call-type content, or the first audio is a media-type audio or a call-type audio, or a service that provides the first content/first audio is a media-type service or a call-type service, in response to the first operation, the first audio output apparatus does not play the first audio, and the electronic device plays the first audio by using a second audio output apparatus. That is, the media-type audio or the call-type audio is transferred along with the content. For a specific implementation, refer to the implementation in the scenario corresponding to a rule like Rule 1 or Rule 3 in the foregoing embodiments. Details are not described herein again.
In some embodiments of this application, before the first content is transferred from the first display to the second display, when the second content is displayed on the second display and the second audio corresponding to the second content is played by using a third audio output apparatus, after the first display stops displaying the first content and the second display displays the first content, the electronic device may further continue to play the second audio by using the third audio output apparatus. Optionally, this applies when the first content is displayed in a split-screen window or a floating window on the second display, because the second content remains visible.
In some embodiments of this application, before the first content is transferred from the first display to the second display, when the second content is displayed on the second display and the second audio corresponding to the second content is played by using the third audio output apparatus, after the first display stops displaying the first content in response to the received first operation and the second display displays the first content in full screen, or displays the first content in a second window that covers the second content, the electronic device may stop playing the second audio by using the third audio output apparatus.
In some embodiments of this application, in response to a first voice indication of a first user, a third audio is played by using an audio output apparatus associated with the first sound zone, where a space area in which the first user is located is a space area associated with the first sound zone; and in response to a second voice indication of a second user, a fourth audio is played by using an audio output apparatus associated with the second sound zone, where a space area in which the second user is located is a space area associated with the second sound zone. In other words, when a location of a user changes from the space area associated with the first sound zone to the space area associated with the second sound zone, the control manner of transferring an audio along with content is used. The electronic device may determine the location of the user and a change status of the location of the user in a manner of face detection, positioning, or the like. This is not specifically limited in embodiments of this application.
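A compact Python sketch of this behavior (hypothetical names; how the user's location is detected, for example by face detection or positioning, is outside the sketch):

    # Hypothetical sketch: a voice-triggered audio is routed to the sound
    # zone whose associated space area contains the speaking user.
    USER_TO_ZONE = {"first_user": "first_sound_zone",
                    "second_user": "second_sound_zone"}

    def play_for(user: str, audio: str) -> str:
        zone = USER_TO_ZONE[user]  # the zone associated with the user's area
        return f"play {audio} by using the audio output apparatus of {zone}"

    print(play_for("first_user", "third_audio"))
    print(play_for("second_user", "fourth_audio"))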
In some embodiments of this application, before the first audio is not played by using the first audio output apparatus and the first audio is played by using the second audio output apparatus, the second audio output apparatus further needs to be determined. Specifically, the electronic device may determine the second sound zone based on the second display; obtain a priority order of the at least one audio output apparatus associated with the second sound zone; and select, based on the priority order of the at least one audio output apparatus associated with the second sound zone, an audio output apparatus with a highest priority from the at least one audio output apparatus associated with the second sound zone as the second audio output apparatus. For a specific implementation, refer to the foregoing implementation in which the electronic device determines the first audio output apparatus. Details are not described herein again.
(2) The electronic device continues to play the first audio by using the first audio output apparatus.
This manner corresponds to the processing manner in the scenario in which the audio is not transferred along with the content in the foregoing embodiments (corresponding to Rule 2, Rule 5, and the like in the foregoing embodiments). In this manner, after the first content is transferred from the first display to the second display, the audio output apparatus for playing the first audio is still the audio output apparatus used before the transfer, that is, the audio output apparatus in the sound zone associated with the first display. For a specific implementation corresponding to this manner, refer to the implementation described in the corresponding scenario in the foregoing embodiments. Details are not described herein again.
In some embodiments of this application, before continuing to play the first audio by using the first audio output apparatus, the electronic device further needs to determine that the first audio is any one of the following audios: a navigation audio, a notification audio, a system audio, or an alarm audio; or determine that a service that provides the first audio is any one of the following services: a navigation service, a notification service, a system service, or an alarm service; or determine that the first content is any one of a floating window, picture-in-picture, a control, or a widget. That is, the navigation-type audio, the notification-type audio, the system-type audio, and the alarm-type audio are not transferred along with the content, and when a floating window, picture-in-picture, a control, or a widget is transferred, the corresponding audio does not need to be transferred. For a specific implementation, refer to implementations in scenarios corresponding to rules such as Rule 2, Rule 5, and Rule 6 in the foregoing embodiments. Details are not described herein again.
(3) The electronic device plays the first audio by using a part or all of audio output apparatuses of a specified type in the first space area.
In this manner, after the first content is transferred from the first display to the second display, the audio output apparatus for playing the first audio may be switched from the audio output apparatus in the sound zone associated with the first display to a part or all of the audio output apparatuses of a specified type in the first space area. For example, when the first space area is the vehicle cockpit described in the foregoing embodiments, all of the audio output apparatuses of a specified type in the first space area may be the vehicle loudspeaker, all headrest speakers in the cockpit, all Bluetooth headsets in the cockpit, or the like; and a part of the audio output apparatuses of a specified type in the first space area may be a front-row loudspeaker, all front-row headrest speakers, or the like. For a specific implementation corresponding to this manner, refer to a related implementation in the foregoing embodiments. Details are not described herein again.
In some embodiments of this application, before playing the first audio by using all of the audio output apparatuses of a specified type in the first space area, the electronic device further needs to determine that the first audio is a call-type audio. That is, during content transfer, the call-type audio may be played in the entire first space area by using the audio output apparatuses of the specified type. For a specific implementation, refer to the implementation in the scenario corresponding to a related rule in the foregoing embodiments. Details are not described herein again.
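Summarizing manners (1) to (3), the routing decision of S7603 can be sketched as a small dispatch function in Python; the type and zone names are hypothetical, and the rule set is a simplification of the foregoing rules:

    # Hypothetical dispatch for S7603: decide which apparatuses play the
    # first audio after the first content moves to the second display.
    def route_after_transfer(audio_type: str, source_zone: str, target_zone: str,
                             whole_space: bool = False) -> list[str]:
        if audio_type == "media" or (audio_type == "call" and not whole_space):
            return [target_zone]          # manner (1): audio follows the content
        if audio_type == "call" and whole_space:
            return ["all_apparatuses_of_specified_type"]   # manner (3)
        # manner (2): navigation/notification/system/alarm audio stays put
        return [source_zone]

    print(route_after_transfer("media", "zone1", "zone2"))       # ['zone2']
    print(route_after_transfer("navigation", "zone1", "zone2"))  # ['zone1']
    print(route_after_transfer("call", "zone1", "zone2", True))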
In some embodiments of this application, after the second audio stops being played by using the third audio output apparatus, in response to a received second operation, the electronic device stops displaying the first content on the second display, displays the second content on the second display, and continues to play the second audio by using the third audio output apparatus. The second operation is used to indicate the second display to stop displaying the first content.
In some embodiments of this application, after the second audio stops being played by using the third audio output apparatus, in response to a received third operation, the electronic device displays the first content on the first display, stops displaying the first content on the second display, and displays the second content on the second display; and the electronic device continues to play the first audio by using the first audio output apparatus, and continues to play the second audio by using the third audio output apparatus. The third operation is used to indicate to return the first content to the first display for display.
In an example, for a possible application scenario and an implementation of the method, refer to the descriptions of some content corresponding to
In some embodiments of this application, after the electronic device stops displaying the first content on the first display and displays the first content on the second display in response to the received first operation, and when it is determined that a display area of the first content on the second display is a partial area of the second display, the electronic device may display the first content in full screen on the second display in response to a received fourth operation, play the first audio by using the audio output apparatus associated with the second sound zone, and stop playing the first audio by using the audio output apparatus associated with the first sound zone. The fourth operation is used to indicate to display the first content in full screen. When selecting the audio output apparatus associated with the second sound zone, the electronic device may use the method described in the content of Scenario 1 (namely, the non-content transfer scenario) in the foregoing embodiments. Details are not described herein again.
In an example, for a possible application scenario and an implementation of the method, refer to the descriptions of some content corresponding to
For specific steps performed by the electronic device in the method, refer to the related descriptions in the foregoing embodiments. Details are not described herein again.
Based on the foregoing embodiments and a same concept, an embodiment of this application further provides an audio control method. As shown in
S7701: When displaying first content on a first display, an electronic device plays a first audio corresponding to the first content by using a first audio output apparatus, where the first display is any display of a plurality of displays located in a first space area, the first audio output apparatus is associated with a first sound zone, the first sound zone is a candidate sound zone associated with the first display in a plurality of candidate sound zones, and each candidate sound zone in the plurality of candidate sound zones is associated with one or more audio output apparatuses in the first space area.
In some embodiments of this application, the electronic device, the first display, a second display, and the first space area may be respectively the same as the electronic device, the first display, the second display, and the first space area in the method corresponding to
For a method for determining the first audio output apparatus by the electronic device, refer to the related descriptions in the method corresponding to
S7702: The electronic device displays the first content on the first display and displays the first content on the second display in response to a received first operation; or displays first sub-content on the first display and displays second sub-content on the second display in response to a received second operation, where the first content includes the first sub-content and the second sub-content, and the second display is included in the plurality of displays.
In some embodiments of this application, the first display may be the source display in the foregoing embodiments, and the second display may be the target display in the foregoing embodiments. For example, the first display and the second display may be the displays in the vehicle cockpit in the foregoing embodiments. The first operation may be the operation (for example, the three-finger sliding operation) for indicating to copy the content displayed on the source display to the target display for display in the foregoing embodiments. The second operation may be the operation (for example, the four-finger sliding operation) for indicating to display the first content on the first display and the second display in split screen in the foregoing embodiments.
A scenario in which the electronic device displays the first content on the first display and displays the first content on the second display in response to the received first operation may be a scenario in which a content transfer result is copied content in the foregoing embodiments. A scenario in which the electronic device displays the first sub-content on the first display and displays the second sub-content on the second display in response to the received second operation may be a scenario in which a content transfer result is spliced content in the foregoing embodiments.
In some embodiments of this application, a size of a display area in which the first sub-content is located may be the same as or different from a size of a display area in which the second sub-content is located. The first content may be obtained by splicing the first sub-content and the second sub-content.
S7703: The electronic device plays the first audio by using a second audio output apparatus and a third audio output apparatus; or plays the first audio by using an audio output apparatus of a specified type in the first space area, where the second audio output apparatus is associated with the first sound zone, the third audio output apparatus is associated with a second sound zone, and the second sound zone is a candidate sound zone associated with the second display in the plurality of candidate sound zones.
In some embodiments of this application, a scenario in which the electronic device plays the first audio by using a second audio output apparatus and a third audio output apparatus corresponds to a processing manner in a scenario in which a content transfer manner is content copying or content splicing in the foregoing embodiments. For a specific implementation, refer to the implementation described in the corresponding scenario in the foregoing embodiments. Details are not described herein again.
In some embodiments of this application, the second audio output apparatus is the same as the first audio output apparatus; and/or a type of the second audio output apparatus is the same as a type of the third audio output apparatus. For a method for determining the second audio output apparatus and the third audio output apparatus by the electronic device, refer to the related descriptions in the method corresponding to
In some embodiments of this application, after displaying the first content on the first display and displaying the first content on the second display in response to the received first operation, in response to a received third operation, the electronic device may not display the first content on the first display, and continue to display the first content on the second display; and the electronic device plays the first audio by using an audio output apparatus associated with the second sound zone, and does not play the first audio by using an audio output apparatus associated with the first sound zone; or in response to a received fourth operation, the electronic device may continue to display the first content on the first display, and not display the first content on the second display; and the electronic device plays the first audio by using an audio output apparatus associated with the first sound zone, and does not play the first audio by using an audio output apparatus associated with the second sound zone. The third operation is used to indicate to stop displaying the first content on the first display. The fourth operation is used to indicate to stop displaying the first content on the second display.
In some embodiments of this application, after displaying the first sub-content on the first display and displaying the second sub-content on the second display in response to the received second operation, in response to a received fifth operation, the electronic device may stop displaying the second sub-content on the second display and display the first content on the first display; and the electronic device plays the first audio by using an audio output apparatus associated with the first sound zone, and does not play the first audio by using an audio output apparatus associated with the second sound zone. Alternatively, in response to a received sixth operation, the electronic device may stop displaying the first sub-content on the first display and display the first content on the second display; and the electronic device plays the first audio by using an audio output apparatus associated with the second sound zone, and does not play the first audio by using an audio output apparatus associated with the first sound zone. The fifth operation is used to indicate to stop displaying the second sub-content on the second display and display the complete first content on the first display, that is, to return the second sub-content displayed on the second display to the first display and combine it with the first sub-content for display. The sixth operation is used to indicate to stop displaying the first sub-content on the first display and display the complete first content on the second display, that is, to send the first sub-content displayed on the first display to the second display and combine it with the second sub-content for display.
For a specific implementation corresponding to the foregoing method, refer to a related implementation in the foregoing embodiments. Details are not described herein again.
For specific steps performed by the electronic device in the method, refer to the related descriptions in the foregoing embodiments. Details are not described herein again.
Based on the foregoing embodiments and a same concept, an embodiment of this application further provides an electronic device. The electronic device may be configured to implement the control method provided in embodiments of this application. As shown in
The display 7801 is configured to display a related user interface, for example, an application interface.
The memory 7802 stores the one or more computer programs (code), and the one or more computer programs include computer instructions. The one or more processors 7803 invoke the computer instructions stored in the memory 7802, so that the electronic device 7800 performs the control method provided in embodiments of this application.
During specific implementation, the memory 7802 may include a high-speed random access memory, and may further include a nonvolatile memory, for example, one or more magnetic disk storage devices, a flash device, or another nonvolatile solid-state storage device. The memory 7802 may store an operating system (a system for short below), for example, an embedded operating system like Android, IOS, Windows, or Linux. The memory 7802 may be configured to store an implementation program of this embodiment of this application. The memory 7802 may further store a network communication program. The network communication program may be used to communicate with one or more additional devices, one or more user equipments, or one or more network devices.
The one or more processors 7803 may be a general-purpose central processing unit (Central Processing Unit, CPU), a microprocessor, an application-specific integrated circuit (Application-Specific Integrated Circuit, ASIC), or one or more integrated circuits for controlling program execution of the solutions of this application.
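As a minimal sketch of how these components interact, the following Python example models the device 7800 with its display 7801, memory 7802, and processors 7803. The class layout and all names are assumptions made for illustration only, not part of this application.

```python
class ElectronicDevice:
    """Illustrative model of the electronic device 7800 described above.
    The structure and names here are assumptions for this sketch only."""

    def __init__(self, display, programs):
        self.display = display    # display 7801: renders user interfaces
        self.memory = programs    # memory 7802: stores the computer programs
        # processors 7803 are represented by the logic in run(), which
        # invokes the instructions stored in the memory.

    def run(self, operation):
        # The processors invoke the computer instructions stored in the
        # memory, so that the device performs the control method.
        for program in self.memory:
            program(self, operation)


def control_method(device, operation):
    # Placeholder standing in for the control method of the embodiments.
    print(f"{device.display}: handling {operation!r}")


device = ElectronicDevice(display="display 7801", programs=[control_method])
device.run("first operation")
```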
It should be noted that
It should be understood that the solutions in embodiments of this application may be properly combined for use, and explanations or descriptions of terms in embodiments may be cross-referenced or explained in embodiments. This is not limited.
It should be understood that sequence numbers of the foregoing processes do not mean execution sequences in various embodiments of this application. The execution sequences of the processes should be determined according to functions and internal logic of the processes, and should not be construed as any limitation on the implementation processes of embodiments of this application.
Based on the foregoing embodiments and the same concept, an embodiment of this application further provides a computer-readable storage medium. The computer-readable storage medium stores a computer program. When the computer program is run on a computer, the computer is enabled to perform the method provided in the foregoing embodiments.
Based on the foregoing embodiments and the same concept, an embodiment of this application further provides a computer program product. The computer program product includes a computer program or instructions. When the computer program or the instructions are run on a computer, the computer is enabled to perform the method provided in the foregoing embodiments.
It may be understood that, to implement the functions of the foregoing embodiments, an electronic device (for example, an in-vehicle terminal) includes a corresponding hardware structure and/or a corresponding software module for performing each function. A person skilled in the art should be readily aware that, in combination with the units and algorithm steps of the examples described in embodiments disclosed in this specification, this application may be implemented in a hardware form or in a form of combining hardware with computer software. Whether a function is performed by hardware or by hardware driven by computer software depends on the particular applications and design constraints of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that such an implementation goes beyond the scope of this application.
In embodiments of this application, the electronic device may be divided into functional modules. For example, each functional module may be obtained through division based on a corresponding function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in a form of hardware, or may be implemented in a form of a software functional module. It should be noted that, in embodiments of this application, module division is an example and is merely logical function division. During actual implementation, another division manner may be used.
It should be further understood that each module in the electronic device may be implemented in a form of software and/or hardware. This is not specifically limited herein. In other words, the electronic device is presented in a form of functional modules. The "module" herein may be an application-specific integrated circuit (ASIC), a circuit, a processor and a memory that execute one or more software or firmware programs, an integrated logic circuit, and/or another component that can provide the foregoing functions.
In an optional manner, when software is used for implementing data transmission, the data transmission may be completely or partially implemented in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to embodiments of this application are completely or partially implemented. The computer may be a general-purpose computer, a dedicated computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or may be transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, over a coaxial cable, an optical fiber, or a digital subscriber line (DSL)) or a wireless manner (for example, over infrared, radio, or microwave). The computer-readable storage medium may be any usable medium accessible by the computer, or a data storage device, for example, a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital video disc (DVD)), a semiconductor medium (for example, a solid-state drive (SSD)), or the like.
Method or algorithm steps described in combination with embodiments of this application may be implemented by hardware, or may be implemented by a processor by executing software instructions. The software instructions may be formed by a corresponding software module. The software module may be located in a RAM memory, a flash memory, a ROM memory, an EPROM memory, an EEPROM memory, a register, a hard disk, a removable magnetic disk, a CD-ROM, or a storage medium of any other form known in the art. For example, the storage medium is coupled to the processor, so that the processor can read information from the storage medium and write information to the storage medium. Certainly, the storage medium may alternatively be a component of the processor. The processor and the storage medium may be disposed in an ASIC. In addition, the ASIC may be located in an electronic device. Certainly, the processor and the storage medium may alternatively exist in the electronic device as discrete components.
Based on the foregoing descriptions of the implementations, a person skilled in the art may clearly understand that, for the purpose of convenient and brief description, division into the foregoing functional modules is merely used as an example for description. During actual application, the foregoing functions can be allocated to different functional modules for implementation based on a requirement, that is, the internal structure of an apparatus is divided into different functional modules to implement all or some of the functions described above.
It is clear that a person skilled in the art can make various modifications and variations to this application without departing from the scope of this application. This application is intended to cover these modifications and variations of this application provided that they fall within the scope of the claims of this application and their equivalent technologies.
| Number | Date | Country | Kind |
|---|---|---|---|
| 202210911043.X | Jul 2022 | CN | national |
| 202210912601.4 | Jul 2022 | CN | national |
| 202211716573.5 | Dec 2022 | CN | national |
This application is a continuation of International Application No. PCT/CN2023/107991, filed on Jul. 18, 2023, which claims priority to Chinese Patent Application No. 202210912601.4, filed on Jul. 30, 2022, Chinese Patent Application No. 202211716573.5, filed on Dec. 29, 2022, and Chinese Patent Application No. 202210911043.X, filed on Jul. 29, 2022. All of the aforementioned patent applications are hereby incorporated by reference in their entireties.
| | Number | Date | Country |
|---|---|---|---|
| Parent | PCT/CN2023/107991 | Jul 2023 | WO |
| Child | 19037203 | | US |