The present disclosure relates generally to automobiles, and more particularly, to an interaction model for interaction across user interface displays of an automobile.
As society becomes increasingly fast-paced and interconnected, drivers and passengers may seek an elevated experience in their vehicles, both on the road and otherwise. Advanced automotive systems include display screens to augment the layout and control of a traditional dashboard. For example, advanced infotainment systems offer a wide range of features, such as touchscreen displays and smartphone integration, which allow passengers of the vehicle to access technological features and services with ease. Furthermore, the integration of smart display screens into automobile systems can assist drivers in managing various tasks with reduced driving distractions. Accordingly, there is a need for automobile manufacturers to develop technologies that reduce, and potentially minimize, driving distraction while providing an elevated user experience to drivers and passengers.
The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects. This summary neither identifies key or critical elements of all aspects nor delineates the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later.
In aspects of the disclosure, a method, a computer-readable medium, and an apparatus are provided. In some embodiments, a system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
In some embodiments, a method includes displaying, on a first screen, a first graphical user interface (GUI) including multiple regions, a first region including multiple graphical representations, each graphical representation representing a respective application through which a user can interact with the respective application. The method may also include detecting a user input on a selected graphical representation of the multiple graphical representations on the first GUI. The method may furthermore include, in response to detecting the user input, launching, on a second screen different from the first screen, the application corresponding to the selected graphical representation, the application being displayed on a second GUI of the second screen while displaying the graphical representation of the application on the first screen. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
To the accomplishment of the foregoing and related ends, the one or more aspects comprise the features hereinafter fully described and particularly pointed out in the claims. The following description and the drawings set forth in detail certain illustrative features of the one or more aspects. These features are indicative, however, of but a few of the various ways in which the principles of various aspects may be employed.
The detailed description set forth below in connection with the drawings describes various configurations and does not represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, these concepts may be practiced without these specific details. In some instances, well known structures and components are shown in block diagram form in order to avoid obscuring such concepts.
Several aspects are presented with reference to various apparatus and methods. These apparatus and methods are described in the following detailed description and illustrated in the accompanying drawings by various blocks, components, processes, algorithms, etc. (collectively referred to as “elements”). These elements may be implemented using electronic hardware, computer software, or any combination thereof. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.
By way of example, an element, or any portion of an element, or any combination of elements may be implemented as a “processing system” that includes one or more processors. Examples of processors include microprocessors, microcontrollers, graphics processing units (GPUs), central processing units (CPUs), application processors, digital signal processors (DSPs), reduced instruction set computing (RISC) processors, systems on a chip, baseband processors, field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. One or more processors in the processing system may execute software. Software, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise, shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software components, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, or any combination thereof.
Accordingly, in one or more example aspects, implementations, and/or use cases, the functions described may be implemented in hardware, software, or any combination thereof. If implemented in software, the functions may be stored on or encoded as one or more instructions or code on a computer-readable medium. Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computer. By way of example, such computer-readable media can comprise a random-access memory (RAM), a read-only memory (ROM), an electrically erasable programmable ROM (EEPROM), optical disk storage, magnetic disk storage, other magnetic storage devices, combinations of the types of computer-readable media, or any other medium that can be used to store computer executable code in the form of instructions or data structures that can be accessed by a computer.
With reference to the example automotive system 100, the display processor 127 can be configured to perform one or more display processing techniques on one or more frames/graphical content generated by the processing unit 120 before the frames/graphical content is displayed through the GUIs 133a-133b on the one or more displays 131. While the example automotive system 100 illustrates a display processor 127, it should be understood that the display processor 127 is one example of a processor that can perform the functions described herein and that other types of processors, controllers, etc., can be used as a substitute for the display processor 127 to perform each of the functions described herein. The one or more displays 131 can be configured to display or otherwise present graphical content processed/output by the display processor 127. In some embodiments, the one or more displays 131 can include a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display, a projection display device, a touchscreen, a touch surface, or any other type of display device or panel.
Memory external to the processing unit 120 and the content encoder/decoder 122, such as system memory 124, can be accessible to the processing unit 120 and the content encoder/decoder 122. For example, the processing unit 120 and the content encoder/decoder 122 can be configured to read from and/or write to external memory, such as the system memory 124. The processing unit 120 includes the internal memory 121. The content encoder/decoder 122 can also include an internal memory 123. The processing unit 120 and the content encoder/decoder 122 can be communicatively coupled to the system memory 124 over a bus. In some examples, the processing unit 120 and the content encoder/decoder 122 can be communicatively coupled to the internal memories 121/123 over the bus or via a different connection. The content encoder/decoder 122 can be configured to receive graphical content from any source, such as the system memory 124 and/or the processing unit 120 and encode or decode the graphical content. In some examples, the graphical content can be in the form of encoded or decoded pixel data. The system memory 124 can be configured to store the graphical content in an encoded or decoded form.
The internal memories 121/123 and/or the system memory 124 can include one or more volatile or non-volatile memories or storage devices. In some examples, the internal memories 121/123 or the system memory 124 can include RAM, static random access memory (SRAM), dynamic random access memory (DRAM), erasable programmable ROM (EPROM), EEPROM, flash memory, magnetic data media, optical storage media, or any other type of memory. The internal memories 121/123 or the system memory 124 can be a non-transitory storage medium according to some examples. The term "non-transitory" can indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term "non-transitory" should not be interpreted to mean that the internal memories 121/123 or the system memory 124 is non-movable or that its contents are static. In some embodiments, the system memory 124 can be removed from the device 104 and moved to another device. As another example, the system memory 124 may not be removable from the device 104.
The processing unit 120 can be a central processing unit (CPU), a graphics processing unit (GPU), or any other processing unit that can be configured to provide content for display. The content encoder/decoder 122 can be any processor configured to perform content encoding and content decoding. In some examples, the processing unit 120 and/or the content encoder/decoder 122 can be integrated into a motherboard of the device 104. The processing unit 120 can be present on a graphics card that is installed in a port of the motherboard of the device 104 or can be otherwise incorporated within a peripheral device configured to interoperate with the device 104. The processing unit 120 and/or the content encoder/decoder 122 can include one or more processors, such as one or more microprocessors, GPUs, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), arithmetic logic units (ALUs), digital signal processors (DSPs), discrete logic, software, hardware, firmware, other equivalent integrated or discrete logic circuitry, or any combination thereof. If the techniques are implemented partially in software, the processing unit 120 and/or the content encoder/decoder 122 can store instructions for the software in a suitable, non-transitory computer-readable storage medium (e.g., memory) and can execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Any of the foregoing, including hardware, software, a combination of hardware and software, etc., can be considered to be one or more processors.
In certain aspects, the processing unit 120 (e.g., CPU, GPU, etc.) can include an interaction model component 198, which can include software, hardware, firmware, or a combination thereof for synchronizing a display of an application on multiple screens of a vehicle, and configured to: display, on a first screen, a first graphical user interface (GUI) including multiple regions, a first region including multiple graphical representations, each graphical representation representing a respective application through which a user can interact with the respective application; detect a user input on a selected graphical representation of the multiple graphical representations on the first GUI; and, in response to detecting the user input, launch, on a second screen different from the first screen, the application corresponding to the selected graphical representation, the application being displayed on a second GUI of the second screen while displaying the graphical representation of the application on the first screen. Although the following description focuses on synchronizing a display of an application on multiple screens of a vehicle, the concepts described herein can be applicable to other similar processing.
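By way of illustration only, the following TypeScript sketch models this display/detect/launch flow in a minimal form. The names used (AppId, Screen, InteractionModel) are hypothetical and not part of this disclosure, which does not prescribe any particular programming language or API.

```typescript
// Minimal sketch of the display/detect/launch flow of the interaction
// model component 198. All types and names are illustrative only.

type AppId = "phone" | "media" | "navigation";

interface Screen {
  id: string;
  tiles: AppId[];        // graphical representations shown in the first region
  runningApp?: AppId;    // application currently displayed full-size
}

class InteractionModel {
  constructor(private first: Screen, private second: Screen) {}

  // Display the first GUI with its graphical representations.
  display(tiles: AppId[]): void {
    this.first.tiles = tiles;
  }

  // Detect a user input on a selected graphical representation, then
  // launch the app on the second screen while keeping the graphical
  // representation visible on the first screen.
  onTileSelected(app: AppId): void {
    if (!this.first.tiles.includes(app)) return; // not currently displayed
    this.second.runningApp = app;                // launch on second screen
    // Note: this.first.tiles is untouched, so the representation remains
    // displayed on the first GUI, as the method requires.
  }
}

const upper: Screen = { id: "133a", tiles: [] };
const lower: Screen = { id: "133b", tiles: [] };
const model = new InteractionModel(upper, lower);
model.display(["phone", "media", "navigation"]);
model.onTileSelected("media");
console.log(lower.runningApp); // "media", while the tile stays on the first screen
```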
The first screen includes a first GUI 234a. The first GUI 234a can display mobile phone, media, and navigation applications. The first GUI 234a is configured for controlling the display on the first screen 133a. Similarly, positioned lower on the dashboard, the second screen 133b includes a second GUI 234b. The first GUI 234a and the second GUI 234b can be used for controlling operations of one or more software applications displaying content on the first GUI 234a and the second GUI 234b of the first screen 133a and the second screen 133b, respectively. The second GUI 234b can display functions such as seat functions, car drive modes, climate control, and all other vehicle settings. In some embodiments, the user can perform a swipe gesture on a graphical representation to cause content displayed on the home screen to be displayed on the pilot panel and vice versa. This content can include the display of in-depth controls and in-depth information of an application. In-depth controls can include various functions for drive modes, the doors, the seats, the interior lighting, and the cabin temperature. In-depth information of an application can include detailed information related to the selected application. For example, when a user selects a media application, the in-depth information of the media application can include media source selectors, a selected media title, available media titles, and a menu bar.
In some embodiments, the first screen 133a can be positioned above the second screen 133b.
The first GUI 234a can be divided into multiple regions. A first region of the first GUI 234a displays multiple graphical representations 218, 220, 222, with each being displayed in a separate sub-region. Each of the multiple graphical representations 218, 220, 222 represents a respective application through which a user can interact with the respective application. It should be understood that the terms "application" and "app" are used interchangeably throughout this disclosure.
Referring to the first GUI 234a, a first graphical representation 218 represents various functions of a mobile phone application.
In some other embodiments, the graphical representation 218 displays different content associated with the represented application. For example, the graphical representation 218 can display information indicating that there is an incoming call. The graphical representation 218 can display recent contacts if there is no incoming call. Based on the display of the recent contacts, the driver can initiate a call to one of the recent contacts. In some embodiments, if the call is connected, a subsequent state of the graphical representation displays the contact details of that contact.
A second graphical representation 220 represents various functions of a media application. The media application can generate multiple types of information for display. For example, the types of information include playback and content browsing. The graphical representation 220 can display action soft buttons that control the media playback function, such as previous 240, pause 242, and next 244. The automotive dual screen system selects the type of information to be displayed on the graphical representation while reserving the rest of the information to be displayed in a full screen mode when a user input is detected. A third graphical representation 222 represents various functions of a navigation application. Note that while the graphical representations 218, 220, 222 illustrate three examples of applications, the automotive dual screen system can be configured with different applications that generate different content for display.
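As a minimal illustration of how a tile might show a compact subset of an application's information while reserving the rest for the full screen mode, consider the following sketch; the MediaInfo and MediaTileView names are hypothetical and chosen only for this example.

```typescript
// Illustrative sketch: a media application exposes several information
// types, and the system picks a compact subset for the tile while
// reserving the rest for full-screen mode. Names are hypothetical.

type MediaInfo = "playback" | "contentBrowsing";

interface MediaTileView {
  buttons: Array<"previous" | "pause" | "next">; // e.g., elements 240/242/244
  info: MediaInfo[];
}

function renderMediaViews(): { tile: MediaTileView; fullScreen: MediaInfo[] } {
  const all: MediaInfo[] = ["playback", "contentBrowsing"];
  const tileInfo: MediaInfo[] = ["playback"];        // compact tile subset
  return {
    tile: { buttons: ["previous", "pause", "next"], info: tileInfo },
    fullScreen: all.filter((i) => !tileInfo.includes(i)), // reserved info
  };
}

console.log(renderMediaViews());
// tile shows playback controls; content browsing waits for full screen mode
```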
In some embodiments, a carousel (not shown) can be displayed on the first screen 133a so the user can scroll horizontally (or vertically) to view additional graphical representations that are not currently visible. The automotive dual screen system can utilize a progressive disclosure indication 250 on the first GUI 234a. The progressive disclosure indication 250 indicates that the first GUI 234a is not displaying all the graphical representations and that there are more graphical representations that are not currently being displayed. In this manner, the automotive dual screen system's decluttering features reduce, and potentially minimize, the user's cognitive overload and help users maintain the focus of their attention on driving, while ultimately providing an elevated user experience.
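A minimal sketch of this progressive disclosure logic follows; the fixed visible capacity and all names are assumptions for illustration, not taken from the disclosure.

```typescript
// Illustrative sketch: decide whether to show the progressive disclosure
// indication 250 based on how many graphical representations fit on the
// first GUI. The fixed capacity is an assumption.

function layoutCarousel<T>(all: T[], visibleCapacity: number) {
  const visible = all.slice(0, visibleCapacity);
  return {
    visible,
    showDisclosureIndicator: all.length > visible.length, // element 250
  };
}

const apps = ["phone", "media", "navigation", "climate", "settings"];
console.log(layoutCarousel(apps, 3));
// { visible: ["phone","media","navigation"], showDisclosureIndicator: true }
```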
In some embodiments, in response to detecting the user input, the application corresponding to the selected graphical representation is launched on the screen 133b, which is different from the screen 133a. The application can be displayed on the GUI 234b of the screen 133b while the graphical representation of the application remains displayed on the GUI 234a of the screen 133a.
In some embodiments, the system detects a movement of a swipe gesture in a first direction on one of the multiple graphical representations. For example, the user swipes downward on the graphical representation 220 on the GUI 234a of the first screen 133a. In response to detecting that the user has made a swipe down gesture, the system launches the media application on the GUI 234b of the screen 133b.
In some embodiments, a user can swipe down on a second graphical representation 220 on the GUI 234a of the screen 133a to cause the system to launch the corresponding application on the GUI 234b of the screen 133b. The launching of this application on the second GUI 234b of the second screen 133b causes replacement of an application, represented by the graphical representation 218, that was previously displayed on the GUI 234b of the screen 133b. In this manner, one application can have two different instances of its output display being shown on two different screens. For example, a user can single tap on the graphical representation 222 and launch a full screen of the corresponding application on the screen 133a. Simultaneously, the user can still swipe down on the graphical representation 222 to launch the same application on the screen 133b.
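One possible way to classify such gestures is sketched below, assuming touch coordinates with y increasing downward and an assumed movement threshold; neither the threshold nor the coordinate convention is specified by this disclosure.

```typescript
// Illustrative sketch: classify a touch gesture from its start and end
// points, and launch the app's second instance on the lower screen on a
// downward swipe. Thresholds and names are assumptions.

interface Point { x: number; y: number; }

type Gesture = "tap" | "swipeDown" | "swipeUp" | "swipeLeft" | "swipeRight";

function classifyGesture(start: Point, end: Point, threshold = 30): Gesture {
  const dx = end.x - start.x;
  const dy = end.y - start.y; // y grows downward in screen coordinates
  if (Math.abs(dx) < threshold && Math.abs(dy) < threshold) return "tap";
  if (Math.abs(dy) >= Math.abs(dx)) return dy > 0 ? "swipeDown" : "swipeUp";
  return dx > 0 ? "swipeRight" : "swipeLeft";
}

// Swiping down on a tile of the upper screen replaces whatever app was
// previously shown on the lower screen, mirroring the behavior described
// above for graphical representations 218 and 220.
let lowerScreenApp = "phone"; // previously launched from tile 218
const gesture = classifyGesture({ x: 100, y: 40 }, { x: 104, y: 160 });
if (gesture === "swipeDown") lowerScreenApp = "media"; // tile 220
console.log(gesture, lowerScreenApp); // "swipeDown", "media"
```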
In some embodiments, the automotive dual screen system allows multi-tasking operations of an application by a passenger and the driver of a vehicle. In some embodiments, the automotive dual screen system updates the application on both the screen 133a and the screen 133b. In some examples, while the navigation application displays a route to a desired destination on the screen 133a, the passenger can search for a nearby coffee shop on the screen 133b. Once the passenger locates the nearby coffee shop and provides a user input to select its location (e.g., by touching the display), the system updates the active route on the screen 133a to display a route from the current location to the nearby coffee shop. In some other examples, if the driver updates the route on the first screen 133a, the automotive dual screen system will update the route displayed on the second screen 133b.
In some other embodiments, the automotive dual screen system updates the application on one screen but not on the other screen. In this manner, the automotive dual screen system allows multiple instances of an application displayed on two different screens to function independently. In other words, there is no synchronization between the two instances, so an action performed on one screen does not affect the other screen. For example, a driver is playing a song from a Spotify playlist displayed on the screen 133a while, at the same time, the passenger is browsing for a song on iHeartRadio on the screen 133b. The automotive dual screen system does not update the driver's instance on the GUI 234a of the screen 133a until the passenger presses a play button on the GUI 234b of the screen 133b. The two instances of the application can run independently on their respective screens.
In some embodiments, the automotive dual screen system updates shared data associated with the application on both screens. When a user interacts with the application on one of the screens, for example, the screen 133a, the shared data associated with the application is automatically updated on the other screen (e.g., screen 133b). For example, when a user switches a currently played track A to a track B on the screen 133a, the currently played track A is automatically changed to track B on the screen 133b.
In some embodiments, the automotive dual screen system updates a shared state associated with the application on both screens. When a user interacts with the application on one of the screens, for example, the screen 133a, the shared state associated with the application is automatically updated on the other screen (e.g., screen 133b). For example, when a user pauses a currently played track A on the screen 133a, the currently played track A is automatically paused on the screen 133b.
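The shared data and shared state behaviors described in the two preceding paragraphs could be realized, for example, with an observer pattern in which both screens subscribe to one shared playback state. The following sketch shows one assumed wiring, not the disclosed implementation; all names are hypothetical.

```typescript
// Illustrative sketch of the shared-state behavior: both screens observe
// one shared playback state, so switching a track (shared data) or
// pausing (shared state) on one screen is reflected on the other.

interface PlaybackState { track: string; paused: boolean; }

class SharedPlayback {
  private listeners: Array<(s: PlaybackState) => void> = [];
  constructor(private state: PlaybackState) {}

  // Each screen registers a render callback and immediately receives state.
  subscribe(fn: (s: PlaybackState) => void): void {
    this.listeners.push(fn);
    fn(this.state);
  }

  // Any update is broadcast to every subscribed screen.
  update(patch: Partial<PlaybackState>): void {
    this.state = { ...this.state, ...patch };
    this.listeners.forEach((fn) => fn(this.state));
  }
}

const playback = new SharedPlayback({ track: "A", paused: false });
playback.subscribe((s) => console.log("screen 133a:", s));
playback.subscribe((s) => console.log("screen 133b:", s));

playback.update({ track: "B" });    // shared data: both screens show track B
playback.update({ paused: true });  // shared state: both screens pause
```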
In some embodiments, the automotive dual screen system updates a shared state and shared data on both screens based on the last user input. For example, when a user sets a first temperature (e.g., 72 degrees Fahrenheit) on the screen 133a at time t, and then the user sets a second temperature (e.g., 74 degrees Fahrenheit) on the screen 133b at time t+1, the system updates both screens with the last user input (e.g., 74 degrees Fahrenheit).
In some embodiments, the automotive dual screen system renders an independent view of the application for each screen based on different ranges of shared data and the last user input. For example, when a user sets a first temperature (e.g., 72 degrees Fahrenheit) on the screen 133a at time t, and then the user sets a second temperature (e.g., 74 degrees Fahrenheit) on the screen 133b at time t+1, the system updates both screens with the last user input (e.g., 74 degrees Fahrenheit) while displaying a first view (e.g., 72 degrees Fahrenheit) on the screen 133a and a second view (e.g., 74 degrees Fahrenheit) on the screen 133b.
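A minimal sketch of this last-input-wins resolution, with independent per-screen views, follows; the timestamp representation and all names are assumptions for illustration.

```typescript
// Illustrative sketch: shared cabin-temperature data resolved by the most
// recent user input (last-writer-wins), while each screen can render its
// own view of the shared data.

interface TempInput { screen: "133a" | "133b"; value: number; at: number; }

function resolveSetpoint(inputs: TempInput[]): number {
  // The effective shared setpoint is the most recent input on any screen.
  return inputs.reduce((a, b) => (b.at > a.at ? b : a)).value;
}

function viewFor(screen: "133a" | "133b", inputs: TempInput[]): number {
  // An independent per-screen view may show that screen's own last input.
  const own = inputs.filter((i) => i.screen === screen);
  return own.length ? own[own.length - 1].value : resolveSetpoint(inputs);
}

const inputs: TempInput[] = [
  { screen: "133a", value: 72, at: 1 }, // time t
  { screen: "133b", value: 74, at: 2 }, // time t+1
];
console.log(resolveSetpoint(inputs)); // 74: last input wins
console.log(viewFor("133a", inputs)); // 72: first screen's own view
console.log(viewFor("133b", inputs)); // 74: second screen's view
```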
In some embodiments, the system detects a tap gesture on one of the selector system icons representing an application. In response to detecting the tap gesture, the application corresponding to one of the graphical representations is launched in a second region of the first GUI. For example, when a user performs a single tap gesture on a selector system icon representing a media application, the system launches the media application 220 in a second region of the GUI 234a.
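One way to express the routing between gesture types and launch targets is a simple mapping, sketched below with hypothetical names; the disclosure does not prescribe this structure.

```typescript
// Illustrative sketch: route gesture types to launch targets, so a tap on
// a selector system icon launches the app in the second region of the
// first GUI, while a downward swipe launches it on the second screen.

type LaunchTarget = "firstGuiSecondRegion" | "secondScreen";

function launchTargetFor(gesture: "tap" | "swipeDown"): LaunchTarget {
  // Tap: e.g., media app 220 opens in a second region of GUI 234a.
  // Swipe down: the app launches on GUI 234b of the screen 133b.
  return gesture === "tap" ? "firstGuiSecondRegion" : "secondScreen";
}

console.log(launchTargetFor("tap"));       // "firstGuiSecondRegion"
console.log(launchTargetFor("swipeDown")); // "secondScreen"
```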
In some embodiments, at least a subset of the multiple graphical representations 218, 220, 222 can represent frequently used applications. In some other embodiments, at least a subset of the multiple graphical representations can be configured by the vehicle manufacturer. In some further embodiments, at least a subset of the multiple graphical representations can be configured by the user. For example, the user can configure the graphical representation 218 to represent a mobile phone application.
In some embodiments, in response to detecting a second user input corresponding to a request to perform a function associated with the application, the associated application performs the requested function. In some embodiments, the output of performing the function can be simultaneously displayed on the two screens. For example, if a user performs a function of the application on the screen 133a, such as using a search engine to search for a coffee shop, the search engine generates a list of coffee shops along with their corresponding locations. The application causes the system to display the list of coffee shops along with the corresponding locations on the GUI 234a of the screen 133a. The GUI 234b simultaneously displays the generated list of coffee shops along with the corresponding locations on the screen 133b. Thereafter, the user can use the GUI 234b on the second screen 133b to review the list of coffee shops and select the closest coffee shop on the second GUI 234b based on the user's current location. In this manner, the user can use the screen 133b, and not the screen 133a, to perform a subsequent action even though the user began the initial action (or performed other previous actions) using the screen 133a.
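The mirrored display and cross-screen continuation described above could look like the following sketch, in which a search performed on the first screen populates both GUIs; all names and the fabricated results are purely illustrative.

```typescript
// Illustrative sketch: a search performed on the first screen renders its
// results on both GUIs, so the user can continue the interaction (review,
// select) on the second screen.

interface ScreenView { id: string; results: string[]; selection?: string; }

function search(query: string): string[] {
  // Stand-in for a search engine query; results are fabricated for the demo.
  return [`${query} - Main St`, `${query} - 5th Ave`, `${query} - Airport`];
}

const first: ScreenView = { id: "133a", results: [] };
const second: ScreenView = { id: "133b", results: [] };

// Initial action on the first screen...
const results = search("coffee shop");
first.results = results;
second.results = results; // ...simultaneously displayed on the second screen

// ...and the subsequent action happens on the second screen.
second.selection = second.results[0];
console.log(second.selection); // the user picks the closest coffee shop
```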
With reference to FIG. 4, a flow diagram illustrates an example method for synchronizing a display of an application on multiple screens of a vehicle, in accordance with some embodiments.
The method begins at block 402, where processing logic displays, on a first screen, a first GUI including multiple regions, a first region including multiple graphical representations, each graphical representation representing a respective application through which a user can interact with the respective application. For example, as described above, the first GUI 234a displays the graphical representations 218, 220, 222 in a first region of the first screen 133a.
As also shown in FIG. 4, at block 404, processing logic detects a user input on a selected graphical representation of the multiple graphical representations on the first GUI. For example, the user input can include a swipe gesture made in a first direction on one of the multiple graphical representations.
As further shown in FIG. 4, at block 406, in response to detecting the user input, processing logic launches, on a second screen different from the first screen, the application corresponding to the selected graphical representation, the application being displayed on a second GUI of the second screen while the graphical representation of the application remains displayed on the first screen.
Although FIG. 4 shows example blocks of the method, in some implementations, the method can include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted. Additionally, or alternatively, two or more of the blocks of the method can be performed in parallel.
One or more processors 502 coupled to memory 504 perform processing according to various modules implemented in software, hardware, firmware, or a combination thereof. The processor(s) 502 and memory 504 are coupled to one or more cameras 506, a first display 508 (e.g., having a display screen) on which a user interface 512A is displayed, a second display 510 on which a user interface 512B is displayed, and an interface 514 that is or can be coupled to automotive components and systems 516 in order to control, adjust, or otherwise provide input to such components and systems. Also, the processor(s) 502 and memory 504 are coupled to an operating system 522, a machine vision module 518, a gaze processing module 520, and a user interface generator 524. The user interface generator 524 has an input module 526, a rendering module 528, and an output module 530, and in some embodiments includes a model 532 with rules 534. Other forms of models can be used in some embodiments. One form of implementation is special programming of the system, more specifically special programming of the processor(s) 502. It should be appreciated that the structure of the various modules is determined by their functionality, in some embodiments.
In an operating scenario, which describes functionality of some embodiments, a driver or occupant of a vehicle is imaged by the camera(s) 506, while the driver views one, the other, or neither display 508, 510. Using the machine vision module 518, the system processes the imaging, and using the gaze processing module 520, the system determines driver gaze or occupant gaze. The determined gaze information is passed along through the input module 526 of the user interface generator 524, along with other input (e.g., touchscreen, touchpad, conventional control, etc., in some embodiments). The user interface generator 524 determines to change the display of the user interface 512A based on the determined gaze or other input information, interpreted as user input. With such changes to the user interface 512A based on the model 532 and rules 534 thereof, the system passes appropriate information to the rendering module 528 to generate display information for the updated user interface. In some embodiments, the model 532 includes a dual screen interaction model. In some embodiments, the model 532 includes a hide/reveal feature or a digital detox feature. This display information is output through the output module 530 of the user interface generator 524, to one, the other, or both displays 508, 510, which then display the updated user interface, e.g., the user interface 512A on one display 508 and/or the user interface 512B on another display 510. In some embodiments, these displays 508, 510 correspond to the upper display screen 133a and the lower display screen 133b described above.
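The camera-to-display pipeline described in this operating scenario could be staged as in the following sketch, in which every stage is a stub; the disclosure does not specify the machine vision or gaze algorithms, and all names and the stub logic are assumptions.

```typescript
// Illustrative sketch of the gaze-driven update path: camera frames flow
// through machine vision and gaze processing, and the user interface
// generator applies its model/rules to decide which display to update.

type Gaze = "display508" | "display510" | "neither";

interface Frame { pixels: Uint8Array; }

function machineVision(frame: Frame): { headPose: number } {
  return { headPose: frame.pixels.length % 3 }; // stub feature extraction
}

function gazeProcessing(features: { headPose: number }): Gaze {
  // Stub: map extracted features to one of the two displays or neither.
  return (["display508", "display510", "neither"] as const)[features.headPose];
}

function userInterfaceGenerator(gaze: Gaze): string[] {
  // Model/rules stub (cf. model 532, rules 534): update only the display
  // the user is looking at; otherwise leave both interfaces unchanged.
  if (gaze === "neither") return [];
  return [gaze]; // displays whose user interface should be re-rendered
}

const frame: Frame = { pixels: new Uint8Array(64) };
const toUpdate = userInterfaceGenerator(gazeProcessing(machineVision(frame)));
console.log(toUpdate); // e.g., ["display510"]
```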
In some embodiments, user interface 605 includes one or more interfaces including, for example, a front dashboard display (e.g., a cockpit display, etc.), a touch-screen display (e.g., a pilot panel, etc.), as well as a combination of various other user interfaces such as push-button switches, capacitive controls, capacitive switches, slide or toggle switches, gauges, display screens, warning lights, audible warning signals, etc. It should be appreciated that if user interface 605 includes a graphical display, controller 601 may also include a graphics processing unit (GPU), with the GPU being either separate from or contained on the same chip set as the processor.
Vehicle 600 also includes a drive train 607 that can include an internal combustion engine, one or more motors, or a combination of both. The vehicle's drive system can be mechanically coupled to the front axle/wheels, the rear axle/wheels, or both, and may utilize any of a variety of transmission types (e.g., single speed, multi-speed) and differential types (e.g., open, locked, limited slip).
Drivers often alter various vehicle settings, either when they first enter the car or while driving, to adapt the car to their physical characteristics, their driving style, and/or their environmental preferences. System controller 601 monitors various vehicle functions that the driver may use to enhance the fit of the car to their own physical characteristics, such as seat settings (e.g., seat position, seat height, seatback incline, lumbar support, seat cushion angle, and seat cushion length) using seat controller 615 and steering wheel position using an auxiliary vehicle system controller 617. In some embodiments, system controller 601 also can monitor a driving mode selector 619 which is used to control performance characteristics of the vehicle (e.g., economy, sport, normal). In some embodiments, system controller 601 can also monitor suspension characteristics using auxiliary vehicle system 617, assuming that the suspension is user adjustable. In some embodiments, system controller 601 also monitors those aspects of the vehicle which are often varied by the user to match his or her environmental preferences for the cabin 622, for example, setting the thermostat temperature or the recirculation controls of the thermal management system 621 using an HVAC controller, setting the radio station/volume level of the audio system using controller 623, and/or setting the lights, either internal lighting or external lighting, using light controller 631. In addition to user input and on-board sensors, system controller 601 can also use data received from an external on-line source coupled to the controller via communication link 609 (using, for example, GSM, EDGE, UMTS, CDMA, DECT, WiFi, WiMax, etc.). For example, in some embodiments, system controller 601 can receive weather information from an on-line weather service 635 or an on-line database 637, traffic data 638 providing traffic conditions to the navigation system 630, charging station locations from a charging station database 639, etc.
As an example, upon turning on the vehicle 600, in some embodiments, system controller 601 identifies the current driver (and restores their last pre-set functions) or simply restores the last pre-set functions for the vehicle (independent of who the current driver is), related to such features as: media functions; climate functions of the heating, ventilation, and air conditioning (HVAC) system; driving functions; seat positioning; steering wheel positioning; light control (e.g., internal lighting, external lighting, etc.); navigation functions; etc.
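A minimal sketch of this restore-on-startup behavior follows; the profile shape, storage, and all names are assumptions for illustration only.

```typescript
// Illustrative sketch: on startup, restore either the identified driver's
// last pre-set functions or the vehicle-wide last pre-sets.

interface Presets { hvacTemp: number; seatPosition: number; driveMode: string; }

const driverProfiles = new Map<string, Presets>([
  ["alice", { hvacTemp: 72, seatPosition: 3, driveMode: "sport" }],
]);
const vehicleLastPresets: Presets = { hvacTemp: 70, seatPosition: 2, driveMode: "normal" };

function restorePresets(identifiedDriver?: string): Presets {
  // Prefer the current driver's saved settings; otherwise fall back to the
  // vehicle's last pre-set functions, independent of who is driving.
  return (identifiedDriver && driverProfiles.get(identifiedDriver)) || vehicleLastPresets;
}

console.log(restorePresets("alice")); // driver-specific pre-sets
console.log(restorePresets());        // vehicle-wide last pre-sets
```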
Detailed illustrative embodiments are disclosed herein; the specific functional details disclosed are merely representative for purposes of describing embodiments. Embodiments may, however, be embodied in many alternate forms and should not be construed as limited to only the embodiments set forth herein.
It should be understood that although the terms first, second, etc. may be used herein to describe various steps or calculations, these steps or calculations should not be limited by these terms. These terms are only used to distinguish one step or calculation from another. For example, a first calculation could be termed a second calculation, and, similarly, a second step could be termed a first step, without departing from the scope of this disclosure. As used herein, the term “and/or” and the “/” symbol includes any and all combinations of one or more of the associated listed items.
As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises”, “comprising”, “includes”, and/or “including”, when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Therefore, the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting.
It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
With the above embodiments in mind, it should be understood that the embodiments might employ various computer-implemented operations involving data stored in computer systems. These operations are those requiring physical manipulation of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. Further, the manipulations performed are often referred to in terms, such as producing, identifying, determining, or comparing. Any of the operations described herein that form part of the embodiments are useful machine operations. The embodiments also relate to a device or an apparatus for performing these operations. The apparatus can be specially constructed for the required purpose, or the apparatus can be a general-purpose computer selectively activated or configured by a computer program stored in the computer. In particular, various general-purpose machines can be used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations.
A module, an application, a layer, an agent or other method-operable entity could be implemented as hardware, firmware, or a processor executing software, or combinations thereof. It should be appreciated that, where a software-based embodiment is disclosed herein, the software can be embodied in a physical machine such as a controller. For example, a controller could include a first module and a second module. A controller could be configured to perform various actions, e.g., of a method, an application, a layer or an agent.
The embodiments can also be embodied as computer readable code on a tangible non-transitory computer readable medium. The computer readable medium is any data storage device that can store data, which can be thereafter read by a computer system. Examples of the computer readable medium include hard drives, network attached storage (NAS), read-only memory, random-access memory, CD-ROMs, CD-Rs, CD-RWs, magnetic tapes, and other optical and non-optical data storage devices. The computer readable medium can also be distributed over a network coupled computer system so that the computer readable code is stored and executed in a distributed fashion. Embodiments described herein may be practiced with various computer system configurations including hand-held devices, tablets, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers and the like. The embodiments can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a wire-based or wireless network.
Although the method operations were described in a specific order, it should be understood that other operations may be performed in between the described operations, the described operations may be adjusted so that they occur at slightly different times, or the described operations may be distributed in a system that allows the occurrence of the processing operations at various intervals associated with the processing.
In some embodiments, one or more portions of the methods and mechanisms described herein may form part of a cloud-computing environment. In such embodiments, resources may be provided over the Internet as services according to one or more various models. Such models may include Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). In IaaS, computer infrastructure is delivered as a service. In such a case, the computing equipment is generally owned and operated by the service provider. In the PaaS model, software tools and underlying equipment used by developers to develop software solutions may be provided as a service and hosted by the service provider. SaaS typically includes a service provider licensing software as a service on demand. The service provider may host the software, or may deploy the software to a customer for a given period of time. Numerous combinations of the above models are possible and are contemplated.
Various units, circuits, or other components may be described or claimed as "configured to" or "configurable to" perform a task or tasks. In such contexts, the phrase "configured to" or "configurable to" is used to connote structure by indicating that the units/circuits/components include structure (e.g., circuitry) that performs the task or tasks during operation. As such, the unit/circuit/component can be said to be configured to perform the task, or configurable to perform the task, even when the specified unit/circuit/component is not currently operational (e.g., is not on). The units/circuits/components used with the "configured to" or "configurable to" language include hardware, for example, circuits, memory storing program instructions executable to implement the operation, etc. Reciting that a unit/circuit/component is "configured to" perform one or more tasks, or is "configurable to" perform one or more tasks, is expressly intended not to invoke 35 U.S.C. 112, sixth paragraph, for that unit/circuit/component. Additionally, "configured to" or "configurable to" can include generic structure (e.g., generic circuitry) that is manipulated by software and/or firmware (e.g., an FPGA or a general-purpose processor executing software) to operate in a manner that is capable of performing the task(s) at issue. "Configured to" may also include adapting a manufacturing process (e.g., a semiconductor fabrication facility) to fabricate devices (e.g., integrated circuits) that are adapted to implement or perform one or more tasks. "Configurable to" is expressly intended not to apply to blank media, an unprogrammed processor or unprogrammed generic computer, or an unprogrammed programmable logic device, programmable gate array, or other unprogrammed device, unless accompanied by programmed media that confers the ability to the unprogrammed device to be configured to perform the disclosed function(s).
The foregoing description, for the purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the embodiments and their practical applications, to thereby enable others skilled in the art to best utilize the embodiments, with various modifications as may be suited to the particular use contemplated. Accordingly, the present embodiments are to be considered as illustrative and not restrictive, and the invention is not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims.
The following aspects are illustrative only and may be combined with other aspects or teachings described herein, without limitation.
Example 1 is a method of synchronizing a display of an application on multiple screens of a vehicle, the method including displaying, on a first screen, a first graphical user interface (GUI) including multiple regions, a first region including multiple graphical representations, each graphical representation representing a respective application through which a user can interact with the respective application; detecting a user input on a selected graphical representation of the multiple graphical representations on the first GUI; and in response to detecting the user input, launching, on a second screen different from the first screen, the application corresponding to the selected graphical representation, the application being displayed on a second GUI of the second screen while displaying the graphical representation of the application on the first screen.
Example 2 may be combined with example 1 and further includes that detecting the user input on the selected graphical representation of the multiple graphical representations includes detecting a swipe gesture made on the first screen in a first direction on one of the multiple graphical representations.
Example 3 may be combined with example 1 and further includes detecting a second gesture on one of the selector system icons representing the graphical representations; and in response to detecting the second gesture, launching the application corresponding to the one of the graphical representations in a second region of the first GUI.
Example 4 may be combined with example 3 and further includes detecting the second gesture on the second region of the first GUI; and in response to detecting the second gesture, launching the application corresponding to the selected graphical representation of the graphical representations on the second screen.
Example 5 may be combined with example 4 and further includes that the second gesture includes at least one of: a single tap, a double tap, or a swipe.
Example 6 may be combined with example 1 and further includes that each graphical representation includes action GUI elements whose selection by a user causes performance of one or more functions associated with the application.
Example 7 may be combined with example 4 and further includes, in response to detecting a second user input corresponding to a request to perform a function associated with the application: performing the function associated with the application, and simultaneously displaying, on the second screen, an outcome of performing the function while retaining a display of the outcome of performing the function on the first screen.
Example 8 may be combined with example 1 and further includes that at least a subset of the multiple graphical representations represents frequently used applications.
Example 9 may be combined with example 1 and further includes that the first screen and the second screen are juxtaposed in a predefined relative direction.
Example 10 is an apparatus for implementing a method as in any of Examples 1-9.
Example 11 is an apparatus including means for implementing a method as in any of Examples 1-9.
Example 12 is a non-transitory computer-readable medium storing computer executable code, the code when executed by at least one processor causes the at least one processor to implement a method as in any of Examples 1-9.
This application claims the benefit of U.S. Provisional Patent Application No. 63/596,931, filed 7 Nov. 2023, the disclosure of which is incorporated herein by reference in its entirety.