The present subject matter relates to automotive vehicle service equipment. The present subject matter has particular applicability to user interfaces for wheel alignment equipment.
A conventional vehicle wheel alignment system uses sensors or heads that are attached to the wheels of a vehicle to measure various angles of the wheels and suspension. These angles are communicated to a host system, where they are used in the calculation of vehicle alignment angles. In the standard conventional aligner configuration, four alignment heads are attached to the wheels of a vehicle. Each sensor head comprises two horizontal or toe measurement sensors and two vertical or camber/pitch sensors. Each sensor head also contains electronics to support overall sensor data acquisition as well as communications with the aligner console, local user input, and local display for status feedback, diagnostics and calibration support.
In recent years, wheels of motor vehicles have been aligned in some shops using a computer-aided, three-dimensional (3D) machine vision alignment system. In such a system, one or more cameras view targets attached to the wheels of the vehicle, and a computer in the alignment system analyzes the images of the targets to determine wheel position and alignment of the vehicle wheels from the wheel position data. The computer typically guides an operator to properly adjust the wheels for precise alignment, based on calculations obtained from processing of the image data. Examples of methods and apparatus involving computerized image processing for alignment of motor vehicles are described in U.S. Pat. No. 5,943,783 entitled “Method and apparatus for determining the alignment of motor vehicle wheels;” U.S. Pat. No. 5,809,658 entitled “Method and apparatus for calibrating cameras used in the alignment of motor vehicle wheels;” U.S. Pat. No. 5,724,743 entitled “Method and apparatus for determining the alignment of motor vehicle wheels;” and U.S. Pat. No. 5,535,522 entitled “Method and apparatus for determining the alignment of motor vehicle wheels.” A wheel alignment system of the type described in these references is sometimes called a “3D aligner” or “visual aligner.” An example of a commercial vehicle wheel aligner is the Visualiner 3D, commercially available from John Bean Company of Conway, Ark., a unit of Snap-on Inc.
Alternatively, a machine vision wheel alignment system may include a pair of passive heads and a pair of active sensing heads. The passive heads are for mounting on a first pair of wheels of a vehicle to be measured, and the active sensing heads are for mounting on a second pair of wheels of the vehicle. Each passive head includes a target, and each active sensing head includes gravity gauges for measuring caster and camber, and an image sensor for producing image data, including an image of a target of one of the passive heads, when the various heads are mounted on the respective wheels of the vehicle. The system also includes a spatial relationship sensor associated with at least one of the active sensing heads, to enable measurement of the spatial relationship between the active sensing heads when the active sensing heads are mounted on wheels of the vehicle. The system further includes a computer for processing the image data relating to observation of the targets, as well as positional data from the spatial relationship sensor, for computation of at least one measurement of the vehicle.
A common feature of all the above-described alignment systems is that a computer guides an operator to properly adjust the wheels for precise alignment, based on calculations obtained from processing of the sensor data. These systems therefore include a host computer having a user interface such as a display screen, keyboard, and mouse. Typically, the user interface employs graphics to aid the user, including depictions of the positions of the vehicle wheels, representations of analog gauges with pointers and numbers, etc. The more intuitive, clear, and informative such graphics are, the easier it is for the user to perform an alignment quickly and accurately. There exists a need for an alignment system user interface that enables the user to reduce the time needed to perform an alignment, and enables the user to perform the alignment more accurately.
Additionally, alignment shops typically store and/or have access to many different databases containing information of interest to the user of an alignment system. Such information includes data relating to the particular vehicle being aligned and/or its owner, and other similar vehicles that have been serviced by the shop. This information further includes vehicle manufacturers' technical data, data relating to vehicle parts provided by parts manufacturers, and instructional data. There exists a need for an alignment system user interface that presents technical information and individual vehicle information to the user on demand, in a desired format, to improve efficiency and accuracy.
The teachings herein improve over conventional alignment equipment by providing an improved user interface that enables a user to perform a vehicle alignment more quickly and accurately, thereby reducing costs.
According to the present disclosure, the foregoing and other advantages are achieved in part by a computer-implemented method for performing a plurality of vehicle service activities. The method comprises displaying, on a first portion of a display unit, a plurality of visual images arranged along a movement path, each visual image corresponding to a respective one of the vehicle service activities; receiving a first selection of a first visual image included in the visual images; displaying, on a second portion of the display unit, a user interface for performing the vehicle service activity corresponding to the first visual image, in response to the first selection; displaying, for the first visual image, a visual indication that the first visual image was selected, in response to the first selection; and moving at least one of the plurality of visual images along the movement path in response to the first selection.
In accord with another aspect of the disclosure, a vehicle service system for performing a vehicle service activity comprising a series of service steps comprises a processor; and a computer readable medium having computer-executable instructions that, when executed by the processor, cause the system to: display, on a first portion of a display unit, a plurality of visual images arranged along a movement path, each visual image corresponding to a respective one of the vehicle service activities; receive a first selection of a first visual image included in the visual images; display, on a second portion of the display unit, a user interface for performing the vehicle service activity corresponding to the first visual image, in response to the first selection; display, for the first visual image, a visual indication that the first visual image was selected, in response to the first selection; and move at least one of the plurality of visual images along the movement path in response to the first selection.
In accord with yet another aspect of the disclosure, a computer readable medium has instructions for performing a vehicle service activity comprising a series of service steps that, when executed by a computer system, cause the computer system to: display, on a first portion of a display unit, a plurality of visual images arranged along a movement path, each visual image corresponding to a respective one of the vehicle service activities; receive a first selection of a first visual image included in the visual images; display, on a second portion of the display unit, a user interface for performing the vehicle service activity corresponding to the first visual image, in response to the first selection; display, for the first visual image, a visual indication that the first visual image was selected, in response to the first selection; and move at least one of the plurality of visual images along the movement path in response to the first selection.
Additional advantages and novel features will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following and the accompanying drawings or may be learned from production or operation of the examples. The advantages of the present teachings may be realized and attained by practice or use of the methodologies, instrumentalities and combinations particularly pointed out in the appended claims.
Reference is made to the attached drawings, wherein elements having the same reference numeral designations represent like elements throughout, and wherein:
a schematically shows a user interface display screen featuring a carousel control according to embodiments of the present disclosure.
b is a flow chart of an exemplary process for implementing the carousel control of the present disclosure.
c-e are exemplary screen shots of the carousel control user interface according to embodiments of the present disclosure.
a is a flow chart of an exemplary process for implementing a user interface with nested controls according to the present disclosure.
b-f are exemplary screen shots of a user interface with nested controls according to embodiments of the present disclosure.
a-b are exemplary screen shots of dynamic drop down windows according to embodiments of the present disclosure.
a-b are exemplary screen shots of transparent pop up window backgrounds according to embodiments of the present disclosure.
a-b show exemplary windows with gradient background fill according to embodiments of the present disclosure.
a-c are exemplary screen shots of dashboard indicators according to embodiments of the present disclosure.
a-11h are exemplary screen shots of user interface graphics according to embodiments of the present disclosure.
a-b are exemplary screen shots of XSLT transformed documents incorporated into the user interface of embodiments of the present disclosure.
Several examples of graphic user interfaces according to the present disclosure will now be described with reference to the drawings.
Carousel Control
In an embodiment of the present disclosure shown in
Referring now to
In
In certain embodiments, the task icons 1-7 represent different processes available to the user (e.g., calibration, regular alignment, quick alignment, etc.) rather than steps in a process. Such a display could be the “home” display presented to the user when the system is first started up, or when the user clicks a “home” icon. In this case, clicking on a task icon brings up a new set of icons in the carousel representing the steps of the selected process.
Implementation of the disclosed carousel control in a user interface is diagrammed in
The operation of the carousel control in the context of performing a vehicle service such as a wheel alignment comprising a series of service activities will now be described with reference to
A first selection by the user of a first visual image 240c is received from one of a number of displayed user interface elements; for example, by the user mouse-clicking or touching one of the “previous” or “next” arrows 243a, 243b, or one of the icons 240a-e. The user could also use the scroll buttons 248 or the scroll bar 249 to scroll to a visual image in the carousel not shown in
As shown in
In certain embodiments, a visual indication for a second visual image is displayed indicating that the service step corresponding to the second visual image has been completed. In other embodiments, such as shown in
In a further example referring to
Referring again to
Note that the group of icons 243c next to the arrows 243a-b are utilities such as Help, Home, Print, etc. and always appear on every screen, while the group of icons 243d to the right of group 243c are specific to the task being displayed, and change from one task to another.
The disclosed carousel control is advantageous over conventional user interfaces typically found in alignment systems, wherein the user must proceed through the tasks in a linear fashion. In such systems, there is no visual reference to indicate which tasks have been performed, or what task will be performed in the next step. With the disclosed carousel control, the user can choose to proceed linearly through the tasks, or randomly access individual tasks of the ongoing process. Moreover, each task icon of the carousel can bear a visual indication of whether or not it has been performed. Thus, the disclosed carousel control gives dimension and perspective to enhance the user's focus on the immediate task(s), while simultaneously enabling the user to see tasks that have been or will be performed.
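The carousel's core behavior — recentering the selected task icon and shifting its neighbors along the movement path, while tracking which tasks bear a "completed" indication — can be sketched in language-neutral form. The class and method names below are illustrative only and are not part of the disclosure; a real implementation would bind this model to the rendered icons.

```python
from collections import deque

class Carousel:
    """Minimal model of the carousel control: a ring of task icons
    with one 'current' slot at the front of the movement path."""

    def __init__(self, tasks):
        self.ring = deque(tasks)   # icons arranged along the movement path
        self.completed = set()     # tasks bearing the 'done' visual indication

    @property
    def current(self):
        # Icon currently at the center/front position of the carousel.
        return self.ring[0]

    def select(self, task):
        """Rotate the ring so the selected icon moves to the front,
        mirroring the 'move along the movement path' behavior."""
        steps = self.ring.index(task)
        self.ring.rotate(-steps)   # shift every icon along the path
        return self.current

    def mark_done(self, task):
        # The corresponding icon would be redrawn with a check mark.
        self.completed.add(task)

c = Carousel(["measure", "adjust", "verify", "print"])
c.select("verify")
```

Selecting a task both recenters the carousel and leaves the other icons visible, which is what lets the user randomly access tasks rather than proceeding strictly linearly.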
Nested User Interface Elements
Software elements such as tooltips, combo boxes, list boxes, etc. are a common part of personal computer user interfaces. For example, tooltips typically appear as simple text-based popup controls containing contextual information when a mouse pointer is placed over a certain location or other visual component within the active program. Combo boxes usually have a text box displaying a single text value, and an expander arrow to indicate there is a list available for display.
In a further embodiment of the disclosure, such software elements are enhanced by nesting controls within other controls and by adding graphics, to provide a large amount of information without cluttering a screen already having many visual components. Also, this embodiment facilitates localization, reduces the effort for text translations, and improves efficiency of navigation of the interface.
Referring now to
The above features are implemented by embedding visual elements within other visual elements and by using data templating having the flexibility to customize the data presentation process. According to this embodiment, an aftermarket parts database is queried for part information, and the details of that part are used to construct a combo box for each wheel and angle to be adjusted/checked. The combo box is dynamically populated with more than simply a text description of a part. It is embedded with a thumbnail graphic that can also invoke a tooltip, which in turn is composed of a number of elements such as a larger graphic, a detailed description of the part, etc. In certain embodiments, the combo box contains several buttons for each list item, which are used to invoke other events, such as a video of a part, an HTML page having the part specifications, adjustment guide(s) for using the part, etc.
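The nesting described above amounts to building a structured item model for each part returned by the database query, rather than a bare text string. A minimal sketch follows; the field names (`number`, `thumb`, `photo`, etc.) are hypothetical stand-ins for whatever schema an actual aftermarket parts database defines.

```python
def build_part_items(parts):
    """Build one nested combo-box item model per part record.
    Each item embeds a thumbnail, a composed tooltip, and buttons
    that invoke other events (video, specification page, etc.)."""
    items = []
    for p in parts:
        items.append({
            "text": p["number"],                 # visible list text
            "thumbnail": p.get("thumb"),         # small embedded graphic
            "tooltip": {                         # nested tooltip content
                "image": p.get("photo"),         # larger graphic
                "description": p.get("desc", ""),
            },
            "buttons": [                         # extra per-item actions
                {"label": "Video", "target": p.get("video")},
                {"label": "Specs", "target": p.get("specs_html")},
            ],
        })
    return items
```

In a WPF implementation, a data template would map each field of such an item model onto the embedded visual elements of the combo-box entry.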
Implementation of the disclosed nested user interface elements is diagrammed in
In step 305, the user interacts with the interface to display a part list, display part details from the list, and to play a video, display an HTML document, or display a tooltip as desired. The user thus employs the combo boxes to choose which part to use for a particular alignment operation, and can create a report for their customer (see step 306).
The operation of the nested user control interface elements in the context of performing a vehicle service such as a wheel alignment will now be described with reference to
c shows the result of a first selection of the pulldown indicator of a first user interface element 311, as by a mouse click, by touching a touch screen, or by hovering the mouse cursor over the “46-1201” field. The first user interface element 311 is displayed, along with a listing of a plurality of items 311a-f in response to the first selection (in this example, a list of part numbers). Each item 311a-f is presented with a second user interface element 320 and a third user interface element 330, in this case icons. In certain embodiments, hovering over an item such as 311a will also bring up a tooltip with a visual display. For example, as shown in
Referring now to
Referring now to
By building complex controls and embedding varying interface elements, more information is provided to the user with easier and more efficient navigation. This embodiment can be implemented, for example, by defining a resource in the WPF/XAML file which creates a customized tooltip content, as by defining a stack panel control containing a label, a text block, and an image.
Dynamic Drop Down Windows
In certain embodiments of the present disclosure shown in
Floating Window
In certain embodiments shown in
Transparent Popup Window Background
In certain embodiments, a popup window in an aligner graphic user interface is implemented as a transparent window, as by using WPF. WPF's ability to render an entire window with per-pixel transparency also enables WPF's anti-aliasing rendering to operate on a layered (i.e., popup) window, consequently resulting in high edge quality in such a rendering. Transparency can be set in the non-client area and in the child windows. The “non-client area” refers to the parts of the window that the windowing system normally renders for the application, such as the title bar, the resize edge, the menu bar, the scroll bars, etc. As shown in
In still other embodiments, background colors can be changed; e.g., to other than black. A number of color options is provided for the user to select for the differently-colored background. The change of background can apply either to the entire application, or only to the selected screen.
Gradient Background Fill
In certain embodiments of the disclosure, gradient background fill is used to achieve a three-dimensional appearance in meter gauges, backgrounds, etc., without wire-frame 3D modeling. When used in the background, the outline can appear to have backlighting. If the values of the gradient are varied in real time, an object can appear to rotate without using a 3D wire frame.
Dashboard Indicators
In certain embodiments, a display is implemented to inform the user about important and/or critical alignment-related information. The disclosed display is analogous to the dashboard of an automobile, wherein the check engine indicator, low oil indicator, high temperature indicator, traction indicator, etc. do not illuminate until needed to indicate the proper condition of the vehicle. However, the driver can still discern the outline of these indicators when they are not illuminated (although they do not need to pay attention to them until they illuminate). The disclosed aligner display screen implements this functionality as follows, using well-known tools such as Visual Studio 2008, XAML, WPF, and C#. Other conventional toolkits (i.e., development environments) may be used to achieve similar effects.
In conventional alignment systems, indicators are placed on the screen or hidden on the screen. If the indicator is not active, the user is not aware that the indicator may pop up unless it has been previously experienced. For example, if the vehicle to be aligned does not have diagnostic charting information, no such icon appears on the display screen; but if the vehicle has diagnostic charting capabilities, an “iOBD” icon is displayed alerting the operator to a special condition. In other words, the indication is binary: either on or off.
The present embodiment of the disclosure provides multiple implementations between on and off, wherein on=100% and off=0% opacity. For example, on a scale from 1.0 (100%) to 0.0 (0%), 0.4 is 40%. As shown in
These effects are achieved in a Windows environment by setting the opacity level of the desired displayed object. The opacity level is set based on detecting a condition for which the operator may need to be alerted. When not alerted, the operator knows the condition does not exist because the condition indicator is still on the screen in the “non-alert” illumination mode (i.e., that object is at a reduced opacity level).
For example, using C#:
Object.Opacity = 1.0; // 100% opaque
// or
Object.Opacity = 0.2; // 20% opaque
In a further embodiment, a meter display changes state when a reading is within specification, giving the user confidence the reading is within tolerance. In conventional alignment systems, an operator is alerted to certain vehicle conditions as being in or out of tolerance solely based on whether the needle on a meter display is in or out of a predetermined zone, such as a green zone. If the display's needle or other indicator is on the transition from red to green (out of tolerance or within tolerance), it is difficult to determine the condition.
In the disclosed embodiment, as shown in
OuterGlowBitmapEffect ogbe = new OuterGlowBitmapEffect();
ogbe.GlowColor = Color.FromRgb(0, 0xD0, 0); // green glow
ogbe.GlowSize = 25; // size of the glow
MeterObject.BitmapEffect = ogbe;
// To unglow the meter object:
MeterObject.BitmapEffect = null;
“True View” Screens
Conventional reading screens employ images such as a meter gauge having a needle indicating the current alignment reading, such as caster, camber, or toe. This reading is often relative to the manufacturer's specification for the vehicle being aligned. In certain embodiments of the disclosure, the needle indicator is replaced with a true representation of the angle being aligned, as shown in
One way to implement this embodiment is to draw a 2-dimensional image such as assembly 900 such that it looks like a 3-dimensional object, as by using a conventional graphical design package such as Microsoft Expression Design 2 available from Microsoft. The rotation point is set at the desired point, such as at the center of the rotor 901. This is saved as a PNG-type file, and then the meter gauge is implemented in XAML code, setting the image source for the circular pointer needle to be the name of the 3-dimensional image. To enable the image needle to move to the correct value, C# code can be used to set the value in a conventional manner.
In further embodiments, when a reading (such as caster, camber, or toe) for a specific wheel is enlarged, an inset panel is displayed showing readings for all desired parameters. As shown in
In other embodiments, the user clicks on one of the gauges (readings) of the inset, and that reading is zoomed. Referring now to
Virtual Instrumentation
In certain embodiments, conventional Windows graphical user interface controls such as sliders, radio buttons, and buttons to change values are replaced with a virtual representation of physical knobs, switches, and lights, as shown in
Mouse Over Graphic Glow
In conventional user interfaces, the mouse pointer is pointed at an area on the screen containing, e.g., an icon, and a tooltip pops up to indicate the function of the screen area (e.g., “Home”, “Help”, “Print”, etc.). However, the tooltip goes away in a few seconds. Disadvantageously, if the selection pointer is on the edge of two buttons, it is not readily apparent which function will be activated by pressing the mouse button.
In certain embodiments of the disclosure, a characteristic(s) of the item under the mouse pointer is changed. For example, an icon is changed to have a glow, a drop shadow, or other graphics effect; and/or to transform, be animated, vibrate, or emit a sound or other sensory perceptible stimuli. This provides the user more confidence that, when they press the mouse button or other entry device, the appropriate selection will be made.
a shows a menu bar 1100 before the mouse pointer is moved over it (or it is otherwise selected).
In other embodiments, these graphic effects are used for items other than mouse pointer functions. Such effects are used to provide tactile feedback for keyboard navigation. For example, the screen of
A further use of tactile feedback is to inform the user of where they are currently in a multiple-step procedure.
The opacity of the above-described items is readily set and changed in C# by getting the item's object reference and setting the desired opacity value. The glow of each item is set in the same manner as the mouse-over described above.
XSLT Transformation of TSB/TPMS Data in Vehicle Alignment
In other embodiments of the present disclosure, XSLT transformation is implemented within a vehicle alignment system. XSLT (XSL Transformations) is an XML-based language for transforming XML documents into other XML documents. The original document is not changed; rather, a new document is created based on the content of an existing one. The new document may be serialized (output) by the processor in standard XML syntax or in another format, such as Hypertext Markup Language (HTML) or plain text. XSLT is often used to convert XML data into HTML or XHTML documents for display as a web page. The transformation may happen dynamically either on the client or on the server, or it may be performed as part of the publishing process. XSLT is developed and maintained by the World Wide Web Consortium (W3C).
Modern automobiles contain onboard monitoring and control systems such as tire pressure monitoring systems (TPMS), which are electronic systems for monitoring the air pressure inside the vehicle's tires. When a vehicle's tires are rotated, the wheel location must be synchronized with the TPMS so it will provide an accurate indication of tire air pressure. Additionally, automobile manufacturers write and publish large amounts of documentation relating to servicing, repairing, and maintaining the vehicles they manufacture. A common method of publishing this information is by issuing technical service bulletins (TSB). Presenting this documentation in a relevant and efficient way during the servicing processes is a great advantage to the technicians and owners of service shops.
The disclosed alignment software facilitates and provides this information to the user. In one embodiment, TSB and TPMS data is stored locally or on a server as raw data in XML format. This raw data is dynamically transformed and converted into HTML for display within an embedded browser that is part of the aligner's user interface. An associated XSLT file is paired with the XML data, in a conventional manner, to perform the transformation from data to presentation as desired. An example is shown in
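As an illustration of the XML-to-XSLT pairing described above, a minimal stylesheet of the kind that might accompany raw TSB data could look like the following. The element names (`bulletins`, `tsb`, `title`, `summary`) are assumptions for illustration, not the actual schema used by the aligner.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative XSLT stylesheet: transforms hypothetical <tsb> records
     into HTML for display in the aligner's embedded browser. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/bulletins">
    <html>
      <body>
        <h2>Technical Service Bulletins</h2>
        <xsl:for-each select="tsb">
          <h3><xsl:value-of select="title"/></h3>
          <p><xsl:value-of select="summary"/></p>
        </xsl:for-each>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```

Because the presentation lives entirely in the stylesheet, the same raw TSB/TPMS data can be re-rendered in a different layout by swapping the paired XSLT file, without touching the data itself.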
XAML/WPF/Silverlight-Based Reports
According to the present disclosure, alignment summary reports are generated based on the calculations of measurement angles before and after adjustment, with reference to the manufacturer's specifications. The generated measurement angles are saved in an XML-enabled format independent of the alignment system platform. The saved data in XML format is used to generate summary reports in the XAML language. The XAML-enabled data is capable of being rearranged and formatted so it can be arranged in various layouts according to the user's preference. A sample report is shown in
A well-known tool such as Microsoft Blend is used to lay out the report in XAML and to bind all the fields to XML. For example, a text box is inserted, the field is named, and properties are selected to set the margins and assign the styles. This disclosed technique is advantageous in that it is not limited to third party tools, and any developer who has XML and XAML knowledge can modify the reports. As those skilled in the art will understand, the reports can be viewed in a viewer which supports XAML and XPS formats (the reports also support XML Paper Specification (XPS) format). The reports can also be presented in WPF or Microsoft Silverlight, which enable generation of an application with a compelling user interface that is either stand-alone or browser-hosted.
VIN Scanning and Decoding for Wheel Alignment
A Vehicle Identification Number (VIN) is a number used by the automotive industry to uniquely identify individual vehicles. A standard VIN is 17 characters in length. It encodes information regarding where the vehicle was manufactured; the make, model, and year of the vehicle; and a limited number of the vehicle's attributes. The last several digits comprise a sequential number that provides the uniqueness. The VIN is used by many auto-related businesses such as parts suppliers and insurance companies to facilitate marketing and sales efforts.
Vehicle alignment software typically uses a proprietary database containing alignment specifications provided by the vehicle manufacturers. In conventional wheel alignment systems, the VIN is typically manually entered in a customer data screen, and has no connection to any vehicle database. The process of selecting a vehicle includes manually selecting the vehicle from a complete and lengthy list arranged in a tree fashion.
In this embodiment of the disclosure, implementing VIN into the alignment software is accomplished by matching a VIN to the vehicles defined in the alignment database. A barcode scanner 150 (see
In this embodiment, the VIN is entered using the keyboard 130 or barcode scanner 150 of system 100, and a database query is performed using the cross-reference table. If the VIN resolves to a single match, the alignment process automatically continues to a next step if desired. If the VIN matches to numerous entries in the specifications database, the user is given a very small subset to choose from to make a vehicle selection. Thus, this embodiment enables a faster and more accurate vehicle selection process that is easier to use.
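The cross-reference lookup described above can be sketched as follows. The year-code map is a simplified subset of the standard VIN model-year encoding (position 10), and the cross-reference table is a hypothetical stand-in for the aligner's specification database; real WMI codes and table contents would come from the actual database.

```python
# Positions follow the standard 17-character VIN layout:
# chars 1-3 are the World Manufacturer Identifier (WMI),
# char 10 is the model-year code.
YEAR_CODES = {"A": 2010, "B": 2011, "C": 2012, "D": 2013, "E": 2014}

# Hypothetical cross-reference table keyed on (WMI, model year),
# mapping to vehicle entries in the specifications database.
CROSS_REFERENCE = {
    ("1HG", 2012): ["Honda Civic 2012", "Honda Civic Si 2012"],
    ("1HG", 2013): ["Honda Accord 2013"],
}

def match_vin(vin):
    """Resolve a VIN to candidate vehicles in the specification database.
    A single-entry result means the alignment process can continue
    automatically; multiple entries give the user a small subset to
    choose from."""
    if len(vin) != 17:
        raise ValueError("a standard VIN is 17 characters long")
    wmi = vin[:3]                  # where/by whom the vehicle was made
    year = YEAR_CODES.get(vin[9])  # model-year code at position 10
    return CROSS_REFERENCE.get((wmi, year), [])
```

Whether the VIN arrives from the keyboard 130 or the barcode scanner is immaterial to the lookup; both paths feed the same query.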
Obfuscation
It has been possible for hackers to change the graphics of a user interface and present it as their own creation. Recently, with the advent of the .NET framework and just-in-time compiling, it is possible to decompile a program and reverse engineer its contents to steal intellectual property. Certain embodiments of the present disclosure employ obfuscation to safeguard the above items by renaming symbols, adding extra symbols, dead code, unused branches, etc. After obfuscation, a decompiler will fail to produce readable source code that a computer hacker can use. One way to accomplish obfuscation is to use a third-party tool such as Dotfuscator, available at www.preemptive.com.
XML-Based Language Translations Using Unicode
In conventional user interfaces, all text is typically compiled as a resource in the executable code. To perform a human-language translation, the resource is extracted and the text translated to the desired language to create a new resource. A “satellite” dynamic-link library (DLL) is then generated from this new resource and loaded, thereby replacing the executable's resource. Disadvantageously, the user is unable to make their own translations, since a specialized program is needed to generate satellite DLLs, and new satellite DLLs are required with every revision of the program (if any of the English-language text is revised, the translation(s) of the revised text is lost). Additionally, all languages are stored in their local text encoding, so unless the host PC is loaded with that locale, it might not be possible to display the text. Still further, the Windows operating system for different countries has different screen metrics, so when using the above-described satellite DLL technique, the screen layout changes for each language as well.
These problems are addressed in certain disclosed embodiments by keeping all translations in XML files in Unicode, which files are easily edited by a text editor, as will be understood by those of skill in the art. Translations are loaded on the fly, and can be edited while the program is running. The translations are in Unicode, so they can be displayed on any PC regardless of its locale, and screen metrics are not an issue. English is treated as a translation, so a phrase can change without affecting any other translations.
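The approach of pairing each English phrase with its translation in a Unicode XML file can be sketched as follows. The schema (a `translations` root with `phrase` elements carrying an `en` attribute) is illustrative only; any structure that pairs source and translated phrases would serve.

```python
import xml.etree.ElementTree as ET

def load_translations(xml_text):
    """Build a phrase lookup table from a Unicode XML translation file.
    Because the file is plain Unicode text, it can be edited with any
    text editor and reloaded on the fly."""
    root = ET.fromstring(xml_text)
    return {p.get("en"): p.text for p in root.iter("phrase")}

# Sample translation file content (French), with English as the key.
SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<translations lang="fr">
  <phrase en="Front Toe">Parall\u00e9lisme avant</phrase>
  <phrase en="Camber">Carrossage</phrase>
</translations>"""

table = load_translations(SAMPLE)
```

Since English itself is just one more such file, revising an English phrase means editing one entry rather than rebuilding satellite resources for every language.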
Web Cameras
In certain embodiments, web camera technology is used to take pictures of customers and vehicles, and to monitor the alignment rack as a drive-on aid. The picture(s) taken of the customer and/or vehicle are stored into a database with other customer information (e.g., name, address, etc.). When more than one web camera is connected to the alignment system's computer, the aligner user interface shows a list of all the available cameras in a drop down list. The user selects the camera whose image is to be shown on the screen. Images from multiple web cameras can also be displayed simultaneously in different areas of the screen. The integration of the webcam(s) is implemented, for example, using DirectShow and WPF in a conventional manner.
Those skilled in the art will understand that the above-described user interface elements are usable alone or in combination with each other as appropriate, even though every such combination is not explicitly set forth herein.
Computer hardware platforms may be used as the hardware platform(s) for one or more of the user interface elements described herein. The hardware elements, operating systems and programming languages of such computers are conventional in nature, and it is presumed that those skilled in the art are adequately familiar therewith to adapt those technologies to implement the graphical user interface essentially as described herein. A computer with user interface elements may be used to implement a personal computer (PC) or other type of work station or terminal device, although a computer may also act as a server if appropriately programmed. It is believed that those skilled in the art are familiar with the structure, programming and general operation of such computer equipment and as a result the drawings should be self-explanatory.
The computer 1400, for example, includes COM ports 1450 connected to and from a network connected thereto to facilitate data communications. The computer 1400 also includes a central processing unit (CPU) 1420, in the form of one or more processors, for executing program instructions. The exemplary computer platform includes an internal communication bus 1410, program storage and data storage of different forms, e.g., disk 1470, read only memory (ROM) 1430, or random access memory (RAM) 1440, for various data files to be processed and/or communicated by the computer, as well as possibly program instructions to be executed by the CPU. The computer 1400 also includes an I/O component 1460, supporting input/output flows between the computer and other components therein such as user interface elements 1480. The computer 1400 may also receive programming and data via network communications.
Hence, aspects of the methods of generating the disclosed graphical user interface, e.g., the carousel control and nested controls, as outlined above, may be embodied in programming. Program aspects of the technology may be thought of as “products” or “articles of manufacture” typically in the form of executable code and/or associated data that is carried on or embodied in a type of machine readable medium. Tangible non-transitory “storage” type media include any or all of the memory or other storage for the computers, processors or the like, or associated modules thereof, such as various semiconductor memories, tape drives, disk drives and the like, which may provide storage at any time for the software programming.
All or portions of the software may at times be communicated through a network such as the Internet or various other telecommunication networks. Such communications, for example, may enable loading of the software from one computer or processor into another. Thus, another type of media that may bear the software elements includes optical, electrical and electromagnetic waves, such as used across physical interfaces between local devices, through wired and optical landline networks and over various air-links. The physical elements that carry such waves, such as wired or wireless links, optical links or the like, also may be considered as media bearing the software. As used herein, unless restricted to tangible “storage” media, terms such as computer or machine “readable medium” refer to any medium that participates in providing instructions to a processor for execution.
Hence, a machine readable medium may take many forms, including but not limited to, a tangible storage medium, a carrier wave medium or physical transmission medium. Non-volatile storage media include, for example, optical or magnetic disks, such as any of the storage devices in any computer(s) or the like, which may be used to implement the system or any of its components as shown in the drawings. Volatile storage media include dynamic memory, such as a main memory of such a computer platform. Tangible transmission media include coaxial cables, copper wire and fiber optics, including the wires that form a bus within a computer system. Carrier-wave transmission media can take the form of electric or electromagnetic signals, or acoustic or light waves such as those generated during radio frequency (RF) and infrared (IR) data communications. Common forms of computer-readable media therefore include, for example: a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD or DVD-ROM, any other optical medium, punch cards, paper tape, any other physical storage medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave transporting data or instructions, cables or links transporting such a carrier wave, or any other medium from which a computer can read programming code and/or data. Many of these forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to a processor for execution.
Those skilled in the art will recognize that the present teachings are amenable to a variety of modifications and/or enhancements. For example, although the implementation of various components described above may be embodied in a hardware device, it can also be implemented as a software-only solution, e.g., an installation on a PC or server. In addition, the user interface and its components as disclosed herein can be implemented as firmware, a firmware/software combination, a firmware/hardware combination, or a hardware/firmware/software combination.
The present disclosure can be practiced by employing conventional materials, methodology and equipment. Accordingly, the details of such materials, equipment and methodology are not set forth herein in detail. In the previous descriptions, numerous specific details are set forth, such as specific materials, structures, chemicals, processes, etc., in order to provide a thorough understanding of the present teachings. However, it should be recognized that the present teachings can be practiced without resorting to the details specifically set forth. In other instances, well known processing structures have not been described in detail, in order not to unnecessarily obscure aspects of the present teachings.
While the foregoing has described what are considered to be the best mode and/or other examples, it is understood that various modifications may be made therein and that the subject matter disclosed herein may be implemented in various forms and examples, and that the teachings may be applied in numerous applications, only some of which have been described herein. It is intended by the following claims to claim any and all applications, modifications and variations that fall within the true scope of the present teachings.
The present invention claims priority of provisional patent application No. 61/301,349 filed Feb. 4, 2010, the contents of which are incorporated herein in their entirety.
Number | Date | Country
---|---|---
61/301,349 | Feb. 4, 2010 | US