The present invention relates to providing information services related to visual imagery. More specifically, the invention describes methods for providing such information services using cameraphones.
Systems for providing information services on mobile phones exist. However, a mechanism for providing information services related to visual imagery using cameraphones is needed.
The present invention describes a system and methods for providing information services related to visual imagery using cameraphones. The system and methods provide a user experience for requesting, presenting, and interacting with the information services.
Other objects, features, and advantages of the present invention will become apparent upon consideration of the following detailed description and the accompanying drawings, in which like reference designations represent like features throughout the figures.
A system and methods are described for providing information services related to visual imagery using cameraphones. Various embodiments present mechanisms for providing information services related to visual imagery. The specific embodiments described in this description represent exemplary instances of the present invention, and are illustrative in nature rather than restrictive.
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the invention. It will be apparent, however, to one skilled in the art that the invention can be practiced without these specific details. In other instances, structures and devices are shown in block diagram form in order to avoid obscuring the invention.
Reference in the specification to “one embodiment” or “an embodiment” or “some embodiments” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” or “some embodiments” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Features and aspects of various embodiments may be integrated into other embodiments, and embodiments illustrated in this document may be implemented without all of the features or aspects illustrated or described.
Various embodiments may be implemented in a computer system as software, hardware, firmware, or a combination of these. Also, an embodiment may be implemented either in a single monolithic computer system or over a distributed system of computers interconnected by a communication network. While the description below presents the full functionality of the invention, the mechanisms presented are configurable to the capabilities of the cameraphone and associated computer systems on which they are implemented, the resources available in the cameraphone and associated computer systems, and the requirements for providing information services related to visual imagery.
In the context of this description, the term “system” refers to a system, including a cameraphone, that provides information services related to visual imagery.
In the context of this description, the term “information service” is used to refer to a user experience provided by the system that may include the logic to present the user experience, multimedia content used to provide the user experience, and related user interfaces. The term “content” is used to refer to multimedia data used in the information services. Content included in an information service may be in text, audio, video, or graphical formats. For example, an information service may comprise text. Another exemplary information service may comprise text, video, and associated controls for playing the video information. In some embodiments, information services may include information retrieved from various sources such as Web sites, Web search engines, news agencies, e-commerce storefronts, comparison shopping engines, entertainment content, games, and the like. In other embodiments, the information services may modify or add new components (e.g., software applications, ring tones, contact information) to the cameraphone on which the user interface is implemented.
In the context of this description, the term “visual imagery” refers to a single still image, a plurality of still images, a single video sequence, a plurality of video sequences, or combinations thereof. Visual imagery may also include associated metadata such as capture device characteristics, file format characteristics, audio, tags, time of capture, location of capture, author name, filename, and the like. In the context of this description, the term “visual element” refers to text, numbers, icons, symbols, pictograms, ideograms, graphical primitives, and other such elements in visual imagery, together with their layout and formatting information in the visual imagery.
In the context of this description, the term “user interface element” refers to icons, text boxes, menus, graphical buttons, check boxes, sounds, animations, lists, and the like that constitute a user interface. The terms “widget” and “control” are also used to refer to user interface elements. In the context of this description, the term “input component” refers to a component integrated into the system such as a key, button, joystick, touch pad, motion sensing device, speech input, and the like that can be used to input information to the user interface. In the context of this description, the term “cursor control component” refers to a component integrated into the system such as a key, button, joystick, touch pad, motion sensing device, speech input, and the like that can be used to control a cursor on the user interface. In the context of this description, the term “navigational component” refers to a component integrated into the system such as a key, button, joystick, touch pad, motion sensing device, speech input, and the like that can be used to select, control, and switch between various user interface elements. In the context of this description, the term “menu command” refers to a command associated with a menu item on the user interface.
FIGS. 1(b) and 1(c) illustrate the components of an exemplary cameraphone 1120 on which information services related to visual imagery may be provided. The front view 1200 of the cameraphone is illustrated in FIG. 1(b).
Exemplary User Interface Architecture
The user interface for accessing, presenting, and interacting with information services related to visual imagery on the cameraphone 1120 may be comprised of both visual and audio components. Visual components of the user interface may be presented on display 1206 and the audio components on speaker 1204. User inputs may be acquired by the system through camera 1310, microphone 1210, keypad 1208, and other input components integrated into cameraphone 1120. In some embodiments, the user interface may be presented using a plurality of devices that together provide the functionality of cameraphone 1120. For instance, visual components of the user interface may be presented on a television set while user inputs are obtained from a television remote control.
The visual component of the user interface may include a plurality of visual representations herein termed as “views.” Each view may be configured to address the needs of a specific set of functions of the system as further described.
A “login view” may enable authentication to the system. A “camera view” may enable capture of visual imagery and include a viewfinder to present visual imagery. In some embodiments, the viewfinder may encompass the entire camera view.
Information services may be presented in “index” and “content” views. An index view may be used to present one or more information services. A user may browse through the available set of information service options presented in an index view and select one or more information services to be presented in a content view or using components external to the system (e.g., a web browser). The information services presented in the index view may have a compact representation to optimize the use of the display area. The content view may be used to present an information service in its full form.
Help information related to the system may be presented in a “help view.” In addition, transient information services may be presented in a “transient information view.” The user may also interact with the views using various control widgets embedded in the information service, controls such as menu commands integrated into the user interface and appropriate input components integrated into cameraphone 1120.
The views described here may include controls for controlling the presentation of information in audio or video format. The controls may enable features such as play, pause, stop, forward, and reverse of the audio or video information. Audio information may be presented through speaker 1204 or other audio output component connected to the system.
In some embodiments, the user interface and its functionality may be integrated as a single entity in the system. For example, the user interface may be implemented by a software application (e.g., in environments like J2ME, Symbian, and the like) that is part of the system. In other embodiments, some components of the user interface and their functionality may be implemented by various components integrated into the system. For example, the camera view may be integrated into a camera software application or the index and content views may be integrated into a World Wide Web browser.
In some embodiments, the user interface views may also incorporate elements for presenting various system statuses. If the system is busy processing or communicating information, the busy status may be indicated by a graphical representation of a flashing light 2120. In other embodiments, the busy status may be represented differently. For example, the progress of a system activity over an extended duration of time may be indicated using progress bar 2140. A fraction of progress bar 2140, proportionate to the fraction of the extended duration activity completed, may change color to indicate the progress of the operation. Information may also be presented in auxiliary or status panes in textual and graphical form.
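By way of a non-limiting illustration, the following sketch shows the proportional-fill arithmetic described above for progress bar 2140. It assumes a MIDP-style Graphics context; the colors, coordinates, and the manner of obtaining progress values are illustrative placeholders rather than features of any particular embodiment.

```java
import javax.microedition.lcdui.Graphics;

// Illustrative sketch only: a progress bar whose filled fraction is
// proportional to the completed fraction of an extended-duration activity.
public class ProgressBarSketch {
    public static void paintProgress(Graphics g, int x, int y, int width,
                                     int height, long completed, long total) {
        g.setColor(0x404040);            // unfilled (background) portion
        g.fillRect(x, y, width, height);
        if (total > 0) {
            // Fraction of the bar width proportional to work completed.
            int filled = (int) (width * completed / total);
            g.setColor(0x00CC00);        // filled (progress) portion
            g.fillRect(x, y, filled, height);
        }
        g.setColor(0xFFFFFF);            // bar outline
        g.drawRect(x, y, width - 1, height - 1);
    }
}
```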
Further, in some embodiments, the user may be aided in navigating between the different views through use of user interface elements. For example, the different views may be represented in the form of a tabbed panel 2118, wherein various tabs represent different views in the user interface. In some embodiments, the views may be presented as windows that may overlap to various extents.
System status indicators, tabbed panels, and windows may also be incorporated in any of the views described.
The camera view may optionally include controls that indicate the status of the system and characteristics of the visual imagery. For example, in some embodiments, system status (e.g., camera zoom level) and visual imagery characteristics (e.g., brightness of visual imagery) may be indicated by other optional controls on the user interface 2126. In some embodiments, controls for adjusting zoom level, macro mode, focus and the like may be integrated in the user interface 2126.
In some embodiments, the user interface may employ a lighter color (e.g., white) for presenting information against a dark color (e.g., black) background. Such a color scheme is especially useful while presenting information services on a backlit LCD display.
In some embodiments, the information services may be presented in a compact form to maximize use of the display space for presenting the information services. Compact representation of an information service may involve the use of a subset of the information available in an information service. For example, a compact representation may show only the title text of an information service. Audio information may be presented through speaker 1204 integrated into cameraphone 1120.
In some embodiments, items in the list may be selected using cursor 2148. In addition, in some embodiments, the items that were previously selected may be depicted with a representation that differs from items that have not been selected.
Information related to the items in the list may also be presented in auxiliary pane 2136 described earlier. For example, price of a book, URL of a web site, WWW domain name, source of a news item, type of a product, time and location associated with an information service, etc. may be presented in auxiliary pane 2136. In addition, as a user moves cursor 2148, auxiliary pane 2136 may be updated to display metadata related to the item currently highlighted by cursor 2148. In some embodiments, a short clip of the audio information associated with an information service may be played as preview when an item in the list is selected.
In some embodiments, the index view may also include controls for controlling presentation when presenting information in audio or video format. The controls may enable features such as play, pause, stop, forward, and reverse of the audio or video information. Audio information may be presented through speaker 1204 integrated into cameraphone 1120. Scroll indicators 2152 serve to guide navigation of the information as described earlier. In some embodiments, information services that share common attributes (e.g., information sourced from the World Wide Web) may be represented using shared visual attributes such as a common icon, text color, or background color.
In some embodiments, the index view may employ a lighter color (e.g., white) for presenting information against a dark color (e.g., black) background. Such a color scheme is especially useful while presenting information services on a backlit LCD display.
In some embodiments, parts of the information presented may be identified as significant. For instance, here, text of significance is highlighted 2158. In other embodiments, a region of significance may be depicted through other textual and graphical marks (e.g., change in color, underlining, flashing). A graphical cursor may be used in conjunction with cursor control keys, a joystick, or other similar input components to highlight presented information.
In some embodiments, the content view may employ a lighter color (e.g., white) for presenting information against a dark color (e.g., black) background. Such a color scheme is especially useful while presenting information services on a backlit LCD display.
The user interface may also allow customization. Such customizations of user interfaces are commonly referred to as themes or skins. The customization may be either specified explicitly by the user or determined automatically by the system based on criteria such as system and environmental factors. System factors used by the system for customizing the user interface include the capabilities of cameraphone 1120, the capabilities of the communication network, the system learned preferences of the user and the media formats used in the information services being presented. Another system factor used for the customization may be the availability of sponsors for customization of the user interface. Sponsors may customize the user interface with their branding collateral and advertisement content. Environmental factors used by the system for customizing the user interface may include the geographical and spatial location, the time of day of use and the ambient lighting. User interface options that are thus customized may include color schemes, icons used in the user interface, the layout of the widgets in the user interface and commands assigned to various functions of the user interface.
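By way of a non-limiting illustration, the following sketch selects a color scheme from two of the environmental factors named above, the time of day and the ambient lighting. The hour boundaries, lux threshold, and Theme fields are illustrative assumptions.

```java
// Illustrative sketch: choosing a user interface color scheme from
// environmental factors. Thresholds and field names are assumptions.
public class ThemeSelector {
    public static class Theme {
        final int foreground, background; // packed 0xRRGGBB colors
        Theme(int fg, int bg) { foreground = fg; background = bg; }
    }

    // Light text on a dark background suits dim conditions and is
    // especially useful on a backlit LCD display, as noted earlier.
    public static Theme select(int hourOfDay, double ambientLux) {
        boolean night = hourOfDay < 7 || hourOfDay >= 19; // assumed bounds
        boolean dim = ambientLux < 50;                    // assumed threshold
        if (night || dim) {
            return new Theme(0xFFFFFF, 0x000000); // white on black
        }
        return new Theme(0x000000, 0xFFFFFF);     // black on white
    }
}
```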
The user interface may enable communication of information presented in the views using communication services such as email, SMS, MMS, and the like. For instance, the list of information presented in the index view or the information service presented in detail in the content view may be communicated to a recipient as an email using appropriate menu commands or by activating appropriate graphical user interface widgets.
The user interface may also enable storage of information services presented in the views. For instance, the list of information services presented in the index view or the information service presented in detail in the content view may be stored for later access and use, using appropriate menu commands or by activating appropriate graphical user interface widgets.
User Interface Input Mechanisms
In the context of this description, the term “click” refers to a user input on the user interface wherein the user clicks on a key, button, joystick, scroll wheel, thumb wheel, or equivalent integrated into cameraphone 1120; flicks a joystick integrated into cameraphone 1120; spins or clicks a scroll wheel, thumb wheel, or equivalent; or taps on a touch sensitive or pressure sensitive input component. In the context of this description, the term “flick” refers to a movement of a joystick, scroll wheel, or thumb wheel in one of its directions of motion.
In addition, in the context of this description, the term “click” may refer to 1) the transitioning of an input component from its default state to a selected or clicked state (e.g., key press), 2) the transitioning of an input component from its selected or clicked state to its default state (e.g., key release), or 3) the transitioning of an input component from its default state to a selected or clicked state followed by its transitioning back from the selected or clicked state to its default state (e.g., key press followed by a key release). The action to be initiated by the click input may be triggered on any of the three versions of click events defined above as determined by the implementation of a specific embodiment.
In addition, input components may also exhibit a bistate behavior wherein clicking on the input component once transitions it to a clicked state in which it continues to remain. If the input component is clicked again, the input component is returned to its default or unclicked state. This bistate behavior is termed “toggle” in the context of this description.
In the context of this description, the term “click hold” is used to refer to a user input on the user interface that has an extended temporal duration. For example, the user may click on a key or button integrated into the cameraphone and hold it in its clicked state; click on a joystick integrated into the cameraphone and hold it in its clicked state; flick a joystick integrated into cameraphone 1120 and hold it in its flicked state; spin or click a scroll wheel, thumb wheel, or equivalent and hold the wheel in its engaged state; or input a single input on a touch sensitive or pressure sensitive input component and continue the input in an uninterrupted manner.
The end of the click hold operation, and hence the duration of the click hold event, is marked by the return of the input component to its default or unclicked state. The action to be initiated by the click hold input may be triggered either at the transition of a key from its default state to its clicked state, after the user holds the input component in its clicked state for a previously specified period of time or on return of the input component from its clicked state to its default state.
The difference between a click and a click hold is that a click represents an instantaneous moment, while a click hold represents a duration of time, with the start and end of the duration marked by the click and the release or return of the input component to its unclicked or default state.
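By way of a non-limiting illustration, the following sketch distinguishes a click from a click hold using the timestamps of the two state transitions discussed above. The listener interface, method names, and threshold value are illustrative assumptions rather than part of any particular cameraphone API.

```java
// Illustrative sketch: classifying an input as a click (instantaneous) or
// a click hold (extended duration) from press/release transitions.
public class ClickClassifier {
    // Presses held at least this long are reported as click holds (assumed).
    private static final long HOLD_THRESHOLD_MS = 500;

    public interface Listener {
        void onClick();
        void onClickHold(long durationMs);
    }

    private final Listener listener;
    private long pressedAt = -1;

    public ClickClassifier(Listener listener) {
        this.listener = listener;
    }

    // Input component transitions from its default to its clicked state.
    public void keyPressed() {
        pressedAt = System.currentTimeMillis();
    }

    // Input component returns to its default state; the elapsed time
    // between the two transitions decides which event is reported.
    public void keyReleased() {
        if (pressedAt < 0) return;
        long duration = System.currentTimeMillis() - pressedAt;
        pressedAt = -1;
        if (duration >= HOLD_THRESHOLD_MS) {
            listener.onClickHold(duration);
        } else {
            listener.onClick();
        }
    }
}
```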
In addition to clicks, click holds and toggles, the motion of the cameraphone by itself may be used to represent input events, in certain embodiments. For instance, in some embodiments, motion tracking and estimation processes are used on the visual imagery captured with the camera to detect the motion of cameraphone 1120 relative to its environment.
In other embodiments, the motion of cameraphone 1120 may be sensed using other motion sensing mechanisms such as accelerometers and spatial triangulation mechanisms such as the Global Positioning System (GPS). Specific patterns in the motion of the cameraphone, thus inferred, are used to represent clicks and click hold events. For instance, unique gestures such as the motion of the cameraphone perpendicular to the plane of the camera sensor, a circular motion of the cameraphone or a quick lateral movement of the cameraphone are detected from the motion sensing mechanisms and used to represent various click and click hold events. In addition, a plurality of such unique gestures may be used to represent a plurality of unique click, click hold and toggle events.
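By way of a non-limiting illustration, the following sketch detects one of the gestures mentioned above, a quick lateral movement, from accelerometer samples and reports it as a click-equivalent event. The axis convention, thresholds, and debounce interval are illustrative assumptions.

```java
// Illustrative sketch: mapping a quick lateral movement of the
// cameraphone, sensed by an accelerometer, to a click event.
public class LateralGestureDetector {
    private static final double LATERAL_THRESHOLD = 8.0; // m/s^2, assumed
    private static final long DEBOUNCE_MS = 300;         // assumed

    private long lastFiredAt = 0;

    // Feed one sample (ax = lateral axis, ay = vertical axis); returns
    // true when a quick lateral movement is detected.
    public boolean onSample(double ax, double ay, long timestampMs) {
        boolean lateralSpike = Math.abs(ax) > LATERAL_THRESHOLD
                && Math.abs(ax) > 2 * Math.abs(ay); // dominantly sideways
        if (lateralSpike && timestampMs - lastFiredAt > DEBOUNCE_MS) {
            lastFiredAt = timestampMs; // suppress duplicate detections
            return true;
        }
        return false;
    }
}
```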
In some embodiments, speech input may also be used to generate commands equivalent to clicks, click holds, and toggles using speech and voice recognition components integrated into the system. Further, speech input may also be used for cursor control, highlighting, selection of items in lists, and selection of hyperlinks.
Graphical Widgets, Their Selection and Operation
Clicks, click holds, toggles, and equivalent inputs may optionally be associated with visual feedback in the form of widgets integrated into the user interface. An example of a simple widget integrated into the user interface is a graphical button on the cameraphone's display 1206. In some embodiments, a plurality of such widgets integrated into the user interface may be used in conjunction with an input component, to provide a plurality of functionalities for the input component. For example, a joystick may be used to move a selection cursor between a number of graphical buttons presented on the client display to select a specific mode of operation. Once a specific mode of operation has been selected, the system may present the user interface for the selected mode of operation which may include redefinition of the actions associated with the activation of the various input components used by the system. Effectively, such a graphical user interface enables the functionality of a plurality of “virtual” user interface elements (e.g., graphical buttons) using a single physical user interface component (e.g., joystick).
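By way of a non-limiting illustration, the following sketch shows a single joystick providing the functionality of several virtual buttons, as described above: flicking the joystick moves the selection between graphical buttons and clicking it activates the selected one. The MIDP Canvas game actions are standard; the button labels and the activate() hook are illustrative assumptions.

```java
import javax.microedition.lcdui.Canvas;

// Illustrative sketch: one physical joystick driving several "virtual"
// graphical buttons via a moving selection cursor.
public class VirtualButtonBar {
    private final String[] buttons = { "Camera", "Index", "Content", "Help" };
    private int selected = 0; // index of the currently highlighted button

    // Key handling as it might appear in a MIDP Canvas subclass.
    public void keyPressed(Canvas canvas, int keyCode) {
        int action = canvas.getGameAction(keyCode);
        if (action == Canvas.LEFT) {          // widget select (previous)
            selected = (selected + buttons.length - 1) % buttons.length;
        } else if (action == Canvas.RIGHT) {  // widget select (next)
            selected = (selected + 1) % buttons.length;
        } else if (action == Canvas.FIRE) {   // widget activate
            activate(buttons[selected]);
        }
    }

    private void activate(String button) {
        // A real system would switch modes and redefine input actions here.
        System.out.println("Activated: " + button);
    }
}
```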
Using an input component to interact with multiple widgets in a graphical user interface may involve a two step process: 1) a step of selecting a specific widget on the user interface to interact with and 2) a step of activating the widget.
The first step of selecting a widget is performed by pointing at the widget with an “arrowhead” mouse pointer, a cross hair pointer or by moving widget highlights, borders and the like, upon which the widget may transition from the unselected to selected state. Moving the cursor away from a widget may transition it from the selected to unselected state. The second step of activating the widget is analogous to the click or click hold operations described earlier for physical input components.
In the context of this description, the term “widget select” is used to describe one of the following operations: 1) the transitioning of a widget from unselected to selected state, 2) the transitioning of a widget from selected to unselected state, or 3) the transitioning of a widget from unselected to selected state followed by its transitioning from selected to unselected state. The term “widget activate” is used to refer to one of the following operations: 1) the transitioning of a widget from inactive to active state, 2) the transitioning of a widget from active to inactive state, or 3) the transitioning of a widget from inactive to active state followed by its transitioning from active to inactive state. A “widget hold” event may be generated by the transitioning of a widget from inactive to active state and the holding of the widget in its active state for an extended duration of time. The return of the widget to its default or inactive state may mark the end of the widget hold event.
In addition, widgets may optionally exhibit a bistate behavior wherein clicking on the input component once while a widget is selected transitions it to an activated state in which it continues to remain. If the widget which is now in its activated state is selected and the input component clicked again, the widget is returned to its default or inactive state. This bistate behavior is termed “widget toggle.”
Widget activate, widget hold and widget toggle events may be generated by the user using clicks, click holds, toggles and equivalent inputs generated using an input component integrated into cameraphone 1120, in conjunction with widgets selected on the graphical user interface.
The selection of a widget on the user interface may be represented by changes in the visual appearance of a widget, e.g., through use of highlights, color changes, icon changes, animation, drawing of a border around the widget or other equivalent visual feedback, through the use of audio feedback such as sounds or beeps or through tactile feedback such as vibrations. Similarly, the activation of a widget using a widget activate operation or an extended activation of a widget using a widget hold operation may be represented by changes in the visual appearance of a widget, e.g., through use of highlights, color changes, icon changes, animation, drawing of a border around the widget or other equivalent visual feedback, through use of audio feedback such as sounds or beeps or through tactile feedback such as vibrations.
Widget select events may be input using an input component that supports selection between a plurality of widgets such as a mouse, joystick, scroll wheel, thumb wheel, touch pad or cursor control keys. Widget activate, widget toggle and widget hold events may be input using input components such as a mouse, joystick, touch pad, scroll wheel, thumb wheel or hard or soft buttons. In addition, the motion of cameraphone 1120 by itself may be used to control the cursor and generate widget select, widget activate, widget toggle and widget hold events, in certain embodiments.
For instance, in some embodiments, motion tracking or estimation mechanisms may be used on the visual imagery captured with camera 1310 to detect the motion of the cameraphone relative to its environment, and the detected motion may be used to control the movement of the cursor, i.e., for widget select events. In such an embodiment, the motion of the cursor or the selection of widgets mimics the motion of cameraphone 1120. Specific patterns in the motion of cameraphone 1120 may be used to represent widget activate and widget hold events. For instance, unique gestures such as the motion of cameraphone 1120 perpendicular to the plane of the camera sensor, a circular motion of cameraphone 1120, or a quick lateral movement of cameraphone 1120 may be detected from the motion sensing mechanisms and used to represent various widget activate and widget hold events. The motion of the cameraphone may also be optionally sensed using other motion sensing mechanisms such as accelerometers and triangulation mechanisms such as the Global Positioning System.
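By way of a non-limiting illustration, the following sketch estimates the cameraphone's frame-to-frame motion from captured imagery by exhaustive block matching (choosing the shift with the minimum sum of absolute differences) and returns a cursor displacement. The grayscale frame representation, search radius, and subsampling stride are illustrative assumptions; a practical implementation would use a far more efficient motion estimator.

```java
// Illustrative sketch: global motion estimation over two grayscale frames
// (row-major int arrays), used to drive cursor movement.
public class GlobalMotionEstimator {
    // Returns {dx, dy} for the cursor, given previous and current frames
    // of size w x h, searching shifts within +/- radius pixels.
    public static int[] estimate(int[] prev, int[] curr, int w, int h,
                                 int radius) {
        long bestCost = Long.MAX_VALUE;
        int bestDx = 0, bestDy = 0;
        for (int dy = -radius; dy <= radius; dy++) {
            for (int dx = -radius; dx <= radius; dx++) {
                long cost = 0;
                for (int y = radius; y < h - radius; y += 4) {     // subsample
                    for (int x = radius; x < w - radius; x += 4) { // for speed
                        int a = prev[y * w + x];
                        int b = curr[(y + dy) * w + (x + dx)];
                        cost += Math.abs(a - b); // sum of absolute differences
                    }
                }
                if (cost < bestCost) {
                    bestCost = cost;
                    bestDx = dx;
                    bestDy = dy;
                }
            }
        }
        // The cursor moves opposite to the apparent scene shift: moving the
        // phone right makes the scene appear to shift left in the viewfinder.
        return new int[] { -bestDx, -bestDy };
    }
}
```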
In some embodiments, speech input may also be used to generate commands equivalent to click, click hold, toggle, widget select, widget activate, and widget hold events using speech and voice recognition components integrated into the system.
Equivalency of User Interface Inputs
In some embodiments, clicks may be substituted with a click hold, where the embodiment may interpret the click hold so as to automatically generate a click or toggle event from the click hold user input using various system and environmental parameters. For instance, in some embodiments, upon the start of the click hold input or a toggle, the system may monitor the visual imagery for changes in its characteristics, such as average brightness, and automatically capture a still image when such a change occurs, in the process emulating a click. In some embodiments, upon the start of a click hold event or a toggle, a system timer may be used to automatically capture a still image after a preset interval or a preset number of video frames, in the process emulating a click.
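By way of a non-limiting illustration, the following sketch emulates a click from a click hold by watching the average brightness of incoming viewfinder frames, as in the first example above. The 8-bit luminance frame format and the change threshold are illustrative assumptions.

```java
// Illustrative sketch: auto-capturing a still image during a click hold
// when the average brightness of the viewfinder imagery changes markedly.
public class BrightnessTriggeredCapture {
    private static final int CHANGE_THRESHOLD = 20; // luminance delta, assumed
    private double lastAverage = -1;

    // Feed one 8-bit luminance frame while the click hold is active;
    // returns true when the caller should capture a still image,
    // thereby emulating a click.
    public boolean onFrame(int[] luma) {
        long sum = 0;
        for (int i = 0; i < luma.length; i++) sum += luma[i];
        double avg = (double) sum / luma.length;
        boolean triggered = lastAverage >= 0
                && Math.abs(avg - lastAverage) > CHANGE_THRESHOLD;
        lastAverage = avg;
        return triggered;
    }
}
```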
In some embodiments, a click or toggle may be substituted for a click hold. In this case, the implicit duration of the click hold event represented by a click or toggle may be determined automatically by the system based on various system and environmental parameters as determined by the implementation. Similarly, widget activate, widget toggle, and widget hold operations may also be optionally used interchangeably when used in conjunction with additional system or environmental inputs, as in the case of clicks and click holds.
While the following description presents the operation of embodiments using clicks and click holds, other embodiments may substitute these inputs with toggle, widget select, widget activate, widget toggle, and widget hold operations. For instance, in some embodiments, the selection of a button widget may be interpreted as equivalent to a click. In some embodiments, some user interface inputs may be in the form of spoken commands that are interpreted using speech recognition.
Features of Visual Components of User Interface
A user using the system for accessing information services related to visual imagery first captures live visual imagery or selects it from storage and then requests related information services. Upon capture of visual imagery or its selection from storage, the selected or captured visual imagery may be optionally displayed on the user interface.
In some embodiments, where a single still image is captured with the camera or selected from storage, the still image may be displayed on the user interface. In some embodiments, where a plurality of still images or video sequences or combinations thereof are captured from the camera or selected from stored visual imagery, the visual imagery may be displayed in a tiled layout or as a filmstrip on the user interface. When displaying video sequences in tiled layout or filmstrip form, the video sequence itself may be played, or a specific frame of the video sequence may be displayed as a still image. When the visual imagery comprises a plurality of still images or video sequences, in some embodiments, only the first or last still image or video sequence to be captured or selected by the user may be presented on the user interface.
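By way of a non-limiting illustration, the following sketch computes thumbnail positions for the tiled layout described above. The tile size, margin, and display width are illustrative placeholder parameters.

```java
// Illustrative sketch: computing {x, y} positions for a tiled layout of
// still-image or video-sequence thumbnails.
public class TiledLayout {
    public static int[][] layout(int count, int displayWidth,
                                 int tileSize, int margin) {
        // Number of tiles that fit per row on the given display width.
        int perRow = Math.max(1, (displayWidth - margin) / (tileSize + margin));
        int[][] positions = new int[count][2];
        for (int i = 0; i < count; i++) {
            positions[i][0] = margin + (i % perRow) * (tileSize + margin); // x
            positions[i][1] = margin + (i / perRow) * (tileSize + margin); // y
        }
        return positions;
    }
}
```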
In some embodiments, users may request information services related to selected spatiotemporal regions of the visual imagery. Spatiotemporal regions for which a user requests related information services may be represented in the visual imagery displayed on the user interface using various markers such as icons, highlights, overlays, and timelines to explicitly show the demarcation of the spatiotemporal regions in the visual imagery. For instance, a rectangular region selected by the user in a still image may be represented by a rectangular graphic overlaid on the still image. The selection of a specific spatial region of visual imagery in the form of a video sequence may be represented by the embedding of a marker in the spatial region through the duration of the video sequence. Examples of such a marker are a change in the brightness, contrast, or color statistics of the selected region such that it stands out from the rest of the visual imagery.
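By way of a non-limiting illustration, the following sketch implements one of the marker styles just described: raising the brightness of a selected rectangular region so that it stands out from the rest of the imagery. The 8-bit luminance format and the fixed brightness boost are illustrative assumptions, and the rectangle is assumed to lie within the image.

```java
// Illustrative sketch: demarcating a selected rectangular region of a
// luminance image by brightening it relative to its surroundings.
public class RegionMarker {
    public static void brighten(int[] luma, int imageWidth,
                                int rx, int ry, int rw, int rh) {
        for (int y = ry; y < ry + rh; y++) {
            for (int x = rx; x < rx + rw; x++) {
                int v = luma[y * imageWidth + x] + 60; // fixed boost, assumed
                luma[y * imageWidth + x] = Math.min(v, 255); // clamp to 8 bits
            }
        }
    }
}
```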
In some embodiments that use input components in conjunction with selectable widgets on the user interface, the process of selecting a widget on the user interface and widget activating or widget toggling or widget holding using an input component is intended to provide a look and feel analogous to clicking or toggling or click holding, respectively, on an input component used without any associated user interface widgets. For instance, selecting a widget in the form of a graphical button by moving a cursor in the form of a border around the button using a joystick and activating the widget by clicking on the joystick is a user experience equivalent to clicking on a specific physical button.
Similarly, in some embodiments that use input components in conjunction with selectable widgets on the user interface, the process of requesting information services related to a given visual imagery may require the user to select visual imagery displayed in the form of widgets on the user interface such as a viewfinder, tiled layout or filmstrip as described earlier, and widget activate the visual imagery. Such a process provides a user experience that is analogous to “clicking” on the visual imagery.
Features of Audio Components of User Interface
In some embodiments, the user interface may employ audio cues to denote various events in the system. For instance, the system may generate audio signals (e.g., audio tones, audio recordings) when the user switches between different views, inputs information in the user interface, uses input components integrated into the cameraphone (e.g., click, click hold, toggle), uses widgets integrated into the cameraphone user interface (e.g., widget select, widget activate, widget toggle, widget hold) or to provide an audio rendering of system status and features (e.g., system busy status, updating of progress bar, display of menu options, readout of menu options, readout of information options).
In some embodiments, the system may provide an audio rendering of various information elements obtained by processing the visual imagery. The user may then select segments of the audio rendering that are representative of spatiotemporal regions of the visual imagery for which the user is interested in requesting related information services. This process enables users to request information services related to visual imagery without relying on the visual components of the user interface. Users may mark the segments of audio corresponding to the spatiotemporal regions of the visual imagery they are interested in, using various input mechanisms described earlier.
In some embodiments, the system may provide an audio rendering of the information in various media types (e.g., using a text-to-speech converter) in the information services generated by the system as related to visual imagery. This enables users to browse and listen to the information services without using the visual components of the user interface. This feature in conjunction with the other audio feedback mechanisms presented earlier may enable a user to use all features of the system using only the audio components of the user interface, i.e., without using the visual components of the user interface.
Operation of Exemplary Embodiments
The operation of the system may involve capturing visual imagery; requesting information services related to the visual imagery; identifying related information services; providing the related information services; presenting them, optionally in compact form; selecting one or more information services for presentation, optionally in their entirety; and presenting the selected information services. The process of requesting related information services may be initiated explicitly by the user through user inputs or triggered automatically by the system or by environmental events monitored by the system.
In some embodiments, the request for information services related to visual imagery may be generated by cameraphone 1120 and communicated to system server 1160 over communication network 1140. The system server 1160, upon receiving the request, generates the information services. The process of generating the related information services may involve the generation of a plurality of contexts from the visual imagery, associated metadata, information extracted from the visual imagery, and knowledge derived from knowledgebases. A plurality of information services related to the generated contexts may be identified from a plurality of information service sources or knowledgebases. The information services identified as related to the contexts and hence to the visual imagery may then be presented to the user on the cameraphone.
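By way of a non-limiting illustration, the following sketch shows the round trip just described from the cameraphone's side: captured imagery and metadata are posted to the system server and a description of the related information services is read back. The endpoint URL, header names, and newline-delimited response format are illustrative assumptions; a Java SE HTTP client is used for brevity.

```java
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Illustrative sketch: requesting information services related to a
// captured still image from the system server.
public class InformationServiceClient {
    public static String requestServices(byte[] jpegImage, String location)
            throws Exception {
        URL url = new URL("http://systemserver.example/services"); // assumed
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "image/jpeg");
        conn.setRequestProperty("X-Capture-Location", location); // metadata

        OutputStream out = conn.getOutputStream();
        out.write(jpegImage); // the visual imagery itself
        out.close();

        // The response body is assumed to describe the related information
        // services, e.g., one compact entry per line for the index view.
        InputStream in = conn.getInputStream();
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = in.read(chunk)) > 0) buf.write(chunk, 0, n);
        in.close();
        return buf.toString("UTF-8");
    }
}
```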
User interface views integrated into system 1100 may enable users to capture visual imagery, request related information services and interact with the related information services. Users may use the different views of the user interface to perform various functions related to requesting, accessing, and using the information services. Users may interact with the user interface through use of appropriate input components integrated into cameraphone 1120.
In some embodiments, operation of the system may require the use of one view (e.g., camera view) of the user interface for capturing the visual imagery, the use of another view (e.g., index view) for presenting the plurality of information services in compact form and the use of another view (e.g., content view) for the presentation of the information services in their entirety. In some embodiments, in response to a request for information services related to a visual imagery, the system may present a single information service as most relevant to a visual imagery, for instance in the content view, without presenting a plurality of information services.
A user using the system to request information services related to visual imagery may first capture visual imagery or select it from storage and then request related information services. In some embodiments, the system presents the captured visual imagery and then the requested information services. In some other embodiments, information services may be presented over an extended period of time, as the visual imagery is being captured or retrieved from storage. In such embodiments, the visual imagery may have an extended time duration, as in the case of a video sequence or a sequence of still images. Information services related to the visual imagery may be presented as the visual imagery is communicated or streamed from the cameraphone to the system server and processed by the system server. The information services being presented may also be updated continually as the visual imagery is communicated to the system server.
In some embodiments, the information services provided by the system may be presented independent of the visual imagery, for instance, in a separate view of the user interface from the one used to present the visual imagery. In some embodiments, the information services provided by the system may be presented along with the captured visual imagery, for instance, in the same view of the user interface as the captured visual imagery. In some embodiments, the information services may also be presented such that they augment the captured visual imagery.
In some embodiments, transient information services may be presented between the various steps of system operation. For instance, in some embodiments, transient information services may be presented when the system is busy processing or communicating information. In some embodiments, transient information services may be presented for providing sponsored information services. In some embodiments, transient information services may be presented as an interstitial view between displaying different views of the user interface.
The process of capturing visual imagery and the requesting of related information services may use one of the modes of operation discussed below. While the following modes of operation describe the capture of visual imagery, other associated information such as metadata of the visual imagery and other user and system inputs may also be captured along with the visual imagery and used to provide related information services.
One-Step Mode of Operation
Here, the operation of some embodiments in which a user requests information services related to visual imagery using a single step of inputs is described. The single step may comprise a set of user inputs that is used both for capturing visual imagery and for requesting related information services.
In some embodiments, the single step of inputs may trigger the capture of visual imagery by cameraphone 1120, creation of a request for related information services, communication of the request to system server 1160, identification and generation of the related information services by the system server, communication of the information services to the cameraphone, and presentation of the information services on the cameraphone user interface.
In some embodiments, a one-step mode of operation may be used to request information services related to a single still image captured using the camera integrated into the cameraphone. Here, the user points the camera integrated into cameraphone 1120 at the scene of interest and inputs a click on an input component. Upon that single user input, visual imagery is captured by the cameraphone and a request for related information services is generated. Optionally, the captured still image may be displayed on the user interface. Then the information services related to the still image obtained from the system server may be presented to the user on the cameraphone user interface.
This one-step mode of operation is analogous to taking a picture using a cameraphone with a single click. In this embodiment, upon the user inputting a single click, a still image is captured and the user is presented one or more related information services as opposed to simple storage of the captured image, as in the case of a camera function. Further, exactly a single click may be required to capture the image and to request related information services in the one-step mode of operation, when the captured visual imagery is in the form of a single still image.
Here, a user views the visual scene using the viewfinder integrated into the camera view 3110. The user may optionally align the visual imagery displayed in the viewfinder as required in some embodiments 3120. The user then clicks on a joystick to trigger the system to capture a single still image and simultaneously request related information services 3130. The captured still image may optionally be presented in the user interface while the system retrieves related information services 3140. The related information services may be then presented in the index view or content view 3160. In some embodiments, transient information services may be presented before information services related to the visual imagery are presented 3150.
In some embodiments, the one-step operation described above for visual imagery comprised of a single still image may be repeated iteratively. In such embodiments, in each cycle of the iteration a single still image may be captured and information services are requested after the capture of the still image. The information services presented in each cycle may be identified and provided based on one or more of the still images captured up to that iteration. In this mode of operation, in “N” cycles, the user inputs “N” clicks for capturing “N” still images and requesting related information services. This mode of operation helps a user to iteratively filter the obtained information services by providing additional visual imagery input each time.
In some embodiments, a one-step mode of operation may be used to request information services related to a single still image obtained from storage. Here, the user navigates the visual imagery available in storage and selects a still image in order to retrieve information services related to it. The images may be stored in cameraphone 1120 or on system server 1160. In some embodiments, the images available in the system may be presented on the cameraphone user interface with decreased dimensions (i.e., as thumbnails) representative of the images. The user input for the selection of the still image also triggers the request for related information services from the system server. Optionally, the selected still image may be displayed on the user interface. Then the information services related to the still image obtained from the system server may be presented to the user on the cameraphone user interface.
In some embodiments, a one-step mode of operation may be used to request information services related to a contiguous set of still images or a single video sequence captured using the camera integrated into the cameraphone. Here, the user points the camera integrated into the cameraphone at the scene of interest and initiates a click hold on an input component to begin capture of the visual imagery. The system then captures a contiguous set of still images or a video sequence, depending upon the embodiment. Upon termination of the click hold, the visual imagery is used to request related information services. Optionally, the captured visual imagery may be displayed on the user interface. Then the information services related to the visual imagery obtained from the system server may be presented to the user on the cameraphone user interface.
In some embodiments, a one-step mode of operation may be used to request information services related to a single video sequence obtained from storage. Here, the user navigates the visual imagery available in storage and selects a video sequence in order to retrieve information services related to it. The video sequences may be stored in cameraphone 1120 or on system server 1160. In some embodiments, the video sequences available in the system may be presented on the cameraphone user interface with decreased dimensions (i.e., as thumbnails) representative of the video sequences. The user input for the selection of the video sequence also triggers the request for related information services from the system server. Optionally, the selected video sequence may be displayed on the user interface. Then the information services related to the video sequence obtained from the system server may be presented to the user on the cameraphone user interface.
In some embodiments, a one-step mode of operation may be used to request information services related to visual imagery in the form of a plurality of still images, a plurality of video sequences or a combination thereof, captured live from the camera integrated into the cameraphone or obtained from storage. Here, the user captures or selects each of the visual imagery using inputs as discussed earlier. The final user input may also serve as the trigger for request of information services related to the visual imagery. For instance, if the user has not made any additional input for a predetermined duration, the system may interpret the last input as a request for information services related to the set of visual imagery captured or selected so far. In some embodiments, the choice of capturing or selecting visual imagery in the form of a single still image versus a plurality of still images versus a single video sequence versus a plurality of video sequences versus a combination thereof, upon user input, may be automatically made by the system based on parameters such as system timers, user preferences or changes in characteristics of the visual imagery. Further, in the one-step mode of operation, exactly “N” user inputs may be required for requesting information services related to “N” still images captured by a user.
Two-Step Mode of Operation
Here, the operation of some embodiments in which a user requests information services related to visual imagery using two steps of inputs is described. The first step consists of a set of user inputs for capturing visual imagery. The second step consists of a set of user inputs for requesting related information services.
In some embodiments, the first step of operation may trigger the capture of visual imagery by cameraphone 1120. Then, the second step of operation may trigger the creation of a request for related information services, communication of the request to system server 1160, identification and generation of the related information services by the system server, communication of the information services to the cameraphone, and presentation of the information services on the cameraphone user interface.
In some embodiments, a two-step mode of operation may be used to request information services related to a single still image captured using the camera integrated into the cameraphone. Here, in the first step of operation, the user points the camera integrated into the cameraphone at the scene of interest and inputs a click on an input component to capture a single still image. Optionally, the captured still image may be displayed on the user interface. The user may then request information services related to the still image, in the second step of operation, using an input in the form of a single click. Then the information services related to the still image obtained from the system server may be presented to the user on the cameraphone user interface. Some embodiments may include visual feedback on the user interface such that the captured and displayed visual imagery is highlighted before the user makes the second click. This process in effect creates the user experience of clicking on the captured image.
In some embodiments, using a two-step mode of operation for requesting information services related to a single still image captured using a camera, only two clicks are required: one for capturing the still image and the other for requesting information services.
In some embodiments, the two-step operation described above for visual imagery comprised of a single still image may be repeated iteratively. In such embodiments, in each cycle of the iteration a single still image may be captured and information services are requested after the capture of the still image. The information services presented in each cycle may be identified and provided based on one or more of the still images captured up to that iteration. In this mode of operation, in “N” cycles, the user inputs “N” clicks for the first step of capturing the still images and “N” clicks for the second step of requesting related information services. This mode of operation helps a user to iteratively filter the obtained information services by providing additional visual imagery input with each iteration.
In some embodiments, a two-step mode of operation may be used to request information services related to a set of N still images captured using the camera integrated into the cameraphone. Here, in the first step of operation, the user points the camera integrated into the cameraphone at the scenes of interest and inputs N clicks on an input component to capture a set of N still images. Optionally, the captured still images may be displayed on the user interface. The user may then request information services related to the still images, in the second step of operation, using an input in the form of a single click. Then the information services related to the still images obtained from the system server may be presented to the user on the cameraphone user interface. Thus, exactly N+1 inputs are required to request information services related to N still images: N clicks for capturing the images and one click for requesting information services.
In some embodiments, a two-step mode of operation may be used to request information services related to a single still image selected from storage. Here, in the first step of operation, the user navigates the visual imagery available in storage and selects a still image. Optionally, the selected still image may be displayed on the user interface. The user may then request information services related to the still image, in the second step of operation using an input in the form of a single click. Then information services related to the still image are obtained from the system server and presented to the user on the cameraphone user interface. This process in effect creates the user experience of interacting with the selected image.
In some embodiments, a two-step mode of operation may be used to request information services related to a contiguous set of still images or a video sequence captured using the camera integrated into the cameraphone. Here, in the first step of operation, the user points the camera integrated into the cameraphone at the scene of interest and inputs a click hold on an input component to capture the visual imagery. The start of the capture of visual imagery may be marked by the transition of the click hold input component to its clicked state and the end of the capture by the return of the click hold input component to its unclicked state. Optionally, the captured visual imagery may be displayed on the user interface. The user may then request information services related to the visual imagery, in the second step of operation, using an input in the form of a single click. Then the information services related to the visual imagery obtained from the system server may be presented to the user on the cameraphone user interface. This process in effect creates the user experience of clicking on the visual imagery.
In some embodiments, a two-step mode of operation may be used to request information services related to a single video sequence selected from storage. Here, in the first step of operation, the user navigates the visual imagery available in storage and selects a video sequence. Optionally, the selected video sequence may be displayed on the user interface. The user may then request information services related to the video sequence, in the second step of operation using an input in the form of a single click. Then information services related to the video sequence are obtained from the system server and presented to the user on the cameraphone user interface. This process in effect creates the user experience of interacting with the video sequence.
In some embodiments, a two-step mode of operation may be used to request information services related to a plurality of still images, a plurality of video sequences, or a combination thereof, captured by a camera integrated into the cameraphone or obtained from storage. Here, in the first step of operation, the user uses clicks and click holds as described earlier to capture or select the visual imagery. Optionally, the visual imagery may be displayed on the user interface. The user may then request information services related to the visual imagery, in the second step of operation using an input in the form of a single click. Then information services related to the visual imagery are obtained from the system server and presented to the user on the cameraphone user interface. This process in effect creates the user experience of interacting with the visual imagery.
Three-Step Mode of Operation
Here, the operation of some embodiments in which a user requests information services related to visual imagery using three steps of inputs is described. The first step consists of a set of user inputs for capturing visual imagery. The second step consists of a set of user inputs for requesting information options. The third step consists of a set of user inputs for requesting information services related to one or more of the information options presented.
In some embodiments, the first step of operation may trigger the capture of visual imagery by cameraphone 1120. Then, the second step of operation may trigger the creation of a request for related information options, communication of the request to system server 1160, identification and generation of the related information options by the system server, communication of the information options to the cameraphone, and presentation of the information options on the cameraphone user interface. Then, the third step of operation may trigger the creation of a request for information services related to an information option, communication of the request to the system server, identification and generation of the related information services by the system server, communication of the information services to the cameraphone, and presentation of the information services on the cameraphone user interface.
Information options employed in the second step of operation include hotspots, derived information elements, and hyperlinks. Hotspots are spatiotemporal regions of visual imagery that may be demarcated using graphical overlays such as hotspot boundaries, icons, embedded modifications of the visual imagery (e.g., highlighting of hotspots) or through use of audio cues (e.g., the system may beep when a cursor is moved over a hotspot). In some embodiments, the spatiotemporal regions may have spatial dimensions smaller than the spatial dimensions of the visual imagery, e.g., the spatiotemporal regions demarcate a subset of the pixels of the visual imagery. In some embodiments, the spatiotemporal regions may have temporal dimensions that are smaller than the temporal dimensions of the visual imagery, for instance, the spatiotemporal regions may be comprised of a set of adjacent video frames or still images. In some embodiments, the spatiotemporal regions may be demarcated both in spatial and temporal dimensions simultaneously. In some embodiments, hotspots may be presented such that they appear as visual augmentations on the captured visual imagery.
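By way of a non-limiting illustration, the following sketch represents a hotspot as just characterized: a region bounded in space by a pixel rectangle and in time by a frame range, with a hit test for cursor interaction. The field names are illustrative assumptions.

```java
// Illustrative sketch: a spatiotemporal hotspot with spatial and temporal
// extents smaller than those of the enclosing visual imagery.
public class Hotspot {
    final int x, y, width, height;   // spatial extent, in pixels
    final int firstFrame, lastFrame; // temporal extent, inclusive

    public Hotspot(int x, int y, int width, int height,
                   int firstFrame, int lastFrame) {
        this.x = x; this.y = y;
        this.width = width; this.height = height;
        this.firstFrame = firstFrame; this.lastFrame = lastFrame;
    }

    // True if pixel (px, py) at the given frame falls inside the hotspot,
    // e.g., for deciding whether a moving cursor is over the hotspot.
    public boolean contains(int px, int py, int frame) {
        return frame >= firstFrame && frame <= lastFrame
                && px >= x && px < x + width
                && py >= y && py < y + height;
    }
}
```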
Information elements derived from visual imagery include text strings or other textual, graphical, or audio representations of visual elements extracted from the visual imagery. For instance, embedded textual information extracted from visual imagery may be presented as text strings on the user interface, denoted with icons marking their location on the visual imagery, or presented through audio output components using speech synthesis. These elements may be presented to the user in the camera view or in the index view as a list or other such representation. For instance, in one embodiment, all the text strings extracted from the visual imagery may be presented as a list in the index view as information options. The user may choose one or more of the presented text strings to obtain related information services. In some embodiments, the captured visual imagery may be presented along with the derived elements. The derived elements may be presented sorted by relevance to the captured visual imagery. Information elements may also be derived by the system based on other inputs, other system information, and system state.
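As a small sketch of the relevance-sorted presentation just described, the following Python fragment lists extracted text strings as information options; the strings and scores are fabricated for illustration only.

    # Hypothetical sketch: text strings extracted from the imagery are
    # presented as information options, most relevant first.

    derived_elements = [
        {"text": "ACME Cola",        "relevance": 0.92},
        {"text": "330 ml",           "relevance": 0.41},
        {"text": "best before 2026", "relevance": 0.17},
    ]

    # An index-view list of options, sorted by relevance score.
    for element in sorted(derived_elements,
                          key=lambda e: e["relevance"], reverse=True):
        print(element["text"])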
In some embodiments, a three-step mode of operation may be used to request information services related to a single still image captured using the camera integrated into the cameraphone. Here, in the first step of operation, the user points the camera integrated into the cameraphone at the scene of interest and inputs a click on an input component to capture a single still image. Optionally, the captured still image may be displayed on the user interface. The user may then request information options related to the still image, in the second step of operation, using an input in the form of a single click. The information options related to the still image obtained from the system server may then be presented to the user on the cameraphone user interface. The user may then select one or more information options presented and request related information services in the third step of operation. This process in effect creates the user experience of interacting with the captured image.
In the case of requesting information services related to a single still image, the three-step mode of operation may require exactly three clicks: one for capturing the image, one for generating a list of information options, and the last for requesting information services based on the default information option.
The selection of one or more information options and the requesting of related information services are analogous, in terms of the user experience, to selecting and activating one or more widgets on the user interface. Hence, all aspects of interaction with widgets in a graphical user interface using a multifunction input component (e.g., the use of multifunction input components, the specific types of the user's interaction with the multifunction input component, the visual feedback presented on the graphical user interface, and the use of accelerated key inputs) apply to the user's interaction with the information options.
In some embodiments, a three-step mode of operation may be used to request information services related to a single still image obtained from storage. Here, in the first step of operation, the user navigates the visual imagery available in storage and selects a still image. Optionally, the selected still image may be displayed on the user interface. The user may then request information options related to the still image, in the second step of operation, using an input in the form of a single click. The information options related to the still image obtained from the system server may then be presented to the user on the cameraphone user interface. The user may then select one or more information options presented and request related information services in the third step of operation. This process in effect creates the user experience of interacting with the selected image.
In some embodiments, a three-step mode of operation may be used to request information services related to a set of contiguous still images or a single video sequence captured using the camera integrated into the cameraphone. Here, in the first step of operation, the user points the camera integrated into the cameraphone at the scene of interest and inputs a click hold on an input component to capture the visual imagery. Optionally, the captured visual imagery may be displayed on the user interface. The user may then request information options related to the visual imagery, in the second step of operation, using an input in the form of a single click. The information options related to the visual imagery obtained from the system server may then be presented to the user on the cameraphone user interface. The user may then select one or more information options presented and request related information services in the third step of operation. This process in effect creates the user experience of interacting with the captured visual imagery.
In some embodiments, a three-step mode of operation may be used to request information services related to a set of N still images captured using the camera integrated into the cameraphone. Here, in the first step of operation, the user points the camera integrated into the cameraphone at the scenes of interest and inputs N clicks on an input component to capture a set of N still images. Optionally, the captured still images may be displayed on the user interface. The user may then request information options related to the visual imagery, in the second step of operation, using an input in the form of a single click. The information options related to the visual imagery obtained from the system server may then be presented to the user on the cameraphone user interface. The user may then select one or more information options presented and request related information services in the third step of operation. This process in effect creates the user experience of interacting with the N captured still images. With this process, exactly N+2 inputs are required to request information services related to N still images: N clicks for capturing the images, one click for requesting related information options, and one click to request information services related to the default information option.
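The N+2 input count may be made explicit with a trivial, illustrative calculation (the function name is hypothetical):

    # Hypothetical sketch of the N+2 input count for N still images.

    def inputs_required(n_images):
        capture_clicks = n_images   # one click per captured still image
        option_click = 1            # request the related information options
        service_click = 1           # request services for the default option
        return capture_clicks + option_click + service_click

    print(inputs_required(3))   # -> 5, i.e., N + 2 for N = 3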
In some embodiments, a three-step mode of operation may be used to request information services related to a single video sequence obtained from storage. Here, in the first step of operation, the user navigates the visual imagery available in storage and selects a video sequence. Optionally, the selected video sequence may be displayed on the user interface. The user may then request information options related to the video sequence, in the second step of operation, using an input in the form of a single click. The information options related to the video sequence obtained from the system server may then be presented to the user on the cameraphone user interface. The user may then select one or more information options presented and request related information services in the third step of operation. This process in effect creates the user experience of interacting with the selected video sequence.
In some embodiments, a three-step mode of operation may be used to request information services related to a plurality of still images, a plurality of video sequences or a combination thereof, obtained either from storage or captured using a camera integrated into the cameraphone. Here, in the first step of operation, the user captures or selects the visual imagery as described earlier. Optionally, the visual imagery may be displayed on the user interface. The user may then request information options related to the visual imagery, in the second step of operation, using an input in the form of a single click. The information options related to the visual imagery obtained from the system server may then be presented to the user on the cameraphone user interface. The user may then select one or more information options presented and request related information services in the third step of operation. This process in effect creates the user experience of interacting with the visual imagery.
In the embodiments using a three-step mode of operation described above, the information options are generated and presented by the system. In some embodiments employing the three-step mode of operation, the user may instead define information elements manually. For instance, a user may use inputs from navigational input components to “draw” the demarcation boundaries of a hotspot. Examples of navigational input components include joysticks, trackballs, scroll wheels, thumb wheels, and other components with equivalent functionality. A cursor and cursor control keys, or other appropriate input components integrated into the cameraphone, may also be used to mark up the hotspots. Then, the user may request information services related to the manually demarcated hotspot on the visual imagery in a third step, which may involve inputting a single click.
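A minimal sketch of such manual demarcation, under the assumption that navigational inputs arrive as discrete one-pixel cursor moves, might look as follows; the event names and the bounding-box rule are illustrative only.

    # Hypothetical sketch: navigational inputs (joystick, trackball,
    # cursor keys) move a cursor, and the bounding box of the cursor
    # path becomes the manually demarcated hotspot region.

    MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

    def demarcate(start, inputs):
        x, y = start
        xs, ys = [x], [y]
        for key in inputs:                  # e.g., joystick events
            dx, dy = MOVES[key]
            x, y = x + dx, y + dy
            xs.append(x)
            ys.append(y)
        # Left, top, right, bottom of the region the cursor traced.
        return (min(xs), min(ys), max(xs), max(ys))

    print(demarcate((10, 10), ["right", "right", "down", "down", "down"]))
    # -> (10, 10, 12, 13)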
In some embodiments using a three-step mode of operation, the first step and the second step (i.e., capturing visual imagery and generating associated information elements or hotspots) are combined. Hence, this mode of operation may be considered a special case of a two-step mode of operation. The user input for the combined first and second steps captures and processes the visual imagery, resulting in a list of information options. The user input for the third step, which is now effectively the second step, selects one or more information options and requests related information services.
Zero-Input Mode of Operation
Here, embodiments which use zero user inputs for requesting information services related to visual imagery are described. In some embodiments using a zero-input mode of operation, the user points the camera integrated into the cameraphone 1120 at a scene of interest. The visual imagery from the camera may optionally be displayed on the camera view as a viewfinder. As the user points the camera at the visual scene and scans it, cameraphone 1120 may capture still images, video sequences, or a combination of both and request information services related to the captured visual imagery from the system. The choice among still images, video sequences, or a combination of the two, the instants at which to capture the visual imagery, and the durations for which to capture video sequences may be determined by the system based on various system parameters. System parameters used for capturing the visual imagery may include absolute time, a periodic timer event, or environmental factors (e.g., ambient lighting, motion in the visual scene, or motion of the camera) and the like.
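The capture decision in this mode might be sketched as follows; the threshold values, parameter names, and the particular rules are assumptions chosen for illustration, not prescribed behavior.

    # Hypothetical sketch of the zero-input capture decision: the
    # system, not the user, decides when to capture, based on a
    # periodic timer and environmental factors.

    PERIOD_MS = 2000          # periodic timer interval (assumed value)
    MOTION_THRESHOLD = 0.25   # normalized motion level (assumed value)

    def should_capture(ms_since_last, motion_level, ambient_light):
        if ambient_light < 0.05:
            return False      # too dark to yield usable imagery
        if ms_since_last >= PERIOD_MS:
            return True       # periodic timer event fired
        return motion_level >= MOTION_THRESHOLD   # scene/camera motion

    print(should_capture(2500, 0.10, 0.8))   # True: timer elapsed
    print(should_capture(500, 0.40, 0.8))    # True: motion detected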
The system identifies and provides information services related to the visual imagery, which are then presented automatically without any user input. Optionally, the provided information services may be presented in the form of graphical marks or icons, as an overlay on the visual imagery presented in the viewfinder. From the user's perspective, as the user scans the visual scene with the cameraphone 1120, the user may be presented with an augmented version of the visual imagery captured by the camera on the viewfinder.
In some embodiments, visual imagery and associated system and environmental information may be captured using a cameraphone 1120 and used to generate a request for related information services without any user input. The request may be communicated to a remote system server 1160 over a communication network 1140. The system server identifies and generates the related information services and communicates the information services to the cameraphone for presentation on the cameraphone user interface. The information services may then be presented as an augmentation of the visual imagery captured earlier. In some embodiments, the information services may be presented as an augmentation of visual imagery being captured and presented live on the viewfinder, with appropriate processing to account for any cameraphone motion, i.e., compensation for global motion in the visual imagery caused by camera motion. In some embodiments, the information services may be presented independent of the visual imagery.
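The global-motion compensation mentioned above can be reduced, in the simplest purely translational case, to shifting overlay coordinates by the estimated displacement; the sketch below assumes such a two-dimensional translation and is not a complete motion-estimation scheme.

    # Hypothetical sketch: an overlay anchored on earlier imagery is
    # shifted by the estimated global (camera-induced) motion so that
    # it stays registered with the live viewfinder imagery.

    def compensate(overlay_xy, global_motion_xy):
        # global_motion_xy: estimated displacement, in pixels, of the
        # scene between capture time and display time.
        ox, oy = overlay_xy
        dx, dy = global_motion_xy
        return (ox + dx, oy + dy)

    # An icon placed at (40, 30) on the captured frame is drawn at
    # (46, 27) after the scene shifted by (6, -3) pixels.
    print(compensate((40, 30), (6, -3)))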
In another embodiment using a zero-input mode of operation, the user retrieves and plays back visual imagery stored in cameraphone 1120 or in other components of the system. Upon playback, the cameraphone 1120 automatically selects still images, video sequences, or a combination thereof and requests related information services from the system. The related information services provided by the system are then presented to the user on the cameraphone 1120. Optionally, the information services may be presented such that they are integrated with the played-back visual imagery for an augmented reality experience.
In this mode of operation, the capture of visual imagery and the requesting of information services related to the visual imagery do not require any explicit user inputs, i.e., it is a zero-input mode of operation.
Accelerated User Input
In some embodiments, the user may provide inputs that accelerate the process of providing information services related to visual imagery. In some embodiments, these accelerated user inputs may represent shortcuts to system operations that may otherwise be performed using a plurality of user inputs and operation steps. In some embodiments, these additional inputs may be provided in the final step of the modes of operation described above, such that the system may provide information services accounting for the accelerated user input. In some embodiments, these additional inputs may also be provided after a user is presented the information services, so as to help accelerate the process of interacting with the information services presented, e.g., to limit the information services presented to those from a specific source or database.
The user may perform this additional input by clicking or click holding on one of a plurality of keys integrated into the cameraphone 1120, where each key may be assigned to a particular source or type of information services. For instance, the user may click on a graphical soft button on the display named “WWW” to request related information services only from the World Wide Web. In another embodiment, the user, after capturing the visual imagery, may click a specific key on the device, say a key marked “2,” to request information services related to shopping.
In these operations, the system searches or queries only specific databases or knowledge bases as defined in the system, filters the identified information services from them as per the user input, and presents the user with a list of related information services. In some embodiments, a plurality of sources of information services may be mapped to each key. In some embodiments, the user may click on a plurality of the keys simultaneously or in a specific order to select a plurality of sources or types of information services. Also, in other embodiments, the functionality described above for keys integrated into the cameraphone 1120 may be offered by widgets in the user interface. In some embodiments, the functionality of the keys may be implemented using the speech- or motion-based inputs described earlier.
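A compact, hypothetical sketch of this key-to-source mapping and filtering follows; the key labels, source names, and sample services are fabricated for illustration.

    # Hypothetical sketch of accelerated input: each key maps to one
    # or more sources of information services, and the presented list
    # is filtered to the sources selected by the key(s) pressed.

    KEY_TO_SOURCES = {
        "WWW": {"web"},              # soft button: Web-only results
        "2":   {"shopping"},         # key "2": shopping services
        "3":   {"news", "web"},      # a key may map to several sources
    }

    def filter_services(services, keys_pressed):
        # Union of the sources selected by all keys pressed, whether
        # simultaneously or in a specific order.
        selected = set().union(*(KEY_TO_SOURCES[k] for k in keys_pressed))
        return [s for s in services if s["source"] in selected]

    services = [
        {"source": "web",      "title": "Product home page"},
        {"source": "shopping", "title": "Price comparison"},
    ]
    print(filter_services(services, ["2"]))   # shopping results only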
These accelerated user inputs may provide access to features of the system that otherwise may require multiple user inputs in order to achieve the same results. For instance, in some embodiments, accelerated input options may be available for the commands available in the menus or user preference settings.
Multiple Facets of System Operation
In some embodiments, the system may feature multiple facets of operation. The facets enable a user to select between subsets of features of the system. For instance, a specific facet may feature only a subset of the information services provided as related to visual imagery. In other embodiments, a specific facet may feature only a subset of the menu commands available for use. Other examples of facets that may be supported by embodiments include a one-step mode of operation, two-step mode of operation, three-step mode of operation, zero-input mode of operation, audio-supported mode of operation, muted-audio mode of operation, and augmented user interface mode of operation. In embodiments supporting multiple facets, users may select one among the available set of facets for access to the features of the selected facet. This enables users to use facets, i.e., feature sets, appropriate for various use scenarios.
Users may switch between different facets of operation of the system using appropriate user interface elements. For instance, in some embodiments, users may select a specific facet by using a specific input component (e.g., by clicking on a specific key on the key pad) or by activating a specific widget in the user interface (e.g., by selecting and activating a specific icon in the user interface).
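Facet selection might be modeled as in the sketch below; the facet names and feature sets are hypothetical examples of the subsets an embodiment could expose.

    # Hypothetical sketch of facets: each facet exposes only a subset
    # of the system's features, and the user switches facets via an
    # input component or a user-interface widget.

    FACETS = {
        "one-step":   {"capture", "request_services"},
        "three-step": {"capture", "request_options", "request_services"},
        "muted":      {"capture", "request_services"},  # audio disabled
    }

    class System:
        def __init__(self, facet="one-step"):
            self.facet = facet          # the currently selected facet
        def select_facet(self, facet):  # e.g., bound to a key or widget
            self.facet = facet
        def is_enabled(self, feature):
            return feature in FACETS[self.facet]

    system = System()
    system.select_facet("three-step")
    print(system.is_enabled("request_options"))   # True in this facet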
In some embodiments, information services may be generated from content available in the World Wide Web. This content is identified and obtained by searching the Web for Web pages with related content. The presentation of such information services may include one or more snippets of the content from the identified Web pages as representative of the content available in its entirety in the Web pages. Such snippets may be generated in real time from the Web sites at the time of the request for information services, or may be previously fetched and stored in the system.
In addition, the information presented may optionally include a headline before the snippets, a partial or complete URL of the Web page, and hyperlinks to the source Web pages. The headline may be derived from the title of the associated Web page or synthesized by interpreting or summarizing the content available in the Web page. The title or the URL may optionally be hyperlinked to the Web page. The hyperlinks embedded in the information presented enable users to view the Web pages in their entirety if necessary. The user may optionally activate the hyperlink to request the presentation of the Web page in its entirety in a Web browser or on the content view itself.
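The headline, snippet, and URL presentation described above might be rendered as in the following sketch, where the page data is a stub and the truncation rule is an assumption:

    # Hypothetical sketch of presenting a Web-derived information
    # service: a headline from the page title, a snippet of the page
    # content, and the source URL (which may be hyperlinked).

    def render_result(page, snippet_length=80):
        headline = page["title"]        # or a synthesized summary
        snippet = page["body"][:snippet_length] + "..."
        return "%s\n%s\n%s" % (headline, snippet, page["url"])

    page = {
        "title": "Example Product Review",
        "url": "http://www.example.com/review",
        "body": "This page reviews the product shown in the image, "
                "covering price, availability, and user opinions.",
    }
    print(render_result(page))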
Computer system 4100 includes a bus 4102 or other communication mechanism for communicating information, which interconnects subsystems and devices, such as processor 4104, system memory 4106 (e.g., RAM), storage device 4108 (e.g., ROM), disk drive 4110 (e.g., magnetic or optical), communication interface 4112 (e.g., modem or Ethernet card), display 4114 (e.g., CRT or LCD), input device 4116 (e.g., keyboard), and cursor control 4118 (e.g., mouse or trackball).
According to some embodiments, computer system 4100 performs specific operations by processor 4104 executing one or more sequences of one or more instructions stored in system memory 4106. Such instructions may be read into system memory 4106 from another computer-readable medium, such as static storage device 4108 or disk drive 4110. In some embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the system.
The term “computer-readable medium” refers to any medium that participates in providing instructions to processor 4104 for execution. Such a medium may take many forms, including but not limited to, nonvolatile media, volatile media, and transmission media. Nonvolatile media includes, for example, optical or magnetic disks, such as disk drive 4110. Volatile media includes dynamic memory, such as system memory 4106. Transmission media includes coaxial cables, copper wire, and fiber optics, including wires that comprise bus 4102. Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.
Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave, or any other medium from which a computer may read.
In some embodiments, execution of the sequences of instructions to practice the system is performed by a single computer system 4100. According to some embodiments, two or more computer systems 4100 coupled by communication link 4120 (e.g., LAN, PSTN, or wireless network) may perform the sequence of instructions to practice the system in coordination with one another. Computer system 4100 may transmit and receive messages, data, and instructions, including programs, i.e., application code, through communication link 4120 and communication interface 4112. Received program code may be executed by processor 4104 as it is received, and/or stored in disk drive 4110 or other nonvolatile storage for later execution.
This description of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form described, and many modifications and variations are possible in light of the teaching above. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications. This description will enable others skilled in the art to best utilize and practice the invention in various embodiments and with various modifications as are suited to a particular use. The scope of the invention is defined by the following claims.
This application claims the benefit of U.S. provisional patent application 60/715,529, filed Sep. 9, 2005, and is a continuation-in-part of U.S. patent application Ser. No. 11/215,601, filed Aug. 30, 2005, which claims the benefit of U.S. provisional patent application 60/606,282, filed Aug. 31, 2004. These applications are incorporated by reference along with all other references cited in this application.
Related U.S. Application Data

Provisional applications: 60/715,529, filed Sep. 2005 (US); 60/606,282, filed Aug. 2004 (US).

Continuation in part of parent application Ser. No. 11/215,601, filed Aug. 2005 (US); child application Ser. No. 11/530,449, filed Sep. 2006 (US).