The claimed subject matter is described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the subject innovation. It may be evident, however, that the claimed subject matter may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing the subject innovation.
As utilized herein, the terms “component,” “system,” “interface,” “device,” “API,” and the like are intended to refer to a computer-related entity, either hardware, software (e.g., in execution), firmware, or a combination thereof. For example, a component can be a process running on a processor, a processor, an object, an executable, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components can reside within a process, and a component can be localized on one computer and/or distributed between two or more computers.
Furthermore, the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter. The term “article of manufacture” as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. For example, computer-readable media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips . . . ), optical disks (e.g., compact disk (CD), digital versatile disk (DVD) . . . ), smart cards, and flash memory devices (e.g., card, stick, key drive . . . ). Additionally, it should be appreciated that a carrier wave can be employed to carry computer-readable electronic data such as those used in transmitting and receiving electronic mail or in accessing a network such as the Internet or a local area network (LAN). Of course, those skilled in the art will recognize many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter. Moreover, the word “exemplary” is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs.
Now turning to the figures,
For instance, the immersed view can provide geographic data for presentation in a manner such that orientation is maintained between the aerial data (e.g., map data) and the ground-level perspective. Moreover, such presentation of data is user-friendly and comprehensible based at least in part upon employing a ground-level orientation paradigm. Thus, the ground-level perspective can be dependent upon a location and/or starting point associated with the aerial data. For example, an orientation icon can be utilized to designate a location related to the aerial data (e.g., aerial map), where such orientation icon can be the basis of providing the perspective for the ground-level view. In other words, an orientation icon can be pointing in the north direction on the aerial data, while the ground-level view can be a first-person view of street-side imagery looking in the north direction. As discussed below, the orientation icon can be any suitable display icon such as, but not limited to, an automobile, a bicycle, a person, a graphic, an arrow, an all-terrain vehicle, a motorcycle, a van, a truck, a boat, a ship, a space ship, a bus, a plane, a jet, a unicycle, a skateboard, a scooter, a self-balancing human transporter, and any suitable orientation icon that can provide a direction and/or orientation associated with aerial data.
In one example, the receiver component 104 can receive aerial data related to a city and a starting location (e.g., default and/or input), such that the interface component 102 can generate at least two portions. The first portion can relate to map data (e.g., such as aerial data and/or any suitable data related to a map), such as a satellite aerial view of the city including an orientation icon, wherein the orientation icon can indicate the starting location. The second portion can be a ground-level view of street-side imagery with a first-person and/or third-person perspective associated with the orientation icon. Thus, if the first portion contains the orientation icon on an aerial map at a starting location at the intersection of Main St. and W. 47th St., facing east, the second portion can display a first-person view of street-side imagery facing east at the intersection of Main St. and W. 47th St. at and/or near ground level (e.g., eye-level for a typical user). By utilizing this easy-to-comprehend ground-level orientation paradigm, a user can continuously receive first-person and/or third-person perspective data based on map data without disorientation.
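By way of a non-limiting illustration, the following sketch shows one way the two portions could be generated from a starting location and heading; the names (OrientationIcon, ImmersedView, imagery_lookup) and the coordinates are hypothetical and are used only for illustration.

```python
from dataclasses import dataclass

@dataclass
class OrientationIcon:
    # Position on the aerial map and a compass heading in degrees (0 = north, 90 = east).
    lat: float
    lon: float
    heading_deg: float

@dataclass
class ImmersedView:
    # First portion: aerial/map data identified by a tile or region key.
    aerial_region: str
    icon: OrientationIcon
    # Second portion: ground-level imagery looking in the icon's heading.
    ground_image_uri: str

def build_immersed_view(region, icon, imagery_lookup):
    """Generate both portions for the icon's current location and heading."""
    ground_uri = imagery_lookup(icon.lat, icon.lon, icon.heading_deg)
    return ImmersedView(aerial_region=region, icon=icon, ground_image_uri=ground_uri)

# Illustrative: an icon facing east (heading 90 degrees) at a starting intersection.
icon = OrientationIcon(lat=41.0000, lon=-81.5000, heading_deg=90.0)
view = build_immersed_view(
    "city_tile_12", icon,
    lambda lat, lon, h: "street/%.4f_%.4f_%03d.jpg" % (lat, lon, int(h)))
print(view.ground_image_uri)   # street/41.0000_-81.5000_090.jpg
```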
In another example, map data (e.g., aerial data and/or any suitable data related to a map) associated with a planetary surface, such as Mars, can be utilized by the interface component 102. A user can then utilize the orientation icon to maneuver about the surface of the planet Mars based on the location of the orientation icon and a particular direction associated therewith. In other words, the interface component 102 can provide a first portion indicating a location and direction (e.g., utilizing the orientation icon), while the second portion can provide a first-person and/or third-person, ground-level view of imagery. It is to be appreciated that as the orientation icon is moved about the aerial data, the first-person and/or third-person, ground-level view corresponds therewith and can be continuously updated.
In accordance with another aspect of the claimed subject matter, the interface component 102 can maintain a ground-level direction and/or route associated with at least a portion of a road, a highway, a street, a path, a course of direction, etc. In other words, the interface component 102 can utilize a road/route snapping feature, wherein regardless of the input for a location, the orientation icon will maintain a course on a road, highway, street, path, etc. while still providing first-person and/or third-person ground-level imagery based on such snapped/designated course of the orientation icon. For instance, the orientation icon can be snapped and/or designated to follow a particular course of directions such that regardless of input, the orientation icon will only follow designated roads, paths, streets, highways, and the like.
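As a non-limiting illustration of such a road/route snapping feature, the sketch below projects an arbitrary input location onto the nearest point of a road polyline; the geometry and the snap_to_road name are assumptions for illustration only, not the disclosed implementation.

```python
def snap_to_road(point, road):
    """Project a point onto the nearest segment of a road polyline.

    `point` is (x, y); `road` is a list of (x, y) vertices. Returns the
    snapped (x, y) position, so the orientation icon stays on the road
    regardless of where the input lands.
    """
    best, best_d2 = road[0], float("inf")
    for (x1, y1), (x2, y2) in zip(road, road[1:]):
        dx, dy = x2 - x1, y2 - y1
        seg_len2 = dx * dx + dy * dy or 1e-12
        # Parameter t of the projection, clamped to the segment endpoints.
        t = max(0.0, min(1.0, ((point[0] - x1) * dx + (point[1] - y1) * dy) / seg_len2))
        px, py = x1 + t * dx, y1 + t * dy
        d2 = (point[0] - px) ** 2 + (point[1] - py) ** 2
        if d2 < best_d2:
            best, best_d2 = (px, py), d2
    return best

# A click well off the road still yields a position on the road geometry.
road = [(0, 0), (10, 0), (10, 10)]
print(snap_to_road((4, 3), road))   # -> (4.0, 0.0)
```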
Moreover, the system 100 can include any suitable and/or necessary presentation component (not shown and discussed infra), which provides various adapters, connectors, channels, communication paths, etc. to integrate the interface component 102 into virtually any operating and/or database system(s). In addition, the presentation component can provide various adapters, connectors, channels, communication paths, etc., that provide for interaction with the interface component 102, receiver component 104, the immersed view, and any other device, user, and/or component associated with the system 100.
The system 200 can further include a data store 204 that can include any suitable data related to the system 200. For example, the data store 204 can include any suitable geographic data such as, but not limited to, 2-dimensional geographic data, 3-dimensional geographic data, aerial data, street-side imagery (e.g., first-person perspective and/or third-person perspective), ground-level imagery, planetary data, planetary ground-level imagery, satellite data, digital data, images related to a geographic location, orthographic map data, scenery data, map data, street map data, hybrid data related to geography data (e.g., road data and/or aerial imagery), topology photography, geographic photography, user settings, user preferences, configurations, graphics, templates, orientation icons, orientation icon skins, data related to road/route snapping features, and any suitable data related to maps, geography, and/or outer space.
It is to be appreciated that the data store 204 can be, for example, either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. By way of illustration, and not limitation, nonvolatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM). The data store 204 of the subject systems and methods is intended to comprise, without being limited to, these and any other suitable types of memory. In addition, it is to be appreciated that the data store 204 can be a server, a database, a hard drive, and the like.
The interface component 102 can provide data related to the first portion and second portion to an application programming interface (API) 302. In other words, the interface component 102 can create and/or generate an immersed view including the first portion and the second portion for employment in a disparate environment, system, device, network, and the like. For example, the receiver component 104 can receive data and/or an input across a first machine boundary, while the interface component 102 can create and/or generate the immersed view and transmit such data to the API 302 across a second machine boundary. The API 302 can then receive such immersed view and provide any manipulations, configurations, and/or adaptations to allow such immersed view to be displayed on an entity 304. It is to be appreciated that the entity 304 can be a device, a PC, a pocket PC, a tablet PC, a website, the Internet, a mobile communications device, a smartphone, a portable digital assistant (PDA), a hard disk, an email, a document, a component, a portion of software, an application, a server, a network, a TV, a monitor, a laptop, any suitable entity capable of displaying data, etc.
In one example, a user can utilize the Internet to provide a starting address and an ending address associated with a particular portion of map data (e.g., aerial data and/or any suitable data related to a map). The interface component 102 can create the immersed view based on the particular starting and ending addresses, wherein the API component 302 can format such immersed view for the particular entity 304 to display (e.g. a browser, a monitor, etc.). Thus, the system 300 can provide the immersed view to any entity that is capable of displaying data to facilitate providing directions, exploration, and the like in relation to geographic data.
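The adaptation that the API 302 performs for a particular entity 304 could, for example, resemble the following sketch; the preset layouts, pixel sizes, and the format_for_entity name are illustrative assumptions rather than disclosed values.

```python
def format_for_entity(view: dict, entity: str) -> dict:
    """Adapt an immersed view payload for the displaying entity (assumed presets)."""
    presets = {
        "browser":    {"map_px": (800, 600), "street_px": (1200, 400)},
        "smartphone": {"map_px": (320, 240), "street_px": (320, 180)},
        "pda":        {"map_px": (240, 160), "street_px": (240, 120)},
    }
    # Fall back to the browser layout for entities without a dedicated preset.
    layout = presets.get(entity, presets["browser"])
    return {"view": view, "layout": layout, "entity": entity}

payload = format_for_entity({"start": "Main St & W. 47th St", "end": "Elm Ave"},
                            "smartphone")
print(payload["layout"])   # {'map_px': (320, 240), 'street_px': (320, 180)}
```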
Corresponding to the orientation icon 404 can be at least one first-person view and/or third-person view of ground-level imagery in a perspective consistent with a ground-level orientation paradigm. It is to be appreciated that although the term “ground-level” is utilized, the claimed subject matter covers any variation thereof such as sea-level, planet-level, ocean-floor level, a designated height in the air, a particular coordinate, etc. A second portion (e.g., divided into three sections) can include the respective and corresponding first-person view and/or third-person view of ground-level imagery. Thus, a first section 406 can illustrate the direction A to display first-person and/or third-person perspective ground-level imagery respective to the position of the orientation icon 404 (e.g., the north direction); a second section 408 can illustrate the direction B to display first-person and/or third-person perspective ground-level imagery respective to the position of the orientation icon 404 (e.g., the west direction); and a third section 410 can illustrate the direction C to display first-person and/or third-person perspective ground-level imagery respective to the position of the orientation icon 404 (e.g., the east direction).
Although the generic user interface 400 illustrates three (3) first-person and/or third-person perspective views of ground-level imagery, it is to be appreciated that the user interface 400 can illustrate any suitable number of first-person and/or third-person views corresponding to the location of the orientation icon related to the map data (e.g., aerial data and/or any suitable data related to a map). However, to increase user friendliness and decrease user disorientation, three (3) views can be an ideal number to mirror a user's real-life perspective. For instance, while walking, a user tends to utilize a straight-ahead view and corresponding peripheral vision (e.g., left and right side views). Thus, the generic user interface 400 mimics the real-life perspective and views of a typical human being.
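One possible way to derive the three section directions from the orientation icon's heading is sketched below; the function name and the plus/minus 90-degree peripheral offsets are illustrative assumptions.

```python
def section_headings(icon_heading_deg: float) -> dict:
    """Derive the three ground-level view directions from the icon's heading.

    The straight-ahead section looks along the icon's heading; the left and
    right sections approximate peripheral vision at +/- 90 degrees.
    """
    return {
        "ahead": icon_heading_deg % 360,
        "left":  (icon_heading_deg - 90) % 360,
        "right": (icon_heading_deg + 90) % 360,
    }

# Icon facing north (0 degrees): ahead = north, left = west, right = east,
# matching sections 406, 408, and 410 described above.
print(section_headings(0.0))   # {'ahead': 0.0, 'left': 270.0, 'right': 90.0}
```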
It is to be appreciated that the screen shot 500 is solely for exemplary purposes and the claimed subject matter is not so limited. For example, the orientation icon can be any suitable icon that can depict a particular location and at least one direction on the aerial data. As stated earlier, the orientation icon can be, but is not limited to being, an automobile, a bicycle, a person, a graphic, an arrow, an all-terrain vehicle, a motorcycle, a van, a truck, a boat, a ship, a space ship, a bus, a plane, a jet, a unicycle, a skateboard, a scooter, a self-balancing human transporter, etc. Moreover, the aerial data depicted is hybrid data (satellite imagery with road/street/highway/path graphic overlay) but can be any suitable aerial data such as, but not limited to, aerial graphics, any suitable data related to a map, 2-D graphics, 2-D satellite imagery (e.g., or any suitable photography to depict an aerial view), 3-D graphics, 3-D satellite imagery (e.g., or any suitable photography to depict an aerial view), geographic data, etc. Furthermore, the skin can be any suitable skin that relates to the particular orientation icon. For example, if the orientation icon is a jet, the skin can replicate the cockpit of a jet.
It is to be appreciated that although the user interface depicts aerial data associated with a first-person view from an automobile, the claimed subject matter is not so limited. In one particular example, the aerial data can be related to the planet Earth. The orientation icon can be a plane, where the first-person views can correspond to a particular location associated with the orientation icon such that the views simulate the views from the plane as if traveling over such location.
It is to be understood that the intelligent component 602 can provide for reasoning about or infer states of the system, environment, and/or user from a set of observations as captured via events and/or data. Inference can be employed to identify a specific context or action, or can generate a probability distribution over states, for example. The inference can be probabilistic—that is, the computation of a probability distribution over states of interest based on a consideration of data and events. Inference can also refer to techniques employed for composing higher-level events from a set of events and/or data. Such inference results in the construction of new events or actions from a set of observed events and/or stored event data, whether or not the events are correlated in close temporal proximity, and whether the events and data come from one or several event and data sources. Various classification (explicitly and/or implicitly trained) schemes and/or systems (e.g. support vector machines, neural networks, expert systems, Bayesian belief networks, fuzzy logic, data fusion engines . . . ) can be employed in connection with performing automatic and/or inferred action in connection with the claimed subject matter.
A classifier is a function that maps an input attribute vector, x=(x1, x2, x3, x4, . . . , xn), to a confidence that the input belongs to a class, that is, f(x)=confidence(class). Such classification can employ a probabilistic and/or statistical-based analysis (e.g., factoring into the analysis utilities and costs) to prognose or infer an action that a user desires to be automatically performed. A support vector machine (SVM) is an example of a classifier that can be employed. The SVM operates by finding a hypersurface in the space of possible inputs, which hypersurface attempts to split the triggering criteria from the non-triggering events. Intuitively, this makes the classification correct for testing data that is near, but not identical to, training data. Other directed and undirected model classification approaches that can be employed include, e.g., naive Bayes, Bayesian networks, decision trees, neural networks, fuzzy logic models, and probabilistic classification models providing different patterns of independence. Classification as used herein also is inclusive of statistical regression that is utilized to develop models of priority.
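As a minimal sketch of employing an SVM classifier to infer a desired action from observed events, the example below uses scikit-learn; the features (e.g., drag speed and distance from the nearest road) and labels are invented for illustration and are not part of the disclosure.

```python
from sklearn.svm import SVC

# Each attribute vector x captures observations; the label is the action the
# user likely wants (illustrative: 0 = keep snapping to the road, 1 = free move).
X = [[0.1, 2.0], [0.2, 1.5], [0.3, 3.0], [3.0, 40.0], [2.5, 35.0], [2.8, 30.0]]
y = [0, 0, 0, 1, 1, 1]

clf = SVC(kernel="rbf").fit(X, y)   # finds the separating hypersurface

x_new = [[0.15, 2.5]]
print(clf.predict(x_new))             # inferred action for the new observation
print(clf.decision_function(x_new))   # signed distance to the hypersurface (confidence proxy)
```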
The interface component 102 can further utilize a presentation component 604 that provides various types of user interfaces to facilitate interaction between a user and any component coupled to the interface component 102. As depicted, the presentation component 604 is a separate entity that can be utilized with the interface component 102. However, it is to be appreciated that the presentation component 604 and/or similar view components can be incorporated into the interface component 102 and/or a stand-alone unit. The presentation component 604 can provide one or more graphical user interfaces (GUIs), command line interfaces, and the like. For example, a GUI can be rendered that provides a user with a region or means to load, import, read, etc., data, and can include a region to present the results of such. These regions can comprise known text and/or graphic regions comprising dialogue boxes, static controls, drop-down menus, list boxes, pop-up menus, edit controls, combo boxes, radio buttons, check boxes, push buttons, and graphic boxes. In addition, utilities to facilitate the presentation, such as vertical and/or horizontal scroll bars for navigation and toolbar buttons to determine whether a region will be viewable, can be employed. For example, the user can interact with one or more of the components coupled and/or incorporated into the interface component 102.
The user can also interact with the regions to select and provide information via various devices such as a mouse, a roller ball, a keypad, a keyboard, a pen, and/or voice activation, for example. Typically, a mechanism such as a push button or the enter key on the keyboard can be employed subsequent to entering the information in order to initiate the search. However, it is to be appreciated that the claimed subject matter is not so limited. For example, merely highlighting a check box can initiate information conveyance. In another example, a command line interface can be employed. For example, the command line interface can prompt the user for information via a text message on a display and/or an audio tone. The user can then provide suitable information, such as alpha-numeric input corresponding to an option provided in the interface prompt or an answer to a question posed in the prompt. It is to be appreciated that the command line interface can be employed in connection with a GUI and/or API. In addition, the command line interface can be employed in connection with hardware (e.g., video cards) and/or displays (e.g., black and white, and EGA) with limited graphic support, and/or low bandwidth communication channels.
Referring to
The claimed subject matter employs an intuitive user interface (e.g., an immersed view) for street-side imagery browsing centered around a ground-level orientation paradigm. By depicting street side imagery through the view of being inside a vehicle, the users are presented with a familiar context such as driving along a road and looking out the windows. In other words, the user instantly understands what they are seeing without any further explanation since the experience mimics that of riding in a vehicle and exploring the surrounding scenery. Along with the overall vehicle concept, there are various details of the immersed view, illustrated as an overview with screen shot 700.
The immersed view can include a mock vehicle interior with a left side window, center windshield, and right side window. The view displayed in the map is determined by the vehicle icon's position and orientation on the map relative to the road it is placed on. The vehicle can snap to 90-degree increments that are parallel or orthogonal to the road. The center windshield can show imagery from the direction toward which the nose of the vehicle is pointing. For instance, if the vehicle is oriented along the road, a front view of the road in the direction the car is pointing can be displayed.
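One possible implementation of snapping the vehicle to headings parallel or orthogonal to the road is sketched below; the function name and angle convention are assumptions made for illustration.

```python
def snap_heading_to_road(vehicle_heading_deg: float, road_angle_deg: float) -> float:
    """Snap the vehicle icon to the nearest heading parallel or orthogonal to
    the road (road angle, or road angle plus 90, 180, or 270 degrees)."""
    candidates = [(road_angle_deg + k * 90) % 360 for k in range(4)]

    def angular_diff(a, b):
        d = abs(a - b) % 360
        return min(d, 360 - d)

    return min(candidates, key=lambda c: angular_diff(c, vehicle_heading_deg % 360))

# A road running roughly north-east (40 degrees); a vehicle dropped at 100
# degrees snaps to the orthogonal direction, 130 degrees.
print(snap_heading_to_road(100.0, 40.0))   # 130.0
```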
Turning quickly to
Referring back to
Another option for setting the car orientation can be employed, such as using a direct gesture. A direct gesture can be utilized by clicking on the car and dragging the mouse while holding the mouse button. The dragging gesture can define a view direction from the car position, and the car orientation is set to face that direction. Such an interface is well suited for viewing specific targets. The user can click on the car and drag toward the desired target in the top view. The result is an image in the front view that shows the target.
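A minimal sketch of the direct gesture, assuming map coordinates with x increasing east and y increasing north, could compute the car heading from the drag vector as follows.

```python
import math

def heading_from_drag(car_xy, drag_xy) -> float:
    """Set the car orientation to face the dragged-to target.

    Coordinates are assumed with x increasing east and y increasing north;
    the result is a compass heading in degrees (0 = north, 90 = east).
    """
    dx = drag_xy[0] - car_xy[0]
    dy = drag_xy[1] - car_xy[1]
    return math.degrees(math.atan2(dx, dy)) % 360

# Clicking the car at (10, 10) and dragging toward a target at (20, 10)
# points the car due east, so the front view shows that target.
print(heading_from_drag((10, 10), (20, 10)))   # 90.0
```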
Another technique that can be implemented by the immersed view is direct manipulation in the car display. The view in the car display can be dragged. A drag to the left will rotate the car in a clockwise direction, while a drag in the opposite direction will turn the car in a counter-clockwise direction. This control is particularly attractive when the images displayed through the car windows form a full 360-degree, cylindrical, or spherical panorama. Moreover, it can also be applicable for separate images such as described herein. Another example is dragging along the vertical axis to tilt the view angle and scan a taller image, or even an image that spans the hemisphere around the car.
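The drag-to-rotate and drag-to-tilt behavior could be mapped as in the following sketch; the degrees-per-pixel gain and the pitch clamp are assumed tuning values, not disclosed ones.

```python
def apply_view_drag(heading_deg: float, pitch_deg: float,
                    dx_px: float, dy_px: float,
                    deg_per_px: float = 0.25) -> tuple:
    """Map a drag inside the car display to a rotation and a tilt.

    A drag to the left (negative dx) rotates the car clockwise; a vertical
    drag tilts the view angle, clamped to +/- 90 degrees.
    """
    new_heading = (heading_deg - dx_px * deg_per_px) % 360
    new_pitch = max(-90.0, min(90.0, pitch_deg + dy_px * deg_per_px))
    return new_heading, new_pitch

# Dragging 80 px to the left from a north-facing view rotates 20 degrees clockwise.
print(apply_view_drag(0.0, 0.0, dx_px=-80, dy_px=0))   # (20.0, 0.0)
```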
As discussed above, a snapping feature and/or technique can be employed to facilitate browsing aerial data and/or first-person perspective street-side imagery. It is to be appreciated that the snapping feature can be employed in an area that includes imagery data as well as in areas with no imagery data. The car cursor can be used to tour the area and view the street-level imagery. For instance, important images, such as those that are oriented in front of a house or other important landmark, can be explored. Users can prefer to see an image that captures most of a house, or in which a house is centered, rather than images that show only parts of a house. By snapping the car cursor to points that best view the houses on the street, fast and efficient browsing of the images is enabled. The snapping can be generated given information regarding the houses' footprints, or by detecting approximate footprints of the houses directly from the images (e.g., both the top view and the street-side images). Once the car is snapped to a house while dragging, or during fast driving, a correction to the car position can be generated by key input or slow dragging with the mouse. It is to be appreciated that the snapping feature can be employed in 2-D and/or 3-D space. In other words, the snapping feature can constrain the car to move along only the road geometry in the X, Y, and Z dimensions for the purpose of showing street-side imagery or video. The interface design is suitable for any media delivery mechanism. It is to be appreciated that the claimed subject matter is applicable to all forms of still imagery, stitched imagery, mosaic imagery, video, and/or 360-degree video.
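One heuristic (an assumption for illustration, not the disclosed algorithm) for snapping the car cursor to the imagery point that best frames a nearby house is sketched below.

```python
def snap_to_best_house_view(car_xy, imagery_points, house_centers, max_dist=30.0):
    """Snap the car cursor to the imagery capture point that best frames a house.

    `imagery_points` are (x, y) capture positions; `house_centers` are
    approximate footprint centroids. The heuristic prefers the capture point
    nearest the closest house while staying within `max_dist` of the cursor.
    """
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    nearby = [p for p in imagery_points if dist(p, car_xy) <= max_dist]
    if not nearby or not house_centers:
        return car_xy                      # nothing to snap to; keep the cursor
    nearest_house = min(house_centers, key=lambda h: dist(h, car_xy))
    return min(nearby, key=lambda p: dist(p, nearest_house))

points = [(0, 0), (5, 0), (10, 0)]         # imagery capture positions along a street
houses = [(6, 8)]                          # approximate house footprint centroid
print(snap_to_best_house_view((2, 1), points, houses))   # (5, 0)
```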
Moreover, the street-side concept directly enables various driving direction scenarios. For example, the claimed subject matter can allow a route to be described as an interconnection of roads and can automatically “play” the trip from start to end, displaying the street-side media in succession to simulate the trip from start point to end point along the designated route. It is to be understood that such aerial data and/or first-person and/or third-person street-side imagery can be in 2-D and/or 3-D. In general, it is to be appreciated that the aerial data need not be strictly aerial imagery, but can be any suitable data related to a map.
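A route “play” loop could be as simple as the following sketch; the imagery_lookup and show callables and the dwell time are illustrative placeholders.

```python
import time

def play_route(route_points, imagery_lookup, show, dwell_s=0.5):
    """Simulate the trip by stepping through the route and displaying the
    street-side media for each point in succession (names are illustrative)."""
    for lat, lon, heading in route_points:
        frame = imagery_lookup(lat, lon, heading)
        show(frame)
        time.sleep(dwell_s)   # pacing gives the video-like playback effect

route = [(41.00, -81.50, 90), (41.00, -81.49, 90), (41.00, -81.48, 90)]
play_route(route,
           imagery_lookup=lambda la, lo, h: "street/%.2f_%.2f_%d.jpg" % (la, lo, h),
           show=print, dwell_s=0.0)
```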
In accordance with another aspect of the subject innovation, the user interface can detect at least one image associated with a particular aerial location. For instance, a bounding box can be defined around the orientation icon (e.g., the car icon), then a meta-database of imagery points can be checked to find the closest image in that box. The box can be defined to be large enough to allow the user to have a buffer zone around the road so the car (e.g., orientation icon) does not have to be exactly on the road to bring up imagery.
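The bounding-box lookup against a meta-database of imagery points might resemble the following sketch, where the index structure and box size are assumptions.

```python
def closest_image_in_box(icon_xy, image_index, half_size=50.0):
    """Find the closest imagery point inside a bounding box around the icon.

    `image_index` is a hypothetical meta-database: a list of (x, y, uri)
    entries. The box acts as a buffer zone, so the car does not have to sit
    exactly on the road to bring up imagery.
    """
    x, y = icon_xy
    in_box = [(ix, iy, uri) for ix, iy, uri in image_index
              if abs(ix - x) <= half_size and abs(iy - y) <= half_size]
    if not in_box:
        return None
    return min(in_box, key=lambda e: (e[0] - x) ** 2 + (e[1] - y) ** 2)[2]

index = [(10, 10, "img_a.jpg"), (60, 5, "img_b.jpg"), (200, 200, "img_c.jpg")]
print(closest_image_in_box((15, 12), index))   # img_a.jpg
```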
Furthermore, the subject innovation can include a driving game-like experience through keyboard control. For example, a user can control the orientation icon (e.g., the car icon) using the arrow keys on a keyboard. The up arrow can indicate a “forward” movement, panning the map in the opposite direction that the car (e.g., icon) is facing. The down arrow can indicate a backward movement, panning the map in the same direction that the car is facing to move the car “backwards” on the map. The left and right arrow keys default to rotating the car to the left or right. The amount of rotation at each key press can be set from 90-degree jumps to a very fine angle (e.g., to simulate a smooth rotation). In one example, the shift key can be depressed to allow a user to “strafe” left or right, or move sideways. If the house-snapping feature is used, then a special strafe could be used to scroll to the next house along the road.
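A sketch of such keyboard control, with illustrative key bindings, step size, and rotation increment, is shown below.

```python
import math

def handle_key(key, state, step=5.0, rotate_deg=15.0, shift=False):
    """Update (x, y, heading_deg) for one key press (bindings are illustrative).

    Up/down pan along or against the heading; left/right rotate by default,
    or strafe sideways when shift is held, mirroring the game-like controls.
    """
    x, y, heading = state
    rad = math.radians(heading)
    fx, fy = math.sin(rad), math.cos(rad)            # forward unit vector (x east, y north)
    if key == "up":
        return (x + fx * step, y + fy * step, heading)
    if key == "down":
        return (x - fx * step, y - fy * step, heading)
    if key in ("left", "right"):
        if shift:                                    # strafe sideways instead of rotating
            sx, sy = (-fy, fx) if key == "left" else (fy, -fx)
            return (x + sx * step, y + sy * step, heading)
        delta = -rotate_deg if key == "left" else rotate_deg
        return (x, y, (heading + delta) % 360)
    return state

state = (0.0, 0.0, 0.0)              # car facing north
state = handle_key("up", state)      # forward 5 units to the north
state = handle_key("right", state)   # rotate 15 degrees clockwise
print(state)                         # (0.0, 5.0, 15.0)
```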
Furthermore, the snapping ability (e.g., feature and/or technique) allows the car (e.g., orientation icon) to “follow” the road. This is done by ascertaining the angle of the road at each point with imagery, then automatically rotating the car to align with that angle. When a user moves forward, the icon can land on the next point on the road and the process continues, providing a “stick to the road” experience even when the road curves.
Turning briefly to
At reference numeral 1604, an immersed view with a first portion of map data (e.g., aerial data and/or any suitable data related to a map) and a second portion of first-person and/or third-person perspective data can be generated. The immersed view can provide an efficient and intuitive interface for presenting map data and first-person and/or third-person perspective imagery. Thus, the second portion of the immersed view corresponds to a location identified on the map data. In addition, it is to be appreciated that the second portion of first-person and/or third-person perspective data can be partitioned into any suitable number of sections, wherein each section corresponds to a particular direction on the map data. Furthermore, the first portion and the second portion of the immersed view can be dynamically updated in real-time to provide exploration and navigation within the map data (e.g., aerial data and/or any suitable data related to a map) and the first-person and/or third-person imagery in a video-like experience.
At reference numeral 1606, an orientation icon can be utilized to identify a location associated with the map data (e.g. aerial). The orientation icon can be utilized to designate a location related to the map data (e.g., aerial map, aerial data, any data related to a map, normal rendered map, a 2-D map, etc.), where such orientation icon can be the basis of providing the perspective for the first-person and/or third-person view. In other words, an orientation icon can be pointing in the north direction on the aerial data, while the first-person and/or third-person view can be a ground-level, first-person and/or third-person perspective view of street-side imagery looking in the north direction. The orientation icon can be any suitable display icon such as, but not limited to, an automobile, a bicycle, a person, a graphic, an arrow, an all-terrain vehicle, a motorcycle, a van, a truck, a boat, a ship, a space ship, a bus, a plane, a jet, a unicycle, a skateboard, a scooter, a self-balancing human transporter, and any suitable orientation icon that can provide a direction and/or orientation associated with map data.
At reference numeral 1704, an immersed view including a first portion and a second portion can be generated. The first portion of the immersed view can include aerial data, while the second portion can include a first-person perspective based on a particular location associated with the aerial data. In addition, it is to be appreciated that the second portion can include any suitable number of sections that depict a first-person perspective in a specific direction on the aerial data. At reference numeral 1706, an orientation icon can be employed to identify a location on the aerial data. The orientation icon can identify a particular location associated with the aerial data and also allow movement to update/change the area on the aerial data and the first-person perspective view. As indicated above, the orientation icon can be any graphic and/or icon that indicates at least one direction and a location associated with the aerial data.
At reference numeral 1708, a snapping ability (e.g. feature and/or technique) can be utilized to maintain a course of travel. By employing the snapping ability, regardless of the input for a location, the orientation icon can maintain a course on a road, highway, street, path, etc. while still providing first-person ground-level imagery based on such snapped/designated course of the orientation icon. For instance, the orientation icon can be snapped and/or designated to follow a particular course of directions such that regardless of input, the orientation will only follow designated roads, paths, streets, highways, and the like. In other words, the snapping ability can be employed to facilitate browsing aerial data and/or first-person perspective street-side imagery.
At reference numeral 1710, at least one skin can be applied to the second portion of the immersed view. The skin can provide an interior appearance wrapped around at least a portion of the immersed view, wherein the skin corresponds to at least an interior aspect of the representative orientation icon. For example, when the orientation icon is a car icon, the skin can be a graphical representation of the inside of a car (e.g., steering wheel, gauges, dashboard, etc.). For example, the skin can be at least one of the following: an automobile interior skin; a sports car interior skin; a motorcycle first-person perspective skin; a person-perspective skin; a bicycle first-person perspective skin; a van interior skin; a truck interior skin; a boat interior skin; a submarine interior skin; a space ship interior skin; a bus interior skin; a plane interior skin; a jet interior skin; a unicycle first-person perspective skin; a skateboard first-person perspective skin; a scooter first-person perspective skin; and a self-balancing human transporter first-person perspective skin.
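A simple mapping from orientation icon type to skin could look like the following sketch; the asset names are assumptions made for illustration.

```python
# Illustrative mapping from orientation icon type to the skin wrapped around
# the second portion; the asset names are assumed, not disclosed values.
SKINS = {
    "car":        "automobile_interior.png",
    "sports_car": "sports_car_interior.png",
    "jet":        "jet_cockpit.png",
    "boat":       "boat_interior.png",
    "bicycle":    "bicycle_first_person.png",
}

def skin_for_icon(icon_type: str) -> str:
    # Fall back to a generic first-person skin for icons without a bespoke asset.
    return SKINS.get(icon_type, "person_perspective.png")

print(skin_for_icon("jet"))      # jet_cockpit.png
print(skin_for_icon("scooter"))  # person_perspective.png
```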
In order to provide additional context for implementing various aspects of the claimed subject matter,
Moreover, those skilled in the art will appreciate that the inventive methods may be practiced with other computer system configurations, including single-processor or multi-processor computer systems, minicomputers, mainframe computers, as well as personal computers, hand-held computing devices, microprocessor-based and/or programmable consumer electronics, and the like, each of which may operatively communicate with one or more associated devices. The illustrated aspects of the claimed subject matter may also be practiced in distributed computing environments where certain tasks are performed by remote processing devices that are linked through a communications network. However, some, if not all, aspects of the subject innovation may be practiced on stand-alone computers. In a distributed computing environment, program modules may be located in local and/or remote memory storage devices.
One possible communication between a client 1810 and a server 1820 can be in the form of a data packet adapted to be transmitted between two or more computer processes. The system 1800 includes a communication framework 1840 that can be employed to facilitate communications between the client(s) 1810 and the server(s) 1820. The client(s) 1810 are operably connected to one or more client data store(s) 1850 that can be employed to store information local to the client(s) 1810. Similarly, the server(s) 1820 are operably connected to one or more server data store(s) 1830 that can be employed to store information local to the servers 1820.
With reference to
The system bus 1918 can be any of several types of bus structure(s), including the memory bus or memory controller, a peripheral bus or external bus, and/or a local bus using any of a variety of available bus architectures including, but not limited to, Industrial Standard Architecture (ISA), Micro-Channel Architecture (MSA), Extended ISA (EISA), Intelligent Drive Electronics (IDE), VESA Local Bus (VLB), Peripheral Component Interconnect (PCI), Card Bus, Universal Serial Bus (USB), Advanced Graphics Port (AGP), Personal Computer Memory Card International Association bus (PCMCIA), Firewire (IEEE 1394), and Small Computer Systems Interface (SCSI).
The system memory 1916 includes volatile memory 1920 and nonvolatile memory 1922. The basic input/output system (BIOS), containing the basic routines to transfer information between elements within the computer 1912, such as during start-up, is stored in nonvolatile memory 1922. By way of illustration, and not limitation, nonvolatile memory 1922 can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory 1920 includes random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Computer 1912 also includes removable/non-removable, volatile/non-volatile computer storage media.
It is to be appreciated that
A user enters commands or information into the computer 1912 through input device(s) 1936. Input devices 1936 include, but are not limited to, a pointing device such as a mouse, trackball, stylus, touch pad, keyboard, microphone, joystick, game pad, satellite dish, scanner, TV tuner card, digital camera, digital video camera, web camera, and the like. These and other input devices connect to the processing unit 1914 through the system bus 1918 via interface port(s) 1938. Interface port(s) 1938 include, for example, a serial port, a parallel port, a game port, and a universal serial bus (USB). Output device(s) 1940 use some of the same types of ports as input device(s) 1936. Thus, for example, a USB port may be used to provide input to computer 1912, and to output information from computer 1912 to an output device 1940. Output adapter 1942 is provided to illustrate that there are some output devices 1940 like monitors, speakers, and printers, among other output devices 1940, which require special adapters. The output adapters 1942 include, by way of illustration and not limitation, video and sound cards that provide a means of connection between the output device 1940 and the system bus 1918. It should be noted that other devices and/or systems of devices provide both input and output capabilities such as remote computer(s) 1944.
Computer 1912 can operate in a networked environment using logical connections to one or more remote computers, such as remote computer(s) 1944. The remote computer(s) 1944 can be a personal computer, a server, a router, a network PC, a workstation, a microprocessor based appliance, a peer device or other common network node and the like, and typically includes many or all of the elements described relative to computer 1912. For purposes of brevity, only a memory storage device 1946 is illustrated with remote computer(s) 1944. Remote computer(s) 1944 is logically connected to computer 1912 through a network interface 1948 and then physically connected via communication connection 1950. Network interface 1948 encompasses wire and/or wireless communication networks such as local-area networks (LAN) and wide-area networks (WAN). LAN technologies include Fiber Distributed Data Interface (FDDI), Copper Distributed Data Interface (CDDI), Ethernet, Token Ring and the like. WAN technologies include, but are not limited to, point-to-point links, circuit switching networks like Integrated Services Digital Networks (ISDN) and variations thereon, packet switching networks, and Digital Subscriber Lines (DSL).
Communication connection(s) 1950 refers to the hardware/software employed to connect the network interface 1948 to the bus 1918. While communication connection 1950 is shown for illustrative clarity inside computer 1912, it can also be external to computer 1912. The hardware/software necessary for connection to the network interface 1948 includes, for exemplary purposes only, internal and external technologies such as, modems including regular telephone grade modems, cable modems and DSL modems, ISDN adapters, and Ethernet cards.
What has been described above includes examples of the subject innovation. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the claimed subject matter, but one of ordinary skill in the art may recognize that many further combinations and permutations of the subject innovation are possible. Accordingly, the claimed subject matter is intended to embrace all such alterations, modifications, and variations that fall within the spirit and scope of the appended claims.
In particular and in regard to the various functions performed by the above described components, devices, circuits, systems and the like, the terms (including a reference to a “means”) used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., a functional equivalent), even though not structurally equivalent to the disclosed structure, which performs the function in the herein illustrated exemplary aspects of the claimed subject matter. In this regard, it will also be recognized that the innovation includes a system as well as a computer-readable medium having computer-executable instructions for performing the acts and/or events of the various methods of the claimed subject matter.
In addition, while a particular feature of the subject innovation may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Furthermore, to the extent that the terms “includes,” and “including” and variants thereof are used in either the detailed description or the claims, these terms are intended to be inclusive in a manner similar to the term “comprising.”