Many applications and/or websites provide information through visual interfaces, such as maps. For example, a video game may display a destination for a user on a map; a running website may display running routes through a web map interface; a mobile map app may display driving directions on a road map; a realtor app may display housing information, such as images, sale prices, home value estimates, and/or other information on a map; etc. Such applications and/or websites may facilitate various types of user interactions with maps. In an example, a user may zoom in, zoom out, and/or rotate a viewing angle of a map. In another example, the user may mark locations within a map using pinpoint markers (e.g., create a running route using pinpoint markers along the route). However, such pinpoint markers may occlude a surface of the map.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key factors or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Among other things, one or more systems and/or techniques for populating a scene of a visual interface with a portal are provided herein. For example, a visual interface, depicting a scene, may be displayed. The scene may comprise a map, photography, a manipulatable object, a manipulatable space, a panorama, a rendering, an image, and/or any other type of visualization. In an example, a map service, remote to a client device, may provide visual information, such as mapping information, to the client device for display through the visual interface (e.g., the client device may display the visual interface through a map app, a map website, search results of a search charm, and/or other map interfaces that may connect to and/or consume mapping information from the map service such as by using map service APIs and/or remote HTTP calls). In an example, a client device (e.g., a mobile map app; a running map application executing on a personal computer; etc.) may provide the visual information for display through the visual interface, such as where the visual information corresponds to user information (e.g., imagery captured by the user; a saved driving route; a saved search result map; a personal running route map; etc.).
One or more points of interest, such as a first point of interest, within the scene may be identified (e.g., a doorway into a restaurant depicted by a downtown scene of a city). For example, the first point of interest may be identified based upon availability of imagery for the first point of interest (e.g., users may have captured and shared photography of the restaurant) and/or based upon the first point of interest corresponding to an entity (e.g., a business, a park, a building, a driving intersection, and/or other interesting content). The scene may be populated with portals corresponding to the one or more points of interest. For example, a first portal, corresponding to the first point of interest, may be populated within the scene (e.g., the first portal may have a relatively thin linear shape, such as a circle, having a semi-transparent perimeter that encompasses at least some of the first point of interest). Responsive to receiving focus input associated with the first portal (e.g., the first portal may be hovered over by a cursor; the visual interface may be panned such that the first portal encounters a trigger zone such as a center line/zone; etc.), the first portal may be hydrated with imagery associated with the first point of interest to create a first hydrated portal (e.g., a display property of a portal user interface element may be set to an image, photography, a panorama, a rendering, an interactive manipulatable object, an interactive manipulatable space, and/or any other visualization). For example, a visualization depicting the inside of the restaurant may be populated within the first portal. In this way, a user may preview the restaurant to decide whether to explore additional imagery and/or other aspects (e.g., advertisements, coupons, menu items, etc.) of the restaurant more deeply.
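The identification criteria above may be sketched as a simple filter: a candidate within the scene qualifies as a point of interest if imagery is available for it or if it corresponds to an entity. The function and field names below are illustrative assumptions, not part of any actual map service API.

```python
def identify_points_of_interest(candidates):
    """Return candidates eligible to receive portals: those with
    available imagery and/or those corresponding to an entity
    (e.g., a business, a park, a building)."""
    return [c for c in candidates
            if c.get("has_imagery") or c.get("entity_type") is not None]

# Hypothetical candidates within a downtown scene:
scene_candidates = [
    {"name": "restaurant doorway", "has_imagery": True, "entity_type": "business"},
    {"name": "blank wall", "has_imagery": False, "entity_type": None},
    {"name": "city park", "has_imagery": False, "entity_type": "park"},
]
points_of_interest = identify_points_of_interest(scene_candidates)
# The blank wall is filtered out; the doorway and park qualify.
```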
For example, responsive to receiving selection input associated with the first portal, the visual interface may be transitioned to a second scene associated with the first point of interest (e.g., the second scene may depict the inside of the restaurant). In this way, the user may freely navigate into buildings, underground such as into a subway, through walls, down a street, and/or other locations to experience frictionless traveling/viewing.
To the accomplishment of the foregoing and related ends, the following description and annexed drawings set forth certain illustrative aspects and implementations. These are indicative of but a few of the various ways in which one or more aspects may be employed. Other aspects, advantages, and novel features of the disclosure will become apparent from the following detailed description when considered in conjunction with the annexed drawings.
The claimed subject matter is now described with reference to the drawings, wherein like reference numerals are generally used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth to provide an understanding of the claimed subject matter. It may be evident, however, that the claimed subject matter may be practiced without these specific details. In other instances, structures and devices are illustrated in block diagram form in order to facilitate describing the claimed subject matter.
One or more techniques and/or systems for populating a scene of a visual interface with a portal are provided. For example, a scene may be populated with portals corresponding to points of interest of the scene (e.g., a park scene may correspond to a water fountain point of interest, a bird's nest point of interest, a jogging trail point of interest, etc.). A portal can generally have any shape and/or other properties (e.g., size, color, degree of translucency/transparency, etc.), and is not intended to be limited to the examples provided herein. A portal may be a circle, a square, a polygon, a rectangle, a rain drop, an adaptive shape that may change based upon a characteristic of a point of interest within the portal, etc. A portal may be semi-transparent and/or have a semi-transparent perimeter or border to delineate the portal from non-portal portions of the scene. The portal is thus discernable but does not occlude (or occludes to a relatively minor and/or variable degree) portions of the scene. A size of a portal may correspond to a ranking assigned to a point of interest by a search engine, such as a relatively larger size for a relatively high ranking point of interest (e.g., the Empire State Building for a search for sights to see in New York City) as compared to a relatively smaller size for a relatively lower ranking point of interest (e.g., a hotdog stand in New York City for a search for sights to see in New York City). A portal may comprise a graphical user interface element, such as a control object (e.g., an application object of an application, a web interface object of a website, and/or other programming object(s) that may be used to visually represent a point of interest), having various properties and/or functionality.
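The rank-to-size relationship described above may be sketched as a simple interpolation, with higher-ranked points of interest receiving larger portals. The pixel bounds and the linear falloff are illustrative assumptions rather than a prescribed mapping.

```python
def portal_size(rank, max_rank, min_px=24, max_px=96):
    """Map a search-engine ranking (1 = highest) to a portal diameter
    in pixels: rank 1 gets max_px, the lowest rank gets min_px."""
    if max_rank <= 1:
        return max_px
    # Linear interpolation between the two bounds.
    t = (rank - 1) / (max_rank - 1)
    return round(max_px - t * (max_px - min_px))

# The Empire State Building (rank 1) vs. a hotdog stand (rank 50 of 50):
assert portal_size(1, 50) == 96
assert portal_size(50, 50) == 24
```

Any monotonically decreasing mapping (e.g., logarithmic) would serve the same purpose of visually emphasizing higher-ranked points of interest.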
For example, a portal may comprise focus functionality, such that when a user hovers over the portal and/or otherwise interacts with the portal, a visual state of the portal is modified (e.g., becomes less translucent, is highlighted, undergoes a color change, is zoomed-in, is hydrated with imagery, etc.). A portal may comprise a selection functionality (e.g., a selection state/method) that triggers a transition of the user interface from displaying the scene to displaying a new scene corresponding to the point of interest.
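The focus and selection functionality of a portal control object may be sketched as follows. The class, property names, and opacity values are illustrative assumptions; an actual implementation would depend on the application's UI framework.

```python
class Portal:
    """Minimal sketch of a portal control object with focus and
    selection behavior: focus reduces translucency and hydrates
    the portal; selection transitions the visual interface."""

    def __init__(self, point_of_interest, opacity=0.2):
        self.point_of_interest = point_of_interest
        self.opacity = opacity      # mostly transparent at rest
        self.hydrated = False

    def on_focus(self):
        # Focus input makes the portal less translucent and
        # hydrates it with imagery for its point of interest.
        self.opacity = 0.9
        self.hydrated = True

    def on_select(self, visual_interface):
        # Selection input transitions the user interface to a new
        # scene corresponding to the point of interest.
        visual_interface.transition_to(self.point_of_interest)
```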
An embodiment of populating a scene of a visual interface with a portal is illustrated by an exemplary method 100 of
At 108, the scene may be populated with a first portal corresponding to the first point of interest. In an example, the scene may be populated with a plurality of portals corresponding to the one or more points of interest of the scene (e.g., a second portal for the second point of interest corresponding to the park, a third portal for the third point of interest corresponding to the gargoyle, etc.). In an example, the first portal comprises a semi-transparent perimeter that encompasses at least some of the first point of interest, which may mitigate occlusion of the scene (e.g., the first portal may have a relatively thin linear shape, such as a circle, which encompasses at least some of the museum front door and/or other portions of the front of the museum). Portals may or may not visually overlap within the scene (e.g., the first portal for the museum front door may overlap with the third portal for the gargoyle). Size, transparency, and/or display properties of portals may be modified, for example, based upon a point of interest density for the scene (e.g., portals may be displayed relatively smaller and/or more transparent if the scene is populated with a relatively large number of portals, which may mitigate occlusion of the scene) and/or based upon point of interest rankings (e.g., a web search engine may determine that the park has a relatively high rank based upon search queries and/or browsing history of users, and thus may display the second portal at a relatively large size).
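The density-based modification described above may be sketched as a shared scaling factor applied to portal size and opacity once the portal count passes a threshold. The threshold, scaling constants, and field names are illustrative assumptions.

```python
def adjust_portals_for_density(portals, base_size=64, base_opacity=0.8):
    """Shrink and fade portals as their count grows, to mitigate
    occlusion of the scene. Up to 5 portals are displayed at full
    size; beyond that, size and opacity scale down together."""
    n = len(portals)
    scale = 1.0 / (1.0 + 0.1 * max(0, n - 5))
    for p in portals:
        p["size"] = round(base_size * scale)
        p["opacity"] = round(base_opacity * scale, 2)
    return portals

# A sparse scene keeps full-size portals; a crowded one shrinks them.
sparse = adjust_portals_for_density([{} for _ in range(5)])
crowded = adjust_portals_for_density([{} for _ in range(15)])
```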
In an example, portals may be populated within the scene based upon time. For example, a temporal modification input may be received (e.g., a particular date, a time of day such as daylight or night, etc.). For example, the temporal modification input may correspond to 1978. Portals for points of interest that do not correspond to the temporal modification input may be removed (e.g., the second portal for the park may be removed because the park was not built until 1982). The scene may be populated with portals for one or more points of interest that correspond to the temporal modification input (e.g., a fourth portal for a fourth point of interest corresponding to a building that was in existence in 1978 may be displayed). In this way, points of interest may be exposed through portals based upon time.
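The temporal filtering described above may be sketched as selecting only points of interest in existence at the requested time. The field names (`built`, `demolished`) are hypothetical illustrations of whatever temporal metadata the points of interest carry.

```python
def portals_for_time(points_of_interest, year):
    """Keep only points of interest in existence at the given year;
    points without a demolition year are treated as still standing."""
    return [p for p in points_of_interest
            if p["built"] <= year <= p.get("demolished", float("inf"))]

pois = [
    {"name": "museum", "built": 1920},
    {"name": "park", "built": 1982},
]
# A temporal modification input of 1978 removes the park (built 1982):
visible_1978 = portals_for_time(pois, 1978)
visible_1990 = portals_for_time(pois, 1990)
```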
In an example where the visual interface corresponds to a kinetic map, portals may be displayed at a first scale and non-portal portions of the scene may be displayed at a collapsed scale smaller than the first scale (e.g.,
Portals may allow a user to preview, or “peek” into, a point of interest before committing to traveling through the visual interface to the point of interest. In an example, focus input associated with the first portal may be received (e.g., hover over input associated with the first portal; navigation input for the scene that places the first portal within a trigger zone such as a center zone/line; etc.). Responsive to the focus input, the first portal may be hydrated with imagery corresponding to the first point of interest to create a first hydrated portal. The first hydrated portal may comprise an image, a panorama, 3D imagery, a rendering, photography, a streetside view, an interactive manipulatable object (e.g., the user may open, close, turn a knob, and/or manipulate other aspects of the museum front door), an interactive manipulatable space, and/or other imagery depicting the front of the museum. In an example, a transparency property of the first hydrated portal may be adjusted (e.g., the transparency may be increased as the user hovers away from the first portal with a cursor or as the user pans the scene such that the first portal moves away from the trigger zone or is de-emphasized), which may mitigate occlusion as the user expresses increasing disinterest in the first point of interest (e.g., by panning away). In an example, the imagery may depict the first point of interest according to a portal orientation that corresponds to a scene orientation of the scene (e.g., the museum front door may be depicted from a viewpoint of the scene). In an example, the imagery within the first portal may be modified based upon a temporal modification input (e.g., imagery depicting the museum at night may be used to hydrate the first portal based upon a nighttime setting; imagery depicting the museum in 1992 may be used to hydrate the first portal based upon a 1992-1996 time range; etc.).
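The transparency adjustment described above may be sketched as a function of the portal's distance from the trigger zone: fully opaque while centered on the trigger zone, fading as the user pans away. The linear falloff and the trigger radius are illustrative assumptions.

```python
def hydration_opacity(distance_to_trigger, trigger_radius=100.0):
    """Return a hydrated portal's opacity given its distance (in
    scene units) from the trigger zone. Opacity falls off linearly,
    reaching fully transparent at the trigger radius."""
    t = min(max(distance_to_trigger / trigger_radius, 0.0), 1.0)
    return round(1.0 - t, 2)

assert hydration_opacity(0) == 1.0     # centered on the trigger zone
assert hydration_opacity(50) == 0.5    # halfway out: half transparent
assert hydration_opacity(200) == 0.0   # well past the trigger zone
```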
In an example, visual navigation between one or more portals populated within the scene may be facilitated. A user may “flip” through portals (e.g., a relatively large number of portals that may visually overlap) where a single portal is brought into focus (e.g., a size may be increased, a transparency may be decreased, the first portal may be brought to a front display position, etc.) one at a time to aid the user in distinguishing between points of interest. For example, for respective portals encountering a trigger zone of the visual interface (e.g., a portal overlapping the trigger zone above a threshold amount; a portal having a portal center point that is closer to a trigger zone than other portal center points of other portals are to the trigger zone; etc.), a portal may be hydrated while the portal encounters the trigger zone and may be dehydrated responsive to the portal no longer encountering the trigger zone. In an example, while hydrated, the portal may be displayed on top of one or more portals that overlap the portal.
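The center-point criterion above, where the portal whose center is closest to the trigger zone is the one brought into focus, may be sketched as a nearest-neighbor selection. The data layout is an illustrative assumption.

```python
def portal_in_focus(portals, trigger_center):
    """Pick the single portal to hydrate: the one whose center point
    is closest to the trigger zone's center. Returns None when the
    scene has no portals."""
    def dist_sq(p):
        dx = p["center"][0] - trigger_center[0]
        dy = p["center"][1] - trigger_center[1]
        return dx * dx + dy * dy
    return min(portals, key=dist_sq) if portals else None

portals = [
    {"name": "museum door", "center": (0, 0)},
    {"name": "gargoyle", "center": (10, 10)},
]
# Panning the trigger zone near (9, 9) brings the gargoyle into focus.
focused = portal_in_focus(portals, (9, 9))
```

As the scene is panned, re-running this selection and hydrating only the winner yields the one-at-a-time flipping behavior, with the hydrated portal drawn on top of any portals it overlaps.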
In an example, a story mode may be facilitated for points of interest within the scene (e.g.,
Navigation from the scene to other scenes corresponding to points of interest may be facilitated (e.g., a user may freely and/or frictionlessly navigate into buildings, through walls, underground, down streets, around corners, etc.). For example, selection input associated with the first portal may be received (e.g., a user may click or touch the first portal). Responsive to the selection input, the visual interface may be transitioned from the scene to a second scene associated with the first point of interest. For example, the second scene may depict a museum lobby that the user may explore through the second scene. In an example, the second scene may have a second scene orientation that corresponds to a scene orientation of the scene (e.g., as if the user had walked directly into the museum lobby from outside the museum). In an example, one or more portals, corresponding to points of interest within the second scene, may be populated within the second scene (e.g., a portal corresponding to a doorway to a prehistoric portion of the museum; a portal corresponding to a gift shop; etc.). In this way, navigation through the museum may be facilitated. In an example, responsive to receiving a back input (e.g., a user may select a back button or may select outside a scene portal for the second scene), the visual interface may be transitioned from the second scene to the scene of the outside of the museum (e.g., the scene may maintain the scene orientation from before the visual interface was transitioned to the second scene). In this way, the user may freely and/or frictionlessly navigate around scenes and/or preview points of interest before navigating deeper into imagery. At 110, the method ends.
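The scene-to-scene navigation above, including the preserved orientation and the back input that restores the prior scene, may be sketched as a back stack. The class and method names are hypothetical illustrations of a visual interface's navigation state.

```python
class VisualInterface:
    """Sketch of scene navigation: selecting a portal pushes the
    current scene onto a back stack and enters the new scene while
    keeping the same orientation (as if walking straight in); a back
    input restores the prior scene with its saved orientation."""

    def __init__(self, scene, orientation=0.0):
        self.scene = scene
        self.orientation = orientation  # e.g., heading in degrees
        self._back_stack = []

    def select_portal(self, target_scene):
        # Remember where we came from, then transition; the scene
        # orientation carries over into the new scene.
        self._back_stack.append((self.scene, self.orientation))
        self.scene = target_scene

    def back(self):
        # Return to the prior scene, restoring its orientation.
        if self._back_stack:
            self.scene, self.orientation = self._back_stack.pop()
```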
Still another embodiment involves a computer-readable medium comprising processor-executable instructions configured to implement one or more of the techniques presented herein. An example embodiment of a computer-readable medium or a computer-readable device is illustrated in
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing at least some of the claims.
As used in this application, the terms “component,” “module,” “system”, “interface”, and/or the like are generally intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a controller and the controller can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
Furthermore, the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter. The term “article of manufacture” as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. Of course, many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter.
Although not required, embodiments are described in the general context of “computer readable instructions” being executed by one or more computing devices. Computer readable instructions may be distributed via computer readable media (discussed below). Computer readable instructions may be implemented as program modules, such as functions, objects, Application Programming Interfaces (APIs), data structures, and the like, that perform particular tasks or implement particular abstract data types. Typically, the functionality of the computer readable instructions may be combined or distributed as desired in various environments.
In other embodiments, device 1112 may include additional features and/or functionality. For example, device 1112 may also include additional storage (e.g., removable and/or non-removable) including, but not limited to, magnetic storage, optical storage, and the like. Such additional storage is illustrated in
The term “computer readable media” as used herein includes computer storage media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions or other data. Memory 1118 and storage 1120 are examples of computer storage media. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVDs) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by device 1112. Computer storage media does not, however, include propagated signals; computer storage media excludes propagated signals. Any such computer storage media may be part of device 1112.
Device 1112 may also include communication connection(s) 1126 that allows device 1112 to communicate with other devices. Communication connection(s) 1126 may include, but is not limited to, a modem, a Network Interface Card (NIC), an integrated network interface, a radio frequency transmitter/receiver, an infrared port, a USB connection, or other interfaces for connecting computing device 1112 to other computing devices. Communication connection(s) 1126 may include a wired connection or a wireless connection. Communication connection(s) 1126 may transmit and/or receive communication media.
The term “computer readable media” may include communication media. Communication media typically embodies computer readable instructions or other data in a “modulated data signal” such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” may include a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
Device 1112 may include input device(s) 1124 such as keyboard, mouse, pen, voice input device, touch input device, infrared cameras, video input devices, and/or any other input device. Output device(s) 1122 such as one or more displays, speakers, printers, and/or any other output device may also be included in device 1112. Input device(s) 1124 and output device(s) 1122 may be connected to device 1112 via a wired connection, wireless connection, or any combination thereof. In one embodiment, an input device or an output device from another computing device may be used as input device(s) 1124 or output device(s) 1122 for computing device 1112.
Components of computing device 1112 may be connected by various interconnects, such as a bus. Such interconnects may include a Peripheral Component Interconnect (PCI), such as PCI Express, a Universal Serial Bus (USB), FireWire (IEEE 1394), an optical bus structure, and the like. In another embodiment, components of computing device 1112 may be interconnected by a network. For example, memory 1118 may be comprised of multiple physical memory units located in different physical locations interconnected by a network.
Those skilled in the art will realize that storage devices utilized to store computer readable instructions may be distributed across a network. For example, a computing device 1130 accessible via a network 1128 may store computer readable instructions to implement one or more embodiments provided herein. Computing device 1112 may access computing device 1130 and download a part or all of the computer readable instructions for execution. Alternatively, computing device 1112 may download pieces of the computer readable instructions, as needed, or some instructions may be executed at computing device 1112 and some at computing device 1130.
Various operations of embodiments are provided herein. In one embodiment, one or more of the operations described may constitute computer readable instructions stored on one or more computer readable media, which if executed by a computing device, will cause the computing device to perform the operations described. The order in which some or all of the operations are described should not be construed as to imply that these operations are necessarily order dependent. Alternative ordering will be appreciated by one skilled in the art having the benefit of this description. Further, it will be understood that not all operations are necessarily present in each embodiment provided herein. Also, it will be understood that not all operations are necessary in some embodiments.
Further, unless specified otherwise, “first,” “second,” and/or the like are not intended to imply a temporal aspect, a spatial aspect, an ordering, etc. Rather, such terms are merely used as identifiers, names, etc. for features, elements, items, etc. For example, a first object and a second object generally correspond to object A and object B or two different or two identical objects or the same object.
Moreover, “exemplary” is used herein to mean serving as an example, instance, illustration, etc., and not necessarily as advantageous. As used herein, “or” is intended to mean an inclusive “or” rather than an exclusive “or”. In addition, “a” and “an” as used in this application are generally to be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. Also, “at least one of A and B” and/or the like generally means A or B or both A and B. Furthermore, to the extent that “includes”, “having”, “has”, “with”, and/or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term “comprising”.
Also, although the disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art based upon a reading and understanding of this specification and the annexed drawings. The disclosure includes all such modifications and alterations and is limited only by the scope of the following claims. In particular regard to the various functions performed by the above described components (e.g., elements, resources, etc.), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., that is functionally equivalent), even though not structurally equivalent to the disclosed structure. In addition, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application.