Due to modern advances in transportation, people are often able to travel to places and regions that are new and unfamiliar. New geographic locations typically present a number of challenges to first time visitors. For example, first time visitors may encounter new languages and customs. Even people familiar with a particular geographic location may need assistance or additional insight about certain places or customs within the location. People may need assistance with directions, advice, translations, or other additional information. While traveling, people may wish to obtain directions and information about certain places including museums, restaurants, or historical monuments.
Typically, people may resort to acquiring assistance from a person with knowledge about a particular geographic location to overcome the challenges that the unfamiliar place may present. For example, people who specialize in giving such advice may be referred to as travel companions. Travel companions may provide information, including directions to popular restaurants, tourist sites, and exciting experiences for people to try. In addition, a travel companion may be able to assist in translating languages that are new or unfamiliar. Different travel companions may have varying levels of knowledge about museums, restaurants, parks, customs, and other unique elements of a geographic location. A person or group of people may hire a travel companion to answer questions and provide services for a predefined cost.
In addition, people often rely on the use of computing devices to assist them within various geographic locations. Whether unfamiliar with the geographic location or simply trying to receive more information about a certain place, people rely on technology to provide them with answers to any obstacles a geographic location may present. Computing devices such as personal computers, laptop computers, tablet computers, cellular phones, and countless types of Internet-capable devices are increasingly prevalent in numerous aspects of modern life, including during travel in new geographic locations. Over time, the manner in which these devices are providing information to users is becoming more intelligent, more efficient, more intuitive, and/or less obtrusive. Examples include travelers using computing devices to access information about a new geographic location from the Internet or using a global positioning system (GPS) to find directions to a place.
Unless otherwise indicated herein, the materials described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.
This disclosure may disclose, inter alia, methods and systems for on-demand travel guide assistance.
In one example, a method is provided that includes receiving, at a server associated with a travel companion service, a request from a wearable computing device for interaction with a live travel companion that is knowledgeable of aspects of a geographic location of the wearable computing device. The method includes determining the geographic location of the wearable computing device and determining, from among a plurality of live travel companions associated with the travel companion service, the live travel companion that is assigned to the geographic location of the wearable computing device, wherein each of the plurality of live travel companions is assigned to a given geographic location. The method also comprises receiving from the wearable computing device real-time video and real-time audio, both of which are based on a perspective from the wearable computing device, and initiating an experience-sharing session between the wearable computing device and a second computing device associated with the live travel companion, wherein the experience-sharing session includes the real-time video and real-time audio from the wearable computing device. In response to the real-time video and real-time audio, the method includes providing a communication channel between the second computing device and the wearable computing device via the experience-sharing session for real-time interaction.
In another example, a system is described. The system comprises a processor and memory configured to store program instructions executable by the processor to perform functions. In the example system, the functions include receiving from a wearable computing device a request for interaction with a live travel companion that is knowledgeable of aspects of a geographic location of the wearable computing device, and determining the geographic location of the wearable computing device. Additional functions include determining, from among a plurality of live travel companions associated with a travel companion service, the live travel companion that is assigned to the geographic location of the wearable computing device, wherein each of the plurality of live travel companions is assigned to a given geographic location, and receiving from the wearable computing device real-time video and real-time audio, both of which are based on a perspective from the wearable computing device. Further, the functions include initiating an experience-sharing session between the wearable computing device and a second computing device associated with the live travel companion, wherein the experience-sharing session includes the real-time video and real-time audio from the wearable computing device, and, in response to the real-time video and real-time audio, providing a communication channel between the second computing device and the wearable computing device via the experience-sharing session for real-time interaction.
Any of the methods described herein may be provided in a form of instructions stored on a non-transitory computer-readable medium that, when executed by a computing device, cause the computing device to perform functions of the method. Further examples may also include articles of manufacture including tangible computer-readable media that have computer-readable instructions encoded thereon, and the instructions may comprise instructions to perform functions of the methods described herein.
In another example, a computer-readable memory having stored thereon instructions executable by a computing device to cause the computing device to perform functions is provided. The functions may comprise receiving, at a server associated with a travel companion service, a request from a wearable computing device for interaction with a live travel companion that is knowledgeable of aspects of a geographic location of the wearable computing device. The functions may further include determining the geographic location of the wearable computing device and determining, from among a plurality of live travel companions associated with the travel companion service, the live travel companion that is assigned to the geographic location of the wearable computing device. Each of the plurality of live travel companions is assigned to a given geographic location. The functions may also include receiving from the wearable computing device real-time video and real-time audio, both of which are based on a perspective from the wearable computing device, and initiating an experience-sharing session between the wearable computing device and a second computing device associated with the live travel companion. The experience-sharing session may comprise the real-time video and real-time audio from the wearable computing device. The functions may further comprise providing a communication channel between the second computing device and the wearable computing device via the experience-sharing session for real-time interaction in response to the real-time video and real-time audio.
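To make the sequence of functions concrete, the following Python sketch walks through the same steps on a hypothetical server. Every name in it (Companion, Session, handle_request, the request fields) is invented for illustration and is not part of the disclosed system.

```python
from dataclasses import dataclass

@dataclass
class Companion:
    name: str
    region: str  # the geographic location this companion is assigned to

@dataclass
class Session:
    device_id: str
    companion: Companion
    channel_open: bool = False

def handle_request(request: dict, companions: list[Companion]) -> Session:
    """Server-side flow: determine the device's location, select the
    companion assigned to it, and initiate an experience-sharing session."""
    # The request from the wearable device carries its geographic location
    # (e.g., GPS coordinates already resolved to a named region).
    location = request["location"]

    # Determine, from among the plurality of live travel companions, the
    # one assigned to that location (assumes some companion covers it).
    companion = next(c for c in companions if c.region == location)

    # Initiate the experience-sharing session; real-time video and audio
    # from the device would be routed into this session.
    session = Session(device_id=request["device_id"], companion=companion)

    # Provide a communication channel for real-time interaction.
    session.channel_open = True
    return session

companions = [Companion("Ana", "paris"), Companion("Ben", "rome")]
session = handle_request({"device_id": "hmd-1", "location": "rome"}, companions)
print(session.companion.name, session.channel_open)  # Ben True
```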
The computer readable medium may include a non-transitory computer readable medium, for example, such as computer-readable media that store data for short periods of time like register memory, processor cache, and Random Access Memory (RAM). The computer readable medium may also include non-transitory media, such as secondary or persistent long term storage, like read only memory (ROM), optical or magnetic disks, or compact-disc read only memory (CD-ROM), for example. The computer readable media may also be any other volatile or non-volatile storage systems. The computer readable medium may be considered a computer readable storage medium, for example, or a tangible storage medium.
In addition, circuitry may be provided that is wired to perform logical functions in any processes or methods described herein.
In still further examples, any type of devices or systems may be used or configured to perform logical functions in any processes or methods described herein. As one example, a system may be provided that includes an interface, a control unit, and an update unit. The interface may be configured to provide communication between a client device and a data library. The data library stores data elements that include information configured for use by a given client device and that are associated with instructions executable by the given client device to perform a heuristic for interaction with an environment. The data elements stored in the data library are further associated with respective metadata that is indicative of a requirement of the given client device for using a given data element to perform at least a portion of an associated heuristic for interaction with the environment. The control unit may be configured to determine a data element from among the data elements stored in the data library that is executable by the client device to perform at least a portion of a task of the client device, and to cause the data element to be conveyed to the client device via the interface. The update unit may be configured to provide to the client device, via the interface, an update of application-specific instructions for use in a corresponding data element stored on the client device.
In yet further examples, any type of devices may be used or configured as means for performing functions of any of the methods described herein (or any portions of the methods described herein).
The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, examples, and features described above, further aspects, examples, and features will become apparent by reference to the figures and the following detailed description.
In the following detailed description, reference is made to the accompanying figures, which form a part hereof. In the figures, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative examples described in the detailed description, figures, and claims are not meant to be limiting. Other examples may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented herein. It will be readily understood that the aspects of the present disclosure, as generally described herein and illustrated in the figures, can be arranged, substituted, combined, separated, and designed in a wide variety of different configurations, all of which are contemplated herein.
This disclosure may disclose, inter alia, methods and systems for on-demand experience sharing with a live travel companion that is associated with a head-mountable device (HMD), such as a glasses-style wearable computer. An HMD may connect to a network and request interaction with another computing device associated with a live travel companion. The interaction between a device requesting information and a device associated with a travel companion may include video and audio transmitted in real-time. A network system may provide a communication channel for sharing the real-time video and audio between computing devices. The network may include components, including servers and nodes, to allow the real-time interaction between a traveler and a travel companion. Different types of media may be used for the interaction, including using an experience-sharing session.
A travel companion may be selected from among a plurality of travel companions, in response to a request from a device, depending on the location of the request. Travel companions may provide assistance to one or more travelers in real-time via an experience-sharing session. Various examples may exist that illustrate possible interactions that may occur between a traveler and a travel companion via devices associated with each, respectively.
a. Example Server System Architecture
As shown, the wearable computer 100 includes a transmitter/receiver 102, a head-mounted display (HMD) 104, a data processing system 106, and several input sources 108.
The transmitter/receiver 102 may be configured to communicate with one or more remote devices through the communication network 112, and connection to the network 112 may be configured to support two-way communication and may be wired or wireless.
The HMD 104 may be configured to display visual objects derived from many types of visual multimedia, including video, text, graphics, pictures, application interfaces, and animations. Some examples of an HMD 104 may include a processor 118 to store and transmit a visual object to a display 120, which presents the visual object. The processor 118 may also edit the visual object for a variety of purposes. One purpose for editing a visual object may be to synchronize displaying of the visual object with presentation of an audio object to the one or more speakers 122. Another purpose for editing a visual object may be to compress the visual object to reduce load on the display 120. Still another purpose for editing a visual object may be to correlate displaying of the visual object with other visual objects currently displayed by the HMD 104.
The data processing system 106 may include a memory system 124, a central processing unit (CPU) 126, an input interface 128, and an audio visual (A/V) processor 130. The memory system 124 may include a non-transitory computer-readable medium having program instructions stored thereon. As such, the program instructions may be executable by the CPU 126 to carry out the functionality described herein. The memory system 124 may be configured to receive data from the input sources 108 and/or the transmitter/receiver 102. The memory system 124 may also be configured to store received data and then distribute the received data to the CPU 126, the HMD 104, the speaker 122, or to a remote device through the transmitter/receiver 102. The CPU 126 may be configured to detect a stream of data in the memory system 124 and control how the memory system 124 distributes the stream of data. The input interface 128 may be configured to process a stream of data from the input sources 108 and then transmit the processed stream of data into the memory system 124. This processing of the stream of data converts a raw signal, coming directly from the input sources 108 or the A/V processor 130, into a stream of data that other elements in the wearable computer 100, the computing device 116, and the server 114 can use. The A/V processor 130 may be configured to perform audio and visual processing on one or more audio feeds and one or more video feeds from one or more of the input sources 108. The CPU 126 may be configured to control the audio and visual processing performed on the one or more audio feeds and the one or more video feeds. Examples of audio and video processing techniques, which may be performed by the A/V processor 130, will be given later.
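One way to picture this data path is as a small pipeline: the input interface normalizes a raw signal, the memory system buffers it, and the buffered stream is then drained toward a display, speaker, or transmitter. The Python sketch below is purely illustrative and does not correspond to actual device firmware; all class and method names are invented.

```python
from collections import deque

class InputInterface:
    """Stands in for input interface 128: converts a raw signal from an
    input source into a stream of data other components can use."""
    def process(self, raw_bytes: bytes) -> dict:
        return {"payload": raw_bytes, "decoded": True}

class MemorySystem:
    """Stands in for memory system 124: buffers processed data and
    distributes it to consumers."""
    def __init__(self):
        self.buffer = deque()

    def store(self, item: dict):
        self.buffer.append(item)

    def distribute(self):
        # The CPU's role (detecting a stream in memory and controlling
        # distribution) reduces here to draining the buffer in order.
        while self.buffer:
            yield self.buffer.popleft()

interface, memory = InputInterface(), MemorySystem()
memory.store(interface.process(b"raw-camera-frame"))
for item in memory.distribute():
    print("route to display / speaker / transmitter:", item["decoded"])
```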
The input sources 108 include features of the wearable computing device 100 such as a video camera 132, a microphone 134, a touch pad 136, a keyboard 138, one or more applications 140, and other general sensors 142 (e.g., biometric sensors). The input sources 108 may be internal, as shown.
The computing device 116 may be any type of computing device capable of receiving and displaying video and audio in real time. In addition, the computing device 116 may be able to transmit audio and video in real time to permit live interaction to occur. The computing device 116 may also record and store images, audio, or video in memory. Multiple wearable computing devices may link and interact via the network 112.
b. Example Device Architecture
Each of the frame elements 204, 206, and 208 and the extending side-arms 214, 216 may be formed of a solid structure of plastic and/or metal, or may be formed of a hollow structure of similar material so as to allow wiring and component interconnects to be internally routed through the head-mounted device 202. Other materials may be possible as well.
Each of the lens elements 210, 212 may be formed of any material that can suitably display a projected image or graphic. Each of the lens elements 210, 212 may also be sufficiently transparent to allow a user to see through the lens element. Combining these two features of the lens elements may facilitate an augmented reality or heads-up display, where the projected image or graphic is superimposed over a real-world view as perceived by the user through the lens elements.
The extending side-arms 214, 216 may each be projections that extend away from the lens-frames 204, 206, respectively, and may be positioned behind a user's ears to secure the head-mounted device 202 to the user. The extending side-arms 214, 216 may further secure the head-mounted device 202 to the user by extending around a rear portion of the user's head. Additionally or alternatively, for example, the system 200 may connect to or be affixed within a head-mounted helmet structure. Other possibilities exist as well.
The system 200 may also include an on-board computing system 218, a video camera 220, a sensor 222, and a finger-operable touch pad 224. The on-board computing system 218 is shown to be positioned on the extending side-arm 214 of the head-mounted device 202; however, the on-board computing system 218 may be provided on other parts of the head-mounted device 202 or may be positioned remote from the head-mounted device 202 (e.g., the on-board computing system 218 could be wire- or wirelessly-connected to the head-mounted device 202). The on-board computing system 218 may include a processor and memory, for example. The on-board computing system 218 may be configured to receive and analyze data from the video camera 220 and the finger-operable touch pad 224 (and possibly from other sensory devices, user interfaces, or both) and generate images for output by the lens elements 210 and 212.
The video camera 220 is shown positioned on the extending side-arm 214 of the head-mounted device 202; however, the video camera 220 may be provided on other parts of the head-mounted device 202. The video camera 220 may be configured to capture images at various resolutions or at different frame rates. Many video cameras with a small form-factor, such as those used in cell phones or webcams, for example, may be incorporated into an example of the system 200.
The sensor 222 is shown on the extending side-arm 216 of the head-mounted device 202; however, the sensor 222 may be positioned on other parts of the head-mounted device 202. The sensor 222 may include one or more of a gyroscope or an accelerometer, for example. Other sensing devices may be included within, or in addition to, the sensor 222 or other sensing functions may be performed by the sensor 222.
The finger-operable touch pad 224 is shown on the extending side-arm 214 of the head-mounted device 202. However, the finger-operable touch pad 224 may be positioned on other parts of the head-mounted device 202. Also, more than one finger-operable touch pad may be present on the head-mounted device 202. The finger-operable touch pad 224 may be used by a user to input commands. The finger-operable touch pad 224 may sense at least one of a position and a movement of a finger via capacitive sensing, resistance sensing, or a surface acoustic wave process, among other possibilities. The finger-operable touch pad 224 may be capable of sensing finger movement in a direction parallel or planar to the pad surface, in a direction normal to the pad surface, or both, and may also be capable of sensing a level of pressure applied to the pad surface. The finger-operable touch pad 224 may be formed of one or more translucent or transparent insulating layers and one or more translucent or transparent conducting layers. Edges of the finger-operable touch pad 224 may be formed to have a raised, indented, or roughened surface, so as to provide tactile feedback to a user when the user's finger reaches the edge, or other area, of the finger-operable touch pad 224. If more than one finger-operable touch pad is present, each finger-operable touch pad may be operated independently, and may provide a different function.
The lens elements 210, 212 may act as a combiner in a light projection system and may include a coating that reflects the light projected onto them from the projectors 228, 232. In some examples, a reflective coating may not be used (e.g., when the projectors 228, 232 are scanning laser devices).
In alternative examples, other types of display elements may also be used. For example, the lens elements 210, 212 themselves may include a transparent or semi-transparent matrix display, such as an electroluminescent display or a liquid crystal display; one or more waveguides for delivering an image to the user's eyes; or other optical elements capable of delivering an in-focus near-to-eye image to the user. A corresponding display driver may be disposed within the frame elements 204, 206 for driving such a matrix display. Alternatively or additionally, a laser or LED source and scanning system could be used to draw a raster display directly onto the retina of one or more of the user's eyes. Other possibilities exist as well.
The wearable computing device 272 may include a single lens element 280 that may be coupled to one of the side-arms 273 or the center frame support 274. The lens element 280 may include a display such as the display described above.
A server can help reduce a processing load of a wearable computing device. For example, a wearable computing device may interact with a remote, cloud-based server system, which can function to distribute real-time audio and video to appropriate computing devices for viewing. As part of a cloud-based implementation, the wearable computing device may communicate with the server system through a wireless connection, through a wired connection, or through a network that includes a combination of wireless and wired connections. The server system may likewise communicate with other computing devices through a wireless connection, through a wired connection, or through a network that includes a combination of wireless and wired connections. The server system may then receive, process, store, and transmit any video, audio, images, text, or other information from the wearable computing device and other computing devices. Multiple wearable computing devices may interact with the remote server system.
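At its core, the distribution role described here is a fan-out: one media stream in from the wearable device, many streams out to viewers. The toy Python relay below illustrates only that shape; a real system would use a streaming protocol such as RTP or WebRTC, and the names here are invented.

```python
class RelayServer:
    """Hypothetical fan-out relay: receives a media chunk from one
    wearable device and forwards it to every registered viewer."""
    def __init__(self):
        self.viewers = []

    def register(self, viewer: list):
        self.viewers.append(viewer)

    def on_media_chunk(self, chunk: bytes):
        for viewer in self.viewers:
            viewer.append(chunk)  # stand-in for a network send

viewer_a, viewer_b = [], []
relay = RelayServer()
relay.register(viewer_a)
relay.register(viewer_b)
relay.on_media_chunk(b"frame-0001")
assert viewer_a == viewer_b == [b"frame-0001"]
```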
In addition, for the method 300 and other processes and methods disclosed herein, the flowchart shows functionality and operation of one possible implementation of present embodiments. In this regard, each block may represent a module, a segment, or a portion of program code, which includes one or more instructions executable by a processor for implementing specific logical functions or steps in the process. The program code may be stored on any type of computer readable medium, for example, such as a storage device including a disk or hard drive. The computer readable medium may include a non-transitory computer readable medium, for example, such as computer-readable media that store data for short periods of time like register memory, processor cache, and random access memory (RAM). The computer readable medium may also include non-transitory media, such as secondary or persistent long term storage, like read only memory (ROM), optical or magnetic disks, or compact-disc read only memory (CD-ROM), for example. The computer readable media may also be any other volatile or non-volatile storage systems. The computer readable medium may be considered a computer readable storage medium, for example, or a tangible storage device.
In addition, for the method 300 and other processes and methods disclosed herein, each block may represent circuitry that is wired to perform the specific logical functions in the process.
At block 302, the method 300 includes receiving video and audio in real-time. In some examples, a wearable computing device may receive video and audio using cameras, microphones, or other components. The capturing of video and audio in real-time may be performed by any of the components described above.
At block 304, the method 300 includes providing video and audio to a server system through a communication network. In some examples, the wearable computing device may transmit captured video and audio to a server system through a communication network.
At block 306, the method 300 includes the server system processing the video and audio, and at block 308, the method 300 includes the server system providing the processed video and audio to one or more computing devices through the communication network.
A server system may process captured video and audio in various ways. In some examples, a server system may format media components of the captured video and audio to adjust for a particular computing device. For example, consider a computing device that is participating in an experience-sharing session via a website that uses a specific video format. In this example, when the wearable computing device sends captured video, the server system may format the video according to the specific video format used by the website before transmitting the video to the computing device. As another example, if a computing device is a personal digital assistant (PDA) that is configured to play audio feeds in a specific audio format, then the server system may format an audio portion of the captured video and audio according to the specific audio format before transmitting the audio portion to other computing devices. These examples are merely illustrative, and a server system may format the captured video and audio to accommodate given computing devices in various other ways. In some implementations, a server system may format the same captured video and audio in a different manner for different computing devices in the same experience-sharing session.
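In code, this per-device formatting amounts to a lookup from device type to target formats, applied before transmission. The sketch below is a minimal illustration under invented names; the format table, device types, and the metadata-only "transcode" step are all hypothetical stand-ins for a real media pipeline.

```python
# Hypothetical device-to-format table; a real system would negotiate
# these capabilities with each client.
DEVICE_FORMATS = {
    "website-player": {"video": "vp8", "audio": "vorbis"},
    "pda": {"video": None, "audio": "amr"},  # audio-only client
    "hmd": {"video": "h264", "audio": "aac"},
}

def format_for_device(captured: dict, device_type: str) -> dict:
    """Return a copy of the captured media, formatted for one device."""
    target = DEVICE_FORMATS[device_type]
    out = dict(captured)
    if target["video"] is None:
        out.pop("video", None)  # strip video for audio-only devices
    else:
        out["video_codec"] = target["video"]
    out["audio_codec"] = target["audio"]
    return out

captured = {"video": b"...", "audio": b"...",
            "video_codec": "raw", "audio_codec": "pcm"}
# The same capture is formatted differently for each session participant.
print(format_for_device(captured, "pda"))
print(format_for_device(captured, "hmd"))
```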
In still other examples, a server system may be configured to compress all or a portion of the captured video and audio before transmitting the captured video and audio to a computing device. For example, if a server system receives high-resolution captured video and audio, the server may compress the captured video and audio before transmitting the captured video and audio to the one or more computing devices. In this example, if a connection between the server system and a certain computing device runs too slowly for real-time transmission of the high-resolution captured video and audio, then the server system may temporally or spatially compress the captured video and audio and transmit the compressed captured video and audio to the computing device. As another example, if a computing device requires a slower frame rate for video feeds, a server system may temporally compress captured video and audio by removing extra frames before transmitting the captured video and audio to the computing device. As yet another example, a server system may be configured to save bandwidth by downsampling a video before transmitting the video to a computing device that can handle a low-resolution image. In this example, the server system may be configured to perform pre-processing on the video itself, for example, by combining multiple video sources into a single video feed, or by performing near-real-time transcription (in other words, closed captioning) or translation.
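Temporal compression by frame removal, as described above, can be illustrated in a few lines. This sketch assumes evenly spaced frames and simply keeps every Nth one to hit a slower target frame rate; it is a toy, not a codec.

```python
def drop_frames(frames: list, source_fps: int, target_fps: int) -> list:
    """Temporally compress a frame sequence by removing extra frames so
    that a capture at source_fps fits a device requiring target_fps."""
    if target_fps >= source_fps:
        return frames  # nothing to drop
    step = source_fps / target_fps
    return [frames[int(i * step)] for i in range(int(len(frames) / step))]

frames = list(range(30))            # one second of 30 fps video
print(drop_frames(frames, 30, 10))  # 10 frames kept: [0, 3, 6, ..., 27]
```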
Further, a server system may be configured to decompress captured video and audio, which may enhance a quality of an experience-sharing session. In some examples, a wearable computing device may compress captured video and audio before transmitting the captured video and audio to a server system, in order to reduce transmission load on a connection between the wearable computing device and the server system. If the transmission load is less of a concern for the connection between the server system and a given computing device, then the server system may decompress the captured video and audio prior to transmitting the captured video and audio to the computing device. For example, if a wearable computing device uses a lossy spatial compression algorithm to compress captured video and audio before transmitting the captured video and audio to a server system, the server system may apply a super-resolution algorithm (an algorithm that estimates sub-pixel motion, increasing the perceived spatial resolution of an image) to decompress the captured video and audio before transmitting the captured video and audio to one or more computing devices. In other examples, a wearable computing device may use a lossless data compression algorithm to compress captured video and audio before transmission to a server system, and the server system may apply a corresponding lossless decompression algorithm to the captured video and audio so that the captured video and audio may be usable by a given computing device.
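The lossless case is the simplest to demonstrate: compress on the device to lighten the uplink, decompress on the server before forwarding. The sketch below uses Python's standard zlib library as one concrete lossless codec; the disclosure itself does not name a particular algorithm.

```python
import zlib

# On the wearable device: compress before transmission to reduce load
# on the uplink to the server system.
captured = b"audio+video payload " * 1000
compressed = zlib.compress(captured, level=6)

# On the server system: decompress before forwarding when the downlink
# to a viewing device is not bandwidth-constrained.
restored = zlib.decompress(compressed)

assert restored == captured  # lossless round trip
print(len(captured), "->", len(compressed), "bytes on the wire")
```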
As noted above, traveler 402 may use the HMD 404 to initiate an experience-sharing session with travel companion 406.
In response to initially connecting with traveler 402, travel companion 406 may communicate in real-time with advice, questions, comments, answers, or other communication elements, for example.
In other examples, audio 414, 416 may represent various communications between travel companion 406 and traveler 402, and may include continuous streaming of audio and video data, or discrete portions of audio and video data. Traveler 402 may turn off the microphones associated with HMD 404 to prevent interrupting travel companion 406. For example, a travel companion may be providing a tour of a museum and travelers may choose to limit the noise from their surroundings being captured by the HMDs by keeping their HMDs' microphones on mute except when they may have a question. In addition, travel companion 406 may be able to mute the microphone of computing device 408.
In other examples, travel companion 406 may interact with traveler 402 by sending graphical or textual images to be displayed on the lens of HMD 404. For example, if traveler 402 asked for directions to building 420, travel companion 406 may transmit directions along with a map and text instructions to HMD 404 for traveler 402 to view and follow.
In the example, traveler 422 initiated interaction with travel companion 426 to receive assistance communicating with local person 424. Traveler 422 may have a varying level of ability to communicate with local person 424. For example, traveler 422 may understand local person 424, but may be unable to speak in that language to respond. Other possibilities may exist for traveler 422 initiating an experience-sharing session with travel companion 426. For example, traveler 422 and local person 424 may be trying to reach the same destination and use the HMD of the traveler to receive directions from travel companion 426 for both of them.
In the example, travel companion 426 is located in geographic location 400d. In other examples, travel companion 426 may provide assistance from other geographic locations. Travel companion 426 may receive the audio 428 and captured video from the HMD of traveler 422. Travel companion 426 may hear audio 428 in multiple ways, including through a speaker connected to the computing device, headphones, or an HMD, for example. Travel companion 426 may respond with audio 430 through a microphone. In other examples, travel companion 426 may initially start the interaction with traveler 422 by sending a message, visual content, or audio 430 to the HMD of traveler 422. The HMD of traveler 422 may play audio 430 from travel companion 426 out loud so that both traveler 422 and local person 424 may hear it. In another example, the HMD may only play audio 430 through a headphone so that only traveler 422 may hear the advice or translation from travel companion 426. In an additional example, travel companion 426 may also transmit video in real-time so that the HMD may enable traveler 422 to also see travel companion 426. The HMD may also display visual images and videos from travel companion 426 on the lens of the HMD, or project them, so that local person 424 may also see the travel companion 426.
In the example, audio 428, 430 may represent any conversation that may occur among the people involved.
At block 502, the method 500 includes receiving a request from a wearable computing device for interaction with a live travel companion. The request may be received by a server, and may be a direct request or a time-specific request. A direct request involves an immediate connection between the devices of a traveler and a travel companion. For example, a traveler may come across a statue that he or she finds intriguing and may perform a direct request to initiate an experience-sharing session to receive information about the statue from a live travel companion. A time-specific request may occur when a connection is made at a time that is predefined prior to the connection. For example, the traveler may choose and set up time-specific requests to automatically initiate an experience-sharing session at the same time every day during a vacation as a daily check-in with the travel companion. In some examples, the request may be sent by a computing device other than an HMD, including but not limited to other wearable computing devices, smart phones, tablets, and laptops.
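The distinction between direct and time-specific requests reduces to whether a request carries a scheduled time. A minimal dispatcher sketch in Python, with all names invented for illustration:

```python
from dataclasses import dataclass
from datetime import datetime, time
from typing import Optional

@dataclass
class InteractionRequest:
    device_id: str
    scheduled_at: Optional[time] = None  # None means a direct request

def dispatch(request: InteractionRequest, now: datetime) -> str:
    if request.scheduled_at is None:
        return "connect immediately"               # direct request
    if now.time() >= request.scheduled_at:
        return "connect (scheduled time reached)"  # time-specific request
    return "queued until " + request.scheduled_at.isoformat()

print(dispatch(InteractionRequest("hmd-1"), datetime.now()))
print(dispatch(InteractionRequest("hmd-2", scheduled_at=time(9, 0)), datetime.now()))
```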
In some examples, the request may be the initial attempt for interaction with a live travel companion. The traveler may be new to the service and may try live interaction for the first time. In other examples, the traveler may have prior experience with the overall service, but this may be the initial interaction with this particular travel companion. In a different example, the request may be an additional request in a series of requests that have already occurred between the same traveler and travel companion. Multiple experience-sharing sessions between the same traveler and travel companion may result in a more comfortable relationship, and thus better enhance the experience of both the traveler and the travel companion.
In some examples, a computing device of the travel companion may send requests to one or more wearable computers of travelers to initiate experience-sharing sessions. For example, the travel companion may provide information for a tour of a famous museum at the same time daily and may wish to initiate the experience-sharing session with any travelers that previously signed up. Thus, the travel companion may present information on a tour without physically being present in that geographic location to lead the tour.
Furthermore, examples may have various payment structures to enable compensation to occur in exchange for the assistance from travel companions. Some examples provide the service for free or for a specific cost. In one example, a traveler may pay a predefined amount for every request for interaction. This predefined amount may vary over time, may increase, or may decrease with increased usage to provide an incentive for the traveler to use the service more. In one scenario, the traveler may pay for the overall time spent interacting in advance or cover all the costs at the end of usage. Different types of requests or interactions may result in different costs. For example, a tour of a museum may cost more for the traveler than receiving directions to that museum. In another example, every type of request may cost an equal, predefined amount. The traveler may be able to sign up for the service, pay the entire cost upfront, and interact with a travel companion an unlimited number of times. For example, if a traveler knows he or she is about to go on a trip for the next two weeks, the traveler may pay a two-week service fee to enable use of the service during the trip, and thus subscribe to the service. In addition, various locations may cost more than others in some examples. For example, popular tourist locations may cost more due to a higher frequency of requests coming from that geographic location. In one example, requesting and interacting with a travel companion may occur only through the use of a programmed application that may be purchased or downloaded. Additional features to the service may cost extra. For example, the ability for multiple travelers to group interact with a travel companion simultaneously may result in an additional fee.
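The pricing structures above (per request, per unit of time, or a flat subscription) can be captured in a single fee function. The rates and plan names below are invented placeholders, not figures from the disclosure.

```python
def session_cost(plan: str, requests: int = 0, minutes: int = 0) -> float:
    """Toy fee computation covering the payment structures mentioned
    above; rates are hypothetical."""
    if plan == "per_request":
        return requests * 2.00  # predefined amount per request
    if plan == "per_minute":
        return minutes * 0.25   # pay for overall time spent interacting
    if plan == "subscription":
        return 30.00            # flat fee, unlimited interaction
    raise ValueError(f"unknown plan: {plan}")

print(session_cost("per_request", requests=5))                 # 10.0
print(session_cost("subscription", requests=50, minutes=600))  # 30.0
```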
At block 504, the method 500 includes determining the geographic location of the wearable computing device. The wearable computing device may be configured to determine its geographic location and provide such information to a server associated with the live travel companion service. In another example, the wearable computing device may determine its own location and send that location within the request for interaction. The geographic location may be determined through the use of a global positioning system (GPS) or another means of determining location. In one example, a memory storage device may store the location of the wearable computing device and update the location periodically. In another example, a traveler may disclose, while purchasing the service, the geographic location for which he or she will require the service.
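The stored-and-periodically-updated approach amounts to caching the last known position and refreshing it only when stale. A hypothetical sketch; get_gps_fix stands in for whatever positioning API the device actually exposes.

```python
import time

class LocationCache:
    """Stores the device's last known location and refreshes it only
    when older than max_age_s seconds."""
    def __init__(self, max_age_s: float = 60.0):
        self.max_age_s = max_age_s
        self._fix = None
        self._stamp = 0.0

    def current(self, get_gps_fix):
        if self._fix is None or time.monotonic() - self._stamp > self.max_age_s:
            self._fix = get_gps_fix()  # expensive: query the GPS receiver
            self._stamp = time.monotonic()
        return self._fix

cache = LocationCache(max_age_s=60.0)
print(cache.current(lambda: (48.8584, 2.2945)))  # fresh GPS fix
print(cache.current(lambda: (0.0, 0.0)))         # within 60 s: cached fix returned
```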
At block 506, the method 500 includes determining, from among a plurality of live travel companions, the live travel companion based on the geographic location. A server associated with the live travel companion service may perform the determination. The determination aims to select a travel companion that is assigned to that geographic location. In other examples, the selection of the travel companion may be based on other details, including the level of expertise of travel companions, availability, the number of current incoming requests, or other reasons. A server or another type of computing device may use algorithms to select travel companions based on requests from travelers. In another example, a travel companion may answer requests in the order that the requests are received. Multiple travel companions that all have knowledge about the same geographic location may take turns accepting requests coming from that particular geographic location.
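One simple way to realize both rules at once (select by assigned location, and let companions covering the same location take turns) is a per-region round-robin rotation, as in this hypothetical sketch:

```python
from collections import defaultdict
from itertools import cycle

class CompanionRouter:
    """Selects a companion assigned to the request's location; companions
    covering the same location take turns via round-robin rotation."""
    def __init__(self, assignments):
        by_region = defaultdict(list)
        for name, region in assignments:
            by_region[region].append(name)
        self._rotation = {r: cycle(names) for r, names in by_region.items()}

    def select(self, location: str) -> str:
        return next(self._rotation[location])

router = CompanionRouter([("Ana", "rome"), ("Ben", "rome"), ("Kai", "kyoto")])
print([router.select("rome") for _ in range(3)])  # ['Ana', 'Ben', 'Ana']
```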
In one example, a wearable computing device of a traveler may connect to a computing device of a travel companion, person, or entity that receives more information from the traveler about the request. In response to receiving more information, another connection may be made between the traveler and a travel companion that was determined to be best suited to help fulfill that request. In this way, the person or entity that initially accepts the request may select the travel companion that may provide the best answer to the request. For example, a wearable computing device of a traveler may send a request to initiate an experience-sharing session with a travel companion for advice during a hike in a state park. A first travel companion, person, or entity may receive the request in the state park and ask one or more questions to determine the purpose behind the request of the traveler. In response to determining that the traveler would like specific information on hiking within that state park, the first travel companion, person, or entity may choose to connect the traveler with a travel companion that has the greatest knowledge about hiking and/or that state park. Examples may also include additional check points or automated services to improve the experience of the traveler.
At block 508, the method 500 includes receiving from the wearable computing device real-time video and real-time audio. In some examples, the computing device of the live travel companion may receive the real-time video and real-time audio in an experience-sharing session between devices of the traveler and the live travel companion. The live travel companion may receive the real-time video and real-time audio through a computing device such as a tablet, a laptop, or a wearable computing device, for example. The live travel companion may receive the real-time video, the real-time audio, or both. In addition, the live travel companion may receive recorded video, audio, text, or other forms of data transfer. The travel companion may also interact by sending real-time video and real-time audio or other types of information transfer.
At block 510, the method 500 includes initiating an experience-sharing session between the wearable computing device of the traveler and a second computing device associated with the live travel companion. The experience-sharing session may incorporate functions as described above.
At block 512, the method 500 includes providing a communication channel between the wearable computing device and the second computing device via the experience-sharing session for real-time interaction. The communication channel may be any type of link that connects the computing device of the travel companion and the wearable computing device of the traveler. The link may use one or more networks, wireless or wired portions of data transfer, and other means of permitting interaction to occur. The system may operate in a manner similar to the systems described above.
It should be understood that arrangements described herein are for purposes of example only. As such, those skilled in the art will appreciate that other arrangements and other elements (e.g. machines, interfaces, functions, orders, and groupings of functions, etc.) may be used instead, and some elements may be omitted altogether according to the desired results. Further, many of the elements that are described are functional entities that may be implemented as discrete or distributed components or in conjunction with other components, in any suitable combination and location.
While various aspects and examples have been disclosed herein, other aspects and examples will be apparent to those skilled in the art. The various aspects and examples disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope being indicated by the following claims, along with the full scope of equivalents to which such claims are entitled. It is also to be understood that the terminology used herein is for the purpose of describing particular examples only, and is not intended to be limiting.