The present disclosure relates generally to autonomous aerial vehicle operations, and more particularly to methods, computer-readable media, and apparatuses for collecting via an autonomous aerial vehicle viewing information for a plurality of positions for a plurality of viewing locations within an event venue and presenting a viewing location selection interface that provides a simulated view with respect to at least one of the plurality of positions for at least one of the plurality of viewing locations, based upon the viewing information that is obtained.
Current trends in wireless technology are leading towards a future where virtually any object can be network-enabled and addressable on-network. The pervasive presence of cellular and non-cellular wireless networks, including fixed, ad-hoc, and/or peer-to-peer wireless networks, satellite networks, and the like, along with the migration to a 128-bit IPv6-based address space, provides the tools and resources for the paradigm of the Internet of Things (IoT) to become a reality. In addition, drones or autonomous aerial vehicles (AAVs) are increasingly being utilized for a variety of commercial and other useful tasks, such as package deliveries, search and rescue, mapping, surveying, and so forth, enabled at least in part by these wireless communication technologies.
In one example, the present disclosure describes a method, computer-readable medium, and apparatus for collecting via an autonomous aerial vehicle viewing information for a plurality of positions for a plurality of viewing locations within an event venue and presenting a viewing location selection interface that provides a simulated view with respect to at least one of the plurality of positions for at least one of the plurality of viewing locations, based upon the viewing information that is obtained. For instance, in one example, a processing system including at least one processor may collect, via at least one camera of at least one autonomous aerial vehicle, viewing information for a plurality of positions for each of a plurality of viewing locations within an event venue. The processing system may next present a viewing location selection interface to a user, where the viewing location selection interface provides a simulated view with respect to at least one of the plurality of positions for at least one of the plurality of viewing locations, and where the simulated view is based upon the viewing information that is obtained. The processing system may then obtain a selection from the user of a viewing location of the plurality of viewing locations for an event at the event venue.
The teaching of the present disclosure can be readily understood by considering the following detailed description in conjunction with the accompanying drawings, in which:
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures.
Examples of the present disclosure describe methods, computer-readable media, and apparatuses for collecting via an autonomous aerial vehicle viewing information for a plurality of positions for a plurality of viewing locations within an event venue and presenting a viewing location selection interface that provides a simulated view with respect to at least one of the plurality of positions for at least one of the plurality of viewing locations, based upon the viewing information that is obtained. In particular, examples of the present disclosure relate to an autonomous aerial vehicle (AAV) operating at an event venue that may use on-board sensors to collect data related to specific viewing locations (e.g., seats and/or seating locations, locations for stand-up viewing, viewing boxes, wheelchair-accessible viewing locations, etc.). The data may be used to aid in configuring the event venue and for seating selection for the purchase of tickets. For instance, in one example, images may be captured for a plurality of positions at each viewing location (e.g., different images for several heights, such as two feet above a seat, the floor, or the ground, five feet above, 5.5 feet above, six feet above, 6.5 feet above, etc.). In one example, the number of positions may approximate representative seating and standing views for a variety of people of different heights. In one example, audio data may also be recorded at each viewing location for at least one of the plurality of positions. The viewing and other data may then be used to improve a patron's ability to understand an expected experience when choosing viewing locations for an event at the event venue.
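To illustrate one possible organization of such data, the following is a minimal, non-limiting sketch in Python of per-position viewing records for a viewing location; the field names and values are illustrative assumptions rather than a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PositionCapture:
    """Viewing information captured at one position (height) of a viewing location."""
    height_ft: float                    # e.g., 2.0, 5.0, 5.5, 6.0, 6.5
    image_path: str                     # still image captured by the AAV camera
    video_path: Optional[str] = None    # optional video clip
    audio_path: Optional[str] = None    # optional audio sample recorded at this position

@dataclass
class ViewingLocation:
    """One seat, standing area, viewing box, or wheelchair-accessible location."""
    location_id: str                    # e.g., "sec227-rowF-seat2"
    kind: str                           # "seat", "stand-up", "box", "wheelchair"
    captures: List[PositionCapture] = field(default_factory=list)

# Example: a seat with captures approximating seated and standing eye levels.
seat = ViewingLocation("sec227-rowF-seat2", "seat", [
    PositionCapture(2.0, "img/sec227_rowF_seat2_2ft.jpg"),
    PositionCapture(5.5, "img/sec227_rowF_seat2_5p5ft.jpg"),
])
```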
To further illustrate, a sensor-equipped AAV may be used to sense and record conditions at every viewing location at a venue. For instance, the AAV may be equipped with microphones, still cameras, video cameras, thermometers, light sensors, and other sensors. The AAV may be in communication with a viewing location management system, e.g., an event ticketing and reservation system with a database relating to each viewing location for each event. For instance, the AAV may maintain a session via a wireless access point or other network connection. The AAV may access a location mapping of every viewing location in the venue. The mapping precision may permit the AAV to define its own flight path so that every viewing location may be visited. At each viewing location, the AAV may navigate itself to several positions, e.g., heights representing typical eye levels of different-sized people for both sitting and standing. The flight path may include every seat, or it may include only selected seats to obtain a representative sampling of viewing information (e.g., one or more images, a video clip, or the like). The images or video clips for each position for each viewing location, along with simulated patron heights, may be stored in a viewing location database.
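For instance, a flight path of this kind could be expanded into per-position waypoints as in the following sketch; the coordinates, capture heights, and mapping structure are illustrative assumptions.

```python
from typing import Dict, List, Tuple

# Hypothetical venue mapping: location id -> (x, y, floor_z) in meters.
VENUE_MAP: Dict[str, Tuple[float, float, float]] = {
    "sec227-rowF-seat1": (12.0, 40.5, 6.0),
    "sec227-rowF-seat2": (12.5, 40.5, 6.0),
}

# Heights (meters above the seat/floor) approximating seated and standing eye levels.
CAPTURE_HEIGHTS_M = [0.6, 1.5, 1.7, 1.85, 2.0]

def build_flight_path(location_ids: List[str]) -> List[Tuple[str, float, float, float]]:
    """Expand each selected viewing location into one waypoint per capture height."""
    waypoints = []
    for loc_id in location_ids:
        x, y, z = VENUE_MAP[loc_id]
        for h in CAPTURE_HEIGHTS_M:
            waypoints.append((loc_id, x, y, z + h))
    return waypoints

# A representative sampling could pass only selected seats instead of every seat.
path = build_flight_path(list(VENUE_MAP))
print(len(path))  # number of capture positions to visit
```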
In one example, for outdoor venues, the AAV may also capture images at different times of the day and different days of the year. These images, along with indications of the time and/or date may also be stored in the viewing location database and may be provided to patrons in connection with making viewing location selections for various events. For instance, some patrons may be interested to know which seats may be in sunlight vs. shade on a sunny day in the middle of the hot summer to help make a seat selection. In one example, a venue owner or operator, an event producer, or the like may also vary the price of sunny vs. shady seats. In one example, the AAV may be dispatched to empty seats or other viewing locations during actual events of various types to capture images and/or video to provide representative views of different event types. For instance, a view from a particular seat in an arena may be perceived differently for a hockey event versus a basketball event versus a concert, and so forth.
In one example, the AAV may further be used to test sound levels of the venue's speaker system. For instance, test audio may be played over the speaker system while the AAV traverses a flight path. The AAV may receive the test audio via one or more microphones and may record sound measurements, including intensity (e.g., in decibels), frequency, and other measures. The measured level(s) may be compared against expected level(s), and adjustments may be made to the speaker system to make corrections. The test audio sensed by the AAV may also be analyzed by the AAV or sent to a controller of the speaker system to identify echo, sound clarity, and other sound quality characteristics that may be used in the speaker system adjustments. In one example, the test audio level (or actual samples of the test audio) may also be stored in the viewing location database. This may be used, for example, if the venue is a theater for hosting an orchestra, where it may be useful to include sound levels and quality as an element informing a patron's viewing location selection. In one example, the AAV may also record a video of an event, perhaps a rehearsal. The video may be analyzed to determine where motion occurs over time and to represent various levels of motion density over the course of the entire event.
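As one non-limiting illustration, the comparison of measured sound levels against expected levels per viewing location might be sketched as follows; the tolerance value and data layout are assumptions.

```python
def speaker_adjustments(measured_db, expected_db, tolerance_db=3.0):
    """Compare measured sound levels against expected levels per viewing location
    and suggest gain corrections where the deviation exceeds a tolerance."""
    corrections = {}
    for loc_id, level in measured_db.items():
        delta = expected_db[loc_id] - level
        if abs(delta) > tolerance_db:
            corrections[loc_id] = round(delta, 1)  # positive: raise gain; negative: lower it
    return corrections

# Example: the second seat measures 5 dB quieter than expected, so +5 dB is suggested.
print(speaker_adjustments({"sec227-rowF-seat1": 84.0, "sec227-rowF-seat2": 78.0},
                          {"sec227-rowF-seat1": 85.0, "sec227-rowF-seat2": 83.0}))
```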
The collected AAV sensor data may be used to improve the viewing location selection experience and expectations of a patron. For instance, a patron may use a device to access a reservation application or website (e.g., a component of the ticketing and reservation system or in communication with the ticketing and reservation system). Using the AAV-collected sensor data in the viewing location database, the ticketing and reservation system may present an improved simulation of the experience that the patron may expect during the event. For instance, the patron may enter his or her own height to improve the accuracy of a simulated view that may be presented for the patron. In particular, the reservation system may use the patron's stated height to provide the best match of AAV-captured images for a viewing location that the patron is interested in. In addition, the ticketing and reservation system may collect data from other reservations that have already been made to permit the patron to have a better understanding of who would be seated in the area around a particular viewing location currently being viewed/considered by the patron. For example, other patrons who have reserved seats may have included data with their reservations that may be stored in the viewing location database, such as their heights, team preferences, fan types, and other descriptors. These data and others may be used by the ticketing and reservation system to present a simulated view of the viewing location and its surroundings. For instance, one or more AAV-captured images corresponding to the patron's seating and/or standing heights at the viewing location may be provided via a viewing location selection interface of the reservation application or website of the ticketing and reservation system to the patron, which may also be enhanced with simulated audience members based upon the heights or other data provided by these other patrons. In one example, the simulated patrons may comprise silhouettes or avatars representing generic individuals. In another example, the simulated patrons may comprise AI-generated representations of patrons of particular heights, fan base types, etc. (e.g., using a generative adversarial network (GAN)-based generator).
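For illustration, matching a patron's stated height to the closest AAV-captured position could be as simple as the following sketch; the capture records shown are hypothetical.

```python
def best_matching_capture(captures, patron_height_ft):
    """Pick the stored capture whose height is closest to the patron's stated height."""
    return min(captures, key=lambda c: abs(c["height_ft"] - patron_height_ft))

captures = [
    {"height_ft": 5.0, "image": "seat2_5ft.jpg"},
    {"height_ft": 5.5, "image": "seat2_5p5ft.jpg"},
    {"height_ft": 6.0, "image": "seat2_6ft.jpg"},
]
print(best_matching_capture(captures, 5.9))  # -> the 6.0 ft capture
```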
In addition, the patron may optionally log in to the ticketing and reservation system via one or more social networking applications or may provide a social network identifier when accessing the reservation application or website. In this case, the ticketing and reservation system may have access to identifiers of the patron's friends and connections. In one example, if any known contacts have existing reservations, they may also be displayed in a simulated view for a particular viewing location (and/or for other viewing locations that may also be considered by the patron). In one example, sound quality information may also be included in the viewing location selection process (and in one example, as a factor in viewing location pricing). Similarly, a motion heat map may be presented to the patron to use as another factor in viewing location selection, as a representation of visibility from a candidate viewing location as it relates to where motion (e.g., actions of team players in a sports event) is expected to occur during the event.
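A motion heat map of the kind described above can be derived from accumulated frame-to-frame differences. The following is a minimal sketch using NumPy on a synthetic grayscale clip; the array shapes and normalization are assumptions.

```python
import numpy as np

def motion_heat_map(frames: np.ndarray) -> np.ndarray:
    """Accumulate absolute frame-to-frame differences of a grayscale video
    (shape: [num_frames, height, width]) into a normalized heat map."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    heat = diffs.sum(axis=0)
    return heat / heat.max() if heat.max() > 0 else heat

# Synthetic example: motion concentrated in one corner of the frame.
frames = np.zeros((10, 48, 64), dtype=np.float32)
frames[::2, :10, :10] = 255.0
print(motion_heat_map(frames)[:10, :10].mean() > 0)  # True: motion density is highest there
```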
In one example, an event venue may provide patrons coming to an on-site box office/ticket office with AAV-assisted real-time or near-real-time views from different viewing locations. For instance, a patron may be interested in a particular seat. The event venue owner or operator, or an event producer, may then allow the patron (or site personnel assisting the patron), via an on-site user interface device, to cause an AAV to be dispatched to a viewing location that is selected by the patron, to obtain current viewing information of at least one position at the viewing location (e.g., at a height, or seating/standing heights of the patron), and to provide the current viewing information via a viewing location selection interface. Thus, the patron may obtain a timely view of the actual event in progress (or about to begin) from the considered viewing location. These and other aspects of the present disclosure are discussed in greater detail below in connection with the examples of
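One simple way to model the on-site dispatch request, shown purely for illustration, is a queued message from the selection interface to the AAV management system; the queue, field names, and identifiers below are hypothetical.

```python
import queue
import uuid

# In-memory stand-in for the dispatch channel between the box-office interface
# and the AAV management system (the disclosure does not fix a particular transport).
dispatch_queue: "queue.Queue[dict]" = queue.Queue()

def request_live_view(location_id: str, patron_height_ft: float) -> str:
    """Queue a request for an AAV to capture current viewing information at the
    selected location; returns a request id the selection interface can poll."""
    request_id = uuid.uuid4().hex
    dispatch_queue.put({"request_id": request_id,
                        "location_id": location_id,
                        "patron_height_ft": patron_height_ft})
    return request_id

req_id = request_live_view("sec227-rowF-seat2", 5.9)
print(req_id, dispatch_queue.get()["location_id"])  # an AAV controller would consume the request
```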
To aid in understanding the present disclosure,
In one example, the server(s) 125 may each comprise a computing device or processing system, such as computing system 400 depicted in
In one example, server(s) 125 may comprise an event venue management system, which in one example may include a seating database, a reservation and ticketing system, and an AAV management system. For instance, server(s) 125 may receive and store information regarding AAVs, such as (for each AAV): an identifier of the AAV, a maximum operational range of the AAV, a current operational range of the AAV, capabilities or features of the AAV, such as maneuvering capabilities, payload/lift capabilities (e.g., including maximum weight, volume, etc.), sensor and recording capabilities, lighting capabilities, visual projection capabilities, sound broadcast capabilities, and so forth. In one example, server(s) 125 may direct AAVs in collecting viewing location data and providing real-time viewing location visual feeds to individuals or groups of people, as described herein.
In one example, server(s) 125 may store detection models that may be deployed to AAVs, such as AAV 160, in order to detect items of interest based upon input data collected via AAV sensors in an environment. For instance, in one example, AAVs may include on-board processing systems with one or more detection models for detecting objects or other items in an environment/space/area. In accordance with the present disclosure, the detection models may be specifically designed for detecting venue seats, viewing boxes, locations for stand-up viewing, wheelchair-accessible viewing locations, etc. The machine learning models (MLMs), or signatures, may be specific to particular types of visual/image and/or spatial sensor data, or may take multiple types of sensor data as inputs. For instance, with respect to images or video, the input sensor data may include low-level invariant image data, such as colors (e.g., RGB (red-green-blue) or CYM (cyan-yellow-magenta) raw data (luminance values) from a CCD/photo-sensor array), shapes, color moments, color histograms, edge distribution histograms, etc. Visual features may also relate to movement in a video and may include changes within images and between images in a sequence (e.g., video frames or a sequence of still image shots), such as color histogram differences or a change in color distribution, edge change ratios, standard deviation of pixel intensities, contrast, average brightness, and the like. For instance, these features could be used to help quantify and distinguish plastic seats from a concrete floor, metal railings, etc.
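For instance, two of the low-level features mentioned above, per-channel color histograms and a histogram-difference measure between frames, might be computed as in the following sketch; the bin count and normalization are assumptions.

```python
import numpy as np

def color_histogram(image: np.ndarray, bins: int = 8) -> np.ndarray:
    """Concatenated, normalized per-channel histograms of an RGB image (H x W x 3, uint8)."""
    hists = []
    for c in range(3):
        h, _ = np.histogram(image[..., c], bins=bins, range=(0, 256))
        hists.append(h / h.sum())
    return np.concatenate(hists)

def histogram_difference(frame_a: np.ndarray, frame_b: np.ndarray) -> float:
    """L1 distance between color histograms, one simple measure of change between frames."""
    return float(np.abs(color_histogram(frame_a) - color_histogram(frame_b)).sum())

bright = np.full((32, 32, 3), 200, dtype=np.uint8)  # e.g., light-colored plastic seat
dark = np.full((32, 32, 3), 90, dtype=np.uint8)     # e.g., darker concrete floor
print(histogram_difference(bright, dark))           # large value indicates distinct colors
```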
As noted above, in one example, MLMs, or signatures, may take multiple types of sensor data as inputs. For instance, MLMs or signatures may also be provided for detecting particular items based upon LiDAR input data, infrared camera input data, and so on. In accordance with the present disclosure, a detection model may comprise a machine learning model (MLM) that is trained based upon the plurality of features available to the system (e.g., a “feature space”). For instance, one or more positive examples for a feature may be applied to a machine learning algorithm (MLA) to generate the signature (e.g., a MLM). In one example, the MLM may comprise the average features representing the positive examples for an item in a feature space. Alternatively, or in addition, one or more negative examples may also be applied to the MLA to train the MLM. The machine learning algorithm or the machine learning model trained via the MLA may comprise, for example, a deep learning neural network, or deep neural network (DNN), a generative adversarial network (GAN), a support vector machine (SVM), e.g., a binary, non-binary, or multi-class classifier, a linear or non-linear classifier, and so forth. In one example, the MLA may incorporate an exponential smoothing algorithm (such as double exponential smoothing, triple exponential smoothing, e.g., Holt-Winters smoothing, and so forth), reinforcement learning (e.g., using positive and negative examples after deployment as a MLM), and so forth. It should be noted that various other types of MLAs and/or MLMs may be implemented in examples of the present disclosure, such as k-means clustering and/or k-nearest neighbor (KNN) predictive models, support vector machine (SVM)-based classifiers, e.g., a binary classifier and/or a linear binary classifier, a multi-class classifier, a kernel-based SVM, etc., a distance-based classifier, e.g., a Euclidean distance-based classifier, or the like, and so on. In one example, a trained detection model may be configured to process those features which are determined to be the most distinguishing features of the associated item, e.g., those features which are quantitatively the most different from what is considered statistically normal or average from other items that may be detected via a same system, e.g., the top 20 features, the top 50 features, etc.
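As a minimal illustration of one of the model forms described above, an item signature comprising the average of the positive examples' feature vectors could be computed as follows; the example feature vectors are hypothetical.

```python
import numpy as np

def train_signature(positive_examples: np.ndarray) -> np.ndarray:
    """A simple item signature: the mean feature vector of the positive examples
    (one of the model forms described above; examples are rows of a feature matrix)."""
    return positive_examples.mean(axis=0)

# Hypothetical feature vectors (e.g., color histograms) of images known to contain seats.
seat_examples = np.array([[0.80, 0.10, 0.10],
                          [0.70, 0.20, 0.10],
                          [0.75, 0.15, 0.10]])
seat_signature = train_signature(seat_examples)
print(seat_signature)
```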
In one example, detection models (e.g., MLMs) may be deployed in AAVs, to process sensor data from one or more AAV sensor sources (e.g., cameras, LiDAR, and/or other sensors of AAVs), and to identify patterns in the features of the sensor data that match the detection model(s) for the respective item(s). In one example, a match may be determined using any of the visual features mentioned above, and further depending upon the weights, coefficients, etc. of the particular type of MLM. For instance, a match may be determined when there is a threshold measure of similarity among the features of the sensor data stream(s) and an item signature. In the present disclosure, locations for stand-up viewing may have designated markings on the ground such that these locations are visually identifiable and may have an associated detection model that may detect such locations from images captured from AAV imaging sensors (and similarly for wheelchair-accessible viewing locations). In one example, an AAV, such as AAV 160, may utilize on-board detection models in addition to a venue map of viewing locations, a GPS unit, and an altimeter, e.g., to confirm the AAV 160 is in a correct viewing location to capture viewing information from a plurality of positions (e.g., different heights at the viewing location).
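A threshold measure of similarity between observed features and an item signature might be realized, for example, as a cosine-similarity test; the threshold value below is an assumption.

```python
import numpy as np

def matches_signature(features: np.ndarray, signature: np.ndarray,
                      threshold: float = 0.9) -> bool:
    """Declare a detection when the cosine similarity between the observed feature
    vector and the item signature meets or exceeds a threshold."""
    sim = float(np.dot(features, signature) /
                (np.linalg.norm(features) * np.linalg.norm(signature) + 1e-12))
    return sim >= threshold

seat_signature = np.array([0.75, 0.15, 0.10])
print(matches_signature(np.array([0.78, 0.12, 0.10]), seat_signature))  # True: seat-like features
print(matches_signature(np.array([0.10, 0.10, 0.80]), seat_signature))  # False: dissimilar features
```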
In one example, the system 100 includes a telecommunication network 110. In one example, telecommunication network 110 may comprise a core network, a backbone network or transport network, such as an Internet Protocol (IP)/multi-protocol label switching (MPLS) network, where label switched routes (LSRs) can be assigned for routing Transmission Control Protocol (TCP)/IP packets, User Datagram Protocol (UDP)/IP packets, and other types of protocol data units (PDUs), and so forth. It should be noted that an IP network is broadly defined as a network that uses Internet Protocol to exchange data packets. However, it will be appreciated that the present disclosure is equally applicable to other types of data units and transport protocols, such as Frame Relay, and Asynchronous Transfer Mode (ATM). In one example, the telecommunication network 110 uses a network function virtualization infrastructure (NFVI), e.g., host devices or servers that are available as host devices to host virtual machines comprising virtual network functions (VNFs). In other words, at least a portion of the telecommunication network 110 may incorporate software-defined network (SDN) components.
In one example, one or more wireless access networks 115 may each comprise a radio access network implementing such technologies as: global system for mobile communication (GSM), e.g., a base station subsystem (BSS), or IS-95, a universal mobile telecommunications system (UMTS) network employing wideband code division multiple access (WCDMA), or a CDMA2000 network, among others. In other words, wireless access network(s) 115 may each comprise an access network in accordance with any “second generation” (2G), “third generation” (3G), “fourth generation” (4G), Long Term Evolution (LTE), “fifth generation” (5G), or any other existing or yet to be developed future wireless/cellular network technology. While the present disclosure is not limited to any particular type of wireless access network, in the illustrative example, base stations 117 and 118 may each comprise a Node B, evolved Node B (eNodeB), or gNodeB (gNB), or any combination thereof providing a multi-generational/multi-technology-capable base station. In the present example, user device 141 and AAV 160 may be in communication with base stations 117 and 118, which provide connectivity between AAV 160, user device 141, other endpoint devices within the system 100, and various network-based devices, such as server(s) 112, server(s) 125, and so forth. In one example, wireless access network(s) 115 may be operated by the same service provider that is operating telecommunication network 110, or one or more other service providers.
For instance, as shown in
As illustrated in
In accordance with the present disclosure, AAV 160 may include a camera 162 and one or more radio frequency (RF) transceivers 166 for cellular communications and/or for non-cellular wireless communications. In one example, AAV 160 may also include one or more module(s) 164 with one or more additional controllable components, such as one or more: microphones, loudspeakers, infrared, ultraviolet, and/or visible spectrum light sources, projectors, light detection and ranging (LiDAR) units, temperature sensors (e.g., thermometers), a global positioning system (GPS) unit, an altimeter, a gyroscope, a compass, and so forth.
In addition, AAV 160 may include an on-board processing system to perform steps, functions, and/or operations in connection with examples of the present disclosure for collecting via an autonomous aerial vehicle viewing information for a plurality of positions for a plurality of viewing locations within an event venue and presenting a viewing location selection interface that provides a simulated view with respect to at least one of the plurality of positions for at least one of the plurality of viewing locations, based upon the viewing information that is obtained. For instance, AAV 160 may comprise all or a portion of a computing device or processing system, such as computing system 400 as described in connection with
In an illustrative example, the event venue 190 may utilize at least AAV 160 to navigate to each viewing location within the event venue 190 to capture viewing information (e.g., camera images, video clips, audio clips, or the like, which may include panoramic images and/or video, 360 degree images and/or video, etc.) from a plurality of different positions (e.g., heights or elevations). In one example, AAV 160 may be provided with instructions for traversing at least a portion of the event venue 190 and capturing viewing information for a plurality of positions for each of the viewing locations therein. In addition, AAV 160 may be provided with a viewing location map of the event venue with marked/identified locations of different viewing locations. AAV 160 may navigate itself to such locations using an on-board GPS unit, may utilize an altimeter or the like to navigate to various positions/heights at such locations, and so forth. In one example, the map may comprise a 3D map (e.g., a LiDAR-generated map/rendering of the environment of the event venue 190) containing markers of seat locations. Using its own LiDAR unit and collecting LiDAR sensor data, AAV 160 may verify its position within the event venue 190 and navigate to one or more marked viewing locations in accordance with such a map. Alternatively or in addition, AAV 160 may traverse the event venue 190 and may detect and identify different viewing locations via collection of sensor data via on-board sensors (in this case, one or more of LiDAR sensor data, optical camera sensor data (e.g., images and/or video), or the like) and applying the sensor data as input(s) to one or more detection models for detecting seats, viewing boxes, stand-up viewing locations, wheelchair-accessible viewing locations, etc. In one example, the instructions and/or the map of the event venue 190 may be provided by or via the server(s) 125. For instance, server(s) 125 may be operated by or associated with the event venue 190. A venue owner or operator, an event producer, or the like may control server(s) 125 and may cause the server(s) 125 to provide the map to AAV 160, to instruct AAV 160 to commence viewing information collection at various viewing locations, etc.
For instance, as shown in
In one example, AAV 160 may also, at each viewing location, collect audio information via at least one microphone of AAV 160 for at least one of the plurality of positions. For instance, the audio information may comprise a recording of sample music, a short recording from a prior musical event or other events (e.g., crowd noise, announcements, etc. from a sporting event, or the like), and so forth. The audio information may similarly be stored and/or provided to server(s) 125 and/or server(s) 112 to be stored in a record for the viewing location in a viewing location database.
After viewing information (or other data, such as audio information) for different positions (e.g., heights) at different viewing locations is obtained via AAV 160 and/or other AAVs, server(s) 125 and/or server(s) 112 may then provide a viewing location selection interface that provides a simulated view with respect to at least one of the plurality of positions for at least one of the plurality of viewing locations, wherein the simulated view is based upon the viewing information that is obtained.
As noted above, an event venue may offer patrons real-time AAV-captured views of available viewing locations (e.g., for an event already in progress or starting soon and for which audience members are already being seated, team players are already warming up on the field, or orchestra musicians are already seated on a stage, etc.). To illustrate, in
The foregoing illustrates just one example of a system in which examples of the present disclosure for collecting via an autonomous aerial vehicle viewing information for a plurality of positions for a plurality of viewing locations within an event venue and presenting a viewing location selection interface that provides a simulated view with respect to at least one of the plurality of positions for at least one of the plurality of viewing locations, based upon the viewing information that is obtained may operate. In addition, although the foregoing example is described and illustrated in connection with a single AAV 160, a single patron 140, etc., it should be noted that various other scenarios may be supported in accordance with the present disclosure.
It should also be noted that the system 100 has been simplified. In other words, the system 100 may be implemented in a different form than that illustrated in
As just one example, one or more operations described above with respect to server(s) 125 may alternatively or additionally be performed by server(s) 112, and vice versa. In addition, although server(s) 112 and 125 are illustrated in the example of
In the example of
The event information section 250 provides information on the event for which a user may be searching for tickets and/or reservations, in this case a baseball game between Team A and Team B at 1:30 PM on Saturday Aug. 3, 2021. The current seat and view information zone 270 indicates that the user is currently viewing a particular seat: section 227, row F, seat 2. In addition, the current seat and view information zone 270 indicates the user is currently viewing the visual information (in viewing area 280) from a perspective of a height of 5 feet 11 inches (e.g., standing height for the user). The visual information in viewing area 280 for this height/position may be one of a set of different visual information captured by AAV 160 of
Notably, the viewing information presented in the viewing area 280 may comprise a simulated view comprising the viewing information that is obtained (e.g., a camera image), which is modified to include a simulation of audience members at the event, based upon a plurality of reservations of different viewing locations within the event venue. For instance, as discussed above, patron information may be collected voluntarily from other users/patrons (e.g., height information and/or other information) in connection with prior viewing location reservations that have been made for the same event. As such, the simulated view may include simulated audience members corresponding to viewing locations and user/patron information associated with such prior reservations. For instance, in the example of
It should be noted that the foregoing is just one example of a viewing location selection interface in accordance with the present disclosure. Thus, other, further, and different examples may be provided having different buttons or zones, different layouts, different types of information presented, and so forth. In addition, although a simulated view is shown with simulated audience members, in another example, simulated views may alternatively or additionally be provided without simulated audience members. For instance, a user may prefer to have a view that does not include such simulation. In still another example, there may be insufficient data collected regarding other audience members, or there may be an insufficient number of prior reservations. In these or other scenarios, in one example, a simulated audience may be generated based upon a prediction or forecast of a crowd, e.g., based upon past crowds/audience members for similar events. Thus, these and other modifications are all contemplated within the scope of the present disclosure.
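Returning to the simulated-view construction described above, placement of simulated audience members over a captured image could be sketched as follows; the screen coordinates, pixel scale, and default height are illustrative assumptions.

```python
def audience_overlays(reservations, viewed_seat, seat_screen_positions,
                      pixels_per_foot=22):
    """For each nearby reservation with a disclosed height, compute a bounding box
    (x, y, width, height in pixels) where a silhouette or avatar could be drawn
    over the captured image for the viewed seat."""
    overlays = []
    for r in reservations:
        seat_id = r["seat_id"]
        if seat_id == viewed_seat or seat_id not in seat_screen_positions:
            continue
        x, y = seat_screen_positions[seat_id]                         # base of the seat in the image
        h_px = int(r.get("height_ft", 5.7) * pixels_per_foot * 0.5)   # rough seated-height scale
        overlays.append({"seat_id": seat_id, "box": (x - 15, y - h_px, 30, h_px)})
    return overlays

overlays = audience_overlays(
    reservations=[{"seat_id": "sec227-rowE-seat2", "height_ft": 6.2}],
    viewed_seat="sec227-rowF-seat2",
    seat_screen_positions={"sec227-rowE-seat2": (320, 400)})
print(overlays)
```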
At optional step 310, the processing system may provide a viewing location map of an event venue and instructions to the at least one AAV to collect viewing information for a plurality of positions (e.g., heights) for each of a plurality of viewing locations within the event venue.
At step 315, the processing system collects, via at least one camera of at least one AAV, viewing information for the plurality of positions for each of the plurality of viewing locations within the event venue. For instance, the plurality of viewing locations may comprise seats, viewing boxes, locations for stand-up viewing, wheelchair-accessible viewing locations, and so forth. In addition, as noted above, for each of the plurality of viewing locations, the plurality of positions may comprise a plurality of viewing heights. In one example, the at least one AAV may be configured to identify viewing locations via at least one detection model for processing imaging sensor data and detecting at least one type of viewing location. In another example, the AAV may navigate in accordance with the viewing location map and a GPS unit and altimeter of the AAV. Alternatively, or in addition, the map may comprise a 3D LiDAR generated map with markers for viewing locations thereon. The AAV may thus utilize its LiDAR unit to detect its location in space using the 3D map and its current LiDAR readings and to navigate to a destination viewing location for viewing information collection.
In one example, step 315 may further include collecting via at least one microphone of the at least one AAV, audio information for at least one of the plurality of positions for each of the plurality of viewing locations within the event venue. For instance, the audio information may comprise a recording of sample music, a short recording from a prior musical performance or other event (e.g., crowd noise, announcements, etc. from a sporting event, or the like), and so forth.
At optional step 320, the processing system may obtain a plurality of reservations of different viewing locations. In one example, optional step 320 may include obtaining sizing information of a plurality of audience members associated with the plurality of reservations.
At optional step 325, the processing system may obtain sizing information from the user according to a user consent (e.g., at least a height of an individual who is likely to occupy each particular seat, e.g., a user may purchase four seats for his or her family and the user may also specify each occupant's height for each of the four seats). For instance, the user may input this information via a user interface, such as viewing location selection interface 200. Alternatively, or in addition, the user may elect to have this information stored from a prior reservation for use in making the current or future viewing location selections.
At step 330, the processing system presents a viewing location selection interface to the user, where the viewing location selection interface provides a simulated view with respect to at least one of the plurality of positions for at least one of the plurality of viewing locations, and where the simulated view is based upon the viewing information that is obtained. In one example, the at least one of the plurality of positions for the at least one of the plurality of viewing locations is selected for the user in accordance with the sizing information of the user. In one example, the at least one of the plurality of positions comprises at least two positions, where the at least two positions comprise a standing height of the user and a calculated seated height of the user. In addition, in one example, the simulated view may comprise a simulation of audience members at the event, based upon a plurality of reservations of different viewing locations within the event venue that may be obtained at optional step 320. For instance, the simulated view may include simulated audience members having sizes corresponding to the sizing information of the plurality of audience members (e.g., with the simulated audience members being included in the simulated view at locations corresponding to the different viewing locations of the respective reservations). In addition, in one example, the viewing location selection interface may further provide audio information for the at least one of the plurality of viewing locations that may be collected at step 315.
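For illustration, selecting a standing-height position and a calculated seated-height position from the captured heights might proceed as in the following sketch; the anthropometric ratio, seat height, and eye-level offset are assumptions.

```python
def positions_for_user(available_heights_ft, standing_height_ft,
                       sitting_ratio=0.52, seat_height_ft=1.5, eye_offset_ft=0.35):
    """Pick the two captured positions closest to the user's standing eye level and a
    calculated seated eye level (ratio and offsets are illustrative anthropometric assumptions)."""
    standing_eye = standing_height_ft - eye_offset_ft
    seated_eye = seat_height_ft + standing_height_ft * sitting_ratio - eye_offset_ft
    nearest = lambda target: min(available_heights_ft, key=lambda h: abs(h - target))
    return nearest(seated_eye), nearest(standing_eye)

# For a 5 ft 11 in (5.92 ft) user, roughly a 4 ft seated view and a 5.5 ft standing view.
print(positions_for_user([2.0, 4.0, 5.0, 5.5, 6.0, 6.5], 5.92))
```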
At step 335, the processing system obtains a selection from the user of a viewing location of the plurality of viewing locations for the event at the event venue. For instance, the user may choose to reserve a particular viewing location for which a simulated view is presented at step 330. For example, the user may select a button such as the seat purchase button 240 of
At optional step 340, the processing system may reserve the viewing location for the event in response to the selection. For instance, the processing system may mark the viewing location as “unavailable” in a viewing location database for the event. In addition, the processing system may charge a user account or may direct a device of the user to a payment processing system to complete the transaction, or the like.
At optional step 345, the processing system may deploy at least one AAV to the viewing location that is selected by the user. For instance, in one example, the selection of step 335 may not result in a reservation of the viewing location, but may instead cause an AAV to be dispatched to the viewing location. For instance, the viewing location selection interface of step 330 may be provided via a terminal outside the event venue and/or via a mobile device of the user when the mobile device is determined to be already at or within the event venue.
At optional step 350, the processing system may obtain current viewing information of at least one position at the viewing location, e.g., via a live feed from the AAV.
At optional step 355, the processing system may provide the current viewing information to the user via the viewing location selection interface. In one example, if the user is satisfied with this seat selection (e.g., as indicated by the decision step 360: “viewing location accepted?”), the method may proceed to optional step 340. Otherwise, the method may proceed back to step 330 or to step 395.
Following step 340 or one of optional steps 345 or 355, the method 300 may proceed to step 395. At step 395, the method 300 ends.
It should be noted that the method 300 may be expanded to include additional steps, or may be modified to replace steps with different steps, to combine steps, to omit steps, to perform steps in a different order, and so forth. For instance, in one example, the processing system may repeat one or more steps of the method 300, such as step 330 for different viewing locations, steps 325-340 for different users, etc. In various other examples, the method 300 may further include or may be modified to comprise aspects of any of the above-described examples in connection with
In addition, although not expressly specified above, one or more steps of the method 300 may include a storing, displaying and/or outputting step as required for a particular application. In other words, any data, records, fields, and/or intermediate results discussed in the method can be stored, displayed and/or outputted to another device as required for a particular application. Furthermore, operations, steps, or blocks in
Although only one hardware processor element 402 is shown, the computing system 400 may employ a plurality of hardware processor elements. Furthermore, although only one computing device is shown in
It should be noted that the present disclosure can be implemented in software and/or in a combination of software and hardware, e.g., using application specific integrated circuits (ASIC), a programmable logic array (PLA), including a field-programmable gate array (FPGA), or a state machine deployed on a hardware device, a computing device, or any other hardware equivalents, e.g., computer-readable instructions pertaining to the method(s) discussed above can be used to configure one or more hardware processor elements to perform the steps, functions and/or operations of the above disclosed method(s). In one example, instructions and data for the present module 405 for collecting via an autonomous aerial vehicle viewing information for a plurality of positions for a plurality of viewing locations within an event venue and presenting a viewing location selection interface that provides a simulated view with respect to at least one of the plurality of positions for at least one of the plurality of viewing locations, based upon the viewing information that is obtained (e.g., a software program comprising computer-executable instructions) can be loaded into memory 404 and executed by hardware processor element 402 to implement the steps, functions or operations as discussed above in connection with the example method(s). Furthermore, when a hardware processor element executes instructions to perform operations, this could include the hardware processor element performing the operations directly and/or facilitating, directing, or cooperating with one or more additional hardware devices or components (e.g., a co-processor and the like) to perform the operations.
The processor (e.g., hardware processor element 402) executing the computer-readable instructions relating to the above described method(s) can be perceived as a programmed processor or a specialized processor. As such, the present module 405 for collecting via an autonomous aerial vehicle viewing information for a plurality of positions for a plurality of viewing locations within an event venue and presenting a viewing location selection interface that provides a simulated view with respect to at least one of the plurality of positions for at least one of the plurality of viewing locations, based upon the viewing information that is obtained (including associated data structures) of the present disclosure can be stored on a tangible or physical (broadly non-transitory) computer-readable storage device or medium, e.g., volatile memory, non-volatile memory, ROM memory, RAM memory, magnetic or optical drive, device or diskette and the like. Furthermore, a “tangible” computer-readable storage device or medium may comprise a physical device, a hardware device, or a device that is discernible by the touch. More specifically, the computer-readable storage device or medium may comprise any physical devices that provide the ability to store information such as instructions and/or data to be accessed by a processor or a computing device such as a computer or an application server.
While various examples have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of a preferred example should not be limited by any of the above-described examples, but should be defined only in accordance with the following claims and their equivalents.