SYSTEMS AND METHODS FOR TRANSLATING USER SIGNALS INTO A VIRTUAL ENVIRONMENT HAVING A VISUALLY PERCEPTIBLE COMPETITIVE LANDSCAPE

Abstract
Techniques are disclosed for efficiently translating user signals that are received in association with an online auction to render a virtual environment that has a visually perceptible competitive landscape. Various participants' acquisition interest levels are determined by analyzing the participants' user activity in association with the online auction. Avatars that represent the participants are rendered differently based on the participants' level of interest in (e.g., motivation toward) acquiring the item that is being auctioned. In this way, the individual participants' avatars are rendered in the virtual environment in a manner such that the individual participants' level of interest in acquiring the item is visually perceptible. As a specific example, avatars may be rendered to appear more (or less) excited about the item as their corresponding user activity indicates that they are more (or less) likely to competitively bid on the item in a genuine attempt to win the online auction.
Description
BACKGROUND

Conventional online auction systems fail to provide meaningful and real-time insight into the competitive landscape for active online auctions. As an example, a conventional online auction system may provide users with a mere indication of how many other users have previously viewed a particular item that is currently being auctioned and/or how many other users have previously added the particular item to their “watch lists” (e.g., to receive updates after bids are submitted). However, merely knowing how many other users have previously viewed and/or “watch-listed” a particular item does not provide meaningful insight into whether any specific users are likely to competitively bid on the particular item. This is because many users casually browse through and even “watch-list” a multitude of online auctions without any intention whatsoever of actually submitting competitive bids in an aggressive effort to win a particular online auction.


The lack of insight into the competitive landscape surrounding particular online auctions unfortunately leads some users to lose interest in the particular auctions and then continue to browse through other auctions. For example, since each auction appears similar to other auctions, particular auctions may fail to grab the users' attention regardless of how competitive those particular auctions may actually be. Furthermore, even users that remain interested in the particular item are all too often lured into browsing through other online auctions for fungible or similar items in a futile effort to ascertain how aggressively they should be bidding for the particular item.


The unfortunate result of users lacking insight into the competitive landscape of online auctions is a significant increase in web traffic as users continue to browse—often aimlessly—through a multitude of auction webpages. The increased web traffic that stems from the aforementioned scenarios of course results in increased network bandwidth usage. For example, each additional auction webpage that users view while browsing through an online auctioneer's website results in an incremental increase in an amount of data that is transferred over various networks to and/from the server(s) that are hosting the online auctioneer's website. This increased web traffic also results in unnecessary utilization of other computing resources such as processing cycles, memory, and battery.


It is with respect to these and other technical challenges that the disclosure made herein is presented.


SUMMARY

In order to address the technical problems described briefly above, and potentially others, the disclosed technologies can efficiently translate user signals that are received in association with an online auction to render a virtual environment that has a visually perceptible competitive landscape. Through implementations of the disclosed technologies, a plurality of avatars can be rendered in a virtual environment such as, for example, a three-dimensional (3D) immersive environment that is associated with the online auction for an item. Individual avatars may be rendered in accordance with avatar modification states that specifically correspond to acquisition interest levels for participants of the online auction. Acquisition interest levels may be determined for individual participants based on user activity of these individual participants in association with the online auction for the item. Thus, if a particular participant exhibits user activity that indicates a high probability of an intention to competitively bid on the item in an aggressive effort to win the online auction, then this particular participant's avatar can be rendered in the virtual environment in a manner such that the particular participant's interest in the item is visually perceptible to other participants. As a specific example, an avatar that represents the particular participant within the virtual environment may be rendered with excited and/or enthusiastic facial expressions directed toward the item being auctioned.


The disclosed techniques can effectively retain participants' interests in an online auction by providing meaningful insight into the competitive landscape of the online auction. This can reduce or even eliminate the lure for these participants to aimlessly browse through other online auctions. Thus, by improving human-computer interaction with computing devices, the disclosed technologies tangibly improve computing efficiencies with respect to a wide variety of computing resources that would otherwise be wastefully consumed and/or utilized. This is because reducing the lure for participants to leave a “competitive” auction that is currently being viewed in order to browse through other auctions directly results in reduced network bandwidth usage and processing cycles consumed by server(s) that are hosting the online auctions. Technical benefits other than those specifically identified herein might also be realized through implementations of the disclosed technologies.


In one illustrative example, activity data that defines user activity that various participants perform in association with an online auction for an item is received. In some instances, the online auction may be conducted by an online auctioneer to facilitate competitive bidding by the various participants for the item. The online auctioneer may utilize one or both of a client-server computing architecture and a peer-to-peer computing architecture. Unlike conventional online auction systems, which monitor user activity in the aggregate for multiple users, in accordance with the present techniques the activity data may define the user activity on a per-user basis. For example, the activity data may indicate that a particular participant has viewed the online auction for the item several times per hour for the last several hours whereas one or more other participants have viewed the online auction only once and have not returned thereto. The activity data may further indicate that the particular participant has added the item to their “watch list” to trigger updates any time a bid is submitted for the item whereas the one or more other participants are not “watching” the item.
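
As a specific but non-limiting illustration, the per-user activity data might be represented by a record such as the following sketch (written in Python). The field names and the viewing-frequency helper are hypothetical and are shown only to make the per-user, rather than aggregate, nature of the activity data concrete:

from dataclasses import dataclass, field
from typing import List

@dataclass
class ParticipantActivity:
    """Hypothetical per-user activity record for a single online auction."""
    user_id: str
    auction_id: str
    view_timestamps: List[float] = field(default_factory=list)  # epoch seconds of each view
    watch_listed: bool = False   # whether the item was added to this user's "watch list"
    bids_submitted: int = 0      # number of bids this participant has placed so far

    def views_per_hour(self, now: float, window_hours: float = 4.0) -> float:
        """Viewing frequency over a recent window; one simple per-user signal."""
        recent = [t for t in self.view_timestamps if now - t <= window_hours * 3600.0]
        return len(recent) / window_hours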


An analysis of the activity data may be performed to identify, on a per-user basis, user signals that are indicative of acquisition interest levels for the various participants. Stated in plain terms, individual acquisition interest levels may indicate strengths of intentions of corresponding participants to acquire the item through the competitive bidding. Continuing with the example from above, the particular participant having added the item to their watchlist and continuing to view the online auction for the item several times per hour may indicate that the particular participant has very strong intentions of entering a winning bid toward the end of the auction. Therefore, based on these user signals, an acquisition interest level may be determined for the particular participant that is relatively higher than for other participants whose corresponding user signals indicate that they are relatively less motivated to acquire the item through the competitive bidding.


Avatar profile data that defines avatar profiles for the various participants may also be received and utilized to determine how to graphically render avatars for the various participants within the virtual environment. In some embodiments, the avatar profiles may facilitate dynamic modifications for three-dimensional (“3D”) models of the various participants. For example, a 3D model for a particular user may be dynamically modified as user signals are received that indicate that the particular user is more (or less) motivated to acquire the item being auctioned. In some implementations, individual participants may be enabled to define or otherwise control certain aspects of their corresponding avatars. For example, individual participants may be enabled to define various parameters for their avatar such as a hair color, a gender, a skin tone, a height, a build (e.g., a muscular body type, an average body type, a slender body type, etc.), a wardrobe, a voice profile, and/or any other suitable parameter. It can be appreciated, therefore, that an individual participant may define parameters for their corresponding avatar to cause the avatar to generally resemble what the individual participant looks like in real life.


Based on the avatar profile data, avatar modification states can be determined for the various participants' avatars that correspond on a per-user basis to the various participants' acquisition interest levels. Continuing again with the example from above, due to the particular participant having the very strong intentions to acquire the item via the competitive bidding, an avatar modification state may be determined for the particular participant's avatar to make the particular participant's intentions visually perceptible to others via the appearance of the particular participant's avatar. Furthermore, if user activity associated with another participant indicates that this other participant is generally interested in the item but does not yet indicate a strong intention to acquire the item, a different modification state can be determined for another avatar that represents this other participant in the virtual environment.


Then, one or more computing devices may be caused to display the avatars for the various participants in accordance with the avatar modification states that correspond to the various participants' acquisition interest levels. In some embodiments, the avatars may be displayed within the virtual environment alongside a graphical representation of the item being auctioned. It can be appreciated that by rendering the individual avatars in accordance with avatar modification states that graphically represent the acquisition interest levels for the various participants, aspects of the competitive landscape (e.g., degree of probable competition for acquiring the item) of the online auction are made immediately and visually apparent. Thus, in stark contrast to conventional online auctions, in accordance with the techniques described herein the competitive landscape for online auctions is made visually perceptible within a virtual environment associated with the online auction to acquire and retain users' interest in the online auction.


Aspects of the technologies disclosed herein can be implemented by a wearable computing device, such as an augmented reality (“AR”) device or virtual reality (“VR”) device. For example, a participant of an online auction might don the wearable computing device to view the virtual reality environment associated with the online auction. Then, the wearable device can render the avatars of the various participants of the online auction so that the excitement and/or motivation of the various participants—as indicated by their corresponding user activities—is readily and visually perceptible in a manner that is clearly lacking in conventional online auction systems.


The above-described subject matter can be implemented as a computer-controlled apparatus, a computer-implemented method, a computing device, or as an article of manufacture such as a computer-readable medium. These and various other features will be apparent from a reading of the following Detailed Description and a review of the associated drawings.


This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended that this Summary be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.





BRIEF DESCRIPTION OF THE DRAWINGS

The Detailed Description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same reference numbers in different figures indicate similar or identical items.



FIG. 1 illustrates aspects of an exemplary system for analyzing activity data that is received in association with online auctions to render a virtual environment that has a visually perceptible competitive landscape.



FIG. 2A illustrates an exemplary virtual environment in which an avatar that represents a participant is rendered to visually communicate an acquisition interest level of the participant.



FIG. 2B illustrates the exemplary virtual environment of FIG. 2A with an additional avatar being rendered to represent another participant that has performed user activity consistent with a high probability of competitively bidding on the item.



FIG. 2C illustrates the exemplary virtual environment of FIGS. 2A and 2B with the avatar that is initially shown in FIG. 2A being rendered in accordance with an avatar modification state corresponding to a “heightened” acquisition interest level.



FIG. 3 illustrates an alternate embodiment of a virtual environment via which aspects of an online auction are made to be visually perceptible to a participant of the online auction.



FIG. 4 is a flow diagram that illustrates an example process describing aspects of the technologies disclosed herein for efficiently translating user signals that are received in association with an online auction to render a virtual environment that has a visually perceptible competitive landscape.



FIG. 5 shows an illustrative configuration of a wearable device capable of implementing aspects of the technologies disclosed herein.



FIG. 6 illustrates additional details of an example computer architecture for a computer capable of implementing aspects of the technologies described herein.





DETAILED DESCRIPTION

This Detailed Description describes technologies for efficiently translating user signals that are received in association with an online auction to render a virtual environment that has a visually perceptible competitive landscape. In various implementations, avatars are rendered in a virtual environment that is generated to communicate a competitive landscape associated with an online auction that facilitates competitive bidding for an item. Various participants' acquisition interest levels may be determined by analyzing the participants' user activity in association with the online auction. In general terms, an acquisition interest level for a particular participant in association with an online auction is indicative of a probability that the particular participant will competitively bid for an item being auctioned off in the online auction. By determining the participants' acquisition interest levels, the participants' avatars may be rendered differently based on the participants' level of interest in (e.g., motivation toward) acquiring the item that is being auctioned. In this way, the individual participants' avatars can be rendered in a three-dimensional (“3D”) immersive environment in a manner such that the individual participants' level of interest in acquiring the item is visually perceptible. As a specific example, avatars may be rendered to appear more (or less) excited about the item as their corresponding user activity indicates that they are more (or less) likely to competitively bid on the item in a genuine attempt to win the online auction.


The disclosed techniques provide meaningful insight into the competitive landscape of the online auction and, by doing so, excite the participants' competitive nature so as to effectively retain participants' interests in the online auction. This can reduce or even eliminate the lure for these participants to aimlessly browse through other online auctions. In this way, the disclosed technologies tangibly improve human interaction with computing devices in a manner that improves computing efficiencies with respect to a wide variety of computing resources that would otherwise be wastefully consumed and/or utilized. This is because reducing the lure for participants to leave a “competitive” auction that is currently being viewed in order to browse through other auctions directly reduces both the network bandwidth and processing cycles consumed by server(s) that are hosting the online auctions. Technical benefits other than those specifically identified herein might also be realized through implementations of the disclosed technologies.


As described in more detail below, aspects of the technologies disclosed herein can be implemented by a wearable computing device such as, for example, an augmented reality (“AR”) device or virtual reality (“VR”) device. For example, a participant of an online auction might don a wearable computing device to view a virtual reality environment that is specifically tailored to visually communicate aspects of the competitive landscape of the online auction. For example, the wearable device can render avatars of various participants of the online auction so that the excitement and/or motivation of the various participants is readily and visually perceptible in a manner that is clearly lacking in conventional online auction systems. As used herein, the term “virtual environment” refers to any environment in which one or more user perceptible objects (e.g., avatars, display menus, price icons, etc.) are rendered virtually as opposed to existing within a real-world environment surrounding a user. Thus, it can be appreciated that an AR device may be effective at generating a virtual environment within the context of the present disclosure—even if some real-world objects remain perceptible to a user.


It is to be further appreciated that the technologies described herein can be implemented on a variety of different types of wearable devices configured with a variety of different operating systems, hardware components, and/or installed applications. In various configurations, for example, the wearable device can be implemented by the following example wearable devices: GOOGLE GLASS, MAGIC LEAP ONE, MICROSOFT HOLOLENS, META 2, SONY SMART EYEGLASS, HTC VIVE, OCULUS GO, PLAYSTATION VR, or WINDOWS mixed reality headsets. Thus, embodiments of the present disclosure can be implemented in any AR-capable device (i.e., a device that leaves real-world objects at least partially visible to a user) as well as in VR devices, such as goggles or headsets, that obstruct a user's view of real-world objects. The techniques described herein are device and/or operating system agnostic.


Turning now to FIG. 1, illustrated are various aspects of an exemplary system for analyzing activity data 102 that is received in association with a first online auction 104(1) to render a virtual environment 106 that has a visually perceptible competitive landscape. As illustrated, activity data 102 is received from client devices 108 that correspond to various individual participants 110 of the first online auction 104(1). More specifically, first activity data 102(1) is received via a first client device 108(1) that is being used by a first participant 110(1), second activity data 102(2) is received via a second client device 108(2) that is being used by a second participant 110(2), and so on.


In the illustrated embodiment, an online auctioneer system 112 is utilizing at least one database 114 to host a plurality of online auctions 104 (e.g., online auctions 104(1) through 104(N)). In this embodiment, the online auctioneer system 112 is configured in accordance with a client-server computing architecture in which activity data 102 is transferred between the online auctioneer system 112 and one or more client devices via at least one network 116. In some embodiments, the online auctioneer system 112 may be configured in accordance with a peer-to-peer computing architecture.


The online auctioneer system 112 may monitor various instances of the activity data 102 on a per-user basis. For example, first activity data 102(1) may be monitored for the first participant 110(1), second activity data 102(2) may be monitored for the second participant 110(2), and so on. For purposes of the discussion of FIG. 1, presume that the first activity data 102(1) indicates that the first participant 110(1) has added the item associated with the first auction 104(1) to her watchlist and that she has also opened a web browser to view the item several times per hour for the last several hours. Further presume that the second activity data 102(2) indicates that the second participant 110(2) has added the item associated with the first auction 104(1) to his watchlist and that he has also periodically opened a web browser to view the item—albeit not as frequently as the first participant 110(1).


The online auctioneer system 112 may then analyze the activity data 102 on a per-user basis to identify user signals that are indicative of acquisition interest levels for the various participants 110. The acquisition interest level determined for each particular participant may generally indicate a strength of that user's intentions to acquire the item through the competitive bidding.


As a specific but nonlimiting example, the user activities of the first participant 110(1) having added the item associated with the first auction 104(1) to her watchlist may be identified as a user signal that indicates an intention of the first participant 110(1) to acquire the item through the competitive bidding. That is, the first participant 110(1) having added the item to her watchlist serves as evidence that the first participant 110(1) will competitively bid on the item in the sense that her “watching” the item makes it objectively appear more probable that she intends to bid on the item than it would objectively appear had she not “watched” the item. Furthermore, the user activities of the first participant 110(1) having frequently opened the web browser to view the item over the last several hours may be identified as another user signal that indicates an intention of the first participant 110(1) to acquire the item through the competitive bidding. Thus, based on these identified user signals, a “first” acquisition interest level may be determined for the first participant 110(1).


Similarly, the user activities of the second participant 110(2) having added the item associated with the first auction 104(1) to his watchlist and also having frequently opened the web browser to view the item may be identified as user signals that indicate an intention of the second participant 110(2) to acquire the item through the competitive bidding. Thus, based on these identified user signals, a “second” acquisition interest level may be determined for the second participant 110(2). However, since the second participant 110(2) has viewed the item with a slightly lower frequency than the first participant 110(1), the “second” acquisition interest level that is determined for the second participant 110(2) may be slightly lower than the “first” acquisition interest level that is determined for the first participant 110(1). Stated plainly, the identified user signals may indicate that both the first participant 110(1) and the second participant 110(2) intend to competitively bid on the item but that the first participant is slightly more enthusiastic and/or motivated to do so.
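
One minimal way to combine such per-user signals into comparable acquisition interest levels is a weighted sum, sketched below in Python. The particular weights, caps, and signal names are illustrative assumptions rather than prescribed values:

def acquisition_interest_level(watch_listed: bool,
                               views_per_hour: float,
                               bids_submitted: int) -> float:
    """Hypothetical heuristic combining per-user signals into a score in [0, 1]."""
    score = 0.3 if watch_listed else 0.0            # "watching" is evidence of intent
    score += min(views_per_hour / 5.0, 1.0) * 0.4   # frequent re-viewing raises the score
    score += min(bids_submitted / 3.0, 1.0) * 0.3   # actual bids are the strongest signal
    return min(score, 1.0)

# Mirroring the example above: the first participant (watch-listed, ~4 views/hour)
# scores 0.62, while the second participant (watch-listed, ~2 views/hour) scores 0.46.
first_level = acquisition_interest_level(True, 4.0, 0)
second_level = acquisition_interest_level(True, 2.0, 0)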


In some embodiments, determining the acquisition interest levels for the various participants may be based on historical activity data 120 associated with the individual participants 110. The historical activity data 120 may define historical user activities of at least some of the plurality of participants in association with previous online auctions 104—i.e., online auctions that have already occurred. For example, the historical user activity data 120 may indicate trends of how users (either individually or as a general populace) tend to behave with respect to particular online auctions prior to bidding on those online auctions. As a specific but non-limiting example, the historical activity data 120 may reveal that it is commonplace for users to add an item to their watchlist and then somewhat compulsively view and re-view the item prior to beginning to enter competitive bids on the item.


In some embodiments, the historical activity data 120 may be “user specific” historical activity data that defines historical user activities of a specific participant in association with previous online auctions 104—i.e., online auctions that have already occurred. For example, if historical user activity that is stored in association with a specific user profile 125 indicates that this particular user frequently adds items to her watchlist without later bidding on the item, then this particular user adding an item to her watchlist may be given little or no weight with respect to determining this particular user's acquisition interest level for this item. In contrast however, if the historical user activity indicates that this particular user rarely adds items to her watchlist and always submits competitive bids for such items toward the end of the associated auctions, then this particular user adding an item to her watchlist may be weighed heavily in determining this particular user's acquisition interest level for this item.
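
A minimal sketch of the user-specific weighting described above follows; the conversion-rate computation and the neutral fallback weight are assumptions made only for illustration:

def watchlist_signal_weight(watched_auctions: int, watched_then_bid: int) -> float:
    """Weight the watch-list signal by how often this specific user's past
    "watching" actually converted into bidding, per the historical activity data."""
    if watched_auctions == 0:
        return 0.5  # no history for this user: fall back to a neutral weight
    return watched_then_bid / watched_auctions

# A user who watch-listed 40 items but bid on only 2 contributes weight 0.05,
# whereas a user who bid on all 5 items she ever watch-listed contributes weight 1.0.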


In some embodiments, the online auctioneer system 112 may utilize a machine learning engine 124 to identify correlations between certain types of user activities and competitively bidding on an item. The machine learning engine 124 may build and/or continually refine an acquisition interest model 118 based upon the identified correlations. For example, as illustrated, the online auctioneer system 112 may provide the activity data 102 (and/or the historical activity data 120) to the machine learning engine 124. The machine learning engine 124 may then use this data to build the acquisition interest model 118 which is a model that is usable to predict and/or output acquisition interest levels for individual participants based on the types of user activities that those participants perform in association with an online auction 104. Exemplary types of user activities that the machine learning engine 124 might identify as correlating with users competitively bidding on an item may include, but are not limited to, users adding items to their watchlists, users frequently checking a status of particular auctions, users leaving a particular auction open (e.g., in a web browser or other application) on their client device for long durations of time, users monitoring a particular auction without browsing through other auctions, and/or any other suitable activity that might be generally indicative of an increased likelihood of a participant competitively bidding on an item.


It should be appreciated that any appropriate machine learning techniques may also be utilized, such as unsupervised learning, semi-supervised learning, classification analysis, regression analysis, clustering, etc. One or more predictive models may also be utilized, such as a group method of data handling, Naïve Bayes, k-nearest neighbor algorithm, majority classifier, support vector machines, random forests, boosted trees, Classification and Regression Trees (CART), neural networks, ordinary least square, and so on.
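
As one concrete possibility consistent with the predictive models listed above, the acquisition interest model 118 could be trained as a random forest classifier over per-user activity features, with the predicted probability of competitive bidding serving as the acquisition interest level. The feature set and the toy training data in the following sketch are assumptions:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per (user, auction) pair derived from historical activity
# data 120: [watch_listed, views_per_hour, hours_left_open, other_auctions_browsed]
X_train = np.array([
    [1, 4.0, 6.0, 0],
    [1, 0.5, 0.2, 9],
    [0, 0.1, 0.0, 14],
    [1, 3.2, 5.5, 1],
])
y_train = np.array([1, 0, 0, 1])  # 1 = the user ultimately bid competitively

acquisition_interest_model = RandomForestClassifier(n_estimators=100, random_state=0)
acquisition_interest_model.fit(X_train, y_train)

# The predicted probability of the "will bid competitively" class is usable
# directly as an acquisition interest level for a new (user, auction) pair.
interest_level = acquisition_interest_model.predict_proba([[1, 2.0, 3.0, 2]])[0][1]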


The user profiles 125 may further include avatar profile data 122 that defines avatar profiles for the participants 110. The avatar profile data 122 may be utilized by the online auctioneer system 112 to determine how to graphically render avatars for the participants 110 within the virtual environment 106. In some embodiments, the avatar profiles may facilitate dynamic modifications for three-dimensional (“3D”) models of the various participants 110. For example, each of the first participant 110(1), the second participant 110(2), and the N-th participant 110(N) may have corresponding 3D models that may be rendered to graphically represent these participants' presence within the virtual environment 106 that is associated with the first auction 104(1).


In some embodiments, individual participants may define or otherwise control certain aspects of their corresponding avatars. For example, individual participants may be enabled to define a variety of parameters for their avatar such as a hair color, a gender, a skin tone, a height, a build (e.g., a muscular body type, an average body type, a slender body type, etc.), a wardrobe, a voice profile, and/or any other suitable parameter. It can be appreciated, therefore, that an individual participant may define parameters for their corresponding avatar to cause the avatar to generally resemble what the individual participant looks like in real life. For example, as described in more detail below, the first participant 110(1) may define parameters within her avatar profile so that the avatar that graphically represents her presence within the virtual environment generally resembles how she appears in real life. Similarly, the other participants 110 may also define parameters within their own avatar profiles so that their respective avatars also resemble them or, if they so choose, some sort of alter ego. For example, various users may define parameters to cause their avatar to appear as a dinosaur, a team mascot for a college football team, a robot, or any other suitable configuration.


The online auctioneer system 112 may use the avatar profile data 122 to determine avatar modification states for the various participants' avatars. The determined avatar modification states may correspond on a per-user basis to the various participants' acquisition interest levels. For example, a “first” avatar modification state may be determined for use with the first participant's 110(1) avatar profile based on the first participant 110(1) having added the item to her watchlist and also frequently checking the status of the first auction 104(1). In this way, a first avatar 126(1) that represents the first participant 110(1) within the virtual environment 106 may be rendered so that the first participant's 110(1) acquisition interest level relative to other participants is visually perceptible. The other participants' 110 avatars may also be rendered according to those participants' acquisition interest levels so that their relative acquisition interest levels are also visually perceptible. As illustrated, for example, a second avatar 126(2) that represents the second participant 110(2) is rendered so as to appear highly motivated—albeit slightly less so than the first avatar 126(1)—to acquire the item.
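
The translation from acquisition interest levels into avatar modification states could be as simple as thresholding the score into named states, as in the following sketch; the state names and threshold values are illustrative assumptions:

from enum import Enum

class AvatarModificationState(Enum):
    NEUTRAL = "neutral"        # idle pose, neutral expression
    INTERESTED = "interested"  # attentive expression, gesture toward the item
    EXCITED = "excited"        # enthusiastic expression, e.g., fingers crossed
    HEIGHTENED = "heightened"  # most visually prominent rendering

def modification_state(interest_level: float) -> AvatarModificationState:
    """Hypothetical thresholds mapping a [0, 1] interest level to a state."""
    if interest_level >= 0.8:
        return AvatarModificationState.HEIGHTENED
    if interest_level >= 0.6:
        return AvatarModificationState.EXCITED
    if interest_level >= 0.3:
        return AvatarModificationState.INTERESTED
    return AvatarModificationState.NEUTRAL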


In the illustrated embodiment, an N-th participant 110(N) is viewing the virtual environment 106 via a wearable device 128 such as, for example, an augmented reality (“AR”) device or virtual reality (“VR”) device. More specifically, the N-th participant 110(N) is wearing the wearable device 128 on his head and is viewing the virtual environment 106 associated with the first online auction 104(1). In the illustrated example, the virtual environment 106 is a VR environment in which the wearable device 128 is rendering the first avatar 126(1) and the second avatar 126(2) that represent the presence of the first participant 110(1) and the second participant 110(2), respectively. It can be appreciated from the illustrated avatars 126 that the excitement and/or motivation of the various participants 110—as indicated by their corresponding user activities—is readily and visually perceptible in a manner that is clearly lacking in conventional online auction systems.


In various embodiments illustrated herein, the virtual environment 106 is rendered in accordance with a first-person view. For example, as illustrated in FIG. 1, when viewing the virtual environment 106 the N-th participant is able to see avatars associated with the other participants of the auction (e.g., the first participant 110(1) and the second participant 110(2)) but not an avatar associated with himself. In various other embodiments, however, the virtual environment 106 is rendered in accordance with a second-person view or a third-person view.


In some embodiments, the avatars 126 may be displayed within the virtual environment 106 alongside a graphical representation of the item being auctioned. It can be appreciated that by rendering the individual avatars 126 in accordance with avatar modification states that graphically represent the acquisition interest levels for the various participants 110, aspects of the competitive landscape (e.g., degree of probable competition for acquiring the item) of the online auction 104 are made immediately and visually apparent. Thus, in stark contrast to conventional online auctions, in accordance with the techniques described herein the competitive landscape for online auctions is made visually perceptible within a virtual environment associated with the online auction to acquire and retain users' interest in the online auction. This can lessen the lure for the participants 110 to leave the first auction 104(1) to aimlessly browse through the other auctions 104(2) through 104(N). As described above, this provides a marked improvement to various computing resources by reducing unnecessary web browsing and, therefore, reducing network bandwidth usage.



FIGS. 2A through 2C illustrate aspects of an implementation of the techniques described herein in which the virtual environment is a three-dimensional immersive environment. For example, the participant that is viewing (e.g., peering into) the virtual environment may be enabled to walk around, similar to a virtual gaming environment.


Turning now to FIG. 2A, illustrated is an exemplary virtual environment 106 in which an avatar 126 that represents a participant 110 is rendered to visually communicate an acquisition interest level of the participant 110. For purposes of FIG. 2A, the illustrated avatar is the second avatar 126(2) that represents the second participant 110(2) of FIG. 1. As illustrated, the second avatar 126(2) is being rendered in an avatar modification state that is designed to communicate that the second participant 110(2) is generally interested in acquiring an item 202.


In some embodiments, a graphical representation of the item 202 may be shown within the virtual environment alongside the second avatar 126(2). The graphical representation of the item 202 may be a two-dimensional image of the item 202. For example, a seller may take a picture of the item for sale and upload the picture onto the online auctioneer system 112. Alternatively, the graphical representation of the item 202 may be a three-dimensional model of the item 202. For example, the seller may generate or otherwise obtain object data that defines a 3D model that is associated with the item that is being auctioned. Exemplary object data may include, but is not limited to, STEP files (i.e., 3D model files formatted according to the “Standard for the Exchange of Product Data”), IGES files (i.e., 3D model files formatted according to the “Initial Graphics Exchange Format”), glTF files (i.e., 3D model files formatted according to the “GL Transmission Format”), and/or any other suitable format for defining 3D models.
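
Where a seller supplies such object data, the system might load it with an off-the-shelf mesh library before rendering it within the virtual environment. The following sketch uses the third-party trimesh package, which reads glTF/GLB files; both the package choice and the file name are assumptions made for illustration:

import trimesh  # third-party mesh library with glTF/GLB support

# Hypothetically load the seller-supplied 3D model of the item being auctioned.
item_scene = trimesh.load("sports_tickets_display.glb")

# The axis-aligned bounding box can inform where to place the item's graphical
# representation alongside the avatars in the virtual environment.
print(item_scene.bounds)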


In FIG. 2A, the second avatar 126(2) is being rendered in accordance with a particular acquisition interest level that indicates that the second participant 110(2) has performed some user activity with respect to the item 202 which indicates he is at least generally interested in the item 202. However, as of this point, the second participant 110(2) has not performed user activity that indicates he is strongly motivated to acquire the item 202 through the competitive bidding. For example, perhaps the second participant 110(2) has viewed the item 202 a few times and maybe has even added the item to his watchlist but is not viewing the item 202 with a frequency that is high enough to indicate a high probability of aggressively bidding on the item 202. In this specific but nonlimiting example, the avatar modification state causes the second avatar 126(2) to be rendered with a facial expression and a hand gesture that visually communicates at least some interest in the item 202.


In FIG. 2A, the N-th participant is donning the wearable device 128 which is rendering the virtual environment 106. In this way, the N-th participant can “peer” into a virtual auction hall that corresponds to at least the online auction for the item 202. Upon “peering” into the virtual auction hall, the N-th participant can immediately obtain insight into the competitive landscape of the online auction for the item 202. Furthermore, it can be appreciated that as of the point in time illustrated in FIG. 2A, the landscape is not as competitive with respect to the item 202 as compared to the points in time illustrated in FIGS. 2B and 2C. For example, only the second participant 110(2) has performed user activities which indicate an interest in the item 202. Furthermore, relative to FIGS. 2B and 2C discussed below, no participants 110 have performed user activities that indicate a high probability that they will aggressively bid on the item 202.


In some embodiments, individual virtual environments may be specifically tailored to individual participants 110. For example, the virtual environment 106 illustrated in FIG. 2A is shown to include the “Sports Tickets” item 202 along with a “Watch” item 204 and a “Shoe” item 206. In some embodiments, the virtual environment 106 may be uniquely generated for the N-th participant 110(N) to include items that the N-th participant 110(N) has demonstrated an interest in. As a specific example, the N-th participant 110(N) may have added each of the “Sports Tickets” item 202, the “Watch” item 204, and the “Shoe” item 206 to his watchlist. Then, the virtual environment 106 is generated to include all of the items which the N-th participant is currently “watching.” In some embodiments, the virtual environment 106 for any particular participant 110 may include an indication of that virtual environment 106 being at least partially customized or tailored to the particular participant 110. For example, in the illustrated embodiment the virtual environment 106 includes the text of “Steve's Virtual Auction Hall” to indicate to the N-th participant 110(N) that the virtual environment 106 is his own.


In various embodiments, the virtual environment 106 may include one or more text fields 208 that display various types of information regarding the online auction. As illustrated, for example, a first text field 208(1) displays some specific information regarding the item 202 such as, for example, notes from the seller, which teams will compete, a section of the seats, how many tickets are included, a date that the event will take place, and any other suitable type of information. As further illustrated, the virtual environment 106 includes a second text field 208(2) that displays information regarding the competitive landscape of the online auction. For example, as illustrated, the text field 208(2) indicates that 5 participants have added the item 202 to their watchlists, 121 participants have viewed the item, and furthermore that a particular participant (i.e., the second participant 110(2)) with a username of “Super_Fan_#1” has performed user activity that demonstrates a general interest in the item. It can be appreciated that the second avatar 126(2) corresponds to this user.


Turning now to FIG. 2B, the exemplary virtual environment 106 of FIG. 2A is illustrated with an additional avatar being rendered to represent another participant that has performed user activity consistent with a high probability of competitively bidding on the item 202. For purposes of FIG. 2B, the additional avatar is the first avatar 126(1) that represents the first participant 110(1) of FIG. 1. As illustrated, the first avatar 126(1) is being rendered in an avatar modification state that is designed to communicate that the first participant 110(1) is highly motivated to acquire the item 202 by submitting competitive bids within the online auction. In the specific but nonlimiting example illustrated, the first avatar 126(1) is being rendered to represent the first participant 110(1) having her fingers crossed in hopes that she will “win” the online auction.


In some embodiments, the various modification states may be designed to dynamically change a size with which the avatars 126 are rendered based on the acquisition interest levels of the participants being represented. For example, as illustrated in FIG. 2B, the first avatar 126(1) is rendered relatively larger than the second avatar 126(2) due to the analyzed user activity indicating that the first participant 110(1) is more likely to competitively bid on the item 202 than the second participant 110(2). Thus, the first avatar 126(1) appears more prominent within the virtual environment 106 than the second avatar 126(2). In this way, the excitement of the first participant 110(1) toward the item may be contagious within the virtual environment 106 and “rub off” on the other participants by sparking various feelings such as, for example, urgency for the item 202, scarcity of the item 202, or competition for the item 202.
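
One simple realization of the size-based prominence described above is to scale each avatar's rendered 3D model by a factor interpolated from its participant's acquisition interest level; the scale bounds in this sketch are assumptions:

def avatar_render_scale(interest_level: float,
                        min_scale: float = 0.85,
                        max_scale: float = 1.25) -> float:
    """Linearly interpolate a render scale so that avatars of more-interested
    participants appear larger, and therefore more prominent, in the scene."""
    level = max(0.0, min(interest_level, 1.0))
    return min_scale + (max_scale - min_scale) * level

# Using the illustrative scores from earlier, the first avatar (level 0.62)
# renders at roughly 1.10x while the second avatar (level 0.46) renders at
# roughly 1.03x, making the difference in interest visually perceptible.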


Turning now to FIG. 2C, the exemplary virtual environment 106 of FIGS. 2A and 2B is illustrated with the second avatar 126(2) being rendered in accordance with an avatar modification state corresponding to a “heightened” acquisition interest level. That is, the acquisition interest level of the second participant 110(2) has been heightened in comparison to FIGS. 2A and 2B. For example, the second participant 110(2) may have also been viewing the virtual environment 106 and, therefore, may have seen the first avatar 126(1) that represents how excited the first participant 110(1) is about acquiring the “Sports Tickets” item 202. As a result, the second participant 110(2) may have become more excited about the “Sports Tickets” item 202 and begun to perform certain user activities that are consistent with a high probability that the second participant 110(2) would competitively bid against the first participant 110(1) in an effort to “win” the online auction.


By donning the wearable device 128 and “peering into” the virtual environment 106, the N-th participant 110(N) is enabled to visually perceive the increasingly competitive landscape of the online auction over time. In this way, the N-th participant's interest in the “Sports Tickets” item 202 may be better acquired and retained as opposed to conventional online auction systems for which a competitive landscape is visually imperceptible.


In some embodiments, the online auctioneer system 112 may monitor a status of the online auction in order to determine which participant currently holds a high bid for the item being auctioned. The online auctioneer system 112 may control and/or modify various aspects of the virtual environment 106 based upon which particular participant currently has the highest bid. As an example, an acquisition interest level for a particular participant may be determined based on that participant currently and/or frequently having the high bid for an item. Additionally, or alternatively, various aspects of a modification state for an avatar of a particular participant may be determined based on that particular participant currently and/or frequently having the high bid. For example, an avatar for the current high bidder may be rendered so as to appear more excited and/or happier than other avatars associated with participants that have not bid on the item and/or have lost “high bidder” status.


In some embodiments, the online auctioneer system 112 may exclusively provide a current high bidder with various avatar abilities with respect to the item. Such abilities may be exclusively provided to the particular participant in the sense that the other participants are not provided with the same abilities. As a specific but non-limiting example, a participant that is currently the high bidder for the item may be provided with the ability for their avatar to hold the item within the virtual environment and/or to taunt the other participant's avatars with the item. For example, the particular participant's avatar may hold a 3D model of the item and may walk up to other participants' avatars and hold the 3D model up to their face and then pull it away quickly while laughing. These avatar abilities that are exclusively provided to the particular participant may, in some implementations, be revoked in the event that some other participant outbids the particular participant. Then, these avatar abilities may be provided to this other participant so long as that participant continues to have the high bid.
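
A minimal sketch of granting and revoking the high-bidder-exclusive avatar abilities on each new high bid follows; the ability names and the class structure are illustrative assumptions:

HIGH_BIDDER_ABILITIES = {"hold_item", "taunt_with_item"}  # hypothetical ability names

class AuctionAvatarAbilities:
    """Track which participant currently holds the exclusive avatar abilities."""

    def __init__(self) -> None:
        self.high_bidder: str | None = None
        self.abilities: dict[str, set[str]] = {}

    def on_new_high_bid(self, user_id: str) -> None:
        """Revoke the abilities from the outbid participant and grant them
        to the new high bidder, as described above."""
        if self.high_bidder is not None:
            self.abilities[self.high_bidder] = set()
        self.high_bidder = user_id
        self.abilities[user_id] = set(HIGH_BIDDER_ABILITIES)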


Turning now to FIG. 3, illustrated is an alternate embodiment of a virtual environment 302 via which aspects of an online auction are made to be visually perceptible to a participant 110(N) of the online auction. In this alternate embodiment, a participant 110 that is wearing the wearable device 128 enters a virtual live auction experience in which an item (e.g., a guitar 304) is being auctioned. In various implementations, the virtual live auction experience displays images and/or identifiers of other participants bidding on the item 304. For example, as illustrated, the virtual live auction experience illustrates an avatar, a user photo, or some other representation for the other participants that are watching, bidding, or are likely to bid on the item 304. In some embodiments, a username (e.g., Jane D., Sam W., Beth L., John M., Expert Bidder, Music Guy86, Strummin Gal, etc.) may be illustrated adjacent to the various representations of the other participants.


As illustrated, the virtual live auction experience also displays the guitar 304 along with item information 306 about the guitar 304 (e.g., a manufacturer, a model, a description, etc.). The virtual auction experience further displays auction information 308 such as: a minimum bid (e.g., $150), a minimum bid increment (e.g., $5), a current high bid (e.g., $245 which belongs to Strummin Gal), time remaining in the live auction (e.g., two minutes and twenty-four seconds), total number of bids (e.g., 16), total number of bidders (e.g., 8—the seven shown in the virtual auction experience and the participant 110), a bid history, and so forth.


As illustrated, Beth L. is in the process of placing or has recently placed a new high bid of $250—which has not yet been, but soon will be, reflected within the auction information 308 once it is updated. Moreover, the wearable device 128 provides the participant with an option 310 to bid $255 or another amount that is higher than the current high bid. The virtual live auction experience can provide sound and other effects (e.g., a visual celebration for a winning bidder), as well.


In some embodiments, a computer system can receive activity data defining user activity from other members of an auction. The activity data can include computing activity such as watching, frequently viewing, or even bidding on the item. The activity data may also include gesture indicators that can be translated into an audio, haptic, and/or visual indicator for the participant. Each gesture of other participants can be ranked or can be related to a priority value and/or an acquisition interest level. For instance, when activity data received from other participants of an auction indicates that another participant is talking with a loud voice or has indicated a high bid, a high priority signal may be generated and translated into a computer-generated voice that is played to the participant. The computer-generated voice may indicate the intent of other participants. As described in more detail in relation to FIGS. 1 through 2C, such signals and associated priority values can be translated into body language that is displayed via an avatar to the participant on a graphical user interface. These types of signals are not usually present in online auctions, and the techniques described herein enable participants and auction platforms to benefit from types of signals that were previously available only in live auction environments.


In some embodiments, the participants of an online auction can also be ranked or prioritized with respect to one another. For instance, if a first participant is in an auction with three highly ranked participants, then the techniques described herein may cause audio, visual, and/or haptic indicators generated by the highly ranked participants to be more prominent to the user than audio, visual, and/or haptic indicators generated by lower ranked participants. The rankings may be based on prior user history and activity levels.
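
A sketch of the rank-weighted prominence described above follows, where the intensity of an audio, visual, or haptic indicator is scaled by the originating participant's rank; the weighting formula is an assumption made for illustration:

def indicator_prominence(base_intensity: float,
                         participant_rank: int,
                         total_participants: int) -> float:
    """Scale an audio/visual/haptic indicator so that signals originating from
    highly ranked participants render more prominently (rank 1 = highest)."""
    if total_participants <= 1:
        return base_intensity
    rank_weight = 1.0 - (participant_rank - 1) / (total_participants - 1)
    return base_intensity * (0.5 + 0.5 * rank_weight)  # lowest rank keeps half intensity

# An indicator from the top-ranked of four participants keeps full intensity;
# the same indicator from the lowest-ranked participant is rendered at half.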



FIG. 4 is a flow diagram that illustrates an example process 400 describing aspects of the technologies presented herein with reference to FIGS. 1-3 for efficiently translating user signals that are received in association with an online auction to render a virtual environment that has a visually perceptible competitive landscape. The process 400 is illustrated as a collection of blocks in a logical flow graph, which represent a sequence of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer-executable instructions that, when executed by one or more processors, perform the recited operations.


The particular implementation of the technologies disclosed herein is a matter of choice dependent on the performance and other requirements of the computing device. Accordingly, the logical operations described herein are referred to variously as states, operations, structural devices, acts, or modules. These states, operations, structural devices, acts, and modules can be implemented in hardware, software (i.e., computer-executable instructions), firmware, special-purpose digital logic, and any combination thereof. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform or implement particular functions. It should be appreciated that more or fewer operations can be performed than shown in the figures and described herein. These operations can also be performed in a different order than those described herein. Other processes described throughout this disclosure shall be interpreted accordingly.


At block 401, a system receives activity data that defines user activity of a plurality of participants in association with an online auction for an item. The user activity may indicate on a per-user basis which participants of the online auction have viewed the online auction, a frequency with which the participants have viewed the online auction, which participants have bid on the item, which participants have scheduled bids for the item, which participants have added the item to their watchlist, and any other suitable user activity that may be indicative of whether specific participants will likely attempt to acquire the item via competitive bidding.


Unlike conventional online auction systems, which monitor user activity in the aggregate for multiple users, in accordance with the present techniques the activity data may define the user activity on a per-user basis. For example, the activity data may indicate that a particular participant has viewed the online auction for the item several times per hour for the last several hours whereas one or more other participants have viewed the online auction only once and have not returned thereto. The activity data may further indicate that the particular participant has added the item to their “watch list” to trigger updates any time a bid is submitted for the item whereas the one or more other participants are not “watching” the item.


In some implementations, the activity data may define physical activities performed by the individual participants within their corresponding real-world environments. For example, suppose that a particular participant is donning a virtual reality headset to become immersed into a three-dimensional immersive environment as described herein. Further suppose that while immersed therein, the participant verbally states “Wow, I'd give anything for those ‘Sports Event’ Tickets.” In some implementations, the virtual reality headset may detect and analyze the participant's statement. Additionally, or alternatively, the virtual reality device may detect gestures (e.g., via a camera or other type of sensor) of the participant and analyze these gestures to determine how excited and/or motivated the participant is to acquire the item.


At block 403, the activity data may be analyzed to identify user signals that indicate acquisition interest levels for the plurality of participants. The analysis of the activity data may be performed on a per-user basis so that identified user signals are indicative of acquisition interest levels for the various participants on a per-user basis. Stated in plain terms, individual acquisition interest levels may indicate strengths of intentions of corresponding participants to acquire the item through the competitive bidding. For example, the particular participant having added the item to their watchlist and continuing to view the online auction for the item several times per hour may indicate that the particular participant has very strong intentions of entering a winning bid toward the end of the auction. Therefore, based on these user signals, an acquisition interest level may be determined for the particular participant that is relatively higher than for other participants whose corresponding user signals indicate that they are relatively less motivated to acquire the item through the competitive bidding.


At block 405, avatar profile data may be received that defines avatar profiles for the various participants. The avatar profile data may be utilized to determine how to graphically render avatars for the various participants within the virtual environment. The avatar profiles may facilitate dynamic modifications for three-dimensional (“3D”) models of the various participants. For example, a 3D model for a particular user may be dynamically modified as user signals are received that indicate that the particular user is more (or less) motivated to acquire the item being auctioned.


In some implementations, individual participants may be enabled to define or otherwise control certain aspects of their corresponding avatars. For example, individual participants may be enabled to define various parameters for their avatar such as a hair color, a gender, a skin tone, a height, a build (e.g., a muscular body type, an average body type, a slender body type, etc.), a wardrobe, a voice profile, and/or any other suitable parameter. It can be appreciated, therefore, that an individual participant may define parameters for their corresponding avatar to cause the avatar to generally resemble what the individual participant looks like in real life.


At block 407, avatar modification states are determined for the various participants' avatars. The avatar modification states may specifically correspond on a per-user basis to the various participants' acquisition interest levels. For example, due to the particular participant having the very strong intentions to acquire the item via the competitive bidding, an avatar modification state may be determined for the particular participant's avatar to make the particular participant's intentions visually perceptible to others via the appearance of the particular participant's avatar.


At block 409, one or more computing devices may be caused to display the avatars for the various participants in accordance with the avatar modification states that correspond to the various participants' acquisition interest levels. In some embodiments, the avatars may be displayed within the virtual environment alongside a graphical representation of the item being auctioned. It can be appreciated that by rendering the individual avatars in accordance with avatar modification states that graphically represent the acquisition interest levels for the various participants, aspects of the competitive landscape (e.g., degree of probable competition for acquiring the item) of the online auction are made immediately and visually apparent. Thus, in stark contrast to conventional online auctions, in accordance with the techniques described herein the competitive landscape for online auctions is made visually perceptible within a virtual environment associated with the online auction to acquire and retain users' interest in the online auction.
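A minimal sketch of how the determined states might be packaged for display on the participants' devices follows (the payload layout and identifiers are illustrative assumptions; a real system would stream engine-specific scene updates):

    import json

    # Per-participant states as determined at blocks 403-407 (sample values).
    render_states = {
        "user_a": {"state": "ecstatic", "interest_level": 0.9},
        "user_b": {"state": "indifferent", "interest_level": 0.1},
    }

    def render_payload(item_id: str, states: dict) -> str:
        """Bundle the auctioned item and avatar states into one display update."""
        return json.dumps({"item": item_id, "avatars": states})

    # Each connected AR/VR client applies the payload to its local scene graph.
    print(render_payload("auction-123", render_states))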



FIG. 5 shows an illustrative configuration of a wearable device 500 (e.g., a headset system, a head-mounted display, etc.) capable of implementing aspects of the technologies disclosed herein. The wearable device 500 includes an optical system 502 with an illumination engine 504 to generate electro-magnetic (“EM”) radiation that includes both a first bandwidth for generating computer-generated (“CG”) images and a second bandwidth for tracking physical objects. The first bandwidth may include some or all of the visible-light portion of the EM spectrum whereas the second bandwidth may include any portion of the EM spectrum that is suitable to deploy a desired tracking protocol.


In the example configuration, the optical system 502 further includes an optical assembly 506 that is positioned to receive the EM radiation from the illumination engine 504 and to direct the EM radiation (or individual bandwidths thereof) along one or more predetermined optical paths. For example, the illumination engine 504 may emit the EM radiation into the optical assembly 506 along a common optical path that is shared by both the first bandwidth and the second bandwidth. The optical assembly 506 may also include one or more optical components that are configured to separate the first bandwidth from the second bandwidth (e.g., by causing the first and second bandwidths to propagate along different image-generation and object-tracking optical paths, respectively).


The optical assembly 506 includes components that are configured to direct the EM radiation with respect to one or more components of the optical assembly 506 and, more specifically, to direct the first bandwidth for image-generation purposes and to direct the second bandwidth for object-tracking purposes. In this example, the optical system 502 further includes a sensor 508 to generate object data in response to a reflected portion of the second bandwidth, i.e., a portion of the second bandwidth that is reflected off an object that exists within a real-world environment.


In various configurations, the wearable device 500 may utilize the optical system 502 to generate a composite view (e.g., from a perspective of a user 128 that is wearing the wearable device 500) that includes both one or more CG images and a view of at least a portion of the real-world environment that includes the object. For example, the optical system 502 may utilize various technologies such as, for example, AR technologies to generate composite views that include CG images superimposed over a real-world view 126. As such, the optical system 502 may be configured to generate CG images via a display panel. The display panel can include separate right eye and left eye transparent display panels.


Alternatively, the display panel can include a single transparent display panel that is viewable with both eyes and/or a single transparent display panel that is viewable by a single eye only. Therefore, it can be appreciated that the technologies described herein may be deployed within a single-eye Near Eye Display (“NED”) system (e.g., GOOGLE GLASS) and/or a dual-eye NED system (e.g., OCULUS RIFT). The wearable device 500 is an example device that is used to provide context and illustrate various features and aspects of the user interface display technologies and systems disclosed herein. Other devices and systems, such as VR systems, may also use the interface display technologies and systems disclosed herein.


The display panel may be a waveguide display that includes one or more diffractive optical elements (“DOEs”) for in-coupling incident light into the waveguide, expanding the incident light in one or more directions for exit pupil expansion, and/or out-coupling the incident light out of the waveguide (e.g., toward a user's eye). In some examples, the wearable device 500 may further include an additional see-through optical component.


In the illustrated example of FIG. 5, a controller 510 is operatively coupled to each of the illumination engine 504, the optical assembly 506 (and/or scanning devices thereof), and the sensor 508. The controller 510 includes one or more logic devices and one or more computer memory devices storing instructions executable by the logic device(s) to deploy functionalities described herein with relation to the optical system 502. The controller 510 can comprise one or more processing units 512, one or more computer-readable media 514 for storing an operating system 516 and data such as, for example, image data that defines one or more CG images and/or tracking data that defines one or more object tracking protocols.


The computer-readable media 514 may further include an image-generation engine 518 that generates output signals to modulate generation of the first bandwidth of EM radiation by the illumination engine 504 and also to control the scanner(s) to direct the first bandwidth within the optical assembly 506. Ultimately, the scanner(s) direct the first bandwidth through a display panel to generate CG images, such as a user interface, that are perceptible to a user.


The computer-readable media 514 may further include an object-tracking engine 520 that generates output signals to modulate generation of the second bandwidth of EM radiation by the illumination engine 504 and also to control the scanner(s) to direct the second bandwidth along an object-tracking optical path to irradiate an object. The object tracking engine 520 communicates with the sensor 508 to receive the object data that is generated based on the reflected portion of the second bandwidth.


The object tracking engine 520 then analyzes the object data to determine one or more characteristics of the object such as, for example, a depth of the object with respect to the optical system 502, an orientation of the object with respect to the optical system 502, a velocity and/or acceleration of the object with respect to the optical system 502, or any other desired characteristic of the object. The components of the wearable device 500 are operatively connected, for example, via a bus 522, which can include one or more of a system bus, a data bus, an address bus, a PCI bus, a Mini-PCI bus, and any variety of local, peripheral, and/or independent buses.
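As a hedged illustration of the characteristic determination performed by the object tracking engine 520, velocity could be estimated by finite differencing successive tracked positions (the sample format is an assumption for illustration):

    # Minimal sketch: estimate an object's velocity from two tracked samples.
    def estimate_velocity(p0, p1, t0, t1):
        """Return a per-axis velocity vector (units per second)."""
        dt = t1 - t0
        if dt <= 0:
            raise ValueError("samples must be time-ordered")
        return tuple((b - a) / dt for a, b in zip(p0, p1))

    # estimate_velocity((0.0, 0.0, 1.0), (0.1, 0.0, 0.9), 0.00, 0.05)
    # -> (2.0, 0.0, -2.0): the object drifts right and toward the sensor.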


The wearable device 500 may further include various other components, for example cameras (e.g., camera 524), microphones (e.g., microphone 526), accelerometers, gyroscopes, magnetometers, temperature sensors, touch sensors, biometric sensors, other image sensors, energy-storage components (e.g., a battery), a communication facility, a GPS receiver, etc. Furthermore, the wearable device 500 can include one or more eye gaze sensors 528. In at least one example, an eye gaze sensor 528 is user facing and is configured to track the position of at least one eye of a user. Accordingly, eye position data (e.g., determined via use of eye gaze sensor 528), image data (e.g., determined via use of the camera 524), and other data can be processed to identify a gaze path of the user. That is, it can be determined that the user is looking at a particular section of a hardware display surface, a particular real-world object or part of a real-world object in the view of the user, and/or a rendered object or part of a rendered object displayed on a hardware display surface.
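Identifying the gazed-at object could be sketched as selecting the rendered object whose center lies closest to the gaze ray (the object representation and the unit-vector assumption are illustrative, not prescribed by this disclosure):

    import math

    # Minimal sketch: pick the object center nearest the user's gaze ray.
    # `direction` is assumed to be a unit vector derived from the eye gaze
    # sensor 528; `objects` maps names to 3D center coordinates.
    def gaze_target(eye, direction, objects):
        best, best_angle = None, math.pi
        for name, center in objects.items():
            to_obj = [c - e for c, e in zip(center, eye)]
            norm = math.sqrt(sum(v * v for v in to_obj)) or 1.0
            cos_a = sum(d * v / norm for d, v in zip(direction, to_obj))
            angle = math.acos(max(-1.0, min(1.0, cos_a)))
            if angle < best_angle:
                best, best_angle = name, angle
        return best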


In some configurations, the wearable device 500 can include an actuator 529. The processing units 512 can cause the generation of a haptic signal associated with a generated haptic effect to actuator 529, which in turn outputs haptic effects such as vibrotactile haptic effects, electrostatic friction haptic effects, or deformation haptic effects. Actuator 529 includes an actuator drive circuit. The actuator 529 may be, for example, an electric motor, an electro-magnetic actuator, a voice coil, a shape memory alloy, a solenoid, an eccentric rotating mass motor ("ERM"), a linear resonant actuator ("LRA"), a piezoelectric actuator, a high bandwidth actuator, an electroactive polymer ("EAP") actuator, an electrostatic friction display, or an ultrasonic vibration generator.


In alternate configurations, wearable device 500 can include one or more additional actuators 529. The actuator 529 is an example of a haptic output device, where a haptic output device is a device configured to output haptic effects, such as vibrotactile haptic effects, electrostatic friction haptic effects, or deformation haptic effects, in response to a drive signal. In alternate configurations, the actuator 529 can be replaced by some other type of haptic output device. Further, in other alternate configurations, wearable device 500 may not include actuator 529, and a separate device from wearable device 500 includes an actuator, or other haptic output device, that generates the haptic effects, and wearable device 500 sends generated haptic signals to that device through a communication device.


The processing unit(s) 512, can represent, for example, a CPU-type processing unit, a GPU-type processing unit, a field-programmable gate array (“FPGA”), another class of digital signal processor (“DSP”), or other hardware logic components that may, in some instances, be driven by a CPU. For example, and without limitation, illustrative types of hardware logic components that can be used include Application-Specific Integrated Circuits (“ASICs”), Application-Specific Standard Products (“ASSPs”), System-on-a-Chip Systems (“SOCs”), Complex Programmable Logic Devices (“CPLDs”), etc.


As used herein, computer-readable media, such as computer-readable media 514, can store instructions executable by the processing unit(s) 512. Computer-readable media can also store instructions executable by external processing units such as by an external CPU, an external GPU, and/or executable by an external accelerator, such as an FPGA-type accelerator, a DSP-type accelerator, or any other internal or external accelerator. In various examples, at least one CPU, GPU, and/or accelerator is incorporated in a computing device, while in some examples one or more of a CPU, GPU, and/or accelerator is external to a computing device.


In various examples, the wearable device 500 is configured to interact, via network communications, with a network device (e.g., a network server or a cloud server) to implement the configurations described herein. For instance, the wearable device 500 may collect data and send the data over network(s) to the network device. The network device may then implement some of the functionality described herein (e.g., analyze passive signals, determine user interests, select a recommended item, etc.). Subsequently, the network device can cause the wearable device 500 to display an item and/or instruct the wearable device 500 to perform a task.


Computer-readable media can include computer storage media and/or communication media. Computer storage media can include one or more of volatile memory, nonvolatile memory, and/or other persistent and/or auxiliary computer storage media, removable and non-removable computer storage media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Thus, computer storage media includes tangible and/or physical forms of media included in a device and/or hardware component that is part of a device or external to a device, including but not limited to random access memory (“RAM”), static random-access memory (“SRAM”), dynamic random-access memory (“DRAM”), phase change memory (“PCM”), read-only memory (“ROM”), erasable programmable read-only memory (“EPROM”), electrically erasable programmable read-only memory (“EEPROM”), flash memory, rotating media, optical cards or other optical storage media, magnetic storage, magnetic cards or other magnetic storage devices or media, solid-state memory devices, storage arrays, network attached storage, storage area networks, hosted computer storage or any other storage memory, storage device, and/or storage medium that can be used to store and maintain information for access by a computing device.


In contrast to computer storage media, communication media can embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave, or other transmission mechanism. As defined herein, computer storage media does not include communication media. That is, computer storage media does not include communications media consisting solely of a modulated data signal, a carrier wave, or a propagated signal, per se.



FIG. 6 shows additional details of an example computer architecture for a computer capable of executing the functionalities described herein such as, for example, those described with reference to FIGS. 1A-4, or any program components thereof as described herein. Thus, the computer architecture 600 illustrated in FIG. 6 represents an architecture for a server computer, a network of server computers, or any other type of computing device suitable for implementing the functionality described herein. The computer architecture 600 may be utilized to execute any aspects of the software components presented herein, such as software components for implementing the e-commerce system 116 and the item listing tool 102.


The computer architecture 600 illustrated in FIG. 6 includes a central processing unit 602 (“CPU”), a system memory 604, including a random-access memory 606 (“RAM”) and a read-only memory (“ROM”) 608, and a system bus 610 that couples the memory 604 to the CPU 602. A basic input/output system containing the basic routines that help to transfer information between elements within the computer architecture 600, such as during startup, is stored in the ROM 608. The computer architecture 600 further includes a mass storage device 612 for storing an operating system 614, other data, and one or more application programs. The mass storage device 612 may further include one or more of the activity data 102, auction data 104, user profiles 125, or the machine learning engine 125, and/or any of the other software or data components described herein.


The mass storage device 612 is connected to the CPU 602 through a mass storage controller (not shown) connected to the bus 610. The mass storage device 612 and its associated computer-readable media provide non-volatile storage for the computer architecture 600. Although the description of computer-readable media contained herein refers to a mass storage device, such as a solid-state drive, a hard disk or CD-ROM drive, it should be appreciated by those skilled in the art that computer-readable media can be any available computer storage media or communication media that can be accessed by the computer architecture 600.


According to various implementations, the computer architecture 600 may operate in a networked environment using logical connections to remote computers through a network 650 and/or another network (not shown). The computer architecture 600 may connect to the network 650 through a network interface unit 616 connected to the bus 610. It should be appreciated that the network interface unit 616 also may be utilized to connect to other types of networks and remote computer systems. The computer architecture 600 also may include an input/output controller 618 for receiving and processing input from a number of other devices, including a keyboard, mouse, or electronic stylus (not shown in FIG. 6). Similarly, the input/output controller 618 may provide output to a display screen, a printer, or other type of output device (also not shown in FIG. 6). It should also be appreciated that a computing system implemented using the disclosed computer architecture 600 may communicate with other computing systems.


It should be appreciated that the software components described herein may, when loaded into the CPU 602 and executed, transform the CPU 602 and the overall computer architecture 600 from a general-purpose computing system into a special-purpose computing system customized to facilitate the functionality presented herein. The CPU 602 may be constructed from any number of transistors or other discrete circuit elements, which may individually or collectively assume any number of states. More specifically, the CPU 602 may operate as a finite-state machine, in response to executable instructions contained within the software modules disclosed herein. These computer-executable instructions may transform the CPU 602 by specifying how the CPU 602 transitions between states, thereby transforming the transistors or other discrete hardware elements constituting the CPU 602.


Encoding the software modules presented herein also may transform the physical structure of the computer-readable media presented herein. The specific transformation of physical structure may depend on various factors, in different implementations of this description. Examples of such factors may include, but are not limited to, the technology used to implement the computer-readable media, whether the computer-readable media is characterized as primary or secondary storage, and the like. For example, if the computer-readable media is implemented as semiconductor-based memory, the software disclosed herein may be encoded on the computer-readable media by transforming the physical state of the semiconductor memory. For example, the software may transform the state of transistors, capacitors, or other discrete circuit elements constituting the semiconductor memory. The software also may transform the physical state of such components in order to store data thereupon.


As another example, the computer-readable media disclosed herein may be implemented using magnetic or optical technology. In such implementations, the software presented herein may transform the physical state of magnetic or optical media, when the software is encoded therein. These transformations may include altering the magnetic characteristics of particular locations within given magnetic media. These transformations also may include altering the physical features or characteristics of particular locations within given optical media, to change the optical characteristics of those locations. Other transformations of physical media are possible without departing from the scope and spirit of the present description, with the foregoing examples provided only to facilitate this discussion.


In light of the above, it should be appreciated that many types of physical transformations take place in the computer architecture 600 in order to store and execute the software components presented herein. It also should be appreciated that the computer architecture 600 may include other types of computing devices, including smartphones, embedded computer systems, tablet computers, other types of wearable computing devices, and other types of computing devices known to those skilled in the art. It is also contemplated that the computer architecture 600 may not include all of the components shown in FIG. 6, may include other components that are not explicitly shown in FIG. 6, or may utilize an architecture completely different than that shown in FIG. 6.


Illustrative Configuration

The following clauses describe multiple possible configurations for implementing the features described in this disclosure. The various configurations described herein are not limiting, nor is every feature from any given configuration required to be present in another configuration. Any two or more of the configurations may be combined together unless the context clearly indicates otherwise. As used in this document, "or" means and/or. For example, "A or B" means A without B, B without A, or A and B. As used herein, "comprising" means including all listed features and potentially including other features that are not listed. "Consisting essentially of" means including the listed features and those additional features that do not materially affect the basic and novel characteristics of the listed features. "Consisting of" means only the listed features to the exclusion of any feature not listed.


The disclosure presented herein also encompasses the subject matter set forth in the following clauses:


Example Clause A, a computer-implemented method, comprising: receiving activity data defining user activity of a plurality of participants in association with an online auction for an item, wherein the online auction is being conducted by an online auctioneer system to facilitate competitive bidding by the plurality of participants for the item; analyzing the activity data to identify user signals that indicate acquisition interest levels for the plurality of participants, wherein individual acquisition interest levels are indicative of intentions of individual participants to acquire the item through the competitive bidding; receiving avatar profile data defining avatar profiles that facilitate dynamic modifications for three-dimensional models for the plurality of participants; determining, based on the avatar profile data, avatar modification states that correspond to the individual acquisition interest levels for the individual participants; and causing at least one computing device to display, in a virtual environment associated with the online auction, a graphical representation of the item and a plurality of avatars, wherein individual avatars are rendered in accordance with individual avatar modification states to graphically represent the individual acquisition interest levels for the individual participants.


Example Clause B, the computer-implemented method of Example Clause A, further comprising receiving historical activity data defining historical user activity of at least some of the plurality of participants in association with previous online auctions, wherein identifying the user signals that indicate the acquisition interest levels is based at least in part on the historical activity data.


Example Clause C, the computer-implemented method of any one of Example Clauses A through B, further comprising: receiving user specific historical activity data defining historical user activity of a particular participant, of the plurality of participants, in association with previous online auctions, wherein a particular acquisition interest level for the particular participant is determined based at least in part on the user specific historical activity data.


Example Clause D, the computer-implemented method of any one of Example Clauses A through C, wherein the avatar modification states facilitate rendering the plurality of avatars in accordance with a plurality of expressive characteristics based on the acquisition interest levels.


Example Clause E, the computer-implemented method of any one of Example Clauses A through D, further comprising: monitoring the online auction to determine which particular participant, of the plurality of participants, currently has a high bid for the item; and determining at least some aspects of a particular modification state for a particular avatar that corresponds to the particular participant, based on the particular participant currently having the high bid for the item.


Example Clause F, the computer-implemented method of any one of Example Clauses A through E, wherein the virtual environment is a three-dimensional immersive environment in which one or more three-dimensional objects are rendered.


Example Clause G, the computer-implemented method of any one of Example Clauses A through F, further comprising: monitoring the online auction to determine which particular participant, of the plurality of participants, currently has a high bid for the item; and providing the particular participant with at least some avatar abilities with respect to the item within the three-dimensional immersive environment.


Example Clause H, the computer-implemented method of any one of Example Clauses A through G, wherein the at least one computing device comprises an augmented reality (AR) device or a virtual reality (VR) device.


Example Clause I, a system, comprising: one or more processors; and a memory in communication with the one or more processors, the memory having computer-readable instructions stored thereupon that, when executed by the one or more processors, cause the one or more processors to: receive activity data defining user activity of a plurality of participants in association with an online auction that facilitates competitive bidding, by the plurality of participants, for an item; analyze the activity data to identify user signals that indicate acquisition interest levels associated with intentions of the plurality of participants to acquire the item through the competitive bidding; receive avatar profile data defining avatar profiles associated with the plurality of participants; determine, based on the avatar profile data, a plurality of avatar modification states that correspond to the acquisition interest levels for individual participants; and cause at least one computing device to display a virtual environment that includes a plurality of avatars being rendered adjacent to a graphical representation of the item, wherein individual avatars are rendered in accordance with individual avatar modification states to graphically represent individual acquisition interest levels for the individual participants.


Example Clause J, the system of Example Clause I, wherein the computer-readable instructions further cause the one or more processors to: monitor the online auction to determine which particular participant, of the plurality of participants, currently has a high bid for the item; and control at least some abilities of a particular avatar that corresponds to the particular participant based on the particular participant currently having the high bid for the item.


Example Clause K, the system of any one of Example Clauses I through J, wherein the avatar modification states facilitate rendering the plurality of avatars in accordance with a plurality of expressive characteristics based on the acquisition interest levels.


Example Clause L, the system of any one of Example Clauses I through K, wherein the computer-readable instructions further cause the one or more processors to: receive historical activity data defining historical user activity of at least some of the plurality of participants in association with previous online auctions; and deploy a machine learning engine to generate an acquisition interest model based at least in part on the historical activity data.


Example Clause M, the system of any one of Example Clauses I through L, wherein the acquisition interest levels are determined by analyzing the activity data with respect to the acquisition interest model.


Example Clause N, the system of any one of Example Clauses I through M, wherein the virtual environment is a three-dimensional immersive environment.


Example Clause O, the system of any one of Example Clauses I through N, wherein the at least one computing device comprises an augmented reality (AR) device or a virtual reality (VR) device.


Example Clause P, a computer-implemented method, comprising: receiving activity data defining user activity of a plurality of participants in association with an online auction for an item; analyzing the activity data to identify user signals that indicate acquisition interest levels for individual participants of the plurality of participants; determining avatar modification states that correspond to individual acquisition interest levels for individual participants of the plurality of participants; and causing at least one computing device to display a virtual environment that includes a graphical representation of the item and a plurality of avatars that are rendered in accordance with the avatar modification states that correspond to the individual acquisition interest levels for individual participants.


Example Clause Q, the computer-implemented method of Example Clause P, wherein the individual acquisition interest levels for the individual participants are determined based on an acquisition interest model.


Example Clause R, the computer-implemented method of any one of Example Clauses P through Q, further comprising: receiving historical activity data defining historical user activity of at least some of the plurality of participants in association with previous online auctions; and deploying a machine learning engine to generate an acquisition interest model based at least in part on the historical activity data.


Example Clause S, the computer-implemented method of any one of Example Clauses P through R, wherein the avatar modification states facilitate rendering the plurality of avatars in accordance with different expressive characteristics based on the acquisition interest levels.


Example Clause T, the computer-implemented method of any one of Example Clauses P through S, wherein the virtual environment is a three-dimensional immersive environment.


Conclusion

For ease of understanding, the processes discussed in this disclosure are delineated as separate operations represented as independent blocks. However, these separately delineated operations should not be construed as necessarily order dependent in their performance. The order in which the process is described is not intended to be construed as a limitation, and any number of the described process blocks may be combined in any order to implement the process or an alternate process. Moreover, it is also possible that one or more of the provided operations is modified or omitted.


Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts are disclosed as example forms of implementing the claims.


The terms “a,” “an,” “the” and similar referents used in the context of describing the invention (especially in the context of the following claims) are to be construed to cover both the singular and the plural unless otherwise indicated herein or clearly contradicted by context. The terms “based on,” “based upon,” and similar referents are to be construed as meaning “based at least in part” which includes being “based in part” and “based in whole” unless otherwise indicated or clearly contradicted by context.


It should be appreciated that any reference to "first," "second," etc. items and/or abstract concepts within the Summary and/or Detailed Description is not intended to and should not be construed to necessarily correspond to any reference of "first," "second," etc. elements of the claims. In particular, within the Summary and/or Detailed Description, items and/or abstract concepts such as, for example, modification states and/or avatars and/or acquisition interest levels may be distinguished by numerical designations without such designations corresponding to the claims or even other paragraphs of the Summary and/or Detailed Description. For example, any designation of a "first acquisition interest level" and "second acquisition interest level" of the participants within any specific paragraph of the Summary and/or Detailed Description is used solely to distinguish two different acquisition interest levels within that specific paragraph, not any other paragraph and particularly not the claims.


Certain configurations are described herein, including the best mode known to the inventors for carrying out the invention. Of course, variations on these described configurations will become apparent to those of ordinary skill in the art upon reading the foregoing description. Skilled artisans will know how to employ such variations as appropriate, and the configurations disclosed herein may be practiced otherwise than specifically described. Accordingly, all modifications and equivalents of the subject matter recited in the claims appended hereto are included within the scope of this disclosure. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the invention unless otherwise indicated herein or otherwise clearly contradicted by context.

Claims
  • 1. A computer-implemented method, comprising: receiving activity data defining user activity of a plurality of participants in association with an online auction for an item, wherein the online auction is being conducted by an online auctioneer system to facilitate competitive bidding by the plurality of participants for the item; analyzing the activity data to identify user signals that indicate acquisition interest levels for the plurality of participants, wherein individual acquisition interest levels are indicative of intentions of individual participants to acquire the item through the competitive bidding; receiving avatar profile data defining avatar profiles that facilitate dynamic modifications for three-dimensional models for the plurality of participants; determining, based on the avatar profile data, avatar modification states that correspond to the individual acquisition interest levels for the individual participants; and causing at least one computing device to display, in a virtual environment associated with the online auction, a graphical representation of the item and a plurality of avatars, wherein individual avatars are rendered in accordance with individual avatar modification states to graphically represent the individual acquisition interest levels for the individual participants.
  • 2. The computer-implemented method of claim 1, further comprising receiving historical activity data defining historical user activity of at least some of the plurality of participants in association with previous online auctions, wherein identifying the user signals that indicate the acquisition interest levels is based at least in part on the historical activity data.
  • 3. The computer-implemented method of claim 1, further comprising: receiving user specific historical activity data defining historical user activity of a particular participant, of the plurality of participants, in association with previous online auctions, wherein a particular acquisition interest level for the particular participant is determined based at least in part on the user specific historical activity data.
  • 4. The computer-implemented method of claim 1, wherein the avatar modification states facilitate rendering the plurality of avatars in accordance with a plurality of expressive characteristics based on the acquisition interest levels.
  • 5. The computer-implemented method of claim 1, further comprising: monitoring the online auction to determine which particular participant, of the plurality of participants, currently has a high bid for the item; and determining at least some aspects of a particular modification state for a particular avatar that corresponds to the particular participant, based on the particular participant currently having the high bid for the item.
  • 6. The computer-implemented method of claim 1, wherein the virtual environment is a three-dimensional immersive environment in which one or more three-dimensional objects are rendered.
  • 7. The computer-implemented method of claim 6, further comprising: monitoring the online auction to determine which particular participant, of the plurality of participants, currently has a high bid for the item; and providing the particular participant with at least some avatar abilities with respect to the item within the three-dimensional immersive environment.
  • 8. The computer-implemented method of claim 1, wherein the at least one computing device comprises an augmented reality (AR) device or a virtual reality (VR) device.
  • 9. A system, comprising: one or more processors; and a memory in communication with the one or more processors, the memory having computer-readable instructions stored thereupon that, when executed by the one or more processors, cause the one or more processors to: receive activity data defining user activity of a plurality of participants in association with an online auction that facilitates competitive bidding, by the plurality of participants, for an item; analyze the activity data to identify user signals that indicate acquisition interest levels associated with intentions of the plurality of participants to acquire the item through the competitive bidding; receive avatar profile data defining avatar profiles associated with the plurality of participants; determine, based on the avatar profile data, a plurality of avatar modification states that correspond to the acquisition interest levels for individual participants; and cause at least one computing device to display a virtual environment that includes a plurality of avatars being rendered adjacent to a graphical representation of the item, wherein individual avatars are rendered in accordance with individual avatar modification states to graphically represent individual acquisition interest levels for the individual participants.
  • 10. The system of claim 9, wherein the computer-readable instructions further cause the one or more processors to: monitor the online auction to determine which particular participant, of the plurality of participants, currently has a high bid for the item; and control at least some abilities of a particular avatar that corresponds to the particular participant based on the particular participant currently having the high bid for the item.
  • 11. The system of claim 9, wherein the avatar modification states facilitate rendering the plurality of avatars in accordance with a plurality of expressive characteristics based on the acquisition interest levels.
  • 12. The system of claim 9, wherein the computer-readable instructions further cause the one or more processors to: receive historical activity data defining historical user activity of at least some of the plurality of participants in association with previous online auctions; and deploy a machine learning engine to generate an acquisition interest model based at least in part on the historical activity data.
  • 13. The system of claim 12, wherein the acquisition interest levels are determined by analyzing the activity data with respect to the acquisition interest model.
  • 14. The system of claim 9, wherein the virtual environment is a three-dimensional immersive environment.
  • 15. The system of claim 9, wherein the at least one computing device comprises an augmented reality (AR) device or a virtual reality (VR) device.
  • 16. A computer-implemented method, comprising: receiving activity data defining user activity of a plurality of participants in association with an online auction for an item; analyzing the activity data to identify user signals that indicate acquisition interest levels for individual participants of the plurality of participants; determining avatar modification states that correspond to individual acquisition interest levels for individual participants of the plurality of participants; and causing at least one computing device to display a virtual environment that includes a graphical representation of the item and a plurality of avatars that are rendered in accordance with the avatar modification states that correspond to the individual acquisition interest levels for individual participants.
  • 17. The computer-implemented method of claim 16, wherein the individual acquisition interest levels for the individual participants are determined based on an acquisition interest model.
  • 18. The computer-implemented method of claim 16, further comprising: receiving historical activity data defining historical user activity of at least some of the plurality of participants in association with previous online auctions; and deploying a machine learning engine to generate an acquisition interest model based at least in part on the historical activity data.
  • 19. The computer-implemented method of claim 16, wherein the avatar modification states facilitate rendering the plurality of avatars in accordance with different expressive characteristics based on the acquisition interest levels.
  • 20. The computer-implemented method of claim 16, wherein the virtual environment is a three-dimensional immersive environment.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of and priority to U.S. Provisional Application No. 62/588,189, filed Nov. 17, 2017 and entitled “Augmented Reality, Mixed Reality, and Virtual Reality Experiences,” the entire contents of which are incorporated herein by reference.

Provisional Applications (1)
Number: 62/588,189    Date: Nov. 17, 2017    Country: US