The present disclosure generally relates to communication networks. More particularly, and not by way of any limitation, the present disclosure is directed to a system, method and architecture for facilitating real-time native advertisements in an augmented/mixed reality (AR/MR) environment.
Increasingly, augmented and virtual reality (AR/VR) are becoming more than gaming environments, with companies finding enterprise potential in the technology in a host of applications. One of the goals of the industry is to replace conventional user interfaces such as keyboards, displays, etc. with new paradigms for human-machine communication and collaboration, thereby facilitating a major shift in user engagement in AR/VR environments. Accordingly, the enterprise potential of AR/VR technology continues to grow as companies are constantly exploring new use cases beyond pilot or “one-off” applications.
Mixed reality (MR) represents a further advance in which AR and real world environments may be merged with additional enhancements to provide richer user experiences. As the trends in AR/VR/MR deployment continue to grow apace, interest in marketing and monetizing the available digital “real estate” in AR/VR/MR environments has also grown concomitantly, albeit potentially within the constraints of efficient bandwidth utilization and optimization in an AR-supported network.
The present patent disclosure is broadly directed to systems, methods, apparatuses, devices, and associated non-transitory computer-readable media and network architecture for facilitating placement of native advertisements (or, ads for short) in an AR/MR environment. In one aspect, an example method includes, inter alia, receiving real world object identification and spatial mapping data relative to a plurality of real world scenarios sensed or otherwise detected in respective AR sessions engaged by corresponding users using a plurality of AR devices. The real world object identification and spatial mapping data may be determined responsive to sensory and environmental information received from at least one of the AR devices and the corresponding users. Responsive to the real world object identification and spatial mapping data, one or more contextualized ads are obtained from an advertisement campaign management system that are customizable within the respective real world scenarios. The claimed method further involves assigning the one or more advertisements to one or more of the plurality of AR devices, e.g., based on network flow optimization techniques. The ads are then inserted into the respective AR sessions of the one or more AR devices to which the advertisements have been assigned for placement relative to one or more real world objects perceived in the respective AR views displayed by the one or more AR devices.
In a further aspect, an embodiment of a system, apparatus, or network platform is disclosed which comprises, inter alia, suitable hardware such as processors and persistent memory having program instructions for executing an embodiment of the methods set forth herein.
In still further aspects, one or more embodiments of a non-transitory computer-readable medium or distributed media containing computer-executable program instructions or code portions stored thereon are disclosed for performing one or more embodiments of the methods of the present invention when executed by a processor entity of a network node, apparatus, system, network element, subscriber device, and the like, mutatis mutandis. Further features of the various embodiments are as claimed in the dependent claims.
Beneficial features of an embodiment of the present invention may include but are not limited to one or more of the following: (i) the disclosed AR ad placement architecture is configured to learn the environment and detect objects that would match the relevant products and services to be advertised, in addition to identifying possible locations for those ads to be placed; (ii) the AR ad placement architecture allows for placement rules to be applied so the sponsored product(s) may be placed in a way that the experience of the consumer is not disrupted (i.e., the sponsored products look like native content in their natural “habitat”); (iii) the AR ad placement architecture may be configured to take into consideration consumer data for a personalized native ad experience (i.e., two different consumers in the same environment may see different ads or views of the same ad); (iv) the AR ad placement architecture allows a natural integration within current ad exchange markets; and (v) the AR ad placement architecture can be readily scaled as the number of consumers with AR media connectivity continues to grow.
Additional benefits and advantages of the embodiments will be apparent in view of the following description and accompanying Figures.
Embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the Figures of the accompanying drawings in which like references indicate similar elements. It should be noted that different references to “an” or “one” embodiment in this disclosure are not necessarily to the same embodiment, and such references may mean at least one. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
The accompanying drawings are incorporated into and form a part of the specification to illustrate one or more exemplary embodiments of the present disclosure. Various advantages and features of the disclosure will be understood from the following Detailed Description taken in connection with the appended claims and with reference to the attached drawing Figures in which:
In the following description, numerous specific details are set forth with respect to one or more embodiments of the present patent disclosure. However, it should be understood that one or more embodiments may be practiced without such specific details. In other instances, well-known circuits, subsystems, components, structures and techniques have not been shown in detail in order not to obscure the understanding of the example embodiments. Accordingly, it will be appreciated by one skilled in the art that the embodiments of the present disclosure may be practiced without such specific components. It should be further recognized that those of ordinary skill in the art, with the aid of the Detailed Description set forth herein and taking reference to the accompanying drawings, will be able to make and use one or more embodiments without undue experimentation.
Additionally, terms such as “coupled” and “connected,” along with their derivatives, may be used in the following description, claims, or both. It should be understood that these terms are not necessarily intended as synonyms for each other. “Coupled” may be used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, co-operate or interact with each other. “Connected” may be used to indicate the establishment of communication, i.e., a communicative relationship, between two or more elements that are coupled with each other. Further, in one or more example embodiments set forth herein, generally speaking, an element, component or module may be configured to perform a function if the element is capable of performing or otherwise structurally arranged or programmed under suitable executable code to perform that function.
As used herein, an AR/MR network element, platform or node may be comprised of one or more pieces of service network equipment, including hardware and software that communicatively interconnects other equipment on a network (e.g., other network elements, end stations, etc.), and is adapted to host one or more advertisement applications or services (hereinafter “ad applications” or “ad services”, or terms of similar import, for short) with respect to a plurality of subscribers. As such, some network elements may be disposed in conjunction with a wireless radio network environment whereas other network elements may be disposed in conjunction with a public packet-switched network infrastructure, including or otherwise involving suitable content delivery network (CDN) infrastructures and/or various Internet-based ad campaign management architectures. In still further arrangements, one or more network elements may be disposed in cloud-based platforms or datacenters having suitable equipment running virtualized functions or applications relative to various types of media, e.g., ads, AR/MR content, as well as other subscriber-specific or broadcast audio/video/graphics media including computer-generated or holographic content. Accordingly, at least some network elements may comprise “multiple services network elements” that provide support for multiple network-based functions (e.g., A/V media management, session control, Quality of Service (QoS) policy enforcement, bandwidth scheduling management, subscriber/device policy and profile management, content provider and AR publisher priority policy management, streaming policy management, network storage policy management, and the like), in addition to providing support for multiple application services (e.g., data and multimedia applications). 
Subscriber end stations, client devices or customer premises equipment (CPE) may comprise any device configured to execute, inter alia, an AR/MR client application and/or an HTTP-based download application for receiving live/stored AR content from one or more AR content providers as well as real-time AR-based native advertisements, e.g., via a suitable access network or edge network arrangement based on a variety of access technologies, standards and protocols. For purposes of one or more embodiments of the present invention, an example client device may therefore comprise any known or heretofore unknown AR/MR device, e.g., a Google Glass device, Microsoft HoloLens device, etc., as well as holographic computing devices, which may or may not be deployed in association with additional local hardware such as networked or local gaming engines/consoles (such as Wii®, PlayStation® 3, etc.), portable laptops, netbooks, palm tops, tablets, phablets, mobile phones, smartphones, multimedia/video phones, mobile/wireless user equipment, portable media players, smart wearables such as smartwatches, goggles, digital gloves, and the like. Further, the client devices may also access or consume other content/services (e.g., non-AR/MR) provided over broadcast networks (e.g., cable and satellite networks) as well as a packet-switched wide area public network such as the Internet via suitable service provider access networks. In a still further variation, the client devices or subscriber end stations may also access or consume content/services provided on virtual private networks (VPNs) overlaid on (e.g., tunneled through) the Internet.
One or more embodiments of the present patent disclosure may be implemented using different combinations of software, firmware, and/or hardware in one or more modules suitably programmed and/or configured. Thus, one or more of the techniques shown in the Figures (e.g., flowcharts) may be implemented using code and data stored and executed on one or more electronic devices or nodes (e.g., a subscriber client device or end station, a network element, etc.). Such electronic devices may store and communicate (internally and/or with other electronic devices over a network) code and data using computer-readable media, such as non-transitory computer-readable storage media (e.g., magnetic disks, optical disks, random access memory, read-only memory, flash memory devices, phase-change memory, etc.), transitory computer-readable transmission media (e.g., electrical, optical, acoustical or other form of propagated signals—such as carrier waves, infrared signals, digital signals), etc. In addition, such network elements may typically include a set of one or more processors coupled to one or more other components, such as one or more storage devices (e.g., non-transitory machine-readable storage media) as well as storage database(s), user input/output devices (e.g., a keyboard, a touch screen, a pointing device, and/or a display), and network connections for effectuating signaling and/or bearer media transmission. The coupling of the set of processors and other components may be typically through one or more buses and bridges (also termed as bus controllers), arranged in any known (e.g., symmetric/shared multiprocessing) or heretofore unknown architectures. Thus, the storage device or component of a given electronic device or network element may be configured to store code and/or data for execution on one or more processors of that element, node or electronic device for purposes of implementing one or more techniques of the present disclosure.
Referring now to the drawings and more particularly to
By way of illustration, an example AR/MR device 102 is depicted as a client device operative with advanced AR/MR technologies including, e.g., computer/machine vision and object recognition, in addition to inter-operating with various sensory devices 104-1 to 104-N, at least some of which may be integrated within the client device 102 in an embodiment. Where such sensory devices may be provided as separate entities or elements, they may communicate with the client AR/MR device 102 using suitable wired and/or wireless communications technologies, e.g., optical, radio, Bluetooth, etc., for generating, receiving and/or transmitting myriad types of sensory data and associated control signaling, via applicable communication paths 101-1 to 101-N. Additionally, alternatively, or optionally, a local computing platform 106 (i.e., hardware, operating system software/firmware and applications) may also be coupled to the client AR/MR device 102 via a suitable communication path 103, wherein the local computing platform 106 may represent any number and/or type of desktop computers, laptops, mobile/smartphones, tablets/phablets, smart TVs including high definition (HD), ultra HD (UHD), 4/8K projection/display devices, set-top boxes (STBs), holographic computers, other media consumption devices, etc. Collectively, the local computing hardware/software 106, client AR/MR device 102 and associated sensory devices 104-1 to 104-N may be considered a client AR/MR platform within the AR network architecture 100, which may include or interface with a plurality of such client platforms (e.g., hundreds of thousands, depending on scale).
With respect to the sensory devices 104-1 to 104-N, example devices may include but are not limited to cameras, microphones, accelerometers, Global Positioning System (GPS) locators, touch sensors, mood sensors, temperature sensors, pressure sensors, gesture sensors/controllers, optical scanners, near-field communications (NFC) devices, head movement detectors, ocular movement trackers, and directional sensors such as solid-state compasses, etc., as well as wearable devices comprising health/exercise monitors and biometric identification devices, and so on. Further, a subset of sensors may be provided as part of an Internet of Things (IoT) environment associated with the AR/MR device 102. In a typical arrangement, for instance, a head-mounted display (HMD) may be included as part of the AR/MR client device 102, which may be paired with a helmet or a harness adjustable to the user, and may employ sensors for six degrees-of-freedom monitoring that allow virtual information to be aligned to the physical world perceived in a field of view (FOV) and adjusted in accordance with the user's head and/or eye movements. An example AR/MR client device 102 may also be realized as devices resembling eyewear or goggles that include cameras to intercept the real world view and redisplay its augmented view through an eye piece or as a projected view in front of the user. Such devices may include, but are not limited to, smartglasses such as, e.g., Google Glass, Microsoft HoloLens, etc., as well as bionic/electronic contact lenses and virtual retinal displays. A separate head-up display (HUD) may also be implemented in association with an example AR/MR client device 102 depending on the specific AR/MR application, AR/MR content provider, and/or the AR/MR client platform implemented in an embodiment.
In accordance with the teachings of the present patent disclosure, an Object and Sound Recognition System (ORS) 108 and a Spatial Mapping System (SMS) 110 may be integrated or otherwise co-located with the client AR/MR device 102 in an example embodiment. In an alternative or additional embodiment, ORS 108, SMS 110 or both may be provided as separate network infrastructure elements disposed in an edge/access network servicing the user/subscriber associated with the client AR/MR device 102, communicatively operating therewith using suitable wired/wireless communication paths 109, 111, respectively. In a still further embodiment, ORS 108 and/or SMS 110 may be implemented as a virtual functionality or appliance in a cloud-based implementation. In one embodiment, irrespective of the specific implementation, ORS 108 may be configured as a system, apparatus or virtual appliance that is operative, depending on available sensors and/or other peripherals associated with an example AR/MR device 102, for collecting information about physical objects, sounds, smells, brands, the consumer's mood, etc. in the real world environment perceived by the user (collectively referred to herein as “sensory and environmental information”). In one example arrangement, AR/MR device 102 may use microphones and different types of cameras to recognize sounds, objects and brands, and feed the data to ORS 108. As noted previously, an example AR/MR device may also include biometrics-based sensors that may be configured to provide suitable information that may be used to determine the mood of the AR user/consumer. Depending on where an example implementation of ORS is located, the processing of the sensory/environmental data may be effectuated locally on the AR/MR device 102, its local computing platform 106, or on the network edge/cloud infrastructure, where the sensory/environmental data may be transmitted via cellular, WiFi and/or other types of connectivity.
Skilled artisans will realize that various known or heretofore unknown techniques may be employed for processing the sensory/environmental data (e.g., image recognition, pattern recognition, machine vision techniques, etc.) so as to identify/recognize the existing physical world objects, images, sounds, etc. in relation to a real world view seen/perceived via the AR/MR device 102 and generate real world object identification data. As will be set forth in further detail below, an AR-based advertisement infrastructure element 112, referred to herein as AR Native Ads Platform or ARNAP, is operative to receive the real world object identification data, among other pieces of information, for purposes of selection and/or suggestion of applicable advertisement content in a programmatic manner in conjunction with one or more additional network modules or elements according to an embodiment of the present invention.
Continuing to refer to
In some embodiments of the present invention, the functionalities of ORS 108 and SMS 110 may also be integrated or otherwise co-located. Broadly, ORS and SMS may inter-operate, wherein the coordinates of a real world environment and the physical objects therein may be derived or generated using a combination of techniques involving computer vision, video tracking, visual odometry, etc. In a first or initial stage, the process may involve detecting interest points, fiducial markers, or optical flow in the sensed camera images, wherein various feature detection methods such as corner detection, blob detection, edge detection, and other image processing methods may be employed. In a follow-up or second stage, a real world coordinate system and the location/positioning of the physical objects therein may be restored from the data obtained in the first stage, using techniques including but not limited to simultaneous localization and mapping, projective/epipolar geometry, nonlinear optimization, filtering, etc. In an example implementation, AR Markup Language (ARML) may be used to describe the location and appearance of the objects in an AR/MR scenario.
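By way of non-limiting illustration, the first-stage feature detection may be sketched as a minimal blob detector operating on a synthetic grayscale image; a practical SMS implementation would instead employ optimized computer-vision libraries and the additional second-stage techniques enumerated above. All names and values below are hypothetical:

```python
from collections import deque

def detect_blobs(image, threshold=128):
    """Find connected bright regions (blobs) in a grayscale image and
    return each blob's centroid and size as a candidate interest point."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if image[y][x] >= threshold and not seen[y][x]:
                # Flood-fill the connected region (4-connectivity).
                queue, pixels = deque([(y, x)]), []
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                # The blob centroid serves as an interest point.
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                blobs.append((cy, cx, len(pixels)))
    return blobs

# Two bright 2x2 blobs in a 6x8 synthetic image.
img = [[0] * 8 for _ in range(6)]
for y, x in [(1, 1), (1, 2), (2, 1), (2, 2), (4, 5), (4, 6), (5, 5), (5, 6)]:
    img[y][x] = 255
print(detect_blobs(img))  # → [(1.5, 1.5, 4), (4.5, 5.5, 4)]
```

The centroids so produced correspond to the interest points that the second stage would lift into real world coordinates.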
In accordance with the teachings herein, ARNAP 112 may be interfaced with an Advertisement Campaign Management System (ACMS) 116 via a suitable interface 117, wherein ACMS 116 is operative to manage one or more ad campaigns in association with an applicable advertisement architecture, e.g., including, but not limited to, an architecture similar to web-based advertising. In one arrangement, ACMS 116 may be configured as a supply-side platform (SSP) of an advertisement architecture that interacts with existing demand-side platforms (DSPs) 120 via one or more ad exchanges 118. As an SSP or at least as part thereof, ACMS 116 may be implemented as a technology platform that enables AR content publishers to manage their advertising space inventory, fill it with ads, and generate revenue accordingly. For example, ACMS 116 may be configured to provide impression-level bidding based on the data generated by ARNAP 112, preferably contextualized and customized in an AR/MR environment according to an embodiment of the present invention as will be described in detail further below. In one example implementation, DSP 120 may be realized as a technology platform that allows buyers to purchase digital inventory, e.g., ad spaces in an AR/MR environment, from various ad exchanges and ad network accounts in a number of ways, including but not limited to real-time bidding (RTB), where digital inventory may be bought and sold on a per-impression basis via programmatic instantaneous auction, also contextualized in an AR/MR environment according to an embodiment of the present invention. Accordingly, a publisher content server and/or a publisher ad server may be provided as part of the functionality of ARNAP 112 in an example embodiment of the present invention, although such entities may also be deployed as separate entities depending on a particular AR-based advertisement architecture implementation.
For purposes of the present invention, native advertising is a type of advertising where the advertisement is presented in a disguised/non-intrusive manner, e.g., the ad content at least substantially matches the form/function of a real world scenario displayed or otherwise perceived in an AR/MR environment. In example cases, native ads may manifest as articles or products, although not necessarily limited thereto, produced by an advertiser with the specific intent to promote a product. Accordingly, in an embodiment of the present invention, native advertising may be contextualized with respect to the various physical objects, images, sounds, smells, etc. and/or the AR content that may be superimposed on the real world view presented in an AR application. It should be appreciated that the term “native” may therefore refer to this contextualized coherence of the ad content with the other media and/or tangible entities appearing in a dynamically varying AR environment. The ad exchange 118 and/or DSPs 120 may be provided with suitable application program interfaces (APIs) and associated data structures to request AR native ads spaces according to the teachings herein for purposes of an example embodiment as will be set forth in detail further below.
In a further arrangement, ARNAP 112 may also be interfaced with various additional sources of data, which may be hosted or managed by one or more third-party networks, entities, private/public enterprises, or operators, whereby user/subscriber profiles, past AR/MR environment and usage data pertaining to the subscribers, and other third-party data may be selectively/optionally utilized in selecting, assigning and placing native AR ads in an example embodiment of the present invention. Subscriber-based factors forming a user profile may comprise any combination or sub-combination of parameters/variables such as subscriber demographics including, but not limited to, subscriber personal data such as names, age(s), gender(s), ethnicities, number of individuals in the premises or size of the household, socioeconomic parameters, subscribers' residential information (i.e., where they live—city, county, state, region, etc.), employment history, income or other economic data, spending habit data, consumption data and product preferences, social media data/profiles, religion, language, etc., which are collectively shown at reference numeral 126. Environment data 124 comprises past and/or present data for the AR environments in different geolocations, which may be used in an example embodiment to enhance the real-time ORS data for the respective users/subscribers. For instance, another AR user/device nearby may also be connected, and its data may be used by ARNAP 112 as well, especially if the environment data from the other AR user/device is of better quality. Past environment data, which could come from the current user's AR device sensors or from others in the same location or vicinity, can also be useful from a historical perspective. Other third-party data sources 122 may comprise or provide additional information relative to the subscribers' geolocations, e.g., ambient environmental/weather or climate data, news, and other location-based data.
One skilled in the art will recognize that these various additional data sources 122, 124, 126 may be disposed or deployed at different parts or nodes of the AR network architecture 100, and may therefore be provided with appropriate communication networks 130 for communicating with ARNAP 112.
Regardless of where such data sources are disposed, it should be understood that the various pieces of information from them may be selectively/optionally utilized by ARNAP 112 in conjunction with other modules of the AR network architecture 100 depending on a policy-based management system that may take into consideration an assortment of factors such as the scope/extent of a particular ad campaign, AR content in respective AR environments, subscribers' geolocations, licensing and/or other geographical/temporal restrictions, and the like.
In one arrangement, ARNAP 112 may be configured to detect, obtain, receive, monitor or utilize various types of sensory/environmental data as well as real world object identification and spatial mapping data for native placement of ads in AR environments (e.g., for subscribers with active AR connections or sessions), wherein the ads may be received based on pre-cached ad bids and/or RTB-based ad bids from one or more DSPs 120. Associated with ARNAP 112 is a sub-system, module or apparatus 114, referred to herein as Advertisement Placement Management System (APMS), which may be integrated with or provided separately from ARNAP 112, for facilitating rule-based placement logic with respect to the selected ads in relation to one or more real world objects/entities perceived by respective AR subscribers in corresponding AR/MR environments. In an example implementation, APMS 114 may be provided with configurable rules (e.g., policy-based) for native ad placement. For example, if the objective is to place an ad for a pair of running shoes (that is, assuming that ACMS/SSP 116 is configured to provide or fill one or more suitable locations for the shoes from a shoe supplier based on an exchange-mediated ad transaction), APMS 114 may be configured to identify matching objects that represent suitable placeholders for the shoes ad in the real world scene of an AR/MR environment. In an illustrative scenario, the rules-based recommendations from APMS 114 may contain other details such as placing the advertisement next to real world shoes in an empty space (i.e., devoid of a physical object, or separated from other physical objects by a predetermined marginal space, etc.). In accordance with the teachings of the present invention, ARNAP 112 is operative to compile the recommendations received from APMS 114 and determine an optimal decision as to which ads are to be placed and where to place them.
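By way of non-limiting illustration, the rule-based identification of matching placeholder objects for the running-shoes example may be sketched as follows, where the object schema, margin threshold and all field names are hypothetical assumptions rather than a claimed format:

```python
def candidate_placements(ad, scene_objects, min_margin_m=0.5):
    """Return placeholder spots for a native ad: locations adjacent to
    real world objects whose category matches the ad and which have
    sufficient empty space (margin in meters) around them."""
    spots = []
    for obj in scene_objects:
        if (obj["category"] == ad["target_category"]
                and obj["free_margin_m"] >= min_margin_m):
            spots.append({"anchor_object": obj["id"],
                          "position": obj["adjacent_space"]})
    return spots

# Hypothetical ORS/SMS output for one AR scene.
scene = [
    {"id": "shoes-01", "category": "footwear", "free_margin_m": 0.8,
     "adjacent_space": (1.2, 0.0, 3.4)},
    {"id": "lamp-07", "category": "lighting", "free_margin_m": 2.0,
     "adjacent_space": (0.5, 1.1, 2.0)},
]
ad = {"ad_id": "run-shoes-2024", "target_category": "footwear"}
print(candidate_placements(ad, scene))
# → [{'anchor_object': 'shoes-01', 'position': (1.2, 0.0, 3.4)}]
```

The resulting spot list corresponds to the rules-based recommendations that APMS 114 would hand to ARNAP 112 for final selection.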
As noted previously, since the functionality of APMS 114 may be integrated within ARNAP 112 in some deployments, the overall ad placement service logic may be executed by or at ARNAP 112 without having to engage in external service/functional calls.
As noted previously, communications between the entities of system 200A may include control plane signaling communications, user plane data communications, or both. Further, flow of information on any communication interface may be unidirectional or bidirectional unless otherwise specifically described.
Taking
Depending on whether or not the ads are obtained responsive to pre-cached bids, different configurations for ad fulfillment may be obtained in an embodiment within the scope of the present invention. Process 400B in
Process 400C in
Turning to
In a further or optional arrangement, a learning sub-module, module or sub-system 268 may be provided as part of APMS 200B for effectuating a trainable “expert system” that can learn from, inter alia, subscribers' respective interactive behaviors relative to the ads natively placed in their AR environments. As illustrated in
As noted above, possible ads to be placed may be received by ARNAP 112 based on whether pre-cached bids or RTB-based ads are obtained from ACMS 116. Accordingly, the ad content input 254 to the placement module can be either actual bids for ads received from the ACMS or a list of potential ads that the ARNAP foresees will need placement and would like to advertise the space(s) to the ad exchange via the ACMS. As to the policy-based inputs 259, ARNAP 112 may be configured to include, generate and/or provide a number of advertisement policies. For example, a policy may be to avoid ads of a certain type for certain consumer groups or to prioritize ads of a certain type for given AR environments. ARNAP 112 may also include “metering” or access control policies, e.g., a policy for showing fewer ads or no ads to certain consumers during certain hours or at certain locations, etc. Other data 260 may also include data relating to blacklisted ads for certain locations or consumer groups, and the like.
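By way of non-limiting illustration, the policy-based filtering described above may be sketched as follows; the rule shapes (blacklist, per-type group exclusions, metering hours) and all field names are hypothetical assumptions:

```python
def apply_ad_policies(ads, consumer, policies):
    """Filter candidate ads against ARNAP-style advertisement policies.
    All rule shapes and field names here are illustrative assumptions."""
    selected = []
    for ad in ads:
        if ad["ad_id"] in policies.get("blacklist", set()):
            continue  # ad blacklisted for this location/consumer group
        if consumer["group"] in policies.get("avoid", {}).get(ad["type"], set()):
            continue  # avoid ads of a certain type for certain groups
        if consumer["local_hour"] in policies.get("quiet_hours", set()):
            continue  # metering: fewer/no ads during certain hours
        selected.append(ad)
    return selected

ads = [
    {"ad_id": "run-shoes-2024", "type": "retail"},
    {"ad_id": "energy-drink-9", "type": "beverage"},
]
consumer = {"group": "minors", "local_hour": 14}
policies = {"avoid": {"beverage": {"minors"}},
            "blacklist": set(), "quiet_hours": set()}
print(apply_ad_policies(ads, consumer, policies))
# → [{'ad_id': 'run-shoes-2024', 'type': 'retail'}]
```

Here the beverage ad is suppressed for the consumer group while the retail ad passes through, illustrating how policies 259 and other data 260 narrow the candidate list before placement.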
Set forth below are example data formats with respect to the various input data described above, which may be used to extend existing ad server APIs for implementing an embodiment of a system or architecture for facilitating native ads in an AR environment according to the teachings of the present disclosure. Whereas the example formats below are provided in JavaScript Object Notation (JSON) for sample data objects, it should be appreciated that the format type and/or ad content examples are illustrative only and therefore are non-exhaustive and non-limiting.
An example consumer data object may be formatted as below:
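Since the sample object itself is not reproduced in this text, a hypothetical sketch is given here; every field name and value is an illustrative assumption rather than the disclosure's actual format:

```python
import json

# Hypothetical consumer data object (all fields are assumptions).
consumer = {
    "consumer_id": "u-102938",
    "demographics": {"age_range": "25-34", "language": "en"},
    "geolocation": {"lat": 59.3293, "lon": 18.0686},
    "preferences": ["running", "outdoor"],
    "ad_opt_in": True,
}
print(json.dumps(consumer, indent=2))
```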
An example environment data object may be set forth as below:
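Again, in lieu of the sample object itself, a hypothetical sketch follows; the field names and values are illustrative assumptions only:

```python
import json

# Hypothetical environment data object (all fields are assumptions),
# reflecting ORS/SMS outputs for one AR session.
environment = {
    "session_id": "ar-55021",
    "geolocation": {"lat": 59.3293, "lon": 18.0686},
    "detected_objects": [
        {"id": "shoes-01", "category": "footwear",
         "position_m": [1.2, 0.0, 3.4], "free_margin_m": 0.8},
    ],
    "ambient": {"noise_db": 42, "lighting": "indoor"},
    "timestamp": "2024-05-01T10:15:00Z",
}
print(json.dumps(environment, indent=2))
```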
An example AR ad content data object may be set forth as below:
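As with the preceding objects, a hypothetical sketch stands in for the sample itself; every field name and value is an illustrative assumption:

```python
import json

# Hypothetical AR ad content data object (all fields are assumptions).
ad_content = {
    "ad_id": "run-shoes-2024",
    "advertiser": "ExampleShoeCo",
    "asset_uri": "https://cdn.example.com/assets/shoes.glb",
    "target_category": "footwear",
    "dimensions_m": [0.30, 0.15, 0.12],
    "bid_cpm_usd": 4.50,
}
print(json.dumps(ad_content, indent=2))
```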
As described previously, output of the placement module 252 preferably comprises the list of native ads 262 to be placed in respective AR environments. In one arrangement, such output 262 may include a single placement option per ad or a list of possible placement locations per ad. Skilled artisans will recognize that other variations, alternatives, modifications, and the like (e.g., geolocation targeting, subscriber targeting, AR content-specific ad selection, etc.) with respect to the output list 262 are possible within the scope of the present invention. In a still further arrangement, ARNAP 112 may be configured to provide an additional level of filtering or determination as to selecting which ad(s) to be placed where (e.g., AR geolocations), based on the output list 262 provided by APMS module 200B. An example AR-rendered ad data object may be set forth as below, again without any limitation as to the type of format and/or content:
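A hypothetical sketch of such a rendered-ad object follows; the placement schema and all field names are illustrative assumptions:

```python
import json

# Hypothetical AR-rendered ad data object (all fields are assumptions),
# tying a selected ad to a placement in a given AR session.
rendered_ad = {
    "ad_id": "run-shoes-2024",
    "session_id": "ar-55021",
    "placement": {
        "anchor_object": "shoes-01",
        "position_m": [1.2, 0.0, 3.4],
        "orientation_deg": [0, 90, 0],
    },
    "render_hints": {"scale": 1.0, "occlusion_aware": True},
}
print(json.dumps(rendered_ad, indent=2))
```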
In an example embodiment of the present invention, the placement module 252 may be configured to assign, or place, the ads to the various AR consumers/locations based on a network flow optimization technique wherein a flow metric associated with a logical graph constructed for the universe of AR consumers (or, AR devices or locations) served by ARNAP 112 is maximized. By way of illustration, an optimization process may be implemented as follows:
Upon executing a placement/assignment process based on a flow metric, a flow solution may be obtained indicating an optimal assignment of the ads in the list A to the location nodes in N.
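The assignment process described above can be sketched in a simplified form as follows. For clarity of exposition, this illustration replaces a full network flow solver with an exhaustive search over a small, invented weight matrix: the ad list, location nodes, weights, and the scoring function are all assumptions for this sketch, not part of the disclosed process.

```python
from itertools import permutations

# Hypothetical inputs: ads in list A, candidate AR location nodes in N,
# and a weight w[ad][node] standing in for the flow metric (e.g., expected
# revenue adjusted by contextual fit and bandwidth cost). All names and
# values are invented for illustration.
A = ["ad1", "ad2", "ad3"]
N = ["loc1", "loc2", "loc3", "loc4"]
w = {
    "ad1": {"loc1": 5, "loc2": 2, "loc3": 4, "loc4": 1},
    "ad2": {"loc1": 3, "loc2": 6, "loc3": 1, "loc4": 2},
    "ad3": {"loc1": 4, "loc2": 3, "loc3": 5, "loc4": 7},
}

def assign_ads(ads, nodes, weights):
    """Exhaustively search assignments of ads to distinct location nodes,
    returning the mapping that maximizes the total flow weight."""
    best_score, best_map = float("-inf"), {}
    for combo in permutations(nodes, len(ads)):
        score = sum(weights[ad][node] for ad, node in zip(ads, combo))
        if score > best_score:
            best_score, best_map = score, dict(zip(ads, combo))
    return best_map, best_score

placement, total = assign_ads(A, N, w)
# placement -> {"ad1": "loc1", "ad2": "loc2", "ad3": "loc4"}, total -> 18
```

A production embodiment would replace the exhaustive search (which is factorial in the number of nodes) with a polynomial-time min-cost/max-flow or Hungarian-style assignment solver over the logical graph, but the flow solution returned has the same shape: an optimal mapping of the ads in list A to the location nodes in N.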
Turning to
Additional non-exhaustive and non-limiting example scenarios where an embodiment of the present invention may be practiced are set forth below.
Sneakers recognition and advertisement matching:
Sound detection and advertisement matching for the mood:
Watching commercials on TV and matching the advertisement with an object in the real world:
Turning to
Accordingly, depending on implementation and/or network architecture of an AR media communications network and/or native ad architecture network, apparatus 600 may be configured in different ways suitable for operation at different hierarchical levels of a network infrastructure which may include a CDN and/or mobile broadcast network, e.g., at a super headend node, regional headend node, video hub office node, AR publisher server node, central or regional or edge distribution node(s), etc., on the basis of where AR source media feeds or other content sources are injected into an example deployment. Suitable network interfaces, e.g., I/F 614-1 to 614-L, may therefore be provided for effectuating communications with other network infrastructure elements and databases (e.g., AR media source feeds, global databases for storing AR media segments, metadata/manifest files, applicable digital rights management (DRM) entities, etc.), as well as one or more ACMS nodes, APMS/ORS/SMS nodes (where separately deployed), consumer profile databases and other third-party data sources, etc. In some embodiments, apparatus 600 may be configured as an integrated ad platform architecture including an ORS module 620 and/or SMS module 610. Interfaces 612-1 to 612-K may be provided for effectuating communications sessions with one or more downstream nodes, e.g., access network nodes and other intermediary network elements, subscriber premises nodes, AR subscriber devices, and the like. As noted above, one or more processors 602 may be provided as part of a suitable computer architecture for providing overall control of the apparatus 600, which processor(s) 602 may be configured to execute various program instructions stored in appropriate memory modules or blocks, e.g., persistent memory having specific program instructions 608, including additional modules or blocks specific to AR ad selection, assignment, optimized placement, rendering, consumer-behavior based learning, etc.
An ad content cache 604 may also be included in an example embodiment, wherein ads forecasted based on previous learning may be stored. Where included, an APMS module 613 may include program instructions and related hardware for effectuating optimized ad placement and feedback-based learning as described in detail hereinabove. ARNAP functionality may also be embodied as a module or component 616 in an integrated ad platform architecture as one example configuration of apparatus 600. Where an implementation includes or is otherwise associated with an SSP, ACMS functionality may be embodied as a module or component 622 of the integrated SSP architecture. Also, a learning module 606 may be provided separately from a placement module (e.g., in APMS 613) in one implementation. Additionally or optionally, where bandwidth management is provided in conjunction with AR ad policy management, appropriate functionality may be embodied as a module or component 618 as illustrated in
Based on the foregoing, skilled artisans will recognize that embodiments of the present invention can advantageously place native ads in a contextualized manner relative to real world objects, entities, sounds, etc., as perceived by individual AR subscribers in respective AR environments. Whereas in a pure VR environment the content is already predetermined, and therefore ad placement can also be determined in advance, AR environments can be dynamically changing, thus requiring real-time object identification and spatial mapping. Embodiments of the present invention not only address this technological need but also facilitate ad placement in an optimized manner based on a weighted flow maximization process. Accordingly, overall bandwidth consumption of AR flows including native ads in a network can be optimized as well.
Further beneficial features of an embodiment may include one or more of the following: (i) real-time non-intrusive (i.e., native) advertisement experience even in dynamically varying AR/MR environments; (ii) better placement of targeted ads based on the consumer's environment and profile; (iii) customizable ad placement rules and ad management based on policies; (iv) continuous learning from past ad placement results (e.g., interaction from the consumer) to improve the real-time ad insertion techniques; (v) continuous improvement of the learning by a feedback sensing component; (vi) scalable architecture that can be deployed in a cloud-centric implementation; (vii) flexibility of deployment since the processing for detecting objects, sounds and environment could be either hosted on the AR device or located in the network/cloud; (viii) enhanced adaptability of the architecture because new sensing components and/or new AR rendering such as tactile rendering can be readily integrated into an AR environment; and (ix) implementation of cloud-based security to assure user privacy and anonymization.
One skilled in the art will recognize that various apparatuses, subsystems, AR functionalities/applications, ad exchange network elements, and/or endpoint AR nodes as well as the underlying network infrastructures set forth above may be architected in a virtualized environment according to a network function virtualization (NFV) architecture in additional or alternative embodiments of the present patent disclosure. For instance, various physical resources, databases, services, applications and functions executing within an example network of the present application, including ARNAP, ORS, SMS, APMS, and ACMS functionalities, etc., may be provided as virtual appliances, machines or functions, wherein the resources and applications are virtualized into suitable virtual network functions (VNFs) or virtual network elements (VNEs) via a suitable virtualization layer. Resources comprising compute resources, memory resources, and network infrastructure resources are virtualized into corresponding virtual resources wherein virtual compute resources, virtual memory resources and virtual network resources are collectively operative to support a VNF layer, whose overall management and orchestration functionality may be supported by a virtualized infrastructure manager (VIM) in conjunction with a VNF manager and an NFV orchestrator. An Operation Support System (OSS) and/or Business Support System (BSS) component may typically be provided for handling network-level functionalities such as network management, fault management, configuration management, service management, and subscriber management, etc., which may interface with VNF layer and NFV orchestration components via suitable interfaces.
Furthermore, at least a portion of an example network architecture disclosed herein may be virtualized as set forth above and architected in a cloud-computing environment comprising a shared pool of configurable virtual resources. Various pieces of hardware/software associated with ARNAP, ORS, SMS, APMS, and ACMS functionalities, and the like may be implemented in a service-oriented architecture, e.g., Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), etc., with multiple entities providing different features of an example embodiment of the present invention, wherein one or more layers of virtualized environments may be instantiated on commercial off-the-shelf (COTS) hardware. Skilled artisans will also appreciate that such a cloud-computing environment may comprise one or more of private clouds, public clouds, hybrid clouds, community clouds, distributed clouds, multiclouds and interclouds (e.g., “cloud of clouds”), and the like.
In the above description of various embodiments of the present disclosure, it is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
At least some example embodiments are described herein with reference to block diagrams and/or flowchart illustrations of computer-implemented methods, apparatus (systems and/or devices) and/or computer program products. It is understood that a block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions that are performed by one or more computer circuits. Such computer program instructions may be provided to a processor circuit of a general purpose computer circuit, special purpose computer circuit, and/or other programmable data processing circuit to produce a machine, so that the instructions, which execute via the processor of the computer and/or other programmable data processing apparatus, transform and control transistors, values stored in memory locations, and other hardware components within such circuitry to implement the functions/acts specified in the block diagrams and/or flowchart block or blocks, and thereby create means (functionality) and/or structure for implementing the functions/acts specified in the block diagrams and/or flowchart block(s). Additionally, the computer program instructions may also be stored in a tangible computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instructions which implement the functions/acts specified in the block diagrams and/or flowchart block or blocks.
As pointed out previously, a tangible, non-transitory computer-readable medium may include an electronic, magnetic, optical, electromagnetic, or semiconductor data storage system, apparatus, or device. More specific examples of the computer-readable medium would include the following: a portable computer diskette, a random access memory (RAM) circuit, a read-only memory (ROM) circuit, an erasable programmable read-only memory (EPROM or Flash memory) circuit, a portable compact disc read-only memory (CD-ROM), and a portable digital video disc read-only memory (DVD/Blu-ray). The computer program instructions may also be loaded onto or otherwise downloaded to a computer and/or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer and/or other programmable apparatus to produce a computer-implemented process. Accordingly, embodiments of the present invention may be embodied in hardware and/or in software (including firmware, resident software, micro-code, etc.) that runs on a processor or controller, which may collectively be referred to as “circuitry,” “a module” or variants thereof. Further, an example processing unit may include, by way of illustration, a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGA) circuits, any other type of integrated circuit (IC), and/or a state machine. As can be appreciated, an example processing unit may employ distributed processing in certain embodiments.
Further, in at least some additional or alternative implementations, the functions/acts described in the blocks may occur out of the order shown in the flowcharts. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Moreover, the functionality of a given block of the flowcharts and/or block diagrams may be separated into multiple blocks and/or the functionality of two or more blocks of the flowcharts and/or block diagrams may be at least partially integrated. Furthermore, although some of the diagrams include arrows on communication paths to show a primary direction of communication, it is to be understood that communication may occur in the opposite direction relative to the depicted arrows. Finally, other blocks may be added/inserted between the blocks that are illustrated.
It should therefore be clearly understood that the order or sequence of the acts, steps, functions, components or blocks illustrated in any of the flowcharts depicted in the drawing Figures of the present disclosure may be modified, altered, replaced, customized or otherwise rearranged within a particular flowchart, including deletion or omission of a particular act, step, function, component or block. Moreover, the acts, steps, functions, components or blocks illustrated in a particular flowchart may be inter-mixed or otherwise inter-arranged or rearranged with the acts, steps, functions, components or blocks illustrated in another flowchart in order to effectuate additional variations, modifications and configurations with respect to one or more processes for purposes of practicing the teachings of the present patent disclosure.
Although various embodiments have been shown and described in detail, the claims are not limited to any particular embodiment or example. None of the above Detailed Description should be read as implying that any particular component, element, step, act, or function is essential such that it must be included in the scope of the claims. Reference to an element in the singular is not intended to mean “one and only one” unless explicitly so stated, but rather “one or more.” All structural and functional equivalents to the elements of the above-described embodiments that are known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the present claims. Accordingly, those skilled in the art will recognize that the exemplary embodiments described herein can be practiced with various modifications and alterations within the spirit and scope of the claims appended below.