The specification relates generally to streaming video, and specifically to a device, system and method for real-time personalization of streaming video.
Personalization of video is a growing field. For example, an entity, such as a company, may use a client database, and the like, to personalize a video on a client-by-client basis, for example by incorporating text of a name of a particular client from the database into at least some portions of a video, as the entire video is being rendered. The entire video, personalized for the particular client, may then be stored in a memory and/or a database, and a link to the entire video may be transmitted to a device of the client (e.g. via an email, and the like) so that the entire video, which is personalized for the particular client, may be requested via the link.

However, when the video is to be personalized for a plurality of names of clients, and the like, for example as part of a marketing campaign, videos are generally produced, personalized for each of the plurality of names, at least on a one-to-one basis for the plurality of names; links to each of the respective personalized videos are transmitted to respective devices of the clients. When the number of names of clients is in the thousands, tens of thousands, hundreds of thousands, and the like, the number of videos produced becomes commensurately very large, which uses a large amount of processing resources. Furthermore, a large amount of memory is allocated to store the videos, which may need to be stored for a lengthy period, for example in the event a client does not request the video for days, months and/or years, and/or in the event a client requests the video more than once; hence, memory for storing thousands, tens of thousands, hundreds of thousands, and the like, of personalized videos may need to be allocated for a lengthy period, whether the personalized videos are requested or not. Indeed, in some instances, more than one video in different formats may be produced for each of the plurality of names, for example with different resolutions, different frame rates, different video types (e.g. MPEG2TS vs MP4 file types, and/or other file types), and the like, which again increases use of processing and memory resources.
For a better understanding of the various embodiments described herein and to show more clearly how they may be carried into effect, reference will now be made, by way of example only, to the accompanying drawings in which:
An aspect of the specification provides a device comprising: a communication interface; and a controller having access to a memory storing: nonpersonalized video segments; and data for rendering personalized video segments, the nonpersonalized video segments and the personalized video segments associated with a given order, the controller configured to: receive, from a communication device, using the communication interface, a request for a personalized video; cause rendering of at least a subset of the personalized video segments using the data for rendering the personalized video segments and incorporating personal data associated with the request; generate, and transmit to the communication device, using the communication interface, a manifest identifying at least a first video segment selected according to the given order; update, and transmit to the communication device, using the communication interface, the manifest to identify, according to the given order, further available video segments, as rendering of each of the personalized video segments is completed; and provide to the communication device, using the communication interface, as the personalized video, video segments identified in the manifest, in response to receiving requests for the video segments from the communication device.
Another aspect of the specification provides a method comprising: receiving, at a device, from a communication device, using a communication interface, a request for a personalized video, the device having access to a memory storing: nonpersonalized video segments; and data for rendering personalized video segments, the nonpersonalized video segments and the personalized video segments associated with a given order; causing rendering of at least a subset of the personalized video segments using the data for rendering the personalized video segments and incorporating personal data associated with the request; generating, and transmitting to the communication device, using the communication interface, a manifest identifying at least a first video segment selected according to the given order; updating, and transmitting to the communication device, using the communication interface, the manifest to identify, according to the given order, further available video segments, as rendering of each of the personalized video segments is completed; and providing to the communication device, using the communication interface, as the personalized video, video segments identified in the manifest, in response to receiving requests for the video segments from the communication device.
Attention is directed to
The computing device 101 is further in communication with a number “M” of a plurality of video processing devices 105-1 . . . 105-M, interchangeably referred to hereafter, collectively, as the video processing devices 105 and, generically, as a video processing device 105. While the video processing devices 105 are depicted as separate from the computing device 101, one or more of the video processing devices 105 may be components of the computing device 101 and/or another device of the system 100. For example, the computing device 101 may be configured to at least temporarily dedicate a portion of processing resources to implement the one or more of the video processing devices 105. Alternatively, one or more of the video processing devices 105 and/or the computing device 101 may implement functionality of the video processing devices 105 via one or more of an anonymous function, a function literal, lambda abstraction, lambda expression and/or a lambda function, and the like.
Furthermore, the number “M” of video processing devices 105 may be the same as the number of personalized video segments to be rendered for a personalized video, as described in more detail below; however, the number “M” of video processing devices 105 may also be less than or greater than the number of personalized video segments to be rendered for a personalized video. Either way, when the number “M” of video processing devices 105 is two or more, the video processing devices 105 may be configured to render personalized video segments in parallel.
As depicted, the computing device 101 is in further communication with a plurality of communication devices 107-1 . . . 107-N, interchangeably referred to hereafter, collectively, as the communication devices 107 and, generically, as a communication device 107, each of which is generally configured for requesting and playing streaming video, as described in more detail below. While the computing device 101 is depicted as being in communication with all the communication devices 107, the computing device 101 may be in communication with the communication devices 107 at different points in time, for example as each of the communication devices 107 requests a personalized video for a respective user associated with each of the communication devices 107. The number “N” of communication devices 107 may be as few as one communication device 107 or may be as many as tens, hundreds, thousands, hundreds of thousands, or more communication devices 107. The computing device 101 and the communication devices 107 may be in communication via respective communication connections and/or paths, and the like, represented by arrows in
The computing device 101 may comprise a plurality of computing devices and/or servers, for example in a cloud computing arrangement and/or a cloud computing environment; any of such plurality of computing devices and/or servers may be executing functionality of the computing device 101 in a distributed fashion, sequentially or in parallel, across the one or more computing devices. Such cloud computing devices and/or servers may include, but are not limited to, the video processing devices 105. Any of such plurality of computing devices and/or servers may be geographically co-located or remotely located and inter-connected via electronic and/or optical connections and/or paths, and the like. The computing device 101 may comprise one or more web servers configured to respond to requests for content from the communication devices 107, including, but not limited to, requests for a personalized video.
In general, the memory 103 stores: nonpersonalized video segments 111-1, 111-2, 111-3, 111-4, 111-5 (each labelled “NP VS”, where “NP VS” represents the term “Non-Personalized Video Segment”); and data 112-1, 112-2, 112-3, 112-4 (each labelled “Data PVS”, where “PVS” represents the term “Personalized Video Segment”), the data 112-1, 112-2, 112-3, 112-4 for rendering personalized video segments, as described in more detail below. The nonpersonalized video segments 111-1, 111-2, 111-3, 111-4, 111-5 will be interchangeably referred to hereafter, collectively, as the nonpersonalized video segments 111 and, generically, as a nonpersonalized video segment 111; similarly, the data 112-1, 112-2, 112-3, 112-4 will be interchangeably referred to hereafter, collectively, as the data 112 and, generically, as a set of data 112. Furthermore, the nonpersonalized video segments 111 and the data 112 may be identified using a Universally Unique Identifier (UUID), a Globally Unique Identifier (GUID), and the like, which may be used to identify a specific set of nonpersonalized video segments 111 and data 112. Similarly, each of the nonpersonalized video segments 111 and each set of data 112 may be identified using respective UUIDs, GUIDs and the like.
As depicted, the memory 103 further stores icon data 113 for rendering a personalized icon, as described in more detail below with respect to
The nonpersonalized video segments 111 and the personalized video segments to be rendered from the data 112 are generally associated with a given order; for example, as depicted, the nonpersonalized video segments 111 and the personalized video segments to be rendered from the data 112 are associated with a given order shown by the order of the nonpersonalized video segments 111 and the data 112. In other words, as depicted, the nonpersonalized video segment 111-1 is first in the given order, a personalized video segment to be rendered from the data 112-1 is second in the given order, etc. It is understood, however, that in
Together, however, the given order of the nonpersonalized video segments 111 and the personalized video segments to be rendered from the data 112 represent a personalized video that may be rendered and provided (e.g. streamed) to a communication device 107, for example by the computing device 101 upon request from a communication device 107.
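The given order described above may be modelled, for purposes of illustration only, as an ordered list that interleaves prerendered nonpersonalized segments with placeholders for personalized segments yet to be rendered. The following is a minimal sketch; the names (e.g. `SegmentSlot`) and file names are hypothetical assumptions, not taken from the specification:

```python
# Illustrative model of the given order of nonpersonalized video
# segments ("NP VS") and data for personalized video segments
# ("Data PVS") as a single ordered list; all names are hypothetical.
from dataclasses import dataclass

@dataclass
class SegmentSlot:
    index: int            # position in the given order
    personalized: bool    # True if rendered on demand from "Data PVS"
    source: str           # file name of a prerendered segment, or a data id

# Example order: nonpersonalized segment 111-1 first, a segment to be
# rendered from data 112-1 second, and so on.
given_order = [
    SegmentSlot(0, False, "np_vs_111_1.ts"),
    SegmentSlot(1, True,  "data_112_1"),
    SegmentSlot(2, False, "np_vs_111_2.ts"),
    SegmentSlot(3, True,  "data_112_2"),
]

# Only personalized slots need rendering when a request is received.
to_render = [slot for slot in given_order if slot.personalized]
print([slot.source for slot in to_render])
```

In such a model, the prerendered segments are available immediately, while each personalized slot identifies the set of data 112 from which a segment is rendered on demand.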
While the memory 103 is depicted as storing only one set of nonpersonalized video segments 111 and data 112 for rendering personalized video segments, the memory 103 may further store a plurality of sets of associated nonpersonalized video segments and data for rendering personalized video segments to render different personalized videos. For example, a first set of associated nonpersonalized video segments and data for rendering personalized video segments may be used to render a personalized video for a holiday greeting, while a second set of associated nonpersonalized video segments and data for rendering personalized video segments may be used to render a personalized video for a sales campaign and/or a marketing campaign.
The computing device 101 may be configured to provide content to the communication devices 107 requesting such content therefrom, including, but not limited to a personalized video streamed to each communication device 107. However, the personalized video, as a whole, is generally not rendered prior to receiving a request for the personalized video.
Rather, nonpersonalized video segments 111 are generally prerendered, while personalized video segments are rendered from the data 112 upon receiving a request for the personalized video, for example using personal data associated with the request. Furthermore, a portion of the data 112 may include one or more prerendered layers that are not personalized, and data for rendering a personalized layer of the personalized video. Once the nonpersonalized video segments 111 and the personalized video segments, rendered from the data 112, are provided to a communication device, the personalized video segments may be deleted and/or discarded such that only the nonpersonalized video segments 111 and the data 112 are stored long-term; in other words, personalized videos are provided “on-demand” and not prerendered. However, in some embodiments, the video rendered from the nonpersonalized video segments 111 and the personalized video segments rendered from the data 112 may be stored at least temporarily and/or for a given time period in the event the video is requested again, such that rendering a second time is obviated.
As described in more detail below, the data 112 includes data used to render a personalized layer using personal data, that may be combined with one or more of prerendered static layers and prerendered non-static layers; the data 112 may also comprise such prerendered static layers and/or prerendered non-static layers.
Such personal data used to render a personalized layer may be stored in a database 115, for example as personal data 116-1 . . . 116-N, interchangeably referred to hereafter, collectively, as the personal data 116 and, generically, as a set of personal data 116. The number “N” of sets of personal data 116 may be the same as the number “N” of the communication devices 107. Indeed, each set of personal data 116 may correspond to a database record of a client and/or a customer and the like of an entity associated with the system 100 and/or being managed by components of the system 100, and each communication device 107 may be a communication device associated with such clients (e.g. a client may be a user of a communication device 107; hence, the terms user and client are used interchangeably hereafter); however, the number of sets of personal data 116 and the number of communication devices 107 need not be the same.
Regardless, the database 115 may comprise a database of clients and/or customers and the like of an entity associated with the system 100 and/or being managed by components of the system 100. Hence, the database 115 may include tens, hundreds, thousands, hundreds of thousands and/or millions of sets of personal data 116, or more.
Each set of personal data 116 may include personal data of a respective client, and the like, including, but not limited to text corresponding to respective names 117-1 . . . 117-N (interchangeably referred to hereafter, collectively, as the names 117 and, generically, as a name 117), text corresponding to respective network addresses 118-1 . . . 118-N (interchangeably referred to hereafter, collectively, as the network addresses 118 and, generically, as a network address 118), and the like.
As depicted, each of the names 117 stored in the personal data 116 includes a respective first name of an associated client (e.g. “Bob” and “Sally”), and each of the network addresses 118 stored in the personal data 116 includes a respective email address of an associated client (e.g. “bob@abc.com” and “sally@123.com”), and the like. However, the personal data 116 may comprise other data associated with the user of the communication device 107, including, but not limited to, text corresponding to one or more of a name of a company at which the user works, a home address, a work address, favorite colors and/or other personal preferences, and the like. Furthermore, while the personal data 116 is described herein with respect to stored text, the personal data 116 may alternatively include graphics, including, but not limited to, an image of an associated client.
Furthermore, each set of personal data 116 may be associated with a user of a communication device 107. For example, a user of the communication device 107-1, associated with the personal data 116-1, may be named “Bob” and have an email address of “bob@abc.com”; a user of the communication device 107-2, associated with the personal data 116-2, may be named “Sally” and have an email address of “sally@123.com”.
Furthermore, the personal data 116 may be associated with a user of the communication device 107, but not necessarily associated with a communication device 107. For example, a communication device 107 may comprise an email application and/or a messaging application and/or a video application (not depicted), and the like, configured to receive and send messages associated with an email address stored in the personal data 116; such an email address may be used to configure an email application and/or a messaging application and/or a video application at any communication device 107. However, the email address is not specifically associated with a given communication device 107.
In other embodiments, the personal data 116 may be associated with a user of the communication device 107, and/or associated with a specific communication device 107 of a user; for example, personal data 116 associated with a communication device 107 of a user may include, but is not limited to, a phone number of a communication device 107, a MAC (Media Access Control) address of a communication device 107, an IP (Internet Protocol) address of a communication device 107, and the like.
Furthermore, each set of personal data 116 may be identified using a respective UUID, a respective GUID, and the like, which may be used to identify a specific set of personal data 116, and/or each name and/or network address and the like in each set of personal data 116 may be identified using a respective UUID, a respective GUID, and the like.
Attention is next directed to
Furthermore, the communication devices 107 may have a similar device structure as the device 200, though adapted for functionality of the communication device 107; for example, while the device 200 is depicted without a display device, an input device, speakers, microphones, cameras, location determining devices (e.g. a Global Positioning System (GPS) device and the like), the device 200 may include one or more of such components. Hence, while not depicted, the device 200 may further include one or more input devices, one or more display devices, one or more speakers, one or more microphones, one or more cameras, one or more location determining devices, and/or other types of components.
The device 200 comprises a controller 220, a memory 222 storing one or more applications 223, and a communication interface 224 (interchangeably referred to hereafter as the interface 224) interconnected, for example, using a computer bus. The one or more applications 223 are generally used to implement the functionality of the device 200; when there are two or more applications 223, such applications may be used to implement the functionality of the device 200 according to different modes of operation; for example one of the applications 223 may be used to render real-time personalized video via web form requests, while another of the applications 223 may be used to respond to requests via links in email, as described in more detail below. For simplicity, the one or more applications 223 will be interchangeably referred to hereafter as the application 223.
The controller 220 can comprise a processor and/or a plurality of processors, including but not limited to one or more central processing units (CPUs) and/or one or more processing units; either way, the controller 220 comprises a hardware element and/or a hardware processor. Indeed, in some implementations, the controller 220 can comprise an ASIC (application-specific integrated circuit) and/or an FPGA (field-programmable gate array) specifically configured to implement specific functionality for real-time personalization of streaming video. Hence, the device 200 is preferably not a generic computing device, but a device specifically configured to implement specific functionality for real-time personalization of streaming video. For example, the device 200 and/or the controller 220 can comprise a computer executable engine configured to implement specific functionality for real-time personalization of streaming video.
The memory 222 can comprise a non-volatile storage unit (e.g. Erasable Electronic Programmable Read Only Memory (“EEPROM”), Flash Memory) and a volatile storage unit (e.g. random access memory (“RAM”)). Programming instructions that implement the functional teachings of the device 200 as described herein are typically maintained, persistently, in the memory 222 and used by the controller 220 which makes appropriate utilization of volatile storage during the execution of such programming instructions. Those skilled in the art recognize that the memory 222 is an example of computer readable media that can store programming instructions executable on the controller 220. Furthermore, the memory 222 is also an example of a memory unit and/or memory module and/or a non-volatile memory.
In particular, the memory 222 stores the application 223 that, when processed by the controller 220, enables the controller 220 and/or the device 200 to: receive, from a communication device (for example the communication device 107), using the communication interface 224, a request for a personalized video; cause rendering of at least a subset of the personalized video segments using data for rendering the personalized video segments (for example the data 112) and incorporating personal data associated with the request (for example the personal data 116); generate, and transmit to the communication device, using the communication interface 224, a manifest identifying at least a first video segment selected according to the given order; update, and transmit to the communication device, using the communication interface 224, the manifest to identify, according to the given order, further available video segments, as rendering of each of the personalized video segments is completed; and provide, to the communication device, using the communication interface 224, as the personalized video, video segments identified in the manifest, in response to receiving requests for the video segments from the communication device.
Hence, it is understood by a person skilled in the art that such a manifest may be used in streaming video, including, but not limited to, streaming video according to the HTTP (Hypertext Transfer Protocol) Live Streaming (HLS) protocol. Hence, it is further understood by a person skilled in the art that the device 200, and the communication devices 107 to which a personalized video is provided, may each be generally configured to operate according to compatible streaming video protocols that use manifests to provide such streaming video including, but not limited to, the HLS protocol.
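By way of a non-limiting illustration of such a manifest, the following sketch builds a minimal HLS-style media playlist identifying video segments by URL. The helper name `build_manifest`, the segment URLs, and the target duration are illustrative assumptions and do not limit the embodiments described herein:

```python
# Minimal sketch of generating an HLS-style media playlist (manifest)
# identifying available video segments by URL; names are illustrative.
def build_manifest(segment_urls, target_duration=10, ended=False):
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{target_duration}",
        "#EXT-X-MEDIA-SEQUENCE:0",
    ]
    for url in segment_urls:
        lines.append(f"#EXTINF:{target_duration:.1f},")  # segment duration
        lines.append(url)                                # link to the segment
    if ended:
        lines.append("#EXT-X-ENDLIST")  # no further segments will be added
    return "\n".join(lines)

# A manifest identifying only a first video segment; more segments may
# be appended as rendering completes.
manifest = build_manifest(["https://example.com/seg0.ts"])
print(manifest)
```

Note that the manifest above omits the `#EXT-X-ENDLIST` indicator, signalling to a compatible player that further segments may yet be added.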
The communication interface 224 comprises a wired or wireless network interface which may include, but is not limited to, any suitable combination of serial ports, parallel ports, USB ports (Universal Serial Bus), and cables therefor, one or more broadband and/or narrowband transceivers, such as a cellular network transceiver, a wireless radio, a cell-phone radio, a cellular network radio, a Bluetooth™ radio, a NFC (near field communication) radio, a WLAN (wireless local area network) radio, a WiFi radio (e.g. one or more local area network or personal area network transceivers operating in accordance with an IEEE 802.11 standard (e.g., 802.11a, 802.11b, 802.11g)), a WiMax (Worldwide Interoperability for Microwave Access, operating in accordance with an IEEE 802.16 standard) radio, a packet based interface, an Internet-compatible interface, an analog interface, a PSTN (public switched telephone network) compatible interface, and the like, and/or a combination.
Attention is now directed to
Regardless, it is to be emphasized that the method 300 need not be performed in the exact sequence as shown, unless otherwise indicated; likewise, various blocks may be performed in parallel rather than in sequence; hence, the elements of the method 300 are referred to herein as “blocks” rather than “steps”. It is also to be understood that the method 300 can be implemented on variations of the device 200 as well.
Furthermore, while it is assumed hereafter that the method 300 is performed at one device 200, the method 300 may be performed at one or more devices 200, for example at a combination of one or more of the computing device 101 and the video processing devices 105.
At a block 302, the controller 220 receives, from a communication device (for example the communication device 107), using the communication interface 224, a request for a personalized video.
At a block 304, the controller 220 causes rendering of at least a subset of the personalized video segments using data for rendering the personalized video segments (for example the data 112) and incorporating personal data associated with the request (for example the personal data 116 and/or personal data received in the request of the block 302). Rendering of personalized video segments is described in more detail below with respect to
The rendering may be performed in parallel using the video processing devices 105. For example, the controller 220 may retrieve the data 112 and the personal data 116 that is respective to a communication device that transmitted the request received at the block 302, and cause the rendering of each personalized video segment to occur in parallel using the video processing devices 105. Control of the video processing devices 105 may occur by way of the device 200 retrieving the data 112 from the memory 103, and retrieving the personal data 116 to be used in rendering the personalized video segment from the database 115, and passing the data 112 and the personal data 116 to the video processing devices 105. Alternatively, and/or in addition, control of the video processing devices 105 may occur by way of the device 200 providing JSON (JavaScript Object Notation), XML (Extensible Markup Language) and the like to the video processing devices 105; for example the JSON and/or the XML may instruct the video processing devices 105 to retrieve the data 112 and the personal data 116 from locations of the memory 103 and the database 115. Such JSON and/or XML may be used with UUIDs and/or GUIDs and the like identifying the data 112 and personal data 116 at the memory 103 and/or the database 115. Such UUIDs and/or GUIDs and the like may be received with the request at the block 302.
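A non-limiting sketch of such a JSON-based job description follows; the field names and identifier strings are hypothetical assumptions used only for illustration:

```python
# Illustrative sketch: a JSON job description instructing a video
# processing device which set of data 112 and which personal data 116
# to retrieve; the field names and identifiers are hypothetical.
import json

job = {
    "segment_data_uuid": "uuid-data-112-1",   # identifies a set of data 112
    "personal_data_uuid": "uuid-pd-116-1",    # identifies personal data 116
    "output_segment": "pvs_1.ts",             # rendered segment file name
}
payload = json.dumps(job)

# A video processing device would parse the payload and look up the
# identified records in the memory 103 and the database 115.
received = json.loads(payload)
print(received["segment_data_uuid"])
```

Passing only identifiers in this manner, rather than the data itself, allows each video processing device to retrieve the data 112 and the personal data 116 directly from the memory 103 and the database 115.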
When the personalized video segments are rendered in parallel, the controller 220 may cause parallel processing resources (e.g. the video processing devices 105) to complete rendering of the personalized video segments according to the given order. For example, the controller 220 may cause the video processing devices 105 to render a respective personalized video segment from the first set of data 112-1 in a manner that causes the respective personalized video segment rendered from the first set of data 112-1 be completed prior to other personalized video segments rendered from the other data 112. In other words, as the data 112-1 is for rendering a second video segment of the personalized video, the controller 220 may cause the parallel processing resources to complete rendering the second segment prior to completing rendering further personalized segments.
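The parallel rendering with in-order consumption described above may be sketched as follows; the worker function, the sleep used to simulate rendering time, and the data identifiers are illustrative assumptions:

```python
# Sketch: render personalized segments in parallel (one worker per
# "video processing device 105") while consuming results in the given
# order, so the earliest segment in the order is ready for use first.
from concurrent.futures import ThreadPoolExecutor
import time

def render_segment(data_id, name):
    # Stand-in for rendering a personalized segment from a set of
    # data 112 and a name 117.
    time.sleep(0.01)
    return f"{data_id}:{name}.ts"

data_sets = ["data_112_1", "data_112_2", "data_112_3"]
with ThreadPoolExecutor(max_workers=3) as pool:   # parallel "devices"
    futures = [pool.submit(render_segment, d, "Bob") for d in data_sets]
    # Consume results in the given order, regardless of which worker
    # finishes first.
    segments = [f.result() for f in futures]
print(segments)
```

In this sketch the segment rendered from the first set of data completes its hand-off before later segments are consumed, mirroring the ordering described above.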
Furthermore, the personal data used to render a personalized video segment may be retrieved from the personal data 116 stored in the database (e.g. using a respective network address, UUID, and the like, received with the request of the block 302); for example, in these embodiments, the request received at the block 302 may include a network address 118 stored in the personal data 116, a UUID identifying the personal data 116, and the like, which may be used to retrieve a corresponding name 117, and the like, from the personal data 116, for example in a database look-up process, and the like. For example, in these embodiments, a message, such as an email, and the like, may be transmitted to a communication device, the message including a link for initiating a request for a personalized video from the device 200; when the link is actuated, the request may be transmitted to the device 200 (e.g. via an associated network address in the link), the request including a network address of the communication device, UUIDs and/or GUIDs, and the like, identifying the personal data 116 to be used when generating a personalized video, and/or the segments 111 and/or the data 112 to be used when generating a personalized video. Furthermore, such UUIDs and/or GUIDs, and the like may be included in the message transmitted to the communication device.
Alternatively, the personal data used to render a personalized video segment may be received with the request of the block 302; for example, in these embodiments, the request received at the block 302 may include text corresponding to a name to be used to render the personalized video segments and/or a graphic (e.g. an image of a user) to be used to render the personalized video segments. In these embodiments, for example, a user of a communication device transmitting a request may fill out a form, such as a webform, and the like, requesting a personalized video, the form including fields for receiving personal data to be used to render a personalized video, such as a name, an image, and the like; the form may further include a virtual button, a link, and the like, for initiating a request for a personalized video from the device 200; when the virtual button, the link, and the like, is actuated, the request may be transmitted to the device 200 (e.g. via a network address associated with the virtual button), the request including the personal data to be used to render a personalized video. Alternatively, the webform may include code, such as JavaScript and the like, which causes data received in fields to be pre-submitted to the device 200; such a webform may comprise a multi-stage webform, and the like, the data from which is fully submitted when the virtual button, the link, and the like, is actuated.
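The two modes described above for resolving personal data, a database look-up keyed by an identifier received with the request, or personal data carried directly in the request, may be sketched as follows; the function and field names are illustrative assumptions:

```python
# Sketch of resolving personal data for a request: either the request
# carries a UUID used in a database look-up (link-in-email mode), or
# the request carries the personal data directly (webform mode).
# All names and identifiers here are hypothetical.
personal_data_db = {  # stand-in for sets of personal data 116 in database 115
    "uuid-116-1": {"name": "Bob", "email": "bob@abc.com"},
    "uuid-116-2": {"name": "Sally", "email": "sally@123.com"},
}

def resolve_personal_data(request):
    if "personal_data_uuid" in request:           # look-up mode
        return personal_data_db[request["personal_data_uuid"]]
    return {"name": request["name"]}              # data-in-request mode

print(resolve_personal_data({"personal_data_uuid": "uuid-116-1"})["name"])  # Bob
print(resolve_personal_data({"name": "Alice"})["name"])                     # Alice
```

Either way, the resolved name (and the like) is then incorporated into the personalized video segments rendered at the block 304.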
At a block 306, the controller 220 generates, and transmits to the communication device, using the communication interface 224, a manifest identifying at least a first video segment selected according to the given order. The manifest may comprise a link, such as a URL (Uniform Resource Locator), and the like, to the first video segment. For example, with reference to
In some embodiments, as depicted in
However, the first video segment may alternatively be a personalized video segment to be rendered from a set of data 112. Hence, when the request is received at the block 302, rendering of the first video segment is initiated and the manifest identifying at least the first video segment may be transmitted whether the first video segment has completed rendering, or not. In these embodiments, the link to the first video segment may be a link and/or URL to a personalized video segment that is not yet available. However, when the rendering of the first video segment is complete, the controller 220 associates the completed first video segment with the link to the first video segment in the manifest.
At a block 308, the controller 220 determines whether rendering of a personalized video segment is complete, for example a next personalized video segment according to the given order. When rendering of a personalized video segment is not complete (a “NO” decision at the block 308), the controller 220 continues to monitor for completion of rendering of a personalized video segment at the block 308.
When rendering of a personalized video segment is complete (a “YES” decision at the block 308), at a block 310, the controller 220 updates, and transmits to the communication device, the manifest to identify, according to the given order, further available video segments. For example, each time a next personalized video segment is completed, according to the given order, the manifest is updated to identify the next personalized video segment, as well as any prerendered nonpersonalized video segments 111 that follow the next personalized video segment. Furthermore, when the manifest is updated to identify a last video segment in the personalized video, the manifest may be updated to identify that no further segments will be added (for example, an HLS manifest may include an “ENDLIST” indicator), and the like.
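By way of a non-limiting, illustrative sketch (the segment names and durations being hypothetical placeholders, and not derived from the present specification), an HLS-style media playlist that grows as video segments become available, and to which an “ENDLIST” indicator is appended once the last segment is identified, may resemble the following:

```python
# Illustrative sketch: building an HLS-style media playlist that grows as
# segments become available. Segment URIs and durations are hypothetical.

def build_manifest(available_segment_uris, segment_duration_s=6, ended=False):
    """Return the text of an HLS-style media playlist listing the available segments."""
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{segment_duration_s}",
        "#EXT-X-MEDIA-SEQUENCE:0",
    ]
    for uri in available_segment_uris:
        lines.append(f"#EXTINF:{segment_duration_s}.0,")
        lines.append(uri)
    if ended:
        # Signals that no further segments will be added (the "ENDLIST" indicator).
        lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines) + "\n"

# First manifest: only the first (nonpersonalized) segment is available.
m1 = build_manifest(["seg-111-1.ts"])
# Updated manifest once the last segment has rendered: ENDLIST is appended.
m2 = build_manifest(["seg-111-1.ts", "seg-1012-1.ts"], ended=True)
```

Such a sketch is merely one possible serialization; any suitable manifest format compatible with the streaming protocol in use may be employed.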
At a block 312, the controller 220 determines whether a request for video segments is received from the communication device. When a request for the video segments is not received from the communication device (a “NO” decision at the block 312), the controller 220 continues to monitor for completion of rendering of a personalized video segment at the block 308.
When a request for the video segments is received from the communication device (a “YES” decision at the block 312), at a block 314, the controller 220 provides, to the communication device, as the personalized video, video segments identified in the manifest, in response to receiving requests for the available video segments from the communication device.
In general, requests for video segments received from the communication device may comprise the links to video segments provided in the manifest, and a communication device requesting the video segments generally requests the video segments in the order of the links in the manifest, which is generally the given order of the nonpersonalized video segments and the personalized video segments. Hence, for example, a communication device receiving a manifest may request the video segments using the links in the manifest requested in the order of the links in the manifest (e.g. a video segment associated with a first link is requested first, a video segment associated with a second link is requested second, etc.).
In embodiments where the request for the video segments received from the communication device is for the first video segment (e.g. the link to the first video segment), and the first video segment is a prerendered nonpersonalized video segment, such as the nonpersonalized video segment 111-1, the controller 220 transmits and/or streams the first video segment upon receiving the request.
However, in embodiments where the request for the video segments received from the communication device is for the first video segment, and the first video segment is a personalized video segment where rendering is not yet completed, the controller 220 may transmit an error message (e.g. a “404 Not Found” error message, and the like) to the requesting communication device. At communication devices operating according to some streaming protocols, receipt of such an error message causes a communication device to again request the video segment that caused the error message. Alternatively, when a requested video segment is not received within a given time period, and the like, the communication device requesting the video segment will resubmit the request for the video segment until received and/or a timeout occurs; indeed, communication devices configured to operate in this manner will continue to request video segments until it is identified that no further video segments are available (e.g. the HLS “ENDLIST” identifier in the manifest is reached). Hence, this process may automatically continue between the controller 220 and the requesting communication device until rendering of the personalized video segment is complete and the controller 220 provides the personalized video segment to the communication device.
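By way of a non-limiting, illustrative sketch (the function names and simulated server being hypothetical), the request-and-retry behavior of such a communication device, which re-requests a video segment that returned a “not found” style response until the segment becomes available, may be modeled as follows:

```python
# Illustrative sketch (assumed behavior, not a normative client): request
# segments in manifest order, retrying when the server responds "not ready"
# (analogous to a 404 while rendering), until the segment is received.

def fetch_segments(manifest_links, server_fetch, max_retries=5):
    """server_fetch(link) returns segment bytes, or None for a 404-style miss."""
    received = []
    for link in manifest_links:
        for _attempt in range(max_retries):
            segment = server_fetch(link)
            if segment is not None:
                received.append(segment)
                break
        else:
            # Retries exhausted without the segment becoming available.
            raise TimeoutError(f"segment never became available: {link}")
    return received

# Simulated server: the personalized segment completes rendering only by the
# third request; the nonpersonalized segment is always available.
state = {"calls": 0}
def server_fetch(link):
    if link == "seg-personalized.ts":
        state["calls"] += 1
        return b"personalized" if state["calls"] >= 3 else None
    return b"nonpersonalized"

segments = fetch_segments(["seg-111-1.ts", "seg-personalized.ts"], server_fetch)
```

In this sketch, the retry loop stands in for the automatic re-request behavior of communication devices operating according to some streaming protocols.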
At a block 316, the controller 220 determines whether all video segments in the personalized video have been provided to the communication device. When all video segments in the personalized video have not yet been provided to the communication device (a “NO” decision at the block 316), the controller 220 may repeat one or more of the blocks 308, 310, 312, 314.
However, the controller 220 generally stops executing the blocks 308, 310 once rendering of all the personalized video segments is complete, but continues to monitor for requests at the block 312, presuming not all video segments in the personalized video have been provided to the communication device. Indeed, once rendering of all the personalized video segments is complete, a “NO” decision at the block 312 may result in the block 312 being repeated by the controller 220 until a request is received (e.g. a “YES” decision at the block 312). Once all the available video segments are provided (a “YES” decision at the block 316), the controller 220 ends the method 300 at a block 318.
While the method 300 has been described with respect to each personalized video segment being rendered independent of one another, in some embodiments the controller 220 determines whether any portions and/or layers and/or content of previously rendered personalized video segments are shared with personalized video segments to be rendered, for example by “pre-processing” the data 112 and/or the personal data 116 to determine personalized video segments that share content; such “pre-processing” the data 112 and/or the personal data 116 may occur when the request for the personalized video is received at the block 302, prior to rendering at least a subset of the personalized video segments at the block 304 and/or receiving requests for video segments at the block 312. Such pre-processing may include comparing a set of data 112 with the other sets of data 112 to determine portions and/or layers and/or content that are shared.
When personalized video segments share any portions and/or layers and/or content, such shared content may be rendered once, for example when rendering a first personalized video segment that includes the shared content, and reused when rendering a later personalized video segment that includes the shared content. In some of these embodiments, some personalized video segments may be the same as each other (e.g. respective data 112 is the same for each of a plurality of personalized video segments) and such personalized video segments may be rendered once and provided at the block 314 each time a request is received, at the block 312, for a personalized video segment that is the same as a previously rendered personalized video segment.
Prior to discussing example embodiments of the method 300, rendering of personalized video segments at the block 304 is described with respect to
Attention is next directed to
For example, the personalized video segment to be rendered using the example data 112 depicted in
The three prerendered non-static layers 402 may each comprise a respective non-static layer of each of the three frames (e.g. the prerendered non-static layer 402-1 is a non-static layer of the first frame, the prerendered non-static layer 402-2 is a non-static layer of the second frame, and the prerendered non-static layer 402-3 is a non-static layer of the third frame). Each of the non-static layers 402 includes, for example, a moving object (e.g. as depicted, a person holding a whiteboard, a sign, and the like, referred to hereafter as a “whiteboard”) which moves from frame to frame; as such, the moving object in each of the non-static layers 402 has had a transformation applied so that the moving object has a “blur” (represented by dotted lines) from the first frame to the second frame to the third frame.
In general, each of the layers 401, 402 has been prerendered and stored in the example data 112.
However, a personalized layer has not yet been rendered. As such, the example data 112 depicted in
In some embodiments a position of a field 404 may be static from layer to layer (e.g. the personal data does not move from frame to frame); in these embodiments, only one personalizable layer 403 may be provided and used to render one personalized layer to be used in all the frames.
However, as depicted, the field 404 moves from layer to layer; in other words, in each of the personalizable layers 403, a respective field 404 is in a different position. Hence, the data 112 further includes transformation data 405-1, 405-2, 405-3 (interchangeably referred to, hereafter, collectively as the transformation data 405 and, generically, as a set of transformation data 405) which may comprise an algorithm and/or data for transforming the personalized data in the respective fields 404 for each respective personalizable layer 403, for example when rendering a respective personalized layer. For example, such a respective personalized layer (e.g. rendered from each of the personalizable layers 403) may be transformed according to one or more of a previous respective personalized layer and a next respective personalized layer. In particular, the set of transformation data 405-1 is for transforming personalized data in the fields 404-1 of the personalizable layer 403-1, the set of transformation data 405-2 is for transforming personalized data in the fields 404-2 of the personalizable layer 403-2, and the set of transformation data 405-3 is for transforming personalized data in the fields 404-3 of the personalizable layer 403-3. The transformation data 405 may indicate how text and/or graphics of personal data in a respective field 404 is to be transformed in each of the personalized layers to better render movement of the text and/or graphics of personal data between the frames, and/or to customize the text and/or graphics for rendering onto an object in the non-static layers 402 (e.g. the transformation data 405 may be for warping and/or shaping the text and/or graphics for compatibility with the object). Non-limiting examples of such transformation data 405 include algorithms and/or data for blurring text and/or graphics opposite a direction of movement from frame to frame.
Furthermore, each set of transformation data 405 may be generated when the layers 403 are generated. In some embodiments, a set of transformation data 405 may comprise an empty set and/or may not be present, for example when no transformation is to occur for a given field 404 (e.g. when a field 404 doesn't move from frame to frame).
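By way of a non-limiting, illustrative sketch (the field positions and data structures being hypothetical), a set of transformation data 405 of the kind described above may be derived from the per-frame positions of a field, with the blur vector pointing opposite the direction of movement and scaling with the displacement between frames, and with an empty set for a frame in which the field has not yet moved:

```python
# Illustrative sketch (hypothetical data): derive per-frame transformation
# data that blurs personalized text opposite its direction of movement, with
# blur magnitude proportional to the displacement between frames.

def transformation_data(field_positions):
    """field_positions: list of (x, y) positions of a field, one per frame.
    Returns one blur descriptor per frame; an empty dict means no transformation
    (e.g. the field has not moved)."""
    transforms = [{}]  # first frame: the field is initially still/static
    for prev, curr in zip(field_positions, field_positions[1:]):
        dx, dy = curr[0] - prev[0], curr[1] - prev[1]
        # The blur vector points opposite the movement direction.
        transforms.append({"blur_dx": -dx, "blur_dy": -dy})
    return transforms

# A field that moves between frames, faster between the second and third frames,
# yielding a larger blur for the third frame.
t = transformation_data([(10, 10), (14, 14), (22, 22)])
```

In this sketch, a larger displacement between frames yields a proportionally larger blur vector, consistent with faster apparent movement producing more blurring.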
Furthermore, while positions of the fields 404 are depicted as being coupled with movement of the objects in the non-static layers 402 (e.g. such that a field 404 aligns with a position of the whiteboard in the non-static layers 402), positions of the fields 404 may be decoupled from the positions of the objects in the non-static layers 402. As will be described below, a name of a client from personal data may be rendered on the whiteboard of the non-static layers 402, with the positions and sizes of the fields 404 being similar to the positions and sizes of the whiteboard in the non-static layers 402; however, a name of a client may alternatively move independent of the whiteboard.
Furthermore, the example data 112 of
Attention is next directed to
In particular, the name “Bob” is inserted into each of the fields 404 to render a respective personalized layer 503 of each of the three frames, with the transformation data 405 used to “blur” the name “Bob” (the blurring represented by broken lines) opposite a direction of movement of the name “Bob” between the personalized layers 503. For example, the name “Bob” is moving up and to the right from the personalized layer 503-1 to the personalized layer 503-2; hence, the name “Bob” is blurred down and to the left in the personalized layer 503-2, the transformation data 405-2 defining such blurring; similarly, the name “Bob” continues to move up and to the right from the personalized layer 503-2 to the personalized layer 503-3; hence, the name “Bob” is blurred down and to the left in the personalized layer 503-3, the transformation data 405-3 defining such blurring. As the name “Bob” appears to be moving faster between the personalized layers 503-2, 503-3, as compared to between the personalized layers 503-1, 503-2, there may be more blurring in the personalized layer 503-3 than in the personalized layer 503-2 (e.g. as depicted), as defined in the transformation data 405. As the name “Bob” is initially still and/or static in the personalized layer 503-1, the transformation data 405-1 may be omitted and/or may be an empty set, and/or may define blurring in the personalized layer 503-1 that corresponds to any blurring in the corresponding non-static layer 402-1.
Attention is next directed to
The rendering depicted in
In particular, in
In yet further embodiments, the respective layers 401, 402 for each frame 601 may be combined and stored in the data 112, for example, when generating the data 112. In other embodiments, the respective layers 401, 402 for each frame 601 may be combined while the personalized layers 503 are being generated. In yet further embodiments, more layers may be added to the frames 601 including, but not limited to, layers where objects occlude the personalized data, such layers being “on top” of the personalized layers 503. In yet further embodiments, when the movement of personalized data is decoupled from the movement of objects in the non-static layers 402, and the objects in the non-static layers 402 are to occlude the personalized data in the personalized layers 503, one or more of the non-static layers 402 (e.g. the non-static layers 402 that provide such occlusion) may be “on top” of the personalized layers 503. Indeed, the various layers of the frames 601 may be combined in any suitable manner, with the data 112 including data defining an order of the layers when combining into the frames 601.
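By way of a non-limiting, illustrative sketch (the layer names and compositing model being hypothetical simplifications), combining layers according to a stored layer order, including placing a non-static layer “on top” of a personalized layer for occlusion, may be modeled as follows:

```python
# Illustrative sketch (assumed layer model): combine static, non-static and
# personalized layers of a frame in an order defined alongside the layer data,
# with later layers drawn "on top" of earlier ones.

def composite_frame(layers, order):
    """layers: dict mapping layer name to an opaque layer object;
    order: list of layer names, bottom-most first."""
    frame = []
    for name in order:
        frame.append(layers[name])  # a real renderer would alpha-blend here
    return frame

# Typical order: personalized layer on top of the static and non-static layers.
frame = composite_frame(
    {"static": "401-1", "non_static": "402-1", "personalized": "503-1"},
    order=["static", "non_static", "personalized"],
)

# Occlusion case: the non-static layer is drawn on top of the personalized layer.
occluding = composite_frame(
    {"static": "401-1", "non_static": "402-1", "personalized": "503-1"},
    order=["static", "personalized", "non_static"],
)
```

In this sketch, the `order` list stands in for the data, stored with the data 112, that defines the order of the layers when combining them into frames.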
While rendering of the personalized video segment 612 is described with respect to one field 404 in the personalizable layers 403, the personalizable layers 403 may include more than one field 404, for example for including other text and/or graphics from the personal data 116 including, but not limited to, names of companies and the like. Furthermore, other data from the personal data 116 may be used to customize text and/or graphics incorporated into the fields 404 including, but not limited to, favorite colors, and the like, from the personal data 116; for example, the text “Bob” of the name 117-1 may be rendered in a color stored in the personal data 116-1 as a favorite color.
Similarly,
Techniques similar to those described with respect to
The method 300 is next described with respect to
Attention is next directed to
In the depicted example embodiment, the computing device 101 is receiving (e.g. at the block 302 of the method 300) a request 801 for a personalized video from the communication device 107-1. For example, as depicted, a webform 810 is being provided at the communication device 107-1 (e.g. at a display device thereof), for example in a browser application; in particular, a user of the communication device 107-1 may be browsing the Internet and load the webform 810 into a browser due to an interest in a product, service, and the like being provided by an entity associated with the nonpersonalized video segments 111 and the data 112. The user may use the webform 810 to request a personalized video. Such a user may or may not have a pre-existing relationship with an entity providing the webform 810 and hence data of the user may or may not be stored in the personal data 116.
As depicted, the webform 810 includes a field 811 for inputting a name (e.g. “Bob”), for example using an input device of the communication device 107-1, and/or a field 812 for inputting an email address (e.g. “bob@abc.com”). The webform 810 further includes an optional virtual button 813, and the like, which, when actuated, (e.g. using an input device of the communication device 107-1) causes the communication device 107-1 to transmit the request 801 to the computing device 101, with the name and the email address entered in the respective fields 811, 812. The request 801 may further include a rendered version of the webform 810 with the fields 811, 812 “filled in” with the name and the email address. The optional virtual button 813 is hence associated with a network address of the computing device 101 such that the request 801 is automatically transmitted to the computing device 101 when the optional virtual button 813 is actuated. However, any actuatable link, and the like, may be used in place of the virtual button 813 and/or the webform 810 may submit data entered in the fields 811, 812 as entered and automatically transmit the request 801 without actuation of a virtual button, a link, and the like, in the webform 810 (indeed, in these embodiments, an actuatable virtual button, link, and the like, may not be present in the webform 810).
The request 801 may further include a file type (e.g. MPEG2TS, MP4, and the like), a frame rate, a resolution and the like of a personalized video being requested.
The request 801 may further comprise one or more identifiers of the personalized video to be generated, for example one or more UUIDs, GUIDs, and the like identifying the segments 111 and the data 112; such UUIDs and/or GUIDs, and the like may be incorporated into the webform 810.
As also depicted in
However, in other embodiments, there need not be a corresponding set of personal data 116 for the email address 818; rather, the personal data used to render a personalized video may be received via the request 801 without a database lookup and the like.
Furthermore, a person of skill in the art will further understand that the webform 810 and the request 801 are specifically for requesting a personalized video to be generated from the nonpersonalized video segments 111 and the data 112. When the memory 103 stores other sets of nonpersonalized video segments and the data for rendering personalized video segments, other webforms may be used to request personalized videos from those other sets. Hence, the request 801 may include an identifier identifying the personalized video to be generated from the nonpersonalized video segments 111 and the data 112.
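By way of a non-limiting, illustrative sketch (the field names and serialization being hypothetical, and not a normative request format), a request such as the request 801, carrying the form data together with an identifier of the nonpersonalized video segments 111 and the data 112 and the requested video parameters, may be serialized as follows:

```python
# Illustrative sketch (hypothetical field names): serializing a request for a
# personalized video as JSON, including an identifier of the segments and data
# from which the personalized video is to be generated.

import json
import uuid

def build_request(name, email, video_id, file_type="MP4", resolution="1920x1080"):
    return json.dumps({
        "name": name,              # from the field 811
        "email": email,            # from the field 812
        "video_id": video_id,      # identifies the segments 111 and the data 112
        "file_type": file_type,    # requested file type
        "resolution": resolution,  # requested resolution
    })

request_801 = build_request("Bob", "bob@abc.com", str(uuid.uuid4()))
parsed = json.loads(request_801)
```

In this sketch, `video_id` stands in for the UUID and/or GUID, and the like, that identifies which set of nonpersonalized video segments and rendering data the request pertains to.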
Attention is next directed to
As depicted, the computing device 101 has retrieved the nonpersonalized video segments 111 and the data 112 from the memory 103, for example in response to receiving the request 801. While the present example embodiment is described with respect to the computing device 101 having retrieved the nonpersonalized video segments 111 and the data 112 from the memory 103, the computing device 101 may alternatively not retrieve the nonpersonalized video segments 111 and the data 112 from the memory 103 but rather control the various components of the system 100 to retrieve data from the memory 103 using JSON and/or XML (each of which may use UUIDs, GUIDs and the like to identify data from the memory 103 and/or the database 115), and the like, and/or using links to the nonpersonalized video segments 111 and the data 112 from the memory 103 as stored in the memory 103.
As further depicted in
While as depicted only the first two sets of data 112-1, 112-2 are provided to the video processing devices 105 to render a first two personalized video segments (e.g. in parallel), in other embodiments, all the data 112 may be provided to respective video processing devices 105 to render all the personalized video segments (e.g. in parallel), assuming that the number “M” of video processing devices 105 is greater than or equal to the number “4” of sets of data 112. When the number “M” of video processing devices 105 is less than the number “4” of sets of data 112, the computing device 101 may generally cause parallel processing resources (e.g. the video processing devices 105) to complete rendering of the personalized video segments according to the given order such that rendering of the first personalized video segment (from the data 112-1) is completed at least first, rendering of the second personalized video segment (from the data 112-2) is completed at least second, etc. However, the computing device 101 may generally cause parallel processing resources (e.g. the video processing devices 105) to render all the personalized video segments concurrently.
As further depicted in
Each link 901 may identify a respective video segment using a URL to a respective video segment. For example, the link 901-1 may comprise a link and/or a URL to the nonpersonalized video segment 111-1, as retrieved from the memory 103 or as stored in the memory 103, the link 901-2 may comprise a link and/or a URL to a personalized video segment that is being rendered by the video processing device 105-1 from the data 112-1 and the name 817, etc. In some embodiments, the link 901-1 may be encrypted and/or signed with a cryptographic key (e.g. previously provided to the communication device 107-1, for example when the request 801 is received), and the like; in other words, encryption and/or signing schemes may be used when exchanging data between the computing device 101 and the communication devices 107. Alternatively, the link 901-2 may not be generated until the personalized video segment that is being rendered by the video processing device 105-1 from the data 112-1 and the name 817 is complete. Similarly, links 901 to other personalized video segments to be rendered by the video processing device 105-1 from the data 112 and the name 817 may not be generated until respective rendering is complete.
However, the links 901 to all the video segments to be provided to the communication device 107-1 may be generated when the request 801 is received and/or before rendering of the personalized video segments begins; indeed, generation of the links 901 may include pre-processing of the data 112 and/or the personal data 116 to determine personalized video segments (yet to be generated) that share content, such pre-processing used to reduce redundant generation of at least portions of the personalized video segments. Furthermore, when the links 901 are generated before rendering of the personalized video segments begins, a memory location is allocated for storing each of the personalized video segments associated with the links 901 (e.g. a memory location at the computing device 101, the memory 103, the database 115 and/or another memory).
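By way of a non-limiting, illustrative sketch (the URL scheme, key handling and signature format being hypothetical), generating a link per segment when the request is received, before rendering begins, and optionally signing each link with a cryptographic key, may be modeled as follows:

```python
# Illustrative sketch (assumed scheme): generate a link per segment up front,
# before rendering begins, optionally appending an HMAC signature so that the
# links can be verified when segments are later requested.

import hashlib
import hmac

def make_segment_links(base_url, segment_ids, key=None):
    links = []
    for seg_id in segment_ids:
        url = f"{base_url}/{seg_id}.ts"
        if key is not None:
            # Sign the URL; a server can recompute and compare this signature.
            sig = hmac.new(key, url.encode(), hashlib.sha256).hexdigest()
            url = f"{url}?sig={sig}"
        links.append(url)
    return links

links = make_segment_links(
    "https://example.com/video", ["111-1", "1012-1"], key=b"secret")
```

In this sketch, a link exists for a personalized segment before its rendering is complete; the memory location allocated for the segment is simply associated with the link once rendering finishes.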
As depicted, the computing device 101 has associated links 901, that identify personalized video segments to be rendered, with respective data 112, though such an association may be used as a placeholder until the respective personalized video segments are rendered, and/or such an association may not occur, and some other placeholder may be used.
As depicted, the computing device 101 generates (e.g. at the block 306 of the method 300) a manifest 903 identifying at least a first video segment selected according to the given order and, in particular, the link 901-1 which identifies the first nonpersonalized video segment 111-1. The manifest 903 does not include the link 901-2 as rendering of the respective personalized video segment is not yet complete. Indeed, the computing device 101 omits links 901 to other video segments that follow the link 901-2 to ensure that the respective video segments are not requested out of the given order.
Also depicted in
Hence,
However, in embodiments where a first available video segment, according to the given order, is a personalized video segment, the controller 220 of the device 200 and/or the computing device 101 may be further configured to: in response to receiving a request, transmit a manifest identifying the first personalized video segment before rendering of the first personalized video segment is complete; and, when a request for the first personalized video segment is received prior to the rendering of the first personalized video segment being complete, return a message that prompts the communication device 107-1 to again request the first personalized video segment.
Attention is next directed to
As depicted, each video processing device 105-1, 105-M has completed rendering of a respective personalized video segment 1012-1, 1012-2 (e.g. the personalized video segment 1012-1 was rendered from the data 112-1 and the name 817, and the personalized video segment 1012-2 was rendered from the data 112-2 and the name 817). The respective personalized video segments 1012-1, 1012-2 are generally rendered as described with respect to
In response to determining (e.g. a “YES” decision at the block 308) that rendering of a personalized video segment 1012 is complete, the computing device 101 generates (e.g. at the block 310) an updated manifest 1023 (e.g. the manifest 903 is updated to the manifest 1023) to include the links 901-2, 901-3, 901-4, 901-5, 901-6. For example, the links 901-2, 901-3 identify the respective personalized video segments 1012-1, 1012-2 and/or the computing device 101 associates the links 901-2, 901-3 with the personalized video segments 1012-1, 1012-2; indeed, in the depicted example, the data 112-1, 112-2 is replaced with the personalized video segments 1012-1, 1012-2 in the association with the links 901-2, 901-3 (though such a replacement is illustrative and the initial association of the data 112-1, 112-2 with the links 901-2, 901-3 may not have occurred and/or may have been used as a placeholder and/or another type of placeholder may have been used).
However, the manifest 1023 further includes the links 901-4, 901-5, 901-6 that identify nonpersonalized video segments 111-2, 111-3, 111-4 that occur between the personalized video segment 1012-2 and another personalized video segment that is still being rendered and/or that is still to be rendered (e.g. a personalized video segment rendered from the data 112-3). In other words, the updated manifest 1023 includes all available video segments. Put another way, as rendering of a next personalized video segment is completed, the computing device 101 updates a manifest to identify the next personalized video segment and any nonpersonalized video segments that occur between the next personalized video segment and another personalized video segment that is still being rendered (and/or that is still to be rendered).
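By way of a non-limiting, illustrative sketch (the data structures being hypothetical), the rule described above, in which the manifest identifies video segments in the given order up to, but not including, the first personalized video segment whose rendering is not yet complete, may be expressed as follows:

```python
# Illustrative sketch (assumed data model): the manifest lists segments in the
# given order up to, but not including, the first personalized segment whose
# rendering is incomplete, so that segments cannot be requested out of order.

def available_segments(ordered_segments, rendered):
    """ordered_segments: list of (segment_id, is_personalized) tuples in the
    given order; rendered: set of personalized segment ids whose rendering is
    complete."""
    available = []
    for seg_id, is_personalized in ordered_segments:
        if is_personalized and seg_id not in rendered:
            break  # everything after an unrendered segment is withheld
        available.append(seg_id)
    return available

order = [("111-1", False), ("1012-1", True), ("111-2", False), ("1012-2", True)]
# Before any personalized rendering completes, only the first segment is listed.
assert available_segments(order, rendered=set()) == ["111-1"]
# Once the first personalized segment renders, the following nonpersonalized
# segment becomes available too, up to the next unrendered personalized segment.
assert available_segments(order, rendered={"1012-1"}) == ["111-1", "1012-1", "111-2"]
```

In this sketch, each completion of a personalized segment extends the available prefix of the given order, matching the manifest updates described above.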
As also depicted in
Furthermore, in
Hence, the computing device 101 receives the request 1031 (e.g. at the block 312 of the method 300) and, in response, provides (e.g. at the block 314 of the method 300) the nonpersonalized video segment 111-1 to the communication device 107-1. For example, the computing device 101 may stream the nonpersonalized video segment 111-1 to the communication device 107-1, as previously retrieved from the memory 103, and/or by retrieving the nonpersonalized video segment 111-1 from the memory 103 when not previously retrieved. As the nonpersonalized video segment 111-1 is received at the communication device 107-1, the communication device 107-1 “plays” the nonpersonalized video segment 111-1, for example using a streaming video application.
While in
In general, the computing device 101 continues to update the manifest used to provide video segments to the communication device 107-1 as rendering of the personalized video segments is completed. For example, attention is next directed to
Also depicted in
Similarly, as depicted in
When rendering of all the personalized video segments 1012-1, 1012-3, 1212-3, 1212-4 has been completed, the block 308 and the block 310 of the method 300 are no longer implemented, though the computing device 101 continues to implement the block 312 and the block 314 until all the video segments of the personalized video are provided to the communication device 107-1 (e.g. a “YES” decision at the block 316 such that the method 300 ends at the block 318).
While not depicted, it is further understood by a person of skill in the art that the computing device 101 and/or the video processing devices 105 may further convert any video segments (including the nonpersonalized video segments 111) to a file type, a frame rate and/or a resolution received in the request. When such conversions are to occur, the computing device 101 may control the video processing devices 105 in a manner that causes the first video segment to be converted before other video segments.
In addition, while present embodiments are described with respect to the personalized video segments being rendered, in parallel, two at a time, all the personalized video segments may be rendered concurrently and/or in parallel when the video processing devices 105 are available to do so.
Attention is next directed to
For example, also depicted in
The personalized icon 1313 is provided to the computing device 101 which may associate a link 1315 and/or another identifier (such as a UUID and the like) with the personalized icon 1313, the link 1315, and the like, for requesting a personalized video using the personal data 116-N, the nonpersonalized video segments 111 and the data 112. The link 1315, and the like, may include the name 117-N, the email address 118-N, and/or an identifier of the personal data 116-N (such as a UUID and the like), such that when the link 1315 is used to request a personalized video, the computing device 101 may receive the name 117-N, the email address 118-N, and/or the identifier reference of the personal data 116-N to render a personalized video. Indeed, the personalized icon 1313 may be associated with one or more first UUIDs identifying the personalized data 116-N, and one or more second UUIDs identifying the segments 111 and the data 112.
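By way of a non-limiting, illustrative sketch (the parameter names and identifier values being hypothetical), a link such as the link 1315 may encode the identifiers of the personal data 116-N and of the segments 111 and the data 112 as URL query parameters, so that actuating the link transmits those identifiers with the request:

```python
# Illustrative sketch (hypothetical parameter names): encoding identifiers of
# the personal data and of the segments/data into a request link as URL query
# parameters, recoverable by the server when the link is actuated.

from urllib.parse import parse_qs, urlencode, urlparse

def make_personalized_link(base_url, personal_data_id, video_id):
    query = urlencode({"personal_data": personal_data_id, "video": video_id})
    return f"{base_url}?{query}"

link_1315 = make_personalized_link(
    "https://example.com/request", "uuid-116-N", "uuid-111-112")

# The server side can recover the identifiers from the actuated link.
params = parse_qs(urlparse(link_1315).query)
```

In this sketch, `personal_data` and `video` stand in for the one or more first UUIDs identifying the personal data 116-N and the one or more second UUIDs identifying the segments 111 and the data 112.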
The computing device 101 may incorporate the personalized icon 1313 and the associated link 1315, and the like, as a virtual button 1317, and the like, into a message 1323, such as an email, and the like, and transmit the message 1323 to the communication device 107-N using, for example, the email address 118-N stored in the personal data 116-N that includes the name 117-N used to generate the personalized icon 1313. As depicted, the message 1323 includes optional text “Press Icon For Personalized Video” to prompt a user of the communication device 107-N to actuate the virtual button 1317.
As depicted, the communication device 107-N receives the message 1323 and renders the message 1323 at a display device thereof, for example in a messaging application, including the virtual button 1317. It is understood by a person of skill in the art that when the virtual button 1317 is actuated (e.g. using an input device of the communication device 107-N), the communication device 107-N transmits a request 1131, that includes the link 1315, to the computing device 101. Receipt of the request 1131 at the computing device 101 causes the computing device 101 to initiate the method 300, as described above with respect to
Furthermore, the name 117-N “Sally” used to render the personalized video may be retrieved from the memory 103 and/or may be received with the request 1131. Indeed, in embodiments described herein, personal data associated with a request, to be used to render a personalized video, may be received with the request and/or the personal data associated with a request may be retrieved from a database (e.g. the memory 103) using information received with the request (such as a reference to a database record, such as the personal data 116-N, an email address, and the like).
In this manner, a personalized video may be rendered for each of the communication devices 107 and/or users thereof “on demand”, without prerendering and storing a personalized video for each, thereby using processing resources to render such personalized videos only “on demand”. Furthermore, memory resources in the system 100 are used to store the prerendered nonpersonalized video segments 111 and the data 112, and not personalized videos for each of the communication devices 107 and/or users thereof rendered in advance of the personalized videos being requested.
In addition, embodiments described herein depict personalized videos being requested in association with webforms and/or in response to receiving a message that includes a virtual button, and the like, for requesting a personalized video, such messages being generated, for example, from a database of clients, and the like. Hence, campaigns to reach existing clients using personalized videos may be initiated by transmitting messages to each of the clients, and a personalized video is rendered only for those clients who respond to the message, thereby saving processing resources and/or memory resources as compared to prerendering and storing personalized videos for all the clients stored in the database.
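The scale of the saving can be made concrete with a rough, assumed-numbers illustration: under on-demand rendering, only clients who actuate the button trigger a render, whereas prerendering produces one video per client in the database regardless of demand. The function and figures below are hypothetical.

```python
# Rough illustration (assumed numbers) of the resource saving from
# rendering only for responding clients versus prerendering for every
# client in the database. Purely illustrative arithmetic.

def renders_needed(total_clients: int, response_rate: float,
                   on_demand: bool) -> int:
    """Number of videos rendered (and stored) under each strategy."""
    if on_demand:
        return round(total_clients * response_rate)
    return total_clients  # prerender for every client, requested or not

# Hypothetical campaign: 100,000 clients, 5% of whom actuate the button.
prerendered = renders_needed(100_000, 0.05, on_demand=False)
on_demand = renders_needed(100_000, 0.05, on_demand=True)
```

Under these assumed figures, prerendering produces 100,000 videos while on-demand rendering produces 5,000, with a corresponding reduction in allocated memory.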
In this specification, elements may be described as “configured to” perform one or more functions or “configured for” such functions. In general, an element that is configured to perform or configured for performing a function is enabled to perform the function, or is suitable for performing the function, or is adapted to perform the function, or is operable to perform the function, or is otherwise capable of performing the function.
It is understood that for the purpose of this specification, language of “at least one of X, Y, and Z” and “one or more of X, Y and Z” can be construed as X only, Y only, Z only, or any combination of two or more items X, Y, and Z (e.g., XYZ, XY, YZ, XZ, and the like). Similar logic can be applied for two or more items in any occurrence of “at least one . . . ” and “one or more . . . ” language.
The terms “about”, “substantially”, “essentially”, “approximately”, and the like, are defined as being “close to”, for example as understood by persons of skill in the art. In some implementations, the terms are understood to be “within 10%,” in other implementations, “within 5%”, in yet further implementations, “within 1%”, and in yet further implementations “within 0.5%”.
Those skilled in the art will appreciate that in some implementations, the functionality of devices and/or methods and/or processes described herein can be implemented using pre-programmed hardware or firmware elements (e.g., application specific integrated circuits (ASICs), electrically erasable programmable read-only memories (EEPROMs), etc.), or other related components. In other implementations, the functionality of the devices and/or methods and/or processes described herein can be achieved using a computing apparatus that has access to a code memory (not shown) which stores computer-readable program code for operation of the computing apparatus. The computer-readable program code could be stored on a computer readable storage medium which is fixed, tangible and readable directly by these components (e.g., removable diskette, CD-ROM, ROM, fixed disk, USB drive). Furthermore, it is appreciated that the computer-readable program can be stored as a computer program product comprising a computer usable medium. Further, a persistent storage device can comprise the computer readable program code. It is yet further appreciated that the computer-readable program code and/or computer usable medium can comprise a non-transitory computer-readable program code and/or non-transitory computer usable medium. Alternatively, the computer-readable program code could be stored remotely but transmittable to these components via a modem or other interface device connected to a network (including, without limitation, the Internet) over a transmission medium. The transmission medium can be either a non-mobile medium (e.g., optical and/or digital and/or analog communications lines) or a mobile medium (e.g., microwave, infrared, free-space optical or other transmission schemes) or a combination thereof.
Persons skilled in the art will appreciate that there are yet more alternative implementations and modifications possible, and that the above examples are only illustrations of one or more implementations. The scope, therefore, is only to be limited by the claims appended hereto.
Number | Date | Country
---|---|---
20190296840 A1 | Sep 2019 | US