METHOD AND SYSTEM FOR CONSTRUCTING AN INTERNET-BASED IMAGING SYSTEM

Abstract
A system and method is presented that uses software logic (executing in one or more servers) to inter-connect mobile devices and location-deriving signal generating devices to collectively act as an Imaging System that outputs a periodic succession of images containing information about the preferences and purchase and transaction history of consumers. The Imaging System so constructed is called a SoftCamera in a fashion analogous to a conventional light-based camera. Images output by the SoftCamera may be transmitted to third-party providers or used to send information and recommendations to consumers. The SoftCamera thus may be used as a Recommender System in certain embodiments.
Description
BACKGROUND

As indoor location-sensing systems become more prevalent, the opportunity arises to use mobile devices to sense various geographical aspects of an area, derive proximity and location information about the contents within said area, and use such information to provide customers with a personalized and customized experience.


With respect to location-deriving signal generating devices and technologies, we mention Indoor GPS systems and Bluetooth-Low Energy (BLE) devices (e.g., beacons) as examples that allow more accurate location sensing than conventional GPS systems.


A BLE device typically broadcasts messages at a periodic rate within a coverage area with a radius of a few meters. Mobile devices within said coverage area may then receive these broadcasts, and applications (“apps”) within the mobile device are made aware of the broadcasts by said mobile device's operating system. The apps then process the incoming broadcast messages and convey them in turn to servers in a wide-area network wherein further processing of the signals takes place.
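By way of a non-limiting sketch (in Python), an app's handling of a received broadcast and its relay to a server may be organized as follows; the field names and payload format here are illustrative assumptions, not part of any standard:

```python
import json
import time

def handle_beacon_broadcast(uuid, major, minor, rssi, device_id):
    """Package a received BLE advertisement into a payload that an app
    could forward to a wide-area-network server for further processing.
    All field names are illustrative, not a standard wire format."""
    return json.dumps({
        "beacon": {"uuid": uuid, "major": major, "minor": minor},
        "rssi": rssi,              # received signal strength, in dBm
        "device_id": device_id,    # identifier of the reporting mobile device
        "timestamp": time.time(),  # when the broadcast was observed
    })

# Hypothetical broadcast received from a beacon in aisle 17:
payload = handle_beacon_broadcast(
    "f7826da6-4fa2-4e98-8024-bc5b71e0893e", 1, 17, -63, "device-42")
```

The server side would then assemble such payloads into the datasets described below.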


SUMMARY

In accordance with one aspect of the invention a system and method is provided to inter-connect mobile devices and location-deriving signal generating devices into a specific arrangement to yield an Imaging System.


In accordance with one aspect of the invention the inter-connected components of the Imaging System are geographically distributed and connected using a data network such as the Internet.


In accordance with one aspect of the invention the inter-connection logic that constructs the Imaging System resides in a collection of servers connected to a wide area network (also known as a Cloud Computing System).


In accordance with one aspect of the invention the Imaging System produces datasets that are called “images”.


Furthermore, the Imaging System may produce a succession of images at a pre-determined and configurable rate.


In accordance with one aspect of the invention a system and method is provided whereby information received from one or more mobile devices located within a demarcated geographical area is aggregated to produce sequences of images of said geographical area.


In accordance with one aspect of the invention the location-deriving information received from mobile devices originates from Bluetooth-Low Energy (BLE) devices, also known as beacons (e.g., Gimbal or Estimote beacons), including devices based on the industry specification called iBeacon.


In accordance with one aspect of the invention the mobile devices transmitting the received information are mobile communication devices such as smartphones, tablet computers, Personal Digital Assistants (PDAs), smart watches, smart glasses, mobile terminals, mobile computers and mobile phones.


In accordance with one aspect of the invention the aggregation of information further comprises integrating information received from online data feeds, said data feeds emanating from websites, private data sources or data repositories, news feeds, and public and private databases.


In accordance with one aspect of the invention aggregation of information comprises the identification of different areas of the geographical area distinguished by the number of mobile devices in said areas over a pre-determined or configurable time period.


In accordance with one aspect of the invention the aggregation of information further comprises integration of information received from descriptions of the geographical area, possibly including inventory and product or item information.


In accordance with one aspect of the invention the aggregation of information further comprises information obtained by analysis of online data feeds.


In accordance with one aspect of the invention the analysis of online data feeds comprises use of machine learning procedures and techniques.


In accordance with one aspect of the invention an image is a description of the layout of the geographical space, e.g., location of mobile devices in a geographical area and the location of BLE devices, the items in the space, shelves, aisles, etc.


In accordance with one aspect of the invention an image is produced periodically with a rate determined by system policy, administrator policy or a policy governed by the cost and limitations of computing and networking machinery.


In accordance with one aspect of the invention the temporal sequence of images of a geographical area is stored in a historical data repository.


In accordance with one aspect of the invention said historical data repository is used to predict future areas of the geographical area that will be distinguished by the number of mobile devices.


In accordance with one aspect of the invention said historical data repository is used to predict the future sale of items in the geographical area.


In accordance with one aspect of the invention the predictions of regions of the geographical area distinguished by number of mobile devices are used to manage the merchandising and inventory of items contained in the geographical area.


In accordance with one aspect of the invention the location of a mobile device as indicated by an image is used to determine its proximity to one or more, i.e., a collection of, items.


In accordance with one aspect of the invention the proximity of a mobile device to a collection of items is used to isolate one or more specific items, designated as “items of interest”.


In accordance with one aspect of the invention mobile devices present in an image are uniquely associated with one or more identifiers (“user identifier list”) available from the mobile devices, or by matching said available identifiers with data from historical, loyalty, or Customer Relationship Management (CRM) datasets.


In accordance with one aspect of the invention the user identifier list is used to identify consumers, i.e., it contains one or more identifiers signifying an electronic address, e.g., email address, Twitter handle, Skype address, etc.


In accordance with one aspect of the invention the identified consumers are associated with the identified items of interest.


In accordance with one aspect of the invention the identified consumers are sent electronic messages related to the items of interest.


In accordance with one aspect of the invention the electronic messages may be mobile push messages of various types (such as full, interactive, in-app, or rich push), email messages, notifications, or text messages.


In accordance with one aspect of the invention the electronic messages may be delivered while the device is within the demarcated geographical area or after the device has exited the geographical area.


In accordance with one aspect of the invention the items of interest list is augmented with additional items based on analysis of online data feeds and/or derived from machine-learning techniques.


In accordance with one aspect of the invention the electronic messages relate to the augmented list of items of interest.


In accordance with one aspect of the invention the geographical area is demarcated by one or more geo-fences whose boundaries are determined with respect to the range of the BLE devices within the geographical area (“BLE-geo-fence”).


In accordance with one aspect of the invention a mobile device within a BLE-geo-fence is identified as a “marked device”.


In accordance with one aspect of the invention the marked device triggers special software logic (“orchestration script”) to be downloaded to the mobile device from one or more servers (“server complex”) connected to the mobile device by a wide area network.


In accordance with one aspect of the invention the orchestration script is pre-loaded onto the mobile device prior to its time of entry to the BLE-geo-fence.


In accordance with one aspect of the invention the orchestration script is launched when the mobile device encounters a first BLE device, or at a pre-determined or pre-configured time within the BLE-geo-fence.


In accordance with one aspect of the invention the orchestration script is launched when the mobile device recognizes the loss of connectivity to the cloud.


In accordance with one aspect of the invention the exit of a mobile device from within a BLE-geo-fence triggers special software logic (“post-visit orchestration script”) to be executed in the server complex.


In accordance with one aspect of the invention the post-visit orchestration script causes electronic messages to be delivered to the mobile device or to the identified consumers.


In accordance with one aspect of the invention the image derived for a demarcated geographical area is rendered on laptops, personal computers, tablets, and mobile devices in a visual representation for human users.


In accordance with one aspect of the invention the image is delivered to multiple external computers and/or mobile devices.


In accordance with one aspect of the invention the delivery of an image to other devices may be contemporaneous with the location of one or more devices within the demarcated geographical area.


In accordance with one aspect of the invention an image delivered to other devices may additionally contain recommendations, instructions or informational objects inserted manually, i.e., curated by a human user, or by a policy curated by a human, or by a policy derived automatically by the system based on historical analysis of past manual insertions.


In accordance with one aspect of the invention the image derived for a demarcated geographical area is made available to third-party service providers in a machine-readable format.


In accordance with one aspect of the invention the items of interest list is rank-ordered, said ranking determined by utilizing information from online data analysis or machine-learning techniques.


In accordance with an aspect of the invention the areas within the demarcated geographical area that are distinguished by number of mobile devices are used to manage advertising displays in said areas.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 shows an example Planogram.



FIGS. 2, 3, 4 and 5 show a sequence of images produced by successive enhancements. Each image is produced at one click of the Shutter Speed Clock of the SoftCamera (detailed later). In particular, FIG. 2 shows an image obtained by enhancing the Planogram shown in FIG. 1.



FIG. 3 shows an enhancement of the image of FIG. 2 by addition of “hot zones”.



FIG. 4 shows an enhancement of the image of FIG. 3 by zooming in on a particular mobile device of consumer “John”.



FIG. 5 shows an enhancement by further zooming in on the preferences for consumer “John”.



FIG. 6 shows an example of a Planogram for a given DGA.



FIG. 7 shows an image obtained from a Planogram by adding more information (i.e., mobile devices).



FIG. 8 shows a conventional view of a light and photographic film-based camera.



FIG. 9 shows the components of the SoftCamera in analogy to a conventional camera.



FIG. 10 shows a general architecture of the SoftCamera.



FIG. 11 shows the internal modules of the SoftCamera.



FIG. 12 shows an example of a Script Table for an Orchestration Script.



FIG. 13 illustrates various components of an illustrative computing-based device in which embodiments of various servers and/or clients as described herein may be implemented.





DETAILED DESCRIPTION
Motivation

Consider a retail establishment that displays its wares on shelves arranged by aisles. Customers browse the items being displayed on the shelves. A store manager watches the consumers and their actions on a display device connected to a camera installed in the store. Perhaps he is situated at a location remote from the store, with said display device connected to the camera by a data network such as the Internet.


Such an arrangement allows a store manager to oversee store operations, e.g., how the sales people are interacting with the customers, and security, e.g., the items on the shelves are not being stolen.


There are, however, other far-reaching benefits. Over time the store manager notices which aisles see more traffic, where customers usually congregate and what items garner more attention. He gets to recognize repeat customers, items frequently bought in conjunction with other items, etc.


A thoughtful manager may assimilate such information and codify it into certain decisions that reflect his said assimilation. He may re-arrange his items in a different way, re-arrange his aisles and shelves, give different instructions to his sales staff, issue real-time instructions to his staff to cater to a customer while the customer is lingering by an item in the store, pro-actively offer other items to customers browsing in the store, entreat passersby to enter the store with an enticing offer, said offer relevant and pertaining to the interests of the browsing customer or passerby.


The manager's decisions are rooted in the information that he gathers from the camera, the information he obtains from his inventory, the arrangement and layout of his store, data about current and past purchases, and trends in user interests as gleaned from one or more information sources accessible to the manager.


The present invention is concerned with inter-connecting location-deriving signal generating devices and mobile devices to create a system that produces “images” of geographical areas; said images containing information about the customers in said area, their interests, purchase intent, movements, etc. Said images may be further enhanced by adding inferences made by the system based on various datasets such as CRM (Customer Relationship Management) data, past purchase data or Loyalty data. Said images may be manipulated by computer systems and/or provided to third-party providers for further processing.


Said inter-connection is achieved by software logic executing in one or more computers, the inter-connected components being in connection with said software logic via data networks such as the Internet.


The system comprising the software logic, the inter-connected mobile devices and the location-deriving signal generating devices together form an Imaging System called the SoftCamera.


The SoftCamera produces images akin to a conventional light and photographic-film based camera, said images comprising data that can be manipulated by computers, e.g., to make recommendations or offers to consumers.


The SoftCamera uses a special dataset in a crucial role. The dataset is called a Planogram. The data within the dataset relates to the layout and items contained within a geographic area. Planograms nowadays find use in retail store applications; however, their use need not be limited to the retail domain. The data in a Planogram may be manipulated by computer programs and may also be rendered on display devices for human consumption. Consider by way of example a Planogram of a retail establishment rendered on a display device as shown in FIG. 1. The figure shows various aisles of the store and may list items on the shelves, etc. The figure also shows several location sensing devices placed within the store.


We may now enhance the data in a Planogram by adding more information to it. If we enhance the Planogram rendered in FIG. 1 and render it on a display device we may get a rendering as shown in FIG. 2. For ease of terminology in forthcoming descriptions we will abbreviate by simply stating that FIG. 2 enhances FIG. 1. An enhanced Planogram is called an image.


One may successively enhance images. For example, FIG. 3 enhances the image shown in FIG. 2 by integrating the positions of the mobile devices within the store over a period of time and representing the results visually (also called “hot zones”).



FIG. 4 further enhances the image of FIG. 3 by enhancing one mobile device and calculating information about the owner (“John”) of said mobile device, his purchase history, and the advertisements he was shown recently in his online browsing sessions (obtained from an external dataset provided to the system).



FIG. 5 continues the enhancement of customer John and shows the result of calculating John's items of interest based on his movements within the store (“linger time”) and his past purchases, etc.


It may thus be seen that a system that can generate a series of images such as FIG. 5 above at a periodic rate may provide valuable information that may be used by marketing professionals, for example.


INTRODUCTION

The following detailed descriptions are made with respect to the various figures included in the application and may refer to specific examples in said drawings; however, it is to be noted that such specific cases do not limit the generality of the various aspects of the invention.


Certain figures pertain to a specific implementation of the present invention. These figures are included in the application to show a particular implementation of some of the embodiments of the present invention and not intended to limit the generality of the various aspects of the present invention.


The present invention is partly motivated by the advent of devices with computational capabilities applied to physical areas. This trend began with cell towers installed for supporting mobile communications but also used for locating mobile devices in geographical areas. Global Positioning Systems (GPS) further improved the accuracy of location identification. Smaller-sized cell tower technologies such as pico-cells, femto-cells, etc., have also been used for mobile communications and for location tracking in indoor spaces. Indoor GPS systems are a further example of location sensing technologies. Wi-Fi routers and access points have also been used for determining locations of devices. Recently, so-called Bluetooth Low Energy (BLE) devices have provided improved location tracking capabilities in indoor spaces.


The descriptions that follow use the language and terminology of BLE devices; however, said descriptions are only for descriptive reasons and may not be taken to limit the inventions claimed herein. In particular, we will refer to any device that broadcasts signals that may be received by mobile devices and which contain information from which the location of the mobile device can be derived as a location-deriving signal generator. Thus, any reference herein to a BLE device is presented by way of illustration only and more generally may refer to any suitable location-deriving signal generator, including but not limited to the aforementioned cell towers, GPS satellites, Wi-Fi routers and other access points, and so on.


As a mobile device roams, it goes out of range of certain signals and comes into the range of other signals from location sensing devices. Thus, at any given time, an area of geography may be determined wherein a mobile device is able to receive signals from one or more location sensing devices. Such a coverage area may be demarcated by defining a geo-fence whose surface area subsumes the coverage area. We will refer to a geographical area that has location sensing devices installed within it and is demarcated by one or more geo-fences as a Demarcated Geographical Area (DGA).
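As a non-limiting sketch, a DGA membership test may be expressed as follows, assuming circular geo-fences on a planar coordinate system; a real deployment would use geodetic coordinates and a geodesic distance computation:

```python
import math

def inside_geofence(point, center, radius_m):
    """Return True when `point` lies within a circular geo-fence.
    Coordinates are planar (x, y) metres for simplicity."""
    return math.hypot(point[0] - center[0], point[1] - center[1]) <= radius_m

def in_dga(point, geofences):
    """A DGA may comprise several, possibly overlapping, geo-fences;
    a device is inside the DGA if any one of them contains it."""
    return any(inside_geofence(point, center, radius)
               for center, radius in geofences)

# Two overlapping illustrative geo-fences covering one DGA:
geofences = [((0.0, 0.0), 50.0), ((80.0, 0.0), 40.0)]
assert in_dga((10.0, 10.0), geofences)
assert not in_dga((200.0, 0.0), geofences)
```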


As an example of a DGA consider a retail establishment that has multiple BLE devices, e.g., Gimbals, installed within its premises, and has a geo-fence defined as a boundary around its premises in such a way that the signals from one or more of the BLE devices reach all parts of the DGA.


It should also be noted that no assumption is being made as to where the one or more BLE devices are installed within a DGA. In particular, said devices may be installed anywhere as long as their broadcast signals may be received within the DGA. It should also be noted that a DGA might cover a wide geographical area and may contain several geo-fences, possibly intersecting, subsuming and overlapping. For example, we may be interested in a DGA that covers homes of customers and their journeys to a store and to capture such a wide area one may need to use several geo-fences and BLE devices covering multiple areas.


Mobile devices receive the signals broadcast by BLE devices in a DGA.


Generally, mobile devices support applications (“apps”), and said apps may operate using data received from the signals transmitted by one or more of said devices. In some cases the app must first register to receive signals from the one or more devices. When the mobile device's operating system receives such signals, it makes the registered applications aware of the receipt of said signals. When these applications are made aware of a received location-deriving signal, they may transmit the data embodied in said location-deriving signal to one or more servers using a network connection to a wide area network, e.g., a cloud infrastructure, wherein the servers assemble the received data into one or more datasets.



FIG. 6 shows an example of a Planogram of a retail store. The oval 100 represents a geo-fence that has been defined around the physical environment 200. Thus, the physical environment may be thought of as a DGA. The environment contains aisles marked 500, 600 and 700. The aisles have shelves containing items enumerated as I11, I21, etc. The entry to the environment is marked as 300. Finally, BLE devices are installed and shown as 1000, 1001, 1002, 1003, and 1004. Thus, there is a BLE device for the exit, one by the cashier's station and one BLE device each for the aisles. There is no significance attached to the placement of the BLE devices in this invention; it is assumed that management makes a determination to install the BLE devices at particular locations within the DGA.


It is a characteristic of BLE devices that their transmitted signals contain a signal strength indicator (called RSS denoting Received Signal Strength). It is known from prior art that a mobile device (or a server in connection with said mobile device) may use the RSS signal from a BLE device to calculate the distance and location of said mobile device from said BLE device.


A mobile device may receive RSS indicia from several BLE devices, but said mobile device may be programmed to make a determination as to which BLE device is the closest and choose a BLE device on the basis of such a determination.
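The distance estimation mentioned above is commonly performed with the log-distance path-loss model. The following non-limiting sketch estimates distance from an RSS reading and chooses the closest BLE device; the calibration constants (RSS at 1 m, path-loss exponent) are illustrative assumptions and are environment-dependent in practice:

```python
def distance_from_rss(rssi_dbm, tx_power_dbm=-59, path_loss_exp=2.0):
    """Estimate distance (metres) from a BLE RSS reading using the
    log-distance path-loss model. tx_power_dbm is the calibrated RSS
    at 1 m; path_loss_exp models the propagation environment."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def nearest_beacon(readings):
    """Given {beacon_id: rssi_dbm} readings, choose the beacon whose
    estimated distance is smallest (equivalently, strongest RSS)."""
    return min(readings, key=lambda b: distance_from_rss(readings[b]))

# Hypothetical readings from three beacons in the store of FIG. 6:
readings = {"aisle-500": -62, "aisle-600": -71, "cashier": -80}
assert nearest_beacon(readings) == "aisle-500"
```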


It is assumed that a Planogram such as FIG. 6 is provided to the system being described, i.e., it is an input to the system.



FIG. 7 shows the Planogram of FIG. 6 but it has now been enhanced by the addition of mobile devices marked as 2000, 2001, 2002, 2003, 2004 and 2005. A Planogram that has been enhanced with additional information will be referred to as an “image”.


One important aspect of images produced by the system described herein is that they can be generated to contain information that can be automatically processed, i.e., processed by other computer programs without human intervention. For example, one may generate images of the retail store in such a manner that a congregation of customers may be detected at a certain location in the store by a computer program that analyzes said images, and an alert may be generated automatically to a pre-designated terminal or device, e.g., a manager's station. Similarly, a particular user moving in the store (as indicated by successive locations of his mobile device) may be captured in an image that also lists his preferences, i.e., items he is lingering by and may be of interest to him. Finally, it is to be noted that images may be generated at periodic intervals to capture “snapshots” of the DGA over time, the period of such intervals being determined by administrator or system policy. This notion of periodic images is discussed in more detail later.
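As a non-limiting sketch of such automatic processing, a program might detect a congregation of customers in a single image as follows; the image schema (a dataset of device entries keyed by store zone) is a hypothetical one:

```python
from collections import Counter

def detect_congregations(image, threshold):
    """Scan one 'image' (a machine-readable dataset of device positions)
    and return the zones whose device count meets the threshold, for
    which an alert could be raised to a pre-designated terminal."""
    counts = Counter(entry["zone"] for entry in image["devices"])
    return [zone for zone, n in counts.items() if n >= threshold]

# Hypothetical image of the DGA at one instant:
image = {"dga": "store-1", "devices": [
    {"id": "2000", "zone": "aisle-500"},
    {"id": "2001", "zone": "aisle-500"},
    {"id": "2002", "zone": "aisle-500"},
    {"id": "2003", "zone": "aisle-700"},
]}
assert detect_congregations(image, threshold=3) == ["aisle-500"]
```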


SoftCamera

A video camera produces images (successive frames) of lighted objects that may be used to manage items, recognize people, etc. The merging of the physical and digital worlds, as entailed by the advent of BLE devices, has provided the opportunity for creating images that capture customer-specific information.


The present invention is concerned with inter-connecting location-deriving signal generating devices and mobile devices to create a system that produces “images” of geographical areas; said images containing information about the customers in said area, their interests, purchase intent, movements, etc. Said images may be manipulated by computer systems and/or provided to third-party providers for further processing.


Said inter-connection is achieved by software logic executing in one or more computers, the inter-connected components being in connection with said software logic via any combination of wireless and wired communication networks including, for instance, private data networks such as the Internet.


The system comprising the software logic, the inter-connected mobile devices and the location-deriving signal generating devices together form an Imaging System called the SoftCamera.


The SoftCamera produces images akin to a conventional light and photographic-film based camera, said images comprising data that can be manipulated by computers, e.g., to make recommendations or offers to consumers.


One embodiment of the present invention utilizes location sensing devices of the type BLE and mobile devices of the type smartphones (e.g., iPhone, Android phone, etc.), to construct the SoftCamera.



FIG. 8 shows a conventional camera. Light from a light source 100 reflects off an object 200 into a lens 300 that collects and provides the incident light to a camera body 400. The camera body may use specially processed film, e.g., color film as opposed to black and white film, to enhance certain properties of the incident light. The light rays then impinge on a photographic film 600 wherein the image 700 is produced.



FIG. 9 shows the components of one illustrative example of the SoftCamera which is analogous to a conventional camera. Location sensing devices such as the BLE serve analogously to light sources in conventional cameras in the sense that their transmitted messages “light up” the receiver and operating system (OS) of the mobile device, which may be thought of as the “object”. The transmitter of the mobile device then may be thought of as the “lens” that provides signals to the server 400 (“camera body”), generally over one or more wireless networks that may include, for example, a cellular network. The camera body, in order to enhance the received signals from the “lens”, may use various datasets 500. Such datasets may then be said to perform the role of specially pre-processed film in a conventional camera. The input Planogram 600 could be taken as the analog of photographic film and the enhanced information produced after processing may be taken as analogous to the images 700 of the SoftCamera.


The datasets 500 may be pre-provisioned in the server 400 or they may be otherwise communicated to the server 400, possibly though not necessarily, from the DGA itself over one or more communication systems. In some embodiments one or more of the datasets 500 may be communicated to the server 400 in real-time, e.g., at the same time the mobile devices are providing signals to the server 400.


It is to be noted that as mobile devices enter and exit a DGA, the amount of information entering the body of the SoftCamera may vary. This may be referred to as the focusing power of the lens varying over time (just as a light camera lens may lose focusing power due to changes in ambient light).


The SoftCamera produces an image 700 for a DGA containing one or more mobile devices. Since one may be interested in multiple DGAs distributed over a wide geographical area (e.g., in a city wide service) the SoftCamera may produce multiple images in parallel for multiple DGAs.


The images 700 so generated contain computer-generated data objects.


When computer programs process and manipulate the images of a DGA and render the results on display devices, we refer to such processes as rendering an image—much like images captured by (visible) light cameras are rendered by processing of photographic film. The server 400 may produce the image 700 and distribute it to be rendered at any suitable location. For instance, in the retail example, the server 400 may communicate the image 700 to a rendering device in the retail store so that it can be seen by a manager. It should be noted however, that images need not always be rendered. For example, as discussed below, the SoftCamera may contain modules such as a messaging module that may employ the data contained in an image without rendering the image.


Like conventional light cameras, a lens of the SoftCamera has its own “shutter”. A lens is operational for a certain fixed amount of time, an interval, called the shutter speed interval. The length of the interval may be calculated by taking into account the amount of computational processing and communication processing that is needed to process the information collected by all the lenses in a DGA. Adding or removing computational resources varies the interval. A system clock is defined (for a DGA) that is called the Shutter Speed Clock (SSC) and this clock ticks at the rate of the shutter speed interval for said DGA. At the start of the interval the lenses in a DGA become operational, i.e., information from the mobile devices within the DGA starts to be received. When the interval ends, the lenses become non-operational and the receipt of information is terminated. Then a new interval starts and the lenses become operational again.


Note that a SSC is defined for each DGA, which in turn may have several mobile devices (and hence several lenses, since the number of lenses corresponds to the number of mobile devices). However, we assume that the shutter speed for all lenses in a single DGA is the same. Thus, we have one SSC for each DGA. The SoftCamera produces one image per click of the SSC for a given DGA.
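The tick-by-tick production of images may be sketched as follows, assuming one batch of collected device signals per shutter speed interval; the planogram and signal schemas shown are illustrative assumptions:

```python
def run_shutter_cycles(signal_batches, planogram):
    """One image is produced per click of a DGA's Shutter Speed Clock.
    Each element of signal_batches represents the signals collected
    while the lenses were operational during one interval."""
    images = []
    for tick, signals in enumerate(signal_batches):
        image = dict(planogram)            # the planogram acts as the 'film'
        image["tick"] = tick               # which SSC click produced this image
        image["devices"] = list(signals)   # enhancement: devices observed
        images.append(image)
    return images

# Hypothetical planogram and three shutter speed intervals:
planogram = {"dga": "store-1", "aisles": ["500", "600", "700"]}
batches = [["2000", "2001"], ["2000"], []]
images = run_shutter_cycles(batches, planogram)
assert len(images) == 3 and images[1]["devices"] == ["2000"]
```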


Conventional light cameras use photographic film on which images are rendered. In the SoftCamera the notion analogous to photographic film is the planogram. A SoftCamera renders the information it receives from a lens (after processing said information) on the planogram provided to it as input.


The body of the SoftCamera is implemented as specialized software logic running on a server complex that is in network connection with mobile devices in several DGAs via any combination of wireless and wired communication networks including, for instance, data networks such as the Internet.



FIG. 10 shows an overview of an example of the SoftCamera from a general perspective. The figure shows two DGAs “A” and “B” containing mobile devices 100 and 200 respectively. As mentioned above, components within said mobile devices acting as lenses transmit information 101 and 201 respectively. The transmitted information reaches the camera body 500 via aperture 800 said aperture controlled by Shutter Speed Clocks 600 and 700 respectively. The camera body 500 has access to datasets 900 such as a User List, CRM Data, Purchase Data and Trends (by way of example). The camera body 500 processes all the incident information and, using the Planogram 950 and datasets 900 as specially pre-processed film, produces enhanced images such as 1001, 1002, etc. for DGA “A” and images 2001, 2002, etc., for DGA “B”.



FIG. 11 shows more details of the internal modules of one example of the SoftCamera system (100). Image Formulator 200 is a module that takes a collection of datasets (e.g., CRM, Purchase, User list, Trends) and information produced by other modules to create images. Aperture Control module 300 is controlled by the Shutter Speed Clocks (SSCs) module 800 and opens and closes the shutter of a lens. The SSC uses a system parameter to define the time interval constituting the shutter speed interval. This interval may be set by system policy, by a system administrator, or inferred by the system based on the resources available and the demands being made on the system. Thus the Aperture Control module accepts input signals from mobile devices in a DGA for one shutter speed interval. It sends the received information to the Messaging Module (MM) 500 for processing and creating an image. It then resets the aperture and proceeds to accept information from said mobile devices in the succeeding shutter speed interval.


The Messaging Module (MM), receiving information obtained from an image, formulates a collection of promotional offers that are to be sent to various mobile devices. It optionally uses the Recommender System 600 for making hyper-relevant offers and for inferring items that a user may like based on the user's linger-time and past purchase behavior.


The Analytics Module (AM) 400 operates on a collection of images rather than a single image. Its purpose is to analyze aggregate behavior over several images and derive “macro” analytics such as a determination of those areas in the DGA that see the most traffic (“hot areas”).


The process for enhancing a planogram to serve as a “photographic film” for the SoftCamera involves the following steps.

    • I. The input is the planogram for a given DGA.
    • II. Add trend analysis of online feeds to the planogram.
    • III. Add the provided list of consumers to the planogram.
    • IV. Add CRM data to the planogram.
    • V. Add historical purchase data to the planogram.
    • VI. Add inventory data to the planogram.
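By way of illustration, the enhancement steps I-VI above may be sketched as follows. This is a minimal sketch only: the planogram is assumed to be representable as a key-value mapping, and all field names (e.g., "trends", "consumers") are illustrative assumptions, not part of the specification.

```python
# Sketch of planogram enhancement (steps I-VI above). The planogram and
# all field names are illustrative assumptions.

def enhance_planogram(planogram, trends, consumers, crm, purchases, inventory):
    """Return a copy of the planogram augmented with the auxiliary datasets."""
    film = dict(planogram)          # I.  input: the planogram for a given DGA
    film["trends"] = trends         # II. trend analysis of online feeds
    film["consumers"] = consumers   # III. provided list of consumers
    film["crm"] = crm               # IV. CRM data
    film["purchases"] = purchases   # V.  historical purchase data
    film["inventory"] = inventory   # VI. inventory data
    return film

# Example: a one-item planogram enhanced with sample datasets.
film = enhance_planogram({"items": {"I3": (2, 5)}},
                         ["#sale"], ["john"], {}, [], {"I3": 12})
```

The enhanced mapping then serves as the “photographic film” on which subsequent images are rendered.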


For producing (initial) images the SoftCamera proceeds as follows. We take as input the planogram of said DGA.


Process Number: 1000





    • I. Start Shutter Speed Clocks, one for each DGA.

    • II. For each mobile device in the DGA do the following steps.

    • III. At the start of a first shutter speed interval, start receiving information from the lenses in the DGA. (The shutter is opened.)

    • IV. Collect information while the shutter is open.

    • V. Stop collecting information when the shutter closes.

    • VI. Send the collected information to be processed and the image to be produced (explained below).

    • VII. Repeat the above steps ad infinitum.
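The shutter loop of Process 1000 may be sketched as follows, assuming the lenses of a DGA can be polled through a single callable. The function names and the bounded repetition (in place of the “ad infinitum” loop of step VII) are illustrative assumptions for the sketch.

```python
import time

def run_shutter(dga_lenses, shutter_interval, process_image, max_ticks=3):
    """One Shutter Speed Clock for a single DGA (steps I-VII above).

    `dga_lenses` is an assumed callable returning whatever signals the
    mobile devices in the DGA emitted since the last poll.
    """
    images = []
    for _ in range(max_ticks):                    # VII. repeat (bounded here)
        collected = []
        deadline = time.monotonic() + shutter_interval
        while time.monotonic() < deadline:        # III-IV. shutter open: collect
            collected.extend(dga_lenses())
        images.append(process_image(collected))   # V-VI. shutter closes; image produced
    return images
```

In an actual deployment the loop would run indefinitely and `process_image` would correspond to the image-production step explained below.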





As explained above, an image is an enhancement of a dataset. Such an enhancement may involve correlating the new information with the information previously contained in the dataset and appending new information to the dataset. In terms of producing an image that could be rendered on a display device for human consumption, use can be made of data visualization technologies known in prior art.


Producing further processed images involves the following steps. It is to be noted that the process described below sends its output to the Messaging Module, which produces the final images.


Process Number: 2000





    • I. The input is the image produced by Process 1000 above and the list of all users of the system.

    • II. Locate the BLE device in the input image.

    • III. Locate all the mobile devices in the input image. Find identifiers of users of said mobile devices (“user identifier list”).

    • IV. Find users that are common between the input list of all users and the user identifier list of step III. Call this the list of “consumers in DGA”.

    • V. Compute the list of items proximate to every mobile device (“items of interest”).

    • VI. Compute “linger time” (explained below) for each user.

    • VII. Send list “consumers in DGA” to the “Messaging Module”.

    • VIII. Send list “items of interest” for each mobile device to the “Messaging Module”.





Messaging Module

The term “linger time” refers to an interval of time (in units of the shutter speed interval) that a mobile device spends in close proximity to one or more items in a DGA. The distance that constitutes “proximity” and the amount of time to be used as a parameter denoting “linger time” may be pre-determined or specified as configuration parameters by the system administrator.


For example, we may assume that the “linger time” is to be taken as 5 shutter speed intervals and the proximity distance is to be taken as 5 ft. from the center of a BLE device that is closest to a mobile device.


To compute the linger time for a given mobile device and a given BLE device we need to ascertain that said mobile device was within 5 ft. of the BLE device for 5 successive shutter speed intervals, i.e., in 5 successive images received by the Messaging Module from the Process Number 2000 described above. The “items of interest” list is then the list of items designated by the planogram as being proximate to the BLE. (We are assuming that the original planogram gives locations of items and BLE devices and their spatial correspondence.)
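Under the example parameters above (5 intervals, 5 ft.), the linger-time computation may be sketched as follows. The representation of an image as a mapping from (device, BLE) pairs to distances is an illustrative assumption.

```python
PROXIMITY_FT = 5       # configurable proximity radius (example value above)
LINGER_INTERVALS = 5   # configurable number of successive intervals

def linger_time(images, device_id, ble_id):
    """Longest run of successive images (shutter intervals) in which
    `device_id` stayed within PROXIMITY_FT of `ble_id`. Each image is
    assumed to map (device_id, ble_id) pairs to a distance in feet."""
    run = best = 0
    for img in images:
        if img.get((device_id, ble_id), float("inf")) <= PROXIMITY_FT:
            run += 1
            best = max(best, run)
        else:
            run = 0
    return best

def lingered(images, device_id, ble_id):
    """True if the device lingered for the configured number of intervals."""
    return linger_time(images, device_id, ble_id) >= LINGER_INTERVALS
```

The “items of interest” list for a positive result would then be read off the planogram entries proximate to the matched BLE device.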


The term “user identifier” refers to one or more identifiers that uniquely describe credentials of a user that can be used to send a message to the user, or that can be used to derive or infer credentials of a user. As an example of a credential consider an email address such as john@gmail.com or a Facebook or Twitter credential such as john@facebook.com or @john, etc.


The Messaging Module (MM) takes as input an image containing a list of user identifiers and their “items of interest” list in a given DGA.


MM performs the following main functions.


For each user in the user identifiers list it creates one or more promotional “offers” (e.g., coupons, discounts, etc.), advertisements, recommendations or the like based on human-curated recommendations. It then sends the offer or the like to a human curator for approval. Approved offers are sent to the indicated users. Alternatively, no recourse to a human curator may be undertaken and a completely autonomous and automatic process may be followed.


In a more elaborate arrangement, the Messaging Module may have access to a Recommender System based on Machine-Learning technology that rank orders the “items of interest” list for a given user and recommends a subset of the list as being of “more interest” to said user.


Furthermore, said Recommender System may also recommend items that are not on the “items of interest” list as possible items that said user is expected to like, based on the past purchase data of said user. Recall that in some cases past purchase data for users as explained above is assumed to be available to the original planogram as an input to the system.


In some embodiments information obtained from images in different DGAs may be used by the messaging module (or other modules) for various purposes such as creating promotional offers or the like. For instance, inferences concerning a user's intent and interests may be determined by examining the user's behavior when he first goes to retail establishment A and then a bit later that day goes to retail establishment B. As a simple example, a user may first go to a pet store and then to a bookstore. In this example, once in the bookstore an offer may be sent to the user concerning books about pets.


It is to be noted that prior art is replete with teachings of recommender systems based on machine-learning techniques such as Collaborative Filtering, etc.


Orchestration Script

Many buildings do not have access to the Internet and mobile devices within such a building cannot communicate with Internet servers or cloud computing infrastructures while within the confines of the building.


The above real-world situation implies that mobile devices within a DGA may not be able to act as lenses of the SoftCamera as described in the above procedures. The above connection-free situation is analogous to low-light situations faced by conventional cameras.


Conventional cameras overcome this situation by using flash light modules attached to a camera body.


In order to circumvent this problem in the SoftCamera we describe a process called the “Orchestration Script” (OScript). The general goal of the OScript is to allow connection-free operation by pre-loading a table of conditions and actions into every mobile device that is within a given DGA.


OScript is based on a table, called the Script Table (ST), each row of which is associated with a BLE in a given DGA. For each BLE, the table lists the items that are proximate to said BLE according to the planogram of the DGA, the offers associated with said BLE pre-determined by the store manager, and the Linger-Time offers associated with said BLE (also pre-determined by store management). The ST may also contain specialized actions (“procedures”) such as “Get Cashier”, etc. A sample OScript table is shown in FIG. 12. It is assumed that a pre-configured process creates the Script Table (ST) for a given DGA, possibly curated with human help.



FIG. 12 shows a BLE device numbered #123 with proximate items I3 and I45, along with offers associated with said device and specialized procedures “Sales Help”, etc.


The OScript process has the following steps for a given DGA and its associated Script Table, ST.


Process: OScript





    • I. The process is initiated when a mobile device enters a geo-fence surrounding a DGA.

    • II. Said mobile device is “marked”, say “D”.

    • III. The ST corresponding to said DGA is downloaded to the marked device “D”.

    • IV. The device “D” executes the following procedure until it exits the geo-fence.
      • a. Find the proximate BLE device from ST.
      • b. Execute the offers listed in column “Offers”.
      • c. Determine “linger-time”; if positive execute offer in column “Linger-Time Offer”.
      • d. Determine if “Linger-Time” is greater than 5 minutes; if positive execute “sales help”.

    • V. Send record of all executed actions and sightings of BLE to the SoftCamera body if the geo-fence has been exited and connectivity has been restored.
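The state machine of steps IV.a-d may be sketched as follows, with a Script Table row mirroring the FIG. 12 example. The row fields, the offer strings, and the representation of sightings as (BLE, minutes-near) pairs are illustrative assumptions.

```python
# Minimal sketch of the OScript state machine (steps IV.a-d above).
# The Script Table row below mirrors the FIG. 12 example; all values
# are illustrative assumptions.

script_table = {
    "#123": {"items": ["I3", "I45"], "offers": ["10% off I3"],
             "linger_offer": "BOGO I45", "procedure": "Sales Help"},
}

def run_oscript(sightings, st, linger_minutes=5):
    """`sightings` is a time-ordered list of (ble_id, minutes_near) pairs
    recorded while the device is inside the geo-fence."""
    log = []
    for ble_id, minutes_near in sightings:
        row = st.get(ble_id)                 # IV.a find the proximate BLE in ST
        if row is None:
            continue
        log.extend(row["offers"])            # IV.b execute listed offers
        if minutes_near > 0:                 # IV.c positive linger time
            log.append(row["linger_offer"])
        if minutes_near > linger_minutes:    # IV.d > 5 min: specialized procedure
            log.append(row["procedure"])
    return log                               # V. uploaded once connectivity returns
```

The returned log corresponds to the record of executed actions that is sent to the SoftCamera body after the geo-fence is exited and connectivity is restored.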





The procedures referred to above such as “execute sales help” refer to software programs (sometimes also called subroutines) that may specify actions, e.g., send email messages to a list of individuals.


Those skilled in the art will recognize that the process OScript described above is an example of a state machine and that state machines are a venerable part of computer science. Prior art teaches how to program and construct state machines to perform actions and execute specialized procedures based on complex pre-conditions. It is to be noted that the above example has been kept deliberately simple for didactic reasons.


Analytics Module

The Analytics Module (AM) is special software logic that takes as input a set of (i.e., one or more) images to derive analytics associated with a DGA. For example, we may be interested in those locations where mobile devices spend more than a given amount of time, the most frequently visited aisles, etc. Such high-traffic locations are sometimes referred to as “hot spots” and may be derived by analyzing a group of successive images, i.e., over a certain number of shutter speed intervals.


In terms of the overall flow of the SoftCamera process it may be stated that the output of Process Number 2000 above serves as input to the Analytics Module (AM) that in turn takes as input a collection of images and produces an enhanced image with the hot spot indications (by way of one example of analytics).
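The hot-spot derivation may be sketched as a simple aggregation over successive images. The representation of each image as a mapping from device identifiers to observed locations is an illustrative assumption.

```python
from collections import Counter

def hot_spots(images, top_n=3):
    """Aggregate device sightings over a group of successive images and
    return the most-visited locations ("hot spots"). Each image is assumed
    to map a device id to the BLE/aisle location observed in that
    shutter speed interval."""
    traffic = Counter(loc for img in images for loc in img.values())
    return [loc for loc, _ in traffic.most_common(top_n)]
```

Other “macro” analytics (e.g., dwell times exceeding a threshold) would follow the same pattern of aggregating over a window of shutter speed intervals.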


Post-Visit Orchestration Script

Once a mobile device has exited the geo-fence marking the boundary of the DGA, special software logic is executed in the server complex. Recall that all images generated during the time interval that said device was within the DGA are recorded (i.e., generated and saved by the SoftCamera) and are available for analysis.


The Post-Visit Orchestration (PVO) script is analytical software that is triggered when a mobile device exits a given DGA. Upon being triggered, the PVO script analyzes any offers that were made to the mobile device while it was in the DGA, any actions taken by said mobile device, offers made or accepted, time spent in the DGA, any purchases made, linger time, items of interest, etc. Based on all these parameters the PVO script determines if any additional offers are to be made and presents its conclusions to a system administrator for approval.
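One possible PVO decision rule, proposing follow-up offers for items that drew interest but no purchase, may be sketched as follows. The image field name, the offer text, and the approval status field are all illustrative assumptions.

```python
def post_visit_offers(visit_images, purchases):
    """Sketch of a PVO decision: propose follow-up offers for items of
    interest that were not purchased during the visit. Output is queued
    for administrator approval rather than sent directly."""
    interested = set()
    for img in visit_images:                      # images recorded during the visit
        interested.update(img.get("items_of_interest", []))
    unconverted = interested - set(purchases)     # interest without a purchase
    return [{"item": i,
             "offer": f"Follow-up discount on {i}",
             "status": "pending approval"}        # awaits system administrator
            for i in sorted(unconverted)]
```

An actual PVO script would weigh the additional parameters enumerated above (offers accepted, time in DGA, linger time, etc.) before proposing offers.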


In the post-visit phase the SoftCamera images for a DGA may be provided to third-party providers through a suitable API and/or rendered on suitable display devices for human consumption.


In the Orchestration Script phase (or the phase in which the mobile device is within the DGA) the images being generated for the DGA may be rendered on suitable display devices wherein human operators may assimilate the information being displayed and act thereon.


A preferred embodiment of the claimed invention has been implemented in a cloud-based computing system with FIGS. 1-5 representing screen shots of some of the aspects of the invention.


Illustrative Computing Environment

Aspects of the subject matter described herein may be described in the general context of computer-executable instructions, such as program modules or components, being executed by a computer. Generally, program modules or components include routines, programs, objects, data structures, and so forth, which perform particular tasks or implement particular abstract data types. Aspects of the subject matter described herein may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.



FIG. 10 illustrates various components of an illustrative computing-based device 400 which may be implemented as any form of a computing and/or electronic device, and in which embodiments of various aspects of the SoftCamera as described above may be implemented.


The computing-based device 400 comprises one or more inputs 406 which are of any suitable type for receiving media content, Internet Protocol (IP) input, activity tags, activity state information, resources or other input. The device also comprises communication interface 407 to enable the device to communicate with one or more other entities using any suitable communications medium.


Computing-based device 400 also comprises one or more processors 401 that may be microprocessors, controllers or any other suitable type of processors for processing computer-executable instructions to control the operation of the device in order to provide aspects of the SoftCamera system. Platform software comprising an operating system 404 or any other suitable platform software may be provided at the computing-based device to enable application software 403 to be executed on the device.


The computer executable instructions may be provided using any non-transitory computer-readable media, such as memory 402. The memory is of any suitable type such as random access memory (RAM), a disk storage device of any type such as a magnetic or optical storage device, a hard disk drive, or a CD, DVD or other disc drive. Flash memory, EPROM or EEPROM may also be used.


An output is also provided such as an audio and/or video output to a display system integral with or in communication with the computing-based device. A display interface 405 is provided to control a display device to be used in conjunction with the computing device. The display system may provide a graphical user interface, or other user interface of any suitable type.

Claims
  • 1.-20. (canceled)
  • 21. A method of producing a first sequence of images of a first demarcated geographic area, comprising:
    A. generating a first image in the first sequence by
      (i) enhancing a first dataset associated with the first demarcated geographic area with a second dataset, the second dataset obtained from location-derivable information received over a network connection from two or more transmitting elements located in the first demarcated geographic area, the location-derivable information in the second dataset being obtained during a first time-interval determined by a system clock;
      (ii) configuring the first time-interval based on a system policy or parameter;
    B. generating a second image in the first sequence by
      (i) enhancing the first dataset associated with the first demarcated geographic area with a third dataset, the third dataset obtained from second location-derivable information received over the network connection from at least one transmitting element located in the first demarcated geographic area, the second location-derivable information in the third dataset being obtained during a second time-interval determined by the system clock; and
      (ii) configuring the second time-interval based on a system policy or parameter; and
    C. rendering the first and second images in the first sequence of images.
  • 22. The method of claim 21 wherein at least one of the transmitting elements is incorporated in a mobile communication device.
  • 23. The method of claim 21 further comprising enhancing the first and second datasets with a third dataset, the third dataset including trend analysis data.
  • 24. The method of claim 21 further comprising enhancing the first and second datasets with a third dataset, the third dataset being selected from the group consisting of customer relationship management data and purchase history data.
  • 25. The method of claim 21 further comprising enhancing the first and second datasets with a third dataset and obtaining the third dataset from an analysis of one or more online data feeds.
  • 26. The method of claim 21 further comprising enhancing the first and second datasets with an inference obtained from an analysis of a third dataset.
  • 27. The method of claim 21 wherein the first dataset includes a planogram.
  • 28. The method of claim 26 wherein the analysis is performed using machine-learning techniques.
  • 29. The method of claim 21 wherein each of the transmitting elements is located in a mobile communication device and further comprising predicting a number of mobile communication devices that will be in a specified portion of the first demarcated geographic area at a future time based on the first and second images.
  • 30. The method of claim 21 wherein each of the transmitting elements is located in a mobile communication device and further comprising associating at least one of the mobile communication devices with a user identifier that identifies the user of the respective mobile device.
  • 31. The method of claim 30 further comprising sending an electronic message to the identified user.
  • 32. The method of claim 31 further comprising sending the electronic message after the identified user has exited the first demarcated geographic area.
  • 33. The method of claim 31 wherein content of the electronic message is determined based on the first and second images.
  • 34. The method of claim 21 wherein the mobile communication device includes an orchestration script that is operable when the mobile communication device is in the first demarcated geographic area and the network connection is unavailable, the orchestration script causing specified actions to be performed upon specified conditions being satisfied by a location of the mobile communication device obtained from the location-derivable information available to the mobile communication device.
  • 35. The method of claim 21 further comprising producing a second sequence of images of a second demarcated geographic area by:
    D. generating a first image in the second sequence by
      (i) enhancing a third dataset associated with the second demarcated geographic area with a fourth dataset, the fourth dataset obtained from location-derivable information received over a network connection from two or more transmitting elements located in the second demarcated geographic area, the location-derivable information in the fourth dataset being obtained during a third time-interval determined by the system clock;
      (ii) configuring the third time-interval based on a system policy or parameter;
    E. generating a second image in the second sequence by
      (i) enhancing the third dataset associated with the second demarcated geographic area with a fifth dataset, the fifth dataset obtained from second location-derivable information received over the network connection from at least one of the transmitting elements located in the second demarcated geographic area, the second location-derivable information in the fifth dataset being obtained during a fourth time-interval determined by the system clock; and
      (ii) configuring the fourth time-interval based on a system policy or parameter; and
    F. rendering the first and second images in the second sequence of images.
  • 36. The method of claim 35 wherein each of the transmitting elements is located in a mobile communication device and further comprising identifying a common user of a mobile communication device that is rendered in one of the images in the first sequence of images of the first demarcated geographic area and in one of the images in the second sequence of images of the second demarcated geographic area.
  • 37. The method of claim 36 further comprising examining behavior of the common user based on the images in the first sequence of images of the first demarcated geographic area and in the second sequence of images of the second demarcated geographic area.
  • 38. The method of claim 37 further comprising inferring an intent of the common user based on the behavior.
  • 39. The method of claim 35 wherein the first sequence of images in the first demarcated geographic area and the second sequence of images in the second demarcated geographic area are produced in parallel with one another.
  • 40. The method of claim 35 wherein the third dataset includes a second planogram.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. application Ser. No. 14/798,231, filed Jul. 13, 2015 and is a non-provisional and claims priority to U.S. Provisional No. 62/023,457, filed Jul. 11, 2014 entitled “A System and Method for Creating and Rendering Mediated Reality Representations of Networked Spaces Using a SoftCamera” and is also a non-provisional and claims priority to U.S. Provisional Application No. 62/113,605, filed Feb. 19, 2015 entitled “Mediated Representations of Real and Virtual Spaces”, the entirety of each prior application being incorporated by reference herein. This application is also related to the following:
1. U.S. application Ser. No. 14/701,874 (Our Ref.: 12000/5) entitled “A System and Method for Mediating Representations with Respect to Preferences Of a Party Not Located in the Environment” filed May 1, 2015.
2. U.S. application Ser. No. 14/701,858 (Our Ref.: 12000/2) entitled “A System and Method for Mediating Representations with Respect to User Preferences” filed May 1, 2015.
3. U.S. patent application Ser. No. 14/701,883 (Our Ref.: 12000/6) entitled “A System and Method for Inferring the Intent of a User While Receiving Signals On a Mobile Communication Device From a Broadcasting Device” filed May 1, 2015.
The entirety of each prior application is incorporated by reference herein.

Provisional Applications (2)
Number Date Country
62023457 Jul 2014 US
62113605 Feb 2015 US
Continuations (1)
Number Date Country
Parent 14798231 Jul 2015 US
Child 16687470 US