Method for Allocating Resources in a Geographic Area

Information

  • Publication Number
    20230297903
  • Date Filed
    October 26, 2021
  • Date Published
    September 21, 2023
Abstract
A method is disclosed for allocating resources in a geographic area. The method comprises: subdividing the area into a number of pixels; selecting an observation period and subdividing the observation period into a number of time sub-intervals; acquiring a data set associated with the pixels during the observation period; processing the data set to identify one or more typical time sub-intervals of the observation period; on the basis of a sub-set of data of the data set, the sub-set of data being associated with the identified one or more typical time sub-intervals of the observation period, applying a clustering algorithm to partition the pixels into a number of clusters, each cluster comprising a respective group of pixels associated with similar trends in the resource consumption and/or in the presence indicator; and allocating resources to a cluster of the number of clusters on the basis of the trends.
Description
TECHNICAL FIELD

The present invention relates to the field of data management and data-powered resource management. In particular, the present invention relates to a method for allocating resources in a geographic area, for instance a city.


BACKGROUND ART

According to the United Nations report on the status of the world's cities (https://unhabitat.org), the number of urban residents is growing by almost 60 million each year. This population growth will lead to the situation in which over 60% of the world population will live in cities by 2030 and over 70% by 2050, using only 2% of the territory. As the urban population and the corresponding needs continue to expand, the management of all aspects of urban life and of social, economic and environmental well-being by the city authorities will be increasingly complex and it will be necessary to adopt “smart platforms”.


In the context of the great effort being made to ensure that these urban areas are transformed from agglomerations of citizens and services into a “smart” environment, the ability to interpret the information gathered from various data sources becomes essential for being able to provide useful services to the City Manager and/or to the individual citizen.


The data of interest in this context are of different types. These data comprise data of an “open” type, namely data provided by the public administration, data coming from sensors distributed throughout the territory, data derived from resource consumption trends (e.g. electricity, gas, water, waste production, amount of telephone traffic, etc.), and data derived from human or device (e.g. a smartphone) presence indicators. These human or device presence indicators may comprise data records of calls in fixed and/or mobile communication networks (such as, for instance, the so-called Call Data Records or CDRs in a GSM mobile communication network), information on the use of transport means, video recordings, information coming from people counting means, information coming from pollution data, etc. The expression “data related to the presence of people in a geographic area” used in this description refers to data derived from resource consumption trends and/or data derived from human or device presence indicators such as those cited above.


Nowadays, the set of data continuously generated by sensors and/or by presence detectors is truly impressive and represents an example of big data. The term “big data” was coined to describe the exponential growth of structured and unstructured data that are difficult to capture, store, manage and process using conventional data management or analysis techniques.


At the same time, there has been a great improvement in the calculation tools, both in terms of memory and of computational capacity, which makes it possible to collect and analyse large data series. In particular, for a single phenomenon, a very large set of variables can be observed, and a huge amount of discrete observations, from a temporal point of view, of single complex entities can be gathered and processed.


Cao Zechun, Wang Sujing, Forestier Germain, Puissant Anne, Eick Christoph F., “Analyzing the composition of cities using spatial clustering”, Proceedings of the 2nd ACM SIGKDD International Workshop on Urban Computing—UrbComp '13 (2013), p. 1, discloses a spatial clustering approach to discover interesting regions and regions which serve different functions in cities. Spatial clustering groups the objects in a spatial dataset and identifies contiguous regions in the space of the spatial attributes. The task of finding uniform regions in spatial data is formally defined as a maximization problem of a plug-in measure of uniformity and a prototype-based clustering algorithm named CLEVER is introduced to find such regions. Moreover, polygon models which capture the scope of a spatial cluster and histogram-style distribution signatures are used to annotate the content of a spatial cluster in the proposed methodology; they play a key role in summarizing the composition of a spatial dataset. Furthermore, algorithms for identifying popular distribution signatures and approaches for identifying regions which express a particular distribution signature are presented.


Liang Wang, Kunyuan Hu, Tao Ku, and Junwei Wu, “Urban Mobility Dynamics Based on Flexible Discrete Region Partition”, International Journal of Distributed Sensor Networks, Volume 2014, Article ID 782649, focuses on taxicab moving trajectory records and presents a new approach to modeling and analyzing urban mobility dynamics. The proposed method comprises two phases. First, a discrete space partition based on a flexible grid is developed to divide the urban environment into finite nonoverlapping subregions. By integrating mobility origin-destination points with the covered region, the partitioned discrete subregions have better spatial semantics scalability. Then, mobility activity and the randomness of its distribution among the discrete subregions during given time periods are studied. Moreover, the analysis of the mobility linkage of trips between different regions is carried out by means of an O-D (origin-destination) matrix.


Rozenfeld H. D., Rybski D., Andrade J. S. Jr, Batty M., Stanley H. E., Makse H. A., “Laws of population growth”, Proc. Natl. Acad. Sci. USA. 2008 Dec. 2; 105(48):18702-7, discloses a method to designate metropolitan areas, denoted “City Clustering Algorithm” (CCA). The CCA is based on spatial distributions of the population at a fine geographic scale, defining a city beyond the scope of its administrative boundaries. The CCA is used to examine Gibrat's law of proportional growth, which postulates that the mean and standard deviation of the growth rate of cities are constant, independent of city size. The authors found that the mean growth rate of a cluster utilizing the CCA exhibits deviations from Gibrat's law, and that the standard deviation decreases as a power law with respect to the city size. The CCA allows for the study of the underlying process leading to these deviations, which are shown to arise from the existence of long-range spatial correlations in population growth. These results have socio-political implications, for example, for the location of new economic development in cities of varied size.


SUMMARY OF THE INVENTION

The inventors have identified the problem of managing and allocating resources (e.g., electricity, gas, water, transport, data network bandwidth, radio frequencies, etc.) that are inherently limited, in particular of improving, dynamically in time, the geographical distribution of the available resources of this type. A solution to this problem could lead to concentrating the allocation of the available resources in a predictive way in the geographic areas where they are actually needed most. In order to predict the need for resources in such a dynamic way, it might be useful to identify, within a given geographic area such as a city, a number of sub-areas, even remote from one another, showing a similar trend in terms of consumption of one or more resources (e.g. electricity, gas, water, waste production, etc.) and/or in terms of one or more of the human or device presence indicators cited above.


In the light of the above, the Applicant has tackled the problem of providing a method for allocating resources within a given geographic area that allows improving, dynamically in time, the geographic distribution of the available resources on the basis of the actual users' needs.


According to the present invention, the claimed method comprises gathering and managing data related to the presence of people in the geographic area and makes it possible to identify, within the given geographic area, a number of sub-areas showing a similar trend in terms of such data.


The possibility of identifying homogeneous clusters of sub-areas, namely groups of sub-areas that have a similar trend over time, based on a human or device presence trend or on daily resource consumptions (e.g. sub-areas showing presence or consumption peaks in the morning, or in the evening, or sub-areas showing constant trends, where presences or consumptions do not vary over time) makes it possible to perform statistical analyses with very effective techniques. These analyses are of interest both for a public administration and for a service provider.


Indeed, thanks to these analyses, it is possible to learn more about the social and human dynamics of a certain geographic area and about the changes that occur in the considered context (e.g. increase/decrease in the resident population). For service providers, it is possible to profile their customers according to their consumption, to identify their habits and therefore to be able to supply services in a more effective way (e.g. to increase the phone coverage at certain times of the year or to supply more water at certain times/days of the week) or to propose more appropriate commercial offers. In particular, a telecommunication service provider, for instance, may use the statistical analyses to evaluate the actual capacity of its network, to better plan future deployments of the network resources and/or to define more personalized commercial offers for the network users.


According to the present invention, the geographic area under examination is preferably subdivided into a number of sub-areas, not necessarily of similar size, which will be referred to as “pixels”; then, data related to the presence of people associated with these pixels are gathered and clusters of pixels, comprising not necessarily contiguous pixels, are identified, each cluster comprising a number of pixels showing a similar trend in terms of the gathered data.


According to a first aspect, the present invention provides a method for allocating resources in a geographic area, the method comprising:

    • a) subdividing said area (A) into a number of pixels;
    • b) selecting an observation period and subdividing said observation period into a number of time sub-intervals;
    • c) acquiring a data set associated with said pixels during said observation period, the data set comprising, for each pixel, a set of observations related to the presence of people in said geographic area (A), wherein each observation derives from an estimate or a measurement of a resource consumption or a presence indicator;
    • d) processing said data set to identify one or more typical time sub-intervals during said observation period;
    • e) on the basis of a sub-set of data of said data set, the sub-set of data being associated with the identified one or more typical time sub-intervals of said observation period, applying a clustering algorithm to partition the pixels into a number of clusters, each cluster comprising a respective group of pixels associated with similar trends in the resource consumption and/or in the presence indicator; and
    • f) allocating resources to a cluster of the number of clusters on the basis of the trends.


Preferably, processing the data set comprises checking whether each observation of the data set has a value higher than a first acceptability threshold, Smax, or lower than a second acceptability threshold, Smin, and determining an acceptable observation as an observation having a value lower than or equal to the first acceptability threshold, Smax, and higher than or equal to the second acceptability threshold, Smin.


Preferably, processing the data set comprises checking whether the number of acceptable observations, |P|, does not differ from the number of observations of the acquired data set by more than a predefined tolerance, ε, the number of observations of the acquired data set being equal to N×|T|×M, where N is the number of observations within a time sub-interval, d, M is the number of pixels, and |T| is the number of sub-intervals of the observation period, T.


Preferably, processing the data set comprises:

    • checking, for each time sub-interval, d, whether the number of observations of the considered time sub-interval, |Pd|, is higher than or equal to an expected value corresponding to the number of observations of the acquired data set for the predefined sub-interval, d, minus a further predefined tolerance, where the number of observations of the acquired data set for the predefined sub-interval is equal to N×M and the further predefined tolerance is equal to ε/|T|;
    • in the negative, identifying one or more pixels associated with a number of observations which is smaller than an expected value of observations;
    • in case a pixel is identified which, during the considered time sub-interval, d, is associated with a number of acceptable observations smaller than the expected value of observations, checking whether the number of missing observations is higher than a predefined threshold or whether observations are missing for a time period, within the considered time sub-interval, d, longer than a predefined number of hours;
    • in the affirmative, discarding from the data set the observations related to the identified pixel and associated with the considered time sub-interval, d.


Preferably, processing the data set comprises for each time sub-interval, d, checking whether the number of discarded pixels is higher than a further predetermined threshold, and, in the affirmative, discarding the observations associated with the considered time sub-interval, d, from the data set.


Preferably, processing the data set comprises identifying pixels where (xmax(i)−xmin(i))≤Slow, where xmax(i) is the maximum value of the observations associated with an i-th pixel over a considered time period, xmin(i) is the minimum value of the observations associated with the i-th pixel over the considered time period, and Slow is a pre-defined threshold, wherein the considered time period comprises a number of consecutive time sub-intervals, and discarding from the data set the observations related to the identified pixels.


Preferably, step e) comprises computing a normalized value, xNorm(i, t), for each value of an observation, x(i, t), within the sub-set of data associated with an i-th pixel, wherein:

xNorm(i, t) = x(i, t)/x̄(i)

where

x̄(i) = (1/N′) Σ_{t=1}^{N′} x(i, t),

and N′ is an integer number representing the number of observations of the sub-set of data for the i-th pixel.


Preferably, step e) comprises:

    • applying a functional data analysis technique to transform the normalized values into a time variable xNorm(i, t) by rewriting them as a linear combination of a set of basis functions as:

xNorm(i, t) ≅ Σ_{l=1}^{L} cl(i)φl(t)

    • where l is an integer index ranging from 1 to an integer number L representing a total number of basis functions and the basis functions are:












φ0(t) = 1;

φl(t) = cos(lπt) when l is even;

φl(t) = sin(lπt) when l is odd,






    • while coefficients cl(i) are real-valued coefficients;

    • applying a principal component analysis technique as follows:












xNorm(i, t) ≅ Σ_{l=1}^{L} cl(i)φl(t) ≅ Σ_{j=1}^{k′} pj(i)ψj(t)









    • where

    • ψj(t) are the functional principal components;

    • pj(i) are the corresponding coefficients;

    • k′ is a total number of principal components where 1≤k′≤L;

    • determining a matrix, Y, of dimension equal to M×k′, M being the number of pixels, wherein an i-th row of the matrix, Y, comprises the coefficients, pj(i), of the functional principal components for the i-th pixel.





Preferably, the number, k′, of principal components is 3.


Preferably, step e) comprises applying a centroid-based clustering algorithm to the matrix, Y. More preferably, the clustering algorithm is a fuzzy k-means algorithm.


Preferably, step e) comprises applying a method for checking an optimal number of clusters, the method being an Elbow method or a silhouette method.
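
By way of non-limiting illustration, the following Python sketch shows how an Elbow curve may be computed for the matrix Y. It uses the standard k-means of scikit-learn for simplicity, whereas the method itself preferably employs a fuzzy k-means variant; all names here are illustrative assumptions, not part of the claimed method.

```python
import numpy as np
from sklearn.cluster import KMeans

def elbow_curve(Y: np.ndarray, k_max: int = 10) -> list[float]:
    """Within-cluster sum of squares for k = 1..k_max clusters of the rows
    of Y (one row per pixel); the 'elbow' of the resulting curve suggests
    an optimal number K of clusters."""
    return [KMeans(n_clusters=k, n_init=10, random_state=0).fit(Y).inertia_
            for k in range(1, k_max + 1)]
```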


Preferably, step e) comprises checking the stability of the clustering by:

    • determining a stability matrix wherein an entry (i, j) in the stability matrix corresponds to the number of pixels belonging to an i-th cluster of the clustering on a first day which have been assigned to a j-th cluster of the clustering on a second day;
    • determining a number of unstable pixels, wherein each unstable pixel is a pixel that has been associated with different clusters over the considered first day and second day;
    • comparing the number of unstable pixels to a threshold, Sstab;
    • determining that the clustering is unstable if the number of unstable pixels is higher than the threshold, Sstab, as illustrated in the sketch after this list.
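
A minimal sketch of this stability check, assuming the cluster labels of the two days have already been matched to each other (label alignment between clusterings is not addressed here):

```python
import numpy as np

def clustering_is_stable(labels_day1: np.ndarray, labels_day2: np.ndarray,
                         num_clusters: int, s_stab: int) -> bool:
    """Entry (i, j) of the stability matrix counts the pixels assigned to
    cluster i on the first day and to cluster j on the second day; the
    off-diagonal total is the number of unstable pixels."""
    stability = np.zeros((num_clusters, num_clusters), dtype=int)
    for a, b in zip(labels_day1, labels_day2):
        stability[a, b] += 1
    unstable = stability.sum() - np.trace(stability)
    return unstable <= s_stab
```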


According to a second aspect, the present invention provides a computer program product comprising computer-executable instructions for performing, when the program is run on a computer, the steps of the method as set forth above.


According to a third aspect, the present invention provides a system for allocating resources in a geographic area, the system comprising:

    • a computation engine configured to:
      • subdivide the area into a number of pixels;
      • select an observation period and subdivide the observation period into a number of time sub-intervals;
      • acquire a data set associated with the pixels during the observation period, the data set comprising, for each pixel, a set of observations related to the presence of people in said geographic area, wherein each observation derives from an estimate or a measurement of a resource consumption or a presence indicator;
      • process the data set to identify one or more typical time sub-intervals during the observation period;
      • on the basis of a sub-set of data of the data set, the sub-set of data being associated with the identified one or more typical time sub-intervals of the observation period, apply a clustering algorithm to partition the pixels into a number of clusters, each cluster comprising a respective group of pixels associated with similar trends in the resource consumption and/or in the presence indicator; and
      • allocate resources to a cluster of the number of clusters on the basis of the trends;
    • a repository configured to store the data set; and
    • a user interface configured to receive inputs from, and to provide outputs to, a user of the system.


According to a fourth aspect, the present invention provides a method for providing a service in a geographic area, the method comprising:

    • a) subdividing the area into a number of pixels;
    • b) selecting an observation period, T, and subdividing the observation period into a number of time sub-intervals;
    • c) acquiring a data set associated with the pixels during the observation period, the data set comprising, for each pixel, a set of observations related to the presence of people in the geographic area, wherein each observation derives from an estimate or a measurement of a resource consumption or a presence indicator;
    • d) processing the data set to identify one or more typical time sub-intervals during the observation period;
    • e) on the basis of a sub-set of data of the data set, the sub-set of data being associated with the identified one or more typical time sub-intervals of the observation period, applying a clustering algorithm to partition the pixels into a number of clusters, each cluster comprising a respective group of pixels associated with similar trends in the resource consumption and/or in the presence indicator;
    • f) allocating resources to a cluster of the number of clusters on the basis of the trends; and
    • g) providing the service in the cluster on the basis of the allocated resources.





BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will become clearer from the following detailed description, given by way of example and not of limitation, to be read with reference to the accompanying drawings, wherein:



FIG. 1 schematically shows an exemplary geographical area under investigation and a block diagram of a data management system according to the present invention;



FIG. 2 is a flowchart illustrating steps of the method according to the present invention;



FIG. 3 is a flowchart illustrating further steps of the method according to the present invention;



FIG. 4 is a flowchart illustrating further steps of the method according to the present invention;



FIG. 5 shows an exemplary dissimilarity matrix that can be used to indicate a dissimilarity between different days of an observation month according to embodiments of the method of the present invention;



FIG. 6 shows an exemplary week correlation matrix that can be used to indicate a correlation between weekdays of the observation month according to embodiments of the method of the present invention;



FIG. 7 shows an exemplary day correlation matrix that can be used to indicate a correlation between hours of the observation days according to embodiments of the method of the present invention;



FIG. 8 is a flowchart illustrating further steps for reducing the data set according to the method of the present invention;



FIGS. 9a and 9b show two exemplary graphs reporting the value of the presence of people in five different pixels over time;



FIG. 10 is a flowchart illustrating further steps for determining the clusters according to the method of the present invention;



FIG. 11 shows an exemplary graph resulting from the application of the Elbow method according to embodiments of the present invention; and



FIG. 12 shows two exemplary stability matrices that can be used to check the presence of unstable pixels according to embodiments of the present invention.





DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS OF THE INVENTION


FIG. 1 schematically shows a geographic area A (in the following description, simply denoted as “area”) and a data management system 100 according to an embodiment of the present invention.


The area A is a selected geographic region which is to be examined according to the method of the present invention. For example, the area A may be either a district, a town, a city, or any other kind of geographic area. It will be assumed, for sake of non-limiting example, that the area A is a city (in FIG. 1, the map of the city of Rome is schematically reported).


The area A is preferably subdivided into a plurality of sub-areas or pixels.


The method for managing data according to the present invention comprises examining the area A and is based on data collected on a time basis, the data being related to the presence of people in the area A and being hence indicative of a behaviour of the people inside the area A, as well as of a variation in such behaviour. In particular, examining the area A comprises associating the area A with K groupings of pixels (called clusters), each grouping comprising pixels associated with a similar behaviour of the people therein.


Data comprise estimates and/or measurements related to resource consumption (e.g. electricity, gas, water, waste production). Alternatively, or in addition, data comprise estimates and/or measurements of human or device presence indicators, such as, for instance, call records, people counting information, video recordings, information about use of transport means, information about pollution, etc.


For sake of simplicity, in the following description, the data used comprise a set of observations related to the presence of people in the considered geographic area, wherein each observation derives from an estimate or a measurement of a resource consumption or from an estimate or a measurement of a presence indicator. For instance, an observation may indicate the number of people present within a given pixel at a given time of the day during a given day of the week within the considered month as observation period, which may be derived from measurements of the positioning data of the user mobile devices present within the given pixel as retrieved from the mobile communication network serving the area A. According to another example, an observation may indicate the number of appliances consuming electricity within a given pixel at a given time of the day during a given day of the week, which may be derived from measurements of the electricity consumption data.


For sake of example, in the following description, the term “observation” will refer to the number of people whose presence has been observed within a given pixel at a given time of the day.


The following notation will be used:

    • M is the number of pixels in which the area A under consideration is subdivided. M is an integer number higher than 1. Preferably, all the pixels inside the area A have a same size;
    • T is an observation period (for example a month);
    • t is an integer index indicating a time instance (e.g. a time during the day) associated with an observation;
    • d is a predefined time sub-interval of the observation period T (for instance, a day);
    • D represents the set of sub-intervals d in a given observation period T;
    • H is a number of observations in an hour;
    • N is the total number of observations within the time sub-interval d (for instance, N=96 if H=4 and the sub-interval is a day);
    • i is an integer index indicating a pixel (i=1, 2, . . . , M);
    • I represents the set of pixels;
    • P represents the set of observations used for the analysis; during each step or sub-step of the analysis, the set of observations will be qualified with the adjective “current” to indicate the data set resulting from the previous analysis step or sub-step;
    • Pi represents the set of observations associated with pixel i, where i=1, 2, . . . , M;
    • Pd represents the set of observations associated with all the pixels during the time sub-interval d;
    • y(i, t, d) is an observation associated with pixel i at time indicated by index t of time-sub-interval d;
    • x(i, t) is an observation associated with pixel i at time indicated by index t of a “typical time-sub-interval (e.g. day)”;
    • a “typical time sub-interval” represents a particular time sub-interval, e.g. a day, which is used for reducing the amount of data, as it will be described herein after.


The method according to the present invention comprises the steps illustrated in the flowchart of FIG. 2. The geographic area A is subdivided into a number M of pixels, for which a number of data is available, namely the data y(i, t, d) corresponding to the observations gathered in the time interval T, subdivided into time sub-intervals d, each time sub-interval being associated with N observations. As an outcome of the method according to the present invention, the geographic area A is associated with K clusters of pixels, wherein each cluster comprises a group of pixels associated with similar trends in a resource consumption and/or in a presence indicator, as will become clearer from the following detailed description of the method of the present invention. As illustrated in FIG. 2, the method according to the present invention comprises a step including the identification of the time interval for the analysis (step 21), and a subsequent step according to which data are gathered for the identified time interval (step 22). Step 23 comprises cleaning the data that have been gathered for the considered time interval. At the end of this operation, it is checked whether the amount of cleaned data is sufficient for the subsequent analysis (step 24). Step 25 comprises adjusting the data set. Step 26 comprises identifying, on the basis of the cleaned data, one or more typical time sub-intervals, as already cited above. Step 27 comprises reducing the data associated with the identified typical time sub-interval(s). Step 28 comprises determining the K clusters associated with the geographic area A. Each of the mentioned steps will be described in detail herein after.


The system 100 is preferably configured to gather the data related to the presence of people inside the area A. For example, the system 100 may be connected to a mobile communication network, such as a 2G, 3G, 4G or higher generation cellular communication network, serving the area A, and it may be configured to receive from the mobile communication network data related to the position of the users' communication devices (e.g. smartphones) located within the area A, associated with appropriate identifiers of the devices (e.g. the International Mobile Equipment Identity or IMEI). As known, data related to the position of the users' communication devices may be collected by the base stations of the mobile communication network anytime a user's device interacts with the base stations (e.g. at power on/off, at location area update, at incoming/outgoing calls, at sending/receiving SMS or MMS, at Internet access, etc.). The system 100 preferably comprises a computation engine 101 configured to process the data, and a repository 102 (such as a database) configured to store the data. The system 100 preferably comprises a user interface 103 configured to receive inputs from, and to provide outputs to, a user of the system 100. The term “user” of the system 100 may refer to a human being and/or to an external computing system. The output of the cluster identification process according to the present invention allows performing statistical analyses which may be of interest to a public administration and/or a service provider, for instance to more efficiently plan the distribution of network resources and/or define personalized commercial offers.


The system 100 may be implemented on a single computer, or on a network of distributed computers, or on one or more virtual machines running on one or more computers.


With reference again to the flowchart of FIG. 2, the first step 21 of the method of the present invention involves identifying an initial observation period T for the analysis. The observation period T for the analysis is preferably a predetermined time interval, for instance two months. The observation period T is subdivided into a set D of time sub-intervals (e.g. days), which, as already mentioned above, will be indicated by the notation d. The choice of the initial observation period T is preferably made so as to consider a sufficient number of days to get a reliable analysis and to minimize the number of “perturbing” events occurring within the area A (in other words, in order to rule out events perturbing the normal life within the area A, such as, for instance, a public holiday, a fair, a sport event, a concert, the day at which school starts/ends, etc.).


At step 22 of the method according to the present invention, the system 100 preferably retrieves the data that will be used for the analysis. The data may be retrieved from different sources of data (such as, for instance, from different databases, data lakes, data warehouses, etc.) and stored in the repository 102. Since different sources of data may be involved in the retrieving of data, step 22 may comprise an operation of transforming data so as to make them reciprocally coherent. For instance, this operation may involve conforming units of measure and/or reference points, such as a reference geographic unit for the positioning data.


For instance, as already anticipated above, the system 100 may retrieve positioning data and device identifiers from the mobile communication network serving the area A. In operation, each instance of positioning data as well as the corresponding device identifier, which are received from the mobile communication network, is associated with respective timing data (namely, the date and time instant at which the positioning data are detected) and stored in the repository 102. In particular, the data may be continuously retrieved by the system 100 from the mobile communication network. Alternatively, the data may be collected by the system 100 periodically (e.g. on a daily or weekly basis). For example, the data may be transferred from the mobile communication network to the system 100 as they are generated, in a sort of “push” modality, or they may be collected daily in the mobile communication network and then transferred to the system 100 periodically or upon request.


The data stored in the repository 102 for the observation period T are made available to the computation engine 101 for processing according to steps 23-28 of the flowchart of FIG. 2.


Preferably, steps 23-28 are implemented at the computation engine 101 by one or more software program products running in the computation engine 101, which are stored in a memory of the system 100 (possibly, they may be stored in the repository 102).


Step 23 comprises cleaning the data set that has been gathered for the observation period T. The step of cleaning the data set preferably comprises checking the data set. In turn, checking the data set preferably comprises a first check related to the “acceptability” of the data comprised in the data set. This means checking whether each datum of the data set, namely each observation y(i, t, d), has a value higher than a first acceptability threshold Smax or lower than a second acceptability threshold Smin and in any one of these cases discarding it. For any considered datum, the first acceptability threshold and the second acceptability threshold are specifically pre-defined in accordance with the type of datum being considered. The first acceptability threshold Smax may have a value related to the geographic context and be pre-defined so as to identify a datum having an anomalous value. The second acceptability threshold Smin may have a value equal to 0, which allows excluding data with negative values, or a pre-defined value different from 0 which allows excluding data with less significant values, for example related to the presence of M2M (Machine-to-Machine) devices in the considered area A. At the end of this first check (acceptability check), the current data set preferably comprises only acceptable observations y(i, t, d), namely observations y(i, t, d) such that:





∀i∈I, ∀t, ∀d∈D: Smin≤y(i, t, d)≤Smax.


In other words, the observations that do not comply with the equation above are preferably discarded from the data set.
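
By way of non-limiting illustration, this acceptability check may be sketched in Python as follows; the tabular layout of the observations (one row per observation y(i, t, d), with a 'value' column) is an assumption made here for illustration only.

```python
import pandas as pd

def acceptability_filter(observations: pd.DataFrame,
                         s_min: float, s_max: float) -> pd.DataFrame:
    """Keep only observations with s_min <= y(i, t, d) <= s_max;
    values outside the two acceptability thresholds are discarded."""
    mask = (observations["value"] >= s_min) & (observations["value"] <= s_max)
    return observations[mask]

# Example: discard negative values and anomalously large counts.
# current = acceptability_filter(current, s_min=0, s_max=50_000)
```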


Then, checking the data set preferably comprises a second check related to the completeness of the data comprised in the current data set (namely, the data set comprising only acceptable observations according to the first check mentioned above). This second check is illustrated in the flow chart of FIG. 3. Checking the completeness of the data set comprises, at step 31, checking whether the number of observations within the current data set is complete, namely whether it does not differ significantly from the number of observations of the gathered data set. This check can be written as follows:





check if N×|T|×M−|P|≤ε,


wherein N is the total number of observations within the time sub-interval d, |T| is the number of sub-intervals of the observation period T, M is the number of pixels, |P| is the number of observations comprised in the current data set (namely, the cardinality of the current data set of observations P) and ε represents a pre-defined tolerance. The value ε of the tolerance is preferably related to the number of observations that are gathered for a single pixel within the sub-interval d, the number of discarded observations after the first check, the first acceptability threshold Smax and the second acceptability threshold Smin. This tolerance is related to the specific context in which the method operates. If, for example, 6 observations per hour are foreseen for a total of 144 daily observations over 1000 pixels, it may be acceptable to lose 720 observations per day, i.e. those relating to 5 pixels or 0.5% of the total number of pixels, if the second acceptability threshold Smin has been set to 0, i.e. if no data relating to M2M devices or rural areas have already been filtered out. But if one wants to filter such data, i.e. when the value of the second acceptability threshold Smin is higher (for example equal to the average number of transactions per hour of an IoT device, or equal to 1 in order to exclude rural areas), the value of the tolerance must take into account the daily observations related to the pixels that may be excluded.
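
A minimal sketch of this global completeness check, under the assumption that the current data set is held as one record per observation (all names illustrative):

```python
def data_set_is_complete(num_kept: int, n_per_sub_interval: int,
                         num_sub_intervals: int, num_pixels: int,
                         eps: int) -> bool:
    """Check N x |T| x M - |P| <= eps, i.e. that no more than eps
    observations were lost with respect to the expected total."""
    expected = n_per_sub_interval * num_sub_intervals * num_pixels
    return expected - num_kept <= eps

# With the figures of the example above (144 daily observations,
# 1000 pixels, 720 tolerated losses per day over a 30-day period):
# data_set_is_complete(len(current), 144, 30, 1000, eps=720 * 30)
```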


In case the difference is higher than the pre-defined tolerance ε, the method provides for checking, for each sub-interval d of the considered observation period T, whether the number of observations of the considered sub-interval is complete (step 32), namely:





check if |Pd|≥N×M−ε/|T|,


wherein |Pd| is the number of observations comprised in the current data set and related to all pixels during the considered time sub-interval d and ε/|T| is the pre-defined tolerance. In case this check is negative (in other words, in case the difference, in absolute value, between the expected theoretical number of observations during the considered time sub-interval d and the number of observations associated with the considered time sub-interval d in the current data set, is higher than the predefined tolerance), the method provides for identifying the one or more pixels currently associated with a number of observations which is smaller than the expected theoretical value (step 33).


In case a pixel is identified which, during the considered time sub-interval d, is associated with a number of acceptable observations smaller than the expected theoretical value, the method provides for checking whether the number of missing observations is significant (step 34), which means checking: whether (i) this number is higher than a predefined threshold or (ii) observations are missing for a time period, within the time sub-interval d, longer than a predefined number of hours (e.g. missing observations for three consecutive hours during the day). In case this check is positive, all the observations related to the identified pixel and associated with the considered time sub-interval d are discarded from the current data set (step 35). These identified pixels will be referred to as “discarded pixels”.


Once the operation of checking the completeness of the observations for all the considered time sub-intervals is terminated (step 36), the method provides for performing the following further checks:

    • for each time sub-interval d, checking whether the number of discarded pixels is higher than a predetermined threshold; in the affirmative, the observations associated with the considered time sub-interval d are discarded from the current set of data;
    • checking whether the discarded pixels are the same for different time sub-intervals; in the affirmative, the observations associated with the discarded pixels are removed from the current set of data for the considered time sub-interval d or for the entire observation period;
    • checking whether the discarded pixels are geographically adjacent; in the affirmative, if this situation repeats over several days, it may be necessary to evaluate the topographical/demographic or commercial characteristics of the pixels in that they could represent an area with peculiar characteristics not previously identified and therefore they may represent a cluster in themselves.


Again with reference to the flow chart of FIG. 2, at step 24, preferably, the method provides for evaluating the amount of discarded data. If the amount of discarded data exceeds a threshold, it is necessary to select a new period over which to gather data; this new period may be separate from the previous period or contain a greater number of days.


At step 25, preferably, the method provides for performing an adjustment of the data set by possibly discarding the observations associated with one or more given pixels and/or related to one or more given days. In particular, step 25 may comprise the following operations:

    • i) identifying pixels where observations associated with a given presence indicator do not significantly vary over a pre-determined period of time (e.g. a number of consecutive time sub-intervals, or the entire observation period T), namely pixels wherein:





(xmax(i)−xmin(i))≤Slow

    •  where xmax(i) is the maximum value of the observations associated with generic pixel i over the considered time period, xmin(i) is the minimum value of the observations associated with generic pixel i over the considered time period, and Slow is a pre-defined threshold representing a required minimum variation of the considered observations for the subsequent analysis.
    • ii) identifying anomalous pixels and/or anomalous days (optional);
    • iii) identifying portions of area A which, during the observation period T, host specific event(s), wherein the observations associated with such portions are strongly affected by those event(s) (optional).


Operation (i) further comprises discarding from the current data set the observations related to the identified pixels. Similarly, also operation (ii) further comprises discarding from the current data set the observations associated with the anomalous pixels and/or with the anomalous time sub-intervals (e.g. days).
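
Operation (i) may be sketched as follows, with the same illustrative tabular layout of the observations assumed in the earlier sketches:

```python
import pandas as pd

def drop_flat_pixels(observations: pd.DataFrame, s_low: float) -> pd.DataFrame:
    """Discard the pixels whose observations satisfy
    x_max(i) - x_min(i) <= S_low over the considered time period."""
    spread = observations.groupby("pixel")["value"].agg(lambda v: v.max() - v.min())
    flat_pixels = spread[spread <= s_low].index
    return observations[~observations["pixel"].isin(flat_pixels)]
```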


In particular, as to optional operation (ii), an “anomalous pixel” is preferably identified as follows. For each pixel i of the area A, the following sum is preferably computed:







Pi = Σ_{t=1}^{N} y(i, t, x)








    • where y(i, t, x) is the observation associated with pixel i at time indicated by index t of time-sub-interval x. N is the number of observations within time-sub interval x. For instance, sum Pi may indicate the number of people whose presence has been observed within pixel i during day x of the considered observation period. Then, the computed sum Pi is compared with an expected value Ai, which can be determined from historical data, demographic information, statistics, etc. If, for pixel i, the computed sum Pi is not nearly equal to the expected value Ai over each one of a pre-determined number of consecutive time sub-intervals, pixel i is identified as an “anomalous pixel”. For each anomalous pixel, it may be checked whether there have been errors during the process of gathering the observations or whether the pixel belongs to a portion of area A which may host events that may perturb the observations. In case no error/perturbation is found possibly affecting the observation associated with the anomalous pixel, the observations associated with pixel i are preferably discarded from the current data set.
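
A possible sketch of this anomaly check; the relative tolerance quantifying “nearly equal” and the array layout are assumptions made here for illustration:

```python
import numpy as np

def find_anomalous_pixels(daily_sums: np.ndarray, expected: np.ndarray,
                          rel_tol: float, min_consecutive: int) -> list[int]:
    """Flag pixels whose daily sum P_i deviates from the expected value A_i
    for at least `min_consecutive` consecutive sub-intervals.

    daily_sums: array of shape (M, D) with P_i per pixel and day;
    expected:   array of shape (M,) with A_i per pixel."""
    deviates = np.abs(daily_sums - expected[:, None]) > rel_tol * expected[:, None]
    anomalous = []
    for i, row in enumerate(deviates):
        run = 0
        for flag in row:
            run = run + 1 if flag else 0
            if run >= min_consecutive:
                anomalous.append(i)
                break
    return anomalous
```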





Further, in case the number of anomalous pixels for the same sub-interval is higher than a pre-defined threshold, the method provides for checking the observations associated with the considered sub-interval in deeper detail. This means splitting the time sub-interval into time periods grouping a number of hours or a number of minutes (for instance, splitting a day into quarters of an hour) and considering the difference between observations from any one of the time periods and the subsequent one. If the number of anomalous pixels wherein the difference is higher than a first threshold (e.g. 1%) is higher than a second threshold (e.g. 300), the observations associated with the considered time sub-interval are preferably discarded from the current set of data. Such a time sub-interval is referred to as an “anomalous time sub-interval”.


As to optional operation (iii), it further comprises discarding from the current set of data the observations associated with pixels belonging to portions of the area A identified as being strongly affected by specific event(s).


Referring back to the flow chart of FIG. 2, step 26 comprises identifying, on the basis of the current data set, one or more typical time-subintervals. Identification of the typical time sub-intervals is performed in order to limit the amount of data for the successive analysis. FIG. 4 shows a flow chart illustrating the steps relating to the operation of identifying the typical time sub-intervals. For sake of simplicity, in the following description, reference will be made to an observation period T as a month and to time-sub-intervals as the days of the considered month.


At step 41, a dissimilarity matrix is computed. The dissimilarity matrix is a square matrix having a same number of rows and columns, this same number being equal to the number of days of the considered month.


Each record of the dissimilarity matrix preferably comprises an index of dissimilarity D between the observations of the day associated with the considered row of the matrix (day x) and the observations of the day associated with the considered column (day y). The dissimilarity index D(x, y) is a real number: the higher its value, the higher the dissimilarity between the considered days. The dissimilarity index D(x, y) can be computed as the Euclidean distance between two arrays of values associated with day x and day y, wherein each value of an array represents the sum, over all the pixels, of the values of the observations associated with a given time instance of the considered day. Hence, the dissimilarity index D(x, y) can be computed as follows:






D(x, y) = ‖Px* − Py*‖

where

Px* = Σ_{i=1}^{M} y(i, t, x) and Py* = Σ_{i=1}^{M} y(i, t, y).


It is to be noticed that other types of distance can be used, in alternative to the Euclidean distance, to compute the dissimilarity index D(x, y), such as the Manhattan distance or the Hamming distance.


Once the distances are computed, they can be normalized so that the dissimilarity index D(x, y) ranges between 0 (no dissimilarity) and 1 (maximum dissimilarity).
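
By way of non-limiting illustration, the dissimilarity matrix may be computed in Python as follows, assuming the per-day profiles Px* have already been aggregated into an array with one row per day:

```python
import numpy as np

def dissimilarity_matrix(day_profiles: np.ndarray) -> np.ndarray:
    """Pairwise Euclidean distance D(x, y) = ||P*_x - P*_y|| between the
    rows of day_profiles (shape (|T|, N)), normalized to [0, 1]."""
    diff = day_profiles[:, None, :] - day_profiles[None, :, :]
    dissim = np.linalg.norm(diff, axis=-1)
    if dissim.max() > 0:
        dissim /= dissim.max()  # 0 = no dissimilarity, 1 = maximum
    return dissim
```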



FIG. 5 shows an exemplary dissimilarity matrix for a month comprising 31 days. The matrix reports, on a grey scale, the dissimilarity indexes between each couple of days of the month. The value of the dissimilarity index is represented as a dot: the larger and darker the dot, the higher the dissimilarity index.


At step 42, a week correlation matrix is computed. Indeed, when, on the basis of the previous analysis, a relationship is observed between, for instance, the number of presences and the specific day of the week (e.g. Monday, Tuesday, etc.), a second matrix may be built to identify the correlations of the different days with each other. For instance, it may happen that the seven days of a week are analysed together to define a sub-area; but this might not be the right choice because of specific events that occur on just a single day of the week, such as a local open-air market. Hence, the analysis can be negatively influenced by classifying as different two sub-areas that are exactly the same six days out of seven. Therefore, in this second type of grouping, the days of the week with a similar trend are sought (e.g. Tuesday has a number of presences similar to Thursday but different from Sunday).


The week correlation matrix is a square matrix having a same number of rows and columns, this same number being equal to the number of days within a week. Each record of the week correlation matrix preferably comprises a week correlation index Cw(sx, sy) indicating the correlation between the observations of the week day associated with the considered row of the matrix (day sx) and the observations of the week day associated with the considered column of the matrix (day sy).


The week correlation index Cw(sx, sy) is for example computed as follows. Firstly, a sum Ps* is computed for each day of the week (Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday) as the sum of the values of the observations associated with the considered weekday s (e.g. each Monday) within the given month, as follows:







Ps* = Σ_{i=1}^{M} Σ_{d=1}^{X} y(i, t, d)







where, for each considered weekday s, Ps* indicates an array of N values corresponding to the number of time instances within a day, wherein each value is associated with a given time instance t and is the sum, over all the pixels and over the number X of occurrences of the considered weekday s within the considered month, of the related observations.


The week correlation index Cw(sx, sy) related to the correlation of weekday sx and weekday sy may be computed as follows:








Cw(sx, sy) = cov(Psx*, Psy*) / (sd(Psx*)·sd(Psy*))







where Psx* is the array containing the sums of observations of weekday sx computed as mentioned above (i.e. an array of N values, for instance 96 values), Psy* is the array containing the sums of observations of weekday sy computed as already mentioned above (i.e. an array of N values, for instance 96 values), cov(⋅) is the covariance and sd(⋅) is the standard deviation. Data are comparable if the number of occurrences of weekday sx and the number of occurrences of weekday sy are the same within the considered month.
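
Since Cw(sx, sy) is the Pearson correlation between the arrays Psx* and Psy*, the whole week correlation matrix can be obtained in one call, as in this sketch (one row per weekday, an assumed layout):

```python
import numpy as np

def week_correlation_matrix(weekday_profiles: np.ndarray) -> np.ndarray:
    """Pearson correlation between each pair of weekday profiles P*_s;
    weekday_profiles has shape (7, N), e.g. N = 96 values per weekday."""
    return np.corrcoef(weekday_profiles)
```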



FIG. 6 shows an exemplary week correlation matrix. The weekdays are indicated with numbers 1 (Monday), 2 (Tuesday), 3 (Wednesday), 4 (Thursday), 5 (Friday), 6 (Saturday), 7 (Sunday). The matrix reports, on a grey scale, the values of the week correlation index between each couple of weekdays. The week correlation index of the exemplary matrix of FIG. 6 ranges between −1 and 1. As apparent, the week correlation matrix is symmetric and the values on its main diagonal are all equal to 1. The matrix of FIG. 6 shows that, for example, the observations of the Wednesdays and the observations of the Sundays are very different from the observations of the other weekdays.


At optional step 43, a day correlation matrix is computed.


The day correlation matrix is a square matrix having a same number of rows and columns, this same number being equal to the number of hours during a day (or, alternatively to a number of fractions of an hour within a day). For simplicity, in the following description, reference will be made to the hours of the day. Each record of the day correlation matrix preferably comprises a day correlation index Cd(hx, hy) indicating the correlation between the observations of the hour associated with the considered row of the matrix (hour hx) and the observations of the hour associated with the considered column of the matrix (hour hy).


The day correlation index Cd(hx, hy) is for example computed as follows. Firstly, an array Ph* is computed for each hour h of the day, each array containing a number H of values (for instance, H=4 values), where each value is the sum, over all the pixels and over a number of days, of the observations associated with a given time instance t during the considered hour. The array Ph* is computed as follows:







Ph* = Σ_{i=1}^{M} Σ_{d=1}^{X} y(i, t, d)







where M is the number of pixels, X is a number of days considered for the sum (e.g. all days within the considered month) and index t indicates a time instance during the considered hour h.


The day correlation coefficient Cd(hx, hy) related to the correlation of hour hx and hour hy during the day may be computed as follows:








Cd(hx, hy) = cov(Phx*, Phy*) / (sd(Phx*)·sd(Phy*)).





where Phx* is the array containing the sums of observations associated with hour hx and Phy* is the array containing the sums of observations associated with hour hy.



FIG. 7 shows an exemplary day correlation matrix. The hours of the day are indicated with numbers from 0 to 23. The matrix reports, on a grey scale, the values of the day correlation index between each couple of hours during the day. The day correlation index of the exemplary matrix of FIG. 7 ranges between −1 and 1. As apparent, the day correlation matrix is symmetric and the values on its main diagonal are all equal to 1. The matrix of FIG. 7 shows that, for example, the observations of the night and evening hours (0-6 and 17-23) have a positive correlation between them, while they are negatively correlated with the observations of the daytime hours (7-14).


Referring back to the flow chart of FIG. 2, at the end of step 26 one or more typical days are identified. This means limiting the current data set to observations related to one or more given days within the considered observation period. In particular, this step 26 comprises selecting one or more days during each week, or one or more days during the considered month, and limiting the current data set to the observations associated therewith. A typical day may be selected starting from the dissimilarity matrix. A typical day, in particular, is selected as the day whose correlation coefficients with the other days are greatest: in the example of FIG. 6 it is noted that Wednesday and Sunday have a very low correlation index with the other days. If the correlation values of one day with the other days are added up, the sum is: Monday 5.22, Tuesday 5.28, Wednesday 4.24, Thursday 5.37, Friday 5.32, Saturday 5.32, Sunday 4.18. In this case, the highest value corresponds to Thursday.
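
A minimal sketch of this selection rule; the input is assumed to be the week correlation matrix of FIG. 6 (names illustrative):

```python
import numpy as np

def typical_weekday(week_corr: np.ndarray) -> int:
    """Return the index of the weekday whose summed correlation with the
    other days is largest (self-correlations on the diagonal excluded).
    With the sums quoted above (Monday 5.22 ... Sunday 4.18) the result
    would be 3, i.e. Thursday."""
    sums = week_corr.sum(axis=1) - np.diag(week_corr)
    return int(np.argmax(sums))
```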


In particular, when the observations related to one single day are selected for the subsequent analysis, the method provides for using them as they are, whereas when several days are identified, the median is computed. At the end of step 26, hence, the current data set comprises the observations of the single typical day or, alternatively, it comprises the values computed as the median of the observations when several typical days are identified.


Referring back to the flow chart of FIG. 2, step 27 comprises reducing the data of the current data set. FIG. 8 is a flow chart illustrating the steps relating to reducing the data of the current data set.


In particular, step 81 preferably comprises processing the data of the current set of data so as to normalize the values of the different observations of the current set of data within a common range. Indeed, the values of the observations are different from pixel to pixel, as can be seen from the exemplary graphs of FIG. 9a. These graphs show the value of the presence of people in five different pixels over time (24 hours). Starting from the observations x(i, t) associated with the typical day, step 81 preferably comprises computing the following data:









x̄(i) = (1/N′) Σ_{t=1}^{N′} x(i, t),

σ(i) = √( Σ_{t=1}^{N′} (x(i, t) − x̄(i))² / N′ ),

x̄(t) = (1/M′) Σ_{i=1}^{M′} x(i, t),




namely, respectively, the average value over time (where N′ is an integer number representing the number of time instances over which the observations of the i-th pixel are to be normalized, which corresponds to the number of observations of the typical day), the standard deviation of the values of the observations for pixel i, and the average values over the pixels in the current data set (M′ being an integer number smaller than or equal to M) of the values of the observations at time t.


Then, step 81 preferably comprises computing a normalized value xNorm(i, t) for each value x(i, t) according to the following equation:








xNorm(i, t) = x(i, t)/x̄(i).






FIG. 9b shows the normalized values of the data contained in the graphs of FIG. 9a, the values having been normalized according to the above procedure. As can be seen, the trend of data over time along each graph is preserved after the normalization procedure described above, and data of the different pixels have comparable values.
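
The normalization of step 81 may be sketched as follows, assuming the typical-day observations are arranged in an array with one row per pixel (an illustrative layout):

```python
import numpy as np

def normalize_pixels(X: np.ndarray) -> np.ndarray:
    """x_Norm(i, t) = x(i, t) / x_bar(i): divide each pixel's typical-day
    curve (one row of X, of length N') by its mean over time."""
    return X / X.mean(axis=1, keepdims=True)
```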


Step 82 preferably comprises transforming the normalized data into time variables. In particular, a known Functional Data Analysis technique is preferably implemented. Preferably, the time-discrete normalized values xNorm(i, t) are rewritten as a linear combination of a set of L basis functions (L being an integer number higher than 1) and transformed into a time variable xNorm(i, t), namely:








x

N

o

r

m


(

i
,
t

)






l
=
1

L




c
l

(
i
)




ϕ
l

(
t
)







where l is an integer index ranging from 1 to L and the basis functions are, for instance, the following:

$$\varphi_0(t) = 1\,;$$

$$\varphi_l(t) = \frac{\cos(lt)}{\pi} \;\text{ when } l \text{ is an even number}\,;$$

$$\varphi_l(t) = \frac{\sin(lt)}{\pi} \;\text{ when } l \text{ is an odd number}\,,$$
while the coefficients $c_l(i)$ are the real-valued coefficients that allow the best reconstruction of the signal (typically computed using the known Ordinary Least Squares (OLS) algorithm or more advanced techniques).


As already mentioned above, having 4 observations per hour means that 96 observations are available per day. Preferably, the number L of basis functions used for transforming the data into a time variable is an odd number higher than half the number of daily observations. When the number of daily observations is 96, the number of basis functions is preferably chosen to be 51. This choice followed tests performed by the inventors and represents a good compromise between the amount of stored data, the level of detail and the need to filter out the statistical noise associated with the gathered observations.
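The following Python sketch illustrates this expansion under stated assumptions: the 96 daily time instants are mapped, purely for illustration, onto [0, 2π) so that the basis is periodic over the day; the index l is taken to run from 0 to L−1 so as to include φ0(t) = 1; and the coefficients cl(i) are fitted by ordinary least squares, as indicated above. The signal values are synthetic placeholders.

```python
import numpy as np

N_OBS, L = 96, 51
# Assumption for illustration: the 96 daily instants are mapped onto
# [0, 2*pi) so that the Fourier-type basis is periodic over one day.
t = np.linspace(0.0, 2.0 * np.pi, N_OBS, endpoint=False)

def phi(l, t):
    # Basis of step 82: phi_0 = 1, cos(l*t)/pi for even l, sin(l*t)/pi for odd l.
    if l == 0:
        return np.ones_like(t)
    return (np.cos(l * t) if l % 2 == 0 else np.sin(l * t)) / np.pi

Phi = np.column_stack([phi(l, t) for l in range(L)])   # (96, 51) design matrix

# OLS fit of the coefficients c_l(i) for one pixel's normalized series
# (synthetic signal: a daily oscillation plus noise).
rng = np.random.default_rng(0)
x_norm_i = 1.0 + 0.3 * np.sin(t) + 0.05 * rng.standard_normal(N_OBS)
c_i, *_ = np.linalg.lstsq(Phi, x_norm_i, rcond=None)   # coefficients c_l(i)
x_smooth = Phi @ c_i                                   # reconstructed (smoothed) signal
```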


Step 83 preferably comprises applying a known Principal Component Analysis (PCA) technique, in order to further limit the amount of data. According to the present invention, the known functional version of this technique (fPCA) is adopted. This step preferably comprises rewriting the reconstructed signal $\sum_l c_l(i)\,\varphi_l(t)$ as

$$x_{Norm}(i,t) \approx \sum_{l} c_l(i)\,\varphi_l(t) \approx \sum_{j=1}^{k'} p_j(i)\,\psi_j(t) = d_i(t)$$

where

    • xNorm(i, t) is a row vector comprising the N′ observations of the typical day for pixel i;
    • ψj(t) are the (functional) principal components (that is, the optimized basis functions);
    • pj(i) are the corresponding coefficients ("loadings"), computed via matrix multiplication;
    • k′ is the total number of principal components considered in reconstructing/rewriting xNorm(i, t) (1≤k′≤L, where L is the total number of basis functions);
    • di(t) results from rewriting xNorm(i, t) in the principal components' space.


With fPCA, it is thus possible to rewrite the reconstructed signal $\sum_l c_l(i)\,\varphi_l(t)$ in an optimised and more concise way as $\sum_{j=1}^{k'} p_j(i)\,\psi_j(t)$, with $k' \ll L$.


According to the method of the present invention, at the end of step 83, a sub-set of the principal components is considered for the subsequent analysis. Preferably, the sub-set comprises the first 3 principal components. Indeed, tests performed by the inventors showed that considering only the first three principal components allows capturing 90% of the variability of the original data.


Hence, at the end of step 83, a matrix Y of dimensions M×3 is preferably computed, wherein the i-th row of matrix Y comprises the components pj(i) of the time variable xNorm(i, t) associated with pixel i over the three-dimensional space of the first three principal components. Each row can be represented as a point in a scatter plot of matrix Y.
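The construction of matrix Y can be sketched as follows; for illustration, ordinary PCA on the matrix of basis coefficients is used as a simplified stand-in for the fPCA described above (for an orthonormal basis the two essentially coincide), and the coefficient values are synthetic placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

# C has shape (M, L): row i holds the basis coefficients c_l(i) of
# pixel i (synthetic placeholder values).
M, L = 200, 51
C = np.random.default_rng(0).standard_normal((M, L))

# Keep the first 3 principal components: the i-th row of Y holds the
# loadings p_j(i) of pixel i in the principal components' space.
pca = PCA(n_components=3)
Y = pca.fit_transform(C)                           # matrix Y, shape (M, 3)
variability = pca.explained_variance_ratio_.sum()  # fraction of variability retained
```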


At step 84, the method optionally provides for computing an "average day" for each pixel i. This can be done by considering the time variables xNorm(i, t) associated with pixel i for a number of days (preferably, the same weekday over a number of weeks in a month) and applying the principal component analysis technique already described above. Each day can be represented as a point in a scatter plot of the resulting matrix. The closer the points, the more similar the original data. The average day of the considered pixel i can be chosen as the day associated with the median point of the group of resulting points in the scatter plot.


Alternatively, the method may provide for computing a median value associated with each principal component over the considered days.


Referring back to the flow chart of FIG. 2, step 28 comprises identifying the clusters of pixels. Identifying the clusters of pixels comprises the steps illustrated in the flow chart of FIG. 10. In particular, the method of the present invention provides for using a clustering algorithm to determine, starting from matrix Y, a number K of clusters, wherein each cluster shows similar trends in the gathered data.


The clustering algorithm is preferably a centroid-based clustering algorithm, more preferably the known fuzzy k-means algorithm. Indeed, the inventors noticed that this particular algorithm is well suited to the considered analysis and provides an estimate of how accurate the cluster determination is for the given data. However, other algorithms may be used as well, such as one of the following: k-means, PAM (Partitioning Around Medoids) k-medoids, CLARA (Clustering Large Applications), CLARANS (Clustering Large Applications based on RANdomized Search).


For the sake of a non-limiting example, the following description and FIG. 10 will refer to the application of the fuzzy k-means algorithm. The input is the matrix Y comprising the components of the time variables representing the observations associated with all the pixels of area A. Step 1001 preferably provides for selecting a range for the number K of clusters. For instance, K may range between 2 and 15, or between 2 and 10, etc. Moreover, step 1001 preferably comprises selecting a fuzziness parameter, for instance equal to 2. At step 1002, the clustering algorithm is preferably applied by considering a given value of K (for instance, starting with 2) and the selected fuzziness parameter. The fuzzy k-means algorithm is known and hence no further details will be provided hereinafter.
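Nonetheless, for illustration, a minimal, self-contained fuzzy k-means (fuzzy c-means) iteration can be sketched as follows; this is a hypothetical NumPy sketch, not the implementation contemplated by the method, and matrix Y is replaced by synthetic data.

```python
import numpy as np

def fuzzy_kmeans(Y, K, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy k-means (fuzzy c-means) on the rows of Y.

    Returns the (K, d) centroids and the (M, K) membership matrix U.
    """
    rng = np.random.default_rng(seed)
    U = rng.random((Y.shape[0], K))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1 per point
    for _ in range(n_iter):
        W = U ** m                                 # fuzzified memberships
        centroids = (W.T @ Y) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(Y[:, None, :] - centroids[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)                   # guard against zero distances
        # Standard update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
    return centroids, U

# Synthetic stand-in for matrix Y (M = 200 pixels, 3 principal components).
Y = np.random.default_rng(1).standard_normal((200, 3))
centroids, U = fuzzy_kmeans(Y, K=2, m=2.0)   # K and fuzziness parameter of step 1002
labels = U.argmax(axis=1)                    # hard cluster assignment per pixel
```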


The algorithm is iteratively repeated by varying the value of K within the selected range. At the end of the iterations, the obtained clusters are preferably checked (step 1003). Preferably, step 1003 comprises applying one or more methods for checking an optimal number of clusters. The optimal number of clusters is the number of clusters according to which the points within each cluster are as close as possible to each other and each cluster is as far as possible from the other clusters.


In particular, the optimal number of clusters is preferably determined by applying the known Elbow method. This method comprises computing the so-called "within cluster sums of squares" as follows:








$$W(Y,C)_K = \sum_{k=1}^{K} \; \sum_{y_i \in Y_k} \left| y_i - c_k \right|^2$$
where C is the set of centroids, K is the considered number of clusters, $c_k$ is the centroid of the k-th cluster, Y is the matrix comprising the components $y_i$ of the time variables representing the observations associated with all the pixels i of area A, and $Y_k$ is the k-th partition created by the clustering algorithm.



FIG. 11 shows a graph representing exemplary values of W(Y,C)K for different numbers of clusters (from 1 to 15).


According to the Elbow method, the optimal number of clusters is determined as the number which corresponds to an elbow on the graph of W(Y,C)K. In FIG. 11, the position of the elbow corresponds to a number of clusters equal to 6, which is hence the optimal number of clusters for the exemplary case illustrated therein.
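A sketch of the Elbow computation follows, reusing the fuzzy_kmeans function and the synthetic matrix Y from the sketch above; it evaluates W(Y,C)K over the selected range of K, after which the elbow can be read off the resulting curve (cf. FIG. 11).

```python
import numpy as np

def wcss(Y, labels, centroids):
    # Within-cluster sum of squares W(Y, C)_K defined above.
    return sum(
        float(np.sum(np.linalg.norm(Y[labels == k] - c, axis=1) ** 2))
        for k, c in enumerate(centroids)
    )

# Evaluate the curve over the selected range of K; the "elbow" is the
# value of K after which the decrease in W(Y, C)_K flattens out.
curve = {}
for K in range(1, 16):
    centroids, U = fuzzy_kmeans(Y, K)        # from the sketch above
    curve[K] = wcss(Y, U.argmax(axis=1), centroids)
```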


Step 1003 may comprise applying another method, such as the known silhouette method. The "silhouette" parameter is a measure of how close a point is to the other points of its cluster and how far it is from the points of the other clusters. The silhouette varies between −1 and 1: a value close to 1 indicates that the point is close to the points of the cluster to which it has been assigned and far from the points of the other clusters, while a value close to −1 indicates that the point is not close to the points of its own cluster and hence its assignment is incorrect. Considering a number K of clusters, the silhouette is computed as follows:








$$S(Y)_K = \overline{\bar{s}(y)_k}\,,$$

$$\bar{s}(y)_k = \frac{b(y) - a(y)}{\max\bigl(a(y),\, b(y)\bigr)}\,,$$
where y is a point of a given cluster, $\bar{s}(y)_k$ is the average silhouette of the points y of the k-th cluster, a(y) is the average distance between point y and the other points of the k-th cluster (internal cohesion), b(y) is the minimum average distance between y and the points of the other clusters, and $S(Y)_K$ is the silhouette score, i.e. the average silhouette over all the points of all the K clusters.


The optimal number K of clusters is the number corresponding to the highest value of S(Y)K, when K ranges within the range selected at step 1001.
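The silhouette check can be sketched analogously, here using scikit-learn's silhouette_score and again reusing the fuzzy_kmeans sketch above:

```python
from sklearn.metrics import silhouette_score

# Silhouette score S(Y)_K for each candidate K; the optimal K is the
# one with the highest score.
scores = {}
for K in range(2, 16):                        # silhouette needs K >= 2
    centroids, U = fuzzy_kmeans(Y, K)         # from the sketch above
    labels = U.argmax(axis=1)
    if len(set(labels)) > 1:                  # guard against degenerate clusterings
        scores[K] = silhouette_score(Y, labels)
best_K = max(scores, key=scores.get)
```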


At the end of step 1003, the optimal numbers of clusters obtained with the different methods described above are preferably compared. With reference to the exemplary methods described above, in particular, the optimal numbers of clusters obtained by applying the Elbow method and the silhouette method are compared. If the numbers do not coincide, the optimal number of clusters is preferably taken as the number obtained with the silhouette method.


At step 1004, the method provides for checking whether the clustering obtained by using the optimal number of clusters determined at step 1003 is acceptable. This means, for instance, checking whether the silhouette score S(Y)K is acceptable, that is, checking whether it is close enough to 1, with a tolerance given by the current use case. In the negative, steps 1002 and 1003 are preferably repeated by considering a different range for the number of clusters K.


Once it is determined that the clustering is acceptable, at step 1005 the method provides for checking whether the clustering is stable. This means, according to a preferred embodiment, checking whether the procedure described above gives the same clustering result when "similar" days over different weeks (e.g. every Monday over a number of consecutive weeks) are considered as typical days. Checking the stability of the clustering may comprise determining how many pixels are grouped in a same cluster over different weeks. It will be assumed herein below, for the sake of example, that two similar days over two consecutive weeks are considered. Step 1005 preferably comprises determining a square stability matrix of dimensions K×K (K being the number of clusters), wherein entry (i, j) in the matrix corresponds to the number of pixels belonging to the i-th cluster on the first day which have been assigned to the j-th cluster on the second day. This number is zero if none of the pixels assigned to the i-th cluster on the first day is assigned to the j-th cluster on the second day. The entries of the matrix may be normalized with respect to the number of pixels assigned to the i-th cluster on the first day or with respect to the number of pixels assigned to the j-th cluster on the second day. In either case, a value equal to 1 corresponds to a situation in which the same pixels are assigned to the i-th cluster on the first day and to the j-th cluster on the second day. A high value, for instance higher than 0.7, is indeed indicative of the fact that pixels identified as similar on the first day (hence, assigned to the same cluster) remain similar also on the second day (and are again assigned to a same cluster, not necessarily the same cluster as on the first day).



FIG. 12 shows two exemplary stability matrices. A number of K=7 clusters is considered for sake of example, and the number of pixels within each cluster is taken into consideration for two exemplary days, October 20th and October 27th. The matrix on the left of FIG. 12 contains values that have been normalized with respect to the number of pixels of the first day, while the matrix on the right of FIG. 12 contains values that have been normalized with respect to the number of pixels of the second day.


Step 1005 further comprises determining a number of unstable pixels. An unstable pixel is a pixel that has been associated with different clusters over the considered days. More particularly, unstable pixels are those pixels that, in the stability matrix, are not associated with cells corresponding to the same cluster over different days. In the example of FIG. 12 considered above, unstable pixels belong, for example, to cluster 1 on the first day and to cluster 4 or 7 on the second day, or they belong to cluster 2 on the first day and to cluster 7 on the second.
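The stability matrix and the count of unstable pixels can be sketched as follows; the cluster labels of the two days are hypothetical placeholders, and the criterion used to flag a pixel as unstable (not following the dominant day-2 cluster of its day-1 cluster) is one simple way to operationalize the definition above.

```python
import numpy as np

def stability_matrix(labels_day1, labels_day2, K, normalize="first"):
    """K x K stability matrix: entry (i, j) counts the pixels in cluster i
    on the first day that fall in cluster j on the second day."""
    S = np.zeros((K, K))
    for a, b in zip(labels_day1, labels_day2):
        S[a, b] += 1
    if normalize == "first":                       # cf. left matrix of FIG. 12
        S /= np.maximum(S.sum(axis=1, keepdims=True), 1)
    elif normalize == "second":                    # cf. right matrix of FIG. 12
        S /= np.maximum(S.sum(axis=0, keepdims=True), 1)
    return S

# Hypothetical labels for 500 pixels on two days, with K = 7 clusters.
K = 7
rng = np.random.default_rng(0)
labels1, labels2 = rng.integers(0, K, 500), rng.integers(0, K, 500)

S = stability_matrix(labels1, labels2, K)
dominant = S.argmax(axis=1)   # best-matching day-2 cluster for each day-1 cluster
unstable = sum(dominant[a] != b for a, b in zip(labels1, labels2))
```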


Then, step 1005 preferably comprises comparing the number of unstable pixels to a threshold Sstab. The obtained clustering is determined to be unstable if the number of unstable pixels is higher than the threshold. In this case, a further analysis is preferably performed to determine whether any unstable pixel corresponds to a portion of area A affected by a particular event or the like. As an outcome of this analysis, the unstable pixels may be excluded from the data set and the method may be applied again with an updated data set, starting from the identification of the typical day. If the unstable pixels are not associated with perturbing events, the clustering algorithm may be applied again by considering the current cluster partitioning as a starting point and then, for instance, moving some elements from one cluster to another or creating new clusters: in this case, a different number of clusters may be considered, as illustrated in the flow chart of FIG. 10, and/or a different clustering algorithm may be applied.


Once the obtained clustering is determined to be stable, the area A is partitioned into the obtained clusters. It follows from the method described above that each cluster is indicative of a particular trend of a presence indicator and/or a particular trend in a resource consumption, and may then be used to associate the respective group of pixels with a particular characterization such as, for instance, one of the following: business, residential, tourist, and so on.


The method of the present invention then provides for allocating available resources (for example electricity, gas, water, data network bandwidth, etc.) to an identified cluster, on the basis of the trends of a presence indicator and/or of a resource consumption associated with the cluster.


The method also provides for providing a service in the cluster on the basis of the allocated resources. The service may relate to the supply of, e.g., water, gas, electricity, waste collection, data network, telephone network, etc.


Thanks to the method of the present invention, it is possible to better understand the social and human dynamics of a certain area and the changes that take place in its context (e.g. an increase or decrease in the resident population), to profile customers based on consumption data, to identify their habits, and therefore to provide improved or additional services (for instance, greater telephone coverage at certain times of the year, or greater quantities of water at certain hours or days of the week) or more suitable commercial offers.


Examples of application of the method of the present invention are:

    • statistical estimate of the presence of users and their activity;
    • evaluation of the correct required capacity and consequent improvement of the transport or telephone networks, supply of services (e.g. water, gas, electricity, waste collection, data network, telephone network);
    • estimate of future traffic flows;
    • Geo-Market Measurement: for example, on the basis of detectable presence data (e.g. people counters, video cameras, telephone data) it is possible to identify optimal opening times for shops/services;
    • analysis of mobility patterns and footfall (counting of pedestrian flows);
    • relevance of presence in specific events or “typical days”;
    • public security;
    • monitoring of group dynamics for the development of commercial activities and their opening;
    • sentiment analysis through social data;
    • segmentation of customers based on spending behaviour;
    • social activities.


The process described above and the obtained clustering allow allocating the available resources in an effective way. This is of great importance when such resources are limited. Indeed, available resources can be configured and concentrated in the clusters where they are most needed, on the basis of the recognized trends in the resource consumption and/or in a presence indicator, as described above. This allows optimizing the resource allocation without any wastage thereof. Moreover, it allows providing services based on the allocated resources, which are hence tailored to the actual needs of the users of a given area.

Claims
  • 1. A method for allocating resources in a geographic area, the method comprising: a) subdividing the geographic area into a number of pixels; b) selecting an observation period, and subdividing the observation period into a number of time sub-intervals; c) acquiring a data set associated with the pixels during the observation period, the data set comprising, for each pixel, a set of observations related to presence of people in the geographic area, wherein each observation is derived from an estimate or a measurement of a resource consumption or a presence indicator; d) processing the data set to identify one or more typical time sub-intervals during the observation period; e) on the basis of a sub-set of data of the data set, the sub-set of data being associated with the identified one or more typical time sub-intervals of the observation period, applying a clustering algorithm to partition the pixels into a number of clusters, each cluster comprising a respective group of pixels associated with similar trends in the resource consumption and/or in the presence indicator; and f) allocating resources to a cluster of the number of clusters on the basis of the trends.
  • 2. The method according to claim 1, wherein the processing the data set comprises checking whether each observation of the data set has a value higher than a first acceptability threshold, or lower than a second acceptability threshold, and determining an acceptable observation as an observation having a value lower than or equal to the first acceptability threshold, and higher than or equal to the second acceptability threshold.
  • 3. The method according to claim 2, wherein the processing the data set comprises checking whether a number of acceptable observations does not differ from a number of observations of the acquired data set by more than a predefined tolerance, ε, the number of observations of the acquired data set being equal to N×|T|×M, where N is the number of observations within a time sub-interval, M is the number of pixels, and |T| is a number of sub-intervals of the observation period, T.
  • 4. The method according to claim 3, wherein the processing the data set comprises: checking, for each time sub-interval, whether the number of observations of the corresponding time sub-interval is higher than or equal to an expected value corresponding to the number of observations of the acquired data set for a predefined sub-interval, plus a further predefined tolerance, where the number of observations of the acquired data set for the predefined sub-interval is equal to N×M and the further predefined tolerance is equal to ε/|T|; in the negative, identifying one or more pixels associated with a number of observations which is smaller than an expected value of observations; in case a pixel is identified which, during the corresponding time sub-interval, is associated with a number of acceptable observations smaller than the expected value of observations, checking whether a number of missing observations is higher than a predefined threshold or whether observations are missing for a time period, within the corresponding time sub-interval, longer than a predefined number of hours; and, in the affirmative, discarding from the data set the observations related to the identified pixel and associated with the corresponding time sub-interval.
  • 5. The method according to claim 4, wherein the processing the data set comprises, for each time sub-interval, checking whether a number of discarded pixels is higher than a further predetermined threshold and, in the affirmative, discarding the observations associated with the corresponding time sub-interval from the data set.
  • 6. The method according to claim 3, wherein the processing the data set comprises identifying pixels where (xmax(i)−xmin(i))≤Slow, where xmax(i) is the maximum value of the observations associated with an i-th pixel over a considered time period, xmin(i) is the minimum value of the observations associated with the i-th pixel over the considered time period, and Slow is a pre-defined threshold, wherein the considered time period comprises a number of consecutive time sub-intervals, and discarding from the data set the observations related to the identified pixels.
  • 7. The method according to claim 1, wherein step e) comprises computing a normalized value, xNorm(i, t), for each value of an observation, x(i, t), within the sub-set of data associated with an i-th pixel, wherein:
  • 8. The method according to claim 7, wherein step e) comprises: applying a functional data analysis technique to transform the normalized values into a time variable xNorm(i, t) by rewriting them as a linear combination of a set of basis functions as:
  • 9. The method according to claim 8, wherein the number, k′, of principal components is 3.
  • 10. The method according to claim 8, wherein step e) comprises applying a centroid-based clustering algorithm to the matrix, Y.
  • 11. The method according to claim 9, wherein the clustering algorithm is a fuzzy k-means algorithm.
  • 12. The method according to claim 9, wherein step e) comprises applying a method for checking an optimal number of clusters, the method being an Elbow method or a silhouette method.
  • 13. The method according to claim 9, wherein step e) comprises checking stability of the clustering by: determining a stability matrix wherein an entry (i, j) in the stability matrix corresponds to the number of pixels belonging to an i-th cluster of the clustering in a first day which have been assigned to a j-th cluster of the clustering in a second day; determining a number of unstable pixels, wherein each unstable pixel is a pixel that has been associated with different clusters over the first day and the second day; comparing the number of unstable pixels to a threshold; and determining that the clustering is unstable if the number of unstable pixels is higher than the threshold.
  • 14. A non-transitory computer readable medium for storing a computer program comprising computer-executable instructions that, when the program is executed on a computer, perform the method of claim 1.
  • 15. A system for allocating resources in a geographic area, the system comprising: a computation engine configured to: subdivide the geographic area into a number of pixels; select an observation period, and subdivide the observation period into a number of time sub-intervals; acquire a data set associated with the pixels during the observation period, the data set comprising, for each pixel, a set of observations related to presence of people in the geographic area, wherein each observation is derived from an estimate or a measurement of a resource consumption or a presence indicator; process the data set to identify one or more typical time sub-intervals during the observation period; on the basis of a sub-set of data of the data set, the sub-set of data being associated with the identified one or more typical time sub-intervals of the observation period, apply a clustering algorithm to partition the pixels into a number of clusters, each cluster comprising a respective group of pixels associated with similar trends in the resource consumption and/or in the presence indicator; and allocate resources to a cluster of the number of clusters on the basis of the trends; a repository configured to store the data set; and a user interface configured to receive inputs from, and to provide outputs to, a user of the system.
  • 16. A method for providing a service in a geographic area, the method comprising: a) subdividing the geographic area into a number of pixels; b) selecting an observation period, and subdividing the observation period into a number of time sub-intervals; c) acquiring a data set associated with the pixels during the observation period, the data set comprising, for each pixel, a set of observations related to presence of people in the geographic area, wherein each observation is derived from an estimate or a measurement of a resource consumption or a presence indicator; d) processing the data set to identify one or more typical time sub-intervals during the observation period; e) on the basis of a sub-set of data of the data set, the sub-set of data being associated with the identified one or more typical time sub-intervals of the observation period, applying a clustering algorithm to partition the pixels into a number of clusters, each cluster comprising a respective group of pixels associated with similar trends in the resource consumption and/or in the presence indicator; f) allocating resources to a cluster of the number of clusters on the basis of the trends; and g) providing the service in the cluster on the basis of the allocated resources.
Priority Claims (1)
Number Date Country Kind
102020000025489 Oct 2020 IT national
PCT Information
Filing Document Filing Date Country Kind
PCT/EP2021/079618 10/26/2021 WO