This application claims priority under 35 U.S.C. Section 119 of Japanese Patent Application No. 2023-178844 filed Oct. 17, 2023, entitled “INFORMATION PROCESSING APPARATUS, COMMODITY ORDER SYSTEM, CONTROL METHOD FOR INFORMATION PROCESSING APPARATUS, AND PROGRAM”. The disclosure of the above application is incorporated herein by reference.
The present invention relates to an information processing apparatus for performing a process related to an order of a commodity, a commodity order system, a control method for the information processing apparatus, and a non-transitory tangible storage medium storing a program that causes the information processing apparatus to execute a predetermined function.
At present, commodity order apparatuses used for ordering commodities are installed in stores. For example, as an apparatus of this kind, a ticket vending machine is installed in a store. A customer operates the ticket vending machine to select a commodity that the customer desires, and performs a payment process. Accordingly, an exchange ticket for the commodity is issued from the ticket vending machine. An apparatus of this kind is desired to allow the customer to select a commodity smoothly.
Japanese Laid-Open Patent Publication No. 2012-022589 describes a commodity selection support method in which the line of sight of a customer is detected to support selection of a commodity. In this method, a screen in which a plurality of images of commodities in a random arrangement are scrolled is displayed on a display. When the customer approaches this screen and gazes at an image of a specific commodity, an image of commodities in a gazed region including the gazed point is displayed in a region at the center of the screen. Then, when the customer continues to gaze at an image of a commodity in this region, the image of this commodity is highlighted. Accordingly, the customer can smoothly perform selection of the commodity.
However, in the above commodity selection support method, after the customer has faced the display, the commodity in which the customer is interested is determined from the line of sight of the customer, and then the display screen is switched. Therefore, it takes time from when the customer faces the display until the commodity that is the target of the customer's gaze is displayed, and during this time, the customer has to wait to select the commodity.
A first aspect of the present invention relates to an information processing apparatus to be used in a store. In the store, a plurality of kinds of menu media are disposed so as to be able to be viewed by a customer in a movement path of the customer up to a facing position where the customer faces a display configured to display a selection candidate for a commodity. The information processing apparatus includes a controller. The controller executes: a first process for specifying the menu medium in which the customer has shown interest, from a captured image of the customer in the movement path; and a second process for causing the display to display, at least at an arrival timing when the customer has arrived at the facing position, a display screen such that a commodity related to the menu medium specified by the first process with respect to the customer is prioritized as the selection candidate.
In the information processing apparatus according to the present aspect, a commodity related to the menu medium in which the customer has shown interest in the movement path is displayed on the display screen in a prioritized manner as a selection candidate at least at the timing when the customer has arrived at the facing position. Therefore, the commodity in which the customer has shown interest can be smoothly and quickly displayed as a selection candidate. Accordingly, convenience in commodity selection for the customer can be enhanced.
A second aspect of the present invention relates to a commodity order system configured to receive an order of a commodity by using a commodity order apparatus configured to display a selection candidate for a commodity. The commodity order system is used in a store, and in the store, a plurality of kinds of menu media are disposed so as to be able to be viewed by a customer in a movement path of the customer up to a facing position where the customer faces the commodity order apparatus. The commodity order system includes a controller. The controller executes: a first process for specifying the menu medium in which the customer has shown interest, from a captured image of the customer in the movement path; and a second process for causing a display of the commodity order apparatus to display, at least at an arrival timing when the customer has arrived at the facing position, a display screen such that a commodity related to the menu medium specified by the first process with respect to the customer is prioritized as the selection candidate.
A third aspect of the present invention relates to a control method for an information processing apparatus to be used in a store. In the store, a plurality of kinds of menu media are disposed so as to be able to be viewed by a customer in a movement path of the customer up to a facing position where the customer faces a display configured to display a selection candidate for a commodity. The control method according to this aspect includes: specifying the menu medium in which the customer has shown interest, from a captured image of the customer in the movement path; and executing a process for causing the display to display, at least at an arrival timing when the customer has arrived at the facing position, a display screen such that a commodity related to the menu medium specified with respect to the customer is prioritized as the selection candidate.
A fourth aspect of the present invention relates to a non-transitory tangible storage medium storing a program for causing an information processing apparatus to be used in a store to execute a predetermined function. In the store, a plurality of kinds of menu media are disposed so as to be able to be viewed by a customer in a movement path of the customer up to a facing position where the customer faces a display configured to display a selection candidate for a commodity. The program according to this aspect causes the information processing apparatus to execute: a function for specifying the menu medium in which the customer has shown interest, from a captured image of the customer in the movement path; and a function for causing the display to display, at least at an arrival timing when the customer has arrived at the facing position, a display screen such that a commodity related to the menu medium specified with respect to the customer is prioritized as the selection candidate.
The effects and the significance of the present invention will be further clarified by the description of the embodiments below. However, the embodiments below are merely examples for implementing the present invention. The present invention is not limited to the description of the embodiments below in any way.
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
The commodity order system 1 includes a ticket vending machine 10, cameras 21, 22, and an information processing apparatus 30. The commodity order system 1 is used in a store such as a restaurant.
The ticket vending machine 10 is installed in the store. A customer having visited the store purchases a ticket for a commodity that the customer desires, from the ticket vending machine 10. The ticket vending machine 10 is a commodity order apparatus that displays a selection candidate for a commodity. In the store, in a movement path of the customer up to a facing position where the customer faces the ticket vending machine 10, a plurality of kinds of menu media 40 are disposed so as to be able to be viewed by the customer. Here, a plurality of menu media 40 are disposed in the store.
In the respective menu media 40, figures of commodities different from each other and brief information (price, catch phrase, etc.) regarding the commodities are displayed. Each menu medium 40 is a poster, for example. The menu medium 40 may be another medium such as a digital signage or a large screen.
The camera 21 captures an image of the customer who moves toward the ticket vending machine 10 in the store. The camera 21 is used in order to determine whether or not the customer has viewed the menu media 40, and is disposed so as to be able to capture an image of the face of the customer viewing the menu media 40. Preferably, the camera 21 is disposed in the surroundings of the arrangement region of the menu media 40, such as at a position immediately above the arrangement region.
The camera 22 captures an image of a predetermined region to the front of the ticket vending machine 10, from behind the ticket vending machine 10. A captured image from the camera 22 is used in collation between the customer having viewed the menu media 40 in the movement path and the customer having reached the facing position. When this collation can be performed only by the camera 21 according to the relationship between the arrangement position of the menu media 40 and the imaging field of view of the camera 21, the camera 22 may be omitted.
The information processing apparatus 30 is communicably connected to the ticket vending machine 10 and the camera 21. The information processing apparatus 30 is installed in an office room or the like of the store where the ticket vending machine 10 is installed. The information processing apparatus 30 may be installed in a facility other than the store. In this case, the information processing apparatus 30 performs communication with the ticket vending machine 10 and the camera 21 via a public telecommunication network such as the Internet.
The information processing apparatus 30 performs a process for adjusting a screen (commodity selection screen) that is displayed on the ticket vending machine 10, based on a captured image from the camera 21. The information processing apparatus 30 is implemented by a server computer or a personal computer, for example. When a plurality of sets of the ticket vending machine 10 and the camera 22 are installed in the store, the information processing apparatus 30 may be communicably connected to the ticket vending machine 10 of each set. In this case, the information processing apparatus 30 performs the above-described process for each set of the ticket vending machine 10 and the camera 22.
The ticket vending machine 10 is communicably connected to the camera 22. The ticket vending machine 10 has a housing having an approximately cube shape forming the outer shape of the apparatus. In an upper portion of the front face of the ticket vending machine 10, an operation/display unit 11 of a touch panel-type is disposed. As described later with reference to
In a center portion of the front face of the ticket vending machine 10, a banknote inlet/outlet 12 through which banknotes are deposited and dispensed, a coin inlet 13 through which coins are deposited, a coin outlet 14 through which coins are dispensed, and a ticket issuing port 15 through which a ticket is issued are disposed. Furthermore, a human detection sensor is disposed in the ticket vending machine 10. The human detection sensor detects that the customer has reached the vicinity of the facing position where the customer faces the operation/display unit 11 of the ticket vending machine 10.
Here, the facing position is the position where the customer faces the operation/display unit 11 (a display configured to display a selection candidate for a commodity) in order to perform an order for a commodity. That is, the facing position is the position for performing selection of a commodity by using a display screen.
In Embodiment 1, the position where the customer faces the operation/display unit 11 for causing ticket issuing for a commodity corresponds to the facing position. In other words, the position where the customer operates (the position where the customer can operate) the operation/display unit 11 is the facing position. The facing position need not necessarily be directly in front of the operation/display unit 11, and may be slightly shifted to the left or right from the position directly in front of the operation/display unit 11. With respect to the ticket vending machine 10, the facing position is defined as a position separated to the front by a predetermined distance (e.g., several tens of cm) from the operation/display unit 11.
The farthest position of the detection range of the human detection sensor is set such that the distance up to the farthest position is at least equal to the distance up to this facing position. The farthest position of the detection range may be set to a position that is farther, by a predetermined distance (e.g., several tens of cm), than the facing position. The vicinity of the facing position described above is the range between this farthest position and the facing position.
The fact that the customer has reached the vicinity of the facing position may be determined by using the captured image from the camera 22. In this case, for example, the ticket vending machine 10 extracts a face region from the captured image from the camera 22, and when the size (e.g., area, vertical width, etc.) of the extracted face region becomes equal to or larger than a predetermined threshold, the ticket vending machine 10 determines that the customer has reached the vicinity of the facing position. In this configuration, the human detection sensor can be omitted from the ticket vending machine 10.
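The arrival determination based on the size of the face region can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the face detector itself is abstracted away, and the function names and the threshold value are assumptions chosen for the example.

```python
# Sketch of the arrival check: the customer is judged to be near the facing
# position when the detected face region in the captured image from camera 22
# grows beyond a predetermined threshold. All names and the threshold value
# here are illustrative assumptions.

FACE_AREA_THRESHOLD = 40_000  # px^2; would be tuned to the camera placement

def face_area(bbox):
    """Area of a face bounding box given as (x, y, width, height)."""
    _, _, w, h = bbox
    return w * h

def customer_at_facing_position(face_bbox, threshold=FACE_AREA_THRESHOLD):
    """Return True when the extracted face region is large enough, i.e.
    the customer has reached the vicinity of the facing position."""
    if face_bbox is None:          # no face found in this frame
        return False
    return face_area(face_bbox) >= threshold

# A distant face (small region) does not trigger arrival; a close one does.
print(customer_at_facing_position((500, 200, 80, 100)))   # False
print(customer_at_facing_position((300, 100, 220, 260)))  # True
```

In practice the threshold could equivalently be placed on the vertical width of the face region, as the text notes.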
When the human detection sensor has detected that the customer has reached the vicinity of the facing position, a commodity selection screen is displayed on the operation/display unit 11. Here, selection items for foods and beverages that the store can provide are displayed. The customer touches a desired selection item among a plurality of selection items displayed on the operation/display unit 11, thereby selecting a commodity to be purchased.
Then, the customer puts banknotes or coins corresponding to the amount of money necessary for purchase of the commodity, into the banknote inlet/outlet 12 or the coin inlet 13. Accordingly, a ticket corresponding to the selection item is issued from the ticket issuing port 15. When there is change, banknotes or coins corresponding to the change are dispensed from the banknote inlet/outlet 12 or the coin outlet 14. Then, one transaction process ends.
The ticket vending machine 10 is installed on the depth side in the direction in which a customer 3 moves straight from an entrance 2a of the store 2. The ticket vending machine 10 is installed such that the front thereof is oriented in the direction of the entrance 2a. The range from the entrance 2a up to before the ticket vending machine 10 serves as a movement path 2b1 in which the customer 3 moves toward the ticket vending machine 10. The left side of the movement path 2b1 is an eating and drinking space 2c. In the eating and drinking space 2c, a plurality of tables and a plurality of chairs are disposed. The movement path 2b1 is defined between the eating and drinking space 2c and the inner wall of the store 2. The customer 3 moves straight along the movement path 2b1 and reaches a facing position P1. After ticket issuing has been performed, the customer 3 advances from the facing position P1 to the eating and drinking space 2c.
On the depth side of the eating and drinking space 2c, the plurality of menu media 40 are arranged. The plurality of menu media 40 are able to be viewed by the customer 3 in the movement path 2b1. The camera 21 captures an image of the vicinity of the movement path 2b1 in a diagonal direction from a position immediately above the region where the plurality of menu media 40 are disposed. The range between the broken lines extending from the camera 21 is the imaging field of view of the camera 21. The boundary on the ticket vending machine 10 side of the imaging field of view of the camera 21 is on the entrance 2a side at a predetermined distance from the facing position P1. The other camera 22 is disposed approximately at the center on the depth side of the ticket vending machine 10, and captures an image of the vicinity of the facing position P1.
In the example in
The ticket vending machine 10 includes a controller 101, a memory 102, a banknote handling unit 103, a coin handling unit 104, a ticket issuing processing unit 105, the display 106, the touch sensor 107, a speaker 108, and a communication unit 109.
The controller 101 includes an arithmetic processing circuit such as a CPU (Central Processing Unit), and controls components according to a program stored in the memory 102.
The memory 102 includes a storage medium such as a ROM (Read Only Memory), a RAM (Random Access Memory), etc., and stores a program executed by the controller 101 and various kinds of data. The memory 102 is used as a work region when the controller 101 performs control. The memory 102 stores a face recognition engine for acquiring face information of the customer 3 from the captured image from the camera 22.
The banknote handling unit 103 includes a banknote storage unit for storing banknotes of each denomination, a transport unit that transports banknotes, and a denomination recognition unit that recognizes the denomination of each banknote that is transported, and the banknote handling unit 103 allows transport of banknotes between the banknote storage unit and the banknote inlet/outlet 12 (see
The ticket issuing processing unit 105 includes a band body generation unit that generates a paper band body, and a printing unit that performs printing on the band body, and the ticket issuing processing unit 105 sends out a ticket obtained by printing a name or the like of a food or a beverage by the printing unit on the band body, to the ticket issuing port 15. The display 106 and the touch sensor 107 form the operation/display unit 11 in
The information processing apparatus 30 includes a controller 301, a memory 302, and a communication unit 303.
The controller 301 includes an arithmetic processing circuit such as a CPU, and controls components according to a program stored in the memory 302. The memory 302 includes a storage medium such as a ROM, a RAM, a hard disk, etc., and stores a program executed by the controller 301 and various kinds of data. The memory 302 is used as a work region when the controller 301 performs control.
The communication unit 303 performs communication with the camera 21 and the communication unit 109 under control by the controller 301. The communication unit 303 is connected to an external communication network 60 such as the Internet, and performs communication with an apparatus outside the store 2 under control by the controller 301.
In Embodiment 1, functions of a menu specifying unit 301a and a display adjustment unit 301b are provided to the controller 301 by a program stored in the memory 302.
The menu specifying unit 301a specifies a menu medium 40 in which the customer 3 has shown interest, from the captured image from the camera 21, i.e., from the captured image of the customer 3 in the movement path 2b1. The display adjustment unit 301b executes a process for causing the operation/display unit 11 to display, at least at the arrival timing when the customer 3 has arrived at the facing position P1, a display screen (commodity selection screen) such that a commodity related to the menu medium 40 specified by the menu specifying unit 301a with respect to the customer is prioritized as a selection candidate.
In Embodiment 1, direct control of the adjustment of the display screen with respect to the operation/display unit 11 is performed by the controller 101 of the ticket vending machine 10, and information to serve as a trigger or a condition for this control is transmitted from the information processing apparatus 30 to the ticket vending machine 10, by the function of the display adjustment unit 301b. That is, the display adjustment unit 301b executes indirect control for adjustment of the above display screen with respect to the operation/display unit 11.
In addition, the memory 302 stores a face recognition engine that allows the controller 301 to extract a face image from the captured image from the camera 21 and to acquire face information. The acquired face information is information on a feature amount of the face. The face information may be the extracted face image itself.
The memory 302 also stores a viewpoint estimation engine that allows the controller 301 to estimate the direction in which the customer 3 is looking, from the captured image from the camera 21, and to estimate, from the estimated direction, whether or not the customer 3 is viewing any of the menu media 40. These engines are used by the menu specifying unit 301a.
Furthermore, the memory 302 stores an attribute estimation engine that allows the controller 301 to estimate the attribute of each customer such as the age and the sex of the customer, from each piece of face information extracted by the face recognition engine.
The viewpoint estimation engine described above may be a machine learning model, for example. As the machine learning, machine learning using a neural network is applied, for example, a neural network according to deep learning in which neurons are combined in multiple stages. However, the machine learning that is applied is not limited thereto, and another machine learning method such as a support vector machine or a decision tree may be used as the machine learning model.
The state data set is a collection of a large number of pieces of state data. The state data is key point data indicating the positions of a plurality of human body portions, acquired from a person on a captured image acquired by the camera 21. The key point data is acquired by applying a skeleton estimation engine to an image range of the person on the captured image.
In generation of the state data set, a two-dimensional region 70, which is a virtual plane, is set. The two-dimensional region 70 is set so as to have a predetermined positional relationship with respect to the imaging direction (the central axis direction of the imaging field of view) of the camera 21. Here, the two-dimensional region 70 is set so as to be perpendicular to the imaging direction of the camera 21, and the center of the two-dimensional region 70 matches the central axis of the imaging field of view of the camera 21. However, the configuration is not limited thereto; for example, an intermediate position on the upper side of the two-dimensional region 70 may match the central axis of the imaging field of view of the camera 21.
The two-dimensional region 70 has a predetermined dimension in each of an X-axis direction (the lateral direction of the imaging field of view) and a Z-axis direction (the vertical direction of the imaging field of view). Here, the shape of the two-dimensional region 70 is a rectangle whose long side is parallel to the X-axis direction. However, the setting method for the two-dimensional region 70 is not limited thereto. For example, the shape of the two-dimensional region 70 may be a square.
The two-dimensional region 70 is divided into a grid shape, whereby a plurality of reference regions 71 are set. Here, the two-dimensional region 70 is divided into a grid shape at the same pitch in the X-axis direction and the Z-axis direction. However, the method for dividing the two-dimensional region 70 into a grid shape is not limited thereto. For example, the division pitch of the two-dimensional region 70 in the X-axis direction and the division pitch of the two-dimensional region 70 in the Z-axis direction may be different from each other.
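The grid division of the two-dimensional region 70 into reference regions 71 can be sketched as follows. The dimensions, the number of rows and columns, and the (x, z) coordinate convention are illustrative assumptions, not values from the disclosure.

```python
# Minimal sketch of dividing the two-dimensional region 70 into grid-shaped
# reference regions 71, and of mapping a point on the plane to a region
# index. Dimensions and pitch are assumptions for illustration; here the
# pitch is the same in the X-axis and Z-axis directions (0.5 x 0.5 cells).

REGION_WIDTH = 4.0    # extent in the X-axis direction (arbitrary units)
REGION_HEIGHT = 2.0   # extent in the Z-axis direction
COLS, ROWS = 8, 4     # 8 x 4 = 32 reference regions

def reference_region_index(x, z):
    """Map a point (x, z) on region 70 (origin at top-left) to the index
    of the reference region 71 that contains it, counted row by row."""
    col = min(int(x / (REGION_WIDTH / COLS)), COLS - 1)
    row = min(int(z / (REGION_HEIGHT / ROWS)), ROWS - 1)
    return row * COLS + col

print(reference_region_index(0.1, 0.1))   # top-left cell -> 0
print(reference_region_index(3.9, 1.9))   # bottom-right cell -> 31
```

With a different pitch per axis, as the text allows, only the cell width and height in the two `int(...)` expressions would change.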
The state data set, which is teaching data, is a data set obtained by collecting, with respect to the state (posture) of a plurality of persons, for each reference region 71, the state data regarding the state of each person on a captured image when the person has directed his/her line of sight toward the reference region 71.
In this case, six pieces of state data are acquired with respect to the reference region 71a. That is, in the state of the person 3a in each of
Here, the key point data is acquired as a coordinate position (pixel position) of each human body portion (key point) below on a captured image, for example.
For acquisition of the key point data, the following method can be used. That is, in an environment in which the relationship between the camera 21 and the two-dimensional region 70 is set to be the above relationship (e.g., the relationship in which the two-dimensional region 70 is set so as to be perpendicular to the imaging direction of the camera 21, and the center of the two-dimensional region 70 matches the central axis of the imaging field of view of the camera 21), at various positions in the imaging field of view of the camera 21, captured images when the person 3a has viewed the reference region 71a in various postures are actually acquired, each acquired captured image is applied to the skeleton estimation engine, and the coordinate position (pixel position) of each key point in the captured image is extracted.
Alternatively, the key point data when a person views the reference region 71a may be acquired by using an extraction engine capable of extracting the key point data in a virtual space. Various extraction engines of this kind have already been developed and are available. The virtual space includes a virtual human body model to which the above key points are affixed in advance. The operator can arbitrarily change the position and the posture of this human body model. In this virtual space, the imaging field of view of the camera 21 and the two-dimensional region 70 are set under the same relationship as above. The operator can direct the line of sight of the human body model to each reference region 71 of the two-dimensional region 70. The operator can cause an extraction engine to extract the coordinate positions, on the captured image from the camera 21, of the key points of the human body model at that time.
In this case, as shown in
In
In
A plurality of input items respectively corresponding to a plurality of portions for which the key point data is acquired are assigned to the input to the machine learning model. A plurality of output items respectively corresponding to the plurality of reference regions 71 of the two-dimensional region 70 are assigned to the output of the machine learning model.
While learning is performed with respect to the machine learning model, the state data set generated as above is sequentially inputted to the plurality of input items of the machine learning model. A plurality of pieces of key point data included in each piece of state data are inputted to the input items of the corresponding portions. When one piece of state data is to be inputted, 100% is set to the output item corresponding to the reference region 71 (the reference region 71 viewed by the person 3a) for which this state data has been acquired, and 0% is set to the other output items. In this manner, learning is performed with respect to all of the state data in the state data set.
Through this learning, when the key point data being the state data has been inputted, the machine learning model outputs, from each output item, a probability (0 to 100%) that the line of sight has been directed to the output item. Therefore, when, from the captured image from the camera 21, the key point data (state data) of the customer 3 on the captured image is acquired by means of the skeleton estimation engine, and the acquired key point data is inputted to each corresponding input item, the probability of the output item, among the plurality of output items, corresponding to the reference region 71 viewed by the customer 3 becomes higher than that of the other output items. Therefore, the reference region 71 corresponding to an output item having the highest probability can be acquired as the reference region 71 to which the customer 3 has directed his/her line of sight.
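The learning targets and the inference step described above can be sketched as follows. This is an illustration of the input/output contract only: the model is abstracted as any callable mapping key point data to one probability per reference region, and all concrete numbers and names are assumptions.

```python
# Sketch of the training targets and inference described in the text: during
# learning, the output item for the reference region actually viewed is set
# to 100% and all others to 0% (a one-hot target); at inference, the gazed
# region is the output item with the highest probability. The machine
# learning model itself is a stand-in here, not a real trained network.

NUM_REGIONS = 32  # e.g. an 8 x 4 grid of reference regions 71 (assumption)

def one_hot_target(viewed_region, num_regions=NUM_REGIONS):
    """Training target: 1.0 (100%) for the viewed region, 0.0 elsewhere."""
    return [1.0 if i == viewed_region else 0.0 for i in range(num_regions)]

def gazed_region(model, keypoints):
    """Run the model on key point data and return the reference region
    whose output item has the highest probability."""
    probabilities = model(keypoints)
    return max(range(len(probabilities)), key=probabilities.__getitem__)

# Toy stand-in for a trained model: always most confident about region 5.
def toy_model(keypoints):
    return [0.9 if i == 5 else 0.1 / (NUM_REGIONS - 1)
            for i in range(NUM_REGIONS)]

print(one_hot_target(3, 6))          # [0.0, 0.0, 0.0, 1.0, 0.0, 0.0]
print(gazed_region(toy_model, []))   # 5
```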
In the information processing apparatus 30, the region of each menu medium 40 disposed in the store 2 is set in advance on the two-dimensional region 70, based on the positional relationship between the camera 21 and the menu medium 40. In the example shown in
This setting is performed by a manager of the store 2 in accordance with completion of arrangement of the menu media 40. At this time, the manager associates the region of each menu medium 40 on the two-dimensional region 70 and the menu medium 40 (identification information of the menu medium 40) with each other. This association is stored into the memory 302 of the information processing apparatus 30.
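The association set up by the manager between reference regions 71 and menu media 40 can be sketched as a simple lookup. The region indices and menu medium identifiers below are illustrative assumptions.

```python
# Sketch of the stored association: each menu medium 40 occupies a set of
# reference regions 71 on the two-dimensional region 70, and the estimated
# gaze region is translated into a menu medium ID. All IDs and region
# indices are illustrative assumptions.

MENU_REGION_MAP = {
    "menu_A": {0, 1, 8, 9},      # regions covered by the poster for menu A
    "menu_B": {3, 4, 11, 12},
    "menu_C": {6, 7, 14, 15},
}

def menu_for_region(region_index, region_map=MENU_REGION_MAP):
    """Return the menu medium ID whose area contains the gazed reference
    region, or None when the gaze fell outside every menu medium."""
    for menu_id, regions in region_map.items():
        if region_index in regions:
            return menu_id
    return None

print(menu_for_region(9))    # "menu_A"
print(menu_for_region(5))    # None: gaze fell between the posters
```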
The memory 302 in
The process for estimating that the customer 3 has viewed a menu medium 40 is not limited to the above example. For example, in the above, the viewpoint estimation engine used by the menu specifying unit 301a is a machine learning model based on a neural network. However, a machine learning model other than this may be used as the viewpoint estimation engine. In the above, as the information to be used in the viewpoint estimation, the key point data based on skeleton estimation is used. However, the line of sight may be estimated from a captured image, and whether or not this line of sight is directed to any of the menu media 40 may be determined. Various methods can be used as the process of determining whether or not the customer 3 has viewed a menu medium 40, from the captured image from the camera 21.
When having acquired the face information of the customer 3 from the captured image from the camera 21, the menu specifying unit 301a generates gaze status association information indicating the status where the customer 3 has viewed each menu medium 40 in the movement path 2b1, and causes the memory 302 to store the gaze status association information.
The gaze status association information is configured such that a serial number, face information, and the number of times and time (cumulative time) for which each menu medium 40 has been viewed are associated with each other. “Serial number” is a continuous number that is provided when face information is newly registered in the gaze status association information. “Face information” is the face information of the customer extracted by the face recognition engine described above. “The number of times” is the number of times a corresponding customer 3 has viewed a menu medium 40 (in
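One way to hold the gaze status association information, with a serial number and face information associated with per-menu counts and cumulative times, is sketched below. The field names and the use of a dataclass are assumptions for illustration.

```python
# Sketch of one record of the gaze status association information: a serial
# number and face information are associated with the number of times and
# the cumulative time each menu medium 40 has been viewed. Field names are
# illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class GazeRecord:
    serial_number: int
    face_info: bytes                             # feature amount (or image)
    views: dict = field(default_factory=dict)    # menu_id -> gaze count
    seconds: dict = field(default_factory=dict)  # menu_id -> cumulative time

    def record_gaze(self, menu_id, duration):
        """Count one gaze at menu_id and add its duration."""
        self.views[menu_id] = self.views.get(menu_id, 0) + 1
        self.seconds[menu_id] = self.seconds.get(menu_id, 0.0) + duration

record = GazeRecord(serial_number=1, face_info=b"...feature...")
record.record_gaze("menu_A", 2.5)
record.record_gaze("menu_A", 1.0)
print(record.views["menu_A"], record.seconds["menu_A"])  # 2 3.5
```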
For convenience,
As shown in
The commodity association information in
The classification association information in
When the store 2 is a restaurant that handles various commodities, the first classification defines whether the commodity is a main dish or a side dish, and the second classification defines whether the commodity is a food or a beverage, for example. The third classification defines which of noodles, a rice meal (rice, rice bowl dish), a set meal, a salad, a dessert, and a beverage the commodity is. Furthermore, a sub-classification can be set for the third classification. For example, when the third classification is noodles, udon, ramen, pasta, and the like are set as sub-classifications thereof. This association information is registered in the ticket vending machine 10 at the time of opening of the store 2, and then, is updated in accordance with change in the menu.
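The hierarchy of first, second, and third classifications with sub-classifications can be sketched as a nested mapping. The commodity names and labels below are illustrative assumptions based on the examples in the text, not a registered menu.

```python
# Sketch of the classification association information: each commodity is
# associated with a first classification (main/side dish), a second
# classification (food/beverage), a third classification, and an optional
# sub-classification. Commodity names and labels are assumptions.

CLASSIFICATION = {
    "kitsune udon": {"first": "main dish", "second": "food",
                     "third": "noodles", "sub": "udon"},
    "soy ramen":    {"first": "main dish", "second": "food",
                     "third": "noodles", "sub": "ramen"},
    "green salad":  {"first": "side dish", "second": "food",
                     "third": "salad", "sub": None},
    "iced tea":     {"first": "side dish", "second": "beverage",
                     "third": "beverage", "sub": None},
}

def commodities_in_sub_class(sub):
    """All commodities whose sub-classification matches, sorted by name."""
    return sorted(name for name, c in CLASSIFICATION.items()
                  if c["sub"] == sub)

print(commodities_in_sub_class("udon"))   # ['kitsune udon']
```

A lookup like this is what would let the commodity selection screen group and prioritize commodities by classification when the menu changes.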
Registration of the commodity association information and the classification association information is performed by the manager using the operation/display unit 11 of the ticket vending machine 10. The manager performs this registration by setting the ticket vending machine 10 to a registration mode of these pieces of association information. This registration may be performed via the information processing apparatus 30. In this case, the manager operates the information processing apparatus 30 to input these pieces of association information. Upon completion of the input, these pieces of association information are transmitted from the information processing apparatus 30 to the ticket vending machine 10, and are stored into the memory 102 of the ticket vending machine 10.
The commodity association information and the classification association information are used when the ticket vending machine 10 displays a commodity selection screen. A display process of the commodity selection screen in the ticket vending machine 10 will be described later in detail with reference to
In the process in
When the controller 301 has newly detected the face of a customer 3 on the captured image from the camera 21 (S101: YES), the controller 301 acquires face information from this face, and newly registers the acquired face information into the gaze status association information in
The controller 301 determines whether or not the customer having the detected face is in the movement path 2b1 (S103). The determination in step S103 is YES while the target customer 3 is included in the captured image from the camera 21, and becomes NO when the customer 3 has passed through the imaging field of view of the camera 21 and has disappeared from the captured image. In a case where the imaging field of view of the camera 21 is large, NO may be determined in step S103 when the customer 3 has reached a position a predetermined distance from the facing position P1 on the entrance 2a side.
The controller 301 tracks the customer 3 on the captured image from the camera 21, to perform determination in step S103. When the customer 3 is in the movement path 2b1 (S103: YES), the controller 301 determines whether or not the customer 3 has viewed any of the menu media 40 (S104).
When the customer 3 has viewed any of the menu media 40 (S104: YES), the controller 301 accumulates a value in the detection item, in the gaze status association information in
Specifically, when the determination in step S104 becomes YES, the controller 301 adds 1 to “the number of times” corresponding to the viewed menu medium 40. At the same time, the controller 301 starts accumulating a time in “time” corresponding to the viewed menu medium 40, and ends this accumulation when the determination in step S104 becomes NO.
Then, while the customer 3 is in the movement path 2b1 (S103: YES), the controller 301 accumulates values in the detection item associated with the menu medium 40 viewed by the customer 3 (S104: YES, S105).
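The accumulation in steps S103 to S105 can be sketched as follows. This is a non-limiting illustration in Python, under the assumption that a gaze-detection result (the id of the viewed menu medium 40, or none) is obtained for each captured frame together with a timestamp; all identifiers are illustrative.

```python
class GazeStatus:
    """Per-customer gaze record (a sketch of steps S103-S105): the number
    of times and the cumulative time each menu medium has been viewed."""
    def __init__(self, menu_ids):
        self.count = {m: 0 for m in menu_ids}    # "the number of times"
        self.time = {m: 0.0 for m in menu_ids}   # "time" (cumulative)
        self._current = None   # menu medium currently being viewed
        self._started = 0.0    # timestamp when the current viewing began

    def update(self, viewed, timestamp):
        """Feed one gaze-detection result per captured frame.
        `viewed` is the id of the gazed menu medium, or None."""
        if viewed == self._current:
            return                               # gaze target unchanged
        if self._current is not None:            # S104 turned to NO
            self.time[self._current] += timestamp - self._started
        if viewed is not None:                   # S104 turned to YES
            self.count[viewed] += 1
            self._started = timestamp
        self._current = viewed
```

Feeding one call per frame keeps both detection items consistent even when the customer shifts directly from one menu medium to another.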
Then, when the customer 3 has passed through the movement path 2b1 (S103: NO), the controller 301 refers to the gaze status association information and specifies the menu medium 40 in which the customer 3 has shown interest (S106).
For example, the controller 301 specifies the menu medium 40 in which the customer 3 has shown the most interest, from the gaze status association information. In this case, the menu medium 40 having the longest “time” in the gaze status association information of the customer 3 is specified as the menu medium 40 in which the customer 3 has shown the most interest. In this specifying, when a plurality of menu media 40 share the longest “time”, the menu medium 40 having the larger “number of times” among them is specified as the menu medium 40 in which the customer 3 has shown the most interest. When “the number of times” is also the same, the plurality of menu media 40 are specified as the menu media 40 in which the customer 3 has shown the most interest.
Here, the menu medium 40 is specified with “time” prioritized over “the number of times”. However, the menu medium 40 in which the customer 3 has shown the most interest may be specified with “the number of times” prioritized. Together with the menu medium 40 in which the customer 3 has shown the most interest, the menu media 40 ranked down to a predetermined position (e.g., second) in descending order of shown interest may also be specified as menu media 40 in which the customer 3 has shown interest.
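The specifying rule of step S106, with “time” prioritized over “the number of times” and with ties yielding a plurality of menu media 40, can be sketched as follows. This is a non-limiting Python illustration; the data layout is an assumption.

```python
def specify_interest(gaze_status):
    """Sketch of step S106: specify the menu media in which the customer
    has shown the most interest. "time" is prioritized over "the number
    of times"; a list is returned because ties yield several media.
    `gaze_status` maps a menu medium id -> (time, number_of_times)."""
    if all(t == 0 and c == 0 for t, c in gaze_status.values()):
        return []                     # no menu medium was viewed at all
    longest = max(t for t, _ in gaze_status.values())
    # among media sharing the longest time, break the tie by the count
    tied = {m: c for m, (t, c) in gaze_status.items() if t == longest}
    most = max(tied.values())
    return [m for m, c in tied.items() if c == most]
```

An empty result corresponds to the case in which all detection items remain zero and no menu medium is transmitted as the object of interest.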
Then, when the menu medium 40 in which the customer 3 has shown interest has been specified, the controller 301 transmits, to the ticket vending machine 10, information on the specified menu medium 40 and the face information of the customer 3, together with the serial number associated with these (S107). These pieces of information are stored into the memory 102 in the ticket vending machine 10. Then, the controller 301 ends the process in
When the customer 3 has not viewed any of the menu media 40, the detection items of the menu media 40 associated with the customer 3 all indicate zero. In this case, the controller 301 determines, in step S106, that there is no menu medium 40 in which the customer 3 has shown interest, and in step S107, transmits information indicating this to the ticket vending machine 10, together with the face information and the serial number. In this case, the process in step S107 may be skipped.
The process in
The controller 101 monitors whether or not the customer 3 has reached the vicinity of the facing position P1 described above, through an output or the like from the human detection sensor (S201). When the customer 3 has reached the vicinity of the facing position P1 (S201: YES), the controller 101 acquires face information of the customer 3 from the captured image from the camera 22, and collates the acquired face information with the face information received from the information processing apparatus 30 in a most recent predetermined period through the process in step S107 in
Here, with respect to the face information acquired from the captured image from the camera 22, the target is the largest face image near the center among the face images included in the captured image. That is, the face information acquired in step S202 is the face information of the customer 3 at the facing position P1 (the customer 3 who operates the operation/display unit 11).
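The selection of the target face image described above can be sketched as follows. This is a non-limiting Python illustration; the scoring that combines face size with closeness to the image center is an assumption, since the embodiment only states that the largest face image near the center serves as the target.

```python
def facing_customer_face(faces, frame_w, frame_h):
    """Pick the face of the customer operating the machine: the largest
    face near the image center (a sketch; the weighting is an assumption).
    `faces` is a list of (x, y, w, h) bounding boxes."""
    cx, cy = frame_w / 2, frame_h / 2
    def score(box):
        x, y, w, h = box
        area = w * h
        # distance from the box center to the frame center (smaller is better)
        dist = ((x + w / 2 - cx) ** 2 + (y + h / 2 - cy) ** 2) ** 0.5
        return area / (1.0 + dist)
    return max(faces, key=score) if faces else None
```

A small distant face near an edge of the frame thereby loses to a large face near the center, which matches the intent of step S202.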
The controller 101 determines whether or not face information consistent with the face information acquired from the captured image from the camera 22 has been received from the information processing apparatus 30, and further determines whether or not information indicating the menu medium 40 in which the customer 3 has shown interest has been received from the information processing apparatus 30 together with this face information (S203). When the determination in step S203 is NO, the controller 101 sets the commodity selection screen to be displayed on the operation/display unit 11 to a default selection screen (S207).
One event in which the determination in step S203 becomes NO is, for example, a case where the face information of the customer 3 in the movement path 2b1 could not be acquired from the captured image from the camera 21.
Another event in which the determination in step S203 becomes NO is a case where, although the face information acquired from the captured image from the camera 22 is consistent with one piece of the above face information received from the information processing apparatus 30, information indicating a menu medium 40 has not been received from the information processing apparatus 30 together with this face information.
That is, as described above, in the gaze status association information, when zero is indicated in the detection items (the number of times, time) of all of the menu media 40 associated with this face information, the information processing apparatus 30 transmits information indicating that no menu medium 40 has been specified, together with this face information. In this case, since there is no menu medium 40 corresponding to this face information, the determination in step S203 becomes NO.
On the other hand, when the acquired face information matches one piece of the above face information received from the information processing apparatus 30, and information specifying the menu medium 40 in which the customer 3 has shown interest has been received together with this face information (S203: YES), the controller 101 specifies a commodity corresponding to this menu medium 40 from the association information in
The controller 101 causes the operation/display unit 11 to display the commodity selection screen set in step S205 or step S207 (S206). Then, the controller 101 ends the process in
Here, the commodity selection screen is composed of a plurality of layers of selection screens.
The selection screen 80 includes a message 81 that urges selection of a commodity type, buttons 82 each for selecting a commodity type, and illustrative images 83 of representative commodities included in respective commodity types. Each illustrative image 83 is disposed immediately below the button 82 of a corresponding commodity type. These commodity types correspond to the classification of the third classification in the classification association information in
As shown in
The selection screen 90 includes a message 91 that urges selection of a commodity, indications 92 each indicating a sub-classification (here, udon, ramen, pasta) of the commodity type, buttons 93 each for selecting a commodity, a button 94 for returning the screen to the previous layer, and a button 95 for determining the selection. The customer 3 touches the button 93 corresponding to the desired commodity in the displayed commodity group, and then touches the button 95. Accordingly, the screen is shifted to a screen for a payment process. If the customer 3 touches the button 94, the customer 3 can redo the commodity selection. Before the screen is shifted to the screen for the payment process, selection of another commodity may further be allowed.
The selection screen 80 in this case also includes the message 81 and the buttons 82 corresponding to the respective types. However, in this selection screen, the illustrative image 83 corresponding to each button 82 is omitted, and the layout of the buttons 82 has been changed accordingly. In addition, this selection screen 80 includes a message 84 that recommends selection of a predetermined commodity, and further, includes a button 85 for selecting this commodity and an illustrative image 86 of this commodity.
Here, the commodity displayed as a recommendation is a commodity corresponding to the menu medium 40 in which the customer 3 has shown interest in the movement path 2b1. From the commodity association information in
As described above, a plurality of menu media 40 may be included in the set of the face information and the menu medium 40 received from the information processing apparatus 30: this occurs when a plurality of menu media 40 at the top of the gaze status association information have the same “time” and “the number of times”, or when the menu media 40 ranked down to a predetermined position from the top are specified as the menu media 40 in which the customer 3 has shown interest. In this case, a set of a button 85 and an illustrative image 86 may be displayed on the selection screen 80 for each of these menu media 40.
In this selection screen 80, when the button 85 has been touched, display of the selection screen 90 in
Instead of the screen in
In the selection screen 80 in
In the selection screen 80 in
The controller 101 of the ticket vending machine 10 may transmit, to the information processing apparatus 30, information on the commodity actually purchased by the customer 3 having reached the facing position P1. In this case, the controller 101 may transmit, to the information processing apparatus 30, relationship information indicating the relationship between the commodity (hereinafter, referred to as “specified commodity”) corresponding to the menu medium 40 received from the information processing apparatus 30 in combination with the face information of the customer 3, and the commodity (hereinafter, referred to as “purchased commodity”) actually purchased by the customer 3.
For example, as the above relationship information, the controller 101 may transmit, to the information processing apparatus 30, information indicating match/mismatch between the specified commodity and the purchased commodity, and, in the case of a mismatch, information indicating the similarity between both commodities. As the information indicating the similarity, information indicating whether or not each classification defined in the classification association information in
The controller 101 transmits, to the information processing apparatus 30, information (e.g., commodity name) for identifying the specified commodity and the purchased commodity, the above relationship information, and the serial number received together with the specified commodity. The controller 301 of the information processing apparatus 30 sequentially stores these pieces of received information in association with each other.
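The relationship information described above can be sketched as follows. This is a non-limiting Python illustration; the dictionary layout and the per-classification comparison are assumptions.

```python
def relationship_info(specified, purchased, classification):
    """Build the relationship information between the specified commodity
    and the purchased commodity: match/mismatch and, on a mismatch, a
    per-classification similarity (a sketch; names are assumptions).
    `classification` maps a commodity -> tuple of its classifications."""
    if specified == purchased:
        return {"match": True, "similarity": None}
    # compare the first, second, third classifications one by one
    sim = [a == b for a, b in zip(classification[specified],
                                  classification[purchased])]
    return {"match": False, "similarity": sim}
```

The per-classification flags express how similar the purchased commodity is to the specified commodity, which supports the evaluation of the menu media described below.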
At this time, further, the controller 301 may extract, from the face information associated with this serial number in the gaze status association information, attribute information such as the sex and the age of the customer 3, and further associate this attribute information with the information described above. The attribute information of the customer 3 may be acquired from the captured image (the captured image from the camera 21) of the customer 3 in the movement path 2b1, and registered, in advance, in association with the gaze status association information of the customer 3. The attribute information may include attributes (e.g., the height, the color of clothes, etc.) other than the sex and the age of the customer.
The controller 301 of the information processing apparatus 30 registers, into the purchase management information in
With this purchase management information, effectiveness of the menu media 40 can be evaluated, for example. That is, based on information in the items “match/mismatch” and “similarity”, whether or not the menu medium 40 has evoked purchase of the commodity or a similar commodity can be evaluated, and further, this evaluation can be performed in detail for each attribute of the customer 3.
Such evaluation may be performed by the manager on the information processing apparatus 30, or may be performed in an external evaluation tool (server, etc.). In the latter case, the purchase management information in
Generation of the relationship information may be performed by the information processing apparatus 30. In this case, information similar to the classification association information in
As shown in
With this configuration, a commodity related to the menu medium 40 in which the customer 3 has shown interest in the movement path 2b1 is displayed as a selection candidate in a prioritized manner on the commodity selection screen (display screen), at least at the timing when the customer 3 has arrived at the facing position P1. Therefore, the commodity in which the customer 3 has shown interest can be smoothly and quickly displayed as a selection candidate. Accordingly, convenience in commodity selection for the customer 3 can be enhanced.
As described above, transmission in step S107 is performed at a timing when the customer 3 has deviated from the movement path 2b1, and the process in
As shown in
Therefore, the interest of the customer 3 in each of the menu media 40 can be comprehensively determined, and a commodity according to the interest of the customer 3 can be appropriately displayed as a selection candidate.
As described with reference to
Since the process of estimating the direction in which the customer 3 views is used, which menu medium 40 the customer 3 has viewed can be appropriately determined. Therefore, from this determination result, the menu medium 40 in which the customer 3 has shown interest can be accurately specified.
As shown in
Thus, since the gaze status association information (association information) is prepared while the customer 3 moves to the facing position P1, the selection screen 80 (display screen) according to the interest of the customer 3 can be smoothly presented to the customer 3.
As shown in
Accordingly, without use of means for tracking the customer 3 who moves from the movement path 2b1 toward the facing position P1, when, in step S202 in
As shown in
With this configuration, the menu medium 40 in which the customer 3 has shown interest can be specified from the captured image of the customer 3 who moves in the store 2.
As shown in
With this configuration, since the button 85 (selection item) for the commodity corresponding to the menu medium 40 in which the customer 3 has shown interest is included in the selection screen 80 in the initial state, the customer 3 can smoothly select the commodity in which the customer 3 has shown interest.
As shown in
Accordingly, the button 82 (type selection item) of the commodity type including the commodity corresponding to the menu medium 40 in which the customer 3 has shown interest is effectively displayed. Therefore, when not selecting the commodity in which the customer 3 has shown interest, the customer 3 can smoothly select a commodity of the same type as this.
As shown in
With this configuration, the buttons 82 (type selection items) of other commodity types that satisfy the predetermined group condition with respect to the commodity corresponding to the menu medium 40 in which the customer 3 has shown interest are effectively displayed. Therefore, when not selecting the commodity in which the customer 3 has shown interest, the customer 3 can smoothly select a commodity of the same type as this and a commodity similar to this.
As shown in
Accordingly, the above-described effects are exhibited by the information processing apparatus 30.
As shown in
According to the third aspect and the fourth aspect of the present invention, effects similar to those in the above first aspect are exhibited.
Accordingly, the above-described effects by the information processing apparatus 30 are exhibited.
In Embodiment 2, the camera 21 and the menu media 40 are omitted from the arrangement example in
The gaze status association information in
In Embodiment 2 as well, effects similar to those in Embodiment 1 are exhibited.
In Embodiment 2, from the captured image of the customer 3 referring to the menu media 51 outside the store 2, the menu medium 51 in which the customer 3 has shown interest can be specified.
In Embodiment 2 as well, similar to Embodiment 1, the menu media 40 and the camera 21 may be further disposed in the store 2, and the gaze status association information may be further configured with respect to these menu media 40. In this case, when the menu media 40 in the store 2 and the menu media 51 outside the store 2 correspond to the same commodities, these menu media 40, 51 may be associated with each other. Then, when the menu medium in which the customer 3 has shown interest is to be specified, “time” and “the number of times” corresponding to the menu media 40, 51 associated with each other may be totaled, and the totaled value may be used in specifying the menu medium in which the customer 3 has shown interest. When information on the menu media 40, 51 associated with each other is to be transmitted from the information processing apparatus 30 to the ticket vending machine 10, information on either one of the menu media may be transmitted.
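The totaling of detection values across menu media 40, 51 associated with each other can be sketched as follows. This is a non-limiting Python illustration; the pairing table and the value layout are assumptions.

```python
def total_gaze(in_store, out_store, pairing):
    """Total the gaze values of menu media inside and outside the store
    that correspond to the same commodity (a sketch of the Embodiment 2
    variation; the pairing table is an assumption).
    `in_store` maps an in-store medium id -> value, `out_store` likewise
    for out-of-store media, and `pairing` maps an in-store medium id to
    its out-of-store counterpart."""
    totals = {}
    for m40, value in in_store.items():
        # an unpaired in-store medium simply keeps its own value
        totals[m40] = value + out_store.get(pairing.get(m40), 0.0)
    return totals
```

The same totaling can be applied separately to “time” and to “the number of times” before the menu medium of interest is specified.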
In Embodiment 3, the information processing apparatus 30 is omitted, and the functions of the menu specifying unit and the display adjustment unit are included as functions of the controller 101 of the ticket vending machine 10. The camera 21 is connected to the communication unit 109 of the ticket vending machine 10.
The function of a menu specifying unit 101a provided to the controller 101 is approximately the same as the function of the menu specifying unit 301a in Embodiment 1 above. However, the menu specifying unit 101a does not perform the process in step S107 in the flowchart in
In the configuration in Embodiment 3, the function of a display adjustment unit 101b provided to the controller 101 corresponds to the function according to the flowchart in
In Embodiment 3, the information processing apparatus of the present invention is configured by the controller 101, the memory 102, and the communication unit 109. That is, in Embodiment 3, the information processing apparatus of the present invention is built in the ticket vending machine 10.
In Embodiment 3 as well, effects similar to those in Embodiment 1 are exhibited.
In the commodity order system 1 in
With this configuration, without use of means for tracking the customer who moves from the movement path 2b1 toward the facing position P1, the menu medium 40 in which the customer 3 has shown interest can be smoothly specified.
In Embodiment 3 as well, changes similar to those in Embodiment 2 can be made.
In Embodiment 1 above, one commodity is included in one menu medium 40. However, a plurality of commodities of the same type may be included in one menu medium 40. In this case, the buttons 85 and the illustrative images 86 for respectively selecting the plurality of commodities corresponding to the menu medium 40 in which the customer 3 has shown interest may be included in the selection screen 80 in
In Embodiment 1 above, the menu medium 40 in which the customer has shown interest is specified according to the number of times and the time for which the customer 3 has viewed the menu medium 40. However, this specifying may be performed according to another evaluation element. For example, for each viewing of a menu medium 40 by the customer 3, the time from the beginning of the viewing to the end of the viewing may be acquired, the acquired time may be associated with the viewed menu medium 40, and the menu medium 40 corresponding to the viewing having the longest time may be specified as the menu medium 40 in which the customer 3 has shown interest. Alternatively, the menu medium 40 viewed by the customer 3 in the vicinity of the end of the movement path 2b1, or the menu medium 40 last viewed by the customer 3 in the movement path 2b1 may be specified as the menu medium 40 in which the customer 3 has shown interest.
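The alternative evaluation elements described above can be sketched as follows. This is a non-limiting Python illustration; the chronological event format is an assumption.

```python
def longest_single_viewing(gaze_events):
    """Specify the medium whose single continuous viewing lasted longest.
    `gaze_events` is a chronological list of (menu_medium_id, duration)."""
    return max(gaze_events, key=lambda e: e[1])[0] if gaze_events else None

def last_viewed(gaze_events):
    """Specify the medium last viewed in the movement path."""
    return gaze_events[-1][0] if gaze_events else None
```

Either rule can replace the cumulative-time rule of step S106 while leaving the rest of the process unchanged.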
In Embodiment 1 above, the face information of the customer 3 and the number of times and the time for which the customer 3 has viewed each of the menu media 40 are associated with each other. However, the face information of the customer 3 need not necessarily be associated. For example, in the case of the layout in
In Embodiment 1 above, collation of the face information is performed in the ticket vending machine 10. However, collation of the face information may be performed in the information processing apparatus 30. In this case, the camera 22 is connected to the information processing apparatus 30. The information processing apparatus 30 causes the memory 302 to store the combination of the menu medium 40 in which the customer 3 has shown interest and the face information.
At the timing when the customer 3 has reached the vicinity of the facing position P1, the information processing apparatus 30 receives a notification indicating this from the ticket vending machine 10. In accordance with this, the information processing apparatus 30 acquires the face information of the customer 3 facing the operation/display unit 11 from the captured image from the camera 22, and collates the acquired face information with the face information in the above combinations stored in the memory 302. When there is face information consistent with the acquired face information, the information processing apparatus 30 transmits information on the menu medium 40 combined with this face information, to the ticket vending machine 10. When there is no such face information, the information processing apparatus 30 transmits, to the ticket vending machine 10, information indicating that there is no face information consistent with the acquired face information.
Upon receiving these pieces of information, the ticket vending machine 10 executes the processes in step S203 and thereafter in
In Embodiment 1 above, in step S107 in
In Embodiment 1 above, the functions of the menu specifying unit and the display adjustment unit are included in the information processing apparatus 30, and in Embodiment 3 above, these functions are included in the ticket vending machine 10. However, these functions may be included in any of the apparatuses forming the commodity order system 1, or may be provided so as to be distributed in a plurality of apparatuses. For example, when the commodity order system 1 includes the ticket vending machine 10, the information processing apparatus 30, and another information processing apparatus, the functions of the menu specifying unit and the display adjustment unit may be included in this information processing apparatus. In this case, the camera 21 is connected to the other information processing apparatus.
In Embodiments 1 to 3 above, the ticket vending machine 10 is a commodity order apparatus. However, an apparatus other than the ticket vending machine 10 may be the commodity order apparatus. For example, an apparatus that does not issue tickets but that displays selection candidates for commodities and receives orders may be the commodity order apparatus, or an apparatus having only a function of displaying selection candidates for commodities may be the commodity order apparatus. In these cases as well, similar to the above embodiments, a commodity corresponding to the menu medium in which the customer 3 has shown interest may be displayed as a selection candidate in a prioritized manner.
The form in which a commodity corresponding to the menu medium 40 in which the customer 3 has shown interest is displayed as a selection candidate in a prioritized manner is not limited to the forms in
For example, in the selection screen 90 in
Similarly, the method in which the type selection item of the commodity type including the commodity corresponding to the menu medium 40 in which the customer 3 has shown interest is effectively displayed as compared with the other type selection items is not limited to the method in
In the selection screen 80 in
The configuration of the association information in which the customer 3 and the menu medium 40 in which the customer 3 has shown interest are associated with each other is not limited to the configuration shown in
In Embodiments 1, 3 above, the menu media 51 outside the store 2 are commodity samples. However, the menu media 51 outside the store 2 may be other menu media such as posters disposed on the outer wall of the store 2.
The layout in the store 2, the commodities handled by the store 2, and the classification of the commodities are not limited to those described in the above embodiments, either.
In addition to the above, the embodiments of the present invention can be modified as appropriate within the scope of the claims.
Number | Date | Country | Kind
---|---|---|---
2023-178844 | Oct 2023 | JP | national