This invention relates to applications of robots, more particularly to applications of robots in surveys. This invention also relates to a user interface of an electronic device, more particularly to presentation of buttons in a user interface and implementations of buttons in surveys, emails, short messages, and tweets.
Surveys on customer satisfaction are important for business. Surveys on social or political issues are important for society and politics. Survey results benefit business owners and policy makers as well as the general public. For a business, for example, surveys may be used to monitor customer service, improve product quality, detect defects, observe future trends, etc. Traditional surveys use a questionnaire that has many questions on several pages. The questions are often long and take time to comprehend. In addition, surveys often show up as an unwelcome surprise. Hence, no matter whether a questionnaire is on paper or on a screen, most people usually just shy away from it because it is considered time-consuming, burdensome, and intrusive. In many cases, even the allure of a raffle prize won't make people answer survey questions.
Therefore, there exists a need for a survey which is quick, easy, and less intrusive.
Buttons presented in an interface have a regular size and fixed appearance. These buttons can be easily overlooked or ignored. It is desirable to configure certain buttons to make them stand out in some cases, such as buttons representing answers in a survey.
Currently, surveyors are sent to a specific location, such as a store, event, or public space, to perform on-site surveys. The surveyors directly engage with individuals to ask survey questions and collect responses. Although real-time feedback about experience and opinions on a particular topic can be obtained, this approach suffers from high labor costs and the limited availability of surveyors. There is a need for on-site surveys that have a low cost and are always available.
Accordingly, several main objects and advantages of the present invention are:
Further objects and advantages will become apparent from a consideration of the drawings and ensuing description.
In accordance with the present invention, a dual-function button is configured in an email. Upon activation, the button performs a function and closes an email page. A user may use the dual-function button to answer a survey question or submit an opinion quickly and conveniently. The dual-function button may also be configured in a short message or a tweet. To make a button more attractive, it is enlarged temporarily when a finger approaches it or when the page that contains it first shows up. Further, a survey is designed which needs only a single action and is especially suitable for emails, short messages, and tweets. A survey may present a single question which is about user satisfaction and has only one or a few words. The survey process is quick, simple, convenient, and less troublesome compared to completing a traditional questionnaire.
In accordance with the present invention, robots are employed as surveyors to do on-site surveys. The robots may observe people, select an individual by certain criteria, and then engage with the individual for a quick survey. In some cases, people who walk with a normal pace or appear relaxed are selected for surveys.
The subject matter, which is regarded as the invention, is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other features and also the advantages of the invention will be apparent from the following detailed description taken in conjunction with the accompanying drawings.
100, 102, 104, 106, 107, 108, 110, 112, 114, 116, 117, 118, 120, 122, 124, 128, 130, 132, 134, 136, 138, 140, 142, 144, 150, 152, 154, 156, 158, 160, 162, 164, 166, 168, 170, 172, 174, 176, 178, 180, 182, 184, 186, and 188 are exemplary steps.
The following exemplary embodiments are provided to make a complete disclosure of the present invention and to fully convey the scope of the present invention to those skilled in the art; the present invention is not limited to the schematic embodiments disclosed, but may be implemented in various forms. Wherever possible, the same reference numbers are used throughout the drawings to refer to the same or like parts.
The word “server” means a system or systems which may have similar functions and capacities as one or more servers. Main components of a server may include one or more processors, which control and process data and information by executing software, logic, code, or carrying out any other suitable functions. A server, as a computing device, may include any hardware, firmware, software, or a combination thereof. In the most compact form, a server may be built on a single processor chip. In the figure, server 82 may represent one or more server entities that collect, process, maintain, and/or manage survey information and documents, help conduct surveys, communicate with users, deliver information required by users, etc. Server 82 may exemplarily be divided into three blocks, represented by a processing module 18, a log database 20, and a survey database 12. Processing module 18 may include processing and communication functions. Log database 20 may store user ID information and survey ID information, which may be used to trace a survey result that a user provided. Survey database 12 may store survey results and other survey-related information, such as background information on survey events. Databases 12 and 20 may include the aforementioned memory chips and/or storage modules.
A communication network 14 may cover a range of entities such as the Internet or the World Wide Web, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a telephone network, an intranet, wireless, and other types of networks. Client 80 and server 82 may be connected to network 14 by various wired, wireless, optical, or other connections.
Returning to
The above-described survey requires only one action, i.e., a click, a touch, or another single action as illustrated above and below, or in some cases no action at all. The whole survey process is simple, swift, easy, and less intrusive, and thus more likely to be accepted by users than a traditional survey. For example, when a user purchases a cup of coffee at a coffee shop, a smartphone may be used to pay for it in an electronic payment process. The transaction may be concluded after the user waves the smartphone in front of a cash register. Then a survey window may appear on the smartphone's touch screen or GUI. The user may give a quick touch on the screen, taking perhaps one or two seconds. After that, the survey ends, the survey window closes automatically, and the smartphone screen may return to its previous GUI.
Therefore, compared to a traditional questionnaire-type survey, a single-action survey is more convenient, takes less time, and thus may be more acceptable to average people. Although single-action surveys have a simple output, the survey results may still yield important information about product quality and customer service for a shop manager.
Back to
The wait stage at Step 150 may be initiated by clicking or tapping a “Wait” button 90 on surface 36 and may last for a period of time that may be set by a user. For instance, after a user makes a payment at a store, the user may have to wait for his order (such as waiting in line for a burger or drink), or may have hands full; thus the user may want to delay a survey until the shopping process is concluded or until it is more convenient to do so. There is also a “Shrink” button 92 located between buttons 90 and 33. Clicking or tapping button 92 causes the survey window to shrink and the survey process to be suspended, which may turn the window into a small graphic icon or send the session to an alert list. The suspended process may be resumed when activated by clicking or tapping the shrunk icon or a corresponding item on the alert list.
The single-action survey window of
Returning to the previous interface or GUI before a survey session may also mean Step 132, the end of a survey. However every now and then in real life, users may want to take part in a skipped survey, adjust their submitted survey answers, add a comment, or rewrite a posted essay. To satisfy such demands, a client system may provide an option or application which allows a user to redo a survey or take a survey session which was missed in the past. At Step 134, a user may decide whether or not to go back to a survey. Going back means returning to Step 124 with a GUI displaying a single-action survey window, where there are options for the single-action and regular surveys. Referring to
Implementing a survey immediately after an event makes it natural and relevant. But some surveys are desirable before an event happens. Examples of this type include surveys on social or political issues before an election, product surveys before a release, surveys on future trends, and so on. Thus, for certain subjects, a survey request may be presented to users before an event takes place. A surveyee may be randomly chosen in some cases, when there is no exact information about who is more relevant to an event. The starting time of some surveys may also be randomly arranged within a time frame. In such cases, the first survey step may be to create a survey window, or to start the survey audibly when a client system has no display.
For instance, as shown in window 70, a survey question may be “Vote for Measure A?”, and “Yes”, “No”, and “Undecided” may be three answers represented by three interactive buttons for a single-action or one-action survey. In addition, another option is provided by a “More Opinions” button. After a user taps “More Opinions”, another window may show up with more answers about the “Measure A” topic for the user to select, such as “Strongly Oppose”, “Strongly Agree”, and “Don't Care”. Alternatively, a “More Polling Questions” button may be configured, which leads to a new window with polling questions on other issues, such as “Mr. Smith for Mayor?”, “Vote for Sales Tax Increase?”, “New Cross-Bay Bridge?” and so on. Moreover, the “More Opinions” and “More Polling Questions” buttons may be presented together in window 70, which gives a user three survey choices at the same time. As in the foregoing survey configurations, a user input space may be arranged for a user to write comments or express opinions in a survey window.
Survey window 70 of
Moreover, after an event, a single-action survey window may appear either on a device which a user uses or carries during the event, or on a device located at home or in an office. The place and timing of the survey window's appearance may be decided by a user in advance. For instance, a survey window may be arranged on a home computer or office computer, so that a user may complete it after things have settled down. Doing surveys at home may be especially preferred by users who are busy during the day, even though a survey requires only a single action.
Referring to
For a quick and easy survey, a single survey question should be simple, short, and easy to understand. Exemplary single questions may include “Satisfied?”, “Are you satisfied?”, “Are you satisfied with Shop A?” or another short and easy-to-understand sentence. There are at least three types of single survey question, which may be applied to all foregoing and following cases.
At Step 168, a single survey question may be designed to have one or a few words only, like “Satisfied?” or other examples just mentioned. Thus a survey may be arranged to have a single question and the single question may contain a few words or even one word only. For instance, a single question may have at most seven words. Such a survey question may be arranged and suitable for different or even far different events, such as dining, shopping, auto repair, and ball games.
At Step 170, a single survey question may be designed to be related to user satisfaction, like “Satisfied?” and “Are you satisfied?”, which are suitable for different or even far different events as well. So, when multiple surveys are conducted, the single question for each survey may focus on the same subject, that is, whether a user feels satisfied, regardless of how different the events are. In a sense, for this type of single question, the question wording may change from one event to another, but its objective remains the same, i.e., to find out whether a user is satisfied regarding an event or experience. In other words, a single question may always be related to user satisfaction for a wide range of events.
Step 172 introduces the third type of single question. As a single question may be so simple and short while appropriate for different events, it may naturally address a general or universal issue. Thus, a single question may be created such that it uses the same wording repeatedly on many occasions for various events. An exemplary question like “Satisfied?” or “Are you satisfied?” clearly fits this type. Such questions may be used repeatedly with the same wording for many events and many occasions, or the same wording of a survey question may be used in surveys arranged for different or even far different events. Since survey questions may sometimes be written as “Satisfied with Shop A?” and “Satisfied with Shop B?”, the third type of survey question may be modified as a question with the same wording except for the name of a survey target. Again, the modified survey question may be suitable and arranged for different or even far different events.
As in the aforementioned cases, a user is presented with two options: tapping an answer button to do a quick one-action survey or tapping button 206 to do a regular survey. Once an answer button, such as button 208, 210, or 212, is tapped or clicked, the one-action survey is concluded and a survey result is sent to a survey processing program at Market or the survey facility. Then, the question “Satisfied?” may be replaced by a message such as “1-Action Survey Completed. Thank you!”, which may be presented on screen to make it clear that the quick survey has ended. While the conclusion announcement is displayed, button 206 may remain, continuing to offer a chance for a regular survey. A user may close the message page or tap button 206 to answer more questions. Once button 206 is tapped or clicked, a new window may show up or a web page may appear as the start of a regular survey session.
A business or entity may collaborate with a survey facility and utilize an email service to design and construct a survey email. An email service is usually hosted by a server of an internet service provider. For instance, a business may obtain a survey module from the survey facility, create email content using an email service, and embed the module in the content. The module may be configured to present survey functions and content in an email. It may be arranged that either a survey facility or a business monitors and administers a survey process, like monitoring whether a button is activated and collecting and sorting survey data. After a user taps a button on a survey email page at a client system or user device, a signal may be sent from the device to the survey facility or a survey processing program at the business, depending on an arrangement.
Usually after a user reads an email message, the user may click a close button to leave a message page and return to an email inbox interface. Sometimes, a button with a backward symbol is used instead of the close button. An email inbox page or interface may display emails, e.g., a list of emails, including new and old ones. Since a user is supposedly going to leave a message page once a one-action survey is finished, it may be designed that activation of one of the answer buttons 208, 210, or 212 not only concludes a quick survey, but also closes the email page automatically. Thus, a user may enter an email inbox interface on a screen, tap or click a survey email to open a message page, take a brief look at the survey content, and then tap or click an answer button. Next, the client system receives a survey result that is submitted by the user and then sends out survey information to the survey facility or survey processing program. At the same time, the client system removes the message page from the screen, and displays the email inbox interface.
In current email configurations, tapping or clicking a close button is the only method available to return to an email inbox interface from a message page. The close button is provided by an email service and arranged on a message page automatically. Although there is no need to create a redundant close button, it may be desirable to assign a page-closing function plus another function to an interactive button or icon to create a dual-function button. A dual-function button saves one step and thus may save time and improve user experience. To enable such a dual-function button, a service provider may configure an email system for an email service and make it possible to add the page-closing function to a button which carries another function in the system. Thus, when a business or entity constructs content items of an email, it may combine two functions to generate a dual-function button, where one function is page closing. The button may have a shape such as a square, a rectangle, a circle, an oval, or an irregular shape.
In some embodiments, after a user taps a dual-function button, a thank-you message may be displayed briefly, say for one or two seconds, on the message page before the page is replaced by an inbox page. Besides expressing appreciation, the message may be used to assure a user that the survey has been conducted. In some other embodiments, after a user taps a dual-function button, the page is replaced by an inbox page without showing any message. Some users may appreciate a thank-you message, while other users may prefer a simple and quick action without one.
Assume that an email program is installed at a client system or user device for a user to access and manage emails. Alternatively, the user may also access and manage emails via a portal website of an email service. After the user launches the email program at the device or logs in to an email account at the portal, an email interface may show up on a display screen. At the beginning, the email program or a server at the portal may present an inbox interface, where a list of interactive items representing new and old email messages may appear. The user may tap or click a list item to open a message and enter a message page. From another angle, after the program or the server receives information that a list item is activated, it may present content of a corresponding email on a message page and then keep monitoring whether a button is activated. Assume that an email contains a dual-function button. Then, the dual-function button and a regular close button may be displayed on the message page along with other interactive buttons and icons. When the program or server detects that the dual-function button is activated, it may perform the other function first before carrying out the page-closing function which closes the message page. After the message page is closed, the inbox interface may be displayed. The page-closing function may also be implemented by replacing the message page with the inbox interface.
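For illustration only, the sketch below shows one way a client-side email program might handle such a dual-function button. The names used here (Button, MessagePage, submit_answer, show_inbox) are hypothetical and are not the API of any particular email service; the point is simply the order in which the two functions run.

```python
from dataclasses import dataclass

@dataclass
class Button:
    kind: str          # "dual_function" or "close"
    payload: str = ""  # e.g., the survey answer carried by the button

class MessagePage:
    """Minimal sketch of a message page hosting a dual-function button."""

    def __init__(self, submit_answer, show_inbox):
        self.submit_answer = submit_answer  # callback to the survey facility or program
        self.show_inbox = show_inbox        # callback that redisplays the inbox list
        self.open = True

    def on_button_activated(self, button: Button):
        if button.kind == "dual_function":
            # Perform the non-closing function first (e.g., submit a survey answer) ...
            self.submit_answer(button.payload)
            # ... then carry out the page-closing function automatically.
            self._close()
        elif button.kind == "close":
            # A regular close/back button only returns to the inbox.
            self._close()

    def _close(self):
        self.open = False
        self.show_inbox()

# Example usage with stub callbacks:
page = MessagePage(submit_answer=lambda a: print("answer sent:", a),
                   show_inbox=lambda: print("inbox displayed"))
page.on_button_activated(Button(kind="dual_function", payload="Satisfied"))
```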
As in the above-described cases, a user is presented with two survey options: tapping an answer button to do a quick one-action survey or tapping button 226 to do a regular survey. Further, the user is presented with another option to express or submit an opinion using button 228. If a user taps button 228, it may be considered that the user likes the products. As there are both survey options and the “like” option, the dual-function buttons may be configured differently from what is described above in
Hence, the client system 214 performs the dual functions after both button 228 and one of the answer buttons are activated. In another scenario, if the client system detects that button 228 is activated first and one of the answer buttons is activated subsequently, the client system may conclude the survey and close the email page simultaneously. As such, the dual functions of the answer buttons 220-224 and button 228 are performed only after one of the answer buttons and button 228 are both activated. If it is detected that only one of the answer buttons or only button 228 is activated, the dual functions are not carried out; that is, the email page may remain and may not be replaced by the inbox page automatically.
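A hedged sketch of this two-button variant follows. The callback names (send_result, close_page) are assumed for illustration; the sketch only tracks whether both inputs have arrived, in either order, before the dual functions are performed.

```python
class TwoStepSurveyPage:
    """Sketch: the page closes only after both an answer button and the
    "like" button have been activated, in either order."""

    def __init__(self, send_result, close_page):
        self.send_result = send_result  # callback to the survey processing program
        self.close_page = close_page    # callback that removes the page and shows the inbox
        self.answer = None
        self.liked = False

    def on_answer_button(self, answer):
        self.answer = answer
        self._maybe_finish()

    def on_like_button(self):
        self.liked = True
        self._maybe_finish()

    def _maybe_finish(self):
        # The dual functions run only when both inputs have been received.
        if self.answer is not None and self.liked:
            self.send_result({"answer": self.answer, "like": True})
            self.close_page()

# Example usage:
page = TwoStepSurveyPage(send_result=print, close_page=lambda: print("inbox shown"))
page.on_answer_button("Satisfied")   # nothing closes yet
page.on_like_button()                # now the result is sent and the page closes
```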
After the user taps button 236, it may indicate that the user has read the message and may have reviewed the new products. As such, it is time to return to the inbox page. Thus, button 236 may be configured to have dual functions. The client system may monitor whether button 236 is activated. If the client system 230 detects that button 236 is activated by the user, the client system may send a message to report the event to the survey processing program or a service facility. At the same time, the client system may close the email page and display the inbox page.
In some other embodiments, more than one dual-function button or icon may be configured in interface 232. For example, a “Not Like” button (not shown) may be arranged in interface 232 for a user to express or submit another opinion that is different than the like opinion. The “Not Like” button may be configured to perform dual functions, submission of an opinion and closing the email page. After it is detected that the “Not Like” button is activated, the client system 230 receives the submitted opinion, sends a message to the service facility, closes or removes the email page, and displays the inbox page.
In addition, a “Back” button is configured at the upper left corner and a “Close” button is configured at the upper right corner. A user may use the “Back” or “Close” button to close the current short message page and return to the inbox page. In some embodiments, both the “Back” and “Close” buttons may be presented by the client system 238. In some other embodiments, one of the “Back” and “Close” buttons may be presented. After the client system 238 detects that the “Back” button or “Close” button is activated, the client system closes the current short message page and displays the inbox page.
As in the above-described examples, a user is presented with two survey options: tapping an answer button to do a quick one-action survey or tapping button 254 to do a regular survey. Similar to the answer buttons shown in
In some other embodiments, more than one dual-function button or icon may be configured in interface 258. For example, a “Need to Change” button (not shown) may be arranged in interface 258 for a user to express or submit a request or another opinion that is different than the like opinion. The “Need to Change” button may be configured to perform dual functions, submitting a request or opinion and closing the short message page. After it is detected that the “Need to Change” button is activated, the client system 256 receives the submission of the request or opinion, sends a message to the service facility, closes or removes the short message page, and displays the inbox page.
Besides emails and short messages, dual-function buttons may also be configured in tweets or tweet messages. A tweet may represent a social media posting or message at a social media platform.
In addition, a “Back” button is configured at the upper left corner and a “Close” button is configured at the upper right corner. A user may use the “Back” or “Close” button to close the current tweet page and return to the inbox page. In some embodiments, both the “Back” and “Close” buttons may be presented by the client system 268. In some other embodiments, one of the “Back” and “Close” buttons may be presented. After the client system 268 detects that the “Back” button or “Close” button is activated, the client system closes the current tweet page and displays the inbox page.
As in the above-described examples, a user is presented with two survey options: tapping an answer button to do a quick one-action survey or tapping button 284 to do a regular survey. Similar to the answer buttons shown in
In some other embodiments, more than one dual-function button or icon may be configured in interface 288. For example, a “Need to Upgrade” button (not shown) may be arranged in interface 288 for a user to express or submit a request or another opinion that is different from the like opinion. The “Need to Upgrade” button may be configured beside button 296 and perform dual functions, i.e., submission of a request or opinion and closing of the tweet page. After it is detected that the “Need to Upgrade” button is activated, the client system 286 receives the submission of the request or opinion, sends a message to a service facility, closes or removes the tweet page, and displays the inbox page.
In embodiments as shown in
For example, a client system may present an email on an email page, a short message on a short message page, or a tweet message on a tweet page. If the client system detects that the “Close” button or “Back” button is activated, the client system may close the message page and display the inbox page. If the client system detects that one of the answer buttons for a one-action survey is activated, the client system may conclude the survey, send a survey result to the survey facility, close the page, and display the inbox page. If the client system detects that the “Like” button is activated, the client system may send a message to the service facility, close the message page, and display the inbox page. As such, the dual-function button saves one step for a user and thus saves time and may improve user experience.
When a survey is embedded in an email, a short message, or a tweet message, more users may submit an answer, i.e., activate an answer button, if the answer button gets their attention. As a larger button may stand out among content items and is easier to tap or click, a button with an enlarged size may attract more attention, make answering a question more convenient, and encourage more users to express themselves (e.g., by participating in surveys). In the descriptions below, a display screen may or may not be touch sensitive. When the display screen is not touch sensitive, certain methods apply where suitable.
Referring to
In some cases, multiple select buttons may be presented with an enlarged size for a predetermined time after an email message page is displayed. For example, buttons 208 and 210 shown in
In some cases, a client system does not show buttons with a changeable size when a message page is presented. This may happen when other content of the message already occupies the screen space. In these cases, the buttons with a changeable size appear when a user scrolls down the message page. Once these buttons show up on screen, their size may be enlarged temporarily. Thus, whether select buttons appear upon presentation of a page or upon a scroll-down action, they may show up with an enlarged size temporarily.
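The timing of such a temporary enlargement, whether triggered by the page being shown or by the buttons scrolling into view, might look like the minimal sketch below; it assumes a hypothetical button object with a size attribute and uses a timer to restore the normal size after a predetermined period.

```python
import threading

class SelectButton:
    """Sketch of a select button that is enlarged temporarily when it becomes visible."""

    def __init__(self, width, height, scale=1.5, hold_seconds=3.0):
        self.base_size = (width, height)
        self.size = (width, height)
        self.scale = scale                # enlargement factor (illustrative value)
        self.hold_seconds = hold_seconds  # predetermined display time (illustrative)

    def on_visible(self):
        """Called when the page is shown or the button scrolls into view."""
        self.size = (self.base_size[0] * self.scale, self.base_size[1] * self.scale)
        # Return to the normal size after the predetermined time.
        threading.Timer(self.hold_seconds, self._restore).start()

    def _restore(self):
        self.size = self.base_size

# Example usage:
btn = SelectButton(80, 30)
btn.on_visible()   # enlarged now; restored automatically after hold_seconds
```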
In some embodiments, a button may be enlarged when it is detected that a finger (or an external object) is proximate to the button. As aforementioned, a proximity sensor array may be installed at a screen to detect gestures of a finger above the screen surface. Assume the screen is parallel to a horizontal plane. As used herein, being in proximity to a button means that a finger hovers above the button vertically or is close to the button horizontally (e.g., about 1-5 millimeters away from it). The close position of a finger may indicate a user is likely to tap the button. Thus, the button may be enlarged to make the tapping act a little easier, while further encouraging the user to tap it. Referring to
In some embodiments, only select buttons may be enlarged when a finger approaches them. For example, only certain answer buttons for a survey (e.g., buttons 278 and 280) may be enlarged when they are approached by a finger with respect to
In some cases, three modes may be arranged. Options may be provided for a user to select (or enable) a mode or to disable all three modes. The options are configured for better user experience, as users have different preferences. The options may be arranged for access in a setup section of an app or the client system.
In the first mode, after it is detected that a finger approaches a button by hovering over or being in proximity to the button, the size of the button may be enlarged for a preset time period, e.g., at least 2-10 seconds. During the preset time period, the button is enlarged whether or not the finger remains there. The button returns to its normal size after the preset time period passes.
In the second mode, a button is enlarged and keeps the enlarged dimensions while it is detected that a finger hovers over or is in proximity to the button. For example, when it is detected that a finger is proximate to a button for a time period, the button may be enlarged during the time period. The button returns to its normal size after it is detected that the finger is no longer there.
In the third mode, a button is also enlarged and keeps the enlarged dimensions when it is detected that a finger hovers over or is in proximity to the button. For example, when it is detected that a finger is proximate to a button for a time period, the button may be enlarged during the time period. After it is detected that the finger is no longer there, the button may keep the enlarged size for a certain period of time, e.g., at least 2-10 seconds. During that period of time, the button remains enlarged even though the finger is absent. After the period of time, the button returns to its normal size.
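The three modes may be summarized in the sketch below. The proximity-sensor callbacks (on_finger_near, on_finger_gone), the periodic UI tick, and the hold time of a few seconds are assumptions for illustration; only the timing logic of each mode is shown.

```python
import time

class ProximityEnlargedButton:
    """Sketch of the three finger-proximity enlargement modes."""

    def __init__(self, mode, hold_seconds=3.0):
        self.mode = mode                # 1, 2, or 3
        self.hold = hold_seconds        # illustrative preset period
        self.enlarged = False
        self.enlarged_until = None      # expiry time for the timed modes

    def on_finger_near(self):
        self.enlarged = True
        if self.mode == 1:
            # Mode 1: enlarged for a preset period, whether or not the finger stays.
            self.enlarged_until = time.time() + self.hold
        else:
            # Modes 2 and 3: no expiry while the finger remains near the button.
            self.enlarged_until = None

    def on_finger_gone(self):
        if self.mode == 2:
            # Mode 2: shrink back as soon as the finger leaves.
            self.enlarged = False
        elif self.mode == 3:
            # Mode 3: keep the enlarged size for a while after the finger leaves.
            self.enlarged_until = time.time() + self.hold

    def tick(self):
        """Called periodically by the UI loop to expire timed enlargements."""
        if self.enlarged and self.enlarged_until is not None \
                and time.time() >= self.enlarged_until:
            self.enlarged = False
```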
Optionally, certain embodiments as illustrated above may be combined when it is suitable to do so. For example, a select button may be enlarged when the content of a page is presented. The button may return to its normal size after a certain time. Thereafter, the button may be enlarged again when a finger approaches it, and keep the enlarged dimensions for a certain time. In some embodiments, when a button is enlarged temporarily, the enlarged button may partially cover or block another button or item that is close to it. The finger-triggered-enlargement methods as illustrated above and shown in
When a display screen is not touch sensitive, the finger-triggered-enlargement methods may be replaced by cursor-triggered-enlargement methods. For example, when a client system detects that a cursor enters an area of a select button, such as the cursor overlapping the select button, the select button may be enlarged temporarily in manners similar to those illustrated above. Thus, methods of the finger-triggered-enlargement may be applied to cases of the cursor-triggered-enlargement. For example, after a cursor overlaps a select button, the button may be enlarged temporarily for a certain time in one mode. In another mode, the button may be enlarged temporarily and then return to its normal size after the cursor leaves the area of the button.
As used herein, the term “robot” refers to a device or machine capable of performing complex actions, especially a complex series of actions, automatically. Robots may include a humanoid robot that resembles a human being in appearance. Robots may also include a device that partially looks like a human or looks like an animal (e.g., a bear or panda) or a figure from an animated movie. The latter form of robot includes an autonomous vehicle (or driverless vehicle) with wheels or legs and a head with a humanlike face or animal face. The latter form of robot also includes a stationary device that, e.g., has at least a humanlike face or animal face.
A robot may include a processor, a memory device, cameras (or imaging sensors), a positioning system (e.g., GPS), an input component, an output component, communication components, a screen that shows info and messages, etc. A robot may also include a voice recognition mechanism, a facial recognition mechanism, and a gesture recognition mechanism. In addition, the robot may further include a radar system, a light detection and ranging (LiDAR) system, a speed sensor, accelerometers, gyroscopes, an electronic compass, microphones, speakers, pressure sensors, etc. The cameras, radar, and LiDAR may detect stationary and moving objects, e.g., an approaching person or vehicle. The cameras may be mounted on the robot so that they face all directions and may detect targets located in any direction. In some cases, the cameras may be used to detect a person's gaze direction or facing direction. The facing direction becomes critical when a person is at a distance from the robot and it becomes difficult to detect the person's gaze direction. The speed sensor, accelerometers, gyroscopes, and electronic compass may detect the robot's movement, position, and orientation. The pressure sensors may be mounted on the outer surface of a robot to detect a touch or pat on the robot made by a person. The robot may be connected via the communication components to a device (e.g., client 80) of a user using Wi-Fi, Bluetooth, a wireless network, or other suitable communication technologies.
Robots may be employed to work as surveyors and implement on-site surveys. This saves labor costs and makes on-site surveys always available when needed. While a robot may have a face resembling a human face, an animal face, or a face of an animated figure, the body of the robot may look like a human body, an animal body, or a machine. A robot may smile (or appear to smile) when its facial part generates a smiling pattern. In some cases, a robot smiles when inviting people to do a survey with a displayed or audible survey question, as smiling represents goodwill, good intentions, and a comfortable environment. A robot may be movable or stationary at a site, such as near an exit (e.g., a gate) of a store, a building, a shopping mall, a square, an arena, a park, etc. When a robot is movable, the robot may move forward one or a few steps toward a person who is coming closer to the robot before presenting a survey question. When a robot moves, even a little bit, it may attract more attention from a person and help launch a survey. In addition, a robot may be arranged to stay at or move around a position on the edge of a pathway through which people leave the site and go home. As such, the robot does not obstruct, hinder, or interfere with a person's walking.
At Step 1, robot 300 finds a person 302 who is at a position B1 and moving along a direction 304. Direction 304 points to the exit gate or approximately to a place where robot 300 stays. Robot 300 keeps monitoring person 302 using the sensors as the person continues moving along the direction 304.
At Step 2, when person 302 arrives at a position B2 and is within a preset distance (e.g., 3-5 meters) from robot 300, robot 300 may move forward a little toward person 302, arrive at a position A2, turn toward person 302 to be face-to-face with the person, smile, look at (or appear to look at) person 302, and present a survey question. Robot 300 at position A1 and person 302 at position B1 are depicted in dotted lines at Step 2 for comparison purposes. The survey question may be “Satisfied?”, shown on the display. Hopefully, person 302 may notice the robot and read the survey question. Person 302 may utter “yes” to indicate satisfaction and complete the survey. The person may also nod or make a thumbs-up gesture, which may be considered a yes answer. On the other hand, person 302 may say “no”, make a thumbs-down gesture, or shake his or her head to express dissatisfaction or disappointment. As illustrated above, person 302 may also utter “so-so”.
In some embodiments, robot 300 may observe a person who comes toward the robot and detect the facial expression and body gestures of the person through the sensors. First, images and videos are taken, recorded, and collected. Then, the recorded facial expression and gestures are checked and analyzed in real time by analysis procedures using certain algorithms or analytical software. The analytical software may include artificial intelligence (AI) tools such as artificial general intelligence (AGI) and generative artificial intelligence (GenAI). Certain models like large language models (LLMs) may be included in the AI tools in some aspects. In some cases, the models may be trained with a huge amount of data for detecting features and indicators of facial expressions, moods, states, and gestures.
In some cases, robot 300 may ascertain and determine whether a person is relaxed, nervous, concerned, happy, unhappy, tired, or energetic by analyzing the recorded images and videos. The robot may calculate the number of people who are in a specific mood (e.g., a relaxed, nervous, concerned, happy, or unhappy mood) or multiple moods (e.g., relaxed and happy moods). At a store, such information may be used to detect the general shopper mood, which reflects certain aspects of the shopping experience and shopping environment and is useful for store owners. Additionally, robot 300 may ascertain the ages of people and calculate the number of people in each age group. The age groups may include, e.g., child, teen, adult, middle-age adult, and senior adult. The age group information of shoppers may reflect the activity level of shoppers and the marketing potential of new products. Further, the above mood and age information may be added to survey results when surveys are performed. This gives survey results more content and detail.
Further, when the person does not answer the survey question, comes closer, and is within another preset distance (e.g., 1-3 meters), robot 306 may generate an audible survey question through a speaker, such as “Are you satisfied?” or “Everything is okay?”. Optionally, the audible survey question may be combined with an audible greeting message. For example, robot 306 may utter via the speaker “Hi, are you satisfied?” in some cases. If the person answers the question, robot 306 may thank the person and generate another audible survey question. Robot 306 may utter “Thank you. Would you like to do a regular survey?” to encourage the person to provide more information. If the person answers positively, more questions are presented. As questions in a regular survey have long sentences, robot 306 may use display 308 to show the questions to the person.
In some embodiments, when robot 306 detects that the person is within a preset distance and simultaneously looks at the robot (e.g., looking in a direction toward the eye, head, or face of the robot), the robot generates an audible survey question. The reason is that when the person is close but looks elsewhere, the person may not be paying attention to robot 306 and it may not be suitable to suggest a survey, which requires intention and effort. When the person is looking at the robot, it suggests the person pays attention to robot 306 and may want to interact with the robot. Thus, whether the person looks at the robot is an important indicator.
On the other hand, when the area around robot 306 becomes crowded, for example, when a certain number of people (e.g., three or more) are within a predetermined distance (e.g., 1-2 meters) and pass by and/or stand in front of the robot, the robot may not engage with any person to start a survey. In a crowded environment, it is difficult to interact with a person and conduct a survey. Alternatively, when there are three to five people in front of the robot and within the predetermined distance, one of these people looks at robot 306 and faces the display, and there is no other person between the robot and this person, robot 306 may present a survey question on the display and/or generate an audible survey question via a speaker, inviting the person to do a survey.
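The basic engagement check described in the last two paragraphs might be expressed as in the sketch below. The thresholds (distances, crowd size) and the perception fields are illustrative assumptions, and the alternative crowded-case behavior, where one unobstructed person faces the display, is omitted for brevity.

```python
from dataclasses import dataclass

@dataclass
class ObservedPerson:
    distance_m: float        # distance from the robot, in meters
    looking_at_robot: bool   # gaze or facing direction toward the robot

def should_offer_audible_survey(person, nearby_people,
                                max_distance_m=3.0,
                                crowd_distance_m=2.0,
                                crowd_limit=3):
    """Decide whether the robot should utter a survey question to `person`."""
    # Skip engagement when the area in front of the robot is crowded.
    crowd = [p for p in nearby_people if p.distance_m <= crowd_distance_m]
    if len(crowd) >= crowd_limit:
        return False
    # Engage only when the person is close enough and looks at the robot.
    return person.distance_m <= max_distance_m and person.looking_at_robot

# Example usage:
shopper = ObservedPerson(distance_m=2.0, looking_at_robot=True)
print(should_offer_audible_survey(shopper, nearby_people=[shopper]))  # True
```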
In addition, robot 306 may also show a message like “Get coupons here” when a person is in front of the robot. The person may tap a button to select and print out coupons or have the robot transmit coupons electronically to a smart phone or an email account. Further, robot 306 may function as an information center to receive and answer all sorts of questions through display 308 or verbal conversations. In such conversations, the robot utilizes the voice recognition mechanism.
At Step 176, the robot continues calculating the distance between the person and the robot. In response to the distance being smaller than a preset value (e.g., 3-6 meters), the robot smiles at the person and presents a survey question (e.g., “Satisfied?”) on the display that faces the person, in a manner similar to that shown in
At Step 180, the robot detects the walking pace of the person using, e.g., a LiDAR system. The walking pace is the speed at which the person walks. For average people, the normal walking pace is about 82 meters per minute. If the robot detects that the person moves at a pace greater than a preset value (e.g., 90-100 meters per minute), it may indicate the person is in a hurry, in a tense state, or preoccupied. Consequently, the robot determines the person is not suitable for a survey. If the robot detects that the person moves at a pace below the preset value, it means the person is not in a hurry and not tense. Thus two conditions are met: both the distance and the pace are below their threshold values. Then, Step 188 is taken. At Step 188, the robot smiles at the person and presents a survey question (e.g., “Satisfied?”) on the display. The robot may rotate its body and the display a bit so that the robot and the display face the person. The robot monitors the person and detects whether an answer is given. The person may utter “Yes” or “No”, or make a gesture to reply to the question.
At Step 182, the robot detects whether the person appears free and relaxed (or is in a free and relaxed state). If the detection result is negative, it is unsuitable to do a survey, as the person may be busy, have hands full, or be distressed or exhausted and would have no interest in participating in a survey. The robot may check whether the person is talking to another person or on the phone, carries, e.g., a big object or two or more bags, or walks strenuously, which indicates the person is not in a free and relaxed state. After the robot determines the person is in a free and relaxed state and the distance is below the threshold value, Step 188 is taken by the robot and the survey question displayed.
At Step 184, the robot detects whether the person is in a good mood. If the detection result is negative, it is unsuitable for a survey, since a person with a bad mood does not qualify as a surveyee. The robot may analyze the person's face using the AI tools. A long face, gloomy face, or sullen face means a bad mood. A smiling face, cheerful face, or easy-going face indicates a good mood. If the robot determines the person is in a good mood and the distance is below the threshold value, Step 188 is taken and the survey question displayed to the person.
At Step 186, the robot detects whether the person gazes at or looks in a direction toward the robot. If the detection result is negative, it may be unsuitable for a survey, since the person may be engaged with something else and have no time or interest for a survey. If the person gazes at the robot, it means the person pays attention to the robot and may be willing to respond to the robot's request. If the robot determines the person gazes at or looks in a direction toward the robot and the distance between them is below the threshold value, Step 188 is taken and the survey question displayed on the display.
In some embodiments, two or more steps among Steps 180-186 may be combined to generate a stricter requirement for Step 188 to happen. For example, if Steps 180 and 186 are combined, the robot may take Step 188 after it is detected that the person moves at a pace below the preset value, gazes at the robot, and concurrently the distance between the robot and the person is below the threshold value.
In some cases, Step 178 may be skipped and Step 188 may be performed based on one or any combination of Steps 180-186. For example, when the robot detects that a person walks at a pace below the preset value, the robot may go to Step 188 to display a survey question without checking the distance between the robot and the person. In another example, when the robot determines the person is in a good mood and looks at the robot, the robot shows a survey question as described at Step 188 without checking the distance between the robot and the person.
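The selection logic of Steps 176-188 might be sketched as follows. The perception fields, the default thresholds, and the rule for combining the checks (any single check may suffice, or all of them may be required for a stricter condition) are assumptions for illustration and are not fixed by this disclosure.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    distance_m: float        # distance between the robot and the person
    pace_m_per_min: float    # walking pace measured by, e.g., a LiDAR system
    relaxed: bool            # appears free and relaxed (Step 182)
    good_mood: bool          # facial analysis indicates a good mood (Step 184)
    gazing_at_robot: bool    # gaze or facing direction toward the robot (Step 186)

def should_present_survey(obs, max_distance_m=5.0, max_pace=95.0,
                          check_distance=True, require_all=False):
    """Return True when the robot should smile and display a survey question (Step 188)."""
    if check_distance and obs.distance_m > max_distance_m:
        return False                              # the person is still too far away
    checks = [
        obs.pace_m_per_min < max_pace,            # Step 180: not in a hurry
        obs.relaxed,                              # Step 182: free and relaxed
        obs.good_mood,                            # Step 184: in a good mood
        obs.gazing_at_robot,                      # Step 186: looking at the robot
    ]
    # One satisfied check may suffice; requiring all of them corresponds to the
    # stricter combined requirement mentioned above.
    return all(checks) if require_all else any(checks)

# Example usage:
obs = Observation(distance_m=4.0, pace_m_per_min=80.0,
                  relaxed=True, good_mood=True, gazing_at_robot=False)
print(should_present_survey(obs))  # True: close enough and walking at a normal pace
```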
Optionally, at Step 188, the robot may present a survey question on the display, and concurrently, the robot may present the question audibly through a speaker. Alternatively, at Step 188, the robot may present a survey question audibly via the speaker, while the survey question is not presented on the display.
Optionally at Step 188, the robot may also nod its head at a person and/or wave at the person with a hand (or a hand-like part) while facing the person and displaying a survey question. When the robot appears to pay special attention to the person with a smile, a nod, a waving gesture, and/or an approaching act (e.g., a step forward toward the person), it sends the person a warm, welcoming signal that encourages the person to cooperate with the robot. As such, the robot may have a better chance of conducting a survey.
Further, embodiments provided above in this disclosure may be combined when there is no contradiction or conflict. For example, two or more methods depicted in
Thus it can be seen that systems and methods are introduced for dual-function buttons and buttons with a changeable size, and robots are employed as surveyors to conduct surveys.
The dual-function buttons have the following features and advantages:
Buttons with a changeable size have the following features and advantages:
Surveys with robots as surveyors have the following features and advantages:
Although the description above contains many specificities, these should not be construed as limiting the scope of the invention but as merely providing illustrations of some of the presently preferred embodiments. Numerous modifications will be obvious to those skilled in the art.
Ramifications:
Speech recognition and voice generation functions introduced in
Referring to the embodiment shown in
Furthermore, a motion sensing component such as an accelerometer and/or gyroscope may be added to a client system to sense the motion of the client device. For example, shaking or waving a mobile phone in a vertical direction, in a horizontal direction, or in a circle may respectively represent the three answers of a single-action survey.
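Purely as an illustrative sketch, such motion input might be mapped to survey answers as follows; the gesture labels, the classifier that produces them, and the particular answer assigned to each motion are all assumptions rather than part of this disclosure.

```python
# Hypothetical mapping from a classified phone motion to a survey answer;
# which motion corresponds to which answer is a design choice.
ANSWER_BY_GESTURE = {
    "shake_vertical": "Yes",
    "shake_horizontal": "No",
    "wave_circle": "So-So",
}

def answer_from_motion(gesture_label):
    """Translate a classified motion gesture into a single-action survey answer."""
    return ANSWER_BY_GESTURE.get(gesture_label)  # None if the motion is unrecognized

print(answer_from_motion("shake_vertical"))  # -> "Yes"
```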
For mobile phone users, the launch of a survey may feature flashes of light from an embedded light-emitting module, so that a user does not need to look at the screen closely to know a survey has started after a target event is over. This feature, when combined with other easy steps, makes a survey even more convenient. Flashing lights may also be used to remind a user that a survey is in a wait period.
The process described in the flow diagram of
In
A display or display device may include those which are designed for head mounting and have a very small screen or a virtual screen. These displays may be used in virtual reality (VR) systems and augmented reality (AR) systems. Since VR and AR systems don't have a touch screen or a computer mouse, button activation may be performed via other mechanisms like a hand or finger gesture, an eye movement, or a verbal input.
Lastly, when a device is equipped with a proximity sensor or a three-dimensional (3-D) gesture sensor, it may detect a finger or hand position a short distance away from it. Thus finger and hand gestures and movements in the air may be used to complete a single-action survey, too. Examples may include a check mark, a circle, and a straight line for the three answers, created by a finger or hand in the air, preferably close to a screen of the device.
Therefore the scope of the invention should be determined by the appended claims and their legal equivalents, rather than by the examples given.
This is a continuation-in-part of U.S. patent application Ser. No. 17/857,016, filed Jul. 3, 2022, which is a continuation-in-part of U.S. application Ser. No. 16/831,663, filed Mar. 26, 2020, which is a continuation-in-part of U.S. application Ser. No. 15/702,724, filed Sep. 12, 2017, which is a continuation-in-part of U.S. application Ser. No. 15/279,433, filed Sep. 29, 2016, which is a continuation-in-part of U.S. application Ser. No. 14/194,793, filed Mar. 2, 2014, now U.S. Pat. No. 9,483,774, granted Nov. 1, 2016.
| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 17857016 | Jul 2022 | US |
| Child | 19076939 | | US |