This disclosure relates generally to data processing devices and, more particularly, to a method, a device, and/or a system of personal and/or team logistical support.
Individuals and teams of people working together (e.g., families, organizations, companies, business units, departments, government agencies) may have a large variety of things to remember, information to document, and/or objects to use and share. Faced with a growing and changing variety of information and available tools, it may be increasingly difficult to remember personally relevant information and/or communicate information relevant to a team, including the efficient use of shared tools and resources. These challenges can result in logistical inefficiencies, for example in trying to find where a family member or co-worker placed important objects, in receiving an effective reminder that follows up and precipitates action, and/or in conveying relevant information or documentation to a person at the most relevant time and within the most relevant context. With respect to both individuals and teams, there is a continuing need for technology that improves logistics in everyday tasks, including reminders, documentation, and/or object location, each of which may closely relate to the workflow of households, businesses, and government.
Disclosed are a method, a device, and/or a system of personal and/or team logistical support. In one embodiment, a system for geospatial reminder and documentation includes a server and a network communicatively coupling the server to a mobile device and/or a wearable device. The server includes a processor of the server, a memory of the server, a network interface controller of the server, a spatial documentation routine, and a documentation awareness routine.
The spatial documentation routine includes computer readable instructions that when executed on the processor of the server: receive a documentation placement request that includes a documentation content data (comprising a text file of a documentation, a voice recording of the documentation, and/or a video recording of the documentation) and optionally a documentation name and a documentation category; receive a first location data from a mobile device and/or a wearable device; and generate a spatial documentation data and store the spatial documentation data. The spatial documentation data includes a documentation ID, the documentation content data, a documentation location data including a first coordinate of the first location data, and optionally the documentation name and the documentation category.
The documentation awareness routine includes computer readable instructions that when executed on the processor of the server: receive a second location data from the mobile device and/or the wearable device and determine a second coordinate of the second location data is within a threshold distance of the first coordinate of the documentation location data. The documentation awareness routine includes computer readable instructions that when executed on the processor of the server: determine an awareness indicator of the spatial documentation data and transmit a first indication instruction to trigger the awareness indicator on the mobile device and/or the wearable device. The awareness indicator is a sound and/or a vibration used to alert the user to documentation.
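By way of non-limiting illustration, one possible implementation of the threshold-distance determination of the documentation awareness routine is sketched below in Python, using the haversine formula to compute the great-circle distance between the second coordinate and the stored first coordinate. The function and field names (e.g., haversine_m, check_awareness) are hypothetical and chosen for illustration only; they are not part of the claimed system.

```python
# Non-limiting sketch: threshold-distance check of the documentation
# awareness routine. All names (haversine_m, check_awareness, the record
# fields) are hypothetical illustrations, not the claimed implementation.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 coordinates, in meters."""
    r = 6_371_000  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def check_awareness(second_coordinate, spatial_docs, threshold_m=10.0):
    """Return an indication instruction for each stored spatial documentation
    whose first coordinate lies within the threshold distance."""
    lat, lon = second_coordinate
    return [
        {"documentation_id": doc["documentation_id"],
         "indicator": doc.get("indicator", "vibration")}  # sound and/or vibration
        for doc in spatial_docs
        if haversine_m(lat, lon, doc["lat"], doc["lon"]) <= threshold_m
    ]
```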
The server may further include a documentation retrieval routine including computer readable instructions that when executed on the processor of the server may: receive a documentation retrieval request that includes the documentation ID, the documentation retrieval request received from the mobile device and/or the wearable device; and/or transmit the documentation name, the documentation content data, and/or the documentation category.
The server may further include a locating routine including computer readable instructions that when executed on the processor of the server may: receive an object placement request that may include an object name and optionally an object description data and/or an object category; receive a third location data from the mobile device and/or the wearable device; generate a placed object data (that may include a placement ID, the object name and the object ID, an object location data, and/or the object description data and the object category); and/or store the placed object data.
The server may also include an object locating routine. The object locating routine may include computer readable instructions that when executed on the processor of the server may: receive an object locating request that includes the object name, the object ID, and/or the object description; determine a third coordinate of the object location data and/or an area name associated with the object location data; and/or transmit the third coordinate and/or the area name to the wearable device and/or the mobile device.
The server may further include a voice recognition system and/or a remote procedure call to the voice recognition system, the voice recognition system receiving a voice input of a first user and generating a text output. The server may also include a set of computer readable instructions that when executed extract from the text output: (i) the documentation content data, the documentation name, and the documentation category, and/or (ii) the object name, the object description data, the object category, and the area name.
The system may further include the wearable device of the first user. The wearable device of the first user may include a display screen of the wearable device, a processor of the wearable device, a network interface controller of the wearable device, and an activation button of the wearable device that is at least one of a virtual button and a physical button. The wearable device of the first user may also include a voice transmission routine of the wearable device including computer readable instructions that when executed on the processor of the wearable device may: determine activation of the activation button; record the voice input of the first user; and/or transmit the voice input to the server.
The server may also include a group database and a collective memory engine. The group database may store an association between the user ID of the first user and a group ID. The collective memory engine may include computer readable instructions that when executed on the processor of the server may: receive a fourth location data from a mobile device of a second user and/or a wearable device of the second user; determine a user ID of the second user is associated with the group ID; determine a fourth coordinate of the fourth location data is within the threshold distance of the first coordinate of the documentation location data; determine the awareness indicator of the spatial documentation data; and transmit a second indication instruction to execute the awareness indicator on the mobile device of the second user and/or the wearable device of the second user.
The collective memory engine may include computer readable instructions that when executed on the processor of the server may: receive a second object locating request (that may include the object name, the object ID, and/or the object description) received from the mobile device of the second user and/or the wearable device of the second user; determine the third coordinate of the object location data and/or the area name associated with the object location data; and transmit the third coordinate and/or the area name to the mobile device of the second user and/or the wearable device of the second user.
In another embodiment, a personal and/or team logistics support system includes a support hub, a wearable device of a first user, and a network communicatively coupling the support hub and the wearable device of the first user. The support hub includes a processor of the support hub, a memory of the support hub, a network interface controller of the support hub, and a display screen of the support hub. A housing of the support hub encloses the processor of the support hub, the memory of the support hub, and the network interface controller of the support hub, and the display screen of the support hub is set in the housing.
The support hub also includes a voice recognition system and/or a remote procedure call to the voice recognition system, the voice recognition system receiving a voice input of a first user and generating a text output.
The support hub includes a calendar application comprising one or more calendar grids for display on the display screen and a reminder database storing a reminder data. The reminder data includes a reminder ID, a reminder name, a reminder condition data, a reminder content data (including a text file of a reminder, a voice recording of the reminder, and/or a video recording of the reminder), a reminder category, a user ID of a first user defining the reminder, and a reminder location data.
The support hub further includes a reminder routine having computer readable instructions that when executed on the processor of the support hub: (i) receive the text output of the first user; (ii) extract a reminder content data and a reminder condition from the text output; and (iii) record the reminder data in the reminder database.
The wearable device of the first user includes a processor of the wearable device, a network interface controller of the wearable device, a display screen of the wearable device, and an activation button of the wearable device that is at least one of a virtual button and a physical button. The wearable device of the first user further includes a voice transmission routine of the wearable device. The voice transmission routine of the wearable device includes computer readable instructions that when executed on the processor of the wearable device: determine activation of the activation button; record the voice input of the first user; and transmit the voice input to the support hub.
The system may further include a mobile device of the first user. The mobile device may include a processor of the mobile device, a memory of the mobile device, a GPS unit, and a voice transmission routine of the mobile device. The voice transmission routine of the mobile device includes computer readable instructions that when executed on the processor of the mobile device may record the voice input of the first user and/or transmit the voice input to the support hub.
The support hub may further include an object database storing a placed object data having an object name and an object location data, an object description data, an object category, and/or a user ID of the first user. The support hub may also include an object locating engine that includes computer readable instructions that when executed on the processor of the support hub may: receive a second text output of the first user; extract at least one of the object name, the object description, and/or the object category from the second text output of the first user; extract a coordinate of the object location data from a location data received from the mobile device; and record the placed object data in the object database.
The system may also include a coordination server. The coordination server may include a processor of the coordination server, a memory of the coordination server, a collective reminder database, and/or a collective object database. The coordination server may include a collective memory engine that includes computer readable instructions that when executed on the processor of the coordination server: receive a second reminder data and a group ID from the first user (the first user may be associated with the group ID); store the second reminder data in the collective reminder database; lookup a second user associated with the group ID; and deliver the second reminder data to a second support hub of the second user.
The coordination server may also include a collective object database. The collective memory engine may further include computer readable instructions that when executed on the processor of the coordination server may: receive a second placed object data and the group ID from the first user (the first user may be associated with the group ID); store the second placed object data in the collective object database; lookup the second user associated with the group ID; and/or deliver the second placed object data to the second support hub of the second user.
The support hub may include a display screen of the support hub that is a touchscreen. The support hub may also include a pen mount connected to the housing for storing a pen capable of providing a touch input on the touchscreen. The support hub may also include a writing recognition system and/or a second remote procedure call to the writing recognition system, the writing recognition system receiving a written input of the first user and generating the text output. The support hub may further include an event database storing a jurisdiction event data, a personal event data, and/or a collective event data. The support hub may yet further include a scheduling routine that includes computer readable instructions that when executed on the processor of the support hub: receive the text output of the first user; extract a date and optionally a time from the text output; and record an event data as an instance of the personal event data in the event database.
In yet another embodiment, a computer implemented method in support of personal and/or team logistics includes receiving a reminder request including a reminder content data (including a text file, a voice recording and/or a video recording), and a reminder category and/or a user ID of a first user generating the reminder request. The method generates a reminder condition data including a first reminder condition and a second reminder condition of higher urgency than the first reminder condition. The method includes associating within the reminder condition data a first communication medium ID with the first reminder condition and a second communication medium ID with the second reminder condition. The method generates and stores a reminder data including a reminder ID, the reminder condition data, the reminder content data, and optionally the user ID of the first user generating the reminder request.
The method may then determine the occurrence of the first reminder condition. The first communication medium ID that is associated with the first reminder condition may be determined. A reminder notification data that includes the reminder content data is generated and transmitted through the first communication medium to a wearable device of the first user, a mobile device of the first user, and/or a different computer device of the first user. The method may also determine the occurrence of the second reminder condition of the higher urgency, determine the second communication medium ID that is associated with the second reminder condition, and re-transmit the reminder notification data through the second communication medium to the wearable device of the first user, the mobile device of the first user, and/or the different computer device of the first user.
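By way of non-limiting illustration, the following Python sketch shows one way a reminder data record might associate a communication medium ID with each of two conditions of increasing urgency, and how a polling procedure might transmit and later re-transmit the reminder notification. The record shape and the send callable are hypothetical choices made only for illustration.

```python
# Non-limiting sketch: a reminder data record with two conditions of
# increasing urgency, each bound to its own communication medium ID.
# The record shape and the `send` callable are hypothetical.
import time

reminder_data = {
    "reminder_id": "rem-001",
    "content": {"text": "Buy a birthday present"},
    "conditions": [
        # first reminder condition: lower urgency, e.g. a push notification
        {"fires_at": time.time() + 7 * 86400, "medium_id": "push", "fired": False},
        # second reminder condition: higher urgency, e.g. an SMS re-transmission
        {"fires_at": time.time() + 14 * 86400, "medium_id": "sms", "fired": False},
    ],
}

def poll_reminder(reminder, now, send):
    """Fire each condition at most once, through its associated medium."""
    for condition in reminder["conditions"]:
        if not condition["fired"] and now >= condition["fires_at"]:
            send(condition["medium_id"], reminder["content"])  # (re-)transmit
            condition["fired"] = True
```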
The embodiments of this disclosure are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
Other features of the present embodiments will be apparent from the accompanying drawings and from the detailed description that follows.
Disclosed are a method, a device, and/or system of personal and/or team logistical support. Although the present embodiments have been described with reference to specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the various embodiments.
In one or more embodiments and the embodiment of
A scheduling routine 220 can receive data (e.g., generated by the user 102) and parse the data to define an event data to be stored in the events database 222. An object locating engine 230 can similarly receive and/or parse data (e.g., generated by the user 102) to define a placed object data 231 in the object database 232, in one or more embodiments. A recall engine 240 can receive and/or parse data to define a reminder data 241 in the reminder database 242, in one or more embodiments. A spatial documentation engine 250 can receive and/or parse data to store a spatial documentation data 251 in a spatial documentation database 252, in one or more embodiments. Each of the functions, properties and/or advantages of the scheduling routine 220, the object locating engine 230, the recall engine 240, and/or the spatial documentation engine 250 will be shown and described below, and throughout the present embodiments.
In one or more embodiments, the user 102 may directly utilize the support hub 201 when in the presence of the support hub 201. In one or more embodiments, the user 102 may request scheduling of an event, log completion of a task, set a reminder, record a piece of documentation including within a spatial context and/or conditional context, and/or record a placed object 134. For example, in what may be a basic example, the user 102 may activate the recording and transmission capability of the support hub 201, as shown and described herein, and say: “set a dinner party event for July fourteenth”, or “remind me to buy a birthday present for my wife three days before her birthday.” The voice input 161 is recorded as a voice input data 261, processed through a speech recognition system 260, and parsed to define one or more events in the events database 222, as shown and described in conjunction with
In one or more embodiments, the user 102 may also utilize the pen 215, which may be mounted on the support hub 201, to input data by writing on the display screen 212. In one or more embodiments, the display screen 212 is a touchscreen and the pen 215 is a stylus usable to provide an input on the touchscreen. In one or more embodiments, the user 102 may be able to generate an event data by writing directly on the calendar grid 217, or a reminder data 241 and/or a spatial documentation data 251 by writing directly on a map. As a result, the user 102 may be able to emulate a familiar process of writing an event on a paper calendar, recording a reminder on a checklist, and/or providing a handwritten documentation note on a map. The writing of the user 102 can be parsed through a writing recognition system. Events, reminders, and/or documentation can then be recognized, extracted, and stored in one or more appropriate databases shown in the embodiment of
The support hub 201 may be configured for convenience within a home or office setting such that it always displays useable information to the user 102. For example, the display screen 212 of the support hub 201 may by default display a monthly calendar with events specified and/or a daily schedule presented. The support hub 201 may provide reminders for event data stored in the event database 222 (e.g., “you have a call with International Systems Incorporated in ten minutes”). In a warehouse environment, in contrast, the support hub 201 may primarily display a map of placed objects 134 or critical pieces of spatial documentation that can alert a user if they are determined to be near a hazard as sensed through location analysis of the wearable device 300 (e.g., resulting in a message: “Warning! Hydraulic oil leak at loading dock four”).
The user 102 may also use the support hub 201 to document placed objects 134. For example, the user 102 may say, “I placed my passport in the second desk drawer in my study”, or “I placed the key to my shed in the office bookshelf.” The geographic location may also be inferred from GPS coordinates and/or other location placement information, as shown and described herein. A map can be displayed on the display screen 212 (e.g., a floorplan entered by the user 102, a satellite image from Google® Maps) and the user 102 can designate where a placed object 134 was placed utilizing the pen 215, including in conjunction with recording a voice memo that may be stored in the object database 232 (e.g., “the shed key is on the middle shelf”).
The user 102 may later request to query the object database 232, for example “Where did I put my passport?”. The support hub 201, as shown and described in the present embodiments, can then parse the natural-language query of the user 102 to determine the presence of the query, check the object database 232, and reply through a voice output 166. The voice output 166, for example, might be “On June 22 at 3:46 PM you placed your Passport in the second drawer of the desk at 545 Westbrook Street.” In a family and/or workplace environment, users may also be assigned certain authority to find objects of others, to be notified of certain reminders and/or documentation, and to participate in and/or receive other forms of logistical support.
The user 102 may also “place” documentation at a geospatial location and/or in association with some other piece of data specifying location. For example, as shown in the embodiment of
The wearable device 300 and/or the mobile device 500 may act as extensions and further augment the support hub 201, according to one or more embodiments. The wearable device 300 and/or the mobile device 500 may “extend the range” of the support hub 201 farther than the distance from which the support hub 201 (and/or the support server of
In addition, the wearable device 300 and/or the mobile device 500 can extend a capability of the support hub 201. For example, as may be known in the art, a combination of WiFi signal and GPS coordinates may permit a reasonably accurate determination of location within a building and such data can be automatically extracted and stored as the object location data 235 within the placed object data 231.
The wearable device 300 is a computing device attachable to the human body, for example a smart watch (e.g., Apple® Watch), and permitting communication from the user 102 to the support hub 201. In one or more preferred embodiments, the wearable device 300 can record a voice input 161 (e.g., to become the voice input data 261) of the user 102 and/or a text input. The wearable device 300 may also be able to provide a voice output 166 and/or a visual output (not shown in
In one or more embodiments, the user 102 may activate a routine on the wearable device 300 to transmit an input (e.g., a voice input data 261) to the support hub 201 through the network 101, for example to request scheduling of an event, to log completion of a task, and/or to document a placed object 134. The wearable device 300 may be connected to the support hub 201 through the network 101, for example through a shared WiFi connection and/or through a Bluetooth® connection. The wearable device 300 is shown and described in further detail in the embodiment of
The mobile device 500 is a computing device comprising a display screen, for example a smartphone and/or a tablet device. The mobile device 500 can similarly record and transmit voice or text to the support hub 201. However, the mobile device 500 may also be enabled to retrieve and show (e.g., through use of a mobile application) the calendar and events in the events database 222, the reminders of the reminder database 242, the placed objects of the object database 232, and/or documentation of the spatial documentation database 252. The mobile device 500 is shown and described in further detail in the embodiment of
The coordination server 400 is a computing server that can enable more than one instance of the user 102 to utilize the support hub 201 (e.g., an executive and a secretary) and/or functionally associate one or more instances of the support hub 201 (and/or the support server 200 of
The support server 200 of
In one or more embodiments, the support server 200 may receive and record various media files from the user 102. For example, a camera 507 of the mobile device 500 can be utilized for recording reminders of the user 102 which may then be transmitted to the support server 200, according to one or more embodiments. In a specific example, video recordings as reminders may be able to be used as a third-party reminder or remote accountability method, in which a third party reminds the user 102 to carry out or complete a task, or engage in a scheduled event.
The support server 200 includes interfacing elements sufficient for transmitting information to be generated on output devices for the user 102. For example, the support server 200 may transmit output to additional devices and systems over the network 101 (e.g., the support hub 201, the wearable device 300, the coordination server 400, the mobile device 500, and/or other devices and systems).
In one or more embodiments, the support server 200 is voice-enabled. The user 102 may generate a voice input 161 which is detected, recorded, and stored as a voice input data 261. The voice input data 261 may be recorded upon detection of a “wake-word” such as “Memo”. Different wake words may also be assigned to and initiate certain requests, for example setting reminders, placing documentation, or recording placed objects 134. The voice input data 261 may be forwarded to a speech recognition system 260. The speech recognition system 260 comprises computer readable instructions that when executed on the processor 202 detect one or more words within the voice input data 261 and translate the one or more words into a text translation data 265. The speech recognition system 260 may also be provided on a different remote server and/or by a remote software service over the network 101. In such case, the support server 200 may include a remote procedure call (RPC) to the remote instance of the speech recognition system 260. In one or more other embodiments, the speech recognition system 260 may have both local (e.g., stored on the support hub 201 and/or the support server 200) and remote components specializing in certain aspects of voice recognition. For example, the speech recognition system 260 of the support hub 201 may have a sufficient library to recognize the wake word(s) and interpret some useful and/or common interactions in case connectivity issues with the network 101 arise. In a specific example, the local instance of the speech recognition system 260 as shown in
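By way of non-limiting illustration, a hybrid local/remote arrangement of the speech recognition system 260 might be organized as in the following Python sketch, in which wake-word detection and a small library of common interactions are handled locally while other utterances are forwarded through a remote procedure call. The class, the local_model and remote_rpc objects, and the phrase list are hypothetical.

```python
# Non-limiting sketch: hybrid local/remote speech recognition. The class,
# the local_model/remote_rpc objects, and the phrase list are hypothetical.
LOCAL_PHRASES = {"what is my schedule tomorrow", "where did i put my passport"}

class HybridSpeechRecognizer:
    def __init__(self, local_model, remote_rpc, wake_word="memo"):
        self.local_model = local_model  # small on-device model
        self.remote_rpc = remote_rpc    # remote procedure call (RPC) stub
        self.wake_word = wake_word

    def transcribe(self, voice_input_data, network_up=True):
        # Wake-word detection always runs locally, so the hub remains
        # responsive even when connectivity to the network 101 is lost.
        text = self.local_model.decode(voice_input_data).lower()
        if not text.startswith(self.wake_word):
            return None  # utterance was not addressed to the hub
        command = text[len(self.wake_word):].strip()
        if command in LOCAL_PHRASES or not network_up:
            return command  # the local library suffices
        return self.remote_rpc.transcribe(voice_input_data)
```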
In one or more embodiments, the support server 200 may be writing enabled, that is, permitting the user 102 to provide informational input via writing to one or more input devices, including but not limited to the support hub 201 (e.g., the user 102 may provide input on an instance of the display screen 212 that is a touchscreen, as shown and described in conjunction with the embodiment of
The text translation data 265 may be parsed to determine the inclusion of one or more events, object placements, object locating requests, reminder requests, recall requests, documentation requests, and documentation awareness alerts and/or requests. For example, the scheduling routine 220 may determine inclusion of one or more events within the text translation data 265. The scheduling routine 220 comprises computer readable instructions that when executed on the processor 202 carry out a number of operations. A first operation receives the text translation data 265 of the user 102. A second operation determines an event is to be defined within the text translation data 265. For example, terms such as “event,” “schedule,” “birthday,” or other associated terms may be recognized. A third operation extracts a date and/or a time from the text translation data 265. A fourth operation generates an event data with the date and/or the time and stores the event data in the events database 222.
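By way of non-limiting illustration, the four operations of the scheduling routine 220 might be approximated with the keyword-and-pattern extraction sketched below; a production implementation could instead rely on a trained model such as that of the machine learning server 190 described later. The term list, regular expression, and returned field names are hypothetical.

```python
# Non-limiting sketch: keyword-and-pattern approximation of the scheduling
# routine's four operations. Term list, regex, and field names hypothetical.
import re
from datetime import datetime

EVENT_TERMS = ("event", "schedule", "birthday", "appointment", "meeting")
MONTHS = ("january february march april may june july august "
          "september october november december").split()

def parse_event(text_translation, year=None):
    """Return an event data dict if the text defines an event, else None."""
    text = text_translation.lower()
    if not any(term in text for term in EVENT_TERMS):
        return None  # second operation: no event to define
    m = re.search(r"\b(" + "|".join(MONTHS) + r")\s+(\d{1,2})\b", text)
    if m is None:
        return None  # date not recognized; a real parser would ask back
    when = datetime(year or datetime.now().year,
                    MONTHS.index(m.group(1)) + 1, int(m.group(2)))
    return {"name": text_translation, "date": when.date().isoformat()}

# e.g. parse_event("set a dinner party event for july 14")
#   -> {'name': 'set a dinner party event for july 14', 'date': '...-07-14'}
```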
The events database 222 may include stored data defining one or more instances of the event data. The event data, for example, may be a jurisdictional event data 224 (e.g., a global awareness day, a national holiday, a state-government closure date, a local holiday), a personal event data 226 (e.g., an appointment of the user 102, a reminder the user 102 set for himself or herself), and/or a group event data 228 (an event in which two or more instances of the user 102 are invitees, participants, and/or otherwise implicated or involved). Although not shown in the embodiment of
The scheduling routine 220 may also include computer readable instructions that when executed on the processor 202 determine a request for information from the user 102 stored in the events database 222 and query the events database 222. For example, if the user 102 asks “what is my schedule tomorrow,” the scheduling routine 220 can execute instructions that determine the current date and add one day, then query the events database 222 for all events on the resulting date, then generate a voice output data 267 and read off the events to the user 102 through the speaker 208. Alternatively or in addition, the scheduling routine 220 could respond to the question of the user 102 by transmitting data for display on the calendar application 216 of the support hub 201 and/or the calendar application 516 of the mobile device 500 to change a view of the display screen 212 and/or the display screen 512, respectively, to expand and/or open the graphical representation of the next day's schedule such that an hour-by-hour view is shown. The calendar application 216 is further shown and described in conjunction with the embodiment of
In one or more embodiments, the support server 200 may comprise an object locating engine 230, a recall engine 240, and a spatial documentation engine 250, each of which will now be discussed.
The object locating engine 230 comprises computer readable instructions that when executed on the processor 202 may carry out a number of operations. First, a text translation data 265 of the user 102 may be received, for example from the speech recognition system 260 or the writing recognition system 262. A second operation may extract at least one of an object name, an object description data 237, and/or an object category 239 from the text translation data 265 of the user 102. For example, the user 102 may have said “I am placing the hammer in my truck tool box.” The object name may be determined to be “hammer” and the location may be determined to be “truck of user” and/or “tool box of user.” A category may also be determined for the object and/or storage location, for example by reference to a predetermined and/or custom data table. For example, the hammer may be classified as a “tool.”
In one or more embodiments, a placed object data 231 can be defined through a question-and-answer workflow. For example, the user 102 can say “I am placing an object.” The object locating engine 230 can ask, “Please name the object,” then await the answer of the user 102. The object locating engine 230 can then follow up with “where are you placing the object?,” and await the next answer. And finally, for example, the user 102 can be asked, “please give a brief description of the item or provide a memo”, which the user 102 may then provide and which can be stored as the object description data 237 (abbreviated “Obj. Description Data 237” in
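By way of non-limiting illustration, the question-and-answer workflow above might be driven by a simple prompt table such as the following Python sketch, in which the ask callable stands in for one round trip through the speaker 208 and the speech recognition system 260. The prompt strings and field names are hypothetical.

```python
# Non-limiting sketch: prompt-table-driven question-and-answer workflow for
# defining a placed object data. Prompts and field names are hypothetical.
PROMPTS = (
    ("object_name", "Please name the object."),
    ("object_location", "Where are you placing the object?"),
    ("object_description",
     "Please give a brief description of the item or provide a memo."),
)

def placed_object_dialog(ask):
    """`ask` stands in for one voice round trip: it speaks a prompt and
    returns the transcribed answer of the user."""
    return {field: ask(prompt) for field, prompt in PROMPTS}

# e.g. placed_object_dialog(input) exercises the flow on a console.
```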
In a third process, a location may be determined and associated with the placed object data 231. For example, a location data 520 may be extracted from a GPS unit 515 of the mobile device 500, as shown and described in
The user 102 may query the object locating engine 230 verbally, for example by asking “where did I leave my watch?”, or by issuing a directive such as “alert me next time I am close to an object I placed.” The object locating engine 230 may query the object database 232 and then transmit data over the network 101 for generating a voice output data 267 and/or a text output data 269 which can be communicated via the speaker 208 or displayed on the display screen 212 of the support hub 201, respectively (and/or the speaker 508 or the display screen 512 of the mobile device 500). Similarly, the voice output data 267 and/or the text output data 269 can be communicated to the wearable device 300.
In one or more embodiments, the object locating engine 230 may comprise an object placement routine 234 and an object locating routine 236. The object placement routine 234 may comprise computer readable instructions that when executed receive an object placement request comprising an object name and optionally an object description data 237 and an object category 239. The computer readable instructions of the object placement routine 234 may, when executed: (i) extract a coordinate 155 from a location data received from at least one of the mobile device 500 of the user 102 (e.g., the location data 520) and/or the wearable device 300 of the user 102 (e.g., the location data 320); (ii) store the coordinate 155 extracted from the location data as a coordinate of the object location data 235; and (iii) generate a placed object data 231 comprising a placement ID 233, the object name, the object location data 235, and optionally the object description data 237 and the object category 239. The object placement routine 234 may then include computer readable instructions that when executed store the placed object data 231 in the object database 232.
The object locating routine 236 may include computer readable instructions that when executed: (i) receive an object locating request including the object name, the object ID (not shown), and/or a second instance of an object description data; (ii) determine the coordinate 155 of the object location data 235 and/or an area name (not shown) associated with the placed object data 231; and (iii) transmit the coordinate 155 and/or the area name to the wearable device 300 of the first user 102 and/or the mobile device 500 of the first user 102.
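By way of non-limiting illustration, the object locating routine 236 might resolve a spoken object name against the object database 232 by approximate string matching, as in the Python sketch below. The use of difflib and the record field names are hypothetical choices for illustration only.

```python
# Non-limiting sketch: resolving an object locating request by approximate
# name match. The use of difflib and the record fields are hypothetical.
import difflib

def locate_object(query_name, object_database):
    names = [record["object_name"].lower() for record in object_database]
    close = difflib.get_close_matches(query_name.lower(), names, n=1, cutoff=0.6)
    if not close:
        return None  # nothing similar enough on record
    record = next(r for r in object_database
                  if r["object_name"].lower() == close[0])
    return {"coordinate": record["coordinate"],
            "area_name": record.get("area_name")}

# e.g. locate_object("hammer", [{"object_name": "Hammer",
#     "coordinate": (47.61, -122.33), "area_name": "truck tool box"}])
```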
In one or more embodiments, the recall engine 240 may comprise a reminder routine 244, a recall routine 246, and a spatial recall agent 248. The reminder routine 244 may comprise computer readable instructions that when executed receive a reminder request that includes a reminder content data 247 comprising a text file, a voice recording, and/or a video recording. The reminder request may further include a reminder category (not shown), and a user ID 280 of a first user 102 generating the reminder request. The reminder routine 244 may comprise computer readable instructions that when executed generate a reminder condition data 249 that may include one or more conditions, for example a first reminder condition and a second reminder condition of higher urgency than the first reminder condition. As just one example, the first condition may be the expiration of one week, and the second reminder condition may be the expiration of another week. The reminder routine 244 may comprise computer readable instructions that when executed associate within the reminder condition data 249 a first communication medium ID 282 (e.g., an instance of the communication medium ID 282, as shown and abbreviated “Comm. Medium ID 282” in
The reminder routine 244 may comprise computer readable instructions that when executed generate a reminder data 241 comprising a reminder ID 243, the reminder condition data 249 (e.g., comprising the first reminder condition and the second reminder condition), the reminder content data 247, and optionally the user ID 280 of the user 102 generating the reminder request. Although not shown, a user ID 280 of a user to whom the reminder is addressed and/or is to be otherwise provided may also be designated or stored. The reminder routine 244 may then store the reminder data 241, for example in the reminder database 242.
The recall routine 246 may include computer readable instructions that when executed: (i) determine the occurrence of the first reminder condition; (ii) determine the first communication medium ID 282 that is associated with the first reminder condition; and (iii) generate a reminder notification data comprising the reminder content data 247. The recall routine 246 may further include computer readable instructions that when executed transmit the reminder content data 247 through the first communication medium to a wearable device 300 of the user 102, a mobile device 500 of the user 102, and/or a different computer device of the user 102 (e.g., a desktop computer, a laptop, a server). Similarly, the recall routine 246 may determine the occurrence of the second reminder condition of the higher urgency and determine the second communication medium ID 282 that is associated with the second reminder condition. The recall routine 246 may then execute computer readable instructions that re-transmit the reminder notification data through the second communication medium to the wearable device 300 of the user 102, the mobile device 500 of the user 102, and/or the different computer device of the user 102.
In one or more embodiments, a spatial component to the reminder may also be defined. For example, the reminder routine 244 may further include computer readable instructions that when executed: (i) extract a first coordinate 155 from a first location data received from at least one of the wearable device 300 of the user 102 (e.g., the location data 320) and/or the mobile device 500 of the user 102; and (ii) store the first coordinate 155 extracted from the first location data as the first coordinate 155 of a reminder location data 245 within the reminder data 241. A reminder associated with a coordinate 155 may be referred to as a placed reminder 144 (not shown). In combination with the storage of the reminder location data 245, the spatial recall agent 248 comprises computer readable instructions that when executed determine that a mobile device 500 of the first user 102 and/or the wearable device 300 of the first user 102 is within a threshold distance 156 of the coordinate 155 of the reminder location data 245 (e.g., within one meter, five meters, ten meters, 100 meters). In one or more embodiments, the first reminder condition and the second reminder condition may even be defined as moving within the threshold distance 156 of the first coordinate 155 of the reminder location data 245 (e.g., the first time the user 102 enters an area and the second time the user 102 enters an area).
In one or more embodiments, the spatial documentation engine 250 comprises a documentation routine 254, a documentation query routine 256, and a documentation awareness agent 258. The documentation routine 254 may include computer readable instructions that when executed receive a documentation placement request that may include a documentation content data 257 including a text file, a voice recording, and/or a video recording. The documentation placement request may optionally include a documentation name and a documentation category (neither of which are shown in the embodiment of
The spatial documentation data 251 may be manually queried, for example after viewing on the display screen 212 of the support hub 201, as shown in
In one or more embodiments, the spatial documentation data 251 may be automatically queried and/or a notification of its availability may be provided to the user 102, including within the context of spatial relevance. In one or more embodiments, the documentation awareness agent 258 comprises computer readable instructions that when executed determine that the mobile device 500 of the first user 102 and/or the wearable device 300 of the first user 102 is within a threshold distance 156 of the coordinate 155 of the documentation location data 255. The documentation awareness agent 258 may further include computer readable instructions that when executed determine an awareness indicator 259 of the spatial documentation data 251 (which may be a default, may be elevated based on importance, and/or may be specified by the user 102 at the time of generating the documentation placement request). In one or more embodiments, the awareness indicator 259 includes data specifying a sound (e.g., cause a ringing sound and/or a “ping” sound on the mobile device 500) and/or a vibration (e.g., cause the mobile device 500 and/or the wearable device 300 to buzz, shake, and/or vibrate). The documentation awareness agent 258 may further include computer readable instructions that when executed transmit an instruction to execute the awareness indicator (e.g., a documentation awareness notification) on the mobile device 500 of the first user 102 and/or the wearable device 300 of the first user 102.
A machine learning interface 290 may include one or more procedures for interfacing with the machine learning server 190. Referring back to
In one or more embodiments, the machine learning server 190 may utilize an instance of the artificial neural network for recognizing request types (e.g., an event request, an object placement request, an object locating request, a reminder request, a documentation placement request, and/or a documentation retrieval request). For example, training datasets may include requests which are reviewed and marked (e.g., by human analysis) as a certain request type. In one or more embodiments, the artificial neural network may be used to build a database of information related to room names and associated coordinates 155. For example, users 102 may consistently include a location name within a request while a similar set of coordinates 155 are consistently received from devices of those users 102. Therefore, similar coordinates 155 in future requests may be correlated with the location name, even when the location name is not included in the request. In one or more embodiments, the artificial neural network may be usable to add metadata to the placed object data 231, the reminder data 241, and/or the spatial documentation data 251. For example, the artificial neural network may be trained to recognize an object category based on an object name. This may be especially useful for locating placed objects 134. A user 102 may be able to ask “where are the building tools” (e.g., a general category of “building tool” and/or “tool”), and receive from the trained artificial neural network a coordinate showing a hammer stored within a building shed.
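By way of non-limiting illustration, the correlation of coordinates 155 with previously observed area names could be approximated without a neural network by a nearest-centroid index such as the following Python sketch, which reuses the hypothetical haversine_m helper from the earlier sketch; a trained model could replace the simple distance rule. All names are hypothetical.

```python
# Non-limiting sketch: nearest-centroid correlation of coordinates with
# previously observed area names. Reuses the hypothetical haversine_m helper
# defined in the earlier sketch; all other names are likewise hypothetical.
from collections import defaultdict

class AreaNameIndex:
    def __init__(self):
        self.samples = defaultdict(list)  # area name -> [(lat, lon), ...]

    def observe(self, area_name, coordinate):
        """Record a coordinate reported alongside a spoken location name."""
        self.samples[area_name].append(coordinate)

    def infer(self, coordinate, max_m=15.0):
        """Return the learned area name nearest the coordinate, if any."""
        best_name, best_distance = None, max_m
        for name, points in self.samples.items():
            lat = sum(p[0] for p in points) / len(points)
            lon = sum(p[1] for p in points) / len(points)
            d = haversine_m(coordinate[0], coordinate[1], lat, lon)
            if d < best_distance:
                best_name, best_distance = name, d
        return best_name  # None if no learned area is close enough
```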
The support server 200 may include interfacing elements sufficient for receiving input information from the user 102. For example, the support server 200 may receive input from the user 102 via a microphone 210, through a touchscreen capability of the display screen 212 (including without limitation through use of the pen 215), a physical keyboard, a virtual keyboard displayed on the display screen, and/or through input of another communicatively coupled device (e.g., the mobile device 500).
The camera 207 may be a video camera that can be utilized for recording reminders of the user 102, according to one or more embodiments. For example, the user 102 may direct, by voice activation (or a button press), the creation of a video or picture memo. In one or more other embodiments, the camera 207 may be able to be used as a third-party reminder or remote accountability method, in which a third party reminds the user 102 to carry out or complete a task or engage in a scheduled event.
The support hub 201 may include interfacing elements sufficient for generating output information for the user 102. For example, the support hub 201 may generate sound and/or voice output using the speaker 208, may display information visually on the display screen 212, and/or may transmit output to additional devices and systems over the network 101 (e.g., the support server 200, the wearable device 300, the coordination server 400, the mobile device 500, the machine learning server 190, and/or other devices and systems).
In one or more embodiments, the support hub 201 may be writing enabled, that is, permitting the user 102 to provide informational input via writing to one or more input devices, including but not limited to the support hub 201. The user 102 may provide input on an instance of the display screen 212 that is a touchscreen.
The calendar application 216 may be provided for convenience and a holistic approach to logistics for the user 102 and/or the group of users 102. The calendar application 216 may include computer readable instructions for displaying and managing a calendar, including one or more calendar grids 217 for display on the display screen and optionally one or more calendar graphics 218. An example of the calendar grid 217 is illustrated on the display screen 212 of
In one or more embodiments, in addition or as an alternative to a wake word, the support hub 201 may have a voice interface activated through use of a physical button in a housing 203 of the support hub 201 and/or a graphical button displayed on the display screen 212, where the display screen 212 is a touchscreen. The housing 203 may be made of metal, plastic, or another suitable encapsulating material. Examples of the housing 203 are shown and described in conjunction with the embodiment of
A voice transmission module 318 may read the stored voice input data 261 from the memory 304 and transmit the voice input data 261 through the network 101 to the support server 200 and/or the support hub 201, including without limitation through an instance of the mobile device 500 communicatively “paired” with the wearable device 300 (e.g., through a Bluetooth® or similar connection). In one or more other embodiments, the network interface controller 306 may include a WiFi and/or cellular (e.g., 4G, LTE, 5G) interface capability.
The wearable device 300 may also include computer readable instructions that when executed on the processor 302 receive a notification and/or message from the support server 200 and/or the support hub 201 and communicate the notification and/or message to the user 102. For example, a voice output data 267 (not shown in the embodiment of
In one or more embodiments, if the wearable device 300 includes the display screen 312, the user 102 may interact with the support server 200 and/or the support hub 201 through a graphical user interface. If the screen is relatively small, there may be one or more instances of a command button 319 available on the graphical user interface, for example that requests the next sequential event data to be displayed on the display screen 312 and/or announced on the speaker 308, or requests the documentation content data 257 following a documentation awareness notification. The command button 319, for example, may also return a short menu of placed objects 134 and/or instances of the placed documentation 154 within proximity of the wearable device 300.
The wearable device 300 may also be configured to generate a location data 320 based on WiFi connectivity, use of a GPS unit (not shown in the embodiment of
The wearable device 300 may include a fastener 314 for attaching to the human body. The fastener 314, for instance, may attach to the wrist, finger, arm, ankle, neck, forehead, or other human body part. The wearable device 300 may be, for example, an Apple Watch, an ASUS ZenWatch, an Eoncore GPS/GSM/Wifi Tracker, a FitBit Blaze, a Revolar Instinct device, a Ringly Luxe, a Vufine+ Wearable Display, an Amazfit Verge Smartwatch, and/or a GUESS® Men's Stainless Steel Connect Smart Watch.
The group database 410 defines one or more group profiles which have associated user profiles (e.g., a user profile of the user 102). The group database 410 includes a group ID 412 and one or more associated instances of a user ID 280A through a user ID 280N. Each of the group ID 412 and the user ID 280 may be a unique identifier (e.g., an email address, a phone number, a user name) and/or a globally unique identifier (e.g., a random alphanumeric string). In turn, each user ID 280 may be associated with a known instance of the support hub 201 and/or support server 200, for example through a device ID such as a MAC address, IP address, or other identifier.
The collective events database 422 includes a group event data 228, which may include any of the data specified for an event data, as shown and described in conjunction with
The collective memory engine 470 includes computer readable instructions that manage the content, queries, and/or the permissions of the collective database 472. The collective database 472 may include data from the object database 232, the reminder database 242, and/or the spatial documentation database 252. For example, a placed object data 231 may further include the group ID 412 of a group which may query and/or otherwise have access to the data of the placed object data 231. The reminder data 241 and/or the spatial documentation data 251 may also include an associated instance of the group ID 412. Alternatively or in addition, a different user ID 280 may be defined to have access (for example, a user 102A associated with the user ID 280A defines a spatial documentation data 251 that is viewable by a user 102B associated with the user ID 280B). The reminder data 241 may further have a designated recipient user 102 (“addressee”), or a user 102 and/or group of users 102 to whom the reminder is addressed. The reminder data 241 may also have differing users depending on a triggered condition, for example within a business context a first reminder going to a lower-level manager and a second reminder going to a higher-level manager. Placed object data 231 from one or more object databases 232 may be designated for a group. New instances of a placed object data 231 accessible by a group may be defined, uploaded to the coordination server 400, and distributed similarly to a new event of the group event data 228.
The coordination server 400 may include an authentication system 406 and/or an authorization system 408. The authentication system 406 authenticates one or more users 102 requesting access to the data of the collective events database 422 and/or the collective database 472. Techniques known in the art of computing and cybersecurity may be used, such as two factor authentication. The authorization system 408 may determine whether a user 102 has sufficient permission to query, read, write, or otherwise interface with data stored in the collective database(s) 472. For example, a user 102 may have authorization to read from the documentation content data 257 of a spatial documentation data 251, but not to write to it. In another example, the user 102 may have the authority to receive a documentation awareness notification (e.g., such that the user 102 knows documentation is available), but not to request the associated documentation content data 257 without permission. Such permission may be requested through a message sent to an appropriate administrative user, including for example through the support server 200 and/or the support hub 201.
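By way of non-limiting illustration, the authorization system 408 might consult a per-user, per-record rights table before serving a request, as in the Python sketch below, in which a user granted only the awareness right can be notified that documentation exists but cannot read its content without requesting permission. The rights names and table shape are hypothetical.

```python
# Non-limiting sketch: per-user, per-record rights check of the kind the
# authorization system 408 might apply. Rights names and table hypothetical.
RIGHTS = {
    # (user ID, documentation ID) -> granted rights
    ("user-102B", "doc-017"): {"receive_awareness", "read"},
    ("user-102C", "doc-017"): {"receive_awareness"},  # notified, cannot read
}

def authorize(user_id, documentation_id, right):
    return right in RIGHTS.get((user_id, documentation_id), set())

assert authorize("user-102B", "doc-017", "read")
assert authorize("user-102C", "doc-017", "receive_awareness")
assert not authorize("user-102C", "doc-017", "read")  # must request permission
```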
The coordination server 400 may include the speech recognition system 450 and the writing recognition system 452, as shown and described in conjunction with
In one or more other embodiments, where two or more instances of the support hub 201 and/or the support server 200 are networked, functions of the coordination server 400 (including but not limited to storage of the group database 410 and/or the collective database 472) may be carried out by a designated instance of the support hub 201 as a master node.
For purposes of the following description, a first user 102A may have defined data in the collective database 472, and a second user 102B may be the recipient and/or beneficiary of the data. In one or more embodiments, the collective memory engine 470 comprises computer readable instructions that when executed: (i) determine the user 102B is associated with the group ID 412 and/or otherwise has authorization; (ii) determine the occurrence of the first reminder condition (e.g., of the reminder condition data 249); (iii) determine the first communication medium ID 282 that is associated with the first reminder condition; (iv) generate a reminder notification data that includes the reminder content data 247; and (v) transmit the reminder content data 247 through the first communication medium to a wearable device 300 of the user 102B, a mobile device 500 of the user 102B, and/or a different computer device of the user 102B. In one or more embodiments, the collective memory engine 470 comprises computer readable instructions that when executed: (i) determine the user 102B is associated with the group ID 412 and/or otherwise has authorization; (ii) determine that the mobile device 500 of the user 102B and/or the wearable device 300 of the user 102B is within the threshold distance of the coordinate 155 of a documentation location data 255; (iii) determine the awareness indicator 259 of the spatial documentation data 251; and (iv) transmit the instruction to execute the awareness indicator 259 on the mobile device 500 of the user 102B and/or the wearable device 300 of the user 102B.
In one or more embodiments, the collective memory engine 470 comprises computer readable instructions that when executed: (i) determine the user 102B is associated with the group ID and/or otherwise has authorization; (ii) receive a documentation retrieval request including the documentation ID 253 from the mobile device 500 of the user 102B and/or the wearable device 300 of the user 102B; and (iii) transmit the documentation name, the documentation content data 257, and/or the documentation category to the mobile device 500 of the user 102B and/or the wearable device 300 of the user 102B.
In one or more embodiments, the collective memory engine 470 comprises computer readable instructions that when executed: (i) determine the user 102B is associated with the group ID and/or otherwise has authorization; (ii) receive an object locating request including the object name, the placement ID 233, and/or the object description data 237 from the wearable device 300 of the user 102B and/or the mobile device 500 of the user 102B; (iii) determine the coordinate 155 of the object location data 235 and/or an area name associated with the object location data 235; and (iv) transmit the coordinate 155 and/or the area name to the wearable device 300 of the user 102B and/or the mobile device 500 of the user 102B. The reminder request and/or the reminder data 241 may include a user ID 280 of a user 102B to which the reminder content data 247 is addressed.
The mobile device 500 can carry out many of the functions of the wearable device 300 of
Operation 610 determines a communication medium (e.g., call, email, text, push notification) and/or device (e.g., the mobile device 500, the support hub 201 of a user 102) to assign to a reminder notification. For example, the determination of operation 610 may be designated and stored as the communication medium ID 282. Operation 612 determines whether another condition and/or recipient should be set. If another condition and/or recipient should be set, operation 612 returns to operation 606. Otherwise, operation 612 proceeds to operation 614. Operation 614 generates a reminder data 641, for example including one or more of the elements illustrated in the embodiment of
Operation 708 generates a reminder notification data. The reminder notification data may include data extracted from the reminder database 242 and/or the reminder data 241. For example, the reminder notification data may include a reminder location data 245 (including any associated coordinate 155), a reminder content data 247, and/or a user ID 280 (e.g., of a user 102 setting the reminder). Operation 710 transmits the reminder notification data to the target device(s) specified in operation 704 and through the specified communication medium(s). It should be noted that the reminder notification data may be sent to multiple instances of the user 102, sometimes on different devices and/or through different communication mediums. For example, a primarily responsible user 102A may receive a voice recording of a reminder from their manager sent to both the user 102A's mobile device 500 and their email, while a user 102B that is the manager may simultaneously receive just an email. Operation 712 determines if the reminder is resolved. For example, the user 102 may select to “reply” that the subject matter of the reminder has already been addressed, “snooze” the reminder, re-assign the reminder to a different user 102, or indicate the reminder is moot or no longer relevant. If the reminder is not resolved, operation 712 may retain the reminder data 241 in the reminder database 242, and operation 712 may return to operation 700. If the reminder is resolved, operation 712 may proceed to operation 714 which may delete the associated reminder data 241 or mark the reminder data 241 as resolved (e.g., such that future reminders may not be sent out and/or a location of the reminder is not displayed on a map).
Operation 804 may store the coordinate in a documentation location data 255, including any coordinate determined from the location name. Operation 806 determines whether an awareness indicator is to be defined. The awareness indicator, for example, may involve a passive monitoring process that indicates and/or notifies a user 102 of the availability of documentation upon a condition. The awareness indicator may therefore increase the probability that documentation relevant to the user 102 is presented in context. If no awareness indicator is to be defined, operation 806 may proceed to operation 814. However, if an awareness indicator is to be defined, operation 806 may proceed through operation 808 to operation 810, which selects an awareness indicator (e.g., from an available list). For example, the awareness indicator may initiate a vibration and/or sound on a device of the user 102, e.g., by sending a push notification to the mobile device 500. The awareness indicator may be stored as the awareness indicator 259.
Operation 812 may receive a selection of an importance level of the documentation. For example, certain pieces of documentation may relate to convenience or preference of family and/or coworkers (e.g., “please take off your shoes even when entering the laundry room”), whereas others may relate to health and safety (e.g., “Warning: always ensure the pressure gauge is below 400 psi before initiating the transfer of liquid nitrogen into the holding tank or severe injury could result”). The importance level may also change and/or determine the awareness indicator. Operation 814 generates a spatial documentation data 251, including for example storing any of the data as shown and described in conjunction with the corresponding embodiment.
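As an illustration of operations 804 through 814, the sketch below assembles a spatial documentation record from a coordinate, an optional awareness indicator, and an importance level. The dictionary layout and the escalation policy at the end are assumptions, not features recited by this disclosure:

```python
def generate_spatial_documentation(content, coordinate,
                                   awareness=None, importance="convenience"):
    """Operations 804-814 in miniature: store the coordinate (804),
    optionally attach an awareness indicator (806-810), record an
    importance level (812), and generate the record (814)."""
    doc = {
        "content": content,                      # documentation content data
        "location": {"coordinate": coordinate},  # documentation location data
        "importance": importance,
    }
    if awareness is not None:                    # decision of operation 806
        doc["awareness_indicator"] = awareness   # awareness indicator 259
    # Assumed policy: safety-critical documentation escalates its indicator.
    if importance == "safety" and doc.get("awareness_indicator") == "sound":
        doc["awareness_indicator"] = "sound_and_vibration"
    return doc

doc = generate_spatial_documentation(
    "Warning: keep the pressure gauge below 400 psi.",
    coordinate=(37.7793, -122.4193),
    awareness="sound", importance="safety",
)
print(doc)
```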
Operation 908 determines an awareness indicator 259, for example the awareness indicator set in operation 810.
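The proximity determination that precedes operation 908, i.e., deciding that a second coordinate falls within the threshold distance 156 of the documentation location, can be sketched with a standard haversine calculation. The helper below is generic geodesic code, not code from this disclosure:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def within_threshold(device_coord, doc_coord, threshold_m=5.0):
    """True if the device is within the threshold distance of the
    documentation location, at which point the stored awareness
    indicator would be triggered (operation 908)."""
    return haversine_m(*device_coord, *doc_coord) <= threshold_m

print(within_threshold((37.77930, -122.41930), (37.77932, -122.41935)))
```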
Operation 1106 determines the placement ID 233, for example based on the index determined in operation 1102. It should be noted that the placement ID 233 may, in one or more embodiments, represent an identifier of the particular “placement” of the placed object 134, rather than an identifier of the placed object 134 itself. In one or more embodiments, the object may also have its own unique identifier, which may be assigned and/or predetermined (e.g., an object ID, not shown in the illustrated embodiment).
Operation 1112 may receive location data of a device, for example the mobile device 500 and/or the wearable device 300. A coordinate 155 may be extracted from the location data. Operation 1114 determines whether the user 102 is within a threshold distance 156 of the placed object 134 and/or within a defined area of the placed object 134, as may be determined from the location data of the device of the user 102. If the user 102 is not within the area, operation 1114 may proceed to operation 1116, which may determine whether a timer has expired. For example, the timer may have been set in association with the execution of operation 1110. If the timer has not expired, operation 1116 may return to operation 1114. Otherwise, if the timer has expired (e.g., a timeout), it may be inferred that the user 102 is no longer searching for the placed object 134, and operation 1116 may proceed to operation 1118A, which retains the placed object data 231, for example in the object database 232.
If the user 102 is within the area as determined in operation 1114, operation 1114 proceeds to operation 1120. Operation 1120 may determine whether the placed object 134 was moved, including by prompting the user 102 to provide information. In one or more embodiments, if the user 102 is determined to be proximate to the placed object 134 following a locating request in operation 1100, it may be assumed the user 102 found the placed object 134. If the placed object 134 has not been moved (such information may be requested from the user 102 and/or automatically determined), operation 1120 may proceed to operation 1118B, which retains the placed object data 231. If the placed object 134 has been determined to have moved (or is assumed to have moved), operation 1120 may proceed to operation 1122, which may delete and/or archive the placed object data 231 and/or prompt the user 102 to define a new placed object data 231.
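The locating flow of operations 1112 through 1122 amounts to polling the device location against the placed object until the user arrives or the timer expires. The sketch below compresses that loop into one function; the callable interfaces, timeout, and polling interval are illustrative assumptions:

```python
import time

def locate_object_flow(is_near, confirm_moved, timeout_s=300, poll_s=5):
    """Operations 1112-1122 in miniature. is_near() compares the device's
    latest coordinate against the placed object (e.g., via a haversine
    check); confirm_moved() queries the user per operation 1120."""
    deadline = time.monotonic() + timeout_s    # timer set near operation 1110
    while time.monotonic() < deadline:         # operation 1116: timer expired?
        if is_near():                          # operation 1114: within area?
            # Operation 1120: user is proximate; determine if the object moved.
            return "archive_and_redefine" if confirm_moved() else "retain"
        time.sleep(poll_s)                     # re-check after a short delay
    return "retain"  # timeout: infer the search ended (operation 1118A)

# Example: the user reaches the object on the second poll and says it moved.
checks = iter([False, True])
print(locate_object_flow(lambda: next(checks), lambda: True,
                         timeout_s=30, poll_s=0))
```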
Operation 1210 determines whether any attributes of the request are missing. For example, for defining an event data, operation 1210 may determine missing attributes of a placed object data 231, a reminder data 241, and/or a spatial documentation data 251 (and/or any necessary or highly desirable missing attributes, as may be predetermined). Where the request type is the retrieval of information, operation 1210 may determine whether enough information has been obtained for a match against an index and/or whether a close match is obtained to one or more existing instances of the event data, the placed object data 231, the reminder data 241, and/or the spatial documentation data 251. Natural language search may also be used in this process. If attributes are missing, operation 1210 proceeds to operation 1212, which may query the user 102 (e.g., send a request for the additional values of the empty attributes) on the device of the user 102. Operation 1214 may then receive the missing values (or an attempt to submit the missing values) and return to operation 1210 to undergo another completeness evaluation. Once no missing attributes are determined, operation 1210 may proceed to operation 1216, which may utilize the data parsed from the text output data 269 to fulfill the request type.
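The completeness loop of operations 1210 through 1216 can be sketched as repeated validation against a set of required attributes. The required-field lists below are illustrative assumptions rather than a schema from this disclosure:

```python
REQUIRED = {  # assumed required attributes per request type
    "placed_object": {"object_name", "object_location"},
    "reminder": {"content", "condition"},
    "spatial_documentation": {"content", "coordinate"},
}

def complete_request(request_type, attrs, ask_user):
    """Operations 1210-1216 in miniature: find missing attributes (1210),
    query the user for them (1212), merge the answers (1214), and repeat
    until the record is complete (1216)."""
    required = REQUIRED[request_type]
    while True:
        missing = required - attrs.keys()      # operation 1210
        if not missing:
            return attrs                       # operation 1216: use the data
        answers = ask_user(sorted(missing))    # operations 1212 and 1214
        attrs.update(answers)

filled = complete_request(
    "placed_object",
    {"object_name": "label printer"},
    ask_user=lambda missing: {k: "aisle six" for k in missing},
)
print(filled)
```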
Each of several examples will now be described. The examples are each plotted on a map of the business location and viewable on a tablet device (e.g., an instance of the mobile device 500) as if from an administrator's point of view (e.g., having permission and/or authority to view all information within the databases). The tablet device may communicate through a local WiFi network or other network to the support hub 201 (e.g., located in the corporate offices), the support server 200 (e.g., located off-site), and/or the coordination server 400 (e.g., located off-site).
First, a set of placed objects 134, shown as the placed object 134A.1 through the placed object 134A.n, may have been stored in a storage closet. The mapping application 517 plotting on the map and/or the user interface may group several geospatial points together, which can be expanded when the user 102 selects the grouped point on the touchscreen. The list of the placed object 134A.1 through the placed object 134A.n may be available to all employees (e.g., in case a piece of equipment may be needed in the main showroom), but the location may only be viewable by instances of the user 102 that are corporate personnel. A placed object 134B may be associated with a forklift. Unlike some instances of the placed object 134, the forklift may have a small device installed that is communicatively coupled over the network 101 (e.g., WiFi) to determine its whereabouts in real time, for example updating a corresponding instance of the object description data 237 of the placed object data 231 (and/or modeling the forklift as a permanent object with an object ID). All employees may have an awareness of the location of the forklift and may, for example, be notified if the forklift approaches a door between the warehouse and the showroom (e.g., to prepare employees for ensuring customers are out of the way).
A placed object 134C may be a set of inventory that is incorrectly listed in an enterprise resource planning (ERP) software of the business. For example, for logistical reasons the business may have temporarily moved inventory from the location in the warehouse where it normally would be located into a different area of the warehouse. A user 102 who is a member of warehouse personnel may have quickly provided a voice input 161 on a wearable device 300, for example: “I am placing a pallet of our flat screen televisions in aisle six of the warehouse so we have room to process our next shipment.” The voice input 161 may be processed through speech recognition (e.g., via the speech recognition system 260) and result in generation of the placed object data 231.
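One plausible back end for this example, offered purely as an illustration since the disclosure does not specify how the output of the speech recognition system 260 is parsed, is a light pattern match over the transcript that extracts an object description and a location name:

```python
import re

def parse_placement_transcript(transcript: str):
    """Hypothetical parse of a recognized voice input into candidate
    fields of a placed object data. The single regex is an illustrative
    assumption; a production system would use a fuller NLU step."""
    m = re.search(r"placing (?:a |an )?(.+?) in (.+?)(?: so | because |$)",
                  transcript, flags=re.IGNORECASE)
    if not m:
        return None
    return {"object_description": m.group(1).strip(),  # object description data
            "location_name": m.group(2).strip()}       # feeds object location data

print(parse_placement_transcript(
    "I am placing a pallet of our flat screen televisions in aisle six of "
    "the warehouse so we have room to process our next shipment."))
```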
A placed documentation 154A may be appended to the placed object 134B, e.g., such that the documentation location data 255 is updated with motion of the forklift. The placed documentation 154A may have an awareness condition whereby any user 102 within a threshold distance of 5 meters is informed that they are to be wearing a hard hat, per regulatory requirements. In contrast, the placed documentation 154B may have no awareness condition defined, but rather may document that a certain side door is to remain unlocked during business hours per a city ordinance. The documentation of the placed documentation 154B, for example, may be available to janitorial staff, including new instances of the user 102 who are in training.
A placed reminder 144A may be associated with a location in a demonstration (“demo”) area. The placed reminder 144A may remind any employee walking by the demo area to check for out-of-place inventory or demonstration floor models that customers may be able to test, for example to ensure they are not left where other customers could trip over them or break them.
The reminder may be given at most once every two hours and to at most one employee (e.g., a reminder condition) so that every employee is not reminded every time they walk by. A placed reminder 144B may apply to janitorial personnel only. The placed reminder 144B may be a reminder that the bathroom stalls are to be checked before locking the bathroom. The placed reminder 144B may only be active for a several-hour period following normal store hours (e.g., 7 PM to 9 PM), e.g., an example of a reminder condition that may be stored in a reminder condition data 249. The reminder may be especially important in the present example because a different location of the business may have once accidentally locked a customer in the bathroom, and the business wants to take great care that it does not happen again at this location. Therefore, a second reminder condition may send a message at 8 PM reminding the janitorial personnel to check the stalls.
Additional reminders may have no associated plot point on the map. For example, a reminder may be triggered when a message from an API of a shipping company is received that a shipment is incoming. The reminder content data 247 may include a short video instructing employees to check a back alleyway for obstructions, including pointing out several locations which should be checked but which are otherwise difficult to see from the loading dock.
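A reminder of this kind is event-triggered rather than geotriggered. One minimal way to sketch it, with an invented payload format since no shipping-company API is specified in this disclosure, is an HTTP webhook handler built from Python's standard library:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class ShipmentWebhook(BaseHTTPRequestHandler):
    """Hypothetical endpoint: when the shipping company's API posts an
    'incoming shipment' event, dispatch the stored reminder content
    (e.g., the alleyway-inspection video) to on-duty employees."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or "{}")
        if event.get("status") == "incoming":     # invented payload field
            self.dispatch_reminder(event)
        self.send_response(204)
        self.end_headers()

    def dispatch_reminder(self, event):
        # Stand-in for transmitting the reminder notification data.
        print(f"Shipment {event.get('id')} incoming: "
              "sending alleyway-check video to warehouse staff.")

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), ShipmentWebhook).serve_forever()
```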
Finally, one or more upcoming events may be displayed if associated with a location. The group event 1401 may be defined in a conference room for store personnel and corporate managers, for example a team meeting to prepare for an upcoming store-wide sale.
At first, the users 102 may be asked to provide additional information along with their object placement requests, object locating requests, reminder requests, documentation placement requests, and/or documentation retrieval requests. However, over time, a database may be developed associating the various locations with their location names and assisting in categorizing and classifying commonly used objects. The support hub 201 and/or the support server 200 may thereby become increasingly easy to use, fast, and accurate over time.
As a result of use of one or more aspects of the support network 100 and/or the support server 200, the business may have been able to increase efficiency, saving money and time. Objects may not be as easily misplaced or needlessly re-ordered when incorrectly thought to be lost. Important documentation may have been recorded to increase consistency, allow for cross-functional roles within the organization (e.g., corporate staff closing up the store if necessary), and even to improve the safety of staff and customers.
Although the present embodiments have been described with reference to specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the various embodiments. For example, the various devices, engines, and modules described herein may be enabled and operated using hardware circuitry (e.g., CMOS-based logic circuitry), firmware, software, or any combination of hardware, firmware, and software (e.g., embodied in a non-transitory machine-readable medium). For example, the various electrical structures and methods may be embodied using transistors, logic gates, and electrical circuits (e.g., application specific integrated circuit (ASIC) circuitry and/or Digital Signal Processor (DSP) circuitry).
In addition, it will be appreciated that the various operations, processes and methods disclosed herein may be embodied in a non-transitory machine-readable medium and/or a machine-accessible medium compatible with a data processing system (e.g., the support server 200, the support hub 201, the wearable device 300, the coordination server 400, the mobile device 500). Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
The structures in the figures such as the engines, routines, and modules may be shown as distinct and communicating with only a few specific structures and not others. The structures may be merged with each other, may perform overlapping functions, and may communicate with other structures not shown to be connected in the figures. Accordingly, the specification and/or drawings may be regarded in an illustrative rather than a restrictive sense.
In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. In addition, other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Accordingly, other embodiments are within the scope of the preceding disclosure.
This patent application claims priority from, and hereby incorporates by reference: U.S. provisional patent application No. 62/915,374, titled ‘LOGISTICS AND ASSISTANCE SUPPORT HUB, SYSTEM AND METHOD’, filed Oct. 15, 2019.