The present disclosure relates to communications systems for evaluation of property by a remote viewer. More specifically, it relates to methods, software, and apparatuses for a login-free audiovisual teleconference between two users for property evaluation.
Traditional customer service systems may allow contact between users without travel or making appointments, but telephonic communication is virtually useless for allowing accurate property evaluation by remote means. Sending pictures is similarly deficient, especially if an owner does not understand how best to portray the property.
The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosure. The summary is not an extensive overview of the disclosure. It is neither intended to identify key or critical elements of the disclosure nor to delineate the scope of the disclosure. The following summary merely presents some concepts of the disclosure in a simplified form as a prelude to the description below.
Aspects of the disclosure relate to methods, computer-readable media, and apparatuses for providing two-way audiovisual communication between a client, who may be an owner of the property being evaluated, and an agent, who may be one of a plurality of users associated with one or more systems or devices evaluating the property. The two-way audiovisual communication may be performed using a camera and microphone of a mobile computing device of the client and a camera and microphone of a computer of the agent remote from the client.
The plurality of users may be organized in a queue ranked by amount of time spent waiting to answer an owner's call. When the system receives a client call, the system may automatically route the call to an appropriate queue based on the prioritized ranking. The system may then utilize a camera on the client device to capture images and/or video of property according to the systems and methods described herein. Information gathered from the call may be used by the agent in evaluating the property.
Managers may be able to monitor the queue and to manage individual agents by modifying their attributes, in order to keep the supply of agents in the queue balanced against demand and appropriate to the distribution of clients currently calling.
Calls may be dynamically connected in a login-free environment in order to facilitate data collection for one or more organizations. Methods and systems may facilitate data collection, reconnection in the event of disconnections, and/or call degradation handling in the event of weak or unstable connections.
Other features and advantages of the disclosure will be apparent from the additional description provided herein.
A more complete understanding of the present invention and the advantages thereof may be acquired by referring to the following description in consideration of the accompanying drawings, in which like reference numbers indicate like features, and wherein:
In the following description of the various embodiments, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration, various embodiments of the disclosure that may be practiced. It is to be understood that other embodiments may be utilized.
As will be appreciated by one of skill in the art upon reading the following disclosure, various aspects described herein may be embodied as a method, a computer system, or a computer program product. Accordingly, those aspects may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, such aspects may take the form of a computer program product stored by one or more computer-readable storage media having computer-readable program code, or instructions, embodied in or on the storage media. Any suitable computer readable storage media may be utilized, including hard disks, CD-ROMs, optical storage devices, magnetic storage devices, and/or any combination thereof. In addition, various signals representing data or events as described herein may be transferred between a source and a destination in the form of electromagnetic waves traveling through signal-conducting media such as metal wires, optical fibers, and/or wireless transmission media (e.g., air and/or space).
The term “network” as used herein and depicted in the drawings refers not only to systems in which remote storage devices are coupled together via one or more communication paths, but also to stand-alone devices that may be coupled, from time to time, to such systems that have storage capability. Consequently, the term “network” includes not only a “physical network” but also a “content network,” which comprises the data, attributable to a single entity, that resides across all physical networks.
The components may include virtual collaboration server 103, web server 105, and client computers 107, 109. Virtual collaboration server 103 provides overall access, control and administration of databases and control software for performing one or more illustrative aspects described herein. Virtual collaboration server 103 may be connected to web server 105 through which users interact with and obtain data as requested. Alternatively, virtual collaboration server 103 may act as a web server itself and be directly connected to the Internet. Virtual collaboration server 103 may be connected to web server 105 through the network 101 (e.g., the Internet), via direct or indirect connection, or via some other network. Users may interact with the virtual collaboration server 103 using remote computers 107, 109, e.g., using a web browser to connect to the virtual collaboration server 103 via one or more externally exposed web sites hosted by web server 105. Client computers 107, 109 may be used in concert with virtual collaboration server 103 to access data stored therein, or may be used for other purposes. For example, from client device 107 a user may access web server 105 using an Internet browser, or by executing a software application that communicates with web server 105 and/or virtual collaboration server 103 over a computer network (such as the Internet).
Client computers 107 and 109 may also comprise a number of input and output devices, including a video camera (or “webcam”), microphone, speakers, and monitor, enabling two-way audiovisual communication to and from the client computers.
Servers and applications may be combined on the same physical machines while retaining separate virtual or logical addresses, or may reside on separate physical machines.
Each component 103, 105, 107, 109 may be any type of computer, server, or data processing device configured to perform the functions described herein (e.g., a desktop computer, infotainment system, commercial server, mobile phone, laptop, tablet, etc.). Virtual collaboration server 103, e.g., may include a processor 111 controlling overall operation of the virtual collaboration server 103. Virtual collaboration server 103 may further include RAM 113, ROM 115, network interface 117, input/output interfaces 119 (e.g., keyboard, mouse, display, printer, etc.), and memory 121. I/O 119 may include a variety of interface units and drives for reading, writing, displaying, and/or printing data or files. Memory 121 may further store operating system software 123 for controlling overall operation of the virtual collaboration server 103, control logic 125 for instructing virtual collaboration server 103 to perform aspects described herein, and other application software 127 providing secondary, support, and/or other functionality which may or may not be used in conjunction with other aspects described herein. The control logic may also be referred to herein as the data server software 125. Functionality of the data server software may refer to operations or decisions made automatically based on rules coded into the control logic, made manually by a user providing input into the system, and/or made through a combination of automatic processing and user input (e.g., queries, data updates, etc.).
Memory 121 may also store data used in performance of one or more aspects described herein, including a first database 129 and a second database 131. In some embodiments, the first database 129 may include the second database 131 (e.g., as a separate table, report, etc.). That is, the information can be stored in a single database, or separated into different logical, virtual, or physical databases, depending on system design. Devices 105, 107, 109 may have similar or different architecture as described with respect to device 103. Those of skill in the art will appreciate that the functionality of virtual collaboration server 103 (or device 105, 107, 109) as described herein may be spread across multiple data processing devices, for example, to distribute processing load across multiple computers, and/or to segregate transactions based on geographic location, user access level, quality of service (QoS), etc.
One or more aspects described herein may be embodied in computer-usable or readable data and/or computer-executable instructions, such as in one or more program modules, executed by one or more computers or other devices as described herein. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types when executed by a processor in a computer or other device. The modules may be written in a source code programming language that is subsequently compiled for execution, or may be written in a scripting or markup language such as (but not limited to) HTML or XML. The computer executable instructions may be stored on a computer readable medium such as a hard disk, optical disk, removable storage media, solid state memory, RAM, etc. As will be appreciated by one of skill in the art, the functionality of the program modules may be combined or distributed as desired in various embodiments. In addition, the functionality may be embodied in whole or in part in firmware or hardware equivalents such as integrated circuits, field programmable gate arrays (FPGA), and the like. Particular data structures may be used to more effectively implement one or more aspects, and such data structures are contemplated within the scope of computer executable instructions and computer-usable data described herein.
The mobile computing device 200 may include one or more output devices, such as a display 206 or one or more audio speakers 207. There may also be one or more user input devices, such as a number of buttons 208, as well as a microphone 209, a touchscreen built into display 206, and/or a forward-facing camera 210 (which may include multiple cameras for three-dimensional operation) for user gestures. The mobile computing device 200 may comprise additional sensors, including but not limited to a multiple-axis accelerometer 211 or rear-facing camera 212. Rear-facing camera 212 may further be an array of multiple cameras to allow the device to shoot three-dimensional video or determine depth. The mobile computing device may further comprise one or more antennas 213 for communicating via a cellular network, Wi-Fi or other wireless networking system, Bluetooth, near field communication (NFC), or other wireless communications protocols and methods.
The mobile device 200 is one example hardware configuration, and modifications may be made to add, remove, combine, divide, etc. components of mobile computing device 200 as desired. Multiple devices in communication with each other may be used, such as a mobile device in communication with a server or desktop computer over the Internet or another network, or a mobile device communicating with multiple sensors in other physical devices via Bluetooth, NFC, or other wireless communications protocols. Mobile computing device 200 may be a custom-built device comprising one or more of the features described above, or may be a wearable device, such as a smart watch or fitness tracking bracelet, with custom software installed, or may be a smartphone or other commercially available mobile device with a custom “app” or other software installed.
One or more aspects of the disclosure may be embodied in computer-usable data and/or computer-executable instructions, such as in one or more program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types when executed by a processor in a computer or other data processing device. The computer-executable instructions may be stored on one or more computer readable media such as a hard disk, optical disk, removable storage media, solid state memory, RAM, etc. As will be appreciated by one of skill in the art, the functionality of the program modules may be combined or distributed as desired in various embodiments. In addition, the functionality may be embodied in whole or in part in firmware or hardware equivalents such as integrated circuits, field programmable gate arrays (FPGA), and the like. Particular data structures may be used to more effectively implement one or more aspects of the disclosure, and such data structures are contemplated within the scope of computer executable instructions and computer-usable data described herein.
Mobile device 200 may be used to run a mobile application into which the user, in some examples, inputs information, such as a username and/or password for login, or an actual name, claim number, property type, contact information, and/or any other information relevant to an insurance claim (such as information identifying an insurance claim or an insurance policy holder). The application may then use an internet connection and/or other network connection to contact the virtual collaboration server and initiate communications with the server and/or one or more client computers. The application may also access one or more cameras and/or a microphone of the mobile device and transmit video and/or audio to a remote computer, and play video and audio received in return, to allow communications between the mobile device's operator and a remote agent.
In step 301, the system may generate a queue data structure for tracking a number of logged-in agents (e.g., claims adjusters) and one or more attributes for each agent. Attributes may include, for example, amount of time spent in the queue, amount of time spent in the queue since a last event (such as completing a call with a property owner or going idle), a classification or skill of the adjuster (such as specialization in auto claims or claims related to other property, a licensing status of an adjuster, and/or locations where the adjuster is authorized and/or licensed to practice), or a manager assigned to the given adjuster. Each claims adjuster may be associated with a computing device configured to communicate with the system and/or with one or more mobile devices of one or more users.
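As a non-limiting illustration, the queue data structure of steps 301 and 302 might be sketched as follows. All class names, field names, and attribute choices here are hypothetical, introduced only to make the description concrete; the disclosure does not prescribe any particular representation.

```python
import time
from dataclasses import dataclass, field


@dataclass
class Adjuster:
    """One logged-in claims adjuster and attributes tracked for routing."""
    adjuster_id: str
    skills: set = field(default_factory=set)          # e.g. {"auto", "home"}
    licensed_states: set = field(default_factory=set)  # where licensed to practice
    manager_id: str = ""                               # assigned manager
    available: bool = True
    queued_since: float = field(default_factory=time.monotonic)

    def wait_time(self) -> float:
        """Seconds spent waiting since the last event (login, call end, etc.)."""
        return time.monotonic() - self.queued_since


class AdjusterQueue:
    """Queue of adjusters, ranked by time spent waiting (steps 301-302)."""

    def __init__(self):
        self._adjusters = {}

    def add(self, adjuster: Adjuster) -> None:
        """Step 302: an adjuster logs in and joins the queue."""
        self._adjusters[adjuster.adjuster_id] = adjuster

    def ranked(self):
        """Adjusters ordered longest-waiting first."""
        return sorted(self._adjusters.values(),
                      key=lambda a: a.wait_time(), reverse=True)
```

Tracking `queued_since` as a timestamp, rather than a precomputed rank, lets wait times be recomputed on demand as the "time since last event" attribute described above.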
In step 302, the system may add one or more claims adjusters to the queue. Each claims adjuster may begin by logging in with a unique user identification number or string entered into a user interface on a computing device such as device 107 or device 109 that is networked to or in communication with server 103.
When logging into the system, a claims adjuster may be prompted to select one of a number of video capture devices of the adjuster's computer to capture images and/or video during any two-way video transmissions with a user. The claims adjuster may similarly be prompted to select one of a number of audio capture devices of the adjuster's computer to capture audio during any two-way audio transmissions with a user. The adjuster may further be prompted to select one or more speakers to emit audio received from a user if more than one speaker is connected to the adjuster's computer.
In step 303, the system may receive a two-way communications request from a property owner (e.g., a client). Preferably, before initiating the communications, the property owner will move to the location of damaged property subject to an insurance claim, as depicted in
The request may include one or more attributes, including, for example, a property type that the property owner wishes the claims adjuster to see. The request may be received by a webserver as an HTTP (Hypertext Transfer Protocol) request, or may use another server-client style protocol or messaging architecture. The request may also comprise the property owner's name or a previously assigned username, contact information for the property owner, and/or a claim number already assigned.
In some instances, the request may comprise a login-free connection. In a login-free connection, the client may not present a username or other typical user identifier. Rather, the client may give information necessary for routing the call (e.g., property type, location, etc.) and provide identifying information on the call itself. This may serve to connect clients to an adjuster faster and with less friction (e.g., a user may not have to create a username or remember a password to initiate a call).
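A login-free request such as that of step 303 might carry a body along the following lines. The field names and example values are purely illustrative assumptions; the point is only that routing information (property type, location) and contact details replace a username.

```python
import json

# Hypothetical body of the login-free HTTP request of step 303: the client
# supplies only routing and contact details, no username or password.
request_body = {
    "property_type": "auto",          # used to route to a qualified adjuster
    "location": "IN",                 # e.g. for licensing-based routing
    "contact": {"name": "J. Smith", "phone": "+1-555-0100"},
    "claim_number": None,             # optional; may not yet be assigned
}

payload = json.dumps(request_body)


def parse_request(raw: str) -> dict:
    """Server-side parse of a login-free request; reject it if the
    information needed for routing is missing."""
    data = json.loads(raw)
    if not data.get("property_type"):
        raise ValueError("property_type required for routing")
    return data
```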
Property that may be damaged may include automobiles, other vehicles (such as boats, motorcycles, bicycles, mopeds, or airplanes), houses, other structures, or personal property (such as artwork, electronics, clothing, furniture, or anything else of value).
In step 304, the system may select a claims adjuster to whom the incoming call should be assigned. The system may select an adjuster on the basis of longest time waiting in the queue (i.e., first in, first out), or may select based on one or more factors. For example, the system may select an adjuster who has been waiting the longest out of all adjusters with a particular attribute, such as experience with a property type identified in the request. The system may select an adjuster who has been waiting the longest out of all adjusters who are currently available and/or who have not marked themselves unavailable. The system may select an adjuster who has been waiting the longest out of all adjusters without being idle at his or her computer. The system may select an adjuster who has been waiting the longest out of all adjusters having a certain experience level. The experience level may be based on the adjuster's experience handling vehicle claims and/or property claims. The experience level may be based on the adjuster's experience handling vehicles whose value exceeds a pre-set threshold amount (e.g., $100,000). The experience level may also be based on experience with particular property types (e.g., single family homes, townhomes, condominiums, etc.). The system may select an adjuster who has been flagged to receive the next incoming call regardless of place in the queue or time waited. The system may select an adjuster who has handled the fewest calls during a given period of time, such as the last month, last week, last 24 hours, or last 8 hours. The system may select an adjuster who has declined the most or the fewest calls during a given period of time. The system may select an adjuster who has historically handled calls with the shortest call length.
The system may combine a number of the factors above, or other factors, to assign each adjuster a numerical score on each of a plurality of criteria, selecting the adjuster with the highest overall score or the adjuster who has waited the longest in the queue among all adjusters within a given score range.
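One possible way to combine such factors into a score for step 304 is sketched below. The particular weights, dictionary keys, and tie-breaking rule (longest wait wins among equal scores) are assumptions chosen for illustration, not values taken from the disclosure.

```python
def select_adjuster(adjusters, request_property_type, weights=None):
    """Score each available adjuster on several criteria (step 304) and
    return the highest scorer; ties are broken by longest wait in queue.

    `adjusters` is a list of dicts with illustrative keys:
    available, wait_seconds, skills, recent_calls.
    """
    weights = weights or {"skill": 10.0, "wait": 0.01, "recent_calls": 1.0}
    best = None
    for a in adjusters:
        if not a["available"]:
            continue                     # skip unavailable adjusters
        score = 0.0
        if request_property_type in a["skills"]:
            score += weights["skill"]    # reward matching expertise
        score += weights["wait"] * a["wait_seconds"]      # reward waiting
        score -= weights["recent_calls"] * a["recent_calls"]  # spread load
        key = (score, a["wait_seconds"])  # wait time breaks score ties
        if best is None or key > best[0]:
            best = (key, a)
    return best[1] if best else None
```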
Once an adjuster has been selected, in step 305, the adjuster selected by the system may be notified of the selection and prompted to accept or decline an incoming communication.
In step 306, a two-way audiovisual communication may be established between the property owner and the selected adjuster. A web-based protocol may be used for cross-platform communication between the system on server 103, the computing device 107 being operated by the claims adjuster, and the mobile computing device 200 being operated by the property owner. Any one of a number of existing open-source, commercial, or custom video transmission protocols and platforms may be used.
In an alternative embodiment, the system may direct that communications be established directly between the adjuster's computing device 107 and the property owner's mobile computing device 200, without passing through server 103.
In step 307, the adjuster may use the audiovisual communication to gather information regarding property that is damaged. The adjuster may view the property through a camera of mobile computing device 200, may hear the property (if, for example, it is a damaged television or musical instrument) through a microphone of the mobile computing device, may ask questions of the property owner and receive answers, or may direct the property owner to move the camera to allow the adjuster a better vantage point/different angles of viewing, or to move an obstruction out of the way for a better view.
If the adjuster determines that he or she is not suited to appraise the property—for example, because of user error in identifying a property type—the adjuster may input a command to terminate the call and re-generate the call request to repeat steps 303 and following, and to allow the owner to be connected to a different adjuster by the system.
The adjuster may be able to record the video from the call, record the audio from the call, or capture still images from the video. The data may be saved either locally on the adjuster's computing device or to a remote server for later retrieval. The adjuster may also be able to enter notes into a text field or via other user input field while viewing the property.
In step 308, the adjuster may conclude that there is sufficient data from the call to act, and may terminate the communications with the property owner.
In step 309, the adjuster may determine a next course of action and implement it. The adjuster may conclude based on the gathered information that a clear estimate of the property damage is possible, for example if there is no damage, if the property is a total loss, or if the damage is of a commonly encountered type. In this circumstance, the adjuster may be able to input an amount of money to be given to the property owner, and to automatically have a check created and mailed to the property owner, or automatically credited to a known account of the property owner. The adjuster may alternatively conclude that the damage will be difficult to estimate based on a remote viewing alone, and may be able to dispatch an adjuster to the property to view it in person, or to make an appointment for the property owner to bring the property to an adjuster for appraisal and to notify the property owner of the appointment. The system may transmit an instruction to a computing device associated with this other adjuster so that the other adjuster will receive the pertinent information about the claim and information regarding where and when to perform an in-person, physical inspection of the property.
After the determination is made, the owner's device may notify the owner that funds have already been deposited in an account of the owner, or that the appraisal via video call was unsuccessful and that an appointment has been or must be made for an in-person appraisal by another claims adjuster.
In an alternative embodiment, the system could instead be used for appraisal by a doctor or claims adjuster of an individual insured with health insurance rather than a property owner. In such an embodiment, skill types saved as attributes for members of the queue could be fields of medical expertise or other medical skills, rather than property types. The operator of the mobile device may be a doctor, other medical personnel, or another third party who may help a remote doctor or adjuster to inspect or perform a physical examination of a person submitting a health insurance claim.
Upon initiating the request (which may be made via an online system, mobile application executing on the mobile device 200, or the like), a user interface may be displayed to a claims adjuster.
When the incoming communications request causes the adjuster to be selected by the system, an incoming call window 405 may appear. The adjuster may accept the call by clicking an appropriate button within the window. The adjuster may decline the call either by clicking a decline button, which may be present, or by failing to accept the call within a predetermined period of time, such as 3 seconds, 5 seconds, or 10 seconds.
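The accept/decline logic for incoming call window 405 can be sketched as a small state function. The timeout default of 10 seconds is one of the example values given above; the function name and return strings are illustrative only.

```python
def resolve_offer(response, seconds_elapsed, timeout=10.0):
    """Resolve an incoming-call offer (window 405): silence past the
    timeout counts as a decline, otherwise an explicit click decides."""
    if seconds_elapsed >= timeout:
        return "declined"      # failing to accept in time declines the call
    if response == "accept":
        return "accepted"
    if response == "decline":
        return "declined"
    return "pending"           # still within the window, no click yet
```

A declined or timed-out offer would then return the call to the selection step so another adjuster can be chosen.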
A manager may furthermore be able to view/listen in real time to an ongoing call between an adjuster and a property owner. When an adjuster who is currently “In Call” is selected, an option may appear to allow one-way audiovisual communication from the adjuster to the manager and/or from the owner to the manager. Accordingly, the manager may be able to ensure that adjusters are appropriately performing duties and helping owners, and may use the information for training purposes with the adjuster after the call.
In some instances, captured images may be displayed on the screen to the adjuster during an ongoing call. For example, previously captured images may appear as thumbnails on the screen while the video call is ongoing. In some instances, the adjuster may select a thumbnail in order to view an enlarged version of the image during the call. This may allow an adjuster to see what images have previously been taken in order to assist the adjuster in determining what further images should be taken (e.g., an adjuster may determine from the thumbnails that a certain portion of the bumper has not been captured in an image).
Call degradation handling may assist in situations where connections are not ideal (e.g., intermittent connections, insufficient bandwidth, high jitter, etc.). An example method for call degradation handling is depicted in
In some instances, the system may dynamically manage connection settings such that adjuster communications are restricted enough to maximize client communications, in order to capture accurate descriptions of damaged property. For example, the system may determine at step 1015 that a connection is still insufficient for bilateral communication using current settings, perhaps even after performing some prioritization of client communications in step 1010. The system may dynamically reduce adjuster communications until a connection is stable by first reducing adjuster communications at step 1020 down to a predetermined minimum (e.g., to a minimum resolution, minimum bitrate, or to audio-only). If the system determines that the connection is still insufficient at step 1025, the system may reduce client communications at step 1030 down to a minimum (e.g., to another minimum resolution, to another minimum bitrate, or to audio-only). In this manner, stable communications may be maintained while prioritizing some communications that are more important than others (e.g., video of vehicle damage is prioritized over video of a claims adjuster). Even after a call is resumed at step 1035, the system may continue monitoring the call and repeat the method to compensate for changing network conditions.
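The stepwise degradation of steps 1015 through 1030 might be modeled as walking down two quality ladders, reducing the adjuster's feed before touching the client's. The ladder names and the `connection_ok` predicate are hypothetical stand-ins for real bandwidth measurement.

```python
# Illustrative quality ladders; the last rung on each corresponds to the
# "predetermined minimum" of steps 1020 and 1030 (audio-only).
ADJUSTER_LEVELS = ["hd_video", "sd_video", "low_video", "audio_only"]
CLIENT_LEVELS = ["hd_video", "sd_video", "audio_only"]


def degrade(connection_ok, adjuster_idx=0, client_idx=0):
    """Step down adjuster quality first (step 1020), then client quality
    (step 1030), until `connection_ok(adjuster_level, client_level)` holds.
    Returns the settled (adjuster_level, client_level), or None if even the
    minimum settings are insufficient."""
    while not connection_ok(ADJUSTER_LEVELS[adjuster_idx],
                            CLIENT_LEVELS[client_idx]):
        if adjuster_idx < len(ADJUSTER_LEVELS) - 1:
            adjuster_idx += 1        # reduce the adjuster's feed first
        elif client_idx < len(CLIENT_LEVELS) - 1:
            client_idx += 1          # only then reduce the client's feed
        else:
            return None              # both at minimum and still failing
    return ADJUSTER_LEVELS[adjuster_idx], CLIENT_LEVELS[client_idx]
```

Because the client's video of the damaged property is reduced last, the most evaluation-relevant stream is preserved longest, matching the prioritization described above.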
Referring back to
A reconnect feature may allow for calls to be reconnected in a login-free environment. In an Internet-based call without logins, a user may be routed based solely on information provided during the course of a call, as described herein. However, due to the dynamic nature of Internet communications, it may be very difficult to “call back” a customer (e.g., in the event of a loss of communication, desire to obtain additional information after a communication has ended, or the like) using the application without logins or some other identifier. Instead, an ad hoc reconnection method may be used. A reconnect code 710 may provide a means for reconnecting a call. During the initial call process, a client may provide alternative contact information, such as an e-mail or phone number. If a disconnect happens, the adjuster can send a message (e.g., a text message, notification, and/or email) comprising a reconnect code. For example, the adjuster may send the client a text message and/or an email, and/or the adjuster may use the application to trigger a notification of the code on the phone or other device of the customer. The client may enter the reconnect code in the application (e.g., application executing on the client phone or other device) to be connected back with the agent who was initially conducting the call. For example, a client may be disconnected. The adjuster may send the client a reconnect code. The client may enter the reconnect code in a screen such as the reconnection screen 730 depicted in
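A minimal sketch of the reconnect-code mechanism, assuming a server-side registry that maps each issued code to the adjuster who sent it; the class and method names, and the six-digit code format, are illustrative assumptions.

```python
import secrets


class ReconnectRegistry:
    """Maps short reconnect codes to the adjuster who issued them, so a
    login-free client can be routed back to the same adjuster."""

    def __init__(self):
        self._codes = {}

    def issue(self, adjuster_id: str) -> str:
        """Adjuster triggers a message containing a one-time code."""
        code = f"{secrets.randbelow(1_000_000):06d}"   # e.g. "042517"
        self._codes[code] = adjuster_id
        return code

    def redeem(self, code: str):
        """Client enters the code in the application; return the issuing
        adjuster's id and invalidate the code, or None if unknown."""
        return self._codes.pop(code, None)
```

Invalidating the code on first use keeps a stray text message or notification from being replayed to reach the adjuster later.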
In some instances, the reconnect code may be utilized to avoid long wait times in a login-free environment. If a client will experience a hold time longer than a threshold (e.g., two minutes), then the system may automatically trigger a disconnection and initiate a reconnect when an adjuster is able to take the call. The system may monitor call wait times to determine whether unusually long wait times exist. If wait times exceed a threshold, then the system may present one or more clients with a screen comprising an option to request a “call back” (e.g., a screen indicating that the client may request to be called back by an adjuster rather than remaining on hold and waiting for the adjuster to take the call, and may display a “call back” button on the screen to initiate the call back feature). If the client selects the call back option, the system may disconnect the client but maintain a ticket associated with the client in a queue associated with one or more adjusters. When the adjuster opens the ticket, the ticket may indicate that the client has requested a call back. The adjuster may then initiate a reconnect by using supplied contact information to send a message comprising a reconnection code for entering into the application, as described above, resulting in the client being connected directly with the adjuster who opened the ticket after the client enters the reconnection code. This may have the advantage of allowing a client to avoid holding for an adjuster while not requiring the client to re-enter information collected at the onset of a communication in a login-free environment.
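The call-back flow can be sketched as a threshold check plus a ticket queue. The two-minute default comes from the example above; everything else (names, ticket shape) is a hypothetical illustration.

```python
from collections import deque

HOLD_THRESHOLD = 120.0  # seconds; the two-minute example threshold


def maybe_offer_callback(estimated_wait: float,
                         threshold: float = HOLD_THRESHOLD) -> bool:
    """Decide whether to show the 'call back' option instead of keeping
    the client on hold."""
    return estimated_wait > threshold


class CallbackQueue:
    """Tickets for clients who chose a call back. Contact information
    gathered at the onset of the call is kept on the ticket, so the
    client need not re-enter it when reconnected."""

    def __init__(self):
        self._tickets = deque()

    def file(self, ticket: dict) -> None:
        self._tickets.append(ticket)

    def next_ticket(self):
        """Adjuster opens the oldest pending ticket, if any."""
        return self._tickets.popleft() if self._tickets else None
```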
A reconnect call button 820 may allow an adjuster to reconnect a call. For example, a call may have ended prematurely or an adjuster may desire to collect additional information. The reconnect call button 820 may send a reconnect code as described above and/or redirect an agent to a call screen such as that depicted in
When an adjuster selects to finish a call, such as by clicking the finish call button 825, the adjuster may be asked to enter some summary information for the call. The summary information may comprise information about the type of call handled as well as the result. For example, a claims adjuster may type in a claim number for the call. In some instances, the claim number may not be provided automatically so that the system does not need to utilize hooks in other systems to authenticate the number (e.g., entering a claim number manually may allow the system to support claim numbers for many different companies without connections to additional databases). In some instances, the claim number may be correlated to the client and/or property. For example, the claim number may be associated with a vehicle using the application or an external process. The adjuster may also indicate a result. For example, the adjuster may indicate that the claim was approved, that the claim resulted in a total loss, that the claim was denied, etc. Information gathered by the application may be used as tags on the gathered information 810. For example, photos presented in the call may be flagged as photographs of total loss regarding an exotic automobile in Indiana. This may allow for the creation of a database of gathered information 810 sorted according to the tags. Such a database may be utilized as a data source to train machine learning for an automated claims processing system.
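The tagging step could be as simple as joining the adjuster's summary fields onto each captured photo or recording, producing rows for the sortable database of gathered information 810. The field names below are assumptions for illustration.

```python
def tag_call_record(summary, media_items):
    """Attach the adjuster's post-call summary fields as tags on each
    piece of gathered information, yielding rows for a searchable
    database (e.g., 'total loss', 'exotic automobile', 'Indiana')."""
    tags = {
        "claim_number": summary["claim_number"],    # typed in manually
        "result": summary["result"],                # e.g. "total_loss"
        "property_type": summary["property_type"],
        "location": summary["location"],
    }
    # One row per photo/recording, each carrying the full tag set.
    return [{"media": item, **tags} for item in media_items]
```

Rows of this shape, accumulated across many calls, would form the kind of labeled dataset the passage suggests could train an automated claims processing system.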
In some instances, the gathered information 810 may be packaged and transmitted to a specified organization (such as a company specified in step 910). The exported information may be combined with the summary information and sent to a data depository associated with the specified organization. The organization's depository may process the claim number, gathered information 810, and/or other information to sort and store information as the organization sees fit. In this manner, the system may allow for claims to be handled for a large number of organizations with minimal ties into client systems, as the information may be transmitted with minimal information required from the organization's systems. By reducing access to proprietary databases and networks maintained by partner organizations, the system can support those organizations while minimizing security risks (e.g., a system may be able to transmit information to an insurer database but not receive information from the insurer database, which may reduce the risk of the system being used to illicitly obtain confidential client information from the insurer database).
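A minimal sketch of the one-way export described above, assuming a JSON payload and an append-only send channel (both assumptions; the function names are hypothetical), might look like this:

```python
import json

def package_export(claim_number: str, summary: dict, gathered: list) -> bytes:
    """Bundle summary information and gathered artifacts into a single
    payload for transmission to an organization's data depository."""
    payload = {
        "claim_number": claim_number,
        "summary": summary,
        "gathered_information": gathered,
    }
    return json.dumps(payload, sort_keys=True).encode("utf-8")

def transmit(payload: bytes, outbound_channel: list) -> None:
    """Send-only: the system pushes data out but never reads from the
    organization's database, limiting security exposure."""
    outbound_channel.append(payload)
```

The key design point mirrored here is directionality: there is no corresponding receive or query function against the organization's depository, so a compromise of this system could not be used to pull confidential records back out.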
The client may select an organization to contact at step 910. In some instances, the application may support multiple companies and/or insurance carriers. For example, the application may be licensed for use by multiple companies as a standardized application that is routed using a selection (e.g., a client may select his personal insurance carrier using a drop-down box).
At step 915, the client may select a product type for discussion. A company may provide services regarding a number of different types of products (e.g., home, auto, boat, motorcycle, etc.) or sub-classes of products and/or services (e.g., new drivers, senior citizens, basic or economy service, luxury or exotic automobiles, vacation homes, rental properties or vehicles, etc.). This may be used to route a call to an adjuster who is qualified and/or specialized for the call. For example, some adjusters may specialize in exotic vehicles. In another example, new drivers may be redirected to agents who specialize in helping young or inexperienced drivers with the claims process.
At step 920, the client may provide claim information. The claim information may provide additional details of the claim that may be useful in initiating the call. For example, the client may provide a geographic location regarding where they reside or where a collision took place (e.g., by selecting a state from a drop-down list when initiating the call). Certain geographic locations may have associated licensing or qualification requirements. By gathering this information at the onset, the system may select an adjuster who is properly qualified to handle the call at step 930. The system may then route the call to a qualified adjuster.
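The routing decision of steps 910 through 930 could be sketched, purely hypothetically, as a filter over an adjuster pool keyed on company, product type, and state licensure (all names below are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Adjuster:
    name: str
    company: str
    product_types: set      # e.g., {"auto", "exotic", "home"}
    licensed_states: set    # states where licensing requirements are met

def route_call(adjusters, company, product_type, state):
    """Return the first adjuster qualified for the selected company (step 910),
    product type (step 915), and geographic licensing requirement (step 920)."""
    for adj in adjusters:
        if (adj.company == company
                and product_type in adj.product_types
                and state in adj.licensed_states):
            return adj
    return None  # no qualified adjuster currently available

pool = [
    Adjuster("Avery", "AcmeInsure", {"auto", "exotic"}, {"IN", "OH"}),
    Adjuster("Blake", "AcmeInsure", {"home"}, {"IN"}),
]
```

For example, `route_call(pool, "AcmeInsure", "exotic", "IN")` would select the exotic-vehicle specialist, while an unmatched product type would return no adjuster and could trigger fallback handling.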
At step 935, an adjuster selected according to the systems and/or methods described herein may be connected to the client and conduct the call. For example, a client may be routed to a licensed adjuster who handles rental properties based on information provided in steps 915 and 920.
In some instances, a client may receive feedback regarding and/or based on established hours of operation. Adjusters in a group assigned to handle a call may have limited hours. In one example, all adjusters for a company and/or insurance carrier may work from 8 AM to 5 PM Eastern. In another example, adjusters for another company assigned to handle exotic automobiles may have special hours and be available from 6 AM until 9 PM Eastern. The system may cause a message to be displayed to a client indicating that adjusters are unavailable during a particular time of day. This indication may only be displayed when adjusters are unavailable due to the timing of a client call. For example, if a user selects a company with uniform hours, the client may be notified that a call is after hours and be further notified of when the adjusters are normally available. This notification may be sent after selecting the company to call but before proceeding to one or more further steps, as it may be known that no adjuster will be available regardless of the product type to be discussed. In another example where adjuster times vary depending on where the call is routed (e.g., adjusters for different product types keep different hours), the system may wait until the client enters more specific information (e.g., product type, geographic area, sub-class of product, etc.) in order to give the hours for the correct set of adjusters. This may allow a client to quickly identify when he should call back while reducing the amount of information that a client must provide before he is notified that his call cannot be completed at a given time (e.g., a client may not need to enter claim information before being notified that it is after hours and no adjuster is available).
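The two timing cases above (uniform company hours checked immediately, versus per-product hours checked only once the product type is known) might be sketched as follows; the function signature and message wording are assumptions for illustration:

```python
from datetime import time

def availability_message(now, uniform_hours=None, per_product_hours=None, product=None):
    """Return None if adjusters are available at `now`, else an after-hours
    message giving the correct hours.

    With uniform company hours, the check can run right after company
    selection (step 910); otherwise it waits until the product type is known
    (step 915) so the hours shown match the correct set of adjusters."""
    def check(open_t, close_t):
        if open_t <= now < close_t:
            return None
        return ("Adjusters are unavailable; hours are "
                f"{open_t.strftime('%H:%M')} to {close_t.strftime('%H:%M')}")

    if uniform_hours is not None:
        return check(*uniform_hours)
    if per_product_hours is not None and product is not None:
        return check(*per_product_hours[product])
    return None  # not enough information yet to determine the right hours
```

A client calling an 8 AM-to-5 PM carrier at 10 PM would be notified immediately, without entering a product type or any claim information; a carrier whose exotic-automobile adjusters keep 6 AM-to-9 PM hours would defer the check until the product type is selected.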
At step 940, the system may determine if there has been a disconnection. The system may detect disconnections that occur during the process of a call, and/or ask if there has been a disconnection when a call has been terminated. If a disconnection has been detected, the system may send a reconnect code at step 945. For example, the system may send a text message comprising a reconnection code to a phone number gathered from the client at step 920. The system may receive the reconnect code at step 950 and reconnect the call. For example, a client may reopen the application and enter a code received in a text message. The system may then reconnect with an adjuster who has been waiting for the connection to be reinitiated. Further description of disconnections may be found in
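One hypothetical way to implement the reconnect-code issue and redemption of steps 945 and 950 is sketched below; the six-digit code format and class name are assumptions, not part of the disclosure:

```python
import secrets

class ReconnectManager:
    """Issue and redeem short reconnect codes so a disconnected client can be
    routed back to the same waiting adjuster without logging in."""

    def __init__(self):
        self.pending = {}   # code -> (phone_number, adjuster_id)

    def send_code(self, phone_number, adjuster_id):
        """Step 945: generate a code to text to the number gathered at step 920."""
        code = f"{secrets.randbelow(10**6):06d}"   # six-digit code
        self.pending[code] = (phone_number, adjuster_id)
        return code

    def redeem(self, code):
        """Step 950: return the waiting adjuster's id, or None if the code is
        unknown or already used (codes are single-use)."""
        entry = self.pending.pop(code, None)
        return entry[1] if entry else None
```

Making codes single-use means a leaked or mistyped code cannot be replayed to join the adjuster's session a second time.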
At step 955, the system may end the call. For example, an adjuster may end a call by clicking an end call button 735. The system may then archive call information such as by using a claim wrap up screen 805. This information may be used to create a database of call information. Information taken during calls may be stored in a database and sorted using information gathered as part of the call process. For example, photographs of a damaged vehicle may be stored in a database for damage claims of exotic vehicles. Call metrics may also be recorded. For example, the system may track the average duration of calls by each adjuster, including time spent waiting for calls, handling calls, reconnecting, wrapping up calls, etc. This may allow for managers to perform quality control measures (e.g., reassigning adjusters to areas of need, training adjusters who are inefficient, etc.).
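The call-metrics tracking mentioned above might be sketched, hypothetically, as per-adjuster totals by call phase (the phase names and class shape are invented for illustration):

```python
from collections import defaultdict

class CallMetrics:
    """Track per-adjuster time spent in each call phase (e.g., waiting,
    handling, reconnecting, wrap-up) for quality-control reporting."""

    def __init__(self):
        self.totals = defaultdict(lambda: defaultdict(float))  # adjuster -> phase -> seconds
        self.call_counts = defaultdict(int)                    # adjuster -> completed calls

    def record(self, adjuster, phase, seconds):
        self.totals[adjuster][phase] += seconds

    def finish_call(self, adjuster):
        self.call_counts[adjuster] += 1

    def average(self, adjuster, phase):
        """Average seconds per completed call spent in the given phase."""
        if self.call_counts[adjuster] == 0:
            return 0.0
        return self.totals[adjuster][phase] / self.call_counts[adjuster]
```

A manager could compare `average(adjuster, "handling")` or `average(adjuster, "wrap-up")` across adjusters to identify where retraining or reassignment is needed.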
While the aspects described herein have been discussed with respect to specific examples, including various modes of carrying out aspects of the disclosure, those skilled in the art will appreciate that there are numerous variations and permutations of the above-described systems and techniques that fall within the spirit and scope of the invention.
This application is a continuation of U.S. patent application Ser. No. 16/919,899, filed Jul. 2, 2020, which is a continuation of U.S. patent application Ser. No. 15/874,629, filed Jan. 18, 2018 (now U.S. Pat. No. 10,742,812), which is a continuation-in-part of U.S. patent application Ser. No. 15/294,147, filed Oct. 14, 2016 (now U.S. Pat. No. 10,657,599), all of which are incorporated herein by reference in their entireties.
Number | Name | Date | Kind |
---|---|---|---|
D297243 | Wells-Papanek et al. | Aug 1988 | S |
D298144 | Wells-Papanek et al. | Oct 1988 | S |
D299142 | Berg | Dec 1988 | S |
5870711 | Huffman | Feb 1999 | A |
D416240 | Jensen et al. | Nov 1999 | S |
6381561 | Bomar et al. | Apr 2002 | B1 |
D468748 | Inagaki | Jan 2003 | S |
6744878 | Komissarchik et al. | Jun 2004 | B1 |
6771765 | Crowther et al. | Aug 2004 | B1 |
D523442 | Hiramatsu | Jun 2006 | S |
7088814 | Shaffer et al. | Aug 2006 | B1 |
7103171 | Annadata et al. | Sep 2006 | B1 |
D534539 | Frey et al. | Jan 2007 | S |
D539808 | Cummins et al. | Apr 2007 | S |
D540341 | Cummins et al. | Apr 2007 | S |
D544494 | Cummins | Jun 2007 | S |
D547365 | Reyes et al. | Jul 2007 | S |
7289964 | Bowman-Amuah | Oct 2007 | B1 |
D562339 | Keohane | Feb 2008 | S |
D569871 | Anastasopoulos et al. | May 2008 | S |
D570864 | Sadler et al. | Jun 2008 | S |
D574008 | Armendariz et al. | Jul 2008 | S |
D576634 | Clark et al. | Sep 2008 | S |
D579943 | Clark et al. | Nov 2008 | S |
D580941 | Scott et al. | Nov 2008 | S |
D580942 | Oshiro et al. | Nov 2008 | S |
D582936 | Scalisi et al. | Dec 2008 | S |
D583386 | Tomizawa et al. | Dec 2008 | S |
D583823 | Chen et al. | Dec 2008 | S |
D587276 | Noviello et al. | Feb 2009 | S |
D590407 | Watanabe et al. | Apr 2009 | S |
D592219 | Agarwal et al. | May 2009 | S |
D594026 | Ball et al. | Jun 2009 | S |
D594872 | Akimoto | Jun 2009 | S |
D596192 | Shotel | Jul 2009 | S |
D608366 | Matas | Jan 2010 | S |
D614194 | Guntaur et al. | Apr 2010 | S |
D616450 | Simons et al. | May 2010 | S |
D617804 | Hirsch | Jun 2010 | S |
7936867 | Hill et al. | May 2011 | B1 |
8046281 | Urrutia | Oct 2011 | B1 |
D648735 | Arnold et al. | Nov 2011 | S |
8347295 | Robertson et al. | Jan 2013 | B1 |
D676456 | Walsh et al. | Feb 2013 | S |
D677275 | Wujcik et al. | Mar 2013 | S |
D677326 | Gleasman et al. | Mar 2013 | S |
D677686 | Reyna et al. | Mar 2013 | S |
D678904 | Phelan | Mar 2013 | S |
D681654 | Hirsch et al. | May 2013 | S |
D682849 | Aoshima | May 2013 | S |
D682873 | Frijlink et al. | May 2013 | S |
D683751 | Carpenter et al. | Jun 2013 | S |
D684587 | Plesnicher et al. | Jun 2013 | S |
8472590 | Curtis | Jun 2013 | B1 |
D685386 | Makhlouf | Jul 2013 | S |
D687061 | Cueto et al. | Jul 2013 | S |
D687454 | Edwards et al. | Aug 2013 | S |
D687455 | Edwards et al. | Aug 2013 | S |
8510196 | Brandmaier et al. | Aug 2013 | B1 |
D689068 | Edwards et al. | Sep 2013 | S |
D691157 | Ramesh et al. | Oct 2013 | S |
D691618 | Chen et al. | Oct 2013 | S |
D693835 | Daniel | Nov 2013 | S |
8712893 | Brandmaier et al. | Apr 2014 | B1 |
D704205 | Greisson et al. | May 2014 | S |
D706796 | Talbot | Jun 2014 | S |
D708210 | Capua et al. | Jul 2014 | S |
D709517 | Meegan et al. | Jul 2014 | S |
D711411 | Yu et al. | Aug 2014 | S |
D715814 | Brinda et al. | Oct 2014 | S |
D716329 | Wen et al. | Oct 2014 | S |
D719583 | Edwards et al. | Dec 2014 | S |
D719968 | Ebtekar et al. | Dec 2014 | S |
D720363 | Ranz et al. | Dec 2014 | S |
D725139 | Izotov et al. | Mar 2015 | S |
8977237 | Sander et al. | Mar 2015 | B1 |
D727931 | Kim et al. | Apr 2015 | S |
D729264 | Satalkar et al. | May 2015 | S |
D730371 | Lee | May 2015 | S |
D730388 | Rehberg et al. | May 2015 | S |
D731510 | Kiruluta et al. | Jun 2015 | S |
D731512 | Xu et al. | Jun 2015 | S |
D733185 | Smith et al. | Jun 2015 | S |
D734358 | Rehberg et al. | Jul 2015 | S |
D735221 | Mishra et al. | Jul 2015 | S |
D735223 | Prajapati et al. | Jul 2015 | S |
D735745 | Zuckerberg et al. | Aug 2015 | S |
D738894 | Kim et al. | Sep 2015 | S |
D738906 | Frijlink et al. | Sep 2015 | S |
D746862 | Lee et al. | Jan 2016 | S |
D748112 | Vonshak et al. | Jan 2016 | S |
D751086 | Winther et al. | Mar 2016 | S |
D752059 | Yoo | Mar 2016 | S |
D755830 | Chaudhri et al. | May 2016 | S |
D759080 | Luo et al. | Jun 2016 | S |
D759663 | Kim et al. | Jun 2016 | S |
D759687 | Chang et al. | Jun 2016 | S |
9367535 | Bedard et al. | Jun 2016 | B2 |
D760772 | Winther et al. | Jul 2016 | S |
D761303 | Nelson et al. | Jul 2016 | S |
D761841 | Jong et al. | Jul 2016 | S |
D763282 | Lee | Aug 2016 | S |
D764483 | Heinrich et al. | Aug 2016 | S |
9407874 | Laurentino et al. | Aug 2016 | B2 |
D766286 | Lee et al. | Sep 2016 | S |
D766289 | Bauer et al. | Sep 2016 | S |
D767598 | Choi | Sep 2016 | S |
9443270 | Friedman et al. | Sep 2016 | B1 |
D768162 | Chan et al. | Oct 2016 | S |
D768202 | Malkiewicz | Oct 2016 | S |
D769253 | Kim et al. | Oct 2016 | S |
D770513 | Choi et al. | Nov 2016 | S |
9501798 | Urrutia et al. | Nov 2016 | B1 |
D773481 | Everette et al. | Dec 2016 | S |
D773523 | Kisselev et al. | Dec 2016 | S |
D774078 | Kisselev et al. | Dec 2016 | S |
D775144 | Vazquez | Dec 2016 | S |
D780202 | Bradbury et al. | Feb 2017 | S |
D785009 | Lim et al. | Apr 2017 | S |
D789956 | Ortega et al. | Jun 2017 | S |
D792424 | Meegan et al. | Jul 2017 | S |
D792441 | Gedrich et al. | Jul 2017 | S |
D795287 | Sun | Aug 2017 | S |
D797117 | Sun | Sep 2017 | S |
D797769 | Li | Sep 2017 | S |
D800748 | Jungmann et al. | Oct 2017 | S |
9824453 | Collins et al. | Nov 2017 | B1 |
D806101 | Frick et al. | Dec 2017 | S |
D809542 | Lu | Feb 2018 | S |
D809561 | Forsblom | Feb 2018 | S |
D814518 | Martin et al. | Apr 2018 | S |
D814520 | Martin et al. | Apr 2018 | S |
D815667 | Yeung | Apr 2018 | S |
9947050 | Pietrus et al. | Apr 2018 | B1 |
D819647 | Chen et al. | Jun 2018 | S |
D820296 | Aufmann et al. | Jun 2018 | S |
D822688 | Lee et al. | Jul 2018 | S |
D822711 | Bachman et al. | Jul 2018 | S |
D826984 | Gatts et al. | Aug 2018 | S |
D830408 | Clediere | Oct 2018 | S |
D832875 | Yeung et al. | Nov 2018 | S |
D834613 | Lee et al. | Nov 2018 | S |
D837814 | Lamperti et al. | Jan 2019 | S |
10169856 | Farnsworth et al. | Jan 2019 | B1 |
D841669 | Hansen et al. | Feb 2019 | S |
D844020 | Spector | Mar 2019 | S |
D845332 | Shriram et al. | Apr 2019 | S |
D847161 | Chaudhri et al. | Apr 2019 | S |
D851112 | Papolu et al. | Jun 2019 | S |
D851126 | Tauban | Jun 2019 | S |
D851127 | Tauban | Jun 2019 | S |
D851663 | Guesnon, Jr. | Jun 2019 | S |
D851668 | Jiang et al. | Jun 2019 | S |
D852217 | Li | Jun 2019 | S |
D853407 | Park | Jul 2019 | S |
D858571 | Jang | Sep 2019 | S |
D859445 | Clediere | Sep 2019 | S |
D863340 | Akana | Oct 2019 | S |
D865795 | Koo | Nov 2019 | S |
D866582 | Koo | Nov 2019 | S |
20020029285 | Collins | Mar 2002 | A1 |
20020138549 | Urien | Sep 2002 | A1 |
20030187672 | Gibson et al. | Oct 2003 | A1 |
20040224772 | Canessa et al. | Nov 2004 | A1 |
20040249650 | Freedman et al. | Dec 2004 | A1 |
20050038682 | Gandee et al. | Feb 2005 | A1 |
20050204148 | Mayo et al. | Sep 2005 | A1 |
20060009213 | Sturniolo et al. | Jan 2006 | A1 |
20060269057 | Short et al. | Nov 2006 | A1 |
20070100669 | Wargin et al. | May 2007 | A1 |
20070130197 | Richardson et al. | Jun 2007 | A1 |
20070219816 | Van Luchene et al. | Sep 2007 | A1 |
20070265949 | Elder | Nov 2007 | A1 |
20070282639 | Leszuk et al. | Dec 2007 | A1 |
20080015887 | Drabek et al. | Jan 2008 | A1 |
20080147448 | McLaughlin et al. | Jun 2008 | A1 |
20080255917 | Mayfield et al. | Oct 2008 | A1 |
20080300924 | Savage et al. | Dec 2008 | A1 |
20090125589 | Anand | May 2009 | A1 |
20090183114 | Matulic | Jul 2009 | A1 |
20100125464 | Gross et al. | May 2010 | A1 |
20100130176 | Wan et al. | May 2010 | A1 |
20100205567 | Haire et al. | Aug 2010 | A1 |
20100223172 | Donnelly et al. | Sep 2010 | A1 |
20110015947 | Erry et al. | Jan 2011 | A1 |
20110035793 | Appelman | Feb 2011 | A1 |
20120284058 | Varanasi | Nov 2012 | A1 |
20130204645 | Lehman et al. | Aug 2013 | A1 |
20130226624 | Blessman et al. | Aug 2013 | A1 |
20130317864 | Tofte et al. | Nov 2013 | A1 |
20140104372 | Calman et al. | Apr 2014 | A1 |
20140240445 | Jaynes | Aug 2014 | A1 |
20140288976 | Thomas et al. | Sep 2014 | A1 |
20140320590 | Laurentino | Oct 2014 | A1 |
20140369668 | Onoda | Dec 2014 | A1 |
20150025915 | Lekas | Jan 2015 | A1 |
20150187017 | Weiss | Jul 2015 | A1 |
20150189362 | Lee et al. | Jul 2015 | A1 |
20150244751 | Lee et al. | Aug 2015 | A1 |
20150248730 | Pilot et al. | Sep 2015 | A1 |
20150278728 | Dinamani et al. | Oct 2015 | A1 |
20150365342 | McCormack | Dec 2015 | A1 |
20160080570 | O'Connor et al. | Mar 2016 | A1 |
20160171486 | Wagner et al. | Jun 2016 | A1 |
20160171622 | Perkins et al. | Jun 2016 | A1 |
20160203443 | Wheeling | Jul 2016 | A1 |
20160217433 | Walton et al. | Jul 2016 | A1 |
20170068526 | Seigel | Mar 2017 | A1 |
20170104876 | Hibbard | Apr 2017 | A1 |
20170126812 | Singhal | May 2017 | A1 |
20170154383 | Wood | Jun 2017 | A1 |
20170352103 | Choi et al. | Dec 2017 | A1 |
20180007059 | Innes et al. | Jan 2018 | A1 |
20180108091 | Beavers et al. | Apr 2018 | A1 |
20190149772 | Fernandes et al. | May 2019 | A1 |
Number | Date | Country |
---|---|---|
2477506 | Feb 2006 | CA |
0 793 184 | Sep 1997 | EP |
2 648 364 | Oct 2013 | EP |
WO-2010120303 | Oct 2010 | WO |
WO-2013033259 | Mar 2013 | WO |
WO-2015131121 | Sep 2015 | WO |
Entry |
---|
Non-Final Office Action on U.S. Appl. No. 17/958,061 dated May 23, 2023, 14 pages. |
Final Office Action on U.S. Appl. No. 17/958,061 dated Oct. 11, 2023, 17 pages. |
Baraghimian & Young, “GeoSpaces/sup TM/—A virtual collaborative software environment for interactive analysis and visualization of geospatial information,” IGARSS 2001. Scanning the Present and Resolving the Future. Proceedings. IEEE 2001 International Geoscience and Remote Sensing Symposium (Cat. No. 01CH37217), pp. 1678-1680 (2001). |
Dolcourt, “OoVoo Mobile takes on Qik, Fring for Android video chat,” retrieved from https://www.cnet.com/tech/mobile/oovoo-mobile-takes-on-qik-fring-for-android-video-chat/, 3 pages (2010). |
Drippler, “15 Best Camera Apps for Android,” retrieved from http://drippler.com/drip/15-best-camera-apps-android, 21 pages (2016). |
Dufoe, “Tip of the Day: How to Move your Image in FaceTime,” retrieved from https://www.iphonelife.com/blog/32671/how-move-your-image-facetime, 11 pages (2015). |
Final Office Action for U.S. Appl. No. 16/248,277 dated Apr. 1, 2021, 20 pages. |
Final Office Action for U.S. Appl. No. 16/848,275 dated Feb. 18, 2022, 9 pages. |
Final Office Action on U.S. Appl. No. 16/919,899 dated May 2, 2022, 11 pages. |
Go Canvas, “Residential/Commercial Storm Damage Report Mobile App,” retrieved from https://www.gocanvas.com/mobile-forms-apps/22692-Residential-Commercial-Storm-Damage-Report, 6 pages (2016). |
International Search Report & Written Opinion for PCT/US2017/056490 dated Jan. 5, 2021, 9 pages. |
Kirma, “Screens icon set,” retrieved from https://www.iconfinder.com/iconsets/screens-2, 13 pages (2016). |
Marc, “How to make video calls with Viber on Android and iOS,” retrieved from https://viber.en.softonic.com/articles/how-to-make-video-calls-with-viber-on-android-and-ios, 11 pages (2014). |
Microsoft, “OoVoo—Video Calls and Messaging,” retrieved from https://www.microsoft.com/en-us/store/p/oovoo-video-calls-and-messaging/9wzdncrfj478, 6 pages (2017). |
Mitroff, “Who needs FaceTime? 4 video-calling apps for Android,” retrieved from https://www.cnet.com/tech/services-and-software/android-video-calling-apps/, 6 pages (2015). |
Morris, “Collaborative search revisited,” CSCW '13: Proceedings of the 2013 Conference on Computer Supported Cooperative Work, pp. 1181-1192 (2013). |
Narendra, et al., “MobiCoStream: Real-time collaborative video upstream for Mobile Augmented Reality applications,” 2014 IEEE International Conference on Advanced Networks and Telecommunications Systems, 6 pages (2014). |
Non-Final Office Action for U.S. Appl. No. 16/248,277 dated Sep. 10, 2021, 20 pages. |
Non-Final Office Action for U.S. Appl. No. 16/848,275 dated Oct. 1, 2021, 9 pages. |
Non-Final Office Action for U.S. Appl. No. 16/919,899 dated Nov. 2, 2021, 13 pages. |
Notice of Allowance for U.S. Appl. No. 16/848,275 dated Dec. 9, 2022, 11 pages. |
Notice of Allowance for U.S. Appl. No. 16/919,899 dated Oct. 5, 2022, 9 pages. |
Notice of Allowance on U.S. Appl. No. 16/248,277 dated May 27, 2022, 9 pages. |
Office Action for U.S. Appl. No. 16/248,277 dated Jan. 26, 2022, 9 pages. |
Office Action on U.S. Appl. No. 16/848,275 dated Jun. 9, 2022, 13 pages. |
Olsen & Porter, “What We Know about Demand Surge: Brief Summary,” Natural Hazards Review 12(2), pp. 62-71 (2011). |
Osorio, et al., “A Service Integration Platform for Collaborative Networks,” Studies in Informatics and Control 20(1), pp. 19-30 (2011). |
Pollock, “iMessages and FaceTime Sharing Issues,” retrieved from https://royalwise.com/imessages-facetime-sharing-issues/, 7 pages (2014). |
PR Newswire, “New Inspection Mobile App Enables Real-Time Inspection of Insurance Claims,” retrieved from https://www.prnewswire.com/news-releases/new-inspection-mobile-app-enables-real-time-inspection-of-insurance-claims-300114092.html, 3 pages (2015). |
TIA Technology, “TIA launches mobile app for insured object inspection in the field,” retrieved from http://www.tiatechnology.com/en/whats-new/tia-technology-launches-mobile-app-for-insured-object-inspection-in-the-field/, 4 pages (2016). |
Zhang, “Leader delegation and trust in global software teams,” New Jersey Institute of Technology Electronic Theses and Dissertations, retrieved from https://digitalcommons.njit.edu/dissertations/853, 152 pages (2008). |
Number | Date | Country | |
---|---|---|---|
20230179710 A1 | Jun 2023 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 16919899 | Jul 2020 | US |
Child | 18106180 | US | |
Parent | 15874629 | Jan 2018 | US |
Child | 16919899 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 15294147 | Oct 2016 | US |
Child | 15874629 | US |