The present disclosure relates to communications systems for evaluation of property by a remote viewing device. More specifically, it relates to methods, software, and apparatuses for a login-free audiovisual teleconference between two users for property evaluation.
Traditional customer service systems may allow contact between users without travel or making appointments, but telephonic communication is virtually useless for allowing accurate property evaluation by remote means. Sending pictures is similarly deficient, especially if an owner does not understand how best to portray the property.
The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosure. The summary is not an extensive overview of the disclosure. It is neither intended to identify key or critical elements of the disclosure nor to delineate the scope of the disclosure. The following summary merely presents some concepts of the disclosure in a simplified form as a prelude to the description below.
Aspects of the disclosure relate to methods, computer-readable media, and apparatuses for providing two-way audiovisual communication between a client, who may be an owner of the property being evaluated, and an agent, who may be one of a plurality of users associated with one or more systems or devices evaluating the property. The two-way audiovisual communication may be performed using a camera and microphone of a mobile computing device of the client and a camera and microphone of a computer of the agent remote from the client.
The plurality of users may be organized in a queue ranked by amount of time spent waiting to answer an owner's call. When the system receives a client call, the system may automatically route the call to an appropriate queue based on the prioritized ranking. The system may then utilize a camera on the client device to capture images and/or video of property according to the systems and methods described herein. Information gathered from the call may be used in evaluating the property.
An administrative system may monitor the queue and manage individual agents by modifying their attributes in order to keep the queue balanced with the demand for agents appropriate to the distribution of clients currently calling.
Calls may be dynamically connected in a login-free environment in order to facilitate data collection for one or more organizations. Methods and systems may facilitate data collection, reconnection in the event of disconnections, and/or call degradation handling in the event of weak or unstable connections.
Agents may be reassigned using a skill-based approach wherein agents are assigned based on a primary skill, but may be reassigned based on one or more secondary skills if the need arises.
Photos taken throughout the call process may be analyzed to determine if there are any inconsistencies indicative of fraud.
Other features and advantages of the disclosure will be apparent from the additional description provided herein.
A more complete understanding of the present invention and the advantages thereof may be acquired by referring to the following description in consideration of the accompanying drawings, in which like reference numbers indicate like features.
In the following description of the various embodiments, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration, various embodiments of the disclosure that may be practiced. It is to be understood that other embodiments may be utilized.
As will be appreciated by one of skill in the art upon reading the following disclosure, various aspects described herein may be embodied as a method, a computer system, or a computer program product. Accordingly, those aspects may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, such aspects may take the form of a computer program product stored by one or more computer-readable storage media having computer-readable program code, or instructions, embodied in or on the storage media. Any suitable computer readable storage media may be utilized, including hard disks, CD-ROMs, optical storage devices, magnetic storage devices, and/or any combination thereof. In addition, various signals representing data or events as described herein may be transferred between a source and a destination in the form of electromagnetic waves traveling through signal-conducting media such as metal wires, optical fibers, and/or wireless transmission media (e.g., air and/or space).
The term “network” as used herein and depicted in the drawings refers not only to systems in which remote storage devices are coupled together via one or more communication paths, but also to stand-alone devices that may be coupled, from time to time, to such systems that have storage capability. Consequently, the term “network” includes not only a “physical network” but also a “content network,” which is comprised of the data—attributable to a single entity—which resides across all physical networks.
The components may include virtual collaboration server 103, web server 105, and client computers 107, 109. Virtual collaboration server 103 provides overall access, control and administration of databases and control software for performing one or more illustrative aspects described herein. Virtual collaboration server 103 may be connected to web server 105 through which users interact with and obtain data as requested. Alternatively, virtual collaboration server 103 may act as a web server itself and be directly connected to the Internet. Virtual collaboration server 103 may be connected to web server 105 through the network 101 (e.g., the Internet), via direct or indirect connection, or via some other network. Users may interact with the virtual collaboration server 103 using remote computers 107, 109, e.g., using a web browser to connect to the virtual collaboration server 103 via one or more externally exposed web sites hosted by web server 105. Client computers 107, 109 may be used in concert with virtual collaboration server 103 to access data stored therein, or may be used for other purposes. For example, from client device 107 a user may access web server 105 using an Internet browser, or by executing a software application that communicates with web server 105 and/or virtual collaboration server 103 over a computer network (such as the Internet).
Client computers 107 and 109 may also comprise a number of input and output devices, including a video camera (or “webcam”), microphone, speakers, and monitor, enabling two-way audiovisual communication to and from the client computers.
Servers and applications may be combined on the same physical machines, and retain separate virtual or logical addresses, or may reside on separate physical machines.
Each component 103, 105, 107, 109 may be any type of computer, server, or data processing device configured to perform the functions described herein (e.g., a desktop computer, infotainment system, commercial server, mobile phone, laptop, tablet, etc.). Virtual collaboration server 103, e.g., may include a processor 111 controlling overall operation of the virtual collaboration server 103. Virtual collaboration server 103 may further include RAM 113, ROM 115, network interface 117, input/output interfaces 119 (e.g., keyboard, mouse, display, printer, etc.), and memory 121. I/O 119 may include a variety of interface units and drives for reading, writing, displaying, and/or printing data or files. Memory 121 may further store operating system software 123 for controlling overall operation of the virtual collaboration server 103, control logic 125 for instructing virtual collaboration server 103 to perform aspects described herein, and other application software 127 providing secondary, support, and/or other functionality which may or may not be used in conjunction with other aspects described herein. The control logic may also be referred to herein as the data server software 125. Functionality of the data server software may refer to operations or decisions made automatically based on rules coded into the control logic, made manually by a user providing input into the system, and/or a combination of automatic processing based on user input (e.g., queries, data updates, etc.).
Memory 121 may also store data used in performance of one or more aspects described herein, including a first database 129 and a second database 131. In some embodiments, the first database 129 may include the second database 131 (e.g., as a separate table, report, etc.). That is, the information can be stored in a single database, or separated into different logical, virtual, or physical databases, depending on system design. Devices 105, 107, 109 may have similar or different architecture as described with respect to device 103. Those of skill in the art will appreciate that the functionality of virtual collaboration server 103 (or device 105, 107, 109) as described herein may be spread across multiple data processing devices, for example, to distribute processing load across multiple computers, and/or to segregate transactions based on geographic location, user access level, quality of service (QoS), etc.
One or more aspects described herein may be embodied in computer-usable or readable data and/or computer-executable instructions, such as in one or more program modules, executed by one or more computers or other devices as described herein. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types when executed by a processor in a computer or other device. The modules may be written in a source code programming language that is subsequently compiled for execution, or may be written in a markup or scripting language such as (but not limited to) HTML or XML. The computer executable instructions may be stored on a computer readable medium such as a hard disk, optical disk, removable storage media, solid state memory, RAM, etc. As will be appreciated by one of skill in the art, the functionality of the program modules may be combined or distributed as desired in various embodiments. In addition, the functionality may be embodied in whole or in part in firmware or hardware equivalents such as integrated circuits, field programmable gate arrays (FPGA), and the like. Particular data structures may be used to more effectively implement one or more aspects, and such data structures are contemplated within the scope of computer executable instructions and computer-usable data described herein.
The mobile computing device 200 may include one or more output devices, such as a display 206 or one or more audio speakers 207. There may also be one or more user input devices, such as a number of buttons 208, as well as a microphone 209, a touchscreen built into display 206, and/or a forward-facing camera 210 (which may include multiple cameras for three-dimensional operation) for user gestures. The mobile computing device 200 may comprise additional sensors, including but not limited to a multiple-axis accelerometer 211 or rear-facing camera 212. Rear-facing camera 212 may further be an array of multiple cameras to allow the device to shoot three-dimensional video or determine depth. The mobile computing device may further comprise one or more antennas 213 for communicating via a cellular network, Wi-Fi or other wireless networking system, Bluetooth, near field communication (NFC), or other wireless communications protocols and methods.
The mobile device 200 is one example hardware configuration, and modifications may be made to add, remove, combine, divide, etc. components of mobile computing device 200 as desired. Multiple devices in communication with each other may be used, such as a mobile device in communication with a server or desktop computer over the Internet or another network, or a mobile device communicating with multiple sensors in other physical devices via Bluetooth, NFC, or other wireless communications protocols. Mobile computing device 200 may be a custom-built device comprising one or more of the features described above, or may be a wearable device, such as a smart watch or fitness tracking bracelet, with custom software installed, or may be a smartphone or other commercially available mobile device with a custom “app” or other software installed.
Mobile device 200 may be used to run a mobile application into which the user, in some examples, inputs information, such as a username and/or password for login, or an actual name, claim number, property type, contact information, and/or any other information relevant to an insurance claim (such as information identifying an insurance claim or an insurance policy holder). The application may then use an internet connection and/or other network connection to contact the virtual collaboration server and initiate communications with the server and/or one or more client computers. The application may also access one or more cameras and/or a microphone of the mobile device and transmit video and/or audio to a remote computer, and play video and audio received in return, to allow communications between the mobile device's operator and a remote agent.
In step 301, the system may generate a queue data structure for tracking a number of logged-in agents (e.g., claims adjusters) and one or more attributes for each agent. Attributes may include, for example, amount of time spent in the queue, amount of time spent in the queue since a last event (such as completing a call with a property owner or going idle), a classification or skill of the adjuster (such as specialization in auto claims or claims related to other property, a licensing status of an adjuster, and/or locations where the adjuster is authorized and/or licensed to practice), or a manager assigned to the given adjuster. Each claims adjuster may be associated with a computing device configured to communicate with the system and/or with one or more mobile devices of one or more users.
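The queue data structure of step 301 could be sketched as follows. This is a minimal illustration, not the disclosure's implementation: the names (`Agent`, `AdjusterQueue`), the specific attribute fields, and the use of a monotonic clock for wait time are all assumptions.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Agent:
    """One logged-in claims adjuster and the attributes the queue tracks."""
    agent_id: str
    skills: set = field(default_factory=set)           # e.g., {"auto", "homeowner"}
    licensed_states: set = field(default_factory=set)  # where the adjuster may practice
    manager: str = ""
    last_event_time: float = field(default_factory=time.monotonic)

    def wait_time(self) -> float:
        # Seconds since the last event (finished a call, went idle, logged in).
        return time.monotonic() - self.last_event_time

class AdjusterQueue:
    """Tracks logged-in agents and answers 'who has waited longest?' queries."""
    def __init__(self):
        self.agents = {}

    def add(self, agent):
        self.agents[agent.agent_id] = agent

    def longest_waiting(self, skill=None):
        # Optionally filter by a required skill before ranking by wait time.
        candidates = [a for a in self.agents.values()
                      if skill is None or skill in a.skills]
        return max(candidates, key=Agent.wait_time, default=None)
```

A routing step could then call `longest_waiting(skill="auto")` to realize the attribute-filtered, longest-waiting selection described later in step 304.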
In step 302, the system may add one or more claims adjusters to the queue. Each claims adjuster may begin by logging in with a unique user identification such as a number or string entered into a user interface on a computing device such as device 107 or device 109 that is networked to or in communication with server 103.
When logging into the system, a claims adjuster may be prompted to select one of a number of video capture devices of the adjuster's computer to capture images and/or video during any two-way video transmissions with a user. The claims adjuster may similarly be prompted to select one of a number of audio capture devices of the adjuster's computer to capture audio during any two-way audio transmissions with a user. The adjuster may further be prompted to select one or more speakers to emit audio received from a user if more than one speaker is connected to the adjuster's computer.
In step 303, the system may receive a two-way communications request from a property owner (e.g., a client). Preferably, before initiating the communications, the property owner will move to the location of the damaged property subject to an insurance claim.
The request may include one or more attributes, including, for example, a property type that the property owner wishes the claims adjuster to see. The request may be received by a web server as an HTTP (Hypertext Transfer Protocol) request, or may use another server-client style protocol or messaging architecture. The request may also comprise the property owner's name or a previously assigned username, contact information for the property owner, and/or a claim number already assigned.
An HTTP request may comprise a communicated link associated with information for a connection. The link may conform to HTTP or any other suitable standard. The link may be sent using any suitable electronic delivery method, such as e-mail, text message, etc. The link may be associated with a user so that certain identifying and/or useful information may be pre-filled automatically (e.g., the client's name, property type, damage type, location, etc.). For example, the client may be involved in a collision, and may contact his or her local agent at the site of the collision. The agent may enter information associated with the client into the system, which may generate the personalized link, which is sent to the client. If the client uses the link, the link may initiate a connection using one or more methods described herein. The link may automatically pre-fill known information for the client. For example, the link may open a webpage to initiate communications pre-filled with the client's name, automobile, and reason for initiating communication (e.g., a collision, or hail damage). In another example, the link may launch a local application with the pre-filled information.
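One way to generate such a personalized link is to encode the known client details as query parameters. The endpoint URL and parameter names below are illustrative assumptions only; the disclosure does not specify a link format.

```python
from urllib.parse import urlencode

# Hypothetical call-initiation endpoint; not specified in the disclosure.
CALL_ENDPOINT = "https://example.com/start-call"

def build_prefilled_link(client_info: dict, base_url: str = CALL_ENDPOINT) -> str:
    """Encode known client details as query parameters so the call page
    (or local application) opens with those fields already filled in."""
    return base_url + "?" + urlencode(client_info)

link = build_prefilled_link(
    {"name": "J. Smith", "property_type": "auto", "damage": "collision"}
)
```

The receiving webpage or application would decode the parameters and pre-fill its form, so the client only confirms the information rather than re-entering it.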
In some instances, the request may comprise a login-free connection. In a login-free connection, the client may not present a username or other typical user identifier. Rather, the client may give information necessary for routing the call (e.g., property type, vehicle type, location, etc.) and provide identifying information on the call itself. This may serve to connect clients to an adjuster faster and with less friction (e.g., a user may not have to create a username or remember a password to initiate a call).
Property that may be damaged may include automobiles, other vehicles (such as boats, motorcycles, bicycles, mopeds, or airplanes), houses, other structures, or personal property (such as artwork, electronics, clothing, furniture, or anything else of value).
In step 304, the system may select a claims adjuster to whom the incoming call should be assigned. The system may select an adjuster on the basis of longest time waiting in queue (i.e., first in, first out), or may select based on one or more factors. For example, the system may select an adjuster who has been waiting the longest out of all adjusters with a particular attribute, such as experience with a property type identified in the request. The system may select an adjuster who has been waiting the longest out of all adjusters who are currently available and/or who have not marked themselves unavailable. The system may select an adjuster who has been waiting the longest out of all adjusters without being idle at his or her computer. The system may select an adjuster who has been waiting the longest out of all adjusters having a certain experience level. The experience level may be based on the adjuster's relative experience handling vehicle claims versus other property claims. The experience level may be based on the adjuster's experience handling vehicles whose value exceeds a pre-set threshold amount (e.g., $100,000). The experience level may also be based on experience with particular property types (e.g., single family homes, townhomes, condominiums, etc.). The system may select an adjuster who has been flagged to receive the next incoming call regardless of place in the queue or time waited. The system may select an adjuster who has handled the fewest calls during a given period of time, such as the last month, last week, last 24 hours, or last 8 hours. The system may select an adjuster who has declined the most or the fewest calls during a given period of time. The system may select an adjuster who has historically handled calls with the shortest call length.
The system may combine a number of the factors above, or other factors, by assigning each adjuster a numerical score on each of a plurality of criteria, then selecting the adjuster with the highest overall score or the adjuster who has waited the longest in queue among all adjusters within a given score range.
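A weighted-scoring selection of this kind could be sketched as follows. The criteria names, weights, and tie-breaking rule are illustrative assumptions; the disclosure leaves all of them open.

```python
def score_adjuster(adjuster: dict, weights: dict) -> float:
    """Combine per-criterion scores (each in [0, 1]) into a weighted total."""
    return sum(weight * adjuster.get(criterion, 0.0)
               for criterion, weight in weights.items())

def select_adjuster(adjusters: list, weights: dict) -> dict:
    """Pick the highest-scoring adjuster, breaking ties by longest wait."""
    return max(adjusters, key=lambda a: (score_adjuster(a, weights),
                                         a.get("wait_seconds", 0)))

# Illustrative criteria and weights only.
weights = {"skill_match": 0.5, "experience": 0.3, "availability": 0.2}
adjusters = [
    {"id": "a1", "skill_match": 1.0, "experience": 0.4,
     "availability": 1.0, "wait_seconds": 120},
    {"id": "a2", "skill_match": 1.0, "experience": 0.4,
     "availability": 1.0, "wait_seconds": 300},
]
best = select_adjuster(adjusters, weights)  # equal scores, so a2 wins on wait time
```

Binning scores into ranges, as the text suggests, would simply replace the raw score in the tie-break key with a rounded or bucketed value.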
Once an adjuster has been selected, in step 305, the adjuster selected by the system may be notified of the selection and prompted to accept or decline an incoming communication.
In step 306, a two-way audiovisual communication may be established between the property owner and the selected adjuster. A web-based protocol may be used for cross-platform communication between the system on server 103, the computing device 107 being operated by the claims adjuster, and the mobile computing device 200 being operated by the property owner. Any one of a number of existing open-source, commercial, or custom video transmission protocols and platforms may be used.
In an alternative embodiment, the system may direct that communications be established directly between adjuster's computing device 107 and property owner's mobile computing device 200, without passing through server 103.
In step 307, the adjuster may use the audiovisual communication to gather information regarding property that is damaged. The adjuster may view the property through a camera of mobile computing device 200, may hear the property (if, for example, it is a damaged television or musical instrument) through a microphone of the mobile computing device, may ask questions of the property owner and receive answers, or may direct the property owner to move the camera to give the adjuster a better vantage point or different viewing angles, or to move an obstruction out of the way for a better view.
If the adjuster determines that he or she is not suited to appraise the property—for example, because of user error in identifying a property type—the adjuster may input a command to terminate the call and re-generate the call request to repeat steps 303 and following, and to allow the owner to be connected to a different adjuster by the system.
The adjuster may be able to record the video from the call, record the audio from the call, or capture still images from the video. The data may be saved either locally on the adjuster's computing device or to a remote server for later retrieval. The adjuster may also be able to enter notes into a text field, annotate the video or still images, or provide input via other user input fields while viewing the property.
In step 308, the adjuster may conclude that there is sufficient data from the call to act, and may terminate the communications with the property owner.
In step 309, the adjuster may determine a next course of action and implement it. The adjuster may conclude based on the gathered information that a clear estimate of the property damage is possible, for example if there is no damage, if the property is a total loss, or if the damage is of a commonly encountered type. In this circumstance, the adjuster may be able to input an amount of money to be given to the property owner, and to automatically have a check created and mailed to the property owner, or automatically credited to a known account of the property owner. The adjuster may alternatively conclude that the damage will be difficult to estimate based on a remote viewing alone, and may be able to dispatch an adjuster to the property to view it in person, or to make an appointment for the property owner to bring the property to an adjuster for appraisal and to notify the property owner of the appointment. The system may transmit an instruction to a computing device associated with this other adjuster so that the other adjuster will receive the pertinent information about the claim and information regarding where and when to perform an in-person, physical inspection of the property.
After the determination is made, the owner's device may notify the owner that funds have already been deposited in an account of the owner, or that the appraisal via video call was unsuccessful and that an appointment has been or must be made for an in-person appraisal by another claims adjuster.
In an alternative embodiment, the system could instead be used for appraisal, by a doctor or claims adjuster, of an individual insured under health insurance rather than of a property owner. In such an embodiment, skill types saved as attributes for members of the queue could be fields of medical expertise or other medical skills, rather than property types. The operator of the mobile device may be a doctor, other medical personnel, or another third party who may help a remote doctor or adjuster to inspect or perform a physical on a person submitting a health insurance claim.
Upon initiating the request (which may be made via an online system, mobile application executing on the mobile device 200, or the like), a user interface may be displayed to a claims adjuster.
When the incoming communications request causes the adjuster to be selected by the system, an incoming call window 405 may appear. The adjuster may accept the call by clicking an appropriate button within the window. The adjuster may decline the call either by clicking a decline button, which may be present, or by failing to accept the call within a predetermined period of time, such as 3 seconds, 5 seconds, or 10 seconds. In the event that the adjuster fails to accept the call, the call may be re-routed to the next available adjuster, such as by using the same logic that was used for the initial call connection.
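The accept/decline/timeout re-routing behavior can be sketched as below. In a real system each offer would wait asynchronously for up to the timeout; here the adjusters' responses are precomputed in a lookup purely for illustration, and all names are assumptions.

```python
def route_call(queue_order, responses, timeout_seconds=10):
    """Offer the incoming call to adjusters in queue order. `responses`
    stands in for what each adjuster did within `timeout_seconds`:
    True = accepted, False = clicked decline, None = never answered.
    A decline and a timeout both move the call to the next adjuster."""
    for adjuster in queue_order:
        if responses.get(adjuster) is True:
            return adjuster
    return None  # nobody accepted; the caller may be held or offered a call back
```

The same selection logic used for the initial connection (e.g., longest-waiting with matching attributes) would supply `queue_order` on each re-route.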
A manager may furthermore be able to view/listen in real time to an ongoing call between an adjuster and a property owner. When an adjuster who is currently “In Call” is selected, an option may appear to allow one-way audiovisual communication from the adjuster to the manager and/or from the owner to the manager. Accordingly, the manager may be able to ensure that adjusters are appropriately performing duties and helping owners, and may use the information for training purposes with the adjuster after the call.
In some instances, captured images may be displayed on the screen to the adjuster during an ongoing call. For example, previously captured images may appear as thumbnails on the screen while the video call is ongoing. In some instances, the adjuster may select a thumbnail in order to view an enlarged version of the image during the call. This may allow an adjuster to see what images have previously been taken in order to assist the adjuster in determining what further images should be taken (e.g., an adjuster may determine from the thumbnails that a certain portion of the bumper has not been captured in an image).
Call degradation handling may assist in situations where connections are not ideal (e.g., intermittent connections, insufficient bandwidth, high jitter, etc.). An example method for call degradation handling is described below.
In some instances, the system may dynamically manage connection settings such that adjuster communications are restricted enough to maximize client communications in order to capture accurate descriptions of damaged property. For example, at step 1015 the system may determine that a connection is still insufficient for bilateral communication using current settings, even after performing some prioritization of client communications in step 1010. The system may then dynamically reduce adjuster communications at step 1020, down to a predetermined minimum (e.g., a minimum resolution, a minimum bitrate, or audio-only), until the connection is stable. If the system determines that the connection is still insufficient at step 1025, the system may reduce client communications at step 1030, down to another minimum (e.g., another minimum resolution, another minimum bitrate, or audio-only). In this manner, stable communications may be maintained while prioritizing some communications that are more important than others (e.g., video of vehicle damage is prioritized over video of a claims adjuster). Even after a call is resumed at step 1035, the system may continue monitoring the call and repeat the method to compensate for changing network conditions.
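The stepwise reduction above can be sketched as two quality ladders that are walked down in order, adjuster first. All of the bitrate numbers below are illustrative assumptions, not values from the disclosure.

```python
# Quality ladders in kbps; all numbers are illustrative.
ADJUSTER_STEPS = [1200, 600, 300, 0]    # 0 = adjuster drops to audio-only
CLIENT_STEPS = [2500, 1200, 600, 300]   # client video kept higher: it carries
                                        # the images of the damaged property

def fit_call(bandwidth_kbps):
    """Return (adjuster_kbps, client_kbps) that fit the measured bandwidth,
    reducing the adjuster's stream (steps 1020/1025) before the client's
    (step 1030), or None if even the minimum settings do not fit."""
    client = CLIENT_STEPS[0]
    for adjuster in ADJUSTER_STEPS:      # first degrade only the adjuster
        if adjuster + client <= bandwidth_kbps:
            return adjuster, client
    adjuster = ADJUSTER_STEPS[-1]
    for client in CLIENT_STEPS:          # still insufficient: degrade the client
        if adjuster + client <= bandwidth_kbps:
            return adjuster, client
    return None                          # keep monitoring and retry later
```

Re-running `fit_call` with fresh bandwidth estimates after the call resumes corresponds to the continued monitoring at step 1035.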
A reconnect feature may allow for calls to be reconnected in a login-free environment. In an Internet-based call without logins, a user may be routed based solely on information provided during the course of a call, as described herein. However, due to the dynamic nature of Internet communications, it may be very difficult to "call back" a customer (e.g., in the event of a loss of communication, desire to obtain additional information after a communication has ended, or the like) using the application without logins or some other identifier. Instead, an ad hoc reconnection method may be used. A reconnect code 710 may provide a means for reconnecting a call. During the initial call process, a client may provide alternative contact information, such as an e-mail or phone number. If a disconnect happens, the adjuster can send a message (e.g., a text message, notification, and/or email) comprising a reconnect code. For example, the adjuster may send the client a text message and/or an email, and/or the adjuster may use the application to trigger a notification of the code on the phone or other device of the customer. The client may enter the reconnect code in the application (e.g., the application executing on the client phone or other device) to be connected back with the agent who was initially conducting the call. For example, a client may be disconnected. The adjuster may send the client a reconnect code. The client may enter the reconnect code in a screen such as reconnection screen 730.
In some instances, the reconnect code may be utilized to avoid long wait times in a login-free environment. If a client will experience a hold time of longer than a threshold (e.g., two minutes), then the system may automatically trigger a disconnection and initiate a reconnect when an adjuster is able to take the call. The system may monitor call wait times to determine whether unusually long wait times exist. If wait times exceed a threshold, then the system may present one or more clients with a screen comprising an option to request a "call back" (e.g., a screen indicating that the client may request to be called back by an adjuster rather than remaining on hold and waiting for the adjuster to take the call, and may display a "call back" button on the screen to initiate the call back feature). If the client selects the call back option, the system may disconnect the client, but maintain a ticket associated with the client in a queue associated with one or more adjusters. When the adjuster opens the ticket, the ticket may indicate that the client has requested a call back. The adjuster may then initiate a reconnect by using supplied contact information to send a message comprising a reconnection code for entering into the application, as described above, resulting in the client being connected directly with the adjuster who opened the ticket after the client enters the reconnection code. This may have the advantage of allowing a client to avoid holding for an adjuster while not requiring the client to re-enter information collected at the onset of a communication in a login-free environment.
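The hold-time decision above reduces to a small rule: compare the projected hold against a threshold and, if the client opts in, swap the live connection for a call-back ticket. The threshold value and field names here are illustrative assumptions.

```python
# Illustrative threshold matching the "e.g., two minutes" in the text.
HOLD_THRESHOLD_SECONDS = 120

def handle_hold(projected_hold_seconds, client_contact, wants_callback, tickets):
    """If the projected hold exceeds the threshold and the client opts in,
    disconnect the client and keep a call-back ticket in the adjuster queue;
    otherwise leave the client on hold."""
    if projected_hold_seconds <= HOLD_THRESHOLD_SECONDS or not wants_callback:
        return "hold"
    tickets.append({"contact": client_contact, "callback_requested": True})
    return "disconnected"
```

The ticket preserves the information already collected, so when the adjuster opens it and sends a reconnect code, the client resumes without re-entering anything.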
A reconnect call button 820 may allow an adjuster to reconnect a call. For example, a call may have ended prematurely or an adjuster may desire to collect additional information. The reconnect call button 820 may send a reconnect code as described above and/or redirect an agent to a call screen such as that depicted in
When an adjuster selects to finish a call, such as by clicking the finish call button 825, the adjuster may be asked to enter some summary information for the call. The summary information may comprise information about the type of call handled as well as the result. For example, a claims adjuster may type in a claim number for the call. In some instances, the claim number may not be provided automatically so that the system does not need to utilize hooks in other systems to authenticate the number (e.g., entering a claim number manually may allow the system to support claim numbers for many different companies without connections to additional databases). In some instances, the claim number may be correlated to the client and/or property. For example, the claim number may be associated with a vehicle using the application or an external process. The adjuster may also indicate a result. For example, the adjuster may indicate that the claim was approved, that the claim resulted in a total loss, that the claim was denied, etc. Information gathered by the application may be used as tags on the gathered information 810. For example, photos presented in the call may be flagged as photographs of total loss regarding an exotic automobile in Indiana. This may allow for the creation of a database of gathered information 810 sorted according to the tags. Such a database may be utilized as a data source to train machine learning for an automated claims processing system.
In some instances, the gathered information 810 may be packaged and transmitted to a specified organization (such as a company specified in step 910). The exported information may be combined with the summary information and sent to a data depository associated with the specified organization. The organization's depository may process the claim number, gathered information 810, and/or other information to sort and store information as the organization sees fit. In this manner, the system may allow for claims to be handled for a large number of organizations with minimal ties into client systems, as the information may be transmitted with minimal information required from the organization's systems. By reducing access to proprietary databases and networks maintained by partner organizations, the system can support those organizations while minimizing security risks (e.g., a system may be able to transmit information to an insurer database but not receive information from the insurer database, which may reduce the risk of the system being used to illicitly obtain confidential client information from the insurer database).
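A minimal sketch of the one-way export described above (field names are illustrative, not part of the disclosure) might package the claim number, summary information, and tagged artifacts for transmission to an organization's data depository:

```python
import json

def package_claim_export(claim_number, gathered_info, summary):
    """Bundle call artifacts for one-way transmission to an organization's
    data depository; nothing is read back from the depository."""
    return json.dumps({
        "claim_number": claim_number,
        "summary": summary,
        "artifacts": gathered_info,   # e.g., tagged photo references
    })

payload = package_claim_export(
    "CLM-2048",
    [{"photo": "img_01.jpg", "tags": ["total loss", "exotic", "Indiana"]}],
    {"result": "approved"},
)
assert json.loads(payload)["claim_number"] == "CLM-2048"
```

The transmit-only design reflects the security property noted above: the system can push information to an insurer database without being granted read access to it.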
In some instances, it may be advantageous to allow an adjuster to take an additional call prior to finishing work in a first call (e.g., taking a second call while still entering summary information). The system may allow an adjuster to “pause” work on a first call in order to take the second call while minimizing lost work. The system may notify the adjuster that he or she is requested on an additional call, such as during periods of high call volume. The system may then allow the adjuster's work to be stored as a saved state. In some instances, the adjuster may initiate saving the state of his or her work manually, and may do so with or without the prompt of another incoming call. In some instances, the saving may be automatic, though the adjuster may be permitted to refuse a second call and/or decline pausing his or her work via a prompt. A saved state may comprise information indicating work done by the adjuster and/or the current state of the interface presented to the adjuster. The work done by the adjuster, which may be information entered by the adjuster (e.g., summary information, claim notes, annotations, etc.), may be stored as data locally or remotely. The current state of the interface may be information indicating the information as visually depicted to the adjuster and/or underlying information entered by the adjuster. This information may be stored locally or remotely.
An adjuster may resume a previous saved state by loading the saved state. Upon loading the saved state, the adjuster may be presented with the interface as was visually presented to the adjuster when the state was saved. For example, an adjuster may take a first call, begin entering summary information, accept a second call (which automatically saves his state), and then load his saved state after finishing his second call. The loaded state information may populate any or all fields previously populated by the adjuster, make any selections made by the adjuster, present any visual information (e.g., selected images) in the manner they were previously depicted, and/or may recreate the state of the interface upon saving (e.g., if a photograph was highlighted and/or selected at the time the state was saved, it may be highlighted and/or selected upon loading).
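The pause-and-resume behavior described above can be sketched as follows (a simplified illustration; class and field names are hypothetical, and a real interface would capture far more state):

```python
class AdjusterWorkspace:
    """Holds in-progress summary work; supports pause/resume via saved states."""

    def __init__(self):
        self.fields = {}           # summary information entered so far
        self.selected_photo = None # interface state, e.g. a highlighted photo

    def save_state(self):
        # Capture both the entered data and the interface state.
        return {"fields": dict(self.fields),
                "selected_photo": self.selected_photo}

    def load_state(self, state):
        # Repopulate fields and recreate selections exactly as saved.
        self.fields = dict(state["fields"])
        self.selected_photo = state["selected_photo"]

ws = AdjusterWorkspace()
ws.fields["claim_number"] = "CLM-77"
ws.selected_photo = "hail_03.jpg"
saved = ws.save_state()          # a second call arrives; work is paused

ws2 = AdjusterWorkspace()        # later: the adjuster resumes
ws2.load_state(saved)
assert ws2.fields["claim_number"] == "CLM-77"
assert ws2.selected_photo == "hail_03.jpg"   # selection restored as saved
```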
The client may select an organization to contact at step 910. In some instances, the application may support multiple companies and/or insurance carriers. For example, the application may be licensed for use by multiple companies as a standardized application that is routed using a selection (e.g., a client may select his personal insurance carrier using a drop-down box).
At step 915, the client may select a product type for discussion. A company may provide services regarding a number of different types of products (e.g., home, auto, boat, motorcycle, etc.) or sub-classes of products and/or services (e.g., new drivers, senior citizens, basic or economy service, luxury or exotic automobiles, vacation homes, rental properties or vehicles, etc.). This may be used to route a call to an adjuster who is qualified and/or specialized for the call. For example, some adjusters may specialize in exotic vehicles. In another example, new drivers may be redirected to agents who specialize in helping young or inexperienced drivers with the claims process.
At step 920, the client may provide claim information. The claim information may provide additional details of the claim that may be useful in initiating the call. For example, the client may provide a geographic location regarding where they reside or where a collision took place (e.g., by selecting a state from a drop down list when initiating the call). Certain geographic locations may have associated licensing or qualification requirements. By gathering this information at the onset, the system may select an adjuster who is properly qualified to handle the call at step 930. The system may then route the call to a qualified adjuster.
At step 935, an adjuster selected according to the systems and/or methods described herein may be connected to the client and conduct the call. For example, a client may be routed to a licensed adjuster who handles rental properties based on information provided in steps 915 and 920.
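The routing of steps 910 through 935 might be sketched as below. This is an illustrative rule only (matching company, specialty, and state licensure); the disclosure contemplates other routing factors, and all identifiers are hypothetical:

```python
def route_call(organization, product_type, state, adjusters):
    """Return the first adjuster qualified for the call: same company,
    matching specialty, and licensed for the client's geographic area."""
    for adj in adjusters:
        if (adj["company"] == organization
                and product_type in adj["specialties"]
                and state in adj["licensed_states"]):
            return adj["id"]
    return None  # no qualified adjuster; the caller may be asked to hold

adjusters = [
    {"id": "adj-1", "company": "AcmeIns", "specialties": {"auto"},
     "licensed_states": {"OH", "IN"}},
    {"id": "adj-2", "company": "AcmeIns", "specialties": {"rental property"},
     "licensed_states": {"IN"}},
]
# A rental-property claim in Indiana reaches the qualified specialist.
assert route_call("AcmeIns", "rental property", "IN", adjusters) == "adj-2"
```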
In some instances, a client may receive feedback regarding and/or based on established hours of operation. Adjusters in a group assigned to handle a call may have limited hours. In one example, all adjusters for a company and/or insurance carrier may work from 8 AM to 5 PM Eastern. In another example, adjusters for another company assigned to handle exotic automobiles may have special hours and be available from 6 AM until 9 PM Eastern. The system may cause a message to be displayed to a client indicating that adjusters are unavailable during a particular time of day. This indication may only be displayed when adjusters are unavailable due to the timing of a client call. For example, if a user selects a company with uniform hours, the client may be notified that a call is after hours and be further notified of when the adjusters are normally available. This notification may be sent after selecting the company to call but before proceeding to one or more further steps, as it may be known that no adjuster will be available regardless of the product type to be discussed. In another example where adjuster times vary depending on where the call is routed (e.g., adjusters for different product types keep different hours), the system may wait until the client enters more specific information (e.g., product type, geographic area, sub-class of product, etc.) in order to give the hours for the correct set of adjusters. This may allow a client to quickly identify when he should call back while reducing the amount of information that a client must provide before he is notified that his call cannot be completed at a given time (e.g., a client may not need to enter claim information before being notified that it is after hours and no adjuster is available).
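The timing logic above (notify immediately when hours are uniform; wait for product selection when hours vary) can be sketched as follows, under the simplifying assumption of whole-hour schedules in a single time zone:

```python
def after_hours_notice(hour, company_hours, product=None):
    """Return an after-hours message if one can be shown yet, else None.
    company_hours is either a single (open, close) tuple applied uniformly,
    or a dict mapping product type -> (open, close)."""
    if isinstance(company_hours, tuple):
        # Uniform hours: the notice can be shown right after company selection.
        open_h, close_h = company_hours
    elif product is None:
        return None  # hours vary by product; wait for more specific input
    else:
        open_h, close_h = company_hours[product]
    if open_h <= hour < close_h:
        return None  # adjusters are available; no notice needed
    return f"Adjusters are available from {open_h}:00 to {close_h}:00"

# Uniform 8 AM-5 PM hours: a 6 PM caller is notified immediately.
assert after_hours_notice(18, (8, 17)) is not None
# Hours vary by product: no notice until the product type is known.
assert after_hours_notice(18, {"exotic": (6, 21)}) is None
# Exotic-automobile adjusters keep 6 AM-9 PM hours, so 6 PM is fine.
assert after_hours_notice(18, {"exotic": (6, 21)}, "exotic") is None
```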
At step 940, the system may determine if there has been a disconnection. The system may detect disconnections that occur during the process of a call, and/or ask if there has been a disconnection when a call has been terminated. If a disconnection has been detected, the system may send a reconnect code at step 945. For example, the system may send a text message comprising a reconnection code to a phone number gathered from the client at step 920. In some examples, the reconnection code may be transmitted automatically upon detecting a disconnection. The system may receive the reconnect code at step 950 and reconnect the call. For example, a client may reopen the application and enter a code received in a text message. The system may then reconnect with an adjuster who has been waiting for the connection to be reinitiated. Further description of disconnections may be found in
In some example cases of a disconnection, a state may be saved (such as described above). For example, upon disconnection the system may prompt the adjuster that a disconnection has occurred. The adjuster may be presented with the option of sending the reconnect code, and/or of saving the current state. In some instances, the adjuster may be permitted to send a reconnect code, continue working for a time, and then save the state of his or her work. Saving the state upon disconnection may occur automatically when a reconnect code is generated. Upon resuming the call, the system may automatically load the saved state. This may have the advantage of allowing the adjuster to pick up where he or she left off with a disconnected call without requiring the adjuster to keep a call active to preserve his or her work.
At step 955, the system may end the call. For example, an adjuster may end a call by clicking an end call button 735. The system may then archive call information such as by using a claim wrap up screen 805. This information may be used to create a database of call information. Information taken during calls may be stored in a database and sorted using information gathered as part of the call process. For example, photographs of a damaged vehicle may be stored in a database for damage claims of exotic vehicles. Call metrics may also be recorded. For example, the system may track the average duration of calls by each adjuster, including time spent waiting for calls, handling calls, reconnecting, wrapping up calls, etc. This may allow for managers to perform quality control measures (e.g., reassigning adjusters to areas of need, training adjusters who are inefficient, etc.).
At step 1110, the system may determine a primary skill for the adjuster. A database associated with the system (e.g., a database 129) may store indications of primary and/or secondary skills associated with one or more adjusters. The indications may be preconfigured (e.g., a manager may preconfigure a login and associated primary skill for the adjuster). In some instances, primary and/or secondary skills may be automatically configured. For example, the system may assign a primary skill to an adjuster based on a weighted formula (e.g., weighting more valuable skills over less valuable skills), wherein the skills are weighted according to one or more factors (e.g., number of calls handled for each skill, average monetary value of a claim for each skill, etc.). In some instances, the system may use a combination of pre-configured and automatically generated skills (e.g., an adjuster may be configured with a primary skill in hail damage at a first time, but may be given a primary skill in exotic cars once the adjuster handles enough exotic car claims).
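One possible form of the weighted formula mentioned above is sketched below; the specific weights and factors are illustrative assumptions, as the disclosure leaves them configurable:

```python
def primary_skill(call_counts, avg_claim_values, w_calls=1.0, w_value=0.001):
    """Assign a primary skill by scoring each skill on weighted factors:
    calls handled for the skill and average monetary value of its claims."""
    scores = {
        skill: w_calls * call_counts.get(skill, 0)
               + w_value * avg_claim_values.get(skill, 0)
        for skill in set(call_counts) | set(avg_claim_values)
    }
    return max(scores, key=scores.get)

# An adjuster preconfigured for hail damage accrues enough high-value
# exotic-car claims that the weighted score shifts the primary skill.
counts = {"hail damage": 40, "exotic cars": 35}
values = {"hail damage": 3_000, "exotic cars": 45_000}
assert primary_skill(counts, values) == "exotic cars"
```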
At step 1115, an adjuster may be assigned to a first queue based on his or her primary skill. The first queue may be a call queue for adjusters, which may be consistent with one or more queues as discussed above. A first queue may have an assigned skill, and adjusters with a primary skill matching the first queue's skill may be assigned to the first queue. For example, adjusters with a primary skill of “hail damage” may be assigned to a “hail damage queue” upon initial login. A single queue may be associated with a plurality of skills. For example, the first queue may be assigned for adjusters with a primary skill in hail damage or tree damage. The assignments for a queue may be based on a combination of one or more skills. For example, the first queue may be assigned for adjusters with a primary skill of exotic automobiles or adjusters with both a primary skill in collision claims and a secondary skill in exotic automobiles.
At step 1120, the system may receive a request from a property owner. The request may comprise a communication request associated with a claim, and the claim may be associated with a property type. More discussion of requests may be found above, such as in the discussion above regarding
At step 1130, the system may determine if there are sufficient adjusters for the second queue. The system may use one or more factors to determine if there are sufficient adjusters (e.g., hold times, number of calls on hold, number of call requests, etc.). If the system determines that there is not a sufficient number of adjusters, then the system may proceed with determining if adjusters should be reassigned (e.g., steps 1135, 1145, and 1150). If the number of adjusters is sufficient in the second queue, then the call may be assigned based on the existing queue assignments at step 1140. Step 1140 may correspond to one or more methods and/or systems described herein, such as steps 303 to 309.
At step 1135, the system may determine if one or more adjusters are available to switch to the second queue. The system may determine if adjusters have a secondary skill that corresponds to a skill associated with the second queue. If such adjusters exist, then the system may determine if it should assign them at step 1145. Else, the system may proceed with existing queue assignments as in step 1140.
At step 1145, the system may determine whether to prioritize the second queue over the first queue. The adjuster may be assigned to the first queue based on a primary skill. If the adjuster is moved to the second queue based on his secondary skill, it will deprive the first queue of an adjuster. So the system may determine if such a reassignment is beneficial. The system may compare one or more factors of each queue in reaching this determination. For example, the system may compare the wait times of each queue, the number of calls handled by each queue, the monetary value of a claim in each queue, the monetary value of a policy handled by each queue, the number of people on hold in each queue, or any such factor (alone or in combination). If the system determines that the comparison favors the first queue, then the adjuster may not be reassigned and the system may proceed with existing queue assignments as in step 1140. If the system determines that reassigning an adjuster to the second queue from the first queue is beneficial, then the system may proceed with the reassignment at step 1150.
At step 1150, the system may reassign the adjuster to the second queue. In some instances, the reassignment may be for a limited duration (e.g., a limited duration of time, until wait times fall below a threshold, until a certain number of calls are handled by the adjuster, until the adjuster logs off, etc.). In some instances, the system may choose to move a reassigned adjuster back to a first queue due to insufficiencies related to the first queue. For example, prior to looking for secondary skills, if the system detects that there are insufficient adjusters for the first queue it may undo one or more reassignments of adjusters from the first queue to other queues. In some instances, the system may manage reassignments such that queue lengths are relatively equal. For example, one or more adjusters may be moved from a first queue with a first wait time to a second queue with a second wait time, wherein the second wait time is longer than the first wait time. When the second wait time becomes the same or lower than the first wait time, the system may reverse one or more reassignments to maintain balance across the queues. In some instances, the reassignment may be overridden. For example, when the system determines that an adjuster should be reassigned, a prompt may be provided to an individual (e.g., the adjuster or a manager) that allows the individual to accept or deny the reassignment.
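The decision sequence of steps 1130 through 1150 can be sketched as below. The staffing thresholds and the use of wait time as the sole comparison factor are illustrative assumptions; the disclosure describes several possible factors:

```python
def maybe_reassign(first_q, second_q, adjusters):
    """Sketch of steps 1130-1150: if the second queue is understaffed, look
    for a first-queue adjuster whose secondary skill matches, and move one
    only if the comparison favors the second queue."""
    if second_q["staff"] >= second_q["needed"]:
        return None                   # step 1140: keep existing assignments
    candidates = [a for a in adjusters
                  if a["queue"] == first_q["skill"]
                  and second_q["skill"] in a["secondary_skills"]]
    if not candidates:
        return None                   # step 1140: no adjuster can switch
    if second_q["wait_time"] <= first_q["wait_time"]:
        return None                   # step 1145: comparison favors first queue
    mover = candidates[0]             # step 1150: perform the reassignment
    mover["queue"] = second_q["skill"]
    second_q["staff"] += 1
    return mover["id"]

first_q = {"skill": "hail damage", "wait_time": 2, "staff": 5, "needed": 3}
second_q = {"skill": "exotic cars", "wait_time": 9, "staff": 1, "needed": 3}
adjusters = [{"id": "adj-9", "queue": "hail damage",
              "secondary_skills": {"exotic cars"}}]
# The understaffed exotic-car queue borrows the adjuster with the
# matching secondary skill, since its wait time is longer.
assert maybe_reassign(first_q, second_q, adjusters) == "adj-9"
```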
At step 1210, the system may analyze initial damage photos in a request. For example, the system may store photos submitted at the time of a collision and/or received with an initial request to initiate communication for a claim. Other forms of data capture may be substituted for photos throughout as appropriate, such as video or three-dimensional surface capture. The analysis may comprise indexing one or more instances of damage associated with property. For example, the system may determine one or more instances of hail damage in the image. The analysis may be performed using machine learning. The analysis may be performed in conjunction with an automated indexing system, such as a system for imaging vehicle damage as depicted in
At step 1215, the system may receive further photos as part of a call. For example, the system may receive further photos as discussed regarding step 307 of
At step 1220, the system may compare the initial damage photos received in step 1210 with the call damage photos received at step 1215. The system may determine if any evidence of damage in the initially received photos corresponds with the damage in the photos in the later call. For example, the system may determine if there are additional dents in the call photos that were not present in the initial damage photos. The system may also compare the photos for other inconsistencies. For example, the system may analyze the photos to determine if the initial damage photos were of the same vehicle that is depicted during the call. If there are differences detected at step 1225, the system may flag inconsistencies for further analysis at step 1230. For example, if the system detects that additional damage may have occurred to the vehicle, then the system may flag the photos for further automated or manual analysis in order to determine if fraud has occurred.
At step 1235, the system may receive photos taken at intake from a repair center. For example, a repair center may take photos of a car when it is received. These photos may be compared with the call damage photos and/or the initial damage photos in order to determine if there are inconsistencies. The system may detect, at step 1245, if there are differences. For example, there could be intervening damage and/or the vehicle may not be the vehicle shown during the call. If there are differences detected at step 1245, the system may flag inconsistencies for further analysis at step 1250. For example, if the system detects that there has been inadvertent damage, then the system may flag the photos for further automated or manual analysis in order to determine if fraud has occurred. This may have the advantage of preventing a customer from fraudulently obtaining repairs. In another example, the system may detect that damage from the call is not present, or that a different vehicle is at the repair center than was shown during the call, which may prevent a client and/or repair shop from committing insurance fraud.
At step 1255, the system may receive post-repair photos from a repair center. For example, the repair center may submit photos of a vehicle after having completed its contracted repairs. At step 1265, the system may determine differences between the repaired vehicle and the photos taken prior to the repair. The system may determine if one or more instances of damage flagged for repair have been addressed and repaired. For example, the system may determine if several instances of hail damage that had been flagged for repair were sufficiently repaired. The photos may also be used at a later time as evidence that repairs were completed.
At step 1270, the system may analyze the differences before and after repair. For example, the system may determine that one or more instances of hail damage were not repaired. In another example, the system may determine that differences from intake photos indicate that the repair shop further damaged a vehicle. These differences may be flagged for further analysis based on the situation. For example, the system may flag a failure to properly repair a vehicle for agent review. In another example, the system may flag any damage suspected to be caused by the repair shop for further review. If the differences correspond to damage that was successfully repaired, no further action may be taken. In some instances, any issues detected during step 1270 may be required to be resolved (e.g., by an agent) prior to issuing payment to a repair center.
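The stage-to-stage comparisons of steps 1210 through 1270 reduce to set differences over indexed damage instances, as the following sketch illustrates. How damage is indexed from photos (e.g., by a machine learning model) is outside this sketch, and the damage labels are hypothetical:

```python
def compare_damage(before, after):
    """Compare indexed damage instances between two photo sets and return
    the inconsistencies to flag for automated or manual review."""
    return {
        "new_damage": sorted(after - before),      # possible intervening damage
        "missing_damage": sorted(before - after),  # repaired, or wrong vehicle
    }

initial = {"hood dent", "cracked windshield"}           # step 1210 photos
call = {"hood dent", "cracked windshield", "door scratch"}  # step 1215 photos
post_repair = {"door scratch"}                          # step 1255 photos

# Call photos show damage absent from the initial report: flag for review.
assert compare_damage(initial, call)["new_damage"] == ["door scratch"]
# After repair, the originally claimed damage should no longer appear.
flags = compare_damage(call, post_repair)
assert flags["missing_damage"] == ["cracked windshield", "hood dent"]
```

In this framing, unexplained entries in `new_damage` at intake or post-repair, and unexplained entries remaining in `missing_damage` after repair, are the conditions flagged at steps 1230, 1250, and 1270.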
While the aspects described herein have been discussed with respect to specific examples, including various modes of carrying out aspects of the disclosure, those skilled in the art will appreciate that there are numerous variations and permutations of the above-described systems and techniques that fall within the spirit and scope of the invention.
This application is a continuation-in-part of U.S. patent application Ser. No. 15/874,629, filed Jan. 18, 2018, entitled “BILATERAL COMMUNICATION IN A LOGIN-FREE ENVIRONMENT”, which is a continuation-in-part of U.S. patent application Ser. No. 15/294,147, filed Oct. 14, 2016, entitled “VIRTUAL COLLABORATION”, all of which are incorporated herein by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
D297243 | Wells-Papanek et al. | Aug 1988 | S |
D298144 | Wells-Papanek et al. | Oct 1988 | S |
D299142 | Berg | Dec 1988 | S |
5870711 | Huffman | Feb 1999 | A |
D416240 | Jensen et al. | Nov 1999 | S |
D468748 | Inagaki | Jan 2003 | S |
6744878 | Komissarchik et al. | Jun 2004 | B1 |
6771765 | Crowther et al. | Aug 2004 | B1 |
D523442 | Hiramatsu | Jun 2006 | S |
7088814 | Shaffer | Aug 2006 | B1 |
7103171 | Annadata et al. | Sep 2006 | B1 |
D534539 | Frey et al. | Jan 2007 | S |
D539808 | Cummins et al. | Apr 2007 | S |
D544494 | Cummins | Jun 2007 | S |
D547365 | Reyes et al. | Jul 2007 | S |
7289964 | Bowman-Amuah | Oct 2007 | B1 |
D562339 | Keohane | Feb 2008 | S |
D569871 | Anastasopoulos et al. | May 2008 | S |
D570864 | Sadler et al. | Jun 2008 | S |
D574008 | Armendariz et al. | Jul 2008 | S |
D576634 | Clark et al. | Sep 2008 | S |
D579943 | Clark et al. | Nov 2008 | S |
D580941 | Scott et al. | Nov 2008 | S |
D580942 | Oshiro et al. | Nov 2008 | S |
D582936 | Scalisi et al. | Dec 2008 | S |
D583386 | Tomizawa et al. | Dec 2008 | S |
D583823 | Chen et al. | Dec 2008 | S |
D587276 | Noviello et al. | Feb 2009 | S |
D590407 | Watanabe et al. | Apr 2009 | S |
D592219 | Agarwal et al. | May 2009 | S |
D594026 | Ball et al. | Jun 2009 | S |
D594872 | Akimoto | Jun 2009 | S |
D596192 | Shotel | Jul 2009 | S |
D608366 | Matas | Jan 2010 | S |
D614194 | Guntaur et al. | Apr 2010 | S |
D616450 | Simons et al. | May 2010 | S |
D617804 | Hirsch | Jun 2010 | S |
7936867 | Hill et al. | May 2011 | B1 |
8046281 | Urrutia | Oct 2011 | B1 |
D648735 | Arnold et al. | Nov 2011 | S |
8347295 | Robertson et al. | Jan 2013 | B1 |
D676456 | Walsh et al. | Feb 2013 | S |
D677275 | Wujcik et al. | Mar 2013 | S |
D677326 | Gleasman et al. | Mar 2013 | S |
D677686 | Reyna et al. | Mar 2013 | S |
D678904 | Phelan | Mar 2013 | S |
D681654 | Hirsch et al. | May 2013 | S |
D682849 | Aoshima | May 2013 | S |
D682873 | Frijlink et al. | May 2013 | S |
D683751 | Carpenter et al. | Jun 2013 | S |
D684587 | Plesnicher et al. | Jun 2013 | S |
D685386 | Makhlouf | Jul 2013 | S |
D687061 | Cueto et al. | Jul 2013 | S |
D687454 | Edwards et al. | Aug 2013 | S |
D687455 | Edwards et al. | Aug 2013 | S |
8510196 | Brandmaier et al. | Aug 2013 | B1 |
D689068 | Edwards et al. | Sep 2013 | S |
D691157 | Ramesh et al. | Oct 2013 | S |
D691618 | Chen et al. | Oct 2013 | S |
D693835 | Daniel | Nov 2013 | S |
8712893 | Brandmaier | Apr 2014 | B1 |
D704205 | Greisson et al. | May 2014 | S |
D706796 | Talbot | Jun 2014 | S |
D708210 | Capua et al. | Jul 2014 | S |
D709517 | Meegan et al. | Jul 2014 | S |
D711411 | Yu et al. | Aug 2014 | S |
D715814 | Brinda et al. | Oct 2014 | S |
D716329 | Wen et al. | Oct 2014 | S |
D719583 | Edwards et al. | Dec 2014 | S |
D719968 | Ebtekar et al. | Dec 2014 | S |
D720363 | Ranz et al. | Dec 2014 | S |
D725139 | Izotov et al. | Mar 2015 | S |
8977237 | Sander et al. | Mar 2015 | B1 |
D727931 | Kim et al. | Apr 2015 | S |
D729264 | Satalkar et al. | May 2015 | S |
D730371 | Lee | May 2015 | S |
D730388 | Rehberg et al. | May 2015 | S |
D731510 | Kiruluta et al. | Jun 2015 | S |
D731512 | Xu et al. | Jun 2015 | S |
D733185 | Smith et al. | Jun 2015 | S |
D734358 | Rehberg et al. | Jul 2015 | S |
D735221 | Mishra et al. | Jul 2015 | S |
D735223 | Prajapati et al. | Jul 2015 | S |
D735745 | Zuckerberg et al. | Aug 2015 | S |
D738894 | Kim et al. | Sep 2015 | S |
D738906 | Frijlink et al. | Sep 2015 | S |
D746862 | Lee et al. | Jan 2016 | S |
D748112 | Vonshak et al. | Jan 2016 | S |
D751086 | Winther et al. | Mar 2016 | S |
D752059 | Yoo | Mar 2016 | S |
D755830 | Chaudhri et al. | May 2016 | S |
D759080 | Luo et al. | Jun 2016 | S |
D759663 | Kim et al. | Jun 2016 | S |
D759687 | Chang et al. | Jun 2016 | S |
9367535 | Bedard et al. | Jun 2016 | B2 |
D760772 | Winther et al. | Jul 2016 | S |
D761303 | Nelson et al. | Jul 2016 | S |
D761841 | Jong et al. | Jul 2016 | S |
D763282 | Lee | Aug 2016 | S |
D764483 | Heinrich et al. | Aug 2016 | S |
9407874 | Laurentino et al. | Aug 2016 | B2 |
D766286 | Lee et al. | Sep 2016 | S |
D766289 | Bauer et al. | Sep 2016 | S |
D767598 | Choi | Sep 2016 | S |
9443270 | Friedman et al. | Sep 2016 | B1 |
D768162 | Chan et al. | Oct 2016 | S |
D768202 | Malkiewicz | Oct 2016 | S |
D769253 | Kim et al. | Oct 2016 | S |
D770513 | Choi et al. | Nov 2016 | S |
9501798 | Urrutia et al. | Nov 2016 | B1 |
D773481 | Everette et al. | Dec 2016 | S |
D773523 | Kisselev et al. | Dec 2016 | S |
D774078 | Kisselev et al. | Dec 2016 | S |
D775144 | Vazquez | Dec 2016 | S |
D780202 | Bradbury et al. | Feb 2017 | S |
D785009 | Lim et al. | Apr 2017 | S |
D789956 | Ortega et al. | Jun 2017 | S |
D792424 | Meegan et al. | Jul 2017 | S |
D792441 | Gedrich et al. | Jul 2017 | S |
D795287 | Sun | Aug 2017 | S |
D797117 | Sun | Sep 2017 | S |
D797769 | Li | Sep 2017 | S |
D800748 | Jungmann et al. | Oct 2017 | S |
9824453 | Collins et al. | Nov 2017 | B1 |
D806101 | Frick et al. | Dec 2017 | S |
D809542 | Lu | Feb 2018 | S |
D809561 | Forsblom | Feb 2018 | S |
D814518 | Martin et al. | Apr 2018 | S |
D814520 | Martin et al. | Apr 2018 | S |
D815667 | Yeung | Apr 2018 | S |
9947050 | Pietrus et al. | Apr 2018 | B1 |
D819647 | Chen et al. | Jun 2018 | S |
D820296 | Aufmann et al. | Jun 2018 | S |
D822688 | Lee et al. | Jul 2018 | S |
D822711 | Bachman et al. | Jul 2018 | S |
D826984 | Gatts et al. | Aug 2018 | S |
D830408 | Clediere | Oct 2018 | S |
D832875 | Yeung et al. | Nov 2018 | S |
D834613 | Lee et al. | Nov 2018 | S |
D837814 | Lamperti et al. | Jan 2019 | S |
D841669 | Hansen et al. | Feb 2019 | S |
D844020 | Spector | Mar 2019 | S |
D845332 | Shriram et al. | Apr 2019 | S |
D847161 | Chaudhri et al. | Apr 2019 | S |
D851112 | Papolu et al. | Jun 2019 | S |
D851126 | Tauban | Jun 2019 | S |
D851127 | Tauban | Jun 2019 | S |
D851663 | Guesnon, Jr. | Jun 2019 | S |
D851668 | Jiang et al. | Jun 2019 | S |
D852217 | Li | Jun 2019 | S |
D853407 | Park | Jul 2019 | S |
D858571 | Jang | Sep 2019 | S |
D859445 | Clediere | Sep 2019 | S |
D863340 | Kana | Oct 2019 | S |
D865795 | Koo | Nov 2019 | S |
D866582 | Koo | Nov 2019 | S |
20020029285 | Collins | Mar 2002 | A1 |
20030187672 | Gibson et al. | Oct 2003 | A1 |
20040224772 | Canessa et al. | Nov 2004 | A1 |
20040249650 | Freedman | Dec 2004 | A1 |
20050038682 | Gandee et al. | Feb 2005 | A1 |
20050204148 | Mayo | Sep 2005 | A1 |
20060009213 | Sturniolo | Jan 2006 | A1 |
20070130197 | Richardson et al. | Jun 2007 | A1 |
20070219816 | Van Luchene et al. | Sep 2007 | A1 |
20070265949 | Elder | Nov 2007 | A1 |
20070282639 | Leszuk et al. | Dec 2007 | A1 |
20080015887 | Drabek et al. | Jan 2008 | A1 |
20080147448 | McLaughlin | Jun 2008 | A1 |
20080255917 | Mayfield et al. | Oct 2008 | A1 |
20080300924 | Savage et al. | Dec 2008 | A1 |
20090183114 | Matulic | Jul 2009 | A1 |
20100125464 | Gross et al. | May 2010 | A1 |
20100130176 | Wan | May 2010 | A1 |
20100205567 | Haire et al. | Aug 2010 | A1 |
20100223172 | Donnelly et al. | Sep 2010 | A1 |
20110015947 | Erry et al. | Jan 2011 | A1 |
20130204645 | Lehman et al. | Aug 2013 | A1 |
20130226624 | Blessman et al. | Aug 2013 | A1 |
20130317864 | Tofte et al. | Nov 2013 | A1 |
20140104372 | Calman et al. | Apr 2014 | A1 |
20140240445 | Jaynes | Aug 2014 | A1 |
20140288976 | Thomas et al. | Sep 2014 | A1 |
20140320590 | Laurentino et al. | Oct 2014 | A1 |
20140369668 | Onoda | Dec 2014 | A1 |
20150025915 | Lekas | Jan 2015 | A1 |
20150187017 | Weiss | Jul 2015 | A1 |
20150189362 | Lee et al. | Jul 2015 | A1 |
20150244751 | Lee et al. | Aug 2015 | A1 |
20150248730 | Pilot | Sep 2015 | A1 |
20150278728 | Dinamani et al. | Oct 2015 | A1 |
20150365342 | McCormack | Dec 2015 | A1 |
20160080570 | O'Connor et al. | Mar 2016 | A1 |
20160171486 | Wagner | Jun 2016 | A1 |
20160171622 | Perkins et al. | Jun 2016 | A1 |
20160203443 | Wheeling | Jul 2016 | A1 |
20160217433 | Walton et al. | Jul 2016 | A1 |
20170068526 | Seigel | Mar 2017 | A1 |
20170126812 | Singhal | May 2017 | A1 |
20170154383 | Wood | Jun 2017 | A1 |
20170352103 | Choi et al. | Dec 2017 | A1 |
20180007059 | Innes et al. | Jan 2018 | A1 |
20180108091 | Beavers et al. | Apr 2018 | A1 |
20190149772 | Fernandes et al. | May 2019 | A1 |
Number | Date | Country |
---|---|---|
2 648 364 | Oct 2013 | EP |
2010120303 | Oct 2010 | WO |
WO-2013033259 | Mar 2013 | WO |
WO-2015131121 | Sep 2015 | WO |
Entry |
---|
Narendra et al: “MobiCoStream: Real-time Collaborative Video Upstream for Mobile Augmented Reality Applications”, 2014 IEEE International Conference on Advanced Networks and Telecommuncations Systems (ANTS), Dec. 2014. (Year: 2014). |
Osório et al: “A Service Integration Platform for Collaborative Networks” Studies in Informatics and Control, vol. 20, No. 1, Mar. 2011 (Year: 2011). |
Aug. 23, 2019—U.S. Final Office Action—U.S. Appl. No. 15/294,147. |
Mar. 12, 2019—U.S. Non-Final Office Action—U.S. Appl. No. 15/294,147. |
“TIA launches mobile app for insured object inspection in the field” http://www.tiatechnology.com/en/whats-new/tia-technology-launches-mobile-app-for-insured-object-inspection-in-the-field/ site visited Sep. 19, 2016, pp. 1-4. |
“New Inspection Mobile App Enables Real-Time Inspection of Insurance Claims” http://www.prnewswire.com/news-releases/new-inspection-mobile-app-enables-real-time-inspection-of-insurance-claims-300114092.html Jul. 16, 2015, pp. 1-3. |
“Residential/Commercial Storm Damage Report Mobile App” http://www.gocanvas.com/mobile-forms-apps/22692-Residential-Commercial-Storm-Damage-Report site visited Sep. 19, 2016, pp. 1-6. |
Oct. 17, 2017—U.S. Non-Final Office Action—U.S. Appl. No. 15/679,946. |
Royalwise; “iMessages and FaceTime Sharing Issues”; Publication date: Dec. 10, 2014; Date Accessed: Nov. 8, 2017; URL: <http://royalwise.com/imessages-facetime-sharing-issues/>. |
Drippler; “15 Best Camera Apps for Android”; Publication date: Jun. 8, 2016; Date Accessed: Nov. 8, 2017; URL: <http://drippler.com/drip/15-best-camera-apps-android>. |
iPhone Life; “Tip of the Day: How to Move your Image in FaceTime”; Publication date: Feb. 16, 2015; Date Accessed: Nov. 8, 2017; URL: <https://www.iphonelife.com/blog/32671/how-move-your-image-facetime>. |
CNET; “OoVoo Mobile takes on Qik Fring for Android video chat”; Publication date: Dec. 15, 2010; Date Accessed: Nov. 8, 2017; URL: <https://www.cnet.com/news/oovoo-mobile-takes-on-qik-fring-for-android-video-chat/>. |
Microsoft; “OoVoo—Video Calls and Messaging”; Publication date unknown but prior to filing date; Date Accessed: Nov. 8, 2017; URL: <https://www.microsoft.com/en-us/store/p/oovoo-video-calls-and-messaging/9wzdncrfj478>. |
Softonic; “How to make video calls with Viber on Android and iOS”; Publication date: Sep. 12, 2014; Date Accessed: Nov. 8, 2017; URL: <https://en.softonic.com/articles/how-to-make-video-calls-with-viber-on-android-and-ios>. |
CNET; “Who needs FaceTime? 4 video-calling apps for Android”; Publication date: Mar. 20, 2015; Date Accessed: Nov. 8, 2017; URL: <https://www.cnet.com/news/android-video-calling-apps/>. |
Jan. 5, 2018—(WO) International Search Report—PCT/US17/56490. |
Apr. 27, 2018—U.S. Final Office Action—U.S. Appl. No. 15/679,946. |
Sep. 6, 2018—U.S. Notice of Allowance—U.S. Appl. No. 15/679,946. |
“Leader Delegation and Trust in Global Software Teams”, Zhang, New Jersey Institute of Technology, ProQuest Dissertations Publishing, Year 2008. |
Jan. 9, 2019—U.S. Non-Final Office Action—U.S. Appl. No. 29/627,412. |
Mar. 4, 2019—(CA) Office Action—Application 181524. |
Jun. 25, 2019—U.S. Final Office Action—U.S. Appl. No. 29/627,412. |
Screens Icons, Andrejs Kirma, Dec. 28, 2016, iconfinder.com [online], [site visited Jun. 19, 2019], https://www.iconfinder.com/iconsets/screens-2, Year 2016. |
Baraghimian & Young, “GeoSpaces™: A virtual collaborative software environment for interactive analysis and visualization of geospatial information,” IGARSS 2001. Scanning the Present and Resolving the Future. Proceedings. IEEE 2001 International Geoscience and Remote Sensing Symposium (Cat. No. 01CH37217), pp. 1678-1680 (2001). |
Non-Final Office Action for U.S. Appl. No. 16/848,275 dated Oct. 1, 2021, 9 pages. |
Non-Final Office Action for U.S. Appl. No. 16/919,899 dated Nov. 2, 2021, 13 pages. |
Final Office Action on U.S. Appl. No. 16/919,899 dated May 2, 2022, 11 pages. |
Final Office Action for U.S. Appl. No. 16/848,275 dated Feb. 18, 2022, 9 pages. |
Office Action on U.S. Appl. No. 16/848,275 dated Jun. 9, 2022, 13 pages. |
Number | Date | Country | |
---|---|---|---|
Parent | 15874629 | Jan 2018 | US |
Child | 16248277 | US | |
Parent | 15294147 | Oct 2016 | US |
Child | 15874629 | US |