The present disclosure generally relates to smart routing of intercommunication device communications, and more specifically, to integrated/hybrid systems and methods for leveraging intercommunication device communications within the operations of a building or region.
An intercommunication device, or “intercom,” is a communications system, typically used within a building or collection of buildings, that functions independently of a public switched telephone network (PSTN).
In some embodiments, an apparatus includes a processor and a memory operably coupled to the processor. The memory stores instructions to cause the processor to receive, from an intercom system, a call request and receive, in response to the call request, routing data. The memory also stores instructions to cause the processor to modify, based on the routing data, a first call routing sequence associated with a first set of receivers to produce a second call routing sequence associated with a second set of receivers. Additionally, the memory stores instructions to cause the processor to cause, in response to the call request and based on the second call routing sequence, a message to be sent to a compute device associated with a receiver from the second set of receivers.
In some embodiments, a non-transitory, processor-readable medium stores instructions that, when executed by a processor, cause the processor to receive a call request associated with a caller. The instructions also cause the processor to generate a receiver identifier based on at least one of (1) badge event data associated with a receiver or (2) video data associated with the caller. Additionally, a modified call routing sequence is generated based on the receiver identifier and a predefined call routing sequence that is different from the modified call routing sequence. The modified call routing sequence includes the receiver identifier. The instructions also cause the processor to cause a message to be sent to a compute device associated with the receiver based on the modified call routing sequence.
In some embodiments, an apparatus includes a processor and a memory operably coupled to the processor. The memory stores instructions to cause the processor to receive a call request via an intercom operably coupled to the processor and further receive at least one of (1) badge event data via an access control device operably coupled to the processor, or (2) video data via a camera operably coupled to the processor, the intercom, the access control device, and the camera being co-located at an ingress location. Additionally, the instructions cause the processor to cause display, via a graphical user interface operably coupled to the processor, of a depiction of a modified call routing sequence that is generated based on a predefined call routing sequence and the at least one of the badge event data or the video data.
Some embodiments set forth herein include a hybrid/“smart” intercom system for hybrid intercom, video monitoring, and/or access control operations. The smart intercom system (or “smart intercom” and/or “intercom system”) can be implemented in software and/or hardware, and configured to intelligently route calls based on one or more predefined sequences of steps (also referred to herein as “sequencing directives” or “call routing sequences”), each of which can reference one or multiple receivers (e.g., within a given networked communications system, such as for a building or geographic region). The smart intercom can prioritize some receivers over others, for example based on the sequencing directives and/or based on data associated with one or more of: the incoming call(s), one or more video recording systems associated with the networked communications system, one or more access control systems (e.g., door access control systems) associated with the networked communications system, or one or more receivers associated with the networked communications system. Incoming calls to an intercom can be initiated by a “visitor” or other user who is physically present at the intercom at the time of the call.
In some embodiments, a hybrid/“smart” intercom system is configured to perform hybrid intercom, video monitoring, and access control operations. The smart intercom system can intelligently route incoming calls based on predefined sequences of steps, each referencing one or multiple receivers (e.g., by including one or more receiver identifiers) within a networked communications system. The smart intercom can prioritize some receivers over others based on the predefined sequences of steps (which can be based on, for example, time of day, day of week, etc.). The smart intercom can also dynamically modify intercom call sequences based on data associated with the incoming call, data associated with one or more video recording systems, data associated with one or more access control systems, and/or data associated with one or more of the receivers (e.g., status data indicating availability of the one or more receivers, such as “here” and “away” statuses). Incoming calls to the smart intercom system can be initiated by a “visitor” or other user who is physically present at the intercom at the time of the call. For example, the visitor can cause a call request to be received by the smart intercom to initiate the incoming call by pressing a button operably coupled to the smart intercom, by being positioned within a field of view of the one or more video systems configured to detect the visitor (e.g., using machine vision models), and/or the like. Alternatively or in addition, an individual other than the visitor (e.g., a receptionist and/or the like) can initiate a call to a visitor who is within the field of view of the one or more video systems and/or in proximity to the smart intercom.
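By way of non-limiting illustration, the following minimal Python sketch shows one possible representation of such a call routing sequence and its dynamic modification based on receiver status data; the names (e.g., CallStep, modify_sequence, the “here”/“away” status strings) are hypothetical assumptions, not part of any particular implementation:

```python
# Minimal, non-limiting sketch: a call routing sequence as an ordered list of
# steps, dynamically modified based on receiver status data. All names
# (CallStep, modify_sequence, the "here"/"away" strings) are hypothetical.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class CallStep:
    receiver_ids: List[str]      # receivers rung in parallel during this step
    timeout_seconds: int = 20    # how long to ring before advancing to the next step

def modify_sequence(sequence: List[CallStep], statuses: Dict[str, str]) -> List[CallStep]:
    """Drop receivers whose status data indicates they are "away"."""
    modified = []
    for step in sequence:
        present = [r for r in step.receiver_ids if statuses.get(r) == "here"]
        if present:
            modified.append(CallStep(present, step.timeout_seconds))
    return modified

predefined = [CallStep(["front_desk"]), CallStep(["office_mgr", "security"])]
statuses = {"front_desk": "away", "office_mgr": "here", "security": "here"}
print(modify_sequence(predefined, statuses))  # only the second step survives
```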
In some embodiments, an intercom (e.g., the intercom(s) 102 of
In some embodiments, an intercom/smart intercom includes, and is configured to call multiple different types of receivers using, a unified (e.g., a single) calling interface, implemented in software and/or hardware, thereby creating a consistent experience across the receivers. In other words, the unified calling interface on the intercom can be configured to call many different receiver types but treat them all identically (e.g., using a similar/common syntax, format, etc.), such that the call sequence and associated call scheduling logic are intuitive from a user's perspective. In some implementations, first-party applications and the public switched telephone network (PSTN) system may be called simultaneously or overlapping in time (e.g., within a common call step) via a unified interface that interacts with different calling backends while maintaining a single user interface.
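As a non-limiting sketch of such a unified calling interface, the following example routes a single call step through heterogeneous backends behind one abstract dial operation; the backend classes shown are hypothetical stubs:

```python
# Non-limiting sketch of a unified calling interface: one abstract dial()
# operation fans a single call step out to heterogeneous backends (e.g., a
# first-party app and the PSTN). The backend classes are hypothetical stubs.
from abc import ABC, abstractmethod
from typing import List, Tuple

class CallBackend(ABC):
    @abstractmethod
    def dial(self, receiver_id: str) -> None: ...

class AppBackend(CallBackend):
    def dial(self, receiver_id: str) -> None:
        print(f"VoIP/push call to app user {receiver_id}")

class PstnBackend(CallBackend):
    def dial(self, receiver_id: str) -> None:
        print(f"PSTN call to {receiver_id}")

def dial_step(receivers: List[Tuple[CallBackend, str]]) -> None:
    # Every receiver in the step is dialed through the same interface, so the
    # sequencing logic never branches on receiver type.
    for backend, receiver_id in receivers:
        backend.dial(receiver_id)

dial_step([(AppBackend(), "reception-app"), (PstnBackend(), "+1-555-0100")])
```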
As described herein, the smart intercom system can be configured to reduce the number of calls dispatched to recipients who, for example, are unavailable to receive the call and/or who are scheduled to be absent. For example, the smart intercom system can be configured to remove an indication of a receiver from a call routing sequence based on (1) badge event data indicating that the receiver is absent and/or (2) a status associated with a compute device of the receiver indicating that the receiver has not interacted with the compute device for at least a predefined period of time. As a result, the smart intercom system can prevent and/or reduce the initiation of calls that are likely to go unanswered by a recipient, which can conserve system resources such as network bandwidth and/or the like.
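One possible, non-limiting realization of this pruning logic is sketched below; the field names and the thirty-minute inactivity threshold are illustrative assumptions:

```python
# Assumed, non-limiting sketch of pruning a call routing sequence: a receiver
# is removed if badge events show them badged out, or if their compute device
# has been idle for at least a predefined period (30 minutes, by assumption).
import time

IDLE_LIMIT_SECONDS = 30 * 60

def prune_receivers(receiver_ids, badge_state, last_interaction):
    now = time.time()
    kept = []
    for rid in receiver_ids:
        badged_out = badge_state.get(rid) == "out"
        idle = now - last_interaction.get(rid, 0.0) >= IDLE_LIMIT_SECONDS
        if not (badged_out or idle):
            kept.append(rid)
    return kept

print(prune_receivers(["a", "b"], {"a": "out"}, {"b": time.time()}))  # -> ['b']
```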
In some embodiments, representations of one or more events (e.g., event data 120I of
In some embodiments, a smart intercom system of the present disclosure (e.g., smart intercom system 100A of
In some embodiments, a smart intercom system is configured to facilitate two-way communications (including audio and/or video) between a caller at an intercom and one or more receiver compute devices (e.g., smart phone, desk phone, tablet, web app, mobile app, etc.). In some implementations, a single smart intercom system manages one or multiple intercoms associated with a given building, group of buildings (e.g., a campus), or geographic area. In other implementations, multiple intercom systems can be used to manage one or multiple intercoms associated with a given building, group of buildings (e.g., a campus), or geographic area.
In some embodiments, a smart intercom system is configured to determine one or more device statuses (e.g., “Away” in Zoom) associated with one or more receivers and/or users associated with a call sequence, and dynamically adjust (e.g., modify) the call sequence based on the device statuses. For example, the smart intercom system may determine the one or more device statuses using an “ask” feature of one or more apps (e.g., to retrieve schedule data, access control data, etc.). Alternatively or in addition, the smart intercom system may determine the absence or presence of a receiver and/or user associated with a call sequence based on a history of whether the user has badged in since their last badge-out.
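A minimal sketch of inferring presence from badge history, assuming a simple (timestamp, direction) event format for illustration, could proceed as follows:

```python
# Minimal sketch of inferring presence from badge history: a user is treated
# as present only if their most recent badge event is a badge-in. The
# (timestamp, direction) tuple format is an assumption for illustration.
from typing import List, Tuple

def is_present(badge_events: List[Tuple[float, str]]) -> bool:
    """badge_events: (timestamp, "in" | "out") pairs, in any order."""
    if not badge_events:
        return False
    latest = max(badge_events, key=lambda event: event[0])
    return latest[1] == "in"

events = [(100.0, "in"), (250.0, "out"), (300.0, "in")]
print(is_present(events))  # True: the user badged in after their last badge-out
```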
In some embodiments, a smart intercom system is configured to route an intercom call to voicemail if the intercom call is not answered upon completion of a call sequence.
In some embodiments, a smart intercom system is configured to call multiple different client devices of a variety of different types. Alternatively or in addition, the smart intercom system can be configured to automatically change a call recipient (e.g., within a call sequence) based on contextual data (e.g., detection of a package possessed and/or being carried by a caller, and re-routing the call to a different receiver/recipient based on the detected package). Alternatively or in addition, the smart intercom system can be configured to perform call forwarding in response to speech recognition. Alternatively or in addition, the smart intercom system can be configured to perform, using facial recognition, a person of interest search on video/camera data. The person of interest search can include, for example, selecting a video frame and conducting people analytics to determine historical data associated with a person detected in the video frame (e.g., historical visit logs (dates, times) indicated by historical event data, roles/tags associated with the person, access control decisions associated with the person, etc.). Alternatively or in addition, the smart intercom system can be configured to perform auto-framing/auto-zooming of video imagery of an intercom caller. Alternatively or in addition, the smart intercom system can be configured to perform a background check on an intercom caller, e.g., based on an image of their face, a form of identification of the intercom caller being displayed to a camera of the intercom, and/or historical caller profile data associated with the intercom caller. Alternatively or in addition, the smart intercom system can be configured to implement a talk-down feature. For example, a building occupant (e.g., a receptionist) can converse with a visitor (e.g., who is positioned outside an entry of the building or who otherwise is physically/geographically remote from the building occupant) via a speaker and microphone that are operably coupled to the smart intercom system.
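As one hedged illustration of the contextual re-routing described above, the following sketch stubs a hypothetical package detector and redirects the call recipient accordingly; detect_package is a stand-in, not an actual detector:

```python
# Hedged sketch of contextual re-routing: if a (stubbed, hypothetical) package
# detector fires on a video frame, the call is redirected from the default
# recipient to a different one (a mailroom, in this illustration).
def detect_package(video_frame: dict) -> bool:
    # Stand-in for a real computer vision detector; here the "detection" is
    # simply the presence of a bounding box in the illustrative frame dict.
    return bool(video_frame.get("package_bbox"))

def choose_recipient(video_frame: dict, default_recipient: str = "front_desk") -> str:
    return "mailroom" if detect_package(video_frame) else default_recipient

print(choose_recipient({"package_bbox": (10, 20, 80, 90)}))  # -> "mailroom"
print(choose_recipient({}))                                  # -> "front_desk"
```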
In one or more embodiments, a smart intercom system can leverage the co-location and/or interconnectedness of a plurality of sensors and/or subsystems, such as camera(s), access control system(s), intercom(s), speaker(s), microphone(s), biometric sensor(s) (e.g., palm vein scanners, retinal scanners, etc.), facial recognition software, voice recognition software, and/or the like, to dynamically/reactively generate or modify a call routing sequence and/or to grant or modify access to a resource more efficiently than would otherwise be possible (e.g., using an individual component of the overall system), and/or may identify/classify visitors with a greater accuracy based on data from those sensors and/or subsystems (i.e., via sensor “fusion”). For example, a first camera of an intercom positioned at a building ingress may capture a facial image of a visitor and make a first determination, based on the facial image, as to the identity and/or legitimacy of the visitor. A second camera, positioned behind the visitor and in the vicinity of the building ingress, may capture a full-body image of the visitor and make a second determination, based on the full-body image, as to the identity and/or legitimacy of the visitor. Voice recognition software can make a third determination, based on voice data collected by a microphone into which the visitor is speaking, as to the identity and/or legitimacy of the visitor. The smart intercom system can then generate or modify a call routing sequence and/or grant or modify access to a resource (e.g., entry to the building) based on an analysis of the first determination, the second determination, and/or the third determination. The analysis of the first determination, the second determination, and/or the third determination can include prioritizing one or more predefined data types, attempting to reconcile pairs or subsets of the determinations, calculating one or more scores based on the three determinations and a visitor type/category, etc. For example, if the first determination and the second determination match (the same identity is determined, and they agree that the visitor is legitimate) but the third determination conflicts (e.g., the identity is not recognized and/or the visitor's legitimacy has a low associated confidence), entry to the building may be denied, or alternatively, additional data may be retrieved and taken into account by the smart intercom system (e.g., additional “safety measures” may be triggered). In other words, the smart intercom system may (optionally automatically, without human intervention) request or retrieve additional data from one or more sensors/data sources in response to identifying a conflict between two or more determinations as to the identity and/or legitimacy of the visitor. For example, the smart intercom system may prompt the visitor to repeat their name into the microphone, in an attempt to capture a higher-quality voice sample, re-run the voice recognition software, and then recalculate the third determination as to the identity and/or legitimacy of the visitor, which may in turn result in granting the visitor access/entry to the building. Although described above as involving three determinations as to the identity and/or legitimacy of the visitor (based on associated sensors and/or subsystems), any other number of determinations is also contemplated (e.g., two determinations, or four or more determinations).
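One non-limiting way to reconcile multiple (determination, confidence) pairs is sketched below; the unanimity rule and the 0.7 confidence threshold are illustrative assumptions, not a mandated fusion policy:

```python
# Non-limiting sketch of reconciling multiple (identity, confidence)
# determinations. The unanimity rule and the 0.7 confidence threshold are
# illustrative assumptions, not a mandated fusion policy.
from collections import Counter
from typing import List, Tuple

def fuse(determinations: List[Tuple[str, float]], min_conf: float = 0.7) -> str:
    confident = [identity for identity, conf in determinations if conf >= min_conf]
    if not confident:
        return "request_additional_data"
    identity, votes = Counter(confident).most_common(1)[0]
    if votes == len(determinations):   # every determination agrees
        return f"grant:{identity}"
    return "request_additional_data"   # conflict or low-confidence outlier

# Two cameras agree, voice recognition conflicts -> more data is requested.
print(fuse([("alice", 0.9), ("alice", 0.85), ("unknown", 0.3)]))
```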
In some implementations, the smart intercom system can be configured to generate a call routing sequence and/or grant access based on, for example, a combination of at least two of video data collected using the one or more cameras, voice data collected using the intercom, and/or schedule data. For example, the smart intercom system can use (1) a first machine learning model to classify a visitor based on video and/or image data that depicts the visitor and (2) a second machine learning model to classify a visitor based on audio (e.g., speech) data collected from the visitor via a microphone of the intercom. The first machine learning model and the second machine learning model can be jointly trained, such that their respective outputs (e.g., classifications) can be associated with a common and/or equivalent vector space. As a result of the common and/or equivalent vector space, the classifications of the first machine learning model and the second machine learning model can be compared. For example, the first machine learning model can classify the visitor as a delivery person based on a uniform worn by the visitor and/or a package possessed by the visitor. Alternatively or in addition, the first machine learning model can classify the visitor as a specific individual based on facial recognition. The second machine learning model can classify the visitor as a delivery person based on, for example, audio data that represents a statement made by the visitor and received at the microphone (e.g., “I work for FedEx® and I have a package to deliver”). Alternatively or in addition, the second machine learning model can classify the visitor as a specific individual based on speech patterns within the audio data.
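Because the jointly trained models share a common vector space, their outputs can be compared directly. The following sketch uses cosine similarity over illustrative stand-in embeddings; no particular model architecture is implied:

```python
# Sketch of comparing outputs that live in a common vector space: cosine
# similarity above an assumed threshold is treated as agreement between the
# vision and speech classifications. The embeddings are illustrative stand-ins.
import math
from typing import List

def cosine(u: List[float], v: List[float]) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

vision_embedding = [0.9, 0.1, 0.0]    # e.g., near the "delivery person" region
speech_embedding = [0.85, 0.15, 0.05]
agree = cosine(vision_embedding, speech_embedding) > 0.9
print("classifications agree" if agree else "classifications conflict")
```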
The smart intercom system can be configured to compare the respective classifications of the first machine learning model and the second machine learning model to verify the identity of the visitor. For example, if each of the first machine learning model and the second machine learning model generates an output that indicates that the visitor works for UPS®, the smart intercom system can (1) configure the call routing sequence to include an indication of a shipping and receiving department and/or (2) grant the visitor access to a warehouse. If, however, the first machine learning model generates a first classification that indicates that the visitor is plain-clothed, and the second machine learning model generates an output that indicates that the visitor works for UPS® (e.g., as a result of the visitor falsely claiming that they work for UPS®), the smart intercom system can be configured to, for example, (1) play a message to the visitor that instructs the visitor to wait and/or (2) cause an alert to be sent to security indicating the presence of a suspicious visitor. In some implementations, the smart intercom system can be configured to verify a classification(s) generated by the first machine learning model and/or the second machine learning model based on the schedule data. For example, the smart intercom system can verify a delivery person classification for a visitor based on the date and/or time of the visit occurring within predefined business hours (e.g., between 9 am and 5 pm on weekdays).
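A minimal sketch of the schedule-based verification described above, assuming the predefined 9 am to 5 pm weekday window, could be:

```python
# Minimal sketch of the schedule-based check: a delivery-person classification
# is verified only if the visit falls within the assumed business-hours window
# (9 am to 5 pm on weekdays).
from datetime import datetime

def within_business_hours(ts: datetime) -> bool:
    return ts.weekday() < 5 and 9 <= ts.hour < 17

print(within_business_hours(datetime(2023, 2, 6, 10, 30)))  # Monday morning -> True
print(within_business_hours(datetime(2023, 2, 5, 10, 30)))  # Sunday -> False
```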
In some implementations, the smart intercom system can be configured to generate a confidence score associated with a classification(s) generated by the first machine learning model and/or the second machine learning model. For example, if, as depicted in the video data, a visitor's face is occluded (e.g., because the visitor is wearing a mask, hat, etc.) and/or the visitor is not depicted clearly (e.g., due to low lighting, poor image quality, etc.), a confidence score associated with a classification from the first machine learning model can be low (e.g., below a threshold value). As a result, the smart intercom system can be configured to select the classification generated by the second machine learning model as the predicted classification for the visitor.
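The confidence-gated fallback described above could be sketched as follows, with an assumed threshold value:

```python
# Sketch of the confidence-gated fallback: when the vision model's confidence
# is below an assumed threshold (e.g., an occluded face), the audio model's
# classification is selected instead.
from typing import Tuple

CONF_THRESHOLD = 0.6  # illustrative threshold value

def select_classification(vision: Tuple[str, float], audio: Tuple[str, float]) -> Tuple[str, float]:
    return vision if vision[1] >= CONF_THRESHOLD else audio

print(select_classification(("unknown", 0.3), ("delivery_person", 0.8)))
```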
In some embodiments, an apparatus includes a processor and a memory operably coupled to the processor. The memory stores instructions to cause the processor to receive, from an intercom system, a call request and receive, in response to the call request, routing data. The memory also stores instructions to cause the processor to modify, based on the routing data, a first call routing sequence associated with a first set of receivers to produce a second call routing sequence associated with a second set of receivers. Additionally, the memory stores instructions to cause the processor to cause, in response to the call request and based on the second call routing sequence, a message to be sent to a compute device associated with a receiver from the second set of receivers.
In some implementations, the memory can further store instructions to cause the processor to receive a video stream including a series of video frames and generate, using a computer vision model, caller profile data associated with a person depicted in at least one video frame from the series of video frames. Additionally, the instructions to cause the processor to receive the routing data can include instructions to receive the caller profile data, and the instructions to cause the processor to modify the first call routing sequence can include instructions to, based on the caller profile data, one of add a receiver identifier associated with the receiver to the first call routing sequence or increase a priority of the receiver identifier in the first call routing sequence.
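By way of non-limiting illustration, the “add or increase priority” modification could be sketched as follows, assuming (for illustration only) that the routing sequence is represented as an ordered list of receiver identifiers in which earlier entries ring first:

```python
# Non-limiting sketch of the "add or increase priority" modification, assuming
# the routing sequence is an ordered list of receiver identifiers (earlier
# entries ring first).
from typing import List

def add_or_promote(sequence: List[str], receiver_id: str) -> List[str]:
    seq = list(sequence)
    if receiver_id in seq:
        seq.remove(receiver_id)
        seq.insert(0, receiver_id)   # increase priority: ring first
    else:
        seq.append(receiver_id)      # add the receiver to the sequence
    return seq

print(add_or_promote(["front_desk", "security"], "mailroom"))  # appended
print(add_or_promote(["front_desk", "mailroom"], "mailroom"))  # promoted
```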
In some implementations, the intercom system can further include a video camera operably coupled to the processor, the video stream being generated by the video camera. In some implementations, the instructions to cause the processor to generate the caller profile data can include instructions to generate metadata based on the person having been depicted in a previous video stream received from the intercom system. Additionally, the instructions to cause the processor to modify the first call routing sequence can include instructions to one of add a receiver identifier associated with the receiver to the first call routing sequence or increase a priority of the receiver identifier in the first call routing sequence, based on the metadata.
In some implementations, the caller profile data can be associated with at least one of a uniform worn by the person or a package possessed by the person. In some implementations, the memory can further store instructions to send a signal to cause a background check to be performed based on the caller profile data. In some implementations, the routing data can include presence data associated with the receiver. Additionally, the instructions to cause the processor to modify the first call routing sequence can include instructions to one of add a receiver identifier associated with the receiver to the first call routing sequence or increase a priority of the receiver identifier in the first call routing sequence, based on the presence data. In some implementations, the presence data can be associated with a badge scan event.
In some implementations, the presence data can be generated using at least one of facial recognition or speech recognition. In some implementations, the routing data can include schedule data. Additionally, the instructions to cause the processor to modify the first call routing sequence can include instructions to replace, based on the schedule data, the first call routing sequence with the second call routing sequence selected from a plurality of predefined call routing sequences that includes the first call routing sequence and the second call routing sequence. In some implementations, the memory can further store instructions to cause the processor to modify the second call routing sequence based on the schedule data indicating a holiday. In some implementations, the message can indicate a future return date based on the schedule data.
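A minimal sketch of selecting among predefined call routing sequences based on schedule data follows; the keys, sequences, and holiday set are illustrative assumptions:

```python
# Sketch of schedule-driven replacement: the active call routing sequence is
# selected from a set of predefined sequences keyed by schedule context. The
# keys, sequences, and holiday set are illustrative assumptions.
from datetime import date
from typing import List, Set

PREDEFINED = {
    "weekday": ["front_desk", "office_mgr"],
    "weekend": ["security"],
    "holiday": ["security"],
}

def select_sequence(today: date, holidays: Set[date]) -> List[str]:
    if today in holidays:
        return PREDEFINED["holiday"]
    return PREDEFINED["weekday"] if today.weekday() < 5 else PREDEFINED["weekend"]

print(select_sequence(date(2023, 12, 25), {date(2023, 12, 25)}))  # -> ['security']
```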
In some implementations, the routing data can include status data associated with the compute device. Additionally, the instructions to cause the processor to modify the first call routing sequence can include instructions to, based on the status data, one of add a receiver identifier associated with the receiver to the first call routing sequence or increase a priority of the receiver identifier in the first call routing sequence. In some implementations, the message can be associated with at least one of Short Message Service (SMS), a telephone call, an email, or a mobile push notification.
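One hedged sketch of dispatching the message over the channel associated with the compute device (SMS, telephone call, email, or mobile push) follows; the senders are print stubs, and no particular messaging API is implied:

```python
# Hedged sketch of channel fan-out: the message is dispatched over whichever
# channel is associated with the receiver's compute device. The senders are
# print stubs; no particular SMS/telephony/email/push API is implied.
def send_message(channel: str, address: str, body: str) -> None:
    senders = {
        "sms":   lambda: print(f"SMS to {address}: {body}"),
        "call":  lambda: print(f"placing telephone call to {address}"),
        "email": lambda: print(f"email to {address}: {body}"),
        "push":  lambda: print(f"mobile push to {address}: {body}"),
    }
    senders[channel]()  # raises KeyError for an unknown channel

send_message("push", "device-42", "Visitor waiting at the intercom")
```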
In some embodiments, a non-transitory, processor-readable medium stores instructions that, when executed by a processor, cause the processor to receive a call request associated with a caller. The instructions also cause the processor to generate a receiver identifier based on at least one of (1) badge event data associated with a receiver or (2) video data associated with the caller. Additionally, a modified call routing sequence is generated based on the receiver identifier and a predefined call routing sequence that is different from the modified call routing sequence. The modified call routing sequence includes the receiver identifier. The instructions also cause the processor to cause a message to be sent to a compute device associated with the receiver based on the modified call routing sequence.
In some implementations, the instructions to generate the receiver identifier include instructions to generate the receiver identifier based on a predefined schedule of receiver availability. In some implementations, the instructions to cause the message to be sent can include instructions to cause the message to be sent to at least two compute devices associated with at least two receivers identified by the modified call routing sequence. In some implementations, the non-transitory, processor-readable medium can further store instructions to cause the processor to delete at least one receiver identifier from at least one of the predefined call routing sequence or the modified call routing sequence based on at least one of the badge event data or status data. In some implementations, the video data can depict the caller at least one of (1) wearing a uniform or (2) in possession of a package. Additionally, the instructions to generate the receiver identifier can include instructions to detect, using a computer vision model and based on the video data, at least one of the uniform or the package. In some implementations, the instructions to generate the receiver identifier can include instructions to generate the receiver identifier based on historical event data associated with the caller.
In some embodiments, an apparatus includes a processor and a memory operably coupled to the processor. The memory stores instructions to cause the processor to receive a call request via an intercom operably coupled to the processor and further receive at least one of (1) badge event data via an access control device operably coupled to the processor, or (2) video data via a camera operably coupled to the processor, the intercom, the access control device, and the camera being co-located at an ingress location. Additionally, the instructions cause the processor to cause display, via a graphical user interface operably coupled to the processor, of a depiction of a modified call routing sequence that is generated based on a predefined call routing sequence and the at least one of the badge event data or the video data.
All combinations of the foregoing concepts and additional concepts discussed herein (provided such concepts are not mutually inconsistent) are contemplated as being part of the subject matter disclosed herein. The terminology explicitly employed herein that also may appear in any disclosure incorporated by reference should be accorded a meaning most consistent with the particular concepts disclosed herein.
The drawings are primarily for illustrative purposes, and are not intended to limit the scope of the subject matter described herein. The drawings are not necessarily to scale; in some instances, various aspects of the subject matter disclosed herein may be shown exaggerated or enlarged in the drawings to facilitate an understanding of different features. In the drawings, like reference characters generally refer to like features (e.g., functionally similar and/or structurally similar elements).
The entirety of this application (including the Cover Page, Title, Headings, Background, Summary, Brief Description of the Drawings, Detailed Description, Embodiments, Abstract, Figures, Appendices, and otherwise) shows, by way of illustration, various embodiments in which the disclosed embodiments may be practiced. The advantages and features of the application are of a representative sample of embodiments only, and are not exhaustive and/or exclusive. Rather, they are presented to assist in understanding and teach the embodiments, and are not representative of all embodiments. As such, certain aspects of the disclosure have not been discussed herein. That alternate embodiments may not have been presented for a specific portion of the innovations or that further undescribed alternate embodiments may be available for a portion is not to be considered to exclude such alternate embodiments from the scope of the disclosure. It will be appreciated that many of those undescribed embodiments incorporate the same principles of the innovations and others are equivalent. Thus, it is to be understood that other embodiments may be utilized and functional, logical, operational, organizational, structural and/or topological modifications may be made without departing from the scope and/or spirit of the disclosure. As such, all examples and/or embodiments are deemed to be non-limiting throughout this disclosure.
Also, no inference should be drawn regarding those embodiments discussed herein relative to those not discussed herein other than it is as such for purposes of reducing space and repetition. For instance, it is to be understood that the logical and/or topological structure of any combination of any program components (a component collection), other components and/or any present feature sets as described in the figures and/or throughout are not limited to a fixed operating order and/or arrangement, but rather, any disclosed order is exemplary and all equivalents, regardless of order, are contemplated by the disclosure.
The term “automatically” is used herein to modify actions that occur without direct input or prompting by an external source such as a user. Automatically occurring actions can occur periodically, sporadically, in response to a detected event (e.g., a user logging in), or according to a predetermined schedule.
The term “determining” encompasses a wide variety of actions and, therefore, “determining” can include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” can include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, “determining” can include resolving, selecting, choosing, establishing and the like.
The phrase “based on” does not mean “based only on,” unless expressly specified otherwise. In other words, the phrase “based on” describes both “based only on” and “based at least on.”
The term “processor” should be interpreted broadly to encompass a general purpose processor, a central processing unit (CPU), a microprocessor, a digital signal processor (DSP), a controller, a microcontroller, a state machine and so forth. Under some circumstances, a “processor” may refer to an application specific integrated circuit (ASIC), a programmable logic device (PLD), a field programmable gate array (FPGA), etc. The term “processor” may refer to a combination of processing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core or any other such configuration.
The term “memory” should be interpreted broadly to encompass any electronic component capable of storing electronic information. The term memory may refer to various types of processor-readable media such as random access memory (RAM), read-only memory (ROM), non-volatile random access memory (NVRAM), programmable read-only memory (PROM), erasable programmable read only memory (EPROM), electrically erasable PROM (EEPROM), flash memory, magnetic or optical data storage, registers, etc. Memory is said to be in electronic communication with a processor if the processor can read information from and/or write information to the memory. Memory that is integral to a processor is in electronic communication with the processor.
The terms “instructions” and “code” should be interpreted broadly to include any type of computer-readable statement(s). For example, the terms “instructions” and “code” may refer to one or more programs, routines, sub-routines, functions, procedures, etc. “Instructions” and “code” may comprise a single computer-readable statement or many computer-readable statements.
Some embodiments described herein relate to a computer storage product with a non-transitory computer-readable medium (also can be referred to as a non-transitory processor-readable medium) having instructions or computer code thereon for performing various computer-implemented operations. The computer-readable medium (or processor-readable medium) is non-transitory in the sense that it does not include transitory propagating signals per se (e.g., a propagating electromagnetic wave carrying information on a transmission medium such as space or a cable). The media and computer code (also can be referred to as code) may be those designed and constructed for the specific purpose or purposes. Examples of non-transitory computer-readable media include, but are not limited to, magnetic storage media such as hard disks, floppy disks, and magnetic tape; optical storage media such as Compact Disc/Digital Video Discs (CD/DVDs), Compact Disc-Read Only Memories (CD-ROMs), and holographic devices; magneto-optical storage media such as optical disks; carrier wave signal processing modules; and hardware devices that are specially configured to store and execute program code, such as Application-Specific Integrated Circuits (ASICs), Programmable Logic Devices (PLDs), Read-Only Memory (ROM) and Random-Access Memory (RAM) devices. Other embodiments described herein relate to a computer program product, which can include, for example, the instructions and/or computer code discussed herein.
Some embodiments and/or methods described herein can be performed by software (executed on hardware), hardware, or a combination thereof. Hardware modules may include, for example, a general-purpose processor, a field programmable gate array (FPGA), and/or an application specific integrated circuit (ASIC). Software modules (executed on hardware) can be expressed in a variety of software languages (e.g., computer code), including C, C++, Java™, Ruby, Visual Basic™, and/or other object-oriented, procedural, or other programming language and development tools. Examples of computer code include, but are not limited to, micro-code or micro-instructions, machine instructions, such as produced by a compiler, code used to produce a web service, and files containing higher-level instructions that are executed by a computer using an interpreter. For example, embodiments may be implemented using imperative programming languages (e.g., C, Fortran, etc.), functional programming languages (Haskell, Erlang, etc.), logical programming languages (e.g., Prolog), object-oriented programming languages (e.g., Java, C++, etc.) or other suitable programming languages and/or development tools. Additional examples of computer code include, but are not limited to, control signals, encrypted code, and compressed code.
Various concepts may be embodied as one or more methods, of which at least one example has been provided. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments. Put differently, it is to be understood that such features may not necessarily be limited to a particular order of execution, but rather, any number of threads, processes, services, servers, and/or the like that may execute serially, asynchronously, concurrently, in parallel, simultaneously, synchronously, and/or the like in a manner consistent with the disclosure. As such, some of these features may be mutually contradictory, in that they cannot be simultaneously present in a single embodiment. Similarly, some features are applicable to one aspect of the innovations, and inapplicable to others.
In addition, the disclosure may include other innovations not presently described. Applicant reserves all rights in such innovations, including the right to claim such innovations, file additional applications, continuations, continuations-in-part, divisionals, and/or the like thereof. As such, it should be understood that advantages, embodiments, examples, functional, features, logical, operational, organizational, structural, topological, and/or other aspects of the disclosure are not to be considered limitations on the disclosure as defined by the embodiments or limitations on equivalents to the embodiments. Depending on the particular desires and/or characteristics of an individual and/or enterprise user, database configuration and/or relational model, data type, data transmission and/or network framework, syntax structure, and/or the like, various embodiments of the technology disclosed herein may be implemented in a manner that enables a great deal of flexibility and customization as described herein.
All definitions, as defined and used herein, should be understood to control over dictionary definitions, definitions in documents incorporated by reference, and/or ordinary meanings of the defined terms.
As used herein, in particular embodiments, the terms “about” or “approximately” when preceding a numerical value indicates the value plus or minus a range of 10%. Where a range of values is provided, it is understood that each intervening value, to the tenth of the unit of the lower limit unless the context clearly dictates otherwise, between the upper and lower limit of that range and any other stated or intervening value in that stated range is encompassed within the disclosure. That the upper and lower limits of these smaller ranges can independently be included in the smaller ranges is also encompassed within the disclosure, subject to any specifically excluded limit in the stated range. Where the stated range includes one or both of the limits, ranges excluding either or both of those included limits are also included in the disclosure.
The indefinite articles “a” and “an,” as used herein in the specification and in the embodiments, unless clearly indicated to the contrary, should be understood to mean “at least one.”
The phrase “and/or,” as used herein in the specification and in the embodiments, should be understood to mean “either or both” of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with “and/or” should be construed in the same fashion, i.e., “one or more” of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the “and/or” clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to “A and/or B”, when used in conjunction with open-ended language such as “comprising” can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.
As used herein in the specification and in the embodiments, “or” should be understood to have the same meaning as “and/or” as defined above. For example, when separating items in a list, “or” or “and/or” shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as “only one of” or “exactly one of,” or, when used in the embodiments, “consisting of,” will refer to the inclusion of exactly one element of a number or list of elements. In general, the term “or” as used herein shall only be interpreted as indicating exclusive alternatives (i.e., “one or the other but not both”) when preceded by terms of exclusivity, such as “either,” “one of,” “only one of,” or “exactly one of.” “Consisting essentially of,” when used in the embodiments, shall have its ordinary meaning as used in the field of patent law.
As used herein in the specification and in the embodiments, the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, “at least one of A and B” (or, equivalently, “at least one of A or B,” or, equivalently “at least one of A and/or B”) can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.
In the embodiments, as well as in the specification above, all transitional phrases such as “comprising,” “including,” “carrying,” “having,” “containing,” “involving,” “holding,” “composed of,” and the like are to be understood to be open-ended, i.e., to mean including but not limited to. Only the transitional phrases “consisting of” and “consisting essentially of” shall be closed or semi-closed transitional phrases, respectively, as set forth in the United States Patent Office Manual of Patent Examining Procedures, Section 2111.03.
This application claims priority to and the benefit of U.S. Provisional Patent Application No. 63/483,478, filed Feb. 6, 2023 and titled “Systems and Methods for Hybrid Intercom, Video Monitoring, and Access Control Operations,” the content of which is incorporated herein by reference in its entirety.