A system or environment such as a vehicle may be equipped with security features. For example, security features for a vehicle may determine whether a person attempting to enter and/or operate the vehicle is an authorized user. In some instances, security-related components, such as camera sensors and computerized image processing, can represent significant vehicle sensing and processing resources.
The present disclosure describes techniques for camera sensor actuation in a system, which may include a vehicle system. However, although example systems disclosed herein can include one or more camera sensors mounted on a vehicle, camera sensor actuation may be utilized to control access to other types of systems, such as systems for controlling access to buildings or facilities within buildings, or for controlling access to complex machinery other than vehicle systems. The camera sensors can have a field-of-view that permits the sensors to observe static or moving objects located in any direction with respect to the vehicle while the vehicle is proceeding along a path or while the vehicle is deactivated, parked (i.e., at rest), and unattended. Actuation of camera sensors in response to the vehicle being deactivated, parked, and unattended by a vehicle user (e.g., an operator or passenger) can operate to monitor the area external to the vehicle over extended periods of time. Such monitoring of the environment external to the vehicle may deter vehicle theft, vehicle break-in, damage, vandalism, etc., as well as provide a reporting channel from the vehicle to the vehicle's owner, an insurance company, a security services company, etc. Vehicle-mounted camera sensors can also be used in image recognition vehicle applications, in which an image of a person approaching the vehicle can be captured and processed utilizing image processing programming of a vehicle computer. Such image processing may permit the vehicle computer to determine whether the approaching person is an authorized user, e.g., an operator or authorized passenger, of the vehicle.
However, camera sensor monitoring of areas external to a deactivated vehicle can cause a significant drain on the battery resources of the vehicle. Thus, the duration of camera sensor operation may be reduced, so as to conserve battery resources so that such resources can be available to activate, e.g., actuate ignition of the vehicle, and/or to activate various battery-powered vehicle systems. In response to reducing the duration of camera sensor monitoring in an environment external to a deactivated vehicle, the vehicle may be more prone to theft, break-in, damage, vandalism, etc. In addition, the ability to obtain still or video images of persons involved in such events, e.g., for subsequent investigations, may be diminished. Further, other vehicle convenience features, e.g., illuminating cabin lighting when an authorized user (e.g., authorized operator or passenger) approaches the vehicle, unlocking of vehicle doors to permit keyless entry into the vehicle, etc., may be unavailable due to a loss of battery resources to execute these functions.
As described herein, a vehicle radar may be utilized to monitor the vehicle's external environment. The radar may be utilized in place of or to supplement camera sensor monitoring of an environment external to a deactivated vehicle. Thus, a vehicle radar, which may be used during driving operations to detect static or moving objects in the vehicle's environment, may also be utilized to provide monitoring of the vehicle's external environment while the vehicle is parked, deactivated, and unattended. As described herein, the vehicle radar may be operated in a Doppler mode, which may operate to transmit Doppler signals and, based on Doppler signals received from a moving object, determine whether the moving object represents an authorized user, e.g., operator or passenger, of the vehicle. In an example, in response to the Doppler radar detecting that a person approaching the vehicle does not represent an authorized user, the vehicle computer may selectively actuate a camera sensor to record images of the approaching moving object and then deactivate the camera sensor after recording and/or transmitting images of the unauthorized person to a server via a wireless communications link. In another example, in response to the Doppler radar detecting that a person approaching the vehicle represents an authorized user, the vehicle computer may maintain the camera sensor in a deactivated state. As a result of such selective use of camera sensors and computerized image processing, battery resources may be conserved so as to be available so that the vehicle computer can actuate vehicle engine ignition as well as actuate convenience features, such as permitting keyless entry into the vehicle, illuminating cabin lighting, etc.
In an example, a system can include a computer having a processor coupled to a memory. The memory can store instructions executable by the processor to process a received Doppler radar signal to extract a feature that represents a characteristic of a moving object. The instructions may additionally, in response to determining that the moving object excludes an authorized user based on the extracted feature, actuate a camera sensor.
In an example, the characteristic can be at least one of a gait, a heel strike, an arm swing, a head nod, a twist, or a turn of the authorized user.
In an example, the Doppler radar signal can include a dominant signal reflected from the moving object, and wherein the extracted feature indicates a characteristic gait based on the dominant signal.
In an example, the instructions can further include instructions to compute a composite feature of the authorized user based on a relationship between two or more extracted features of the Doppler radar signal.
In an example, the instructions can further include instructions to determine a trajectory of the moving object based on a dominant feature of the Doppler radar signal.
In an example, the instructions can further include instructions to determine a trajectory of the moving object and to assign a threat index to the moving object based on the trajectory.
In an example, the instructions can further include instructions to increase the threat index based on the trajectory indicating movement of the moving object toward the system.
In an example, the camera sensor can include a vehicle-mounted camera sensor having a field-of-view that includes the location of the moving object.
In an example, the camera sensor can include a vehicle-mounted camera sensor having a field-of-view that includes the moving object, in which actuation of the camera sensor includes capturing video images of the moving object.
In an example, the instructions can further include instructions to maintain the camera sensor in a deactivated state responsive to the moving object being determined to include the authorized user.
In an example, the instructions further include instructions to transition the system from a Doppler radar operational mode to a Bluetooth operational mode after executing the instructions to actuate the camera sensor.
In an example, the instructions can further include instructions to refine a computer model that includes the characteristic of the moving object based on successful recognitions that the moving object includes the authorized user.
In another example, a method can include processing a received Doppler radar signal to extract a feature that represents a characteristic of a moving object. The method can additionally include determining that the moving object excludes an authorized user based on the extracted feature and actuating a camera sensor.
In an example, the characteristic can be at least one of a gait, a heel strike, an arm swing, a head nod, a twist, or a turn of the authorized user.
In an example, the method can additionally include determining a trajectory of the moving object based on a dominant feature of the Doppler radar signal reflected from the moving object.
In an example, the method can additionally include determining a trajectory of the moving object and assigning a threat index to the moving object based on the trajectory.
In an example, the method can additionally include assigning an increased threat index based on the trajectory indicating movement of the moving object toward the camera sensor.
In an example, the method can additionally include maintaining the camera sensor in a deactivated state responsive to determining that the moving object includes the authorized user.
In an example, the method can additionally include transitioning the Doppler radar to a Bluetooth operational mode after actuating the camera sensor.
In an example, the method can additionally include refining a computer model that includes the characteristic of the authorized user based on successful recognitions that the moving object includes the authorized user.
Sensors 108 of vehicle 102 additionally include radio frequency (RF) sensors 108B. In the example of
In addition, and as described in greater detail below in reference to
Similar to extraction of facial features to determine whether person 120 represents an authorized user, e.g., an authorized operator or passenger, of vehicle 102, a Doppler signal feature can be utilized to determine whether person 120 represents an authorized user, e.g., an authorized operator or passenger, of vehicle 102. In an example, in response to a first threshold level of correlation, e.g., 0.85, 0.9, 0.95, between a Doppler signal feature that represents person 120 and the Doppler signal feature that represents an authorized user, e.g., operator or passenger, of vehicle 102 stored in a memory accessible to computer 104, the vehicle may illuminate cabin lighting of the vehicle, start the vehicle's engine, actuate the vehicle's horn, etc. In response to determining the first level of correlation between the Doppler signal features of person 120 and the Doppler signal features of an authorized user, camera sensor 108A can remain deactivated, thus conserving battery resources of vehicle 102.
In another example, in response to a second threshold level of correlation, e.g., 0.7, 0.75, 0.8, between the Doppler signal features of person 120 and the Doppler signal features of authorized users of vehicle 102, programming of computer 104 may actuate one or more of camera sensors 108A. Actuation of one or more of camera sensors 108A may result in the camera sensors capturing still or video images of person 120 for transmission via communications component 114 through network 130 to server 140. Server 140 can store features extracted from still or video images of person 120, 220, for use by the owner of vehicle 102, an insurance company, a security organization, etc.
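The two-tier threshold logic described above can be sketched as follows. This is a minimal illustration only: the threshold values follow the illustrative ranges in the text, while the function and action names are hypothetical and do not correspond to any actual vehicle API.

```python
def respond_to_mover(correlation, first_threshold=0.85, second_threshold=0.7):
    """Map a Doppler-feature correlation score to illustrative vehicle actions.

    correlation: similarity (0.0 to 1.0) between the mover's extracted
    Doppler signal features and stored features of authorized users.
    """
    if correlation >= first_threshold:
        # First threshold met: treat as an authorized user; the camera
        # sensor remains deactivated, conserving battery resources.
        return {"camera": "deactivated",
                "actions": ["illuminate_cabin", "enable_keyless_entry"]}
    if correlation >= second_threshold:
        # Second threshold met: actuate the camera to capture still or
        # video images for transmission to the server.
        return {"camera": "actuated",
                "actions": ["capture_images", "transmit_to_server"]}
    # Below both thresholds: likewise treat the mover as unrecognized.
    return {"camera": "actuated",
            "actions": ["capture_images", "transmit_to_server"]}
```

A caller would invoke, e.g., `respond_to_mover(0.9)` to obtain the convenience-feature actions while leaving the camera deactivated.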
RF sensors 108B can include at least two modes of operation. In a first mode of operation, sensors 108B can perform Doppler radar functions, including transmitting a Doppler radar signal to an area external to vehicle 102 and processing a received Doppler radar signal reflected from a moving object within field-of-view 115. With respect to a moving object, RF sensors 108B can determine Doppler signal features, which include a dominant Doppler signal feature and one or more Doppler side frequency features as discussed above. In addition, RF sensors 108B can transition from a Doppler mode to a Bluetooth localization mode in response to control signals from computer 104. In response to such transitioning, RF sensors 108B can localize an authorized user, e.g., operator or passenger, based on credentials stored within the authorized user's mobile communications device 122. In the example of
Sensors 108 of vehicle 102 can include sensors other than camera sensors 108A and RF sensors 108B, such as sensors for monitoring and detecting aspects of the vehicle's driving environment. Such sensors can include long-range radar sensors, LIDAR sensors, ultrasonic transducers, motion detectors, etc. Further, vehicle 102 can include sensors for measuring internal vehicle states and modes, such as a tachometer to measure engine speed (in revolutions per minute), wheel speed sensors, tire pressure sensors, etc.
Computer 104 can be generally programmed for communications on vehicle network 106, which may include a communications bus such as a CAN bus, LIN bus, etc., and/or other wired and/or wireless technologies, e.g., Ethernet, WIFI, Bluetooth®, Ultra-Wideband (UWB), etc. Vehicle network 106 can represent one or more mechanisms by which computer 104 may communicate with a remote computer, such as a remote server. Accordingly, vehicle network 106 can include one or more of various wired or wireless communication mechanisms, including any desired combination of wired (e.g., cable and fiber) and/or wireless (e.g., cellular, satellite, microwave, and radio frequency) communication mechanisms and any desired network topology (or topologies when multiple communication mechanisms are utilized). Exemplary communication networks include wireless communication networks (e.g., using Bluetooth®, Bluetooth® Low Energy (BLE), UWB, IEEE 802.11, vehicle-to-vehicle (V2V) such as Dedicated Short-Range Communications (DSRC), etc.), local area networks (LAN), and/or wide area networks (WAN), including the Internet, providing data communication services.
Alternatively or additionally, in examples in which computer 104 includes multiple devices, vehicle network 106 can be used for communications between devices represented as computer 104. In an example, computer 104 can be a generic computer with a processor and memory as described above and/or may include an electronic control unit (ECU) or controller or the like for a specific function or set of functions, and/or a dedicated electronic circuit including an ASIC that is manufactured for a particular operation, e.g., an ASIC for processing sensor data and/or communicating the sensor data. In another example, computer 104 may include an FPGA (Field-Programmable Gate Array) which is an integrated circuit manufactured to be configurable by an authorized user. Typically, a hardware description language such as VHDL (Very-High Speed Integrated Circuit Hardware Description Language) is used in electronic design automation to describe digital and mixed-signal systems such as FPGA and ASIC. For example, an ASIC is manufactured based on VHDL programming provided pre-manufacturing, whereas logical components inside an FPGA may be configured based on VHDL programming, e.g., stored in a memory electrically connected to the FPGA circuit. In some examples, a combination of processor(s), ASIC(s), and/or FPGA circuits may be included in computer 104. In addition, computer 104 may be programmed for communicating with communications component 114, human-machine interface (HMI) 112, and vehicle components utilizing vehicle network 106, which, as described below, may include various wired and/or wireless networking technologies, e.g., cellular, Bluetooth®, Bluetooth® Low Energy (BLE), UWB, wired and/or wireless packet networks, etc.
Vehicle 102 can include a variety of additional devices, such as vehicle components 110, which may utilize vehicle network 106 to transmit and receive, for example, data relating to vehicle speed, acceleration, location, subsystem and/or component status, etc. In an example, additional sensors of vehicle 102 can provide data for controlling operation of a component of a set of vehicle components 110, such as a component for controlling an aspect of vehicle operation. Vehicle components 110 can include other hardware entities adapted to perform a mechanical or electromechanical function or operation, such as moving vehicle 102, slowing or stopping the vehicle, steering the vehicle, etc. Vehicle components 110 can include transmission components, steering components (which may include one or more of a steering wheel, a steering rack, etc.), a brake component, a park assist component, an adaptive cruise control component, an adaptive steering component, a movable seat, and the like. Vehicle components 110 can further include computing devices, e.g., electronic control units (ECUs) or the like and/or computing devices such as described above with respect to computer 104, and that likewise communicate via vehicle network 106.
User identification programming 104A can additionally include a table or the like storing Doppler signal features, e.g., dominant Doppler signal features and Doppler side frequency features, of authorized operators and passengers of vehicle 102. Thus, in response to receiving a Doppler signal feature from RF sensor 108B that represents person 120, user identification programming 104A can determine whether the received Doppler signal features correlate with stored Doppler signal features of authorized users, e.g., operators or passengers, of vehicle 102. In an example, in response to determining a threshold correlation, e.g., 0.85, 0.9, 0.95, etc., between received Doppler signal features and stored Doppler signal features representative of authorized users of vehicle 102, programming 104A can actuate keyless entry of vehicle 102, illuminate cabin lights of vehicle 102, etc. In addition, user identification programming 104A can maintain camera sensor 108A in a deactivated state. In another example, in response to determining a lower correlation, e.g., less than 0.85, user identification programming 104A can deactivate keyless entry of vehicle 102 and maintain cabin lights in a deactivated state. Further, user identification programming 104A can actuate camera sensor 108A to capture still or video images of person 120 for transmission to server 140.
As described in reference to
Such transitions between operating modes of RF sensor 108B can permit sensors 108B to perform a dual or twofold function under the control of user identification programming 104A. For example, during a first interval, user identification programming 104A can instruct RF sensor 108B to operate in a Doppler radar mode, in which Doppler radar signals are transmitted to and received from a moving object external to vehicle 102. RF sensor 108B can then receive a signal reflected from the moving object and extract Doppler signal features, such as dominant Doppler signal features 310 and Doppler side frequency feature 315, from the received Doppler radar signal. RF sensor 108B can then transition to a Bluetooth phone-as-a-key mode, to detect a beacon transmitted from mobile communications device 122.
Thus, as seen in
Signal profile 305 additionally shows first Doppler side frequency feature 315, having frequency F2 and having an amplitude of D2. In the example of
Signal profile 355 additionally shows Doppler side frequency feature 365, having a frequency of F5 and having an amplitude of D5. Signal profile 355 further shows Doppler side frequency feature 370, having a frequency of F6 and an amplitude of D6. In the example of
Thus, it can be seen from
Accordingly, two or more features, e.g., dominant Doppler signal feature 310, Doppler side frequency feature 315, Doppler side frequency feature 320, may be combined to represent a composite feature of person 120 computed or derived from the two or more features. Similarly, in the example of person 220, being of larger stature than person 120, dominant Doppler signal feature 360 includes a frequency F4 and an amplitude of D4. Further, Doppler side frequency features 365 and 370, which result from characteristic arm swing and leg motions, exhibit amplitudes D5 and D6, respectively. In an example, a composite amplitude feature of person 120 may be computed as a relationship of dominant Doppler signal feature 310, having an amplitude (D1) of −60 decibels relative to one milliwatt (dBm), to Doppler side frequency feature 315, having an amplitude (D2) of −75 dBm. In this example, a composite amplitude feature of person 120 may be the ratio D1/D2, which would equal 0.8 (dimensionless). In another example, a composite frequency feature of person 120 may be computed as a relationship of dominant Doppler signal feature 310, having a frequency (F1) of 6.01 gigahertz, to Doppler side frequency feature 315, having a frequency (F2) of 6.02 gigahertz. In such an example, a composite frequency feature of person 120 may be the ratio F1/F2, which would equal approximately 0.998 (dimensionless).
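The composite-feature arithmetic above can be sketched in a few lines; the function name and signature are illustrative only, and the example inputs are the amplitudes and frequencies given in the text.

```python
def composite_features(d1_dbm, d2_dbm, f1_ghz, f2_ghz):
    """Compute dimensionless composite features as ratios between a dominant
    Doppler signal feature and one Doppler side frequency feature.

    d1_dbm, f1_ghz: amplitude and frequency of the dominant feature.
    d2_dbm, f2_ghz: amplitude and frequency of the side frequency feature.
    """
    amplitude_ratio = d1_dbm / d2_dbm   # e.g., -60 / -75 = 0.8
    frequency_ratio = f1_ghz / f2_ghz   # e.g., 6.01 / 6.02 ~= 0.998
    return amplitude_ratio, frequency_ratio
```

Because both composites are ratios, they remain unchanged when a signal profile is merely scaled along the amplitude axis as the person approaches, which is what makes them useful as person-specific signatures.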
Thus, similar to that of person 120, as person 220 moves toward vehicle 102, the ratios (or other types of numerical relationships) between D4 and D5 and between D4 and D6 remain constant since the cross-sectional areas presented by the portions of person 220, e.g., head, chest, arms, hands, legs, feet, etc., remain constant. Accordingly, two or more features, e.g., dominant Doppler signal feature 360, Doppler side frequency feature 365, Doppler side frequency feature 370, may form a composite feature of person 220 computed or derived from the two or more features. Thus, as persons 120 and 220 move from a point relatively distant from vehicle 102 to a point relatively close to the vehicle, the overall shapes of signal profiles 305 and 355 remain unchanged, although each signal profile (305, 355) may be scaled along the received Doppler signal amplitude (vertical) axis.
Also as seen in
Thus, as seen in
As previously described, dominant Doppler signal features, e.g., 310, 360, and 410, can be utilized to determine the gait or walking speed of a person, e.g., 120, 220. Thus, in response to detection of a moving object, e.g., a person, moving at a speed that is greater than a walking speed, e.g., 6.0 km/hour, 7.0 km/hour, 8.0 km/hour, the dominant Doppler signal feature may indicate that a person, e.g., 120, 220, is running toward vehicle 102. Thus, based on the speed indicated by a dominant Doppler signal feature (310, 360), e.g., person 120, 220 running toward vehicle 102, user identification programming 104A can assign a threat index, e.g., indicating possible imminent damage to vehicle 102. In an example, a person running toward vehicle 102 may be indicated by a dominant Doppler signal feature (310, 360) exhibiting a frequency shift proportional to the speed of the approaching person, along with Doppler side frequencies (315, 320, 365, 370) indicating high-speed arm swings and leg motions. In such an example, programming 104A can actuate camera sensor 108A to begin capturing still or video images of the approaching person, e.g., 120, 220, for transmission via communications component 114 through network 130 to server 140. Server 140 can store features extracted from still or video images of person 120, 220, for use by the owner of vehicle 102, an insurance company, a security organization, etc.
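The speed indicated by a dominant Doppler signal feature follows the standard two-way Doppler relation v = f_d · c / (2 · f_0). The sketch below assumes the 6.01 GHz carrier used in the earlier example and a hypothetical running threshold; neither value is prescribed by the disclosure.

```python
C = 299_792_458.0  # speed of light, m/s

def speed_from_doppler(shift_hz, carrier_hz):
    """Recover a mover's radial speed (m/s) from the Doppler frequency shift
    of the dominant feature, using the two-way relation v = f_d * c / (2 * f0).
    """
    return shift_hz * C / (2.0 * carrier_hz)

def is_running(shift_hz, carrier_hz, walk_limit_kmh=8.0):
    """Flag a mover whose speed exceeds an assumed walking-speed limit."""
    speed_kmh = speed_from_doppler(shift_hz, carrier_hz) * 3.6
    return speed_kmh > walk_limit_kmh
```

For a 6.01 GHz carrier, a shift of roughly 111 Hz corresponds to about 10 km/h (running), while roughly 56 Hz corresponds to about 5 km/h (walking).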
In another example, in response to numerous dominant Doppler signal features indicating that a group of persons, e.g., a chaotic mob of persons, appears to be approaching vehicle 102, programming 104A can assign a threat index, e.g., indicating possible imminent damage to vehicle 102. In such an instance, a chaotic mob of persons may be determined by RF sensor 108B detecting a group of dominant Doppler signal features and a group of Doppler side frequencies. In response, programming 104A can actuate camera sensor 108A so that still or video images of the group of approaching, and potentially chaotic, persons can be transmitted to server 140. In an example, a threat index may be based on a variety of factors, such as the speed of an approaching person, e.g., as determined by a frequency shift of a dominant Doppler signal feature (e.g., 310, 360), a distance between the approaching person and vehicle 102, as determined by an amplitude of a dominant Doppler signal feature (e.g., 310, 360), and/or a number of Doppler side frequency features (e.g., 315, 320, 365, 370). A threat index can include a value of between 0.0 and 1.0 depending on various factors, such as the speed at which the person (120, 220) approaches vehicle 102, the number of persons approaching the vehicle, etc. In an example, a solitary person (120) approaching vehicle 102 who has not been identified as an authorized user, e.g., operator or passenger, of vehicle 102 may be assigned a threat index of 0.4. In another example, a group of persons who appear to be moving rapidly toward vehicle 102 may be assigned an increased threat index, such as 0.9. In an example, as described in reference to
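One way to combine these factors into a 0.0 to 1.0 threat index is sketched below. The weighting is a hypothetical scoring, chosen only to land near the illustrative values in the text (roughly 0.4 for a solitary unrecognized walker, roughly 0.9 for a fast-approaching group); the disclosure does not specify a formula.

```python
def threat_index(speed_kmh, distance_m, n_movers):
    """Combine speed, proximity, and crowd size into a 0.0-1.0 threat index.

    speed_kmh: approach speed from the dominant-feature frequency shift.
    distance_m: range from the dominant-feature amplitude.
    n_movers: number of distinct dominant Doppler signal features.
    """
    speed_term = min(speed_kmh / 12.0, 1.0)             # faster -> higher
    proximity_term = max(0.0, 1.0 - distance_m / 50.0)  # closer -> higher
    crowd_term = min(n_movers / 5.0, 1.0)               # more movers -> higher
    index = 0.4 * speed_term + 0.3 * proximity_term + 0.3 * crowd_term
    return round(min(index, 1.0), 2)
```

With these assumed weights, a lone walker (5 km/h, 20 m away) scores about 0.41, while a group of five running at 12 km/h from 10 m scores about 0.94.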
In an example, user identification programming 104A can utilize machine learning programming to refine a computer model that correlates dominant Doppler signal features, e.g., 310, 360, 410, and Doppler side frequency features, e.g., 315, 320, 365, 370, 415, 420, with known or enrolled users of vehicle 102. Alternatively or in addition, a machine learning program could be trained to correlate dominant Doppler signal features and Doppler side frequencies to distinguish between a person, or group of persons, running toward a vehicle versus a person, or group of persons, casually strolling toward a vehicle. In an example, a user, e.g., an operator or passenger, of vehicle 102 may undergo an enrollment process, which allows an operator or a passenger to be recognized by user identification programming 104A. In an enrollment process, RF sensor 108B of vehicle 102 can capture reflected radar signals from a prospective user, e.g., an operator or passenger of the vehicle. RF sensor 108B may then extract dominant Doppler signal features (310, 360, 410) and Doppler side frequency features (315, 320, 365, 370, 415, 420) as the prospective user, e.g., operator or passenger, approaches vehicle 102. The dominant Doppler signal and side frequency features can then be utilized as a training set of input data to the computer model that implements the machine learning programming. The machine learning programming of user identification programming 104A can further train and/or refine the computer model over instances of successful recognition (e.g., true positives), incorrect recognition (e.g., false positives), and related recognition events with respect to an authorized user, e.g., 120, 220. Accordingly, over time, the machine learning programming can enhance the computer model of the user so as to increase the capability of programming of computer 104 to recognize an authorized user, e.g., 120, 220.
In an example, a suitable machine learning program to implement the computer model may include a deep neural network (DNN) that may be trained and then used to output, filter, and/or predict whether an approaching person, e.g., 120, 220, includes an authorized user, e.g., operator or passenger, of vehicle 102. A DNN can include programming, which can be loaded in memory and executed by a processor included in computer 104 of vehicle 102 and/or a processor of RF sensor 108B. In an example implementation, the DNN can include, but is not limited to, a convolutional neural network (CNN), a region-based CNN (R-CNN), a Fast R-CNN, or a Faster R-CNN. The DNN includes multiple nodes or neurons. The neurons are arranged so that the DNN includes an input layer, one or more hidden layers, and an output layer. The input and output layers may also include more than one node.
As an example, the DNN can be utilized to train the computer model with ground truth data, i.e., data about a real-world condition or state. For example, the DNN can be utilized to train the computer model with ground truth data and/or updated with additional data. Weights can be initialized by using a Gaussian distribution, for example, and a bias for each node can be set to zero. Training of the computer model via the DNN can include updating weights and biases via suitable techniques such as back-propagation with optimizations. Ground truth data means data deemed to represent a real-world environment, e.g., conditions and/or objects in the environment. Thus, ground truth data can include sensor data depicting an environment, e.g., a time of day, location, weather conditions, etc., along with a label or labels describing the environment, e.g., a label describing the data, e.g., calendar event, receipt of a prior access message, etc.
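The training loop can be illustrated with a much-simplified stand-in for the DNN: a single-layer logistic classifier with Gaussian-initialized weights, a zero-initialized bias, and gradient-descent (single-layer back-propagation) updates, trained on hypothetical two-element Doppler feature vectors. This is a sketch of the training principle only, not the multi-layer network described above.

```python
import math
import random

def train_classifier(samples, labels, epochs=500, lr=0.5):
    """Train a tiny logistic classifier on Doppler feature vectors.

    Weights start from a Gaussian distribution and the bias starts at zero,
    mirroring the initialization described in the text; updates follow plain
    gradient descent on the log-loss.
    """
    random.seed(0)                      # deterministic for the example
    n = len(samples[0])
    w = [random.gauss(0.0, 0.1) for _ in range(n)]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            g = p - y                         # gradient of log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g

    def predict(x):
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        return 1.0 / (1.0 + math.exp(-z)) > 0.5
    return predict
```

Trained on a handful of labeled feature vectors (label 1 for the enrolled user, 0 otherwise), the returned `predict` function classifies new feature vectors; refinement over true-positive and false-positive recognition events would amount to appending those events to the training set and retraining.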
In an example, user identification programming 104A can additionally utilize a time series of recognitions of a person, e.g., 120, 220, to determine whether the person is advancing toward (as described in reference to
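The time-series trajectory test can be sketched as a simple trend over sampled dominant-feature amplitudes, since received power grows as a mover nears the radar. The disclosure does not specify the exact computation, so the form below is an assumption for illustration.

```python
def trajectory(amplitudes_dbm):
    """Classify a mover as advancing or retreating from a time series of
    dominant Doppler feature amplitudes (dBm), sampled over the window.
    """
    if len(amplitudes_dbm) < 2:
        return "unknown"
    # Average sample-to-sample change in received amplitude.
    diffs = [b - a for a, b in zip(amplitudes_dbm, amplitudes_dbm[1:])]
    trend = sum(diffs) / len(diffs)
    if trend > 0:
        return "advancing"    # signal strengthening: mover approaching
    if trend < 0:
        return "retreating"   # signal weakening: mover departing
    return "stationary"
```

A window of five to fifteen seconds of recognitions, as suggested above, would supply the amplitude samples.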
Prior to initiating process 500, programming of computer 104 and/or server 140 can execute a machine learning program as a separate process to detect an authorized user (e.g., an operator or passenger) of vehicle 102. In an example, RF sensor 108B can capture reflected radar signals from a prospective user, e.g., operator or passenger of vehicle 102. RF sensor 108B can then extract dominant Doppler signal features (310, 360, 410) and Doppler side frequency features (315, 320, etc.) as the prospective user, e.g., operator or passenger, approaches vehicle 102. The machine learning programming of user identification programming 104A can further train and refine a computer model over instances of successful recognition (e.g., true positives), incorrect recognition (e.g., false positives), and related recognition events of an authorized user.
Process 500 can begin at block 510, which includes transmitting, such as via RF sensor 108B, a Doppler radar signal to an area external to vehicle 102. Transmission of Doppler radar signals can be performed while vehicle 102 is deactivated, parked, and unattended as a security measure to monitor areas external to vehicle 102.
Process 500 can continue at block 515, which includes processing received signals from a moving object, such as person 120, 220. Block 515 can additionally include processing received signals to determine dominant Doppler signal features (310, 360, 410), as well as Doppler side frequency features (315, 320, 365, 370, 415, 420).
Process 500 can continue at block 520, in which Doppler signal features, such as dominant Doppler signal features (310, 360, 410) and Doppler side frequency features (315, 320, 365, 370, 415, 420) are transmitted to user identification programming 104A. Programming 104A can determine a correlation between the Doppler signal features and the signal features of authorized users, e.g., operators or passengers, of vehicle 102 utilizing a table or other data structure stored in a memory accessible to programming 104A.
Process 500 can continue at block 525, at which programming 104A can utilize a time-series of recognitions of a person (120, 220), to determine a trajectory or other type of path over a time series of signal profiles (305, 355, 405). A time series of recognitions of a person (120, 220) may extend over any suitable duration, such as five seconds, 10 seconds, 15 seconds, etc.
Process 500 can continue at block 530, at which programming 104A can determine whether a person (120, 220) is advancing towards or retreating from vehicle 102 utilizing the trajectory determined at block 525. In an example, such as described in reference to
In response to the decision of block 530 determining that the moving object is moving toward vehicle 102, the process 500 can continue at block 535, at which user identification programming 104A can assign a threat index to a moving object. In an example, a threat index may be based on a variety of factors, such as the speed of an approaching person, e.g., as determined by a frequency shift of a dominant Doppler signal feature (310, 360), a distance between the approaching person and vehicle 102, as determined by an amplitude of a dominant Doppler signal feature (310, 360), and/or a number of distinct dominant Doppler signal features (310, 360, 410) and Doppler side frequency features (315, 320, 365, 370).
Process 500 can continue at block 540, at which programming of computer 104 may actuate camera sensor 108A, which can result in the camera sensor capturing still or video images of person 120 for transmission via communications component 114 through network 130 to server 140.
Process 500 can continue at block 545, at which programming of computer 104 can transition RF sensor 108B from a Doppler radar mode to a Bluetooth phone-as-a-key mode, which allows a person, e.g., person 120, to unlock doors, illuminate interior lighting, etc., utilizing mobile communications device 122.
Process 500 can continue at block 550, at which RF sensor 108B can perform Bluetooth phone-as-a-key functions. Such functions may include receiving a signal from a Bluetooth low energy beacon from mobile communications device 122 of person 120.
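The RF sensor mode switching of blocks 545-555 can be modeled as a small state machine that alternates between a Doppler radar mode and a Bluetooth phone-as-a-key mode. The mode names and transition triggers are assumptions for illustration; actual radio control of RF sensor 108B is outside this sketch.

```python
from enum import Enum, auto

class RfMode(Enum):
    DOPPLER_RADAR = auto()   # block 510: monitoring for moving objects
    PHONE_AS_KEY = auto()    # block 550: BLE beacon reception from device 122

class RfSensor:
    """Sketch of RF sensor 108B mode transitions under computer 104 control."""

    def __init__(self):
        self.mode = RfMode.DOPPLER_RADAR

    def on_user_approach(self):
        """Block 545: transition to Bluetooth phone-as-a-key mode."""
        self.mode = RfMode.PHONE_AS_KEY

    def on_key_session_complete(self):
        """Block 555: resume Doppler radar monitoring."""
        self.mode = RfMode.DOPPLER_RADAR
```

The single-mode-at-a-time design reflects the disclosure's point that one RF sensor serves both functions, so Doppler monitoring is suspended while phone-as-a-key functions run.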
Process 500 can continue at block 555, at which programming of computer 104 can transition RF sensor 108B to perform Doppler radar functions. In response to the transition of block 555, process 500 can return to block 510.
Alternatively, following block 555, process 500 ends.
The descriptions of the various examples and implementations have been presented for purposes of illustration but are not intended to be exhaustive or limited to the implementations disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described implementations. The terminology used herein was chosen to best explain the principles of the implementations, the practical application or technical enhancements over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the implementations disclosed herein.
As will be appreciated, the methods and systems described may be implemented as a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out operations discussed herein.
The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
Computer readable program instructions for carrying out operations may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some implementations, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry.
Various implementations are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
These computer readable program instructions may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
All terms used in the claims are intended to be given their plain and ordinary meanings as understood by those skilled in the art unless an explicit indication to the contrary is made herein. In particular, use of the singular articles such as “a,” “the,” “said,” etc. should be read to recite one or more of the indicated elements unless a claim recites an explicit limitation to the contrary. Use of “in response to” and “upon determining” indicates a causal relationship, not merely a temporal relationship.
The disclosure has been described in an illustrative manner, and it is to be understood that the terminology which has been used is intended to be in the nature of words of description rather than of limitation. Many modifications and variations of the present disclosure are possible in light of the above teachings, and the disclosure may be practiced otherwise than as specifically described.