Adaptive perception affected by V2X signal

Information

  • Patent Application 20240005794
  • Publication Number: 20240005794
  • Date Filed: September 15, 2023
  • Date Published: January 04, 2024
Abstract
A method for field related driving, the method includes receiving content from an information source located outside of a vehicle; obtaining object information regarding one or more objects located within an environment of the vehicle; estimating, by using a neural network (NN), and based on the object information, one or more virtual fields of the one or more objects; and determining, based on the one or more virtual fields, a virtual force for use in applying a driving related operation of the vehicle, wherein the virtual force is associated with a physical model and represents an impact of the one or more objects on a behavior of the vehicle. At least one step of the obtaining, the estimating and the determining is impacted by the content.
Description
BACKGROUND

Autonomous vehicles (AVs) could help vastly reduce the number of traffic accidents and CO2 emissions as well as contribute to a more efficient transportation system. However, today's candidate AV technologies are not scalable in the following three ways:


Limited field of view, lighting and weather challenges, and occlusions all lead to detection error and noisy localization/kinematics. In order to deal with such poor real-world perception output, one approach to AV technology is to invest in expensive sensors and/or to integrate specialized infrastructure into the road network. However, such an endeavor is very costly and—in the case of infrastructure—geographically limited, and therefore cannot lead to generally accessible AV technology.


AV technology, which is not based on costly hardware and infrastructure, relies entirely on machine learning and hence data to handle real-world situations. In order to deal with detection error as well as to learn a good enough driving policy for the complex task of driving, a vast amount of data and computational resources are required, and still there are edge cases that are not handled correctly. The common denominator in these edge cases is that the machine learning model does not generalize well to unseen or confusing situations, and due to the black-box nature of deep neural networks it is difficult to analyze faulty behavior.


Current road-ready automated driving is implemented in the form of separate ADAS functions such as ACC, AEB, and LCA. To arrive at fully autonomous driving would require seamlessly joining existing ADAS functions together as well as covering any currently non-automated gaps by adding more such functions (e.g. lane change, intersection handling etc.). In short, current automated driving is not based on a holistic approach that can readily be extended to produce full autonomous driving.





BRIEF DESCRIPTION OF THE DRAWINGS

The embodiments of the disclosure will be understood and appreciated more fully from the following detailed description, taken in conjunction with the drawings in which:



FIG. 1 illustrates an example of a method;



FIG. 2 illustrates an example of a method;



FIG. 3 illustrates an example of a method;



FIG. 4 illustrates an example of a method;



FIG. 5 is an example of a vehicle;



FIGS. 6-9 illustrate examples of situations and of perception fields;



FIG. 10 illustrates an example of a method;



FIG. 11 illustrates an example of a scene;



FIG. 12 illustrates an example of a method;



FIGS. 13-16 illustrate examples of images;



FIG. 17 illustrates an example of a method;



FIG. 18 illustrates an example of an image;



FIG. 19 illustrates an example of an image;



FIG. 20 illustrates an example of an image;



FIG. 21 illustrates examples of models and of vehicle units such as a sensor and optics; and



FIG. 22 illustrates an example of a vehicle.





DESCRIPTION OF EXAMPLE EMBODIMENTS

In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, and components have not been described in detail so as not to obscure the present invention.


The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of operation, together with objects, features, and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings.


It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.


Because the illustrated embodiments of the present invention may for the most part, be implemented using electronic components and circuits known to those skilled in the art, details will not be explained in any greater extent than that considered necessary as illustrated above, for the understanding and appreciation of the underlying concepts of the present invention and in order not to obfuscate or distract from the teachings of the present invention.


Any reference in the specification to a method should be applied mutatis mutandis to a device or system capable of executing the method and/or to a non-transitory computer readable medium that stores instructions for executing the method.


Any reference in the specification to a system or device should be applied mutatis mutandis to a method that may be executed by the system, and/or may be applied mutatis mutandis to non-transitory computer readable medium that stores instructions executable by the system.


Any reference in the specification to a non-transitory computer readable medium should be applied mutatis mutandis to a device or system capable of executing instructions stored in the non-transitory computer readable medium and/or may be applied mutatis mutandis to a method for executing the instructions.


Any combination of any module or unit listed in any of the figures, any part of the specification and/or any claims may be provided.


Any one of the units and/or modules that are illustrated in the application, may be implemented in hardware and/or code, instructions and/or commands stored in a non-transitory computer readable medium, may be included in a vehicle, outside a vehicle, in a mobile device, in a server, and the like.


The vehicle may be any type of vehicle such as a ground transportation vehicle, an airborne vehicle, or a water vessel.


The specification and/or drawings may refer to an image. An image is an example of a media unit. Any reference to an image may be applied mutatis mutandis to a media unit. A media unit may be an example of a sensed information unit (SIU). Any reference to a media unit may be applied mutatis mutandis to any type of natural signal such as but not limited to a signal generated by nature, a signal representing human behavior, a signal representing operations related to the stock market, a medical signal, a financial series, geodetic signals, geophysical, chemical, molecular, textual and numerical signals, time series, and the like. Any reference to a media unit may be applied mutatis mutandis to a sensed information unit (SIU). The SIU may be of any kind and may be sensed by any type of sensor—such as a visual light camera, an audio sensor, a sensor that may sense infrared, radar imagery, ultrasound, electro-optics, radiography, LIDAR (light detection and ranging), a thermal sensor, a passive sensor, an active sensor, etc. The sensing may include generating samples (for example, pixels, audio signals) that represent the signal that was transmitted, or that otherwise reaches the sensor. The SIU may be one or more images, one or more video clips, textual information regarding the one or more images, text describing kinematic information about an object, and the like.


Object information may include any type of information related to an object such as but not limited to a location of the object, a behavior of the object, a velocity of the object, an acceleration of the object, a direction of a propagation of the object, a type of the object, one or more dimensions of the object, and the like. The object information may be a raw SIU, a processed SIU, text information, information derived from the SIU, and the like.


An obtaining of object information may include receiving the object information, generating the object information, participating in a processing of the object information, processing only a part of the object information and/or receiving only another part of the object information.


The obtaining of the object information may include object detection or may be executed without performing object detection.


A processing of the object information may include at least one out of object detection, noise reduction, improvement of signal to noise ratio, defining bounding boxes, and the like.


The object information may be received from one or more sources such as one or more sensors, one or more communication units, one or more memory units, one or more image processors, and the like.


The object information may be provided in one or more manners—for example in an absolute manner (for example—providing the coordinates of a location of an object), or in a relative manner—for example in relation to a vehicle (for example the object is located at a certain distance and at a certain angle in relation to the vehicle).


The vehicle is also referred to as an ego-vehicle.


The specification and/or drawings may refer to a processor or to a processing circuitry. The processor may be a processing circuitry. The processing circuitry may be implemented as a central processing unit (CPU), and/or one or more other integrated circuits such as application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), full-custom integrated circuits, etc., or a combination of such integrated circuits.


Any combination of any steps of any method illustrated in the specification and/or drawings may be provided.


Any combination of any subject matter of any of the claims may be provided.


Any combination of systems, units, components, processors, or sensors illustrated in the specification and/or drawings may be provided.


Any reference to an object may be applicable to a pattern. Accordingly—any reference to object detection is applicable mutatis mutandis to a pattern detection.


Although successful driving is contingent upon circumnavigating surrounding road objects based on their location and movement, humans are notoriously bad at estimating kinematics. We suspect that humans employ an internal representation of surrounding objects in the form of virtual force fields that immediately imply action, thus circumventing the need for kinematics estimation. Consider a scenario in which the ego vehicle drives in one lane and a vehicle diagonally in front in an adjacent lane starts swerving into the ego lane. The human response to brake or veer off would be immediate and instinctive and can be experienced as a virtual force repelling the ego from the swerving vehicle. This virtual force representation is learned and associated with the specific road object.


Inspired by the above considerations we propose the novel concept of perception fields. Perception fields are a learned representation of road objects in the form of a virtual force field that is “sensed” through the control system of the ego vehicle in the form of ADAS and/or AV software. A field is here defined as a mathematical function that depends on spatial position (or an analogous quantity).


An example of an inference method 100 is illustrated in FIG. 1 and includes the following steps.


Method 100 may be executed per one or more frames of an environment of the vehicle.


Step 110 of method 100 may include detecting and/or tracking one or more objects (including, for example, one or more road users). The detecting and/or tracking may be done in any manner. The one or more objects may be any object that may affect the behavior of the vehicle. For example—a road user (pedestrian, another vehicle), the road and/or path on which the vehicle is progressing (for example the state of the road or path, the shape of the road—for example a curve, a straight road segment), traffic signs, traffic lights, road crossings, a school, a kindergarten, and the like. Step 110 may include obtaining additional information such as kinematic and contextual variables related to the one or more objects. The obtaining may include receiving or generating. The obtaining may include processing the one or more frames to generate the kinematic and contextual variables.


It should be noted that step 110 may include obtaining the kinematic variables (even without obtaining the one or more frames).


Method 100 may also include step 120 of obtaining respective perception fields related to the one or more objects. Step 120 may include determining which mapping between objects and perception fields should be retrieved and/or used, and the like.


Step 110 (and even step 120) may be followed by step 130 of determining the one or more virtual forces associated with the one or more objects by passing the relevant input variables, such as kinematic and contextual variables, to the perception field (and to one or more virtual physical model functions).


Step 130 may be followed by step 140 of determining a total virtual force applied on the vehicle, based on the one or more virtual forces associated with the one or more objects. For example—step 140 may include performing a vector weighted sum (or other function) on the one or more virtual forces associated with the one or more objects.


Step 140 may be followed by step 150 of determining, based on the total virtual force, a desired (or target) virtual acceleration—for example based on the equivalent of Newton's second law. The desired virtual acceleration may be a vector—or otherwise have a direction.


Step 150 may be followed by step 160 of converting the desired virtual acceleration to one or more vehicle driving operations that will cause the vehicle to propagate according to the desired virtual acceleration.


For example—step 160 may include translating the desired acceleration to acceleration or deceleration or changing direction of progress of the vehicle—using gas pedal movement, brake pedal movement and/or steering wheel angle. The translation may be based on a dynamics model of the vehicle with a certain control scheme.
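
For illustration only, the following minimal sketch shows how steps 130-160 could be composed in code, assuming trained per-class perception-field networks, a virtual unit mass, and the illustrative names `perception_field_nets`, `objects` and `to_actuation` (none of which are part of the disclosure):

```python
import numpy as np

def infer_ego_acceleration(objects, perception_field_nets, max_accel=3.0):
    """Sketch of steps 130-150: per-object virtual forces, their vector sum, and the
    desired virtual acceleration under a virtual unit-mass assumption."""
    forces = []
    for obj in objects:
        net = perception_field_nets[obj["class"]]   # step 120: pick the learned field for this object class
        force = np.asarray(net(obj["features"]), dtype=float)  # step 130: 2D virtual force
        forces.append(force)

    total_force = np.sum(forces, axis=0) if forces else np.zeros(2)  # step 140: vector sum

    desired_accel = total_force                     # step 150: Newton's second law analogue (unit mass)
    norm = np.linalg.norm(desired_accel)
    if norm > max_accel:                            # clip to a plausible physical limit
        desired_accel = desired_accel * (max_accel / norm)
    return desired_accel

def to_actuation(desired_accel, heading):
    """Very rough stand-in for step 160: split the desired acceleration into longitudinal
    and lateral commands; a real system would use a vehicle dynamics model and a control scheme."""
    heading = np.asarray(heading, dtype=float)      # unit vector of the vehicle's direction of progress
    longitudinal = float(np.dot(desired_accel, heading))
    lateral = float(heading[0] * desired_accel[1] - heading[1] * desired_accel[0])
    return {"throttle_brake": longitudinal, "steering": lateral}
```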


The advantages of perception fields include, for example—explainability, generalizability and a robustness to noisy input.


Explainability. Representing ego movement as the composition of individual perception fields implies decomposing actions into more fundamental components and is in itself a significant step towards explainability. The possibility to visualize these fields and to apply intuition from physics in order to predict ego motion represent further explainability as compared to common end-to-end, black-box deep learning approaches. This increased transparency also leads to passengers and drivers being able to trust AV or ADAS technology more.


Generalizability. Representing ego reactions to unknown road objects as repellent virtual force fields constitutes an inductive bias in unseen situations. There is a potential advantage to this representation in that it can handle edge cases in a safe way with less training. Furthermore, the perception field model is holistic in the sense that the same approach can be used for all aspects of the driving policy. It can also be divided into narrow driving functions to be used in ADAS such as ACC, AEB, LCA etc. Lastly, the composite nature of perception fields allows the model to be trained on atomic scenarios and still be able to properly handle more complicated scenarios.


Robustness to noisy input: Physical constraints on the time evolution of perception fields in combination with potential filtering of inputs may lead to better handling of noise in the input data as compared to pure filtering of localization and kinematic data.


Physical or virtual forces allow for a mathematical formulation—for example—in terms of a second order ordinary differential equation comprising a so-called dynamical system. The benefit of representing a control policy in this way is that it is amenable to intuition from the theory of dynamical systems and it is a simple matter to incorporate external modules such as prediction, navigation, and filtering of inputs/outputs.
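
Under a virtual unit-mass assumption, and using the notation that appears later in the examples of FIGS. 6-9, this dynamical system can be sketched roughly as follows (the exact arguments of f_θ vary between those examples):

```latex
% Sketch only: second-order dynamical system with a virtual unit mass (assumption);
% each summand is a learned perception field acting on the ego vehicle.
\ddot{x}_{\mathrm{ego}}(t) = \sum_{i} f_{\theta}\left(x_{\mathrm{rel},i}(t),\, v_{\mathrm{rel},i}(t),\, v_{\mathrm{ego}}(t)\right),
\qquad
\dot{x}_{\mathrm{ego}}(t) = v_{\mathrm{ego}}(t)
```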


An additional benefit to the perception field approach is that it is not dependent on any specific hardware, and not computationally more expensive than existing methods.


Training Process


The process for learning perception fields can be of one of two types or a combination thereof, namely behavioral cloning (BC) and reinforcement learning (RL). BC approximates the control policy by fitting a neural network to observed human state-action pairs whereas RL entails learning by trial and error in a simulation environment without reference to expert demonstrations.


One can combine these two classes of learning algorithms by first learning a policy through BC to use it as an initial policy to be fine-tuned using RL. Another way to combine the two approaches is to first learn the so called reward function (to be used in RL) through behavioral cloning to infer what constitutes desirable behavior to humans, and later to train through trial and error using regular RL. This latter approach goes under the name of inverse RL (IRL).



FIG. 2 is an example of a training method 200 employed for learning through BC.


Method 200 may start by step 210 of collecting human data taken to be expert demonstrations for how to handle the scenario.


Step 210 may be followed by step 220 of constructing a loss function that penalizes the difference between a kinematic variable resulting from the perception field model and the corresponding kinematic variable of the human demonstrations.


Step 220 may be followed by step 230 of updating parameters of the perception field and auxiliary functions (that may be virtual physical model functions that differ from perception fields) to minimize the loss function by means of some optimization algorithm such as gradient descent.
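
A minimal behavioral-cloning sketch of steps 210-230 is given below; it assumes a PyTorch environment, a toy perception-field network, demonstrations supplied as (features, human acceleration) pairs, and a mean-squared-error loss, all of which are illustrative choices rather than requirements of the method:

```python
import torch
from torch import nn

class PerceptionField(nn.Module):
    """Toy stand-in for f_theta: maps object features to a 2D virtual force."""
    def __init__(self, in_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, x):
        return self.net(x)

def train_bc(model, demos, epochs=10, lr=1e-3):
    """Steps 220-230: a loss penalizing the gap to human kinematics, minimized by gradient descent."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for features, human_accel in demos:        # step 210: expert demonstrations
            pred_accel = model(features)           # kinematic variable implied by the perception field model
            loss = loss_fn(pred_accel, human_accel)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```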



FIG. 3 is an example of a training method 250 employed for reinforcement learning.


Method 250 may start by step 260 of building a realistic simulation environment.


Step 260 may be followed by step 270 of constructing a reward function, either by learning it from expert demonstrations or by manual design.


Step 270 may be followed by step 280 of running episodes in the simulation environment and continually updating the parameters of the perception field and auxiliary functions to maximize the expected accumulated rewards by means of some algorithm such as proximal policy optimization.
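
As a rough illustration of steps 260-280, the sketch below uses a plain REINFORCE-style policy-gradient update instead of the proximal policy optimization named above, and assumes a Gym-style `env` with `reset()`/`step()` and a scalar reward per time step; these are assumptions, not part of the disclosure:

```python
import torch

def train_rl(policy, env, episodes=1000, lr=1e-4, gamma=0.99, noise_std=0.1):
    """Steps 270-280: run episodes and update parameters to maximize expected reward.
    `policy` maps a state tensor to a 2D acceleration; exploration noise is Gaussian."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(episodes):
        state = env.reset()                        # step 260: simulation environment (assumed API)
        log_probs, rewards, done = [], [], False
        while not done:
            mean_accel = policy(torch.as_tensor(state, dtype=torch.float32))
            dist = torch.distributions.Normal(mean_accel, noise_std)
            action = dist.sample()
            log_probs.append(dist.log_prob(action).sum())
            state, reward, done, _ = env.step(action.numpy())   # reward defined in step 270
            rewards.append(reward)

        # Discounted returns, then a plain policy-gradient update (PPO would clip a ratio instead).
        returns, g = [], 0.0
        for r in reversed(rewards):
            g = r + gamma * g
            returns.insert(0, g)
        returns = torch.as_tensor(returns, dtype=torch.float32)
        returns = (returns - returns.mean()) / (returns.std() + 1e-8)
        loss = -(torch.stack(log_probs) * returns).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
```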



FIG. 4 illustrates an example of method 400.


Method 400 may be a method for perception field based driving related operations.


Method 400 may start by initializing step 410.


Initializing step 410 may include receiving a group of NNs that are trained to execute step 440 of method 400.


Alternatively, step 410 may include training a group of NNs to execute step 440 of method 400.


Various examples of training the group of NNs are provided below.

    • The group of NNs may be trained to map the object information to the one or more virtual forces using behavioral cloning.
    • The group of NNs may be trained to map the object information to the one or more virtual forces using reinforcement learning.
    • The group of NNs may be trained to map the object information to the one or more virtual forces using a combination of reinforcement learning and behavioral cloning.
    • The group of NNs may be trained to map the object information to the one or more virtual forces using a reinforcement learning that has a reward function that is defined using behavioral cloning.
    • The group of NNs may be trained to map the object information to the one or more virtual forces using a reinforcement learning that has an initial policy that is defined using behavioral cloning.
    • The group of NNs may be trained to map the object information to the one or more virtual forces and one or more virtual physical model functions that differ from the perception fields.
    • The group of NNs may include a first NN and a second NN, wherein the first NN is trained to map the object information to the one or more perception fields and the second NN is trained to map the object information to the one or more virtual physical model functions.


Initializing step 410 may be followed by step 420 of obtaining object information regarding one or more objects located within an environment of a vehicle. Step 410 may be repeated multiple times—and the following steps may also be repeated multiple times. The object information may include video, images, audio, or any other sensed information.


Step 420 may be followed by step 440 of determining, using one or more neural networks (NNs), one or more virtual forces that are applied on the vehicle.


The one or more NNs may be the entire group of NNs (from initialization step 410) or may be only a part of the group of NNs—leaving one or more non-selected NNs of the group.


The one or more virtual forces represent one or more impacts of the one or more objects on a behavior of the vehicle. The impact may be a future impact or a current impact. The impact may cause the vehicle to change its progress.


The one or more virtual forces belong to a virtual physical model. The virtual physical model is a virtual model that may virtually apply rules of physics (for example mechanical rules, electromagnetic rules, optical rules) on the vehicle and/or the objects.


Step 440 may include at least one of the following steps:

    • Calculating, based on the one or more virtual forces applied on the vehicle, a total virtual force that is applied on the vehicle.
    • Determining a desired virtual acceleration of the vehicle based on a total virtual acceleration that is applied on the vehicle by the total virtual force. The desired virtual acceleration may equal the total virtual acceleration—or may differ from it.


Method 400 may also include at least one of step 431, 432, 433, 434, 435 and 436.


Step 431 may include determining a situation of the vehicle, based on the object information.


Step 431 may be followed by step 432 of selecting the one or more NNs based on the situation.


Additionally or alternatively, step 431 may be followed by step 433 of feeding the one or more NNs with situation metadata.


Step 434 may include detecting a class of each one of the one or more objects, based on the object information.


Step 434 may be followed by step 435 of selecting the one or more NNs based on a class of at least one object of the one or more objects.


Additionally or alternatively, step 434 may be followed by step 436 of feeding the one or more NNs with class metadata indicative of a class of at least one object of the one or more objects.


Step 440 may be followed by step 450 of performing one or more driving related operations of the vehicle based on the one or more virtual forces.


Step 450 may be executed with, or without human driver intervention and may include changing the speed and/or acceleration and/or the direction of progress of the vehicle. This may include performing autonomous driving or performing advanced driver assistance system (ADAS) driving operations that may include momentarily taking control over the vehicle and/or over one or more driving related unit of the vehicle. This may include setting, with or without human driver involvement, an acceleration of the vehicle to the desired virtual acceleration.


Step 440 may include suggesting to a driver to set an acceleration of the vehicle to the desired virtual acceleration.



FIG. 5 is an example of a vehicle. The vehicle may include one or more sensing units 501, one or more driving related units 510 (such as autonomous driving units, ADAS units, and the like), a processor 560 configured to execute any of the methods, a memory unit 508 for storing instructions and/or method results, functions and the like, and a communication unit 504.



FIG. 6 illustrates an example of a method 600 for lane centering RL with lane sample points as inputs. The lane sample points are located within the environment of the vehicle.


The RL assumes a simulation environment that generates input data and in which an agent (the ego vehicle) can implement its learned policy (perception fields).


Method 600 may start by step 610 of detecting closest lane or side of road sample points (XL,i,YL,i) and (XR,i,YR,i) where L is left, R is right and index i refers to the sample points. The velocity of the ego vehicle (previously referred to as the vehicle) is denoted Vego.


Step 610 may be followed by step 620 of concatenating left lane input vectors (XL,i,YL,i) and Vego into XL and concatenating right lane input vectors (XR,i,YR,i) and Vego into XR.


Step 620 may be followed by step 630 of calculating lane perception fields fθ(XL) and fθ(XR). This is done by one or more NNs.


Step 630 may be followed by step 640 of constructing a differential equation that describes ego acceleration applied on the ego vehicle: a=fθ(XL)+fθ(XR).


This may be the output of the inference process. Step 640 may be followed by step 450 (not shown).
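
A minimal sketch of steps 610-640 follows, assuming the lane sample points arrive as (N, 2) NumPy arrays and that the same network f_θ is applied to both the left and the right concatenated input; the layer sizes and the concatenation layout are assumptions:

```python
import numpy as np
import torch
from torch import nn

class LaneField(nn.Module):
    """f_theta in FIG. 6: maps concatenated lane sample points plus ego speed to a 2D acceleration."""
    def __init__(self, n_points: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * n_points + 1, 64), nn.Tanh(), nn.Linear(64, 2))

    def forward(self, x):
        return self.net(x)

def lane_centering_accel(model, left_pts, right_pts, v_ego):
    """Steps 620-640: build X_L and X_R, evaluate the lane fields, and sum them into a."""
    x_l = torch.as_tensor(np.concatenate([left_pts.ravel(), [v_ego]]), dtype=torch.float32)
    x_r = torch.as_tensor(np.concatenate([right_pts.ravel(), [v_ego]]), dtype=torch.float32)
    return model(x_l) + model(x_r)        # a = f_theta(X_L) + f_theta(X_R)
```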


The method may include updating the one or more NNs. In this case the RL may assume a reward function that is either learned based on expert demonstrations or handcrafted. In the example of FIG. 6 the reward function may increase for every time step in which the ego vehicle maintains its lane.


The updating may include step 670 of implementing the acceleration a in a simulation environment, while the RL learning algorithm records what happens in the next time step, including the obtained reward.


Step 670 may include using a specific RL algorithm (for example PPO, SAC or TD3) to sequentially update the network parameters θ in order to maximize the average reward.



FIG. 7 illustrates method 700 for multi-object RL with visual input.


Step 710 of method 700 may include receiving a sequence of panoptically segmented images over a short time window from the ego perspective (images obtained by the ego vehicle), and the relative distance to individual objects Xrel,i.


Step 710 may be followed by step 720 of applying spatio-temporal CNN to individual instances (objects) to capture high-level spatio-temporal features Xi.


Step 720 may be followed by step 730 of computing individual perception fields fθ(Xrel,i, Xi) and their sum Σi fθ(Xrel,i, Xi).


Step 730 may be followed by step 740 of constructing a differential equation that describes ego acceleration applied on the ego vehicle: a=Σi fθ(Xrel,i, Xi).


This may be the output of the inference process. Step 740 may be followed by step 450 (not shown).


The method may include updating the one or more network parameters θ using some RL process.


The method may include step 760 of implementing a in the simulation environment and the RL learning algorithm records what happens in the next time step, including the obtained reward.


The RL may assume a reward function that is either learned based on expert demonstrations or handcrafted.


Step 760 may be followed by step 770 of using specific RL algorithm such as PPO, SAC, TTD3 to sequentially update the network parameters θ in order to maximize average reward.



FIG. 8 illustrates method 800 for multi-object BC with kinematics input.


Step 810 of method 800 may include receiving a list of detected object relative kinematics (Xrel,i, Vrel,i), wherein Xrel,i is the location of detected object i relative to the ego vehicle and Vrel,i is the velocity of detected object i relative to the ego vehicle. The ego vehicle velocity Vego is also received.


Step 810 may be followed by step 820 of calculating for each object the perception field fθ(Xrel,i, Vrel,i, Vego).


Step 820 may be followed by step 830 of summing the contributions from individual perception fields. Step 830 may also include normalizing so that the magnitude of the resulting 2d vector is equal to the highest magnitude of the individual terms: N*Σi fθ(Xrel,i, Vrel,i, Vego).


Step 830 may be followed by step 840 of constructing a differential equation that describes ego acceleration applied on the ego vehicle: a=N*Σi fθ(Xrel,i, Vrel,i, Vego).


This may be the output of the inference process. Step 840 may be followed by step 450 (not shown).
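
For steps 820-840, a minimal sketch of the per-object evaluation, the summation and the normalization to the magnitude of the strongest individual term is shown below; the tensor shapes and the `field_net` interface are assumptions:

```python
import torch

def multi_object_accel(field_net, objects, v_ego):
    """Steps 820-840: per-object perception fields, summed and renormalized so the
    resulting 2D vector has the magnitude of the strongest individual term.
    `field_net` is assumed to map a 5-dim input (X_rel,i, V_rel,i, V_ego) to a 2D force."""
    terms = []
    for x_rel, v_rel in objects:                    # detected object relative kinematics
        inp = torch.cat([x_rel, v_rel, v_ego.view(-1)])
        terms.append(field_net(inp))                # f_theta(X_rel,i, V_rel,i, V_ego)
    if not terms:
        return torch.zeros(2)
    total = torch.stack(terms).sum(dim=0)
    max_mag = max(t.norm() for t in terms)
    scale = max_mag / (total.norm() + 1e-8)         # the normalization factor N
    return scale * total                            # a = N * sum_i f_theta(...)
```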


The method may include updating the one or more network parameters.


The method may include step 860 of computing the ego trajectory X̂(t; x0, v0), given initial conditions (x0, v0).


Step 860 may be followed by step 870 of computing a loss function Σt (X̂(t; x0, v0) − x(t; x0, v0))² and propagating the loss accordingly.



FIG. 9 illustrates method 900 of inference with the addition of a loss function for an adaptive cruise control model implemented with kinematic variables as inputs.


Step 910 of method 900 may include receiving a location of the ego vehicle Xego, the speed of the ego vehicle Vego, the location of the nearest vehicle in front of the ego vehicle XCIPV, and the speed of the nearest vehicle in front of the ego vehicle VCIPV.


Step 910 may be followed by step 920 of calculating the relative location Xrel=Xego−XCIPV, and the relative speed Vrel=Vego−VCIPV.


Step 920 may be followed by step 930 of:

    • Calculating, by a first NN, a perception field function gθ(Xrel,VCIPV)
    • Calculating, by a second NN, an auxiliary function hψ(Vrel)
    • Multiplying gθ(Xrel,VCIPV) by hψ(Vrel) to provide a target acceleration (which equals the target force).


This may be the output of the inference process. Step 930 may be followed by step 450 (not shown).
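
A minimal sketch of steps 910-930 follows, with two small networks for gθ and hψ whose product gives the target acceleration (equal to the target force under a virtual unit-mass assumption); the layer sizes and the scalar, longitudinal-only inputs are illustrative:

```python
import torch
from torch import nn

class ACCPerceptionField(nn.Module):
    """FIG. 9 structure: g_theta(X_rel, V_CIPV) multiplied by an auxiliary h_psi(V_rel)."""
    def __init__(self):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))   # g_theta (first NN)
        self.h = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))   # h_psi (second NN)

    def forward(self, x_ego, v_ego, x_cipv, v_cipv):
        x_rel = x_ego - x_cipv                    # step 920: relative location
        v_rel = v_ego - v_cipv                    # step 920: relative speed
        g = self.g(torch.stack([x_rel, v_cipv]))
        h = self.h(v_rel.view(1))
        return g * h                              # step 930: target acceleration (= target force, unit mass)
```

For instance, `ACCPerceptionField()(torch.tensor(0.0), torch.tensor(20.0), torch.tensor(25.0), torch.tensor(18.0))` would yield a single longitudinal acceleration or braking value in this unit-mass sketch.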


The method may include updating the one or more NN parameters.


The method may include step 960 of computing the ego trajectory X̂(t; x0, v0), given initial conditions (x0, v0).


Step 960 may be followed by step 970 of computing a loss function Σt (X̂(t; x0, v0) − x(t; x0, v0))² and propagating the loss accordingly.


Visualization


Perception fields are a novel computational framework to generate driving policies in an autonomous ego-vehicle in different traffic environments (e.g., highway, urban, rural) and for different driving tasks (e.g., collision avoidance, lane keeping, ACC, overtaking, etc.). Perception fields are attributes of road objects and encode force fields, which emerge from each road object i of category c (e.g., other vehicles, pedestrians, traffic signs, road boundaries, etc.), and act on the ego vehicle, inducing driving behavior. The key to obtaining desirable driving behavior from a perception field representation of the ego environment is modeling the force fields so that they are general enough to allow for versatile driving behaviors but specific enough to allow for efficient learning using human driving data. The application of perception fields has several advantages over existing methods (e.g., end-to-end approaches), such as task-decomposition and enhanced explainability and generalization capabilities, resulting in versatile driving behavior.



FIG. 10 illustrates an example of a method 3000 for visualization.


According to an embodiment, method 3000 starts by step 3010 of obtaining object information regarding one or more objects located within an environment of the vehicle.


According to an embodiment, step 3010 also includes analyzing the object information. The analysis may include determining location information and/or movement information of the one or more objects. The location information and the movement information may include the relative location of the one or more objects (in relation to the vehicle) and/or the relative movement of the one or more objects (in relation to the vehicle).


According to an embodiment, step 3010 is followed by step 3020 of determining, by a processing circuit and based on the object information, one or more virtual fields of the one or more objects, wherein the one or more virtual fields represent a potential impact of the one or more objects on a behavior of the vehicle.


Step 3020 may be derived from the virtual physical model. For example—assuming that the virtual physical model represents objects as electromagnetic charges—the one or more virtual fields are virtual electromagnetic fields and the virtual force represents an electromagnetic force generated due to the virtual charges. For example—assuming that the virtual physical model is a mechanical model—then the virtual force fields are derived from the acceleration of the objects. It should be noted that the processing circuit can be trained using, at least, any of the training methods illustrated in the application—for example by applying, mutatis mutandis, any one of methods 200, 250 and 400. The training may be based, for example, on behavioral cloning (BC) and/or on reinforcement learning (RL).


According to an embodiment, step 3020 is followed by step 3030 of generating, based on the one or more fields, visualization information for use in visualizing the one or more virtual fields to the driver.


According to an embodiment, the visualization information represents multiple field lines per virtual field.


According to an embodiment, the multiple field lines per virtual field form multiple ellipses per object of the one or more objects.


The visualization information may be displayed as a part of a graphical interface that includes graphical elements that represent the virtual fields. The method may include providing a user (such as a driver of the vehicle) with a visual representation of the virtual fields.
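
As one possible rendering of the multiple elliptical field lines mentioned above, the following matplotlib sketch draws concentric ellipses around an object position; the number of rings, their axes and the fading are purely illustrative:

```python
import numpy as np
import matplotlib.pyplot as plt

def draw_field_lines(ax, center, heading_deg=0.0, rings=4, base_axes=(2.0, 1.0)):
    """Step 3030 sketch: render a virtual field as concentric ellipses around an object."""
    t = np.linspace(0.0, 2.0 * np.pi, 200)
    cos_h, sin_h = np.cos(np.radians(heading_deg)), np.sin(np.radians(heading_deg))
    for k in range(1, rings + 1):
        a, b = base_axes[0] * k, base_axes[1] * k        # growing semi-axes per field line
        x, y = a * np.cos(t), b * np.sin(t)
        xr = center[0] + cos_h * x - sin_h * y           # rotate into the object heading
        yr = center[1] + sin_h * x + cos_h * y
        ax.plot(xr, yr, alpha=1.0 - 0.2 * k)             # fade outer lines to hint at decay

fig, ax = plt.subplots()
draw_field_lines(ax, center=(10.0, 3.0), heading_deg=20.0)
ax.set_aspect("equal")
plt.show()
```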


The visualization information and/or a graphical user interface may be displayed on a display of the vehicle, on a display of a user device (for example on a mobile phone), and the like.



FIG. 11 illustrates an example of an image 3091 of an environment of a vehicle, as seen from a vehicle sensor, with multiple field lines 3092 per virtual field of an object that is another vehicle.



FIG. 12 illustrates an example of a method 3001 for visualization.


According to an embodiment, method 3001 starts by step 3010.


According to an embodiment, step 3010 is followed by step 3020.


According to an embodiment, step 3020 is followed by step 3040 of determining, based on the one or more virtual fields, one or more virtual forces that are virtually applied on the vehicle by the one or more objects.


The one or more virtual forces are associated with a physical model and represent an impact of the one or more objects on a behavior of the vehicle.


According to an embodiment, a virtual force is a force field.


According to an embodiment, a virtual force is a potential field.


According to an embodiment, a virtual force of the one or more virtual forces is represented by virtual curves that are indicative of a strength of the virtual force.


The strength of the virtual force may be represented by one or more of an intensity of the virtual curves, a shape of the virtual curves, or a size (for example width, length, and the like) of the virtual curves.


Step 3040 may include determining a total virtual force virtually applied on the vehicle. The total virtual force may be a sum of the one or more virtual forces.


According to an embodiment, step 3040 is followed by step 3050 of calculating a desired virtual acceleration of the vehicle, based on the virtual force.


Step 3050 may be executed based on an assumption regarding a relationship between the virtual force and a desired virtual acceleration of the vehicle. For example—the virtual force may have a virtual acceleration (that is virtually applied on the vehicle) and the desired virtual acceleration of the vehicle may counter the virtual acceleration that is virtually applied on the vehicle.


According to an embodiment—the desired virtual acceleration has a same magnitude as the virtually applied acceleration—but may be directed in an opposite direction.


According to an embodiment—the desired virtual acceleration has a magnitude that differs from the magnitude of the virtually applied acceleration.


According to an embodiment—the desired virtual acceleration has a direction that is not opposite to a direction of the virtually applied acceleration.


According to an embodiment step 3050 is followed by step 3060 of generating, based on the one or more fields, visualization information for use in visualizing the one or more virtual fields and force information.


The force information may represent the one or more virtual forces and/or the desired virtual acceleration.


According to an embodiment, the visualization information represents multiple field lines per virtual field.


According to an embodiment, the multiple field lines per virtual field form multiple ellipses per object of the one or more objects.


According to an embodiment step 3060 is followed by step 3070 of responding to the visualization information.


Step 3060 may include transmitting the visualization information and/or storing the visualization information and/or displaying content represented by the visualization information.


Step 3060 may include displaying the visualization information as a part of a graphical interface that includes graphical elements that represent the virtual fields and/or the desired acceleration, and the like. The graphical user interface provides a user (such as a driver of the vehicle) with a visual representation of the virtual fields and/or the desired acceleration.


According to an embodiment, step 3050 is also followed by step 3080 of further responding to the desired virtual acceleration of the vehicle.


According to an embodiment, step 3080 includes at least one of:

    • Triggering a determining of a driving related operation based on the one or more virtual fields.
    • Triggering a performing of a driving related operation based on the one or more virtual fields.
    • Requesting or instructing an execution of a driving related operation.
    • Triggering a calculation of a driving related operation, based on the desired virtual acceleration.
    • Requesting or instructing a calculation of a driving related operation, based on the desired virtual acceleration.
    • Sending information about the desired virtual acceleration to a control unit of the vehicle.
    • Taking control over the vehicle—transferring the control from the driver to an autonomous driving unit.



FIG. 13 illustrates an example of an environment of a vehicle, as seen from an aerial image, with multiple field lines per virtual force applied on the vehicle by an object that is another vehicle. In FIG. 13, any one (or a combination of two or more) of the color, direction, and magnitude of the illustrated points may indicate the one or more virtual forces being applied at those points.



FIG. 14 illustrates an example of an image 3093 of an environment of a vehicle, as seen from a vehicle sensor, with multiple field lines 3092 per virtual field of an object that is another vehicle, and with indications 3094 of the virtual force applied by the object. In FIG. 14 the indications 3094 are part of ellipses that also include the multiple field lines 3092.



FIG. 15 illustrates an image 1033 of an example of a scene.



FIG. 15 illustrates an example of vehicle 1031 that is located within a segment of a first road.


A pedestrian 1022 starts crossing the segment—in front of the vehicle 1031. The pedestrian is represented by a pedestrian virtual field (illustrated by virtual equipotential field lines 1022′ and force indicators 1025). FIG. 15 also illustrates a directional vector 1041 (that may be displayed or not displayed) that repels vehicle 1031.


Another vehicle 1039 drives in an opposite lane, has an other-vehicle virtual field 1049 and other force indicators 1049, and applies another virtual force 1049 (that may be displayed or not displayed) on vehicle 1031.


The virtual force applied on vehicle 1031 (as a result of the pedestrian and the other vehicle) is denoted 1071 (and may be displayed or not be displayed). FIG. 15 also illustrates the desired acceleration 1042 of the vehicle. The desired acceleration may be displayed or not be displayed.



FIG. 16 is an image 3095 that illustrates an environment of a vehicle and another example of visualization—that uses scalar fields. The visualization information was generated by sampling points and determining at what locations within the environment a force from a virtual field would be equal to zero. FIG. 16 includes concentric ellipses 3096 of decreasing intensity up until the locations where the virtual force is zero to show where the virtual field of an object “ends”. This type of visualization information illustrates how much force would be exerted on the vehicle were it to be located at any position within the virtual field.
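
A possible way to produce a FIG. 16-style scalar-field visualization is sketched below: sample a grid around the object, evaluate a force magnitude at each point, and draw filled contours that fade out to the zero-force boundary; the inverse-square decay and the cutoff radius stand in for the learned field and are assumptions:

```python
import numpy as np
import matplotlib.pyplot as plt

def force_magnitude(xx, yy, obj_xy, strength=5.0, cutoff=8.0):
    """Placeholder radial field: decays with squared distance and is clipped to zero
    beyond `cutoff`, marking where the virtual field 'ends' (assumed shape, not learned)."""
    r2 = (xx - obj_xy[0]) ** 2 + (yy - obj_xy[1]) ** 2
    mag = strength / (1.0 + r2)
    mag[np.sqrt(r2) > cutoff] = 0.0
    return mag

x = np.linspace(-15, 15, 300)
y = np.linspace(-10, 10, 200)
xx, yy = np.meshgrid(x, y)
mag = force_magnitude(xx, yy, obj_xy=(2.0, 0.0))

fig, ax = plt.subplots()
ax.contourf(xx, yy, mag, levels=12, cmap="Blues")   # decreasing intensity out to the zero-force boundary
ax.contour(xx, yy, mag, levels=[1e-3], colors="k")  # outline where the force is (numerically) zero
ax.set_aspect("equal")
plt.show()
```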


Field Related Driving and External Context


According to an embodiment, the field related driving benefits from using content received from an information source that is located outside the vehicle.


According to an embodiment, the content may impact one or more steps related to the field related driving in various manners.



FIG. 17 is an example of method 1700 for field related driving.


According to an embodiment, method 1700 includes step 1710 of receiving content from an information source located outside of a vehicle.


According to an embodiment, the content is indicative of an object. The object may be a road user, a pedestrian or any other object. The content may provide location information related to the location of the object. The content may include contextual information such as a situation to be faced by the vehicle. The content may be indicative of an object that is not yet sensed by the vehicle (also referred to as a hidden object or a non-sensed object).


According to an embodiment, the content is conveyed over a vehicle to anything (V2X) communication channel.


According to an embodiment, the V2X communication channel is a vehicle to vehicle communication channel and/or a vehicle to infrastructure communication channel and/or a vehicle to pedestrian communication channel and/or a vehicle to network communication channel.


According to an embodiment, method 1700 includes step 1720 of obtaining object information regarding one or more objects located within an environment of the vehicle.


According to an embodiment, step 1720 is followed by step 1730 of estimating, by using a neural network (NN), and based on the object information, one or more virtual fields of the one or more objects.


According to an embodiment, step 1730 is followed by step 1740 of determining, based on the one or more virtual fields, a virtual force for use in applying a driving related operation of the vehicle, wherein the virtual force is associated with a physical model and represents an impact of the one or more objects on a behavior of the vehicle.


According to an embodiment, step 1740 is followed by step 1750 of responding to the virtual force. Non-limiting examples of responding include:

    • Generating, based on the one or more fields, visualization information for use in visualizing the one or more virtual fields to the driver.
    • Calculating a desired virtual acceleration of the vehicle, based on the virtual force.
    • Triggering a determining of a driving related operation based on the one or more virtual fields.
    • Triggering a performing of a driving related operation based on the one or more virtual fields.
    • Requesting or instructing an execution of a driving related operation.
    • Triggering a calculation of a driving related operation, based on the desired virtual acceleration.
    • Requesting or instructing a calculation of a driving related operation, based on the desired virtual acceleration.
    • Sending information about the desired virtual acceleration to a control unit of the vehicle.
    • Taking control over the vehicle—transferring the control from the driver to an autonomous driving unit.


According to an embodiment, at least one step of steps 1720, 1730, 1740 or 1750 is impacted by the content received during step 1710. This is illustrated by the dashed lines between step 1710 and steps 1720, 1730, 1740 and 1750.


According to an embodiment, the at least one step of the obtaining, the estimating and the determining is selectively executed based on the content. Selectively executed may mean that a value of one or more parameters related to the execution of the step is determined based on the content—for example selecting a model to apply out of multiple models, changing one or more SIU acquisition parameters, or determining how many resources to allocate to classification and/or detection. An allocation of resources and/or a selection of a module may include, for example, determining the size of a model, determining a number of feature vectors per feature map, determining a connectivity between nodes of a neural network (more connectivity may mean more resources), and the like.
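
The kind of content-driven selection described above might, purely as an illustration, look like the following dispatch; the configuration fields, the model identifiers and the threshold values are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class PerceptionConfig:
    model_name: str = "default_model"            # which model/NN variant to run
    detection_threshold: float = 0.6             # confidence needed to accept a detection
    region_of_interest: Optional[Tuple[int, int, int, int]] = None  # (x0, y0, x1, y1) in image coordinates

def configure_from_v2x(content: dict) -> PerceptionConfig:
    """Hypothetical dispatch: the received content selects the model, lowers the
    detection confidence threshold and focuses processing on an expected region."""
    cfg = PerceptionConfig()
    if content.get("object_class") == "pedestrian":
        cfg.model_name = "pedestrian_model"      # pick a model better fit to the cued object
    if content.get("trust_level") == "high":
        cfg.detection_threshold = 0.4            # the cue already hints that the object is there
    if "expected_region" in content:
        cfg.region_of_interest = tuple(content["expected_region"])
    return cfg
```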


According to an embodiment, executing a model involves retrieving coefficients that represent nodes from the memory, multiplying the coefficients by content inputted to the nodes, and performing node based summing. Allocating fewer resources may involve less memory and/or fewer multiplications and/or fewer node based additions.


According to an embodiment the content may facilitate a prediction of an object that is currently hidden and may provide a longer response period to the presence of the object—for example a longer time to apply emergency braking, or a longer time to determine the progress of the vehicle (related to the virtual forces applied on the vehicle). This may enable using fewer resources by avoiding the urgent last-moment rush or peak processing required when an object located in proximity to the vehicle is suddenly detected.


According to an embodiment, step 1720 of obtaining object information regarding one or more objects located within an environment of the vehicle is impacted by the content. The object information may include the content or may not include the content. The object information may be generated by one or more sensors of the vehicle.


According to an embodiment, step 1720 is in accordance with the content.


According to an embodiment, step 1720 includes receiving at least a part of the object information from the information source.


According to an embodiment, the impact on step 1720 includes determining the manner in which the object information is acquired and/or processed.


According to an embodiment, the impact on step 1720 includes at least one of (a) determining the manner in which the object information is acquired and/or processed, (b) acquiring and/or processing the object information according to the manner in which the object information is acquired and/or processed, (c) triggering acquiring and/or processing the object information according to the manner in which the object information is acquired and/or processed.


According to an embodiment, there may be provided a perception module that includes at least one sensor and/or one or more processing circuits for processing detection signals from the one or more sensors. Any parameter of the perception module is modifiable according to the content.
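
A rough sketch of how such perception-module parameters (several of which are listed next) might be retuned when the content reports a hidden object; the attribute names and the values are placeholders, not prescribed by the method:

```python
from typing import Optional, Tuple

class SensorConfig:
    """Illustrative sensor-side knobs of the perception module."""
    def __init__(self):
        self.fov_deg: float = 120.0
        self.exposure_ms: float = 10.0
        self.frame_rate_hz: float = 20.0
        self.roi: Optional[Tuple[int, int, int, int]] = None   # region to concentrate on

def retune_for_hidden_object(cfg: SensorConfig, content: dict) -> SensorConfig:
    """Concentrate the FOV on the region in which the object should appear and
    acquire SIUs more frequently (placeholder values only)."""
    if "expected_region" in content:
        cfg.roi = tuple(content["expected_region"])
        cfg.fov_deg = 60.0
        cfg.frame_rate_hz = 30.0
    return cfg
```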


According to an embodiment, the manner in which the object information is acquired and/or processed, includes at least one of:

    • The field of view (FOV) of one or more sensor.
      • i. Concentrating on the region in which the object should be located.
      • ii. Increasing the FOV.
      • iii. Reducing the FOV.
      • iv. Setting the shape of the FOV.
    • The focus of the sensor.
    • The frequency of acquisition of SIUs.
    • A magnification of the sensor.
    • An exposure time of the sensor.
    • A polarization of the sensor.
    • A sensitivity of the sensor.
    • A frequency of reading the sensor.
    • A frequency of generating the detection signals.
    • Any parameter related to optics that precede the sensor.
    • Any parameter related to processing the detection signals generated from the sensor.
      • i. Noise reduction processing.
      • ii. Filtering.
      • iii. Determination of a bounding shape related to one or more objects.


According to an embodiment, step 1730 of estimating, by using the NN, the one or more virtual fields of the one or more objects is impacted by the content.


According to an embodiment, the manner in which the NN is impacted includes at least one of:

    • Applying a prediction based model (where the prediction is the content) instead of another model that is not based on the prediction. The prediction based model is configured to determine the one or more virtual fields based on the assumption that there is an object (identified by the content)—even when the object is not currently sensed by the vehicle.
    • Applying a model that allocates less resources to classification—as the content provides a cue about the object to be sensed (or which is already sensed) by the vehicle.
    • Applying a model that allocates less resources (e.g. less compute resources and/or storage resources) to classification and to detection—when the content provides a cue about the object (and a location of the object) to be sensed (or which is already sensed) by the vehicle.
    • When the content can be used to determine a region of an SIU in which the object will appear—applying a model that allocates more, or more dedicated resources for the processing of that region, in comparison to one or more other regions.
    • Performing an object detection with a lower certainty threshold (or a lower confidence level threshold)—as the cue already provides an indication about an object to be sensed (or which is already sensed) by the vehicle.
    • Taking into account an object that appears within the SIU at a certain size—even if the object would otherwise be ignored (based on size considerations—for example not processing objects that appear in the SIU below a certain size threshold) in the absence of the content.


According to an embodiment, step 1730 of determining the virtual force includes avoiding a direct impact of the non-sensed object on the vehicle. Thus—no virtual fields related to the non-sensed object are computed. An example of avoiding a direct impact is not taking into account an object identified by the content—until the vehicle senses the object.


According to an embodiment, step 1730 is contingent on the determined spatial relationships between the information source, the specified object and the vehicle. Method 1700 may include obtaining location information pertaining to a location of the information source, and determining the spatial relationships between the information source, the specified object and the vehicle, such that the virtual force is determined in an accurate manner.


The information source has a different view point of the specified object—than the view point of the vehicle (even when the vehicle senses the specified object). Accordingly, the information source and the vehicle may use coordinate systems—which are not aligned to each other—and in order to determine the location of the object in the coordinate system of the vehicle—there is a need to determine the spatial relationships between the information source, the specified object and the vehicle.


According to an embodiment, the specified object may appear in one part (or region) of an SIU captured by the information source and may appear in another part (or region) of an SIU captured by the vehicle. Method 1700 may include determining the location of the specified object within one or more SIUs (which may be currently acquired SIUs or SIUs to be acquired in the future) in order to process the SIUs properly—for example in order to process the part (or region) of the SIU in which the object appears—in a manner that is impacted by the presence of the object in that part (or region).


For example, assume that the content received from the information source pertains primarily to a child that is currently hidden. The method may include allocating resources for sensing the child (for example at a certain SIU part) by the vehicle and, once the child is sensed, calculating at least a virtual field representing the impact of the child on the vehicle.


According to an embodiment, method 1700 includes selecting the NN out of a group of NNs based on the content. The selected NN may be better trained or be better fit to detect the object indicated by the content. For example—if the content is indicative of a pedestrian then an NN trained to detect a pedestrian may be selected—and not an NN trained to detect a vehicle. The same is applicable to a selection of a model.


According to an embodiment, the selection is made based on the content and on a size in which an object identified by the content appears in the SIU.


According to an embodiment, step 1740 of determining the virtual force is impacted by the content. According to an embodiment, the virtual force is determined based on current values of the one or more virtual fields and on predicted future values of the one or more virtual fields.


According to an embodiment, even when step 1740 is executed before the object is sensed by the vehicle—the predicted future values of the one or more virtual fields take into account the content.


According to an embodiment, step 1750 of responding to the virtual force is impacted by the content. The responding may take into account the presence of an object identified by the content even before the object is sensed by the vehicle. For example—the vehicle may generate an audio alert when nearing an object that is currently concealed. For example—assuming that the content is indicative of a presence of a pedestrian that is located behind a school bus—the vehicle may generate an audio alert indicative of the presence of the vehicle when approaching the bus.


Yet for another example—the responding may include a visualization of the currently hidden object and/or of a virtual field related to the hidden object.


According to an embodiment, the content may be associated with different trust levels. For example—the content may be associated with a lower trust level that will require an object associated with the content to be sensed by the vehicle—before the vehicle will determine a virtual field associated with the object. Yet for another example—the content may be associated with a higher level of trust that will allow the vehicle to detect the object at a lower confidence level and/or when allocating fewer resources to the detection of the object. Yet for a further example—a virtual force may be calculated in association with the object—even before the object is sensed.
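
Purely as an illustration of the trust-level behavior just described, a gating function might look as follows; the trust labels and the gating rule are assumptions:

```python
def may_compute_virtual_field(trust_level: str, object_sensed_by_vehicle: bool) -> bool:
    """Higher-trust content may spawn a virtual field (and even a virtual force)
    before the object is sensed; lower-trust content waits for the vehicle's own sensing."""
    if trust_level == "high":
        return True
    return object_sensed_by_vehicle
```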


According to an embodiment, method 1700 (for example step 1750) includes communicating, over the V2X channel, at least one of the virtual force and the one or more virtual fields.


According to an embodiment, the content is related to a specified object that is located at a region within the environment, and the impacting is related to the region. Thus—an SIU part related to the region may be processed differently than other parts of the SIU—for example a lower confidence threshold may be used to detect the object, the SIU part related to the region may be processed more frequently than other parts, the FOV of a sensor may be set to acquire the region, and the like.



FIG. 18 illustrates an example of image 1033A that differs from image 1033 of FIG. 15 by including another pedestrian 1023 that is concealed from vehicle 1031 by the other vehicle 1039. In the example of FIG. 18 there is no virtual field related to the other pedestrian.



FIG. 18 also illustrates two scenarios:

    • A first scenario in which the V2X communication channel is a vehicle to vehicle (V2V) communication channel 1028 and the content 1029 (indicative of the other pedestrian 1023) is sent to vehicle 1031 from the other vehicle 1039.
    • A second scenario in which the V2X communication channel is a vehicle to pedestrian device (V2PD) communication channel 1028a and the content 1029 (indicative of the other pedestrian 1023) is sent to vehicle 1031 from a device of the pedestrian.



FIG. 19 illustrates an example of image 1033B that differs from image 1330 of FIG. 15 by including another pedestrian 1023 that is concealed from vehicle 1031 by other vehicle 1039. In the example of FIG. 19 there is a virtual field (illustrated by virtual equipotential field lines 1023′) related to the other pedestrian. There is also a directional vector 1028 (which may or may not be displayed) that repels vehicle 1031. The force indicators related to the other pedestrian are not shown for simplicity of explanation.
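
As a non-limiting illustration, the following Python sketch computes a simple directional vector that repels the vehicle from the position associated with the other pedestrian; the inverse-square decay is an assumption, and any other field-derived direction may be used instead.

# Illustrative sketch only. A simple repelling directional vector derived from
# the relative positions; the decay law is an assumption made for the example.
import numpy as np

def repulsion_vector(vehicle_pos, object_pos, strength=1.0):
    """Unit direction from the object toward the vehicle, scaled by a strength
    that decays with distance (a simple repelling directional vector)."""
    d = np.asarray(vehicle_pos, float) - np.asarray(object_pos, float)
    dist = np.linalg.norm(d)
    if dist == 0.0:
        return np.zeros_like(d)
    return strength * d / (dist ** 2)

print(repulsion_vector([0.0, 0.0], [12.0, 3.0]))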



FIG. 20 illustrates an image 1801 that includes multiple regions, wherein the other pedestrian is expected to be captured in one of the upper left regions of the image. FIG. 20 also illustrates that the FOV of a sensor may be changed from covering the entire image 1801 to covering some upper left regions (collectively denoted 1802) of the entire image 1801.


Regardless of the FOV, some upper left regions may be processed in a different manner than other regions of the image, as indicated by the text accompanying FIG. 17.



FIG. 21 illustrates examples of different models, such as a prediction based model 1811, another model 1812 (that differs from the prediction based model), models 1813-1816 that differ from each other by the resources allocated to classification and/or detection, models associated with different objects 1817(1)-1817(R), models associated with different scenes 1818(1)-1818(S), models associated with different contents 1819(1)-1819(T), and models associated with different certainty levels 1821(1)-1821(U). FIG. 21 also illustrates one or more sensors of a vehicle 1820(1) and one or more optics 1820(2) related to the one or more sensors of the vehicle.


All of these entities may be selected based on the content and/or adjusted and/or tuned based on the content.
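
The following non-limiting Python sketch shows one possible selection and tuning flow over such a collection of models and sensor settings; the dictionary keys and model names are hypothetical and merely mirror the kinds of models enumerated above.

# Illustrative sketch only. The keys and the model handles are hypothetical and
# merely mirror the kinds of models enumerated above (per object, per scene,
# per content, per certainty level, and per resource budget).
MODELS = {
    ("object", "pedestrian"): "model_1817_pedestrian",
    ("object", "vehicle"): "model_1817_vehicle",
    ("scene", "intersection"): "model_1818_intersection",
    ("content", "concealed_object"): "model_1819_concealed",
    ("certainty", "low"): "model_1821_low_certainty",
    ("resources", "reduced"): "model_1813_reduced_resources",
}

def select_and_tune(content_kind, content_value, sensor_exposure=None):
    """Select a model based on the content; optionally tune a sensor setting."""
    model = MODELS.get((content_kind, content_value), "prediction_based_model_1811")
    tuning = {}
    if sensor_exposure is not None:
        # The one or more sensors (and their optics) may also be adjusted based on the content.
        tuning["exposure"] = sensor_exposure
    return model, tuning

print(select_and_tune("object", "pedestrian", sensor_exposure=0.8))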



FIG. 22 illustrates an example of vehicle 2700. Vehicle 2700 includes a vehicle sensing unit 2710 that may include one or more sensors such as vehicle sensors 2712 and 2714. Vehicle 2700 also includes one or more processing circuits denoted 2720, memory unit 2730, communication unit 2740 (for example a V2X communication unit), and one or more vehicle units (collectively denoted 2750) such as one or more vehicle computers, units controlled by the one or more vehicle computers, motor units, chassis, wheels, and the like. The one or more processing circuits (also referred to as processing circuitry) are configured to execute any of the methods illustrated in this application.


The one or more processing circuits 2720 may implement one or more neural networks, and/or may execute one or more models.
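
A non-limiting Python sketch of how one or more processing circuits could host a NN and a model next to the sensing unit and the communication unit is given below; the class and the stand-in units are assumptions made for illustration and are not the architecture of vehicle 2700.

# Illustrative sketch only. The class below merely shows how a processing
# circuit could host a NN and a model next to sensing and V2X units.
class ProcessingCircuit:
    def __init__(self, nn, model, sensing_unit, communication_unit):
        self.nn = nn                                  # e.g. a virtual-field estimator
        self.model = model                            # e.g. a physical model for the force
        self.sensing_unit = sensing_unit              # provides the SIU / object information
        self.communication_unit = communication_unit  # provides the V2X content

    def step(self):
        content = self.communication_unit.receive()
        siu = self.sensing_unit.acquire()
        fields = self.nn(siu, content)                # estimate one or more virtual fields
        force = self.model(fields)                    # determine the virtual force
        return force

# Minimal usage with stand-in units and callables:
class _FakeSensing:
    def acquire(self):
        return "siu"

class _FakeComm:
    def receive(self):
        return "content"

circuit = ProcessingCircuit(
    nn=lambda siu, content: {"siu": siu, "content": content},
    model=lambda fields: [0.0, -0.5],
    sensing_unit=_FakeSensing(),
    communication_unit=_FakeComm(),
)
print(circuit.step())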


In the foregoing specification, the invention has been described with reference to specific examples of embodiments of the invention. It will, however, be evident that various modifications and changes may be made therein without departing from the broader spirit and scope of the invention as set forth in the appended claims. Moreover, the terms “front,” “back,” “top,” “bottom,” “over,” “under” and the like in the description and in the claims, if any, are used for descriptive purposes and not necessarily for describing permanent relative positions. It is understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in other orientations than those illustrated or otherwise described herein.

Furthermore, the terms “assert” or “set” and “negate” (or “deassert” or “clear”) are used herein when referring to the rendering of a signal, status bit, or similar apparatus into its logically true or logically false state, respectively. If the logically true state is a logic level one, the logically false state is a logic level zero. And if the logically true state is a logic level zero, the logically false state is a logic level one.

Those skilled in the art will recognize that the boundaries between logic blocks are merely illustrative and that alternative embodiments may merge logic blocks or circuit elements or impose an alternate decomposition of functionality upon various logic blocks or circuit elements. Thus, it is to be understood that the architectures depicted herein are merely exemplary, and that in fact many other architectures may be implemented which achieve the same functionality. Any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality may be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being “operably connected,” or “operably coupled,” to each other to achieve the desired functionality.

Furthermore, those skilled in the art will recognize that the boundaries between the above described operations are merely illustrative. Multiple operations may be combined into a single operation, a single operation may be distributed among additional operations, and operations may be executed at least partially overlapping in time. Moreover, alternative embodiments may include multiple instances of a particular operation, and the order of operations may be altered in various other embodiments. Also for example, in one embodiment, the illustrated examples may be implemented as circuitry located on a single integrated circuit or within a same device. Alternatively, the examples may be implemented as any number of separate integrated circuits or separate devices interconnected with each other in a suitable manner. However, other modifications, variations and alternatives are also possible. The specifications and drawings are, accordingly, to be regarded in an illustrative rather than in a restrictive sense.

In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word ‘comprising’ does not exclude the presence of other elements or steps than those listed in a claim.
Furthermore, the terms “a” or “an,” as used herein, are defined as one or more than one. Also, the use of introductory phrases such as “at least one” and “one or more” in the claims should not be construed to imply that the introduction of another claim element by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim element to inventions containing only one such element, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an.” The same holds true for the use of definite articles. Unless stated otherwise, terms such as “first” and “second” are used to arbitrarily distinguish between the elements such terms describe. Thus, these terms are not necessarily intended to indicate temporal or other prioritization of such elements. The mere fact that certain measures are recited in mutually different claims does not indicate that a combination of these measures cannot be used to advantage. While certain features of the invention have been illustrated and described herein, many modifications, substitutions, changes, and equivalents will now occur to those of ordinary skill in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention. It is appreciated that various features of the embodiments of the disclosure which are, for clarity, described in the contexts of separate embodiments may also be provided in combination in a single embodiment. Conversely, various features of the embodiments of the disclosure which are, for brevity, described in the context of a single embodiment may also be provided separately or in any suitable sub-combination. It will be appreciated by persons skilled in the art that the embodiments of the disclosure are not limited by what has been particularly shown and described hereinabove. Rather the scope of the embodiments of the disclosure is defined by the appended claims and equivalents thereof.


In relation to any model and/or neural network, the term “less” may be less than 5%, 10%, 15%, 20%, 25%, 30%, 35%, 40%, 45%, 50%, 55%, 60%, 65%, 70%, 80%, 85%, 90%, 95% or 99%. In relation to any model and/or neural network, the term “more” may be more than 5%, 10%, 15%, 20%, 25%, 30%, 35%, 40%, 45%, 50%, 55%, 60%, 65%, 70%, 80%, 85%, 90%, 95% or 99%. For example, a model with less resources may be a model with less than 5%, 10%, 15%, 20%, 25%, 30%, 35%, 40%, 45%, 50%, 55%, 60%, 65%, 70%, 80%, 85%, 90%, 95% or 99% of the resources of another model.

Claims
  • 1. A method that is computer implemented and for field related driving, the method comprises: receiving content from an information source located outside of a vehicle; obtaining object information regarding one or more objects located within an environment of the vehicle; estimating, by using a neural network (NN), and based on the object information, one or more virtual fields of the one or more objects; and determining, based on the one or more virtual fields, a virtual force for use in applying a driving related operation of the vehicle, wherein the virtual force is associated with a physical model and representing an impact of the one or more objects on a behavior of the vehicle; wherein at least one step of the obtaining, the estimating and the determining is impacted by the content.
  • 2. The method according to claim 1, wherein the content is conveyed over a vehicle to everything (V2X) communication channel.
  • 3. The method according to claim 2, wherein the V2X communication channel is at least one of a vehicle to vehicle communication channel, a vehicle to infrastructure communication channel, a vehicle to pedestrian communication channel; and a vehicle to network communication channel.
  • 4. The method according to claim 2, comprising communicating, over the V2X channel, at least one of the virtual force and the one or more virtual fields.
  • 5. The method according to claim 1, wherein the content is related to a specified object that is located at a region within the environment, wherein the impacting is related to the region.
  • 6. The method according to claim 1, wherein obtaining the object information is in accordance with the content.
  • 7. The method according to claim 1 wherein the content is related to a not-sensed object that is not sensed by the vehicle.
  • 8. The method according to claim 7, wherein the determining the virtual force comprises avoiding a direct impact of the not-sensed object on the vehicle.
  • 9. The method according to claim 1, wherein the obtaining of the information comprises receiving at least a part of the object information from the information source.
  • 10. The method according to claim 1 further comprising: obtaining location information pertaining to a location of the information source; and determining spatial relationships between the information source, the specified object and the vehicle, such that the virtual force is determined contingent on the determined spatial relationships between the information source, the specified object and the vehicle.
  • 11. The method according to claim 1 wherein the at least one step of the obtaining, the estimating and the determining is selectively executed based on the content.
  • 12. A non-transitory computer readable medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform operations for field related driving, comprising: receiving content from an information source located outside of a vehicle; obtaining object information regarding one or more objects located within an environment of the vehicle; estimating, by using a neural network (NN), and based on the object information, one or more virtual fields of the one or more objects; and determining, based on the one or more virtual fields, a virtual force for use in applying a driving related operation of the vehicle, wherein the virtual force is associated with a physical model and representing an impact of the one or more objects on a behavior of the vehicle; and wherein at least one step of the obtaining, the estimating and the determining is impacted by the content.
  • 13. The non-transitory computer readable medium according to claim 12, wherein the content is conveyed over a vehicle to everything (V2X) communication channel.
  • 14. The non-transitory computer readable medium according to claim 13, wherein the V2X communication channel is at least one of a vehicle to vehicle communication channel, a vehicle to infrastructure communication channel, a vehicle to pedestrian communication channel; and a vehicle to network communication channel.
  • 15. The non-transitory computer readable medium according to claim 13, comprising communicating, over the V2X channel, at least one of the virtual force and the one or more virtual fields.
  • 16. The non-transitory computer readable medium according to claim 12, wherein the content is related to a specified object that is located at a region within the environment, wherein the impacting is related to the region.
  • 17. The non-transitory computer readable medium according to claim 12, wherein obtaining the object information is in accordance with the content.
  • 18. The non-transitory computer readable medium according to claim 12, wherein the content is related to a not-sensed object that is not sensed by the vehicle.
  • 19. The non-transitory computer readable medium according to claim 18, wherein the determining the virtual force comprises avoiding a direct impact of the not-sensed object on the vehicle.
  • 20. The non-transitory computer readable medium according to claim 12, wherein the obtaining of the information comprises receiving at least a part of the object information from the information source.
CROSS REFERENCE

This application claims priority from U.S. provisional patent 63/376,057, filing date Sep. 16, 2022, which is incorporated herein by reference. This application is a continuation in part of U.S. patent application Ser. No. 17/823,069, filing date Aug. 29, 2022, that claims priority from U.S. provisional application 63/260,839, which is incorporated herein by reference. This application claims priority from U.S. provisional patent Ser. No. 63/383,914, filing date Nov. 10, 2022, which is incorporated herein in its entirety.

Provisional Applications (3)
Number Date Country
63383914 Nov 2022 US
63376057 Sep 2022 US
63260839 Sep 2021 US
Continuation in Parts (1)
Number Date Country
Parent 17823069 Aug 2022 US
Child 18468628 US