Laser Radar Driving Environmental Recognition System Based on Visual Area Guidance

Information

  • Publication Number
    20250208292
  • Date Filed
    December 02, 2022
  • Date Published
    June 26, 2025
Abstract
Embodiments of the present disclosure provide a laser radar driving environmental recognition system based on visual area guidance, including a visual system, a laser radar system, a data processing module and a vehicle action control module. The data processing module includes a streaming media analysis module, a key target spherical coordinate data processing module, and a key target three-dimensional trajectory prediction module. The laser radar system is configured for measuring real distances of key targets and outputting their real spherical coordinates to the key target three-dimensional trajectory prediction module, which is configured for generating real-time trajectory prediction data of the key targets and outputting that data to the vehicle action control module. The laser radar driving environmental recognition system can thus achieve accurate measurement of key targets and prediction of their movement trajectories.
Description
TECHNICAL FIELD

The present disclosure relates to the field of intelligent driving environmental recognition technology, and in particular to a laser radar driving environmental recognition system based on visual area guidance.


BACKGROUND

A self-driving automobile, also known as an unmanned automobile, a computer-driven automobile or a wheeled mobile robot, is an intelligent automobile that realizes driverless operation through a computer system. The self-driving automobile relies on the collaboration of artificial intelligence, visual computing, radar, monitoring devices and a positioning system to allow a computer to operate the motor vehicle automatically and safely without any active human intervention.


With the continuous development of intelligent driving technology, it is particularly important for vehicles to recognize and monitor targets in the surrounding environment. A vehicle's recognition and monitoring of targets in its surrounding environment is mainly realized by one of the following systems: a pure visual recognition system, a laser radar recognition system, or a millimeter-wave radar recognition system.


It should be pointed out that the pure visual recognition system has wide-area recognition capability; however, using it to recognize and monitor targets in the vehicle's surrounding environment requires incorporating high-performance artificial intelligence neural network auxiliary algorithms and performing a large-scale, long-term data collection and learning process, so it suffers from a long evolution cycle. Further, there are also risks of leaking map data and sensitive information.


The laser radar recognition system has the advantage of high distance-measurement accuracy; however, its mechanical parts are prone to damage under vehicle vibration, and the existing laser radar recognition system must pursue a high laser spot scanning density, since environmental objects can only be effectively recognized and distinguished at higher scanning densities. Further, the laser radar recognition system cannot recognize color information of an object, and it is expensive, bulky, and inconvenient for large-scale vehicle applications. In addition, improving the recognition accuracy of the existing laser radar recognition system requires continuously increasing the number of measuring points, thereby increasing the detection cost; that is, the laser radar recognition system suffers from an excessively large scanning range (scanning without distinction over the whole area), waste of scanning resources, and insufficient scanning dot density in key areas.


The millimeter-wave radar recognition system has limited detection range and accuracy, so it is only suitable for auxiliary detection to judge obstacles in front of and behind the vehicle.


SUMMARY

In view of the above-mentioned problems, the object of the present disclosure is to provide a laser radar driving environmental recognition system based on visual area guidance. A visual system and a laser radar system are effectively combined to achieve accurate measurement of key targets and prediction of their movement trajectories, thereby avoiding the large amount of calculation required by the existing pure visual recognition system as well as the waste of laser spot scanning resources and insufficient scanning dot density in key areas of the existing laser radar recognition system.


The technical solutions of the present disclosure include:

    • A laser radar driving environmental recognition system based on visual area guidance, comprising a visual system, a laser radar system, a data processing module and a vehicle action control module, wherein the data processing module comprises:
    • a streaming media analysis module, configured for analyzing and processing streaming media data of a vehicle driving environment sampled by the visual system, to classify and determine moving objects and stationary objects in the vehicle driving environment, and mark the moving objects as key targets and output spherical coordinates of key targets, the streaming media analysis module being configured for outputting data of the stationary objects to the vehicle action control module;
    • a key target spherical coordinate data processing module, configured for analyzing and processing spherical coordinate data of key targets output by the streaming media analysis module, and outputting azimuth coordinates in the processed spherical coordinates of key targets to the laser radar system;
    • a key target three-dimensional trajectory prediction module, wherein the laser radar system is configured for measuring real distances of the key targets, and outputting real spherical coordinates of key targets to the key target three-dimensional trajectory prediction module, the key target three-dimensional trajectory prediction module being configured for generating real-time trajectory prediction data of key targets, and outputting the real-time trajectory prediction data of key targets to the vehicle action control module.


According to some embodiments of the present disclosure, the laser radar driving environmental recognition system is configured for implementing driving environmental recognition and feedback by a method comprising the following steps (see the illustrative sketch after this list):

    • step a: sampling environmental streaming media of a vehicle driving environment through the visual system;
    • step b: sending by the visual system the sampled environmental streaming media data to the streaming media analysis module, and analyzing and processing by the streaming media analysis module the environmental streaming media data sampled by the visual system to classify and determine moving objects and stationary objects in the vehicle driving environment, and mark the moving objects as key targets and output spherical coordinates of key targets;
    • outputting stationary object data of the streaming media analysis module to the vehicle action control module, so that the vehicle action control module is capable of controlling the driving status of the vehicle according to the stationary object information;
    • meanwhile, outputting by the streaming media analysis module the spherical coordinate data of key targets to the key target spherical coordinate data processing module;
    • step c: analyzing and processing by the key target spherical coordinate data processing module the spherical coordinates of key targets, and inputting azimuth coordinates in the processed spherical coordinates of key targets to the laser radar system;
    • step d: accurately measuring by the laser radar system the real distance of the key targets, and outputting the real spherical coordinates of key targets to the key target three-dimensional trajectory prediction module;
    • step e: generating by the key target three-dimensional trajectory prediction module the real-time trajectory prediction data of key targets according to the real spherical coordinate data of key targets;
    • step f: outputting by the key target three-dimensional trajectory prediction module the real-time trajectory prediction data of key targets to the vehicle action control module, so that the vehicle action control module is capable of controlling the driving status of the vehicle according to the moving object information.
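For illustration only, the following sketch (in Python) shows one way that steps a to f could be orchestrated in software. Every object and method name in it (visual_system.sample, analysis_module.classify, and so on) is a hypothetical stand-in invented for this sketch; the present disclosure does not prescribe any particular software interface.

    # Illustrative sketch only: hypothetical interfaces for steps a-f.
    from typing import List, Tuple

    SphericalCoord = Tuple[float, float, float]  # (r, theta, phi)

    def recognition_and_feedback_cycle(visual_system, analysis_module, coord_module,
                                       lidar_system, trajectory_module, action_module):
        # Step a: sample environmental streaming media through the visual system.
        frames = visual_system.sample()
        # Step b: classify moving vs. stationary objects; moving objects are key targets.
        key_targets, stationary = analysis_module.classify(frames)
        action_module.update_stationary(stationary)  # stationary data goes to vehicle control
        estimates = [analysis_module.spherical_estimate(t) for t in key_targets]
        # Step c: pass only the azimuth coordinates (theta, phi) to the laser radar.
        azimuths = [coord_module.azimuth(e) for e in estimates]
        # Step d: the laser radar measures the real distance along each azimuth.
        real_coords: List[SphericalCoord] = [lidar_system.measure(az) for az in azimuths]
        # Steps e-f: predict real-time trajectories and feed them to vehicle control.
        trajectories = trajectory_module.predict(real_coords)
        action_module.update_moving(trajectories)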


According to some embodiments of the present disclosure, in the step d, in the case where the key target is a driving vehicle, the laser radar system is configured to select four measuring points on the same vehicle to measure distances; the laser radar system accurately measures the real distances of these four measuring points and outputs their real spherical coordinates to the key target three-dimensional trajectory prediction module, wherein every two measuring points form a measuring point group, that is, the four measuring points form two measuring point groups;

    • for a vehicle as key target traveling in the same direction and located in the left front, the above four measuring points are a right front wheel, a right rear wheel, a left rear light, and a right rear light of the vehicle as key target in sequence, wherein the measuring point of right front wheel and the measuring point of right rear wheel form a measuring point group, and the measuring point of left rear light and the measuring point of right rear light form a measuring point group;
    • for a vehicle as key target traveling in the same direction and located in the right front, the above four measuring points are a left front wheel, a left rear wheel, a left rear light, and a right rear light of the vehicle as key target in sequence, wherein the measuring point of left front wheel and the measuring point of left rear wheel form a measuring point group, and the measuring point of left rear light and the measuring point of right rear light form a measuring point group;
    • for a vehicle as key target traveling in the same direction and located in the left rear, the above four measuring points are a right front wheel, a right rear wheel, a left front light, and a right front light of the vehicle as key target in sequence, wherein the measuring point of right front wheel and the measuring point of right rear wheel form a measuring point group, and the measuring point of left front light and the measuring point of right front light form a measuring point group;
    • for a vehicle as key target traveling in the same direction and located in the right rear, the above four measuring points are a left front wheel, a left rear wheel, a left front light, and a right front light of the vehicle as key target in sequence, wherein the measuring point of left front wheel and the measuring point of left rear wheel form a measuring point group, and the measuring point of left front light and the measuring point of right front light form a measuring point group;
    • for a vehicle as key target traveling in the same direction and located directly in front, the above four measuring points are a left rear wheel, a right rear wheel, a left rear light, and a right rear light of the vehicle as key target in sequence, wherein the measuring point of left rear wheel and the measuring point of right rear wheel form a measuring point group, and the measuring point of left rear light and the measuring point of right rear light form a measuring point group;
    • for a vehicle as key target traveling in the same direction and located directly in the rear, the above four measuring points are a left front wheel, a right front wheel, a left front light, and a right front light of the vehicle as key target in sequence, wherein the measuring point of left front wheel and the measuring point of right front wheel form a measuring point group, and the measuring point of left front light and the measuring point of right front light form a measuring point group;
    • for a vehicle as key target traveling in a reverse direction and located in the left front, the above four measuring points are a left front wheel, a left rear wheel, a left front light, and a right front light of the vehicle as key target in sequence, wherein the measuring point of left front wheel and the measuring point of left rear wheel form a measuring point group, and the measuring point of left front light and the measuring point of right front light form a measuring point group;
    • for a vehicle as key target traveling in a reverse direction and located in the right front, the above four measuring points are a right front wheel, a right rear wheel, a left front light, and a right front light of the vehicle as key target in sequence, wherein the measuring point of right front wheel and the measuring point of right rear wheel form a measuring point group, and the measuring point of left front light and the measuring point of right front light form a measuring point group.


According to some embodiments of the present disclosure, in the step d, in the case where the key target is a multi-wheeled truck, the laser radar system is configured to select at least three measuring points on the same multi-wheeled truck to measure distances; the laser radar system accurately measures the real distance of each measuring point and outputs the real spherical coordinates of each measuring point to the key target three-dimensional trajectory prediction module;

    • for a multi-wheeled truck as key target traveling in the same direction and located in the left front, at least three outer wheels on the right side of the multi-wheeled truck as key target are selected as measuring points;
    • for a multi-wheeled truck as key target traveling in the same direction and located in the right front, at least three outer wheels on the left side of the multi-wheeled truck as key target are selected as measuring points;
    • for a multi-wheeled truck as key target traveling in the same direction and located in the left rear, at least three outer wheels on the right side of the multi-wheeled truck as key target are selected as measuring points;
    • for a multi-wheeled truck as key target traveling in the same direction and located in the right rear, at least three outer wheels on the left side of the multi-wheeled truck as key target are selected as measuring points;
    • for a multi-wheeled truck as key target traveling in a reverse direction and located in the left front, at least three outer wheels on the left side of the multi-wheeled truck as key target are selected as measuring points;
    • for a multi-wheeled truck as key target traveling in a reverse direction and located in the right front, at least three outer wheels on the right side of the multi-wheeled truck as key target are selected as measuring points.


According to some embodiments of the present disclosure, the visual system comprises six vehicle-mounted cameras, respectively arranged in the middle of the upper end of a front windshield, the left side of the upper end of the front windshield, the right side of the upper end of the front windshield, the upper part of an A-pillar, the upper part of a B-pillar, and the middle of the upper end of a rear windshield of the vehicle; the vehicle-mounted cameras are all wide-angle cameras with a field of view greater than 90 degrees.


According to some embodiments of the present disclosure, a shooting frame rate of the vehicle-mounted cameras is 30-50 frames per second, and the visual system has a data processing frequency of 30-50 Hz.


According to some embodiments of the present disclosure, the laser radar system is a solid-state laser radar system comprising four high-precision positioning solid-state laser radars arranged in the middle of the upper end of the front windshield, the upper part of an A-pillar, the upper part of a B-pillar, and the middle of the upper end of the rear windshield of the vehicle; the high-precision positioning solid-state laser radars are all solid-state optical waveguide phased-array laser radars.


According to some embodiments of the present disclosure, the stationary objects comprise stationary vehicles, lane edges, buildings, fire facilities, and traffic signs.


According to some embodiments of the present disclosure, in the step b, during determining the spherical coordinates of key targets, the streaming media analysis module performs real-time processing of the streaming media data of the vehicle driving environment sampled by the visual system, and determines the spherical coordinates (r′, θ, φ) of key targets with a fixed camera coordinate positioning method, where θ and φ are definite values and r′ is a distance value to be corrected.


According to some embodiments of the present disclosure, in the step d, during measuring the real spherical coordinates of key targets, the laser radar system performs vector control on the laser beam through the electro-optic effect to detect the corrected spherical coordinates (r, θ, φ) of key targets, thereby obtaining the real spherical coordinates of key targets.


Compared with the prior art, the beneficial effects of the present disclosure include the following. The laser radar driving environmental recognition system based on visual area guidance described in the present disclosure includes a visual system, a laser radar system, and a data processing module, the data processing module including a streaming media analysis module, a key target spherical coordinate data processing module, and a key target three-dimensional trajectory prediction module. The visual system samples the environmental streaming media of the vehicle driving environment; the streaming media analysis module analyzes and processes the sampled data to classify and determine moving objects and stationary objects in the vehicle driving environment, marks the moving objects as key targets, and outputs the spherical coordinates of the key targets; the key target spherical coordinate data processing module analyzes and processes these spherical coordinates and inputs the azimuth coordinates to the laser radar system; the laser radar system accurately measures the real distances of the key targets and outputs their real spherical coordinates to the key target three-dimensional trajectory prediction module, which generates the real-time trajectory prediction data of the key targets. In this way, a visual system and a laser radar system are effectively combined to achieve accurate measurement of key targets and prediction of their movement trajectories, avoiding the large amount of calculation associated with the existing pure visual recognition system and the waste of laser spot scanning resources and insufficient scanning dot density over key areas associated with the existing laser radar recognition system.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a schematic diagram of the operation logic of the present disclosure;



FIG. 2 is a schematic diagram showing the tracking of key targets in the present disclosure;



FIG. 3 is a schematic diagram showing the installation of vehicle-mounted cameras in the visual system of the present disclosure;



FIG. 4 is a schematic diagram showing the installation of solid-state optical waveguides phased array laser radars in the laser radar system of the present disclosure;



FIG. 5 is a schematic diagram showing the measurement of the laser radar system for the case where the key target is a driving vehicle;



FIG. 6 is a schematic diagram showing the measurement of the laser radar system for the case where the key target is a multi-wheeled truck.










    • 1—vehicle-mounted camera; 2—streaming media analysis module; 3—key target spherical coordinate data processing module; 4—solid-state laser radar; 5—key target three-dimensional trajectory prediction module; 6—vehicle action control module.





DETAILED DESCRIPTION OF EMBODIMENTS

The specific embodiments of the present disclosure will be described in detail below in conjunction with the accompanying drawings, but it should be understood that the protection scope of the present disclosure is not limited to the specific embodiments.


As shown in FIGS. 1-2, the laser radar driving environmental recognition system based on visual area guidance includes a visual system, a laser radar system, a data processing module and a vehicle action control module 6.


As shown in FIG. 3, the visual system includes six vehicle-mounted cameras 1, respectively arranged in the middle of the upper end of a front windshield, the left side of the upper end of the front windshield, the right side of the upper end of the front windshield, the upper part of an A-pillar, the upper part of a B-pillar, and the middle of the upper end of a rear windshield of the vehicle; the vehicle-mounted cameras 1 are all wide-angle cameras with a field of view greater than 90 degrees. The shooting frame rate of the vehicle-mounted cameras 1 is 30-50 frames per second, and the data processing frequency of the visual system is 30-50 Hz.


As shown in FIG. 4, the laser radar system is a solid-state laser radar system including four high-precision positioning solid-state laser radars 4 arranged in the middle of the upper end of the front windshield, the upper part of an A-pillar, the upper part of a B-pillar, and the middle of the upper end of the rear windshield of the vehicle; the high-precision positioning solid-state laser radars 4 are all solid-state optical waveguide phased-array laser radars.
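For concreteness, the sensor suite described in the two preceding paragraphs can be summarized as a configuration table. The following sketch is illustrative only; the dictionary layout and key names are the editor's, while the positions, the field-of-view bound, and the frame-rate range come from the text above.

    # Illustrative summary of the sensor placements described above.
    SENSOR_LAYOUT = {
        "cameras": {  # six wide-angle cameras, FOV > 90 degrees
            "positions": [
                "front windshield, upper middle",
                "front windshield, upper left",
                "front windshield, upper right",
                "A-pillar, upper part",
                "B-pillar, upper part",
                "rear windshield, upper middle",
            ],
            "min_fov_deg": 90,
            "frame_rate_fps": (30, 50),  # 30-50 frames per second
        },
        "lidars": {  # four solid-state optical waveguide phased-array laser radars
            "positions": [
                "front windshield, upper middle",
                "A-pillar, upper part",
                "B-pillar, upper part",
                "rear windshield, upper middle",
            ],
        },
    }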


The data processing module includes a streaming media analysis module 2, a key target spherical coordinate data processing module 3 and a key target three-dimensional trajectory prediction module 5. The streaming media analysis module 2 analyzes and processes the streaming media data of the vehicle driving environment sampled by the visual system to classify and determine moving objects and stationary objects in the vehicle driving environment, marks the moving objects as key targets, and outputs the spherical coordinates of the key targets; it also outputs the data of the stationary objects to the vehicle action control module 6. The stationary objects include stationary vehicles, lane edges, buildings, fire facilities, and traffic signs. The key target spherical coordinate data processing module 3 analyzes and processes the spherical coordinate data of key targets output by the streaming media analysis module 2, and outputs the azimuth coordinates in the processed spherical coordinates to the laser radar system. The laser radar system measures the real distances of the key targets and outputs their real spherical coordinates to the key target three-dimensional trajectory prediction module 5, which generates real-time trajectory prediction data of the key targets and outputs that data to the vehicle action control module 6.
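As a minimal sketch of the data exchanged between these modules (class and field names are hypothetical, introduced here only for illustration):

    # Illustrative data structures for the objects passed between modules 2, 3, 5 and 6.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class KeyTarget:
        """A moving object marked as a key target by the streaming media analysis module."""
        theta: float                    # polar angle, a definite value from the camera fix
        phi: float                      # azimuthal angle, a definite value from the camera fix
        r_estimate: float               # camera-derived distance r', to be corrected
        r_real: Optional[float] = None  # real distance r measured by the laser radar

    @dataclass
    class StationaryObject:
        """A stationary object: vehicle, lane edge, building, fire facility or traffic sign."""
        kind: str
        theta: float
        phi: float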


The above-mentioned embodiment implements driving environmental recognition and feedback by the following method:

    • step a: sampling environmental streaming media of a vehicle driving environment through the visual system;
    • step b: sending by the visual system the sampled environmental streaming media data to the streaming media analysis module 2, and analyzing and processing by the streaming media analysis module 2 the environmental streaming media data sampled by the visual system to classify and determine moving objects and stationary objects in the vehicle driving environment, and mark the moving objects as key targets and output spherical coordinates of key targets;
    • outputting stationary object data of the streaming media analysis module 2 to the vehicle action control module 6, so that the vehicle action control module 6 is capable of controlling the driving status of the vehicle according to the stationary object information;
    • meanwhile, outputting by the streaming media analysis module 2 the spherical coordinate data of key targets to the key target spherical coordinate data processing module 3;
    • step c: analyzing and processing by the key target spherical coordinate data processing module 3 the spherical coordinates of key targets, and inputting azimuth coordinates in the processed spherical coordinates of key targets to the laser radar system;
    • step d: accurately measuring by the laser radar system the real distance of the key targets, and outputting the real spherical coordinates of key targets to the key target three-dimensional trajectory prediction module 5;
    • step e: generating by the key target three-dimensional trajectory prediction module 5 the real-time trajectory prediction data of key targets according to the real spherical coordinate data of key targets;
    • step f: outputting by the key target three-dimensional trajectory prediction module 5 the real-time trajectory prediction data of key targets to the vehicle action control module 6, so that the vehicle action control module 6 is capable of controlling the driving status of the vehicle according to the moving object information.
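The disclosure does not fix a particular prediction algorithm for step e. As one illustration only, a constant-velocity extrapolation over successive real spherical coordinates could look as follows; the function names and the constant-velocity assumption are the editor's, not the disclosure's.

    # Illustrative constant-velocity trajectory prediction from lidar fixes (step e).
    import math
    from typing import List, Tuple

    def spherical_to_cartesian(r: float, theta: float, phi: float) -> Tuple[float, float, float]:
        # Physics convention: theta is the polar angle, phi the azimuthal angle.
        return (r * math.sin(theta) * math.cos(phi),
                r * math.sin(theta) * math.sin(phi),
                r * math.cos(theta))

    def predict_position(track: List[Tuple[float, float, float]],
                         dt: float, horizon: float) -> Tuple[float, float, float]:
        """Extrapolate from the last two fixes of one key target.

        track:   successive real spherical coordinates (r, theta, phi), len >= 2
        dt:      sampling interval between fixes, in seconds
        horizon: prediction horizon, in seconds
        """
        x0, y0, z0 = spherical_to_cartesian(*track[-2])
        x1, y1, z1 = spherical_to_cartesian(*track[-1])
        vx, vy, vz = (x1 - x0) / dt, (y1 - y0) / dt, (z1 - z0) / dt
        return (x1 + vx * horizon, y1 + vy * horizon, z1 + vz * horizon)

For example, with fixes arriving at 30 Hz (dt of roughly 0.033 s), predict_position(track, 0.033, 0.5) estimates where the key target will be half a second ahead; the vehicle action control module 6 could consume such estimates in step f.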


As shown in FIG. 5, in the step d, in the case where the key target is a driving vehicle, the laser radar system is configured to select four measuring points on the same vehicle to measure distances; the laser radar system accurately measures the real distances of these four measuring points and outputs their real spherical coordinates to the key target three-dimensional trajectory prediction module 5, wherein every two measuring points form a measuring point group, that is, the four measuring points form two measuring point groups (the eight cases below are also summarized in a code sketch after this list);

    • for a vehicle as key target traveling in the same direction and located in the left front, the above four measuring points are a right front wheel, a right rear wheel, a left rear light, and a right rear light of the vehicle as key target in sequence, wherein the measuring point of right front wheel and the measuring point of right rear wheel form a measuring point group, and the measuring point of left rear light and the measuring point of right rear light form a measuring point group;
    • for a vehicle as key target traveling in the same direction and located in the right front, the above four measuring points are a left front wheel, a left rear wheel, a left rear light, and a right rear light of the vehicle as key target in sequence, wherein the measuring point of left front wheel and the measuring point of left rear wheel form a measuring point group, and the measuring point of left rear light and the measuring point of right rear light form a measuring point group;
    • for a vehicle as key target traveling in the same direction and located in the left rear, the above four measuring points are a right front wheel, a right rear wheel, a left front light, and a right front light of the vehicle as key target in sequence, wherein the measuring point of right front wheel and the measuring point of right rear wheel form a measuring point group, and the measuring point of left front light and the measuring point of right front light form a measuring point group;
    • for a vehicle as key target traveling in the same direction and located in the right rear, the above four measuring points are a left front wheel, a left rear wheel, a left front light, and a right front light of the vehicle as key target in sequence, wherein the measuring point of left front wheel and the measuring point of left rear wheel form a measuring point group, and the measuring point of left front light and the measuring point of right front light form a measuring point group;
    • for a vehicle as key target traveling in the same direction and located directly in front, the above four measuring points are a left rear wheel, a right rear wheel, a left rear light, and a right rear light of the vehicle as key target in sequence, wherein the measuring point of left rear wheel and the measuring point of right rear wheel form a measuring point group, and the measuring point of left rear light and the measuring point of right rear light form a measuring point group;
    • for a vehicle as key target traveling in the same direction and located directly in the rear, the above four measuring points are a left front wheel, a right front wheel, a left front light, and a right front light of the vehicle as key target in sequence, wherein the measuring point of left front wheel and the measuring point of right front wheel form a measuring point group, and the measuring point of left front light and the measuring point of right front light form a measuring point group;
    • for a vehicle as key target traveling in a reverse direction and located in the left front, the above four measuring points are a left front wheel, a left rear wheel, a left front light, and a right front light of the vehicle as key target in sequence, wherein the measuring point of left front wheel and the measuring point of left rear wheel form a measuring point group, and the measuring point of left front light and the measuring point of right front light form a measuring point group;
    • for a vehicle as key target traveling in a reverse direction and located in the right front, the above four measuring points are a right front wheel, a right rear wheel, a left front light, and a right front light of the vehicle as key target in sequence, wherein the measuring point of right front wheel and the measuring point of right rear wheel form a measuring point group, and the measuring point of left front light and the measuring point of right front light form a measuring point group.
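The eight cases above form a lookup table keyed by travel direction and relative position. The following sketch encodes it verbatim; the key and label strings are the editor's shorthand.

    # Illustrative lookup table for the four measuring points and their grouping.
    # Key: (travel direction relative to ego vehicle, position relative to ego vehicle).
    # Value: two measuring point groups of two points each.
    CAR_MEASURING_POINTS = {
        ("same", "left front"):     (("right front wheel", "right rear wheel"),
                                     ("left rear light",   "right rear light")),
        ("same", "right front"):    (("left front wheel",  "left rear wheel"),
                                     ("left rear light",   "right rear light")),
        ("same", "left rear"):      (("right front wheel", "right rear wheel"),
                                     ("left front light",  "right front light")),
        ("same", "right rear"):     (("left front wheel",  "left rear wheel"),
                                     ("left front light",  "right front light")),
        ("same", "front"):          (("left rear wheel",   "right rear wheel"),
                                     ("left rear light",   "right rear light")),
        ("same", "rear"):           (("left front wheel",  "right front wheel"),
                                     ("left front light",  "right front light")),
        ("reverse", "left front"):  (("left front wheel",  "left rear wheel"),
                                     ("left front light",  "right front light")),
        ("reverse", "right front"): (("right front wheel", "right rear wheel"),
                                     ("left front light",  "right front light")),
    }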


As shown in FIG. 6, in the step d, in the case where the key target is a multi-wheeled truck, the laser radar system is configured to select at least three measuring points on the same multi-wheeled truck to measure distances; the laser radar system accurately measures the real distance of each measuring point and outputs the real spherical coordinates of each measuring point to the key target three-dimensional trajectory prediction module 5 (a sketch of the side-selection rule follows this list);

    • for a multi-wheeled truck as key target traveling in the same direction and located in the left front, at least three outer wheels on the right side of the multi-wheeled truck as key target are selected as measuring points;
    • for a multi-wheeled truck as key target traveling in the same direction and located in the right front, at least three outer wheels on the left side of the multi-wheeled truck as key target are selected as measuring points;
    • for a multi-wheeled truck as key target traveling in the same direction and located in the left rear, at least three outer wheels on the right side of the multi-wheeled truck as key target are selected as measuring points;
    • for a multi-wheeled truck as key target traveling in the same direction and located in the right rear, at least three outer wheels on the left side of the multi-wheeled truck as key target are selected as measuring points;
    • for a multi-wheeled truck as key target traveling in a reverse direction and located in the left front, at least three outer wheels on the left side of the multi-wheeled truck as key target are selected as measuring points;
    • for a multi-wheeled truck as key target traveling in a reverse direction and located in the right front, at least three outer wheels on the right side of the multi-wheeled truck as key target are selected as measuring points.
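The six truck cases reduce to a simple side-selection rule: for a truck traveling in the same direction, measure the side facing the ego vehicle (opposite the truck's lateral position); for a truck traveling in the reverse direction, measure the side on which the truck appears. One illustrative encoding (function name hypothetical):

    def truck_wheel_side(direction: str, position: str) -> str:
        """Which side's outer wheels (at least three) the laser radar should target.

        direction: "same" or "reverse", relative to the ego vehicle's travel
        position:  e.g. "left front", "right rear"; only the lateral part matters
        """
        lateral = "left" if "left" in position else "right"
        if direction == "same":
            # Same direction: measure the side facing the ego vehicle,
            # i.e. the side opposite the target's lateral position.
            return "right" if lateral == "left" else "left"
        # Reverse direction: measure the side on which the target appears.
        return lateral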


In the step b, during determining the spherical coordinates of key targets, the streaming media analysis module 2 performs real-time processing of the streaming media data of the vehicle driving environment sampled by the visual system, and determines the spherical coordinates (r′, θ, φ) of key targets with a fixed camera coordinate positioning method, where θ and φ are definite values and r′ is a distance value to be corrected. In the step d, during measuring the real spherical coordinates of key targets, the laser radar system performs vector control on the laser beam through the electro-optic effect to detect the corrected spherical coordinates (r, θ, φ) of key targets, thereby obtaining the real spherical coordinates of key targets.
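In other words, the camera fixes the direction (θ, φ) and the laser radar replaces the camera-estimated distance r′ with the real distance r measured along that direction. A minimal sketch of this fusion, assuming a hypothetical measure_range callable that abstracts the electro-optic beam steering:

    # Illustrative fusion of a camera fix (r', theta, phi) with a lidar range measurement.
    from typing import Callable, Tuple

    def fuse_spherical_fix(camera_fix: Tuple[float, float, float],
                           measure_range: Callable[[float, float], float]
                           ) -> Tuple[float, float, float]:
        """Return the real spherical coordinates (r, theta, phi) of a key target.

        camera_fix:    (r_prime, theta, phi); theta and phi are definite values,
                       r_prime is the camera estimate to be corrected
        measure_range: steers the lidar beam to (theta, phi) and returns the real r
        """
        _r_prime, theta, phi = camera_fix
        r = measure_range(theta, phi)  # real distance along the camera-fixed direction
        return (r, theta, phi)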


The specific models of the above-mentioned electronic components are not limited; commercially available common products may be chosen as long as they meet the requirements of the present disclosure.


The purposes, technical solutions and beneficial effects of the present disclosure have been described in detail with reference to the specific embodiments described above. It should be understood that the above descriptions only refer to specific embodiments of the present disclosure but do not intend to limit the present disclosure. Any modifications, equivalent replacements, and improvements made within the spirit and principle of the present disclosure are all included in the protection scope of the present disclosure.

Priority Claims (1)

  • Number: 202211034263.5 | Date: Aug 2022 | Country: CN | Kind: national

PCT Information

  • Filing Document: PCT/CN2022/136141 | Filing Date: 12/2/2022 | Country: WO