TREADMILL AND SPEED CONTROL METHOD THEREOF

Information

  • Publication Number
    20240042269
  • Date Filed
    May 26, 2023
  • Date Published
    February 08, 2024
Abstract
A treadmill and a speed control method thereof are provided. The treadmill includes a treadmill body, an event-based vision sensor, and a processor. The treadmill body includes a running belt. The event-based vision sensor is disposed on the treadmill body and generates a sensing image. The processor is coupled to the event-based vision sensor, obtains the sensing image, and performs runner detection on the sensing image. In response to determining that a runner is detected from the sensing image, the processor inputs the sensing image to a depth estimation model, obtains position information of the runner relative to the running belt, and controls the running speed of the running belt according to the position information of the runner.
Description
CROSS-REFERENCE TO RELATED APPLICATION

This application claims the priority benefit of Taiwan application serial no. 111129282, filed on Aug. 4, 2022. The entirety of the above-mentioned patent application is hereby incorporated by reference herein and made a part of this specification.


BACKGROUND
Technology Field

The disclosure relates to sports equipment, and in particular to a treadmill and a speed control method thereof.


Description of Related Art

Modern people pay increasing attention to the importance of exercise, and the treadmill is a piece of sports equipment that is very common and popular among the public. A user may speed walk or run on the running belt of the treadmill to exercise. However, when the user's pace cannot keep up with the treadmill, or a foreign object (such as a pet, a child, a water bottle, or other sports equipment) comes close to the treadmill, the user may fall, or the child or pet may be drawn into the bottom of the treadmill, resulting in non-negligible harm. Currently, the existing accident prevention method for the treadmill is an installed safety key. One end of the safety key is inserted into the treadmill, and the other end is fastened to the user. Once the user on the treadmill falls, the safety key is pulled out, causing the treadmill to stop operating to avoid further harm. However, since the safety key needs to be fastened to the user and may be accidentally pulled out by shaking of the user's body or hand swings, this method is not welcomed by users. In addition, the safety key cannot detect whether a foreign object is close to a running treadmill.


SUMMARY

In view of this, the disclosure proposes a treadmill and a speed control method thereof, which may automatically control a speed of the treadmill according to position information of a runner on the treadmill, so as to improve the use safety of the treadmill.


An embodiment of the disclosure provides a treadmill, which includes a treadmill body, an event-based vision sensor, and a processor. The treadmill body includes a running belt. The event-based vision sensor is disposed on the treadmill body and generates a sensing image. The processor is coupled to the event-based vision sensor, obtains the sensing image, and performs runner detection on the sensing image. In response to determining that a runner is detected from the sensing image, the processor inputs the sensing image to a depth estimation model, obtains position information of the runner relative to the running belt, and controls the running speed of the running belt according to the position information of the runner.


An embodiment of the disclosure provides a speed control method of a treadmill, and the method includes the following steps. A sensing image is generated through an event-based vision sensor disposed on the treadmill. Runner detection is performed on the sensing image. In response to determining that a runner is detected from the sensing image, the sensing image is input to a depth estimation model, and position information of the runner relative to a running belt is obtained. A running speed of the running belt is controlled according to the position information of the runner.


Based on the above, in the embodiment of the disclosure, the event-based vision sensor is disposed on the treadmill body to perform a shooting operation and generate the sensing image. When the runner is identified from the sensing image, the sensing image and the trained depth estimation model may be used to estimate the position information of the runner relative to the running belt, so as to determine whether to lower the running speed of the running belt according to the position information of the runner. Based on this, when the user may be about to have an accident, the treadmill may take preventive action or issue a warning in advance, thereby improving the use safety of the treadmill.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a schematic diagram of a treadmill according to an embodiment of the disclosure.



FIG. 2 is a flowchart of a speed control method of a treadmill according to an embodiment of the disclosure.



FIG. 3 is a flowchart of detecting a runner according to an embodiment of the disclosure.



FIGS. 4A and 4B are schematic diagrams of identifying a runner according to an embodiment of the disclosure.



FIG. 5 is a flowchart of a speed control method of a treadmill according to an embodiment of the disclosure.



FIG. 6 is a schematic diagram of generating a depth map according to an embodiment of the disclosure.



FIG. 7 is a schematic diagram of performing motion detection on a background area according to an embodiment of the disclosure.





DESCRIPTION OF THE EMBODIMENTS

Some embodiments of the disclosure accompanied with drawings are described in detail as follows. When the same reference numerals appear in different drawings, they are regarded as referring to the same or similar elements. These embodiments are only a part of the disclosure and do not disclose all the possible implementations of the disclosure. More precisely, the embodiments are only examples of the methods and devices within the scope of the claims of the disclosure.



FIG. 1 is a schematic diagram of a treadmill according to an embodiment of the disclosure. Referring to FIG. 1, a treadmill 100 includes a treadmill body 110, an event-based vision sensor 120, and a processor 130. The event-based vision sensor 120 and the processor 130 are disposed on the treadmill body 110, and the processor 130 is coupled to the event-based vision sensor 120.


The treadmill body 110 may include a base 111, a running belt 112, an input device 113, a console 114, and a monitor 115. The base 111 is provided with the running belt 112. When the treadmill 100 is in operation, the running belt 112 on the base 111 is driven by a motor to run. The running belt 112 is for the runner to step on, and the runner's feet repeatedly step along with the running belt 112. The monitor 115 and the input device 113 are disposed on the console 114. The runner may input a set speed through the input device 113 to control the running speed of the running belt 112. The input device 113 is, for example, a control panel including keys or buttons, which is not limited in the disclosure.


The event-based vision sensor 120 is, for example, a dynamic vision sensor (DVS) or a dynamic and active-pixel vision sensor (DAVIS). The shooting direction of the event-based vision sensor 120 is opposite to the running direction of the runner, so that the sensor captures sensing images of the runner who is using the treadmill 100. In some embodiments, the event-based vision sensor 120 may be disposed on the console 114. The event-based vision sensor 120 may be configured to sense changes of light intensity in the shooting scene and generate a sensing image accordingly. In other words, each pixel in the sensing image generated by the event-based vision sensor 120 represents a variation of light intensity. The event-based vision sensor 120 has the characteristics of low data volume, fast response time, and low power consumption, and may protect the privacy of the user.
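The disclosure does not specify how the event stream is assembled into a sensing image. A common approach for DVS data, sketched below, is to accumulate events over a short time window into a frame; the event layout ((x, y, polarity, timestamp) tuples), the function name, and the window parameters are assumptions for illustration, not details from the disclosure.

```python
import numpy as np

def events_to_frame(events, height, width, t_start, t_end):
    """Accumulate DVS events into a 2D sensing frame.

    Each event is a (x, y, polarity, timestamp) tuple; polarity is +1 for a
    brightness increase and -1 for a decrease. Pixels with no event in the
    window stay at zero, so only moving edges appear in the frame.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, polarity, t in events:
        if t_start <= t < t_end:
            frame[y, x] += polarity
    return frame

# Example: two events at the same pixel inside the window, one outside it.
events = [(3, 2, +1, 0.001), (3, 2, +1, 0.002), (5, 5, -1, 0.020)]
frame = events_to_frame(events, height=8, width=8, t_start=0.0, t_end=0.010)
```

Because only intensity changes produce events, a runner in motion yields a dense edge silhouette while the static scene stays blank, which is also why such frames reveal little identifying detail and preserve privacy.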


The processor 130 is, for example, a central processing unit (CPU), another programmable general-purpose or special-purpose microprocessor, a digital signal processor (DSP), a programmable controller, an application specific integrated circuit (ASIC), a programmable logic device (PLD), other similar elements, or a combination of the aforementioned elements, and may be configured to control the actions of various components of the treadmill 100.


In the embodiment of the disclosure, the event-based vision sensor 120 may continuously generate multiple sensing images when the running belt 112 of the treadmill 100 is running. The processor 130 may detect whether an abnormality in the operating mode or the operating environment of the treadmill 100 occurs according to the sensing images, so as to prevent accidents from happening. The accidents include a runner falling down or a foreign object being drawn into the bottom of the treadmill 100 and the like. In this way, before an accident of using the treadmill 100 occurs, the processor 130 may control the running belt 112 to reduce the running speed, so as to prevent the occurrence of the accident or reduce the accidental injury.


In detail, FIG. 2 is a flowchart of a speed control method of a treadmill according to an embodiment of the disclosure. Referring to FIGS. 1 and 2 at the same time, the method of the embodiment is adapted for the treadmill 100 mentioned above. The detailed steps of the speed control method of the treadmill in the embodiment are described below together with the various elements of the treadmill 100.


In step S210, a sensing image is generated through the event-based vision sensor 120 disposed on the treadmill 100. As mentioned above, the event-based vision sensor 120 may continuously perform sensing and generate multiple sensing images corresponding to different sensing time points. Thus, if a runner is exercising on the treadmill 100, the event-based vision sensor 120 may capture a sensing image including the runner. In addition, the embodiment of the disclosure does not limit the image resolution of the sensing image, which may be determined according to the actual application. Moreover, in some embodiments, the event-based vision sensor 120 may be a dynamic vision sensor (DVS), and the sensing image is a DVS image.


In step S220, the processor 130 performs runner detection on the sensing image. That is to say, the processor 130 may determine whether a runner is exercising on the treadmill 100 by performing runner detection on the sensing image. In addition, in some embodiments, the processor 130 may further determine whether a runner is exercising on the treadmill 100 with the assistance of other sensing technologies, such as an infrared sensing technology or a weight sensing technology.


For details, please refer to FIG. 3, which is a flowchart of detecting a runner according to an embodiment of the disclosure. Step S220 shown in FIG. 2 may be implemented as steps S221 to S224. In step S221, the processor 130 performs person detection on the sensing image and obtains a person bounding box on the sensing image. In some embodiments, the processor 130 may input the sensing image to a trained deep learning model for person detection. The deep learning model is, for example, a convolutional neural network (CNN) model, which may capture features from the sensing image for person detection. For example, the processor 130 may apply a deep learning model based on the YOLO algorithm to perform person detection on the sensing image, but the disclosure is not limited thereto. If the deep learning model detects a person, it marks the position of the person with a rectangular person bounding box, so the processor 130 may obtain the person bounding box in the sensing image according to the output of the deep learning model. For example, the deep learning model may output a vertex coordinate, a box length, and a box width of the person bounding box.


In step S222, the processor 130 determines whether the runner is detected according to whether the person bounding box is located within a predetermined area on the sensing image. The aforementioned predetermined area is, for example, a central area of the sensing image. However, the size and position of the predetermined area may be designed according to the position where the event-based vision sensor 120 is disposed. By determining whether the person bounding box is located within the predetermined area on the sensing image, the processor 130 may identify whether the person marked by the person bounding box is the runner on the treadmill 100.


If the person bounding box is located within the predetermined area on the sensing image (yes is determined in step S222), in step S223, the processor 130 determines that the runner is detected. On the contrary, if the person bounding box is not located within the predetermined area on the sensing image (no is determined in step S222), in step S224, the processor 130 determines that the runner is not detected.


For example, FIGS. 4A and 4B are schematic diagrams of identifying a runner according to an embodiment of the disclosure. In the embodiment, it is assumed that the predetermined area is the central area of the sensing image. Referring to FIG. 4A, the processor 130 determines that a person bounding box B1 is not located within a central area Z1 of a sensing image Img_1, so the processor 130 determines that the runner is not detected. On the contrary, referring to FIG. 4B, the processor 130 determines that the person bounding box B1 is located within the central area Z1 of the sensing image Img_1, so the processor 130 determines that the runner is detected.
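The containment test of steps S221 to S224 can be sketched as follows. This assumes "located within the predetermined area" means the person bounding box is fully contained in the area, and expresses both the box and the area as (x, y, w, h) tuples matching the vertex-coordinate, box-width, and box-length output described above; the function names are illustrative.

```python
def box_within_area(box, area):
    """Return True if the person bounding box lies entirely inside the
    predetermined area. Both arguments are (x, y, w, h): top-left vertex
    coordinate, box width, and box height."""
    bx, by, bw, bh = box
    ax, ay, aw, ah = area
    return (bx >= ax and by >= ay and
            bx + bw <= ax + aw and by + bh <= ay + ah)

def runner_detected(person_boxes, central_area):
    """A runner is detected (step S223) if any person bounding box falls
    within the predetermined, e.g. central, area of the sensing image."""
    return any(box_within_area(b, central_area) for b in person_boxes)

# A 640x480 sensing image with an assumed central predetermined area.
central_area = (160, 60, 320, 360)
assert runner_detected([(250, 120, 120, 260)], central_area)    # as in FIG. 4B
assert not runner_detected([(10, 100, 80, 200)], central_area)  # as in FIG. 4A
```

A center-point test (checking only whether the box center lies in the area) would be an equally plausible reading; the disclosure leaves this choice open.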


Referring to FIG. 2 again, in step S230, in response to determining that the runner is detected from the sensing image, the processor 130 inputs the sensing image to a depth estimation model and obtains position information of the runner relative to the running belt 112. Specifically, when determining that the runner is exercising on the treadmill 100, the processor 130 inputs the sensing image to the depth estimation model to obtain depth information of the runner, thereby obtaining the position information of the runner on the running belt 112. The depth estimation model may be a deep learning model applying a monocular depth estimation (Monodepth) algorithm. In some embodiments, the depth estimation model may use the DVS image as a training data set for model training. In addition, the depth information of the runner generated by the depth estimation model is the depth information between the runner and the event-based vision sensor 120. In some embodiments, according to the disposing position of the event-based vision sensor 120, the processor 130 may obtain the position information of the runner on the running belt 112 according to the depth information generated by the depth estimation model. For example, according to the relative positional relationship between the event-based vision sensor 120 and a component on the console 114, the position information of the runner may be the distance between the runner and the component on the console 114.


In step S240, the processor 130 controls the running speed of the running belt 112 according to the position information of the runner. Specifically, when the processor 130 finds, according to the position information, that the runner is too far away from the console 114 or is located at the end area of the running belt 112, this indicates that the runner cannot keep up with the running speed of the running belt 112. Thus, the processor 130 may automatically lower the running speed of the running belt 112. In some embodiments, the processor 130 may gradually lower the running speed of the running belt 112. In addition, in some embodiments, the processor 130 may further provide a warning according to the position information of the runner, such as a light warning or a sound effect warning. In this way, the processor 130 may monitor the exercising state of the runner in real time and accordingly control the running speed of the running belt 112 or provide the warning to prevent the runner from falling due to being unable to keep up with the running speed of the running belt 112.


It is worth mentioning that, in some embodiments, the sensing image generated by the event-based vision sensor 120 may further be configured to detect whether a foreign object is too close to the treadmill 100, so as to avoid the accident caused by the foreign object affecting the runner's exercise.


In detail, FIG. 5 is a flowchart of a speed control method of a treadmill according to an embodiment of the disclosure. Referring to FIGS. 1 and 5 at the same time, the method of the embodiment is adapted for the treadmill 100 mentioned above. The detailed steps of the speed control method of the treadmill in the embodiment are described below together with the various elements of the treadmill 100.


In step S510, the processor 130 generates a sensing image through the event-based vision sensor 120 disposed on the treadmill 100. In step S520, the processor 130 performs runner detection on the sensing image. The detailed implementation manners of the above steps S510 to S520 have been clearly described in steps S210 to S220 of the embodiment in FIG. 2, and are not repeated here.


In step S530, in response to determining that the runner is detected from the sensing image, the processor 130 inputs the sensing image to the depth estimation model and obtains position information of the runner relative to the running belt 112. Here, step S530 may be implemented as steps S531 to S532.


In step S531, in response to determining that the runner is detected from the sensing image, the processor 130 inputs the sensing image to the depth estimation model and obtains a depth map output by the depth estimation model. FIG. 6 is a schematic diagram of generating a depth map according to an embodiment of the disclosure. The processor 130 may input the sensing image Img_1 to a depth estimation model M1, and obtain a depth map D1 output by the depth estimation model M1. The depth map D1 may include a depth value corresponding to each pixel in the sensing image Img_1; that is, multiple depth values in the depth map D1 respectively correspond to multiple pixels in the sensing image Img_1. In some embodiments, the depth values in the depth map D1 may range from 0 to 255.


It should be noted that when the sensing image is implemented as a DVS image, the depth estimation model may complete model training according to multiple DVS images as the training data set and the corresponding ground truth. The above-mentioned ground truth may be a depth map obtained by performing depth estimation according to RGB images. In this way, when the processor 130 applies the depth estimation model, the processor 130 may input the DVS image generated by the event-based vision sensor 120 to the depth estimation model and obtain a corresponding depth map.


In step S532, the processor 130 determines a first distance between the runner and a reference position according to the depth map. In detail, in some embodiments, the processor 130 may obtain the depth information corresponding to the runner from the depth map according to the person bounding box. For example, the processor 130 may extract a depth value corresponding to the runner from the depth map according to the center position of the person bounding box. Alternatively, the processor 130 may extract multiple depth values from the depth map according to the coordinate position of the person bounding box, perform a statistical calculation on the depth values, and obtain a depth value corresponding to the runner.
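A minimal sketch of extracting the runner's depth value from the depth map using the person bounding box. The (x, y, w, h) box layout and the choice of a median as the statistical calculation are assumptions; the disclosure mentions a statistical calculation without fixing a particular one.

```python
import numpy as np

def runner_depth(depth_map, box, stat=np.median):
    """Extract a single depth value for the runner from the depth map.

    depth_map holds one depth value (0-255) per sensing-image pixel; box is
    the person bounding box as (x, y, w, h). A robust statistic (median by
    default) over the box region suppresses outlier depth values contributed
    by background pixels inside the box.
    """
    x, y, w, h = box
    region = depth_map[y:y + h, x:x + w]
    return float(stat(region))

depth_map = np.full((480, 640), 200, dtype=np.uint8)  # far background
depth_map[100:300, 250:350] = 80                      # runner region is nearer
value = runner_depth(depth_map, (250, 100, 100, 200))
```

The resulting depth value is relative to the event-based vision sensor; converting it to the first distance from another reference position (such as the monitor) then only requires the fixed offset between that component and the sensor.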


Then, the processor 130 may calculate the first distance between the runner and the reference position according to the depth information of the runner. Specifically, in some embodiments, the position information of the runner may be the first distance between the runner and the reference position, and the above-mentioned reference position is, for example, the disposing position of the event-based vision sensor 120 or the disposing positions of other components on the console 114. For example, it is assumed that the console 114 of the treadmill 100 is provided with a monitor 115. After obtaining the depth information of the runner from the depth map, based on the relative positional relationship between the monitor 115 and the event-based vision sensor 120, the processor 130 may calculate the first distance between the runner and the monitor 115 according to the depth information of the runner. Therefore, the processor 130 may determine in real time whether the runner cannot keep up with the running speed of the running belt 112 according to the first distance.


In step S540, the processor 130 controls the running speed of the running belt according to the position information of the runner. Here, step S540 may be implemented as steps S541 to S543.


In step S541, the processor 130 determines whether the first distance is greater than a first threshold value. The first threshold value may be set according to the actual application, which is not limited in the disclosure. If the first distance is greater than the first threshold value (yes is determined in step S541), this indicates that the runner may not be able to keep up with the running speed of the running belt 112. In step S542, the processor 130 controls the running speed of the running belt 112 to decrease. On the contrary, if the first distance is not greater than the first threshold value (no is determined in step S541), in step S543, the processor 130 maintains the running speed of the running belt 112; that is, the processor 130 does not adjust the running speed of the running belt 112.
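Steps S541 to S543 amount to a simple threshold rule, here combined with the gradual slowdown mentioned in step S240. The step size, minimum speed, and the distance/threshold values below are illustrative tuning parameters, not values taken from the disclosure.

```python
def control_speed(current_speed, first_distance, first_threshold,
                  step=0.2, min_speed=0.0):
    """Decide the next running-belt speed from the runner's first distance.

    If the runner has drifted farther from the reference position than the
    threshold, the speed is lowered gradually by `step` per control cycle
    (step S542); otherwise it is maintained (step S543).
    """
    if first_distance > first_threshold:
        return max(min_speed, current_speed - step)
    return current_speed

# Runner lags 1.6 m behind with a 1.2 m threshold: the belt slows down.
assert abs(control_speed(8.0, first_distance=1.6, first_threshold=1.2) - 7.8) < 1e-9
# Runner keeps pace: the speed is maintained.
assert control_speed(8.0, first_distance=1.0, first_threshold=1.2) == 8.0
```

Calling this once per sensing frame yields the gradual deceleration described above, since each cycle in which the runner remains beyond the threshold removes another `step` from the belt speed.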


On the other hand, in step S550, in response to determining that the runner is detected from the sensing image, the processor 130 performs motion detection on the background area in the sensing image to detect a moving object in the background area. In some embodiments, the processor 130 may use the area outside the person bounding box as the background area in the sensing image. Alternatively, the background area may also be a pre-defined area in the sensing image.


In some embodiments, the processor 130 may compare the background area of the current sensing image with the background area of the previous sensing image to determine whether the moving object appears in the background area. For example, the processor 130 may detect the moving object through image subtraction or optical flow, but the disclosure is not limited thereto. For example, FIG. 7 is a schematic diagram of performing motion detection on a background area according to an embodiment of the disclosure. Referring to FIG. 7, based on the continuous sensing performed by the event-based vision sensor 120, the processor 130 may obtain a previous sensing image Img_2 and a current sensing image Img_3. By comparing a background area ZB2 of the previous sensing image Img_2 with a background area ZB3 of the current sensing image Img_3, the processor 130 may detect a moving object Obj_m. The background area ZB3 of the current sensing image Img_3 may be an area outside the person bounding box B1.
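The image-subtraction variant of the background motion detection can be sketched as follows. The pixel-difference threshold and the returned bounding-box format are assumptions for illustration; as the text notes, an optical-flow method could be used instead.

```python
import numpy as np

def detect_moving_object(prev_frame, curr_frame, person_box, diff_threshold=10):
    """Detect a moving object in the background area by image subtraction.

    The person bounding box (x, y, w, h) is zeroed out of the difference so
    the runner's own motion is ignored; any remaining pixel whose intensity
    changed by more than diff_threshold between the two frames counts as
    background motion. Returns a bounding box around the motion, or None.
    """
    diff = np.abs(curr_frame.astype(np.int32) - prev_frame.astype(np.int32))
    x, y, w, h = person_box
    diff[y:y + h, x:x + w] = 0          # mask out the runner's area
    moving = diff > diff_threshold
    if not moving.any():
        return None
    ys, xs = np.nonzero(moving)
    return (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))

prev = np.zeros((100, 100), dtype=np.uint8)
curr = np.zeros((100, 100), dtype=np.uint8)
curr[40:60, 30:50] = 50   # the runner's motion (inside the person box)
curr[80:90, 70:80] = 50   # a foreign object entering the background
box = detect_moving_object(prev, curr, person_box=(30, 40, 20, 20))
```

Event-based frames make this comparison cheap: static background pixels carry no events, so any sustained activity outside the person bounding box is a strong cue that a foreign object has entered the scene.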


In step S560, in response to the detection of the moving object, the processor 130 inputs the sensing image to the depth estimation model and obtains position information of the moving object. Here, step S560 may be implemented as steps S561 to S562.


In step S561, in response to the detection of the moving object, the processor 130 inputs the sensing image to the depth estimation model and obtains a depth map output by the depth estimation model. In step S562, the processor 130 determines a second distance between the moving object and a reference position according to the depth map. It should be noted that after the motion detection, the processor 130 may also obtain a bounding box configured to mark the moving object, and the operation method of obtaining the position information of the moving object is similar to the operation method of obtaining the position information of the runner. That is, the detailed implementation manners of steps S561 to S562 are similar to the detailed implementation manners of steps S531 to S532, and are not repeated here.


In step S570, the processor 130 controls the running speed of the running belt according to the position information of the moving object. Here, step S570 may be implemented as steps S571 to S573. In step S571, the processor 130 determines whether the second distance is less than a second threshold value. If the second distance is less than the second threshold value (yes is determined in step S571), this indicates that the moving object is very close to the treadmill 100. In step S572, the processor 130 controls the running speed of the running belt 112 to decrease. In some embodiments, if the second distance is less than the second threshold value, the processor 130 may also provide a sound and light warning to the runner. On the contrary, if the second distance is not less than the second threshold value (no is determined in step S571), in step S573, the processor 130 maintains the running speed of the running belt 112. In this way, the runner may be notified in advance that a foreign object is approaching the treadmill 100 in operation, so as to prevent the foreign object from being drawn into the bottom of the base 111 by the running belt 112 or from disturbing the runner.
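Steps S571 to S573 mirror the runner check but with the inequality reversed: the belt slows when the moving object is closer than the threshold, rather than when the runner is farther than it. A sketch with illustrative parameter values (the deceleration step and thresholds are not from the disclosure):

```python
def handle_moving_object(current_speed, second_distance, second_threshold,
                         step=0.2, min_speed=0.0):
    """Foreign-object branch of the speed control (steps S571 to S573).

    A moving object triggers a slowdown when it is CLOSER to the reference
    position than the second threshold. Returns the next belt speed and
    whether a sound-and-light warning should be issued to the runner.
    """
    if second_distance < second_threshold:
        return max(min_speed, current_speed - step), True
    return current_speed, False

# A pet 0.4 m away with a 0.5 m threshold: slow down and warn.
speed, warn = handle_moving_object(8.0, second_distance=0.4, second_threshold=0.5)
assert warn and abs(speed - 7.8) < 1e-9
# An object still 0.9 m away: maintain speed, no warning.
speed, warn = handle_moving_object(8.0, second_distance=0.9, second_threshold=0.5)
assert not warn and speed == 8.0
```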


To sum up, in the embodiment of the disclosure, the event-based vision sensor is disposed on the treadmill body to perform sensing. When a runner is exercising on the treadmill, the position information of the runner may be estimated according to the sensing image generated by the event-based vision sensor and the depth estimation model, so that the speed of the treadmill may be controlled according to the position information of the runner. In this way, the runner may be prevented in advance from falling on the treadmill. In addition, when the runner is exercising on the treadmill, a moving object may be detected according to the sensing image generated by the event-based vision sensor. By estimating the position information of the moving object according to the sensing image and the depth estimation model, the speed of the treadmill may be controlled according to the position information of the moving object. In this way, the runner may be prevented from being disturbed by the foreign object, or the foreign object may be prevented in advance from being drawn into the bottom of the treadmill. In light of the above, the safety of the treadmill may be significantly improved.


Although the disclosure has been described with reference to the above embodiments, the described embodiments are not intended to limit the disclosure. People of ordinary skill in the art may make some changes and modifications without departing from the spirit and the scope of the disclosure. Thus, the scope of the disclosure shall be subject to those defined by the attached claims.

Claims
  • 1. A treadmill, comprising: a treadmill body, comprising a running belt; an event-based vision sensor, disposed on the treadmill body and generating a sensing image; a processor, coupled to the event-based vision sensor, obtaining the sensing image, and performing runner detection on the sensing image, in response to determining that a runner is detected from the sensing image, the processor inputting the sensing image to a depth estimation model and obtaining position information of the runner relative to the running belt, and controlling a running speed of the running belt according to the position information of the runner.
  • 2. The treadmill according to claim 1, wherein the processor performs person detection on the sensing image, obtains a person bounding box on the sensing image, and determines whether the runner is detected according to whether the person bounding box is located within a predetermined area on the sensing image.
  • 3. The treadmill according to claim 2, wherein if the person bounding box is not located within the predetermined area on the sensing image, the processor determines that the runner is not detected; and if the person bounding box is located within the predetermined area on the sensing image, the processor determines that the runner is detected.
  • 4. The treadmill according to claim 1, wherein in response to determining that a runner is detected from the sensing image, the processor inputs the sensing image to a depth estimation model, obtains a depth map output by the depth estimation model, and determines a first distance between the runner and a reference position according to the depth map.
  • 5. The treadmill according to claim 4, wherein if the first distance is greater than a first threshold value, the processor controls the running speed of the running belt to decrease; and if the first distance is not greater than the first threshold value, the processor maintains the running speed of the running belt.
  • 6. The treadmill according to claim 1, wherein in response to determining that the runner is detected from the sensing image, the processor performs motion detection on a background area in the sensing image to detect a moving object in the background area.
  • 7. The treadmill according to claim 6, wherein in response to the detection of the moving object, the processor inputs the sensing image to the depth estimation model and obtains position information of the moving object, so as to control the running speed of the running belt according to the position information of the moving object.
  • 8. The treadmill according to claim 7, wherein in response to the detection of the moving object, the processor inputs the sensing image to the depth estimation model, obtains a depth map output by the depth estimation model, and determines a second distance between the moving object and a reference position according to the depth map.
  • 9. The treadmill according to claim 8, wherein if the second distance is less than a second threshold value, the processor controls the running speed of the running belt to decrease; and if the second distance is not less than the second threshold value, the processor maintains the running speed of the running belt.
  • 10. A speed control method of a treadmill, comprising: generating a sensing image through an event-based vision sensor disposed on the treadmill; performing runner detection on the sensing image; in response to determining that a runner is detected from the sensing image, inputting the sensing image to a depth estimation model and obtaining position information of the runner relative to a running belt of the treadmill; and controlling a running speed of the running belt according to the position information of the runner.
  • 11. The speed control method according to claim 10, wherein the step of performing the runner detection on the sensing image comprises: performing person detection on the sensing image and obtaining a person bounding box on the sensing image; and determining whether the runner is detected according to whether the person bounding box is located within a predetermined area on the sensing image.
  • 12. The speed control method according to claim 11, wherein the step of determining whether the runner is detected according to whether the person bounding box is located within the predetermined area on the sensing image comprises: determining that the runner is not detected if the person bounding box is not located within the predetermined area on the sensing image; and determining that the runner is detected if the person bounding box is located within the predetermined area on the sensing image.
  • 13. The speed control method according to claim 10, wherein the step of, in response to determining that the runner is detected from the sensing image, inputting the sensing image to the depth estimation model and obtaining the position information of the runner relative to the running belt comprises: in response to determining that the runner is detected from the sensing image, inputting the sensing image to the depth estimation model and obtaining a depth map output by the depth estimation model; and determining a first distance between the runner and a reference position according to the depth map.
  • 14. The speed control method according to claim 13, wherein the step of controlling the running speed of the running belt according to the position information of the runner comprises: controlling the running speed of the running belt to decrease if the first distance is greater than a first threshold value; and maintaining the running speed of the running belt if the first distance is not greater than the first threshold value.
  • 15. The speed control method according to claim 10, further comprising: in response to determining that the runner is detected from the sensing image, performing motion detection on a background area in the sensing image to detect a moving object in the background area.
  • 16. The speed control method according to claim 15, further comprising: in response to the detection of the moving object, inputting the sensing image to the depth estimation model and obtaining position information of the moving object; and controlling the running speed of the running belt according to the position information of the moving object.
  • 17. The speed control method according to claim 16, wherein the step of, in response to the detection of the moving object, inputting the sensing image to the depth estimation model and obtaining the position information of the moving object comprises: in response to the detection of the moving object, inputting the sensing image to the depth estimation model and obtaining the depth map output by the depth estimation model; and determining a second distance between the moving object and a reference position according to the depth map.
  • 18. The speed control method according to claim 17, wherein the step of controlling the running speed of the running belt according to the position information of the moving object comprises: controlling the running speed of the running belt to decrease if the second distance is less than a second threshold value; and maintaining the running speed of the running belt if the second distance is not less than the second threshold value.
Priority Claims (1)
Number Date Country Kind
111129282 Aug 2022 TW national