CONTROL METHOD FOR CLOTHES TREATMENT DEVICE

Information

  • Publication Number
    20240301604
  • Date Filed
    June 16, 2022
  • Date Published
    September 12, 2024
  • CPC
    • D06F33/36
    • D06F2103/04
    • D06F2105/60
  • International Classifications
    • D06F33/36
    • D06F103/04
    • D06F105/60
Abstract
A control method for a clothes treatment device, to solve the problem that senseless (hands-free) activation of a voice module cannot be achieved. The control method includes: acquiring an image of a user with an image acquisition module; determining, on the basis of the image, whether the user has clothes in hand; if so, further determining whether the weight of the clothes to be treated currently in a clothes treatment drum exceeds a threshold value; and, according to the determination result, selectively starting a voice module to issue a washing reminder or starting a weighing module to weigh the clothes. When the user has clothes in hand, the voice module or the weighing module is started according to whether the weight of the clothes currently in the clothes treatment drum exceeds the threshold value, so that the voice module can be started without any operation by the user.
Description
FIELD

The present disclosure relates to the technical field of clothing treatment, and specifically provides a control method for a clothing treatment apparatus.


BACKGROUND

With the continuous development of science and technology, smart washing machines are equipped with an ever-increasing range of functions. For example, the smart washing machine is provided with a voice module, which can enable users to interact with the smart washing machine, or broadcast a current working status of the smart washing machine, etc., so that users can make better use of the smart washing machine, improving the user experience.


However, in current washing apparatuses, an explicit user action is required before the voice module is activated. For example, the user sends an instruction to activate the voice module, and the washing apparatus activates the voice module after receiving the instruction; alternatively, the washing apparatus or a mobile terminal has a button provided thereon for activating the voice module, and the user can press this button as required to activate the voice module. It can be seen that current voice modules cannot truly achieve senseless activation.


Accordingly, there is a need for a new technical solution in the art to solve the above problem.


SUMMARY

The present disclosure aims to solve the above technical problem, that is, to solve the problem that senseless activation of the voice module cannot be achieved in the prior art.


In a first aspect, the present disclosure provides a control method for a clothing treatment apparatus, in which the clothing treatment apparatus includes a clothing treatment cylinder for holding clothing-to-be-treated, and the clothing treatment apparatus is provided with an image acquisition module, a voice module and a weighing module; the control method includes: acquiring an image of a user through the image acquisition module; judging whether the user has clothing in the hand based on the image; if there is clothing, then further judging whether the weight of the current clothing-to-be-treated in the clothing treatment cylinder exceeds a threshold; and selectively activating the voice module for washing reminder or activating the weighing module for clothing weighing based on a judgment result.


In a preferred technical solution of the control method described above, the step of “selectively activating the voice module for washing reminder or activating the weighing module for clothing weighing based on a judgment result” further includes: if the weight of the clothing-to-be-treated exceeds the threshold, then activating the voice module for washing reminder; and if the weight of the clothing-to-be-treated does not exceed the threshold, then activating the weighing module for clothing weighing.


In a preferred technical solution of the control method described above, the voice module has a pronunciation mode and a pickup mode, and the step of “activating the voice module” further includes: controlling the voice module to enter the pronunciation mode.


In a preferred technical solution of the control method described above, after the step of “controlling the voice module to enter the pronunciation mode”, the control method further includes: controlling the voice module to send a “wash the clothing” reminder.


In a preferred technical solution of the control method described above, after the step of “activating the weighing module for clothing weighing”, the control method further includes: acquiring the weight of the clothing-to-be-treated in the clothing treatment cylinder and saving the weight.


In a preferred technical solution of the control method described above, after the step of “controlling the voice module to enter the pronunciation mode” or after the step of “acquiring the weight of the clothing-to-be-treated in the clothing treatment cylinder”, the control method further includes: controlling the voice module to enter the pickup mode.


In a preferred technical solution of the control method described above, the control method further includes: if the voice module does not receive a valid instruction within a preset time from the voice module's entry into the pickup mode, then controlling the voice module to exit the pickup mode; in which the valid instruction includes a start washing instruction, a stop washing instruction, a power on instruction, and a shutdown instruction.


In a preferred technical solution of the control method described above, after the step of “controlling the voice module to exit the pickup mode”, the control method further includes: controlling the voice module to operate at a first power.


In a preferred technical solution of the control method described above, the control method further includes: if there is no clothing, then turning off the image acquisition module.


In a preferred technical solution of the control method described above, the clothing treatment apparatus is further provided with a human body detection module, and before the step of “acquiring an image of a user through the image acquisition module”, the control method further includes: detecting whether a user has entered a preset area through the human body detection module; and if a user has entered the preset area, then activating the image acquisition module.


In the technical solutions of the present disclosure, the clothing treatment apparatus of the present disclosure includes a clothing treatment cylinder for holding the clothing-to-be-treated, and when the clothing is washed, the clothing-to-be-treated is placed into the clothing treatment cylinder for washing. The clothing treatment apparatus is provided with an image acquisition module, a voice module, and a weighing module. The control method of the present disclosure includes: acquiring an image of a user through the image acquisition module; judging whether the user has clothing in the hand based on the image; if there is clothing, then further judging whether the weight of the current clothing-to-be-treated in the clothing treatment cylinder exceeds a threshold; and selectively activating the voice module for washing reminder or activating the weighing module for clothing weighing based on a judgment result. Through this control method, when the user has clothing in the hand, it indicates that the user has the intention to wash the clothing. Then, it is further judged whether the weight of the current clothing-to-be-treated in the clothing treatment cylinder exceeds the threshold. Based on the judgment result, the voice module can be activated for washing reminder or the weighing module can be activated for clothing weighing. In this way, the voice module can be activated with no need for the user to perform any operation, thus achieving senseless activation of the voice module, and improving the user experience. It is also possible to activate the weighing module with no need for the user to perform any operation, so as to acquire the weight of the clothing-to-be-washed in the clothing treatment cylinder. 
This weight can then serve, in the next judgment, as the weight of the current clothing-to-be-treated for comparison with the threshold, and it is further determined whether to activate the voice module based on the judgment result, thereby better achieving the senseless activation of the voice module.


If the user does not have any clothing in the hand, it indicates that the user has no intention to wash the clothing and may only pass by the clothing treatment apparatus. In this case, the image acquisition module is turned off to save electrical energy.


Further, if the weight of the clothing-to-be-treated exceeds the threshold, it indicates that there is already enough clothing-to-be-treated in the clothing treatment cylinder and washing can be carried out. At this point, the voice module is activated for washing reminder. If the weight of the clothing-to-be-treated does not exceed the threshold, it indicates that there is not yet enough clothing-to-be-treated in the clothing treatment cylinder, and more clothing can be added into the clothing treatment cylinder. At this point, the weighing module is activated to weigh the clothing. Through this control method, the operation of the voice module or the weighing module can be controlled based on the weight of the current clothing-to-be-treated in the clothing treatment cylinder, thereby better serving the user.


Further, the voice module has a pronunciation mode and a pickup mode, and the step of “activating the voice module” further includes: controlling the voice module to enter the pronunciation mode. In this way, washing reminder can be carried out through voice reminder. After the step of “controlling the voice module to enter the pronunciation mode”, the voice module is controlled to send a “wash the clothing” reminder. After hearing this reminder, the users know that they can wash the clothing, which can better achieve the purpose of reminding users.


Further, after the step of “activating the weighing module for clothing weighing”, the control method of the present disclosure further includes: acquiring the weight of the clothing-to-be-treated in the clothing treatment cylinder and saving the weight. Based on this weight, it can be further judged whether the weight exceeds the threshold, and then based on the judgment result, it can be further determined whether to activate the voice module or not, thereby better achieving the senseless activation of the voice module.


Further, after the step of “controlling the voice module to enter the pronunciation mode” or after the step of “acquiring the weight of the clothing-to-be-treated in the clothing treatment cylinder”, the control method further includes: controlling the voice module to enter the pickup mode. In this way, users can achieve human-machine interaction with the clothing treatment apparatus through the voice module, which can better meet user needs.


Further, if the voice module does not receive a valid instruction within a preset time from its entry into the pickup mode (the valid instructions including a start washing instruction, a stop washing instruction, a power on instruction, and a shutdown instruction), it indicates that the user currently does not intend to control the clothing treatment apparatus through voice interaction. In this case, the voice module is controlled to exit the pickup mode, and the human-machine interaction channel is closed.


Further, after the step of “controlling the voice module to exit the pickup mode”, the voice module is controlled to operate at a first power. It should be noted that the first power is lower than the power at which the voice module operates normally, that is, the voice module is controlled to operate at a low power. Through this control method, the consumption of electrical energy can be effectively reduced.


Further, the clothing treatment apparatus is further provided with a human body detection module, and before the step of “acquiring an image of a user through the image acquisition module”, the control method of the present disclosure further includes: detecting whether a user has entered a preset area through the human body detection module. If a user has entered the preset area, it indicates that the user may have the intention to wash the clothing, and then the image acquisition module is activated. By acquiring the image of the user through the image acquisition module, and then analyzing whether the user has clothing in the hand based on this image, it is further judged whether the user has the intention to wash the clothing, which can better achieve intelligent control of the clothing treatment apparatus and improve the user experience.





BRIEF DESCRIPTION OF DRAWINGS

In the following, the control method for the clothing treatment apparatus of the present disclosure will be described in connection with the accompanying drawings, in which:



FIG. 1 is a main flowchart of the control method for the clothing treatment apparatus according to an embodiment of the present disclosure;



FIG. 2 is a flowchart of the voice module entering the pickup mode according to an embodiment of the present disclosure;



FIG. 3 is a flowchart of activating the image acquisition module according to an embodiment of the present disclosure;



FIG. 4 is a flowchart of calling a preset model to analyze the image and judging whether the user has clothing in the hand according to an embodiment of the present disclosure;



FIG. 5 is a flowchart of analyzing the image using the ResNet18 model according to an embodiment of the present disclosure; and



FIG. 6 is a flowchart of judging whether the user has clothing in the hand based on an analysis result of the ResNet18 model according to an embodiment of the present disclosure.





DETAILED DESCRIPTION

Preferred embodiments of the present disclosure will be described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are only used to explain the technical principles of the present disclosure, and are not intended to limit the scope of protection of the present disclosure. Although the embodiment is illustrated by using a washing machine as an example, the same principle is also applicable to various types of clothing treatment apparatuses such as shoe washers, clothing dryers, washing-drying integrated machines, and clothing care machines. For example, if the clothing treatment apparatus is a shoe washer, when performing image analysis, it is required to judge whether the user has a shoe in the hand.


It should be noted that terms “first” and “second” are only used for descriptive purpose, and should not be understood as indicating or implying relative importance.


In order to better serve users, current smart washing machines are often provided with a voice module, which can enable users to interact with the smart washing machines, or broadcast a current working status of the smart washing machines, etc. However, current voice modules all require users to send an instruction or manually select during activation, which cannot truly achieve senseless activation. For this purpose, the control method for the clothing treatment apparatus of the present disclosure judges whether the user has clothing in the hand based on the image of the user. If there is clothing, it is further judged whether the weight of the current clothing-to-be-treated in the clothing treatment cylinder exceeds a threshold. Based on the judgment result, the voice module is activated for washing reminder or the weighing module is activated for clothing weighing, so that the voice module can be activated with no need for the user to perform any operation, thus achieving senseless activation of the voice module, and improving the user experience.


In this embodiment, the washing machine includes a washing cylinder, which is used to hold the clothing-to-be-washed. For washing the clothing, it is only required to put the clothing to be washed into the washing cylinder. The washing machine is provided with an image acquisition module, a voice module, a weighing module, and a control module. The image acquisition module, the voice module, and the weighing module are all connected to the control module. The image acquisition module is configured to acquire the image of the user. The voice module is configured to achieve interaction between the user and the washing machine, or to broadcast a voice reminder. The weighing module is configured to acquire the weight of the clothing-to-be-washed in the washing cylinder. The control module is configured to control a preset model to analyze the image of the user acquired by the image acquisition module. Based on the analysis result, it is judged whether the user has clothing in the hand, so as to judge whether the weight of the clothing-to-be-washed in the washing cylinder exceeds the threshold, and based on the judgment result, activate the voice module for voice reminder or activate the weighing module for clothing weighing.
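The decision made by the control module from the preset model's output can be sketched as follows. This is an illustrative assumption: the patent does not specify how the model's scores are converted into a judgment, so a two-class output (no clothing / clothing) with a softmax and a 0.5 cutoff is used here purely as an example.

```python
import math

# Hypothetical decision step: `logits` is assumed to be the preset model's
# raw two-class output [score_no_clothing, score_clothing]; the softmax and
# the 0.5 cutoff are illustrative choices, not part of the disclosure.
def has_clothing_in_hand(logits):
    """Return True when the model judges that the user holds clothing."""
    exp = [math.exp(x - max(logits)) for x in logits]  # numerically stable softmax
    prob_clothing = exp[1] / sum(exp)
    return prob_clothing > 0.5
```

With such a decision function, the control module can forward a single boolean to the rest of the control flow rather than raw model scores.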


In this embodiment, the image acquisition module may be, but is not limited to, a video camera, an image camera, etc., which can be flexibly selected by those skilled in the art. The image acquisition module can be provided on a front panel of the washing machine; obviously, it can also be provided at other positions for facilitating acquiring the image of the user.


The voice module may be, but is not limited to, a device that can broadcast voice reminders, such as a horn or a speaker; it may also be, but is not limited to, a device that can collect ambient sounds, such as a pickup or a microphone. The voice module can be provided on the front panel or a side panel of the washing machine; obviously, it can also be provided at other locations for facilitating broadcasting voices and collecting sounds.


The weighing module may be, but is not limited to, a weighing sensor, a weighing meter, etc., which can be provided inside the washing cylinder to acquire the weight of the clothing-to-be-washed in the washing cylinder.


First, with reference to FIG. 1, the control method for the clothing treatment apparatus of the present disclosure will be described. FIG. 1 is a main flowchart of the control method for the clothing treatment apparatus according to an embodiment of the present disclosure.


As shown in FIG. 1, in a possible embodiment, the control method of the present disclosure includes steps S10 to S50.


Step S10: acquiring an image of the user through the image acquisition module.


In step S10, the image of the user is acquired through the image acquisition module.


Step S20: judging whether the user has clothing in the hand based on the image; if not, executing step S30; and if yes, executing step S40.


In step S20, based on the image of the user acquired in step S10, image analysis is performed to judge whether the user has clothing in the hand based on the analysis result. The specific steps of image analysis are described in detail below. If the user does not have clothing in the hand, step S30 is executed. If the user has clothing in the hand, step S40 is executed.


Step S30: turning off the image acquisition module.


In step S30, based on the judgment result in step S20, if the user does not have clothing in the hand, it indicates that the user has no intention to wash the clothing and may only pass by the washing machine. In this case, the image acquisition module is turned off, and only the human body detection module is controlled to continue detecting whether a user enters the preset area, thus saving electrical energy.


Step S40: if there is clothing, then further judging whether the weight of the current clothing-to-be-washed in the washing cylinder exceeds a threshold.


Step S50: selectively activating the voice module for washing reminder or activating the weighing module for clothing weighing based on a judgment result.


In step S40, based on the judgment result in step S20, if the user has clothing in the hand, it indicates that the user has the intention to wash the clothing, and then it is further judged whether the weight of the current clothing-to-be-washed in the washing cylinder exceeds a threshold.


In step S50, based on the judgment result in step S40, the voice module is activated for washing reminder or the weighing module is activated for clothing weighing. In this way, the voice module can be activated with no need for the user to perform any operation, thus achieving senseless activation of the voice module, and improving the user experience. It is also possible to activate the weighing module with no need for the user to perform any operation, so as to acquire the weight of the clothing-to-be-washed in the washing cylinder. This weight can be used as the weight of the current clothing-to-be-washed in the next time of judgment for comparison with the threshold, and then it is further determined whether to activate the voice module based on the judgment result, thereby better achieving the senseless activation of the voice module.
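The branching of steps S10 to S50 described above can be sketched in a few lines. The function name, its parameters, and the return labels are illustrative assumptions; the 4.5 kg threshold is the example value given later in this description.

```python
THRESHOLD_KG = 4.5  # example threshold value from the description

def control_cycle(has_clothing_in_hand, saved_weight_kg):
    """One pass of the main flow in FIG. 1 (steps S10-S50).

    `has_clothing_in_hand` stands for the result of analyzing the image
    acquired in step S10 (step S20); `saved_weight_kg` stands for the saved
    weight of the clothing currently in the cylinder.
    """
    if not has_clothing_in_hand:
        return "turn_off_camera"        # step S30: user is only passing by
    if saved_weight_kg > THRESHOLD_KG:  # step S40: enough clothing loaded
        return "voice_reminder"         # step S50: activate the voice module
    return "weigh_clothing"             # step S50: activate the weighing module
```

For the example weights used below, `control_cycle(True, 4.8)` selects the voice reminder and `control_cycle(True, 3.0)` selects weighing.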


Specifically, if the weight of the clothing-to-be-washed in the washing cylinder exceeds the threshold, it indicates that there is already enough clothing-to-be-washed in the clothing treatment cylinder and washing can be carried out. For example, the weight of the clothing-to-be-washed is 4.8 kg, and the threshold is 4.5 kg. At this point, the voice module is activated for washing reminder.


It should be noted that the voice module has a pronunciation mode and a pickup mode. When the voice module enters the pronunciation mode, it can send a “wash the clothing” reminder. Obviously, other voice reminders such as “power on” can also be sent by the voice module, and the voice reminders can be flexibly chosen by those skilled in the art according to specific application scenarios, as long as the washing reminder can be sent by the voice module. When the voice module enters the pickup mode, it can collect ambient sounds, such as the various instructions sent by the user (a washing instruction, a power on instruction, etc.).


Preferably, the “activating the voice module for washing reminder” specifically includes: controlling the voice module to enter the pronunciation mode. After the step of “controlling the voice module to enter the pronunciation mode”, the voice module is controlled to send a “wash the clothing” reminder. After hearing this reminder, the users know that they can wash the clothing, which can better achieve the purpose of reminding users.


Obviously, the voice module can also provide the washing reminder through other means, such as using a “beeping” alarm sound for washing reminder. Those skilled in the art can flexibly choose specific reminder means.


If the weight of the clothing-to-be-washed in the washing cylinder does not exceed the threshold, it indicates that there is not yet enough clothing-to-be-washed in the clothing treatment cylinder, and more clothing can be added into the clothing treatment cylinder. For example, the weight of the clothing-to-be-washed is 3 kg, and the threshold is 4.5 kg. At this point, the weighing module is activated for clothing weighing.


In a possible embodiment, after the step of “activating the weighing module for clothing weighing”, the control method of the present disclosure further includes: acquiring the weight of the clothing-to-be-washed in the washing cylinder through the weighing module and saving the weight. After acquiring the weight of the clothing-to-be-washed, the weight is saved. When it is judged in the next time that the user has clothing in the hand, this weight will be used as the weight of the current clothing-to-be-washed in the washing cylinder and compared with the threshold. Then, based on the judgment result, it is further determined whether to activate the voice module, so as to better achieve senseless activation of the voice module.
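The save-and-reuse behavior described above can be sketched as a small state holder. The class and method names are illustrative assumptions; the point is only that the weight measured at one pass becomes the "current weight" compared against the threshold at the next pass.

```python
class WeightStore:
    """Keeps the last measured load so the next judgment can reuse it."""

    def __init__(self):
        self.saved_kg = 0.0  # nothing weighed yet

    def save_after_weighing(self, measured_kg):
        # Called after the weighing module runs (end of step S50).
        self.saved_kg = measured_kg

    def current_load_kg(self):
        # Used as "the weight of the current clothing-to-be-washed"
        # the next time step S40 compares against the threshold.
        return self.saved_kg
```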


Through this control method, the operation of the voice module or the weighing module can be controlled based on the weight of the current clothing-to-be-washed in the washing cylinder, thereby better serving the user.


After the voice module is controlled to enter the pronunciation mode or after the weight of the clothing-to-be-washed in the washing cylinder is acquired, the voice module is controlled to enter the pickup mode to collect ambient sounds in the pickup mode and achieve human-machine interaction. Specifically, in the following, the control method for the clothing treatment apparatus of the present disclosure will be further described with reference to FIG. 2, which is a flowchart of the voice module entering the pickup mode according to an embodiment of the present disclosure.


As shown in FIG. 2, in a possible embodiment, after the voice module is controlled to enter the pronunciation mode or after the weight of the clothing-to-be-washed in the washing cylinder is acquired, the control method of the present disclosure further includes steps S61 to S63.


Step S61: after controlling the voice module to enter the pronunciation mode or after acquiring the weight of the clothing-to-be-washed in the washing cylinder, controlling the voice module to enter the pickup mode.


In step S61, after controlling the voice module to enter the pronunciation mode, the washing reminder can be sent to remind the user that the clothing can be washed. Then, the voice module is controlled to enter the pickup mode and collect ambient sounds, such as user instructions.


Alternatively, after acquiring the weight of the clothing-to-be-washed in the washing cylinder and saving the weight, the voice module is controlled to enter the pickup mode and collect ambient sounds. In this way, when the weight of the clothing-to-be-washed exceeds the threshold, after the voice module sends the washing reminder, the user can send instructions to the washing machine through voice, such as an instruction to start washing. When the weight of the clothing-to-be-washed does not exceed the threshold, if the user still wants to wash the clothing, the instructions can also be sent through voice, such as the instruction to start washing.


Through this control method, after the voice module enters the pronunciation mode or after the weight of the clothing-to-be-washed is acquired, the voice module is placed in the pickup mode, which can achieve human-machine interaction between the user and the washing machine, better control the washing machine, and improve the user experience.


Step S62: if the voice module does not receive a valid instruction within a preset time from the voice module's entry into the pickup mode, then controlling the voice module to exit the pickup mode.


The valid instruction includes a start washing instruction, a stop washing instruction, a power on instruction, and a shutdown instruction. It should be noted that the types of valid instructions listed above are only specific types of the instructions, which do not limit the specific content of the instructions that the voice module can receive, as long as the valid instructions received by the voice module can control the washing machine to start washing, stop washing, power on, and shut down.


In step S62, after controlling the voice module to enter the pickup mode in step S61, the duration of the voice module's entry into the pickup mode is acquired. If the voice module has entered the pickup mode for a preset duration, and the voice module does not receive any valid instruction within this preset duration (for example, the preset duration is 5 minutes, and the voice module does not receive any valid instruction within 5 minutes from entry into the pickup mode), it indicates that the user currently has no intention to wash the clothing. At this point, the voice module is controlled to exit the pickup mode.
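The timeout check of step S62 can be sketched as follows. The 5-minute figure is the example given above, the instruction names correspond to the valid instructions listed above, and the function interface itself is an illustrative assumption.

```python
# Valid instructions as listed in the description.
VALID_INSTRUCTIONS = {"start_washing", "stop_washing", "power_on", "shutdown"}
PRESET_TIMEOUT_S = 5 * 60  # example preset duration: 5 minutes

def should_exit_pickup(instructions_heard, elapsed_s):
    """True when no valid instruction arrived within the preset time (step S62)."""
    heard_valid = any(i in VALID_INSTRUCTIONS for i in instructions_heard)
    return elapsed_s >= PRESET_TIMEOUT_S and not heard_valid
```

For example, with no valid instruction after 5 minutes the pickup mode is exited, while any valid instruction keeps it open.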


Step S63: controlling the voice module to operate at a first power.


In step S63, after controlling the voice module to exit the pickup mode in step S62 above, the voice module is controlled to operate at the first power. It should be noted that the first power is lower than the normal operating power of the voice module. For example, the voice module is controlled to be in a sleep status or operate at an extremely low first power. By controlling the voice module to operate at a lower first power, electrical energy can be effectively saved.


In a possible embodiment, a relay is provided on a power supply circuit of the voice module. After the voice module has exited the pickup mode, or before judging whether the user has clothing in the hand through step S20, the relay is controlled to open, so as to control the voice module to operate at the first power; for example, the voice module is in the sleep state, or operates at an extremely low power. After it is judged that the user has clothing in the hand, the relay is controlled to close, so as to control the voice module to operate at a second power, which is higher than the first power; specifically, the voice module operates at its normal operating power. Through this control method, the voice module is controlled to operate at the lower first power before it is determined whether the user has clothing in the hand, or when the user does not have clothing in the hand, that is, the voice module is in the sleep state, thus saving electrical energy. After it is determined that the user has clothing in the hand, the voice module is controlled to operate at the higher second power, that is, the voice module is activated normally, so that the user can interact with the washing machine through the voice module, thereby achieving senseless activation of the voice module.
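The relay-based two-power scheme above can be sketched as follows. The power values and the class interface are illustrative assumptions; only the relationship (relay open → first power, relay closed → second, higher power, switched on the clothing-in-hand judgment) comes from the description.

```python
FIRST_POWER_W = 0.5   # assumed low/sleep power (relay open)
SECOND_POWER_W = 3.0  # assumed normal operating power (relay closed)

class VoicePowerControl:
    """Models the relay on the voice module's power supply circuit."""

    def __init__(self):
        self.relay_closed = False  # relay open by default: low power

    def on_judgment(self, clothing_in_hand):
        # Close the relay only once clothing in hand is confirmed (step S20).
        self.relay_closed = clothing_in_hand
        return self.power_w()

    def power_w(self):
        return SECOND_POWER_W if self.relay_closed else FIRST_POWER_W
```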


Of course, it is also possible not to provide a relay on the power supply circuit of the voice module; instead, the control module directly controls the voice module to operate at the first or second power, which can be flexibly chosen by those skilled in the art, as long as the voice module can be controlled to operate at a lower first power at the appropriate time to save electrical energy, or operate at a higher second power to ensure the normal operation of the voice module.


In order to provide more accurate services to the user, before acquiring the image of the user in step S10 above, it is first detected whether a user has entered a preset area, and then it is determined whether to activate the image acquisition module based on whether a user has entered the preset area. Specifically, in the following, the control method for the clothing treatment apparatus of the present disclosure will be further described with reference to FIG. 3, which is a flowchart of activating the image acquisition module according to an embodiment of the present disclosure.


As shown in FIG. 3, in a possible embodiment, before the step S10 of “acquiring an image of a user through the image acquisition module”, the control method of the present disclosure further includes:


step S100: detecting whether a user has entered a preset area through a human body detection module; and


step S200: if a user has entered the preset area, then activating the image acquisition module.


In this embodiment, the washing machine is also provided with a human body detection module, which is configured to detect whether a user has entered the preset area. The human body detection module can be provided on the front panel of the washing machine; obviously, it can also be provided at any other position that facilitates detecting whether a user has entered the preset area.


It should be noted that the human body detection module can be any device capable of detecting whether a user has entered the preset area, such as an infrared detection module or a radar detection module. Regardless of which method is used to detect whether a user has entered the preset area, the specific detection method should not constitute a limitation to the present disclosure.


In step S100, the human body detection module, such as an infrared detection module or a radar detection module, is used to detect whether a user has entered the preset area.


It should be noted that the human body detection module can detect in real time, or detect at a preset time interval, in which the preset time interval can be 15 seconds, 30 seconds, 1 minute, 3 minutes, etc. Of course, these preset time intervals are only illustrative and not restrictive. Those skilled in the art can flexibly adjust and set the preset time interval in practical applications based on the frequency with which users enter the location where the washing machine is placed, as long as the human body detection module can accurately detect whether a user has entered the preset area.


It should be noted that the preset area can be a detection area during the normal operation of the human body detection module, or it can be an area preset by those skilled in the art based on experiments or experience. The preset area can be within an area having a certain linear distance from the washing machine, such as 0.8 m, 1.2 m, or 1.5 m. The preset area can be flexibly adjusted and set by those skilled in the art.


In step S200, based on the detection result in step S100, if it is detected that a user has entered the preset area, it indicates that the user may have the intention to wash the clothing; of course, it is also possible that the user may only pass by the washing machine. In order to further judge the user's intention, the image acquisition module is activated, so that the image of the user is further acquired by the image acquisition module.
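Steps S100 and S200 can be sketched as a simple gate: activate the image acquisition module only when the detected user is inside the preset area. This is a hedged sketch; the function name and the example distance threshold are assumptions, not the patent's implementation.

```python
# Sketch of steps S100/S200: the image acquisition module is activated only
# when a user is detected inside the preset area. The preset distance of
# 1.2 m is one of the example values given in the text.

PRESET_AREA_M = 1.2  # preset linear distance from the washing machine (example)


def should_activate_camera(detected_distance_m):
    """Return True when the detected user is inside the preset area.

    detected_distance_m is None when the human body detection module
    detects no user at all.
    """
    if detected_distance_m is None:
        return False
    return detected_distance_m <= PRESET_AREA_M


assert should_activate_camera(None) is False   # nobody near the machine
assert should_activate_camera(0.8) is True     # user inside the preset area
assert should_activate_camera(2.5) is False    # user merely passing at a distance
```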


After it is determined that a user has entered the preset area, step S10 is executed to acquire the image of the user, and it is further judged whether the user has clothing in the hand based on this image. Specifically, in the following, the control method for the clothing treatment apparatus of the present disclosure will be further described with reference to FIGS. 4 to 6, in which FIG. 4 is a flowchart of calling a preset model to analyze the image and judging whether the user has clothing in the hand according to an embodiment of the present disclosure, FIG. 5 is a flowchart of analyzing the image using the ResNet18 model according to an embodiment of the present disclosure, and FIG. 6 is a flowchart of judging whether the user has clothing in the hand based on an analysis result of the ResNet18 model according to an embodiment of the present disclosure.


As shown in FIG. 4, in a possible embodiment, the step of “judging whether the user has clothing in the hand based on the image” in step S20 further includes:


step S201: calling a preset model to analyze the image; and


step S202: judging whether the user has clothing in the hand based on an analysis result.


In step S201, the preset model is pre-stored on the washing machine, and can be, but is not limited to, a CNN model, the ResNet18 model, the ResNet101 model, the DeepLab V3+ model, the ResNeXt model, or the HRNet model.


Preferably, the preset model is the ResNet18 model, and a network structure of the ResNet18 model is shown in Table 1 below.


TABLE 1: network structure of the ResNet18 model

Layer Name       Output           ResNet-18
conv1            112 × 112 × 64   7 × 7, 64, stride 2
conv2_x          56 × 56 × 64     3 × 3 max pool, stride 2; [3 × 3, 64; 3 × 3, 64] × 2
conv3_x          28 × 28 × 128    [3 × 3, 128; 3 × 3, 128] × 2
conv4_x          14 × 14 × 256    [3 × 3, 256; 3 × 3, 256] × 2
conv5_x          7 × 7 × 512      [3 × 3, 512; 3 × 3, 512] × 2
average pool     1 × 1 × 512      7 × 7 average pool
fully connected  1                512 × 1 fully connected
softmax          1

The ResNet18 model adopts non-linear connections between its layers, so the ResNet18 model as a whole is also a non-linear mapping. It should be noted that the calculation method and operating principle of the ResNet18 model are common knowledge in the art and will not be described in detail herein. Of course, other models such as the ResNet50 model and the ResNet101 model can also be used to analyze the image of the user to determine whether the user has clothing in the hand.


In the following, the specific way of “calling a preset model to analyze the image” in step S201 will be described by referring to FIG. 5 and using the ResNet18 model as the preset model:


step S2011: extracting multiple sub-images from the image according to a preset method;


step S2012: inputting all the sub-images into the ResNet18 model; and


step S2013: calculating and acquiring a feature value of the image by the ResNet18 model based on all the sub-images.


In one preset method, different activity boxes can be set, the image in each activity box is extracted, and the extracted images are used as the sub-images. Alternatively, in another preset method, the image of the user is divided into N parts based on its size, where N is a positive integer, e.g., 5, 10, 15 or 20 parts; each part of the image is extracted separately, and the extracted images are used as the sub-images. Of course, the preset method is not limited to the methods listed above. Regardless of which method is used, any preset method can be adopted, as long as multiple sub-images can be extracted from the image.
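The second preset method above, dividing the image into N parts, can be sketched as follows. This is a minimal illustration under the assumption that the image is modeled as a list of pixel rows; the function name is ours, and in the real system each extracted part would be input to the ResNet18 model per step S2012.

```python
# Sketch of step S2011 (second preset method): divide the acquired image into
# N contiguous horizontal strips and use each strip as a sub-image.

def extract_sub_images(image_rows, n_parts):
    """Split an image (a list of pixel rows) into n_parts sub-images."""
    total = len(image_rows)
    base, extra = divmod(total, n_parts)
    sub_images, start = [], 0
    for i in range(n_parts):
        end = start + base + (1 if i < extra else 0)  # spread remainder rows
        sub_images.append(image_rows[start:end])
        start = end
    return sub_images


image = [[0] * 8 for _ in range(10)]       # toy 10-row "image"
subs = extract_sub_images(image, 5)        # N = 5 parts, as in the example
assert len(subs) == 5
assert sum(len(s) for s in subs) == 10     # no rows lost or duplicated
```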


As shown in FIG. 6, in step S202, the step of “judging whether the user has clothing in the hand based on an analysis result” specifically includes:


step S2021: judging whether the feature value is larger than a preset value;


step S2022: if the feature value is larger than the preset value, then judging that the user has clothing in the hand; and


step S2023: if the feature value is smaller than or equal to the preset value, then judging that the user does not have clothing in the hand.


In step S2021, the feature value calculated in step S2013 is compared with the preset value, and it is finally judged whether the user has clothing in the hand based on a comparison result between the feature value and the preset value.


In step S2022, if the feature value is larger than the preset value, for example, if the preset value is 0.5, and the calculated feature value in step S2013 is 0.95, which is larger than the preset value, then it indicates that the user has clothing in the hand, and it is judged that the user has clothing in the hand.


In step S2023, if the feature value is smaller than or equal to the preset value, for example, if the preset value is 0.5, and the calculated feature value in step S2013 is 0.05, which is smaller than the preset value, then it indicates that the user does not have clothing in the hand, and it is judged that the user does not have clothing in the hand.


It should be noted that the preset value listed above is only illustrative and not restrictive. Those skilled in the art can flexibly adjust and set the preset value based on the accuracy of judging whether the user has clothing in the hand. For example, the preset value can also be 0.7, 0.8, 0.9 or 1. Any preset value can be selected, as long as it can be accurately judged whether the user has clothing in the hand.


It should also be noted that in the above process, steps S2022 and S2023 are not executed in sequence; they are alternative branches. Which of the two steps is executed depends only on the judgment result of whether the feature value is larger than the preset value, and the corresponding step is executed based on that judgment result.
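Steps S2021 to S2023 reduce to a single comparison, sketched below. The preset value 0.5 and the sample feature values are the examples given in the text; the function name is an assumption introduced for illustration.

```python
# Sketch of steps S2021-S2023: compare the feature value computed by the
# preset model against the preset value to judge whether the user has
# clothing in the hand.

PRESET_VALUE = 0.5  # example preset value from the text


def has_clothing_in_hand(feature_value):
    """True when the feature value is larger than the preset value (S2022),
    False when it is smaller than or equal to it (S2023)."""
    return feature_value > PRESET_VALUE


assert has_clothing_in_hand(0.95) is True    # example from step S2022
assert has_clothing_in_hand(0.05) is False   # example from step S2023
assert has_clothing_in_hand(0.5) is False    # equal to the preset value
```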


In summary, in the preferred technical solutions of the present disclosure, the image of the user is acquired through the image acquisition module, and then the image is analyzed. It is judged whether the user has clothing in the hand based on the analysis result. If there is clothing, it is further judged whether the weight of the current clothing-to-be-treated in the clothing treatment cylinder exceeds the threshold, and based on the judgment result, the voice module is activated for washing reminder or the weighing module is activated for clothing weighing. In this way, the voice module can be activated with no need for the user to perform any operation, thus achieving senseless activation of the voice module and improving the user experience. If there is no clothing, the image acquisition module is turned off to save electrical energy. If the weight of the clothing-to-be-treated exceeds the threshold, the voice module is controlled to send a "wash the clothing" reminder; and if the weight of the clothing-to-be-treated does not exceed the threshold, the weight of the clothing-to-be-treated in the clothing treatment cylinder is acquired through the weighing module and the weight is saved. After the voice module enters the pronunciation mode, or after the weight of the clothing-to-be-treated in the clothing treatment cylinder is acquired, the voice module is controlled to enter the pickup mode, thereby achieving human-machine interaction between the user and the washing machine. If the voice module does not receive a valid instruction within a preset time from the voice module's entry into the pickup mode, the voice module exits the pickup mode and is controlled to operate at the first power, thus saving electrical energy.
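The summarized decision flow can be sketched end to end as follows. This is a hedged sketch under stated assumptions: the function name, the action labels, and the weight threshold value are all illustrative, not taken from the patent.

```python
# Illustrative sketch of the summarized control flow: given whether the user
# holds clothing and the current load weight, choose the next action.
# The 5.0 kg threshold is a hypothetical example value.

WEIGHT_THRESHOLD_KG = 5.0


def next_action(user_has_clothing, load_weight_kg):
    """Return the action the control method would take next."""
    if not user_has_clothing:
        return "turn_off_image_module"   # no washing intent: save energy
    if load_weight_kg > WEIGHT_THRESHOLD_KG:
        return "voice_wash_reminder"     # drum already full: remind to wash
    return "weigh_clothing"              # otherwise weigh and save the load


assert next_action(False, 0.0) == "turn_off_image_module"
assert next_action(True, 6.2) == "voice_wash_reminder"
assert next_action(True, 2.1) == "weigh_clothing"
```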


Although various steps in the above embodiment have been described in the above sequential order, it can be understood by those skilled in the art that in order to achieve the effect of the embodiment, different steps are not necessarily executed in this order. They can be executed simultaneously (in parallel) or in a reverse order. These simple changes are all within the scope of protection of this application.


Hitherto, the technical solutions of the present disclosure have been described in connection with the preferred embodiments shown in the accompanying drawings, but it is easily understood by those skilled in the art that the scope of protection of the present disclosure is obviously not limited to these specific embodiments. Without departing from the principles of the present disclosure, those skilled in the art can make equivalent changes or replacements to relevant technical features, and all the technical solutions after these changes or replacements will fall within the scope of protection of the present disclosure.

Claims
  • 1-10. (canceled)
  • 11. A control method for a clothing treatment apparatus, wherein the clothing treatment apparatus comprises a clothing treatment cylinder for holding clothing-to-be-treated, and the clothing treatment apparatus is provided with an image acquisition module, a voice module and a weighing module; the control method comprising: acquiring an image of a user through the image acquisition module; judging whether the user has clothing in the hand based on the image; when there is clothing, then further judging whether the weight of the current clothing-to-be-treated in the clothing treatment cylinder exceeds a threshold; and selectively activating the voice module for washing reminder or activating the weighing module for clothing weighing based on a judgment result.
  • 12. The control method according to claim 11, wherein the step of selectively activating the voice module for washing reminder or activating the weighing module for clothing weighing based on a judgment result further comprises: when the weight of the clothing-to-be-treated exceeds the threshold, then activating the voice module for washing reminder; and when the weight of the clothing-to-be-treated does not exceed the threshold, then activating the weighing module for clothing weighing.
  • 13. The control method according to claim 12, wherein the voice module has a pronunciation mode and a pickup mode, and the step of activating the voice module further comprises: controlling the voice module to enter the pronunciation mode.
  • 14. The control method according to claim 13, wherein after the step of controlling the voice module to enter the pronunciation mode, the control method further comprises: controlling the voice module to send a wash the clothing reminder.
  • 15. The control method according to claim 13, wherein after the step of activating the weighing module, the control method further comprises: acquiring the weight of the clothing-to-be-treated in the clothing treatment cylinder and saving the weight.
  • 16. The control method according to claim 15, wherein after the step of controlling the voice module to enter the pronunciation mode or after the step of acquiring the weight of the clothing-to-be-treated in the clothing treatment cylinder, the control method further comprises: controlling the voice module to enter the pickup mode.
  • 17. The control method according to claim 16, further comprising: when the voice module does not receive a valid instruction within a preset time from the voice module's entry into the pickup mode, then controlling the voice module to exit the pickup mode;wherein the valid instruction comprises a start washing instruction, a stop washing instruction, a power on instruction, and a shutdown instruction.
  • 17. The control method according to claim 16, further comprising: when the voice module does not receive a valid instruction within a preset time from the voice module's entry into the pickup mode, then controlling the voice module to exit the pickup mode; wherein the valid instruction comprises a start washing instruction, a stop washing instruction, a power on instruction, and a shutdown instruction.
  • 19. The control method according to claim 11, further comprising: when there is no clothing, then turning off the image acquisition module.
  • 20. The control method according to claim 11, wherein the clothing treatment apparatus is further provided with a human body detection module; and before the step of acquiring an image of a user through the image acquisition module, the control method further comprises: detecting whether a user has entered a preset area through the human body detection module; and when a user has entered the preset area, then activating the image acquisition module.
Priority Claims (1)
Number Date Country Kind
202110757378.6 Jul 2021 CN national
PCT Information
Filing Document Filing Date Country Kind
PCT/CN2022/099067 6/16/2022 WO