The present disclosure relates to a method and a device for estimating the weight of food objects.
Food processing machines used e.g. to cut food objects into portions, e.g. fixed weight slices, use three dimensional (3D) surface profiles of the food objects as input for creating the cutting profiles for the food objects, or simply for evaluating the total weight of the food objects. This is commonly done using a line laser 101 as shown in
Also, when utilizing a 3D image in estimating the weight of a food object, where the weight is determined by multiplying the volume of the food object with the density of the food object, the assumption of a fixed density is made. This assumption can, however, lead to additional inaccuracy because the density can vary, both from food object to food object and within the same food object. As an example, if the food object is a fish fillet, the density at the tail part is commonly different from the density at the head part.
On the above background it is an object of embodiments of the present disclosure to improve the accuracy in weighing using 3D surface profiles. It is a further object to enable a more flexible and simple use of 3D imaging devices for determining weight of food objects.
In general, the disclosure preferably seeks to mitigate, alleviate or eliminate one or more of the above mentioned disadvantages of the prior art singly or in any combination. In particular, it may be seen as an object of embodiments of the present disclosure to provide a method that solves the above mentioned problems, or other problems.
To better address one or more of these concerns, in a first aspect of the disclosure, a method is provided for estimating weights of food objects. The method comprises
providing a processor with an artificial neural network software module,
capturing three dimensional (3D) training image data and associated training weight data of a plurality of training food objects by use of a 3D imaging device and a scale,
training the artificial neural network software module by use of the training image data and associated training weight data,
capturing a three dimensional (3D) image of a food object by a 3D imaging device,
using the trained artificial neural network software module and the captured image to provide a weight correlated data estimate for said food object.
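By way of a purely illustrative and non-limiting sketch of how such a module could be realized, a small convolutional regressor may map the captured 3D surface profile (here assumed to be a 128×128 depth map) directly to a weight correlated output. The use of PyTorch, the fixed input resolution, the layer sizes and all names below are assumptions made for the sketch only and are not prescribed by the disclosure.

    import torch
    import torch.nn as nn

    class WeightNet(nn.Module):
        """Illustrative regressor: 3D surface profile (depth map) -> weight correlated estimate."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(64, 1)  # single output, e.g. weight in grams or a density

        def forward(self, depth_map):
            # depth_map: (batch, 1, H, W) surface heights above the support surface
            x = self.features(depth_map)
            return self.head(x.flatten(1))

Training such a regressor on captured 3D training image data paired with scale readings, and then running a forward pass on a newly captured image, corresponds to the capturing, training and estimation steps listed above.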
By training such an artificial neural network software module, the software learns to correlate specific shapes of food objects to specific characteristics of the food objects. An example of such learning is that specific 3D shapes may indicate empty space between the food object and the surface on which it rests, while other 3D shapes may indicate that the food object is flat against the surface and therefore indicate no empty space. Another example may be that shorter food objects, e.g. a fish fillet, are more likely to have a specific part, e.g. the tail part of the fish fillet, lifted upwards. This typical characteristic implies an empty space between the tail and the surface, whereas a longer fish fillet is less disposed to form empty spaces. Yet another example, which will be discussed in more detail later, is to take varying density into account, where the software may learn e.g. that the density at the tail part is larger than at the head part, and this difference may be taken into account when estimating the weight of the food object.
The step of capturing the 3D image data by said 3D imaging device may, in one embodiment, comprise using a digital camera, and/or a line laser positioned above an incoming food item, typically in an angular position relative to a vertical axis, where the reflection of the light from the surface of the food object may be captured by a detector, e.g. a camera, that outputs the 3D surface profile.
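As a simplified, hedged illustration of the line-laser principle (the actual geometry depends on the chosen arrangement and is not prescribed here): with the detector looking vertically down and the laser sheet tilted by an angle θ to the vertical, a surface point raised by a height h shifts the imaged laser line laterally by approximately Δx = h·tan θ, so that h ≈ Δx / tan θ; scanning the line along the conveying direction then builds up the 3D surface profile row by row.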
The food objects may include any type of food products, e.g. fish fillets, smoked fish fillets, meat/food products of e.g. fish, poultry, pork, or cattle, poultry products such as breast or legs or wings, slices or fillets of food products etc.
The training of the artificial neural network software module for said similar or identical food species as said food objects may involve scanning the first hundreds or thousands of similar or identical food species to capture the 3D image data for each of them, weighing them, and associating the respective weight with the image data.
In one embodiment, the weight correlated data comprises a weight estimate. In this case, there is a single output indicating the weight estimate of the food object.
A salmon fillet can have a higher density in its tail than in the rest of the fillet, and in general, variations between food objects or within one and the same food object may be important for the estimation of weight based on an image.
In one embodiment, the weight correlated data comprises a density estimate. From the density estimate, the weight estimate may be calculated by multiplying the density by the measured volume. The artificial neural network software module may particularly be trained to identify different densities within the same food object and to use the different densities for determining the weight of the food object.
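In other words, with ρ denoting the density estimate and V the volume measured from the 3D image, the weight estimate may be written as W = ρ·V, and for a food object with a non-uniform density the corresponding region-wise relation is W = Σᵢ ρᵢ·Vᵢ, where ρᵢ and Vᵢ are the density and volume of the i-th region identified by the module.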
In an embodiment, the food object is a portion from a larger food object such that multiple of such portions define the whole larger food object. Such a larger food object can e.g. be a fish fillet, a meat product, etc. where density may vary along these larger food objects.
As an example, it is common that the density at the tail part of a fish fillet is different from the density at the head part. The same applies to meat products. Since the density of fat is lower than the density of meat, a varying fat content may cause a varying density. Accordingly, when a larger food object is received into e.g. a cutting device that scans such a larger food object, the density distribution for this particular larger food object can be taken into account when calculating a cutting profile to obtain fixed weight portions.
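Purely as an illustrative sketch of how such a cutting profile could be derived from a density distribution (the discretisation into thin length-wise slices, the function names and the hypothetical density figures are assumptions of the sketch, not of the disclosure), cut positions for fixed weight portions may be found by accumulating mass along the larger food object and cutting each time the target portion weight is reached:

    import numpy as np

    def cut_positions(slice_volumes_cm3, slice_densities_g_per_cm3, target_g):
        """Return slice indices after which to cut so that each portion
        weighs approximately target_g grams."""
        slice_mass = np.asarray(slice_volumes_cm3) * np.asarray(slice_densities_g_per_cm3)
        cumulative = np.cumsum(slice_mass)
        cuts, next_target = [], target_g
        for i, mass_so_far in enumerate(cumulative):
            if mass_so_far >= next_target:
                cuts.append(i + 1)        # cut after slice i
                next_target += target_g
        return cuts

    # Hypothetical fillet: equal slice volumes, density varying along the length,
    # so portions of equal weight end up having different lengths.
    volumes = [10.0] * 40                            # cm^3 per thin slice
    densities = np.linspace(1.10, 1.00, 40)          # g/cm^3, e.g. denser towards the tail
    print(cut_positions(volumes, densities, target_g=100.0))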
The artificial neural network software module may be trained to identify in the 3D image a non-uniform density of the food object, and it may be configured to determine the weight correlated data estimate based on the non-uniform density. As an example, the artificial neural network software module may identify a specific shape or pattern of shapes in the 3D image, e.g. a circular shape, an oval shape, an oblong shape, the shape of a tail, the shape of a head or similar characteristic shapes. For such characteristic shapes, the artificial neural network software module may be trained to account for different densities pertaining to the specific shape or pattern of shapes and which are typical for food objects having such a characteristic shape. The artificial neural network software module may use the shape identified in the 3D image and be trained to provide the density at least partly based on the shape.
In one example, the artificial neural network is configured for hamburgers, and during training of the network, the system will learn that a circular hamburger may have a density which is different from the density of less circular hamburgers, and the system will provide a better indication of the weight correlated data based on the shape. In another example, the above mentioned tail part of a salmon is identified with a different density compared to the head part or abdomen part of the salmon etc. and the non-uniform density of the salmon is taken into account when estimating the weight correlated data estimate.
The artificial neural network software module may be trained to determine a density at least partly based on a determined surface texture of the food object. As an example, it may be identified that the food object has a rough surface, and the training may enable such food objects to be assigned a density which is different from the density of a food object having a smoother surface.
The 3D image may be captured such that one or more air-pockets are included in the 3D image of the food object. Such air-pockets may be shadowed by said food object and thereby not be directly visible in the 3D image. In these situations, the artificial neural network software module may be trained to identify shapes where air-pockets are likely to exist and to compensate for such air-pockets when assigning a density to the food objects. By way of example, if an air-pocket is present beneath a food object, there may be a visible bulge on the food object over the air-pocket. The training may enable the artificial neural network software module to identify such situations.
If an air-pocket beneath a food object were to be captured visually by a camera, it would require the imaging device to capture the images sideways, or from below the food object. This complicates the device and the arrangement of the imaging device. Since the training of the artificial neural network software module is carried out to identify shadowed air-pockets, i.e. air-pockets which are not directly visible in the 3D image, the image may be captured with less attention to a specific and complicated arrangement of the imaging device. Accordingly, the image may be captured e.g. from above the food objects.
The 3D-image may be captured in a direction which is essentially perpendicular to a conveyor belt on which the food objects are supported.
The laser light source may be positioned e.g. next to the 3D imaging device. Both the laser light source and the 3D imaging device may be pointed downwards towards the food objects. This provides a simple setup and easy maintenance of the 3D imaging device as compared with devices arranged at different positions and angles circumferentially around the food object to directly image potential air-pockets.
In one embodiment, the step of training the artificial neural network software module includes the steps of:
cutting said similar or identical larger food objects into smaller pieces,
acquiring a weight and a 3D image of each of the smaller pieces, and
associating the weight with the 3D image for each of the smaller pieces.
Accordingly, an advantageous training process is provided in which the density of the smaller pieces yields a density distribution of the food objects, and the variation in this density distribution may be taken into account when estimating the weight of said food object.
In one embodiment, when training the artificial neural network software module, each of said smaller pieces is associated with position data indicating the position of the smaller food pieces within said similar or identical food species. It is thus ensured that the position of the associated weight and 3D image data for each of said smaller food pieces is known. Therefore, a density distribution of said food object may further be utilized to take variable densities into account when performing the weight estimate of the food object. As an example, if the food product is pork meat, e.g. a piece of a pork flank, the difference in density due to a different average lean/fat ratio may be taken into account when estimating the weight of the piece of pork flank. Thereby, due to the additional position data, the training of the artificial neural network software module will be improved.
In one embodiment, when training the artificial neural network software module, the 3D image of each of the smaller pieces is determined before said cutting is performed. As an example, during the training process, hundreds or thousands of similar or identical food species may be run through said line laser, or the 3D imaging device may be used, to capture said 3D data, and based on this data the 3D image data may be utilized to determine the volume of each smaller food piece. In another embodiment, this 3D image may be captured after said cutting is performed.
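A minimal sketch of this training-data preparation, assuming a height map sampled on a regular grid, pieces of equal thickness cut along the conveying direction, and the helper names below being hypothetical:

    import numpy as np

    def piece_volumes(height_map_mm, pixel_area_mm2, piece_length_px):
        """Split the pre-cutting height map into length-wise pieces and
        integrate the heights to obtain a volume per piece (mm^3)."""
        volumes = []
        for start in range(0, height_map_mm.shape[0], piece_length_px):
            piece = height_map_mm[start:start + piece_length_px]
            volumes.append(float(piece.sum()) * pixel_area_mm2)
        return volumes

    def training_records(height_map_mm, piece_weights_g, pixel_area_mm2, piece_length_px):
        """Combine the per-piece volumes with the scale readings obtained after
        cutting, keeping the position of each piece within the larger food object."""
        vols = piece_volumes(height_map_mm, pixel_area_mm2, piece_length_px)
        return [
            {"position": i, "volume_mm3": v, "weight_g": w, "density_g_per_mm3": w / v}
            for i, (v, w) in enumerate(zip(vols, piece_weights_g))
        ]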
In a second aspect, the disclosure provides a device for providing weight correlated data estimate for a food object, the device comprising:
a 3D imaging device configured to provide three dimensional (3D) image data of the food object, and
a processor configured with a trained artificial neural network software module configured to output the weight correlated data estimate for said food object based on the three dimensional image data, the artificial neural network software module being trained for similar or identical food species as said food object, where the training of the artificial neural network software module is based on collected 3D image data with associated weight data for said similar or identical food species.
The 3D imaging device may be positioned above the food object, and the device may, in one embodiment, comprise only one 3D imaging device.
The position of the 3D imaging device may be arranged such that an air-pocket can be shadowed by the food object and therefore contributes to the volume of the captured 3D image, and the artificial neural network is trained to identify such air-pockets and take them into account when determining a density and thus the weight correlated data estimate.
Embodiments will be described, by way of example only, with reference to the drawings, in which
In step (S1) 201, capturing three dimensional (3D) image data of a food object is performed by a 3D imaging device. The 3D imaging device may comprise a digital camera, or a combination of a line laser pointed towards the food object and a camera, where the reflection of light from the surface of the food object is detected by the camera and, based thereon, a 3D profile of the food object is created.
In step (S2) 202, a processor utilizes the captured 3D image data as input in an artificial neural network software module. As will be discussed in more detail later, the artificial neural network software module has previously been trained for similar or identical food species as said food objects based on collected 3D image data with associated weight data for said similar or identical food species.
In step (S3) 203, a weight correlated data estimate is outputted for said food object. The term weight correlated data may be interpreted as the actual weight estimate in grams or kilograms, or the estimate may be the density estimate.
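Continuing the hypothetical sketch introduced for the first aspect (the WeightNet name, the tensor shapes and the parameter file are assumptions of that sketch, not of the disclosure), steps S1 to S3 then amount to a single forward pass through the previously trained module:

    import torch

    depth_map = torch.rand(128, 128)                 # placeholder for the profile captured in step S1

    model = WeightNet()                              # regressor sketched earlier
    model.load_state_dict(torch.load("weightnet.pt"))  # previously trained parameters (hypothetical file)
    model.eval()

    with torch.no_grad():                            # step S2: feed the 3D image to the trained module
        estimate = model(depth_map.unsqueeze(0).unsqueeze(0))

    print(f"weight correlated data estimate: {estimate.item():.1f} g")   # step S3: output the estimate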
In step (S1′) 301 the training includes capturing three dimensional (3D) image data of a food object by a 3D imaging device, which can be any kind of imaging device, e.g. a camera, a line scanner, etc.
In step (S2′) 302 the food object is weighed by any type of a weighing device, e.g. a stationary weighing device or a dynamic scale.
In step (S3′) 303 the captured 3D image data and the weighing data are used as input data, i.e. training data, for an artificial neural network software module.
Steps S1′ to S2′ are then repeated for thousands or hundreds of thousands of objects, and the data is stored.
S3′ is the training step, which is repeated hundreds of thousands or millions of times based on the stored data. After the training, the software module can make highly accurate weight estimates.
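A hedged sketch of this data collection and training step, again assuming the WeightNet regressor sketched earlier and (3D image, weight) pairs stored as tensors; the optimiser, learning rate, batch size and number of epochs are arbitrary illustrative choices:

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # Placeholders for the stored training data: one 3D profile and one scale reading per object.
    images = torch.rand(1000, 1, 128, 128)           # captured in the repeated step S1'
    weights = torch.rand(1000, 1) * 500.0            # grams, weighed in the repeated step S2'

    loader = DataLoader(TensorDataset(images, weights), batch_size=32, shuffle=True)
    model = WeightNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.MSELoss()

    for epoch in range(100):                         # step S3': repeated passes over the stored data
        for batch_images, batch_weights in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(batch_images), batch_weights)
            loss.backward()
            optimizer.step()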
The method steps in the flowchart in
In step (S1″) 601, the training includes acquiring three dimensional (3D) image data of a food object by a 3D imaging device, which can be any kind of imaging device, e.g. a camera, a line scanner, etc.
In step (S2″) 602, the food object is cut into smaller pieces, e.g. pieces of the same thickness.
In step (S3″) 603, the smaller pieces are weighed by any type of a weighing device, e.g. a stationary weighing device or a dynamic scale.
In step (S4″) 604, the captured 3D image data and the weighing data are used as input data, i.e. training data, for an artificial neural network software module.
Steps S1″ to S3″ are repeated for hundreds or thousands of objects and the data is stored. Step S4″ is the training step, which is repeated hundreds of thousands or millions of times based on the stored data. After the training, the software module can produce a highly accurate density distribution for such food objects, and thereby a highly accurate weight estimate.
The flowchart in
Conveyor 709 conveys the individual pieces to a scale 703 where each piece 712 is weighed. Accordingly, the input data into the artificial neural network includes the 3D image of each individual piece and the associated weight. Additional input data may be position data indicating the position of the individual piece within the object 701.
In step (S1″′) 801, a food object is cut into smaller pieces of e.g. the same thickness.
In step (S2″′) 802, three dimensional (3D) image data of each of the smaller pieces is captured by a 3D imaging device, which can be any kind of imaging device, e.g. a camera, a line scanner, etc.
In step (S3″′) 803, the smaller pieces are weighed by any type of a weighing device, e.g. a stationary weighing device or a dynamic scale.
In step (S4″′) 804, the captured 3D image data and the weighing data are used as input data, i.e. training data, for an artificial neural network software module.
Steps S2″′ and S3″′ may just as well be reversed, i.e. S3″′ may be performed prior to step S2″′.
Steps S1″′ to S3″′ are repeated for hundreds or thousands of objects and the data is stored. Step S4″′ is the training step, which is repeated hundreds of thousands or millions of times based on the stored data.
The flowchart in
In
The same scenario is shown in
In this embodiment, the output 1209 is then used to operate a cutting device 1211 to cut the food object into a plurality of pieces 1212 which may e.g. be portions of fixed weight. In this process, parameters like differences in the density along the food object are taken into account.
The table in
While the disclosure has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the disclosure is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed disclosure, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
The disclosure further provides the following numbered embodiments:
1. A method of estimating weights of food objects, comprising:
2. The method according to embodiment 1, wherein the weight correlated data comprises a weight estimate.
3. The method according to embodiment 1, wherein the weight correlated data comprises a density estimate.
4. The method according to any of the preceding embodiments, wherein the food object is a portion from a larger food object such that multiple of such portions define the whole larger food object.
5. The method according to any of the preceding embodiments, wherein the step of training the artificial neural network software module includes the step of:
6. The method according to embodiment 5, wherein each of said smaller pieces is associated with a position data indicating the position of the smaller food pieces within said similar or identical food species.
7. The method according to any of the preceding embodiments, wherein the volume of each of the smaller pieces is determined before said cutting is performed.
8. The method according to any of the preceding embodiments, wherein the volume of each of the smaller pieces is captured after said cutting is performed.