The present specification relates generally to the field of rearing animals and/or livestock on farms for the processing and production of meat products derived therefrom. More specifically, the present specification is related to the use of three-dimensional (3D) stationary gantry computed tomography (CT) systems for improving farming practices that lead to enhanced quality of reared animal products in addition to improved management of abattoir production processes.
Farms produce livestock destined for consumption in human and animal food chains, including but not limited to, poultry, pigs, goats, sheep and cattle. In contrast to other industries, where a blending of product is possible to achieve a level of consistency, each animal has individual characteristics that determine consumer satisfaction. The manner in which the animals are raised or treated on the farm tends to affect the characteristics that drive customer satisfaction with meat products derived from the animals (such as, for example, a beefsteak or lamb chop). Consumers place increasing emphasis on consumption quality, food safety, and food traceability of the resultant meat product. As an example, animals reared at cattle farms are sold and processed at meat factories to produce a variety of meat products within the food chain. Strict quality control measures exist to ensure that the animals that enter the factory are optimally processed to produce products that meet desired consumer satisfaction in terms of eating quality, food chain traceability, and food safety.
To satisfy such consumer demands, the farmer needs to demonstrate conformance to standards and practices, in addition to regular farming activities, which place considerable burden on the farmer. The objective of a farmer is thus to breed the highest value animal for the farming conditions at a particular farm location (high altitude, low altitude, warm, cool, wet, dry, lush, barren) and to do this at the lowest possible cost. This means managing food, water, veterinary needs, transportation, and maintenance costs to deliver the greatest return. Currently, farmers use a range of information sources to plan their farming practices including weather forecasting, satellite imagery for pasture and water management, animal tracking to determine optimal location of feed and water troughs, genetic profiling for herd development and veterinary records. In general, such information is processed by the farmer using his own farming experience in order to optimize animal health, lean meat yield (the amount of meat compared to fat or bone), and consequent return on investment.
Once an animal reaches a meat processing plant or factory, the animals are typically slaughtered first; the head, viscera, hide and extremities are subsequently removed; and the carcasses are then placed into a cool room for a period of time to hang while fat solidifies. Once the carcass is rigid, it is then sectioned into major pieces (known as primals). Each primal is then passed on to a de-boning area in which retail ready cuts of meat are processed into bone-in or boneless cuts prior to packaging and transfer into the retail supply chain. In this labor-intensive process, hundreds of people stand shoulder-to-shoulder, each performing a certain set of actions as the carcass or primal passes in front of them, with the carcass typically suspended from a moving rail and the primal typically carried on a moving conveyor belt. Instructions are provided to each individual in the de-boning area with regard to which cuts are required on each day to satisfy customer demand and meet production targets. The result is a productive process but not one that typically operates at peak efficiency.
Efficiency losses come from trimming excess meat off the retail cut, thus putting valuable product into a lower-grade food supply chain, for example overcutting valuable rib-eye muscle such that it ends up destined for lower value minced meat. Further efficiency losses come from inaccurate production planning in which a carcass is processed into a sub-optimal set of retail cuts. This typically occurs because the cutting team of individuals is provided with a production plan that is not specific to each individual carcass but rather reflects an average production target across the full set of carcasses to be processed that day.
Each individual working in the plant has an obligation to meet high standards of food safety, but in some cases, the carcass may contain invisible contamination or health defects that are hidden beneath the visible surface of the carcass and that are not possible for the individual to detect. This can result in occasional, yet significant, food safety issues that can be expensive and complex to mitigate. Further, as retail cuts are produced and packaged, there are occasional errors in food labelling and packaging which result in shipping incorrect products to customers. Such errors lead to rejection, sometimes of large quantities, of product by retail customers or consumers. In these cases, there is an adverse financial impact on the processor and the rejected product usually needs to be destroyed. It should also be noted that meat processing plants or factories predominantly employ individual workers who use knives to dissect a carcass, stage by stage, into required consumer products. Thus, the individual workers in a meat processing line responsible for the slaughter of an animal all the way to the final packaging of a product must undergo a high level of training to achieve proper cutting technique on a repeatable basis at the processing line speed required to achieve a commercially satisfactory outcome.
In some sectors, the use of automation to either substitute for or augment the labor force is prevalent (for example, in poultry processing) but in other sectors, the use of automation is limited (for example, beef processing). In large part, this is driven by the complexity and variation of the anatomy between one carcass and another. In poultry, such variations are relatively minimal whereas in beef the variations can be large depending on the breed and weight of the carcass being processed.
On the retail end, customers of meat products have specific requirements for the quality and cut of the products that they buy from a meat factory. These may include meat grading, fat thickness, weight and other factors that the processor must conform to regardless of the supply of animals into the factory. Given that the processor only understands the actual anatomy of the carcass during the dissection process in the factory, it is hard to plan optimal production based on the significant variation in size, weight and quality of the animals that arrive at the factory. This may lead to directing higher quality product to lower value output streams thereby resulting in reduction in yield and factory efficiency.
Meat quality grading systems tend to rely on relatively subjective measurements of a carcass and may include characteristics such as, but not limited to: a) comparison of meat color to a standard color chart at a specific location in the carcass; b) comparison of marbling and fat content of the carcass to a set of standardized photographs; and c) the amount of force needed to indent a particular point on the surface of the carcass, among other subjective indicators. Such measurements tend to be point-based and do not measure the natural variation in meat quality that can occur either within a particular muscle group or between muscle groups.
There is therefore a need for use of X-ray scanning systems and methods to improve farming practices leading to a higher valuation of reared animals. There is also a need for the use of X-ray screening at various stages of the animal life cycle during development on a farm so that meat products derived from a herd are better characterized in terms of food quality and food safety. There is also a need to improve production efficiency, to reduce labor utilization, to take a carcass-centric approach to production, to enhance plant and food safety performance and to reduce losses due to poorly labelled and poorly packaged product. Accordingly, there is a need for use of X-ray scanning systems and methods for improved quality control, consumption quality, carcass valuation and food safety in meat processing factories or abattoirs. There is also a need for the use of X-ray screening to aid overall production planning and automation for improved abattoir management.
The following embodiments and aspects thereof are described and illustrated in conjunction with systems, tools and methods, which are meant to be exemplary and illustrative, and not limiting in scope. The present application discloses numerous embodiments.
The present specification discloses an imaging system configured to evaluate meat, comprising: an X-ray scanning system configured to generate X-ray scan data of meat; a hyperspectral imaging system configured to generate hyperspectral imaging data; a computing device in data communication with the X-ray scanning system and the hyperspectral imaging system, wherein the computing device includes a processor and memory storing a plurality of programmatic instructions which when executed by the processor, configures the processor to: acquire the X-ray scan data and hyperspectral imaging data; automatically determine a quality of the meat by analyzing the acquired X-ray scan data in combination with the hyperspectral imaging data; categorize the meat, based on the determined quality, into one of acceptable quality and unacceptable quality categories; and generate data indicative of the quality of the meat.
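The acquire, analyze, and categorize flow described above can be illustrated with the following minimal sketch. The feature names (`mean_density`, `anomaly_score`) and the threshold values are hypothetical placeholders for illustration only, not part of the disclosed system.

```python
# Minimal sketch of the acquire -> analyze -> categorize flow.
# All field names and threshold values here are hypothetical.

def determine_quality(xray_scan: dict, hsi_scan: dict) -> dict:
    """Combine X-ray and hyperspectral features into a quality record."""
    # Hypothetical features: a mean density from the X-ray scan and an
    # anomaly score derived from the hyperspectral bands.
    density = xray_scan["mean_density"]
    anomaly = hsi_scan["anomaly_score"]

    # Example rule: acceptable if density falls in a plausible range
    # and no strong spectral anomaly was detected.
    acceptable = 0.9 <= density <= 1.2 and anomaly < 0.5

    return {
        "category": "acceptable" if acceptable else "unacceptable",
        "density": density,
        "anomaly_score": anomaly,
    }

record = determine_quality({"mean_density": 1.05}, {"anomaly_score": 0.1})
```

In a real system the two feature values would themselves be computed from the raw X-ray scan data and hyperspectral cubes; the sketch only shows how the two modalities can be combined into a single binary categorization.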
Optionally, the X-ray scanning system comprises a two-dimensional projection X-ray imaging system having at least one of a single-view or a dual-view configuration, in combination with multi-energy X-ray (MEXA) sensors. Optionally, the X-ray scanning system comprises an inclined conveyor such that an entrance end of the conveyor is at a lower height position than an exit end of the conveyor. Optionally, the X-ray scanning system uses a declining conveyor such that an entrance end of the conveyor is at a higher height position than an exit end of the conveyor.
Optionally, the hyperspectral scan data comprises data in a visible light wavelength range and a shortwave infrared wavelength range.
Optionally, the meat comprises offal and organs.
Optionally, the system further comprises at least one of an ink-jet, a laser beam, an LED strip or an augmented reality headset adapted to generate a visual indication of quality in relation to the meat.
Optionally, the processor is further configured to: generate at least one graphical user interface to display at least one image corresponding to the X-ray scan data, and determine the quality based on data indicative of a thickness and/or a density of the meat.
Optionally, the system further comprises a conveyor that translates the meat through the system at a speed ranging from 0.1 m/s to 1.0 m/s.
Optionally, the multi-sensor imaging system has an inspection tunnel having a length ranging from 1100 mm to 5000 mm, a width ranging from 500 mm to 1000 mm, and a height ranging from 300 mm to 1000 mm.
Optionally, the X-ray scanning system comprises a first X-ray source of 120 to 160 keV with 0.2 to 1.25 mA beam current and a second X-ray source of 120 to 160 keV with 0.2 to 1.25 mA beam current, wherein the first X-ray source is configured in an up-shooter configuration and the second X-ray source is configured in a side-shooter configuration. Optionally, the X-ray scanning system comprises multi-energy photon counting X-ray sensor arrays. Optionally, the X-ray scanning system comprises 6 to 22 data acquisition boards corresponding to the first X-ray source and 4 to 20 data acquisition boards corresponding to the second X-ray source.
Optionally, the X-ray scanning system is configured to acquire data in a plurality of energy bands, wherein the plurality of energy bands ranges from 3 to 20 and wherein each of the energy bands are in the range of 20-160 keV.
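As an illustration of acquisition in a plurality of energy bands, photon events can be histogrammed into contiguous bands between lower and upper energy thresholds. The band edges below are hypothetical example values within the 20-160 keV range stated above, not values prescribed by the specification.

```python
def bin_photons(energies_kev, band_edges):
    """Count photon events into contiguous energy bands.

    band_edges is a sorted list of N+1 edges defining N bands;
    events outside [edges[0], edges[-1]) are discarded.
    """
    counts = [0] * (len(band_edges) - 1)
    for e in energies_kev:
        for i in range(len(band_edges) - 1):
            if band_edges[i] <= e < band_edges[i + 1]:
                counts[i] += 1
                break
    return counts

# Six hypothetical bands spanning the 20-160 keV range.
edges = [20, 40, 60, 80, 100, 130, 160]
counts = bin_photons([25, 45, 45, 90, 150], edges)
# counts -> [1, 2, 0, 1, 0, 1]
```

Photon-counting detector hardware typically performs this binning with per-channel comparators; the sketch simply shows the mapping from event energies to band counts.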
Optionally, the hyperspectral imaging system comprises a first camera sensor configured for visible imaging in 200 to 1200 wavelength bands and a second camera sensor configured for shortwave infrared imaging in 400 to 700 wavelength bands. Optionally, the first camera sensor is configured to operate in a range of 400 nm to 900 nm and have a spectral resolution of at least 20 nm with a pixel size not exceeding 2.0 mm across a width of a conveyor. Optionally, the second camera sensor is configured to operate in a range of 900 nm to 1800 nm and have a spectral resolution of at least 20 nm with a pixel size not exceeding 2.0 mm across the width of the conveyor.
Optionally, the hyperspectral imaging system is configured to have an acquisition rate of 30 to 150 Hz.
Optionally, the X-ray scanning system and the hyperspectral imaging system are synchronized to an X-ray base frequency ranging from 150 to 500 Hz.
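One simple way to derive a synchronized hyperspectral trigger from the X-ray base frequency is integer division of line periods. The helper below is a hypothetical sketch of such a scheme, not the synchronization method disclosed here.

```python
def hsi_trigger_divisor(xray_base_hz: float, hsi_target_hz: float) -> int:
    """Return the integer divisor N such that triggering the
    hyperspectral camera on every Nth X-ray line period yields the
    largest acquisition rate not exceeding the requested target."""
    if xray_base_hz <= 0 or hsi_target_hz <= 0:
        raise ValueError("frequencies must be positive")
    # Ceiling division keeps the achieved rate at or below the target.
    return int(-(-xray_base_hz // hsi_target_hz))

# With a 300 Hz X-ray base frequency and a 100 Hz hyperspectral
# target, every 3rd line period triggers the camera.
divisor = hsi_trigger_divisor(300.0, 100.0)  # 3
```

Driving the camera from the X-ray line clock in this way keeps every hyperspectral frame aligned to a whole number of X-ray lines, which simplifies registering the two data streams.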
Optionally, the processor is further configured to determine a type of meat based on the acquired X-ray scan data and hyperspectral imaging data.
Optionally, the processor is further configured to: generate at least one graphical user interface to display at least one image corresponding to the hyperspectral imaging data; identify regions indicative of anomalies in the at least one image; and apply an annotation to the identified regions, wherein the annotation is at least one of a shape or a color. Optionally, the processor is configured to implement at least one machine learning model, wherein the machine learning model is configured to analyze the hyperspectral imaging data in order to determine the quality of the meat and the regions indicative of anomalies. Optionally, the machine learning model is adapted to be trained using K-means clustering in order to identify the regions indicative of anomalies.
Optionally, the data indicative of a quality of the meat includes at least one of a lean meat yield, a ratio of intra-muscular fat to tissue, an amount of inter-muscular fat, an absolute size of individual organs, a relative size of individual organs, a muscle volume, a number of ribs, a presence or an absence of diseases, a presence or an absence of cysts, a presence or an absence of tumors, a presence or an absence of pleurisy, or a presence or an absence of foreign objects.
The present specification also discloses a system for generating data indicative of animal breeding practices and meat production practices, comprising: a plurality of geographically distributed meat production sites having associated multi-sensor imaging systems, wherein each of the multi-sensor imaging system includes an X-ray scanning system and a hyperspectral imaging system; at least one server in data communication with a database and each of the multi-sensor imaging systems, wherein the at least one server includes a processor and memory storing a plurality of programmatic instructions which when executed by the processor, configures the processor to: implement at least one machine learning model; provide as input to the at least one machine learning model a plurality of data accessed from the database, wherein the at least one machine learning model is configured to analyze the plurality of data in order to generate said data, wherein said data are directed towards maximizing a plurality of positive parameters and minimizing a plurality of negative parameters associated with animal breeding and meat production; and enable a plurality of geographically distributed computing devices to access the generated data.
Optionally, the plurality of data corresponds to an aggregate of a plurality of animal and meat related data from each of the plurality of geographically distributed meat production sites and wherein the plurality of animal and meat related data comprises at least one of an animal ID, an animal type, a breed of animal, X-ray scan data corresponding to each of different ages of an animal, X-ray scan data of the animal's carcass and/or primal, hyperspectral image data of the animal's meat and organs, geographical location of a livestock farm and/or meat production site, climate, weather, season, feed type, time of year of meat production, vaccination history, medications, disease history, age of animal when received in the meat production site, a lean meat yield, a ratio of intra-muscular fat to tissue, an amount of inter-muscular fat, an absolute size of individual organs, a relative size of individual organs, a muscle volume, a number of ribs, a presence or an absence of diseases, a presence or an absence of cysts, a presence or an absence of tumors, a presence or an absence of pleurisy or a presence or an absence of foreign objects.
Optionally, the plurality of positive parameters comprises at least one of a reduced need for medication, a lower carbon footprint, a variable cost efficiency, a reputation protection, lower health risks to consumers, or improvements in the lean meat yield, the ratio of intra-muscular fat to tissue, the amount of inter-muscular fat, the absolute size of individual organs, the relative size of individual organs, the muscle volume, the number of ribs, the absence of diseases, the absence of cysts, the absence of tumors, the absence of pleurisy or the absence of foreign objects.
Optionally, the plurality of negative parameters comprises at least one of increases in the presence of diseases, the presence of cysts, the presence of tumors, the presence of pleurisy or the presence of foreign objects.
In some embodiments, the present specification discloses a method of evaluating quality of meat, comprising: operating a multi-sensor imaging system comprising: an X-ray scanning system configured to generate X-ray scan data of meat; a hyperspectral imaging system configured to generate hyperspectral imaging data; acquiring the X-ray scan data and hyperspectral imaging data; automatically determining a health status of the meat by analyzing the acquired X-ray scan data and/or hyperspectral imaging data; sorting the meat, based on the determined health status, into one of healthy and unhealthy categories; and generating data indicative of a quality of the meat.
Optionally, the X-ray scanning system uses 2D projection X-ray imaging in single-view or dual-view configurations with dual-energy or multi-energy X-ray (MEXA) sensors.
Optionally, the X-ray scanning system uses a conveyor that is positioned on an incline such that a first end of the conveyor is at a lower height position than a second opposing end of the conveyor.
Optionally, the X-ray scanning system uses a conveyor that is positioned on a decline such that a first end of the conveyor is at a higher height position than a second opposing end of the conveyor.
Optionally, the hyperspectral scan data includes visible and shortwave infrared scan data. Optionally, the meat includes offal and organs.
Optionally, the multi-sensor imaging system includes an ink-jet, laser beam, LED strip or augmented reality headset to indicate presence of health issues upon scanning the meat.
Optionally, the method further comprises generating at least one graphical user interface to display at least one image corresponding to the X-ray scan data, and determining the health status based on a threshold indicative of thickness and/or density of the meat.
Optionally, the multi-sensor imaging system includes a conveyor that translates the meat through the multi-sensor imaging system at a speed of about 0.2 m/s.
Optionally, the multi-sensor imaging system has an inspection tunnel of 1360 mm length, 630 mm width and 400 mm height.
Optionally, the X-ray scanning system includes first and second X-ray sources of 160 keV with 1.0 mA beam current, wherein the first X-ray source is configured in an up-shooter configuration and the second X-ray source is configured in a side-shooter configuration.
Optionally, the X-ray scanning system includes multi-energy photon counting X-ray sensor arrays.
Optionally, the X-ray scanning system includes 11 data acquisition boards corresponding to the first X-ray source and 9 data acquisition boards corresponding to the second X-ray source.
Optionally, the X-ray scanning system is configured to have an acquisition rate of 300 Hz in six energy bands, wherein the six energy bands are in the range of 20-160 keV.
Optionally, the hyperspectral imaging system includes a first camera sensor configured for visible imaging in 300 wavelength bands and a second camera sensor configured for shortwave infrared imaging in 512 wavelength bands.
Optionally, the first camera sensor operates in a range of 400 nm to 900 nm, has a spectral resolution of at least 20 nm with pixel size not exceeding 2.0 mm across a width of a conveyor.
Optionally, the second camera sensor operates in a range of 900 nm to 1800 nm, has a spectral resolution of at least 20 nm with pixel size not exceeding 2.0 mm across the width of the conveyor.
Optionally, the hyperspectral imaging system is configured to have an acquisition rate of 30 to 150 Hz.
Optionally, the X-ray scanning system and the hyperspectral imaging system are synchronized to an X-ray base frequency of 300 Hz.
Optionally, the method further comprises determining a type of meat based on the acquired X-ray scan data and hyperspectral imaging data.
Optionally, the method further comprises generating at least one graphical user interface to display at least one image corresponding to the hyperspectral imaging data; identifying regions indicative of anomalies in the at least one image; and applying a color and/or a shaped annotation to the identified regions, wherein the shaped annotation is one of a circle or a box.
Optionally, a machine learning model is configured to analyze the hyperspectral imaging data in order to determine the health status of the meat and identify the regions indicative of anomalies. Optionally, the machine learning model is trained using K-means clustering in order to identify the regions indicative of anomalies.
Optionally, the data indicative of a quality of the meat includes a plurality of after-sale parameters including lean meat yield, ratio of intra-muscular fat to tissue, amount of inter-muscular fat, absolute and relative size of individual organs, muscle volume, number of ribs, and presence or absence of diseases, cysts, tumors, pleurisy, and foreign objects.
The aforementioned and other embodiments of the present specification shall be described in greater depth in the drawings and detailed description provided below.
These and other features and advantages of the present specification will be further appreciated, as they become better understood by reference to the following detailed description when considered in connection with the accompanying drawings:
In an embodiment, the present specification describes the use of three-dimensional (3D) stationary gantry X-ray computed tomography systems to scan animals and/or livestock for enabling improved management of animal farming processes, functions, or events. The resultant scan information, particularly when generated or applied at different stages during the development of an animal, may be used to drive farming practices for individual animals and for overall development of one or more herds. When such farming practices are driven based on scan information of animals and herds, the result is improved valuation of animals, a reduction in farming costs, and a concurrent improvement in eating or consumption quality of each animal thereby leading to improved farm economics and consumer satisfaction.
The present specification also discloses the use of 3D stationary gantry X-ray computed tomography systems for carcass screening and improved abattoir production planning, execution, and automation. In various embodiments, the use of scanning technology supports high throughput, automated, meat-processing lines with reduced manual labor, objectively measured product quality and improved food safety standards.
In an embodiment, the present specification discloses the use of 3D X-ray inspection to generate an image of an entire carcass and sections of the carcass, during the stages of dissection, final product preparation, and packaging of the carcass. The generated images are used to derive metrics on, but not limited to, eating quality, animal health, lean meat yield (the amount of meat, fat and bone present in the carcass), carcass value, and 3D carcass structure. The derived metrics also drive abattoir efficiency through process automation, precise production planning, provision of accurate consumption quality data for each muscle within the carcass, rejection of unhealthy carcasses from the food chain, payment based on carcass value and not just on weight, quality control measures to ensure integrity of safe product to consumers, and supply chain assurance for customers to validate the supply chain of the meat that they purchase.
In an embodiment, the present specification also discloses a method for automating and increasing the efficiency of meat production in a meat processing plant. In an embodiment, the present specification provides for the use of network connected 2D and 3D X-ray imaging modalities along with visible and hand-held sensors such as, but not limited to, RFID and barcode readers in a meat producing plant. The networked imaging and screening modalities are used to generate data that is processed in real-time by specific algorithms in conjunction with production requirement information stored in a database that is coupled with the network, to generate individualized carcass-driven optimization of the meat production process as a whole. In an embodiment, the present specification provides a method for automatic and robotic cutting of carcasses.
In various embodiments, a computing device includes an input/output controller, at least one communication interface and a system memory. The system memory includes at least one random access memory (RAM) and at least one read-only memory (ROM). These elements are in communication with a central processing unit (CPU) to enable operation of the computing device.
In various embodiments, the computing device may be a conventional standalone computer or alternatively, the functions of the computing device may be distributed across a network of multiple computer systems and architectures. In some embodiments, execution of a plurality of sequences of programmatic instructions or code, which are stored in one or more non-volatile memories, enable or cause the CPU of the computing device to perform various functions and processes such as, for example, performing tomographic image reconstruction for display on a screen. In alternate embodiments, hard-wired circuitry may be used in place of, or in combination with, software instructions for implementation of the processes of systems and methods described in this application. Thus, the systems and methods described are not limited to any specific combination of hardware and software.
The terms “pass”, “passes”, “passes through”, “passing through”, and “traverses”, as used in this disclosure, encompass all forms of active and passive animal movement, including walking, being carried in a container, hanging from a structure or being conveyed/driven using a conveyor.
The term “meat” used in this disclosure may refer to flesh of animals used for food. In some embodiments, “meat” may refer to flesh inclusive of bone and edible parts but exclusive of inedible parts. Edible parts may include prime cuts, choice cuts, edible offal (head or head meat, tongue, brains, heart, liver, spleen, stomach or tripes and, in some cases, other parts such as feet, throat and lungs). Inedible parts may include hides and skins (except in the case of pigs), as well as hoofs and stomach contents.
The term “K-means clustering” used in this disclosure may refer to an unsupervised learning algorithm. There is no labeled data for this type of clustering, unlike with supervised learning. K-Means clustering is used to perform the division of objects into clusters that share similarities and are dissimilar to the objects belonging to another cluster.
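The clustering described above can be illustrated with a minimal one-dimensional K-means over pixel intensities. A production system would apply a library implementation to full spectral vectors, so the following should be read as a pedagogical sketch only; the initialization scheme and sample data are illustrative assumptions.

```python
def kmeans_1d(values, k, iters=20):
    """Tiny 1-D K-means returning (centroids, labels).

    Centroids are seeded with evenly spaced sorted input values;
    real systems would use better seeding (e.g. k-means++).
    """
    values = sorted(values)
    # Spread the initial centroids across the sorted data.
    centroids = [values[i * (len(values) - 1) // max(k - 1, 1)]
                 for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each value joins its nearest centroid.
        labels = [min(range(k), key=lambda j: abs(v - centroids[j]))
                  for v in values]
        # Update step: move each centroid to its cluster mean
        # (an empty cluster keeps its previous centroid).
        for j in range(k):
            members = [v for v, l in zip(values, labels) if l == j]
            if members:
                centroids[j] = sum(members) / len(members)
    return centroids, labels

# Two well-separated groups, e.g. normal vs. anomalous pixel intensities.
cents, labs = kmeans_1d([1.0, 1.1, 0.9, 8.0, 8.2, 7.9], k=2)
# labs -> [0, 0, 0, 1, 1, 1]
```

The anomalous cluster can then be flagged, for example as the cluster whose centroid lies furthest from the bulk of the data, which is one way the anomaly regions mentioned above could be identified.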
The present specification is directed towards multiple embodiments. The following disclosure is provided in order to enable a person having ordinary skill in the art to practice the invention. Language used in this specification should not be interpreted as a general disavowal of any one specific embodiment or used to limit the claims beyond the meaning of the terms used therein. The general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the invention. In addition, the terminology and phraseology used is for the purpose of describing exemplary embodiments and should not be considered limiting. Thus, the present invention is to be accorded the widest scope encompassing numerous alternatives, modifications and equivalents consistent with the principles and features disclosed. For purpose of clarity, details relating to technical material that is known in the technical fields related to the invention have not been described in detail so as not to unnecessarily obscure the present invention.
In the description and claims of the application, each of the words “comprise” “include” and “have”, and forms thereof, are not necessarily limited to members in a list with which the words may be associated. It should be noted herein that any feature or component described in association with a specific embodiment may be used and implemented with any other embodiment unless clearly indicated otherwise.
As used herein, the indefinite articles “a” and “an” mean “at least one” or “one or more” unless the context clearly dictates otherwise.
In some embodiments, a first inclined ramp 105 is adapted to enable the animal to pass onto a horizontal platform 106 that lies in the scanning region, area or aperture 150 and to eventually descend via a second inclined ramp 107. In other words, the animal enters the scanning region, area or aperture 150 from the left portion in the figure and exits the scanning region, area or aperture 150 at the right in the figure.
In some embodiments, the system 100 is enclosed within a food safe, environmentally protected enclosure 115 manufactured using materials such as, but not limited to, stainless steel and/or plastic. In some embodiments, the system 100 is surrounded with at least one radiation shielding enclosure. A control room is provided for one or more system operators to review the performance of the system 100 on one or more inspection workstations in data communication with the system 100. In various embodiments, the one or more inspection workstations are computing devices.
In some embodiments, the system 100 is configured for dual-plane scanning and comprises a first plurality of linear multi-focus X-ray sources 145a along with an associated first array of detectors 155a positioned or deployed around the scanning region, area or aperture 150 to scan the animal in a first imaging plane 142 and a second plurality of linear multi-focus X-ray sources 145b along with an associated second array of detectors 155b also positioned or deployed around the scanning region, area or aperture 150 to scan the animal in a second imaging plane 143. Thus, the system 100 is constructed in two separate planes 142, 143 with data combined together, at the one or more inspection workstations, to create a single reconstructed volume.
In some embodiments, the scanning region, area or aperture 150 has a substantially rectangular geometry or shape. In some embodiments, a value representative of an entire width of the scanning area 150 is within 85% of a value representative of an entire height of the scanning area 150. In some embodiments, the scanning region, area or aperture 150 has dimensions 1500 mm (width)×1800 mm (height). In alternate embodiments, the scanning region, area or aperture 150 has a substantially square or polygonal geometry or shape. In some embodiments, the first imaging plane 142 comprises, say, four linear multi-focus X-ray sources 145a separated from each other and positioned around or along a perimeter of the scanning region, area or aperture 150. In some embodiments, the second imaging plane 143 comprises, say, four linear multi-focus X-ray sources 145b separated from each other and positioned around or along the perimeter of the scanning region, area or aperture 150.
In some embodiments, as shown in
In some embodiments, the first and second imaging planes 142, 143 are disposed along a direction perpendicular to the direction of motion of the animal over the horizontal platform 106 and through the inspection region, area or aperture 150 during scanning. In embodiments, the first and second imaging planes 142, 143 are separated from each other, along the direction of motion of the animal during scanning, by a distance ‘d’ ranging from 100 mm to 2000 mm. Thus, the first plurality of linear multi-focus X-ray sources 145a and the associated first array of detectors 155a are deployed in the first imaging plane 142 while the second plurality of linear multi-focus X-ray sources 145b and the associated second array of detectors 155b are deployed in the second imaging plane 143.
In embodiments, the first plurality of linear multi-focus X-ray sources 145a are offset or displaced from the associated first array of detectors 155a, in the first imaging plane 142, by a distance d1 while the second plurality of linear multi-focus X-ray sources are offset or displaced from the associated second array of detectors 155b, in the second imaging plane 143, by a distance d2. In some embodiments, d1 is equal to d2. In various embodiments, the distances d1 and d2 range from 2 mm to 20 mm. It should be appreciated that the first and second array of detectors 155a, 155b are displaced from the respective planes of the first and second X-ray sources 145a, 145b so that X-rays from a source on one side of the scanning region, area or aperture 150 pass above the detector array adjacent to the source but interact in the detector array opposite to the source at the other side of the scanning region, area or aperture 150.
In an embodiment, the 3D stationary gantry X-ray CT imaging system 100 comprises a series of X-ray tubes operating in tandem, instead of a multi-focus X-ray source shown in
In some embodiments, as shown in
It should be appreciated that, in various embodiments, the controller 188 implements a plurality of instructions or programmatic code to a) ensure that the switching circuits 184 are controlled to fire in a predetermined sequence, and b) perform process steps corresponding to various workflows and methods described in this specification.
Referring to
In some embodiments, first, second and third supports 222a, 222b, 222c are deployed to support the anode 215 along a longitudinal axis. The first and second supports 222a, 222b are deployed at the two ends while the third support 222c is deployed at the center of the anode 215. In some embodiments, the first and second supports 222a, 222b also function as coolant feed-through units while the third support 222c enables high voltage feed-through. In some embodiments, the anode 215 supports an operating tube voltage in a range of 100 kV to 300 kV. In some embodiments, each electron gun, cathode or source/emission point 210 emits a tube current in a range of 1 mA to 500 mA depending on animal thickness and inspection aperture size: the larger the inspection aperture and the thicker the animal, the higher the required tube current.
For scanning livestock (for example, cows and buffaloes), a suitable optimization is a 225 kV tube voltage and a 20 mA beam current, with a total X-ray beam power of 4.5 kW. Coupled with tube filtration of a minimum of 3 mm of aluminum, this results in a dose to the animal in a range of 2 μSv (microsievert) to 20 μSv, and in embodiments, around 10 μSv. To put this in context, the typical individual dose to humans due to naturally occurring background radiation is 2 mSv/year (millisievert/year). An exposure of 10 μSv corresponds to 0.5% of one year of natural background radiation, or around 2 days of natural background radiation.
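As a check on the figures above, the beam power and dose-to-background comparison can be sketched in a few lines; the numerical values are the ones stated in the text, not additional data:

```python
# Sketch: beam power and background-dose comparison for a 225 kV / 20 mA scan,
# using the values quoted above.

tube_voltage_kv = 225.0   # tube voltage in kV
tube_current_ma = 20.0    # beam current in mA

# Total X-ray beam power: P = V * I
beam_power_kw = tube_voltage_kv * tube_current_ma / 1000.0  # 4.5 kW

dose_per_scan_usv = 10.0          # typical per-scan dose, microsievert
background_usv_per_year = 2000.0  # 2 mSv/year natural background

fraction_of_year = dose_per_scan_usv / background_usv_per_year  # 0.005 = 0.5%
days_equivalent = fraction_of_year * 365.0                      # ~1.8 days

print(f"beam power: {beam_power_kw} kW")
print(f"{fraction_of_year:.1%} of annual background, "
      f"~{days_equivalent:.1f} days of background radiation")
```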
In some embodiments, each electron gun 210 is configured to irradiate an area or focal spot on the anode 215 ranging from 0.5 mm to 3.0 mm in diameter. Specific dimensions of the focal spot are selected to maximize image quality and minimize heating of the anode 215 during X-ray exposure. The higher the product of tube current and tube voltage, the larger the focal spot is typically designed to be.
In some embodiments, a first inclined ramp 305 is adapted to enable the animal to pass onto a horizontal platform 306 that lies in the scanning region, area or aperture 350 and eventually pass down using a second inclined ramp 307. In other words, the animal enters the scanning region, area or aperture 350 from the left in the view 301b and exits the scanning region, area or aperture 350 at the right in the view 301b.
In some embodiments, the system 300 is enclosed within a food safe, environmentally protected enclosure 315 manufactured using materials such as, but not limited to, stainless steel, aluminum and/or plastic. In some embodiments, the system 300 is surrounded with at least one radiation shielding enclosure. In some embodiments, the system 300 has a multi-focus X-ray source 345 disposed in a plane around the scanning region, area or aperture 350. The source 345 comprises a plurality of X-ray source emission points, electron guns or cathodes 346 (also referred to as an electron gun array) around an anode 347. The plurality of X-ray source emission points 346 and the anode 347 are enclosed in a vacuum envelope or tube 310. In some embodiments, the source 345 comprises 200 to 500 X-ray source emission points 346 arranged around a single anode 347 that is held at positive high voltage with respect to the corresponding electron gun array 346. In some embodiments, tube voltage is maintained in a range of 120 kV to 200 kV with tube current in a range of 1 mA to 20 mA. In an embodiment, a single source 345 comprising a plurality of X-ray source emission points is employed for scanning small animals (such as, for example, sheep, pigs, and goats); while a plurality of linear multi-focus X-ray sources disposed around a scanning tunnel (such as, for example, shown in
An array of detectors 355 is also positioned or deployed around the scanning region, area or aperture 350 to scan the animal as it passes through the scanning region, area or aperture 350. In some embodiments, the scanning region, area or aperture 350 has a substantially rectangular geometry or shape. In some embodiments, the scanning region, area or aperture 350 has a substantially square or polygonal geometry or shape. In some embodiments, the scanning region, area or aperture 350 has a width ranging from 400 mm to 800 mm and a height ranging from 600 mm to 1000 mm. In an embodiment, as shown in
A control room may be provided for one or more system operators to review the performance of the system 300 on one or more inspection workstations in data communication with the system 300. Alternatively, mobile computing devices may be used to inspect image data and control system operation. In various embodiments, the one or more inspection workstations are computing devices. At least one controller, positioned within the one or more inspection workstations, is configured to control an activation and deactivation of each of the plurality of X-ray source emission points.
It should be appreciated that, in various embodiments, the controller implements a plurality of instructions or programmatic code to a) ensure that the plurality of X-ray source emission points are controlled to fire in a predetermined sequence, and b) perform process steps corresponding to various workflows and methods described in this specification.
During a scanning operation, each X-ray source point within a multi-focus X-ray source is switched on in turn; at least a portion of the X-rays pass through the animal, and the resultant projection data is collected for that one source point. When the exposure is complete, a different X-ray source point is switched on, for example, within a different multi-focus X-ray source (in embodiments that employ a plurality of linear multi-focus X-ray sources) to create a next X-ray projection. The scanning process continues until all X-ray sources have been fired/activated in a sequence that is configured to optimize a reconstructed X-ray image quality. In some embodiments, it is preferable to activate a non-adjacent source in the next part of the scanning sequence. In embodiments, it is preferable to activate a source positioned at approximately 20 to 90 degrees away from a currently active source point.
In embodiments employing a plurality of linear multi-focus X-ray sources, each source point within a first linear multi-focus X-ray source is switched on and then (only after going through each of the source points within the first linear multi-focus X-ray source) each source point within a second linear multi-focus X-ray source is switched on. In some embodiments employing a plurality of linear multi-focus X-ray sources, one source point within a first linear multi-focus X-ray source is switched on and subsequently, one source point within a second linear multi-focus X-ray source is switched on, thus, alternating back and forth (between the first and second linear multi-focus X-ray sources) until all source points have been activated.
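The two sequencing strategies described above, exhausting every point of one linear source before moving to the next versus alternating between sources after each exposure, can be sketched as ordering functions. The function and parameter names here are illustrative only, not taken from the specification:

```python
# Sketch of the two firing-sequence strategies described above.
# Each element of the returned list is a (source_index, point_index) pair
# naming which source point is switched on at that step.

def sequential_order(n_sources, points_per_source):
    """Fire every point of source 0, then every point of source 1, and so on."""
    return [(s, p) for s in range(n_sources) for p in range(points_per_source)]

def alternating_order(n_sources, points_per_source):
    """Alternate between sources after each exposure until all points fire."""
    return [(s, p) for p in range(points_per_source) for s in range(n_sources)]

# Two linear multi-focus sources with three points each:
print(alternating_order(2, 3))
# [(0, 0), (1, 0), (0, 1), (1, 1), (0, 2), (1, 2)]
```

Either ordering activates every source point exactly once; the choice affects how uniformly the projection angles are sampled over time, which matters when the animal is moving.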
In an embodiment, the system 300 comprises a series of X-ray source tubes operating in tandem, instead of the multi-focus X-ray source 345. In other words, the X-ray sources are a plurality of X-ray tubes and do not contain multiple source points.
While passing through the scanning region, area or aperture 350, the animal may move at an uncontrolled speed, especially if walking freely rather than being restrained, and may also move from side to side. Consequently, the X-ray projection data needs to be motion corrected prior to implementing or executing a back-projection algorithm. In some embodiments, this is enabled directly from the X-ray projection data itself by analyzing each set of data and forward projecting through the partially reconstructed X-ray data to determine where the new projection is most likely to have come from. However, this is computationally expensive and so, in some embodiments, it is advantageous to use a secondary sensor system to monitor the surface profile of the animal and thereby measure motion directly. This information can then be used to determine where each new X-ray projection should be back-projected into the 3D reconstructed image volume.
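A minimal sketch of the motion-correction idea, assuming the secondary sensor reports a per-projection lateral offset of the animal (all names and numerical values below are hypothetical):

```python
import numpy as np

# Sketch: shift each projection's detector coordinates by the animal offset
# measured by a secondary surface sensor, so that the projection is
# back-projected as if the animal had remained centred in the aperture.

def correct_projection_coords(u_coords_mm, animal_offset_mm):
    """Return detector u-coordinates compensated for lateral animal motion."""
    return u_coords_mm - animal_offset_mm

# Hypothetical example: five detector bins spanning -10..10 mm, with the
# animal swayed 3 mm sideways at the moment of this exposure.
u = np.linspace(-10.0, 10.0, 5)
print(correct_projection_coords(u, 3.0))  # each coordinate shifted by -3 mm
```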
Various types of 3D (three-dimensional) surface sensing technology may be used including, for example, point cloud optical and radar imaging sensors.
In some embodiments, the radar imaging or inspection system 360 is operated in a stepped frequency continuous wave (SFCW) radar scanning sequence or mode 400, as shown in
In parallel, outputs from all Rx transceiver elements 515 are mixed with the Tx frequency, at Rx amplifier and mixer elements 530, to generate a lower frequency signal that can be measured by an analogue-to-digital converter (ADC) 520 and transferred to internal memories of the FPGA 505. Further, signal processing may be done in the FPGA 505 to reduce data bandwidth, or alternatively all data can be transferred through a high-speed interface to a host-computing device for processing.
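As an illustration of the stepped frequency principle, the sketch below recovers the range to a single point reflector from complex responses measured at stepped frequencies via an inverse FFT. The start frequency, step size, step count, and target range are assumed values for illustration, not parameters from the specification:

```python
import numpy as np

# Sketch: stepped-frequency continuous wave (SFCW) range recovery.
# Measure the complex response at N stepped frequencies, then IFFT across
# frequency to obtain a coarse range profile.

c = 3e8                                # speed of light, m/s
n_steps, step_hz = 64, 50e6            # assumed: 64 steps of 50 MHz
freqs = 24e9 + np.arange(n_steps) * step_hz   # assumed 24 GHz start
target_range_m = 0.75                  # assumed distance to reflecting surface

# Ideal response of a point reflector: phase delay for the two-way path
response = np.exp(-2j * np.pi * freqs * 2 * target_range_m / c)

# IFFT across frequency steps yields the range profile; the peak bin gives
# the estimated range. Range bin size = c / (2 * N * step).
profile = np.abs(np.fft.ifft(response))
bin_size_m = c / (2 * n_steps * step_hz)      # ~0.047 m per bin
est_range_m = np.argmax(profile) * bin_size_m
print(f"estimated range ~ {est_range_m:.3f} m")
```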
In some embodiments, Tx and Rx transceiver elements 510, 515 employ circular polarization such that reflected waves return in an opposite polarization to the transmitted wave. This reduces cross talk between Tx and Rx transceiver elements 510, 515 thereby simplifying analogue front-end design as well as algorithmic complexity in image reconstruction.
Another view 602, along a direction perpendicular to the direction of motion of the animal through scanning region, area or aperture 650, shows a plurality of radar transceivers or transceiver modules 610, which may also be referred to as “cards” in some embodiments. Each of the transceivers 610 comprises a plurality of Tx and Rx elements (or analogue circuits) 612, 614. In some embodiments, each of the transceivers 610 comprises 8 Rx and 8 Tx elements 612, 614. In some embodiments, the Rx elements 614 are offset, in a vertical direction, by spacing of half an element from the Tx elements 612.
In some embodiments, the transmitter and receiver elements or analogue circuits 612, 614 with ADCs (Analog-to-Digital Converters) are soldered to the same PCB (Printed Circuit Board) as the antenna structures, with an overall FPGA for system control and data acquisition. Each of the transceivers 610 further comprises data transmission connectors 616 and a readout control circuit 618. Ribbon cables are used to transfer signals from one card to the next to allow flexibility in overall system configuration.
In accordance with some embodiments, each of the 3D X-ray computed tomography scanning systems of the present specification may be housed in a container that is located on the farm. When in use, doors at entry and exit ends of the container may be opened, the X-ray system powered up and scanning conducted by herding animals from the entry side of the container to the exit side of the container. In some embodiments, by reconciling RFID (Radio Frequency Identification) tags or other animal-specific IDs to the X-ray image data, quantitative information from an X-ray scan is associated back to individual animals to aid overall farm processes as well as food supply chain integrity processes. In embodiments, containerized 3D X-ray computed tomography scanning systems may be installed permanently at the farm, or a particular container may be transported using a truck or trailer from one location to another as required to service multiple farms.
In accordance with some embodiments, the 3D X-ray computed tomography scanning systems of the present specification may be supported on mobile, roadworthy, scanning platforms such as, for example, a truck, van and/or a trailer. This enables the system to be transported on public and private roads to a required farm scanning site, the necessary scans conducted and the system then driven off to another farm where the scanning process can be repeated.
It should be noted that, in alternate embodiments, 3D high-resolution imaging methods such as, for example, magnetic resonance imaging, may be substituted for X-ray computed tomography. In addition, in various alternate embodiments, rotating gantry and/or single, dual and multi-plane stationary gantry X-ray computed tomography methods may be used interchangeably.
In accordance with aspects of the present specification, 3D scan image data of an animal provides effective Z (atomic number) and density information (block 705) leading to insight related to a 3D structure (comprising bony structure, size of each muscle and location and amount of fat) of the animal (block 706). This enables a farmer to optimize a plurality of farming processes (block 708) such as, for example, calculating lean meat yield and thereby determining how best to optimize a go forward plan for the herd including how much exercise, feed, feed supplements and water to include in the plan for the animal.
In some embodiments, the 3D scan image data of the animal is analyzed to deliver objective metrics or measurement data for all muscle groups within the animal, on an individual basis, in order to determine eating quality (block 710). In embodiments, the metrics or measurement data are determined by analysis of intra-muscular fat (marbling) and inter-muscular fat. As is known, inter-muscular fat is the fat that surrounds a muscle, and typically lies between the muscle and the skin of an animal. In embodiments, the metrics or measurement data are determined by analysis of a ratio of intra-muscular fat (marbling) to tissue. The farmer can use this data to plan to increase overall eating quality and/or to improve the quality of selected muscle groups in the highest value part of the animal, thereby leading to improved sale price or valuation of the animal (block 712).
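For illustration, a marbling metric of the kind described can be sketched as a ratio of segmented voxel counts; the voxel counts and muscle-group names below are hypothetical:

```python
# Sketch: per-muscle-group marbling ratio from segmented CT voxel counts.
# Counts and group names are illustrative, not from the specification.

def marbling_ratio(fat_voxels, lean_voxels):
    """Ratio of intra-muscular fat voxels to total muscle-region voxels."""
    total = fat_voxels + lean_voxels
    return fat_voxels / total if total else 0.0

# Hypothetical per-muscle voxel counts from a segmented 3D CT volume
muscle_groups = {"striploin": (1200, 28800), "chuck": (900, 17100)}
for name, (fat, lean) in muscle_groups.items():
    print(f"{name}: marbling {marbling_ratio(fat, lean):.1%}")
# striploin: marbling 4.0%
# chuck: marbling 5.0%
```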
In some embodiments, further analysis of the 3D image data provides metrics, measurement data or information on animal health (block 715) such as, for example, the absolute and relative size of individual organs (such as kidneys, liver, heart and lungs), the presence or absence of cysts and tumors, the presence of chronic conditions such as pleurisy and the presence of foreign objects such as barbed wire and needles that may lead to infection. Collectively, this information on animal health leads to improved overall quality control in food safety (block 717).
Blocks 802, 804, 806 and 808 respectively represent functions/events related to genetic selection, importing semen, conception and birth of an animal for rearing at the farm. At step 810, an initial/early scan of the animal is taken soon after birth, such as within 0-36 hours after birth, and in some cases longer, or before the animal reaches an age of 6 months using a 3D X-ray computed tomography scanning system such as those described with reference to
In embodiments, 3D X-ray computed tomography scans are taken of an animal during various stages of development. For example, in embodiments, at a first stage of development, an animal may be in a first age range, beginning at a first start date and ending at a first end date. In embodiments, a first stage of development corresponds to an early stage. In embodiments, at a second stage of development, an animal may be in a second age range beginning at a second start date and ending at a second end date. In embodiments, a second stage of development corresponds to a mid-range stage. In embodiments, at a third stage of development, an animal may be in a third age range beginning at a third start date and ending at a third end date. In embodiments, the third stage of development corresponds to a late stage. In embodiments, at a fourth stage of development, an animal may be in a fourth age range beginning at a fourth start date and ending at a fourth end date.
In embodiments, the first start date corresponds to the date of birth of the animal and is before each of the first end date, the second start date, the second end date, the third start date, the third end date, the fourth start date, and the fourth end date.
In embodiments, the first end date is after the first start date and before each of the second start date, the second end date, the third start date, the third end date, the fourth start date, and the fourth end date.
In embodiments, the second start date is after each of the first start date and the first end date and before each of the second end date, the third start date, the third end date, the fourth start date, and the fourth end date.
In embodiments, the second end date is after each of the first start date, the first end date and the second start date and before each of the third start date, the third end date, the fourth start date, and the fourth end date.
In embodiments, the third start date is after each of the first start date, the first end date, the second start date, and the second end date and before each of the third end date, the fourth start date, and the fourth end date.
In embodiments, the third end date is after each of the first start date, the first end date, the second start date, the second end date, and the third start date and before each of the fourth start date and the fourth end date.
In embodiments, the fourth start date is after each of the first start date, the first end date, the second start date, the second end date, the third start date, and the third end date and before the fourth end date.
In embodiments, the fourth end date is after each of the first start date, the first end date, the second start date, the second end date, the third start date, the third end date, and the fourth start date.
In embodiments, there may be n stages of development, with nth start dates and nth end dates, appearing in chronological order as described above. In embodiments, the first end date may be on the same day as, or one day before, the second start date. In embodiments, the second end date may be on the same day as, or one day before, the third start date. In embodiments, the third end date may be on the same day as, or one day before, the fourth start date. In embodiments, the fourth end date may be on the same day as, or one day before, the nth start date. It should be noted that the various age ranges of development are dependent upon the animal species.
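The chronological ordering constraints above can be summarized in a short validation sketch, assuming each stage is represented by a start and end date and that a stage may begin on, or after, the day the previous stage ends:

```python
from datetime import date

# Sketch: validate that n development-stage windows appear in the
# chronological order described above. Each stage's end date must not
# precede its start date, and each later stage must start on or after
# the day the previous stage ends.

def stages_in_order(stages):
    """stages: list of (start_date, end_date) tuples in stage order."""
    if any(end < start for start, end in stages):
        return False
    return all(nxt_start >= prev_end
               for (_, prev_end), (nxt_start, _) in zip(stages, stages[1:]))

# Hypothetical two-stage example for a given species
early = (date(2023, 1, 1), date(2023, 6, 30))
mid = (date(2023, 6, 30), date(2023, 12, 31))  # starts the day stage 1 ends
print(stages_in_order([early, mid]))  # True
```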
At step 814, a 3D X-ray computed tomography scan of the animal is acquired after the animal completes a first stage (block 812) in development, that is, when the animal is in a first age range. The scan at step 814 is directed towards determining any abnormalities or health conditions (such as, for example, presence or absence of cysts, tumors, pleurisy and foreign objects) that may affect the ultimate value of the animal.
At step 818, another 3D X-ray computed tomography scan of the animal is acquired after the animal completes a mid-stage (block 816) in development, that is, when the animal is in a second age range. The quality control scan at step 818 enables driving optimization of the animal and the herd as a whole. It is at this stage that significant transformation in valuation can be achieved of the animal and the herd.
At step 822, yet another scan of the animal is acquired once the animal has been reared through late-stage farming (block 820) and is ready to leave the farm, that is, when the animal is in a third age range. The 3D X-ray computed tomography scan, at step 822, is used to generate a complete analysis of the animal (to generate metrics or measurement data such as, for example, lean meat yield, localized eating quality and health) which together describe the animal sufficiently for presentation at auction and so achieve a final purchase price. In various embodiments, data from scan steps 818 and 822 is evaluated/analyzed by a plurality of programmatic code or instructions in order to determine data indicative of a value of the animal based on at least one of a plurality of pre-sale parameters including lean meat yield, ratio of intra-muscular fat to tissue, amount of inter-muscular fat, absolute and relative size of individual organs, muscle volume, number of ribs, and presence or absence of cysts, tumors, pleurisy and foreign objects. In various embodiments, the plurality of programmatic code or instructions generate data indicative of lean meat yield, which is associated with a first range of values; generate data indicative of a ratio of intra-muscular fat to tissue, which is associated with a second range of values; generate data indicative of an amount of inter-muscular fat, which is associated with a third range of values; generate data indicative of absolute and relative size of individual organs, which is associated with a fourth range of values; generate data indicative of muscle volume, which is associated with a fifth range of values; generate data indicative of number of ribs, which is associated with a sixth range of values; and generate data indicative of presence or absence of cysts, tumors, pleurisy and foreign objects.
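One simple way such pre-sale parameters could be combined into a single value indicator is a weighted score; the weights, parameter names, and metric values below are purely illustrative assumptions, not taken from the specification:

```python
# Sketch: combine normalised pre-sale metrics into one value indicator.
# All weights and field names are hypothetical.

PRE_SALE_WEIGHTS = {
    "lean_meat_yield": 0.4,       # fraction of carcass, e.g. 0.62
    "marbling_ratio": 0.3,        # intra-muscular fat to tissue ratio
    "inter_muscular_fat": -0.2,   # penalise excess inter-muscular fat
    "health_penalty": -0.1,       # 1.0 if cysts/tumors/foreign objects found
}

def animal_value_score(metrics):
    """Weighted sum of normalised pre-sale metrics (missing metrics = 0)."""
    return sum(w * metrics.get(k, 0.0) for k, w in PRE_SALE_WEIGHTS.items())

score = animal_value_score({
    "lean_meat_yield": 0.62,
    "marbling_ratio": 0.05,
    "inter_muscular_fat": 0.10,
    "health_penalty": 0.0,
})
print(f"value score: {score:.3f}")  # 0.4*0.62 + 0.3*0.05 - 0.2*0.10 = 0.243
```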
It is known that transfer of animals from the farm to sale yards is stressful for the animal and expensive for the farmer. Therefore, the ability to conduct virtual auctions with electronic data, including that from the 3D X-ray computed tomography data, is beneficial.
Following sale and transportation (blocks 824, 826 respectively) of the animal from the farm, a 3D X-ray computed tomography scan is acquired at a feedlot, at step 828. The scan at step 828 is directed towards performing an incoming check of the animal post auction to validate the electronic data that was presented at auction and also to check on animal health where animals from multiple herds are being combined. Thus, data from the scan at step 828 is used to determine one or more of a plurality of after-sale parameters. In embodiments, the validation of the electronic data involves comparing at least a portion of the plurality of pre-sale parameters with at least a portion of a plurality of after-sale parameters. In embodiments, the plurality of after-sale parameters include lean meat yield, ratio of intra-muscular fat to tissue, amount of inter-muscular fat, absolute and relative size of individual organs, muscle volume, number of ribs, and presence or absence of cysts, tumors, pleurisy and foreign objects.
At step 832, a final scan of the animal is conducted at the end of the feedlot process (block 830) where the animal has generally been fattened prior to slaughter. This final scan provides initial data to enable planning production schedules/processes (block 834) and hence optimize a factory process and, thereafter, final dispatch to customers (block 836).
Persons of ordinary skill in the art should appreciate that, in some embodiments, the scan information generated on an animal at a particular stage in development is aggregated with information from other animals at similar and different stages in development to determine, using methods such as (for example) artificial intelligence and big data analytics, a predicted outcome for the animal as well as an impact on overall development of a herd within a particular farm and also between different farms.
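At its simplest, such aggregation amounts to pooling per-animal scan metrics by development stage; the record structure and field names below are assumed for illustration:

```python
from statistics import mean

# Sketch: aggregate per-animal scan metrics across a herd to compare
# development between scan stages. Field names are hypothetical.

herd_scans = [
    {"animal_id": "A1", "stage": 2, "lean_meat_yield": 0.60},
    {"animal_id": "A2", "stage": 2, "lean_meat_yield": 0.64},
    {"animal_id": "A1", "stage": 3, "lean_meat_yield": 0.63},
]

def herd_mean_yield(scans, stage):
    """Mean lean meat yield across all scans taken at a given stage."""
    values = [s["lean_meat_yield"] for s in scans if s["stage"] == stage]
    return mean(values) if values else None

print(herd_mean_yield(herd_scans, 2))  # ~ 0.62
```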
In embodiments, multi-energy computed tomography and transmission X-ray screening may be employed for the purposes of the present specification. In embodiments, the use of multi-energy transmission X-ray screening enables improved Zeff recovery in single-view and stereo-view imaging systems leading to improved chemical lean accuracy and improved location of bone structure especially in high attenuation regions. In addition, the use of multi-energy transmission X-ray screening enables improved Zeff recovery for use in foreign object detection and final product quality control.
In embodiments, the technologies described above may be integrated with meat processing and plant safety practices. In embodiments, the present specification employs the use of software to link three-dimensional imaging and multi-energy meat processing technology to plant operations. In embodiments, the present specification employs the use of software to link three-dimensional imaging and multi-energy meat processing technology to farming practices. In embodiments, the present specification employs the use of modified security technology, such as personnel and baggage screening systems, such that these technologies can be employed within the meat industry across several applications.
In some embodiments, the system 900 is enclosed within a food safe, environmentally protected enclosure 915 manufactured using materials such as, but not limited to, stainless steel and/or plastic. In some embodiments, the system 900 is surrounded with at least one radiation shielding enclosure or tunnel 920. A control room 925 is provided for one or more system operators to review the performance of the system 900 on one or more inspection workstations 927. A service access 930 is also provided to the system 900. In various embodiments, the one or more inspection workstations 927 are computing devices.
In some embodiments, the system 900 is configured for dual-plane scanning of carcasses and comprises a first plurality of linear multi-focus X-ray sources along with an associated first array of detectors positioned or deployed around an inspection region, area or aperture to scan carcasses in a first imaging plane 942 and a second plurality of linear multi-focus X-ray sources along with an associated second array of detectors also positioned or deployed around the inspection region, area or aperture to scan carcasses in a second imaging plane 943. In some embodiments, the first and second imaging planes 942, 943 are along a direction parallel to the direction of motion of the carcasses along the conveyor rail 910. In embodiments, the first plurality of linear multi-focus X-ray sources are offset from the associated first array of detectors, in the first imaging plane 942, by a distance d1 while the second plurality of linear multi-focus X-ray sources are offset from the associated second array of detectors, in the second imaging plane 943, by a distance d2. In some embodiments, d1 is equal to d2. In various embodiments, the distances d1 and d2 range from 1 mm to 10 mm.
In some embodiments, as shown in
The inspection area or aperture 1050 is bounded by a food safe environmental enclosure or housing 1015. The inspection area or aperture 1050 is surrounded by an array of X-ray detectors 1055a positioned in the first imaging plane 1042 such that the X-ray detectors 1055a lie between the linear multi-focus X-ray sources 1045a and the housing 1015. The array of detectors 1055a is offset, by a distance of 1 mm to 10 mm from the plane of the X-ray sources 1045a such that X-rays from a multi-focus X-ray source on one side of the inspection aperture 1050 can pass above the adjacent X-ray detectors and interact with X-ray detectors on an opposing side of the inspection area 1050, thereby forming a transmission image through a carcass under inspection.
The second cross-sectional view 1040b is along the direction parallel to the motion of carcasses along the conveyor rail 1010 and perpendicular to a second imaging plane. In embodiments, the second imaging plane also comprises a plurality of separate linear multi-focus X-ray sources 1045b arranged around the inspection area 1050. In some embodiments, the second imaging plane 1043 comprises, say, five linear multi-focus X-ray sources 1045b separated from each other and positioned around or along the perimeter of the inspection area 1050. In some embodiments, the five linear multi-focus X-ray sources 1045b (in the second imaging plane 1043) are disposed or positioned so as to fill the gaps separating the five linear multi-focus X-ray sources 1045a (in the first imaging plane 1042).
The inspection area or aperture 1050 is surrounded by another array of X-ray detectors 1055b positioned in the second imaging plane 1043 such that the X-ray detectors 1055b lie between the linear multi-focus X-ray sources 1045b and the housing 1015. The array of detectors 1055b is also offset, by a few millimeters, from the plane of the X-ray sources 1045b such that X-rays from a multi-focus X-ray source on one side of the inspection aperture 1050 can pass above the adjacent X-ray detectors and interact with X-ray detectors on an opposing side of the inspection area 1050, thereby forming a transmission image through the carcass under inspection.
The third cross-sectional view 1040c illustrates a composite representation of the first and second imaging planes 1042, 1043 as the carcass moves through the system 1000. The view 1040c shows a complete locus of multi-focus X-ray source points about the inspection area 1050 as required to form a high-quality 3D tomographic image of the carcass. A small region 1060 of missing data is observable adjacent to a hook on which the carcass is transported. Accordingly, an image reconstruction algorithm of the system 1000 is configured to minimize an impact of the missing data in a final image.
During a scanning operation, each X-ray source point within an individual multi-focus X-ray source (1045a, 1045b) is switched on in turn and projection data through the carcass is collected for that one source point. When the exposure is complete, a different X-ray source point is switched on, say, for example, within a different multi-focus X-ray source in the system 1000 to create a next X-ray projection. The scanning process continues until all X-ray sources have been fired in a sequence that is configured to optimize a reconstructed X-ray image quality.
In some embodiments, the inspection area 1050 has a cross-sectional shape which is a composite of a first rectangular shape surmounted by a second triangular shape. In some embodiments, the first rectangular cross-sectional shape has an exemplary size defined by a width that is less than 40%, preferably less than 20%, of a height. In some embodiments, the first rectangular cross-sectional shape has an exemplary size (area) of 1500 mm (width)×3900 mm (height). In some embodiments, the area of the second triangular shape is substantially less than, or negligible compared to, the area of the first rectangular shape. Therefore, for practical purposes, the exemplary size (area) of 1500 mm (width)×3900 mm (height) for the first rectangular cross-sectional shape is representative of the composite, that is, the inspection area 1050. It should be appreciated that this size (area) of 1500 mm (width)×3900 mm (height) of the inspection area or aperture 1050 is suited to scanning beef carcasses, in some embodiments.
According to aspects of the present specification, a size of an inspection region can be configured for specific carcass-based applications by deploying a specific imaging geometry comprising a) selecting the number and position of multi-focus X-ray sources (such as, sources 1045a, 1045b) to be used and b) configuring the array of X-ray detectors (such as, detectors 1055a, 1055b) to suit the X-ray source positions. The specific imaging system geometry is passed to the X-ray 3D image reconstruction algorithm where a one-time re-calculation of weighting functions is conducted to ensure accurate image reconstruction. The embodiments of
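As a sketch of how such an imaging geometry might be described and passed to reconstruction, the following uses an inverse-distance weight as a stand-in for the actual weighting functions, which the specification does not detail; the dataclass fields and the weighting rule are assumptions for illustration only:

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class ImagingGeometry:
    source_points: tuple    # (x, y) of each X-ray source point, in mm
    detector_points: tuple  # (x, y) of each detector element, in mm

def recompute_weights(geometry):
    """One-time weight table per (source, detector) pair. Inverse
    distance is a placeholder for the real, unspecified weighting
    functions used by the 3D reconstruction algorithm."""
    return [[1.0 / math.hypot(sx - dx, sy - dy)
             for dx, dy in geometry.detector_points]
            for sx, sy in geometry.source_points]

geom = ImagingGeometry(source_points=((0.0, 0.0),),
                       detector_points=((3.0, 4.0),))
weights = recompute_weights(geom)  # source-detector distance is 5 mm
```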
For example,
In accordance with an aspect of the present specification, the inspection area or aperture 1150 has a polygonal geometry or shape to approximate a round or circular cross-section. The polygonal shape or geometry is suited to scanning lamb, pig and goat carcasses. In some embodiments, the inspection area or aperture 1150 has a maximum width of 1500 mm and a maximum height of 2000 mm. In some embodiments, the inspection area or aperture 1150 has a maximum width that is less than a maximum height by at least 10%, preferably by at least 20%.
In some embodiments, the inspection area or aperture 1150 is bounded by a food safe environmental enclosure or housing 1115. The inspection area or aperture 1150 is surrounded by an array of X-ray detectors 1155a positioned in the first imaging plane such that the X-ray detectors 1155a lie between the linear multi-focus X-ray sources 1145a and the housing 1115. The array of detectors 1155a is offset, by a few millimeters, from the plane of the X-ray sources 1145a such that X-rays from a multi-focus X-ray source on one side of the inspection aperture 1150 can pass above the adjacent X-ray detectors and interact with X-ray detectors on an opposing side of the inspection area 1150, thereby forming a transmission image through a carcass under inspection.
The second cross-sectional view 1140b is along the direction parallel to the motion of carcasses along the conveyor rail 1110 and perpendicular to a second imaging plane. In embodiments, the second imaging plane also comprises a plurality of separate linear multi-focus X-ray sources 1145b arranged around the inspection area 1150. In some embodiments, the second imaging plane comprises, say, three linear multi-focus X-ray sources 1145b separated from each other and positioned along the perimeter of the inspection area 1150. In some embodiments, the three linear multi-focus X-ray sources 1145b (in the second imaging plane) are disposed or positioned so as to fill the gaps separating the three linear multi-focus X-ray sources 1145a (in the first imaging plane).
The inspection area or aperture 1150 is surrounded by another array of X-ray detectors 1155b positioned in the second imaging plane such that the X-ray detectors 1155b lie between the linear multi-focus X-ray sources 1145b and the housing 1115. The array of detectors 1155b is also offset, by a few millimeters, from the plane of the X-ray sources 1145b such that X-rays from a multi-focus X-ray source on one side of the inspection aperture 1150 can pass above the adjacent X-ray detectors and interact with X-ray detectors on an opposing side of the inspection area 1150, thereby forming a transmission image through the carcass under inspection.
The third cross-sectional view 1140c illustrates a composite representation of the first and second imaging planes as the carcass moves through the system 1100. The view 1140c shows a complete locus of multi-focus X-ray source points about the inspection area 1150 as required to form a high-quality 3D tomographic image of the carcass. A small region 1160 of missing data is observable adjacent to a hook on which the carcass is transported. Accordingly, an image reconstruction algorithm of the system 1100 is configured to minimize an impact of the missing data in a final image.
As another example,
The figure also shows a plurality of first structures 1270 for enabling heat dissipation from the plurality of X-ray sources 1245 and at least one second structure 1275 for enabling heat dissipation from and also for providing voltage supply to the plurality of X-ray sources 1245. In embodiments, the first structure 1270 is designed to maximize mechanical integrity and heat conductivity. The at least one second structure 1275 comprises a thermally conductive element to dissipate heat from an anode region and also a metal rod that passes through its center to supply voltage.
In accordance with an aspect of the present specification, the inspection region, area or aperture 1250 has a substantially non-circular geometry or shape such as rectangular or square, for example. The rectangular or square shape or geometry is suited to scanning whole poultry as well as beef, lamb, pig and goat carcass sections during the de-boning process. In some embodiments, the inspection area or aperture 1250 has a size of 600 mm (width)×450 mm (height).
In some embodiments, first, second and third supports 1322a, 1322b, 1322c are deployed to support the anode 1315 along a longitudinal axis. The first and second supports 1322a, 1322b are deployed at two ends while the third support 1322c is deployed at the center of the anode 1315. In some embodiments, the first and second supports 1322a, 1322b also function as coolant feed-through units while the third support 1322c enables high voltage feed-through. In some embodiments, the anode 1315 supports an operating tube voltage in a range of 100 kV to 300 kV. In some embodiments, each electron gun, cathode or source/emission point 1310 emits a tube current in a range of 1 mA to 500 mA depending on carcass thickness and inspection area, aperture or size: the larger the inspection aperture and the thicker the carcass, the higher the required tube current.
In some embodiments, each electron gun 1310 is configured to irradiate an area or focal spot on the anode 1315 ranging between 0.5 mm and 3.0 mm in diameter. Specific dimensions of the focal spot are selected to maximize image quality and minimize heating of the anode 1315 during X-ray exposure. The higher the product of tube current and tube voltage, the larger the focal spot is typically designed to be.
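The stated dependence of tube current on aperture size and carcass thickness can be captured by a monotone rule of thumb. The scaling constant below is invented for illustration, and a physically grounded model would be exponential in thickness (Beer-Lambert attenuation), not linear; only the 1 mA to 500 mA clamp comes from the specification:

```python
def required_tube_current_ma(aperture_mm, thickness_mm,
                             k=0.002, min_ma=1.0, max_ma=500.0):
    """Larger aperture and thicker carcass -> higher tube current,
    clamped to the 1 mA - 500 mA operating range stated above.
    k is a hypothetical scaling constant."""
    return max(min_ma, min(max_ma, k * aperture_mm * thickness_mm))

required_tube_current_ma(600, 100)    # 120.0 mA
required_tube_current_ma(1500, 300)   # clamped to 500.0 mA
```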
In accordance with aspects of the present specification, the 3D stationary gantry X-ray CT imaging system 100 includes a plurality of design features and fabrication methods of a CT tube having improved performance and stability, both from a physics and a mechanical standpoint, in addition to reduced production costs. In embodiments, the housing is fabricated from stainless steel and is formed using hydroforming as opposed to metal stamping. In an embodiment, the hydroforming manufacturing method uses high pressure fluid to press a material into a mold to form a desired shape. Metal stamping, in contrast, uses a custom male mold and a custom female mold to press a material into a desired shape. Hydroforming offers many advantages over metal stamping. For example, there is less material waste in the forming process (on the order of 0-10%), whereas stamping typically wastes 20% or more. Further, hydroforming has a lower upfront cost since it requires just one custom mold as opposed to two molds in stamping. This also contributes to a shorter production lead time and reduced cost for volume production of the parts. In addition, hydroforming provides the capability for forming more intricate shapes and features, often features that would be impossible to create in stamping. Even with the added complexity, hydroformed parts are typically manufactured at a faster rate, which also contributes to shorter manufacturing time once the system is in production. Still further, hydroforming provides better surface finishes because water, rather than another metal part surface, forms the material, which translates to better stability in the CT tube when high voltage is applied. Typically, hydroformed parts also have greater strength than stamped parts, because the even distribution of compressive forces from the liquid during forming usually results in a more rigid part.
Stamping has a greater potential to cause formed sheet material to thin out to an undesirable thickness, with weaker strength attributes, in certain areas. Because the tube is evacuated and must withstand atmospheric pressure while maintaining a relatively compact shape and low weight, hydroforming contributes positively to the overall uniformity of the design execution. Still further, hydroformed parts typically experience less material spring-back, resulting in more accurate and consistent geometries, which is critical for the CT tube housing. It should be noted that the hydroforming process does not produce material thicknesses or flatness as uniform as might be expected from machining.
In accordance with some embodiments, a CT multi-energy detector module consists of a printed circuit board (PCB), electrical components soldered onto the PCB to create a printed circuit board assembly (PCBA), and a detector crystal (CdTe or CdZnTe) assembled onto the PCBA. The final processing steps to complete the detector module assembly include attaching a high-voltage flex circuit (HV Flex) and adding a protective coating.
In accordance with aspects of the present specification, 3D scan image data of a carcass provides effective Z (atomic number) and density information (block 1405) leading to insight related to the 3D structure (comprising bone, fat and tissue structure) of the carcass (block 1406) and therefore may be used to drive a system for automatic cutting (block 1407) of the carcass based on its structure. It is known, for example, that lamb carcasses typically have 8 ribs, but sometimes a lamb may have just 7 or even 9. To continue this example, in order to plan optimal output from an abattoir, it must be determined how many individual lamb chops are required as opposed to racks of lamb, where a rack typically comprises 7 ribs. Therefore, a carcass may yield 1 rack; 1 rack and 1 chop; or 1 rack and 2 chops. The decision on whether the carcass should be processed into individual chops or into rack plus chop(s) is ideally made prior to the start of a day's production. In some embodiments, therefore, the use of 3D imaging can drive optimal production planning (block 1408) and establish a correct cutting sequence for one or more automated cutting equipment. As a non-limiting example,
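The rack-versus-chop decision described above reduces to simple integer arithmetic on the rib count; the function below is a minimal sketch of that planning step, assuming a 7-rib rack as stated:

```python
def plan_lamb_cuts(rib_count, rack_size=7):
    """Split a rib count into full racks plus leftover individual chops."""
    racks, chops = divmod(rib_count, rack_size)
    return racks, chops

# 7 ribs -> (1, 0): one rack
# 8 ribs -> (1, 1): one rack and one chop
# 9 ribs -> (1, 2): one rack and two chops
```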
In some embodiments, the 3D scan image data of the carcass can also be used to determine eating quality (block 1410) in 3D within the carcass as a whole. It is known that the densities of fat and muscle are dissimilar. Therefore, they appear at different grey levels in the reconstructed X-ray image. Metrics of eating quality in beef, for example, are determined by a) a ratio of intra-muscular fat to tissue (marbling) as well as b) an amount of inter-muscular fat. Analysis of eating quality through these metrics, at each point in each muscle, determines a first amount or portion of each muscle within the carcass that will be destined for highest value output, a second amount or portion that will be destined for standard output and a third amount or portion that will be destined for low value output. This analysis drives the overall valuation (block 1412) of the carcass and ensures that farmers can be remunerated fairly for producing high quality animals and not simply on carcass weight or lean meat yield (the percentage of meat, fat and bone in the carcass).
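The three-tier classification described above can be sketched as a threshold rule on the two stated metrics. The threshold values below are hypothetical; calibrated grading standards (such as MSA) use different, validated scales:

```python
def grade_muscle_region(imf_ratio, inter_fat_fraction,
                        imf_high=0.06, imf_low=0.02, inter_max=0.15):
    """Classify a muscle region into 'high', 'standard' or 'low' value
    output from its intra-muscular fat (marbling) ratio and its
    inter-muscular fat fraction. Thresholds are illustrative only."""
    if imf_ratio >= imf_high and inter_fat_fraction <= inter_max:
        return "high"
    if imf_ratio >= imf_low:
        return "standard"
    return "low"
```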
In some embodiments, further analysis of the 3D image data provides information on carcass/animal health (block 1415), for example the presence of foreign objects such as syringe needles and barbed wire inclusions, and also the presence of cysts and tumors, oversized organs, pleurisy and other common diseases. Collectively, this information also drives carcass valuation since an unhealthy carcass will be diverted to a low value food chain while simultaneously improving overall quality control in food safety (block 1417).
At step 1502, an animal is processed to remove skin, offal, extremities and trim waste. At step 1504, full carcass scanning or inspection is conducted while a temperature of the carcass is in a range of 10 to 50 degrees Celsius, and preferably is greater than 10 degrees Celsius, using a 3D X-ray computed tomography scanning system such as those described with reference to
Consequently, at step 1506, non-food products of the carcass are sent to alternative processing streams. At step 1508, scanning is conducted of offal and other by-products to provide further input to animal health measurements (for example, inspection of individual organs for abnormalities and presence or absence of cysts, tumors, pleurisy and foreign objects). This can again affect carcass health, carcass valuation and subsequent production process planning. Thereafter, at step 1510, the carcass is sent for storage in a cool room that is maintained at a temperature of less than 15 degrees Celsius and preferably at about 12 degrees Celsius. Production requirements are planned, at step 1512, based on cold carcass inventory.
Now, at step 1514, full scanning of the carcass is conducted once the carcass has been stored in the cold room for a period ranging from 24 to 36 hours. At this point, the carcass will have settled into a rigid shape and re-imaging with the 3D X-ray computed tomography system ensures that the most accurate scan image data, indicative of the bone, fat and tissue structure and, therefore, of areas of contiguous meat of a predefined quality level (determined by, for example, ratio of intra-muscular fat to tissue and amount of inter-muscular fat), is sent to automated cutting systems that are used to do initial carcass segmentation into smaller pieces for more effective processing in a boning room. At step 1516, the carcass is sent to the boning room and thereafter, at step 1518, the automated cutting systems perform major carcass cuts to segment the carcass to manageable sizes for final dissection.
Next, at step 1520, in some embodiments, a 3D X-ray screening system with smaller inspection area, aperture, tunnel or region (such as that of the screening system of
At step 1524, in some embodiments, the 3D X-ray screening system with smaller inspection area, aperture, tunnel or region is used to scan the meat and the scan image data is analyzed to determine measurements related to individual dissected cuts, such as a T-bone or rib-eye steak, for key quality metrics such as eating quality, fat thickness and presence of foreign objects including bone fragments. The amount of meat remaining on the bone after de-boning is also determined. If excess meat remains, the bone may be sent back for further processing to extract the remaining meat into the food chain. Subsequently, at step 1526, a quality control function is performed to ensure final product conformance to customer requirements and then, at step 1528, individual meat products are packaged.
Next, at step 1530, a quality control scanning is performed of individual cuts following packaging. This inspection is targeted towards looking for foreign objects as well as for measures such as fat thickness surrounding a piece of steak, for example, in order to ensure that customer requirements have been met. In some embodiments, this step is done with a 3D X-Ray CT system (e.g.
Now, at step 1534, an entire box of packaged meat is scanned through the 3D X-ray computed tomography system with a smaller inspection area, aperture, tunnel or region to facilitate a final quality control function. During the final quality control function, at step 1536, a packing list to be given to the customer is compared against the actual contents of the box using automated analysis methods, such as deep learning methods, for example, to validate that the correct number of each type of product are in the box with the desired eating quality, shape and size specifications wherein the eating quality is determined based on at least one of a ratio of intra-muscular fat to tissue and an amount of inter-muscular fat. Finally, at step 1538, the boxed product is dispatched to the customer.
In embodiments, steps 1504, 1508, 1514, 1520, 1524, 1530 and 1534 highlight processes where 3D X-ray carcass inspection adds value to improving overall abattoir production operation.
In embodiments, the common communications/data network 1628 enables storage and retrieval of data in real-time from the database 1610 thereby providing a rapid search facility in order to store and retrieve data.
The common communications/data network 1628 also facilitates transmission of image data from the sensing elements (such as, but not limited to, the 3D X-Ray tomographic scanners 1602, the 2D X-Ray tomographic scanners 1604, the hyperspectral and fluorescence scanners 1606, and the handheld devices 1608), in real-time, to the algorithm processing units that can analyze the data from said sensing elements to generate information required for optimal operation of the meat production process.
The common communications/data network 1628 also enables the data from the sensing elements to be passed in real-time to automated cutting systems employed in the meat production process as well as to human operators to direct cutting of carcasses and/or primals into retail cuts on a carcass-by-carcass basis. The common communications/data network 1628 also enables the data from the sensing elements employed in the meat processing plant to be analyzed by automated quality control processes 1626 and human quality control staff to ensure accurate processing and food safety standards. In an embodiment, the common communications/data network 1628 provides means for real-time display of production metrics and other data (such as financial reports) that support meat production plant management in delivering the highest possible productivity from the plant.
Referring to
A hyperspectral camera generates hyperspectral data which comprises a plurality of different wavelengths detected at each pixel location. Accordingly, instead of a given pixel having a single color value assigned thereto, hyperspectral data comprises a plurality of detected wavelengths at every pixel location. The plurality of wavelengths includes one or more wavelengths in the range of 100 nm to 15,000 nm or any increment or subrange of values therein. The resulting image therefore comprises more than one wavelength, from a spectral continuum, detected at each pixel.
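Concretely, hyperspectral data forms a three-dimensional cube of (rows, columns, bands) rather than a 2D image with one value per pixel; the band count below is an arbitrary example:

```python
rows, cols, bands = 4, 5, 32   # 32 wavelength bands is an arbitrary example

# One reflectance value per band at every pixel: a rows x cols x bands cube.
cube = [[[0.0] * bands for _ in range(cols)] for _ in range(rows)]

spectrum = cube[2][3]          # full spectrum at pixel (row 2, col 3)
assert len(spectrum) == bands  # many samples per pixel, not a single value
```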
In an embodiment, status information displayed by the inspection workstations/dashboard 1612 comprises at least one of: real-time notification of any package mis-labelling or incorrect shippable carton contents; real-time notification and location of any animal health defects identified by any sensing element or human operator within the plant; real-time production data including output over adjustable time scales (e.g. current shift, day, week, month or year); real-time plan variance; real-time notification of areas of production backlog or product non-conformity that require management action; real-time financial data on retail product value based on objective measurement from suitable sensors within the plant; and other relevant data such as, but not limited to, staff utilization, staff efficiency and work accuracy.
In an embodiment, the present specification provides a method of identifying the locations of all staff working in a meat processing plant in real time, by providing each member of the staff with Wi-Fi, GPS or other suitable location sensors. Referring to
In an embodiment, the present specification provides an augmented reality based method for achieving optimal cutting of carcasses, primals and retail cuts in a meat processing plant.
Referring to
In an embodiment, the automated, real-time, carcass valuation algorithms 1620 identify a carcass, or an item derived from a carcass, as being contaminated (for example, by using hyperspectral and fluorescence scanners 1606). Carcass valuation algorithms 1620 also identify the products (primals and cuts) derived from the same carcass as the contaminated item and mark all such products for de-contamination or further analysis depending on a type of contamination.
In an embodiment, the automated, real-time, carcass valuation algorithms 1620 also identify health defects in carcasses, animal offal, and primals. For example, pleurisy; metal contamination from sources such as, but not limited to, fence wire or syringe needles; and tumors or cysts may be identified in carcasses. In addition, tumors, cysts, enlarged organs, and worms may be identified in offal by using, for example, hyperspectral and fluorescence scanners 1606, 3D X-Ray tomography scanners 1602, and 2D X-Ray tomography scanners 1604. Further, worm nodules, tumors and cysts may be identified in primals; and discoloration, worms, tumors, and cysts may be identified in retail cuts being processed in the meat processing plant by using, for example, 3D X-ray tomographic imaging. In another embodiment, the automated, real-time, carcass valuation algorithms 1620 also identify the 3D spatial location of bone structure, muscles, inter-muscular fat or health defects within carcasses and primals in order to drive automated cutting equipment and to direct human operators, for example, by using 3D X-ray computed tomography image sensors.
In an embodiment, the automated, real-time product quality check and validation algorithms 1626 identify meat quality spatially distributed within a carcass, primal, retail cut or packaged product against suitable grading standards, such as the Australian MSA standard or the USDA meat quality standard, by using imaging data obtained from sensing devices employed in the meat processing plant, such as, but not limited to, 3D X-Ray tomography scanners 1602, 2D X-Ray tomography scanners 1604, hyperspectral and fluorescence scanners 1606, and handheld devices 1608.
Further, in an embodiment, the automated, real-time, carcass production planning algorithms 1622 perform carcass valuation, including determining optimal ways to cut the carcass to maximize product revenue given the current customer product delivery requirements. In an embodiment, the production planning algorithms 1622 operate by combining objective measurement data derived from sensor systems installed in the meat processing plant, such as the 3D X-Ray tomography scanners 1602, 2D X-Ray tomography scanners 1604, hyperspectral and fluorescence scanners 1606, and handheld devices 1608, including spatially localized information on meat grading, muscle volume, number of ribs in the carcass, and animal health data obtained via the meat grading algorithms 1618, carcass valuation algorithms 1620, and the animal health algorithms 1624.
In an embodiment, the real-time meat grading algorithms 1618 determine the constituents of trim boxes to establish the exact ratio of fat to lean meat. In an embodiment, data from sensing elements, such as a 3D X-ray tomography system employed in the meat producing plant, is used by the meat grading algorithms 1618 to generate metrics for both the percentage fractions of fat and lean as well as the size distribution of lean and fat items within the trim box.
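The fat-to-lean metric for a trim box reduces to simple mass fractions; the sketch below uses hypothetical gram inputs, and the size-distribution metrics mentioned above would additionally require the list of segmented items:

```python
def trim_box_fractions(lean_g, fat_g):
    """Mass fractions of lean and fat in a trim box. The lean fraction
    corresponds to the 'chemical lean' (CL) percentage commonly quoted
    for trim."""
    total = lean_g + fat_g
    return lean_g / total, fat_g / total

trim_box_fractions(850.0, 150.0)   # (0.85, 0.15), i.e. an 85% lean box
```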
In an embodiment, the real-time product quality check and validation algorithms 1626 determine whether the labelling of packaged retail cuts conforms to predefined rules. In an embodiment, data from sensing elements, such as a 3D X-ray tomography system in combination with hyperspectral imaging employed in the meat producing plant, is used to determine the weight, meat grade, meat color, fat content, fat thickness and cut-type of the products produced at the plant.
In an embodiment, the real-time product quality check and validation algorithms 1626 also determine whether the contents of cartons containing multiple packaged retail cuts conform to predefined customer requirements. In an embodiment, data from sensing elements, such as the 3D X-ray tomography system 1602 employed in the meat producing plant, is used by the product quality check and validation algorithms 1626 to determine parameters such as cut type, meat grading score, weight and fat thickness of each retail cut within the carton, which parameters are then compared to the customer supplied product requirements obtained from the production database 1610. In an embodiment, the real-time product quality check and validation algorithms 1626 also perform automated tracking of product throughout the plant by using sensing technology such as, but not limited to, RFID, barcode, video tracking and time, velocity and distance based methods.
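The carton-versus-requirements comparison can be sketched as a multiset difference over cut types. The cut-type names are illustrative, and a production check would also compare weight, grading score and fat thickness per cut as described above:

```python
from collections import Counter

def validate_carton(scanned_cut_types, packing_list):
    """Return (missing, unexpected) counts of cut types, comparing the
    scanned carton contents against the customer packing list."""
    have, want = Counter(scanned_cut_types), Counter(packing_list)
    return dict(want - have), dict(have - want)

validate_carton(["ribeye", "ribeye", "t-bone"],
                ["ribeye", "ribeye", "ribeye", "t-bone"])
# -> ({'ribeye': 1}, {}): one ribeye missing, nothing unexpected
```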
In embodiments, real-time data analysis algorithms provided by the present specification also perform time and motion analysis of individual operators and groups of operators based on video camera and location sensor measurements throughout a meat processing plant. It would be apparent to persons skilled in the art that other automated analysis algorithms may also be employed in a meat processing plant. Examples of some such real-time automated algorithms comprise algorithms for monitoring temperature distribution, humidity variation, throughput and other associated production metrics such as touch labor time per carcass. The examples of real-time analysis algorithms provided herein are for representative purposes only and should not be considered as limiting the scope of the present specification.
Referring to
In another embodiment, the 3D X-ray tomographic scanner 1602 is used for performing primal scanning for determining a sub-millimeter 3D location of carcass features immediately prior to automated or manual cutting equipment in the boning room. During such scanning, in some embodiments, the primal is fixed to rigid support structures that may be used to transfer the primal from the imaging system (scanners 1602) to an automated, robotic, cutting equipment in a known frame of reference in the meat processing plant. In an embodiment, the 3D X-ray tomographic scanner 1602 is used for performing retail cut scanning to determine a cut type, a meat grade, a weight, a fat thickness and an orientation of a cut within a package. The obtained scanned data may then be used to cross-correlate with the label applied to the package using optical character recognition technology applied to a video camera image, or using a bar code reader. In another embodiment, the scanned data may be used to auto-generate an accurate label which may then be applied directly to the packaged retail cut. In an embodiment, the 3D X-ray tomographic scanner 1602 is used for scanning a packaged carton in order to verify that the entire contents of the carton containing multiple retail cuts accurately reflect the label that is applied to the outside of the carton. In embodiments, each retail cut within the carton is analyzed from the obtained 3D X-ray image in order to determine a cut type, a meat grade, a weight, a fat-thickness and a 3D location of each retail cut within the carton.
Referring to
In some embodiments, the system of the present specification applies a plurality of programmatic instructions to evaluate the X-ray image and generate data indicative of a quality of meat, wherein said quality is quantified by a first range of values; to generate data indicative of intramuscular fat (IMF) deposition and/or content, which is associated with a second range of values; and to generate data indicative of an extent of marbling of the meat, which is associated with a third range of values. In some embodiments, one or more camera systems are installed for slice location and meat color imaging.
The present specification also provides for the use of image sensors such as 2D projection X-ray imaging in single-view or dual-view configurations with dual or multi-energy X-ray sensors. In various embodiments, the 2D projection X-ray imaging may be used in various applications in a meat producing plant. In an embodiment, said imaging is used for performing analysis of offal after removal from the carcass into trays, wherein one tray of green offal (e.g. stomach, intestines and bowel) and one tray of red offal (e.g. heart, lungs, liver, kidneys) are produced per carcass. In embodiments, the X-ray system is used to look for foreign objects such as metal items and worm nodules as well as for health defects such as tumors, cysts and enlarged organs. In an embodiment, said imaging is also used for analysis of cartons containing trim to determine the fraction of lean to fat tissue averaged over the whole carton.
In some embodiments, the present specification provides for the use of image sensors such as 2D projection X-ray imaging in a single-view and/or dual-view configuration with dual or multi-energy X-ray (MEXA) sensors to detect the presence of brisket worm nodules in the scanned meat. It should be appreciated that programmatic instructions are configured to process the X-ray image data to identify at least one of shapes, attenuation values, clustering, density, or other values indicative of one or more brisket worm nodules. In some embodiments, the X-ray system uses a conveyor that is positioned on an incline, wherein a first end of the conveyor is at a lower height position than the second, opposing end of the conveyor, or a decline, wherein a first end of the conveyor is at a higher height position than the second, opposing end of the conveyor, to minimize radiation dose and overall system size. In some embodiments, the X-ray system provides an ink-jet, laser beam, LED strip or augmented reality headset to indicate the presence of worm nodules.
Referring to
Referring to
Referring to
In various embodiments, different types of sensors and applications may be used in the abattoir of a meat processing plant, such as, but not limited to, fixed installations of 3D video camera systems; radar range finding systems for determining carcass volume, meat grading and meat color; and handheld systems for measuring temperature, pH, color, contamination and other parameters. Such and other sensors may be integrated within the overall framework disclosed in the present specification for further increasing the efficiency and profitability of a meat processing plant, without departing from the scope of the present specification.
In an embodiment of the present specification, each of the carcasses being processed in a meat processing plant, each of the primals that are cut from said carcasses and each subsequent retail cut from each of said primals are provided with a unique identifier (ID) to ensure traceability of all products. For example, if a carcass entering an abattoir cool room of the meat processing plant has an ID of ‘63’, and subsequently, six primals are cut from the carcass, said primals may be provided with IDs such as ‘63:1’ through ‘63:6’. If the primal ‘63:1’ is then processed into 26 retail cuts, said cuts may be provided with IDs such as ‘63:1:1’ to ‘63:1:26’. If the primal ‘63:2’ is processed into 15 retail cuts, said cuts may be provided with IDs such as ‘63:2:1’ to ‘63:2:15’. IDs for the retail cuts from the remaining primals from carcass ID ‘63’ may be similarly provided. It would be apparent to persons of skill in the art that multiple carcass, primal and retail cut labelling schemes are possible and may be employed in the present specification, and that the above given example is just one of such labelling schemes. In various embodiments, the IDs generated for the carcass, primal and retail cut are also associated with the date and time stamp at which a primal was cut from a carcass or a retail cut was separated from its primal.
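The hierarchical labelling scheme in the example above can be generated mechanically; a minimal sketch:

```python
def child_ids(parent_id, count):
    """IDs for items cut from a parent item: '63' -> '63:1'..'63:6' for
    primals, and '63:1' -> '63:1:1'..'63:1:26' for retail cuts, matching
    the example labelling scheme."""
    return [f"{parent_id}:{i}" for i in range(1, count + 1)]

primals = child_ids("63", 6)             # ['63:1', ..., '63:6']
retail_cuts = child_ids(primals[0], 26)  # ['63:1:1', ..., '63:1:26']
```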
In another embodiment, the present specification provides a method for tracking a location and time of arrival of each carcass, primal and retail cut through a meat processing plant.
In an embodiment, the present specification employs a video camera technology to track a primal as it is cut from a carcass and transferred to a conveyor or a secondary hanging rail.
In embodiments, for the points where human operators lift or otherwise remove primals or product from a rail or conveyor into a subsequent processing step, such as trimming fat from a primal or packing the product, one or more video cameras are used to monitor the location of the product and any parts that may be cut from it in order to maintain product location and ID assurance.
In embodiments, before and after photographic data is recorded and associated with an initial and final product for quality assurance purposes at points where automated process equipment, such as rotating blades, band saws, pulling devices or water jet cutters, removes or modifies carcass, primal or retail cuts. Where automated handling equipment moves carcasses, primals or retail cuts from one location to another, the carcass, primal or retail cut IDs are transferred automatically from the initial location to the final hook or conveyor location.
In embodiments, at each point in the meat processing plant, where a carcass, primal, retail cut or a packaged product is scanned by a sensor, the carcass, primal, retail cut or packaged product ID is associated directly with the data produced by the sensor to allow instant recall of the data from that sensor via the data network (such as 1628,
In some embodiments, the present specification describes a multi-sensor imaging system/platform that is designed to use 2D (two-dimensional) projection X-ray imaging in single-view or dual-view configurations with dual-energy or multi-energy X-ray (MEXA) sensors in combination with hyperspectral imaging for offal inspection and sortation. In some embodiments, the X-ray system uses a conveyor that is positioned on an incline, wherein a first end of the conveyor is at a lower height position than the second, opposing end of the conveyor, or a decline, wherein a first end of the conveyor is at a higher height position than the second, opposing end of the conveyor, to minimize radiation dose and overall system size.
In some embodiments, the multi-sensor imaging system/platform combines multi-energy X-ray attenuation (MEXA) with visible and shortwave infra-red (SWIR) hyperspectral camera data and applies a plurality of programmatic code, instructions or algorithms to automatically detect and sort cattle and sheep organs with defects in abattoirs. The hyperspectral data provides detailed information on the surface whereas the X-rays penetrate tissues providing information inside the organs. In some embodiments, the multi-sensor imaging system/platform provides an ink-jet, laser beam, LED strip or augmented reality headset to indicate presence of health issues upon scanning the meat.
The present specification describes a multi-sensor platform and associated plurality of programmatic code, instructions or algorithms to process the X-ray scan data and hyperspectral imaging data for the detection of defects in animal tissue, particularly beef and sheep organs. It should be appreciated that, in order to collect data, normal and abnormal (where abnormal is diseased or sick) organs were acquired from abattoirs, scanned by the multi-sensor system, and histopathological inspection was performed by expert veterinarians. The collected data is then used to develop various algorithms for the automatic detection of abnormal organs using various machine learning and deep learning algorithms, both supervised and unsupervised. Automatic identification of defects in both beef and sheep organs using hyperspectral imaging data achieves an accuracy of at least 92%. In embodiments, the plurality of programmatic code, instructions or algorithms may be used to automatically either ‘flag’ organs with defects after classification, or produce an image (that may be, but is not limited to, RGB, X-ray and/or hyperspectral) with colored or otherwise differentiated regions where the anomaly is detected, which may assist inspectors in further inspection. The plurality of programmatic code, instructions or algorithms is configured to generate at least one graphical user interface (GUI) in order to display the image and apply color or other demarcations (such as stippling) to regions of the image in order to indicate that the regions contain one or more anomalies.
In addition, the plurality of programmatic code, instructions or algorithms analyzes target X-ray scan data in order to determine whether an organ of interest (e.g., one exhibiting abnormal thickness upon palpation or discoloration) is too dense compared to healthy organs within a library of X-ray images (stored in a database). In some embodiments, each X-ray image in the library has associated thickness and density data. The plurality of programmatic code, instructions or algorithms is configured to process the associated thickness and density data in order to determine appropriate thresholds for acceptance or rejection of a target X-ray image as containing healthy or unhealthy meat/organs, respectively. In some embodiments, the plurality of programmatic code, instructions or algorithms is further configured to identify diseases in target X-ray scan data based on the library or database containing marked-up X-ray images with information from several lesions.
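A minimal sketch of such threshold-based acceptance, assuming a mean ± k·sigma rule over the density values stored with the library images (the actual thresholding rule is not specified):

```python
import statistics

def density_thresholds(library_densities, k=2.0):
    """Derive acceptance bounds from the densities associated with the
    library of healthy-organ X-ray images. The mean +/- k*sigma rule is
    an illustrative assumption, not mandated by the specification."""
    mu = statistics.mean(library_densities)
    sigma = statistics.stdev(library_densities)
    return mu - k * sigma, mu + k * sigma

def classify_organ(target_density, bounds):
    """Accept the target scan as healthy if its density falls within bounds."""
    lo, hi = bounds
    return "healthy" if lo <= target_density <= hi else "unhealthy"
```

In practice the same pattern would be applied per organ type, since healthy density ranges differ between, for example, livers and lungs.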
The multi-sensory platform of the present specification provides improved offal throughput, including optimization of the rollers and protective lead shielding, automation of image analysis to view and inspect a scanned organ in real time and determine if a second scan is necessary, and allowance for longer projection times in order to scan more than one organ in succession. Roller spacing, strength, and size can be optimized based on a weight and/or distribution of scanned offal. Thick offal lowers the detected scan signal (due to more attenuation) and therefore statistical accuracy (that is, more noise). In such cases, the multi-sensory platform is configured to scan at a slower speed (for longer) in order to improve image/detection accuracy and precision.
In some embodiments, system 2200 includes a conveyor belt 2204 (that translates at a speed ranging from 0.1 m/s to 1.0 m/s and preferably at approximately 0.2 m/s).
In some embodiments, system 2200 includes an inspection tunnel 2206 having a length ranging from 1100 mm to 5000 mm, a width ranging from 500 mm to 1000 mm and a height ranging from 300 mm to 1000 mm, and preferably a size of 1360 mm length×630 mm width×400 mm height.
In some embodiments, system 2200 includes a dual-view X-ray scanning system 2210 comprising first and second X-ray sources of 160 keV each (wherein the first source is in an up-shooter configuration and the second source is in a side-shooter configuration). In some embodiments, the system 2210 has 10 to 42 data acquisition boards (DABs), and preferably, 6 to 22 for the up-shooter view and 4 to 20 for the side-shooter view. In an embodiment, the system 2210 has 20 DABs, and particularly, 11 for the up-shooter view and 9 for the side-shooter view (112 pixels per board).
In some embodiments, the system 2210 includes high spatial resolution multi-energy photon counting X-ray sensor arrays such as, for example, a cadmium telluride detector (CdTe: 0.8 mm×1.2 mm×2 mm).
In various embodiments, an X-ray imaging acquisition rate of the system 2210 ranges from 150 Hz to 500 Hz in 3 to 20 energy bands in the range 20-160 keV. In some embodiments, the X-ray imaging acquisition rate is 300 Hz in six energy bands in the range 20-160 keV.
In various embodiments, system 2200 includes a hyperspectral imaging system 2215 comprising camera sensors. In some embodiments, the camera sensors include a Visible/IR (Infrared) sensor operating in a wavelength range of 450 nm-900 nm. In various embodiments, the camera sensor is configured for imaging in 200 to 1200 wavelength bands. In an embodiment, the camera sensor is configured for imaging in 300 wavelength bands. In some embodiments, the camera sensors include a SWIR (shortwave infrared) sensor operating in a wavelength range of 900 nm-1700 nm. In various embodiments, the camera sensor is configured for imaging in 400 to 700 wavelength bands. In an embodiment, the camera sensor is configured for imaging in 512 wavelength bands. In some embodiments, the hyperspectral imaging acquisition rate ranges from 30 Hz-150 Hz depending on image resolution/size and to scale to X-ray image capture.
The X-ray and hyperspectral imaging systems 2210, 2215 are in data communication with a computing device having memory, associated database system and a controller/processor that implements a plurality of instructions, programmatic code or algorithms (for example, an Ubuntu (Linux) Cube computer program) configured to control exposure time, image size, and acquisition rate as well as perform various analyses of X-ray images and/or hyperspectral images in order to identify anomalies, diseases and types of meat/organs/offal, classify healthy and unhealthy meat as well as implement associated functionalities and features, as described in the present specification.
The multi-sensory imaging system 2200 is configured to allow samples to be loaded from a first end, pass through the scanner, and emerge from a second end. Also, the system 2200 is characterized by real-time energy and intensity calibration of the multi-energy X-ray sensor arrays, integration of the two hyperspectral cameras at close to full GigE bandwidth on both cameras, synchronized store to disk and recall for MEXA, visible and SWIR camera data in DICOM (Digital Imaging and Communications in Medicine) format with associated TDRs (Threat Detection Reports), and a consolidated graphical user interface (GUI) for detailed review of all image types simultaneously.
In some embodiments, the system 2200 comprises a plurality of characteristics described as follows. In some embodiments, the general characteristics include that a) the sensing system is designed to operate in a hygienic abattoir environment; b) the sensing system is wash-down proof; c) the sensing system is designed to meet food safety standards; d) the sensing system is designed to meet ARPANSA (Australian Radiation Protection and Nuclear Safety Agency) radiation safety requirements; e) the sensing system has a 630 mm (W)×430 mm (H) tunnel size; and/or f) the sensing system has a conveyor speed of 200 mm/s.
In some embodiments, the system 2200 has the following imaging characteristics or specifications: a) the system is configured to enable dual-view X-ray imaging, wherein one view is directed upwards through the center of the inspection area and another view is directed horizontally through the inspection area; b) the X-ray imaging views use 120 to 160 keV X-ray beam quality with 0.2 to 1.25 mA beam current (for example, 120 keV, 0.2 mA for low-dose, low-radiation-exposure settings, such as for light curtains or curtainless shrouds); c) the X-ray imaging views use multi-energy X-ray (MEXA) sensors with 0.8 mm pitch sensor elements, wherein each sensor element counts photons into one of six energy bins with linear X-ray count rate capability up to 10⁶ X-rays/mm²/s; d) a visible wavelength hyperspectral imaging sensor operates in the range of 400 nm to 900 nm with spectral resolution of at least 20 nm over the full spectral region with pixel size not to exceed 2.0 mm across the conveyor width; e) a short wave infra-red (SWIR) hyperspectral imaging sensor operates in the range of 900 nm to 1800 nm with spectral resolution of at least 20 nm over the full spectral range with pixel size not to exceed 2.0 mm across the conveyor width; f) in various embodiments, the X-ray, visible and SWIR camera systems are synchronized to an X-ray base frequency ranging from 150 Hz to 500 Hz, and in particular, to an X-ray base frequency of 300 Hz; g) X-ray scan data from the X-ray scanning system 2210 and the hyperspectral imaging data from the hyperspectral imaging system 2215 are transferred to the computing device for subsequent real-time visualization (via one or more images displayed in one or more graphical user interfaces) and analysis using a plurality of programmatic code, instructions or algorithms.
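The photon-counting behavior of item c) can be illustrated as follows; the bin edges chosen here are an assumption, as the specification states only that each sensor element counts into one of six bins spanning 20-160 keV:

```python
import bisect

# Hypothetical bin edges (keV); the specification gives only six bins
# spanning 20-160 keV, not the edges themselves.
BIN_EDGES = [20, 40, 60, 80, 100, 130, 160]

def bin_photons(energies_kev):
    """Count detected photons into the six energy bins, as a MEXA sensor
    element does; photons outside 20-160 keV are discarded."""
    counts = [0] * 6
    for e in energies_kev:
        if 20 <= e < 160:
            counts[bisect.bisect_right(BIN_EDGES, e) - 1] += 1
    return counts
```

Per-bin counts like these, accumulated at the base frequency, are what the downstream material-discrimination algorithms consume.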
In some embodiments, the system 2200 has the following software characteristics or configurations, which are implemented by the plurality of programmatic code, instructions or algorithms. In embodiments, the computing device, associated with the multi-sensor imaging system 2200, generates at least one graphical user interface (GUI) that provides the system operator with pass/fail risk indication for all offal items. In embodiments, the at least one graphical user interface includes a scrolling image to show offal currently in the X-ray tunnel together with overlaid inspection results from automated health screening algorithms. When available, offal data is correlated with carcass ID using RFID, QR code, Bar Code or other similar ID technology by linking scanner and central abattoir databases. The computing device, associated with the system 2200, provides image review tools for retrospective analysis of offal samples including both X-ray manipulation and hyperspectral data manipulation tools. The software meets relevant cyber security standards such as ISO 27001.
In some embodiments, the system 2200 has the following algorithmic characteristics or configurations. The system 2200 is configured to apply a plurality of programmatic code or instructions to combine X-ray and hyperspectral image data to identify each type of offal as it passes through the scanning system. The target performance is at least 90% correct classification. The system 2200 is configured to apply a plurality of programmatic code or instructions to provide a risk assessment for each offal item as it passes through the scanning system, wherein the risk assessment is indicative of a probability of each offal item being healthy or unhealthy. In some embodiments, data indicative of the risk assessment is associated with a first range of values for healthy offal and is associated with a second range of values for unhealthy offal. The system 2200 is configured to apply a plurality of programmatic code or instructions to combine image-derived information with other abattoir provided information, such as animal type, age, sex and farming data when available in order to maximize risk prediction accuracy. The system 2200 is configured to apply a plurality of programmatic code or instructions to generate a total risk score as an aggregate of all underlying algorithm risk score results. The total risk score is used to generate a pass/fail result that shall also be used to apply color to the X-ray and/or hyperspectral image being displayed in at least one graphical user interface for the specific piece of offal to which the result relates.
In some embodiments, the system 2200 is configured to enable the following integrations. The system 2200 is configured to interface with abattoir database systems to recall information about a specific carcass and to store pass/fail information for each offal item for each carcass. The system 2200 is configured to integrate mechanically with abattoir conveying systems to pass offal items through the X-ray scanning and hyperspectral imaging systems 2210, 2215 in a controlled manner. The system 2200 is configured to interface with subsequent robotic systems for automatic offal selection and rejection.
Referring back to
In the SWIR region, the QIR source was almost an order of magnitude brighter than the halogen lamp while there was no illumination in this region from the LED. Therefore, the QIR light source was adopted for the broadband illumination task. Given that the QIR light source is very efficient at producing heat, the X-ray scanner control system is modified such that the QIR light source is configured to only switch on when the X-ray beam is on. This restricts heating in the scanning tunnel 2206 to only those seconds when a scan is actually being conducted.
A series of tests were then conducted using both X-ray absorbing and optically reflective bar code patterns to verify that the correct X-ray data is associated with the correct visible light data and that these are both associated with the correct SWIR data.
In accordance with some embodiments, lamb pluck (heart, liver and lungs) was acquired from a local butcher and X-ray image data was acquired as shown in
In images 2602, 2604, the heart is identifiable in the X-ray data as distinct from the liver and lungs and thus, an automated algorithm was configured for identification of the heart in the images 2602, 2604. In embodiments, a combination of vertical and horizontal view X-ray data is analyzed by a plurality of programmatic code or instructions in order to distinguish lung from liver and to determine the thickness of the tissues to calculate density and effective atomic number at each location in the image with a reasonable level of accuracy. The loss of image contrast over time is due, in part, to blood leaching from the organs.
As part of the imaging optimization, the optimal operating conditions selected for the multi-sensor system 2200 were determined to be as follows (in various embodiments as shown in Table 1).
With these optimized settings, the synchronized first, second and third image data 2702, 2704, 2706 respectively for the MEXA, visible and SWIR sensors is shown in
In order to verify data acquisition synchronization between the two hyperspectral cameras, a simulation pattern was developed for playback from each camera. In this case, one camera outputted magenta data and the other green data. When perfectly aligned, the result will sum to white, or will otherwise result in magenta or green leading or trailing pixels.
The result of this pattern 2800 for a badly synchronized hyperspectral imaging system is shown in
Hyperspectral image data for liver from a lamb is shown in
All organs were collected from collaborating abattoirs or butchers (˜30 km from the scanning and pathology laboratories), transported chilled on ice (2° C.) from the site of collection to the laboratory and scanned with the multisensory platform 2200 within 1.86±0.16 days from slaughter. Subsequently, all organs were examined to confirm abnormalities during post-mortem inspection by experienced veterinary pathologists (grossly and histologically) within 0.81±0.12 days from scanning. In total, 126 organs were collected as follows:
Livers were the most commonly affected organs and therefore, provided the strongest dataset for algorithm development.
A total of 52 beef cattle livers considered as not fit or rejected for human consumption by the meat inspectors were collected from a collaborating abattoir. Organs were processed and were stored at 2° C. until scanning.
A total of 43 beef cattle livers considered as fit for human consumption were collected from a commercial sale point and stored at 2° C. until scanning.
Organs were scanned using the multi-sensory platform 2200 (encompassing multi-energy X-ray attenuation at six energy levels, and visible and short-wave infrared hyperspectral imaging) and were then examined grossly and, subsequently, histologically by veterinary pathologists to confirm abnormalities.
In order to scan the organs individually, livers were placed into containment bags, which were opened, and then scanned using the multi-sensory scanning system 2200 of the present specification. Each liver specimen was positioned with the diaphragmatic surface upward and the caudate lobe at the lower left-hand side, and then scanned following a standard protocol. A total of six radiographs, one for each of the six X-ray energy bands, were produced simultaneously. The handling of the specimens was performed following a standard PC2 workflow. Normal RGB (Red Green Blue) images were also obtained in the position scanned and the area of interest recorded for later image markup.
Representative image outcomes and spectral signal are displayed in
All organs were systematically examined for gross lesions by qualified veterinarians specializing in pathology. Should a lesion be present, the following data were recorded: location, distribution, demarcation, color, shape, appearance of the cut surface, and consistency. The most likely cause of the lesion was also recorded. Organs were also examined for off-colors and inconsistency in texture by palpation, with sectioning and sampling for histopathology in some instances, to confirm the identity of the lesions. During this process, photos were taken of abnormal findings. After completion of each post-mortem examination, findings were recorded.
Machine and deep learning models are often sensitive to the dataset distribution, and data variations or redundancy may lead to performance decline of a deep learning model. To alleviate the negative impacts of distractive or redundant information from visible HS (hyperspectral) images such as image 3100, pre-processing operations, as illustrated in
At step 3102, firstly, the regions outside of the tray area are excluded to avoid ‘misleading’ the deep learning model's attention to these irrelevant regions and hence producing erroneous prediction and classification. The ROI is manually selected because the data is complex and insufficient. Moreover, the non-beef tissue component (conveyor belt) constitutes a much larger portion of the original image than the beef component (organ and iron plate). Therefore, the ROI is manually selected to speed up training (given time constraints) and to obtain good results on a small dataset. In subsequent commercial scenarios, there is no need to develop a particular automated pre-processing program; it is only necessary to fix the position of the iron plate containing the beef at the time of scanning to complete the ROI segmentation.
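A sketch of the fixed-position ROI selection described above; the plate coordinates are assumed known from the fixed scanning setup and are illustrative:

```python
import numpy as np

def crop_roi(hs_image, plate_box):
    """Crop a hyperspectral cube of shape (height, width, bands) to the
    fixed iron-plate region, excluding the conveyor-belt background.
    plate_box = (row0, row1, col0, col1) is assumed known in advance
    from the fixed position of the plate at scanning time."""
    r0, r1, c0, c1 = plate_box
    return hs_image[r0:r1, c0:c1, :]
```

Fixing the plate position at scan time reduces the segmentation to a constant crop, which is why no automated ROI program is needed commercially.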
At step 3104, secondly, distinctive spectra with a high signal-to-noise ratio (SNR) are selected. In the case of images, SNR is the ratio of the pixel mean to the variance of the image. Because informative content generally presents a specific mode or pattern, it has a small variance, i.e., a higher signal-to-noise ratio. Different image bands do not carry equally important information; some bands are highly noisy and of low SNR, which imposes significant challenges in distinguishing the overall outline of the organ. Such randomly distributed noisy data can be very disruptive to the model's performance for classification and prediction.
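The band selection of step 3104 can be sketched using the SNR definition given above (pixel mean divided by pixel variance); the threshold value is illustrative:

```python
import numpy as np

def select_bands(hs_cube, snr_threshold):
    """Keep only spectral bands whose SNR (per-band pixel mean divided by
    per-band pixel variance, per the definition above) exceeds the
    threshold. Returns the filtered cube and the kept band indices."""
    means = hs_cube.mean(axis=(0, 1))
    variances = hs_cube.var(axis=(0, 1))
    snr = means / np.maximum(variances, 1e-12)  # guard against zero variance
    keep = snr > snr_threshold
    return hs_cube[:, :, keep], np.flatnonzero(keep)
```

The threshold would in practice be tuned on held-out scans so that the organ outline remains distinguishable in every retained band.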
The final step 3106 normalizes the image values, bringing the intensities of each image band within a fixed range while maintaining the intensity ratio among channels. The image pixel values are normalized to [0, 1], where the most significant pixel values are mapped to 1.
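One way to realize step 3106, assuming a single global scale factor so that the intensity ratio among channels is preserved (the exact normalization used is not reproduced here):

```python
import numpy as np

def normalize_cube(hs_cube):
    """Map pixel intensities into [0, 1] so that the most significant
    value becomes 1. Dividing by one global maximum (rather than per-band
    maxima) is an illustrative choice that preserves the intensity ratio
    among channels, as the text requires."""
    peak = hs_cube.max()
    return hs_cube / peak if peak > 0 else hs_cube
```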
Subsequently, the entire data set is divided into a training set and a test set in a ratio of 2:1 to develop and evaluate the prediction or detection models, respectively.
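The 2:1 split described above can be sketched as follows; the shuffle seed is illustrative:

```python
import numpy as np

def split_dataset(samples, seed=0):
    """Shuffle and split samples into a training set and a test set in a
    2:1 ratio, as used to develop and evaluate the detection models."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    cut = (2 * len(samples)) // 3
    train = [samples[i] for i in idx[:cut]]
    test = [samples[i] for i in idx[cut:]]
    return train, test
```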
Automated inspection, using deep learning classifications, is a viable solution to the challenges of improving economic efficiency and reducing the risk of human infection. Also, in general, deep learning exhibits better performance for screening tasks. In order to develop deep learning-based classifications, the analysis is based on an assumption that an abnormal image is caused by abnormal elements, i.e., anomaly, which would not appear in a normal image. Since deep learning networks are trained by the use of a loss function in order to produce the result as expected, the loss function is determined according to the assumption that abnormal images, of unhealthy meat, contain tissues not found in normal images of healthy meat. As illustrated in
In some embodiments, the deep learning network 3200 includes a down-sampling and up-sampling phase. The down-sampling stage compresses the spatial information to obtain a larger field of perception, which gives a complete picture of defects in the image. However, as the information is compressed, the network gradually loses the corresponding location information. Although the up-sampling stage might recover the compressed spatial information to find the exact location of the defects, the convolution itself is very sensitive to deformation.
The discriminator 3202 is trained according to an assumption that anomalous pixels are present only in anomalous images. The anomalous pixels are conceptually anomalous, i.e., they are what causes, for example, the liver in the image to be anomalous, and there is no need to define in advance exactly which pixels are anomalous (the training is unsupervised). The network 3200 automatically defines and finds the anomaly during the training process. The discriminator 3202 predicts a value for each pixel of a given image and displays the results as a heat map: the higher the heat map value, the higher the probability that the corresponding feature in the image is a defect (0 for normal, 1 for a defect). Although the location of the defects in an image containing defects may be unknown, the maximum heat map value in that image should correspond to a defect.
During training, the discriminator 3202 is trained to output an all-zero heat map for a normal organ image. After pre-processing, the image already contains only the liver and the iron plate, so the heat map of a normal organ can be all zeros and the same color. In contrast, the output for an abnormal image is required to contain at least one pixel of one, i.e., the abnormal image should have an abnormal feature. Furthermore, the training strategy 3204 also calculates the difference between the heat map of the anomalous image and the other normal images to ensure that the features flagged as anomalous in the anomalous image are not present in any of the normal images. In some embodiments, the training strategy 3204 is to compare all tissues of a meat type (for example, beef liver) with all corresponding meat types (beef livers) in the normal dataset. If the predicted tissues do not appear in the normal data, they are identified as defects.
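The per-image training targets described above may be sketched as an illustrative loss; the exact objective of the network 3200 is not given in this form, and the squared-error terms are an assumption:

```python
import numpy as np

def heatmap_loss(heatmap, is_abnormal):
    """Illustrative loss consistent with the training targets above:
    a normal organ's heat map is driven toward all zeros, while an
    abnormal image must contain at least one strongly anomalous pixel
    (its maximum heat map value is driven toward 1)."""
    if is_abnormal:
        return (1.0 - heatmap.max()) ** 2
    return float(np.mean(heatmap ** 2))
```

Note that the abnormal-image term constrains only the maximum, reflecting that the defect's location is unknown during training.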
In some embodiments, the training strategy 3204 is configured such that it automatically adjusts the learning rate to adapt to a calculated gradient by calculating a first-order moment estimation and a second-order moment estimation of the gradient. In some embodiments, for the defect screening task, where an image is provided as input to the network 3200 to determine whether it contains defects, accuracy, precision, sensitivity, and specificity are calculated to evaluate the performance.
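The four evaluation measures named above follow directly from the confusion counts of the defect-screening task:

```python
def screening_metrics(tp, fp, tn, fn):
    """Compute accuracy, precision, sensitivity and specificity from the
    defect-screening confusion counts (defect = positive class)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)   # true positive rate (recall)
    specificity = tn / (tn + fp)   # true negative rate
    return accuracy, precision, sensitivity, specificity
```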
Referring now to
Each SWIR image consists of sub-images of different bands, which can provide much more information than the corresponding RGB image.
The pixel values at the same pixel position in every sub-image form a pixel vector. However, the range of each pixel vector varies dramatically, which makes distances between different pairs of pixel vectors not comparable. Therefore, band-wise normalization is conducted by the following formula:
The normalized pixel vectors 3320 have the same range and become comparable as shown in
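Assuming per-band min-max scaling as the band-wise normalization (one standard choice; the original formula is not reproduced here), the operation may be sketched as:

```python
import numpy as np

def bandwise_normalize(hs_cube):
    """Band-wise normalization: each spectral band of the (H, W, bands)
    cube is rescaled to [0, 1] independently, so that pixel vectors have
    the same range and become comparable across bands."""
    mins = hs_cube.min(axis=(0, 1), keepdims=True)
    maxs = hs_cube.max(axis=(0, 1), keepdims=True)
    return (hs_cube - mins) / np.maximum(maxs - mins, 1e-12)
```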
The use of distance as a metric to identify similarity using the k-means algorithm does not perform well with high-dimensional data because the distances between vectors tend to become closer as the dimensions increase (the curse of dimensionality). To reduce the dimensions, the principal component analysis (PCA) algorithm is applied. A component size of 6 is chosen to ensure that the amount of variance explained by the selected components is over 97%. The reduced SWIR sub-images 3330 are shown in
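A sketch of the PCA reduction with a check on the explained-variance target; the SVD-based formulation is an illustrative implementation choice:

```python
import numpy as np

def pca_reduce(pixel_vectors, n_components=6):
    """Project pixel vectors (n_samples, n_bands) onto their top principal
    components and report the fraction of variance retained (the text
    targets > 97% with 6 components)."""
    centered = pixel_vectors - pixel_vectors.mean(axis=0)
    # SVD of the centered data; rows of vt are the principal directions
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    explained = (s[:n_components] ** 2).sum() / (s ** 2).sum()
    reduced = centered @ vt[:n_components].T
    return reduced, float(explained)
```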
The k-means clustering algorithm is used to partition pixel PCA vectors into K clusters in which each pixel vector belongs to the cluster with the nearest mean distance to the cluster centroid.
It contains two steps when finding the optimal clustering: an assignment step, in which each pixel vector xp is assigned to the label i whose centroid is nearest, i.e., ∥xp−mi(t)∥≤∥xp−mj(t)∥ for every label j; and an update step, in which each centroid mi(t+1) is recomputed as the mean of the pixel vectors currently assigned to label i. Here, mi(t) is the centroid of label i at iteration t and ∥⋅∥ is the distance between vectors.
The algorithm will converge when the assignments are not changed.
Since the K value is a critical hyperparameter for the k-means algorithm, the K value is selected automatically according to the Within-Cluster Sum of Squared Errors (WSS), and the K at the elbow of the curve 3335 is used in the model as shown in
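The assignment/update iteration and the WSS computation used for selecting K can be sketched as follows; the random initialization and iteration cap are illustrative details:

```python
import numpy as np

def kmeans(x, k, iters=50, seed=0):
    """Plain k-means: alternate the assignment step and the centroid
    update step until the assignments stop changing (convergence)."""
    rng = np.random.default_rng(seed)
    centroids = x[rng.choice(len(x), k, replace=False)].astype(float)
    labels = None
    for _ in range(iters):
        dists = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)      # assignment step
        if labels is not None and np.array_equal(new_labels, labels):
            break                              # assignments unchanged
        labels = new_labels
        for i in range(k):                     # update step
            if np.any(labels == i):
                centroids[i] = x[labels == i].mean(axis=0)
    return labels, centroids

def wss(x, labels, centroids):
    """Within-Cluster Sum of Squared Errors for a given clustering."""
    return float(sum(((x[labels == i] - c) ** 2).sum()
                     for i, c in enumerate(centroids)))
```

Running `kmeans` for a range of K values and plotting `wss` against K yields the elbow curve from which K is chosen.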
After the k-means model has converged, similar pixel vectors have the same label, and the image can be segmented based on these labels. However, since k-means is an unsupervised algorithm, it cannot identify whether each cluster is normal or not.
Since it is challenging to precisely differentiate the boundary between healthy tissue and sick regions, an erosion process is performed for each label to avoid inclusion of the labels with lower confidence. Thus, the k-means clustering algorithm generates a localization of defects. It should be appreciated that images with manually outlined defect locations are not used in the training stage of the deep learning network 3200, and instead the defect locations are outlined or generated entirely automatically by the network 3200.
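The erosion of each label mask can be sketched with a 3x3 structuring element; the element size and iteration count are assumptions, as the text specifies only that an erosion is performed per label:

```python
import numpy as np

def erode(mask, iterations=1):
    """Binary erosion with a 3x3 structuring element: a pixel survives
    only if its entire 3x3 neighbourhood lies inside the label, shrinking
    each label away from its low-confidence boundary."""
    m = mask.astype(bool)
    for _ in range(iterations):
        padded = np.pad(m, 1, constant_values=False)
        m = np.ones_like(m)
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                m &= padded[1 + dr: 1 + dr + mask.shape[0],
                            1 + dc: 1 + dc + mask.shape[1]]
    return m
```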
The deep learning network 3200 and associated methods of image analysis provide at least the following advantages. Firstly, they provide a higher level of automation, mainly in data processing. The deep learning network 3200 does not require a fixed size for the ROI. In prior art systems, selecting ROIs requires a manual component, and the manual selection of ROIs introduces uncertainty which may lead to errors in the final prediction results. In some embodiments, the deep learning network 3200 uses a U-net structure, which allows for pixel-level prediction at any input size. Therefore, the network 3200 and associated image analysis methods can reduce the complexity of processing data and thus have a higher degree of automation. Secondly, the training strategy 3204 can be configured to perform semi-supervised localization training of defects, allowing the training of the localization task to be completed in the absence of a mask.
In some embodiments, each of the plurality of meat production sites or abattoirs 6405a, 6405b, 6405c to 6405n is in data communication with at least one server 6420 over a network 6430. The at least one server 6420 has an associated database system 6425. In some embodiments, a plurality of end-user computing devices 6440 are also in data communication with the at least one server 6420 over the network 6430. In a non-limiting scenario, for example, some of the plurality of end-user computing devices 6440 may be co-located with some of the plurality of meat production sites or abattoirs 6405a, 6405b, 6405c to 6405n and/or the associated livestock farms or breeders 6435 while some of the plurality of end-user computing devices 6440 may be geographically distributed remote from the plurality of meat production sites or abattoirs 6405a, 6405b, 6405c to 6405n and the associated livestock farms or breeders 6435.
In some embodiments, each multi-sensor imaging system 6410 includes a 2D projection X-ray imaging system in single-view or dual-view configurations with dual or multi-energy X-ray attenuation (MEXA) sensors in combination with a hyperspectral imaging system in data communication with a computing device 6415 and a producer's database system 6416. Each producer's database system 6416 stores a plurality of local or site-specific meat production and quality data related to the associated meat production site or abattoir 6405 and livestock farm or breeder 6435. The plurality of local meat production and quality data includes data such as, but not limited to, animal ID (corresponding to, for example, an identification tag associated with the animal), animal type (fish, chicken, pig, cattle, lamb, etc.), breed of animal, X-ray scan data corresponding to each of different ages of the animal, X-ray scan data of the animal's carcass and/or primal, hyperspectral image data of the animal's meat and organs, geographical location (of the livestock farm and/or meat production site or abattoir), climate, weather, season, feed type, time of year of meat production, vaccination history, medications, disease history, age of animal (when received in the meat production site or abattoir), and a plurality of after-sale parameters including lean meat yield (that is, percentage of meat, fat and bone), ratio of intra-muscular fat to tissue, amount of inter-muscular fat, absolute and relative size of individual organs, muscle volume, number of ribs, and presence or absence of diseases such as cysts, tumors, pleurisy and foreign objects.
In accordance with aspects of the present specification, the plurality of local or site-specific meat production and quality data from each producer's database system 6416 (at each of the plurality of meat production sites or abattoirs 6405) are aggregated and stored in the database system 6425 (associated with the at least one server 6420) in order to generate a plurality of global meat production and quality data. In some embodiments, the plurality of local or site-specific meat production and quality data from each producer's database system 6416 is aggregated based on one or more parameters such as, but not limited to, animal type, geographical location, feed type and/or climatic conditions.
In accordance with aspects of the present specification, the at least one server 6420 implements a plurality of instructions or programmatic code representative of at least one machine learning model. In some embodiments, the at least one machine learning model implements modelling techniques such as, but not limited to, partial least squares discriminant analysis, random forest and artificial neural networks. In some embodiments, the at least one machine learning model implements at least one deep learning or artificial neural network (ANN) such as, for example, a convolutional neural network (CNN).
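As a hedged illustration of one of the named modelling techniques, the sketch below trains a random forest to separate diseased from healthy scan data; the feature vectors are synthetic stand-ins, and the dimensions and class separation are assumptions chosen purely for demonstration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Sketch of one named technique (random forest) distinguishing diseased
# from healthy scan data. The feature vectors are synthetic stand-ins;
# real inputs would be features derived from X-ray and hyperspectral scans.
rng = np.random.default_rng(0)
X_healthy = rng.normal(0.0, 1.0, size=(200, 16))   # features from healthy scans
X_diseased = rng.normal(1.5, 1.0, size=(200, 16))  # features from diseased scans
X = np.vstack([X_healthy, X_diseased])
y = np.array([0] * 200 + [1] * 200)                # 0 = healthy, 1 = diseased

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
train_accuracy = float(model.score(X, y))
```

A partial least squares discriminant analysis or a CNN would slot into the same train/predict pattern, differing only in the model object and the feature representation.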
In some embodiments, the at least one machine learning model is configured to detect (and therefore differentiate) unhealthy/diseased scan data from healthy scan data and consequently infer and output global best livestock farming and meat production practices, patterns and insights for maximizing a plurality of positive parameters and minimizing a plurality of negative parameters based on the plurality of global meat production and quality data. The plurality of positive parameters corresponds to, for example, reduced need for medication, lower carbon footprint, variable cost efficiency, reputation protection, lower health risks to consumers and improvements in the plurality of after-sale parameters including lean meat yield, ratio of intra-muscular fat to tissue, amount of inter-muscular fat, absolute and relative size of individual organs, muscle volume, number of ribs, and absence of diseases such as cysts, tumors, pleurisy and foreign objects. The plurality of negative parameters corresponds to, for example, presence of abnormalities/diseases such as cysts, tumors, pleurisy and foreign objects.
In some embodiments, the at least one machine learning model is configured to analyze scan data (such as, any one, all or any combination of X-ray scan data corresponding to each of different ages of the animal, X-ray scan data of the animal's carcass and/or primal and hyperspectral image data) with additional information (such as, animal type, geographical location (of the livestock farm and/or meat production site or abattoir), climate, weather, season, feed type, time of year of meat production, vaccination history, medications, disease history, age of animal (when received in the meat production site or abattoir), and the plurality of after-sale parameters) to identify and output global best livestock farming and meat production practices, patterns and insights of what maximizes the plurality of positive parameters and what minimizes the plurality of negative parameters.
In some embodiments, the analyses of scan data with the additional information is performed based on local meat production and quality data corresponding to each of the plurality of meat production sites or abattoirs 6405 and associated livestock farms or breeders 6435 in order to identify global best livestock farming and meat production practices, patterns and insights. The identified global best livestock farming and meat production practices, patterns and insights are then communicated back to the plurality of meat production sites or abattoirs 6405 and associated livestock farms or breeders 6435.
In some embodiments, the at least one machine learning model is trained using input data. In some embodiments, the input data includes healthy or unhealthy/diseased hyperspectral image data of an animal's meat and organs along with at least a portion of associated additional information in the meat production and quality data such as animal ID, animal type (fish, chicken, pig, cattle, lamb, etc.), breed of animal, X-ray scan data corresponding to each of different ages of the animal, X-ray scan data of the animal's carcass and/or primal, geographical location (of the livestock farm and/or meat production site or abattoir), climate, weather, season, feed type, time of year of meat production, vaccination history, medications, disease history, age of animal (when received in the meat production site or abattoir), a plurality of after-sale parameters including lean meat yield (that is, percentage of meat, fat and bone), ratio of intra-muscular fat to tissue, amount of inter-muscular fat, absolute and relative size of individual organs, muscle volume, number of ribs, and presence or absence of diseases such as cysts, tumors, pleurisy and foreign objects.
The hyperspectral image data and the additional information from the meat production and quality data are associated with the animal ID in the database and hence retrievable as training input data. In some embodiments, training input data represents a sample from the global meat production and quality data stored in the database system 6425. In some embodiments, the sample is representative of each geographical location of the meat production sites or abattoirs 6405 and the associated livestock farms or breeders 6435.
In some embodiments, the input data is processed in order to generate processed training input data for training the at least one machine learning model. In some embodiments, the processing is applied directly to the input data (including healthy or diseased scans), and the input data does not require manual annotation or manual designation as healthy or diseased. Thus, the training is based on the hypothesis that a diseased, unhealthy or abnormal image is caused by abnormal elements, i.e., anomalies, which would not appear in a healthy/normal image.
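The annotation-free hypothesis above can be sketched as an unsupervised detector: a "normal" subspace is learned from unlabeled spectra (assumed to be mostly healthy), and an image is flagged when its reconstruction error from that subspace is unusually large. The data, dimensions, and the 99th-percentile threshold below are all illustrative assumptions:

```python
import numpy as np

# Sketch: learn a normal subspace from unlabeled spectra via PCA (SVD),
# then flag inputs with unusually large reconstruction error.
rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, size=(500, 32))      # unlabeled training spectra

mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
components = Vt[:8]                                 # retain 8 principal axes

def reconstruction_error(x):
    """Distance from x to its projection onto the learned normal subspace."""
    z = (x - mean) @ components.T
    return float(np.linalg.norm((x - mean) - z @ components))

# Threshold set at the 99th percentile of errors on the training data itself.
threshold = float(np.quantile([reconstruction_error(x) for x in normal], 0.99))
anomaly = rng.normal(4.0, 1.0, size=32)             # simulated abnormal spectrum
is_anomalous = bool(reconstruction_error(anomaly) > threshold)
```

No sample is ever labeled healthy or diseased here; the detector relies only on abnormal elements failing to fit the subspace learned from predominantly normal data.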
As known to persons of ordinary skill in the art, a hyperspectral image corresponding to the hyperspectral image data includes a plurality of pixels wherein each pixel includes a plurality of hyperspectral bands. In some embodiments, the plurality of hyperspectral bands is broken down into a plurality of bins, where each of the plurality of bins is associated with characterizing data. In some embodiments, the characterizing data is indicative of hyperspectral reflectance intensity from the surface of the target organ/meat. In some embodiments, the characterizing data is further processed by mathematical functions such as, for example, differentiation, in order to achieve feature highlighting. Thus, the hyperspectral image is segmented into regions that are of no interest and regions that are of interest (for example, those associated with lean (meat), fat, bone, healthy organ, and diseased organ). For example, N spectral bands of the hyperspectral image are grouped into a predefined number, M, of bins. In some embodiments, the number of bins M is 6. It should be appreciated that the Principal Component Analysis algorithm sets M=6, in some embodiments, in order to achieve an Explained Variance Ratio (EVR) above 97%. However, in alternate embodiments, M could take lower values, for example 1, at a lesser EVR, and up to 300 for visible and 512 for SWIR hyperspectral scan data (these values being the maximum number of spectral bins, i.e., pixels on the imaging sensor). Thus, the characterizing data is associated with one of the M bins. Preferably, samples are spread equally across the M bins, though an orthogonal set of data in which populations vary is also possible. The samples are spread equally across the M bins as a result of normalization, so that each of the M bins has the same intensity, and therefore the same statistical precision; that is, each of the M bins carries the same weight in the final result.
In embodiments, the bins have the following characteristics: a) the bins need to be sufficiently separated or discriminated within a grid space; because the data is presented against a multi-dimensional structure, principal components are chosen for each of the 3 axes; and b) the bins should have no more than 5% overlap in volume, and even more preferably no overlap.
Each of the M bins corresponds to a separate set of processed training input data. The at least one machine learning model is trained using data from the full set of M bins in order to be able to a) detect unhealthy or diseased hyperspectral image data from healthy hyperspectral image data and b) infer and output global best livestock farming and meat production practices, patterns and insights based on the plurality of global meat production and quality data associated with unhealthy/diseased and healthy hyperspectral image data.
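The PCA-based selection of M described above (the smallest number of components whose cumulative Explained Variance Ratio exceeds 97%) can be sketched as follows; the synthetic spectra, with an assumed five dominant spectral factors, stand in for real hyperspectral pixel data:

```python
import numpy as np

# Sketch: choose M as the smallest component count whose cumulative
# Explained Variance Ratio (EVR) exceeds 97%. All data are synthetic.
rng = np.random.default_rng(2)
n_pixels, n_bands = 1000, 300              # e.g. 300 visible-range bands
latent = rng.normal(size=(n_pixels, 5))    # 5 dominant spectral factors (assumed)
mixing = rng.normal(size=(5, n_bands))
spectra = latent @ mixing + 0.05 * rng.normal(size=(n_pixels, n_bands))

centered = spectra - spectra.mean(axis=0)
eigvals = np.linalg.svd(centered, compute_uv=False) ** 2   # PCA via SVD
evr = np.cumsum(eigvals) / eigvals.sum()                   # cumulative EVR
M = int(np.searchsorted(evr, 0.97) + 1)    # smallest M with cumulative EVR >= 0.97
```

With five true spectral factors and low noise, the procedure recovers M = 5; on real scan data the same rule would yield whatever M first clears the 97% EVR target (M = 6 in the embodiment described above).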
In embodiments, a selection of the hyperspectral bands is designed to improve the quality of the data, enhance the accuracy of the at least one machine learning model, suppress overfitting, and increase efficiency, ultimately improving the performance of the at least one machine learning model. The peak signal-to-noise ratio (PSNR) is the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation; it is used for band selection for the K-means and deep learning models.
For each sample in the dataset, the hyperspectral image stack with the shape (width, height, band) has intensities ranging from 0 to 2^n − 1, where n is the intensity resolution. In some embodiments, n=64. The image of band i is denoted as Bi and the intensity of pixel j as pj. The total intensity Ii of Bi is calculated as:

Ii = Σj pj

where the summation runs over all pixels j of band image Bi.
After the total intensity of each band is calculated, the maximum total intensity Ig and its corresponding band g can be found. Image Bg is used as the reference image to calculate the peak signal-to-noise ratio (PSNR) of band i as:

PSNRi = 10·log10(MAXn^2 / MSEi)
where MAXn is 2^n − 1 and MSEi is the mean squared error between Bi and Bg. The PSNR value is used as a threshold for band selection. In accordance with some embodiments, the threshold is chosen as 20 dB (corresponding to a total intensity of around 50% of Ig), since bands below 20 dB are typically of unacceptable quality.
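The band-selection procedure described above can be sketched as follows; the stack size and an intensity resolution of n = 16 bits are illustrative assumptions:

```python
import numpy as np

# Sketch of PSNR-based band selection: take the brightest band as the
# reference and keep bands whose PSNR against it exceeds 20 dB.
rng = np.random.default_rng(3)
n = 16                                    # intensity resolution in bits (assumed)
max_val = 2 ** n - 1                      # MAXn = 2^n - 1
stack = rng.integers(0, max_val, size=(64, 64, 10))   # (width, height, band)

totals = stack.sum(axis=(0, 1))                       # total intensity Ii per band
g = int(np.argmax(totals))                            # reference band index
ref = stack[:, :, g].astype(np.float64)               # reference image Bg

def psnr(band):
    mse = np.mean((band.astype(np.float64) - ref) ** 2)   # MSEi against Bg
    if mse == 0:
        return float("inf")                               # identical to reference
    return 10.0 * np.log10(max_val ** 2 / mse)

selected = [i for i in range(stack.shape[2]) if psnr(stack[:, :, i]) >= 20.0]
```

On real scan data, bands passing the 20 dB threshold would be retained for the K-means and deep learning models; the uncorrelated random bands here all fall well below it, so only the reference band itself survives.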
In accordance with some aspects of the present specification, the at least one server 6420 also implements a plurality of instructions or programmatic code in order to generate at least one graphical user interface (GUI) for access by the plurality of end-user computing devices 6440 over the network 6430. The at least one GUI is configured to acquire a user query. The acquired user query is provided as input to the at least one trained machine learning model that processes the user query and outputs a response for display, to the user, in the at least one GUI. In some embodiments, the response is based on the global best livestock farming and meat production practices, patterns and insights inferred by the at least one machine learning model. In some embodiments, the end-user is a livestock farmer or breeder. In some embodiments, the end-user is a meat producer.
In various embodiments, the at least one GUI enables an end-user to input query data corresponding to various scenarios into the at least one trained machine learning model in order to simulate and understand how the scenarios (associated with the input query data) will affect the plurality of positive and negative parameters. For example, an end-user's input query data may correspond to a scenario of “what happens if I invest in a $3 vaccine?”. The output response from the at least one trained machine learning model could be “this leads to $20 in improved health/meat quality outcome”. Similarly, the end-user's input query data may correspond to, for example, what happens if no vaccines are used, there is a change in feed type, there is a change in temperature, rain or any other climatic, weather or seasonal changes.
It should be appreciated that the analyses and therefore the response of the at least one machine learning model is highly geographically (and climate) specific. Consequently, in some embodiments, the at least one GUI enables the end-user to specify (via selection of a pre-populated drop down list, for example) the geographic location along with the input query data. In some embodiments, the at least one GUI enables the end-user to filter the responses on the basis of one or more geographic locations and/or climatic, weather and seasonal characteristics.
The findings recorded during scanning of the diseased livers (rejected for human consumption by meat inspectors) and the gross descriptions noted during post-mortem inspection are displayed in Table 2. A total of 32 out of 52 livers rejected for human consumption showed various degrees of discoloration, 7 livers had abscesses, 5 had duct thickening, 4 had fibrosis, 2 had flukes and 2 had cysts. Results for kidneys and lungs are not presented because they were all healthy.
For the anomaly classification task, where an image is provided to determine whether it is an anomaly or a normal image, the systems and methods of the present specification can achieve accuracy and sensitivity measures of over 90% (Table 3) and can also show the location of the abnormal pixels.
In addition to the binary automated classification of abnormal and normal, as illustrated in
As shown in images 4020, 4022 of
The pixel vectors within each cluster are sampled for further spectral feature comparison 4040 as shown in
From
As shown in images 4120, 4122 of
The pixel vectors within each cluster are sampled for further spectral feature comparison 4140 as shown in
From
Scanning of sheep organs and post-mortem inspection was performed in a similar way to that of beef organs. However, the image data pre-processing, feature extraction, and machine learning models developed were different from those described above for beef cattle. The main difference was that data extraction was done manually in selected regions of interest and no spatial pixel information was analyzed.
Sheep organs were examined using the same X-ray procedure as per the cattle trial. Different organs were sampled from a collaborating abattoir and point of sale. Furthermore, one lamb pluck (heart, lungs and liver) was evaluated for differentiation of organ type within the same image. The color differences due to tissue density made the organs easy to distinguish by eye and, as a result, the marked-up X-ray image 4204 in
Image processing software was employed to determine the intensity of the X-rays through each type of organ tissue in
Sheep lungs were selected from the collaborating abattoir due to the presence of caseous lymphadenitis (CLA), known commonly as cheesy glands, in the lymph nodes surrounding the lungs.
Specifically,
Similarly, the difference in intensity encountered when imaging sheep lungs with abscessation also occurred in the sheep lung showing evidence of CLA. The images 4340, 4342 in
The RGB image 4402 of a diseased sheep liver in
An RGB image 4602 of a sheep liver containing visible evidence of a lesion is displayed in
Lamb Lung. Lamb lungs were also scanned using the multi-sensory system of the present specification. Example of MEXA image data 4704 is presented in
Cheesy glands. Another issue of importance in the abattoirs is the detection of cheesy glands. Example photographs 4802 of cheesy glands in mutton (closed and opened) are shown in
The visible and SWIR surface-reflected hyperspectral intensity spectra 4902 for 102 mixed sheep and beef organs (Table 4) are displayed in
The first derivative of the absorbance of visible and SWIR hyperspectral spectra 5102, 5104 for 89 healthy and diseased sheep organs (Table 5) are displayed in
The mean visible and SWIR hyperspectral reflectance spectra 5302, 5304 for 108 (54 grain and 54 grass, frozen) beef steaks are displayed in
Beef primals (the wholesale rib sets) were scanned as a component of a larger trial involving the development of optimum carcass endpoints in feedlot cattle depending on breed. At 0, 50, 100, 150 and 200 days on feed at a commercial feedlot, cattle were slaughtered and selected wholesale rib sets were selected for X-ray scanning to develop prediction algorithms for proportions of fat, muscle, and bone.
A data collection run was performed on steak samples. In the images of
Sample first X-ray image data 5602 and second X-ray image data 5604 of two lambs is provided in
The present specification is directed towards evaluating the use of X-ray technology for the detection of pathologies of foodborne concern. Among the fifty-two livers rejected for human consumption that were scanned, 32 were found with various degrees of discoloration (focal and multifocal, located in the different lobes, extended and local), 7 with abscesses, 5 with duct thickening, 4 with fibrosis, 2 with flukes and 2 with cysts. Notably, discoloration not accompanied by a change in tissue density is not expected to be captured by X-ray absorptiometry.
Lesions such as abscesses and fluke lead to modifications of the hepatic tissue involving calcification and thickening processes that alter the physiological radiological density of the organ and could be detected through the X-ray images. In lungs, a less dense tissue than livers, abscesses and CLA lesions were much more easily noticeable. Visual and X-ray intensity comparisons showed differences between livers, kidneys and lungs other than their size and shape.
Soft tissue abscesses are focal or localized collections of pus caused by bacteria or other pathogens surrounded by a peripheral rim or abscess membrane found within the soft tissues in any part of the body. Although X-rays are generally of limited value for the evaluation of a soft tissue abscess, they may show soft tissue gas or foreign bodies, increasing suspicion for an infectious disease process, or reveal other causes of underlying soft tissue swelling.
Fascioliasis or liver fluke is a food-borne hepatic trematode zoonosis, caused by Fasciola hepatica and Fasciola gigantica. F. hepatica is a flat, leaf-shaped hermaphroditic parasite. Radiological findings can often demonstrate characteristic changes, and thereby, assist in the diagnosis of fascioliasis. The early parenchymal phase of the disease may demonstrate subcapsular low attenuation regions in the liver.
While the X-ray technology did not seem to recognize the shape of the lesions, the images showed various degrees of modification of the hepatic pattern depending on the lesions found. For each liver scanned, an area of interest was marked based on the macroscopic appearance of the organ, and the mark-up was then confirmed during the post-mortem inspection. Near the marked lesions, the six radiographs displayed lighter shades of grey when compared to the healthy tissue. The hepatic lesions caused by the pathologies observed (i.e., duct thickening, calcification, etc.) were not accurately recognized in the radiographs; the technology was only capable of showing unusual shades of grey near the marked-up areas.
The present specification exhibits the use of multi-energy X-ray technology to differentiate organs in a simulated abattoir setting and to present X-ray images compared with whole and dissected photographic images of the same organs, with markings and notes to train a neural network for differentiating the lesions upon the X-ray images compared with the corresponding region of interest upon healthy organs.
The livers from cattle are significantly larger than those from sheep, with several presenting noticeable lesions indicative of disease processes. The images within Trial 2 were of mixed species (mostly sheep) and organ type, with lamb pluck X-rays showing differences in density for different types of organs, while the bovine livers were shown to be significantly denser than ovine livers, and a Wagyu liver was shown to be denser than a non-Wagyu liver. Therefore, the size of the tissue or organ being scanned may influence the ability of the X-ray sensor to detect differences in abnormal tissues. In embodiments, the region and distance between the X-ray images may be adjusted for different animal species.
When a lesion is visible to the naked eye, or tissue abnormalities are felt via palpation, prior to sectioning, an X-ray image can discern its shape and further information without requiring sectioning. However, when an organ is simply discolored or there is some subliminal evidence of disease process due to an overly thick surface such as liver fluke deep within a large bovine liver or capsular fibrosis, the X-ray images cannot be marked and therefore intensity histograms from a given ROI would be the optimal method to determine whether an organ can be passed as fit for human consumption or not.
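The ROI intensity-histogram approach suggested above can be sketched as comparing an organ's ROI histogram against a healthy reference distribution; the synthetic intensities, bin layout, and decision threshold are assumptions for illustration:

```python
import numpy as np

# Sketch of an ROI intensity-histogram check: compare the histogram of a
# scanned organ's ROI against a healthy reference histogram and flag the
# organ for inspection if the distributions diverge.
rng = np.random.default_rng(4)
healthy_roi = rng.normal(120, 10, size=(50, 50))   # reference X-ray intensities
suspect_roi = rng.normal(150, 12, size=(50, 50))   # denser, suspect tissue

bins = np.linspace(0, 255, 33)
h_ref, _ = np.histogram(healthy_roi, bins=bins, density=True)
h_roi, _ = np.histogram(suspect_roi, bins=bins, density=True)

# Total-variation distance between the two normalized histograms (0 to 1).
bin_width = bins[1] - bins[0]
distance = 0.5 * float(np.sum(np.abs(h_ref - h_roi))) * bin_width
flag_for_inspection = bool(distance > 0.25)        # assumed decision threshold
```

In practice the reference histogram would come from the expanded library of healthy organs, with acceptance/rejection thresholds confirmed against inspector decisions rather than the arbitrary value used here.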
Hyperspectral (HS) imaging, in the form of two sensors within the multi-sensory platform 2200, is a non-contact technology encompassing the visible spectrum (400-900 nm) and short-wave infrared spectrum (900-1700 nm). The HS images generated from frame-by-frame slices of a hypercube within a region of interest (ROI) are surface-based and can detect differences in spectral signatures within different products such as organs, meat, and agro-food products.
These spectral signatures are extracted from a given ROI across each sample and can be compared and contrasted with one another using machine learning modelling techniques such as partial least squares discriminant analysis, random forest and artificial neural networks. As a non-contact, non-destructive classification tool, HS may be used to classify organs by organ type and to determine whether an organ of a particular type is diseased. Various algorithms of the present specification may be integrated with the multi-sensory platform.
In accordance with aspects of the present specification, the multi-sensory imaging system/platform 2200 can be used in organ processing scenarios under commercial conditions such as abattoirs or processing plants. One non-limiting example is where organs are mixed and need to be identified by both species and type. The spectra of each organ and results of classification algorithms to differentiate organs by species and type are described hereunder. Visible (VIS) reflectance spectra 5700a, 5700b and short-wave infrared (SWIR) reflectance spectra 5700c, 5700d for each of the four organs (liver, heart, lung, kidney) and each species (beef 5700b, 5700d and sheep 5700a, 5700c) are shown in
In some embodiments, the datasets were pooled across species to develop algorithms for organ and species differentiation as if a mix of organs from both species were scanned through the platform. These algorithms were developed and validated using 5-fold cross-validation. When using both species, the three different spectral regions (VIS, SWIR, and combination VIS and SWIR-COMB) each performed best using a different discriminant analysis model for predictions. As shown in
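The 5-fold cross-validation workflow can be sketched as follows, with synthetic mean-reflectance spectra standing in for the pooled VIS/SWIR data and a random forest standing in for whichever discriminant model performed best in a given spectral region:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Sketch of 5-fold cross-validated organ discrimination on synthetic spectra.
rng = np.random.default_rng(5)
organs = ["liver", "heart", "lung", "kidney"]
X_parts, y = [], []
for label, organ in enumerate(organs):
    center = rng.normal(size=64)                   # organ-specific spectral shape
    X_parts.append(center + 0.3 * rng.normal(size=(50, 64)))  # 50 samples/organ
    y.extend([label] * 50)
X = np.vstack(X_parts)
y = np.array(y)

# cross_val_score splits the pooled data into 5 folds, trains on 4 and
# validates on the held-out fold, rotating through all folds.
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
mean_accuracy = float(scores.mean())
```

Pooling across species would simply add a species label (or a combined organ-by-species label) to the same workflow, as described above for the mixed-organ scenario.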
In some embodiments, algorithms could scan entire organs and then search for abnormal regions, which could be assisted by X-ray spectroscopy. Similarly, identifying different components of an organ sampled from the abattoir such as lymph nodes, fat, and bile ducts could assist in identification of the organ, as well as detection of defects, diseases, or abnormalities. However, this would require larger ROI such as marking-up entire organs from an HS image.
Hyperspectral sensors can only measure the electromagnetic radiation from the surface of products and thus cannot measure characteristics inside organs to detect defects or abnormalities below the surface. However, the multi-sensory platform of the present specification is used to collect data from a multi-energy X-ray sensor which can penetrate tissues much further. In some embodiments, the platform 2200 contains six X-ray sensors that penetrate to different depths and identify abnormalities that the HS sensors cannot.
As shown in
On the other hand,
The presently disclosed embodiments can be used to aid inspectors in the abattoir and infer the presence of potential lesions and their location within organs. In addition, X-ray can determine whether an organ of interest (i.e., by abnormal thickness upon palpation or discoloration) is too dense compared to healthy organs within the image library if this is expanded to have more organs, with appropriate thresholds for acceptance or rejection confirmed. In addition, disease processes may also be identified by X-ray imaging once there is a significant amount of marked-up data with information from several lesions.
Hyperspectral imaging technologies are non-invasive and non-contact, and may enable automatic sorting of livestock organs and disease detection in the abattoir, allowing animal health reports to be provided for producers.
The above examples are merely illustrative of the many applications of the system of the present specification. Although only a few embodiments of the present invention have been described herein, it should be understood that the present invention might be embodied in many other specific forms without departing from the spirit or scope of the invention. Therefore, the present examples and embodiments are to be considered as illustrative and not restrictive, and the invention may be modified within the scope of the appended claims.