The present invention relates to an in-situ quantitative measurement of texture for food products using acoustic techniques.
Texture is one of the most important sensory characteristics that determine consumer preference for food products and is usually assessed by sensory evaluation. However, sensory evaluation is time-consuming and expensive, and therefore, reliable and practical instrumental methods are needed to accurately predict sensory texture attributes and other food snack properties.
When a food snack such as a potato chip is manufactured, its textural properties depend on raw material characteristics (e.g., low-solids or high-solids potatoes) and on the processing conditions that the raw material undergoes, such as temperature profile, slice thickness, pulsed electric field intensity and so on.
The crispiness, softness and/or crunchiness of a potato chip are just a few examples of texture and mouthfeel characteristics that make food appealing and satisfying to consumers. Texture is one of the major criteria which consumers use to judge the quality and freshness of many foods. When a food produces a physical sensation in the mouth (hard, soft, crisp, moist, dry), the consumer has a basis for determining the food's quality (fresh, stale, tender, ripe).
A major challenge is how to accurately and objectively measure texture and mouthfeel. Texture is a composite property related to a number of physical properties (e.g., hardness and fracturability), and the relationship is complex. Texture or mouthfeel cannot be quantitatively measured in a single value obtained from an instrument. Mouthfeel is hard to define as it involves food's entire physical and chemical interaction in the mouth—from initial perception on the palate, to first bite, through mastication and finally, the act of swallowing. There is a need to quantitatively measure the food interaction in the mouth.
A problem with hardness measurements is that their correlations with sensory tests are not always as high as expected. In many instances, the metric of peak force exerted on a potato chip does not adequately replicate the energy experienced by consumers. Therefore, consumers' judgments of hardness can be more nuanced than a simple peak force metric from a destructive analytical test.
Presently, there is no good correlation of any type between instrument readings and taste panel scores. The issue is that no instrument is capable of manipulating a food product precisely the same way as the human mouth during mastication. For example, an instrument may compress a food product between two plates, while a human would be biting down with incisors. Therefore, there is a need for a quantitative texture measurement that has a good correlation with a qualitative measurement from an expert panel.
A Universal TA-XT2 Texture Analyzer from Texture Technologies Corp. can perform a complete TPA (texture profile analysis) calculation and comes with multiple standard probes, including various sizes of needles, cones, cylinders, punches, knives and balls.
As generally shown in
As generally shown in
Consequently, there is a need for a quantitative texture measurement that accomplishes the following objectives:
While these objectives should not be understood to limit the teachings of the present invention, in general these objectives are achieved in part or in whole by the disclosed invention that is discussed in the following sections. One skilled in the art will no doubt be able to select aspects of the present invention as disclosed to effect any combination of the objectives described above.
The present invention in various embodiments addresses one or more of the above objectives in the following manner. The apparatus includes an acoustic capturing device and a data processing unit. When a human being eats or drinks a food snack, the physical interaction in the mouth sends pressure waves that propagate through the ear bone and produce an acoustic signal. The acoustic capturing device records and forwards the signal to a data processing unit. The data processing unit further comprises a digital signal processing module that smoothens, transforms and filters the received acoustic signal. A statistical processing module further filters the acoustic signal from the data processing unit and generates a quantitative acoustic model for texture attributes such as hardness and fracturability. The quantitative model is correlated with a qualitative texture measurement from a descriptive expert panel. Another method provides food snack fingerprinting using an in-situ quantitative food property measurement.
The present invention system may be utilized in the context of method of quantitatively measuring texture of a food snack, the method comprises the steps of:
Integration of this and other preferred exemplary embodiment methods in conjunction with a variety of preferred exemplary embodiment systems described herein is anticipated by the overall scope of the present invention.
For a fuller understanding of the advantages provided by the invention, reference should be made to the following detailed description together with the accompanying drawings wherein:
While this invention is susceptible of embodiment in many different forms, there is shown in the drawings and will herein be described in detail a preferred embodiment of the invention with the understanding that the present disclosure is to be considered as an exemplification of the principles of the invention and is not intended to limit the broad aspect of the invention to the embodiment illustrated.
The numerous innovative teachings of the present application will be described with particular reference to the present exemplary embodiment, wherein these innovative teachings are advantageously applied to an apparatus and method for in-situ quantitative measurement of texture attributes of food snacks. However, it should be understood that this embodiment is only one example of the many advantageous uses of the innovative teachings herein. In general, statements made in the specification of the present application do not necessarily limit any of the various claimed inventions. Moreover, some statements may apply to some inventive features but not to others.
The term “texture” as used herein is defined as a composite property related to a number of physical properties such as hardness, fracturability, tooth-pack, roughness of mass, moistness of mass, residual greasiness, surface roughness, and surface oiliness. It should be noted that the terms “texture” and “texture attribute” are used interchangeably to indicate one or more properties of texture. It should be noted that the terms “descriptive panel number”, “taste panel score”, “qualitative texture number” and “taste panel number” are used interchangeably to indicate a qualitative measurement of texture by an expert panel. It should be noted that the terms “in-situ acoustic model,” “acoustic model,” “acoustic texture model,” and “quantitative texture attribute model” are used interchangeably to indicate a quantitative model for a texture attribute of a food snack. The term “texture” as used herein with respect to a liquid or a beverage refers to properties such as viscosity, density, rheology and/or mouthfeel.
One aspect of the present invention provides an in-situ method to quantitatively measure the texture attributes of food snacks. Another aspect of the present invention involves correlating the in-situ quantitative texture attribute measurement to a texture attribute qualitatively measured by an expert panel. The present invention is also directed towards developing a texture attribute model based on relevant frequencies in a captured acoustic signal. According to yet another aspect of the present invention, food snacks are identified (“food fingerprinting”) based on an in-situ quantitative food snack property measurement.
Applicants herein have created a system that comprises an acoustic capturing device for recording/capturing an acoustic signal from a food snack and a data processing unit that processes the captured acoustic signal and generates a texture attribute model. There are a number of embodiments of this invention which fall within the scope of the invention in its broadest sense.
The present invention may be seen in more detail as generally illustrated in
The acoustic capturing device (0503) may be connected physically with a conducting cable to the DPU (0502) via an input-output module in the DPU (0502). In an alternate arrangement, the acoustic capturing device (0503) may forward an acoustic signal to the input-output module in the DPU (0502) wirelessly. The wireless connection may use standard protocols such as WIFI or Bluetooth. In an exemplary embodiment, the acoustic capturing device (0503) may be remotely located and the acoustic signal may be forwarded wirelessly to the DPU (0502) with a protocol such as LTE, 3G and/or 4G. In another exemplary embodiment, the remotely located DPU (0502) may be connected to the acoustic capturing device (0503) with a wired protocol such as Ethernet. The acoustic capturing device may capture the acoustic signal across a wide range of frequencies. Additionally, the acoustic capturing device may be placed at an angle directly in front of the human being. According to a preferred exemplary embodiment, the acoustic capturing device captures acoustic signals in a unidirectional manner. According to another preferred exemplary embodiment, the acoustic capturing device captures acoustic signals in an omnidirectional manner. The acoustic capturing device may forward the captured acoustic signal to a processing device physically through a cable. According to a preferred exemplary embodiment, the acoustic capturing device is a wireless microphone that contains a radio transmitter. In a preferred exemplary embodiment, the acoustic capturing device is a dynamic microphone. In another preferred exemplary embodiment, the acoustic capturing device is a fiber optic microphone. A fiber optic microphone converts acoustic waves into electrical signals by sensing changes in light intensity, instead of sensing changes in capacitance or magnetic fields as with conventional microphones. The acoustic capturing device may use electromagnetic induction (dynamic microphones), capacitance change (condenser microphones) or piezoelectricity (piezoelectric microphones) to produce an electrical signal from air pressure variations. The microphones may be connected to a preamplifier before the signal can be amplified with an audio power amplifier or recorded. The microphones may be regularly calibrated due to the sensitivity of the measurement. In another preferred exemplary embodiment, the acoustic capturing device has a digital interface that directly outputs a digital audio stream through an XLR or XLD male connector. The digital audio stream may be processed further without significant signal loss.
According to a preferred exemplary embodiment, the acoustic signal may then be captured for a period of time. The acoustic signal may be represented as Intensity (dB) vs. Time (secs). According to a preferred exemplary embodiment, the acoustic signal is captured for 1 sec to 5 minutes. According to yet another preferred exemplary embodiment, the acoustic signal from the food snack is captured for 2 sec. According to a more preferred exemplary embodiment, the acoustic signal from the food snack is captured for 1 sec. According to a most preferred exemplary embodiment, the acoustic signal from the food snack is captured for 10 sec.
According to a preferred exemplary embodiment, the food snack may be processed in a human mouth for 1 sec to 3 minutes. According to yet another preferred exemplary embodiment, the food snack may be processed in a human mouth for less than a second. According to a more preferred exemplary embodiment, the food snack may be processed in a human mouth for greater than 3 minutes. According to a most preferred exemplary embodiment, the food snack may be processed in a human mouth for 10 seconds to 20 seconds. According to another most preferred exemplary embodiment, the food snack may be processed in a human mouth for 5 seconds to 10 seconds.
The acoustic model may be developed using the method described in more detail in
Hardness = f(X1-n, I1-n)
Hardness = I1C1 + I2C2 + I3C3 + . . . + InCn   (1)
Similar acoustic models may be developed for other food properties such as moisture, solids content, oil content, slice thickness, density, blister density and topical seasonings. The relevant frequencies, the associated intensities and the coefficients of the developed model may change depending on the food property. A generic model that may represent a food property is described below:
Food Property = f(Z1-n, P1-n)
Food Property = P1D1 + P2D2 + P3D3 + . . . + PnDn   (2)
It should be noted that even though the model (1) represented above shows a linear relationship between the texture attribute and the intensities, a quadratic or polynomial model may also be used to calculate the texture attributes. The food property may also be compensated for changes in the characteristics of the human saliva when the food snack is consumed. A table (table 1.0) as shown below may be used to measure food properties from a captured and processed acoustic signal. The values shown below in table 1.0 are for illustration purposes only and should not be construed as a limitation.
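For illustration purposes only, and separately from table 1.0, a minimal sketch of evaluating the linear model of equation (1) is shown below; the selected frequencies and coefficient values are hypothetical placeholders rather than values taught by this disclosure, and in practice the coefficients would be fit against descriptive expert panel scores as described herein.

```python
import numpy as np

# Hypothetical model coefficients (C1..Cn) keyed by relevant frequency in Hz.
# These placeholder values are for illustration only; actual coefficients are
# obtained by correlating the acoustic model with expert panel scores.
model_coefficients = {14000.0: 0.042, 31000.0: -0.017, 75000.0: 0.008}

def texture_attribute(freqs_hz, intensities_db, coefficients=model_coefficients):
    """Evaluate equation (1): attribute = I1*C1 + I2*C2 + ... + In*Cn.

    The intensity at each model frequency is read from the nearest bin of the
    processed acoustic spectrum (freqs_hz, intensities_db).
    """
    freqs_hz = np.asarray(freqs_hz)
    intensities_db = np.asarray(intensities_db)
    value = 0.0
    for f, c in coefficients.items():
        idx = int(np.argmin(np.abs(freqs_hz - f)))  # nearest spectral bin
        value += c * intensities_db[idx]
    return value

# Example: a synthetic spectrum sampled at 512 linearly spaced frequencies.
freqs = np.linspace(0.0, 80000.0, 512)
spectrum = np.random.default_rng(0).uniform(20.0, 60.0, size=freqs.size)
print("Hardness (arbitrary units):", texture_attribute(freqs, spectrum))
```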
As generally illustrated in
As generally illustrated in
The processing unit may include a digital signal processing unit (0703) and a statistical processing unit (0704). The digital signal processing unit (0703) may receive input from an input-output module (0702). The statistical processing unit (0704) may receive input from the digital signal processing unit (0703) and further process the input to find relevant frequencies for generating a quantitative acoustic model for a food snack. When an acoustic capturing device captures an acoustic signal, the signal may be forwarded to the DPU (0701) via the input-output module (0702). The input-output module (0702) may further comprise customized hardware such as an analog-to-digital converter (ADC) for capturing and processing a captured acoustic signal. The acoustic signal may be forwarded to the DPU using a wired or a wireless connection. The connection protocol and connecting conducting wires may be chosen such that there is minimum loss of signal and the signal-to-noise ratio is acceptable for further processing. A general purpose bus may carry data to and from different modules of the DPU (0701). It should be noted that the operation of the bus is beyond the scope of this invention.
The microcontroller (0707) may execute instructions from a memory or a ROM (0710). The instruction set of the microcontroller may be implemented to process the data of the acoustic signal. A custom instruction set may also be used by the microcontroller to prioritize and expedite the processing of the acoustic signal in real time during a manufacturing operation. The customization of the instruction set is beyond the scope of this invention. The logic controller may perform operations such as sequencing, prioritization and automation of tasks. The logic controller may also oversee the handshake protocol for the bus interface. According to an exemplary embodiment, the logic controller controls the logic for identifying relevant frequencies in an acoustic signal. The logic controller may comprise a matching module that contains predefined frequencies for a plurality of food snacks. The logic controller may subsequently match the captured frequencies in the acoustic signal and quickly determine the texture of the food snack and the quality of the texture. For example, the matching module may include specific frequencies such as 14000 Hz and 75000 Hz. When a recorded acoustic signal comprises the frequency 14000 Hz or 75000 Hz, the logic controller may determine a match and alert the microcontroller with an interrupt signal. The microcontroller may then display the texture information on the display (0708) via the GUI (0709). The logic controller may further continuously monitor the state of input devices and make decisions based upon a custom program to control the state of output devices.
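For illustration purposes only, a minimal sketch of such a matching operation is shown below; the snack names, reference frequencies and tolerance are hypothetical example values, not values prescribed by this disclosure.

```python
def match_food_snack(captured_freqs_hz, fingerprint_db, tolerance_hz=250.0):
    """Return the first food snack whose predefined frequencies all appear in
    the captured signal within a tolerance, or None if nothing matches."""
    for snack, reference_freqs in fingerprint_db.items():
        if all(any(abs(f - rf) <= tolerance_hz for f in captured_freqs_hz)
               for rf in reference_freqs):
            return snack
    return None

# Hypothetical predefined frequencies stored by the matching module.
fingerprints = {
    "potato chip A": [14000.0, 75000.0],
    "potato chip B": [9000.0, 42000.0],
}

captured = [14020.0, 33000.0, 74880.0]  # peaks found in a recorded signal
print(match_food_snack(captured, fingerprints))  # -> "potato chip A"
```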
Similar to the digital signal processing unit (0703) shown in
According to an exemplary embodiment, the acoustic smoothing module (0801) receives input from an input-output module in a data processing unit and smoothens the received raw acoustic signal. Acoustic signals are inherently noisy and the data is discrete. The acoustic signals may be represented as Intensity (dB) vs. Time (secs or microseconds). The data is made continuous by applying a windowing function to the discrete data. Windowing functions that may be applied to the discrete data include the Bartlett, Blackman, flat top, Hanning, Hamming, Kaiser-Bessel, Tukey and Welch windowing functions. A smoothing window with good frequency resolution and low spectral leakage for a random signal type may be chosen to smoothen the data. It should be noted that any commonly known windowing function may be applied to a raw acoustic signal to smoothen and interpolate the raw acoustic data.
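For illustration purposes only, a minimal sketch of applying one such windowing function to discrete acoustic samples is shown below, assuming a Hanning window, an assumed sampling rate, and synthetic data in place of a recorded chewing signal.

```python
import numpy as np

fs = 44100                                  # assumed sampling rate in Hz
rng = np.random.default_rng(1)
raw_signal = rng.normal(size=fs)            # one second of synthetic samples

# Apply a Hanning window to the discrete data; Bartlett, Blackman, Hamming,
# Kaiser or Tukey windows could be substituted depending on the frequency
# resolution and spectral leakage desired for the signal type.
window = np.hanning(raw_signal.size)
windowed_signal = raw_signal * window
```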
The smoothened acoustic signal from the smoothing module (0801) may be forwarded to a data transformation module (0802). The data transformation module (0802) may transform the acoustic signal represented in the time domain as Intensity (dB) vs. Time (secs) to the frequency domain as Intensity (dB) vs. Frequency (Hz) as generally shown in
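For illustration purposes only, a minimal sketch of such a time-domain to frequency-domain transformation using a fast Fourier transform is shown below; the sampling rate and the synthetic tone standing in for a chew event are assumptions made for the example.

```python
import numpy as np

fs = 44100                                   # assumed sampling rate in Hz
t = np.arange(fs) / fs                       # one second of samples
signal = np.sin(2 * np.pi * 1400 * t)        # synthetic tone in place of a chew event

# Transform Intensity vs. Time into Intensity (dB) vs. Frequency (Hz).
spectrum = np.fft.rfft(signal * np.hanning(signal.size))
freqs_hz = np.fft.rfftfreq(signal.size, d=1.0 / fs)
intensity_db = 20 * np.log10(np.abs(spectrum) + 1e-12)   # offset avoids log(0)
```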
The transformed frequency signal from the data transformation module (0802) may be noisy. A signal-to-noise enhancement module (0803) may receive the transformed signal from the data transformation module (0802) and enhance the signal-to-noise ratio of the signal for further processing. A technique for smoothing the data to increase the signal-to-noise ratio without greatly distorting the signal may be used. A process such as convolution may also be used to increase the signal-to-noise ratio. The convolution process may fit successive sub-sets of adjacent data points with a low-degree polynomial by the method of linear least squares. A normalization module (0804) may receive the enhanced signal-to-noise frequency domain signal from the signal-to-noise enhancement module (0803).
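The convolution described above follows the Savitzky-Golay approach; for illustration purposes only, a minimal sketch using that filter is shown below, with an assumed window length, polynomial order and synthetic spectrum.

```python
import numpy as np
from scipy.signal import savgol_filter

# A noisy frequency-domain trace standing in for the transformed signal.
rng = np.random.default_rng(2)
freqs_hz = np.linspace(0, 80000, 512)
intensity_db = (40 + 10 * np.exp(-((freqs_hz - 14000) / 4000) ** 2)
                + rng.normal(0, 2, freqs_hz.size))

# Fit successive subsets of adjacent points with a low-degree polynomial by
# linear least squares to raise the signal-to-noise ratio without greatly
# distorting the peaks; window length and order are illustrative choices.
enhanced_db = savgol_filter(intensity_db, window_length=21, polyorder=3)
```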
The DSP (0800) may also identify pertinent frequencies and associated intensities from the enhanced signal-to-noise frequency domain signal and store the information in a database. A texture attribute computing unit (0712) in the DPU (0701) may further retrieve the stored frequency and intensity information to compute a texture attribute of a food snack. After an acoustic model has been developed, the texture attribute computing unit (0712) may store coefficients for different food snacks. The texture attribute computing unit (0712) may then retrieve the stored coefficients and the stored frequency and intensity information to compute a texture attribute measurement or to fingerprint a food snack.
Similar to the statistical processing unit (0704) shown in
The smoothened, transformed and normalized signal from the digital signal processing unit (0703) is forwarded to the SPU (0704) for developing a texture attribute model with good correlation. The high dimensionality of spectral data requires statistical filtering to build meaningful models. For example, the acoustically smoothed signal may be sampled at 512 linearly spaced frequencies, and each value may be averaged across replicates and used to create a statistical model. According to a preferred exemplary embodiment, the dimensionality regression module reduces the total frequencies of the spectral data to a reasonably acceptable number for model development with high correlation. According to another preferred exemplary embodiment, dimensionality reduction of the frequencies for variable selection is performed; in the foregoing example, the total frequencies may be reduced from 512 to 18.
The data from the dimensionality regression module (0901) may be processed with a variance inflation factors (VIF) module (0902). The VIF module measures how much the variance of the estimated regression coefficients is inflated compared to when the predictor variables are not linearly related. The VIF is used to describe how much multicollinearity (correlation between predictors) exists in a regression analysis. Multicollinearity is problematic because it can increase the variance of the regression coefficients, making them unstable and difficult to interpret. The square root of the variance inflation factor indicates how much larger the standard error is compared with what it would be if that variable were uncorrelated with the other predictor variables in the model. For example, if the variance inflation factor of a predictor variable were 5.27 (√5.27 = 2.3), the standard error for the coefficient of that predictor variable would be 2.3 times as large as it would be if that predictor variable were uncorrelated with the other predictor variables.
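For illustration purposes only, a minimal sketch of computing variance inflation factors for a set of candidate predictors is shown below, assuming the statsmodels library; the data are synthetic and one column is made deliberately collinear for the example.

```python
import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Rows are replicate recordings; columns are candidate predictor intensities
# at selected frequencies (synthetic values for illustration).
rng = np.random.default_rng(3)
X = rng.normal(size=(40, 6))
X[:, 5] = X[:, 0] + 0.1 * rng.normal(size=40)   # deliberately collinear column

# VIF for each predictor; large values flag multicollinearity, and such
# frequencies may be dropped before building the texture attribute model.
for i in range(X.shape[1]):
    vif = variance_inflation_factor(X, i)
    print(f"predictor {i}: VIF = {vif:.2f}, std. error inflation = {np.sqrt(vif):.2f}x")
```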
The data from the variance inflation factors (VIF) module (0902) may further be processed with a principal component analysis module (0903). Principal component analysis (PCA) is a technique used to emphasize variation and bring out strong patterns in a dataset, and it is often used to make data easier to explore and visualize. As defined in the art, principal component analysis is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. The number of principal components is less than or equal to the number of original variables. This transformation is defined in such a way that the first principal component has the largest possible variance (that is, accounts for as much of the variability in the data as possible), and each succeeding component in turn has the highest variance possible under the constraint that it is orthogonal to (i.e., uncorrelated with) the preceding components. According to a preferred exemplary embodiment, a principal component analysis is used to determine the most relevant frequencies in the acoustic signal for developing a quantitative acoustic texture model. It should be noted that any other analysis technique known in the art may be used to identify principal components such as the relevant frequencies.
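For illustration purposes only, a minimal sketch of applying principal component analysis to a set of spectra and reading off candidate frequencies is shown below; the spectra are synthetic and the number of retained components is an assumption.

```python
import numpy as np
from sklearn.decomposition import PCA

# Rows are replicate recordings; columns are the 512 sampled frequencies
# (synthetic spectra for illustration).
rng = np.random.default_rng(4)
spectra = rng.normal(size=(40, 512))
freqs_hz = np.linspace(0, 80000, 512)

# Orthogonal transformation into uncorrelated components ordered by variance.
pca = PCA(n_components=5)
scores = pca.fit_transform(spectra)

# Frequencies with the largest absolute loadings on the first component are
# candidates for the most relevant frequencies of the acoustic model.
top_bins = np.argsort(np.abs(pca.components_[0]))[-10:]
print("candidate frequencies (Hz):", np.sort(freqs_hz[top_bins]).round(1))
```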
The data from the PCA module (0903) is further regressed with a best subsets regression module (0904), which is used to determine which of these most relevant frequencies are best for building a texture attribute model with good correlation. An R2 value greater than 0.9 may be considered a good correlation between the measured value from the model and the descriptive expert panel number.
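For illustration purposes only, a minimal sketch of a best-subsets search against descriptive panel scores is shown below; the predictor intensities, panel scores and maximum subset size are synthetic assumptions for the example.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LinearRegression

# Candidate predictor intensities at the most relevant frequencies and the
# descriptive expert panel scores they should predict (synthetic values).
rng = np.random.default_rng(5)
X = rng.normal(size=(30, 6))
panel_score = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0, 0.2, 30)

# Fit every subset of up to three predictors and keep the subset with the
# highest R^2 against the panel scores; R^2 > 0.9 indicates good correlation.
best_subset, best_r2 = None, -np.inf
for k in range(1, 4):
    for subset in combinations(range(X.shape[1]), k):
        cols = list(subset)
        r2 = LinearRegression().fit(X[:, cols], panel_score).score(X[:, cols], panel_score)
        if r2 > best_r2:
            best_subset, best_r2 = subset, r2

print(f"best subset {best_subset} with R^2 = {best_r2:.3f}")
```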
As generally shown in
This general method summary may be augmented by the various elements described herein to produce a wide variety of invention embodiments consistent with this overall design description. According to a preferred exemplary embodiment, when a food or beverage item is consumed, a texture attribute may be measured from the acoustic fingerprint of each food and beverage item, which includes the interaction with human saliva. For example, sweeteners may be differentiated at the concentrations at which they are found in beverages, such as a Diet Pepsi® vs. a regular Pepsi®. Because different sweeteners can have different interactions with human saliva given their chemical composition, the mixture of the beverage and the saliva produces viscosity differences that can be modeled with an in-situ model as described above in
As generally shown in
This general method summary may be augmented by the various elements described herein to produce a wide variety of invention embodiments consistent with this overall design description.
As generally shown in
This general method summary may be augmented by the various elements described herein to produce a wide variety of invention embodiments consistent with this overall design description.
It should be noted that the method used to generate the aforementioned texture attribute model may be used to generate models for other food properties such as moisture, solids content, oil content, slice thickness, density, blister density and topical seasonings. The relevant frequencies, the associated intensities and the coefficients of the developed model may change depending on the food property that is measured with the acoustic method.
As generally illustrated in
As generally shown in
This general method summary may be augmented by the various elements described herein to produce a wide variety of invention embodiments consistent with this overall design description.
As generally shown in
As generally shown in
The above method enables a human being to distinguish and identify foods or beverages by the simple act of consumption and recording of the acoustic signal. For example, a sweetened beverage can be distinguished from another sweetened beverage by consuming the two beverages separately and recording the acoustic signals. The acoustic signals may then be matched against a preexisting database and identified. The exemplary method (1600) may be utilized to conduct blind taste testing and target specific responses of the taste testing. A harder food snack may generate an acoustic signal associated with frequencies and intensities that are different than those of a softer food snack. Similarly, a food snack with a greater oil content may generate an acoustic signal associated with frequencies and intensities that are different than those of a food snack with less oil content. Likewise, an acidic beverage may generate an acoustic signal associated with frequencies and intensities that are different than those of a non-acidic beverage. This general method summary may be augmented by the various elements described herein to produce a wide variety of invention embodiments consistent with this overall design description.
As generally illustrated in
As generally illustrated in
A discrete feedback method for controlling a texture attribute of a food product continuously output from a food processing unit, the method comprises the steps of:
This general method summary may be augmented by the various elements described herein to produce a wide variety of invention embodiments consistent with this overall design description.
A discrete feedback system for controlling texture of a food product in a continuous manufacturing process using the method described above in
According to another preferred exemplary embodiment, a discrete feedforward system for controlling texture of a food product in a continuous manufacturing process may comprise a food pre-processing unit; a food processing unit; a texture measuring tool positioned downstream from the food pre-processing unit, wherein the texture measuring tool is configured to quantitatively measure an input attribute of food ingredients that are input to said food pre-processing unit when a human being eats or drinks a portion of the food ingredients, and an acoustic capturing device to capture the acoustic signal generated by the eating activity; and a controller controlling a plurality of input parameters to the food processing unit and the food pre-processing unit based on input from the texture measuring tool. A feedforward method for controlling output texture of a food product using the aforementioned feedforward system may be generally described in terms of the following steps:
As generally illustrated in
As generally illustrated in
As generally illustrated in
The present invention system anticipates a wide variety of variations in the basic theme of in-situ texture measurement with an apparatus that includes an acoustic capturing device and a data processing unit. When a human being eats or drinks a food snack, the physical interaction in the mouth sends pressure waves that propagate through the ear bone and produce an acoustic signal. The acoustic capturing device records and forwards the signal to a data processing unit. The data processing unit further comprises a digital signal processing module that smoothens, transforms and filters the received acoustic signal. A statistical processing module further filters the acoustic signal from the data processing unit and generates a quantitative acoustic model for texture attributes such as hardness and fracturability. The quantitative model is correlated with a qualitative texture measurement from a descriptive expert panel. Another method provides food snack fingerprinting using an in-situ quantitative food property measurement.
This general system summary may be augmented by the various elements described herein to produce a wide variety of invention embodiments consistent with this overall design description.
The present invention method anticipates a wide variety of variations in the basic theme of implementation, but can be generalized as a method of quantitatively measuring texture of a food snack, the method comprises the steps of:
This general method summary may be augmented by the various elements described herein to produce a wide variety of invention embodiments consistent with this overall design description.
The present invention anticipates a wide variety of variations in the basic theme of in-situ quantitative texture attribute measurement. The examples presented previously do not represent the entire scope of possible usages. They are meant to cite a few of the almost limitless possibilities.
This basic system and method may be augmented with a variety of ancillary embodiments, including but not limited to:
One skilled in the art will recognize that other embodiments are possible based on combinations of elements taught within the above invention description.
The present invention system anticipates a wide variety of variations in the basic theme of a discrete feedback system for controlling texture of a food snack in a manufacturing process. The system comprises an in-situ texture measuring tool positioned downstream of a food processing unit, along with a human being who consumes a food snack from the food processing unit at a set interval. The in-situ tool quantitatively measures a texture attribute of the food snack when the human being consumes the food snack. When the texture attribute is outside of an acceptable limit, the human being controls input parameters to the food processing unit such that a subsequent texture attribute of a food snack output from the food processing unit falls within the acceptable limit.
This general system summary may be augmented by the various elements described herein to produce a wide variety of invention embodiments consistent with this overall design description.
The present invention method anticipates a wide variety of variations in the basic theme of implementation, but can be generalized as a method of quantitatively measuring texture of a food snack, the method comprises the steps of:
This general method summary may be augmented by the various elements described herein to produce a wide variety of invention embodiments consistent with this overall design description.
The present invention claims priority to U.S. Provisional Application No. 62/303,511 filed Mar. 4, 2016. Additionally, the present invention claims priority to U.S. application Ser. No. 15/380,622 filed Dec. 15, 2016, which is a Continuation of U.S. application Ser. No. 14/864,593 filed Sep. 24, 2015, now U.S. Pat. No. 9,541,537 issued Jan. 10, 2017. Lastly, the present invention claims priority to U.S. application Ser. No. 14/864,728 filed Sep. 24, 2015.