The present disclosure relates generally to refrigerator appliances, and more particularly to a multi-camera vision system in a refrigerator appliance and methods of operating the same.
Refrigerator appliances generally include a cabinet that defines a chilled chamber for receipt of food articles for storage. In addition, refrigerator appliances include one or more doors rotatably hinged to the cabinet to permit selective access to food items stored in the chilled chamber(s). The refrigerator appliances can also include various storage components mounted within the chilled chamber and designed to facilitate storage of food items therein. Such storage components can include racks, bins, shelves, or drawers that receive food items and assist with organizing and arranging such food items within the chilled chamber.
Notably, it is frequently desirable to have an updated inventory of items that are present within the refrigerator appliance, for example (e.g.), to facilitate reorders, to ensure food freshness or avoid spoilage, etcetera (etc.). Thus, it may be desirable to monitor food items that are added to or removed from the refrigerator appliance and obtain other information related to the presence, quantity, or quality of such food items. Certain conventional refrigerator appliances have systems for monitoring food items in the refrigerator appliance. However, such systems often require user interaction, e.g., via direct input through a control panel as to the food items added or removed. By contrast, certain appliances include a camera for monitoring food items as they are added or removed from the refrigerator appliance. However, conventional camera systems may have trouble identifying a particular object, distinguishing between similar products, and precisely identifying the location of an object within the chilled chamber. In particular, conventional camera systems that include a single or limited number of cameras may have trouble performing such tasks.
Accordingly, a refrigerator appliance with systems for improved inventory management would be useful. More particularly, a refrigerator appliance that includes an inventory management system having a multi-camera system that is capable of monitoring entering and exiting inventory along with the positioning of objects within the chilled chamber would be particularly beneficial.
Aspects and advantages of the present disclosure will be set forth in part in the following description, or can be learned from the description, or can be learned through practice of the embodiments.
In one example embodiment, a refrigerator appliance is provided. The refrigerator appliance can include a cabinet defining a chilled chamber. The refrigerator appliance can further include a door being rotatably hinged to the cabinet to provide selective access to the chilled chamber. The refrigerator appliance can further include a camera assembly that can be coupled to the cabinet and operable to monitor the chilled chamber. The camera assembly can include a plurality of cameras that can be coupled to a plurality of electrical cables. The plurality of cameras can be operable to concurrently capture data associated with the chilled chamber. Each camera of the plurality of cameras can be coupled to an electrical cable of the plurality of electrical cables. The camera assembly can further include a multiplexer device that can be coupled to the plurality of electrical cables. The multiplexer device can be operable to multiplex different data signals concurrently provided to the multiplexer device by the plurality of cameras via the plurality of electrical cables and output a multiplex signal having the different data signals. The different data signals can include the data associated with the chilled chamber. The camera assembly can further include a controller coupled to the multiplexer device. The controller can be configured to perform one or more operations based at least in part on receipt of the multiplex signal.
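By way of a non-limiting illustration, the time-division behavior of such a multiplexer device can be sketched in a few lines of Python. The `Frame` container, camera identifiers, and byte payloads below are hypothetical stand-ins for the concurrently captured data signals described above, not an implementation of any particular hardware multiplexer:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    camera_id: int   # which camera produced this payload
    payload: bytes   # image data captured from the chilled chamber

def multiplex(camera_streams):
    """Interleave frames from several cameras into one multiplex signal.

    camera_streams: list of per-camera frame lists captured concurrently.
    Returns a single ordered stream carrying every camera's data.
    """
    multiplexed = []
    for slot in zip(*camera_streams):   # one time slot per capture round
        multiplexed.extend(slot)        # one frame per camera per slot
    return multiplexed

# Hypothetical example: two cameras, two capture cycles each.
cam0 = [Frame(0, b"a0"), Frame(0, b"a1")]
cam1 = [Frame(1, b"b0"), Frame(1, b"b1")]
signal = multiplex([cam0, cam1])
print([f.camera_id for f in signal])  # [0, 1, 0, 1]
```

The single output stream plays the role of the multiplex signal provided to the controller: every camera's data is present, tagged by a channel identifier so the different data signals remain distinguishable.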
In another example embodiment, a method of implementing inventory management within a refrigerator appliance is provided. The refrigerator appliance can include a chilled chamber and a camera assembly having a plurality of cameras positioned to monitor the chilled chamber. The method can include obtaining, by a controller operatively coupled to the camera assembly, a multiplex signal from a multiplexer device coupled to the controller. The multiplex signal can include different data signals concurrently provided to the multiplexer device by the plurality of cameras via a plurality of electrical cables coupled to the multiplexer device and the plurality of cameras. The different data signals can include data associated with the chilled chamber. The method can further include performing, by the controller, one or more operations based at least in part on receipt of the multiplex signal from the multiplexer device.
In another example embodiment, a refrigerator appliance is provided. The refrigerator appliance can include a cabinet defining a chilled chamber. The refrigerator appliance can further include a door being rotatably hinged to the cabinet to provide selective access to the chilled chamber. The refrigerator appliance can further include a camera assembly that can be coupled to the cabinet and operable to monitor the chilled chamber. The camera assembly can include a first multiplexer device that can be coupled to a first pair of cameras and a first electrical cable. The first multiplexer device can be operable to output a first multiplex signal onto the first electrical cable. The first multiplex signal can include different first data signals concurrently provided to the first multiplexer device by the first pair of cameras. The camera assembly can further include a second multiplexer device that can be coupled to a second pair of cameras and a second electrical cable. The second multiplexer device can be operable to output a second multiplex signal onto the second electrical cable. The second multiplex signal can include different second data signals concurrently provided to the second multiplexer device by the second pair of cameras. The camera assembly can further include a demultiplexer device that can be coupled to the first electrical cable and the second electrical cable. The demultiplexer device can be operable to demultiplex the first multiplex signal into the different first data signals and the second multiplex signal into the different second data signals. The camera assembly can further include a controller that can be coupled to the demultiplexer device. The controller can be configured to perform one or more operations based at least in part on receipt of at least one of the different first data signals or the different second data signals.
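As a non-limiting sketch, the demultiplexer operation described above (splitting a multiplex signal back into its constituent per-camera data signals) can be illustrated in Python; the `(camera_id, payload)` pairs are hypothetical stand-ins for real electrical signals:

```python
from collections import defaultdict

def demultiplex(signal):
    """Split a multiplex signal back into per-camera data signals,
    keyed by the channel identifier attached at the multiplexer."""
    streams = defaultdict(list)
    for camera_id, payload in signal:
        streams[camera_id].append(payload)
    return dict(streams)

# Hypothetical multiplex signal: (camera_id, payload) pairs
# interleaved by an upstream multiplexer device.
mux = [(0, b"f0"), (1, b"f1"), (0, b"f2"), (1, b"f3")]
per_camera = demultiplex(mux)
print(per_camera[0])  # [b'f0', b'f2']
```

After demultiplexing, the controller can operate on each camera's data signal independently, e.g., to analyze images from different vantage points of the chilled chamber.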
These and other features, aspects, and advantages of various embodiments of the present disclosure will become better understood with reference to the following description and appended claims. The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the present disclosure and, together with the description, serve to explain the related principles of the present disclosure.
A full and enabling disclosure of the present disclosure, including the best mode thereof, directed to one of ordinary skill in the art, is set forth in the specification, which makes reference to the appended figures.
Repeat use of reference characters and/or numerals in the present specification and/or drawings is intended to represent the same or analogous features, elements, or operations of the present disclosure.
Reference now will be made in detail to embodiments of the present disclosure, one or more examples of which are illustrated in the drawings. Each example is provided by way of explanation of the present disclosure, not limitation of the disclosure. In fact, it will be apparent to those skilled in the art that various modifications and variations can be made in the present disclosure without departing from the scope or spirit of the disclosure. For instance, features illustrated or described as part of one embodiment can be used with another embodiment to yield a still further embodiment. Thus, it is intended that the present disclosure covers such modifications and variations as come within the scope of the appended claims and their equivalents.
As referenced herein, the term “entity” refers to a human, a user, an end-user, a consumer, a computing device and/or program (e.g., a processor, computing hardware and/or software, an application, etc.), an agent, a machine learning (ML) and/or artificial intelligence (AI) algorithm, model, system, and/or application, and/or another type of entity that can implement and/or facilitate implementation of one or more embodiments of the present disclosure as described herein, illustrated in the accompanying drawings, and/or included in the appended claims. As used herein, the terms “couple,” “couples,” “coupled,” and/or “coupling” refer to chemical coupling (e.g., chemical bonding), communicative coupling, electrical and/or electromagnetic coupling (e.g., capacitive coupling, inductive coupling, direct and/or connected coupling, etc.), mechanical coupling, operative coupling, optical coupling, and/or physical coupling.
As used herein, the terms “upstream” and “downstream” refer to the relative flow direction with respect to fluid flow in a fluid pathway. For example, “upstream” refers to the flow direction from which the fluid flows, and “downstream” refers to the flow direction to which the fluid flows. As referred to herein, the terms “includes” and “including” are intended to be inclusive in a manner similar to the term “comprising.” As referenced herein, the terms “or” and “and/or” are generally intended to be inclusive, that is (i.e.), “A or B” or “A and/or B” are each intended to mean “A or B or both.” As referred to herein, the terms “first,” “second,” “third,” and so on, can be used interchangeably to distinguish one component or entity from another and are not intended to signify location, functionality, or importance of the individual components or entities.
Approximating language, as used herein throughout the specification and claims, is applied to modify any quantitative representation that could permissibly vary without resulting in a change in the basic function to which it is related. Accordingly, a value modified by a term or terms, such as “about,” “approximately,” and “substantially,” is not to be limited to the precise value specified. In at least some instances, the approximating language can correspond to the precision of an instrument for measuring the value. For example, the approximating language can refer to being within a 10 percent margin.
Referring now to the figures, example refrigerator appliances, inventory management systems, camera assemblies, and corresponding methods of operation will be described in accordance with one or more embodiments of the present disclosure.
According to example embodiments, refrigerator appliance 100 includes a cabinet 102 that is generally configured for containing and/or supporting various components of refrigerator appliance 100 and which can also define one or more internal chambers or compartments of refrigerator appliance 100. In this regard, as used herein, the terms “cabinet,” “housing,” and the like are generally intended to refer to an outer frame or support structure for refrigerator appliance 100, for example (e.g.), including any suitable number, type, and configuration of support structures formed from any suitable materials, such as a system of elongated support members, a plurality of interconnected panels, or some combination thereof. It should be appreciated that cabinet 102 does not necessarily require an enclosure and can simply include open structure supporting various elements of refrigerator appliance 100. By contrast, cabinet 102 can enclose some or all portions of an interior of cabinet 102. It should be appreciated that cabinet 102 can have any suitable size, shape, and configuration while remaining within the scope of the present disclosure.
As illustrated, cabinet 102 generally extends between a top 104 and a bottom 106 along the vertical direction V, between a first side 108 (e.g., the left side when viewed from the front as in
Cabinet 102 defines chilled chambers for receipt of food items for storage. In particular, cabinet 102 defines fresh food chamber 122 positioned at or adjacent top 104 of cabinet 102 and a freezer chamber 124 arranged at or adjacent bottom 106 of cabinet 102. As such, refrigerator appliance 100 is generally referred to as a bottom mount refrigerator. It is recognized, however, that the benefits of the present disclosure apply to other types and styles of refrigerator appliances such as, e.g., a top mount refrigerator appliance, a side-by-side style refrigerator appliance, or a single door refrigerator appliance. Moreover, aspects of the present disclosure can be applied to other appliances as well. Consequently, the description set forth herein is for illustrative purposes only and is not intended to be limiting in any aspect to any particular appliance or configuration.
Refrigerator doors 128 are rotatably hinged to an edge of cabinet 102 for selectively accessing fresh food chamber 122. In addition, a freezer door 130 is arranged below refrigerator doors 128 for selectively accessing freezer chamber 124. Freezer door 130 is coupled to a freezer drawer (not shown) slidably mounted within freezer chamber 124. In general, refrigerator doors 128 form a seal over a front opening 132 (
Referring again to
Dispensing assembly 140 and its various components can be positioned at least in part within a dispenser recess 142 defined on one of refrigerator doors 128. In this regard, dispenser recess 142 is defined on a front side 112 of refrigerator appliance 100 such that a user can operate dispensing assembly 140 without opening refrigerator door 128. In addition, dispenser recess 142 is positioned at a predetermined elevation convenient for a user to access ice and enabling the user to access ice without the need to bend over. In the example embodiment, dispenser recess 142 is positioned at a level that approximates the chest level of a user.
Dispensing assembly 140 includes an ice or water dispenser 144 including a discharging outlet 146 for discharging ice from dispensing assembly 140. An actuating mechanism 148, shown as a paddle, is mounted below discharging outlet 146 for operating ice or water dispenser 144. In alternative example embodiments, any suitable actuating mechanism can be used to operate ice dispenser 144. For example, ice or water dispenser 144 can include a sensor (e.g., an ultrasonic sensor) or a button rather than the paddle. Discharging outlet 146 and actuating mechanism 148 are an external part of ice or water dispenser 144 and are mounted in dispenser recess 142. By contrast, refrigerator door 128 can define an icebox compartment 150 (
A control panel 152 is provided for controlling the mode of operation. For example, control panel 152 includes one or more selector inputs 154 (e.g., knobs, buttons, touchscreen interfaces, etcetera (etc.)), such as a water dispensing button and an ice-dispensing button, for selecting a desired mode of operation such as crushed or non-crushed ice. In addition, inputs 154 can be used to specify a fill volume or method of operating dispensing assembly 140. In this regard, inputs 154 can be in communication with a processing device or controller 156. Signals generated in controller 156 operate refrigerator appliance 100 and dispensing assembly 140 in response to selector input(s) 154. Additionally, a display 158, such as an indicator light or a screen, can be provided on control panel 152. Display 158 can be in communication with controller 156 and can display information in response to signals from controller 156.
As used herein, “processing device” or “controller” can refer to one or more microprocessors or semiconductor devices and is not necessarily restricted to a single element. The processing device or controller (e.g., controller 156) can be programmed to operate refrigerator appliance 100, dispensing assembly 140, and one or more other components of refrigerator appliance 100. The processing device or controller (e.g., controller 156) can include, or be associated with, one or more memory elements (e.g., non-transitory storage media, non-transitory computer-readable storage media). In some embodiments, such memory element(s) include electrically erasable, programmable read only memory (EEPROM). Generally, the memory element(s) can store information accessible by a processing device or controller (e.g., controller 156), including instructions that can be executed by the processing device or controller. Optionally, the instructions can be software or any set of instructions and/or data that, when executed by the processing device or controller (e.g., controller 156), cause the processing device to perform operations.
Referring still to
For example, external communication system 170 permits controller 156 of refrigerator appliance 100 to communicate with a separate device external to refrigerator appliance 100, referred to generally herein as an external device 172. As described in more detail below, these communications can be facilitated using a wired or wireless connection, such as via a network 174. In general, external device 172 can be any suitable device separate from refrigerator appliance 100 that is configured to provide and/or receive communications, information, data, or commands from a user. In this regard, external device 172 can be, for example, a personal phone, a smartphone, a tablet, a laptop or personal computer, a wearable device, a smart home system, or another mobile or remote device.
In addition, a remote server 176 can be in communication with refrigerator appliance 100 and/or external device 172 through network 174. In this regard, for example, remote server 176 can be a cloud-based server, and is thus located at a distant location, such as in a separate state, country, etc. According to an example embodiment, external device 172 can communicate with a remote server 176 over network 174, such as the Internet, to transmit and/or receive data or information, provide user inputs, receive user notifications or instructions, interact with or control refrigerator appliance 100, etc. In addition, external device 172 and remote server 176 can communicate with refrigerator appliance 100 to communicate similar information. According to example embodiments, remote server 176 can be configured to receive and analyze images, video, audio, and/or other data obtained by a camera assembly 190 (
In general, communication between refrigerator appliance 100, external device 172, remote server 176, and/or other user devices or appliances can be carried out using any type of wired or wireless connection and using any suitable type of communication network, non-limiting examples of which are provided below. For example, external device 172 can be in direct or indirect communication with refrigerator appliance 100 through any suitable wired or wireless communication connections or interfaces, such as network 174. For example, network 174 can include one or more of a local area network (LAN), a wide area network (WAN), a personal area network (PAN), the Internet, a cellular network, any other suitable short-range or long-range wireless networks, etc. In addition, communications can be transmitted using any suitable communications devices or protocols, such as via Wi-Fi®, Bluetooth®, Zigbee®, wireless radio, laser, infrared, Ethernet type devices and interfaces, etc. In addition, such communication can use a variety of communication protocols (e.g., transmission control protocol/internet protocol (TCP/IP), hypertext transfer protocol (HTTP), simple mail transfer protocol (SMTP), file transfer protocol (FTP), etc.), encodings or formats (e.g., hypertext markup language (HTML), extensible markup language (XML), etc.), and/or protection schemes (e.g., virtual private network (VPN), secure HTTP, secure shell (SSH), secure sockets layer (SSL), etc.).
External communication system 170 is described herein according to an example embodiment of the present disclosure. However, it should be appreciated that the example functions and configurations of external communication system 170 provided herein are used only as examples to facilitate description of aspects of the present disclosure. System configurations can vary, other communication devices can be used to communicate directly or indirectly with one or more associated appliances, other communication protocols and steps can be implemented, etc. These variations and modifications are contemplated as within the scope of the present disclosure.
Referring now generally to
As shown schematically in
Although a single camera 192 is illustrated in
Notably, however, it can be desirable to position each camera 192 proximate front opening 132 of fresh food chamber 122 and orient each camera 192 such that the field of view of each camera 192 is directed into fresh food chamber 122. In this manner, privacy concerns related to obtaining images of the user of the refrigerator appliance 100 can be mitigated or avoided altogether. According to example embodiments, camera assembly 190 can be used to facilitate an inventory management process for refrigerator appliance 100. As such, each camera 192 can be positioned at an opening to fresh food chamber 122 to monitor objects 182 (e.g., food items, beverages) that are being added to or removed from fresh food chamber 122.
According to still other embodiments, each camera 192 can be oriented in any other suitable manner for monitoring any other suitable region within or around refrigerator appliance 100. It should be appreciated that according to alternative embodiments, camera assembly 190 can include any suitable number, type, size, and configuration of camera(s) 192 for obtaining images of any suitable areas or regions within or around refrigerator appliance 100. In addition, it should be appreciated that each camera 192 can include features for adjusting its field of view and/or orientation.
It should be appreciated that the images and/or video obtained by camera assembly 190 can vary in number, frequency, angle, resolution, detail, etc. in order to improve the clarity of the particular regions surrounding or within refrigerator appliance 100. In addition, according to example embodiments, controller 156 can be configured for illuminating the chilled chamber using one or more light sources prior to obtaining images. Notably, controller 156 of refrigerator appliance 100 (or any other suitable dedicated controller) can be communicatively coupled to camera assembly 190 and can be programmed or configured for analyzing the images obtained by camera assembly 190, e.g., in order to identify items being added or removed from refrigerator appliance 100, as described in detail below.
In general, controller 156 can be coupled (e.g., electrically, communicatively, operatively) to camera assembly 190 for analyzing one or more images and/or video obtained by camera assembly 190 to extract useful information regarding objects 182 located within fresh food chamber 122. In this regard, for example, images and/or video obtained by camera assembly 190 can be used to extract a barcode, identify a product, monitor the motion of the product, or obtain other product information related to object 182. Notably, this analysis can be performed locally (e.g., on controller 156) or can be transmitted to a remote server (e.g., remote server 176 via external communication system 170) for analysis. Such analysis is intended to facilitate inventory management, e.g., by identifying a food item being added to and/or removed from the chilled chamber.
Now that the construction and configuration of refrigerator appliance 100 and camera assembly 190 have been presented according to an example embodiment of the present disclosure, an example method 200 for operating camera assembly 190 is provided. Method 200 can be used to operate camera assembly 190, or to operate any other suitable camera assembly for monitoring appliance operation or inventory. In this regard, for example, controller 156 can be configured for implementing method 200. However, it should be appreciated that the example method 200 is discussed herein only to describe example aspects of the present disclosure and is not intended to be limiting.
As shown in
Notably, camera assembly 190 can obtain images upon any suitable trigger, such as a time-based imaging schedule where camera assembly 190 periodically images and monitors fresh food chamber 122. According to still other embodiments, camera assembly 190 can periodically take low resolution images until motion is detected (e.g., via image differentiation of low resolution images), at which time one or more high resolution images can be obtained. According to still other embodiments, refrigerator appliance 100 can include one or more motion sensors (e.g., optical, acoustic, electromagnetic, etc.) that are triggered when an object 182 is being added to or removed from fresh food chamber 122, and camera assembly 190 can be operably coupled to such motion sensors to obtain images of the object 182 during such movement.
According to still other embodiments, refrigerator appliance 100 can include a door switch that detects when refrigerator door 128 is opened, at which point camera assembly 190 can begin obtaining one or more images. According to example embodiments, the images 300, 302 can be obtained continuously or periodically while refrigerator doors 128 are open. In this regard, obtaining images 300, 302 can include determining that the door of the refrigerator appliance is open and capturing images at a set frame rate while the door is open. Notably, the motion of the food items between image frames can be used to determine whether the object 182 is being removed from or added into fresh food chamber 122. It should be appreciated that the images obtained by camera assembly 190 can vary in number, frequency, angle, resolution, detail, etc. in order to improve the clarity of objects 182. In addition, according to example embodiments, controller 156 can be configured for illuminating a refrigerator light (not shown) while obtaining images 300, 302. Other suitable triggers are possible and within the scope of the present disclosure.
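As an informal sketch of the door-triggered capture behavior described above, the Python below polls a door switch and captures frames at a set rate while the door remains open. The `door_is_open` and `capture_frame` callables are simulated placeholders, not appliance APIs, and the frame-rate and frame-cap values are illustrative assumptions:

```python
import time

def capture_while_door_open(door_is_open, capture_frame,
                            frame_rate_hz=2.0, max_frames=100):
    """Capture images at a set frame rate for as long as the door stays open."""
    frames = []
    period = 1.0 / frame_rate_hz
    while door_is_open() and len(frames) < max_frames:
        frames.append(capture_frame())
        time.sleep(period)
    return frames

# Simulated door switch: the door closes after three polls.
state = {"polls": 0}
def door_is_open():
    state["polls"] += 1
    return state["polls"] <= 3

def capture_frame():
    return f"frame{state['polls']}"

frames = capture_while_door_open(door_is_open, capture_frame, frame_rate_hz=100.0)
print(len(frames))  # 3
```

In an actual appliance, the door switch and camera driver would replace the simulated callables, and the captured frame sequence would then be analyzed for object motion as described below.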
Step 220 can include analyzing the first image using a machine learning image recognition process to identify an object in the first image. It should be appreciated that this analysis can utilize any suitable image analysis techniques, image decomposition, image segmentation, image processing, etc. This analysis can be performed entirely by controller 156, can be offloaded to a remote server for analysis, can be analyzed with user assistance (e.g., via control panel 152), or can be analyzed in any other suitable manner. According to example embodiments of the present disclosure, the analysis performed at step 220 can include a machine learning image recognition process.
According to example embodiments, this image analysis can use any suitable image processing technique, image recognition process, etc. As used herein, the terms “image analysis” and the like can be used generally to refer to any suitable method of observation, analysis, image decomposition, feature extraction, image classification, etc. of one or more images, videos, or other visual representations of an object. As explained in more detail below, this image analysis can include the implementation of image processing techniques, image recognition techniques, or any suitable combination thereof. In this regard, the image analysis can use any suitable image analysis software or algorithm to constantly or periodically monitor a moving object within fresh food chamber 122. It should be appreciated that this image analysis or processing can be performed locally (e.g., by controller 156) or remotely (e.g., by offloading image data to a remote server or network, e.g., remote server 176).
Specifically, the analysis of the one or more images can include implementation of an image processing algorithm. As used herein, the terms “image processing” and the like are generally intended to refer to any suitable methods or algorithms for analyzing images that do not rely on artificial intelligence or machine learning techniques (e.g., in contrast to the machine learning image recognition processes described below). For example, the image processing algorithm can rely on image differentiation, e.g., such as a pixel-by-pixel comparison of two sequential images. This comparison can help identify substantial differences between the sequentially obtained images, e.g., to identify movement, the presence of a particular object, the existence of a certain condition, etc. For example, one or more reference images can be obtained when a particular condition exists, and these reference images can be stored for future comparison with images obtained during appliance operation. Similarities and/or differences between the reference image and the obtained image can be used to extract useful information for improving appliance performance. For example, image differentiation can be used to determine when a pixel level motion metric passes a predetermined motion threshold.
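The pixel-by-pixel differentiation and motion-threshold idea above can be sketched minimally in Python. The flattened grayscale images, the per-pixel delta, and the 10 percent motion threshold below are hypothetical illustration values, not parameters taken from any particular embodiment:

```python
def pixel_motion_metric(reference, obtained, pixel_delta=30):
    """Pixel-by-pixel comparison of two equal-size images: the metric is
    the fraction of pixels whose grayscale value changed by more than
    pixel_delta between the reference image and the obtained image."""
    changed = sum(1 for r, o in zip(reference, obtained)
                  if abs(r - o) > pixel_delta)
    return changed / len(reference)

# Hypothetical flattened grayscale images (one intensity value per pixel).
reference = [50] * 16                # stored reference image: empty shelf
obtained  = [50] * 12 + [220] * 4    # a bright object now occupies 4 pixels
metric = pixel_motion_metric(reference, obtained)
print(metric)        # 0.25
print(metric > 0.1)  # motion threshold passed: True
```

Here 4 of 16 pixels changed substantially, so the motion metric (0.25) passes an assumed threshold of 0.1, which could in turn trigger further analysis or a high-resolution capture.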
The processing algorithm can further include measures for isolating or eliminating noise in the image comparison, e.g., due to image resolution, data transmission errors, inconsistent lighting, or other imaging errors. By eliminating such noise, the image processing algorithms can improve accurate object detection, avoid erroneous object detection, and isolate the important object, region, or pattern within an image. In addition, or alternatively, the image processing algorithms can use other suitable techniques for recognizing or identifying particular items or objects, such as edge matching, divide-and-conquer searching, greyscale matching, histograms of receptive field responses, or another suitable routine (e.g., executed at the controller 156 based on one or more captured images from one or more cameras). Other image processing techniques are possible and within the scope of the present disclosure.
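Of the non-AI recognition routines named above, greyscale matching is simple enough to sketch directly. One elementary form (an assumption for illustration, not the only form) slides a greyscale template across the image and scores each offset by the sum of absolute differences (SAD), where a lower score indicates a closer match:

```python
def greyscale_match(image, template):
    """Slide a greyscale template along a row of pixels and return the
    offset with the lowest sum of absolute differences (SAD).
    Lower SAD means a closer match."""
    best_offset, best_sad = None, float("inf")
    for offset in range(len(image) - len(template) + 1):
        sad = sum(abs(image[offset + i] - t) for i, t in enumerate(template))
        if sad < best_sad:
            best_offset, best_sad = offset, sad
    return best_offset, best_sad

# Hypothetical 1-D greyscale scanline containing a bright object profile.
scanline = [10, 10, 10, 80, 120, 80, 10, 10]
template = [80, 120, 80]
offset, sad = greyscale_match(scanline, template)
print(offset, sad)  # 3 0
```

A two-dimensional version would slide the template in both directions; combined with the noise-isolation measures described above, such matching can help locate a known object or pattern within a captured image.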
In addition to the image processing techniques described above, the image analysis can include utilizing artificial intelligence (AI), such as a machine learning image recognition process, a neural network classification module, any other suitable artificial intelligence (AI) technique, and/or any other suitable image analysis techniques, examples of which will be described in more detail below. Moreover, each of the example image analysis or evaluation processes described below can be used independently, collectively, or interchangeably to extract detailed information regarding the images being analyzed to facilitate performance of one or more methods described herein or to otherwise improve appliance operation. According to example embodiments, any suitable number and combination of image processing, image recognition, or other image analysis techniques can be used to obtain an accurate analysis of the obtained images.
In this regard, the image recognition process can use any suitable artificial intelligence technique, such as any suitable machine learning or deep learning technique. According to an example embodiment, the image recognition process can include the implementation of a form of image recognition called region based convolutional neural network (R-CNN) image recognition. Generally speaking, R-CNN can include taking an input image and extracting region proposals that include a potential object or region of an image. In this regard, a “region proposal” can be one or more regions in an image that could belong to a particular object or can include adjacent regions that share common pixel characteristics. A convolutional neural network is then used to compute features from the region proposals, and the extracted features are then used to determine a classification for each particular region.
According to still other embodiments, an image segmentation process can be used along with the R-CNN image recognition. In general, image segmentation creates a pixel-based mask for each object in an image and provides a more detailed or granular understanding of the various objects within a given image. In this regard, instead of processing an entire image—that is (i.e.), a large collection of pixels, many of which might not contain useful information—image segmentation can involve dividing an image into segments (e.g., into groups of pixels containing similar attributes) that can be analyzed independently or in parallel to obtain a more detailed representation of the object or objects in an image. This can be referred to herein as "mask R-CNN" and the like, as opposed to a regular R-CNN architecture. For example, mask R-CNN can be based on fast R-CNN, which differs slightly from R-CNN: fast R-CNN first applies a convolutional neural network (CNN) to the entire image and then allocates region proposals on the resulting feature map, instead of initially splitting the image into region proposals. In addition, according to example embodiments, standard CNN can be used to obtain, identify, or detect any other qualitative or quantitative data related to one or more objects or regions within the one or more images. In additional or alternative embodiments, a K-means algorithm can be used.
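The K-means idea mentioned above—grouping pixels with similar attributes into segments—can be sketched with a minimal clustering routine. This toy version clusters grayscale intensities only; a real segmentation would cluster richer per-pixel features, and the initialization and iteration count here are arbitrary choices for illustration.

```python
# Minimal K-means sketch for pixel-based segmentation of grayscale
# intensities: alternate assignment and centroid-update steps.

def kmeans_segment(pixels, k=2, iterations=10):
    """Cluster pixel intensities into k groups; return per-pixel labels
    and the final centroids."""
    # Initialize centroids spread evenly across the intensity range.
    lo, hi = min(pixels), max(pixels)
    centroids = [lo + (hi - lo) * i / max(k - 1, 1) for i in range(k)]
    labels = [0] * len(pixels)
    for _ in range(iterations):
        # Assignment step: attach each pixel to its nearest centroid.
        labels = [min(range(k), key=lambda c: abs(p - centroids[c]))
                  for p in pixels]
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for p, lab in zip(pixels, labels) if lab == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return labels, centroids
```

Pixels belonging to the same cluster would then form a segment (e.g., a candidate object mask) that can be analyzed independently, as the passage describes.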
According to still other embodiments, the image recognition process can use any other suitable neural network process while remaining within the scope of the present disclosure. For example, the step of analyzing the one or more images can include using a deep belief network (DBN) image recognition process. A DBN image recognition process can generally include stacking many individual unsupervised networks that use each network's hidden layer as the input for the next layer. According to still other embodiments, the step of analyzing one or more images can include the implementation of a deep neural network (DNN) image recognition process, which generally includes the use of a neural network (e.g., computing systems inspired by and/or based on biological neural networks) with multiple layers between input and output. Other suitable image recognition processes, neural network processes, artificial intelligence analysis techniques, and combinations of the above-described or other known methods can be used while remaining within the scope of the present disclosure.
In addition, it should be appreciated that various transfer learning techniques can be used, but use of such techniques is not required. If transfer learning is used, a neural network architecture such as VGG16, VGG19, or ResNet50 can be pretrained with a public dataset, and then the last layer can be retrained with an appliance-specific dataset. In addition, or alternatively, the image recognition process can include detection of certain conditions based on comparison of initial conditions and/or can rely on image subtraction techniques, image stacking techniques, image concatenation, etc. For example, the subtracted image can be used to train a neural network with multiple classes for future comparison and image classification.
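The image subtraction technique mentioned above can be sketched as a simple frame difference: subtract a "before" frame from an "after" frame and threshold to isolate changed pixels. This is a minimal illustration assuming 2D grayscale frames of equal size; the threshold value is an arbitrary choice.

```python
# Sketch of image subtraction for change detection: pixels whose
# intensity changed by more than a threshold are marked in a binary
# mask, isolating the added or removed object from the background.

def subtract_images(before, after, threshold=20):
    """Return a binary mask marking pixels that changed between frames."""
    mask = []
    for row_b, row_a in zip(before, after):
        mask.append([1 if abs(a - b) > threshold else 0
                     for b, a in zip(row_b, row_a)])
    return mask
```

The resulting mask (or the masked image region) is the kind of subtracted image the passage describes feeding into a multi-class neural network for classification.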
It should be appreciated that the machine learning image recognition models can be actively trained by the appliance with new images, can be supplied with training data from the manufacturer or from another remote source, or can be trained in any other suitable manner. For example, according to example embodiments, this image recognition process relies at least in part on a neural network trained with a plurality of images of the appliance in different configurations, experiencing different conditions, or being interacted with in different manners. This training data can be stored locally or remotely and can be communicated to a remote server for training other appliances and models.
It should be appreciated that image processing and machine learning image recognition processes can be used together to facilitate improved image analysis, object detection, or to extract other useful qualitative or quantitative data or information from the one or more images that can be used to improve the operation or performance of the appliance. Indeed, the methods described herein can use any or all of these techniques interchangeably to improve the image analysis process and facilitate improved appliance performance and consumer satisfaction. The image processing algorithms and machine learning image recognition processes described herein are only examples and are not intended to limit the scope of the present disclosure in any manner.
Step 230 can include obtaining a second image 302 using the camera assembly. For example, second image 302 can be obtained immediately after first image 300 is obtained at step 210. In general, first image 300 and second image 302 can both be obtained while object 182 is in the process of being inserted into or removed from fresh food chamber 122, such that the trajectory of object 182 can be determined, as described in more detail below.
Step 240 can include analyzing the second image using a machine learning image recognition process to identify the object in the second image. In this regard, step 240 can include similar image analysis as that described above with regard to step 220.
Referring now briefly to
It should be appreciated that the confidence score 310 can be increased by obtaining more images of the same object 182 at different angles, at different times, at different positions, etc. Accordingly, method 200 can further include the step of obtaining a third image using camera assembly 190, where the third image also contains object 182 from first image 300 and second image 302. Method 200 can further include analyzing the third image to identify the object in the third image and increasing the confidence score based at least in part on the analysis of the third image to identify the object. In this regard, if the machine learning model identifies a single object 182 as being the same orange, the confidence level can be increased, e.g., as shown from the object identifications in
Notably, confidence score 310 can be an output from the machine learning model and can be based on any suitable characteristics of the object 182 being monitored or tracked. For example, each object 182 can have identifiable features, such as stems, discolorations, imperfections, or other features which can be identifiable and associated with that particular object 182 (e.g., similar to a fingerprint for that object). The machine learning image recognition model can identify each object based on its particular fingerprint and can use identifiable features from other images to increase the accuracy of object identification. Although this comparison of multiple images to improve the confidence score of an object identification is described herein with respect to individual oranges or apples, it should be appreciated that the models can be extrapolated to the identification of any of a plurality of objects using any suitable number of images.
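The "fingerprint" comparison described above can be sketched as a similarity measure over identifiable feature sets, with confidence averaged across all available image pairs. This is a toy illustration: the feature tags are hypothetical, and a real model would compare learned feature vectors rather than symbolic tags.

```python
# Sketch of fingerprint-style matching: identifiable features (stems,
# discolorations, etc.) are represented as sets of hypothetical tags,
# and the overlap between two detections yields a match confidence.

def fingerprint_similarity(features_a, features_b):
    """Jaccard similarity between two feature sets, used here as the
    confidence that both detections are the same object."""
    if not features_a and not features_b:
        return 0.0
    shared = len(features_a & features_b)
    total = len(features_a | features_b)
    return shared / total

def update_confidence(pairwise_scores):
    """Combine match scores from multiple image pairs by averaging,
    so that additional images of the same object refine the score."""
    return sum(pairwise_scores) / len(pairwise_scores)
```

Obtaining a third or fourth image simply adds more pairwise scores to the average, which is one way the confidence score 310 can rise with additional views.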
Step 250 can include determining a motion vector of the object based on a position of the object in the first image and the second image. Specifically, as best illustrated in
In addition, identification of adjacent objects 182 of a plurality of objects and their associated motion vectors 320 can improve the confidence score 310 of an object identification and its associated motion vector 320. In this regard, for example, method 200 can include analyzing the first image 300 to identify a second object in the first image 300 (e.g., such as an apple positioned adjacent the orange). Method 200 can further include determining a spatial relationship between the first object 182 and the second object 182 (e.g., a relative positioning of the two objects in a three-dimensional space). Method 200 can further include determining a predicted motion vector of the second object (e.g., as identified generally by reference numeral 322) based at least in part on the motion vector 320 of the first object 182 and the spatial relationship between the first object and the second object.
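The motion vector determination of step 250 and the adjacent-object prediction described above can be sketched as simple coordinate arithmetic. This is an illustrative sketch assuming 2D pixel positions and a move-together (rigid-motion) assumption for adjacent objects; the chamber-axis convention is hypothetical.

```python
# Sketch of step 250: a motion vector from an object's position in two
# successive frames, a predicted position for an adjacent object that
# moves with it, and a trajectory classification (insertion vs. removal).

def motion_vector(pos_first, pos_second):
    """Displacement of the object between the first and second image."""
    return (pos_second[0] - pos_first[0], pos_second[1] - pos_first[1])

def predict_adjacent_position(adjacent_pos, vector):
    """Under a move-together assumption, the adjacent object's predicted
    motion vector equals the first object's, so its new position is its
    old position displaced by that vector."""
    return (adjacent_pos[0] + vector[0], adjacent_pos[1] + vector[1])

def is_insertion(vector, into_chamber_axis=(0, -1)):
    """Classify the trajectory: a positive projection onto the axis
    pointing into the chamber (a hypothetical convention here) means
    the object is being added rather than removed."""
    dot = vector[0] * into_chamber_axis[0] + vector[1] * into_chamber_axis[1]
    return dot > 0
```

For example, an object at (10, 50) in the first image and (10, 30) in the second has motion vector (0, -20); with the axis convention above, that trajectory classifies as an insertion, and an adjacent object at (20, 52) is predicted at (20, 32).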
Thus, method 200 can include obtaining a plurality of images of objects 182 being added to or removed from the chilled chamber. In this regard, continuing the example from above, controller 156 or another suitable processing device can analyze these images to identify objects 182 and/or their trajectories into or out of fresh food chamber 122 and/or freezer chamber 124. By identifying whether objects 182 are being added to or removed from fresh food chamber 122 and/or freezer chamber 124, controller 156 can monitor and track inventory within refrigerator appliance 100. For example, controller 156 can maintain a record of food items positioned within or removed from fresh food chamber 122.
Inventory management system 180 and method 200 of operating a refrigerator appliance as described above can generally facilitate improved inventory management within a refrigerator appliance. In this regard, this system facilitates object identification where a frame-by-frame object analysis method can be used to support inventory management. This is advantageous when tracking multiple objects belonging to a single class (e.g., similar objects) stored in a refrigerator. In some embodiments, multiple images from a camera can be used for tracking items moving through its field of view, where objects are captured frame-by-frame. Objects can be compared for congruency between frames in a neural network. The neural network can be designed to give a probability that both images are of the same item. If multiple images of a single object are available, multiple comparisons can be made, and the average confidence can be used.
The neural network effectively generates feature vectors or maps for each object and compares them between frames. High confidence vectors are given to objects that are positively identified between frames. Relative position between unknown items can be used to identify them in the next step. If items are moving together, another item can be located in a known position. If an item is not moving, it will be found in the same position. Either case can be used to link an item identification between frames. An appliance-centric database can be built up over the course of one or more interactions with the appliance (e.g., many frames). Each image of the same item identification is available for future comparisons, making the item progressively easier to track.
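The feature-vector comparison and frame-to-frame linking described above can be sketched with cosine similarity and a confidence bar. This is a minimal illustration: the vectors here are plain lists of floats standing in for the network's learned features, and the greedy best-match linking is one simple policy, not necessarily the one an appliance would use.

```python
# Sketch of linking detections between frames: each detection's feature
# vector is compared against the previous frame's vectors, and a link
# is made only when the similarity clears a confidence threshold.
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors (1.0 = identical
    direction, 0.0 = orthogonal)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def link_frames(prev_vectors, curr_vectors, min_confidence=0.9):
    """Map each current-frame detection index to its best previous-frame
    match when the similarity is at least min_confidence."""
    links = {}
    for j, v in enumerate(curr_vectors):
        scores = [(cosine_similarity(u, v), i)
                  for i, u in enumerate(prev_vectors)]
        best_score, best_i = max(scores)
        if best_score >= min_confidence:
            links[j] = best_i
    return links
```

Detections that fail the bar remain unlinked and would fall back to the relative-position reasoning described above; linked detections extend the appliance-centric database of images for that item identification.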
For example, if there is a 50% confidence that a given orange is the same orange based on a pair of frames, older images of the same orange can be run through the same comparison, yielding matches as high as 90% confidence with an average of 75%. Thus, using older images effectively can bring up the confidence of a match, e.g., by using maximum match confidence, using average match confidence, or using other similar metrics such as quartiles, median, etc. Hence, the method is useful for tracking items going into the appliance to their final storage locations and for tracking an item's age (e.g., even if an item is moved around). In addition, the method (e.g., method 200) can determine which item is leaving the storage space when it is removed and can also suggest that the user remove the oldest item, showing it in an image.
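The aggregation step described above—rerunning older images through the pairwise comparison and summarizing the resulting scores—can be sketched as follows. This is a minimal illustration of the metrics the passage names (maximum, average, median); the scores themselves would come from the neural network comparison.

```python
# Sketch of confidence aggregation over multiple pairwise match scores:
# summarize with the maximum, median, or (by default) average.

def aggregate_confidence(pairwise_scores, metric="average"):
    """Combine pairwise match scores into a single match confidence."""
    ordered = sorted(pairwise_scores)
    if metric == "max":
        return ordered[-1]
    if metric == "median":
        mid = len(ordered) // 2
        if len(ordered) % 2:
            return ordered[mid]
        return (ordered[mid - 1] + ordered[mid]) / 2
    return sum(ordered) / len(ordered)  # default: average
```

With scores of 0.5, 0.75, 0.85, and 0.9 from the current and older image pairs, the maximum is 0.9 and the average is 0.75, mirroring the orange example above.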
In some embodiments, each electrical cable 904 can constitute and/or include an analog cable, a digital cable, a communication cable, a network cable, a data cable, a media cable, a control cable, a coaxial cable, or another type of electrical cable. In some embodiments, each electrical cable 904 can constitute and/or include an electrical cable (e.g., a coaxial cable) that can be used to communicate image data, video data, audio data, control data (e.g., control signals), and/or other data between each camera 192 and controller 156.
In the example embodiment depicted in
Although some example embodiments of the present disclosure describe and/or depict use of coaxial cables, MIPI to coaxial cable adapters, MIPI camera cables, and MIPI cameras, the present disclosure is not so limiting. For example, use of other hardware operable to concurrently capture image data, video data, and/or audio data using a plurality of cameras, multiplex the different signals of the plurality of cameras into a single signal, and/or process the single signal using an image signal processor (ISP) (e.g., an ISP coupled to and/or integrated with a controller (e.g., an SBC)) can be implemented in accordance with one or more embodiments described herein without deviating from the intent and/or scope of the present disclosure. For instance, different combinations of other types of cameras (e.g., universal serial bus (USB) cameras), cables (e.g., USB cables), adapters, and/or different quantities of image signal processors (ISPs) and/or single board computers (SBCs) can be implemented in accordance with one or more embodiments described herein without deviating from the intent and/or scope of the present disclosure.
In at least one embodiment, cameras 192 can be operable to concurrently (e.g., simultaneously, at approximately the same time) capture data (e.g., image data, video data, audio data) associated with a chilled chamber such as, for instance, fresh food chamber 122 and/or freezer chamber 124. For example, cameras 192 can concurrently capture images and/or video of one or more objects 182 positioned within, being added to, and/or being removed from fresh food chamber 122 and/or freezer chamber 124. For instance, when a motion sensor (e.g., optical, acoustic, electromagnetic) and/or a door switch of refrigerator appliance 100 is triggered and/or detects a refrigerator door 128 is opened, inventory management system 180 and/or camera assembly 190 can use cameras 192 to concurrently capture images and/or video of one or more objects 182 being added to or removed from fresh food chamber 122. In this example, controller 156 can receive a signal indicating that a refrigerator door 128 and/or freezer door 130 is open (e.g., controller 156 can receive such a signal from a motion sensor and/or door sensor of refrigerator appliance 100). In this example, based at least in part on receipt of such a signal, controller 156 can operate (e.g., via inventory management system 180, camera assembly 190) one or more cameras 192 while refrigerator door 128 and/or freezer door 130 is open to concurrently capture such data associated with fresh food chamber 122 and/or freezer chamber 124, respectively (e.g., data associated with one or more objects 182 positioned within, being added to, and/or being removed from fresh food chamber 122 and/or freezer chamber 124).
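The trigger logic described above—capturing from all cameras while a door-open signal is active—can be sketched as a simple polling loop. This is an illustrative sketch only: the camera objects and the door-signal callable are hypothetical stand-ins, and a real controller would use hardware interrupts and synchronized capture rather than a software loop.

```python
# Sketch of door-triggered concurrent capture: while the door-open
# signal is active, collect one frame per camera per tick.

def capture_while_open(door_is_open, cameras):
    """Collect grouped frames while the door is open.

    door_is_open: zero-argument callable polled each tick (stand-in for
                  a door switch or motion sensor signal).
    cameras: objects exposing a capture() method (stand-ins for the
             cameras 192 of the camera assembly).
    """
    frames = []
    while door_is_open():
        # In hardware these captures would be triggered concurrently;
        # here the per-tick group stands in for a synchronized capture.
        frames.append([cam.capture() for cam in cameras])
    return frames
```

Each per-tick group of frames would then feed the image analysis of method 200 (object identification, motion vectors, etc.) once the door closes.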
In the example embodiment depicted in
As illustrated in the example embodiment depicted in
In alternative or additional embodiments, inventory management system 180 illustrated in the example embodiment depicted in
In one or more embodiments, based at least in part on receipt of the above-described multiplex signal and/or the different data signals from multiplexer device 902, controller 156 can perform one or more operations. For example, as described above with reference to
In some embodiments, controller 156 can utilize external communication system 170 to communicate the multiplex signal, the different data signals, and/or the data associated with fresh food chamber 122 and/or freezer chamber 124 to external device 172 and/or remote server 176 via network 174. In some embodiments, controller 156 can facilitate adjustment of a camera 192 to adjust a monitoring range, a monitoring zone, or a field of view of such a camera 192. For example, in these embodiments, multiplexer device 902 can be coupled to and/or integrated with controller 156 such that controller 156 can be coupled (e.g., electrically, communicatively, operatively) to one or more electrical cables 904, which can be coupled to one or more cameras 192 (e.g., via adapter(s) 906 and camera cable(s) 908). In these embodiments, controller 156 can send one or more control signals to a camera 192 via an electrical cable 904 to, for example, facilitate the above-described adjustment of such a camera 192 and/or another operation associated with such a camera 192 (e.g., power on, power off, adjust camera settings).
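The multiplex/demultiplex path running through multiplexer device 902 and controller 156 can be sketched at the data level as tagging each frame with its camera identifier, interleaving the tagged frames into one stream, and splitting them back out per camera. This is a data-structure illustration only; the actual device multiplexes electrical signals, and the camera identifiers here are hypothetical.

```python
# Sketch of signal multiplexing: frames from several cameras are tagged
# with a camera identifier, interleaved into a single stream, and later
# demultiplexed back into per-camera frame lists.

def multiplex(camera_frames):
    """Interleave per-camera frame lists (a dict of camera id to frames)
    into one tagged stream, one frame per camera per tick."""
    stream = []
    for tick in zip(*camera_frames.values()):
        for cam_id, frame in zip(camera_frames.keys(), tick):
            stream.append((cam_id, frame))
    return stream

def demultiplex(stream):
    """Recover per-camera frame lists from the tagged stream."""
    per_camera = {}
    for cam_id, frame in stream:
        per_camera.setdefault(cam_id, []).append(frame)
    return per_camera
```

On the controller side, demultiplexing recovers each camera's frames for independent analysis, which mirrors how controller 156 can process the different data signals carried by the single multiplex signal.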
It should be appreciated that positioning of MIPI cameras on and/or within refrigerator appliance 100 can be limited by the length of an MIPI camera cable coupled to and/or associated with each of such MIPI cameras (e.g., MIPI camera cables are approximately 12 inches in length). However, it should also be appreciated that by using a plurality of electrical cables 904, adapters 906, and camera cables 908 to couple cameras 192 to multiplexer device 902 and controller 156 as illustrated in
In the example embodiment depicted in
In the example embodiment depicted in
In the example embodiment depicted in
In the example embodiment depicted in
In the example embodiment depicted in
In the example embodiment depicted in
In some embodiments, third multiplexer device 902 and/or demultiplexer device 1002 can be coupled to and/or integrated with a controller such as, for instance, controller 156 as illustrated in
The example embodiment illustrated in
At 1202, method 1200 can include obtaining, by a controller (e.g., controller 156) operatively coupled to a camera assembly (e.g., camera assembly 190), a multiplex signal (e.g., the multiplex signal described above with reference to
Although not depicted in
At 1204, method 1200 can include performing, by the controller, one or more operations based at least in part on receipt of the multiplex signal from the multiplexer device (e.g., controller 156 can perform one or more of the operations described above with reference to the example embodiments depicted in
In another example, although not depicted in
In another example, although not depicted in
This written description uses examples to disclose the present disclosure, including the best mode, and also to enable any person skilled in the art to practice the present disclosure, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the present disclosure is defined by the claims, and can include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they include structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.