MOTION ACTIVATED TOY FOR SENSORIMOTOR ASSESSMENT AND INTERVENTION

Information

  • Patent Application
  • Publication Number
    20240252944
  • Date Filed
    January 30, 2024
  • Date Published
    August 01, 2024
Abstract
A toy system includes a camera that captures an image of an infant, a processing circuit, and a toy component. The processing circuit is configured to detect a body position of the infant in the image of the infant, and based on a detection that the infant is in a head-raised-prone-body position, generate an activation signal. The toy component is activated to provide a stimulus to the infant, based on the activation signal.
Description
BACKGROUND

Importance of Prone Play or “Tummy Time”: The “Back to Sleep” campaign was initiated in 1994 to reduce the incidence of sudden infant death syndrome (SIDS). SIDS is described as the unexplained death, usually during sleep, of a healthy infant less than a year old. Sleeping in the prone position was identified as an easily modifiable factor contributing to SIDS. Over the last 20 years the American Academy of Pediatrics (AAP) has shifted the educational campaign to “Back to Sleep, Prone to Play” in recognition of the importance of tummy time. A lack of tummy time is associated with transient gross motor delays, an increased rate of deformational plagiocephaly, and limited motor activity in infants whose alternative to tummy time is a lot of time in supine or in seating devices such as car seats, strollers, and swings. Two thirds of surveyed physical therapists (PTs) and occupational therapists (OTs) report motor delays in infants who spend too much time on their backs while awake. Evidence-based recommendations suggest that encouraging tummy time is a key strategy for preventing positional plagiocephaly and torticollis. While fairly new to the literature, there are increasing reports that “tummy time” is a form of exercise and that tolerance of it is associated with future physical activity levels. In fact, the duration of daily tummy time is negatively correlated with screen time use and childhood obesity. While the importance of tummy time to promote development and prevent plagiocephaly is clear in the evidence and in our public health policy, implementation is a challenge.


While parents are typically aware of the recommendation to provide tummy time, parents find incorporating tummy time in their infant's routine challenging. Infants are born with weak neck and trunk muscles that require training to improve strength and control. Tummy time provides opportunities to develop head and trunk control as infants practice holding a heavy upper body on their forearms and arms. With supine sleeping recommended, infants fall asleep and awaken in the supine position, increasing the opportunities for spontaneous supine play, rather than tummy time as was the case before 1994. Parents' fear of SIDS, even during playtime, further contributes to limited prone time, thus reducing practice and resulting in weakness of upper body muscles. Infants' intolerance of prone positioning, exhibited as fussing and crying, limits parents' willingness to implement tummy time in their infant's routine. Over 50% of parents surveyed in the waiting room of a pediatrician's office reported their infant was intolerant of tummy time, and the majority of those infants spent less than 15 minutes per day in prone. A recent systematic review found that in healthy infants under 12 months, spending more than 15 minutes per day in prone at 2 months of age was positively associated with later prone motor skills. The challenges of implementing the public health policy for tummy time suggest a need for research on strategies to improve implementation. A decrease in practice and in the opportunity to improve head control and upper body strength limits independent mobility, which typically begins in prone. The risk of head deformities such as plagiocephaly (flattening of the back of the head) escalates in infants who spend a lot of time in supine or in seating devices such as car seats, strollers, and swings. In 2008, the American Physical Therapy Association (APTA) carried out a national survey of 400 pediatric physical and occupational therapists, in which two thirds of the PTs and OTs reported motor delays in infants who spend too much time on their backs while awake. This survey identified infants' poor tolerance for prone positioning and parents' uncertainty about how to practice tummy time as prime contributors to limited tummy time in an infant's routine, placing the infant at risk of motor delays.


Current Intervention Approaches: Current approaches to improve tolerance for tummy time in infants include educating parents through brochures and using commercially available prone positional supports such as boppy pillows and play gyms. Educational strategies alone are not effective in increasing prone tolerance, increasing motor skills, or reducing plagiocephaly. The current intervention strategies lack scientific rigor and focus on supporting the infant's trunk in prone to make tummy time less challenging. These positional supports do not positively reinforce the infant's attempts to lift the head higher or for a longer period of time. While commercial play gyms may have toys, the toys' activity is not related to the infant's movement. Likewise, the toys are often accessible with the head down in prone or in another position such as supine or sitting. Without positive reinforcement during tummy time, infants may disengage, fuss, or roll into supine. One study even suggests that families using equipment that does not provide any reinforcement, including commercial tummy time gyms, encourage the infant to be passive, resulting in lower motor development scores. An evidence-based intervention approach is needed to address this significant gap in supporting the development of prone motor skills in infants with limited prone tolerance.


Theoretically Grounded Intervention that can be incorporated into the daily routine of all infants: Previous research led by Dusing has shown that infants as young as 3 months can learn that if they lift their head, they activate a toy. This opens the door to using the toy to 1) positively reinforce the head-lifting behavior, increasing motivation to lift the head and increasing practice time (duration) playing on the tummy, 2) vary the “game” to keep the child engaged and trying to activate the toy, supporting motor and cognitive development (intensity), and 3) through the use of operant conditioning and the just-right challenge, train the infant to lift the head using a variety of different strategies (variability). Interventions that can vary duration, intensity, and practice variability are ideally suited to train infants with typical development as well as those with developmental delays or early brain injury. These factors, which are not present in other tummy time toys, are consistent with the goals of dose principles in therapy interventions and thus can support parents in providing an increased dose of high-quality intervention during playtime at home and maximizing learning.


Operant conditioning (OC) is a form of associative learning that emphasizes encouraging a certain behavior by associating it with positive reinforcement. OC techniques have been used with interventions to enhance sucking, vocalization, smiling, head turning, and reaching in infants. For instance, by using a pacifier that plays the mother's voice when sucked at a certain pressure, physical therapists and nurses in the Neonatal Intensive Care Unit (NICU) were able to encourage sucking, improve feeding outcomes in infants born preterm, and facilitate early discharge. In another study, toys that moved and made sound only upon contact encouraged 2.9-month-old infants to make contact more often and to practice reaching and object exploration. Our intervention harnesses the benefits of OC to encourage motor behaviors in prone, which has not previously been documented in the literature.


While the recommendations are clear that tummy time is essential, parents and day care centers are struggling to meet the recommended 30 minutes per day in prone. In addition, the use of seats and other devices that limit active movement are popular with parents and daycares but restrict rather than enhance the activity of infants.


Accordingly, there is a need for improved devices for encouraging infants to play in the prone position.


SUMMARY

In general, one or more embodiments of the invention relate to a toy system comprising: a camera that captures an image of an infant; a processing circuit configured to: detect a body position of the infant in the image of the infant, and based on a detection that the infant is in a head-raised-prone-body position, generate an activation signal; and a toy component that is activated to provide a stimulus to the infant, based on the activation signal.


In general, one or more embodiments of the invention relate to a method for training tummy time, the method comprising: capturing an image of an infant using a camera of a toy system; detecting, by a detection engine executing on a processing circuit, a body position of the infant in the image of the infant; based on a detection that the infant is in a head-raised-prone-body position, generating an activation signal; and activating a toy component of the toy system to provide a stimulus to the infant, based on the activation signal.


In general, one or more embodiments of the invention relate to a non-transitory computer readable medium (CRM) storing computer readable program code for operating a toy system, wherein the computer readable program code causes a computer system of the toy system to: capture an image of an infant using a camera of the toy system; detect a body position of the infant in the image of the infant; based on a detection that the infant is in a head-raised-prone-body position, generate an activation signal; and activate a toy component of the toy system to provide a stimulus to the infant, based on the activation signal.


Other aspects of the invention will be apparent from the following description and the appended claims.





BRIEF DESCRIPTION OF DRAWINGS


FIG. 1 schematically shows a motion activated toy and infant assessment system, according to some embodiments.



FIG. 2A shows a toy for encouraging infants to play in the prone position, according to some embodiments.



FIG. 2B shows a base section of a toy, according to some embodiments.



FIGS. 3A and 3B show flowcharts of methods, according to some embodiments.



FIG. 4 shows a computing system, according to some embodiments.





DETAILED DESCRIPTION

Specific embodiments of the invention will now be described in detail with reference to the accompanying figures. Like elements in the various figures are denoted by like reference numerals for consistency.


In the following detailed description of embodiments of the invention, numerous specific details are set forth in order to provide a more thorough understanding of the invention. However, it will be apparent to one of ordinary skill in the art that the invention may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description.


Throughout the application, ordinal numbers (e.g., first, second, third) may be used as an adjective for an element (i.e., any noun in the application). The use of ordinal numbers is not to imply or create a particular ordering of the elements nor to limit any element to being only a single element unless expressly disclosed, such as by the use of the terms “before,” “after,” “single,” and other such terminology. Rather the use of ordinal numbers is to distinguish between the elements. By way of an example, a first element is distinct from a second element, and the first element may encompass more than one element and may succeed (or precede) the second element in an ordering of elements.


In general, embodiments of the invention provide an apparatus, a method, and a non-transitory computer readable medium (CRM) for motion activated infant toy and infant assessment.


Technological innovations such as virtual reality, robotics, and computer-assisted training systems have facilitated rehabilitation outcomes. The introduction of AI-enabled technology has changed the way machines are used for healthcare purposes. The key features of these new technological advances are that they are portable, are equipped with clinical intelligence, reduce manual effort, and require less monitoring and coaching. Aligning with these ideas, embodiments of the disclosure include an AI-enabled toy. The toy may be used to measure motor learning in infants. The toy may further be used in interventions targeted at improving prone tolerance, prone motor skills, and problem-solving in infants. When coupled with other technology, the toy can provide reinforcement of specific motor control strategies. In the “prone position” infants may be lying flat on their stomach. An “infant” may be a human baby less than 1 year old or an older child who has motor skills similar to those of an infant less than 1 year old.


Toys in accordance with embodiments of the disclosure may have various benefits and/or advantages. Toys in accordance with embodiments of the disclosure may be used to assess and improve sensorimotor skills, motor learning, and/or problem-solving. Toys in accordance with embodiments of the disclosure comprise an integration of technology and science designed to improve tummy time tolerance in infants. In some embodiments, a toy is an AI-enabled device that detects an infant's face or, more generally, the infant's head, limbs, and body position, orientation, and configuration with respect to each other, the support surface, an object, a direction, and/or gravity, in order to activate or deactivate the toy when certain conditions are met or cease to be met. Detection of the face may serve as an indication that the infant is raising the head, which may activate the toy to provide a rewarding experience to the infant. The toy may be programmed to measure learning and provide intervention to full-term and preterm infants. In some embodiments, the toy may be used to perform an initial evaluation of the infant's ability to raise the head from the floor to establish a baseline. This baseline, once established, may be used as a threshold to activate the toy to provide a stimulus (e.g., music, lights, etc.). For example, the toy may activate only if the infant raises the head for at least a certain duration, to at least a certain height (e.g., using the arms to push up), or to a certain angle with respect to the torso or gravity. The threshold may be dynamically adjusted over time/practice days, e.g., requiring the head to be raised to a higher level, etc. The threshold for providing positive reinforcement related to head raise height may be tailored for every infant as a part of the motor learning, control, or intervention program set in the toy system/configuration. A more detailed description is provided below.


The tummy time toy in accordance with embodiments of the disclosure helps facilitate good head and neck control in infants during their early months of development (e.g., 2-4 months). Good head control in tummy time is associated with various benefits to the infant from developmental aspects.


The tummy time toy may further be used as the infant is acquiring additional motor skills. For example, the same tummy time toy, programmed differently, may be used by a child sitting down, facing the toy, and reaching for the toy. In this application, the detection that is performed may include movement, e.g., of the arms.



FIG. 1 schematically shows a motion-activated toy and infant assessment system (100) in interaction with an infant (198) according to some embodiments. The system (100) includes a toy component (110), a camera (120), and a processing circuit (130). Each of these components is subsequently described. Other components may be included without departing from the disclosure.


The toy component (110) may be any kind of component suitable to trigger the infant's attention and curiosity and/or provide a rewarding experience. The toy component (110) may include moving (including vibrating) components, may generate sounds such as music or voices, and may produce light effects, color effects, etc. The toy component (110) may produce any type of salient stimulus. Examples of toy components include, but are not limited to, a model of a merry-go-round, a crawling baby crab, an animated cactus which can record a voice and play it back when activated, a mobile, a light projecting toy, etc. The toy component (110) may be selected in an age-appropriate manner and/or based on the child's reaction or preference. Alternatively, the toy component (110) may be a stimulus-generating component that is not necessarily in the shape of a toy. In such a configuration, the toy component may be a display, a speaker, a light source, etc. In some embodiments, the toy component is an interchangeable component of the motion activated toy and infant assessment system (100).


The toy component (110) may be intended to provide stimuli that catch the infant's attention. Such stimuli may include one or more of motion, light stimuli, and acoustic stimuli. The stimuli may be generated by functions of the toy component. For example, elements of the toy may be in motion when activated, lights may be activated (including different patterns, colors, etc.), sounds may be played back (including basic sounds, music, pre-recorded or customized voices, etc.). The toy component (110) may be activated by other components of the motion activated toy and infant assessment system (100) using an activation signal. A single activation signal may be used, or a set of activation signals may be used to selectively activate one or more of the functions of the toy component (110).


The camera (120) may be any type of camera, for example, a monochrome or color CCD or CMOS camera. The camera (120) may have any resolution as long as it is sufficient to detect at least the body position and/or some facial features of the infant (198). The camera (120) may have any optical characteristics. For example, the camera, including any combination of lenses of the camera, may operate at any wavelength in the visible or invisible spectrum of light, may have any focal length, may have any magnification, any sensor size, etc. The camera (120) may be placed (positioned/oriented) such that it is capable of capturing at least some facial features of the infant (198). In some embodiments, the camera (120) is integrated in the toy component (110). An example is provided below in reference to FIGS. 2A and 2B. The camera may be such that it does not or cannot capture human-visible images for the sake of maintaining privacy and security, and only captures sufficient information to serve the purpose of the device.


The processing circuit (130) in some embodiments is configured to activate one or more functions of the toy component (110) based on input obtained from the camera (120). The processing circuit (130) may perform operations as described below in reference to the flowcharts of FIGS. 3A and 3B. In some embodiments, the processing circuit (130) is or includes a computer system as shown in FIG. 4. The processing circuit may be a single board computer such as a Raspberry Pi.


In some embodiments, the processing circuit (130) executes a detection engine (132). The detection engine (132) may perform the operations described in FIG. 3A. The detection engine (132) may receive images from the camera (120). The images may be received at a fixed frame rate, e.g., in the form of a video. The images may be received in real-time. In some embodiments, the detection engine (132) performs a face detection to detect the presence or absence of the infant's face. The presence of the infant's face may be used as an indication of the infant being in a prone body position with the head raised. The detection engine (132) may be based on any face detection algorithm, using any face detection methods. In some embodiments, an artificial neural network-based face detection algorithm is used. The face detection algorithm does not necessarily need to have the capability to discriminate between different faces. However, the face detection algorithm needs to be able to distinguish the presence or absence of a face in an image obtained from the camera (120). Additionally or alternatively, the detection engine (132) may perform a body position analysis to determine whether the infant is in a prone position with the head raised, or the relative position and configuration of head, limbs and torso with each other or the external environment or objects. When using such an algorithm, the face of the infant may not need to face the camera (120) for the purpose of performing the detection. The algorithm may be similar to the algorithm for face detection, although trained using a different set of training images. More generally, the detection engine may perform a detection of a body configuration that is image-based, without requiring markers.
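As an illustration only, the following is a minimal sketch of how such a marker-free, image-based presence check might be written in Python using OpenCV's bundled Haar cascade face detector. The library choice, the camera index, and the head_raised helper are assumptions made for the example, not a description of the actual detection engine or of the neural-network-based detector the disclosure contemplates.

```python
# Minimal sketch of a frame-by-frame face presence check (assumed OpenCV-based).
# The presence of a face in the frame is used as a proxy for a raised head in prone.
import cv2

# Haar cascade shipped with OpenCV; any face detector could be substituted.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def head_raised(frame) -> bool:
    """Return True if at least one face is visible in the camera frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

capture = cv2.VideoCapture(0)          # camera (120), e.g. a USB or Pi camera
ok, frame = capture.read()
if ok and head_raised(frame):
    print("head-raised-prone-body position candidate detected")
capture.release()
```

A production detection engine would run this check continuously on the video stream and combine it with the body-position analysis described above rather than relying on face presence alone.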


The detection engine (132) may further perform additional operations, e.g., to detect and differentiate limb movements. The detection engine (132) may also preprocess the images received from the camera (120), including down-sampling or up-sampling in space and/or time, normalization, etc.


In some embodiments, the output of the detection engine (132) is an activation signal. The activation signal may be provided when the infant is detected as being in a prone body position, with the head raised (head-raised-prone-body position). Additional requirements may need to be met prior to issuance of the activation signal, as further discussed below in reference to FIG. 3A. For example, the head-raised-prone-body position may need to be detected for at least a minimum time, the head may need to be raised to at least a certain level, etc.


In some embodiments, the processing circuit (130) executes a toy component driver (134). The toy component driver (134) may perform the operations described in FIG. 3B. The toy component driver (134) may be communicatively interfaced with the detection engine (132), enabling the toy component driver (134) to receive the activation signal when issued by the detection engine (132). Upon receipt of the activation signal, the toy component driver (134) may activate one or more of the functions of the toy component (110).


In some embodiments, the processing circuit (130) executes an external device interface (136). The external device interface may enable configuration of the motion activated toy and infant assessment system (100) by an external computing device such as a laptop, smartphone or a tablet computer (not shown). The interface may be wireless (e.g., Bluetooth or Wi-Fi-based) or wired (e.g., USB or Ethernet based) and may connect to the external computing device, a cloud environment, etc., via a network (not shown). The external device interface (136) communicatively interfaces with the detection engine (132) and/or the toy component driver (134).


In some embodiments, the external device interface (136) may be used to configure the detection engine (132). For example, a regular operating mode or an initial assessment mode (described below in reference to FIG. 3A) may be selected. Further, parameters such as a required minimum time in the head-raised-prone-body position before issuance of the activation signal may be set or tweaked. Similarly, a threshold for the head being raised to at least a certain level to trigger issuance of the activation signal may be set.


In some embodiments, the external device interface (136) may be used to configure the toy component driver (134). For example, for a toy component (110) that has multiple functions, one or more of these functions may be selected for activation upon receipt of the activation signal. Duration of the activation may further be selected. Also, a randomization between the multiple functions may be selected. In addition, a function may be personalized. For example, a custom audio signal may be selected for playback. The custom audio signal may be recorded using the external computing device. In one example, the custom audio signal is a message recorded by the infant's parent.


In some embodiments, the external device interface (136) is provided in the form of a web application, e.g., based on Python Flask. The web application can be hosted on a microprocessor system such as the Raspberry Pi 4B, configured as an access point. Devices such as smartphones, tablets, and laptops can then connect to the Raspberry Pi via Wi-Fi.
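Purely as an illustration of this kind of configuration interface, the sketch below shows a minimal Flask application exposing a single configuration endpoint. The route name, the parameter names (mode, min_hold_seconds, min_lift_cm), and the in-memory storage are hypothetical and are not taken from the disclosure.

```python
# Minimal sketch of a configuration web app (assumed Flask-based, hosted on the Pi).
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical in-memory parameters consumed by the detection engine (132).
config = {"mode": "regular", "min_hold_seconds": 2.0, "min_lift_cm": 8.0}

@app.route("/config", methods=["GET", "POST"])
def configure():
    if request.method == "POST":
        # e.g. updated thresholds posted from a smartphone browser
        config.update(request.get_json(force=True))
    return jsonify(config)

if __name__ == "__main__":
    # The Pi acts as a Wi-Fi access point; connected clients browse to this address.
    app.run(host="0.0.0.0", port=8080)
```

A smartphone joined to the toy's access point could then GET the current parameters or POST updated thresholds before a play session.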


The front end of the web application may be loaded in a browser window on the user side. The web application may communicate with the backend via HTTP requests.


In some embodiments, one or more of the detection engine (132), the toy component driver (134), and the external device interface (136) are executed in real-time or near-real-time by the processing circuit. Accordingly, the infant may be provided with instantaneous feedback when raising the head.


In some embodiments, the toy component (110), the camera (120), and the processing circuit (130) are mechanically integrated, as discussed below in reference to FIGS. 2A and 2B.



FIG. 2A shows a motion activated toy and infant assessment system (200) configured to encourage infants to play in the prone position (tummy time toy), according to some embodiments. The tummy time toy (200) includes a toy component (210) for an infant and a base section (260) attached to or proximate to the toy component. The base section (260) includes a housing (262) having a sidewall. The camera (220) is mounted on the sidewall and configured to acquire an image and/or video of the infant interacting with the toy component (210).



FIG. 2B shows the base section (260) of the tummy time toy (200), according to some embodiments. The processing circuit (230) is mounted in the housing (262) of the base section (260). The housing may be equipped with a sidewall (264) with an opening for the camera (220). The processing circuit (230) communicatively interfaces with the camera (220) to receive images from the camera and to evaluate the received images. The processing circuit (230) may be or may include elements of the computer system described below in reference to FIG. 4. In some embodiments, the base section (260) is shaped to accommodate the toy component (210) as illustrated in FIG. 2A. Accordingly, the base section (260) may be shaped differently, depending on the shape of the toy component (210). Also, mechanical adapters may be used for compatibility of the base section (260) with different toy components (210). Accordingly, different toy components (210) may be used interchangeably with the base section (260).


While FIGS. 1, 2A, and 2B show various configurations of hardware components and/or software components, other configurations may be used without departing from the scope of the disclosure. For example, the system may further include motion sensors (e.g., accelerometers, force plates, IR motion and proximity sensors, etc.) installed on the child's bed and/or worn by the child, which may be used to activate the toy component. These sensors may activate the toy component when a desired motor behavior of the child is detected and may be used to reinforce this motor behavior. In addition, various components in FIGS. 1, 2A, and 2B may be combined to create a single component. As another example, the functionality performed by a single component may be performed by two or more components.



FIGS. 3A and 3B show flowcharts of methods according to one or more embodiments. The methods may be implemented using instructions stored on a non-transitory medium that may be executed by a computer system as shown in FIG. 4.


While the various steps in FIGS. 3A and 3B are presented and described sequentially, one of ordinary skill in the art will appreciate that some or all of the steps may be executed in different orders, may be combined or omitted, and some or all of the steps may be executed in parallel. Furthermore, the steps may be performed actively or passively. While not explicitly shown, the methods may be repetitively executed, e.g., in a loop.


Turning to the method (300) shown in FIG. 3A, in Step 302, an image of an infant is obtained. One or more images (e.g., in the form of a video) may be obtained. The image may be obtained by a camera, as previously described.


In Step 304, the image is processed to determine whether the infant in the image is in a head-raised-prone-body position. The head-raised-prone-body position may be detected as previously described, e.g., using AI methods applied to the image. The detection may be based on a detection of the infant's face or facial features in the image, and/or based on a body position analysis (e.g., based on a detection of torso, neck, and/or head in certain configurations).


In Step 306, a test is performed to determine whether the infant is in the head-raised-prone-body position. This test may be performed in various different manners. In one embodiment, the test concludes that the infant is in the head-raised-prone-body position as soon as the head-raised-prone-body position has been detected in Step 304. Alternatively, additional conditions may need to be met as subsequently discussed.


In one embodiment, the head-raised-prone-body position needs to be detected for at least a predetermined time. The predetermined time may be set, e.g., using a smartphone app available to configure the toy system. The predetermined time may be static, or it may change over time. For example, based on monitoring progress (discussed below), the predetermined time may be automatically increased to gradually intensify the training.


In one embodiment, the head-raised-prone-body position requires the infant to lift the head at least a certain distance from the floor, a certain angle from the rest position, or a certain angle with respect to the torso. Such a detection may be made, for example, by further analyzing the image for the location of the face in the image to determine the distance to the floor. Body posture may be analyzed in a similar manner to determine the distance to the floor. The required distance to the floor may be static, or it may change over time. For example, based on monitoring progress (discussed below), the required distance may be automatically increased to gradually intensify the training.


In one embodiment, a combination of a minimum predetermined time and a required distance is used when performing the test of Step 306.
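As a hedged illustration of combining these two conditions, the sketch below checks a minimum lift height together with a minimum hold duration before signaling success. The function names, the units, and the default values are assumptions made for the example; the per-frame height estimate is supplied by a hypothetical callable standing in for the Step 304 analysis.

```python
# Sketch of the Step 306 test: head must be raised high enough, for long enough.
import time

def meets_activation_criteria(get_lift_height_cm,
                              min_lift_cm=8.0, min_hold_s=2.0, timeout_s=60.0):
    """Return True once the head is held above min_lift_cm for min_hold_s seconds."""
    start = time.monotonic()
    hold_start = None
    while time.monotonic() - start < timeout_s:
        height = get_lift_height_cm()          # hypothetical estimate from Step 304
        if height is not None and height >= min_lift_cm:
            hold_start = hold_start or time.monotonic()
            if time.monotonic() - hold_start >= min_hold_s:
                return True                    # conditions met -> proceed to Step 308
        else:
            hold_start = None                  # head lowered; restart the hold timer
        time.sleep(0.05)                       # roughly one camera frame period
    return False                               # no qualifying lift within the timeout
```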


While not shown, embodiments of the disclosure may initially operate in an assessment mode. While in the assessment mode, one or more baselines for the infant's performance or capabilities are established. In some embodiments, when in the assessment mode, the infant's ability to raise the head while in the prone position may be assessed during a time interval, e.g., one minute, two minutes, five minutes, etc. The baselines may be obtained while the toy component is deactivated. In one embodiment, each time the infant lifts the head, the distance between the floor and the eye level (or another detectable feature indicative of lift height) is automatically calculated, and the average of all lifts during this baseline phase is calculated by the computer system. The average lift height may be displayed to the user, e.g., on an external computing device, to allow examination, documentation, etc. by the user of the external computing device. Similar to calculating the average head lift height, the device may measure the duration of each head lift, allowing the average head lift time to be calculated. The average lift time and/or height may be used as variables that are manipulated in the intervention or training paradigm or compared against those of other children of the same age, etc. For example, if the average head height at baseline is 10 cm, maintained for 5 seconds, a threshold of 80% of the mean (i.e., 8 cm) may be set, for any duration, to reinforce lifting the head to at least 8 cm.
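As a small worked illustration of this baseline logic: the 80% factor mirrors the 10 cm example above, while the individual measurements and variable names are assumptions made for the sketch.

```python
# Sketch of the assessment-mode baseline and the derived activation threshold.
from statistics import mean

# Hypothetical per-lift measurements collected while the toy component is deactivated.
lift_heights_cm = [9.5, 10.2, 10.3]      # e.g. three head lifts during a short baseline
lift_durations_s = [4.0, 5.5, 5.5]

baseline_height_cm = mean(lift_heights_cm)      # ~10 cm, as in the example above
baseline_duration_s = mean(lift_durations_s)    # ~5 s, as in the example above

# Threshold set to 80% of the mean lift height, for any duration.
activation_threshold_cm = 0.8 * baseline_height_cm   # ~8 cm
print(f"activate toy when head lift >= {activation_threshold_cm:.1f} cm")
```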


If the test of Step 306 concludes that the condition(s) have been met, the method may proceed with the execution of Step 308. Alternatively, if the test of Step 306 concludes that the condition(s) have not been met, the method may proceed with the execution of Step 302.


In Step 308, an activation signal is generated. The activation signal may be a flag, an electrical or optical signal, etc.


As previously noted, the method (300) may be repetitively executed, e.g., in a loop. The subsequent execution of the method (300) may use the same parameterization for the detection of the head-raised-prone-body position, or an updated parameterization. Specifically, the predetermined time required for the detection of the head-raised-prone-body position may be changed. Similarly, the required distance from the floor or the angle from the rest position may be changed. In some embodiments, these parameters are changed based on the progress of the infant. Embodiments of the disclosure may track the proficiency of the infant, e.g., by monitoring the actual time in the head-raised-prone-body position and/or the actual distance from the floor or the angle from the rest position when in the head-raised-prone-body position over time, as the infant interacts with the system. The progress measured in this manner may result in an adjustment of the predetermined time and/or the distance from the floor/angle from the rest position required for the detection of the head-raised-prone-body position. For example, the required predetermined time and/or the required distance from the floor or the angle from the rest position may be increased if the infant has repeatedly completed a number of trials without failure.
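A minimal sketch of such progress-based adjustment is shown below. The success-count rule, the 10% increments, and the caps are illustrative assumptions rather than parameters taken from the disclosure.

```python
# Sketch of adapting the detection parameters as the infant's proficiency increases.
def adjust_parameters(min_lift_cm, min_hold_s, consecutive_successes,
                      successes_needed=5, max_lift_cm=15.0, max_hold_s=10.0):
    """Raise the lift and hold requirements after a run of successful trials."""
    if consecutive_successes >= successes_needed:
        min_lift_cm = min(min_lift_cm * 1.10, max_lift_cm)   # 10% harder, capped
        min_hold_s = min(min_hold_s * 1.10, max_hold_s)
        consecutive_successes = 0                             # reset the success counter
    return min_lift_cm, min_hold_s, consecutive_successes
```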


More generally, the infant's increased task proficiency may result in an increased tolerance to being in the prone position. In addition to being able to raise the head for a prolonged time and to a higher level, this may result in more frequent use of the system by the infant over time. This may be quantified, for example, by determining the length of a time window during which the infant is practicing using the system, the number of times that the infant successfully completes a trial during a fixed time window, etc.


The steps of method (300) may be performed by the detection engine (132) which may send the activation signal to the toy component driver (134).


Turning to the method (350) shown in FIG. 3B, in Step 352, a test is performed to determine whether an activation signal is present. If the activation signal is present, the execution of the method may proceed with Step 354. In absence of the activation signal, the execution of the method may not proceed from Step 352.


In Step 354, the toy component is activated to provide a stimulus to the infant. If the toy component has various functions, one or more of these functions may be activated in Step 354. Which function(s) are activated may be configurable. Alternatively, a random selection of one or more functions may be performed. The toy component may remain activated for a set time, e.g., 10 seconds, unless the infant lowers the head below the threshold.
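As an illustration of this activation behavior, the sketch below keeps the stimulus on for a fixed interval but ends early if the head drops below the threshold. The activate_toy, deactivate_toy, and head_above_threshold callables are hypothetical hooks standing in for the toy component driver (134) and the detection engine (132).

```python
# Sketch of Step 354: run the stimulus for up to 10 s, stop early if the head drops.
import time

def run_stimulus(activate_toy, deactivate_toy, head_above_threshold,
                 max_duration_s=10.0):
    activate_toy()                              # e.g. start music and lights
    start = time.monotonic()
    try:
        while time.monotonic() - start < max_duration_s:
            if not head_above_threshold():      # infant lowered the head
                break
            time.sleep(0.1)                     # poll roughly every 100 ms
    finally:
        deactivate_toy()                        # always switch the stimulus off
```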


The steps of method (350) may be performed by the toy component driver (134) which may receive the activation signal from the detection engine (132).


While executing the methods of FIGS. 3A and 3B over time, the infant's progress may be monitored and recorded, for example, by tracking the duration of time intervals during which the infant is interacting with the toy, head raise level, and/or head raise time. Progress may be visualized numerically and/or in the form of charts, e.g., on a smartphone display. Progress may be compared within and across subjects and populations to further extract information and understanding for a given individual, population, treatment, training regimen, etc.
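One simple way to log and chart such progress is sketched below, offered only as an assumption for illustration; the CSV layout, file path handling, and plotting choice are not taken from the disclosure.

```python
# Sketch of logging per-session metrics and charting them over practice days.
import csv
from datetime import date
import matplotlib.pyplot as plt

def log_session(path, duration_s, mean_lift_cm, mean_hold_s):
    """Append one row per practice session: date, interaction time, lift, hold."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(),
                                duration_s, mean_lift_cm, mean_hold_s])

def plot_progress(path):
    """Plot mean head lift height across sessions, e.g. for a caregiver's phone."""
    days, lifts = [], []
    with open(path, newline="") as f:
        for day, _, lift, _ in csv.reader(f):
            days.append(day)
            lifts.append(float(lift))
    plt.plot(days, lifts, marker="o")
    plt.xlabel("session date")
    plt.ylabel("mean head lift (cm)")
    plt.title("Tummy time progress")
    plt.show()
```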



FIG. 4 shows a computing system, according to one or more embodiments. Embodiments may be implemented on a computer system. FIG. 4 is a block diagram of a computer system (402) used to provide computational functionalities associated with described algorithms, methods, functions, processes, flows, and procedures as described in the instant disclosure, according to an implementation. The illustrated computer (402) is intended to encompass any computing device such as a high-performance computing (HPC) device, a server, desktop computer, laptop/notebook computer, wireless data port, smart phone, personal data assistant (PDA), tablet computing device, one or more processors within these devices, or any other suitable processing device, including physical or virtual instances (or both) of the computing device. Additionally, the computer (402) may include a computer that includes an input device, such as a keypad, keyboard, touch screen, or other device that can accept user information, and an output device that conveys information associated with the operation of the computer (402), including digital data, visual, or audio information (or a combination of information), or a GUI.


The computer (402) can serve in a role as a client, network component, a server, a database or other persistency, or any other component (or a combination of roles) of a computer system for performing the subject matter described in the instant disclosure. The illustrated computer (402) is communicably coupled with a network (430). In some implementations, one or more components of the computer (402) may be configured to operate within environments, including cloud-computing-based, local, global, or other environment (or a combination of environments).


At a high level, the computer (402) is an electronic computing device operable to receive, transmit, process, store, or manage data and information associated with the described subject matter. According to some implementations, the computer (402) may also include or be communicably coupled with an application server, e-mail server, web server, caching server, streaming data server, business intelligence (BI) server, or other server (or a combination of servers).


The computer (402) can receive requests over the network (430) from a client application (for example, executing on another computer (402)) and respond to the received requests by processing them in an appropriate software application. In addition, requests may also be sent to the computer (402) from internal users (for example, from a command console or by other appropriate access method), external or third parties, other automated applications, as well as any other appropriate entities, individuals, systems, or computers.


Each of the components of the computer (402) can communicate using a system bus (403). In some implementations, any or all of the components of the computer (402), both hardware and software (or a combination of hardware and software), may interface with each other or the interface (404) (or a combination of both) over the system bus (403) using an application programming interface (API) (412) or a service layer (413) (or a combination of the API (412) and the service layer (413)). The API (412) may include specifications for routines, data structures, and object classes. The API (412) may be either computer-language independent or dependent and refer to a complete interface, a single function, or even a set of APIs. The service layer (413) provides software services to the computer (402) or other components (whether or not illustrated) that are communicably coupled to the computer (402). The functionality of the computer (402) may be accessible for all service consumers using this service layer. Software services, such as those provided by the service layer (413), provide reusable, defined business functionalities through a defined interface. For example, the interface may be software written in JAVA, C++, or other suitable language providing data in extensible markup language (XML) format or other suitable format. While illustrated as an integrated component of the computer (402), alternative implementations may illustrate the API (412) or the service layer (413) as stand-alone components in relation to other components of the computer (402) or other components (whether or not illustrated) that are communicably coupled to the computer (402). Moreover, any or all parts of the API (412) or the service layer (413) may be implemented as child or sub-modules of another software module, enterprise application, or hardware module without departing from the scope of this disclosure.


The computer (402) includes an interface (404). Although illustrated as a single interface (404) in FIG. 4, two or more interfaces (404) may be used according to particular needs, desires, or particular implementations of the computer (402). The interface (404) is used by the computer (402) for communicating with other systems in a distributed environment that are connected to the network (430). Generally, the interface (404) includes logic encoded in software or hardware (or a combination of software and hardware) and is operable to communicate with the network (430). More specifically, the interface (404) may include software supporting one or more communication protocols associated with communications such that the network (430) or the interface's hardware is operable to communicate physical signals within and outside of the illustrated computer (402).


The computer (402) includes at least one computer processor (405). Although illustrated as a single computer processor (405) in FIG. 4, two or more processors may be used according to particular needs, desires, or particular implementations of the computer (402). Generally, the computer processor (405) executes instructions and manipulates data to perform the operations of the computer (402) and any algorithms, methods, functions, processes, flows, and procedures as described in the instant disclosure.


The computer (402) also includes a memory (406) that holds data for the computer (402) or other components (or a combination of both) that can be connected to the network (430). For example, memory (406) can be a database storing data consistent with this disclosure. Although illustrated as a single memory (406) in FIG. 4, two or more memories may be used according to particular needs, desires, or particular implementations of the computer (402) and the described functionality. While memory (406) is illustrated as an integral component of the computer (402), in alternative implementations, memory (406) can be external to the computer (402).


The application (407) is an algorithmic software engine providing functionality according to particular needs, desires, or particular implementations of the computer (402), particularly with respect to functionality described in this disclosure. For example, application (407) can serve as one or more components, modules, applications, etc. Further, although illustrated as a single application (407), the application (407) may be implemented as multiple applications (407) on the computer (402). In addition, although illustrated as integral to the computer (402), in alternative implementations, the application (407) can be external to the computer (402).


There may be any number of computers (402) associated with, or external to, a computer system containing computer (402), each computer (402) communicating over network (430). Further, the term “client,” “user,” and other appropriate terminology may be used interchangeably as appropriate without departing from the scope of this disclosure. Moreover, this disclosure contemplates that many users may use one computer (402), or that one user may use multiple computers (402).


In some embodiments, the computer (402) is implemented as part of a cloud computing system. For example, a cloud computing system may include one or more remote servers along with various other cloud components, such as cloud storage units and edge servers. In particular, a cloud computing system may perform one or more computing operations without direct active management by a user device or local computer system. As such, a cloud computing system may have different functions distributed over multiple locations from a central server, which may be performed using one or more Internet connections. More specifically, a cloud computing system may operate according to one or more service models, such as infrastructure as a service (IaaS), platform as a service (PaaS), software as a service (SaaS), mobile “backend” as a service (MBaaS), serverless computing, artificial intelligence (AI) as a service (AIaaS), and/or function as a service (FaaS).


Although the disclosure has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this disclosure, will appreciate that various other embodiments may be devised without departing from the scope of the present invention. Accordingly, the scope of the invention should be limited only by the attached claims.

Claims
  • 1. A toy system, comprising: a camera that captures an image of an infant; a processing circuit configured to: detect a body position of the infant in the image of the infant, and based on a detection that the infant is in a head-raised-prone-body position, generate an activation signal; and a toy component that is activated to provide a stimulus to the infant, based on the activation signal.
  • 2. The toy system of claim 1, wherein the stimulus comprises at least one selected from a group consisting of a light stimulus, an acoustic stimulus, and a motion.
  • 3. The toy system of claim 1, wherein the toy component comprises a base section attached to or proximate to the toy component, the base section housing the camera and the processing circuit.
  • 4. The toy system of claim 1, wherein the detection that the infant is in the head-raised-prone-body position comprises a detection of the infant being in the head-raised-prone-body position for at least a predetermined time.
  • 5. The toy system of claim 1, wherein the detection that the infant is in the head-raised-prone-body position comprises a detection of the infant being in the head-raised-prone-body position with a head of the infant being raised at least a certain distance.
  • 6. The toy system of claim 1, wherein the processing circuit is further configured to, prior to detecting that the infant is in a head-raised-prone-body position: determine a baseline for an ability of the infant to be in the head-raised-prone-body position.
  • 7. The toy system of claim 1, wherein the processing circuit is further configured to: track a proficiency of the infant reaching the head-raised-prone-body position.
  • 8. The toy system of claim 7, wherein the processing circuit is further configured to: based on determining that the proficiency has increased, adjust parameters used for the detection that the infant is in a head-raised-prone-body position.
  • 9. The toy system of claim 1, wherein the processing circuit comprises an external device interface configured to communicatively interface with an external computing device.
  • 10. The toy system of claim 9, wherein the external device interface enables at least one of a configuration of the toy system using the external computing device and monitoring of a performance of the infant.
  • 11. A method for training tummy time, the method comprising: capturing an image of an infant using a camera of a toy system; detecting, by a detection engine executing on a processing circuit, a body position of the infant in the image of the infant; based on a detection that the infant is in a head-raised-prone-body position, generating an activation signal; and activating a toy component of the toy system to provide a stimulus to the infant, based on the activation signal.
  • 12. The method of claim 11, wherein the detection that the infant is in the head-raised-prone-body position comprises a detection of the infant being in the head-raised-prone-body position for at least a predetermined time.
  • 13. The method of claim 11, wherein the detection that the infant is in the head-raised-prone-body position comprises a detection of the infant being in the head-raised-prone-body position with a head of the infant being raised at least a certain distance.
  • 14. The method of claim 11, further comprising, prior to detecting that the infant is in a head-raised-prone-body position: determining a baseline for an ability of the infant to be in the head-raised-prone-body position.
  • 15. The method of claim 11, further comprising: tracking a proficiency of the infant reaching the head-raised-prone-body position.
  • 16. The method of claim 15, further comprising: based on determining that the proficiency has increased, adjusting parameters used for the detection that the infant is in a head-raised-prone-body position.
  • 17. A non-transitory computer readable medium (CRM) storing computer readable program code for operating a toy system, wherein the computer readable program code causes a computer system of the toy system to: capture an image of an infant using a camera of the toy system; detect, in the image of the infant obtained by the camera of the toy system, a body position of the infant; based on a detection that the infant is in a head-raised-prone-body position, generate an activation signal; and activate a toy component of the toy system to provide a stimulus to the infant, based on the activation signal.
  • 18. The non-transitory CRM of claim 17, wherein the computer readable program code further causes the computer system of the toy system to: prior to detecting that the infant is in a head-raised-prone-body position: determine a baseline for an ability of the infant to be in the head-raised-prone-body position.
  • 19. The non-transitory CRM of claim 17, wherein the computer readable program code further causes the computer system of the toy system to: track a proficiency of the infant reaching the head-raised-prone-body position.
  • 20. The non-transitory CRM of claim 19, wherein the computer readable program code further causes the computer system of the toy system to: based on determining that the proficiency has increased, adjust parameters used for the detection that the infant is in a head-raised-prone-body position.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of priority under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application Ser. No. 63/442,046 filed on Jan. 30, 2023, which is hereby incorporated by reference herein in its entirety.

Provisional Applications (1)
Number Date Country
63442046 Jan 2023 US