Toy pieces in the form of toy bricks such as LEGO® brand toy bricks have been available for many decades. Toy bricks typically have releasable couplings between bricks, which allow them to be connected to form a larger structure. In their simplest form they build inanimate objects such as castles or houses. In some cases, the toy created using toy bricks can be supported on a baseplate having coupling elements to provide stability or proper positioning, or both, for the toy.
An advancement of toy bricks was the addition of bricks with a rotating joint or axle coupled to a wheel. Such a toy brick can be attached to an inanimate structure in order to make that structure roll along a surface when pushed.
A further advancement of toy bricks was the addition of “pull back motors.” These motors are mechanical energy storage elements, which store energy in a watch spring or flywheel. Typically these are toy bricks which have the “pull back motor” mechanism contained within the brick. There is a shaft from the mechanism, which when turned in one direction winds up the motor and then when released will turn in the opposite direction. A toy brick car, for example, equipped with such a motor will wind up when pulled back and then go forwards when released. An example of this is the LEGO Pullback Motor.
The next stage of advancement of a toy brick is an electric motor contained within one brick, having a protruding shaft, and another toy brick with a battery compartment. These battery and motor bricks can be coupled to each other directly or through wires in order to create a simple mechanism that is electrically actuated. Typically a switch is present on the brick containing the batteries that can turn the motor on or off or reverse its direction. Variations on the actuator can include lights instead of a motor. An example of this is the LEGO eLab.
Toy bricks containing motors and toy bricks containing batteries can be further enhanced by the insertion of a remote control receiver in between them, such that the passage of power can be modified remotely. Typically a hand held remote control transmitter transmits a signal to a receiver brick, which can change the speed or direction of the motor. By way of example, a toy brick vehicle constructed in such a manner can be steered remotely and also have its speed controlled remotely. An example of this is the LEGO Power Functions.
The most complex state of prior art is the programmable robotics kit sold by the LEGO Group under the trademark Mindstorms®. The kit typically includes a handheld programmable computer, to which sensors and actuators can be plugged in, along with toy bricks and specialized components for making a variety of projects. Actuators can be motors, solenoids, speakers, or lights. Sensors can be switches, microphones, light sensors or ultrasonic rangefinders. By way of example, a program can be downloaded into the handheld computer so as to control a motor in a manner so as to avoid collisions with objects in the direction of motion. Another example would be to make a noise when motion is detected. Another programmable Mindstorms robot is the Micro Scout. It is a motorized wheeled robot in which several preprogrammed sequences can be executed when a light is shined on the robot.
US patent publication US2011/0217898 A1 describes a toy brick with a tilt sensor and lights of the same color turning on and off or flashing alternately in response to a shaking motion. U.S. Pat. No. 7,708,615 discloses a toy brick system having separate sensor bricks, logic bricks and function bricks. The following toy bricks also emit sound when a switch is closed: the LEGO Doorbell Brick #5771 and the LEGO Space Sound Brick #55206C05.
Various devices generate images on display screens. One type of image generating device is a computer, such as a pad computer, which can be designed to permit interaction with the computer through the display screen. This is commonly through touchscreen technology, which permits actions to be initiated by, for example, selecting appropriate icons on the display screen, as well as lines to be drawn on the display screen. In addition to touchscreen technologies, interaction with the computer through the display screen can also be through the use of devices commonly referred to as light pens. See, for example, U.S. Pat. No. 4,677,428. In light pen based interaction, images are generated on a Cathode Ray Tube (CRT) by excitation of the phosphor on the screen by an electron beam. This excitation causes the emission of light. Since a single point electron beam scans the image in a raster pattern, the light at any one point on the screen fades with time as the beam progresses to a different part of the screen. During the next scan of the screen the image is refreshed. The intensity at any one point on the screen will flicker at the refresh rate of the screen, and is typically a sawtooth type waveform with a fast rise and a slower decay if plotted in time. The light from any given point on the screen will increase sharply as the electron beam passes by that location, as long as the image is not completely black at that point on the screen. The display knows the position of the electron beam at any given time, and this position can be captured at the instant when a sharp jump in light level is seen by the light pen. By this method the light pen can be used as a pointing device, typically with additional buttons similar to mouse buttons, which are sometimes arranged so as to be mechanically activated when the pen is pressed against a surface.
A method transmits an optically encoded message image to a playing piece on an image display region of an image generating device. Position information relative to the position of the playing piece is sensed by the playing piece on the image display region. At least positional information is transmitted by the playing piece to the image generating device based on the sensed position information. The following is generated by the image generating device and displayed on the image display region: (1) an optically encoded message image only at the location of the playing piece as the playing piece moves over the image display region, the optically encoded message image including said position information, and (2) visual images elsewhere on the image display region.
In some examples the method can include one or more of the following. Initial position information can be provided on at least a portion of the image display region, and an optical receptor of the playing piece can be positioned at the at least a portion of the image display region. Position information can be displayed on a computer display screen, the computer display screen providing the image display region. Position information sensing can include using a playing piece comprising an optical receptor for receiving optical information from the image display region, the optical information including the position information; the optical receptor can receive position information in the form of display region grid coordinates. The position information sensing can be carried out with a playing piece having a size and shape to at least cover the optically encoded message image. The position information sensing can be carried out with the playing piece having a releasable coupling. The image display region can have an integrated touchscreen; the playing piece can be positioned on the touchscreen, and the touchscreen can be touched by a human user. The positional information transmitting step can transmit a unique identifier for the playing piece; the unique identifier can be an address into a data repository, the data repository comprising at least one of a local database, a remote database, and a look-up table, with the data repository including information regarding the playing piece. The visual images displayed on the image display region can be overlaid with a further visual image, the further visual image associated with the playing piece, and at least one of the visual images and the further visual image being dependent on the unique identifier of the playing piece.
In some additional examples the method can also include one or more of the following. A playing piece can be selected, the playing piece having first and second optical receptors positioned at first and second sides of the playing piece with the first and second sides facing different directions; the playing piece can be placed on the image display region with a chosen one of the first and second optical receptors facing the image display region; the visual images can be generated based at least in part on which of the first and second optical receptors is facing the display region. A playing piece having first and second optical receptors positioned spatially separated on the same side of the playing piece can be selected; the playing piece can be placed on the image display region with both the first and second optical receptors facing the image display region; the visual images can be generated based at least in part on the orientation of the second optical receptor with respect to the first optical receptor. The positional information transmitting step can comprise transmitting the positional information from a messaging transponder of the playing piece, with the receptor of the image generating device being a transponder capable of bi-directional communication with the messaging transponder; an actuator carried by the playing piece can be activated based on a message received by the messaging transponder from the image generating device.
In some further examples, first and second of the playing pieces can be placed at first and second positions on the image display region, and the optically encoded message image can be generated at each of the first and second positions on the image display region. First and second of the playing pieces can be placed at first and second locations on the image display regions of respective first and second image generating devices; the first and second image generating devices can be operably coupled; the visual images can be generated on the second image generating device at least partially based upon the positional information from the first playing piece. The playing piece can include an optical light guide to direct light from the image display region to one or more surfaces of the playing piece. An external environmental input or a user input can be sensed by a sensor of the playing piece, with information relating to the sensed input, in addition to said positional information, transmitted by the playing piece to the image generating device based on the sensed position information.
Other features, aspects and advantages of the present invention can be seen on review of the drawings, the detailed description, and the claims which follow.
The following description will typically be with reference to specific structural embodiments and methods. It is to be understood that there is no intention to limit the invention to the specifically disclosed embodiments and methods but that the invention may be practiced using other features, elements, methods and embodiments. Preferred embodiments are described to illustrate the present invention, not to limit its scope, which is defined by the claims. Those of ordinary skill in the art will recognize a variety of equivalent variations on the description that follows. Like elements in various embodiments are commonly referred to with like reference numerals.
The prior art discussed above consists of inanimate toy bricks suitable for small children, or more complex powered and wired or coupled toy brick elements, which must be assembled intelligently, in order to perform a function. The toy bricks which require intelligent coupling in order to perform a function are suitable for much older children. Examples of the toy brick described herein allow some animation functions to be experienced by younger children, without requiring them to understand electrical concepts. The toy bricks, as well as other playing pieces, are also well-suited for use with baseplate assemblies discussed below starting with
In addition, the prior art discussed above typically requires wiring between blocks to provide power to and control functions between the blocks. Such wires or connection between blocks distract from the object to be created by the blocks. Examples of the toy brick will also allow some functions to be achieved without the use of wires. While the toy brick building system disclosed in U.S. Pat. No. 7,708,615 does not require wires, it discloses the use of function bricks, sensor bricks and logic bricks which require intelligent assembly and thus may not be suitable for younger children.
An intent of the various examples of the toy brick is to provide the end user with a rich experience from a toy brick, without burdening the user with needing to gain knowledge of how that experience is delivered. Typically a user performs an action in order to initiate the experience; sensors and a controller within the toy brick detect the interaction of the user with the brick, and the toy brick then automatically performs an action in response to the stimulus.
As shown in
Such a toy brick 10 would perform a function in response to a stimulus. The function to be performed is dependent on the sensors present, the programming of the controller, and the actuators present on toy brick 10, which are discussed in detail below.
The provision of a rechargeable power source 29 within the toy brick 10 will allow the toy brick 10 to be incorporated into structures without the need for wires. Further, recharging capability will allow any model or other structure built with the toy brick 10 to exist without requiring disassembly for replacing or recharging the batteries. The ability to transfer electrical power without electrical contact will also allow the brick to be hermetically sealed, so as to be child friendly.
A function of some examples of the toy brick is to detect an input via the sensing element 30, then determine via computation or other logic as described below if the input conditions satisfy the predetermined requirements to actuate one or more actuators 34, and if so actuate one or more actuators 34, typically in sequence or simultaneously as per a predetermined pattern.
Sensing elements 30 can be one or more of the following: (1) a microphone 40 for reception of a sound encoded trigger, such as, but not limited to a clapping sound or voice recognition as shown in
A gripping force sensor 56, typically in the form of a strain gauge rosette as shown in
In some examples, not illustrated, toy brick 10 may be constructed so that it takes more force to decouple a component, such as power source 29, actuator 34 or sensing element 30, from housing 12 than it does to decouple the housing 12 of one toy brick 10 from the housing 12 of another toy brick 10.
Computing control element 32, in the example of
Peripherals can include but are not limited to: USB (Universal Serial Bus), USART (universal synchronous/asynchronous receiver transmitter), I2C (I-squared-C) computer bus, ADC (Analog to Digital Converter), DAC (Digital to Analog Converter), Timers, Pulse Width Modulators, Flash Memory, RAM Memory, EEPROM (Electrically Erasable Programmable Read Only Memory), Bluetooth interface, Ethernet interface, liquid crystal driver interface. An example of such microcontrollers would be the Texas Instruments TMS320LF28XX family or MSP430 family of microcontrollers.
Typically a microcontroller is designed to perform a specific task, and only requires a subset of all possible peripherals to be present in order to perform that task. Usually only the input and output of the peripheral devices are externally accessible via metal pins. The internal data and memory access bus structure is not typically connected to the externally accessible pins of the chip.
The microcontroller receives signals as electrical voltages or currents, presented to one or more of its externally accessible pins. These signals are typically sampled on a one time basis, continuously, or at regular time intervals by circuitry within the microcontroller, such as an analog to digital converter. The time course and amplitude of such a signal may be kept in the internal memory and analyzed by algorithms. By way of example, a speech recognition algorithm may analyze digitized speech from a microphone, or a motion detection algorithm may analyze signals from accelerometers or tilt switches.
The algorithms which analyze the digitized electrical signals can be written in a language such as Basic, C or Assembly. The algorithms may implement logical functions such as: “IF INPUT signal is GREATER THAN a VALUE THEN turn ON an OUTPUT”. The signals may in addition be transformed by transforms such as but not limited to the Fourier transform, or processed by feedback based algorithms in the S or Z domain such as Kalman filters. Other algorithms, such as neural network based fuzzy logic, are also implementable. Indeed almost any algorithm that can be run on a personal computer can be implemented on a microcontroller based design.
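By way of illustration only, the following is a minimal C sketch of the threshold logic described above. The functions read_adc() and set_output() are hypothetical stubs standing in for a microcontroller's ADC peripheral and output-pin drive, not the API of any particular device.

```c
/* A minimal sketch of the threshold logic described above.
 * read_adc() and set_output() are stubs standing in for the
 * microcontroller's ADC peripheral and an actuator pin. */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define THRESHOLD 512u              /* trip point, in raw ADC counts */

static uint16_t read_adc(void)      /* stub: pretend we sampled 700 */
{
    return 700u;
}

static void set_output(bool on)     /* stub: print instead of driving a pin */
{
    printf("output %s\n", on ? "ON" : "OFF");
}

int main(void)
{
    uint16_t sample = read_adc();
    /* IF INPUT signal is GREATER THAN a VALUE THEN turn ON an OUTPUT */
    set_output(sample > THRESHOLD);
    return 0;
}
```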
Signals received may also be from a communication device, such as a Bluetooth link to an external device such as an iPad® or other tablet computer. Such signals may contain a full message of actions to perform, requiring the microcontroller to perform those actions rather than attempt to make a decision as to whether actuation is warranted.
Computing control element 32, in the form of microcontroller 32, receives electrical signals, performs analysis of said signals and then performs an action. Signals for actuation are sent as electrical signals from the pins of microcontroller 32. By way of example, actuation such as making a noise may require microcontroller 32 to create a time course of electrical signal amplitudes, which may be accomplished by means of a DAC (Digital to Analog Converter) which varies the amplitude of the voltage on a pin of microcontroller 32. In another embodiment, actuation of a display, for example, may require microcontroller 32 to send out RGB (Red/Green/Blue) intensities to various display pixels in order to create an image.
Microcontroller 32 may in addition manage battery charging and also conservation of power by powering down peripherals, and even entering a low power mode (sleep mode) and only exit from the low power mode (wake up) at either certain intervals to check if signals are present, or may wake up due to a signal being presented to one or more peripherals which are capable of waking the microcontroller from a sleep state.
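The following is a minimal, hedged C sketch of such a sleep/wake power management loop. Here enter_low_power_mode(), signal_present() and handle_signal() are illustrative stubs standing in for vendor-specific microcontroller facilities, not real API calls.

```c
/* A minimal sketch of the sleep/wake power management loop described
 * above. enter_low_power_mode(), signal_present() and handle_signal()
 * are hypothetical stand-ins for vendor-specific MCU facilities. */
#include <stdio.h>
#include <stdbool.h>

typedef enum { WAKE_TIMER, WAKE_PERIPHERAL } wake_source_t;

static int ticks = 0;

static wake_source_t enter_low_power_mode(void)
{
    /* stub: a real MCU would halt here until a timer or peripheral
     * interrupt fires; we alternate wake sources for demonstration */
    return (++ticks % 2) ? WAKE_TIMER : WAKE_PERIPHERAL;
}

static bool signal_present(void) { return ticks >= 4; }   /* stub check */
static void handle_signal(void)  { printf("actuate!\n"); }

int main(void)
{
    for (int i = 0; i < 5; ++i) {
        wake_source_t why = enter_low_power_mode();
        if (why == WAKE_PERIPHERAL || signal_present())
            handle_signal();
        /* otherwise go straight back to sleep to conserve the battery */
    }
    return 0;
}
```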
Computing control element 32 analyzes the signals from the one or more sensing elements 30, as described below by way of example in
An example of a process for power management, signal detection and actuation is shown in
Actuators which generate the output of a toy brick 10 can be, but are not limited to, one or more light sources 80, as shown in
By way of example, in one embodiment, a single brick 10, similar to that shown in
In yet another embodiment, a single brick with integral solar power battery and Bluetooth receiver, again see
In yet another embodiment, shown used as a component of a racecar 100 in
In yet another embodiment, a clear brick 10, similar to that of
In yet another embodiment, as shown in
In yet another embodiment, not illustrated, a toy brick 10 with a camera 48 and an integral face or object recognition algorithm may greet a child with a sound such as “Hello John” when approached. The face to be recognized and the sound to be emitted by the brick may be user downloadable into the toy brick 10 via radio link. The face may even be self-learned from the video captured by the camera itself. Alternatively, when the face is recognized the toy brick may transmit a signal to a fixed or mobile computing device.
In yet another embodiment, a sequence of sensing and a sequence of actuation may be programmed, typically by an adult, into the toy brick 10, with perhaps the aid of a user interface running on a fixed or mobile computing device, with radio link or other connection to the toy brick. Once programmed, a child may interact with the brick in a much simpler manner.
In yet another embodiment, several different shaped bricks may be manipulated by a child or other user. The bricks will transmit their shape and position to a fixed or mobile computing device which will show the manipulation of the bricks, with correct shape and size in a virtual building environment on a display screen. Transmission of position may be done by GPS signal, or by a more localized triangulation method, such as through the use of a baseplate, on which the toy bricks 10 are supported, with triangulation capability. The following are three examples of methods of position triangulation.
Measurement of time delay of signals from a signal source of known position: One or more signal sources of known position may send a pulse (“ping”) or encoded message via sound, light or radio wave, at a certain time. The message may contain the time that this signal was sent. The message will be received at a later time by the object that is to be triangulated, in this case typically a toy brick 10. By receiving messages from 3 or more such sources of known positions, and by computing the distance to those sources by measuring the delay between the time that the signal was sent and the time that the signal was received, it is possible to triangulate by standard trigonometric methods the position of the object to be triangulated. A simplified embodiment of a toy brick baseplate can be constructed to be capable of triangulating an object, such as toy brick 10, placed upon it. Such a triangulating baseplate may contain four or more signal emitters at the corners, in the plane of the baseplate and also above the plane of the baseplate. These emitters will emit encoded signals, preferably simultaneously. Then by measurement of the time delay between reception of the signals, it would be possible to locate the three-dimensional position of a toy brick in the vicinity of the baseplate.
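A minimal C sketch of such a computation follows, shown in two dimensions for brevity with three emitters, and assuming the distances have already been derived from the measured time delays (distance = propagation speed × delay). The emitter coordinates and the demonstration position are illustrative.

```c
/* A minimal sketch of trilateration from time-of-flight distances to
 * three emitters of known position, as described above. Emitter
 * coordinates and the true position (3, 4) are illustrative. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* three emitters at known baseplate positions (units: cm) */
    const double ex[3] = { 0.0, 10.0, 0.0 };
    const double ey[3] = { 0.0,  0.0, 10.0 };

    /* distances as if already computed from the measured delays */
    double d[3];
    for (int i = 0; i < 3; ++i)
        d[i] = hypot(3.0 - ex[i], 4.0 - ey[i]);

    /* subtracting the circle equation of emitter 0 from those of
     * emitters 1 and 2 gives a 2x2 linear system in (x, y) */
    double a11 = 2.0 * (ex[1] - ex[0]), a12 = 2.0 * (ey[1] - ey[0]);
    double a21 = 2.0 * (ex[2] - ex[0]), a22 = 2.0 * (ey[2] - ey[0]);
    double b1 = d[0]*d[0] - d[1]*d[1] + ex[1]*ex[1] - ex[0]*ex[0]
                                      + ey[1]*ey[1] - ey[0]*ey[0];
    double b2 = d[0]*d[0] - d[2]*d[2] + ex[2]*ex[2] - ex[0]*ex[0]
                                      + ey[2]*ey[2] - ey[0]*ey[0];

    double det = a11 * a22 - a12 * a21;      /* solve by Cramer's rule */
    double x = (b1 * a22 - b2 * a12) / det;
    double y = (a11 * b2 - a21 * b1) / det;

    printf("triangulated position: (%.2f, %.2f)\n", x, y);  /* (3, 4) */
    return 0;
}
```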
Measurement of the position of known landmarks, by image analysis: The object to be triangulated may contain a camera and may compute its position by measurement of angles to various landmarks present in the image. By way of example, a toy brick 10 may contain a camera 48 and analyze the position of, for example, specific colored or marked bricks or flashing lights, placed in and above the plane of a base plate.
Measurement of the position of an object by analysis of its position relative to a known landscape: An object may be photographed in two or more, preferably orthogonal, views against a known landscape and its position computed. By way of example, a toy brick baseplate assembly may be constructed to contain two or more cameras capable of photographing the object in plan and elevation, against the baseplate and/or an orthogonal vertical wall with features present upon the baseplate/wall, such as uniquely marked bricks or flashing lights, whose positions are known.
The bricks may be cemented into position in the virtual environment by a gesture of the brick (such as but not limited to a clicking motion) or by pushing a button on the brick, as provided for in the computer algorithm described below with reference to
In yet another embodiment, a toy brick with an accelerometer may be placed in a brick constructed car, such as that shown in
In yet another embodiment, bricks may be grouped by an electronic addressing scheme, as described below with reference to
In another embodiment, such as shown in
The final algorithm to be discussed is the algorithm for avatar manipulation 152 shown in the flow diagram of
In some examples, computing control element 32 is a user reprogrammable computer control element in contrast with a computer control element that cannot be reprogrammed during normal use, but typically only in a manufacturing-type environment. Such reprogramming can take place in the manners discussed above with regard to the communication algorithm of
In some examples, toy brick 10 can generate an output based upon a currently sensed input value and a previously sensed input value, as opposed to a decision based on a current input only, such as a single push of a button. This aspect is based in part on things that happened prior to an event, e.g., two buttons pushed one second apart. In digital computing terms, current and previous means more than one clock apart, which in the current generation of computers running at, say, 4 GHz is 1/(4×10^9) s = 0.25 nanoseconds. A computer's ability to define NOW and BEFORE is defined by its clock speed, since it can only sense things once per clock cycle. However, it is possible to have an analog computer perform a continuous time integral; for example, the time integral of acceleration yields velocity, and a trigger could fire when the velocity, as computed by a continuous integral of acceleration, exceeds a certain value. In another example, toy brick 10 may be provided an input in the form of a signal received by RF transceiver 44 telling the toy brick to await further instruction in the form of an oral command received by microphone 40.
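As a concrete, illustrative C sketch of an output that depends on a previous input as well as the current one, the following triggers only when two button pushes arrive about one second apart; the timestamps are simulated rather than read from a hardware timer.

```c
/* A minimal sketch of an output decision that depends on a previous
 * input as well as the current one: trigger only when two button
 * pushes arrive about one second apart. Push times are simulated. */
#include <stdio.h>
#include <math.h>

static double last_push = -1.0;   /* time of the previous push, seconds */

static int on_button_push(double now)
{
    int trigger = (last_push >= 0.0) && fabs(now - last_push - 1.0) < 0.1;
    last_push = now;              /* the current input becomes history */
    return trigger;
}

int main(void)
{
    double pushes[] = { 0.00, 1.02, 5.00 };   /* simulated push times */
    for (int i = 0; i < 3; ++i)
        printf("push at %.2fs -> %s\n", pushes[i],
               on_button_push(pushes[i]) ? "TRIGGER" : "no action");
    return 0;
}
```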
In some examples, toy brick 10 can generate an output(s) or time course of output(s) based on the time course of an input(s), wherein the current output(s), or time course of output(s), is determined by mathematical computations based on previous input(s) as well as the current input(s). An example of this is a force or acceleration sensor(s), the signals from which can be integrated to find velocity and integrated again to compute position. Integration is the area under the curve, which is a function of the past history of the signal amplitude over time. In other examples, the mathematical function described can be altered in the field via wired or wireless download of new algorithms. An example of this is a brick which can emit green light when shaken, or can be, for example, reprogrammed via Bluetooth connection to emit red light when shaken. In a further example, each input has more than two possible states (with on and off being two states). Instead, each input may have a continuum of gradually changing values, such as would exist with the input from an accelerometer; the brick may be programmed to continuously change through all the colors of the rainbow as it is tilted in various orientations.
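The following minimal C sketch illustrates such a computation: simulated accelerometer samples are integrated once to obtain velocity and again to obtain position, and a trigger fires when the velocity exceeds a threshold. The sample data, sample period and threshold are illustrative assumptions.

```c
/* A minimal sketch of an output computed from the time course of an
 * input: acceleration is integrated to velocity and then position,
 * and an actuator fires when velocity exceeds a threshold. */
#include <stdio.h>

#define DT        0.01    /* sample period, seconds (illustrative) */
#define V_TRIGGER 0.05    /* velocity threshold, m/s (illustrative) */

int main(void)
{
    double v = 0.0, x = 0.0;   /* integrated velocity and position */
    int fired = 0;

    for (int i = 0; i < 100; ++i) {
        double a = (i < 50) ? 0.2 : 0.0;  /* stub accelerometer, m/s^2 */
        v += a * DT;                      /* first integral: velocity  */
        x += v * DT;                      /* second integral: position */
        if (!fired && v > V_TRIGGER) {
            fired = 1;
            printf("t=%.2f s: v=%.3f m/s, x=%.4f m -> actuate\n",
                   i * DT, v, x);
        }
    }
    return 0;
}
```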
In other examples, toy brick 10 can perform one way or two way communication with an external device wirelessly. The messaging between the devices is more complicated than the detection and/or generation of an instantaneous presence or absence of a signal, and is instead a decoding of the time course of such a signal, said time course carrying an embedded message. An example of this type of toy brick is one which responds to the complex on/off time course of pulsations of light carrying a message from, for example, an infrared remote control.
It can be seen by a person skilled in the art that such a self-contained brick with power, sensing, actuation and control elements within it sacrifices little of the complex function possible with the multi-brick prior art. Instead it allows a simple user experience for a small child, and shifts the burden of programming the function to the factory, a parent, a teacher, or an older child. The intelligent toy brick provides a much different, much more accessible user experience than the multi-brick intelligent systems described in the prior art.
Display screen 206 may be a flat panel display where the light generating pixels are directly visible, such as with the screens of tablet computers. Other examples may use a different implementation in which the image is generated remotely and transmitted to baseplate 202; one example of this is shown in
The image can also be transferred to the upper surface 214 of the baseplate 202 in other manners. Two such examples are shown in
Another type of three-dimensional imaging can be through the use of holographic projection. Holographic projection can be created by projecting a laser through a film that contains a prerecorded interference pattern of light from a solid object. A moving hologram can be created by replacing the film with a “Spatial Light Modulator” which can be an array of small movable mirrors as in a DLP chip. The mirrors can generate a varying interference pattern as would be created by a moving object, thus creating a moving hologram.
In some situations computer 204 includes a touch sensitive membrane 224 as a part of display screen 206 as shown in
In some examples, computer 204 will send an optically coded message as a series of intensity variations in time. These intensity variations will be received by toy bricks 10, capable of receiving and responding to the optically coded message, that have been placed onto baseplate 202. An example of what is sometimes referred to as an intelligent toy brick 10 including a light detector 42 is shown in
In some examples, it is possible to simultaneously stimulate more than one position with different optically encoded messages, since each patch of pixels, at each coupling element 14, may simultaneously have different encoded intensity variations, the message encoding the position being stimulated. It is possible for one or more toy bricks 10 to simultaneously communicate with one or more receptors 236, as is done by way of example in CDMA (code division multiple access) cell phones or in anti-collision NFC tags. Each toy brick 10 mounted to baseplate 202 will send the message it receives from the display screen 206 in addition to information about the properties of the toy brick, thereby enabling the image generating device 204 to compute the position and type of the toy bricks placed upon it.
It can be seen by a person skilled in the art that the intensity variations encoding the message sent by the image generating device 204 can be at a level imperceptible to a user viewing the entire display region 208, but detectable by sensitive electronics on the toy brick 10 placed upon the display region 208. The encoding can be of adequate complexity so as to even be detectable over the intensity variations of a moving image. By way of example, the encoded message may be encoded on a carrier of a known frequency, as, for example, IR remote controls encode the message on a carrier at 40 kHz or so. An example of a miniature optical receiver is the SFH506 IR receiver/demodulator device made by Siemens, which is a fully integrated device capable of delivering a digital data stream from a modulated light signal. Such encoding allows signals resulting from the varying of an image to be distinguished from the encoded message, in much the same manner as one radio station can be heard even though many radio stations and sources of radio frequency noise are present in the ether simultaneously.
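By way of a hedged illustration, the following C sketch simulates the principle of carrier-based recovery: on/off-keyed carrier bursts are separated from a slowly varying image brightness by tracking the slow component, removing it, and envelope-detecting what remains. The frequencies, filter constants and threshold are illustrative assumptions and do not describe the SFH506 or any actual device.

```c
/* A minimal sketch of recovering an on/off-keyed message riding on a
 * carrier, so slow image brightness changes are not mistaken for
 * data. All constants are illustrative. */
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define FS    400000.0   /* sample rate, Hz */
#define FC     40000.0   /* carrier frequency, Hz */
#define N_BIT    400     /* samples per bit (1 ms per bit) */

int main(void)
{
    const int bits[8] = { 1, 0, 1, 1, 0, 0, 1, 0 };  /* message sent */
    double slow = 0.5;   /* slow-drift tracker, pre-settled to mean */
    double env  = 0.0;   /* envelope detector state */

    for (int b = 0; b < 8; ++b) {
        for (int i = 0; i < N_BIT; ++i) {
            double t = (b * N_BIT + i) / FS;
            /* received light: slow visual-image drift plus carrier
             * bursts when the current bit is a one */
            double light = 0.5 + 0.3 * sin(2 * M_PI * 2.0 * t)
                         + (bits[b] ? 0.2 * sin(2 * M_PI * FC * t) : 0.0);

            slow += 0.001 * (light - slow);       /* track slow drift  */
            double hp = light - slow;             /* remove the drift  */
            env += 0.01 * (fabs(hp) - env);       /* envelope detector */
        }
        printf("bit %d decoded as %d\n", bits[b], env > 0.05);
        env = 0.0;   /* reset between bits for this simple demo */
    }
    return 0;
}
```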
The communication from the image generating device 204 to the toy brick 10 includes one or more of information requests and information sent, such as but not limited to: send brick type information, send avatar image, send gaming powers/weapons possessed, receive new avatar image, receive new gaming powers/weapons, and enable RFID/RF transponder for X seconds.
The communication from the toy brick 10 back to the display computer 204 through receptor 236 can be by way of example but not limited to:
The communications from the toy brick 10 to the baseplate assembly 200 contain information such as but not limited to:
The message from the display can be encoded in space rather than time, such as a one-dimensional or two-dimensional barcode.
An example of a formal software implementation of a scanning routine is as shown in
Similarly, as shown in
Further, as shown in
The modulation function U(n)(t) can be simple amplitude modulation of a carrier, such as A·sin(ωt), or a more complex scheme like CDMA which allows many devices to talk at once.
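As a minimal, illustrative C sketch of the CDMA idea mentioned above, two senders below share the channel simultaneously using orthogonal Walsh codes, and a correlating receiver separates their bits; the codes and data values are arbitrary demonstration choices.

```c
/* A minimal sketch of CDMA-style sharing: two senders transmit at
 * once using orthogonal Walsh codes, and correlation at the receiver
 * separates their bits. Codes and bit values are illustrative. */
#include <stdio.h>

#define CHIPS 4

int main(void)
{
    const int walsh_a[CHIPS] = { 1,  1,  1,  1 };   /* code, sender A */
    const int walsh_b[CHIPS] = { 1, -1,  1, -1 };   /* code, sender B */
    const int bit_a = 1, bit_b = -1;                /* +1 / -1 data bits */

    /* both senders transmit at once; the channel simply sums them */
    int channel[CHIPS];
    for (int i = 0; i < CHIPS; ++i)
        channel[i] = bit_a * walsh_a[i] + bit_b * walsh_b[i];

    /* the receiver correlates with each code; orthogonality cancels
     * the other sender's contribution */
    int corr_a = 0, corr_b = 0;
    for (int i = 0; i < CHIPS; ++i) {
        corr_a += channel[i] * walsh_a[i];
        corr_b += channel[i] * walsh_b[i];
    }
    printf("A sent %+d, B sent %+d\n", corr_a / CHIPS, corr_b / CHIPS);
    return 0;
}
```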
The contents of the data received from a stimulated brick can then be stored in another 1024×768 RAM. In this manner information, such as the positions, gaming powers/weapons or Avatar images, of all toy bricks placed on the display baseplate is made available to any concurrently running gaming software, as a “map”. By way of example, a block diagram of the data path for such a scheme is as shown in
A playing piece 10 which can interact with a baseplate assembly 200 capable of triangulating its position in a manner as shown in
Tablet computers and smart phones with embedded NFC readers, such as the Google Nexus 10, typically have smaller interrogation coils which do not encircle the entire display screen 206 as shown in
It is also possible to have a toy brick or other playing piece 10 as shown in
Coupling elements 14 may be loose fitting bumps or pockets on the baseplate so as to constrain the bricks in the plane of the display but allow them to be easily removed when lifted up from the plane of the display. As suggested in
A higher density of LEDs, or other light emitters 246, per releasable coupling element 14 in structure such as shown in
Examples of baseplate assembly 200 have the ability to ascertain the position, orientation and characteristics of a toy brick 10 placed upon it, by passive means such as a camera and optical recognition, or by active means such as but not limited to RFID or radio frequency triangulation. The toy bricks 10 placed upon baseplate 202 may in addition have sensors on them to transmit their orientation and motion. By way of example, a toy brick figure when manipulated in a waddling or walking manner may cause the scenery displayed on the baseplate to advance as if the toy brick figure were walking through the environment.
The manipulation of smaller toy bricks 10 across upper surface 214 of baseplate 202 may also cause avatars in 2D or 3D to appear on display screen 206 and interact with other features of the displayed image. The virtual characteristics of a toy brick or toy brick figure may be stored in nonvolatile memory on the baseplate assembly 200 or even in nonvolatile memory on the toy brick 10 being manipulated. Further, the virtual characteristics of the toy brick being manipulated may change due to interaction with the environment on upper surface 214 of baseplate 202. The changed characteristics may be retained in the physical toy brick 10, or elsewhere, such as at a remote location on the internet, such that when the toy brick is taken to a different baseplate assembly 200, the current baseplate assembly 200 may recall the exact environment on the display screen 206 of the prior baseplate assembly 200 and also the characteristics of the avatar from the previous interactive experience with the prior baseplate assembly.
The interaction between the baseplate assembly 200 and the toy brick 10 placed upon it may be two-way. By way of example, a toy brick 10 that is equipped with a similar but smaller display device may receive images to be displayed on its surface, dependent on its position on the baseplate. By way of example, a figural toy brick 10 may change its displayed image to a beach garment when moved onto a beach scene on the baseplate 202. By way of another example, a toy brick could make a splashing noise when placed on a part of a display region 208 which has a water feature; the display screen 206 may in addition show the resulting water splash.
A baseplate assembly 200 with triangulation capability may also be used as a virtual building environment. A toy brick 10 that is moved over upper surface 214 can cause an avatar of the same toy brick 10 to appear on display screen 206, and then by a clicking/cementing motion/gesture, the avatar associated with that toy brick may be cemented to a virtual structure, and the procedure repeated. The avatar need not be of the same shape as the physical toy brick, and selection of the shape of the avatar may be by menu structure displayed on display screen 206 or even by some physical manipulation of the toy brick or other triangulatable object.
In another example, the display screen 206 may show schematic instructions, for example, for building a toy brick structure or even an electrical circuit with circuit elements made of releasable couplings such as in Snap-Circuits® sold by Elenco Electronics, Inc., of Wheeling, Ill. The exact life size image of the building block or circuit element may be displayed on the display screen 206 under the releasable coupling elements 14 where it is to be snapped in, so that a child may create the assembly with ease.
It should be noted that an image generating device 204 may have all the features that, by way of example, an iPad or similar computing device can have. By way of example, one or more of the following may be possible: reaction of the image to touch, rechargeable power supply, programmable response to motion or time course of motion or orientation, integral camera, Bluetooth connection, Wi-Fi connection, NFC reader, ability to play movies, ability to display a touch sensitive interactive game, ability to send and receive audible signals or optically encoded transmissions, and the like.
In another embodiment, baseplate assembly 200 may form a board game such as a Monopoly® board game. The Monopoly figures, houses, and hotels may all be toy brick pieces, and their motion and position may be automatically sensed as discussed above. By way of another example, a game of Scrabble® may be played with toy bricks with letters on them being placed on upper surface 214 displaying a Scrabble game board; the score may even be automatically computed and displayed by automatic identification of the position and type of toy bricks 10, acting as letter tiles, placed on baseplate 202.
In another embodiment, players of a game may interact with a baseplate assembly 200 by means of smaller computing devices such as smart phones. Each player may affect the main displayed image on display screen 206 by means of software on the baseplate assembly 200 which communicates with software on the smaller computing devices. The smaller computing devices may in addition have clear baseplates attached, and placement of toy bricks on the baseplates of the smaller devices may affect a displayed image or game in the larger baseplate assembly 200, or even on a display screen 206 with no baseplate 202. Several smaller devices may simultaneously or sequentially communicate with, and affect the environment of, the larger baseplate assembly 200. The environment may be fully interactive, such that, by way of example, Monopoly money may be taken from one player and given to another player, and the amounts displayed on the main baseplate assembly 200, or even transferred between the smaller computing devices, depending, by way of example, on movement of toy brick figures on the main baseplate assembly 200.
In another embodiment, it is also possible to extend and route the display image and messaging in a third dimension away from the plane of the display with the use of opaque, translucent or clear toy bricks 10 with optical fibers 274 or other light guides embedded in them as shown in
An example of an image generating and playing-piece-interacting assembly 296 is shown in
The optically encoded message image 235 is a one way signal from the display screen 206 of image generating device 204, and sometimes through display region 208, to the optical display message sensor 237 of playing piece 10. Optical display message sensor 237 generates a first signal 241 based at least in part on the optically encoded message image 235 and is a component distinct from any other sensor on the playing piece 10.
The second signal 238 is a one-way, or a two-way, transaction between the messaging transponder 248 of the playing piece 10 and the receptor 236. This messaging transponder 248 on the playing piece 10 is distinct from any other actuator on the playing piece. The messaging transponder 248 can use, by way of example but not limited to, an NFC, WiFi, Zigbee, Bluetooth, or infrared signal.
Sensors 30 are distinct from the optical display message sensor 237 which receives the first signal 235. Sensors 30 may include components such as but not limited to temperature sensors, touch sensors, and force sensors. In some examples, toy piece 10 does not include any sensors 30.
Actuators 34 are distinct from the messaging transponder 248 on the playing piece 10 which creates and transmits the second signal 238. Actuators 34 may be, but are not limited to, light emitters or sound emitters or another transponder on the playing piece 10. As with sensor 30, in some examples, toy piece 10 does not include any actuators 34.
Receptor 236 communicates with the messaging transponder 248 on the playing piece 10. The receptor 236 may be a one way or two way transponder. The following are examples of methods of triangulation of toy pieces 10 using optically encoded message images 235, thereby determining the physical location of a playing piece 10, typically relative to the display screen 206.
In a first example, the same optically encoded message image 235 is scanned sequentially across patches of pixels of the display screen 206. In this example, the message is essentially “turn on messaging transponder 248”. The receipt of the first optically encoded message image by the optical display message sensor 237 turns on the messaging transponder 248, described as a transmitter/transceiver in
In another example, a different first optically encoded message image 235 is sent at different physical locations of the display screen 206. These different message images 235 can be sent simultaneously at all locations or scanned one patch of pixels at a time. The differences between the message images can be, by way of example but not limited to, determined by encoding the X,Y coordinates of the location which is being stimulated. The playing piece 10 receives this message via the optical display message sensor 237 and can, when communicating with the receptor 236 at a subsequent time, by way of the messaging transponder 248, not necessarily coincident with the time of receipt of the first optically encoded message image 235, send the contents of first optically encoded message image 235 received in addition to data about the playing piece 10 itself. The image generating device 204 then knows the position of the playing piece 10 and the type of playing piece 10.
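The following C sketch illustrates, under assumed and simplified message formats, the flow just described: the display encodes the X,Y coordinates of the stimulated patch, and the playing piece echoes those coordinates together with its own identifier. The structures and the identifier value are hypothetical, not a defined protocol.

```c
/* A minimal sketch of the message flow described above: the display
 * encodes the coordinates of the stimulated patch, and the playing
 * piece echoes them with its identity. Formats are illustrative. */
#include <stdio.h>
#include <stdint.h>

typedef struct {             /* first signal: optical message image */
    uint16_t x, y;           /* coordinates of the stimulated patch */
} display_message_t;

typedef struct {             /* second signal: transponder reply */
    uint32_t piece_id;       /* unique identifier of the playing piece */
    uint16_t x, y;           /* echoed position from the first signal */
} piece_reply_t;

int main(void)
{
    display_message_t m = { 128, 96 };       /* patch being stimulated */

    /* the piece's optical sensor received m; build the reply */
    piece_reply_t r = { 0xA11CE5u, m.x, m.y };

    /* the image generating device now knows both position and type */
    printf("piece %06X is at patch (%u, %u)\n",
           (unsigned)r.piece_id, (unsigned)r.x, (unsigned)r.y);
    return 0;
}
```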
Messaging can also be used in addition to or instead of triangulation. For example, optically encoded message image 235 can contain data for actuators 34 on the playing piece 10. For example, the data for an actuator 34 can be to turn the playing piece 10 to a blue color. This optically encoded message image 235 may be sent coincident with a visual image 223 showing water, such that any playing piece 10 placed on the visual image of water will turn blue. It should be noted that this does not require generation of a second signal 238 to receptor 236, nor does it require triangulation of the position of the playing piece 10.
In another example, second signal 238 sent by the messaging transponder 248 on the playing piece 10 to the receptor 236 may contain additional data from sensors 30 on the playing piece 10 in addition to other data. For example, the temperature of the playing piece 10 may be sent to receptor 236, or the push of a button on the playing piece 10 can send a “shoot” signal to the receptor.
The message interaction involving second signal 238 between the messaging transponder 248 on the playing piece 10 and the receptor 236, may be a two way communication, which can send data for actuators 34 on the playing piece 10. For example, speech can be sent to a speaker type of actuator on the playing piece 10 by way of the second message interaction.
Two or more playing pieces 10 on the display screen 206, or on the display region 208 of a baseplate 202 when used, may interact with each other through the display screen based first signal 235 and subsequent second signal 238 to the receptor 236. Examples include but are not limited to the following.
Two playing pieces 10 may be placed and oriented to face each other and a shoot button type of sensor 30 on each toy piece pushed; the progress of the bullet or other projectile is shown on the display screen 206, either directly on the display screen or as viewed on the display region 208 when a baseplate 202 is used. This could be followed by the playing piece 10 turning red if hit. Such an interaction uses the first and second signals 235, 238 to compute position, with the second signal 238 in addition encoding the shoot button being pushed, and with the one way optically encoded message image, or the second signal, which is a two way transaction in this example, sending a command to the playing piece 10 being hit to turn red.
Two or more playing pieces 10 on the display screen 206, or baseplate 202 when used, may interact with each other directly through piece-to-piece signal 254, without using the display-side receptor 236. For example, the playing pieces 10 may compute their positions with the information in the first display message image 235. Then the playing pieces 10 may communicate directly with other playing pieces 10 using the messaging transponder 248 or another separate transponder; receptor 236 is not involved in the transaction.
In prior art, systems are described where the optically encoded message images are visible to the user for short periods of time, as scans are performed to locate the playing piece. In more complex embodiments of prior art the message images destined for the playing piece are made invisible to the user by, by way of example but not limited to, the use of invisible radiation, or by the use of high speed modulation of visible radiation which the eye cannot discern, but which a message sensor can discern and filter out from the much slower visual image. However, for commercial success, it is likely necessary for any messaging method to be compatible with the current installed base of displays, in the form of tablets, PC screens, and the like. In the current installed base the refresh rate of the display is typically 60 Hz and at most 240 Hz in high end systems, because such displays are optimized for human viewing of visual images, and humans typically perceive flickering below a 60 Hz refresh rate. When message images are sent via a display designed for human viewing, a problem arises: by way of example, if a screen 206 is divided into 256 squares in X by 256 squares in Y, then resolving position requires 8 bits (2^8=256) in X and the same in Y. There then needs to be a repeating pattern of 16 transitions of white (1) and black or grey (0) transmitted at each location or patch of pixels, which if transmitted at a (best case) 240 Hz refresh rate will yield a pattern that lasts about 16/240, or around one tenth of a second. To get smooth motion tracking, it is necessary to get about 10 updates of position per second, and it can be seen that 10 updates per second, each lasting 1/10th of a second, occupy the entire time with the message image, so a visual image cannot be shown for any appreciable amount of time at this location. One possible way around this problem is to flicker the visual image itself in order to create the message image; by using encoding schemes such as Manchester encoding, which sends a one as 10 and a zero as 01, a time invariant visual image can be made not to appear to flicker, since the 10 and 01 variation occurs at 240 Hz/2, above the flicker threshold of humans. The dimming caused by the 50% on/off ratio of a Manchester encoded image can, however, be mitigated by increasing the brightness of the pixels. To be clear, in such an instance the 1 is sent as a 2× bright visual image and a 0 is sent as black, such that on a 01 or 10 an average 1× bright visual image is seen. However a problem arises when the visual image itself is a time variant image, such as a movie or moving gaming image, which changes in brightness; in such an instance the variation of the message image and the variation of the visual image are at about the same frequency and cannot be easily distinguished from each other. A further problem occurs if the visual image is dark, such that the modulation of the image does not yield enough difference between a one (dark image) and a zero (black) signal to discern the message.
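A minimal C sketch of the Manchester scheme just described follows: each data bit becomes two display frames, a one sent as (2× bright, black) and a zero as (black, 2× bright), so that a static pixel of nominal brightness averages to its nominal value over every bit period. The brightness values are illustrative.

```c
/* A minimal sketch of the Manchester encoding described above: each
 * bit becomes two frames whose average equals the pixel's nominal
 * visual brightness, hiding the flicker from the user. */
#include <stdio.h>

int main(void)
{
    const int bits[4] = { 1, 0, 0, 1 };   /* message bits for one pixel */
    const double B = 0.5;                 /* nominal visual brightness  */

    for (int i = 0; i < 4; ++i) {
        /* doubling the bright half compensates the 50% on/off dimming */
        double f1 = bits[i] ? 2.0 * B : 0.0;
        double f2 = bits[i] ? 0.0 : 2.0 * B;
        printf("bit %d -> frames (%.1f, %.1f), average %.1f\n",
               bits[i], f1, f2, (f1 + f2) / 2.0);
    }
    return 0;
}
```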
According to the technology disclosed herein, a solution to the problem of sending a message image without the message image interfering with the visual image is to send the message image only under the, typically opaque, playing piece 10, and the visual image in other areas, such that the user sees the visual image and the optical sensors under the playing piece 10 see the message image. Both images can then be optimized for the intended recipient, user or sensor, without compromise. In such a scheme it is necessary to track the playing piece 10 and dynamically move the window containing the message image to keep it under the optical message sensor 237 in the playing piece 10 as the playing piece is moved across the screen 206. In the instance that the playing piece 10 is very small, of dimensions approaching those of the optical sensor, the message image would appear as a small glowing area under and around the playing piece 10, which would still not appreciably interfere with the visual image.
In such a further embodiment the optically encoded message image 235 M(n)(t) of
The re-centering process is further illustrated in
The playing piece 10 will thus appear to drag a messaging window 301 containing message image 235, as shown in
The message image M(n)(t) can also be time invariant, being a static image M(n) which varies with physical position (n). In this case M(n) can encode the x,y coordinates on the screen 206. The plurality of messages M(n) over pixels (n) or patches of pixels (n) on the screen 206 can be thought of as forming an image over the entire screen 206, wherein, given a view of a small portion of the entire image, the position of that small portion within the entire image of the screen 206 can be determined with a priori knowledge of the pattern displayed.
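By way of illustration, the following C sketch shows one assumed way a static pattern M(n) could encode grid coordinates, packing 8 bits of x and 8 bits of y into a 16-bit code word per patch, which a sensor reading a single patch can decode back into its own position. The packing is an arbitrary choice, not a prescribed format.

```c
/* A minimal sketch of a static positional pattern M(n): each patch
 * carries a code word from which its own grid coordinates can be
 * read back. The 8-bit/8-bit packing is illustrative. */
#include <stdio.h>
#include <stdint.h>

#define GRID_W 256
#define GRID_H 256

static uint16_t encode_patch(uint8_t gx, uint8_t gy)
{
    return (uint16_t)((gx << 8) | gy);   /* 8 bits of x, 8 bits of y */
}

static void decode_patch(uint16_t code, uint8_t *gx, uint8_t *gy)
{
    *gx = (uint8_t)(code >> 8);
    *gy = (uint8_t)(code & 0xFF);
}

int main(void)
{
    /* the camera under the playing piece sees the code of one patch */
    uint16_t seen = encode_patch(37, 201);

    uint8_t gx, gy;
    decode_patch(seen, &gx, &gy);
    printf("playing piece is over grid patch (%u, %u) of %ux%u\n",
           gx, gy, GRID_W, GRID_H);
    return 0;
}
```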
The static pattern can, for example, be simply displayed coordinates as shown in
This static pattern M(n) can be thought of as being present under the visual image at all times. The static pattern shows through the window 301 in the visual image 223 under the playing piece 10 which contains an optical message sensor 237, such as a camera, and movement of the playing piece will cause a change in the image received by the camera, which can be used to compute the new position of the playing piece 10. The new computed position of the playing piece is used, in turn, to center the window 301 in the visual image 223 into the messaging image, so as to keep the optically encoded message image 235 in window 301, typically centered under the camera or other optical message sensor 237 carried by the playing piece. The messaging window 301 will thus appear to drag along with the playing piece 10 as the playing piece is moved.
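The re-centering just described can be sketched in C as a simple tracking loop, shown below with illustrative sensed positions: whenever the position decoded through the window differs from the window center, the window is moved to the new position so the message image stays under the sensor.

```c
/* A minimal sketch of the window re-centering loop described above.
 * The sensed positions stand in for coordinates decoded from the
 * pattern seen by the piece's camera; all values are illustrative. */
#include <stdio.h>

typedef struct { int x, y; } point_t;

int main(void)
{
    point_t window = { 100, 100 };            /* current window center */
    point_t sensed[] = { {100,100}, {104,101}, {109,103}, {115,106} };

    for (int i = 0; i < 4; ++i) {
        /* position decoded from the pattern seen through the window */
        point_t p = sensed[i];
        if (p.x != window.x || p.y != window.y) {
            window = p;                       /* re-center under sensor */
            printf("window moved to (%d, %d)\n", window.x, window.y);
        }
    }
    return 0;
}
```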
To initially determine the position of the playing piece 10 and start the windowed messaging process, the static pattern can be displayed across the entire image display region 208 for a short period of time and then collapsed into windows 301 where sensors or cameras 237 are detected, or the static pattern can be made visible, for example, on the edges of the screen 206, or in a permanent messaging area 302 on the screen 206, which then spawns a window under the playing piece 10 as the playing piece 10 is dragged across the area containing the static pattern. It should be noted that the position can be determined much more finely than the granularity of M(n) by interpolating between one or more messages M(n) that are simultaneously visible to the camera or other image message sensor 237. In this instance the resolution of position is limited only by the resolution of the camera. The static image can also be combined with a time varying modulation of the image to convey further information.
Use of windows which contain the optically encoded message images 235 under the playing pieces 10 allows the gaming or other visual image 223 and the messaging image 235 to coexist without the visual gaming image 223 interfering with the optically encoded message image 235, or the messaging image 235 interfering with the visual gaming image 223. This method allows the triangulation of a playing piece 10 on a standard visual display without the need for extra emitters of invisible radiation or coils for sending magnetic or radio frequency signals encoding position.
A further layer of visual effects can be added over the visual image, said effects being linked to the position and orientation of the playing piece 10. These effects can move synchronized with the movement and orientation of the playing piece 10. These effects can be, by way of example but not limited to, another visual image in the form of a jet blast 303 from the rear of a rocket shaped toy 10 as shown in
The playing piece 10 can also have multiple facets, such as a die; one or more facets may contain optical message sensors 237, and the visual image or the visual image overlay may depend on which facet and optical message sensor 237 is faced towards the screen 206.
It should also be noted that the messaging transponder 248 and a single optical message sensor 237 can form a single module, and several such modules can be implanted at different points within a single playing piece 10. With each transponder 248 having a unique ID, the positions of the transponders within the playing piece 10 and the behavior of the playing piece 10 can be linked to one or more unique IDs at the time of manufacture of the playing piece 10; this behavior can be stored in a remote or local database accessible by the image generating device 204.
The messaging window 301 containing the optically encoded message image 235 may be generated, for example, one per optical message sensor 237, or one per playing piece 10, with a single window transmitting different message images to a plurality of sensors 237. The windows need not be circular in shape and can be any arbitrary shape. Typically the size and shape of the playing piece 10 are sufficient to cover the message images 235. The unique ID of the playing piece 10 gives the image generating device 204 knowledge of which points on the screen 206 are not covered by the playing piece 10 and are therefore visible to the user. In this instance, by way of example but not limited to, the centroid of the message images 235 would track the centroid of the optical message sensors 237.
The above descriptions may have used terms such as above, below, top, bottom, over, under, et cetera. These terms may be used in the description and claims to aid understanding of the invention and not used in a limiting sense.
While the present technology is disclosed by reference to the preferred embodiments and examples detailed above, it is to be understood that these examples are intended in an illustrative rather than in a limiting sense. It is contemplated that modifications and combinations will occur to those skilled in the art, which modifications and combinations will be within the spirit of the technology and the scope of the following claims. For example, images may be transmitted to display region 208 using a fiber optic array extending between image generating device 204 and the display region of the baseplate 202 as shown in
The following clauses describe aspects of various examples of the technology disclosed. The reference numerals are included for convenient reference to the drawing figures and not in a limiting sense.
1. A method for transmitting an optically encoded message image 235 to a playing piece 10 on an image display region 208 of an image generating device 204, comprising:
sensing, by the playing piece 10, position information relative to the position of the playing piece 10 on the image display region 208;
transmitting, by the playing piece 10, at least positional information to the image generating device 204 based on the sensed position information; and
generating by the image generating device and displaying on the image display region 208:
an optically encoded message image 235 only at the location of the playing piece 10 as the playing piece 10 moves over the image display region 208, the optically encoded message image 235 including said position information; and visual images 223 elsewhere on the image display region 208.
2. The method according to clause 1, further comprising:
providing initial position information on at least a portion 302 of the image display region 208; and
positioning an optical receptor 237 of the playing piece 10 at the at least a portion 302 of the image display region 208.
3. The method according to clause 1 or 2, further comprising displaying position information on a computer display screen, the computer display screen providing the image display region.
4. The method according to any of clauses 1-3, wherein the position information sensing further comprises using a playing piece 10 comprising an optical receptor 237 for receiving optical information from the image display region, the optical information including the position information.
5. The method according to clause 4, wherein the optical receptor 237 receives position information in the form of display region grid coordinates.
6. The method according to any of clauses 1-5, wherein the position information sensing is carried out with a playing piece 10 having a size and shape to at least cover the optically encoded message image 235.
7. The method according to any of clauses 1-6, wherein the position information sensing is carried out with the playing piece having a releasable coupling.
8. The method according to any of clauses 1-7, wherein the image display region has an integrated touchscreen, and further comprising:
positioning the playing piece 10 on the touchscreen; and
touching the touchscreen by a human user.
9. The method according to any of clauses 1-8, wherein the positional information transmitting step comprises transmitting a unique identifier for the playing piece 10.
10. The method according to clause 9, further comprising using the unique identifier as an address into a data repository, the data repository comprising at least one of a local database, a remote database, and a look-up table, the data repository including information regarding the playing piece 10.
11. The method according to either of clauses 9 or 10, further comprising:
overlaying the visual images 223 displayed on the image display region 208 with a further visual image 303, the further visual image associated with the playing piece 10; and
at least one of the visual images and the further visual image being dependent on the unique identifier of the playing piece.
12. The method according to any of clauses 1-11, further comprising:
selecting a playing piece 10 having first and second optical receptors 237 positioned at first and second sides of the playing piece, the first and second sides facing different directions;
placing the playing piece 10 on the image display region 208 with a chosen one of the first and second optical receptors 237 facing the image display region 208; and
generating the visual images 223 based at least in part on which of the first and second optical receptors is facing the display region 208.
13. The method according to any of clauses 1-12, further comprising:
selecting a playing piece 10 having first and second optical receptors 237 positioned spatially separated on the same side of the playing piece;
placing the playing piece 10 on the image display region 208 with both the first and second optical receptors 237 facing the image display region 208; and
generating the visual images 223 based at least in part on the orientation of the second optical receptor 237 with respect to the first optical receptor 237.
14. The method according to any of clauses 1-13 wherein:
the positional information transmitting step comprises transmitting said positional information from a messaging transponder 248 of the playing piece 10; and
the receptor 236 of the image generating device 204 is a transponder capable of bi-directional communication with the messaging transponder 248.
15. The method according to either of clauses 13 or 14, further comprising activating an actuator carried by the playing piece 10 based on a message received by the messaging transponder 248 from the image generating device 204.
16. The method according to any of clauses 1-15, further comprising:
placing first and second of said playing pieces 10 at the first and second positions on the image display region 208; and
generating the optically encoded message image 235 at each of the first and second positions on the image display region 208.
17. The method according to any of clauses 1-16, further comprising:
placing first and second of said playing pieces 10 at first and second locations on the image display regions 208 of respective first and second of said image generating devices;
operably coupling the first and second image generating devices; and
generating the visual images 223 on the second image generating device at least partially based upon the positional information from the first playing piece.
18. The method according to any of clauses 1-17, wherein the playing piece 10 comprises an optical light guide to direct light from the image display region 208 to one or more surfaces of the playing piece.
19. The method according to any of clauses 1-18, further comprising:
sensing, by a sensor of the playing piece 10, an external environmental input, or user input; and
transmitting, by the playing piece 10, in addition to said positional information, information relating to the sensed input, to the image generating device 204 based on the sensed position information.
Any and all patents, patent applications and printed publications referred to above are incorporated by reference.
This application is related to the following US patents: U.S. Pat. No. 9,403,100, Attorney Docket number KARU 1002-1; U.S. Pat. No. 9,561,447, Attorney Docket KARU 1002-11; U.S. Pat. No. 9,168,464, Attorney Docket number KARU 1002-8; and U.S. Pat. No. 9,555,338, Attorney Docket number KARU 1002-9.