The disclosed subject matter relates to the field of layout applications for interior design and, more particularly, to methods and systems for using an interior decoration design system based on augmented reality (AR) technology for presenting objects placed in an AR scene.
The following references may be considered to be relevant as background art to the presently disclosed subject matter.
CN106791778A discloses an AR (Augmented Reality) technology based interior decoration design system. The system comprises an image information collecting unit for collecting and recording lighting information, interior azimuth information and original equipment information in a to-be-decorated room; a modularized editing unit for receiving data transmitted by the image information collecting unit and performing modularized editing on the original scenery information and environment information in the room; and an AR presenting unit. The system disclosed in CN106791778A can design and show the interior decoration style vividly in advance by adopting AR technology, so that a user can experience the room more realistically in advance, and further improvements can be made during the decoration process.
US2020118339A1 discloses a system for augmented reality layout that includes an augmented reality layout server and an augmented reality layout device comprising a processor; a non-transitory memory; an input/output; a model viewer providing two-dimensional top, three-dimensional, and augmented reality views of a design model; a model editor; a model synchronizer, which aligns and realigns the design model with a video stream of an environment; a model cache; and an object cache. Also disclosed is a method for augmented reality layout that includes creating a model outline, identifying an alignment vector, creating a layout, verifying the design model, editing the design model, and realigning the design model.
WO2018099400A1 discloses an augmented reality-based interior design system, comprising: an environmental information collector; a modeling module comprising a scene simulator and an object simulator, the scene simulator generating a virtual scene module on the basis of three-dimensional information collected by the environmental information collector; a display module displaying the virtual object module over a real-world scene in a superimposing manner, or displaying the virtual scene module and the virtual object module over a real-world scene in a superimposing manner; and an input module operated by a user to change the spatial attribute and/or form attribute of the simulated object. The described augmented reality-based interior design system, by using the environmental information collector to collect three-dimensional information of the user's home and modeling on the basis of that information, can better adapt to the particular environment and requirements of the user.
US2021133850A1 discloses techniques for providing a machine learning prediction of a recommended product to a user using augmented reality include identifying at least one real-world object and a virtual product in an AR viewpoint of the user. The AR viewpoint includes a camera image of the real-world object(s) and an image of the virtual product. The image of the virtual product is inserted into the camera image of the real-world object. A candidate product is predicted from a set of recommendation images using a machine learning algorithm based on, for example, a type of the virtual product to provide a recommendation that includes both the virtual product and the candidate product. The recommendation can include different types of products that are complementary to each other, in an embodiment. An image of the selected candidate product is inserted into the AR viewpoint along with the image of the virtual product.
It will be appreciated that acknowledgement of the above references is not to be inferred as meaning that these are in any way relevant to the patentability of the presently disclosed subject matter.
In accordance with an aspect of the disclosed subject matter there is provided an interior decoration system operable by a user, comprising:
In accordance with another aspect of the disclosed subject matter, there is provided an interior decoration method for providing an augmented reality viewpoint to a user, the method comprising:
In accordance with yet another aspect of the disclosed subject matter there is provided a system, the system comprising:
In a further aspect of the disclosed subject matter a method is provided, the method comprising:
Any one or more of the following features, designs and configurations can be applied to a system and method according to the aspects of the present disclosure, separately or in various combinations thereof:
It will be appreciated that further aspects and various embodiments and features pertaining to these or other aspects of the present subject matter are discussed in the description and illustrated in the drawings. Features from any of the disclosed embodiments may be used in combination with one another, without limitation. In addition, other features and advantages of the present disclosure will become apparent to those of ordinary skill in the art through consideration of the following detailed description and the accompanying drawings.
In order to better understand the subject matter that is disclosed herein and to exemplify how it may be carried out in practice, non-limiting examples of embodiments will now be described, with reference to the accompanying drawings, in which:
The following description sets forth exemplary methods, systems, techniques, instruction sequences, computer programs, applications and the like. Such description is not intended as a limitation on the scope of the present disclosure but is instead provided as a description of exemplary embodiments. The specific details in the explanation are set forth for the purpose of understanding various embodiments of the disclosed subject matter. It will be evident to those skilled in the art that the embodiments of the disclosed subject matter may be practiced with or without such details. The description does not show in detail all possible examples of protocols, structures, techniques, etc. Where operations are presented, the presentation is not limited to the specific order or sequence of operations or method steps as disclosed.
As presented herein, one or more embodiments disclosed herein are described in the context of an augmented reality (AR), application-based space (e.g., a room) design and decoration service for a user. As such, for purposes of the present disclosure, the term “user” will be used in reference to a potential consumer or person who operates an AR-capable mobile computing device executing an AR mobile application in some physical, real-world environment space (e.g., a room) in an effort to visualize that space enhanced with added content, such as products recommended to the user which the user would like to have in the space.
In general, AR is the integration of digital information with the user's real-world environment in real time. An AR system may create a composite view for the user that combines the physical space (or part of it) viewed by the user with a virtual scene generated by the computer that augments the scene with additional information. For example, a retailer may provide an application that lets the user view their room with a piece of artwork or a plant superimposed on the room, the artwork having been chosen, for example, to fit the décor style of the room, the dimensions of the room or a part of it, and/or its type. If the room is a nursery, the artwork that is proposed will be stylistically suited to the room type, taking into account at least the color scheme of the room, the theme, and/or the location of the placement proposed for the artwork by the application. In general, a variety of AR-capable mobile computing devices (also referred to as “electronic device(s)”) are envisioned for use in the disclosed subject matter, such as smart mobile phones, tablet computers, head-mounted displays, AR headsets and/or glasses with a built-in display, hereinafter a “device” capable of performing the method of the disclosed subject matter.
Using the application in accordance with an example of the disclosed subject matter, a user can initiate an AR viewing session using a user interface on the device. During the AR viewing session, the mobile computing device uses input received from a combination of one or more image sensors and/or motion sensors to generate a representation that corresponds to the real-world physical space in which the mobile computing device is being operated.
The representation is constructed from mobile sensors, which may provide streams of multiple types of data such as, but not limited to, RGB images, depth images obtained from 3D sensors, and motion-sensor data, referred to herein collectively as the “image sensor”. It should be noted that depth images can further be derived from RGB images using, for example, machine learning. This representation data, referred to hereafter as the “real-world environment image”, is used to determine the position of the mobile computing device capturing the image relative to physical features of the space, objects and/or various surfaces and planes in the images being captured by the image sensor of the electronic device. It will be appreciated that the image sensor is capable of capturing a single image (e.g., a still image) or a sequence of images (e.g., a video stream) to represent the real-world environment image.
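By way of a non-limiting illustration only, the per-frame representation described above may be bundled roughly as in the following Python sketch; the class name and fields are hypothetical assumptions and are not prescribed by the disclosed subject matter:

    from dataclasses import dataclass, field
    from typing import Optional
    import numpy as np

    @dataclass
    class CapturedFrame:
        """One sample of the real-world environment image stream."""
        rgb: np.ndarray                          # H x W x 3 color image from the camera
        depth: Optional[np.ndarray] = None       # H x W depth map from a 3D sensor, if available
        camera_pose: np.ndarray = field(default_factory=lambda: np.eye(4))  # 4x4 device pose
        intrinsics: Optional[np.ndarray] = None  # 3x3 camera intrinsics for back-projection

        def has_depth(self) -> bool:
            # Depth may come from a 3D sensor or be estimated from the RGB image by a learned model.
            return self.depth is not None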
Furthermore, using computer vision and object recognition analysis, the image (or images) received from the user's device is analyzed to identify 3D features of the space, such as the floor, walls, obstacles, doors, windows, etc., which define the boundary of the room, and objects such as a sofa, table, artwork, plants, lamps, etc. (and their attributes) present in the image. Accordingly, the information extracted from analyzing an image is used to calculate boundary points of the scene, to identify one or more designable areas (e.g., on a wall, on a floor, on a table) while taking into account the boundaries of each designable area in determining other areas, and to query one or more databases of products to quickly and efficiently identify products that may be both complementary to the objects identified in the image and suited to the style and type of the room in the image. It will be appreciated that while the application makes “automated” recommendations and suggestions for the designable area, the user may manipulate the suggestions, switching between different product quantities, options and types.
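A minimal sketch of how these steps could be orchestrated is given below; the step functions are passed in as callables because the disclosure does not mandate any particular detector, classifier or database, and all names here are illustrative assumptions:

    def propose_products(frame, detect_features, classify_scene, find_areas,
                         categories_for_area, query_products):
        """Hypothetical end-to-end flow for one captured frame."""
        features = detect_features(frame)          # walls, floor, doors, windows, furniture, etc.
        scene = classify_scene(frame, features)    # room type, style, color scheme
        proposals = []
        for area in find_areas(features):          # free regions bounded by the detected features
            categories = categories_for_area(area, scene)   # e.g., artwork, shelves, plants
            proposals.append((area, query_products(categories, area, scene)))
        return proposals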
During the viewing session, the user of the AR mobile application can view a physical space, augmented with a designable area marking mask superimposed thereon and/or superimposed image(s) of product(s) selected and positioned by the user and proceed to purchase the product(s) chosen during the session using the same application, e.g., “in-app purchasing.” It will be appreciated that the designable marking mask may be a mask of a product, e.g., a generic representation of a product (e.g., plant, lamp, curtains, table, sofa, frame, etc.). Such a generic representation may be a hologram-like representation or a generic “blueprint” representation of the product category proposed by the application or as selected by the user from available options provided or supported by the application (seen in
Other aspects of the disclosed subject matter will be readily appreciated from the description of the figures that follow. In the following, the components of an embodiment of a system for augmented reality layout 100 are described with reference to
The components of the system described with reference to
In general, a user interacts with the system using any one of a variety of AR-capable mobile computing electronic devices 110, such as mobile phones, tablet computers, head-mounted displays, glasses with a built-in heads-up display, etc.
Various modules, electronic devices, and engines are referenced in the system 100 of
In some examples, the processor(s) of any of the modules, electronic devices, and/or engines described herein includes hardware for executing instructions (e.g., instructions for carrying out one or more portions of any of the methods disclosed herein), such as those making up a computer program. For example, to execute instructions, the processor(s) of the modules, electronic devices, and/or engines may retrieve (or fetch) the instructions from an internal register, an internal cache, the memory, or a storage device and decode and execute them. In particular examples, processor(s) may include one or more internal caches for data. As an example, the processor(s) may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (TLBs). Instructions in the instruction caches may be copies of instructions in memory or the storage device. In some examples, the processor may be configured (e.g., include programming stored thereon or executed thereby) to carry out one or more portions of any of the example methods disclosed herein.
In some examples, the processor is configured to perform any of the acts (e.g., analyzing, determining, processing, transmitting, generating) described herein in relation to the respective modules, electronic devices, and/or engines, or cause one or more portions of the modules, electronic devices, and/or engines to perform at least one of the acts disclosed herein. Such configuration can include one or more operational programs (e.g., computer program products) that are executable by the at least one processor.
The modules, electronic devices, and/or engines may include at least one memory storage medium. For example, the modules, electronic devices, and/or engines may include memory operably coupled to the processor(s). The memory may be used for storing data, metadata, and programs for execution by the processor(s). The memory may include one or more of volatile and non-volatile memories, such as Random Access Memory (RAM), Read Only Memory (ROM), a solid state disk (SSD), Flash, Phase Change Memory (PCM), or other types of data storage. The memory may be internal or distributed memory.
The modules, electronic devices, and/or engines may include a storage device having storage for storing data or instructions. The storage device may be operably coupled to the at least one processor. In some examples, the storage device can comprise a non-transitory memory storage medium, such as any of those described above. The storage device (e.g., non-transitory storage medium) may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive or a combination of two or more of these. The storage device may include removable or non-removable (or fixed) media. The storage device may be internal or external to the modules, electronic devices, and/or engines. In some examples, storage device may include non-volatile, solid-state memory. In some examples, the storage device may include read-only memory (ROM). Where appropriate, this ROM may be mask programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory or a combination of two or more of these. In some examples, one or more portions of the memory and/or the storage device (e.g., memory storage medium(s)) may store one or more databases thereon.
In some examples, computer-readable instructions may be stored in a memory storage medium such as one or more of the at least one processor (e.g., an internal cache of the processor), the memory, and/or the storage device of the modules, electronic devices, and/or engines described herein. In some examples, the at least one processor may be configured to access (e.g., via a bus) the memory storage medium(s), such as one or more of the memory or the storage device. For example, the at least one processor may receive and store the data (e.g., look-up tables) as a plurality of data points in the memory storage medium(s). The at least one processor may execute programming stored therein adapted to access the data in the memory storage medium(s) to automatically perform any of the acts described herein.
The modules, electronic devices, and/or engines described herein also may include one or more I/O devices/interfaces, which are provided to allow a user to provide input to, receive output from, and otherwise transfer data to and from the computing device. These I/O devices/interfaces may include a mouse, keypad or a keyboard, a touch screen, camera, optical scanner, network interface, web-based access, modem, a port, other known I/O devices or a combination of such I/O devices/interfaces. The touch screen may be activated with a stylus or a finger. The I/O devices/interfaces may include one or more devices for presenting output to a user, including, but not limited to, a graphics engine, a display (e.g., a display screen or monitor), one or more output drivers (e.g., display drivers), one or more audio speakers, and one or more audio drivers. In certain examples, I/O devices/interfaces are configured to provide graphical data to a display for presentation to a user. The graphical data may be representative of one or more graphical user interfaces and/or any other graphical content as may serve a particular implementation.
The modules, electronic devices, and/or engines described herein also may include a communication interface. The communication interface may include hardware, software, or both. The communication interface can provide one or more interfaces for communication (such as, for example, packet-based communication) between the modules, electronic devices, and/or engines or one or more networks. For example, the communication interface may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or other wire-based network, or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a WI-FI network.
Any suitable network and any suitable communication interface may be used. For example, the modules, electronic devices, and/or engines described herein may communicate with an ad hoc network, a personal area network (PAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), or one or more portions of the Internet or a combination of two or more of these. One or more portions of one or more of these networks may be wired or wireless. As an example, the modules, electronic devices, and/or engines may communicate with a wireless PAN (WPAN) (such as, for example, a BLUETOOTH WPAN), a WI-FI network, a WI-MAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network), or other suitable wireless network or a combination thereof. The modules, electronic devices, and/or engines may include any suitable communication interface for any of these networks, where appropriate.
The modules, electronic devices, and/or engines described herein may include a bus. The bus can include hardware, software, or both that couples components of the modules, electronic devices, and/or engines to each other. For example, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, or another suitable bus or a combination thereof.
Returning to
In accordance with some embodiments, the image collecting, processing and management engine 130 receives individual image(s), or in some instances a sequence of images or video stream, from the electronic device 110, represents these in real time on the user display (e.g., the display device) through the user interface 120, stores these for processing in a database and processes these using the computer vision (CV) and classification engine 140. It will be appreciated that the engine 140 may comprise multiple modules functioning independently or simultaneously, in a sequence or in a cycle (e.g., using the one or more processing engines).
The image input is processed to identify 3D features which define the scene boundary (floor, walls, obstacles, objects, etc.) and the objects present therein, to map out the scene using the CV and classification engine 140, and to classify the scene type, style and theme (indoor, outdoor, room, bedroom, nursery, living room, outdoor veranda, etc.), as well as the materials, color scheme and design style of the real-world environment (e.g., modern, classic, industrial, gothic, etc.). In addition, the system can perform a value assessment of the quality of the captured environment in order to propose product categories and products that would suit the general style or theme of the environment (e.g., luxurious, affordable, etc.). The data is stored in a data storing module and fed into the designable area recognition and suggestion engine 150 to identify, based on the end point boundaries of the processed features, at least one area that is a candidate for decoration, referred to herein as a “designable area”.
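Purely as an illustrative sketch (the labels would in practice come from trained classifiers, and the names below are assumptions rather than part of the disclosed system), the classification output and a rough color-scheme estimate might look as follows:

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class SceneProfile:
        room_type: str        # e.g., "nursery", "living room", "outdoor veranda"
        design_style: str     # e.g., "modern", "classic", "industrial"
        value_tier: str       # e.g., "affordable", "luxurious"
        palette: list         # dominant colors of the captured scene

    def dominant_colors(rgb: np.ndarray, k: int = 3, step: int = 32) -> list:
        """Very rough palette estimate: quantize the pixels and keep the k most common bins."""
        quantized = (rgb.reshape(-1, 3) // step) * step
        bins, counts = np.unique(quantized, axis=0, return_counts=True)
        return bins[np.argsort(counts)[::-1][:k]].tolist()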
It will be appreciated that the designable area engine further performs an analysis of all identified designable areas to ensure there is no overlap between them, or between any of them and any of the features of the scene in the image. Thus, one or more designable areas may be a free space on a wall at a height suitable for product placement (e.g., an artwork, shelves) between a TV set and a standing lamp (as will be further described in examples of screen shots seen in
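The overlap test itself can be as simple as pairwise rectangle intersection on the supporting plane. The sketch below is one possible (assumed) formulation and is not the only way the engine could be implemented:

    from typing import List, Tuple

    Rect = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max) on a wall or floor plane

    def overlaps(a: Rect, b: Rect) -> bool:
        ax0, ay0, ax1, ay1 = a
        bx0, by0, bx1, by1 = b
        return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

    def non_overlapping_areas(candidates: List[Rect], features: List[Rect]) -> List[Rect]:
        """Keep only candidates that clear every scene feature and every already-accepted area."""
        accepted: List[Rect] = []
        for area in candidates:
            if all(not overlaps(area, f) for f in features) and \
               all(not overlaps(area, kept) for kept in accepted):
                accepted.append(area)
        return accepted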
The engine 150 further receives input from the CV and classification engine 140 on the type of the area identified as a candidate (e.g., wall, floor, table top, etc.). The user interface is then presented with a 2D/3D layout of the designable area over the real-world scene displayed on the display of the device, and may present one or more suggestions on a type of product(s) compatible with the area identified as a candidate designable area. It will be appreciated that the designable area can be a feature recognized by the system and not represented on the user interface per se, in which case the system will present on the user interface a mask or a representation of the product (e.g., as seen in
Next, in a system operation 202, the 2D and/or 3D data is analyzed to sense boundary points (boundary points, as used herein, refer to boundaries of the 2D and/or 3D features of the environment such as, e.g., depth data, a wall, a door, a floor, windows, obstacles, etc., as well as of objects and other products identified in the real-world scene, and further to boundary points of the one or more designable areas once these are defined in operation 203).
In operation 203, the system, using the designable area recognition and suggestion engine 150, defines one or more designable areas for product placement, taking into its calculations at least the output of the operation performed at 202 and reiterating the process to ensure that any recognized designable areas do not overlap with any of the objects and features present in the real-world scene or with any of the other designable areas, and further that the designable areas follow the decoration rules given as instructions to the system (e.g., if a designable area is on the floor, that it will not obstruct access to any nearby objects such as a door; if it is on a wall, that the area is at a height suitable, for example, for hanging an artwork, etc.).
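Decoration rules of the kind mentioned above could be expressed as simple predicates evaluated per candidate area. The following sketch assumes rectangles in meters on the relevant plane; the thresholds and names are illustrative assumptions only:

    def wall_area_ok(area, min_center_height=1.2, max_center_height=1.8):
        """Hypothetical rule: a wall placement should be centered at roughly eye level (meters)."""
        _, y0, _, y1 = area
        center = (y0 + y1) / 2.0
        return min_center_height <= center <= max_center_height

    def floor_area_ok(area, doors, clearance=0.9):
        """Hypothetical rule: a floor placement must leave a clearance corridor at every door."""
        ax0, ay0, ax1, ay1 = area
        for dx0, dy0, dx1, dy1 in doors:
            # Inflate the door footprint by the required clearance and test for intersection.
            if ax0 < dx1 + clearance and dx0 - clearance < ax1 and \
               ay0 < dy1 + clearance and dy0 - clearance < ay1:
                return False
        return True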
At operation 204, the system generates one or more categories for the designable area and classifies the data (e.g., dimensions, location, orientation, etc.) of the designable area in relation at least to the surrounding environment of the real-world scene and, optionally, to other designable areas identified by the system.
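For illustration only, the classified data of a designable area could be carried in a small descriptor such as the one below (an assumed structure, not a required one), which downstream steps can use when ranking categories and products:

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class DesignableArea:
        surface: str          # "wall", "floor" or "table_top"
        center: np.ndarray    # 3D position of the area in the scene (meters)
        normal: np.ndarray    # orientation of the supporting surface
        width: float          # extent along the surface (meters)
        height: float         # extent along the surface (meters)

        @property
        def aspect_ratio(self) -> float:
            return self.width / self.height if self.height else float("inf")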
A virtual environment is further generated at 205 to allow a display of a virtual mask layout in one or more of one dimension, two dimensions or three dimensions. The layout displayed by the system at 205 can be of the virtual mask of a designable area that the system recognized and defined using, for example, engine 150. For the one or more designable areas, the system may provide a display of one or more categories generated for the designable area at 204. It will be appreciated that the designable area can be a feature recognized by the system and not represented on the user interface as per step 206, in which case the system will present on the user interface a mask or a representation of a product following a query for product suggestion as discussed hereinbelow.
The system is further configured to initiate a query for product suggestions, as seen at 207, based on the user input and selection from the suggested categories, and to receive the suggested products as input in the form of small icons or images of the suggested products and their design variations, to be displayed on the display using the user interface as shown in 208.
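As a hedged sketch of the product query (assuming an in-memory catalog of dictionaries with category, physical dimensions and a thumbnail reference, and a designable-area object like the descriptor sketched above), the lookup could be:

    def query_products(catalog, categories, area, max_results=8):
        """Hypothetical query; a deployed system would instead hit a product database or marketplace API."""
        hits = [p for p in catalog
                if p["category"] in categories
                and p["width"] <= area.width
                and p["height"] <= area.height]
        # Prefer products that fill the designable area most closely.
        hits.sort(key=lambda p: area.width * area.height - p["width"] * p["height"])
        return hits[:max_results]

The returned entries would then be shown as the small icons or images referred to at 208.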
Upon selection of a product, the system is configured to execute a set of instructions to create, using the modeling engine (e.g., 140), a render of the product that may be superimposed on the real-world environment scene image on the display of the electronic device in a virtual augmented reality environment. Thus, the system makes it possible for the user to virtually view the positioning of the suggested one or more products and appreciate their compatibility with the desired design of the space, further allowing for changes, redesign, etc., before actually placing the products in the physical environment. The system may further be provided with a module that allows the user to view a price of the recommended one or more products, add the product(s) to a virtual shopping cart and purchase them. It will be appreciated that other features may be present in the system to allow suggestion of product(s), display, purchasing, and sharing of the suggested design using applications connected to the application (electronic mail, text messaging, social media applications, etc.). In the latter case, the system may have a module for remote consultation, by either sharing a screen or presenting the designable areas remotely on a remote device, and for displaying and purchasing products from marketplaces or from the local database of products.
It will be appreciated that a consultation may include sharing views of the real-world environment images before and after the virtual placement of the suggested products. Such sharing can further be accompanied by a digital link to the suggested products and their attributes (category, size, shape, price, etc.).
Sharing can also be in the context of locating an installer for the suggested product, which can further comprise a database of potential installers provided to the user with an option of communicating with one or more of such installers through the system (not shown).
In accordance with some examples, a user initiates an AR application on their electronic device (e.g., a mobile computing device such as a phone, an AR headset, etc.) by selecting a user interface via the display of the device. The user directs the device, and in particular the image sensor (e.g., a camera), towards a real-world environment for which the user desires to receive recommendations from the application (e.g., as seen in
At 302, a server engine receives and stores an image (or images) as captured or derived with the image sensor on the electronic device 110 (e.g., using a module as shown in
In accordance with the disclosed subject matter the one or more CV and classification engines (e.g., feature identified as 140 in
At the next step 303, the data from the classified images is analyzed and processed to yield an output comprising the attribute information of the real-world scene in the image(s), which is stored (e.g., as 2D data, 3D data including data of the 3D features, surface data, spatial data, etc.) in a database.
At method operation 304, boundary points of the attributes in the real world scene data are analyzed to identify, using a sense engine (as seen in
Next, at 305, one or more candidate designable areas within the real-world environment are defined as an output based on at least the sense engine module output. The output (candidate designable area) is analyzed at method operation 306 for its spatial attributes (e.g., size, dimensions, location, orientation, proximity to other objects, etc.) relative to the real-world environment scene data of operation 302 and to other designable areas, based on the data from 304. Based on the analysis, it may be determined which product category (one or more) is suitable for the designable area (e.g., if the designable area is on a wall, a product category may be shelves, artwork, etc.); if a table or other furniture is present, it may propose placement of a related product category thereon (e.g., a plant, lamp, tablecloth, tableware, etc.) or thereunder (e.g., a carpet). At 307, the application method may present on the user's device the image(s) of the real-world environment and a virtual mask layout of the one or more designable areas (or a virtual mask of a product category), with or without one or more suggestions related to one or more product types suitable to the attributes of the real-world environment being captured by the user. Optionally, the system may present the user with a virtual mask (e.g., “blueprint”, hologram, generic representation, etc.) of the product at the designable area to allow the user to visualize the proposed product category (seen in
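One possible (assumed) mapping from the designable-area type and scene profile to candidate product categories is sketched below; the category lists are examples only and would normally be data-driven:

    def categories_for_area(area, scene):
        """Hypothetical rule set: surface type drives the base categories, the scene refines them."""
        if area.surface == "wall":
            categories = ["artwork", "shelves", "mirror"]
        elif area.surface == "table_top":
            categories = ["plant", "lamp", "tableware"]
        else:  # floor
            categories = ["carpet", "floor plant", "side table"]
        if scene.room_type == "nursery":
            # Example refinement: favor child-friendly decor for a nursery.
            categories = [c for c in categories if c != "mirror"] + ["wall decals"]
        return categories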
At method operation 308, upon receiving an indication of a selection by the user of the at least one designable area recognized and/or suggested in 307, a query is generated using attributes of the category and type of product recommended for the designable area and executed against a database of products to identify and present to the user on the user interface a set of candidate product images.
At method operation 309, upon user indication of a selection of the product, the product is rendered on the virtual designable area and superimposed over the real-world environment scene as represented on the display of the user using an AR mode on the user interface. It will be appreciated that the product selection and placement options are further manipulatable by the user through manual input using the user interface, the placement options being relative to the designable area and corresponding to the real-world data identified by the image sensor. Variations of placements are presented to the user until the user confirms the placement and may then proceed to, e.g., purchasing the recommended product(s) using the application by adding the product to the online shopping cart (not shown).
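For rendering, the placement of the selected product can be reduced to a pose and a scale derived from the designable area. The sketch below (assuming the DesignableArea descriptor above; the actual anchor and rendering calls of the AR framework are not shown) is one way such a transform could be computed:

    import numpy as np

    def placement_transform(area) -> np.ndarray:
        """Hypothetical 4x4 pose centering a product on the designable area, facing along its normal."""
        z = area.normal / np.linalg.norm(area.normal)
        up = np.array([0.0, 1.0, 0.0])
        x = np.cross(up, z)
        if np.linalg.norm(x) < 1e-6:      # the surface normal points straight up (e.g., a floor)
            x = np.array([1.0, 0.0, 0.0])
        x = x / np.linalg.norm(x)
        y = np.cross(z, x)
        pose = np.eye(4)
        pose[:3, 0], pose[:3, 1], pose[:3, 2] = x, y, z
        pose[:3, 3] = area.center
        return pose

    def fit_scale(area, product_width, product_height, margin=0.95) -> float:
        """Uniform scale so the rendered product stays inside the designable area."""
        return margin * min(area.width / product_width, area.height / product_height, 1.0)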
Turning now to
As seen in
Upon user input of the choices for the type and the artwork, the display will present renders 445 thereof over the designable area, the renders being generated by the modeling engine discussed, e.g., in
As further seen in
It will be appreciated that the application can provide the user with either fixed-dimension frames or other products, or can propose site-specific frame dimensions that would be suited to the designable area and the overall real-world environment being designed. In this connection, while reference is made to one or more frames, the modularity of the products that could be proposed to the user applies to other product categories as well, such as shelves or shelving systems, sofa or living room arrangements, tables, garden decks, etc.
Upon selection of the products for one or more of the designable areas, as seen in
Attention is now directed to
While various aspects and embodiments have been disclosed herein, other aspects and embodiments are contemplated. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting.
This application claims priority to U.S. Provisional Application No. 63/308,149 filed on 9 Feb. 2022, the disclosure of which is incorporated herein, in its entirety, by this reference.
International application: PCT/IB2023/051176, filed Feb. 9, 2023 (WO).