This application claims priority to Korean Patent Application No. 10-2019-0107799, filed on Aug. 30, 2019, the contents of which are all hereby incorporated by reference in their entirety.
The present disclosure relates to an intelligent device for providing a customized service to a user, and a method for controlling the same.
Various applications for providing different services may be installed in an electronic device. However, the user information that the applications installed in the electronic device can access is limited, and even when the user directly executes a specific application, it is difficult to expect a decent-quality service that matches the user's characteristics. Therefore, there is a need for a method of providing a service customized for each user.
The present disclosure aims to address the aforementioned need and/or problem.
In addition, the present disclosure is to provide a customized service based on comprehension of an individual.
In addition, the present disclosure is to derive a profile for providing a customized service by processing/analyzing data corresponding to privacy data only within a device.
In addition, the present disclosure is to precisely derive a profile for providing a customized service.
In addition, the present disclosure is to effectively use a user's profile for providing a customized service.
In addition, the present disclosure is to improve security when it comes to utilizing the user's profile.
It is to be understood that technical objects to be achieved by the present disclosure are not limited to the aforementioned technical objects and other technical objects which are not mentioned herein will be apparent from the following description to one of ordinary skill in the art to which the present disclosure pertains.
In one general aspect of the present disclosure, there is provided a method for controlling an intelligent device that generates user profile data to provide a customized service, the method including: collecting source data related to an individual characteristic of a user; determining at least one individual characteristic by analyzing the source data; and generating the user profile data by aggregating the individual characteristic, wherein the source data is data related to at least one of information on an application installed in the intelligent device and an operation record of the application, and wherein the individual characteristic is a characteristic related to at least one service among multiple services provided through applications installed in the intelligent device.
The individual characteristic may be related to at least one of a gender, a marital status, whether the user has a child, whether the user has a pet, a means of transportation, an occupation, or a preferred brand.
The collecting of the source data may include: collecting information on at least one application installed in the intelligent device and log information related to operation of the at least one application; and extracting tag data related to the individual characteristic from information on the at least one application and the log information, and storing the extracted tag data as the source data.
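By way of illustration, the following minimal Python sketch shows one possible way such tag data might be extracted from application information and log information and stored as source data; the rule table, class, and function names are hypothetical assumptions introduced only for illustration.

```python
# Illustrative sketch only; TAG_RULES and SourceData are hypothetical names.
import re
from dataclasses import dataclass, field

# Map each individual characteristic to a keyword rule over log text.
TAG_RULES = {
    "child": re.compile(r"\b(kindergarten|daycare|pediatrician)\b", re.I),
    "pet":   re.compile(r"\b(vet|pet food|dog|cat)\b", re.I),
}

@dataclass
class SourceData:
    tags: list = field(default_factory=list)  # (characteristic, app, keyword)

def extract_tags(app_name: str, log_text: str, store: SourceData) -> None:
    """Extract characteristic-related tag data from one log entry."""
    for characteristic, pattern in TAG_RULES.items():
        for keyword in pattern.findall(log_text):
            store.tags.append((characteristic, app_name, keyword.lower()))

store = SourceData()
extract_tags("messenger", "Pick up from kindergarten at 5 pm", store)
print(store.tags)  # [('child', 'messenger', 'kindergarten')]
```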
When the individual characteristic is related to whether the user has a child, the determining of the individual characteristic may include: retrieving a keyword related to having a child from the source data of at least one of a message application or a contact list application, and matching the retrieved keyword against a keyword set preset regarding having a child; analyzing an operating time of a child-related application from the source data; and determining whether the user has a child, using the matching result and the analysis result of the operating time.
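A hedged sketch of this determination follows; the keyword set, thresholds, and function names are illustrative assumptions, not specifics taken from the disclosure.

```python
# Hypothetical sketch: keyword matching plus the operating time of a
# child-related application; keywords and thresholds are assumptions.
CHILD_KEYWORDS = {"kindergarten", "daycare", "pediatrician"}

def has_child(message_tokens, contact_tokens,
              kid_app_minutes_per_week, min_hits=2, min_minutes=30):
    # Match tokens from message/contact-list source data against the set.
    hits = sum(1 for t in message_tokens + contact_tokens
               if t.lower() in CHILD_KEYWORDS)
    # Combine the matching result with the analyzed operating time.
    return hits >= min_hits or kid_app_minutes_per_week >= min_minutes

print(has_child(["kindergarten"], ["pediatrician"], 0))  # True
```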
When the individual characteristic is related to whether the user is married, the determining of the individual characteristic may include: retrieving a marriage-related keyword from the source data of the contact list application; and determining whether the user is married, based on whether the user has a child and whether any such keyword is retrieved.
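For illustration, a minimal sketch of this combination, with an assumed keyword set:

```python
# Hypothetical sketch: the marriage determination reuses the child
# determination together with marriage-related contact-list keywords.
MARRIAGE_KEYWORDS = {"wife", "husband", "mother-in-law", "father-in-law"}

def is_married(user_has_child: bool, contact_tokens) -> bool:
    keyword_found = any(t.lower() in MARRIAGE_KEYWORDS for t in contact_tokens)
    return user_has_child or keyword_found
```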
When the individual characteristic is related to whether the user has a pet, the determining of the individual characteristic may include: retrieving tag data of a pet-related image from source data of a media-related application; determining, based on at least one of a photographing date, a photographing place, or a photographing device in the tag data, whether the pet-related image was photographed at the user's home; and determining, based on the number of pet-related images photographed at the user's home, whether the user has a pet.
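A hedged sketch of this determination follows; the home coordinates, radius, device identifier, and threshold are assumptions introduced only for illustration.

```python
# Hypothetical sketch: counting pet-related images photographed at the
# user's home, filtered by photographing place and device from tag data.
from math import hypot

HOME = (37.501, 127.036)   # assumed home coordinates (latitude, longitude)
HOME_RADIUS = 0.002        # rough "at home" radius in degrees
OWN_DEVICE = "user-phone"  # assumed photographing-device identifier

def taken_at_home(tag: dict) -> bool:
    lat, lon = tag["place"]
    near_home = hypot(lat - HOME[0], lon - HOME[1]) <= HOME_RADIUS
    return near_home and tag["device"] == OWN_DEVICE

def has_pet(pet_image_tags, threshold=10) -> bool:
    return sum(1 for t in pet_image_tags if taken_at_home(t)) >= threshold
```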
When the individual characteristic is related to a means of transportation of the user, the determining of the individual characteristic may include: determining whether the user has a car by retrieving tag data on a vehicle audio connection from source data of a Bluetooth connection application; acquiring a walking duration of the user in a predetermined time period from source data of a Global Positioning System (GPS) application; and determining, based on whether the user has a car and the walking duration of the user, whether the means of transportation is a car or a public transportation vehicle.
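For illustration, a minimal sketch of this decision, with assumed tag fields and an assumed walking threshold:

```python
# Hypothetical sketch: a vehicle-audio Bluetooth connection suggests a
# car; a long walking duration in the predetermined time period suggests
# public transportation instead. Fields and threshold are assumptions.
def means_of_transportation(bluetooth_tags, walking_minutes,
                            walk_threshold=20) -> str:
    has_car = any(t.get("profile") == "vehicle_audio" for t in bluetooth_tags)
    if has_car and walking_minutes < walk_threshold:
        return "car"
    return "public transportation"
```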
When the individual characteristic is related to an occupation of the user, the determining of the individual characteristic may include: retrieving tag data related to a deposit of a salary from source data of a message application; retrieving an installation and usage record of an employee- or university student-related application from the source data; and determining the occupation of the user based on whether there is any message related to the deposit of the salary and on the installation and usage record of the application.
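A hedged sketch of this determination follows; the record fields and category labels are illustrative assumptions.

```python
# Hypothetical sketch: salary-deposit messages and installation/usage
# records of employee- or student-related applications drive the
# occupation determination.
def occupation(message_tags, app_records) -> str:
    salary_message = any("salary" in t.lower() for t in message_tags)
    student_app_used = any(r["category"] == "university"
                           and r["usage_minutes"] > 0 for r in app_records)
    if salary_message:
        return "employee"
    if student_app_used:
        return "university student"
    return "unknown"
```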
When the individual characteristic is related to a preferred brand of the user, the determining of the individual characteristic may include: retrieving tag data related to payment from source data of a message application or a payment-related application; retrieving a brand according to a mart type from the retrieved tag data; and determining, as a preferred brand, a brand retrieved a predetermined number of times or more among the retrieved brands.
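For illustration, a minimal sketch of the frequency count, with an assumed threshold:

```python
# Hypothetical sketch: brands retrieved from payment-related tag data
# are counted, and any brand seen a predetermined number of times or
# more is determined to be a preferred brand.
from collections import Counter

def preferred_brands(payment_tags, min_count=5):
    counts = Counter(t["brand"] for t in payment_tags if "brand" in t)
    return [brand for brand, n in counts.items() if n >= min_count]
```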
When the individual characteristic is related to a gender of the user, the determining of the individual characteristic may include: extracting voice data of the user from source data of a voice assistant application, and acquiring an analysis result by inputting the extracted voice data into a pre-trained voice analysis model; retrieving a gender-based title-related keyword from source data of a contact list application; and determining the gender of the user based on the analysis result and a retrieval result regarding the gender-based title-related keyword.
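A hedged sketch of this fusion follows; `voice_model` stands in for any pre-trained classifier returning P(female) for a voice sample (it is not a real library call), and the title sets and weighting are assumptions.

```python
# Hypothetical sketch: fusing a pre-trained voice-analysis model output
# with gender-based title keywords retrieved from the contact list.
FEMALE_TITLES = {"mrs.", "madam", "aunt"}
MALE_TITLES = {"mr.", "sir", "uncle"}

def infer_gender(voice_model, voice_samples, contact_tokens) -> str:
    p_female = sum(voice_model(v) for v in voice_samples) / len(voice_samples)
    titles = [t.lower() for t in contact_tokens]
    title_score = (sum(t in FEMALE_TITLES for t in titles)
                   - sum(t in MALE_TITLES for t in titles))
    score = (p_female - 0.5) + 0.1 * title_score  # assumed weighting
    return "female" if score > 0 else "male"
```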
The method may further include: determining an application related to the individual characteristic among applications installed in the intelligent device; and allowing the application related to the individual characteristic to access the user profile data.
The access to the user profile data may be allowed only through an Application Programming Interface (API).
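For illustration of the two preceding points, a minimal sketch of API-gated access, in which only characteristic-related applications on an allowlist may read individual profile fields; the class and application names are hypothetical.

```python
# Hypothetical sketch: the profile is held privately on the device and
# is readable only through an API that checks a per-application
# allowlist, so raw profile data is never handed out wholesale.
class ProfileAPI:
    def __init__(self, profile: dict, allowed: dict):
        self._profile = profile   # never exposed directly
        self._allowed = allowed   # app_id -> set of readable fields

    def query(self, app_id: str, field: str):
        if field not in self._allowed.get(app_id, set()):
            raise PermissionError(f"{app_id} may not read {field!r}")
        return self._profile[field]

api = ProfileAPI({"has_pet": True, "gender": "female"},
                 {"pet_shop_app": {"has_pet"}})
print(api.query("pet_shop_app", "has_pet"))  # True
```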
The collecting of the source data may include: accessing a 5G wireless communication system; receiving information on an Internet of Things (IoT) device used by the user and log information related to operation of the IoT device; and extracting tag data related to the individual characteristic from the information on the IoT device and the log information, and storing the extracted tag data as the source data.
The 5G communication system may support massive Machine Type Communication (mMTC) or Narrowband Internet of Things (NB-IoT), and the information on the IoT device and the log information may be received through an MTC Physical Downlink Shared Channel (MPDSCH) or a Narrowband Physical Downlink Shared Channel (NPDSCH).
The IoT device may be at least one of an autonomous vehicle, a wearable device, a refrigerator, a washing machine, a drone, or a smart TV.
In another general aspect of the present disclosure, there is provided an intelligent device for providing a customized service, the device including: a communication module; a memory; a display; and a processor configured to control the communication module, the memory, and the display, wherein the processor is configured to: collect source data related to an individual characteristic of a user; determine at least one individual characteristic by analyzing the source data; and generate user profile data by aggregating the individual characteristic, wherein the source data is data related to at least one of information on an application installed in the intelligent device and an operation record of the application, and wherein the individual characteristic is a characteristic related to at least one service among multiple services provided through applications installed in the intelligent device.
According to an embodiment of the present disclosure, user profile data may be generated to provide a customized service.
In addition, according to an embodiment of the present disclosure, a user's individual characteristic is determined to generate the user profile. The user's individual characteristic may be related to at least one of a gender, a marital status, whether the user has a child, whether the user has a pet, a means of transportation, an occupation, or a preferred brand. Accordingly, the user's profile may be categorized by these characteristics.
In addition, according to an embodiment of the present disclosure, information on an application installed in the device or log information related to operation of the corresponding application is collected, and tag data related to an individual characteristic is extracted and stored as source data. The user profile data is generated from the source data. Accordingly, a more customized service may be provided based on the usage record of the corresponding device.
In addition, according to an embodiment of the present disclosure, source data related to an individual characteristic is collected from various Internet of Things (IoT) devices through access to a wireless communication system. As the source data for determining an individual characteristic is collected not from a single device but from various devices, the user's individual characteristic may be determined more accurately.
In addition, according to an embodiment of the present disclosure, only an application related to an individual characteristic among the applications installed in the device is allowed to access the user profile data. Accordingly, it is possible to prevent reckless use of the user profile data.
In addition, according to an embodiment of the present disclosure, the user profile data can be accessed only through an Application Programming Interface (API). Accordingly, the user profile data cannot be leaked to the outside, and thus, it is possible to prevent leakage of privacy information.
Effects which may be obtained by the present disclosure are not limited to the aforementioned effects, and other technical effects not described above may be evidently understood by a person having ordinary skill in the art to which the present disclosure pertains from the following description.
The accompanying drawings, which are included to provide a further understanding of the invention, illustrate embodiments of the invention and together with the description serve to explain the principle of the invention.
Hereinafter, embodiments of the disclosure will be described in detail with reference to the attached drawings. The same or similar components are given the same reference numbers and redundant description thereof is omitted. The suffixes “module” and “unit” of elements herein are used for convenience of description and thus can be used interchangeably and do not have any distinguishable meanings or functions. Further, in the following description, if a detailed description of known techniques associated with the present disclosure would unnecessarily obscure the gist of the present disclosure, detailed description thereof will be omitted. In addition, the attached drawings are provided for easy understanding of embodiments of the disclosure and do not limit technical spirits of the disclosure, and the embodiments should be construed as including all modifications, equivalents, and alternatives falling within the spirit and scope of the embodiments.
While terms, such as “first”, “second”, etc., may be used to describe various components, such components must not be limited by the above terms. The above terms are used only to distinguish one component from another.
When an element is “coupled” or “connected” to another element, it should be understood that a third element may be present between the two elements although the element may be directly coupled or connected to the other element. When an element is “directly coupled” or “directly connected” to another element, it should be understood that no element is present between the two elements.
The singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise.
In addition, in the specification, it will be further understood that the terms “comprise” and “include” specify the presence of stated features, integers, steps, operations, elements, components, and/or combinations thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or combinations.
Hereinafter, 5G communication (5th generation mobile communication) required by an apparatus requiring AI processed information and/or an AI processor will be described through paragraphs A through G.
A. Example of Block Diagram of UE and 5G Network
Referring to
A 5G network including another device (AI server) communicating with the AI device is defined as a second communication device (920 of
The 5G network may be represented as the first communication device and the AI device may be represented as the second communication device.
For example, the first communication device or the second communication device may be a base station, a network node, a transmission terminal, a reception terminal, a wireless device, a wireless communication device, an autonomous device, or the like.
For example, the first communication device or the second communication device may be a base station, a network node, a transmission terminal, a reception terminal, a wireless device, a wireless communication device, a vehicle, a vehicle having an autonomous function, a connected car, a drone (Unmanned Aerial Vehicle, UAV), an AI (Artificial Intelligence) module, a robot, an AR (Augmented Reality) device, a VR (Virtual Reality) device, an MR (Mixed Reality) device, a hologram device, a public safety device, an MTC device, an IoT device, a medical device, a Fin Tech device (or financial device), a security device, a climate/environment device, a device associated with 5G services, or other devices associated with the fourth industrial revolution field.
For example, a terminal or user equipment (UE) may include a cellular phone, a smart phone, a laptop computer, a digital broadcast terminal, personal digital assistants (PDAs), a portable multimedia player (PMP), a navigation device, a slate PC, a tablet PC, an ultrabook, a wearable device (e.g., a smartwatch, smart glasses and a head mounted display (HMD)), etc. For example, the HMD may be a display device worn on the head of a user. For example, the HMD may be used to realize VR, AR or MR. For example, the drone may be a flying object that flies by wireless control signals without a person therein. For example, the VR device may include a device that implements objects or backgrounds of a virtual world. For example, the AR device may include a device that connects and implements objects or backgrounds of a virtual world to objects, backgrounds, or the like of a real world. For example, the MR device may include a device that unites and implements objects or backgrounds of a virtual world with objects, backgrounds, or the like of a real world. For example, the hologram device may include a device that implements 360-degree 3D images by recording and playing 3D information using the interference phenomenon of light that is generated by two lasers meeting each other, which is called holography. For example, the public safety device may include an image repeater or an imaging device that can be worn on the body of a user. For example, the MTC device and the IoT device may be devices that do not require direct intervention or operation by a person. For example, the MTC device and the IoT device may include a smart meter, a vending machine, a thermometer, a smart bulb, a door lock, various sensors, or the like. For example, the medical device may be a device that is used to diagnose, treat, attenuate, remove, or prevent diseases. For example, the medical device may be a device that is used to diagnose, treat, attenuate, or correct injuries or disorders. For example, the medical device may be a device that is used to examine, replace, or change structures or functions. For example, the medical device may be a device that is used to control pregnancy. For example, the medical device may include a device for medical treatment, a device for operations, a device for (external) diagnosis, a hearing aid, an operation device, or the like. For example, the security device may be a device that is installed to prevent a danger that is likely to occur and to maintain safety. For example, the security device may be a camera, a CCTV, a recorder, a black box, or the like. For example, the Fin Tech device may be a device that can provide financial services such as mobile payment.
Referring to
UL (communication from the second communication device to the first communication device) is processed in the first communication device 910 in a way similar to that described in association with a receiver function in the second communication device 920. Each Tx/Rx module 925 receives a signal through each antenna 926. Each Tx/Rx module provides RF carriers and information to the Rx processor 923. The processor 921 may be related to the memory 924 that stores program code and data. The memory may be referred to as a computer-readable medium.
B. Signal Transmission/Reception Method in Wireless Communication System
Referring to
Meanwhile, when the UE initially accesses the BS or has no radio resource for signal transmission, the UE can perform a random access procedure (RACH) for the BS (steps S203 to S206). To this end, the UE can transmit a specific sequence as a preamble through a physical random access channel (PRACH) (S203 and S205) and receive a random access response (RAR) message for the preamble through a PDCCH and a corresponding PDSCH (S204 and S206). In the case of a contention-based RACH, a contention resolution procedure may be additionally performed.
After the UE performs the above-described process, the UE can perform PDCCH/PDSCH reception (S207) and physical uplink shared channel (PUSCH)/physical uplink control channel (PUCCH) transmission (S208) as normal uplink/downlink signal transmission processes. Particularly, the UE receives downlink control information (DCI) through the PDCCH. The UE monitors a set of PDCCH candidates in monitoring occasions set for one or more control resource sets (CORESET) on a serving cell according to corresponding search space configurations. A set of PDCCH candidates to be monitored by the UE is defined in terms of search space sets, and a search space set may be a common search space set or a UE-specific search space set. A CORESET includes a set of (physical) resource blocks having a duration of one to three OFDM symbols. A network can configure the UE such that the UE has a plurality of CORESETs. The UE monitors PDCCH candidates in one or more search space sets. Here, monitoring means attempting decoding of PDCCH candidate(s) in a search space. When the UE has successfully decoded one of the PDCCH candidates in a search space, the UE determines that a PDCCH has been detected from the PDCCH candidate and performs PDSCH reception or PUSCH transmission on the basis of DCI in the detected PDCCH. The PDCCH can be used to schedule DL transmissions over a PDSCH and UL transmissions over a PUSCH. Here, the DCI in the PDCCH includes a downlink assignment (i.e., downlink grant (DL grant)) related to a physical downlink shared channel and including at least a modulation and coding format and resource allocation information, or an uplink grant (UL grant) related to a physical uplink shared channel and including a modulation and coding format and resource allocation information.
An initial access (IA) procedure in a 5G communication system will be additionally described with reference to
The UE can perform cell search, system information acquisition, beam alignment for initial access, and DL measurement on the basis of an SSB. The SSB is interchangeably used with a synchronization signal/physical broadcast channel (SS/PBCH) block.
The SSB includes a PSS, an SSS and a PBCH. The SSB is configured in four consecutive OFDM symbols, and a PSS, a PBCH, an SSS/PBCH, and a PBCH are transmitted on the respective OFDM symbols. Each of the PSS and the SSS includes one OFDM symbol and 127 subcarriers, and the PBCH includes 3 OFDM symbols and 576 subcarriers.
Cell search refers to a process in which a UE acquires time/frequency synchronization of a cell and detects a cell identifier (ID) (e.g., physical layer cell ID (PCI)) of the cell. The PSS is used to detect a cell ID in a cell ID group and the SSS is used to detect a cell ID group. The PBCH is used to detect an SSB (time) index and a half-frame.
There are 336 cell ID groups and there are 3 cell IDs per cell ID group. A total of 1008 cell IDs are present. Information on a cell ID group to which a cell ID of a cell belongs is provided/acquired through an SSS of the cell, and information on the cell ID among the 3 cell IDs in the cell ID group is provided/acquired through a PSS.
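The cell ID structure above can be summarized with a short worked example: since the SSS yields the group index and the PSS yields the index within the group, the physical layer cell ID is 3 times the group index plus the in-group index.

```python
# Worked example of the cell ID structure described above: the SSS gives
# the cell ID group N_ID1 (0..335) and the PSS gives the cell ID N_ID2
# (0..2) within the group, so N_cellID = 3 * N_ID1 + N_ID2 (0..1007).
def physical_cell_id(n_id1: int, n_id2: int) -> int:
    assert 0 <= n_id1 < 336 and 0 <= n_id2 < 3
    return 3 * n_id1 + n_id2

assert physical_cell_id(0, 0) == 0
assert physical_cell_id(335, 2) == 1007  # 1008 IDs in total
```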
The SSB is periodically transmitted in accordance with SSB periodicity. A default SSB periodicity assumed by a UE during initial cell search is defined as 20 ms. After cell access, the SSB periodicity can be set to one of {5 ms, 10 ms, 20 ms, 40 ms, 80 ms, 160 ms} by a network (e.g., a BS).
Next, acquisition of system information (SI) will be described.
SI is divided into a master information block (MIB) and a plurality of system information blocks (SIBs). SI other than the MIB may be referred to as remaining minimum system information. The MIB includes information/parameters for monitoring a PDCCH that schedules a PDSCH carrying SIB1 (SystemInformationBlock1) and is transmitted by a BS through a PBCH of an SSB. SIB1 includes information related to availability and scheduling (e.g., transmission periodicity and SI-window size) of the remaining SIBs (hereinafter, SIBx, where x is an integer equal to or greater than 2). SIBx is included in an SI message and transmitted over a PDSCH. Each SI message is transmitted within a periodically occurring time window (i.e., SI-window).
A random access (RA) procedure in a 5G communication system will be additionally described with reference to
A random access procedure is used for various purposes. For example, the random access procedure can be used for network initial access, handover, and UE-triggered UL data transmission. A UE can acquire UL synchronization and UL transmission resources through the random access procedure. The random access procedure is classified into a contention-based random access procedure and a contention-free random access procedure. A detailed procedure for the contention-based random access procedure is as follows.
A UE can transmit a random access preamble through a PRACH as Msg1 of a random access procedure in UL. Random access preamble sequences having two different lengths are supported. A long sequence length of 839 is applied to subcarrier spacings of 1.25 kHz and 5 kHz, and a short sequence length of 139 is applied to subcarrier spacings of 15 kHz, 30 kHz, 60 kHz and 120 kHz.
When a BS receives the random access preamble from the UE, the BS transmits a random access response (RAR) message (Msg2) to the UE. A PDCCH that schedules a PDSCH carrying a RAR is CRC masked by a random access radio network temporary identifier (RA-RNTI) and transmitted. Upon detection of the PDCCH masked by the RA-RNTI, the UE can receive the RAR from the PDSCH scheduled by the DCI carried by the PDCCH. The UE checks whether the RAR includes random access response information with respect to the preamble transmitted by the UE, that is, Msg1. Presence or absence of random access information with respect to Msg1 transmitted by the UE can be determined according to presence or absence of a random access preamble ID with respect to the preamble transmitted by the UE. If there is no response to Msg1, the UE can retransmit the RACH preamble up to a predetermined number of times while performing power ramping. The UE calculates PRACH transmission power for preamble retransmission on the basis of the most recent pathloss and a power ramping counter.
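A hedged sketch of the retransmission power computation named above follows; the parameter names follow common 3GPP configuration fields, and the example values are assumptions.

```python
# Hedged sketch: PRACH power rises with the power ramping counter and is
# based on the most recent pathloss, capped at the UE maximum power
# (values in dBm/dB).
def prach_tx_power(target_rx_power_dbm: float, ramping_step_db: float,
                   ramping_counter: int, pathloss_db: float,
                   p_cmax_dbm: float = 23.0) -> float:
    wanted = (target_rx_power_dbm
              + (ramping_counter - 1) * ramping_step_db
              + pathloss_db)
    return min(p_cmax_dbm, wanted)

# Each failed attempt raises the power by the ramping step until P_CMAX.
print(prach_tx_power(-104, 2, ramping_counter=3, pathloss_db=100))  # 0.0
```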
The UE can perform UL transmission through Msg3 of the random access procedure over a physical uplink shared channel on the basis of the random access response information. Msg3 can include an RRC connection request and a UE ID. The network can transmit Msg4 as a response to Msg3, and Msg4 can be handled as a contention resolution message on DL. The UE can enter an RRC connected state by receiving Msg4.
C. Beam Management (BM) Procedure of 5G Communication System
A BM procedure can be divided into (1) a DL BM procedure using an SSB or a CSI-RS and (2) a UL BM procedure using a sounding reference signal (SRS). In addition, each BM procedure can include Tx beam sweeping for determining a Tx beam and Rx beam sweeping for determining an Rx beam.
The DL BM procedure using an SSB will be described.
Configuration of a beam report using an SSB is performed when channel state information (CSI)/beam is configured in RRC_CONNECTED.
When a CSI-RS resource is configured in the same OFDM symbols as an SSB and ‘QCL-TypeD’ is applicable, the UE can assume that the CSI-RS and the SSB are quasi co-located (QCL) from the viewpoint of ‘QCL-TypeD’. Here, QCL-TypeD may mean that antenna ports are quasi co-located from the viewpoint of a spatial Rx parameter. When the UE receives signals of a plurality of DL antenna ports in a QCL-TypeD relationship, the same Rx beam can be applied.
Next, a DL BM procedure using a CSI-RS will be described.
An Rx beam determination (or refinement) procedure of a UE and a Tx beam sweeping procedure of a BS using a CSI-RS will be sequentially described. A repetition parameter is set to ‘ON’ in the Rx beam determination procedure of a UE and set to ‘OFF’ in the Tx beam sweeping procedure of a BS.
First, the Rx beam determination procedure of a UE will be described.
Next, the Tx beam determination procedure of a BS will be described.
Next, the UL BM procedure using an SRS will be described.
The UE determines Tx beamforming for SRS resources to be transmitted on the basis of SRS-SpatialRelation Info included in the SRS-Config IE. Here, SRS-SpatialRelation Info is set for each SRS resource and indicates whether the same beamforming as that used for an SSB, a CSI-RS or an SRS will be applied for each SRS resource.
Next, a beam failure recovery (BFR) procedure will be described.
In a beamformed system, radio link failure (RLF) may frequently occur due to rotation, movement or beamforming blockage of a UE. Accordingly, NR supports BFR in order to prevent frequent occurrence of RLF. BFR is similar to a radio link failure recovery procedure and can be supported when a UE knows new candidate beams. For beam failure detection, a BS configures beam failure detection reference signals for a UE, and the UE declares beam failure when the number of beam failure indications from the physical layer of the UE reaches a threshold set through RRC signaling within a period set through RRC signaling of the BS. After beam failure detection, the UE triggers beam failure recovery by initiating a random access procedure in a PCell and performs beam failure recovery by selecting a suitable beam. (When the BS provides dedicated random access resources for certain beams, these are prioritized by the UE). Completion of the aforementioned random access procedure is regarded as completion of beam failure recovery.
D. URLLC (Ultra-Reliable and Low Latency Communication)
URLLC transmission defined in NR can refer to (1) a relatively low traffic size, (2) a relatively low arrival rate, (3) extremely low latency requirements (e.g., 0.5 and 1 ms), (4) relatively short transmission duration (e.g., 2 OFDM symbols), (5) urgent services/messages, etc. In the case of UL, transmission of traffic of a specific type (e.g., URLLC) needs to be multiplexed with another transmission (e.g., eMBB) scheduled in advance in order to satisfy more stringent latency requirements. In this regard, a method of providing information indicating preemption of specific resources to a UE scheduled in advance and allowing a URLLC UE to use the resources for UL transmission is provided.
NR supports dynamic resource sharing between eMBB and URLLC. eMBB and URLLC services can be scheduled on non-overlapping time/frequency resources, and URLLC transmission can occur in resources scheduled for ongoing eMBB traffic. An eMBB UE may not ascertain whether PDSCH transmission of the corresponding UE has been partially punctured and the UE may not decode a PDSCH due to corrupted coded bits. In view of this, NR provides a preemption indication. The preemption indication may also be referred to as an interrupted transmission indication.
With regard to the preemption indication, a UE receives DownlinkPreemption IE through RRC signaling from a BS. When the UE is provided with DownlinkPreemption IE, the UE is configured with INT-RNTI provided by a parameter int-RNTI in DownlinkPreemption IE for monitoring of a PDCCH that conveys DCI format 2_1. The UE is additionally configured with a corresponding set of positions for fields in DCI format 2_1 according to a set of serving cells and positionInDCI by INT-ConfigurationPerServingCell including a set of serving cell indexes provided by servingCellID, configured with an information payload size for DCI format 2_1 according to dci-PayloadSize, and configured with indication granularity of time-frequency resources according to timeFrequencySet.
The UE receives DCI format 2_1 from the BS on the basis of the DownlinkPreemption IE.
When the UE detects DCI format 2_1 for a serving cell in a configured set of serving cells, the UE can assume that there is no transmission to the UE in PRBs and symbols indicated by the DCI format 2_1 in a set of PRBs and a set of symbols in a last monitoring period before a monitoring period to which the DCI format 2_1 belongs. For example, the UE assumes that a signal in a time-frequency resource indicated according to preemption is not DL transmission scheduled therefor and decodes data on the basis of signals received in the remaining resource region.
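For illustration, a hedged sketch of interpreting such an indication follows, assuming the 14-bit per-serving-cell field of DCI format 2_1 and the "14 time parts x 1 frequency part" granularity; the helper name is hypothetical.

```python
# Hedged sketch: a set bit marks a symbol group of the last monitoring
# period as carrying no transmission for this UE, so the UE should not
# use the corresponding received signal when decoding.
def preempted_groups(indication_bits):
    assert len(indication_bits) == 14
    return [i for i, bit in enumerate(indication_bits) if bit]

bits = [0] * 14
bits[4] = bits[5] = 1  # symbol groups 4 and 5 were preempted
print(preempted_groups(bits))  # [4, 5]
```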
E. mMTC (Massive MTC)
mMTC (massive Machine Type Communication) is one of 5G scenarios for supporting a hyper-connection service providing simultaneous communication with a large number of UEs. In this environment, a UE intermittently performs communication with a very low speed and mobility. Accordingly, a main goal of mMTC is operating a UE for a long time at a low cost. With respect to mMTC, 3GPP deals with MTC and NB (NarrowBand)-IoT.
mMTC has features such as repetitive transmission of a PDCCH, a PUCCH, a PDSCH (physical downlink shared channel), a PUSCH, etc., frequency hopping, retuning, and a guard period.
That is, a PUSCH (or a PUCCH (particularly, a long PUCCH) or a PRACH) including specific information and a PDSCH (or a PDCCH) including a response to the specific information are repeatedly transmitted. Repetitive transmission is performed through frequency hopping, and for repetitive transmission, (RF) retuning from a first frequency resource to a second frequency resource is performed in a guard period and the specific information and the response to the specific information can be transmitted/received through a narrowband (e.g., 6 resource blocks (RBs) or 1 RB).
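A hedged sketch of this repetition pattern follows; the narrowband labels and alternating hop schedule are illustrative assumptions.

```python
# Hedged sketch: repetitions alternate between two narrowband frequency
# resources, with a guard period reserved for (RF) retuning between hops.
def repetition_schedule(n_repetitions, nb_a="NB#0", nb_b="NB#1"):
    events = []
    for i in range(n_repetitions):
        events.append(("transmit", i, nb_a if i % 2 == 0 else nb_b))
        if i < n_repetitions - 1:
            events.append(("guard/retune", i, None))
    return events

for event in repetition_schedule(4):
    print(event)
```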
F. Basic Operation of AI Processing Using 5G Communication
The UE transmits specific information to the 5G network (S1). The 5G network may perform 5G processing related to the specific information (S2). Here, the 5G processing may include AI processing. Then, the 5G network may transmit a response including an AI processing result to the UE (S3).
G. Applied Operations Between UE and 5G Network in 5G Communication System
Hereinafter, the operation of an autonomous vehicle using 5G communication will be described in more detail with reference to wireless communication technology (BM procedure, URLLC, mMTC, etc.) described in
First, a basic procedure of an applied operation to which a method proposed by the present disclosure which will be described later and eMBB of 5G communication are applied will be described.
As in steps S1 and S3 of
More specifically, the autonomous vehicle performs an initial access procedure with the 5G network on the basis of an SSB in order to acquire DL synchronization and system information. A beam management (BM) procedure and a beam failure recovery procedure may be added in the initial access procedure, and quasi-co-location (QCL) relation may be added in a process in which the autonomous vehicle receives a signal from the 5G network.
In addition, the autonomous vehicle performs a random access procedure with the 5G network for UL synchronization acquisition and/or UL transmission. The 5G network can transmit, to the autonomous vehicle, a UL grant for scheduling transmission of specific information. Accordingly, the autonomous vehicle transmits the specific information to the 5G network on the basis of the UL grant. In addition, the 5G network transmits, to the autonomous vehicle, a DL grant for scheduling transmission of 5G processing results with respect to the specific information. Accordingly, the 5G network can transmit, to the autonomous vehicle, information (or a signal) related to remote control on the basis of the DL grant.
Next, a basic procedure of an applied operation to which a method proposed by the present disclosure which will be described later and URLLC of 5G communication are applied will be described.
As described above, an autonomous vehicle can receive DownlinkPreemption IE from the 5G network after the autonomous vehicle performs an initial access procedure and/or a random access procedure with the 5G network. Then, the autonomous vehicle receives DCI format 2_1 including a preemption indication from the 5G network on the basis of DownlinkPreemption IE. The autonomous vehicle does not perform (or expect or assume) reception of eMBB data in resources (PRBs and/or OFDM symbols) indicated by the preemption indication. Thereafter, when the autonomous vehicle needs to transmit specific information, the autonomous vehicle can receive a UL grant from the 5G network.
Next, a basic procedure of an applied operation to which a method proposed by the present disclosure which will be described later and mMTC of 5G communication are applied will be described.
Description will focus on parts in the steps of
In step S1 of
The above-described 5G communication technology can be combined with methods proposed in the present disclosure which will be described later and applied or can complement the methods proposed in the present disclosure to make technical features of the methods concrete and clear.
Referring to
More specifically, the wireless communication unit 110 typically includes one or more components which permit wireless communication between the electronic device 100 and a wireless communication system or network within which the mobile terminal is located. The wireless communication unit 110 typically includes one or more modules which permit communications such as wireless communications between the electronic device 100 and a wireless communication system, communications between the electronic device 100 and another mobile terminal, and communications between the electronic device 100 and an external server. Further, the wireless communication unit 110 typically includes one or more modules which connect the electronic device 100 to one or more networks.
To facilitate such communications, the wireless communication unit 110 includes one or more of a broadcast receiving module 111, a mobile communication module 112, a wireless Internet module 113, a short-range communication module 114, and a location information module 115.
The input unit 120 includes a camera 121 for obtaining images or video, a microphone 122, which is one type of audio input device for inputting an audio signal, and a user input unit 123 (for example, a touch key, a push key, a mechanical key, a soft key, and the like) for allowing a user to input information. Data (for example, audio, video, image, and the like) is obtained by the input unit 120 and may be analyzed and processed by controller 180 according to device parameters, user commands, and combinations thereof.
The sensing unit 140 is typically implemented using one or more sensors configured to sense internal information of the mobile terminal, the surrounding environment of the mobile terminal, user information, and the like. For example, in
The output unit 150 is typically configured to output various types of information, such as audio, video, tactile output, and the like. The output unit 150 is shown having a display unit 151, an audio output module 152, a haptic module 153, and an optical output module 154. The display unit 151 may have an inter-layered structure or an integrated structure with a touch sensor in order to facilitate a touch screen. The touch screen may provide an output interface between the electronic device 100 and a user, as well as function as the user input unit 123 which provides an input interface between the electronic device 100 and the user.
The interface unit 160 serves as an interface with various types of external devices that can be coupled to the electronic device 100. The interface unit 160, for example, may include any of wired or wireless ports, external power supply ports, wired or wireless data ports, memory card ports, ports for connecting a device having an identification module, audio input/output (I/O) ports, video I/O ports, earphone ports, and the like. In some cases, the electronic device 100 may perform assorted control functions associated with a connected external device, in response to the external device being connected to the interface unit 160.
The memory 170 is typically implemented to store data to support various functions or features of the electronic device 100. For instance, the memory 170 may be configured to store application programs executed in the electronic device 100, data or instructions for operations of the electronic device 100, and the like. Some of these application programs may be downloaded from an external server via wireless communication. Other application programs may be installed within the electronic device 100 at time of manufacturing or shipping, which is typically the case for basic functions of the electronic device 100 (for example, receiving a call, placing a call, receiving a message, sending a message, and the like). It is common for application programs to be stored in the memory 170, installed in the electronic device 100, and executed by the controller 180 to perform an operation (or function) for the electronic device 100.
The controller 180 typically functions to control overall operation of the electronic device 100, in addition to the operations associated with the application programs. The controller 180 may provide or process information or functions appropriate for a user by processing signals, data, information and the like, which are input or output by the various components depicted in
In addition, the controller 180 may control at least some of the components described with reference to
The power supply unit 190 can be configured to receive external power or provide internal power in order to supply appropriate power required for operating elements and components included in the electronic device 100. The power supply unit 190 may include a battery, and the battery may be configured to be embedded in the terminal body, or configured to be detachable from the terminal body.
At least some of the aforementioned components may operate in cooperation to implement operations, control or control methods of mobile terminals according to various embodiments which will be described below. In addition, operations, control or control methods of mobile terminals may be implemented by executing at least one application program stored in the memory 170.
Referring still to
Regarding the wireless communication unit 110, the broadcast receiving module 111 is typically configured to receive a broadcast signal and/or broadcast associated information from an external broadcast managing entity via a broadcast channel. The broadcast channel may include a satellite channel, a terrestrial channel, or both. In some embodiments, two or more broadcast receiving modules 111 may be utilized to facilitate simultaneously receiving of two or more broadcast channels, or to support switching among broadcast channels.
The mobile communication module 112 can transmit and/or receive wireless signals to and from one or more network entities. Typical examples of a network entity include a base station, an external mobile terminal, a server, and the like. Such network entities form part of a mobile communication network, which is constructed according to technical standards or communication methods for mobile communications (for example, Global System for Mobile Communication (GSM), Code Division Multi Access (CDMA), CDMA2000 (Code Division Multi Access 2000), EV-DO (Enhanced Voice-Data Optimized or Enhanced Voice-Data Only), Wideband CDMA (WCDMA), High Speed Downlink Packet access (HSDPA), HSUPA (High Speed Uplink Packet Access), Long Term Evolution (LTE), LTE-A (Long Term Evolution-Advanced), and the like).
Examples of wireless signals transmitted and/or received via the mobile communication module 112 include audio call signals, video (telephony) call signals, or various formats of data to support communication of text and multimedia messages.
The wireless Internet module 113 is configured to facilitate wireless Internet access. This module may be internally or externally coupled to the electronic device 100. The wireless Internet module 113 may transmit and/or receive wireless signals via communication networks according to wireless Internet technologies.
Examples of such wireless Internet access include Wireless LAN (WLAN), Wireless Fidelity (Wi-Fi), Wi-Fi Direct, Digital Living Network Alliance (DLNA), Wireless Broadband (WiBro), Worldwide Interoperability for Microwave Access (WiMAX), High Speed Downlink Packet Access (HSDPA), HSUPA (High Speed Uplink Packet Access), Long Term Evolution (LTE), LTE-A (Long Term Evolution-Advanced), and the like. The wireless Internet module 113 may transmit/receive data according to one or more of such wireless Internet technologies, and other Internet technologies as well.
In some embodiments, when the wireless Internet access is implemented according to, for example, WiBro, HSDPA, HSUPA, GSM, CDMA, WCDMA, LTE, LTE-A and the like, as part of a mobile communication network, the wireless Internet module 113 performs such wireless Internet access. As such, the Internet module 113 may cooperate with, or function as, the mobile communication module 112.
The short-range communication module 114 is configured to facilitate short-range communications. Suitable technologies for implementing such short-range communications include BLUETOOTH™, Radio Frequency IDentification (RFID), Infrared Data Association (IrDA), Ultra-WideBand (UWB), ZigBee, Near Field Communication (NFC), Wireless-Fidelity (Wi-Fi), Wi-Fi Direct, Wireless USB (Wireless Universal Serial Bus), and the like. The short-range communication module 114 in general supports wireless communications between the electronic device 100 and a wireless communication system, communications between the electronic device 100 and another electronic device 100, or communications between the mobile terminal and a network where another electronic device 100 (or an external server) is located, via wireless area networks. One example of the wireless area networks is a wireless personal area network.
In some embodiments, another mobile terminal (which may be configured similarly to electronic device 100) may be a wearable device, for example, a smart watch, a smart glass or a head mounted display (HMD), which is able to exchange data with the electronic device 100 (or otherwise cooperate with the electronic device 100). The short-range communication module 114 may sense or recognize the wearable device, and permit communication between the wearable device and the electronic device 100. In addition, when the sensed wearable device is a device which is authenticated to communicate with the electronic device 100, the controller 180, for example, may cause transmission of data processed in the electronic device 100 to the wearable device via the short-range communication module 114. Hence, a user of the wearable device may use the data processed in the electronic device 100 on the wearable device. For example, when a call is received in the electronic device 100, the user may answer the call using the wearable device. Also, when a message is received in the electronic device 100, the user can check the received message using the wearable device.
The location information module 115 is generally configured to detect, calculate, derive or otherwise identify a position of the mobile terminal. As an example, the location information module 115 includes a Global Position System (GPS) module, a Wi-Fi module, or both. If desired, the location information module 115 may alternatively or additionally function with any of the other modules of the wireless communication unit 110 to obtain data related to the position of the mobile terminal. As one example, when the mobile terminal uses a GPS module, a position of the mobile terminal may be acquired using a signal sent from a GPS satellite. As another example, when the mobile terminal uses the Wi-Fi module, a position of the mobile terminal can be acquired based on information related to a wireless access point (AP) which transmits or receives a wireless signal to or from the Wi-Fi module.
The input unit 120 may be configured to permit various types of input to the mobile terminal. Examples of such input include audio, image, video, data, and user input. Image and video input is often obtained using one or more cameras 121. Such cameras 121 may process image frames of still pictures or video obtained by image sensors in a video or image capture mode. The processed image frames can be displayed on the display unit 151 or stored in memory 170. In some cases, the cameras 121 may be arranged in a matrix configuration to permit a plurality of images having various angles or focal points to be input to the electronic device 100. As another example, the cameras 121 may be located in a stereoscopic arrangement to acquire left and right images for implementing a stereoscopic image.
The microphone 122 is generally implemented to permit audio input to the electronic device 100. The audio input can be processed in various manners according to a function being executed in the electronic device 100. If desired, the microphone 122 may include assorted noise removing algorithms to remove unwanted noise generated in the course of receiving the external audio.
The user input unit 123 is a component that permits input by a user. Such user input may enable the controller 180 to control operation of the electronic device 100. The user input unit 123 may include one or more of a mechanical input element (for example, a key, a button located on a front and/or rear surface or a side surface of the electronic device 100, a dome switch, a jog wheel, a jog switch, and the like), or a touch-sensitive input, among others. As one example, the touch-sensitive input may be a virtual key or a soft key, which is displayed on a touch screen through software processing, or a touch key which is located on the mobile terminal at a location that is other than the touch screen. On the other hand, the virtual key or the visual key may be displayed on the touch screen in various shapes, for example, graphic, text, icon, video, or a combination thereof.
The sensing unit 140 is generally configured to sense one or more of internal information of the mobile terminal, surrounding environment information of the mobile terminal, user information, or the like. The controller 180 generally cooperates with the sensing unit 140 to control operation of the electronic device 100 or execute data processing, a function or an operation associated with an application program installed in the mobile terminal based on the sensing provided by the sensing unit 140. The sensing unit 140 may be implemented using any of a variety of sensors, some of which will now be described in more detail.
The proximity sensor 141 may include a sensor to sense presence or absence of an object approaching a surface, or an object located near a surface, by using an electromagnetic field, infrared rays, or the like without a mechanical contact. The proximity sensor 141 may be arranged at an inner region of the mobile terminal covered by the touch screen, or near the touch screen.
The proximity sensor 141, for example, may include any of a transmissive type photoelectric sensor, a direct reflective type photoelectric sensor, a mirror reflective type photoelectric sensor, a high-frequency oscillation proximity sensor, a capacitance type proximity sensor, a magnetic type proximity sensor, an infrared rays proximity sensor, and the like. When the touch screen is implemented as a capacitance type, the proximity sensor 141 can sense proximity of a pointer relative to the touch screen by changes of an electromagnetic field, which is responsive to an approach of an object with conductivity. In this case, the touch screen (touch sensor) may also be categorized as a proximity sensor.
The term “proximity touch” will often be referred to herein to denote the scenario in which a pointer is positioned to be proximate to the touch screen without contacting the touch screen. The term “contact touch” will often be referred to herein to denote the scenario in which a pointer makes physical contact with the touch screen. The position corresponding to the proximity touch of the pointer relative to the touch screen corresponds to the position at which the pointer is perpendicular to the touch screen. The proximity sensor 141 may sense proximity touch, and proximity touch patterns (for example, distance, direction, speed, time, position, moving status, and the like). In general, the controller 180 processes data corresponding to proximity touches and proximity touch patterns sensed by the proximity sensor 141, and causes output of visual information on the touch screen. In addition, the controller 180 can control the electronic device 100 to execute different operations or process different data according to whether a touch with respect to a point on the touch screen is either a proximity touch or a contact touch.
A touch sensor can sense a touch applied to the touch screen, such as display unit 151, using any of a variety of touch methods. Examples of such touch methods include a resistive type, a capacitive type, an infrared type, and a magnetic field type, among others.
As one example, the touch sensor may be configured to convert changes of pressure applied to a specific part of the display unit 151, or convert capacitance occurring at a specific part of the display unit 151, into electric input signals. The touch sensor may also be configured to sense not only a touched position and a touched area, but also touch pressure and/or touch capacitance. A touch object is generally used to apply a touch input to the touch sensor. Examples of typical touch objects include a finger, a touch pen, a stylus pen, a pointer, or the like.
When a touch input is sensed by a touch sensor, corresponding signals may be transmitted to a touch controller. The touch controller may process the received signals, and then transmit corresponding data to the controller 180. Accordingly, the controller 180 may sense which region of the display unit 151 has been touched. Here, the touch controller may be a component separate from the controller 180, the controller 180 itself, or a combination thereof.
In some embodiments, the controller 180 may execute the same or different controls according to a type of touch object that touches the touch screen or a touch key provided in addition to the touch screen. Whether to execute the same or different control according to the object which provides a touch input may be decided based on a current operating state of the electronic device 100 or a currently executed application program, for example.
The touch sensor and the proximity sensor may be implemented individually, or in combination, to sense various types of touches. Such touches include a short (or tap) touch, a long touch, a multi-touch, a drag touch, a flick touch, a pinch-in touch, a pinch-out touch, a swipe touch, a hovering touch, and the like.
If desired, an ultrasonic sensor may be implemented to recognize position information relating to a touch object using ultrasonic waves. The controller 180, for example, may calculate a position of a wave generation source based on information sensed by an illumination sensor and a plurality of ultrasonic sensors. Since light is much faster than ultrasonic waves, the time for which the light reaches the optical sensor is much shorter than the time for which the ultrasonic wave reaches the ultrasonic sensor. The position of the wave generation source may be calculated using this fact. For instance, the position of the wave generation source may be calculated using the time difference from the time that the ultrasonic wave reaches the sensor based on the light as a reference signal.
The camera 121 typically includes at least one of a camera sensor (CCD, CMOS, etc.), a photo sensor (or image sensor), and a laser sensor.
Implementing the camera 121 with a laser sensor may allow detection of a touch of a physical object with respect to a 3D stereoscopic image. The photo sensor may be laminated on, or overlapped with, the display device. The photo sensor may be configured to scan movement of the physical object in proximity to the touch screen. In more detail, the photo sensor may include photo diodes and transistors at rows and columns to scan content received at the photo sensor using an electrical signal which changes according to the quantity of applied light. Namely, the photo sensor may calculate the coordinates of the physical object according to variation of light to thus obtain position information of the physical object.
The display unit 151 is generally configured to output information processed in the electronic device 100. For example, the display unit 151 may display execution screen information of an application program executing at the electronic device 100 or user interface (UI) and graphic user interface (GUI) information in response to the execution screen information.
In some embodiments, the display unit 151 may be implemented as a stereoscopic display unit for displaying stereoscopic images.
A typical stereoscopic display unit may employ a stereoscopic display scheme such as a stereoscopic scheme (a glass scheme), an auto-stereoscopic scheme (glassless scheme), a projection scheme (holographic scheme), or the like.
The display unit 151 of the mobile terminal according to an embodiment of the present disclosure includes a transparent display, and the display unit 151 will be called a transparent display 151 in description of the structure of the electronic device 100 and description of embodiments.
The audio output module 152 is generally configured to output audio data. Such audio data may be obtained from any of a number of different sources, such that the audio data may be received from the wireless communication unit 110 or may have been stored in the memory 170. The audio data may be output during modes such as a signal reception mode, a call mode, a record mode, a voice recognition mode, a broadcast reception mode, and the like. The audio output module 152 can provide audible output related to a particular function (e.g., a call signal reception sound, a message reception sound, etc.) performed by the electronic device 100. The audio output module 152 may also be implemented as a receiver, a speaker, a buzzer, or the like.
A haptic module 153 can be configured to generate various tactile effects that a user feels, perceives, or otherwise experiences. A typical example of a tactile effect generated by the haptic module 153 is vibration. The strength, pattern, and the like of the vibration generated by the haptic module 153 can be controlled by user selection or setting by the controller. For example, the haptic module 153 may output different vibrations in a combined manner or in a sequential manner.
Besides vibration, the haptic module 153 can generate various other tactile effects, including an effect by stimulation such as a pin arrangement vertically moving to contact skin, a spray force or suction force of air through a jet orifice or a suction opening, a touch to the skin, a contact of an electrode, electrostatic force, an effect by reproducing the sense of cold and warmth using an element that can absorb or generate heat, and the like.
The haptic module 153 can also be implemented to allow the user to feel a tactile effect through a muscle sensation of, for example, the user's fingers or arm, as well as transferring the tactile effect through direct contact. Two or more haptic modules 153 may be provided according to the particular configuration of the electronic device 100.
An optical output module 154 can output a signal for indicating an event generation using light of a light source. Examples of events generated in the electronic device 100 may include message reception, call signal reception, a missed call, an alarm, a schedule notice, an email reception, information reception through an application, and the like.
A signal output by the optical output module 154 may be implemented in such a manner that the mobile terminal emits monochromatic light or light with a plurality of colors. The signal output may be terminated as the mobile terminal senses that a user has checked the generated event, for example.
The interface unit 160 serves as an interface for external devices to be connected with the electronic device 100. For example, the interface unit 160 can receive data transmitted from an external device, receive power to transfer to elements and components within the electronic device 100, or transmit internal data of the electronic device 100 to such external device. The interface unit 160 may include wired or wireless headset ports, external power supply ports, wired or wireless data ports, memory card ports, ports for connecting a device having an identification module, audio input/output (I/O) ports, video I/O ports, earphone ports, or the like.
The identification module may be a chip that stores various information for authenticating authority of using the electronic device 100 and may include a user identity module (UIM), a subscriber identity module (SIM), a universal subscriber identity module (USIM), and the like. In addition, the device having the identification module (also referred to herein as an “identifying device”) may take the form of a smart card. Accordingly, the identifying device can be connected with the terminal 100 via the interface unit 160.
When the electronic device 100 is connected with an external cradle, the interface unit 160 can serve as a passage to allow power from the cradle to be supplied to the electronic device 100 or may serve as a passage to allow various command signals input by the user from the cradle to be transferred to the mobile terminal therethrough. Various command signals or power input from the cradle may operate as signals for recognizing that the mobile terminal is properly mounted on the cradle.
The memory 170 can store programs to support operations of the controller 180 and store input/output data (for example, phonebook, messages, still images, videos, etc.). The memory 170 may store data related to various patterns of vibrations and audio which are output in response to touch inputs on the touch screen.
The memory 170 may include one or more types of storage mediums including a Flash memory, a hard disk, a solid state disk, a silicon disk, a multimedia card micro type, a card-type memory (e.g., SD or XD memory), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Programmable Read-Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. The electronic device 100 may also be operated in relation to a network storage device that performs the storage function of the memory 170 over a network, such as the Internet.
The controller 180 may typically control the general operations of the electronic device 100. For example, the controller 180 may set or release a lock state for restricting a user from inputting a control command with respect to applications when a status of the mobile terminal meets a preset condition.
The controller 180 can also perform the controlling and processing associated with voice calls, data communications, video calls, and the like, or perform pattern recognition processing to recognize a handwriting input or a picture drawing input performed on the touch screen as characters or images, respectively. In addition, the controller 180 can control one or a combination of those components in order to implement various exemplary embodiments disclosed herein.
The power supply unit 190 receives external power or provides internal power and supplies the appropriate power required for operating respective elements and components included in the electronic device 100. The power supply unit 190 may include a battery, which is typically rechargeable or detachably coupled to the terminal body for charging.
The power supply unit 190 may include a connection port. The connection port may be configured as one example of the interface unit 160 to which an external charger for supplying power to recharge the battery is electrically connected.
As another example, the power supply unit 190 may be configured to recharge the battery in a wireless manner without use of the connection port. In this example, the power supply unit 190 can receive power, transferred from an external wireless power transmitter, using at least one of an inductive coupling method which is based on magnetic induction or a magnetic resonance coupling method which is based on electromagnetic resonance.
Various embodiments described herein may be implemented in a computer-readable medium, a machine-readable medium, or similar medium using, for example, software, hardware, or any combination thereof.
An AI device 20 may include an electronic device including an AI module that can perform AI processing, a server including the AI module, or the like. Further, the AI device 20 may be included as at least one component of the vehicle 10 shown in
The AI processing may include all operations related to driving of the vehicle 10 shown in
The AI device 20 may include an AI processor 21, a memory 25, and/or a communication unit 27.
The AI device 20, which is a computing device that can learn a neural network, may be implemented as various electronic devices such as a server, a desktop PC, a notebook PC, and a tablet PC.
The AI processor 21 can learn a neural network using programs stored in the memory 25. In particular, the AI processor 21 can learn a neural network for recognizing data related to vehicles. Here, the neural network for recognizing data related to vehicles may be designed to simulate the brain structure of a human on a computer and may include a plurality of network nodes having weights and simulating the neurons of a human neural network. The plurality of network nodes can transmit and receive data in accordance with each connection relationship to simulate the synaptic activity of neurons, in which neurons transmit and receive signals through synapses. Here, the neural network may include a deep learning model developed from a neural network model. In the deep learning model, a plurality of network nodes are positioned in different layers and can transmit and receive data in accordance with a convolution connection relationship. The neural network includes, for example, various deep learning techniques such as deep neural networks (DNN), convolutional neural networks (CNN), recurrent neural networks (RNN), a restricted Boltzmann machine (RBM), deep belief networks (DBN), and a deep Q-network, and can be applied to fields such as computer vision, voice recognition, natural language processing, and voice/signal processing.
Meanwhile, a processor that performs the functions described above may be a general-purpose processor (e.g., a CPU), but may also be an AI-dedicated processor (e.g., a GPU) for artificial intelligence learning.
The memory 25 can store various programs and data for the operation of the AI device 20. The memory 25 may be a nonvolatile memory, a volatile memory, a flash memory, a hard disk drive (HDD), a solid state drive (SSD), or the like. The memory 25 is accessed by the AI processor 21, which can read out, record, correct, delete, and update data therein. Further, the memory 25 can store a neural network model (e.g., a deep learning model 26) generated through a learning algorithm for data classification/recognition according to an embodiment of the present disclosure.
Meanwhile, the AI processor 21 may include a data learning unit 22 that learns a neural network for data classification/recognition. The data learning unit 22 can learn references about what learning data are used and how to classify and recognize data using the learning data in order to determine data classification/recognition. The data learning unit 22 can learn a deep learning model by acquiring learning data to be used for learning and by applying the acquired learning data to the deep learning model.
The data learning unit 22 may be manufactured in the form of at least one hardware chip and mounted on the AI device 20. For example, the data learning unit 22 may be manufactured as a dedicated hardware chip for artificial intelligence, or may be manufactured as a part of a general-purpose processor (CPU) or a graphics processing unit (GPU) and mounted on the AI device 20. Further, the data learning unit 22 may be implemented as a software module. When the data learning unit 22 is implemented as a software module (or a program module including instructions), the software module may be stored in non-transitory computer-readable media that can be read by a computer. In this case, at least one software module may be provided by an OS (operating system) or may be provided by an application.
The data learning unit 22 may include a learning data acquiring unit 23 and a model learning unit 24.
The learning data acquiring unit 23 can acquire learning data required for a neural network model for classifying and recognizing data. For example, the learning data acquiring unit 23 can acquire, as learning data, vehicle data and/or sample data to be input to a neural network model.
The model learning unit 24 can perform learning such that a neural network model has a determination reference about how to classify predetermined data, using the acquired learning data. In this case, the model learning unit 24 can train a neural network model through supervised learning that uses at least some of the learning data as a determination reference. Alternatively, the model learning unit 24 can train a neural network model through unsupervised learning that finds a determination reference by performing learning by itself using learning data without supervision. Further, the model learning unit 24 can train a neural network model through reinforcement learning using feedback about whether the result of situation determination according to learning is correct. Further, the model learning unit 24 can train a neural network model using a learning algorithm including error back-propagation or gradient descent.
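As a minimal illustration of the gradient-descent update the model learning unit 24 may apply, the following Python sketch fits a linear model to labeled learning data; the data, learning rate, and iteration count are assumptions for illustration only.

```python
import numpy as np

# Minimal sketch of supervised learning by gradient descent (illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))           # acquired learning data
y = X @ np.array([1.5, -2.0, 0.5])      # labels serving as the determination reference

w = np.zeros(3)                         # model parameters to be learned
learning_rate = 0.1
for _ in range(200):
    error = X @ w - y                   # forward pass: prediction error
    grad = X.T @ error / len(y)         # gradient of the squared-error loss
    w -= learning_rate * grad           # gradient-descent update
print(w)  # converges toward [1.5, -2.0, 0.5]
```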
When a neural network model is learned, the model learning unit 24 can store the learned neural network model in the memory. The model learning unit 24 may store the learned neural network model in the memory of a server connected with the AI device 20 through a wire or wireless network.
The data learning unit 22 may further include a learning data preprocessor (not shown) and a learning data selector (not shown) to improve the analysis result of a recognition model or reduce resources or time for generating a recognition model.
The learning data preprocessor can preprocess acquired data such that the acquired data can be used in learning for situation determination. For example, the learning data preprocessor can process acquired data in a predetermined format such that the model learning unit 24 can use learning data acquired for learning for image recognition.
Further, the learning data selector can select data for learning from the learning data acquired by the learning data acquiring unit 23 or the learning data preprocessed by the preprocessor. The selected learning data can be provided to the model learning unit 24. For example, the learning data selector can select only data for objects included in a specific area as learning data by detecting the specific area in an image acquired through a camera of a vehicle.
Further, the data learning unit 22 may further include a model estimator (not shown) to improve the analysis result of a neural network model.
The model estimator inputs estimation data to a neural network model, and when an analysis result output from the estimation data does not satisfy a predetermined reference, it can make the model learning unit 24 perform learning again. In this case, the estimation data may be data defined in advance for estimating a recognition model. For example, when the number or ratio of estimation data items for which the learned recognition model produces an incorrect analysis result exceeds a predetermined threshold, the model estimator can estimate that the predetermined reference is not satisfied.
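A hedged sketch of the model estimator's acceptance rule described above is shown below; the function name and threshold value are hypothetical.

```python
# If the ratio of estimation data with an incorrect analysis result exceeds
# a predetermined threshold, the model is returned for further learning.
def needs_retraining(predictions, labels, max_error_ratio=0.1) -> bool:
    errors = sum(p != t for p, t in zip(predictions, labels))
    return errors / len(labels) > max_error_ratio

print(needs_retraining([1, 0, 1, 1], [1, 1, 1, 1]))  # True: 25% error > 10%
```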
The communication unit 27 can transmit the AI processing result by the AI processor 21 to an external electronic device.
Here, the external electronic device may be defined as an autonomous vehicle. Further, the AI device 20 may be defined as another vehicle or a 5G network that communicates with the autonomous vehicle. Meanwhile, the AI device 20 may be implemented by being functionally embedded in an autonomous module included in a vehicle. Further, the 5G network may include a server or a module that performs control related to autonomous driving.
Meanwhile, the AI device 20 shown in
A deep neural network (DNN) is an artificial neural network (ANN) with multiple hidden layers between an input layer and an output layer. The deep neural network can model complex non-linear relationships like a typical artificial neural network. The extra layers enable composition of features from lower layers, potentially modeling complex data with fewer units than a similarly performing artificial neural network.
For example, in DNN architectures for object identification models, each object is expressed as a layered composition of image primitives.
The “deep” in “deep learning” refers to the number of layers in the artificial neural network. Deep learning is a machine learning paradigm that uses such a sufficiently deep artificial neural network as a learning model. Also, the sufficiently deep artificial neural network used for deep learning is commonly referred to as a deep neural network (DNN).
In the present disclosure, data sets required to train a POI data creation model may be fed into the input layer of the DNN, and meaningful data that can be used by the user may be created through the output layer as the data sets flow through the hidden layers.
While in the specification of the present disclosure, these artificial neural networks used for this deep learning method are commonly referred to as DNNs, it is needless to say that another deep learning method is applicable as long as meaningful data can be outputted in a way similar to the above deep learning method.
The OCR model is an automatic recognition technology that converts text and images on printed or captured images into digital data. Examples of using the technology include recognition of text on business cards or handwritten information on paper. The related art OCR model operates as subdivided modules, such as a module for finding a text line and a module for splitting letters (i.e., characters). Features that recognize the different patterns of these characters must be designed by a developer. Further, the related art OCR model operates reliably only on high-quality images.
In recent years, the field of OCR has improved in accuracy by applying deep learning, which generates rules (feature extraction) that recognize text in images on its own through learning from massive data. The following is an example of an OCR model using the deep learning technology.
According to an embodiment, the controller 180 may perform pre-processing by applying the deep learning-based OCR model (S71).
Computers may recognize pixels having similar brightness values as a chunk, and more easily detect a letter having a color different from the periphery and having a different structure or point of continuity. Thus, a recognition rate may be significantly improved through pre-processing.
An example of such pre-processing is as follows. A color image is first converted into grayscale. Subsequently, histogram equalization is performed: a sharper image may be obtained by maximizing contrast through redistribution of the brightness distribution of the image. However, there is still a limitation in clearly distinguishing between the background and a letter. To solve this problem, binarization is performed. If a pixel value is 255 (white), it is changed to '0', and if it is 0 to 254 (gray and black), it is changed to '1'. As a result, the background and the letters may be separated more clearly.
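For illustration, this pre-processing chain can be sketched with OpenCV as follows; the file name is hypothetical, and the binarization follows the 255-to-0, 0-254-to-1 mapping described above.

```python
import cv2

# Sketch of the described pre-processing: grayscale, histogram equalization,
# then binarization separating background (white -> 0) from letters (-> 1).
img = cv2.imread("business_card.jpg")            # illustrative input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)     # convert to grayscale
eq = cv2.equalizeHist(gray)                      # maximize contrast
# THRESH_BINARY_INV with threshold 254: pixels of value 255 become 0,
# pixels of value 0..254 become 1.
_, binary = cv2.threshold(eq, 254, 1, cv2.THRESH_BINARY_INV)
```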
The controller 180 may perform a text detecting operation by applying an OCR model based on deep learning (S72).
After the image is put into the DNN, feature values are obtained. The data to be obtained is a text area (text box) and a rotation angle of the text box. Picking out the text area from the input image may reduce unnecessary computation. Rotation information is used to make the tilted text area horizontal. Thereafter, the image is cut into text units. Through this step, an individual character image or word image may be obtained.
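A hedged sketch of using the detector's outputs is shown below; the detector itself is assumed, and the (x, y, w, h) box format and angle convention are illustrative, not part of the disclosed model.

```python
import cv2

# Given a detected text box and its rotation angle, make the text area
# horizontal and cut it out as an individual word image.
def crop_text_region(image, box, angle_deg):
    x, y, w, h = box
    center = (x + w / 2, y + h / 2)
    M = cv2.getRotationMatrix2D(center, angle_deg, 1.0)  # undo the tilt
    rotated = cv2.warpAffine(image, M, (image.shape[1], image.shape[0]))
    return rotated[y:y + h, x:x + w]                     # cut into text units
```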
The controller 180 may perform a text recognition operation by applying a deep learning based OCR model (S73).
In order to recognize which letter each image contains, a DNN is used. The DNN learns how to recognize individual words and letters in the form of images. Meanwhile, the types of words or strings that the DNN may recognize vary by language. Therefore, for general-purpose OCR, a module for estimating the language using only images may be necessary.
The controller 180 may perform post-processing by applying an OCR model based on deep learning (S74).
OCR post-processing corrects character recognition errors in a way similar to how humans read text. There are two ways. The first is to use features of each letter: an error is corrected by distinguishing between visually similar letters (similar pairs). The second way is to use contextual information. To this end, a language model or a dictionary may be necessary, and a language model that learns from the vast amount of text data on the web may be constructed through deep learning.
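These two correction strategies can be sketched as follows; the similar-pair table and word list are illustrative stand-ins, and a deployed system would use a learned language model rather than a fixed dictionary.

```python
# Strategy 1: swap visually similar characters (similar pairs).
# Strategy 2: accept a correction only if context (here, a dictionary) agrees.
SIMILAR_PAIRS = {"0": "O", "1": "l", "5": "S"}   # hypothetical confusions
DICTIONARY = {"Seoul", "Office", "Sales"}        # hypothetical word list

def correct_word(word: str) -> str:
    if word in DICTIONARY:
        return word
    fixed = "".join(SIMILAR_PAIRS.get(ch, ch) for ch in word)
    return fixed if fixed in DICTIONARY else word

print(correct_word("0ffice"))  # -> "Office"
```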
The present disclosure is to apply an existing deep learning-based OCR model in a more advanced form through federated learning (to be described later).
Text of a business card may be recognized through the camera of the terminal, and the above-described deep learning-based OCR model may be used to store the text of the business card. To train the OCR model, a large amount of labeled training data is required. However, even with an OCR model trained with a large amount of data, errors inevitably occur when new data is input in an actual use environment.
In the training method of the OCR model proposed in the present disclosure, data generated through inference errors of the model is obtained directly from an edge device, which is the environment in which the actual model is used, and is then learned; the result of the learning is transmitted to a model averaging server and merged to create a better OCR model, and the merged model is thereafter transmitted back to each edge device.
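As a minimal sketch of the merge step at the model averaging server, the following FedAvg-style weighted average is illustrative; the weight arrays and sample counts are assumptions, not the disclosed protocol.

```python
import numpy as np

# Each edge device uploads only its locally learned weights; the server
# merges them, weighting by how much data each device learned from.
def model_average(edge_weights, sample_counts):
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(edge_weights, sample_counts))

# Three edge devices trained on 10, 50, and 40 local samples respectively.
merged = model_average([np.array([0.9]), np.array([1.1]), np.array([1.0])],
                       [10, 50, 40])
print(merged)  # [1.04]; the merged model is then sent back to each device
```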
Hereinafter, the concept of federated learning applied to exemplary embodiments of the present disclosure will be described.
The three main requirement areas in the 5G system are (1) enhanced Mobile Broadband (eMBB) area, (2) massive Machine Type Communication (mMTC) area, and (3) Ultra-Reliable and Low Latency Communication (URLLC) area.
Some use cases may require a plurality of areas for optimization, while other use cases may focus on only one Key Performance Indicator (KPI). The 5G system supports these various use cases in a flexible and reliable manner.
eMBB far surpasses basic mobile Internet access, supports various interactive works, and covers media and entertainment applications in cloud computing or augmented reality environments. Data is one of the core driving elements of the 5G system, and data may become so abundant that, for the first time, the voice-only service may disappear. In 5G, voice is expected to be handled simply as an application program using the data connection provided by the communication system. The primary causes of the increased volume of traffic are the growth of content size and the increasing number of applications requiring a high data transfer rate. Streaming services (audio and video), interactive video, and mobile Internet connections will be used more heavily as more and more devices are connected to the Internet. These application programs require always-on connectivity to push real-time information and notifications to the user. Cloud-based storage and applications are growing rapidly on mobile communication platforms and may be applied to both business and entertainment uses; cloud-based storage is a special use case that drives growth of the uplink data transfer rate. 5G is also used for cloud-based remote work and requires a much shorter end-to-end latency to ensure an excellent user experience when a tactile interface is used. Entertainment, for example cloud-based gaming and video streaming, is another core element that strengthens the requirement for mobile broadband capability. Entertainment is essential for smartphones and tablets anywhere, including high-mobility environments such as trains, cars, and planes. Another use case is augmented reality for entertainment and information search, which requires very low latency and instantaneous data transfer.
Also, one of the most anticipated 5G use cases is the function of connecting embedded sensors seamlessly in every possible area, namely the use case based on mMTC. By 2020, the number of potential IoT devices was expected to reach 20.4 billion. Industrial IoT is one of the key areas where 5G performs a primary role in maintaining infrastructure for smart cities, asset tracking, smart utilities, agriculture, and security.
URLLC includes new services which may transform industry through ultra-reliable/ultra-low-latency links, such as remote control of major infrastructure and self-driving cars. This level of reliability and latency is essential for smart grid control, industrial automation, robotics, and drone control and coordination.
Next, a plurality of use cases will be described in more detail.
5G may complement Fiber-To-The-Home (FTTH) and cable-based broadband (or DOCSIS) as a means to provide streams estimated at hundreds of megabits per second up to gigabits per second. Such fast speeds are required not only for virtual reality and augmented reality but also for transferring video with a resolution of 4K or more (6K, 8K, and beyond). VR and AR applications almost always include immersive sports games. Specific application programs may require a special network configuration. For example, in the case of a VR game, game service providers may have to integrate a core server with the network operator's edge network service to minimize latency.
Automobiles are expected to be a new important driving force for the 5G system, together with various use cases of mobile communication for vehicles. For example, entertainment for passengers requires both high capacity and high mobile broadband. This is because users continue to expect a high-quality connection irrespective of their location and moving speed. Another use case in the automotive field is an augmented reality dashboard. The augmented reality dashboard overlays information, which is a perception result of an object in the dark and contains the distance to the object and the object's motion, on what is seen through the front window. In the future, a wireless module will enable communication among vehicles, information exchange between a vehicle and supporting infrastructure, and information exchange between a vehicle and other connected devices (for example, devices carried by a pedestrian). A safety system guides alternative courses of driving so that a driver may drive his or her vehicle more safely, reducing the risk of accidents. The next step will be a remotely driven or self-driven vehicle. This step requires highly reliable and fast communication between different self-driving vehicles and between a self-driving vehicle and infrastructure. In the future, it is expected that a self-driving vehicle will take care of all driving activities, while a human driver focuses on dealing with abnormal driving situations that the self-driving vehicle is unable to recognize. The technical requirements of a self-driving vehicle demand ultra-low latency and ultra-high reliability, up to a level of traffic safety that cannot be reached by human drivers.
The smart city and smart home, which are regarded as essential to realizing a smart society, will be embedded with high-density wireless sensor networks. Distributed networks of intelligent sensors may identify conditions for cost-efficient and energy-efficient maintenance of cities and homes. A similar configuration may be applied to each home. Temperature sensors, window and heating controllers, anti-theft alarm devices, and home appliances will all be connected wirelessly. Many of these sensors are typified by a low data transfer rate, low power, and low cost. However, for example, real-time HD video may require specific types of devices for the purpose of surveillance.
As consumption and distribution of energy, including heat and gas, become highly distributed, automated control of a distributed sensor network is required. A smart grid collects information and interconnects sensors by using digital information and communication technologies so that the distributed sensor network operates according to the collected information. Since the information may include the behaviors of energy suppliers and consumers, the smart grid may help improve the distribution of fuels such as electricity in terms of efficiency, reliability, economics, production sustainability, and automation. The smart grid may be regarded as a different type of sensor network with low latency.
The health-care sector has many application programs that may benefit from mobile communication. A communication system may support telemedicine, which provides clinical care from a distance. Telemedicine may help reduce distance barriers and improve access to medical services that are not readily available in remote rural areas. It may also be used to save lives in critical medical and emergency situations. A wireless sensor network based on mobile communication may provide remote monitoring and sensors for parameters such as heart rate and blood pressure.
Wireless and mobile communication are becoming increasingly important for industrial applications. Cable wiring requires high installation and maintenance costs. Therefore, replacement of cables with reconfigurable wireless links is an attractive opportunity for many industrial applications. However, to exploit the opportunity, the wireless connection is required to function with a latency similar to that in the cable connection, to be reliable and of large capacity, and to be managed in a simple manner. Low latency and very low error probability are new requirements that lead to the introduction of the 5G system.
Logistics and freight tracking are important use cases of mobile communication which require tracking of inventory and packages from any place by using a location-based information system. Logistics and freight tracking use cases typically require a low data rate but require large-scale and reliable location information.
The present disclosure to be described below may be implemented by combining or modifying the respective embodiments to satisfy the aforementioned requirements of the 5G system.
Referring to
The cloud network 10 may comprise part of the cloud computing infrastructure or refer to a network existing in the cloud computing infrastructure. Here, the cloud network 10 may be constructed by using the 3G network, 4G or Long Term Evolution (LTE) network, or 5G network.
In other words, the individual devices (11 to 16) constituting the AI system may be connected to each other through the cloud network 10. In particular, the individual devices (11 to 16) may communicate with each other through an eNB, but may also communicate directly with each other without relying on the eNB.
The AI server 16 may include a server performing AI processing and a server performing computations on big data.
The AI server 16 may be connected to at least one or more of the robot 11, self-driving vehicle 12, XR device 13, smartphone 14, or home appliance 15, which are AI devices constituting the AI system, through the cloud network 10 and may help at least part of AI processing conducted in the connected AI devices (11 to 15).
At this time, the AI server 16 may train the artificial neural network according to a machine learning algorithm on behalf of the AI devices (11 to 15), directly store the learning model, or transmit the learning model to the AI devices (11 to 15).
At this time, the AI server 16 may receive input data from the AI device (11 to 15), infer a result value from the received input data by using the learning model, generate a response or control command based on the inferred result value, and transmit the generated response or control command to the AI device (11 to 15).
Similarly, the AI device (11 to 15) may infer a result value from the input data by employing the learning model directly and generate a response or control command based on the inferred result value.
<AI+Robot>
By employing the AI technology, the robot 11 may be implemented as a guide robot, transport robot, cleaning robot, wearable robot, entertainment robot, pet robot, or unmanned flying robot.
The robot 11 may include a robot control module for controlling its motion, where the robot control module may correspond to a software module or a chip which implements the software module in the form of a hardware device.
The robot 11 may obtain status information of the robot 11, detect (recognize) the surroundings and objects, generate map data, determine a travel path and navigation plan, determine a response to user interaction, or determine motion by using sensor information obtained from various types of sensors.
Here, the robot 11 may use sensor information obtained from at least one or more sensors among lidar, radar, and camera to determine a travel path and navigation plan.
The robot 11 may perform the operations above by using a learning model built on at least one or more artificial neural networks. For example, the robot 11 may recognize the surroundings and objects by using the learning model and determine its motion by using the recognized surroundings or object information. Here, the learning model may be the one trained by the robot 11 itself or trained by an external device such as the AI server 16.
At this time, the robot 11 may perform the operation by generating a result by employing the learning model directly but also perform the operation by transmitting sensor information to an external device such as the AI server 16 and receiving a result generated accordingly.
The robot 11 may determine a travel path and navigation plan by using at least one or more of object information detected from the map data and sensor information or object information obtained from an external device and navigate according to the determined travel path and navigation plan by controlling its locomotion platform.
Map data may include object identification information about various objects disposed in the space in which the robot 11 navigates. For example, the map data may include object identification information about static objects such as walls and doors and movable objects such as a flowerpot and a desk. And the object identification information may include the name, type, distance, location, and so on.
Also, the robot 11 may perform the operation or navigate the space by controlling its locomotion platform based on the control/interaction of the user. At this time, the robot 11 may obtain intention information of the interaction due to the user's motion or voice command and perform an operation by determining a response based on the obtained intention information.
<AI+Autonomous Navigation>
By employing the AI technology, the self-driving vehicle 12 may be implemented as a mobile robot, unmanned ground vehicle, or unmanned aerial vehicle.
The self-driving vehicle 12 may include an autonomous navigation module for controlling its autonomous navigation function, where the autonomous navigation control module may correspond to a software module or a chip which implements the software module in the form of a hardware device. The autonomous navigation control module may be installed inside the self-driving vehicle 12 as a constituting element thereof or may be installed outside the self-driving vehicle 12 as a separate hardware component.
The self-driving vehicle 12 may obtain status information of the self-driving vehicle 12, detect (recognize) the surroundings and objects, generate map data, determine a travel path and navigation plan, or determine motion by using sensor information obtained from various types of sensors.
Like the robot 11, the self-driving vehicle 12 may use sensor information obtained from at least one or more sensors among lidar, radar, and camera to determine a travel path and navigation plan.
In particular, the self-driving vehicle 12 may recognize an occluded area or an area extending over a predetermined distance or objects located across the area by collecting sensor information from external devices or receive recognized information directly from the external devices.
The self-driving vehicle 12 may perform the operations above by using a learning model built on at least one or more artificial neural networks. For example, the self-driving vehicle 12 may recognize the surroundings and objects by using the learning model and determine its navigation route by using the recognized surroundings or object information. Here, the learning model may be the one trained by the self-driving vehicle 12 itself or trained by an external device such as the AI server 16.
At this time, the self-driving vehicle 12 may perform the operation by generating a result by employing the learning model directly but also perform the operation by transmitting sensor information to an external device such as the AI server 16 and receiving a result generated accordingly.
The self-driving vehicle 12 may determine a travel path and navigation plan by using at least one or more of object information detected from the map data and sensor information or object information obtained from an external device and navigate according to the determined travel path and navigation plan by controlling its driving platform.
Map data may include object identification information about various objects disposed in the space (for example, road) in which the self-driving vehicle 12 navigates. For example, the map data may include object identification information about static objects such as streetlights, rocks and buildings and movable objects such as vehicles and pedestrians. And the object identification information may include the name, type, distance, location, and so on.
Also, the self-driving vehicle 12 may perform the operation or navigate the space by controlling its driving platform based on the control/interaction of the user. At this time, the self-driving vehicle 12 may obtain intention information of the interaction due to the user's motion or voice command and perform an operation by determining a response based on the obtained intention information.
<AI+XR>
By employing the AI technology, the XR device 13 may be implemented as a Head-Mounted Display (HMD), Head-Up Display (HUD) installed at the vehicle, TV, mobile phone, smartphone, computer, wearable device, home appliance, digital signage, vehicle, robot with a fixed platform, or mobile robot.
The XR device 13 may obtain information about the surroundings or physical objects by generating position and attribute data about 3D points by analyzing 3D point cloud or image data acquired from various sensors or external devices and output objects in the form of XR objects by rendering the objects for display.
The XR device 13 may perform the operations above by using a learning model built on at least one or more artificial neural networks. For example, the XR device 13 may recognize physical objects from 3D point cloud or image data by using the learning model and provide information corresponding to the recognized physical objects. Here, the learning model may be the one trained by the XR device 13 itself or trained by an external device such as the AI server 16.
At this time, the XR device 13 may perform the operation by generating a result by employing the learning model directly but also perform the operation by transmitting sensor information to an external device such as the AI server 16 and receiving a result generated accordingly.
<AI+Robot+Autonomous Navigation>
By employing the AI and autonomous navigation technologies, the robot 11 may be implemented as a guide robot, transport robot, cleaning robot, wearable robot, entertainment robot, pet robot, or unmanned flying robot.
The robot 11 employing the AI and autonomous navigation technologies may correspond to a robot itself having an autonomous navigation function or a robot 11 interacting with the self-driving vehicle 12.
The robot 11 having the autonomous navigation function may correspond collectively to the devices which may move autonomously along a given path without control of the user or which may move by determining its path autonomously.
The robot 11 and the self-driving vehicle 12 having the autonomous navigation function may use a common sensing method to determine one or more of the travel path or navigation plan. For example, the robot 11 and the self-driving vehicle 12 having the autonomous navigation function may determine one or more of the travel path or navigation plan by using the information sensed through lidar, radar, and camera.
The robot 11 interacting with the self-driving vehicle 12, which exists separately from the self-driving vehicle 12, may be associated with the autonomous navigation function inside or outside the self-driving vehicle 12 or perform an operation associated with the user riding the self-driving vehicle 12.
At this time, the robot 11 interacting with the self-driving vehicle 12 may obtain sensor information in place of the self-driving vehicle 12 and provide the sensed information to the self-driving vehicle 12; or may control or assist the autonomous navigation function of the self-driving vehicle 12 by obtaining sensor information, generating information of the surroundings or object information, and providing the generated information to the self-driving vehicle 12.
Also, the robot 11 interacting with the self-driving vehicle 12 may control the function of the self-driving vehicle 12 by monitoring the user riding in the self-driving vehicle 12 or through interaction with the user. For example, if it is determined that the driver is drowsy, the robot 11 may activate the autonomous navigation function of the self-driving vehicle 12 or assist the control of the driving platform of the self-driving vehicle 12. Here, the function of the self-driving vehicle 12 controlled by the robot 11 may include not only the autonomous navigation function but also the navigation system installed inside the self-driving vehicle 12 or the function provided by the audio system of the self-driving vehicle 12.
Also, the robot 11 interacting with the self-driving vehicle 12 may provide information to the self-driving vehicle 12 or assist functions of the self-driving vehicle 12 from the outside of the self-driving vehicle 12. For example, the robot 11 may provide traffic information including traffic sign information to the self-driving vehicle 12 like a smart traffic light or may automatically connect an electric charger to the charging port by interacting with the self-driving vehicle 12 like an automatic electric charger of the electric vehicle.
<AI+Robot+XR>
By employing the AI technology, the robot 11 may be implemented as a guide robot, transport robot, cleaning robot, wearable robot, entertainment robot, pet robot, or unmanned flying robot.
The robot 11 employing the XR technology may correspond to a robot which acts as a control/interaction target in the XR image. In this case, the robot 11 may be distinguished from the XR device 13, both of which may operate in conjunction with each other.
If the robot 11, which acts as a control/interaction target in the XR image, obtains sensor information from the sensors including a camera, the robot 11 or XR device 13 may generate an XR image based on the sensor information, and the XR device 13 may output the generated XR image. And the robot 11 may operate based on the control signal received through the XR device 13 or based on the interaction with the user.
For example, the user may check the XR image corresponding to the viewpoint of the robot 11 associated remotely through an external device such as the XR device 13, modify the navigation path of the robot 11 through interaction, control the operation or navigation of the robot 11, or check the information of nearby objects.
<AI+Autonomous Navigation+XR>
By employing the AI and XR technologies, the self-driving vehicle 12 may be implemented as a mobile robot, unmanned ground vehicle, or unmanned aerial vehicle.
The self-driving vehicle 12 employing the XR technology may correspond to a self-driving vehicle having a means for providing XR images or a self-driving vehicle which acts as a control/interaction target in the XR image. In particular, the self-driving vehicle 12 which acts as a control/interaction target in the XR image may be distinguished from the XR device 13, both of which may operate in conjunction with each other.
The self-driving vehicle 12 having a means for providing XR images may obtain sensor information from sensors including a camera and output XR images generated based on the sensor information obtained. For example, by displaying an XR image through HUD, the self-driving vehicle 12 may provide XR images corresponding to physical objects or image objects to the passenger.
At this time, if an XR object is output on the HUD, at least part of the XR object may be output so as to be overlapped with the physical object at which the passenger gazes. On the other hand, if an XR object is output on a display installed inside the self-driving vehicle 12, at least part of the XR object may be output so as to be overlapped with an image object. For example, the self-driving vehicle 12 may output XR objects corresponding to the objects such as roads, other vehicles, traffic lights, traffic signs, bicycles, pedestrians, and buildings.
If the self-driving vehicle 12, which acts as a control/interaction target in the XR image, obtains sensor information from the sensors including a camera, the self-driving vehicle 12 or XR device 13 may generate an XR image based on the sensor information, and the XR device 13 may output the generated XR image. And the self-driving vehicle 12 may operate based on the control signal received through an external device such as the XR device 13 or based on the interaction with the user.
[Extended Reality Technology]
eXtended Reality (XR) refers to all of Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). The VR technology provides objects or backgrounds of the real world only in the form of CG images, AR technology provides virtual CG images overlaid on the physical object images, and MR technology employs computer graphics technology to mix and merge virtual objects with the real world.
MR technology is similar to AR technology in a sense that physical objects are displayed together with virtual objects. However, while virtual objects supplement physical objects in the AR, virtual and physical objects co-exist as equivalents in the MR.
The XR technology may be applied to Head-Mounted Display (HMD), Head-Up Display (HUD), mobile phone, tablet PC, laptop computer, desktop computer, TV, digital signage, and so on, where a device employing the XR technology may be called an XR device.
The foregoing techniques can be applied to clarify or embody the present disclosure. Hereinafter, an intelligent device and a control method for providing a personalized service according to an embodiment of the present disclosure will be described in detail with reference to
Referring to
Referring to
For precise profiling, it is necessary to subdivide the user's individual characteristics. The individual characteristics may be related to at least one of gender, whether being married, whether having a child, whether having a pet, a means of transportation, an occupation, or a preferred brand. However, aspects of the present disclosure are not limited thereto, and all individual characteristics which can limit types and ranges of services among various services may be included in order to provide a customized service.
User profile data generated by aggregating the individual characteristics may be utilized to provide a customized service for the corresponding service. Specifically, for example, it is assumed that there are two individual characteristics which can be found in the user profile data, wherein a first characteristic is a married status (hereinafter, a first individual characteristic) and a second characteristic is living with a cat (hereinafter, a second individual characteristic).
Referring to
Specifically, A, related to Characteristic 1, may be a company that provides goods or services for a married user. C, related to Characteristic 2, may be a company which sells pet products or a cat hospital which provides services for cats.
A company providing wedding-related goods and services, such as a wedding consulting agency or a honeymoon travel agency, which is not related to Characteristic 1, may be filtered out by the user profile data when a customized service is provided.
A company providing reptile-related goods and services, which is not related to Characteristic 2, may likewise be filtered out by the user profile data when a customized service is provided.
A to E indicate corporations or companies for convenience of explanation; in terms of services provided by an electronic device, A to E may be applications related to those corporations or companies.
As such, in order to provide a user using the electronic device with a customized service, user profile data may be generated and utilized, and a further detailed description thereof will be provided with reference to
Referring to
According to an embodiment, the method for controlling the intelligent device may be performed by the processor 180 of
In the step S1010, the processor 180 collects source data related to individual characteristics of a user.
According to an embodiment, the source data may be data related to at least one of information on an application installed in the intelligent device or an operation record of the corresponding application.
According to an embodiment, the individual characteristic may be a characteristic related to at least one service among a plurality of services provided through applications installed in the intelligent device.
In the step S1020, the processor 180 determines at least one individual characteristic by analyzing the source data.
According to an embodiment, the individual characteristic may be related to at least one of gender, whether being married, whether having a child, whether having a pet, a means of transportation, an occupation, or a preferred brand.
In the step S1030, the processor 180 generates the user profile data by aggregating the individual characteristic.
The generated user profile data may be utilized so that the user is provided with customized services from various applications installed in the intelligent device 100.
For example, a service provided by an application installed in the intelligent device 100 may be customized through the user profile data.
In another example, a retrieving function may be improved through the user profile data. If an application installed in the intelligent device 100 is an Internet browser, the user profile data may be used when the Internet browser filters a retrieval result once more.
According to an embodiment, the source data for determining the individual characteristic may be collected in the intelligent device 100 which is carried around by the user. In addition, the source data may be collected even in another device used by the user. Collection of the source data will be described in detail with reference to
Referring to
In the step S1110, the processor 180 may collect information on at least one application installed in the intelligent device, and log information related to an operation of the application.
The information on the application may include a category, a title, or an age of use. However, aspects of the present disclosure are not limited thereto; the information on the application may be any information which can specify the corresponding application among various applications, and may include any information related to the individual characteristic. For example, if category information is included in the information on the application, the category information may be kids or education.
The log information may be information in which an operation of the application or an event occurring during execution of the application is recorded. The log information may include data (or a file) that is generated as a result of an operation of the application. However, aspects of the present disclosure are not limited thereto; the log information may include a record of a specific operation or of an event occurring upon the specific operation, as well as data (or a file) generated as a result of an operation of the intelligent device 100. For example, the log information may include an image file generated as a result of operation of a camera application, and a record of operation of the camera 121 of the intelligent device 100.
In the step S1120, the processor 180 may extract tag data related to the individual characteristic from the information on the application and from the log information, and store the extracted tag data as the source data.
Specifically, the tag data may be information extracted from the information on the application or from the log information, and may be a keyword, weather, a number, a location, or any other information related to the individual characteristic.
The source data may refer to a database that is classified and stored so that the tag data can easily be utilized to determine the individual characteristic. The tag data may be classified according to the application or device corresponding to the collecting source and then stored, and the tag data may be stored together with a tag (e.g., gender, transportation, etc.) related to at least one individual characteristic.
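A hedged sketch of such classified storage is shown below; the application names, tags, and values are hypothetical.

```python
# Tag data stored per collecting source, each entry carrying the tag of the
# individual characteristic it relates to.
source_data: dict = {}

def store_tag(source_app: str, tag: str, value) -> None:
    source_data.setdefault(source_app, []).append({"tag": tag, "value": value})

store_tag("contact_list", "child", "beautiful daughter")
store_tag("message", "child", "mom")
store_tag("navigation", "transportation", "bus")
```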
The source data may be referred to as source data of a specific application in terms of determining any one individual characteristic. For example, if the individual characteristic is related to the gender of the user, data collected from a voice assistant application may be used. In this case, the source data may be referred to as source data of the voice assistant application.
Classification of the source data is merely for convenience of explanation, and the classification is not to limit the scope of the present disclosure.
The collected source data may be utilized to determine various individual characteristics of the user. In order to more precisely determine the individual characteristics, the source data may be collected even in another device used by the user. Hereinafter, a detailed description will be provided with reference to
Referring to
In the step S1210, the processor 180 may access the 5G wireless communication system. Specifically, the processor 180 may control the wireless communication unit 110 to transmit and receive the signals required to perform an initial access procedure and a random access procedure with the 5G wireless communication system.
The 5G wireless communication system may be a wireless communication system providing a 5G service according to
In the step S1220, the processor 180 may receive information on an IoT device used by the user and log information related to the IoT device. Specifically, the processor 180 may control the wireless communication unit 110 so that the information is received after the access to the 5G wireless communication system.
According to an embodiment, the 5G wireless communication system may be a wireless communication system that supports massive Machine Type Communication (mMTC) or Narrowband Internet of Things (NB-IoT).
The processor 180 may control the wireless communication unit 110 to transmit and receive signals through a channel according to a communication scheme supported by the 5G wireless communication system.
Specifically, the information on the IoT device used by the user and the log information related to an operation of the IoT device may be received through an MTC Physical Downlink Shared Channel (MPDSCH) or a Narrowband Physical Downlink Shared Channel (NPDSCH).
In the step S1230, the processor 180 may extract the tag data related to the individual characteristic from the information on the IoT device and from the log information related to operation of the IoT device, and store the tag data as source data.
According to an embodiment, the IoT device may be at least one of an autonomous vehicle, a wearable device, a refrigerator, a washing machine, a drone, or a smart TV. However, aspects of the present disclosure are not limited thereto, and the IoT device may include any other electronic device capable of operation in conjunction with the 5G wireless communication system.
The source data may be collected not just in the intelligent device 100 frequently used by the user, but also in another IoT device.
For example, suppose the IoT device is a washing machine and the number of usages is extracted as tag data from the log information. If the number of usages of the washing machine is equal to or greater than a predetermined number within a predetermined period, the individual characteristic of the user may be determined to be living in an environment where frequent use of the washing machine is required (e.g., a multi-child family). User profile data generated in consideration of this individual characteristic may be utilized to preferentially show products useful for a multi-child family in a shopping application.
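This rule can be sketched as follows; the threshold and period values are illustrative.

```python
# Determine whether washing-machine usage suggests an environment requiring
# frequent washing (e.g., a multi-child family). Thresholds are illustrative.
def frequent_washer(usage_count: int, period_days: int,
                    min_weekly_usages: int = 7) -> bool:
    weekly = usage_count / (period_days / 7)
    return weekly >= min_weekly_usages

print(frequent_washer(usage_count=35, period_days=28))  # True (8.75/week)
```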
Hereinafter, a process of determining various individual characteristics using the source data will be described in detail with reference to
Referring to
In the step S1310, the processor 180 may retrieve a keyword related to whether having a child from the source data of at least one of a message application or a contact list application, and match the retrieved keyword with a keyword set that is preset regarding whether having a child.
An example of a keyword regarding whether having a child may be "daughter". Accordingly, keywords like "beautiful daughter" and "handsome son" may be retrieved.
The keyword set preset regarding whether having a child may be composed of at least one of call word-related keywords, kid-related positive words (e.g., kid-related brands), or kid-related negative words, and a weight as to whether having a child may be assigned to each keyword.
Matching with the preset keyword set may be a series of processes of determining whether a retrieved keyword matches a keyword included in the preset keyword set and, if so, deriving a matching result (e.g., a total weight) by taking into consideration the corresponding weight.
An example of a method of deriving the matching result will be described in the following.
A word "beautiful daughter" may be retrieved from source data of the contact list application, and a word "mom" may be retrieved from a received message in the message application. In the preset keyword set, the weight of the keyword "daughter" may be 5, and the weight of the keyword "mom" may be 7, so a total weight may be 12. The method of calculating a total weight and the specific value of the total weight may be set differently by taking into consideration the accuracy of provision of a customized service.
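A minimal sketch of this matching process is given below; the keyword set, the weight values, and the naive substring matching rule are assumptions chosen only to reproduce the worked example above.

    KID_KEYWORD_WEIGHTS = {"daughter": 5, "son": 5, "mom": 7}

    def kid_keyword_total_weight(retrieved_entries):
        # Sum the weights of preset keywords found in the retrieved
        # contact-list or message entries. Substring matching is used
        # here only for illustration ("beautiful daughter" matches the
        # keyword "daughter"); each retrieved entry is counted once.
        total = 0
        for entry in retrieved_entries:
            for keyword, weight in KID_KEYWORD_WEIGHTS.items():
                if keyword in entry:
                    total += weight
                    break
        return total

    print(kid_keyword_total_weight(["beautiful daughter", "mom"]))  # 12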
In the step S1320, the processor 180 may analyze an operation time of a kid-related application in the source data.
The source data may include tag data extracted from information on an application installed in the intelligent device 100, and the tag data may be a category, a title, or a usage age of the application.
The source data may include tag data extracted from log information related to an operation of the application, and this tag data may be the number of operations of the application or an operation time of the application.
If there is an application whose category (or title) belongs to kids or education among the applications installed in the intelligent device 100, the processor 180 may detect the application as a kid-related application.
The processor 180 may extract an operation time of the detected application and the number of operations of the application, and calculate a score as shown in Equation 1 below.
In Equation 1, launchCount denotes the number of times a kid-related application is launched, and usageTime denotes an operation time of the kid-related application. That is, if the kid-related application is frequently used, or if the kid-related application is rarely used but, when used, is used for a long time, the score is calculated to be a large value. The score may serve as an operation time analytic result of the kid-related application.
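Equation 1 itself is not reproduced in this text. One score consistent with the description (large when the launch count is high, and also large when the application is launched rarely but used for long sessions) is sketched below; the exact functional form is an assumption, not the equation from the disclosure.

    def kid_app_score(launch_count, usage_time_minutes):
        # Grows with frequent launches (launch_count) and with long
        # average session length (usage_time / launch_count). The sum
        # of the two terms is an assumed form, not Equation 1 itself.
        if launch_count == 0:
            return 0.0
        return launch_count + usage_time_minutes / launch_count

    print(kid_app_score(50, 100))  # frequently used -> 52.0
    print(kid_app_score(2, 120))   # rare but long sessions -> 62.0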
In the step S1330, the processor 180 may determine whether the user has a child, by using the matching result and the operation time analytic result.
For example, when the matching result or the operation time analytic result is equal to or greater than a predetermined value, the processor 180 may determine that the user has a child. In another example, when the matching result and the operation time analytic result are respectively equal to or greater than a preset value, the processor 180 may determine that the user has a child.
When the user has a child, user profile data may be utilized in an application that provides a kid-related content or service. For example, when the user searches for a specific product or service using an Internet browser application, the corresponding application may operate to preferentially show a kid-related product or service in search results.
Referring to FIG. 14, the processor 180 may determine whether the user is married through steps S1410 and S1420.
In the step S1410, the processor 180 may retrieve the marriage-related keyword from the source data of the contact list application. The marriage-related keyword may be preset. Examples of the marriage-related keyword are shown in FIG. 14.
The processor 180 may retrieve a marriage-related keyword 14A, or a keyword including marriage, from source data 14B of the contact list application.
In the step S1420, the processor 180 may determine whether the user is married, based on whether the user has a child and whether there is any retrieved keyword.
Whether the user has a child, or the individual characteristic as to whether having a child, may be determined in advance according to the process described above with reference to FIG. 13, and the determination result may be used in this step.
For example, the processor 180 may determine whether the user is married by individually using information as to whether the user has a child and information as to whether there is any retrieved keyword. For example, when the user has a child, the processor 180 may determine that the user is married, without taking into consideration whether there is any retrieved keyword.
In another example, the processor 180 may determine whether the user is married, by taking into account both the information as to whether the user has a child and the information as to whether there is any retrieved keyword. Specifically, when the user has no child and there is no retrieved keyword, the processor 180 may determine that the user is single.
The processor 180 may further determine gender of the user using whether there is any retrieved keyword. A detailed description thereof will be provided with reference to FIG. 15.
Referring to FIG. 15, the processor 180 may determine whether the user is married and the gender of the user through steps S1510 to S1560.
In the step S1510, the processor 180 identifies information regarding whether the user has a child. When the user has a child (True), the processor 180 identifies whether there is a male marriage keyword (e.g., mother of wife) among the retrieved keywords (S1540).
In the step S1540, when there is any male marriage keyword among the retrieved keywords (True), the processor 180 may determine that the user is a married man (S1560). When there is no male marriage keyword among the retrieved keywords, the processor 180 may determine that the user is a married woman (S1550).
In the step S1510, when the user has no child (False), the processor 180 identifies whether the marriage-related keyword is retrieved (S1520). When the marriage-related keyword is retrieved, the processor 180 may perform the step S1540 to thereby determine that the user is a married woman (S1550) or a married man (S1560).
When the user has no child (S1510, False) and there is no retrieved marriage-related keyword (S1520, False), the processor 180 may determine that the user is single (S1530).
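The flow of steps S1510 to S1560 can be summarized by the following sketch; the function and argument names are illustrative.

    def classify_marital_status(has_child, marriage_keyword_found,
                                male_marriage_keyword_found):
        # S1510/S1520: no child and no marriage-related keyword -> single.
        if not has_child and not marriage_keyword_found:
            return "single"  # S1530
        # S1540: a male marriage keyword (e.g., "mother of wife")
        # indicates a married man; otherwise a married woman.
        if male_marriage_keyword_found:
            return "married man"   # S1560
        return "married woman"     # S1550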
Referring to FIG. 16, the processor 180 may determine whether the user has a pet through steps S1610 to S1630.
In the step S1610, the processor 180 may retrieve the tag data of the pet-related image from the source data of a media-related application.
The media-related application may be an application that drives a camera 121 of the intelligent device 100. Examples of such an application may be a selfie application and a messenger application capable of capturing and transmitting a photo.
Source data 16A, 16B, and 16C of the media-related application may be tag data that is extracted from stored images upon driving of the camera 121. The tag data of the pet-related image may include at least one of a name or a tag of the image, a photographing place, a photographing date, or a photographing device.
The source data 16A may be a tag of each image. The tag may be automatically determined and stored according to settings of each application, or may be input by the user when a corresponding image is photographed.
Pet-related tags may be as follows.
Pet-related Tags: Bulldog, cat, dalmatian dog, english bulldog, golden retriever, hamster, kitten, Pomeranian, poodle, pug, puppy, rabbit, Pet.
The aforementioned pet-related tags are merely exemplary; the pet-related tags may be provided in a greater number or may be classified into further subdivided categories. In addition, the aforementioned pet-related tags may differ according to settings of each media-related application.
The source data 16B may be a photographing place. The photographing place may be specified by latitude and longitude as in FIG. 16.
The source data 16C may be a photographing date. For example, in a specific operating system (e.g., Android), the time that has elapsed since Jan. 1, 1970 is expressed as an integer in milliseconds (ms). The source data 16C may be a photographing date expressed according to this method. However, aspects of the present disclosure are not limited thereto, and the photographing date may be expressed as information according to a different method for specifying a date. That is, the photographing date may be expressed in a different format according to settings of the media-related application or the intelligent device 100.
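For instance, assuming the millisecond-epoch convention described above, such an integer timestamp can be converted back to a calendar date as follows (the timestamp value is arbitrary):

    from datetime import datetime, timezone

    photo_ts_ms = 1_567_000_000_000  # milliseconds since Jan. 1, 1970 (UTC)
    photo_date = datetime.fromtimestamp(photo_ts_ms / 1000, tz=timezone.utc)
    print(photo_date.isoformat())    # 2019-08-28T13:46:40+00:00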
In the step S1620, the processor 180 may determine whether the pet-related image is photographed at the user's home, based on at least one of a photographing date, a photographing place, or a photographing device included in the retrieved tag data.
1) The processor 180 may identify whether the pet-related image is a recently photographed image, based on the photographing date 16C included in the tag data. Specifically, the processor 180 may identify whether the photographing date 16C falls within a predetermined period (e.g., a month) from the current point of time.
In regard to tag data having a photographing date falling within the predetermined period, the processor 180 may make a determination as in the following 2).
2) The processor 180 identifies whether a photographing device included in the retrieved tag data coincides with the intelligent device 100 of the user. Specifically, the processor 180 may identify whether a model name (e.g., A) of the photographing device included in the retrieved tag data coincides with a model name of the camera 121 of the intelligent device 100. The pet-related image may have been received through a messenger application, and, in this case, the model name of the photographing device included in the retrieved tag data may not coincide with the model name of the camera 121 of the user.
In regard to the tag data having a matched photographing device, the processor 180 may make a determination as below.
3) The processor 180 may identify whether the photographing place 16B included in the retrieved tag data is within a predetermined distance from a place of residence of the user. For example, the processor 180 may specify the place of residence of the user using locations which have been detected the greatest number of times in a specific time period (e.g., 11 pm to 6 am).
The processor 180 may identify whether the photographing place is within the predetermined distance (e.g., 500 m) from the specified place of residence of the user. This is because, if the photographing place is too far from the place of residence of the user, a pet contained in the image may not be a pet living with the user. The predetermined distance may be determined as a specific value by taking into consideration a range of daily living of the user or precision of provision of a customized service.
In the step S1630, the processor 180 may determine whether the user has a pet, based on the number of pet-related images photographed at the user's home.
Even a pet-related image satisfying the requirements 1) to 3) may be an image of a different person's pet photographed in the neighborhood of the user. Accordingly, when the number of pet-related images satisfying the requirements 1) to 3) is equal to or greater than a predetermined number (e.g., five), the processor 180 may determine that the user has a pet.
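The requirements 1) to 3) and the count threshold of step S1630 may be combined as in the sketch below; the tag set, the flat-earth distance approximation, and the thresholds are assumptions for illustration.

    from datetime import datetime, timedelta
    from math import hypot

    PET_TAGS = {"puppy", "cat", "poodle", "pet"}  # abbreviated example set

    def is_home_pet_photo(tag, taken_at, place, device_model,
                          home, own_model, days=30, max_km=0.5):
        # 1) recent photographing date, 2) matching camera model,
        # 3) photographing place near the residence. place and home
        # are (latitude, longitude) pairs; the flat-distance formula
        # below is a rough approximation used only for illustration.
        if tag not in PET_TAGS:
            return False
        if taken_at < datetime.now() - timedelta(days=days):
            return False
        if device_model != own_model:
            return False
        dist_km = hypot(place[0] - home[0], place[1] - home[1]) * 111.0
        return dist_km <= max_km

    def user_has_pet(photos, home, own_model, min_count=5):
        # S1630: enough qualifying images -> the user has a pet.
        return sum(is_home_pet_photo(*p, home, own_model)
                   for p in photos) >= min_count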
Referring to FIG. 17, the processor 180 may determine a means of transportation of the user through steps S1710 to S1730.
In the step S1710, the processor 180 may determine whether the user has a car, by retrieving tag data related to vehicle audio connection in source data of the Bluetooth connection application. The source data of the Bluetooth connection application may include at least one of a connection date or a type of a connected device. Data 17A may be the connection date, and data 17B may be an ID indicating the type of the connected device. The type of the connected device may be classified as below.
Type of Bluetooth-connected Device: AUDIO_VIDEO_CAMCORDER, AUDIO_VIDEO_CAR_AUDIO, AUDIO_VIDEO_HANDSFREE, AUDIO_VIDEO_HEADPHONES, AUDIO_VIDEO_HIFI_AUDIO, AUDIO_VIDEO_LOUDSPEAKER, AUDIO_VIDEO_MICROPHONE, AUDIO_VIDEO_PORTABLE_AUDIO, AUDIO_VIDEO_SET_TOP_BOX, AUDIO_VIDEO_UNCATEGORIZED, AUDIO_VIDEO_VCR, AUDIO_VIDEO_VIDEO_CAMERA, AUDIO_VIDEO_VIDEO_CONFERENCING.
The types of the Bluetooth-connected device listed above are merely examples, and the type may be classified by a different method or displayed in a different format.
The processor 180 identifies whether Bluetooth is recently connected, using the connection date 17A. Specifically, the processor 180 identifies whether the connection date 17A falls within a predetermined period (e.g., three weeks) from the current time.
The processor 180 identifies whether the type of the connected device is a vehicle audio (e.g., AUDIO_VIDEO_CAR_AUDIO) when the connection date falls within the predetermined period. For example, when the type of the connected device is expressed as an integer ID (e.g., 196610) as shown in FIG. 17, the processor 180 may determine whether the ID corresponds to the vehicle audio.
Through the aforementioned process, when the intelligent device 100 is identified as being connected to a device corresponding to the vehicle audio through Bluetooth within the predetermined period, the processor 180 may determine that the user is a car owner.
In the step S1720, the processor 180 may acquire a walking duration of the user within the predetermined time period related to commuting from the source data of the GPS application. The GPS application may be a default application embedded in the intelligent device 100, a map application, or a GPS-related application which utilizes a GPS function of the intelligent device 100.
The processor 180 may acquire the walking duration of the user from the source data of the GPS application. For example, when a speed calculated from changes in the location of the user corresponds to a walking speed of ordinary people, the processor 180 may acquire the walking duration by subtracting a departure time from a time at which the user stops moving.
The processor 180 may acquire the walking duration with respect to the predetermined time period related to commuting. For example, the predetermined time period may be set to 7 am to 9 am, in which case the processor 180 may acquire the walking duration of the user between 7 am and 9 am. The predetermined time period related to commuting may be set to a specific value with reference to the time zone of the location (or country) where the user lives.
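A simplified sketch of extracting the walking duration within the commuting window is shown below; the sampling format and the assumed walking-speed band are illustrative.

    WALK_MIN_KMH, WALK_MAX_KMH = 3.0, 6.0  # assumed ordinary walking speeds

    def walking_minutes(samples, start_hour=7, end_hour=9):
        # samples: (hour_of_day, speed_kmh) pairs at one-minute
        # intervals, a stand-in for the GPS application's source data.
        return sum(1 for hour, speed in samples
                   if start_hour <= hour < end_hour
                   and WALK_MIN_KMH <= speed <= WALK_MAX_KMH)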
In the step S1730, the processor 180 may determine whether the means of transportation of the user is a car or a public transportation vehicle, based on whether the user has a car and the walking duration of the user.
For example, when it is determined that the user has a car, the processor 180 may determine that the means of transportation is the car. In another example, when the walking duration of the user is equal to or greater than a predetermined value, the processor 180 may determine that the means of transportation of the user is a public transportation vehicle. In another example, when the user has no car and the walking duration is smaller than the predetermined value, the processor 180 may determine that the user does not use a car or a public transportation vehicle (that is, that the user commutes on foot).
For a user who uses a car as the means of transportation, traffic information on a time to commute may be useful. For a user who uses public transportation, subway or bus route-related information (e.g., arrival time at each stop) may be useful. With such determined individual characteristic being reflected, user profile data may be used for a relevant application to provide more useful information to the corresponding user.
Hereinafter, an example of a process of determining a means of transportation of the user will be described in detail with reference to FIG. 18.
Referring to FIG. 18, the processor 180 may determine the means of transportation of the user through steps S1810 to S1860.
In the step S1810, the processor 180 identifies whether there is any vehicle audio connection record by retrieving the source data.
When no tag data indicating that a vehicle audio is connected through Bluetooth is retrieved (False), the processor 180 may determine that the user is a public transportation user (S1850).
When tag data indicating that the vehicle audio is connected through Bluetooth is retrieved (True), the processor 180 may determine that the user is a car owner (S1820). In this case, the processor 180 may determine a means of transportation of the user using a walking duration of the user.
In the step S1830, when the walking duration of the user acquired in a commuting time period is equal to or greater than a predetermined value (e.g., 15 minutes) (True), the processor 180 may determine that the user is a public transportation user who has his/her own car (S1860). When the walking duration of the user acquired in a commuting time period is smaller than the predetermined value (e.g., 15 minutes) (False), the processor 180 may determine that the user is a car user (S1840).
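The flow of steps S1810 to S1860 can be summarized as follows; the 15-minute threshold is the e.g. value from the description.

    def classify_transportation(car_audio_connected, walk_minutes,
                                threshold_min=15):
        if not car_audio_connected:              # S1810 False
            return "public transportation user"  # S1850
        # S1820: the user is a car owner.
        if walk_minutes >= threshold_min:        # S1830 True
            return "public transportation user who owns a car"  # S1860
        return "car user"                        # S1840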
Referring to FIG. 19, the processor 180 may determine an occupation of the user through steps S1910 to S1930.
In the step S1910, the processor 180 may retrieve tag data related to deposit of salary from the source data of the message application.
The source data of the message application may include not just source data of an SMS message transceiving application, which is a default application installed in the intelligent device 100, but also source data of a messenger application through which payment-related messages are transceived.
The processor 180 may retrieve a message including the keyword “deposit” from messages (tag data) received through a message application A19. The processor 180 may retrieve a message including a pay-related keyword from the retrieved messages. The pay-related keyword may be preset. Examples of the pay-related keyword may be as below.
Pay-related keywords: Salary, Bonus, Pension, Incentives, Monthly Pay.
In the step S1920, the processor 180 may retrieve the installation and usage record of an employee- or university student-related application from the source data.
The type of the employee- or university student-related application may be preset or may be classified by a category of the application. For example, the university student-related application may be an application for course registration and timetable management or an application for providing part-time job information.
In the step S1930, the processor 180 may determine an occupation of the user based on whether there is a message regarding deposit of salary and based on the installation and usage record of the application.
For example, when a message related to deposit of salary is retrieved from the source data and the installation and usage record of an employee-related application is retrieved from the source data, the processor 180 may determine that the occupation of the user is an office worker (employee).
In another example, when no pay-related keyword is retrieved from the deposit-related messages but the installation and usage record of a university student-related application is retrieved from the source data, the processor 180 may determine that the occupation of the user is a university student.
When the user is a university student, information on job opening or job search may be useful. When the user is an office worker, information on wealth management or marriage may be useful. With an occupation-related individual characteristic being reflected, user profile data may be used to provide the above-described customized service.
Hereinafter, an example of the process of determining an occupation of a user will be described in detail.
Referring to FIG. 20, the processor 180 may determine the occupation of the user through steps S2010 to S2070.
In the step S2010, the processor 180 retrieves a text (or message) related to deposit within a predetermined period (e.g., six months).
In the step S2020, the processor 180 identifies whether a pay-related word (keyword) is included in the retrieved message. When the pay-related word is not included in the retrieved message (False), the processor 180 retrieves installation and usage record of an application that university students use (S2050).
In the step S2050, when the installation and usage record of the application used by university students is retrieved (True), the processor 180 may determine that the user is a university student (S2070). When the installation and usage record of the application used by university students is not retrieved (False), the processor 180 may determine that the user is a freelancer (S2060).
In the step S2020, when the pay-related word is included in the retrieved message (True), the processor 180 retrieves installation and usage record of an application used by employees (S2030).
In the step S2030, when the installation and usage record of the application used by employees is retrieved (True), the processor 180 may determine that the user is an employee (S2040). When the installation and usage record of the application used by employees is not retrieved (False), the processor 180 may determine that the user is a freelancer (S2060).
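The flow of steps S2010 to S2070 may be sketched as follows; the pay-related keywords are taken from the example list above, and the message format is an assumption.

    PAY_KEYWORDS = ("salary", "bonus", "pension", "incentives", "monthly pay")

    def classify_occupation(deposit_messages, employee_app_used,
                            student_app_used):
        # deposit_messages: deposit-related texts retrieved within the
        # predetermined period (S2010).
        has_pay_word = any(k in msg.lower()
                           for msg in deposit_messages
                           for k in PAY_KEYWORDS)
        if has_pay_word:  # S2020 True -> check employee apps (S2030)
            return "employee" if employee_app_used else "freelancer"
        # S2020 False -> check university student apps (S2050)
        return "university student" if student_app_used else "freelancer"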
Referring to FIG. 21, the processor 180 may determine a preferred brand of the user through steps S2110 to S2130.
In the step S2110, the processor 180 may retrieve payment-related tag data from source data of a message application or a payment-related application.
A message application A21-1 may include not just an SMS message transceiving application, which is a default application installed in the intelligent device 100, but also a messenger application through which payment-related messages are transceived.
A payment-related application A21-2 may be an application for transceiving wireless signals to make payment through the intelligent device 100, or an application for providing a payment service in association with a specific website or a specific application. However, aspects of the present disclosure are not limited thereto, and the payment-related application A21-2 may include any other application that is directly involved in payment through the intelligent device 100 and thus stores a corresponding payment transaction therein.
In the step S2120, the processor 180 may retrieve a brand according to a mart type from the retrieved tag data.
Specifically, the processor 180 may retrieve a brand according to a mart type from the payment transaction generated through the message application A21-1 or the payment-related application A21-2. The brand according to a mart type may be preset. The mart type may be classified as a supermarket, a department store, or a convenience store.
Since the payment transaction basically includes a payment place, the processor 180 may retrieve a brand according to the mart type in the payment place. Specifically, the mart type may be used as a filter to limit the search keywords. When the word "Supermarket" is detected in the payment transaction, the processor 180 may retrieve a supermarket brand in the payment transaction. When no word (keyword) corresponding to a mart type is detected, the processor 180 may perform retrieval using the brands according to every mart type (the entire keyword set).
In the step S2130, the processor 180 may determine a brand having been retrieved a predetermined number of times or more among the retrieved brands as a preferred brand. The predetermined number of times may be set to a specific value by taking into consideration precision of a customized service.
The preferred brand may be a single brand or multiple brands. For example, the processor 180 may determine only the one brand having been retrieved the greatest number of times as a preferred brand. In another example, the processor 180 may determine all brands having been retrieved the predetermined number of times or more as preferred brands.
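Counting brand occurrences in the payment places and applying the threshold may look like the following sketch; the brand list and the threshold are hypothetical placeholders, not values from the disclosure.

    from collections import Counter

    MART_BRANDS = ("BrandA", "BrandB", "BrandC")  # hypothetical preset list

    def preferred_brands(payment_places, min_count=3):
        # Count how often each preset brand appears in the payment
        # places of the retrieved payment transactions, then keep the
        # brands retrieved at least min_count times.
        counts = Counter(brand for place in payment_places
                         for brand in MART_BRANDS if brand in place)
        return [brand for brand, n in counts.items() if n >= min_count]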
As such, with a mart brand frequently used by a user being reflected, user profile data may be utilized to provide hot deal promotion information or operating time information of the corresponding mart.
Referring to FIG. 22, the processor 180 may determine the gender of the user through steps S2210 to S2230.
In the step S2210, the processor 180 may extract voice data of the user from source data of a voice assistant application, and acquire an analytic result by inputting the extracted voice data into a pre-trained voice analysis model.
The voice assistant application A22-1 may be an application that receives a voice of the user, recognizes the voice as a text, and performs an operation in accordance with the text. However, aspects of the present disclosure are not limited thereto, and the voice assistant application may include any application that utilizes a Speech To Text (STT) function.
The processor 180 may extract the voice data of the user from the source data of the voice assistant application. The voice data may be data that is stored to trigger an operation of the voice assistant application A22-1. Extracting may refer to a process of converting voice data stored according to setting of the corresponding application into a format which enables inputting of the voice data into the voice analysis model.
The processor 180 may acquire an analytic result by inputting the extracted voice data into the voice analysis model M22.
The voice analysis model M22 may be generated in accordance with steps (1) to (3).
Specifically, the voice analysis model M22 may be a model that is generated according to steps including: (1) collecting open data for analyzing voice (open data collection); (2) performing pre-processing to analyze the collected open data (Pre-processing); and (3) performing learning using the pre-processed data. However, this is merely an example, and the voice analysis model may be generated through a different machine learning method.
The analytic result may be output in the form of information indicating whether the corresponding voice data is male or female or in the form of a percentage (%) indicating whether the corresponding voice data is closer to male or female.
In another example, the analytic result may be output in the form of a weight that is applied to a classifier pre-trained to classify gender of the user. In this case, the analytic result may be applied as a first weight W1 to the classifier.
In the step S2220, the processor 180 may retrieve a gender based honorific-related keyword from the source data of the contact list application. The contact list application A22-2 may be a contact list application that is a default application installed in the intelligent device 100. However, aspects of the present disclosure are not limited thereto, and the contact list application A22-2 may include a messenger application which is linked to the contact list application or which transceives messages.
Examples of the gender based honorific-related keywords may be elder sister, wife of brother, etc., for men, and brother, husband of elder sister, etc., for women. The gender based honorific-related keyword may be preset.
A retrieval result regarding the gender based honorific-related keyword may be represented as the number of retrieved gender based honorific-related keywords. In this case, the retrieval result may be applied as a weight to a classifier pre-trained to classify the gender of the user; specifically, it may be applied as a second weight W2 to the classifier.
In the step S2230, the processor 180 may determine the gender of the user based on the analytic result and the retrieval result regarding the gender based honorific-related keyword.
For example, when the analytic result shows that the voice data is more likely to be female and keywords “elder brother” and “elder sister” are retrieved in the retrieval result regarding the gender based honorific-related keyword, the processor 180 may determine that the user is female.
In another example, when the analytic result shows that the voice data is more likely to be male and a keyword “elder sister” is retrieved in the retrieval result regarding the gender based honorific-related keyword, the processor 180 may determine that the user is male.
In yet another example, the processor 180 may apply the analytic result as the first weight W1 and the retrieval result regarding the gender based honorific-related keyword as the second weight W2 to the classifier. The processor 180 may determine gender of the user using an output that is obtained by inputting the voice data into the classifier.
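One way to combine the two signals as weights, consistent with the description of W1 and W2 but with an assumed linear form and assumed weight values, is sketched below.

    def classify_gender(voice_female_prob, female_indicating_hits,
                        male_indicating_hits, w1=1.0, w2=0.1):
        # voice_female_prob: voice analysis output in [0, 1] (first
        # weight W1); the keyword counts come from the gender based
        # honorific retrieval (second weight W2). The linear scoring
        # rule and the default weights are illustrative assumptions.
        score = w1 * (voice_female_prob - 0.5)
        score += w2 * (female_indicating_hits - male_indicating_hits)
        return "female" if score > 0 else "male"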
As such, with the gender of the user being reflected, the user profile data may be utilized to provide a men-only or women-only product or service.
The processor 180 may generate the user profile data by aggregating at least one individual characteristic determined according to the processes described above with reference to FIGS. 13 to 22.
Referring to FIG. 23, the processor 180 may control access to the user profile data through steps S2310 and S2320.
In the step S2310, the processor 180 may determine the individual characteristic-related application among applications installed in the intelligent device 100.
Specifically, the individual characteristics according to the user profile data may be represented as having no child, having a pet, being married, being female, and being an employee. The processor 180 may determine the applications App C and App D related to the individual characteristics (pet, female) among the applications A23 installed in the intelligent device 100.
In the step S2320, the processor 180 may allow the individual characteristic-related application to access the user profile data.
Specifically, the processor 180 may allow the applications App C and App D related to the individual characteristic to access the user profile data. In this case, allowing may refer to setting an authority to open the user profile data.
The access C22 to the user profile data may be allowed only through an Application Programming Interface (API). This is to prevent the user profile data from being exposed or revealed to the outside since the user profile data is privacy data. Accordingly, the user profile data can be utilized only within the intelligent device 100 to provide a customized service for the user.
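A minimal sketch of such API-only access is given below; the class and method names are assumptions, and a production implementation would rely on the platform's permission mechanisms rather than a Python class.

    class UserProfileStore:
        def __init__(self, profile):
            self.__profile = dict(profile)  # kept private, on-device only
            self.__allowed = {}             # app id -> allowed characteristics

        def grant(self, app_id, characteristics):
            # S2320: allow a characteristic-related application.
            self.__allowed[app_id] = set(characteristics)

        def query(self, app_id, characteristic):
            # The only read path (the API); unrelated apps are refused.
            if characteristic not in self.__allowed.get(app_id, set()):
                raise PermissionError("application not allowed")
            return self.__profile[characteristic]

    store = UserProfileStore({"has_pet": True, "occupation": "employee"})
    store.grant("AppC", {"has_pet", "occupation"})
    print(store.query("AppC", "has_pet"))  # True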
The application App C allowed to access the user profile data may be a shopping application. Using information indicating having a pet and being an employee among the user's individual characteristics, the corresponding application App C may operate to frequently show information regarding a pet-related product, office equipment, a suit, etc.
General device to which the present disclosure can be applied.
Referring to the accompanying drawings, the above-described embodiments may be applied to a terminal device X100 operating in association with a server X200.
In addition, the terminal device X100 and a server X200 may be configured such that the features described in the above-described various embodiments of the present disclosure can be applied independently or two or more of the embodiments can be applied at the same time, and a redundant description is herein omitted for clarity.
Embodiments of an intelligent device according to the present disclosure are as below.
According to an embodiment of the present disclosure, there is provided a method for controlling an intelligent device that generates user profile data to provide a customized service, the method including: collecting source data related to an individual characteristic of a user; determining at least one of the individual characteristic by analyzing the source data; and generating the user profile data by aggregating the individual characteristic, wherein the source data is data related to at least one of information on an application installed in the intelligent device and operation record of the application, and wherein the individual characteristic is a characteristic related to at least one service among multiple services provided through applications installed in the intelligent device.
Regarding Embodiment 1, the individual characteristic may be related to at least one of gender, whether being married, whether having a child, whether having a pet, a means of transportation, an occupation, or a preferred brand.
Regarding Embodiment 2, the collecting of the source data may include: collecting information on at least one application installed in the intelligent device and log information related to operation of the at least one application; and extracting tag data related to the individual characteristic from information on the at least one application and the log information, and storing the extracted tag data as the source data.
Regarding Embodiment 3, when the individual characteristic is related to whether the user has a child, the determining of the individual characteristic may include: retrieving a keyword related to whether having a child from the source data of at least one application of a message application or a contact list application, and matching the retrieved keyword with a keyword set preset regarding whether having a child; analyzing an operating time of a kid-related application from the source data; and determining whether the user has a child, using a matching result and an analytic result of the analyzed operating time.
Regarding Embodiment 3, when the individual characteristic is related to whether the user is married, the determining of the individual characteristic may include: retrieving a marriage-related keyword from the source data of the contact list application; and determining whether the user is married, based on whether the user has a child and whether there is any retrieved keyword.
Regarding Embodiment 3, when the individual characteristic is related to whether the user has a pet, the determining of the individual characteristic may include: retrieving tag data of a pet-related image from source data of a media-related application; based on at least one information of a photographing date, a photographing place, or a photographing device in the tag data, determining whether the pet-related image is photographed at home of the user; and, based on a number of pet-related images photographed at the home of the user, determining whether the user has a pet.
Regarding Embodiment 3, when the individual characteristic is related to a means of transportation of the user, the determining of the individual characteristic may include: determining whether the user has a car by retrieving tag data on vehicle audio connection from source data of a Bluetooth connection application; acquiring a walking duration of the user in a predetermined time period from source data of a Global Positioning System (GPS) application; and, based on whether the user has a car and the walking duration of the user, determining whether the means of transportation of the user is a car or a public transportation vehicle.
Regarding Embodiment 3, when the individual characteristic is related to an occupation of the user, the determining of the individual characteristic may include: retrieving tag data related to deposit of salary from source data of a message application; retrieving installation and usage record of an employee or university student-related application from the source data; and, based on whether there is any message related to the deposit of the salary and the installation and usage record of the application, determining an occupation of the user.
Regarding Embodiment 3, when the individual characteristic is related to a preferred brand of the user, the determining of the individual characteristic may include: retrieving tag data related to payment from source data of a message application or a payment-related application; retrieving a brand according to a mart type from the retrieved tag data; and determining a brand having been retrieved a predetermined number of times among retrieved brands as a preferred brand.
Regarding Embodiment 3, when the individual characteristic is related to gender of the user, the determining of the individual characteristic may include: extracting voice data of the user from source data of a voice assistant application, and acquiring an analytic result by inputting the extracted voice data into a pre-trained voice analysis model; retrieving a gender based honorific-related keyword from source data of a contact list application; and, based on the analytic result and a retrieval result regarding the gender based honorific-related keyword, determining the gender of the user.
Regarding Embodiment 3, the method may further include: determining an application related to the individual characteristic among applications installed in the intelligent device; and allowing the application related to the individual characteristic to access the user profile data.
Regarding Embodiment 11, the access to the user profile data may be allowed only through an Application Programming Interface (API).
Regarding Embodiment 3, the collecting of the source data may include: accessing a 5G wireless communication system; receiving information on an Internet of Things (IoT) device used by the user and log information related to operation of the IoT device; and extracting tag data related to the individual characteristic from the information on the IoT device and the log information, and storing the extracted tag data as the source data.
Regarding Embodiment 13, the 5G communication system may support massive Machine Type Communication (mMTC) or Narrowband Internet of Things (NB-IoT), and the information on the IoT device and the log information may be received through an MTC Physical Downlink Shared Channel (MPDSCH) or a Narrowband Physical Downlink Shared Channel (NPDSCH).
Regarding Embodiment 14, the IoT device may be at least one of an autonomous vehicle, a wearable device, a refrigerator, a washing machine, a drone, or a smart TV.
According to another embodiment of the present disclosure, there is provided an intelligent device for providing a customized service, the device including: a communication module; a memory; a display; and a processor configured to control the communication module, the memory, and the display, wherein the processor is configured to: collect source data related to an individual characteristic of a user; determine at least one of the individual characteristic by analyzing the source data; and generate the user profile data by aggregating the individual characteristic, wherein the source data is data related to at least one of information on an application installed in the intelligent device and operation record of the application, and wherein the individual characteristic is a characteristic related to at least one service among multiple services provided through applications installed in the intelligent device.
Regarding Embodiment 16, the individual characteristic may be related to at least one of gender, whether being married, whether having a child, whether having a pet, a means of transportation, an occupation, or a preferred brand.
Regarding Embodiment 17, the processor may be configured to: collect information on at least one application installed in the intelligent device and log information related to operation of the at least one application; and extract tag data related to the individual characteristic from information on the at least one application and the log information, and store the extracted tag data as the source data.
Regarding Embodiment 18, when the individual characteristic is related to whether the user has a child, the processor may be configured to: retrieve a keyword related to whether having a child from the source data of at least one application of a message application or a contact list application, and match the retrieved keyword with a keyword set preset regarding whether having a child; analyze an operating time of a kid-related application from the source data; and determine whether the user has a child, using a matching result and an analytic result of the analyzed operating time.
Regarding Embodiment 18, when the individual characteristic is related to whether the user is married, the processor may be configured to: retrieve a marriage-related keyword from the source data of the contact list application; and determine whether the user is married, based on whether the user has a child and whether there is any retrieved keyword.
Regarding Embodiment 18, when the individual characteristic is related to whether the user has a pet, the processor is configured to: retrieve tag data of a pet-related image from source data of a media-related application; based on at least one information of a photographing date, a photographing place, or a photographing device in the tag data, determine whether the pet-related image is photographed at home of the user; and, based on a number of pet-related images photographed at the home of the user, determine whether the user has a pet.
Regarding Embodiment 18, when the individual characteristic is related to a means of transportation of the user, the processor may be configured to: determine whether the user has a car by retrieving tag data on vehicle audio connection from source data of a Bluetooth connection application; acquire a walking duration of the user in a predetermined time period from source data of a Global Positioning System (GPS) application; and, based on whether the user has a car and the walking duration of the user, determine whether the means of transportation of the user is a car or a public transportation vehicle.
Regarding Embodiment 18, when the individual characteristic is related to an occupation of the user, the processor is configured to: retrieve tag data related to deposit of salary from source data of a message application; retrieve installation and usage record of an employee or university student-related application from the source data; and, based on whether there is any message related to the deposit of the salary and the installation and usage record of the application, determine an occupation of the user.
Regarding Embodiment 18, when the individual characteristic is related to a preferred brand of the user, the processor is configured to: retrieve tag data related to payment from source data of a message application or a payment-related application; retrieve a brand according to a mart type from the retrieved tag data; and determine a brand having been retrieved a predetermined number of times among retrieved brands as a preferred brand.
Regarding Embodiment 18, when the individual characteristic is related to gender of the user, the processor may be configured to: extract voice data of the user from source data of a voice assistant application, and acquire an analytic result by inputting the extracted voice data into a pre-trained voice analysis model; retrieve a gender based honorific-related keyword from source data of a contact list application; and, based on the analytic result and a retrieval result regarding the gender based honorific-related keyword, determine the gender of the user.
Regarding Embodiment 18, the processor may be configured to: determine an application related to the individual characteristic among applications installed in the intelligent device; and allow the application related to the individual characteristic to access the user profile data.
Regarding Embodiment 26, the access to the user profile data may be allowed only through an Application Programming Interface (API).
Regarding Embodiment 18, the processor may be configured to: access a 5G wireless communication system; receive information on an Internet of Things (IoT) device used by the user and log information related to operation of the IoT device; and extract tag data related to the individual characteristic from the information on the IoT device and the log information, and store the extracted tag data as the source data.
Regarding Embodiment 28, the 5G communication system may support massive Machine Type Communication (mMTC) or Narrowband Internet of Things (NB-IoT), and the information on the IoT device and the log information may be received through an MTC Physical Downlink Shared Channel (MPDSCH) or a Narrowband Physical Downlink Shared Channel (NPDSCH).
Regarding Embodiment 29, the IoT device may be at least one of an autonomous vehicle, a wearable device, a refrigerator, a washing machine, a drone, or a smart TV.
The aforementioned embodiments of the present disclosure have effects as below.
According to an embodiment of the present disclosure, user profile data may be generated to provide a customized service.
In addition, according to an embodiment of the present disclosure, a user's individual characteristic is determined to generate the user profile. The user's individual characteristic may be related to at least one of gender, whether being married, whether having a child, whether having a pet, a means of transportation, an occupation, or a preferred brand. Accordingly, the user's profile may be categorized.
In addition, according to an embodiment of the present disclosure, information on an application installed in the device or log information related to operation of the corresponding application are collected, and tag data related to an individual characteristic is extracted and stored as source data. The user profile data is generated from the source data. Accordingly, a more customized service may be provided based on usage record of the corresponding device.
In addition, according to an embodiment of the present disclosure, source data related to an individual characteristic is collected from various Internet of Things (IoT) devices through access to a wireless communication system. As the source data for determining an individual characteristic is collected not from a single device but from various devices, the user's individual characteristic may be determined more accurately.
In addition, according to an embodiment of the present disclosure, only an application related to an individual characteristic among applications installed in the device is allowed to access the user profile data. Accordingly, it is possible to prevent reckless use of the user profile data.
In addition, according to an embodiment of the present disclosure, the user profile data can be accessed only through an Application Programming Interface (API). Accordingly, the user profile data cannot be leaked to the outside, and thus, it is possible to prevent exposure of privacy information.
The above-described present disclosure can be implemented with computer-readable code in a computer-readable medium in which program has been recorded. The computer-readable medium may include all kinds of recording devices capable of storing data readable by a computer system. Examples of the computer-readable medium may include a hard disk drive (HDD), a solid state disk (SSD), a silicon disk drive (SDD), a ROM, a RAM, a CD-ROM, magnetic tapes, floppy disks, optical data storage devices, and the like and also include such a carrier-wave type implementation (for example, transmission over the Internet). Therefore, the above embodiments are to be construed in all aspects as illustrative and not restrictive. The scope of the disclosure should be determined by the appended claims and their legal equivalents, not by the above description, and all changes coming within the meaning and equivalency range of the appended claims are intended to be embraced therein.
The present disclosure is described mainly with respect to an example of application to a UE based on a 5G system, but the present disclosure may also be applied to various wireless communication systems and autonomous driving devices.