Ultra-Fast and Ultra-Sensitive Hydrogen Sensor
A KAIST team has developed an ultra-fast hydrogen sensor that can detect hydrogen gas at concentrations below 1% in less than seven seconds. The sensor can also detect hundreds of parts per million of hydrogen gas within 60 seconds at room temperature. A research group under Professor Il-Doo Kim in the Department of Materials Science and Engineering at KAIST, in collaboration with Professor Reginald M. Penner of the University of California, Irvine, developed the ultra-fast hydrogen gas detection system based on a palladium (Pd) nanowire array coated with a metal-organic framework (MOF). Hydrogen has been regarded as an eco-friendly next-generation energy source. However, it is a flammable gas that can explode with even a small spark. Because the lower explosion limit of hydrogen in air is 4 vol%, sensors must be able to detect the colorless and odorless molecule quickly, a point emphasized in recent guidelines issued by the U.S. Department of Energy. According to the guidelines, hydrogen sensors should detect 1 vol% of hydrogen in air in less than 60 seconds, with adequate response and recovery times. To overcome the limitations of Pd-based hydrogen sensors, the research team introduced a MOF layer on top of a Pd nanowire array. Lithographically patterned Pd nanowires were simply overcoated with a Zn-based zeolitic imidazolate framework (ZIF-8) layer composed of Zn ions and organic ligands. The ZIF-8 film is easily coated onto the Pd nanowires by dipping them for 2-6 hours in a methanol solution containing Zn(NO3)2·6H2O and 2-methylimidazole. < This cover image depicts lithographically patterned Pd nanowires overcoated with a Zn-based zeolitic imidazolate framework (ZIF-8) layer.
> The as-synthesized ZIF-8 is a highly porous material with numerous micropores, featuring apertures of 0.34 nm and cavities of 1.16 nm; hydrogen gas, with a kinetic diameter of 0.289 nm, can easily penetrate the ZIF-8 membrane, while larger molecules (> 0.34 nm) are effectively screened out by the MOF filter. Thus, the ZIF-8 filter on the Pd nanowires allows the predominant penetration of hydrogen molecules, giving the Pd-based H2 sensors a 20-fold faster response and recovery speed compared with pristine Pd nanowires at room temperature. Professor Kim expects that the ultra-fast hydrogen sensor will be useful for preventing explosion accidents caused by leaking hydrogen gas. In addition, he expects that other harmful gases in the air can be accurately detected through effective nano-filtration using a variety of MOF layers. This study was carried out by Ph.D. candidate Won-Tae Koo (first author), Professor Kim (co-corresponding author), and Professor Penner (co-corresponding author). The study was published in the online edition of ACS Nano and featured as the cover image of the September issue. < Figure 1. Representative image for this paper, published in ACS Nano on August 18. > < Figure 2. Images of Pd nanowire array-based hydrogen sensors, a scanning electron microscopy image of a Pd nanowire covered by a metal-organic framework layer, and the hydrogen sensing properties of the sensors. > < Figure 3. Schematic illustration of a metal-organic framework (MOF). The MOF, consisting of metal ions and organic ligands, is a highly porous material with an ultrahigh surface area. Various MOF structures can be synthesized depending on the kinds of metal ions and organic ligands. > < (From left) Professor Kim, Ph.D. candidate Koo, and Professor Penner >
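The size-exclusion arithmetic behind the ZIF-8 filter can be illustrated with a short script. The kinetic diameters below are standard literature values; the pass/block rule is a simplified hard-sphere assumption for illustration, not the model used in the paper (real frameworks are somewhat flexible):

```python
# Simplified hard-sphere sieving model for a ZIF-8 coating:
# a molecule passes only if its kinetic diameter is smaller
# than the ZIF-8 aperture of 0.34 nm.
ZIF8_APERTURE_NM = 0.34

# Kinetic diameters (nm) from standard gas tables.
KINETIC_DIAMETER_NM = {
    "H2": 0.289,
    "CO2": 0.33,
    "O2": 0.346,
    "N2": 0.364,
}

def passes_filter(gas: str) -> bool:
    """Return True if the gas can enter the ZIF-8 micropores."""
    return KINETIC_DIAMETER_NM[gas] < ZIF8_APERTURE_NM

if __name__ == "__main__":
    for gas in KINETIC_DIAMETER_NM:
        state = "passes" if passes_filter(gas) else "screened out"
        print(f"{gas}: {state}")
```

Under this simple rule, H2 (0.289 nm) passes while O2 and N2 (above 0.34 nm) are screened out, matching the selectivity described in the article.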
Semiconductor Patterning of Seven Nanometers Techn..
A research team led by Professor Sang Ouk Kim in the Department of Materials Science and Engineering at KAIST has developed semiconductor manufacturing technology using a camera flash. The technique can manufacture ultra-fine patterns over a large area by irradiating a single flash, achieving seven-nanometer patterning for semiconductors, and could facilitate the manufacture of highly efficient, integrated semiconductor devices in the future. Artificial intelligence (AI), the Internet of Things (IoT), and big data, the major keys to the Fourth Industrial Revolution, require high-capacity, high-performance semiconductor devices, and producing such next-generation, highly integrated devices demands new lithography technology. Although related industries have used conventional photolithography for small patterns, this technique has limitations in forming patterns below 10 nm. Molecular assembly patterning technology using polymers has been in the spotlight as a next-generation replacement for photolithography because it is inexpensive and can easily form sub-10 nm patterns. However, since it generally requires long high-temperature heat treatment or toxic solvent vapor treatment, mass production is difficult and its commercialization has been limited. The research team introduced a camera flash, which instantly emits strong light, to solve these issues in polymer molecular assembly patterning. A single flash lasting about 15 milliseconds (1 millisecond = 1/1,000 of a second) can generate a temperature of several hundred degrees Celsius within several tens of milliseconds, enabling seven-nanometer semiconductor patterning. The team demonstrated that applying this technology to polymer molecular assembly allows a single flash of light to form molecular assembly patterns.
The team also verified its compatibility with flexible polymer substrates, which cannot be processed at high temperatures. Through these findings, the technology can be applied to the fabrication of next-generation flexible semiconductors. The researchers said that introducing the camera flash photo-thermal process into molecular assembly technology makes the process highly efficient and can accelerate the realization of molecular assembly semiconductor technology. Professor Kim, who led the research, said, “Despite its potential, molecular assembly semiconductor technology has remained a big challenge in improving process efficiency. This technology will be a breakthrough for the practical use of molecular assembly-based semiconductors.” The paper was published in the international journal Advanced Materials on August 21, with researcher Hyeong Min Jin and Ph.D. candidate Dae Yong Park as first authors. The research, sponsored by the Ministry of Science and ICT, was co-led by Professor Keon Jae Lee in the Department of Materials Science and Engineering at KAIST and Professor Kwang Ho Kim in the School of Materials Science and Engineering at Pusan National University. < Figure 1. Formation of semiconductor patterns using a camera flash > < Schematic diagram of molecular assembly patterning using a camera flash > < Self-assembled patterns >
A Novel and Practical Fab-route for Superomniphobi..
(clockwise from left: Jaeho Choi, Hee Tak Kim, Shin-Hyun Kim) A joint research team led by Professors Hee Tak Kim and Shin-Hyun Kim in the Department of Chemical and Biomolecular Engineering at KAIST developed a fabrication technology that can inexpensively produce surfaces capable of repelling liquids, including both water and oil. The team used the photofluidization of azobenzene-containing polymers to generate a superomniphobic surface, which could be applied to developing stain-free fabrics, non-biofouling medical tubing, and corrosion-free surfaces. Mushroom-shaped surface textures, also called doubly re-entrant structures, are known to be the most effective surface structures for resisting liquid invasion, and thereby exhibit superior superomniphobic properties. However, the existing procedures for their fabrication are highly delicate, time-consuming, and costly. Moreover, the materials required for fabrication have been restricted to inflexible and expensive silicon wafers, which limits the practical use of such surfaces. To overcome these limitations, the research team took a different approach to fabricating the re-entrant structures, called localized photofluidization, exploiting a peculiar optical phenomenon of azobenzene-containing polymers (referred to as azopolymers): under light irradiation, an azopolymer becomes fluidized, and the fluidization takes place locally within a thin surface layer of the material. With this novel approach, the team induced localized photofluidization in the top surface layer of cylindrical azopolymer posts, successfully reconfiguring them into a doubly re-entrant geometry as the fluidized thin top layer flowed down. The resulting structure exhibits superior superomniphobicity even against liquids that would immediately infiltrate ordinary surfaces.
Moreover, the superomniphobic property can be maintained on a curved target surface because the surface material is polymer-based. Furthermore, the fabrication procedure is highly reproducible and scalable, providing a practical route to creating robust omniphobic surfaces. Professor Hee Tak Kim said, “Not only does the novel photofluidization technology in this study produce superior superomniphobic surfaces, but it also possesses many practical advantages in terms of fabrication procedures and material flexibility; therefore, it could greatly contribute to real uses in diverse applications.” Professor Shin-Hyun Kim added, “The doubly re-entrant geometry designed in this study was inspired by the skin structure of springtails, insects dwelling in soil that breathe through their skin. As I carried out this research, I once again realized that humans can learn from nature to create new engineering designs.” The paper (with Jaeho Choi as first author) was published in ACS Nano, an international journal for nanotechnology, in August. < Schematic diagram of mushroom-shaped structure fabrication > < SEM image of mushroom-shaped structure > < Images of the superomniphobic property with different types of liquid >
The Medici Effect: Highly Flexible, Wearable Displ..
< Ph.D. candidate Seungyeop Choi > How do you feel when technology you saw in a movie is made into reality? Collaboration between the electrical engineering and textile industries has made TV or smartphone screens displayed on clothing a reality. A research team led by Professor Kyung Cheol Choi at the School of Electrical Engineering presented wearable displays for various applications, including fashion, IT, and healthcare. By integrating OLEDs (organic light-emitting diodes) into fabrics, the team developed the world's most flexible and reliable technology for wearable displays. Recently, information displays have become increasingly important as the external interface of next-generation smart devices. As world trends focus on the Internet of Things (IoT) and wearable technology, the team drew considerable attention by making great progress toward commercializing clothing-shaped 'wearable displays'. Research on realizing displays on clothing gained considerable attention from academia as well as industry when luminescence formed in fabrics was first reported in 2011; however, problems with surface roughness and flexibility prevented commercialization, and clothing-shaped wearable displays were therefore thought to be out of reach. The KAIST team, however, recently succeeded in developing the world's most efficient light-emitting clothes suitable for commercialization. The research team used two different approaches, fabric-type and fiber-type, to realize clothing-shaped wearable displays. In 2015, the team successfully laminated a thin planarization sheet thermally onto fabric to form a surface compatible with OLEDs approximately 200 nanometers thick. The team also reported research outcomes on enhancing the operational reliability of fiber-based OLEDs.
In 2016, the team introduced a dip-coating method capable of uniformly depositing layers to develop polymer light-emitting diodes that show high luminance even on thin fabric. Building on this work from 2015 and 2016, Ph.D. candidate Seungyeop Choi led the research team in realizing fabric-based OLEDs with high luminance and efficiency that maintain the flexibility of the fabric. The long-term reliability of this wearable device, which has the world's best electrical and optical characteristics, was verified through the team's self-developed organic and inorganic encapsulation technology. According to the team, the wearable device keeps the OLEDs operating even at a bending radius of 2 mm. According to Choi, “Having wavy structures and empty spaces, fiber plays a significant role in lowering the mechanical stress on the OLEDs.” “A screen displayed on our daily clothing is no longer a future technology,” said Professor Choi. “Light-emitting clothes will have considerable influence on not only the e-textile industry but also the automobile and healthcare industries.” The research team added, “It means a lot to realize clothing-shaped OLEDs that have the world's best luminance and efficiency. This is the most flexible fabric-based light-emitting device among those reported. Moreover, since this research carried out an in-depth analysis of the mechanical characteristics of the clothing-shaped light-emitting device, the results will become a guideline for developing the fabric-based electronics industry.” This research was funded by the Ministry of Trade, Industry and Energy and conducted in collaboration with KOLON Glotech, Inc. The results were published in Scientific Reports in July.
< OLEDs operating in fabrics > < Current-voltage-luminance and efficiency of the highly flexible, fabric-based OLEDs; image of OLEDs after repetitive bending tests; verification of flexibility through mechanical simulation >
Discovery of an Optimal Drug Combination: Overcomi..
A KAIST research team presented a novel method for improving medication treatment for liver cancer using systems biology, which combines information technology and the life sciences. Professor Kwang-Hyun Cho in the Department of Bio and Brain Engineering at KAIST conducted the research in collaboration with Professor Jung-Hwan Yoon in the Department of Internal Medicine at Seoul National University Hospital. The research was published in Hepatology in September 2017 (available online from August 24, 2017). Liver cancer is the fifth most common cancer in men and the seventh in women worldwide, and the second leading cause of cancer deaths. In particular, Korea has 28.4 deaths from liver cancer per 100,000 persons, the highest death rate among OECD countries and twice that of Japan. Each year in Korea, about 16,000 people develop liver cancer, yet the five-year survival rate stands below 12%. According to the National Cancer Information Center, lung cancer (17,399 deaths) accounted for the largest share of cancer-related deaths, followed by liver cancer (11,311), based on last year's data. Liver cancer is known to carry the highest social cost of any cancer, and it causes the highest fatality in younger age groups (40s-50s). It is therefore necessary to develop a new treatment that mitigates side effects while raising the survival rate. Liver cancer can be treated by surgery, embolization, and medication; however, the options become limited for advanced cancer, a stage at which surgical methods can no longer be used. Among anticancer medications, Sorafenib, a drug known to enhance the survival rate of cancer patients, is the only drug approved as a targeted anticancer therapy for patients with advanced liver cancer. Its sales reach more than ten billion KRW annually in Korea, but it is effective in only about 20% of treated patients.
In addition, acquired resistance to Sorafenib is emerging, and both its action mechanism and its resistance mechanism are only vaguely understood. Although Sorafenib extends the survival of terminal cancer patients by less than three months on average, it is widely used because drugs developed by global pharmaceutical companies have failed to outperform it. Professor Cho's research team analyzed the changes in gene expression of cell lines in response to Sorafenib in order to identify the drug's effect and resistance mechanism. As a result, the team discovered the resistance mechanism of Sorafenib using systems biology analysis. By combining computer simulations and biological experiments, the team revealed that protein disulfide isomerase (PDI) plays a crucial role in the resistance mechanism and that the drug's efficacy can be improved significantly by blocking PDI. In mouse experiments, the team discovered a synergistic effect of PDI inhibition with Sorafenib in reducing liver cancer cells, known as hepatocellular carcinoma. PDI expression was also elevated in tissue from patients who had developed resistance to Sorafenib. From these findings, the team could identify the possibility of clinical applications, and it confirmed the findings in clinical data through a retrospective cohort study. “Molecules that play an important role in cell lines are mostly under complex regulation. For this reason, existing biological research has a fundamental limitation in discovering the underlying principles,” Professor Cho said. “This research is a representative case of overcoming this limitation of traditional life science research by using a systems biology approach combining IT and the life sciences.
It suggests the possibility of developing new methods for overcoming drug resistance through network analysis of a targeted drug's action mechanism in cancer.” The research was supported by the National Research Foundation of Korea (NRF) and funded by the Ministry of Science and ICT. < Figure 1. Simulation results from cellular experiments using hepatocellular carcinoma > < Figure 2. Network analysis and computer simulation using the endoplasmic reticulum (ER) stress network > < Figure 3. ER stress network model >
Solutal Marangoni Flows of Miscible Liquid Drive T..
< Professor Hyoungsoo Kim, Department of Mechanical Engineering, KAIST > A research team led by Hyoungsoo Kim, a professor of mechanical engineering at KAIST, succeeded in quantifying the Marangoni effect, a phenomenon that occurs at the interface between alcohol and water. The finding is expected to be a valuable resource for effectively removing impurities from a fluid surface without contamination and for developing materials that can replace surfactants. The research, co-conducted with a team led by Professor Howard A. Stone at Princeton University, was published online in Nature Physics on July 31. The Marangoni effect, also seen in the 'tears of wine' phenomenon, is generated when two fluids with different surface tensions meet, giving rise to finite mixing and spreading times and length scales. People typically assume that fully miscible liquids mix together immediately; however, according to this paper, that is not always true. The surface tension of a typical alcohol is about one-third that of water, and this difference generates a Marangoni-driven convection flow at the interface of the two liquids; a certain amount of time is required for them to mix. The phenomenon has been discussed many times since it was first described in the 19th century, yet quantifying and explaining it has remained difficult. Professor Kim, considering the mixing and spreading mechanism, used various flow visualization techniques and high-speed imaging equipment in his experiments. Through these flow visualization methods, the team succeeded in quantifying and explaining the complex physicochemical phenomenon generated between water and alcohol. Moreover, they developed a theoretical model to predict the physicochemical hydrodynamics: the model can predict the speed of the Marangoni-driven convection flow, the spreading area of a drop of alcohol, and the time required for the flow field to develop.
Hence, the model can guide the choice of material (e.g., the type of alcohol) and the volume of the liquid drop for a specific target situation. Moreover, the research team believes that the interfacial flow can drive bulk flows, making it a source technology for effectively delivering drugs and removing impurities from the surface of a substance without causing secondary contamination. Above all, the results show the possibility of replacing surfactants with alcohol as a drug-delivery material. In drug delivery, some drugs are encapsulated with a surfactant in order to be transported effectively in vivo; however, the surfactant accumulates in the body, which can cause various side effects, such as heart disease. Using new materials like alcohol for drug delivery could therefore help prevent the side effects caused by surfactants. “Surfactants are used for delivering drugs, but they are difficult to expel from the body, which can cause various side effects, such as heart disease in asthmatic patients,” said Professor Kim. “I hope that using new materials, like alcohol, will free people from these side effects.” (Marangoni-driven convection flow generated at the interface between water and alcohol, and the flow visualization results) < A drop of alcohol on a water surface > < Comparison of mixing structures on the surface > < Marangoni mixing flow under the free surface >
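The driving mechanism described above can be summarized with a standard order-of-magnitude scaling argument (a textbook estimate, not the team's specific model): the surface-tension difference between alcohol and water, acting over the spreading length, exerts a Marangoni stress at the interface that is balanced by viscous shear in the liquid below, which sets a characteristic spreading speed.

```latex
% \Delta\sigma : surface-tension difference between water and alcohol
% L : spreading length, h : depth of the sheared layer, \mu : viscosity
\underbrace{\frac{\Delta\sigma}{L}}_{\text{Marangoni stress}}
\;\sim\;
\underbrace{\mu\,\frac{u}{h}}_{\text{viscous shear}}
\qquad\Longrightarrow\qquad
u \;\sim\; \frac{\Delta\sigma\, h}{\mu\, L}.
```

This scaling makes the article's qualitative point concrete: the larger the surface-tension mismatch, the faster the interfacial flow, while higher viscosity slows it down.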
Multi-Device Mobile Platform for App Functionality..
Case 1. Mr. Kim, an employee, logged on to his SNS account using a tablet PC at the airport while traveling overseas. However, a malicious virus was installed on the tablet PC, and some photos posted on his SNS were deleted by someone else. Case 2. Mr. and Mrs. Brown are busy contacting credit card and game companies because their son, who likes games, purchased a million dollars' worth of game items using his smartphone. Case 3. Mr. Park, who enjoys games, bought a sensor-based racing game through his tablet PC. However, he could not enjoy the racing game on his tablet because tilting the device for game control was uncomfortable. These cases illustrate some of the various problems that can arise in a society filled with diverse smart devices, including smartphones. Recently, new technology has been developed to easily solve such problems. Professor Insik Shin from the School of Computing has developed 'Mobile Plus,' a mobile platform that can share the functionalities of applications between smart devices. This novel technology allows applications to share their functionalities without requiring any modifications. Smartphone users often use Facebook to log in to another SNS account such as Instagram, or use a gallery app to post photos on their SNS. These examples are possible because the applications share their login and photo management functionalities. Functionality sharing enables users to utilize smartphones in various convenient ways and allows app developers to create applications more easily. However, current mobile platforms such as Android and iOS only support functionality sharing within a single device. Sharing functionalities across devices is burdensome for both developers and users, because developers would need to create more complex applications and users would need to install the applications on each device.
To address this problem, Professor Shin's research team developed platform technology to support functionality sharing between devices. The main concept is using virtualization to give the illusion that applications running on separate devices are on a single device. They achieved this virtualization by extending an RPC (Remote Procedure Call) scheme to multi-device environments. The virtualization technology enables existing applications to share their functionalities without any modifications, regardless of the type of application, so users can use them without additional purchases or updates. Mobile Plus can support hardware functionalities like cameras, microphones, and GPS, as well as application functionalities such as logins, payments, and photo sharing. Its greatest advantage is its wide range of possible applications. Professor Shin said, "Mobile Plus is expected to have great synergy with smart home and smart car technologies. It can provide novel user experiences (UXs) so that users can easily utilize various applications of smart home/vehicle infotainment systems by using a smartphone as their hub." This research was presented at ACM MobiSys, an international conference on mobile computing held in the United States on June 21. < Figure 1. Users can securely log on to SNS accounts by using their personal devices > < Figure 2. Parents can control their children's impulse shopping. > < Figure 3. Users can enjoy games more by using a smartphone as a controller >
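The RPC-style "single device illusion" can be sketched in a few lines: a local proxy object stands in for a service that actually lives on another device, so the calling app never notices the boundary. All class and method names below are hypothetical illustrations of the general technique, not the actual Mobile Plus API:

```python
# Minimal sketch of cross-device functionality sharing via an
# RPC-style proxy, in the spirit of Mobile Plus.

class RemoteDevice:
    """Stands in for another smart device reachable over the network."""
    def __init__(self, name, services):
        self.name = name
        self._services = services  # e.g. {"login": callable}

    def call(self, service, *args):
        # A real system would serialize the request and send it over
        # a socket; here we invoke the registered service directly.
        return self._services[service](*args)

class ServiceProxy:
    """Local stand-in that forwards calls to a remote device, so an
    app sees the remote functionality as if it were local."""
    def __init__(self, device, service):
        self._device = device
        self._service = service

    def __call__(self, *args):
        return self._device.call(self._service, *args)

# Usage: a tablet app borrows the phone's login functionality.
phone = RemoteDevice("phone", {"login": lambda user: f"token-for-{user}"})
login = ServiceProxy(phone, "login")   # looks like a local function
print(login("alice"))                  # -> token-for-alice
```

Because the proxy has the same call signature as the original function, the calling application needs no modification, which is the property the article highlights.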
Analysis of Gas Adsorption Properties for Amorphou..
Professor Jihan Kim from the Department of Chemical and Biomolecular Engineering at KAIST has developed a method to predict the gas adsorption properties of amorphous porous materials. Metal-organic frameworks (MOFs) have large surface areas and a high density of pores, making them appropriate for various energy- and environment-related applications. Although most MOFs are crystalline, these structures can deform during synthesis and/or industrial processes, leading to a loss of long-range order. Unfortunately, without structural information, existing computer simulation techniques cannot be used to model such materials. In this research, Professor Kim's team demonstrated that one can substitute the material properties of crystalline MOFs for those of structurally deformed MOFs to indirectly analyze and model the properties of the amorphous materials. First, the team ran simulations of methane gas adsorption for over 12,000 crystalline MOFs to obtain a large training data set, and created a resulting structure-property map. Upon mapping the experimental data of amorphous MOFs onto this structure-property map, the results showed that the gas adsorption properties of MOFs were consistent with one another regardless of crystallinity. Based on these findings, the team selected, from the 12,000 candidates, the crystalline MOFs whose gas adsorption properties were most similar to those of the collapsed structure. The team then verified that the adsorption properties of these similar MOFs can be successfully transferred to the deformed MOFs across different temperatures and even to different gas molecules (e.g., hydrogen), demonstrating the transferability of properties. These findings allow material property prediction in porous materials such as MOFs without structural information, and the techniques can be used to better predict and identify optimal materials for various applications, including carbon dioxide capture, gas storage, and separations.
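The matching step described above amounts to a nearest-neighbor search in property space: given measured gas-uptake values for a collapsed (amorphous) MOF, find the crystalline MOF in a simulated database whose adsorption behavior is closest. The sketch below illustrates this idea with made-up numbers; the real study used simulated isotherms for over 12,000 crystalline MOFs, and the names and values here are purely hypothetical:

```python
# Hypothetical database: uptake simulated at several pressures
# (arbitrary units) for a few crystalline MOFs.
crystalline_db = {
    "MOF-A": [1.2, 3.4, 5.1],
    "MOF-B": [0.4, 1.1, 2.0],
    "MOF-C": [2.5, 4.8, 7.9],
}

def distance(a, b):
    """Euclidean distance between two adsorption isotherms."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def closest_crystalline(amorphous_isotherm):
    """Return the crystalline MOF most similar in property space."""
    return min(crystalline_db,
               key=lambda name: distance(crystalline_db[name],
                                         amorphous_isotherm))

measured = [0.5, 1.0, 2.2]   # hypothetical amorphous-MOF measurement
print(closest_crystalline(measured))   # -> MOF-B
```

The study's key finding is that the properties of the matched crystalline MOF transfer to the deformed one, so the match can stand in for the missing structural model.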
This research was conducted in collaboration with Professor Dae-Woon Lim at Kyoto University, Professor Myunghyun Paik at Seoul National University, Professor Minyoung Yoon at Gachon University, and Aadesh Harale at Saudi Arabian Oil Company. The research was published online in the Proceedings of the National Academy of Sciences (PNAS) on July 10, with Ph.D. candidate WooSeok Jeong and Professor Dae-Woon Lim as co-first authors. This research was funded by the Saudi Aramco-KAIST CO2 Management Center. < Figure 1. Trends in the structure-property map and in collapsed structures > < Figure 2. Transferability between the experimental results of collapsed MOFs and the simulation results of crystalline MOFs >
Cooperative Tumor Cell Membrane-Targeted Photother..
〈 Prof. Ji-Ho Park 〉 A research team led by Professor Ji-Ho Park in the Department of Bio and Brain Engineering at KAIST developed a technology for the effective treatment of cancer by delivering synthetic receptors throughout tumor tissue. The study, led by Ph.D. candidate Heegon Kim, was published online in Nature Communications on June 19. Cancer targeted therapy generally refers to therapy targeting specific molecules involved in the growth and development of cancer. The targeted delivery of therapeutics using targeting agents such as antibodies or nanomaterials has improved the precision and safety of cancer therapy. However, the paucity and heterogeneity of identified molecular targets within tumors have resulted in poor and uneven distribution of targeted agents, compromising treatment outcomes. To solve this problem, the team constructed a cooperative targeting system in which synthetic and biological nanocomponents participate together in the tumor cell membrane-selective localization of synthetic receptors, amplifying the subsequent targeting of therapeutics. Here, the synthetic and biological nanocomponents are liposomes and extracellular vesicles, respectively. The synthetic receptors are first delivered selectively to tumor cell membranes in the perivascular region using liposomes. By hitchhiking on extracellular vesicles secreted by the cells, the synthetic receptors are then transferred to neighboring cells and spread further throughout the tumor tissue, where molecular targets are limited. Hitchhiking on extracellular vesicles for delivery of synthetic receptors is possible because extracellular vesicles, such as exosomes, mediate intercellular communication by transferring various biological components, such as lipids, cytosolic proteins, and RNA, through a membrane fusion process. They also play a supportive role in promoting tumor progression, in that tumor-derived extracellular vesicles deliver oncogenic signals to normal host cells.
The team showed that this tumor cell membrane-targeted delivery led to a uniform distribution of synthetic receptors throughout a tumor and subsequently enhanced the phototherapeutic efficacy of the targeted photosensitizer. Professor Park said, “The cooperative tumor targeting system is expected to be applied in treating various diseases that are hard to target.” The research was funded by the Basic Science Research Program through the National Research Foundation funded by the Ministry of Science, ICT & Future Planning, and by the National R&D Program for Cancer Control funded by the Ministry of Health and Welfare. < Ph.D. candidates Heegon Kim (left) and Chanhee Oh > Figure 1. A schematic of the cooperative tumor targeting system via delivery of synthetic receptors. Figure 2. A confocal microscopic image of a tumor section after cooperative targeting by synthetic receptor delivery. Green and magenta represent vessels and therapeutic agents inside a tumor, respectively.
Why Don’t My Document Photos Rotate Correctly?
〈 The team of Professor Lee and his Ph.D. student Jeungmin Oh developed a technique that can correct a phone’s orientation by tracking the phone's rotation sensor. 〉 John, an insurance planner, took several photos of a competitor’s new brochures. At a meeting, he opened a photo gallery to discuss the documents with his colleagues. He found, however, that the photos of the document had the wrong orientation; they had been rotated 90 degrees clockwise. He then rotated his phone 90 degrees counterclockwise, but the document photos rotated with it. After trying this several times, he realized that it was impossible to display the document photos correctly on his phone. Instead, he had to set his phone down on a table and move his chair to show the photos in the correct orientation. It was very frustrating for John and his colleagues, because the document photos had different patterns of orientation errors. Professor Uichin Lee and his team at KAIST have identified the key reasons for such orientation errors and proposed novel techniques to solve the problem efficiently. Interestingly, the cause is a software glitch in screen rotation-tracking algorithms, and all smartphones on the market suffer from this error. When taking a photo of a document, your smartphone generally becomes parallel to the flat surface, as shown in the figure above (right). Professor Lee said, “Your phone fails to track the orientation if you make any rotation changes at that moment.” This is because software engineers designed the rotation tracking software in conventional smartphones under the following assumption: people hold their phones vertically, in either portrait or landscape orientation. Orientation tracking can then be done by simply measuring the gravity direction using the phone's acceleration sensor (for example, checking whether gravity falls along the portrait or landscape direction).
Professor Lee’s team conducted a controlled experiment to discover how often orientation errors happen in document-capturing tasks. Surprisingly, their results showed that landscape document photos had an error rate of 93%. Smartphone camera apps display the current orientation using a camera-shaped icon, but users are unaware of this feature, nor do they notice its state when they take document photos. This is why we often encounter rotation errors in our daily lives with no idea of why they are occurring. The team developed a technique that can correct a phone’s orientation by tracking the rotation sensor in the phone. When people take document photos, their smartphones become parallel to the documents on a flat surface. This intention to photograph documents can be easily recognized because gravity falls onto the phone’s surface. The current orientation can then be tracked by monitoring the occurrence of significant rotations. In addition, the research team discovered that when taking a document photo, the user tends to tilt the phone just slightly towards themselves (the “micro-tilt phenomenon”). While the degree of tilting is very small, almost indistinguishable to the naked eye, these distinct behavioral cues are enough to train machine-learning models that can easily learn the patterns of gravity distribution across the phone. The team’s experimental results showed that their algorithms can track phone orientation in document-capturing tasks with 93% accuracy. Their approaches can be readily integrated into both Google Android and Apple iPhones. The key benefits of their proposals are that the correction software works only when the intent to photograph documents is detected, and that it can work seamlessly with existing orientation-tracking methods without conflict. The research team even suggested a novel user interface for photographing documents.
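The rotation-sensor idea described above can be sketched in a few lines (a simplified illustration, not the team’s actual algorithm: it assumes a stream of gyroscope z-axis rates is available once the flat, document-capturing posture has been detected, and the function name is hypothetical):

```python
import math

def track_flat_rotation(initial_orientation_deg, gyro_z_samples, dt):
    """Track orientation while the phone lies flat over a document.

    Gravity no longer disambiguates orientation in this posture, so
    instead we integrate the gyroscope's z-axis angular rate (rad/s,
    sampled every dt seconds) and snap the accumulated angle to the
    nearest 90-degree screen orientation.
    """
    angle = math.radians(initial_orientation_deg)
    for rate in gyro_z_samples:
        angle += rate * dt  # simple Euler integration of the rotation rate
    deg = math.degrees(angle) % 360
    return round(deg / 90) % 4 * 90  # snap to 0 / 90 / 180 / 270
```

In practice such integration would be fused with the micro-tilt cues mentioned above; this sketch only shows how significant rotations can keep the orientation current while gravity is uninformative.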
Just as with a photocopier, the capture interface overlays a document shape onto the viewfinder so that the user can easily double-check possible orientation errors. Professor Lee said, “Photographing documents is part of our daily activities, but orientation errors are so prevalent that many users have difficulty viewing their documents on their phones, without even knowing why such errors happen.” He added, “We can easily detect users’ intentions to photograph a document and automatically correct orientation changes. Our techniques not only eliminate the inconvenience of orientation errors, but also enable a range of novel applications specifically designed for document capturing.” This work, supported by the Korean Government (MSIP), was published online in the International Journal of Human-Computer Studies in March 2017. In addition, the team’s US patent application was granted in March 2017.
Face Recognition System “K-Eye” Presented by KAIST
Artificial intelligence (AI) is one of the key emerging technologies, and global IT companies are competitively launching their newest technologies as competition heats up more than ever. However, most AI technologies focus on software, and their operating speeds are low, making them a poor fit for mobile devices. Therefore, many big companies are investing in developing semiconductor chips that run AI programs with low power requirements but at high speeds. A research team led by Professor Hoi-Jun Yoo of the Department of Electrical Engineering has developed a semiconductor chip, CNNP (CNN Processor), that runs AI algorithms with ultra-low power, and K-Eye, a face recognition system using CNNP. The system was made in collaboration with a start-up company, UX Factory Co. The K-Eye series consists of two types: a wearable type and a dongle type. The wearable device can be used with a smartphone via Bluetooth, and it can operate for more than 24 hours on its internal battery. Users wearing K-Eye around their necks can conveniently check information about the people they meet through a smartphone or smart watch, which connects to K-Eye and gives access to a database. A smartphone with K-EyeQ, the dongle-type device, can recognize and share information about its users at any time. When it recognizes that an authorized user is looking at its screen, the smartphone turns on automatically without a passcode, fingerprint, or iris authentication. Since it can distinguish whether an input face comes from a saved photograph or a real person, the smartphone cannot be tricked by a photograph of the user. The K-Eye series carries other distinct features. It can first detect a face and then recognize it, and it can maintain “Always-on” status with a power consumption of less than 1 mW. To accomplish this, the research team proposed two key technologies: an image sensor with “Always-on” face detection and the CNNP face recognition chip.
The first key technology, the “Always-on” image sensor, can determine whether there is a face in its camera range. It captures frames and wakes the device only when a face is present, reducing standby power significantly. The face detection sensor combines analog and digital processing to reduce power consumption: the analog processor, integrated with the CMOS image sensor array, distinguishes the background from the areas likely to contain a face, and the digital processor then detects the face only in the selected areas. This makes frame capture, face detection processing, and memory usage all more efficient. The second key technology, CNNP, achieves remarkably low power consumption by optimizing a convolutional neural network (CNN) at the circuit, architecture, and algorithm levels. First, the on-chip memory integrated in CNNP is specially designed so that data can be read in a vertical as well as a horizontal direction. Second, the chip has immense computational power, with 1,024 multipliers and accumulators operating in parallel that can transfer intermediate results directly to one another without accessing external memory or an on-chip communication network. Third, convolution with a two-dimensional filter in the CNN algorithm is approximated by two sequential convolutions with one-dimensional filters, achieving higher speed and lower power consumption. With these new technologies, CNNP achieved 97% accuracy while consuming only 1/5000 the power of a GPU. Face recognition can be performed with only 0.62 mW of power consumption, and the chip can deliver higher performance than a GPU when allowed more power. The chips were developed by Kyeongryeol Bong, a Ph.D. student under Professor Yoo, and presented at the International Solid-State Circuits Conference (ISSCC) held in San Francisco in February.
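The filter-approximation idea behind the third optimization can be illustrated with a separable kernel (an illustrative NumPy sketch, not the CNNP circuit itself): a rank-1 two-dimensional filter factors exactly into a column filter followed by a row filter, cutting the multiplications per output pixel from k×k to 2k.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Direct 2-D 'valid' sliding-window filtering (correlation form)."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A rank-1 (separable) kernel is the outer product of two 1-D filters:
col = np.array([[1.0], [2.0], [1.0]])   # 3x1 vertical filter
row = np.array([[1.0, 0.0, -1.0]])      # 1x3 horizontal filter
kernel2d = col @ row                    # 3x3 2-D filter

img = np.arange(36, dtype=float).reshape(6, 6)
direct = conv2d_valid(img, kernel2d)                    # 9 mults per output
separable = conv2d_valid(conv2d_valid(img, col), row)   # 3 + 3 mults per output
assert np.allclose(direct, separable)
```

For general (non-separable) CNN filters the factorization is only approximate, which is the trade-off the article describes: a small accuracy cost in exchange for higher speed and lower power.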
CNNP, which has the lowest power consumption reported in the world, has attracted a great deal of attention and led to the development of the present K-Eye series for face recognition. Professor Yoo said, “AI processors will lead the era of the Fourth Industrial Revolution. With the development of this AI chip, we expect Korea to take the lead in global AI technology.” The research team and UX Factory Co. are preparing to commercialize the K-Eye series by the end of this year. According to market researcher IDC, the AI industry will grow from $127 billion last year to $165 billion this year. < Schematic diagram of the K-Eye system >