Energy-Efficient AI Hardware Technology Via a Brain-Inspired Stashing System
Researchers demonstrate a neuromodulation-inspired stashing system for the energy-efficient learning of a spiking neural network using a self-rectifying memristor array

< Image: A schematic illustrating the localized brain activity (a-c) and the configuration of the hardware and software hybrid neural network (d-e) using a self-rectifying memristor array (f-g). >

Researchers have proposed a novel system inspired by the neuromodulation of the brain, referred to as a 'stashing system,' that requires less energy consumption. The research group led by Professor Kyung Min Kim from the Department of Materials Science and Engineering has developed a technology that can efficiently handle mathematical operations for artificial intelligence by imitating the continuous changes in the topology of the neural network according to the situation. The human brain changes its neural topology in real time, learning to store or recall memories as needed. The research group presented a new artificial intelligence learning method that directly implements these neural coordination circuit configurations.

Research on artificial intelligence is becoming very active, and the development of artificial intelligence-based electronic devices and product releases is accelerating, especially in the Fourth Industrial Revolution age. To implement artificial intelligence in electronic devices, customized hardware development should also be supported. However, most electronic devices for artificial intelligence require high power consumption and highly integrated memory arrays for large-scale tasks. It has been challenging to overcome these power consumption and integration limitations, and efforts have been made to find out how the human brain solves such problems.
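The stashing idea itself is realized in memristor hardware in this work, but the general principle, saving operations (and therefore energy) by temporarily deactivating synapses that the current task does not need, can be sketched in a few lines of NumPy. Everything below (layer sizes, the 60% keep ratio) is a hypothetical illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fully connected synaptic weight matrix: 64 inputs -> 32 neurons.
W = rng.normal(size=(32, 64))

# A "stashing" mask temporarily deactivates synapses the current task
# does not need, so their multiply-accumulate operations (and the
# energy they would cost) can be skipped.
stash_mask = rng.random(W.shape) < 0.6   # keep ~60% of synapses active

x = rng.random(64)                        # input activity
y_full = W @ x                            # baseline: every synapse used
y_stashed = (W * stash_mask) @ x          # stashed: masked synapses skipped

ops_full = W.size
ops_stashed = int(stash_mask.sum())
saving = 1 - ops_stashed / ops_full
print(f"operations saved: {saving:.0%}")
```

In the hardware version, each skipped synapse corresponds to a memristor cell that does not need to be programmed or read, which is the kind of saving behind the reported energy reduction.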
To prove the efficiency of the developed technology, the research group created artificial neural network hardware equipped with a self-rectifying synaptic array and an algorithm called a 'stashing system' that was developed to conduct artificial intelligence learning. As a result, the stashing system reduced energy consumption by 37% without any accuracy degradation. This result proves that emulating the neuromodulation of the human brain is possible.

Professor Kim said, "In this study, we implemented the learning method of the human brain with only a simple circuit composition, and through this we were able to reduce the energy needed by nearly 40 percent."

This neuromodulation-inspired stashing system that mimics the brain's neural activity is compatible with existing electronic devices and commercialized semiconductor hardware. It is expected to be used in the design of next-generation semiconductor chips for artificial intelligence.

This study was published in Advanced Functional Materials in March 2022 and supported by KAIST, the National Research Foundation of Korea, the National NanoFab Center, and SK Hynix.

-Publication: Woon Hyung Cheong, Jae Bum Jeon†, Jae Hyun In, Geunyoung Kim, Hanchan Song, Janho An, Juseong Park, Young Seok Kim, Cheol Seong Hwang, and Kyung Min Kim (2022) "Demonstration of Neuromodulation-inspired Stashing System for Energy-efficient Learning of Spiking Neural Network using a Self-Rectifying Memristor Array," Advanced Functional Materials, March 31, 2022 (DOI: 10.1002/adfm.202200337)

-Profile: Professor Kyung Min Kim http://semi.kaist.ac.kr https://scholar.google.com/citations?user=BGw8yDYAAAAJ&hl=ko Department of Materials Science and Engineering KAIST
Machine Learning-Based Algorithm to Speed up DNA Sequencing
The algorithm presents the first full-fledged, short-read alignment software that leverages learned indices to solve the exact match search problem for efficient seeding

< Image: Scientists from KAIST develop a new machine learning-based approach to speed up DNA sequencing. >

The human genome consists of a complete set of DNA, which is about 6.4 billion letters long. Because of its size, reading the whole genome sequence at once is challenging. So scientists use DNA sequencers to produce hundreds of millions of DNA sequence fragments, or short reads, up to 300 letters long. Then the short reads are assembled like a giant jigsaw puzzle to reconstruct the entire genome sequence. Even with very fast computers, this job can take hours to complete.

A research team at KAIST has achieved up to 3.45x faster speeds by developing the first short-read alignment software that uses a recent advance in machine learning called a learned index. The research team reported their findings on March 7, 2022 in the journal Bioinformatics. The software has been released as open source and can be found on GitHub (https://github.com/kaist-ina/BWA-MEME).

Next-generation sequencing (NGS) is a state-of-the-art DNA sequencing method. Projects are underway with the goal of producing genome sequencing at population scale. Modern NGS hardware is capable of generating billions of short reads in a single run. Then the short reads have to be aligned with the reference DNA sequence. With large-scale DNA sequencing operations running hundreds of next-generation sequencers, the need for an efficient short-read alignment tool has become even more critical. Accelerating DNA sequence alignment would be a step toward achieving the goal of population-scale sequencing. However, existing algorithms are limited in their performance by their frequent memory accesses. BWA-MEM2 is a popular short-read alignment software package currently used to sequence DNA.
However, it has its limitations. State-of-the-art alignment has two phases: seeding and extending. During the seeding phase, the software finds exact matches of short reads in the reference DNA sequence. During the extending phase, the matches from the seeding phase are extended. In the current process, bottlenecks occur in the seeding phase, where finding the exact matches slows the process.

The researchers set out to accelerate DNA sequence alignment. To speed up the process, they applied machine learning techniques to create an algorithmic improvement. Their algorithm, BWA-MEME (BWA-MEM emulated), leverages learned indices to solve the exact match search problem. Whereas the original software compared one character at a time for an exact match search, the team's new algorithm achieves up to 3.45x faster seeding throughput over BWA-MEM2 by reducing the number of instructions by 4.60x and memory accesses by 8.77x.

"Through this study, it has been shown that full genome big data analysis can be performed faster and at lower cost than conventional methods by applying machine learning technology," said Professor Dongsu Han from the School of Electrical Engineering at KAIST.

The researchers' ultimate goal was to develop efficient software that scientists from academia and industry could use on a daily basis for analyzing big data in genomics. "With the recent advances in artificial intelligence and machine learning, we see so many opportunities for designing better software for genomic data analysis. The potential is there for accelerating existing analysis as well as enabling new types of analysis, and our goal is to develop such software," added Han.

Whole genome sequencing has traditionally been used for discovering genomic mutations and identifying the root causes of diseases, which leads to the discovery and development of new drugs and cures. There could be many potential applications.
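BWA-MEME's actual learned index is built over the suffixes of the reference genome; purely as an illustration of how a learned index replaces a full search with a model prediction plus a short bounded scan, here is a minimal sketch over a sorted array of integer keys. The linear model and error bound are illustrative assumptions, not the tool's real data structure:

```python
import numpy as np

rng = np.random.default_rng(1)

# A sorted array of unique integer keys stands in for the sorted
# reference suffixes that seeding searches for exact matches.
keys = np.sort(rng.choice(10**6, size=10_000, replace=False))
positions = np.arange(len(keys))

# "Learned index": fit position ~ a*key + b, then record the worst
# prediction error so lookups only scan a small bounded window
# instead of doing a full binary search over all keys.
a, b = np.polyfit(keys, positions, 1)
max_err = int(np.max(np.abs(np.round(a * keys + b) - positions)))

def learned_lookup(key):
    """Exact-match search: predict a position, scan a bounded window."""
    guess = int(round(a * key + b))
    lo = max(0, guess - max_err)
    hi = min(len(keys), guess + max_err + 1)
    window = keys[lo:hi]
    i = int(np.searchsorted(window, key))
    if i < len(window) and window[i] == key:
        return lo + i          # exact match found at this position
    return -1                  # key absent from the reference

print(learned_lookup(keys[4321]))   # recovers the true index
print(learned_lookup(10**7))        # absent key -> -1
```

The speedup comes from the window being far smaller than the whole array, so far fewer comparisons and memory accesses are needed per query.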
Whole genome sequencing is used not only for research, but also for clinical purposes. "The science and technology for analyzing genomic data is making rapid progress to make it more accessible for scientists and patients. This will enhance our understanding of diseases and help develop better cures for patients with various diseases."

The research was funded by the National Research Foundation of the Korean government's Ministry of Science and ICT.

-Publication: Youngmok Jung, Dongsu Han, "BWA-MEME: BWA-MEM emulated with a machine learning approach," Bioinformatics, Volume 38, Issue 9, May 2022 (https://doi.org/10.1093/bioinformatics/btac137)

-Profile: Professor Dongsu Han School of Electrical Engineering KAIST
Promoting metaverse initiatives with an active age..
Hong Kong-based professionals were provided a metaverse course by Professor Lik-Hang Lee in collaboration with the Hong Kong Productivity Council (HKPC) during the Spring 2022 semester. In response to the growing need for virtual worlds and virtual-physical hybrid settings, "The Metaverse Course for Professionals" intends to cultivate world-class metaverse talent. The training covered the parallel virtual world and how to exploit digitalization and industry in the metaverse era. The training was attended by R&D scientists, consultants, software engineers, and associated professionals from the HKPC. The students completed extensive courses on topics such as creating and executing virtual-physical blended environments, metaverse technologies and ecosystems, immersive smart cities, token economies, and intelligent industrialization in the metaverse age.

< Figure 1. Media coverage by HKPC regarding KAIST's Metaverse course >

Moreover, Professor Lee and his collaborators worldwide have conducted research studies on building user-centric urban areas featuring virtual-physical blended experiences, also known as metaverse cities. His recent studies aim to strike a balance between user interactivity bandwidth and user mobility in metaverse cities. Some AR/VR-based gadgets and system solutions include EMG-based text entry with occupied hands [Ref.4] (Figure 2A), freehand interaction for content editing in mobile immersive experiences [Ref.3] (Figure 2C), VR-driven design of human-drone interfaces [Ref.5] (Figure 2D), user authentication on mobile headsets with gaze and footsteps [Ref.2] (Figure 2E), and a multi-modal metaverse between Earth and Mars [Ref.6], to name but a few. When metaverse services appear ubiquitously in our cities, the studies mentioned above will enable users to convert their intentions into actions seamlessly in such immersive environments.

< Figure 2. Snapshots of recent research on Augmented Reality (AR) and Virtual Reality (VR) >
Professor Lik-Hang Lee Offers Metaverse Course for Professionals
< Professor Lik-Hang Lee >

Professor Lik-Hang Lee from the Department of Industrial and Systems Engineering will offer a metaverse course in partnership with the Hong Kong Productivity Council (HKPC) from the Spring 2022 semester to Hong Kong-based professionals. "The Metaverse Course for Professionals" aims to nurture world-class metaverse talent in response to surging demand for virtual worlds and virtual-physical blended environments.

The HKPC's R&D scientists, consultants, software engineers, and related professionals will attend the course. They will receive a professional certificate on managing and developing metaverse skills upon completion of this intensive course. The course will provide essential skills and knowledge about the parallel virtual universe and how to leverage digitalization and industrialization in the metaverse era. The course includes comprehensive modules, such as designing and implementing virtual-physical blended environments, metaverse technology and ecosystems, immersive smart cities, token economies, and intelligent industrialization in the metaverse era.

Professor Lee believes that in the decades to come we will see rising numbers of virtual worlds in cyberspace, known as the 'Immersive Internet,' characterized by high levels of immersiveness, user interactivity, and user-machine collaboration. "Consumers in virtual worlds will create novel content as well as personalized products and services, becoming a catalyst for 'hyper-personalization' in the next industrial revolution," he said.

Professor Lee said he will continue offering world-class education related to the metaverse to students at KAIST and professionals from various industrial sectors, as his Augmented Reality and Media Lab will focus on a variety of metaverse topics such as metaverse campuses and industrial metaverses.
The HKPC has worked to deliver innovative solutions for Hong Kong industries and enterprises since 1967, helping them achieve optimized resource utilization, effectiveness, and cost reduction as well as enhanced productivity and competitiveness in both local and international markets. The HKPC has advocated for facilitating Hong Kong's reindustrialization powered by Industry 4.0 and e-commerce 4.0, with a strong emphasis on R&D, IoT, AI, and digital manufacturing.

The Augmented Reality and Media Lab led by Professor Lee will continue its close partnerships with the HKPC and its other partners to help build the epicentre of the metaverse in the region. Furthermore, the lab will fully leverage its well-established research niches in user-centric, virtual-physical cyberspace (https://www.lhlee.com/projects-8) to serve upcoming projects related to industrial metaverses, which aligns with the departmental focus on smart factories and artificial intelligence.
Professor June-Koo Rhee's Team Wins the QHack Open Hackathon Science Challenge
< From left: Ju-Young Ryu, Jeung-rak Lee, and Eyuel Elala of Professor June-Koo Rhee's group >

The research team consisting of three master's students, Ju-Young Ryu, Jeung-rak Lee, and Eyuel Elala, in Professor June-Koo Rhee's group from the KAIST ITRC of Quantum Computing for AI has won first place in the QHack 2022 Open Hackathon Science Challenge.

The QHack 2022 Open Hackathon is one of the world's prestigious quantum software hackathon events, held by Xanadu, in which 250 people from 100 countries participated. Major sponsors such as IBM Quantum, AWS, CERN QTI, and Google Quantum AI proposed challenging problems, and winning teams were selected based on team projects in each of the 13 challenges.

The KAIST team supervised by Professor Rhee received the first place prize in the Science Challenge, which was organized by CERN QTI of the European Communities. The team will be awarded an opportunity to tour CERN's research lab in Europe for one week along with an online internship.

The students on the team presented a method for "Learning-Based Error Mitigation for VQE," in which they implemented an LBEM protocol to lower the error in quantum computing, and leveraged the protocol in the VQE algorithm, which is used to calculate the ground state energy of a given molecule. Their research successfully demonstrated the ability to effectively mitigate the error in IBM Quantum hardware and a virtual error model.

In conjunction, Professor June-Koo (Kevin) Rhee founded a quantum computing venture start-up, Qunova Computing (https://qunovacomputing.com), with technology transfer from the KAIST ITRC of Quantum Computing for AI. Qunova Computing is at the frontier of the quantum software industry in Korea.
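As a rough classical illustration of the core idea behind learning-based error mitigation (learn a map from noisy to ideal expectation values on training circuits whose ideal values are classically computable, then apply it to the target VQE circuit), one can sketch the following. The linear noise model and all numbers are hypothetical, not the team's protocol:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical noise model: the device reports a*E_ideal + b plus shot noise.
a_true, b_true = 0.82, -0.05

# Training circuits (e.g. Clifford circuits) whose ideal expectation
# values can be computed classically.
E_ideal_train = rng.uniform(-1, 1, size=50)
E_noisy_train = a_true * E_ideal_train + b_true + rng.normal(0, 0.01, size=50)

# Learn the inverse map (noisy -> ideal) by least squares.
A = np.column_stack([E_noisy_train, np.ones_like(E_noisy_train)])
coef, *_ = np.linalg.lstsq(A, E_ideal_train, rcond=None)

def mitigate(e_noisy):
    """Apply the learned linear correction to a noisy expectation value."""
    return coef[0] * e_noisy + coef[1]

# Apply to a new noisy measurement from the target (VQE) circuit.
E_ideal_target = -0.7                       # ground truth, unknown to the device
E_noisy_target = a_true * E_ideal_target + b_true
print(f"raw: {E_noisy_target:.3f}, mitigated: {mitigate(E_noisy_target):.3f}")
```

The mitigated estimate lands much closer to the true ground-state energy than the raw reading, which is the effect LBEM exploits inside VQE.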
CXL-Based Memory Disaggregation Technology Opens Up New Directions for Big Data Memory Processing
A KAIST team's compute express link (CXL) solution provides new insights into memory disaggregation and ensures direct access and high-performance capabilities

A team from the Computer Architecture and Memory Systems Laboratory (CAMEL) at KAIST presented a new compute express link (CXL) solution whose directly accessible, high-performance memory disaggregation opens new directions for big data memory processing. Professor Myoungsoo Jung said the team's technology significantly improves performance compared to existing remote direct memory access (RDMA)-based memory disaggregation.

CXL is a new peripheral component interconnect express (PCIe)-based dynamic multi-protocol made for efficiently utilizing memory devices and accelerators. Many enterprise data centers and memory vendors are paying attention to it as the next-generation multi-protocol for the era of big data.

Emerging big data applications such as machine learning, graph analytics, and in-memory databases require large memory capacities. However, scaling out the memory capacity via a prior memory interface like double data rate (DDR) is limited by the number of central processing units (CPUs) and memory controllers. Therefore, memory disaggregation, which allows connecting a host to another host's memory or to memory nodes, has emerged.

RDMA is a way for a host to directly access another host's memory via InfiniBand, the commonly used network protocol in data centers. Nowadays, most existing memory disaggregation technologies employ RDMA to get a large memory capacity. As a result, a host can share another host's memory by transferring the data between local and remote memory.

Although RDMA-based memory disaggregation provides a large memory capacity to a host, two critical problems exist. First, scaling out the memory still requires adding an extra CPU. Since passive memory such as dynamic random-access memory (DRAM) cannot operate by itself, it must be controlled by a CPU.
Second, redundant data copies and software fabric interventions in RDMA-based memory disaggregation cause longer access latency. For example, remote memory access latency in RDMA-based memory disaggregation is multiple orders of magnitude longer than local memory access.

To address these issues, Professor Jung's team developed a CXL-based memory disaggregation framework, including CXL-enabled customized CPUs, CXL devices, CXL switches, and CXL-aware operating system modules. The team's CXL device is a purely passive and directly accessible memory node that contains multiple DRAM dual inline memory modules (DIMMs) and a CXL memory controller. Since the CXL memory controller supports the memory in the CXL device, a host can utilize the memory node without processor or software intervention. The team's CXL switch enables scaling out a host's memory capacity by hierarchically connecting multiple CXL devices to the switch, allowing hundreds of devices or more. Atop the switches and devices, the team's CXL-enabled operating system removes the redundant data copies and protocol conversions exhibited by conventional RDMA, which can significantly decrease access latency to the memory nodes.

In a test loading 64B (cacheline) data from memory pooling devices, CXL-based memory disaggregation showed 8.2 times higher data load performance than RDMA-based memory disaggregation, and even performance similar to local DRAM memory. In the team's evaluations with a big data benchmark such as a machine learning-based test, CXL-based memory disaggregation technology also showed a maximum of 3.7 times higher performance than prior RDMA-based memory disaggregation technologies.

"Escaping from the conventional RDMA-based memory disaggregation, our CXL-based memory disaggregation framework can provide high scalability and performance for diverse datacenters and cloud service infrastructures," said Professor Jung.
He went on to stress, "Our CXL-based memory disaggregation research will bring about a new paradigm for memory solutions that will lead the era of big data."

< Figure 1. A comparison of the architecture between CAMEL's CXL solution and conventional RDMA-based memory disaggregation. >

< Figure 2. A performance comparison between CAMEL's CXL solution and prior RDMA-based disaggregation. >

-Profile: Professor Myoungsoo Jung Computer Architecture and Memory Systems Laboratory (CAMEL) http://camelab.org School of Electrical Engineering KAIST
KAA Recognizes 4 Distinguished Alumni of the Year
< Distinguished Professor Sukbok Chang, Hyunshil Ahn of the AI Economy Institute at The Korea Economic Daily, PSTech CEO Hwan-ho Sung, and Samsung Electronics President Hark Kyu Park (from left) >

The KAIST Alumni Association (KAA) recognized four distinguished alumni of the year during a ceremony on February 25 in Seoul. The four Distinguished Alumni Awardees are Distinguished Professor Sukbok Chang from the KAIST Department of Chemistry; Hyunshil Ahn, head of the AI Economy Institute and an editorial writer at The Korea Economic Daily; CEO Hwan-ho Sung of PSTech; and President Hark Kyu Park of Samsung Electronics.

Distinguished Professor Sukbok Chang, who received his MS from the Department of Chemistry in 1985, has been a pioneer in the novel field of 'carbon-hydrogen bond activation reactions'. He has significantly contributed to raising Korea's international reputation in the natural sciences and received the Kyungam Academic Award in 2013, the 14th Korea Science Award in 2015, the 1st Science and Technology Prize of Korea Toray in 2018, and the Best Scientist/Engineer Award Korea in 2019. Furthermore, he was named a Highly Cited Researcher, ranking in the top 1% of citations by field and publication year in the Web of Science citation index for seven consecutive years from 2015 to 2021, demonstrating his leadership as a global scholar.

Hyunshil Ahn, a graduate of the School of Business and Technology Management with an MS in 1985 and a PhD in 1987, was appointed as the first head of the AI Economy Institute when The Korea Economic Daily became the first Korean media outlet to establish an AI economy lab. He has contributed to creating new roles for the press and media in the 4th industrial revolution, and added to the popularization of AI technology through regulation reform and consulting on industrial policies.

PSTech CEO Hwan-ho Sung is a graduate of the School of Electrical Engineering, where he received an MS in 1988 and a PhD through the EMBA program in 2008.
He has run the electronics company PSTech for over 20 years and successfully localized the production of power equipment, which previously depended on foreign technology. His development of the world’s first power equipment that can be applied to new industries including semiconductors and displays was recognized through this award. Samsung Electronics President Hark Kyu Park graduated from the School of Business and Technology Management with an MS in 1986. He not only enhanced Korea’s national competitiveness by expanding the semiconductor industry, but also established contract-based semiconductor departments at Korean universities including KAIST, Sungkyunkwan University, Yonsei University, and Postech, and semiconductor track courses at KAIST, Sogang University, Seoul National University, and Postech to nurture professional talents. He also led the national semiconductor coexistence system by leading private sector-government-academia collaborations to strengthen competence in semiconductors, and continues to make unconditional investments in strong small businesses. KAA President Chilhee Chung said, “Thanks to our alumni contributing at the highest levels of our society, the name of our alma mater shines brighter. As role models for our younger alumni, I hope greater honours will follow our awardees in the future.”
Decoding Brain Signals to Control a Robotic Arm
Advanced brain-machine interface system successfully interprets arm movement directions from neural signals in the brain

< Figure: Experimental paradigm. Subjects were instructed to perform reach-and-grasp movements to designate the locations of the target in three-dimensional space. (a) Subjects A and B were provided the visual cue as a real tennis ball at one of four pseudo-randomized locations. (b) Subjects A and B were provided the visual cue as a virtual reality clip showing a sequence of five stages of a reach-and-grasp movement. >

Researchers have developed a mind-reading system for decoding neural signals from the brain during arm movement. The method, described in the journal Applied Soft Computing, can be used by a person to control a robotic arm through a brain-machine interface (BMI).

A BMI is a device that translates nerve signals into commands to control a machine, such as a computer or a robotic limb. There are two main techniques for monitoring neural signals in BMIs: electroencephalography (EEG) and electrocorticography (ECoG). The EEG records signals from electrodes on the surface of the scalp and is widely employed because it is non-invasive, relatively cheap, safe, and easy to use. However, the EEG has low spatial resolution and detects irrelevant neural signals, which makes it difficult to interpret the intentions of individuals from the EEG. On the other hand, the ECoG is an invasive method that involves placing electrodes directly on the surface of the cerebral cortex below the scalp. Compared with the EEG, the ECoG can monitor neural signals with much higher spatial resolution and less background noise. However, this technique has several drawbacks.
"The ECoG is primarily used to find potential sources of epileptic seizures, meaning the electrodes are placed in different locations for different patients and may not be in the optimal regions of the brain for detecting sensory and movement signals," explained Professor Jaeseung Jeong, a brain scientist at KAIST. "This inconsistency makes it difficult to decode brain signals to predict movements."

To overcome these problems, Professor Jeong's team developed a new method for decoding ECoG neural signals during arm movement. The system is based on a machine-learning technique for analysing and predicting neural signals called an 'echo-state network' and a mathematical probability model called the Gaussian distribution.

In the study, the researchers recorded ECoG signals from four individuals with epilepsy while they were performing a reach-and-grasp task. Because the ECoG electrodes were placed according to the potential sources of each patient's epileptic seizures, only 22% to 44% of the electrodes were located in the regions of the brain responsible for controlling movement.

During the movement task, the participants were given visual cues, either by placing a real tennis ball in front of them, or via a virtual reality headset showing a clip of a human arm reaching forward in first-person view. They were asked to reach forward, grasp an object, then return their hand and release the object, while wearing motion sensors on their wrists and fingers. In a second task, they were instructed to imagine reaching forward without moving their arms.

The researchers monitored the signals from the ECoG electrodes during real and imaginary arm movements, and tested whether the new system could predict the direction of this movement from the neural signals. They found that the novel decoder successfully classified arm movements in 24 directions in three-dimensional space, both in the real and virtual tasks, and that the results were at least five times more accurate than chance.
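An echo-state network keeps a fixed random recurrent "reservoir" and trains only a linear readout, which is what makes it practical for decoding noisy neural time series. The sketch below illustrates that structure on synthetic two-class data; the paper's decoder uses a Gaussian readout over 24 movement directions and real ECoG recordings, whereas this toy uses a plain ridge-regression readout and made-up signals:

```python
import numpy as np

rng = np.random.default_rng(42)

n_in, n_res = 8, 100             # e.g. 8 recording channels, 100 reservoir units
W_in = rng.normal(0.0, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

def reservoir_state(signal):
    """Run the fixed random reservoir over a (T, n_in) signal."""
    x = np.zeros(n_res)
    for u in signal:
        x = np.tanh(W_in @ u + W @ x)
    return x                      # final state summarizes the trial

def make_trial(direction):
    """Synthetic trial whose channel statistics depend on the class."""
    envelope = np.sin(np.linspace(0, 4 * np.pi, 50))[:, None]
    return envelope * rng.normal(direction, 0.3, (50, n_in))

X = np.array([reservoir_state(make_trial(d)) for d in [0, 1] * 40])
y = np.array([0, 1] * 40)

# Only the linear readout is trained (here: ridge regression).
ridge = 1e-2
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
pred = (X @ W_out > 0.5).astype(int)
print(f"training accuracy: {(pred == y).mean():.2f}")
```

Because the recurrent weights are never trained, fitting the decoder reduces to a single linear solve, which suits small clinical datasets.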
They also used a computer simulation to show that the novel ECoG decoder could control the movements of a robotic arm. Overall, the results suggest that the new machine learning-based BMI system successfully used ECoG signals to interpret the direction of the intended movements. The next steps will be to improve the accuracy and efficiency of the decoder. In the future, it could be used in a real-time BMI device to help people with movement or sensory impairments.

This research was supported by the KAIST Global Singularity Research Program of 2021, the Brain Research Program of the National Research Foundation of Korea funded by the Ministry of Science, ICT, and Future Planning, and the Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education.

-Publication: Hoon-Hee Kim, Jaeseung Jeong, "An electrocorticographic decoder for arm movement for brain-machine interface using an echo state network and Gaussian readout," Applied Soft Computing, online December 31, 2021 (doi.org/10.1016/j.asoc.2021.108393)

-Profile: Professor Jaeseung Jeong Department of Bio and Brain Engineering College of Engineering KAIST
Five Projects Ranked in the Top 100 for National R&D Excellence
< Distinguished Professor Sang Yup Lee, Professor Kwang-Hyun Cho, Professor Byungha Shin, Professor Jiyong Eom, and Professor Myungchul Kim (from left) >

Five KAIST research projects were selected for the 2021 Top 100 for National R&D Excellence by the Ministry of Science and ICT and the Korea Institute of Science & Technology Evaluation and Planning. The five projects are:

-The development of E. coli that proliferates with only formic acid and carbon dioxide by Distinguished Professor Sang Yup Lee from the Department of Chemical and Biomolecular Engineering
-An original reverse aging technology that restores an old human skin cell into a younger one by Professor Kwang-Hyun Cho from the Department of Bio and Brain Engineering
-The development of next-generation high-efficiency perovskite-silicon tandem solar cells by Professor Byungha Shin from the Department of Materials Science and Engineering
-Research on the effects that ultrafine dust in the atmosphere has on energy consumption by Professor Jiyong Eom from the School of Business and Technology Management
-Research on a molecular trigger that controls the phase transformation of biomaterials by Professor Myungchul Kim from the Department of Bio and Brain Engineering

Started in 2006, an Evaluation Committee composed of experts from industry, universities, and research institutes makes the preliminary selection of the most outstanding research projects based on their significance as scientific and technological developments and their socioeconomic effects. The finalists went through an open public evaluation. The final 100 studies come from six fields: 18 from mechanics & materials, 26 from biology & marine sciences, 19 from ICT & electronics, 10 from interdisciplinary research, and nine from natural science and infrastructure.
The selected 100 studies will receive a certificate and an award plaque from the minister of MSIT as well as additional points for business and institutional evaluations according to appropriate regulations, and the selected researchers will be strongly recommended as candidates for national meritorious awards. In particular, to help the 100 selected research projects become more accessible for the general public, their main contents will be provided in a free e-book ‘The Top 100 for National R&D Excellence of 2021’ that will be available from online booksellers.
Improving Speech Intelligibility with Privacy-Preserving AR
Privacy-preserving AR system can augment the speaker's speech with real-life subtitles to overcome the loss of contextual cues caused by mask-wearing and social distancing during the COVID-19 pandemic

Degraded speech intelligibility induces face-to-face conversation participants to speak louder and more distinctively, exposing the content to potential eavesdroppers. Similarly, face masks deteriorate speech intelligibility, a problem that has become commonplace during the COVID-19 crisis. Augmented Reality (AR) can serve as an effective tool to visualise people's conversations and promote speech intelligibility, an approach known as speech augmentation. However, visualised conversations without proper privacy management can expose AR users to privacy risks.

An international research team of Prof. Lik-Hang Lee of the Department of Industrial and Systems Engineering at KAIST and Prof. Pan Hui of Computational Media and Arts at the Hong Kong University of Science and Technology employed a conversation-oriented Contextual Integrity (CI) principle to develop a privacy-preserving AR framework for speech augmentation. At its core, the framework, named Theophany, establishes ad-hoc social networks between relevant conversation participants to exchange contextual information and improve speech intelligibility in real time.

< Figure 1: A real-life subtitle application with AR headsets >

Theophany has been implemented as a real-life subtitle application in AR to improve speech intelligibility in daily conversations (Figure 1). This implementation leverages multi-modal channels, such as eye-tracking, camera, and audio. Theophany transforms the user's speech into text and estimates the intended recipients through gaze detection. The CI Enforcer module evaluates each sentence's sensitivity. If the sensitivity meets the speaker's privacy threshold, the sentence is transmitted to the appropriate recipients (Figure 2).
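The CI Enforcer's thresholding step described above can be sketched as a simple routing function. The sensitivity scores, thresholds, and relationship labels below are hypothetical stand-ins for what Theophany estimates from the conversation, and the routing rules are an illustrative simplification of the sessions shown in Figure 3:

```python
def route_subtitle(sensitivity, speaker_threshold, gaze_targets):
    """Decide who receives the textual transcription of a sentence.

    sensitivity and speaker_threshold are scores in [0, 1];
    gaze_targets maps each participant currently in the speaker's
    gaze to a (hypothetical) relationship label.
    """
    if sensitivity > speaker_threshold:
        return []                                  # highly sensitive: nobody
    if sensitivity > 0.5:                          # moderately sensitive:
        return [p for p, rel in gaze_targets.items()
                if rel in ("friend", "coworker")]  # close relations only
    return list(gaze_targets)                      # not sensitive: everyone

gaze = {"Alice": "friend", "Bob": "coworker", "Carol": "stranger"}
print(route_subtitle(0.2, 0.8, gaze))   # casual remark -> everyone in gaze
print(route_subtitle(0.6, 0.8, gaze))   # work topic -> friend and coworker
print(route_subtitle(0.9, 0.8, gaze))   # private topic -> nobody
```

The real system additionally tracks sessions, so a newcomer entering the speaker's gaze only starts receiving subtitles once a new topic begins.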
< Figure 2: Multi-modal Contextual Integrity Channel > Based on the principles of Contextual Integrity (CI), parameters of privacy perception, such as topic, location, and participants, were designed for privacy-preserving face-to-face conversations. Accordingly, Theophany's operation depends on the topic and the session. Figure 3 demonstrates five illustrative conversation sessions: (a) the topic is not sensitive and is transmitted to everybody in the user's gaze; (b) the topic is work-sensitive and is only transmitted to the coworker; (c) the topic is sensitive and is only transmitted to the friend in the user's gaze; (d) a new friend entering the user's gaze only receives the textual transcription once a new session (topic) starts; (e) the topic is highly sensitive, and nobody receives the textual transcription. < Figure 3: Speech Augmentation in five illustrative sessions > Within a prototypical AR system, Theophany augments the speaker's speech with real-life subtitles to overcome the loss of contextual cues caused by mask-wearing and social distancing during the COVID-19 pandemic. The research was published in ACM Multimedia under the title 'Theophany: Multi-modal Speech Augmentation in Instantaneous Privacy Channels' (DOI: 10.1145/3474085.3475507) and was selected as one of the Best Paper Award candidates (top 5). The first author is an alumnus of the Industrial and Systems Engineering Department at KAIST. Short Bio: Lik-Hang Lee received a PhD degree from SyMLab, Hong Kong University of Science and Technology, and Bachelor's and M.Phil. degrees from the University of Hong Kong. He is currently a tenure-track assistant professor at the Korea Advanced Institute of Science and Technology (KAIST), South Korea, and the head of the Augmented Reality and Media Laboratory at KAIST. He has built and designed various human-centric computing systems specializing in augmented and virtual reality (AR/VR). 
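The routing behavior illustrated in the five sessions above can be sketched in code. The sketch below is a hypothetical illustration only, not Theophany's actual API: the sensitivity levels, relationship tags, `Participant` type, and `route_subtitle` function are all assumed names invented for this example.

```python
# Hypothetical sketch of the CI Enforcer's routing decision: a sentence is
# transmitted only if its sensitivity is within the speaker's privacy
# threshold, and only to gaze-estimated recipients whose relationship to
# the speaker matches the topic's sensitivity (cf. sessions (a)-(e)).
from dataclasses import dataclass

# Ordered sensitivity levels (higher = more sensitive); assumed labels.
PUBLIC, WORK, PRIVATE, HIGHLY_PRIVATE = 0, 1, 2, 3

@dataclass
class Participant:
    name: str
    relation: str      # e.g. "stranger", "coworker", "friend"
    in_gaze: bool      # estimated as an intended recipient via gaze detection

# Relationships allowed to receive a sentence of each sensitivity level.
ALLOWED_RELATIONS = {
    PUBLIC: {"stranger", "coworker", "friend"},   # session (a): everybody
    WORK: {"coworker"},                           # session (b)
    PRIVATE: {"friend"},                          # session (c)
    HIGHLY_PRIVATE: set(),                        # session (e): nobody
}

def route_subtitle(sensitivity, speaker_threshold, participants):
    """Return the names of participants who receive the transcription."""
    if sensitivity > speaker_threshold:
        return []  # exceeds the speaker's privacy threshold: suppress
    allowed = ALLOWED_RELATIONS[sensitivity]
    return [p.name for p in participants if p.in_gaze and p.relation in allowed]
```

Session (d), where a newcomer only receives subtitles once a new session starts, would additionally require tracking session membership over time, which is omitted here for brevity.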
In recent years, he has published more than 30 research papers on AR/VR at prestigious venues such as ACM WWW, ACM IMWUT, ACM Multimedia, ACM CSUR, and IEEE PerCom. He also serves the research community as a TPC member, PC member, and workshop organizer at venues such as AAAI, IJCAI, IEEE PerCom, ACM CHI, ACM Multimedia, ACM IMWUT, and IEEE VR.
Eco-Friendly Micro-Supercapacitors Using Fallen Leaves
Femtosecond-laser-fabricated micro-supercapacitors on a single leaf could easily be applied to wearable electronics, smart homes, and the IoT < Image: A schematic illustration of the production of femtosecond laser-induced graphene. > A KAIST research team has developed a graphene-inorganic-hybrid micro-supercapacitor made of leaves using femtosecond direct laser writing lithography. The advancement of wearable electronic devices goes hand in hand with innovations in flexible energy storage devices. Of the various energy storage devices, micro-supercapacitors have drawn a great deal of interest for their high electrical power density, long lifetimes, and short charging times. However, the growing consumption of electronic equipment, together with the short replacement cycles driven by rapid advances in mobile devices, has sharply increased the generation of waste batteries. The safety and environmental issues involved in collecting, recycling, and processing these waste batteries pose a number of challenges. Forests cover about 30 percent of the Earth's land surface, producing a huge amount of fallen leaves. This naturally occurring biomass comes in large quantities and is both biodegradable and reusable, which makes it an attractive eco-friendly material. However, if the leaves are left neglected instead of being used efficiently, they can contribute to fires or water pollution. To solve both problems at once, a research team led by Professor Young-Jin Kim from the Department of Mechanical Engineering and Dr. Hana Yoon from the Korea Institute of Energy Research developed a one-step technology that creates porous 3D graphene micro-electrodes with high electrical conductivity by irradiating femtosecond laser pulses onto the surface of the leaves, requiring no additional materials or post-treatment and working under ambient atmospheric conditions. Taking this strategy further, the team also suggested a method for producing flexible micro-supercapacitors. 
They showed that this technique can quickly and easily produce porous graphene-inorganic-hybrid electrodes at low cost, and validated its performance by using the graphene micro-supercapacitors to power an LED and an electronic watch that could function as a thermometer, hygrometer, and timer. These results open up the possibility of mass-producing flexible, green graphene-based electronic devices. Professor Young-Jin Kim said, “Leaves create forest biomass that comes in unmanageable quantities, so using them for next-generation energy storage devices makes it possible for us to reuse waste resources, thereby establishing a virtuous cycle.” This research was published in Advanced Functional Materials last month and was sponsored by the Ministry of Agriculture, Food and Rural Affairs, the Korea Forest Service, and the Korea Institute of Energy Research. -Publication: Truong-Son Dinh Le, Yeong A. Lee, Han Ku Nam, Kyu Yeon Jang, Dongwook Yang, Byunggi Kim, Kanghoon Yim, Seung Woo Kim, Hana Yoon, and Young-Jin Kim, “Green Flexible Graphene-Inorganic-Hybrid Micro-Supercapacitors Made of Fallen Leaves Enabled by Ultrafast Laser Pulses,” Advanced Functional Materials, December 05, 2021 (DOI: 10.1002/adfm.202107768) -Profile: Professor Young-Jin Kim Ultra-Precision Metrology and Manufacturing (UPM2) Laboratory Department of Mechanical Engineering KAIST
AI Light-Field Camera Reads 3D Facial Expressions
Machine-learned light-field camera reads facial expressions from high-contrast, illumination-invariant 3D facial images < Image: Facial expression reading based on MLP classification from 3D depth maps and 2D images obtained by the NIR-LFC > A joint research team led by Professors Ki-Hun Jeong and Doheon Lee from the KAIST Department of Bio and Brain Engineering reported the development of a technique for facial expression detection that merges near-infrared light-field camera techniques with artificial intelligence (AI) technology. Unlike a conventional camera, a light-field camera contains micro-lens arrays in front of the image sensor, which makes the camera small enough to fit into a smartphone while allowing it to acquire the spatial and directional information of the light with a single shot. The technique has received attention because it can reconstruct images in a variety of ways, including multi-views, refocusing, and 3D image acquisition, giving rise to many potential applications. However, optical crosstalk between the micro-lenses and shadows cast by external light sources in the environment has kept existing light-field cameras from providing accurate image contrast and 3D reconstruction. The joint research team applied a vertical-cavity surface-emitting laser (VCSEL) in the near-IR range to stabilize the accuracy of 3D image reconstruction, which previously depended on environmental light. When an external light source was shone on a face at 0-, 30-, and 60-degree angles, the light-field camera reduced image reconstruction errors by 54%. Additionally, by inserting a layer that absorbs visible and near-IR wavelengths between the micro-lens arrays, the team minimized optical crosstalk while increasing image contrast by 2.1 times. 
Through this technique, the team overcame the limitations of existing light-field cameras and developed an NIR-based light-field camera (NIR-LFC) optimized for the 3D image reconstruction of facial expressions. Using the NIR-LFC, the team acquired high-quality 3D reconstructions of facial expressions conveying various emotions, regardless of the surrounding lighting conditions. The facial expressions in the acquired 3D images were distinguished through machine learning with an average accuracy of 85%, a statistically significant improvement over classification from 2D images. Furthermore, by calculating the interdependency of the distance information that varies with facial expression in 3D images, the team could identify the information a light-field camera utilizes to distinguish human expressions. Professor Ki-Hun Jeong said, “The sub-miniature light-field camera developed by the research team has the potential to become the new platform to quantitatively analyze the facial expressions and emotions of humans.” To highlight the significance of this research, he added, “It could be applied in various fields including mobile healthcare, field diagnosis, social cognition, and human-machine interactions.” This research was published online in Advanced Intelligent Systems on December 16 under the title “Machine-Learned Light-Field Camera that Reads Facial Expression from High-Contrast and Illumination Invariant 3D Facial Images.” It was funded by the Ministry of Science and ICT and the Ministry of Trade, Industry and Energy. -Publication: “Machine-learned light-field camera that reads facial expression from high-contrast and illumination invariant 3D facial images,” Sang-In Bae, Sangyeon Lee, Jae-Myeong Kwon, Hyun-Kyung Kim, 
Kyung-Won Jang, Doheon Lee, and Ki-Hun Jeong, Advanced Intelligent Systems, December 16, 2021 (DOI: 10.1002/aisy.202100182) -Profile: Professor Ki-Hun Jeong Biophotonic Laboratory Department of Bio and Brain Engineering KAIST Professor Doheon Lee Department of Bio and Brain Engineering KAIST
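The classification step described in the article above, in which pairwise distance information from 3D facial images is fed to an MLP, can be sketched as follows. This is a minimal illustration under assumed conditions, not the authors' method or results: the landmark data is synthetic, and the network size, learning rate, and feature construction are arbitrary choices for the example.

```python
# Sketch: pairwise-distance features from 3D facial landmarks, classified by
# a tiny one-hidden-layer MLP (tanh hidden layer, sigmoid output, BCE loss).
import numpy as np

rng = np.random.default_rng(0)

def distance_features(landmarks):
    """Pairwise Euclidean distances between 3D landmarks (upper triangle)."""
    d = np.linalg.norm(landmarks[:, None, :] - landmarks[None, :, :], axis=-1)
    i, j = np.triu_indices(len(landmarks), k=1)
    return d[i, j]

def train_mlp(X, y, hidden=16, lr=0.5, epochs=500):
    """Train a one-hidden-layer MLP by full-batch gradient descent."""
    n, f = X.shape
    W1 = rng.normal(0, 0.5, (f, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, hidden); b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)
        p = 1 / (1 + np.exp(-(h @ W2 + b2)))
        g = (p - y) / n                       # d(BCE)/d(output logit)
        gh = np.outer(g, W2) * (1 - h ** 2)   # backprop through tanh
        W2 -= lr * (h.T @ g); b2 -= lr * g.sum()
        W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(0)
    return W1, b1, W2, b2

def predict(params, X):
    W1, b1, W2, b2 = params
    h = np.tanh(X @ W1 + b1)
    return (1 / (1 + np.exp(-(h @ W2 + b2))) > 0.5).astype(int)

# Synthetic stand-in for two facial expressions: two landmark prototypes,
# each sampled with small per-capture noise.
base = rng.normal(0, 1, (8, 3))              # "neutral" landmark set
smile = base + rng.normal(0, 0.5, (8, 3))    # deformed landmark set
def sample(proto):
    return distance_features(proto + rng.normal(0, 0.05, proto.shape))

X = np.array([sample(base) for _ in range(40)] + [sample(smile) for _ in range(40)])
y = np.array([0] * 40 + [1] * 40)
Xs = (X - X.mean(0)) / X.std(0)              # standardize features
params = train_mlp(Xs, y)
acc = (predict(params, Xs) == y).mean()
```

Distance features are a natural fit here because, as the article notes, it is the expression-dependent distance information in the 3D reconstruction that carries the discriminative signal.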