AI Light-Field Camera Reads 3D Facial Expressions
Machine-learned light-field camera reads facial expressions from high-contrast, illumination-invariant 3D facial images < Image: Facial expression reading based on MLP classification from 3D depth maps and 2D images obtained by NIR-LFC > A joint research team led by Professors Ki-Hun Jeong and Doheon Lee from the KAIST Department of Bio and Brain Engineering reported the development of a technique for facial expression detection by merging near-infrared light-field camera techniques with artificial intelligence (AI) technology. Unlike a conventional camera, a light-field camera contains micro-lens arrays in front of the image sensor, which makes the camera small enough to fit into a smartphone while allowing it to acquire the spatial and directional information of light with a single shot. The technique has received attention because it can reconstruct images in a variety of ways, including multi-views, refocusing, and 3D image acquisition, giving rise to many potential applications. However, optical crosstalk between the micro-lenses and the shadows cast by external light sources in the environment has kept existing light-field cameras from providing accurate image contrast and 3D reconstruction. The joint research team applied a vertical-cavity surface-emitting laser (VCSEL) in the near-IR range to stabilize the accuracy of 3D image reconstruction, which previously depended on environmental light. When an external light source was shone on a face at 0-, 30-, and 60-degree angles, the light-field camera reduced image reconstruction errors by 54%. Additionally, by inserting a light-absorbing layer for visible and near-IR wavelengths between the micro-lens arrays, the team minimized optical crosstalk while increasing the image contrast by 2.1 times. 
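The MLP classification mentioned in the image caption can be sketched in miniature as follows. This is an illustrative stand-in, not the team's actual pipeline: the "depth features", class labels, network size, and training settings below are all synthetic assumptions.

```python
import numpy as np

# Minimal MLP-classifier sketch. The feature vectors stand in for depth-map
# descriptors; three well-separated synthetic clusters play the role of three
# facial-expression classes.
rng = np.random.default_rng(0)

n_per_class, n_features, n_classes = 100, 20, 3
centers = rng.normal(0, 5, size=(n_classes, n_features))
X = np.vstack([c + rng.normal(0, 1, size=(n_per_class, n_features)) for c in centers])
y = np.repeat(np.arange(n_classes), n_per_class)

# One ReLU hidden layer, softmax output, full-batch gradient descent.
W1 = rng.normal(0, 0.1, (n_features, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, n_classes)); b2 = np.zeros(n_classes)

def forward(X):
    h = np.maximum(X @ W1 + b1, 0.0)                 # hidden activations
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, p / p.sum(axis=1, keepdims=True)       # softmax probabilities

onehot = np.eye(n_classes)[y]
lr = 0.3
for _ in range(500):
    h, p = forward(X)
    g = (p - onehot) / len(X)                        # softmax cross-entropy gradient
    gh = (g @ W2.T) * (h > 0)                        # backprop through ReLU
    W2 -= lr * h.T @ g;  b2 -= lr * g.sum(0)
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)

accuracy = (forward(X)[1].argmax(axis=1) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

On such cleanly separated synthetic clusters the classifier converges quickly; the article's reported 85% accuracy, by contrast, reflects real 3D facial data.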
Through this technique, the team overcame the limitations of existing light-field cameras and developed an NIR-based light-field camera (NIR-LFC) optimized for the 3D image reconstruction of facial expressions. Using the NIR-LFC, the team acquired high-quality 3D reconstructions of facial expressions conveying various emotions, regardless of the lighting conditions of the surrounding environment. The facial expressions in the acquired 3D images were distinguished through machine learning with an average accuracy of 85% – a statistically significant improvement over when 2D images were used. Furthermore, by calculating the interdependency of the distance information that varies with facial expression in 3D images, the team could identify the information a light-field camera utilizes to distinguish human expressions. Professor Ki-Hun Jeong said, “The sub-miniature light-field camera developed by the research team has the potential to become the new platform to quantitatively analyze the facial expressions and emotions of humans.” To highlight the significance of this research, he added, “It could be applied in various fields including mobile healthcare, field diagnosis, social cognition, and human-machine interactions.” This research was published in Advanced Intelligent Systems online on December 16, under the title “Machine-Learned Light-field Camera that Reads Facial Expression from High-Contrast and Illumination Invariant 3D Facial Images.” This research was funded by the Ministry of Science and ICT and the Ministry of Trade, Industry and Energy. -Publication “Machine-learned light-field camera that reads facial expression from high-contrast and illumination invariant 3D facial images,” Sang-In Bae, Sangyeon Lee, Jae-Myeong Kwon, Hyun-Kyung Kim, 
Kyung-Won Jang, Doheon Lee, Ki-Hun Jeong, Advanced Intelligent Systems, December 16, 2021 (doi.org/10.1002/aisy.202100182) -Profile Professor Ki-Hun Jeong Biophotonic Laboratory Department of Bio and Brain Engineering KAIST Professor Doheon Lee Department of Bio and Brain Engineering KAIST
Face Detection in Untrained Deep Neural Networks
A KAIST team shows that primitive visual selectivity for faces can arise spontaneously in completely untrained deep neural networks Researchers have found that higher visual cognitive functions can arise spontaneously in untrained neural networks. A KAIST research team led by Professor Se-Bum Paik from the Department of Bio and Brain Engineering has shown that visual selectivity for facial images can arise even in completely untrained deep neural networks. This new finding has provided revelatory insights into the mechanisms underlying the development of cognitive functions in both biological and artificial neural networks, and it makes a significant impact on our understanding of the origin of early brain functions that exist before sensory experience. The study, published in Nature Communications on December 16, demonstrates that neuronal activity selective to facial images is observed in randomly initialized deep neural networks in the complete absence of learning, and that it shows the characteristics of the activity observed in biological brains. The ability to identify and recognize faces is a crucial function for social behavior, and this ability is thought to originate from neuronal tuning at the single- or multi-neuronal level. Neurons that selectively respond to faces are observed in young animals of various species, raising intense debate over whether face-selective neurons can arise innately in the brain or whether they require visual experience. Using a model neural network that captures properties of the ventral stream of the visual cortex, the research team found that face-selectivity can emerge spontaneously from random feedforward wiring in untrained deep neural networks. The team showed that the characteristics of this innate face-selectivity are comparable to those observed in face-selective neurons in the brain, and that this spontaneous neuronal tuning for faces enables the network to perform face detection tasks. 
These results imply a possible scenario in which the random feedforward connections that develop in early, untrained networks may be sufficient for initializing primitive visual cognitive functions. Professor Paik said, “Our findings suggest that innate cognitive functions can emerge spontaneously from the statistical complexity embedded in the hierarchical feedforward projection circuitry, even in the complete absence of learning.” He continued, “Our results provide a broad conceptual advance as well as advanced insight into the mechanisms underlying the development of innate functions in both biological and artificial neural networks, which may unravel the mystery of the generation and evolution of intelligence.” This work was supported by the National Research Foundation of Korea (NRF) and by the KAIST singularity research project. -Publication Seungdae Baek, Min Song, Jaeson Jang, Gwangsu Kim, and Se-Bum Paik, “Face detection in untrained deep neural networks,” Nature Communications 12, 7328, December 16, 2021 (https://doi.org/10.1038/s41467-021-27606-9) -Profile Professor Se-Bum Paik Visual System and Neural Network Laboratory Program of Brain and Cognitive Engineering Department of Bio and Brain Engineering College of Engineering KAIST
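The kind of analysis described above can be sketched with a few lines of code: build a randomly wired feedforward network, present two stimulus classes, and measure per-unit selectivity. This is only an illustration of the measurement, under stated assumptions; the stimuli below are synthetic vectors, not face images, and the network is far smaller than the cortical model in the paper.

```python
import numpy as np

# Measure unit selectivity in a completely untrained (randomly initialized)
# feedforward ReLU network by comparing responses to two stimulus classes.
rng = np.random.default_rng(1)

def random_network(sizes):
    """Random weight matrices -- no training whatsoever."""
    return [rng.normal(0, 1 / np.sqrt(n_in), (n_in, n_out))
            for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def responses(layers, stimuli):
    a = stimuli
    for W in layers:
        a = np.maximum(a @ W, 0.0)  # ReLU at each layer
    return a

layers = random_network([256, 128, 64])

# Two stimulus classes: a fixed spatial structure plus noise (standing in for
# "face-like" inputs) versus pure noise.
template = rng.normal(0, 1, 256)
class_a = template + rng.normal(0, 1, (200, 256))
class_b = rng.normal(0, 1, (200, 256))

ra, rb = responses(layers, class_a), responses(layers, class_b)

# Selectivity index (d-prime) for each output unit.
d_prime = (ra.mean(0) - rb.mean(0)) / np.sqrt((ra.var(0) + rb.var(0)) / 2 + 1e-9)
selective = int((np.abs(d_prime) > 1.0).sum())
print(f"{selective} of {len(d_prime)} untrained units have |d'| > 1")
```

Even without any training, the random projections preserve the structured difference between the two classes, so some units respond selectively, which is the statistical intuition behind the paper's finding.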
Team KAIST to Race at CES 2022 Autonomous Challenge
Five top university autonomous racing teams will compete in a head-to-head passing competition in Las Vegas < Team KAIST, the self-driving team from the Unmanned System Research Group (USRG) advised by Professor Hyunchul Shim (second from right) > A self-driving racing team from the KAIST Unmanned System Research Group (USRG) advised by Professor Hyunchul Shim will compete in the Autonomous Challenge at the Consumer Electronics Show (CES) on January 7, 2022. The head-to-head, high-speed autonomous racecar passing competition at the Las Vegas Motor Speedway will feature the finalists and semifinalists from the Indy Autonomous Challenge held in October of this year. Team KAIST qualified as a semifinalist at the Indy Autonomous Challenge and will join four other university teams, including the winner of that competition, Technische Universität München. Team KAIST’s AV-21 vehicle, which is capable of driving on its own at more than 200 km/h, is expected to reach speeds of more than 300 km/h at the race. The participating teams are:
1. KAIST
2. EuroRacing: University of Modena and Reggio Emilia (Italy), University of Pisa (Italy), ETH Zürich (Switzerland), Polish Academy of Sciences (Poland)
3. MIT-PITT-RW: Massachusetts Institute of Technology, University of Pittsburgh, Rochester Institute of Technology, University of Waterloo (Canada)
4. PoliMOVE: Politecnico di Milano (Italy), University of Alabama
5. TUM Autonomous Motorsport: Technische Universität München (Germany)
Professor Shim’s team is dedicated to the development and validation of cutting-edge technologies for highly autonomous vehicles. In recognition of his pioneering research in unmanned system technologies, Professor Shim was honored with the Grand Prize of the Minister of Science and ICT on December 9. “We began autonomous vehicle research in 2009 when we signed up for Hyundai Motor Company’s Autonomous Driving Challenge. 
For this, we developed a complete set of in-house technologies such as low-level vehicle control, perception, localization, and decision making.” In 2019, the team came in third place in the Challenge, and they finally won this year. For years, his team has participated in many unmanned systems challenges at home and abroad, gaining recognition around the world. The team won the inaugural IROS Autonomous Drone Racing in 2016 and placed second in the 2018 IROS Autonomous Drone Racing Competition. They also competed in the 2017 MBZIRC, ranking fourth in Missions 2 and 3, and fifth in the Grand Challenge. Most recently, the team won the first round of Lockheed Martin’s Alpha Pilot AI Drone Innovation Challenge. The team is now participating in the DARPA Subterranean Challenge as a member of Team CoSTAR with NASA JPL, MIT, and Caltech. “We have accumulated plenty of first-hand experience developing autonomous vehicles with the support of domestic companies such as Hyundai Motor Company, Samsung, LG, and NAVER. In 2017, the autonomous vehicle platform “EureCar” that we developed in-house was authorized by the Korean government to lawfully conduct autonomous driving experiments on public roads,” said Professor Shim. The team has developed various key technologies and algorithms related to unmanned systems that can be categorized into three major components: perception, planning, and control. Considering the characteristics of the algorithms that make up each module, their technology operates on a distributed computing system. Since 2015, the team has been actively using deep learning algorithms in its perception subsystem. Contextual information extracted from multi-modal sensory data gathered via cameras, lidar, radar, GPS, IMU, etc. is forwarded to the planning subsystem. 
The planning module is responsible for the decision making and planning required for autonomous driving, such as lane-change determination, trajectory planning, emergency stops, and velocity command generation. The results from the planner are fed into the controller, which follows the planned high-level commands. The team has also developed and verified the feasibility of an end-to-end deep learning-based autonomous driving approach that replaces the complex modular system with a single AI network.
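The perception, planning, and control flow described above can be sketched schematically. All the classes, thresholds, and commands below are simplified illustrations and not Team KAIST's actual racing stack:

```python
# Schematic sketch of a modular perception -> planning -> control pipeline.

class Perception:
    """Fuses raw sensor readings into contextual information."""
    def process(self, sensor_data):
        # Here: just report the closest obstacle ahead and the current speed.
        return {"obstacle_dist_m": min(sensor_data["lidar_ranges_m"]),
                "speed_mps": sensor_data["speed_mps"]}

class Planner:
    """Decision making: emergency stops and velocity command generation."""
    SAFE_DIST_M = 30.0
    def plan(self, context):
        if context["obstacle_dist_m"] < self.SAFE_DIST_M:
            return {"maneuver": "emergency_stop", "target_speed_mps": 0.0}
        return {"maneuver": "keep_lane", "target_speed_mps": 80.0}

class Controller:
    """Tracks the planner's high-level command with a proportional law."""
    GAIN = 0.05
    def control(self, context, command):
        err = command["target_speed_mps"] - context["speed_mps"]
        # Clamp throttle to [-1, 1]; negative values mean braking.
        return {"throttle": max(min(self.GAIN * err, 1.0), -1.0)}

sensors = {"lidar_ranges_m": [120.0, 85.0, 20.0], "speed_mps": 55.0}
context = Perception().process(sensors)
command = Planner().plan(context)
actuation = Controller().control(context, command)
print(command["maneuver"], actuation)
```

Because each stage only consumes the previous stage's output, the modules can run as separate processes on a distributed computing system, which mirrors the architecture choice mentioned in the article.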
Connecting the Dots to Find New Treatments for Breast Cancer
Systems biologists uncovered new ways of reprogramming cancer cells to treat drug-resistant cancers < Professor Kwang-Hyun Cho and colleagues have developed a mathematical model and identified optimal targets for reprogramming basal-like cancer cells into hormone therapy-responsive luminal-A cells by deciphering the complex molecular interactions within these cells through a systems biological approach. > Scientists at KAIST believe they may have found a way to reverse an aggressive, treatment-resistant type of breast cancer into a less dangerous kind that responds well to treatment. The study involved the use of mathematical models to untangle the complex genetic and molecular interactions that occur in the two types of breast cancer, but it could be extended to find treatments for many others. The study’s findings were published in the journal Cancer Research. Basal-like tumours are the most aggressive type of breast cancer, with the worst prognosis. Chemotherapy is the only available treatment option, but patients experience high recurrence rates. On the other hand, luminal-A breast cancer responds well to drugs that specifically target a receptor on its cell surfaces called estrogen receptor alpha (ERα). KAIST systems biologist Kwang-Hyun Cho and colleagues analyzed the complex molecular and genetic interactions of basal-like and luminal-A breast cancers to find out if there might be a way to switch the former to the latter and give patients a better chance of responding to treatment. To do this, they accessed large amounts of cancer and patient data to understand which genes and molecules are involved in the two types. They then input this data into a mathematical model that represents genes, proteins, and molecules as dots and the interactions between them as lines. The model can be used to conduct simulations and see how the interactions change when certain genes are turned on or off. 
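The dots-and-lines model described above is, in essence, a network whose simulations flip genes on or off and observe the stable state (attractor) the system settles into. A minimal Boolean-network sketch of that idea follows; the three-node network, its update rules, and the knockout are entirely hypothetical and far simpler than the paper's actual breast cancer model:

```python
from itertools import count

# Toy Boolean network: nodes are ON/OFF genes, edges are regulatory rules.
# Knocking out a node can move the network to a different attractor.

def step(state, knockouts=frozenset()):
    a, b, c = state            # three hypothetical regulators
    nxt = (b or c,             # A is activated by B or C
           a,                  # B copies A
           not a)              # C is repressed by A
    # Knocked-out genes are forced OFF regardless of their inputs.
    return tuple(False if i in knockouts else v for i, v in enumerate(nxt))

def attractor(state, knockouts=frozenset(), max_steps=50):
    """Iterate synchronous updates until the state repeats."""
    seen = set()
    for t in count():
        if state in seen or t >= max_steps:
            return state
        seen.add(state)
        state = step(state, knockouts)

baseline = attractor((True, False, False))
knocked = attractor((True, False, False), knockouts={0})  # knock out gene A
print("baseline attractor:", baseline)
print("after knockout:   ", knocked)
```

In the paper's setting, the analogous simulation asks which knockouts (such as silencing BCL11A and HDAC1/2) move the network from a basal-like stable state toward a luminal-A-like one.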
“There have been a tremendous number of studies trying to find therapeutic targets for treating basal-like breast cancer patients,” says Cho. “But clinical trials have failed due to the complex and dynamic nature of cancer. To overcome this issue, we looked at breast cancer cells as a complex network system and implemented a systems biological approach to unravel the underlying mechanisms that would allow us to reprogram basal-like into luminal-A breast cancer cells.” Using this approach, followed by experimental validation on real breast cancer cells, the team found that turning off two key gene regulators, called BCL11A and HDAC1/2, switched a basal-like cancer signalling pathway into a different one used by luminal-A cancer cells. The switch reprograms the cancer cells and makes them more responsive to drugs that target ERα receptors. However, further tests will be needed to confirm that this also works in animal models and, eventually, humans. “Our study demonstrates that the systems biological approach can be useful for identifying novel therapeutic targets,” says Cho. The researchers are now expanding their breast cancer network model to include all breast cancer subtypes. Their ultimate aim is to identify more drug targets and to understand the mechanisms that could drive drug-resistant cells to turn into drug-sensitive ones. This work was supported by the National Research Foundation of Korea, the Ministry of Science and ICT, the Electronics and Telecommunications Research Institute, and the KAIST Grand Challenge 30 Project. -Publication Sea R. Choi, Chae Young Hwang, Jonghoon Lee, and Kwang-Hyun Cho, “Network Analysis Identifies Regulators of Basal-like Breast Cancer Reprogramming and Endocrine Therapy Vulnerability,” Cancer Research, November 30. (doi:10.1158/0008-5472.CAN-21-0621) -Profile Professor Kwang-Hyun Cho Laboratory for Systems Biology and Bio-Inspired Engineering Department of Bio and Brain Engineering KAIST
A Team of Three PhD Candidates Wins the Korea Semiconductor Design Contest
“We felt a sense of responsibility to help the nation advance its semiconductor design technology” < PhD candidates at the School of Electrical Engineering Yoon-Seo Cho, Sun-Eui Park, and Ju-Eun Bang (from left) > A CMOS (complementary metal-oxide semiconductor)-based “ultra-low noise signal chip” for 6G communications designed by three PhD candidates at the KAIST School of Electrical Engineering won the Presidential Award at the 22nd Korea Semiconductor Design Contest. The winners are PhD candidates Sun-Eui Park, Yoon-Seo Cho, and Ju-Eun Bang from the Integrated Circuits and System Lab run by Professor Jaehyouk Choi. The contest, which is hosted by the Ministry of Trade, Industry and Energy and the Korea Semiconductor Industry Association, is one of the top national semiconductor design contests for college students. Park said the team felt a sense of responsibility to help advance semiconductor design technology in Korea when deciding to participate in the contest. The team expressed deep gratitude to Professor Choi for guiding their research on 6G communications. “Our colleagues from other labs and seniors who already graduated helped us a great deal, so we owe them a lot,” explained Park. Cho added that their hard work finally got recognized, and that acknowledgement pushes her to move forward with her research. Meanwhile, Bang said she is delighted to see that many people seem to be interested in her research topic. 6G research aims to reach 1 terabit per second (Tbps), 50 times faster than 5G communications, whose transmission speeds reach up to 20 gigabits per second. In general, the wider the communication frequency band, the higher the data transmission speed. Thus, the use of frequency bands above 100 gigahertz is essential for delivering high data transmission speeds for 6G communications. However, it remains a big challenge to generate a precise reference signal that can be used as a carrier wave in such a high frequency band. 
Despite the advantages of its ultra-small, low-power design, CMOS is limited in its operating frequency at high bands, which has made it difficult to achieve a frequency band above 100 gigahertz. To overcome these challenges, the three students introduced an ultra-low noise signal generation technology that can support high-order modulation schemes. This technology is expected to contribute to increasing the price competitiveness and density of the 6G communication chips that will be used in the future. 5G only got started in 2020 and still has a long way to go to reach full commercialization. Nevertheless, many researchers have already started preparing for 6G technology, targeting 2030, since a new generation of cellular communication appears about every decade. Professor Choi said, “Generating ultra-high frequency signals in bands above 100 GHz with highly accurate timing is one of the key technologies for implementing 6G communication hardware. Our research is significant for the development of the world’s first semiconductor chip that will use the CMOS process to achieve noise performance of less than 80 femtoseconds (fs) in a frequency band above 100 GHz.” The team members plan to work as circuit designers in Korean semiconductor companies after graduation. “We will continue to research the development of signal generators for 6G, the topic of our award-winning work. We would also like to continue our research on high-speed circuit designs such as ultra-fast analog-to-digital converters,” Park added.
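The bandwidth-speed relationship mentioned above can be illustrated with the Shannon capacity formula, C = B log2(1 + SNR). The bandwidth and signal-to-noise values below are hypothetical round numbers, not measurements of the chip; the point is only that a tenfold wider band gives a tenfold higher ceiling at the same SNR, which is why carriers above 100 GHz matter for 6G:

```python
import math

# Shannon capacity: the maximum error-free data rate of a channel.
def shannon_capacity_bps(bandwidth_hz, snr_linear):
    return bandwidth_hz * math.log2(1 + snr_linear)

snr = 10 ** (20 / 10)                       # assume a 20 dB signal-to-noise ratio
narrow = shannon_capacity_bps(1e9, snr)     # 1 GHz of spectrum
wide = shannon_capacity_bps(10e9, snr)      # 10 GHz, feasible only at very high carriers

print(f"1 GHz band:  {narrow / 1e9:.1f} Gbps")
print(f"10 GHz band: {wide / 1e9:.1f} Gbps")
```

Reaching the 1 Tbps target additionally requires higher SNR and high-order modulation, which is where the low-noise signal generation the students developed comes in.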
Professor Sung-Ju Lee’s Team Wins the Best Paper and Methods Recognition Awards at ACM CSCW 2021
< Professor Sung-Ju Lee, Professor Eun Kyoung Choe, Hyunsung Cho, Daeun Choi, Donghwi Kim, Wan Ju Kang (from left) > A research team led by Professor Sung-Ju Lee at the School of Electrical Engineering won the Best Paper Award and the Methods Recognition Award from ACM CSCW (International Conference on Computer-Supported Cooperative Work and Social Computing) 2021 for their paper “Reflect, not Regret: Understanding Regretful Smartphone Use with App Feature-Level Analysis”. Founded in 1986, CSCW has been a premier conference on HCI (Human-Computer Interaction) and social computing. This year, 340 full papers were presented, and the Best Paper Awards were given to the top 1% of submitted papers. The Methods Recognition award, which is new, is given “for strong examples of work that includes well developed, explained, or implemented methods, and methodological innovation.” Hyunsung Cho (KAIST alumnus and currently a PhD candidate at Carnegie Mellon University), Daeun Choi (KAIST undergraduate researcher), Donghwi Kim (KAIST PhD candidate), Wan Ju Kang (KAIST PhD candidate), and Professor Eun Kyoung Choe (University of Maryland and KAIST alumna) collaborated on this research. The authors developed a tool that tracks and analyzes which features of a mobile app (e.g., Instagram’s following post, following story, recommended post, post upload, direct messaging, etc.) are in use based on a smartphone’s User Interface (UI) layout. Utilizing this novel method, the authors revealed which feature usage patterns result in regretful smartphone use. Professor Lee said, “Although many people enjoy the benefits of smartphones, issues have emerged from the overuse of smartphones. With this feature-level analysis, users can reflect on their smartphone usage based on finer-grained analysis, and this could contribute to digital wellbeing.” < Research achievements diagram: application feature-level usage analysis / UI layout analysis >
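The core idea of feature-level analysis, inferring which app feature is on screen from the UI layout and then aggregating time per feature, can be sketched as follows. The element IDs, feature names, and event log below are hypothetical examples, not the paper's actual instrumentation:

```python
from collections import defaultdict

# Map characteristic UI-element IDs to app features (hypothetical signatures).
FEATURE_SIGNATURES = {
    "feed_recycler": "following_post",
    "story_tray": "following_story",
    "explore_grid": "recommended_post",
    "dm_thread_list": "direct_messaging",
}

def classify_screen(ui_element_ids):
    """Return the first feature whose signature element appears in the layout."""
    for element_id, feature in FEATURE_SIGNATURES.items():
        if element_id in ui_element_ids:
            return feature
    return "unknown"

# Sampled UI hierarchy snapshots: (timestamp in seconds, visible element IDs).
event_log = [
    (0,   {"feed_recycler", "nav_bar"}),
    (40,  {"explore_grid", "nav_bar"}),
    (100, {"dm_thread_list"}),
    (130, set()),  # app closed
]

# Attribute the interval between consecutive snapshots to the earlier screen.
usage_s = defaultdict(float)
for (t0, ids), (t1, _) in zip(event_log, event_log[1:]):
    usage_s[classify_screen(ids)] += t1 - t0

print(dict(usage_s))
```

Aggregates like these, per feature rather than per app, are what allow usage patterns (and regret) to be analyzed at a finer grain than app-level screen time.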
A Study Shows Reactive Electrolyte Additives Improve Lithium Metal Batteries
Stable electrode-electrolyte interfaces constructed by fluorine- and nitrogen-donating ionic additives provide an opportunity to improve high-performance lithium metal batteries < A combination of lithium difluoro (bisoxalato) phosphate as an F donor and lithium nitrate as an N donor with different electron accepting abilities and adsorption tendencies improves the cycle performance of Li｜NCM811 full cells through the creation of a dual-layer SEI on a Li metal anode and a protective CEI on a Ni-rich cathode. > A research team showed that electrolyte additives increase the lifetime of lithium metal batteries and remarkably improve the performance of fast charging and discharging. Professor Nam-Soon Choi’s team from the Department of Chemical and Biomolecular Engineering at KAIST hierarchized the solid electrolyte interphase to make a dual-layer structure and showed groundbreaking run times for lithium metal batteries. The team applied two electrolyte additives that have different reduction and adsorption properties to improve the functionality of the dual-layer solid electrolyte interphase. In addition, the team has confirmed that the structural stability of the nickel-rich cathode was achieved through the formation of a thin protective layer on the cathode. This study was reported in Energy Storage Materials. Securing high-energy-density lithium metal batteries with a long lifespan and fast charging performance is vital for realizing their ubiquitous use as superior power sources for electric vehicles. Lithium metal batteries comprise a lithium metal anode that delivers 10 times higher capacity than the graphite anodes in lithium-ion batteries. Therefore, lithium metal is an indispensable anode material for realizing high-energy rechargeable batteries. However, undesirable reactions among the electrolytes with lithium metal anodes can reduce the power and this remains an impediment to achieving a longer battery lifespan. 
Previous studies only focused on the formation of the solid electrolyte interphase on the surface of the lithium metal anode. The team designed a way to create a dual-layer solid electrolyte interphase to resolve the instability of the lithium metal anode by using electrolyte additives selected for their electron-accepting abilities and adsorption tendencies. This hierarchical structure of the solid electrolyte interphase on the lithium metal anode has the potential to be further applied to lithium-alloy anodes, lithium storage structures, and anode-free technology to meet market expectations for electrolyte technology. Batteries with lithium metal anodes and nickel-rich cathodes retained 80.9% of their initial capacity after 600 cycles and achieved a high Coulombic efficiency of 99.94%. These remarkable results contributed to the development of protective dual-layer solid electrolyte interphase technology for lithium metal anodes. Professor Choi said that the research suggests a new direction for the development of electrolyte additives to regulate the unstable lithium metal anode-electrolyte interface, the biggest hurdle in research on lithium metal batteries. She added that anode-free secondary battery technology is expected to be a game changer in the secondary battery market, and that electrolyte additive technology will contribute to the enhancement of anode-free secondary batteries through the stabilization of lithium metal anodes. This research was funded by the Technology Development Program to Solve Climate Change of the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning, the Technology Innovation Program funded by the Ministry of Trade, Industry & Energy, and Hyundai Motor Company. 
- Publication Saehun Kim, Sung O Park, Min-Young Lee, Jeong-A Lee, Imanuel Kristanto, Tae Kyung Lee, Daeyeon Hwang, Juyoung Kim, Tae-Ung Wi, Hyun-Wook Lee, Sang Kyu Kwak, and Nam Soon Choi, “Stable electrode-electrolyte interfaces constructed by fluorine- and nitrogen-donating ionic additives for high-performance lithium metal batteries,” Energy Storage Materials, 45, 1-13 (2022), (doi: https://doi.org/10.1016/j.ensm.2021.10.031) - Profile Professor Nam-Soon Choi Energy Materials Laboratory Department of Chemical and Biomolecular Engineering KAIST
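As a back-of-the-envelope check on the cycling figures reported above, retaining 80.9% of initial capacity after 600 cycles corresponds to an average per-cycle capacity retention of 0.809^(1/600), i.e., a loss of only a few hundredths of a percent per cycle (note this capacity retention is a distinct metric from the 99.94% Coulombic efficiency, which measures charge in versus charge out per cycle):

```python
# Average per-cycle capacity retention implied by 80.9% capacity after 600 cycles.
per_cycle_retention = 0.809 ** (1 / 600)
print(f"average per-cycle capacity retention: {per_cycle_retention:.5%}")
```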
3 PhD Candidates Selected as the 2021 Google Fellows
< The 2021 Google PhD Fellows Soo Ye Kim, Sanghyun Woo, and Hae Beom Lee (from left) > PhD candidates Soo Ye Kim and Sanghyun Woo from the KAIST School of Electrical Engineering and Hae Beom Lee from the Kim Jaechul Graduate School of AI were selected as 2021 Google PhD Fellows. The Google PhD Fellowship is a scholarship program that supports graduate students from around the world who have produced excellent achievements in promising computer science-related fields. The 75 selected fellows will receive ten thousand dollars in funding, along with the opportunity to discuss research and receive one-on-one feedback from experts in related fields at Google. Soo Ye Kim was recognized for her research outcomes in image and video quality enhancement. In particular, she was the first to suggest a deep learning-based method to jointly perform super-resolution and HDR restoration of videos, and to handle super-resolution and frame interpolation at the same time. Her related research outcomes were presented at top international conferences in the fields of computer vision and AI, including CVPR, ICCV, and AAAI. She is also collaborating with Google and Adobe research teams through research internships to investigate various high-quality video conversion methods. Sanghyun Woo was recognized for his research in the fields of visual perception and reasoning. He suggested an effective deep learning model design based on the human attention mechanism, as well as efficient learning methodologies utilizing self-learning and simulators, which received a lot of attention. His various research outcomes related to models and learning methodologies were presented at top international conferences in the fields of computer vision and AI, including CVPR, ECCV, and NeurIPS. In particular, his paper presented at ECCV in 2018, “Convolutional Block Attention Module (CBAM)”, is being used in various computer vision applications and has exceeded 2,700 citations on Google Scholar. 
Woo was also selected as a Microsoft Research Asia PhD Fellow in 2020. Hae Beom Lee’s research achievements include effectively overcoming various limitations of the existing meta-learning framework. Specifically, he proposed ways to deal with realistic task distributions with imbalances, improved the practicality of meta-knowledge, and made meta-learning possible even in large-scale task scenarios. These studies have been accepted to numerous top-tier machine learning conferences such as NeurIPS, ICML, and ICLR. In particular, one of his papers was selected for an oral presentation at ICLR 2020 and another for a spotlight presentation at NeurIPS 2020. Due to the COVID-19 pandemic, the award ceremony was held via the online Google PhD Fellowship Summit, and the list of recipients can be found on the Google website.
Deep Learning Framework to Enable Material Design Space Exploration
Researchers propose a deep neural network-based forward design space exploration using active transfer learning and data augmentation < Figure 1: Schematic of the deep learning framework for material design space exploration. Schematic of the gradual expansion of the reliable prediction domain of the DNN based on the addition of data generated from the hyper-heuristic genetic algorithm and active transfer learning. > A new study proposed a deep neural network-based forward design approach that enables an efficient search for superior materials far beyond the domain of the initial training set. This approach compensates for the weak predictive power of neural networks on unseen domains through gradual updates of the neural network with active transfer learning and data augmentation methods. Professor Seunghwa Ryu believes that this study will help address a variety of optimization problems that have an astronomical number of possible design configurations. For the grid composite optimization problem, the proposed framework was able to provide excellent designs close to the global optima, even with the addition of a very small dataset corresponding to less than 0.5% of the initial training dataset size. This study was reported in npj Computational Materials last month. “We wanted to mitigate the limitation of neural networks, their weak predictive power beyond the training set domain, for material or structure design,” said Professor Ryu from the Department of Mechanical Engineering. Neural network-based generative models have been actively investigated as an inverse design method for finding novel materials in a vast design space. However, the applicability of conventional generative models is limited because they cannot access data outside the range of the training sets. Advanced generative models that were devised to overcome this limitation also suffer from weak predictive power on the unseen domain. 
Professor Ryu’s team, in collaboration with researchers from Professor Grace Gu’s group at UC Berkeley, devised a design method that simultaneously expands the domain using the strong predictive power of a deep neural network and searches for the optimal design by repetitively performing three key steps. First, it searches for a few candidates with improved properties located close to the training set via genetic algorithms, by mixing superior designs within the training set. Then, it checks whether the candidates really have improved properties, and expands the training set by duplicating the validated designs via a data augmentation method. Finally, it expands the reliable prediction domain by updating the neural network with the new superior designs via transfer learning. Because the expansion proceeds along relatively narrow but correct routes toward the optimal design (depicted in the schematic of Fig. 1), the framework enables an efficient search. As a data-hungry method, a deep neural network model tends to have reliable predictive power only within and near the domain of the training set. When the optimal configuration of materials and structures lies far beyond the initial training set, which frequently is the case, neural network-based design methods suffer from weak predictive power and become inefficient. The researchers expect that the framework will be applicable to a wide range of optimization problems in other science and engineering disciplines with astronomically large design spaces, because it provides an efficient way of gradually expanding the reliable prediction domain toward the target design while avoiding the risk of getting stuck in local minima. In particular, because it requires relatively little new data, design problems in which data generation is time-consuming and expensive will benefit most from this new framework. 
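The three-step loop described above can be sketched on a toy problem. This is a schematic illustration under stated assumptions, not the paper's implementation: the objective function, the ridge-regression surrogate standing in for the neural network, and all settings are hypothetical, and the initial training set is deliberately placed far from the optimum so the loop must extrapolate outward.

```python
import numpy as np

rng = np.random.default_rng(2)

def true_property(x):                        # "expensive" ground-truth evaluator
    return -np.sum((x - 3.0) ** 2, axis=-1)  # optimum at (3, 3)

def features(X):                             # quadratic features for the surrogate
    return np.hstack([np.ones((len(X), 1)), X, X ** 2])

def fit_surrogate(X, y):                     # ridge regression as a model stand-in
    F = features(X)
    return np.linalg.solve(F.T @ F + 1e-6 * np.eye(F.shape[1]), F.T @ y)

# Initial training set lies far from the optimum (designs near the origin).
X = rng.uniform(-1.0, 1.0, (30, 2))
y = true_property(X)

for _ in range(15):
    w = fit_surrogate(X, y)                  # step 3: update the surrogate model
    top = X[np.argsort(y)[-5:]]              # best designs found so far
    # Step 1: generate candidates by mixing pairs of top designs plus mutation.
    parents = top[rng.integers(0, 5, (40, 2))]
    candidates = parents.mean(axis=1) + rng.normal(0, 0.3, (40, 2))
    predicted = features(candidates) @ w
    chosen = candidates[np.argsort(predicted)[-3:]]
    # Step 2: validate the promising candidates and augment the training set.
    X = np.vstack([X, chosen])
    y = np.concatenate([y, true_property(chosen)])

print("best design found:", X[np.argmax(y)].round(2))
```

Because only the few most promising candidates are validated each round, the dataset grows along a narrow corridor toward the optimum rather than by blanketing the whole design space, which is the efficiency argument the article makes.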
The research team is currently applying the optimization framework to the design of metamaterial structures, segmented thermoelectric generators, and optimal sensor distributions. “From these sets of on-going studies, we expect to better recognize the pros and cons, and the potential, of the suggested algorithm. Ultimately, we want to devise more efficient machine learning-based design approaches,” explained Professor Ryu. This study was funded by the National Research Foundation of Korea and the KAIST Global Singularity Research Project. -Publication Yongtae Kim, Youngsoo, Charles Yang, Kundo Park, Grace X. Gu, and Seunghwa Ryu, “Deep learning framework for material design space exploration using active transfer learning and data augmentation,” npj Computational Materials (https://doi.org/10.1038/s41524-021-00609-2) -Profile Professor Seunghwa Ryu Mechanics & Materials Modeling Lab Department of Mechanical Engineering KAIST
A Mechanism Underlying Most Common Cause of Epilepsy
An interdisciplinary study shows that neurons carrying somatic mutations in MTOR can lead to focal epileptogenesis via non-cell-autonomous hyperexcitability of nearby nonmutated neurons < Image 1: Neurons carrying somatic mutations in MTOR lead to focal epileptogenesis via non-cell autonomous hyperexcitability of nearby non-mutated neurons. (Left) Neurons with mTOR mutation (green) observed in a mouse brain section image. (Middle) Network model consisting of a small portion of mutated and a large portion of nearby non-mutated neurons. (Right) Mitigated hyperactivity of non-mutated neurons after the treatment of an inhibitor of adenosine kinase. > During fetal development, cells must migrate to the outer edge of the brain to form critical connections for information transfer and regulation in the body. When even a few cells fail to move to the correct location, the neurons become disorganized, and this results in focal cortical dysplasia. This condition is the most common cause of seizures that cannot be controlled with medication in children and the second most common cause in adults. Now, an interdisciplinary team studying neurogenetics, neural networks, and neurophysiology at KAIST has revealed how dysfunctions in even a small percentage of cells can cause disorder across the entire brain. They published their results on June 28 in Annals of Neurology. The work builds on a previous finding, also by KAIST scientists, that focal cortical dysplasia is caused by mutations in cells involved in mTOR, a pathway that regulates signaling between neurons in the brain. “Even though only 1 to 2% of neurons carry mutations in the mTOR signaling pathway that regulates cell signaling in the brain, these mutations have been found to induce seizures in animal models of focal cortical dysplasia,” said Professor Jong-Woo Sohn from the Department of Biological Sciences.
“The main challenge of this study was to explain how nearby non-mutated neurons become hyperexcitable.” Initially, the researchers hypothesized that the mutated cells affected the number of excitatory and inhibitory synapses in all neurons, mutated or not. These neural gates can trigger or halt activity, respectively, in other neurons. Seizures are a result of extreme activity, called hyperexcitability. If the mutated cells upended the balance and resulted in more excitatory synapses, the researchers thought, it would make sense that the cells would be more susceptible to hyperexcitability and, as a result, seizures. “Contrary to our expectations, the synaptic input balance was not changed in either the mutated or non-mutated neurons,” said Professor Jeong Ho Lee from the Graduate School of Medical Science and Engineering. “We turned our attention to a protein overproduced by mutated neurons.” That protein is adenosine kinase, which lowers the concentration of adenosine, a naturally occurring compound that acts as an anticonvulsant and works to relax vessels. In mice engineered to have focal cortical dysplasia, the researchers injected adenosine to replace the levels lowered by the enzyme. It worked, and the neurons became less excitable. “We demonstrated that augmentation of adenosine signaling could attenuate the excitability of non-mutated neurons,” said Professor Se-Bum Paik from the Department of Bio and Brain Engineering. The effect on the non-mutated neurons was the surprising part, according to Paik. “The seizure-triggering hyperexcitability originated not in the mutation-carrying neurons, but instead in the nearby non-mutated neurons,” he said. The mutated neurons expressed more adenosine kinase, reducing the adenosine levels in the local environment of all the cells. With less adenosine, the non-mutated neurons became hyperexcitable, leading to seizures.
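The direction of this mechanism can be caricatured with a few lines of arithmetic. This is a hedged toy sketch, not the team's published network model: the mutated fraction, the ADK overproduction factor, and the kinetic constants are all invented for illustration; only the qualitative relationships (more ADK lowers shared adenosine, less adenosine raises excitability, adenosine augmentation attenuates it) come from the article.

```python
# Toy sketch of non-cell-autonomous hyperexcitability: a shared adenosine
# pool is depleted by adenosine kinase (ADK) from the mutated fraction, and
# every neuron's excitability rises as adenosine falls.
mutated_fraction = 0.02        # the ~1-2% of mTOR-mutant neurons
baseline_adenosine = 1.0       # arbitrary units
adk_per_mutant = 20.0          # hypothetical ADK overproduction factor

def local_adenosine(extra=0.0):
    # Hypothetical first-order depletion by ADK from the mutant fraction.
    depletion = mutated_fraction * adk_per_mutant * 0.5
    return max(baseline_adenosine - depletion, 0.0) + extra

def excitability(adenosine):
    # Adenosine is anticonvulsant: less adenosine -> more excitable.
    return 1.0 / (0.5 + adenosine)

untreated = excitability(local_adenosine())
treated = excitability(local_adenosine(extra=0.2))  # adenosine augmentation
print(untreated > treated)  # augmentation attenuates excitability
```

The key point the sketch preserves is that the depletion term acts on a pool shared by all neurons, so even the non-mutated majority becomes more excitable.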
“While we need to further investigate the relationship between the concentration of adenosine and the increased excitation of nearby neurons, our results support the medical use of drugs that activate adenosine signaling as a possible treatment pathway for focal cortical dysplasia,” Professor Lee said. The Suh Kyungbae Foundation, the Korea Health Technology Research and Development Project, the Ministry of Health & Welfare, and the National Research Foundation of Korea funded this work. -Publication: Koh, H.Y., Jang, J., Ju, S.H., Kim, R., Cho, G.-B., Kim, D.S., Sohn, J.-W., Paik, S.-B. and Lee, J.H. (2021), ‘Non-Cell Autonomous Epileptogenesis in Focal Cortical Dysplasia’ Annals of Neurology, 90: 285-299. (https://doi.org/10.1002/ana.26149) -Profile Professor Jeong Ho Lee Translational Neurogenetics Lab https://tnl.kaist.ac.kr/ Graduate School of Medical Science and Engineering KAIST Professor Se-Bum Paik Visual System and Neural Network Laboratory http://vs.kaist.ac.kr/ Department of Bio and Brain Engineering KAIST Professor Jong-Woo Sohn Laboratory for Neurophysiology, https://sites.google.com/site/sohnlab2014/home Department of Biological Sciences KAIST Dr. Hyun Yong Koh Translational Neurogenetics Lab Graduate School of Medical Science and Engineering KAIST Dr. Jaeson Jang Ph.D. Visual System and Neural Network Laboratory Department of Bio and Brain Engineering KAIST Sang Hyeon Ju M.D. Laboratory for Neurophysiology Department of Biological Sciences KAIST
Aline and Blow-yancy Win the Red Dot Design Awards
Professor Lee sought ‘sustainability’ while developing Aline to meet the growing awareness of ESG (environmental, social, and governance) investing. ESG investing relies on independent ratings that help consumers assess a company’s behavior and policies when it comes to its social impact. Aline’s personal value index, built on six main criteria, translates a user’s values into sustainable finance. By gathering data from an initial survey and regular value updates, the index is weighted according to the user’s values. Based on the index, the investment portfolio is adjusted, and consumption that runs against those values is tracked. Blow-yancy is a VR device for neutral buoyancy training in diving. Blow-yancy’s VR mask helps divers feel like they are wearing an actual diving mask. Users can breathe through a regulator with a built-in breathing sensor, which allows training like actual diving without going into the water and therefore enables safer diving. “We got the idea from the fact that about 74% of scuba divers come into contact with corals underwater at least once, which can cause an emergency situation. Divers who cannot maintain neutral buoyancy will have a tough time avoiding them,” said Professor Lee. The hardware consists of a nose-covering VR mask, a regulator with a built-in breath sensor, and a controller for virtual BCD (buoyancy control device) control. Blow-yancy’s five virtual missions were organized according to the diving process required by PADI, a professional diving education organization. Professor Lee’s team already received eight recognitions at the iF Design Award in April. Professor Lee said, “We will continue to develop the best UX design items that will improve our global recognition.”
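As a rough illustration of how a weighted personal value index might drive such a score, the sketch below normalizes survey scores over six criteria into weights and reduces a company's per-criterion ratings to one alignment figure. The criteria names, scale, and scoring rule are all hypothetical assumptions for illustration; Aline's actual six criteria are not listed in the article.

```python
# Hypothetical criteria standing in for Aline's six (not from the article).
CRITERIA = ["environment", "labor", "governance",
            "community", "diversity", "transparency"]

def value_weights(survey_scores):
    # Normalize survey scores so the weights sum to 1.
    total = sum(survey_scores.values())
    return {c: survey_scores[c] / total for c in CRITERIA}

def alignment(weights, company_ratings):
    # Weighted average of per-criterion ESG ratings on a 0-1 scale.
    return sum(weights[c] * company_ratings[c] for c in CRITERIA)

user = value_weights({"environment": 5, "labor": 3, "governance": 4,
                      "community": 2, "diversity": 3, "transparency": 3})
firm = {c: 0.8 for c in CRITERIA}
print(round(alignment(user, firm), 2))  # 0.8 when all ratings are equal
```

Regular value updates would simply refresh the survey scores and recompute the weights, after which the portfolio could be tilted toward higher-alignment holdings.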
Brain-Inspired Highly Scalable Neuromorphic Hardware
Neurons and synapses based on a single transistor can dramatically reduce hardware cost and accelerate the commercialization of neuromorphic hardware < Single-transistor neurons and synapses fabricated using a standard silicon CMOS process. They are co-integrated on the same 8-inch wafer. > KAIST researchers fabricated brain-inspired, highly scalable neuromorphic hardware by co-integrating single-transistor neurons and synapses. Using standard silicon complementary metal-oxide-semiconductor (CMOS) technology, the neuromorphic hardware is expected to reduce chip cost and simplify fabrication procedures. The research team led by Yang-Kyu Choi and Sung-Yool Choi produced neurons and synapses based on a single transistor for highly scalable neuromorphic hardware and showed the ability to recognize text and face images. This research was featured in Science Advances on August 4. Neuromorphic hardware has attracted a great deal of attention because it can perform artificial intelligence functions while consuming ultra-low power of less than 20 watts by mimicking the human brain. To make neuromorphic hardware work, a neuron that generates a spike when integrating a certain signal and a synapse that remembers the connection between two neurons are necessary, just as in the biological brain. However, since neurons and synapses constructed on digital or analog circuits occupy a large space, there is a limit in terms of hardware efficiency and cost. Since the human brain consists of about 10¹¹ neurons and 10¹⁴ synapses, it is necessary to reduce the hardware cost in order to apply neuromorphic hardware to mobile and IoT devices. To solve the problem, the research team mimicked the behavior of biological neurons and synapses with a single transistor and co-integrated them onto an 8-inch wafer. The manufactured neuromorphic transistors have the same structure as the transistors for memory and logic that are currently mass-produced.
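The two primitives the article names, a neuron that spikes once its integrated input crosses a threshold and a synapse that stores a connection strength, can be sketched with a textbook leaky integrate-and-fire model. This is a generic software illustration of that behavior, not the transistor physics from the paper; all parameter values are arbitrary.

```python
# Minimal leaky integrate-and-fire sketch: the neuron integrates weighted
# input with a leak, fires when it crosses the threshold, and resets.
def simulate(input_current, weight, threshold=1.0, leak=0.95, steps=50):
    v, spikes = 0.0, 0
    for _ in range(steps):
        v = v * leak + weight * input_current  # integrate (with leak)
        if v >= threshold:                     # fire and reset
            spikes += 1
            v = 0.0
    return spikes

# The stored synaptic weight controls how strongly the input drives the
# neuron: a stronger weight makes the downstream neuron fire more often.
weak, strong = simulate(0.1, weight=0.5), simulate(0.1, weight=2.0)
print(weak, strong)
```

In the hardware described by the paper, both roles are played by a single transistor each, rather than by the many-device digital or analog circuits this behavior is usually built from.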
In addition, the neuromorphic transistors proved for the first time that they can be implemented with a ‘Janus structure’ that functions as both neuron and synapse, just as coins have heads and tails. Professor Yang-Kyu Choi said that this work can dramatically reduce the hardware cost by replacing the neurons and synapses that were based on complex digital and analog circuits with a single transistor. "We have demonstrated that neurons and synapses can be implemented using a single transistor," said Joon-Kyu Han, the first author. "By co-integrating single-transistor neurons and synapses on the same wafer using a standard CMOS process, the hardware cost of the neuromorphic hardware has been improved, which will accelerate the commercialization of neuromorphic hardware," Han added. This research was supported by the National Research Foundation (NRF) and the IC Design Education Center (IDEC). -Publication Joon-Kyu Han, Sung-Yool Choi, Yang-Kyu Choi, et al., “Cointegration of single-transistor neurons and synapses by nanoscale CMOS fabrication for highly scalable neuromorphic hardware,” Science Advances (DOI: 10.1126/sciadv.abg8836) -Profile Professor Yang-Kyu Choi Nano-Oriented Bio-Electronics Lab https://sites.google.com/view/nobelab/ School of Electrical Engineering KAIST Professor Sung-Yool Choi Molecular and Nano Device Laboratory https://www.mndl.kaist.ac.kr/ School of Electrical Engineering KAIST