
KAIST Develops AI that Automatically Detects Defe..
< (From left) Ph.D. candidate Jihye Na, Professor Jae-Gil Lee >

Defect detection systems using artificial intelligence (AI) sensor data have recently been installed at smart factory manufacturing sites. However, when the manufacturing process changes due to machine replacement or variations in temperature, pressure, or speed, existing AI models fail to properly understand the new situation and their performance drops sharply. KAIST researchers have developed AI technology that can accurately detect defects even in such situations, without retraining, achieving performance improvements of up to 9.42%. This achievement is expected to help reduce AI operating costs and expand applicability in fields such as smart factories, healthcare devices, and smart cities.

KAIST (President Kwang Hyung Lee) announced on the 26th of August that a research team led by Professor Jae-Gil Lee from the School of Computing has developed a new “time-series domain adaptation” technology that allows existing AI models to be used without additional defect labeling, even when manufacturing processes or equipment change. Time-series domain adaptation enables AI models that handle time-varying data (e.g., temperature changes, machine vibrations, power usage, sensor signals) to maintain stable performance without additional training, even when the training environment (domain) differs from the environment where the model is actually applied.

Professor Lee’s team observed that the core reason AI models are confused by environmental (domain) changes lies not only in differences in the data distribution but also in changes in defect occurrence patterns (the label distribution) themselves. For example, in semiconductor wafer processes, the ratio of ring-shaped defects to scratch defects may change after equipment modifications.
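Such a label-distribution change is commonly handled by reweighting a classifier's outputs by the ratio of class priors. The sketch below illustrates that general idea only; the function name, the two-class setup, and all numbers are hypothetical, and TA4LS's actual correction procedure, which estimates the new prior from clustering, is defined in the paper.

```python
import numpy as np

def reweight_predictions(probs, source_prior, target_prior):
    """Rescale softmax outputs by the ratio of class priors (label-shift correction)."""
    w = np.asarray(target_prior) / np.asarray(source_prior)   # per-class ratio
    adjusted = probs * w                                      # shift toward target prior
    return adjusted / adjusted.sum(axis=1, keepdims=True)     # renormalize rows

# Hypothetical numbers: old process 90%/10% normal vs. defect, new process 70%/30%.
probs = np.array([[0.80, 0.20],    # model trained on the old process is biased
                  [0.55, 0.45]])   # toward the old defect rate
adj = reweight_predictions(probs, [0.9, 0.1], [0.7, 0.3])
print(adj.round(3))                # defect probabilities rise for the new process
```

In a setting like TA4LS, the target-side prior would come from clustering statistics of the unlabeled new-process data rather than from labels, which is what makes the correction possible without relabeling.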
The research team developed a method that decomposes new-process sensor data into three components—trends, non-trends, and frequencies—and analyzes each individually. Just as humans detect anomalies by combining pitch, vibration patterns, and periodic changes in machine sounds, the AI was enabled to analyze data from multiple perspectives.

On this basis, the team developed TA4LS (Time-series domain Adaptation for mitigating Label Shifts), which automatically corrects predictions by comparing the existing model's outputs with clustering information extracted from the new process data. In this way, predictions biased toward the defect occurrence patterns of the old process can be precisely adjusted to match the new one. The technology is also highly practical: it can be combined with existing AI systems like a plug-in module, without separate, complex development. In other words, regardless of the AI technology currently in use, it can be applied immediately with only simple additional steps.

< Figure 1. Concept diagram of the “TA4LS” technology developed by the research team. Sensor data from a new process is grouped by components (trends, non-trends, and frequencies) according to similar patterns. By comparing these groups with the defect tendencies predicted by the existing model and automatically correcting mismatches, the technology maintains high performance even when processes change. >

In experiments on four benchmark datasets for time-series domain adaptation (i.e., four types of sensor data in which changes had occurred), the research team achieved up to 9.42% higher accuracy than existing methods. Especially when process changes caused large differences in label distribution (e.g., defect occurrence patterns), the AI showed remarkable improvement by autonomously correcting for such differences. These results demonstrate that the technology can be used effectively in environments that produce small batches of many product types, one of the main advantages of smart factories.

Professor Jae-Gil Lee, who supervised the research, said, “This technology solves the retraining problem, which has been the biggest obstacle to the introduction of artificial intelligence in manufacturing. Once commercialized, it will greatly contribute to the spread of smart factories by reducing maintenance costs and improving defect detection rates.”

This research was carried out with Jihye Na, a Ph.D. student at KAIST, as the first author, with Youngeun Nam, a Ph.D. student, and Junhyeok Kang, a researcher at LG AI Research, as co-authors. The research results were presented in August 2025 at KDD (the ACM SIGKDD Conference on Knowledge Discovery and Data Mining), the world’s top academic conference in artificial intelligence and data mining.

※ Paper Title: “Mitigating Source Label Dependency in Time-Series Domain Adaptation under Label Shifts”
※ DOI: https://doi.org/10.1145/3711896.3737050

This technology was developed as part of the SW Computing Industry Original Technology Development Program’s SW StarLab project (RS-2020-II200862, DB4DL: Development of Highly Available and High-Performance Distributed In-Memory DBMS for Deep Learning), supported by the Ministry of Science and ICT and the Institute for Information & Communications Technology Planning & Evaluation (IITP).
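For illustration, the trend / non-trend / frequency decomposition that TA4LS applies to sensor data can be sketched with standard tools. This is a toy version under assumed data (a linear drift plus a 3 Hz vibration), not the team's implementation; `decompose` and all parameters here are hypothetical.

```python
import numpy as np

def decompose(t, x):
    """Split a sensor series into trend, non-trend (residual), and frequency views."""
    trend = np.polyval(np.polyfit(t, x, 1), t)   # slow linear drift
    residual = x - trend                          # non-trend fluctuations
    spectrum = np.abs(np.fft.rfft(residual))      # periodic structure
    return trend, residual, spectrum

# Assumed toy signal: a slow drift plus a 3 Hz machine vibration, 50 Hz sampling.
t = np.linspace(0, 10, 500, endpoint=False)
x = 0.5 * t + np.sin(2 * np.pi * 3 * t)
trend, residual, spectrum = decompose(t, x)
print(spectrum.argmax())   # dominant bin: 30 cycles over the 10 s window -> 3 Hz
```

Each of the three views exposes a different kind of process change: the trend captures drift, the residual captures irregular fluctuations, and the spectrum captures periodic vibration patterns.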

KAIST achieves over 95% high-purity CO₂ capture u..
< (From left) Professor Dong-Yeun Koh from KAIST, Professor T. Alan Hatton from MIT, Dr. Young Hun Lee from MIT, Dr. Hwajoo Joo from MIT, Dr. Jung Hun Lee from MIT >

Direct Air Capture (DAC) is a technology that filters out carbon dioxide present in the atmosphere at extremely low concentrations (below 400 ppm). The KAIST research team has now succeeded in capturing carbon dioxide at over 95% purity using only low power at the level of a smartphone charging voltage (3 V), without hot steam or complex facilities. While high energy cost has been the biggest obstacle for conventional DAC technologies, this study is regarded as a breakthrough demonstrating real commercialization potential. Overseas patent applications have already been filed, and because the technology can easily be coupled with renewable energy such as solar and wind power, it is being highlighted as a “game changer” for accelerating the transition to carbon-neutral processes.

KAIST (President Kwang Hyung Lee) announced on the 25th of August that Professor Dong-Yeun Koh’s research team from the Department of Chemical and Biomolecular Engineering, in collaboration with Professor T. Alan Hatton’s group at MIT’s Department of Chemical Engineering, has developed the world’s first ultra-efficient e-DAC (Electrified Direct Air Capture) technology based on conductive silver nanofibers.

Conventional DAC processes require high-temperature steam (over 100℃) in the regeneration stage, where the absorbed or adsorbed carbon dioxide is separated again. This stage consumes about 70% of the total energy, making energy efficiency crucial, and requires complex heat-exchange systems, which makes cost reduction difficult. The joint research team, led by KAIST, solved this problem with “fibers that heat themselves electrically,” adopting Joule heating, a method that generates heat by passing electricity directly through the fibers, similar to an electric blanket.
By heating only where needed, without an external heat source, energy loss was drastically reduced. The technology can heat the fibers to 110℃ within 80 seconds using only 3 V—the energy level of smartphone charging. This dramatically shortens adsorption–desorption cycles even in low-power environments, while reducing unnecessary heat loss by about 20% compared to existing technologies.

The core of this research was not simply making conductive fibers, but realizing a “breathable conductive coating” that achieves both electrical conductivity and gas diffusion. The team uniformly coated porous fiber surfaces with a composite of silver nanowires and nanoparticles, forming a layer about 3 micrometers (µm) thick—much thinner than a human hair. This three-dimensional continuous porous structure provided excellent electrical conductivity while securing pathways for CO₂ molecules to move smoothly into the fibers, enabling uniform, rapid heating and efficient CO₂ capture at the same time.

< Figure 1. Fabrication process of the silver nanocomposite-based conductive fibrous DAC device and schematic of the CO₂ capture–regeneration mechanism through a rapid operating cycle: (1-1) A porous fiber precursor based on Y-zeolite and cellulose acetate was dip-coated with a silver nanoparticle/nanowire composite and treated with EDA vapor, resulting in an adsorptive fiber with enhanced gas selectivity and conductivity. (1-2) This fibrous DAC system enables stable and efficient CO₂ capture–regeneration even under low-power conditions, through a rapid cycle (e-TVSA) consisting of (i) CO₂ adsorption from air, (ii) gas displacement, (iii) electrically driven Joule heating, and (iv) cooling and preparation for re-adsorption. >

Furthermore, when multiple fibers were modularized and connected in parallel, the total resistance dropped below 1 ohm (Ω), proving scalability to large-scale systems. The team succeeded in recovering CO₂ at over 95% purity under real atmospheric conditions.
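The reported electrical figures are easy to sanity-check with Ohm's law. In the sketch below, the per-fiber resistance (taken as roughly 0.5 Ω, in line with the ~0.5 Ω/cm value reported for the coated fibers) and the 12-fiber module size are assumed values for illustration, not measurements from the paper.

```python
# Back-of-the-envelope check of the reported electrical figures (illustrative values).
V = 3.0          # applied voltage (V), smartphone-charger level
R_fiber = 0.5    # assumed resistance of one coated fiber (ohm); ~0.5 ohm/cm reported
N = 12           # fibers per module, the configuration found optimal in simulation

R_module = R_fiber / N       # identical resistances in parallel combine as R / N
P_fiber = V ** 2 / R_fiber   # Joule heating power dissipated in one fiber (W)

print(f"module resistance: {R_module:.3f} ohm")  # 0.042 ohm, well below 1 ohm
print(f"power per fiber:   {P_fiber:.1f} W")     # 18.0 W
```

Parallel wiring is what keeps the module resistance well below 1 Ω even as more fibers are added, which is the property that supports scaling to larger systems.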
This achievement is the result of five years of in-depth research since 2020. Remarkably, in late 2022, long before the paper’s publication, the core technology had already been filed for PCT and domestic/international patents (WO2023068651A1; countries entered: US, EP, JP, AU, CN), securing foundational intellectual property rights. This indicates that the technology is not only highly advanced but was also developed with practical commercialization in mind, beyond the laboratory level.

The biggest innovation of this technology is that it runs solely on electricity, making it very easy to integrate with renewable energy sources such as solar and wind. It matches the needs of global companies that have declared RE100 and seek carbon-neutral process transitions.

Professor Dong-Yeun Koh of KAIST said, “Direct Air Capture (DAC) is not just a technology for reducing carbon dioxide emissions, but a key means of achieving ‘negative emissions’ by purifying the air itself. The conductive fiber-based DAC technology we developed can be applied not only to industrial sites but also to urban systems, significantly contributing to Korea’s leap as a leading nation in future DAC technologies.”

< Figure 2. Uniform coating of conductive fibers and characteristics of rapid electrical heating: (2-1) By forming a uniform coating layer, the fiber’s resistance was drastically reduced to about 0.5 Ω/cm. (2-2) Heat-transfer simulations analyzing thermal efficiency according to the number of fibers loaded in a module showed that heat loss was minimized and the most ideal temperature distribution was obtained with 12 fibers. This suggests the optimal fiber configuration for achieving uniform heating while reducing power consumption. (2-3) In actual experiments, rapid and efficient electrical heating was observed, with the fiber surface reaching 110 °C within 80 seconds using only 3 V of applied voltage. >

This study was led by Young Hun Lee (Ph.D., 2023 KAIST graduate; currently at the MIT Department of Chemical Engineering) and co-first-authored by Jung Hun Lee and Hwajoo Joo (MIT, Department of Chemical Engineering). The results were published online on August 1, 2025, in Advanced Materials, one of the world’s leading journals in materials science, and in recognition of its excellence the work was selected for the Front Inside Cover.

※ Paper title: “Design of Electrified Fiber Sorbents for Direct Air Capture with Electrically-Driven Temperature Vacuum Swing Adsorption”
※ DOI: https://doi.org/10.1002/adma.202504542

This study was supported by the Aramco–KAIST CO₂ Research Center and the National Research Foundation of Korea with funding from the Ministry of Science and ICT (No. RS-2023-00259416, DACU Source Technology Development Project).

In KAIST, Robots Now Untie Rubber Bands and Inser..
< (From left) M.S. candidate Minseok Song, Professor Daehyung Park >

The technology that allows robots to handle deformable objects such as wires, clothing, and rubber bands has long been regarded as a key task in the automation of manufacturing and service industries. However, because such deformable objects have no fixed shape and their movements are difficult to predict, robots have faced great difficulties in accurately recognizing and manipulating them. KAIST researchers have developed a robot technology that can precisely grasp the state of deformable objects and handle them skillfully, even with incomplete visual information. This achievement is expected to contribute to intelligent automation in various industrial and service fields, including cable and wire assembly, manufacturing that handles soft components, and clothing organization and packaging.

KAIST (President Kwang Hyung Lee) announced on the 21st of August that the research team led by Professor Daehyung Park of the School of Computing has developed an artificial intelligence technology called “INR-DOM (Implicit Neural-Representation for Deformable Object Manipulation),” which enables robots to skillfully handle objects that continuously change shape, like elastic bands, and are visually difficult to distinguish.

Professor Park’s research team developed a technology that allows robots to reconstruct the complete overall shape of a deformable object from partially observed three-dimensional information and to learn manipulation strategies based on it. The team also introduced a new two-stage learning framework that combines reinforcement learning and contrastive learning so that robots can efficiently learn specific tasks.
The trained controller achieved significantly higher task success rates than existing technologies in a simulation environment, and in real robot experiments it demonstrated a high level of manipulation capability, such as untying intricately entangled rubber bands, greatly expanding the applicability of robots to handling deformable objects.

Deformable Object Manipulation (DOM) is one of the long-standing challenges in robotics. Deformable objects have infinite degrees of freedom, making their movements difficult to predict, and self-occlusion, in which the object hides parts of itself, makes it difficult for robots to grasp their overall state. To solve these problems, representations of deformable-object states and control technologies based on reinforcement learning have been widely studied. However, existing representation methods could not accurately capture continuously deforming surfaces or the complex three-dimensional structures of deformable objects, and because state representation and reinforcement learning were handled separately, they could not construct a state-representation space well suited to object manipulation.

< Figure 1. (From top) A robotic arm performing a sealing task inserting a rubber ring into a groove, an installation task attaching an O-ring onto a cylinder, and a disentanglement task untying a rubber band tangled between two pillars. INR-DOM accurately grasped the tangled state of the object from partial observation and successfully performed the tasks. >

To overcome these limitations, the research team utilized an “implicit neural representation.” This technology receives the partial three-dimensional information (point cloud*) observed by the robot and reconstructs the overall shape of the object, including unseen parts, as a continuous surface (a signed distance function, SDF). This enables robots to imagine and understand the overall shape of the object just as humans do.
*Point cloud: a method of representing the three-dimensional shape of an object as a “set of points” on its surface.

Furthermore, the research team introduced a two-stage learning framework. In the first stage, pre-training, a model is trained to reconstruct the complete shape from incomplete point cloud data, yielding a state-representation module that is robust to occlusion and represents the surfaces of stretching objects well. In the second stage, fine-tuning, reinforcement learning and contrastive learning are used together to optimize the control policy and the state-representation module, so that the robot can clearly distinguish subtle differences between the current state and the goal state and efficiently find the optimal action required for the task.

When the INR-DOM technology developed by the research team was mounted on a robot and tested, it showed far higher success rates than the best existing technologies in three complex simulation tasks: inserting a rubber ring into a groove (sealing), installing an O-ring onto a part (installation), and untying tangled rubber bands (disentanglement). In the most challenging task, disentanglement, the success rate reached 75%, about 49 percentage points higher than the best existing technology (ACID, 26%).

< Figure 2. INR-DOM goes through a two-stage learning process. In the first stage (pre-training), a model is trained to reconstruct a complete 3D shape from partial point cloud data. In the second stage (fine-tuning), reinforcement learning and contrastive learning are used to efficiently learn manipulation policies optimized for specific tasks. >

The research team also verified that INR-DOM is applicable in real environments by combining it with sample-efficient robotic reinforcement learning and performing reinforcement learning in the real world.
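A signed distance function simply maps any 3D point to its distance from the object's surface, with negative values inside. The closed-form sphere below illustrates only this sign convention; INR-DOM instead trains a neural network to output such values for the observed deformable object, which is what lets it fill in unseen, occluded regions.

```python
import numpy as np

def sphere_sdf(points, center=(0.0, 0.0, 0.0), radius=1.0):
    """Signed distance to a sphere: negative inside, zero on the surface, positive outside."""
    d = np.linalg.norm(np.asarray(points) - np.asarray(center), axis=-1)
    return d - radius

pts = np.array([[0.0, 0.0, 0.0],   # center of the sphere (inside)
                [1.0, 0.0, 0.0],   # on the surface
                [2.0, 0.0, 0.0]])  # outside
print(sphere_sdf(pts))             # [-1.  0.  1.]
```

Because the function is defined at every point in space, the surface can be queried at arbitrary resolution, unlike a fixed point cloud.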
As a result, in actual environments the robot performed insertion, installation, and disentanglement tasks with a success rate of over 90%. In particular, in the visually difficult bidirectional disentanglement task, it achieved a 25% higher success rate than existing image-based reinforcement learning methods, proving that robust manipulation is possible despite visual ambiguity.

Minseok Song, a master’s student at the KAIST School of Computing and first author of this research, stated that “this research has shown the possibility that robots can understand the overall shape of deformable objects even with incomplete information and perform complex manipulation based on that understanding.” He added, “It will greatly contribute to the advancement of robot technology that performs sophisticated tasks in cooperation with humans or in place of humans in various fields such as manufacturing, logistics, and medicine.”

The study was presented at the top international robotics conference, Robotics: Science and Systems (RSS) 2025, held June 21–25 at USC in Los Angeles.
※ Paper title: “Implicit Neural-Representation Learning for Elastic Deformable-Object Manipulations”
※ DOI: https://www.roboticsproceedings.org/ (to be released), currently https://arxiv.org/abs/2505.00500

This research was supported by the Ministry of Science and ICT through the Institute of Information & Communications Technology Planning & Evaluation (IITP) projects “Core Software Technology Development for Complex-Intelligence Autonomous Agents” (RS-2024-00336738; Development of Mission Execution Procedure Generation Technology for Autonomous Agents’ Complex Task Autonomy), “Core Technology Development for Human-Centered Artificial Intelligence” (RS-2022-II220311; Goal-Oriented Reinforcement Learning Technology for Multi-Contact Robot Manipulation of Everyday Objects), and “Core Computing Technology” (RS-2024-00509279; Global AI Frontier Lab), as well as by Samsung Electronics. More details can be found at https://inr-dom.github.io.

KAIST Leading the International Standardization o..
< (From left) Seongha Hwang (Ph.D. candidate), Woohyuk Chung (Ph.D. candidate), Professor Jooyoung Lee (School of Computing) >

In computer security, random numbers are crucial values that must be unpredictable—such as secret keys or initialization vectors (IVs)—forming the foundation of security systems. To produce them, deterministic random bit generators (DRBGs) are used, which generate numbers that appear random. However, existing DRBGs had limitations in both security (unpredictability against hacking) and output speed. KAIST researchers have developed a DRBG that theoretically achieves the highest possible level of security through a new proof technique, while maximizing speed by parallelizing its structure. This enables safe and ultra-fast random number generation applicable from IoT devices to large-scale servers.

KAIST (President Kwang Hyung Lee) announced on the 20th of August that a research team led by Professor Jooyoung Lee from the School of Computing has established a new theoretical framework for analyzing the security of permutation*-based deterministic random bit generators (DRBGs) and has designed a DRBG that achieves optimal efficiency.

*Permutation: the process of shuffling bits or bytes by changing their order, allowing bidirectional conversion (the shuffled data can be restored to its original state).

Deterministic random bit generators create unpredictable random numbers from entropy sources (random data obtained from the environment) using basic cryptographic operations such as block ciphers and hash functions. Using the new proof technique, the team established a security bound approximately 50% higher than existing proofs, and also proved that this value is the theoretical maximum achievable.

The research team also designed POSDRBG (Parallel Output Sponge-based DRBG) to address the output-efficiency limitation of the existing sponge structure caused by its serial (single-line) processing.
The newly proposed parallel structure processes multiple streams simultaneously, thereby achieving the maximum efficiency possible for permutation-based DRBGs.

Professor Jooyoung Lee stated, “POSDRBG is a new deterministic random bit generator that improves both random number generation speed and security, making it applicable from small IoT devices to large-scale servers. This research is expected to positively influence the ongoing revision of the international DRBG standard SP800-90A*, leading to the formal inclusion of permutation-based DRBGs.”

*SP800-90A: an international standard document established by the U.S. NIST (National Institute of Standards and Technology), defining the design and operational criteria for DRBGs used in cryptographic systems. Until now, permutation-based DRBGs have not been included in the standard.

This research, with Woohyuk Chung (KAIST, first author), Seongha Hwang (KAIST), Hwigyeom Kim (Samsung Electronics), and Jooyoung Lee (KAIST, corresponding author), will be presented in August at CRYPTO (the Annual International Cryptology Conference), the world’s top academic conference in cryptology.

※ Paper title: “Enhancing Provable Security and Efficiency of Permutation-Based DRBGs”
※ DOI: https://doi.org/10.1007/978-3-032-01901-1_15

This research was supported by the Institute for Information & Communications Technology Planning & Evaluation (IITP).

< Figure 1. Sponge structure that outputs the sequence Zi using permutation function P >

The random number output function of the existing Sponge-DRBG uses a sponge structure that directly chains the permutation P; all existing permutation-based DRBGs share this sponge structure. In the sponge structure, among the n bits of state entering P, only the upper r bits are used as the output Z, so the output efficiency is always limited to r/n.

< Figure 2. Output structure of POSDRBG >

In this study, the random number output function of POSDRBG was designed to allow parallel computation, so that all n output bits of the permutation function P become random numbers Z. It therefore achieves an output efficiency of 1.
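The r/n versus 1 efficiency argument can be made concrete with a toy byte-level sponge. Here `perm` is a stand-in built from SHA-256 purely so the example runs; it is not an actual permutation, nothing here is cryptographically sound, and this is not POSDRBG itself, only an illustration of how the rate limits output per call.

```python
import hashlib

N = 32   # state size in bytes (stands in for the n-bit permutation width)
R = 16   # rate: bytes released per squeeze in the plain sponge

def perm(state: bytes) -> bytes:
    """Stand-in for the public permutation P (NOT a permutation, NOT secure)."""
    return hashlib.sha256(state).digest()

def sponge_squeeze(state: bytes, out_len: int) -> bytes:
    """Plain sponge output: each call to perm releases only R of the N state bytes."""
    out = b""
    while len(out) < out_len:
        state = perm(state)
        out += state[:R]          # only the upper "rate" portion is ever output
    return out[:out_len]

# Plain sponge: out_len / R permutation calls (efficiency R/N = 1/2 here).
# A parallel-output design in the style of POSDRBG emits all N bytes per call
# (efficiency 1), halving the number of permutation calls for the same output.
calls_sponge = 64 // R    # 4 calls for 64 bytes
calls_parallel = 64 // N  # 2 calls for 64 bytes
print(len(sponge_squeeze(b"\x00" * N, 64)), calls_sponge, calls_parallel)
```

The remaining N − R bytes in the plain sponge form the hidden "capacity" that provides security, which is why raising the output efficiency without losing security required a new proof framework.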

KAIST Takes the Lead in Developing Core Technologi..
< Professor Sanghoo Park from the Department of Nuclear and Quantum Engineering >

KAIST announced on the 15th of August that Professor Sanghoo Park of the Department of Nuclear and Quantum Engineering has won back-to-back early-career researcher awards at two of the world's most prestigious plasma conferences. Professor Park was selected as a recipient of the Early Career Award (ECA) of the Gaseous Electronics Conference (GEC), hosted by the American Physical Society, on August 4. He was also honored with the Young Investigator Award, presented by the International Plasma Chemistry Society (IPCS), on June 19.

The American Physical Society's GEC Early Career Award is given to only one person worldwide every two years, based on a comprehensive evaluation of research excellence, academic influence, and contributions to the field of plasma. The award will be presented at GEC 2025, which will be held at COEX in Seoul from October 13 to 17. Established in 1948, the GEC is a leading academic conference in the plasma field, with a 77-year history of showcasing key research achievements across all areas of plasma, including physics, chemistry, diagnostics, and application technologies. Recently, advanced application research such as eco-friendly chemical processes, next-generation semiconductors, and atomic-layer and ultra-low-temperature etching technology for HBM processes has been gaining attention.

To commemorate the award, Professor Park will give an invited lecture at GEC 2025 on the topic of "Deep-Learning-Based Spectroscopic Data Analysis for Advancing Plasma Spectroscopy." In his lecture, he will use case studies to demonstrate how deep learning allows even non-specialists to easily and quickly perform spectroscopic data analysis—which is essential for spectroscopy, a key analytical method in modern science including plasma diagnostics.
< Award ceremony at IPCS (Professor Sanghoo Park on the far left) >

Professor Park also won the Young Investigator Award from the IPCS at the 26th International Symposium on Plasma Chemistry (ISPC 26), held in Minneapolis, USA, from June 15 to 20. First held in 1973, the ISPC (International Symposium on Plasma Chemistry) is a representative international conference in the field of plasma chemistry, held biennially. It covers a wide range of topics, from basic plasma chemical-reaction principles to applications in semiconductor processes, green energy, environmental science, and biotechnology, with researchers from industry, academia, and research institutions worldwide sharing their latest findings at each event. The Young Investigator Award is given to a scientist who obtained their doctorate within the last 10 years and has demonstrated outstanding achievements in the field.

Professor Park was recognized for his leading research on using plasma-liquid interactions and real-time optical diagnostics to fix nitrogen from the air in an environmentally friendly way and to precisely control the quantity and types of reactive chemical species that are beneficial to the human body and the environment.

< Photo of a certificate >

Professor Sanghoo Park stated, "It is very meaningful to receive the Early Career Award representing Korea at the GEC event, which is being held in Korea for the first time in its history." He added, "I am happy that my consistent interest and achievements in fundamental plasma science have been recognized, and it is even more significant that the efforts of the KAIST research team have been acknowledged by the world's top conferences."

KAIST develops world’s most sensitive light-powere..
< (From left) Ph.D. candidate Jaeha Hwang, Ph.D. candidate Jungi Song, Professor Kayoung Lee from the School of Electrical Engineering >

Silicon semiconductors used in existing photodetectors have low light responsivity, and the two-dimensional semiconductor MoS₂ (molybdenum disulfide) is so thin that the doping processes needed to control its electrical properties are difficult, limiting the realization of high-performance photodetectors. The KAIST research team has overcome this technical limitation and developed the world’s highest-performing self-powered photodetector, which operates without electricity wherever a light source is present. This paves the way for an era in which precise sensing is possible without batteries in wearable devices, biosignal monitoring, IoT devices, autonomous vehicles, and robots, as long as a light source is available.

KAIST (President Kwang Hyung Lee) announced on the 14th of August that Professor Kayoung Lee’s research team from the School of Electrical Engineering has developed a self-powered photodetector that operates without an external power supply. The sensor demonstrated a sensitivity up to 20 times higher than existing products, the highest performance level among comparable technologies reported to date.

Professor Lee’s team fabricated a “PN junction” photodetector capable of generating electrical signals on its own in lit environments, even without an electrical energy supply, by introducing a “van der Waals bottom electrode” that makes the semiconductor extremely sensitive to electrical signals without doping. A “PN junction” is a structure formed by joining p-type (hole-rich) and n-type (electron-rich) materials in a semiconductor. When exposed to light, this structure causes current to flow in one direction, making it a key component of photodetectors and solar cells.
Normally, creating a proper PN junction requires “doping,” deliberately introducing impurities into the semiconductor to alter its electrical properties. However, two-dimensional semiconductors such as MoS₂ are only a few atoms thick, so conventional doping can damage the structure or degrade performance, making an ideal PN junction difficult to form.

To overcome these limitations and maximize device performance, the research team designed a new device structure incorporating two key technologies: the “van der Waals electrode” and the “partial gate.” The partial-gate structure applies an electrical signal to only part of the two-dimensional semiconductor, controlling one side to behave like p-type and the other like n-type. This allows the device to function electrically as a PN junction without doping. Furthermore, because conventional metal electrodes can bond strongly to the semiconductor and damage its lattice, the van der Waals bottom electrode was attached gently using van der Waals forces, preserving the original structure of the two-dimensional semiconductor while ensuring effective electrical signal transfer.

This approach secured both structural stability and electrical performance, enabling a PN junction in thin two-dimensional semiconductors without damaging their structure. The device can generate electrical signals with extreme sensitivity as long as light is present, even without an external power source. Its light-detection sensitivity (responsivity) exceeds 21 A/W: more than 20 times that of conventional powered sensors, 10 times that of silicon-based self-powered sensors, and over twice that of existing MoS₂ sensors.
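Responsivity relates photocurrent to incident optical power (R = I_ph / P_opt), so the reported 21 A/W translates directly into signal current. In the sketch below, the 1 nW illumination level is an assumed example value, not a condition from the paper.

```python
# Responsivity R = I_ph / P_opt: photocurrent produced per watt of incident light.
R_device = 21.0   # reported responsivity of the new sensor (A/W)
P_opt = 1e-9      # assumed illumination for this example: 1 nW

I_ph = R_device * P_opt          # photocurrent generated with no external bias
print(f"{I_ph * 1e9:.1f} nA")    # 21.0 nA of signal from 1 nW of light
```

A higher responsivity means a usable current from weaker light, which is what enables sensing in dark environments and battery-free operation.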
This level of sensitivity means the device can be applied immediately to high-precision sensors that detect biosignals or operate in dark environments. Professor Kayoung Lee stated that the team “achieved a level of sensitivity unimaginable in silicon sensors, and although two-dimensional semiconductors are too thin for conventional doping processes, [they] succeeded in implementing a PN junction that controls electrical flow without doping.” She added, “This technology can be used not only in sensors but also in the key components that control electricity inside smartphones and electronic devices, providing a foundation for the miniaturization and self-powered operation of next-generation electronics.”

< Jaeha Hwang and Jungi Song, experiment in progress >

This research, with doctoral students Jaeha Hwang and Jungi Song as co-first authors, was published online on July 26 in Advanced Functional Materials (IF 19), a leading journal in materials science.

※ Paper title: Gated PN Junction in Ambipolar MoS₂ for Superior Self-Powered Photodetection
※ DOI: https://advanced.onlinelibrary.wiley.com/doi/10.1002/adfm.202510113

This work was supported by the National Research Foundation of Korea, the Korea Basic Science Institute, Samsung Electronics, and the Korea Institute for Advancement of Technology.

KAIST develops “FlexGNN,” a graph analysis AI 95 t..
<(From left) Donghyoung Han, CTO of GraphAI Co., Ph.D. candidate Jeongmin Bae from KAIST, Professor Min-Soo Kim from KAIST> Alongside text-based large language models (LLMs) such as ChatGPT, graph AI models based on GNNs (Graph Neural Networks), which analyze unstructured data such as financial transactions, stocks, social media, and patient records in graph form, are being actively used in industrial fields. However, full-graph learning—training on the entire graph at once—has the limitation of requiring massive memory and multiple GPU servers. A KAIST research team has succeeded in developing the world's highest-performance software technology that can train large-scale GNN models at maximum speed using only a single GPU server. KAIST (President Kwang Hyung Lee) announced on the 13th that the research team led by Professor Min-Soo Kim of the School of Computing has developed “FlexGNN,” a GNN system that, unlike existing methods using multiple GPU servers, can quickly train and infer large-scale full-graph AI models on a single GPU server. FlexGNN improves training speed by up to 95 times compared to existing technologies. Recently, in various fields such as climate, finance, medicine, pharmaceuticals, manufacturing, and distribution, there has been a growing number of cases where data is converted into graph form, consisting of nodes and edges, for analysis and prediction. While the full-graph approach, which uses the entire graph for training, achieves higher accuracy, it has the drawback of frequently running out of memory due to the massive intermediate data generated during training, as well as prolonged training times caused by data communication between multiple servers. To overcome these problems, FlexGNN performs optimal AI model training on a single GPU server by utilizing SSDs (solid-state drives) and main memory instead of multiple GPU servers. <Figure (a): This illustrates the typical execution flow of a conventional full-graph GNN training system.
All intermediate data generated during training are retained in GPU memory, and computations are performed sequentially without data movement or memory optimization. Consequently, if the GPU memory capacity is exceeded, training becomes infeasible. Additionally, inter-GPU data exchange relies solely on a fixed method (X_rigid), limiting performance and scalability. Figure (b): This depicts an example of the execution flow based on the optimized training execution plan generated by FlexGNN. For each piece of intermediate data, strategies such as retention, offloading, or recomputation are selectively applied. Depending on resource constraints and data size, an appropriate inter-GPU exchange method—either GPU-to-GPU (G2G) or GPU-to-Host (G2H)—is adaptively chosen by the exchange operator (X_adapt). Furthermore, offloading and reloading operations are scheduled to overlap as much as possible with computation, maximizing compute–data-movement parallelism. The adaptive exchange operator and the various data offloading and reloading operators (R, O) in the figure demonstrate FlexGNN's ability to flexibly control intermediate data management and inter-GPU exchange strategies based on the training execution plan.> In particular, by applying AI training optimization techniques analogous to the query optimization used in database systems, the team developed a new training optimization technology that moves model parameters, training data, and intermediate data between the GPU, main memory, and SSD layers at the optimal timing and in the optimal manner. As a result, FlexGNN flexibly generates optimal training execution plans according to available resources such as data size, model scale, and GPU memory, thereby achieving high resource efficiency and training speed. Consequently, it became possible to train GNN models on data far exceeding main memory capacity, with training up to 95 times faster even on a single GPU server.
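The per-tensor decision at the heart of such a training plan (keep an intermediate result in GPU memory, offload it to host memory or SSD, or throw it away and recompute it later) can be caricatured with a greedy planner. This is a hypothetical sketch for intuition only: the names, the cost model, and the greedy policy are invented, and FlexGNN's actual best-effort optimizer is far more sophisticated.

```python
def plan_intermediates(tensors, gpu_budget_bytes):
    """Assign each intermediate tensor a strategy under a GPU memory budget.

    tensors: list of dicts with 'name', 'size' (bytes), 'offload_cost' and
    'recompute_cost' (estimated seconds).
    Returns {name: 'retain' | 'offload' | 'recompute'}.
    """
    plan, used = {}, 0
    # Retain small tensors first so as many as possible stay resident on the
    # GPU; evict the rest via whichever of offload/recompute looks cheaper.
    for t in sorted(tensors, key=lambda t: t["size"]):
        if used + t["size"] <= gpu_budget_bytes:
            plan[t["name"]] = "retain"
            used += t["size"]
        elif t["offload_cost"] <= t["recompute_cost"]:
            plan[t["name"]] = "offload"
        else:
            plan[t["name"]] = "recompute"
    return plan
```

A real planner would additionally schedule offloads and reloads to overlap with computation (as the G2G/G2H exchange and R/O operators in the figure do), rather than deciding placement alone.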
In particular, full-graph AI, which enables analysis more precise than that of supercomputers in applications such as climate prediction, has now become practical. Professor Min-Soo Kim of KAIST stated, “As full-graph GNN models are actively used to solve complex problems such as weather prediction and new material discovery, the importance of related technologies is increasing.” He added that “since FlexGNN has dramatically solved the longstanding problems of training scale and speed in graph AI models, we expect it to be widely used in various industries.” In this research, Jeongmin Bae, a doctoral student in the School of Computing at KAIST, participated as the first author; Donghyoung Han, CTO of GraphAI Co. (founded by Professor Kim), participated as the second author; and Professor Kim served as the corresponding author. The research results were presented on August 5 at ACM KDD, a world-renowned data mining conference. The FlexGNN technology is also planned to be applied to Grapheye’s graph database solution, GraphOn. ● Paper title: FlexGNN: A High-Performance, Large-Scale Full-Graph GNN System with Best-Effort Training Plan Optimization ● DOI: https://doi.org/10.1145/3711896.3736964 This research was supported by the IITP SW Star Lab and IITP-ITRC programs of the Ministry of Science and ICT, as well as the mid-level project program of the National Research Foundation of Korea.

KAIST Develops World’s First Wireless OLED Contact..
<ID-style photograph against a laboratory background featuring an OLED contact lens sample (center), flanked by the principal authors (left: Professor Seunghyup Yoo ; right: Dr. Jee Hoon Sim). Above them (from top to bottom) are: Professor Se Joon Woo, Professor Sei Kwang Hahn, Dr. Su-Bon Kim, and Dr. Hyeonwook Chae> Electroretinography (ERG) is an ophthalmic diagnostic method used to determine whether the retina is functioning normally. It is widely employed for diagnosing hereditary retinal diseases or assessing retinal function decline. A team of Korean researchers has developed a next-generation wireless ophthalmic diagnostic technology that replaces the existing stationary, darkroom-based retinal testing method by incorporating an “ultrathin OLED” into a contact lens. This breakthrough is expected to have applications in diverse fields such as myopia treatment, ocular biosignal analysis, augmented-reality (AR) visual information delivery, and light-based neurostimulation. On the 12th, KAIST (President Kwang Hyung Lee) announced that a research team led by Professor Seunghyup Yoo from the School of Electrical Engineering, in collaboration with Professor Se Joon Woo of Seoul National University Bundang Hospital (Director Jeong-Han Song), Professor Sei Kwang Hahn of POSTECH (President Sung-Keun Kim) and CEO of PHI Biomed Co., and the Electronics and Telecommunications Research Institute (ETRI, President Seungchan Bang) under the National Research Council of Science & Technology (NST, Chairman Youngshik Kim), has developed the world’s first wireless contact lens-based wearable retinal diagnostic platform using organic light-emitting diodes (OLEDs). <Figure 1. Schematic and photograph of the wireless OLED contact lens> This technology enables ERG simply by wearing the lens, eliminating the need for large specialized light sources and dramatically simplifying the conventional, complex ophthalmic diagnostic environment. 
Traditionally, ERG requires the use of a stationary Ganzfeld device in a dark room, where patients must keep their eyes open and remain still during the test. This setup imposes spatial constraints and can lead to patient fatigue and compliance challenges. To overcome these limitations, the joint research team integrated an ultrathin flexible OLED—approximately 12.5 μm thick, roughly one-sixth to one-eighth the thickness of a human hair—into a contact lens electrode for ERG. They also equipped it with a wireless power receiving antenna and a control chip, completing a system capable of independent operation. For power transmission, the team adopted a wireless power transfer method using a 433 MHz resonant frequency suitable for stable wireless communication. This was also demonstrated in the form of a wireless controller embedded in a sleep mask, which can be linked to a smartphone—further enhancing practical usability. <Figure 2. Schematic of the electroretinography (ERG) testing system using a wireless OLED contact lens and an example of an actual test in progress> While most smart contact lens–type light sources developed for ocular illumination have used inorganic LEDs, these rigid devices emit light almost from a single point, which can lead to excessive heat accumulation and thus limit the usable light intensity. In contrast, OLEDs are areal light sources and were shown to induce retinal responses even under low-luminance conditions. In this study, under a relatively low luminance* of 126 nits, the OLED contact lens successfully induced stable ERG signals, producing diagnostic results equivalent to those obtained with existing commercial light sources. *Luminance: A value indicating how brightly a surface or screen emits light; for reference, the luminance of a smartphone screen is about 300–600 nits (and can exceed 1,000 nits at maximum).
Animal tests confirmed that the surface temperature of a rabbit’s eye wearing the OLED contact lens remained below 27°C, avoiding corneal heat damage, and that the light-emitting performance was maintained even in humid environments—demonstrating its effectiveness and safety as an ERG diagnostic tool in real clinical settings. Professor Seunghyup Yoo stated that “integrating the flexibility and diffusive light characteristics of ultrathin OLEDs into a contact lens is a world-first attempt,” and that “this research can help expand smart contact lens technology into on-eye optical diagnostic and phototherapeutic platforms, contributing to the advancement of digital healthcare technology.” < Wireless operation of the OLED contact lens > Jee Hoon Sim, Hyeonwook Chae, and Su-Bon Kim, Ph.D. researchers at KAIST, played key roles as co-first authors alongside Dr. Sangbaie Shin of PHI Biomed Co. Corresponding authors are Professor Seunghyup Yoo (School of Electrical Engineering, KAIST), Professor Sei Kwang Hahn (Department of Materials Science and Engineering, POSTECH), and Professor Se Joon Woo (Seoul National University Bundang Hospital). The results were published online in the internationally renowned journal ACS Nano on May 1st. ● Paper title: Wireless Organic Light-Emitting Diode Contact Lenses for On-Eye Wearable Light Sources and Their Application to Personalized Health Monitoring ● DOI: https://doi.org/10.1021/acsnano.4c18563 ● Related video clip: http://bit.ly/3UGg6R8 < Close-up of the OLED contact lens sample >

KAIST Develops Bioelectrosynthesis Platform for Sw..
<(From left) Professor Jimin Park, Ph.D. candidate Myeongeun Lee, Ph.D. candidate Jaewoong Lee, Professor Jihan Kim> Cells use various signaling molecules to regulate the nervous, immune, and vascular systems. Among these, nitric oxide (NO) and ammonia (NH₃) play important roles, but their chemical instability and gaseous nature make them difficult to generate or control externally. A KAIST research team has developed a platform that generates specific signaling molecules in situ from a single precursor under an applied electrical signal, enabling switch-like, precise spatiotemporal control of cellular responses. This approach could provide a foundation for future medical technologies such as electroceuticals, electrogenetics, and personalized cell therapies. KAIST (President Kwang Hyung Lee) announced on August 11 that a research team led by Professor Jimin Park from the Department of Chemical and Biomolecular Engineering, in collaboration with Professor Jihan Kim's group, has developed a 'Bioelectrosynthesis Platform' capable of producing either nitric oxide or ammonia on demand using only an electrical signal. The platform allows control over the timing, spatial range, and duration of cell responses. Inspired by enzymes involved in nitrite reduction, the researchers implemented an electrochemical strategy that selectively produces nitric oxide or ammonia from a single precursor, nitrite (NO₂⁻). By changing the catalyst, the team generated ammonia or nitric oxide from nitrite using a copper-molybdenum-sulfur catalyst (Cu₂MoS₄) and an iron-incorporated catalyst (FeCuMoS₄), respectively. Through electrochemical measurements and computer simulations, the team revealed that Fe sites in the FeCuMoS₄ catalyst bind nitric oxide intermediates more strongly, shifting product selectivity toward nitric oxide. Under the same electrical conditions, the Fe-containing catalyst preferentially produces nitric oxide, whereas the Cu₂MoS₄ catalyst favors ammonia production. <Figure 1.
Schematic diagram of a bio-electrosynthesis platform that synthesizes a desired signaling substance with an electrical signal (left) and the results of precise cell control using it (right)> The research team demonstrated biological functionality by using the platform to activate ion channels in human cells. Specifically, electrochemically produced nitric oxide activated TRPV1 channels (responsive to heat and chemical stimuli), while electrochemically produced ammonia induced intracellular alkalinization and activated OTOP1 proton channels. By tuning the applied voltage and electrolysis duration, the team modulated the onset time, spatial extent, and termination of cellular responses, which effectively turned cellular signaling on and off like a switch. <Figure 2. Experimental results showing the change in the production ratio of nitric oxide and ammonia signaling substances according to the type of catalyst (left) and computational simulation results showing the strong bond between iron and nitric oxide (right)> Professor Jimin Park said, "This work is significant because it enables precise cellular control by selectively producing signaling molecules with electricity. We believe it has strong potential for applications in electroceutical technologies targeting the nervous system or metabolic disorders." Myeongeun Lee and Jaewoong Lee, Ph.D. students in the Department of Chemical and Biomolecular Engineering at KAIST, served as the co-first authors. Professor Jihan Kim is a co-author. The paper was published online in 'Angewandte Chemie International Edition' on July 8, 2025 (DOI: 10.1002/ange.202508192). Reference: https://doi.org/10.1002/ange.202508192 Authors: Myeongeun Lee†, Jaewoong Lee†, Yongha Kim, Changho Lee, Sang Yeon Oh, Prof. Jihan Kim, Prof. Jimin Park* †These authors contributed equally. *Corresponding author.

'Team Atlanta', in which KAIST Professor Insu Yun ..
<Photo 1. Group Photo of Team Atlanta> Team Atlanta, led by Professor Insu Yun of the Department of Electrical and Electronic Engineering at KAIST and Tae-soo Kim, an executive from Samsung Research, along with researchers from POSTECH and Georgia Tech, won the final championship at the AI Cyber Challenge (AIxCC) hosted by the Defense Advanced Research Projects Agency (DARPA). The final was held at the world's largest hacking conference, DEF CON 33, in Las Vegas on August 8 (local time). With this achievement, the team won a prize of $4 million (approximately 5.5 billion KRW), demonstrating the excellence of their AI-based autonomous cyber defense technology on the global stage. <Photo 2. Championship Commemorative: On the left and right are tournament officials. From the second person: Professor Tae-soo Kim (Samsung Research / Georgia Tech), Researcher Hyeong-seok Han (Samsung Research America), and Professor Insu Yun (KAIST)> The AI Cyber Challenge is a two-year global competition co-hosted by DARPA and the Advanced Research Projects Agency for Health (ARPA-H). It challenges contestants to automatically analyze, detect, and fix software vulnerabilities using AI-based Cyber Reasoning Systems (CRS). The total prize money for the competition is $29.5 million, with the winning team receiving $4 million. In the final, Team Atlanta scored a total of 392.76 points, more than 170 points ahead of the second-place team, Trail of Bits, securing a dominant victory. The CRS developed by Team Atlanta automatically detected various types of vulnerabilities and patched a significant number of them in real time. Across the seven finalist teams, an average of 77% of the 70 intentionally injected vulnerabilities were found, and 61% of those found were patched. The teams also found 18 additional unknown vulnerabilities in real software, proving the potential of AI security technology.
All CRS technologies, including those of the winning team, will be released as open source and are expected to be used to strengthen the security of core infrastructure such as hospital, water, and power systems. <Photo 3. Final Scoreboard: An overwhelming victory with a margin of over 170 points> Professor Insu Yun of KAIST, a member of Team Atlanta, stated, "I am very happy to have achieved such a great result. This is a remarkable achievement that shows Korea's cyber security research has reached the highest level in the world, and it was meaningful to show the capabilities of Korean researchers on the world stage. I will continue to conduct research to protect the digital safety of the nation and global society through the fusion of AI and security technology." KAIST President Kwang-hyung Lee stated, "This victory is another example that proves KAIST is a world-leading institution in the field of future cyber security and AI convergence. We will continue to provide full support to our researchers so they can compete and produce results on the world stage." <Photo 4. Results Announcement>

KAIST’s Wearable Robot Design Wins ‘2025 Red Dot A..
< Professor Hyunjoon Park, M.S. candidate Eun-ju Kang, prospective M.S. candidate Jae-seong Kim, undergraduate student Min-su Kim > A team led by Professor Hyunjoon Park from the Department of Industrial Design won the ‘Best of the Best’ award at the 2025 Red Dot Design Awards, one of the world's top three design awards, for their 'Angel Robotics WSF1 VISION Concept.' The design for the next-generation wearable robot for people with paraplegia successfully implements functionality, aesthetics, and social inclusion. This latest achievement follows the team's iF Design Award win for the WalkON Suit F1 prototype, which also won a gold medal at the Cybathlon last year, marking consecutive wins at top-tier international design awards. KAIST (President Kwang-hyung Lee) announced on the 8th of August that Move Lab, a research team led by Professor Hyunjoon Park from the Department of Industrial Design, won the 'Best of the Best' award in the Design Concept-Professional category at the prestigious '2025 Red Dot Design Awards' for their next-generation wearable robot design, the ‘Angel Robotics WSF1 VISION Concept.’ The German 'Red Dot Design Awards' is one of the world's best-known design competitions and is considered one of the world's top three design awards along with Germany’s iF Design Awards and America’s IDEA. The ‘Best of the Best’ award is given to the best design in a category and is awarded only to a very select few (within the top 1%) of all Red Dot Award winners. Professor Hyunjoon Park’s team was honored with the ‘Best of the Best’ award for a user-friendly follow-up development of the ‘WalkON Suit F1 prototype,’ which won a gold medal at the 2024 Cybathlon and an iF Design Award in 2025. <Figure 1. WSF1 Vision Concept Main Image> This award-winning design is the result of industry-academic cooperation with Angel Robotics Inc., founded by Professor Kyoungchul Kong from the KAIST Department of Mechanical Engineering.
It is a concept design that proposes a next-generation wearable robot (an ultra-personal mobility device) that people with paraplegia can use in their daily lives. The research team focused on transforming Angel Robotics Inc.'s advanced engineering platform into an intuitive, emotional, user-centric experience, implementing a design solution that simultaneously delivers functionality, aesthetics, and social inclusion. <Figure 2. WSF1 Vision Concept Full Exterior (Front View)> The WSF1 VISION Concept includes innovative features implemented in Professor Kyoungchul Kong’s Exo Lab: an autonomous access function in which the robot finds the user on its own; a front-loading mechanism designed so the user can put it on alone while seated; multi-directional walking realized through 12 powerful torque actuators and the latest control algorithms; and AI vision technology with a multi-visual display system that provides navigation and omnidirectional vision. Together, these give users a safer and more convenient mobility experience. The strong yet elegant silhouette was achieved through a design process that pursued perfection in proportion, surfaces, and details not seen in existing wearable robots. In particular, the fabric cover that wraps around the entire thigh from the robot's hip joint is a stylish element that respects the wearer's self-esteem and individuality, like fashionable athletic wear. It also helps the wearer feel psychologically safe in interacting with the robot and blending in with the general public. This presents a new aesthetic for wearable robots in which function and form are harmonized. <Figure 3. WSF1 Vision Concept's Operating Principle.
It walks autonomously and is worn from the front while the user is seated.> KAIST Professor Hyunjoon Park said of the award, "We are focusing on using technology, aesthetics, and human-centered innovation to present advanced technical solutions as easy, enjoyable, and cool experiences for users. Based on Angel Robotics Inc.'s vision of 'recreating human ability with technology,' the WSF1 VISION Concept aimed to break away from the traditional framework of wearable robots and deliver a design experience that adds dignity, independence, and new style to the user's life." <Figure 4. WSF1 Vision Concept Detail Image> A physical model of the WSF1 VISION Concept is scheduled to be unveiled in the Future Hall of the 2025 Gwangju Design Biennale from August 30 to November 2. The theme is 'Po-yong-ji-deok' (the virtue of inclusion), and it will showcase the role of design language in creating an inclusive future society. <Figure 5. WSF1 Vision Concept: Image of a Person Wearing and Walking>

Unlocking New Potential for Natural Gas–Based Biop..
< (From left) Jaewook Myung from KAIST, Sunho Park from KAIST, Dr. Chungheon Shin from Stanford University, Prof. Craig S. Criddle from Stanford University > KAIST announced that a research team led by Professor Jaewook Myung from the Department of Civil and Environmental Engineering, in collaboration with Stanford University, has identified how ethane (C₂H₆)—a major constituent of natural gas—affects the core metabolic pathways of the obligate methanotroph Methylosinus trichosporium OB3b. Methane (CH₄), a greenhouse gas with roughly 25 times the global warming potential of carbon dioxide, is rarely emitted alone into the environment. It is typically released in mixtures with other gases; in the case of natural gas, ethane can comprise up to 15% of the total composition. Methanotrophs are aerobic bacteria that can utilize methane as their sole source of carbon and energy. Obligate methanotrophs, in particular, strictly utilize only C1 compounds such as methane or methanol. Until now, little was known about how these organisms respond to C2 compounds like ethane, which they cannot use for growth. < Figure 1. Conceptual overview of obligate methanotroph metabolism and PHB biosynthesis under mixed-substrate conditions of methane and ethane > This study reveals that although ethane cannot serve as a growth substrate, its presence significantly affects key metabolic functions in M. trichosporium OB3b—including methane oxidation, cell proliferation, and the intracellular synthesis of polyhydroxybutyrate (PHB), a biodegradable polymer. Under varying methane and oxygen conditions, the team observed that ethane addition consistently produced three metabolic effects: reduced cell growth, lower methane consumption, and increased PHB accumulation. These effects intensified with rising ethane concentrations.
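The reported pattern, in which ethane depresses methane consumption while both gases are oxidized by the same enzyme, is the classic signature of two substrates competing for one active site. As a purely illustrative sketch, here is a textbook competitive Michaelis-Menten form with invented parameter values; this is not the study's fitted model.

```python
def methane_oxidation_rate(ch4, c2h6, vmax=1.0, km_ch4=1.0, km_c2h6=1.0):
    """Methane oxidation rate when ethane acts as a competitive substrate
    for the same enzyme (here, pMMO). All parameter values are illustrative:
    vmax is the maximum rate, km_* are Michaelis constants, and ch4/c2h6
    are substrate concentrations in the same (arbitrary) units."""
    return vmax * ch4 / (km_ch4 * (1.0 + c2h6 / km_c2h6) + ch4)

# At a fixed methane concentration, raising the ethane concentration lowers
# the computed methane rate, mirroring the trend reported above.
```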
Notably, ethane oxidation occurred only when methane was present, confirming that it is co-oxidized via particulate methane monooxygenase (pMMO), the key enzyme responsible for methane oxidation. < Figure 2. Effects of increasing ethane concentrations on methane and ethane consumption, cell growth, and PHB production in Methylosinus trichosporium OB3b > Further analysis showed that acetate, an intermediate formed during ethane oxidation, played a pivotal role in this response. Higher acetate levels inhibited growth but enhanced PHB production, suggesting that ethane-derived acetate drives contrasting carbon assimilation patterns depending on nutrient conditions—a nutrient-balanced growth phase and a nutrient-imbalanced PHB accumulation phase. In addition, when external reducing power was supplemented (via methanol or formate), ethane consumption was enhanced significantly, while methane oxidation remained largely unaffected. This finding suggests that ethane, despite not supporting growth, actively competes for intracellular resources such as reducing equivalents. It offers new insights into substrate prioritization and resource allocation in methanotrophs under mixed-substrate conditions. Interestingly, while methane uptake declined in the presence of ethane, the expression of pmoA, the gene encoding pMMO, remained unchanged. This suggests that ethane’s impact occurs beyond the transcriptional level—likely via post-transcriptional or enzymatic regulation. “This is the first study to systematically investigate how obligate methanotrophs respond to complex gas mixtures involving ethane,” said Professor Jaewook Myung.
“Our findings show that even non-growth substrates can meaningfully influence microbial metabolism and biopolymer synthesis, opening new possibilities for methane-based biotechnologies and bioplastic production.” The study was supported by the National Research Foundation of Korea, the Ministry of Land, Infrastructure and Transport, and the Ministry of Oceans and Fisheries. The results were published in Applied and Environmental Microbiology, a journal of the American Society for Microbiology.