Korean

KAIST’s Reliability-Aware AI Opens Path to Faster ..
< (From front left) Professor Seungbum Hong, Professor EunAe Cho (From back left) Chaeyul Kang, Benediktus Madika, Jung Hyeon Moon, Taemin Park (Top) JooSung Shim >

The power that makes electric vehicles travel further and smartphones last longer comes from battery materials. Among them, the core material that directly determines the performance and lifespan of a battery is the cathode material. What if artificial intelligence could replace the numerous experiments required for battery material development? A KAIST research team has developed an artificial intelligence (AI) framework that predicts the particle size of cathode materials, together with the reliability of each prediction, even when experimental data are insufficient, opening the possibility of extension to next-generation energy technologies such as all-solid-state batteries.

KAIST announced on January 26th that a research team led by Professor Seungbum Hong of the Department of Materials Science and Engineering, in joint research with Professor EunAe Cho's team, has developed a machine learning framework that accurately predicts the particle size of battery cathode materials even when experimental data are incomplete and reports how reliable each result is.

The cathode material inside a battery is the core material that allows lithium-ion batteries to store and release energy. The most widely used cathode material for electric vehicle batteries today is an NCM-based metal oxide containing nickel (Ni), cobalt (Co), and manganese (Mn), which strongly affects a battery's lifespan, charging speed, driving range, and safety. The KAIST research team focused on the fact that the size of the very small primary particles that make up these cathode materials is a key factor in determining battery performance: if the particles are too large, performance deteriorates, and conversely, if they are too small, stability problems can occur.
Accordingly, the research team developed an AI-based technology that can accurately predict and control particle size.

< Battery performance prediction related (AI-generated image) >

Previously, determining the particle size required repeating numerous experiments while varying the sintering temperature, time, and material composition. In actual research settings, however, it was difficult to measure every condition without omission, and experimental data were often missing, which limited precise analysis of the relationship between process conditions and particle size. To solve this problem, the research team designed an AI framework that fills in missing data and presents prediction results together with their reliability. The framework combines a technique (MatImpute) that imputes missing experimental data by considering chemical characteristics with a probabilistic machine learning model (NGBoost) that quantifies prediction uncertainty.

The AI model does not stop at simply predicting particle size: it also indicates how far each prediction can be trusted, which serves as an important criterion for deciding under which conditions to actually synthesize materials. After training on the augmented experimental data, the model achieved a high prediction accuracy of about 86.6%. The analysis showed that cathode particle size is affected more strongly by process conditions such as sintering temperature and time than by material composition, which agrees well with existing experimental understanding.

To verify the reliability of the AI predictions, the research team synthesized four new cathode material samples under manufacturing conditions not included in the existing data, while keeping the same NCM811 metal composition (Ni 80% / Co 10% / Mn 10%).
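The framework described above pairs chemistry-aware imputation with a model that reports its own uncertainty. The sketch below illustrates that two-step idea in miniature: missing process values are filled in from chemically similar samples, and a leave-one-out ensemble of simple fits stands in for NGBoost's distributional output. All names and numbers here are illustrative assumptions, not values from the paper.

```python
import statistics

# Toy data (illustrative only): (Ni fraction, sintering temp in deg C, hold time in h) -> particle size (um)
samples = [
    ((0.8, 800.0, 10.0), 0.45),
    ((0.8, 850.0, 10.0), 0.60),
    ((0.8, 900.0, None), 0.85),   # hold time missing for this run
    ((0.6, 800.0, 12.0), 0.40),
    ((0.6, 900.0, 12.0), 0.75),
]

def impute(rows):
    # Chemistry-aware fill-in (a crude stand-in for MatImpute): replace a
    # missing hold time with the mean hold time of samples that share the
    # same Ni fraction, i.e. the chemically most similar runs.
    out = []
    for (ni, t, h), y in rows:
        if h is None:
            h = statistics.mean(hh for (n2, _, hh), _ in rows if n2 == ni and hh is not None)
        out.append(((ni, t, h), y))
    return out

def fit(rows):
    # Least-squares line: size = a * temperature + b.
    xs = [t for (_, t, _), _ in rows]
    ys = [y for _, y in rows]
    xb, yb = statistics.mean(xs), statistics.mean(ys)
    a = sum((x - xb) * (y - yb) for x, y in zip(xs, ys)) / sum((x - xb) ** 2 for x in xs)
    return a, yb - a * xb

data = impute(samples)
# Leave-one-out ensemble: the spread of its predictions is a crude
# uncertainty estimate, standing in for NGBoost's distributional output.
preds = [a * 875.0 + b
         for i in range(len(data))
         for a, b in [fit(data[:i] + data[i + 1:])]]
mean, spread = statistics.mean(preds), statistics.pstdev(preds)
print(f"predicted size at 875 deg C: {mean:.2f} +/- {spread:.2f} um")
```

A prediction whose spread is small relative to the mean would be a good candidate for actual synthesis; a large spread flags conditions where more data are needed first.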
As a result, the particle sizes predicted by the AI closely matched the actual microscopic measurements, with most errors at 0.13 micrometers (μm) or less, far smaller than the thickness of a human hair. In particular, the experimental results fell within the prediction uncertainty range presented by the AI, confirming that not only the predicted values but also their stated reliability was valid.

< Distribution shift condition experiment verification using 4 types of samples >

This study is significant in that it opens a way to identify the conditions most likely to succeed before performing all experiments in battery research. It is therefore expected to speed up battery material development and significantly reduce unnecessary experiments and costs.

Professor Seungbum Hong said, "The key is that the AI presents not only the predicted value but also how much the result can be trusted," and added, "It will be of practical help in designing next-generation battery materials more quickly and efficiently."

Benediktus Madika, a doctoral student in the Department of Materials Science and Engineering, participated as the first author, and the work was published on October 8, 2025, in 'Advanced Science', a prestigious international journal in materials science and chemical engineering.

※ Paper Title: Uncertainty-Quantified Primary Particle Size Prediction in Li-Rich NCM Materials via Machine Learning and Chemistry-Aware Imputation, DOI: https://doi.org/10.1002/advs.202515694

Meanwhile, this research was conducted by researchers Benediktus Madika, Chaeyul Kang, JooSung Shim, Taemin Park, and Jung Hyeon Moon with the teams of Professor EunAe Cho and Professor Seungbum Hong, and was supported by the Ministry of Science and ICT (MSIT) and the National Research Foundation of Korea (NRF) Future Convergence Technology Pioneer (Strategic) program (Project No. RS-2023-00247245).
< Battery performance prediction (AI-generated image) >

KAIST Transforms Hydrogen Energy by Flattening Gra..
<(From left) Ph.D. candidate HyunWoo J Yang, Ph.D. candidate SangJae Lee, Professor EunAe Cho, Ph.D. candidate DongWon Shin>

Catalysts are the “invisible engines” of hydrogen energy, governing both hydrogen production and electricity generation. Conventional catalysts are typically fabricated in granular particle form, which is easy to synthesize but suffers from inefficient use of precious metals and limited durability. KAIST researchers have introduced a paper-thin sheet architecture in place of granules, demonstrating that a structural innovation—rather than new materials—can reduce precious-metal usage while simultaneously enhancing both hydrogen production and fuel-cell performance.

KAIST (President Kwang Hyung Lee) announced on January 21st that a research team led by Professor EunAe Cho of the Department of Materials Science and Engineering has developed a new catalyst architecture that dramatically reduces the amount of expensive precious metals required while simultaneously improving hydrogen production and fuel-cell performance. The core of the work lies in ultrathin nanosheet structures, tens of thousands of times thinner than a human hair, which allowed the team to overcome both the efficiency and durability limitations of conventional catalysts.

Water electrolyzers and fuel cells are key technologies for producing and using hydrogen energy. Their commercialization, however, has been severely constrained by the scarcity and high cost of iridium (Ir) and platinum (Pt), the metals commonly used as catalysts. In conventional particle-based catalysts, only a limited surface area participates in reactions, and long-term operation inevitably leads to performance degradation. To address this, the research team transformed agglomerated catalyst particles into paper-like, ultrathin, laterally extended sheets.
For water electrolysis, they developed ultrathin iridium nanosheets with lateral sizes of 1–3 micrometers and thicknesses below 2 nanometers. This structure dramatically increased the active surface area participating in reactions, enabling significantly higher hydrogen production with the same amount of iridium.

< Ultrafine Iridium Nanosheet (AI-generated image) >

In addition, the team discovered that these ultrathin nanosheets naturally formed interconnected conductive pathways on titanium oxide (TiO2), a material previously considered unsuitable as a catalyst support due to its poor electrical conductivity. As a result, titanium oxide could be used stably as a catalyst support, further enhancing durability. The resulting catalyst achieved a 38% higher hydrogen production rate than commercial catalysts and operated stably for over 1,000 hours under high-load, industry-relevant conditions (1 A/cm2*). Notably, even with approximately 65% less iridium, the catalyst delivered performance comparable to commercial benchmarks, demonstrating a major reduction in precious-metal usage.

*1 A/cm2: a high-current condition corresponding to intensive operation of practical hydrogen-production systems

The team further applied the ultrathin nanosheet design strategy to fuel-cell catalysts, producing platinum–copper nanosheets, again tens of thousands of times thinner than a human hair. In fuel-cell evaluations, this catalyst exhibited a 13-fold improvement in mass activity per unit of platinum compared with commercial catalysts, and delivered approximately 2.3 times higher performance in full fuel-cell tests. Even after 50,000 accelerated durability cycles, the catalyst retained about 65% of its initial performance, significantly outperforming conventional catalysts. Importantly, this was achieved while reducing platinum usage by approximately 60%.
Professor EunAe Cho emphasized, “This study presents a new catalyst architecture that simultaneously enhances hydrogen production and fuel-cell performance while using far less expensive precious metals,” adding, “It represents a critical turning point for lowering the cost of hydrogen energy and accelerating its commercialization.”

<Schematic illustration of ultrathin nanosheet synthesis and transmission electron microscopy (TEM) images of the fabricated catalyst>

The results of this work were published in two separate papers, both based on the shared core technology of ultrathin nanosheet architectures—one focused on hydrogen-production catalysts and the other on fuel-cell catalysts. The iridium nanosheet study, with doctoral candidate DongWon Shin as first author, was published online on December 10, 2025, in ACS Nano (IF 16.0).

※ Paper title: “Ultrathin Iridium Nanosheets on Titanium Oxide for High-Efficiency and Durable Proton Exchange Membrane Water Electrolysis,” DOI: 10.1021/acsnano.5c15659

The platinum–copper nanosheet study, with SangJae Lee and doctoral candidate HyunWoo Yang as co–first authors, was published online on December 11, 2025, in Nano Letters (IF 9.6).

※ Paper title: “Ultrathin PtCu Nanosheets: A New Frontier in Highly Efficient and Durable Catalysts for the Oxygen Reduction Reaction,” DOI: 10.1021/acs.nanolett.5c04848

This research was supported by the Energy Human Resource Development Program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) under the Ministry of Trade, Industry and Energy, and by the Nano- and Materials-Technology Development Program of the National Research Foundation of Korea under the Ministry of Science and ICT.

Breaking the 1% Barrier, KAIST Boosts Brightness o..
<(Front row, from left) KAIST co-first author Changhyun Joo, co-first author Seongbeom Yeon, (Back row, from left) Jaeyoung Ha, Professor Himchan Cho, Jaedong Jang>

Light-emitting semiconductors are used throughout everyday life in TVs, smartphones, and lighting. However, many technical barriers remain in developing environmentally friendly semiconductor materials. In particular, nanoscale semiconductors tens of thousands of times smaller than the width of a human hair (about 100,000 nanometers) are theoretically capable of emitting bright light, yet in practice have suffered from extremely weak emission. KAIST researchers have now developed a new surface-control technology that overcomes this limitation.

KAIST (President Kwang Hyung Lee) announced on January 14th that a research team led by Professor Himchan Cho of the Department of Materials Science and Engineering has developed a fundamental technology to control, at the atomic level, the surface of indium phosphide (InP)* magic-sized clusters (MSCs)—nanoscale semiconductor particles regarded as next-generation eco-friendly semiconductor materials.

* Indium phosphide (InP): a compound semiconductor made of indium (In) and phosphorus (P), considered an environmentally friendly alternative that does not use hazardous elements such as cadmium

The material studied by the team, known as a magic-sized cluster, is an ultrasmall semiconductor particle composed of only several tens of atoms. Because all particles have identical size and structure, these materials are theoretically capable of emitting extremely sharp and pure light. However, at just 1–2 nanometers in size, even minute surface defects cause most of the emitted light to be lost, and luminescence efficiency has remained below 1% to date. Previously, this issue was addressed by etching the surface with strong chemicals such as hydrofluoric acid (HF).
However, the overly aggressive reactions often damaged the semiconductor itself. Professor Cho’s team adopted a different approach: instead of removing the surface all at once, they devised a precision etching strategy in which chemical reactions proceed in a highly controlled, incremental manner. This enabled selective removal of only the defect sites that hindered light emission, while preserving the overall structure of the semiconductor. During this defect-removal process, fluorine generated by the reaction combined with zinc species in the solution to form zinc fluoride, which in turn stabilized and passivated the exposed nanocrystal surface.

< Schematic illustration of overcoming emission efficiency limits via atomic-scale precision control >

As a result, the research team increased the luminescence efficiency of the semiconductor from below 1% to 18.1%. This is the highest performance reported to date among indium phosphide–based ultrasmall nanosemiconductors, corresponding to an 18-fold increase in brightness. The study is particularly significant in demonstrating, for the first time, that the surfaces of ultrasmall semiconductors—previously considered nearly impossible to control—can be precisely engineered at the atomic level. The technology is expected to find applications not only in next-generation displays but also in advanced fields such as quantum communication and infrared sensing.

< Eco-friendly Ultra-compact Semiconductor Chemical Reaction (AI-generated image) >

Professor Himchan Cho explained, “This work is not simply about making brighter semiconductors, but about demonstrating how critical atomic-level surface control is for achieving desired performance.” The research was carried out with Changhyun Joo, a doctoral student, and Seongbeom Yeon, a combined master’s–doctoral student in the Department of Materials Science and Engineering at KAIST, serving as co–first authors.
Professor Himchan Cho and Professor Ivan Infante of the Basque Center for Materials, Applications, and Nanostructures (BCMaterials, Spain) participated as co-corresponding authors. The study was published online on December 16 in the Journal of the American Chemical Society (JACS), one of the most prestigious journals in chemistry. ※ Paper title: “Overcoming the Luminescence Efficiency Limitations of InP Magic-Sized Clusters,” DOI: 10.1021/jacs.5c13963 This research was supported by the National Research Foundation of Korea through the Nano Materials Technology Development Program, the Next-Generation Intelligent Semiconductor Technology Development Program, the Quantum Information Science Human Infrastructure Program, and by the Korea Basic Science Institute through its Infrastructure Support Program for Early-Career Researchers.

KAIST Proposes AI-Driven Strategy to Solve Long-St..
<(From left) Distinguished Professor Sang Yup Lee, Dr. Gi Bae Kim, Professor Bernhard O. Palsson>

“We know the genes, but not their functions.” To resolve this long-standing bottleneck in microbial research, a joint research team has proposed a cutting-edge research strategy that leverages artificial intelligence (AI) to drastically accelerate the discovery of microbial gene functions.

KAIST announced on January 12th that a research team led by Distinguished Professor Sang Yup Lee of the Department of Chemical and Biomolecular Engineering, in collaboration with Professor Bernhard Palsson of the Department of Bioengineering at UCSD, has published a comprehensive review paper. The study systematically analyzes and organizes the latest AI-based research approaches aimed at revolutionizing the speed of gene function discovery.

Since the early 2000s, when whole-genome sequencing became a reality, there have been high expectations that the genetic blueprint of life would be fully decoded. Yet even twenty years later, the roles of a significant portion of genes within microbial genomes remain unknown. While various experimental methods—such as gene deletion, analysis of gene expression profiles, and in vitro activity assays—have been employed, discovering gene functions remains a time-consuming and costly endeavor, primarily due to the limitations of large-scale experimentation, complex biological interactions, and the discrepancy between laboratory results and actual in vivo responses.

To overcome these hurdles, the research team emphasized that an AI-driven approach combining computational biology with experimental biology is essential. The paper provides a comprehensive overview of the computational biology approaches that have facilitated gene function discovery, ranging from traditional sequence similarity analysis to the latest deep-learning-based AI models.
Notably, 3D protein structure prediction technologies such as AlphaFold (developed by Google DeepMind) and RoseTTAFold (developed by the University of Washington) have opened new doors. These tools go beyond simple functional estimation, offering the potential to understand the underlying mechanisms by which gene functions operate. Furthermore, generative AI is now extending research boundaries toward designing proteins with specifically desired functions. Focusing on transcription factors (proteins that act as genetic switches) and enzymes (proteins that catalyze chemical reactions), the team presented various application cases and future research directions that integrate gene sequence analysis, protein structure prediction, and diverse metagenomic analyses.

<Schematic illustration of computational biology methods for enzyme function prediction>

To overcome the biases and limitations inherent in traditional gene discovery, the researchers highlighted the need for an “active learning” framework in which AI guides the experimental process. Active learning is a method where an AI model identifies predictions with high uncertainty and suggests specific experiments to resolve them; the results are then fed back into the model to improve its accuracy. This iterative loop allows researchers to efficiently validate the most critical gene functions first.

The team stressed that this approach requires tight integration with automated experimental platforms and shared research infrastructures, such as biofoundries. They also noted that “failed data”—experiments that did not yield the expected results—must be shared as vital learning assets for future research.

“While deep learning-based prediction performance has improved significantly, developing ‘explainable AI’ models that can provide biological justifications for their results remains a critical challenge,” said Dr. Gi Bae Kim of KAIST, a co-author of the study.
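The suggest–test–update cycle of active learning can be captured in a few lines. The sketch below is a toy loop under stated assumptions: `assay` is a hypothetical stand-in for a wet-lab experiment, and the "model" is a deliberately simple frequency estimate over a fake sequence feature (index parity); none of it reflects the specific tools surveyed in the review.

```python
def assay(gene_id):
    # Hypothetical wet-lab experiment stand-in: a deterministic pseudo-result
    # saying whether the gene shows the probed activity.
    return (gene_id * 2654435761) % 97 > 48

def predict(gene_id, history):
    # Toy model: probability of activity among already-tested genes that share
    # this gene's (fake) sequence feature, here simply index parity.
    peers = [result for g, result in history.items() if g % 2 == gene_id % 2]
    if not peers:
        return 0.5                     # no evidence yet: maximally uncertain
    return sum(peers) / len(peers)

genes = list(range(20))
history = {}                           # accumulated experimental results

for _ in range(6):                     # six rounds of suggest -> test -> update
    untested = [g for g in genes if g not in history]
    # Uncertainty = closeness of the predicted probability to 0.5; the model
    # asks for the experiment it is least sure about.
    target = max(untested, key=lambda g: 0.5 - abs(predict(g, history) - 0.5))
    history[target] = assay(target)    # run the suggested experiment and feed
                                       # the result back (predict reads history)

print(f"tested {len(history)} of {len(genes)} genes")
```

Note that "failed" (negative) assay results enter `history` just like positive ones, mirroring the authors' point that unsuccessful experiments are learning assets too.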
Distinguished Professor Sang Yup Lee emphasized, “The key to surpassing the limits of gene function discovery lies in combining a systematic, AI-guided experimental framework with an automated research infrastructure under the direction of human researchers. Establishing a research ecosystem where prediction and validation are repeatedly linked is essential.”

The paper was published on January 7th in Nature Microbiology, a prestigious journal in the field of biotechnology published by Nature.

※ Publication Information
Title: Approaches for accelerating microbial gene function discovery using artificial intelligence
DOI: 10.1038/s41564-025-02214-1
Authors: Bernhard O. Palsson (UCSD, First Author), Sang Yup Lee (KAIST, Second and Corresponding Author), Gi Bae Kim (KAIST, Third Author)

This work was supported by the Development of Platform Technologies of Microbial Cell Factories for the Next-Generation Biorefineries project (2022M3J5A1056117) and the Development of Advanced Synthetic Biology Source Technologies for Leading the Biomanufacturing Industry project (RS-2024-00399424) of the National Research Foundation of Korea, funded by the Korean Ministry of Science and ICT.

KAIST Develops OLED Technology with Double the Scr..
<(From left) Ph.D. candidate Minjae Kim, Professor Seunghyup Yoo, Dr. Junho Kim>

Organic light-emitting diodes (OLEDs) are widely used in smartphones and TVs thanks to their excellent color reproduction and thin, flexible planar structure. However, internal light loss has limited further improvements in brightness. KAIST researchers have now developed a technology that more than doubles OLED light-emission efficiency while maintaining the flat structure that is a key advantage of OLED displays.

KAIST (President Kwang Hyung Lee) announced on January 11th that a research team led by Professor Seunghyup Yoo of the School of Electrical Engineering has developed a new near-planar light outcoupling structure* and an OLED design method that can significantly reduce light loss inside OLED devices.

* Near-planar light outcoupling structure: a thin structure that keeps the OLED surface almost flat while extracting more of the light generated inside to the outside

OLEDs are composed of multiple layers of ultrathin organic films stacked on top of one another. As light passes through these layers, it is repeatedly reflected or absorbed, often causing more than 80% of the light generated inside the OLED to be lost as heat before it can escape. To address this issue, light outcoupling structures such as hemispherical lenses or microlens arrays (MLAs) have been used to extract light from OLEDs. However, hemispherical lenses protrude significantly, making it difficult to maintain a flat form factor, while MLAs must cover a much larger area than individual pixels to achieve sufficient light extraction, which limits how much efficiency can be gained without interference between neighboring pixels. To increase OLED brightness while preserving a planar structure, the research team proposed a new OLED design strategy that maximizes light extraction within the size of each individual pixel.
Unlike conventional designs that assume OLEDs extend infinitely, this approach accounts for the finite pixel sizes actually used in real displays, so more light can be emitted externally even from pixels of the same size. In addition, the team developed a new near-planar light outcoupling structure that helps light emerge efficiently in the forward direction without spreading too widely. Although this structure is very thin—comparable in thickness to existing microlens arrays—it achieves light extraction efficiency close to that of hemispherical lenses of the same lateral dimension. It therefore hardly compromises the flat form factor of OLEDs and can be readily applied to flexible OLED displays. By combining the new OLED design with the near-planar light outcoupling structure, the researchers achieved more than a twofold improvement in light-emission efficiency even in small pixels.

< Quasi-Planar Light Extraction OLED Technology >

This technology enables brighter displays at the same power while maintaining the flat OLED structure, and is expected to extend battery life and reduce heat generation in mobile devices such as smartphones and tablets. Improvements in display lifespan are also anticipated.
MinJae Kim, the first author of the study, noted, “A small idea that came up during class was developed into real research results through the KAIST Undergraduate Research Program (URP).” Professor Seunghyup Yoo stated, “Although many light outcoupling structures have been proposed, most were designed for large-area lighting applications, and many were difficult to apply effectively to displays composed of numerous small pixels,” adding, “The near-planar light outcoupling structure proposed in this work was designed with constraints on the size of the light source within each pixel, reducing optical interference between adjacent pixels while maximizing efficiency.” He further emphasized that the approach can be applied not only to OLEDs but also to next-generation display technologies based on materials such as perovskites and quantum dots.

< Schematic Overview and Application Examples of the Proposed Light Extraction Structure >

This research, with MinJae Kim (Department of Materials Science and Engineering, KAIST; currently a Ph.D. student in Materials Science and Engineering at Stanford University) and Junho Kim (School of Electrical Engineering, KAIST; currently a postdoctoral researcher at the University of Cologne, Germany) as co–first authors, was published online on December 29, 2025, in Nature Communications.

※ Paper title: “Near-planar light outcoupling structures with finite lateral dimensions for ultra-efficient and optical crosstalk-free OLED displays,” DOI: 10.1038/s41467-025-66538-6

This research was supported by the KAIST Undergraduate Research Program (URP), the Mid-Career Researcher Program and the Future Display Strategic Research Lab Program of the National Research Foundation (NRF) of Korea, the Human Resource Development Program of the Korea Institute for Advancement of Technology (KIAT), and the Korea Planning & Evaluation Institute of Industrial Technology (KEIT).

KAIST detects ‘hidden defects’ that degrade semico..
<(From left) Professor Byungha Shin, Ph.D. candidate Chaeyoun Kim, Dr. Oki Gunawan>

Semiconductors are used in devices such as memory chips and solar cells, and within them may exist invisible defects that interfere with electrical flow. A joint research team has developed a new analysis method that can detect these “hidden defects” (electronic traps) with approximately 1,000 times higher sensitivity than existing techniques. The technology is expected to improve semiconductor performance and lifetime, while significantly reducing development time and costs by enabling precise identification of defect sources.

KAIST (President Kwang Hyung Lee) announced on January 8th that a joint research team led by Professor Byungha Shin of the Department of Materials Science and Engineering at KAIST and Dr. Oki Gunawan of the IBM T. J. Watson Research Center has developed a new measurement technique that can simultaneously analyze defects that hinder electrical transport (electronic traps) and charge-carrier transport properties inside semiconductors.

Within semiconductors, electronic traps can capture electrons and hinder their movement. When electrons are trapped, current cannot flow smoothly, leading to leakage currents and degraded device performance. Accurately evaluating semiconductor performance therefore requires determining how many electronic traps are present and how strongly they capture electrons.

The research team focused on Hall measurements, a technique long used in semiconductor analysis that probes electron motion using electric and magnetic fields. By adding controlled light illumination and temperature variation to this method, the team succeeded in extracting information that was difficult to obtain using conventional approaches. Under weak illumination, newly generated electrons are first captured by electronic traps.
As the light intensity is gradually increased, the traps become filled, and subsequently generated electrons begin to move freely. By analyzing this transition, the researchers were able to precisely calculate the density and characteristics of the electronic traps.

The greatest advantage of this method is that multiple types of information can be obtained simultaneously from a single measurement: not only how fast electrons move, how long they survive, and how far they travel, but also the properties of the traps that interfere with their transport. The team first validated the accuracy of the technique using silicon semiconductors and then applied it to perovskites, which are attracting attention as next-generation solar cell materials. As a result, they successfully detected extremely small quantities of electronic traps that were difficult to identify using existing methods—demonstrating a sensitivity approximately 1,000 times higher than that of conventional techniques.

< Conceptual Diagram of the Evolution of Hall Characterization (Analysis) Techniques >

Professor Byungha Shin stated, “This study presents a new method that enables simultaneous analysis of electrical transport and the factors that hinder it within semiconductors using a single measurement,” adding that “it will serve as an important tool for improving the performance and reliability of various semiconductor devices, including memory semiconductors and solar cells.”

The results of this research were published on January 1 in Science Advances, an international academic journal, with Chaeyoun Kim, a doctoral student in the Department of Materials Science and Engineering, as the first author.

※ Paper title: “Electronic trap detection with carrier-resolved photo-Hall effect,” DOI: https://doi.org/10.1126/sciadv.adz0460

This research was supported by the Ministry of Science and ICT and the National Research Foundation of Korea.
< Conceptual Diagram of Charge Transport and Trap Characterization Using Photo-Hall Measurements (AI-generated image) >
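The trap-filling transition exploited in the measurement can be illustrated with a toy model: the free-carrier signal stays near zero until the photogenerated carriers exceed the trap density, then rises linearly, and extrapolating the linear region back to zero signal recovers the trap density. The trap density and sweep values below are illustrative assumptions, not numbers from the paper.

```python
# Illustrative trap density (carriers per cm^3) that the analysis should recover.
N_TRAP = 3.0e14

def free_carriers(generated):
    # Idealized trap-filling: the first N_TRAP photogenerated carriers are
    # captured by traps; only the excess contributes to the Hall signal.
    return max(0.0, generated - N_TRAP)

# Illumination sweep: increasing photogenerated carrier densities.
sweep = [i * 1.0e14 for i in range(1, 11)]
signal = [free_carriers(g) for g in sweep]

# Recover the trap density from the linear (trap-filled) region: take the
# slope from the last two points and extrapolate to the zero-signal intercept.
(g1, s1), (g2, s2) = (sweep[-2], signal[-2]), (sweep[-1], signal[-1])
slope = (s2 - s1) / (g2 - g1)
estimated_traps = g2 - s2 / slope
print(f"estimated trap density: {estimated_traps:.2e} cm^-3")
```

In a real photo-Hall measurement the transition is smeared by capture kinetics and temperature, which is why the actual analysis is carrier-resolved rather than a two-point extrapolation; the sketch only conveys the knee-finding idea.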

Breaking Performance Barriers of All Solid State B..
< (Bottom, from left) Professor Dong-Hwa Seo, Researcher Jae-Seung Kim, (Top, from left) Professor Kyung-Wan Nam, Professor Sung-Kyun Jung, Professor Youn-Seok Jung >

Batteries are an essential technology in modern society, powering smartphones and electric vehicles, yet they face limitations such as fire and explosion risks and high costs. While all-solid-state batteries have garnered attention as a viable alternative, it has been difficult to satisfy safety, performance, and cost simultaneously. Recently, a Korean research team successfully improved the performance of all-solid-state batteries through structural design alone—without adding expensive metals.

KAIST announced on January 7th that a research team led by Professor Dong-Hwa Seo of the Department of Materials Science and Engineering, in collaboration with teams led by Professor Sung-Kyun Jung (Seoul National University), Professor Youn-Seok Jung (Yonsei University), and Professor Kyung-Wan Nam (Dongguk University), has developed a design method for core all-solid-state battery materials that uses low-cost raw materials while ensuring high performance and a low risk of fire or explosion.

Conventional batteries rely on lithium ions moving through a liquid electrolyte; all-solid-state batteries use a solid electrolyte instead. While this makes them safer, achieving rapid lithium-ion movement within a solid has typically required expensive metals or complex manufacturing processes. To create efficient pathways for lithium-ion transport within the solid electrolyte, the research team focused on "divalent anions" such as oxygen and sulfur. Divalent anions play a crucial role in altering the crystal structure by integrating into the basic framework of the electrolyte. The team developed a technology to precisely control the internal structure of low-cost zirconium (Zr)-based halide solid electrolytes by introducing these divalent anions.
This design principle, termed the "Framework Regulation Mechanism," widens the pathways for lithium ions and lowers the energy barriers they encounter during transport. By adjusting the bonding environment and crystal structure around the lithium ions, the team enabled faster and easier movement. To verify these structural changes, the researchers utilized various high-precision analysis techniques, including high-energy synchrotron X-ray diffraction (XRD), pair distribution function (PDF) analysis, X-ray absorption spectroscopy (XAS), and density functional theory (DFT) modeling of electronic structure and diffusion. The results showed that electrolytes incorporating oxygen or sulfur improved lithium-ion mobility by 2 to 4 times compared to conventional zirconium-based electrolytes. This signifies that performance levels suitable for practical all-solid-state battery applications can be achieved using inexpensive materials. Specifically, the ionic conductivity at room temperature was measured at approximately 1.78 mS/cm for the oxygen-doped electrolyte and 1.01 mS/cm for the sulfur-doped electrolyte. Ionic conductivity indicates how quickly and smoothly lithium ions move; a value above 1 mS/cm is generally considered sufficient for practical battery applications at room temperature. < Structural Regulation Mechanism of Zr-based Halide Electrolytes via Divalent Anion Introduction > < Atomic Rearrangement of Solid Electrolyte for All-Solid-State Batteries (AI-generated image) > Professor Dong-Hwa Seo stated, "Through this research, we have presented a design principle that can simultaneously improve the cost and performance of all-solid-state batteries using cheap raw materials. Its potential for industrial application is very high." Lead author Jae-Seung Kim added that the study shifts the focus from "what materials to use" to "how to design them" in the development of battery materials. 
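As a quick sanity check on the figures quoted above, the short Python sketch below compares the reported room-temperature conductivities with the roughly 1 mS/cm practicality threshold and back-calculates the baseline range implied by the stated 2- to 4-fold improvement. The inferred baseline values are my own arithmetic, not figures from the paper.

```python
# Reported room-temperature ionic conductivities (from the article), in mS/cm.
reported = {
    "O-doped Zr-halide": 1.78,
    "S-doped Zr-halide": 1.01,
}

# A value above ~1 mS/cm is commonly cited as sufficient for practical
# room-temperature battery operation (as the article notes).
PRACTICAL_THRESHOLD_MS_CM = 1.0

for name, sigma in reported.items():
    practical = sigma >= PRACTICAL_THRESHOLD_MS_CM
    print(f"{name}: {sigma:.2f} mS/cm -> meets threshold: {practical}")

# Baseline conductivity implied for the undoped electrolyte, ASSUMING the
# stated 2-4x improvement applies to the O-doped value (my inference).
low, high = 1.78 / 4, 1.78 / 2
print(f"implied undoped baseline: {low:.3f}-{high:.3f} mS/cm")
```

Both doped electrolytes clear the 1 mS/cm bar, which is consistent with the article's claim of practical-grade performance from inexpensive materials.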
This study, with Jae-Seung Kim (KAIST) and Da-Seul Han (Dongguk University) as co-first authors, was published in the international journal Nature Communications on November 27, 2025. Paper Title: Divalent anion-driven framework regulation in Zr-based halide solid electrolytes for all-solid-state batteries DOI: https://www.nature.com/articles/s41467-025-65702-2 This research was supported by the Samsung Electronics Future Technology Promotion Center, the National Research Foundation of Korea, and the National Supercomputing Center.

Direct Printing of Nanolasers, the Key to Optical ..
< (From left) Professor Ji Tae Kim (KAIST), Dr. Shiqi Hu (First Author, AI-based Intelligent Design-Manufacturing Integrated Research Group, KAIST-POSTECH), and Professor Junsuk Rho (POSTECH) > In future high-tech industries, such as high-speed optical computing for massive AI, quantum cryptographic communication, and ultra-high-resolution augmented reality (AR) displays, nanolasers—which process information using light—are gaining significant attention as core components for next-generation semiconductors. A research team at our university has proposed a new manufacturing technology capable of high-density placement of nanolasers, devices that process information in spaces thinner than a human hair, on semiconductor chips. KAIST announced on January 6th that a joint research team, led by Professor Ji Tae Kim from the Department of Mechanical Engineering and Professor Junsuk Rho from POSTECH (President Seong-keun Kim), has developed an ultra-fine 3D printing technology capable of creating "vertical nanolasers," a key component for ultra-high-density optical integrated circuits. Conventional semiconductor manufacturing methods, such as lithography, are effective for mass-producing identical structures but face limitations: the processes are complex and costly, making it difficult to freely change the shape or position of devices. Furthermore, most existing lasers are built as horizontal structures lying flat on a substrate, which consumes significant space and suffers from reduced efficiency due to light leakage into the substrate. To solve these issues, the research team developed a new 3D printing method to vertically stack perovskite, a next-generation semiconductor material that generates light efficiently. This technology, known as "ultra-fine electrohydrodynamic 3D printing," uses electrical voltage to precisely control invisible ink droplets at the attoliter scale (10⁻¹⁸ L). 
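To give a sense of what the attoliter scale means, the following Python sketch (an illustrative calculation of mine, not taken from the paper) computes the diameter of a spherical droplet with a volume of one attoliter and compares it with a typical human hair, assumed here to be about 80 µm thick.

```python
import math

# One attoliter in cubic meters: 1 L = 1e-3 m^3, so 1 aL = 1e-18 L = 1e-21 m^3.
ATTOLITER_M3 = 1e-18 * 1e-3

# Diameter of a sphere of that volume: V = (4/3) * pi * r^3.
radius_m = (3 * ATTOLITER_M3 / (4 * math.pi)) ** (1 / 3)
diameter_nm = 2 * radius_m * 1e9

HUMAN_HAIR_NM = 80_000  # ~80 um, a typical hair diameter (assumption)

print(f"1 aL droplet diameter: ~{diameter_nm:.0f} nm")
print(f"droplet-to-hair ratio: about 1/{HUMAN_HAIR_NM / diameter_nm:.0f}")
```

The result, a droplet roughly 120 nm across, illustrates why this dispensing resolution allows pillar-shaped structures far thinner than a hair to be printed at chosen positions.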
Through this method, the team successfully printed pillar-shaped nanostructures—much thinner than a human hair—directly and vertically at desired locations without the need for complex subtractive processes (carving material away). The core of this technology lies in significantly increasing laser efficiency by making the surface of the printed perovskite nanostructures extremely smooth. By combining the printing process with gas-phase crystallization control technology, the team achieved high-quality structures with nearly single-crystalline alignment. As a result, they were able to realize high-efficiency vertical nanolasers that operate stably with minimal light loss. Additionally, the team demonstrated that the color of the emitted laser light could be precisely tuned by adjusting the height of the nanostructures. Utilizing this, they created laser security patterns invisible to the naked eye—identifiable only with specialized equipment—confirming the potential for commercialization in anti-counterfeiting technology. < 3D Printing of Perovskite Nanolasers > Professor Ji Tae Kim stated, "This technology allows for the direct, high-density implementation of optical computing semiconductors on a chip without complex processing. It will accelerate the commercialization of ultra-high-speed optical computing and next-generation security technologies." The research results, with Dr. Shiqi Hu from the Department of Mechanical Engineering as the first author, were published online on December 6, 2025, in ACS Nano, a prestigious international journal in the field of nanoscience. 
Paper Title: Nanoprinting with Crystal Engineering for Perovskite Lasers DOI: https://doi.org/10.1021/acsnano.5c16906 This research was conducted with support from the Ministry of Science and ICT’s Excellent Young Researcher Program (RS-2025-00556379), the Mid-career Researcher Support Program (RS-2024-00356928), and the InnoCORE AI-based Intelligent Design-Manufacturing Integrated Research Group (N10250154).

KAIST Solves Key Commercialization Challenges of N..
<(From Left) Ph.D. candidate Juhyun Lee, Postdoctoral Researcher Jinuk Kim, (Upper Right) Professor Jinwoo Lee> Anode-free lithium metal batteries, which have attracted attention as candidates for powering electric vehicles, drones, and other next-generation high-performance applications, offer much higher energy density than conventional lithium-ion batteries. However, their short lifespan has made commercialization difficult. KAIST researchers have now moved beyond conventional approaches that required repeatedly changing electrolytes and have succeeded in dramatically extending battery life through electrode surface design alone. KAIST (President Kwang Hyung Lee) announced on the 4th of January that a research team led by Professors Jinwoo Lee and Sung Gap Im of the Department of Chemical and Biomolecular Engineering fundamentally resolved the issue of interfacial instability—the greatest weakness of anode-free lithium metal batteries—by introducing an ultrathin artificial polymer layer with a thickness of 15 nanometers (nm) on the electrode surface. Anode-free lithium metal batteries have a simple structure that uses only a copper current collector instead of graphite or lithium metal at the anode. This design offers advantages such as 30–50% higher energy density compared to conventional lithium-ion batteries, lower manufacturing costs, and simplified processes. However, during the initial charging process, lithium deposits directly onto the copper surface, rapidly consuming the electrolyte and forming an unstable solid electrolyte interphase (SEI), which leads to a sharp reduction in battery lifespan. Rather than changing the electrolyte composition, the research team chose a strategy of redesigning the electrode surface where the problem originates. 
By forming a uniform ultrathin polymer layer on the copper current collector using an iCVD (initiated chemical vapor deposition) process, they found that this layer regulates interactions with the electrolyte, precisely controlling lithium-ion transport and electrolyte decomposition pathways. <Figure 1. Schematic of an ultrathin artificial polymer layer (15 nm thick) introduced onto the electrode surface> In conventional batteries, electrolyte solvents decompose to form soft and unstable organic SEI layers, causing non-uniform lithium deposition and promoting the growth of sharp, needle-like dendrites. In contrast, the polymer layer developed in this study does not readily mix with the electrolyte solvent, inducing the decomposition of salt components rather than solvents. As a result, a rigid and stable inorganic SEI is formed, simultaneously suppressing electrolyte consumption and excessive SEI growth. Using operando Raman spectroscopy and molecular dynamics (MD) simulations, the researchers identified the mechanism by which an anion-rich environment forms at the electrode surface during battery operation, leading to the formation of a stable inorganic SEI. This technology requires only the addition of a thin surface layer without altering electrolyte composition, offering high compatibility with existing manufacturing processes and minimal cost burden. In particular, the iCVD process enables large-area, continuous roll-to-roll production, making it suitable for industrial-scale mass production beyond the laboratory. <Figure 2. 
Design rationale of the current collector-modifying artificial polymer layer and the SEI formation mechanism> Professor Jinwoo Lee stated, “Beyond developing new materials, this study is significant in that it presents a design principle showing how electrolyte reactions and interfacial stability can be controlled through electrode surface engineering,” adding, “This technology can accelerate the commercialization of anode-free lithium metal batteries in next-generation high-energy battery markets such as electric vehicles and energy storage systems (ESS).” This research was conducted with Ph.D. candidate Juhyun Lee and postdoctoral researcher Jinuk Kim, both of the Department of Chemical and Biomolecular Engineering at KAIST, serving as co–first authors. The results were published on December 10, 2025, in Joule, one of the most prestigious journals in the field of energy. ※ Paper title: “A Strategic Tuning of Interfacial Li⁺ Solvation with Ultrathin Polymer Layers for Anode-Free Lithium Metal Batteries,” Authors: Juhyun Lee (KAIST, co–first author), Jinuk Kim (KAIST, co–first author), Jinwoo Lee (KAIST, corresponding author), Sung Gap Im (KAIST, corresponding author), among a total of 18 authors, DOI: 10.1016/j.joule.2025.102226 This research was conducted at the Frontier Research Laboratory, jointly established by KAIST and LG Energy Solution, and was supported by the National Research Foundation of Korea (NRF) Mid-Career Research Program, the Korea Forest Service (Korea Forestry Promotion Institute) Advanced Technology Development Program for High Value-Added Wood Resources, and the KAIST Jang Young Sil Fellowship Program.

KAIST Demonstrates Potential to Predict Drug Side ..
<(From Left) Dr. Jaesang Kim, Professor Seongyun Jeon> Rhabdomyolysis is a condition in which muscle damage—often caused by drug intake—can lead to impaired kidney function and acute kidney failure. However, there have been limitations in directly observing how muscle and kidney damage influence each other simultaneously within the human body. KAIST researchers have developed a new device that can precisely reproduce such inter-organ interactions in a laboratory setting. KAIST (President Kwang Hyung Lee) announced on the 5th of January that a research team led by Professor Seongyun Jeon of the Department of Mechanical Engineering, in collaboration with Professor Gi-Dong Sim’s team from the same department and Professor Sejoong Kim of Seoul National University Hospital, has developed a biomicrofluidic system* that can recreate, in the laboratory, the process by which drug-induced muscle damage leads to kidney injury. *Microfluidic system: a device that reproduces human organ environments on a very small chip This study is particularly significant in that it is the first to precisely reproduce, in a laboratory environment, the cascade of inter-organ reactions in which drug-induced muscle injury leads to kidney damage, using a modular (assembly-type) organ-on-a-chip platform that allows muscle and kidney tissues to be both connected and separated. To recreate conditions similar to those in the human body, the research team developed a structure that connects three-dimensionally engineered muscle tissue with proximal tubule epithelial cells (cells that play a key role in kidney function) on a single small chip. The system is a modular microfluidic chip that allows organ tissues to be connected or disconnected as needed. Cells and tissues are cultured on a small chip in a manner similar to real human organs and are designed to interact with one another. 
In this device, muscle and kidney tissues can be cultured separately under their respective optimal conditions and connected only at the time of experimentation to induce inter-organ interactions. After the experiment, the two tissues can be separated again for independent analysis of changes in each organ. A key feature of the system is that it allows quantitative evaluation of the effects of toxic substances released from damaged muscle on kidney tissue. <Figure 1. Conceptual Image of the Microfluidic System Experiment (Generated by AI)> Using this platform, the researchers applied atorvastatin (a cholesterol-lowering drug) and fenofibrate (a triglyceride-lowering drug), both of which are known clinically to induce muscle damage. As a result, the muscle tissue on the chip showed reduced contractile force and structural disruption, along with increased levels of biomarkers indicative of muscle damage—such as myoglobin* and CK-MM**—which are characteristic changes seen in rhabdomyolysis. *Myoglobin: a protein found in muscle cells that stores oxygen and is released into the blood or culture medium when muscle is damaged **CK-MM (Creatine Kinase-MM): an enzyme abundant in muscle tissue, with higher levels detected as muscle cell destruction increases At the same time, kidney tissue exhibited a decrease in viable cells and an increase in cell death, along with a significant rise in the expression of NGAL* and KIM-1**, biomarkers that increase during acute kidney injury. Notably, the researchers were able to observe the stepwise cascade in which toxic substances released from damaged muscle progressively exacerbated kidney injury. *NGAL: a protein that rapidly increases when kidney cells are damaged **KIM-1: a protein that becomes highly expressed as kidney cells—particularly proximal tubule cells—are increasingly damaged <Figure 2. 
Configuration of the Muscle–Kidney-on-a-Chip (MKoaC) Platform and Analysis of Drug Responses> Professor Seongyun Jeon stated, “This study establishes a foundation for analyzing the interactions and toxic responses occurring between muscle and kidney in a manner closely resembling the human body,” adding, “We expect this platform to enable the early prediction of drug side effects, identification of the causes of acute kidney injury*, and further expansion toward personalized drug safety assessment.” *Acute kidney injury: a condition in which the kidneys suddenly lose their ability to function properly over a short period of time This research, with Jaesang Kim participating as the first author, was published on November 12, 2025, in the international journal Advanced Functional Materials. ※ Paper title: “Implementation of Drug-Induced Rhabdomyolysis and Acute Kidney Injury in Microphysiological System,” DOI: 10.1002/adfm.202513519 This study was supported by the Ministry of Science and ICT and the National Research Foundation of Korea, among others.

Opening the Door to B Cell-Based Cancer-Rememberin..
< (From left) KAIST Professor Jung Kyoon Choi, Dr. Jeong Yeon Kim, and Dr. Jin Hyeon An > Neoantigens are unique markers that distinguish cancer cells from normal cells. By adding B cell reactivity, cancer vaccines can move beyond one-time attacks and short-term memory to provide long-term immunity that "remembers" cancer, effectively preventing recurrence. KAIST’s research team has developed an AI-based personalized cancer vaccine design technology that makes this possible and optimizes anticancer effects for each individual. KAIST announced on January 2nd that Professor Jung Kyoon Choi’s research team from the Department of Bio and Brain Engineering, in collaboration with Neogen Logic Co., Ltd., has developed a new AI model to predict neoantigens—a core element of personalized cancer vaccine development—and clarified the importance of B cells in cancer immunotherapy. The research team overcame the limitations of existing neoantigen discovery, which relied primarily on predicting T cell reactivity, and developed an AI-based neoantigen prediction technology that integrally considers both T cell and B cell reactivity. This technology has been validated through large-scale cancer genome data, animal experiments, and clinical trial data for cancer vaccines. It is evaluated as the first AI technology capable of quantitatively predicting B cell reactivity to neoantigens. Neoantigens are antigens composed of protein fragments derived from cancer cell mutations. Because they possess cancer-cell specificity, they have gained attention as a core target for next-generation cancer vaccines. Companies such as Moderna and BioNTech developed their COVID-19 vaccines on the mRNA platforms they had secured while advancing neoantigen-based cancer vaccine technology, and they are currently conducting active clinical trials for cancer vaccines alongside global pharmaceutical companies. 
However, current cancer vaccine technology focuses mostly on T cell-centered immune responses and does not sufficiently reflect the immune responses mediated by B cells. In fact, the research team of Professors Mark Yarchoan and Elizabeth Jaffee at Johns Hopkins University pointed out in Nature Reviews Cancer in May 2025 that “despite accumulating evidence regarding the role of B cells in tumor immunity, most cancer vaccine clinical trials still focus only on T cell responses.” The research team’s new AI model overcomes existing limitations by learning the structural binding characteristics between mutant proteins and B cell receptors (BCR) to predict B cell reactivity. In particular, an analysis of cancer vaccine clinical trial data confirmed that integrating B cell responses can significantly enhance anti-tumor immune effects in actual clinical settings. < Schematic Background of the Technology > Professor Jung Kyoon Choi stated, “Together with Neogen Logic Co., Ltd., which is currently commercializing neoantigen AI technology, we are conducting pre-clinical development of a personalized cancer vaccine platform and are preparing to submit an FDA IND* with the goal of entering clinical trials in 2027.” He added, “We will enhance the scientific completeness of cancer vaccine development based on our proprietary AI technology and push forward the transition to the clinical stage step-by-step.” *FDA IND: The procedure for obtaining permission from the U.S. Food and Drug Administration (FDA) to conduct clinical trials before administering a new drug to humans for the first time. Dr. Jeong Yeon Kim and Dr. Jin Hyeon An participated as co-first authors in this study. The research results were published in the international scientific journal Science Advances on December 3rd. ※ Paper Title: B cell–reactive neoantigens boost antitumor immunity, DOI: 10.1126/sciadv.adx8303

Presenting a Brain-Like Next-Generation AI Semicon..
< (From left) Professor Sanghun Jeon, Ph.D. candidate Seungyeob Kim, Postdoctoral researcher Hongrae Cho, Ph.D. candidates Sang-ho Lee and Taeseung Jung, and M.S. candidate Seonjae Park > With the advancement of Artificial Intelligence (AI), the importance of ultra-low-power semiconductor technology that integrates sensing, computation, and memory into a single unit is growing. However, conventional structures face challenges such as power loss due to data movement, latency, and limitations in memory reliability. A Korean research team has drawn international academic attention by presenting core technologies for an integrated ‘Sensor–Compute–Store’ AI semiconductor to solve these issues. KAIST announced on December 31st that Professor Sanghun Jeon’s research team from the School of Electrical Engineering presented a total of six papers at the ‘International Electron Devices Meeting (IEEE IEDM 2025)’—the world’s most prestigious semiconductor conference—held in San Francisco from December 8 to 10. Among these, one paper was selected as a Highlight Paper and another as a Top Ranked Student Paper. Highlight Paper: Monolithically Integrated Photodiode–Spiking Circuit for Neuromorphic Vision with In-Sensor Feature Extraction [Link: https://iedm25.mapyourshow.com/8_0/sessions/session-details.cfm?scheduleid=255] Top Ranked Student Paper: A Highly Reliable Ferroelectric NAND Cell with Ultra-thin IGZO Charge Trap Layer; Trap Profile Engineering for Endurance and Retention Improvement [Link: https://iedm25.mapyourshow.com/8_0/sessions/session-details.cfm?scheduleid=124] The research on the M3D-integrated neuromorphic vision sensor, selected as the Highlight Paper, presents a semiconductor that stacks the functions of the human eye and brain within a single chip. 
Simply put, the sensors that detect light and the circuits that process signals like a brain are made into very thin layers and stacked vertically in one chip, implementing a structure where the processes of 'seeing' and 'judging' occur simultaneously. Through this, the research team completed the world's first "In-Sensor Spiking Convolution" platform, in which AI computation that "sees and judges at the same time" takes place directly within the camera sensor. < Figure 1. Summary of research on vertically stacked optical signal-to-spike frequency converter for AI > < Figure 2. Representative diagram of the development of a 2T-2C near-pixel analog computing cell based on oxide thin-film transistors > Previously, this technology required several stages: capturing an image (sensor), converting it to digital (ADC), storing it in memory (DRAM), and then calculating (CNN). However, the new technology eliminates unnecessary data movement because the calculation happens immediately within the sensor. As a result, it has become possible to implement real-time, ultra-low-power Edge AI with significantly reduced power consumption and dramatically improved response speeds. Based on this approach, the research team presented six core technologies at the conference covering all layers of AI semiconductors, from input to storage. Using existing semiconductor processes, they created both neuromorphic semiconductors that operate like the brain on much less electricity and next-generation memory optimized for AI. First, on the sensor side, they designed the system so that judgment occurs at the sensor stage rather than having separate components for capturing images and calculating. Consequently, power consumption decreased and response speeds increased compared to the conventional method of taking a photo and sending it to another chip for calculation. < Figure 3. 
Schematic diagram of a next-generation biomimetic tactile system using neuromorphic devices > < Figure 4. Representative diagram of NC-NAND development research based on Ultra-thin-Mo and Sub-3.5 nm HZO > Furthermore, in the field of memory, they implemented a next-generation NAND flash that uses the same materials but operates at lower voltages, lasts longer, and can store data stably even when the power is turned off. Through this, they presented a foundational technology that satisfies the requirements for high-capacity, high-reliability, and low-power memory necessary for AI. < Figure 5. Representative diagram of next-generation 3D FeNAND memory development research > < Figure 6. Representative diagram of research on charge behavior characterization and quantitative analysis methodology for next-generation FeNAND memory > Professor Sanghun Jeon, who led the research, stated, "This research is significant in that it demonstrates that the entire hierarchy can be integrated into a single material and process system, moving away from the existing AI semiconductor structure where sensing, computation, and storage were designed separately." He added, "Moving forward, we plan to expand this into a next-generation AI semiconductor platform that encompasses everything from ultra-low-power Edge AI to large-scale AI memory." Meanwhile, this research was conducted with support from basic research projects of the Ministry of Science and ICT and the National Research Foundation of Korea, as well as the Center for Heterogeneous Integration of Extreme-scale & Property Semiconductors (CH³IPS). It was carried out in collaboration with Samsung Electronics, Kyungpook National University, and Hanyang University.
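For readers curious how "seeing and judging at the same time" can collapse the capture (sensor) → digitize (ADC) → store (DRAM) → compute (CNN) pipeline described in this article, here is a toy Python sketch of a spiking convolution. It is my own minimal construction, not the team's circuit (which operates on analog photodiode currents in hardware), and the frame, kernel, and threshold values are purely illustrative.

```python
# Toy model of in-sensor spiking convolution: photocurrent values are
# convolved with a feature kernel and thresholded into spikes in one step,
# with no intermediate digital frame stored along the way.

def spiking_convolution(photocurrents, kernel, threshold):
    """Valid 2D convolution (cross-correlation, as in CNNs) plus spiking."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(photocurrents), len(photocurrents[0])
    spikes = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            acc = sum(
                photocurrents[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
            row.append(1 if acc >= threshold else 0)  # fire a spike or stay silent
        spikes.append(row)
    return spikes

# A vertical brightness edge in the simulated scene triggers spikes under
# an edge-detecting kernel; uniform regions stay silent.
frame = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
edge_kernel = [[-1, 1], [-1, 1]]  # responds to left-to-right brightness steps
print(spiking_convolution(frame, edge_kernel, threshold=10))  # → [[0, 1, 0], [0, 1, 0]]
```

The output spike map is the only data that leaves the "sensor": the feature-extraction result is produced where the light is detected, which is the data-movement saving the article attributes to the monolithically integrated photodiode–spiking circuit.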