
World's First Quantum Computing for Lego-like Desi..
<(From Left to Right) Professor Jihan Kim, Ph.D. candidate Sinyoung Kang, Ph.D. candidate Younghoon Kim from the Department of Chemical and Biomolecular Engineering>

Multivariate porous materials (MTV) are like a 'collection of Lego blocks,' allowing for customized design at a molecular level to freely create desired structures. Using these materials enables a wide range of applications, including energy storage and conversion, which can significantly contribute to solving environmental problems and advancing next-generation energy technologies. Our research team has, for the first time in the world, introduced quantum computing to solve the difficult problem of designing complex MTVs, opening an innovative path for the development of next-generation catalysts, separation membranes, and energy storage materials.

On September 9, Professor Jihan Kim's research team at our university's Department of Chemical and Biomolecular Engineering announced the development of a new framework that uses a quantum computer to efficiently explore the design space of millions of multivariate porous materials (hereafter, MTV). MTV porous materials are structures formed by the combination of two or more organic ligands (linkers) and building-block materials such as metal clusters. They have great potential for use in the energy and environmental fields, and their diverse compositional combinations allow for the design and synthesis of new structures, with applications including gas adsorption, mixed-gas separation, sensors, and catalysts. However, as the number of components increases, the number of possible combinations grows exponentially, so it has been impossible to design and predict the properties of complex MTV structures using the conventional method of checking every single structure with a classical computer.
The research team represented the complex porous structure as a 'network (graph) drawn on a map' and then converted each connection point and block type into qubits that a quantum computer can handle. They then asked the quantum computer to solve the problem: "Which blocks should be arranged at what ratio to create the most stable structure?" <Figure1. Overall schematics of the quantum computing algorithm to generate feasible MTV porous materials. The algorithm consists of two mapping schemes (qubit mapping and topology mapping) to allocate building blocks in a given connectivity. Different configurations go through a predetermined Hamiltonian, which is comprised of a ratio term, occupancy term, and balance term, to capture the most feasible MTV porous material> Because quantum computers can calculate multiple possibilities simultaneously, it's like spreading out millions of Lego houses at once and quickly picking out the sturdiest one. This allows them to explore a vast number of possibilities—which a classical computer would have to calculate one by one—with far fewer resources. The research team also conducted experiments on four different MTV structures that have been previously reported. The results from the simulation and the IBM quantum computer were identical, demonstrating that the method "actually works well." <Figure2. VQE sampling results for experimental structures and the structures that reproduce them, using IBM Qiskit's classical simulator. The experimental structure is predicted to be the most probable outcome of the VQE algorithm's calculation, meaning it will be generated as the most stable form of the structure.> In the future, the team plans to combine this method with machine learning to expand it into a platform that considers not only simple structural design but also synthesis feasibility, gas adsorption performance, and electrochemical properties simultaneously. 
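As a rough illustration of the approach described above — a Hamiltonian with ratio, occupancy, and balance terms scoring each arrangement of building blocks on a connectivity graph — here is a minimal classical sketch in Python. The functional forms of the terms and the brute-force search standing in for the quantum (VQE) optimization are illustrative assumptions, not the paper's actual formulation.

```python
from itertools import product

def hamiltonian_cost(assignment, target_ratio, penalty=10.0):
    """Toy cost for assigning one of two block types (0 or 1) to sites
    on a ring-shaped connectivity graph.

    Mirrors the three penalty terms named in the article:
    - ratio term: deviation of the block-type ratio from a target,
    - occupancy term: each site holds exactly one block (implicit here,
      since every site gets a single 0/1 label),
    - balance term: penalize identical blocks on adjacent sites.
    The exact functional forms are assumptions for illustration.
    """
    n = len(assignment)
    ratio_term = (sum(assignment) / n - target_ratio) ** 2
    balance_term = sum(assignment[i] == assignment[(i + 1) % n]
                       for i in range(n)) / n
    return penalty * ratio_term + balance_term

def brute_force_ground_state(n_sites, target_ratio):
    """Classical stand-in for the quantum search: enumerate all
    2**n_sites configurations and keep the lowest-cost one."""
    return min(product([0, 1], repeat=n_sites),
               key=lambda a: hamiltonian_cost(a, target_ratio))

best = brute_force_ground_state(6, 0.5)
print(best)  # an alternating arrangement minimizes both terms
```

The point of the quantum algorithm is precisely to avoid the exponential enumeration in `brute_force_ground_state`, which is why the classical approach breaks down as the number of components grows.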
Professor Jihan Kim said, "This research is the first case to solve the bottleneck of complex multivariate porous material design using quantum computing." He added, "This achievement is expected to be widely applied as a customized material design technology in fields where precise composition is key, such as carbon capture and separation, selective catalytic reactions, and ion-conducting electrolytes, and it can be flexibly expanded to even more complex systems in the future." Ph.D. candidates Sinyoung Kang and Younghoon Kim of the Department of Chemical and Biomolecular Engineering participated as co-first authors in this study. The research results were published in the online edition of the international journal ACS Central Science on August 22. Paper Title: Quantum Computing Based Design of Multivariate Porous Materials DOI: https://doi.org/10.1021/acscentsci.5c00918 Meanwhile, this research was supported by the Ministry of Science and ICT's Mid-Career Researcher Support Program and the Heterogeneous Material Support Program.

Making Truly Smart AI Agents a Reality with the Wo..
<(From Left) Engineer Jeongho Park from GraphAI, Ph.D. candidate Geonho Lee, Prof. Min-Soo Kim from KAIST>

For a long time, companies have used relational databases (DBs) to manage data. However, with the increasing use of large AI models, integration with graph databases is now required. This process, however, reveals limitations such as cost burden, data inconsistency, and difficulty in processing complex queries. Our research team has succeeded in developing a next-generation graph-relational DB system that solves these problems at once, and it is expected to be applied to industrial sites immediately. With this technology, AI will be able to reason about complex relationships in real time, going beyond simple searches, making smarter AI services possible.

The research team led by Professor Min-Soo Kim announced on September 8 that it has developed a new DB system named 'Chimera' that fully integrates relational DBs and graph DBs to efficiently execute graph-relational queries. Chimera has proven its world-class performance by processing queries at least 4 times and up to 280 times faster than existing systems on international standard benchmarks. Unlike relational DBs, graph DBs represent data as vertices (nodes) and edges (connections), which gives them a strong advantage in analyzing and reasoning about complexly intertwined information such as people, events, places, and time. Thanks to this feature, their use is rapidly spreading in fields such as AI agents, SNS, finance, and e-commerce. With the growing demand for complex query processing across relational and graph DBs, a new standard language, 'SQL/PGQ,' which extends the relational query language (SQL) with graph query functions, has also been proposed.
SQL/PGQ is a new standard language that adds graph traversal capabilities to the existing database language (SQL) and is designed to query both table-like data and connected information such as people, events, and places at once. Using this, complex relationships such as 'which company does my friend's friend work for?' can be searched much more simply than before. <Diagram (a): This diagram shows the typical architecture of a graph query processing system based on a traditional RDBMS. It has separate dedicated operators for graph traversal and an in-memory graph structure, while attribute joins are handled by relational operators. However, this structure makes it difficult to optimize execution plans for hybrid queries because traversal and joins are performed in different pipelines. Additionally, for large-scale graphs, the in-memory structure creates memory constraints, and the method of extracting graph data from relational data limits data freshness. Diagram (b): This diagram shows Chimera's integrated architecture. Chimera introduces new components to the existing RDBMS architecture: a traversal-join operator that combines graph traversal and joins, a disk-based graph storage, and a dedicated graph access layer. This allows it to process both graph and relational data within a single execution flow. Furthermore, a hybrid query planner integrally optimizes both graph and relational operations. Its shared transaction management and disk-based storage structure enable it to handle large-scale graph databases without memory constraints while maintaining data freshness. This architecture removes the bottlenecks of existing systems by flexibly combining traversal, joins, and mappings in a single execution plan, thereby simultaneously improving performance and scalability.> The problem is that existing approaches have relied on either trying to mimic graph traversal with join operations or processing by pre-building a graph view in memory. 
In the former case, performance drops sharply as the traversal depth increases, and in the latter case, execution fails due to insufficient memory if the data size grows even slightly. Furthermore, changes to the original data are not immediately reflected in the view, resulting in poor data freshness and the inefficiency of having to combine relational and graph results separately. The KAIST research team's 'Chimera' fundamentally solves these limitations. The research team redesigned both the storage layer and the query processing layer of the database. First, they introduced a 'dual-store structure' that operates a graph-specific storage and a relational data storage together. They then applied a 'traversal-join operator' that processes graph traversal and relational operations simultaneously, allowing complex operations to be executed efficiently in a single system. Thanks to this, Chimera has established itself as the world's first graph-relational DB system that integrates the entire process from data storage to query processing into one. As a result, it recorded world-class performance on the international standard benchmark 'LDBC Social Network Benchmark (SNB),' being at least 4 times and up to 280 times faster than existing systems. Query failure due to insufficient memory does not occur no matter how large the graph data becomes, and since it does not use views, there is no delay problem in terms of data freshness. Professor Min-Soo Kim stated, "As the connections between data become more complex, the need for integrated technology that encompasses both graph and relational DBs is increasing. Chimera is a technology that fundamentally solves this problem, and we expect it to be widely used in various industries such as AI agents, finance, and e-commerce." The study was co-authored by Geonho Lee, a Ph.D.
student in the KAIST School of Computing, as the first author, and Jeongho Park, an engineer at Professor Kim's startup GraphAI Co., Ltd., as the second author, with Professor Kim as the corresponding author. The research results were presented on September 1st at VLDB, a world-renowned international academic conference in the field of databases. In particular, the newly developed Chimera technology is expected to have an immediate industrial impact as a core technology for implementing 'high-performance AI agents based on RAG (a smart AI assistant with search capabilities),' which will be applied to 'AkasicDB,' a vector-graph-relational DB system scheduled to be released by GraphAI Co., Ltd. *Paper title: Chimera: A System Design of Dual Storage and Traversal-Join Unified Query Processing for SQL/PGQ *DOI: https://dl.acm.org/doi/10.14778/3705829.3705845 This research was supported by the Ministry of Science and ICT's IITP SW Star Lab and the National Research Foundation of Korea's Mid-Career Researcher Program.
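To make the article's "which company does my friend's friend work for?" example concrete, here is a toy Python sketch of the two-hop pattern such a graph query expresses. All names and data are invented, and the SQL/PGQ-style pattern in the docstring is only indicative of the declarative form, not Chimera's actual syntax.

```python
# Toy data mimicking the article's example: people connected by KNOWS
# edges, and people linked to companies by WORKS_AT edges.
knows = {
    "alice": ["bob"],
    "bob": ["carol"],
    "carol": [],
}
works_at = {"alice": "AcmeCo", "bob": "Initech", "carol": "Globex"}

def friends_of_friends_employers(person):
    """Two-hop graph traversal: employers of a person's friends' friends.

    Roughly what a SQL/PGQ pattern along the lines of
        MATCH (p)-[:KNOWS]->()-[:KNOWS]->(q)-[:WORKS_AT]->(c)
    expresses declaratively; here it is spelled out imperatively.
    """
    return sorted({works_at[fof]
                   for friend in knows.get(person, [])
                   for fof in knows.get(friend, [])
                   if fof in works_at})

print(friends_of_friends_employers("alice"))  # → ['Globex']
```

Mimicking such a traversal with relational self-joins requires one join per hop, which is why join-based approaches slow down sharply as the traversal depth grows — the bottleneck Chimera's traversal-join operator is designed to remove.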

KAIST Develops Smart Patch That Can Run Tests Usin..
<(From Left) Ph.D. candidate Jaehun Jeon, Professor Ki-Hun Jeong of the Department of Bio and Brain Engineering> An era is opening where it's possible to precisely assess the body’s health status using only sweat instead of blood tests. A KAIST research team has developed a smart patch that can precisely observe internal changes through sweat when simply attached to the body. This is expected to greatly contribute to the advancement of chronic disease management and personalized healthcare technologies. KAIST (President Kwang Hyung Lee) announced on September 7th that a research team led by Professor Ki-Hun Jeong of the Department of Bio and Brain Engineering has developed a wearable sensor that can simultaneously and in real-time analyze multiple metabolites in sweat. Recently, research on wearable sensors that analyze metabolites in sweat to monitor the human body’s precise physiological state has been actively pursued. However, conventional “label-based” sensors, which require fluorescent tags or staining, and “label-free” methods have faced difficulties in effectively collecting and controlling sweat. Because of this, there have been limitations in precisely observing metabolite changes over time in actual human subjects. <Figure 1. Flexible microfluidic nanoplasmonic patch (left). Sequential sample collection using the patch (center) and label-free metabolite profiling (right). In this study, we designed and fabricated a fully flexible nanoplasmonic microfluidic patch for label-free sweat analysis and performed SERS signal measurement and analysis directly from human sweat. Through this, we propose a platform capable of precisely identifying physiological changes induced by physical activity and dietary conditions.> To overcome these limitations, the research team developed a thin and flexible wearable sweat patch that can be directly attached to the skin.
This patch incorporates both microchannels for collecting sweat and an ultrafine nanoplasmonic structure* that analyzes sweat components using light, without labels. Thanks to this, multiple sweat metabolites can be simultaneously analyzed without the need for separate staining or labels, with just one patch application. * Nanoplasmonic structure: An optical sensor structure where nanoscale metallic patterns interact with light, designed to sensitively detect the presence or changes in concentration of molecules in sweat. The patch was created by combining nanophotonics technology, which manipulates light at the nanometer scale (one-hundred-thousandth the thickness of a human hair) to read molecular properties, with microfluidics technology, which precisely controls sweat in channels thinner than a hair. In other words, within a single sweat patch, microfluidic technology enables sweat to be collected sequentially over time, allowing for the measurement of changes in various metabolites without any labeling process. Inside the patch are six to seventeen chambers (storage spaces), and sweat secreted during exercise flows along the microfluidic structures and fills each chamber in order. <Figure 2. Example of the fabricated patch worn (left) and images of sequential sweat collection and storage (right). By designing precise microfluidic channels based on capillary burst valves, sequential sweat collection was implemented, which enabled label-free analysis of metabolite changes associated with exercise and diet.> The research team applied the patch to actual human subjects and succeeded in continuously tracking the changing components of sweat over time during exercise.
Previously, only about two components could be checked simultaneously through a label-free approach, but in this study, they demonstrated for the first time in the world that three metabolites—uric acid, lactic acid, and tyrosine—can be quantitatively analyzed simultaneously, as well as how they change depending on exercise and diet. In particular, by using artificial intelligence analysis methods, they were able to accurately distinguish signals of desired substances even within the complex components of sweat. <Figure 3. Label-free analysis graphs of metabolite changes in sweat induced by exercise. Using the fabricated patch in combination with a machine learning model, metabolite concentrations in the sweat of actual subjects were analyzed. Comparison of sweat samples collected before and after consumption of a purine-rich diet, under exercise conditions, revealed label-free detection of changes in uric acid and tyrosine levels, as well as exercise-induced lactate increase. Validation experiments using commercial kits further confirmed the quantification accuracy, supporting the clinical applicability of this platform> Professor Ki-Hun Jeong said, “This research lays the foundation for precisely monitoring internal metabolic changes over time without blood sampling by combining nanophotonics and microfluidics technologies.” He added, “In the future, it can be expanded to diverse fields such as chronic disease management, drug response tracking, environmental exposure monitoring, and the discovery of next-generation biomarkers for metabolic diseases.” This research was conducted with Jaehun Jeon, a PhD student, as the first author and was published online in Nature Communications on August 27. 
Paper Title: “All-Flexible Chronoepifluidic Nanoplasmonic Patch for Label-Free Metabolite Profiling in Sweat” DOI: https://doi.org/10.1038/s41467-025-63510-2 This achievement was supported by the National Research Foundation of Korea, the Ministry of Science and ICT, the Ministry of Health and Welfare, and the Ministry of Trade, Industry and Energy.
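As a loose illustration of the role the machine-learning analysis plays in the study above — separating the contributions of individual metabolites from a composite sweat spectrum — here is a minimal linear-unmixing sketch in pure Python. The "reference spectra" and concentrations are invented, and the actual SERS analysis and models in the paper are far more sophisticated.

```python
def unmix(mixture, ref_a, ref_b):
    """Solve mixture ≈ ca*ref_a + cb*ref_b by linear least squares
    (2x2 normal equations). A toy stand-in for the data-driven signal
    separation the article describes."""
    aa = sum(a * a for a in ref_a)
    bb = sum(b * b for b in ref_b)
    ab = sum(a * b for a, b in zip(ref_a, ref_b))
    am = sum(a * m for a, m in zip(ref_a, mixture))
    bm = sum(b * m for b, m in zip(ref_b, mixture))
    det = aa * bb - ab * ab
    return (am * bb - bm * ab) / det, (bm * aa - am * ab) / det

# Invented "reference spectra" with peaks at different positions
lactate = [0, 1, 3, 1, 0, 0, 0]
urate   = [0, 0, 0, 0, 1, 2, 1]
# Measured signal: 2 parts lactate, 0.5 parts urate
mixture = [2 * x + 0.5 * y for x, y in zip(lactate, urate)]
ca, cb = unmix(mixture, lactate, urate)
print(round(ca, 3), round(cb, 3))  # → 2.0 0.5
```

Real sweat spectra contain many overlapping, noisy components, which is why the study uses a trained model rather than a fixed two-component fit; the algebra above only conveys the underlying idea.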

Professor Jae-woong Jeong Wins September's Scienti..
<Professor Jae-Woong Jeong from the Department of Electrical and Electronic Engineering> The Ministry of Science and ICT and the National Research Foundation of Korea have announced that Professor Jae-Woong Jeong from the KAIST Department of Electrical and Electronic Engineering has been selected as the September recipient of the "Scientist of the Month" award. The "Scientist of the Month" award recognizes researchers who have made a significant contribution to the development of science and technology by creating unique R&D achievements over the past three years. The award is given to one person each month and includes a commendation from the Minister of Science and ICT and a 10 million KRW prize, funded by the Science and Technology Promotion Fund/Lottery Fund of the Ministry of Science and ICT. In the lead-up to "World Patient Safety Day (September 17)," the Ministry of Science and ICT and the National Research Foundation selected Professor Jae-Woong Jeong as the award recipient for his contribution to healthcare innovation through convergence research on wearable and implantable electronic devices and medical instruments, including the development of an intravenous (IV) needle that softens in response to body temperature to enhance patient safety. Intravenous injection is a treatment method that involves directly injecting medication into a blood vessel. It is widely used in the medical field due to its ability to provide rapid and continuous drug effects. However, conventional IV needles, made of rigid metal or plastic, can damage blood vessel walls or cause complications like phlebitis. Furthermore, there is a risk of needle-stick injuries and subsequent disease transmission for medical professionals during the disposal process. Professor Jae-Woong Jeong developed a variable-stiffness* needle that is rigid at room temperature but softens like biological tissue when inserted into the body.
This innovation utilizes the unique property of the liquid metal gallium, which changes from a solid to a liquid phase in response to body temperature. * Variable-stiffness: The characteristic of being able to adjust the level of rigidity (stiffness) according to a situation or condition. The variable-stiffness needle not only ensures a patient's free movement but also maintains a soft state at room temperature after use, preventing needle-stick accidents for medical professionals and fundamentally eliminating the issue of unethical needle reuse. < An intravenous needle that softens with body temperature. Intravenous injection is a treatment method that involves directly injecting medication into a blood vessel, which allows for a rapid and continuous supply of drugs, making it a globally accepted form of patient care. This research utilized the property of liquid metal gallium, which changes from a solid to a liquid state in response to body temperature, to develop a variable-stiffness intravenous needle that is rigid but softens like tissue upon insertion into the body. This needle allows for stable drug delivery without damaging blood vessels, even when the patient moves. Furthermore, the irreversible softening due to the supercooling phenomenon of gallium can fundamentally prevent post-use needle-stick injuries or unethical reuse, contributing to the safety of both patients and medical staff. This variable-stiffness technology is expected to be widely utilized in the implementation of various wearable and implantable devices that can change their properties according to different situations and purposes. > Furthermore, Professor Jae-Woong Jeong focused on the phenomenon in which the temperature of surrounding tissue decreases when a drug leaks during intravenous (IV) injection.
He integrated a nanofilm temperature sensor into an IV needle to monitor local body temperature in real time, enabling immediate detection of IV drug leakage. This research achievement, which presents a new vision for promoting patient health and ensuring medical staff safety as required by the World Health Organization (WHO), was published as the cover article of the international journal Nature Biomedical Engineering in August 2024. Professor Jae-Woong Jeong stated, “This research is highly significant as it proposes a way to overcome the problems caused by conventional rigid medical needles and solves the infection risks from needle-stick injuries or reuse.” He added, “I will continue to dedicate my efforts to R&D so that variable-stiffness needle technology can evolve into a core technology in the medical field, enhancing the safety of both patients and medical professionals.” To provide more robust support to researchers who lead such outstanding achievements, the Ministry of Science and ICT has prepared a record-high R&D budget of 11.8 trillion KRW (government proposal), including the Life Sciences (Bio) Medical Technology Development Project (361.1 billion KRW in '25 → 434.3 billion KRW in '26, proposed). The Ministry plans to strengthen investment in future industries, such as advanced life sciences, and will further reinforce rewards and recognition for researchers who produce excellent results to foster a researcher-centric R&D ecosystem.
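The leak-detection idea described in this article — flagging a local temperature reading that falls below the recent baseline — can be sketched as a simple signal-processing loop. The window size, threshold, and readings below are illustrative assumptions, not values from the study.

```python
def detect_leak(temps, window=5, drop_threshold=1.0):
    """Return the index of the first reading that falls more than
    `drop_threshold` °C below the running mean of the previous
    `window` readings, or None if no such drop occurs.

    A toy version of using an on-needle temperature sensor to spot
    the tissue cooling caused by drug leakage."""
    for i in range(window, len(temps)):
        baseline = sum(temps[i - window:i]) / window
        if baseline - temps[i] > drop_threshold:
            return i
    return None

# Invented readings: stable body temperature, then a sudden local drop
readings = [36.5, 36.6, 36.5, 36.4, 36.5, 36.5, 34.8, 34.2]
print(detect_leak(readings))  # → 6 (first reading after the drop)
```

A real system would need to reject slower, benign temperature drifts (ambient changes, patient movement), so the single fixed threshold here is only the starting point of the idea.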

Semiconductor Leadership Spotlighted in Nature Sis..
<(From Left) Prof. Shinhyun Choi, Prof. Young Gyu Yoon, Prof. Seunghyub Yoo from the School of Electrical Engineering, Prof. Kyung Min Kim from the Department of Materials Science and Engineering> KAIST (President Kwang Hyung Lee) announced on the 5th of September that its semiconductor research and education achievements were highlighted on August 18 in Nature Reviews Electrical Engineering, a sister journal of the world-renowned scientific journal Nature. Title: Semiconductor-related research and education at KAIST DOI: 10.1038/s44287-025-00204-3 This special "Focus" article provides a detailed look at KAIST's leadership in next-generation semiconductor research, talent development, and global industry-academia collaboration, presenting a future blueprint for Korea's semiconductor industry. Editor Silvia Conti personally conducted the interviews, with KAIST professors including Kyung Min Kim from the Department of Materials Science and Engineering, and Young Gyu Yoon, Shinhyun Choi, Sung-Yool Choi, and Seunghyub Yoo from the School of Electrical Engineering, participating. KAIST operates educational programs such as the School of Electrical Engineering, the Department of Semiconductor Systems Engineering, and the Graduate School of Semiconductor Engineering. It is leading next-generation semiconductor research in areas like neuromorphic computing, in-memory computing, and 2D new material-based devices. Building on this foundation, researchers are developing new architectures and devices that transcend the limitations of existing silicon, driving innovation in various application fields such as artificial intelligence, robotics, and medicine. Notably, research on implementing biological functions like synapses and neurons into hardware platforms using new types of memory such as RRAM and PRAM is gaining international attention. This work opens up possibilities for applications in robots, edge computing, and on-sensor AI systems.
Furthermore, KAIST has operated EPSS (Samsung Advanced Human Resources Training Program) and KEPSI (SK Hynix Semiconductor Advanced Human Resources Training Program) based on long-standing partnerships with Samsung Electronics and SK Hynix. Graduate students in these programs receive full scholarships and are guaranteed employment after graduation. The Department of Semiconductor Systems Engineering, newly established in 2022, selects 100 undergraduate students each year to provide systematic education. Additionally, the KAIST–Samsung Electronics Industry-Academia Cooperation Center, which involves more than 70 labs annually, serves as a long-term hub for joint industry-academia research, contributing to solving critical issues within the industry. The article emphasizes KAIST's growth beyond a simple research institution into an international research hub. KAIST is enhancing diversity and inclusivity by expanding the hiring of female faculty and establishing a Global Talent Visa Center to support foreign professors and students, attracting outstanding talent from around the world. As a core university within the Daedeok Research Complex (Daedeok Innopolis), it serves as the heart of "Korea's Silicon Valley." KAIST researchers predict that the future of semiconductor technology is not in simple device miniaturization but in a convergent approach involving neuromorphic technology, 3D packaging technology, and AI applications. This article shows that KAIST's strategic research direction and leadership are gaining attention from both the global academic and industrial communities. Professor Kyung Min Kim stated, "I am very pleased that KAIST's next-generation semiconductor research and talent development strategy has been widely publicized to domestic and international academia and industry through this article, and we will continue to contribute to the development of future semiconductor technology with innovative convergence research." 
KAIST President Kwang Hyung Lee remarked, "Being highlighted for our semiconductor research and education achievements in a world-renowned science journal is a testament to the dedication and pioneering spirit of our university members. I am delighted that KAIST's growth as a global research hub is gaining recognition, and we will continue to expand industry-academia collaboration to lead next-generation semiconductor innovation and play a key role in helping Korea become a future semiconductor powerhouse."

Batteries Make 12-Minute Charge for 800 km Drive a R..
<Photo 1. (From left in the front row) Dr. Hyeokjin Kwon from Chemical and Biomolecular Engineering, Professor Hee Tak Kim, and Professor Seong Su Kim from Mechanical Engineering> Korean researchers have ushered in a new era for electric vehicle (EV) battery technology by solving the long-standing dendrite problem in lithium-metal batteries. While conventional lithium-ion batteries are limited to a maximum range of 600 km, the new battery can achieve a range of 800 km on a single charge, a lifespan of over 300,000 km, and a super-fast charging time of just 12 minutes. KAIST (President Kwang Hyung Lee) announced on the 4th of September that a research team from the Frontier Research Laboratory (FRL), a joint project between Professor Hee Tak Kim from the Department of Chemical and Biomolecular Engineering and LG Energy Solution, has developed a "cohesion-inhibiting new liquid electrolyte" original technology that can dramatically increase the performance of lithium-metal batteries. Lithium-metal batteries replace the graphite anode, a key component of lithium-ion batteries, with lithium metal. However, lithium metal has a technical challenge known as dendrite, which makes it difficult to secure the battery's lifespan and stability. Dendrites are tree-like lithium crystals that form on the anode surface during battery charging, negatively affecting battery performance and stability. This dendrite phenomenon becomes more severe during rapid charging and can cause an internal short-circuit, making it very difficult to implement a lithium-metal battery that can be recharged under fast-charging conditions. The FRL joint research team identified non-uniform interfacial cohesion on the lithium-metal surface as the fundamental cause of dendrite formation during rapid charging. To solve this problem, they developed a "cohesion-inhibiting new liquid electrolyte."
The new liquid electrolyte utilizes an anion structure with a weak binding affinity to lithium ions (Li⁺), minimizing the non-uniformity of the lithium interface. This effectively suppresses dendrite growth even during rapid charging. This technology overcomes the slow charging speed, which was a major limitation of existing lithium-metal batteries, while maintaining high energy density. It enables a long driving range and stable operation even with fast charging. Je-Young Kim, CTO of LG Energy Solution, said, "The four years of collaboration between LG Energy Solution and KAIST through FRL are producing meaningful results. We will continue to strengthen our industry-academia collaboration to solve technical challenges and create the best results in the field of next-generation batteries." <Figure 1. Infographic on the KAIST-LGES FRL Lithium-Metal Battery Technology> Professor Hee Tak Kim of the KAIST Department of Chemical and Biomolecular Engineering commented, "This research has become a key foundation for overcoming the technical challenges of lithium-metal batteries by understanding the interfacial structure. It has overcome the biggest barrier to the introduction of lithium-metal batteries for electric vehicles." The study, with Dr. Hyeokjin Kwon from the KAIST Department of Chemical and Biomolecular Engineering as the first author, was published in the prestigious journal Nature Energy on September 3. Nature Energy: According to the Journal Impact Factor announced by Clarivate Analytics in 2024, it ranks first among 182 energy journals and 23rd among more than 21,000 journals overall. Article Title: Covariance of interphasic properties and fast chargeability of energy-dense lithium metal batteries DOI: 10.1038/s41560-025-01838-1 The research was conducted through the Frontier Research Laboratory (FRL, Director Professor Hee Tak Kim), which was established in 2021 by KAIST and LG Energy Solution to develop next-generation lithium-metal battery technology.

KAIST Unlocks the Secret of Next-Generation Memory..
<(From Left) Professor Sang-Hee Ko Park, Ph.D. candidate Sunghwan Park, Ph.D. candidate Chaewon Gong, Professor Seungbum Hong> Resistive Random Access Memory (ReRAM), which is based on oxide materials, is gaining attention as a next-generation memory and neuromorphic computing device. Its fast speeds, data retention ability, and simple structure make it a promising candidate to replace existing memory technologies. KAIST researchers have now clarified the operating principle of this memory, which is expected to provide a key clue for the development of high-performance, high-reliability next-generation memory. KAIST (President Kwang Hyung Lee) announced on the 2nd of September that a research team led by Professor Seungbum Hong from the Department of Materials Science and Engineering, in collaboration with a research team led by Professor Sang-Hee Ko Park from the same department, has for the first time in the world precisely clarified the operating principle of an oxide-based memory device, which is drawing attention as a core technology for next-generation semiconductors. Using a 'Multi-modal Scanning Probe Microscope (Multi-modal SPM)' that combines several types of microscopes*, the research team succeeded in simultaneously observing the electron flow channels inside the oxide thin film, the movement of oxygen ions, and changes in surface potential (the distribution of charge on the material's surface). Through this, they clarified the correlation between how current changes and how oxygen defects change during the process of writing and erasing information in the memory. *Several types of microscopes: Conductive atomic force microscopy (C-AFM) for observing current flow, electrochemical strain microscopy (ESM) for observing oxygen ion movement, and Kelvin probe force microscopy (KPFM) for observing potential changes.
With this special equipment, the research team directly implemented the process of writing and erasing information in the memory by applying an electrical signal to a titanium dioxide (TiO₂) thin film, confirming at the nano-level that the reason for the current changes was the variation in the distribution of oxygen defects. In this process, they confirmed that the current flow changes depending on the amount and location of oxygen defects. For example, when there are more oxygen defects, the electron pathway widens and the current flows well; conversely, when the defects disperse, the current is blocked. Through this, they succeeded in precisely visualizing that the distribution of oxygen defects within the oxide determines the on/off state of the memory. <Overview of the Research Process. By using one of the SPM modes, C-AFM (Conductive Atomic Force Microscopy), resistive switching corresponding to the electroforming and reset processes is induced in a 10 nm-thick TiO₂ thin film, and the resulting local current variations caused by the applied electric field are observed. Subsequently, at the same location, ESM (Electrochemical Strain Microscopy) and KPFM (Kelvin Probe Force Microscopy) signals are comprehensively analyzed to investigate and interpret the spatial correlation of ion-electronic behaviors that influence the resistive switching phenomenon> This research was not limited to the distribution at a single point but comprehensively analyzed the changes in current flow, the movement of oxygen ions, and the surface potential distribution after applying an electrical signal over a wide area of several square micrometers (µm²). As a result, they clarified that the process of the memory's resistance changing is not solely due to oxygen defects but is also closely intertwined with the movement of electrons (electronic behavior). 
In particular, the research team confirmed that when oxygen ions are injected during the 'erasing process (reset process)', the memory can stably maintain its off state (high resistance state) for a long time. This is a core principle for increasing the reliability of memory devices and is expected to provide an important clue for the future development of stable, next-generation non-volatile memory. Professor Seungbum Hong of KAIST, who led the research, said, "This is an example that proves we can directly observe the spatial correlation of oxygen defects, ions, and electrons through a multi-modal microscope." He added, "It is expected that this analysis technique will open a new chapter in the research and development of various metal oxide-based next-generation semiconductor devices in the future." <By combining C-AFM and ESM techniques, the correlation between local conductivity and variations in oxygen vacancy concentration after resistive switching is analyzed. After the electroforming process, regions with increased conductivity exhibit an enhancement in the ESM amplitude signal, which can be interpreted as an increase in defect ion concentration. Conversely, after the reset process, regions with reduced conductivity show a corresponding decrease in this signal. Through these observations, it is spatially demonstrated that changes in conductivity and local defect ion concentration after resistive switching exhibit a positive correlation> This research, in which Ph.D. candidate Chaewon Gong from the KAIST Department of Materials Science and Engineering participated as the first author, was published on July 20 in 'ACS Applied Materials and Interfaces', a prestigious academic journal in the field of new materials and chemical engineering published by the American Chemical Society (ACS). 
※ Paper Title: Spatially Correlated Oxygen Vacancies, Electrons and Conducting Paths in TiO2 Thin Films This research was carried out with the support of the Ministry of Science and ICT and the National Research Foundation of Korea.

KAIST succeeds in controlling complex altered gen..
< (From left) M.S. candidate Insoo Jung, Ph.D. candidate Corbin Hopper, Ph.D. candidate Seong-Hoon Jang, Ph.D. candidate Hyunsoo Yeo, Professor Kwang-Hyun Cho > Previously, research on controlling gene networks had been carried out based on a single stimulus-response of cells. More recently, studies have been proposed to precisely analyze complex gene networks to identify control targets. A KAIST research team has succeeded in developing a universal technology that identifies gene control targets in altered cellular gene networks and restores them. This achievement is expected to be widely applied to new anticancer therapies such as cancer reversibility, drug development, precision medicine, and reprogramming for cell therapy. KAIST (President Kwang Hyung Lee) announced on the 28th of August that Professor Kwang-Hyun Cho’s research team from the Department of Bio and Brain Engineering has developed a technology to systematically identify gene control targets that can restore the altered stimulus-response patterns of cells to normal by using an algebraic approach. The algebraic approach expresses gene networks as mathematical equations and identifies control targets through algebraic computations. The research team represented the complex interactions among genes within a cell as a "logic circuit diagram" (Boolean network). Based on this, they visualized how a cell responds to external stimuli as a "landscape map" (phenotype landscape). < Figure 1. Conceptual diagram of restoring normal stimulus-response patterns represented as phenotype landscapes. Professor Kwang-Hyun Cho’s research team represented the normal stimulus-response patterns of cells as a phenotype landscape and developed a technology to systematically identify control targets that can restore phenotype landscapes damaged by mutations as close to normal as possible. 
> By applying a mathematical method called the "semi-tensor product,"* they developed a way to quickly and accurately calculate how the overall cellular response would change if a specific gene were controlled. *Semi-tensor product: a method that calculates all possible gene combinations and control effects in a single algebraic formula. However, because the key genes that determine actual cellular responses number in the thousands, the calculations are extremely complex. To address this, the research team applied a numerical approximation method (Taylor approximation) to simplify the calculations. In simple terms, they transformed a complex problem into a simpler formula while still yielding nearly identical results. Through this, the team was able to calculate which stable state (attractor) a cell would reach and predict how the cell’s state would change when a particular gene was controlled. As a result, they were able to identify core gene control targets that could restore abnormal cellular responses to states most similar to normal. < Figure 2. Schematic diagram of the process of identifying control targets for restoring normal stimulus-response patterns. After algebraically analyzing phenotype landscapes in small-scale (A) and large-scale (B) gene networks, the team calculated all attractors to which each network state reconverges after control, and selected the control targets that restore the landscape closest to normal. > Professor Cho’s team applied the developed control technology to various gene networks and verified that it can accurately predict gene control targets that restore altered stimulus-response patterns of cells back to normal. In particular, by applying it to bladder cancer cell networks, they identified gene control targets capable of restoring altered responses to normal. They also discovered gene control targets in large-scale distorted gene networks during immune cell differentiation that are capable of restoring normal stimulus-response patterns. 
This enabled them to solve problems that previously required only approximate searches through lengthy computer simulations in a fast and systematic way. < Figure 3. Accuracy analysis of the developed control technology and comparative validation with existing control technologies. Using various validated gene networks, the team verified whether the developed control technology could identify control targets with high accuracy (A–B). Control targets identified through the developed technology showed reduced recovery efficiency as the degree of mutation-induced phenotype landscape distortion increased (C). In contrast, other control technologies either failed to identify any control targets at all or suggested targets that were less effective than those identified by the developed technology (D). > Professor Cho said, “This study is evaluated as a core original technology for the development of the Digital Cell Twin model*, which analyzes and controls the phenotype landscape of gene networks that determine cell fate. In the future, it is expected to be widely applicable across the life sciences and medicine, including new anticancer therapies through cancer reversibility, drug development, precision medicine, and reprogramming for cell therapy.” *Digital Cell Twin model: a technology that digitally models the complex reactions occurring within cells, enabling virtual simulations of cellular responses instead of actual experiments. KAIST master’s student Insoo Jung, PhD student Corbin Hopper, PhD student Seong-Hoon Jang, and PhD student Hyunsoo Yeo participated in this study. The results were published online on August 22 in Science Advances, an international journal published by the American Association for the Advancement of Science (AAAS). 
※ Paper title: “Reverse Control of Biological Networks to Restore Phenotype Landscapes” ※ DOI: https://www.science.org/doi/10.1126/sciadv.adw3995 This research was supported by the Mid-Career Researcher Program and the Basic Research Laboratory Program of the National Research Foundation of Korea, funded by the Ministry of Science and ICT.
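The attractor computation described in the article can be illustrated with a toy model. The sketch below is purely illustrative and is not the team's semi-tensor-product formulation: it uses a hypothetical three-gene synchronous Boolean network (rules invented for this example), finds its attractors by exhaustive state iteration, then pins one gene to mimic a control intervention and recomputes the attractors.

```python
from itertools import product

# Hypothetical 3-gene Boolean network (illustrative only, not from the paper).
# State is a tuple (A, B, C); each update rule reads the full current state.
def step(state, pinned=None):
    a, b, c = state
    nxt = (b and not c,   # A' = B AND NOT C
           a,             # B' = A
           a or c)        # C' = A OR C
    if pinned:            # control intervention: hold a gene at a fixed value
        nxt = tuple(pinned.get(i, v) for i, v in enumerate(nxt))
    return nxt

def attractors(pinned=None):
    """Enumerate attractors by iterating every initial state to its cycle."""
    found = set()
    for start in product([False, True], repeat=3):
        state, seen = start, []
        while state not in seen:          # iterate until a state repeats
            seen.append(state)
            state = step(state, pinned)
        cycle = seen[seen.index(state):]  # the repeating part is an attractor
        found.add(frozenset(cycle))
    return found

print(len(attractors()))                   # attractors of the free network
print(len(attractors(pinned={2: False})))  # attractors with gene C held OFF
```

Exhaustive iteration like this scales as 2^n in the number of genes, which is exactly why the team needed the algebraic formulation and Taylor approximation for networks with thousands of genes.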

KAIST–Princeton University Officially Launch “Net..
< Professor Haewon McJeon of the Graduate School of Green Growth and Sustainable Development > KAIST (President Kwang Hyung Lee) announced on the 27th of August that a research team led by Professor Haewon McJeon of the Graduate School of Green Growth and Sustainable Development has signed a memorandum of understanding (MOU) with the Andlinger Center for Energy and the Environment at Princeton University in the United States to promote joint research on carbon neutrality, officially launching the Net-Zero Korea (NZK) project. This project was unveiled at the World Climate Industry EXPO (WCE) held in BEXCO, Busan, and will begin with seed funding from Google. The NZK project aims, in the short term, to accelerate the transition of Korea’s energy and industrial sectors toward carbon neutrality, and in the mid- to long term, to strengthen Korea’s energy system modeling capabilities for policy formulation and implementation. Energy system modeling plays a critical role in studying the transition to clean energy and carbon neutrality. In particular, this research plans to apply Princeton’s leading modeling methodologies from the Net-Zero America project—published in 2021 and widely recognized—to the Korean context by integrating them with KAIST’s integrated assessment modeling research. The Net-Zero Korea project will be supported by funding from Google, KAIST, and Princeton University. This research is characterized by its detailed analysis of a wide range of factors, from regional land-use changes to job creation, and by concretely visualizing the resulting transformations in energy and industrial systems. It will also be conducted through an international collaborative network while reflecting Korea’s specific conditions. In particular, KAIST will develop an optimization-based open-source energy and industrial system model that integrates the effects of international trade, thereby contributing to global academia and policy research. 
Therefore, the core of this modeling research is to apply to Korea the precise analysis and realistic approach that drew attention in Net-Zero America. Through this, it will be possible to visualize changes in the energy and industrial systems at high spatial, temporal, sectoral, and technological resolution, and to comprehensively analyze various factors such as regional land-use changes, capital investment requirements, job creation, and health impacts from air pollution. This will provide stakeholders with practical and reliable information. < Figure 1. 2050 U.S. Energy Infrastructure Outlook from the Net-Zero America Project. Princeton University’s Net-Zero America study shows that by 2050, the U.S. will need to build 3.2 TW of wind and solar power facilities across the country and expand transmission grid capacity by threefold. The blue dots on the map represent wind power projects, the orange dots represent large-scale solar power project locations, and the purple lines indicate new transmission lines that must be constructed. This detailed spatial analysis methodology will also be applied in the Net-Zero Korea project. > In addition, the KAIST research team will collaborate with Princeton researchers, who have conducted national-scale decarbonization modeling studies with major research institutions in Australia, Brazil, China, India, Poland, and others, leveraging a global research network for joint studies. Building on its experience in developing globally recognized integrated assessment models (IAM) tailored to Korea, KAIST will lead a new initiative to integrate international trade impacts into optimization-based open-source energy and industrial system models. This effort seeks to overcome the limitations of existing national energy modeling by reflecting the particularity of Korea, where trade plays a vital role across the economy. 
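The optimization-based energy system models described above choose least-cost technology portfolios subject to demand and emissions constraints. As a purely illustrative sketch of that idea (all technologies, costs, and limits below are invented and do not come from the project, and a real model solves a full linear program rather than this simplified merit-order heuristic):

```python
# Toy merit-order dispatch under an emissions cap (all numbers invented).
# Each technology: (name, cost per TWh, MtCO2 per TWh, max generation or None).
techs = [("solar",   30, 0.0, 60),
         ("gas",     50, 0.4, None),
         ("nuclear", 60, 0.0, None)]

def dispatch(demand, emission_cap):
    """Fill demand cheapest-first while staying under the emissions cap."""
    plan, emitted = {}, 0.0
    for name, cost, rate, limit in sorted(techs, key=lambda t: t[1]):
        avail = demand if limit is None else min(limit, demand)
        if rate > 0:  # emitting tech is also bounded by the carbon budget left
            avail = min(avail, (emission_cap - emitted) / rate)
        take = max(avail, 0.0)
        plan[name] = take
        emitted += take * rate
        demand -= take
        if demand <= 0:
            break
    return plan, emitted

plan, emitted = dispatch(demand=100, emission_cap=10)
print(plan)  # solar is used first, gas up to the carbon cap, nuclear covers the rest
```

A full-scale model of the Net-Zero America type adds spatial resolution, capital costs, trade flows, and many more constraints, but the core structure is the same: minimize system cost subject to demand and emissions limits.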
Professor Wei Peng, Princeton’s principal investigator, said: “Through collaboration with KAIST’s world-class experts in integrated assessment modeling, we will be able to build new research that combines the strengths of macro-energy models and integrated assessment models, thereby developing capabilities applicable to many countries where trade plays a crucial role in the economy, such as Korea.” < Figure 2. KAIST Research Team’s Carbon Neutrality Scenarios for Korea. The scenarios developed by the KAIST research team show projected changes in Korea’s greenhouse gas emissions under various carbon neutrality pathways. Unlike the Current Policy (CurPol) scenario, the Net-Zero scenarios (NZ2050, NZ2050_Nuc, NZ2050_NoCCS, NZ2050_NoCCS_Nuc) achieve carbon neutrality by 2050 through offsetting residual emissions with carbon removal technologies such as land-use, land-use change, and forestry (LULUCF) and direct air capture (DAC). Each scenario is distinguished by the extent to which nuclear power and carbon capture and storage (CCS) technologies are utilized, with the contributions of sectors such as agriculture, power, buildings, transportation, and industry indicated in different colors. > Antonia Gawel, Director of Partnerships at Google, stated: “We are very pleased to support this meaningful research being conducted by KAIST and Princeton University in Korea. 
It will greatly help Google achieve our goal of net-zero emissions across our supply chain by 2030.” Professor Haewon McJeon of KAIST commented: “Through joint research with Princeton University, which has been leading net-zero studies, we expect to provide science-based evidence to support Korea’s achievement of carbon neutrality and sustainable energy.” President Kwang Hyung Lee of KAIST remarked: “It is deeply meaningful that KAIST, as Korea’s representative research institution, joins hands with Princeton University, a leading institution in the United States, to jointly build a science-based policy support system for responding to the climate crisis. This collaboration will contribute not only to achieving carbon neutrality in Korean society but also to the global response to the climate crisis.”

KAIST Wins Bid for ‘Physical AI Core Technology D..
KAIST (President Kwang Hyung Lee) announced on the 28th of August that, together with Jeonbuk State, Jeonbuk National University, and Sungkyunkwan University, it has jointly won the Ministry of Science and ICT’s pilot project for the “Physical AI Core Technology Proof of Concept (PoC)”, with KAIST serving as the overall research lead. The consortium also plans to participate in a full-scale demonstration project that is expected to reach a total scale of 1 trillion KRW in the future. < General Project Director Professor Young Jae Jang from the Department of Industrial and Systems Engineering > In this project, KAIST led the research planning under the theme of “Collaborative Intelligence Physical AI.” Based on this, Jeonbuk National University and Jeonbuk State will carry out joint research and establish a collaborative intelligence physical AI industrial ecosystem within the province. The pilot project will begin on September 1 this year and run until the end of the year, with the full-scale project expected to continue over the next five years. Through this effort, Jeonbuk State aims to develop into a global hub for physical AI. KAIST will take charge of developing original research technologies, creating a research environment through the establishment of a testbed, and promoting industrial diffusion. Professor Young Jae Jang of the Department of Industrial and Systems Engineering at KAIST, who is the overall project director, has been leading research on collaborative intelligence physical AI since 2016. His “Collaborative Intelligence-Based Smart Manufacturing Innovation Technology” was selected as one of KAIST’s “Top 10 Research Achievements” in 2019. “Physical AI” refers to cutting-edge artificial intelligence technology that enables physical devices such as robots, autonomous vehicles, and factory automation equipment to perform tasks without human instruction by understanding spatiotemporal concepts. < Figure 1. 
Structure for learning future manufacturing data by linking reinforcement learning and simulations > In particular, collaborative intelligence physical AI is a technology in which numerous robots and automated devices in a factory environment work together to achieve goals. It is attracting attention as a key foundation for realizing “dark factories” in industries such as semiconductors, secondary batteries, and automobile manufacturing. Unlike existing manufacturing AI, this technology does not necessarily require massive amounts of historical data. Through real-time, simulation-based learning, it can quickly adapt even to manufacturing environments with frequent changes and has been deemed a next-generation technology that overcomes the limitations of data dependency. Currently, the global AI industry is led by LLMs that simulate linguistic intelligence. However, physical AI must go beyond linguistic intelligence to include spatial intelligence and virtual environment learning, requiring the organic integration of hardware such as robots, sensors, and motors with software. As a manufacturing powerhouse, Korea is well-positioned to build such an ecosystem and seize the opportunity to lead global competition. < Figure 2. Example of applying Physical AI in a semiconductor logistics robot operating system > In fact, in April 2025, KAIST won first place at INFORMS (Institute for Operations Research and the Management Sciences), the world’s largest industrial engineering society, with its case study on collaborative intelligence physical AI, beating MIT and Amazon. This achievement is recognized as proof of Korea’s global competitiveness in the physical AI technology realm. Professor Young Jae Jang, KAIST’s overall project director, said, “Winning this large-scale national project is the result of KAIST’s collaborative intelligence physical AI research capabilities accumulated over the past decade being recognized both domestically and internationally. 
This will be a turning point for establishing Korea’s manufacturing industry as a global leading ‘Physical AI Manufacturing Innovation Model.’” KAIST President Kwang Hyung Lee emphasized that “KAIST is taking on the role of leading not only academic research but also the practical industrialization of national strategic technologies. Building on this achievement, we will collaborate with Jeonbuk National University and Jeonbuk State to develop Korea into a world-class hub for physical AI innovation.” Through this project, KAIST, Jeonbuk National University, and Jeonbuk State plan to develop Korea into a global industrial hub for physical AI.
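The article notes that collaborative physical AI does not require massive historical datasets because the agents learn through real-time, simulation-based training. The simplest form of that idea can be sketched as follows; this is purely illustrative (the scenario, numbers, and method are invented for this example and are not the project's system). A dispatching agent chooses between two robot routes, and all of its experience comes from a simulator rather than logged data:

```python
import random

random.seed(0)

def simulate(route):
    """Simulated delivery time as a (negative) reward, in minutes."""
    if route == 0:
        return -10.0                                  # steady route
    return -20.0 if random.random() < 0.2 else -5.0   # faster but sometimes congested

values, counts = [0.0, 0.0], [0, 0]
for episode in range(5000):
    # epsilon-greedy: mostly exploit the best current estimate, sometimes explore
    a = random.randrange(2) if random.random() < 0.1 else values.index(max(values))
    r = simulate(a)
    counts[a] += 1
    values[a] += (r - values[a]) / counts[a]  # incremental mean update

best = values.index(max(values))
print(best)  # the agent discovers the better route purely from simulated experience
```

Because every reward comes from the simulator, the loop adapts immediately when the simulated environment changes, which is the property the article highlights for manufacturing lines with frequent changes.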

KAIST Develops AI that Automatically Detects Defe..
< (From left) Ph.D. candidate Jihye Na, Professor Jae-Gil Lee > Recently, defect detection systems that apply artificial intelligence (AI) to sensor data have been installed in smart factory manufacturing sites. However, when the manufacturing process changes due to machine replacement or variations in temperature, pressure, or speed, existing AI models fail to properly understand the new situation and their performance drops sharply. KAIST researchers have developed AI technology that can accurately detect defects even in such situations without retraining, achieving performance improvements of up to 9.42%. This achievement is expected to contribute to reducing AI operating costs and expanding applicability in various fields such as smart factories, healthcare devices, and smart cities. KAIST (President Kwang Hyung Lee) announced on the 26th of August that a research team led by Professor Jae-Gil Lee from the School of Computing has developed a new “time-series domain adaptation” technology that allows existing AI models to be utilized without additional defect labeling, even when manufacturing processes or equipment change. Time-series domain adaptation technology enables AI models that handle time-varying data (e.g., temperature changes, machine vibrations, power usage, sensor signals) to maintain stable performance without additional training, even when the training environment (domain) and the actual application environment differ. Professor Lee’s team paid attention to the fact that the core problem of AI models becoming confused by environmental (domain) changes lies not only in differences in data distribution but also in changes in defect occurrence patterns (label distribution) themselves. For example, in semiconductor wafer processes, the ratio of ring-shaped defects and scratch defects may change due to equipment modifications. 
The research team developed a method for decomposing new process sensor data into three components—trends, non-trends, and frequencies—to analyze their characteristics individually. Just as humans detect anomalies by combining pitch, vibration patterns, and periodic changes in machine sounds, AI was enabled to analyze data from multiple perspectives. In other words, the team developed TA4LS (Time-series domain Adaptation for mitigating Label Shifts) technology, which applies a method of automatically correcting predictions by comparing the results predicted by the existing model with the clustering information of the new process data. Through this, predictions biased toward the defect occurrence patterns of the existing process can be precisely adjusted to match the new process. In particular, this technology is highly practical because it can be easily combined like an additional plug-in module inserted into existing AI systems without requiring separate complex development. That is, regardless of the AI technology currently being used, it can be applied immediately with only simple additional procedures. < Figure 1. Concept diagram of the “TA4LS” technology developed by the research team. Sensor data from a new process is grouped by components (trends, non-trends, and frequencies) according to similar patterns. By comparing these with the defect tendencies predicted by the existing model and automatically correcting mismatches, the technology maintains high performance even when processes change. 
> In experiments using four benchmark datasets of time-series domain adaptation (i.e., four types of sensor data in which changes had occurred), the research team achieved up to 9.42% improvement in accuracy compared to existing methods. Especially when process changes caused large differences in label distribution (e.g., defect occurrence patterns), the AI demonstrated remarkable performance improvement by autonomously correcting and distinguishing such differences. These results showed that the technology can be used effectively, without additional defect labeling, in environments that produce small batches of various products, one of the main advantages of smart factories. Professor Jae-Gil Lee, who supervised the research, said, “This technology solves the retraining problem, which has been the biggest obstacle to the introduction of artificial intelligence in manufacturing. Once commercialized, it will greatly contribute to the spread of smart factories by reducing maintenance costs and improving defect detection rates.” This research was carried out with Jihye Na, a Ph.D. student at KAIST, as the first author, with Youngeun Nam, a Ph.D. student, and Junhyeok Kang, a researcher at LG AI Research, as co-authors. The research results were presented in August 2025 at KDD (the ACM SIGKDD Conference on Knowledge Discovery and Data Mining), the world’s top academic conference in artificial intelligence and data. ※ Paper Title: “Mitigating Source Label Dependency in Time-Series Domain Adaptation under Label Shifts” ※ DOI: https://doi.org/10.1145/3711896.3737050 This technology was developed as part of the research outcome of the SW Computing Industry Original Technology Development Program’s SW StarLab project (RS-2020-II200862, DB4DL: Development of Highly Available and High-Performance Distributed In-Memory DBMS for Deep Learning), supported by the Ministry of Science and ICT and the Institute for Information & Communications Technology Planning & Evaluation (IITP).
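The core correction idea the article describes, adjusting predictions that are biased toward the old process's defect ratios using information estimated from the new process's data, can be sketched in its simplest form. This is not the authors' TA4LS implementation, and all numbers are invented: it applies a standard prior-ratio reweighting, where cluster proportions observed on the new process stand in for the unknown new label distribution.

```python
# Label-shift correction sketch (illustrative only; numbers are invented).
# A classifier trained on the old process outputs probabilities biased toward
# the old defect ratios; cluster proportions on new-process data estimate the
# new label prior, and predictions are reweighted by the prior ratio.

source_prior = [0.9, 0.1]   # old process: 90% normal, 10% defect
target_prior = [0.6, 0.4]   # estimated from cluster sizes on new-process data

def correct(probs):
    """Reweight predicted class probabilities by target/source prior ratio."""
    w = [p * t / s for p, t, s in zip(probs, target_prior, source_prior)]
    total = sum(w)
    return [x / total for x in w]

pred = [0.7, 0.3]           # model output for one sensor window
adjusted = correct(pred)
print(adjusted)             # probability mass shifts toward the defect class
```

The plug-in nature the article emphasizes is visible here: the correction wraps any existing classifier's output without touching the model itself.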

KAIST achieves over 95% high-purity CO₂ capture u..
< (From left) Professor Dong-Yeun Koh from KAIST, Professor T. Alan Hatton from MIT, Dr. Young Hun Lee from MIT, Dr. Hwajoo Joo from MIT, Dr. Jung Hun Lee from MIT > Direct Air Capture (DAC) is a technology that filters out carbon dioxide present in the atmosphere at extremely low concentrations (below 400 ppm). The KAIST research team has now succeeded in capturing over 95% high-purity carbon dioxide using only low power at the level of smartphone charging voltage (3V), without hot steam or complex facilities. While high energy cost has been the biggest obstacle for conventional DAC technologies, this study is regarded as a breakthrough demonstrating real commercialization potential. Overseas patent applications have already been filed, and because it can be easily linked with renewable energy such as solar and wind power, the technology is being highlighted as a “game changer” for accelerating the transition to carbon-neutral processes. KAIST (President Kwang Hyung Lee) announced on the 25th of August that Professor Dong-Yeun Koh’s research team from the Department of Chemical and Biomolecular Engineering, in collaboration with Professor T. Alan Hatton’s group at MIT’s Department of Chemical Engineering, has developed the world’s first ultra-efficient e-DAC (Electrified Direct Air Capture) technology based on conductive silver nanofibers. Conventional DAC processes required high-temperature steam (over 100℃) in the regeneration stage, where absorbed or adsorbed carbon dioxide is separated again. This process consumes about 70% of the total energy, making energy efficiency crucial, and requires complex heat-exchange systems, which makes cost reduction difficult. The joint research team, led by KAIST, solved this problem with “fibers that heat themselves electrically,” adopting Joule heating, a method that generates heat by directly passing electricity through fibers, similar to an electric blanket. 
By heating only where needed without an external heat source, energy loss was drastically reduced. This technology can rapidly heat fibers to 110℃ within 80 seconds with only 3V—the energy level of smartphone charging. This shortens adsorption–desorption cycles dramatically even in low-power environments, while reducing unnecessary heat loss by about 20% compared to existing technologies. The core of this research was not just making conductive fibers, but realizing a “breathable conductive coating” that achieves both “electrical conductivity” and “gas diffusion.” The team uniformly coated porous fiber surfaces with a composite of silver nanowires and nanoparticles, forming a layer about 3 micrometers (µm) thick—much thinner than a human hair. This “3D continuous porous structure” allowed excellent electrical conductivity while securing pathways for CO₂ molecules to move smoothly into the fibers, enabling uniform, rapid heating and efficient CO₂ capture simultaneously. < Figure 1. Fabrication process of the silver nanocomposite-based conductive fibrous DAC device and schematic of CO₂ capture–regeneration mechanism through a rapid operating cycle: (1-1) A porous fiber precursor based on Y-zeolite and cellulose acetate was dip-coated with a silver nanoparticle/nanowire composite and treated with EDA vapor, resulting in an adsorptive fiber with enhanced gas selectivity and conductivity. (1-2) This fibrous DAC system enables stable and efficient CO₂ capture–regeneration even under low-power conditions, through a rapid cycle (e-TVSA) consisting of (i) CO₂ adsorption from air, (ii) gas displacement, (iii) electrically-driven Joule heating, and (iv) cooling and preparation for re-adsorption. > Furthermore, when multiple fibers were modularized and connected in parallel, the total resistance dropped below 1 ohm (Ω), proving scalability to large-scale systems. The team succeeded in recovering over 95% high-purity CO₂ under real atmospheric conditions. 
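The headline electrical figures can be sanity-checked with simple circuit arithmetic. The per-fiber resistance below is an assumed value chosen for illustration; only the 3 V drive and the sub-1-ohm parallel module target come from the article.

```python
# Back-of-the-envelope Joule-heating numbers for a parallel fiber module.
# r_fiber is an assumption for illustration; 3 V and the sub-1-ohm module
# resistance are the figures reported in the article.
voltage = 3.0    # V, smartphone-charger-level drive
r_fiber = 6.0    # ohm per fiber (assumed value)
n_fibers = 12    # identical fibers connected in parallel in one module

r_module = r_fiber / n_fibers     # parallel combination of equal resistors
power = voltage ** 2 / r_module   # Joule heating: P = V^2 / R

print(r_module, power)  # 0.5 ohm module resistance, 18 W of resistive heating
```

The point of the parallel wiring is visible in the arithmetic: halving the module resistance doubles the heating power available from the same low-voltage source.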
This achievement was the result of five years of in-depth research since 2020. Remarkably, in late 2022, long before the paper’s publication, the core technology had already been filed for PCT and domestic/international patents (WO2023068651A1, countries entered: US, EP, JP, AU, CN), securing foundational intellectual property rights. This indicates that the technology is not only highly advanced but also developed with practical commercialization in mind beyond the laboratory level. The biggest innovation of this technology is that it runs solely on electricity, making it very easy to integrate with renewable energy sources such as solar and wind. It perfectly matches the needs of global companies that have declared RE100 and seek carbon-neutral process transitions. Professor Dong-Yeun Koh of KAIST said, “Direct Air Capture (DAC) is not just a technology for reducing carbon dioxide emissions, but a key means of achieving ‘negative emissions’ by purifying the air itself. The conductive fiber-based DAC technology we developed can be applied not only to industrial sites but also to urban systems, significantly contributing to Korea’s leap as a leading nation in future DAC technologies.” < Figure 2. Uniform coating of conductive fibers and characteristics of rapid electrical heating: (2-1) By forming a uniform coating layer, the fiber’s resistance was drastically reduced to about 0.5 Ω/cm. (2-2) Heat-transfer simulations analyzing thermal efficiency according to the number of fibers loaded in a module showed that when 12 fibers were used, heat loss was minimized and the most ideal temperature distribution was obtained. This suggests the optimal fiber configuration condition for achieving uniform heating while reducing power consumption. (2-3) In actual experiments, rapid and efficient electrical heating characteristics were observed, with the fiber surface reaching 110 °C within 80 seconds using only 3V of applied voltage. 
> This study was led by Young Hun Lee (PhD, 2023 graduate of KAIST; currently at MIT Department of Chemical Engineering) and co-first-authored by Jung Hun Lee and Hwajoo Joo (MIT, Department of Chemical Engineering). The results were published online on August 1, 2025, in Advanced Materials, one of the world’s leading journals in materials science, and in recognition of its excellence, the work was also selected for the Front Inside Cover. ※ Paper title: “Design of Electrified Fiber Sorbents for Direct Air Capture with Electrically-Driven Temperature Vacuum Swing Adsorption” ※ DOI: https://doi.org/10.1002/adma.202504542 This study was supported by the Aramco–KAIST CO₂ Research Center and the National Research Foundation of Korea with funding from the Ministry of Science and ICT (No. RS-2023-00259416, DACU Source Technology Development Project).