
KAIST's 'FluidGPT' Wins Grand Prize at the 2025 AI..
<Commemorative Photo After Winning at the 2025 AI Champions Award Ceremony>

The era has begun in which an AI assistant goes beyond simple conversation to directly view the screen, make decisions, and complete tasks such as hailing a taxi or booking an SRT ticket.

KAIST (President Kwang Hyung Lee) announced on the 6th that the AutoPhone Team (Fluidez, KAIST, Korea University, Sungkyunkwan University), led by Professor Insik Shin (CEO of Fluidez Co., Ltd.) of the School of Computing, was selected as the inaugural AI Champion (1st place) in the '2025 Artificial Intelligence Champion (AI Champion) Competition,' hosted by the Ministry of Science and ICT.

This competition is the nation's largest AI technology contest, comprehensively evaluating the innovativeness, social impact, and commercial potential of AI technologies. Out of 630 teams participating nationwide, the AutoPhone Team claimed the top honor and will receive 3 billion Korean won in research and development funding.

The technology developed by the AutoPhone Team, 'FluidGPT,' is a fully autonomous AI agent that understands a user's voice command and enables the smartphone to independently run apps, click, enter text, and even complete payments. For example, when a user says, "Book an SRT ticket from Seoul Station to Busan," or "Call a taxi," FluidGPT opens the actual app and sequentially performs the steps needed to complete the request.

The core of this technology is its 'Non-Invasive (API-Free)' structure. Previously, hailing a taxi through an app required a direct connection to the app's internal system via its API. In contrast, this technology neither modifies the existing app's code nor links to an API. Instead, the AI directly recognizes and operates the screen (UI), acquiring the ability to use the smartphone just like a human. As a result, FluidGPT presents a new paradigm—"AI that sees, judges, and moves a hand on behalf of a person"—and is regarded as a core technology that will usher in the 'AI Phone Era.'

FluidGPT moves beyond simple voice assistance to implement the concept of 'Agentic AI' (action-oriented artificial intelligence), in which the AI directly views the screen, makes decisions, and takes action. As a fully action-oriented system, it clicks app buttons, fills in input fields, and references data to autonomously achieve the user's objective, foreshadowing an innovation in how smartphones are used.

Professor Insik Shin of the School of Computing said, "AI is now evolving from conversation to action. FluidGPT is a technology that understands the user's words and autonomously executes actual apps, and it will be the starting point of the 'AI Phone Era.' The AutoPhone Team possesses world-class research capabilities, and we will contribute to the widespread adoption of AI services that everyone can easily use."

KAIST President Kwang Hyung Lee remarked, "This achievement is a representative example that demonstrates KAIST's vision for AI convergence," adding, "AI technology is entering the daily lives of citizens and leading a new wave of innovation." He further noted, "KAIST will continue to lead research in future core technologies such as AI and semiconductors to bolster national competitiveness."
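To make the "sees, judges, and acts" loop concrete, below is a minimal sketch of how a screen-driven, API-free agent can be structured. It is an illustration only: the helper functions (capture_screen, query_llm, perform), the UIAction schema, and the step limit are assumptions made for this sketch, not details of Fluidez's actual FluidGPT implementation.

```python
# Minimal sketch of an "API-free" screen-agent loop (illustrative only; not
# FluidGPT's actual implementation). The agent repeatedly looks at the current
# screen, asks a language model for the next UI action, and executes it until
# the user's goal is reported as complete.
from dataclasses import dataclass

@dataclass
class UIAction:
    kind: str                 # "tap", "type", or "done"
    target: str = ""          # UI element description, e.g. "Search button"
    text: str = ""            # text to enter for "type" actions

def capture_screen() -> str:
    """Return a textual/structural dump of the current screen (assumed helper)."""
    raise NotImplementedError

def query_llm(goal: str, screen: str, history: list[UIAction]) -> UIAction:
    """Ask a multimodal LLM to choose the next action (assumed helper)."""
    raise NotImplementedError

def perform(action: UIAction) -> None:
    """Inject the tap or typing event into the device (assumed helper)."""
    raise NotImplementedError

def run_agent(goal: str, max_steps: int = 30) -> bool:
    history: list[UIAction] = []
    for _ in range(max_steps):
        screen = capture_screen()          # "see" the UI instead of calling an app API
        action = query_llm(goal, screen, history)
        if action.kind == "done":          # the model judges the goal is completed
            return True
        perform(action)                    # click / fill in fields like a human would
        history.append(action)
    return False                           # give up after too many steps

# e.g. run_agent("Book an SRT ticket from Seoul Station to Busan")
```

The essential point is that the agent's only interface to an app is the rendered screen plus injected touch and keyboard events, which is what removes the need for per-app API integration.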

IEEE President Professor Kramer Holds Special Lect..
Kathleen A. Kramer, President of the IEEE (Institute of Electrical and Electronics Engineers), the world's largest technical professional organization dedicated to electrical and electronic technology, visited our university on the 30th and delivered a special lecture under the theme 'Drawing the Future of Artificial Intelligence Together.'

<IEEE Leadership and KAIST EE Meeting: KITIS Director (Sung-Hyun Hong), KAIST EE Professors (Joonwoo Bae), (Ian Oakley), (Hye-Won Jeong), (Chang-Shik Choi), (Dong-Soo Han), Head of the EE Department (Seunghyup Yoo), IEEE President (Kathleen A. Kramer), IEEE Senior Sales Director (Francis Staples), IEEE Regional Manager for APAC (Ira Tan), KAIST EE Professor (Hee-Jin Ahn), Head of the Department of Semiconductor System Engineering (Sung-Hwan Cho)>

Standing at the colloquium podium at the invitation of the Department of Electrical Engineering (Head: Seunghyup Yoo), President Kramer emphasized, based on IEEE's core vision of 'Advancing Technology for Humanity,' that "Artificial Intelligence (AI) is no longer a concept of the distant future; it has become a technology that is transforming human lives at the center of innovation."

<Photo of the IEEE President's KAIST EE Colloquium Lecture>

She added, "Technology must advance with human values at its core, and AI based on ethics and inclusiveness can lead to true innovation," sharing her insights on the direction of AI development and the social responsibility of technology.

Seunghyup Yoo, Head of the Department of Electrical Engineering, stated, "We expect President Kramer's visit to be a stepping stone that will not only widely promote our department's capabilities in advanced fields such as AI, semiconductors, signal processing, and robotics to the international academic community but also strengthen cooperation in various ways."

<Tea Meeting with the IEEE Leadership and the Vice Presidents: KITIS Director (Sung-Hyun Hong), IEEE Senior Sales Director (Francis Staples), IEEE President (Kathleen A. Kramer), KAIST Executive Vice President for Research (Sang Yup Lee), Head of the EE Department (Seunghyup Yoo), IEEE Regional Manager for APAC (Ira Tan)>

Meanwhile, prior to the lecture, President Kramer paid a courtesy visit to Sang Yup Lee, KAIST Executive Vice President for Research, and reaffirmed the commitment of both organizations to advancing sustainable technology and building an ethical and inclusive research ecosystem that contributes to a better life for humanity.

KAIST Develops Room-Temperature 3D Printing Techno..
<(From Left) Professor Ji Tae Kim of the Department of Mechanical Engineering, Professor Soong Ju Oh of Korea University, and Professor Tianshuo Zhao of the University of Hong Kong>

The “electronic eyes” technology that can recognize objects even in darkness has taken a step forward. Infrared sensors, which act as the “seeing” component in devices such as LiDAR for autonomous vehicles, 3D face recognition systems in smartphones, and wearable healthcare devices, are regarded as key components in next-generation electronics. Now, a research team at KAIST and their collaborators have developed the world’s first room-temperature 3D printing technology that can fabricate miniature infrared sensors in any desired shape and size.

KAIST (President Kwang Hyung Lee) announced on the 3rd of November that the research team led by Professor Ji Tae Kim of the Department of Mechanical Engineering, in collaboration with Professor Soong Ju Oh of Korea University and Professor Tianshuo Zhao of the University of Hong Kong, has developed a 3D printing technique capable of fabricating ultra-small infrared sensors—smaller than 10 micrometers (µm)—in customized shapes and sizes at room temperature.

Infrared sensors convert invisible infrared signals into electrical signals and serve as essential components in realizing future electronic technologies such as robotic vision. Accordingly, miniaturization, weight reduction, and flexible form-factor design have become increasingly important. Conventional semiconductor fabrication processes were well suited for mass production but struggled to adapt flexibly to rapidly changing technological demands. They also required high-temperature processing, which limited material choices and consumed large amounts of energy.

To overcome these challenges, the research team developed an ultra-precise 3D printing process that uses metal, semiconductor, and insulator materials in the form of liquid nanocrystal inks, stacking them layer by layer within a single printing platform. This method enables direct fabrication of core components of infrared sensors at room temperature, allowing for the realization of customized miniature sensors of various shapes and sizes.

In particular, the researchers achieved excellent electrical performance without the need for high-temperature annealing by applying a “ligand-exchange” process, in which insulating molecules on the surface of nanoparticles are replaced with conductive ones. As a result, the team successfully fabricated ultra-small infrared sensors measuring less than one-tenth the thickness of a human hair (under 10 µm).

<Figure 1. 3D printing of infrared sensors. a. Room-temperature printing process for the electrodes and photoactive layer that make up the infrared sensor. b. Structure and chemical composition of the printed infrared microsensor. c. Printed infrared sensor micropixel array.>

Professor Ji Tae Kim commented, “The developed 3D printing technology not only advances the miniaturization and lightweight design of infrared sensors but also paves the way for the creation of innovative new form-factor products that were previously unimaginable. Moreover, by reducing the massive energy consumption associated with high-temperature processes, this approach can lower production costs and enable eco-friendly manufacturing—contributing to the sustainable development of the infrared sensor industry.”

The research results were published online in Nature Communications on October 16, 2025, under the title “Ligand-exchange-assisted printing of colloidal nanocrystals to enable all-printed sub-micron optoelectronics” (DOI: https://doi.org/10.1038/s41467-025-64596-4).

This research was supported by the Ministry of Science and ICT of Korea through the Excellent Young Researcher Program (RS-2025-00556379), the National Strategic Technology Material Development Program (RS-2024-00407084), and the International Cooperation Research Program for Original Technology Development (RS-2024-00438059).

“AI,” the New Language of Materials Science and En..
<(From Left) M.S. candidate Chaeyul Kang, Professor Seungbum Hong, Ph.D. candidate Benediktus Madika, Ph.D. candidate Batzorig Buyantogtokh, Ph.D. candidate Aditi Saha>

The era has arrived in which artificial intelligence (AI) autonomously imagines and predicts the structures and properties of new materials. Today, AI functions as a researcher’s “second brain,” actively participating in every stage of research, from idea generation to experimental validation.

KAIST (President Kwang Hyung Lee) announced on October 26 that a comprehensive review paper analyzing the impact of AI, Machine Learning (ML), and Deep Learning (DL) technologies across materials science and engineering has been published in ACS Nano (Impact Factor = 18.7). The paper was co-authored by Professor Seungbum Hong and his team from the Department of Materials Science and Engineering at KAIST, in collaboration with researchers from Drexel University, Northwestern University, the University of St Andrews, and the University of Tennessee in the United States.

The research team proposed a full-cycle utilization strategy for materials innovation through an AI-based catalyst search platform, which embodies the concept of a Self-Driving Lab—a system in which robots autonomously perform materials synthesis and optimization experiments.

Professor Hong’s team categorized materials research into three major stages—Discovery, Development, and Optimization—and detailed the distinctive role of AI in each phase. In the Discovery Stage, AI designs new structures, predicts properties, and rapidly identifies the most promising materials among vast candidate pools. In the Development Stage, AI analyzes experimental data and autonomously adjusts experimental processes through Self-Driving Lab systems, significantly shortening research timelines. In the Optimization Stage, AI fine-tunes designs and process conditions for maximum performance, employing Reinforcement Learning to identify optimal conditions and Bayesian Optimization to efficiently find superior results with minimal experimentation.

In essence, AI serves as a “smart assistant” that narrows down the most promising materials, reduces experimental trial and error, and autonomously optimizes experimental conditions to achieve the best-performing outcomes.

The paper further highlights how cutting-edge technologies such as Generative AI, Graph Neural Networks (GNNs), and Transformer models are transforming AI from a computational tool into a “thinking researcher.” Nonetheless, the team cautions that AI’s predictions are not error-free and that key challenges persist, such as imbalanced data quality, limited interpretability of AI predictions, and integration of heterogeneous datasets. To address these limitations, the authors emphasize the importance of developing AI systems capable of autonomously understanding physical principles and ensuring transparent, verifiable decision-making processes for researchers.

The review also explores the concept of the Self-Driving Lab, where AI autonomously designs experimental plans, analyzes results, and determines the next experimental steps—without manual operation by researchers. The AI-Based Catalyst Search Platform exemplifies this concept, enabling robots to automatically design, execute, and optimize catalyst synthesis experiments. In particular, the study presents cases in which AI-driven experimentation has dramatically accelerated catalyst development, suggesting that similar approaches could revolutionize research in battery and energy materials.

<AI Driving Innovation Across the Entire Cycle of New Material Discovery, Development, and Optimization>

“This review demonstrates that artificial intelligence is emerging as the new language of materials science and engineering, transcending its role as a mere tool,” said Professor Seungbum Hong. “The roadmap presented by the KAIST team will serve as a valuable guide for researchers in Korea’s national core industries, including batteries, semiconductors, and energy materials.”

Benediktus Madika (Ph.D. candidate), Aditi Saha (Ph.D. candidate), Chaeyul Kang (M.S. candidate), and Batzorig Buyantogtokh (Ph.D. candidate) from KAIST’s Department of Materials Science and Engineering contributed as co-first authors. Collaborating authors include Professor Joshua Agar (Drexel University), Professors Chris Wolverton and Peter Voorhees (Northwestern University), Professor Peter Littlewood (University of St Andrews), and Professor Sergei Kalinin (University of Tennessee).

Paper Title: Artificial Intelligence for Materials Discovery, Development, and Optimization
DOI: 10.1021/acsnano.5c04200

This work was supported by the National Research Foundation of Korea (NRF) with funding from the Ministry of Science and ICT (RS-2023-00247245).
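As a rough illustration of the Optimization-stage loop described above, the following sketch runs Bayesian optimization with a Gaussian-process surrogate from scikit-learn. The objective function, search grid, and acquisition rule are toy placeholders chosen for this sketch, not the paper's catalyst-search platform.

```python
# Toy Bayesian-optimization loop of the kind described for the Optimization
# stage (illustrative; not the paper's actual platform). A Gaussian-process
# surrogate is fit to the experiments run so far, and an upper-confidence-bound
# acquisition rule picks the next condition to try.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def run_experiment(x: float) -> float:
    """Placeholder 'experiment': an unknown property to be maximized."""
    return -(x - 0.3) ** 2 + 0.05 * np.sin(20 * x)

candidates = np.linspace(0.0, 1.0, 201).reshape(-1, 1)   # process-condition grid
X = [[0.1], [0.9]]                                        # two initial experiments
y = [run_experiment(x[0]) for x in X]

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(10):                                       # 10 AI-chosen experiments
    gp.fit(np.array(X), np.array(y))
    mean, std = gp.predict(candidates, return_std=True)
    ucb = mean + 1.5 * std                                # explore/exploit trade-off
    x_next = candidates[np.argmax(ucb)]
    X.append(list(x_next))
    y.append(run_experiment(x_next[0]))

best = X[int(np.argmax(y))]
print(f"best condition found: {best[0]:.3f}, value: {max(y):.4f}")
```

In a Self-Driving Lab setting, run_experiment would be replaced by an automated synthesis-and-measurement cycle, with the same surrogate-plus-acquisition loop deciding which condition to try next.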

KAIST, Dancing Like 'Navillera'... AI Understands ..
<(From Left) Ph.D. candidate Jihyun Lee, Professor Tae-Kyun Kim, M.S. candidate Changmin Lee>

The era has begun where AI moves beyond merely 'plausibly drawing' to understanding even why clothes flutter and wrinkles form. A KAIST research team has developed a new generative AI that learns movement and interaction in 3D space following physical laws. This technology, which overcomes the limitations of existing 2D-based video AI, is expected to enhance the realism of avatars in films, the metaverse, and games, and to significantly reduce the need for motion capture or manual 3D graphics work.

KAIST (President Kwang Hyung Lee) announced on the 22nd that the research team of Professor Tae-Kyun (T-K) Kim from the School of Computing has developed 'MPMAvatar,' a spatial and physics-based generative AI model that overcomes the limitations of existing 2D pixel-based video generation technology.

To solve the problems of conventional 2D technology, the research team proposed a new method that reconstructs multi-view images into 3D space using Gaussian Splatting and combines it with the Material Point Method (MPM), a physics simulation technique. In other words, the AI was trained to learn physical laws on its own by stereoscopically reconstructing videos taken from multiple viewpoints and allowing objects within that space to move and interact as if they were in the real physical world. This enables the AI to compute movement based on an object's material, shape, and external forces, and then learn the physical laws by comparing the results with actual videos.

The research team represented the 3D space with points, applying both a Gaussian and MPM to each point, and thereby simultaneously achieved physically natural movement and realistic video rendering. That is, they divided the 3D space into numerous small points, making each point move and deform like a real object, thereby realizing video that is nearly indistinguishable from reality.

In particular, to precisely express the interaction of thin and complex objects like clothing, they calculated both the object's surface (mesh) and its particle-unit structure (points), and utilized the Material Point Method (MPM), which calculates an object's movement and deformation in 3D space according to physical laws. Furthermore, they developed a new collision handling technology to realistically reproduce scenes where clothes or objects move and collide with each other in multiple places and in a complex manner.

The generative AI model MPMAvatar, to which this technology is applied, successfully reproduced the realistic movement and interaction of a person wearing loose clothing, and also succeeded in 'Zero-shot' generation, in which the AI processes data it has never seen during training by inferring on its own.

<Figure 1. Modeling new human poses and clothing dynamics from multi-view video input, and zero-shot generation of novel physical interactions.>

The proposed method is applicable to various physical properties, such as rigid bodies, deformable objects, and fluids, allowing it to be used not only for avatars but also for the generation of general complex scenes.

<Figure 2. Depiction of graceful dance movements and soft clothing folds, like 'Navillera.'>

Professor Tae-Kyun (T-K) Kim explained, "This technology goes beyond AI simply drawing a picture; it makes the AI understand 'why' the world in front of it looks the way it does. This research demonstrates the potential of 'Physical AI' that understands and predicts physical laws, marking an important turning point toward AGI (Artificial General Intelligence)." He added, "It is expected to be practically applied across the broader immersive content industry, including virtual production, films, short-form content, and advertisements, creating significant change."

The research team is currently expanding this technology to develop a model that can generate physically consistent 3D videos simply from a user's text input.

This research involved Changmin Lee, a Master's student at the KAIST Graduate School of AI, as the first author, and Jihyun Lee, a Ph.D. student at the KAIST School of Computing, as a co-author. The research results will be presented at NeurIPS, the most prestigious international academic conference in the field of AI, on December 2nd, and the program code will be fully released.

· Paper: C. Lee, J. Lee, T-K. Kim, MPMAvatar: Learning 3D Gaussian Avatars with Accurate and Robust Physics-Based Dynamics, Proc. of Thirty-Ninth Annual Conf. on Neural Information Processing Systems (NeurIPS), San Diego, US, 2025
· arXiv version: https://arxiv.org/abs/2510.01619
· Related Project Site: https://kaistchangmin.github.io/MPMAvatar/
· Related video links showing the 'Navillera'-like dancing drawn by AI:
o https://www.youtube.com/shorts/ZE2KoRvUF5c
o https://youtu.be/ytrKDNqACqM

This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) through the Human-Oriented Next-Generation Challenging General AI Technology Development Project (RS-2025-25443318) and the Professional AI Talent Development Program for Multimodal AI Agents (RS-2025-25441313).
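The coupling idea, each point acting simultaneously as a physics particle and as a renderable Gaussian, can be sketched roughly as below. This is a conceptual illustration only: the physics step is reduced to gravity integration and the class layout is an assumption made for this sketch, whereas MPMAvatar's actual pipeline involves full MPM grid transfers, learned material parameters, collision handling, and a Gaussian-splatting renderer.

```python
# Conceptual sketch of coupling point-based physics (MPM-style particles) with
# Gaussian-splat rendering (illustrative structure only; not MPMAvatar itself).
import numpy as np

class PhysicsGaussianPoints:
    """Each point carries both simulation state and rendering parameters."""
    def __init__(self, positions: np.ndarray):
        n = positions.shape[0]
        # MPM particle state
        self.x = positions.copy()                  # positions
        self.v = np.zeros((n, 3))                  # velocities
        self.F = np.tile(np.eye(3), (n, 1, 1))     # deformation gradients
        # Gaussian-splat parameters (rest-state covariance per point)
        self.mean = positions.copy()
        self.cov0 = np.tile(0.01 * np.eye(3), (n, 1, 1))
        self.cov = self.cov0.copy()

    def mpm_step(self, dt: float, gravity=(0.0, -9.8, 0.0)):
        """Stand-in for a full particle-to-grid-to-particle MPM update; only
        gravity integration is done here so the coupling structure stays clear."""
        self.v += dt * np.asarray(gravity)
        self.x += dt * self.v

    def sync_gaussians(self):
        """Deform each splat with its particle: the center follows the particle
        position and the rest covariance is stretched by the deformation gradient."""
        self.mean = self.x
        Ft = np.transpose(self.F, (0, 2, 1))
        self.cov = self.F @ self.cov0 @ Ft

# Per-frame loop: simulate the points, then refresh the renderable representation.
points = PhysicsGaussianPoints(np.random.rand(1000, 3))
for frame in range(60):
    points.mpm_step(dt=1.0 / 60.0)
    points.sync_gaussians()
    # render(points.mean, points.cov)  # Gaussian-splatting renderer (not shown)
```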

Refrigerator Use Increases with Stress, IoT Sensor..
<(From Left) Ph.D. candidate Chanhee Lee, Professor Uichin Lee, Professor Hyunsoo Lee, Ph.D. candidate Youngji Koh from the School of Computing>
The number of single-person households in South Korea has exceeded 8 million, accounting for 36% of the total, marking an all-time high. A Seoul Metropolitan Government survey found that 62% of single-person households experience 'loneliness', deepening feelings of isolation and mental health issues. KAIST researchers have gone beyond the limitations of smartphones and wearables, utilizing in-home IoT data to reveal that a disruption in daily rhythm is a key indicator of worsening mental health. This research is expected to lay the foundation for developing personalized mental healthcare management systems.
KAIST (President Kwang Hyung Lee) announced on the 21st of October that a research team led by Professor Uichin Lee from the School of Computing has demonstrated the possibility of accurately tracking an individual's mental health status using in-home Internet of Things (IoT) sensor data.
Consistent self-monitoring is important for mental health management, but existing smartphone- or wearable-based tracking methods have the limitation of data loss when the user is not wearing or carrying the device inside the home.
The research team therefore focused on in-home environmental data. A 4-week pilot study was conducted with 20 young single-person households, installing appliances, sleep mats, motion sensors, and other IoT devices to collect data, which was then analyzed along with smartphone and wearable data.
The results confirmed that utilizing IoT data alongside existing methods allows for a significantly more accurate capture of changes in mental health. For instance, reduced sleep time was closely linked to increased levels of depression, anxiety, and stress, and increased indoor temperature also showed a correlation with anxiety and depression.
<Picture1. Heatmap of the Correlation Between Each User’s Mental Health Status and Sensor Data>
Participants' behavioral patterns varied, including a 'binge-eating type' with increased refrigerator use during stress and a 'lethargic type' with a sharp decrease in activity. However, a common trend clearly emerged: mental health deteriorated as daily routines became more irregular.
Variability in daily patterns was confirmed to be a more important factor than the frequency of specific behaviors, suggesting that a regular routine is essential for maintaining mental health.
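As a rough illustration of how such a routine-irregularity signal can be quantified, the sketch below computes the day-to-day variation in the timing of one daily event and relates it to a stress score, using toy data. The column names, the chosen event, and the metric are assumptions for this sketch, not the study's actual analysis pipeline.

```python
# Illustrative sketch: quantify day-to-day irregularity of a routine event
# (e.g., the first refrigerator use of the day) and relate it to stress scores.
import pandas as pd

# One sensor event per row; one stress score per participant (toy values).
events = pd.DataFrame({
    "participant": ["p1"] * 6 + ["p2"] * 6,
    "timestamp": pd.to_datetime([
        "2025-03-01 07:10", "2025-03-02 07:20", "2025-03-03 07:05",
        "2025-03-04 07:15", "2025-03-05 07:25", "2025-03-06 07:00",  # regular routine
        "2025-03-01 06:30", "2025-03-02 11:45", "2025-03-03 09:10",
        "2025-03-04 14:20", "2025-03-05 08:05", "2025-03-06 12:40",  # irregular routine
    ]),
})
surveys = pd.DataFrame({"participant": ["p1", "p2"], "stress": [1.2, 3.4]})

# Hour of the first event on each day, per participant.
events["date"] = events["timestamp"].dt.date
events["hour"] = events["timestamp"].dt.hour + events["timestamp"].dt.minute / 60
first_use = events.groupby(["participant", "date"])["hour"].min().reset_index()

# Irregularity = day-to-day standard deviation of that timing.
irregularity = (first_use.groupby("participant")["hour"]
                .std()
                .reset_index(name="irregularity"))

merged = surveys.merge(irregularity, on="participant")
print(merged)
print("correlation with stress:", merged["irregularity"].corr(merged["stress"]))
```

With real study data, the same idea would be applied across many participants, days, and event types before computing correlations.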
When research participants viewed their life data through visualization software, they generally perceived the data as being genuinely helpful in understanding their mental health, rather than expressing concern about privacy invasion. This significantly enhanced the research acceptance and satisfaction with participation.
<Figure 2. Comparison of Average Mental Health Status Between the High Irregularity Group (Red) and the Low Irregularity Group (Blue)>

KAIST Develops Ultrafast Photothermal Process Achi..
<(From left) Ph.D. candidate Seo-Hak Park, Dr. Jae-Wan Ahn, Ph.D. candidate Dogyeong Jeon, Prof. Sung-Yool Choi, Prof. Il-Doo Kim, Dr. Chung-Sung Park, Ph.D. candidate Ui-Chul Shin; (top left) Dr. Hamin Shin, Dr. Jun-Hwe Cha>

The rapid and energy-efficient synthesis of high-performance catalysts is a critical hurdle in advancing clean energy technologies like hydrogen production. Addressing this challenge, a research team at KAIST has now developed a novel platform technology that utilizes a 0.02-second flash of light to generate an ultrahigh temperature of 3,000 °C, enabling the highly efficient synthesis of catalysts. This breakthrough process reduces energy consumption by more than a thousandfold compared to conventional methods while increasing hydrogen production efficiency by up to six times, marking a significant step toward the commercialization of clean energy.

KAIST (President Kwang Hyung Lee) announced on October 20 that a joint research team, co-led by Professor Il-Doo Kim from the Department of Materials Science and Engineering and Professor Sung-Yool Choi from the School of Electrical Engineering, has developed a “direct-contact photothermal annealing” platform. This technique synthesizes high-performance nanomaterials through brief exposure to intense light, generating a transient temperature of 3,000 °C in just 0.02 seconds.

Using this intense photothermal energy, the researchers successfully converted chemically inert nanodiamond (ND) precursors into highly conductive and catalytically active carbon nano-onions (CNOs). More impressively, the method simultaneously functionalizes the surface of the newly formed CNOs with single atoms. This integrated, one-step process restructures the support material and embeds catalytic functionality in a single light pulse, representing a significant innovation in catalyst synthesis.

CNOs, composed of concentric graphitic shells, are ideal catalyst supports due to their high conductivity, large specific surface area, and chemical stability. However, traditional CNO synthesis has been hindered by the complex, multi-step post-processing required to load metal catalysts and by reliance on energy-intensive, time-consuming thermal treatments that limit scalability.

<Schematic Illustration of the Limitations of Conventional Thermal-Radiation Synthesis and the Carbon Nano-Onion Conversion via Direct-Contact Photothermal Treatment>

To overcome these limitations, the KAIST team leveraged the photothermal effect. They devised a method of mixing ND precursors with light-absorbing carbon black (CB) and applying an intense pulse from a xenon lamp. This approach triggers the transformation of NDs into CNOs in just 0.02 seconds, a phenomenon validated by molecular dynamics simulations.

A key innovation of this platform is the simultaneous synthesis of CNOs and functionalization of single-atom catalysts (SACs). When metal precursors, such as platinum (Pt), are included in the mixture, they decompose and anchor onto the surface of the nascent CNOs as individual atoms. The subsequent rapid cooling prevents atomic aggregation, resulting in a fully integrated one-step process for both synthesis and functionalization. The team has successfully synthesized eight different high-density SACs, including platinum (Pt), cobalt (Co), and nickel (Ni).

The resulting Pt-CNO demonstrated a sixfold enhancement in hydrogen evolution efficiency compared to conventional catalysts, achieving high performance with significantly smaller quantities of precious metals. This highlights the technology's potential for scalable and sustainable hydrogen production.

“We have developed, for the first time, a direct-contact photothermal annealing process that reaches 3,000 °C in under 0.02 seconds,” said Professor Il-Doo Kim. “This ultrafast synthesis and single-atom functionalization platform reduces energy consumption by more than a thousandfold compared to traditional methods. We expect it to accelerate the commercialization of technologies in hydrogen energy, gas sensing, and environmental catalysis.”

The study’s first authors are Dogyeong Jeon (Ph.D. candidate, Department of Materials Science and Engineering, KAIST), Dr. Hamin Shin (an alumnus of the Department of Materials Science and Engineering and a current postdoctoral researcher at ETH Zurich), and Dr. Jun-Hwe Cha (an alumnus of the School of Electrical Engineering, now at SK hynix). Professors Sung-Yool Choi and Il-Doo Kim are the corresponding authors.

<Inside Cover Image of the September Issue of ACS Nano>

The research was published as a Supplementary Cover Article in the September issue of ACS Nano, a leading international journal of the American Chemical Society (ACS).

※ Paper title: “Photothermal Annealing-Enabled Millisecond Synthesis of Carbon Nanoonions and Simultaneous Single-Atom Functionalization,” DOI: 10.1021/acsnano.5c11229

This research was supported by the Global R&D Infrastructure Program and the Leading Research Center Program of the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT, and the Nano Convergence Technology Center’s Semiconductor–Battery Interfacing Platform Development Project.

KAIST Develops AI Technology That Predicts and Ass..
<(From left) Dr. Younghyun Han, (top center) Dr. Chun-Kyung Lee, (bottom center) Prof. Kwang-Hyun Cho, Ph.D. candidate Hyunjin Kim>

Controlling the state of a cell in a desired direction is one of the central challenges in life sciences, including drug development, cancer treatment, and regenerative medicine. However, identifying the right drug or genetic target for that purpose is extremely difficult. To address this, researchers at KAIST have mathematically modeled the interaction between cells and drugs in a modular “Lego block” manner—breaking them down and recombining them—to develop a new AI technology that can predict not only new cell–drug reactions never before tested but also the effects of arbitrary genetic perturbations.

KAIST (President Kwang Hyung Lee) announced on the 16th of October that a research team led by Professor Kwang-Hyun Cho of the Department of Bio and Brain Engineering has developed a generative AI-based technology capable of identifying drugs and genetic targets that can guide cells toward a desired state.

“Latent space” is an invisible mathematical map used by image-generating AI to organize the essential features of objects or cells. The research team succeeded in separating the representations of cell states and drug effects within this space and then recombining them to predict the reactions of previously untested cell–drug combinations. They further extended this principle to show that the model can also predict how a cell’s state would change when a specific gene is regulated.

The team validated this approach using real experimental data. As a result, the AI identified molecular targets capable of reverting colorectal cancer cells toward a normal-like state, which the team later confirmed through cell experiments. This finding demonstrates that the method is not limited to cancer treatment—it serves as a general platform capable of predicting various untrained cell-state transitions and drug responses. In other words, the technology not only determines whether or not a drug works but also reveals how it functions inside the cell, making the achievement particularly meaningful.

<Latent Space Direction Vector–Based Cell Transition Modeling>

The research provides a powerful tool for designing methods to induce desired cell-state changes. It is expected to have broad applications in drug discovery, cancer therapy, and regenerative medicine, such as restoring damaged cells to a healthy state.

Professor Kwang-Hyun Cho stated, “Inspired by image-generation AI, we applied the concept of a ‘direction vector,’ an idea that allows us to transform cells in a desired direction.” He added, “This technology enables quantitative analysis of how specific drugs or genes affect cells and even predicts previously unknown reactions, making it a highly generalizable AI framework.”

The study was conducted with Dr. Younghyun Han, Ph.D. candidate Hyunjin Kim, and Dr. Chun-Kyung Lee of KAIST. The research findings were published online in Cell Systems, a journal by Cell Press, on October 15.

※ Paper title: “Identifying an Optimal Perturbation to Induce a Desired Cell State by Generative Deep Learning” (DOI: 10.1016/j.cels.2025.101405)

The study was supported by the National Research Foundation of Korea (NRF) through the Ministry of Science and ICT’s Mid-Career Researcher Program and the Basic Research Laboratory (BRL) Program.
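The "direction vector" idea can be illustrated with simple latent-space arithmetic, as in the toy sketch below. The linear encoder and decoder and the simulated expression profiles are assumptions for this sketch, not the study's trained generative model; they only show how a drug's effect, estimated as an average latent shift, can be transferred to an untested cell.

```python
# Toy illustration of latent-space "direction vector" arithmetic (not the
# study's actual model): a drug's effect is estimated as the average shift it
# causes in latent space, then applied to an untreated cell's embedding.
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(8, 100))      # toy encoder: 100 genes -> 8 latent dims
W_dec = np.linalg.pinv(W_enc)          # toy decoder: pseudo-inverse of the encoder

def encode(expr):   return expr @ W_enc.T
def decode(latent): return latent @ W_dec.T

# Simulated expression profiles (rows = cells, columns = genes).
control_cells = rng.normal(size=(50, 100))
treated_cells = control_cells + rng.normal(0.5, 0.1, size=(50, 100))

# Drug-effect direction = mean latent shift between treated and control cells.
drug_direction = encode(treated_cells).mean(axis=0) - encode(control_cells).mean(axis=0)

# Predict how a new, untested cell would respond to the same drug.
new_cell = rng.normal(size=(1, 100))
predicted_treated = decode(encode(new_cell) + drug_direction)
print(predicted_treated.shape)   # (1, 100) predicted post-treatment profile
```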

Federated Learning AI Developed for Hospitals and ..
<(From bottom left) KAIST Ph.D. Candidate Yoonho Lee, Integrated M.S./Ph.D. Candidate Sein Kim, Ph.D. Candidate Sungwon Kim, Ph.D. Candidate Junseok Lee, Ph.D. Candidate Yunhak Oh; (From top right) Ph.D. Candidate Namkyeong Lee, UNC Chapel Hill Ph.D. Candidate Sukwon Yun, Emory University Professor Carl Yang, KAIST Professor Chanyoung Park>

Federated Learning was devised to solve the problem of the difficulty of aggregating personal data, such as patient medical records or financial data, in one place. However, during the process in which each institution optimizes the collaboratively trained AI to suit its own environment, a limitation arose: the AI became overly adapted to the specific institution's data, making it vulnerable to new data. A KAIST research team has presented a solution to this problem and confirmed its stable performance not only in security-critical fields like hospitals and banks but also in rapidly changing environments such as social media and online shopping.

KAIST announced on October 15th that the research team led by Professor Chanyoung Park of the Department of Industrial and Systems Engineering has developed a new learning method that fundamentally solves the chronic performance-degradation problem of Federated Learning, significantly enhancing the generalization performance of AI models.

Federated Learning is a method that allows multiple institutions to jointly train an AI without directly exchanging data. However, a problem occurs when each institution fine-tunes the resulting joint AI model to its local setting: the broad knowledge acquired earlier is diluted, leading to a Local Overfitting problem where the AI becomes excessively adapted to the data characteristics of a specific institution. For example, if several banks jointly build a 'Collaborative Loan Review AI' and one specific bank fine-tunes it on corporate customer data, that bank's AI becomes strong in corporate reviews but suffers from local overfitting, leading to degraded performance in reviewing individual or startup customers.

Professor Park's team introduced a Synthetic Data method to solve this. They extracted only the core, representative features from each institution's data to generate virtual data that contains no personal information, and applied this data during the fine-tuning process. As a result, each institution's AI can strengthen its expertise on its own data without sharing personal information, while maintaining the broad perspective (generalization performance) gained through collaborative learning.

<Figure 1. Federated Learning is a distributed learning method in which multiple institutions collaboratively train a joint Artificial Intelligence model without directly sharing their data. Each institution trains its individual AI model using its local data (Institution 1, 2, 3 Data). Afterward, only the trained model information, not the original data, is securely aggregated at a central server to construct a high-performing 'Joint AI Model.' This method allows for the effect of training with diverse data while protecting the privacy of sensitive information.>

<Figure 2. The Local Overfitting problem occurs during the process of fine-tuning the 'Joint AI Model' built through Federated Learning with each institution's data. For example, Institution 3 can fine-tune the joint AI with its own data (Type 0, 2) to create an expert AI for those types, but in the process it forgets the knowledge about data (Type 1) that other institutions had (Information Loss). In this way, each institution's AI becomes optimized only for its own data, gradually losing the ability (generalization performance) to solve other types of problems that was obtained through collaboration.>

The research results showed that this method is particularly effective in fields where data security is crucial, such as healthcare and finance, and it also demonstrated stable performance in environments where new users and products are continuously added, like social media and e-commerce. It proved that the AI could maintain stable performance without confusion even if a new institution joins the collaboration or data characteristics change rapidly.

<Figure 3. The technology proposed by the research team solves the local overfitting problem by utilizing Synthetic Data. When each institution fine-tunes its AI with its own data, it simultaneously trains with 'Global Synthetic Data' created from the data of other institutions. This synthetic data acts as a kind of 'Vaccine' to prevent the AI from forgetting information not present in the local data (e.g., Type 2 in the image), helping the AI to gain expertise on specific data while retaining a broad view (generalization performance) to handle other types of data.>

Professor Chanyoung Park of the Department of Industrial and Systems Engineering said, "This research opens a new path to simultaneously ensure both expertise and versatility for each institution's AI while protecting data privacy," adding, "It will be a great help in fields where data collaboration is essential but security is important, such as medical AI and financial fraud detection AI."

The research was led by Graduate School of Data Science Ph.D. candidate Sungwon Kim as first author, with Professor Chanyoung Park as the corresponding author. It was recognized for its excellence by being selected for an Oral Presentation, reserved for the top 1.8% of outstanding papers, at the International Conference on Learning Representations (ICLR) 2025, a top-tier academic conference in the field of Artificial Intelligence held in Singapore last April.

※ Paper Title: Subgraph Federated Learning for Local Generalization, https://doi.org/10.48550/arXiv.2503.03995

Meanwhile, this research was supported by the 'Robust, Fair, and Scalable Data-Centric Continual Learning' project of the Institute of Information & Communications Technology Planning & Evaluation (IITP), and by the 'Graph Foundation Model: Graph-based Machine Learning Applicable to Various Modalities and Domains' project and the 'Data Science Convergence Talent Fostering Program' of the National Research Foundation of Korea (NRF).
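The overall recipe, federated averaging followed by local fine-tuning on a mixture of local and shared synthetic data, can be sketched as below. The linear models, the toy data, and the way the synthetic set is generated here are placeholders for illustration, not the paper's actual method.

```python
# Toy sketch of the idea: federated averaging builds a joint model, and each
# institution then fine-tunes on its own data MIXED with shared synthetic data
# so the jointly learned knowledge is not forgotten (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

def train(w, X, y, lr=0.05, epochs=100):
    """A few gradient steps of linear least squares (stand-in for model training)."""
    for _ in range(epochs):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

# Three institutions with differently distributed local data.
true_w = rng.normal(size=5)
clients = []
for m in (-1.0, 0.0, 1.0):
    X = rng.normal(loc=m, size=(200, 5))
    y = X @ true_w + rng.normal(0.0, 0.1, 200)
    clients.append((X, y))

# 1) Federated averaging: train locally, average the weights on a "server".
local_ws = [train(np.zeros(5), X, y) for X, y in clients]
joint_w = np.mean(local_ws, axis=0)

# 2) Shared synthetic data (here: random features pseudo-labeled by the joint
#    model; a stand-in for synthetic data carrying no personal information).
synth_X = rng.normal(loc=0.0, scale=1.5, size=(300, 5))
synth_y = synth_X @ joint_w

# 3) Local fine-tuning on (own data + synthetic data) to keep generalization.
finetuned = []
for X, y in clients:
    X_mix = np.vstack([X, synth_X])
    y_mix = np.concatenate([y, synth_y])
    finetuned.append(train(joint_w.copy(), X_mix, y_mix))

for i, w in enumerate(finetuned):
    print(f"institution {i}: weight error vs. ground truth = {np.linalg.norm(w - true_w):.3f}")
```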

KAIST Develops Multimodal AI That Understands Text..
<(From Left) M.S. candidate Soyoung Choi, Ph.D. candidate Seong-Hyeon Hwang, Professor Steven Euijong Whang>

Just as human eyes tend to focus on pictures before reading accompanying text, multimodal artificial intelligence (AI)—which processes multiple types of sensory data at once—also tends to depend more heavily on certain types of data. KAIST researchers have now developed a new multimodal AI training technology that enables models to recognize both text and images evenly, leading to far more accurate predictions.

KAIST (President Kwang Hyung Lee) announced on the 14th that a research team led by Professor Steven Euijong Whang from the School of Electrical Engineering has developed a novel data augmentation method that enables multimodal AI systems—those that must process multiple data types simultaneously—to make balanced use of all input data.

Multimodal AI combines various forms of information, such as text and video, to make judgments. However, AI models often show a tendency to rely excessively on one particular type of data, resulting in degraded prediction performance.

To solve this problem, the research team deliberately trained AI models using mismatched, or incongruent, data pairs. By doing so, the model learned to rely on all modalities—text, images, and even audio—in a balanced way, regardless of context. The team further improved performance stability by incorporating a training strategy that compensates for low-quality data while emphasizing more challenging examples. The method is not tied to any specific model architecture and can be easily applied to various data types, making it highly scalable and practical.

<Model Prediction Changes with a Data-Centric Multimodal AI Training Framework>

Professor Steven Euijong Whang explained, “Improving AI performance is not just about changing model architectures or algorithms—it’s much more important how we design and use the data for training.” He continued, “This research demonstrates that designing and refining the data itself can be an effective approach to help multimodal AI utilize information more evenly, without becoming biased toward a specific modality such as images or text.”

The study was co-led by doctoral student Seong-Hyeon Hwang and master’s student Soyoung Choi, with Professor Steven Euijong Whang serving as the corresponding author. The results will be presented at NeurIPS 2025 (Conference on Neural Information Processing Systems), the world’s premier conference in the field of AI, which will be held this December in San Diego, USA, and Mexico City, Mexico.

※ Paper title: “MIDAS: Misalignment-based Data Augmentation Strategy for Imbalanced Multimodal Learning,” Original paper: https://arxiv.org/pdf/2509.25831

The research was supported by the Institute for Information & Communications Technology Planning & Evaluation (IITP) under the projects “Robust, Fair, and Scalable Data-Centric Continual Learning” (RS-2022-II220157) and “AI Technology for Non-Invasive Near-Infrared-Based Diagnosis and Treatment of Brain Disorders” (RS-2024-00444862).
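The core augmentation idea, deliberately pairing an input from one modality with a mismatched input from another, can be sketched as follows. The pairing rule, the toy dataset, and the choice to keep the image's label are assumptions made for this illustration, not the exact MIDAS procedure.

```python
# Toy sketch of misalignment-style augmentation: pair an image with a caption
# from a DIFFERENT sample so the model cannot lean on one modality alone.
# The pairing rule and label handling are illustrative assumptions only.
import random

dataset = [
    {"image": "img_001.png", "text": "a red traffic light", "label": "stop"},
    {"image": "img_002.png", "text": "a green traffic light", "label": "go"},
    {"image": "img_003.png", "text": "a pedestrian crossing", "label": "stop"},
]

def make_misaligned(sample, pool, rng=random):
    """Swap in a caption taken from a sample belonging to a different class."""
    others = [s for s in pool if s["label"] != sample["label"]]
    donor = rng.choice(others)
    return {"image": sample["image"],      # image kept as-is
            "text": donor["text"],         # text deliberately mismatched
            "label": sample["label"]}      # supervision kept with the image (assumed)

augmented = dataset + [make_misaligned(s, dataset) for s in dataset]
for example in augmented:
    print(example)
```

Training on such incongruent pairs (alongside the original aligned ones) is what discourages the model from trusting a single dominant modality.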

Chemobiological Platform Enables Renewable Convers..
<(From Left) Professor Sunkyu Han, Ph.D. candidate Tae Wan Kim, Professor Kyeong Rok Choi, Professor Sang Yup Lee>

With growing concerns over fossil fuel depletion and the environmental impacts of petrochemical production, scientists are actively exploring renewable strategies to produce essential industrial chemicals. A collaborative research team—led by Distinguished Professor Sang Yup Lee, Senior Vice President for Research, from the Department of Chemical and Biomolecular Engineering, together with Professor Sunkyu Han from the Department of Chemistry at the Korea Advanced Institute of Science and Technology (KAIST)—has developed an integrated chemobiological platform that converts renewable carbon sources such as glucose and glycerol into oxygenated precursors, which are subsequently deoxygenated in the same solvent system to yield benzene, toluene, ethylbenzene, and p-xylene (BTEX), the fundamental aromatic hydrocarbons used in fuels, polymers, and consumer products.

<Figure 1. Schematic representation of the chemobiological synthesis of BTEX from glucose or glycerol in Escherichia coli>

From Sugars to Aromatic Hydrocarbons of Petroleum

The researchers designed four metabolically engineered strains of Escherichia coli, each programmed to produce a specific oxygenated precursor—phenol, benzyl alcohol, 2-phenylethanol, or 2,5-xylenol. These intermediates are generated through tailored genetic modifications, such as deletion of feedback-regulated enzymes, overexpression of pathway-specific genes, and introduction of heterologous enzymes to expand metabolic capabilities.

During fermentation, the products were continuously extracted into the organic solvent isopropyl myristate (IPM). Acting as a dual-function solvent, IPM not only mitigated the toxic effects of aromatic compounds on cell growth but also served directly as the reaction medium for downstream chemical upgrading. By eliminating the need for intermediate purification, solvent exchange, or distillation, this solvent-integrated system streamlined the conversion of renewable feedstocks into valuable aromatics.

Overcoming Chemical Barriers in an Unconventional Solvent

A central innovation of this work lies in adapting chemical deoxygenation reactions to function efficiently within IPM—a solvent rarely used in organic synthesis. Traditional catalysts and reagents often proved ineffective under these conditions due to solubility limitations or incompatibility with biologically derived impurities. Through systematic optimization, the team established mild and selective catalytic strategies compatible with IPM. For example, phenol was successfully deoxygenated to benzene in up to 85% yield using a palladium-based catalytic system, while benzyl alcohol was efficiently converted to toluene after activated-charcoal pretreatment of the IPM extract. More challenging transformations, such as converting 2-phenylethanol to ethylbenzene, were achieved through a mesylation–reduction sequence adapted to the IPM phase. Likewise, 2,5-xylenol derived from glycerol was converted to p-xylene in 62% yield via a two-step reaction, completing the renewable synthesis of the full BTEX spectrum.

A Sustainable, Modular Framework

Beyond producing BTEX, the study establishes a generalizable framework for integrating microbial biosynthesis with chemical transformations in a continuous solvent environment. This modular approach reduces energy demand, minimizes solvent waste, and enables process intensification—key factors for scaling up renewable chemical production. The high boiling point of IPM (>300 °C) simplifies product recovery, as BTEX compounds can be isolated by fractional distillation while the solvent is readily recycled. Such a design is consistent with the principles of green chemistry and the circular economy, providing a practical alternative to fossil-based petrochemical processes.

Toward a Carbon-Neutral Future

Dr. Xuan Zou, the first author of the paper, explained, “By coupling the selectivity of microbial metabolism with the efficiency of chemical catalysis, this platform establishes a renewable pathway to some of the most widely used building blocks in the chemical industry. Future efforts will focus on optimizing metabolic fluxes, extending the platform to additional aromatic targets, and adopting greener catalytic systems.”

In addition, Distinguished Professor Sang Yup Lee noted, “As the global demand for BTEX and related chemicals continues to grow, this innovation provides both a scientific and industrial foundation for reducing reliance on petroleum-based processes. It marks an important step toward lowering the carbon footprint of the fuel and chemical sectors while ensuring a sustainable supply of essential aromatic hydrocarbons.”

This research was supported by the Development of Platform Technologies of Microbial Cell Factories for the Next-Generation Biorefineries Project (2022M3J5A1056117) and the Development of Advanced Synthetic Biology Source Technologies for Leading the Biomanufacturing Industry Project (RS-2024-00399424), funded by the National Research Foundation supported by the Korean Ministry of Science and ICT. This study was published in the latest issue of the Proceedings of the National Academy of Sciences of the United States of America (PNAS).

AI Nüshu Wins International Award
<(From left) Dr. Yuqian Sun, Professor Chang-Hee Lee of the Department of Industrial Design, and Ali Asadipour, Director of CSRC at the Royal College of Art>

'Nüshu (女書)' is the world's only women's script, a unique writing system created autonomously by women in Hunan Province, China, starting around the 19th century. These women, excluded from Hanzi education, used it to record their lives and communicate with each other. A research team from KAIST participated in the 'AI Nüshu (女书)' project, which combines the script's significance (creation amidst oppression, female solidarity, linguistic experimentation) with modern technology, winning a prestigious international award often called the 'Academy Award of the media art world.'

KAIST announced on the 10th that the 'AI Nüshu' project, jointly conducted by Professor Chang-Hee Lee's research team from the Department of Industrial Design and Ali Asadipour, Director of the Computer Science Research Center at the Royal College of Art (RCA), was selected for the Honorary Mention in the Digital Humanity category at the 'Prix Ars Electronica 2025,' the world's highest-level media art festival.

<Installation image of 'AI Nüshu'>

The 'Prix Ars Electronica,' known as the 'Academy Award of the media art world,' is the premier international media art competition held annually in Linz, Austria. This competition, which discovers innovative works spanning the boundaries of art and science, saw 3,987 submissions from 98 countries this year, with only two works receiving the honor in the Digital Humanity category.

The award-winning work, 'AI Nüshu (女书),' is based on 'Nüshu,' the world's only women's script created by Chinese women who were excluded from literacy education to record and communicate their lives. The KAIST research team and collaborators combined this script with Computational Linguistics to create an installation that visitors can directly experience. The artificial intelligence within the artwork learns the communication methods of pre-modern Chinese women and generates its own new language. This is regarded as a symbol of resistance against the patriarchal order and a feminist endeavor that moves beyond Western-centric views on language.

<Example of the same sentence expressed in English, Chinese, Nüshu, and AI Nüshu>

It also received high praise for artistically presenting the possibility of machines creating new languages, going beyond the preconception that 'only humans create language.'

Dr. Yuqian Sun of the Royal College of Art expressed her feelings, saying, "Although there were many difficulties in my life and research process, I feel great reward and emotion through this award." Professor Chang-Hee Lee of the KAIST Department of Industrial Design stated, "It is very meaningful that this contemplative art, born from the intersection of history, humanities, art, and technology, has led to such a globally prestigious award."

Detailed information about the project can be found on the official Prix Ars Electronica website (https://ars.electronica.art/prix/en/digitalhumanity/).