Engineered C. glutamicum Strain Capable of Producing the World’s Highest Titer of Glutaric Acid
An engineered C. glutamicum strain that can produce the world’s highest titer of glutaric acid was developed by employing systems metabolic engineering strategies.

A metabolic engineering research group at KAIST has developed an engineered Corynebacterium glutamicum strain capable of producing high-level glutaric acid without byproducts from glucose. This new strategy will be useful for developing engineered microorganisms for the bio-based production of value-added chemicals.

Glutaric acid, also known as pentanedioic acid, is a carboxylic acid that is widely used in various applications, including the production of polyesters, polyamides, polyurethanes, glutaric anhydride, 1,5-pentanediol, and 5-hydroxyvaleric acid. Glutaric acid has been produced by various petroleum-based chemical methods that rely on non-renewable and toxic starting materials, so various approaches have been taken to produce it biologically from renewable resources.

Previously, a research group from KAIST reported the first glutaric acid-producing Escherichia coli, developed by introducing Pseudomonas putida genes, but the titer was low. Glutaric acid production by metabolically engineered Corynebacterium glutamicum has also been reported in several studies, but further improvements seemed possible, since C. glutamicum is capable of producing more than 130 g/L of L-lysine.

A research group comprised of Taehee Han, Gi Bae Kim, and Distinguished Professor Sang Yup Lee of the Department of Chemical and Biomolecular Engineering addressed this issue. Their research paper “Glutaric acid production by systems metabolic engineering of an L-lysine-overproducing Corynebacterium glutamicum” was published online in PNAS on November 16, 2020.

< Figure: Systems metabolic engineering strategies employed for the construction of an engineered C. glutamicum strain that is capable of efficiently producing glutaric acid. >

This research reports the development of a metabolically engineered C. glutamicum strain capable of efficiently producing glutaric acid, starting from an L-lysine overproducer. The following novel strategies and approaches were employed to achieve high-level glutaric acid production. First, metabolic pathways in C. glutamicum were reconstituted for glutaric acid production by introducing P. putida genes. Then, multi-omics analyses covering the genome, transcriptome, and fluxome were conducted to understand the phenotype of the L-lysine overproducer strain. Beyond this systematic understanding of the host strain, gene manipulation targets predicted by the omics analyses were applied to engineer C. glutamicum, resulting in a strain capable of efficiently producing glutaric acid. Furthermore, a glutaric acid exporter was discovered for the first time and used to further increase production by enhancing product excretion. Last but not least, culture conditions were optimized for high-level glutaric acid production.

As a result, the final engineered strain produced 105.3 g/L of glutaric acid, the highest titer ever reported, in 69 hours by fed-batch fermentation.

Professor Sang Yup Lee said, “It is meaningful that we were able to develop a highly efficient glutaric acid producer capable of producing glutaric acid at the world’s highest titer, without any byproducts, from renewable carbon sources. This will further accelerate the bio-based production of valuable chemicals in the pharmaceutical, medical, and chemical industries.”

This research was supported by the Bio & Medical Technology Development Program of the National Research Foundation and funded by the Ministry of Science and ICT.

-Profile
Distinguished Professor Sang Yup Lee
leesy@kaist.ac.kr
http://mbel.kaist.ac.kr
Department of Chemical and Biomolecular Engineering
KAIST
Researchers Control Multiple Wavelengths of Light from a Single Particle
KAIST researchers have synthesized a collection of nanoparticles, known as carbon dots, capable of emitting multiple wavelengths of light from a single particle. The team also discovered that the dispersion of the carbon dots, or the interparticle distance between each dot, influences the properties of the light the carbon dots emit. The discovery will allow researchers to understand how to control these carbon dots and create new, environmentally responsible displays, lighting, and sensing technologies.

Research into light-emitting nanoparticles, such as quantum dots, has been an active area of interest for the last decade and a half. These particles, or phosphors, are nanoparticles made out of various materials that are capable of emitting light at specific wavelengths by leveraging quantum mechanical properties of the materials. This provides new ways to develop lighting and display solutions as well as more precise detection and sensing in instruments.

As technology becomes smaller and more sophisticated, the use of fluorescent nanoparticles has increased dramatically in many applications due to the purity of the colors emitted from the dots as well as their tunability to meet desired optical properties.

Carbon dots, a type of fluorescent nanoparticle, have seen an increase in interest from researchers as a candidate to replace non-carbon dots, whose construction requires heavy metals that are toxic to the environment. Since they are made up mostly of carbon, their low toxicity is an extremely attractive quality when coupled with the tunability of their inherent optical properties.

Another striking feature of carbon dots is their capability to emit multiple wavelengths of light from a single nanoparticle. This multi-wavelength emission can be stimulated under a single excitation source, enabling the simple and robust generation of white light from a single particle emitting multiple wavelengths simultaneously.
Carbon dots also exhibit concentration-dependent photoluminescence. In other words, the distance between individual carbon dots affects the light they subsequently emit under an excitation source. These combined properties make carbon dots a unique source that could enable extremely accurate detection and sensing.

This concentration-dependency, however, had not been fully understood. In order to fully utilize the capabilities of carbon dots, the mechanisms that govern their seemingly variable optical properties must first be uncovered. It was previously theorized that the concentration-dependency of carbon dots was due to a hydrogen bonding effect. Now, a KAIST research team, led by Professor Do Hyun Kim of the Department of Chemical and Biomolecular Engineering, has posited and demonstrated that the dual-color emissiveness is instead due to the interparticle distances between the carbon dots. The research was published in Issue 36 of Physical Chemistry Chemical Physics.

The first author of the paper, PhD candidate Hyo Jeong Yoo, along with Professor Kim and researcher Byeong Eun Kwak, examined how the relative intensities of the red and blue emissions changed as the interparticle distance, or concentration, of the carbon dots was varied. They found that as the concentration was adjusted, the light emitted from the carbon dots would transform. By varying the concentration, the team was able to control the relative intensity of the colors, as well as emit them simultaneously to generate white light from a single source (see Figure).

“The concentration-dependence of the photoluminescence of carbon dots on the change of the emissive origins for different interparticle distances has been overlooked in previous research. With the analysis of the dual-color-emission phenomenon of carbon dots, we believe that this result may provide a new perspective to investigate their photoluminescence mechanism,” Yoo explained.
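As a rough intuition for how interparticle distance could switch the dominant emission color, here is a toy numerical sketch. This is our illustration, not the authors’ model: it simply assigns the blue band to isolated dots and the red band to dots close enough for excited-state energy transfer, assuming a FRET-like distance dependence.

```python
def relative_emission(concentration, r0=1.0):
    """Toy dual-emission model (illustrative assumption, not the paper's).
    Mean interparticle distance in a dispersion scales as
    concentration**(-1/3); we assume a FRET-like energy-transfer
    efficiency E = 1 / (1 + (r/r0)**6), assigning the blue band to
    non-transferring dots and the red band to energy transfer."""
    r = concentration ** (-1.0 / 3.0)        # mean spacing, arbitrary units
    transfer = 1.0 / (1.0 + (r / r0) ** 6)   # energy-transfer efficiency
    blue, red = 1.0 - transfer, transfer
    return blue, red

blue_dilute, red_dilute = relative_emission(0.05)   # far-apart dots
blue_dense, red_dense = relative_emission(5.0)      # closely packed dots
# dilute dispersion -> blue dominates; concentrated -> red dominates
```

In this caricature, sweeping the concentration smoothly shifts the blue/red ratio, which is the qualitative behavior the study reports; the real mechanism the team identified is of course quantified experimentally, not by this formula.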
The newly analyzed ability to control the photoluminescence of carbon dots will likely be heavily utilized in the continued development of solid-state lighting applications and sensing.

< Figure. Photoluminescence change of dual-color-emissive carbon dots (CDs) depending on their concentration. Blue- and red-emissions show different contributions with different interparticle distances. >

Publication: Yoo, H. J., Kwak, B. E., and Kim, D. H. (2020) Interparticle distance as a key factor for controlling the dual-emission properties of carbon dots. Physical Chemistry Chemical Physics, Issue 36, Pages 20227-20237. Available online at https://doi.org/10.1039/d0cp02120b

Profile: Do Hyun Kim, Sc.D. Professor dokim@kaist.ac.kr http://procal.kaist.ac.kr/ Process Analysis Laboratory Department of Chemical and Biomolecular Engineering https://www.kaist.ac.kr Korea Advanced Institute of Science and Technology (KAIST) Daejeon, Republic of Korea (END)
To Talk or Not to Talk: Smart Speaker Determines Opportune Moments to Talk
A KAIST research team has developed a new context-awareness technology that enables AI assistants to determine when to talk to their users based on user circumstances. This technology can contribute to developing advanced AI assistants that offer pre-emptive services, such as reminding users to take medication on time or modifying schedules based on the actual progress of planned tasks.

Unlike conventional AI assistants that used to act passively upon users’ commands, today’s AI assistants are evolving to provide more proactive services through self-reasoning about user circumstances. This opens up new opportunities for AI assistants to better support users in their daily lives. However, if AI assistants do not talk at the right time, they could interrupt their users instead of helping them. The right time to talk is more difficult for AI assistants to determine than it appears, because the context can differ depending on the state of the user or the surrounding environment.

A group of researchers led by Professor Uichin Lee from the KAIST School of Computing identified the key contextual factors in user circumstances that determine when an AI assistant should start, stop, or resume engaging in voice services in smart home environments. Their findings were published in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT) in September. The group conducted this study in collaboration with Professor Jae-Gil Lee’s group in the KAIST School of Computing, Professor Sangsu Lee’s group in the KAIST Department of Industrial Design, and Professor Auk Kim’s group at Kangwon National University.

After developing smart speakers equipped with an AI assistant function for experimental use, the researchers installed them in the rooms of 40 students living in double-occupancy campus dormitories and collected a total of 3,500 in-situ user response data records over a period of a week.
The smart speakers repeatedly asked the students the question, “Is now a good time to talk?” at random intervals or whenever a student’s movement was detected. Students answered with either “yes” or “no” and then explained why, describing what they had been doing before being questioned by the smart speakers. Data analysis revealed that 47% of user responses were “no,” indicating that users did not want to be interrupted.

The research team then created 19 home activity categories to cross-analyze the key contextual factors that determine opportune moments for AI assistants to talk, and classified these factors as ‘personal,’ ‘movement,’ and ‘social’ factors.

Personal factors include: (1) the degree of concentration on or engagement in activities, (2) the degree of urgency and busyness, (3) the user’s mental or physical condition, and (4) the ability to talk or listen while multitasking. While users were busy concentrating on studying, tired, or drying their hair, they found it difficult to engage in conversational interactions with the smart speakers.

Representative movement factors include departure, entrance, and physical activity transitions. Interestingly, in movement scenarios, the team found that the communication range was an important factor. Departure is an outbound movement away from the smart speaker, and entrance is an inbound movement toward it. Users were much more available during inbound movements than during outbound movements.

In general, smart speakers are located in a shared place at home, such as a living room, where multiple family members gather at the same time. In Professor Lee’s group’s experiment, almost half of the in-situ user responses were collected when both roommates were present. The group found that social presence also influenced interruptibility: roommates often wanted to minimize possible interpersonal conflicts, such as disturbing their roommates’ sleep or work.
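The personal, movement, and social factor categories above can be illustrated with a minimal rule-based sketch. The rules and field names below are hypothetical illustrations we invented for clarity; the study itself derived these factors from in-situ response data, not from hand-written rules.

```python
def is_opportune(context):
    """Decide whether an AI assistant should talk right now.
    All keys and rules are hypothetical stand-ins for the study's
    'personal', 'movement', and 'social' factor categories."""
    # personal factors: concentration, urgency, physical/mental state
    if context.get("deep_focus") or context.get("urgent_task") or context.get("exhausted"):
        return False
    # social factor: avoid interpersonal conflict, e.g. a sleeping roommate
    if context.get("roommate_sleeping"):
        return False
    # movement factor: outbound movement (leaving the speaker's range)
    # was a poor moment in the study; inbound movement was a good one
    if context.get("movement") == "outbound":
        return False
    return True

print(is_opportune({"movement": "inbound"}))   # True
print(is_opportune({"deep_focus": True}))      # False
```

A deployed assistant would replace these rules with a model trained on multi-modal sensor data, as the researchers suggest, but the decision interface — context in, talk/defer out — would look much the same.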
Narae Cha, the lead author of this study, explained, “By considering personal, movement, and social factors, we can envision a smart speaker that can intelligently manage the timing of conversations with users.” She believes this work lays the foundation for the future of AI assistants, adding, “Multi-modal sensory data can be used for context sensing, and this context information will help smart speakers proactively determine when it is a good time to start, stop, or resume conversations with their users.”

This work was supported by the National Research Foundation (NRF) of Korea.

< Image 1. In-situ experience sampling of user availability for conversations with AI assistants >
< Image 2. Key contextual factors that determine optimal timing for AI assistants to talk >

Publication: Cha, N., et al. (2020) “Hello There! Is Now a Good Time to Talk?”: Opportune Moments for Proactive Interactions with Smart Speakers. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), Vol. 4, No. 3, Article No. 74, pp. 1-28. Available online at https://doi.org/10.1145/3411810

Link to Introductory Video: https://youtu.be/AA8CTi2hEf0

Profile: Uichin Lee Associate Professor uclee@kaist.ac.kr http://ic.kaist.ac.kr Interactive Computing Lab. School of Computing https://www.kaist.ac.kr Korea Advanced Institute of Science and Technology (KAIST) Daejeon, Republic of Korea (END)
Chemical Scissors Snip 2D Transition Metal Dichalcogenides into Nanoribbons
New ‘nanoribbon’ catalyst should slash the cost of hydrogen production for clean fuels.

Researchers have identified a potential catalyst alternative – and an innovative way to produce it using chemical ‘scissors’ – that could make hydrogen production more economical. The research team, led by Professor Sang Ouk Kim at the Department of Materials Science and Engineering, published their work in Nature Communications.

Hydrogen is likely to play a key role in the clean transition away from fossil fuels and other processes that produce greenhouse gas emissions. A raft of transportation sectors, such as long-haul shipping and aviation, are difficult to electrify and so will require cleanly produced hydrogen as a fuel or as a feedstock for other carbon-neutral synthetic fuels. Likewise, fertilizer production and the steel sector are unlikely to be “de-carbonized” without cheap and clean hydrogen.

The problem is that the cheapest method by far of producing hydrogen gas is currently from natural gas, a process that itself produces the greenhouse gas carbon dioxide – which defeats the purpose. Alternative techniques of hydrogen production are well established, such as electrolysis, in which an electric current between two electrodes plunged into water overcomes the chemical bonds holding water together, splitting it into its constituent elements, oxygen and hydrogen. But one of the factors contributing to electrolysis’s high cost, beyond it being extremely energy-intensive, is the need for the very expensive, precious, and relatively rare metal platinum. The platinum is used as a catalyst – a substance that kicks off or speeds up a chemical reaction – in the hydrogen production process. As a result, researchers have long been on the hunt for a substitute for platinum: another catalyst that is abundant in the earth and thus much cheaper.
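To see why electrolysis is described as extremely energy-intensive, a back-of-the-envelope calculation using standard thermodynamic values (our textbook figures, not numbers from the article) gives the minimum electricity needed per kilogram of hydrogen:

```python
# Thermodynamic minimum energy to split water, per kg of H2 produced.
# Standard values at 25 °C (general reference data, not from the article):
DELTA_G = 237.1e3   # J/mol H2, Gibbs free energy (reversible electrical limit)
DELTA_H = 285.8e3   # J/mol H2, enthalpy (thermoneutral basis)
M_H2 = 2.016e-3     # kg/mol, molar mass of H2
J_PER_KWH = 3.6e6

kwh_per_kg_reversible = DELTA_G / M_H2 / J_PER_KWH      # ~33 kWh/kg
kwh_per_kg_thermoneutral = DELTA_H / M_H2 / J_PER_KWH   # ~39 kWh/kg

# Real electrolyzers run above these limits, which is why both cheap
# electricity and cheap, efficient catalysts matter for hydrogen cost.
```

Even at the reversible limit, a kilogram of hydrogen costs tens of kilowatt-hours of electricity; the catalyst determines how far above that limit a practical cell must operate.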
Transition metal dichalcogenides, or TMDs, in nanomaterial form have for some time been considered a good candidate to replace platinum as a catalyst. These are substances composed of one atom of a transition metal (the elements in the middle part of the periodic table) and two atoms of a chalcogen element (the elements in the third-to-last column of the periodic table, specifically sulfur, selenium, and tellurium).

What makes TMDs a good bet as a platinum replacement is not just that they are much more abundant, but also that their electrons are structured in a way that gives the electrodes a boost. In addition, a TMD in nanomaterial form is essentially a two-dimensional, super-thin sheet only a few atoms thick, just like graphene. The ultrathin nature of a 2D TMD nanosheet exposes a great many more TMD molecules during catalysis than a bulk block of the material would, kicking off and speeding up the hydrogen-making chemical reaction that much more.

However, even here the TMD molecules are only reactive at the four edges of a nanosheet; in the flat interior, not much is going on. In order to increase the chemical reaction rate in the production of hydrogen, the nanosheet would need to be cut into very thin, almost one-dimensional strips, thereby creating many edges.

In response, the research team developed what is in essence a pair of chemical scissors that can snip TMDs into tiny strips. “Up to now, the only substances that anyone has been able to turn into these ‘nano-ribbons’ are graphene and phosphorene,” said Professor Kim, one of the researchers involved in devising the process. “But they’re both made up of just one element, so it’s pretty straightforward.
Figuring out how to do it for TMD, which is made of two elements, was going to be much harder.”

The ‘scissors’ involve a two-step process: first inserting lithium ions into the layered structure of the TMD sheets, and then using ultrasound to cause a spontaneous ‘unzipping’ in straight lines. “It works sort of like when you split a plank of plywood: it breaks easily in one direction, along the grain,” Professor Kim continued. “It’s actually really simple.”

The researchers then tried the method with various types of TMDs, including those made of molybdenum, selenium, sulfur, tellurium, and tungsten. All worked just as well, with a catalytic efficiency as effective as platinum’s. Because of the simplicity of the procedure, this method should be usable not just for the large-scale production of TMD nanoribbons, but also to make similar nanoribbons from other multi-elemental 2D materials for purposes beyond hydrogen production.

< Schematic view of scissoring 2D sheets into nanoribbons. >

-Profile
Professor Sang Ouk Kim
Soft Nanomaterials Laboratory (http://snml.kaist.ac.kr)
Department of Materials Science and Engineering
KAIST
E. coli Engineered to Grow on CO₂ and Formic Acid as Sole Carbon Sources
- An E. coli strain that can grow to a relatively high cell density solely on CO₂ and formic acid was developed by employing metabolic engineering. -

< From left: Jong An Lee, Distinguished Professor Sang Yup Lee, Dr. Junho Bang, and Dr. Jung Ho Ahn >

Most biorefinery processes have relied on the use of biomass as a raw material for the production of chemicals and materials. Even though the use of CO₂ as a carbon source in biorefineries is desirable, it has not been possible to make common microbial strains such as E. coli grow on CO₂. Now, a metabolic engineering research group at KAIST has developed a strategy to grow an E. coli strain to higher cell density solely on CO₂ and formic acid.

Formic acid is a one-carbon carboxylic acid and can be easily produced from CO₂ using a variety of methods. Since it is easier to store and transport than CO₂, formic acid can be considered a good liquid-form alternative to CO₂.

With support from the C1 Gas Refinery R&D Center and the Ministry of Science and ICT, a research team led by Distinguished Professor Sang Yup Lee stepped up their work to develop an engineered E. coli strain capable of growing to an up to 11-fold higher cell density than those previously reported, using CO₂ and formic acid as sole carbon sources. This work was published in Nature Microbiology on September 28.

Despite recent reports by several research groups on the development of E. coli strains capable of growing on CO₂ and formic acid, the maximum cell growth remained too low (an optical density of around 1), and thus the production of chemicals from CO₂ and formic acid has been far from realized. The team previously reported the reconstruction of the tetrahydrofolate cycle and reverse glycine cleavage pathway to construct an engineered E. coli strain that can sustain growth on CO₂ and formic acid.
To further enhance growth, the research team introduced the previously designed synthetic CO₂ and formic acid assimilation pathway and two formate dehydrogenases. Metabolic fluxes were also fine-tuned, the gluconeogenic flux was enhanced, and the levels of cytochrome bo3 and bd-I ubiquinol oxidase for ATP generation were optimized. The engineered E. coli strain was able to grow to a relatively high OD600 of 7 to 11, showing promise as a platform strain growing solely on CO₂ and formic acid.

Professor Lee said, “We engineered E. coli that can grow to a higher cell density using only CO₂ and formic acid. We think that this is an important step forward, but this is not the end. The engineered strain we developed still needs further engineering so that it can grow faster to a much higher density.” Professor Lee’s team is continuing to develop such a strain. “In the future, we would be delighted to see the production of chemicals from an engineered E. coli strain using CO₂ and formic acid as sole carbon sources,” he added.

< Figure: Metabolic engineering strategies and central metabolic pathways of the engineered E. coli strain that grows on CO₂ and formic acid. Carbon assimilation and reducing power regeneration pathways are described. Engineering strategies and genetic modifications employed in the engineered strain are also described. Figure from Nature Microbiology. >

Profile: Distinguished Professor Sang Yup Lee leesy@kaist.ac.kr http://mbel.kaist.ac.kr Department of Chemical and Biomolecular Engineering KAIST
Sturdy Fabric-Based Piezoelectric Energy Harvester
KAIST researchers have presented a highly flexible but sturdy wearable piezoelectric harvester using a simple and easy fabrication process of hot pressing and tape casting. This energy harvester, which has record-high interfacial adhesion strength, takes us one step closer to manufacturing embedded wearable electronics. A research team led by Professor Seungbum Hong said that the novelty of this result lies in its simplicity, applicability, durability, and its new characterization of wearable electronic devices.

Wearable devices are increasingly being used in a wide array of applications, from small electronics to embedded devices such as sensors, actuators, displays, and energy harvesters. Despite their many advantages, high costs and complex fabrication processes have remained challenges to commercialization, and their durability has frequently been questioned. To address these issues, Professor Hong’s team developed a new fabrication process and analysis technology for testing the mechanical properties of affordable wearable devices.

For this process, the research team used a hot pressing and tape casting procedure to connect the fabric structures of polyester and a polymer film. Hot pressing has usually been used when making batteries and fuel cells due to its high adhesiveness. Above all, the process takes only two to three minutes. The newly developed fabrication process will enable the direct application of a device onto general garments using hot pressing, just as graphic patches can be attached to garments using a heat press.

In particular, when the polymer film is hot pressed onto a fabric below its crystallization temperature, it transforms into an amorphous state. In this state, it compactly attaches to the concave surface of the fabric and infiltrates the gaps between the transverse wefts and longitudinal warps. These features result in high interfacial adhesion strength.
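For a sense of the signal scale a piezoelectric polymer film can produce, here is an order-of-magnitude estimate using the standard piezoelectric relation V ≈ g33·σ·t. All numbers are generic PVDF-like values assumed for illustration; the article does not specify the film material or its coefficients.

```python
# Order-of-magnitude open-circuit voltage of a piezoelectric polymer film:
# V = g33 * sigma * t (voltage coefficient x applied stress x thickness).
# Generic PVDF-like assumptions, not measurements from the paper:
g33 = 0.33        # V*m/N, piezoelectric voltage coefficient (assumed)
sigma = 0.1e6     # Pa, stress from a gentle press (assumed)
t = 100e-6        # m, film thickness (assumed)

v_open_circuit = g33 * sigma * t
print(f"{v_open_circuit:.1f} V open-circuit")  # a few volts per press
```

The point of the estimate is that even modest stresses on a thin polymer film yield volt-scale signals, which is why such films are attractive for harvesting energy from body movement.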
For this reason, hot pressing has the potential to reduce the cost of fabrication through the direct application of fabric-based wearable devices to common garments. In addition to the conventional durability test of bending cycles, the newly introduced surface and interfacial cutting analysis system proved the high mechanical durability of the fabric-based wearable device by measuring the high interfacial adhesion strength between the fabric and the polymer film.

Professor Hong said the study lays a new foundation for the manufacturing process and analysis of wearable devices using fabrics and polymers. He added that his team was the first to use the surface and interfacial cutting analysis system (SAICAS) in the field of wearable electronics to test the mechanical properties of polymer-based wearable devices. SAICAS is more precise than conventional methods (the peel test, tape test, and microstretch test) because it qualitatively and quantitatively measures the adhesion strength.

Professor Hong explained, “This study could enable the commercialization of highly durable wearable devices based on the analysis of their interfacial adhesion strength. Our study lays a new foundation for the manufacturing process and analysis of other devices using fabrics and polymers. We look forward to fabric-based wearable electronics hitting the market very soon.”

The results of this study were registered as a domestic patent in Korea last year and published in Nano Energy this month. This study was conducted in collaboration with Professor Yong Min Lee in the Department of Energy Science and Engineering at DGIST, Professor Kwangsoo No in the Department of Materials Science and Engineering at KAIST, and Professor Seunghwa Ryu in the Department of Mechanical Engineering at KAIST.
This study was supported by the High-Risk High-Return Project and the Global Singularity Research Project at KAIST, the National Research Foundation, and the Ministry of Science and ICT in Korea.

< Figure 1. Fabrication process, structures, and output signals of a fabric-based wearable energy harvester. >
< Figure 2. Measurement of interfacial adhesion strength using SAICAS >

-Publication: Jaegyu Kim, Seoungwoo Byun, Sangryun Lee, Jeongjae Ryu, Seongwoo Cho, Chungik Oh, Hongjun Kim, Kwangsoo No, Seunghwa Ryu, Yong Min Lee, Seungbum Hong*, Nano Energy 75 (2020), 104992. https://doi.org/10.1016/j.nanoen.2020.104992

-Profile: Professor Seungbum Hong seungbum@kaist.ac.kr http://mii.kaist.ac.kr/ Department of Materials Science and Engineering KAIST
Deep Learning Helps Explore the Structural and Strategic Bases of Autism
Psychiatrists typically diagnose autism spectrum disorders (ASD) by observing a person’s behavior and by leaning on the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), widely considered the “bible” of mental health diagnosis. However, there are substantial differences among individuals on the spectrum, and a great deal remains unknown to science about the causes of autism, or even what autism is. As a result, an accurate diagnosis of ASD and a prognosis prediction for patients can be extremely difficult. But what if artificial intelligence (AI) could help?

Deep learning, a type of AI, deploys artificial neural networks inspired by the human brain to recognize patterns in a way that is akin to, and in some cases can surpass, human ability. The technique, or rather suite of techniques, has enjoyed remarkable success in recent years in fields as diverse as voice recognition, translation, autonomous vehicles, and drug discovery. A group of researchers from KAIST, in collaboration with the Yonsei University College of Medicine, has applied these deep learning techniques to autism diagnosis. Their findings were published on August 14 in the journal IEEE Access.

Magnetic resonance imaging (MRI) scans of the brains of people known to have autism have been used by researchers and clinicians to try to identify brain structures they believed were associated with ASD. These researchers have achieved considerable success in identifying abnormal grey and white matter volumes and irregularities in cerebral cortex activation and connections as being associated with the condition. These findings have subsequently been deployed in studies attempting more consistent diagnoses of patients than have been achieved via psychiatrist observations during counseling sessions.
While such studies have reported high levels of diagnostic accuracy, the number of participants in these studies has been small, often under 50, and diagnostic performance drops markedly when the methods are applied to larger sample sizes or to datasets that include people from a wide variety of populations and locations.

“There was something about what defines autism that human researchers and clinicians must have been overlooking,” said Keun-Ah Cheon, one of the two corresponding authors and a professor in the Department of Child and Adolescent Psychiatry at Severance Hospital of the Yonsei University College of Medicine. “And humans poring over thousands of MRI scans won’t be able to pick up on what we’ve been missing,” she continued. “But we thought AI might be able to.”

So the team applied five different categories of deep learning models to an open-source dataset of more than 1,000 MRI scans from the Autism Brain Imaging Data Exchange (ABIDE) initiative, which has collected brain imaging data from laboratories around the world, and to a smaller but higher-resolution MRI dataset (84 images) taken from the Child Psychiatric Clinic at Severance Hospital, Yonsei University College of Medicine. In both cases, the researchers used both structural MRIs (examining the anatomy of the brain) and functional MRIs (examining brain activity in different regions).

< Visualization of the logic of classification learned by the recurrent attention model (RAM). >

The models allowed the team to explore the structural bases of ASD brain region by brain region, focusing in particular on many structures below the cerebral cortex, including the basal ganglia, which are involved in motor function (movement) as well as learning and memory. Crucially, these specific types of deep learning models also offered up possible explanations of how the AI had come up with its rationale for these findings.
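To illustrate the general idea of learning a diagnostic classifier from regional brain features, here is a deliberately simplified sketch using synthetic data and plain logistic regression. The actual study used five families of deep models on structural and functional MRI; everything below, including the data, features, and labels, is synthetic and for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic toy data: per-subject measurements of a few brain regions
# (think regional volumes) as features; labels 1 = ASD, 0 = control.
# Entirely made up -- not ABIDE or Severance Hospital data.
n, d = 200, 8
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.3 * rng.normal(size=n) > 0).astype(float)

# Logistic regression by gradient descent: a deliberately simple
# stand-in for the five deep model families used in the study.
w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))     # predicted probability of ASD
    w -= 0.1 * (X.T @ (p - y)) / n         # gradient step on the log-loss

accuracy = ((1.0 / (1.0 + np.exp(-(X @ w))) > 0.5) == y).mean()
```

One property this simple model shares with the study's interpretable deep models: the learned weights `w` indicate how much each regional feature contributed to the classification, which is the kind of per-feature attribution that helps a clinician see *why* a prediction was made.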
“Understanding the way the AI has classified these brain structures and dynamics is extremely important,” said Sang Wan Lee, the other corresponding author and an associate professor at KAIST. “It’s no good if a doctor can tell a patient that the computer says they have autism, but not be able to say why the computer knows that.”

The deep learning models were also able to describe how much a particular aspect contributed to ASD, an analysis tool that can assist psychiatric physicians during the diagnosis process in identifying the severity of the autism. “Doctors should be able to use this to offer a personalized diagnosis for patients, including a prognosis of how the condition could develop,” Lee said.

“Artificial intelligence is not going to put psychiatrists out of a job,” he explained. “But using AI as a tool should enable doctors to better understand and diagnose complex disorders than they could do on their own.”

-Profile
Professor Sang Wan Lee
Department of Bio and Brain Engineering
Laboratory for Brain and Machine Intelligence
https://aibrain.kaist.ac.kr/
KAIST
Before Eyes Open, They Get Ready to See
- Spontaneous retinal waves can generate long-range horizontal connectivity in the visual cortex. -

A KAIST research team’s computational simulations demonstrated that the waves of spontaneous neural activity in the retinas of still-closed eyes in mammals develop long-range horizontal connections in the visual cortex during early developmental stages. This new finding, featured in the August 19 edition of the Journal of Neuroscience as a cover article, has resolved a long-standing puzzle in visual neuroscience regarding the early organization of functional architectures in the mammalian visual cortex before eye-opening, especially the long-range horizontal connectivity known as “feature-specific” circuitry.

To prepare an animal to see when its eyes open, neural circuits in the brain’s visual system must begin developing earlier. However, the proper development of many brain regions involved in vision generally requires sensory input through the eyes. In the primary visual cortex of higher mammalian taxa, cortical neurons with similar functional tuning to a visual feature are linked together by long-range horizontal circuits that play a crucial role in visual information processing. Surprisingly, these long-range horizontal connections in the primary visual cortex of higher mammals emerge before the onset of sensory experience, and the mechanism underlying this phenomenon has remained elusive.

To investigate this mechanism, a group of researchers led by Professor Se-Bum Paik from the Department of Bio and Brain Engineering at KAIST implemented computational simulations of early visual pathways using data obtained from the retinal circuits of young animals before eye-opening, including cats, monkeys, and mice.
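The core mechanism — a traveling wave co-activates neurons of similar tuning, and Hebbian plasticity then strengthens the horizontal connections between co-active neurons — can be caricatured in a few lines of code. This toy sketch is our simplification under stated assumptions (phase-coded tuning, threshold co-activation), not Professor Paik's actual simulation of ON/OFF retinal mosaics.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy cortex: each neuron has a preferred "feature" phase. A retinal
# wave sweeping through phase space co-activates neurons whose tuning
# is close to the current wavefront, and Hebbian plasticity strengthens
# horizontal connections between co-active (similarly tuned) neurons.
n = 60
tuning = rng.uniform(0, 2 * np.pi, n)   # preferred feature phase per neuron
W = np.zeros((n, n))                    # horizontal connectivity matrix

for wave_phase in np.linspace(0, 2 * np.pi, 200):
    # neurons near the wavefront's phase fire together
    activity = (np.cos(tuning - wave_phase) > 0.9).astype(float)
    W += np.outer(activity, activity)   # Hebbian co-activation rule

np.fill_diagonal(W, 0)

# check: similarly tuned pairs end up more strongly connected than
# dissimilarly tuned pairs (circular tuning difference per pair)
diff = np.abs(np.angle(np.exp(1j * (tuning[:, None] - tuning[None, :]))))
similar_strength = W[diff < 0.3].mean()
dissimilar_strength = W[diff > np.pi / 2].mean()
```

Replacing the ordered wave sweep with random activity patterns leaves no such tuning-dependent structure in `W`, which mirrors the paper's finding that equivalent random activity cannot induce the organization.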
From these simulations, the researchers found that spontaneous waves propagating across ON and OFF retinal mosaics can initialize the wiring of long-range horizontal connections by selectively co-activating cortical neurons of similar functional tuning, whereas equivalent random activity cannot induce such organization. The simulations also showed that the emerging long-range horizontal connections can induce patterned cortical activities that match the topography of the underlying functional maps, even in the salt-and-pepper type organizations observed in rodents. This result implies that the model developed by Professor Paik and his group can provide a universal principle for the developmental mechanism of long-range horizontal connections in both higher mammals and rodents. Professor Paik said, “Our model provides a deeper understanding of how the functional architectures in the visual cortex can originate from the spatial organization of the periphery, without sensory experience during early developmental periods.” He continued, “We believe that our findings will be of great interest to scientists working in a wide range of fields such as neuroscience, vision science, and developmental biology.” This work was supported by the National Research Foundation of Korea (NRF). Undergraduate student Jinwoo Kim participated in this research project and presented the findings as the lead author as part of the Undergraduate Research Participation (URP) Program at KAIST. < Figure 1. Computational simulation of retinal waves in model neural networks > < Figure 2. Spontaneous retinal wave and long-range horizontal connections > < Figure 3. Illustration of the retinal waves and their projection to the visual cortex leading to the development of long-range horizontal connections > < Image. Journal cover image: retinal waves in model neural networks > Figures and image credit: Professor Se-Bum Paik, KAIST. Image usage restrictions: News organizations may use or redistribute these figures and image, with proper attribution, as part of news coverage of this paper only. Publication: Jinwoo Kim, Min Song, and Se-Bum Paik. (2020). Spontaneous retinal waves generate long-range horizontal connectivity in visual cortex. Journal of Neuroscience. Available online at https://www.jneurosci.org/content/early/2020/07/17/JNEUROSCI.0649-20.2020 Profile: Se-Bum Paik Assistant Professor sbpaik＠kaist.ac.kr http://vs.kaist.ac.kr/ VSNN Laboratory Department of Bio and Brain Engineering Program of Brain and Cognitive Engineering http://kaist.ac.kr Korea Advanced Institute of Science and Technology (KAIST) Daejeon, Republic of Korea Profile: Jinwoo Kim Undergraduate Student bugkjw＠kaist.ac.kr Department of Bio and Brain Engineering, KAIST Profile: Min Song Ph.D. Candidate night＠kaist.ac.kr Program of Brain and Cognitive Engineering, KAIST (END)
Deep Learning-Based Cough Recognition Model Helps ..
The Center for Noise and Vibration Control at KAIST announced that their cough detection camera recognizes where coughing happens and visualizes the locations. The resulting cough recognition camera can track and record information about the person who coughed, their location, and the number of coughs on a real-time basis. Professor Yong-Hwa Park from the Department of Mechanical Engineering developed a deep learning-based cough recognition model to classify a coughing sound in real time. The cough event classification model is combined with a sound camera that visualizes the cough event and indicates its location in the video image. The research team achieved a best test accuracy of 87.4%. Professor Park said that it will be a useful medical tool during epidemics in public places such as schools, offices, and restaurants, and for constantly monitoring patients’ conditions in a hospital room. Fever and coughing are the most relevant respiratory disease symptoms, among which fever can be recognized remotely using thermal cameras. This new technology is expected to be very helpful for detecting epidemic transmissions in a non-contact way. To develop the cough recognition model, supervised learning was conducted with a convolutional neural network (CNN). The model performs binary classification with an input of a one-second sound profile feature, generating an output of either a cough event or something else. For training and evaluation, various datasets were collected from Audioset, DEMAND, ETSI, and TIMIT. Coughing and other sounds were extracted from Audioset, and the rest of the datasets were used as background noises for data augmentation so that the model could be generalized for various background noises in public places.
The dataset was augmented by mixing coughing sounds and other sounds from Audioset with background noises at a ratio of 0.15 to 0.75; the overall volume was then adjusted to 0.25 to 1.0 times the original to generalize the model for various distances. The training and evaluation datasets were constructed by dividing the augmented dataset 9:1, and the test dataset was recorded separately in a real office environment. In the optimization procedure of the network model, training was conducted with various combinations of five acoustic features, including the spectrogram, Mel-scaled spectrogram, and Mel-frequency cepstral coefficients, with seven optimizers. The performance of each combination was compared on the test dataset. The best test accuracy of 87.4% was achieved with the Mel-scaled spectrogram as the acoustic feature and ASGD as the optimizer. The trained cough recognition model was combined with a sound camera composed of a microphone array and a camera module. A beamforming process is applied to the collected acoustic data to determine the direction of the incoming sound source. The integrated cough recognition model determines whether the sound is a cough. If it is, the cough is visualized as a contour image with a ‘cough’ label at the location of the sound source in the video image. A pilot test of the cough recognition camera in an office environment showed that it successfully distinguishes cough events from other events even in a noisy environment. In addition, it can track the location of the person who coughed and count the number of coughs in real time. The performance will be improved further with additional training data obtained from other real environments such as hospitals and classrooms. Professor Park said, “In a pandemic situation like we are experiencing with COVID-19, a cough detection camera can contribute to the prevention and early detection of epidemics in public places.
Especially when applied to a hospital room, the patient's condition can be tracked 24 hours a day, supporting more accurate diagnoses while reducing the burden on the medical staff." This study was conducted in collaboration with SM Instruments Inc. < Figure 1. Architecture of the cough recognition model based on CNN. > < Figure 2. Examples of sound features used to train the cough recognition model. > < Figure 3. Cough detection camera and its signal processing block diagram. >
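The augmentation recipe described above (background noise mixed in at a ratio of 0.15 to 0.75, overall volume scaled to 0.25-1.0x, and a 9:1 train/evaluation split) can be sketched as follows. The sampling rate and the exact mixing formula are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
SR = 16_000  # assumed sampling rate for the one-second clips

def augment(cough: np.ndarray, noise: np.ndarray) -> np.ndarray:
    """Mix a one-second cough clip with background noise at a random
    ratio of 0.15-0.75, then scale the overall volume by 0.25-1.0x
    to mimic varying distances to the microphone."""
    ratio = rng.uniform(0.15, 0.75)
    gain = rng.uniform(0.25, 1.0)
    return gain * ((1 - ratio) * cough + ratio * noise)

# Toy clips standing in for Audioset coughs and DEMAND/ETSI/TIMIT noise
cough = rng.standard_normal(SR)
noise = rng.standard_normal(SR)

augmented = [augment(cough, noise) for _ in range(1000)]
split = int(0.9 * len(augmented))             # 9:1 train/evaluation split
train, evaluation = augmented[:split], augmented[split:]
print(len(train), len(evaluation))            # 900 100
```

In the actual pipeline, each augmented clip would then be converted into an acoustic feature (e.g., a Mel-scaled spectrogram) before being fed to the CNN classifier.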
‘SoundWear’ a Heads-Up Sound Augmentation Gadget H..
In this digital era, there has been growing concern that children spend most of their playtime watching TV, playing computer games, and staring at mobile phones in a ‘head-down’ posture, even outdoors. To counter such concerns, KAIST researchers designed a wearable bracelet that uses sound augmentation to leverage the benefits of play through digital technology. The research team also investigated how sound influences children’s play experiences in their physical, social, and imaginative aspects. Playing is a large part of an enjoyable and rewarding life, especially for children. Previously, a large part of children’s playtime took place outdoors, and playing outdoors has long been praised for its essential role in providing opportunities to perform physical activity, improve social skills, and boost imaginative thinking. Motivated by these concerns, a KAIST research team led by Professor Woohun Lee and his researcher Jiwoo Hong from the Department of Industrial Design made use of sound augmentation, which is beneficial for motivating playful experiences by facilitating imagination and enhancing social awareness with its ambient and omnidirectional characteristics. Despite these beneficial characteristics, only a few studies have explored sound interaction as a technology to augment outdoor play, due to its abstractness when conveying information in an open space outdoors. There is also a lack of empirical evidence regarding its effect on children's play experiences. Professor Lee’s team designed and implemented an original bracelet-type wearable device called SoundWear. This device uses non-speech sound as a core digital feature for children to broaden their imaginations and improvise their outdoor games.
< Figure 1: Four phases of the SoundWear user scenario: (A) exploration, (B) selection, (C) sonification, and (D) transmission > Children equipped with SoundWear were allowed to explore multiple sounds (i.e., everyday and instrumental sounds) on SoundPalette, pick a desired sound, generate the sound with a swinging movement, and transfer the sound between multiple devices for their outdoor play. Both the quantitative and qualitative results of a user study indicated that augmenting playtime with everyday sounds triggered children’s imagination and resulted in distinct play behaviors, whereas instrumental sounds were transparently integrated with existing outdoor games while fully preserving play benefits in physical, social, and imaginative ways. The team also found that the gestural interaction of SoundWear and the free sound choice on SoundPalette helped children to gain a sense of achievement and ownership toward sound. This led children to be physically and socially active while playing.
PhD candidate Hong said, “Our work can encourage the discussion on using digital technology that entails sound augmentation and gestural interactions for understanding and cultivating creative improvisations, social pretenses, and ownership of digital materials in digitally augmented play experiences.” Professor Lee also envisioned the findings being helpful to parents and educators, saying, “I hope the verified effect of digital technology on children’s play informs parents and educators to help them make more informed decisions and incorporate the playful and creative usage of new media, such as mobile phones and smart toys, for young children.” This research, titled “SoundWear: Effect of Non-speech Sound Augmentation on the Outdoor Play Experience of Children,” was presented at DIS 2020 (the ACM Conference on Designing Interactive Systems), which took place virtually in Eindhoven, Netherlands, from July 6 to 20. This work received an Honorable Mention Award for being in the top 5% of all the submissions to the conference. < Figure 2. Differences in social interaction, physical activity, and imaginative utterances under the conditions of baseline, everyday sound, and instrumental sound > Link to download the full-text paper: https://files.cargocollective.com/698535/disfp9072-hongA.pdf -Profile: Professor Woohun Lee woohun.lee＠kaist.ac.kr http://wonderlab.kaist.ac.kr Department of Industrial Design (ID) KAIST
Atomic Force Microscopy Reveals Nanoscale Dental E..
< Professor Seungbum Hong (left) and Dr. Chungik Oh (right) > KAIST researchers used atomic force microscopy to quantitatively evaluate how acidic and sugary drinks affect human tooth enamel at the nanoscale. This novel approach is useful for measuring the mechanical and morphological changes that occur over time during enamel erosion induced by beverages. Enamel is the hard, white substance that forms the outer part of a tooth. It is the hardest substance in the human body, even stronger than bone. Its resilient surface is 96 percent mineral, the highest percentage of any body tissue, making it durable and damage-resistant. The enamel acts as a barrier to protect the soft inner layers of the tooth, but can become susceptible to degradation by acids and sugars. Enamel erosion occurs when tooth enamel is overexposed through excessive consumption of acidic and sugary food and drinks. The loss of enamel, if left untreated, can lead to various tooth conditions including stains, fractures, sensitivity, and translucence. Once tooth enamel is damaged, it cannot be brought back. Therefore, thorough studies on how enamel erosion starts and develops, especially in the initial stages, are of high scientific and clinical relevance for dental health maintenance. A research team led by Professor Seungbum Hong from the Department of Materials Science and Engineering at KAIST reported a new method of applying atomic force microscopy (AFM) techniques to the nanoscale characterization of this early stage of enamel erosion. This study was introduced in the Journal of the Mechanical Behavior of Biomedical Materials (JMBBM) on June 29. AFM is a very high-resolution type of scanning probe microscopy (SPM), with demonstrated resolution on the order of fractions of a nanometer (nm), one billionth of a meter.
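Two of the quantities extracted from such scans are surface roughness and elastic modulus. Root-mean-square (RMS) roughness, a standard summary of an AFM height map, can be computed as in this minimal sketch, using synthetic height maps rather than data from the study:

```python
import numpy as np

def rms_roughness(height: np.ndarray) -> float:
    """RMS roughness of an AFM height map: the root-mean-square
    deviation of the surface from its mean plane."""
    dev = height - height.mean()
    return float(np.sqrt(np.mean(dev ** 2)))

rng = np.random.default_rng(1)
# Toy 256x256 height maps in nm; the "etched" surface is given
# five times larger fluctuations, echoing the reported roughening.
pristine = rng.normal(0.0, 1.0, (256, 256))
etched = rng.normal(0.0, 5.0, (256, 256))
print(f"pristine: {rms_roughness(pristine):.2f} nm RMS")
print(f"etched:   {rms_roughness(etched):.2f} nm RMS")
```

Real AFM data would also require plane subtraction and line-by-line flattening before computing roughness, which this sketch omits.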
AFM generates images by scanning a small cantilever over the surface of a sample, and can precisely measure the structure and mechanical properties of the sample, such as surface roughness and elastic modulus. The co-lead authors of the study, Dr. Panpan Li and Dr. Chungik Oh, chose three commercially available popular beverages, Coca-Cola®, Sprite®, and Minute Maid® orange juice, and immersed tooth enamel in these drinks over time to analyze their impact on human teeth and monitor the etching process of the tooth enamel. Five healthy human molars were obtained from volunteers between the ages of 20 and 35 who visited the KAIST Clinic. After extraction, the teeth were preserved in distilled water before the experiment. The drinks were purchased and opened right before the immersion experiment, and the team used AFM to measure the surface topography and elastic modulus map. The researchers observed that the surface roughness of the tooth enamel increased significantly as the immersion time increased, while the elastic modulus of the enamel surface decreased drastically. The enamel surface became five times rougher after being immersed in the beverages for 10 minutes, and the elastic modulus of the tooth enamel was five times lower after five minutes in the drinks. Additionally, the research team found preferential etching of scratched tooth enamel. Brushing your teeth too hard, as well as toothpastes with polishing particles that are advertised to remove dental biofilms, can cause scratches on the enamel surface, which can become preferential sites for etching, the study revealed. Professor Hong said, “Our study shows that AFM is a suitable technique to characterize variations in the morphology and mechanical properties of dental erosion quantitatively at the nanoscale level.” This work was supported by the National Research Foundation (NRF), the Ministry of Science and ICT (MSIT), and the KUSTAR-KAIST Institute of Korea. A dentist at the KAIST Clinic, Dr.
Suebean Cho, Dr. Sangmin Shin from the Smile Well Dental, and Professor Kack-Kyun Kim at the Seoul National University School of Dentistry also collaborated in this project. < Figure 1. Tooth sample preparation process for atomic force microscopy (a, b, c), and an atomic force microscopy probe image (right). > < Figure 2. Changes in surface roughness (top) and modulus of elasticity (bottom) of tooth enamel exposed to popular beverages imaged by atomic force microscopy. > Publication: Li, P., et al. (2020) ‘Nanoscale effects of beverages on enamel surface of human teeth: An atomic force microscopy study’. Journal of the Mechanical Behavior of Biomedical Materials (JMBBM), Volume 110. Article No. 103930. Available online at https://doi.org/10.1016/j.jmbbm.2020.103930 Profile: Seungbum Hong, Ph.D. Associate Professor seungbum＠kaist.ac.kr http://mii.kaist.ac.kr/ Materials Imaging and Integration (MII) Lab. Department of Materials Science and Engineering (MSE) Korea Advanced Institute of Science and Technology (KAIST) https://www.kaist.ac.kr Daejeon 34141, Korea (END)
Sulfur-Containing Polymer Generates High Refractiv..
Transparent polymer thin film with a refractive index exceeding 1.9 to serve as a new platform material for high-end optical device applications. Researchers have reported a novel technology for producing highly transparent, high refractive index polymer films via a one-step vapor deposition process. The sulfur-containing polymer (SCP) film produced by Professor Sung Gap Im’s research team at KAIST’s Department of Chemical and Biomolecular Engineering exhibits excellent environmental stability and chemical resistance, which is highly desirable for long-term optical device applications. Its high refractive index, exceeding 1.9 while fully transparent across the entire visible range, will help expand the applications of optoelectronic devices. The refractive index is the ratio of the speed of light in a vacuum to the phase velocity of light in a material, used as a measure of how much the path of light is bent when passing through a material. With the miniaturization of various optical parts used in mobile devices and imaging, demand has been rapidly growing for high refractive index transparent materials that induce more light refraction within a thin film. As polymers have outstanding physical properties and can be easily processed into various forms, they are widely used in applications such as plastic eyeglass lenses. However, very few polymers developed so far have a refractive index exceeding 1.75, and existing high refractive index polymers require costly materials and complicated manufacturing processes. Above all, core technologies for producing such materials have been dominated by Japanese companies, posing long-standing challenges for Korean manufacturers. Securing a stable supply of high-performance, high refractive index materials is crucial for the production of optical devices that are lighter, more affordable, and can be freely manipulated.
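The bending that the refractive index quantifies follows Snell's law, n₁ sin θ₁ = n₂ sin θ₂. A quick numerical illustration of why a higher index bends light more strongly; the n = 1.5 comparison is a typical value for a conventional optical polymer, not a figure from the study:

```python
import math

def refraction_angle(n1: float, n2: float, theta1_deg: float) -> float:
    """Snell's law: n1*sin(theta1) = n2*sin(theta2); returns theta2 in degrees."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    return math.degrees(math.asin(s))

# Light entering from air (n = 1.0) at 45 degrees:
for name, n in [("conventional polymer", 1.5), ("high-index SCP film", 1.9)]:
    theta2 = refraction_angle(1.0, n, 45.0)
    print(f"{name} (n = {n}): refracted to {theta2:.1f} degrees")
```

The n = 1.9 film bends the ray several degrees further toward the normal than the n = 1.5 polymer, which is why a higher-index material achieves the same refraction in a thinner optical element.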
The research team successfully manufactured a whole new polymer thin film material with a refractive index exceeding 1.9 and excellent transparency, using just a one-step chemical reaction. The SCP film showed outstanding optical transparency across the entire visible light region, presumably due to the uniformly dispersed, short-segment polysulfide chains, a distinct feature unachievable in polymerizations with molten sulfur. < Figure 1. A schematic illustration showing the co-polymerization of vaporized sulfur to synthesize the high refractive index thin film. > The team took advantage of the fact that elemental sulfur is easily sublimated, producing a high refractive index polymer by polymerizing the vaporized sulfur with a variety of substances. This method suppresses the formation of overly long S-S chains while achieving outstanding thermal stability at high sulfur concentrations, generating polymers that are non-crystalline and transparent across the entire visible spectrum. Owing to the characteristics of the vapor-phase process, the high refractive index thin film can be coated not just on silicon wafers or glass substrates, but on a wide range of textured surfaces as well. The team believes this thin film polymer is the first to achieve an ultrahigh refractive index exceeding 1.9. Professor Im said, “This high-performance polymer film can be created in a simple one-step manner, which is highly advantageous in the synthesis of SCPs with a high refractive index. This will serve as a platform material for future high-end optical device applications.” This study, conducted in collaboration with research teams from Seoul National University and Kyung Hee University, was reported in Science Advances.
(Title: One-Step Vapor-Phase Synthesis of Transparent High-Refractive Index Sulfur-Containing Polymers) This research was supported by the Ministry of Science and ICT’s Global Frontier Project (Center for Advanced Soft-Electronics), the Leading Research Center Support Program (Wearable Platform Materials Technology Center), and the Basic Science Research Program (Advanced Research Project).