
To assess a community’s risk of extreme weather, policymakers rely first on global climate models that can be run decades, and even centuries, forward in time, but only at a coarse resolution. These models might be used to gauge, for instance, future climate conditions for the northeastern US, but not specifically for Boston.

To estimate Boston’s future risk of extreme weather such as flooding, policymakers can combine a coarse model’s large-scale predictions with a finer-resolution model, tuned to estimate how often Boston is likely to experience damaging floods as the climate warms. But this risk analysis is only as accurate as the predictions from that first, coarser climate model.

“If you get those wrong for large-scale environments, then you miss everything in terms of what extreme events will look like at smaller scales, such as over individual cities,” says Themistoklis Sapsis, the William I Koch Professor and director of the Center for Ocean Engineering in MIT’s Department of Mechanical Engineering.

Prof Sapsis and his colleagues have now developed a method to 'correct' the predictions from coarse climate models. By combining machine learning with dynamical systems theory, the team’s approach 'nudges' a climate model’s simulations into more realistic patterns over large scales.

When paired with smaller-scale models to predict specific weather events such as tropical cyclones or floods, the team’s approach produced more accurate predictions for how often specific locations will experience those events over the next few decades, compared to predictions made without the correction scheme. 

Prof Sapsis says the new correction scheme is general in form and can be applied to any global climate model. Once corrected, the models can help to determine where and how often extreme weather will strike as global temperatures rise over the coming years. 

“Climate change will have an effect on every aspect of human life, and every type of life on the planet, from biodiversity to food security to the economy,” says Prof Sapsis.

“If we have capabilities to know accurately how extreme weather will change, especially over specific locations, it can make a lot of difference in terms of preparation and doing the right engineering to come up with solutions. This is the method that can open the way to do that.”

The team’s results appeared recently in the Journal of Advances in Modeling Earth Systems. The study’s MIT co-authors include postdoc Benedikt Barthel Sorensen and Alexis-Tzianni Charalampopoulos SM ’19, PhD ’23, with Shixuan Zhang, Bryce Harrop, and Ruby Leung of the Pacific Northwest National Laboratory in Washington state.

Enormous computing power

Today’s large-scale climate models simulate weather features such as the average temperature, humidity, and precipitation around the world, on a grid-by-grid basis. Running simulations of these models takes enormous computing power, and in order to simulate how weather features will interact and evolve over periods of decades or longer, models average out features every 100km or so.

“It’s a very heavy computation requiring supercomputers,” says Prof Sapsis. “But these models still do not resolve very important processes like clouds or storms, which occur over smaller scales of a kilometre or less.”

To improve the resolution of these coarse climate models, scientists have typically gone under the bonnet to try to fix a model’s underlying dynamical equations, which describe how phenomena in the atmosphere and oceans should physically interact.

“People have tried to dissect into climate model codes that have been developed over the last 20 to 30 years, which is a nightmare, because you can lose a lot of stability in your simulation,” says Prof Sapsis. “What we’re doing is a completely different approach, in that we’re not trying to correct the equations but instead correct the model’s output.”

The team’s new approach takes a model’s output, or simulation, and overlays an algorithm that nudges the simulation towards something that more closely represents real-world conditions.

The algorithm is based on a machine-learning scheme that takes in data, such as past information for temperature and humidity around the world, and learns associations within the data that represent fundamental dynamics among weather features. The algorithm then uses these learnt associations to correct a model’s predictions.
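In outline, such an output-correction scheme can be sketched in a few lines of code. The snippet below is a minimal illustration only – the team’s actual method uses a more sophisticated machine-learning model grounded in dynamical systems theory, and the data shapes, variable names, and the simple ridge-regression corrector here are all assumptions made for the sketch:

```python
import numpy as np
from sklearn.linear_model import Ridge  # simple stand-in for the paper's ML model

# Hypothetical training data: each row pairs one coarse-model snapshot of
# global fields (temperature, humidity, wind, flattened onto the model grid)
# with the corresponding observed (reanalysis) snapshot.
rng = np.random.default_rng(seed=0)
n_train, n_gridpoints = 8 * 365, 500          # e.g. 8 years of daily snapshots
coarse = rng.normal(size=(n_train, n_gridpoints))
observed = coarse + 0.3 * rng.normal(size=(n_train, n_gridpoints))

# Learn a correction operator that maps coarse-model output toward observations.
corrector = Ridge(alpha=1.0)
corrector.fit(coarse, observed)

# At prediction time, run the climate model forward as usual, then "nudge"
# each simulated snapshot through the learned correction before any
# finer-scale analysis (e.g. flood or cyclone frequency) is applied.
future_run = rng.normal(size=(365, n_gridpoints))  # one simulated year
corrected_run = corrector.predict(future_run)
```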

“What we’re doing is trying to correct dynamics, as in what an extreme weather feature, such as the wind speeds during a Hurricane Sandy event, will look like in the coarse model versus in reality,” says Prof Sapsis.

“The method learns dynamics, and dynamics are universal. Having the correct dynamics eventually leads to correct statistics, for example, frequency of rare extreme events.”

Climate correction

As a first test of their new approach, the team used the machine-learning scheme to correct simulations produced by the Energy Exascale Earth System Model (E3SM), a climate model run by the US Department of Energy that simulates climate patterns around the world at a resolution of 110km.

The researchers used eight years of past data for temperature, humidity, and wind speed to train their new algorithm, which learnt dynamical associations between the measured weather features and the E3SM model.

They then ran the climate model forward in time for about 36 years and applied the trained algorithm to the model’s simulations. They found that the corrected version produced climate patterns that more closely matched real-world observations from the last 36 years, which had not been used for training.

“We’re not talking about huge differences in absolute terms,” says Prof Sapsis. “An extreme event in the uncorrected simulation might be 105 degrees Fahrenheit, versus 115 degrees with our corrections. But for humans experiencing this, that is a big difference.”

When the team then paired the corrected coarse model with a specific, finer-resolution model of tropical cyclones, they found the approach accurately reproduced the frequency of extreme storms in specific locations around the world.

“We now have a coarse model that can get you the right frequency of events, for the present climate. It’s much more improved,” says Prof Sapsis.

“Once we correct the dynamics, this is a relevant correction, even when you have a different average global temperature, and it can be used for understanding how forest fires, flooding events, and heat waves will look in a future climate. Our ongoing work is focusing on analysing future climate scenarios.”

“The results are particularly impressive as the method shows promising results on E3SM, a state-of-the-art climate model,” says Pedram Hassanzadeh, an associate professor who leads the Climate Extremes Theory and Data group at the University of Chicago and was not involved with the study.

“It would be interesting to see what climate change projections this framework yields once future greenhouse-gas emission scenarios are incorporated.”

The algorithm helping to forecast the frequency of extreme weather

Chemical engineers have devised an efficient way to convert carbon dioxide to carbon monoxide, a chemical precursor that can be used to generate useful compounds such as ethanol and other fuels.

If scaled up for industrial use, this process could help to remove carbon dioxide from power plants and other sources, reducing the amount of greenhouse gases that are released into the atmosphere.

“This would allow you to take carbon dioxide from emissions or dissolved in the ocean, and convert it into profitable chemicals. It’s really a path forward for decarbonisation because we can take CO2, which is a greenhouse gas, and turn it into things that are useful for chemical manufacture,” says Ariel Furst, the Paul M Cook Career Development Assistant Professor of Chemical Engineering at MIT and the senior author of the study.

The new approach uses electricity to perform the chemical conversion, with help from a catalyst that is tethered to the electrode surface by strands of DNA. This DNA acts like Velcro to keep all the reaction components in close proximity, making the reaction much more efficient than if all the components were floating in solution.

Furst has started a company called Helix Carbon to further develop the technology. Former MIT postdoc Gang Fan is the lead author of the paper, which appeared recently in the Journal of the American Chemical Society Au. Other authors include Nathan Corbin PhD ’21, Minju Chung PhD ’23, former MIT postdocs Thomas Gill and Amruta Karbelkar, and Evan Moore ’23.

Breaking down CO2

Converting carbon dioxide into useful products requires first turning it into carbon monoxide. One way to do this is with electricity, but the amount of energy required makes the process prohibitively expensive.

To try to bring down those costs, researchers have tried using electrocatalysts, which can speed up the reaction and reduce the amount of energy that needs to be added to the system. One type of catalyst used for this reaction is a class of molecules known as porphyrins, which contain metals such as iron or cobalt and are similar in structure to the heme molecules that carry oxygen in blood. 
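For reference, the overall conversion being catalysed is the two-electron reduction of carbon dioxide to carbon monoxide, with the reduction of protons to hydrogen gas (mentioned later in the piece) as the main competing side reaction:

```latex
\mathrm{CO_2} + 2\mathrm{H}^+ + 2e^- \;\rightarrow\; \mathrm{CO} + \mathrm{H_2O} \quad \text{(desired product)}\\
2\mathrm{H}^+ + 2e^- \;\rightarrow\; \mathrm{H_2} \quad \text{(competing hydrogen evolution)}
```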

During this type of electrochemical reaction, carbon dioxide is dissolved in water within an electrochemical device, which contains an electrode that drives the reaction. The catalysts are also suspended in the solution. However, this setup isn’t very efficient because the carbon dioxide and the catalysts need to encounter each other at the electrode surface, which doesn’t happen very often.

To make the reaction occur more frequently, which would boost the efficiency of the electrochemical conversion, Prof Furst began working on ways to attach the catalysts to the surface of the electrode. DNA seemed to be the ideal choice for this application.

“DNA is relatively inexpensive, you can modify it chemically, and you can control the interaction between two strands by changing the sequences,” she says. “It’s like a sequence-specific Velcro that has very strong but reversible interactions that you can control.”

To attach single strands of DNA to a carbon electrode, the MIT researchers used two 'chemical handles', one on the DNA and one on the electrode. These handles can be snapped together, forming a permanent bond. A complementary DNA sequence is then attached to the porphyrin catalyst, so that when the catalyst is added to the solution, it will bind reversibly to the DNA that’s already attached to the electrode – just like Velcro.

Once this system is set up, the researchers apply a potential (or bias) to the electrode, and the catalyst uses this energy to convert carbon dioxide in the solution into carbon monoxide. The reaction also generates a small amount of hydrogen gas, from the water. After the catalysts wear out, they can be released from the surface by heating the system to break the reversible bonds between the two DNA strands, and replaced with new ones.

An efficient reaction

Using this approach, the researchers were able to boost the Faradaic efficiency of the reaction to 100%, meaning that all of the electrical charge passed into the system goes into the desired chemical reaction, with none wasted on side reactions. When the catalysts are not tethered by DNA, the Faradaic efficiency is only about 40%.
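Faradaic efficiency quantifies the fraction of the charge passed through the cell that ends up in the desired product; for the two-electron CO2-to-CO reduction it can be written as:

```latex
\mathrm{FE_{CO}} \;=\; \frac{z \, n_{\mathrm{CO}} \, F}{Q} \times 100\%
```

where z = 2 is the number of electrons per CO molecule, n_CO is the moles of CO produced, F ≈ 96,485 C/mol is Faraday’s constant, and Q is the total charge passed. An efficiency of 100% means every electron delivered to the electrode went into making CO.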

This technology could be scaled up for industrial use fairly easily, Furst says, because the carbon electrodes the researchers used are much less expensive than conventional metal electrodes. The catalysts are also inexpensive, as they don’t contain any precious metals, and only a small concentration of the catalyst is needed on the electrode surface.

By swapping in different catalysts, the researchers plan to try making other products such as methanol and ethanol using this approach. Helix Carbon, the company started by Furst, is also working on further developing the technology for potential commercial use.

Engineers come up with a new method of converting carbon dioxide into useful products

There is a staggeringly long list of things that can go wrong during the complex operation of an oil field.

One of the most common problems is spills of the salty brine that’s a toxic byproduct of pumping oil. Another is over- or under-pumping that can lead to machine failure and methane leaks. (The oil and gas industry is the largest industrial emitter of methane in the US.)

Then there are extreme weather events, which range from winter frosts to blazing heat, that can put equipment out of commission for months. One of the wildest problems Sebastien Mannai SM ’14, PhD ’18 has encountered is hogs that pop open oil tanks with their snouts to enjoy on-demand oil baths.

Mannai helps oil field owners detect and respond to these problems while optimising the operation of their machinery to prevent the issues from occurring in the first place. He is the founder and CEO of Amplified Industries, a company selling oil field monitoring and control tools that help make the industry more efficient and sustainable.

Real-time alerts when things go wrong

Amplified Industries’ sensors and analytics give oil well operators real-time alerts when things go wrong, allowing them to respond to issues before they become disasters.

“We’re able to find 99% of the issues affecting these machines, from mechanical failures to human errors, including issues happening thousands of feet underground,” says Mannai. “With our AI solution, operators can put the wells on autopilot, and the system automatically adjusts or shuts the well down as soon as there’s an issue.”

Amplified currently works with private companies in states from Texas to Wyoming that own and operate as many as 3,000 wells. Such companies make up the majority of oil well operators in the US and operate both new and older, more failure-prone equipment that has been in the field for decades.

Such operators also have a harder time responding to environmental regulations like the Environmental Protection Agency’s new methane guidelines, which seek to dramatically reduce emissions of the potent greenhouse gas in the industry over the next few years.

“These operators don’t want to be releasing methane,” says Mannai. “Additionally, when gas gets into the pumping equipment, it leads to premature failures. We can detect gas and slow the pump down to prevent it. It’s the best of both worlds: the operators benefit because their machines are working better, saving them money while also giving them a smaller environmental footprint with fewer spills and methane leaks.”

Cutting-edge technology

Mannai learnt about the cutting-edge technology used in the space and aviation industries as he pursued his master’s degree at the Gas Turbine Laboratory in MIT’s Department of Aeronautics and Astronautics. Then, during his PhD at MIT, he worked with an oil services company and discovered the oil and gas industry was still relying on decades-old technologies and equipment.

“When I first travelled to the field, I could not believe how old-school the actual operations were,” says Mannai, who has previously worked in rocket engine and turbine factories. “A lot of oil wells have to be adjusted by feel and rules of thumb. The operators have been let down by industrial automation and data companies.”

Monitoring oil wells for problems typically requires someone in a pickup truck to drive hundreds of miles between wells looking for obvious issues, says Mannai. The sensors that are deployed are expensive and difficult to replace. Over time, they’re also often damaged in the field to the point of being unusable, forcing technicians to make educated guesses about the status of each well.

“We often see that equipment unplugged or programmed incorrectly because it is incredibly overcomplicated and ill-designed for the reality of the field,” says Mannai. “Workers on the ground often have to rip it out and bypass the control system to pump by hand. That’s how you end up with so many spills and wells pumping at suboptimal levels.”

To build a better oil field monitoring system, Mannai received support from the MIT Sandbox Innovation Fund and the Venture Mentoring Service (VMS).

He also participated in the delta V summer accelerator at the Martin Trust Center for MIT Entrepreneurship, the fuse program during IAP, and the MIT I-Corps program, and took a number of classes at the MIT Sloan School of Management. In 2019, Amplified Industries – which operated under the name Acoustic Wells until recently – won the MIT $100K Entrepreneurship competition.

“My approach was to sign up to every possible entrepreneurship-related programme and to leverage every MIT resource I possibly could,” says Mannai. “MIT was amazing for us.”

Mannai officially launched the company after his postdoc at MIT, and Amplified raised its first round of funding in early 2020. That year, Amplified’s small team moved into the Greentown Labs startup incubator in Somerville.

Huge challenge

Mannai says building the company’s battery-powered, low-cost sensors was a huge challenge. The sensors run machine-learning inference models and their batteries last for 10 years. They also had to be able to handle extreme conditions, from the scorching hot New Mexico desert to the swamps of Louisiana and the freezing cold winters in North Dakota.

“We build very rugged, resilient hardware; it’s a must in those environments,” says Mannai. “But it’s also very simple to deploy, so if a device does break, it’s like changing a lightbulb: we ship them a new one and it takes them a couple of minutes to swap it out.”

Customers equip each well with four or five of Amplified’s sensors, which attach to the well’s cables and pipes to measure variables like tension, pressure, and amps.

Vast amounts of data are then sent to Amplified’s cloud and processed by their analytics engine. Signal processing methods and AI models are used to diagnose problems and control the equipment in real-time, while generating notifications for the operators when something goes wrong. Operators can then remotely adjust the well or shut it down.
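The alerting loop described above can be illustrated with a toy sketch. This is not Amplified’s system – the signal names, thresholds, and the commented-out shutdown hook are invented for the example, and a production system would learn per-well baselines from historical data rather than hard-coding limits:

```python
from dataclasses import dataclass

@dataclass
class WellReading:
    well_id: str
    tension: float   # rod/cable load
    pressure: float  # line pressure
    amps: float      # motor current

# Invented operating windows, purely for illustration.
LIMITS = {
    "tension": (5.0, 50.0),
    "pressure": (10.0, 300.0),
    "amps": (2.0, 40.0),
}

def diagnose(reading: WellReading) -> list[str]:
    """Return alert messages for any signal outside its operating window."""
    alerts = []
    for name, (low, high) in LIMITS.items():
        value = getattr(reading, name)
        if not low <= value <= high:
            alerts.append(f"{reading.well_id}: {name}={value} outside [{low}, {high}]")
    return alerts

def handle(reading: WellReading) -> None:
    alerts = diagnose(reading)
    for alert in alerts:
        print("ALERT:", alert)              # in practice: notify the operator
    # if alerts: shut_down(reading.well_id) # hypothetical remote-control hook
```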

“That’s where AI is important, because if you just record everything and put it in a giant dashboard, you create way more work for people,” says Mannai. “The critical part is the ability to process and understand this newly recorded data and make it readily usable in the real world.”

Amplified’s dashboard is customised for different people in the company, so field technicians can quickly respond to problems and managers or owners can get a high-level view of how everything is running.

Mannai says often when Amplified’s sensors are installed, they’ll immediately start detecting problems that were unknown to engineers and technicians in the field. To date, Amplified has prevented hundreds of thousands of gallons worth of brine water spills, which are particularly damaging to surrounding vegetation because of their high salt and sulphur content.

Preventing those spills is only part of Amplified’s positive environmental impact; the company is now turning its attention toward the detection of methane leaks.

Helping a changing industry

The EPA’s proposed new Waste Emissions Charge for oil and gas companies would start at $900 per metric ton of reported methane emissions in 2024 and increase to $1,500 per metric ton in 2026 and beyond.
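For a concrete (and purely illustrative) sense of scale: an operator reporting 100 metric tons of chargeable methane emissions in a year would owe 100 × $900 = $90,000 under the 2024 rate, rising to 100 × $1,500 = $150,000 per year from 2026.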

Mannai says Amplified is well positioned to help companies comply with the new rules. Its equipment has already shown it can detect various kinds of leaks across the field, purely based on analytics of existing data.

“Detecting methane leaks typically requires someone to walk around every valve and piece of piping with a thermal camera or sniffer, but these operators often have thousands of valves and hundreds of miles of pipes,” says Mannai.

“What we see in the field is that a lot of times people don’t know where the pipes are because oil wells change owners so frequently, or they will miss an intermittent leak.”

Ultimately, Mannai believes a strong data backend and modernised sensing equipment will become the backbone of the industry, and are a necessary prerequisite both to improving its efficiency and to cleaning it up.

“We’re selling a service that ensures your equipment is working optimally all the time,” says Mannai. “That means a lot fewer fines from the EPA, but it also means better-performing equipment. There’s a mindset change happening across the industry, and we’re helping make that transition as easy and affordable as possible.”

Slick stuff: Shining a light on oil fields to make them more sustainable

Without a map, it can be just about impossible to know not just where you are, but where you’re going, and that’s especially true when it comes to materials properties.

For decades, scientists have understood that while bulk materials behave in certain ways, those rules can break down for materials at the micro- and nano-scales, and often in surprising ways.

One of those surprises was the finding that, for some materials, applying even modest strains to a material – a concept known as elastic strain engineering – can dramatically improve certain properties, provided those strains stay elastic and do not relax away through plasticity, fracture, or phase transformations. Micro- and nanoscale materials are especially good at holding applied strains in the elastic form.

Precisely how to apply those elastic strains (or equivalently, residual stress) to achieve certain material properties, however, had been less clear – until recently.

Using a combination of first principles calculations and machine learning, a team of MIT researchers has developed the first-ever map of how to tune crystalline materials to produce specific thermal and electronic properties.

Led by Ju Li, the Battelle Energy Alliance Professor in Nuclear Engineering and professor of materials science and engineering, the team described a framework for understanding precisely how changing the elastic strains on a material can fine-tune properties like thermal and electrical conductivity. The work is described in an open-access paper published in PNAS.

“For the first time, by using machine learning, we’ve been able to delineate the complete six-dimensional boundary of ideal strength, which is the upper limit to elastic strain engineering, and create a map for these electronic and phononic properties,” says Prof Li. “We can now use this approach to explore many other materials. Traditionally, people create new materials by changing the chemistry.”

“For example, with a ternary alloy, you can change the percentage of two elements, so you have two degrees of freedom,” he continues. “What we’ve shown is that diamond, with just one element, is equivalent to a six-component alloy, because you have six degrees of elastic strain freedom you can tune independently.”

Small strains, big material benefits

The paper builds on a foundation laid as far back as the 1980s, when researchers first discovered that the performance of semiconductor materials doubled when a small elastic strain – a mere 1% – was applied to the material.

While that discovery was quickly commercialised by the semiconductor industry and today is used to increase the performance of microchips in everything from laptops to cellphones, that level of strain is very small compared to what we can achieve now, says Subra Suresh, the Vannevar Bush Professor of Engineering Emeritus.

In a 2018 Science paper, Prof Suresh, Ming Dao, and colleagues demonstrated that 1% strain was just the tip of the iceberg.

In that study, they showed for the first time that diamond nanoneedles could withstand elastic strains of as much as 9% and still return to their original state. Several groups have since independently confirmed that microscale diamond can indeed deform elastically by approximately 7% in tension, reversibly.

“Once we showed we could bend nanoscale diamonds and create strains on the order of 9% or 10%, the question was, what do you do with it?” says Prof Suresh.

“It turns out diamond is a very good semiconductor material … and one of our questions was, if we can mechanically strain diamond, can we reduce the band gap from 5.6 electron-volts to two or three? Or can we get it all the way down to zero, where it begins to conduct like a metal?”

To answer those questions, the team first turned to machine learning in an effort to get a more precise picture of exactly how strain altered material properties.

“Strain is a big space,” says Prof Li. “You can have tensile strain, you can have shear strain in multiple directions, so it’s a six-dimensional space, and the phonon band is three-dimensional, so in total there are nine tunable parameters. So, we’re using machine learning, for the first time, to create a complete map for navigating the electronic and phononic properties and identify the boundaries.”
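The six dimensions Prof Li refers to are the independent components of the symmetric 3×3 strain tensor – three normal strains and three shear strains – often written in Voigt notation as a single six-component vector:

```latex
\boldsymbol{\varepsilon} =
\begin{pmatrix}
\varepsilon_{11} & \varepsilon_{12} & \varepsilon_{13}\\
\varepsilon_{12} & \varepsilon_{22} & \varepsilon_{23}\\
\varepsilon_{13} & \varepsilon_{23} & \varepsilon_{33}
\end{pmatrix}
\;\longmapsto\;
\left(\varepsilon_{11},\, \varepsilon_{22},\, \varepsilon_{33},\, 2\varepsilon_{23},\, 2\varepsilon_{13},\, 2\varepsilon_{12}\right)
```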

Armed with that map, the team subsequently demonstrated how strain could be used to dramatically alter diamond’s semiconductor properties.

“Diamond is like the Mt Everest of electronic materials,” says Prof Li, “because it has very high thermal conductivity, very high dielectric breakdown strengths, a very big carrier mobility. What we have shown is we can controllably squish Mt Everest down … so we show that by strain engineering you can either improve diamond’s thermal conductivity by a factor of two, or make it much worse by a factor of 20.”

New map, new applications

Going forward, the findings could be used to explore a host of exotic material properties, says Prof Li, from dramatically reduced thermal conductivity to superconductivity.

“Experimentally, these properties are already accessible with nanoneedles and even microbridges,” he says. “And we have seen exotic properties, like reducing diamond’s [thermal conductivity] to only a few hundred watts per metre-kelvin. Recently, people have shown that you can produce room-temperature superconductors with hydrides if you squeeze them to a few hundred gigapascals, so we have found all kinds of exotic behaviour once we have the map.”

The results could also influence the design of next-generation computer chips capable of running much faster and cooler than today’s processors, as well as quantum sensors and communication devices. As the semiconductor manufacturing industry moves to denser and denser architectures, Prof Suresh says the ability to tune a material’s thermal conductivity will be particularly important for heat dissipation.

While the paper could inform the design of future generations of microchips, Zhe Shi, a postdoc in Prof Li’s lab and first author of the paper, says more work will be needed before those chips find their way into the average laptop or cellphone.

“We know that 1% strain can give you an order of magnitude increase in the clock speed of your CPU,” says Shi. “There are a lot of manufacturing and device problems that need to be solved in order for this to become realistic, but I think it’s definitely a great start. It’s an exciting beginning to what could lead to significant strides in technology.”

A first: Complete map for elastic strain engineering unveiled

Researchers at Tyndall National Institute (UCC) led by Professor Peter O’Brien have been awarded funding by the National Science Foundation in the US to diversify and strengthen the supply chain for manufacturing and packaging of semiconductor devices. 

The FUTUR-IC project is led by researchers at MIT and includes Tyndall, SEMI (an organisation to advance the global semiconductor supply chain), Hewlett Packard Enterprise, Intel, International Electronics Manufacturing Initiative (iNEMI), and the Rochester Institute of Technology. 

Prof Peter O’Brien giving a tutorial on advanced packaging at MIT in January 2024. His research team at Tyndall will partner in the FUTUR-IC project, which is funded by the National Science Foundation.

Trillion dollar market

The market for microelectronics in the next decade is predicted to be on the order of a trillion dollars, but most of the manufacturing for the industry occurs only in limited geographical pockets around the world. 

“The current microchip manufacturing supply chain, which includes production, distribution, and use, is neither scalable nor sustainable and cannot continue. We must innovate our way out of this future crisis,” said Anu Agrawal, principal research scientist at the Materials Research Laboratory, MIT.

FUTUR-IC is a reference to the future of integrated circuits, or chips, through a global alliance for sustainable microchip manufacturing. FUTUR-IC brings together stakeholders from industry, academia, and government to co-optimise technology, ecology, and workforce across three dimensions. 

Tyndall researchers will focus their efforts on technology and workforce vectors, bringing their unique expertise in developing advanced packaging technologies and educating the future workforce. 

“We have established a deep and impactful partnership with our collaborators at MIT over the past years. FUTUR-IC is a new strand in that partnership, enabling us to deliver meaningful global impacts and strengthen research collaboration between Europe and the US,” said Prof O’Brien.

The MIT-led team is one of six that received awards addressing sustainable materials for global challenges through phase two of the NSF Convergence Accelerator programme. Launched in 2019, the programme targets solutions to especially compelling challenges at an accelerated pace by incorporating a multidisciplinary research approach.

Tyndall teams up with MIT to develop sustainable semiconductor chips production processes

Undergrads, take note: The lessons you learn in those intro classes could be the key to making your next big discovery. At least, that’s been the case for MIT’s Jeehwan Kim.

A recently tenured faculty member in MIT’s departments of Mechanical Engineering and Materials Science and Engineering, Kim has made numerous discoveries about the nanostructure of materials and is funnelling them directly into the advancement of next-generation electronics.

His research aims to push electronics past the inherent limits of silicon – a material that has reliably powered transistors and most other electronic elements but is reaching a performance limit as more computing power is packed into ever smaller devices. 

'For me, free thinking – to compose music, innovate something totally new – is the most important thing. And the people at MIT are very talented and curious of all the things,' says Jeehwan Kim. Photo: Jake Belcher.

Today, Kim and his students are exploring materials, devices, and systems that could take over where silicon leaves off. Kim is applying his insights to design next-generation devices, including low-power, high-performance transistors and memory devices, artificial intelligence chips, ultra-high-definition micro-LED displays, and flexible electronic 'skin'. Ultimately, he envisions such beyond-silicon devices could be built into supercomputers small enough to fit in your pocket.

The innovations that have come out of his research are recorded in more than 200 issued US patents and 70 research papers – an extensive list that he and his students continue to grow.

Kim credits many of his breakthroughs to the fundamentals he learnt in his university days. In fact, he has carried his college textbooks and notes with him through every move. Today, he keeps the undergraduate notes – written in light, meticulous graphite and ink – on the shelf nearest his desk, close at hand. He references them in his own class lectures and presentations, and when brainstorming research solutions.

“These textbooks are all in my brain now,” says Kim. “I’ve learnt that if you completely understand the fundamentals, you can solve any problem.”

Jeehwan Kim aims to push electronics past silicon, whose performance faces limits as more computing power is packed into ever-smaller devices. His group is exploring materials, devices, and systems that could take over where silicon leaves off. Photo: Jake Belcher.

Fundamental shift

Kim wasn’t always a model student. Growing up in Seoul, South Korea, he was fixed on a musical career. He had a passion for singing and was bored by most other high school subjects.

“It was very monotonic,” says Kim. “My motivation for high school subjects was very low.”

After graduating from secondary school, he enrolled in a materials science programme at Hongik University, where he was lucky to meet professors who had graduated from MIT and who later motivated him to study in the United States. But Kim spent his first year there trying to make it as a musician. He wrote and sang songs that he recorded and sent to promoters, and went to multiple auditions. But after a year, he was faced with no call-backs, and a hard question.

“What should I do? It was a crisis to me,” says Kim.

In his second year, he decided to give materials science a go. When he sat in on his first class, he was surprised to find that the subject – the structure and behaviour of materials at the atomic scale – made him want to learn more.

“My first year, my GPA was almost zero because I didn’t attend class, and was going to be kicked out,” says Kim. “Then from my second year on, I really loved every single subject in materials science. People who saw me in the library were surprised: ‘what are you doing here, without a guitar?’ I must have read these textbooks more than 10 times, and felt I really understood everything fundamental.”

Kim is applying his insights to design next-generation devices, including low-power, high-performance transistors and memory devices, artificial intelligence chips, ultra-high-definition micro-LED displays, and flexible electronic 'skin'. Ultimately, he envisions such beyond-silicon devices could be built into supercomputers small enough to fit in your pocket. Photo: Jake Belcher.

Back to basics

He took this new-found passion to Seoul National University, where he enrolled in the materials science master’s programme and learnt to apply the ideas he absorbed to hands-on research problems. Metallurgy was a dominant field at the time, and Kim was assigned to experiment with high-temperature alloys – mixing and melting metallic powders to create materials that could be used in high-performance engines.

After completing his master’s, Kim wanted to continue with a PhD, overseas. But to do so, he first had to serve in the military. He spent the next two-and-a-half years in the Korean air force, helping to maintain and refuel aircraft, and inventory their parts. All the while, he prepared applications to graduate schools abroad.

In 2003, after completing his service, he headed to the United States, having been accepted with a fellowship to the materials science graduate programme at the University of California at Los Angeles.

“When I came out of the airplane and went to the dorm for the first day, people were drinking Corona on the balcony, playing music, and there was beautiful weather, and I thought, this is where I’m supposed to be!” Kim recalls.

For his PhD, he began to dive into the microscopic world of electronic materials, seeking ways to manipulate them to make faster electronics. The subject was a focus for his adviser, who previously worked at Bell Labs, where many computing innovations originated at the time.

“A lot of the papers I was reading were from Bell Labs, and IBM TJ Watson, and I was so impressed, and thought: I really want to be a scientist there. That was my dream,” says Kim.

During his PhD programme, he reached out to a scientist at IBM whose name kept coming up in the papers Kim was reading. In his initial letter, Kim wrote with a question about his own PhD work, which tackled a hard industry problem: how to stretch, or 'strain', silicon to minimise defects that would occur as more transistors are packed on a chip. 

The query opened a dialogue, and Kim eventually inquired about, and was accepted to, an internship at the IBM TJ Watson Research Center, just outside New York City. Soon after he arrived, his manager pitched him a challenge: he might be hired full time if he could solve a new, harder problem, having to do with replacing silicon.

Germanium as a possible successor to silicon

At the time, the electronics industry was looking to germanium as a possible successor to silicon. The material can conduct electrons at even smaller scales, which would enable germanium to be made into even tinier transistors, for faster, smaller, and more powerful devices. But there was no reliable way for germanium to be 'doped' – an essential process that replaces some of a material’s atoms with another type of atom in a way that controls how electrons flow through the material.

“My manager told me he didn’t expect me to solve this. But I really wanted the job,” says Kim. “So day and night, I thought, how to solve this? And I always went back to the textbooks.”

Those textbooks reminded him of a fundamental rule: replacing one atom with another would work well if both atoms were of similar size. This revelation triggered an idea. Perhaps germanium could be doped with a combination of two different atoms with an average atomic size that is similar to germanium’s.
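That size-matching rule can be stated compactly: if the radii of two co-dopant species A and B straddle that of the host atom, choosing them so that their average matches germanium’s atomic radius keeps the lattice distortion small. The formula below is a schematic of the reasoning, not a quantitative model:

```latex
\bar{r} \;=\; \frac{r_A + r_B}{2} \;\approx\; r_{\mathrm{Ge}}
```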

“I came up with this idea, and right after, IBM showed that it worked. I was so amazed,” says Kim. “From that point, research became my passion. I did it because it was just so fun. Singing is not so different from performing research.”

As promised, he was hired as a postdoc and soon after, promoted to research staff member – a title he carried, literally, with pride.

“I was feeling so happy to be there,” says Kim. “I even wore my IBM badge to restaurants, and everywhere I went.”

Throughout his time at IBM, he learnt to focus on research that directly impacts everyday human life, and how to apply the fundamentals to develop next-generation products.

“IBM really raised me up as an engineer who can identify the problems in an industry and find creative solutions to tackle the challenges,” he says.

Cycle of life

And yet, Kim felt he could do more. He was working on boundary-pushing research at one of the leading innovation hubs in the country, where 'out-of-the-box' thinking was encouraged, and experimentally tested. But he wanted to explore beyond the company’s research portfolio, and also, find a way to pursue research not just as a profession but as a passion.

“My experience taught me that you can lead a very happy life as an engineer or scientist if your research becomes your hobby,” says Kim. “I wanted to teach this cycle – of happiness, research, and passion – to young people and help PhD students develop like artists or singers.”

In 2015, he packed his bags for MIT, where he accepted a junior faculty position in the Department of Mechanical Engineering. His first impressions upon arriving at the institute? “Freedom,” says Kim. “For me, free thinking – to compose music, innovate something totally new – is the most important thing. And the people at MIT are very talented and curious of all the things.”

Since he’s put down roots on campus, he has built up a highly productive research group, focused on fabricating ultra-thin, stackable, high-performance electronic materials and devices, which Kim envisions could be used to build hybrid electronic systems as small as a fingernail and as powerful as a supercomputer. He credits the group’s many innovations to the more than 40 students, postdocs, and research scientists who have contributed to his lab.

“I hope this is where they can learn that research can be an art,” says Kim. “To students especially, I hope they see that, if they enjoy what they do, then they can be whatever they want to be.”

Getting better electronics by pushing material boundaries
