Engineers TV

As a member of Engineers Ireland you have access to Engineers TV, which contains presentations, technical lectures, courses and seminar recordings as well as events, awards footage and interviews.

Flexible sensors and an artificial intelligence model tell deformable robots how their bodies are positioned in a 3D environment.

For the first time, MIT researchers have enabled a soft robotic arm to understand its configuration in 3D space, by leveraging only motion and position data from its own 'sensorised' skin.

Soft robots constructed from highly compliant materials, similar to those found in living organisms, are being championed as safer and more adaptable, resilient, and bioinspired alternatives to traditional rigid robots.

But giving autonomous control to these deformable robots is a monumental task because they can move in a virtually infinite number of directions at any given moment. That makes it difficult to train planning and control models that drive automation.

Large systems of multiple motion-capture cameras


Traditional methods to achieve autonomous control use large systems of multiple motion-capture cameras that provide the robots with feedback about 3D movement and positions. But those are impractical for soft robots in real-world applications.

In a paper being published in the journal 'IEEE Robotics and Automation Letters', the researchers describe a system of soft sensors that cover a robot’s body to provide 'proprioception' — meaning awareness of motion and position of its body.

That feedback runs into a novel deep-learning model that sifts through the noise and captures clear signals to estimate the robot’s 3D configuration.

The researchers validated their system on a soft robotic arm resembling an elephant trunk that can predict its own position as it autonomously swings around and extends.

The sensors can be fabricated using off-the-shelf materials, meaning any lab can develop its own systems, says Ryan Truby, a postdoc in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) who is co-first author on the paper along with CSAIL postdoc Cosimo Della Santina.

“We’re sensorising soft robots to get feedback for control from sensors, not vision systems, using a very easy, rapid method for fabrication,” he says.

“We want to use these soft robotic trunks, for instance, to orient and control themselves automatically, to pick things up and interact with the world. This is a first step towards that type of more sophisticated automated control.”

One future aim is to help make artificial limbs that can more dexterously handle and manipulate objects in the environment.

“Think of your own body: you can close your eyes and reconstruct the world based on feedback from your skin,” says co-author Daniela Rus, director of CSAIL and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science. “We want to design those same capabilities for soft robots.”

Shaping soft sensors


A longtime goal in soft robotics has been fully integrated body sensors. Traditional rigid sensors detract from a soft robot body’s natural compliance, complicate its design and fabrication, and can cause various mechanical failures.

Soft-material-based sensors are a more suitable alternative, but require specialised materials and methods for their design, making them difficult for many robotics labs to fabricate and integrate in soft robots.

While working in his CSAIL lab one day looking for inspiration for sensor materials, Truby made an interesting connection. “I found these sheets of conductive materials used for electromagnetic interference shielding, that you can buy anywhere in rolls,” he says.

These materials have 'piezoresistive' properties, meaning they change in electrical resistance when strained. Truby realised they could make effective soft sensors if they were placed on certain spots on the trunk.

As the sensor deforms in response to the trunk’s stretching and compressing, its electrical resistance is converted to a specific output voltage. The voltage is then used as a signal correlating to that movement.
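The resistance-to-voltage conversion described above can be sketched numerically. This is a minimal illustration rather than the paper's actual readout circuit: it assumes a simple voltage divider, and the base resistance, gauge factor, and supply voltage are invented values.

```python
# Sketch: converting a piezoresistive sensor's strain response into a
# voltage signal via a voltage divider. All component values (R0, gauge
# factor, supply voltage) are illustrative assumptions, not figures
# from the MIT paper.

def sensor_resistance(strain, r0=1000.0, gauge_factor=2.0):
    """Resistance of a piezoresistive element under strain.

    For small deformations, resistance changes roughly linearly with
    strain: dR/R0 = GF * strain.
    """
    return r0 * (1.0 + gauge_factor * strain)

def divider_voltage(r_sensor, r_fixed=1000.0, v_supply=5.0):
    """Voltage divider output: V_out = V_supply * R_sensor / (R_sensor + R_fixed)."""
    return v_supply * r_sensor / (r_sensor + r_fixed)

# As the trunk stretches, strain rises, resistance rises, and the
# divider's output voltage rises with it -- the signal correlated to motion.
for strain in (0.0, 0.05, 0.10):
    v = divider_voltage(sensor_resistance(strain))
    print(f"strain={strain:.2f}  V_out={v:.3f} V")
```

Reading such a divider with a microcontroller's analogue input is a common way to digitise this kind of signal.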

But the material didn’t stretch much, which would limit its use for soft robotics. Inspired by kirigami — a variation of origami that includes making cuts in a material — Truby designed and laser-cut rectangular strips of conductive silicone sheets into various patterns, such as rows of tiny holes or crisscrossing slices like a chain link fence.

That made them far more flexible, stretchable, “and beautiful to look at”, says Truby.

The researchers’ robotic trunk comprises three segments, each with four fluidic actuators (12 total) used to move the arm.

They fused one sensor over each segment, with each sensor covering and gathering data from one embedded actuator in the soft robot.

They used 'plasma bonding', a technique that energises a surface of a material to make it bond to another material.

It takes roughly a couple of hours to shape dozens of sensors that can be bonded to the soft robots using a handheld plasma-bonding device.

'Learning' configurations


As hypothesised, the sensors did capture the trunk’s general movement. But they were really noisy. “Essentially, they’re non-ideal sensors in many ways,” says Truby.

“But that’s just a common fact of making sensors from soft conductive materials. Higher-performing and more reliable sensors require specialised tools that most robotics labs do not have.”

To estimate the soft robot’s configuration using only the sensors, the researchers built a deep neural network to do most of the heavy lifting, by sifting through the noise to capture meaningful feedback signals.

The researchers developed a new model to kinematically describe the soft robot’s shape that vastly reduces the number of variables needed for their model to process.

In experiments, the researchers had the trunk swing around and extend itself in random configurations over approximately an hour and a half. They used the traditional motion-capture system for ground truth data.

In training, the model analysed data from its sensors to predict a configuration, and compared its predictions to that ground truth data which was being collected simultaneously.

In doing so, the model 'learns' to map signal patterns from its sensors to real-world configurations. Results indicated that, for certain steadier configurations, the robot’s estimated shape matched the ground truth.
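The training loop described above can be sketched in miniature. The paper uses a deep neural network; the stand-in below swaps in a plain linear least-squares fit on synthetic data purely to illustrate the idea of regressing noisy sensor readings onto motion-capture ground truth. All dimensions and noise levels are invented for the example.

```python
import numpy as np

# Simplified stand-in for the training described above: learn a mapping
# from noisy sensor readings to a low-dimensional robot configuration,
# using motion-capture readings as ground truth. The paper uses a deep
# neural network; this toy version uses a linear least-squares fit on
# synthetic data. All sizes and noise levels are invented.

rng = np.random.default_rng(0)

n_samples, n_sensors, n_config = 500, 12, 6        # hypothetical sizes
true_map = rng.normal(size=(n_sensors, n_config))  # unknown sensor response

config_truth = rng.normal(size=(n_samples, n_config))  # motion-capture ground truth
sensors = config_truth @ true_map.T                    # idealised sensor readings
sensors += 0.1 * rng.normal(size=sensors.shape)        # measurement noise

# "Training": regress the ground-truth configuration on the sensor signals.
weights, *_ = np.linalg.lstsq(sensors, config_truth, rcond=None)

# Despite the noise, the learned mapping recovers the configuration well.
pred = sensors @ weights
rmse = np.sqrt(np.mean((pred - config_truth) ** 2))
print(f"configuration RMSE: {rmse:.3f}")
```

A deep network replaces the single linear map with stacked non-linear layers, but the supervision signal, paired sensor readings and ground-truth configurations, is the same.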

Next, the researchers aim to explore new sensor designs for improved sensitivity and to develop new models and deep-learning methods to reduce the required training for every new soft robot. They also hope to refine the system to better capture the robot’s full dynamic motions.

Currently, the neural network and sensor skin are not sensitive enough to capture subtle motions or dynamic movements.

But, for now, this is an important first step for learning-based approaches to soft robotic control, Truby says: “Like our soft robots, living systems don’t have to be totally precise. Humans are not precise machines, compared to our rigid robotic counterparts, and we do just fine.”

'Sensorised' skin helps soft robots find their bearings

MIT frameworks are helping the US Forest Service find solutions to fire.

Forge new relationship with fire


From record-setting fires in the western United States to the devastating and still-blazing bushfires in Australia, it is increasingly apparent that society must forge a new relationship with fire.

Factors that include changing climate, expanding human development, and accumulating fuels mean new approaches are needed, and many experts are calling for increasing resiliency by suppressing fewer fires and accelerating forest restoration.

Yes, you read that correctly — suppressing fewer fires.

Scientific evidence that’s been accumulating for decades points to the ways that suppressing fire leads to unhealthy forests.

Research by the US Forest Service, an agency of the US Department of Agriculture, shows how short-term gain from suppression can condition the landscape for future, even-greater fires burning in extreme conditions, and MIT Sloan Executive Education has had a hand in translating that thinking into action.

Matthew Thompson is a research forester at the US Forest Service, where he works in the Human Dimensions Program at Rocky Mountain Research Station in Colorado and focuses on the human dimension of natural resource problems.

An engineer by training, with a PhD from Oregon State University in forest engineering, Thompson has worked with the agency for about a decade. Core to his work is understanding how best to catalyse desired changes in fire manager behaviour in terms of individual fire events and over time.

He believes that changes in fire manager decisions regarding response strategies and tactics will be necessary to change fire outcomes. To that end, Thompson enrols in executive education courses whenever time allows to draw insights from the latest thinking at the intersection of management and science.

“Risk management and management science evolves,” says Thompson. “I’ve found the best way to stay up to date is through continuing and executive education.”

Bringing analytics to fire management


Thompson’s professional development pursuits were in part propelled by earning the Presidential Early Career Award for Scientists and Engineers in 2016.

The funding he received as a result of this award enabled him to enrol in courses over several years, including Understanding and Solving Complex Business Problems and Analytics Management: Business Lessons from the Sports Data Revolution at MIT Sloan Executive Education.

“I think it’s fair to say both courses were very influential in terms of my thinking and career trajectory,” says Thompson, who attended the programmes with his colleague David Calkin so that they could apply the concepts they learned in the classroom directly to their work and share a common language for their problem solving.

“Some will argue that fire is ultimately unique, but in these courses we encountered so many analogies to what we see in the fire management world,” says Thompson, “including other disciplines that tend to emphasise reactively fixing problems over prevention, for example.

"The more we talked with managers and took courses, the more we encountered common problems and proactive solutions.”

Thompson’s attendance in Analytics Management in 2018 was part of an effort to bring data-driven decision-making to wildland fire management.

“Ben Shields introduced an analytics management framework and gave us opportunities to work through that framework hands-on,” says Thompson.

“We were able to more clearly define the challenges of bringing data analytics to bear in our respective organisations as a result of this exercise.

"Dave and I were there to solve a problem, and the opportunity to work collaboratively in this environment with the support of faculty was invaluable.”

Thompson adds that he and his colleague went out for beers after class in Cambridge, Massachusetts, and kept working through the framework, assembling notes and strategies that would prove useful in the next phase of their work. Shields is a senior lecturer in managerial communication at the Sloan School of Management.

Thompson, Calkin, and their colleagues recently authored a paper, 'Risk Management and Analytics in Wildfire Response', in which they demonstrate the real-world application of analytics to support response decisions and organisational learning.

Not discounting culture of experiential learning


“When it comes to fire management, we are not trying to discount the culture of experiential learning, which is of course critical,” says Thompson.

“But in Ben Shields’ analytics course, we saw a direct parallel between the objections raised by sports team scouts, 20 years ago, to statistics and the fire managers who doubt how statistics can help.

"In sports, as in fire, decision-making was a job largely reserved for the trained eyes and gut instincts of grizzled veterans.”

Thompson and his colleagues argue for a stronger adoption of data-driven decisions in fire management that they colloquially refer to as “'Moneyball' for fire,” referencing the 2003 book by Michael Lewis chronicling the Oakland Athletics’ use of advanced statistics.

In the paper they introduce core analytics concepts, cite Ben Shields’ analytics management framework, and make observations on implementing an analytics agenda within organisations.

The Forest Service is now developing Risk Management 101 courses for fire officers and designing new risk management doctrine to guide the agency. During fire season, they are experimenting with on-call analytics teams to assist incident commanders.

Fire managers can use data to help prioritise where they put resources and determine where suppression efforts are likeliest to succeed. They are making steady progress towards designing the ideal response to a fire depending on when and where it happens.

A systems thinking approach to wildfires


Thompson and Calkin previously co-authored several papers on the application of enterprise risk management and systems thinking to wildfire management after completing John Sterman’s two-day systems thinking course. Sterman is the Jay W Forrester Professor of Management at the MIT Sloan School of Management.

In 'Rethinking the Wildland Fire Management System', the authors explain how systems thinking can help the fire management community be proactive rather than reactive by more fully characterising the environment in which fire management decisions are made and anticipating factors that may lead to compromised decision-making.

“In the fire community, people have been moving towards this system perspective over time, but a lot of the analysis and perspectives are looking at the social and ecological perspective — thinning forests, reducing flammable material, engaging homeowners and communities, etc.

"Many of those social-ecological perspectives have failed to account for the fire management system itself, when in fact fire management might be where the most change needs to come from.

"Part of that is because, in the western US, we’ve been so good at suppression we’ve created forests that look entirely different from what they used to look like, and we can’t treat our way out of the problem.

"We need to opportunistically manage ignitions in different ways, and use analytics to predict where on the landscape we can create curated fires that are going to be the right size, within pre-identified boundaries.

"There are success stories coming out of the southwest, for example, where they have been able to create and control fires that have bought the landscape a restoration treatment that is good for the next 10-20 years. This can be a much more effective solution than logging, for example.”

The sky’s the limit for fire management solutions


Thompson is optimistic.

“The engineer in me is most interested in the technical challenges, and there is huge opportunity for growth in that area. With advances in remote sensing technology, AI, ML, the sky is the limit.

"For example, recent legislation was passed to equip suppression resources with tracking technology. With this temporal, high-resolution data, we can expand the universe of what we know about current efforts to manage fire and the degree of efficiency of these efforts. This will be a vast improvement on the currently limited amount of credible data we have.”

Thompson is also emboldened by the amount and quality of research and resource sharing among the Forest Service, other agencies, and countries like Australia.

“And we are also very fortunate to have just hired Nicholas McCarthy, a wildfire data scientist from Australia who is an expert on bushfire thunderstorms and AI applications to wildfire.”

Thompson says that people often refer to fire as a wicked problem with no one solution. And while he has confidence in data and technology, he realises it’s not a silver bullet. He also cautions that one can’t decouple the social from the technical.

“Climate change is clearly a problem that is human-driven, and behaviour change is central to mitigation. The adoption of risk management approaches and data analytics is also predicated on a cultural shift.

"If tomorrow we could track every firefighter and review suppression effectiveness, it might change nothing. You need leaders to value the role of data-driven decision-making and hold others accountable to it.”

Rethinking fire with data analytics and systems design

Researchers develop a more robust machine-vision architecture by studying how human vision responds to changing viewpoints of objects.

Suppose you look briefly from a few feet away at a person you have never met before. Step back a few paces and look again. Will you be able to recognise her face? “Yes, of course,” you probably are thinking.

If this is true, it would mean that our visual system, having seen a single image of an object such as a specific face, recognises it robustly despite changes to the object’s position and scale, for example.

Vanilla deep networks


On the other hand, we know that state-of-the-art classifiers, such as vanilla deep networks, will fail this simple test.

In order to recognise a specific face under a range of transformations, neural networks need to be trained with many examples of the face under the different conditions.

In other words, they can achieve invariance through memorisation, but cannot do it if only one image is available.

Thus, understanding how human vision can pull off this remarkable feat is relevant for engineers aiming to improve their existing classifiers.

It also is important for neuroscientists modelling the primate visual system with deep networks. In particular, it is possible that the invariance with one-shot learning exhibited by biological vision requires a rather different computational strategy than that of deep networks.

A paper by MIT PhD candidate in electrical engineering and computer science Yena Han and colleagues in 'Nature Scientific Reports' entitled 'Scale and translation-invariance for novel objects in human vision' discusses how they study this phenomenon more carefully to create novel biologically inspired networks.

Vast implications for engineering of vision systems


"Humans can learn from very few examples, unlike deep networks. This is a huge difference with vast implications for engineering of vision systems and for understanding how human vision really works," states co-author Tomaso Poggio — director of the Center for Brains, Minds and Machines (CBMM) and the Eugene McDermott Professor of Brain and Cognitive Sciences at MIT.

"A key reason for this difference is the relative invariance of the primate visual system to scale, shift, and other transformations.

"Strangely, this has been mostly neglected in the AI community, in part because the psychophysical data were so far less than clear-cut. Han's work has now established solid measurements of basic invariances of human vision.”

To differentiate invariance arising from intrinsic computation from invariance gained through experience and memorisation, the new study measured the range of invariance in one-shot learning.

A one-shot learning task was performed by presenting Korean letter stimuli to human subjects who were unfamiliar with the language.

These letters were initially presented a single time under one specific condition and tested at different scales or positions than the original condition.

The first experimental result is that — just as you guessed — humans showed significant scale-invariant recognition after only a single exposure to these novel objects. The second result is that the range of position-invariance is limited, depending on the size and placement of objects.
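The logic of the one-shot test can be sketched in code. The toy example below is our illustration, not the study's protocol: a 'letter' learned once at one size is tested at double the size, comparing a naive fixed-window match against a comparison that first rescales to a canonical size (a crude built-in scale-invariance). The stimulus and sizes are invented.

```python
import numpy as np

# Toy illustration of the one-shot scale test -- our sketch, not the
# study's protocol. A "letter" seen once at one size is tested at double
# the size. A naive fixed-window comparison degrades across the size
# change, while rescaling to a canonical size first preserves recognition.

def make_letter(size):
    """Toy stimulus: an 'L' shape with stroke width proportional to size."""
    img = np.zeros((size, size))
    w = size // 8
    img[:, :w] = 1.0   # vertical stroke
    img[-w:, :] = 1.0  # horizontal stroke
    return img

def rescale(img, size):
    """Nearest-neighbour resize to a size x size canonical grid."""
    idx = np.arange(size) * img.shape[0] // size
    return img[np.ix_(idx, idx)]

def similarity(a, b):
    """Normalised correlation between equal-sized images (1.0 = identical)."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

learned = make_letter(16)   # single exposure at one scale
test = make_letter(32)      # same letter, twice as large

naive = similarity(learned, test[:16, :16])        # fixed-window comparison
matched = similarity(learned, rescale(test, 16))   # scale-normalised comparison
print(f"naive: {naive:.2f}  scale-normalised: {matched:.2f}")
```

The scale-normalised comparison scores far higher than the fixed-window one, mirroring the paper's conclusion that invariant recognition benefits from built-in scale normalisation rather than raw template matching.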

Next, Han and her colleagues performed a comparable experiment in deep neural networks designed to reproduce this human performance.

The results suggest that to explain invariant recognition of objects by humans, neural network models should explicitly incorporate built-in scale-invariance.

Limited position-invariance of human vision


In addition, the limited position-invariance of human vision is better replicated in the network by having the model neurons’ receptive fields increase in size the farther they are from the centre of the visual field.

This architecture is different from commonly used neural network models, where an image is processed under uniform resolution with the same shared filters.
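The eccentricity-dependent design can be sketched as follows. This is our illustration, not the authors' implementation: it assumes receptive-field size grows linearly with distance from the centre of the visual field, and the base size and growth rate are invented.

```python
import numpy as np

# Sketch of eccentricity-dependent receptive fields -- an illustration
# of the design described above, not the authors' model. It assumes
# receptive-field size grows linearly with distance from the centre of
# the visual field; the base size and slope are invented values.

def receptive_field_size(eccentricity, base=1.0, slope=0.5):
    """Receptive-field diameter as a linear function of eccentricity."""
    return base + slope * eccentricity

def sample_positions(n_rings=5, per_ring=8):
    """Model-neuron centres: one at fixation plus concentric rings."""
    positions = [(0.0, 0.0)]
    for ring in range(1, n_rings + 1):
        for k in range(per_ring):
            theta = 2 * np.pi * k / per_ring
            positions.append((ring * np.cos(theta), ring * np.sin(theta)))
    return np.array(positions)

pos = sample_positions()
ecc = np.hypot(pos[:, 0], pos[:, 1])   # distance from centre of visual field
rf = receptive_field_size(ecc)

# Central neurons see fine detail; peripheral neurons pool over larger
# regions, giving coarser but more position-tolerant coverage.
print(f"central RF: {rf.min():.2f}  peripheral RF: {rf.max():.2f}")
```

Contrast this with a standard convolutional network, where one shared filter size is applied at uniform resolution across the whole image.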

“Our work provides a new understanding of the brain representation of objects under different viewpoints. It also has implications for AI, as the results provide new insights into what is a good architectural design for deep neural networks,” remarks Han, CBMM researcher and lead author of the study.

Bridging the gap between human and machine vision

Study suggests non-invasive spectroscopy could be used to monitor blood glucose levels.

Patients with diabetes have to test their blood sugar levels several times a day to make sure they are not getting too high or too low.

Studies have shown that more than half of patients don’t test often enough, in part because of the pain and inconvenience of the needle prick.

One possible alternative is Raman spectroscopy, a non-invasive technique that reveals the chemical composition of tissue, such as skin, by shining near-infrared light on it.

MIT scientists have now taken an important step towards making this technique practical for patient use: they have shown that they can use it to directly measure glucose concentrations through the skin.

Until now, glucose levels had to be calculated indirectly, based on a comparison between Raman signals and a reference measurement of blood glucose levels.

While more work is needed to develop the technology into a user-friendly device, this advance shows that a Raman-based sensor for continuous glucose monitoring could be feasible, says Peter So, a professor of biological and mechanical engineering at MIT.

“Today, diabetes is a global epidemic,” says So, who is one of the senior authors of the study and the director of MIT’s Laser Biomedical Research Center.

“If there were a good method for continuous glucose monitoring, one could potentially think about developing better management of the disease.”

Sung Hyun Nam of the Samsung Advanced Institute of Technology in Seoul is also a senior author of the study, which appears today in Science Advances. Jeon Woong Kang, a research scientist at MIT, and Yun Sang Park, a research staff member at Samsung Advanced Institute of Technology, are the lead authors of the paper.

Seeing through the skin


Raman spectroscopy can be used to identify the chemical composition of tissue by analysing how near-infrared light is scattered, or deflected, as it encounters different kinds of molecules.

MIT’s Laser Biomedical Research Center has been working on Raman-spectroscopy-based glucose sensors for more than 20 years.

The near-infrared laser beam used for Raman spectroscopy can only penetrate a few millimetres into tissue, so one key advance was to devise a way to correlate glucose measurements from the fluid that bathes skin cells, known as interstitial fluid, to blood glucose levels.

However, another key obstacle remained: the signal produced by glucose tends to get drowned out by the many other tissue components found in skin.

“When you are measuring the signal from the tissue, most of the strong signals are coming from solid components such as proteins, lipids, and collagen.

"Glucose is a tiny, tiny amount out of the total signal. Because of that, so far we could not actually see the glucose signal from the measured signal,” says Kang.

To work around that, the MIT team has developed ways to calculate glucose levels indirectly by comparing Raman data from skin samples with glucose concentrations in blood samples taken at the same time.

However, this approach requires frequent calibration, and the predictions can be thrown off by movement of the subject or changes in environmental conditions.
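The indirect calibration workflow can be sketched on synthetic data. Note this is a simplified stand-in: real Raman calibration typically uses methods such as partial least squares over many spectral channels, whereas the toy below fits ordinary least squares, and every signal strength is invented.

```python
import numpy as np

# Sketch of the indirect calibration workflow on synthetic data -- a
# simplified stand-in for the real pipeline. Real Raman calibration
# typically uses partial least squares; this toy fits ordinary least
# squares, and all signal strengths are invented.

rng = np.random.default_rng(1)

n_channels = 50        # hypothetical spectral channels
n_pairs = 100          # paired spectrum / blood-sample measurements

# Glucose contributes a weak, fixed spectral fingerprint buried under a
# much stronger background from proteins, lipids, and collagen.
glucose_signature = rng.normal(size=n_channels)
background = 5.0 * rng.normal(size=(n_pairs, n_channels))

glucose_ref = rng.uniform(70, 180, size=n_pairs)   # reference values, mg/dL
spectra = background + 0.01 * np.outer(glucose_ref, glucose_signature)

# Calibration: regress reference glucose on the spectra (with intercept).
X = np.column_stack([spectra, np.ones(n_pairs)])
coef, *_ = np.linalg.lstsq(X, glucose_ref, rcond=None)

# Predictions then come from spectra alone -- no needle prick -- but the
# model must be recalibrated whenever subject or conditions drift.
predicted = X @ coef
rmse = np.sqrt(np.mean((predicted - glucose_ref) ** 2))
print(f"calibration-set RMSE: {rmse:.1f} mg/dL")
```

The recalibration burden is exactly what the direct-measurement approach described next is designed to reduce.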

For the new study, the researchers developed a new approach that lets them see the glucose signal directly. The novel aspect of their technique is that they shine near-infrared light onto the skin at about a 60-degree angle, but collect the resulting Raman signal from a fibre perpendicular to the skin.

This results in a stronger overall signal because the glucose Raman signal can be collected while unwanted reflected signal from the skin surface is filtered out.

The researchers tested the system in pigs and found that after 10 to 15 minutes of calibration, they could get accurate glucose readings for up to an hour. They verified the readings by comparing them to glucose measurements taken from blood samples.

“This is the first time that we directly observed the glucose signal from the tissue in a transdermal way, without going through a lot of advanced computation and signal extraction,” says So.

Continuous monitoring


Further development of the technology is needed before the Raman-based system could be used to monitor people with diabetes, say the researchers.

They now plan to work on shrinking the device, which is about the size of a desktop printer, so that it could be portable, in hopes of testing such a device on diabetic patients.

“You might have a device at home or a device in your office that you could put your finger on once in a while, or you might have a probe that you hold to your skin,” says So. “That’s what we’re thinking about in the shorter term.”

In the long term, they hope to create a wearable monitor that could offer continuous glucose measurements.

Other MIT authors of the paper include former postdoc Surya Pratap Singh, who is now an assistant professor at the Indian Institute of Technology; Wonjun Choi, a former visiting scientist from the Institute for Basic Science in South Korea; research technical staff member Luis Galindo; and principal research scientist Ramachandra Dasari. Hojun Chang, Woochang Lee, and Jongae Park of the Samsung Advanced Institute of Technology are also authors of the study.

Researchers hope to make needle pricks for diabetics a thing of the past

Proposed bridge would have been the world’s longest at the time, and new analysis shows it would have worked.

In 1502 AD, Sultan Bayezid II sent out the Renaissance equivalent of a government RFP (request for proposals), seeking a design for a bridge to connect Istanbul with its neighbour city, Galata.

Leonardo da Vinci, already a well-known artist and inventor, came up with a novel bridge design that he described in a letter to the Sultan and sketched in a small drawing in his notebook.

He didn’t get the job. But 500 years after his death, the design for what would have been the world’s longest bridge span of its time intrigued researchers at MIT, who wondered how thought-through Leonardo’s concept was and whether it really would have worked.

Spoiler alert: Leonardo knew what he was doing.

To study the question, recent graduate student Karly Bast MEng ’19, working with professor of architecture and of civil and environmental engineering John Ochsendorf and undergraduate Michelle Xie, tackled the problem by analysing the available documents, the possible materials and construction methods that were available at the time, and the geological conditions at the proposed site, which was a river estuary called the Golden Horn.

Ultimately, the team built a detailed scale model to test the structure’s ability to stand and support weight, and even to withstand settlement of its foundations.

The results of the study were presented in Barcelona recently at the conference of the International Association for Shell and Spatial Structures. They will also be featured in a talk at Draper in Cambridge, Massachusetts, and in an episode of the PBS programme NOVA, set to be shown on November 13.

A flattened arch


In Leonardo’s time, most masonry bridge supports were made in the form of conventional semicircular arches, which would have required 10 or more piers along the span to support such a long bridge.

Leonardo’s bridge concept was dramatically different — a flattened arch that would be tall enough to allow a sailboat to pass underneath with its mast in place, as illustrated in his sketch, but that would cross the wide span with a single enormous arch.

The bridge would have been about 280 metres long (though Leonardo himself was using a different measurement system, since the metric system was still a few centuries off), making it the longest span in the world at that time, had it been built. “It’s incredibly ambitious,” says Bast. “It was about 10 times longer than typical bridges of that time.”

The design also featured an unusual way of stabilising the span against lateral motions — something that has resulted in the collapse of many bridges over the centuries.

To combat that, Leonardo proposed abutments that splayed outward on either side, like a standing subway rider widening her stance to balance in a swaying car.

In his notebooks and letter to the Sultan, Leonardo provided no details about the materials that would be used or the method of construction. Bast and the team analysed the materials available at the time and concluded that the bridge could only have been made of stone, because wood or brick could not have carried the loads of such a long span.

And they concluded that, as in classical masonry bridges such as those built by the Romans, the bridge would stand on its own under the force of gravity, without any fasteners or mortar to hold the stone together.

To prove that, they had to build a model and demonstrate its stability. That required figuring out how to slice up the complex shape into individual blocks that could be assembled into the final structure.

While the full-scale bridge would have been made up of thousands of stone blocks, they decided on a design with 126 blocks for their model, which was built at a scale of 1 to 500 (making it about 32 inches long). Then the individual blocks were made on a 3D printer, taking about six hours per block to produce.

“It was time-consuming, but 3D printing allowed us to accurately recreate this very complex geometry,” says Bast.

Testing the design’s feasibility


This is not the first attempt to reproduce Leonardo’s basic bridge design in physical form. Others, including a pedestrian bridge in Norway, have been inspired by his design, but in that case modern materials — steel and concrete — were used, so that construction provided no information about the practicality of Leonardo’s engineering.

“That was not a test to see if his design would work with the technology from his time,” says Bast. But because of the nature of gravity-supported masonry, the faithful scale model, albeit made of a different material, would provide such a test.

“It’s all held together by compression only,” she says. “We wanted to really show that the forces are all being transferred within the structure,” which is key to ensuring that the bridge would stand solidly and not topple.

As with actual masonry arch bridge construction, the 'stones' were supported by a scaffolding structure as they were assembled, and only after they were all in place could the scaffolding be removed to allow the structure to support itself. Then it came time to insert the final piece in the structure, the keystone at the very top of the arch.

“When we put it in, we had to squeeze it in. That was the critical moment when we first put the bridge together. I had a lot of doubts” as to whether it would all work, says Bast. But “when I put the keystone in, I thought, ‘this is going to work’. And after that, we took the scaffolding out, and it stood up.”

“It’s the power of geometry” that makes it work, she says. “This is a strong concept. It was well thought out.” Score another victory for Leonardo.

“Was this sketch just freehanded, something he did in 50 seconds, or is it something he really sat down and thought deeply about? It’s difficult to know” from the available historical material, she says.

But proving the effectiveness of the design suggests that Leonardo really did work it out carefully and thoughtfully, she says. “He knew how the physical world works.”

He also apparently understood that the region was prone to earthquakes, and incorporated features such as the spread footings that would provide extra stability.

To test the structure’s resilience, Bast and Xie built the bridge on two movable platforms and then moved one away from the other to simulate the foundation movements that might result from weak soil.

The bridge withstood the horizontal movement, deforming only slightly, until the platforms were pulled far enough apart that the structure collapsed completely.

The design may not have practical implications for modern bridge designers, Bast says, since today’s materials and methods provide many more options for lighter, stronger designs.

But the proof of the feasibility of this design sheds more light on what ambitious construction projects might have been possible using only the materials and methods of the early Renaissance. And it once again underscores the brilliance of one of the world’s most prolific inventors.

It also demonstrates, Bast says, that “you don’t necessarily need fancy technology to come up with the best ideas”.

Engineers put Leonardo da Vinci’s bridge design to the test

Model from the Computer Science and Artificial Intelligence Laboratory identifies 'serial hijackers' of internet IP addresses.

Hijacking IP addresses is an increasingly popular form of cyber attack. This is done for a range of reasons, from sending spam and malware to stealing Bitcoin. It is estimated that in 2017 alone, routing incidents such as IP hijacks affected more than 10 per cent of all the world’s routing domains.

There have been major incidents at Amazon and Google and even in nation states — a study last year suggested that a Chinese telecom company used the approach to gather intelligence on western countries by rerouting their internet traffic through China.

Existing efforts to detect IP hijacks tend to look at specific cases when they’re already in process. But what if we could predict these incidents in advance by tracing things back to the hijackers themselves?

That’s the idea behind a new machine-learning system developed by researchers at MIT and the University of California at San Diego (UCSD).

By illuminating some of the common qualities of what they call 'serial hijackers', the team trained their system to be able to identify roughly 800 suspicious networks — and found that some of them had been hijacking IP addresses for years.

“Network operators normally have to handle such incidents reactively and on a case-by-case basis, making it easy for cybercriminals to continue to thrive,” says lead author Cecilia Testart, a graduate student at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) who will present the paper at the ACM Internet Measurement Conference in Amsterdam on October 23.

“This is a key first step in being able to shed light on serial hijackers’ behaviour and proactively defend against their attacks.”

The paper is a collaboration between CSAIL and the Center for Applied Internet Data Analysis at UCSD’s Supercomputer Center. The paper was written by Testart and David Clark, an MIT senior research scientist, alongside MIT postdoc Philipp Richter and data scientist Alistair King as well as research scientist Alberto Dainotti of UCSD.

The nature of nearby networks


IP hijackers exploit a key shortcoming in the Border Gateway Protocol (BGP), a routing mechanism that essentially allows different parts of the internet to talk to each other. Through BGP, networks exchange routing information so that data packets find their way to the correct destination.

In a BGP hijack, a malicious actor convinces nearby networks that the best path to reach a specific IP address is through their network. That’s unfortunately not very hard to do, since BGP itself doesn’t have any security procedures for validating that a message is actually coming from the place it says it’s coming from.

“It’s like a game of Telephone, where you know who your nearest neighbour is, but you don’t know the neighbours five or 10 nodes away,” says Testart.
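A minimal sketch of why the deception works (illustrative, not the researchers' code): BGP routers forward traffic along the most specific matching prefix they have heard, and the protocol itself never checks whether the announcer actually owns that address space. A hijacker who announces a more specific sub-prefix of someone else's block therefore attracts the traffic:

```python
# Toy model of BGP longest-prefix-match route selection, with no
# origin validation -- the gap that hijackers exploit.
import ipaddress

def best_route(routes, ip):
    """Pick the announcement with the longest (most specific) matching prefix."""
    addr = ipaddress.ip_address(ip)
    matches = [r for r in routes if addr in ipaddress.ip_network(r["prefix"])]
    return max(matches, key=lambda r: ipaddress.ip_network(r["prefix"]).prefixlen)

routes = [
    {"prefix": "203.0.113.0/24", "origin": "AS-legitimate"},
    # The hijacker announces a more specific half of the same block:
    {"prefix": "203.0.113.0/25", "origin": "AS-hijacker"},
]

print(best_route(routes, "203.0.113.10")["origin"])   # hijacker's route wins
print(best_route(routes, "203.0.113.200")["origin"])  # outside the /25, still legitimate
```

Nothing in this selection step asks who is entitled to announce 203.0.113.0/25, which is exactly the shortcoming the article describes.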

In 1998 the US Senate's first-ever cybersecurity hearing featured a team of hackers who claimed that they could use IP hijacking to take down the internet in under 30 minutes. Dainotti says that, more than 20 years later, the lack of deployment of security mechanisms in BGP is still a serious concern.

To better pinpoint serial attacks, the group first pulled data from several years’ worth of network operator mailing lists, as well as historical BGP data taken every five minutes from the global routing table.

From that, they observed particular qualities of malicious actors and then trained a machine-learning model to automatically identify such behaviours.

The system flagged networks that had several key characteristics, particularly with respect to the nature of the specific blocks of IP addresses they use:

  • Volatile changes in activity: hijackers’ address blocks seem to disappear much faster than those of legitimate networks. The average duration of a flagged network’s prefix was less than 50 days, compared to almost two years for legitimate networks.
  • Multiple address blocks: serial hijackers tend to advertise many more blocks of IP addresses, also known as 'network prefixes'.
  • IP addresses in multiple countries: most networks don’t have foreign IP addresses. The address blocks that serial hijackers advertised, in contrast, were much more likely to be registered in different countries and continents.
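The three characteristics above can be encoded as simple per-network features. The sketch below uses a hypothetical input schema (each advertised prefix recorded with a lifetime and a registration country) and crude thresholds loosely echoing the averages reported above; the paper's actual feature set and classifier are more elaborate.

```python
# Hedged sketch of turning the flagged characteristics into features.
# The schema and thresholds are illustrative, not the paper's pipeline.
from statistics import mean

def network_features(prefix_events):
    """prefix_events: list of {'duration_days': ..., 'country': ...} dicts,
    one per prefix advertised by a single network (hypothetical schema)."""
    return {
        "avg_prefix_duration_days": mean(e["duration_days"] for e in prefix_events),
        "num_prefixes": len(prefix_events),
        "num_countries": len({e["country"] for e in prefix_events}),
    }

def looks_serial_hijacker(f):
    # Crude rule: short-lived prefixes spread across multiple countries.
    return f["avg_prefix_duration_days"] < 50 and f["num_countries"] > 1

suspicious = network_features([
    {"duration_days": 12, "country": "US"},
    {"duration_days": 30, "country": "RU"},
    {"duration_days": 8,  "country": "BR"},
])
legit = network_features([{"duration_days": 700, "country": "IE"}])

print(looks_serial_hijacker(suspicious), looks_serial_hijacker(legit))
```

In the actual system a trained machine-learning model, rather than hand-set thresholds, weighs these kinds of signals together.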

Identifying false positives


Testart said that one challenge in developing the system was that events that look like IP hijacks can often be the result of human error, or otherwise legitimate.

For example, a network operator might use BGP to defend against distributed denial-of-service attacks in which huge amounts of traffic are directed at their network.

Modifying the route is a legitimate way to shut down the attack, but it looks virtually identical to an actual hijack.

Because of this issue, the team often had to manually jump in to identify false positives, which accounted for roughly 20 per cent of the cases identified by their classifier.

Moving forward, the researchers are hopeful that future iterations will require minimal human supervision and could eventually be deployed in production environments.

“The authors' results show that past behaviours are clearly not being used to limit bad behaviours and prevent subsequent attacks,” says David Plonka, a senior research scientist at Akamai Technologies who was not involved in the work.

“One implication of this work is that network operators can take a step back and examine global internet routing across years, rather than just myopically focusing on individual incidents.”

As people increasingly rely on the Internet for critical transactions, Testart says that she expects IP hijacking’s potential for damage to only get worse. But she is also hopeful that it could be made more difficult by new security measures.

In particular, large backbone networks such as AT&T have recently announced the adoption of resource public key infrastructure (RPKI), a mechanism that uses cryptographic certificates to ensure that a network announces only its legitimate IP addresses.
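Conceptually, RPKI route-origin validation checks each BGP announcement against signed Route Origin Authorisations (ROAs), each of which binds a prefix, a maximum prefix length, and the authorised origin AS. Below is a toy validator with made-up data and none of the cryptography, just to show the decision logic:

```python
# Toy RPKI route-origin validation: an announcement is 'valid' only if
# a ROA covers its prefix, permits its length, and names its origin AS.
# Data is invented; real validators verify signed objects.
import ipaddress

ROAS = [
    {"prefix": "198.51.100.0/22", "max_len": 24, "asn": 64500},
]

def validate(prefix, origin_asn):
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa in ROAS:
        roa_net = ipaddress.ip_network(roa["prefix"])
        if net.subnet_of(roa_net):
            covered = True
            if net.prefixlen <= roa["max_len"] and roa["asn"] == origin_asn:
                return "valid"
    # Covered by a ROA but failing its terms means 'invalid';
    # no covering ROA at all means 'not-found'.
    return "invalid" if covered else "not-found"

print(validate("198.51.100.0/24", 64500))  # valid
print(validate("198.51.100.0/24", 64501))  # invalid: wrong origin AS
print(validate("198.51.100.0/25", 64500))  # invalid: more specific than max_len
```

A hijacker announcing someone else's covered prefix, or a too-specific sub-prefix, would thus be rejected by validating routers.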

“This project could nicely complement the existing best solutions to prevent such abuse that include filtering, anti-spoofing, co-ordination via contact databases, and sharing routing policies so that other networks can validate it,” says Plonka.

“It remains to be seen whether misbehaving networks will continue to be able to game their way to a good reputation. But this work is a great way to either validate or redirect the network operator community's efforts to put an end to these present dangers.”

Using machine learning to hunt down cybercriminals
