Managing a power grid is like trying to solve an enormous puzzle.
Grid operators must ensure the proper amount of power is flowing to the right areas at the exact time when it is needed, and they must do this in a way that minimises costs without overloading physical infrastructure. On top of that, they must solve this complicated problem repeatedly, as rapidly as possible, to meet constantly changing demand.
To help crack this persistent conundrum, researchers have developed a problem-solving tool that finds the optimal solution much faster than traditional approaches while ensuring the solution doesn’t violate any of the system’s constraints. In a power grid, constraints include things like generator and line capacity.
This new tool incorporates a feasibility-seeking step into a powerful machine-learning model trained to solve the problem. The feasibility-seeking step uses the model’s prediction as a starting point, iteratively refining the solution until it finds the best achievable answer.
“As we try to integrate more renewables into the grid, operators must deal with the fact that the amount of power generation is going to vary moment to moment,” says Priya Donti. Image: MIT News; iStock.
The MIT system can solve complex problems several times faster than traditional solvers, while guaranteeing that its solutions satisfy all of the problem’s constraints. For some extremely complex problems, it could find better solutions than tried-and-true tools. The technique also outperformed pure machine-learning approaches, which are fast but can’t always find feasible solutions.
In addition to helping schedule power production in an electric grid, this new tool could be applied to many types of complicated problems, such as designing new products, managing investment portfolios, or planning production to meet consumer demand.
“Solving these especially thorny problems well requires us to combine tools from machine learning, optimisation, and electrical engineering to develop methods that hit the right trade-offs in terms of providing value to the domain, while also meeting its requirements. You have to look at the needs of the application and design methods in a way that actually fulfils those needs,” says Priya Donti, the Silverman Family Career Development Professor in the Department of Electrical Engineering and Computer Science (EECS) and a principal investigator at the Laboratory for Information and Decision Systems (LIDS).
Donti, senior author of an open-access paper on this new tool, called FSNet, is joined by lead author Hoang Nguyen, an EECS graduate student. The paper was presented at the Conference on Neural Information Processing Systems.
Combining approaches
Ensuring optimal power flow in an electric grid is an extremely hard problem that is becoming more difficult for operators to solve quickly.
“As we try to integrate more renewables into the grid, operators must deal with the fact that the amount of power generation is going to vary moment to moment. At the same time, there are many more distributed devices to co-ordinate,” explains Donti.
Grid operators often rely on traditional solvers, which provide mathematical guarantees that the optimal solution doesn’t violate any problem constraints. But these tools can take hours or even days to arrive at that solution if the problem is especially convoluted.
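To make the setting concrete, here is a toy, hypothetical stand-in for the kind of constrained problem such a solver handles: meet demand at minimum cost without exceeding any generator’s capacity. (Real optimal power flow is far more involved; the `merit_order_dispatch` function and all the numbers below are illustrative assumptions, not drawn from the paper.)

```python
def merit_order_dispatch(demand, gens):
    """Dispatch generators cheapest-first until demand is met.

    A toy stand-in for the constrained problem grid operators solve:
    meet demand at minimum cost (equality constraint: total output
    equals demand) without exceeding any generator's capacity
    (inequality constraints). Cheapest-first is optimal in this
    simplified, lossless setting.
    """
    dispatch = {}
    remaining = demand
    for name, cost, cap in sorted(gens, key=lambda g: g[1]):
        take = min(cap, remaining)   # respect the capacity limit
        dispatch[name] = take
        remaining -= take
    if remaining > 1e-9:
        raise ValueError("demand exceeds total capacity")
    return dispatch

# (name, cost in $/MWh, capacity in MW) -- made-up numbers
gens = [("coal", 30.0, 80.0), ("hydro", 10.0, 50.0), ("gas", 50.0, 100.0)]
print(merit_order_dispatch(150.0, gens))
# {'hydro': 50.0, 'coal': 80.0, 'gas': 20.0}
```

Even this miniature version shows why the problem is hard at scale: every added constraint (line limits, ramp rates, voltage bounds) couples the decisions together, which is what drives traditional solvers’ runtimes up.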
On the other hand, deep-learning models can solve even very hard problems in a fraction of the time, but the solution might ignore some important constraints. For a power grid operator, this could result in issues like unsafe voltage levels or even grid outages.
“Machine-learning models struggle to satisfy all the constraints due to the many errors that occur during the training process,” explains Nguyen.
For FSNet, the researchers combined the best of both approaches into a two-step problem-solving framework.
Focusing on feasibility
In the first step, a neural network predicts a solution to the optimisation problem. Very loosely inspired by neurons in the human brain, neural networks are deep learning models that excel at recognising patterns in data.
Next, a traditional solver that has been incorporated into FSNet performs a feasibility-seeking step. This optimisation algorithm iteratively refines the initial prediction while ensuring the solution does not violate any constraints.
Because the feasibility-seeking step is based on a mathematical model of the problem, it can guarantee the solution is deployable.
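The two-step idea can be sketched in a few lines. This is a simplified illustration, not FSNet’s actual algorithm: plain gradient descent on a squared-violation measure plays the role of the feasibility-seeking refinement, and the `feasibility_seek` function, its starting guess, and its numbers are all hypothetical.

```python
def feasibility_seek(x, demand, cap, lr=0.05, steps=2000):
    """Iteratively reduce constraint violations of an initial guess x.

    A minimal sketch of a feasibility-seeking step (not the paper's
    algorithm): gradient descent on the squared violation of one
    equality constraint (power balance: sum(x) == demand) and box
    inequalities (0 <= x[i] <= cap[i]).
    """
    x = list(x)
    for _ in range(steps):
        eq_viol = sum(x) - demand        # equality-constraint residual
        grad = []
        for i, xi in enumerate(x):
            g = 2.0 * eq_viol            # d/dx_i of eq_viol**2
            if xi > cap[i]:              # upper-bound violation
                g += 2.0 * (xi - cap[i])
            if xi < 0.0:                 # lower-bound violation
                g += 2.0 * xi
            grad.append(g)
        x = [xi - lr * gi for xi, gi in zip(x, grad)]
    return x

# A (hypothetical) neural-network guess that misses the 150 MW
# balance by 10 MW; the refinement restores feasibility.
x = feasibility_seek([90.0, 70.0], demand=150.0, cap=[100.0, 100.0])
print(round(sum(x), 3))  # 150.0
```

The key design point is that this refinement starts from the network’s prediction rather than from scratch, so it typically needs far fewer iterations than a solver working cold.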
“This step is very important. In FSNet, we can have the rigorous guarantees that we need in practice,” says Nguyen.
The researchers designed FSNet to address both main types of constraints (equality and inequality) at the same time. This makes it easier to use than other approaches that may require customising the neural network or solving for each type of constraint separately.
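One way to see how both constraint types can be handled uniformly (an assumed formulation for illustration, not taken from the paper): fold equality residuals and clipped inequality residuals into a single violation score that one refinement loop can drive towards zero.

```python
def violation(x, eq_constraints, ineq_constraints):
    """Single scalar measuring how far x is from feasibility.

    Illustrative formulation (an assumption, not the paper's):
    each equality constraint g(x) = 0 contributes g(x)**2, and each
    inequality constraint h(x) <= 0 contributes max(0, h(x))**2,
    so one objective captures both kinds of violation at once.
    """
    total = 0.0
    for g in eq_constraints:
        total += g(x) ** 2
    for h in ineq_constraints:
        total += max(0.0, h(x)) ** 2
    return total

# Example: x must satisfy x[0] + x[1] = 150 and x[0] <= 100.
eqs = [lambda x: x[0] + x[1] - 150.0]
ineqs = [lambda x: x[0] - 100.0]
print(violation([120.0, 30.0], eqs, ineqs))  # 400.0: only x[0] <= 100 is violated
print(violation([100.0, 50.0], eqs, ineqs))  # 0.0: fully feasible
```

Because both constraint types feed one score, the same refinement machinery works regardless of the mix of constraints, which is what makes the approach plug-and-play across problems.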
“Here, you can just plug and play with different optimisation solvers,” says Donti.
By thinking differently about how the neural network solves complex optimisation problems, the researchers were able to unlock a new technique that works better, she adds.
They compared FSNet to traditional solvers and pure machine-learning approaches on a range of challenging problems, including power grid optimisation. Their system cut solving times by orders of magnitude compared to the baseline approaches, while respecting all problem constraints.
FSNet also found better solutions to some of the trickiest problems.
“While this was surprising to us, it does make sense. Our neural network can figure out by itself some additional structure in the data that the original optimisation solver was not designed to exploit,” explains Donti.
In the future, the researchers want to make FSNet less memory-intensive, incorporate more efficient optimisation algorithms, and scale it up to tackle more realistic problems.
“Finding solutions to challenging optimisation problems that are feasible is paramount to finding ones that are close to optimal. Especially for physical systems like power grids, close to optimal means nothing without feasibility. This work provides an important step towards ensuring that deep-learning models can produce predictions that satisfy constraints, with explicit guarantees on constraint enforcement,” says Kyri Baker, an associate professor at the University of Colorado Boulder, who was not involved with this work.
“A persistent challenge for machine learning-based optimisation is feasibility. This work elegantly couples end-to-end learning with an unrolled feasibility-seeking procedure that minimises equality and inequality violations. The results are very promising, and I look forward to seeing where this research will head,” adds Ferdinando Fioretto, an assistant professor at the University of Virginia, who was not involved with this work.