
Large-scale Distributed Optimization

One approach inspired by human intelligence is the idea that complex thinking and decision-making are often distributed.

Since humans have direct access only to their own thinking processes, we naturally tend to think of intelligence as something that occurs inside single brains or minds. But the past few decades of cognitive science research have seen the emergence of other paradigms of thought. We’ve learned that our brains actually process perceptions, information, decisions and actions in a multitude of subsystems simultaneously. Some of these systems provide quick, intuitive answers; others allow for slow, rational deliberation. Much of our thinking seems to be embodied in other parts of our nervous systems, or even extended outwards to the tools we use, the ideas others share with us and the culture we learn from and contribute to. In short, especially in complex systems and societies, thinking never happens in isolation.

Yet current Machine Learning techniques remain largely isolated and centralized. Machine Learning (ML) is a sub-field of computer science dedicated to deriving complex predictive models that can discover useful information, suggest conclusions and support decision-making in a variety of fields including finance, logistics and robotics. Unlike standard hard-coded programming, ML uses a general architecture of computer code that can accept inputs, derive outputs and optimize its free parameters through a self-tuning process. Tuning those free parameters, however, is an increasingly tough challenge due to the proliferation of available data: memory and computational requirements are growing much faster than processing speeds, making traditional solutions inefficient. State-of-the-art deep neural networks, for example, operate with millions of parameters and can be very slow to train.
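
To make that self-tuning process concrete, here is a deliberately tiny illustration (our own toy example, not from the paper): a one-parameter model fitted by gradient descent, the basic loop that becomes expensive at scale.

```python
import numpy as np

# Toy illustration of "self-tuning": fit y = w * x by gradient descent.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + 0.1 * rng.normal(size=100)   # data generated with true w = 3

w = 0.0      # the free parameter, initialized arbitrarily
lr = 0.1     # learning rate (step size)
for _ in range(100):
    grad = np.mean(2.0 * (w * x - y) * x)  # gradient of the mean squared error
    w -= lr * grad                         # nudge the parameter downhill

print(f"learned w = {w:.3f}")  # close to the true value 3.0
```

With millions of parameters instead of one, every such gradient evaluation touches the entire dataset, which is exactly the bottleneck described above.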

But what if we apply a “divide and conquer” paradigm to this hard computational problem? In our paper, using a method called distributed optimization, we split data across multiple processing units and target a global solution by collaboratively solving more tractable local sub-problems. This approach has multiple benefits: it is scalable, cost-efficient and robust.
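
As a rough sketch of the divide-and-conquer idea (a simple gradient-averaging scheme for illustration, not the algorithm from our paper), the toy fit above can be split so that each worker only ever touches its own shard of the data:

```python
import numpy as np

# Divide and conquer: split the data across workers; each solves a cheap
# local sub-problem (a gradient on its own shard), and only small messages
# are exchanged to steer all workers toward the same global solution.
rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 3.0 * x + 0.1 * rng.normal(size=1000)

n_workers = 4
shards = list(zip(np.array_split(x, n_workers), np.array_split(y, n_workers)))

w, lr = 0.0, 0.1
for _ in range(100):
    local_grads = [np.mean(2.0 * (w * xs - ys) * xs) for xs, ys in shards]
    w -= lr * np.mean(local_grads)   # consensus step: average, then descend

print(f"learned w = {w:.3f}")
```

Each iteration now communicates a single number per worker instead of moving the data itself, which is what makes the approach scalable and robust.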

Distributed optimization already provides a rich set of algorithms that rely on communication between processors to determine free parameters in a decentralized way. Among the many possible approaches (e.g., distributed averaging, coordinate descent and incremental methods), the fastest distributed optimization techniques are currently based on second-order (Newton) methods. Though appealing, these are hard to decentralize: computing the Newton direction requires inverting the Hessian, which still demands global information.
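
To make the obstacle concrete, in standard notation (ours, for illustration): for an objective that decomposes into local terms, the problem and the Newton update take the form

```latex
\min_{x \in \mathbb{R}^d} \; f(x) = \sum_{i=1}^{n} f_i(x),
\qquad
x^{k+1} = x^{k} - \left( \nabla^2 f(x^{k}) \right)^{-1} \nabla f(x^{k})
```

The gradient \(\nabla f = \sum_i \nabla f_i\) is a sum of purely local quantities and aggregates easily with message passing, but the inverse of the summed Hessian is not a sum of local inverses, so the Newton direction cannot be assembled from local computations alone.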

In our work, we derive a novel connection between the Hessian of a distributed optimization problem and Symmetric Diagonally Dominant (SDD) matrices. PROWLER.io are the first to propose a fully distributed algorithm for solving symmetric diagonally dominant systems. By properly exploiting the curvature information stored in the Hessian, we converge to a consensus between local subproblems at a quadratic rate, the fastest convergence shown in the literature so far.
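
For context, a matrix is SDD when it is symmetric and every diagonal entry dominates the total absolute off-diagonal mass in its row; graph Laplacians, which naturally describe the communication network between processors, are the canonical example. A quick illustrative check (the helper below is our own, not an API from the paper):

```python
import numpy as np

def is_sdd(H: np.ndarray, tol: float = 1e-10) -> bool:
    """True if H is symmetric diagonally dominant (SDD):
    H == H.T and H[i, i] >= sum over j != i of |H[i, j]|, for every row i."""
    if not np.allclose(H, H.T, atol=tol):
        return False
    diag = np.diag(H)
    off_diag = np.abs(H).sum(axis=1) - np.abs(diag)
    return bool(np.all(diag + tol >= off_diag))

# The Laplacian of a 4-node ring network is a classic SDD matrix.
laplacian = np.array([[ 2., -1.,  0., -1.],
                      [-1.,  2., -1.,  0.],
                      [ 0., -1.,  2., -1.],
                      [-1.,  0., -1.,  2.]])
print(is_sdd(laplacian))  # True
```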

To validate these theoretical convergence rates in practice, we evaluated our method against five state-of-the-art approaches on a variety of machine learning problems. We chose these applications because they are inspired by real-world problems and show that our approach could be used for multitask decision-making in a wide range of systems, including autonomous helicopters and humanoid robots.

For the first time, we were able to successfully show transfer between thousands of reinforcement learning control problems.

This solution to one of the most widespread issues in Machine Learning promises to ensure that artificial minds don't miss out on a tool that human financial analysts, system designers or robotics engineers take for granted: distributed thinking.
