We at Predapp think of ourselves as the Principled AI company. It’s a phrase that some people outside of the industry might find a bit puzzling. We’d like to explain the why here today.
Principles are literally first things; they are fundamental beliefs, theorems, even truths that can underpin and support good decisions. They provide guidelines that shouldn’t be ignored without a good, explicit reason. For doctors it might be “do no harm”; for designers, “less is more”; as an engineer (and occasional blog-post writer), I like to “keep it simple.”
When it comes to Principled AI, our principles are scientific, mathematical ones. They’re more akin to Bernoulli’s principle of fluid flow (as air speeds up, its pressure drops - part of how wings generate lift) than to the moralists’ “do unto others…” or Donald Trump’s operating principle that “there’s no such thing as bad publicity”.
Mathematical principles may not be as catchy as ethical ones, but they can be incredibly powerful. They are the foundation of what we do here at Predapp – developing decision-making AI for complex systems. They help us build accurate model environments and agents that learn well and make good decisions. They make it possible for us to optimise the millions of micro-decisions that make up the inner workings of big complex systems. They give our growing toolbox of AI thinking tools unprecedented power and flexibility. Most of all, they help us keep our AI safe, open and effective, unlike the unpredictable black-box approach of deep neural nets.
So what are these principles, in simple terms?
We believe AI should:
Make useful decisions based on evidence, on facts on the ground, on what’s happening in the environment, even if that environment is a game or a model of traffic. Those decisions shouldn’t just be scripted rules running on a computer - that’s not real AI. They should be genuinely intelligent interactions with the world. Humans often rely on anecdotal evidence; they base many of their decisions on arguments, stories and myths that can be appealing - even convincing - yet still untrue. AI can’t and shouldn’t do that. It should base its decisions on the true conditions in its environment.
Model and predict using probability theory. AI must be capable of assessing and reassessing autonomously, of accurately modelling the probable outcomes of actions even in complex environments filled with uncertainty. It can do this by using prior knowledge, gathered during its learning phases and its interactions with the world, to narrow down the possibilities for actions and decisions. It can then choose the actions most likely to accomplish its - and our - goals. Bayes’ Theorem underpins this principle for us: it helps an AI agent focus on the salient, relevant facts in front of it rather than getting caught up in irrelevant details. (A tiny worked example follows this list.)
Learn in varied ways that are tailored to the task. Generic AI learning tools shouldn’t just be tacked on to a decision-making engine. A system with a diversity of learning techniques will always be more flexible than one that over-relies on a single method. Our list of learning tools is always growing. Crucially, AI needs to be able to learn from experience; that’s where the most effective and versatile AI learning tool comes in: reinforcement learning (RL). Thanks to increases in computing power, AI agents can model, fail, succeed and ultimately learn at unprecedented speed using RL. (A toy sketch follows this list.) And they can transfer learning even faster. Once a thing is learned, it can be shared, allowing AI agents to accumulate experience from other AIs and become, in a sense, much more experienced themselves.
Be data-efficient. In recent years AI has become associated with big data, as data-rich companies and countries seek ways to make use of the mountains of information they collect from their users and citizens. They are like the proverbial man with a hammer who sees nails everywhere: for them, AI is mostly about exploiting those mountains of data. Since Predapp models dynamic systems that are constantly changing, we develop AI that can work with the smaller amounts of data that are useful now but may quickly become obsolete. Making sense of dynamic traffic flow for an autonomous car is nothing like feeding ten million pictures to a deep neural net in the hope that it learns to recognise cats.
Be aware of others - more minds are better than one. One final, vital principle. AIs must always be aware that they are not alone: to be safe, effective and open, they must account for the needs of other agents, especially human ones, whose behaviour may sometimes seem surprising or irrational. They will need to compete, cooperate and collaborate with each other and with humans in ways that make the whole system smarter, safer and more useful than the sum of its parts. This is the kind of collective intelligence that beehives, markets and societies exhibit. Agents may compete in an environment like traffic, but they must do so in a coordinated, even cooperative way. The maths of game theory makes such a principled approach to multi-agent systems possible. (A small worked example closes this list.)
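To make the probability principle concrete, here is a minimal sketch of a Bayesian update - an agent revising its belief that a road segment is congested after one noisy sensor alert. This is an illustration of the idea, not Predapp code, and every number in it is an invented assumption.

```python
# Minimal Bayesian update: belief that a road segment is congested,
# revised after one noisy sensor alert. All numbers are illustrative.

prior_congested = 0.30          # P(congested), from prior knowledge
p_alert_given_congested = 0.90  # P(alert | congested)
p_alert_given_clear = 0.20      # P(alert | clear) - false-alarm rate

# Total probability of seeing an alert at all (law of total probability)
p_alert = (p_alert_given_congested * prior_congested
           + p_alert_given_clear * (1 - prior_congested))

# Bayes' theorem: P(congested | alert)
posterior_congested = p_alert_given_congested * prior_congested / p_alert

print(f"P(congested | alert) = {posterior_congested:.3f}")  # ~0.659
```

The posterior becomes the new prior for the next reading - exactly the autonomous assess-and-reassess loop described above.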
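The learning-from-experience principle can be sketched just as briefly. Below is a toy tabular Q-learning loop against a stub environment; the `step` function and all the constants are hypothetical, chosen only to make the example self-contained and runnable.

```python
import random

# Toy tabular Q-learning: the agent improves its estimates of action
# values purely from experienced rewards. The environment is a stub.

N_STATES, N_ACTIONS = 5, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Hypothetical environment: returns (reward, next_state)."""
    reward = 1.0 if (state + action) % N_STATES == 0 else 0.0
    return reward, (state + action + 1) % N_STATES

state = 0
for _ in range(10_000):
    # Epsilon-greedy: mostly exploit current knowledge, sometimes explore
    if random.random() < EPSILON:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])

    reward, next_state = step(state, action)

    # Core update: nudge Q towards reward + discounted best future value
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])
    state = next_state
```

Because the learned table is just data, it can be copied to another agent wholesale - the kind of transfer of experience mentioned above.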
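Finally, the multi-agent principle. The sketch below is an invented two-player game - two cars meeting at a narrow merge, each choosing Yield or Go - and the code simply enumerates the pure-strategy Nash equilibria by checking best responses. The payoff numbers are assumptions made up for illustration.

```python
# Two cars at a narrow merge: each chooses Yield (0) or Go (1).
# payoffs[a][b] = (payoff to car A, payoff to car B). Numbers are invented.
payoffs = [
    [(1, 1), (1, 2)],    # A yields: both wait, or B goes through
    [(2, 1), (-5, -5)],  # A goes: A through, or both go and collide
]

ACTIONS = ["Yield", "Go"]

def pure_nash(payoffs):
    """Return action pairs where neither player gains by deviating alone."""
    equilibria = []
    for a in range(2):
        for b in range(2):
            a_best = all(payoffs[a][b][0] >= payoffs[a2][b][0] for a2 in range(2))
            b_best = all(payoffs[a][b][1] >= payoffs[a][b2][1] for b2 in range(2))
            if a_best and b_best:
                equilibria.append((ACTIONS[a], ACTIONS[b]))
    return equilibria

print(pure_nash(payoffs))  # [('Yield', 'Go'), ('Go', 'Yield')]
```

The two equilibria are the coordinated outcomes: one car commits while the other waits. Deciding which agent takes which role is exactly the coordination problem a principled multi-agent AI has to solve.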
That’s it. We at Predapp think all AI thinking and decision-making should try to adhere to these principles, and we are consistently surprised by how many AI developers fail to do so. Much of the hype and paranoia that surround AI in the media these days refers to approaches that are not, in our terms, principled.
Deep Neural Nets (DNNs), for instance, have prompted a great deal of media noise, to the point where the public often confuses them with AI as a whole. DNNs are built on backpropagation, a thirty-year-old training technique that can now run efficiently on faster computers. They’re a useful tool, especially for recognition tasks, and we use them in targeted, controlled ways here at Predapp. The hype comes largely from the media’s focus on DNNs’ ability to solve the kinds of problems that non-scientists find impressive.
To many of us inside the industry, the rapture in the press when DeepMind’s AlphaGo beat a top human player at Go was puzzling. By mastering a 2,500-year-old board game that human minds find very challenging, the researchers had resolved some issues regarding the performance of DNNs, but it was hardly a major step towards DeepMind’s goal of “Solving Intelligence”. Worse, since it relied heavily on so-called “Black Box” DNNs – whose decisions are notoriously difficult to trace or understand – it fed the public’s paranoia about AI being incomprehensible, hence dangerous. It’s very difficult to nail down how DNNs make decisions: they substitute brute computational repetition for explicit probabilistic reasoning; employ a single, narrow learning technique; are data-inefficient; and are effectively single-agent. In short, DNNs by themselves are not principled.
But we can have AI that’s based on sound mathematical and scientific principles; that works from evidence in complex environments; that learns from experience; that is open, observable and traceable. It’s time to move past the hype and paranoia and engage in a serious, rational, fact-based conversation about AI. We need to stop thinking of it as some kind of weird android mind, or even as a special analogue of human intelligence with all its baggage of anecdotal evidence, cognitive biases, weak understanding of probability and selfishness.
Principled AI is a toolbox. It’s filled with thinking tools that can help us cope with the growing complexity and interactivity of our global systems. Like movable type, steam engines, oil, electricity, assembly-line manufacturing, mass communication, computing and the internet, it’s a core technology that will transform everything it touches. From network planning to ride sharing, logistics to robotics, games to finance, the rapidly evolving world economy will increasingly be underpinned by one shared, unlimited resource: Principled AI.
By 2025, Principled AI will drive the world economy. That’s why.