
Friction in Human-Computer Symbiosis: Kasparov on Chess

As we build our platforms and applications following a human-computer symbiosis approach, we keep an ear to the ground for interesting examples that illuminate new techniques or validate our approach in some empirical way.

One of the areas we’re interested in is the overall friction of analysis systems. The systems we build run on commodity hardware; we’re not building faster computers, and yet we deliver orders-of-magnitude better performance on analysis tasks than existing solutions. How do we do this? By building software in such a way that it reduces the friction experienced at the boundaries between the computing power, the analyst, and the source data.

Chess as analysis laboratory

Chess is, at its heart, a predictive venture. The player attempts to anticipate their opponent’s moves, planning their own moves accordingly, with the straightforward goal of finding a sequence of piece moves that forces checkmate.

This game is, in its ideal form, analysis. (The moves made are the logical extension of the analysis.) The data are clean, the problem is well-defined, and everyone plays by the same rules. There are even well-defined metrics for ranking chess players by skill: a better chess player is a better chess-game analyst.

In the realm of evaluating analysis systems, this is about as good as it gets in terms of designing controlled experiments to study the relative strengths of different analysis systems.

Garry Kasparov, widely considered to be the greatest chess player of all time, recently wrote a review of Diego Rasskin-Gutman’s book, Chess Metaphors: Artificial Intelligence and the Human Mind.

The review is excellent and covers a lot of ground. However, one particular anecdote stood out as a very interesting example of human-computer symbiosis (emphasis added):

In 2005, the online chess-playing site Playchess.com hosted what it called a “freestyle” chess tournament in which anyone could compete in teams with other players or computers. Normally, “anti-cheating” algorithms are employed by online sites to prevent, or at least discourage, players from cheating with computer assistance. (I wonder if these detection algorithms, which employ diagnostic analysis of moves and calculate probabilities, are any less “intelligent” than the playing programs they detect.)

Lured by the substantial prize money, several groups of strong grandmasters working with several computers at the same time entered the competition. At first, the results seemed predictable. The teams of human plus machine dominated even the strongest computers. The chess machine Hydra, which is a chess-specific supercomputer like Deep Blue, was no match for a strong human player using a relatively weak laptop. Human strategic guidance combined with the tactical acuity of a computer was overwhelming.

The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.

After the jump, we look at this finding in a more general way and map it onto the Palantir approach.

The cyborg Grandmaster: a fearsome opponent

The tournament Kasparov recalls was a showcase of chess talent, human-computer symbiosis, and raw computing power. Among the entrants were a purpose-built chess machine (similar to Deep Blue) named Hydra and teams of Grandmasters assisted by computer programs.

One losing participant had this to say about the computer-aided Grandmasters:

Secondly, I have learned that a Grandmaster armed with a chess engine is a killer combination against a plain Engine. Engines see everything via brute force, Grandmasters use their intuition and are able to see “obvious” moves at once. So the two of them together are a mighty force.

This is just as Licklider predicted 50 years ago — quoting Man-Computer Symbiosis (if I could put it better, I would):

Men will set the goals and supply the motivations, of course, at least in the early years. They will formulate hypotheses. They will ask questions… In general, they will make approximate and fallible, but leading, contributions, and they will define criteria and serve as evaluators, judging the contributions of the equipment and guiding the general line of thought.

In addition, the computer will serve as a statistical-inference, decision-theory, or game-theory machine to make elementary evaluations of suggested courses of action whenever there is enough basis to support a formal statistical analysis. Finally, it will do as much diagnosis, pattern-matching, and relevance-recognizing as it profitably can, but it will accept a clearly secondary status in those areas.

So, in classic intelligence-amplification fashion, a computer program that can quickly evaluate a move’s likelihood of success amplifies the power of the Grandmaster.

While this is empirically true, it raises the question: by how much does it amplify the power of the Grandmaster?

One approximation might be a simple linear amplification. Let’s imagine a function, a(h, c), in which the analytic power (a) is the product of the power of the human (h) and the computing power of the chess engine being used (c). This gives us the equation:

a(h, c) = h × c
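As a sanity check on this toy model, here is a minimal sketch in Python. The skill and compute numbers are invented, unitless values chosen purely for illustration; only the shape of the model comes from the discussion above.

```python
def analytic_power(h, c):
    """Naive model: analytic power is the product of the human's
    skill (h) and the computing power brought to bear (c)."""
    return h * c

# Invented, unitless values purely for illustration.
grandmaster, amateur = 10.0, 3.0
strong_engine, laptop_engine = 8.0, 5.0

print(analytic_power(grandmaster, laptop_engine))  # 50.0 -- strong human, weak machine
print(analytic_power(amateur, strong_engine))      # 24.0 -- weak human, strong machine
```

Under this model a strong human with a weak machine still dominates, which matches the early results Kasparov describes; what it cannot explain is what happened next.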

One term to dominate them all: friction-of-interface

Does this simple approximation hold up? It does not. The team that won the PAL/CSS Freestyle Tournament in 2005 was composed of two amateur chess players who were able to best their computer-assisted Grandmaster opponents.

How did they accomplish this feat? It was not through superior compute power. Instead, they won by more effectively coaching their three chess engines, steering them to look more deeply into the right positions. They played so well that many observers speculated that it was actually Kasparov himself playing:

Many speculated that it might be Garry Kasparov, who was the initiator of this kind of computer assisted chess matches. When we asked him Kasparov confirmed that was not the case. But he reminded us that it doesn’t really matter. The guiding principle of Freestyle Chess: anything is allowed. “Even if they were assisted by the devil, that would probably be covered by the rules,” he joked. “Only the moves they played count.”

What does this mean for our simple equation? Well, it looks like it’s missing a term, one we’ll call f, that describes the efficiency (or friction) of the interface between human and computer.

Quoting Kasparov again:

Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.

The implication is that the equation needs that friction term. One simple way to write it (the exact form matters less than the behavior):

a(h, c, f) = (h × c) / (1 + f)

where f ≥ 0 is the total friction of the interface.

So as the friction of the interface goes to zero, the full amplification of the chess engine is brought to bear. A quick gut-check in the opposite direction agrees: one can imagine the world’s most powerful chess engine with the world’s worst interface; spending the time it would take to express commands to this theoretically awful program would actually be worse than playing without it.
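Here is the friction-aware model in the same Python sketch. The 1/(1 + f) shape, the decision to model a computer playing alone as h = 1, and all of the numbers are assumptions chosen only to make the ordering visible; they are not measurements of anything.

```python
def analytic_power(h, c, f):
    """Friction-aware model: as interface friction f approaches zero,
    the full amplification h * c is brought to bear; as f grows, the
    engine's contribution is progressively thrown away."""
    return h * c / (1.0 + f)

# Invented, unitless values purely for illustration.
strong_computer_alone = analytic_power(h=1.0, c=8.0, f=0.0)    # 8.0  (no human guidance to amplify)
gm_inferior_process   = analytic_power(h=10.0, c=8.0, f=4.0)   # 16.0 (strong human, high friction)
amateurs_good_process = analytic_power(h=3.0, c=8.0, f=0.2)    # 20.0 (weak humans, low friction)

assert amateurs_good_process > gm_inferior_process > strong_computer_alone
```

That is exactly the ordering Kasparov reports: weak human + machine + better process beats strong human + machine + inferior process, and both beat the strong computer alone.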

Palantir: a low-friction interface to data

As analysis problems go, chess resembles a spherical cow in a vacuum. Analysis problems in the real world are orders of magnitude messier.

Let’s reframe the terms of our equation above into a more general approach to analysis:

  • H – the power of the analyst. In chess, this term varies widely between players; in designing real-world data analysis systems, it is more or less a constant (which is why the h above becomes H below). Of course there are differing levels of expertise, training, and raw ability amongst the user population, but when we design systems, it’s with the average case in mind.
  • c – computing power. How fast are the machines? How well do they scale? How efficiently do they perform the data tasks at hand? Palantir spends significant engineering effort on optimizing the c term, but most of the growth in this term comes from the layers we depend on, built by companies like Intel, Sun, Oracle, etc.
  • f – friction. How easy is it to bring c to bear on the problem? Note that when we talk about friction of interface, we are not referring exclusively to the user interface. More generally, friction can be present at any interface between two systems: data-software, software-software, human-software, etc. The f we consider in this simple model is the sum total of system friction.

So our final formulation is just in terms of c and f (holding H as a constant):

a(c, f) = H × c / (1 + f)
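Holding H constant makes the trade-off concrete. Continuing the same assumed 1/(1 + f) form with invented numbers, cutting friction on commodity hardware buys the same gain as an order of magnitude more compute:

```python
H = 1.0  # the analyst, held constant

def analytic_power(c, f):
    """Analytic power as a function of compute and total system friction."""
    return H * c / (1.0 + f)

high_friction = analytic_power(c=1.0, f=9.0)   # 0.1 -- baseline: commodity hardware, lots of friction
ten_x_compute = analytic_power(c=10.0, f=9.0)  # 1.0 -- buy ten times the hardware
zero_friction = analytic_power(c=1.0, f=0.0)   # 1.0 -- same hardware, friction engineered away
```

This is the bet behind building on commodity hardware: the orders-of-magnitude gains come from driving f down, not from exotic machines.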

When we discuss friction in real-world analysis systems, it exists at multiple levels (see the sketch after this list):

  1. Creating an analysis model that will enable answering the questions that need to be explored
  2. Integrating the data into a single coherent view of the problem
  3. Enabling analysis tools to efficiently query and load the data
  4. Exposing APIs that let developers quickly and efficiently build custom solutions for modeling and analysis tasks not covered by the general tools
  5. Providing a user interface that makes the tools easy, enjoyable, and quick to use
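One way to picture that total is as an accumulation across these five levels. The per-level numbers below are invented, and treating f as a simple sum is an assumption; the point is only that a single high-friction interface (here, data integration) can dominate the whole system.

```python
# Hypothetical friction contributed at each interface in the stack
# (invented numbers; only the accumulation matters).
friction_by_level = {
    "analysis model":   0.5,
    "data integration": 2.0,
    "query/load path":  1.0,
    "developer APIs":   0.25,
    "user interface":   0.25,
}

f_total = sum(friction_by_level.values())   # 4.0
delivered = 1.0 / (1.0 + f_total)           # 0.2 -- fraction of H * c actually delivered

# Paying down only the worst layer recovers most of the lost amplification.
f_reduced = f_total - friction_by_level["data integration"]
print(delivered, 1.0 / (1.0 + f_reduced))   # 0.2 -> 0.33...
```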

Minimizing f: Haiti Flooding Predictions

If this is starting to sound very similar to Palantir’s marketing information, that is no accident. While some of our backend engineers are concerned with things like scaling and speed of querying, the overall innovation we’re bringing to the field is not simply faster data-processing systems (though ours are fast); it is reducing the friction at every interface inside a complex human-computer symbiotic system.

You want an example that ties it all together? It starts with a simple question: which of the many displaced-person camps in Haiti are most at risk for flooding as the rainy season approaches? Easy to ask, but not so simple to answer.

The original introduction to this video:

As we enter the beginning of the rainy season in Haiti, one of the biggest problems facing relief organizations today is the spectre of flooding and mudslides destroying Internally Displaced Persons (IDP) Camps. In this video, we integrate data from many sources to determine high risk aid locations.

The data integration for this video took about six hours, using sources of data that had never before been fused. The analysis itself takes a few minutes and quickly comes to an actionable answer to the original question.
