Licklider was a very influential man, with Cooper even crediting him for the existence of computer science as a modern-day field:
Until Licklider began his work at ARPA, there were no Ph.D. programs in computer science at American universities. That changed after ARPA began handing out grants to promising students, a practice that convinced MIT, Stanford, UC Berkeley and Carnegie Mellon to start their own graduate programs in computer science in 1965. Maybe that should go down as Licklider’s most lasting legacy.
In the piece, Cooper references this influential and well-known work by Licklider: Man-Computer Symbiosis, by J. C. R. Licklider, published in IRE Transactions on Human Factors in Electronics, volume HFE-1, pages 4–11, March 1960.
That’s right, it was written almost 50 years ago. That said, it’s incredibly relevant today, perhaps more than ever.
Here’s the abstract:
Man-computer symbiosis is an expected development in cooperative interaction between men and electronic computers. It will involve very close coupling between the human and the electronic members of the partnership. The main aims are 1) to let computers facilitate formulative thinking as they now facilitate the solution of formulated problems, and 2) to enable men and computers to cooperate in making decisions and controlling complex situations without inflexible dependence on predetermined programs. In the anticipated symbiotic partnership, men will set the goals, formulate the hypotheses, determine the criteria, and perform the evaluations. Computing machines will do the routinizable work that must be done to prepare the way for insights and decisions in technical and scientific thinking. Preliminary analyses indicate that the symbiotic partnership will perform intellectual operations much more effectively than man alone can perform them. Prerequisites for the achievement of the effective, cooperative association include developments in computer time sharing, in memory components, in memory organization, in programming languages, and in input and output equipment.
This is still a pretty accurate description of how most analysts (in any industry or field) go about their business:
Despite the fact that there is a voluminous literature on thinking and problem solving, including intensive case-history studies of the process of invention, I could find nothing comparable to a time-and-motion-study analysis of the mental work of a person engaged in a scientific or technical enterprise. In the spring and summer of 1957, therefore, I tried to keep track of what one moderately technical person actually did during the hours he regarded as devoted to work. Although I was aware of the inadequacy of the sampling, I served as my own subject.
About 85 per cent of my “thinking” time was spent getting into a position to think, to make a decision, to learn something I needed to know. Much more time went into finding or obtaining information than into digesting it. Hours went into the plotting of graphs, and other hours into instructing an assistant how to plot. When the graphs were finished, the relations were obvious at once, but the plotting had to be done in order to make them so.
Throughout the period I examined, in short, my “thinking” time was devoted mainly to activities that were essentially clerical or mechanical: searching, calculating, plotting, transforming, determining the logical or dynamic consequences of a set of assumptions or hypotheses, preparing the way for a decision or an insight. Moreover, my choices of what to attempt and what not to attempt were determined to an embarrassingly great extent by considerations of clerical feasibility, not intellectual capability.
This quote is an eerily accurate description of how trading strategies are formulated, back-tested, and implemented these days. By analogy, it's also an accurate reflection of the modern use of information processing in the intelligence space.
It is to bring computing machines effectively into processes of thinking that must go on in “real time,” time that moves too fast to permit using computers in conventional ways. Imagine trying, for example, to direct a battle with the aid of a computer on such a schedule as this. You formulate your problem today. Tomorrow you spend with a programmer. Next week the computer devotes 5 minutes to assembling your program and 47 seconds to calculating the answer to your problem. You get a sheet of paper 20 feet long, full of numbers that, instead of providing a final solution, only suggest a tactic that should be explored by simulation. Obviously, the battle would be over before the second step in its planning was begun. To think in interaction with a computer in the same way that you think with a colleague whose competence supplements your own will require much tighter coupling between man and machine than is suggested by the example and than is possible today.
So how does this relate to what we do? In the finance world, much of what fund managers and analysts do in building strategies comes down to formulating trading models and then building spreadsheets that can back-test or simulate the performance of those models.
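To make concrete the kind of "clerical, mechanical" work Licklider describes, here is a minimal sketch of the sort of back-test an analyst might otherwise assemble by hand in a spreadsheet: a toy moving-average crossover rule run over a made-up price series. The strategy, the function names, and the numbers are all illustrative assumptions, not real data or anyone's actual trading model.

```python
# Hypothetical sketch: the "clerical" core of a spreadsheet back-test.
# Strategy and prices are invented for illustration only.

def moving_average(prices, window):
    """Trailing simple moving average; None until enough history exists."""
    return [
        None if i + 1 < window
        else sum(prices[i + 1 - window:i + 1]) / window
        for i in range(len(prices))
    ]

def backtest_crossover(prices, fast=3, slow=5):
    """Hold the asset on any day where, as of the prior close, the fast
    moving average sits above the slow one; return cumulative growth of $1."""
    fast_ma = moving_average(prices, fast)
    slow_ma = moving_average(prices, slow)
    value = 1.0
    for i in range(1, len(prices)):
        held = (
            fast_ma[i - 1] is not None
            and slow_ma[i - 1] is not None
            and fast_ma[i - 1] > slow_ma[i - 1]
        )
        if held:
            value *= prices[i] / prices[i - 1]  # earn that day's return
    return value

prices = [100, 101, 103, 102, 105, 107, 106, 108, 110, 109]
print(round(backtest_crossover(prices), 4))  # → 1.0381
```

Even this toy version shows where the hours go: computing the averages, lining up the lags, and re-running everything whenever a parameter or a new data point changes, none of which is the actual thinking about markets.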
Our finance tool obviates the need for this "clerical, mechanical" work, allowing strategists to spend more time making sense of the interconnections in the market and formulating nuanced trading strategies, and less time building models in Excel. We take the state of the art a quantum leap forward in financial analysis: rather than merely letting analysts quickly build models and back-test trading strategies, we've built a tool that allows a smooth flow from hypothesis to theory, with the software doing all the heavy lifting, data wrangling, and polished presentation. New variables or market conditions can be incorporated on the fly, without pausing high-level thinking to gather data or marshal it into the right format. Knowledge can be divined by asking questions about high-level concepts like dynamic market conditions and meta-conditions such as the volatility-of-volatility.
The question has traditionally been, “How do I effectively model this financial space?” With Palantir, we’re transforming that question into the core question asked in the finance industry, namely, “How can I better understand the interactions at work in today’s markets?” So the focus moves to the human-level questions while the software takes care of the data level machinations.
In the intelligence space, the composite views of data that the government team creates save analysts from having to painstakingly research and record correlations across multiple informational domains. Instead, the analyst can spend time divining the meaning behind the connections and correlations. Our take on perpetual analytics takes things a step further, alerting the analyst as relevant new information enters the system. And finally, we're building workflows that allow analysts to quickly attach 'handles' to data, letting what has traditionally been unstructured data get a seat at the table of computer-enhanced human analysis.
We’re speeding up the process of analysis by creating an analyst-computer symbiosis. No longer will people need to spend time doing menial data processing; the computers will do it for them, while the humans provide the spark of insight, semantics, and cognition that computers lack.
It’s conceptual analysis at the speed of thought.
This is why I’m excited to come to work every day: we’re building the software that embodies a broad vision of the future. This vision of human-computer symbiosis dates from five decades ago but is also apparent in every interaction with computers we see on the big and small screens (no, not our monitors). From Star Trek to 24, people want the computers to do the repetitive and time-consuming simple work but let them have the final say on any complex decisions. As one of our customers told us when shown our application: this is the future.