When I was working on my recent three research papers on intelligence analysis, I came across a journal article that fascinated me from the moment I read the title. It was one of those cases in which the content was exactly as good as my expectations (which are usually extremely high when it comes to peer-reviewed scientific papers). Indeed, since I started studying war theory and the philosophy of war, Clausewitz’s On War has been mandatory reading. Interestingly, Clausewitz’s standing in intelligence studies is the inverse of his standing in war studies. While he is one of the founding fathers of the modern understanding of war (and rightly so, notwithstanding many criticisms), he is almost entirely dismissed in the intelligence domain. Yes, it is true: he stated that intelligence is unreliable by nature, that the commander should avoid trusting intelligence (too much), and that uncertainty is inherently part of war and warfare… so he could hardly be called a great supporter of intelligence in general. But is this sufficient to dismiss his work? When I read An Outline of a Clausewitzian Theory of Intelligence, I finally found a partial vindication of my long-standing wish to see Clausewitz better regarded within intelligence studies and, more broadly, intelligence. Even more importantly, in an age that prizes everything that comes from the latest technological invention except the human brain, it is always healthy to remember that our world is ultimately unpredictable and dominated by an intrinsic uncertainty. Much of the effort of the last seventy years has gone into proving that everything has its proper place, as if nature and human beings were only tiny cogwheels, in spite of everything suggested by history and by ordinary life. After such a reading, I almost felt obliged to contact Dr Lillbacka to have a deeper conversation about these topics. This interview is part of that discussion which, I hope, you will find as fascinating as it is insightful.
In addition, I invite readers to discover Lillbacka’s publications, which are as rich as they are rigorous. Naturally, not everything can be covered in a single interview, but I hope you will find so much to think about regarding prediction, friction, and uncertainty that you will be enriched as much as I was. It is with distinct pleasure that I publish this interview on Scuola Filosofica; for those who don’t know it yet, it is one of the leading cultural blogs in Italy. On behalf of the Scuola Filosofica team, our readers, and myself, Giangiuseppe Pili: Ralf, thank you!
1# Dr Ralf Lillbacka, let’s start from the basics. How would you like to present yourself to international readers and to Scuola Filosofica (the Philosophical School)?
My greetings to all readers of Scuola Filosofica. I hold a PhD in Political Science and I work as a Senior Lecturer at Novia University of Applied Sciences in Finland. In that capacity, I am primarily responsible for education in research methodology and subjects in social sciences. My activities in the field of intelligence studies during the last decade are not formally associated with my institutional affiliation, although both reflect my scientific interests. Apart from also being interested in political and military history, I might add the reserve officers’ training I received during my military service as a further source of inspiration. Hence, my research interests have focused on the intersection of intelligence studies with social sciences, e.g. organizational behavior and social learning, with research methodology and epistemology, e.g. in intelligence analysis, and with military science. In a sense, these came together when I began to consider whether some Clausewitzian tenets, long established in military science, could also pertain to the field of intelligence.
2# Let’s start with a broad question. Is intelligence analysis able to predict future events?
I am afraid the answer to this question may be a bit long, but bear with me. Obviously, intelligence analysts can aspire to achieve a “good enough” prediction of future events, precise enough to avoid problems or to achieve policy aims. However, intelligence analysts encounter all of the same problems as anyone else in the prediction business, and then some. Epistemologically, the most fundamental question is whether the future is even in principle predictable, or rather, to what extent. In order to be completely predictable, the universe would have to be deterministic. If it is not, we would not be capable of predicting the future with absolute certainty even if we possessed all available information at a given time. Yet, even in a deterministic universe, such an amount of information (not to mention computational power) would be unobtainable, so we can reasonably claim that it is impossible to achieve predictions with absolute certainty.
The future is of course not entirely beyond prediction, and we can certainly do far better than mere guesswork would accomplish, but predictions will inevitably be probability statements.
Some domains allow more precision than others, and the philosopher Karl Popper famously distinguished between precise “clocks” and chaotic “clouds”. However, even the most exact clock will eventually break down or show some irregularities. Meteorologists are able to predict the weather with reasonable precision several days in advance. Astronomers studying the movement of celestial bodies operate with entirely different time scales. Economists may observe long periods of almost clock-like stability, followed by chaotic change. A common denominator of all these domains is an inescapable element of uncertainty, although its magnitude varies greatly. So, what is different about intelligence analysts compared to their peers in other areas? Intelligence analysts study situations of manifest or potential conflict, which elevates complexity and unpredictability to an even higher level, as two sentient opponents try to outdo one another.
Yet, given the situation, certain options of action will be more efficient than others, and this will limit the repertoire and likely conduct of an opponent. Barry Watts in his book “Clausewitzian Friction and Future War” referred to this as interacting “possibility spaces” of opponents. You (Giangiuseppe Pili) described this aspect in a recent article in American Intelligence Journal by comparing intelligence analysis to a game of chess, which is an excellent illustration of how a set of viable moves is limited by the situation on the chessboard. To include the impact of “fog of war” and “friction”, such as chance events, von Clausewitz made a comparison to a game of cards. However, in chess and poker, rules are unaltered, whereas in a conflict, “rules” or rather the parameters of the situation are fluid. A particular strength can be negated or even turned into a weakness. Yet, it shares a common denominator with chess and poker, insofar as there are some efficient actions, determined by the possibility space, and it is the unenviable task of the analyst to identify them.
The analyst’s work is complicated by the circumstance that every situation is to some extent unique. Precedent cases may offer some direction, but do not suffice for any form of statistical inference, and an opponent certainly wants to avoid conforming to predictable patterns. So, how can prediction become more than subjective probability, or in other words, the mere belief that an occurrence has a particular likelihood? Given the possibility space of an opponent, analysts may look for indicators. Even in a chaotic situation, we can reasonably argue that the likelihood of an event is inversely proportionate to the number of preconditions that must be fulfilled. If various scenarios are mutually exclusive, it should furthermore be possible to weigh their relative likelihood. For example, the analysis of competing hypotheses, a central method among structured analytic techniques, can also be used for assessing the relative likelihood of various scenarios.
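For readers who like a concrete illustration, the precondition logic described here can be sketched numerically. The following is a minimal, hypothetical example (the scenario names and probabilities are invented for illustration, and the preconditions are assumed to be independent): a scenario that depends on more preconditions ends up less likely, and mutually exclusive scenarios can be weighed against one another by normalizing their joint probabilities.

```python
from math import prod

def scenario_likelihood(precondition_probs):
    """Joint probability that all (assumed independent) preconditions hold."""
    return prod(precondition_probs)

# Hypothetical, illustrative numbers only: each scenario is a list of
# rough subjective probabilities, one per precondition it requires.
scenarios = {
    "limited incursion": [0.7, 0.8],           # two preconditions
    "full offensive":    [0.7, 0.8, 0.5, 0.4], # four preconditions
}

raw = {name: scenario_likelihood(p) for name, p in scenarios.items()}

# If the scenarios are treated as mutually exclusive, normalize the raw
# joint probabilities to compare their relative likelihood.
total = sum(raw.values())
relative = {name: v / total for name, v in raw.items()}

for name in scenarios:
    print(f"{name}: raw={raw[name]:.3f}, relative={relative[name]:.2f}")
```

The point of the sketch is simply that each additional precondition multiplies in another factor below one, so the scenario requiring four preconditions comes out markedly less likely than the one requiring two, even when the shared preconditions are identical.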
In principle, what I am suggesting is nothing new. It is simply a multiple scenario analysis, although expressed in somewhat different terms. The rationale for such an analysis is of course that uncertainty can be reduced, but not be completely eliminated. At some stage, a “culmination point” will be reached when it cannot be reduced further and analysis will be no better than mere guesswork. However, by then analysts have hopefully obtained a “good enough” prediction or can manage uncertainty, rather than reduce it further.
3# What is, in your opinion, the legitimate goal of intelligence analysis?
It follows from the previous question that intelligence analysis should primarily reduce uncertainty, but since this is not possible beyond a certain point, a subsequent priority should be to manage uncertainty in two mutually supporting ways. First, as suggested, to assess the likelihood of various scenarios, and second, to assess the viability of various strategies. The latter may be a bit more controversial, as it borders on making policy suggestions, which in turn entails a risk of politicization and biased analysis colored by the analysts’ political preferences. However, decision-makers will always to some extent act as their own analysts, and further mixing the roles by having analysts evaluate alternative policies may not be problematic, since the final word still rests with the decision-makers. In a sense, this is merely a continuation of analysts assessing the probability of various scenarios by scrutinizing the potential outcomes of alternative policies, and it seems less controversial to suggest that analysts might also play an important role in what is essentially contingency planning.
4# How would you describe unpredictability and uncertainty in the intelligence context and why are they so crucial?
They represent, as suggested, the limit to what intelligence can accomplish with regard to predicting future events, identifying adequate strategies, and also (although it is rarely the responsibility of analysts) the execution of strategies. Apart from unexpected actions by an opponent, a multitude of chance events may intervene, something von Clausewitz referred to as “friction”. He did of course also include other aspects, but I will focus on the element of chance. Let us consider an example from ordinary, daily life. We have all experienced working toward a deadline when something happens, making a mess of our time schedule. This manifestation of Murphy’s Law is of course not a matter of the universe conspiring against us, although it often may seem that way. Things simply happen with a certain degree of probability, some more easily foreseeable than others. Whereas, for example, a computer breakdown can be remedied, the loss of time may leave us more vulnerable to other chance events, until things come apart like a house of cards. Hence, the impact of friction often tends to be cumulative, as von Clausewitz noted.
Whereas a missed deadline is usually survivable, complex chains of effect may have far worse consequences in other contexts. Although there was a multitude of more fundamental causes behind the Arab Spring and the subsequent rise of ISIS, the trigger that sparked events in Tunisia and later throughout the Middle East was the public suicide of a single individual. The consequences were enormous, but the events could not reasonably have been predicted with any accuracy. In today’s world, chains of effects are truly chaotic, making it even more difficult to predict future events. Hence, an important role of analysis, if it is extended to identifying policy options, would be to assess the robustness of various strategies. Even a chaotic system is not entirely beyond prediction, and just as a scenario is more likely if fewer preconditions must be met, so is a strategy more robust if fewer preconditions must fall into place. In other words, we are truly moving into von Clausewitz territory, as he famously said that “the law of what is probable” should be one’s guide when choosing strategy.
5# Interestingly, Carl von Clausewitz’s On War (1832) is considered a classic in war and strategic studies. Is that also the case for intelligence studies?
The continued relevance of “On War” for military science and strategic studies even after two centuries is truly remarkable, and has withstood stern and insightful criticism. This speaks of the author’s ability to capture something truly important in the nature of human conflict. In contrast, in the area of intelligence, von Clausewitz’s treatise is controversial, since he had relatively little to say on the topic, apart from seemingly ruling out its usefulness, in one passage even equating it with uncertainty and error. Most of the literature concerning von Clausewitz’s stance on intelligence has tried to explain or reconcile this paradoxical hostility. Yet, given his outspokenly hostile stance, any notion that he could serve a similar role in intelligence seems a genuine nonstarter, at least at first sight.
6# Even though Michael Handel wrote about Clausewitz and intelligence, Clausewitz still occupies little space in the intelligence literature. Why is this the case? And why do you believe this is a misunderstanding?
To invoke von Clausewitz when discussing intelligence may undoubtedly seem questionable, since he so famously snubbed intelligence for mostly being erroneous. Hence, it would almost seem to be a faux pas by an intelligence practitioner or theoretician to invoke an authority that seemingly disqualifies it. Still, in other passages of his unfinished treatise, he clearly outlines a need for intelligence, for example in learning about the opponent.
I believe that in order to fully understand von Clausewitz’s remarks on intelligence it is necessary to consider the essence of his treatise and to read between the lines. I would argue that a common denominator of much of von Clausewitz’s thought is a clearly identifiable, though never explicitly stated, general principle of simplicity. By simplicity I refer to robustness of strategy, in the sense of being dependent on as few preconditions as possible, thus reducing vulnerability to friction (chance events) and uncertainty. It is apparent in his discussion of principles of war, where he favors robust principles such as massing in time and space to overcome friction in its various forms, but advises against principles that are dependent on preconditions, for example relying on ruses. It is most apparent in his notion of a center of gravity, the main source of an opponent’s strength – or we might perhaps re-conceptualize it as the source of an opponent’s possibility space. Ideally only one such center should be identified, and it should ideally be attacked through one single action, preferably by an overwhelming force to overcome friction.
Now, with this notion of simplicity and robustness in mind, von Clausewitz’s remark could be considered a warning against founding strategy on fallible information. This can be overcome by identifying strategies that are as robust as possible. Yet, he recognized that certain knowledge is essential in planning, but emphasized weighing the extent to which it can be relied on, or as he said, adhering to the “law of what is probable”. In a sense, this is implied by what was previously suggested as the legitimate goals of intelligence analysis: to reduce and subsequently manage uncertainty by assessing the likelihood of various scenarios, and to assess the robustness of various courses of action. It could be argued that what von Clausewitz is really saying is the maxim “Thou shalt not assume”. During an era before modern analysis, and indeed before modern collection, there were few possibilities to follow this to its logical conclusion. Today, the expansion of the battlefield from a few square miles observable from a suitable Feldherrnhügel into a much wider battlespace would render command and control impossible without modern communication and also intelligence. The same certainly goes for civil intelligence. So, technology and the emergence of modern forms of collection and analysis are certainly vital for explaining this discrepancy, but an important part can arguably be found in von Clausewitz’s affinity for simplicity and robustness.
7# In a recent insightful paper published in the prestigious International Journal of Intelligence and CounterIntelligence, you argued that the intelligence process can be biased by a given “organizational mindset.” What is it and why?
This ties in with the problem of prediction, of bias in assessing the likelihood of various scenarios, or even in recognizing them in the first place. The notion that biased “mindsets” are drivers of intelligence errors is by no means new. Of course, most things are obvious in hindsight, and the simplest explanation for intelligence errors is that predicting the future is a truly difficult thing to do. But let us start with the notion of a “mindset”. A basic feature of the human mind is its limited capacity for handling vast amounts of information. Cognitive psychology has described how such limitations are overcome by various mental processes simplifying information into what can loosely be described as “mental models”. In a group, where actions need to be coordinated, it is of course ideal if such mental models are shared, since the group will become more effective. Hence, we can think of a mindset as a set of pre-established assumptions that do not have to be questioned every time analysts (and others) try to solve a problem with limited time and information at hand, as it would obviously be counterproductive to re-invent the wheel again and again.
Mindsets, and on an even more fundamental level, mental preparedness for even noticing and perceiving something as relevant in the first place, are first and foremost created spontaneously. Sometimes, trying to prevent my students from falling asleep during lectures on organizational behavior, I use Thomas Schelling’s classic experiment to demonstrate how certain things gain cognitive “salience” and guide group behavior without our recognizing it. I ask them to act as mind-readers, trying to predict what one of their peers would answer when asked to select, for example, an arbitrary flower, a year, or a car, simply by guessing and not communicating in any way. After they have compared notes and found extraordinary similarities, the rough frequencies for each answer, which I have written on a board and concealed before the lecture, are revealed. Usually, they are spot on. This speaks far less of my modest abilities as a mentalist than of the impact of such processes, taking place just beyond our consciousness. For the most part, these processes are benign, facilitating the coordination of behavior in a group or within an organization. Obviously, any organization must reach a minimum of efficiency, and shared mental models aid in that respect.
However, it is a common human fallacy to think of things as either inherently good or bad, and there is a downside to this mental streamlining. While examining a series of intelligence errors and successes, I found that the mere existence of mindsets was not decisive for failure. Rather, the culprit was a particular mindset shaped by a preceding period in which certain strategies and ways of conceiving the situation had proven successful or at least satisficing. Such mindsets were found in contexts as diverse as peace, periods of low-level conflict, and successful offensive operations. The common denominator was a seemingly optimal way of perceiving the situation becoming ever more ingrained in organizational conduct. Eventually, this mindset could become entangled with various organizational pathologies such as vested interests favored by previous success, groupthink, and personal prestige. This made organizations less able to recognize changes in the environment, for example, the likely actions of an opponent, and rendered them less open to the possibility of “unknown unknowns”. Lacking incentives to change basic assumptions and question old truths, they were less able to predict enemy action. Hence, even if individual intelligence officers sounded the alarm, organizations (or decision-makers) were unresponsive and dismissed their warnings as unfounded, since they challenged what was considered commonsensical. Assessments of what was likely were based on an erroneous mindset that at one time might have served its purpose, but had become obsolete. Paradoxically, experiences of past success can in other words make one less capable of meeting and predicting the future.
This has implications for the current debate over whether uncertainty can be eliminated through new technology. I am not saying that it would be impossible to reduce uncertainty to a point where information dominance is almost complete, but I would claim that it is a dangerous assumption that “friction” can be completely eliminated, including nasty surprises by a creative and flexible opponent. Murphy’s Law will eventually find a way, and an opponent will eventually identify a critical vulnerability in even the most sophisticated system and find a way to strike at this center of gravity, be it through exploiting zero-day vulnerabilities, hacking humans, or manipulating the environment of a system. The more sophisticated a system, the less likely it is to be robust throughout. The danger is not found in striving for perfection, but in believing that one has attained it.
Everyone is familiar with the proverb “pride comes before a fall”. According to this perspective, pride is a symptom of an even greater cardinal sin; that of taking too much for granted.
8# I know it is a provocative question, but I believe it is a very important one. Is perfect predictability possible and… desirable?
Obviously, when approaching the topic from this angle, focus is inevitably placed on the problems associated with unpredictability. However, it is not an entirely bad thing that perfect predictability is unobtainable. Still within the Clausewitzian perspective, it can be said that unpredictability opens the door for human agency, to overcome a bad situation and to find new solutions. If we look beyond the realm of human conflict and consider human existence at large, unpredictability also suggests that the book is not yet written. Whereas the debate continues over whether there actually is such a thing as “free will” or whether it is an illusion, “free will” would by definition be impossible in a completely predictable, and thus deterministic, world. Whereas we may not be complete masters of our fate, the human predicament may be – to use a Finnish analogy – more like that of paddling a white-water raft than that of being a helpless piece of driftwood.
9# How can our readers follow you?
I am not particularly active on social media, but I can be followed on LinkedIn.
10# Five keywords that represent you?
Introvert, preplanning, critical, conscientious, unorthodox.