I’m honored, delighted, and humbled by all the knowledge shared in this interview. Like many of us, I first encountered James Bruce through his writing, from RAND reports to book chapters and papers. When I contacted him, I wanted to express my gratitude for his seminal work on the epistemology of intelligence, an almost esoteric (but, I believe, crucial) topic of long-standing interest to me. From that moment on, we had a deep conversation on intelligence analysis, the intelligence profession, and the conceptual understanding of intelligence. As in all the best conversations, there is room for differing opinions, which spurs further insights and deeper thought. Readers will find a great deal here to reflect on and be awed by. Dr. Bruce is exceptionally positioned to cover so many topics at a level of detail that is difficult to match and impossible to surpass. Although I try to be as grateful as I can to all who have enriched my knowledge, I can only publicly reaffirm my deep appreciation for James Bruce’s interview, knowledge, experience, and all the thought he put into this conversation. His work and thinking should be an example and an inspiration for younger scholars and, more broadly, for all who believe human knowledge is crucial to the progress of civilization and meaning. In this respect, James Bruce is truly a deep thinker. These words should be taken in the best possible way, as our readers will immediately discover on reading the interview. We covered crucial topics, from intelligence analysis and its future to the epistemology of intelligence. It is therefore with distinct pleasure that I publish this interview on Scuola Filosofica – for those who don’t know it yet, it is one of the leading cultural blogs in Italy. In the name of the Scuola Filosofica team, our readers, and myself, Giangiuseppe Pili: James, thank you!
1# Hi James Bruce, let’s start from the basics. How would you like to present yourself to our national and international readers?
Hello, Gian, and thank you for the opportunity to discuss analysis! To start with a caveat: These interview responses are my own personal views, and they do not reflect the positions of the Central Intelligence Agency, the US government, or the RAND Corporation.
I am a retired intelligence analyst with 24 years’ experience at CIA. While there, I worked on a variety of substantive issues and also some methodological ones. With a Ph.D. in hand and 10 years’ teaching experience in academe when I entered the Agency, I still had much to learn on my path to becoming a professional analyst.
My early career focus was on the Soviet Union, and I published a very controversial (then classified) paper in 1983 on civil unrest in the USSR. It described and successfully forecast growing political instability in the Soviet system due to a breakdown in the social contract between the governing Communist Party (CPSU) and the population that was growing increasingly restive with the regime’s authoritarianism and unfulfilled promises. That quantitative study of demonstrations, strikes, riots, and political violence revealed a tip-of-the-iceberg change afoot in the Soviet political culture across its 11 time zones that the KGB couldn’t curtail by force alone. The collapse of the Soviet Union on Christmas Day in 1991 was seen by some as a US intelligence failure. While that fateful day wasn’t specifically predicted, a few analysts had reported the early signs of imminent system failure and, by 1990, CIA had its demise pretty well in hand. Gorbachev’s rule was becoming increasingly precarious. Today Putin may be riding a similar tiger.
The second half of my career shifted focus to foreign denial and deception (D&D)—a major challenge to intelligence effectiveness against the hardest targets. It was here that I became broadly engaged in our technical collection capabilities and learned the importance of protecting intelligence sources and methods. And with six years prior experience in counterintelligence (CI), including two as chief of CI training, I also learned much about human source collection and its CI challenges. Spanning both Soviet issues and D&D, I also served about half my career in the National Intelligence Council conducting Intelligence Community analysis including managing National Intelligence Estimates. My time as a Deputy National Intelligence Officer for Science and Technology was a particularly rewarding assignment, as was my work on the President’s WMD Commission that examined the intelligence failure on Iraq.
After my retirement from CIA in 2005, I joined the RAND Corporation where I am still an adjunct researcher. At RAND, I led research projects for US government clients for about a dozen years, chiefly for the Intelligence Community and the Department of Defense. Now mostly retired, I still lecture on intelligence topics and write occasionally.
2# As you are a very experienced person in intelligence analysis, how would you describe the nature of intelligence? What are the main events that brought you to this kind of study?
Modern intelligence, at least in the United States since CIA’s creation in 1947, has experienced both change and continuity. But through it all, its essential nature has remained more constant than changed. These continuous features—we can call them core components—are, first, supporting senior national security policymakers with information and insights they often cannot get anywhere else; and, second, the secrecy and clandestinity that provide the preponderance of the intelligence that is vetted and refined through careful analysis for its users (i.e., its “customers”). This unique information-support role is best described as providing “decision advantage” to policymakers. Its uniqueness lies mostly in the secrecy through which it operates, both in collection and in the close access and confidential relationship intelligence enjoys with its policymaking customers. Happily, the US taxpayer provides literally billions of dollars per year to fund intelligence, and I believe the value-added that intelligence brings through the decision advantage it delivers puts it in a different category than other information-providers to policy such as journalists, academics, and think tanks. As good as they are, none of them benefits from the array of collection resources available to intelligence. In any competition for policy access and influence, intelligence analysts are truly advantaged.
As to the “main events” that brought me to intelligence, as a former (and still sometime) academic, then later a professional intelligence officer, I have a foot in both camps. I taught political science and international relations at two civilian universities for seven years before my faculty appointment at the National War College in Washington D.C. For me, NWC, a master’s-level senior service school sponsored by the Joint Staff, was truly the best kept secret in academe. The students, mostly seasoned US military officers equally represented from the services, and their civilian counterparts, were in their early 40s, and all were cleared for classified readings and discussions. Most graduated to more senior assignments after completing their work there. Some later became flag-rank officers or senior managers in the national security agencies including in the intelligence community. Teaching there for three years provided me with a tremendous growth experience.
The transition from the War College to CIA was easy. By the time I got to graduate school, I wanted to teach, research, and write. At the Agency, I got to research, write, and teach (and train too). My world was no longer just “academic”; it was confronting contemporaneous real-world problems where in-depth substantive knowledge mattered, and analysis—if acted upon—could have consequences. So, it seems I took the “scenic road” to intelligence. But once there, the scenery only got more vivid, more problematic, and in some ways, more scary. But the personal and professional rewards of the intelligence profession left me no doubt that whatever convoluted path I had taken, CIA was certainly the right place for me.
3# Now, let’s start with the fun! How would you define intelligence analysis?
Drawing from the book Analyzing Intelligence: National Security Practitioners’ Perspectives, 2nd ed., that I coedited with my former CIA colleague Roger George, we define analysis as a cognitive and empirical activity combining reasoning and evidence in order to produce judgments, forecasts, and insights intended to enhance understanding and reduce uncertainty for national security policymakers (p. 353). Tailored for intelligence, this definition treats analysis as an activity, and addresses how it is done, what it produces, for whom, and why. A shorthand expression for analysis is “judgment under conditions of uncertainty” (p. 24), from a 2011 study conducted by the National Research Council of the US National Academy of Sciences.
4# Do you think the intelligence cycle is a good model for what is called “intelligence”? And what is the most important part of the intelligence analysis process?
To the first part of the question: yes. I believe the intelligence cycle is an excellent model for intelligence. It has the advantage of closely approximating what intelligence actually does sequentially, and it succinctly illustrates the producer-consumer relationship between intelligence and its users in a requirements-production-use-and-feedback loop. It’s imperfect, of course, and no dearth of critics has pointed that out. But to date, as far as I know, no one has produced a better way of conceptualizing the intelligence process.
As to the second part, to me, the most important part of the intelligence analysis process is a laser-like focus on getting it right. I believe that starts with a careful formulation of the intelligence question to be addressed, and rigorously applying the best methodologies that can produce the most accurate answers.
When I first became an intelligence analyst in 1982, I was struck by the overwhelmingly intuitive approach most analysts took, and their reliance on the methods of journalism. Evidence gathering often seemed to me subjective and arbitrary, sometimes driven by whatever classified collection was available, which was neither a random sample nor even keyed to the most pressing policy issues. But good intelligence analysis has broader aims and responsibilities, and it demands more than just current reporting on an evolving situation. It involves policy relevance, in-depth research, careful problem formulation, systematic evidence gathering, objective analysis of relevant evidence, evaluating the impact of missing information, and thorough peer review as a vital quality assurance measure.
After some pretty serious intelligence failures over several decades (Iraq WMD and 9/11 notable among them), CIA and other agencies began an introspective process to examine sources of error and correctives for them. Central to that effort was a focus on cognitive bias and better analytical methodologies to mitigate its effects. The result was the emergence of what we call structured analytic techniques. These SATs do more than address cognitive bias. They are also designed to structure analysis so as to brainstorm hypotheses for later examination, expose hidden assumptions, qualitatively “test” alternative hypotheses, identify “drivers” or causal factors, and generate different scenarios and indicators to bound uncertain outcomes. And they do much more. They also put more emphasis on group collaboration over analysis by individuals. A minimal sketch of one such technique follows.
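To make that concrete, here is a minimal sketch, in Python, of the bookkeeping behind one widely taught SAT, Richards Heuer’s Analysis of Competing Hypotheses (ACH). The hypotheses, evidence items, and ratings below are invented for illustration, and the scoring is a simplified rendering of ACH’s emphasis on disconfirmation, not any official tool:

```python
# Minimal sketch of the bookkeeping behind Analysis of Competing
# Hypotheses (ACH). All hypotheses, evidence, and ratings are invented.

# Each piece of evidence is rated against each hypothesis:
# "C" = consistent, "I" = inconsistent, "N" = neutral / not applicable.
hypotheses = [
    "H1: weapons program continues",
    "H2: program halted",
    "H3: halt is a deception campaign",
]

evidence_ratings = {
    "E1: procurement activity observed": ["C", "I", "C"],
    "E2: key facility dismantled":       ["I", "C", "C"],
    "E3: defector reports a halt":       ["I", "C", "C"],
}

def inconsistency_scores(hyps, ratings):
    """ACH scores by disconfirmation: count the 'I' marks per hypothesis.
    The working favorite is the hypothesis with the FEWEST
    inconsistencies, not the one with the most supporting evidence."""
    scores = {h: 0 for h in hyps}
    for row in ratings.values():
        for h, mark in zip(hyps, row):
            if mark == "I":
                scores[h] += 1
    return scores

scores = inconsistency_scores(hypotheses, evidence_ratings)
for h, s in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{s} inconsistencies -> {h}")
```

The value of the matrix lies less in the arithmetic than in the discipline: every hypothesis must confront every piece of evidence, and hidden assumptions tend to surface whenever a rating proves hard to assign.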
Although outside academics—including some very good ones—have proposed countless improvements to analysis over the years, the major contributions to these significant analytic enhancements have come from innovative practitioners themselves, such as Richards Heuer, Jack Davis, and Randy Pherson. There are others, of course, but this is a short list. These seasoned analyst-pathbreakers have been driven by the attribute I believe is central to good analysis: a concentrated focus on getting it right.
5# What do you think is the core of intelligence? Many scholars proposed analysis, but the practitioners often consider the collection phase as more important. What is your take here?
After nearly ten years of hunting Osama bin Ladin without success following 9/11, good intelligence finally identified his secret hideout in Abbottabad, Pakistan, leading to his capture and death. The former CIA Director and Secretary of Defense Robert Gates described the event as “a perfect fusion of intelligence collection, intelligence analysis, and military operations.” Gates, a former analyst himself, placed analysis between collection and operations, and that is more than just a nod to the intelligence cycle; it illustrates the integral relationship among the three. Without good collection, analysis could not have succeeded. And without good analysis, there would have been no takedown operation. Analysis depends on collection, and when collection is poor, analysts typically rely on unarticulated assumptions which can often be wrong and result in failure. Analysis can also improve collection, and often does. Both are essential. But they are not standalones. Without the two working together as they did in this case, bin Ladin might still be directing al Qaeda’s terrorist operations from Pakistan.
I cite this case because it illustrates the symbiotic relationship between collection and analysis. In some instances, collection might be more important. In others, analysis would take top billing. The controversial paper on civil unrest in the Soviet Union that I mentioned in question one was based on extensive human and technical collection. Collection provided the content of what became a substantial database. Without it, no systematic analysis was possible. It was a necessary but not sufficient condition. But when the collected information was subjected to rigorous analysis—as it was in the Abbottabad case—a picture of growing Soviet political instability emerged that could not have been spotlighted in any other way. Collection has also improved in recent years through better vetting of human sources, and with the systematic inclusion of open sources. Unlike classified collection where analysts are mostly recipients, use of open sources requires that analysts be more proactive and perform that “collection” themselves.
From the perch of a retiring scholar-practitioner, I believe collection and analysis are equally important. They are two sides of the same coin. Neither one alone is the “core.” When both are effective and working in harmony together, they are the core. The takedown of bin Ladin exemplifies intelligence at its best. The best intelligence cannot happen without the synthesis of both.
6# As Wilhelm Agrell famously wrote, when everything is intelligence, nothing is intelligence. Interestingly, there are often contrasting intuitions about intelligence: from one point of view, it is believed capable of any prediction; from another, it is considered as limited as history. Is it omniscient or impotent? According to you, what can we achieve with intelligence analysis, and what can we not achieve with it?
Intelligence analysis has both tremendous potential and daunting limitations. With Soviet unrest and bin Ladin’s capture, I’ve cited two cases here that demonstrate its considerable value to policymakers beyond what was apparent in collection alone, and beyond mere description or synthesis. For example, in the case of Soviet civil unrest, the value-added of the analysis was its broad portrait of political disturbances observed in varied collection over a 12-year period (1970-82) and their implications for system stability. No isolated act of unrest presented a major challenge to the system. But the cumulative impact of several hundred of them in a highly regulated society depicted a growing breakdown in the social contract and a discernible shift in the political culture, from a predominantly acquiescent population to one more willing to contest overwhelming authority. Workers in factories and mines were striking, demonstrations grew in size and frequency, some riots drew more than 10,000 participants, and patterns of political violence included attempts to assassinate political leaders.
A comprehensive examination of highly varied unrest data from every Soviet region brought attention to the deeper implications of challenging the communist ruling authorities in ways they had not been challenged before. And it highlighted the growing inability of the regime to contain it by force. If the analysis was correct—and that was borne out in the ensuing eight years—it meant that the regime’s inability to reverse this accelerating trend of frontal assault on its authority could spell its demise. Without in-depth analysis of substantial data gathered from human and technical collection as well as open-source reporting, the broader implications would have remained hidden. This analysis revealed something new about potential Soviet political fragility. It became controversial because it challenged the prevailing paradigm of a stable dictatorship comfortably in control of a politically compliant population. Earlier studies had focused on “Kremlinology” and similar elite-conflict models, generally ignoring what was going on in the larger population.
Similarly, in the case of intelligence on bin Ladin’s whereabouts, the value of the exceptional HUMINT that focused on bin Ladin’s courier was enormous. It discovered the terrorist’s hideout in Abbottabad. With cross-cuing to analyst-driven imagery, the emerging granularity of the hidden compound’s layout could begin to support operational planning. Still, even multi-INT collection by itself was not enough. Before the President could authorize deployment of an assault team to penetrate Pakistani airspace for the takedown operation, he had to know, or at least believe with reasonable certainty, that the location was right, and that bin Ladin was (and would be) there if and when he gave the green light for the assault. Throughout the discussions about varying confidence levels of the intelligence (and of the several military options) from early March through April, President Obama’s confidence was only 50/50—reportedly not more than “the flip of a coin”—that the hidden compound in Abbottabad did indeed house the world’s most wanted terrorist. But at CIA and in the Office of the Director of National Intelligence (ODNI), confidence in the intelligence that bin Ladin lived at the Abbottabad compound was notably higher. Although a few analysts put the odds even below the president’s, confidence where it mattered most ranged from strong to rock solid. The Deputy Director of CIA and career analyst Michael Morell put it at 60 percent. Most of the senior leaders at the ODNI put it at about 80 percent. And the most veteran of the CIA analysts with years on the hunt for bin Ladin put the odds of his being there at 95 percent. These higher-percentage judgments were soon vindicated as correct.
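Those diverging numbers (50, 60, 80, and 95 percent) invite a small worked illustration. The sketch below is my own construction, not anything the Intelligence Community actually computed; it simply shows two textbook ways of pooling independent probability judgments, a straight average and an average in log-odds space, and why a group’s pooled confidence can sit well above that of its most cautious member:

```python
import math

# Probability estimates reported in accounts of the Abbottabad case.
estimates = {
    "President Obama":      0.50,
    "DDCIA Michael Morell": 0.60,
    "ODNI senior leaders":  0.80,
    "Veteran CIA analyst":  0.95,
}
probs = list(estimates.values())

# 1) Linear opinion pool: a simple average of the probabilities.
linear = sum(probs) / len(probs)

# 2) Log-odds pool: average in log-odds space, then convert back.
#    This rule gives extreme (confident) estimates relatively more pull.
def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

log_odds_pool = inv_logit(sum(logit(p) for p in probs) / len(probs))

print(f"linear pool:   {linear:.2f}")         # ~0.71
print(f"log-odds pool: {log_odds_pool:.2f}")  # ~0.77
```

Either pooling rule lands in the low-to-mid 70s in percentage terms, one way of seeing why the collective judgment read as “strong” even with a 50/50 skeptic in the room.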
Thus, collection was a required, but not sufficient, condition to launch the operation. Lacking good analysis that explained the basis for analytic confidence, along with the detailed mock-ups that enabled operational rehearsals, Obama would not likely have launched it. That confidence, born of seasoned analysts making a confident judgment call, delivered needed value-added unavailable in collection alone. The intelligence that brought this remarkable decision advantage to the White House—with the breathtaking operational support to CIA of the Joint Special Operations Command and SEAL Team Six—produced the most celebrated CIA covert action in its history. On the short-notice occasion of President Obama’s May 2nd announcement of bin Ladin’s capture and killing, spontaneously gathering crowds outside the White House chanted in unison “USA” and “CIA.” Unprecedented, to be sure.
The limitations of analysis come in several varieties: how well or poorly it is supported by collection; how well or poorly the analysis itself is performed; and whether it is used, abused, or ignored by its policymaker users. Poor collection and poor analysis are very often complicit in intelligence failures. When both are poor—as was certainly true in the Iraq WMD case—failure is all but certain. Sometimes warning intelligence is disregarded, as seems the case in the recent stunningly rapid collapse of the Afghan army and government and the speedy Taliban takeover, where worst-case scenarios appeared absent from the decision-making. But in Washington DC, an enduring expression is that there are only policy successes and intelligence failures. Shorn of its cynicism, the accurate part of that blame-shifting formula is that it excuses policymakers by pointing the finger at intelligence when policy goes awry.
The modern history of intelligence is littered with successes and failures. But failures get more publicity. In truth, successes greatly outnumber the failures. They happen almost daily. Most are unspectacular. But we rarely hear of the major ones, and that is often by design. Successful collection operations—both human and technical—are better left unpublicized if we want to enjoy more of them. If the classified sources and methods that deliver success are publicly exposed, they will not likely deliver success again.
For example, after the 9/11 surprise catastrophe in 2001, many wondered why the United States didn’t have good intelligence on bin Ladin before the attacks. Until August 1998, we had excellent intelligence on that priority terrorist target. Some of the best of it came from SIGINT monitoring of bin Ladin’s satellite telephone. But sensitive sources and methods are often fragile. When irresponsible and anonymous government officials boasted to the media, using then-classified information, about why our cruise missile strikes on al-Qaeda training camps in Afghanistan were so successful, this SIGINT reporting dropped off dramatically the day after those missile strikes and ceased completely on October 9th, 1998. The abrupt loss of that invaluable intelligence stream seriously crippled the US ability to locate, monitor, and track al-Qaeda’s top leader. (Note: the press strongly but unconvincingly disputes this.) Given the subsequent 9/11 attacks and the decade it then took to locate bin Ladin for the long-awaited takedown, we paid a high price for that classified leak.
Other limitations also degrade intelligence. Foreign denial and deception (D&D) is calculated to neutralize, impair, and mislead collection. And when D&D succeeds against collection, analysis suffers too. Collection that is conducted remotely, or “over the horizon,” can be spotty or come up dry. Weather can impair observations from skies or space. SIGINT can be beaten by encryption or couriers. Sometimes US agents abroad get caught or the operation goes bad. US intelligence performs better—often much better—in places where we have diplomatic presence, and also when we can prosper from mutually beneficial liaison relationships with other foreign intelligence services. Both are intelligence force-multipliers—or constraints when we don’t have them.
Analysis can be limited by poor collection, foreign D&D, cognitive bias, faulty assumptions, poor epistemology or analytic tradecraft (methodology), and the effects of missing information. There are so many pervasive and potential impairments to sound analysis that we should sometimes marvel when analysis performs as intended.
A key deficiency in US intelligence performance is the inability to learn lessons—namely, to learn what causes failure so we don’t have to repeat errors, and to identify best practices so we can repeat successes. Individuals are better at learning lessons than organizations. That is why the major intelligence reform act of 2004 that set up the new Director of National Intelligence (DNI) decreed that all intelligence agencies must establish lessons-learned components. Here, little progress is in evidence: even the ODNI still doesn’t have one, having instead tasked CIA’s long-established lessons-learned center to conduct such studies for the DNI as well.
On balance, during the Cold War that ended with the collapse of the Soviet Union, and following the transition to a counterterrorism emphasis after 9/11, US intelligence has been a good performer; mostly solid, though far from perfect. Additionally, most US Intelligence Community agencies are perennially committed to improving their performance. And few knowledgeable observers would deny that intelligence has one of the most challenging missions in government. Success is never assured though it happens often. And while major failure happens rarely, it is always possible.
Sometimes airliners crash, ships sink, and nuclear reactors melt down. One scholar has referred to these as “normal accidents,” and the factors that cause them also apply to intelligence. But intelligence lives in a partnership with its customer base, its policymakers. And like the appliance manufacturer that claims to make better washing machines when they have smart customers, US intelligence can also benefit from the same principle. Poorly informed customers often have exaggerated—or poor—expectations for intelligence. Part of their education about intelligence should include developing realistic expectations.
7# As I hold strong opinions about this topic, I cannot resist asking you this question: How do you see the technological trend in intelligence analysis? Will the analyst’s job be substituted by a set of smart machines (or implementations of Artificial Intelligence for the aficionados)? And, more broadly, what is your take on technology in the intelligence analysis realm?
The growing appreciation for technology in analysis is a healthy and needed trend. The role of technology should continue to expand in the coming years. Yet any analyst who feels threatened by a HAL 9000-type computer of 2001: A Space Odyssey fame or, more recently, by driverless vehicles should explore other career opportunities. Of course, like HAL’s human partner David Bowman, we should welcome any technological help we can get to assist analytic functions, including artificial intelligence. But—like David Bowman—we should also remain alert to its limitations. Properly teamed, human analysts with AI support should be more capable than analysts without it. But AI’s value-added will depend on the problem type. For “big data” problems, it could certainly make a difference. Machines can beat chess masters and drive cars. But can their judgment be trusted with consequential issues of national security? I’m with Bowman on this one: at least for the time being, limitations rule.
The analysts’ main job—producing judgments, forecasts, and insights for policymakers under conditions of uncertainty—seems safely in human hands for the foreseeable future. If answers to perplexing issues that analysts must deal with could be produced by mere data processing, a HAL-type machine would have an edge. But given the data requirements, that narrows the range of candidate problems considerably. Sometimes significant and complex problems are characterized by small data and great uncertainty. Would President Obama have authorized the CIA covert action to capture bin Ladin based on a machine-generated answer that the terrorist was probably at the Abbottabad compound, even if the probability given was a highly precise number? I think it’s a stretch to believe we should have more confidence in an AI answer to that question than one from human analysis. It’s hard for me to imagine what kind of software would have to be designed to empower a machine to produce a high-confidence answer to that kind of intelligence question.
This raises the non-trivial issue of whether machines could outperform humans on complex analytical tasks harder than chess matches, driverless cars, or simple requests made to Alexa or Siri. I recently spoke at a day-long conference on AI for intelligence and the key issue was cognitive bias. Could AI escape this human vulnerability? We know that bias happens in all forms of human cognition. Since intelligent machines are created, built, and programmed by humans, is it possible for humans to create learning machines that are free of bias? I believe the short answer is no. This is a hard problem, perhaps insoluble.
For now, at least, I think we should invest in AI to gain as much future advantage as possible from its potential technological support to analysis. But we should also be modest in our expectations, and not anticipate that an AI role will ultimately exceed a support function for intelligence analysis. For my part, I think the prospect of machines replacing human analysts anytime soon is a bridge too far.
8# Related to the last question, an important follow-up: How do you see the future of intelligence analysis?
In the modern (post-WWII) era of intelligence, we’ve seen a few bona fide revolutions. Can we foresee the next one? Developing space-based collection operations in the 1960s-‘70s involving multiple sensor types deployed in multiple orbits was a genuine revolution in collection. Cold War intelligence improved dramatically. Nothing comparable has happened in analysis. But failure is humbling—and also a teacher—and the 9/11 and Iraq WMD disasters later became an engine for analytic introspection and innovation. These consequential failures catalyzed major work on analytic modernization, which might qualify as a distant second to the technical collection revolution in space.
Cognitive bias then earned recognition for its degrading role in analytic accuracy, and new methodologies—structured analytic techniques, or SATs—were developed to mitigate its effects. These SATs (which I briefly discussed in question four) are now in common use—not exclusively, of course, but to a degree few anticipated when they began to emerge. As they continue to gain traction throughout the IC, their growing use and refinements as part of an overall trend of professionalization of analysis will be more evolutionary than revolutionary.
So, even lacking revolutionary change, the future of analysis seems promising, though not assured. I think it is reasonable to expect more improvements than setbacks. The key question is: will analysis get better, get worse, or remain static? I believe any forecast of the future of analysis in US intelligence needs to account for the following key factors: whether the trend towards more rigorous analysis (e.g., SATs) at CIA and the other principal agencies continues; changes in the analytic workforce due to morale, attrition, and hiring; the amount and content of analytic training; professional certification, incentives, and promotion criteria; and changes in senior management personnel and policies in the agencies and the ODNI that affect the direction of analysis.
Several issues of concern could wreak havoc in the future due to inattention and neglect by senior managers in the intelligence community and its agencies: I believe we have real vulnerabilities to surprise that could threaten or harm the United States because our previous capabilities to counter foreign denial and deception, and to perform the intelligence warning function, have been much downgraded in recent years. I wrote about this worrisome trend last year in Studies in Intelligence (March 2020). In both cases of countering foreign D&D and of intelligence warning, the specialized organizational units with explicit responsibilities for each have been largely disassembled. With their demise comes the loss of vital, in-depth, subject matter expertise, institutional memory, and management focus (and accountability) on priority threat issues. I believe that recovering these analytic specialties in revived dedicated organizational components is integral to high quality intelligence performance.
A third issue is cyber intelligence. The importance of cyber is growing so rapidly that it could well emerge as the next revolution in intelligence. With numerous large-scale attacks from government and criminal elements in Russia and China against US infrastructure, as well as soft targets, some hit by ransomware, we have got to up our game against foreign cyberattacks. Iran and North Korea are also improving their cyber capabilities. We require much better intelligence to support better US cyber defenses as well as offensive cyber capabilities. All three issues—foreign D&D, warning, and cyber—encompass both collection and analysis challenges. All three pose non-trivial tests for US intelligence that are getting tougher, not easier.
What does this bode for the future of analysis? Lacking current access and recent information on the main drivers of analytic change, I’m confined to a data-free answer. Here’s my best-guess forecast: assuming continuation of recent efforts to further professionalize the analyst workforce—and with the hope of restoring the counter-D&D and warning functions—the outlook could be promising, and we should expect continuing improvements in analysis. Otherwise, analysis is not likely to improve much, and analytic performance could even regress to pre-9/11 and Iraq WMD levels. On balance, I believe regression is the lower-probability scenario, and the odds favor a continuing professionalization of analysis and, hopefully, commensurate improvements in analytic performance.
9# You are the author of many papers and studies on intelligence as few other people, however, there is one book chapter that always stuck in my (philosophical) mind and was actually one that brought me to the study of intelligence: “Making Intelligence Analysis More Reliable: Why Epistemology Matters to Intelligence” in Analyzing Intelligence, edited with Roger Z. George. As this series hosted intelligence theory and epistemology several times, I thought to close with an insightful discussion with it. Why does epistemology matter to intelligence, and why, according to you, is it worthwhile applying it to intelligence?
We can have no doubt that attention to epistemology is one of the most important things we can do to improve intelligence analysis. As the branch of philosophy that examines the origins, nature, and theory of knowledge, epistemology is key to understanding how knowledge is produced in intelligence, as elsewhere. For knowledge to be reliable in intelligence, it must be produced by the most reliable methods available. (The following discussion draws from chapter 9 in Analyzing Intelligence, 2nd ed., cited in question 3.)
The principal methods for producing knowledge in intelligence can be grouped into four ways of knowing. Although the examples given below are generic, the footprints of these four ways of knowing are found in all forms of intelligence analysis:
- Authority—we can “know” something because some authority has told us it is so—and we believe it because of the nature of the authority. That could be the President, the New York Times, Fox News, Q-Anon, or a national intelligence estimate. It could also be the Bible, Koran, or Torah. Or a professor, preacher, poet, or a posting on Facebook or Twitter. Any source that is believed to be authoritative to any person can serve as a source of knowledge for that person. Not all authorities are equally reliable, of course, and authenticating the information these sources provide requires some independent fact-checking or other independent validation.
- Habit of Thought—another source of knowing is just believing something is true because we’ve always believed it to be so. Even if we don’t understand where specific long-held beliefs come from, they can include common prejudices in individuals, or conventional wisdom in groups. For example, people believed for centuries that the sun revolved around the earth. Owing to the hard work of such early scientists as Copernicus and Galileo and those who followed them, we now know the geocentric theory to be false. The opposite is true: heliocentrism, supported by overwhelming and indisputable empirical evidence, is today accepted by science as fact. Like authority, habitual thinking can be true. But it can also be false. Neither, as it turns out, is a highly reliable way of knowing, owing to the failure of each to integrate error-detection processes.
- Rationalism, or reasoning—through different forms of logic (e.g., induction, deduction, abduction, dialectics) we can “reason” our way to knowledge. Logic and structured reasoning are common ways to arrive at a conclusion, and some conclusions may be called “knowledge.” But many critics doubt that the mind alone can produce knowledge, because the mind can also be a pretty unreliable source of truth. Premises of logic can be wrong, as can the reasoning processes themselves. There may be no easy way of checking the accuracy of, or disproving the content of, any particular chain of reasoning. And sometimes the mind can just invent things with the misleading appearance of reasoning. Since knowledge requires beliefs, reason is a necessary ingredient in building knowledge. But by itself it is not sufficient, as it can produce error just as easily as it can produce truth.
- Empiricism—this way of knowing is entirely dependent on sensory perception. If we can see, touch, hear, smell, or taste something, we can acquire knowledge about it directly and not have to depend on reasoning, habit of thought, or authority, all of which could be wrong or faulty in some significant way. But so too could the senses be wrong as their intake is mediated by cognitive processes. Because cognition is inescapable in perception, it necessarily invites distortion and bias. Eyewitness crime reports are notoriously unreliable. Some people “see” Unidentified Flying Objects that other onlookers miss. Still others claim to have observed ghosts, angels, or other apparitions that no one else sees. Or heard voices that others cannot hear. Acting alone, the senses themselves can be faulty or unreliable ways of knowing, at least for producing intelligence.
What we can fully appreciate about these four distinct ways of knowing (there are others, but these are the most relevant for intelligence) is that they are fully capable of producing knowledge—but they are also equally capable of producing error. Since none of them has any internal mechanism of identifying and correcting its errors, none of them alone is fully trustworthy as a source of knowledge in intelligence. So what is?
- Science. As a fortuitous product of the Enlightenment, reason and empiricism were combined in a way to produce science as a revolutionary new method of acquiring knowledge. Science is by far the most reliable way of knowing—and thus the best epistemology for intelligence. Like the others, science too is capable of producing errors (e.g., unwarranted claims about “cold fusion” or human cloning). Unlike the others, science is the only way of knowing that has built-in self-correcting mechanisms. It is thus unique among all the ways of knowing in its power to identify and correct its own errors.
For all issues where facts matter—and that is the exclusive domain where intelligence operates—science provides an evidence-based approach to developing knowledge. This approach brings a range of error-detection and correction features as a product of the following attributes of scientific investigation:
- Use of the hypothesis, which is never assumed to be true; it must be rigorously evaluated (tested, even if only qualitatively) before any credibility can be attributed to the statement it makes.
- Objectivity, meaning that the researcher must be faithful to the facts, and not be influenced or distorted by his or her values or biases. The results of a study should not come out in a certain way because the researcher wants them to come out that way. Objective methods will prevent any outcome based mainly on desire, and permit only those based on value-free facts or evidence.
- Transparency. Methods of science must be open to public inspection, available for the scrutiny of others to ensure the results were not arrived at by some unknown or invisible means that cannot be independently examined by someone other than the researcher who produced them.
- Peer review. Scientific findings must be able to withstand rigorous review by competent peer researchers if the findings are to gain credibility on the road to becoming accepted knowledge.
- Replicability. The study should be designed so that other researchers who examine the same issue can reproduce the same or very similar results using the same or even different methods.
- Provisional results. Science is inherently skeptical. Investigators accept the results of a scientific study as tentative, subject to refinement, strengthening, or refutation by further study. Science is a cumulative process. If the research design is sound, scientific findings can change only as facts change.
These principles are neither fully nor easily transportable into intelligence analysis. But the closer intelligence analysts can adhere to or approximate these error detection and correction processes, the more likely their results will be reliable and stand the test of time. This is so because no other way of producing knowledge (i.e., authority, habit of thought, rationalism, or empiricism) is comparably equipped to identify and correct its errors in the methods that produced that knowledge. In intelligence, steps to counter cognitive bias and the emergence of SATs can be game changers. They clearly move analysis closer to a science-based approach. In short, reliability of analysis improves as a function of the self-correction techniques available exclusively in the methods of science. No other epistemology can compete with science for its reliably truth-producing properties, much less outperform it.
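As a toy illustration of the self-correction that distinguishes science from the other four ways of knowing, consider the sketch below. The scenario and numbers are invented: a “habit of thought” claim (that a reporting source is no better than chance) is treated as an explicit hypothesis and tested against the record, with a closed-form computation anyone can replicate:

```python
import math

# Toy illustration: treat a "habit of thought" claim as a hypothesis.
# H0: a reporting source is no better than chance (p = 0.5) at calling
# outcomes correctly. The counts below are invented for illustration.

n, k = 20, 15  # 20 checkable claims, 15 proved correct

def binom_pmf(n, k, p=0.5):
    """Probability of exactly k successes in n fair-coin trials."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# One-sided p-value: the chance of doing at least this well if H0 holds.
p_value = sum(binom_pmf(n, i) for i in range(k, n + 1))
print(f"P(>= {k}/{n} correct | chance) = {p_value:.4f}")  # ~0.0207

# Provisional result: at the 5% level we reject "no better than chance,"
# but the judgment remains open to revision as more claims are checked.
# The computation is closed-form, so any reviewer can replicate it exactly.
```

The result is provisional in the scientific sense: it can be strengthened or overturned as more of the source’s claims are checked, which is precisely the error-correction loop that authority, habit of thought, pure reason, and raw perception all lack.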
10# How can our readers follow you?
With periodic exceptions for LinkedIn, as I am now almost fully retired, I have negligible presence on social media. But I can be contacted through the RAND Corporation (www.rand.org) or Florida Atlantic University (https://www.fau.edu/osherjupiter//).
11# Five keywords that represent you?
Methodical, slow thinker (Kahneman), fox (Tetlock), and tortoise (Aesop)
12# Bio: James B. Bruce, Ph.D., is a former senior executive officer at CIA, an adjunct researcher at the RAND Corporation and formerly a Senior Political Scientist there, and an adjunct professor at Georgetown and Florida Atlantic Universities. He also taught as an adjunct at Columbia and American Universities and as a full-time faculty member at the National War College. His publications have appeared in Studies in Intelligence, the American Intelligence Journal, the Journal of Strategic Security, The Intelligencer, the Defense Intelligence Journal, Group Dynamics, World Politics, and in numerous RAND reports. He co-edited Analyzing Intelligence: National Security Practitioners’ Perspectives, 2nd ed. (Georgetown University Press, 2014). He is a U.S. Navy veteran, and a member of the Board of Directors of the Association of Former Intelligence Officers.
References
Bowden, Mark, The Finish: The Killing of Osama bin Laden, New York: Atlantic Monthly Press, 2012, pp. 158-208.
Bruce, James B., “Countering Foreign Denial and Deception: The Rise and Fall of an Intelligence Discipline—and its Uncertain Future,” Studies in Intelligence, Vol. 64, No. 1 (March 2020), pp. 13-30.
Bruce, James B., “Dimensions of Civil Unrest in the Soviet Union,” National Intelligence Council Memorandum, NIC M 83-10006, April 1983.
George, Roger Z., and James B. Bruce (eds.), Analyzing Intelligence: National Security Practitioners’ Perspectives, 2nd ed., Washington, DC: Georgetown University Press, 2014, chapters 1, 9, 10, and 12.
Kahneman, Daniel, Thinking, Fast and Slow, New York: Farrar, Straus, and Giroux, 2011.
National Research Council, Intelligence Analysis for Tomorrow: Advances from the Behavioral and Social Sciences, Washington, DC: National Academies Press, 2011.
Reichenbach, Hans, The Rise of Scientific Philosophy, Berkeley: University of California Press, 1968.
Tetlock, Philip E., Expert Political Judgment: How Good Is It? How Can We Know?, Princeton: Princeton University Press, 2005.