Analogical Reasoning

...An analogy is a comparison between two objects, or systems of objects, that highlights respects in which they are thought to be similar. Analogical reasoning is any type of thinking that relies upon an analogy. An analogical argument is an explicit representation of a form of analogical reasoning that cites accepted similarities between two systems to support the conclusion that some further similarity exists...

In general (but not always), such arguments belong in the category of ampliative reasoning, since their conclusions do not follow with certainty but are only supported with varying degrees of strength. However, the proper characterization of analogical arguments is subject to debate.

Analogical reasoning is fundamental to human thought and, arguably, to some nonhuman animals as well. Historically, analogical reasoning has played an important, but sometimes mysterious, role in a wide range of problem-solving contexts. The explicit use of analogical arguments, since antiquity, has been a distinctive feature of scientific, philosophical and legal reasoning.

https://plato.stanford.edu/entries/reasoning-analogy/



Analogy is a cognitive process of transferring information or meaning from a particular subject (the analog, or source) to another (the target), or a linguistic expression corresponding to such a process. In a narrower sense, analogy is an inference or an argument from one particular to another particular, as opposed to deduction, induction, and abduction, in which at least one of the premises, or the conclusion, is general rather than particular in nature. The term analogy can also refer to the relation between the source and the target themselves, which is often (though not always) a similarity, as in the biological notion of analogy.


  • Analogy plays a significant role in problem solving, as well as decision making, argumentation, perception, generalization, memory, creativity, invention, prediction, emotion, explanation, conceptualization and communication. 
  • It lies behind basic tasks such as the identification of places, objects and people, for example, in face perception and facial recognition systems. 


It has been argued that analogy is "the core of cognition". 


  • Specific analogical language comprises exemplification, comparisons, metaphors, similes, allegories, and parables, but not metonymy. 
  • Phrases like "and so on", "and the like", "as if", and the very word "like" also rely on an analogical understanding by the receiver of a message that includes them. 


Analogy is important not only in ordinary language and common sense (where proverbs and idioms give many examples of its application) but also in science, philosophy, law and the humanities. 


  • The concepts of association, comparison, correspondence, mathematical and morphological homology, homomorphism, iconicity, isomorphism, metaphor, resemblance, and similarity are closely related to analogy. 
  • In cognitive linguistics, the notion of conceptual metaphor may be equivalent to that of analogy. 
  • Analogy is also a basis for any comparative argument, as well as for experiments whose results are transferred to objects that have not been under examination (e.g., experiments on rats whose results are applied to humans).


Analogy has been studied and discussed since classical antiquity by philosophers, scientists, theologians and lawyers. The last few decades have shown a renewed interest in analogy, most notably in cognitive science.


Structure Mapping Theory


Structure mapping, originally proposed by Dedre Gentner, is a theory in psychology that describes the psychological processes involved in reasoning through and learning from analogies. More specifically, this theory aims to describe how familiar knowledge, or knowledge about a base domain, can be used to inform an individual’s understanding of a less familiar idea, or a target domain. According to this theory, individuals view their knowledge of domains as interconnected structures. In other words, a domain is viewed as consisting of objects, the objects’ properties, and the relationships that characterize how objects and their properties interact. The process of analogy then involves recognizing similar structures between the two domains, inferring further similarity in structure by mapping additional relationships of the base domain to the target domain, and then checking those inferences against existing knowledge of the target domain. In general, it has been found that people prefer analogies in which the two systems correspond deeply (e.g., relationships across the domains correspond, as opposed to just the objects) when attempting to draw inferences between the systems. This is also known as the systematicity principle.


An example that has been used to illustrate structure mapping theory comes from Gentner and Gentner (1983) and uses the domains of flowing water and electricity. In a system of flowing water, the water is carried through pipes and the rate of water flow is determined by the pressure of the system. This relationship is analogous to that of electricity flowing through an electrical circuit. In a circuit, the electricity is carried through wires and the current, or rate of flow of electricity, is determined by the voltage, or electrical pressure. Given the similarity in structure, or structural alignment, between these domains, structure mapping theory would predict that relationships from one of these domains would be inferred in the other via analogy.
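
To make this concrete in code, here is a minimal sketch, assuming a simple predicate-argument tuple encoding (this is not Gentner's actual formalism, and the predicate and object names are invented for illustration): each domain is a list of relational assertions, and relations sharing a predicate name are aligned, yielding the object correspondences the theory describes.

```python
# Toy relational encodings of the two domains. The predicate and
# object names are illustrative assumptions, not Gentner's notation.
water_domain = [
    ("carried_through", "water", "pipes"),
    ("rate_increases_with", "water_flow", "pressure"),
]
circuit_domain = [
    ("carried_through", "electricity", "wires"),
    ("rate_increases_with", "current", "voltage"),
]

def align(base, target):
    """Pair relations that share a predicate name and read off the
    implied correspondences between their arguments."""
    mapping = {}
    for pred, a1, a2 in base:
        for pred_t, b1, b2 in target:
            if pred == pred_t:
                mapping[a1] = b1
                mapping[a2] = b2
    return mapping

print(align(water_domain, circuit_domain))
# {'water': 'electricity', 'pipes': 'wires',
#  'water_flow': 'current', 'pressure': 'voltage'}
```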


Structural Alignment


Structural alignment is one process involved in the larger structure mapping theory. When establishing structural alignment between two domains that are being compared, an individual is attempting to identify as many commonalities between the systems as possible while maintaining a one-to-one correspondence between elements (i.e., objects, properties, and relationships). In the flowing water and electricity analogy, a one-to-one correspondence is illustrated by water pipes mapping onto wires but not corresponding to any other elements in the circuit. Furthermore, structural alignment is also characterized by parallel connectivity: the idea that if a one-to-one correspondence is generated between relationships across two systems (e.g., the rate of water flow through a pipe increases with pressure, similarly to how the current in an electrical circuit increases with voltage), then the relevant objects and properties must also correspond (e.g., the rate of flow of water corresponds to electrical current, and water pressure corresponds to voltage).
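
A sketch of the one-to-one constraint, under the same assumed tuple encoding as above: a candidate set of correspondences is rejected if any element on either side would have to map to two different partners.

```python
def is_one_to_one(pairs):
    """Check that no element on either side is mapped twice.
    `pairs` is a list of (base_element, target_element) tuples."""
    forward, backward = {}, {}
    for base, target in pairs:
        if forward.setdefault(base, target) != target:
            return False   # base element would map to two targets
        if backward.setdefault(target, base) != base:
            return False   # target element claimed by two base elements
    return True

# Pipes map onto wires and nothing else: acceptable.
print(is_one_to_one([("pipes", "wires"), ("water", "electricity")]))  # True
# Pipes mapping onto both wires and resistors violates the constraint.
print(is_one_to_one([("pipes", "wires"), ("pipes", "resistors")]))    # False
```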


Analogical Inference


Analogical inference is a second process involved in the theory of structure mapping and happens after structural alignment has been established between two domains being compared. During this process an individual draws inferences about the target domain by projecting information from the base domain onto it. The following example illustrates this process, where 1 represents information about a base domain, 2 represents correspondences between the base and target domains, and 3 represents an inference about the target domain (a short code sketch of this projection follows the list):


  1. In plumbing systems, narrow pipes lead to a decrease in the rate of flow of water.
  2. Narrow pipes correspond to resistors in an electrical circuit, and water corresponds to electricity.
  3. In an electrical circuit, resistors lead to a decrease in the rate of flow of electricity.
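
Here is a brief sketch of the projection step under the same assumed tuple encoding (not a published algorithm): base-domain facts are rewritten through the correspondence map to produce candidate facts about the target.

```python
# Correspondences established during structural alignment (step 2).
correspondence = {"narrow_pipes": "resistors", "water": "electricity"}

# Known fact about the base domain (step 1).
base_facts = [("decrease_rate_of_flow_of", "narrow_pipes", "water")]

def project(facts, mapping):
    """Rewrite each base fact through the correspondence map,
    leaving unmapped terms unchanged."""
    return [(pred, mapping.get(a, a), mapping.get(b, b))
            for pred, a, b in facts]

# Candidate inference about the target domain (step 3), still to be
# checked against actual knowledge of circuits during evaluation.
print(project(base_facts, correspondence))
# [('decrease_rate_of_flow_of', 'resistors', 'electricity')]
```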


Evaluation


Evaluation is a third process involved in the theory of structure mapping and happens after structures have been aligned and inferences about the target domain have been proposed. During evaluation, an individual judges whether the analogy is relevant and plausible. This process has been described as solving the selection problem in analogy: explaining how individuals choose which inferences to map from the base to the target domain, since analogies would be fruitless if all possible inferences were made. When evaluating an analogy, individuals typically judge it on several factors (sketched in code after the list):


  • Factual Correctness. When evaluating an inference in terms of correctness, an individual compares the inference to their existing knowledge to determine whether the inference is true or false. In cases where correctness cannot be determined, one may instead consider the adaptability of the inference, or how easily the knowledge can be modified when translating it from the base to the target domain.
  • Goal Relevance. When evaluating an analogy, it is important that the inferences provide insight that is relevant to the situation at hand. For example, when attempting to solve a problem, does the inference provide insight that moves one towards a solution or generate new, potentially helpful knowledge?
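
Continuing the same toy encoding, the sketch below shows how these two checks might filter candidate inferences; the knowledge set and the relevance test are stand-ins invented for illustration.

```python
# Target-domain knowledge and the current goal (illustrative stand-ins).
known_false = set()         # candidate facts known to be wrong about circuits
goal_terms = {"resistors"}  # what the problem at hand concerns

def evaluate(candidates):
    """Keep inferences that are not known to be false (factual
    correctness) and that mention a goal-relevant term (goal relevance)."""
    kept = []
    for fact in candidates:
        if fact in known_false:
            continue  # fails factual correctness
        if not goal_terms.intersection(fact):
            continue  # fails goal relevance
        kept.append(fact)
    return kept

candidates = [("decrease_rate_of_flow_of", "resistors", "electricity")]
print(evaluate(candidates))
# [('decrease_rate_of_flow_of', 'resistors', 'electricity')]
```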

 


http://en.wikipedia.org/wiki/Analogy





Inductive Inferences - When an argument claims merely that the truth of its premises makes it likely or probable that its conclusion is also true, it is said to involve an inductive inference. The standard of correctness for inductive reasoning is much more flexible than that for deduction. An inductive argument succeeds whenever its premises provide some legitimate evidence or support for the truth of its conclusion. Inductive arguments, then, may meet their standard to a greater or to a lesser degree, depending upon the amount of support they supply. No inductive argument is either absolutely perfect or entirely useless, although one may be said to be relatively better or worse than another in the sense that it recommends its conclusion with a higher or lower degree of probability. In such cases, relevant additional information often affects the reliability of an inductive argument by providing other evidence that changes our estimation of the likelihood of the conclusion.



The simplest variety of inductive reasoning is argument by analogy, which takes note of the fact that two or more things are similar in some respects and concludes that they are probably also similar in some further respect. Not every analogy is an argument; we frequently use such comparisons simply to explain or illustrate what we mean. But arguments by analogy are common, too. 


Suppose, for example, that I am thinking about buying a new car. I'm very likely to speak with other people who have recently bought new cars, noting their experiences with various makes, models, and dealers. If I discover that three of my friends have recently bought Geo Prizms from Burg and that all three have been delighted with their purchases, then I will conclude by analogy that if I buy a Geo Prizm from Burg, I will be delighted, too. 


Evaluating Analogies


Of course, this argument is not deductively valid; it is always possible that my new car may turn out to be an exception. But there are several considerations that clearly matter in determining the relative strength or weakness of my inductive inference (a rough scoring sketch follows the list): 


  1. Number of Instances. If five friends instead of three report their satisfaction with the model I intend to buy, that tends to make it even more likely that I will be satisfied, too. In general, more instances strengthen an analogy; fewer weaken it. 
  2. Instance Variety. If my three friends bought their Prizms from three different dealers but were all delighted, then my conclusion is somewhat more likely to be true, no matter where I decide to buy mine. In general, the more variety there is among the instances, the stronger the analogical argument becomes. 
  3. Number of Similarities. If my new purchase is not only the same make and model from the same dealer but also has the same engine, then my conclusion is more likely to be true. In general, the more similarities there are between the instances and my conclusion, the better for the analogical argument. 
  4. Relevance. Of course, the criteria we're considering apply only if the matters with which they are concerned are relevant to the argument. Ordinarily, for example, we would assume that the day of the week on which a car was purchased is irrelevant to a buyer's satisfaction with it. But relevance is not something about which we can be terribly precise; it is always possible in principle to tell a story in the context of which anything may turn out to be relevant. So we just have to use our best judgment in deciding whether or not some respect deserves to be considered. 
  5. Number of Dissimilarities. If my friends all bought Geos with automatic transmissions and I plan to buy a Geo with a standard transmission, then the conclusion that I will be delighted with my purchase is a little less likely to be true. In general, the fewer dissimilarities between instances and conclusion, the better an analogical argument is. 
  6. Modesty of Conclusion. If all three of my friends were delighted with their auto purchases but I conclude only that I will be satisfied with mine, then this relatively modest conclusion is more likely to be true. In general, arguments by analogy are improved when their conclusions are modest with respect to their premises. 
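
These six criteria can be read as a rough scoring heuristic. The sketch below applies them to the car example; the equal weights and the linear rule are invented purely for illustration, and criterion 4 is respected by counting only similarities and dissimilarities already judged relevant.

```python
def analogy_strength(num_instances, instance_variety, num_similarities,
                     num_dissimilarities, modest_conclusion):
    """Crude linear score over the six criteria. Weights are invented
    for illustration; criterion 4 (relevance) is handled by counting
    only relevant similarities and dissimilarities."""
    score = (num_instances            # 1. more instances strengthen
             + instance_variety       # 2. more variety strengthens
             + num_similarities       # 3. more similarities help
             - num_dissimilarities)   # 5. dissimilarities weaken
    if modest_conclusion:             # 6. modest conclusions are safer
        score += 1
    return score

# Three friends, three different dealers, same make/model/dealer,
# one dissimilarity (transmission), a modest "satisfied" conclusion.
print(analogy_strength(num_instances=3, instance_variety=3,
                       num_similarities=3, num_dissimilarities=1,
                       modest_conclusion=True))  # 9
```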


http://www.philosophypages.com/lg/e13.htm






Criteria for Evaluating Analogical Arguments

 

Logicians and philosophers of science have identified ‘textbook-style’ general guidelines for evaluating analogical arguments (Mill 1843/1930; Keynes 1921; Robinson 1930; Stebbing 1933; Copi and Cohen 2005; Moore and Parker 1998; Woods, Irvine, and Walton 2004). Here are some of the most important ones:


  • (G1) The more similarities (between two domains), the stronger the analogy.
  • (G2) The more differences, the weaker the analogy.
  • (G3) The greater the extent of our ignorance about the two domains, the weaker the analogy.
  • (G4) The weaker the conclusion, the more plausible the analogy.
  • (G5) Analogies involving causal relations are more plausible than those not involving causal relations.
  • (G6) Structural analogies are stronger than those based on superficial similarities.
  • (G7) The relevance of the similarities and differences to the conclusion (i.e., to the hypothetical analogy) must be taken into account.
  • (G8) Multiple analogies supporting the same conclusion make the argument stronger.


https://plato.stanford.edu/entries/reasoning-analogy/





How Strategists Really Think:
Tapping the Power of Analogy


by Giovanni Gavetti and Jan W. Rivkin

From the Magazine (April 2005)


Strategy is about choice. The heart of a company’s strategy is what it chooses to do and not do. The quality of the thinking that goes into such choices is a key driver of the quality and success of a company’s strategy. Most of the time, leaders are so immersed in the specifics of strategy—the ideas, the numbers, the plans—that they don’t step back and examine how they think about strategic choices. But executives can gain a great deal from understanding their own reasoning processes.


In particular, reasoning by analogy plays a role in strategic decision making that is large but largely overlooked. Faced with an unfamiliar problem or opportunity, senior managers often think back to some similar situation they have seen or heard about, draw lessons from it, and apply those lessons to the current situation. Yet managers rarely realize that they’re reasoning by analogy. As a result, they are unable to make use of insights that psychologists, cognitive scientists, and political scientists have generated about the power and the pitfalls of analogy. Managers who pay attention to their own analogical thinking will make better strategic decisions and fewer mistakes.


When Analogies Are Powerful


We’ve explained the notion of analogical reasoning to executives responsible for strategy in a variety of industries, and virtually every one of them, after reflecting, could point to times when he or she relied heavily on analogies. A few well-known examples reflect how common analogical reasoning is:


  • Throughout the mid-1990s, Intel had resisted providing cheap microprocessors for inexpensive PCs. During a 1997 training seminar, however, Intel’s top management team learned a lesson about the steel industry from Harvard Business School professor Clayton Christensen: In the 1970s, upstart minimills established themselves in the steel business by making cheap concrete-reinforcing bars known as rebar. Established players like U.S. Steel ceded the low end of the business to them, but deeply regretted that decision when the minimills crept into higher-end products. Intel’s CEO at the time, Andy Grove, seized on the steel analogy, referring to cheap PCs as “digital rebar.” The lesson was clear, Grove argued: “If we lose the low end today, we could lose the high end tomorrow.” Intel soon began to promote its low-end Celeron processor more aggressively to makers and buyers of inexpensive PCs.
  • Starting in the 1970s, Circuit City thrived by selling consumer electronics in superstores. A wide selection, professional sales help, and a policy of not haggling with customers distinguished the stores. In 1993, Circuit City surprised investors by announcing that it would open CarMax, a chain of used-car outlets. The company argued that the used-car industry of the 1990s bore a close resemblance to the electronics retailing environment of the 1970s. Mom-and-pop dealers with questionable reputations dominated the industry, leaving consumers nervous when they purchased and financed complex, big-ticket, durable goods. Circuit City’s managers felt that its success formula from electronics retailing would work well in an apparently analogous setting.
  • The supermarket, a retail format pioneered during the 1930s, has served as an analogical source many times over. Charlie Merrill relied heavily on his experience as a supermarket executive as he developed the financial supermarket of Merrill Lynch. Likewise, Charles Lazarus was inspired by the supermarket when he founded Toys R Us in the 1950s. Thomas Stemberg, the founder of Staples and a former supermarket executive, reports in his autobiography that Staples began with an analogical question: “Could we be the Toys R Us of office supplies?”


Each of these instances displays the core elements of analogical reasoning: a novel problem that has to be solved or a new opportunity that begs to be tapped; a specific prior setting that managers deem to be similar in its essentials; and a solution that managers can transfer from its original setting to the unfamiliar context. When managers face a problem, sense “Ah, I’ve seen this one before,” and reach back to an earlier experience for a solution, they are using analogy.


Strategy makers use analogical reasoning more often than they know. Commonly, credit for a strategic decision goes to one of two other approaches: deduction and the process of trial and error. When managers use deduction, they apply general administrative and economic principles to a specific business situation, weigh alternatives, and make a rational choice. They choose the alternative that, according to their analysis, would lead to the best outcome. Trial and error, on the other hand, involves learning after the fact rather than thinking in advance.


Both deduction and trial and error play important roles in strategy, but each is effective only in specific circumstances. Deduction typically requires a lot of data and is therefore at its most powerful only in information-rich settings—for instance, mature and stable industries. Even where information is available, processing a great deal of raw data is very challenging, particularly if there are many intertwined choices that span functional and product boundaries. The mental demands of deduction can easily outstrip the bounds on human reasoning that psychologists have identified in numerous experiments. For this reason, deduction works best for modular problems that can be broken down and tackled piece by piece.


Trial and error is a relatively effective way to make strategic decisions in settings so ambiguous, novel, or complex that any cognitively intensive effort is doomed to fail. In altogether new situations, such as launching a radically new product, there may be no good substitute for trying something out and learning from experience.


Many, perhaps most, strategic problems are neither so novel and complex that they require trial and error nor so familiar and modular that they permit deduction. Much of the time, managers have only enough cues to see a resemblance to a past experience. They can see how an industry they’re thinking about entering looks like one they already understand, for example. It is in this large middle ground that analogical reasoning has its greatest power.


Analogical reasoning makes enormously efficient use of the information and the mental processing power that strategy makers have. When reasoning by analogy, managers need not understand every aspect of the problem at hand. Rather, they pay attention to select features of it and use them to apply the patterns of the past to the problems of the present. Imagine, for instance, the challenge facing Charles Lazarus in the fast-changing, complex toy industry of the 1950s. Had he sat down and analyzed all of the interdependent configurations of choices in toy retailing—from marketing to operations, from human resource management to logistics—it is unlikely he would have come up with a strategy as coherent and effective as the one Toys R Us adopted. The analogy he drew to supermarkets was extraordinarily efficient from an informational and cognitive point of view. In one stroke, it gave Lazarus an integrated bundle of choices: exhaustive selection, relatively low prices, rapid replenishment of stock, deep investment in information technology, self-service, shopping carts, and so forth.


Analogical reasoning can also be a source of remarkable insight. Analogies lie at the root of some of the most compelling and creative thinking in business as a whole, not just in discussions of strategy. For instance, Taiichi Ohno, the foremost pioneer of Toyota’s famed production system, supposedly invented the kanban system for replenishing inventory after he watched shelf-stocking procedures at U.S. supermarkets, and he devised the andon cord to halt a faulty production line after seeing how bus passengers signaled a driver to stop by pulling a cord that rang a bell.


Reasoning by analogy is prevalent among strategy makers because of a series of close matches: between the amount of information available in many strategic situations and the amount required to draw analogies; between the wealth of managerial experience and the need for that experience in analogical reasoning; and between the need for creative strategies and analogy’s ability to spark creativity. Reflecting these matches, business schools typically teach strategy by means of case studies, which provide an abundance of analogies from which the students can draw. (See the sidebar “Strategic Decision Making and the Case Method.”) Similarly, some of the foremost strategy consultants are famed for their ability to draw lessons from one industry and apply them to another. Thus we have ample reason to believe that analogical reasoning is a key implement in the toolbox of the typical real-world strategist.


Strategic Decision Making and the Case Method


The case method in business education has often been criticized, most recently by Henry Mintzberg, because it ...


How Analogies Fail


Though analogical reasoning is a powerful and prevalent tool, it is extremely easy to reason poorly through analogies, and strategists rarely consider how to use them well. Indeed, analogies’ very potency requires that they be used wisely. To understand the potential pitfalls, consider for a moment the anatomy of analogy. Cognitive scientists paint a simple picture of analogical reasoning. An individual starts with a situation to be handled—the target problem (for Intel, the competition from makers of low-end microprocessors). The person then considers other settings that she knows well from direct or vicarious experience and, through a process of similarity mapping, identifies a setting that, she believes, displays similar characteristics. This setting is the source problem (the steel industry). From the source emerges a candidate solution that was or should have been adopted for the source problem (a vigorous defense of the low end). The candidate solution is then applied to the target problem.




In a variant of this picture, the solution seeking a problem, an individual starts with a source problem and a candidate solution, then uses similarity mapping to find a target problem where the solution would work well. Circuit City’s managers, for instance, had an effective solution in consumer electronics retailing. They then found a new setting, used-car retailing, to which they believed their solution could be applied with success.


Dangers arise when strategists draw an analogy on the basis of superficial similarity, not deep causal traits. Take Ford, for instance. In overhauling its supply chain, the automaker looked carefully at Dell’s key strategic principle of “virtual integration” with its suppliers as a possible source for an analogy. On the surface, computer and auto production resemble one another. Both involve the assembly of a vast variety of models from a set of fairly standardized components. It is easy, however, to pinpoint differences between the two industries. In the PC business, for example, prices of inputs decline by as much as 1% per week—much, much faster than in the auto industry. To the extent that rapidly falling input prices play a role in Dell’s success formula, overlooking this underlying difference could seriously undermine the usefulness of the analogy. Fortunately, Ford executives thought carefully about the differences between the auto industry and the PC business, as well as the difficulty of changing their existing supply chain, as they used the analogy.


The experience of Enron shows how a seductive but bad analogy can lead to flawed decisions. Many factors contributed to Enron’s startling collapse, but headlong diversification based on loose analogies played an important role. After apparently achieving success in trading natural gas and electric power, Enron executives moved rapidly to enter or create markets for other goods ranging from coal, steel, and pulp and paper to weather derivatives and broadband telecom capacity. In a classic example of a solution seeking problems, executives looked for markets with certain characteristics reminiscent of the features of the gas and electricity markets. The characteristics included fragmented demand, rapid change due to deregulation or technological progress, complex and capital-intensive distribution systems, lengthy sales cycles, opaque pricing, and mismatches between long-term supply contracts and short-term fluctuations in customer demand. In such markets, managers were confident that Enron’s market-creation and trading skills would allow the company to make hefty profits.


On the broadband opportunity, for instance, Enron Chairman Kenneth Lay told Gas Daily, “[Broadband]’s going to start off as a very inefficient market. It’s going to settle down to a business model that looks very much like our business model on [gas and electricity] wholesale, which obviously has been very profitable with rapid growth.” But Enron’s executives failed to appreciate important, deeper differences between the markets for natural gas and bandwidth. The broadband market was based on unproven technology and was dominated by telecom companies that resented Enron’s encroachment. The underlying good—bandwidth—did not lend itself to the kinds of standard contracts that made efficient trading possible in gas and electricity. Perhaps worst, in broadband trading, Enron had to deliver capacity the “last mile” to a customer’s site—an expensive challenge that gas wholesalers didn’t face.


The danger of focusing on superficial similarity is very real, for two reasons. First, distinguishing between a target problem’s deep, structural features and its superficial characteristics is difficult, especially when the problem is new and largely unknown. In the earliest days of the Internet portal industry, for instance, it was far from clear what structure would emerge in the business. Players in the market adopted analogies that reflected idiosyncrasies of the management teams rather than deep traits of the evolving industry. The tech-savvy founders of Lycos, for instance, saw themselves competing on a high-tech battlefield and assumed that the company with the best search technology would win. Magellan’s founders, the twin daughters of publishing magnate Robert Maxwell, aimed to build “the Michelin guide to the Web” and developed editorial abilities. The pioneers of Yahoo, seeing the portal industry as a media business, invested in the company’s brand and the look and feel of its sites.


But this is only part of the picture. Not only is it difficult to distinguish deep similarities from surface resemblances in some contexts, but people typically make little effort to draw such distinctions. In laboratory experiments conducted by psychologists, subjects—even well-educated subjects—are readily seduced by similarities they should know to be superficial. In a study by psychologist Thomas Gilovich, students of international conflict at Stanford were told of a hypothetical foreign-policy crisis: A small, democratic nation was being threatened by an aggressive, totalitarian neighbor. Each student was asked to play the role of a State Department official and recommend a course of action. The descriptions of the situation were manipulated slightly. Some of the students heard versions with cues that were intended to make them think of events that preceded World War II. The president at the time, they were told, was “from New York, the same state as Franklin Roosevelt,” refugees were fleeing in boxcars, and the briefing was held in Winston Churchill Hall. Other students heard versions that might have reminded them of Vietnam. The president was “from Texas, the same state as Lyndon Johnson,” refugees were escaping in small boats, and the briefing took place in Dean Rusk Hall. Clearly, there is little reason that the president’s home state, the refugees’ vehicles, or the name of a briefing room should influence a recommendation on foreign policy. Yet subjects in the first group were significantly more likely to apply the lessons of World War II—that aggression must be met with force—than were participants in the second group, who veered toward a hands-off policy inspired by Vietnam. Not only were the students swayed by superficial likenesses, they were not even aware that they had been swayed.


The implications are unsettling. Thanks to his or her particular history and education, each manager carries around an idiosyncratic tool kit of possible sources of analogies. In choosing among tools or identifying new problems for old tools, the manager may be guided by something other than a careful look at the similarity between the source and the target.


The tendency to rely on surface similarity is made even worse by two other common flaws in how people reach judgments:


Anchoring.


Once an analogy or other idea anchors itself in a management team, it is notoriously hard to dislodge. Psychologists have shown that this is true even when decision makers obviously have no reason to believe the initial idea. In a demonstration of this effect, Nobel Prize winner Daniel Kahneman and his coauthor Amos Tversky told experimental subjects they would be asked to estimate the percentage of African countries in the membership of the United Nations. A roulette wheel with numbers from zero to 100 was spun, and after it had stopped, the subjects were asked whether the actual percentage was greater or less than the number showing on the wheel. They were then asked to estimate the correct percentage. Surprisingly, the roulette wheel had a strong impact on final estimates. For instance, subjects who saw 10% on the wheel estimated the real percentage at 25%, on average, while those who saw 65% gave an average estimate of 45%. The roulette wheel knew nothing about the composition of the United Nations, obviously, yet it had a powerful influence on people’s judgment. (The current answer: African nations make up 24% of the U.N.’s membership.)


The anchoring effect suggests that early analogies in a company, even if they have taken root casually, can have a lasting influence. This is especially true if decision makers become emotionally attached to their analogies. For years, Sun Microsystems has focused on delivering entire systems of hardware and software even as the computer industry has grown less and less integrated. CEO Scott McNealy often justifies his contrarian position by highlighting an analogy to the automotive industry. “You guys are all focusing on piston rings,” he once told reporters. “Go and ask Ford about its strategy in piston rings. And carburetors. You don’t. You talk about the whole car.” Though Sun has suffered financially, McNealy has been reluctant to shift strategy, and, indeed, he continues to use the auto analogy. Perhaps that is inevitable for an individual whose father worked in the auto industry and whose sons are named after vehicle models—Maverick, Scout, Colt, and Dakota.


Confirmation Bias.


The anchoring effect is reinforced by another problem: decision makers’ tendency to seek out information that confirms their beliefs and to ignore contradictory data. To some degree, this tendency arises simply because managers like to be right—and like to be seen as right. But there is evidence from psychology that people are better equipped to confirm beliefs than to challenge them, even when they have no vested interest in the beliefs.


Consider an illustration. Experimental subjects in Israel were asked during the 1970s, “Which pair of countries is more similar, West Germany and East Germany, or Sri Lanka and Nepal?” Most people answered, “West Germany and East Germany.” A second set of subjects was asked, “Which pair of countries is more different, West Germany and East Germany, or Sri Lanka and Nepal?” Again, most people answered, “West Germany and East Germany.” How can we reconcile the two sets of results? The accepted interpretation starts with the fact that the typical Israeli knew more about the Germanys than about Sri Lanka and Nepal. When asked to test a hypothesis of similarity, subjects sought evidence of similarity and found more between the Germanys than between Sri Lanka and Nepal. When asked to test a hypothesis of difference, they sought differences and found more of them between the Germanys. Subjects search for the attribute they are prompted to seek—similarity or difference—and do not look for evidence of the contrary attribute.


Together, anchoring and the confirmation bias suggest real problems for strategists who rely on analogies. Having adopted an analogy, perhaps a superficial one, strategy makers will seek out evidence that it is legitimate, not evidence that it is invalid. Intel’s managers will tend to look for reasons that microprocessors really are like steel; Circuit City will try to confirm that consumer electronics and used cars truly are alike. Given the variety of information available in most business situations, anyone who looks for confirming data will doubtless find something that supports his or her beliefs. Thanks to the anchoring effect, any contradictory information may well be disregarded. As a result, a company may continue to act on a superficial analogy for a long time.




How to Avoid Superficial Analogies


Reasoning by analogy, then, poses a dilemma for senior managers. On the one hand, it is a powerful tool, well suited to the challenges of making strategy in novel, complex settings. It can spark breakthrough thinking and fuel successes like those of Toys R Us and Intel. On the other, it raises the specter of superficiality. Can managers tap the power of analogy but sidestep its pitfalls? The bad news is that it is impossible to make analogies 100% safe. Managers are especially likely to rely on analogical reasoning in unfamiliar, ambiguous environments where other forms of thinking, like deduction, break down. In those settings, it’s hard to distinguish the deep traits from the superficial. The good news is that four straightforward steps can improve a management team’s odds of using analogies skillfully. (See the exhibit “Avoiding Superficial Analogies.”)


Avoiding Superficial Analogies


Before laying out these steps, we must acknowledge our debt to political scientists, especially Harvard’s Ernest May and Richard Neustadt, who found that analogical reasoning often leads policy makers astray. The approaches they developed to train such people to make better use of history have informed our thinking.


Recognize the analogy and identify its purpose.


To defend against flawed analogies, a management team first must recognize the analogies it is using. Sometimes they are obvious. It is hard to forget that “digital rebar” is a reference to the steel industry, for instance. In other cases, influential analogies remain hidden. They often come from executives’ backgrounds. Though Merrill Lynch’s distinctive approach to retail brokerage owed much to the years that Charlie Merrill spent in the supermarket business, only occasionally did Merrill confess that “although I am supposed to be an investment banker, I think I am really and truly a grocery man at heart.”


It’s also important to identify how a company is using any analogies it recognizes. Managers use analogies for a variety of purposes, after all—to brainstorm, to communicate complexity, and to motivate employees, for example. (For thoughts on the uses of analogies, see the sidebar “A Versatile Tool.”) Often, analogies are used to spark ideas and emotions. In such cases, creativity and impact may be more important than strict validity. But when a company moves from brainstorming to deciding, and when resources are at stake, managers need to ask tough, objective questions about whether the analogy is more than superficial. To answer these questions well, strategists must analyze chains of cause and effect. It is useful to break this task into three further steps.


A Versatile Tool


This article focuses on the use of analogy as a tool for choosing among possible solutions to strategic problems, ...


Understand the source.


Begin by examining why the strategy worked in the industry from which the analogy was drawn. The classic tools of strategy analysis are extremely useful here. Indeed, the key is to lay out in-depth analyses that are familiar to strategists, particularly analyses of the source environment, the solution or strategy that worked well (or that failed) in the original context, and the link between the source environment and the winning (or losing) strategy.


Consider Circuit City’s effort to apply its retailing solution to the used-car business, and start by analyzing the source environment. When the company began its rise to prominence in the 1970s, the consumer electronics industry was dominated by mom-and-pop retailers of varying quality and efficiency. Burgeoning demand kept the retailers afloat, despite three negatives: Consumers were more committed to the national brands than to the retailers, the cost to switch from one retailer to another was low, and customers often feared that retailers were preying on their ignorance of high-tech products. The environment was marked by untapped efficiencies (for example, few economies of scale were exploited) and unmet customer needs (each store carried a limited selection of brands, and products were often out of stock).


Circuit City devised a highly effective strategy that took advantage of the opportunities and neutralized the threats in this setting. Key to the strategy was a series of fixed investments: large stores that could stock an exhaustive selection of consumer electronics, information technology that could track sales patterns closely, automated distribution centers that were tied to the sales-tracking technology, and brand-building efforts. The company differentiated itself from competitors on the basis of selection, availability, and consumer trust. It simultaneously drove down costs. Circuit City’s low prices and its other strengths led to extraordinarily large sales volumes, which reduced unit costs. Those cost reductions permitted lower prices, which drove even greater volume, and so on in a virtuous cycle.


Note how well this strategy matched the demands of the external environment. By meeting consumer needs and by building a brand that shoppers valued, Circuit City made it less attractive for customers to switch from store to store. As Circuit City’s brand rose to prominence, as sales volume grew, and as customers came to rely on the recommendations of Circuit City’s salespeople, the company became far more powerful in negotiations with suppliers. Investments in branding, distribution, information technology, and large stores raised new barriers to entry. And scale-driven cost advantages gave the company a powerful way to overcome smaller rivals.


The preceding three paragraphs lay out a chain of cause and effect that explains why Circuit City’s original strategy worked in the consumer electronics environment. The strategist’s goal is to figure out whether the causal logic holds up in the target environment. In preparing to make that analysis, the strategy maker will find it useful to compile two lists of industry features: those that play a crucial role in the causal logic and those that don’t. In the Circuit City example, the list of crucial elements includes the following features of the pre–Circuit City electronics retailing industry:


  • unsatisfied customer needs, especially for product selection, product availability, and trustworthy retailers;
  • untapped economies of scale and latent, but largely unrealized, barriers to entry;
  • a fragmented base of rivals, many of them weak;
  • unexploited opportunities to apply information and distribution technologies for better inventory management;
  • branded, powerful, reliable suppliers;
  • modest switching costs among consumers; and
  • an absence of goods that are close substitutes at the high end of the market.


At least one notable feature of the industry appears not to have played a major role in the causal logic, according to our analysis. Demand for consumer electronics was growing rapidly when Circuit City became a success, but the industry growth rate does not loom large in the causal story. The sheer size of the industry plays a role—without a critical mass of demand, economies of scale cannot be tapped—but the growth rate does not seem critical.


Assess similarity.


The strategist now maps similarities between the source and the target and determines whether the resemblance is more than superficial. The understanding of the source that he or she has built up is crucial in this step. Rather than wrestling with the entire target problem, which is much less familiar than the source, the strategist can focus on the key features of the causal logic. The question is whether the source and the target are similar or different along these features.


Background of the Work


Field research sparked our interest in analogical reasoning. While exploring the origins of strategies in the Internet portal ...


Similarities usually spring to mind quickly. But the team must also search actively for differences, seeking evidence that each essential feature of the source problem is absent in the target. This process rarely comes naturally—it is often thwarted by the confirmation bias. The team should also do something else that doesn’t come naturally: ask whether the similarities are largely superficial. The list of industry features that are not crucial in the causal logic is very useful in this step. If many of the similarities are on this list rather than the list of crucial correspondences, the management team should sound an alarm. The analogy may be based on superficial similarity.


Circuit City’s entry into the used-car market illustrates the process of assessing similarity. In many ways, the target industry in the 1990s resembled the consumer electronics retailing industry of the 1970s:


  • Many customers were unsatisfied with, and distrustful of, current retailers.
  • Economies of scale and barriers to entry were limited.
  • The industry was fragmented.
  • Information and distribution technologies remained fairly primitive, even though the inventory was highly diverse.
  • Consumers incurred few costs if they switched from one retailer to another.

Note that all of these similarities match crucial elements of the causal logic in electronics retailing. This bodes well for the analogy. On the other hand, there were important differences:

  • In consumer electronics, Circuit City could rely on a large base of dependable, reputable suppliers. In contrast, most used-car dealers bought their autos from individual sellers or from wholesalers, some reliable and some not.
  • The inventory of used cars was even more diverse than that of consumer electronics. It would be difficult to keep a predictable range of products in stock. This might make it hard for CarMax to detect sales trends quickly and adjust its inventory to meet demand. Moreover, the distribution expertise Circuit City had developed might not be useful in the used-car industry.
  • It was not clear whether economies of scale existed or barriers to entry could be built in auto retailing.
  • The used-car retailing market had an important substitute at the high end of the market: new-car dealers.


Translate, decide, and adapt.


The final step is to decide whether the original strategy, properly translated, will work in the target industry. This step requires, first, that the management team say clearly what the strategy would look like in the new setting. Precisely what would it take to be the Circuit City of the used-car industry or the supermarket of toys? This requires some adjustment. Even the best analogies involve some differences between the source and the target settings. By now, executives have a sense of the most important differences, and, in translating the strategy, they try to make adjustments that deal with them. After the translation comes a go-no-go decision on whether to pursue the analogy in the marketplace. This involves a clearheaded assessment of whether the translated strategy is likely to fare well in the new context. If executives opt to pursue the analogy, they face another round of adjustment—adapting in the marketplace in response to feedback from customers, rivals, suppliers, and others. It is here, in the market, that managers truly learn how good their analogies are.


Circuit City’s translated strategy bore a close resemblance to the company’s electronics retailing operation. On lots of up to 14 acres, each CarMax superstore offered an unusually broad inventory of 200 to 550 vehicles. CarMax went to special lengths to foster customers’ trust. It sold cars at fixed, posted prices, with no haggling. It hired salespeople with retailing experience, but not auto retailing experience, and gave them extensive training. CarMax compensated salespeople with a flat fee per vehicle sold rather than a fraction of the revenue they generated. The company also put in place a sophisticated inventory tracking system that mirrored the electronics retailing system, and it offered money-back guarantees and warranties that resembled those in Circuit City stores.


At the same time, CarMax adjusted the Circuit City formula to reflect the differences between the two settings. This required, for instance, that the company find reliable sources of used cars. Toward this end, CarMax placed well-trained buyers in each of its stores and offered to buy used cars directly from consumers, even those who did not intend to buy a vehicle from CarMax. The company started to sell new cars at some sites, in part to generate used cars from trade-ins. By 2002, individual consumers were CarMax’s single largest source of used cars. Regardless of source, all CarMax used cars were thoroughly inspected and reconditioned before they were resold.


The diverse inventory of used cars presented a new challenge. No single used-car lot could show the full array of vehicles in CarMax’s inventory. So CarMax developed a computer system that allowed consumers to peruse the company’s full inventory. The system told customers what was available nationwide and what it would cost to transfer a desired car to the customer’s locale.


CarMax was neither an immediate nor an unmixed success. It took Circuit City most of a decade to tailor its formula to the used-car market. The company built some stores that were too large and adopted an overly ambitious rollout plan, and price wars in the new-car market and expansion by other used-car superstores occasionally hurt its stock price. Nonetheless, the effort to reproduce Circuit City’s success in the used-car industry has produced a viable company with revenue of $4.6 billion in fiscal year 2004, a return on sales of 2% to 3%, a multibillion-dollar market capitalization, and equity whose returns have roughly matched the S&P 500’s since the IPO in 1997. This positive outcome reflects the close resemblance between the electronics retailing industry and the used-car industry, especially in features pertinent to the causal logic of the original success. It also reflects the company’s careful attention to the essential differences between the industries—or at least the company’s ability to adapt to those differences.


A critical question in this final step is how much a company should translate the candidate solution, on the basis of forethought alone, before launching it in the marketplace. In studying the transfer of best practices within companies, say from one bank branch to another, Insead’s Gabriel Szulanski and Wharton’s Sidney Winter have found that managers overestimate how well they understand cause and effect relationships and, accordingly, adjust too much on the basis of forethought. This lesson applies to analogies, too. It makes sense to adjust a candidate solution beforehand to account for glaring differences between the target and the source. But in novel, uncertain environments, where strategists rely the most on analogies, it is often wise to hold off on fine-tuning the solution until the market can give its guidance.


Toward Better Strategic Choices


Analogies lie on a spectrum. At one end lie perfect analogies, where the source and target are truly alike on the dimensions that drive economic performance. The toy retailing industry of the 1950s deeply resembled the grocery business, much to the benefit of Toys R Us, and the demands on Toyota’s kanban system closely mirrored those related to supermarket reshelving. At the opposite end of the spectrum are profoundly problematic analogies, such as Enron’s comparison of broadband and natural gas trading, that are based on superficial similarities yet plagued by underlying differences. The vast majority of analogies fall somewhere in between—they’re imperfect but useful. The challenge is to get the most out of them. In our experience, the best users of analogy harness deduction and trial and error to test and improve the analogies that lie in the middle of the spectrum. Intel’s analogy involving the steel industry, for instance, was supported by a deductive theory of cause and effect—Clayton Christensen’s ideas about disruptive technologies. It also drew strength from trial-and-error experiments that gradually refined Intel’s approach to the low end of the microprocessor market, much as Circuit City’s adjustments served to fine-tune CarMax’s strategy. Managers who wish to tap the great power of analogy and sidestep its pitfalls must master multiple modes of thought.


A version of this article appeared in the April 2005 issue of Harvard Business Review.


https://hbr.org/2005/04/how-strategists-really-think-tapping-the-power-of-analogy






Analogical Reasoning


John F. Sowa and Arun K. Majumdar

VivoMind LLC

http://www.jfsowa.com/pubs/analog.htm


Abstract.  Logical and analogical reasoning are sometimes viewed as mutually exclusive alternatives, but formal logic is actually a highly constrained and stylized method of using analogies. Before any subject can be formalized to the stage where logic can be applied to it, analogies must be used to derive an abstract representation from a mass of irrelevant detail. After the formalization is complete, every logical step — of deduction, induction, or abduction — involves the application of some version of analogy. This paper analyzes the relationships between logical and analogical reasoning and describes a highly efficient analogy engine that uses conceptual graphs as the knowledge representation. The same operations used to process analogies can be combined with Peirce's rules of inference to support an inference engine. Those operations, called the canonical formation rules for conceptual graphs, are widely used in CG systems for language understanding and scene recognition as well as analogy finding and theorem proving. The same algorithms used to optimize analogy finding can be used to speed up all the methods of reasoning based on the canonical formation rules.

This paper was published in the proceedings of the International Conference on Conceptual Structures in Dresden, Germany, in July 2003:  A. de Moor, W. Lex, & B. Ganter, eds. (2003) Conceptual Structures for Knowledge Creation and Communication, LNAI 2746, Springer-Verlag, pp. 16-36.

1. Analogy and Perception

Before discussing the use of analogy in reasoning, it is important to analyze the concept of analogy and its relationship to other cognitive processes. General-purpose dictionaries are usually a good starting point for conceptual analysis, but they seldom go into sufficient depth to resolve subtle distinctions. A typical dictionary lists synonyms for the word analogy, such as similarity, resemblance, and correspondence. Then it adds more specialized word senses, such as a similarity in some respects of things that are otherwise dissimilar, a comparison that determines the degree of similarity, or an inference based on resemblance or correspondence. In AI, analogy-finding programs have been written since the 1960s, but they often use definitions of analogy that are specialized to a particular application.

The VivoMind Analogy Engine (VAE), which is described in Section 3, is general enough to be used in any application domain. Therefore, VAE leads to fundamental questions about the nature of analogy that have been debated in the literature of cognitive science. One three-party debate has addressed many of those issues:

  1. Thesis:  For the Structure Mapping Engine (SME), Falkenhainer, Forbus, and Gentner (1989) defined analogy as the recognition that "one thing is like another" if there is a mapping from a conceptual structure that describes the first one to a conceptual structure that describes the second. Their implementation in SME has been applied to a wide variety of practical applications and to psychological studies that compare the SME approach to the way people address the same problems.

  2. Antithesis:  In their critique of SME, Chalmers, French, and Hofstadter (1992) consider analogy to be an aspect of a more general cognitive function called high-level perception (HLP), by which an organism constructs a conceptual representation of a situation. They "argue that perceptual processes cannot be separated from other cognitive processes even in principle, and therefore that traditional artificial-intelligence models cannot be defended by supposing the existence of a 'representation module' that supplies representations ready-made." They criticize the "hand-coded rigid representations" of SME and insist that "content-dependent, easily adaptable representations" must be "an essential part of any accurate model of cognition."

  3. Synthesis:  In summarizing the debate, Morrison and Dietrich (1995) observed that the two positions represent different perspectives on related, but different aspects of cognition:  SME employs structure mapping as "a general mechanism for all kinds of possible comparison domains" while "HLP views analogy as a process from the bottom up; as a representation-building process based on low-level perceptual processes interacting with high-level concepts." In their response to the critics, Forbus et al. (1998) admitted that a greater integration with perceptual mechanisms is desirable, but they repeated their claim that psychological evidence is "overwhelmingly" in favor of structure mapping "as a model of human analogical processing."

The VAE approach supports Point #3:  a comprehensive theory of cognition must integrate the structure-building processes of perception with the structure-mapping processes of analogy. For finding analogies, VAE uses a high-speed implementation of structure mapping, but its algorithms are based on low-level operations that are also used to build the structures. In the first implementation, the conceptual structures were built during the process of parsing and interpreting natural language. More recently, the same low-level operations have been used to build conceptual structures from sensory data and from percept-like patterns used in scene recognition. VAE demonstrates that perception, language understanding, and structure mapping can be based on the same kinds of operations.

This paper discusses the interrelationships between logical and analogical reasoning, analyzes the underlying cognitive processes in terms of Peirce's semiotics and his classification of reasoning, and shows how those processes are supported by VAE. The same graph operations that support analogical reasoning can also be used to support formal reasoning. Far from being mutually exclusive alternatives, the two are intimately related:  logical reasoning is a more cultivated variety of analogical reasoning. For many purposes, especially in language understanding, the analogical processes provide greater flexibility than the more constrained and less adaptable variety used in logic. But since logical and analogical reasoning share a common basis, they can be used effectively in combination.

2. Logical and Analogical Reasoning

In developing formal logic, Aristotle took Greek mathematics as his model. Like his predecessors Socrates and Plato, Aristotle was impressed with the rigor and precision of geometrical proofs. His goal was to formalize and generalize those proof procedures and apply them to philosophy, science, and all other branches of knowledge. Yet not all subjects are equally amenable to formalization. Greek mathematics achieved its greatest successes in astronomy, where Ptolemy's calculations remained the standard of precision for centuries. But other subjects, such as medicine and law, depend more on deep experience than on brilliant mathematical calculations. Significantly, two of the most penetrating criticisms of logic were written by the physician Sextus Empiricus in the second century AD and by the legal scholar Ibn Taymiyya in the fourteenth century.

Sextus Empiricus, as his nickname suggests, was an empiricist. By profession, he was a physician; philosophically, he was an adherent of the school known as the Skeptics. Sextus maintained that all knowledge must come from experience. As an example, he cited the following syllogism:

Every human is an animal. 
Socrates is human. 
Therefore, Socrates is an animal.

Sextus admitted that this syllogism represents a valid inference pattern, but he questioned the source of evidence for the major premise Every human is an animal. A universal proposition that purports to cover every instance of some category must be derived by induction from particulars. If the induction is incomplete, then the universal proposition is not certain, and there might be some human who is not an animal. But if the induction is complete, then the particular instance Socrates must have been examined already, and the syllogism is redundant or circular. Since every one of Aristotle's valid forms of syllogisms contains at least one universal affirmative or universal negative premise, the same criticisms apply to all of them:  the conclusion must be either uncertain or circular.

The Aristotelians answered Sextus by claiming that universal propositions may be true by definition: since the type Human is defined as rational animal, the essence of human includes animal; therefore, no instance of human that was not an animal could exist. This line of defense was attacked by the Islamic jurist and legal scholar Taqi al-Din Ibn Taymiyya. Like Sextus, Ibn Taymiyya agreed that the form of a syllogism is valid, but he did not accept Aristotle's distinction between essence and accident (Hallaq 1993). According to Aristotle, the essence of human includes both rational and animal. Other attributes, such as laughing or being a featherless biped, might be unique to humans, but they are accidental attributes that could be different without changing the essence. Ibn Taymiyya, however, maintained that the distinction between essence and accident was arbitrary. Human might just as well be defined as laughing animal, with rational as an accidental attribute.

Denouncing logic would be pointless if no other method of reasoning were possible. But Ibn Taymiyya had an alternative: the legal practice of reasoning by cases and analogy. In Islamic law, a new case is assimilated to one or more previous cases that serve as precedents. The mechanism of assimilation is analogy, but the analogy must be guided by a cause that is common to the new case as well as the earlier cases. If the same cause is present in all the cases, then the earlier judgment can be transferred to the new case. As an example, it is written in the Koran that grape wine is prohibited, but nothing is said about date wine. The judgment for date wine would be derived in four steps:

  1. Given case:  Grape wine is prohibited.

  2. New case:  Is date wine prohibited?

  3. Cause:  Grape wine is prohibited because it is intoxicating; date wine is also intoxicating.

  4. Judgment:  Date wine is also prohibited.

In practice, the reasoning may be more complex. Several previous cases may have a common cause but different judgments. Then the analysis must determine whether there are mitigating circumstances that affect the operation of the cause. But the principles remain the same: analogy guided by rules of evidence and relevance determines the common cause, the effect of the mitigating circumstances, and the judgment.

Besides arguing in favor of analogy, Ibn Taymiyya also replied to the logicians who claimed that syllogistic reasoning is certain, but analogy is merely probable. He admitted that logical deduction is certain when applied to purely mental constructions in mathematics. But in any reasoning about the real world, universal propositions can only be derived by induction, and induction must be guided by the same principles of evidence and relevance used in analogy. Figure 1 illustrates Ibn Taymiyya's argument:  Deduction proceeds from a theory containing universal propositions. But those propositions must have earlier been derived by induction using the methods of analogy. The only difference is that induction produces a theory as intermediate result, which is then used in a subsequent process of deduction. By using analogy directly, legal reasoning dispenses with the intermediate theory and goes straight from cases to conclusion. If the theory and the analogy are based on the same evidence, they must lead to the same conclusions.

Figure 1:  Comparison of logical and analogical reasoning

The question in Figure 1 represents some known aspects of a new case, which has unknown aspects to be determined. In deduction, the known aspects are compared (by a version of structure mapping called unification) with the premises of some implication. Then the unknown aspects, which answer the question, are derived from the conclusion of the implication. In analogy, the known aspects of the new case are compared with the corresponding aspects of the older cases. The case that gives the best match may be assumed as the best source of evidence for estimating the unknown aspects of the new case. The other cases show alternative possibilities for those unknown aspects; the closer the agreement among the alternatives, the stronger the evidence for the conclusion.

Both Sextus Empiricus and Ibn Taymiyya admitted that logical reasoning is valid, but they doubted the source of evidence for universal propositions about the real world. What they overlooked was the pragmatic value of a good theory:  a small group of scientists can derive a theory by induction, and anyone else can apply it without redoing the exhaustive analysis of cases. The two-step process of induction followed by deduction has proved to be most successful in the physical sciences, which include physics, chemistry, molecular biology, and the engineering practices they support. The one-step process of case-based reasoning, however, is more successful in fields outside the so-called "hard" sciences, such as business, law, medicine, and psychology. Even in the "soft" sciences, which are rife with exceptions, a theory that is successful most of the time can still be useful. Many cases in law or medicine can be settled by the direct application of some general principle, and only the exceptions require an appeal to a long history of cases. And even in physics, the hardest of the hard sciences, the theories may be well established, but the question of which theory to apply to a given problem usually requires an application of analogy. In both science and daily life, there is no sharp dichotomy between subjects amenable to strict logic and those that require analogical reasoning.

The informal arguments illustrated in Figure 1 are supported by an analysis of the algorithms used for logical reasoning. Following is Peirce's classification of the three kinds of logical reasoning and the way that the structure-mapping operations of analogy are used in each of them:

  • Deduction.  A typical rule used in deduction is modus ponens:  given an assertion p and an axiom of the form p implies q, deduce the conclusion q. In most applications, the assertion p is not identical to the p in the axiom, and structure mapping is necessary to unify the two ps before the rule can be applied. The most time-consuming task is not the application of a single rule, but the repeated use of analogies for finding patterns that may lead to successful rule applications.

  • Induction.  When every instance of p is followed by an instance of q, induction is performed by assuming that p implies q. Since the ps and qs are rarely identical in every occurrence, a form of analogy called generalization is used to derive the most general implication that subsumes all the instances.

  • Abduction.  The operation of guessing or forming an initial hypothesis is what Peirce called abduction. Given an assertion q and an axiom of the form p implies q, the guess that p is a likely cause or explanation for q is an act of abduction. The operation of guessing p uses the least constrained version of analogy, in which some parts of the matching graphs may be more generalized while other parts are more specialized.

As this discussion indicates, analogy is a prerequisite for logical reasoning, which is a highly disciplined method of using repeated analogies. In both human reasoning and computer implementations, the same underlying operations can be used to support both.
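
To make the common basis concrete, the following Python sketch treats all three kinds of reasoning as one matching routine run under different constraints. It is a toy illustration, not code from VAE or SME:  the type hierarchy, the function names, and the matching policies are all invented for the example.

    # A toy illustration (not VAE): one matching routine, three constraint
    # modes corresponding to deduction, induction, and abduction.
    # Types are plain strings; 'hierarchy' maps each type to its supertype.
    hierarchy = {'Cat': 'Animal', 'Dog': 'Animal', 'Animal': 'Entity'}

    def subsumes(general, specific):
        """True if 'general' equals 'specific' or is one of its supertypes."""
        while specific is not None:
            if specific == general:
                return True
            specific = hierarchy.get(specific)
        return False

    def labels_match(a, b, mode):
        if mode == 'unify':          # deduction: labels must be identical
            return a == b
        if mode == 'generalize':     # induction: one label may subsume the other
            return subsumes(a, b) or subsumes(b, a)
        if mode == 'analogy':        # abduction: any pairing is a candidate,
            return True              # left to be scored by weight of evidence
        raise ValueError(mode)

    print(labels_match('Cat', 'Cat', 'unify'))          # True
    print(labels_match('Animal', 'Cat', 'generalize'))  # True
    print(labels_match('Cat', 'Car', 'analogy'))        # True, but weak evidence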

3. Analogy Engine

The VivoMind Analogy Engine (VAE), which was developed by Majumdar, is a high-performance analogy finder that uses conceptual graphs for the knowledge representation. Like SME, VAE uses structure mapping to find analogies. Unlike SME, the VAE algorithms can find analogies in time proportional to (N log N), where N is the number of nodes in the current knowledge base or context. SME, however, requires time proportional to N³ (Forbus et al. 1995). A later version called MAC/FAC reduced the time by using a search engine to extract the most likely data before using SME to find analogies (Forbus et al. 2002). With its greater speed, VAE can find analogies in the entire WordNet knowledge base in just a few seconds, even though WordNet contains over 10⁵ nodes. For that size, one second with an (N log N) algorithm would correspond to 30 years with an N³ algorithm.
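
A back-of-envelope calculation supports the quoted comparison. Assuming N = 10⁵, a base-2 logarithm, and equal constant factors (all assumptions made only for the estimate):

    import math

    N = 10 ** 5
    ratio = N ** 3 / (N * math.log2(N))   # N³ steps versus N log N steps
    print(ratio / (3600 * 24 * 365))      # roughly 19 years per second

The exact figure depends on the log base and the constants, but the result is on the order of decades, consistent with the 30 years cited above.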

VAE can process CGs from any source:  natural languages, programming languages, and any kind of information that can be represented in graphs, such as organic molecules or electric-power grids. In an application to distributed interacting agents, VAE processes both English messages and signals from sensors that monitor the environment. To determine an agent's actions, VAE searches for analogies to what humans did in response to similar patterns of messages and signals. To find analogies, VAE uses three methods of comparison, which can be used separately or in combination (a schematic of the combination appears after the list):

  1. Matching type labels.  Method #1 compares nodes that have identical labels, labels that are related as subtype and supertype such as Cat and Animal, or labels that have a common supertype such as Cat and Dog.

  2. Matching subgraphs.  Method #2 compares subgraphs with possibly different labels. This match succeeds when two graphs are isomorphic (independent of the labels) or when they can be made isomorphic by combining adjacent nodes.

  3. Matching transformations.  If the first two methods fail, Method #3 searches for transformations that can relate subgraphs of one graph to subgraphs of the other.
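
The following skeleton shows schematically how the three methods might be combined. The function names are illustrative and the bodies are stubs; the only structure taken from the text is that Method #3 serves as the fallback when the first two methods fail.

    # Illustrative skeleton, not the VAE code: three methods tried in turn.
    def match_type_labels(g1, g2):        # Method 1: identical, subtype, or
        return None                       #   sibling type labels (stub)

    def match_subgraphs(g1, g2):          # Method 2: isomorphic subgraphs,
        return None                       #   independent of labels (stub)

    def match_transformations(g1, g2):    # Method 3: transformations relating
        return None                       #   subgraphs of one to the other (stub)

    def find_analogy(g1, g2):
        return (match_type_labels(g1, g2)
                or match_subgraphs(g1, g2)
                or match_transformations(g1, g2))   # only if Methods 1 and 2 fail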

These three methods of matching graphs were inspired by Peirce's categories of Firstness, Secondness, and Thirdness (Sowa 2000). The first compares two nodes by what they contain in themselves independent of any other nodes; the second compares nodes by their relationships to other nodes; and the third compares the mediating transformations that may be necessary to make the graphs comparable. To illustrate the first two methods, the following table shows an analogy found by VAE when comparing the background knowledge in WordNet for the concept types Cat and Car:

Analogy of Cat to Car

    Cat         Car
    --------    ------------------
    head        hood
    eye         headlight
    cornea      glass plate
    mouth       fuel cap
    stomach     fuel tank
    bowel       combustion chamber
    anus        exhaust pipe
    skeleton    chassis
    heart       engine
    paw         wheel
    fur         paint

Figure 2:  An analogy discovered by VAE

As Figure 2 illustrates, there is an enormous amount of background knowledge stored in lexical resources such as WordNet. It is not organized in a form that is precise enough for deduction, but it is adequate for the more primitive method of analogy.

Since there are many possible paths through all the definitions and examples of WordNet, most comparisons generate multiple analogies. To evaluate the evidence for any particular mapping, a weight of evidence is computed by using heuristics that estimate the closeness of the match. For Method #1 of matching type labels, the closest match results from identical labels. If the labels are not identical, the weight of evidence decreases with the distance between the labels in the type hierarchy (a scoring sketch follows the list):

  1. Identical type labels, such as Cat to Cat.

  2. Subtype to supertype, such as Cat to Animal.

  3. Siblings of the same supertype, such as Cat to Dog.

  4. More distant cousins, such as Cat to Tree.
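
The following sketch turns the four levels into a scoring function. The small hierarchy and the numeric weights are invented for illustration; the actual VAE heuristics are not published in this paper.

    hierarchy = {'Cat': 'Mammal', 'Dog': 'Mammal', 'Mammal': 'Animal',
                 'Animal': 'Entity', 'Tree': 'Plant', 'Plant': 'Entity'}

    def ancestors(t):
        """The chain of supertypes from t up to the top of the hierarchy."""
        chain = [t]
        while t in hierarchy:
            t = hierarchy[t]
            chain.append(t)
        return chain

    def label_weight(a, b):
        if a == b:
            return 1.0                      # 1. identical labels
        chain_a, chain_b = ancestors(a), ancestors(b)
        if a in chain_b or b in chain_a:
            return 0.75                     # 2. subtype to supertype
        common = [t for t in chain_a if t in chain_b]
        if common:                          # 3.-4. weight falls off with the
            dist = chain_a.index(common[0]) + chain_b.index(common[0])
            return max(0.5 - 0.1 * (dist - 2), 0.1)  # distance to a common supertype
        return 0.0

    print(label_weight('Cat', 'Cat'))     # 1.0
    print(label_weight('Cat', 'Animal'))  # 0.75
    print(label_weight('Cat', 'Dog'))     # 0.5  (siblings)
    print(label_weight('Cat', 'Tree'))    # 0.2  (distant cousins)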

For Method #2 of matching subgraphs, the closest match results from finding that both graphs, in their entirety, are isomorphic. The weight of evidence decreases as their common subgraphs become smaller or if the graphs have to be modified in order to force a match (a companion sketch follows the list):

  1. Match isomorphic graphs.

  2. Match two graphs that have isomorphic subgraphs (the larger the subgraphs, the stronger the evidence for the match).

  3. Combine adjacent nodes to make the subgraphs isomorphic.
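
A companion sketch for Method #2, again with invented numbers:  the score is highest when the graphs are isomorphic in their entirety, falls with the relative size of the common subgraph, and is penalized for every pair of adjacent nodes that must be combined to force a match.

    def subgraph_weight(common_size, size1, size2, nodes_combined=0):
        """Invented heuristic: sizes are node counts of the two graphs and
        of their largest common subgraph."""
        overlap = common_size / max(size1, size2)   # 1.0 for isomorphic graphs
        penalty = 0.05 * nodes_combined             # cost of combining nodes
        return max(overlap - penalty, 0.0)

    print(subgraph_weight(9, 9, 9))      # 1.0:  fully isomorphic
    print(subgraph_weight(4, 9, 11))     # 0.36: only a small common subgraph
    print(subgraph_weight(9, 9, 9, 2))   # 0.9:  match forced by combining nodes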

The analogy shown in Figure 2 received a high weight of evidence because VAE found many matching labels and large matching subgraphs in corresponding parts of a cat and parts of a car:

  • Some of the corresponding parts have similar functions:  fur and paint are outer coverings; heart and engine are internal parts that have a regular beat; skeleton and chassis are structures to which other parts are attached; paw and wheel perform a similar function, and there are four of each.

  • The longest matching subgraph is the path from mouth to stomach to bowel to anus of a cat, which matches the path from fuel cap to fuel tank to combustion chamber to exhaust pipe of a car. The stomach of a cat and the fuel tank of a car are analogous because they are both subtypes of Container. The bowel and the combustion chamber perform analogous functions. The mouth and the fuel cap are considered input orifices, and the anus and the exhaust pipe are outputs. The weight of evidence is somewhat reduced because adjustments must be made to ignore nodes that do not match:  the esophagus of a cat does not match anything in WordNet's description of a car, and the muffler of a car does not match anything in its description of a cat.

  • A shorter subgraph is the path from head to eyes to cornea of a cat, which matches the path from hood to headlights to glass plate of a car. The head and the hood are both in the front. The eyes are analogous to the headlights because there are two of each and they are related to light, even though the relationships are different. The cornea and the glass plate are in the front, and they are both transparent.

Each matching label and each structural correspondence contributes to the weight of evidence for the analogy, depending on the closeness of the match and the exactness of the correspondence.

As the cat-car comparison illustrates, analogy is a versatile method for using informal, unstructured background knowledge. But analogies are also valuable for comparing the highly formalized knowledge of one axiomatized theory to another. In the process of theory revision, Niels Bohr used an analogy between gravitational force and electrical force to derive a theory of the hydrogen atom as analogous to the earth revolving around the sun. Method #3 of analogy, which finds matching transformations, can also be used to determine the precise mappings required for transforming one theory or representation into another. As an example, Figure 3 shows a physical structure that could be represented by many different data structures.

Figure 3:  A physical structure to be represented by data

Programmers who use different tools, databases, or programming languages often use different, but analogous representations for the same kinds of information. LISP programmers, for example, prefer to use lists, while FORTRAN programmers prefer vectors. Conceptual graphs are a highly general representation, which can represent any kind of data stored in a digital computer, but the types of concepts and relations usually reflect the choices made by the original programmer, which in turn reflect the options available in the original programming tools. Figure 4 shows a representation for Figure 3 that illustrates the typical choices used with relational databases.

Figure 4:  Two structures represented in a relational database

On the left of Figure 4 are two structures:  a copy of Figure 3 and an arch constructed from three blocks. On the right are two tables:  the one labeled Objects lists the identifiers of all the objects in both structures with their shapes and colors; the one labeled Supports lists each object that supports another (in the column labeled Supporter) and the object supported (in the column labeled Supportee). As Figure 4 illustrates, a relational database typically scatters the information about a single object or structure of objects into multiple tables. For the structure of pyramids and blocks, each object is listed once in the Objects table, and one or more times in either or both columns of the Supports table. Furthermore, information about the two disconnected structures shown on the left is intermixed in both tables. When all the information about the structure at the top left is extracted from both tables of Figure 4, it can be mapped to the conceptual graph of Figure 5.
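
The extraction step can be pictured in a few lines of Python. The rows below are reconstructed from the English sentence quoted with Figure 6 (the shapes and colors of the arch are not given in the text, so only the first structure is listed), and the traversal gathers every object connected to a seed object through the Supports table:

    objects = {                 # Objects table:  ID -> (Shape, Color)
        'A': ('pyramid', 'red'),
        'B': ('pyramid', 'green'),
        'C': ('pyramid', 'yellow'),
        'D': ('block',   'blue'),
        'E': ('pyramid', 'orange'),
    }
    supports = [('A', 'D'), ('B', 'D'), ('C', 'D'), ('D', 'E')]  # Supporter, Supportee

    def structure_containing(seed):
        """Collect all objects linked to 'seed' by rows of the Supports table."""
        members, frontier = {seed}, [seed]
        while frontier:
            x = frontier.pop()
            for s, t in supports:
                if x in (s, t):
                    for other in {s, t} - members:
                        members.add(other)
                        frontier.append(other)
        return members

    print(sorted(structure_containing('E')))   # ['A', 'B', 'C', 'D', 'E']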

Figure 5:  A CG derived from the relational DB

In Figure 5, each row of the table labeled Objects is represented by a conceptual relation labeled Objects, and each row of the table labeled Supports is represented by a conceptual relation labeled Supports. The type labels of the concepts are mostly derived from the labels on the columns of the two tables in Figure 4. The only exception is the label Entity, which is used instead of ID. The reason for that exception is that ID is a metalevel term about the representation language; it is not a term that is derived from the entities in the domain of discourse. The concept [Entity: E], for example, says that E is an instance of type Entity. The concept [ID: "E"], however, would say that the character string "E" is an instance of type ID. The use of the label Entity instead of ID avoids mixing the metalevel with the object level. Such mixing of levels is common in most programs, since the computer ignores any meaning that might be associated with the labels. In logic, however, the fine distinctions are important, and CGs mark them consistently.

When natural languages are translated to CGs, the distinctions must be enforced by the semantic interpreter. Figure 6 shows a CG that represents the English sentence, A red pyramid A, a green pyramid B, and a yellow pyramid C support a blue block D, which supports an orange pyramid E. The conceptual relations labeled Thme and Inst represent the case relations theme and instrument. The relations labeled Attr represent the attribute relation between a concept of some entity and a concept of some attribute of that entity. The type labels of concepts are usually derived from nouns, verbs, adjectives, and adverbs in English.

Figure 6:  A CG derived from an English sentence

Although the two conceptual graphs represent equivalent information, they look very different. In Figure 5, the CG derived from the relational database has 15 concept nodes and 9 relation nodes. In Figure 6, the CG derived from English has 12 concept nodes and 11 relation nodes. Furthermore, no type label on any node in Figure 5 is identical to any type label on any node in Figure 6. Even though some character strings are similar, their positions in the graphs cause them to be treated as distinct. In Figure 5, orange is the name of an instance of type Color; and in Figure 6, Orange is the label of a concept type. In Figure 5, Supports is the label of a relation type; and in Figure 6, Support is not only the label of a concept type, it also lacks the final S.

Because of these differences, the strict method of unification cannot show that the graphs are identical or even related. Even the more relaxed methods of matching labels or matching subgraphs are unable to show that the two graphs are analogous. Method #3 of analogy, however, can find matching transformations that can translate Figure 5 into Figure 6 or vice versa. When VAE was asked to compare those two graphs, it found the two transformations shown in Figure 7. Each transformation determines a mapping between a type of subgraph in Figure 5 and another type of subgraph in Figure 6.

Figure 7:  Two transformations discovered by VAE

The two transformations shown in Figure 7 define a version of graph grammar for parsing one kind of graph and mapping it to the other. The transformation at the top of Figure 7 can be applied to the five subgraphs containing the relations of type Objects in Figure 5 and relate them to the five subgraphs containing the relations of type Attr in Figure 6. That same transformation could be applied in reverse to relate the five subgraphs of Figure 6 to the five subgraphs of Figure 5. The transformation at the bottom of Figure 7 could be applied from right to left in order to map Figure 6 to Figure 5. When applied in that direction, it would map three different subgraphs, which happen to contain three common nodes:  the subgraph extending from [Pyramid: A] to [Block: D]; the one from [Pyramid: B] to [Block: D]; and the one from [Pyramid: C] to [Block: D]. When applied in the reverse direction, it would map three subgraphs of Figure 5 that contained only one common node.

The transformations shown in Figure 7 have a high weight of evidence because they are used repeatedly in exactly the same way. A single transformation of one subgraph to another subgraph with no matching labels would not contribute anything to the weight of evidence. But if the same transformation is applied twice, then its likelihood is greatly increased. Transformations that can be applied three times or five times to relate all the nodes of one graph to all the nodes of another graph have a likelihood that comes close to being a certainty.
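
One way to render this rule of thumb numerically is sketched below. The curve is invented; the text gives no formula, only the qualitative behavior that a single unmatched transformation contributes nothing and that repeated use approaches certainty.

    def transformation_evidence(repetitions):
        """Invented curve: no credit for a one-off mapping; evidence rises
        toward certainty as the same transformation is reused."""
        if repetitions < 2:
            return 0.0
        return 1.0 - 2.0 ** -(repetitions - 1)

    print(transformation_evidence(1))   # 0.0
    print(transformation_evidence(2))   # 0.5
    print(transformation_evidence(5))   # 0.9375, close to certainty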

Of the three methods of analogy used in VAE, the first two — matching labels and matching subgraphs — are also used in SME. Method #3 of matching transformations, which only VAE is capable of performing, is more complex because it depends on analogies of analogies. Unlike the first two methods, which VAE can perform in (N log N) time, Method #3 takes polynomial time, and it can only be applied to much smaller amounts of data. In practice, Method #3 is usually applied to small parts of an analogy in which most of the mapping is done by the first two methods and only a small residue of unmatched nodes remains to be mapped. In such cases, the number N is small, and the mapping can be done quickly. Even for mapping Figure 5 (with N=9) to Figure 6 (with N=11), Method #3 took a few seconds, whereas the time for Methods #1 and #2 on graphs of such size would be less than a millisecond.

Each of the three methods of analogy determines a mapping of one CG to another. The first two methods determine a node-by-node mapping of CGs, where some or all of the nodes of the first CG may have different type labels from the corresponding nodes of the other. Method #3 determines a more complex mapping, which comprises multiple mappings of subgraphs of one CG to subgraphs of the other. These methods can be applied to CGs derived from any source, including natural languages, logic, or programming languages.

In one major application, VAE was used to analyze the programs and documentation of a large corporation, which had systems in daily use that were up to forty years old (LeClerc & Majumdar 2002). Although the documentation specified how the programs were supposed to work, nobody knew what errors, discrepancies, and obsolete business procedures might be buried in the code. The task required an analysis of 100 megabytes of English, 1.5 million lines of COBOL programs, and several hundred control-language scripts, which called the programs and specified the data files and formats. Over time, the English terminology, computer formats, and file names had changed. Some of the format changes were caused by new computer systems and business practices, and others were required by different versions of federal regulations. In three weeks of computation on a 750 MHz Pentium III, VAE combined with the Intellitex parser was able to analyze the documentation and programs, translate all statements that referred to files, data, or processes in any of the three languages (English, COBOL, and JCL) to conceptual graphs, and use the CGs to generate an English glossary of all processes and data, to define the specifications for a data dictionary, to create dataflow diagrams of all processes, and to detect inconsistencies between the documentation and the implementation.

4. Inference Engine

Most theorem provers use a tightly constrained version of structure mapping called unification, which forces two structures to become identical. Relaxing constraints in one direction converts unification to generalization, and relaxing them in another direction leads to specialization. With arbitrary combinations of generalization and specialization, there is a looser kind of similarity, which, if there is no limit on the extent, could map any graph to any other. When Peirce's rules of inference are redefined in terms of generalization and specialization, they support an inference procedure that can use exactly the same algorithms and data structures designed for the VivoMind Analogy Engine. The primary difference between the analogy engine and the inference engine is in the strategy that schedules the algorithms and determines which constraints to enforce.

When Peirce invented the implication operator for Boolean algebra, he observed that the truth value of the antecedent is always less than or equal to the truth value of the consequent. Therefore, the symbol ≤ may be used to represent implication:  p ≤ q means that the truth value of p is less than or equal to the truth value of q. That same symbol may be used for generalization:  if a graph or formula p is true in fewer cases than another graph or formula q, then p is more specialized and q is more generalized. Figure 8 shows a generalization hierarchy in which the most general CG is at the top. Each dark line in Figure 8 represents the ≤ operator:  the CG above is a generalization, and the CG below is a specialization.

Figure 8:  A generalization hierarchy of CGs

The top CG says that an animate being is the agent of some act that has an entity as the theme of the act. Below it are two specializations:  a CG for a robot washing a truck, and a CG for an animal chasing an entity. The CG for an animal chasing an entity has three specializations: a human chasing a human, a cat chasing a mouse, and the dog Macula chasing a Chevrolet. The two graphs at the bottom represent the most specialized sentences:  The cat Yojo is vigorously chasing a brown mouse, and the cat Tigerlily is chasing a gray mouse.
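
The hierarchy of Figure 8 can be rendered as a small directed graph, with p ≤ q decided by reachability. The node names below abbreviate the sentences in the figure; the encoding is an illustration, not the storage format of any CG system.

    # Each CG maps to its immediate generalizations; p <= q is reachability.
    generalizes = {
        'Yojo chases brown mouse':     ['cat chases mouse'],
        'Tigerlily chases gray mouse': ['cat chases mouse'],
        'cat chases mouse':            ['animal chases entity'],
        'human chases human':          ['animal chases entity'],
        'Macula chases Chevrolet':     ['animal chases entity'],
        'robot washes truck':          ['being acts on entity'],
        'animal chases entity':        ['being acts on entity'],
    }

    def leq(p, q):
        """p <= q: p equals q, or q lies above p in the hierarchy."""
        return p == q or any(leq(r, q) for r in generalizes.get(p, []))

    print(leq('Yojo chases brown mouse', 'being acts on entity'))  # True
    print(leq('robot washes truck', 'cat chases mouse'))           # False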

The operations on conceptual graphs are based on combinations of six canonical formation rules, which perform the structure-building operations of perception and the structure-mapping operations of analogy. Logically, each rule has one of three possible effects on a CG:  the rule can make it more specialized, more generalized, or logically equivalent but with a modified shape. Each rule has an inverse rule that restores a CG to its original form. The inverse of specialization is generalization, the inverse of generalization is specialization, and the inverse of equivalence is another equivalence.
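
The six rules, which are introduced individually with Figures 9, 10, and 11 below, can be tabulated with their logical effects and their inverses. The table merely restates this paragraph; it is not code from any CG system.

    # rule -> (logical effect, inverse rule)
    CANONICAL_RULES = {
        'copy':       ('equivalence',    'simplify'),
        'simplify':   ('equivalence',    'copy'),
        'restrict':   ('specialization', 'unrestrict'),
        'unrestrict': ('generalization', 'restrict'),
        'join':       ('specialization', 'detach'),
        'detach':     ('generalization', 'join'),
    }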

All the graphs in Figure 8 belong to the existential-conjunctive subset of logic, whose only operators are the existential quantifier ∃ and the conjunction ∧. For this subset, the canonical formation rules take the forms illustrated in Figures 9, 10, and 11. These rules are fundamentally graphical:  they are easier to show than to describe. Sowa (2000) presented the formal definitions, which specify the details of how the nodes and arcs are affected by each rule.

Figure 9:  Copy and simplify rules

Figure 9 shows the first two rules:  copy and simplify. At the top is a CG for the sentence "The cat Yojo is chasing a mouse." The down arrow represents two applications of the copy rule. The first copies the Agnt relation, and the second copies the subgraph →(Thme)→[Mouse]. The two copies of the concept [Mouse] at the bottom of Figure 9 are connected by a dotted line called a coreference link; that link, which corresponds to an equal sign = in predicate calculus, indicates that both concepts must refer to the same individual. Since the new copies do not add any information, they may be erased without losing information. The up arrow represents the simplify rule, which performs the inverse operation of erasing redundant copies. The copy and simplify rules are called equivalence rules because any two CGs that can be transformed from one to the other by any combination of copy and simplify rules are logically equivalent. The two formulas in predicate calculus that are derived from Figure 9 are also logically equivalent. The top CG maps to the following formula:

(∃x:Cat)(∃y:Chase)(∃z:Mouse)(name(x,'Yojo') ∧ agnt(y,x) ∧ thme(y,z)).

In the formula that corresponds to the bottom CG, the equality z=w represents the coreference link that connects the two copies of [Mouse]:

(∃x:Cat)(∃y:Chase)(∃z:Mouse)(∃w:Mouse)(name(x,'Yojo') ∧ agnt(y,x) ∧ agnt(y,x) ∧ thme(y,z) ∧ thme(y,w) ∧ z=w).

By the inference rules of predicate calculus, either of these two formulas can be derived from the other.

Figure 10:  Restrict and unrestrict rules

Figure 10 illustrates the restrict and unrestrict rules. At the top is a CG for the sentence "A cat is chasing an animal." Two applications of the restrict rule transform it to the CG for "The cat Yojo is chasing a mouse." The first step is a restriction by referent of the concept [Cat], which represents some indefinite cat, to the more specific concept [Cat: Yojo], which represents a particular cat named Yojo. The second step is a restriction by type of the concept [Animal] to a concept of the subtype [Mouse]. Two applications of the unrestrict rule perform the inverse transformation of the bottom graph to the top graph. The restrict rule is a specialization rule, and the unrestrict rule is a generalization rule. The more specialized graph implies the more general one: if the cat Yojo is chasing a mouse, it follows that a cat is chasing an animal. The same implication holds for the corresponding formulas in predicate calculus. The more general formula

(∃x:Cat)(∃y:Chase)(∃z:Animal)(agnt(y,x) ∧ thme(y,z))

is implied by the more specialized formula

(∃x:Cat)(∃y:Chase)(∃z:Mouse)(name(x,'Yojo') ∧ agnt(y,x) ∧ thme(y,z)).

Figure 11:  Join and detach rules

Figure 11 illustrates the join and detach rules. At the top are two CGs for the sentences "Yojo is chasing a mouse" and "A mouse is brown." The join rule overlays the two identical copies of the concept [Mouse], to form a single CG for the sentence "Yojo is chasing a brown mouse." The detach rule performs the inverse operation. The result of join is a more specialized graph that implies the one derived by detach. The same implication holds for the corresponding formulas in predicate calculus. The conjunction of the formulas for the top two CGs

(∃x:Cat)(∃y:Chase)(∃z:Mouse)(name(x,'Yojo') ∧ agnt(y,x) ∧ thme(y,z))

(∃w:Mouse)(∃v:Brown)attr(w,v)

is implied by the formula for the bottom CG

(∃x:Cat)(∃y:Chase)(∃z:Mouse)(∃v:Brown)(name(x,'Yojo') ∧ agnt(y,x) ∧ thme(y,z) ∧ attr(z,v)).

These rules can be applied to full first-order logic by specifying how they interact with negation. In CGs, each negation is represented by a context that has an attached relation of type Neg or its abbreviation by the symbol ¬ or ~. A positive context is nested in an even number of negations (possibly zero). A negative context is nested in an odd number of negations. The following four principles determine how negations affect the rules:

  1. Equivalence rules.  An equivalence rule remains an equivalence rule in any context, positive or negative.

  2. Specialization rules.  In a negative context, a specialization rule becomes a generalization rule; but in a positive context, it remains a specialization rule.

  3. Generalization rules.  In a negative context, a generalization rule becomes a specialization rule; but in a positive context, it remains a generalization rule.

  4. Double negation.  A double negation is a nest of two negations in which no concept or relation node occurs between the inner and the outer negation. (It is permissible for an arc of a relation or a coreference link to cross the space between the two negations, but only if one endpoint is inside the inner negation and the other endpoint is outside the outer negation.) Then drawing or erasing a double negation around any CG or any subgraph of a CG is an equivalence operation.

In short, a single negation reverses the effect of generalization and specialization rules, but it has no effect on equivalence rules. Since drawing or erasing a double negation adds or subtracts two negations, it has no effect on any rule.
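
Principles 1 through 3 amount to a two-line function; drawing or erasing a double negation (Principle 4) changes the depth by two and therefore never changes the result.

    def effective_effect(rule_effect, negation_depth):
        """Effect of a canonical formation rule inside a context nested in
        the given number of negations:  polarity flips specialization and
        generalization but never affects equivalence rules."""
        if rule_effect == 'equivalence' or negation_depth % 2 == 0:
            return rule_effect
        return {'specialization': 'generalization',
                'generalization': 'specialization'}[rule_effect]

    print(effective_effect('specialization', 1))   # 'generalization'
    print(effective_effect('equivalence', 3))      # 'equivalence'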

By handling the syntactic details of conceptual graphs, the canonical formation rules enable the rules of inference to be stated in a form that is independent of the graph notation. For each of the six rules, there is an equivalent rule for predicate calculus or any other notation for classical FOL. To derive equivalent rules for other notations, start by showing the effect of each rule on the existential-conjunctive subset (no operators other than ∃ and ∧). To handle negation, add one to the negation count for each subgraph or subformula that is governed by a ~ symbol. For other operators (∀, ⊃, and ∨), count the number of negations in their definitions. For example, p⊃q is defined as ~(p∧~q); therefore, the subformula p is nested inside one additional negation, and the subformula q is nested inside two additional negations.
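
The negation counts can be read directly off the classical definitions in terms of ~ and ∧. The small table below records how many negations each operator adds to its subformulas, working out ∨ and ∀ in the same way as the ⊃ example above:

    def added_negations(op):
        """Negations added to the (left, right) subformulas of each operator."""
        return {
            'and': (0, 0),   # p ∧ q is primitive in this subset
            'imp': (1, 2),   # p ⊃ q  ==  ~(p ∧ ~q)
            'or':  (2, 2),   # p ∨ q  ==  ~(~p ∧ ~q)
            'all': (2,),     # ∀x p   ==  ~∃x ~p
        }[op]

    print(added_negations('imp'))   # (1, 2): p once, q twice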

When the CG rules are applied to other notations, some extensions may be necessary. For example, the blank or empty graph is a well-formed EG or CG, which is always true. In predicate calculus, the blank may be represented by a constant formula T, which is defined to be true. The operation of erasing a graph would correspond to replacing a formula by T. When formulas are erased or inserted, an accompanying conjunction symbol must also be erased or inserted in some notations. Other notations, such as the Knowledge Interchange Format (KIF), are closer to CGs because they only require one conjunction symbol for an arbitrarily long list of conjuncts. In KIF, the formula (and), which is an empty list of conjuncts, may be used as a synonym for the blank graph or T. Discourse representation structures (DRSs) are even closer to EGs and CGs because they do not use any symbol for conjunction; therefore, the blank may be considered a DRS that is always true.

Peirce's rules, which he stated in terms of existential graphs, form a sound and complete system of inference for first-order logic with equality. If the word graph is considered a synonym for formula or statement, the following adaptation of Peirce's rules can be applied to any notation for FOL, including EGs, CGs, DRS, KIF, or the many variations of predicate calculus. These rules can also be applied to subsets of FOL, such as description logics and Horn-clause rules. (A sketch of the polarity test follows the list.)

  • Erasure.  In a positive context, any graph or subgraph u may be replaced by a generalization of u; in particular, u may be erased (i.e. replaced by the blank, which is a generalization of every graph).

  • Insertion.  In a negative context, any graph or subgraph u may be replaced by a specialization of u; in particular, any graph may be inserted (i.e. it may replace the blank).

  • Iteration.  If a graph or subgraph u occurs in a context C, another copy of u may be inserted in the same context C or in any context nested in C.

  • Deiteration.  Any graph or subgraph u that could have been derived by iteration may be erased.

  • Equivalence.  Any equivalence rule (copy, simplify, or double negation) may be performed on any graph or subgraph in any context.
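
Stated this way, erasure and insertion reduce to a single polarity-guarded test, sketched below. The function assumes a generalization test leq(p, q), such as the one in the hierarchy example above, that treats the blank graph as a generalization of every graph; iteration and deiteration, which require tracking copies across contexts, are omitted.

    def replacement_allowed(u, v, negation_depth, leq):
        """May subgraph u be replaced by v in a context at this depth?
        leq(p, q) must hold exactly when q is a generalization of p."""
        if negation_depth % 2 == 0:
            return leq(u, v)   # Erasure: positive contexts may generalize
        return leq(v, u)       # Insertion: negative contexts may specialize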

These rules, which Peirce formulated in several equivalent variants from 1897 to 1909, form an elegant and powerful generalization of the rules of natural deduction by Gentzen (1935). Like Gentzen's version, the only axiom is the blank. What makes Peirce's rules more powerful is the option of applying them in any context nested arbitrarily deep. That option shortens many proofs, and it eliminates Gentzen's bookkeeping for making and discharging assumptions. For further discussion and comparison, see MS 514 (Peirce 1909) and the commentary that shows how other rules of inference can be derived from Peirce's rules.

Unlike most proof procedures, which are tightly bound to a particular syntax, this version of Peirce's rules is stated in notation-independent terms of generalization and specialization. In this form, they can even be applied to natural languages. For any language, the first step is to show how each syntax rule affects generalization, specialization, and equivalence. In counting the negation depth for natural languages, it is important to recognize the large number of negation words, such as not, never, none, nothing, nobody, or nowhere. But many other words also contain implicit negations, which affect any context governed by those words. Verbs like prevent or deny, for example, introduce a negation into any clause or phrase in their complement. Many adjectives also have implicit negations:  a stuffed bear, for example, lacks essential properties of a bear, such as being alive. After the effects of these features on generalization and specialization have been taken into account in the syntactic definition of the language, Peirce's rules can be applied to a natural language as easily as to a formal language.

5. A Semeiotic Foundation for Cognition

Peirce developed his theory of signs or semeiotic as a species-independent theory of cognition. He considered it true of any "scientific intelligence," by which he meant any intelligence that is "capable of learning from experience." Peirce was familiar with Babbage's mechanical computer; he was the first person to suggest that such machines should be based on electrical circuits rather than mechanical linkages; and in 1887, he published an article on "logical machines" in the American Journal of Psychology, which even today would be a respectable commentary on the possibilities and difficulties of artificial intelligence. His definition of sign is independent of any implementation in proteins or silicon:

I define a sign as something, A, which brings something, B, its interpretant, into the same sort of correspondence with something, C, its object, as that in which itself stands to C. In this definition I make no more reference to anything like the human mind than I do when I define a line as the place within which a particle lies during a lapse of time. (1902, p. 235)

As a close friend of William James, Peirce was familiar with the experimental psychology of his day, which he considered a valuable study of what possibilities are realized in any particular species. But he considered semeiotic to be a more fundamental, implementation-independent characterization of cognition.

Peirce defined logic as "the study of the formal laws of signs" (1902, p. 235), which implies that it is based on the same kinds of semeiotic operations as all other cognitive processes. He reserved the word analogy for what is called "analogical reasoning" in this paper. For the kinds of structure mapping performed by SME and VAE, Peirce used the term diagrammatic reasoning, which he described as follows:

The first things I found out were that all mathematical reasoning is diagrammatic and that all necessary reasoning is mathematical reasoning, no matter how simple it may be. By diagrammatic reasoning, I mean reasoning which constructs a diagram according to a precept expressed in general terms, performs experiments upon this diagram, notes their results, assures itself that similar experiments performed upon any diagram constructed according to the same precept would have the same results, and expresses this in general terms. This was a discovery of no little importance, showing, as it does, that all knowledge without exception comes from observation. (1902, pp. 91-92)

This short paragraph summarizes many themes that Peirce developed in more detail in his other works. In fact, it summarizes the major themes of this article:

  1. By a diagram, Peirce meant any abstract pattern of signs. He included the patterns of algebra and Aristotle's syllogisms as diagrammatic, but he also said that his existential graphs were "more diagrammatic".

  2. His "experiments" upon a diagram correspond to various AI procedures, such as generate and test, backtracking, breadth-first parallel search, and path following algorithms, all of which are performed on data structures that correspond to Peirce's notion of "diagram".

  3. The first sentence of the paragraph applies the term "mathematical" to any kind of deduction, including Aristotle's syllogisms and any informal logic that may be used in a casual conversation.

  4. The last sentence, which relates observation to diagrammatic reasoning, echoes the themes of the first section of this paper, which emphasized the need for an integration of perception with the mechanisms of analogy. Peirce stated that point even more forcefully:  "Nothing unknown can ever become known except through its analogy with other things known" (1902, p. 287).

In short, the operations of diagrammatic reasoning or structure mapping form the bridge from perception to all forms of reasoning, ranging from the most casual to the most advanced. Forbus et al. (2002) applied the term reasoning from first principles to logic, not to analogical reasoning. But Peirce, who invented two of the most widely used notations for logic, recognized that the underlying semeiotic mechanisms were more fundamental. The VivoMind implementation confirms Peirce's intuitions.

For natural language understanding, the constrained operations of unification and generalization are important, but exceptions, metaphors, ellipses, novel word senses, and the inevitable errors require less constrained analogies. When the VAE algorithms are used in the semantic interpreter, there are no exceptions:  analogies are used at every step, and the only difference between unification, generalization, and looser similarities is the nature of the constraints on the analogy.

References

Chalmers, D. J., R. M. French, & D. R. Hofstadter (1992) "High-level perception, representation, and analogy: A critique of artificial intelligence methodology," Journal of Experimental & Theoretical Artificial Intelligence 4, 185-211.

Falkenhainer, Brian, Kenneth D. Forbus, & Dedre Gentner (1989) "The structure-mapping engine: Algorithm and examples," Artificial Intelligence 41, 1-63.

Forbus, Kenneth D., Dedre Gentner, & K. Law (1995) "MAC/FAC: A Model of Similarity-Based Retrieval," Cognitive Science 19:2, 141-205.

Forbus, Kenneth D., Dedre Gentner, Arthur B. Markman, & Ronald W. Ferguson (1998) "Analogy just looks like high level perception: Why a domain-general approach to analogical mapping is right," Journal of Experimental & Theoretical Artificial Intelligence 10:2, 231-257.

Forbus, Kenneth D., T. Mostek, & R. Ferguson (2002) "An analogy ontology for integrating analogical processing and first-principles reasoning," Proc. IAAI-02 pp. 878-885.

Gentzen, Gerhard (1935) "Untersuchungen über das logische Schließen," translated as "Investigations into logical deduction" in The Collected Papers of Gerhard Gentzen, ed. and translated by M. E. Szabo, North-Holland Publishing Co., Amsterdam, 1969, pp. 68-131.

Hallaq, Wael B. (1993) Ibn Taymiyya Against the Greek Logicians, Clarendon Press, Oxford.

LeClerc, André, & Arun Majumdar (2002) "Legacy revaluation and the making of LegacyWorks," Distributed Enterprise Architecture 5:9, Cutter Consortium, Arlington, MA.

Morrison, Clayton T., & Eric Dietrich (1995) "Structure-mapping vs. high-level perception: the mistaken fight over the explanation of Analogy," Proc. 17th Annual Conference of the Cognitive Science Society, pp. 678-682. Available at http://babs.cs.umass.edu/~clayton/CogSci95/SM-v-HLP.html

Peirce, Charles Sanders (1887) "Logical machines," American Journal of Psychology, vol. 1, Nov. 1887, pp. 165-170.

Peirce, Charles S. (1902) Logic, Considered as Semeiotic, MS L75, edited by Joseph Ransdell, http://members.door.net/arisbe/menu/library/bycsp/l75/l75.htm 

Peirce, Charles Sanders (1909) Manuscript 514, with commentary by J. F. Sowa, available at http://www.jfsowa.com/peirce/ms514.htm

Sowa, John F. (2000) Knowledge Representation: Logical, Philosophical, and Computational Foundations, Brooks/Cole Publishing Co., Pacific Grove, CA.


http://www.jfsowa.com/pubs/analog.htm
