Social Sci LibreTexts

6.4: Reasoning by Analogy


Mehgan Andrade and Neil Walker, College of the Canyons


Analogies describe similar structures and interconnect them to clarify and explain certain relations. In a recent study, for example, a song stuck in your head was compared to an itch in the brain that can only be scratched by repeating the song over and over again.

Restructuring by Using Analogies

One special kind of restructuring, already mentioned in the discussion of the Gestalt approach, is analogical problem solving. Here, to find a solution to one problem (the so-called target problem), an analogous solution to another problem (the source problem) is presented. An example of this kind of strategy is the radiation problem posed by K. Duncker in 1945:

“As a doctor you have to treat a patient with a malignant, inoperable tumour, buried deep inside the body. There exists a special kind of ray which is perfectly harmless at low intensity, but at a sufficiently high intensity is able to destroy the tumour, as well as the healthy tissue in its path. What can be done to avoid the latter?”

When participants in an experiment were asked this question, most of them could not come up with the appropriate answer. They were then told a story that went something like this:

A General wanted to capture his enemy's fortress. He gathered a large army to launch a full-scale direct attack, but then learned that all the roads leading directly towards the fortress were mined. The mines were designed in such a way that small groups of the fortress-owner's men could pass over them safely, but any large group of men would set them off. The General therefore devised the following plan: he divided his troops into several smaller groups and had each of them march down a different road, timed in such a way that the entire army would reunite exactly upon reaching the fortress and could attack at full strength.

Here, the story about the General is the source problem, and the radiation problem is the target problem. The fortress is analogous to the tumour, and the large army corresponds to the high-intensity ray; consequently, a small group of soldiers represents a ray at low intensity. The solution to the problem is to split the ray up, as the General did with his army, and send the now harmless rays towards the tumour from different angles in such a way that they all meet when reaching it. No healthy tissue is damaged, but the tumour itself is destroyed by the ray at its full intensity. M. Gick and K. Holyoak presented Duncker's radiation problem to groups of participants in 1980 and 1983. Only 10 percent of them were able to solve the problem right away; 30 percent could solve it when they had read the story of the General beforehand. After being given an additional hint, to use the story as help, 75 percent of them solved the problem.

From these results, Gick and Holyoak concluded that analogical problem solving depends on three steps:

1. Noticing that an analogical connection exists between the source and the target problem.

2. Mapping corresponding parts of the two problems onto each other (fortress → tumour, army → ray, etc.)

3. Applying the mapping to generate a parallel solution to the target problem (using small groups of soldiers approaching from different directions → sending several weaker rays from different directions), as sketched in the code below.
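
The following is a minimal sketch of steps 2 and 3 in Python. The term-by-term string substitution and the particular phrasing are illustrative assumptions only; Gick and Holyoak's account concerns mental representations, not text processing.

```python
# Step 2: map corresponding parts of the source problem onto the target problem.
source_to_target = {
    "army": "ray",
    "small groups of soldiers": "weak rays",
    "different roads": "different directions",
    "fortress": "tumour",
}

# The known solution to the source problem (the General's plan).
source_solution = ("divide the army into small groups of soldiers, "
                   "send them along different roads, "
                   "and have them converge on the fortress at the same time")

def apply_mapping(solution: str, mapping: dict[str, str]) -> str:
    """Step 3: generate a parallel solution by substituting mapped terms."""
    for source_term, target_term in mapping.items():
        solution = solution.replace(source_term, target_term)
    return solution

print(apply_mapping(source_solution, source_to_target))
# divide the ray into weak rays, send them along different directions,
# and have them converge on the tumour at the same time
```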

Next, Gick and Holyoak started looking for factors that could help with the noticing and the mapping steps, for example, discovering the basic concept linking the source and the target problems.

The concept that links the target problem with the analogy (the “source problem”) is called the problem schema. Gick and Holyoak induced the activation of a schema in their participants by giving them two stories and asking them to compare and summarise them. This activation of problem schemata is called “schema induction”.

The two texts presented were picked from six stories describing analogical problems and their solutions. One of these stories was "The General" (the example given above).

After completing this task, the participants were asked to solve the radiation problem (given above). The experiment showed that, in order to solve the target problem, reading two stories with analogical problems is more helpful than reading only one: after reading two stories, 52% of the participants were able to solve the radiation problem (as noted above, only 30% were able to solve it after reading only one story, namely “The General”). Gick and Holyoak also found that the quality of the schemata participants developed differed, and they classified them into three groups:

1. Good schemata: In a good schema it was recognised that the same underlying concept could be used to solve both problems (21% of the participants created a good schema, and 91% of them were able to solve the radiation problem).

2. Intermediate schemata: The creator of an intermediate schema figured out that the two problems share a common core (here: many small forces together solve the problem) (20% created one; 40% of them found the right solution).

3. Poor schemata: The poor schemata were hardly related to the target problem. In many poor schemata the participant only noticed that the hero of the story was rewarded for his efforts (59% created one; 30% of them found the right solution).

The process of using a schema or analogy, i.e., applying it to a novel situation, is called transduction. One can use a common strategy to solve problems of a new kind. Creating a good schema and finally arriving at a solution is a problem-solving skill that requires practice and some background knowledge.


Larry G. Maguire

Essays on the meaning and purpose of daily work

Analogical Thinking: A Method For Solving Problems


13th May 2019 by Larry G. Maguire

How To Solve Problems By Analogy

The ability to solve problems is an essential skill for our survival and growth in the fast-paced, moment to moment shifting of modern society. No matter what the domain of expertise or work, challenges present themselves at an ever-increasing rate. And so it should be, for what is a life worth living if we never have problems to solve? We must accept that challenges are inherent in life, and so we must use our imagination and ingenuity to find solutions. Creativity and high performance require it. Although solving problems is never as simple as following a linear process, using lateral thinking processes for generating solutions is a skill we can cultivate, and in this week's article, I'm taking a look at a couple of examples of analogical thinking in practice. However, take into account that often switching off entirely from the problem can be the best route to the solution you need.

When I was a kid, growing up in the suburbs of Dublin City, we'd play in the grounds of an old farmhouse that stood in the middle of the housing estate. Cleavers 1, wild grasses and other naturally occurring local plants grew wild on the grounds. We called Cleavers “sticklebacks” because they had little hooks all over them that made them stick to our clothes. We would pull bunches of them and throw them at each other for fun.

Many plants growing wild in the countryside have evolved this ability to latch on to other materials like walls, trees, animal fur, other plants and the backs of children's jumpers. Ordinarily, as adults, we don't pass any comment other than perhaps, “isn't that clever”. But in 1941, as George de Mestral 2 walked in the Jura Mountains with his dog, the clever ability of the Xanthium strumarium seed pods 3 to attach themselves to his clothes and his dog's fur captured his interest. Little did he realise that this determined little seed pod would be the foundation of what would become a multimillion-dollar business.

George de Mestral, Inventor

George de Mestral was born into a middle-class Swiss family in June 1907. His father, Albert, was a civil engineer and no doubt had a significant influence on the developing mind of his son, with young George showing his creative ability by designing and patenting a toy aeroplane at age 12. De Mestral attended the highly respected Ecole Polytechnique Federale de Lausanne on the shores of Lake Geneva, Switzerland, where he studied engineering. On completing his studies, he secured employment with a Swiss engineering company, where he honed his technical skills.

De Mestral also enjoyed hunting in the mountains, and on one particular occasion in 1941, as the story goes, he was prompted to investigate the means by which those stubborn cockleburs adhered to his clothes. Upon examining a seed pod under a microscope, he noticed hundreds of tiny hooks covering its outer husk. It likely took many encounters with the stubborn cocklebur to prompt his inquiry; however, given his inventive mind, he somehow made a connection between what he observed and its possible commercial use.

George de Mestral, creator of the Velcro hook-and-loop fastening system, used analogical thinking

He thought that if he could somehow employ the principle used by the cocklebur to fabricate a synthetic fastening system, he would have a solution to the problems occurring with conventional fasteners of the time. De Mestral conceptualised what he wanted to create, but coming up with a practical design took considerable time. Clothing manufacturers didn't take him seriously and he encountered many practical challenges in bringing his idea to life. After many attempts, he eventually found a manufacturer in Lyon, France who was willing to work with him and together they combined the toughness of nylon with cotton to create the first working prototype.

With the new material, he was able to recreate the tiny hooks he’d observed under the microscope all those years before. Having proved his concept, he soon after applied for and received a patent for his invention and launched his manufacturing business, which he named Velcro 4, a combination of the French words “velours” (velvet) and “crochet” (hook).

It took nearly fifteen years of research before he was finally able to successfully reproduce the natural fastening system he had seen on the Xanthium strumarium seed pods, but he stuck to his idea – a testament to his belief in the solution he had found.

De Mestral's Use Of Analogical Thinking

Despite its widespread use today, Velcro was not an immediate commercial success for de Mestral. However, by the early 1960s and the race to reach the moon, it seems that Velcro was in the right place at the right time. With the developing needs of the aerospace industry and the successful use of Velcro by NASA, the clothing and sportswear industries also realised the possibilities that de Mestral's product presented. Soon Velcro was selling over 60 million meters of hook-and-loop fastener per year, and de Mestral became a multimillionaire.

Whether he realised it or not, de Mestral used what today we term “analogical thinking” or analogical reasoning: the process of finding a solution to a problem by finding a similar problem with a known solution and applying that solution to the current situation.

An analogy is a comparison between two objects, or systems of objects, that highlights respects in which they are thought to be similar. Analogical reasoning is any type of thinking that relies upon an analogy 5. (Stanford Encyclopedia of Philosophy)

What Is Analogical Thinking?

The world-renowned writer and philosopher Edward de Bono 6, creator of the term “lateral thinking”, says that the analogy technique for generating ideas is a means to get some movement going, to start a train of thought. The challenge for us, when presented with a difficult problem, is that we can become hemmed in by traditional habitual thinking. Thinking laterally through the use of analogy helps to bring about a shift away from this habitual thinking.

In his book Lateral Thinking 7, first published almost fifty years ago, de Bono suggests that lateral thinking, of which thinking by analogy is an aspect, is the opposite of traditional vertical thinking, although he also notes that the two can work together rather than in opposition.

Thinking by analogy helps to bring about creativity and insight and is a system of thought that can be learned. An analogy is a simple story that becomes an analogy only when it is compared to the current problematic situation. The story employed must have a process that we can follow, understand easily and apply to the present circumstance. For example, you might criticise a tradesperson for creating such a mess in your home, and he may suggest that to make an omelette he has to break some eggs.

Yeah, says you. Just please don't break them all over the good carpet!

Analogical Thinking Experiment

In 1980, Mary Gick and Keith Holyoak at the University of Michigan investigated the role of analogical thinking in the psychological mechanisms that underlie creative insight. In their study 8 they noted anecdotal reports from creative scientists and mathematicians suggesting that the development of new theories often depends on noticing and applying an analogy drawn from a different domain of knowledge. Analogies cited include the hydraulic model of the blood circulatory system and the planetary model of the atomic structure of matter.

The fortress story used in the analogical thinking experiment

In their experiment, Gick and Holyoak presented subjects first with a military story. In the story, an army General wishes to capture a fortress located in the centre of a country to which there are several access roads. All have been mined so that while small groups of men can pass through safely, a large number will detonate the mines. A full-scale direct attack is therefore impossible. The General’s solution is to divide his army into small groups, send each group to the head of a different road, and have the groups converge simultaneously on the fortress.

Participants were then asked to find a solution to the following medical problem:

A doctor is faced with a patient who has a malignant tumour in his stomach. It is impossible to operate on the patient, but unless the tumour is destroyed the patient will die. There is an x-ray that can be used to destroy the tumour but unfortunately, at the required intensity, the surrounding healthy tissue will also be destroyed. At a lower intensity, the rays are harmless to healthy tissue, but they will not affect the tumour either. What type of procedure might be used to destroy the tumour with the rays, and at the same time avoid killing the healthy tissue?

The Results

The researchers were interested to know how participants would represent the analogical relationship between the story and the problem and generate a workable solution. For participants who didn't receive the military story, only 10% managed to generate the solution to the problem. This percentage rose to 30% for those who received the story in advance of the problem. Interestingly, the result climbed to 75% when participants read more than one analogous story.

Results from the study provide experimental evidence that solutions to problems can be generated using an analogous problem from a very different domain. However, the researchers caution that solving problems by analogy may not deliver positive results where the problems are more complex.

Success is also dependent on the individual's exposure to similar conditions in the past, with increased exposure likely to yield more consistent results in solving similar problems.

The Apple Analogy

My sons are aged 11 and 12, and they regularly find challenges with mathematics, just like most kids do. Mathematics is an abstract system of thinking and I can understand the difficulty children may have from time to time getting to grips with it. The terminology is alien and they need to build out concepts and schemas for what is essentially a new and complex language.

They are learning how to work with fractions, percentages and ratios and most of the time they navigate their way successfully, but occasionally they get stumped and ask for help. When they do I always bring in the apple analogy.

One maths question asked my son to divide an amount of money between John and Edward in the ratio of 12 to 9 respectively. My son reckoned that wasn't a fair split. I told him John worked harder than Edward and we proceeded.

I asked him first to consider the amount of money as an apple, and asked him what we would need to do to share the apple so that John got 12 pieces and Edward got 9. He correctly said: slice the apple into 21 equal pieces, give John 12 and Edward 9. So now, I said, can we split this money up in the same way? We were on the pig's back.
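
The apple analogy is just the ratio arithmetic made concrete. Here is a minimal sketch in Python; the amount of 210 is an invented figure for illustration, not one taken from the maths question.

```python
def split_in_ratio(amount: float, parts: list[int]) -> list[float]:
    """Divide `amount` into shares proportional to `parts` (the apple slices)."""
    total_slices = sum(parts)            # 12 + 9 = 21 equal slices
    slice_value = amount / total_slices  # value of one slice
    return [slice_value * p for p in parts]

john_share, edward_share = split_in_ratio(210, [12, 9])
print(john_share, edward_share)  # 120.0 90.0 -> John gets 12 slices, Edward gets 9
```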

I always use the apple analogy for the kids' maths problems and it works very well.

Final Thoughts

I remember, about 10 years ago, my business was in the toilet and I was under enormous financial stress. Every day was a fight with myself and everyone around me. Most days I managed things as well as possible, but other days I was beaten. I can safely say that no amount of input from those who could see what I couldn't, and no amount of analogical thinking, would have helped me. I was in a prolonged state of hyperactivity and hyper-awareness of the problems. Neurochemically, my brain simply could not operate in my favour. When I look back now, I realise that that set of circumstances simply needed to burn itself out.

Actively trying to solve an apparent problem can often be problematic in itself. By virtue of our focus on the problem, we often can't see the solutions, and no amount of thinking can relieve us of the predicament. Analogical thinking has a firm place in creative pursuits; however, it can only be successfully employed when we are in a calm and collected state of mind.

Therefore, I believe that our job, in performing to the highest level no matter what our domain of expertise, is to cultivate a stable and measured state of mind. In that place, we can encourage access to parts of the mind that lie beyond our conscious thought and receive answers to life's most complex problems.

Article references

  1. Wildflowers of Ireland. (n.d.). Cleavers. Retrieved May 12, 2019, from http://www.wildflowersofireland.net/plant_detail.php?id_flower=64&wildflower=Cleavers
  2. Lemelson-MIT Program. (n.d.). George de Mestral. Retrieved May 12, 2019, from https://lemelson.mit.edu/resources/george-de-mestral
  3. The Remarkable Cocklebur. (n.d.). Retrieved May 12, 2019, from https://www2.palomar.edu/users/warmstrong/plapr98.htm
  4. Swearingen, J. (n.d.). An Idea That Stuck: How George de Mestral Invented the Velcro Fastener. Retrieved May 12, 2019, from http://nymag.com/vindicated/2016/11/an-idea-that-stuck-how-george-de-mestral-invented-velcro.html
  5. Bartha, P. (2019, January 25). Analogy and Analogical Reasoning. Stanford Encyclopedia of Philosophy. Retrieved May 12, 2019, from https://plato.stanford.edu/entries/reasoning-analogy/
  6. de Bono, E. (n.d.). Dr. Edward de Bono. Retrieved May 13, 2019, from https://www.edwdebono.com
  7. de Bono, E. (2016). Lateral Thinking: A Textbook of Creativity. London: Penguin Life.
  8. Gick, M. L., & Holyoak, K. J. (1980). Analogical problem solving. Cognitive Psychology, 12(3), 306–355.


Thinking Directions

Using Analogies for Creative Problem Solving



When you are stuck on a problem and need some new ideas, you can get creative ideas by making analogies to some other field.

An analogy is an abstract parallel between two quite different things. For example, you might analogize driving to project management. In both cases it helps to have a map (i.e., a plan) for where you’re going.

When you find one parallel, you can often find others, which is why analogies help with creativity.

For example, suppose you were a manager with an employee who was causing problems, and you were looking for ways of dealing with him. You might get some ideas by comparison to other human relationships. You might use strategies that parents use to manage children, if they were appropriate. Or you might adapt military management techniques for civilian use.

But if you are looking for something new, it pays to go farther afield. Suppose you were to compare the problem employee with a problem program on your computer. Here are four things you might do to deal with the problem program:

a) uninstall the program and use a competitor

b) reinstall the program fresh

c) upgrade the program

d) check users’ groups on the web for plugins or settings to get help with the problem

To complete the analogy, translate these back into suggestions for dealing with the employee:

a) fire the employee

c) send the employee to training

d) ask around on discussion groups for suggestions for dealing with this particular problem

Of these, “reinstall the program fresh” didn’t have an obvious counterpart, so that case warrants more thinking. Here are three things that “reinstall the program” could suggest for dealing with the employee:

  • From the word “reinstall”: Write up a description of model employee behavior, then have a private talk with your employee to see if he’ll start anew and commit to this behavior.
  • From the word “fresh”: Find a different position in the company which is a better fit for the employee.
  • From the fact that reinstalling the program removes corrupted files: Make a list of all the prejudices and negative generalizations you’ve made about this employee and do some soul-searching on whether you’ve been fair and whether you’ve contributed to the problem. Then talk with the employee about your findings.

None of these are point-for-point analogies to reinstalling a program. But when you are using analogies to generate ideas, you don’t need to be that exact. The test is not whether the analogy passes a strict test, but whether you got a helpful idea.
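
As a small sketch of the mechanics, the exercise amounts to pushing each source-domain action through a mapping and treating the untranslatable ones as prompts for further thinking. The wording of the entries below paraphrases the lists above and is purely illustrative.

```python
# Source-domain actions for the "problem program".
source_actions = [
    "uninstall the program and use a competitor",
    "reinstall the program fresh",
    "upgrade the program",
    "check users' groups on the web for help",
]

# Known translations into the employee domain. The deliberately missing
# entry is the interesting one: it is where the analogy forces new thinking.
translations = {
    "uninstall the program and use a competitor": "fire the employee",
    "upgrade the program": "send the employee to training",
    "check users' groups on the web for help":
        "ask around on discussion groups for suggestions",
}

for action in source_actions:
    suggestion = translations.get(action)
    if suggestion:
        print(f"{action}  ->  {suggestion}")
    else:
        print(f"{action}  ->  (no obvious counterpart; think harder here)")
```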

For more ideas on how to use analogies in thinking and communicating, see Anne Miller’s book “Metaphorically Selling.”




Analogy and Analogical Reasoning

An analogy is a comparison between two objects, or systems of objects, that highlights respects in which they are thought to be similar. Analogical reasoning is any type of thinking that relies upon an analogy. An analogical argument is an explicit representation of a form of analogical reasoning that cites accepted similarities between two systems to support the conclusion that some further similarity exists. In general (but not always), such arguments belong in the category of ampliative reasoning, since their conclusions do not follow with certainty but are only supported with varying degrees of strength. However, the proper characterization of analogical arguments is subject to debate (see §2.2 ).

Analogical reasoning is fundamental to human thought and, arguably, to some nonhuman animals as well. Historically, analogical reasoning has played an important, but sometimes mysterious, role in a wide range of problem-solving contexts. The explicit use of analogical arguments, since antiquity, has been a distinctive feature of scientific, philosophical and legal reasoning. This article focuses primarily on the nature, evaluation and justification of analogical arguments. Related topics include metaphor, models in science, and precedent and analogy in legal reasoning.

1. Introduction: the many roles of analogy


Analogies are widely recognized as playing an important heuristic role, as aids to discovery. They have been employed, in a wide variety of settings and with considerable success, to generate insight and to formulate possible solutions to problems. According to Joseph Priestley, a pioneer in chemistry and electricity,

analogy is our best guide in all philosophical investigations; and all discoveries, which were not made by mere accident, have been made by the help of it. (1769/1966: 14)

Priestley may be over-stating the case, but there is no doubt that analogies have suggested fruitful lines of inquiry in many fields. Because of their heuristic value, analogies and analogical reasoning have been a particular focus of AI research. Hájek (2018) examines analogy as a heuristic tool in philosophy.

Example 1 . Hydrodynamic analogies exploit mathematical similarities between the equations governing ideal fluid flow and torsional problems. To predict stresses in a planned structure, one can construct a fluid model, i.e., a system of pipes through which water passes (Timoshenko and Goodier 1970). Within the limits of idealization, such analogies allow us to make demonstrative inferences, for example, from a measured quantity in the fluid model to the analogous value in the torsional problem. In practice, there are numerous complications (Sterrett 2006).

At the other extreme, an analogical argument may provide very weak support for its conclusion, establishing no more than minimal plausibility. Consider:

Example 2 . Thomas Reid’s (1785) argument for the existence of life on other planets (Stebbing 1933; Mill 1843/1930; Robinson 1930; Copi 1961). Reid notes a number of similarities between Earth and the other planets in our solar system: all orbit and are illuminated by the sun; several have moons; all revolve on an axis. In consequence, he concludes, it is “not unreasonable to think, that those planets may, like our earth, be the habitation of various orders of living creatures” (1785: 24).

Such modesty is not uncommon. Often the point of an analogical argument is just to persuade people to take an idea seriously. For instance:

Example 3 . Darwin takes himself to be using an analogy between artificial and natural selection to argue for the plausibility of the latter:

Why may I not invent the hypothesis of Natural Selection (which from the analogy of domestic productions, and from what we know of the struggle of existence and of the variability of organic beings, is, in some very slight degree, in itself probable) and try whether this hypothesis of Natural Selection does not explain (as I think it does) a large number of facts…. ( Letter to Henslow , May 1860 in Darwin 1903)

Here it appears, by Darwin’s own admission, that his analogy is employed to show that the hypothesis is probable to some “slight degree” and thus merits further investigation. Some, however, reject this characterization of Darwin’s reasoning (Richards 1997; Gildenhuys 2004).

Sometimes analogical reasoning is the only available form of justification for a hypothesis. The method of ethnographic analogy is used to interpret

the nonobservable behaviour of the ancient inhabitants of an archaeological site (or ancient culture) based on the similarity of their artifacts to those used by living peoples. (Hunter and Whitten 1976: 147)

For example:

Example 4 . Shelley (1999, 2003) describes how ethnographic analogy was used to determine the probable significance of odd markings on the necks of Moche clay pots found in the Peruvian Andes. Contemporary potters in Peru use these marks (called sígnales ) to indicate ownership; the marks enable them to reclaim their work when several potters share a kiln or storage facility. Analogical reasoning may be the only avenue of inference to the past in such cases, though this point is subject to dispute (Gould and Watson 1982; Wylie 1982, 1985). Analogical reasoning may have similar significance for cosmological phenomena that are inaccessible due to limits on observation (Dardashti et al. 2017). See §5.1 for further discussion.

As philosophers and historians such as Kuhn (1996) have repeatedly pointed out, there is not always a clear separation between the two roles that we have identified, discovery and justification. Indeed, the two functions are blended in what we might call the programmatic (or paradigmatic ) role of analogy: over a period of time, an analogy can shape the development of a program of research. For example:

Example 5 . An ‘acoustical analogy’ was employed for many years by certain nineteenth-century physicists investigating spectral lines. Discrete spectra were thought to be

completely analogous to the acoustical situation, with atoms (and/or molecules) serving as oscillators originating or absorbing the vibrations in the manner of resonant tuning forks. (Maier 1981: 51)

Guided by this analogy, physicists looked for groups of spectral lines that exhibited frequency patterns characteristic of a harmonic oscillator. This analogy served not only to underwrite the plausibility of conjectures, but also to guide and limit discovery by pointing scientists in certain directions.

More generally, analogies can play an important programmatic role by guiding conceptual development (see §5.2 ). In some cases, a programmatic analogy culminates in the theoretical unification of two different areas of inquiry.

Example 6 . Descartes’s (1637/1954) correlation between geometry and algebra provided methods for systematically handling geometrical problems that had long been recognized as analogous. A very different relationship between analogy and discovery exists when a programmatic analogy breaks down, as was the ultimate fate of the acoustical analogy. That atomic spectra have an entirely different explanation became clear with the advent of quantum theory. In this case, novel discoveries emerged against background expectations shaped by the guiding analogy. There is a third possibility: an unproductive or misleading programmatic analogy may simply become entrenched and self-perpetuating as it leads us to “construct… data that conform to it” (Stepan 1996: 133). Arguably, the danger of this third possibility provides strong motivation for developing a critical account of analogical reasoning and analogical arguments.

Analogical cognition , which embraces all cognitive processes involved in discovering, constructing and using analogies, is broader than analogical reasoning (Hofstadter 2001; Hofstadter and Sander 2013). Understanding these processes is an important objective of current cognitive science research, and an objective that generates many questions. How do humans identify analogies? Do nonhuman animals use analogies in ways similar to humans? How do analogies and metaphors influence concept formation?

This entry, however, concentrates specifically on analogical arguments. Specifically, it focuses on three central epistemological questions:

  • What criteria should we use to evaluate analogical arguments?
  • What philosophical justification can be provided for analogical inferences?
  • How do analogical arguments fit into a broader inferential context (i.e., how do we combine them with other forms of inference), especially theoretical confirmation?

Following a preliminary discussion of the basic structure of analogical arguments, the entry reviews selected attempts to provide answers to these three questions. To find such answers would constitute an important first step towards understanding the nature of analogical reasoning. To isolate these questions, however, is to make the non-trivial assumption that there can be a theory of analogical arguments —an assumption which, as we shall see, is attacked in different ways by both philosophers and cognitive scientists.

2. Analogical arguments

2.1 Examples

Analogical arguments vary greatly in subject matter, strength and logical structure. In order to appreciate this variety, it is helpful to increase our stock of examples. First, a geometric example:

Example 7 (Rectangles and boxes). Suppose that you have established that of all rectangles with a fixed perimeter, the square has maximum area. By analogy, you conjecture that of all boxes with a fixed surface area, the cube has maximum volume.

Two examples from the history of science:

Example 8 (Morphine and meperidine). In 1934, the pharmacologist Schaumann was testing synthetic compounds for their anti-spasmodic effect. These drugs had a chemical structure similar to morphine. He observed that one of the compounds— meperidine , also known as Demerol —had a physical effect on mice that was previously observed only with morphine: it induced an S-shaped tail curvature. By analogy, he conjectured that the drug might also share morphine’s narcotic effects. Testing on rats, rabbits, dogs and eventually humans showed that meperidine, like morphine, was an effective pain-killer (Lembeck 1989: 11; Reynolds and Randall 1975: 273).

Example 9 (Priestley on electrostatic force). In 1769, Priestley suggested that the absence of electrical influence inside a hollow charged spherical shell was evidence that charges attract and repel with an inverse square force. He supported his hypothesis by appealing to the analogous situation of zero gravitational force inside a hollow shell of uniform density.

Finally, an example from legal reasoning:

Example 10 (Duty of reasonable care). In a much-cited case ( Donoghue v. Stevenson 1932 AC 562), the United Kingdom House of Lords found the manufacturer of a bottle of ginger beer liable for damages to a consumer who became ill as a result of a dead snail in the bottle. The court argued that the manufacturer had a duty to take “reasonable care” in creating a product that could foreseeably result in harm to the consumer in the absence of such care, and where the consumer had no possibility of intermediate examination. The principle articulated in this famous case was extended, by analogy, to allow recovery for harm against an engineering firm whose negligent repair work caused the collapse of a lift ( Haseldine v. CA Daw & Son Ltd. 1941 2 KB 343). By contrast, the principle was not applicable to a case where a workman was injured by a defective crane, since the workman had opportunity to examine the crane and was even aware of the defects ( Farr v. Butters Brothers & Co. 1932 2 KB 606).

2.2 Characterization

What, if anything, do all of these examples have in common? We begin with a simple, quasi-formal characterization. Similar formulations are found in elementary critical thinking texts (e.g., Copi and Cohen 2005) and in the literature on argumentation theory (e.g., Govier 1999, Guarini 2004, Walton and Hyra 2018). An analogical argument has the following form:

  • \(S\) is similar to \(T\) in certain (known) respects.
  • \(S\) has some further feature \(Q\).
  • Therefore, \(T\) also has the feature \(Q\), or some feature \(Q^*\) similar to \(Q\).

(1) and (2) are premises. (3) is the conclusion of the argument. The argument form is ampliative ; the conclusion is not guaranteed to follow from the premises.

\(S\) and \(T\) are referred to as the source domain and target domain , respectively. A domain is a set of objects, properties, relations and functions, together with a set of accepted statements about those objects, properties, relations and functions. More formally, a domain consists of a set of objects and an interpreted set of statements about them. The statements need not belong to a first-order language, but to keep things simple, any formalizations employed here will be first-order. We use unstarred symbols \((a, P, R, f)\) to refer to items in the source domain and starred symbols \((a^*, P^*, R^*, f^*)\) to refer to corresponding items in the target domain. In Example 9 , the source domain items pertain to gravitation; the target items pertain to electrostatic attraction.

Formally, an analogy between \(S\) and \(T\) is a one-to-one mapping between objects, properties, relations and functions in \(S\) and those in \(T\). Not all of the items in \(S\) and \(T\) need to be placed in correspondence. Commonly, the analogy only identifies correspondences between a select set of items. In practice, we specify an analogy simply by indicating the most significant similarities (and sometimes differences).
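
A rough data-structure sketch of this set-up, with items paraphrased from Example 9, may help fix ideas; the Python representation below is an illustrative assumption, not the entry's formalism.

```python
from dataclasses import dataclass, field

@dataclass
class Domain:
    """A domain: a set of objects plus accepted statements about them."""
    objects: set[str]
    statements: set[str] = field(default_factory=set)

source = Domain(
    objects={"hollow shell of uniform density", "mass", "gravitational force"},
    statements={"the gravitational force inside a hollow shell is zero"},
)
target = Domain(
    objects={"hollow charged spherical shell", "charge", "electrostatic force"},
    statements={"there is no electrical influence inside a hollow charged shell"},
)

# An analogy: a one-to-one correspondence over a *selected* subset of items;
# not every item in S or T needs a correspondent.
analogy = {
    "mass": "charge",
    "gravitational force": "electrostatic force",
    "hollow shell of uniform density": "hollow charged spherical shell",
}

# One-to-one: no two source items are mapped to the same target item.
assert len(set(analogy.values())) == len(analogy)
```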

We can improve on this preliminary characterization of the argument from analogy by introducing the tabular representation found in Hesse (1966). We place corresponding objects, properties, relations and propositions side-by-side in a table of two columns, one for each domain. For instance, Reid's argument (Example 2) can be represented as follows (using \(\Rightarrow\) for the analogical inference):

\( \begin{array}{l|l} \textbf{Earth } (S) & \textbf{Mars } (T) \\ \hline \text{orbits the sun} & \text{orbits the sun} \\ \text{is illuminated by the sun} & \text{is illuminated by the sun} \\ \text{has a moon} & \text{has moons} \\ \text{revolves on an axis} & \text{revolves on an axis} \\ \text{supports life} & \Rightarrow \text{may support life} \end{array} \)

Hesse introduced useful terminology based on this tabular representation. The horizontal relations in an analogy are the relations of similarity (and difference) in the mapping between domains, while the vertical relations are those between the objects, relations and properties within each domain. The correspondence (similarity) between earth’s having a moon and Mars’ having moons is a horizontal relation; the causal relation between having a moon and supporting life is a vertical relation within the source domain (with the possibility of a distinct such relation existing in the target as well).

In an earlier discussion of analogy, Keynes (1921) introduced some terminology that is also helpful.

Positive analogy . Let \(P\) stand for a list of accepted propositions \(P_1 , \ldots ,P_n\) about the source domain \(S\). Suppose that the corresponding propositions \(P^*_1 , \ldots ,P^*_n\), abbreviated as \(P^*\), are all accepted as holding for the target domain \(T\), so that \(P\) and \(P^*\) represent accepted (or known) similarities. Then we refer to \(P\) as the positive analogy .

Negative analogy . Let \(A\) stand for a list of propositions \(A_1 , \ldots ,A_r\) accepted as holding in \(S\), and \(B^*\) for a list \(B_1^*, \ldots ,B_s^*\) of propositions holding in \(T\). Suppose that the analogous propositions \(A^* = A_1^*, \ldots ,A_r^*\) fail to hold in \(T\), and similarly the propositions \(B = B_1 , \ldots ,B_s\) fail to hold in \(S\), so that \(A, {\sim}A^*\) and \({\sim}B, B^*\) represent accepted (or known) differences. Then we refer to \(A\) and \(B\) as the negative analogy .

Neutral analogy . The neutral analogy consists of accepted propositions about \(S\) for which it is not known whether an analogue holds in \(T\).

Finally we have:

Hypothetical analogy . The hypothetical analogy is simply the proposition \(Q\) in the neutral analogy that is the focus of our attention.
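
In the same toy spirit as the earlier sketch, Keynes's partition can be read off mechanically. The propositions below paraphrase Example 2, and the code is only an illustration of the definitions (it computes the \(A\)-side of the negative analogy only).

```python
source_accepted = {"orbits the sun", "has a moon", "revolves on an axis", "supports life"}
target_accepted = {"orbits the sun", "has moons", "revolves on an axis"}
target_rejected: set[str] = set()     # analogues known to fail in the target

correspond = {                        # P -> P*
    "orbits the sun": "orbits the sun",
    "has a moon": "has moons",
    "revolves on an axis": "revolves on an axis",
    "supports life": "may support life",
}

positive = {p for p in source_accepted if correspond[p] in target_accepted}
negative = {p for p in source_accepted if correspond[p] in target_rejected}
neutral = source_accepted - positive - negative

print(sorted(positive))  # ['has a moon', 'orbits the sun', 'revolves on an axis']
print(sorted(neutral))   # ['supports life'] -- its analogue is the hypothetical analogy Q*
```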

These concepts allow us to provide a characterization for an individual analogical argument that is somewhat richer than the original one.

An analogical argument may thus be summarized:

It is plausible that \(Q^*\) holds in the target, because of certain known (or accepted) similarities with the source domain, despite certain known (or accepted) differences.

In order for this characterization to be meaningful, we need to say something about the meaning of ‘plausibly.’ To ensure broad applicability over analogical arguments that vary greatly in strength, we interpret plausibility rather liberally as meaning ‘with some degree of support’. In general, judgments of plausibility are made after a claim has been formulated, but prior to rigorous testing or proof. The next sub-section provides further discussion.

Note that this characterization is incomplete in a number of ways. The manner in which we list similarities and differences, the nature of the correspondences between domains: these things are left unspecified. Nor does this characterization accommodate reasoning with multiple analogies (i.e., multiple source domains), which is ubiquitous in legal reasoning and common elsewhere. To characterize the argument form more fully, however, is not possible without either taking a step towards a substantive theory of analogical reasoning or restricting attention to certain classes of analogical arguments.

Arguments by analogy are extensively discussed within argumentation theory. There is considerable debate about whether they constitute a species of deductive inference (Govier 1999; Waller 2001; Guarini 2004; Kraus 2015). Argumentation theorists also make use of tools such as speech act theory (Bermejo-Luque 2012), argumentation schemes and dialogue types (Macagno et al. 2017; Walton and Hyra 2018) to distinguish different types of analogical argument.

Arguments by analogy are also discussed in the vast literature on scientific models and model-based reasoning, following the lead of Hesse (1966). Bailer-Jones (2002) draws a helpful distinction between analogies and models. While “many models have their roots in an analogy” (2002: 113) and analogy “can act as a catalyst to aid modeling,” Bailer-Jones observes that “the aim of modeling has nothing intrinsically to do with analogy.” In brief, models are tools for prediction and explanation, whereas analogical arguments aim at establishing plausibility. An analogy is evaluated in terms of source-target similarity, while a model is evaluated on how successfully it “provides access to a phenomenon in that it interprets the available empirical data about the phenomenon.” If we broaden our perspective beyond analogical arguments , however, the connection between models and analogies is restored. Nersessian (2009), for instance, stresses the role of analog models in concept-formation and other cognitive processes.

2.3 Plausibility

To say that a hypothesis is plausible is to convey that it has epistemic support: we have some reason to believe it, even prior to testing. An assertion of plausibility within the context of an inquiry typically has pragmatic connotations as well: to say that a hypothesis is plausible suggests that we have some reason to investigate it further. For example, a mathematician working on a proof regards a conjecture as plausible if it “has some chances of success” (Polya 1954 (v. 2): 148). On both points, there is ambiguity as to whether an assertion of plausibility is categorical or a matter of degree. These observations point to the existence of two distinct conceptions of plausibility, probabilistic and modal, either of which may reflect the intended conclusion of an analogical argument.

On the probabilistic conception, plausibility is naturally identified with rational credence (rational subjective degree of belief) and is typically represented as a probability. A classic expression may be found in Mill’s analysis of the argument from analogy in A System of Logic :

There can be no doubt that every resemblance [not known to be irrelevant] affords some degree of probability, beyond what would otherwise exist, in favour of the conclusion. (Mill 1843/1930: 333)

In the terminology introduced in §2.2, Mill’s idea is that each element of the positive analogy boosts the probability of the conclusion. Contemporary ‘structure-mapping’ theories ( §3.4 ) employ a restricted version: each structural similarity between two domains contributes to the overall measure of similarity, and hence to the strength of the analogical argument.

On the alternative modal conception, ‘it is plausible that \(p\)’ is not a matter of degree. The meaning, roughly speaking, is that there are sufficient initial grounds for taking \(p\) seriously, i.e., for further investigation (subject to feasibility and interest). Informally: \(p\) passes an initial screening procedure. There is no assertion of degree. Instead, ‘It is plausible that’ may be regarded as an epistemic modal operator that aims to capture a notion, prima facie plausibility, that is somewhat stronger than ordinary epistemic possibility. The intent is to single out \(p\) from an undifferentiated mass of ideas that remain bare epistemic possibilities. To illustrate: in 1769, Priestley’s argument ( Example 9 ), if successful, would establish the prima facie plausibility of an inverse square law for electrostatic attraction. The set of epistemic possibilities—hypotheses about electrostatic attraction compatible with knowledge of the day—was much larger. Individual analogical arguments in mathematics (such as Example 7 ) are almost invariably directed towards prima facie plausibility.

The modal conception figures importantly in some discussions of analogical reasoning. The physicist N. R. Campbell (1957) writes:

But in order that a theory may be valuable it must … display an analogy. The propositions of the hypothesis must be analogous to some known laws…. (1957: 129)

Commenting on the role of analogy in Fourier’s theory of heat conduction, Campbell writes:

Some analogy is essential to it; for it is only this analogy which distinguishes the theory from the multitude of others… which might also be proposed to explain the same laws. (1957: 142)

The interesting notion here is that of a “valuable” theory. We may not agree with Campbell that the existence of analogy is “essential” for a novel theory to be “valuable.” But consider the weaker thesis that an acceptable analogy is sufficient to establish that a theory is “valuable”, or (to qualify still further) that an acceptable analogy provides defeasible grounds for taking the theory seriously. (Possible defeaters might include internal inconsistency, inconsistency with accepted theory, or the existence of a (clearly superior) rival analogical argument.) The point is that Campbell, following the lead of 19 th century philosopher-scientists such as Herschel and Whewell, thinks that analogies can establish this sort of prima facie plausibility. Snyder (2006) provides a detailed discussion of the latter two thinkers and their ideas about the role of analogies in science.

In general, analogical arguments may be directed at establishing either sort of plausibility for their conclusions; they can have a probabilistic use or a modal use. Examples 7 through 9 are best interpreted as supporting modal conclusions. In those arguments, an analogy is used to show that a conjecture is worth taking seriously. To insist on putting the conclusion in probabilistic terms distracts attention from the point of the argument. The conclusion might be modeled (by a Bayesian) as having a certain probability value because it is deemed prima facie plausible, but not vice versa. Example 2 , perhaps, might be regarded as directed primarily towards a probabilistic conclusion.

There should be connections between the two conceptions. Indeed, we might think that the same analogical argument can establish both prima facie plausibility and a degree of probability for a hypothesis. But it is difficult to translate between epistemic modal concepts and probabilities (Cohen 1980; Douven and Williamson 2006; Huber 2009; Spohn 2009, 2012). We cannot simply take the probabilistic notion as the primitive one. It seems wise to keep the two conceptions of plausibility separate.

2.4 Analogical inference rules

Schema (4) is a template that represents all analogical arguments, good and bad. It is not an inference rule. Despite the confidence with which particular analogical arguments are advanced, nobody has ever formulated an acceptable rule, or set of rules, for valid analogical inferences. There is not even a plausible candidate. This situation is in marked contrast not only with deductive reasoning, but also with elementary forms of inductive reasoning, such as induction by enumeration.

Of course, it is difficult to show that no successful analogical inference rule will ever be proposed. But consider the following candidate, formulated using the concepts of schema (4) and taking us only a short step beyond that basic characterization.

(5) Suppose \(S\) and \(T\) are the source and target domains, that \(P_1 , \ldots ,P_n\) (with \(n \ge 1\)) represent the positive analogy, that \(A_1 , \ldots ,A_r\) and \({\sim}B_1 , \ldots ,{\sim}B_s\) represent the (possibly vacuous) negative analogy, and that \(Q\) represents the hypothetical analogy. In the absence of reasons for dismissing the analogical argument, infer that \(Q^*\) holds in the target domain with degree of support \(p > 0\), where \(p\) is an increasing function of \(n\) and a decreasing function of \(r\) and \(s\).

Rule (5) is modeled on the straight rule for enumerative induction and inspired by Mill’s view of analogical inference, as described in §2.3. We use the generic phrase ‘degree of support’ in place of probability, since other factors besides the analogical argument may influence our probability assignment for \(Q^*\).

It is pretty clear that (5) is a non-starter. The main problem is that the rule justifies too much. The only substantive requirement introduced by (5) is that there be a nonempty positive analogy. Plainly, there are analogical arguments that satisfy this condition but establish no prima facie plausibility and no measure of support for their conclusions.

Here is a simple illustration. Achinstein (1964: 328) observes that there is a formal analogy between swans and line segments if we take the relation ‘has the same color as’ to correspond to ‘is congruent with’. Both relations are reflexive, symmetric, and transitive. Yet it would be absurd to find positive support from this analogy for the idea that we are likely to find congruent lines clustered in groups of two or more, just because swans of the same color are commonly found in groups. The positive analogy is antecedently known to be irrelevant to the hypothetical analogy. In such a case, the analogical inference should be utterly rejected. Yet rule (5) would wrongly assign non-zero degree of support.

To generalize the difficulty: not every similarity increases the probability of the conclusion and not every difference decreases it. Some similarities and differences are known to be (or accepted as being) utterly irrelevant and should have no influence whatsoever on our probability judgments. To be viable, rule (5) would need to be supplemented with considerations of relevance , which depend upon the subject matter, historical context and logical details particular to each analogical argument. To search for a simple rule of analogical inference thus appears futile.

Carnap and his followers (Carnap 1980; Kuipers 1988; Niiniluoto 1988; Maher 2000; Romeijn 2006) have formulated principles of analogy for inductive logic, using Carnapian \(\lambda \gamma\) rules. Generally, this body of work relates to “analogy by similarity”, rather than the type of analogical reasoning discussed here. Romeijn (2006) maintains that there is a relation between Carnap’s concept of analogy and analogical prediction. His approach is a hybrid of Carnap-style inductive rules and a Bayesian model. Such an approach would need to be generalized to handle the kinds of arguments described in §2.1 . It remains unclear that the Carnapian approach can provide a general rule for analogical inference.

Norton (2010, and 2018—see Other Internet Resources) has argued that the project of formalizing inductive reasoning in terms of one or more simple formal schemata is doomed. His criticisms seem especially apt when applied to analogical reasoning. He writes:

If analogical reasoning is required to conform only to a simple formal schema, the restriction is too permissive. Inferences are authorized that clearly should not pass muster… The natural response has been to develop more elaborate formal templates… The familiar difficulty is that these embellished schema never seem to be quite embellished enough; there always seems to be some part of the analysis that must be handled intuitively without guidance from strict formal rules. (2018: 1)

Norton takes the point one step further, in keeping with his “material theory” of inductive inference. He argues that there is no universal logical principle that “powers” analogical inference “by asserting that things that share some properties must share others.” Rather, each analogical inference is warranted by some local constellation of facts about the target system that he terms “the fact of analogy”. These local facts are to be determined and investigated on a case by case basis.

To embrace a purely formal approach to analogy and to abjure formalization entirely are two extremes in a spectrum of strategies. There are intermediate positions. Most recent analyses (both philosophical and computational) have been directed towards elucidating criteria and procedures, rather than formal rules, for reasoning by analogy. So long as these are not intended to provide a universal ‘logic’ of analogy, there is room for such criteria even if one accepts Norton’s basic point. The next section discusses some of these criteria and procedures.

3. Criteria for evaluating analogical arguments

Logicians and philosophers of science have identified ‘textbook-style’ general guidelines for evaluating analogical arguments (Mill 1843/1930; Keynes 1921; Robinson 1930; Stebbing 1933; Copi and Cohen 2005; Moore and Parker 1998; Woods, Irvine, and Walton 2004). Here are some of the most important ones:

  • (G1) The more similarities (between two domains), the stronger the analogy.
  • (G2) The more differences, the weaker the analogy.
  • (G3) The greater the extent of our ignorance about the two domains, the weaker the analogy.
  • (G4) The weaker the conclusion, the more plausible the analogy.
  • (G5) Analogies involving causal relations are more plausible than those not involving causal relations.
  • (G6) Structural analogies are stronger than those based on superficial similarities.
  • (G7) The relevance of the similarities and differences to the conclusion (i.e., to the hypothetical analogy) must be taken into account.

These principles can be helpful, but are frequently too vague to provide much insight. How do we count similarities and differences in applying (G1) and (G2)? Why are the structural and causal analogies mentioned in (G5) and (G6) especially important, and which structural and causal features merit attention? More generally, in connection with the all-important (G7): how do we determine which similarities and differences are relevant to the conclusion? Furthermore, what are we to say about similarities and differences that have been omitted from an analogical argument but might still be relevant?

An additional problem is that the criteria can pull in different directions. To illustrate, consider Reid’s argument for life on other planets ( Example 2 ). Stebbing (1933) finds Reid’s argument “suggestive” and “not unplausible” because the conclusion is weak (G4), while Mill (1843/1930) appears to reject the argument on account of our vast ignorance of properties that might be relevant (G3).

There is a further problem that relates to the distinction just made (in §2.3 ) between two kinds of plausibility. Each of the above criteria apart from (G7) is expressed in terms of the strength of the argument, i.e., the degree of support for the conclusion. The criteria thus appear to presuppose the probabilistic interpretation of plausibility. The problem is that a great many analogical arguments aim to establish prima facie plausibility rather than any degree of probability. Most of the guidelines are not directly applicable to such arguments.

Aristotle sets the stage for all later theories of analogical reasoning. In his theoretical reflections on analogy and in his most judicious examples, we find a sober account that lays the foundation both for the commonsense guidelines noted above and for more sophisticated analyses.

Although Aristotle employs the term analogy ( analogia ) and discusses analogical predication, he never talks about analogical reasoning or analogical arguments per se. He does, however, identify two argument forms, the argument from example ( paradeigma ) and the argument from likeness ( homoiotes ), both closely related to what we would now recognize as an analogical argument.

The argument from example ( paradeigma ) is described in the Rhetoric and the Prior Analytics :

Enthymemes based upon example are those which proceed from one or more similar cases, arrive at a general proposition, and then argue deductively to a particular inference. ( Rhetoric 1402b15) Let \(A\) be evil, \(B\) making war against neighbours, \(C\) Athenians against Thebans, \(D\) Thebans against Phocians. If then we wish to prove that to fight with the Thebans is an evil, we must assume that to fight against neighbours is an evil. Conviction of this is obtained from similar cases, e.g., that the war against the Phocians was an evil to the Thebans. Since then to fight against neighbours is an evil, and to fight against the Thebans is to fight against neighbours, it is clear that to fight against the Thebans is an evil. ( Pr. An. 69a1)

Aristotle notes two differences between this argument form and induction (69a15ff.): it “does not draw its proof from all the particular cases” (i.e., it is not a “complete” induction), and it requires an additional (deductively valid) syllogism as the final step. The argument from example thus amounts to single-case induction followed by deductive inference. It has the following structure (using \(\supset\) for the conditional):

[Figure: structure of the argument from example, with \(S\) the source domain and \(T\) the target domain. \(P(S) \wedge Q(S)\) supports, by single-case induction (dashed arrow), the generalization \(\forall x(P(x) \supset Q(x))\); from this and \(P(T)\) we obtain \(P(T) \supset Q(T)\), and hence \(Q(T)\) (solid arrows).]

In the terminology of §2.2, \(P\) is the positive analogy and \(Q\) is the hypothetical analogy. In Aristotle’s example, \(S\) (the source) is war between Phocians and Thebans, \(T\) (the target) is war between Athenians and Thebans, \(P\) is war between neighbours, and \(Q\) is evil. The first inference (dashed arrow) is inductive; the second and third (solid arrows) are deductively valid.

The paradeigma has an interesting feature: it is amenable to an alternative analysis as a purely deductive argument form. Let us concentrate on Aristotle’s assertion, “we must assume that to fight against neighbours is an evil,” represented as \(\forall x(P(x) \supset Q(x))\). Instead of regarding this intermediate step as something reached by induction from a single case, we might instead regard it as a hidden presupposition. This transforms the paradeigma into a syllogistic argument with a missing or enthymematic premise, and our attention shifts to possible means for establishing that premise (with single-case induction as one such means). Construed in this way, Aristotle’s paradeigma argument foreshadows deductive analyses of analogical reasoning (see §4.1 ).

The argument from likeness ( homoiotes ) seems to be closer than the paradeigma to our contemporary understanding of analogical arguments. This argument form receives considerable attention in Topics I, 17 and 18 and again in VIII, 1. The most important passage is the following.

Try to secure admissions by means of likeness; for such admissions are plausible, and the universal involved is less patent; e.g. that as knowledge and ignorance of contraries is the same, so too perception of contraries is the same; or vice versa, that since the perception is the same, so is the knowledge also. This argument resembles induction, but is not the same thing; for in induction it is the universal whose admission is secured from the particulars, whereas in arguments from likeness, what is secured is not the universal under which all the like cases fall. ( Topics 156b10–17)

This passage occurs in a work that offers advice for framing dialectical arguments when confronting a somewhat skeptical interlocutor. In such situations, it is best not to make one’s argument depend upon securing agreement about any universal proposition. The argument from likeness is thus clearly distinct from the paradeigma , where the universal proposition plays an essential role as an intermediate step in the argument. The argument from likeness, though logically less straightforward than the paradeigma , is exactly the sort of analogical reasoning we want when we are unsure about underlying generalizations.

In Topics I 17, Aristotle states that any shared attribute contributes some degree of likeness. It is natural to ask when the degree of likeness between two things is sufficiently great to warrant inferring a further likeness. In other words, when does the argument from likeness succeed? Aristotle does not answer explicitly, but a clue is provided by the way he justifies particular arguments from likeness. As Lloyd (1966) has observed, Aristotle typically justifies such arguments by articulating a (sometimes vague) causal principle which governs the two phenomena being compared. For example, Aristotle explains the saltiness of the sea, by analogy with the saltiness of sweat, as a kind of residual earthy stuff exuded in natural processes such as heating. The common principle is this:

Everything that grows and is naturally generated always leaves a residue, like that of things burnt, consisting in this sort of earth. ( Mete 358a17)

From this method of justification, we might conjecture that Aristotle believes that the important similarities are those that enter into such general causal principles.

Summarizing, Aristotle’s theory provides us with four important and influential criteria for the evaluation of analogical arguments:

  • The strength of an analogy depends upon the number of similarities.
  • Similarity reduces to identical properties and relations.
  • Good analogies derive from underlying common causes or general laws.
  • A good analogical argument need not pre-suppose acquaintance with the underlying universal (generalization).

These four principles form the core of a common-sense model for evaluating analogical arguments (which is not to say that they are correct; indeed, the first three will shortly be called into question). The first, as we have seen, appears regularly in textbook discussions of analogy. The second is largely taken for granted, with important exceptions in computational models of analogy ( §3.4 ). Versions of the third are found in most sophisticated theories. The final point, which distinguishes the argument from likeness and the argument from example, is endorsed in many discussions of analogy (e.g., Quine and Ullian 1970).

A slight generalization of Aristotle’s first principle helps to prepare the way for discussion of later developments. As that principle suggests, Aristotle, in common with just about everyone else who has written about analogical reasoning, organizes his analysis of the argument form around overall similarity. In the terminology of §2.2, horizontal relationships drive the reasoning: the greater the overall similarity of the two domains, the stronger the analogical argument. Hume makes the same point, though stated negatively, in his Dialogues Concerning Natural Religion:

Wherever you depart, in the least, from the similarity of the cases, you diminish proportionably the evidence; and may at last bring it to a very weak analogy, which is confessedly liable to error and uncertainty. (1779/1947: 144)

Most theories of analogy agree with Aristotle and Hume on this general point. Disagreement relates to the appropriate way of measuring overall similarity. Some theories assign greatest weight to material analogy , which refers to shared, and typically observable, features. Others give prominence to formal analogy , emphasizing high-level structural correspondence. The next two sub-sections discuss representative accounts that illustrate these two approaches.

Hesse (1966) offers a sharpened version of Aristotle’s theory, specifically focused on analogical arguments in the sciences. She formulates three requirements that an analogical argument must satisfy in order to be acceptable:

  • Requirement of material analogy . The horizontal relations must include similarities between observable properties.
  • Causal condition . The vertical relations must be causal relations “in some acceptable scientific sense” (1966: 87).
  • No-essential-difference condition . The essential properties and causal relations of the source domain must not have been shown to be part of the negative analogy.

3.3.1 Requirement of material analogy

For Hesse, an acceptable analogical argument must include “observable similarities” between domains, which she refers to as material analogy. Material analogy is contrasted with formal analogy. Two domains are formally analogous if both are “interpretations of the same formal theory” (1966: 68). Nomic isomorphism (Hempel 1965) is a special case in which the physical laws governing two systems have identical mathematical form. Heat and fluid flow exhibit nomic isomorphism. A second example is the analogy between the flow of electric current in a wire and fluid in a pipe. Ohm’s law,

\[\Delta V = IR,\]

states that the voltage difference along a wire equals the current times a constant resistance. This has the same mathematical form as Poiseuille’s law (for ideal fluids),

\[\Delta p = k\dot{V},\]

which states that the pressure difference along a pipe equals the volumetric flow rate times a constant. Both of these systems can be represented by a common equation. While formal analogy is linked to common mathematical structure, it should not be limited to nomic isomorphism (Bartha 2010: 209). The idea of formal analogy generalizes to cases where there is a common mathematical structure between models for two systems. Bartha offers an even more liberal definition (2010: 195): “Two features are formally similar if they occupy corresponding positions in formally analogous theories. For example, pitch in the theory of sound corresponds to color in the theory of light.”
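To make the formal correspondence explicit, both laws can be written as instances of one schematic form (the pairing below is only an illustration of the idea, not Hesse’s notation):

\[\Delta V = R\,I \qquad\longleftrightarrow\qquad \Delta p = k\,\dot{V},\]

with voltage difference corresponding to pressure difference, current \(I\) to volumetric flow rate \(\dot{V}\), and resistance \(R\) to the constant \(k\). Formal analogy, in this sense, is a matter of occupying corresponding positions in equations of the same mathematical form.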

By contrast, material analogy consists of what Hesse calls “observable” or “pre-theoretic” similarities. These are horizontal relationships of similarity between properties of objects in the source and the target. Similarities between echoes (sound) and reflection (light), for instance, were recognized long before we had any detailed theories about these phenomena. Hesse (1966, 1988) regards such similarities as metaphorical relationships between the two domains and labels them “pre-theoretic” because they draw on personal and cultural experience. We have both material and formal analogies between sound and light, and it is significant for Hesse that the former are independent of the latter.

There are good reasons not to accept Hesse’s requirement of material analogy, construed in this narrow way. First, it is apparent that formal analogies are the starting point in many important inferences. That is certainly the case in mathematics, a field in which material analogy, in Hesse’s sense, plays no role at all. Analogical arguments based on formal analogy have also been extremely influential in physics (Steiner 1989, 1998).

In Norton’s broad sense, however, ‘material analogy’ simply refers to similarities rooted in factual knowledge of the source and target domains. With reference to this broader meaning, Hesse proposes two additional material criteria.

3.3.2 Causal condition

Hesse requires that the hypothetical analogy, the feature transferred to the target domain, be causally related to the positive analogy. In her words, the essential requirement for a good argument from analogy is “a tendency to co-occurrence”, i.e., a causal relationship. She states the requirement as follows:

The vertical relations in the model [source] are causal relations in some acceptable scientific sense, where there are no compelling a priori reasons for denying that causal relations of the same kind may hold between terms of the explanandum [target]. (1966: 87)

The causal condition rules out analogical arguments where there is no causal knowledge of the source domain. It derives support from the observation that many analogies do appear to involve a transfer of causal knowledge.

The causal condition is on the right track, but is arguably too restrictive. For example, it rules out analogical arguments in mathematics. Even if we limit attention to the empirical sciences, persuasive analogical arguments may be founded upon strong statistical correlation in the absence of any known causal connection. Consider ( Example 11 ) Benjamin Franklin’s prediction, in 1749, that pointed metal rods would attract lightning, by analogy with the way they attracted the “electrical fluid” in the laboratory:

Electrical fluid agrees with lightning in these particulars: 1. Giving light. 2. Colour of the light. 3. Crooked direction. 4. Swift motion. 5. Being conducted by metals. 6. Crack or noise in exploding. 7. Subsisting in water or ice. 8. Rending bodies it passes through. 9. Destroying animals. 10. Melting metals. 11. Firing inflammable substances. 12. Sulphureous smell.—The electrical fluid is attracted by points.—We do not know whether this property is in lightning.—But since they agree in all the particulars wherein we can already compare them, is it not probable they agree likewise in this? Let the experiment be made. ( Benjamin Franklin’s Experiments , 334)

Franklin’s hypothesis was based on a long list of properties common to the target (lightning) and source (electrical fluid in the laboratory). There was no known causal connection between the twelve “particulars” and the thirteenth property, but there was a strong correlation. Analogical arguments may be plausible even where there are no known causal relations.

3.3.3 No-essential-difference condition

Hesse’s final requirement is that the “essential properties and causal relations of the [source] have not been shown to be part of the negative analogy” (1966: 91). Hesse does not provide a definition of “essential,” but suggests that a property or relation is essential if it is “causally closely related to the known positive analogy.” For instance, an analogy with fluid flow was extremely influential in developing the theory of heat conduction. Once it was discovered that heat was not conserved, however, the analogy became unacceptable (according to Hesse) because conservation was so central to the theory of fluid flow.

This requirement, though once again on the right track, seems too restrictive. It can lead to the rejection of a good analogical argument. Consider the analogy between a two-dimensional rectangle and a three-dimensional box ( Example 7 ). Broadening Hesse’s notion, it seems that there are many ‘essential’ differences between rectangles and boxes. This does not mean that we should reject every analogy between rectangles and boxes out of hand. The problem derives from the fact that Hesse’s condition is applied to the analogy relation independently of the use to which that relation is put. What counts as essential should vary with the analogical argument. Absent an inferential context, it is impossible to evaluate the importance or ‘essentiality’ of similarities and differences.

Despite these weaknesses, Hesse’s ‘material’ criteria constitute a significant advance in our understanding of analogical reasoning. The causal condition and the no-essential-difference condition incorporate local factors, as urged by Norton, into the assessment of analogical arguments. These conditions, singly or taken together, imply that an analogical argument can fail to generate any support for its conclusion, even when there is a non-empty positive analogy. Hesse offers no theory about the ‘degree’ of analogical support. That makes her account one of the few that is oriented towards the modal, rather than probabilistic, use of analogical arguments ( §2.3 ).

Many people take the concept of model-theoretic isomorphism to set the standard for thinking about similarity and its role in analogical reasoning. They propose formal criteria for evaluating analogies, based on overall structural or syntactical similarity. Let us refer to theories oriented around such criteria as structuralist .

A number of leading computational models of analogy are structuralist. They are implemented in computer programs that begin with (or sometimes build) representations of the source and target domains, and then construct possible analogy mappings. Analogical inferences emerge as a consequence of identifying the ‘best mapping.’ In terms of criteria for analogical reasoning, there are two main ideas. First, the goodness of an analogical argument is based on the goodness of the associated analogy mapping . Second, the goodness of the analogy mapping is given by a metric that indicates how closely it approximates isomorphism.

The most influential structuralist theory has been Gentner’s structure-mapping theory, implemented in a program called the structure-mapping engine (SME). In its original form (Gentner 1983), the theory assesses analogies on purely structural grounds. Gentner asserts:

Analogies are about relations, rather than simple features. No matter what kind of knowledge (causal models, plans, stories, etc.), it is the structural properties (i.e., the interrelationships between the facts) that determine the content of an analogy. (Falkenhainer, Forbus, and Gentner 1989/90: 3)

In order to clarify this thesis, Gentner introduces a distinction between properties , or monadic predicates, and relations , which have multiple arguments. She further distinguishes among different orders of relations and functions, defined inductively (in terms of the order of the relata or arguments). The best mapping is determined by systematicity : the extent to which it places higher-order relations, and items that are nested in higher-order relations, in correspondence. Gentner’s Systematicity Principle states:

A predicate that belongs to a mappable system of mutually interconnecting relationships is more likely to be imported into the target than is an isolated predicate. (1983: 163)

A systematic analogy (one that places high-order relations and their components in correspondence) is better than a less systematic analogy. Hence, an analogical inference has a degree of plausibility that increases monotonically with the degree of systematicity of the associated analogy mapping. Gentner’s fundamental criterion for evaluating candidate analogies (and analogical inferences) thus depends solely upon the syntax of the given representations and not at all upon their content.
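To see what a purely syntactic criterion amounts to, here is a minimal sketch in Python (a toy illustration, not Gentner’s SME; the tuple representation, the predicates, and the weighting scheme are invented for the example). It scores a set of mapped facts by the order of the relations involved, so that nested, higher-order matches dominate isolated ones:

```python
# Toy systematicity-style score (an illustration, not Gentner's SME).
# A fact is a nested tuple: (predicate, arg1, arg2, ...); objects are strings.

def order(expr):
    """Order 0 for atomic objects; 1 + the maximum order of the arguments
    for relations, so relations over relations count as higher-order."""
    if isinstance(expr, str):
        return 0
    return 1 + max(order(arg) for arg in expr[1:])

def systematicity_score(mapped_facts):
    """Weight each mapped fact by 2**order, so matches embedded in
    higher-order relational structure dominate isolated feature matches."""
    return sum(2 ** order(fact) for fact in mapped_facts)

# Solar-system / atom style example (invented predicates):
first_order = [("GREATER", "mass_sun", "mass_planet"),
               ("REVOLVES", "planet", "sun")]
systematic = first_order + [("CAUSE", first_order[0], first_order[1])]

print(systematicity_score(first_order))  # 2 + 2 = 4
print(systematicity_score(systematic))   # 2 + 2 + 4 = 8
```

On this toy scoring, a mapping that also places the higher-order CAUSE relation in correspondence beats the same first-order matches taken in isolation, which is the intended effect of the Systematicity Principle.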

Later versions of the structure-mapping theory incorporate refinements (Forbus, Ferguson, and Gentner 1994; Forbus 2001; Forbus et al. 2007; Forbus et al. 2008; Forbus et al. 2017). For example, the earliest version of the theory is vulnerable to worries about hand-coded representations of source and target domains. Gentner and her colleagues have attempted to solve this problem in later work that generates LISP representations from natural language text (see Turney 2008 for a different approach).

The most important challenges for the structure-mapping approach relate to the Systematicity Principle itself. Does the value of an analogy derive entirely, or even chiefly, from systematicity? There appear to be two main difficulties with this view. First: it is not always appropriate to give priority to systematic, high-level relational matches. Material criteria, and notably what Gentner refers to as “superficial feature matches,” can be extremely important in some types of analogical reasoning, such as ethnographic analogies which are based, to a considerable degree, on surface resemblances between artifacts. Second and more significantly: systematicity seems to be at best a fallible marker for good analogies rather than the essence of good analogical reasoning.

Greater systematicity is neither necessary nor sufficient for a more plausible analogical inference. It is obvious that increased systematicity is not sufficient for increased plausibility. An implausible analogy can be represented in a form that exhibits a high degree of structural parallelism. High-order relations can come cheap, as we saw with Achinstein’s “swan” example ( §2.4 ).

More pointedly, increased systematicity is not necessary for greater plausibility. Indeed, in causal analogies, it may even weaken the inference. That is because systematicity takes no account of the type of causal relevance, positive or negative. McKay (1993) notes that microbes have been found in frozen lakes in Antarctica; by analogy, simple life forms might exist on Mars. Freezing temperatures are preventive or counteracting causes; they are negatively relevant to the existence of life. The climate of Mars was probably more favorable to life 3.5 billion years ago than it is today, because temperatures were warmer. Yet the analogy between Antarctica and present-day Mars is more systematic than the analogy between Antarctica and ancient Mars. According to the Systematicity Principle, the analogy with Antarctica provides stronger support for life on Mars today than it does for life on ancient Mars.

The point of this example is that increased systematicity does not always increase plausibility, and reduced systematicity does not always decrease it (see Lee and Holyoak 2008). The more general point is that systematicity can be misleading, unless we take into account the nature of the relationships between various factors and the hypothetical analogy. Systematicity does not magically produce or explain the plausibility of an analogical argument. When we reason by analogy, we must determine which features of both domains are relevant and how they relate to the analogical conclusion. There is no short-cut via syntax.

Schlimm (2008) offers an entirely different critique of the structure-mapping theory from the perspective of analogical reasoning in mathematics—a domain where one might expect a formal approach such as structure mapping to perform well. Schlimm introduces a simple distinction: a domain is object-rich if the number of objects is greater than the number of relations (and properties), and relation-rich otherwise. Proponents of the structure-mapping theory typically focus on relation-rich examples (such as the analogy between the solar system and the atom). By contrast, analogies in mathematics typically involve domains with an enormous number of objects (like the real numbers), but relatively few relations and functions (addition, multiplication, less-than).

Schlimm provides an example of an analogical reasoning problem in group theory that involves a single relation in each domain. In this case, attaining maximal systematicity is trivial. The difficulty is that, compatible with maximal systematicity, there are different ways in which the objects might be placed in correspondence. The structure-mapping theory appears to yield the wrong inference. We might put the general point as follows: in object-rich domains, systematicity ceases to be a reliable guide to plausible analogical inference.

3.5.1 Connectionist models

During the past thirty-five years, cognitive scientists have conducted extensive research on analogy. Gentner’s SME is just one of many computational theories, implemented in programs that construct and use analogies. Three helpful anthologies that span this period are Helman 1988; Gentner, Holyoak, and Kokinov 2001; and Kokinov, Holyoak, and Gentner 2009.

One predominant objective of this research has been to model the cognitive processes involved in using analogies. Early models tended to be oriented towards “understanding the basic constraints that govern human analogical thinking” (Hummel and Holyoak 1997: 458). Recent connectionist models have been directed towards uncovering the psychological mechanisms that come into play when we use analogies: retrieval of a relevant source domain, analogical mapping across domains, and transfer of information and learning of new categories or schemas.

In some cases, such as the structure-mapping theory (§3.4), this research overlaps directly with the normative questions that are the focus of this entry; indeed, Gentner’s Systematicity Principle may be interpreted normatively. In other cases, we might view the projects as displacing those traditional normative questions with up-to-date, computational forms of naturalized epistemology . Two approaches are singled out here because both raise important challenges to the very idea of finding sharp answers to those questions, and both suggest that connectionist models offer a more fruitful approach to understanding analogical reasoning.

The first is the constraint-satisfaction model (also known as the multiconstraint theory ), developed by Holyoak and Thagard (1989, 1995). Like Gentner, Holyoak and Thagard regard the heart of analogical reasoning as analogy mapping , and they stress the importance of systematicity, which they refer to as a structural constraint. Unlike Gentner, they acknowledge two additional types of constraints. Pragmatic constraints take into account the goals and purposes of the agent, recognizing that “the purpose will guide selection” of relevant similarities. Semantic constraints represent estimates of the degree to which people regard source and target items as being alike, rather like Hesse’s “pre-theoretic” similarities.

The novelty of the multiconstraint theory is that these structural , semantic and pragmatic constraints are implemented not as rigid rules, but rather as ‘pressures’ supporting or inhibiting potential pairwise correspondences. The theory is implemented in a connectionist program called ACME (Analogical Constraint Mapping Engine), which assigns an initial activation value to each possible pairing between elements in the source and target domains (based on semantic and pragmatic constraints), and then runs through cycles that update the activation values based on overall coherence (structural constraints). The best global analogy mapping emerges under the pressure of these constraints. Subsequent connectionist models, such as Hummel and Holyoak’s LISA program (1997, 2003), have made significant advances and hold promise for offering a more complete theory of analogical reasoning.
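The flavour of this constraint-satisfaction process can be conveyed with a minimal sketch in Python (a simplified illustration only, not the ACME program; the domain elements, support values, weights, and update rule are all invented assumptions). Each candidate source-target pairing is a node; competing pairings inhibit one another, compatible pairings excite one another, and the mapping is read off the settled activations:

```python
# Minimal constraint-satisfaction sketch (an illustration, not ACME).
import itertools

source = ["sun", "planet"]
target = ["nucleus", "electron"]

# Assumed "semantic/pragmatic" support for each pairing (invented numbers).
support = {
    ("sun", "nucleus"): 0.6, ("sun", "electron"): 0.1,
    ("planet", "nucleus"): 0.1, ("planet", "electron"): 0.6,
}

pairs = list(itertools.product(source, target))
activation = {p: 0.1 for p in pairs}

def weight(p, q):
    """Structural constraints as pressures: pairings that compete for the
    same source or target element inhibit each other; others mildly excite."""
    if p == q:
        return 0.0
    if p[0] == q[0] or p[1] == q[1]:
        return -0.3
    return 0.2

for _ in range(100):  # let the network settle
    new = {}
    for p in pairs:
        net = support[p] + sum(weight(p, q) * activation[q] for q in pairs)
        new[p] = min(1.0, max(0.0, 0.8 * activation[p] + 0.2 * net))
    activation = new

for s in source:
    best = max((p for p in pairs if p[0] == s), key=lambda p: activation[p])
    print(f"{s} -> {best[1]} (activation {activation[best]:.2f})")
```

Under these invented pressures the network settles on the pairings sun ↔ nucleus and planet ↔ electron, which illustrates how a coherent global mapping can emerge without any rigid rule being applied.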

The second example is Hofstadter and Mitchell’s Copycat program (Hofstadter 1995; Mitchell 1993). The program is “designed to discover insightful analogies, and to do so in a psychologically realistic way” (Hofstadter 1995: 205). Copycat operates in the domain of letter-strings. The program handles the following type of problem:

Suppose the letter-string abc were changed to abd ; how would you change the letter-string ijk in “the same way”?

Most people would answer ijl , since it is natural to think that abc was changed to abd by the “transformation rule”: replace the rightmost letter with its successor. Alternative answers are possible, but do not agree with most people’s sense of what counts as the natural analogy.
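The rule most people read off the example can be stated in a couple of lines of Python (a sketch of the target behaviour only; it has none of Copycat’s perceptual machinery and simply assumes the “replace the rightmost letter with its successor” reading):

```python
# The "natural" transformation rule behind abc -> abd (not Copycat itself).

def successor(ch):
    """Next letter in the alphabet (no wrap-around handling needed here)."""
    return chr(ord(ch) + 1)

def apply_rule(letters):
    """Replace the rightmost letter with its successor."""
    return letters[:-1] + successor(letters[-1])

print(apply_rule("abc"))  # abd
print(apply_rule("ijk"))  # ijl
```

The philosophical interest lies precisely in what this sketch leaves out: deciding that this rule, rather than, say, “replace the third letter with d”, is the natural one is the part Copycat models with fluid concepts and slippage.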

Hofstadter and Mitchell believe that analogy-making is in large part about the perception of novel patterns, and that such perception requires concepts with “fluid” boundaries. Genuine analogy-making involves “slippage” of concepts. The Copycat program combines a set of core concepts pertaining to letter-sequences ( successor , leftmost and so forth) with probabilistic “halos” that link distinct concepts dynamically. Orderly structures emerge out of random low-level processes and the program produces plausible solutions. Copycat thus shows that analogy-making can be modeled as a process akin to perception, even if the program employs mechanisms distinct from those in human perception.

The multiconstraint theory and Copycat share the idea that analogical cognition involves cognitive processes that operate below the level of abstract reasoning. Both computational models—to the extent that they are capable of performing successful analogical reasoning—challenge the idea that a successful model of analogical reasoning must take the form of a set of quasi-logical criteria. Efforts to develop a quasi-logical theory of analogical reasoning, it might be argued, have failed. In place of faulty inference schemes such as those described earlier ( §2.2 , §2.4 ), computational models substitute procedures that can be judged on their performance rather than on traditional philosophical standards.

In response to this argument, we should recognize the value of the connectionist models while acknowledging that we still need a theory that offers normative principles for evaluating analogical arguments. In the first place, even if the construction and recognition of analogies are largely a matter of perception, this does not eliminate the need for subsequent critical evaluation of analogical inferences. Second and more importantly, we need to look not just at the construction of analogy mappings but at the ways in which individual analogical arguments are debated in fields such as mathematics, physics, philosophy and the law. These high-level debates require reasoning that bears little resemblance to the computational processes of ACME or Copycat. (Ashley’s HYPO (Ashley 1990) is one example of a non-connectionist program that focuses on this aspect of analogical reasoning.) There is, accordingly, room for both computational and traditional philosophical models of analogical reasoning.

3.5.2 Articulation model

Most prominent theories of analogy, philosophical and computational, are based on overall similarity between source and target domains—defined in terms of some favoured subset of Hesse’s horizontal relations (see §2.2 ). Aristotle and Mill, whose approach is echoed in textbook discussions, suggest counting similarities. Hesse’s theory ( §3.3 ) favours “pre-theoretic” correspondences. The structure-mapping theory and its successors ( §3.4 ) look to systematicity, i.e., to correspondences involving complex, high-level networks of relations. In each of these approaches, the problem is twofold: overall similarity is not a reliable guide to plausibility, and it fails to explain the plausibility of any analogical argument.

Bartha’s articulation model (2010) proposes a different approach, beginning not with horizontal relations, but rather with a classification of analogical arguments on the basis of the vertical relations within each domain. The fundamental idea is that a good analogical argument must satisfy two conditions:

Prior Association . There must be a clear connection, in the source domain, between the known similarities (the positive analogy) and the further similarity that is projected to hold in the target domain (the hypothetical analogy). This relationship determines which features of the source are critical to the analogical inference.

Potential for Generalization . There must be reason to think that the same kind of connection could obtain in the target domain. More pointedly: there must be no critical disanalogy between the domains.

The first order of business is to make the prior association explicit. The standards of explicitness vary depending on the nature of this association (causal relation, mathematical proof, functional relationship, and so forth). The two general principles are fleshed out via a set of subordinate models that allow us to identify critical features and hence critical disanalogies.

To see how this works, consider Example 7 (Rectangles and boxes). In this analogical argument, the source domain is two-dimensional geometry: we know that of all rectangles with a fixed perimeter, the square has maximum area. The target domain is three-dimensional geometry: by analogy, we conjecture that of all boxes with a fixed surface area, the cube has maximum volume. This argument should be evaluated not by counting similarities, looking to pre-theoretic resemblances between rectangles and boxes, or constructing connectionist representations of the domains and computing a systematicity score for possible mappings. Instead, we should begin with a precise articulation of the prior association in the source domain, which amounts to a specific proof for the result about rectangles. We should then identify, relative to that proof, the critical features of the source domain: namely, the concepts and assumptions used in the proof. Finally, we should assess the potential for generalization: whether, in the three-dimensional setting, those critical features are known to lack analogues in the target domain. The articulation model is meant to reflect the conversations that can and do take place between an advocate and a critic of an analogical argument.
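To illustrate what such an articulation might look like, here is one standard way of making the source-domain proof explicit, via the arithmetic-geometric mean inequality (a sketch chosen for brevity, not necessarily the proof Bartha discusses). For a rectangle with sides \(a\) and \(b\) and fixed perimeter \(2(a+b)\),

\[ab \le \left(\frac{a+b}{2}\right)^2,\]

with equality exactly when \(a = b\); so the square maximizes area. The conjectured three-dimensional analogue runs through the same inequality applied to the three face areas of a box with edges \(a\), \(b\), \(c\) and fixed surface area \(2(ab+bc+ca)\):

\[(abc)^2 = (ab)(bc)(ca) \le \left(\frac{ab+bc+ca}{3}\right)^3,\]

with equality exactly when \(a = b = c\); so the cube maximizes volume. Relative to this articulation, the critical features are the quantities compared (perimeter and area, surface area and volume) and the inequality invoked, and the question of generalization is whether any of them is known to lack an analogue in the target.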

3.6.1 Norton’s material theory of analogy

As noted in §2.4 , Norton rejects analogical inference rules. But even if we agree with Norton on this point, we might still be interested in having an account that gives us guidelines for evaluating analogical arguments. How does Norton’s approach fare on this score?

According to Norton, each analogical argument is warranted by local facts that must be investigated and justified empirically. First, there is “the fact of the analogy”: in practice, a low-level uniformity that embraces both the source and target systems. Second, there are additional factual properties of the target system which, when taken together with the uniformity, warrant the analogical inference. Consider Galileo’s famous inference ( Example 12 ) that there are mountains on the moon (Galileo 1610). Through his newly invented telescope, Galileo observed points of light on the moon ahead of the advancing edge of sunlight. Noting that the same thing happens on earth when sunlight strikes the mountains, he concluded that there must be mountains on the moon and even provided a reasonable estimate of their height. In this example, Norton tells us, the fact of the analogy is that shadows and other optical phenomena are generated in the same way on the earth and on the moon; the additional fact about the target is the existence of points of light ahead of the advancing edge of sunlight on the moon.

What are the implications of Norton’s material theory when it comes to evaluating analogical arguments? The fact of the analogy is a local uniformity that powers the inference. Norton’s theory works well when such a uniformity is patent or naturally inferred. It doesn’t work well when the uniformity is itself the target (rather than the driver ) of the inference. That happens with explanatory analogies such as Example 5 (the Acoustical Analogy ), and mathematical analogies such as Example 7 ( Rectangles and Boxes ). Similarly, the theory doesn’t work well when the underlying uniformity is unclear, as in Example 2 ( Life on other Planets ), Example 4 ( Clay Pots ), and many other cases. In short, if Norton’s theory is accepted, then for most analogical arguments there are no useful evaluation criteria.

3.6.2 Field-specific criteria

For those who sympathize with Norton’s skepticism about universal inductive schemes and theories of analogical reasoning, yet recognize that his approach may be too local, an appealing strategy is to move up one level. We can aim for field-specific “working logics” (Toulmin 1958; Wylie and Chapman 2016; Reiss 2015). This approach has been adopted by philosophers of archaeology, evolutionary biology and other historical sciences (Wylie and Chapman 2016; Currie 2013; Currie 2016; Currie 2018). In place of schemas, we find ‘toolkits’, i.e., lists of criteria for evaluating analogical reasoning.

For example, Currie (2016) explores in detail the use of ethnographic analogy ( Example 13 ) between shamanistic motifs used by the contemporary San people and similar motifs in ancient rock art, found both among ancestors of the San (direct historical analogy) and in European rock art (indirect historical analogy). Analogical arguments support the hypothesis that in each of these cultures, rock art symbolizes hallucinogenic experiences. Currie examines criteria that focus on assumptions about stability of cultural traits and environment-culture relationships. Currie (2016, 2018) and Wylie (Wylie and Chapman 2016) also stress the importance of robustness reasoning that combines analogical arguments of moderate strength with other forms of evidence to yield strong conclusions.

Practice-based approaches can thus yield specific guidelines unlikely to be matched by any general theory of analogical reasoning. One caveat is worth mentioning. Field-specific criteria for ethnographic analogy are elicited against a background of decades of methodological controversy (Wylie and Chapman 2016). Critics and defenders of ethnographic analogy have appealed to general models of scientific method (e.g., hypothetico-deductive method or Bayesian confirmation). To advance the methodological debate, practice-based approaches must either make connections to these general models or explain why the lack of any such connection is unproblematic.

3.6.3 Formal analogies in physics

Close attention to analogical arguments in practice can also provide valuable challenges to general ideas about analogical inference. In an interesting discussion, Steiner (1989, 1998) suggests that many of the analogies that played a major role in early twentieth-century physics count as “Pythagorean.” The term is meant to connote mathematical mysticism: a “Pythagorean” analogy is a purely formal analogy, one founded on mathematical similarities that have no known physical basis at the time it is proposed. One example is Schrödinger’s use of analogy ( Example 14 ) to “guess” the form of the relativistic wave equation. In Steiner’s view, Schrödinger’s reasoning relies upon manipulations and substitutions based on purely mathematical analogies. Steiner argues that the success, and even the plausibility, of such analogies “evokes, or should evoke, puzzlement” (1989: 454). Both Hesse (1966) and Bartha (2010) reject the idea that a purely formal analogy, with no physical significance, can support a plausible analogical inference in physics. Thus, Steiner’s arguments provide a serious challenge.

Bartha (2010) suggests a response: we can decompose Steiner’s examples into two or more steps, and then establish that at least one step does, in fact, have a physical basis. Fraser (forthcoming), however, offers a counterexample that supports Steiner’s position. Complex analogies between classical statistical mechanics (CSM) and quantum field theory (QFT) have played a crucial role in the development and application of renormalization group (RG) methods in both theories ( Example 15 ). Fraser notes substantial physical disanalogies between CSM and QFT, and concludes that the reasoning is based entirely on formal analogies.

4. Philosophical foundations for analogical reasoning

What philosophical basis can be provided for reasoning by analogy? What justification can be given for the claim that analogical arguments deliver plausible conclusions? There have been several ideas for answering this question. One natural strategy assimilates analogical reasoning to some other well-understood argument pattern, a form of deductive or inductive reasoning ( §4.1 , §4.2 ). A few philosophers have explored the possibility of a priori justification ( §4.3 ). A pragmatic justification may be available for practical applications of analogy, notably in legal reasoning ( §4.4 ).

Any attempt to provide a general justification for analogical reasoning faces a basic dilemma. The demands of generality require a high-level formulation of the problem and hence an abstract characterization of analogical arguments, such as schema (4). On the other hand, as noted previously, many analogical arguments that conform to schema (4) are bad arguments. So a general justification of analogical reasoning cannot provide support for all arguments that conform to (4), on pain of proving too much. Instead, it must first specify a subset of putatively ‘good’ analogical arguments, and link the general justification to this specified subset. The problem of justification is linked to the problem of characterizing good analogical arguments . This difficulty afflicts some of the strategies described in this section.

Analogical reasoning may be cast in a deductive mold. If successful, this strategy neatly solves the problem of justification. A valid deductive argument is as good as it gets.

An early version of the deductivist approach is exemplified by Aristotle’s treatment of the argument from example ( §3.2 ), the paradeigma . On this analysis, an analogical argument between source domain \(S\) and target \(T\) begins with the assumption of positive analogy \(P(S)\) and \(P(T)\), as well as the additional information \(Q(S)\). It proceeds via the generalization \(\forall x(P(x) \supset Q(x))\) to the conclusion: \(Q(T)\). Provided we can treat that intermediate generalization as an independent premise, we have a deductively valid argument. Notice, though, that the existence of the generalization renders the analogy irrelevant. We can derive \(Q(T)\) from the generalization and \(P(T)\), without any knowledge of the source domain. The literature on analogy in argumentation theory ( §2.2 ) offers further perspectives on this type of analysis, and on the question of whether analogical arguments are properly characterized as deductive.

Some recent analyses follow Aristotle in treating analogical arguments as reliant upon extra (sometimes tacit) premises, typically drawn from background knowledge, that convert the inference into a deductively valid argument––but without making the source domain irrelevant. Davies and Russell introduce a version that relies upon what they call determination rules (Russell 1986; Davies and Russell 1987; Davies 1988). Suppose that \(Q\) and \(P_1 , \ldots ,P_m\) are variables, and we have background knowledge that the value of \(Q\) is determined by the values of \(P_1 , \ldots ,P_m\). In the simplest case, where \(m = 1\) and both \(P\) and \(Q\) are binary Boolean variables, this reduces to

\[\forall x\, \forall y\, [(P(x) \equiv P(y)) \supset (Q(x) \equiv Q(y))],\]

i.e., whether or not \(P\) holds determines whether or not \(Q\) holds. More generally, the form of a determination rule is

\[\forall x\, \forall y\, [(P_1(x) = P_1(y) \wedge \ldots \wedge P_m(x) = P_m(y)) \supset Q(x) = Q(y)],\]

i.e., \(Q\) is a function of \(P_1,\ldots\), \(P_m\). If we assume such a rule as part of our background knowledge, then an analogical argument with conclusion \(Q(T)\) is deductively valid. More precisely, and allowing for the case where \(Q\) is not a binary variable: if we have such a rule, and also premises stating that the source \(S\) agrees with the target \(T\) on all of the values \(P_i\), then we may validly infer that \(Q(T) = Q(S)\).

The “determination rule” analysis provides a clear and simple justification for analogical reasoning. Note that, in contrast to the Aristotelian analysis via the generalization \(\forall x(P(x) \supset Q(x))\), a determination rule does not trivialize the analogical argument. Only by combining the rule with information about the source domain can we derive the value of \(Q(T)\). To illustrate by adapting one of the examples given by Russell and Davies ( Example 16 ), let’s suppose that the value \((Q)\) of a used car (relative to a particular buyer) is determined by its year, make, mileage, condition, color and accident history (the variables \(P_i)\). It doesn’t matter if one or more of these factors are redundant or irrelevant. Provided two cars are indistinguishable on each of these points, they will have the same value. Knowledge of the source domain is necessary; we can’t derive the value of the second car from the determination rule alone. Weitzenfeld (1984) proposes a variant of this approach, advancing the slightly more general thesis that analogical arguments are deductive arguments with a missing (enthymematic) premise that amounts to a determination rule.
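A small sketch makes the licensed inference concrete (the attribute names and records below are invented for illustration; this is not Davies and Russell’s system):

```python
# Sketch of analogical inference licensed by a determination rule:
# if the target agrees with a known source on every determining
# attribute P_i, infer that it has the same value of Q.
# (Attribute names and records are invented for illustration.)

DETERMINING = ("year", "make", "mileage", "condition", "color", "accidents")

known_cars = [
    {"year": 2018, "make": "Civic", "mileage": 40000, "condition": "good",
     "color": "blue", "accidents": 0, "value": 15000},
]

def infer_value(target, sources, attrs=DETERMINING):
    """Return Q(T) = Q(S) for any source S that matches T on all attrs."""
    for s in sources:
        if all(s[a] == target[a] for a in attrs):
            return s["value"]
    return None  # no determination-rule inference available

target_car = {"year": 2018, "make": "Civic", "mileage": 40000,
              "condition": "good", "color": "blue", "accidents": 0}

print(infer_value(target_car, known_cars))  # 15000
```

The point of the sketch is that the rule plus the source record does real work: remove the source record and nothing follows, exactly as the contrast with the Aristotelian generalization requires.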

Do determination rules give us a solution to the problem of providing a justification for analogical arguments? In general: no. Analogies are commonly applied to problems such as Example 8 ( morphine and meperidine ), where we are not even aware of all relevant factors, let alone in possession of a determination rule. Medical researchers conduct drug tests on animals without knowing all attributes that might be relevant to the effects of the drug. Indeed, one of the main objectives of such testing is to guard against reactions unanticipated by theory. On the “determination rule” analysis, we must either limit the scope of such arguments to cases where we have a well-supported determination rule, or focus attention on formulating and justifying an appropriate determination rule. For cases such as animal testing, neither option seems realistic.

Recasting analogy as a deductive argument may help to bring out background assumptions, but it makes little headway with the problem of justification. That problem re-appears as the need to state and establish the plausibility of a determination rule, and that is at least as difficult as justifying the original analogical argument.

Some philosophers have attempted to portray, and justify, analogical reasoning in terms of some well-understood inductive argument pattern. There have been three moderately popular versions of this strategy. The first treats analogical reasoning as generalization from a single case. The second treats it as a kind of sampling argument. The third recognizes the argument from analogy as a distinctive form, but treats past successes as evidence for future success.

4.2.1 Single-case induction

Let’s reconsider Aristotle’s argument from example or paradeigma ( §3.2 ), but this time regard the generalization as justified via induction from a single case (the source domain). Can such a simple analysis of analogical arguments succeed? In general: no.

A single instance can sometimes lead to a justified generalization. Cartwright (1992) argues that we can sometimes generalize from a single careful experiment, “where we have sufficient control of the materials and our knowledge of the requisite background assumptions is secure” (51). Cartwright thinks that we can do this, for example, in experiments with compounds that have stable “Aristotelian natures.” In a similar spirit, Quine (1969) maintains that we can have instantial confirmation when dealing with natural kinds.

Even if we accept that there are such cases, the objection to understanding all analogical arguments as single-case induction is obvious: the view is simply too restrictive. Most analogical arguments will not meet the requisite conditions. We may not know that we are dealing with a natural kind or Aristotelian nature when we make the analogical argument. We may not know which properties are essential. An insistence on the ‘single-case induction’ analysis of analogical reasoning is likely to lead to skepticism (Agassi 1964, 1988).

Interpreting the argument from analogy as single-case induction is also counter-productive in another way. The simplistic analysis does nothing to advance the search for criteria that help us to distinguish between relevant and irrelevant similarities, and hence between good and bad analogical arguments.

4.2.2 Sampling arguments

On the sampling conception of analogical arguments, acknowledged similarities between two domains are treated as statistically relevant evidence for further similarities. The simplest version of the sampling argument is due to Mill (1843/1930). An argument from analogy, he writes, is “a competition between the known points of agreement and the known points of difference.” Agreement of \(A\) and \(B\) in 9 out of 10 properties implies a probability of 0.9 that \(B\) will possess any other property of \(A\): “we can reasonably expect resemblance in the same proportion” (367). His only restriction has to do with sample size: we must be relatively knowledgeable about both \(A\) and \(B\). Mill saw no difficulty in using analogical reasoning to infer characteristics of newly discovered species of plants or animals, given our extensive knowledge of botany and zoology. But if the extent of unascertained properties of \(A\) and \(B\) is large, similarity in a small sample would not be a reliable guide; hence, Mill’s dismissal of Reid’s argument about life on other planets ( Example 2 ).

The sampling argument is presented in more explicit mathematical form by Harrod (1956). The key idea is that the known properties of \(S\) (the source domain) may be considered a random sample of all \(S\)’s properties—random, that is, with respect to the attribute of also belonging to \(T\) (the target domain). If the majority of known properties that belong to \(S\) also belong to \(T\), then we should expect most other properties of \(S\) to belong to \(T\), for it is unlikely that we would have come to know just the common properties. In effect, Harrod proposes a binomial distribution, modeling ‘random selection’ of properties on random selection of balls from an urn.
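The urn picture can be made concrete with a toy simulation (an illustration of the sampling idea only, not Harrod’s actual formalism; the numbers of properties are invented): if the known properties of \(S\) really were drawn at random, the observed share of properties carried over to \(T\) would estimate the chance that a further property carries over.

```python
# Toy simulation of the urn picture behind sampling accounts of analogy
# (an illustration only, not Harrod's or Mill's actual formalism).
import random

random.seed(0)
TOTAL_PROPERTIES = 100   # total properties of the source S (invented)
SHARED = 80              # of which this many also belong to the target T
SAMPLE_SIZE = 10         # properties we happen to know about

population = [True] * SHARED + [False] * (TOTAL_PROPERTIES - SHARED)

def sampled_share():
    """Fraction of a random sample of known properties that carry over to T."""
    sample = random.sample(population, SAMPLE_SIZE)
    return sum(sample) / SAMPLE_SIZE

estimates = [sampled_share() for _ in range(10_000)]
print(round(sum(estimates) / len(estimates), 3))  # hovers around 0.8
```

Everything turns on the call to random.sample: the model assumes that the properties we happen to know are an unbiased draw, and that assumption is just what the objections below target.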

There are grave difficulties with Harrod’s and Mill’s analyses. One obvious difficulty is the counting problem : the ‘population’ of properties is poorly defined. How are we to count similarities and differences? The ratio of shared to total known properties varies dramatically according to how we do this. A second serious difficulty is the problem of bias : we cannot justify the assumption that the sample of known features is random. In the case of the urn, the selection process is arranged so that the result of each choice is not influenced by the agent’s intentions or purposes, or by prior choices. By contrast, the presentation of an analogical argument is always partisan. Bias enters into the initial representation of similarities and differences: an advocate of the argument will highlight similarities, while a critic will play up differences. The paradigm of repeated selection from an urn seems totally inappropriate. Additional variations of the sampling approach have been developed (e.g., Russell 1988), but ultimately these versions also fail to solve either the counting problem or the problem of bias.

4.2.3 Argument from past success

Section 3.6 discussed Steiner’s view that appeal to ‘Pythagorean’ analogies in physics “evokes, or should evoke, puzzlement” (1989: 454). Liston (2000) offers a possible response: physicists are entitled to use Pythagorean analogies on the basis of induction from their past success:

[The scientist] can admit that no one knows how [Pythagorean] reasoning works and argue that the very fact that similar strategies have worked well in the past is already reason enough to continue pursuing them hoping for success in the present instance. (200)

Setting aside familiar worries about arguments from success, the real problem here is to determine what counts as a similar strategy. In essence, that amounts to isolating the features of successful Pythagorean analogies. As we have seen (§2.4), nobody has yet provided a satisfactory scheme that characterizes successful analogical arguments, let alone successful Pythagorean analogical arguments.

An a priori approach traces the validity of a pattern of analogical reasoning, or of a particular analogical argument, to some broad and fundamental principle. Three such approaches will be outlined here.

The first is due to Keynes (1921). Keynes appeals to his famous Principle of the Limitation of Independent Variety, which asserts, roughly, that the qualities of objects do not vary independently without limit, but cohere in a finite number of independent groups.

Armed with this Principle and some additional assumptions, Keynes is able to show that in cases where there is no negative analogy , knowledge of the positive analogy increases the (logical) probability of the conclusion. If there is a non-trivial negative analogy, however, then the probability of the conclusion remains unchanged, as was pointed out by Hesse (1966). Those familiar with Carnap’s theory of logical probability will recognize that in setting up his framework, Keynes settled on a measure that permits no learning from experience.

Hesse offers a refinement of Keynes’s strategy, once again along Carnapian lines. In her (1974), she proposes what she calls the Clustering Postulate : the assumption that our epistemic probability function has a built-in bias towards generalization. The objections to such postulates of uniformity are well-known (see Salmon 1967), but even if we waive them, her argument fails. The main objection here—which also applies to Keynes—is that a purely syntactic axiom such as the Clustering Postulate fails to discriminate between analogical arguments that are good and those that are clearly without value (according to Hesse’s own material criteria, for example).

A different a priori strategy, proposed by Bartha (2010), limits the scope of justification to analogical arguments that satisfy tentative criteria for ‘good’ analogical reasoning. The criteria are those specified by the articulation model ( §3.5 ). In simplified form, they require the existence of non-trivial positive analogy and no known critical disanalogy. The scope of Bartha’s argument is also limited to analogical arguments directed at establishing prima facie plausibility, rather than degree of probability.

Bartha’s argument rests on a principle of symmetry reasoning articulated by van Fraassen (1989: 236): “problems which are essentially the same must receive essentially the same solution.” A modal extension of this principle runs roughly as follows: if problems might be essentially the same, then they might have essentially the same solution. There are two modalities here. Bartha argues that satisfaction of the criteria of the articulation model is sufficient to establish the modality in the antecedent, i.e., that the source and target domains ‘might be essentially the same’ in relevant respects. He further suggests that prima facie plausibility provides a reasonable reading of the modality in the consequent, i.e., that the problems in the two domains ‘might have essentially the same solution.’ To call a hypothesis prima facie plausible is to elevate it to the point where it merits investigation, since it might be correct.

The argument is vulnerable to two sorts of concerns. First, there are questions about the interpretation of the symmetry principle. Second, there is a residual worry that this justification, like all the others, proves too much. The articulation model may be too vague or too permissive.

Arguably, the most promising available defense of analogical reasoning may be found in its application to case law (see Precedent and Analogy in Legal Reasoning). Judicial decisions are based on the verdicts and reasoning that have governed relevantly similar cases, according to the doctrine of stare decisis (Levi 1949; Llewellyn 1960; Cross and Harris 1991; Sunstein 1993). Individual decisions by a court are binding on that court and lower courts; judges are obligated to decide future cases ‘in the same way.’ That is, the reasoning applied in an individual decision, referred to as the ratio decidendi, must be applied to similar future cases (see Example 10). In practice, of course, the situation is extremely complex. No two cases are identical. The ratio must be understood in the context of the facts of the original case, and there is considerable room for debate about its generality and its applicability to future cases. If a consensus emerges that a past case was wrongly decided, later judgments will distinguish it from new cases, effectively restricting the scope of the ratio to the original case.

The practice of following precedent can be justified by two main practical considerations. First, and above all, the practice is conservative: it provides a relatively stable basis for replicable decisions. People need to be able to predict the actions of the courts and formulate plans accordingly. Stare decisis serves as a check against arbitrary judicial decisions. Second, the practice is still reasonably progressive: it allows for the gradual evolution of the law. Careful judges distinguish bad decisions; new values and a new consensus can emerge in a series of decisions over time.

In theory, then, stare decisis strikes a healthy balance between conservative and progressive social values. This justification is pragmatic. It presupposes a common set of social values, and links the use of analogical reasoning to optimal promotion of those values. Notice also that justification occurs at the level of the practice in general; individual analogical arguments sometimes go astray. A full examination of the nature and foundations for stare decisis is beyond the scope of this entry, but it is worth asking the question: might it be possible to generalize the justification for stare decisis? Is a parallel pragmatic justification available for analogical arguments in general?

Bartha (2010) offers a preliminary attempt to provide such a justification by shifting from social values to epistemic values. The general idea is that reasoning by analogy is especially well suited to the attainment of a common set of epistemic goals or values. In simple terms, analogical reasoning—when it conforms to certain criteria—achieves an excellent (perhaps optimal) balance between the competing demands of stability and innovation. It supports both conservative epistemic values, such as simplicity and coherence with existing belief, and progressive epistemic values, such as fruitfulness and theoretical unification (McMullin (1993) provides a classic list).

5. Beyond analogical arguments

As emphasized earlier, analogical reasoning takes in a great deal more than analogical arguments. In this section, we examine two broad contexts in which analogical reasoning is important.

The first, still closely linked to analogical arguments, is the confirmation of scientific hypotheses. Confirmation is the process by which a scientific hypothesis receives inductive support on the basis of evidence (see evidence, confirmation, and Bayes’ Theorem). Confirmation may also signify the logical relationship of inductive support that obtains between a hypothesis \(H\) and a proposition \(E\) that expresses the relevant evidence. Can analogical arguments play a role, either in the process or in the logical relationship? Arguably yes (to both), but this role has to be delineated carefully, and several obstacles remain in the way of a clear account.

The second context is conceptual and theoretical development in cutting-edge scientific research. Analogies are used to suggest possible extensions of theoretical concepts and ideas. The reasoning is linked to considerations of plausibility, but there is no straightforward analysis in terms of analogical arguments.

How is analogical reasoning related to the confirmation of scientific hypotheses? The examples and philosophical discussion from earlier sections suggest that a good analogical argument can indeed provide support for a hypothesis. But there are good reasons to doubt the claim that analogies provide actual confirmation.

In the first place, there is a logical difficulty. To appreciate this, let us concentrate on confirmation as a relationship between propositions. Christensen (1999: 441) offers a helpful general characterization:

Some propositions seem to help make it rational to believe other propositions. When our current confidence in \(E\) helps make rational our current confidence in \(H\), we say that \(E\) confirms \(H\).

In the Bayesian model, ‘confidence’ is represented in terms of subjective probability. A Bayesian agent starts with an assignment of subjective probabilities to a class of propositions. Confirmation is understood as a three-place relation: \(E\) confirms \(H\) relative to \(K\) just in case \(Pr(H \mid E \cdot K) > Pr(H \mid K)\).

\(E\) represents a proposition about accepted evidence, \(H\) stands for a hypothesis, \(K\) for background knowledge and \(Pr\) for the agent’s subjective probability function. To confirm \(H\) is to raise its conditional probability, relative to \(K\). The shift from prior probability \(Pr(H \mid K)\) to posterior probability \(Pr(H \mid E \cdot K)\) is referred to as conditionalization on \(E\). The relation between these two probabilities is typically given by Bayes’ Theorem (setting aside more complex forms of conditionalization): \[Pr(H \mid E \cdot K) = \frac{Pr(E \mid H \cdot K) \cdot Pr(H \mid K)}{Pr(E \mid K)}.\]

For Bayesians, here is the logical difficulty: it seems that an analogical argument cannot provide confirmation. In the first place, it is not clear that we can encapsulate the information contained in an analogical argument in a single proposition, \(E\). Second, even if we can formulate a proposition \(E\) that expresses that information, it is typically not appropriate to treat it as evidence because the information contained in \(E\) is already part of the background, \(K\). This means that \(E \cdot K\) is equivalent to \(K\), and hence \(Pr(H \mid E \cdot K) = Pr(H \mid K)\). According to the Bayesian definition, we don’t have confirmation. (This is a version of the problem of old evidence; see confirmation .) Third, and perhaps most important, analogical arguments are often applied to novel hypotheses \(H\) for which the prior probability \(Pr(H \mid K)\) is not even defined. Again, the definition of confirmation in terms of Bayesian conditionalization seems inapplicable.
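The first two points can be made concrete with a small numerical sketch (the numbers below are invented for illustration and do not come from the entry; the helper function simply applies Bayes’ Theorem as stated above):

def posterior(prior_h_given_k, pr_e_given_hk, pr_e_given_k):
    # Bayes' Theorem: Pr(H | E.K) = Pr(E | H.K) * Pr(H | K) / Pr(E | K)
    return pr_e_given_hk * prior_h_given_k / pr_e_given_k

# Case 1: E is genuinely new evidence relative to K, so conditionalizing on E
# raises the probability of H and, by the Bayesian definition, confirms it.
print(posterior(0.1, 0.8, 0.2))   # 0.4 > 0.1

# Case 2: the information in E is already part of the background K ("old
# evidence"), so Pr(E | K) = Pr(E | H.K) = 1 and the posterior equals the prior.
print(posterior(0.1, 1.0, 1.0))   # 0.1, no confirmation

On the third point, no analogous computation is even available: if \(Pr(H \mid K)\) is undefined for a novel hypothesis, conditionalization has nothing to operate on.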

If analogies don’t provide inductive support via ordinary conditionalization, is there an alternative? Here we face a second difficulty, once again most easily stated within a Bayesian framework. Van Fraassen (1989) has a well-known objection to any belief-updating rule other than conditionalization. This objection applies to any rule that allows us to boost credences when there is no new evidence. The criticism, made vivid by the tale of Bayesian Peter, is that these ‘ampliative’ rules are vulnerable to a Dutch Book. Adopting any such rule would lead us to acknowledge as fair a system of bets that foreseeably leads to certain loss. Any rule of this type for analogical reasoning appears to be vulnerable to van Fraassen’s objection.
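Van Fraassen’s worry can also be made concrete with a toy calculation (again with invented numbers; this is a stylized version of the standard diachronic Dutch Book construction, not van Fraassen’s own presentation). Suppose an ampliative rule tells an agent to raise her credence in \(H\) from 0.2 today to 0.3 tomorrow even though no new evidence will arrive, and a bookie knows this in advance:

# The agent's rule-driven credences; a bet on H pays out 1 if H is true.
credence_today, credence_tomorrow = 0.2, 0.3
stake = 1.0

# Today the bookie buys a bet on H from the agent at her current fair price;
# tomorrow he sells the identical bet back to her at her new, higher fair price.
cash_flow = credence_today * stake - credence_tomorrow * stake   # -0.10

# The contingent payoffs cancel (the agent first sold and then re-bought the
# same bet), so her net outcome is the cash flow alone: a sure loss either way.
for h_is_true in (True, False):
    sold_bet = -stake if h_is_true else 0.0    # she pays out on the bet she sold
    bought_bet = stake if h_is_true else 0.0   # she collects on the bet she bought
    print(h_is_true, round(cash_flow + sold_bet + bought_bet, 2))   # -0.1 both times

Each transaction is fair by the agent’s own lights at the time it is made, yet the package foreseeably guarantees a loss; this is the sense in which such rules are said to be vulnerable.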

There appear to be at least three routes to avoiding these difficulties and finding a role for analogical arguments within Bayesian epistemology. First, there is what we might call minimal Bayesianism. Within the Bayesian framework, some writers (Jeffreys 1973; Salmon 1967, 1990; Shimony 1970) have argued that a ‘seriously proposed’ hypothesis must have a sufficiently high prior probability to allow it to become preferred as the result of observation. Salmon has suggested that analogical reasoning is one of the most important means of showing that a hypothesis is ‘serious’ in this sense. If analogical reasoning is directed primarily towards prior probability assignments, it can provide inductive support while remaining formally distinct from confirmation, avoiding the logical difficulties noted above. This approach is minimally Bayesian because it provides nothing more than an entry point into the Bayesian apparatus, and it only applies to novel hypotheses. An orthodox Bayesian, such as de Finetti (de Finetti and Savage 1972, de Finetti 1974), might have no problem in allowing that analogies play this role.

The second approach is liberal Bayesianism: we can change our prior probabilities in a non-rule-based fashion. Something along these lines is needed if analogical arguments are supposed to shift opinion about an already existing hypothesis without any new evidence. This is common in fields such as archaeology, as part of a strategy that Wylie refers to as “mobilizing old data as new evidence” (Wylie and Chapman 2016: 95). As Hawthorne (2012) notes, some Bayesians simply accept that both initial assignments and ongoing revision of prior probabilities (based on plausibility arguments) can be rational, but

the logic of Bayesian induction (as described here) has nothing to say about what values the prior plausibility assessments for hypotheses should have; and it places no restrictions on how they might change.

In other words, by not stating any rules for this type of probability revision, we avoid the difficulties noted by van Fraassen. This approach admits analogical reasoning into the Bayesian tent, but acknowledges a dark corner of the tent in which rationality operates without any clear rules.

Recently, a third approach has attracted interest: analogue confirmation or confirmation via analogue simulation. As described in Dardashti et al. (2017), the idea is as follows:

Our key idea is that, in certain circumstances, predictions concerning inaccessible phenomena can be confirmed via an analogue simulation in a different system. (57)

Dardashti and his co-authors concentrate on a particular example (Example 17): ‘dumb holes’ and other analogues to gravitational black holes (Unruh 1981; Unruh 2008). Unlike real black holes, some of these analogues can be (and indeed have been) implemented and studied in the lab. Given the exact formal analogy between our models for these systems and our models of black holes, and certain important additional assumptions, Dardashti et al. make the controversial claim that observations made about the analogues provide evidence about actual black holes. For instance, the observation of phenomena analogous to Hawking radiation in the analogue systems would provide confirmation for the existence of Hawking radiation in black holes. In a second paper (Dardashti et al. 2018, Other Internet Resources), the case for confirmation is developed within a Bayesian framework.

The appeal of a clearly articulated mechanism for analogue confirmation is obvious. It would provide a tool for exploring confirmation of inaccessible phenomena not just in cosmology, but also in historical sciences such as archaeology and evolutionary biology, and in areas of medical science where ethical constraints rule out experiments on human subjects. Furthermore, as noted by Dardashti et al., analogue confirmation relies on new evidence obtained from the analogue system, and is therefore not vulnerable to the logical difficulties noted above.

Although the concept of analogue confirmation is not entirely new (think of animal testing, as in Example 8), the claims of Dardashti et al. (2017, 2018 [Other Internet Resources]) require evaluation. One immediate difficulty for the black hole example: if we think in terms of ordinary analogical arguments, there is no positive analogy because, to put it simply, we have no basis of known similarities between a ‘dumb hole’ and a black hole. As Crowther et al. (2018, Other Internet Resources) argue, “it is not known if the particular modelling framework used in the derivation of Hawking radiation actually describes black holes in the first place.” This may not concern Dardashti et al., since they claim that analogue confirmation is distinct from ordinary analogical arguments. It may turn out that analogue confirmation is different for cases such as animal testing, where we have a basis of known similarities, and for cases where our only access to the target domain is via a theoretical model.

In §3.6, we saw that practice-based studies of analogy provide insight into the criteria for evaluating analogical arguments. Such studies also point to dynamical or programmatic roles for analogies, which appear to require evaluative frameworks that go beyond those developed for analogical arguments.

Knuuttila and Loettgers (2014) examine the role of analogical reasoning in synthetic biology, an interdisciplinary field that draws on physics, chemistry, biology, engineering and computational science. The main role for analogies in this field is not the construction of individual analogical arguments but rather the development of concepts such as “noise” and “feedback loops”. Such concepts undergo constant refinement, guided by both positive and negative analogies to their analogues in engineered and physical systems. Analogical reasoning here is “transient, heterogeneous, and programmatic” (87). Negative analogies, seen as problematic obstacles for individual analogical arguments, take on a prominent and constructive role when the focus is theoretical construction and concept refinement.

Similar observations apply to analogical reasoning in its application to another cutting-edge field: emergent gravity. In this area of physics, distinct theoretical approaches portray gravity as emerging from different microstructures (Linnemann and Visser 2018). “Novel and robust” features not present at the micro-level emerge in the gravitational theory. Analogies with other emergent phenomena, such as hydrodynamics and thermodynamics, are exploited to shape these proposals. As with synthetic biology, analogical reasoning is not directed primarily towards the formulation and assessment of individual arguments. Rather, its role is to develop different theoretical models of gravity.

These studies explore fluid and creative applications of analogy to shape concepts on the front lines of scientific research. An adequate analysis would certainly take us beyond the analysis of individual analogical arguments, which have been the focus of our attention. Knuuttila and Loettgers (2014) are led to reject the idea that the individual analogical argument is the “primary unit” in analogical reasoning, but this is a debatable conclusion. Linnemann and Visser (2018), for instance, explicitly affirm the importance of assessing the case for different gravitational models through “exemplary analogical arguments”:

We have taken up the challenge of making explicit arguments in favour of an emergent gravity paradigm… That arguments can only be plausibility arguments at the heuristic level does not mean that they are immune to scrutiny and critical assessment tout court. The philosopher of physics’ job in the process of discovery of quantum gravity… should amount to providing exactly this kind of assessments. (Linnemann and Visser 2018: 12)

Accordingly, Linnemann and Visser formulate explicit analogical arguments for each model of emergent gravity, and assess them using familiar criteria for evaluating individual analogical arguments. Arguably, even the most ambitious heuristic objectives still depend upon considerations of plausibility that benefit by being expressed, and examined, in terms of analogical arguments.

  • Achinstein, P., 1964, “Models, Analogies and Theories,” Philosophy of Science , 31: 328–349.
  • Agassi, J., 1964, “Discussion: Analogies as Generalizations,” Philosophy of Science , 31: 351–356.
  • –––, 1988, “Analogies Hard and Soft,” in D.H. Helman (ed.) 1988, 401–19.
  • Aristotle, 1984, The Complete Works of Aristotle , J. Barnes (ed.), Princeton: Princeton University Press.
  • Ashley, K.D., 1990, Modeling Legal Argument: Reasoning with Cases and Hypotheticals , Cambridge: MIT Press/Bradford Books.
  • Bailer-Jones, D., 2002, “Models, Metaphors and Analogies,” in Blackwell Guide to the Philosophy of Science , P. Machamer and M. Silberstein (eds.), 108–127, Cambridge: Blackwell.
  • Bartha, P., 2010, By Parallel Reasoning: The Construction and Evaluation of Analogical Arguments , New York: Oxford University Press.
  • Bermejo-Luque, L., 2012, “A unitary schema for arguments by analogy,” Informal Logic , 11(3): 161–172.
  • Biela, A., 1991, Analogy in Science , Frankfurt: Peter Lang.
  • Black, M., 1962, Models and Metaphors , Ithaca: Cornell University Press.
  • Campbell, N.R., 1920, Physics: The Elements , Cambridge: Cambridge University Press.
  • –––, 1957, Foundations of Science , New York: Dover.
  • Carbonell, J.G., 1983, “Learning by Analogy: Formulating and Generalizing Plans from Past Experience,” in Machine Learning: An Artificial Intelligence Approach , vol. 1 , R. Michalski, J. Carbonell and T. Mitchell (eds.), 137–162, Palo Alto: Tioga.
  • –––, 1986, “Derivational Analogy: A Theory of Reconstructive Problem Solving and Expertise Acquisition,” in Machine Learning: An Artificial Intelligence Approach, vol. 2 , J. Carbonell, R. Michalski, and T. Mitchell (eds.), 371–392, Los Altos: Morgan Kaufmann.
  • Carnap, R., 1980, “A Basic System of Inductive Logic Part II,” in Studies in Inductive Logic and Probability, vol. 2 , R.C. Jeffrey (ed.), 7–155, Berkeley: University of California Press.
  • Cartwright, N., 1992, “Aristotelian Natures and the Modern Experimental Method,” in Inference, Explanation, and Other Frustrations , J. Earman (ed.), Berkeley: University of California Press.
  • Christensen, D., 1999, “Measuring Confirmation,” Journal of Philosophy 96(9): 437–61.
  • Cohen, L. J., 1980, “Some Historical Remarks on the Baconian Conception of Probability,” Journal of the History of Ideas 41: 219–231.
  • Copi, I., 1961, Introduction to Logic, 2nd edition , New York: Macmillan.
  • Copi, I. and C. Cohen, 2005, Introduction to Logic, 12th edition, Upper Saddle River, New Jersey: Prentice-Hall.
  • Cross, R. and J.W. Harris, 1991, Precedent in English Law, 4th ed., Oxford: Clarendon Press.
  • Currie, A., 2013, “Convergence as Evidence,” British Journal for the Philosophy of Science , 64: 763–86.
  • –––, 2016, “Ethnographic analogy, the comparative method, and archaeological special pleading,” Studies in History and Philosophy of Science , 55: 84–94.
  • –––, 2018, Rock, Bone and Ruin , Cambridge, MA: MIT Press.
  • Dardashti, R., K. Thébault, and E. Winsberg, 2017, “Confirmation via Analogue Simulation: What Dumb Holes Could Tell Us about Gravity,” British Journal for the Philosophy of Science , 68: 55–89.
  • Darwin, C., 1903, More Letters of Charles Darwin, vol. I , F. Darwin (ed.), New York: D. Appleton.
  • Davies, T.R., 1988, “Determination, Uniformity, and Relevance: Normative Criteria for Generalization and Reasoning by Analogy,” in D.H. Helman (ed.) 1988, 227–50.
  • Davies, T.R. and S. Russell, 1987, “A Logical Approach to Reasoning by Analogy,” in IJCAI 87: Proceedings of the Tenth International Joint Conference on Artificial Intelligence , J. McDermott (ed.), 264–70, Los Altos, CA: Morgan Kaufmann.
  • De Finetti, B., 1974, Theory of Probability, vols. 1 and 2 , trans. A. Machí and A. Smith, New York: Wiley.
  • De Finetti, B. and L.J. Savage, 1972, “How to Choose the Initial Probabilities,” in B. de Finetti, Probability, Induction and Statistics , 143–146, New York: Wiley.
  • Descartes, R., 1637/1954, The Geometry of René Descartes , trans. D.E. Smith and M.L. Latham, New York: Dover.
  • Douven, I. and T. Williamson, 2006, “Generalizing the Lottery Paradox,” British Journal for the Philosophy of Science , 57: 755–779.
  • Eliasmith, C. and P. Thagard, 2001, “Integrating structure and meaning: a distributed model of analogical mapping,” Cognitive Science 25: 245–286.
  • Evans, T.G., 1968, “A Program for the Solution of Geometric-Analogy Intelligence-Test Questions,” in M.L. Minsky (ed.), 271–353, Semantic Information Processing , Cambridge: MIT Press.
  • Falkenhainer, B., K. Forbus, and D. Gentner, 1989/90, “The Structure-Mapping Engine: Algorithm and Examples,” Artificial Intelligence 41: 2–63.
  • Forbus, K., 2001, “Exploring Analogy in the Large,” in D. Gentner, K. Holyoak, and B. Kokinov (eds.) 2001, 23–58.
  • Forbus, K., R. Ferguson, and D. Gentner, 1994, “Incremental Structure-mapping,” in Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society , A. Ram and K. Eiselt (eds.), 313–18, Hillsdale, NJ: Lawrence Erlbaum.
  • Forbus, K., C. Riesbeck, L. Birnbaum, K. Livingston, A. Sharma, and L. Ureel, 2007, “A prototype system that learns by reading simplified texts,” in AAAI Spring Symposium on Machine Reading , Stanford University, California.
  • Forbus, K., J. Usher, A. Lovett, K. Lockwood, and J. Wetzel, 2008, “Cogsketch: Open domain sketch understanding for cognitive science research and for education,” in Proceedings of the Fifth Eurographics Workshop on Sketch-Based Interfaces and Modeling , Annecy, France.
  • Forbus, K., R. Ferguson, A. Lovett, and D. Gentner, 2017, “Extending SME to Handle Large-Scale Cognitive Modeling,” Cognitive Science , 41(5): 1152–1201.
  • Franklin, B., 1941, Benjamin Franklin’s Experiments , I.B. Cohen (ed.), Cambridge: Harvard University Press.
  • Fraser, D., forthcoming, “The development of renormalization group methods for particle physics: Formal analogies between classical statistical mechanics and quantum field theory,” Synthese , first online 29 June 2018. doi:10.1007/s11229-018-1862-0
  • Galilei, G., 1610 [1983], The Starry Messenger , S. Drake (trans.) in Telescopes, Tides and Tactics , Chicago: University of Chicago Press.
  • Gentner, D., 1983, “Structure-Mapping: A Theoretical Framework for Analogy,” Cognitive Science 7: 155–70.
  • Gentner, D., K. Holyoak, and B. Kokinov (eds.), 2001, The Analogical Mind: Perspectives from Cognitive Science , Cambridge: MIT Press.
  • Gildenhuys, P., 2004, “Darwin, Herschel, and the role of analogy in Darwin’s Origin,” Studies in the History and Philosophy of Biological and Biomedical Sciences , 35: 593–611.
  • Gould, R.A. and P.J. Watson, 1982, “A Dialogue on the Meaning and Use of Analogy in Ethnoarchaeological Reasoning,” Journal of Anthropological Archaeology 1: 355–381.
  • Govier, T., 1999, The Philosophy of Argument , Newport News, VA: Vale Press.
  • Guarini, M., 2004, “A Defence of Non-deductive Reconstructions of Analogical Arguments,” Informal Logic , 24(2): 153–168.
  • Hadamard, J., 1949, An Essay on the Psychology of Invention in the Mathematical Field , Princeton: Princeton University Press.
  • Hájek, A., 2018, “Creating heuristics for philosophical creativity,” in Creativity and Philosophy , B. Gaut and M. Kieran (eds.), New York: Routledge, 292–312.
  • Halpern, J. Y., 2003, Reasoning About Uncertainty , Cambridge, MA: MIT Press.
  • Harrod, R.F., 1956, Foundations of Inductive Logic , London: Macmillan.
  • Hawthorne, J., 2012, “Inductive Logic”, The Stanford Encyclopedia of Philosophy (Winter 2012 edition), Edward N. Zalta (ed.), URL= < https://plato.stanford.edu/archives/win2012/entries/logic-inductive/ >.
  • Helman, D.H. (ed.), 1988, Analogical Reasoning: perspectives of artificial intelligence, cognitive science, and philosophy , Dordrecht: Kluwer Academic Publishers.
  • Hempel, C.G., 1965, “Aspects of Scientific Explanation,” in Aspects of Scientific Explanation and Other Essays in the Philosophy of Science , 331–496, New York: Free Press.
  • Hesse, M.B., 1964, “Analogy and Confirmation Theory,” Philosophy of Science , 31: 319–327.
  • –––, 1966, Models and Analogies in Science , Notre Dame: University of Notre Dame Press.
  • –––, 1973, “Logic of discovery in Maxwell’s electromagnetic theory,” in Foundations of scientific method: the nineteenth century , R. Giere and R. Westfall (eds.), 86–114, Bloomington: University of Indiana Press.
  • –––, 1974, The Structure of Scientific Inference , Berkeley: University of California Press.
  • –––, 1988, “Theories, Family Resemblances and Analogy,” in D.H. Helman (ed.) 1988, 317–40.
  • Hofstadter, D., 1995, Fluid Concepts and Creative Analogies , New York: BasicBooks (Harper Collins).
  • –––, 2001, “Epilogue: Analogy as the Core of Cognition,” in Gentner, Holyoak, and Kokinov (eds.) 2001, 499–538.
  • Hofstadter, D., and E. Sander, 2013, Surfaces and Essences: Analogy as the Fuel and Fire of Thinking , New York: Basic Books.
  • Holyoak, K. and P. Thagard, 1989, “Analogical Mapping by Constraint Satisfaction,” Cognitive Science , 13: 295–355.
  • –––, 1995, Mental Leaps: Analogy in Creative Thought , Cambridge: MIT Press.
  • Huber, F., 2009, “Belief and Degrees of Belief,” in F. Huber and C. Schmidt-Petri (eds.) 2009, 1–33.
  • Huber, F. and C. Schmidt-Petri (eds.), 2009, Degrees of Belief, Springer.
  • Hume, D. 1779/1947, Dialogues Concerning Natural Religion , Indianapolis: Bobbs-Merrill.
  • Hummel, J. and K. Holyoak, 1997, “Distributed Representations of Structure: A Theory of Analogical Access and Mapping,” Psychological Review 104(3): 427–466.
  • –––, 2003, “A symbolic-connectionist theory of relational inference and generalization,” Psychological Review 110: 220–264.
  • Hunter, D. and P. Whitten (eds.), 1976, Encyclopedia of Anthropology , New York: Harper & Row.
  • Huygens, C., 1690/1962, Treatise on Light , trans. S. Thompson, New York: Dover.
  • Indurkhya, B., 1992, Metaphor and Cognition , Dordrecht: Kluwer Academic Publishers.
  • Jeffreys, H., 1973, Scientific Inference, 3rd ed. , Cambridge: Cambridge University Press.
  • Keynes, J.M., 1921, A Treatise on Probability , London: Macmillan.
  • Knuuttila, T., and A. Loettgers, 2014, “Varieties of noise: Analogical reasoning in synthetic biology,” Studies in History and Philosophy of Science , 48: 76–88.
  • Kokinov, B., K. Holyoak, and D. Gentner (eds.), 2009, New Frontiers in Analogy Research: Proceedings of the Second International Conference on Analogy ANALOGY-2009, Sofia: New Bulgarian University Press.
  • Kraus, M., 2015, “Arguments by Analogy (and What We Can Learn about Them from Aristotle),” in Reflections on Theoretical Issues in Argumentation Theory , F.H. van Eemeren and B. Garssen (eds.), Cham: Springer, 171–182. doi: 10.1007/978-3-319-21103-9_13
  • Kroes, P., 1989, “Structural analogies between physical systems,” British Journal for the Philosophy of Science , 40: 145–54.
  • Kuhn, T.S., 1996, The Structure of Scientific Revolutions, 3rd edition, Chicago: University of Chicago Press.
  • Kuipers, T., 1988, “Inductive Analogy by Similarity and Proximity,” in D.H. Helman (ed.) 1988, 299–313.
  • Lakoff, G. and M. Johnson, 1980, Metaphors We Live By , Chicago: University of Chicago Press.
  • Leatherdale, W.H., 1974, The Role of Analogy, Model, and Metaphor in Science , Amsterdam: North-Holland Publishing.
  • Lee, H.S. and Holyoak, K.J., 2008, “Absence Makes the Thought Grow Stronger: Reducing Structural Overlap Can Increase Inductive Strength,” in Proceedings of the Thirtieth Annual Conference of the Cognitive Science Society , V. Sloutsky, B. Love, and K. McRae (eds.), 297–302, Austin: Cognitive Science Society.
  • Lembeck, F., 1989, Scientific Alternatives to Animal Experiments , Chichester: Ellis Horwood.
  • Levi, E., 1949, An Introduction to Legal Reasoning , Chicago: University of Chicago Press.
  • Linnemann, N., and M. Visser, 2018, “Hints towards the emergent nature of gravity,” Studies in History and Philosophy of Modern Physics , 30: 1–13.
  • Liston, M., 2000, “Critical Discussion of Mark Steiner’s The Applicability of Mathematics as a Philosophical Problem,” Philosophia Mathematica , 3(8): 190–207.
  • Llewellyn, K., 1960, The Bramble Bush: On Our Law and its Study , New York: Oceana.
  • Lloyd, G.E.R., 1966, Polarity and Analogy , Cambridge: Cambridge University Press.
  • Macagno, F., D. Walton and C. Tindale, 2017, “Analogical Arguments: Inferential Structures and Defeasibility Conditions,” Argumentation , 31: 221–243.
  • Maher, P., 2000, “Probabilities for Two Properties,” Erkenntnis , 52: 63–91.
  • Maier, C.L., 1981, The Role of Spectroscopy in the Acceptance of the Internally Structured Atom 1860–1920 , New York: Arno Press.
  • Maxwell, J.C., 1890, Scientific Papers of James Clerk Maxwell, Vol. I , W.D. Niven (ed.), Cambridge: Cambridge University Press.
  • McKay, C.P., 1993, “Did Mars once have Martians?” Astronomy , 21(9): 26–33.
  • McMullin, Ernan, 1993, “Rationality and Paradigm Change in Science,” in World Changes: Thomas Kuhn and the Nature of Science , P. Horwich (ed.), 55–78, Cambridge: MIT Press.
  • Mill, J.S., 1843/1930, A System of Logic , London: Longmans-Green.
  • Mitchell, M., 1993, Analogy-Making as Perception , Cambridge: Bradford Books/MIT Press.
  • Moore, B. N. and R. Parker, 1998, Critical Thinking, 5th ed. , Mountain View, CA: Mayfield.
  • Nersessian, N., 2002, “Maxwell and ‘the Method of Physical Analogy’: Model-Based Reasoning, Generic Abstraction, and Conceptual Change,” in Reading Natural Philosophy , D. Malament (ed.), Chicago: Open Court.
  • –––, 2009, “Conceptual Change: Creativity, Cognition, and Culture,” in Models of Discovery and Creativity , J. Meheus and T. Nickles (eds.), Dordrecht: Springer 127–166.
  • Niiniluoto, I., 1988, “Analogy and Similarity in Scientific Reasoning,” in D.H. Helman (ed.) 1988, 271–98.
  • Norton, J., 2010, “There Are No Universal Rules for Induction,” Philosophy of Science , 77: 765–777.
  • Ortony, A. (ed.), 1979, Metaphor and Thought , Cambridge: Cambridge University Press.
  • Oppenheimer, R., 1955, “Analogy in Science,” American Psychologist 11(3): 127–135.
  • Pietarinen, J., 1972, Lawlikeness, Analogy and Inductive Logic , Amsterdam: North-Holland.
  • Poincaré, H., 1952a, Science and Hypothesis , trans. W.J. Greenstreet, New York: Dover.
  • –––, 1952b, Science and Method , trans. F. Maitland, New York: Dover.
  • Polya, G., 1954, Mathematics and Plausible Reasoning, 2nd ed. 1968, two vols., Princeton: Princeton University Press.
  • Prieditis, A. (ed.), 1988, Analogica , London: Pitman.
  • Priestley, J., 1769, 1775/1966, The History and Present State of Electricity, Vols. I and II , New York: Johnson. Reprint.
  • Quine, W.V., 1969, “Natural Kinds,” in Ontological Relativity and Other Essays , 114–138, New York: Columbia University Press.
  • Quine, W.V. and J.S. Ullian, 1970, The Web of Belief , New York: Random House.
  • Radin, M., 1933, “Case Law and Stare Decisis,” Columbia Law Review 33 (February), 199.
  • Reid, T., 1785/1895, Essays on the Intellectual Powers of Man. The Works of Thomas Reid, vol. 3, 8th ed., Sir William Hamilton (ed.), Edinburgh: James Thin.
  • Reiss, J., 2015, “A Pragmatist Theory of Evidence,” Philosophy of Science , 82: 341–62.
  • Reynolds, A.K. and L.O. Randall, 1975, Morphine and Related Drugs , Toronto: University of Toronto Press.
  • Richards, R.A., 1997, “Darwin and the inefficacy of artificial selection,” Studies in History and Philosophy of Science , 28(1): 75–97.
  • Robinson, D.S., 1930, The Principles of Reasoning, 2nd ed ., New York: D. Appleton.
  • Romeijn, J.W., 2006, “Analogical Predictions for Explicit Similarity,” Erkenntnis , 64(2): 253–80.
  • Russell, S., 1986, Analogical and Inductive Reasoning , Ph.D. thesis, Department of Computer Science, Stanford University, Stanford, CA.
  • –––, 1988, “Analogy by Similarity,” in D.H. Helman (ed.) 1988, 251–269.
  • Salmon, W., 1967, The Foundations of Scientific Inference , Pittsburgh: University of Pittsburgh Press.
  • –––, 1990, “Rationality and Objectivity in Science, or Tom Kuhn Meets Tom Bayes,” in Scientific Theories (Minnesota Studies in the Philosophy of Science: Volume 14), C. Wade Savage (ed.), Minneapolis: University of Minnesota Press, 175–204.
  • Sanders, K., 1991, “Representing and Reasoning about Open-Textured Predicates,” in Proceedings of the Third International Conference on Artificial Intelligence and Law , New York: Association of Computing Machinery, 137–144.
  • Schlimm, D., 2008, “Two Ways of Analogy: Extending the Study of Analogies to Mathematical Domains,” Philosophy of Science , 75: 178–200.
  • Shelley, C., 1999, “Multiple Analogies in Archaeology,” Philosophy of Science , 66: 579–605.
  • –––, 2003, Multiple Analogies in Science and Philosophy , Amsterdam: John Benjamins.
  • Shimony, A., 1970, “Scientific Inference,” in The Nature and Function of Scientific Theories , R. Colodny (ed.), Pittsburgh: University of Pittsburgh Press, 79–172.
  • Snyder, L., 2006, Reforming Philosophy: A Victorian Debate on Science and Society , Chicago: University of Chicago Press.
  • Spohn, W., 2009, “A Survey of Ranking Theory,” in F. Huber and C. Schmidt-Petri (eds.) 2009, 185–228.
  • –––, 2012, The Laws of Belief: Ranking Theory and its Philosophical Applications , Oxford: Oxford University Press.
  • Stebbing, L.S., 1933, A Modern Introduction to Logic, 2nd edition , London: Methuen.
  • Steiner, M., 1989, “The Application of Mathematics to Natural Science,” Journal of Philosophy , 86: 449–480.
  • –––, 1998, The Applicability of Mathematics as a Philosophical Problem , Cambridge, MA: Harvard University Press.
  • Stepan, N., 1996, “Race and Gender: The Role of Analogy in Science,” in Feminism and Science , E.G. Keller and H. Longino (eds.), Oxford: Oxford University Press, 121–136.
  • Sterrett, S., 2006, “Models of Machines and Models of Phenomena,” International Studies in the Philosophy of Science , 20(March): 69–80.
  • Sunstein, C., 1993, “On Analogical Reasoning,” Harvard Law Review , 106: 741–791.
  • Thagard, P., 1989, “Explanatory Coherence,” Behavioral and Brain Science , 12: 435–502.
  • Timoshenko, S. and J. Goodier, 1970, Theory of Elasticity , 3rd edition, New York: McGraw-Hill.
  • Toulmin, S., 1958, The Uses of Argument , Cambridge: Cambridge University Press.
  • Turney, P., 2008, “The Latent Relation Mapping Engine: Algorithm and Experiments,” Journal of Artificial Intelligence Research , 33: 615–55.
  • Unruh, W., 1981, “Experimental Black-Hole Evaporation?,” Physical Review Letters , 46: 1351–3.
  • –––, 2008, “Dumb Holes: Analogues for Black Holes,” Philosophical Transactions of the Royal Society A , 366: 2905–13.
  • Van Fraassen, Bas, 1980, The Scientific Image , Oxford: Clarendon Press.
  • –––, 1984, “Belief and the Will,” Journal of Philosophy , 81: 235–256.
  • –––, 1989, Laws and Symmetry , Oxford: Clarendon Press.
  • –––, 1995, “Belief and the Problem of Ulysses and the Sirens,” Philosophical Studies , 77: 7–37.
  • Waller, B., 2001, “Classifying and analyzing analogies,” Informal Logic , 21(3): 199–218.
  • Walton, D. and C. Hyra, 2018, “Analogical Arguments in Persuasive and Deliberative Contexts,” Informal Logic , 38(2): 213–261.
  • Weitzenfeld, J.S., 1984, “Valid Reasoning by Analogy,” Philosophy of Science , 51: 137–49.
  • Woods, J., A. Irvine, and D. Walton, 2004, Argument: Critical Thinking, Logic and the Fallacies, 2nd edition, Toronto: Prentice-Hall.
  • Wylie, A., 1982, “An Analogy by Any Other Name Is Just as Analogical,” Journal of Anthropological Archaeology , 1: 382–401.
  • –––, 1985, “The Reaction Against Analogy,” Advances in Archaeological Method and Theory , 8: 63–111.
  • Wylie, A., and R. Chapman, 2016, Evidential Reasoning in Archaeology , Bloomsbury Academic.

Other Internet Resources

  • Crowther, K., N. Linnemann, and C. Wüthrich, 2018, “ What we cannot learn from analogue experiments ,” online at arXiv.org.
  • Dardashti, R., S. Hartmann, K. Thébault, and E. Winsberg, 2018, “ Hawking Radiation and Analogue Experiments: A Bayesian Analysis ,” online at PhilSci Archive.
  • Norton, J., 2018. “ Analogy ”, unpublished draft, University of Pittsburgh.
  • Resources for Research on Analogy: a Multi-Disciplinary Guide (University of Windsor)
  • UCLA Reasoning Lab (UCLA)
  • Dedre Gentner’s publications (Northwestern University)
  • The Center for Research on Concepts and Cognition (Indiana University)

Related Entries

abduction | analogy: medieval theories of | argument and argumentation | Bayes’ Theorem | confirmation | epistemology: Bayesian | evidence | legal reasoning: precedent and analogy in | logic: inductive | metaphor | models in science | probability, interpretations of | scientific discovery



Flexibility in problem solving: analogical transfer of tool use in toddlers is immune to delay.

Katarzyna Bobrowicz*

  • 1 Department of Psychology, Lund University, Lund, Sweden
  • 2 Public Higher Medical Professional School, Opole, Poland
  • 3 Department of Philosophy and Cognitive Science, Lund University, Lund, Sweden

Solving problems that are perceptually dissimilar but require similar solutions is a key skill in everyday life. In adults, this ability, termed analogical transfer, draws on memories of relevant past experiences that partially overlap with the present task at hand. Thanks to this support from long-term memory, analogical transfer allows remarkable behavioral flexibility beyond immediate situations. However, little is known about the interaction between long-term memory and analogical transfer in development as, to date, they have been studied separately. Here, for the first time, effects of age and memory on analogical transfer were investigated in 2–4.5-year-olds in a simple tool-use setup. Children attempted to solve a puzzle box after training on the correct solution with a different-looking box, either right before the test or 24 h earlier. We found that children (N = 105) could transfer the solution regardless of the delay and a perceptual conflict introduced in the tool set. For children who failed to transfer (N = 54) and repeated the test without a perceptual conflict, the odds of success did not improve. Our findings suggest that training promoted the detection of functional similarities between boxes and, thereby, flexible transfer both in the short and the long term.

Introduction

Adult humans solve problems continuously. As early as toddlerhood, humans learn how tools allow for reaching goals that would otherwise be out of reach. This learning involves transferring solutions between problems, which, in turn, requires both an ability to identify those aspects of the problem that are relevant for the solution and an ability to remember a solution long enough to apply it again. While both these capacities allegedly begin to develop in infancy (Träuble and Pauen, 2007), they have – to date – only been studied separately from each other. The present study focuses on the joint contribution of these capacities at early stages of development of tool-dependent problem solving: a skill that underpins impressive human technological culture (Osiurak and Reynaud, 2019).

To discover which features of a tool are relevant for reaching a desired goal, infants attend to both the tool itself and its interactions with objects in the environment (Rakison and Woodward, 2008). In the first year of life, infants rapidly acquire knowledge about objects and interactions, both through their own actions and through observing others (Leslie, 1984; Luo et al., 2009). In this process, perceptual features of the tool are linked to the effect that it exerts, that is, to the function it serves (Bates et al., 1980). This allows the infant, by the end of the first year, to shift from attending to overall perceptual similarity to attending to functional similarity when faced with unfamiliar objects, as long as the common function is demonstrated beforehand (Träuble and Pauen, 2007).

Thus, it seems that 11- and 12-month-olds not only acquire, but also transfer knowledge about the functional parts of objects; for instance, they can identify a toothbrush and a dish brush as sharing the same functional part after watching someone wash dishes with the latter. However, at this age, if a shovel had a handle similar to that of a dish brush but different from that of the toothbrush, the child would most likely group the dish brush and the shovel as similar to each other, and as different from the toothbrush. In other words, 12-month-olds still fail to prioritize the object’s functional features over conflicting perceptual ones (Madole et al., 1993; Booth and Waxman, 2002). Resolving such conflicts may develop much later, as even 24-month-olds may struggle with disregarding the perceptually salient yet misleading features of a tool in favor of the functionally relevant ones (Baker and Keen, 2007; Bechtel et al., 2013; Pauen and Bechtel-Kuehne, 2016; but see Chen et al., 1997). Although 24-month-olds still prioritize perceptual salience over the tool’s function, unlike younger infants they rapidly improve their performance upon feedback. The improvement in selectively attending to the object’s function in the second year of life coincides with the onset of tool use in everyday life, e.g., eating with a spoon (Conolly and Dalgleish, 1989; McCarty et al., 2011; Bechtel et al., 2013).

Solving problems with tools poses a twofold difficulty: one must identify the functionally relevant features within the tool on the one hand, and the functionally relevant features within the problem on the other. For instance, to open a door, one needs to find not only the right key, but also the keyhole. Relying on such functional matches between problem and tool supports transferring solutions across problems and therefore boosts behavioral flexibility. The ability to detect common principles for solution across problems, termed analogical transfer, improves between the second and fourth year of life. For instance, Crisafi and Brown (1986) showed that 2-year-olds failed to transfer spontaneously across problems, several 3-year-olds transferred a solution across physically similar problems, and many 4-year-olds transferred across physically dissimilar problems that did not require tool use. Likewise, Brown and Kane (1988) showed that 3-year-olds spontaneously transferred tool knowledge to a target story but needed a slightly longer exposure to the source story compared to 4- and 5-year-olds. Other studies have corroborated this developmental trajectory (Holyoak et al., 1984; Brown, 1989; Brown et al., 1989; Goswami, 1991; Chen, 1996). Importantly, none of these studies required actual tool use.

Between the second and fourth year of life, children rapidly develop a skill that is a hallmark of everyday human life: flexible tool-dependent problem solving. Its flexibility in adults is boosted by well-developed long-term memory, which allows for transfer across both immediate and delayed situations (Bobrowicz, 2019), but the interaction between long-term memory and analogical transfer of tool use has been overlooked in research on both adults and children. This is somewhat surprising, considering the role that analogical transfer of tool use has played in the development of human technological culture (Osiurak and Reynaud, 2019). Such development cannot rely solely on analogical transfer of tool use; it must be supported by long-term memory that allows transferring technical skills between dissimilar contexts. The present study focuses on the interaction between analogical transfer, tool use and long-term memory in development.

Long-term memory, including episodic and procedural memory, supports analogical transfer across situations that, in everyday life, are often separated by hours, days or even months. It is unclear when long-term memory begins to support analogical transfer in young children, but previous findings suggest that immaturity of long-term memory, especially episodic memory, may limit transfer between contexts before the age of 3. While procedural memory allows for issuing motor actions acquired in the past, episodic memory warrants flexible retrieval of relevant previous situations. While not much is known about the development of procedural memory between the second and fourth year of life, brain structures associated with procedural memory mature ahead of the structures associated with episodic memory. Thus, it is reasonable to assume that procedural memory matures earlier than episodic memory (Bauer, 2008), but findings are scarce and contradictory (Lum et al., 2010). In any case, it seems that episodic memory is available to 3-year-olds, but improves further between the ages of 3 and 5, particularly in terms of richness of information about past situations (Hayne et al., 2011), retention interval (from 15 min in 3-year-olds to 24 h or even a week in 4-year-olds; Scarf et al., 2013), and recollection of the temporal aspects of past situations (Scarf et al., 2017). What does not improve, however, is the accuracy of recollection (Hayne et al., 2011), the ability to form episodic memories (Scarf et al., 2013), and the recollection of “what” and “where” happened in the past situation (Scarf et al., 2017).

Findings from episodic memory research suggest that children younger than 4 years could struggle with applying knowledge to a target problem 24 h after training on a relevant source problem. However, previous studies with deferred imitation (Herbert and Hayne, 2000) and object search (DeLoache et al., 2004) tasks demonstrated that even 2.5-year-olds can use long-term memory to transfer knowledge across problems, both immediately after training on the source task and 24 h later. This suggests that even before the age of 3, children can identify and flexibly apply relevant knowledge acquired on another, functionally similar task.

In the current study, we investigated the development of flexible tool-dependent problem solving in 2- to 4-year-old children, focusing on tool use and analogical transfer across immediate and delayed situations. We limited the language demands of the task to a minimum so that language abilities would not influence the children’s performance. We designed a novel experimental setup that required solving a problem through physical tool use, after training on an analogical problem. A control group did not receive this training. Children carried out the test task either shortly after training, or after a 24-h delay. This manipulation allowed for investigating the impact of short-term and long-term memory on analogical transfer. Children who failed to solve the test task with the perceptually incongruent tool set were further tested with another tool set, where the perceptual mismatch had been removed. This manipulation, to a limited extent, allowed for investigating the impact of perceptual mismatch on analogical transfer.

We predicted that:

(H1) With age, children would be more likely to solve the test task after training, regardless of the perceptual mismatch within the tool set.

(H2) Compared to older children, younger children would be less likely to solve the test task after a 24-h delay.

In order to investigate the ability to focus on the aspects relevant for solving the problem, the setup involved a twofold difficulty – prioritizing the functionally relevant features of the problem over the irrelevant ones, and prioritizing the functionally relevant features of the tools over the irrelevant ones. Therefore, children needed not only to transfer a solution between two perceptually dissimilar problems, but also to overcome a mismatch (conflict) between the perceptual and the functional features of the tools. We predicted that:

(H3) With age, children who received training would interact longer with the functional tool and the relevant components of the apparatuses, showing that they had successfully identified the relevant aspects of the target problem. Correspondingly, with age, children who received training would also interact for a shorter time with the functional tool and the irrelevant components of the apparatuses.

(H4) Compared to older children, younger children receiving the test task after a 24-h delay would interact for a shorter time with the functional tool and the relevant components, but for a longer time with the irrelevant components, showing difficulties with identifying the relevant aspects of the target problem.

(H5) Children who solved the test task would interact longer with the functional tool and the relevant components of the apparatuses, compared to children who did not. While interacting with the functional tool and the relevant components is critical to solving the target problem, such interactions do not guarantee the solution if children do not know how to correctly apply the tool’s function.

(H6) Removing the perceptual mismatch may not benefit children who failed to solve the test with the perceptually mismatching tool set, since children between 2 and 4.5 years should be able to prioritize tool function over irrelevant perceptual similarities.

Materials and Methods

Participants

In total, 122 children were recruited from eleven public preschools in urban and semi-urban areas of southern Sweden. Focusing on the ages of 2 to 4 years, we aimed at children 21–51 months old, but seven children who were 52–55 months old were also included. The children’s average age was 40.42 months (SD = 7.49). Parental education was high, with 86.6% having a college or university degree. Most children (57.7%) had one sibling, 13.8% had no siblings, and 21.1% had two siblings or more. Only data from 105 children (51 boys/54 girls) were included in the present analysis. Data from 17 children were excluded because of missing or distorted video recordings (n = 12), successful solution of all test tasks upon the first presentation (n = 2; aged 43 and 48 months), or missing the second day of testing (n = 3). Out of all children, 90 received training while 15 did not, serving as a control group. The control group was limited to 15 children as a trade-off, to increase the power of the statistical analyses conducted for the experimental group. The children’s mean age in the control group was 38.53 months (SD = 9.09).

This research was approved by the Swedish Ethical Review Authority in Lund (DRN 2018/572, PI Psouni). No sensitive data about participants were gathered, and only children whose parents submitted written consent, either on paper or digitally, were included in the study.

Seven sets of tool-use tasks were developed, each with two puzzle boxes and three tools (see Figure 1). Each set consisted of a training task and a test task, which looked different but required a similar solution. The puzzle boxes were made of medium-density fibreboard (MDF) and each had a transparent plexiglass surface through which the child could peek at a toy bee trapped inside. The MDF was covered with non-toxic paint to make the boxes seem “dirty” and so discourage the children from touching them with bare hands, encouraging them to try to retrieve the bee with the tools instead. The experimenter always wore gloves or used a paper towel to handle the boxes to make this story more believable, and the children played along.


Figure 1. An overview of all sets of apparatuses and tools used in the study, with relevant components highlighted in green. Within each set of apparatuses, the training task is depicted to the left and the test task to the right. Within each set of tools, the functional tool is depicted to the left, the non-functional in the middle, and the useless to the right. Simple motor actions were required to open each box: (A) inserting the tip of the functional tool into the gap in the upper part of the apparatus (training) or the middle (test), and then lifting the tool’s handle; (B) hooking the tip of the functional tool onto the upper part of the door (training) or the hole in the front part of the lid (test), and then pulling the tool’s handle; (C) inserting the tip of the functional tool into the tube’s opening, and pushing the tool’s handle to the side (training) or downward (test); (D) casting the loop-like tip of the functional tool onto the car’s hook (training) or the hook on the doors (test), and pulling the tool’s handle; (E) casting the rake-like tip of the functional tool onto the car (training) or the handle on the doors (test), and pulling the tool’s handle; (F) inserting the tips of the pincette-like functional tool, grasping the bee, and pulling the tool (training) or using the tips to grasp the string protruding from the tube, and pulling the tool (test); (G) inserting the tip of the hockey-bat-like functional tool, and raking the bee out (both training and test).

Each set of tasks was accompanied by a set of three tools: a functional, a non-functional and a useless one, all made of white FIMO clay. The functional and the non-functional tools had the same length and rigidity, but different ends. The functional tool had a functional element on both ends, and, paired with a correct motor action on the child’s part, allowed for retrieving the bee. The non-functional tool likewise had a non-functional element on both ends, so regardless of the motor action executed by the child, it did not allow for retrieving the bee. Finally, the useless tool differed in length, shape and rigidity from the other two and did not allow for solving the task (Figure 1). Within each set, solving both the training and the test task required the same tool and the same motor action (see Supplementary Figure S1). Each child was tested with one set.

Although the same set of tools was used in both the training and the test task, we manipulated the appearance of the tools. In the baseline and the training, each of the tools had a unique salient pattern painted on the white clay with a non-toxic dark-blue pen. The patterns on the functional and the non-functional tools were swapped in the test, leading to a perceptual mismatch between the baseline/training and the test.

In an extra testing round that commenced after a failed test, the perceptually mismatching set of tools was substituted with another set, comprising a functional, non-functional and useless tool, all decorated with a uniform x-pattern (see Figure 2).


Figure 2. An overview of the procedure in the two experimental conditions: without training (A) or with training (B). Each child was first presented with a puzzle box containing a toy bee visible behind a transparent surface and three tools. If the child could not open the box, s/he either waited 10 min or 24 h before another chance to open the box (A), or s/he received training on another box that required the same solution (B). After the training, s/he had another chance to open the original box, either right afterward or 24 h later. This figure shows the “B” set, which required either hooking the tip of the functional tool onto the upper part of the door (training) or the hole in the front part of the lid (test) to solve the task.

Children were recruited through announcements at the preschools, after active, informed consent by their parents/guardians. Information about the study was both physically and digitally available to parents. Test leaders spent a day in each preschool to become acquainted with the children. Children were tested individually, at their preschools, in a room arranged for the purposes of the experiment. All trials were video-recorded, capturing the experimental setup and the participant’s hands. Before each trial, children were engaged in a short chat about the box being dirty and the bee trapped inside the box. Had a child not been interested in the task at this point, the experimenter would not have proceeded to the baseline; however, this never occurred, as all children were interested in the trapped bee, in releasing it, and in interacting with the tools and the apparatus.

The experiment employed a 2 × 2 factorial design, manipulating condition (training vs. no training) and delay (short/10 min vs. long/24 h). At baseline, the child received a single opportunity to interact with the test task, from picking up a tool, through using it on the apparatus, to abandoning the tool, to ensure that he/she could not solve the task spontaneously. All three tools were available: a dot-patterned functional, a stripe-patterned non-functional and a wave-patterned useless one. If the child chose the functional tool, used it in a correct way and released the bee, a test task from another set was presented.

Upon failure to solve the task at baseline, children were assigned to either training (experimental) or no training (control). Children in the experimental group began the training immediately after baseline. During training, two tools were available: a dot-patterned functional and a wave-patterned useless one. The child then learned, with the help of the experimenter, how to use the functional tool to retrieve the bee from the training task. The child was first encouraged verbally to try to release the “trapped” bee from the box. If the child did not succeed, the experimenter demonstrated once how to use the functional tool on the relevant components of the task. The child did not receive any verbal instructions and, therefore, relied solely on the motor demonstration. After each demonstration, the child was verbally invited to try on his/her own. This sequence of demonstrations by the experimenter and attempts by the child was repeated until the child succeeded in retrieving the bee three times without any help from the experimenter. Children assigned to the control group did not receive training. Instead, they were allowed to play with the experimenter for an equivalent period of 10 min (the training was estimated to last around 10 min). Therefore, children in the control group did not acquire, through training, relevant knowledge to apply to the test task and de facto did not participate in a transfer procedure. Children were also assigned to either a short (10-min) or a long (24-h) delay before being presented with the test task.

The test task included three tools: a stripe-patterned functional tool (dot-patterned in the baseline), a dot-patterned non-functional tool (stripe-patterned in the baseline), and a wave-patterned useless tool (same as in the baseline). Therefore, although the functional features of the tools remained the same, there was now a salient perceptual mismatch between the current and the previously used tools. Children received up to three opportunities to interact with the test task, from picking up a tool, through using it on the apparatus, to abandoning the tool. Regardless of whether they solved the test, children received stickers and age-appropriate toys as tokens of appreciation for their participation.

Immediately after three unsuccessful attempts at solving the test task, the perceptually mismatching set of tools was replaced with another set, comprising a functional, a non-functional and a useless tool, all decorated with a uniform x-pattern (see Figure 1 ). In this extra testing round, the child was allowed three new attempts at solving the test task with the new set of tools.

Coding and Statistical Analysis

For each video, the child’s score and interactions with the apparatus were coded frame-by-frame in ELAN 4.9.4. An interaction was defined as the time interval between the onset and offset of physical contact between a tool held by the participant and a component of the apparatus. The tool could be functional (F), non-functional (NF) or useless (U), and the component of the apparatus could be relevant (rel) or irrelevant (irrel; see Supplementary Datasets 1 and 2 ). The child could only release the toy bee by interacting with the relevant components of the apparatus. These components differed between apparatuses (see Figure 1 ). The following variables were used as the response variables in the analyses (a minimal computational sketch follows the list):

(a) Score in the test, defined as the outcome of the test, equal to 0 if the child failed to solve the test, and 1 if the child solved the test within the first three attempts. This variable was dichotomous.

(b) Functional tool × relevant components, defined as the proportion of the overall interaction time in the test spent applying the functional tool to the relevant components. This variable was continuous. Interaction times for the functional tool and the relevant components followed a right-skewed distribution (Mdn = 0.474, min = 0, max = 1). Residuals of the generalized linear model with this variable as the response were normally distributed.

(c) Functional tool × irrelevant components, defined as the proportion of the overall interaction time in the test spent applying the functional tool to the irrelevant components. This variable was continuous. Interaction times for the functional tool and the irrelevant components followed a right-skewed distribution (Mdn = 0, min = 0, max = 1). Residuals of the generalized linear model with this variable as the response were not normally distributed, and the variable was modeled with a Beta distribution.

(d) Score in the extra testing round, defined as the outcome of the extra test, equal to zero if the child failed to solve the extra test, and 1 if the child solved the extra test within the three attempts received after failing the test. This variable was dichotomous.
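
The proportion measures in (b) and (c) follow directly from the coded onset/offset intervals. Below is a minimal R sketch of that computation; the data frame, its column names and the example values are hypothetical stand-ins for the ELAN export described above, not the authors’ actual scripts.

```r
# Hypothetical ELAN-style export: one row per coded interaction, with onset and
# offset in seconds, the tool used (F/NF/U) and the component contacted (rel/irrel).
# Column names and values are illustrative, not taken from the actual dataset.
coded <- data.frame(
  onset     = c(2.0, 10.5, 14.0, 30.2),
  offset    = c(8.5, 12.0, 25.0, 33.0),
  tool      = c("F", "NF", "F", "U"),
  component = c("rel", "irrel", "rel", "irrel")
)

coded$duration <- coded$offset - coded$onset
total_time     <- sum(coded$duration)

# (b) proportion of interaction time: functional tool x relevant components
prop_F_rel <- sum(coded$duration[coded$tool == "F" & coded$component == "rel"]) / total_time

# (c) proportion of interaction time: functional tool x irrelevant components
prop_F_irrel <- sum(coded$duration[coded$tool == "F" & coded$component == "irrel"]) / total_time

c(prop_F_rel = prop_F_rel, prop_F_irrel = prop_F_irrel)
```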

Two raters coded 64 and 36% of the videos, respectively. A third, independent rater coded all the material. Time-unit kappa, defined as the overlap between the interval patterns generated by the raters for each recording ( Bakeman et al., 2009 ), was equal to 0.99. For each recording, several variables were computed from the coded interactions (see Supplementary Datasets 1 and 2 ).
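
Time-unit kappa treats each recording as a sequence of short time units and computes Cohen’s kappa over the code each rater assigned to each unit ( Bakeman et al., 2009 ). The sketch below illustrates the idea on made-up interval data; the 0.1-s unit size, the helper function and the data are illustrative assumptions, not the implementation used in the study.

```r
# Two raters' interval annotations for the same recording (times in seconds);
# the data and code labels are made up for illustration.
rater1 <- data.frame(onset = c(0, 5, 12), offset = c(4, 10, 15),
                     code = c("F_rel", "NF_irrel", "F_rel"))
rater2 <- data.frame(onset = c(0, 5, 12), offset = c(4, 11, 15),
                     code = c("F_rel", "NF_irrel", "F_rel"))

# Assign each fixed-size time unit the code of the interval covering it ("none" otherwise).
codes_per_unit <- function(intervals, duration, unit = 0.1) {
  grid <- seq(0, duration - unit, by = unit)
  sapply(grid, function(t) {
    hit <- which(intervals$onset <= t & t < intervals$offset)
    if (length(hit) == 0) "none" else as.character(intervals$code[hit[1]])
  })
}

duration <- 15
u1 <- codes_per_unit(rater1, duration)
u2 <- codes_per_unit(rater2, duration)

# Cohen's kappa over the per-unit codes: observed vs. chance agreement.
lv  <- union(u1, u2)
tab <- table(factor(u1, levels = lv), factor(u2, levels = lv))
p_o <- sum(diag(tab)) / sum(tab)
p_e <- sum(rowSums(tab) * colSums(tab)) / sum(tab)^2
(p_o - p_e) / (1 - p_e)   # time-unit kappa
```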

Because of its small size, the control group was not included in the main statistical analyses. However, Fisher’s exact test was run to statistically compare performance between the control and the experimental group. Each hypothesis was addressed with the following statistical tools.
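
This comparison amounts to Fisher’s exact test on a 2 × 2 contingency table of condition by outcome. A minimal sketch in R, using the group totals reported later in the Results as illustrative counts; the exact counts entering the authors’ test may have differed.

```r
# 2 x 2 contingency table of training condition by test outcome. The counts
# below are taken from the group totals reported in the Results (41 of 90
# trained children solved the test, 0 of 15 untrained children did); they are
# used here only to illustrate the call.
counts <- matrix(c(41, 49,
                    0, 15),
                 nrow = 2, byrow = TRUE,
                 dimnames = list(group   = c("training", "no training"),
                                 outcome = c("solved", "failed")))

fisher.test(counts)   # exact test of association between condition and outcome
```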

(H1, H2) A generalized linear model was used, with the Score in the test (dichotomous) as the response variable and two predictor variables: Delay (short vs. long) and Age (continuous).
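
For a dichotomous score, such a generalized linear model is a logistic regression. A minimal sketch of the model structure, with simulated placeholder data and variable names (Score, Delay, Age) that are assumptions rather than the authors’ actual column names:

```r
library(car)   # Anova() for per-term chi-square tests

set.seed(1)
# Simulated stand-in for the per-child data (one row per child); the variable
# names and values are placeholders, not the authors' data.
dat <- data.frame(
  Age   = runif(80, 30, 55),                                    # age in months
  Delay = factor(sample(c("short", "long"), 80, replace = TRUE)),
  Score = rbinom(80, 1, 0.5)                                    # 0 = failed, 1 = solved
)

# H1/H2: logistic regression of the dichotomous test outcome on Delay and Age,
# including their interaction (the term relevant for H2).
m_score <- glm(Score ~ Delay * Age, family = binomial, data = dat)
summary(m_score)
Anova(m_score)   # chi-square test per term, as reported in the Results
```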

(H3, H4, H5) A linear model was used, with the proportion of the interaction time for the functional tool and the relevant components to the overall interaction time as the response variable and three predictor variables: Delay (short vs. long), Age (continuous), and the Score in the test (0 vs. 1). The Shapiro–Wilk test was run to check the distribution of the residuals, which were normally distributed. Best model selection was performed, and predictors that were not involved in significant main and/or interaction effects were dropped. Furthermore, a generalized linear model was used, with the proportion of the interaction time for the functional tool and the irrelevant components to the overall interaction time as the response variable and the same three predictor variables: Delay (short vs. long), Age (continuous), and the Score in the test (0 vs. 1). Here, the Shapiro–Wilk test showed that the residuals were not normally distributed, and the variable was modeled with a Beta distribution. Again, best model selection was performed, and predictors that were not involved in significant main and/or interaction effects were dropped.
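
A sketch of how these two models could be set up in R. The linear model and the Shapiro–Wilk check use base functions; for the Beta-distributed proportion, the betareg package is shown as one common option, since base glm has no Beta family. The authors’ exact implementation is not specified here, and all data and variable names are placeholders.

```r
library(betareg)   # one common option for Beta regression; an assumption, not the authors' stated tool

set.seed(2)
n <- 80
# Illustrative per-child data; all names and values are placeholders.
dat <- data.frame(
  Age          = runif(n, 30, 55),
  Delay        = factor(sample(c("short", "long"), n, replace = TRUE)),
  Score        = factor(rbinom(n, 1, 0.5)),
  prop_F_rel   = rbeta(n, 2, 2),   # proportion of time: functional tool x relevant components
  prop_F_irrel = rbeta(n, 1, 4)    # proportion of time: functional tool x irrelevant components
)

# H3-H5: linear model for the relevant-component proportion, with a residual normality check.
m_rel <- lm(prop_F_rel ~ Delay * Age + Score, data = dat)
shapiro.test(residuals(m_rel))   # Shapiro-Wilk test

# Irrelevant-component proportion: Beta regression for a (0, 1) response.
# Real proportions containing exact 0s or 1s would first need to be squeezed
# slightly into the open interval; the simulated values here already avoid that.
m_irrel <- betareg(prop_F_irrel ~ Delay * Age * Score, data = dat)
summary(m_irrel)
```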

(H6) A generalized linear model, with the Score in the extra testing round as the response variable and two predictor variables, Delay (short vs. long) and Age (continuous), was planned, but too few children succeeded in the extra test to permit this analysis.

All analyses were conducted in R (v.3.5.1, the R Foundation for Statistical Computing 1 ). For hypothesis testing, best model selection for generalized linear models was carried out with the following functions: glm (glmulti package; Calcagno, 2013 ), dredge, get.models (MuMIn package; Barton, 2020 ), and Anova (car package; Fox and Weisberg, 2011 ). Contrasts were calculated with the glht function from the multcomp package ( Hothorn et al., 2008 ), and model fit was estimated with Nagelkerke’s pseudo-R2, using the PseudoR2 function from the DescTools package ( Signorell et al., 2019 ). Weights equal to the total interaction time per child were specified in each model, so that children who did not interact with the box would be excluded from the interaction analyses. The results were plotted with the interactions package ( Long, 2019 ). The significance level was set at 0.05.
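
The sketch below strings together the model-selection and effect-size steps described above, using the same packages; the data, the weights column and the exact call pattern are plausible placeholders rather than a reproduction of the authors’ script.

```r
library(MuMIn)      # dredge(), get.models()
library(car)        # Anova()
library(DescTools)  # PseudoR2()
library(multcomp)   # glht()

set.seed(3)
n <- 80
# Illustrative stand-in data; total_time plays the role of the per-child
# interaction-time weights (children with zero interaction time drop out).
dat <- data.frame(
  Age        = runif(n, 30, 55),
  Delay      = factor(sample(c("short", "long"), n, replace = TRUE)),
  Score      = rbinom(n, 1, 0.5),
  total_time = rpois(n, 60)
)

# Global model; na.action = "na.fail" is required by dredge().
full <- glm(Score ~ Delay * Age, family = binomial, data = dat,
            weights = total_time, na.action = "na.fail")

# All-subsets model selection; keep the model with the lowest AICc.
cand <- dredge(full)
best <- get.models(cand, subset = 1)[[1]]

# Shown here on the global model so the sketch runs whatever survives selection
# with random data; in practice these steps would be applied to the selected model.
Anova(full)                                         # chi-square tests per term
PseudoR2(full, which = "Nagelkerke")                # Nagelkerke pseudo-R2
summary(glht(full, linfct = mcp(Delay = "Tukey")))  # contrast between delay conditions
```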

Results

All children between 24 and 29 months ( n = 10) failed to solve the test, and therefore this group was excluded from the generalized linear models. Instead, Fisher’s exact test was run to statistically compare performance between children aged 24–29 months and children older than 29 months. Among children who received training ( n = 90), 46% ( n = 41) solved the test, 43% ( n = 18) of whom did so after the short delay (for details see Table 1 ). None of the children who did not receive training ( n = 15) solved the test, and their probability of doing so was significantly lower than for children who received training ( p = 0.004). Therefore, training was a prerequisite for solving the test. In the short delay group, the delay between baseline and test was much shorter than 10 min, as children in the experimental group needed shorter trainings than planned, 2 min on average ( M = 2.02, SD = 0.87), and children in the control group played with the experimenter for a shorter time than planned, 5.5 min on average ( M = 5.36, SD = 0.88). Five children who received training (29, 31, 37, 43, and 47 months; compared to two who did not receive training, 32 and 49 months) did not interact with the test task and were excluded from the interaction analyses.

Table 1. An overview of participants by age, outcome, condition, and delay.

Among the 54 children who failed to solve the test with the perceptually incongruent set of tools and proceeded to the extra test ( N = 54, 26 boys/28 girls), ages ranged between 24 and 53 months ( M = 40.24 months, SD = 7.76). In line with hypothesis H6, only 9% ( n = 5) of these children solved the test when the perceptual mismatch within the tool set was removed (see Table 1 ). Due to the small size of this group, further statistical analyses based on generalized linear modeling were not possible. Only categorical data analysis with Hildebrand’s Del was possible ( Hildebrand et al., 1977 ; Drazin and Kazanjian, 2017 ) and showed that, after the perceptual mismatch was removed, the chances of solving the test were low, but not non-existent.
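
For readers unfamiliar with it, Hildebrand’s Del quantifies how much better a stated prediction does than chance on a cross-classified table. A minimal R sketch of the computation follows; the table, the prediction rule and the error weights are illustrative assumptions only and do not reproduce the analysis reported here.

```r
# Hildebrand's Del (prediction analysis of cross classifications) compares the
# observed rate of prediction errors with the error rate expected under
# independence of rows and columns.
counts <- matrix(c( 1, 26,    # e.g., younger children: solved, failed
                    4, 23),   # e.g., older children:   solved, failed
                 nrow = 2, byrow = TRUE,
                 dimnames = list(group   = c("younger", "older"),
                                 outcome = c("solved", "failed")))

# Hypothetical prediction: younger children fail, older children solve.
# Error weight 1 marks the cells that contradict the prediction.
error_wt <- matrix(c(1, 0,
                     0, 1), nrow = 2, byrow = TRUE)

p_obs <- counts / sum(counts)                   # observed cell proportions
p_exp <- outer(rowSums(p_obs), colSums(p_obs))  # expected proportions under independence

# Del = proportionate reduction in prediction error relative to chance.
1 - sum(error_wt * p_obs) / sum(error_wt * p_exp)
```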

H1: Children Are More Likely to Solve the Test Task After Training

In line with our hypothesis, none of the children who did not receive training solved the test. Further, all children younger than 30 months also failed to solve the test, regardless of the delay between the training and the test, and were significantly more likely to fail the test than older children ( p = 0.015). However, even in older children who received the training, the Score did not depend on Age, as the main effect of Age was not significant [χ 2 (1) = 0.561, p = 0.454, R 2 = 0.026].

H2: Compared to Older Children, Younger Children Are Not Less Likely to Solve the Test Task After a 24-h Delay

Contrary to our hypothesis, compared to older children, younger children were not less likely to solve the test task after a 24-h delay, as the Age × Delay interaction was not significant [χ 2 (1) = 1.039, p = 0.308, R 2 = 0.026].

H3 and H4: After a Long Delay, Older Children Interact Less With the Functional Tool and the Relevant Components

Contrary to hypothesis H3, older children who received training did not interact longer with the functional tool and the relevant components of the apparatuses, as the main effect of Age was not significant [ F (1,74) = 0.13, p = 0.718]. Further, in line with hypothesis H4, there was a significant interaction effect for Age × Delay [ F (2,74) = 6.385, p = 0.003, R 2 = 0.28]. However, the effect was partially different from the predicted one. After the short delay, there was no difference in interacting with the functional tool and the relevant components across ages, but after the long delay, older children engaged in such interactions significantly less than younger children. Note that this effect is opposite to the predicted one (see Figure 3 ).

Figure 3. A plot of Age × Delay effect on the proportion of interactions with the functional tool and (A) the relevant components, (B) the irrelevant components of the apparatus in the test. Note that Age and Delay were involved in a two-way interaction effect in (A) , and a three-way interaction effect with the Score in the test in (B) . Children younger than 30 months were not involved in the analyses behind the plot, but their results are displayed in the plot for comparison. Circles stand for individual datapoints in the short-delay condition, and triangles stand for individual datapoints in the long-delay condition.

A more complex picture emerged from the analysis of interactions with the functional tool and the irrelevant components, since the Age × Delay interaction was implicated in a three-way effect for Age × Delay × Success (β = 0.092, SE = 0.001, z = 147.51, p < 0.001, R 2 = 0.105). Namely, after both delays, children who failed the test interacted more with the functional tool and the irrelevant components with increasing age (see Figure 3 ). The picture was somewhat different for children who solved the test. After the long delay, older children interacted more with the functional tool and the irrelevant components than younger children, but after the short delay, this pattern was reversed (see Figure 3 ).

H5: Solving the Test Task Involves Longer Interactions With the Functional Tool and Relevant Apparatus Components

In line with our hypothesis, children who solved the test task interacted longer with the functional tool and relevant apparatus components than children who did not, as there was a main effect of Score in the test [ F (1,74) = 9.674, p = 0.003, R 2 = 0.28] and this variable was not implicated in any interactions. Further, after the short delay, solving the test involved shorter interactions with the functional tool and the irrelevant components in the older children than in the younger children. After the long delay, however, solving the test involved longer interactions with the functional tool and the irrelevant components in the older than in the younger children (see Figure 3 ).

Discussion

In the current study, we tested a novel experimental setup with 2- to 4 1/2-year-olds to pinpoint the developmental trajectory of flexible tool-dependent problem solving. For the first time, we investigated analogical transfer of actual tool use, both immediately after training and after a 24-h delay. Contrary to previous findings suggesting that younger children may have difficulty in analogical transfer from long-term memory, we found that once children pass the 30-month threshold, they can transfer solutions that depend on prioritizing tool function over appearance across problems, regardless of the delay between the problems. Unlike previous studies with such young children, the language demands of the task were kept to a minimum, as children received puzzle boxes instead of stories and tackled the test without any verbal or non-verbal guidance from the experimenter.

Analogical Transfer Was Not Possible Before the Age of 2.5 Years

Children younger than 2 1/2 years did not manage to solve the problem, even if they had learned the correct solution on a perceptually dissimilar problem 10 min earlier. This was not the case for the children between 2 1/2 and 4 1/2 years, who, after learning the correct solution 10 min or 24 h earlier, could solve the problem regardless of the delay. Therefore, it seems that neither the delay nor the perceptual mismatch within the tool set impeded transfer at these ages. Given previous findings from analogical transfer studies ( Crisafi and Brown, 1986 ), it is not surprising that children younger than 2 1/2 years did not transfer the solution between two physically dissimilar problems, but it is surprising that the ability to transfer was independent of age in children between 30 and 55 months of age.

The current analogical transfer task perhaps required simpler analogical reasoning than other classical tasks, e.g., those of the A:B::C:D type ( Thibaut and French, 2016 ). Typically, such tasks require detecting how A is related to B (e.g., A fits in B, the shirt fits in the suitcase) and then applying this relation to a pair of C and D items (e.g., C fits in D, the toy car fits in the box). This is a complex task that involves holding “online” and transferring an abstract rule, an operation that lies at the core of adult analogical reasoning. However, from the child’s point of view, the task is fairly abstract, requires well-developed verbal skills and does not allow any agency on the child’s part. By contrast, in the current task, children needed only to hold and transfer a concrete rule across one pair of items (A can be solved like B), use no language and act as agents. This may explain why children’s analogical transfer was independent of age between 30 and 55 months. In the future, however, our task could be used as an A:B::C:D task, as we devised several pairs of boxes, all relying on the same rule: that both boxes in a pair can be solved with the same tool.

Transferring a solution across problems arguably requires the ability to mentally represent the objects and actions involved in the solution. To transfer the solution across two problems, children need to simultaneously consider the source problem, familiar but currently absent, and the target problem, currently present but unfamiliar. In other words, the child needs to activate a mental representation of the source problem in the service of problem solving, which, according to neo-Piagetian accounts, may be available in 2-year-olds at the earliest (e.g., Morra et al., 2008 ). While our findings could be taken to indicate that representational ability is still immature before the 30th month of life, hindering analogical transfer, 2-year-olds have been shown to succeed in transfer tasks as long as the similarity between the problems is explicitly stated by the experimenter ( Crisafi and Brown, 1986 ; Goswami, 1991 ), suggesting at least some capacity to mentally represent the source problem.

That children below 30 months have difficulties in transferring across conceptually similar problems has also been shown in other tasks, using deferred imitation ( Herbert and Hayne, 2000 ) and object search ( DeLoache et al., 2004 ). Although analogical transfer may be available even to 2-year-olds, they may require the experimenter to highlight the conceptual similarity between the source and the target problem. For instance, Hayne and Gross (2015) showed that 2-year-olds can transfer a sequence of actions across perceptually dissimilar tasks, as long as the experimenter provides the same verbal label to highlight the underlying functional similarity. Afterward, children were also shown to map this similarity onto another set of problems, but only as long as the functional similarity had been highlighted with the verbal label within the initial set.

Therefore, in principle, 2-year-olds are able to transfer knowledge across functionally similar contexts but may be less likely than 2 1/2-year-olds to spontaneously notice the link between the source and the target. The challenges posed by the target problem can also be viewed from a sensorimotor perspective, as solving it demands activating and coordinating several sensorimotor schemes regarding the essential features of the problem (the puzzle box, its relevant components, the toy bee inside, the tools), the goal of the problem (selecting a tool and retrieving the toy bee) and the strategy for arriving at this goal (applying the tool to the components of the box; e.g., Morra and Panesi, 2017 ). Such activation and coordination may limit children’s spontaneous transfer below 30 months.

Alternatively, our youngest participants could have failed to prioritize attending to relevant over irrelevant aspects of the target problem. Between 2 and 2 1/2 years, children’s attention undergoes a transition, as it becomes increasingly governed by top-down influences (executive functions) rather than bottom-up influences (attractiveness and novelty of stimuli; Ruff and Capozzoli, 2003 ). If so, participants in this age range would focus to a greater extent than older children on the irrelevant aspects of the problem. This was not the case, however, as the youngest children interacted with the functional tool and the irrelevant aspects to a similar extent as the older children who failed to solve the test.

In fact, it has recently been shown that the capacity to attend to relevant, rather than irrelevant, information in children around the age of 3 years can be substantially boosted by jointly attending to the problem tasks ( Psouni et al., 2019 ). Future studies should address whether the experimenter jointly attending to the task with the youngest children, or providing verbal cues that highlight the similarity between the source and the target, might boost the youngest participants’ capacity for analogical transfer.

Allowing children to manipulate the tools in future studies, even in the youngest group, may further illuminate whether children’s motor programs for tools change with age. Since the tools in the current study were invented for the experiment and their function was the same for the source and the target, it is unlikely that previous motor programs for familiar tools, e.g., a spoon, affected children’s tool-use flexibility ( Barrett et al., 2007 ). In the future, however, our setup could be used to study how robust tool-dependent transfer is when the function and motor programs associated with the tools are manipulated across tasks.

The Interplay of Age and Memory

Our findings suggest that once children are able to transfer, they can do so also with a delay between the initial problem and the test problem, beyond the 15-min limit identified by Scarf and colleagues for 3-year-olds in another task ( Scarf et al., 2013 ). Children could not simply repeat the previously learned motor action in order to succeed, as the training and test puzzles were distinctly different in that respect; instead, they needed to generalize the action to a different-looking problem. Thus, it is unlikely that children could solve the problem relying on procedural memory (note also that procedural learning does not generalize well; see, e.g., Doll et al., 2015 ).

Interestingly, our result mirrors the findings from Herbert and Hayne’s (2005) deferred imitation study with 30-month-olds, in which children’s transfer was immune to a similar 24-h delay. In another above-mentioned study, this performance was also achievable for 2-year-olds as long as their attention was drawn to the functional similarity between two perceptually dissimilar tasks, suggesting that 2-year-olds’ working memory resources may suffice for analogical transfer ( Herbert and Hayne, 2000 ). Assuming that working memory is not separate from long-term memory, but rather a state of activation encompassing certain information stored in long-term memory ( Oberauer, 2002 ; Pascual-Leone and Johnson, 2005 ), it is possible that once analogical transfer is permitted by working memory maturity, it works equally well for immediate and delayed problems, at least up to a 24-h delay.

In terms of outcome, children performed similarly regardless of delay and age in our analogical transfer task; that is, children were equally likely to solve the test after 10 min and after 24 h. However, the analysis of the interaction patterns reveals a different picture. In the short term, attending to the relevant components of the problem was similar in children of different ages, and lower in those who did not solve the task than in those who did. In the long term, however, among children above 30 months, older children spent less time attending to the relevant components of the problem than younger children. Note that this was true both for the children who failed and for those who passed the test. Taken together, these results suggest that, even if children reach the same outcome in the short and in the long term, retrieval from long-term memory does not pose a uniform challenge to children of different ages.

We posit that it is highly unlikely that retrieving a relevant experience from long-term memory becomes increasingly difficult as the child’s memory matures. Instead, we suggest that, in the long term, older children may adopt a more flexible, explorative approach than younger children, spending more time on interactions with the irrelevant aspects of the target problem. Perhaps, with age, the solution to the target problem becomes increasingly straightforward and children seek additional ways of solving it before acting on the relevant aspects of the problem.

This reasoning seems to be supported by the analysis of interactions with the functional tool and the irrelevant components. Among children who passed the test, the older children interacted more with the functional tool and the irrelevant components than the younger ones, but only in the long term. This pattern was different among children who failed the test, since there was no difference in interactions with the functional tool and the irrelevant components across ages, either in the short or in the long term.

Repeating the previously learned motor action clearly did not suffice for successful transfer, as focusing on the functional tool without focusing on the relevant components did not differ between children who succeeded and those who did not, at least in the short term. In other words, it was not sufficient for the child to pick up the functional tool and apply it to different components; rather, the child had to understand which components matched the tool’s function. Further, children had to transfer a solution acquired in a specific, one-time personal episode and flexibly apply it to the novel situation, which requires episodic memory ( Tulving, 2005 ). Therefore, the present results suggest that the ability for non-verbal transfer across physically dissimilar problems in young children has been underestimated. It seems that between 2 1/2 and 4 1/2 years of age children can perform such transfers, using episodic memory, as long as success does not require the comprehension of verbal instructions.

Limitations

The manipulations introduced in the current experiment could not disentangle two possible reasons behind children’s failures: a difficulty caused by analogical transfer or one caused by the perceptual mismatch within the tool set. When the perceptual mismatch was removed, very few children improved their performance, suggesting that the failures were predominantly caused by the difficulty of analogical transfer. However, as this extra test involved the same children and followed immediately after the test, failure to solve the task could also be due to a drop in children’s motivation after failure, interference between the perceptually incongruent and the uniform tool sets, or fatigue due to prolonged testing. Future studies ought to distinguish between these possible explanations by, for instance, testing one group of children with the perceptually mismatching tools and another group with the same tools as in the training.

In the present study, group sizes for the youngest and oldest children were small ( n = 10 and n = 9, respectively). Since it is possible that the youngest children were motorically disadvantaged compared to older children, future studies might include a preferential-looking test for the youngest children. Alternatively, to limit the motor involvement on the child’s part but retain the current setup, the experimenter could use tools indicated by the child, as in Pauen and Bechtel-Kuehne’s study (2016). However, the present experimental setup allowed the child to execute multiple motor actions with the chosen tool, enhancing the child’s active involvement and independence while keeping the language demands of the task minimal. Having the experimenter handle the tools as instructed by the child would significantly increase language demands, thereby hindering a comparison of performance across age groups and across children with varying language abilities.

The current design could not disentangle whether children attended to the perceptual differences between tool handles and disregarded them in favor of the tools’ functionality, or whether they did not attend to the tools’ appearance at all. Drawing the children’s attention to the tools’ appearance would, again, have increased the language demands of the task. Future investigations may opt to ask the children retrospectively about the tools’ appearance, in an effort to disentangle these two possibilities.

Conclusion

We tested a novel setup to investigate analogical transfer of tool use in 2- to 4 1/2-year-olds and showed that transfer between functionally similar, but perceptually dissimilar, problems can be as robust after a 24-h delay as after a 10-min delay. Children below 30 months did not demonstrate such transfer, in line with previous studies that, like ours, limited verbal cues pointing to the similarity between the source and the target task. Interestingly, we found that, even when children’s behavior in the test led to the same outcome regardless of age and delay, this behavior followed different trajectories. Therefore, we posit that future studies should focus not only on the outcome of children’s actions but also on the behavioral patterns that lead to those outcomes.

Flexible tool-dependent problem solving has a remarkable impact on everyday life and decision making, in both local and global contexts ( Keen, 2011 ). Recognizing shared principles for solutions across physical and abstract problems allows for efficient and timely action in response to grave, real challenges such as climate change ( Keen, 2011 ). Understanding how children spontaneously shift attention toward the relevant aspects of solutions and problems could inform future interventions, on the one hand enhancing efficient problem solving from a young age and, on the other, enhancing spontaneous focusing on the relevant aspects of abstract problems in adults. Furthermore, as analogical transfer of tool use in the current setup did not require verbal instructions, the pairs of problems and tools could be tested with clinical populations of children and adults with speech and/or hearing impediments or impairments.

Data Availability Statement

All datasets generated in this study are included in the article/ Supplementary Material .

Ethics Statement

The studies involving human participants were reviewed and approved by Swedish Ethical Review Authority in Lund (DRN 2018/572, PI Psouni). Written informed consent to participate in this study was provided by the participants’ legal guardian/next of kin.

Author Contributions

KB: conceptualization, methodology, data curation, formal analysis, writing – original draft, visualization, and funding acquisition. FL and ML: methodology, investigation, and data curation. EP: methodology, writing – review and editing, supervision, project administration, and funding acquisition. All authors contributed to the article and approved the submitted version.

Funding

Materials devised in the study were financed by Stiftelsen Roy och Maj Franzéns fond, awarded to KB (grant no. RFv2018-0221). The study was partly financed by Lund University Department of Psychology Research Funding to EP. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

We thank Johan Sahlström, Klara Thorstensson, Therese Wikström, Helena Kelber, and Brigitta Nagy for help with data collection. We thank Joost van der Weijer for his assistance with time-unit kappa, and we gratefully acknowledge Lund University Humanities Lab. Finally, we thank two reviewers for their helpful suggestions and guidance in improving this manuscript.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2020.573730/full#supplementary-material

Footnotes

  1. http://www.R-project.org

References

Bakeman, R., Quera, V., and Gnisci, A. (2009). Observer-agreement for timed-event sequential data: a comparison of time-based and event-based algorithms. Behav. Res. Methods 41, 137–147. doi: 10.3758/brm.41.1.137

Baker, R. K., and Keen, R. (2007). “Tool use by young children: choosing the right tool for the task,” in Paper Presented at the Biennial Meeting of Society for Research in Child Development , Boston, MA.

Barrett, T. M., Davis, E. F., and Needham, A. (2007). Learning about tools in infancy. Dev. Psychol. 43, 352–368. doi: 10.1037/0012-1649.43.2.352

Barton, K. (2020). MuMIn: Multi-Model Inference. R package version 1.43.17. Available online at: https://CRAN.R-project.org/package=MuMIn (accessed April 15, 2020).

Bates, E., Carlson-Luden, V., and Bretherton, I. (1980). Perceptual aspects of tool using in infancy. Infant Behav. Dev. 3, 127–140. doi: 10.1016/S0163-6383(80)80017-8

Bauer, P. J. (2008). Toward a neuro-developmental account of the development of declarative memory. Dev. Psychobiol. 50, 19–31. doi: 10.1002/dev.20265

Bechtel, S., Jeschonek, S., and Pauen, S. (2013). How 24-month-olds form and transfer knowledge about tools: the role of perceptual, functional, causal, and feedback information. J. Exp. Child Psychol. 115, 163–179. doi: 10.1016/j.jecp.2012.12.004

Bobrowicz, K. (2019). Memory for Problem Solving: Comparative Studies in Attention, Working and Long-term Memory. Ph.D. thesis, Lund University, Lund.

Booth, A. E., and Waxman, S. (2002). Object names and object functions serve as cues to categories for infants. Dev. Psychol. 38, 948–957. doi: 10.1037/0012-1649.38.6.948

Brown, A. L. (1989). “Analogical learning and transfer: what develops?” in Similarity and Analogical Reasoning , eds S. Vosniadou and A. Ortony (New York, NY: Cambridge University Press), 369–412. doi: 10.1017/CBO9780511529863.019

Brown, A. L., and Kane, M. J. (1988). Preschool children can learn to transfer: learning to learn and learning from example. Cogn. Psychol. 20, 493–523. doi: 10.1016/0010-0285(88)90014-X

Brown, A. L., Kane, M. J., and Long, C. (1989). Analogical transfer in young children: analogies as tools for communication and exposition. Appl. Cogn. Psychol. 3, 275–293. doi: 10.1002/acp.2350030402

Calcagno, V. (2013). glmulti: Model Selection and Multimodel Inference made Easy. R package version 1.0.7. Available online at: https://CRAN.R-project.org/package=glmulti (accessed May 26, 2020).

Chen, Z. (1996). Children’s analogical problem solving: the effects of superficial, structural and procedural similarities. J. Exp. Child Psychol. 62, 410–431. doi: 10.1006/jecp.1996.0037

Chen, Z., Sanchez, R. P., and Campbell, T. (1997). From beyond to within their grasp: the rudiments of analogical problem solving in 10- and 13-month-olds. Dev. Psychol. 33, 790–801. doi: 10.1037/0012-1649.33.5.790

Conolly, K., and Dalgleish, M. (1989). The emergence of a tool-using skill in infancy. Dev. Psychol. 25, 894–912. doi: 10.1037/0012-1649.25.6.894

Crisafi, M. A., and Brown, A. L. (1986). Analogical transfer in very young children: combining two separately learned solutions to reach a goal. Child Dev. 57, 953–968. doi: 10.2307/1130371

DeLoache, J. S., Simcock, G., and Marzolf, D. P. (2004). Transfer by very young children in the symbolic retrieval task. Child Dev. 75, 1708–1718. doi: 10.1111/j.1467-8624.2004.00811.x

Doll, B. B., Shohamy, D., and Daw, N. D. (2015). Multiple memory systems as substrates for multiple decision systems. Neurobiol. Learn. Mem. 117, 4–13. doi: 10.1016/j.nlm.2014.04.014

Drazin, R., and Kazanjian, R. K. (2017). The analysis of cross-classification data: a prediction approach. Acad. Manag. Proc. 1987, 334–337. doi: 10.5465/ambpp.1987.17534403

Fox, J., and Weisberg, S. (2011). An {R} Companion to Applied Regression , 2nd Edn. Thousand Oaks CA: Sage.

Goswami, U. (1991). Analogical reasoning: what develops? A review of research and theory. Child Dev. 62, 1–28.

Hayne, H., and Gross, J. (2015). 24-month-olds use conceptual similarity to solve new problems after a delay. Intl. J. Behav. Dev. 39, 330–345. doi: 10.1177/0165025415579227

Hayne, H., Gross, J., McNamee, S., Fitzgibbon, O., and Tustin, K. (2011). Episodic memory and episodic foresight in 3- and 5-year-old children. Cogn. Dev. 26, 343–355. doi: 10.1016/j.cogdev.2011.09.006

Herbert, J., and Hayne, H. (2000). Memory retrieval by 18-30-month-olds: age-related changes in representational flexibility. Dev. Psychol. 36, 473–484. doi: 10.1037/0012-1649.36.4.473

Hildebrand, D., Laing, J., and Rosenthal, H. (1977). Prediction Analysis of Cross Classifications. New York, NY: Wiley.

Holyoak, K. J., Junn, E. N., and Billman, D. O. (1984). Development of analogical problem-solving skill. Child Dev. 55, 2042–2055. doi: 10.2307/1129778

Hothorn, T., Bretz, F., and Westfall, P. (2008). Simultaneous inference in general parametric models. Biom. J. 50, 346–363. doi: 10.1002/bimj.200810425

Keen, R. (2011). The development of problem solving in young children: a critical cognitive skill. Annu. Rev. Psychol. 62, 1–21. doi: 10.1146/annurev.psych.031809.130730

Leslie, A. M. (1984). Spatiotemporal continuity and the perception of causality in infants. Perception 11, 173–186. doi: 10.1068/p130287

Long, J. A. (2019). interactions: Comprehensive, User-Friendly Toolkit for Probing Interactions. R package version 1.1.0. Available online at: https://cran.r-project.org/package=interactions (accessed April, 4, 2020).

Lum, J., Kidd, E., Davis, S., and Conti-Ramsden, G. (2010). Longitudinal study of declarative and procedural memory in primary school-aged children. Aust. J. Psychol. 62, 139–148. doi: 10.1080/00049530903150547

Luo, Y., Kaufman, L., and Baillargeon, R. (2009). Young infants’ reasoning about physical events involving inert and self-propelled objects. Cogn. Psychol. 58, 441–486. doi: 10.1016/j.cogpsych.2008.11.001

Madole, K. L., Oakes, L. M., and Cohen, L. B. (1993). Developmental changes in infants’ attention to function and form-function correlations. Cogn. Dev. 8, 189–209. doi: 10.1016/0885-2014(93)90014-V

McCarty, M. E., Clifton, R. K., and Collard, R. R. (2011). The beginnings of tool use by infants and toddlers. Infancy 2, 233–256. doi: 10.1207/S15327078IN0202_8

Morra, S., Gobbo, C., Marini, Z., and Sheese, R. (2008). Cognitive Development: Neo-Piagetian Perspectives. New Jersey: Lawrence Erlbaum Associates, 190–228.

Morra, S., and Panesi, S. (2017). From scribbling to drawing: the role of working memory. Cogn. Dev. 43, 142–158. doi: 10.1016/j.cogdev.2017.03.001

Oberauer, K. (2002). Access to information in working memory: exploring the focus of attention. J. Exp. Psychol. Learn. Mem. Cogn. 28, 411–421. doi: 10.1037/0278-7393.28.3.411

Osiurak, F., and Reynaud, E. (2019). The elephant in the room: what matters cognitively in cumulative technological culture. Behav. Brain Sci. 43:e156. doi: 10.1017/S0140525X19003236

Pascual-Leone, J., and Johnson, J. A. (2005). “Dialectical constructivist view of developmental intelligence,” in Handbook of Understanding and Measuring Intelligence , eds O. Wilhelm and R. W. Engle (Thousand Oaks, CA: Sage Publications, Inc), 177–201. doi: 10.4135/9781452233529.n11

Pauen, S., and Bechtel-Kuehne, S. (2016). How toddlers acquire and transfer tool knowledge: developmental changes and the role of executive functions. Child Dev. 87, 1233–1249. doi: 10.1111/cdev.12532

Psouni, E., Falck, A., Boström, L., Persson, M., Sidén, L., and Wallin, M. (2019). Together I Can! joint attention boosts 3- to 4-year-olds’ performance in a verbal false-belief test. Child Dev. 90, 35–50. doi: 10.1111/cdev.13075

Rakison, D. H., and Woodward, A. L. (2008). New perspectives on the effects of action on perceptual and cognitive development. Dev. Psychol. 44, 1209–1213. doi: 10.1037/a0012999

Ruff, H. A., and Capozzoli, M. C. (2003). Development of attention and distractability in the first 4 years of life. Dev. Psychol. 39, 877–890. doi: 10.1037/0012-1649.39.5.877

Scarf, D., Boden, H., Labuschagne, L. G., Gross, J., and Hayne, H. (2017). “What” and “where” was when? Memory for the temporal order of episodic events in children. Dev. Psychobiol. 59, 1039–1045. doi: 10.1002/dev.21553

Scarf, D., Gross, J., Colombo, M., and Hayne, H. (2013). To have and to hold: episodic memory in 3- and 4-year-old children. Dev. Psychobiol. 55, 125–132. doi: 10.1002/dev.21004

Signorell, A., Aho, K., Alfons, A., Anderegg, N., Aragon, T., Arppe, A., et al. (2019). DescTools : Tools for Descriptive Statistics. R package version 0.99.28. Available online at: https://andrisignorell.github.io/DescTools/

Thibaut, J. P., and French, R. M. (2016). Analogical reasoning, control and executive functions: a developmental investigation with eye-tracking. Cogn. Dev. 38, 10–26. doi: 10.1016/j.cogdev.2015.12.002

Träuble, B., and Pauen, S. (2007). The role of functional information for infant categorization. Cognition 105, 362–379. doi: 10.1016/j.cognition.2006.10.003

Tulving, E. (2005). “Episodic memory and autonoesis: uniquely human?” in The Missing Link in Cognition , eds H. S. Terrace and J. Metcalfe (New York, NY: Oxford University Press), 4–56. doi: 10.1093/acprof:oso/9780195161564.003.0001

Keywords : analogical transfer, tool use, memory, toddler development, functionality

Citation: Bobrowicz K, Lindström F, Lindblom Lovén M and Psouni E (2020) Flexibility in Problem Solving: Analogical Transfer of Tool Use in Toddlers Is Immune to Delay. Front. Psychol. 11:573730. doi: 10.3389/fpsyg.2020.573730

Received: 17 June 2020; Accepted: 16 September 2020; Published: 06 October 2020.

Copyright © 2020 Bobrowicz, Lindström, Lindblom Lovén and Psouni. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Katarzyna Bobrowicz, [email protected] ; [email protected]

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
