An Introduction to Problem-Solving using Search Algorithms for Beginners

Introduction

In computer science, problem-solving refers to artificial intelligence techniques such as designing efficient algorithms, applying heuristics, and performing root cause analysis to find desirable solutions. Search algorithms are fundamental tools for solving a wide range of problems in computer science. They provide a systematic approach to finding solutions by exploring a set of possible alternatives. These algorithms are used in various applications such as pathfinding, scheduling, and information retrieval.

In this article, we will look into the problem-solving techniques used by various types of search algorithms.

This article was published as a part of the  Data Science Blogathon .

Table of contents

  • Examples of Problems in Artificial Intelligence
  • Problem-Solving Techniques
  • Properties of Search Algorithms
  • Types of Search Algorithms
  • Uninformed Search Algorithms
  • Comparison of Various Uninformed Search Algorithms
  • Informed Search Algorithms
  • Comparison of Uninformed and Informed Search Algorithms

In today’s fast-paced digitized world, artificial intelligence techniques are widely used to automate systems so that they use resources and time efficiently. Some well-known problems experienced in everyday life are games and puzzles, and using AI techniques we can solve them efficiently. Some of the most common problems solved with AI are:

  • Travelling Salesman Problem
  • Tower of Hanoi Problem
  • Water-Jug Problem
  • N-Queen Problem
  • Crypt-arithmetic Problems
  • Magic Squares
  • Logical Puzzles and so on.

In artificial intelligence, problems can be solved by using searching algorithms, evolutionary computations, knowledge representations, etc.

In this article, I am going to discuss the various searching techniques that are used to solve a problem.

In general, searching refers to finding the information one needs.

The process of problem-solving using searching consists of the following steps.

  • Define the problem
  • Analyze the problem
  • Identification of possible solutions
  • Choosing the optimal solution
  • Implementation

Let’s discuss some of the essential properties of search algorithms.

Completeness

A search algorithm is said to be complete if it is guaranteed to return a solution whenever one exists for the given input.

Optimality

If the solution found is the best one (lowest path cost) among all solutions, that solution is said to be optimal.

Time complexity

The time taken by an algorithm to complete its task is called its time complexity. An algorithm that completes its task in less time is more efficient.

Space complexity

It is the maximum storage or memory taken by the algorithm at any time while searching.

These properties are also used to compare the efficiency of the different types of searching algorithms.

Now let’s look at the types of search algorithms.

Based on the search problems, we can classify search algorithms as:

  • Uninformed search
  • Informed search

An uninformed search algorithm does not have any domain knowledge, such as the closeness or location of the goal state; it behaves in a brute-force way. It only knows how to traverse the given tree and how to find the goal state. Uninformed search is also known as blind search or brute-force search. The uninformed search strategies are of six types:

  • Breadth-first search
  • Depth-first search
  • Depth-limited search
  • Iterative deepening depth-first search
  • Bidirectional search
  • Uniform cost search

Let’s discuss these six strategies one by one.

1. Breadth-first search

It is one of the most common search strategies. It generally starts from the root node, examines all neighbouring nodes, and then moves to the next level. It uses a First-In First-Out (FIFO) queue, which is why it finds the path with the fewest steps to the solution.

BFS is suitable when the given problem is fairly small and space complexity is not a concern.

Now, consider the following tree.

[Figure: example tree for breadth-first search (Source: Author)]

Here, let’s take node A as the start state and node F as the goal state.

The BFS algorithm starts with the start state and then goes to the next level and visits the node until it reaches the goal state.

In this example, it starts from A, then travels to the next level and visits B and C, and then travels to the next level and visits D, E, F, and G. Here, the goal state is defined as F, so the traversal stops at F.

[Figure: traversal order in BFS]

The path of traversal is:

A —-> B —-> C —-> D —-> E —-> F

Let’s implement the same in python programming.

Python Code:
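A minimal BFS sketch. The adjacency dictionary is an assumption based on the example tree in the figure (A’s children are B and C, and so on):

```python
from collections import deque

# Adjacency list assumed from the example tree in the figure
tree = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F', 'G'],
    'D': [], 'E': [], 'F': [], 'G': [],
}

def bfs(tree, start, goal):
    """Visit nodes level by level (FIFO queue) until the goal is reached."""
    visited = []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node in visited:
            continue
        visited.append(node)
        if node == goal:
            return visited          # traversal stops at the goal
        queue.extend(tree[node])
    return None                     # goal not reachable

print(' ---> '.join(bfs(tree, 'A', 'F')))  # A ---> B ---> C ---> D ---> E ---> F
```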

Advantages of BFS

  • BFS will never be trapped in any unwanted nodes.
  • If the graph has more than one solution, then BFS will return the optimal solution which provides the shortest path.

Disadvantages of BFS

  • BFS stores all the nodes of the current level before moving to the next level, so it requires a lot of memory.
  • BFS takes more time to reach a goal state that is far away.

2. Depth-first search

The depth-first search uses a Last-In First-Out (LIFO) strategy and hence can be implemented using a stack. DFS uses backtracking: it starts from the initial state and explores each path to its greatest depth before moving to the next path.

DFS will follow

Root node —-> Left node —-> Right node

Now, consider the same example tree mentioned above.

Here, it starts from the start state A, travels to B, and then goes to D. After reaching D, it backtracks to B. Since B is already visited, it goes to the next depth, E, and then backtracks to B again. As B is already visited, it goes back to A. A is already visited, so it goes to C and then to F. F is our goal state, and the search stops there.

[Figure: traversal order in DFS]

A —-> B —-> D —-> E —-> C —-> F

The output path is as follows.

[Output snippet]
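A matching DFS sketch, using the same assumed adjacency dictionary for the example tree:

```python
# Adjacency list assumed from the example tree
tree = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F', 'G'],
    'D': [], 'E': [], 'F': [], 'G': [],
}

def dfs(tree, node, goal, visited=None):
    """Explore each path to its greatest depth before backtracking."""
    if visited is None:
        visited = []
    visited.append(node)
    if node == goal:
        return visited
    for child in tree[node]:
        result = dfs(tree, child, goal, visited)
        if result is not None:
            return result           # goal found deeper along this path
    return None                     # dead end: backtrack

print(' ---> '.join(dfs(tree, 'A', 'F')))  # A ---> B ---> D ---> E ---> C ---> F
```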

Advantages of DFS

  • It takes lesser memory as compared to BFS.
  • The time complexity is lesser when compared to BFS.
  • DFS may find a solution without examining much of the search space.

Disadvantages of DFS

  • DFS does not always guarantee to give a solution.
  • As DFS goes deep down, it may get trapped in an infinite loop.

3. Depth-limited search

Depth-limited works similarly to depth-first search. The difference here is that depth-limited search has a pre-defined limit up to which it can traverse the nodes. Depth-limited search solves one of the drawbacks of DFS as it does not go to an infinite path.

DLS ends its traversal if either of the following conditions occurs.

Standard Failure

It denotes that the given problem does not have any solutions.

Cutoff Failure

It indicates that there is no solution for the problem within the given depth limit.

Now, consider the same example.

Let’s take A as the start node and C as the goal state and limit as 1.

The traversal starts with node A, goes to the next level (level 1), finds the goal state C there, and stops the traversal.

[Figure: traversal in depth-limited search]

A —-> C

If we give C as the goal node and the limit as 0, the algorithm will not return any path as the goal node is not available within the given limit.

If we give the goal node as F and limit as 2, the path will be A, C, F.

Let’s implement DLS.
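A DLS sketch on the same assumed tree, with the depth limit as an extra parameter:

```python
# Adjacency list assumed from the example tree
tree = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F', 'G'],
    'D': [], 'E': [], 'F': [], 'G': [],
}

def dls(tree, node, goal, limit, path=()):
    """Depth-first search that never descends more than `limit` levels."""
    path = path + (node,)
    if node == goal:
        return list(path)
    if limit <= 0:
        return None                 # cutoff: depth limit reached
    for child in tree[node]:
        result = dls(tree, child, goal, limit - 1, path)
        if result is not None:
            return result
    return None

print(dls(tree, 'A', 'C', 1))   # ['A', 'C']
print(dls(tree, 'A', 'F', 2))   # ['A', 'C', 'F']
print(dls(tree, 'A', 'C', 0))   # None (goal not reachable within the limit)
```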

When we give C as the goal node and 1 as the limit, the path is as follows.

[Output snippet]

Advantages of DLS

  • It takes lesser memory when compared to other search techniques.

Disadvantages of DLS

  • DLS may not offer an optimal solution if the problem has more than one solution.
  • DLS is also incomplete.

4. Iterative deepening depth-first search

Iterative deepening depth-first search is a combination of depth-first search and breadth-first search. IDDFS finds the best depth limit by gradually increasing the limit until the goal state is reached.

Let me try to explain this with the same example tree.

Consider, A as the start node and E as the goal node. Let the maximum depth be 2.

The algorithm starts with A and searches up to the current limit for E. If E is not found, the limit is increased and the search repeats until E is found.

[Figure: traversal in iterative deepening depth-first search]

The path of traversal is

A —-> B —-> E

Let’s try to implement this.
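IDDFS can be sketched by calling a depth-limited search with increasing limits (same assumed tree as before):

```python
# Adjacency list assumed from the example tree
tree = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F', 'G'],
    'D': [], 'E': [], 'F': [], 'G': [],
}

def dls(tree, node, goal, limit, path=()):
    """Depth-limited search used as the inner loop of IDDFS."""
    path = path + (node,)
    if node == goal:
        return list(path)
    if limit <= 0:
        return None
    for child in tree[node]:
        result = dls(tree, child, goal, limit - 1, path)
        if result is not None:
            return result
    return None

def iddfs(tree, start, goal, max_depth):
    """Try limits 0, 1, 2, ... until the goal is found or max_depth is hit."""
    for limit in range(max_depth + 1):
        result = dls(tree, start, goal, limit)
        if result is not None:
            return result
    return None

print(' ---> '.join(iddfs(tree, 'A', 'E', 2)))  # A ---> B ---> E
```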

The path generated is as follows.

[Output snippet]

Advantages of IDDFS

  • IDDFS has the advantages of both BFS and DFS.
  • It offers fast search and uses memory efficiently.

Disadvantages of IDDFS

  • It repeats all the work of the previous stages over and over again.

5. Bidirectional search

The bidirectional search algorithm is completely different from all other search strategies. It executes two simultaneous searches, a forward search and a backward search, and reaches the goal state. Here, the graph is divided into two smaller sub-graphs: in one, the search starts from the initial state, and in the other, it starts from the goal state. When the two searches intersect, the search terminates.

Bidirectional search requires both the start and goal states to be well defined, and the branching factor to be the same in both directions.

Consider the below graph.

[Figure: example graph for bidirectional search]

Here, the start state is E and the goal state is G. In one sub-graph, the search starts from E and in the other, the search starts from G. E will go to B and then A. G will go to C and then A. Here, both the traversal meets at A and hence the traversal ends.

[Figure: traversal in bidirectional search]

E —-> B —-> A —-> C —-> G

Let’s implement the same in Python.
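A bidirectional search sketch: two BFS frontiers grow toward each other until they meet. The undirected edge list is an assumption covering only the relevant part of the example figure:

```python
from collections import deque

# Undirected edges assumed from the example figure
edges = [('E', 'B'), ('B', 'A'), ('A', 'C'), ('C', 'G')]
graph = {}
for u, v in edges:
    graph.setdefault(u, []).append(v)
    graph.setdefault(v, []).append(u)

def expand(graph, frontier, parents, other_parents):
    """Expand one node; return the meeting node if the searches intersect."""
    node = frontier.popleft()
    for nbr in graph[node]:
        if nbr not in parents:
            parents[nbr] = node
            frontier.append(nbr)
        if nbr in other_parents:
            return nbr
    return None

def bidirectional_search(graph, start, goal):
    """Run a forward and a backward BFS until their frontiers meet."""
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])
    while frontier_f and frontier_b:
        meet = expand(graph, frontier_f, parents_f, parents_b)
        if meet is None and frontier_b:
            meet = expand(graph, frontier_b, parents_b, parents_f)
        if meet is not None:
            # stitch the two half-paths together at the meeting node
            path, n = [], meet
            while n is not None:
                path.append(n)
                n = parents_f[n]
            path.reverse()
            n = parents_b[meet]
            while n is not None:
                path.append(n)
                n = parents_b[n]
            return path
    return None

print(' ---> '.join(bidirectional_search(graph, 'E', 'G')))  # E ---> B ---> A ---> C ---> G
```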

The path is generated as follows.

[Output snippet]

Advantages of bidirectional search

  • This algorithm searches the graph fast.
  • It requires less memory to complete its action.

Disadvantages of bidirectional search

  • The goal state should be pre-defined.
  • The algorithm is quite difficult to implement.

6. Uniform cost search

Uniform cost search is considered the best search algorithm for a weighted graph or graph with costs. It searches the graph by giving maximum priority to the lowest cumulative cost. Uniform cost search can be implemented using a priority queue.

Consider the below graph where each node has a pre-defined cost.

[Figure: example weighted graph for uniform cost search]

Here, S is the start node and G is the goal node.

From S, G can be reached in the following ways.

S, A, E, F, G -> 19

S, B, E, F, G -> 18

S, B, D, F, G -> 19

S, C, D, F, G -> 23

Here, the path with the least cost is S, B, E, F, G.

[Figure: traversal in uniform cost search]

Let’s implement UCS in Python.
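A UCS sketch using a priority queue. The individual edge costs below are assumptions, chosen so that the four path totals match the ones listed above:

```python
import heapq

# Edge costs are assumed; they reproduce the path totals listed above
graph = {
    'S': [('A', 5), ('B', 2), ('C', 4)],
    'A': [('E', 7)],
    'B': [('D', 5), ('E', 9)],
    'C': [('D', 7)],
    'D': [('F', 9)],
    'E': [('F', 4)],
    'F': [('G', 3)],
    'G': [],
}

def ucs(graph, start, goal):
    """Always expand the frontier node with the lowest cumulative cost."""
    frontier = [(0, start, [start])]    # (cost so far, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for nbr, step in graph[node]:
            heapq.heappush(frontier, (cost + step, nbr, path + [nbr]))
    return None

print(ucs(graph, 'S', 'G'))   # (18, ['S', 'B', 'E', 'F', 'G'])
```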

The optimal output path is generated.

[Output snippet]

Advantages of UCS

  • This algorithm is optimal as the selection of paths is based on the lowest cost.

Disadvantages of UCS

  • The algorithm does not consider how many steps it takes to reach the lowest-cost path, so it may get stuck in an infinite loop (for example, when there are zero-cost edges).

Now, let me compare the six uninformed search strategies based on time complexity, space complexity, completeness, and optimality.

| Algorithm           | Time            | Space           | Complete | Optimal |
|---------------------|-----------------|-----------------|----------|---------|
| Breadth-first       | O(b^d)          | O(b^d)          | Yes      | Yes     |
| Depth-first         | O(b^m)          | O(bm)           | No       | No      |
| Depth-limited       | O(b^l)          | O(bl)           | No       | No      |
| Iterative deepening | O(b^d)          | O(bd)           | Yes      | Yes     |
| Bidirectional       | O(b^(d/2))      | O(b^(d/2))      | Yes      | Yes     |
| Uniform cost        | O(b^(1+⌊C*/ε⌋)) | O(b^(1+⌊C*/ε⌋)) | Yes      | Yes     |

Here b is the branching factor, d the depth of the shallowest solution, m the maximum depth, l the depth limit, C* the cost of the optimal solution, and ε the minimum action cost.

This is all about uninformed search algorithms.

Let’s take a look at informed search algorithms.

The informed search algorithm is also called heuristic search or directed search. In contrast to uninformed search algorithms, informed search algorithms use details such as the distance to the goal, the steps to reach the goal, and the cost of the paths, which makes them more efficient.

Here, the goal state can be achieved by using the heuristic function.

The heuristic function is used to achieve the goal state with the lowest cost possible. This function estimates how close a state is to the goal.

Let’s discuss some of the informed search strategies.

1. Greedy best-first search algorithm

Greedy best-first search uses the properties of both depth-first search and breadth-first search. Greedy best-first search traverses the node by selecting the path which appears best at the moment. The closest path is selected by using the heuristic function.

Consider the below graph with the heuristic values.

[Figure: example graph with heuristic values for greedy best-first search]

Here, A is the start node and H is the goal node.

Greedy best-first search starts with A and then examines the neighbours B and C. Here, the heuristic of B is 12 and that of C is 4. The best path at the moment goes through C, so it moves to C. From C, it explores the neighbours F and G. The heuristic of F is 8 and that of G is 2, so it goes to G. From G, it goes to H, whose heuristic is 0, and H is our goal state.

[Figure: path of traversal in greedy best-first search]

A —-> C —-> G —-> H

Let’s try this with Python.
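A sketch of greedy best-first search. The heuristic values for B, C, F, G, and H come from the example; the values for A, D, and E, and the full edge list, are assumptions (those nodes are never expanded here):

```python
import heapq

# Edges and some heuristic values assumed from the example figure;
# h for A, D, E are made-up placeholders
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
         'D': [], 'E': [], 'F': [], 'G': ['H'], 'H': []}
h = {'A': 13, 'B': 12, 'C': 4, 'D': 9, 'E': 10, 'F': 8, 'G': 2, 'H': 0}

def greedy_best_first(graph, h, start, goal):
    """Always expand the node that looks closest to the goal (lowest h)."""
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr in graph[node]:
            heapq.heappush(frontier, (h[nbr], nbr, path + [nbr]))
    return None

print(' ---> '.join(greedy_best_first(graph, h, 'A', 'H')))  # A ---> C ---> G ---> H
```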

The output path with the lowest cost is generated.

[Output snippet]

The time complexity of greedy best-first search is O(b^m) in the worst case.

Advantages of Greedy best-first search

  • Greedy best-first search is more efficient compared with breadth-first search and depth-first search.

Disadvantages of Greedy best-first search

  • In the worst-case scenario, the greedy best-first search algorithm may behave like an unguided DFS.
  • There are some possibilities for greedy best-first to get trapped in an infinite loop.
  • The algorithm is not an optimal one.

Next, let’s discuss the other informed search algorithm called the A* search algorithm.

2. A* search Algorithm

A* search algorithm is a combination of both uniform cost search and greedy best-first search algorithms. It uses the advantages of both with better memory usage. It uses a heuristic function to find the shortest path. A* search algorithm uses the sum of both the cost and heuristic of the node to find the best path.

Consider the following graph with the heuristics values as follows.

[Figure: example graph with costs and heuristic values for A* search]

Let A be the start node and H be the goal node.

First, the algorithm will start with A. From A, it can go to B, C, H.

Note the point that A* search uses the sum of path cost and heuristics value to determine the path.

Here, from A to B, the sum of cost and heuristics is 1 + 3 = 4.

From A to C, it is 2 + 4 = 6.

From A to H, it is 7 + 0 = 7.

Here, the lowest cost is 4 and the path A to B is chosen. The other paths will be on hold.

Now, from B, it can go to D or E.

From A to B to D, the cost is 1 + 4 + 2 = 7.

From A to B to E, it is 1 + 6 + 6 = 13.

The lowest new cost is 7. The path A to B to D is compared with the other paths on hold.

Here, the path A to C has the lowest cost, which is 6.

Hence, A to C is chosen and the other paths are kept on hold.

From C, it can now go to F or G.

From A to C to F, the cost is 2 + 3 + 3 = 8.

From A to C to G, the cost is 2 + 2 + 1 = 5.

The lowest cost is 5, which is also less than the costs of the other paths on hold. Hence, the path A to C to G is chosen.

From G, it can go to H whose cost is 2 + 2 + 2 + 0 = 6.

Here, 6 is less than the costs of the other paths on hold.

Also, H is our goal state. The algorithm will terminate here.

[Figure: path of traversal in A* search]

Let’s try this in Python.
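An A* sketch using f(n) = g(n) + h(n). The edge costs and heuristic values are taken from the worked example above; h(A) is an assumption (it never affects the result for the start node):

```python
import heapq

# Costs and heuristics from the worked example; h['A'] is an assumption
graph = {
    'A': [('B', 1), ('C', 2), ('H', 7)],
    'B': [('D', 4), ('E', 6)],
    'C': [('F', 3), ('G', 2)],
    'D': [], 'E': [], 'F': [],
    'G': [('H', 2)],
    'H': [],
}
h = {'A': 6, 'B': 3, 'C': 4, 'D': 2, 'E': 6, 'F': 3, 'G': 1, 'H': 0}

def a_star(graph, h, start, goal):
    """Expand the node with the smallest f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    explored = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in explored:
            continue
        explored.add(node)
        for nbr, step in graph[node]:
            heapq.heappush(frontier,
                           (g + step + h[nbr], g + step, nbr, path + [nbr]))
    return None

print(a_star(graph, h, 'A', 'H'))   # (6, ['A', 'C', 'G', 'H'])
```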

The output is given as

[Output snippet]

The time complexity of A* search is O(b^d), where b is the branching factor and d is the depth of the solution.

Advantages of A* search algorithm

  • This algorithm often performs best when compared with the other algorithms discussed here.
  • This algorithm can be used to solve very complex problems, and it is optimal when the heuristic is admissible.

Disadvantages of A* search algorithm

  • The A* search is based on heuristics and costs; with a poor heuristic, it may not produce the shortest path.
  • The usage of memory is more as it keeps all the nodes in the memory.

Now, let’s compare uninformed and informed search strategies.

Uninformed search is also known as blind search, whereas informed search is also called heuristic search. Uninformed search does not require much information, while informed search requires domain-specific details. Informed search strategies are more efficient, and their time complexity is lower than that of uninformed strategies. Informed search handles the problem better than blind search.

Search algorithms are used in games, databases, virtual search spaces, quantum computing, and so on. In this article, we have discussed some of the important search strategies and how to use them to solve problems in AI, and this is not the end: there are several algorithms for solving any given problem. Nowadays, AI is growing rapidly and is applied to many real-life problems. Keep learning! Keep practicing!



Chapter 3 Solving Problems by Searching 

When the correct action to take is not immediately obvious, an agent may need to plan ahead : to consider a sequence of actions that form a path to a goal state. Such an agent is called a problem-solving agent , and the computational process it undertakes is called search .

Problem-solving agents use atomic representations, that is, states of the world are considered as wholes, with no internal structure visible to the problem-solving algorithms. Agents that use factored or structured representations of states are called planning agents .

We distinguish between informed algorithms, in which the agent can estimate how far it is from the goal, and uninformed algorithms, where no such estimate is available.

3.1 Problem-Solving Agents 

If the agent has no additional information—that is, if the environment is unknown —then the agent can do no better than to execute one of the actions at random. For now, we assume that our agents always have access to information about the world. With that information, the agent can follow this four-phase problem-solving process:

GOAL FORMULATION : Goals organize behavior by limiting the objectives and hence the actions to be considered.

PROBLEM FORMULATION : The agent devises a description of the states and actions necessary to reach the goal—an abstract model of the relevant part of the world.

SEARCH : Before taking any action in the real world, the agent simulates sequences of actions in its model, searching until it finds a sequence of actions that reaches the goal. Such a sequence is called a solution .

EXECUTION : The agent can now execute the actions in the solution, one at a time.

It is an important property that in a fully observable, deterministic, known environment, the solution to any problem is a fixed sequence of actions . The open-loop system means that ignoring the percepts breaks the loop between agent and environment. If there is a chance that the model is incorrect, or the environment is nondeterministic, then the agent would be safer using a closed-loop approach that monitors the percepts.

In partially observable or nondeterministic environments, a solution would be a branching strategy that recommends different future actions depending on what percepts arrive.

3.1.1 Search problems and solutions 

A search problem can be defined formally as follows:

A set of possible states that the environment can be in. We call this the state space .

The initial state that the agent starts in.

A set of one or more goal states . The goal can be specified in several ways, and we can account for all of them by specifying an \(Is\-Goal\) method for a problem.

The actions available to the agent. Given a state \(s\) , \(Actions(s)\) returns a finite set of actions that can be executed in \(s\) . We say that each of these actions is applicable in \(s\) .

A transition model , which describes what each action does. \(Result(s,a)\) returns the state that results from doing action \(a\) in state \(s\) .

An action cost function , denoted by \(Action\-Cost(s,a,s\pr)\) when we are programming or \(c(s,a,s\pr)\) when we are doing math, that gives the numeric cost of applying action \(a\) in state \(s\) to reach state \(s\pr\) .

A sequence of actions forms a path , and a solution is a path from the initial state to a goal state. We assume that action costs are additive; that is, the total cost of a path is the sum of the individual action costs. An optimal solution has the lowest path cost among all solutions.

The state space can be represented as a graph in which the vertices are states and the directed edges between them are actions.
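The components above can be sketched as a small Python interface (a sketch only; the class and method names are assumptions, loosely following the book's notation):

```python
class Problem:
    """A sketch of the formal search-problem definition above."""

    def __init__(self, initial, goal_states):
        self.initial = initial                # the initial state
        self.goal_states = set(goal_states)   # one or more goal states

    def is_goal(self, state):
        return state in self.goal_states

    def actions(self, state):
        """Return the finite set of actions applicable in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state resulting from `action` in `state`."""
        raise NotImplementedError

    def action_cost(self, s, a, s_prime):
        """c(s, a, s'): cost of applying `a` in `s` to reach `s'`."""
        return 1                              # default: unit costs
```

A concrete problem subclasses this and fills in `actions` and `result`; path costs stay additive because a search algorithm sums `action_cost` along the path.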

3.1.2 Formulating problems 

The process of removing detail from a representation is called abstraction . The abstraction is valid if we can elaborate any abstract solution into a solution in the more detailed world. The abstraction is useful if carrying out each of the actions in the solution is easier than the original problem.

3.2 Example Problems 

A standardized problem is intended to illustrate or exercise various problem-solving methods. It can be given a concise, exact description and hence is suitable as a benchmark for researchers to compare the performance of algorithms. A real-world problem , such as robot navigation, is one whose solutions people actually use, and whose formulation is idiosyncratic, not standardized, because, for example, each robot has different sensors that produce different data.

3.2.1 Standardized problems 

A grid world problem is a two-dimensional rectangular array of square cells in which agents can move from cell to cell.

Vacuum world

Sokoban puzzle

Sliding-tile puzzle

3.2.2 Real-world problems 

Route-finding problem

Touring problems

Traveling salesperson problem (TSP)

VLSI layout problem

Robot navigation

Automatic assembly sequencing

3.3 Search Algorithms 

A search algorithm takes a search problem as input and returns a solution, or an indication of failure. We consider algorithms that superimpose a search tree over the state-space graph, forming various paths from the initial state, trying to find a path that reaches a goal state. Each node in the search tree corresponds to a state in the state space and the edges in the search tree correspond to actions. The root of the tree corresponds to the initial state of the problem.

The state space describes the (possibly infinite) set of states in the world, and the actions that allow transitions from one state to another. The search tree describes paths between these states, reaching towards the goal. The search tree may have multiple paths to (and thus multiple nodes for) any given state, but each node in the tree has a unique path back to the root (as in all trees).

The frontier separates two regions of the state-space graph: an interior region where every state has been expanded, and an exterior region of states that have not yet been reached.

3.3.1 Best-first search 

In best-first search we choose a node, \(n\) , with minimum value of some evaluation function , \(f(n)\) .

[Figure 3.7: ../_images/Fig3.7.png]
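
The algorithm can be sketched as follows. This is a sketch in the spirit of the figure, assuming a problem object with `initial`, `is_goal`, `actions`, `result`, and `action_cost` members; the toy `GraphProblem` and its edge data are hypothetical.

```python
import heapq
from dataclasses import dataclass
from typing import Any

@dataclass
class Node:
    state: Any
    parent: Any = None
    action: Any = None
    path_cost: float = 0.0

def expand(problem, node):
    """Generate the child nodes of `node`."""
    for action in problem.actions(node.state):
        s = problem.result(node.state, action)
        cost = node.path_cost + problem.action_cost(node.state, action, s)
        yield Node(s, node, action, cost)

def best_first_search(problem, f):
    """Expand the frontier node with minimum f(n); return a goal node or None."""
    node = Node(problem.initial)
    frontier = [(f(node), id(node), node)]   # priority queue ordered by f
    reached = {problem.initial: node}        # best node found for each state
    while frontier:
        _, _, node = heapq.heappop(frontier)
        if problem.is_goal(node.state):
            return node
        for child in expand(problem, node):
            s = child.state
            if s not in reached or child.path_cost < reached[s].path_cost:
                reached[s] = child
                heapq.heappush(frontier, (f(child), id(child), child))
    return None                              # failure

class GraphProblem:
    """Toy weighted graph problem for demonstration (hypothetical data)."""
    def __init__(self, edges, initial, goal):
        self.edges, self.initial, self.goal = edges, initial, goal
    def is_goal(self, state): return state == self.goal
    def actions(self, state): return list(self.edges.get(state, {}))
    def result(self, state, action): return action     # action = neighbor
    def action_cost(self, s, a, s2): return self.edges[s][s2]

edges = {"A": {"B": 1, "C": 5}, "B": {"C": 1}, "C": {}}
goal_node = best_first_search(GraphProblem(edges, "A", "C"),
                              f=lambda n: n.path_cost)
```

The `reached` table is what lets the search detect redundant paths and keep only the best path to each state.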

3.3.2 Search data structures 

A node in the tree is represented by a data structure with four components:

\(node.State\) : the state to which the node corresponds;

\(node.Parent\) : the node in the tree that generated this node;

\(node.Action\) : the action that was applied to the parent’s state to generate this node;

\(node.Path\-Cost\) : the total cost of the path from the initial state to this node. In mathematical formulas, we use \(g(node)\) as a synonym for \(Path\-Cost\) .

Following the \(PARENT\) pointers back from a node allows us to recover the states and actions along the path to that node. Doing this from a goal node gives us the solution.
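
This path recovery can be sketched in a few lines; the `Node` tuple here is a hypothetical stand-in for the node structure described above.

```python
from collections import namedtuple

# Minimal node structure with the four components described in the text.
Node = namedtuple("Node", "state parent action path_cost")

def solution(node):
    """Follow parent pointers back from a goal node and return the
    sequence of actions along the path from the root to that node."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))

# A hypothetical three-node path: root -> B -> C.
root = Node("A", None, None, 0)
n1 = Node("B", root, "go-B", 1)
n2 = Node("C", n1, "go-C", 2)
```

Applying `solution` to a goal node yields the solution path's actions in order.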

We need a data structure to store the frontier . The appropriate choice is a queue of some kind, because the operations on a frontier are:

\(Is\-Empty(frontier)\) returns true only if there are no nodes in the frontier.

\(Pop(frontier)\) removes the top node from the frontier and returns it.

\(Top(frontier)\) returns (but does not remove) the top node of the frontier.

\(Add(node, frontier)\) inserts node into its proper place in the queue.

Three kinds of queues are used in search algorithms:

A priority queue first pops the node with the minimum cost according to some evaluation function, \(f\) . It is used in best-first search.

A FIFO queue or first-in-first-out queue first pops the node that was added to the queue first; we shall see it is used in breadth-first search.

A LIFO queue or last-in-first-out queue (also known as a stack ) pops first the most recently added node; we shall see it is used in depth-first search.
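
In Python, the three kinds of queues map naturally onto standard-library structures; the following is a small stdlib-only sketch, with `heapq` for the priority queue, `collections.deque` for the FIFO queue, and a plain list for the LIFO stack.

```python
import heapq
from collections import deque

# Priority queue: pops the entry with the minimum f-value first.
pq = []
heapq.heappush(pq, (3, "c"))
heapq.heappush(pq, (1, "a"))
heapq.heappush(pq, (2, "b"))

# FIFO queue: pops the oldest entry first (used in breadth-first search).
fifo = deque()
fifo.append("a")
fifo.append("b")

# LIFO queue (stack): pops the newest entry first (used in depth-first search).
lifo = []
lifo.append("a")
lifo.append("b")
```

`heapq.heappop(pq)` returns `(1, "a")`, `fifo.popleft()` returns `"a"`, and `lifo.pop()` returns `"b"`.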

3.3.3 Redundant paths 

A cycle is a special case of a redundant path .

As the saying goes, algorithms that cannot remember the past are doomed to repeat it . There are three approaches to this issue.

First, we can remember all previously reached states (as best-first search does), allowing us to detect all redundant paths, and keep only the best path to each state.

Second, we can not worry about repeating the past. We call a search algorithm a graph search if it checks for redundant paths and a tree-like search if it does not check.

Third, we can compromise and check for cycles, but not for redundant paths in general.

3.3.4 Measuring problem-solving performance 

COMPLETENESS : Is the algorithm guaranteed to find a solution when there is one, and to correctly report failure when there is not?

COST OPTIMALITY : Does it find a solution with the lowest path cost of all solutions?

TIME COMPLEXITY : How long does it take to find a solution?

SPACE COMPLEXITY : How much memory is needed to perform the search?

To be complete, a search algorithm must be systematic in the way it explores an infinite state space, making sure it can eventually reach any state that is connected to the initial state.

In theoretical computer science, the typical measure of time and space complexity is the size of the state-space graph, \(|V|+|E|\) , where \(|V|\) is the number of vertices (state nodes) of the graph and \(|E|\) is the number of edges (distinct state/action pairs). For an implicit state space, complexity can be measured in terms of \(d\) , the depth or number of actions in an optimal solution; \(m\) , the maximum number of actions in any path; and \(b\) , the branching factor or number of successors of a node that need to be considered.

3.4 Uninformed Search Strategies 

3.4.1 Breadth-first search 

When all actions have the same cost, an appropriate strategy is breadth-first search , in which the root node is expanded first, then all the successors of the root node are expanded next, then their successors, and so on.

[Figure 3.9: ../_images/Fig3.9.png]

Breadth-first search always finds a solution with a minimal number of actions, because when it is generating nodes at depth \(d\) , it has already generated all the nodes at depth \(d-1\) , so if one of them were a solution, it would have been found.

All the nodes remain in memory, so both time and space complexity are \(O(b^d)\) . The memory requirements are a bigger problem for breadth-first search than the execution time . In general, exponential-complexity search problems cannot be solved by uninformed search for any but the smallest instances .
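
A compact sketch of breadth-first search on an explicit graph follows, with a FIFO frontier and the goal test applied when a node is generated; the graph data is hypothetical.

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Return a path with the fewest actions from start to goal, or None."""
    if start == goal:
        return [start]
    frontier = deque([start])        # FIFO queue
    parent = {start: None}           # doubles as the reached set
    while frontier:
        s = frontier.popleft()
        for child in graph.get(s, []):
            if child not in parent:
                parent[child] = s
                if child == goal:    # early goal test on generation
                    path = [child]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return list(reversed(path))
                frontier.append(child)
    return None                      # no solution

city_graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
```

Because nodes are generated level by level, the first path found to the goal uses the minimal number of actions.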

3.4.2 Dijkstra’s algorithm or uniform-cost search 

When actions have different costs, an obvious choice is to use best-first search where the evaluation function is the cost of the path from the root to the current node. This is called Dijkstra’s algorithm by the theoretical computer science community, and uniform-cost search by the AI community.

The complexity of uniform-cost search is characterized in terms of \(C^*\) , the cost of the optimal solution, and \(\epsilon\) , a lower bound on the cost of each action, with \(\epsilon>0\) . Then the algorithm’s worst-case time and space complexity is \(O(b^{1+\lfloor C^*/\epsilon\rfloor})\) , which can be much greater than \(b^d\) .

When all action costs are equal, \(b^{1+\lfloor C^*/\epsilon\rfloor}\) is just \(b^{d+1}\) , and uniform-cost search is similar to breadth-first search.
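
Uniform-cost search is best-first search with evaluation function \(f(n)=g(n)\), the path cost so far. A minimal sketch on a hypothetical weighted graph:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Return (cost, path) of a cheapest path, or (None, None) on failure."""
    frontier = [(0, start, [start])]     # priority queue ordered by path cost g
    best = {start: 0}                    # cheapest known cost to each state
    while frontier:
        g, s, path = heapq.heappop(frontier)
        if s == goal:                    # late goal test: check on expansion,
            return g, path               # not on generation, to stay optimal
        for child, weight in graph.get(s, {}).items():
            g2 = g + weight
            if child not in best or g2 < best[child]:
                best[child] = g2
                heapq.heappush(frontier, (g2, child, path + [child]))
    return None, None

road_costs = {"A": {"B": 1, "C": 4}, "B": {"C": 1}, "C": {}}
```

Note the goal test happens when a node is popped, not when it is generated; testing on generation could return a more expensive path that happens to reach the goal first.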

3.4.3 Depth-first search and the problem of memory 

Depth-first search always expands the deepest node in the frontier first. It could be implemented as a call to \(Best\-First\-Search\) where the evaluation function \(f\) is the negative of the depth.

For problems where a tree-like search is feasible, depth-first search has much smaller needs for memory. A depth-first tree-like search takes time proportional to the number of states, and has memory complexity of only \(O(bm)\) , where \(b\) is the branching factor and \(m\) is the maximum depth of the tree.

A variant of depth-first search called backtracking search uses even less memory.
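
A tree-like depth-first search can be sketched with an explicit LIFO stack; it stores only the frontier of partial paths and performs no redundant-path checking. The toy tree here is hypothetical.

```python
def depth_first_search(tree, start, goal):
    """Tree-like DFS: return some path from start to goal, or None."""
    stack = [[start]]                   # LIFO frontier of partial paths
    while stack:
        path = stack.pop()              # always take the deepest node first
        node = path[-1]
        if node == goal:
            return path
        # Push children in reverse so the leftmost child is explored first.
        for child in reversed(tree.get(node, [])):
            stack.append(path + [child])
    return None

toy_tree = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
```

On a finite tree this terminates, but the path it returns need not be optimal.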

3.4.4 Depth-limited and iterative deepening search 

To keep depth-first search from wandering down an infinite path, we can use depth-limited search , a version of depth-first search in which we supply a depth limit, \(l\) , and treat all nodes at depth \(l\) as if they had no successors. The time complexity is \(O(b^l)\) and the space complexity is \(O(bl)\) .

[Figure 3.12: ../_images/Fig3.12.png]

Iterative deepening search solves the problem of picking a good value for \(l\) by trying all values: first 0, then 1, then 2, and so on—until either a solution is found, or the depth-limited search returns the failure value rather than the cutoff value.

Its memory requirements are modest: \(O(bd)\) when there is a solution, or \(O(bm)\) on finite state spaces with no solution. The time complexity is \(O(b^d)\) when there is a solution, or \(O(b^m)\) when there is none.

In general, iterative deepening is the preferred uninformed search method when the search state space is larger than can fit in memory and the depth of the solution is not known .
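
The two algorithms above can be sketched together. Following the text, depth-limited search treats nodes at the depth limit as if they had no successors, and distinguishes a cutoff result (the limit was hit) from outright failure; the toy tree is hypothetical.

```python
CUTOFF, FAILURE = "cutoff", "failure"

def depth_limited_search(tree, node, goal, limit):
    """Recursive depth-limited search; returns a path, CUTOFF, or FAILURE."""
    if node == goal:
        return [node]
    if limit == 0:
        return CUTOFF                 # depth limit reached: no successors
    result = FAILURE
    for child in tree.get(node, []):
        sub = depth_limited_search(tree, child, goal, limit - 1)
        if sub == CUTOFF:
            result = CUTOFF           # remember that something was cut off
        elif sub != FAILURE:
            return [node] + sub       # solution found below this node
    return result

def iterative_deepening_search(tree, start, goal):
    """Try depth limits 0, 1, 2, ... until a solution or definite failure."""
    depth = 0
    while True:
        result = depth_limited_search(tree, start, goal, depth)
        if result != CUTOFF:
            return result             # a path, or FAILURE
        depth += 1

toy_tree = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
```

On a finite state space with no solution, some iteration eventually returns `FAILURE` rather than `CUTOFF`, so the loop terminates.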

3.4.5 Bidirectional search 

An alternative approach called bidirectional search simultaneously searches forward from the initial state and backwards from the goal state(s), hoping that the two searches will meet.

[Figure 3.14: ../_images/Fig3.14.png]

3.4.6 Comparing uninformed search algorithms 

[Figure 3.15: ../_images/Fig3.15.png]

3.5 Informed (Heuristic) Search Strategies 

An informed search strategy uses domain-specific hints about the location of goals to find solutions more efficiently than an uninformed strategy. The hints come in the form of a heuristic function , denoted \(h(n)\) :

\(h(n)\) = estimated cost of the cheapest path from the state at node \(n\) to a goal state.

3.5.1 Greedy best-first search 

Greedy best-first search is a form of best-first search that expands first the node with the lowest \(h(n)\) value—the node that appears to be closest to the goal—on the grounds that this is likely to lead to a solution quickly. So the evaluation function \(f(n)=h(n)\) .
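
A minimal sketch of greedy best-first search follows, ordering the frontier purely by \(h(n)\); the graph and the heuristic table are hypothetical stand-ins for, say, straight-line distances.

```python
import heapq

def greedy_best_first_search(graph, h, start, goal):
    """Expand the node that appears closest to the goal (lowest h) first."""
    frontier = [(h[start], start, [start])]   # ordered by f(n) = h(n)
    reached = {start}
    while frontier:
        _, s, path = heapq.heappop(frontier)
        if s == goal:
            return path
        for child in graph.get(s, []):
            if child not in reached:
                reached.add(child)
                heapq.heappush(frontier, (h[child], child, path + [child]))
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
h = {"A": 3, "B": 2, "C": 1, "D": 0}          # hypothetical estimates
```

Because it ignores the path cost already paid, greedy best-first search is fast in practice but offers no optimality guarantee.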

Search Algorithms


A search algorithm is a type of algorithm used in artificial intelligence to find the best or most optimal solution to a problem by exploring a set of possible solutions, also called a search space. A search algorithm filters through a large number of possibilities to find a solution that works best for a given set of constraints.

Search algorithms typically operate by organizing the search space into a particular type of graph, commonly a tree, and evaluate the best score, or cost, of traversing each branch of the tree. A solution is a path from the start state to the goal state that optimizes the cost given the parameters of the implemented algorithm.

Search algorithms are typically organized into two categories:

Uninformed Search: Algorithms that perform general-purpose traversals of the state space or search tree without any information about how good a state is. These are also referred to as blind search algorithms.

Informed Search: Algorithms that have information about the goal during the traversal, allowing the search to prioritize its expansion toward the goal state instead of exploring directions that may yield a favorable cost but don’t lead to the goal, or global optimum. By including extra rules that aid in estimating the location of the goal (known as heuristics), informed search algorithms can be more computationally efficient when searching for a path to the goal state.

Types of Search Algorithms

There are many types of search algorithms used in artificial intelligence, each with their own strengths and weaknesses. Some of the most common types of search algorithms include:

Depth-First Search (DFS)

This algorithm explores as far as possible along each branch before backtracking. DFS is often used in combination with other search algorithms, such as iterative deepening search, to find the optimal solution. Think of DFS as a traversal pattern that focuses on digging as deep as possible before exploring other paths.

Breadth-First Search (BFS)

This algorithm explores all the neighbor nodes at the current level before moving on to the nodes at the next level. Think of BFS as a traversal pattern that tries to explore broadly across many different paths at the same time.

Uniform Cost Search (UCS)

This algorithm expands the node with the lowest cumulative cost from the start, continuing to explore all possible paths in order of increasing cost. UCS is guaranteed to find the optimal path between the start and goal nodes, as long as the cost of each edge is non-negative. However, it can be computationally expensive in a large search space, precisely because it explores paths in order of increasing cost.

Heuristic Search

This algorithm uses a heuristic function to guide the search towards the optimal solution. A* search, one of the most popular heuristic search algorithms, uses both the actual cost of getting to a node and an estimate of the cost to reach the goal from that node.
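
The A* evaluation described above, combining the actual cost so far with the estimated cost to go, can be sketched as follows; the graph and heuristic values are hypothetical.

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: order the frontier by f(n) = g(n) + h(n).
    Returns (cost, path) of the cheapest path found, or (None, None)."""
    frontier = [(h[start], 0, start, [start])]    # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, s, path = heapq.heappop(frontier)
        if s == goal:
            return g, path
        for child, weight in graph.get(s, {}).items():
            g2 = g + weight
            if child not in best_g or g2 < best_g[child]:
                best_g[child] = g2
                heapq.heappush(frontier,
                               (g2 + h[child], g2, child, path + [child]))
    return None, None

graph = {"A": {"B": 1, "C": 3}, "B": {"D": 5}, "C": {"D": 1}, "D": {}}
h = {"A": 4, "B": 5, "C": 1, "D": 0}              # hypothetical estimates
```

With an admissible heuristic (one that never overestimates the true remaining cost), A* returns an optimal path.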

Application of Search Algorithms

Search algorithms are used in various fields of artificial intelligence, including:

Pathfinding

Pathfinding problems involve finding the shortest path between two points in a given graph or network. BFS or A* search can be used to explore a graph and find the optimal path.

Optimization

In optimization problems, the goal is to find the minimum or maximum value of a function, subject to some constraints. Search algorithms such as hill climbing or simulated annealing are often used in optimization cases.

Game Playing

In game playing, search algorithms are used to evaluate all possible moves and choose the one that is most likely to lead to a win, or the best possible outcome. This is done by constructing a search tree where each node represents a game state and the edges represent the moves that can be taken to reach the associated new game state.




Searching algorithms are essential tools in computer science used to locate specific items within a collection of data. These algorithms are designed to efficiently navigate through data structures to find the desired information, making them fundamental in various applications such as databases, web search engines, and more.


What is Searching?


Searching is the fundamental process of locating a specific element or item within a collection of data. This collection of data can take various forms, such as arrays, lists, trees, or other structured representations. The primary objective of searching is to determine whether the desired element exists within the data, and if so, to identify its precise location or retrieve it. It plays an important role in various computational tasks and real-world applications, including information retrieval, data analysis, decision-making processes, and more.

Searching terminologies:

Target element:

In searching, there is always a specific target element or item that you want to find within the data collection. This target could be a value, a record, a key, or any other data entity of interest.

Search Space:

The search space refers to the entire collection of data within which you are looking for the target element. Depending on the data structure used, the search space may vary in size and organization.

Complexity:

Searching can have different levels of complexity depending on the data structure and the algorithm used. The complexity is often measured in terms of time and space requirements.

Deterministic vs. Non-deterministic:

Some searching algorithms, like binary search , follow a clear, systematic approach that narrows the search space at every step. Others, such as linear search, lack that structure and may need to examine the entire search space in the worst case.
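
The contrast can be seen by comparing linear search, which scans element by element, with binary search, which halves a sorted search space at each step; this is a stdlib-only sketch using `bisect`.

```python
from bisect import bisect_left

def linear_search(items, target):
    """Return the index of target, or -1; may scan the whole list (O(n))."""
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1

def binary_search(sorted_items, target):
    """Return the index of target in a sorted list, or -1 (O(log n))."""
    i = bisect_left(sorted_items, target)     # leftmost insertion point
    if i < len(sorted_items) and sorted_items[i] == target:
        return i
    return -1

data = [2, 3, 5, 7, 11, 13]                   # must be sorted for binary search
```

Binary search's speedup depends entirely on the precondition that the data is sorted; on unsorted data only linear search is correct.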

Importance of Searching in DSA:

  • Efficiency:  Efficient searching algorithms improve program performance.
  • Data Retrieval:  Quickly find and retrieve specific data from large datasets.
  • Database Systems:  Enables fast querying of databases.
  • Problem Solving:  Used in a wide range of problem-solving tasks.

Applications of Searching:

Searching algorithms have numerous applications across various fields. Here are some common applications:

  • Information Retrieval: Search engines like Google, Bing, and Yahoo use sophisticated searching algorithms to retrieve relevant information from vast amounts of data on the web.
  • Database Systems: Searching is fundamental in database systems for retrieving specific data records based on user queries, improving efficiency in data retrieval.
  • E-commerce: Searching is crucial in e-commerce platforms for users to find products quickly based on their preferences, specifications, or keywords.
  • Networking: In networking, searching algorithms are used for routing packets efficiently through networks, finding optimal paths, and managing network resources.
  • Artificial Intelligence: Searching algorithms play a vital role in AI applications, such as problem-solving, game playing (e.g., chess), and decision-making processes
  • Pattern Recognition: Searching algorithms are used in pattern matching tasks, such as image recognition, speech recognition, and handwriting recognition.

Search algorithms in AI are designed to help an agent reach the right solution. A search problem comprises a search space, a start state, and a goal state. By simulating scenarios and alternatives, search algorithms help AI agents find the optimal state for the task.

The logic in these algorithms starts from the initial state and works toward the expected goal state as the solution. Because of this, AI systems and applications built on them can only be as effective as the algorithms themselves.

AI agents can make AI interfaces usable without requiring software expertise. An agent acts with the aim of reaching an end goal and develops an action plan that, carried out step by step, completes the mission. The agent finds the best way through the process by evaluating all the alternatives that are present. Search is therefore a common task in artificial intelligence: finding the optimal solution for the AI agent.

In artificial intelligence, search techniques are universal problem-solving methods. Problem-solving agents are goal-based agents that use an atomic representation of states, and they mostly use these search strategies or algorithms to solve a specific problem and provide the best result. In this topic, we will survey various problem-solving search algorithms.

Searching is a step-by-step procedure for solving a search problem in a given search space. A search problem has several components:

  • Search space: the set of possible solutions the system may have.
  • Start state: the state from which the agent begins.
  • Goal test: a function that observes the current state and returns whether the goal state has been achieved.
  • Search tree: a tree representation of the search problem; its root node corresponds to the initial state.
  • Actions: a description of all the actions available to the agent.
  • Transition model: a description of what each action does.
  • Path cost: a function that assigns a numeric cost to each path.
  • Solution: an action sequence that leads from the start node to the goal node.
  • Optimal solution: a solution that has the lowest cost among all solutions.

Following are four essential properties of search algorithms, used to compare their efficiency:

Completeness: A search algorithm is said to be complete if it is guaranteed to return a solution whenever at least one solution exists for the given input.

Optimality: If the solution found by an algorithm is guaranteed to be the best solution (lowest path cost) among all solutions, it is said to be an optimal solution.

Time complexity: A measure of how long an algorithm takes to complete its task.

Space complexity: The maximum storage space required at any point during the search, which grows with the complexity of the problem.

Here, are some important factors of role of search algorithms used AI are as follow.

"Workflow" logical search methods like describing the issue, getting the necessary steps together, and specifying an area to search help AI search algorithms getting better in solving problems. Take for instance the development of AI search algorithms which support applications like Google Maps by finding the fastest way or shortest route between given destinations. These programs basically conduct the search through various options to find the best solution possible.

Many AI functions can be designed as search oscillations, which thus specify what to look for in formulating the solution of the given problem.

Goal-directed, high-performance agents use a wide range of search algorithms to improve the efficiency of AI. These agents search for the ideal sequence of actions to solve a problem while avoiding costly or damaging steps. Their main aim is to come up with an optimal solution that takes all relevant factors into account.

Search algorithms also help production systems run efficiently. These rule-based systems assist AI applications by applying rules and methods, making effective implementation possible. A production system searches through its stored rules to find those that lead to the desired action.

Beyond this, search algorithms are also important for neural network systems. These systems are composed of an input layer, a hidden layer, an output layer, and interconnected nodes. Training a neural network can be viewed as navigating a search space to find the connection weights required to map inputs to outputs, and search algorithms in AI make this process more effective.

Uninformed search does not use any domain knowledge, such as the closeness or location of the goal. It operates in a brute-force way, as it only includes information about how to traverse the tree and how to identify leaf and goal nodes. Because the search tree is explored without any information about the search space beyond the initial state, the operators, and the goal test, uninformed search is also called blind search. It examines each node of the tree until it reaches the goal node.
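As a minimal sketch of such a blind, brute-force strategy, breadth-first search examines states level by level until the goal is found; the tiny adjacency-dict graph here is a hypothetical example:

```python
from collections import deque

def bfs(graph, start, goal):
    """Blind search: expand nodes level by level until the goal is found."""
    frontier = deque([[start]])          # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:                 # goal test on the current state
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:  # avoid revisiting states
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None                          # no solution exists

# Hypothetical search space: each key lists the states reachable from it.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": [], "E": []}
print(bfs(graph, "A", "E"))  # prints ['A', 'C', 'E']
```

Because the frontier is a FIFO queue, the first path that reaches the goal is also one with the fewest moves.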

Informed search algorithms use domain knowledge. In an informed search, problem-specific information is available to guide the search, so informed search strategies can find a solution more efficiently than uninformed ones. Informed search is also called heuristic search.

A heuristic is a technique that is not guaranteed to find the best solution, but is expected to find a good solution in a reasonable time.

Informed search can solve much more complex problems than could be solved otherwise.

A classic problem tackled with informed search algorithms is the traveling salesman problem.






problem solving using search algorithms

Data Science Central


Using Uninformed & Informed Search Algorithms to Solve 8-Puzzle (n-Puzzle) in Python

SandipanDey

  • July 6, 2017 at 3:30 am

This problem appeared as a project in the  edX course ColumbiaX: CSMM.101x Artificial Intelligence (AI) . In this assignment an agent will be implemented to solve the 8-puzzle game (and the game generalized to an n × n array).

The following description of the problem is taken from the course:

I. Introduction

An instance of the  n-puzzle game  consists of a board holding n^2−1 distinct movable tiles, plus an empty space. The tiles are numbered from the set {1,..,n^2−1}. For any such board, the empty space may be legally swapped with any tile horizontally or vertically adjacent to it. In this assignment, the blank space is represented by the number 0. Given an initial state of the board, the combinatorial search problem is to find a sequence of moves that transitions this state to the goal state; that is, the configuration with all tiles arranged in ascending order 0,1,…,n^2−1. The search space is the set of all possible states reachable from the initial state. The blank space may be swapped with an adjacent tile in one of the four directions  {‘Up’, ‘Down’, ‘Left’, ‘Right’} , one move at a time. The cost of moving from one configuration of the board to another is the same and equal to one. Thus, the total cost of a path is equal to the number of moves made from the initial state to the goal state.

II. Algorithm Review

The searches begin by visiting the root node of the search tree, given by the initial state. Among other book-keeping details, three major things happen in sequence in order to visit a node:

  • First, we remove a node from the frontier set.
  • Second, we check the state against the goal state to determine if a solution has been found.
  • Finally, if the result of the check is negative, we then expand the node. To expand a given node, we generate successor nodes adjacent to the current node, and add them to the frontier set. Note that if these successor nodes are already in the frontier, or have already been visited, then they should not be added to the frontier again.

This describes the life-cycle of a visit, and is the basic order of operations for search agents in this assignment—(1) remove, (2) check, and (3) expand. In this assignment, we will implement algorithms as described here.
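The remove / check / expand cycle above can be sketched as a generic loop; the `goal_test` and `successors` callables are stand-ins for a concrete problem's goal test and transition model:

```python
from collections import deque

def graph_search(initial, goal_test, successors):
    """Generic remove-check-expand loop (BFS order via a FIFO frontier)."""
    frontier = deque([initial])
    frontier_set = {initial}   # fast membership test for the frontier
    explored = set()
    while frontier:
        state = frontier.popleft()       # (1) remove a node from the frontier
        frontier_set.discard(state)
        if goal_test(state):             # (2) check the state against the goal
            return state
        explored.add(state)
        for nxt in successors(state):    # (3) expand: generate successor nodes
            # skip nodes already in the frontier or already visited
            if nxt not in explored and nxt not in frontier_set:
                frontier.append(nxt)
                frontier_set.add(nxt)
    return None

# Hypothetical toy problem: reach 5 from 0, where each state n yields n+1 and n+2.
print(graph_search(0, lambda s: s == 5, lambda s: [s + 1, s + 2] if s < 10 else []))
```

Swapping the FIFO deque for a stack gives DFS, and for a priority queue gives A*; the life-cycle itself is unchanged.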

III. What the Program Needs to Output

Example: breadth-first search.

im1.png

The output file should contain exactly the following lines:

path_to_goal: ['Up', 'Left', 'Left']
cost_of_path: 3
nodes_expanded: 10
fringe_size: 11
max_fringe_size: 12
search_depth: 3
max_search_depth: 4
running_time: 0.00188088
max_ram_usage: 0.07812500

The following algorithms are going to be implemented and taken from the lecture slides from the same course.

im2.png

The following figures and animations show how the 8-puzzle was solved starting from different initial states with different algorithms. For A* and ID-A* search we are going to use  Manhattan heuristic , which is an  admissible heuristic  for this problem. Also, the figures display the search paths from starting state to the goal node (the states with red text denote the path chosen). Let’s start with a very simple example. As can be seen, with this simple example all the algorithms find the same path to the goal node from the initial state.
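As a sketch (assuming the board is stored as a flat tuple, with tile t's goal cell at index t, matching this course's goal state of 0,1,…,n^2−1), the Manhattan heuristic might look like:

```python
def manhattan(board, n):
    """Sum of vertical + horizontal distances of each tile from its goal cell.

    Goal state: tile t sits at flat index t (0 is the blank, top-left).
    The blank is excluded so the heuristic stays admissible.
    """
    dist = 0
    for index, tile in enumerate(board):
        if tile == 0:
            continue
        row, col = divmod(index, n)           # current position
        goal_row, goal_col = divmod(tile, n)  # goal position (tile t -> index t)
        dist += abs(row - goal_row) + abs(col - goal_col)
    return dist

# Example 1's initial state from the text: 1,2,5,3,4,0,6,7,8
print(manhattan((1, 2, 5, 3, 4, 0, 6, 7, 8), 3))  # prints 3
```

For this state the heuristic equals the true solution cost of 3, consistent with the `cost_of_path: 3` shown in the sample output above.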

Example 1: Initial State: 1,2,5,3,4,0,6,7,8

b_b0.gv.png

The nodes  expanded  by  BFS  (also the nodes that are in the  fringe  /  frontier  of the queue) are shown in the following figure:

fulltree_bfs.png

The  path  to the  goal  node (as well as the nodes expanded) with  ID-A*  is shown in the following figure:

b_i0.gv.png

Now let’s try some slightly more complex examples:

Example 2: Initial State: 1,4,2,6,5,8,7,3,0

The  path  to the  goal  node with  A*  is shown in the following figure:

board8.gv.png

All the nodes  expanded  by  A*  (also the nodes that are in the  fringe  /  frontier  of the queue) are shown in the following figure:

fulltree_ast.png

The  path  to the  goal  node with  BFS  is shown in the following figure:

board8.gv.png

All the nodes  expanded  by  BFS  are shown in the following figure:

(figure omitted)

Example 3: Initial State: 1,0,2,7,5,4,8,6,3

The  path  to the  goal  node with  A*  is shown in the following figures:

b_a4_1.gv

The nodes  expanded  by  A*  (also the nodes that are in the  fringe  /  frontier  of the priority queue) are shown in the following figure (the tree is huge, use  zoom  to view it properly):

(figure omitted)

The nodes  expanded  by  ID-A*  are shown in the following figure (again the tree is huge, use  zoom  to view it properly):

(figure omitted)

The same problem (with a little variation) also appeared as a programming exercise in the  Coursera course Algorithms, Part I  (by Prof.  Robert Sedgewick ,  Princeton ). The description of the problem taken from the assignment is shown below (notice that the  goal state  is  different  in this version of the same problem):

Write a program to solve the 8-puzzle problem (and its natural generalizations) using the A* search algorithm.

im1

  • Hamming priority function.  The number of blocks in the wrong position, plus the number of moves made so far to get to the state. Intuitively, a state with a small number of blocks in the wrong position is close to the goal state, and we prefer a state that has been reached using a small number of moves.
  • Manhattan priority function.  The sum of the distances (sum of the vertical and horizontal distance) from the blocks to their goal positions, plus the number of moves made so far to get to the state.
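For this Coursera variant (goal: tiles 1,..,n^2−1 in order with the blank last), the two priority functions might be sketched as follows; the flat-tuple board representation is an assumption:

```python
def hamming_priority(board, moves):
    """Blocks out of place (blank excluded) plus moves made so far."""
    wrong = sum(1 for index, tile in enumerate(board)
                if tile != 0 and tile != index + 1)  # goal: tile t at index t-1
    return wrong + moves

def manhattan_priority(board, moves, n):
    """Sum of tile distances to their goal cells plus moves made so far."""
    dist = 0
    for index, tile in enumerate(board):
        if tile == 0:
            continue
        row, col = divmod(index, n)               # current position
        goal_row, goal_col = divmod(tile - 1, n)  # goal: tile t at index t-1
        dist += abs(row - goal_row) + abs(col - goal_col)
    return dist + moves

goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
print(hamming_priority(goal, 0), manhattan_priority(goal, 0, 3))  # prints 0 0
```

In an A* frontier, nodes are popped in increasing order of these priorities; only the goal indexing differs from the edX version above.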

im2.png

(2)  The following  15-puzzle  is solvable in  6 steps , as shown below:

board6.png


Problem-Solving Approaches in Data Structures and Algorithms

This blog highlights some popular problem-solving strategies for solving problems in DSA. Learning to apply these strategies could be one of the best milestones for learners in mastering data structures and algorithms.

Top 10 problem solving techniques in data structures and algorithms

An Incremental approach using Single and Nested loops

One of the simple ideas of our daily problem-solving activities is that we build the partial solution step by step using a loop. There are several variations of it:

  • Input-centric strategy: At each iteration step, we process one input and build the partial solution.
  • Output-centric strategy: At each iteration step, we add one output to the solution and build the partial solution.
  • Iterative improvement strategy: Here, we start with some easily available approximations of a solution and continuously improve upon it to reach the final solution.

Here are some loop-based approaches: using a single loop and variables, using nested loops and variables, incrementing the loop counter by a constant (more than 1), traversing the loop twice (double traversal), using a single loop with a prefix array (or extra memory), etc.

Example problems:   Insertion Sort ,  Finding max and min in an array ,  Valid mountain array ,  Find equilibrium index of an array ,  Dutch national flag problem ,  Sort an array in a waveform .
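For instance, finding the max and min fits the input-centric pattern: process one element per iteration while maintaining the partial answer (a minimal sketch):

```python
def find_max_min(arr):
    """Input-centric single loop: extend the partial (max, min) one item at a time."""
    largest = smallest = arr[0]
    for x in arr[1:]:          # process one input per iteration
        if x > largest:
            largest = x
        elif x < smallest:
            smallest = x
    return largest, smallest

print(find_max_min([3, 9, -2, 7, 0]))  # prints (9, -2)
```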

Decrease and Conquer Approach

This strategy is based on finding the solution to a given problem via the solution of one of its sub-problems. Such an approach leads naturally to a recursive algorithm, which reduces the problem to a sequence of smaller input sizes until the input becomes small enough to be solved directly, i.e., it reaches the recursion’s base case.

Example problems:   Euclid algorithm of finding GCD ,  Binary Search ,  Josephus problem
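Euclid's GCD algorithm is the textbook decrease-and-conquer example: each call reduces the problem to a single smaller instance until the base case is reached:

```python
def gcd(a, b):
    """Decrease and conquer: gcd(a, b) reduces to the smaller problem gcd(b, a % b)."""
    if b == 0:            # base case: small enough to solve directly
        return a
    return gcd(b, a % b)  # one smaller sub-problem

print(gcd(48, 18))  # prints 6
```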

Problem-solving using Binary Search

When an array has some order property similar to the sorted array, we can use the binary search idea to solve several searching problems efficiently in O(logn) time complexity. For doing this, we need to modify the standard binary search algorithm based on the conditions given in the problem. The core idea is simple: calculate the mid-index and iterate over the left or right half of the array.

Problem-solving using binary search visualization

Example problems: Find Peak Element , Search a sorted 2D matrix , Find the square root of an integer , Search in Rotated Sorted Array
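As one sketch of the modified-binary-search idea, the integer square root can be found by binary-searching the answer range instead of a stored array:

```python
def isqrt(x):
    """Binary search over the answer range: largest r with r*r <= x."""
    low, high, ans = 0, x, 0
    while low <= high:
        mid = (low + high) // 2
        if mid * mid <= x:
            ans = mid          # mid is feasible; try the right half
            low = mid + 1
        else:
            high = mid - 1     # mid overshoots; try the left half
    return ans

print(isqrt(17))  # prints 4
```

The "order property" here is monotonicity of r*r, which plays the role of the sorted array.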

Divide and Conquer Approach

This strategy is about dividing a problem into  more than one subproblems,  solving each of them, and then, if necessary, combining their solutions to get a solution to the original problem. We solve many fundamental problems efficiently in computer science by using this strategy.

Divide and conquer approach visualization

Example problems:   Merge Sort ,  Quick Sort ,  Median of two sorted arrays
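Merge sort is the canonical illustration: divide the array into two halves, conquer each recursively, and combine by merging (a minimal sketch):

```python
def merge_sort(arr):
    """Divide the array, conquer each half recursively, combine by merging."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # divide + conquer
    right = merge_sort(arr[mid:])
    merged, i, j = [], 0, 0         # combine: merge two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 8, 1, 9, 3]))  # prints [1, 2, 3, 5, 8, 9]
```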

Two Pointers Approach

The two-pointer approach helps us optimize time and space complexity in the case of many searching problems on arrays and linked lists. Here pointers can be pairs of array indices or pointer references to an object. This approach aims to simultaneously iterate over two different input parts to perform fewer operations. There are three variations of this approach:

Pointers are moving in the same direction with the same pace:   Merging two sorted arrays or linked lists, Finding the intersection of two arrays or linked lists , Checking an array is a subset of another array , etc.

Pointers are moving in the same direction at a different pace (Fast and slow pointers):   Partition process in the quick sort , Remove duplicates from the sorted array , Find the middle node in a linked list , Detect loop in a linked list , Move all zeroes to the end , Remove nth node from list end , etc.

Pointers are moving in the opposite direction:  Reversing an array, Check pair sum in an array , Finding triplet with zero-sum , Rainwater trapping problem , Container with most water , etc.
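The opposite-direction variation can be sketched with the pair-sum problem on a sorted array:

```python
def has_pair_sum(sorted_arr, target):
    """Opposite-direction pointers: narrow the window based on the current sum."""
    left, right = 0, len(sorted_arr) - 1
    while left < right:
        s = sorted_arr[left] + sorted_arr[right]
        if s == target:
            return True
        if s < target:
            left += 1       # need a larger sum: move the left pointer right
        else:
            right -= 1      # need a smaller sum: move the right pointer left
    return False

print(has_pair_sum([1, 3, 4, 6, 8], 10))  # prints True (4 + 6)
```

Each iteration discards one element for good, so the search runs in O(n) time and O(1) space instead of the nested-loop O(n^2).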

Two pointers approach visualization

Sliding Window Approach

A sliding window concept is commonly used in solving array/string problems. Here, the window is a contiguous sequence of elements defined by the start and ends indices. We perform some operations on elements within the window and “slide” it in a forward direction by incrementing the left or right end.

This approach can be effective whenever the problem consists of tasks that must be performed on a contiguous block of a fixed or variable size. This could help us improve time complexity in so many problems by converting the nested loop solution into a single loop solution.

Example problems: Longest substring without repeating characters , Count distinct elements in every window , Max continuous series of 1s , Find max consecutive 1's in an array , etc.
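A minimal sketch of a fixed-size window (maximum window sum, a hypothetical example problem): instead of re-summing each window with nested loops, we add the entering element and drop the leaving one:

```python
def max_window_sum(arr, k):
    """Fixed-size sliding window: slide right one step at a time, updating the
    running sum instead of recomputing it, turning O(n*k) into O(n)."""
    window = sum(arr[:k])   # sum of the first window
    best = window
    for right in range(k, len(arr)):
        window += arr[right] - arr[right - k]  # slide the window one step
        best = max(best, window)
    return best

print(max_window_sum([2, 1, 5, 1, 3, 2], 3))  # prints 9 (window 5 + 1 + 3)
```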

Transform and Conquer Approach

This approach is based on transforming a coding problem into another coding problem with some particular property that makes it easier to solve. In other words, the problem is solved in two stages:

  • Transformation stage: We transform the original problem into another easier problem to solve.
  • Conquering stage: Now, we solve the transformed problem.

Example problems: Pre-sorting based algorithms (Finding the closest pair of points, checking whether all the elements in a given array are distinct, etc.)
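A sketch of the pre-sorting idea for the distinctness check: sorting transforms the input so any duplicates become adjacent, and a single scan conquers the transformed problem:

```python
def all_distinct(arr):
    """Transform (sort) so duplicates become adjacent, then conquer with one scan."""
    s = sorted(arr)                  # transformation stage: O(n log n)
    return all(s[i] != s[i + 1]      # conquering stage: check neighbors only
               for i in range(len(s) - 1))

print(all_distinct([4, 1, 7, 3]))   # prints True
print(all_distinct([4, 1, 7, 1]))   # prints False
```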

Problem-solving using BFS and DFS Traversal

Most tree and graph problems can be solved using DFS and BFS traversal. If the problem is to search for something closer to the root (or source node), we can prefer BFS, and if we need to search for something in-depth, we can choose DFS.

Sometimes, we can use both BFS and DFS traversals when node order is not required. But in some cases, such things are not possible. We need to identify the use case of both traversals to solve the problems efficiently. For example, in binary tree problems:

  • We use preorder traversal in a situation when we need to explore all the tree nodes before inspecting any leaves.
  • Inorder traversal of BST generates the node's data in increasing order. So we can use inorder to solve several BST problems.
  • We can use postorder traversal when we need to explore all the leaf nodes before inspecting any internal nodes.
  • Sometimes, we need some specific information about some level. In this situation, BFS traversal helps us to find the output easily.

BFS and DFS traversal visualization

To solve tree and graph problems, sometimes we pass extra variables or pointers to the function parameters, use helper functions, use parent pointers, store some additional data inside the node, and use data structures like the stack, queue, and priority queue, etc.

Example problems: Find min depth of a binary tree , Merge two binary trees , Find the height of a binary tree , Find the absolute minimum difference in a BST , The kth largest element in a BST , Course scheduling problem , bipartite graph , Find the left view of a binary tree , etc.
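As a small sketch of the inorder-traversal insight above, an inorder walk of a BST emits the keys in increasing order (the tiny `Node` class is a hypothetical helper):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def inorder(root, out):
    """DFS inorder: left subtree, node, right subtree; a BST yields sorted keys."""
    if root:
        inorder(root.left, out)
        out.append(root.key)
        inorder(root.right, out)
    return out

# Hypothetical BST:   5
#                    / \
#                   3   8
root = Node(5, Node(3), Node(8))
print(inorder(root, []))  # prints [3, 5, 8]
```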

Problem-solving using the Data Structures

The data structure is one of the powerful tools of problem-solving in algorithms. It helps us perform some of the critical operations efficiently and improves the time complexity of the solution. Here are some of the key insights:

  • Many coding problems require an efficient way to perform the search, insert and delete operations. We can perform all these operations using the hash table in O(1) average time. It's a kind of time-memory tradeoff, where we use extra space to store elements in the hash table to improve performance.
  • Sometimes we need to store data in the stack (LIFO order) or queue (FIFO order) to solve several coding problems. 
  • Suppose there is a requirement to continuously insert or remove maximum or minimum element (Or element with min or max priority). In that case, we can use a heap (or priority queue) to solve the problem efficiently.
  • Sometimes, we store data in Trie, AVL Tree, Segment Tree, etc., to perform some critical operations efficiently. 

Various types of data structures in programming

Example problems: Next greater element , Valid Parentheses , Largest rectangle in a histogram , Sliding window maximum , kth smallest element in an array , Top k frequent elements , Longest common prefix , Range sum query , Longest consecutive sequence , Check equal array , LFU cache , LRU cache , Counting sort
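A sketch of the next-greater-element problem using a monotonic stack, one of the classic stack applications above:

```python
def next_greater(arr):
    """Monotonic stack: pop smaller elements when a greater one arrives; O(n)."""
    result = [-1] * len(arr)         # -1 means no greater element to the right
    stack = []                       # indices of elements awaiting an answer
    for i, x in enumerate(arr):
        while stack and arr[stack[-1]] < x:
            result[stack.pop()] = x  # x is the next greater for those indices
        stack.append(i)
    return result

print(next_greater([4, 5, 2, 25]))  # prints [5, 25, 25, -1]
```

Each index is pushed and popped at most once, which is how the stack turns the naive O(n^2) scan into a linear pass.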

Dynamic Programming

Dynamic programming is one of the most popular techniques for solving problems with overlapping or repeated subproblems. Here rather than solving overlapping subproblems repeatedly, we solve each smaller subproblems only once and store the results in memory. We can solve a lot of optimization and counting problems using the idea of dynamic programming.

Dynamic programming idea

Example problems: Finding nth Fibonacci,  Longest Common Subsequence ,  Climbing Stairs Problem ,  Maximum Subarray Sum ,  Minimum number of Jumps to reach End ,  Minimum Coin Change
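A minimal memoization sketch with the nth Fibonacci number: each overlapping subproblem is solved once and its result cached in memory:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Top-down DP: overlapping subproblems are computed once and memoized,
    turning the exponential naive recursion into O(n)."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # prints 55
```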

Greedy Approach

This approach solves an optimization problem by expanding a partially constructed solution until a complete solution is reached. We take a greedy choice at each step and add it to the partially constructed solution. The idea is to produce a globally optimal solution without violating the problem’s constraints.

  • The greedy choice, the best alternative available at each step, is made in the hope that a sequence of locally optimal choices will yield a (globally) optimal solution to the entire problem.
  • This approach works in some cases but fails in others. Usually, it is not difficult to design a greedy algorithm itself, but a more difficult task is to prove that it produces an optimal solution.

Example problems: Fractional Knapsack, Dijkstra algorithm, The activity selection problem
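A sketch of the activity selection problem, where the greedy choice is to always pick the compatible activity that finishes earliest (the (start, finish) tuple format is an assumption):

```python
def select_activities(intervals):
    """Greedy choice: repeatedly pick the compatible activity that finishes earliest."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:      # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9)]))
# prints [(1, 4), (5, 7)]
```

For this problem the earliest-finish rule is provably optimal, which, as the text notes, is usually the hard part of designing a greedy algorithm.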

Exhaustive Search

This strategy explores all candidate solutions until a solution to the problem is found. Because of its cost, it is rarely a practical way for a person to solve a problem by hand.

The most important limitation of exhaustive search is its inefficiency. As a rule, the number of solution candidates that need to be processed grows at least exponentially with the problem size, making the approach inappropriate not only for a human but often for a computer as well.

But in some situations, there is a need to explore all possible solution spaces in a coding problem. For example: Find all permutations of a string , Print all subsets , etc.

Backtracking

Backtracking is an improvement over exhaustive search. It is a method for generating a solution while pruning candidate solutions that cannot lead to a valid answer. The main idea is to build a solution one piece at a time and evaluate each partial solution as follows:

  • If a partial solution can be developed further without violating the problem’s constraints, it is done by taking the first remaining valid option at the next stage. ( Think! )
  • If there is no valid option at the next stage, i.e., the problem’s constraints would be violated, the algorithm backtracks to replace the last component of the partial solution with the next option for that stage. ( Think! )

Backtracking solution of 4-queen problem

In simple words, backtracking involves undoing several wrong choices — the smaller this number, the faster the algorithm finds a solution. In the worst-case scenario, a backtracking algorithm may end up generating all the solutions as an exhaustive search, but this rarely happens!

Example problems: N-queen problem , Find all k combinations , Combination sum , Sudoku solver , etc.
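A minimal backtracking sketch that generates all subsets: choose an element, explore that branch, then undo the choice and try the next option:

```python
def subsets(items):
    """Backtracking: build a partial solution one choice at a time, then
    undo (pop) the last choice and try the next option."""
    result = []

    def backtrack(start, partial):
        result.append(partial[:])      # record the current partial solution
        for i in range(start, len(items)):
            partial.append(items[i])   # choose
            backtrack(i + 1, partial)  # explore
            partial.pop()              # undo the choice (backtrack)

    backtrack(0, [])
    return result

print(subsets([1, 2]))  # prints [[], [1], [1, 2], [2]]
```

Here no branch is ever pruned, so all 2^n subsets are generated; a constraint check before the recursive call is what turns this template into a true pruning search such as the N-queen solver.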

Problem-solving using Bit Manipulation and Number Theory

Some of the coding problems are, by default, mathematical, but sometimes we need to identify the hidden mathematical properties inside the problem. So the idea of number theory and bit manipulation is helpful in so many cases.

Sometimes understanding the bit pattern of the input and processing data at the bit level helps us design an efficient solution. The best part is that the computer performs each bit-wise operation in constant time. Bit manipulation can sometimes even remove the need for extra loops and improve performance by a considerable margin.

Example problems: Reverse bits , Add binary string , Check the power of two , Find the missing number , etc.
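A small sketch of the power-of-two check: a power of two has exactly one set bit, so clearing the lowest set bit with `n & (n - 1)` must give zero:

```python
def is_power_of_two(n):
    """n & (n - 1) clears the lowest set bit; a power of two has only one."""
    return n > 0 and (n & (n - 1)) == 0

print(is_power_of_two(64))  # prints True
print(is_power_of_two(96))  # prints False
```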

Hope you enjoyed the blog. Later we will write a separate blog on each problem-solving approach. Enjoy learning, Enjoy algorithms!



Problem Solving with Algorithms and Data Structures using Python ¶

PythonDS Cover

By Brad Miller and David Ranum, Luther College

There is a wonderful collection of YouTube videos recorded by Gerry Jenkins to support all of the chapters in this text.

  • 1.1. Objectives
  • 1.2. Getting Started
  • 1.3. What Is Computer Science?
  • 1.4. What Is Programming?
  • 1.5. Why Study Data Structures and Abstract Data Types?
  • 1.6. Why Study Algorithms?
  • 1.7. Review of Basic Python
  • 1.8.1. Built-in Atomic Data Types
  • 1.8.2. Built-in Collection Data Types
  • 1.9.1. String Formatting
  • 1.10. Control Structures
  • 1.11. Exception Handling
  • 1.12. Defining Functions
  • 1.13.1. A Fraction Class
  • 1.13.2. Inheritance: Logic Gates and Circuits
  • 1.14. Summary
  • 1.15. Key Terms
  • 1.16. Discussion Questions
  • 1.17. Programming Exercises
  • 2.1.1. A Basic implementation of the MSDie class
  • 2.2. Making your Class Comparable
  • 3.1. Objectives
  • 3.2. What Is Algorithm Analysis?
  • 3.3. Big-O Notation
  • 3.4.1. Solution 1: Checking Off
  • 3.4.2. Solution 2: Sort and Compare
  • 3.4.3. Solution 3: Brute Force
  • 3.4.4. Solution 4: Count and Compare
  • 3.5. Performance of Python Data Structures
  • 3.7. Dictionaries
  • 3.8. Summary
  • 3.9. Key Terms
  • 3.10. Discussion Questions
  • 3.11. Programming Exercises
  • 4.1. Objectives
  • 4.2. What Are Linear Structures?
  • 4.3. What is a Stack?
  • 4.4. The Stack Abstract Data Type
  • 4.5. Implementing a Stack in Python
  • 4.6. Simple Balanced Parentheses
  • 4.7. Balanced Symbols (A General Case)
  • 4.8. Converting Decimal Numbers to Binary Numbers
  • 4.9.1. Conversion of Infix Expressions to Prefix and Postfix
  • 4.9.2. General Infix-to-Postfix Conversion
  • 4.9.3. Postfix Evaluation
  • 4.10. What Is a Queue?
  • 4.11. The Queue Abstract Data Type
  • 4.12. Implementing a Queue in Python
  • 4.13. Simulation: Hot Potato
  • 4.14.1. Main Simulation Steps
  • 4.14.2. Python Implementation
  • 4.14.3. Discussion
  • 4.15. What Is a Deque?
  • 4.16. The Deque Abstract Data Type
  • 4.17. Implementing a Deque in Python
  • 4.18. Palindrome-Checker
  • 4.19. Lists
  • 4.20. The Unordered List Abstract Data Type
  • 4.21.1. The Node Class
  • 4.21.2. The Unordered List Class
  • 4.22. The Ordered List Abstract Data Type
  • 4.23.1. Analysis of Linked Lists
  • 4.24. Summary
  • 4.25. Key Terms
  • 4.26. Discussion Questions
  • 4.27. Programming Exercises
  • 5.1. Objectives
  • 5.2. What Is Recursion?
  • 5.3. Calculating the Sum of a List of Numbers
  • 5.4. The Three Laws of Recursion
  • 5.5. Converting an Integer to a String in Any Base
  • 5.6. Stack Frames: Implementing Recursion
  • 5.7. Introduction: Visualizing Recursion
  • 5.8. Sierpinski Triangle
  • 5.9. Complex Recursive Problems
  • 5.10. Tower of Hanoi
  • 5.11. Exploring a Maze
  • 5.12. Dynamic Programming
  • 5.13. Summary
  • 5.14. Key Terms
  • 5.15. Discussion Questions
  • 5.16. Glossary
  • 5.17. Programming Exercises
  • 6.1. Objectives
  • 6.2. Searching
  • 6.3.1. Analysis of Sequential Search
  • 6.4.1. Analysis of Binary Search
  • 6.5.1. Hash Functions
  • 6.5.2. Collision Resolution
  • 6.5.3. Implementing the Map Abstract Data Type
  • 6.5.4. Analysis of Hashing
  • 6.6. Sorting
  • 6.7. The Bubble Sort
  • 6.8. The Selection Sort
  • 6.9. The Insertion Sort
  • 6.10. The Shell Sort
  • 6.11. The Merge Sort
  • 6.12. The Quick Sort
  • 6.13. Summary
  • 6.14. Key Terms
  • 6.15. Discussion Questions
  • 6.16. Programming Exercises
  • 7.1. Objectives
  • 7.2. Examples of Trees
  • 7.3. Vocabulary and Definitions
  • 7.4. List of Lists Representation
  • 7.5. Nodes and References
  • 7.6. Parse Tree
  • 7.7. Tree Traversals
  • 7.8. Priority Queues with Binary Heaps
  • 7.9. Binary Heap Operations
  • 7.10.1. The Structure Property
  • 7.10.2. The Heap Order Property
  • 7.10.3. Heap Operations
  • 7.11. Binary Search Trees
  • 7.12. Search Tree Operations
  • 7.13. Search Tree Implementation
  • 7.14. Search Tree Analysis
  • 7.15. Balanced Binary Search Trees
  • 7.16. AVL Tree Performance
  • 7.17. AVL Tree Implementation
  • 7.18. Summary of Map ADT Implementations
  • 7.19. Summary
  • 7.20. Key Terms
  • 7.21. Discussion Questions
  • 7.22. Programming Exercises
  • 8.1. Objectives
  • 8.2. Vocabulary and Definitions
  • 8.3. The Graph Abstract Data Type
  • 8.4. An Adjacency Matrix
  • 8.5. An Adjacency List
  • 8.6. Implementation
  • 8.7. The Word Ladder Problem
  • 8.8. Building the Word Ladder Graph
  • 8.9. Implementing Breadth First Search
  • 8.10. Breadth First Search Analysis
  • 8.11. The Knight’s Tour Problem
  • 8.12. Building the Knight’s Tour Graph
  • 8.13. Implementing Knight’s Tour
  • 8.14. Knight’s Tour Analysis
  • 8.15. General Depth First Search
  • 8.16. Depth First Search Analysis
  • 8.17. Topological Sorting
  • 8.18. Strongly Connected Components
  • 8.19. Shortest Path Problems
  • 8.20. Dijkstra’s Algorithm
  • 8.21. Analysis of Dijkstra’s Algorithm
  • 8.22. Prim’s Spanning Tree Algorithm
  • 8.23. Summary
  • 8.24. Key Terms
  • 8.25. Discussion Questions
  • 8.26. Programming Exercises

Acknowledgements ¶

We are very grateful to Franklin Beedle Publishers for allowing us to make this interactive textbook freely available. This online version is dedicated to the memory of our first editor, Jim Leisy, who wanted us to “change the world.”


Information

  • Author Services

Initiatives

You are accessing a machine-readable page. In order to be human-readable, please install an RSS reader.


An Improved Iterated Greedy Algorithm for Solving Collaborative Helicopter Rescue Routing Problem with Time Window and Limited Survival Time


1. Introduction

2. Literature Review

3. Problem Formulation

3.1. Problem Description

3.2. Problem Formulation

4. Improved Iterative Greedy Algorithm

4.1. Framework of the IIG

The framework of the proposed IIG algorithm
Input: the information on rescue locations
Output: the best solution X*

    Generate an initial solution X by using the initialization strategy
    while the termination condition is not satisfied do
        X = Destruction(X)
        X′ = Construction(X, X)
        X* = Local search(X′)          // X* is the optimal solution found up to now
        X = Acceptance criterion(X, X*)
        Temperature = T × sum(u) / (n × m × 10)
        if X′ < X* then
            X* = X′
        end if
    end while
    return the best solution X*
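Stripped of the listing formatting, the iterated-greedy loop above (destruction, construction, local search, and an SA-style acceptance) can be sketched in Python. The callables `destroy`, `construct`, `local_search`, and `cost` are illustrative stand-ins for the paper's problem-specific operators, not its actual interface:

```python
import math
import random

def iterated_greedy(initial, destroy, construct, local_search, cost,
                    temperature, max_iters=100, rng=None):
    """Generic iterated-greedy loop with an SA-style acceptance criterion.

    All four callables are problem-specific stand-ins: `destroy` returns a
    partial solution plus the removed part, `construct` repairs it,
    `local_search` improves it, and `cost` evaluates it.
    """
    rng = rng or random.Random(0)
    x = initial
    best = x
    for _ in range(max_iters):
        partial, removed = destroy(x)
        candidate = construct(partial, removed)
        candidate = local_search(candidate)
        # keep the best solution found so far
        if cost(candidate) < cost(best):
            best = candidate
        # SA-style acceptance: always take improvements, otherwise
        # accept with probability exp(-delta / temperature)
        delta = cost(candidate) - cost(x)
        if delta < 0 or rng.random() < math.exp(-delta / temperature):
            x = candidate
    return best
```

With a toy setup where "construction" just moves an integer solution one step toward zero and `cost` is the absolute value, the loop converges to the optimum 0.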

4.2. Solution Representation

4.3. Initialization Strategy

Heuristic initialization strategy
Input: instance information
Output: an initial solution

    Arrange all rescue sites in ascending order based on survivors' life strength and place them into set S
    for each site i within the set S do
        for each helicopter k do
            for each position u do
                if rescue site i can be added to the current position u then
                    Store the position u into set S
                end if
            end for
        end for
        Choose the optimal position from the set S and add site i to the position u
        if the insert operation of the site i failed then
            Add a new helicopter and allocate the site i to this helicopter
        end if
    end for
    Store the generated solution
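The initialization above is a cheapest-feasible-insertion heuristic: sites are processed weakest-first, each is placed at the best feasible position in any existing route, and a new helicopter is opened only when no feasible position exists. A minimal sketch, where `feasible` and `insertion_cost` are hypothetical stand-ins for the paper's time-window, capacity, and life-strength checks:

```python
def greedy_initialize(sites, feasible, insertion_cost):
    """Build initial routes by cheapest feasible insertion.

    sites: (site_id, life_strength) pairs; weakest survivors are served first.
    feasible(routes, k, u, site): may `site` go at position u of route k?
    insertion_cost(routes, k, u, site): cost of that insertion.
    Both predicates are illustrative stand-ins for the real constraints.
    """
    routes = []
    for site, _ in sorted(sites, key=lambda s: s[1]):  # ascending life strength
        candidates = []
        for k, route in enumerate(routes):
            for u in range(len(route) + 1):
                if feasible(routes, k, u, site):
                    candidates.append((insertion_cost(routes, k, u, site), k, u))
        if candidates:
            _, k, u = min(candidates)   # cheapest feasible position
            routes[k].insert(u, site)
        else:
            routes.append([site])       # allocate a new helicopter
    return routes
```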

4.4. Feasible-First Destruction and Construction Strategy

Feasible-first destruction–construction strategy
Input: a solution
Output: a new solution

    Add each rescue site to set S
    Select one rescue site i randomly from S and add this rescue site into set S
    Delete the chosen rescue site i from S
    for each rescue site j in S do
        Calculate the distance for each arc a(i, j)
    end for
    Sort the set S in ascending order based on all arcs a(i, j)
    Move (n × d − 1) rescue sites ahead of S to S
    Delete set S from the current solution
    for each rescue site i in the set S do
        for each helicopter k do
            for each position u do
                if site i can be added to the current position u then
                    Store the position u into set S
                end if
            end for
        end for
        Select the optimal position from the set S and add site i to the position u
        if the insertion of the site i failed then
            Add a new helicopter and allocate the site i to this helicopter
        end if
    end for
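The destruction phase above is a related-removal move: pick one seed site at random, then remove it together with its nearest neighbours until a fraction d of all n sites has been taken out (the construction phase then reinserts them greedily, as in the initialization). A sketch of just the destruction step, with `dist` as an illustrative distance callable:

```python
import random

def destroy_related(solution, dist, d, rng=None):
    """Remove a random seed site plus its nearest neighbours, a fraction d
    of the n sites in total. `solution` is a list of routes; `dist(i, j)`
    is a problem-specific distance stand-in."""
    rng = rng or random.Random(0)
    sites = [s for route in solution for s in route]
    n = len(sites)
    seed = rng.choice(sites)
    # sort the remaining sites by distance to the seed, nearest first
    others = sorted((s for s in sites if s != seed), key=lambda j: dist(seed, j))
    removed = [seed] + others[: max(int(n * d) - 1, 0)]
    remaining = [[s for s in route if s not in removed] for route in solution]
    remaining = [r for r in remaining if r]   # drop emptied routes
    return remaining, removed
```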

4.5. Problem-Specific Local Search Strategy

The local search strategy
Input: a solution
Output: an improved solution

    r = rand() % 3, cnt = 0
    switch r
        case 0:
            while cnt < m do
                Randomly select a helicopter k
                Randomly select a rescue site i in the helicopter k
                for each of the other rescue sites j in the helicopter k do
                    Swap rescue site i and rescue site j
                end for
                Select the best solution
            end while
        case 1:
            while cnt < m do
                Select the rescue site i with the largest anteroposterior distance in the helicopter k
                Remove the rescue site i from the current solution
                for each helicopter k in the current solution do
                    for each position u do
                        if site i can be added to the current position u then
                            Store the position u into set S
                        end if
                    end for
                end for
                Choose the optimal position from the set S and add site i to the position u
            end while
        case 2:
            while cnt < m do
                Randomly select helicopter k and store the rescue sites along its route in the set S
                Delete the set S from the current solution
                for each rescue site i in the set S do
                    for each helicopter k in the current solution do
                        for each position u do
                            if site i can be added to the current position u then
                                Store the position u into set S
                            end if
                        end for
                    end for
                    Choose the optimal position from the set S and add site i to the position u
                    if the insertion of the site i failed then
                        Add one helicopter and allocate the site i to this helicopter
                    end if
                end for
            end while
    end switch
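The first move of the local search (case 0) swaps a chosen site with every other site in the same route and keeps the best result. A minimal sketch of that swap neighbourhood for one route, with `route_cost` as an illustrative stand-in for the rescue objective:

```python
def best_swap_in_route(route, route_cost, i=0):
    """Try swapping the site at index i with every other position in the
    same route and keep the cheapest resulting route. `route_cost` is a
    problem-specific objective stand-in."""
    best = list(route)
    for j in range(len(route)):
        if j == i:
            continue
        cand = list(route)
        cand[i], cand[j] = cand[j], cand[i]   # swap sites i and j
        if route_cost(cand) < route_cost(best):
            best = cand
    return best
```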

4.6. SA-Based Acceptance Criterion
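The acceptance criterion named in this section follows the standard simulated-annealing rule: improvements are always accepted, while a worse candidate is accepted with probability exp(−Δ/T). A generic sketch (the function name and interface are illustrative, not the paper's exact rule):

```python
import math
import random

def sa_accept(current_cost, candidate_cost, temperature, rng=None):
    """Simulated-annealing acceptance: always take improvements; accept a
    worse candidate with probability exp(-delta / temperature)."""
    rng = rng or random.Random()
    delta = candidate_cost - current_cost
    if delta <= 0:
        return True
    return rng.random() < math.exp(-delta / temperature)
```

Accepting occasional uphill moves this way lets the search escape local optima; as the temperature falls, the criterion degenerates to pure improvement-only acceptance.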

5. Experiment Results

5.1. Experimental Instances

5.2. Parameters Setting

5.3. Effectiveness of the Local Search Strategy

5.4. Effectiveness of the SA-Based Acceptance Criterion

5.5. Effectiveness Evaluation against the Known Optimal Solutions

5.6. Comparison with Two Efficient Heuristic Algorithms

5.7. Comparisons with Several Efficient Algorithms

6. Conclusions and Future Works

Author Contributions

Data Availability Statement

Acknowledgments

Conflicts of Interest



Notation | Description
0        | Index of the rescue center
N        | Set of all nodes, including the rescue sites and the rescue center
R        | Set of all rescue locations
H        | Set of all helicopters
TH       | Set of transport helicopters
MH       | Set of medical helicopters
a_i      | Earliest possible rescue time of location i, i ∈ N
b_i      | Latest possible rescue time of location i, i ∈ N
d_ij     | Distance between locations i and j, i, j ∈ N, i ≠ j
dm_i     | Demand for the supplies of location i, i ∈ N
dp_i     | The number of survivors waiting to be rescued at the rescue site i, i ∈ N
lt_i     | Life strength for survivors at the rescue site i, i ∈ N
cm_k     | Maximum capacity for the material of transport helicopter k, k ∈ TH
tt_k     | Minimum life strength threshold of transport helicopter k, k ∈ TH
cp_k     | Maximum capacity for casualty care of medical helicopter k, k ∈ MH
mt_k     | Minimum life strength threshold of medical helicopter k, k ∈ MH
t_i      | Service duration for rescue site i, i ∈ N
u_ik     | Start time of service for rescue site i, i ∈ N, k ∈ H
Parameter | Level 1 | Level 2 | Level 3 | Level 4
d         | 0.1     | 0.2     | 0.3     | 0.4
T         | 0.2     | 0.3     | 0.4     | 0.5

d | T | Average Values
1 | 1 | 1862.75
1 | 2 | 1886.63
1 | 3 | 1896.69
1 | 4 | 1892.37
2 | 1 | 1704.23
2 | 2 | 1754.04
2 | 3 | 1712.81
2 | 4 | 1716.56
3 | 1 | 1799.72
3 | 2 | 1803.39
3 | 3 | 1776.82
3 | 4 | 1798.90
4 | 1 | 1867.13
4 | 2 | 1841.73
4 | 3 | 1819.77
4 | 4 | 1866.40
[Table: per-instance comparison of IIG with IIG_NL over the 56 rc1xx/rc2xx/rr1xx/rr2xx/rrc1xx/rrc2xx instances, listing better values, objective values, and RPIs. Mean objective values: 2018.88 (IIG) vs 2144.75 (IIG_NL); mean RPIs: 0.11 vs 6.32; mean better value 2016.77.]
InstancesBetter ValuesAlgorithmsRPIs
IIGIIG_NSIIGIIG_NS
rc1011535.461646.27 7.22
rc1021688.981723.15 2.02
rc1031821.791861.14 2.16
rc1042014.80 2048.45 1.67
rc1051687.561802.07 6.79
rc1061741.29 1880.81 8.01
rc1071893.48 1924.29 1.63
rc1082002.702005.98 0.16
rc1091924.37 1964.96 2.11
rc2011930.641971.68 2.13
rc2021926.09 1931.06 0.26
rc2031960.171993.15 1.68
rc2042027.052037.70 0.53
rc2051993.131996.43 0.17
rc2061947.04 1950.87 0.20
rc2072016.15 2031.26 0.75
rc2082020.312046.07 1.28
rr1013005.393022.86 5.81
rr1022033.62 2043.89 0.51
rr1031794.85 1798.33 0.19
rr1041735.86 1737.67 0.10
rr1052056.112065.52 0.46
rr1061978.561981.65 0.16
rr1071855.79 1865.20 0.51
rr1081699.01 1753.45 3.20
rr1091938.55 1985.92 2.44
rr1101807.02 1837.60 1.69
rr1111764.78 1799.71 1.98
rr1121763.50 1801.79 2.17
rr2012106.75 2208.56 4.83
rr2021924.95 2011.90 4.52
rr2031811.40 1884.65 4.04
rr2041612.58 1675.13 3.88
rr2051859.54 1935.95 4.11
rr2061857.82 1943.45 4.61
rr2071701.91 1797.41 5.61
rr2081598.95 1673.06 4.63
rr2092025.392336.35 4.99
rr2101782.63 1859.44 4.31
rr2111735.94 1788.57 3.03
rrc1012472.05 2544.58 2.93
rrc1022309.12 2352.05 1.86
rrc1032165.93 2239.96 3.42
rrc1042045.94 2093.10 2.31
rrc1052454.70 2540.50 3.50
rrc1062333.60 2413.30 3.42
rrc1072159.70 2235.99 3.53
rrc1082172.61 2252.53 3.68
rrc2012741.56 2838.36 3.53
rrc2022560.35 2665.13 4.09
rrc2032396.92 2515.60 4.95
rrc2042245.18 2365.22 5.35
rrc2052886.663165.74 8.82
rrc2062614.92 2746.17 5.02
rrc2072380.57 2450.92 2.96
rrc2082248.49 2347.66 4.41
Mean2011.122018.882059.980.452.37
[Table: the Solomon VRPTW instances (with optimal values) and the derived R-VRPTWLST-ILS and R-VRPTWLST instances, each listing per-instance THD, MHD, and OD values.]
[Table: per-instance comparison of IIG against two efficient heuristic algorithms relative to the best-known solutions, listing objective values and RPIs. Mean best-known value 1904.50; mean objective values 1929.38 (IIG) vs 2039.65 and 2089.79 for the comparison algorithms; mean RPIs 0.01 vs 0.07 and 0.10.]
[Table: per-instance comparison of IIG with IABC, ALNS, VNIG, and SAIG relative to the best-known solutions, listing objective values and RPIs. Mean best-known value 1996.71; mean objective values: 2018.88 (IIG), 2056.52 (IABC), 2141.19 (ALNS), 2135.33 (VNIG), 2279.45 (SAIG); mean RPIs: 1.06, 2.95, 7.26, 7.04, 14.33.]

Share and Cite

Cui, X.; Yang, K.; Wang, X.; Duan, P. An Improved Iterated Greedy Algorithm for Solving Collaborative Helicopter Rescue Routing Problem with Time Window and Limited Survival Time. Algorithms 2024 , 17 , 431. https://doi.org/10.3390/a17100431




COMMENTS

  1. Introduction to Problem-Solving using Search Algorithms for Beginners

    Introduction. In computer science, problem-solving refers to artificial intelligence techniques, which include designing efficient algorithms and heuristics and performing root cause analysis to find suitable solutions. Search algorithms are fundamental tools for solving a wide range of problems in computer science. They provide a systematic approach to ...

  2. Chapter 3 Solving Problems by Searching

    In general, exponential-complexity search problems cannot be solved by uninformed search for any but the smallest instances. §3.4.2, Dijkstra's algorithm or uniform-cost search: when actions have different costs, an obvious choice is to use best-first search where the evaluation function is the cost of the path from the root to the current node.
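
Uniform-cost search as described above can be sketched in a few lines of Python; the weighted graph below is invented purely for illustration:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Best-first search where the evaluation function is the
    path cost g(n) from the root (start) to the current node."""
    frontier = [(0, start, [start])]  # (path cost so far, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:              # goal test on expansion, as in Dijkstra
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for neighbor, step_cost in graph.get(node, []):
            if neighbor not in explored:
                heapq.heappush(frontier,
                               (cost + step_cost, neighbor, path + [neighbor]))
    return None

# Hypothetical weighted graph: edges stored as (neighbor, cost) pairs.
graph = {
    'S': [('A', 1), ('B', 4)],
    'A': [('B', 2), ('G', 6)],
    'B': [('G', 2)],
}
print(uniform_cost_search(graph, 'S', 'G'))  # (5, ['S', 'A', 'B', 'G'])
```

Note that the direct edge A→G costs 6, so the cheapest route detours through B; ordering the frontier by path cost is what finds this.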

  3. Search Algorithms in AI

    The search algorithms in this section have no additional information on the goal node other than the one provided in the problem definition. The plans to reach the goal state from the start state differ only by the order and/or length of actions. ... It is used for solving real-life problems using data mining techniques. The tool was developed ...

  4. PDF 3 SOLVING PROBLEMS BY SEARCHING

    eral general-purpose search algorithms that can be used to solve these problems and compare the advantages of each algorithm. The algorithms are uninformed, in the sense that they are given no information about the problem other than its definition. Chapter 4 deals with informed search algorithms, ones that have some idea of where to look for ...

  5. PDF Solving problems by searching

    Toy problems (but sometimes useful): they illustrate or exercise various problem-solving methods, have a concise, exact description, and can be used to compare performance. Examples: 8-puzzle, 8-queens, cryptarithmetic, vacuum world, missionaries and cannibals, simple route finding. Real-world problems: more difficult; no single, agreed-upon description ...

  6. PDF Solving problems by searching

    Problem-solving agents use atomic representations (see Chapter 2), where states of the world are considered as wholes, with no internal structure visible to the problem-solving agent. We consider two general classes of search: (1) uninformed search algorithms for which the algorithm is provided no information about the problem other than its

  7. AI

    A search algorithm is a type of algorithm used in artificial intelligence to find the best or most optimal solution to a problem by exploring a set of possible solutions, also called a search space. A search algorithm filters through a large number of possibilities to find a solution that works best for a given set of constraints. Search algorithms typically operate by organizing the search ...

  8. PDF Search Problems

    But a fast test determining whether a state is reachable from another is very useful, as search techniques are often inefficient; when a problem has no solution, detecting this is often not feasible (or too costly). 8-puzzle → 362,880 states, 0.036 sec; 15-puzzle → 2.09 × 10^13 states, ~55 hours.
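
The state counts quoted above are just permutation counts (9 board positions for the 8-puzzle, 16 for the 15-puzzle); a quick sanity check:

```python
import math

# 8-puzzle: 3x3 board, 9 cells for 8 tiles plus the blank.
print(math.factorial(9))            # 362880 board arrangements
# 15-puzzle: 4x4 board, 16 cells.
print(f"{math.factorial(16):.2e}")  # 2.09e+13 board arrangements
```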

  9. PDF Problem-Solving as Search

    Problem Solving as Search. Search is a central topic in AI, originating with Newell and Simon's work on problem solving and their famous book "Human Problem Solving" (1972). Automated reasoning is a natural search task. More recently, smarter algorithms have emerged, given that almost all AI formalisms (planning, ...

  10. PDF Chapter 3 Solving problems by searching

    Else pick some search node N from Q. If state(N) is a goal, return N (we have reached the goal). Otherwise remove N from Q, find all the children of state(N) not in Visited, and create all the one-step extensions of N to each such descendant. Add the extended paths to Q, add the children of state(N) to Visited, and go to step 2.
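
The generic loop sketched above can be written directly in Python; with a FIFO queue for Q it behaves as breadth-first search (the example graph is made up):

```python
from collections import deque

def graph_search(graph, start, goal):
    """Generic search loop from the excerpt: pick a search node N from Q,
    test for the goal, then add one-step extensions to each unvisited
    child. A FIFO deque for Q makes this breadth-first search."""
    Q = deque([[start]])          # each search node is a whole path
    visited = {start}
    while Q:
        path = Q.popleft()        # pick some search node N from Q
        node = path[-1]
        if node == goal:          # state(N) is a goal: return it
            return path
        for child in graph.get(node, []):
            if child not in visited:
                visited.add(child)
                Q.append(path + [child])  # one-step extension of N
    return None

graph = {'S': ['a', 'b'], 'a': ['c'], 'b': ['c', 'G'], 'c': ['G']}
print(graph_search(graph, 'S', 'G'))  # ['S', 'b', 'G']
```

Swapping the deque for a stack (pop from the right) would give depth-first search with the same skeleton.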

  11. PDF Solving problems by searching: Uninformed Search

    State space graphs vs. search trees. [Figure: a state space graph and the corresponding search tree over the same states.] We construct both on demand, and we construct as little as possible. Each NODE in the search tree is an entire PATH in the state space graph.

  12. Analysis of Searching Algorithms in Solving Modern Engineering Problems

    Many current engineering problems have been solved using artificial intelligence search algorithms. To conduct this research, we selected certain key algorithms that have served as the foundation for many other algorithms present today. This article exhibits and discusses the practical applications of A*, Breadth-First Search, Greedy, and Depth-First Search algorithms. We looked at several ...

  13. Searching Algorithms

    Searching algorithms are essential tools in computer science used to locate specific items within a collection of data. These algorithms are designed to efficiently navigate through data structures to find the desired information, making them fundamental in various applications such as databases, web search engines, and more. Searching ...

  14. PDF CSEP 573 Chapters 3-5 Problem Solving using Search

    Example: N-Queens (e.g., 4 queens). State-space search problems. General problem: given a start state, find a path to a goal state; we can test whether a state is a goal and, given a state, generate its successor states. Variants: find any path vs. a least-cost path; the goal is completely specified and the task is just to find the path (route planning); or the path doesn't matter, only finding the goal ...

  15. PDF 16.410 Lecture 02: Problem Solving as State Space Search

    Most problem solving tasks may be formulated as state space search. State space search is formalized using graphs, simple paths, search trees, and pseudocode. Depth-first and breadth-first search are framed, among others, as instances of a generic search strategy.

  16. Search Algorithms in AI

    Problem-solving agents are goal-based agents and use atomic representation. In this topic, we will learn various problem-solving search algorithms. Search algorithm terminologies: Search: searching is a step-by-step procedure to solve a search problem in a given search space. A search problem can have three main factors:

  17. Using Uninformed & Informed Search Algorithms to Solve 8-Puzzle (n

    The same problem (with a little variation) also appeared as a programming exercise in the Coursera course Algorithms, Part I (by Prof. Robert Sedgewick, Princeton). The description of the problem taken from the assignment is shown below (notice that the goal state is different in this version of the same problem): write a program to solve the 8-puzzle problem (and its natural generalizations) using the ...
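
As a hedged sketch of the informed approach, an A* solver for the 8-puzzle with a Manhattan-distance heuristic might look like this (the state encoding and goal layout are my own choices, not taken from the assignment):

```python
import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 marks the blank

def manhattan(state):
    """Sum over tiles of horizontal + vertical distance to the goal cell."""
    dist = 0
    for i, tile in enumerate(state):
        if tile:                       # skip the blank
            goal_i = tile - 1
            dist += abs(i // 3 - goal_i // 3) + abs(i % 3 - goal_i % 3)
    return dist

def neighbors(state):
    """States reachable by sliding one tile into the blank."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def astar(start):
    """A*: expand in order of f = g (moves so far) + h (Manhattan)."""
    frontier = [(manhattan(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == GOAL:
            return path
        for nxt in neighbors(state):
            if nxt not in best_g or g + 1 < best_g[nxt]:
                best_g[nxt] = g + 1
                heapq.heappush(frontier,
                               (g + 1 + manhattan(nxt), g + 1, nxt, path + [nxt]))
    return None

start = (1, 2, 3, 4, 5, 6, 7, 0, 8)   # one slide away from the goal
print(len(astar(start)) - 1)           # number of moves: 1
```

Because Manhattan distance never overestimates the remaining moves, the first path A* returns is optimal.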

  18. Popular Approaches to Solve Coding Problems in DSA

    Example problems: Euclid's algorithm for finding the GCD, binary search, the Josephus problem. Problem-solving using binary search: when an array has some order property similar to a sorted array, we can use the binary search idea to solve several searching problems efficiently in O(log n) time.
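
The binary search idea mentioned above, in its minimal O(log n) form over a sorted array:

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent.
    Each comparison halves the remaining search range."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 7))   # 3
print(binary_search([2, 3, 5, 7, 11, 13], 4))   # -1
```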

  19. A new accurate and fast convergence cuckoo search algorithm for solving

    In recent years, the Cuckoo Optimization Algorithm (COA) has been widely used to solve various optimization problems due to its simplicity, efficacy, and capability to avoid getting trapped in local optima. However, COA has some limitations such as low convergence when it comes to solving constrained optimization problems with many constraints.

  20. Linear Search Practice Problems Algorithms

    Solve practice problems for Linear Search to test your programming skills. Also go through detailed tutorials to ...
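
For contrast with binary search, linear search, the subject of these practice problems, is the simplest O(n) scan and needs no ordering assumption:

```python
def linear_search(arr, target):
    """Scan left to right; return the first index holding target, else -1."""
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

print(linear_search([4, 8, 15, 16, 23, 42], 16))  # 3
print(linear_search([4, 8, 15], 99))              # -1
```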

  21. Problem Solving with Algorithms and Data Structures using Python

    An interactive version of Problem Solving with Algorithms and Data Structures using Python. ... Search Page. Problem Solving with Algorithms and Data Structures using Python by Bradley N. Miller, David L. Ranum is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

  22. An Improved Iterated Greedy Algorithm for Solving Collaborative ...

    Next, a problem-specific local search strategy is developed to improve the algorithm's local search effectiveness. In addition, the simulated annealing (SA) method is integrated as an acceptance criterion to prevent the algorithm from getting trapped in local optima. ... solving it using an improved iterated greedy (IIG) algorithm. In the ...