In mathematics and computer science, an **algorithm** is a self-contained sequence of actions to be performed. Algorithms can perform calculation, data processing, and automated reasoning tasks. The concept of *algorithm* has existed for centuries; however, a partial formalization of what would become the modern *algorithm* began with attempts to solve the Entscheidungsproblem (the “decision problem”) posed by David Hilbert in 1928.

In this article we discuss various types of algorithms. Here is a short list of the algorithms we will cover:

- Simple recursive algorithms
- Backtracking algorithms
- Divide and conquer algorithms
- Dynamic programming algorithms
- Greedy algorithms
- Branch and bound algorithms
- Brute force algorithms
- Randomized algorithms

## Simple recursive algorithms

This type of algorithm solves the base cases directly, then recurs with a simpler subproblem and does some extra work to convert the solution to the simpler subproblem into a solution to the given problem. In short, a **recursive algorithm** is an algorithm which calls itself with “smaller (or simpler)” input values, and which obtains the result for the current input by applying simple operations to the returned value for the smaller (or simpler) input. We call these “simple” because several of the other algorithm types are inherently recursive.

Some examples of recursive algorithms:

- To count the number of elements in a list:
  - If the list is empty, return zero; otherwise,
  - Step past the first element, and count the remaining elements in the list
  - Add one to the result
- To test if a value occurs in a list:
  - If the list is empty, return false; otherwise,
  - If the first thing in the list is the given value, return true; otherwise,
  - Step past the first element, and test whether the value occurs in the remainder of the list
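The two list procedures above can be sketched in Python (the function names are illustrative, not from the article):

```python
def count_elements(lst):
    # Base case: an empty list has zero elements.
    if not lst:
        return 0
    # Recurse on the rest of the list, then add one for the first element.
    return 1 + count_elements(lst[1:])

def occurs_in(value, lst):
    # Base case: the value cannot occur in an empty list.
    if not lst:
        return False
    # The value is at the front of the list.
    if lst[0] == value:
        return True
    # Otherwise test the remainder of the list.
    return occurs_in(value, lst[1:])
```

For example, `count_elements([3, 1, 4])` returns 3 and `occurs_in(4, [3, 1, 4])` returns True.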

## Backtracking algorithms

Backtracking algorithms are based on a depth-first recursive search. A backtracking algorithm tests to see whether a solution has been found, and if so, returns it; otherwise, for each choice that can be made at this point, we perform the following process:

- Make the choice
- Recurse
- If the recursion returns a solution, return it
- If no choices remain, return failure

Example of backtracking: to color a map with no more than four colors:

- color(Country n):
  - If all countries have been colored (n > number of countries), return success; otherwise,
  - For each color c of the four colors:
    - If country n is not adjacent to a country that has been colored c:
      - Color country n with color c
      - Recursively color country n+1
      - If successful, return success
  - Return failure (if the loop exits)
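The map-coloring procedure can be sketched in Python; the adjacency map and country names below are illustrative assumptions, not from the article:

```python
COLORS = ["red", "green", "blue", "yellow"]

# Hypothetical adjacency relation between countries.
ADJACENT = {
    "A": {"B", "C"},
    "B": {"A", "C"},
    "C": {"A", "B", "D"},
    "D": {"C"},
}

def color_map(countries, coloring=None, n=0):
    if coloring is None:
        coloring = {}
    # All countries have been colored: success.
    if n == len(countries):
        return coloring
    country = countries[n]
    for c in COLORS:
        # Only use c if no already-colored neighbour has it.
        if all(coloring.get(nb) != c for nb in ADJACENT[country]):
            coloring[country] = c                            # make the choice
            result = color_map(countries, coloring, n + 1)   # recurse
            if result is not None:
                return result                                # solution found
            del coloring[country]                            # undo the choice (backtrack)
    return None  # no color works: failure
```

`color_map(["A", "B", "C", "D"])` returns an assignment of a color to each country with no two adjacent countries sharing a color, or None if no such assignment exists.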

## Divide and Conquer

A divide and conquer algorithm consists of two parts:

- Divide the problem into smaller subproblems of the same type, and solve these subproblems recursively
- Combine the solutions to the subproblems into a solution to the original problem

Traditionally, an algorithm is only called divide and conquer if it contains two or more recursive calls.

The following are some standard Divide and Conquer algorithms.

**1) Binary Search** is a searching algorithm. In each step, the algorithm compares the input element x with the value of the middle element of the array. If the values match, it returns the index of the middle element. Otherwise, if x is less than the middle element, the algorithm recurs on the left side of the middle element, else it recurs on the right side.
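As a sketch, a recursive binary search over a sorted list might look like this in Python (names and the `-1` not-found convention are illustrative choices):

```python
def binary_search(arr, x, lo=0, hi=None):
    # Search for x in the sorted list arr; return its index, or -1 if absent.
    if hi is None:
        hi = len(arr) - 1
    if lo > hi:
        return -1  # empty range: x is not present
    mid = (lo + hi) // 2
    if arr[mid] == x:
        return mid
    if x < arr[mid]:
        return binary_search(arr, x, lo, mid - 1)  # recurse on the left half
    return binary_search(arr, x, mid + 1, hi)      # recurse on the right half
```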

**2) Quicksort** is a sorting algorithm. The algorithm picks a pivot element, rearranges the array elements in such a way that all elements smaller than the picked pivot element move to left side of pivot, and all greater elements move to right side. Finally, the algorithm recursively sorts the subarrays on left and right of pivot element.

**3) Merge Sort** is also a sorting algorithm. The algorithm divides the array in two halves, recursively sorts them and finally merges the two sorted halves.

**4) Closest Pair of Points** The problem is to find the closest pair of points in a set of points in the x-y plane. The problem can be solved in O(n^2) time by calculating the distances of every pair of points and comparing the distances to find the minimum. The Divide and Conquer algorithm solves the problem in O(n log n) time.

**5) Strassen’s Algorithm** is an efficient algorithm to multiply two matrices. A simple method to multiply two matrices needs 3 nested loops and is O(n^3). Strassen’s algorithm multiplies two matrices in O(n^2.8074) time.

**6) Cooley–Tukey Fast Fourier Transform (FFT) algorithm** is the most common algorithm for FFT. It is a divide and conquer algorithm which works in O(n log n) time.
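A compact recursive Cooley–Tukey sketch in Python, for input lengths that are powers of two (a simplification of production FFTs, which handle arbitrary lengths and work in place):

```python
import cmath

def fft(x):
    # Recursive radix-2 Cooley–Tukey FFT; len(x) must be a power of two.
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])  # transform of the even-indexed samples
    odd = fft(x[1::2])   # transform of the odd-indexed samples
    result = [0] * n
    for k in range(n // 2):
        # Twiddle factor combines the two half-size transforms.
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        result[k] = even[k] + t
        result[k + n // 2] = even[k] - t
    return result
```

For instance, `fft([1, 0, 0, 0])` gives a flat spectrum of four ones.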

**7) Karatsuba algorithm for fast multiplication** multiplies two *n*-digit numbers in at most *n*^{log₂ 3} ≈ *n*^{1.585} single-digit multiplications in general (and exactly *n*^{log₂ 3} when *n* is a power of 2). It is therefore faster than the classical algorithm, which requires *n*^{2} single-digit products. If *n* = 2^{10} = 1024, in particular, the exact counts are 3^{10} = 59,049 and (2^{10})^{2} = 1,048,576, respectively.
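A minimal sketch of Karatsuba’s recursion in Python, splitting numbers by decimal digits (the base case and splitting point are implementation choices):

```python
def karatsuba(x, y):
    # Base case: single-digit factors are multiplied directly.
    if x < 10 or y < 10:
        return x * y
    # Split both numbers around the middle digit position m.
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    # Three recursive multiplications instead of four.
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0
```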

Some Examples:

**Quick Sort:**

- Partition the array into two parts, and quicksort each of the parts
- No additional work is required to combine the two sorted parts
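The Quick Sort steps above can be sketched in Python; this version copies sublists for clarity rather than partitioning in place as production quicksorts do:

```python
def quicksort(arr):
    # Base case: lists of length 0 or 1 are already sorted.
    if len(arr) <= 1:
        return arr
    pivot = arr[0]
    # Partition into elements smaller than, equal to, and greater than the pivot.
    smaller = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    # Recursively sort each part; concatenation is all that remains.
    return quicksort(smaller) + equal + quicksort(greater)
```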

**Merge Sort:**

- Cut the array in half, and mergesort each half
- Combine the two sorted arrays into a single sorted array by merging them
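And Merge Sort, with the merge step made explicit (again a sketch using list copies):

```python
def merge_sort(arr):
    # Base case: lists of length 0 or 1 are already sorted.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])   # sort the first half
    right = merge_sort(arr[mid:])  # sort the second half
    # Merge: repeatedly take the smaller front element of the two halves.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```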

## Dynamic programming algorithms

A dynamic programming algorithm remembers past results and uses them to find new results. Dynamic programming is generally used for optimization problems. This method is for solving a complex problem by breaking it down into a collection of simpler subproblems, solving each of those subproblems just once, and storing their solutions. A dynamic programming algorithm will examine the previously solved subproblems and will combine their solutions to give the best solution for the given problem. This differs from Divide and Conquer, where subproblems generally need not overlap.

Example:

- Multiple solutions exist; we need to find the “best” one
- Requires “optimal substructure” and “overlapping subproblems”
  - Optimal substructure: the optimal solution contains optimal solutions to subproblems
  - Overlapping subproblems: solutions to subproblems can be stored and reused in a bottom-up fashion

Example: Fibonacci numbers

To find the nth Fibonacci number:

- If n is zero or one, return one; otherwise,
- Compute, *or look up in a table,* fibonacci(n-1) and fibonacci(n-2)
- Find the sum of these two numbers
- Store the result in a table and return it

Since finding the nth Fibonacci number involves finding all smaller Fibonacci numbers, the second recursive call has little work to do. The table may be preserved and used again later.
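The table-based procedure above corresponds to memoization; in Python, the standard library’s `lru_cache` can serve as the table (a top-down sketch, using the fib(0) = fib(1) = 1 convention from the steps above):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fibonacci(n):
    # Base cases: fib(0) = fib(1) = 1, matching the convention above.
    if n <= 1:
        return 1
    # Each subproblem is solved once; the decorator stores results
    # in a table and reuses them on later calls.
    return fibonacci(n - 1) + fibonacci(n - 2)
```

With this convention, `fibonacci(10)` returns 89, and the table persists between calls, so later queries are answered immediately.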

## Greedy algorithms

A **greedy algorithm** is an algorithmic paradigm that follows the problem-solving heuristic of making the locally optimal choice at each stage, with the hope of finding a global optimum. In many problems, a greedy strategy does not in general produce an optimal solution, but nonetheless a greedy heuristic may yield locally optimal solutions that approximate a globally optimal solution in a reasonable time. An optimization problem is one in which you want to find, not just *a* solution, but the *best* solution. A greedy algorithm sometimes works well for optimization problems.

A greedy algorithm works in phases. At each phase:

- You take the best you can get right now, without regard for future consequences
- You hope that by choosing a *local* optimum at each step, you will end up at a *global* optimum

Example:

Suppose you want to count out a certain amount of money, using the fewest possible bills and coins. A greedy algorithm for this would be:

At each step, take the largest possible bill or coin that does not overshoot

- To make $6.39, you can choose:
  - a $5 bill
  - a $1 bill, to make $6
  - a 25¢ coin, to make $6.25
  - a 10¢ coin, to make $6.35
  - four 1¢ coins, to make $6.39

For US money, the greedy algorithm always gives the optimum solution.
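A sketch of this greedy change-making in Python, working in cents to avoid floating-point issues (the denomination list covers common US bills and coins):

```python
# US denominations in cents, largest first.
DENOMINATIONS = [2000, 1000, 500, 100, 25, 10, 5, 1]

def make_change(amount_cents):
    # At each step, take the largest denomination that does not overshoot.
    used = []
    for d in DENOMINATIONS:
        while amount_cents >= d:
            used.append(d)
            amount_cents -= d
    return used
```

`make_change(639)` returns `[500, 100, 25, 10, 1, 1, 1, 1]`: a $5 bill, a $1 bill, a quarter, a dime, and four pennies, exactly as in the worked example above.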

## Branch and bound algorithms

**Branch and bound** (**BB**, **B&B**, or **BnB**) is an algorithm design paradigm for discrete and combinatorial optimization problems, as well as mathematical optimization. A branch-and-bound algorithm consists of a systematic enumeration of candidate solutions by means of state space search: the set of candidate solutions is thought of as forming a rooted tree with the full set at the root.

Branch and bound algorithms are generally used for optimization problems.

- As the algorithm progresses, a tree of subproblems is formed
- The original problem is considered the “root problem”
- A method is used to construct an upper and a lower bound for a given problem
- At each node, apply the bounding methods
  - If the bounds match, it is deemed a feasible solution to that particular subproblem
  - If the bounds do *not* match, partition the problem represented by that node, and make the two subproblems into children nodes
- Continue, using the best known feasible solution to trim sections of the tree, until all nodes have been solved or trimmed

Example:

Travelling salesman problem: a salesman has to visit each of n cities (at least) once, and wants to minimize the total distance travelled.

- Consider the root problem to be the problem of finding the shortest route through a set of cities, visiting each city once
- Split the node into two child problems:
  - Shortest route visiting city A first
  - Shortest route *not* visiting city A first
- Continue subdividing similarly as the tree grows

The algorithm explores *branches* of this tree, which represent subsets of the solution set. Before enumerating the candidate solutions of a branch, the branch is checked against upper and lower estimated *bounds* on the optimal solution, and is discarded if it cannot produce a better solution than the best one found so far by the algorithm.
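A small branch-and-bound sketch in Python for the travelling salesman problem, using the cost of the partial route as a simple lower bound (real solvers use much tighter bounds; the distance matrix here is an illustrative assumption):

```python
import math

# Hypothetical symmetric distance matrix between 4 cities.
DIST = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]

def tsp_branch_and_bound(dist):
    n = len(dist)
    best_cost = math.inf
    best_route = None

    def branch(route, cost):
        nonlocal best_cost, best_route
        # Bound: a partial route already costing at least as much as the
        # best complete tour cannot lead to a better solution; prune it.
        if cost >= best_cost:
            return
        if len(route) == n:
            # Complete the tour by returning to the start city.
            total = cost + dist[route[-1]][route[0]]
            if total < best_cost:
                best_cost = total
                best_route = route[:]
            return
        for city in range(n):
            if city not in route:  # branch on each unvisited city
                branch(route + [city], cost + dist[route[-1]][city])

    branch([0], 0)
    return best_cost, best_route
```

For this matrix the optimal tour costs 80 (for example 0 → 1 → 3 → 2 → 0); the pruning step skips any branch that cannot beat the best tour found so far.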

## Brute force algorithms

In computer science, **brute-force search** or **exhaustive search**, also known as **generate and test**, is a very general problem-solving technique that consists of systematically enumerating all possible candidates for the solution and checking whether each candidate satisfies the problem’s statement. A brute force algorithm simply tries *all* possibilities until a satisfactory solution is found.

A brute-force algorithm to find the divisors of a natural number *n* would enumerate all integers from 1 to n, and check whether each of them divides *n* without remainder. A brute-force approach for the eight queens puzzle would examine all possible arrangements of 8 pieces on the 64-square chessboard, and, for each arrangement, check whether each (queen) piece can attack any other.
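The divisor example can be written directly as a one-line trial division (a sketch; it checks every candidate up to n without any shortcuts):

```python
def divisors(n):
    # Brute force: try every candidate from 1 to n and keep
    # those that divide n with no remainder.
    return [d for d in range(1, n + 1) if n % d == 0]
```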

Such an algorithm can be:

- Optimizing: find the *best* solution. This may require finding all solutions, or, if a value for the best solution is known, it may stop when any best solution is found
  - Example: finding the best path for a travelling salesman
- Satisficing: stop as soon as a solution is found that is *good enough*
  - Example: finding a travelling salesman path that is within 10% of optimal

## Randomized algorithms

A **randomized algorithm** is an algorithm that employs a degree of randomness as part of its logic. The algorithm typically uses uniformly random bits as an auxiliary input to guide its behavior, in the hope of achieving good performance in the “average case” over all possible choices of random bits. Formally, the algorithm’s performance will be a random variable determined by the random bits; thus either the running time, or the output (or both) are random variables.

- A randomized algorithm uses a random number at least once during the computation to make a decision
- Example: In Quicksort, using a random number to choose a pivot
- Example: Trying to factor a large number by choosing random numbers as possible divisors
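The random-pivot idea in Quicksort amounts to changing one line of the earlier partition scheme (a sketch; `random.choice` picks the pivot uniformly at random):

```python
import random

def randomized_quicksort(arr):
    if len(arr) <= 1:
        return arr
    # A random pivot makes the worst case unlikely for any fixed input,
    # giving O(n log n) expected time.
    pivot = random.choice(arr)
    smaller = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return randomized_quicksort(smaller) + equal + randomized_quicksort(greater)
```

The output is the same sorted list on every run; only the running time varies with the random choices.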

That’s all for this post. I hope it helps you. If this post helped you understand algorithms like a pro, consider sharing it so others can benefit too.

Cheers!