Creating Efficient Solutions: Algorithm Design Techniques for Data Structure Challenges

 Algorithm Design Techniques in Data Structures

An algorithm is a finite set of instructions that specifies a sequence of operations to be carried out in order to solve a specific problem.

 Algorithm Design Paradigms:

General approaches to the construction of efficient solutions to problems. Such methods are of interest because:

• They provide templates suited to solving a broad range of diverse problems.

• They can be translated into common control and data structures provided by most high-level languages.

• The temporal and spatial requirements of the algorithms which result can be precisely analyzed. 

• Although more than one technique may be applicable to a specific problem, it is often the case that an algorithm constructed by one approach is clearly superior to equivalent solutions built using alternative techniques.

Algorithm Design Techniques:

1. Greedy algorithms

2. Divide and conquer

3. Dynamic programming

4. Randomized algorithms

5. Backtracking algorithms

 

1. Algorithm Design Techniques : Greedy Algorithm

A greedy algorithm is an algorithmic strategy that makes the locally optimal choice at each stage, in the hope that this eventually leads to a globally optimal solution. This means that the algorithm picks the best choice available at the moment without regard for the consequences. It optimizes the immediate outcome but does not consider the big picture, hence it is considered greedy.

 

Greedy Algorithm : Uses

Many networking problems use greedy algorithms. For example:

• Kruskal's Minimal Spanning Tree Algorithm

• Dijkstra's Shortest Path Algorithm

• Travelling Salesman Problem

• Prim's Minimal Spanning Tree Algorithm 

• Job Scheduling Problem

 

Greedy Algorithm : Example : Counting Coins [1]

• This problem is to count out a desired value using the fewest possible coins, and the greedy approach forces the algorithm to pick the largest coin that does not exceed the remaining amount.

• If we are provided coins of ₹1, ₹5, ₹10 and ₹20 (yes, we have ₹20 coins in this example :D) and we are asked to count ₹36, the greedy procedure will be:

- Select one ₹20 coin; the remaining count is 16. Then select one ₹10 coin; the remaining count is 6. Then select one ₹5 coin; the remaining count is 1. Finally, selecting one ₹1 coin solves the problem.

Greedy Algorithm : Example : Counting Coins [2]

For a currency system where we have coins of ₹1, ₹6, ₹10 and ₹20 value, counting coins for the value 36 is optimal, but for a count like 32 the greedy approach may use more coins than necessary. Here greedy uses 20 + 10 + 1 + 1, four coins in total, whereas the same amount could be made with only three coins (20 + 6 + 6).
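The two coin examples above can be sketched in Python. This is a minimal illustration of the greedy choice (always take the largest coin that fits), not a general change-making solution:

```python
def greedy_coin_count(amount, denominations):
    """Greedily pick the largest coin that still fits,
    repeating until the amount is fully covered."""
    coins = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            coins.append(coin)
            amount -= coin
    return coins

# With coins {1, 6, 10, 20}, greedy is optimal for 36 (three coins)...
print(greedy_coin_count(36, [1, 6, 10, 20]))  # [20, 10, 6]
# ...but suboptimal for 32: four coins instead of three (20 + 6 + 6).
print(greedy_coin_count(32, [1, 6, 10, 20]))  # [20, 10, 1, 1]
```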

Greedy Algorithm : Example : Finding Largest Sum

Consider choosing a root-to-leaf path through a tree of numbers with the goal of obtaining the largest overall sum. Out of shortsightedness, a greedy algorithm follows whichever branch looks best at each step, and can thereby miss the path that actually yields the largest sum. (The original figure contrasting the greedy "blue" path with the optimal "orange" path is omitted.)


Greedy Algorithm : Advantages and Disadvantages

• It is quite easy to come up with a greedy algorithm for a problem.

• Analyzing the run time for greedy algorithms will generally be much easier than for other techniques (like Divide and conquer).

• For the divide and conquer technique, it is not immediately clear whether the technique is fast or slow, because at each level of recursion the size of the sub-problems gets smaller while the number of sub-problems increases.

• The difficult part is that for greedy algorithms you have to work much harder to understand correctness issues.

• Even with the correct algorithm, it is hard to prove why it is correct. Proving that a greedy algorithm is correct is more of an art than a science. It involves a lot of creativity.

 Algorithm Design Techniques : Divide and Conquer

A top-down technique for designing algorithms that consists of dividing the problem into smaller subproblems, so that the solutions of the subproblems are easier to find, and then composing the partial solutions into the solution of the original problem.

To solve a large instance:

- Divide it into two or more smaller instances,

- Solve each of these smaller instances, and

- Combine their solutions to obtain the solution to the original instance.

• The smaller instances are often instances of the original problem and may be solved using the divide-and-conquer strategy recursively.

Divide and conquer : Application

• Merge Sort

• Quick sort

• Topological sort

• Binary Tree traversals: In-order, preorder and Post-order

• Binary Search
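As an illustration of the divide/solve/combine steps, here is a minimal merge sort sketch in Python; merge sort is the classic divide-and-conquer example from the list above:

```python
def merge_sort(arr):
    # Base case: lists of length 0 or 1 are already sorted.
    if len(arr) <= 1:
        return arr
    # Divide: split the list into two halves.
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # Conquer each half recursively.
    right = merge_sort(arr[mid:])
    # Combine: merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```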

Divide and conquer : Advantages and Disadvantages

 Advantages

• In a perfect world, where the problem is easy to divide and the sub-problems at some level are easy to solve, divide and conquer can be optimal for the general case, as with merge sort.

• Parallel availability: by its very nature, divide and conquer lends itself well to parallel processing.

 Disadvantages

• Problem decomposition may be very complex and thus not really suitable to divide and conquer.

• The recursive nature of the solution may end up duplicating sub-problems; dynamic-programming/memoized solutions may be better in some of these cases, as with Fibonacci.

• Recursing down to very small base cases may lead to huge recursion stacks, and efficiency can be lost by not switching to a direct solution at larger base cases.

Algorithm Design Techniques : Dynamic programming

Sometimes, divide-and-conquer leads to overlapping sub-problems and thus to redundant computations. It is common that the redundancies accumulate and cause an exponential amount of wasted time. We can avoid the waste using a simple idea: solve each sub-problem only once.

To be able to do that, we have to add a certain amount of book-keeping to remember subproblems we have already solved. The technical name for this design paradigm is dynamic programming.

Dynamic programming is used where we have problems, which can be divided into similar sub-problems, so that their results can be re-used. Mostly, these algorithms are used for optimization.

Before solving the sub-problem at hand, a dynamic programming algorithm examines the results of previously solved sub-problems.

Dynamic programming Vs Divide-and-Conquer

The dynamic programming approach is similar to divide and conquer in breaking the problem down into smaller and smaller sub-problems. But unlike divide and conquer, these sub-problems are not solved independently.

Rather, results of these smaller sub-problems are remembered and used for similar or overlapping sub-problems.

Dynamic programming: Example

The following computer problems can be solved using dynamic programming approach −

• Fibonacci number series.

• Tower of Hanoi.

• Shortest path by Dijkstra's algorithm.

• Dynamic programming can be used in both a top-down and a bottom-up manner.

Dynamic programming: Example1 [1]

The intuition behind dynamic programming is that we trade space for time: instead of recalculating all the states repeatedly (taking a lot of time but no space), we use extra space to store the results of the sub-problems and save time later.

 For Example:

Let's take the example of the Fibonacci numbers:

Fibonacci(n) = 1, if n = 0

Fibonacci(n) = 1, if n = 1

Fibonacci(n) = Fibonacci(n-1) + Fibonacci(n-2), otherwise

So, the first few numbers in the series will be: 1, 1, 2, 3, 5, 8, 13, 21, ... and so on.

Dynamic programming: Example1 [2]

•A code for it using pure recursion:
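A minimal pure-recursion sketch in Python, following the definition above (Fibonacci(0) = Fibonacci(1) = 1):

```python
def fibonacci(n):
    # Base cases from the definition: Fibonacci(0) = Fibonacci(1) = 1.
    if n <= 1:
        return 1
    # Each call spawns two more calls, recomputing the same values
    # many times over: the running time grows exponentially in n.
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(7))  # 21
```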


• Using the dynamic programming approach with memoization:
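A sketch of the same function with a memo dictionary as the book-keeping, so each sub-problem is solved only once:

```python
def fibonacci_memo(n, memo=None):
    if memo is None:
        memo = {}
    if n in memo:                 # Reuse a previously computed result.
        return memo[n]
    if n <= 1:
        result = 1
    else:
        result = fibonacci_memo(n - 1, memo) + fibonacci_memo(n - 2, memo)
    memo[n] = result              # Book-keeping: remember this sub-problem.
    return result

# Linear time instead of exponential: large n is now feasible.
print(fibonacci_memo(50))  # 20365011074
```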


Dynamic programming: Example1 [3]

In the recursive code, many values are recalculated multiple times. We would do better by calculating each unique quantity only once. Drawing the recursion tree makes it clear how often the same values are recomputed in the purely recursive version.


Algorithm Design Techniques : Randomized algorithms

An algorithm that uses random numbers to decide what to do next anywhere in its logic is called a randomized algorithm. Formally, the algorithm's performance is a random variable determined by the random bits; thus either the running time, the output, or both are random variables.

For example, in randomized quick sort, we use a random number to pick the next pivot (or we randomly shuffle the array).



Randomized algorithms : Example : Normal Quick Sort
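A sketch of a standard (deterministic) quick sort in Python, assuming the common Lomuto partition scheme with the last element as pivot:

```python
def quick_sort(arr, lo=0, hi=None):
    """In-place quick sort that always picks the last element as pivot."""
    if hi is None:
        hi = len(arr) - 1
    if lo < hi:
        p = partition(arr, lo, hi)
        quick_sort(arr, lo, p - 1)   # Sort elements left of the pivot.
        quick_sort(arr, p + 1, hi)   # Sort elements right of the pivot.
    return arr

def partition(arr, lo, hi):
    pivot = arr[hi]                  # Deterministic choice: last element.
    i = lo - 1
    for j in range(lo, hi):
        if arr[j] <= pivot:          # Move smaller elements to the left.
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[hi] = arr[hi], arr[i + 1]
    return i + 1                     # Final position of the pivot.

print(quick_sort([10, 80, 30, 90, 40, 50, 70]))  # [10, 30, 40, 50, 70, 80, 90]
```

With this fixed pivot choice, an already-sorted input always triggers the worst case, O(n²).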


Randomized algorithms : Example : Randomized Quick Sort
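A sketch of the same sort, but with the pivot chosen uniformly at random before each partition, so no fixed input ordering can force the worst case every time:

```python
import random

def randomized_quick_sort(arr, lo=0, hi=None):
    if hi is None:
        hi = len(arr) - 1
    if lo < hi:
        # Pick a random pivot and swap it to the end,
        # then partition exactly as in the deterministic version.
        k = random.randint(lo, hi)
        arr[k], arr[hi] = arr[hi], arr[k]
        pivot, i = arr[hi], lo - 1
        for j in range(lo, hi):
            if arr[j] <= pivot:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[hi] = arr[hi], arr[i + 1]
        randomized_quick_sort(arr, lo, i)        # Left of the pivot.
        randomized_quick_sort(arr, i + 2, hi)    # Right of the pivot.
    return arr

print(randomized_quick_sort([10, 80, 30, 90, 40, 50, 70]))
```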


Randomized algorithms : Advantages

There are two principal advantages to randomized algorithms:

• The first advantage is performance; randomized algorithms run faster than the best-known deterministic algorithms for many problems.

• The second advantage is that many randomized algorithms are simpler to describe and implement than deterministic algorithms of comparable performance.

Algorithm Design Techniques : Backtracking algorithms

• Backtracking is a general algorithm for finding all (or some) solutions to some computational problems, that incrementally builds candidates to the solutions, and abandons a candidate ("backtracks") as soon as it determines that the candidate cannot possibly be completed to a valid solution.

• General steps:

- Start with a sub-solution.

- Check if this sub-solution will lead to the solution or not.

- If not, then come back and change the sub-solution and continue again.


Backtracking algorithms : Uses

• There are three types of problems in backtracking :

1. Decision Problem – search for a feasible solution.

2. Optimization Problem – search for the best solution.

3. Enumeration Problem – find all feasible solutions.

• Backtracking is an important tool for solving constraint satisfaction problems, such as crosswords, verbal arithmetic, Sudoku, and many other puzzles.

Backtracking algorithms : Example

• Depth First Search (DFS) with backtracking

N-queens problem

On an N×N board, N queens must be placed subject to three conditions:

- No two queens can be placed in the same row.

- No two queens can be placed in the same column.

- No two queens can be placed in the same diagonal.
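The three conditions above translate directly into a backtracking search: place queens row by row, and undo a placement as soon as it cannot lead to a full solution. A minimal Python sketch (the `solve_n_queens` name and the row-by-row list representation are illustrative choices):

```python
def solve_n_queens(n):
    """Return one placement as a list where index = row and value = column,
    or None if no solution exists."""
    cols = [None] * n

    def safe(row, col):
        for r in range(row):
            c = cols[r]
            # Same column, or same diagonal (equal row/column distances).
            if c == col or abs(c - col) == abs(r - row):
                return False
        return True

    def place(row):
        if row == n:                 # All N queens placed: done.
            return True
        for col in range(n):
            if safe(row, col):
                cols[row] = col      # Try this sub-solution...
                if place(row + 1):
                    return True
                cols[row] = None     # ...and backtrack if it fails.
        return False

    return cols if place(0) else None

print(solve_n_queens(4))  # [1, 3, 0, 2]
```

For N = 4 this finds the placement shown in the demo below; for N = 2 or N = 3 every branch is pruned and the function returns None.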


4-Queens Demo

(The step-by-step board diagrams showing queens being placed and backtracked on the 4×4 board are omitted here.)


 FAQ:

1. Define Algorithm Design Techniques in Data Structures.

2. Explain Algorithm Design Paradigms.

3. Write about Backtracking and Randomized algorithm.

4. What is Dynamic programming?


Thank You!!!

Deep99Notes
