Algorithm Design 2.0: Embracing Novel Approaches for Next-Level Solutions

Different Approaches to Design an Algorithm

Algorithm:

An algorithm is a formally defined procedure for performing some calculation: a set of step-by-step instructions that performs a specific task or operation. An algorithm provides a blueprint for writing a program to solve a particular problem. It is considered an effective procedure for solving a problem in a finite number of steps.
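As a minimal illustration, the C sketch below expresses one such step-by-step procedure, finding the largest element in an array, as a program. The function name find_max and the sample data are our own choices for the example, not part of the text above.

#include <stdio.h>

/* Returns the largest element of arr[0..n-1].
   Steps: assume the first element is the largest, then scan the
   rest, updating the answer whenever a bigger element is found. */
int find_max(const int arr[], int n)
{
    int max = arr[0];
    for (int i = 1; i < n; i++)
        if (arr[i] > max)
            max = arr[i];
    return max;
}

int main(void)
{
    int data[] = {12, 45, 7, 89, 23};
    printf("Largest = %d\n", find_max(data, 5));  /* prints 89 */
    return 0;
}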

 Different Approaches to Design an Algorithm

A complex algorithm is often divided into smaller units called modules. This process of dividing an algorithm into modules is called modularization.

Advantages of Modularization:

• It makes the complex algorithm simpler to design and implement.

• Each module can be designed independently. While designing one module, the details of other modules can be ignored, thereby enhancing clarity in design which in turn simplifies implementation, debugging, testing, documenting, and maintenance of the overall algorithm.
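As a rough sketch of modularization, the C outline below splits a small program into three independently designed modules. The module names (read_input, sort_data, print_result) and the bubble sort inside sort_data are illustrative choices of ours, not prescribed by the text.

#include <stdio.h>

#define N 5

/* Module 1: obtain the data (here, fixed sample values). */
void read_input(int arr[])
{
    int sample[N] = {4, 1, 5, 2, 3};
    for (int i = 0; i < N; i++)
        arr[i] = sample[i];
}

/* Module 2: sort the data (a simple bubble sort). */
void sort_data(int arr[])
{
    for (int i = 0; i < N - 1; i++)
        for (int j = 0; j < N - 1 - i; j++)
            if (arr[j] > arr[j + 1]) {
                int t = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = t;
            }
}

/* Module 3: report the result. */
void print_result(const int arr[])
{
    for (int i = 0; i < N; i++)
        printf("%d ", arr[i]);
    printf("\n");
}

/* Each module can be designed, tested, and debugged on its own;
   main only wires them together. */
int main(void)
{
    int arr[N];
    read_input(arr);
    sort_data(arr);
    print_result(arr);
    return 0;
}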

TIME AND SPACE COMPLEXITY:

 Analyzing an algorithm means determining the amount of resources (such as time and memory) needed to execute it. Algorithms are generally designed to work with an arbitrary number of inputs, so the efficiency or complexity of an algorithm is stated in terms of time and space complexity.

The time complexity of an algorithm is basically the running time of a program as a function of the input size. Similarly, the space complexity of an algorithm is the amount of computer memory that is required during the program execution as a function of the input size.

 In other words, the number of machine instructions which a program executes is called its time complexity. This number is primarily dependent on the size of the program’s input and the algorithm used. Generally, the space needed by a program depends on the following two parts:

1.  Fixed part: It varies from problem to problem. It includes the space needed for storing instructions, constants, variables, and structured variables (like arrays and structures).

2. Variable part: It varies from program to program. It includes the space needed for recursion stack, and for structured variables that are allocated space dynamically during the runtime of a program.
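To make the two parts concrete, here is a hedged C sketch of our own: the fixed part covers the program's instructions and statically declared variables, while the variable part covers memory allocated at run time and the recursion stack.

#include <stdio.h>
#include <stdlib.h>

int table[100];   /* fixed part: size known before the program runs */

/* Recursive sum of 1..n: each call adds a frame to the recursion
   stack, so the stack space grows with n (variable part). */
int sum_to(int n)
{
    if (n <= 1)
        return n;
    return n + sum_to(n - 1);
}

int main(void)
{
    int n = 10;
    /* variable part: the buffer's size depends on n at run time */
    int *buffer = malloc(n * sizeof *buffer);
    if (buffer == NULL)
        return 1;
    printf("sum = %d\n", sum_to(n));   /* prints 55 */
    free(buffer);
    return 0;
}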

 However, running time requirements are more critical than memory requirements.

 Worst-case, Average-case, Best-case, and Amortized Time Complexity

• Worst-case running time

Worst-case running time denotes the behavior of an algorithm with respect to the worst possible input instance. It gives an upper bound on the running time for any input. Therefore, knowing the worst-case running time gives us an assurance that the algorithm will never exceed this time limit.

• Average-case running time

The average-case running time of an algorithm is an estimate of the running time for an ‘average’ input. It specifies the expected behavior of the algorithm when the input is randomly drawn from a given distribution. Average-case running time assumes that all inputs of a given size are equally likely.

• Best-case running time

The term ‘best-case performance’ is used to analyze an algorithm under optimal conditions. For example, the best case for a simple linear search on an array occurs when the desired element is the first in the list. However, while developing and choosing an algorithm to solve a problem, we hardly base our decision on the best-case performance.

 It is always recommended to improve the average performance and the worst-case performance of an algorithm.
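The linear search mentioned above makes all three cases easy to see at once; in this sketch (our own illustration) the comments mark where each case arises.

#include <stdio.h>

/* Returns the index of key in arr[0..n-1], or -1 if absent.
   Best case:    key is at arr[0]          -> 1 comparison
   Worst case:   key is last or missing    -> n comparisons
   Average case: key equally likely at any
                 position                  -> about n/2 comparisons */
int linear_search(const int arr[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (arr[i] == key)
            return i;
    return -1;
}

int main(void)
{
    int data[] = {7, 3, 9, 1, 5};
    printf("index = %d\n", linear_search(data, 5, 9));  /* prints 2 */
    return 0;
}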

• Amortized running time:

Amortized running time refers to the time required to perform a sequence of (related) operations averaged over all the operations performed. Amortized analysis guarantees the average performance of each operation in the worst case.
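The classic example is appending to a dynamic array that doubles its capacity when full: a single append is occasionally expensive (when the array grows), but averaged over a long sequence of appends each one costs O(1) amortized. The sketch below is our own illustration of this technique, not code from the text.

#include <stdlib.h>

typedef struct {
    int *data;
    size_t size, capacity;
} DynArray;

/* Appends v. Most calls copy one element; a call that triggers a
   resize copies all existing elements, but doubling makes resizes
   rare enough that n appends cost at most about 2n copies in total,
   i.e., amortized O(1) per append. */
int push(DynArray *a, int v)
{
    if (a->size == a->capacity) {
        size_t cap = a->capacity ? 2 * a->capacity : 1;
        int *p = realloc(a->data, cap * sizeof *p);
        if (p == NULL)
            return -1;
        a->data = p;
        a->capacity = cap;
    }
    a->data[a->size++] = v;
    return 0;
}

int main(void)
{
    DynArray a = {0};               /* start empty */
    for (int i = 0; i < 1000; i++)
        push(&a, i);                /* ~10 resizes over 1000 pushes */
    free(a.data);
    return 0;
}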

 Time–Space Trade-off:

The best algorithm to solve a particular problem at hand is no doubt the one that requires less memory space and takes less time to complete its execution. But practically, designing such an ideal algorithm is not a trivial task.

There can be more than one algorithm to solve a particular problem. One may require less memory space, while the other may require less CPU time to execute. Thus, it is not uncommon to sacrifice one thing for the other. Hence, there exists a time–space trade-off among algorithms.

So, if space is a big constraint, then one might choose a program that takes less space at the cost of more CPU time. On the contrary, if time is a major constraint, then one might choose a program that takes minimum time to execute at the cost of more space.
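A common illustration of this trade-off is computing Fibonacci numbers: the plain recursive version uses almost no extra memory but takes exponential time, while the table-driven version spends O(n) extra space to bring the time down to O(n). The sketch below is our own example, not taken from the text.

#include <stdio.h>

/* Space-light, time-heavy: exponential time, only the call stack. */
long fib_slow(int n)
{
    return n < 2 ? n : fib_slow(n - 1) + fib_slow(n - 2);
}

/* Time-light, space-heavy: an O(n) table buys O(n) time. */
long fib_fast(int n)
{
    long memo[91] = {0};   /* n <= 90 fits in a 64-bit long */
    memo[1] = 1;
    for (int i = 2; i <= n; i++)
        memo[i] = memo[i - 1] + memo[i - 2];
    return memo[n];
}

int main(void)
{
    printf("%ld %ld\n", fib_slow(20), fib_fast(20));  /* 6765 6765 */
    return 0;
}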

Expressing Time and Space Complexity

The time and space complexity can be expressed using a function f(n) where n is the input size for a given instance of the problem being solved. Expressing the complexity is required when we want to predict the rate of growth of complexity as the input size of the problem increases.

There are multiple algorithms that find a solution to a given problem and we need to find the algorithm that is most efficient. The most widely used notation to express this function f(n) is the Big O notation. It provides the upper bound for the complexity.

 Algorithm Efficiency:

If a function is linear (that is, it contains no loops or recursion), its efficiency, or running time, can be given as the number of instructions it contains. However, if an algorithm contains loops, then its efficiency may vary depending on the number of loops and the running time of each loop in the algorithm.

There are different cases in which loops determine the efficiency of an algorithm:

a. Linear Loops

b. Logarithmic Loops

c. Nested Loops

d. Linear Logarithmic loops

e. Quadratic loops

f. Dependent Quadratic Loops

 

Algorithm Efficiency: Linear Loops

To calculate the efficiency of an algorithm that has a single loop, we first need to determine the number of times the statements in the loop will be executed, because the number of iterations is directly proportional to the loop factor: the greater the loop factor, the greater the number of iterations.

 For example, consider the loop given below:

for(i=0; i<100; i++)
    statement block;

Here, 100 is the loop factor. Hence, the general formula in the case of linear loops may be given as

f(n) = n

However, calculating efficiency may not be simple as shown above. Consider the loop given below:

for(i=0; i<100; i+=2)
    statement block;

Here, the number of iterations is half the loop factor, because i increases by 2 in each iteration. So the efficiency can be given as

 f(n) = n/2

Algorithm Efficiency: Logarithmic Loops

In linear loops, the loop update statement either adds a value to or subtracts a value from the loop-controlling variable. In logarithmic loops, by contrast, the loop-controlling variable is multiplied or divided by a constant during each iteration of the loop.

For example, look at the loops given below:

for(i=1; i<1000; i*=2)
    statement block;

for(i=1000; i>=1; i/=2)
    statement block;

• Consider the first for loop in which the loop-controlling variable i is multiplied by 2.

    -  The loop will be executed only 10 times and not 1000 times because in each iteration the value of i doubles.

• Now, consider the second loop in which the loop-controlling variable i is divided by 2.

       - Here, the loop will be executed 10 times.

• Thus, the number of iterations is a function of the number by which the loop-controlling variable is divided or multiplied. In the above examples, it is 2. When n = 1000, the number of iterations can be given by log₂ 1000, which is approximately equal to 10. In general terms, the efficiency of loops in which iterations divide or multiply the loop-controlling variable can be given as f(n) = log n.

 

Algorithm Efficiency: Nested Loops

Loops that contain loops are known as nested loops. In order to analyze nested loops, we need to determine the number of iterations each loop completes. The total is then obtained as the product of the number of iterations in the inner loop and the number of iterations in the outer loop.

• Nested Loops Types

a. Linear Logarithmic loops

b. Quadratic loops

c. Dependent Quadratic Loops

 

a. Linear logarithmic loop

Consider the following code in which the loop-controlling variable of the inner loop is multiplied after each iteration.

for(i=0; i<10; i++)
    for(j=1; j<10; j*=2)
        statement block;

The number of iterations in the inner loop is about log₂ 10 (j takes the values 1, 2, 4, and 8). This inner loop is controlled by an outer loop which iterates 10 times. Therefore, according to the formula, the number of iterations for this code can be given as 10 log 10. In more general terms, the efficiency of such loops can be given as f(n) = n log n.

b. Quadratic loop

In a quadratic loop, the number of iterations in the inner loop is equal to the number of iterations in the outer loop. Consider the following code in which the outer loop executes 10 times and for each iteration of the outer loop, the inner loop also executes 10 times. Therefore, the efficiency here is 100.

for(i=0; i<10; i++)
    for(j=0; j<10; j++)
        statement block;

The generalized formula for a quadratic loop can be given as f(n) = n².

 c. Dependent quadratic loop

 In a dependent quadratic loop, the number of iterations in the inner loop is dependent on the outer loop. Consider the code given below:

for(i=0; i<10; i++)
    for(j=0; j<=i; j++)
        statement block;

In this code, the inner loop executes once in the first iteration of the outer loop, twice in the second, three times in the third, and so on. In this way, the total number of iterations can be calculated as

1 + 2 + 3 + ... + 9 + 10 = 55

If we calculate the average of this loop (55/10 = 5.5), it equals (10 + 1)/2, that is, the number of iterations in the outer loop plus 1, divided by 2. In general terms, the inner loop iterates (n + 1)/2 times on average. Therefore, the efficiency of such a code can be given as f(n) = n(n + 1)/2.
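The count is easy to verify empirically by replacing the statement block with a counter; with n = 10 this small check of ours prints 55, matching n(n + 1)/2.

#include <stdio.h>

int main(void)
{
    int count = 0;
    for (int i = 0; i < 10; i++)
        for (int j = 0; j <= i; j++)
            count++;               /* stands in for the statement block */
    printf("%d\n", count);         /* prints 55 = 10 * 11 / 2 */
    return 0;
}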

 Asymptotic Analysis

Asymptotic analysis of an algorithm refers to defining mathematical bounds on its run-time performance. Using asymptotic analysis, we can draw conclusions about the best-case, average-case, and worst-case scenarios of an algorithm. Asymptotic analysis is input bound, i.e., if there is no input to the algorithm, it is concluded to work in constant time. All other factors are treated as constant.

For example, the running time of one operation may be computed as f(n) = n while that of another is computed as g(n) = n².

This means the running time of the first operation will increase linearly as n increases, while the running time of the second will increase quadratically. Similarly, the running times of both operations will be nearly the same if n is significantly small.

Asymptotic Notations:

Following are the commonly used asymptotic notations to calculate the running time complexity of an algorithm.

• Ο Notation

• Ω Notation

• θ Notation

 BIG O NOTATION

The notation Ο(n) is the formal way to express the upper bound of an algorithm's running time. It measures the worst case time complexity or the longest amount of time an algorithm can possibly take to complete. The Big O notation, where O stands for ‘order of’, is concerned with what happens for very large values of n.

For example, if a sorting algorithm performs n² operations to sort just n elements, then that algorithm would be described as an O(n²) algorithm. When expressing complexity using the Big O notation, constant multipliers are ignored. So, an O(4n) algorithm is equivalent to O(n).

If f(n) and g(n) are functions defined on the positive integers, then f(n) = O(g(n)), read ‘f of n is Big O of g of n’, if and only if there exist positive constants c and n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀. It means that for large amounts of data, f(n) grows no faster than a constant multiple of g(n). Hence, g provides an upper bound.
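For instance, if f(n) = 3n + 2, then f(n) ≤ 4n for all n ≥ 2, so f(n) = O(n) with c = 4 and n₀ = 2. By contrast, 3n² + 2 is O(n²) but not O(n), since no constant multiple of n eventually stays above 3n² + 2.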

Best-case O describes an upper bound on the running time for the most favorable input. It is possibly lower than the worst case. For example, when sorting an array, the best case is when the array is already correctly sorted.

Worst-case O describes an upper bound on the running time for the least favorable input. It is possibly greater than the best case. For example, when sorting an array, the worst case is when the array is sorted in reverse order. If we simply write O, it means the same as worst-case O.


 According to the Big O notation, we have five different categories of algorithms:

• Constant time algorithm: running time complexity given as O(1)

• Linear time algorithm: running time complexity given as O(n)

• Logarithmic time algorithm: running time complexity given as O(log n)

• Polynomial time algorithm: running time complexity given as O(nᵏ), where k > 1

• Exponential time algorithm: running time complexity given as O(2ⁿ)
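Familiar examples of each category include accessing an array element by index, which is O(1); linear search, which is O(n); binary search on a sorted array, which is O(log n); comparing every pair of elements in an array, which is O(n²); and generating all subsets of an n-element set, which is O(2ⁿ).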

 

Number of operations for different functions of n:

n      log₂ n   n log₂ n   n²      2ⁿ
1      0        0          1       2
2      1        2          4       4
4      2        8          16      16
8      3        24         64      256
16     4        64         256     65,536

 Limitations of Big O notation:

• Many algorithms are simply too hard to analyze mathematically.

• There may not be sufficient information to calculate the behavior of the algorithm in the average case.

• Big O analysis only tells us how the algorithm grows with the size of the problem, not how efficient it is, as it does not consider the programming effort.

• It ignores important constants. For example, if one algorithm takes O(n²) time to execute and the other takes O(100000n²) time to execute, then as per Big O, both algorithms have equal time complexity. In real-time systems, this may be a serious consideration.

 

Omega Notation, Ω:

The notation Ω(n) is the formal way to express the lower bound of an algorithm's running time. It measures the best case time complexity or the best amount of time an algorithm can possibly take to complete.
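Formally, f(n) = Ω(g(n)) if and only if there exist positive constants c and n₀ such that f(n) ≥ c·g(n) for all n ≥ n₀; that is, g(n) provides a lower bound on the growth of f(n).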

Best-case Ω describes a lower bound for all combinations of input. This implies that the function can never get any better than the specified value. For example, when sorting an array, the best case is when the array is already correctly sorted.

Worst-case Ω describes a lower bound for worst-case input combinations. It is possibly greater than the best case. For example, when sorting an array, the worst case is when the array is sorted in reverse order. If we simply write Ω, it means the same as best-case Ω.

 

Theta Notation, θ:

The notation θ(n) is the formal way to express both the lower bound and the upper bound of an algorithm's running time. Formally, f(n) = θ(g(n)) if and only if there exist positive constants c₁, c₂, and n₀ such that c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀; that is, f(n) is both O(g(n)) and Ω(g(n)).

Fig: The notation θ(n)


FAQ:

1. Write Different Approaches to Design an Algorithm.
2. Define Ο notation, Ω notation, and θ notation.

Thank You!!!
Deep99Notes
