CCDSAP Preparation - Day 2 & 3

This post is simple. In my previous post, I listed some goals; here, I am going to share my experience in reaching those goals.

List of topics along with commentaries:


A simple video with a simple example. The example itself is worth appreciating: it illustrates the concept of comparing two algorithms based on their time complexity. One runs in the order of n and the other in the order of square root of n, so as the input size increases, the square-root-of-n algorithm wins; it is more efficient than the other one. A sketch of this idea follows below.
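The video's exact example is not reproduced here, but a classic way to see an O(n) versus an O(sqrt(n)) algorithm is trial-division primality testing. This is my own hypothetical sketch, not the code from the video:

```java
// Hypothetical illustration (not the video's exact example):
// two ways to test primality, one O(n) and one O(sqrt(n)).
public class PrimalityDemo {

    // O(n): try every possible divisor from 2 to n-1.
    static boolean isPrimeLinear(long n) {
        if (n < 2) return false;
        for (long d = 2; d < n; d++) {
            if (n % d == 0) return false;
        }
        return true;
    }

    // O(sqrt(n)): in any divisor pair (d, n/d), one member is <= sqrt(n),
    // so checking divisors up to sqrt(n) is enough.
    static boolean isPrimeSqrt(long n) {
        if (n < 2) return false;
        for (long d = 2; d * d <= n; d++) {
            if (n % d == 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        long n = 1_000_000_007L; // a large prime: the worst case for both loops
        System.out.println(isPrimeSqrt(n));   // finishes in ~31,000 iterations
        System.out.println(isPrimeLinear(n)); // needs ~10^9 iterations, noticeably slower
    }
}
```

Both functions give the same answer; only the rate of growth of the work differs, which is exactly what the comparison in the video is about.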
In this video, the introduction was good. What interests a developer when finding time complexity is how the running time of an algorithm is affected by the input; the essential term is the 'rate of growth of time taken' as a function of the input. We immediately see a graph where the x-axis is the input size and the y-axis is the time taken by the algorithm. If we have different algorithms and plot their running times, we see different curves, and we can compare their rates of growth and say how each algorithm behaves when the input size becomes large, that is, asymptotically high.

We are given a model machine, assuming some unit of time for different steps like assignment, comparison, arithmetic, return, etc. And we are given three examples: one taking constant time (sum of two numbers), a second taking linear time (sum of an array of numbers), and a third taking quadratic time (sum of the numbers in a matrix). We can plot these functions on a graph and visually appreciate the concepts. A sketch of the three examples follows below.
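Here is a minimal Java sketch of the three examples from the video (the method and variable names are my own):

```java
public class GrowthRates {

    // Constant time, O(1): a fixed number of operations regardless of input.
    static int sumOfTwo(int a, int b) {
        return a + b;
    }

    // Linear time, O(n): one pass over n elements.
    static long sumOfArray(int[] arr) {
        long sum = 0;
        for (int x : arr) {
            sum += x;
        }
        return sum;
    }

    // Quadratic time, O(n^2): nested loops over an n x n matrix.
    static long sumOfMatrix(int[][] m) {
        long sum = 0;
        for (int i = 0; i < m.length; i++) {
            for (int j = 0; j < m[i].length; j++) {
                sum += m[i][j];
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumOfTwo(3, 4));                           // 7
        System.out.println(sumOfArray(new int[]{1, 2, 3}));           // 6
        System.out.println(sumOfMatrix(new int[][]{{1, 2}, {3, 4}})); // 10
    }
}
```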

Here, one more thing: Big O is a set of all functions of a similar form, characterized by the dominating term in the running-time equation of an algorithm. For example, if the equation is n^2 + n, the dominating term is n^2; and since n^2 + n <= 2n^2 for all n >= 1, we can see that this is O(n^2).

( Taking some rest, will update the blog after a while; till then, the above is the latest update on my preparation... )
Realistically speaking, I was not able to spend more time on Day 2 because of other personal work, so my efforts spilled over to Day 3.


In this video, you will appreciate why we formally classify some functions. Basically, we group all functions with the same rate of growth into one class/set; for example, all quadratic-rate-of-growth functions go into one class, because as n tends to infinity, the coefficients and constants will not matter. So any equation whose leading term is n to the power of 2 falls into the quadratic class of functions, and formally we denote that class by O(n^2). Theoretically speaking, whenever you see Big-O, it is actually a set; Theta is again a set, and Omega is also a set. Each set has a rule, and for an element to belong to that set, it has to satisfy that rule.

In the case of Big-O, O(g(n)) is the set of all functions f(n) for which there is some constant c such that f(n) <= c*g(n) for all n >= n'. If we plot a graph, we can see that f(n) is upper bounded by c*g(n): it will not cross above it after the given n' value. The same logic, mirrored, gives the lower bound, which is the Omega notation: Omega(g(n)) is the set of all functions f(n) with f(n) >= c*g(n) for all n >= n'. Basically it is lower bounded; f(n) will not cross below c*g(n) after some n' value.

If we have a lower bound and an upper bound, we are also interested in a tight bound, that is, a set where both apply, which is where we come to Theta. Theta(g(n)) is the set of functions f(n) with c1*g(n) <= f(n) <= c2*g(n) for all n >= n'. If we plot a graph, we can see that g(n) puts f(n) into a cage; it is like saying: you cannot go above me and you cannot go below me. Basically, we can treat it like a floor and a ceiling. A small sanity check of this sandwich follows below.
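To make the 'cage' concrete, here is a hedged sketch that empirically checks the Theta sandwich for f(n) = 3n^2 + 2n with c1 = 3, c2 = 5, n' = 1. The function and constants are my own choices for illustration, not from the video:

```java
public class ThetaCheck {

    public static void main(String[] args) {
        // f(n) = 3n^2 + 2n should satisfy 3*n^2 <= f(n) <= 5*n^2 for all n >= 1,
        // which is exactly the Theta(n^2) sandwich with c1 = 3, c2 = 5, n' = 1.
        for (long n = 1; n <= 1_000_000; n *= 10) {
            long f = 3 * n * n + 2 * n;
            long lower = 3 * n * n; // c1 * g(n), the floor
            long upper = 5 * n * n; // c2 * g(n), the ceiling
            System.out.printf("n=%d: %d <= %d <= %d -> %b%n",
                    n, lower, f, upper, lower <= f && f <= upper);
        }
    }
}
```

The check prints true for every n, because 3n^2 <= 3n^2 + 2n holds trivially and 2n <= 2n^2 for n >= 1 gives the upper side.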

Some general rules are: drop lower-order terms, as we are interested in asymptotically large input sizes, where those terms will be insignificant; and divide the code into fragments like { assignment, arithmetic }, { loop }, { nested loop }. Each fragment will have some complexity, and the overall running time of the algorithm is the sum of the complexities of the fragments. An illustrative sketch follows below.
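As an illustration of the fragment rule (my own sketch, not from the course material), the method below has an O(1) assignment fragment, an O(n) loop, and an O(n^2) nested loop; summing them and dropping lower-order terms gives O(n^2) overall:

```java
public class FragmentSum {

    static long work(int n) {
        // Fragment 1: assignments and arithmetic -> O(1)
        long total = 0;
        int offset = n / 2;

        // Fragment 2: single loop -> O(n)
        for (int i = 0; i < n; i++) {
            total += i + offset;
        }

        // Fragment 3: nested loop -> O(n^2)
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                total += (long) i * j;
            }
        }

        // Overall: O(1) + O(n) + O(n^2) = O(n^2)
        return total;
    }

    public static void main(String[] args) {
        System.out.println(work(100));
    }
}
```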


The best takeaway from this video is the link to bigocheatsheet.com, which has a beautiful graph visualizing the common time-complexity functions and also categorizes them by performance, from horrible to excellent.

Ensure that your algorithms tend toward the better (flatter, 'excellent') time complexities on that chart more often.

I took a brief look at the other links posted for understanding Big O notation and the reason for its existence. The key reasoning I got: by deriving the time complexity, we can understand how an algorithm behaves in the worst case, compare different algorithms, and choose the best one to solve the problem.


So, as commonly summarized, Big O is used to state the worst-case running time, Omega the best case, and Theta a tight bound. (Strictly speaking, these notations are bounds on functions rather than cases: Big O is an upper bound, Omega a lower bound, and Theta both at once; any of the worst, best, or average case can be bounded with any of them.)
We are generally interested in the worst-case behavior of an algorithm.


Those of us from a Java background will not find reference example code in Java here. But overall it gives an idea of what operations can be performed on an array: traversal; searching, linear or binary; deleting an element; inserting an element; and sorting the elements, e.g. bubble sort or quick sort. My learning is on the insertion and deletion aspect: basically, some kind of shifting happens for all the affected elements. On deletion, all affected elements are shifted left; on insertion, all affected elements are shifted right. We need to write code accordingly; a sketch follows below.
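Since the linked resource has no Java reference code, here is my own minimal sketch of the shifting idea (the method names and the fixed-capacity convention are my own assumptions):

```java
import java.util.Arrays;

public class ArrayShiftDemo {

    // Delete the element at 'index': shift everything after it one slot LEFT.
    // 'size' is the current logical element count. Returns the new size.
    static int deleteAt(int[] arr, int size, int index) {
        for (int i = index; i < size - 1; i++) {
            arr[i] = arr[i + 1];
        }
        return size - 1;
    }

    // Insert 'value' at 'index': shift everything from 'index' one slot RIGHT.
    // Assumes the array still has spare capacity. Returns the new size.
    static int insertAt(int[] arr, int size, int index, int value) {
        for (int i = size; i > index; i--) {
            arr[i] = arr[i - 1];
        }
        arr[index] = value;
        return size + 1;
    }

    public static void main(String[] args) {
        int[] arr = new int[6];
        arr[0] = 10; arr[1] = 20; arr[2] = 30; arr[3] = 40;
        int size = 4;

        size = insertAt(arr, size, 1, 15); // [10, 15, 20, 30, 40]
        System.out.println(Arrays.toString(Arrays.copyOf(arr, size)));

        size = deleteAt(arr, size, 2);     // [10, 15, 30, 40]
        System.out.println(Arrays.toString(Arrays.copyOf(arr, size)));
    }
}
```

In the worst case (index 0), both operations touch every element, which is why insertion and deletion in a plain array are O(n).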

( I am yet to read the descriptions of the sorting algorithms in this link. Will update the post after some time; taking a break now. )
