Complexity Analysis

Big O notation

Time complexity makes it easy to estimate the running time of a program. Accurately calculating a program's running time is a very labour-intensive process (it depends on the compiler and on the type of computer or the speed of the processor), so we do not measure it exactly; we measure only its order of magnitude. Complexity can be viewed as the maximum number of primitive operations that a program may execute. Primitive operations are single additions, multiplications, assignments, and so on. We may leave some operations uncounted and concentrate on those that are performed the largest number of times; such operations are referred to as dominant. The number of dominant operations depends on the specific input data. We usually want to know how the running time depends on a particular aspect of the data. This is most frequently the data size, but it can also be the size of a square matrix or the value of some input variable.

If you have seen O-notation before, you might find it strange that we should write, for example, n = O(n²). In the literature, we sometimes find O-notation informally describing asymptotically tight bounds, that is, what is properly defined using Θ-notation. However, when we write f(n) = O(g(n)), we are merely claiming that some constant multiple of g(n) is an asymptotic upper bound on f(n), with no claim about how tight an upper bound it is. Distinguishing asymptotic upper bounds from asymptotically tight bounds is standard in the algorithms literature. Using O-notation, we can often describe the running time of an algorithm merely by inspecting its overall structure. Simply put, O-notation describes the worst-case running time of an algorithm.
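
As a small worked example of an upper bound: 3n² + 5n = O(n²), because 3n² + 5n ≤ 4 · n² for every n ≥ 5, so a constant multiple of n² bounds the function from above.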

Comparison of different time complexities

  • Constant time — O(1). There is always a fixed number of operations. For example, to calculate the square of a natural number n:
def square(n):
    # Pre: n is a natural number.
    # Post: the square of n is returned.
    result = n * n
    return result

This algorithm performs a constant number of operations regardless of the size of the input.

  • Logarithmic time — O(log n). The value of n is halved on each iteration of the loop. If n = 2ˣ, then log₂ n = x.
def logarithmic(n):
    # Count how many times n can be halved before it reaches 1.
    result = 0
    while n > 1:
        n //= 2
        result += 1
    return result
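
For example, logarithmic(16) returns 4, since 16 is halved four times before reaching 1 (16 → 8 → 4 → 2 → 1).
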
  • Linear time — O(n).

    def linear(n, A):
        # Pre: A is an array of n elements.
        # Post: returns 0 if A contains a zero element, 1 otherwise.
        for i in range(n):
            if A[i] == 0:
                return 0
        return 1
    

    Let’s note that if the first value of array A is 0, then the program ends immediately. But remember: when analysing time complexity we should consider the worst case. The program takes the longest time to execute when A does not contain any 0, in which case it performs n iterations. The running time of this algorithm therefore grows linearly with n, the size of the array.
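
    For instance, linear(4, [5, 2, 0, 7]) returns 0 on the third iteration, while linear(4, [5, 2, 3, 7]) scans the whole array and returns 1.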

  • Quadratic time — O(n²).

def quadratic(n):
    # Count the number of pairs (i, j) with 0 <= i <= j < n.
    result = 0
    for i in range(n):
        for j in range(i, n):
            result += 1
    return result

The result of the function equals 1/2 · (n · (n + 1)) = 1/2 · (n² + n). When calculating the complexity we are interested in the term that grows fastest, so we omit not only constants but also the lower-order terms (1/2 · n in this case). Thus we get quadratic time complexity, O(n²). Sometimes the complexity depends on more than one variable, as the sketch below illustrates.
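
For instance, a running time may be proportional to two independent input sizes n and m; a minimal sketch (the function name is made up for this illustration):

def linear_two_variables(n, m):
    # One pass over each input, so the time complexity is O(n + m).
    result = 0
    for i in range(n):
        result += i
    for j in range(m):
        result += j
    return result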

  • Exponential and factorial time. It is worth knowing that there are other types of time complexity, such as factorial time, O(n!), and exponential time, O(2ⁿ). Algorithms with such complexities can solve problems only for very small values of n, because they would take too long to execute for large values of n. A classic illustration is sketched below.
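
A classic example of exponential growth is the naive recursive computation of Fibonacci numbers; a minimal sketch, not an optimized algorithm:

def fibonacci(n):
    # Each call spawns two further calls, so the number of calls
    # grows exponentially with n: the running time is O(2**n).
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)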

Time limit

Nowadays, an average computer can perform about 10⁸ simple operations in less than a second. Sometimes we have the information we need about the expected time complexity, and sometimes we do not. The time limit set for online tests is usually from 1 to 10 seconds, so we can estimate the expected complexity from it. During contests we are often given a limit on the size of the data, and from it we can guess the time complexity within which the task should be solved. This is usually a great convenience, because we can look for a solution that works within a specific complexity instead of worrying about a faster solution. For example, if:

  • n < 1 000 000, the expected time complexity is O(n) or O(n log n),
  • n < 10 000, the expected time complexity is O(n²),
  • n < 500, the expected time complexity is O(n³).

Of course, these limits are not precise. They are just approximations, and they will vary depending on the specific task.
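
To see where these rules of thumb come from: at 10⁸ operations per second, an O(n) solution for n = 1 000 000 performs about 10⁶ operations and finishes in well under a second, while an O(n²) solution would need about 10¹² operations, that is, on the order of 10⁴ seconds.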

Space complexity

Memory limits provide information about the expected space complexity. From them you can estimate the number of variables that you can declare in your program. In short, if you have a constant number of variables, you also have constant space complexity: in Big-O notation this is O(1). If you need to declare an array with n elements, you have linear space complexity, O(n). More specifically, space complexity is the amount of memory needed to perform the computation. It includes all the variables, both global and local, dynamic pointer data structures and, in the case of recursion, the contents of the stack. Depending on the convention, the input data may also be included. Space complexity is trickier to calculate than time complexity, because not all of these variables and data structures are necessarily needed at the same time: global variables exist and occupy memory all the time, while local variables (and the additional information kept on the stack) exist only during the invocation of the function. A minimal sketch of the two simplest cases follows.
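
As an illustration (both function names below are made up for this sketch):

def constant_space(n):
    # A fixed number of scalar variables, whatever the value of n: O(1) space.
    result = n * (n + 1) // 2
    return result

def linear_space(n):
    # An array of n elements: O(n) space.
    values = [0] * n
    return sum(values)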

Exercise

Problem: You are given an integer n. Compute the sum 1 + 2 + ... + n.

Slow solution

Time complexity: O(n²).

def slowest_solution(n):
    # Increment the result by one, once for every pair (i, j) with j <= i.
    result = 0
    for i in range(n):
        for j in range(i + 1):
            result += 1
    return result
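
The total number of increments is 1 + 2 + ... + n = n · (n + 1)/2, which is exactly the quadratic sum analysed earlier; hence O(n²).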

Slow solution

Alternatively, we may increment the result by 1, 2, ..., n in turn. This algorithm is much faster: time complexity O(n).

def slow_solution(n):
    # Add the consecutive values 1, 2, ..., n.
    result = 0
    for i in range(n):
        result += (i + 1)
    return result

Fast solution

It is known that the Gauss summation formula for the first n natural numbers is n · (n + 1)/2.

Therefore,

def fast_solution(n):
    # Gauss formula: 1 + 2 + ... + n = n * (n + 1) / 2.
    result = n * (n + 1) // 2
    return result

Time complexity: can you guess? The result is computed with a constant number of operations, hence O(1).
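
As a quick sanity check tying the three versions together (a hypothetical usage snippet):

assert slowest_solution(10) == slow_solution(10) == fast_solution(10) == 55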

Can you find the best algorithm for the Fibonacci primes and the Lucas numbers? Thanks for reading.