How to determine time complexity from code?

In general, you can determine the time complexity by analyzing the program’s statements. However, you have to be mindful of how the statements are arranged: whether they sit inside a loop, make function calls, or even recurse. All these factors affect the runtime of your code. Let’s see how to deal with each case.

Sequential Statements

If we have statements with basic operations like conditionals, assignments, or reading a variable, we can assume each takes constant time.

statement 1;
statement 2;
...
statement n;

If we calculate the total time complexity, it would be something like this:

total = time(statement 1) + time(statement 2) + ... + time(statement n)

Let’s use T(n) as the total time as a function of the input size n, and t as the time complexity taken by a statement or group of statements.

T(n) = t(statement 1) + t(statement 2) + ... + t(statement n);

If each statement executes a basic operation, we can say it takes constant time O(1). As long as you have a fixed number of operations, it will be constant time, even if we have 1 or 100 of these statements.
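For instance, here’s a minimal sketch (the variable names are illustrative) of sequential constant-time statements:

const price = 100;          // assignment: O(1)
const tax = price * 0.19;   // arithmetic: O(1)
const total = price + tax;  // arithmetic: O(1)

These three statements always execute exactly once, regardless of any input, so the total is O(1) + O(1) + O(1) = O(1).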

Warning
Be careful with function calls. You will have to go to the implementation and check their runtime. More on that later.

Conditional Statements

Very rarely will you have code without any conditional statements. How do you calculate their time complexity? Remember that with Big O we care about the worst case, so we take the maximum possible runtime.

if (isValid) {
  statement 1;
  statement 2;
} else {
  statement 3;
}

Since we are after the worst case, we take whichever is larger of the two possibilities:

T(n) = Math.max(t(statement 1) + t(statement 2), t(statement 3))
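To make this concrete, here’s a small sketch (not from the original text) where the two branches have different runtimes:

if (array.length % 2 === 0) {
  console.log('even length');   // O(1)
} else {
  for (const item of array) {   // visits every element: O(n)
    console.log(item);
  }
}

Even though the O(n) branch runs only for odd-length arrays, Big O keeps the worst case, so this block is O(n).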

Loop Statements

Another prevalent scenario is loops like for-loops or while-loops. For any loop, we find out the runtime of the block inside them and multiply it by the number of times the program will repeat the loop.

for (let i = 0; i < array.length; i++) {
  statement 1;
  statement 2;
}

For this example, the loop is executed array.length times. Assuming n is the length of the array, we get the following:

T(n) = n * [ t(statement 1) + t(statement 2) ]

All loops that grow proportionally to the input size have a linear time complexity O(n). If you loop through only half of the array, that’s still O(n). Remember that we drop the constants so 1/2 n ⇒ O(n).
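For instance, a loop over just the first half of array still grows linearly:

for (let i = 0; i < array.length / 2; i++) {
  console.log(array[i]);   // runs n/2 times: O(1/2 n) simplifies to O(n)
}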

However, if a constant number bounds the loop, let’s say 4 (or even 400), then the runtime is constant: O(4) → O(1). See the following example.

for (let i = 0; i < 4; i++) {
  statement 1;
  statement 2;
}

That code is O(1) because it no longer depends on the input size.

Nested Loop Statements

Sometimes you might need to visit all the elements of a 2D array (grid/table). For such cases, you might find yourself with two nested loops.

for (let i = 0; i < n; i++) {
  statement 1;

  for (let j = 0; j < m; j++) {
    statement 2;
    statement 3;
  }
}

In this case, you would have something like this:

T(n) = n * [t(statement 1) + m * t(statement 2...3)]

Assuming the statements from 1 to 3 are O(1), we would have a runtime of O(n * m). If instead of m, you had to iterate up to n again, it would be O(n^2).
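As a concrete illustration of the O(n * m) case, here’s a minimal sketch (the function name is illustrative) that visits every cell of an n × m grid:

function sumGrid(grid) {
  let total = 0;
  for (let i = 0; i < grid.length; i++) {        // n rows
    for (let j = 0; j < grid[i].length; j++) {   // m columns per row
      total += grid[i][j];                       // O(1) work per cell
    }
  }
  return total;
}

The inner statement runs once per cell, n * m times in total, so the runtime is O(n * m). Another typical case is having a function call inside a loop. Let’s see how to deal with that next.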

Function Call Statements

When you calculate your program’s time complexity and invoke a function, you need to be aware of its runtime. If you created the function, that might be a simple inspection of the implementation. However, if you use a 3rd-party function, you might infer its runtime from the language or library documentation.

Let’s say you have the following program:

for (let i = 0; i < n; i++) {
  fn1();
  for (let j = 0; j < n; j++) {
    fn2();
    for (let k = 0; k < n; k++) {
      fn3();
    }
  }
}

Depending on the runtime of fn1, fn2, and fn3, you would have different runtimes:
  • If they all are constant O(1), then the final runtime would be O(n^3).

  • However, if only fn1 and fn2 are constant and fn3 has a runtime of O(n^2), this program will have a runtime of O(n^5). Another way to look at it: if fn3 has two nested loops and you replace the invocation with the actual implementation, you would have five nested loops.

In general, you will have something like this:

T(n) = n * [ t(fn1()) + n * [ t(fn2()) + n * [ t(fn3()) ] ] ]
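To see how the multiplication plays out, here’s a small sketch (the helper name is illustrative) where an O(n) function is invoked inside an O(n) loop:

function indexOfValue(array, value) {
  for (let i = 0; i < array.length; i++) {  // linear search: O(n)
    if (array[i] === value) return i;
  }
  return -1;
}

for (let i = 0; i < array.length; i++) {
  indexOfValue(array, array[i]);  // O(n) call inside an O(n) loop
}

Each iteration pays the full cost of the call, so the total is n * O(n) = O(n^2), the same result you would get by inlining the function’s loop.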

Recursive Function Statements

Analyzing the runtime of recursive functions might get a little tricky. There are different ways to do it. One intuitive way is to explore the recursion tree.

Let’s say that we have the following program:

function fn(n) {
  if (n < 0) return 0; // guard for negative input
  if (n < 2) return n; // base cases: fn(0) = 0 and fn(1) = 1

  return fn(n - 1) + fn(n - 2); // two recursive calls per invocation
}

You can represent each function invocation as a bubble (or node).

Let’s do some examples:
  • When n = 2, you have 3 function calls. First fn(2), which in turn calls fn(1) and fn(0).

  • For n = 3, you have 5 function calls. First fn(3), which in turn calls fn(2) and fn(1) and so on.

  • For n = 4, you have 9 function calls. First fn(4), which in turn calls fn(3) and fn(2) and so on.

Since it’s a binary tree, we can sense that every time n increases by one, we would have to perform at most double the number of operations.
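You can verify those call counts with a quick sketch (not part of the original text) that runs a counted copy of fn:

let calls = 0;

function countedFn(n) {
  calls++;  // count every invocation
  if (n < 0) return 0;
  if (n < 2) return n;
  return countedFn(n - 1) + countedFn(n - 2);
}

[2, 3, 4].forEach((n) => {
  calls = 0;
  countedFn(n);
  console.log(`fn(${n}) made ${calls} calls`);  // prints 3, 5, and 9
});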

Here’s the graphical representation of the 3 examples:

graph G {
    subgraph cluster_2 {
      label = "fn(2)"
      "fn(2)-" -- { "fn(1)-", "fn(0)-" }

      "fn(0)-" [label="fn(0)"];
      "fn(1)-" [label="fn(1)", color=red];
      "fn(2)-" [label="fn(2)", color=red];
    }

    subgraph cluster_1 {
      label = "fn(3)"
      "fn(3)1" -- { "fn(2)1", "fn(1)1" }
      "fn(2)1" -- { "fn(1)3", "fn(0)1" }

      "fn(0)1" [label="fn(0)"];
      "fn(1)3" [label="fn(1)", color=red];
      "fn(1)1" [label="fn(1)"];
      "fn(2)1" [label="fn(2)", color=red];
      "fn(3)1" [label="fn(3)", color=red];
    }

    subgraph cluster_0 {
      label = "fn(4)"
      "fn(4)" -- { "fn(3)*", "fn(2)" }
      "fn(2)" -- { "fn(1)*", "fn(0)" }
      "fn(3)*" -- { "fn(2)**", "fn(1)**" }
      "fn(2)**" -- { "fn(1)****", "fn(0)**" }

      "fn(0)**" [label="fn(0)"];
      "fn(1)*" [label="fn(1)"];
      "fn(1)**" [label="fn(1)"];
      "fn(1)****" [label="fn(1)", color=red];
      "fn(2)**" [label="fn(2)", color=red];
      "fn(3)*" [label="fn(3)", color=red];
      "fn(4)" [label="fn(4)", color=red];
    }
}

If you take a look at the generated call trees, the leftmost nodes go down in descending order: fn(4), fn(3), fn(2), fn(1), which means that the height of the tree (or the number of levels) will be n.

The total number of calls in a complete binary tree is 2^n - 1 (each level doubles the previous one: 1 + 2 + 4 + ... + 2^(n-1) = 2^n - 1). As you can see in fn(4), the tree is not complete: the last level has only two nodes, fn(1) and fn(0), while a full tree would have eight. Still, we can say the runtime is exponential, O(2^n). It won’t get any worse because 2^n is the upper bound.

Summary

In this chapter, we learned how to calculate the time complexity of our code when we have the following elements:
  • Basic operations like assignments, bitwise, and math operators.

  • Loops and nested loops.

  • Function invocations and recursion.

In the next section, we are going to cover the most common time complexities with real code examples.