The complexity of an algorithm can be described with big O notation. Big O is conventionally used to describe the worst-case scenario. In big O it is customary to describe the input size as n rather than x. For an algorithm with linear time complexity, we would say its time complexity is O(n).
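A minimal sketch of what a linear-time algorithm looks like in practice: a simple scan through a list (the `linear_search` function below is an illustrative example, not taken from the text). In the worst case, when the target is absent or sits at the end, all n elements are examined once, which is why the running time is O(n).

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if absent."""
    for index, value in enumerate(items):
        if value == target:
            return index
    # Worst case: every one of the n elements was checked.
    return -1

print(linear_search([4, 8, 15, 16, 23, 42], 23))  # found at index 4
print(linear_search([4, 8, 15, 16, 23, 42], 99))  # absent: -1
```

Doubling the length of the list doubles the worst-case work, which is exactly the linear growth that O(n) captures.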
Big O notation is used when talking about growth rates. It formalizes the notion that two functions "grow at the same rate," or that one function "grows faster than" another, and so on.
It is very commonly used in computer science when analyzing algorithms. An algorithm has a running time, usually expressed as a function of its input size. However, implementations of the same algorithm in different languages may yield different functions. For example, an algorithm with input size n bytes might take n^2 microseconds when implemented in C++, but 1000n^2 + 1000n microseconds when implemented in Python, a slower language.