Say you were making a cup of tea. Ordinarily you wouldn't need to follow an algorithm to make said cup of tea, but suppose you were a robot and needed instructions; consider for a moment how important the efficiency of the algorithm would be.
Well, we mean things like TIME and SPACE, and often these two things have a big influence on each other. Definition: Algorithmic efficiency refers to the properties of an algorithm that relate to the amount of computational resources used by the algorithm. An algorithm must be analysed to determine its resource usage.
The importance of efficiency with respect to time was emphasised by Ada Lovelace in 1843 as applying to Charles Babbage's mechanical analytical engine:
"In almost every computation a great variety of arrangements for the succession of the processes is possible, and various considerations must influence the selections amongst them for the purposes of a calculating engine. One essential object is to choose that arrangement which shall tend to reduce to a minimum the time necessary for completing the calculation"
In the theoretical analysis of algorithms, the normal practice is to estimate their complexity in the asymptotic sense, i.e. use Big O notation to represent the complexity of an algorithm as a function of the size of the input n. This is generally sufficiently accurate when n is large, but may be misleading for small values of n (e.g. bubble sort may be faster than quicksort when only a few items are to be sorted).
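To make the point concrete, here is a sketch (my own illustration, not from the original text) that counts comparisons for bubble sort, O(n²), and merge sort, O(n log n). Counting operations rather than wall-clock time keeps the result deterministic.

```python
def bubble_sort(items):
    """Return a sorted copy and the number of comparisons made."""
    a = list(items)
    comparisons = 0
    for i in range(len(a) - 1, 0, -1):
        for j in range(i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons

def merge_sort(items):
    """Return a sorted copy and the number of comparisons made."""
    if len(items) <= 1:
        return list(items), 0
    mid = len(items) // 2
    left, cl = merge_sort(items[:mid])
    right, cr = merge_sort(items[mid:])
    merged, comparisons = [], cl + cr
    i = j = 0
    while i < len(left) and j < len(right):
        comparisons += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, comparisons

data = [5, 3, 8, 1, 9, 2, 7, 4, 6, 0]
print(bubble_sort(data))   # bubble sort always makes n(n-1)/2 = 45 comparisons here
print(merge_sort(data))
```

For ten elements the counts are close, which is why the asymptotic verdict can mislead for small n; rerun with a few thousand elements and the gap becomes dramatic.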
An O function has an order. In this context, the order quantifies how rapidly the running time grows with input size n, and the shape of that growth. Useful link: http://web.mit.edu/16.070/www/lecture/big_o.pdf
O stands for 'Omega' and Omega refers to the last or final element in a sequence
The rate of growth of a function is also called its order
O stands for 'Order' and all Big O algorithms involve sorting
O stands for 'Operand' as the notation refers to algorithms applied to numerical operands only
A =linear; B = constant; C = quadratic
A = constant; B = linear; C = quadratic
A = quadratic; B = linear; C = constant
A = constant; B = quadratic; C = linear
Note that the Big O expressions do not have constants or low-order terms. This is because, ______________________ ________________________________________________________ ......(a constant-time algorithm will be faster than a linear-time algorithm, which will be faster than a quadratic-time algorithm).
when N gets small enough, constants and low-order terms start to aggregate
when N gets small enough, constants and low-order terms start to grow exponentially
when N gets large enough, constants and low-order terms start to be significant and matter
when N gets large enough, constants and low-order terms don't matter
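A small sketch (my own illustration) of why constants and low-order terms are dropped: for f(N) = N² + 100N + 1000, the share of the total contributed by the non-quadratic terms shrinks towards zero as N grows.

```python
def f(n):
    # quadratic term plus a large constant factor on the linear term
    return n**2 + 100 * n + 1000

for n in (10, 1000, 100000):
    low_order = 100 * n + 1000          # everything except the N^2 term
    share = low_order / f(n)            # fraction of total work it accounts for
    print(n, round(share, 4))
```

At N = 10 the low-order terms dominate, but by N = 100000 they account for about 0.1% of the total, which is why only the N² term survives in the Big O expression.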
A = quadratic; B = constant; C = O(2)
A = linear; B = linear; C = O(n)
A = constant; B = constant; C = O(1)
A = constant; B = constant; C = O(n)
How efficient is an algorithm or piece of code? Efficiency covers lots of resources, including: • ________________________ • memory usage • disk usage • network usage. All are important but we will mostly talk about time complexity (______________)
human error
code length
CPU (time) usage
algorithm's author
Complexity affects performance and performance will also affect complexity
Complexity does not affect performance but performance will always affect complexity
Complexity affects performance but not the other way around.
Complexity does not affect performance and performance does not affect complexity. They are mutually exclusive
one test (e.g., x = 0)
one assignment (e.g. x := 0)
All the listed options are valid examples of a basic operation
one arithmetic operation (e.g., +, *).
Some functions/procedures perform the same number of operations every time they are called. For example, StackSize in the Stack implementation always returns the number of elements currently in the stack (or reports that the stack is empty). In that case we say that StackSize takes ____________________
constant time
polynomial time
linear time
O(n)+1 time
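A minimal sketch of why a size query can run in constant time (the class and method names here are my own, not necessarily those of the original Stack implementation): the size is kept in a counter that push and pop maintain, so reporting it is a single variable read regardless of how many elements the stack holds.

```python
class Stack:
    def __init__(self):
        self._items = []
        self._count = 0      # maintained incrementally by push/pop

    def push(self, item):
        self._items.append(item)
        self._count += 1

    def pop(self):
        self._count -= 1
        return self._items.pop()

    def size(self):          # O(1): one read, whether the stack holds 0 or 1000 items
        return self._count

s = Stack()
for x in range(1000):
    s.push(x)
print(s.size())  # 1000
```

The alternative, walking the underlying storage and counting elements on every call, would take linear rather than constant time.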
In the BubbleSort algorithm, the number of elements in the array determines the number of operations performed by the algorithm. This parameter (the number of elements) is called the ___________________________
array or algorithm size
problem size / input size.
computational complexity
queue size
When we are trying to find the complexity of a function/procedure/algorithm/program, we are not interested in the exact number of operations being performed. Instead, we are interested in how the number of operations relates to the problem size
TRUE
FALSE
best case / maximum
worst case / maximum
best case / minimum
worst case / minimum
For example, when inserting an element into an array, we have to move the current element and all of the elements that come after it one place to the right in the array
ALL of the elements in the array must be moved
just ONE of the elements in the array must be moved
NONE of the elements in the array must be moved
just the LAST element in the array must be moved
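The cases above can be sketched as follows (my own illustration): inserting at position i shifts every element from i onward one place to the right, so the worst case (inserting at the front) moves all n elements and the best case (appending at the end) moves none.

```python
def insert_with_count(arr, index, value):
    """Insert value at index in-place, returning the number of elements moved."""
    arr.append(None)                 # grow the array by one slot
    moved = 0
    for j in range(len(arr) - 1, index, -1):
        arr[j] = arr[j - 1]          # shift one place to the right
        moved += 1
    arr[index] = value
    return moved

a = [10, 20, 30, 40]
print(insert_with_count(a, 0, 5))   # worst case: 4 moves (every element shifts)
print(a)                            # [5, 10, 20, 30, 40]
b = [10, 20, 30, 40]
print(insert_with_count(b, 4, 50))  # best case: 0 moves (append at the end)
```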
FALSE
TRUE
O(2 to the power of N)
O(N to the power of 2)
O(N)
O(1)
O(N to the power of 2)
O(N)
O(1)
O(2 to the power of N)
O(2 to the power of N)
O(N)
O(1)
O(N to the power of 2)
O(N to the power of 2)
O(1)
O(2 to the power of N)
O(N)
bubble search
logo search (using log tables)
linear search
binary search
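A sketch (my own illustration) contrasting linear search, O(N), with binary search, O(log N), on a sorted list; the counters report how many elements each algorithm examines before finding the target.

```python
def linear_search(items, target):
    """Return (index, number of elements examined), or (-1, n) if absent."""
    checks = 0
    for i, x in enumerate(items):
        checks += 1
        if x == target:
            return i, checks
    return -1, checks

def binary_search(items, target):
    """Return (index, number of elements examined); items must be sorted."""
    lo, hi, checks = 0, len(items) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        checks += 1
        if items[mid] == target:
            return mid, checks
        if items[mid] < target:
            lo = mid + 1             # discard the lower half
        else:
            hi = mid - 1             # discard the upper half
    return -1, checks

data = list(range(1024))             # sorted input
print(linear_search(data, 1000))     # examines about 1000 elements
print(binary_search(data, 1000))     # at most 11 for 1024 elements
```

Each probe of binary search halves the remaining range, which is why doubling the list length adds only one more probe.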
O(2 to the power of N) – factorial
O(1) – constant
O(N) – linear
O(N to the power of 2) – quadratic
constant
linear
exponential
quadratic
quadratic
constant
linear
exponential
for I in 1 .. N loop
   sequence of statements
end loop;
N * N which is N(O) overall
N * O(N), which is O(1) overall
N * O(1), which is O(N) overall.
N * 1 which is O(1) overall
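The single-loop rule above can be sketched as follows (my own illustration): a loop that runs N times over a constant-time body performs N * O(1) = O(N) work. The counter confirms the body executes exactly N times.

```python
def single_loop(n):
    operations = 0
    for _ in range(n):       # for I in 1 .. N loop
        operations += 1      # stands in for the O(1) "sequence of statements"
    return operations

print(single_loop(10))   # 10
print(single_loop(500))  # 500: the work grows linearly with N
```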
Nested Loops
=============
for I in 1 .. N loop
   for J in 1 .. M loop
      sequence of statements
   end loop;
end loop;
O(N) * M times. Thus, the complexity is O(N1 x M2)
M*O times. Thus, the complexity is O(N x 2)
N * M times. Thus, the complexity is O(N * M).
N * N times. Thus, the complexity is O(N * N)
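The nested-loop rule can be sketched the same way (my own illustration): the inner O(1) body runs once per (I, J) pair, i.e. N * M times, so the overall complexity is O(N * M).

```python
def nested_loops(n, m):
    operations = 0
    for _ in range(n):           # for I in 1 .. N loop
        for _ in range(m):       #   for J in 1 .. M loop
            operations += 1      #     O(1) "sequence of statements"
    return operations

print(nested_loops(4, 6))    # 24 = 4 * 6
print(nested_loops(10, 10))  # 100: when M = N this becomes O(N^2)
```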
FALSE
TRUE
Notation | Name | Examples
O(1) | constant | Determining if a number is even or odd; using a constant-size lookup table; using a suitable hash function for looking up an item.
O(log n) | logarithmic | Finding an item in a sorted array with a binary search or a balanced search tree, as well as all operations in a Binomial heap.
O(n) | linear | Finding an item in an unsorted list or a malformed tree (worst case) or in an unsorted array; adding two n-bit integers by ripple carry.
O(n log n) | linearithmic, log-linear, or quasilinear | Performing a Fast Fourier transform; heapsort, quicksort (best and average case), or merge sort.