Writing Efficient Programs
An engineer will do for a dime what any fool will do for a dollar.
– Paul Hilfinger
 Efficiency comes in two flavors: programming cost and execution cost. Today we’ll deal with the latter:
 How much time does your program take to execute?
 How much memory does your program require?
Example Problem 1
Objective: Given a sorted array A, determine if it contains any duplicates.
 A silly algorithm is to consider every possible pair and return true if there’s any match.
 A better algorithm is: for each number A[i], compare it with A[i + 1]. Return true if we see a match, false otherwise.
Our Goal Today: Introduce a formal technique to compare algorithmic efficiency.
Intuitive Runtime Characterizations
Runtime Characterizations
Characterizations should:
 be simple and mathematically rigorous
 demonstrate superiority of one algorithm over another
From example problem 1, how can we characterize runtime to show that dup2 is better than dup1?
public static boolean dup1(int[] A) {
    for (int i = 0; i < A.length; i += 1) {
        for (int j = i + 1; j < A.length; j += 1) {
            if (A[i] == A[j]) {
                return true;
            }
        }
    }
    return false;
}
public static boolean dup2(int[] A) {
    for (int i = 0; i < A.length - 1; i += 1) {
        if (A[i] == A[i + 1]) {
            return true;
        }
    }
    return false;
}
Techniques for Measuring Computational Cost
 Technique 1: Measure execution time
 Unix has a built-in time command to measure execution time, e.g. time java Dup1 20000
 Princeton StdLib has a Stopwatch class
 Use a physical stopwatch
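One way to realize Technique 1 in code is a small timing helper built on System.nanoTime (a sketch; the Timer class and method names are our own, with dup1 copied from above):

```java
// Sketch of Technique 1: wall-clock timing with System.nanoTime.
public class Timer {
    /** Runs r once and returns the elapsed wall-clock time in seconds. */
    public static double time(Runnable r) {
        long start = System.nanoTime();
        r.run();
        return (System.nanoTime() - start) / 1e9;
    }

    // dup1, as defined earlier in these notes.
    public static boolean dup1(int[] A) {
        for (int i = 0; i < A.length; i += 1) {
            for (int j = i + 1; j < A.length; j += 1) {
                if (A[i] == A[j]) {
                    return true;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        int[] a = new int[20000];
        for (int i = 0; i < a.length; i += 1) {
            a[i] = i;                 // sorted, duplicate-free: dup1's worst case
        }
        System.out.println(time(() -> dup1(a)) + " seconds");
    }
}
```

As the cons below note, the number you get depends on the machine and the input, which is exactly why we move on to counting operations.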
 Technique 2A: Count possible operations for an array of size N = 10,000 (what we’ll use often). Consider worst case and best case scenarios.
 Pros: Machine independent. Input dependence is captured in model.
 Cons: Tedious to compute. Array size is arbitrary, so doesn’t tell actual time (nope, really can’t tell actual time).
 Technique 2B: Count possible operations in terms of the input array size N
 Pros: Machine independent. Input dependence is captured in model. Tells you how the algorithm scales.
 Cons: Even more tedious to compute. Doesn’t tell actual time.
Counting Operations
For dup1:

| Operation | Count for N = 10,000 |
| --- | --- |
| i = 0 | 1 |
| j = i + 1 | 1 ~ 10,000 |
| less than (<) | 2 ~ 50,015,001 |
| increment (+= 1) | 0 ~ 50,005,000 |

(The worst-case counts come from sums like \(\sum_{i=0}^{N-1}(N - i) = \frac{N(N+1)}{2}\) inner-loop checks.)
For dup2:

| Operation | Systematic Count | Count for N = 10,000 |
| --- | --- | --- |
| i = 0 | 1 | 1 |
| less than (<) | 0 ~ N | 0 ~ 10,000 |
| increment (+= 1) | 0 ~ N - 1 | 0 ~ 9,999 |
| equals (==) | 1 ~ N - 1 | 1 ~ 9,999 |
| array accesses | 2 ~ 2N - 2 | 2 ~ 19,998 |
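We can sanity-check the array-access counts by instrumenting dup2 with a counter (a sketch; the counter field and class name are our own):

```java
// Instrumented dup2: counts array accesses to check the 2 ~ 2N - 2 range.
public class Dup2Count {
    static long accesses;

    public static boolean dup2(int[] A) {
        accesses = 0;
        for (int i = 0; i < A.length - 1; i += 1) {
            accesses += 2;               // reads A[i] and A[i + 1]
            if (A[i] == A[i + 1]) {
                return true;
            }
        }
        return false;
    }
}
```

On a duplicate-free array of 10,000 elements this reports 19,998 accesses (the worst case); on an array whose first two elements match, it reports 2 (the best case).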
Comparing Algorithms
A better algorithm scales better in the worst case.
So, suppose we have two algorithms that zerpify a collection of N items: zerp1 takes 2N^2 operations while zerp2 takes 500N operations. While zerp1 may be faster for smaller inputs, it doesn’t scale well, so zerp2 is the better algorithm.
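We can find exactly where the hypothetical zerp2 overtakes zerp1 by comparing the two operation counts (zerp1 and zerp2 are the made-up algorithms from the text):

```java
// Comparing the zerpify operation counts: 2N^2 vs. 500N.
public class Crossover {
    public static long zerp1Ops(long n) { return 2 * n * n; }
    public static long zerp2Ops(long n) { return 500 * n; }

    /** Smallest N at which zerp2 does strictly fewer operations than zerp1. */
    public static long crossover() {
        long n = 1;
        while (zerp2Ops(n) >= zerp1Ops(n)) {
            n += 1;
        }
        return n;
    }
}
```

Solving 500N = 2N^2 gives N = 250, so from N = 251 onward zerp2 wins, and its lead only grows: the asymptotics dominate the constants.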
Asymptotic Behavior
In most cases, we only care about asymptotic behavior, i.e. what happens for very large N. How is this important in real life? Think about Bitcoin, NASA, etc.
Worst Case Order of Growth
Duplicate Finding
Our goal is to characterize the runtime of dup1 and dup2. What we’ve been doing so far indeed demonstrates the superiority of dup2, but is it simple? Not at all.
Simplifications
 Only consider the worst case.
 Pick one representative operation as a proxy for overall runtime.
 Ignore lower order terms.
 Ignore multiplicative constants.
With these simplifications, for dup2, we can pick “array accesses” as our cost model and derive that the order of growth is just N.
Picking the representative operation (a.k.a the cost model)
 Good choice: increment
 Bad choice: i = 0 (it executes only once, regardless of N)
 We call our choice the “cost model”
Big Theta
Formal Definition:
\(R(N) \in \Theta(f(N))\) if there exist positive constants \(k_1, k_2\) such that \(k_1 \cdot f(N) \leq R(N) \leq k_2 \cdot f(N)\) for all sufficiently large \(N\)
Formalizing Order of Growth
\(N^3 + 3N^4 \in \Theta(N^4)\)
\(Ne^N + N \in \Theta(Ne^N)\)
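The first of these claims can be checked directly against the definition by exhibiting the constants:

```latex
% Verifying N^3 + 3N^4 \in \Theta(N^4): take k_1 = 3 and k_2 = 4.
% For all N \geq 1 we have N^3 \leq N^4, so
3N^4 \;\leq\; N^3 + 3N^4 \;\leq\; N^4 + 3N^4 \;=\; 4N^4,
% i.e. k_1 \cdot f(N) \leq R(N) \leq k_2 \cdot f(N) with f(N) = N^4.
```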
Now that we’re familiar with Big Theta, we’ll revisit dup1 and see that it runs in \(\Theta(N^2)\) time.
Tricky For Loop Example
public static void printParty(int n) {
    for (int i = 1; i < n; i *= 2) {   // i must start at 1: 0 *= 2 never grows
        for (int j = 0; j < i; j++) {  // inner loop runs i times, not n
            System.out.println("Party time!");
        }
    }
}
This actually has \(\Theta(N)\) not \(\Theta(N \log N)\) runtime.
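A counting version makes the \(\Theta(N)\) behavior concrete (a sketch of our own; it assumes the inner loop runs i times, which is what the \(\Theta(N)\) claim requires):

```java
// Counting the prints in printParty: 1 + 2 + 4 + ... + n/2 = n - 1
// when n is a power of two, which is Theta(N), not Theta(N log N).
public class PartyCount {
    public static long countPrints(int n) {
        long count = 0;
        for (int i = 1; i < n; i *= 2) {
            for (int j = 0; j < i; j++) {
                count += 1;            // stands in for the println
            }
        }
        return count;
    }
}
```

For n = 64 this yields 1 + 2 + 4 + 8 + 16 + 32 = 63 prints, matching the first sum below.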
Sums that’re Good to Know
 \(1 + 2 + 3 + 4 + ... + N = \frac{N(N + 1)}{2} \in \Theta(N^2)\)
 \(1 + 2 + 4 + 8 + ... + N = 2N - 1 \in \Theta(N)\)
 \(1 + 2 + 4 + 8 + ... + 2^{N - 1} = 2 \times (2^{N - 1}) - 1 = 2^{N} - 1 \in \Theta(2^N)\)
Recursion Example
public static int f3(int n) {
    if (n <= 1) {
        return 1;
    }
    return f3(n - 1) + f3(n - 1);
}
Answer: \(\Theta(2^N)\)
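Counting the calls confirms this: the recurrence is \(C(n) = 2C(n - 1) + 1\) with \(C(1) = 1\), which solves to \(C(n) = 2^n - 1\). A counting sketch (the class and counter names are our own):

```java
// Counting calls to f3: C(n) = 2 * C(n - 1) + 1, C(1) = 1, so C(n) = 2^n - 1.
public class F3Count {
    static long calls;

    public static int f3(int n) {
        calls += 1;
        if (n <= 1) {
            return 1;
        }
        return f3(n - 1) + f3(n - 1);
    }

    /** Resets the counter, runs f3(n), and returns the number of calls made. */
    public static long countCalls(int n) {
        calls = 0;
        f3(n);
        return calls;
    }
}
```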
Binary Search
Interesting fact: a bug in Java’s implementation of binary search was found in 2006 (the midpoint computation could overflow).
static int binarySearch(String[] sorted, String x, int lo, int hi) {
    if (lo > hi) return -1;
    int m = lo + (hi - lo) / 2;   // (lo + hi) / 2 can overflow: the 2006 bug
    int cmp = x.compareTo(sorted[m]);
    if (cmp < 0) return binarySearch(sorted, x, lo, m - 1);
    else if (cmp > 0) return binarySearch(sorted, x, m + 1, hi);
    else return m;
}
Runtime: \(\Theta(\log_2 N)\)
For exact solution, see this video.
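To see the logarithmic behavior empirically, we can count the recursive calls (a sketch; the class, counter, and helper names are our own, and the midpoint is written in the overflow-safe form):

```java
// Instrumented binary search: the number of recursive calls is roughly
// log2(N), since each call halves the remaining range.
public class BSearchCount {
    static int calls;

    static int binarySearch(String[] sorted, String x, int lo, int hi) {
        calls += 1;
        if (lo > hi) return -1;
        int m = lo + (hi - lo) / 2;   // overflow-safe midpoint
        int cmp = x.compareTo(sorted[m]);
        if (cmp < 0) return binarySearch(sorted, x, lo, m - 1);
        else if (cmp > 0) return binarySearch(sorted, x, m + 1, hi);
        else return m;
    }

    /** Searches a sorted array of n strings "a0000".."a(n-1)" and returns the call count. */
    static int callsFor(int n, String x) {
        String[] sorted = new String[n];
        for (int i = 0; i < n; i += 1) {
            sorted[i] = String.format("a%04d", i);
        }
        calls = 0;
        binarySearch(sorted, x, 0, n - 1);
        return calls;
    }
}
```

For N = 1024, a hit on the first midpoint takes 1 call, and even a complete miss takes only about log2(1024) + 2 = 12 calls.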
Mergesort
We have talked about Selection Sort:
 Find the smallest unfixed item, move it to the front and “fix” it.
 Sort the remaining unfixed items using selection sort.
 Runtime: \(\Theta(N^2)\)
Merge Sort Basic:
 Selection sort the left half: \(\Theta(N^2)\)
 Selection sort the right half: \(\Theta(N^2)\)
 Merge the two arrays: \(\Theta(N)\)
Overall, it’s still \(\Theta(N^2)\). (\(N + 2 \cdot (\dfrac N 2)^2\) to be exact).
Merge Sort:
 If array is of size 1, return.
 Mergesort the left half.
 Mergesort the right half.
 Merge the results: \(\Theta(N)\)
Overall runtime: \(\Theta(N \log N)\)
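The three recursive steps above can be sketched as follows (a minimal illustration of our own, not an optimized implementation; real mergesorts usually merge in place with an auxiliary array):

```java
import java.util.Arrays;

// Minimal recursive merge sort: split, sort each half, merge in Theta(N).
public class MergeSortDemo {
    public static int[] mergesort(int[] a) {
        if (a.length <= 1) {                      // base case: size 0 or 1
            return a;
        }
        int[] left = mergesort(Arrays.copyOfRange(a, 0, a.length / 2));
        int[] right = mergesort(Arrays.copyOfRange(a, a.length / 2, a.length));
        return merge(left, right);                // Theta(N) merge step
    }

    /** Merges two sorted arrays into one sorted array. */
    private static int[] merge(int[] l, int[] r) {
        int[] out = new int[l.length + r.length];
        int i = 0, j = 0;
        for (int k = 0; k < out.length; k += 1) {
            if (j >= r.length || (i < l.length && l[i] <= r[j])) {
                out[k] = l[i++];                  // take from the left half
            } else {
                out[k] = r[j++];                  // take from the right half
            }
        }
        return out;
    }
}
```

There are log N levels of recursion and each level does Theta(N) total merge work, which is where the \(\Theta(N \log N)\) overall runtime comes from.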
Big O Notation
While we can think of Big Theta as “equals”, Big O is something like “less than or equal”.
Formal Definition
\(R(N) \in O(f(N))\) means there exists some positive constant \(k\) such that \(R(N) \leq k \cdot f(N)\).
Big O vs Big Theta
\(\Theta(f(N))\) means the order of growth is \(f(N)\).
\(O(f(N))\) means the order of growth is less than or equal to \(f(N)\).
Limitations of Big Theta
Consider this question:
public boolean dup4(int[] a) {
int N = a.length;
for (int i = 0; i < N; i += 1) {
for (int j = i + 1; j < N; j += 1) {
if (a[i] == a[j]) {
return true;
}
}
}
return false;
}
In this example, using Big Theta requires us to qualify what we’re talking about exactly:
 The best case runtime of dup4 is \(\Theta(1)\).
 The worst case runtime of dup4 is \(\Theta(N^2)\).
Big Theta is a strong statement, because it implies “equals”. This means that if the runtime depends on more factors than just N, we may need different Big Theta notation for every interesting condition.
In this case, we can just use Big O and avoid qualifying our statement, e.g. the runtime of dup4 is \(O(N^2)\).
A Subtle Distinction
Which of the following statements gives us more information?
 The worst case runtime is \(\Theta(N^2)\).
 The runtime is \(O(N^2)\).
It turns out, statement 1 gives us more information.
Usefulness of Big O
Although Big O doesn’t provide a tight bound, it is still useful:
 Allows us to make simple blanket statements. e.g. we can just say “binary search is O(log N)” instead of “binary search is Θ(log N) in the worst case”.
Big Omega \(\Omega\)
Big Omega is “greater than or equal”.
Formal Definition
\(R(N) \in \Omega(f(N))\) means there exists some positive constant \(k\) such that \(R(N) \geq k \cdot f(N)\) for all sufficiently large \(N\).
Usefulness of Big Omega
Two common uses of Big Omega:
 Very careful proofs of Big Theta runtime
 Providing lower bounds for the hardness of a problem
 e.g. The time to determine whether an array has duplicates is \(\Omega(N)\) in the worst case for any algorithm, since every entry must be examined.
Geometric Array Resizing
For ArrayLists, most insertions take constant time, but the occasional resizing insertions are very expensive.
Amortized Analysis (Rigorous)
 Pick a cost model
 Compute the amortized (i.e. averaged over a sequence of operations) cost of the \(i\)th insertion.
Cost Model: Array Accesses
Consider the cost of the 5th insert into an ArrayList whose underlying array (of size 4) is full.
We must copy the old array into a doubled one: 4 array reads + 4 array writes, plus 1 array write for the new value, gives 9 array accesses in total.
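A small simulation under this cost model shows that doubling keeps the average cheap (a sketch of our own; each plain insert costs 1 write, and a resize at size s adds s reads plus s writes):

```java
// Simulating geometric (doubling) resizing under the "array accesses" model.
public class ResizeCost {
    /** Total array accesses for n inserts into a doubling list starting at capacity 1. */
    public static long totalAccesses(int n) {
        long accesses = 0;
        int size = 0, capacity = 1;
        for (int i = 0; i < n; i += 1) {
            if (size == capacity) {
                accesses += 2L * size;   // copy: size reads + size writes
                capacity *= 2;
            }
            accesses += 1;               // write the new value
            size += 1;
        }
        return accesses;
    }
}
```

The 5th insert costs 9 accesses, exactly as computed above, and the total over N inserts stays below 5N, so the amortized cost per insert is a small constant even though individual resizing inserts cost \(\Theta(N)\).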
Potentials and Amortized Cost Bounds
For operation \(i\), choose an arbitrary “amortized cost” \(a_i\). This cost may be more or less than the “actual cost” \(c_i\) of that operation.
 Let \(\Phi_i\) be the potential at time \(i\). The potential represents the cumulative difference between the arbitrary amortized costs and the actual costs over time.
 \(\Phi_{i+1} = \Phi_i + (a_i - c_i)\)
 If we select the \(a_i\) such that the potential \(\Phi_i\) is never negative, then the total amortized cost is an upper bound on the total actual cost.
Side Note: Take CS 170 for full rigor.
Hey, kudos for making it this far! If you've liked this, you might also like Sorting.