Slide 3: How fast is the seamcarve algorithm? (Tuesday, January 28, 2014)

What does it mean for an algorithm to be fast?
- Low memory usage?
- A small amount of time measured on a stopwatch?
- Low power consumption?
We'll revisit this question after developing the fundamentals of algorithm analysis.

Slide 4: Running Time

- The running time of an algorithm varies with the input and typically grows with the input size.
- The average case is often difficult to determine.
- In most of computer science we focus on the worst-case running time:
  - It is easier to analyze.
  - It is crucial to many applications: what would happen if an autopilot algorithm ran drastically slower on some unforeseen, untested input?

Slide 5: How to measure running time?

Experimentally:
- Write a program implementing the algorithm.
- Run the program with inputs of varying size.
- Measure the actual running times and plot the results.
Why not?
- You have to implement the algorithm, which isn't always feasible!
- Your inputs may not exercise all of the algorithm's behavior.
- The running time depends on the particular computer's hardware and software.

Slide 6: Theoretical Analysis

- Uses a high-level description of the algorithm instead of an implementation.
- Takes into account all possible inputs.
- Allows us to evaluate the speed of an algorithm independent of the hardware or software environment.
- By inspecting pseudocode, we can determine the number of statements executed by an algorithm as a function of the input size.

Slide 7: Elementary Operations

Algorithmic "time" is measured in elementary operations:
- Math (+, -, *, /, max, min, log, sin, cos, abs, ...)
- Comparisons (==, >, <=, ...)
- Function calls and value returns
- Variable assignment
- Variable increment or decrement
- Array allocation
- Creating a new object (careful: the object's constructor may perform elementary operations too!)
In practice, these operations take different amounts of time. For the purpose of algorithm analysis, we assume each of them takes the same time: "1 operation."

Slide 8: Example: Constant Running Time

    function first(array):
        // Input: an array
        // Output: the first element
        return array[0]  // index 0 and return, 2 ops

How many operations are performed in this function if the list has ten elements? If it has 100,000 elements?
- Always 2 operations are performed.
- The count does not depend on the input size.
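As a runnable sketch, the pseudocode above translates directly to Python (the function name matches the slide; the op count in the comment mirrors the slide's tally):

```python
def first(array):
    """Return the first element of a non-empty list."""
    return array[0]  # index 0 and return: 2 ops, regardless of len(array)
```

The call does the same two operations whether the list holds ten elements or 100,000.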

Slide 9: Example: Linear Running Time

    function argmax(array):
        // Input: an array
        // Output: the index of the maximum value
        index = 0                        // assignment, 1 op
        for i in [1, array.length):      // 1 op per loop
            if array[i] > array[index]:  // 3 ops per loop
                index = i                // 1 op per loop, sometimes
        return index                     // 1 op

How many operations if the list has ten elements? 100,000 elements?
- The count varies in proportion to the size of the input list: 5n + 2.
- We'll be in the for loop longer and longer as the input list grows.
- If we were to plot it, the runtime would increase linearly.
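A runnable Python version of the same pseudocode, with the slide's per-line op counts carried over as comments:

```python
def argmax(array):
    """Return the index of the maximum value; roughly 5n + 2 elementary ops."""
    index = 0                         # assignment, 1 op
    for i in range(1, len(array)):    # ~1 op per iteration
        if array[i] > array[index]:   # ~3 ops (two indexings + comparison)
            index = i                 # 1 op, only when a new max is found
    return index                      # 1 op
```

Note that ties go to the earliest maximum, since later equal values fail the strict `>` comparison.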

Slide 10: Example: Quadratic Running Time

    function possible_products(array):
        // Input: an array
        // Output: a list of all possible products
        //         between any two elements in the list
        products = []                                 // make an empty list, 1 op
        for i in [0, array.length):                   // 1 op per loop
            for j in [0, array.length):               // 1 op per loop per loop
                products.append(array[i] * array[j])  // 4 ops per loop per loop
        return products                               // 1 op

- Requires about 5n^2 + n + 2 operations (it's okay to approximate!).
- If we were to plot this, the number of operations executed grows quadratically!
- Consider adding one element to the list: the added element must be multiplied with every other element in the list.
- Notice that the linear algorithm on the previous slide had only one for loop, while this quadratic one has two for loops, nested. What would be the highest-degree term (in number of operations) if there were three nested loops?
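The nested loops translate to Python as below; the output length of n^2 is what makes the quadratic growth visible:

```python
def possible_products(array):
    """Return all n*n pairwise products; roughly 5n^2 + n + 2 elementary ops."""
    products = []                                  # make an empty list, 1 op
    for i in range(len(array)):                    # 1 op per outer iteration
        for j in range(len(array)):                # 1 op per inner iteration
            products.append(array[i] * array[j])   # ~4 ops per inner iteration
    return products                                # 1 op
```

For an input of length n the result always has exactly n * n entries, so adding one element adds a whole new row and column of products.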

Slide 11: Summarizing Function Growth

For very large inputs, the growth rate of a function becomes less affected by:
- constant factors, or
- lower-order terms
Examples:
- 10^5 n and n both grow with the same slope despite their differing constant factors.
- 10n + 10^5 and n both grow with the same slope as well, despite the lower-order term.
When studying growth rates, we only care about what happens for very large inputs (as n approaches infinity).
[Figure: plot of T(n) versus n for 10^5 n, n^2, 10n + 10^5, and n, with log scales on both axes; the slope of a line corresponds to the growth rate of its respective function.]
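A quick numeric check of the lower-order-term claim: the ratio of 10n + 10^5 to n settles toward the constant 10 as n grows, which is exactly why the two curves share a slope on a log-log plot.

```python
# As n grows, the lower-order term 10^5 stops mattering:
# (10n + 10^5) / n  ->  10, a constant factor, hence the same log-log slope.
for n in [10**2, 10**4, 10**6, 10**8]:
    print(n, (10 * n + 10**5) / n)
```

At n = 100 the ratio is 1010 (the constant term dominates); by n = 10^8 it is 10.001.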

Slide 13: Big-O Notation (continued)

Example: n^2 is not O(n). Suppose it were; then for some constant c:
    n^2 <= cn
    n <= c
This inequality cannot hold for all large n, because c must be a constant: for any n > c, the inequality is false.

Slide 14: Big-O and Growth Rate

- Big-O notation gives an upper bound on the growth rate of a function.
- We say "an algorithm is O(g(n))" if the growth rate of the algorithm is no more than the growth rate of g(n).
- We saw on the previous slide that n^2 is not O(n).
- But n is O(n^2), and n^2 is O(n^3). Why? Because Big-O is an upper bound!
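To make the upper-bound reading concrete, here is a small checker (the function name and the sampling cutoff are my own) that tests a claimed witness pair (c, n0) against the definition f(n) <= c·g(n) for n >= n0. Sampling a finite range is numerical evidence, not a proof, but it illustrates both directions from the slide:

```python
def bounded_above(f, g, c, n0, n_max=10**4):
    """Check f(n) <= c * g(n) for every sampled n in [n0, n_max)."""
    return all(f(n) <= c * g(n) for n in range(n0, n_max))

# n is O(n^2): the witnesses c = 1, n0 = 1 work...
n_is_big_o_of_n_squared = bounded_above(lambda n: n, lambda n: n * n, c=1, n0=1)

# ...but n^2 is not O(n): any fixed c fails once n exceeds c (here c = 100 fails at n = 101).
n_squared_is_big_o_of_n = bounded_above(lambda n: n * n, lambda n: n, c=100, n0=1)
```

The failing case is exactly the argument from the previous slide: n^2 <= cn forces n <= c, which no constant c can satisfy for all n.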

Slide 15: Summary of Big-O Rules

- If f(n) is a polynomial of degree d, then f(n) is O(n^d). In other words:
  - forget about lower-order terms
  - forget about constant factors
- Use the smallest possible degree:
  - It's true that 2n is O(n^50), but that's not a helpful upper bound.
  - Instead, say it's O(n), discarding the constant factor and using the smallest possible degree.

Slide 17: Big-O in Algorithm Analysis

- It is easy to express T(n) in big-O notation: drop the constants and lower-order terms.
- In big-O notation:
  - first is O(1)
  - argmax is O(n)
  - possible_products is O(n^2)
- The convention for representing T(n) = c (a constant) in big-O is O(1).

Slide 18: Big-Omega (Ω)

- Recall that f(n) is O(g(n)) if f(n) <= c·g(n) for some constant c as n grows large.
- Big-O expresses the idea that f(n) grows no faster than g(n): g(n) acts as an upper bound on f(n)'s growth rate.
- What if we want to express a lower bound?
- Big-Omega: we say f(n) is Ω(g(n)) if f(n) >= c·g(n) for some constant c > 0.
  - f(n) grows no slower than g(n).

Slide 19: Big-Theta (Θ)

- What about an upper and a lower bound at once?
- Big-Theta: we say f(n) is Θ(g(n)) if f(n) is O(g(n)) and Ω(g(n)).
  - f(n) grows at the same rate as g(n) (a tight bound).

Slide 21: Back to Seamcarve

- How many distinct seams are there for a w × h image?
- At each row, a particular seam can go down-left, straight down, or down-right: three options.
- Since a given seam chooses one of these three options at each row (and there are h rows), from the same starting pixel there are about 3^h possible seams!
- Since there are w possible starting pixels, the total number of seams is about w × 3^h.
- For a square image with n total pixels (so w = h = √n), that means there are about √n × 3^√n possible seams.
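A quick sketch of how fast this count blows up (the function name is mine; like the slide, it ignores image edges, where fewer than three moves are legal):

```python
def seam_count(w, h):
    """Approximate number of distinct seams: w starting pixels, 3 choices per row."""
    return w * 3**h

# Even a tiny 10x10 image already has on the order of half a million seams.
print(seam_count(10, 10))  # 590490
```

Doubling the height multiplies the count by 3^h again, which is why any algorithm that examines every seam is hopeless for real images.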

Slide 22: Seamcarve

- An algorithm that considers every possible solution is known as an exhaustive algorithm.
- One solution to the seamcarve problem would be to consider all possible seams and choose the one with minimum cost.
- What would be the big-O running time of that algorithm in terms of n input pixels? With about √n × 3^√n seams to examine, it is exponential, and not good.

Slide 23: Seamcarve

What's the runtime of the solution we went over last class? (Remember: constants don't affect big-O runtime.)
The algorithm:
- Iterate over every pixel from bottom to top to populate the costs and dirs arrays.
- Create a seam by choosing the minimum value in the top row and tracing downward.
How many times do we evaluate each pixel? A constant number of times.
Therefore the algorithm is linear, or O(n), where n is the number of pixels.
Hint: we also could have looked back at the pseudocode and counted the number of nested loops!
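As a runnable sketch of the bottom-up pass described above (the function name is mine, and I assume the input is a rectangular grid of per-pixel importance values; the real assignment's pseudocode may differ in details):

```python
def min_seam(importance):
    """Return the column index of a minimum-cost seam at each row, top to bottom.

    costs[r][c] = importance of pixel (r, c) plus the cheapest continuation
    below it; dirs[r][c] remembers which column that continuation uses.
    Each pixel is examined a constant number of times, so this is O(n)
    for n total pixels.
    """
    h, w = len(importance), len(importance[0])
    costs = [row[:] for row in importance]   # bottom row keeps its raw values
    dirs = [[0] * w for _ in range(h)]
    for r in range(h - 2, -1, -1):           # second-to-bottom row up to the top
        for c in range(w):
            choices = [c2 for c2 in (c - 1, c, c + 1) if 0 <= c2 < w]
            best = min(choices, key=lambda c2: costs[r + 1][c2])
            costs[r][c] += costs[r + 1][best]
            dirs[r][c] = best
    # Choose the minimum value in the top row and trace downward.
    c = min(range(w), key=lambda c0: costs[0][c0])
    seam = [c]
    for r in range(h - 1):
        c = dirs[r][c]
        seam.append(c)
    return seam
```

On a grid whose cheap pixels form a diagonal, the returned seam follows that diagonal, e.g. `min_seam([[1, 9, 9], [9, 1, 9], [9, 9, 1]])` gives `[0, 1, 2]`.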

Slide 24: Seamcarve: Dynamic Programming

- How did we go from an exponential algorithm to a linear algorithm? By avoiding recomputing information we already calculated!
- Many seams cross paths, and we don't need to recompute the sum of importances for a pixel if we've already calculated it before. That's the purpose of the additional costs array.
- This strategy of storing computed information to avoid recomputing it later is what makes the seamcarve algorithm an example of dynamic programming.

Slide 26: Fibonacci: Recursive

- In order to calculate fib(4), how many times does fib() get called?
- The call tree: fib(4) calls fib(3) and fib(2); fib(3) calls fib(2) and fib(1); each fib(2) calls fib(1) and fib(0).
- fib(1) alone gets recomputed 3 times!
- At each level of recursion, the algorithm makes twice as many recursive calls as the last. So for fib(n), the number of recursive calls is approximately 2^n, making the algorithm O(2^n)!
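A small instrumented version makes the redundancy visible (the `counts` dictionary is my own addition for tallying calls; it is not part of the algorithm):

```python
def fib(n, counts):
    """Naive recursive Fibonacci, tallying how often each argument is computed."""
    counts[n] = counts.get(n, 0) + 1
    if n <= 1:
        return n
    return fib(n - 1, counts) + fib(n - 2, counts)

counts = {}
fib(4, counts)
print(counts)  # {4: 1, 3: 1, 2: 2, 1: 3, 0: 2} -- fib(1) really is computed 3 times
```

Nine calls in total just for fib(4); the call count roughly doubles with each increment of n.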

Slide 27: Fibonacci: Dynamic Programming

Instead of recomputing the same Fibonacci numbers over and over, we'll compute each one only once, and store it for future reference. Like most dynamic programming algorithms, we'll need a table of some sort to keep track of intermediate values.

    function dynamicFib(n):
        fibs = []  // make an array of size n+1, so fibs[n] exists
        fibs[0] = 0
        fibs[1] = 1
        for i from 2 to n:
            fibs[i] = fibs[i-1] + fibs[i-2]
        return fibs[n]
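The same table-filling idea in runnable Python (the snake_case name is mine; the early return for n < 2 handles the inputs the pseudocode's table initialization assumes away):

```python
def dynamic_fib(n):
    """Compute fib(n) bottom-up, storing each value exactly once: O(n) time."""
    if n < 2:
        return n
    fibs = [0] * (n + 1)       # table of size n+1, so fibs[n] exists
    fibs[1] = 1
    for i in range(2, n + 1):  # each entry is one addition: constant work
        fibs[i] = fibs[i - 1] + fibs[i - 2]
    return fibs[n]
```

Each Fibonacci number is read from the table instead of being recomputed, which is what collapses the O(2^n) call tree to a single O(n) loop.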

Slide 28: Fibonacci: Dynamic Programming (2)

- What's the runtime of dynamicFib()?
- Since it performs only a constant number of operations to calculate each Fibonacci number from 0 to n, the runtime is clearly O(n).
- Once again, we have reduced the runtime of an algorithm from exponential to linear using dynamic programming!

Slide 29: Readings

- Dasgupta Section 0.2, pp. 12-15
  - Goes through this Fibonacci example (although without mentioning dynamic programming).
  - This section is easily readable now.
- Dasgupta Section 0.3, pp. 15-17
  - Describes big-O notation far better than I can.
  - If you read only one thing in Dasgupta, read these 3 pages!
- Dasgupta Chapter 6
  - Goes into detail about dynamic programming, which it calls one of the "sledgehammers of the trade" – i.e., powerful and generalizable.
  - This chapter builds significantly on earlier ones and will be challenging to read now, but we'll see much of it this semester.