CS 61A Lab 10

Iterators, Generators, Streams, and Tail-Calls

Iterators

Remember the for loop? (We really hope so.) The object that
the for loop iterates over is required to be an iterable!

for elem in iterable:
    # do something

for loops only work with iterables, and that means that the object you want to
use a for loop on must implement the iterable interface. In particular,
a for loop makes use of two methods: __iter__ and __next__. In other words,
an object that implements the iterable interface must implement an __iter__ method
that returns an object that implements the __next__
method.

This object that implements the __next__ method is
called an iterator. The iterator interface also requires that the object implement
both the __next__ and __iter__
methods, but it does not require the __iter__ method to
return a new object; an iterator's __iter__ may simply return the iterator itself.

Here is an example of a class definition for an object that implements the
iterator interface:
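The lab's original code block is not reproduced here; the following is a sketch consistent with the IteratorA used later in this section, an iterator that counts from 0 up to 5 and then stops:

```python
class IteratorA:
    """An object that is both an iterable and an iterator."""
    def __init__(self):
        self.current = 0

    def __iter__(self):
        # An iterator may simply return itself here.
        return self

    def __next__(self):
        if self.current > 5:
            raise StopIteration  # tells the for loop to stop
        self.current += 1
        return self.current - 1
```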

A class that implements __next__ but not __iter__ fails to implement the
iterable interface: without the __iter__
method, the for loop will error. The for loop needs to call
__iter__ first because some objects might not implement the __next__ method
themselves, but calling __iter__ will return an object that does.
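For instance, a class like the following (a hypothetical sketch; the name NoIter is ours) implements __next__ but not __iter__, so a for loop over it raises a TypeError:

```python
class NoIter:
    """Has __next__ but no __iter__: not an iterable."""
    def __init__(self):
        self.current = 0

    def __next__(self):
        self.current += 1
        return self.current

try:
    for item in NoIter():  # the for loop calls __iter__ first, and fails
        break
except TypeError as e:
    print("not iterable:", e)
```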

Iterators can also represent infinite sequences. Sequences like these are the reason iterators are useful.
Because iterators delay computation, we can use a finite amount of memory to
represent an infinitely long sequence.
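For example, here is a sketch (the name Naturals is ours) of an iterator over the infinite sequence 0, 1, 2, ...; itertools.islice lets us consume just a finite prefix of it:

```python
from itertools import islice

class Naturals:
    """Iterator over 0, 1, 2, ... -- __next__ never raises StopIteration."""
    def __init__(self):
        self.current = 0

    def __iter__(self):
        return self

    def __next__(self):
        result = self.current
        self.current += 1
        return result

print(list(islice(Naturals(), 5)))  # take only the first 5 elements
```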

For one of the above iterators that works, try this:

>>> i = IteratorA()
>>> for item in i:
...     print(item)

Then again:

>>> for item in i:
...     print(item)

Q1

With that in mind, try writing an iterator that "restarts" every time
it is run through a for loop.

>>> i = IteratorRestart(2, 7)
>>> for item in i:
...     print(item)
# should print 2 to 7
>>> for item in i:
...     print(item)
# should still print 2 to 7
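One possible solution sketch: have __iter__ reset the counter, so every new for loop starts over.

```python
class IteratorRestart:
    def __init__(self, start, end):
        self.start, self.end = start, end

    def __iter__(self):
        # Called at the start of every for loop: reset the counter here.
        self.current = self.start
        return self

    def __next__(self):
        if self.current > self.end:
            raise StopIteration
        self.current += 1
        return self.current - 1
```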

Q2

Write an iterator that takes a string as input:

>>> s = Str("hello")
>>> for char in s:
...     print(char)
...
h
e
l
l
o
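One possible solution sketch, tracking an index into the string (and resetting it in __iter__ so the iterator also restarts):

```python
class Str:
    def __init__(self, s):
        self.s = s

    def __iter__(self):
        self.index = 0  # reset so each for loop starts from the beginning
        return self

    def __next__(self):
        if self.index >= len(self.s):
            raise StopIteration
        self.index += 1
        return self.s[self.index - 1]
```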

Generators

A generator is a special type of iterator that can be written using a yield statement:
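The lab's example is not reproduced here; a sketch of both forms (the names are ours) might look like this. Calling a generator function returns a generator object, which has a __next__ method:

```python
def countup(n):
    """Standalone generator function: yields 0, 1, ..., n."""
    current = 0
    while current <= n:
        yield current
        current += 1

class Letters:
    def __init__(self, s):
        self.s = s

    def __iter__(self):
        # __iter__ is itself a generator function here, so calling it
        # returns a generator object that the for loop can use.
        for char in self.s:
            yield char
```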

The for loop only expects the object returned by __iter__ to have a __next__ method,
and the __iter__ method is a generator function in this case. Therefore, when __iter__ is called,
it returns a generator object, which you can call __next__ on.

Q3

Write a generator that counts down to 0.

Write it in both ways: using a generator function on its own, and
within the __iter__ method of a class.
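One possible solution sketch for both forms:

```python
def countdown(n):
    """Standalone generator function: yields n, n-1, ..., 0."""
    while n >= 0:
        yield n
        n -= 1

class Countdown:
    def __init__(self, start):
        self.start = start

    def __iter__(self):
        # Each for loop calls __iter__ and gets a fresh generator,
        # so a Countdown object restarts automatically.
        n = self.start
        while n >= 0:
            yield n
            n -= 1
```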

Q5

And now you know how for loops work! Or more importantly, you have
become a better computer scientist.

Streams

A stream is our third example of a lazy sequence. A stream is a lazily evaluated RList.
In other words, the stream's elements (except for the first element) are only evaluated when the values are needed.

We represent Streams using Python objects, similar to the way we defined RLists.
We nest streams inside one another, and compute one element of the sequence at a time.
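The lab's class definition is not shown above; here is a sketch of how such a Stream class might look, with a rest property that computes and memoizes on demand:

```python
class Stream:
    """A lazily evaluated linked list: only the first element exists
    up front; the rest is computed on demand."""
    empty = None  # marks the end of a finite stream

    def __init__(self, first, compute_rest=lambda: Stream.empty):
        self.first = first
        self._compute_rest = compute_rest  # a zero-argument function

    @property
    def rest(self):
        if self._compute_rest is not None:
            # Compute the rest exactly once, then cache (memoize) it.
            self._rest = self._compute_rest()
            self._compute_rest = None
        return self._rest
```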

Note that instead of specifying all of the elements in __init__,
we provide a function, compute_rest, that encapsulates the algorithm used
to calculate the remaining elements of the stream. Remember that the code
in the function body is not evaluated until the function is called, which lets us
implement the desired evaluation behavior.

This implementation of streams also uses memoization.
The first time a program asks a Stream for its rest field, the Stream
code computes the required value using compute_rest, saves the resulting value,
and then returns it. After that, every time the rest field is referenced,
the stored value is simply returned and it is not computed again.
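For example, an infinite stream of increasing integers might be built like this (a sketch; a minimal Stream class is repeated so the example runs on its own, and the name integer_stream is ours):

```python
class Stream:
    """Minimal lazily evaluated linked list (repeated so this runs alone)."""
    def __init__(self, first, compute_rest):
        self.first = first
        self._compute_rest = compute_rest

    @property
    def rest(self):
        if self._compute_rest is not None:
            self._rest, self._compute_rest = self._compute_rest(), None
        return self._rest

def integer_stream(first=1):
    # compute_rest builds the next Stream only when rest is accessed.
    return Stream(first, lambda: integer_stream(first + 1))
```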

Notice what is happening here. We start out with a stream whose first element is 1,
and whose compute_rest function creates another stream. So when we do compute
the rest, we get another stream whose first element is one greater than the previous element,
and whose compute_rest creates another stream. Hence, we effectively get an
infinite stream of integers, computed one at a time. This is almost like an infinite recursion,
but one which can be viewed one step at a time, and so does not crash.

Q1

Write a procedure make_fib_stream() that creates an infinite stream of Fibonacci numbers.
Make the first two elements of the stream 0 and 1.

Hint: Consider using a helper procedure that can take two arguments, then think about how to start calling that procedure.

def make_fib_stream():
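One possible solution sketch (a minimal Stream class is included so it runs on its own):

```python
class Stream:
    def __init__(self, first, compute_rest):
        self.first = first
        self._compute_rest = compute_rest

    @property
    def rest(self):
        if self._compute_rest is not None:
            self._rest, self._compute_rest = self._compute_rest(), None
        return self._rest

def make_fib_stream():
    def fib_stream(a, b):
        # a is this element; b is the one after it.
        return Stream(a, lambda: fib_stream(b, a + b))
    return fib_stream(0, 1)
```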

Q2

Write a procedure sub_streams that takes in two
streams s1 and s2, and returns a new stream
whose elements are the result of subtracting the elements of s2
from the corresponding elements of s1.
For instance, if s1 were (1, 2, 3, ...)
and s2 were (2, 4, 6, ...), then the output would be
the stream (-1, -2, -3, ...).
You can assume that both streams are infinite.

def sub_streams(s1, s2):
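One possible solution sketch (again with a minimal Stream class and a count_by helper, both ours, so the example is self-contained):

```python
class Stream:
    def __init__(self, first, compute_rest):
        self.first = first
        self._compute_rest = compute_rest

    @property
    def rest(self):
        if self._compute_rest is not None:
            self._rest, self._compute_rest = self._compute_rest(), None
        return self._rest

def sub_streams(s1, s2):
    # Subtract the first elements now; promise to subtract the rests later.
    return Stream(s1.first - s2.first, lambda: sub_streams(s1.rest, s2.rest))

def count_by(start, step):
    return Stream(start, lambda: count_by(start + step, step))
```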

Q3

Define a procedure that takes an infinite Stream, s,
and a target value, and returns True if the stream converges
to the target within a certain number of values.
For this example we will say the stream converges if, for 10 consecutive values,
both the difference between two consecutive values and the difference
between the value and the target drop below max_diff.
(Hint: create the stream of differences between consecutive elements using sub_streams.)

def converges_to(s, target, max_diff=0.00001, num_values=100):
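One possible solution sketch, using sub_streams on the stream and its own rest to get the consecutive differences (the minimal Stream class and sub_streams are repeated so the example runs on its own):

```python
class Stream:
    def __init__(self, first, compute_rest):
        self.first = first
        self._compute_rest = compute_rest

    @property
    def rest(self):
        if self._compute_rest is not None:
            self._rest, self._compute_rest = self._compute_rest(), None
        return self._rest

def sub_streams(s1, s2):
    return Stream(s1.first - s2.first, lambda: sub_streams(s1.rest, s2.rest))

def converges_to(s, target, max_diff=0.00001, num_values=100):
    diffs = sub_streams(s.rest, s)  # differences between consecutive values
    streak = 0
    for _ in range(num_values):
        if abs(diffs.first) < max_diff and abs(s.first - target) < max_diff:
            streak += 1
            if streak == 10:
                return True
        else:
            streak = 0  # the run of close values was broken
        s, diffs = s.rest, diffs.rest
    return False
```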

Higher Order Functions on Streams

Naturally, as the theme has always been in this class,
we can abstract our stream procedures to be higher order. Take a look at filter_stream:
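The lab's code block is missing here; a sketch of filter_stream (with a minimal Stream class so it runs on its own):

```python
class Stream:
    def __init__(self, first, compute_rest):
        self.first = first
        self._compute_rest = compute_rest

    @property
    def rest(self):
        if self._compute_rest is not None:
            self._rest, self._compute_rest = self._compute_rest(), None
        return self._rest

def filter_stream(pred, s):
    def compute_rest():
        # A promise: filter the rest of the stream only when asked.
        return filter_stream(pred, s.rest)
    if pred(s.first):
        return Stream(s.first, compute_rest)
    return compute_rest()  # skip s.first and keep looking
```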

You can see how this function might be useful. Notice how the Stream we create has as its
compute_rest function a procedure that promises to filter out the rest of the Stream when asked.
So at any one point, the entire stream has not been filtered.
Instead, only the part of the stream that has been referenced has been filtered,
but the rest will be filtered when asked. We can model other higher order Stream
procedures after this one, and we can combine our higher order Stream procedures to do incredible things!

Q4

In a similar model to filter_stream, let's recreate the
procedure stream_map from lecture: given a stream stream and
a one-argument function func, it returns a new stream that is the result of
applying func to every element in stream.

def stream_map(func, stream):
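One possible solution sketch (minimal Stream class repeated so it runs alone):

```python
class Stream:
    def __init__(self, first, compute_rest):
        self.first = first
        self._compute_rest = compute_rest

    @property
    def rest(self):
        if self._compute_rest is not None:
            self._rest, self._compute_rest = self._compute_rest(), None
        return self._rest

def stream_map(func, stream):
    # Apply func to the first element now; map the rest only on demand.
    return Stream(func(stream.first),
                  lambda: stream_map(func, stream.rest))
```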

Q5

What does the following Stream output? Try writing out the
first few values of the stream to see the pattern.

Tail-Calls

Consider a standard recursive implementation of factorial. Its recursive case
makes a recursive call, and then uses the returned value
for more "work." In other words, the current frame needs to sit around,
waiting for the recursive call to return with a value. Only then can the current frame
use that value to calculate the final answer.
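A sketch of the recursive factorial being described:

```python
def fact(n):
    if n == 0:
        return 1
    # The frame must wait for fact(n - 1), then do more work: multiply by n.
    return n * fact(n - 1)
```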

As an example, consider a call to fact(5).
We make a new frame for the call, and in carrying out the body of the function,
we hit the recursive case, where we want to multiply 5 by the return value of the
call to fact(4). Then we want to return this product
as the answer to fact(5). However, before calculating
this product, we must wait for the call to fact(4).
The current frame stays while it waits. This is true for every successive recursive
call, so by calling fact(5), at one point we will have the frame of fact(5)
as well as the frames of fact(4), fact(3), fact(2), and fact(1), all waiting for fact(0).

Keeping all these frames around wastes a lot of space, so our goal is
to come up with an implementation of factorial that uses a constant
amount of space. We notice that in Python we can do this with a while
loop:
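A sketch of that constant-space while-loop version:

```python
def fact_iter(n):
    total = 1
    while n > 0:
        total, n = total * n, n - 1  # no waiting frames pile up
    return total
```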

However, Scheme doesn't have for and
while constructs. No problem! Everything
that can be written with while and for loops can also be written recursively.
Instead of a variable, we introduce a new parameter to keep track of the total.
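Here is a Python sketch of the same idea as the Scheme fact_optimized discussed below: the extra total parameter carries the running product, and the recursive call's value is returned unchanged. (Note that Python itself will not optimize this call.)

```python
def fact_optimized(n, total=1):
    if n == 0:
        return total
    # Tail call: its value is returned directly, with no work left to do.
    return fact_optimized(n - 1, total * n)
```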

Because Scheme supports tail-call optimization (note that Python does not), it knows when it no
longer needs to keep around frames because there is no further calculation
to do. Looking at the last line in fact_optimized,
we notice that it returns the same thing that the recursive call does,
no more work required. Scheme realizes that there is no reason to keep
around a frame that has no work left to do, so it has the recursive
call return its value directly to whatever called the current frame.

Therefore the last line in fact_optimized is a tail-call.

Q1

To sum it up (with vocabulary!), here is the quote from lecture:
"A procedure call that has not yet returned is active. Some procedure calls are
tail calls. A Scheme interpreter should support an unbounded number of
active tail calls."

A tail call is a call expression in a tail context:

The last body sub-expression in a lambda expression

Sub-expressions 2 and 3 in a tail context if expression

All non-predicate sub-expressions in a tail context cond

The last sub-expression in a tail context and or or

The last sub-expression in a tail context begin

Call expressions in "tail contexts" are tail calls, because there is no reason to
keep the current frame "active."

For the following procedures, decide whether each is tail-call optimized.
Do not run the procedures (they may not work)!

Find the recursive calls (there might be multiple). Is each one in a
tail context? In other words, does the caller still need to do more work
with the call's return value after it returns? If so, the call is not a
tail call, and the frame cannot be discarded.

List what each of the tail-calls are to help decide if they are optimized.

(define (question-a x)
  (if (= x 0)
      0
      (+ x (question-a (- x 1)))))

The call in tail position is to +. This is not optimized, because the recursive
call to question-a is an argument to +: the frame must wait for it to return and then add x.

(define (question-b x y)
  (if (= x 0)
      y
      (question-b (- x 1) (+ y x))))

Tail-call to question-b. It is sub-expression 3 of an if expression in a tail context, so it is optimized.