
One way to compute factorials is to make use of the observation that $n!$ is equal to $n$ times $(n-1)!$ for any positive integer $n$:
$$n! = n \cdot [(n-1) \cdot (n-2) \cdots 3 \cdot 2 \cdot 1] = n \cdot (n-1)!$$
Thus, we can compute $n!$ by computing $(n-1)!$ and multiplying the result by $n$. If we add the stipulation that $1!$ is equal to 1, this observation translates directly into a function:
function factorial(n) {
    if (n === 1)
        return 1;
    else return n * factorial(n - 1);
}
Using a conditional expression instead of a conditional statement, we can write the same function more compactly:
function factorial(n) {
    return n === 1
           ? 1
           : n * factorial(n - 1);
}
In this version, the body of the function consists of only a return
statement, and thus we can use the substitution model of
section 1.1.5 to watch the
function in action
computing 6!, as shown in figure 1.3.
Now let's take a different perspective on computing factorials. We could describe a rule for computing $n!$ by specifying that we first multiply 1 by 2, then multiply the result by 3, then by 4, and so on until we reach $n$. More formally, we maintain a running product, together with a counter that counts from 1 up to $n$. We can describe the computation by saying that the counter and the product simultaneously change from one step to the next according to the rule

product ← counter * product
counter ← counter + 1

and stipulating that $n!$ is the value of the product when the counter exceeds $n$.
As before, we can recast this description as a function for computing factorials:
function factorial(n) {
    return fact_iter(1, 1, n);
}
function fact_iter(product, counter, max_count) {
    if (counter > max_count)
        return product;
    else return fact_iter(counter * product,
                          counter + 1,
                          max_count);
}
or, using a conditional expression:
function factorial(n) {
    return fact_iter(1, 1, n);
}
function fact_iter(product, counter, max_count) {
    return counter > max_count
           ? product
           : fact_iter(counter * product,
                       counter + 1,
                       max_count);
}
Figure 1.4 visualizes the application of the substitution model for computing 6! with this function.
Compare the two processes. From one point of view, they seem hardly different at all. Both compute the same mathematical function on the same domain, and each requires a number of steps proportional to $n$ to compute $n!$. Indeed, both processes even carry out the same sequence of multiplications, obtaining the same sequence of partial products. On the other hand, when we consider the shapes of the two processes, we find that they evolve quite differently.
The first process, as revealed by the substitution model, builds up a chain of deferred multiplications and then contracts as the multiplications are actually performed; carrying it out requires that the interpreter keep track of the operations to be performed later on. In the computation of $n!$, the length of the chain of deferred multiplications, and hence the amount of information needed to keep track of it, grows linearly with $n$ (is proportional to $n$), just like the number of steps. Such a process is called a linear recursive process.
By contrast, the second process does not grow and shrink. At each step, all we need to keep track of, for any $n$, are the current values of the variables product, counter, and max_count. We call this an iterative process. In general, an iterative process is one whose state can be summarized by a fixed number of state variables, together with a fixed rule that describes how the state variables should be updated as the process moves from state to state and an (optional) end test that specifies conditions under which the process should terminate. In computing $n!$, the number of steps required grows linearly with $n$. Such a process is called a linear iterative process.
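For a small input, abbreviated substitution-model traces of the two versions look roughly like this:

factorial(4)
4 * factorial(3)
4 * (3 * factorial(2))
4 * (3 * (2 * factorial(1)))
4 * (3 * (2 * 1))
4 * (3 * 2)
4 * 6
24

fact_iter(1, 1, 4)
fact_iter(1, 2, 4)
fact_iter(2, 3, 4)
fact_iter(6, 4, 4)
fact_iter(24, 5, 4)
24

The first trace widens and then narrows as deferred multiplications accumulate and are then carried out; the second stays flat, since each step is completely described by the current values of its three arguments.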
Each of the following two functions defines a method for adding two positive integers in terms of the functions inc, which increments its argument by 1, and dec, which decrements its argument by 1.
function plus(a, b) {
    return a === 0 ? b : inc(plus(dec(a), b));
}
function plus(a, b) {
    return a === 0 ? b : plus(dec(a), inc(b));
}
Using the substitution model, illustrate the process generated by each function in evaluating plus(4, 5);. Are these processes iterative or recursive?
The following function computes a mathematical function called Ackermann's function.
function A(x, y) {
    if (y === 0)
        return 0;
    else if (x === 0)
        return 2 * y;
    else if (y === 1)
        return 2;
    else return A(x - 1, A(x, y - 1));
}
What are the values of the following expressions?

A(1, 10);
A(2, 4);
A(3, 3);

Consider the following functions, where A is the function defined above:
function f(n) {
    return A(0, n);
}
function g(n) {
    return A(1, n);
}
function h(n) {
    return A(2, n);
}
function k(n) {
    return 5 * n * n;
}
Give concise mathematical definitions for the functions computed by the functions f, g, and h for positive integer values of $n$. For example, k(n) computes $5n^2$.


In general, the Fibonacci numbers can be defined by the rule that $\textrm{Fib}(n)$ is 0 if $n = 0$, 1 if $n = 1$, and $\textrm{Fib}(n-1) + \textrm{Fib}(n-2)$ otherwise. This definition translates directly into a recursive function:
function fib(n) {
    if (n === 0)
        return 0;
    else if (n === 1)
        return 1;
    else return fib(n - 1) + fib(n - 2);
}
This function is instructive as a prototypical tree recursion, but it is a terrible way to compute Fibonacci numbers, because it does so much redundant computation. To get an idea of how bad this is, one can show that the value of $\textrm{Fib}(n)$ grows exponentially with $n$. More precisely (see exercise 1.13), $\textrm{Fib}(n)$ is the closest integer to $\varphi^n/\sqrt{5}$, where $\varphi = (1+\sqrt{5})/2 \approx 1.6180$ is the golden ratio, which satisfies the equation $\varphi^2 = \varphi + 1$.
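As a quick numerical check of this formula: $\varphi^{10} \approx 122.99$, so $\varphi^{10}/\sqrt{5} \approx 55.004$, and the closest integer is indeed $\textrm{Fib}(10) = 55$.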
Thus, the process uses a number of steps that grows exponentially with the input, while the space required grows only linearly. We can also formulate an iterative process for computing the Fibonacci numbers. The idea is to use a pair of integers $a$ and $b$, initialized to $\textrm{Fib}(1) = 1$ and $\textrm{Fib}(0) = 0$, and to repeatedly apply the simultaneous transformations

$a \leftarrow a + b$
$b \leftarrow a$

After applying this transformation $n$ times, $a$ and $b$ will be equal, respectively, to $\textrm{Fib}(n+1)$ and $\textrm{Fib}(n)$. Thus, we can compute Fibonacci numbers iteratively using the function
function fib(n) {
    return fib_iter(1, 0, n);
}
function fib_iter(a, b, count) {
    if (count === 0)
        return b;
    else return fib_iter(a + b, a, count - 1);
}
This second method for computing $\textrm{Fib}(n)$ is a linear iteration. The difference in number of steps required by the two methods—one linear in $n$, one growing as fast as $\textrm{Fib}(n)$ itself—is enormous, even for small inputs.
How many different ways can we make change of $1.00, given half-dollars, quarters, dimes, nickels, and pennies? More generally, can we write a function to compute the number of ways to change any given amount of money?
This problem has a simple solution as a recursive function. Suppose we think of the types of coins available as arranged in some order. Then the following relation holds: The number of ways to change amount $a$ using $n$ kinds of coins equals the number of ways to change amount $a$ using all but the first kind of coin, plus the number of ways to change amount $a - d$ using all $n$ kinds of coins, where $d$ is the denomination of the first kind of coin.
Thus, we can recursively reduce the problem of changing a given amount to problems of changing smaller amounts using fewer kinds of coins. We can use this rule to describe an algorithm if we specify the following degenerate cases:
If $a$ is exactly 0, we should count that as 1 way to make change.
If $a$ is less than 0, we should count that as 0 ways to make change.
If $n$ is 0, we should count that as 0 ways to make change.
We can easily translate this description into a recursive function:
function count_change(amount) {
    return cc(amount, 5);
}
function cc(amount, kinds_of_coins) {
    if (amount === 0)
        return 1;
    else if (amount < 0 || kinds_of_coins === 0)
        return 0;
    else return cc(amount, kinds_of_coins - 1)
                + cc(amount - first_denomination(kinds_of_coins),
                     kinds_of_coins);
}
function first_denomination(kinds_of_coins) {
    switch (kinds_of_coins) {
        case 1: return 1;
        case 2: return 5;
        case 3: return 10;
        case 4: return 25;
        case 5: return 50;
    }
}
We can now answer our original question about changing a dollar:
count_change(100);
The result is 292.
A function $f$ is defined by the rule that $f(n) = n$ if $n < 3$ and $f(n) = f(n-1) + 2f(n-2) + 3f(n-3)$ if $n \geq 3$. Write a JavaScript function that computes $f$ by means of a recursive process. Write a function that computes $f$ by means of an iterative process.

Prove that $\textrm{Fib}(n)$ is the closest integer to $\varphi^n/\sqrt{5}$, where $\varphi = (1+\sqrt{5})/2$. Hint: Let $\psi = (1-\sqrt{5})/2$. Use induction and the definition of the Fibonacci numbers (see section 1.2.2) to prove that $\textrm{Fib}(n) = (\varphi^n - \psi^n)/\sqrt{5}$.
Let $n$ be a parameter that measures the size of the problem, and let $R(n)$ be the amount of resources the process requires for a problem of size $n$. In our previous examples we took $n$ to be the number for which a given function is to be computed, but there are other possibilities. For instance, if our goal is to compute an approximation to the square root of a number, we might take $n$ to be the number of digits accuracy required. For matrix multiplication we might take $n$ to be the number of rows in the matrices. In general there are a number of properties of the problem with respect to which it will be desirable to analyze a given process. Similarly, $R(n)$ might measure the number of internal storage registers used, the number of elementary machine operations performed, and so on. In computers that do only a fixed number of operations at a time, the time required will be proportional to the number of elementary machine operations performed.
We say that $R(n)$ has order of growth $\Theta(f(n))$, written $R(n) = \Theta(f(n))$ (pronounced "theta of $f(n)$"), if there are positive constants $k_1$ and $k_2$ independent of $n$ such that $k_1 f(n) \leq R(n) \leq k_2 f(n)$ for any sufficiently large value of $n$. (In other words, for large $n$, the value $R(n)$ is sandwiched between $k_1 f(n)$ and $k_2 f(n)$.)
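As an illustration, if $R(n) = 3n^2 + 10n + 5$, then $R(n) = \Theta(n^2)$: for all $n \geq 1$ we have $3n^2 \leq R(n) \leq 18n^2$, so the constants $k_1 = 3$ and $k_2 = 18$ witness the bound.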
For instance, with the linear recursive process for computing factorial described in section 1.2.1, the number of steps grows proportionally to the input $n$. Thus, the number of steps required for this process grows as $\Theta(n)$. We also saw that the space required grows as $\Theta(n)$. For the iterative factorial function, the number of steps is still $\Theta(n)$. A properly tail-recursive implementation of Scheme will require space of $\Theta(1)$—that is, constant space—whereas JavaScript's space requirement for the iterative factorial function is still $\Theta(n)$.10
The tree-recursive Fibonacci computation requires $\Theta(\varphi^n)$ steps and space $\Theta(n)$, where $\varphi$ is the golden ratio described in section 1.2.2.
Orders of growth provide only a crude description of the behavior of a process. For example, a process requiring $n^2$ steps and a process requiring $1000n^2$ steps and a process requiring $3n^2 + 10n + 17$ steps all have $\Theta(n^2)$ order of growth. On the other hand, order of growth provides a useful indication of how we may expect the behavior of the process to change as we change the size of the problem. For a $\Theta(n)$ (linear) process, doubling the size will roughly double the amount of resources used. For an exponential process, each increment in problem size will multiply the resource utilization by a constant factor. In the remainder of section 1.2 we will examine two algorithms whose order of growth is logarithmic, so that doubling the problem size increases the resource requirement by a constant amount.
The sine of an angle (specified in radians) can be computed by making use of the approximation $\sin x \approx x$ if $x$ is sufficiently small, and the trigonometric identity
$$\sin x = 3 \sin\frac{x}{3} - 4 \sin^3\frac{x}{3}$$
to reduce the size of the argument of sin. (For purposes of this exercise an angle is considered sufficiently small if its magnitude is not greater than 0.1 radians.) These ideas are incorporated in the following functions:
function cube(x) {
    return x * x * x;
}
function p(x) {
    return 3 * x - 4 * cube(x);
}
function sine(angle) {
    if (! (abs(angle) > 0.1))
        return angle;
    else return p(sine(angle / 3.0));
}
a. How many times is the function p applied when sine(12.15) is evaluated?
b. What is the order of growth in space and number of steps (as a function of $a$) used by the process generated by the sine function when sine(a) is evaluated?
Consider the problem of computing the exponential of a given number. We would like a function that takes as arguments a base $b$ and a positive integer exponent $n$ and computes $b^n$. One way to do this is via the recursive definition
$$b^n = b \cdot b^{n-1},\qquad b^0 = 1,$$
which translates readily into the function
function expt(b, n) {
    if (n === 0)
        return 1;
    else return b * expt(b, n - 1);
}
This is a linear recursive process, which requires $\Theta(n)$ steps and $\Theta(n)$ space. Just as with factorial, we can readily formulate an equivalent linear iteration:
function expt(b, n) {
    return expt_iter(b, n, 1);
}
function expt_iter(b, counter, product) {
    if (counter === 0)
        return product;
    else return expt_iter(b,
                          counter - 1,
                          b * product);
}
This version requires $\Theta(n)$ steps and $\Theta(1)$ space.
We can compute exponentials in fewer steps by using successive squaring. For instance, rather than computing $b^8$ as $b \cdot (b \cdot (b \cdot (b \cdot (b \cdot (b \cdot (b \cdot b))))))$, we can compute it using three multiplications:
$$b^2 = b \cdot b,\qquad b^4 = b^2 \cdot b^2,\qquad b^8 = b^4 \cdot b^4.$$
This method works fine for exponents that are powers of 2. We can also take advantage of successive squaring in computing exponentials in general if we use the rule
$$b^n = (b^{n/2})^2 \ \text{ if } n \text{ is even},\qquad b^n = b \cdot b^{n-1} \ \text{ if } n \text{ is odd.}$$
We can express this method as a function:
function fast_expt(b, n) {
    if (n === 0)
        return 1;
    else if (is_even(n))
        return square(fast_expt(b, n / 2));
    else return b * fast_expt(b, n - 1);
}
where the predicate to test whether an integer is even is defined in terms of
the
operator %, which computes the remainder after integer division, by
function is_even(n) {
    return n % 2 === 0;
}
The process evolved by fast_expt grows logarithmically with $n$ in both space and number of steps. To see this, observe that computing $b^{2n}$ using fast_expt requires only one more multiplication than computing $b^n$. The size of the exponent we can compute therefore doubles (approximately) with every new multiplication we are allowed. Thus, the number of multiplications required for an exponent of $n$ grows about as fast as the logarithm of $n$ to the base 2. The process has $\Theta(\log n)$ growth.11
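To see the doubling concretely: once we have computed $b^n$, one additional multiplication (squaring) yields $b^{2n}$, so ten successive squarings of $b$ already give $b^{1024}$.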
The difference between $\Theta(\log n)$ growth and $\Theta(n)$ growth becomes striking as $n$ becomes large. For example, fast_expt for $n = 1000$ requires only 14 multiplications.12
It is also possible to use the idea of
successive squaring to devise an iterative algorithm that computes
exponentials with a logarithmic number of steps
(see exercise 1.16), although, as is often
the case with iterative algorithms, this is not written down so
straightforwardly as the recursive algorithm.13
Design a function that evolves an iterative exponentiation process that uses successive squaring and uses a logarithmic number of steps, as does fast_expt. (Hint: Using the observation that $(b^{n/2})^2 = (b^2)^{n/2}$, keep, along with the exponent $n$ and the base $b$, an additional state variable $a$, and define the state transformation in such a way that the product $a b^n$ is unchanged from state to state. At the beginning of the process $a$ is taken to be 1, and the answer is given by the value of $a$ at the end of the process. In general, the technique of defining an invariant quantity that remains unchanged from state to state is a powerful way to think about the design of iterative algorithms.)
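In outline, the transformation the hint describes might look as follows (fast_expt_iter is a name chosen here for illustration; working out the details is the point of the exercise):

function fast_expt_iter(b, n, a) {
    // invariant: a * b^n equals the desired result at every step
    return n === 0
           ? a
           : is_even(n)
           ? fast_expt_iter(b * b, n / 2, a)   // (b^2)^(n/2) leaves a * b^n unchanged
           : fast_expt_iter(b, n - 1, a * b);  // fold one factor of b into a
}

For example, fast_expt_iter(2, 10, 1) evaluates to 1024 in a logarithmic number of steps.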
The exponentiation algorithms in this section are based on performing exponentiation by means of repeated multiplication. In a similar way, one can perform integer multiplication by means of repeated addition. The following multiplication function (in which it is assumed that our language can only add, not multiply) is analogous to the expt function:
function times(a, b) {
    if (b === 0)
        return 0;
    else return a + times(a, b - 1);
}
This algorithm takes a number of steps that is linear in b.
Now suppose we include, together with addition, operations double,
which doubles an integer, and halve, which divides an (even)
integer by 2. Using these, design a multiplication function analogous
to
fast_expt
that uses a logarithmic number of steps.
and
in the
fib_iter
process of
section 1.2.2:
and
. Call this transformation
, and observe that applying
over
and over again
times, starting with 1 and 0, produces the pair
and
. In other words, the Fibonacci
numbers are produced by applying
, the
th power of the
transformation
, starting with the pair
. Now consider
to be the special case of
and
in a family of
transformations
, where
transforms the pair
according to
and
. Show
that if we apply such a transformation
twice, the effect is
the same as using a single transformation
of the same form,
and compute
and
in terms of
and
. This gives us an
explicit way to square these transformations, and thus we can compute
using successive squaring, as in the
fast_expt
function. Put this all together to complete the following function,
which runs in a logarithmic number of
steps:15
The greatest common divisor (GCD) of two integers $a$ and $b$ is defined to be the largest integer that divides both $a$ and $b$ with no remainder.
For example, the GCD of 16 and 28 is 4. In chapter 2,
when we investigate how to implement rational-number arithmetic, we
will need to be able to compute GCDs in order to reduce
rational numbers to lowest terms. (To reduce a rational number to
lowest terms, we must divide both the numerator and the denominator by their
GCD. For example, 16/28 reduces to 4/7.) One way to find the
GCD of two integers is to factor them and search for common
factors, but there is a famous algorithm that is much more efficient.
The idea of the algorithm is based on the observation that, if $r$ is the remainder when $a$ is divided by $b$, then the common divisors of $a$ and $b$ are precisely the same as the common divisors of $b$ and $r$. Thus, we can use the equation
$$\textrm{GCD}(a, b) = \textrm{GCD}(b, r)$$
to successively reduce the problem of computing a GCD to the problem of computing the GCD of smaller and smaller pairs of integers. This method for computing the GCD is known as Euclid's Algorithm, and it is easy to express as a function:
function gcd(a, b) {
    return b === 0 ? a : gcd(b, a % b);
}
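For example, the reduction of GCD(206, 40) proceeds as

gcd(206, 40)
gcd(40, 6)
gcd(6, 4)
gcd(4, 2)
gcd(2, 0)
2

so the GCD of 206 and 40 is 2.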
Lamé's Theorem: If Euclid's Algorithm requires $k$ steps to compute the GCD of some pair, then the smaller number in the pair must be greater than or equal to the $k$th Fibonacci number.17
We can use this theorem to get an order-of-growth estimate for Euclid's Algorithm. Let $n$ be the smaller of the two inputs to the function. If the process takes $k$ steps, then we must have $n \geq \textrm{Fib}(k) \approx \varphi^k/\sqrt{5}$. Therefore the number of steps $k$ grows as the logarithm (to the base $\varphi$) of $n$. Hence, the order of growth is $\Theta(\log n)$.
This section describes two methods for checking the primality of an integer $n$, one with order of growth $\Theta(\sqrt{n})$, and a probabilistic algorithm with order of growth $\Theta(\log n)$. The exercises at the end of this section suggest programming projects based on these algorithms.
The following program finds the smallest integral divisor (greater than 1) of a given number $n$. It does this in a straightforward way, by testing $n$ for divisibility by successive integers starting with 2.
function smallest_divisor(n) {
    return find_divisor(n, 2);
}
function find_divisor(n, test_divisor) {
    if (square(test_divisor) > n)
        return n;
    else if (divides(test_divisor, n))
        return test_divisor;
    else return find_divisor(n, test_divisor + 1);
}
function divides(a, b) {
    return b % a === 0;
}
We can test whether a number is prime as follows: $n$ is prime if and only if $n$ is its own smallest divisor.
function is_prime(n) {
    return n === smallest_divisor(n);
}
The end test for find_divisor is based on the fact that if $n$ is not prime it must have a divisor less than or equal to $\sqrt{n}$.18 This means that the algorithm need only test divisors between 1 and $\sqrt{n}$. Consequently, the number of steps required to identify $n$ as prime will have order of growth $\Theta(\sqrt{n})$.
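For example, smallest_divisor(91) is 7 (since $91 = 7 \cdot 13$), so is_prime(91) is false, whereas is_prime(199) is true.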
The $\Theta(\log n)$ primality test is based on a result from number theory known as Fermat's Little Theorem.19
Fermat's Little Theorem: If $n$ is a prime number and $a$ is any positive integer less than $n$, then $a$ raised to the $n$th power is congruent to $a$ modulo $n$.
(Two numbers are said to be congruent modulo $n$ if they both have the same remainder when divided by $n$. The remainder of a number $a$ when divided by $n$ is also referred to as the remainder of $a$ modulo $n$, or simply as $a$ modulo $n$.)
If $n$ is not prime, then, in general, most of the numbers $a < n$ will not satisfy the above relation. This leads to the following algorithm for testing primality: Given a number $n$, pick a random number $a < n$ and compute the remainder of $a^n$ modulo $n$. If the result is not equal to $a$, then $n$ is certainly not prime. If it is $a$, then chances are good that $n$ is prime. Now pick another random number $a$ and test it with the same method. If it also satisfies the equation, then we can be even more confident that $n$ is prime. By trying more and more values of $a$, we can increase our confidence in the result. This algorithm is known as the Fermat test.
To implement the Fermat test, we need a function that computes the exponential of a number modulo another number:
function expmod(base, exp, m) {
    if (exp === 0)
        return 1;
    else if (is_even(exp))
        return square(expmod(base, exp / 2, m)) % m;
    else return (base * expmod(base, exp - 1, m)) % m;
}
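For example, expmod(4, 13, 13) evaluates to 4, as Fermat's Little Theorem predicts for the prime 13, while expmod(3, 4, 4) evaluates to 1 rather than 3, revealing that 4 is not prime.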
The Fermat test is performed by choosing at random a number $a$ between 1 and $n - 1$ inclusive and checking whether the remainder modulo $n$ of the $n$th power of $a$ is equal to $a$. The random number $a$ is chosen using the function random, which we assume is included as a primitive in Scheme. The function random returns a nonnegative integer less than its integer input. Hence, to obtain a random number between 1 and $n - 1$, we call random with an input of $n - 1$ and add 1 to the result:
function fermat_test(n) {
    function try_it(a) {
        return expmod(a, n, n) === a;
    }
    return try_it(1 + random(n - 1));
}
The following function runs the test a given number of times, as specified by a parameter. Its value is true if the test succeeds every time, and false otherwise.
function fast_is_prime(n, times) {
    if (times === 0)
        return true;
    else if (fermat_test(n))
        return fast_is_prime(n, times - 1);
    else return false;
}
If $n$ ever fails the Fermat test, we can be certain that $n$ is not prime. But the fact that $n$ passes the test, while an extremely strong indication, is still not a guarantee that $n$ is prime. What we would like to say is that for any number $n$, if we perform the test enough times and find that $n$ always passes the test, then the probability of error in our primality test can be made as small as we like.
Unfortunately, this assertion is not quite correct. There do exist numbers that fool the Fermat test: numbers $n$ that are not prime and yet have the property that $a^n$ is congruent to $a$ modulo $n$ for all integers $a < n$. Such numbers are extremely rare, so the Fermat test is quite reliable in practice.21
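The smallest such number is $561 = 3 \cdot 11 \cdot 17$: every integer $a$ satisfies $a^{561} \equiv a \pmod{561}$, so fast_is_prime(561, k) reports true for any number of trials $k$, even though 561 is not prime.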
There are variations of the Fermat test that cannot be fooled. In these tests, as with the Fermat method, one tests the primality of an integer $n$ by choosing a random integer $a < n$ and checking some condition that depends upon $n$ and $a$. (See exercise 1.29 for an example of such a test.) On the other hand, in contrast to the Fermat test, one can prove that, for any $n$, the condition does not hold for most of the integers $a < n$ unless $n$ is prime. Thus, if $n$ passes the test for some random choice of $a$, the chances are better than even that $n$ is prime. If $n$ passes the test for two random choices of $a$, the chances are better than 3 out of 4 that $n$ is prime. By running the test with more and more randomly chosen values of $a$ we can make the probability of error as small as we like.
Assume that there is a primitive function runtime that returns an integer specifying the amount of time the system has been running (measured, for example, in microseconds). The following timed_prime_test function, when called with an integer $n$, prints $n$ and checks to see if $n$ is prime. If $n$ is prime, the function prints three asterisks followed by the amount of time used in performing the test.
function timed_prime_test(n) {
    newline();
    display(n);
    start_prime_test(n, runtime());
}
function start_prime_test(n, start_time) {
    if (is_prime(n))
        return report_prime(runtime() - start_time);
}
function report_prime(elapsed_time) {
    print(" *** ");
    display(elapsed_time);
}
Using this
function,
write a
function
search_for_primes
that checks the primality of consecutive odd integers in a specified range.
Use your
function
to find the three smallest primes larger than 1000;
larger than 10,000; larger than 100,000; larger than 1,000,000. Note
the time needed to test each prime. Since the testing algorithm has order of growth of $\Theta(\sqrt{n})$, you should expect that testing for primes around 10,000 should take about $\sqrt{10}$ times as long as testing for primes around 1000. Do your timing data bear this out?
How well do the data for 100,000 and 1,000,000 support the $\Theta(\sqrt{n})$ prediction? Is your result compatible with the notion that programs
on your machine run in time proportional to the number of steps
required for the computation?
Modify the timed_prime_test function of the previous exercise to use fast_is_prime (the Fermat method), and test each of the primes you found there. Since the Fermat test has $\Theta(\log n)$ growth, how would you expect the time to test primes near 1,000,000 to compare with the time needed to test primes near 1000? Do your data bear this out? Can you explain any discrepancy you find?
Alyssa P. Hacker complains that we went to a lot of extra work in writing expmod. After all, she says, since we already know how to compute exponentials, we could have simply written
function expmod(base, exp, m) {
    return fast_expt(base, exp) % m;
}
Is she correct? Would this function serve as well for our fast prime tester? Explain.
Louis Reasoner finds that his fast_is_prime test seems to run more slowly than his is_prime test. Louis calls his friend Eva Lu Ator over to help. When they examine Louis's code, they find that he has rewritten the expmod function to use an explicit multiplication, rather than calling square:
function expmod(base, exp, m) {
    if (exp === 0)
        return 1;
    else if (is_even(exp))
        return expmod(base, exp / 2, m)
               * expmod(base, exp / 2, m)
               % m;
    else return base
                * expmod(base, exp - 1, m)
                % m;
}
"I don't see what difference that could make," says Louis. "I do," says Eva. "By writing the function like that, you have transformed the $\Theta(\log n)$ process into a $\Theta(n)$ process." Explain.
Demonstrate that the Carmichael numbers really do fool the Fermat test. That is, write a function that takes an integer $n$ and tests whether $a^n$ is congruent to $a$ modulo $n$ for every $a < n$, and try your function on the given Carmichael numbers.
One variant of the Fermat test that cannot be fooled is called the Miller-Rabin test. This starts from an alternate form of Fermat's Little Theorem, which states that if $n$ is a prime number and $a$ is any positive integer less than $n$, then $a$ raised to the $(n-1)$st power is congruent to 1 modulo $n$. To test the primality of a number $n$ by the Miller-Rabin test, we pick a random number $a < n$ and raise $a$ to the $(n-1)$st power modulo $n$ using the expmod function. However, whenever we perform the squaring step in expmod, we check to see if we have discovered a nontrivial square root of 1 modulo $n$, that is, a number not equal to 1 or $n - 1$ whose square is equal to 1 modulo $n$. It is possible to prove that if such a nontrivial square root of 1 exists, then $n$ is not prime. It is also possible to prove that if $n$ is an odd number that is not prime, then, for at least half the numbers $a < n$, computing $a^{n-1}$ in this way will reveal a nontrivial square root of 1 modulo $n$.
(This is why the Miller-Rabin test
cannot be fooled.) Modify the
expmod
function
to signal if it
discovers a nontrivial square root of 1, and use this to implement
the Miller-Rabin test with a
function
analogous to
fermat_test.
Check your
function
by testing various known primes and non-primes.
Hint: One convenient way to make expmod
signal is to have it return 0.
In a real program we would probably use the block structure introduced in the last section to hide the declaration of the iterative helper:
function factorial(n) {
    function iter(product, counter) {
        if (counter > n)
            return product;
        else return iter(counter * product,
                         counter + 1);
    }
    return iter(1, 1);
}
We avoided doing this here so as to minimize the number of things to
think about at once.
The elements of Pascal's triangle are called the binomial coefficients, because the $n$th row consists of the coefficients of the terms in the expansion of $(x + y)^n$. This pattern for computing the coefficients appeared in Blaise Pascal's 1653 seminal work on probability theory, Traité du triangle arithmétique.
According to
Knuth (1973), the same pattern appears in the Szu-yuen
Yü-chien (The Precious Mirror of the Four Elements), published
by the Chinese mathematician Chu Shih-chieh in 1303, in the
works of the twelfth-century Persian poet and mathematician Omar
Khayyam, and in the works of the twelfth-century Hindu mathematician
Bháscara Áchárya.
More precisely, the number of multiplications required is equal to one less than the log base 2 of $n$ plus the number of ones in the binary representation of $n$. This total is always less than twice the log base 2 of $n$. The arbitrary constants $k_1$ and $k_2$ in the definition of order notation imply that, for a logarithmic process, the base to which logarithms are taken does not matter, so all such processes are described as $\Theta(\log n)$.
To prove the theorem, we consider pairs $(a_k, b_k)$, where $a_k \geq b_k$, for which Euclid's Algorithm terminates in $k$ steps. The proof is based on the claim that, if $(a_{k+1}, b_{k+1}) \rightarrow (a_k, b_k) \rightarrow (a_{k-1}, b_{k-1})$ are three successive pairs in the reduction process, then we must have $b_{k+1} \geq b_k + b_{k-1}$. To verify the claim, consider that a reduction step is defined by applying the transformation $a_{k-1} = b_k$, $b_{k-1} = $ remainder of $a_k$ divided by $b_k$. The second equation means that $a_k = q b_k + b_{k-1}$ for some positive integer $q$. And since $q$ must be at least 1 we have $a_k \geq b_k + b_{k-1}$. But in the previous reduction step we have $b_{k+1} = a_k$. Therefore, $b_{k+1} = a_k \geq b_k + b_{k-1}$. This verifies the claim. Now we can prove the theorem by induction on $k$, the number of steps that the algorithm requires to terminate. The result is true for $k = 1$, since this merely requires that $b$ be at least as large as $\textrm{Fib}(1) = 1$. Now, assume that the result is true for all integers less than or equal to $k$ and establish the result for $k + 1$. Let $(a_{k+1}, b_{k+1}) \rightarrow (a_k, b_k) \rightarrow (a_{k-1}, b_{k-1})$ be successive pairs in the reduction process. By our induction hypotheses, we have $b_{k-1} \geq \textrm{Fib}(k - 1)$ and $b_k \geq \textrm{Fib}(k)$. Thus, applying the claim we just proved together with the definition of the Fibonacci numbers gives $b_{k+1} \geq b_k + b_{k-1} \geq \textrm{Fib}(k) + \textrm{Fib}(k - 1) = \textrm{Fib}(k + 1)$, which completes the proof of Lamé's Theorem.
The reduction steps in the cases where the exponent $e$ is greater than 1 are based on the fact that, for any integers $x$, $y$, and $m$, we can find the remainder of $x$ times $y$ modulo $m$ by computing separately the remainders of $x$ modulo $m$ and $y$ modulo $m$, multiplying these, and then taking the remainder of the result modulo $m$. For instance, in the case where $e$ is even, we compute the remainder of $b^{e/2}$ modulo $m$, square this, and take the remainder modulo $m$. This technique is useful because it means we can perform our computation without ever having to deal with numbers much larger than $m$. (Compare exercise 1.26.)
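For instance, the remainder of $23 \cdot 40$ modulo 7 can be found by taking $23 \bmod 7 = 2$ and $40 \bmod 7 = 5$, multiplying to get 10, and taking $10 \bmod 7 = 3$; this agrees with $920 \bmod 7 = 3$, but no intermediate result ever exceeds the square of the modulus.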