Heh, as someone who's working behind the scenes, it's nice to see HMMT problems being discussed here.

Here's one of my favorite math competition problems of all time (bonus points if you can find the source):

We have a sequence of real numbers a(0), a(1), a(2), ... that satisfies a(n) = r * a(n-1) + s * a(n-2) for some real numbers r,s. Is it possible that a(n) is never 0, but for any positive real number x we can find integers i and j such that |a(i)|< x < |a(j)| ?

My guess is no. For this to be true, you need a subsequence that converges to 0 and a subsequence that diverges to +inf or -inf. But the recursion should lead to exponential behaviour, as with the Fibonacci sequence (write it as a 2d linear recursion and find the eigenvalues/eigenvectors), at least as long as no eigenvalue has absolute value exactly 1; in that case you might get oscillating behaviour depending on the second eigenvalue, but the absolute value stays constant. In any case you can write a(n) as a sum of two complex exponentials, and I don't see how such a sum can have subsequences converging both to 0 and to +-inf.
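The eigenvalue argument can be sketched numerically. This is a minimal illustration (Fibonacci, r = s = 1, chosen by me as the classic example): the roots of the characteristic equation x^2 = r*x + s control the growth, and consecutive ratios a(n+1)/a(n) converge to the dominant root.

```python
import cmath

# Sketch: the characteristic equation of a(n) = r*a(n-1) + s*a(n-2) is
# x^2 = r*x + s; its roots control growth, so |a(n)| behaves like |root|^n.
# Fibonacci (r = s = 1) is the classic example; the numbers are illustrative.
r, s = 1.0, 1.0
disc = cmath.sqrt(r * r + 4 * s)
roots = [(r + disc) / 2, (r - disc) / 2]   # roots of x^2 - r*x - s = 0
dominant = max(abs(x) for x in roots)      # golden ratio for Fibonacci

a = [1.0, 1.0]                             # a(0), a(1)
for n in range(2, 30):
    a.append(r * a[-1] + s * a[-2])

# Consecutive ratios converge to the dominant root: exponential behaviour.
print(dominant, a[-1] / a[-2])
```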

Maybe I'm missing something without writing it down, but in this case that method is exactly what you need to see how it is possible.

Had arrived at the same conclusion as DStu, but hadn't thought about complex solutions, for some reason. If both "eigenvalues" (e1, e2) have the same absolute value, larger than one, but their arguments cannot be transformed into one another by multiplication by a rational number, it should work.

The idea is that sometimes e1^n and e2^n will have very similar arguments, so a(n) will have a very large absolute value, and sometimes their arguments will differ by roughly Pi, so they will mostly cancel each other, and a(n) can get arbitrarily close to zero in absolute value.

EDIT: or maybe not. The absolute value of ei^n might grow fast enough that it can't be cancelled by the nearly opposing arguments. Don't know how to prove it either way.

I agree with pacovf that it may be possible. You can get a(n) to look something like (2^n)*cos(n). But I'm not sure whether such a sequence will have a subsequence converging to zero or not. I seem to recall this question coming up in my thesis, and I think I abandoned it for a different approach.
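The "(2^n)*cos(n)" shape can actually be realized by such a recurrence. A small sketch, with the characteristic roots 2*e^(+-i) chosen by me for illustration: that choice gives r = 4*cos(1), s = -4, and then a(n) = 2^n * cos(n) exactly.

```python
import math

# Sketch: pick characteristic roots 2*e^(+i) and 2*e^(-i), i.e. r = 4*cos(1),
# s = -4 (my choice, for illustration).  Then a(n) = 2^n * cos(n) satisfies
# a(n) = r*a(n-1) + s*a(n-2), matching the "(2^n)*cos(n)" example.
r, s = 4 * math.cos(1), -4.0
a = [1.0, 2 * math.cos(1)]             # a(0) = cos(0), a(1) = 2*cos(1)
for n in range(2, 40):
    a.append(r * a[-1] + s * a[-2])

# Check the closed form (relative to the 2^n scale) and watch |a(n)| blow up.
err = max(abs(a[n] - 2 ** n * math.cos(n)) / 2 ** n for n in range(40))
print(err, max(abs(x) for x in a))
```

Whether |a(n)| also dips arbitrarily close to zero is exactly the open question: that depends on how small |cos(n)| gets, which the rest of the thread is about.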

Been thinking about it for a while. My gut instinct tells me that min_{n<N} |cos(n)| ~ 1/N, but I can't manage to prove it. I've been working around the fact that cos(x+e) ~ e when cos(x) = 0, so the question is how "quickly" n approaches (N+1/2)*Pi, but I've only managed to compare it to things that go slower, not faster...

So here's my line of reasoning. Let's say the cosine has a frequency f. Instead of looking at cos(2*pi*f*n), let's look at the points traced out on the unit circle and see how close they get to the y-axis. Well, if f is rational then there are only finitely many points, so assume f is irrational. Then they form a dense subset of the unit circle. In fact, I suspect that for large N, the first N points should be close to uniformly distributed around the unit circle. If so, then we would expect that for large N, the point closest to the y-axis is off by an angle of about 2pi/N, which would give a value for cosine of about 2pi/N.
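The ~1/N guess is easy to probe numerically, at least. A purely exploratory sketch (no proof content, just the running minimum of |cos(n)| scaled by N):

```python
import math

# Rough numerical probe of the min_{n<N} |cos(n)| ~ 1/N guess: track the
# running minimum of |cos(n)| and see how it scales with N.  Exploratory
# only -- a proof would need to control how well pi is approximated by
# rationals.
checkpoints = (10 ** 3, 10 ** 4, 10 ** 5)
best, samples = float("inf"), []
for n in range(1, checkpoints[-1] + 1):
    best = min(best, abs(math.cos(n)))
    if n in checkpoints:
        samples.append((n, best, n * best))

for n, b, scaled in samples:
    print(n, b, scaled)
```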

Ah, this should be true, and would constitute a proof in the proper formalism.

The points should be distributed uniformly on the circle for N large enough. I think three years ago one of my teachers did a 10-minute aside to kinda prove it, but I don't remember how he did it... I think it's a bit tricky to demonstrate rigorously.

I don't think any irrational value gives a dense orbit, but a full measure set should. In fact I think a full-measure set of f will give you uniform distribution by the Birkhoff ergodic theorem.

The unit circle is compact. If the frequency is irrational, then the orbit never repeats, so it forms an infinite subset of the unit circle and therefore has an accumulation point. This means that for any positive epsilon, one can find two points in the orbit whose angles differ by less than epsilon. But one of these points comes sooner than the other, so there is some N(epsilon) such that if you take a point in the orbit and move forward N(epsilon) steps, you move forward by this same angle of less than epsilon. It follows that the orbit is dense.
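The pigeonhole step above can be sketched numerically (alpha = sqrt(2) is just an illustrative irrational angle): among m orbit points on the circle, the m gaps between neighbours sum to 2*pi, so the smallest gap is at most 2*pi/m, and stepping by that small rotation sweeps the whole circle.

```python
import math

# Sketch of the pigeonhole step: drop m orbit points n*alpha (mod 2*pi) on
# the circle.  The m gaps between consecutive points (including the
# wrap-around) sum to exactly 2*pi, so the smallest gap is at most 2*pi/m.
# alpha = sqrt(2) is an arbitrary irrational, chosen for illustration.
alpha, m = math.sqrt(2), 1000
pts = sorted((n * alpha) % (2 * math.pi) for n in range(m))
gaps = [b - a for a, b in zip(pts, pts[1:])]
gaps.append(2 * math.pi - pts[-1] + pts[0])   # wrap-around gap
print(min(gaps), 2 * math.pi / m)
```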
