The following question was a research exercise (i.e. an open problem) in R. Graham, D.E. Knuth, and O. Patashnik, "Concrete Mathematics", 1988, chapter 1.

It is easy to show that

$$\sum_{k \geq 1} \frac{1}{k} \cdot \frac{1}{k+1} = 1.$$

The product $\frac{1}{k} \cdot \frac{1}{k+1}$ is the area of a $\frac{1}{k}$-by-$\frac{1}{k+1}$ rectangle. The sum of the areas of these rectangles is 1, which is the area of a unit square. Can we use these rectangles to cover a unit square?
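(For the record, the sum is the standard telescoping series:

$$\frac{1}{k} \cdot \frac{1}{k+1} = \frac{1}{k} - \frac{1}{k+1}, \qquad \sum_{k=1}^{n} \left( \frac{1}{k} - \frac{1}{k+1} \right) = 1 - \frac{1}{n+1} \longrightarrow 1.)$$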

Is this problem still open?

What are the best results we know about this problem (or its relaxations)?

This is not an answer, but the not-widely-read journal Geombinatorics has a lot of problems related to this. It's reviewed only by Alexander Soifer, as far as I can tell, but a lot of very interesting math flows through it, including tiling problems.
– Eric Tressler, Aug 1 '10 at 23:00


I certainly don't know how to do this question, but I could really envisage spending a lot of time trying if I found an applet where you could drag and drop rectangles into a square, and the applet would do precise arithmetic, allow you to zoom in to various regions, and make intelligent choices about how to label lengths and so on. You kind of feel that it might be one of those problems where you can get a feel for an algorithm if you're allowed to play with examples.
– Kevin Buzzard, Aug 3 '10 at 14:37

3 Answers

The best result that I'm aware of is due to D. Jennings, who proved that all the rectangles of size $k^{-1} \times (k+1)^{-1}$, $k = 1, 2, 3, \dots$, can be packed into a square of side $133/132$ (link).

Edit 1. A web search via Google Scholar gave a reference to this article by V. Bálint, which claims that the rectangles can be packed into a square of side $501/500$.

Edit 2. The state of the art of this and related packing problems due to Leo Moser is discussed in Chapter 3 of "Research Problems in Discrete Geometry" by P. Brass, W. O. J. Moser, and J. Pach. The problem was still unsettled as of 2005.

I think any discussion of this problem should explicitly mention the paper by Marc M. Paulhus, "An algorithm for packing squares," J. Combin. Theory Ser. A 82 (1998), 147–157. In a certain precise sense, Paulhus's results are millions of times better than any previous results, including Bálint's. Paulhus's method also adapts readily to a number of other related packing problems.
– Timothy Chow, Sep 10 '10 at 16:22

As a bit of fun, I have written a program that attempts to fit the first $n$ rectangles into the square. (I accept that this is not an obvious route to a proof.)

Initially, I planned to jumble the rectangles without any strategy, except that I constrained each new rectangle to share a vertex with at least one previous rectangle. Unfortunately, I quickly found that backtracking is extremely time-consuming. In retrospect, this makes sense: if a state is reached where there are only $N$ spaces big enough to accept the next $N+1$ rectangles, backtracking will probably need to try all $N!$ permutations before deciding to backtrack further. (And this is as it should be, because one of the permutations may free up a corner to allow progress.) So, without strategy, 255 rectangles go in and then the algorithm makes no further progress for a long time:

[Image: dead end]

So, I added a bit of strategy: try to make as many edge-to-edge joins as possible. With this algorithm, I have reached 40000 (and still going) without any need at all for backtracking. (In fact, it's quite rare to find an exact fit into a gap, where a new rectangle has edge-to-edge contact with three existing rectangles. Therefore, in retrospect, it would probably be roughly as good to insist that new rectangles have two or more edge-to-edge contacts -- which will effectively mean fitting into "corners" where the new rectangle fills the only remaining quadrant at a vertex.)
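The geometric bookkeeping such a program needs is small. Here is a minimal sketch (my own illustration, not the author's code), assuming axis-aligned placements and exact rational coordinates: an overlap test for legality, and a contact-length measure for the edge-to-edge scoring described above.

```python
from fractions import Fraction as F

# Axis-aligned rectangle: (x1, y1, x2, y2) with x1 < x2 and y1 < y2,
# all coordinates exact rationals.

def rect(k, x, y):
    """The k-th rectangle (1/k wide, 1/(k+1) tall), lower-left corner at (x, y)."""
    return (x, y, x + F(1, k), y + F(1, k + 1))

def overlaps(a, b):
    """True if the open interiors of a and b intersect (placement is illegal)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def contact_length(a, b):
    """Total length of shared edge between two non-overlapping rectangles."""
    def seg(lo1, hi1, lo2, hi2):
        return max(F(0), min(hi1, hi2) - max(lo1, lo2))
    total = F(0)
    # Vertical contact: one rectangle's right edge lies on the other's left edge.
    if a[2] == b[0] or b[2] == a[0]:
        total += seg(a[1], a[3], b[1], b[3])
    # Horizontal contact: one rectangle's top edge lies on the other's bottom edge.
    if a[3] == b[1] or b[3] == a[1]:
        total += seg(a[0], a[2], b[0], b[2])
    return total

# Rectangle 1 (1 x 1/2) along the bottom of the unit square, and rectangle 2
# (1/2 x 1/3) sitting on top of its left half:
r1 = rect(1, F(0), F(0))
r2 = rect(2, F(0), F(1, 2))
print(overlaps(r1, r2))        # False: they only touch
print(contact_length(r1, r2))  # 1/2: the shared horizontal edge
```

A real packer would score each candidate position by summing `contact_length` against all neighbours and pick the maximum, which is the strategy described above.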

Here's an image of the situation after 10000 rectangles: [Image: maximized contact]. There is a different pattern, arguably just as good, if the first position with two edge-to-edge contacts is selected: [Image: two contacts, after 1000 rectangles]. This is quicker.

For the squeamish, look away now: I have been using floating-point arithmetic. With the gcc compiler's somewhat lame "long double", this stores about 20 decimal places. So, I have insisted that an "exact" contact must have coordinates that match to at least 19 decimal places. A "clear" gap or overlap between non-contacts must be at least, say, $10^{-14}$ -- so there are 5 orders of magnitude between "presumably touching" and "presumably separate". You could regard this as having a probabilistic chance of a mistake, and I guess (without justification) the probability might be of order $10^{-5}$.

If gaps are required to be at least $10^{-12}$, then the algorithm is unsure whether $$ {1\over 3912} + {1\over 4124} - {1\over 4050} - {1\over 3981} = {1\over 3612702562200} $$ is zero or a gap. If gaps are at least $10^{-13}$, the same happens with $$ {1\over 26981}+{1\over 29981}-{1\over 14201} = {1\over 11487435443561}.$$ These are real examples, and it's easy to concoct other situations that would challenge higher precision. For example, try $$ {1\over 30234}+{1\over 26811}-{1\over 28672}-{1\over 28172} = {1\over 27281801667907584}. $$ So far, no in-between gaps (between $10^{-19}$ and $10^{-14}$) have been encountered.
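These identities are easy to confirm with exact rational arithmetic; for example, in Python (standing in for whatever exact-arithmetic library one prefers):

```python
from fractions import Fraction as F

# The three near-coincidences above: sums of unit fractions that are
# genuinely nonzero, but far below any workable floating-point gap threshold.
cases = [
    (F(1, 3912) + F(1, 4124) - F(1, 4050) - F(1, 3981),
     F(1, 3612702562200)),
    (F(1, 26981) + F(1, 29981) - F(1, 14201),
     F(1, 11487435443561)),
    (F(1, 30234) + F(1, 26811) - F(1, 28672) - F(1, 28172),
     F(1, 27281801667907584)),
]
for value, claimed in cases:
    assert value == claimed
    print(float(value))  # ~2.8e-13, ~8.7e-14, ~3.7e-17 respectively
```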

I have recently started checking the results using arbitrary-precision rational numbers (using the IMath package). This is slower, of course. The size of the denominator could be excessive (see A003418), but only 138 base-10 digits were required up to 4800 rectangles. This took about 5 hours on a desktop. The code isn't designed for efficiency, and gets progressively slower in a variety of ways.
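For scale: the worst case from A003418 is $\operatorname{lcm}(1, \ldots, n+1)$, since every coordinate is a signed sum of side lengths with denominators at most $n+1$. A quick check (my sketch, not the author's code) shows how far the observed 138 digits fall below that bound:

```python
from math import lcm

# Worst-case common denominator after n rectangles: lcm(1..n+1), since every
# coordinate is a sum/difference of fractions 1/k with k <= n + 1.
n = 4800
worst = lcm(*range(1, n + 2))
print(len(str(worst)))  # ~2100 digits, versus the 138 actually needed
```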

It may seem pointless to press on beyond 1000, or 2000, or whatever, and it probably is. However, there is an exciting crunch point at about 17000: until this point, there has been a clear region of unfilled space, substantially larger than the incoming rectangles. Any rectangle that doesn't fit conveniently elsewhere can go in there. This is quite a luxurious position: you can tell at a glance that deadlock won't be reached in the next few placements. When that space is filled, are the remaining slivers large enough? The incoming rectangles aren't yet small enough that the remaining gaps look like wide-open spaces. Initial experience suggests that this crunch is survived, but of course there may be more crunches to come.

@Kevin Buzzard: I hope this doesn't take the fun out of your interactive applet. I think you're right that a bit of insight comes out of this square-bashing: there is hope that there are enough small rectangles to more or less fill the gaps between medium rectangles, and enough really small rectangles to more or less fill the gaps between small rectangles, and so on. This seems to be the hope, rather than clever arrangements of exact matches.

I can be specific about the rarity of exact fits under this algorithm: 20 three-edge contacts in the first 1000 rectangles, 6 in the next 1000, and 4 in the next. Presumably more could be arranged by thinking ahead. Also, a better algorithm could do a lot more to avoid small gaps (which must be the killer in the end, if there is a killer).

It's been a long time since I considered this problem, so, prompted by seeing this question, I was intrigued to discover more about V. Bálint's bound of a square of side $501/500$.

A quick search revealed Bálint's paper A Packing Problem and Geometrical Series.
In this article it is only stated that with some patience one can pack the first 499 rectangles into the unit square. However, the main difficulty of the problem is to pack the larger rectangles and so it would have been nice to see a demonstration.

Bálint addresses the question in Two Packing Problems but I do not have easy access to this and so now I'm concerned that a similar claim, without a demonstration, may have been made in this paper too.

Please could someone with access to the paper lay my concern to rest?

I would very much like to have confidence in the later bound as its validity makes the problem yet more interesting. Can we get arbitrarily close to 1? I still see no good reason why this should be the case but it's a fascinating possibility that hints at the prospect of something quite deep going on.

V. Bálint didn't give a proof of his claim in "Two packing problems" either. I can send you this paper if you're interested. My email is name.surname@gmail.com (just substitute my real name and surname).
– Andrey Rekalo, Aug 3 '10 at 13:32

Yes Andrey, I'm interested, so please send me a copy of the paper. My email is namesurname@ntlworld.com (as with your email, just substitute my real name and surname, and note the lack of a dot separating them).
– Derek Jennings, Aug 3 '10 at 19:20


If you can get arbitrarily close to 1, can't you achieve 1? Consider a sequence of packings whose sizes converge to 1. Next, find a subsequence in which the packing of the first $k$ rectangles converges to some fixed configuration (such a subsequence must exist by compactness). Now, in this subsequence, find a subsubsequence in which the packing of the first $2k$ rectangles converges, and so on. This set of subsequences gives you a set of packings for any number $n$ of rectangles, in which the earlier rectangles don't change position after they've been placed.
– Peter Shor, Aug 9 '10 at 16:26

@PeterShor, is this a rigorous argument? You say that "the earlier rectangles don't change position after they've been placed", but, if I understand the argument correctly, they just don't change position *much*—but a small change can dramatically affect where and whether a tiny rectangle can fit!
– L Spice, Dec 9 '14 at 22:42

@L Spice: It's rigorous. You find a subsequence in which the packing of the first $k$ rectangles converges to a fixed configuration. This fixed configuration (the one to which the first $k$ rectangles converges) is the one that doesn't change position when you take a subsubsequence of this subsequence.
– Peter Shor, Dec 10 '14 at 2:09