Googling the problem of packing circles reveals much learned comment, and lists of best-known packings at Packomania.com. But, as posted in sci.math, the placemat software knows some to be better, most obviously that for 1607 circles on a square page. If using Bunghole-standard glasses, it would have to be a large page.

For smaller tastings, how about an A3 mat, guillotined lengthways (i.e. 420 × 15mm), that can be placed near the centre of the table (leaving uncluttered space for notes, meals, etc.) and hold the ports in an easy-access row?

RAYC wrote:For smaller tastings, how about an A3 mat, guillotined lengthways (i.e. 420 × 15mm), that can be placed near the centre of the table (leaving uncluttered space for notes, meals, etc.) and hold the ports in an easy-access row?

The manual wrote:There are also three simple designs, /TopRow, /MiddleRow, /BottomRow, each having everything in one row, with obvious vertical position. There is also /Sides, with the obvious meaning.
For more than a few glasses these are too cramped.

Packomania.com is for geeky mathematicians, and has lots of examples of packing circles of maximal radius in various containers, including 7×10 rectangles. That is close to the proportions of A4, less a fixed margin.
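As a quick sanity check on the proportion claim (a sketch in Python; A4 is 210 mm × 297 mm, and the variable names are mine):

```python
# A4 is 210 mm x 297 mm; its aspect ratio is sqrt(2) by design.
a4_ratio = 297 / 210       # ~1.4143
packing_ratio = 10 / 7     # ~1.4286, the 7x10 rectangle
# The two differ by about 1%, before subtracting any fixed margin.
mismatch = abs(packing_ratio - a4_ratio) / a4_ratio
```

So a 7×10 rectangle is within about 1% of A4's proportions, even before a fixed margin narrows the usable area.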

Observe Packomania’s best packings for 7 circles and for 10 circles on a 7×10 rectangle:

I’ll implement this as two on the right (/Landscape) or top (/Portrait), reversed for /Mirror, of course, and three-row (three-column) /Diamonds for the rest. Packomania’s 10-glass solution is an epsilon better than that, but generalising the asymmetry would be too complicated. Compared to plain seven-glass /Diamonds, that adds ≈5% to the radius. PW would have wanted that for Warre versus Fonseca tasting.

Are you thinking architectural, prosaic, allusive…? I can't see anything in the pattern that would provide a short metaphorical name at present. I'm guessing it'll probably end up being something like /DiamondsPlusTwo or /DiamondsAsymmetricalSeven.

Step one: write code to solve a quartic equation. Worryingly, I think that I have devised an algorithm as good as Brent’s Method, but simpler, and not needing a pre-chosen x-step.

Assume root bounded by LowerX and UpperX, with matching y values LowerY and UpperY. Interpolation would make the next x value be LowerX + (UpperX − LowerX) × LowerY/(LowerY − UpperY). This can fail for some shapes (e.g., y = x^4 − c), as the interpolated value is always on the same side of the root, so only one side (say, LowerX) ever gets moved.

So instead make the next x value be LowerX + (UpperX − LowerX) × Max[0.143, Min[0.857, LowerY/(LowerY − UpperY) ]]

Repeat until UpperX − LowerX ≤ Tolerance, that constant being pre-determined and small, at which time return the interpolated value (without the clamping).

When LowerX and UpperX are roughly even around the root, it interpolates. When one side is much closer, it brings in the other, moving it by a factor of 1 ÷ 0.143 ≈ 7.

FYI, the “0.143” constant came from a small experiment done in Excel. I do not know whether it should be precisely 1/7, or some other value. But a small inexactitude in this would add only a tiny extra to the algorithm’s average time.

Indeed, this can be seen as a compromise between the slow robustness of interval bisection (“… Max[0.5, Min[0.5, … ]]”) and interpolation (“… Max[0, Min[1, … ]]”).
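The scheme described above can be sketched in Python (an illustrative translation only, not the placemat software’s own code; the function and variable names are mine):

```python
def clamped_interpolation_root(f, lower_x, upper_x,
                               x_tolerance=1e-12, clamp=0.143):
    """Find a root of f bracketed by [lower_x, upper_x].

    Linear interpolation (regula falsi), with the interpolated
    fraction clamped to [clamp, 1 - clamp] so that both bounds
    keep moving; plain regula falsi can leave one bound stuck
    (e.g. for f(x) = x**4 - c).
    """
    lower_y, upper_y = f(lower_x), f(upper_x)
    if lower_y == 0.0:
        return lower_x
    if upper_y == 0.0:
        return upper_x
    assert lower_y * upper_y < 0.0, "root must be bracketed"
    while upper_x - lower_x > x_tolerance:
        frac = lower_y / (lower_y - upper_y)          # plain interpolation
        frac = max(clamp, min(1.0 - clamp, frac))     # the 0.143 clamp
        mid_x = lower_x + (upper_x - lower_x) * frac
        mid_y = f(mid_x)
        if mid_y == 0.0:
            return mid_x
        if (mid_y < 0.0) == (lower_y < 0.0):          # mid on the lower side
            lower_x, lower_y = mid_x, mid_y
        else:
            upper_x, upper_y = mid_x, mid_y
    # bounds now within x_tolerance: final unconstrained interpolation
    return lower_x + (upper_x - lower_x) * lower_y / (lower_y - upper_y)
```

Because the clamped fraction never leaves [0.143, 0.857], the bracket shrinks by at least a factor of 0.857 every iteration, so termination is guaranteed even on the awkward x^4 − c shape.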

In the general case I'd say no. It's useful in the case you refer to because the first post is not currently being regularly updated, so the image posts provide a useful record of the current attendees and ports. If the first post were being regularly updated, I think the additional effort of pdf → jpg → image hosting → post for each placemat iteration would be excessive.

jdaw1 wrote:Step one: write code to solve a quartic equation. Worryingly, I think that I have devised an algorithm as good as Brent’s Method, but simpler, and not needing a pre-chosen x-step.

Assume root bounded by LowerX and UpperX, with matching y values LowerY and UpperY. Interpolation would make the next x value be LowerX + (UpperX − LowerX) × LowerY/(LowerY − UpperY). This can fail for some shapes (e.g., y = x^4 − c), as the interpolated value is always on the same side of the root, so only one side (say, LowerX) ever gets moved.

So instead make the next x value be LowerX + (UpperX − LowerX) × Max[0.143, Min[0.857, LowerY/(LowerY − UpperY) ]]

Repeat until UpperX − LowerX ≤ Tolerance, that constant being pre-determined and small, at which time return the interpolated value (without the clamping).

When LowerX and UpperX are roughly even around the root, it interpolates. When one side is much closer, it brings in the other, moving it by a factor of 1 ÷ 0.143 ≈ 7.

FYI, the “0.143” constant came from a small experiment done in Excel. I do not know whether it should be precisely 1/7, or some other value. But a small inexactitude in this would add only a tiny extra to the algorithm’s average time.

Indeed, this can be seen as a compromise between the slow robustness of interval bisection (“… Max[0.5, Min[0.5, … ]]”) and interpolation (“… Max[0, Min[1, … ]]”).

Presumably this also depends on assumptions regarding the nature of the quartic to be solved, i.e. all-real roots (or at least a real root between the specified starting points), no discontinuities (no matching pole-zero root pairs), etc. In that case, whether bisection, interpolation or your alternative scheme would be quicker in the general case would presumably depend on the family of curves across which the technique is used. An alternative to the factor clamp used to avoid never reaching the root could be to add a small proportion of the step delta determined from interpolation (a deliberate over-adjust), perhaps decreasing the over-step with time to avoid oscillation; similar to techniques used to avoid getting stuck in local minima.
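The over-adjust suggestion could look something like the following sketch (entirely one reading of the idea, not code from either poster; the function name, the multiplicative over-step, the decay schedule and the |f|-based stopping rule are all assumptions):

```python
def overshoot_root(f, lower_x, upper_x, y_tolerance=1e-10,
                   overshoot=0.2, decay=0.9, max_iterations=500):
    """Regula falsi with a decaying deliberate over-step: push the
    interpolated point slightly past the plain estimate, away from
    its nearer bound, shrinking the push each pass to avoid
    oscillation."""
    lower_y, upper_y = f(lower_x), f(upper_x)
    assert lower_y * upper_y < 0.0, "root must be bracketed"
    for _ in range(max_iterations):
        frac = lower_y / (lower_y - upper_y)  # plain interpolation, in (0, 1)
        if frac < 0.5:                        # over-step away from nearer bound
            frac *= 1.0 + overshoot
        else:
            frac = 1.0 - (1.0 - frac) * (1.0 + overshoot)
        overshoot *= decay
        mid_x = lower_x + (upper_x - lower_x) * frac
        mid_y = f(mid_x)
        if abs(mid_y) <= y_tolerance:
            return mid_x
        if (mid_y < 0.0) == (lower_y < 0.0):  # mid on the lower side
            lower_x, lower_y = mid_x, mid_y
        else:
            upper_x, upper_y = mid_x, mid_y
    raise ArithmeticError("no convergence within max_iterations")
```

One caveat baked into the sketch: once the over-step has decayed away this is plain regula falsi, whose bracket need not shrink to zero (one bound can stall on shapes like x^4 − c), so it stops on a small function value rather than on bracket width.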

PhilW wrote:Presumably this also depends on assumptions regarding the nature of the quartic to be solved, i.e. all-real roots (or at least a real root between the specified starting points), no discontinuities (no matching pole-zero root pairs), etc. In that case, whether bisection, interpolation or your alternative scheme would be quicker in the general case would presumably depend on the family of curves across which the technique is used. An alternative to the factor clamp used to avoid never reaching the root could be to add a small proportion of the step delta determined from interpolation (a deliberate over-adjust), perhaps decreasing the over-step with time to avoid oscillation; similar to techniques used to avoid getting stuck in local minima.

We don’t need to reach the root, only for the bounds either side to be within xTolerance of each other, at which point a final unconstrained linear interpolation is done.

Any possibility of multiple roots between the initial bounds (excluding a duplicate root)? I.e., can we exclude the possibility of multiple roots being present, and if not, do we care? That is, are all roots required, or any root? Could there be any bound on the ratio of xTolerance to the initial delta between the upper and lower bounds?
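On the last question, one bound is easy to state (editorial arithmetic, not from the thread): because the clamped fraction lies in [0.143, 0.857], whichever bound moves leaves at most 85.7% of the bracket, so the width shrinks at least geometrically:

```python
import math

def worst_case_iterations(initial_width, x_tolerance, clamp=0.143):
    """Upper bound on iterations of the clamped scheme: each step
    keeps at most a (1 - clamp) fraction of the bracket, whichever
    side moves."""
    shrink = 1.0 - clamp
    return math.ceil(math.log(initial_width / x_tolerance)
                     / math.log(1.0 / shrink))

# Shrinking a bracket of width 1 down to 1e-12 takes at most
# 180 iterations; pure bisection (clamp = 0.5) guarantees 40.
```

So the worst case is roughly 4.5× bisection, in exchange for near-interpolation speed on well-behaved curves; multiple roots in the bracket would still leave the method converging to just one of them.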