Topic: Research Project using a possible new knot

The gallon might itself be seen as a sample of the greater batch of that milk. I'm pointing to the *evenness* of a thing; your supposed challenges to this miss that point -- of course one wouldn't think this gallon of 2% milk implied all milk was so. And I don't expect this climbing rope to imply things for THAT one, or for yachting ropes, or ...; and I will urge that "KNOT strength" is better conceived as "this-material-so-knotted" strength.

Ok, let's back up a bit. What exactly are you trying to learn about milk in your one-gallon thought experiment? What exactly are you trying to learn about knots in your analogous spool-of-rope test?

If you know that the milk is homogeneous and want to determine fat content, then you need test only one sample to be confident that further tests would yield the same result. That's analogous to testing your rope to see that it is polyester. If one test reveals that it is polyester, and you know that all the rope on the spool is the same, you don't have to do further tests to be confident that they would also reveal polyester.

That doesn't, however, tell you anything about knots. If you wanted to know, let's say, how likely a particular knot tied in line from that spool is to jam, a single test won't tell you that. Knot jamming probability does not have zero variance. Not all tests will yield the exact same result. All a single test tells you is that it's possible for that knot in that line to jam (or not). It doesn't tell you (with any confidence whatsoever) what the likelihood is of the next knot jamming.
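To put a rough number on that last point, here is a minimal sketch in Python (the jam counts are invented for illustration) computing a Wilson score confidence interval for a jam probability. With a single test, the interval spans most of the possible range, which is just another way of saying the test tells you almost nothing about the next knot.

```python
import math

def wilson_interval(jams: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion (z = 1.96)."""
    p = jams / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return max(0.0, centre - half), min(1.0, centre + half)

# One pull, and the knot jammed: the interval still spans most of [0, 1].
print(wilson_interval(1, 1))    # roughly (0.21, 1.0)

# Thirty pulls with 10 jams narrows the estimate considerably.
print(wilson_interval(10, 30))  # roughly (0.19, 0.51)
```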

And you know that, or else you wouldn't have said:

Quote

I have argued for --where possible-- the single *test* of multiple-tokens of a knot, stringing a line say with 5-10 identical knots in it, to get a break

Here you've constructed a way of conducting 5-10 tests on separate knots with a single pull. A compound test like that might yield more confidence in the minimum strength value, but it still doesn't tell you anything about the distribution of those values.
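A small illustration of that limitation, with break-strength figures invented purely for the example: two lines can report the same weakest-knot value from a multi-knot pull while hiding very different spreads.

```python
# Hypothetical break strengths (kN) for 8 knots in each of two lines --
# the numbers are invented for illustration, not measured.
tight = [9.8, 9.9, 10.0, 10.0, 10.1, 10.1, 10.2, 10.3]  # low spread
loose = [9.8, 10.9, 11.6, 12.2, 12.8, 13.5, 14.1, 15.0]  # high spread

# A string of 8 knots broken in one pull reports only the weakest knot,
# and here both lines would report the same minimum:
assert min(tight) == min(loose) == 9.8

# ...yet the distributions behind that shared minimum differ greatly:
print(round(max(tight) - min(tight), 1))  # 0.5 kN range
print(round(max(loose) - min(loose), 1))  # 5.2 kN range
```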

If you still think you can determine the mean/median/mode/min/max breaking strength of a knot in given cordage with a single test, then we're never going to agree. If you think you need more than one test, then how many you need depends on distribution of the test results (which you probably don't know in advance), the amount of confidence you want, and margin of error you're willing to accept.

It's very tempting to skimp on the number of samples because it's inconvenient to take them. Just realize that you're going to sacrifice confidence or error rate. Do you want to be 95% confident in your results, or 50%? Do you want a 15% margin of error, or a 30% margin of error? At what point do you no longer learn what you set out to discover?

One last time, conventional wisdom in statistics is that if you don't know the population distribution in advance, you need a minimum of 30 random samples (and quite possibly more) to determine that distribution with meaningful confidence (and even then, it's possible to be wrong). Don't take my word for it - ask a statistician or play with the numbers yourself (there are a number of sample size calculators available online). If you choose to take fewer samples, then be prepared for people to dismiss your results as insignificant.
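For those who would rather see the arithmetic than trust a calculator: the standard worst-case sample-size formula for estimating a proportion is n = z²·p(1−p)/e², with p = 0.5 assumed when the distribution is unknown. A minimal sketch:

```python
import math

def sample_size(z: float, margin: float, p: float = 0.5) -> int:
    """Minimum n to estimate a proportion within +/- margin.

    Standard formula n = z^2 * p * (1 - p) / margin^2, using the
    worst case p = 0.5 when the true proportion is unknown.
    """
    return math.ceil(z**2 * p * (1 - p) / margin**2)

# 95% confidence (z = 1.96) with a 15% margin of error:
print(sample_size(1.96, 0.15))  # 43
# Loosening to a 30% margin cuts the requirement to about a quarter:
print(sample_size(1.96, 0.30))  # 11
```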

I agree with the others that we have taken this conversation too far off topic already, so I'll leave it at that.

Quote

That doesn't, however, tell you anything about knots. If you wanted to know, let's say, how likely a particular knot tied in line from that spool is to jam, a single test won't tell you that. Knot jamming probability does not have zero variance. Not all tests will yield the exact same result. All a single test tells you is that it's possible for that knot in that line to jam (or not). It doesn't tell you (with any confidence whatsoever) what the likelihood is of the next knot jamming.

I think we'll find comfort that the variation just isn't so great to worry about, "at least in some knots", as I offered might be reasonably repeatedly tied alike; some others of more complexity might not behave so predictably.

Quote

And you know that, or else you wouldn't have said:

Quote

I have argued for --where possible-- the single *test* of multiple-tokens of a knot, stringing a line say with 5-10 identical knots in it, to get a break

Here you've constructed a way of conducting 5-10 tests on separate knots with a single pull. A compound test like that might yield more confidence in the minimum strength value, but it still doesn't tell you anything about the distribution of those values.

Though, per above..., we might come to some comfortable & reasonable belief that the range is not going to surprise us.

Quote

then how many you need depends on distribution of the test results (which you probably don't know in advance), the amount of confidence you want, and margin of error you're willing to accept.

Given vagaries of tying & various materials --and this means same brand but different histories of usage--, I think that getting the sort of statistical level of confidence that is defined in the pure math is ... well, distant from meaningful/useful information.

Some thorough testings, esp. to focus on some particular factors (e.g., having pretty evenly made & scaled smaller-to-thicker like ropes in checking if **size** has whatever effects, and so on), might be the basis for later taking few tests w/ some confidence if results are where expected.

Assuming that TestPerfect did some statistically impressive oooodles of test cases and ..., just what confidence does that give? Given that someone else did whatever tying, that the load was applied in just some manner (unlikely to be like actual use), and the rope was just that rope in just that condition.

Quote

I think we'll find comfort that the variation just isn't so great to worry about, ... Though, per above..., we might come to some comfortable & reasonable belief that the range is not going to surprise us.

How do you know that, until you test it? And, how do you know that until you perform enough tests to have confidence in the results (statistically speaking)?

Quote

Assuming that TestPerfect did some statistically impressive oooodles of test cases and ..., just what confidence does that give ?

Go to one of many sample size calculators online (such as https://www.surveysystem.com/sscalc.htm), plug in the numbers, and it will tell you exactly how confident you can be (again, statistically speaking).
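Such calculators implement a simple formula. A sketch of the worst-case margin of error for a proportion, run for a few arbitrary sample sizes, shows what piling on test cases actually buys:

```python
import math

def margin_of_error(n: int, z: float = 1.96, p: float = 0.5) -> float:
    """Half-width of the confidence interval for a proportion,
    using the worst case p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

# Margin of error in percentage points at 95% confidence:
for n in (4, 30, 100, 1000):
    print(n, round(100 * margin_of_error(n), 1))
# 4 -> 49.0, 30 -> 17.9, 100 -> 9.8, 1000 -> 3.1
```

Note the diminishing returns: quadrupling the sample size only halves the margin of error.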

Quote

Given that someone else did whatever tying, that the load was applied in just some manner (unlikely to be like actual use), and the rope was just that rope in just that condition.

What are you trying to find out? Haphazard testing will yield haphazard results. If you don't construct your test meaningfully, you won't get meaningful data.

I'm not saying that any test has to be performed X times in order to be useful -- I've done informal knot testing myself with statistically insignificant sample sizes, but that was just to get an idea of what might be interesting to investigate further. I don't even remotely assume that those results predict the probability distribution of future outcomes.

I am saying that if you want results that accurately reflect the general population, and are useful for predicting future results, then you're going to need sample sizes that are statistically significant. No amount of optimistic assumption or wishful thinking is going to change the math.

You have vanished from this thread, but perhaps you are still reading with interest? At some point, this topic drifted into testing methodologies and statistically valid sampling methods... maybe the first divergence occurred roughly at reply #17 and then escalated rapidly.

I think most of the replies from #17 onwards could be an entire new topic just discussing repeatable testing methodologies and statistically valid sampling methodology.

By now, you should be aware that the 'yachting monthly' test report is just another example of poorly conceived and poorly conducted testing. NautiKnots and Dan Lehman have already voiced their opinions herein - hopefully you won't make the same mistakes?

Quote

My pilot tests were as follows.

Bowline v Round turn and two half hitches - The round turn and two half hitches won outright 4 times out of 4 (as expected, based on work and climbing experience, and also Marlow Ropes' online testing).

When you mention 'Bowline' - exactly which type of 'Bowline' are you referring to? There are many different forms of a 'Bowline' (note that I wrote 'a Bowline' and not 'the Bowline'). I am going to take a wild guess and 'assume' you meant the common #1010 Bowline, which is based on a single right-hand nipping loop? This seems to be the default 'Bowline' that knot testers appear to be fixated on. It's a pity that other 'Bowlines' are ignored (or, in ignorance, simply not known). I would be most interested if you could test Scott's locked Bowline.

However, I would like to examine properties other than the default 'pull-it-till-it-breaks' mindset. A significant proportion of knot testers are fixated on the idea of probing the MBS yield point of a knot (ie pull till it breaks). This mindset permeates nearly all of humanity. It would be nice to see a different approach... such as probing the following aspects:
[ ] jamming threshold
[ ] instability threshold
[ ] geometry at various load milestones (ie at certain loads, stop and photograph the knot structure - and compare to a 'control' of no load)
[ ] If you are in the majority mindset of pull-to-failure type thinking, could you at least test 'Bowlines against Bowlines'. For example, test #1010 against Scott's locked Bowline, and #1010 against a 'slipped' #1010 (adds 3 rope diameters inside the nipping loop).

Quote

Looped double fisherman's (Scaffold hitch) v Round turn and two half hitches - The round turn and two half hitches won outright 4 times out of 4 (as expected, based on work experience and on Marlow Ropes' online testing).

Please use 'ABoK' numbers where they exist to aid in positive identification. Also, realize that these knot structures act as 'nooses'. You should characterize them as such. In fact, they are 'composite' structures consisting of:
1. A tensionless hitch; and
2. A securing mechanism (ie a strangled double overhand knot versus 2 half-hitches, which likely form a clove hitch).
The difference between the 2 structures is the type of securing mechanism.

Quote

'English Braids' have very kindly provided me with 200 metres of 4mm 12 stranded polyester dinghy control line to continue my testing.

I wish you could obtain human-rated ropes (eg EN1891 abseil rope and EN892 dynamic climbing rope). Is this an impossibility?

Quote

I have tested to failure (three times), short lengths of their control line with a splice in each end using known static weights. So I now know what load the splices part at. The next stage is to test my hitch against the splice under different environmental conditions.

Is there any reason why you couldn't terminate each end using a 'tensionless hitch', where the remaining tail is then clamped (instead of a 'splice')?

Quote

Based on Marlow Ropes' online knot test, in which the round turn and two half hitches is rated very highly against a splice, I have high hopes for my hitch, as it outperformed the round turn and two half hitches by far.

By now, you realize that it isn't 'your' hitch - ie it isn't 'new'.

...

Tim, I believe that there are 3 different types of testers, as follows:

1. Hobbyist/Enthusiast testers (aka Backyard testers), who largely act in isolation: usually an individual who isn't well funded and doesn't have sophisticated force generating equipment that is regularly calibrated. The individual is usually an enthusiast and may seek assistance from a friend or acquaintance. Reporting is generally not bound to scientific rigor.

2. Pseudo lab testers: usually individuals, but sometimes 2 or more persons, who are roping/rescue/rope access enthusiasts. They are not a certified test lab but do have force generating equipment and the means to capture data. They have freedom to test in any way they desire, and their testing isn't accountable to third party accrediting agencies. Scientific rigor falls upon the individuals' experience and knowledge (eg whether they have background education from a college/university or access to expertise in repeatable methodology).

3. Certified, nationally accredited test labs: these use calibrated force generating equipment and test strictly in accordance with their accredited status. Such entities are normally a business enterprise, and they routinely test things to destruction. The personnel at these labs are generally not knotting enthusiasts, and knot tying skills aren't part of their day-to-day employment. All reporting is bound to rules of scientific rigor and statistical sampling methodology.

Quote

I think we'll find comfort that the variation just isn't so great to worry about,

Quote

How do you know that, until you test it? And, how do you know that until you perform enough tests to have confidence in the results (statistically speaking)?

Well, seeing lack of variation in some tested cases could lead to expecting that in others that had nothing to make one suspect otherwise (and then what few test cases fell into range).

Quote

Quote

Assuming that TestPerfect did some statistically impressive oooodles of test cases and ..., just what confidence does that give ?

Go to one of many sample size calculators online (such as https://www.surveysystem.com/sscalc.htm), plug in the numbers, and it will tell you exactly how confident you can be (again, statistically speaking).

My point here is that the precision of factors leaves all variations still in question. Yes, a calculator can tell about X at Y & Z repeated, but not of X2 at Y & Z2. So, you narrow the testing in a sense --i.e., concentrate your test cases-- and gain that statistical confidence, but at the cost of breadth of applicability.

Quickly :: I don't want to seem hostile to the use of these maths, but one needs knowledge of much broader reaches than will be got if concentration of test cases is all that one does.

By definition, you do not need to test this knot because no knot can be stronger than the cord it is made from, and cord MBS is measured by winding the cord around a round anchor - i.e. essentially a knotless fixing.

Provided the number of turns you use is sufficient for the cord/anchor combination to shed all the force before the cord leaves to make the final strangle tie-off, then the cord will rupture at its MBS at wherever its weakest point happens to be.

The only exception to this situation would be if you have insufficient turns and residual force exits the last turn, finishing up as a lateral force against the SP (standing part) at the Strangle attachment. The slight angular deflection at that point will act as a weak point, the weakness being proportional to the angular displacement.
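The force-shedding of successive turns described above is conventionally modeled with the capstan equation, T_tail = T_load · e^(−μθ). A sketch, where the friction coefficient μ = 0.3 is an assumed illustrative value (real cord-on-anchor friction would have to be measured):

```python
import math

def residual_tension(load: float, mu: float, turns: float) -> float:
    """Tension remaining in the tail after `turns` full wraps around a
    round anchor, per the capstan equation T_tail = load * exp(-mu * theta)."""
    theta = 2 * math.pi * turns  # total wrap angle in radians
    return load * math.exp(-mu * theta)

# With an assumed mu = 0.3, each added turn sheds most of the remaining load:
for turns in (1, 2, 3, 4):
    print(turns, round(residual_tension(1000.0, 0.3, turns), 1))
```

On these assumed numbers, four turns leave well under 1% of the load at the tie-off, which is the "sufficient turns" condition in words.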

So, please ignore all the shedload of 'protocols' and statistically significant sample numbers cited above; use your Engineer's eyesight, look at how cord is anchored in the MBS test rig, then make sure your knot has sufficient turns to match this, and by definition it must be as strong as the cord itself.

Welcome to the wonderful world of Nodeologists, and please keep on knotting and stirring up the dust on this Forum.

Quote

By definition, you do not need to test this knot because no knot can be stronger than the cord it is made from, and cord MBS is measured by winding the cord around a round anchor - i.e. essentially a knotless fixing. ...

What happens where testing gives contrary evidence?! "By definition," the testing is wrong?!!

I remark this in recalling one fellow who IIRC was the editor of an angling magazine (USA) getting just such puzzling results --and explicitly recognizing and re-testing them (in contrast to some reports that ignore them!)-- and, well, ... he had no explanation. I think it was a particular-#-of-wraps Bimini twist that didn't break, but the line did, and did so at higher load(s) than did the line when he tested it --yes, another good point: he got his own tensile figure and contrasted it w/ the nominal one from the maker (his were way higher)! -- !!

[Oh, I think that this is the guy & site & more recent than what I recalled, but a point to begin your own explorations: DOUG OLANDER, www.sportfishingmag.com/best-fishing-knots-main-line-to-terminal-gear#page-19]

One can beware the claims of evidence of "stronger than the rope" from testing a round sling with one so-called knot in it, which don't consider that knot compression can feed slack into the knotted side and thus reduce tension there and ... the break can occur at the pin and not the knot.

(And then there is this elsewhere-examined Yachting Magazine test-result image showing a break in the >>>eye leg<<< of a knot!! Huh?!)

Quote

What happens where testing gives contrary evidence?! "By definition," the testing is wrong?!!

When testing gives contrary evidence, then your understanding of the limitations of the testing or the interpretation of the statistics is likely wrong...

Quote

yes, another good point: he got his own tensile and contrasted it w/ the nominal one from maker (his were way higher)! -- !!

1. Was his tensile testing calibrated?
2. Makers quote MBS, which is typically 3 SDs below the highest figure; and for added security, some manufacturers quote MBS at 2 or 3 SDs below the mean, so that 99.9% of their cord will perform within the quoted MBS.
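The connection between an SD-based quote and the 99.9% figure can be checked against the normal distribution. A sketch with invented cord figures (the mean and SD are not from any real datasheet):

```python
import math

def fraction_above(mean: float, sd: float, threshold: float) -> float:
    """Fraction of a normal population above `threshold`, via the error function."""
    z = (threshold - mean) / sd
    return 0.5 * (1 - math.erf(z / math.sqrt(2)))

# Hypothetical cord: mean break 24.0 kN, SD 0.8 kN.
mean, sd = 24.0, 0.8
for k in (2, 3):
    quoted_mbs = mean - k * sd
    print(k, round(quoted_mbs, 1), round(100 * fraction_above(mean, sd, quoted_mbs), 2))
```

A quote at mean − 3 SD leaves about 99.87% of the cord at or above the quoted figure, matching the "99.9%" claim; mean − 2 SD leaves about 97.7%.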