troyp wrote:Personally, I can't understand how people tolerate IDEs. I mean, sure, I understand using them for debugging. That's what they're good for. But writing code? You get stuff like prettier autocompletion that's configured out of the box, but in exchange you have to write code without a decent text editor. When I've used an IDE it's always a huge chore to write anything, and when I get serious I have to set autoreversion and switch to Emacs to do my real coding.

Maybe when you're managing a large project IDEs become more useful, but I still can't see why you wouldn't switch to an editor for editing. Unless you've got something like Eclim that lets you get real editing inside your IDE.

I'm curious, what features make a text editor "decent"?

Answering for myself: the features I look for in a text editor qua a text editor are those which enable me to browse and rearrange text quickly, precisely, and ideally without taking my hands off the keyboard.

troyp wrote:If you really wanted to, you could probably use another editor and run ozc from the command line, but you'd have no syntax highlighting, autoindentation, etc, etc.

I'm going to try it. TextWrangler allows for configurable (downloadable) syntax files, so I'm going to hunt around and see if there are any I can use. But even without syntax highlighting I think I will prefer this route.

troyp wrote:(1) What shortcuts were you used to that didn't work in Aquamacs? If you want the C-c, C-v, C-x bindings for copy/paste/cut, you need to activate CUA mode, but other common shortcuts should work.

The thing is, I type in Dvorak but use a keyboard setting that allows me to use standard keyboard shortcuts. So even though the "C" key on the keyboard will actually type a J, when used in a keyboard shortcut it will work as a C and allow me to Command-C for Copy. This only works with the Command key though, not the Ctrl or Option keys if used without the Command key. And it doesn't work in all programs. For instance, when I work in Photoshop or InDesign I just use QWERTY layout so all the keyboard shortcuts work the way they are labelled in the menus. That's not a problem, because I'm not usually doing much typing in those programs, mostly mouse work and so forth. In a text editor, it becomes much more of a nightmare.

But actually that's only half of the problem. Even without my odd keyboard preferences, there are still issues. Nearly all of the standard Mac text editing keyboard shortcuts do something else in Aquamacs. These are: Command-left or right to move to beginning or end of the line, option-left or right to skip by words, command-up or down to go to beginning or end of the file, option-up or down to skip up and down by paragraphs, and hold down shift with any of the above to highlight instead of moving the cursor. And yes, I routinely use Command C, X, V, Z, A, S, Q, W, and also Command F...the list goes on. These are all standard Mac shortcuts for text editing. They ALL work in TextWrangler.

troyp wrote:(2) Note that {Show ...} prints to the standard output, whereas {Browse ...} brings up the graphical browser. So if the latter doesn't work, your graphical browser isn't working. From memory, I think you need to choose "Oz -> Show/hide -> Emulator" from the menu to get the browser to appear. At worst, if you can't get the browser to work, you can always replace calls to {Browse} with {Show}.

Yeah, I just need to play with it more. But I'm going to see if I can get it working from the command line and then if that works, I'll see if I can get syntax highlighting in TextWrangler.

Honestly I feel like the CTM book is more about the ideas than the actual code. I'm not going to use Mozart-Oz to write any production programs, so while I would like to have it working well enough to easily run the examples they give and maybe do the exercises, I can also just keep learning Java and use CTM on the side just to expand my concepts and understanding of various programming paradigms, without even running the silly Oz code.

Personally, I can't understand how people tolerate IDEs. I mean, sure, I understand using them for debugging. That's what they're good for. But writing code? You get stuff like prettier autocompletion that's configured out of the box, but in exchange you have to write code without a decent text editor. When I've used an IDE it's always a huge chore to write anything, and when I get serious I have to set autoreversion and switch to Emacs to do my real coding.

Maybe when you're managing a large project IDEs become more useful, but I still can't see why you wouldn't switch to an editor for editing. Unless you've got something like Eclim that lets you get real editing inside your IDE.

Personally I couldn't live without Visual Studio + Resharper + NCrunch, plus other plugins like Greenkeeper, which allows one-click, automated rearrangement of properties, methods and the like into alphabetical order, newspaper mode, and so on.

Resharper allows for project-wide refactoring, code quality enforcement (i.e. does it follow company standards?) and too many other benefits to list.

NCrunch allows for continuous, real-time unit testing, with errors and code-coverage failures shown inside the code itself as you type (great for TDD).

Sure, with a lot of work maybe you can configure a stand-alone text editor to do some of it, but with a modern IDE it all just works out of the box and can be configured centrally, meaning all programmers automatically know when their code fails to follow the latest agreed upon communal coding style, or fails code-complexity or code-cohesion metrics and so on.

I suppose the most important is the ability to customize the editor to suit your needs. You should be able to customize all aspects of the editor in real time, in a real programming language. It should have a system of modes to make it easy to switch between different keymaps and behaviours. There should be hooks for important events, so behaviour can be added in a systematic way. Additionally, editor functionality should be exposed as functions which can be replaced or advised if required. The various parameters controlling editor behaviours should be able to be overridden when required (some type of dynamic scoping is useful for this purpose). And obviously, all keybindings should be rebindable to arbitrary functions.

To be able to use a powerful editor, a powerful help system is also a must. If you need to know the function a key (combination) is bound to, or what that function does, or you need to see its source, that should be quick and easy to do.

Basically, if you want to do something text-based, and you know how to do it, you should be able to instruct the editor to do it without undue hassle (hassle which is likely to prevent you from actually doing it).

I suppose your average "modern" text editor is well-designed for someone who only uses an editor lightly and won't bother adding custom behaviour, but I wouldn't want to use one. When the editor (or an extension) has behaviour you don't like, you often just have to put up with it. And you can't add arbitrary behaviour without going through whatever arcane ritual they've decreed for building extensions. (And I just know that if I did go to the trouble, I still wouldn't have free rein - these editors simply aren't designed to be extensible.)

Wildcard wrote:The thing is, I type in Dvorak but use a keyboard setting that allows me to use standard keyboard shortcuts. So even though the "C" key on the keyboard will actually type a J, when used in a keyboard shortcut it will work as a C and allow me to Command-C for Copy. This only works with the Command key though, not the Ctrl or Option keys if used without the Command key. And it doesn't work in all programs. For instance, when I work in Photoshop or InDesign I just use QWERTY layout so all the keyboard shortcuts work the way they are labelled in the menus. That's not a problem, because I'm not usually doing much typing in those programs, mostly mouse work and so forth. In a text editor, it becomes much more of a nightmare.

But actually that's only half of the problem. Even without my odd keyboard preferences, there are still issues. Nearly all of the standard Mac text editing keyboard shortcuts do something else in Aquamacs. These are: Command-left or right to move to beginning or end of the line, option-left or right to skip by words, command-up or down to go to beginning or end of the file, option-up or down to skip up and down by paragraphs, and hold down shift with any of the above to highlight instead of moving the cursor. And yes, I routinely use Command C, X, V, Z, A, S, Q, W, and also Command F...the list goes on. These are all standard Mac shortcuts for text editing. They ALL work in TextWrangler.

I'm surprised at the movement shortcuts being different. I could swear they were always very similar for me in (vanilla) Emacs. (I've changed most of them because I never use them for their original purpose in an editor). Although it sounds like they're a bit different on OS X? So in OS X, Command==Super and Option==Alt, but Command is often used in place of Ctrl in combinations? I'm sure on Linux and Windows, Ctrl-left/right moves by words, Ctrl-up/down goes by line beginning/endings, Home/End goes to the beginning/end of the current line (idempotently), Ctrl-Home/End goes to beginning/end of file and shift adds selection. I think Alt-movement is usually reserved for the application (at least on Linux).

Personally, I use vi and/or emacs bindings for everything, which I find works a lot better than trying to accommodate CUA bindings. But you can change the bindings to suit your preferences. Chances are, there's a config available to restore the default OS X bindings. Probably one for converting Dvorak bindings too. Maybe not for the combination, though. Still, it's easy to change whatever bindings you want. I won't go into detail now since you've decided not to use Aquamacs, but basically, to set Alt-right to go to the end of the line, you'd start with C-e (the current end-of-line key), do "C-h k C-e" to find what it points to (it'll say "end-of-line"), then add "(global-set-key (kbd "<M-right>") 'end-of-line)" to your init file. (M- is Emacs for "Alt".) If you hit C-x C-e at the end of the line (in any Emacs Lisp buffer), you'll execute the line, so you can check it's working.

elasto wrote:Personally I couldn't live without Visual Studio + Resharper + NCrunch plus other plugins like Greenkeeper which allows one-click, automated rearrangement of properties, methods and the like into alphabetical order, newspaper mode and so on

Resharper allows for project-wide refactoring, code quality enforcement (ie. does it follow company standards) and too many other benefits to list.

I know everyone who uses VS loves Resharper. I suppose it must have something going for it. I've always been a bit dubious about how powerful an IDE extension could be as a refactoring tool, though. Refactoring is just so... general. I haven't given it much thought, but the one conclusion I have is that the right way to offer a refactoring solution would be as a library. It seems to me an IDE extension would need to implement its own programming language to do a good job (and ideally its own editor, for that matter, since some refactorings are inevitably going to require extensive user interaction, even if they're software-aided). My assumption is that Resharper must identify a very limited but useful class of refactorings and make those as easy as possible. That would certainly be worthwhile.

I'm curious about this. Refactoring is something I'm not efficient enough at (I just use general editing mechanisms). Which is bad, because it means I'm less likely to do it. I've been meaning to check out the available tools. What kind of refactorings does Resharper allow? I assume it does stuff like renaming and moving stuff around. But what else? And why does it make it so much easier?

NCrunch allows for continuous, real-time unit testing with errors and code-coverage failures shown inside the code itself as you type it (great for TDD)

Sounds like a pain in the arse. Well, IDEs would have a slight presentational advantage, but I can't really see the difficulty in setting up whatever tests you want in an editor. Automation is what editors are good at (as well as, you know, editing).

troyp wrote:Personally, I can't understand how people tolerate IDEs. I mean, sure, I understand using them for debugging. That's what they're good for. But writing code? You get stuff like prettier autocompletion that's configured out of the box, but in exchange you have to write code without a decent text editor. When I've used an IDE it's always a huge chore to write anything, and when I get serious I have to set autoreversion and switch to Emacs to do my real coding.

Maybe when you're managing a large project IDEs become more useful, but I still can't see why you wouldn't switch to an editor for editing. Unless you've got something like Eclim that lets you get real editing inside your IDE.

I'm curious, what features make a text editor "decent"?

Answering for myself: the features I look for in a text editor qua a text editor are those which enable me to browse and rearrange text quickly, precisely, and ideally without taking my hands off the keyboard.

And why shouldn't an IDE have proper input? KDevelop has a vi-style input mode, plus (configurable) keyboard shortcuts for all the tools relying on code parsing that you simply can't have in a plain editor.

troyp wrote:Personally, I can't understand how people tolerate IDEs. I mean, sure, I understand using them for debugging. That's what they're good for. But writing code? You get stuff like prettier autocompletion that's configured out of the box, but in exchange you have to write code without a decent text editor. When I've used an IDE it's always a huge chore to write anything, and when I get serious I have to set autoreversion and switch to Emacs to do my real coding.

Maybe when you're managing a large project IDEs become more useful, but I still can't see why you wouldn't switch to an editor for editing. Unless you've got something like Eclim that lets you get real editing inside your IDE.

I'm curious, what features make a text editor "decent"?

Answering for myself: the features I look for in a text editor qua a text editor are those which enable me to browse and rearrange text quickly, precisely, and ideally without taking my hands off the keyboard.

And why shouldn't an IDE have proper input?

It should. In my experience, they usually don't.

I have used several IDEs with vim-style input. They have all been pretty garbage compared to vim.

Am I right to think that for some program sizes and input and output lengths, and for basically any value of t, there is a computable function which cannot be computed in under t time by any program within the program-size limit?

If the program size is less than the output size, then "the dictionary-order-first output value for which no program operating on the given input halts and gives that output value in under t time" seems like it should be well defined, and also like it can't be computed from the given input, within t time, by any program of the given size.

Is it possible for a similar method to guarantee something similar, but with the addition that it be quick to verify an answer?

It seems like it couldn't be quite the same, because if it is quick to verify a solution, then a program that has a solution to a specific input could probably (but maybe not?) have it hardcoded, and so would be able to give the result in under t. But, no, hardcoding a response doesn't work without the easy verification, so I'm not sure why adding the ability to verify an answer quickly would allow it to do that.

so, I'm not sure if it would be possible.

It seems like if there were such a function, such that the result could be verified in some fraction of t, it could be used as a very secure proof of work.

Even without the ability to quickly verify the correct output for an input, it seems like it could still be used for a time lock puzzle?

I think the simplest algorithm for it as described above would take time which is a large multiple of the time t, and only really guarantee that it can't be solved in under time t, and even that only makes the guarantee if the attacker is not using a program which is very large...

But I would expect that designing a larger program to get past it would itself require a large amount of computation, so I'm not sure that would be a useful attack. I'm also not sure an attacker could practically do much better in that time than the runtime of the simplest algorithm to compute it, even with the large constant multiple. And if ways to lower that constant are found, then once they are publicly known they could be used by the one setting up the time-lock puzzle just as well, so overall, finding such improvements would not give attackers any advantage, and would instead only make the guarantee more exact?

I assume all of this is either old news or possibly just pointless, but I thought it was interesting.

I found my old forum signature to be awkward, so I'm changing it to this until I pick a better one.

Above some t, related to the BB function, all halting programs of that size will have halted, and thus all computable functions expressible in that program size have been computed. So no, there's no such function.

(Likely, it's because BB grows so fast that you can't *encode* the time t in a program of the given size, so you can't even define the program you're talking about in the size you have to work with.)

Above the t at which all halting programs of the given size have halted, the function's output would be the dictionary-order-first output which is not the output of any program that is under the length and halts; such an output must exist because the output space is bigger than the program space.

Oh, ok I understand what you mean now I think.

Ok, good point, if you pick a large enough t, there is no program which fits under the program size which calculates the function for that program size and for that t.

But the function still exists and is computable I think. It would just have to be computed by a program which is bigger than the programs that the function is defined in terms of.

Probably because, as you say, a program would not have room to specify the time t.

Though, if there is no bound on the size of the input to the program, and the input to the program included t as well as the input to the function to be computed, I figure that program would be able to fit in the program size the function is defined in terms of?

Though, I expect that if one ever used a function like this for practical use, they wouldn't need values of t that large? If they care about times that long, then I think they would have enough storage to store t, and programs long enough to store t would be relevant.

(Which would have a corresponding t too long to represent, but that would not be the t they would care about)

I think the following pseudocode shows that the function is computable (or rather: for each t, program length, and output length, the function for those is computable, provided the outputs are longer than the programs):

ProgramLength = zzz
OutputLength = zzz + x
MaxT = T

def f(inp):  # ('in' is a reserved word in Python, so the argument is named inp)
    foundOutputs = {}
    for program in programsOfLength(ProgramLength):
        (halted, output) = runForTime(program, inp, MaxT)
        if halted:
            foundOutputs[output] = True
    for output in outputsOfLength(OutputLength):
        if output not in foundOutputs:
            return output
    raise Exception("output space must exceed program space")

Where by pseudocode I guess I mean "Python, but with some functions that I don't define".

Edit: also, thank you very much for your response

I found my old forum signature to be awkward, so I'm changing it to this until I pick a better one.

This article claims that fingerprints are unhashable, since you need to accept near misses. That can't be right, can it? You could have a hash function that maps small displacements to small displacements within a certain range, but then reverts to unpredictable "all-over-the-place" behaviour. It would still make it easier to guess the password — there's no way around that — but you wouldn't have the possibility of exponentially closing the distance to a known hash.

A simple example (mapping ints→ints) would divide the integers into a series of intervals and then have all the numbers in the same interval map to the same result. Eg.

If the hashed password attempt matches one of those three hashes, you accept it. That way, any attempt within 100 of the correct password will be accepted, but the password still can't be easily found from the hash. (Obviously, the number of effective passwords is reduced by a factor of 100, and a given hash has three chances to match rather than one... but that's still acceptable)
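A minimal sketch of that ints→ints scheme in Python, assuming intervals of width 100 (the helper names and the use of bare SHA-256 are my own illustration choices; a real system would use a salted, deliberately slow KDF):

```python
import hashlib

INTERVAL = 100  # accept attempts within this distance of the password

def bucket(n):
    # All integers in the same width-100 interval share a bucket, so the
    # hash reveals nothing about position within the interval.
    return n // INTERVAL

def h(b):
    # Placeholder hash for illustration only.
    return hashlib.sha256(str(b).encode()).hexdigest()

def enroll(password):
    # Store hashes of the password's bucket and its two neighbours:
    # these are the "three hashes" an attempt gets to match.
    b = bucket(password)
    return {h(b - 1), h(b), h(b + 1)}

def verify(attempt, stored):
    # An attempt within 100 of the password lands in one of the
    # three stored buckets, so its single bucket-hash will match.
    return h(bucket(attempt)) in stored

stored = enroll(12345)
assert verify(12345 + 99, stored)       # near miss: accepted
assert not verify(12345 + 500, stored)  # far off: rejected
```

As in the text, this costs a factor-of-100 reduction in effective passwords plus the three chances to match, but small displacements stay small while the hash itself remains unpredictable across buckets.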

Of course, trying to do something like this with fingerprints would be much more challenging, since you'd need a way to quantify the "distance" between fingerprints so that different readings of the same fingerprint are close but different prints are far apart. (And I don't know if researchers would have incentive to do it, since there are other serious issues with using biometrics for security, as the article mentions.) But if they could do that, then they should be able to hash them, yes? Or is there something I'm not accounting for?

With that, I guess one could try checking if the hash of the input is similar to the stored hash?

However, I don't know that perceptual hashing is designed with the right type of security in mind?

Hm..

Oh!

With cryptographically obfuscated programs, the initial "hash" could be an obfuscated program that accepts an input if it is similar enough to the fingerprint it was made for, and rejects inputs that aren't.

With current technology I don't think that would be practical, but I think that suggests that the task you describe is theoretically possible?

Unless the secret sharing scheme method of program obfuscation would work (as that is, I think, much more efficient at the risk of parties colluding to break it?), but I'm not sure that would be an acceptable solution, and I'm not sure it would be fast enough even then.

I found my old forum signature to be awkward, so I'm changing it to this until I pick a better one.

TL;DR as far as I understood it: They developed a Turing machine with at most 7918 states, the behaviour of which cannot be proven by ZF set theory (plus some extension theory concerning large cardinal numbers).

The obvious thing I learned from this is the existence of "unprovable Turing machines", which seems obvious only in hindsight. The not-so-obvious thing I learned is that you can argue about what a given set of axioms can prove in hard, tangible terms.

To preempt the inevitable question in the comments section: yes, we did run these Turing machines for a while, and no, none of them had halted after a day or so. But before you interpret that as evidence in favor of Goldbach, Riemann, and the consistency of ZFC, you should probably know that a Turing machine to test whether all perfect squares are less than 5, produced using Laconic, needed to run for more than an hour before it found the first counterexample (namely, 3² = 9) and halted. Laconic Turing machines are optimized only for the number of states, not for speed, to put it mildly.

Since then, a machine with 1920 states has been found which also cannot be shown to never halt under ZFC (but which can be shown to never halt under ZFC + "ZFC is consistent").

Also, wrt the large cardinal thing:

As I understand it, ZFC + large cardinal is the thing that they used to show that the ~8000 state one never halts, while ZFC is what they showed cannot show that it never halts?

Whereas the 1920 state machine directly searches for contradictions in ZFC (instead of something else that I don't understand which can't be shown in ZFC, like with the ~8k version), so only ZFC + "ZFC is consistent" is needed to show that it never halts (which is weaker than ZFC + the large cardinal axiom they used for the ~8k one).

So, I guess for any consistent formal system, there is a smallest N such that that formal system cannot prove the value of BB(N)?

That's interesting.

So, iirc there is a way of describing the strength of a system using ordinals. Would each system with the same strength in those terms have the same N I wonder?

[After this point in this post I might be running forwards in ignorance, and what I'm saying might be very incorrect / nonsense. I'm talking out of my ignorance here.]

So, because of that, and because I'm rambling, I'll put this in a spoiler, as what follows is probably either obvious or false. But maybe it isn't, and is maybe even interesting instead, so I'll leave it in the post, just in a spoiler.

Spoiler:

Hmm, so, that seems like it would imply a non-decreasing function from the ordinals to the natural numbers, unless some ordinal corresponded to a system strong enough to prove any BB(N).

So either, for any N there is an ordinal such that a consistent system with that ordinal as its strength can prove the value of BB(N), or there is some N such that no ordinal is big enough that a consistent system with its strength measured by that ordinal can prove the value of BB(N).

It seems like the second one cannot be the case. For any N, even if we can't prove in a weaker system that any particular number is greater than BB(N), there is some number greater than BB(N) (there are only finitely many Turing machines of size N that halt). So, as impractically large as it might be (impossible for us to express in this universe, perhaps?), one could have a system that includes as an axiom "BB(N) < [that ridiculously large integer]". Assuming the integer chosen was large enough, the system would be consistent, and I assume it would have a strength expressible by some ordinal.

It also seems like there can't be a consistent system which can prove the value of BB(N) for every N. If there were, a program could be written to search for a proof of the value of any BB(N), and it would eventually find one (because possible proofs are enumerable, and verifying a proof is computable). That would make it possible to turn this program into one that computes BB(N) for any N, and there cannot possibly be such a program.

So therefore, there cannot possibly be a consistent system that, for any N, can prove the value of BB(N).

So therefore, for any N, there is a consistent system which is strong enough that it can prove the value of BB(N).

And for any consistent system, there is an N big enough that the system cannot prove the value of BB(N).
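The proof-search step in that argument can be sketched as a toy program. Everything here is a stand-in (the "proof" enumerator, the verifier, and the pretend BB values are all fakes, purely to show the shape of the reduction from "prove every BB(n)" to "compute every BB(n)"):

```python
from itertools import count

# Pretend values, NOT real Busy Beaver numbers.
TOY_BB = {1: 1, 2: 6}

def enumerate_proofs(length):
    # Stand-in: "proofs of a given length" are just integers here.
    return [length - 1]

def verify(proof, n):
    # Stand-in checker: returns v if `proof` establishes BB(n) = v,
    # else None. Real proof checking is computable, which is the point.
    return proof if proof == TOY_BB[n] else None

def bb_from_proof_search(n):
    # Dovetail over proof lengths; this halts iff some proof of
    # BB(n)'s value exists. So a consistent system that proved every
    # BB(n) would give us a BB solver, which cannot exist.
    for length in count(1):
        for proof in enumerate_proofs(length):
            claimed = verify(proof, n)
            if claimed is not None:
                return claimed
```

With the fake verifier, `bb_from_proof_search(2)` eventually reaches the "proof" of the pretend value 6 and halts, which is all the argument needs: enumerate, verify, return.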

Huh.

Edit: Also, for any system, there is an M such that knowing BB(M) allows for showing whether the system is consistent. So if another system can prove the value of BB(N) for some N > M, then it can prove the first system consistent.

So, I guess basic logic (I mean, not using any axiom system, just rules of inference, and maybe also whatever axioms are needed to deal with integers and Turing machines) can prove that a particular Turing machine not halting implies that a given system is consistent, if it can e.g. show that that Turing machine searches for contradictions in the system. So for a system S, there is a least integer m(S) such that basic logic can show that there is a Turing machine of size m(S) whose not halting implies that S is consistent.

And also, for any system S, there is a greatest integer M(S) such that S can prove the value of BB(M(S)).

So if M(S_1) > m(S_2), then S_1 can show whether S_2 is consistent (assuming S_1 is consistent; if it isn't, then M(S_1) is not defined).

I'm not sure how one would go about showing lower bounds for M(S). Like, we know that M(ZFC) < 1921, but what lower bound do we have for M(ZFC)?

I suppose we know that 5 ≤ M(ZFC), because ZFC can show the value of BB(5), but this isn't very much.

Is there a way to show "S can show the value of BB(N)" for some N, S, without actually showing BB(N) ?

I found my old forum signature to be awkward, so I'm changing it to this until I pick a better one.

I think both Turing machines are designed to search for inconsistencies in ZFC. The 7910 state machine was found by using Laconic to compile a graph-theoretic statement whose truth implies the consistency of ZFC. SRP is required to prove that the machine runs forever (if it doesn't run forever and stops at some point, that needn't mean it found a contradiction in ZFC). The 1919 state machine was found after fiddling with the Laconic (or some other?) compiler to reduce the state count, and is, I think, logically equivalent to the 7910 state machine.

Those have just been my thoughts so far. If anyone has more or better ideas it would be greatly appreciated!

Excuse the conceptual necro on this early part of the thread, but I've just read it for the first time and recently implemented something of my own that's very close to what was asked.

There's an asynchronous discovery, enumeration and operation upon a file-tree. It involves faked parallelism[1] and (unless clued in) has no real idea about how much more tree each branch contains (without doing a recursive build at each stage, duplicating efforts already a necessary part of one of the other stages).

Firstly, I am tracking total items 'done' and total items so far known to need to be 'done', and the spinning up of the two numbers (per operational stack... see footnote for details) reassures me that work is being done, even if it only gives a rudimentary sense of "how far along we are" (more observed by the slowdown of the rising target than by the onward march of the completed total).

For an actual "completion bar", I initially assume that if the tree-root has N elements to it, each element will take 1/Nth of 100% completion, from (n-1)/N to n/N for the nth item in the whole N items. And within that nth item, there may be M items, each mth item being at the proportion (m-1)/M (and onward) of the nRange previously specified. A depth-first(ish) approach that knows enough to always advance (at varying rates) from 0% to 100%, without ever backtracking from a false sense of achievement.
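That nested 1/Nth scheme can be sketched like this (the dict-based tree shape and function names are just assumptions for illustration):

```python
def walk(node, report, lo=0.0, hi=1.0):
    # Each of the N children owns an equal 1/Nth slice of [lo, hi];
    # recursing narrows the slice, so reported progress only ever
    # moves forward -- no backtracking from a false sense of achievement.
    children = node.get("children", [])
    if not children:
        report(hi)  # leaf finished: we're at the end of our slice
        return
    n = len(children)
    for i, child in enumerate(children):
        walk(child,
             report,
             lo + (hi - lo) * i / n,
             lo + (hi - lo) * (i + 1) / n)

# Example tree: three top-level entries of very different weight.
progress = []
walk({"children": [{"children": [{}, {}]}, {}, {"children": [{}]}]},
     progress.append)
```

The reported values climb monotonically to 1.0 regardless of how lopsided the tree turns out to be; a heavy branch just advances its slice more slowly.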

Added to that, actual metrics recorded after the completion of each layer of (effective) recursion can take the actual total of nodes below each branch and store a list of the most populous branches. (Often the nth-most populous branch is a direct sub-branch of the mth-most populous, for m < n, and sometimes even n = m+1... this is fine, as n's contribution is probably most of the reason why m is where it is in the ranking.) Upon a subsequent run, the most intensive "1/Nths" can then be weighted more heavily (at its simplest, N = N + numberOfIntensiveNodesHere, then give 'intensive nodes' a 2/Nths span, but it can be made even more intelligent than that). As long as there are no major differences in 'node intensity' between runs, this smooths out any 'crawl' along what might previously have been the troublesome "merely one of ten subdirectories, but happens to contain 50% of the total work of the whole set" type of issue.

I don't fool myself into thinking that I've made the best solution to the problem, but it suits my needs (and, more importantly, slots right into my preferred coding style, with plenty of comments to keep future-me, at least, apprised of any actual weirdness in it) and the different situations for which I've applied this method, over the years, complete without leaving me wondering too much about progress at any stage.

It's also not so quick to explain, and I've doubtless reinvented the wheel, without realising, in my eclectic mix of educated and self-taught coding experiences.

[1] This is messy to explain, but in case you're actually interested:

Spoiler:

I maintain a stack of directories and two stacks of files. Initially the root(s) of the operation inhabit the directory stack and the others are empty, but when each directory is shifted off and its contents are examined, any subdirs are added back onto the dirstack (back-end normally, front-end for ones that match an arbitrary list of priorities) and any files are shoved onto the first filestack. When the first filestack is sequentially consumed (a quick but not-so-quick-that-I-could-just-grep-straight-off-in-the-dirtest-stage test), files passed as true candidates are appended to the second filestack. The second filestack is of Files Of Interest, and its redepletion into the meat-and-drink of the program is the ultimate aim.

Whilst the sum of the stack-sizes is non-zero, each cycle chooses to extract (to appropriately pass on) from the stack that is mathematically the proportionately least 'completed', as adjudged by the ratio of items so far removed to all items ever added. (I considered weighted ratios, but for my purposes this was sufficient.)

I could parallelise the process, but this way I'm in full control of the overheads, and can guarantee no internal race-conditions/collision-slow-downs without adding a further layer of coded abstractions to implement semaphore locks/etc/etc.
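A rough Python sketch of that least-completed-ratio selection (the class and function names are my own guesses at the described scheme, not the poster's actual code):

```python
class Stage:
    # One work stack: tracks everything ever added and removed so we
    # can judge how proportionately "complete" this stage is.
    def __init__(self, name):
        self.name = name
        self.items = []
        self.added = 0
        self.removed = 0

    def push(self, item, priority=False):
        if priority:
            self.items.insert(0, item)  # priority items jump the queue
        else:
            self.items.append(item)
        self.added += 1

    def pop(self):
        self.removed += 1
        return self.items.pop(0)

    def ratio(self):
        # Proportion completed; a stage never fed at all counts as done.
        return self.removed / self.added if self.added else 1.0

def next_stage(stages):
    # Each cycle, draw from the non-empty stage that is
    # proportionately least complete (smallest removed/added ratio).
    live = [s for s in stages if s.items]
    return min(live, key=lambda s: s.ratio()) if live else None
```

Because the ratio uses lifetime totals rather than current sizes, a stage that keeps being refilled (like the dirstack during discovery) stays "incomplete" and keeps getting serviced, which matches the described behaviour.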