Sex, software, politics, and firearms. Life's simple pleasures…


The Limits of Open Source

A mailing list I frequent has been discussing the current financial meltdown, specifically a news story claiming that Wall Street fooled its own computers by feeding them risk assumptions the users knew were over-optimistic.

This is also a very strong case for F/OSS software. Had such software been in use, I strongly feel that the inherent biases programmed in would have been found.

But then, that’s also true for voting machine software.

As the original begetter of the kind of argument you’re making, I’d certainly like to think so…but no, not in either case.

You’re making an error in the same class as believing that the design of security systems is just a matter of getting the algorithms and protocols right. Bruce Schneier could set you straight on that one real fast. Perhaps he will [Schneier is on the list].

Open source is great for verifying the integrity of the software itself, but doesn’t necessarily give you any purchase on auditing the software’s assumptions. Suppose the software is modeling physics: it’s not too difficult under open source to verify that (say) it’s using the textbook value of G, in the Newtonian Law of Gravity, but verifying that the textbook value of G is physically correct is a different and far more difficult problem.

Similarly, if you’re looking the source code of complex risk-modeling software, it’s relatively easy to know that the model logic is being implemented correctly. But this gives you no purchase on whether the model is correctly descriptive of real markets. Or real climate systems, or whatever.

How you find the right coefficients for the partial differential equations (and whether you’re using the right PDEs at all) is not a software problem and cannot be addressed by software engineering methods. How you verify those coefficients are correct isn’t a software-engineering problem either. Usually it involves running your model on old data and seeing if it retrodicts correctly. Usually the big problem there is whether you can find that data at all, or trust it when you find it.
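The retrodiction check described above can be sketched in a few lines. This is a hypothetical illustration, not any real risk model: the naive one-coefficient model and the price series are both invented.

```python
# Hypothetical sketch of "retrodiction": run a model on old data and see
# how well it predicts what actually happened next. The model, its
# coefficient, and the data are illustrative stand-ins.

def retrodiction_error(model, coefficients, history, horizon=1):
    """Average absolute error of the model's predictions on past data."""
    errors = []
    for t in range(len(history) - horizon):
        predicted = model(history[:t + 1], coefficients)
        actual = history[t + horizon]
        errors.append(abs(predicted - actual))
    return sum(errors) / len(errors)

# Toy model: tomorrow looks like today, scaled by a single coefficient.
naive_model = lambda past, coeff: coeff * past[-1]

prices = [100, 102, 101, 105, 107, 110, 108, 112]
print(retrodiction_error(naive_model, 1.0, prices))
```

Note that this only measures fit to the data you have; as the paragraph above says, whether you can find that data at all, or trust it, is the hard part.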

None of the special risks in voting-machine software are addressed by open source either. Yes, it’s a good idea for the same software-engineering reasons open source is a good idea for all software, but! Open sourcing the software cannot guarantee that the voting machine is actually running the correct software that you think it is, rather than a version that has been maliciously corrupted. Open source cannot guarantee that the data the software reports is not tampered with in transit or at the receiving end.

These problems can be addressed, but it takes sound design of the overall system at so many higher levels that open source is really only a minor part of the toolkit.
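To make the distinction concrete, here is a purely illustrative sketch of one such higher-level control: comparing a hash of the binary actually installed on a machine against a separately published digest. This only relocates the trust problem (the published digest, the hashing tool, and the machine's firmware must themselves be trustworthy), which is exactly why open source alone is a minor part of the toolkit.

```python
# Illustrative sketch, not a real voting-system protocol: verify that an
# installed binary matches a digest published through a separate channel.

import hashlib

def sha256_of(path):
    """Hash a file in chunks so large binaries don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_published(path, published_digest):
    """True only if the installed file hashes to the published digest."""
    return sha256_of(path) == published_digest
```

Even when this check passes, it says nothing about tampering further along the chain, such as the data in transit that the paragraph above mentions.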

> … but! Open sourcing the software cannot guarantee that the voting machine is actually running the correct software that you think it is, rather than a version that has been maliciously corrupted.

Unfortunately that is true for any voting system, from the simplest paper ballot (cheapest) to the complex mechanical machines (most expensive) we still use in NY. Fundamentally, I believe that a software-based system (particularly OSS) offers the best combination of the features a voting system needs: usability, security, anonymity, speed, accuracy, auditability, flexibility, reliability, maintainability, cost-effectiveness, etc. Anything you can do with a paper, mechanical or optical-scanning system can be done with an easy-to-use touch-screen OSS-based system, *PROVIDED* that sound supporting processes and procedures are verifiably followed. There are a lot of ways to commit voter fraud (and disenfranchisement, for that matter).

It seems that open source is the Artificial Intelligence or Object Oriented Programming of the early 21st century. It’s being touted as a panacea for everything that goes wrong with computing. I’m a fan of open source software (since it lets me stand on the shoulders of giants as a novice programmer), but it’s not enough for software to be open. Software has to be open and good.

OSS tends to be ignorant of the transaction cost of retraining non-programmers to use the software; this ignorance produces the kind of non-intuitive outcome the Coase Theorem predicts. There are exceptions, but most OSS projects I’ve worked within have a higher density of code slingers than usability testers. (Usability testing is unbelievably frustrating, usually done late in the process, and under horrific deadlines. Apple does incredible amounts of usability testing throughout its entire development process. Microsoft does as well, but gets the added benefit of playing copycat.)

While “with enough eyeballs, all bugs are shallow” works for fixing code, it ignores the “I don’t want to hack the source code, I just need to generate this report for my boss” level of user. Case in point, comparing functionally analogous software:

OpenOffice, which inherited a lot of its documentation from StarOffice, has MUCH less consistency in its UI than MS Office; it also has less documentation and the documentation that it does have isn’t as extensively indexed – indeed, a lot of its documentation is out of date with current usage within the suite.

The GIMP has been missing a mission-critical color space (reflective CMYK) for more than a decade now, making it useless for anyone who has to work with a print bureau. Many of its features aren’t documented, and one can argue that its layout of controls and menus looks like the result of three working groups who mostly communicated by bellowing and then sulking, rather than a unified design effort.

To its credit, since I last pointed out this blind spot to Eric, there have been initiatives acknowledging this as a transaction cost. I do not know how well they’ll work, because good user interface design requires a personality type that doesn’t do well in OSS working groups: the benign tyrant.

Ken, I have to point out that most important open source projects do have benign tyrants, and that they’re well accepted. Larry Wall, Guido van Rossum, and Mark Shuttleworth are the best known, but I’d say that the top brass at things like the Apache Foundation, the Linux kernel or OpenBSD wield similar effective power and influence. Sadly, they aren’t focused on UI design.

Thing is (IMHO), we first need some more user interface effort before we can start being picky about having people like Steve Jobs and Jonathan Ive.

Adriano, the easier you make it on users, the more annoying you make it for programmers, because they have to spend more time doing things they don’t really ‘get’.

I suspect, but do not know for certain, that the more you tie things in to a specific user interface, the more you’re going to get away from the ‘everyone should just be able to fix any problem they find’ meme.

And I don’t think of Wall or Shuttleworth as benign tyrants in this aspect. I see them as project/program managers who know full well that if they don’t keep a good chunk of their programming constituencies happy, they’re going to fragment their user base.

Or, put another way – OSS is a great methodology for making really good code; it is not a panacea to all the world’s ills, and because of the nature of hacker culture, it has some blind spots.

In my district, ballots are paper forms that you mark with a felt-tip pen. The “voting machines” are simple optical scanners that enable a computer to read and tally the votes. The paper ballots are retained and can be rescanned or manually recounted if the need arises. It’s hard for me to think of any way to improve on this system.

I have yet to hear an explanation of why a touch-screen system is in any way desirable, let alone preferable to paper ballots and optical scanners. What is the advantage? I don’t get it.

Ken, I am fully aware of the benefits of good user interfaces. I admired the push inside GNOME to pick usable defaults following UI studies done by Sun.
I spoke of Wall, Van Rossum and Shuttleworth as _benign_ tyrants because of rules #1 and #2 for Larry, and because Guido and Mark are respectively the BDFL and SABDFL of their projects. So there’s more to it than their leadership being forced upon them and limited: they’re the men who, in a pinch, can make the cut, and people accept it.
I do agree with the rest of your post.

Which is why if discussion of software freedom is off the table for Eric, he loses by default. Software freedom, and the far less significant but inevitable geeky dick-size contests, are the only things motivating Linux on the desktop. On just about every other criterion it’s been completely obsoleted by Mac OS X.

I’ve stated before that the things that would get me to move from Windows to Ubuntu can be delineated as follows:

1) Adobe is convinced to port CS4 to Ubuntu/Linux along with Mac OS X and Windows
2) I can get MS Office 2003 to work correctly under WINE, or Microsoft ports Office for Mac over to Linux.
3) Printer driver support for HP printers remains good for both of these.

I think in the age of cheap hard disk space, a “move” or “replacing your OS” is kind of an outdated concept. These days it can be more like “OK, which of the three installed OSes shall I boot up today?” The realization hit me a while ago of how ridiculous the OS debates are: these days it’s not either-or, all-or-nothing, Windows-user or Linux-user. The question is just: do you want to spend 5-10 GB of the 120 GB or 200 GB you probably have on another OS, or not?

BTW, this is why I’m more and more convinced the gleaming-eyed Linux advocates are doing it wrong. They should just calmly say, “Here is this WUBI. All it needs is three clicks and about 4% of your hard disk space, less than any decent game, and it leaves everything else on your machine alone. Exactly why NOT give it a try?”

Shenpen, choosing an OS is a commitment of a lot more than just some disk space. I’ve spent a lot of time getting my system working just the way I want it to: getting the OS installed in the first place, almost a thousand lines of emacs customizations, considerable customization of my window manager (xmonad), a mail and news stack that works just-so, organizing my home directory, assembling MIDI soundfonts, and so forth. Using anyone’s system other than my own for more than a few minutes drives me mad. Redoing all this for a parallel set of programs running on another OS and keeping it all in sync would be a very poor investment of my time.

>Shenpen, choosing an OS is a commitment of a lot more than just some disk space. I’ve spent a lot of time getting my system working just the way I want it to: getting the OS installed in the first place, almost a thousand lines of emacs customizations, considerable customization of my window manager (xmonad), a mail and news stack that works just-so, organizing my home directory, assembling MIDI soundfonts, and so forth. Using anyone’s system other than my own for more than a few minutes drives me mad. Redoing all this for a parallel set of programs running on another OS and keeping it all in sync would be a very poor investment of my time.

Agreed, for a power user such as yourself. But for the average Windows user, especially someone who just *cough* brought home the weekly Best Buy special, throwing in an Ubuntu (or PCLinuxOS, if you swing that way) live CD and setting up a dual boot rig is no big deal. Hell, you’ll spend more time uninstalling all the crapware that comes with an off the shelf PC than it will take to set up the dual boot.

Barring a malware attack, Vista tends to fuck itself far less frequently than previous versions of Windows. The Windows code base has undergone massive stability improvements in recent years, which serves to make Linux less enticing.

Jeff, I didn’t say “fuck itself”, though I could have been more clear about it. The point is, my Vista OEM install on this laptop feels sluggish, and I can’t remove it or reinstall it without considerable annoyance (read: a weekend lost on pointless tinkering). Other people are in the same position.

>I suspect, but do not know for certain, that the more you tie things in to a specific user interface, the more you’re going to get away from the ‘everyone should just be able to fix any problem they find’ meme.

No, these issues are orthogonal to each other. I speak from experience here; UI design competence isn’t all that difficult to learn, and once you have it it’s just another thing to optimize. (There’s a lot of mystification around UI design, but it’s just another kind of engineering.)

>Or, put another way – OSS is a great methodology for making really good code; it is not a panacea to all the world’s ills, and because of the nature of hacker culture, it has some blind spots.

True, and UI has been one of them. The good news is that we’re getting better; the better news is that there is no reason in principle that UI design competence cannot become a routine skill.

Ever since Windows 2000/Windows XP, Windows stability has been more than acceptable. With Vista, Windows security has gotten significantly better – enough better that people aren’t willing to give up ‘good enough’ to go through the hassle of ‘better’. I’m planning on skipping Vista and going to Windows 7 in 2010, but I also have pretty ‘clean’ computing habits.

And that problem – ‘better’ being the enemy of ‘good enough’ – is the core of most people’s reticence about switching. At least Mac OS X has the benefit of having versions of the software I use available for it.

Since Ubuntu 7, I’ve softened my stance that Open Source is consistently an eyesore with usability standards written by people who regex their shopping lists.

Now, they’re written by coders who’ve read a book or two and are vaguely aware that non-programmers are a larger segment of the market than they are, but they still don’t understand how non-coders think.

No, these issues are orthogonal to each other. I speak from experience here; UI design competence isn’t all that difficult to learn, and once you have it it’s just another thing to optimize. (There’s a lot of mystification around UI design, but it’s just another kind of engineering.)

Agreed. UI design is a science; the only company really advancing the state of the art is Apple. Which right there explains the mass exodus of Linux users to Mac OS X. The open source crowd is still in the prescientific era of UI design, wherein personal preference, convention, and superstition substitute for many years and millions of dollars’ worth of HCI research, and the integration of those findings into the end software product at all levels.

Designing software with a sane user experience either requires massive investment in human-computer-interaction research (an investment, it has been discovered, best made back by — gasp! — selling copies of the software), or simply copying the superficies of the Macintosh or Windows UIs (the approach we’ve used so far, which has worked out okay, but would go much more smoothly if we settled on a single standard rather than different competing groups promulgating similar-but-slightly-different UIs; see below).

True, and UI has been one of them. The good news is that we’re getting better; the better news is that there is no reason in principle that UI design competence cannot become a routine skill.

Not really. There’s been a backlash of late against the sound principles of scientific UI design in the form of these keyboard-driven tiling window managers. The research shows that using the mouse is more productive, yet a certain section of the fosstard crowd wants to rewind the clock to a time when HCI was heavy wizardry.

What’s needed to solve the UI problem on Linux is to decide on a single appearance and set of standard behaviors to which all Linux apps must conform. This is what Steve Jobs did for the Macintosh and he’s gotten the message out to his faithful. A Mac app that doesn’t look and behave like a Mac app is simply not tolerated in the Macintosh ecosystem. This goes especially for otherwise nice open-source apps, like OpenOffice, which need to be either rewritten, reskinned with native widgets, or face general abandonment from the Mac community. As it is, on Ubuntu alone we have KDE, GNOME, XFCE, all similar-looking but somewhat different, as well as whatever wonky UI some of the app authors decide to employ (FLTK? XUL? Xaw?) as well as the occasional troglodytic app which thinks the keyboard r00lz and the rodent dr00lz. N.B.: I say this as a troglodyte who runs the awesome window manager and always has an emacs open without buttons or menu bars.

This kind of thing is anathema in the free-wheeling free software community. Hell, we still haven’t gotten LSB right, how are we going to come up with a single standard look and feel? But for as long as we do not, we are going to be curbstomped by closed source in the marketplace, and I haven’t even addressed the things like hardware support, proprietary codecs, etc. that further impede our progress.

“There’s a lot of mystification around UI design, but it’s just another kind of engineering.”

Engineering is supposed to be something whose results can be judged objectively. At the other end of the scale there is art, which can only be judged subjectively – you either like it or you don’t. The very term “design” sits halfway between. Take web design: your CSS is objectively very usable, but subjectively some might find it old-fashioned. (You know, the rounded-corners-and-glassy-surfaces “Web 2.0” crowd.) Similarly, there are a lot of objective metrics in UI design, but there is a lot of subjective stuff going on too. GNOME just rubs me the wrong way; I have the feeling it takes me for an idiot – Linus seems to think so too – and the same goes for Vista and OS X Tiger and Leopard.

Ah, my favourite fallacy. See, Apple did this research and they focused on one particular kind of user – the private home user, or typical office user, who does a lot of different stuff. Of course, for them point-and-click is the best way. They disregarded the kind of user who does not do a lot of different stuff, but does the same stuff again, again, again and again. Of course they did not write software for this kind of user. I do – and both my users and I are more and more pissed that most modern development environments and “enterprise” software frameworks support this kind of work less and less.

On the usability of this blog: the main logo is not a link to its homepage from the articles. You might say that the “Back” button is there for a reason (ditto for backspace and ALT-left_cursor), but it’s a common thing to have.
Plus, Back assumes that the page you come from is the homepage, which is not always the case.

> UI design competence isn’t all that difficult to learn, and once you have it it’s just another thing to optimize. (There’s a lot of mystification around UI design, but it’s just another kind of engineering.)

Open source programs are usable all right, they’re just usable by other programs rather than by nontechnical humans. I think the best way to make open source programs more usable is by reframing it as an interesting technical problem rather than just chrome. Apple does this brilliantly. Hackers should trip over each other to create the most beautiful and useful UIs ever.

I think the problems with FLOSS stem more from the Unix culture than from open source itself. I recently reread the Unix Haters Handbook, and I was amazed at how many criticisms are still valid today. X11 is still a disaster. Swap space still needs a separate partition. Command line editing is still required for nontrivial system administration. File deletion is still a problem. The command names are bizarre. The Unix programming environment (shells, pipes, etc.) is not very good. The default security model is still weak and requires lots of nonstandard additions to get it to work somewhat well.

Some of the command problems (shell variable expansion) seem to have been fixed. Documentation is somewhat better, but it is often scattered all over the internet.

Phil: you mention “command line editing is still required for nontrivial system administration” as if it were per se a bad thing. Truth is, the bad things about it might be the crypticness of the commands, the non-uniform ways of working with them, and the unforgivingness of deletion; but if the administration is “nontrivial”, it will be very hard (and takes real genius) to make it easy, through either a CLI or a GUI.

If I were in charge of a new edition of “The Unix Hater’s Handbook”, I’d add a chapter on Free Software purism. Linux is great as long as you don’t plan on playing 3d games or watching DVDs. The Free Software Foundation is the Taliban of computing!

I think most of the perceived UI issues with Linux have to do with the ad hoc, almost organic way it has evolved. Different coders, working on different packages, using different toolkits will have different ideas as to what constitutes an elegant, intuitive UI. By and large, the folks who put distros together do a fair job of bringing a little order to the chaos. Sure, there’s work to be done. As a Windows immigrant, I’d like to be able to configure Samba and share a printer without editing a text file. That being said, I have come to appreciate the goodness of editing text files vs. hacking a registry. I also appreciate the power I have to configure the UI to my liking, and the ease with which I can use a different desktop environment, should I choose. You just can’t do that with proprietary OSes.

Different coders, working on different packages, using different toolkits will have different ideas as to what constitutes an elegant, intuitive UI. By and large, the folks who put distros together do a fair job of bringing a little order to the chaos.

This is the point I was making – good UI design requires a benign tyrant. And that tyrant has to be able to get coders to do REALLY boring stuff. Which is hard to do when you have volunteers, but possible when you’re saying “So, you want that Christmas bonus based on software sales?”

Read>Only after Apple showed the world how a Unix display system should be done. Way to chase those taillights, open source.

You’re talking through your hat. I know Keith and I’ve had conversations with him about this. Yes, he knows what Apple has been doing, but he didn’t emulate it in any sense; actually, the new X11 image composition model is far in advance of what’s inside Aqua.

Delony> Show me a man who thinks X11 isn’t a disaster and I’ll show you a man who doesn’t have an ATI card. :-p

Um, *I* have an ATI card and no longer consider X a disaster. (I used to.)

ESR, X may have been souped-up and decked-out with some snazzier features, but its major problems still remain. This page (which is a few years old but still contains helpful information) describes the major problem: X11 is hamstrung by its cross-platform nature. I believe you once said that portability has beneficial side effects of transparency and readability. Unfortunately, the need to accommodate multiple platforms with various features leads to a lot of conflicting code. Since X11 is designed to run not only on Linux but other, more multimedia-primitive, operating systems like *BSD/Solaris/etc., the developers have rolled a lot of their own code to support PCI videocard detection, video drivers, framebuffers, mice & keyboard support, etc. It would be better if X11 took advantage of the Linux kernel’s framebuffer and hotplug support. Also, this method can present a security problem since X needs root permissions for many of its operations.

The next major problem is the Mechanism: Not Policy approach. Since X does not specify a single interface, a multitude of competing interfaces have arisen. This makes it difficult for a desktop Linux to have a consistent interface. Also, ICCCM still seems quite nasty.

Another problem, which is described in painful detail in the Unix Hater’s Handbook, is the difficulty of setting up remote X connections. The ability to run remote graphical applications seamlessly is the main distinguishing feature of X. If it is difficult to set up, then Houston we have a problem!

In Illinois, and in many other jurisdictions, there can be upwards of 50 ballot positions to tally. Besides President, Senator, Representative, there can be Governor, Secretary of State, Attorney General, Treasurer, Comptroller, state Senator, state Representative, County Board President, County Commissioner, State’s Attorney, Sheriff, Clerk, Treasurer, Assessor, Court Clerk, Recorder of Deeds, a bunch of judges – and up to 50 judicial retention votes.

With 200 to 300 voters in a precinct, that’s 10,000 to 30,000 separate votes to tally: from four to eight hours of work after the polls close (and the polls are open for thirteen hours). It’s why mechanical voting machines were adopted.

2) Why is a touchscreen superior to a paper ballot with an optical mark-sense scanner?

Many voters screw up paper ballots. They mark in the wrong places, or too many places, or leave random marks about. The touchscreen only allows voters to cast proper votes. It also allows them to correct errors individually (a mark-sense ballot must be voided and completely redone on a fresh ballot). Some voters have difficulty handling a pen to make marks (the elderly or handicapped), but can poke at a touchscreen quite easily.
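The entry-time validation a touchscreen can enforce, and a mark-sense ballot cannot, amounts to something like the following toy sketch (candidate names and rules are invented for illustration):

```python
# Toy sketch of touchscreen ballot validation: overvotes are refused at
# entry time, and a voter can correct one choice without spoiling the
# whole ballot. Contest rules here are hypothetical.

def accept_selection(current, candidate, max_choices=1):
    """Return the updated selection set, refusing an overvote."""
    if candidate in current:
        return current - {candidate}        # tapping again deselects: easy correction
    if len(current) >= max_choices:
        raise ValueError("overvote: deselect a choice first")
    return current | {candidate}

votes = set()
votes = accept_selection(votes, "Smith")    # select Smith
votes = accept_selection(votes, "Smith")    # voter changes mind: deselect
votes = accept_selection(votes, "Jones")    # select Jones instead
```

Contrast this with a mark-sense ballot, where the same mistake voids the whole sheet and the voter must start over on a fresh ballot.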

Having said that: our voting machines in Chicago are made by Sequoia Voting Systems, which for two years was owned (and is still partly controlled) by the Venezuelan company Smartmatic. Smartmatic got some of its startup capital from a Venezuelan government investment fund, and produced all the voting machines for the controversial 2004 Venezuelan recall election. (Three different exit polls showed “SI” winning with 60%, but the official result was “NO” 60%. No tampering could be proven, though.) SVS voting machines were decertified in California in 2007…

Jeff, that article mentioned that the SWF specification is open, and only the player (and codecs) are closed. Isn’t Gnash compatible with online video sites like Youtube? Also, couldn’t Ogg Vorbis and Dirac be added for open source support?

Gnash is a lot better now than I remembered it, but it still mangles the audio in Strong Bad’s Email into unlistenable hash.

As for bolting on open source codecs into SWF, it would be nice if this could be done but the vast bulk of Flash content creation will be done with Adobe’s tools, will use the proprietary codecs by default, and the content thus produced will therefore be unplayable with an open source solution.

i’ve worked most of my life in the financial markets, equity and credit and hedge, as trader and quant and coder (eg, i designed and built the 1st ever credit derivative AND credit cash trading and risk management system), so can offer some insight.

>This is also a very strong case for F/OSS software. Had such software been in use, I strongly feel that the inherent biases programmed in would have been found.

this has to have been written by someone with no genuine experience in the financial markets.

errors in parameterisation will not be caught by the underlying software tool being FOSS. my whole life i have come across only 1 (one) program where a model’s parameters were embedded in code: an emerging markets stock-picker tool which did NOT include an explicit parameter for per-country growth, therefore impliedly embedding an assumption of uniform growth. (amusingly, this is precisely the opposite of the reasoning underpinning investors wanting to get into emerging markets in the first place.) that tool had been written by an intern during a summer break before any of us started. easy fixed, and we only had a couple of billion being run by that model so no real harm done. but apart from that, i’ve never seen “small” assumptions embedded in code.
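the failure mode described above — an assumption buried in code versus exposed as a parameter — can be reconstructed hypothetically like this (numbers and country codes are invented):

```python
# Hypothetical reconstruction of the intern's bug: a stock-picker that
# hard-wires uniform growth, versus one that takes per-country growth as
# an explicit, auditable input. All figures are invented.

def score_implicit(earnings):
    GROWTH = 1.05                      # buried assumption: every market grows 5%
    return {k: v * GROWTH for k, v in earnings.items()}

def score_explicit(earnings, growth_by_country):
    # the assumption is now a visible input a reviewer can question
    return {k: v * growth_by_country[k] for k, v in earnings.items()}

earnings = {"BR": 10.0, "IN": 10.0}
print(score_explicit(earnings, {"BR": 1.03, "IN": 1.08}))
# the implicit version silently scores BR and IN identically
```

the point of the anecdote stands: FOSS review would catch the buried constant, because it lives in the code. it cannot catch a wrong value fed in through the explicit parameter.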

but far more importantly, i’ve never seen the “big” assumptions underpinning the models acknowledged by anyone.

a key point to understand, which very few nonfinancial people know: in the financial markets, virtually all models are open-source.
weird (well, the opposite of what most people BELIEVE), but true. when it comes to pricing instruments and risk, the financial markets are essentially open-source. and by virtue of herd-driven individual-agent fear (cf. “no one got sacked for buying IBM”), individual code implementations are extremely genericised and are stress-tested by “many eyes” harder and more perfectionistically than almost any software on earth. and they’re tested against the industry-wide open-source models.

but despite this (or, sociologically, perhaps _because_ of this), the BIG assumptions are never questioned, and in most cases never realised.

any time i’ve racked out the BIG risks of this model or that, everyone round the table has stared at me open-mouthed, and then typically got angry and started shouting. i had one of the world’s star hedge funds’ entire quant team announcing to the boss that i seemed like a nice guy but just didn’t understand their maths (surreal: they’d lifted it from decade-old textbooks). i didn’t get the job as a result, but no real drama: 6 months later the fund blew 90% of its money and shut down within a few weeks of the arrival of the conditions i said they weren’t taking into account.

a little big one first:

regarding the credit crunch, the key instrument/methodology is taking a basket of assets, splitting it vertically by a virtual attribute (risk), and pricing the various split-out virtual portions.
the error in the methodology comes screaming off the page in every quant journal article and textbook. the equity tranche is effectively not priced/priceable. it’s just “assumed” to be profit.
THIS tranche, importantly, is what the papers are describing as the “toxic assets”.
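for readers unfamiliar with tranching, here is a deliberately simplified loss “waterfall” showing the structure being described: losses on the asset pool hit the equity tranche first, then mezzanine, then senior. the attachment points are invented; real structures are far more elaborate.

```python
# Simplified tranche loss waterfall, for illustration only. Losses are
# absorbed junior-first; the equity tranche is the first to be wiped out.

def allocate_losses(total_loss, tranches):
    """tranches: list of (name, size) ordered junior to senior."""
    allocation = {}
    remaining = total_loss
    for name, size in tranches:
        hit = min(remaining, size)      # this tranche absorbs what it can
        allocation[name] = hit
        remaining -= hit
    return allocation

pool = [("equity", 5.0), ("mezzanine", 15.0), ("senior", 80.0)]
print(allocate_losses(12.0, pool))
# a 12.0 loss wipes out equity (5.0) and eats 7.0 of mezzanine
```

the pricing problem the comment describes is about the equity slice: its size and loss behaviour are modelled, but its value is effectively just assumed.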

more: mortgage baskets are inherently and notoriously non-priceable, due to “refinancing risk” and its implicitly larger-than-just-the-single-asset systemic implications. i don’t have the energy now to drill into that, and in any case it’s a tangent to the point here, but basically, baskets of mortgages are by their very nature not accurately priceable by any current model. game over.

FOSS as much as you like on implementation of these models, these embedded assumptions will not be better “caught” — they’re already documented and in the public domain.

across ALL markets, the BIG one is: no herd risk, no systemic implications, everything/everyone can price in isolation, everything/everyone can price in context of a pure notional “market” which remains aloof of your actions. “iid” –independently and identically distributed– is a simplistic micro version of this, and altho various models (eg GARCH) explicitly vary this assumption in micro, almost NO financial markets participants fully grok the macro version of this: a multiplicity of participants does NOT imply a multiplicity of behaviours, nor an infinity of depth. if you have 2,000 McDonalds, you don’t have 2,000 “restaurants”; you have 1, visible a lot. and if they’re all buying meat from the same farm, then if every “restaurant” simultaneously attempts to serve everything on the menu, the farm simply can’t deliver and EVERY “restaurant” will simultaneously fail to deliver their individually small portions.
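the McDonald’s point can be made quantitative with a toy Monte Carlo: 2,000 “independent” outlets that all depend on one supplier are really a single bet. all probabilities here are invented for illustration.

```python
# Toy Monte Carlo contrasting iid failures with herd (shared-supplier)
# failures. Parameters are invented; this is a sketch of the argument,
# not a market model.

import random

def catastrophe_rate(n_outlets=2000, p_fail=0.02, correlated=False,
                     trials=1000, seed=1):
    """Frequency of 'catastrophic' days: more than 10% of outlets down."""
    rng = random.Random(seed)
    bad_days = 0
    for _ in range(trials):
        if correlated:
            # one shared supplier: everyone fails together or not at all
            failures = n_outlets if rng.random() < p_fail else 0
        else:
            # iid: each outlet fails independently
            failures = sum(rng.random() < p_fail for _ in range(n_outlets))
        if failures > n_outlets * 0.10:
            bad_days += 1
    return bad_days / trials

print(catastrophe_rate(correlated=False))  # iid: catastrophe essentially never
print(catastrophe_rate(correlated=True))   # shared supplier: roughly p_fail of days
```

both setups have the same per-outlet failure probability; only the dependence structure differs, and that is exactly what the iid assumption hides.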

doesn’t matter HOW open the code is, the social context around the models they’re implementing is still fundamentally borken. tech/web2.0 wannabes twonk a lot about “virals”. well, the credit crunch is a viral. a bottom-up-driven systemic epidemic in a very flat ecosphere affecting a lot of identical participants.