Posted
by
kdawson on Tuesday December 23, 2008 @05:26PM
from the kooks-we-have-always-with-us dept.

ocean_soul writes "It is well known among scientists that the impact factor of a scientific journal is not always a good indicator of the quality of the papers in it. An extreme example of this was recently uncovered in mathematics. The scandal centers on one El Naschie, editor-in-chief of the 'scientific' journal Chaos, Solitons and Fractals, published by Elsevier. This is one of the highest-impact-factor journals in mathematics, but the quality of the papers in it is extremely poor. The journal has also published 322 papers with El Naschie as (co-)author, five of them in the latest issue. Like many crackpots, El Naschie has a kind of cult around him, with another journal devoted to praising his greatness. There was also a dispute about the Wikipedia entry for El Naschie, which was supposedly written by one of his followers. When it was deleted by Wikipedia, his followers even threatened legal action (which never materialized)."

The harm, I think, is that he's not a well-enough-known crackpot; a respectable publisher (Elsevier) has given him a journal as his own private playground. This makes it more difficult for non-crackpots trying to enter the field (e.g. grad students) to sort the wheat from the chaff. It also allows other crackpots to come off as more credible by citing crackpot articles which have a veneer of respectability. Imagine if a computer science "journal" based on Hollywood's portrayal of how computers work were being published by the ACM, and you have some idea of how big a problem this is.

In fact, the crackpottery of El Naschie's papers is obvious even to most grad students (you should read some, they are in fact rather funny). The bigger problem is that, by repeatedly citing his own articles, his journal gets a high impact factor. People who have absolutely no clue about math, like the ones who decide on the funding, conclude from the high impact factor that the papers in this journal must be of high quality.

Elsevier, just like other large commercial publishers of scientific journals, offers libraries a significant discount if they subscribe to their whole catalog. By including crappy, useless and inexpensive (for them) journals, they can siphon more money out of universities and into the pockets of their shareholders, as is their god-given duty as capitalists.

which is why academic publishing is seriously screwed up. the public pays taxes to fund most academic research, but then researchers have to pay journal publishers in order to get their papers published. and in return, the publishers retain the copyright to all public research, keeping it out of the hands of the taxpayers who funded it (and charging Universities up the ass to have access to their own research).

people used to justify this commingling of academia with commercial interests by the peer-review process involved in journal publication, but the peer-review process provided by academic journals clearly isn't working here. at this point, it would be far better for Universities to publish their own research papers, allowing public research to be made freely available to students, researchers, and anyone else who might be interested in it.

research papers could be published in online databases where they would be archived for easy public access. it's easy enough for independent writers to self-publish and distribute their writings online. so it should be no problem for Universities to do the same. the peer-review process of papers submitted for publication could be handled either by the University itself, or different Universities could get together and form an agreement whereby they would review one another's papers for free. this would keep academic research purely non-commercial and eliminate potential conflicts of interest.

eliminating/bypassing commercial publishing houses would also mean that societally beneficial projects like Google Book Search wouldn't be stonewalled by greed-driven publishers, and public good could be placed before corporate interests for once. Wikipedia is nice and all, but serious research would greatly benefit from all academic research being made freely available in a searchable online database for all to access. after all, public research isn't very useful if no one has access to it.

cool. ibiblio.org [ibiblio.org] is more of a digital archive & online library provided freely in the spirit of open information exchange (they're part of the University of North Carolina, i believe), but they also maintain a collection of open access journals and allow users to submit their own research papers to the collection.

hopefully these kinds of open access archives will catch on at more universities and convince academia that commercial journals aren't necessary.

Perhaps it's an experiment: He's a mathematician. Now he's just demonstrating how the Impact Factor is a poor metric, and will soon present a superior measure that correctly ranks the journal poorly. ;)

And another article on the problem of where to publish the article describing that measure.

The low quality of his papers was obvious to me, and I only have a minor in math. I'd never read any until just now, so I wasn't looking for crackpottery, yet I actually wondered if El Naschie may have incurred a brain injury or something. The presentation is pretty poor by academic standards.

The bigger problem is that, by repeatedly citing his own articles, his journal gets a high impact factor.

Since Google similarly uses links between pages to compute PageRank, it combats this problem constantly. People do all kinds of stuff, from buying or swiping the registration of a reputable domain name, to posting spam on forums hosted at .gov domains, to setting up complicated interwoven sets of cross-linking domains to fake "grass-roots" popularity.
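The link-farm trick described above is easy to reproduce with a toy PageRank. This is only a minimal power-iteration sketch with an invented six-page web, not Google's actual algorithm:

```python
# Toy PageRank by power iteration. A three-page "farm" that only links
# within itself ends up outranking an honest page with real inbound links.

def pagerank(links, d=0.85, iters=100):
    """links: dict mapping each node to the list of nodes it links to."""
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - d) / n for v in nodes}   # teleport mass
        for v, outs in links.items():
            if not outs:                        # dangling node: spread evenly
                for w in nodes:
                    new[w] += d * rank[v] / n
            else:
                for w in outs:
                    new[w] += d * rank[v] / len(outs)
        rank = new
    return rank

# "honest" is cited by two independent pages; the farm only cites itself.
web = {
    "honest": [],
    "fan1":   ["honest"],
    "fan2":   ["honest"],
    "farm1":  ["farm2", "farm3"],
    "farm2":  ["farm1", "farm3"],
    "farm3":  ["farm1", "farm2"],
}
r = pagerank(web)
```

Because the farm recirculates all of its rank internally, each farm page scores higher than the honest page despite having zero outside links -- exactly the self-citation problem in journal form.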

Seriously? There's a lot of high-quality CS research out there in the journals and conference papers; of course there's also a lot of crap. But I'd say most of the crap comes from wishful thinking rather than pure crackpottery. If nothing else, if you try to implement something that doesn't work, you'll know immediately -- thus CS at least potentially has a built-in reality check that pure math lacks. I rather suspect that whether or not a CS journal demands working code from its authors is a strong predictor for the quality of the articles which appear in that journal.

Much like anyone with a working knowledge of CS probably has the ability to verify the CS research, math is a rather logical science which is often pretty easy to verify. Sure sure, there are things that are hard to confirm based on the amount of calculations that must be performed and irrational numbers and all that (infinity is a bitch to test), but those things exist in CS as well.

It's silly to somehow imply they are vastly different from each other; they are in fact almost identical.

Researchers in just about every field build on layers of other researchers' work. There simply isn't time to go back and verify every result in the reference tree of every article you cite -- if you did that, you'd never get any original work done! Creating code that compiles and executes properly doesn't guarantee that everything you've based that code on is correct, of course, but it's a good sign. I'm not aware of any equivalent reality check in pure math. Now, I know relatively little about the field (applied CS and statistics is my game, specifically bioinformatics) so I'll happily accept a correction on this point.

I think you're overestimating the problem here. Sure, not every researcher goes through every step of every proof. However, a lot of researchers go through a lot of the steps (and previously went through the intermediate steps), and a few researchers go through them all.

Sanity is preserved by writing the proof down in a way that people can understand. Otherwise research papers in mathematics would all be about 5 lines long.

The sanity check provided by successful compilation is a lot less meaningful. It

There simply isn't time to go back and verify every result in the reference tree of every article you cite -- if you did that, you'd never get any original work done!

That's what those 7-10 years of study you did are for. By the time you reach the cutting edge of research, you've had plenty of time to take courses on all relevant fields and learn every important result that gets used. If you try to contribute to the cutting edge without knowing 95% or more of the results referenced in a typical paper on y

Actually, it can take researchers YEARS to debug a very complex proof of something like the Riemann hypothesis. To begin to understand the proof, even a Riemann scholar might need to first learn entire fields of mathematics.

I've heard it said of de Branges at Purdue that there are few people capable or willing to confirm his work because of the difficulty involved.

Also in computer science there are groups of authors who only write in their 'own' journals and attend their 'own' conferences. Look up the subject of 'method engineering' and you will find a whole field of 'computer science' rife with crackpottery.

The truth is that in many parts of CS it's easy to publish a tissue of lies because there is no policy of publishing source code with algorithms: you just publish timings for the one case that worked and graphs of the output for the trivial case that nobody else cares about. (I'm sure lots of people in graphics will be nodding their heads at this right now...) Mathematicians, on the other hand, are expected to provide proofs, and

Formal proof languages cannot prove anything like what is proven in modern mathematics.

Formal proofs are a bit of fun for the CS people to play with but nowhere near able to handle genuine problems in mathematics.

It's like trying to describe to someone a book by describing which pixels are black and white on the page. Sure it can be done, but it's going to look like gibberish and give no useful picture until someone puts the pixels together to make the page.

Look up Metamath, Coq, and Mizar. I suppose your definition of "genuine" problems may influence your opinion of these projects. Coq was recently used to verify a completely formal proof of the four color theorem, which I would classify as a genuine mathematical problem, considering that no traditional hand-checkable proof has yet been found.

You realize, of course, that the only reason I was able to use a computer analogy is that we're talking about pure math. If we had been talking about CS, I'd have had to go with a car analogy right off the bat.

Ok, so it destroys the credibility of the journal, as well as the credibility of any papers coauthored by this individual, and destroys the credibility of anyone who decided that getting published (by allowing El Naschie to get his name on the paper) was more important than academic rigor. I don't see the long term, lasting harm.

The harm, I think, is that he's not a well-enough-known crackpot; a respectable publisher (Elsevier) has given him a journal as his own private playground. This makes it more difficult for non-crackpots trying to enter the field (e.g. grad students) to sort the wheat from the chaff. It also allows other crackpots to come off as more credible by citing crackpot articles which have a veneer of respectability. Imagine if a computer science "journal" based on Hollywood's portrayal of how computers work were being published by the ACM, and you have some idea of how big a problem this is.

And it gets worse when money becomes involved. Pseudoscientists and crackpots often try to find "investors" for their schemes, and even a layman who performs due diligence can be fooled when publishers like Elsevier become enablers for pseudoscience. When the paper shows up in an INSPEC or Web of Science search, how is the person being scammed supposed to know that the paper isn't really legitimate?

Many "free energy" scam artists already have patents for their nonsensical inventions, thanks to the laxity of the USPTO. It'll get worse unless these "pseudo-journals" are exposed and publicized to the greater science and engineering community, as well as the public at large. I had never heard of El Naschie before today, because I'm not a mathematician; thanks to this article, more people like me will now keep an eye out for his future "work".

Well, there's been some research suggesting that some authors may not necessarily have bothered reading all of the material that they cite. It's easy to cut and paste items from the citation list of an earlier peer-reviewed paper, and just assume that the contents of those papers are what the earlier author has said.

I guess that the way to test this would be to get a non-existent paper listed in Physics Abstracts, and cited in one or two major papers, and then see how many subsequent papers simply add the citation to their own list.

If someone submits a paper on experimental physics, the journal referees typically aren't in a position to say whether the experiment really happened the way the authors say it did. As long as the claimed results are roughly in line with what people expect, and nothing see

They were heavily taken in by "cold fusion researchers," a canard in three dimensions if ever I heard one, 20 years back. Perhaps they occupy the same place in scientific literature as S&P and Moody's do in careful review of bonds and finance? Down Illinois' way, they call it "pay to play."

And the mathematicians might have some fun with it. How would you express the concept of isomorphic, infinite-dimensional, separable Hilbert spaces with a car analogy?

Oh that one is too easy - all you have to do is imagine driving on the Cross-Bronx Expressway during rush hour and you have the concept down pat!

The infinite-dimensional part corresponds to the amount of time it takes to get to your destination; the separable Hilbert spaces are where you are and the space just ahead of the car in the other lane, moving faster than you, which you can never seem to reach unless you go under the separable space under the truck; and isomorphism is what you think of the other testoster

Topological Hairy Ball Theorem: It is impossible to drive your SUV in a path that covers the whole planet without crossing your own tracks.

Fixed point theorem: If you drive from LA to San Francisco all of one day, and back all of the next along the same path, you are guaranteed to hit at least one spot at the exact same time of day you hit that same spot on the way up.
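For anyone wondering why the driving version is true: it's the intermediate value theorem in disguise, assuming you drive continuously over the same clock window $[0, T]$ on both days.

```latex
Let $u(t)$ be your position along the route at clock time $t$ on the way up,
and $d(t)$ your position at the same clock time on the way back, with
$u(0) = \mathrm{LA}$, $u(T) = \mathrm{SF}$, $d(0) = \mathrm{SF}$, $d(T) = \mathrm{LA}$.
Then $f(t) = u(t) - d(t)$ is continuous with $f(0) < 0$ and $f(T) > 0$,
so the intermediate value theorem gives some $t^\ast$ with $f(t^\ast) = 0$:
at clock time $t^\ast$ you occupy the same spot on both days.
```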

Combinatorics: You are tired of carhenge stealing your glory and want to create a car-amid (car-pyramid). if you want your carami

How would you express the concept of isomorphic, infinite-dimensional, separable Hilbert spaces with a car analogy?

Ok. So you've got this car with a one-cylinder engine, you see, and you put it into drive. Now nanoseconds before the cylinder fires, you take a positional mark of exactly where the car is, as well as the height of the piston in the cylinder. Then it fires and moves forward, and you make another mark for the car's position and the bottom travel of the piston. Then you jam it into reverse and ta

Yeah, you really have to be careful out there... that's why I get all my astronomy and mathematical insight (as well as web design hints) from http://www.timecube.com/ [timecube.com]. And if it ain't there, then I just look it up on Wikipedia

On a related note, in some fields there is a greater tendency to cite. I would consider an IF of 3 relatively low in biology for example, but it's decent in bioinformatics. The IF is for granting agencies who'd rather judge your work by the journal it's in, rather than actually reading the article or looking up its citation count in Google Scholar (if it's been around a while).

Incidentally, I've noticed that good open access journals in biology/bioinformatics are getting better IFs these days, so that mo

Excluding references to the same journal is too harsh a criterion, since a lot of high quality papers get published in high quality journals. What should be perhaps excluded, though, is self-citation (whether to your own articles in the same or a different journal). Also, papers published in a journal by a journal editor shouldn't count.

It's not that simple, though, when you are talking about papers with multiple authors. It doesn't take into account the level of involvement of any particular author. It's not uncommon for authors to be listed due to small contributions, or insight, or internal politics. At what point do you say their contribution was significant enough to warrant exclusion from impact-factor calculations because of self-citation? And how do you even quantify that level of contribution?

If you are a high-quality journal it should not really matter if you exclude all references to yourself. If "High quality journal" gets 60 references from other journals and 10 references from itself, the difference between 60 and 70 is not large. But take "Crackpot dynamics journal", which gets 60 references from itself and only 10 from outside: the difference between 70 and 10 is large.
If you still want to keep self-references, we could calculate modified impact factor as
modified impact factor = num
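The with/without-self-citation comparison above can be sketched numerically with the 60/10 split from the comment; the paper count is invented for illustration:

```python
# Compare a naive impact factor with one that counts only external
# citations, using the comment's numbers (70 total citations each,
# but a 10-vs-60 self-citation mix). "papers" is an invented count.

def impact_factor(citations, papers):
    # Classic definition: citations received over citable items published.
    return citations / papers

def external_impact_factor(total, self_cites, papers):
    # The proposed modification: only citations from other journals count.
    return (total - self_cites) / papers

papers = 100
naive_quality  = impact_factor(70, papers)               # same as the crackpot's!
naive_crackpot = impact_factor(70, papers)
ext_quality    = external_impact_factor(70, 10, papers)  # 60 external citations
ext_crackpot   = external_impact_factor(70, 60, papers)  # only 10 external
```

The naive measure can't tell the two journals apart; the external-only measure separates them by a factor of six.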

According to Elsevier, its impact factor is 3.025 [elsevier.com], which does seem high compared to Elsevier titles like Advances in Applied Mathematics (founded by Gian-Carlo Rota, who was a respectable mathematician).

It's clear from the samples that El Naschie's articles are complete garbage, and I'm sure no respectable mathematician would want to publish in what's effectively a crackpot's vanity press. This is obviously the scientific journal version of Googlebombing.

Pick any of his recent papers and chances are good that most of the citations are to his own past papers. So, yes, that's how he's pulling it off: he cites himself ten times or so in each of his papers, and because he writes half the papers in each issue, that inflates the impact factor.

Citation indices like CiteSeer [psu.edu] distinguish self-citations from non-self-citations; if you pick some random paper that has both, you'll see a tally like "81 citations -- 7 self". Does Thomson Scientific not actually bother to do that in computing its impact factor?
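A CiteSeer-style tally is simple if you count a citation as a self-citation whenever the citing and cited papers share at least one author. This is only a guess at the heuristic, and the author names are obviously made up:

```python
# Split a paper's citations into (total, self) by author overlap.

def tally(cited_authors, citing_papers):
    """cited_authors: set of the cited paper's author names.
    citing_papers: list of author-name sets, one per citing paper."""
    total = len(citing_papers)
    self_cites = sum(
        1 for authors in citing_papers
        if authors & cited_authors          # shared author => self-citation
    )
    return total, self_cites

citing = [
    {"M. El Naschie"},                      # the editor citing himself
    {"M. El Naschie", "A. Student"},        # co-authored self-citation
    {"J. Baez"},                            # independent citation
]
counts = tally({"M. El Naschie"}, citing)   # (3, 2): "3 citations -- 2 self"
```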

In all seriousness, he's clearly churning out self-referential articles, which probably accounts for half of his references. The other half are by his "students". I remember that the Church of Scientology was doing this with websites, to mess with search engine ranks, which works very much like the impact factor.

If you want to automatically determine what constitutes a good journal purely from data, the definition is something like: is frequently cited by other good journals. Obviously, there's a circularity there. Various techniques attempt to mitigate it, but none are perfect, and indeed most are rather simplistic and easy to game. It's basically hard to distinguish, purely from citation data, a vibrant community of legitimate research from a vibrant community of crackpots.

In real life, most academics get around the circularity problem by starting with a set of "known good" journals that are determined by consensus in the field rather than algorithms (though this may sometimes be controversial). That lets them take into account more subjective things such as status of a research community (crackpots or not?). For example, as the linked article points out, the Annals of Mathematics is generally accepted as a top-quality venue for mathematics.

If you wanted, you could then construct an Annals-centric view of mathematical impact automatically by seeing how frequently other journals are cited by papers in Annals. This is what happens informally as journals gain and lose reputation: a promising new venue often first comes to a community's attention because its articles begin to be cited in "known good" journals.

But just taking all journals with no starting point, and attempting to extract from the citation graph which ones are "good" purely from the links, is doomed to failure, because there just isn't enough information in there to make the distinctions people want to make.
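One crude way to encode the "known good seed" idea from the comments above is to score a journal only by citations arriving from a consensus-trusted set, so self-citation loops contribute nothing. The journal names and citation edges here are invented:

```python
# Score journals by citations received from a hand-picked trusted seed set.
# A crackpot journal citing itself heavily gains nothing.

TRUSTED = {"Annals of Mathematics", "Inventiones Mathematicae"}

def seeded_score(journal, citation_edges):
    """citation_edges: list of (citing_journal, cited_journal) pairs."""
    return sum(
        1 for src, dst in citation_edges
        if dst == journal and src in TRUSTED
    )

edges = [
    ("Annals of Mathematics", "Promising New Journal"),
    ("Crackpot Dynamics", "Crackpot Dynamics"),
    ("Crackpot Dynamics", "Crackpot Dynamics"),
]
promising = seeded_score("Promising New Journal", edges)
crackpot  = seeded_score("Crackpot Dynamics", edges)
```

This is exactly the informal process described above: a new venue registers only once "known good" journals start citing it. A fancier version would propagate trust transitively (personalized PageRank), but the circularity problem is only mitigated, not solved.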

The one thing that separates crackpots from "Real Scientists" is who gets grant money. Look at String Theory or Post-modernism. Prestigious journals in both endeavors were both hoaxed: Post-modernism by the infamous Sokal affair (http://en.wikipedia.org/wiki/Sokal_affair) and String Theory by the Bogdanov affair (http://en.wikipedia.org/wiki/Bogdanov_Affair). There are also a lot of dubious things going on in the softer sciences that are heavily politicized. Meanwhile a lot of good fundamental physics

So maybe we need a Bayesian Impact Factor (BIF)? Start with some distribution for journal reputation (say, the results of a survey of university faculty and other researchers working in the area) as the prior, and then calculate a posterior based on observed citation data.
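A toy version of that "Bayesian Impact Factor" idea: express the survey prior as a Beta distribution and update it with citation observations, here (as a deliberate simplification) treating each external citation as a success and each self-citation as a failure. All numbers are invented:

```python
# Beta-Binomial sketch of a survey-seeded journal reputation.
# prior_good / prior_bad play the role of Beta(a, b) pseudo-counts
# from the faculty survey; citations then update the posterior.

def posterior_mean(prior_good, prior_bad, external_cites, self_cites):
    """Posterior mean reputation in [0, 1] after observing citations."""
    a = prior_good + external_cites
    b = prior_bad + self_cites
    return a / (a + b)

# A journal the surveyed faculty liked (prior ~ Beta(8, 2)), whose
# observed citations turn out to be mostly self-citations:
rep = posterior_mean(8, 2, external_cites=10, self_cites=60)
```

The favorable prior (mean 0.8) gets dragged down hard by the citation data (rep = 18/80 = 0.225), which is the behavior you'd want: consensus opinion seeds the ranking, but a self-citation farm can't hide forever.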

This turns out to be a problem space with some really interesting conclusions. I spent some time over the last few years working with researchers from MIT, UCSD and NBER to come up with ways to analyze this sort of problem. They were focused specifically on medical publications and researchers in the roster of the Association of American Medical Colleges. They identified a set of well-known "superstar" researchers, and traced the network of their colleagues, as well as the second-degree social network of their colleagues' colleagues among other "superstars".

I built a bunch of software to help them analyze this data, which we released as GPL'd open source projects (Publication Harvester [stellman-greene.com] and SC/Gen and SocialNetworking [stellman-greene.com]). I've gotten e-mail from a few other bibliometric researchers who have also used it. Basically, the software automatically downloads publication citations from PubMed for a set of "superstar" researchers, looks for their colleagues, and then downloads their colleagues' publication citations, generating reports that can be fed into a statistical package.

They ended up coming up with some interesting results. Here's a Google Scholar search [google.com] that shows some of the papers that came out of the study. They did end up weighting their results using journal impact factors, but the actual network of colleague publications served as an important way to remove the extraneous data.

I hate to say this because I realize how naive it is, but who cares about the quality of journals? Perhaps it's because I'm interested in a more applied field, but I judge papers by their results, generality, accuracy, clarity, and sometimes author - not what journal happened to publish them.

IMO most journals have been killing themselves off in the recent past. While running themselves as businesses may have worked when they served a useful purpose, all they do nowadays is impede openness and transparency.

The people who most directly care about especially quick-to-skim summaries of quality (like impact factor) are people judging the output of professors. If you're not familiar with a sub-field, how do you separate the professor who's published 20 lame papers in questionable venues from the professor who's published 20 high-quality papers in the top journals of his field? You look at some sort of rating for the venues he's published in.

For reading papers, I agree it's not quite as relevant. I still do do a fi

That always struck me as somewhat funny about the term "impact factor". In normal speech, an impact is an impact on something. These factors seem to be trying to avoid the question of what you're measuring an impact on by choosing something really broad, like "impact on the advance of science". But it shouldn't be a surprise that that's more or less unmeasurable.

The problem with impact factors is that they don't measure the quality of the papers; they just measure the number of times they're referenced. The thought is that the number of times a paper is referenced is proportional to its quality. Sort of like the concept behind Google PageRank: more inward-pointing links means that the site is "better"... Except that relying solely on incoming links doesn't work too well if people start to game the system. Google, which made its name with the power of PageRank, h

The summary claims Chaos, Solitons and Fractals has a high impact factor. The blog linked to, however, does not assert this, and I see no source for it. He does also co-edit the International Journal of Nonlinear Sciences and Numerical Simulation, which the blog says "flaunts its high 'impact factor'." The link to the IJNSNS praising him is broken, so I can't confirm that.

It looks to me like some crackpot got a journal. However, it doesn't seem particularly devastating. Nobody has based work on his articles purely on the basis of the "Impact Factor." I don't think anyone else is taking him seriously. At worst, libraries have paid to subscribe.

Some institutions may base funding on your publications weighted by the impact of the journal they are published in. I dunno if any DO, but it's possible. It's certainly not uncommon to determine funding based on number of publications, and I'd hope those numbers are weighted by MERIT in some way or other, or else you just get people spamming "publication mills" with randomly generated BS, and getting more funding to pay for the exorbitantly high application fees ;) I recall a number of years back, it was

You got to the heart of the matter. Ultimately this is the primary complaint that Baez is directing at Elsevier. There's also the issue that it makes a bit of a mockery of the publication process and suggests some things need improving. But Baez is a long-time campaigner against high journal prices and I think that was one of the reasons he felt so strongly about this issue. Elsevier distribute this journal as part of a larger package that libraries pay for an

Slashdot is a bit late in reporting this news... I tried to submit it earlier [slashdot.org] when it was fresher.

The problem at heart is that one of the biggest and evillest academic publishers, Elsevier, has been supporting a crackpot.

This shows that Elsevier isn't doing enough to promote the quality of research, and worse, libraries are paying huge fees with tax money for worthless journals. The problem here is bundling; university libraries have to buy journals in bundles, one of which may contain crackpo

This has been a fascinating case of crackpottery. Read the blog and the subsequent replies. El Naschie seems to make it (quantum-mechanical babble-speak) up as he goes along, but unless you are an expert in this area, as Dr. John Baez is, it would be difficult for the casual reader to discern this. This is similar to the Bogdanov affair, another well-known scientific scam (http://en.wikipedia.org/wiki/Bogdanov_Affair [wikipedia.org]). I'm a little surprised it took this long for Slashdot to discover this one.
One other thing: one of Baez's beefs, among others, is that this bogus El Naschie journal is bundled with more respectable journals, and Elsevier profits from the bogus science.

The Bogdanov affair is a little different. I did PhD research in theoretical physics but I was a bit unsure about the work of the Bogdanovs. There were bits of it that I could nitpick at and say it was definitely mistaken, but overall it was a little tricky to judge the bigger ideas without being a specialist in their particular subfield. The Bogdanovs had some smart people fooled. It's a very good hoax.

El Naschie's writing looks like nonsense even to non-specialists (though I guess you still need a degree in mathematics or physics). There's no way it could fool even beginners in the areas his work covers. That makes it all the more astonishing that he survived with Elsevier for so long. Apathy I guess.

People used to say about a mathematician or physicist that "what he is doing is so important that only a few people in the world can understand what he is talking about."

In a few cases it was actually true.

Also, there were mathematicians who believed that the highest form of mathematics was work that had no practical application. There was a story that the inventor of matrix theory expressed pride that he had invented a form of mathematics with absolutely no practical use. Little did he know how extensively his work could be used. He would have been appalled.

There still seems to be a feeling that the less people are able to understand a paper in a math journal, the more important the paper is likely to be.

At one time I was a subscriber to the Annals of Mathematical Statistics. Papers in math journals usually assume that you know every paper previously written by the author and the others in the field. There is often very little introductory material and no tutorial material in these papers. Even if you have a general understanding of the topic, you can't follow the papers because they are written very concisely, and assume that nothing needs to be explained if it was ever published anywhere else. You may have to backtrack for years of someone's papers and still not be able to understand the paper you are trying to read.

This is probably a combined consequence of "publish or perish" in academia and page limits in journals. It is often hard to tell if a given paper makes any sense or is useful.

Good post. As far as I'm aware, the guy 'who invented matrix theory' you refer to was in fact G. H. Hardy, who was actually famous for his work in number theory. This really was a subject without practical applications, at least until RSA came along. Matrices have always been a useful calculational tool.

Unless I am mistaken, you appear to confuse Hardy [wikipedia.org] and Cayley [wikipedia.org]. It was Cayley who invented matrix theory, whereas Hardy, who lived about 80 years later, expressed that opinion in the Mathematician's Apology, which is not an apology in the modern sense, but rather a defense of pure mathematics.

I feel I have to defend the practice of skipping introductory material in mathematical papers. If every technical paper had a substantial tutorial section, then it would be twice as big or more. And what would be the

This is an example of the sort of abuse we get all the time from ignorant people.
I inherited this science from my father, an ex-used-car salesman and part-time window-box, and I am very proud to be in charge of the first science with free gifts. You get this luxury tea-trolley with every new enrolment.
In addition to this you can win a three-piece lounge suite, this luxury caravan, a weekend for two with Peter Bonetti and tonight's star prize, the entire Norwich City Council.

It might help if everyone asked their school library [utexas.edu] to stop subscribing to this "journal" and perhaps review other journals by this same publisher to see if they are worth keeping. At a time when worthwhile journals are being cut, it's a shame that schools are still paying for this one.

It does not help the cause of the sole source of criticism, a math blog from U. Texas, to have ostensibly technical criticism asserting incorrectness but admitting ignorance (see second link in summary). The author takes issue with some points in an article with which he has some experience. However, he points out several things that he has no knowledge of, and admits as much. He then asserts from this admitted position of ignorance that the material with which he is not familiar is somehow fraudulent. To make that claim valid the author would have to be able to determine that with certainty, but he can't.

This technical criticism is produced in support of a posting elsewhere in the same blog, the author of which makes the same sort of assertions, and likewise fails to support most of them. In fact he can produce partial support for only one, and then claims support from others which is not produced. Some of this supposedly comes from his own administration which he admits does not support his work pursuing the matter.

I take no position with regards to the central issue. I've seen a couple of journals with very incestuous editorial policies and staffs. It makes it hard for others to get published. However, the situation evolved into this because those people did a lot of work with each other, not because any of it was fraudulent, so this can happen in the absence of any wrongdoing.

Claims of wrongdoing are extremely serious, as the occurrences of such things are. Such claims should be supportable. The claims made in TFA that are supportable are not evidence of wrongdoing, and the claims of wrongdoing are unsupported and, by admission, unsupportable by those making them. As far as I can tell this is a single blog's flamefest with more crackpot value than what they claim is due their target.

In short, the accusers appear to be embedded in at least as much pot and crack as they accuse others of, failing utterly to differentiate themselves from the kettle. They may have a valid point, but they fail to show it, instead making themselves look all the worse through the use of reciprocal psychoceramics.

The above post, modded "interesting", should be modded "funny". For those that didn't get the joke, it appears to be an ironic attempt to apply the "El Naschie" technique in a Slashdot comment. He's basically throwing around a lot of unfounded complicated nonsense signifying nothing in an attempt to get modded up, much as El Naschie does in his papers, when really it's just, IMHO, stream-of-consciousness drooling-on-the-keyboard crackpot spew.

Basically what's being said here is that the academic publication system is vulnerable to the sorts of SEO attacks that briefly caused search engines to be befuddled by sites full of interlinked pages full of nonsense text and viagra ads. The academic publication system just moves a little slower, so it's going to take them a little longer to update things.

the sorts of SEO attacks that briefly caused search engines to be befuddled by sites full of interlinked pages full of nonsense text

What do you mean "briefly"? Wikipedia is still the top hit for most Google searches!

I wouldn't really say this affects academic publishing as a whole, though - these "impact scores" are pretty much an academic exercise, nobody really pays attention to them (unless they happen to coincide with pre-conceived opinions).

The academic publication system arguably pioneered many of the SEO techniques - self-linking, linking to your mates, mutual cross-linking networks, adding lots of outgoing high-value links to your material to improve index rankings, and so on.

If you're a new researcher in an obscure field, one of the best ways to advance is to assemble a group of researchers interested in similar topics, hold a conference so everyone can get to know each other, publish the conference proceedings, and then you all publish

Science is built on reputation. If you have a good reputation, then people take your work seriously. If one of your publications is a hoax or fraud, your career is over. This smells like yesterday's fish's sweaty socks.

Hell, expect Blagojevich to issue a statement tomorrow disavowing any association with this guy or his publications. It smells that bad.