Comments on: “Constructing expert indices measuring electoral integrity” — reply from Pippa Norris
http://andrewgelman.com/2017/01/02/constructing-expert-indices-measuring-electoral-integrity-reply-pippa-norris/

By: Anonymous
Tue, 02 Jan 2018 02:39:57 +0000

Why should the behavior of the North Carolina State Legislature you describe not be considered on par with other anti-democratic forms of governance?

By: Nezumi
Tue, 02 Jan 2018 00:35:30 +0000

There are glaring flaws with the methodology that need to be meaningfully addressed for this to be a valid measure. North Carolina is pretty broken as far as election integrity goes (we’re talking about a state that pulled a last-minute bid to strip its governor of almost all powers because the Democratic candidate won), but when the results fail so badly to put full-on autocracies and authoritarian regimes low on the scale, there’s something deeply wrong. The issue of getting too few respondents for results to be valid should have been addressed before results were even released, for a start.

By: Andrew
Tue, 02 Jan 2018 00:29:37 +0000

Nezumi:

The problem is that if their measure says that North Korea is as democratic as North Carolina, they have a problem.


By: Nezumi
Tue, 02 Jan 2018 00:22:42 +0000

… How so? That’s… a fairly straightforward description, which means that they’re applying a standard based on agreed-upon international conventions and global norms of election integrity and democracy to every country they evaluate, and I’m not clear how that dooms it.

By: David Jessup
Sun, 21 May 2017 00:26:43 +0000

What a shame that the first attempt by academics to rank U.S. state election systems on a scale of most to least democratic had such obvious flaws. A methodology that ranks the 2016 North Carolina elections as worse than those in North Korea or Cuba, as did the “Perceptions of Electoral Integrity” project (PEI), is not going to be taken seriously by U.S. media.
Yet such a ranking is enormously important in today’s fractured political climate. Doubts about the legitimacy of American democracy are on the rise. We need an objective, state-by-state comparison of electoral integrity to pinpoint problems and suggest reforms. The PEI project is commendable for taking a step in this direction.
How might a better methodology be created? I’m no academic, but here are some common-sense questions:
• First, shouldn’t the opinions of academic experts be supplemented by objective measures? Perceptions are important, but so are hard facts. Could gerrymandering, for example, be measured by the ratio of parties on the voter registration rolls compared to the ratio of parties in a gerrymandered legislature? Could one indicator of ease of voting be measured by the number of registered voters per polling place? Or, how about comparing voter turnout in states with and without same-day registration?
• Second, instead of seeking electoral experts within individual states, how about seeking experts within each subject matter area (called “dimensions” in the PEI study) and ask them to compare states within their field of expertise? For example, isn’t there an academic institute somewhere that focuses on the issue of gerrymandering? Wouldn’t its staff be the ones to rank states on that particular indicator? Ditto for voter registration laws, and each of the other indicators that need to be measured.
• Wouldn’t it be better to separate an internal, U.S. state-by-state comparison from an international comparison of countries? North Carolina and North Korea are on different planets, democracy-wise, and the indicators needed to distinguish subtle differences between states are on a different scale than those needed to compare disparate countries.
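For what it’s worth, the gerrymandering suggestion above could be made concrete with something like the Loosemore–Hanby disproportionality index; the party shares below are invented purely for illustration:

```python
def disproportionality(vote_shares, seat_shares):
    """Loosemore-Hanby-style index: half the sum of absolute
    differences between each party's vote share and seat share."""
    return 0.5 * sum(abs(v - s) for v, s in zip(vote_shares, seat_shares))

# Invented two-party illustration: near-even registration/votes,
# but a lopsided legislature.
votes = [0.48, 0.52]
seats = [0.30, 0.70]
print(round(disproportionality(votes, seats), 2))  # 0.18
```

A score of 0 would mean seats track votes exactly; larger values flag the kind of votes-to-seats gap a gerrymander produces.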
The Foundation for Democratic Education is seeking a way to monitor the health of democracy in the United States by creating a bi-annual report called “Freedom Watch America.” If there is anyone in this comment chain who would like to work toward this end, please leave a comment on our website (foundationfordemocraticeducation.org) or our Facebook page. Thanks. David Jessup

By: J Mann
Thu, 26 Jan 2017 17:32:06 +0000

Thanks Andrew – the main goal of my post was just to figure out what Professor Norris’s substantive case *was*, since the response she sent to you reads more like a description of the project and less like a specific response to your concerns.

FWIW, my pet peeve is just imprecise reporting. There’s a big difference between “opinions of North Carolina’s democracy among a group of ‘experts’ surveyed by PEI have dropped slightly” and “North Carolina is no longer a democracy.”

I see your point that one would also like to have transparent and credible methods to gather useful opinion information, even if it were accurately described.

As various commenters have noticed, there are some incoherent things about Norris’s reply:

– She seems to have abandoned the North Korea numbers (which, as we discussed, gave that country a rating higher than 50 on every one of their reported electoral integrity summaries), but a couple years ago when she posted on their findings, she presented the North Korea numbers as being valid. If you’re going to pick out results from particular countries (as she does with the U.S.), then it’s not enough for numbers to correlate fairly well with other measures; they need to be correct for individual countries. I have no sense why I should believe any of the numbers, given the problems with North Korea.

– Norris may say now that the US state data are not designed to be comparable with the international data, but that’s not what she was saying last month, when Reynolds’s op-ed was getting all sorts of good press and Norris was promoting it.

– There are some weird things about the report you link to. As a commenter pointed out, she mentions Lithuania twice, once in a positive way (as a “newer democracy” that was ranked 4th in the world), and once in a dismissive way (“the US is similar to the position of Indonesia, Lithuania and Bulgaria”).

– Norris writes, “Some questioned the PEI global ratings based on so-called ‘sniff’ tests, a fancy way of saying that several cases in the dataset did not reflect the prior assumptions of the readers.” OK, so let me be clear. I have a “prior assumption” that North Korea has low electoral integrity.


By: J Mann
Tue, 24 Jan 2017 15:44:39 +0000

Professor Norris’s responses here are welcome but somewhat opaque to a non-expert like me – I think this post of hers is also worth reading for those interested in her perspective.

If I can summarize still further, I read Professor Norris’s key points on this dispute to be more or less:

1) Her project doesn’t measure actual election integrity; given the absence of obvious measures, her project measures perceptions of election integrity among the group of experts her project selected.

2) The results correlate fairly well with other measures of perceived election integrity, which gives Professor Norris some confidence in their reliability.

3) The US state data is not designed to be comparable with the international data.

4) Professor Norris’s conclusions include a recommendation for national standards for US elections instead of local and state standards, which she characterizes as the “fox guarding the hen house.” (I’m not sure why the national standard setting body isn’t a fox, but she doesn’t get into that in this post).


By: Andrés Ceballos
Tue, 17 Jan 2017 03:39:26 +0000

I agree. Thank you so much for starting this debate. If only the academy could come down from the Ivory Tower and meet activists down at the grass roots to evaluate the state of our democracy, we might on the one hand put these theories to work and at the same time get feedback from all those experts working on the ground. An organization in Colombia called the Electoral Observation Mission has a 35-variable index to measure their democracy. However, the formula is fluid because political circumstances are always changing, especially in the midst of a peace process. The point is that these methods are only useful when they contribute to more transparent elections, to better media coverage and more informed citizens. Sometimes academic rigor will suffer in the name of political expediency, but that is an issue we need to deal with slowly.

By: Andrew
Fri, 06 Jan 2017 20:09:38 +0000

Eli:

That’s one error. But another error, I think, is the reliance on experts. What does it mean if you ask 40 experts and only 2 or 3 respond?

There are a couple of ways of looking at this.

One perspective is to say that an expert is an expert, and even one expert is enough to tell us what’s going on in the state or country. But if that’s the case, why survey 40 per country? Why not just get one or two in each, and stop there?

The other perspective is to recognize the responses as subjective and variable. But if that’s the case, the issue isn’t just getting a large enough N to get a small standard error (as it seems Norris is implying). Once you accept the extreme subjectivity of responses (e.g., respondents who think North Korea has above-average electoral integrity or who think that North Carolina is less democratic than Cuba), then you have to be concerned about bias, both in the responses and in the sample. What’s the population being surveyed, are the respondents representative of that population, are they giving valid responses, etc?
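A toy simulation makes the bias point concrete (all numbers here are invented for illustration, not taken from the PEI data): if every respondent shares an upward bias, a larger N shrinks the standard error but converges on the wrong answer.

```python
import random

random.seed(1)

TRUE_SCORE = 20      # hypothetical "true" integrity of an autocracy
SHARED_BIAS = 35     # hypothetical upward bias common to all respondents
NOISE_SD = 10        # respondent-to-respondent variability

def survey_mean(n):
    """Average rating from n biased, noisy expert respondents."""
    return sum(random.gauss(TRUE_SCORE + SHARED_BIAS, NOISE_SD)
               for _ in range(n)) / n

for n in (3, 40, 4000):
    print(n, round(survey_mean(n), 1))  # converges near 55, not 20
```

No amount of extra sampling from the same biased pool recovers the true score of 20; only attention to who the respondents are, and whether their answers are valid, can do that.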

I didn’t go into the above points in my original post on this survey because it all seemed so obvious. But I guess that’s the problem with our statistics and methods teaching: we go into mind-numbing detail on variance formulas etc., but not enough on basic principles of measurement.


By: Eli Poupko
Fri, 06 Jan 2017 18:19:03 +0000

Yes, exactly. But it’s obviously much easier to distinguish between having elections and not having elections than it is to distinguish between “fake” elections and “slightly flawed” elections.

By: Nick
Thu, 05 Jan 2017 23:14:15 +0000

Shouldn’t a useful measure be able to distinguish fake elections in a dictatorship from slightly flawed elections in a non-dictatorship?

By: Eli Poupko
Thu, 05 Jan 2017 20:49:14 +0000

I would argue that the methodological error here can be traced to the second excluded category, namely states “without de jure direct (popular) elections for the lower house of the national legislature.” Instead, the survey should arguably have excluded states without de facto popular elections. This is admittedly somewhat more difficult to measure, but I don’t think excluding only states lacking de jure elections (and including those that clearly lack de facto popular, i.e. minimally democratic, elections) is justified in this analysis, especially if it is going to be used to draw comparative conclusions about democratic legitimacy in different states.

By: Tyler
Thu, 05 Jan 2017 00:35:01 +0000

Exactly, just answer the questions! I’m a researcher and epidemiologist and have done both qualitative and quantitative research. I looked for anything resembling a response to Gelman’s article, and there is none. It took a lot of words and effort to provide a non-answer.

By: Jack
Wed, 04 Jan 2017 22:10:37 +0000

Porter writes that “Perceptions of weak electoral integrity matter. They depress voting turnout, according to Professor Norris’s analysis of 2012 data from the American National Election Studies.” The link goes to the ANES homepage…

By: Raina
Wed, 04 Jan 2017 19:49:02 +0000

For Pete’s sake, Pippa. I’m an academic. This response (and the one above) might fly in a review response, but it’s not going to do anything for the public but make them dismiss you. No one cares how many reams you and your colleagues have published if you can’t BRIEFLY explain and defend the basic principles of your work to non-experts.

By: Mark P.
Wed, 04 Jan 2017 19:23:00 +0000

Look, you still haven’t acknowledged that you messed up and that North Korea in no way deserves the ratings that it got. You keep trying to toss shovelfuls of irrelevancy over the basic issue, but that basic issue refuses to be buried.

By: Rahul
Wed, 04 Jan 2017 18:29:19 +0000

Someone should model the dynamics of the propagation of crap-research in popular media.

By: Rahul
Wed, 04 Jan 2017 18:28:02 +0000

It is no worse than this, their fundamental definition of what they are trying to measure:

“The idea of electoral integrity is defined by the project to refer to agreed international conventions and global norms, applying universally to all countries worldwide through the election cycle, including during the pre-election period, the campaign, on polling day, and its aftermath.”

If that’s how they define “electoral integrity” the project is doomed from the start.


By: Nick
Wed, 04 Jan 2017 18:17:29 +0000

Let’s look at the paper ‘Do experts judge elections differently in different contexts? The cross-national comparability of expert judgments on election integrity’. The authors look at data from the pilot study for PEI and note that some countries (including some of the more *surprising* scores, like Kuwait and Romania) show higher expert disagreement than others, but they never discuss the obvious implication: if experts disagree, how can the data PEI accepts from such tiny samples (median n=11, but some elections have n=2 or n=3) be statistically meaningful? Would a journal publish a survey on the electoral integrity of Cuba that had n=3 respondents? But apparently if you staple together 213 such surveys, then they will.
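To put rough numbers on those tiny samples (assuming, hypothetically, a respondent-level spread of about 15 points on the 0–100 scale), the 95% confidence interval for a mean of n=2 ratings is wider than the scale itself:

```python
import math

# Two-sided 95% Student-t critical values for df = n - 1 (standard table)
T_CRIT = {2: 12.706, 3: 4.303, 11: 2.228}

def ci_half_width(n, sd=15.0):
    """Half-width of a 95% confidence interval for the mean of n ratings,
    assuming (hypothetically) a respondent-level spread of sd points."""
    return T_CRIT[n] * sd / math.sqrt(n)

for n in (2, 3, 11):
    print(n, round(ci_half_width(n), 1))  # ~134.8, 37.3, and 10.1 points
```

On a 0–100 scale, a half-width of ±135 points at n=2 means the estimate carries essentially no information.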

By: Andrew
Wed, 04 Jan 2017 16:56:41 +0000

Tim:

Damn! And this happened after my post. That’s just terrible: millions of people will read that New York Times article.

Now I have to partially retract my P.S. here, where I wrote:

But the good news is that the usual suspects such as ABC, NBC, CBS, CNN, NPR, BBC, NYT didn’t fall for it. I give these core media outlets such a hard time when they screw up, and they deserve our respect when they don’t take the bait on this sort of juicy, but bogus, story.

Let’s just hope NPR isn’t next.

On the plus side, Slate ran my article even though it directly contradicted something they’d posted earlier. I’ll email Eduardo Porter and see if he can run a correction tomorrow.


By: darosenthal
Wed, 04 Jan 2017 16:51:53 +0000

That response was a horrid slog. I’ve rarely encountered a more densely compacted layer of polysyllabic obfuscation in the service of clarification than what I’ve read here. I feel like a man who, knee deep in quicksand, has been handed an anvil. Everyone involved with this nonsense should be sent to a remedial communication camp for a semester of hard writing.

By: Tim Worstall
Wed, 04 Jan 2017 16:47:33 +0000

And Eduardo Porter buys it hook, line and sinker:

We have published about our methods over the years and obviously a short blog post cannot cover all these points. I welcome the interest in our work, however, as I have been actively trying to further discussion about the construction and use of expert indices, including with IPSA and APSA workshops I organized last year. As well as the details in the technical appendices in our annual reports and dataset codebooks, let me point you towards publications where you can read about EIP’s methods in more depth by colleagues in the team, including discussions of cross-national comparability, reliability and validity checks, and also see some of the papers we brought together to discuss these issues in our meetings:

I hope that you find these readings useful for further information and we always warmly welcome constructive suggestions to improve our work. It has been striking to me how the use of these expert-based measures has expanded by leaps and bounds in the social sciences and yet the methodological discussion has lagged far behind. Let’s work together on these issues as a community of scholars.


By: Old Europe mixed-methods chap
Tue, 03 Jan 2017 23:45:30 +0000

I’m afraid that I agree with the two questions formulated in that statement.

Someone commented on Andrew’s previous post to say that the kind of rating exercise performed by the EIP team needs some kind of adjustment (via relative indexation) to become somewhat accurate. Pippa Norris answered that this method has its own flaws, yet she has not established the superiority of the unadjusted measurements. Worse, the project documentation does not even indicate what adjustment methods were considered, if any.

That is sloppy at best, and it will be used in the worst ways by both hardcore ‘quants’ and hardcore ‘quals’ to dismiss the entire field of research that EIP fits in.

Dr. Norris writes, “Andrew Gelman highlights several questions about the methods which are used. This note provides a brief response to both issues”. Yet no response is forthcoming to the key questions:
(a) Given the results obtained thus far, can indexing expert opinion in this way provide any useful summary information on comparative or absolute measures of electoral reliability or democratic function?

(b) If the answer to (a) is ‘yes’, how can the results with respect to (e.g.) North Korea be justified? In what sense does the ‘upward bias’ of the N. Korea outcome represent a systemic issue that explains the results from other countries? etc….

To say, without comment or elaboration, that “domestic experts and those reporting a higher level of familiarity with the election were significantly more positive in their evaluations”, but later assert that somehow the PEI is helping social scientists ‘speak truth to power’ is an abdication of responsibility. Perhaps start with speaking truth to one’s own results by plainly discussing the implications of the threats to validity.

Again, I’m happy that Dr. Norris commented, but the substance of the comment is a giant red flag.


By: Jon
Tue, 03 Jan 2017 18:19:15 +0000

Wouldn’t any expert from North Korea who would make his country look bad be kept from being in a position to submit?

By: Jonathan (another one)
Tue, 03 Jan 2017 17:22:35 +0000

“We seem to be heading into a fact-free zone where partisans assert that the world is flat but social science can still serve an important function in speaking truth to power, generating evidence of poor (and good) performance, and contributing towards the public sphere.” Sure, or it can be used to *exacerbate* the problem of false claims. Dr. Norris fails to answer a simple question: does she think PEI supports (worse still, forms the entire basis of) the claim North Carolina is not a democracy? If she does, is the fact-free fantasy she proposes to correct the widespread notion that it is?

By: Jonathan
Tue, 03 Jan 2017 17:21:05 +0000

Why not exclude those where upper houses aren’t elected? Britain has the unelected Lords and they bluntly stuck their noses into many issues over the last year, let alone the past few years.

By: Tom Passin
Tue, 03 Jan 2017 15:58:12 +0000

To provide a little wider look, the published spreadsheets do provide both confidence levels (at least for some of the indexes) and number of responders. Some U.S. states are given very wide confidence bands (around 43-83, for one example), and so are North Korea and Cuba (in the 2014 results). There is a relation between a low number of responding experts and the confidence bands, but it’s not a simple one. Still, all the cases of very wide confidence bands I noted had only 2 or 3 people responding to the survey. (These seem to be confidence bands for one of the sub-indices; I’m still trying to understand what all of the spreadsheet columns represent).

I suppose that if we weighted the mean values by the inverse squared widths of the confidence intervals, we wouldn’t put much stock in results like the N.K. and Cuba values. Nor, apparently, in the results for certain of the U.S. states either.
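A minimal sketch of that precision-weighting idea (the scores and band widths below are invented, not taken from the PEI spreadsheets):

```python
def iv_weighted_mean(scores, half_widths):
    """Weight each score by 1 / half_width**2, so wide-confidence-band
    cases (e.g. those with only 2 or 3 respondents) contribute little."""
    weights = [1.0 / hw ** 2 for hw in half_widths]
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

# Invented numbers: two well-measured cases plus one wide-band surprise
scores      = [30.0, 35.0, 65.0]   # third value: an N.K.-style outlier
half_widths = [5.0, 5.0, 40.0]     # wide band from very few respondents
print(round(iv_weighted_mean(scores, half_widths), 1))  # ~32.8
```

Under this weighting the wide-band outlier barely moves the overall result, which is the point: a ranking that treats a ±40-point estimate the same as a ±5-point one is doing something quite different.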


By: Tom Passin
Tue, 03 Jan 2017 14:52:18 +0000

“domestic experts and those reporting a higher level of familiarity with the election were significantly more positive in their evaluations”

This statement really bothers me, partly because it wasn’t reported that anything was done with the information. Should domestic experts have been downweighted? Or maybe upweighted? Does it mean that domestic North Korean experts were more relied on to assess N. K. election processes? Isn’t it likely that in states with state-directed elections the domestic experts would be more favorable than outsiders? Or if not, how can that be reliably established?

What I read in the description of how the index is constructed is that reliance is placed on aggregation. But to increase the scientific reliability, one should be looking hard at the exceptions. N.K. is one prominent one, but clearly not the only one. Excluding it doesn’t resolve the matter. Why did it give what most would consider spurious results? And if it couldn’t be flagged as spurious until after 2014, why should we have much confidence now in the current version of the index? And how can we now tell which remaining countries in the index have similarly spurious results?


By: Erik Moeller
Tue, 03 Jan 2017 11:04:06 +0000

I agree it would be good to know a bit more about circumstances under which responses are dropped (is there a response rate threshold?). Cuba is another example where the data at face value contradicts what’s well-known: the National Assembly elections have one candidate per seat due to the way the nominating process works; significant political dissent is prohibited. That’s not a system with high integrity. So why does it receive a rating in the 60s? Is the problem with the criteria used in the index, or is it with the experts consulted?

Also, shouldn’t experts have demonstrated knowledge in more than one country? It’s difficult to estimate the fairness of a process if you’re only familiar with how your country is doing it. And of course there’s the hairy issue of the level of academic freedom in a country, which may significantly affect responses from experts in that country.


By: Nero
Tue, 03 Jan 2017 08:50:48 +0000

The “Validity and reliability tests” section looks reasonable, with no apparent large mistakes. Nevertheless, some terrible results were obtained with these supposedly reliable and valid measurements. Is there a basic problem with the approach used for assessing validity and reliability, one that also calls into question validity and reliability checks in other studies? Or does r = 0.8 simply mean that in some cases extremely wrong assessments of single nations will quite necessarily occur?
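The second possibility is easy to check with a quick simulation (hypothetical indices, not the PEI data): two noisy measurements of the same latent score can correlate near 0.8 overall while individual cases disagree by 50+ points on a 0–100 scale.

```python
import math
import random

random.seed(7)

def pearson(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

# Two noisy "expert indices" measuring the same latent integrity score
latent = [random.uniform(0, 100) for _ in range(200)]
idx_a = [l + random.gauss(0, 15) for l in latent]
idx_b = [l + random.gauss(0, 15) for l in latent]

r = pearson(idx_a, idx_b)
worst = max(abs(a - b) for a, b in zip(idx_a, idx_b))
print(round(r, 2), round(worst, 1))  # r near 0.8, worst gap far above 40
```

So a headline correlation of 0.8 between indices is entirely compatible with single-country assessments being wildly wrong.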

By: Rahul
Tue, 03 Jan 2017 06:24:21 +0000

Pippa:

Allow me to offer an example. Say I am a doctor who designs an ER triage protocol, i.e., based on symptoms, a nurse can decide which patients need the most urgent attention.

Now someone points out that this algorithm gives the symptoms of a torn ACL higher priority than an apparent heart attack.

Can I now just brush the criticism aside by adding a disclaimer: “This triage algorithm does not work correctly for heart attacks”?

Also, I want to thank you for commenting here. We disagree on some of these measurement issues, but I think the way to move forward scientifically is through open discussion, and I appreciate your openness in posting your reports and data online, and in engaging with critics here.

My concerns about North Korea are twofold: first, that you and your colleagues released that earlier report in which North Korea was rated over 50 on all dimensions for 2014, and that this didn’t seem to be a problem at the time; second, that the North Korea numbers were created using the same approach as used for all other countries. Given that we can all agree that North Korea’s numbers were problematic, this calls into question the method more generally.


By: Nick
Tue, 03 Jan 2017 05:24:13 +0000

“We dropped North Korea in 2015 because of the respondents, which is I guess what you suggest that we should do, so I am really unsure why you continue to flog this dead horse.” So you’re now admitting you dropped North Korea because the respondents gave ridiculous answers? Doesn’t doing that call into question the scientific basis of the entire survey concept? And how is straight-up deleting an inconvenient data point good scientific ethics?

By: Nick
Tue, 03 Jan 2017 05:18:13 +0000

Correction: SLE = Sierra Leone, not Slovenia.

By: Pippa Norris
Tue, 03 Jan 2017 05:15:55 +0000

We dropped North Korea in 2015 because of the respondents, which is I guess what you suggest that we should do, so I am really unsure why you continue to flog this dead horse.

By: Untenured and thus anonymous
Tue, 03 Jan 2017 04:58:57 +0000

This is a joke, right?

At no point does Pippa answer any of the criticisms raised regarding her half-decade-long project (for example, the complete lack of face validity). The only two mentions of North Carolina are to Reynolds’s place of employment, and North Korea is not mentioned save for a passing aside in a footnote.

Furthermore, it seems there is a complete lack of understanding of what contemporary “best practices” are, even in the rather sad realm of expert surveys in political science: references are made to V-Dem, but at no point does there appear to be a recognition that additive indices of latent political phenomena have been shown for decades to be highly problematic at best, and fundamentally flawed at worst.

The conclusion is, perhaps, my favorite part: “In short, the project has made considerable progress in developing the PEI methodology over the last five years and we are confident about the results. Nevertheless, there is always room for improvement, and, in particular, learning from comparisons across similar projects is very helpful to create a community or network.”

One makes the following inferences. First, we’ve made progress developing a methodology (which completely lacks face validity, but hey it’s developing!). Second, we’re confident about the results (even though they fly in the face of pretty much everything comparativists know about elections across the globe). Third, we’re creating a community and networking, which is, after all, so much more important for getting publications and citations than actually developing anything resembling a decent measure.

If this is the best an entire research team can do when people point out the shambles that is their measure, it is most telling. The emperor, as usual, has no clothes.


By: Nick
Tue, 03 Jan 2017 04:56:46 +0000

“There were also elections in North Korea and Trinidad and Tobago, but with too few responses in these two cases meant that these are excluded from the dataset.”

But if you look at the Year in Elections 2014 p.36, you see that Mauritania, Slovenia and North Korea all had 2 respondents for a 6% response rate, yet only North Korea was dropped from The Year in Elections 2015. There were more countries with only three or four respondents in there too.