Meta-expertise

How can we cope with personal knowledge bases that are becoming ever more hyper-specialized and fragmented while, at the same time, we are exposed to more relevant information on the internet than any of us can learn and process?

Michael Smithson provides a modest proposal for addressing this challenge: We should all become expert about experts and expertise. That is, we should develop meta-expertise.

Climate Etc. has had several previous posts on expertise, focused on the politics of expertise.

We can’t know everything, but knowing an expert when we see one, being able to tell the difference between an expert and an impostor, and knowing what it takes to become an expert can guide our search for assistance in all things about which we’re ignorant. A meta-expert should:

Know the broad parameters of and requirements for attaining expertise;

Be able to distinguish a genuine expert from a pretender or a charlatan;

Know when expertise is and is not attainable in a given domain;

Possess effective criteria for evaluating expertise, within reasonable limits; and

Be aware of the limitations of specialized expertise.

That said, the Wikipedia entry also raises a potentially vexing point, namely that “expertise” may come down to merely a matter of consensus, often dictated by the self-same “experts.”

What are the requirements for attaining deep expertise? Two popular criteria are talent and deliberate practice. Regarding deliberate practice, a much-discussed rule of thumb is the “10,000 hour rule.”

The 10K rule can be a useful guide, but there’s an important caveat: it may be a necessary condition, but it is by no means a sufficient one for guaranteeing deep expertise. At least three other conditions have to be met: the practice must be deliberate, it must be effective, and it must take place in a domain where deep expertise is attainable.

Back to the caveats. First, practice without deliberation is useless. Having spent approximately 8 hours every day sleeping for the past 61 years (178,120 hours) hasn’t made me an expert on sleep. Likewise, deliberate but ineffective practice methods deny us top-level expertise. Early studies of Morse Code experts demonstrated that mere deliberate practice did not guarantee the best performance results; specific training regimens were required instead.

There are at least some domains, deeply complex ones, where “experts” perform no better than less-trained individuals or simple algorithms. In Philip Tetlock’s 2005 book on so-called “expert” predictions, he finds that many such experts perform no better than chance in predicting political events, financial trends, and so on.

What can explain the absence of deep expertise in these instances? Tetlock attributes experts’ poor performance to two factors, among others: Hyperspecialization and overconfidence. “We reach the point of diminishing marginal predictive returns for knowledge disconcertingly quickly,” he reports. Tetlock also claims that cognitive style counts: “Foxes” tend to outperform “hedgehogs.” These terms are taken from Isaiah Berlin’s popular essay: Foxes know a little about lots of things, whereas hedgehogs know one big thing.

Another contributing factor may be a lack of meta-cognitive insight on the part of the experts. . . . The disquieting implication of these findings is that domain expertise doesn’t include meta-cognitive expertise.

Finally, here are a few tests that can be used to evaluate the “experts” in your life:

Credentials: Does the expert possess credentials that have involved testable criteria for demonstrating proficiency?

Walking the walk: Is the expert an active practitioner in their domain (versus being a critic or a commentator)?

Overconfidence: Ask your expert to make yes-no predictions in their domain of expertise, and before any of these predictions can be tested ask them to estimate the percentage of time they’re going to be correct. Compare that estimate with the resulting percentage correct. If their estimate was too high then your expert may suffer from over-confidence.

Confirmation bias: We’re all prone to this, but some more so than others. Is your expert reasonably open to evidence or viewpoints contrary to their own views?

Hedgehog-Fox test: Tetlock found that Foxes were better-calibrated and more able to entertain self-disconfirming counterfactuals than hedgehogs, but allowed that hedgehogs can occasionally be “stunningly right” in a way that foxes cannot. Is your expert a fox or a hedgehog?

Willingness to own up to error: Bad luck is a far more popular explanation for being wrong than good luck is for being right. Is your expert balanced, i.e., equally critical, when assessing their own successes and failures?
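The overconfidence test above is essentially a calibration check, and can be sketched in a few lines (a minimal illustration; the `overconfidence_gap` helper and the prediction outcomes are hypothetical, not from the original post):

```python
# Minimal sketch of the overconfidence test: compare an expert's
# self-estimated accuracy with their realized accuracy on yes/no predictions.
# The function name and the outcome data are illustrative assumptions.

def overconfidence_gap(claimed_accuracy, outcomes):
    """Return the expert's claimed accuracy minus their realized accuracy.

    claimed_accuracy: advance estimate (0.0-1.0) of the fraction of
        predictions the expert expects to get right.
    outcomes: list of booleans, True where a prediction proved correct.
    """
    realized = sum(outcomes) / len(outcomes)
    return claimed_accuracy - realized

# Expert claims 90% accuracy, but only 6 of 10 predictions were correct.
gap = overconfidence_gap(0.9, [True] * 6 + [False] * 4)
print(f"overconfidence gap: {gap:.2f}")  # a positive gap suggests overconfidence
```

A well-calibrated expert produces a gap near zero over many predictions; a consistently positive gap is the overconfidence the test is looking for.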

JC comments: Some interesting insights here; my favorites are:

Tetlock attributes experts’ poor performance to two factors, among others: Hyperspecialization and overconfidence.

domain expertise doesn’t include meta-cognitive expertise.

Also, I like the 6 tests for evaluating experts. Collectively, I don’t think the IPCC scores very well, whereas individual scientists score much better. In the context of the discourse at Climate Etc., how do these criteria work in assessing an individual’s contributions to the debate (e.g. the most recent skydragon thread)?

97 responses to “Meta-expertise”

“This is chiefly practicable in a dispute between scholars in the presence of the unlearned. If you have no argument ad rem, and none either ad hominem, you can make one ad auditores; that is to say, you can start some invalid objection, which, however, only an expert sees to be invalid. Now your opponent is an expert, but those who form your audience are not, and accordingly in their eyes he is defeated; particularly if the objection which you make places him in any ridiculous light. People are ready to laugh, and you have the laughers on your side. To show that your objection is an idle one, would require a long explanation on the part of your opponent, and a reference to the principles of the branch of knowledge in question, or to the elements of the matter which you are discussing; and people are not disposed to listen to it. For example, your opponent states that in the original formation of a mountain-range the granite and other elements in its composition were, by reason of their high temperature, in a fluid or molten state; that the temperature must have amounted to some 480 degrees Fahrenheit; and that when the mass took shape it was covered by the sea. You reply, by an argument ad auditores, that at that temperature – nay, indeed, long before it had been reached, namely, at 212 degrees Fahrenheit – the sea would have been boiled away, and spread through the air in the form of steam. At this the audience laughs. To refute the objection, your opponent would have to show that the boiling-point depends not only on the degree of warmth, but also on the atmospheric pressure; and that as soon as about half the sea-water had gone off in the shape of steam, this pressure would be so greatly increased that the rest of it would fail to boil even at a temperature of 480 degrees. He is debarred from giving this explanation, as it would require a treatise to demonstrate the matter to those who had no acquaintance with physics.”

Science is about discovering objective truth about nature. Scientific breakthroughs in understanding often directly counter “meta-expertise,” “experts,” and “consensus.” So how can we detect scientists leading breakthroughs that run counter to the “experts”? Sadly, as Max Planck observed:

“A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it”


Did you know that mass density can change as the speed of rotation changes?
Centrifugal force is the underlying factor.
The faster the speed, the more outward force. Next is the speed of the solar system in forward motion (which gives the bug-on-the-windshield effect).

There is an efficient way of dealing with coming to terms with competing experts. It has been done with regularity for some time. That is an adversarial system, in which experts are subjected to questioning that they simply cannot ignore.

Civil trials often involve dueling experts on highly complex and technical topics. Whether expansive anti-trust cases dealing with complex economic and financial issues, patent litigation, medical malpractice cases, or other sometimes extremely technical disputes, lawyers must educate a jury as to the issues involved, and persuade them that their client’s position is correct.

Each side uses its own experts to explain the economic/financial/technical issues. Each side is then allowed to cross examine the other’s experts, with their own expert providing advice on what to ask.

Who is being persuaded in most such cases? A jury of individuals with no technical competence in the field in dispute. Yet such trials are conducted, successfully, all the time.

The key in such cases is the skill of the advocates, and the experts, in making the technical knowledge of the experts accessible to the lay jury. Experts who sit back and revel in their CVs and command of the literature are worthless in this context if they cannot communicate their knowledge in ways understandable by those outside their field. Advocates who try to score cheap debating points over actually educating the jury also fare poorly (with the exception of emotional appeals in certain highly charged cases).

Unfortunately, we will probably never see Gavin Schmidt, Michael Mann, or Phil Jones under oath, answering questions on the science, under penalty of a contempt order if they refuse to answer.

But frankly the blogosphere is coming close to performing the same function as an adversarial trial anyway. There are ample experts on every side of the issue. Many of the blogs, including this one, are articulate outlets for the varying perspectives, CAGW, AGW, lukewarmer, and skeptic. These blogs serve the same purpose as a lawyer at trial, interpreting and explaining to the lay man the significance, accuracy and certainty of the various expert opinions.

The scientists, engineers and auditors are the expert witnesses. The blogs are the attorney/advocates for their respective positions. And the electorate is the jury. While millions of voters may not read ClimateEtc., RealClimate, WUWT, Climate Audit et al., enough do to influence public opinion, and elections.

Ask 100 people if they have heard of climategate, or doubt CAGW. You’ll get more than one positive response. More importantly, the blogs are the starting point for much of the information that ends up in the media (both mainstream and conservative alternatives like talk radio and the internet), and being debated by politicians. I had never visited Climate Audit before climategate, but I was aware of the controversy regarding the hockeystick long before that.

The blogs frame the debate, and provide a forum for immediate responses, a la the Steig-O’Donnell debate, debates which otherwise might simmer for years percolating through the pal-reviewed literature. Without the blogs, the skeptical/lukewarmer positions would be much less widely known. Without the blogs, the debate would be much less accessible, and laymen like myself would have to search further for interpretations and explanations of varying expert claims.

The consensus view has always had the megaphone of the MSM, but the debate depends on both sides being aired. An issue being debated on the blogs has a much greater chance of being picked up on talk radio, Fox News, and conservative internet news outlets. The back and forth that results is exactly what a functioning democracy needs. Which is why progressives with authoritarian tendencies want so much to stifle or at least muzzle outlets like this one.

Judging my reasoning as circular without knowing what it is, that’s putting the cart before the horse, no?

Consider two expert witnesses testifying in a personal injury case. Both have similar credentials, experience, etc. One is calmly confident, one is defensive. One discusses upfront any caveats to his analysis, the other only admits to uncertainty under cross examination. One is dispassionately professional, the other personalizes any disagreement with his analysis.

“Each of these subsystems has a host of known and unknown forcings, interactions, phase transitions, limitations, resonances, couplings, response times, feedbacks, natural cycles, emergent phenomena, constructal constraints, and control systems. Finally, climate is affected by things occurring on spatial scales from the molecular to the planetary, and on temporal scales from the instantaneous to millions of years.

“To illustrate what this complexity means for the current “simple physics” paradigm, consider a similar “simple physics” problem in heat transfer. Suppose we take a block of aluminum six feet long and put one end of it into a bucket of hot water. We attach a thermometer to the other end, keep the water hot, and watch what happens. Fairly soon, the temperature at the other end of the block starts to rise. It’s a one-dimensional problem, ruled by simple physics.

“To verify our results, we try it again, but this time with a block of iron. Once again the temperature soon rises at the other end, just a bit more slowly than with the aluminum. We try it with a block of glass, and a block of wood, and a block of copper. In each case, after time, the temperature at the other end of the block rises. This is clearly simple physics in each case.

“As a final test, I look around for something else that is six feet long to use in the investigation. Finding nothing, I have an inspiration. I sit down, put my feet in the hot water, put the thermometer in my mouth and wait for the temperature of my head to start rising. After all, heat transmission is simple physics, isn’t it? So I just sit with my feet in the hot water and wait for the temperature of my head to rise.

“And wait.

“And wait …

“The moral of the story is that in dealing with complex systems such as the climate or the human body, the simplistic application of one-dimensional analyses or the adoption of a simple paradigm based on simple physics often gives results that have no resemblance to real world outcomes. It is this inability of the current paradigm to lead us to any deeper understanding of climate that underlines the need for a new paradigm. The current paradigm is incapable of solving many of the puzzles posed by the variations in global climate.”
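The metal-block portion of the quoted example really is simple one-dimensional physics, which a short explicit finite-difference sketch can illustrate (the function name, bar dimensions, grid, and material values below are illustrative assumptions, not taken from the comment):

```python
# Sketch of 1-D heat conduction along a six-foot bar (explicit finite
# differences). One end is held at the hot-water temperature; we watch the
# far end warm up. All numbers are illustrative; the explicit scheme is only
# stable when alpha * dt / dx**2 <= 0.5.

def simulate_bar(alpha, length=1.83, nodes=20, hot=80.0, cold=20.0,
                 dt=1.0, steps=20000):
    """Return the far-end temperature (deg C) after `steps` time steps.

    alpha: thermal diffusivity in m^2/s; aluminum is roughly 9.7e-5,
        wood orders of magnitude lower.
    """
    dx = length / (nodes - 1)
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for this dt/dx"
    T = [cold] * nodes
    T[0] = hot  # this end stays in the hot water
    for _ in range(steps):
        new = T[:]
        for i in range(1, nodes - 1):
            new[i] = T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
        new[-1] = new[-2]  # insulated far end
        T = new
    return T[-1]

# After the same elapsed time, the aluminum bar's far end warms noticeably,
# while a wood-like bar barely moves -- same "simple physics", very
# different outcomes, even before adding the complexity of a living body.
print(simulate_bar(alpha=9.7e-5))  # aluminum-like
print(simulate_bar(alpha=1.0e-7))  # wood-like
```

The point of the quoted passage survives the sketch: even where the one-dimensional model applies, outcomes depend strongly on the system's properties, and a human body is not a passive conductor at all.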

Why should I waste my time educating the wilfully ignorant? If you cared, Google is your friend. It’s a basic fact that the greenhouse effect warms the Earth. I am happy to use the time I do have to assert that, and so help to inject some sanity into the proceedings.

Here are some more basic facts that I will assert for you, but won’t waste my time explaining the evidence:
The holocaust happened.
9/11 was caused by terrorists.
Man landed on the moon.

“You are arguing that for something to be settled it must be demonstratable.”

You mean demonstrable? That’s how science works, Good Doctor. Replicability, predictability, evidence: all that stuff goes into a demonstration. We don’t want ‘settled’ to become a subjective or meaningless term, now do we?

Exclude the 1976/77 and 1997/98 ENSO events, and then see what the ‘trend’ is. It is a very simple exercise, as shown in the realclimate post I linked to.

For the rest – the ‘overall slight rise (relative heating) of global total net flux at TOA between the 1980’s and 1990’s is confirmed in the tropics by the ERBS measurements and exceeds the estimated climate forcing changes (greenhouse gases and aerosols) for this period. The most obvious explanation is the associated changes in cloudiness during this period.’ http://isccp.giss.nasa.gov/projects/browse_fc.html

And I so resent phrases like ‘it’s an illusion’ or ‘it’s wrong’ or ‘you are mistaken’ as jumping-off points for a discussion. We all have a very partial grasp of the facts – but I did not point out earlier that you were mistaken in every one of your claims, and rude and noxiously arrogant to boot.

That is terrible – a graph without any explanation or methodology?

This is far better – http://www.agci.org/docs/lean.pdf – see figure 2. But Lean fails to grasp the dynamic changes in the PDV – and so has no basis for predicting the near term evolution of climate. The rise in the Swanson et al paper (1979 to 1997) is about 0.1 degrees C/decade – and in Lean about 1.3. The future for Swanson et al is for no warming for an indeterminate period from 1998.

If you are going to discuss science – you should actually discuss science in peer reviewed studies and not by referencing the blogosphere.

Given that the NASA/GISS data referenced earlier contradicts the attribution of recent warming to carbon dioxide – there are substantive questions still to be resolved as to the cause of recent warming.

And yet Lean manages to reproduce the global temperature record using a gradually increasing background warming rather than step changes in 1976 and 1998. Which is entirely my point that “step changes” in temperature in 1976 and 1998 are an illusion due to the timing of solar cycles and ENSO.

The ENSO-caused temperature changes are in the data and total 0.468 degrees in the 2 periods. So most of the warming happened in 4 years. So what is the real trend? Swanson says it is the trend from 1979 to 1997 – about 0.1 degrees C/decade.

That is all I am saying, and there is no need to introduce ideas of step changes. That said, Anastasios Tsonis, of the Atmospheric Sciences Group at the University of Wisconsin–Milwaukee, and colleagues used a mathematical network approach to analyse abrupt climate change on decadal timescales. Ocean and atmospheric indices – in this case the El Niño Southern Oscillation, the Pacific Decadal Oscillation, the North Atlantic Oscillation and the North Pacific Oscillation – can be thought of as chaotic oscillators that capture the major modes of climate variability. Tsonis and colleagues calculated the ‘distance’ between the indices. It was found that they would synchronise at certain times and then shift into a new state.

It is no coincidence that shifts in ocean and atmospheric indices occur at the same time as changes in the trajectory of global surface temperature. Our interest is ‘to understand – first the natural variability of climate – and then take it from there. So we were very excited when we realized a lot of changes in the past century from warmer to cooler and then back to warmer were all natural,’ Tsonis said.

Judith Lean started in 1980 with the satellite record – and used a multiple linear regression method. It would be most surprising if the results did not match the temperature record.

But it only works if you have all of the forcing – NASA suggests that she doesn’t – and this seems supported by surface cloud observation on the Pacific and ‘Earthshine’ measurements.

‘Earth’s global albedo, or reflectance, is a critical component of the global climate as this parameter, together with the solar constant, determines the amount of energy coming to Earth. Probably because of the lack of reliable data, traditionally the Earth’s albedo has been considered to be roughly constant, or studied theoretically as a feedback mechanism in response to a change in climate. Recently, however, several studies have shown large decadal variability in the Earth’s reflectance. Variations in terrestrial reflectance derive primarily from changes in cloud amount, thickness and location, all of which seem to have changed over decadal and longer scales.’
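The quoted point that albedo and the solar constant together determine the energy coming to Earth can be made concrete with the standard zero-dimensional energy balance (a sketch using textbook values; the helper name is my own, not from the comment):

```python
# Zero-dimensional energy balance: the solar constant and the planetary
# albedo set the absorbed flux, which fixes the effective radiating
# temperature via S0*(1 - albedo)/4 = sigma*T^4. Textbook values assumed.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0      # solar constant, W m^-2

def effective_temperature(albedo):
    """Effective radiating temperature (K) for a given planetary albedo."""
    absorbed = S0 * (1.0 - albedo) / 4.0  # global-mean absorbed flux
    return (absorbed / SIGMA) ** 0.25

t1 = effective_temperature(0.30)  # canonical albedo -> roughly 255 K
t2 = effective_temperature(0.31)  # a 0.01 decadal-scale albedo change
print(f"{t1:.1f} K, change for +0.01 albedo: {t2 - t1:+.2f} K")
```

Even this crude model shows why the quoted decadal albedo variability matters: a change of 0.01 in albedo shifts the effective temperature by nearly a degree, comparable to the forcings under debate.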

“In the context of the discourse at Climate Etc., how do these criteria work in assessing an individual’s contributions to the debate (e.g. the most recent skydragon thread)?”

Judith, are you asking us to call out individuals we think might be ‘charlatans’???? ;)

Of course not.

To answer your question: I like the criteria and have used many of them (1, 2, 4, and 6, or variants thereof) myself since I started reading this blog to assess denizens and visitors who post here, and I have assembled a mental list of ‘experts’ I pay attention to and a list of ‘charlatans’ that are good for entertainment value.

Many of the comments seem OT. The core assertions are “A meta-expert should:

Know the broad parameters of and requirements for attaining expertise;
Be able to distinguish a genuine expert from a pretender or a charlatan;
Know when expertise is and is not attainable in a given domain;
Possess effective criteria for evaluating expertise, within reasonable limits; and
Be aware of the limitations of specialized expertise. ”

Knowing the broad parameters and requirements for attaining expertise is something of a fuzzy notion. Time in the chair isn’t a very good indicator – if it were, we’d all be asking 80-year-old scientists for all the answers. The person who first discovers something is always immediately the leading expert. Unfortunately, I would make a distinction between being the expert and having expertise. While the person who knows 100% of the 1% already uncovered in a new field may be the leading expert, I’d argue that anyone who knows 1% of what is potentially discoverable in a field doesn’t have expertise in the area.

Personally, I think understanding both the breadth and depth of knowledge of the person over a wide range of areas is more helpful than just looking at the narrow subject area. If I wanted to know what an elephant looked like, I doubt I would convene a panel composed of experts on the different parts – who knows what they’d come up with? I’d most likely find one or more experts who had done the same thing with other animals, and ask them to do it for an elephant.

There are plenty of experts who can give you a shopping list of what could go wrong, but usually it takes a real expert to come up with a good prioritized list of what’s most likely to go wrong.

Under the “tests that can be used to evaluate experts”, all are important on their own merit IMO, but these three stood out for me, particularly as they apply to IPCC, as they are closely interrelated.

3. Overconfidence: Ask your expert to make yes-no predictions in their domain of expertise, and before any of these predictions can be tested ask them to estimate the percentage of time they’re going to be correct. Compare that estimate with the resulting percentage correct. If their estimate was too high then your expert may suffer from over-confidence.

JC has run a separate thread on overconfidence in IPCC reports. Pielke has made a (tongue-in-cheek?) analysis, which elicited an immediate knee-jerk defensive reaction from a “mainstream insider”.

4. Confirmation bias: We’re all prone to this, but some more so than others. Is your expert reasonably open to evidence or viewpoints contrary to their own views?

The insider “Team” does not score well here, as Climategate revealed.

6. Willingness to own up to error: Bad luck is a far more popular explanation for being wrong than good luck is for being right. Is your expert balanced, i.e., equally critical, when assessing their own successes and failures?

“Ethical behaviour” seems to be missing from the post above. I think it’s an important element in trying to judge expertise, as there is a large overlap here with the issue of trust. If there are two supposed experts, with opposing views, which view should I trust? Who do I think is the real expert? Whose opinion should I trust?

If I see an “expert” engaging in eyebrow-raising practices, I am less likely to accept their expertise as being valid – as I now have a reason to doubt their integrity.

I call these things cargo cult science, because they follow all the apparent precepts and forms of scientific investigation, but they’re missing something essential…

Feynman continues the theme of scientific integrity

If you’ve made up your mind to test a theory, or you want to explain some idea, you should always decide to publish it whichever way it comes out. If we only publish results of a certain kind, we can make the argument look good. We must publish both kinds of results.

I don’t think I have ever seen this happen in “climatology”. I’ve never read words like “we don’t know” or “it’s better than we thought” or “the observations mean we must reject the theory”.

He ends the 1974 speech on an optimistic note…

So I have just one wish for you – the good luck to be somewhere where you are free to maintain the kind of integrity I have described, and where you do not feel forced by a need to maintain your position in the organization, or financial support, or so on, to lose your integrity. May you have that freedom.

That’s a good talk, and it contains much that every scientist should accept as a basis for all his work and all presentation of that work.

Real life is, however, not so simple, as demonstrated by this example given by Feynman:

The easiest way to explain this idea is to contrast it, for example, with advertising. Last night I heard that Wesson oil doesn’t soak through food. Well, that’s true. It’s not dishonest; but the thing I’m talking about is not just a matter of not being dishonest, it’s a matter of scientific integrity, which is another level. The fact that should be added to that advertising statement is that no oils soak through food, if operated at a certain temperature. If operated at another temperature, they all will– including Wesson oil. So it’s the implication which has been conveyed, not the fact, which is true, and the difference is what we have to deal with.

That example shows that conveying the right implication is not the same thing as presenting true facts. To convey the right implication the scientist must tell about the uncertainties, but she must also tell why she believes that the results are valid and significant. Overemphasizing uncertainties is also one way of conveying a wrong message, and that must also be avoided. This is not so much a problem when the only audience is formed by other scientists of the same specialty, but it may be a major problem when the audience is wider.

I don’t think distinguishing an expert is all that complex. Ask, “How do you ascertain expertise in others?”

If you don’t get a thoughtful answer, it’s very likely that all you are going to hear from this person is the “common” wisdom in his/her field.

I used to hire experts to help spend other people’s money on complex projects where successful outcomes were much to be preferred. Prospective experts were asked what was the worst screw-up they’d made in their area, how was it discovered, and what was done about it.

Not hiring people who had no such story seemed to work over many years.

I suggest that useful expertise requires clear apprehension of the possibility and potential consequences of error. Without that, all you are likely to be engaging is an encyclopedia – albeit one with missing chapters.

What distinguishes our efforts from those in climate science is tangible results leading to track record. “There it is. No repeals of the known laws of physics were required by this project.”

They don’t have a track record. We’re implored to ‘trust’ people who suggest or ‘inform’ policies almost totally without any demonstrated effect or results from their earlier recommendations. That they think we should trust them is, to me, adequate reason not to.

To extend the above: some of them don’t trust us with their data and computations, that we might verify or confirm their inferences, yet they think we should trust them to inform policy on a very much more global basis.

No micro-track record because work cannot be audited, no macro-track record because their recommendations haven’t been universally adopted – obviously not their fault.

Robert, that is a good point, but mine was more directed to the value of a track record. If you are looking for a contractor, it’s a good idea to interview his previous victims and have a look at what he’s built.

I also suspect that Jack Hughes was in design, not the building side of the business, as was I – not that that makes much difference.

I’d add that exhibiting tangible results as steps in a track record was not intended to demean the value of abstract results if they are fair characterizations of scientific work. My concern is that the bias at this site seems to me driven by a desire to have credibility (which she does, with me) with folks outside her discipline, and that what’s driving this desire is another: to influence public policy in the areas of her expertise.

There cannot be anything really wrong with this. At some point the “abstract” may support a call to action. I think Einstein and a couple of his fellows did something like this around 1939. I don’t know if they were able to show more substantial support for the possibility of bomb building, but in the instant case, I don’t find the “support” sufficiently compelling to warrant the actions with regard to controlling CO2.

It’s depreciative to describe it as a tribal practice. Augury was a hugely successful institution for almost a millennium. It correlates with the rise of perhaps the greatest of all empires and with significant technological and intellectual advance. Whereas the decline correlates with a corruption of the practice and the adoption of a foreign religion.

Can a Hedgehog tell a Fox how to be a better Fox, or vice versa? Let’s assume that the Area of Expertise we are discussing is “How to be a better Fox.” Now, who do you sit down with – just a bunch of foxes? No, I don’t think so. You will also want to discuss the matter with, and include the opinions of, all the little things the fox eats and interacts with during the course of its life. Sound reasonable?

An expert needs to inspire trust to be an “effective” expert. But trust with…? Who? Just his/her own little clique of likeminded mini-experts studying the same little something? A broader field of experts who are studying similar little somethings that relate to each other? How about trust with people who have no expertise at all about the subject under discussion, but just love the political and economic implications in terms of where the discussion is going and see great opportunities for themselves and others like themselves?

We live in tribes. No one speaks for any one tribe, not even the Chief, about everything and anything under the Sun. When a conference of a tribe, or of Tribal Chiefs, gets together to discuss a problem, the subject is limited. Why? It makes things very simple when the problem is a BIG one. Well… as simple as possible.

We live on one planet. We live in many areas. There are many nations. Each nation is made up of many tribes/states. Each tribe is made up of many clans/cities/towns/villages. Each clan is made of many neighborhoods/families, and each family is made of many individuals who are the ultimate expert and decision maker of their own destiny. Hummmm… what is an expert? Is he/she someone who wants the world to listen to them for any one of many reasons, or is he/she someone that has an unimaginable long row to hoe if they want the world to agree with them?

It’s nearly impossible to be an Atlas expert, to lift the World and move it wherever you will. Or a Pharaoh expert, and tell a nation to build you a grand tomb. Were Einstein and the other greats of science experts in their fields? Yes they were. Did they move the world? Yes they did, but only for the blink of an eye and by a measure we cannot even measure.

The expert who tries to Save the World from Itself is a fool. And on that I assure you, I am an expert and have every qualification to judge anyone who would try to repeat my folly. I dare say, I am not unique, I am as common as a grain of sand on a beach.

It really is hard to keep old men and women quiet about what they have learned. They’re not trying to show how smart they are; they’re only trying to save you kids a little time and effort. ’Nuff said.

We are not too far apart in the field of philosophy, even though no doubt our backgrounds and upbringings are different.

I am a fool too, to a society that would sooner ignore than explore, even for its own good or a higher knowledge base.
Many mistakes in science are coming to light, yet the tradition of holding tight to the old and ignoring anything new still serves as the bond that keeps the tribe in check.

Honesty and openness will go a long way toward helping us determine who is an expert and who is selling a bill of goods. People identify scams by their metadata: big claims with no real proof, or offers too cheap or too good to be true. If some climate scientists were more open with data and code, trust would come more easily for them. The fact that they don’t understand this makes me doubt their overall level of intelligence.

Then very much of science is a scam.
Do a little exploring, come up with a mathematical equation, and boom: a new science that is now taught as absolute fact, even though it is a theory propped up by the illusion of mathematical significance.

The age-old practice of simple measurement was lost to the massive certainty of mathematical calculation.
Look at quantum physics. Pure crap in a cart with absolutely no motion in a universe full of movement.

Gary M is right. We evaluate competing experts all the time in the courtroom. I’ve been trying to make this point for years: if the hockey team were put on the stand and cross-examined by a quality attorney, there isn’t a jury in the world that would believe them. Their credibility would be thoroughly shredded.

We don’t need meta-experts. A reflexive deference to “experts” and “expertise” is the problem. We don’t solve it by adding another layer of the problem. That inappropriate deference results in abdication of responsibility by the public and extraordinary levels of hubris by the so-called “experts”. We desperately need a lot less of both.

Tetlock isn’t the only one who offers insight into the problems caused by over-reliance on experts. Two other books which examine the issue from a different perspective are Wisdom of Crowds and An Army of Davids.

Anyone interested in ways to evaluate the quality of expertise need only review the areas of inquiry a lawyer uses to cross-examine an expert witness. Competence is but one. And even if the typical layman isn’t versed in the science, everyone understands the concept of quality. Any expert who ignores basic quality control in his work will find himself ignored by a jury. That’s where we are right now in climate science: a remarkable lack of quality control in every aspect.

The idea that we have an over-reliance on experts begs the question: if not experts, then who? Are we collectively saying that poorly informed amateurs or politicians would make better decisions? Surely the message of Tetlock is not that we should lack trust in all expertise, only that we should try to avoid trusting the wrong sorts of experts, i.e. Hedgehogs (of which there appear to be quite a few on this site).

Why the insistence on unthinking reliance? Why the embrace of the foolishness of credentialed expertise?

The world is full of people capable of demonstrating the manifest error in Mann’s hockey stick, Rahmstorf’s “worse than we thought”, Jones’ Chinese fantasy data, or Steig’s effort to smear data all over Antarctica. They don’t have to have credentials.

As the general public, we don’t have to trust anyone. We have every right, in fact the duty, to cross-examine any and all work which anyone proposes to use to argue for imposing on our lives, liberty or property. We don’t trust a computer model. We demand that it be verified and validated.

We don’t trust a hockey stick. We demand that all data, code, and methods be provided so that those with the interest and ability can take it apart and examine it — regardless of credentials.

We don’t have to trust the manipulations and adjustments of official databases. We’re entitled to all the information required to examine every aspect of them.

And all of us can ask questions regarding quality control. If you don’t install, check and calibrate your instruments, you don’t have much scientific credibility. If you don’t institute quality control measures for your databases, no credibility. No transparency? No credibility. No replication or audit? Goodbye credibility. Evidence of efforts to distort the publication process or IPCC assessments? Credibility flushed.

“The world is full of people capable of demonstrating the manifest error in Mann’s hockey stick . . .”

Apparently not, since it’s never been shown to be wrong. Indeed, the world (of scientists) is full of people that have reproduced the results.

You say we don’t need experts, and then you repeat a number of discredited denier myths, peddled by the ignorant to the dogmatic, and circulated endlessly despite repeatedly being debunked.

This illustrates something important: many, in fact most(*) deniers will not benefit from learning to analyze expertise, because of your mistaken belief (your overconfidence) that you have all the information you need as well as the capacity to analyze it.

According to the recent “Six Americas” survey, 70% of climate “dismissives” (deniers) think they have all the information they need on climate change, a far higher share than any other group.

The state of the world about which you fantasize does not constitute scientific fact. While your world is likely interesting to psychiatrists, the rest of us are limited to the real world. Mann’s work has been so thoroughly debunked that only religious zealots still seek to raise it from the dead.

How can we cope with becoming more hyper-specialized and fragmented in our personal knowledge bases while at the same time being exposed to too much relevant information on the internet for any of us to learn and process?

—————

JC,

The answer to your question is to study both general philosophy and the history of philosophy. Optionally, add a general study of the philosophy of science. That will fully equip anyone with what is necessary to apply fundamental rational analysis to any detailed work of science without being a scientist at all.

What scientists do is not impenetrable to the well-equipped non-scientist.

Indeed, that is why I have devoted numerous threads to the scientific method and philosophy and sociology of science. Such an understanding contributes to “meta-cognitive capability,” which is a big plus in evaluating arguments about a complex topic such as climate change.

My problem with my own answer to your question is the state of philosophical education at the university level. My analysis is that universities in general lean heavily toward minutely fragmented and disconnected philosophical theories. Philosophical education therefore now most often yields fragmented and disconnected bits of general philosophical learning. That is not helpful for equipping adults with the fundamental rational analysis capability I advocate in answer to your question.

The ability of the crowd to bring wisdom to bear on a question can be seen on this issue of “expertise”. Scientists may wonder who is equipped to question an expert, but lawyers who cross-examine experts all the time wonder why it’s even a question. Perhaps we can agree that the attorney is a different kind of expert, too, but that simply proves the point that there are all kinds of people who are far beyond the tiny confines of some scientific specialty who can bring “expertise” which can be very helpful in evaluating scientific claims. The crowd is full of people who know more, in total, than any specialist.

stan good point. the problem comes when people question the IPCC and associated experts, and they are told they are not qualified to question the experts, which is followed by appeal to consensus. the idea of “meta-cognitive expertise” is an important one IMO.

Even more important than expertise is accountability. Is your expert personally accountable to you?

Much has been done over the past century in the professions to bring various degrees of accountability. To have accountability you need standards, because without standards there is no measure of it. You also need the standing to enforce it: even with standards in place, they don’t help unless you can enforce compliance with them.

Only after you have the above is it wise to move to which expert you should look to for answers.

You need to be quite careful when looking at the definition of an ‘expert’, though the 6 points are generally ok;
1- Actually less important than you would think. For example, one of THE most intelligent people I have ever met (who founded, ran and successfully sold two highly successful biotech firms) had no qualifications past a bachelor’s.

2- THIS is the important one. Do they do the relevant work, do they do it well and reproducibly, and, more importantly for me, are they a good troubleshooter? A true expert not only knows his subject, but also how to deal with matters when they, inevitably, hit the fan.

3- Reasonable enough, though the percentage-‘correct’ question is probably a bit daft.

4- A tightrope, this. It’s easy to think an expert is dismissive of new ideas without knowing their full history; they may just be dismissive because they’ve already exhausted that particular line of enquiry. The important distinction is this: if they know you’re wrong, they’ll tell you, calmly, and point out why. If they think it’s interesting or plausible, they’ll say so.

5- I think this is a pointless one.

6- THE single most important aspect of a scientist, let alone a scientific expert. I was always trained that there are ‘no wrong answers’ in science. If you’re wrong, you’re wrong; you’ve still learnt something, and that, after all, is what science is all about.

If someone is unwilling to admit their errors, or worse, attempts to hide, conceal or distract from those errors, then you have a problem.

I think there should be another point of emphasis in the context of ‘expertise’, particularly for assessing whether or not someone has scientific expertise.

7) He/she is willing to admit that he/she was/is incorrect in a particular assessment when new, more meaningful information comes to light.

I think it’s really this measure that many in the ‘climate debate’ on both sides are missing. New information comes to light, but it’s always spun in such a way that it confirms, rather than disconfirms, some preconceived notion of the ‘truth’.

It seems to me that this is really the measure by which one can tell a crank from a real scientist. Real scientists are not scared of being wrong. It happens to us all the time. It has to when you’re trying to discover something new.