Well, some of us did give answers, but there is no consensus. There cannot be, because no one can know the answer.

But let me repeat my best answer, which sounds facetious, but is not:

Q: "What is the highest priority experiment the LENR community wants to see conducted?"

The answer is in the process, not in any specifics about the actual experiment. It is not possible to know what experiment is best, so instead of trying to determine this, you should structure the project with the best likelihood of finding the answer. I recommend Harry Truman's method:

Find some young people who are very good at experimental science and very anxious to do cold fusion.

Ask them what they want to do, and tell them they should do that.

Stay out of their way. Give them whatever equipment they need. Do not attempt to direct them, second guess them, or even understand them. Most scientific progress is made by young people who do not know where the limits are or what is supposedly impossible.

That's it. That's all there is to it. That is how every fundamental breakthrough in history was made, as far as I know. I have never heard of one that was planned ahead of time, or one in which people had any clue what to do before they did it. Of course if you are designing a new computer or an airplane, you have tons of textbook information to go by. That's entirely different. Someday there will be thousands of pages of textbooks telling people how to do cold fusion, but at present there are only a dozen pages or so. The "how to" papers by Storms and Cravens, and the recipe by Mizuno and me. That's about it. Several thousand pages more will be needed before cold fusion can be made practical. That is something Seven_of_twenty and others fail to realize.

The experimental literature has many hints, but for the most part you need to read between the lines and synthesize your own understanding. There is nothing explicit because the authors themselves do not know.

That is all scattered around in the references at the bottom of the Nature article page. I did not know that until today, so no one should be embarrassed about not noticing. Trevithick will eventually put it all together in a single, more digestible, publishable format.

As to the rest of your post, I will say: please read my very first post, not the title of the thread. It has always been a two-part question, the second part only answerable after the first.

I have never heard of one that was planned ahead of time, or one in which people had any clue what to do before they did it.

I did not mean that people make discoveries by blindly floundering around, without a hypothesis. I meant that the talented young people you hire must form their own hypotheses. If they can't do that, they are not scientists. They are skilled laboratory technicians. A project may need lab technicians, just as it might need a tech writer (me!), but we don't make breakthroughs. A scientist is someone with an innate ability to see the answer and then figure out a way to do an experiment to reveal it.

Ed Storms has a hypothesis about nanocracks and the NAE. Is it correct? The only way to find out is to do experiments. You will never discover the answer by thinking about it, or debating it. Ed is a skilled experimentalist, so he can describe the experiments that are needed. Your hotshot young scientists must have that ability, or you need to find a different group of young people.

In my opinion, the NHE project failed because it was top down: the project leaders told the researchers what they should do, what they should look for, and what the goals were. They published those goals in detail before the project began. This is like trying to draw a map of a country no one has explored yet. It just happens that I own a map like that. It was made in 1760 in France, and it shows parts of North America that no European had seen. You can see the limitations of trying to do that.

When you describe a potential application to a programmer, the programmer does not ask, "how can we make a computer do that?" She knows the answer. As she looks at the problem, the answer reveals itself in her mind, because she understands computers and she has practice designing programs. If she needs to ask, "how do we do this?", she is not a programmer. She may need to ask many specific questions about the application and what the customer needs, but crafting the answers into something the computer can execute, and something the user will find easy to operate, is her job. That is her role.


As to the rest of your post, I will say: please read my very first post, not the title of the thread. It has always been a two-part question, the second part only answerable after the first.

The second part is probably the part that everyone thought, "that at least an anomaly is observed repeatably", but no one dared to say it.

I certainly hope to see LENR helping humans to blossom, and I'm here to help it happen.

The diplomatic channels are now open. Your comments here will be read by the Google Program Manager, so you can assume you are communicating directly with him. This is a unique opportunity for the community to come together as one voice, to help Google decide what their next step should be. I would ask that everyone be respectful, and I would prefer that those with actual LENR lab experience make most of the recommendations. This will be heavily moderated; anything even close to personal will be deleted instead of being moved to the Clearance thread.

The question in the title of this thread, "What is the highest priority experiment the LENR community wants to see conducted?", was posed by Matt Trevithick (Google Program Manager) at ICCF18 in 2013. To date, it still has not been satisfactorily answered. It would be useful for the field to put aside its differences and preferences and recommend "one top thing to spin up an academic team to evaluate". The proposals should be "well formulated so that they are actionable".

In addition, it would be helpful to know: "What would convince you that the experiment has been run well, such that you will accept the results...whatever they might be?".

PS: I would like to personally thank Team Google for making themselves available.

This was my first post on the thread I started. Read Matt's question after I say "it would be helpful to know:".

I'll point out that, as evidenced by Tanzella's abstract for ICCF, Brillouin is doing exactly what I believe should be done to use the opportunity offered by Google's initiative, i.e. using the calorimetry set-up proposed by Berlinguette et al. as a standard. I do not believe I need to explain the massive repercussions if this calorimetry set-up gives the same results as the ones Tanzella has obtained up to now.

They have the talent; the question IMO is whether the community will accept their findings. In my first post on this thread, I asked that same question, given to me by Trevithick to present to the forum. No one has provided an answer to it yet.

Well, I will accept their findings, if well written. We do not yet have enough detail on the (negative) experiments they have done for that to be very useful, but I'd hope for this at some time. The (positive) work on low energy impact studies and better calorimetry looks interesting, and I don't see anyone here not accepting it. The only thing I'd say about the experimental work on impact studies (which of course I accept) is that they are not the only fish in that pond and putting their work in the context of previous studies would probably be useful.

I'll point out that, as evidenced by Tanzella's abstract for ICCF, Brillouin is doing exactly what I believe should be done to use the opportunity offered by Google's initiative, i.e. using the calorimetry set-up proposed by Berlinguette et al. as a standard. I do not believe I need to explain the massive repercussions if this calorimetry set-up gives the same results as the ones Tanzella has obtained up to now.

Agreed, except that any calorimetry setup subjected to Q pulses will need explicit attention to its EMI hardening if it is going to give real results: they would need to do that.

TG feels they are up to the task, and is committed to "solving this case". They have the talent, funding, and equipment to do so. Worst case, their quest will lead them to new findings in other, more conventional sciences (as has already happened); best case, they determine there is some type of effect that may save the planet. Seems a win-win situation either way.

Solving a scientific case, especially a long-lasting and controversial one like CF, is useful even if it doesn't lead to new findings in other fields. From a scientific POV, finding the truth is always a winning outcome, even if it is the opposite of what one hoped to find. I'm glad this is the TG commitment, and I'm honored to have had the chance to let them know my opinion on how to achieve this goal.

Quote

They are already in the planning phase as to what comes next. There will be a focus on basic research to explore the science more thoroughly, and when/if a suitable experiment is found, they will attempt to replicate it as well. As we discovered with this thread, though, the devil is in the details when deciding which one. Same for them, as it was for us. While at first glance an experiment may look good, upon closer examination...maybe not so.

Let me say that I strongly doubt that they can push the exploration of this field further if they don't first resolve the main controversial aspects and episodes in CF/LENR history. The most famous and important of these episodes is the "1992 boil-off experiment". If they don't understand what happened on that occasion, and the influence of the F&P claims on the evolution of the CF field, they are going to prolong a dead-end line of research for years or decades to come.

Quote

And while many high quality experiments do meet the criteria, the authors, for various reasons, often are not able to fully cooperate. That is, and will always be, a deal killer and a continuing problem. And one that goes to what many here (including myself) have noted and complained about before: too much secrecy in the field. As McKubre has said many times, "we are all keeping secrets from each other". You would think that 30 years later, with little progress to show for it, that would not be a problem, but unfortunately it is.

The "1992 boil-off experiment" has none of these problems. Thanks to the lab videos and the many papers issued on this milestone of CF research, Google's talented experts will be able to successfully replicate it in a short time and easily identify the true source of the output energy claimed by F&P, even with no collaboration from the old guard. Revealing this secret, kept hidden for almost 30 years, is only a TG decision away. The Google company is the main world player in the distribution of information; it's up to them to decide how to manage the information about the CF affair.

Quote

Interestingly, TG has had a strong, positive response to their Nature paper and research initiatives...from mainstream science, of all places! That is in contrast to the lukewarm reception received from the field, although McKubre's welcome letter was an exception. Who knows; maybe yesterday's mainstream critics will become tomorrow's supporters?

It depends on how the Google company handles its responsibility for the distribution of information. If they choose to publicly reveal how F&P calculated the output heat in the "1992 boil-off experiment", I don't think the number of CF supporters will increase.

Quote

Some of whom, BTW, have offered their expertise on this next project. Which means that along with their present team, most of whom are new to the field, there will be other new faces, with fresh ideas and outlooks.

I strongly recommend that candidates to become the new faces of the CF/LENR field first study the "1992 boil-off experiment" and fully understand the source of the excess heat claimed by F&P.

Quote

That is something the field has been deprived of for much too long. No fault of the old guard, as they have tried to bring new blood in, but with little success due to circumstances beyond their control. Google intends to change that by providing cover for a new generation to enter the study, and in doing so open up their own chapter for the history books.

Well, as authoritatively emphasized by a world-class academic (1), until now the books about CF history can be subdivided into two main chapters, dedicated respectively to F&P and to the Ecat. It's up to the TG managers to decide whether the third chapter of these books will be dedicated to their company.

I have to say I am confused. Whatever ascoli's motives (and this site seems unhealthily obsessed with motives; they are not important), I detect mixed messages about the classic LENR experiments.

Jed (and others?) are now saying that only skeptics dispute that the boil-off experiment demonstrated very large amounts of excess energy. It was exceptionally well documented, and is relatively easy to do.

So: why not do it? The fact that ascoli believes it will not work is surely irrelevant if other people here think it will. Some honesty and clarity on this matter would be helpful. I recall that when I ask for specific published evidence of LENR, the classic experiments are most often cited.

Personally I'd not do that specific experiment, because those uncontrolled boil-off conditions are difficult to analyse, but I'd have no problem with a better instrumented and controlled variant, and I'm sure that could be sorted out. F&P claimed excess heat in the more controlled parts of the experiment as well.

I still don't understand why Google should not spend their time doing the classic D/Pd electrolysis experiments. According to people here they were replicated many times, and we now know much more about the exact conditions needed to make them work than was the case originally. If the Google team has done these with negative results, but people in the community feel they have not done them the right way, surely the priority is to explain what the right way is and ask for a bit more effort in that area?

I am baffled, and I also feel perhaps we are missing part of the conversation here.

THH

Yes THH, the issue is confusing because people simply will not listen to what I'm saying. Experience shows without a doubt that the LENR effect is very material sensitive. The material MUST be able to produce the required conditions upon reaction with hydrogen. If the material cannot do this, nothing the experimenter does will cause LENR. On the other hand, improper treatment of potentially active material can kill its ability to cause LENR. The details of proper treatment are important but are frequently ignored, because the experimenter has a favorite concept they insist on applying.

After numerous discussions with Google, I came away with the impression that they think LENR is unlikely to be real, did not think my theory applies, and did not have the background required to properly evaluate what is known about LENR. This attitude is very common throughout the scientific community. Consequently, the problem looks very confusing to most people. It is not confusing to me. The problem can be solved if what is now known is applied properly. I have written two books and numerous papers in an attempt to teach. Unfortunately, learning what is known is the first problem being ignored by most efforts. I do not mean this information can be applied as if it were a recipe. Instead, it must be used as an effective guide to create the required conditions in the material. I know of only one large effort that is doing this. I expect this and any other effort that takes this approach will succeed. In other words, the confusion you speak of is caused by ignorance, nothing else.

Now a method might have been discovered that can produce the required conditions with reasonable predictability. We will see whether an effective effort is made to understand what burnishing actually does to cause LENR before the people who want to make money as quickly as possible get their hooks into the effort and cause eventual failure, because they cannot stop long enough in their quest for riches to understand how the process works. This phenomenon is much more complex than any other. It must be understood before it can be effectively applied. In this way, it is similar to the transistor, but even more complex. Based on past experience, though, I have very little hope that my suggestions will be followed.

Well, I will accept their findings, if well written. We do not yet have enough detail on the (negative) experiments they have done for that to be very useful, but I'd hope for this at some time.

As I mentioned to Jed, all the "details" are scattered around in the reference section at the bottom of the Nature paper. When time permits, they will put it all together in a user-friendly format and publish. From my understanding, there will be no additional information included other than what is already in the refs.

TG does not know if the Lawrence Berkeley findings are of a conventional nature or LENR...but they intend to find out.

The work at Lawrence Lab is exploring the hot fusion reaction at low applied energy. The goal is to determine whether a new variation in the hot fusion reaction might occur when applied energy is low. They are not addressing the role of the NAE or the unique nature of the LENR nuclear process.

This approach has been explored and published in dozens of papers. Apparently, the electrons present in the material can help overcome the Coulomb barrier when the applied energy is low, but the hot fusion process occurs nevertheless. The paper published in Nature clearly reveals the confusion between cold fusion and hot fusion on which their approach is based.

Jed (and others?) are now saying that only skeptics dispute that the boil-off experiment demonstrated very large amounts of excess energy. It was exceptionally well documented, and is relatively easy to do.

So: why not do it?

It is difficult. I think ordinary bulk Pd-D would be better to start with. I think the Google people tried to do bulk Pd-D but they did not achieve high enough loading. In that case they could not do the boil-off experiment. The Pd has to produce heat during electrolysis first, or it will not go into high-heat mode and boil off.

The work at Lawrence Lab is exploring the hot fusion reaction at low applied energy. The goal is to determine whether a new variation in the hot fusion reaction might occur when applied energy is low. They are not addressing the role of the NAE or the unique nature of the LENR nuclear process.

This approach has been explored and published in dozens of papers. Apparently, the electrons present in the material can help overcome the Coulomb barrier when the applied energy is low, but the hot fusion process occurs nevertheless. The paper published in Nature clearly reveals the confusion between cold fusion and hot fusion on which their approach is based.

This is true and very unfortunate. After reading all the sources over the years, I have recently arrived at a similar realization: the belief that LENR is a shortcut to conventional fusion, applying the same hot-fusion ideas on the assumption that the conditions change in condensed matter, has been a constant source of problems: flawed experimental design, wrong focus in the experiments, and misinterpretation of the results. LENR is a "new" kind of phenomenon; it cannot be analyzed through the same lens as conventional nuclear phenomena, the fusion of light elements or the fission of heavy elements.

Cardone keeps calling it a "nuovo fenomeno" (new phenomenon) in their cavitation approach, but they realized years ago that the energy input cannot explain what happens, from the point of view of the "force" applied when the bubble collapses. More importantly, I think, whether you like the deformed space-time theory or not, is that they have obtained different results using the same energy input but different angles and spatial orientations, which is impressive per se. I have been reading their work in detail and in sequence, and I still don't grasp well how they claim to control the reactions, but they have found ways to consistently create neutrons and transmutations (up and down the atomic number) without gamma radiation. This is completely impossible under current mainstream theory, which implies it is completely "new" and demands a new approach.

According to people here they were replicated many times, and we now know much more about the exact conditions needed to make them work than was the case originally. If the Google team has done these with negative results, but people in the community feel they have not done them the right way, surely the priority is to explain what the right way is and ask for a bit more effort in that area?

I do not know if they did it the right way. The paper does not say how they did it.

We don't need to explain to them what is the right way. Just point them to Storms, Cravens and Fleischmann. Plus Miles and some others. Maybe they read this and they are already doing it the right way. Maybe, but I doubt it, because they did not list Storms or Cravens in the references. I suppose they did not read them.

I cannot "explain" anything to them unless they contact me and ask, or unless they read these messages. I have no way of knowing whether the messages are getting through. Frankly, it is not my problem. Many people have done the wrong thing in cold fusion, for the reasons Ed described. There is nothing he or I can do about it.