With just a few individuals left to send off as redos, most of the focus on the milkweed project has shifted toward analysis. Analysis is inherently tricky because it involves a lot of trial and error. There are essentially three options: use someone else’s software, modify someone else’s software to fit your needs, or write your own. The last option is extremely time intensive, so our hope was to find existing software we could fit to our needs.

If there’s one thing that I’ve learned from my three years of research, it’s that it’s super important to go with the flow. Since my last update, my milkweed project has taken on an entirely new shape. After discovering that glyphosate herbicides do not act as a proxy for connectedness, I decided to take a different approach and expand on my project from last summer.

My original plan was to use common milkweed as a study system to understand the impact of clonality on group survival. By intentionally applying a damaging agent, such as an herbicide, the spread of its negative effects can be observed across clonally connected plants. The goal of this experiment was to see how far the damage travels in a patch, how long it takes for other plants to die, and whether there is any preferential sharing; for instance, younger plants are sometimes favored. Herbicide would be used as a proxy for connectedness and physiological integration.

Following DNA extractions, it was time to amplify the DNA through the polymerase chain reaction (PCR). PCR amplifies DNA by repeating cycles of denaturation, primer annealing, and synthesis. Denaturation is driven by temperature, first during the DNA extraction process and then again in the thermocycler (or PCR machine). The thermocycler regulates temperature, allowing PCR to step through its different phases, such as denaturation and synthesis. Synthesis is carried out by a special heat-stable polymerase, Taq.
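Because each cycle copies the strands made in the previous cycle, the target DNA grows roughly exponentially. A minimal sketch of that idea, with a hypothetical `pcr_copies` helper (the `efficiency` parameter is an assumption, standing in for the fact that real reactions fall short of perfect doubling):

```python
def pcr_copies(initial_copies, cycles, efficiency=1.0):
    """Estimate copy number after a given number of PCR cycles.

    efficiency = 1.0 models ideal doubling every cycle (denature,
    anneal, extend); real reactions are somewhat less efficient.
    """
    return initial_copies * (1 + efficiency) ** cycles

# A single template molecule after 30 ideal cycles:
print(pcr_copies(1, 30))  # → 1073741824.0 (about a billion copies)
```

This is only a back-of-the-envelope model; it ignores plateau effects late in the reaction, but it shows why ~30 cycles is enough to take a tiny extract to a detectable amount of DNA.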

Following the collection of data and samples, it was time for lab work to begin. The first step of my procedure was DNA extraction. With around 800 plant samples, I quickly realized that extracting all 800 would not be practical, financially or time-wise. This led to many discussions on how to subsample. Was it better to cover more transects, with fewer plants from each? Or more plants per transect, with less coverage of the transects overall? How do you account for the difference in densities between transects? Over time, with more and more discussion, it became clear that sampling more transects was the better option, even if that meant fewer plants per transect. Additionally, any transect with roughly 30 or fewer plants would be sampled in its entirety; for any transect over that, a subsample would be drawn using a random number generator to select which plants to extract.
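The subsampling rule above can be sketched in a few lines. This is a hypothetical illustration, not the actual selection script: the transect names, plant counts, and the `choose_plants` helper are made up, and the cutoff of 30 follows the rule described.

```python
import random

def choose_plants(transects, cutoff=30, seed=1):
    """Pick which plants to extract from each transect.

    Transects at or below the cutoff are sampled completely; larger
    transects get a random subsample of size `cutoff`. The seed makes
    the random selection reproducible.
    """
    rng = random.Random(seed)
    chosen = {}
    for name, n_plants in transects.items():
        plant_ids = list(range(1, n_plants + 1))
        if n_plants <= cutoff:
            chosen[name] = plant_ids                      # whole transect
        else:
            chosen[name] = sorted(rng.sample(plant_ids, cutoff))
    return chosen

example = choose_plants({"T1": 22, "T2": 75, "T3": 30})
print({t: len(ids) for t, ids in example.items()})
# → {'T1': 22, 'T2': 30, 'T3': 30}
```

Fixing the seed is the key design choice here: it means the "random" subsample can be regenerated exactly if a sample list is ever lost or questioned.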