Segmented flows are a technique of increasing interest in lab-on-a-chip applications, such as bio-analytical assays for drug discovery or portable biological testing. These flows provide multiple advantages over continuous flows, but the knowledge base remains very limited. This project uses a research code to simulate an array of flow conditions for segmented droplets of the test fluid. The results are used to develop relations for quantities of interest, such as relative pressure drop and wall film thickness, as a function of relevant physical parameters. In addition, the three-dimensional velocity and pressure fields generated are used to physically explain certain poorly-understood aspects of these relations.

We did compare some of my numerical results for capillary droplet flow with the corresponding experimental results of a former colleague of mine, who was working on fabricating and testing these chips. Qualitatively, the results showed similar trends, but quantitatively the two were in disagreement. However, it is my understanding that the experimental results were as much in question as the numerical code. For example, the experimentally measured pressure drop for a single fluid did not match the theoretically predicted value (whereas my code matched it, although that is not surprising for such a trivial problem). At the time, my colleague and our professor were trying to track down the source of the experimental discrepancy, looking at things such as surface roughness, variations in the channel size and shape, and the effect of the bends in the channel, all of which may behave differently at the micro-scale. Obviously, if the discrepancy turned out to be caused by a physical effect common to such devices, the appropriate physics would have to be incorporated into the code in order to accurately predict these flows.

At any rate, I would definitely be interested in comparing my results with any other experimental data, although the availability of such data appears to remain scarce.

Great job! You talk about lab-on-a-chip applications for clinical diagnostics and such. You also talk about the different flows that you can devise in simulations. The real question is at the intersection of these two. Can you provide an example of how you used a clinical need to govern the configuration of your flow models? In other words, have you given any thought to specific applications for which the droplet flows you are looking at would be useful?

The answer is… yes and no. I, personally, have not had much involvement in the design of flow configurations for clinical applications, other than sitting in on a few group meetings early on in my project. However, my project is in some respects an expansion on previous experimental work that has been carried out by members of the Mechanical Engineering and Chemistry departments at LSU.

One experiment that I know has been demonstrated to work is a fluorescence cross-correlation spectroscopy assay to test the effectiveness of enzyme inhibitors. Short DNA strands (oligonucleotides) were prepared with two fluorescent markers, one on each end of the strand, that fluoresced at two different wavelengths under laser light. The DNA, the enzyme (APE1), and an enzyme inhibitor in aqueous solution were then passed through the chip as the testing fluid. Then, using optical sensors for the two emission wavelengths of the markers, they could independently track the signals indicating the two halves of the DNA sequence in the droplets as they passed by the sensor. If the enzyme cleaved the molecule, the two markers would generally pass by at different times. If it did not, the two signals would arrive at approximately the same time.

Now, the problem does present some rather important limitations on the flow. First of all, there must be adequate mixing in the droplets for all the chemicals to interact. As I touched on in my video, the circulation inside the droplets improves mixing compared to single-phase flows, and there are other techniques in constructing the channel shapes that can help as well. Analysing the flow fields from the simulations can give us a quantifiable description of the mixing rate, and let us know which flows would accommodate the need. The speed of the droplets is also a major factor. For one thing, there needs to be enough time between the formation of the droplets and their reaching the sensor for the biochemical reactions to take place. Also, I believe there is a limit to how fast the sensors can sample the photons; if the droplets pass by too quickly, there will not be enough data points to perform an adequate signal analysis. Both of these place upper limits on the allowable speed of the droplets. This can be a design challenge since, as it was explained to me, it becomes more difficult to generate stable, predictable flows at the lower flow rates.
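To give a concrete sense of what a quantifiable description of mixing might look like, here is a minimal sketch using one common metric, the intensity of segregation, applied to a sampled concentration field. This particular metric and the function name are illustrative assumptions on my part, not necessarily the measure used in the project:

```python
import math

import numpy as np

def mixing_index(c):
    """Degree-of-mixing metric for a sampled concentration field c,
    scaled from 0 (fully segregated binary field) to 1 (perfectly
    uniform), based on the intensity of segregation var(c)/var_max,
    where var_max = cbar * (1 - cbar) for concentrations in [0, 1]."""
    c = np.asarray(c, dtype=float)
    cbar = c.mean()
    var_max = cbar * (1.0 - cbar)
    if var_max == 0.0:
        return 1.0  # field is uniformly 0 or 1: nothing left to mix
    return 1.0 - math.sqrt(c.var() / var_max)

print(mixing_index([0, 0, 1, 1]))          # fully segregated -> 0.0
print(mixing_index([0.5, 0.5, 0.5, 0.5]))  # perfectly mixed  -> 1.0
```

Tracking an index like this over successive saved time steps of a simulation would give the mixing rate directly.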

However, as I said, I was not directly involved in these designs. My involvement in the project has been more focused on developing and using the code as a tool to investigate these flows. I did use one of the experimental flow patterns as sort of my “baseline” for the parametric studies, and then varied the different parameters from there, seeing how far I could push the code and what the results were.

A very nice contribution and great tool for microfluidics device developers. Can your model handle particulates such as cells, viruses, aggregates, etc., or is it strictly limited to fluids and soluble compounds? How do you validate your model, and how do you determine critical parameters that are likely composition-dependent?

Our model treats each fluid as a homogeneous, Newtonian entity. We do not model any particulate matter directly. We can, however, model the effects of solutes in the flow to the extent that they modify the fluid properties. For example, in the liquid-liquid experiments of my colleague, the carrier fluid was a perfluorocarbon with 10% (by volume) perfluorooctanol as a surfactant. This surfactant affects the density and viscosity of the fluid, and especially the surface tension between the carrier fluid and the water droplets. However, we simply measured these properties in the lab and used the resulting numbers as constants in the simulation.

The code could, however, be modified to handle more general cases. For example, if we found that surfactant matter was unevenly distributed on the surface of the droplets, we could change the code to handle this. Further modifications would also be possible, if needed, to accommodate non-Newtonian rheologies and also to accommodate the transport of substances mixed or in solution.

A colleague of mine, working as an intern with my professor at LSU, did some simulations modelling cells as highly viscous drops with the appropriate surface tension to approximate the mechanical behaviour of cells. Although this approach is not unreasonable, they concluded that it would be best to model a cell as a viscoelastic particle, and this adds another dimension to the problem. It then becomes one of fluid-structure interaction, requiring CFD coupled with computational mechanics (e.g. employing finite elements for the stress and deformation solution on the cell).

As I mentioned in a previous response, the model has been validated qualitatively by comparing trends with experiments. Quantitative comparison was not to our satisfaction, but the experiments themselves involved a high degree of uncertainty, given that they were conducted on the micro-scale with some unpredictable variability even in the geometry of the micro-channels themselves, stemming from micro-manufacturing issues over the rather long length of the channels required for credible pressure drop measurements.

In the interest of honesty, I have to admit that our efforts to make more than trivial improvements to the code have met with somewhat less success than we had hoped for thus far.

First of all, because of the long simulation times, we wanted to explore the possibility of making the code run more efficiently, specifically the two Poisson solvers (for the density and pressure equations), which take up the majority of the calculation time. Working with Professor Tromeur-Dervout in Lyon, I applied an additive Schwarz domain decomposition technique, then took advantage of the linear convergence of the solution at the interfaces to periodically apply an Aitken acceleration and predict the final solution at the interfaces [ref 1]. Technically, the technique worked, in that it did converge, and the Aitken acceleration improved convergence over the Schwarz technique alone. However, for the cases I was working with, the modified code actually ran more slowly than the original code, as it turned out that the original code converged in fewer iterations than the Schwarz-Aitken technique required. I suspect the decrease in solution speed was due to the simple domain splitting blocking the original multigrid method’s ability to communicate coarse-grid solutions quickly across the entire domain. Meanwhile, concurrent research by others under Professor Tromeur-Dervout confirmed that the technique in general was more likely to be successful for very large systems. We were not able to test the modified code on a system sufficiently large for the benefits of the technique to become visible, but this could be an interesting topic of future work.
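For readers unfamiliar with the acceleration step, the following toy sketch shows the Aitken delta-squared idea on a scalar fixed-point iteration with exactly linear convergence. The real application is to the vector of Schwarz interface values; this scalar version is only an illustration:

```python
def aitken_extrapolate(x0, x1, x2):
    """Aitken delta-squared prediction of the limit of a linearly
    converging sequence, from three successive iterates."""
    denom = x2 - 2.0 * x1 + x0
    if abs(denom) < 1e-30:
        return x2  # differences have vanished: already converged
    return x0 - (x1 - x0) ** 2 / denom

# Toy fixed-point iteration x_{k+1} = g(x_k) with fixed point x* = 2.0
# and constant linear convergence rate 0.5.
def g(x):
    return 0.5 * x + 1.0

x0 = 0.0
x1 = g(x0)  # 1.0, error 1.0
x2 = g(x1)  # 1.5, error 0.5
print(aitken_extrapolate(x0, x1, x2))  # 2.0: exact, since the rate is constant
```

Because the interface values in the Schwarz iteration converge linearly, applying this extrapolation periodically can jump ahead of many plain iterations, which is the effect described above.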

I have also modified the code to simulate annular flows, where the testing fluid forms a continuous tube or jet, surrounded by the carrier fluid. We were particularly interested in investigating the stability of these flows, as they represent the limiting case of long plugs which approach each other. Initial testing appears to show the modification works. However, the method for measuring (and thus controlling) the fluid volume relies on a closed front, and an improved method needs to be developed for these infinite (periodic) shapes. This is important as the need to automatically correct the front in order to conserve dispersed fluid volumes is one limitation of the implementation of the front-tracking technique.

Finally, the code as I originally received it did not handle particularly well cases where the droplet front comes very close to the wall (typically within 1 to 1.5 grid spacings). This is because the code cannot resolve the physics of the lubrication effect of a carrier-fluid film this thin. I made a modification to the code which artificially allows the front points to slide past the walls if they are too close. This prevents fronts which momentarily approach the walls from becoming “stuck”, as was seen in some early simulations. However, this modification is not necessarily physically accurate, so for cases where the film remains thin in some places even at steady state (such as for low-capillary-number plug flows), we need to apply improved physics to the code. We are currently working on incorporating a technique based on lubrication theory, where the thickness of the film and certain assumptions about the flow are used to estimate the local flow field. This technique has already been demonstrated on a two-dimensional version of this code [ref 2]. However, to our knowledge, no results using a three-dimensional version have been published so far.
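As an aside, lubrication theory also supplies the classical scaling for how thin these films actually get. The sketch below uses the Bretherton correlation for a long bubble in a circular capillary at low capillary number; the coefficient differs for viscous droplets and non-circular channels, and the example numbers are illustrative, not measured values from this project:

```python
def bretherton_film_thickness(radius, mu_carrier, speed, sigma):
    """Bretherton (1961) low-Ca estimate of the wall film left by a
    long bubble in a circular capillary of the given radius:
        h = 1.34 * R * Ca**(2/3),  with Ca = mu_carrier * U / sigma.
    Valid only for Ca << 1."""
    ca = mu_carrier * speed / sigma
    return 1.34 * radius * ca ** (2.0 / 3.0)

# Illustrative micro-channel values: 50-micron-wide channel,
# 1 mm/s droplet speed, oil-water-like interfacial tension.
h = bretherton_film_thickness(radius=25e-6, mu_carrier=1e-3,
                              speed=1e-3, sigma=0.02)
print(h)  # a few tens of nanometres: far below one grid spacing
```

Films this far below the grid spacing are exactly the regime where a sub-grid lubrication model is needed.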

Could you use data on “classical” (non-micro) systems (which I think are probably easily available in the literature, and reliable) to achieve some validation? Or could you compare with predictions, say, from FLUENT?

The front-tracking technique has been validated for a number of different types of flows [ref 1]. As I mentioned in a previous comment, results of micro-scale Taylor-like liquid-liquid flow experiments are difficult to come by. Larger-scale systems may be an interesting idea for validation. However, it should be noted that larger scales will in general have a much higher Eötvös number than the micro-scale flows. That is, gravity and the buoyancy of the droplets will play a much more important role relative to the surface tension effects, unless the experiments are conducted in micro-gravity.
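The scale argument can be made concrete with the Eötvös number itself. In this sketch the fluid properties are illustrative stand-ins (roughly water droplets in a denser fluorocarbon carrier), not measured values:

```python
def eotvos_number(delta_rho, length, sigma, g=9.81):
    """Eotvos (Bond) number Eo = delta_rho * g * L**2 / sigma:
    gravity/buoyancy forces relative to surface-tension forces."""
    return delta_rho * g * length ** 2 / sigma

# Same fluid pair, two channel scales (illustrative property values):
eo_micro = eotvos_number(delta_rho=800.0, length=100e-6, sigma=0.01)
eo_macro = eotvos_number(delta_rho=800.0, length=10e-3, sigma=0.01)
print(eo_micro)  # ~0.008: surface tension dominates at 100 microns
print(eo_macro)  # ~78: buoyancy dominates at the centimetre scale
```

Since Eo scales with the square of the channel size, a hundredfold increase in scale raises it by a factor of ten thousand, which is why macro-scale validation data would need careful interpretation.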

A colleague of mine working with my professor at LSU is in the process of benchmarking some cases I ran using the Volume of Fluid method as implemented in ANSYS/FLUENT. A comparison of the two solutions will be most informative. There is also a recent level-set method implementation in ANSYS/FLUENT that they will be benchmarking as well.

Thanks, Farid.
I have actually been running my code exclusively on parallel machines for quite some time. For the cases I investigate, the code usually runs fastest using either 8, 12, or 16 processors. Since the solution is transient, what I typically do is to let it run until the flows reach a steady state. The time this takes can vary greatly. For example, a high-Reynolds number, low-capillary number flow may approach steady state in two or three days with tens of thousands of time steps, while a low-Reynolds number high-capillary number flow may take several weeks of calculation with millions of time steps.
The fixed grid ranges from 400,000 to over 5 million grid points in some cases, while the moving front representing the droplet boundary can have an additional 10,000 to over 300,000 points, depending on the droplet size and shape. For a typical plug flow simulation, I then have on the order of 100 MB of formatted data for every time step at which I save the full solution. Obviously, I have to find the right balance between doing this sparingly enough to avoid having too much data to handle and doing it frequently enough to have a good picture of what is happening in the flow, especially if something unexpected happens.
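The 100 MB figure is easy to reproduce with a back-of-envelope estimate. The field count and characters-per-value below are my own illustrative assumptions for a formatted (ASCII) output, not the code's actual file layout:

```python
def snapshot_size_mb(n_grid_points, n_fields=4, chars_per_value=15):
    """Rough size of one formatted solution snapshot: three velocity
    components plus pressure (n_fields=4), each value written as
    roughly 15 ASCII characters including delimiters."""
    return n_grid_points * n_fields * chars_per_value / 1e6

print(snapshot_size_mb(2_000_000))  # 120.0 MB: the order quoted above
```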

I enjoyed your video very much. Your work parallels somewhat what mine entails, in that I am interested in dermal transport of hydrophilic substances through microporous channels of the outermost skin layer. Hence, I am very interested in your type of modeling. I also have enjoyed your comments on the other videos I have selected. Very insightful!

Thanks, Terri!
Yes, I can definitely see the similarities. Do you work in the lab, in numerical simulations, or both?
I would be interested in learning more about your models, as well. When you say hydrophilic substances, are these in the liquid state? Or hydrophilic particles, as a powder or in suspension?
I know that polygonal channels are sometimes used to represent multiphase flows through porous substances, as they share some of the same features which are absent in cylindrical channels (such as the flow being able to slip past the dispersed phase more easily in corners).
Of course, the applicability of our numerical method depends greatly on the specifics of your flows – what substances are being transported, the scale and nature of the microporous channels, and other factors. However, if any part of my research could help you in your work, I would of course be more than happy to share with you what I know.
Thanks again for your comments.

Hi Eamonn,
Great video. I have a couple of questions. First, you mention the application to bio-assays. Does your model take into account the forces on the fluid in the segmented droplets? I know you mention that the shape of the droplets can be changed by different parameters in the microfluidic device. Could this shape change cause embedded biological cells to deform or burst?
Secondly, how long does your code take to run? Are you looking to improve the efficiency in time, or the breadth of the simulations?
Thanks,
T.J.

Hi, T.J.,
Thanks, and good questions. Your first one actually brings up a number of important points, so I hope you bear with me on the answer.

First of all, you asked about the forces on the fluid in the droplets. The code is solving the standard incompressible Navier-Stokes equations for fluid flow on the entire domain. This includes the physical forces associated with the pressure and viscous stresses. The important thing here, with the front-tracking method, is that the coefficients related to fluid density and viscosity are not constant across the domain, but are defined locally at each grid point depending on whether they are inside or outside the droplet (our code also uses Peskin’s smoothing for points close to the interface, so there is not a sharp discontinuity in the fluid properties). There is also a source term for the force associated with surface tension at the interface, which depends on the local curvature of the droplet interface.
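The exact smoothing used in the code is not spelled out above, but a Peskin-style construction commonly looks like the following sketch, in which a smoothed Heaviside function (the integral of Peskin's cosine delta function) spreads the property jump over a band of about four grid cells. The function names and sample values are illustrative:

```python
import math

def smoothed_heaviside(r, h):
    """Smoothed Heaviside built from Peskin's cosine delta function.
    r: signed distance from the interface (negative inside the droplet),
    h: grid spacing; the jump is spread over a band of width 4h."""
    if r <= -2.0 * h:
        return 0.0
    if r >= 2.0 * h:
        return 1.0
    return 0.5 + r / (4.0 * h) + math.sin(math.pi * r / (2.0 * h)) / (2.0 * math.pi)

def local_viscosity(r, h, mu_in, mu_out):
    """Grid-point viscosity: varies smoothly from mu_in (droplet fluid)
    to mu_out (carrier fluid) across the interface, instead of jumping."""
    return mu_in + (mu_out - mu_in) * smoothed_heaviside(r, h)

print(local_viscosity(-1.0, 0.1, 1.0, 5.0))  # 1.0: deep inside the droplet
print(local_viscosity(0.0, 0.1, 1.0, 5.0))   # 3.0: the average, at the interface
print(local_viscosity(1.0, 0.1, 1.0, 5.0))   # 5.0: out in the carrier fluid
```

The same construction applies to the density field, and it is this smooth variation that keeps the discretised Navier-Stokes coefficients well behaved near the front.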

Secondly, you asked about the deformation of biological cells which may be suspended in the fluid. This is a very good question. I’m afraid I don’t have the data for how much of an effect the flow may have on them. However, I would imagine the pressure changes and shear stresses would be more directly related to their deformation and eventual destruction than the shape of the droplet itself, and both of these can be easily derived from my simulation results. Now the caveat to this: These channels can be very small. The chips produced at LSU have channel widths between 50 and 200 microns, and the idea has already been brought up of possibly going smaller in the future. Therefore a biological cell may be large enough compared to the droplet size to have a noticeable influence on the flow field. Since our current model treats the droplet as a homogeneous fluid, it would not take this effect into account. Of course, it is possible to modify the code to simulate particulate or larger-body entities in the fluid, albeit at a higher computational cost.

At this point you may be asking yourself, “How useful is the code in its current state if it doesn’t take into account the effects of cells on the flow?” This brings me to my third point, which is, “What are some of the specific potential applications of these flows?” This is a topic that I did not address in my video due to time constraints, and because I chose to focus more on the work with which I was directly involved. However, it has come up a few times in questions from friends and some of the judges, and in hindsight, it might have made for a more interesting video than what I had presented.
To address this, let me briefly describe one of the experiments that has been successfully carried out in the LSU Chemistry department to demonstrate the usefulness of these chips. The experimental chip was used to perform a fluorescence cross-correlation spectroscopy assay to test the effectiveness of enzyme inhibitors. Enzyme inhibitors are a common category of drug, and this technique could be useful for rapid testing of potential new drugs for treating various diseases. In the experiment, short DNA strands (oligonucleotides) were prepared with two fluorescent markers, one on each end of the strand, that fluoresced at two different wavelengths under laser light. The DNA, an associated enzyme (APE1), and an enzyme inhibitor, all in aqueous solution, were then passed through the chip as the testing fluid. Then, using optical sensors for the two emission wavelengths of the markers, they could independently track the signals indicating the two halves of the DNA sequence in the droplets as they passed by the sensor. If the enzyme cleaved the molecule, the two markers would generally pass by at different times. If it did not, the two signals would arrive at approximately the same time.
This is an example of a useful application that does not include whole cells, and the molecules in the droplets are small enough that treating the whole droplet as a homogeneous fluid is probably a valid approximation, at least in terms of predicting the flow fields.

And finally, you asked about run times for my code. As I mentioned in a previous comment, this varies a great deal depending on the domain size, droplet size, time scale of the flow, and other factors. For the grid resolution I have been using, small droplet simulations may take two or three days to finish, while some larger plug simulations may take up to several weeks. Thus, one of our goals was to investigate ways we might be able to improve the efficiency, as you said, to reduce the simulation times. Unfortunately, as it turns out, the code has limited scalability due to the need to communicate data for droplet fronts that cross multiple processor domains, and the multigrid technique it uses was already competitive with the Schwarz-Aitken acceleration technique we were investigating for our current configuration. Our current work on the code, as I mentioned in the video, is on implementing a lubrication theory-based algorithm in the code in order to simulate flows with thin films. Currently, the minimum film thickness we can simulate is restricted by the coarseness of the grid, and finer grids would be prohibitively slow to solve for practical simulations.

I know that was a lot of information, but I hope I answered your questions. Please feel free to ask any follow-up questions if you have any, and thanks again for your interest.
- Eamonn