There are numerous filters that separate particles in liquid based on their size, which can be enough to isolate them; however, particle shape can be more important, as it distinguishes healthy red blood cells from those affected by sickle-cell disease or malaria. Shape can also be used to determine what stage of the cell cycle a cell is in, which would benefit researchers looking for dividing cells. Recent research by Dino Di Carlo of UCLA looks to separate particles of differing aspect ratios continuously, using inertial fluid dynamics. His work, “Continuous Inertial Focusing and Separation of Particles by Shape,” featured in Physical Review X, reminds me of his previous work using inertial fluid dynamics to continuously filter particles according to size.

Particle Separation

Existing methods to separate particles according to shape include hydrodynamic filtration (HDF) and deterministic lateral displacement (DLD), along with a few others. DLD involves a grid of posts in a channel, arranged in a way that effectively separates particles according to size. This can be enhanced by controlling particle orientation in order to filter by shape, or by applying shear stress to separate by deformability. HDF uses highly branched channels to separate particles by size according to the fluidic resistance of the side channels. Both of these methods are passive and continuous, but they have highly complex structures and require low flow rates (no greater than a few µL/min). Di Carlo’s method uses fluid inertia to focus particles in channels at high flow rates of 40 to 80 µL/min. It relies on a shear-gradient lift force and a wall-effect lift force to shift particles across streamlines when the particle Reynolds number is on the order of 1 or greater. The shear-gradient lift force is directed down the shear gradient and toward the wall, while the wall-effect lift force is caused by the wake of a particle near the wall and is directed away from the wall. When the particle Reynolds number is on the order of 1, inertial lift forces dominate the particle behavior; when it is much less than 1, viscous interactions dominate. As the particle Reynolds number increases, migration across streamlines, away from the center line, is observed.
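To get a feel for the regime Di Carlo operates in, the particle Reynolds number can be estimated as Re_p = Re_c (a/D_h)², where Re_c is the channel Reynolds number, a the particle diameter and D_h the hydraulic diameter. A rough sketch: the 80 µL/min flow rate and 6 µm sphere are from the paper, but the channel dimensions and fluid properties below are illustrative assumptions, not values from the paper.

```python
# Rough estimate of the particle Reynolds number governing inertial focusing.
# Channel dimensions and fluid properties are illustrative assumptions.

def particle_reynolds(q_ul_min, width_m, height_m, diameter_m,
                      density=1000.0, viscosity=1e-3):
    """Re_p = Re_c * (a / D_h)^2, with Re_c based on the mean velocity."""
    q = q_ul_min * 1e-9 / 60.0                           # µL/min -> m^3/s
    u_mean = q / (width_m * height_m)                    # mean velocity (m/s)
    d_h = 2 * width_m * height_m / (width_m + height_m)  # hydraulic diameter
    re_channel = density * u_mean * d_h / viscosity
    return re_channel * (diameter_m / d_h) ** 2

# A 6 µm sphere in an assumed 50 µm x 30 µm channel at 80 µL/min:
re_p = particle_reynolds(80, 50e-6, 30e-6, 6e-6)
print(f"Re_p ≈ {re_p:.2f}")  # order 1, where inertial lift dominates
```

With these assumed dimensions the result lands right around 1, the threshold the paper identifies for inertial lift to dominate viscous drag.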

Di Carlo set out to separate spheres and ellipsoids of equal volume but differing aspect ratios. Spheres 3 µm and 6 µm in diameter were used, along with volume-matched ellipsoids of 1:3 and 1:5 aspect ratios.

Results

This work demonstrates that rod-like particles find equilibrium positions close to the center of the channel, while spheres of the same volume end up in streamlines close to the wall. When the major axis of a particle rotates perpendicular to the plane of the wall, the wall-effect lift increases and the particle is pushed away from the wall. Once the major axis has realigned with the direction of the flow, the wall-effect lift decreases and the particle moves toward the wall once again. Particles with higher aspect ratios, however, experience a wall-effect lift greater than the shear-gradient lift and find equilibrium positions closer to the center of the channel. As the particle Reynolds number increases (increasing the flow rate is one way to do this), the shear-gradient lift force grows faster than the wall-effect lift force, but the higher-aspect-ratio particles still rotate, experience a greater wall-effect lift and return to the center. This relationship with increasing particle Reynolds number allows the method to scale with flow rate, while the previous methods do not.

Four different shape-activated particle-sorting (SAPS) devices were designed with varying numbers of outlets, outlet resistances, channel aspect ratios and flow rates. Three of the devices used the 6 µm spheres and their derived ellipsoids, while the final device used the 3 µm particles. Performance varied across the devices, but for sorting yeast cells the researchers selected device C, which had seven outlets and isolated 88% of the spheres at 87% purity, 49% of the 1:5 rods at 78% purity and 77% of the 1:3 rods at 80% purity. Yeast cells are normally spherical, but form a bispherical twin or aggregate when budding; this change in shape is similar to the varying aspect ratios examined previously. Synchronizing cell cycle stages is useful, but it is typically achieved with chemicals that alter cell physiology, changes in temperature or size filtration. SAPS C was able to extract nondividing singles with high yield and purity up to 94%, and 54% of budded yeast cells were recovered at 31% purity, up from 6.6% purity at the inlet. According to my rough measurements from the paper’s figures, the aspect ratio of budding yeast is less than 2:1, which may explain the difference in performance compared to the original ellipsoids. Previously, Sugaya et al. used a five-outlet HDF system to separate budding yeast cells; they achieved up to 69.4% purity of budding cells in one outlet, up from 39.4% at the inlet. That same outlet recovered 28.8% of budding cells, while another recovered 65.2%. There is still room to improve Di Carlo’s budding yeast yield, but his device operated at 1500 cells/s, compared to 100 cells/s for previous work that used dielectrophoretic forces based on the opacity of dividing yeast.
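The purity and yield figures used to grade each outlet follow directly from cell counts. A minimal sketch of the two definitions, using hypothetical counts chosen only for illustration (not data from the paper):

```python
# Purity and yield (recovery) as used to grade a sorting outlet.
# The cell counts below are hypothetical, for illustration only.

def purity(target_in_outlet, total_in_outlet):
    """Fraction of cells in an outlet that are the target shape."""
    return target_in_outlet / total_in_outlet

def yield_fraction(target_in_outlet, target_injected):
    """Fraction of all injected target cells recovered at the outlet."""
    return target_in_outlet / target_injected

# Hypothetical outlet: 440 spheres collected out of 500 injected,
# alongside 66 rods that leaked into the same outlet.
print(purity(440, 440 + 66))     # ≈ 0.87
print(yield_fraction(440, 500))  # 0.88
```

The two metrics trade off against each other: a narrower outlet raises purity by rejecting leaked rods, at the cost of lost target cells and thus yield.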

Future Work

Di Carlo has proposed that this work be used to sort shaped particles in other areas: improving cytometry that operates on spherical particles, aligning barcoded particles, and identifying microalgae that vary in size and shape. Interestingly, he also introduced the capability of this setup for a non-biological process: improving cement. Cement strength and stability are affected by particle shape and size and could benefit from shape-based separation. According to Dr. Di Carlo, “… [Cement] particles that are too large may not react completely in internal regions of the particle, while smaller particles with very high surface area to volume ratios can react too quickly and may not be stable.” I’m excited to see microfluidics expand into more established industries and further demonstrate its real-world potential as a more cost-effective, accessible technology.

Organs on Chips

In an effort to model the complex processes occurring in human bodies, Donald Ingber has pioneered the development of ‘organs-on-chips,’ reproducing the lung and the gut on microfluidic devices. These systems allow researchers to replicate and study organs without the use of human test subjects. While studying a living subject is one of the best options, there are too many variables to control, understand and, more importantly, manipulate. At the other end of the spectrum is an in vitro study with a cell line and few variables, which hardly resembles the real environment. Researchers in Switzerland have developed their own gut-on-a-chip, called the NutriChip, that imitates the human gastrointestinal tract. They hope to use this microfluidic device to study the immune-modulatory function of food (with a strong focus on dairy). This work is detailed in the article “NutriChip: Nutrition Analysis Meets Microfluidics,” which appears in Lab on a Chip.

Device Aims

The NutriChip is primarily focused on analyzing the presence and kinetics of inflammatory biomarkers after a meal, which should shed light on how certain foods affect our bodies. For example, milk products not only provide nutrients but also affect our physiological functions via bacteria, proteins and bioactive peptides. In particular, these agents may modulate the production of pro-inflammatory cytokines, which the NutriChip aims to monitor. The NutriChip mimics the thin layer of epithelial cells of the intestinal tract and the immune system it interacts with. In this system, the epithelial cells transport nutrients into circulation to be metabolized by the body, and the immune system mounts responses to any trespassing pathogenic organisms. Not only do epithelial cells try to keep out pathogens, but they must also filter what reaches the immune layer in order to maintain immunological tolerance to nutrients.

Device Design

The device is composed of three distinct parts: an apical layer, a basolateral layer, and a membrane that separates them. The apical layer is populated with a culture of intestinal epithelial cells; in this model, the authors used Caco-2 cells to produce a confluent layer mimicking the intestine. This cell line is derived from a human colorectal adenocarcinoma and demonstrates most of the morphological and functional traits of intestinal cells. The basolateral layer contains a culture of a monocytic cell line differentiated into macrophages. A chamber downstream of the monocytes contains magnetic beads functionalized with antibodies against the targeted cytokines. These beads allow in situ capture and washing of the cytokines before fluorescent detection.

Experimental Procedure and Results

The researchers treated the apical layer for 24 hours with tumor necrosis factor alpha (TNFα) and lipopolysaccharide (LPS), which is found in the membrane of Gram-negative bacteria. This resulted in a significant secretion of the cytokine interleukin 6 (IL-6) into the basolateral media. However, the Caco-2 cells protected the macrophages very well: the LPS concentration that had to be applied to the apical layer to produce a response was three orders of magnitude greater than the concentration that produces the same cytokine response when applied directly to the basolateral layer. When the macrophages were treated directly with LPS, there was a significant increase in IL-6, which indicates that the researchers may be able to quantify IL-6 with their proposed magnetic beads. These two experiments indicate that they can partially mimic the intestinal tract and monitor the effect that certain compounds or nutrients have on the body’s immune system.

Thoughts on Future Work

This is all very preliminary work, and the researchers hope to measure the pro-inflammatory activity of meals, the anti-inflammatory activity of dairy products, as well as the bioavailability of nutrients in digested foods. To make that final component a reality, they also intend to include on-chip digestion capabilities. I am sure there are alternatives to a dynamic digestive tract moving bolus through the system, but I’d love to see it anyway. I think this work could be extremely beneficial in understanding food allergies. We seem to have more food allergies these days, which may be due to changes in modern life, may have gone undetected previously, or is perhaps just overdiagnosed.

Back when I was in sixth grade, I remember reading a little blurb in some science magazine at school claiming that in the future we could receive shots via a method that would feel as soft as a banana peel. Although I’m now a champ at taking shots, it’s still not a bad idea. We’ve had transdermal patches (think nicotine and birth control) for some time now, but those release their medicine over a period of time. A syringe is capable of delivering a dose all at once, and it can take a biological sample too. Researchers from the University of Pisa have developed this ‘syringe of the future,’ described in ‘A minimally invasive microchip for transdermal injection/sampling applications’ in Lab on a Chip.

Microsyringe Design

The microsyringe is actually a 0.5 cm x 0.5 cm microchip featuring thousands of hollow silicon-dioxide microneedles. The microneedles are 100 µm long, at a density of 1 million needles/cm². These needles are at least 100 times denser and 10 times smaller than others reported in the literature. Furthermore, this is not simply a design for microneedles; the researchers have incorporated a reservoir, sealed with a plastic cap, that stores samples drawn from the body and holds medicine when injecting. The reservoir comprises 14 independent volumes adding up to 4.2 µL. The microsyringe would not penetrate as deeply as a normal syringe and would have a smaller total cross-sectional area, resulting in a less painful injection. The researchers intend the microsyringe to be an integral part of an artificial pancreas capable of continuously sampling interstitial fluids, measuring glucose levels and releasing insulin to regulate blood glucose.

Microsyringe Theoretical Analysis

Schematic of the microsyringe illustrating the array of needles connecting to independent reservoirs on the reverse side.

If the microsyringe is going to be used in the real world, the microneedles need to be able to puncture the outer layers of the skin without breaking. The researchers first tested this theoretically by analyzing the forces acting on the microneedles during insertion. The force required to pierce the skin (derived from a known skin-piercing pressure and the known area of the microsyringe) is compared to the maximum buckling and bending forces the microneedles would encounter. The buckling force arises when the microneedle insertion is misaligned with respect to its axis of symmetry (in other words, when the skin isn’t orthogonal to the microneedle cross-section). The bending force is generated by lateral movement between the tissue and needle at the beginning of insertion. The theoretical analysis reveals that the buckling and bending forces are at least 10 times greater than the piercing force, giving the needles a factor of safety greater than 10.
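The buckling side of that comparison can be sketched with the classic Euler formula for a column fixed at its base and free at the tip, F_cr = π²EI/(4L²). The needle length is from the paper, but the radii, modulus and assumed piercing force below are illustrative guesses, not the authors' values:

```python
import math

# Euler buckling load for a single hollow microneedle, modeled as a
# column fixed at the base and free at the tip: F_cr = pi^2 * E * I / (4 L^2).
# Radii, modulus and piercing force are illustrative assumptions.

E = 70e9                        # Young's modulus of silicon dioxide (Pa), typical
L = 100e-6                      # needle length from the paper (m)
r_out, r_in = 1.0e-6, 0.5e-6    # assumed outer/inner radii of the hollow shaft (m)

I = math.pi * (r_out**4 - r_in**4) / 4        # second moment of area (m^4)
F_buckling = math.pi**2 * E * I / (4 * L**2)  # critical buckling load (N)

F_pierce = 1e-6                 # assumed skin-piercing force per needle (N)
print(f"buckling load ≈ {F_buckling*1e6:.1f} µN, "
      f"safety factor ≈ {F_buckling / F_pierce:.0f}")
```

With these assumptions the buckling load comes out around 13 µN per needle, an order of magnitude above the assumed piercing force, consistent with the paper's reported safety factor of more than 10.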

Microsyringe Mechanical Testing

SEM of fabricated microsyringe illustrating the density and uniformity of the needles.

The researchers followed up this theoretical analysis by simulating insertion into human skin using agarose gel models, which have mechanical properties similar to those of skin. The microsyringes were inserted at 200 and 500 gram-force (typical forces produced by a finger) for 30, 60 and 120 seconds. After repeated insertion tests of the same microneedles, the researchers found that they all penetrated successfully without significant damage to the needles (characterized by SEM). Since this device needs to be able to store liquid, they also tested losses due to evaporation and acceleration (i.e., dropping). Over a 19-day test period, they measured an evaporation rate of 71 nL/min through the microneedles, which would drain the reservoir in about an hour. Under an acceleration of 80 g, the microneedles lost 1320 nL/min, which would drain the reservoir after about 3 minutes of falling.
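Those drain times follow from simple division of the 4.2 µL reservoir by the measured loss rates, which is a quick way to sanity-check the figures:

```python
# Sanity check on the reported reservoir drain times:
# 4.2 µL total volume against the two measured loss rates.

reservoir_nl = 4200.0     # total reservoir volume (nL)

evaporation = 71.0        # nL/min lost through the needles at rest
acceleration = 1320.0     # nL/min lost under 80 g acceleration

print(reservoir_nl / evaporation)    # ≈ 59 min -> about an hour
print(reservoir_nl / acceleration)   # ≈ 3.2 min -> about 3 minutes
```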

I wonder how this microsyringe would actually be used. 4.2 µL is small to begin with, but each individual reservoir is only 0.3 µL, so it'll be difficult to load medicine and collect samples. I'm really interested in the group's plan to use it in an artificial pancreas. A microsyringe by itself is good, but I love the idea of an artificial pancreas. I suppose one of the most important aspects of the microsyringe is its reservoir size. What's a relevant medicine payload? What sample size is needed for analysis? This is promising research, and I look forward to reading about the group's pancreas.

On Microfluidic Future I like reviewing advancements in therapeutic or diagnostic devices because I’m really drawn to those areas of research. Every once in a while, however, I take an interest in research for the sake of knowledge, like the Root Chip. I recently came across an article from Dino Di Carlo of UCLA that describes a microfluidic device used to study cancer cells. The article, “Increased Asymmetric and Multi-Daughter Cell Division in Mechanically Confined Microenvironments,” appeared in PLoS ONE, which is an open access journal (very cool!).

Specifically, Di Carlo’s device is used to study the effects of the mechanical environment on cancer cells during division. It’s commonly known that the course of the cell cycle is affected by soluble factors, but the cell’s mechanical interaction with the environment also affects its morphology, differentiation and cell cycle. Changes in confinement and substrate elasticity were tested using the HeLa cervical cancer cell line in this study. The authors looked for several deviations from standard cell division including delayed mitosis, multi-daughter mitosis events, unevenly sized daughter cells and induction of cell death.

Device Design

A. The device features posts to confine cells with continuous perfusion. B. The device is deformable, bringing the two layers into contact. C. The device has variable elasticity and confinement height.

Di Carlo’s device has a bottom layer and an elevated PDMS layer supported by posts of varying height for control over cell confinement. In the relaxed state, there is a 15 µm clearance between the posts and the bottom layer. When pressure is applied to the device, the two layers meet, confining the cells between posts and reducing the clearance to 3 µm or 7 µm. Additionally, the top layer has an elasticity of 130 kPa or 1 MPa. The device is designed to allow media to flow throughout all the confining chambers, eliminating the possibility of cell death due to a toxic environment.

Results

Induced multi-daughter division resulted in 3, 4 and even 5 daughters

In an unconfined environment, a HeLa cell normally balls up into a sphere during mitosis, which takes no longer than 140 minutes. But with increased confinement and stiffness, the authors witnessed multi-daughter mitosis (one cell dividing into three or four daughter cells), unbalanced daughter sizes, prolonged mitosis and cell death. Control cells resulting from division were often spheres 20 µm in diameter, while the confined cells were highly asymmetric, with diameters of 40-80 µm. Increases in stiffness and confinement generally increased the odds of abnormal cell division, with some clear patterns observed. Under low compression (7 µm) and low stiffness (130 kPa), 90% of multi-daughter divisions resulted in three cells; when a stiffness of 1 MPa was applied at the same low compression, 85% of multi-daughter divisions resulted in four cells. The authors believe the cells aren’t able to effectively deform the stiffer substrate and are limited in how spherical they can become before mitosis. The confined shape may also affect how chromosomes line up at the metaphase plate(s), biasing divisions toward multiple daughters. The multi-daughter divisions can produce viable cells, which can subsequently undergo their own multi-daughter divisions, and can also generate daughters that re-fuse after division.

The authors also hypothesized that when the cells are forced to divide in a discoid shape, signaling and regulation may be affected. Diffusion or active transport of signals would take much longer to traverse the large cross-section of the cell, and the force of cytoskeletal elements might be diminished across the same large distance.

Discussion

This work has produced findings that may not be totally surprising, but are definitely peculiar. A follow-up to the findings generated here would surely add to the growing knowledge base of cancer cell behavior. In its current form, this information wouldn’t lead to any new treatments, although under high confinement 70% of cell cycles resulted in cell death, which holds potential for therapeutic applications. Studying diseases helps us learn more about healthy cells because we can see what goes wrong when specific elements fail, but I’m also interested in seeing how healthy cells react under the same mechanical conditions. The microfluidic device itself also has potential beyond the study of cellular life cycles: one area in particular is investigating the effects of mechanical strain on osteocyte and chondrocyte differentiation.

Cell Sorting

Cells are quite valuable, especially when used for regenerative medicine, diagnostics or research. But harvested cells do not come presorted and need to be separated from a heterogeneous mixture. There are already numerous methods to sort cells according to biophysical properties such as size, density, morphology, and dielectric or magnetic susceptibility. Cell sorting based on labels can have higher specificity, but it introduces extra steps to add and remove the labels, which can affect the phenotype of the cell. Rohit Karnik of MIT has developed a cell sorting method based on cell rolling. The continuous, label-free process is described in “Cell sorting by deterministic cell rolling” in Lab on a Chip.

Cell Rolling

Target cells initiate cell rolling and enter the space between the ridges, which leads them to the gutter side. Non-target cells do not adhere or roll along the surface and maintain their original trajectories.

Cell rolling is a phenomenon in which a cell constantly forms and releases adhesive bonds with a surface under fluid flow. The continuous creation and release of bonds induces rolling and plays an integral role in the movement of lymphocytes, platelets, stem cells and metastatic cancer cells. To induce cell rolling, a surface needs to be coated with a ligand specific to the target cell type. The rolling target cells need to be focused, so slanted ridges are added to the bottom of the channel. When a target cell comes into contact with the surface of a coated ridge, it begins to roll along it and eventually turns the corner into the space between ridges. By following the direction of the ridges, the target cells are focused on one side of the channel known as the gutter. The non-target cells should not adhere to and roll along the ridges, allowing them to be spatially differentiated from the target cells. The ridges actually serve an additional purpose: acting as mixers, they introduce circulation to the axial flow. This flow would normally be laminar, which would prevent the majority of cells from ever coming into contact with the surface of the channel and rolling.

Cell Rolling Testing & Optimization

HL60 (target) and K562 (non-target) cells enter through the same inlet. Due to the cell rolling sorting, the cells exit the two outlets highly organized.

Karnik validated this cell sorting method by processing the leukemia cell lines HL60 and K562. The surfaces were coated with P-selectin, a known ligand for the target HL60 cells. HL60 and K562 were injected through a single inlet at a ratio of 2:3, respectively. Outlet A held 95.0 ± 2.8% HL60 cells, and outlet B held 94.3 ± 0.9% K562 cells. The sorting was extremely successful, and 87.2 ± 3.7% of the HL60 cells and 76.7 ± 14.2% of the K562 cells were recovered at the end of the process. Cell loss was most likely due to settling in the syringe at the inlet and to cells remaining in the channel and dead volumes at the end of the process. Karnik also investigated the effect of ligand concentration on cell sorting. Higher concentrations generated stronger cell-surface adhesion, but at the expense of cell rolling, so an operating point had to be determined: at a flow rate of 70 µL/min, the channel was incubated with a P-selectin concentration of 1.5 µg/mL.

Discussion

I really like this method of cell sorting because it is both passive and label-free. Although an extra section of channel with coated ridges must be added, no other major components are necessary. The method does not need any additional equipment or chambers, making it simple to integrate into a project. With its small footprint, it can also be highly parallelized, negating the need to operate at high flow rates, which could hinder cell rolling. It could either function as a standalone sorting device or be integrated into a device processing a mixture of cells. As with other cell sorting procedures, widespread use of this particular method is limited by the availability of information: only cell lines whose rolling behavior we’ve characterized can be sorted this way.

Microfluidic devices process small volumes of liquid and are comprised of microscale components, but the devices themselves are often not small at all. These labs-on-chips are often confined to lives in labs instead of the remote areas that could really benefit from their use. The limitation comes in the form of support equipment used to process or analyze assays: equipment that is expensive, bulky, energy consuming and/or requires trained professional operators. Syringe pumps, for example, are often used in labs to drive the liquids in assays at specific flow rates and to ensure that the right volumes are used. The need for complicated external flow equipment was recently addressed by a group from Peking University. The group’s paper, “Squeeze-chip: a finger-controlled microfluidic flow network device and its application to biochemical assays,” was recently featured on the cover of Lab on a Chip.

Squeeze-chip Design

The squeeze-chip is comprised of two check valves on either side of a reservoir. Squeezing the reservoir pumps fluid through one check valve. The reservoir is refilled after release through the second check valve.

The ‘squeeze-chip’ is based on a system of check valves and finger-operated pumps. Check valves allow fluid to flow in only one direction and, in this case, are fabricated from PDMS and integrated into a microfluidic card. The pump is a fluid reservoir that can be depressed by a finger. Squeezing the reservoir evacuates fluid through one check valve, oriented to pass fluid away from the reservoir; releasing the reservoir draws fluid in through a second check valve, oriented in the opposite direction so that it can only feed liquid into the reservoir. Alternatively, specially designed squeeze-chips can handle two immiscible fluids so that with each pump, a small plug of one fluid is inserted into the system, sandwiched by the other fluid. The displaced volume is not always equal from squeeze to squeeze, but the reservoirs feed into metering channels that accept only a specific volume, adding some control to the squeeze-chip. The authors have successfully delivered volumes ranging from nanoliters to microliters. This is the basic squeeze-chip unit, which can be combined with other units to create a more complex, sophisticated system.

Squeeze-chip Validation

An alternate design of the squeeze-chip sandwiches one fluid in another when the two are immiscible.

The researchers demonstrated the squeeze-chip’s abilities by running colorimetric assays to measure glucose and uric acid at clinically relevant concentrations of 0-10 mM and 0-15 mM, respectively. These assays comprise a system of squeeze-chips that mix solutions and deliver them to a 4 mm thick readout chamber, allowing the user to see the solution’s color with the naked eye. The researchers were able to detect glucose as low as 1 mM and uric acid as low as 100 µM, with initial sample consumption of less than 5 µL per test. The limits of detection can be lowered by increasing the readout chamber thickness, which makes the color darker.
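The claim that a thicker readout chamber lowers the limit of detection follows from the Beer-Lambert law, A = εlc: absorbance grows linearly with path length, so a deeper chamber gives a darker color at the same concentration. A minimal sketch, where the molar absorptivity is an illustrative assumption rather than a value from the paper:

```python
# Beer-Lambert law: absorbance A = epsilon * l * c.
# The molar absorptivity below is an illustrative assumption.

def absorbance(epsilon_m_cm, path_cm, conc_m):
    return epsilon_m_cm * path_cm * conc_m

EPSILON = 1.0e4     # assumed molar absorptivity (M^-1 cm^-1)
CONC = 1.0e-3       # 1 mM analyte, the reported glucose detection limit

a_4mm = absorbance(EPSILON, 0.4, CONC)   # 4 mm readout chamber
a_8mm = absorbance(EPSILON, 0.8, CONC)   # doubled chamber depth

print(a_4mm, a_8mm)  # doubling the path length doubles the absorbance
```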

Discussion

Sample operation of squeeze-chip used in colorimetric assay.

I think the squeeze-chip is a great component for making devices more viable outside of the lab, though it may not be suitable for every card. The metering chambers add some volume control but, again, this may not be enough. More importantly, the volumetric flow rate isn’t controlled, which eliminates the squeeze-chip as an option for applications requiring more stringent regulation. There are several considerations to keep in mind when designing any lab-on-a-chip for use outside the lab. Despite any microscale magic taking place, the end-user and intended environment need elevated priority, meaning these devices need to be relatively cheap, free of any tethers to an advanced lab and operable by people with limited training. The squeeze-chip certainly addresses cost and eliminates the connection to an external syringe pump. It can be operated by hand, or even by an actuated piston if the chip is destined to function in some housing. Usability testing results would be interesting to see as well, including performance variation among users, but it looks like devices using the squeeze-chip could be readily used in areas of need.

Diagnostics in Low-Resource Settings

A lot of the excitement surrounding microfluidics has been about its promising use for diagnosis in low-resource settings. Many infectious diseases present in developing countries are manageable or treatable with available medications, yet they still account for a third of deaths there. In these areas, multiple diseases present similar symptoms, leading to misdiagnosis and thus incorrect treatment. Hundreds of blood-based microfluidic immunoassays are available for diagnostic purposes, but they’re not all created equal: they require varying levels of sample processing or analysis that prohibit their deployment in low-resource settings. Further, diseases with similar symptoms might require different detection techniques, with varying sample volumes, reagents and processing times, making it difficult to detect multiple diseases within the same system. This is the focus of recent work from Paul Yager of the University of Washington. In his Lab on a Chip paper, “Progress toward multiplexed sample-to-result detection in low resource settings using microfluidic immunoassay cards,” he and his colleagues develop a system to detect both typhoid fever and malaria.

Malaria and Typhoid Dual Detection

The proposed card is compact and intended to integrate with the DxBox. All sample processing occurs on-card, including IgG filtration. Also depicted is the porous nitrocellulose membrane of the FMIA, which provides high assay surface area.

The developed system is intended to integrate with the DxBox, an ongoing project focused on a point-of-care diagnostic device. As I mentioned before, different diseases might require different means of detection. In this case, the researchers decided to detect antigens generated by malaria parasites and IgM antibodies generated by the host in response to the bacterium responsible for typhoid (Salmonella Typhi). The microfluidic card is based on a flow-through membrane immunoassay (FMIA) composed primarily of nitrocellulose, instead of traditional microfluidic channels. Nitrocellulose is essentially paper and provides a lot of surface area, enabling shorter assay times. Enzyme-linked immunosorbent assays (ELISA) are the standard lab assays being replicated here with the FMIA. However, ELISA can be slow (more than 3 hours) due to diffusion between the bulk fluid and the capture surface, while the FMIA can perform the same task in half an hour thanks to its high surface area.
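The speed difference comes down to the diffusion timescale, roughly t ≈ L²/(2D): in a plate well the analyte must diffuse across a millimeter-scale liquid layer to reach the capture surface, while the porous membrane puts binding sites within micrometers everywhere. A rough sketch with typical assumed values (the diffusivity and length scales are illustrative, not from the paper):

```python
# Diffusion timescale t ~ L^2 / (2 D) as a rough explanation for why
# plate ELISA is slow while a porous membrane assay is fast.
# D and the length scales are typical assumed values, for illustration.

D = 1e-10   # diffusivity of an antibody-sized protein in water (m^2/s), typical order

def diffusion_time_min(length_m, diffusivity=D):
    return length_m**2 / (2 * diffusivity) / 60.0

print(diffusion_time_min(1e-3))  # ~1 mm bulk layer in a well: tens of minutes
print(diffusion_time_min(1e-5))  # ~10 µm pore scale in a membrane: well under a second
```

The quadratic dependence on distance is the key point: shrinking the diffusion length a hundredfold shortens the time ten-thousandfold.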

Immunoassay Process

The system processes blood from the same sample in different, parallel steps to test for malaria and typhoid.

The detection of both analytes runs in parallel, starting from the same unfiltered blood. The card extracts the plasma with a filter, eliminating whole blood cells, and this is where the assays for malaria and typhoid diverge. The typhoid assay must filter out any IgG antibodies (which would cause false positives when testing for IgM) and dilute the sample further, resulting in a four-fold increase in sample volume relative to the malaria segment. Each analyte is then captured by immobilized reagents and labeled with gold nanoparticles conjugated to antibodies. The entire process is driven by pneumatic pressure and valves. Pneumatics is cheaper than the alternatives and doesn’t dilute the sample with an additional liquid, but it comes at the cost of introducing air bubbles. Air vents were incorporated to eliminate the bubbles, but they were not totally eradicated and sometimes still obstructed image analysis. Within the DxBox, analysis is intended to be carried out by a webcam; however, the current design of the system created nonuniform lighting (which can be rectified), so a flatbed scanner was used instead.

Results

This microfluidic card was tested on blood samples with typhoid or malaria. Unfortunately, the researchers did not test a large enough sample set to evaluate clinical utility or determine a limit of detection for the card. Currently, lab-based ELISA has a limit of detection near 4 ng/mL, which is clinically relevant. The researchers also ran each sample on ELISA and a bench-top FMIA in addition to the on-card FMIA. Comparing the quantified signal of the on-card FMIA to ELISA gave an R2 value of 0.73, and on-card FMIA vs. bench-top FMIA gave an R2 value of 0.92. These are fine results that demonstrate how closely the on-card FMIA follows the bench-top methods, but they would mean a whole lot more alongside a limit of detection.
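
For readers unfamiliar with the metric: the R2 values above are coefficients of determination from fitting one method’s signal against the other’s. A minimal sketch of that computation, using made-up signal values purely for illustration (only the method mirrors the paper, not the numbers):

```python
def r_squared(x, y):
    """R^2 of a least-squares linear fit of y against x."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

elisa   = [0.12, 0.20, 0.45, 0.50, 0.85]  # hypothetical ELISA signals
on_card = [0.10, 0.25, 0.40, 0.55, 0.80]  # hypothetical on-card FMIA signals
print(round(r_squared(elisa, on_card), 2))
```

An R2 of 1.0 would mean the on-card signal tracks the reference method perfectly; 0.92 against the bench-top FMIA says the card reproduces its own chemistry well, while 0.73 against ELISA reflects the larger gap between the two assay formats.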

Discussion

The results of this card design seem promising but will mean a lot more with further testing. The pneumatic actuation was a major hindrance to the project’s success: while the actuators could operate at different pressures, they could not actually control the liquid velocity. The pneumatics also introduced bubbles into the card, which affected not only the assay process but also the final image to be analyzed. While only two diseases were showcased here, the authors indicated that work is already underway on a more complete fever-symptom panel. They also acknowledged that this format could be applied to other panels aimed at diarrheal diseases and sexually transmitted diseases. This format really could be adapted to a variety of diseases, with the disease diagnosis as the limiting factor for card design.

Microfluidic Future is by no means an accurate representation of all the current, ongoing research in microfluidics. Nevertheless, the fact that you won’t find any articles here about assays relying on a biophysical marker isn’t too far off from the reality in microfluidics. I suppose this is partly due to the incredible amount of previous work on molecular markers from before high-resolution control had been realized. Regardless, I was happy to come across an article about a microfluidic device that indicates sickle cell disease risk using the disease’s biophysical characteristics. The work, “A Biophysical Indicator of Vaso-occlusive Risk in Sickle Cell Disease,” appeared in Science Translational Medicine this past February and is the result of ongoing sickle cell research by MIT and Harvard Medical School. My friend originally forwarded me an article about it on Medgadget, which you should also check out, along with the podcast it mentions.

Sickle Cell Disease

Red blood cells with abnormal hemoglobin (HbS) can deform into a sickle shape and occlude blood vessels (image source)

Sickle cell disease affects more than 13 million people worldwide and is responsible for $1.1 billion in costs per year in the United States. A mutation in the hemoglobin molecule causes red blood cells to change shape and stiffen when releasing oxygen. When this shape change occurs in many red blood cells at once, it can occlude a blood vessel, resulting in a crisis. While this fundamental component of the disease is known, many factors and processes surrounding these events are still unknown, so clinicians cannot discern the severity of a particular patient’s sickle cell disease beyond the fact that they have it. The ability to predict the severity of sickle cell disease would both aid the development of new therapies and guide clinical intervention.

Characterizing Disease Severity

The authors of this paper have previously demonstrated that they could simulate vaso-occlusive crisis events by altering the oxygen concentration of sickle cell disease blood flowing through a capillary-sized microchannel. This paper takes it a step further and quantifies how the blood conductance, defined as velocity per unit pressure drop, changes during these events, using that change as a measure of disease severity. When the authors reduced the oxygen content, blood velocity decreased despite the constant applied pressure. The authors hypothesized that conductance would change faster for patients with severe sickle cell disease than for patients with a more benign form of the disease. You can see that the conductance of a patient with benign sickle cell disease (A) and that of a patient with severe sickle cell disease (B) are drastically different.
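
The readout itself is simple enough to sketch. Below is my own illustration of the idea (not the authors’ code, and the velocities and pressures are hypothetical): conductance is velocity divided by pressure drop, and the severity metric is how fast that conductance falls after the oxygen is reduced.

```python
def conductance(velocity_um_s, pressure_drop_pa):
    """Blood conductance as defined in the paper: velocity per unit pressure drop."""
    return velocity_um_s / pressure_drop_pa

def rate_of_change(times_s, conductances):
    """Average slope of conductance over the deoxygenation window."""
    return (conductances[-1] - conductances[0]) / (times_s[-1] - times_s[0])

# Constant applied pressure; velocity falls as cells sickle and stiffen.
pressure_pa = 1000.0
times = [0, 30, 60, 90]                  # seconds after O2 reduction (hypothetical)
benign_v = [500.0, 480.0, 465.0, 450.0]  # um/s, mild slowdown (hypothetical)
severe_v = [500.0, 350.0, 220.0, 120.0]  # um/s, rapid slowdown (hypothetical)

benign_rate = rate_of_change(times, [conductance(v, pressure_pa) for v in benign_v])
severe_rate = rate_of_change(times, [conductance(v, pressure_pa) for v in severe_v])
print(severe_rate / benign_rate)  # severe sample loses conductance much faster
```

Both rates are negative; the ratio between them is what separates patient B from patient A in the figure.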

Blood from a patient with benign sickle cell disease (A) and severe sickle cell disease (B) was subjected to changes in oxygen concentration, indicated by the top panels. This resulted in drastically different changes in conductance, which could distinguish the two types of patients.

Device Value

As I mentioned, this device has potential use in developing therapies for sickle cell disease. The authors demonstrated this with 5-hydroxymethylfurfural (5HMF), which is known to increase hemoglobin’s oxygen affinity. Hemoglobin with a higher oxygen affinity retains its ‘safe’ structure longer because it releases its oxygen less readily. As expected, the molecule caused a fivefold slower reduction in conductance compared to an untreated, severe blood sample. While this device’s strength lies in its focus on biophysical markers, it could also be used to further understand the process of vaso-occlusive events and to guide both the handling of patients and the discovery of effective therapies.

Praise aside, I think this paper is rather solid, and I’m not sure what else I would have liked to see addressed. Don’t expect to see this in your local pharmacy any time soon, though: it can’t predict the occurrence of crises, but would instead indicate what treatment a patient needs.

It’s not hard to see that a lot here at Microfluidic Future focuses on the medical applications of microfluidics, but that doesn’t mean I’m not interested in other ways the technology can be used. I love to see novel applications of microfluidics because progress for anyone is progress for everyone. That brings me to today’s post on the RootChip. If the name isn’t a total giveaway, I recently came across an article about a microfluidic chip used to study the roots of plants. In the article, “The RootChip: An Integrated Microfluidic Chip for Plant Science,” by Stephen Quake and other researchers from Stanford University, a device is developed to study the roots of Arabidopsis thaliana.

Studying Arabidopsis thaliana Roots

The current RootChip handles 8 seedlings with independent control. After germination, tips containing the seedlings are inserted, and the roots can readily be observed and analyzed in the flow chambers.

Arabidopsis thaliana has been extensively studied and is a model organism akin to Drosophila or the zebrafish. Plant roots are not easy to study because they are sensitive to dehydration and to the physical damage that mounting can cause; ideally, they would be observed as close to in vivo as possible. The authors set out to study Arabidopsis roots in parallel perfusion chambers, allowing many seedlings to be studied at once. To prove their concept, they used roots “expressing a genetically encoded fluorescence sensor for Glc and Gal.” This allowed them to non-invasively detect glucose (Glc) and galactose (Gal) metabolite levels in real time.

Before introduction to the RootChip, Arabidopsis seeds are germinated in micropipette tips filled with agar medium. Germinating prior to attachment to the RootChip allows the roots to be screened for adequate growth and desired properties. After 5 days of germination, the tips are mounted on the RootChip, and the roots continue to grow into the chip’s observation chambers, which are filled with liquid medium. The microfluidic channel leading from the inserted tip transitions from vertical to horizontal, and the root typically aligns with the direction of flow. The entire RootChip is encased in a chip carrier that includes water reservoirs to maintain proper humidity throughout the study.

Each seedling in the RootChip feeds into its own flow chamber. The entire RootChip can be mounted on a microscope and is enclosed to maintain humidity.

The roots were subjected to pulses of medium spiked with either Glc or Gal. Genetically encoded fluorescence sensors allowed for cytosolic Glc and Gal measurements, and the authors recorded reproducible elevations in the roots’ corresponding sugar concentrations. Gal is a well-known root growth inhibitor, and the authors examined its effect by perfusing the roots with Gal for an extended period. They observed darkening of the tissue and a loss of the normal cytosolic signal distribution, indicating cell death and tissue alteration from the long Gal exposure. The authors also noticed swelling epidermal cells, which would indicate a defect in the integrity of the cell wall.

The authors demonstrated the RootChip’s use to observe developing roots in parallel. Their method allows for different properties of the roots to be studied at once with live imaging. Further, the chip’s total control over the perfusion environment allows for complicated experiments. The authors also claim:

“The RootChip will greatly facilitate the ability to investigate nutrient uptake in different root zones, cell type-dependent metabolite flux, and the response of individual cells (such as root hair cells) to different environmental stimuli.”

Value of RootChip Design

I don’t have any experience in plant study, but this device seems very promising, and I’m excited to see what it leads to in both plant and biomedical microfluidic research. While the use of Gal and Glc may be exciting to some, it is really incidental here and could be replaced by any other fluorescent marker. Let’s look instead at the capabilities the RootChip demonstrated. First, the chip allowed the researchers to process 8 seedlings in parallel, which is always a timesaver! The authors noted that the current configuration handles 8 seedlings, but design modifications could enable more than 30. The design also saves time because the seeds begin germination externally before introduction to the chip. This allows researchers to select ‘high-achieving’ seedlings, so no chamber is wasted on a dud.

Moving on to the experiment-enabling design of the chip, we can see that each perfusion chamber is independent, giving the user total control over several simultaneous experiments. This by itself is fine: you could use the RootChip to control your experiments and then use some other means to analyze the results. But with bright-field microscopy, the roots can be observed during and after the experiment in their ‘natural’ environment. I said before that I’m not an expert on plants, so I really don’t know the extent to which roots can be at least partially transparent. The root’s transparency under bright-field microscopy allowed the authors to track fluorescently labeled metabolite activity, the swelling of the cells and the darkening of the dying root. What else could you observe about a root in the RootChip? And what else besides a root could you observe in the perfusion chamber? I guess you could look at some parasites like a tapeworm and watch them grow and respond to environmental changes. But would you really want to? I have a 100-yard rule about how close I let tapeworms get to my insides.

What’s So Great About Oral Diagnostics?

Well, a lot of things, but let’s start with the basics. To use a microfluidic device, you need some type of fluid, right? Sure, if you had some powder or fine material you could suspend it in a fluid, but for simplicity’s sake, let’s look at fluids as our test material. If you wanted to run a health-related diagnostic, you have only so many bodily fluids available before you have to get creative and very invasive:

Blood

Urine

Saliva

Sweat

Mucus

Tears

Out of all those fluids, blood (or serum) has been the preferred one. It is extremely rich in information and can expose a lot about a systemic condition or report on ailments located deep within the body. You have to filter it if you don’t want blood cells in your sample, but it’s just a needle prick away. Other ‘fluids’ like mucus or saliva require a bit more work because they are so thick and viscous, plus you need to filter out the debris floating around in the mouth. If blood is so great, why do we need anything else? Although blood is a great global fluid, you can sometimes get more detailed information by going closer to the source of the problem and choosing a more local fluid. But perhaps the greatest reason is that drawing blood is still invasive. In the ideal microfluidic world of the future, we would need very small sample sizes and pinpricks wouldn’t be that bad. For now, spitting into a cup is still easier and more enjoyable than getting stuck. Plus, exposed blood is always a health concern and should definitely be avoided if possible.

Okay, so you can see why you might want to investigate other fluids, but why saliva? As I said before, some analytes from the blood are present in saliva at lower concentrations, like C-reactive protein (CRP). In fact, tests have been proposed to monitor growth factors, drugs of abuse, steroids and infectious diseases using oral fluid. CRP is a possible indicator of inflammation and is released in events like heart attacks. Detecting this protein along with others can be a good indication of an acute myocardial infarction, though you couldn’t rely on it alone: CRP may be present with other types of inflammation, especially a local one occurring in the mouth.

Tackling Periodontal Disease Early

But let’s not focus solely on detecting global diseases and problems; it’s still valuable to detect diseases of the mouth. Periodontal disease is a common oral infectious disease and a leading cause of tooth loss in adults. Current clinical practice cannot detect the onset of the inflammation leading to periodontal disease, nor identify the patients at greatest risk of disease progression. A point-of-care device detecting this onset would permit earlier diagnosis and could be used outside the dentist’s office. Testing for periodontal disease could become much easier and more widespread, since you wouldn’t need a highly trained professional to run the test, and it could be done in health care clinics or at home. Many underserved communities cannot afford to visit the dentist, but cheap, regular screening would let them manage the disease before it gets out of hand. Additionally, preventing oral disease can go a long way for the rest of the body, as periodontal disease has been connected to cardiovascular disease, stroke and osteoporosis.

Engaging Saliva in Pharmacogenomics

Finally, oral fluids can play a part in pharmacogenomics, the marriage of genetics and pharmacology. While we may think we understand the processes of diseases, both diseases and their treatments can vary greatly from person to person depending on genetics. In an ideal world, every single drug and treatment we receive would be tailored specifically to our DNA. A lot of work is still needed to find out which genes most affect both disease and treatment, but to learn more, and to eventually provide tailored treatments, we need access to our own DNA. Oral fluid can be a great source of that DNA. The DNA we use can come from anywhere, so why not easily dislodge some cells in the mouth instead of pricking ourselves with needles?

There are many bodily fluids to choose from, but saliva has some key advantages. It can test for important local and global diseases alike. It is certainly less invasive than a blood draw and does not require the same privacy (and planning) as urine collection. But work is still needed to determine the ideal biomarkers in saliva and to amplify their signals. There’s some good advice in “Translational and Clinical Applications of Salivary Diagnostics” that applies not only to saliva POC devices but to all their POC device brethren:

“While their analysis core is substantially smaller than that of benchtop alternatives, the network of macroscopic laboratory-based infrastructure required for sample processing, analyte detection, data processing, and reagent handling implies that these platforms are best described as ‘chips-in-a-lab’ rather than true ‘labs-on-a-chip’.”

No matter what fluid we’re using or what disease we’re screening for, we need to design these devices with the clear intention that they be used outside our labs, in the wild.