Field Precision software tips: Effective finite-element modeling of electromagnetic fields

Linear quadrupoles are widely used for ion mass spectrometry. They can separate ions by the ratio of mass number to charge number (M/Z) without a magnetic field, leading to compact, light and inexpensive systems. An assembly consists of a source of low-energy ions that enter an extended electrostatic quadrupole. The voltage applied to the lens has both DC and AC components. The orbits of most ions are unstable. Depending on the choice of voltages and RF frequency, only ions within a small range of M/Z have stable orbits and pass to a downstream detector. This report demonstrates the application of the three-dimensional OmniTrak code to characterize quadrupole mass analyzers.

Assume the quadrupole has ideal hyperbolic electrodes with an electrode-tip radius r0 (i.e., the distance from the axis to the closest point on an electrode). The beam propagates in the z direction. Figure 1 shows a cross section of the geometry in the x-y plane. The voltage applied to the top and bottom (y) electrodes has the form of Eq. 1, where U is the DC component and V is the magnitude of the RF voltage with angular frequency ω. A voltage of opposite polarity is applied to the x electrodes.

The electrostatic potential in the gap has the form of Eq. 2. The y component of electric field is given by Eq. 3. Equation 4 describes ion motion in the y direction. With the introduction of the dimensionless variables of Eqs. 5, 6 and 7, the equation of motion takes the form of Eq. 8. Motion in the x direction is described by Eq. 9.

Equations 8 and 9 must be solved numerically. There are regions in (a,q) space where ion motion in x or y may be stable or unstable. Here, the term stable implies that the trajectories are bounded oscillations. In unstable regions, the displacement increases exponentially so that in a short time ions strike the electrodes. In particular, ions have stable trajectories in both the x and y directions within the shaded region shown in Figure 2. (Note that there is also a symmetric stability region below the q axis. Changing a to -a simply exchanges the roles of x and y).

Figure 2. Region of a-q space with stable motion in both x and y.
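Equations 8 and 9 are Mathieu equations, and the stability behavior described above can be explored directly by numerical integration. The sketch below (not part of the original report) assumes the standard Mathieu form u'' + (a - 2q cos 2ξ)u = 0 and classifies an orbit by the growth of its amplitude. The two sample points are chosen for illustration: (a = 0, q = 0.3) lies inside the first stability region, while (a = 0, q = 1.5) lies well outside it.

```python
import math

def mathieu_amplitude(a, q, xi_max=100.0, dxi=0.005):
    """Integrate u'' + (a - 2*q*cos(2*xi))*u = 0 with RK4; return max |u|."""
    def deriv(xi, u, v):
        # v = du/dxi; dv/dxi from the Mathieu equation
        return v, -(a - 2.0 * q * math.cos(2.0 * xi)) * u

    u, v, xi = 1.0, 0.0, 0.0
    u_max = abs(u)
    for _ in range(int(xi_max / dxi)):
        k1u, k1v = deriv(xi, u, v)
        k2u, k2v = deriv(xi + dxi / 2, u + k1u * dxi / 2, v + k1v * dxi / 2)
        k3u, k3v = deriv(xi + dxi / 2, u + k2u * dxi / 2, v + k2v * dxi / 2)
        k4u, k4v = deriv(xi + dxi, u + k3u * dxi, v + k3v * dxi)
        u += dxi * (k1u + 2 * k2u + 2 * k3u + k4u) / 6
        v += dxi * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        xi += dxi
        u_max = max(u_max, abs(u))
        if u_max > 1.0e6:      # clearly unstable; no need to continue
            break
    return u_max

print(mathieu_amplitude(0.0, 0.3))   # bounded oscillation (stable)
print(mathieu_amplitude(0.0, 1.5))   # exponential growth (unstable)
```

The bounded case corresponds to an ion that reaches the detector; the growing case corresponds to an ion that quickly strikes an electrode.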

For reference, note that the ratio of the parameters is proportional to the DC voltage component divided by the AC component (Eq. 10). A fixed ratio corresponds to a straight line in (q,a) space like the dashed line in Fig. 2. Motion along the line starting at the origin is equivalent to raising the magnitudes of U and V while keeping a constant ratio. For a section of the scan, the curve in Fig. 2 passes through the stability region. The tip of the stability region has the coordinates q = 0.706 and a = 0.237.

The operation of the quadrupole as a mass filter is clear if we plot regions of stability in terms of voltage values U and V rather than the dimensionless parameters a and q. Figure 3 shows stability regions for two values of M/Z. The M/Z value for the green shaded region is half that of the tan region. The working line follows a fixed voltage ratio, U/V < (0.237)/(2 × 0.706). Note that as the magnitude of the applied voltages increases, the working line first passes through the green stability zone and then through the tan zone. A plot of the detector response versus a scan of voltage magnitude will show two peaks corresponding to the two values of (M/Z). Higher voltages correspond to higher mass-to-charge ratios. Absolute values of (M/Z) may be inferred from known values of the voltage, quadrupole geometry and RF frequency. Note that the ratio U/V = 0.168 gives ideal resolution but no throughput. Lowering the DC voltage allows more particles to pass to the detector at the expense of reduced mass resolution. With no DC offset (U = 0), all ions pass through the quadrupole.

Figure 3. Stability regions and a working line for ion transport as a function of the applied voltages U and V. The shaded regions show the stability zone for two ion species. The green region has a mass to charge ratio (M/Z) equal to half that of the tan region.

We now have the theoretical resources to design a practical system. Take the quadrupole length as L = 15.0 cm. Fabricating hyperbolic electrodes is expensive, so we’ll use stock electrodes with a circular cross section to approximate the field. The choice is acceptable as long as the ions are confined close to the axis. The electrode diameter is d = 1.0 cm. The electrodes are displaced 0.9 cm from the propagation axis, giving the minimum distance from the axis to an electrode of rc = 0.4 cm. We can identify two questions that can be answered by a 3D code:

How does the choice of rc versus d affect the linearity of electric fields near the axis?

What is the value of r0 for equivalent hyperbolic electrodes? In other words, if we used hyperbolic instead of circular electrodes with a given voltage, what choice of r0 would give the same electric field variation near the axis?

We’ll concentrate on the second issue, calculating the electric field with HiPhi. The quadrupole assembly is centered in a solution volume that covers the range -3.0 cm ≤ x,y ≤ 3.0 cm and 0.0 cm ≤ z ≤ 20.0 cm. The wall of the solution volume is grounded. The element size in the quadrupole region is 0.05 cm. Figure 4 shows the resulting mesh.

Figure 4. Conformal representation of the quadrupole electrodes.

I used normalized voltages of +1.0 V on the y electrodes and -1.0 V on the x electrodes in the electrostatic solution. Figure 5 shows calculated values of Ex in a scan through the axis along x at the quadrupole center. Over the range -0.2 cm ≤ x ≤ 0.2 cm, the field varied linearly with x to within 0.01%. The calculated dependence is Ex(x) = (1.255 × 10^5)x V/m. A comparison with Eq. 3 for hyperbolic electrodes implies the relationship of Eq. 11. The equivalent hyperbolic pole-tip distance is 0.3993 cm, quite close to the physical distance rc = 0.4000 cm. My initial guess of rc = 0.4d turned out to give excellent results.

The electrostatic solution was used as a basis for OmniTrak calculations of ion trajectories. I concentrated on singly-charged carbon ions (M = 12, Z = 1) with an initial kinetic energy of 40 eV. The drift velocity was 2.5 × 10^4 m/s, so the transit time through the spectrometer was about 6 μs. There should be several RF periods over the transit time to ensure that ions experience an average focusing force. I picked f = 2.5 MHz, corresponding to ω = 1.571 × 10^7 1/s. A time-step choice of Δt = 10.0 ns gave an acceptable value of 40 integration steps per RF period. I used the PList command to initiate ion trajectories near the z = 0.0 cm boundary with velocity directed along z. To sample positions in x-y space, I used a spreadsheet to create a circular distribution of 36 particles at radius 0.1 cm. (There was no point in representing an on-axis particle. It would traverse the system with no transverse motion in all circumstances.) Equations 6 and 7 can be used to find the U and V values for a working line that passes through the tip of the stability region (Fig. 2). Taking a = 0.237, q = 0.706 and substituting values for the quadrupole leads to Eqs. 12 and 13.
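The quoted numbers are easy to verify. A short check of the drift velocity, transit time, RF-period count and integration steps per period, using only values stated above:

```python
import math

QE = 1.602e-19       # elementary charge (C)
AMU = 1.661e-27      # atomic mass unit (kg)

m = 12.0 * AMU       # singly-charged carbon ion, M = 12
E_kin = 40.0 * QE    # initial kinetic energy, 40 eV in joules
L = 0.15             # quadrupole length (m)
f = 2.5e6            # RF frequency (Hz)
dt = 10.0e-9         # integration time step (s)

v_drift = math.sqrt(2.0 * E_kin / m)   # ~2.5e4 m/s
t_transit = L / v_drift                # ~6 microseconds
rf_periods = t_transit * f             # ~15 RF periods over the transit
steps_per_period = 1.0 / (f * dt)      # 40 integration steps per period

print(v_drift, t_transit, rf_periods, steps_per_period)
```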

The values for carbon ions are U = 14.65 V and V = 87.28 V. In practice, the value of U must be slightly lower to observe transmitted ions.
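Equations 12 and 13 are not reproduced in this extract, but assuming the standard quadrupole relations a = 8ZeU/(mω²r0²) and q = 4ZeV/(mω²r0²) (equivalent to Eqs. 6 and 7), the quoted operating voltages can be reproduced to about 1%:

```python
import math

QE = 1.602e-19        # elementary charge (C)
AMU = 1.661e-27       # atomic mass unit (kg)

m = 12.0 * AMU        # carbon, M = 12
Z = 1                 # charge number
r0 = 0.3993e-2        # equivalent hyperbolic tip radius (m), from the HiPhi fit
omega = 2.0 * math.pi * 2.5e6   # RF angular frequency (1/s)

a_tip, q_tip = 0.237, 0.706     # tip of the stability region (Fig. 2)

# Assumed relations: a = 8ZeU/(m w^2 r0^2), q = 4ZeV/(m w^2 r0^2)
scale = m * omega**2 * r0**2 / (Z * QE)
U = a_tip * scale / 8.0   # ~14.5 V (report quotes 14.65 V)
V = q_tip * scale / 4.0   # ~86.4 V (report quotes 87.28 V)
print(U, V)
```

The small residual difference plausibly comes from rounding in the constants used in the report.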

The full OmniTrak input script is listed at the end of the report. The critical commands to represent the RF quadrupole are:

The first command loads the normalized electric field. The adjustment factor (α = 0.9875) determines the voltage magnitude and hence the position along the working line. We can make small changes to study the transmission width of an ion type, or large changes to study a different type. For example, the adjustment factor for singly-ionized iron is about α = (56/12) = 4.67. The ModFunc command serves two functions:

Add a time variation of fields during the ion transit.

Set the slope of the working line (i.e., determine the acceptance of the system).

With regard to the first function, all values of potential in the electrostatic solution are multiplied by the current value of the algebraic function. This process is valid when the propagation distance of an electromagnetic signal over an RF period is much larger than the system.

My procedure was to set a value of U less than 14.65 V in the ModFunc command and then to do a series of runs with varying α to change the total voltage level. Figure 6 shows the results for U = 12.0 V and 14.0 V. As expected, the peak signal occurred near the tip of the stability region and a lower value of U gave an increased transmission width. Figure 7 shows the effect of small changes of α on ion trajectories.

Figure 6. Particle transmission factor as a function of the relative voltage magnitude and U.

Figure 7. Projected carbon ion trajectories in the z-y plane for V = 87.28 V and U = 14.0 V. A small change in the applied voltage magnitude causes transmission to drop from 100% to 0%. Note that the y scale has been expanded for clarity.

Finally, note that the calculation procedure can be automated for extended data analyses in the background. For example, to make a scan in voltage magnitude, the EField3D command could be changed to

EFIELD3D: QuadAnalyzer.HOU %1

where %1 represents a command line parameter. Multiple OmniTrak calculations could be controlled with a batch file with a form like this:

The programs Geometer and MetaMesh are used to build three-dimensional meshes for our multi-physics solution programs using constructive solid geometry. The user has the option to assemble objects from native models (e.g., spheres, cylinders, boxes, turnings, …) or to import complex shapes from SolidWorks and other 3D CAD programs. This article reviews how to make objects with conformal surfaces from native models. In particular, I want to clarify some points about conformal fitting.

To start, it’s important to understand the concept of the initial foundation mesh versus the final conformal mesh. The foundation mesh is a regular, structured set of box elements that fill the solution volume. Regions in the solution volume are collections of elements that represent physical objects (such as electrodes and dielectrics in electrostatic solutions). In the volume-fitting process, the goal is to make an assignment of region numbers to the foundation-mesh elements for the best representation of the physical objects. The process of surface fitting occurs after the foundation mesh is complete. Here, elements on the object surfaces are shaped to follow the mathematical definitions of native parts or CAD models to produce the final conformal mesh. As a result, elements near the surface become generalized hexahedrons.

The two mesh-generation tools serve the following functions:

1) Geometer is an interactive, graphical environment to create or to edit parts in the foundation mesh. A part is a single native or CAD model. Objects may be composed of multiple parts. The output of Geometer is a text script that lists specifications of the foundation mesh and the included parts. The script (FName.MIN) acts as input to MetaMesh. A script may also be prepared directly with a text editor.

2) MetaMesh analyzes the script to produce a binary output file of the coordinates of mesh nodes and element identities used by the solution programs. If the user makes no changes to the script produced by Geometer, the MetaMesh output is a regular mesh of box elements, identical to the foundation mesh. The user adds or edits commands of the MetaMesh script to control conformal fitting. This process is the focus of this report.

Figure 1. Example geometry.

Figure 1 shows the test geometry, a two-electrode gap. The solution volume covers the space -3.0 ≤ x,y ≤ 3.0, 0.0 ≤ z ≤ 7.0. The upper electrode has a spherical depression. The lower electrode has a spherical extension and a circular extraction port. The system has cylindrical symmetry about the z axis. A quick approach is to represent the upper and lower electrodes as turnings that extend beyond the boundaries of the solution volume. Instead, we will work with basic shapes to illustrate the construction procedure.

If we start a new script in Geometer with the given solution-volume dimensions, the program automatically adds a box region that fills the volume with the name SolutionVolume. Using the command to edit region names, we change the name of the first region to Air and the second and third region names to Upper and Lower. We then add a part with the name UpperElectrode and the region Upper. The part is a Box with dimensions Lx = Ly = 6.0 and Lz = 2.0 with a displacement 6.0 in z. To cut the depression, we add a sphere with name UpperCut, region Air, radius 2.5 and displacement 2.5 in z. The action changes the region identity of overlapped elements and nodes in the upper electrode to Air. Next we add a Box with name LowerElectrode, region Lower, dimensions Lx = Ly = 3.0 and Lz = 2.0 and displacement 1.0 in z to represent the lower electrode. The extension is a Sphere with name LowerExtension, region Lower, radius 2.0 and displacement z = 1.0. Finally, we add a Cylinder with name Aperture, region Air, radius 0.5, and length 7.0 to cut the hole. Figure 2 shows the resulting mesh created by MetaMesh if no changes are made to the script. It shows the familiar stair-step pattern for curved surfaces characteristic of regular meshes.

Figure 2. Mesh with no conformal fitting.

Before proceeding, we should note several significant points.

1) Some of the parts (like the Aperture) extended out of the solution volume. This is not a problem — external sections of parts are ignored.

2) Regions may be created by the actions of multiple parts. For example, the lower electrode was the sum of the LowerElectrode and LowerExtension parts minus the Aperture part.

3) Note how the superposition principle affects the ordering of parts. For example, we enter the UpperElectrode and then cut the spherical depression by overwriting part of the object with Air elements. Only then do we enter the LowerElectrode. If we entered both electrodes followed by the air sphere, both electrodes would have cuts.

4) Finally, there are special considerations for cutting holes when the mesh will be used in a solution program that depends on the node identity.

Figure 3. MetaMesh input script with fitting commands.

Figure 3 shows the input script produced by Geometer. The highlighted lines have been added to implement conformal fitting. The Global section (Lines 1-14) performs three functions: a) set the dimensions of the solution volume, b) specify the sizes of elements in the foundation mesh and c) define names for the regions. Part 1 (15-21) is the solution volume region defined by Geometer with all elements set to region number 1 (Air). Part 2 (22-28) is the upper electrode that overwrites the air elements. Part 3 (29-37) is the spherical concavity in the upper electrode. In Part 3, the command

Surface Region Upper

states that elements of the part in contact with elements of type Upper should be shaped so that they conform to the spherical surface. The command

Coat Upper Upper

states that nodes shared with elements of region Upper should be marked as Upper. This operation ensures that the nodes of all elements in the upper electrode are marked as part of the fixed potential region. (Note: a valid solution is obtained even if we forget to add this command. Solution programs like HiPhi automatically set all nodes connected to fixed-potential electrodes to the fixed-potential region number.)

Moving on, Part 4 (38-44) is the lower electrode while Part 5 (45-52) is the spherical extension. The command

Surface Part UpperCut

states that elements adjacent to those marked as part UpperCut are shaped to lie on the spherical surface. The final part is the cylindrical aperture. The length of 7.0 ensures that the cylinder extends above the spherical extension while the section outside the solution volume is ignored. The commands

Surface Region Lower
Coat Lower Lower

move nodes shared with elements of the electrode to conform to the cylindrical shape and correct the region assignment of these nodes. Figure 4 shows a detail with nodes of the cylinder area.

Figure 4. Detail of the mesh around the Aperture showing node assignments.

To conclude, I’ll clarify why a Surface Part command is used in Part 5 rather than a Surface Region command. Fitting occurs after volume assignment operations (i.e., the foundation mesh has been completed). In this case, there are elements with region identity Air in the aperture as well as in the gap between electrodes. Clearly, we do not want nodes inside the aperture to be shifted to the spherical surface of the extension. Such moves would cause severe distortion of the mesh. Therefore, we use a more specific designation to narrow down the set of nodes that should be shifted.

OmniTrak is our software package for charged-particle guns and beam dynamics. The solution program can read output files from HiPhi and Magnum for 3D electric and magnetic field information. These complex files record values of potential on conformal meshes. Tables are another option for field information. Tables record [Ex,Ey,Ez] or [Bx,By,Bz] on a regular box mesh. Table files are easy to create and facilitate import of field information from other programs. Sometimes, it is useful to use an intermediate table instead of directly reading information from HiPhi and Magnum. In this case, the user uses the MATRIX command in PhiView or MagView to convert the values of potential on the conformal mesh to a table of electric field or magnetic flux density values. This reduces the amount of work performed by OmniTrak and can speed up many calculations.

A user recently suggested another application of tables that is helpful when the fields of a magnetic transport element can be expressed as a linear superposition of the fields produced by different control coils. For example, it may be useful to model the performance of a magnetic scanning system by calculating orbits for several values of current in the X and Y drive coils. A quick way to do the calculation would be to prepare tables for the fields of the X and Y drive coils with a normalized current, and then to load a superposition of the fields into OmniTrak with different scaling factors. The process could be controlled by a batch file for a parameter search.

Recently, we made changes to OmniTrak to enable this procedure. The command to load a table has the form

BTable3D COIL01.MTX 1.5

where COIL01.MTX is the name of the table in the working directory and 1.5 represents an optional scaling factor. The factor multiplies values of [Bx,By,Bz] — the number may be positive or negative. In the past, a second BTable3D command would simply cause the previous table to be overwritten. In the current version of OmniTrak, multiple BTable3D statements result in a superposition of values. There is no limit to the number of commands — here is an example with two:

BTable3D COIL01.MTX 1.5
BTable3D COIL02.MTX 2.0

OmniTrak reads the first table in the normal way with scaling factor 1.5. When the second command is encountered, the program opens the file and checks that the map parameters (IMax, JMax, KMax, XMin, XMax, YMin, …) are identical. If so, OmniTrak reads the second table and adds values of [Bx,By,Bz] to those already in memory with the scaling factor 2.0. In summary:

There is no limit to the number of BTable3D commands.

Multiple maps must use the same mesh.

BTable3D commands may appear in any order in the FIELDS section of the OmniTrak script.

Shifts and rotations are applied after all tables have been superimposed.

The user must ensure that the superposition is physically valid.

The ETable3D command has also been extended.
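The superposition behavior described above is straightforward to emulate outside OmniTrak. Below is a minimal sketch of the logic (the mesh tuple and table layout are illustrative, not the actual .MTX file format):

```python
def add_table(accumulator, acc_params, values, params, scale):
    """Superpose a scaled field table, emulating repeated BTable3D commands.

    values: list of [Bx, By, Bz] triples on a regular box mesh
    params: mesh parameters (IMax, JMax, KMax, XMin, XMax, ...) as a tuple
    """
    if accumulator is None:
        # First table: just apply the scaling factor
        return [[scale * c for c in v] for v in values], params
    if params != acc_params:
        raise ValueError("Multiple maps must use the same mesh")
    # Subsequent tables: add scaled values to those already in memory
    return [[a + scale * c for a, c in zip(acc, v)]
            for acc, v in zip(accumulator, values)], acc_params

# Two normalized coil tables on the same (illustrative) mesh
mesh = (21, 21, 240, -10.0, 10.0, -10.0, 10.0, -12.0, 12.0)
coil1 = [[1.0, 0.0, 0.5] for _ in range(10)]
coil2 = [[2.0, 1.0, 0.0] for _ in range(10)]

# Emulates: BTable3D COIL01.MTX 1.5 followed by BTable3D COIL02.MTX 2.0
total, p = add_table(None, None, coil1, mesh, 1.5)
total, p = add_table(total, p, coil2, mesh, 2.0)
print(total[0])   # [1.5*1 + 2*2, 2*1, 1.5*0.5] = [5.5, 2.0, 0.75]
```

A mismatched mesh tuple raises an error, mirroring the parameter check OmniTrak performs on the second file.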

Figure 1 shows data from a test example: two identical shielded solenoid lenses at different positions along the z axis. The blue squares represent the tabular field of the first coil and the red squares show the field of the second downstream coil. The line is the total field calculated with the OmniTrak BSCAN command resulting from the command set listed above. The following parameters were used in the MagView MATRIX command to create the tables: IMax = 21, XMin = -10.0, XMax = 10.0, JMax = 21, YMin = -10.0, YMax = 10.0, KMax = 240, ZMin = -12.0 and ZMax = 12.0.

In a previous article, I discussed the basis of the H*(10) standard for radiation safety. Here, I’ll continue by demonstrating techniques for shielding calculations. I’ll use a simple published benchmark calculation to compare the results to several other Monte Carlo codes and to illustrate some of the advantages of GamBet. For this, I am grateful to the people at RayXpert who have made reports on a number of benchmark calculations available on the Internet. In particular, I admire their patience and fortitude in learning four other codes besides their own to make the comparisons.

Figure 1. Geometry of the benchmark calculation taken from the RayXpert report.

Figure 1 shows the geometry taken from the report H*(10) dose rate evolution with some different shielding. Targets are located 100 cm from a Co60 point source with activity 10^9 Bq[1]. The dose at a depth of 1.0 cm was calculated for a bare target and with three shield configurations. The setup illustrates two points I made in the last article about H*(10) calculations:

The results are not sensitive to the detector geometry. In this case, a 4.0 cm radius sphere of tissue equivalent material is used rather than the ICRU 15.0 cm radius sphere.

The radiation weighting factor (Wr) is taken as unity for the photons and secondary electrons and there is no organ correction for the phantom (Wt = 1.0). In this case, the biological dose (Sv) equals the physical dose (Gy).

I limited my calculations to the bare target and to one behind a 2 cm lead shield. Here are the results reported by RayXpert. The values are given in units of μGy/hour.

The targets subtend only a small fraction of the solid angle surrounding the source, so it would be foolish to use an isotropic source. Instead, this is an ideal opportunity to use the GamBet Radiation Source Tool, illustrated in Fig. 2. After the user enters values in the dialog fields, the program creates a standard source file. The idea is to fill a circle of a specified radius in the direction of interest with a large, random distribution of particles. The flux assigned to each particle is calculated to represent the specified source activity. In this case, I created a distribution of 200,000 photons of energy 1.17 MeV and an equal number at 1.33 MeV. The distribution filled a circle of radius 4.0 cm at a distance of 92.0 cm from the source, ensuring complete irradiation of the detector. The energy deposition was symmetric about the line from the source through the detector, so I further reduced run time by using the option for cylindrical scoring in GamBet. The air region was represented by the standard Penelope dry air model and the tissue-equivalent target was represented by the custom material definition discussed in the previous article:

Figure 3 shows the dose distribution calculated by GamBet for the bare detector with photons entering from the left[2]. Even for this simple case, there is quite a lot going on. The dose at the surface is lower than the peak dose in the material. This is the result of the buildup of the secondary-electron distribution. The energy-loss rate for electrons is much higher than that of photons of the same energy. The depth to reach the equilibrium dose is about 0.6 cm, hence the choice of the H*(10) measurement depth as 1.0 cm for γ rays in the MeV range. The dose rate decreases gradually through the material because of γ-ray attenuation. Note the difference in the air dose rate at the entrance and exit of the target. The enhanced downstream dose rate results from electrons driven out of the material. The H*(10) dose rate determined by GamBet was 338 μGy/hour, in good absolute agreement with the values reported by RayXpert.
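As a sanity check (my own back-of-envelope estimate, not part of the RayXpert comparison), the unshielded equilibrium dose rate can be approximated from the inverse-square law and the mass energy-absorption coefficient of water, μ_en/ρ ≈ 0.0296 cm²/g near 1.25 MeV:

```python
import math

activity = 1.0e9                # source activity (Bq)
energies_MeV = [1.17, 1.33]     # Co60 photon lines, one of each per decay
r = 1.0                         # source-to-detector distance (m)
mu_en_rho = 0.0296 * 0.1        # mass energy-absorption coeff of water (m^2/kg)
MEV = 1.602e-13                 # joules per MeV

# Energy fluence rate at the detector (J / m^2 / s)
energy_flux = activity * sum(energies_MeV) * MEV / (4.0 * math.pi * r**2)

# Equilibrium dose rate, converted from Gy/s to microGy/hour
dose_rate = energy_flux * mu_en_rho * 3600.0 * 1.0e6
print(dose_rate)   # ~340 microGy/hour, close to the GamBet value of 338
```

The agreement with the full Monte Carlo result is a good indication that the source normalization and scoring are set up correctly.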

Figure 3. Dose-rate distribution for a bare target.

Figure 4 shows the dose-rate distribution with a 2.0 cm lead shield placed a short distance in front of the target. The shield extends to the outer radius — the portion irradiated by the 4.0 cm radius beam is clearly visible. The dose buildup depth in lead is shorter than the element size, so it is not visible[3]. Also, note the absence of a buildup region in the target. Most of the equilibrium secondary electron distribution from the lead crosses the small air gap and enters the target. The GamBet H*(10) prediction is 146 μGy/hour, again comparable to the values determined by the other codes. Finally, Figure 5 shows dose-rate scans along a diameter of the spherical target for the two cases. The dashed line is the H*(10) point.

Figure 4. Dose-rate distribution for a 2.0 cm lead shield.

Figure 5. Dose-rate scans along a diameter of the spherical target.

One conclusion of this study is that all radiation-shielding codes give reasonable results. This is not surprising because the physics and mathematics base for Monte Carlo radiation codes is over half a century old. The question within this constraint is: why choose GamBet? Beyond its low price, GamBet has several advantages, some of which are apparent in this study:

Zoning in GamBet is performed with conformal finite-element meshes, allowing plots such as those of Figures 3 and 4. Complex processes may enter even the simplest radiation calculations, so detailed pictures of dose distributions are essential for evaluating results.

GamBet is fast for two reasons: 1) the code is designed to encourage division of solutions into stages and 2) the code supports efficient parallel processing. For this example, the run time with 1.6 million incident model photons (4 parallel processes with 400,000 photons each) was 7 minutes.

The element approach in GamBet is more effective than combinatorial solid geometry to represent complex systems. The advantage is not apparent in this calculation, but is overwhelming when representing something like a human body phantom.

Although all Monte Carlo physics engines are about equal for MeV γ rays, Penelope has far more detailed support for low-energy X-rays.

GamBet has several advanced features, such as integration of calculated 3D electric and magnetic fields and direct coupling to electron beam and thermal transport programs.

I recently had an inquiry from a prospective customer. He had investigated several Monte Carlo options and was impressed with one package because the developer emphasized that the software could perform H*(10) calculations. In reality, every Monte Carlo X-ray code has this capability. The only requirement is that the user be aware of the definition. To remove any doubts, I will review the significance and technical background of H*(10) calculations in this article.

In applications to personnel shielding, it is not sufficient simply to know the radiation field at a location. By radiation field, I mean the fluxes of energetic electrons and photons along with their energy spectra. The critical issue is how the particles interact with tissue and how the dose (deposited energy divided by mass) builds up inside the organism. A further complication is that the same dose may have different biological effects, depending on the radiation and tissue type. The standard unit of physical dose is the gray (Gy), equal to 1 joule/kg of deposited energy. The unit of biological dose is the sievert (Sv), which also has units of J/kg. The difference is that the biological dose includes weighting factors that depend on the radiation type (Wr) and the tissue type (Wt) to indicate the relative potential for biological harm. Dose values are related by

D(Sv) = D(Gy)*Wr*Wt

The radiation weighting factor is Wr = 1.0 for photons and electrons, so we need not worry about it in GamBet calculations. The tissue weighting factor Wt is difficult to estimate, and therefore there is considerable disagreement about the values. For whole-body irradiation, the convention is to take Wt = 1.0. The implication is that for most shielding calculations at energies of interest for medical applications, sieverts are equivalent to grays.

The critical issue is the dose buildup through interactions with tissue. To standardize measurements and calculations, the ICRU (International Commission on Radiation Units and Measurements) defined the ambient dose equivalent, H*(d). The quantity applies to radiation moving predominantly in one direction, such as X-rays outside a radiation shield. A different quantity, Hp (the personal dose equivalent), applies to approximately isotropic radiation fields, like the time that Mr. Spock was trapped in the Propulsion Chamber.

The ideal calculation or measurement defined by the ICRU uses a 30-cm-diameter sphere of tissue-equivalent plastic with a density of 1 g/cm^3 and a mass composition of 76.2% oxygen, 11.1% carbon, 10.1% hydrogen and 2.6% nitrogen. The quantity H*(d) is the biological dose at a depth d below the surface of the sphere in the direction of the radiation (Figure 1). The most common choice of d is 10.0 mm, hence the term H*(10). The reasoning is as follows. When a flux of gamma rays enters a material, the dose increases moving away from the surface because of the generation of secondary electrons (e.g., Compton electrons). The electrons deposit energy more rapidly than photons. The secondary electron density grows with distance until the production rate equals the absorption rate. At this equilibrium point, the dose reaches a maximum. For gamma rays in the 1 MeV range (typical of radioactive sources) in a material with density 1 g/cm^3 (e.g., water), the equilibrium depth is 10.0 mm or less. The following article describes benchmark calculations that illustrate this effect.

Figure 1. Geometry to define the Directional Dose Equivalent.

To conclude, here are a couple of practical suggestions for H*(10) calculations in GamBet. First, in defining custom materials, GamBet follows the Penelope convention of using the stoichiometric composition (the relative number of atoms in an equivalent molecule) rather than mass fractions. To make the conversion, assume that FO, FC, FH and FN are the stoichiometric fractions for oxygen, carbon, hydrogen and nitrogen. Because the quantities are relative, we take FO = 1.00. Using the atomic weights and the mass fraction of 0.762 for oxygen, we can determine the other quantities from these equations:
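As a sketch of that conversion (the equations themselves appeared as figures in the original post), the following snippet computes the stoichiometric fractions from the mass fractions and standard atomic weights. The function name and the atomic-weight values are my own choices, not taken from the GamBet manual.

```python
# Convert ICRU tissue mass fractions to Penelope-style stoichiometric
# fractions (relative atom numbers, normalized so that F_O = 1.00).
# Atomic weights are standard values; this is an illustrative sketch.

ATOMIC_WEIGHT = {"O": 15.999, "C": 12.011, "H": 1.008, "N": 14.007}
MASS_FRACTION = {"O": 0.762, "C": 0.111, "H": 0.101, "N": 0.026}

def stoichiometric_fractions(mass_fraction, atomic_weight, reference="O"):
    """Relative atom numbers: F_X = (w_X / A_X) / (w_ref / A_ref)."""
    moles = {el: w / atomic_weight[el] for el, w in mass_fraction.items()}
    return {el: n / moles[reference] for el, n in moles.items()}

F = stoichiometric_fractions(MASS_FRACTION, ATOMIC_WEIGHT)
for el in ("O", "C", "H", "N"):
    print(f"F_{el} = {F[el]:.3f}")
```

The hydrogen fraction comes out slightly above 2, which is consistent with tissue-equivalent plastic being roughly water-like.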

Finally, note that it is not necessary to use a 30 cm sphere of the tissue-equivalent material in calculations. Any block of the material will give about the same answer, as long as the object has depth and width greater than d.

The next article describes a walkthrough example that gives some good insights into the physics of the H*(10) calculation. The example illustrates several GamBet techniques and new features and provides an opportunity to make comparisons with several other radiation-shielding codes.

shumphries, http://fieldp.com/myblog/?p=2820 (2016-12-30)

The GamBet software suite calculates the interactions of energetic photons, electrons and positrons with matter. In a previous sequence of three articles (Monte Carlo methods versus moment equations: Part A, Part B and Part C), I discussed how Monte Carlo calculations approximate the behavior of large collections of particles. The essence of the Monte Carlo method is to follow the interactions of a finite set of model particles with the assumption that they represent the average behavior of the large set. The advantage is that there is a close connection with the interaction physics, so that a Monte Carlo calculation is often called a simulation. The disadvantage is that it is a statistical process, so the accuracy of predictions is limited by random variations set by the number of model particles or events, N. The accuracy in estimating an average quantity is usually expressed as <x> ± σ, where <x> is the ideal average and σ is the standard deviation set by the finite number of samples. The standard deviation equals the square root of the variance, the average of the squared differences from the mean value. The standard deviation scales as 1/√N. Improving the accuracy of estimates by a factor of 10 requires 100 times the number of model particles. The implication is that high accuracy leads to long run times.
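The 1/√N scaling is easy to verify numerically. This illustrative sketch (plain Python, not GamBet code; the sample and trial counts are arbitrary) estimates the mean of a uniform random variable at two sample sizes and compares the spread of the estimates:

```python
# Demonstrate 1/sqrt(N) statistical scaling: the standard error of a
# Monte Carlo mean estimate falls by 10x only when the number of samples
# grows by 100x.
import random
import statistics

random.seed(7)

def estimate_spread(n_samples, n_trials=200):
    """Std. deviation of the sample-mean estimate over repeated trials."""
    means = []
    for _ in range(n_trials):
        means.append(statistics.fmean(random.random() for _ in range(n_samples)))
    return statistics.pstdev(means)

s100 = estimate_spread(100)
s10000 = estimate_spread(10000)
print(f"spread(N=100)   = {s100:.4f}")
print(f"spread(N=10000) = {s10000:.4f}")
print(f"ratio (expect ~10): {s100 / s10000:.1f}")
```

Increasing N by a factor of 100 shrinks the spread by roughly a factor of 10, as stated above.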

The term variance reduction[1] applies to methods to optimize Monte Carlo calculations to gain an edge on the 1/√N limit. Variance reduction is not a formalized mathematical method, but rather a set of common-sense fixes that can go a long way toward reducing run time. The essence is finding ways not to waste time on particles that will not contribute to critical results. The success and validity of variance reduction depend strongly on user judgements. GamBet uses the Penelope package for interaction physics. Penelope has built-in features for variance reduction that are implemented in GamBet with the following commands:

ENHANCE NReg NSplit [ElecP, PhotP, PosiP, ElecS, PhotS, PosiS]

GamBet is unique among Monte Carlo codes in its use of finite-element conformal meshes to represent the solution volume. Users can divide the space into a number of regions to represent different materials or sections of an object. The Enhance command improves statistics in a critical region (such as a detector). Particles entering the region are split into NSplit particles with statistical weight 1/NSplit. With the optional string parameters, the operation may be limited to specific particle types (primary or secondary electrons, photons and positrons).

REDUCE NReg NKill [ElecP, PhotP, PosiP, ElecS, PhotS, PosiS]

The Reduce command is the inverse of Enhance: particles entering a specified non-critical region are reduced in number while their statistical weight is increased.
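In Monte Carlo terms, Enhance and Reduce correspond to particle splitting and Russian roulette. The sketch below is an illustrative re-implementation of those two weight-preserving operations, not GamBet or Penelope internals; the particle dictionary layout is hypothetical. The key property is that the expected total statistical weight is unchanged, so scored averages stay unbiased.

```python
# Particle splitting (Enhance-like) and Russian roulette (Reduce-like):
# both conserve the expected total statistical weight.
import random

random.seed(2)

def enhance(particle, n_split):
    """Split one particle into n_split copies with weight w/n_split."""
    w = particle["weight"] / n_split
    return [dict(particle, weight=w) for _ in range(n_split)]

def reduce(particle, n_kill):
    """Keep the particle with probability 1/n_kill, scaling weight by n_kill."""
    if random.random() < 1.0 / n_kill:
        return [dict(particle, weight=particle["weight"] * n_kill)]
    return []

p = {"energy": 1.17, "weight": 1.0}
split = enhance(p, 4)
assert sum(q["weight"] for q in split) == p["weight"]  # weight conserved exactly

# Roulette conserves weight on average over many trials:
survivors = [q for _ in range(100000) for q in reduce(p, 5)]
mean_weight = sum(q["weight"] for q in survivors) / 100000
print(f"mean surviving weight per trial = {mean_weight:.2f}")
```

Splitting reduces variance in regions that matter; roulette avoids wasting time tracking particles in regions that do not.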

FORCE [ELEC,PHOT,POSI] [HELAS,...] Factor [NReg]

This command instructs the program to increase the probability of low-probability interactions like bremsstrahlung emission. The statistical weight of reaction products is decreased to preserve the correct energy balance between reactions.

Beyond the Penelope techniques, GamBet is structured to help achieve short run times. The code was designed to encourage the division of calculations into manageable segments. For example, an initial segment could address a radioactive source with shielding and collimation, while a second segment could address the interaction of forward-directed radiation with tissue. The segments are connected by an escape file which records the set of model particles that reach the boundary of the first segment. The key to variance reduction in GamBet is filtering and transforming escape files for optimal performance in a following segment. The escape distribution can be modified within a GamBet run with the following command:

ESCAPEFILTER Condition01 Condition02 ...

The conditions are strings like X>0.15, T<5.0E6,…. Particles must meet the combined conditions to be included in the escape file. Conditions may apply to spatial locations, kinetic energy and particle type. The idea is to limit particles to those that will play a role in the following segment and to limit the size of the escape file. Recently we expanded the EscapeFilter conditions to include particle direction: Ux>0.1,Uz>0.95,Ur<0.25,… Here, the quantities are the components of a unit vector pointing along the direction of the velocity. One application is to limit particles to those that are aimed toward a detector or target.
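To show how such conditions combine, here is a hypothetical offline filter in the same spirit. The condition parser is my own sketch, not the actual GamBet parser, and the particle field names are assumptions for illustration.

```python
# Apply EscapeFilter-style string conditions (e.g. "X>0.15", "T<5.0E6")
# to a list of particle records: a particle is kept only if it satisfies
# every condition. Illustrative re-implementation, not GamBet code.
import re

_COND = re.compile(r"^([A-Za-z]+)\s*([<>])\s*([-+0-9.Ee]+)$")

def passes(particle, conditions):
    for cond in conditions:
        name, op, value = _COND.match(cond).groups()
        x, v = particle[name.lower()], float(value)
        if (op == ">" and not x > v) or (op == "<" and not x < v):
            return False
    return True

particles = [
    {"x": 0.20, "uz": 0.99, "t": 1.0e6},   # forward-directed, in time window
    {"x": 0.05, "uz": 0.50, "t": 9.0e6},   # fails position and direction cuts
]
kept = [p for p in particles if passes(p, ["X>0.15", "Uz>0.95", "T<5.0E6"])]
print(f"{len(kept)} of {len(particles)} particles kept")
```

Only the first particle survives the combined position, direction and time cuts.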

Figure 1. GenDist as a component in a variance reduction strategy.

The GamBet package includes GenDist, a powerful utility to create or to modify large particle distributions. GenDist can act as an additional stage between calculation segments (Figure 1) to optimize particle properties for reduced variance. The basic sequence is to read an escape file, filter or transform the particle parameters and write a modified file to be used as the source for a following calculation segment. In the past, the operations were controlled interactively by the user in a program window. We have recently added a script capability for autonomous production runs. Here is a summary of the new script commands:

READ FPrefix.SRC
Load an escape file

WRITE Fprefix.SRC
Write a file of transformed particle parameters, applying any filter conditions that have been set.

AXIS [X,Y,Z]
Set a reference axis for evaluating transverse velocity in transformations and filters.

FILTER Condition Value
Set any number of filter conditions. The set of conditions is the same as those in the GamBet EscapeFilter command.

XFORM GENERAL XShift YShift ZShift XRotate YRotate ZRotate
Move or rotate the particles to match the coordinate system of the next computational segment.

XFORM UNIDIST Dist
XFORM NORMPLANE Pos
XFORM CLOSETOLINE HLine YLine
Move particles in ballistic orbits following their velocity vectors. The options are shifts: 1) a uniform distance backward or forward, 2) to a plane normal to the current axis, or 3) to the positions closest to a line parallel to the current axis. One main application is finding the effective radius of a bremsstrahlung source for X-ray imaging applications by back-projecting particles to the target.
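The plane-shift option reduces to a few lines of arithmetic. In this sketch the particle record layout (x, y, z, ux, uy, uz) and the example numbers are assumptions for illustration, not the GenDist file format; the same geometry underlies any back-projection to a target plane.

```python
# Ballistic shift of a particle along its unit direction vector to the
# plane z = pos (normal to the current axis). Illustrative sketch only.

def project_to_plane(particle, pos):
    x, y, z, ux, uy, uz = particle
    t = (pos - z) / uz                      # signed path length along the orbit
    return (x + t * ux, y + t * uy, pos, ux, uy, uz)

# Back-project a photon recorded at a detector plane (z = 100 mm) to the
# bremsstrahlung target plane (z = 0) to estimate the source spot size:
p = (10.0, 0.0, 100.0, 0.0995, 0.0, 0.995)
x0, y0, z0, *_ = project_to_plane(p, 0.0)
print(f"projected origin: x = {x0:.3f} mm, y = {y0:.3f} mm")
```

Here a photon recorded 10 mm off-axis at the detector back-projects to the axis at the target plane, suggesting a small effective source spot.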

BEAMSECT2D NThet
BEAMSECT3D X 0 X X
The first command converts a distribution from a 2D cylindrical calculation to one suitable for a 3D calculation. The second command limits 3D beam distributions to specific transverse quadrants or mirrors a beam distribution.

SCALE Fact
Change the size of a particle beam or convert spatial units to match calculation segments. For example, a source calculation may use units of mm, but units of m may be more appropriate for the detector calculation.

For more details, use these links to download the GamBet and GenDist manuals. Free updates to the new programs are available to GamBet and Xenos users.

Footnotes

[1] The term standard deviation reduction would be a better choice, but it is a less compelling phrase.

shumphries, http://fieldp.com/myblog/?p=2811 (2016-12-29)

The generation of X-rays via bremsstrahlung is energetically inefficient at electron beam energies of interest for medical applications. Therefore, high-power electron beams are necessary for procedures such as CT scans. Dissipation of energy to prevent target melting is a major issue. One solution is a rapidly-rotating target. Alternatively, we recently worked with a customer using a liquid jet of gallium alloy as the X-ray target.

HeatWave, a component of our Xenos suite for X-ray source design, performs 3D thermal conduction calculations. Energy deposition profiles determined by the GamBet Monte Carlo code can be imported as thermal source distributions. The distributions can be modulated with user-specified functions to represent pulsed or periodic beams. This capability can provide, at best, a rough approximation to a rotating target.

The HeatWave solution is performed on a stationary mesh — it would be difficult and unwieldy to introduce a moving mesh. On the other hand, it is relatively easy to move the heat source through the mesh, achieving the same result. Accordingly, we have introduced a new capability in HeatWave to generate exact moving-target simulations. This article summarizes its operation.

Power distributions from the programs Aether (microwave fields), RFE3 (time-dependent electric fields) and GamBet (Monte Carlo X-rays and electrons in matter) can be imported into HeatWave with the SourceFile command. The only restriction is that the field and particle solutions must be performed on the same mesh as that used for the thermal solution. The result is that the elements of the HeatWave solution have assigned source power densities. File sources may be used in both static and dynamic solutions. In the dynamic mode, a time variation may be associated with the file source. The variation may be defined by a table of [t, f(t)] values using the SourceMod command. Here, f(t) is a multiplication factor. The factor may also be defined by an algebraic expression.

In the new version of HeatWave, the element power distribution is initially copied to a reference mesh variable. A time-dependent displacement vector is defined with the new commands XDisp, YDisp and/or ZDisp. As with the modulation functions, the time-dependent vector components may be defined by either a table of values (e.g., [t, x(t)]) or an algebraic function. The element power densities from the reference mesh are periodically mapped to the computational mesh using the current value of the displacement vector [x(t), y(t), z(t)]. The mapping algorithm conserves energy. Depending on the displacement, some source energy may be mapped outside the source region. As an example, heating of a rotating target could be modeled using a sawtooth function for x(t).
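As an illustration of building such a [t, x(t)] table, the sketch below generates a sawtooth displacement: the source sweeps across the target, then snaps back, approximating the surface motion of a rotating target. The sweep length, period and time step are arbitrary example values, not HeatWave defaults.

```python
# Build a [t, x(t)] table for a sawtooth displacement of the heat source.

def sawtooth_table(sweep_mm, period_s, t_end_s, dt_s):
    """Return (t, x) pairs with x rising linearly 0 -> sweep_mm each period."""
    table = []
    t = 0.0
    while t <= t_end_s + 1e-12:
        phase = (t % period_s) / period_s   # fraction of the current period
        table.append((round(t, 6), sweep_mm * phase))
        t += dt_s
    return table

for t, x in sawtooth_table(sweep_mm=20.0, period_s=1.0, t_end_s=2.0, dt_s=0.25):
    print(f"t = {t:5.2f} s   x = {x:6.2f} mm")
```

The displacement returns to zero at the start of each period, so the time-averaged heating is spread over the full sweep length.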

Figure 1 shows the results of a demonstration calculation. A 4 MeV electron beam of radius 5.0 mm with current 23.9 μA impinges on an aluminum target of thickness 10.0 mm. The top section shows the temperature distribution in the center of the target (z = 5.0 mm) with a total irradiation time of 10 s and a fixed source distribution. In contrast, the lower section shows a solution where the source moves 20.0 mm to the right during the 10 s interval. Note that the peak temperature drops from 505.1 °C to 338.2 °C.

shumphries, http://fieldp.com/myblog/?p=2799 (2016-10-05)

In finite-element solutions at low magnetic fields, the properties of ferromagnetic materials are not critically important. The quantity used in the solutions is the reciprocal of the relative permeability, 1/μr. In this case, both 1/800 and 1/8000 are close to zero compared to unity; therefore, the details of the μr(B) curve make little difference in the solution. On the other hand, the variation of relative permeability may be quite important at high field levels where the materials are driven into saturation. Unfortunately, the available data in tabulations of μr(B) are generally the inverse of what we need: there is extensive information at low B, but most tabulations end well below the saturation region. The subject of this article is how to estimate μr(B) at high field levels from the low-field behavior. It sounds impossible, but it's actually easy thanks to two factors:

Magnetic saturation occurs smoothly (i.e., there are no abrupt changes).

We know the variation at extremely high fields.

The key is choosing a good method to plot the data.

First, some background. The typical measurement setup is a torus of material with a drive coil winding. The quantity H (in A/m) is the magnetic field produced by the coil in the absence of the material. A useful quantity is B0 = μ0H (in tesla), the magnetic flux density produced by the coil inside the torus with no material. The quantity B is the total flux density in the torus with the material present. In this case, alignment of atomic currents adds to the field value so that B > B0. In a soft magnetic material (i.e., no permanent magnetization), both B0 and B equal zero when there is no drive current. The alignment of magnetic domains increases as the drive current increases; therefore, B grows faster than B0. The relative magnetic permeability is defined as μr = B/B0. At high values of drive current, all the material domains have been aligned. In this state, the material makes a maximum contribution to the total flux density of Bs (the saturation flux density). This contribution does not change with higher drive current. For high values of B0, the total flux density is approximated by

B ≈ B0 + Bs. (1)

To illustrate the estimation procedure, we'll consider the specific example of Magnifer 50 RG, a nickel alloy with a high value of Bs. Figure 1 shows a graph from a data sheet supplied by VDM Metals. The sheet lists the saturation flux density as Bs = 1.55 T. The plot shows B (in mT) versus H (in mA/cm) at several frequencies. Because we are interested in the static properties, we'll consider only curve 1. The data extend to a peak value of H = 200 A/m. At this point, μr > 1000, so the material is well below saturation. I have an application where the material is driven well into saturation by applied fields up to H = 5000 A/m, 25 times the highest tabulated value! Is it possible to make calculations with confidence?

Figure 1. Data from VDM Metals on Magnifer 50 RG, B versus H.

The first step is to convert the graphics data to a number set. The FP Universal Scale is the ideal tool for this task. After setting the correct log scales, I could record a set of points with simple mouse clicks, including the conversion factors to create a list of B versus B0 in units of tesla. In this case, the relative magnetic permeability is the ratio μr = B/B0.

The key to estimating the missing values is to create plots of the material behavior at the two extremes: the tabulated values at low B0 and predictions from Eq. 1 at high B0. To ensure the validity of Eq. 1, I picked B0 values corresponding to highly saturated material: 0.1, 0.2 and 0.5 T. The art is picking the right type of plot. Figure 2 shows B versus B0 with log-log scales. With the requirement of a smooth variation, clearly the unknown values must lie close to the dashed red line connecting the data extremes. Accordingly, I used the Universal Scale to find several points along the line. I combined the interpolated values with the low field tabulated values and the high-field predictions to build a data set that spans the complete range of behavior for Magnifer 50. The new data are available on our magnetic materials page.
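The high-field anchor points are simple to generate from Eq. 1: deep in saturation, B = B0 + Bs, so both B and μr = B/B0 follow directly from the saturation flux density. In this sketch, Bs = 1.55 T is the VDM Metals value quoted above; the function is my own illustration, not part of any Field Precision utility.

```python
# Generate high-field anchor points for the mu_r(B) extrapolation,
# assuming full saturation so that B = B0 + Bs (Eq. 1).

BS = 1.55  # saturation flux density of Magnifer 50 RG, in tesla

def saturated_point(b0):
    """Return (B, mu_r) for applied flux density b0 in the saturated limit."""
    b = b0 + BS
    return b, b / b0

for b0 in (0.1, 0.2, 0.5):
    b, mu_r = saturated_point(b0)
    print(f"B0 = {b0:.1f} T -> B = {b:.2f} T, mu_r = {mu_r:.2f}")
```

These are the three anchor values (B0 = 0.1, 0.2 and 0.5 T) used at the high-field end of the interpolation in Figure 2.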

Figure 2. Plot of B versus B0 for Magnifer 50 showing values at high and low field with a proposed interpolation (dashed red line).

Finally, Figure 3 shows alternate plots to Fig. 2: B0 versus μr and B versus μr. In all cases, the variation over the unknown saturation region is well approximated by a simple straight-line fit.

shumphries, http://fieldp.com/myblog/?p=2777 (2016-09-15)

Many processes represented by GamBet, such as the emission of bremsstrahlung photons and Compton electrons, involve interactions with atomic electrons. Particle emission may also follow from reactions within nuclei. Although GamBet does not address nuclear physics, the program can determine the histories of particles created by nuclear events. There are two types of processes: nuclear reactions and radioactivity. A nuclear reaction is a response to a specific event, like fission following absorption of a neutron. On the other hand, radioactive events occur spontaneously at random times. Some nuclei are inherently unstable but in a state of local equilibrium. A transition can occur through quantum tunneling, a random process. For X-ray applications, we will limit attention to radioactivity. To begin, we'll review some nomenclature and characteristics of sources. The second part of the article deals with specific GamBet strategies.

Four types of particles may be generated in radioactive events:

Fast electrons and positrons (beta rays)

Photons (gamma rays)

Heavy charged particles (protons, alpha rays,…)

Neutrons.

This article deals with the first two types. Radioactive sources of beta and gamma rays have extensive applications in areas such as medical treatments, food irradiation and detector calibration.

The activity of a radioactive source is determined by the law of radioactive decay (Eq. 1). In the equation, the quantity N equals the total number of nuclei in the source. The left-hand side is the number of nuclei that decay per second. The quantity λ (with units of 1/s) is the decay constant. It depends on the energy state and quantum barrier of the nucleus. Accordingly, sources exhibit huge variations of λ. The historical unit of activity for a source is the curie (Ci). One curie equals 3.7 × 10^10 decays/s (approximately equal to the activity of 1 gram of Ra226). The modern standard unit is the becquerel (Bq), equal to 1 decay/s (1 Bq = 2.703 × 10^-11 Ci).

We can also interpret the decay constant in terms of a single nucleus. The probability that a nucleus has not decayed after a time t is given by Eq. 2. The average lifetime (the mean of the distribution) is 1/λ. The halflife is another useful quantity. It equals the time for half of the nuclei present in a source at t = 0.0 to decay. Equation 2 leads to the expression for the halflife of Eq. 3.
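The relations of Eqs. 1 through 3 can be checked numerically. The sketch below uses the Cs137 halflife (30.17 years, a value quoted later in the article) purely for illustration; the variable names are my own.

```python
# Numeric check of the decay relations: survival probability exp(-lambda*t)
# (Eq. 2), mean lifetime 1/lambda, and halflife ln(2)/lambda (Eq. 3).
import math

HALFLIFE_S = 30.17 * 365.25 * 86400.0        # Cs137 halflife in seconds
decay_const = math.log(2.0) / HALFLIFE_S     # lambda, in 1/s

def survival_fraction(t_s):
    """Probability that a nucleus has not decayed after t seconds (Eq. 2)."""
    return math.exp(-decay_const * t_s)

print(f"lambda = {decay_const:.3e} 1/s")
print(f"mean lifetime = {1.0 / decay_const / (365.25 * 86400.0):.2f} years")
print(f"survival after one halflife = {survival_fraction(HALFLIFE_S):.3f}")
```

As expected, the survival fraction after one halflife is exactly one half, and the mean lifetime (1/λ) exceeds the halflife by the factor 1/ln 2 ≈ 1.44.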

In comparison to particle accelerators, the main advantage of radioactive isotopes as sources is that they do not require power input and expensive ancillary equipment (e.g., power supplies, vacuum systems,…). Many isotopes are produced by exposure in a nuclear reactor and may be relatively inexpensive when reactors are available. The disadvantages of radioactive sources are that they run continuously and produce a broad energy spectrum of electrons and positrons.

The most important nuclear processes for the production of beta and gamma rays are beta decay and electron capture. Figure 1 shows the atomic mass of the most stable isotopes as a function of atomic number Z. Isotopes above the line have an excess of neutrons; their usual route toward stability is to emit a β- particle (electron), converting a neutron to a proton while preserving the number of nucleons. In other words, the nucleus changes its elemental identity while preserving its mass number, moving along a line of isobars. Similarly, isotopes with an excess of protons emit β+ particles (positrons). Both forms of nuclear transformation are called beta decay.

Figure 1. Atomic weight of the most stable isotopes as a function of atomic number Z. The dashed line shows the mass for equal numbers of neutrons and protons.

First, consider β- emission. There are two isotopes commonly used in research and industry: Cs137 and Co60. Nuclear processes are commonly illustrated with energy-level diagrams; Fig. 2a shows the decay scheme for Cs137. The horizontal axis distinguishes the nuclear species and the vertical axis shows energy levels. Dark lines indicate a nucleus in the ground state and light lines designate an excited state. The arrows indicate the directions of transformations. The starting point is the ground state of Cs137. The figure of 30.17 years is the half-life for decay. Decay events of type β- convert the nucleus to the more stable isobar, Ba137. The arrows indicate that there are two decay paths. In 94.6% of the decays, the emission process carries off 0.512 MeV (shared between the emitted electron and an anti-neutrino) and leaves the Ba137 nucleus in an excited state. The state decays with a half-life of 2.55 minutes, resulting in emission of a 0.662 MeV gamma ray. In 5.4% of the events, the β- particle and antineutrino carry off 1.174 MeV, leaving the product nucleus in the ground state.

Figure 2. Energy level diagrams for the radioactive decay of Cs137 and Na22.

The emission process does not produce a single β- particle of energy 0.512 or 1.174 MeV, but rather a broad spectrum of electrons with kinetic energy spread between zero and the maximum. The reason is the condition of conservation of spin. Nuclei have spin values that are an integer multiple of h/2π, while electrons have spin ½(h/2π). For balance, an additional particle is required with half-integer spin. In his theory of beta decay, Fermi postulated the existence of neutrinos and antineutrinos, neutral particles with spin ½(h/2π) and very small mass, making them almost undetectable. In a β- decay, the available energy is partitioned between the electron, the nucleus and an antineutrino. Although the theory to determine the spectrum is complex, all β- decays give rise to a spectrum similar to that of Fig. 3. The spectrum is skewed toward lower energy by the effect of Coulomb attraction as the electron escapes from the nucleus. Generally, Cs137 is used as a source of 0.662 MeV gamma rays because the β- particles are preferentially absorbed by the source and surrounding structure and the antineutrinos pass away with no effect.

Figure 3. Distribution of electron energy for the radioactive decay of C14.

We next consider proton-rich isotopes that approach the stability line through emission of positrons. The mechanism is similar to β- emission, with the exceptions that a neutrino is emitted and the positron emission spectrum is shifted toward higher energies because of Coulomb repulsion from the nucleus. Figure 2b shows the energy-level diagram for Na22, a positron emitter. The halflife for all decay processes is 2.60 years. There are several decay pathways. The most likely event (90.33% probability) is that a proton changes to a neutron by emission of a positron, leaving the product isotope Ne22 in an excited state. A gamma ray of energy 1.275 MeV is released almost immediately as the nucleus relaxes to the ground state. In this case, the maximum positron energy is 0.545 MeV. In rare instances, a positron with energy up to 1.82 MeV is released, leaving the product nucleus in the ground state. A third process that may occur is electron capture. In 9.62% of the decays, an inner orbital electron is captured by the nucleus, again resulting in the conversion of a proton to a neutron. The Ne22 nucleus is left in the same excited state as with β+ emission, again followed by the release of a 1.275 MeV gamma ray. The difference from β+ decay is that no positron or neutrino is emitted. Electron capture leaves a vacancy in the K or L shell of the electron cloud, so characteristic X-rays are also emitted as the atom relaxes.

This table lists useful commercial radioactive sources of electrons, photons and positrons. A common feature is a halflife of one to a few years. For isotopes with lower values, it would be necessary to produce and use them quickly. A long halflife means reduced activity.

Common commercial radioactive sources

We’ll now turn to GamBet modeling techniques, in particular how to create a particle input file to represent a radioactive source. There are some challenges:

Particles are emitted over an extended spatial region, the volume of the source.

Electrons and positrons have broad energy distributions.

Often, we want to normalize particle flux to represent a specific source activity.

Particle file creation is greatly facilitated through the use of statistical codes like R (Link to a comprehensive short course with examples on using R with GamBet).

Dealing with the finite source size is relatively easy. If the activity is uniform over the source volume, then the probability density for emission is uniform over the volume. As an example, consider a cylindrical source of length L and radius R. Given a routine that creates a random variable ξ in the range 0 ≤ ξ ≤ 1.0, values of the z coordinate (along the cylinder axis) are assigned according to Eq. 4. We can use the rejection method to determine coordinates in the x-y plane. We assign coordinates by Eqs. 5 and 6 and keep only instances that satisfy Eq. 7.
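Here is a sketch of that sampling procedure, assuming Eq. 4 assigns z = Lξ and Eqs. 5 through 7 implement the standard rejection step for a disk; the function and variable names are my own.

```python
# Sample emission points uniformly inside a cylinder of length L and
# radius R: z directly from a uniform variate, (x, y) by rejection.
import random

random.seed(11)

def sample_cylinder(length, radius):
    """Return (x, y, z) uniformly distributed inside the cylinder."""
    z = length * random.random()                     # Eq. 4: z = L * xi
    while True:
        x = radius * (2.0 * random.random() - 1.0)   # Eq. 5
        y = radius * (2.0 * random.random() - 1.0)   # Eq. 6
        if x * x + y * y <= radius * radius:         # Eq. 7: keep or reject
            return x, y, z

points = [sample_cylinder(length=50.0, radius=5.0) for _ in range(5000)]
inside = all(x * x + y * y <= 25.0 and 0.0 <= z <= 50.0 for x, y, z in points)
print(inside)  # True
```

The rejection step accepts a fraction π/4 of the trial (x, y) pairs, so the loop terminates quickly.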

With regard to energy distributions, photons from sources like Cs137 and Na22 are essentially monoenergetic. In contrast, the β particles have an energy distribution like that of Fig. 3. In principle, thin films could be used as sources of electrons or positrons. In this case, it would be necessary to represent the spectrum and to determine the effect of energy loss in the film. The spectral shape and endpoint energy vary with the type of isotope. Chapter 10 of the reference Using R for GamBet Statistical Analysis discusses methods for creating arbitrary distributions. In practice, an exact model may not be necessary and the data may not even be available. In applications such as estimating shield effectiveness, it may be sufficient to model the β decay spectrum with a simple function like that of Eq. 8. In the equation, Emax is the maximum β energy. Taking the integral gives the cumulative probability distribution (i.e., the probability that a β particle has energy less than or equal to E) of Eq. 9. Values of P(E) range from 0.0 to 1.0. We can obtain the desired distribution by assigning energy from a random-uniform variable ξ using Eq. 10, the inverse of Eq. 9. Figure 4 shows the result with 10,000 particles having endpoint energy Emax = 0.512 MeV.

Figure 4. Generation of the spectrum of Eq. 8 with 10,000 particles using assignment from Eq. 10.
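The inverse-transform assignment of Eqs. 9 and 10 can also be done numerically, so that it works for any tabulated spectrum. Since Eq. 8 is not reproduced here, the spectrum below, p(E) ∝ E(Emax − E), is an assumed stand-in chosen only because it vanishes at both endpoints like a real beta spectrum.

```python
# Sample energies by numerically tabulating the CDF (Eq. 9) and inverting
# it by interpolation (Eq. 10). Works for any tabulated spectrum shape.
import bisect
import random

random.seed(3)
E_MAX = 0.512  # MeV, the endpoint energy used in the text

def build_sampler(pdf, e_max, n_grid=1000):
    """Tabulate the cumulative distribution and return an inverse sampler."""
    es = [e_max * i / n_grid for i in range(n_grid + 1)]
    cdf = [0.0]
    for i in range(1, len(es)):
        cdf.append(cdf[-1] + 0.5 * (pdf(es[i - 1]) + pdf(es[i])) * (es[i] - es[i - 1]))
    cdf = [c / cdf[-1] for c in cdf]     # normalize so P(Emax) = 1.0
    def sample():
        xi = random.random()
        i = max(bisect.bisect_left(cdf, xi), 1)
        f = (xi - cdf[i - 1]) / (cdf[i] - cdf[i - 1])
        return es[i - 1] + f * (es[i] - es[i - 1])
    return sample

# Assumed stand-in for Eq. 8: zero at both endpoints like a beta spectrum
sample = build_sampler(lambda e: e * (E_MAX - e), E_MAX)
energies = [sample() for _ in range(10000)]
print(f"mean energy = {sum(energies) / len(energies):.3f} MeV")
```

For this symmetric stand-in the mean lands near Emax/2; the skewed shape of a real β- spectrum would pull it lower.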

To conclude, we'll address how to create a GamBet source file to represent a given source activity. We'll follow a specific example: a Co60 source with activity 10 Ci. This figure corresponds to a disintegration rate of Rd = 3.7 × 10^11 1/s. Figure 5 shows an energy level diagram. The isotope decays through β- decay with a halflife of 5.27 years. Almost all events result in an excited state of the Ni60 nucleus that relaxes to the stable ground state by almost instantaneous emission of γ rays of energy 1.17 and 1.33 MeV. A source assembly typically consists of the source combined with shielding and collimators to create a directional photon flux. A goal of a calculation could be to compare radiation fluxes in the forward and reverse directions.

Figure 5. Energy level diagram for the radioactive decay of Co60.

We specify Np = 1000 model emission points uniformly distributed over the source volume using techniques like those discussed previously. At each emission point, we generate Ng = 500 photons of energy 1.17 MeV and Ng photons of energy 1.33 MeV. The photons are randomly distributed over 4π steradians of solid angle. Equations 11 and 12 can be used to pick the azimuthal and polar angles. In the continuous-beam mode of GamBet, each photon in the file should be assigned a flux value given by Eq. 12. In this case, GamBet gives absolute values of particle flux through, and deposited dose in, the structures surrounding the source assembly. Note that this case is relatively simple because almost all events follow the same decay path. In the case of Cs137 (Fig. 2a), we would need to multiply Rd by 0.946 to get the correct absolute flux of 0.662 MeV γ rays.
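A common way to pick the angles (the roles played by Eqs. 11 and 12, whose exact forms are not reproduced here) is to take the azimuthal angle uniform on [0, 2π) and cos θ uniform on [−1, 1]. The flux normalization shown is my reading of the Co60 example, assuming each decay contributes one model photon per gamma line; treat both as illustrative assumptions.

```python
# Isotropic emission directions over 4*pi steradians, plus the flux
# weight of each model photon for a 10 Ci Co60 source. Illustrative sketch.
import math
import random

random.seed(5)

def isotropic_direction():
    phi = 2.0 * math.pi * random.random()      # azimuthal angle
    cos_theta = 1.0 - 2.0 * random.random()    # uniform in cos(theta)
    sin_theta = math.sqrt(1.0 - cos_theta * cos_theta)
    return (sin_theta * math.cos(phi), sin_theta * math.sin(phi), cos_theta)

R_D = 3.7e11        # disintegrations/s for a 10 Ci source
NP, NG = 1000, 500  # emission points, photons per gamma line per point
flux_per_photon = R_D / (NP * NG)   # weight of each 1.17 MeV model photon

u = isotropic_direction()
norm = math.sqrt(sum(c * c for c in u))
print(f"flux weight = {flux_per_photon:.1e} 1/s, |u| = {norm:.6f}")
```

Sampling cos θ uniformly (rather than θ itself) is what makes the directions uniform over the sphere.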

The procedure as described may be inefficient for calculating forward photon flux or shielding leakage, because most of the model particles would not contribute. A simple variance-reduction technique is to limit the range of solid angle dΩ so that photons are preferentially directed toward the measurement point. The solid angle should be large enough to include the possibility of scattering from the shield or collimator. To properly normalize the calculation, the photon flux values should be adjusted by a factor dΩ/4π.

shumphries, http://fieldp.com/myblog/?p=2768 (2016-08-23)

This is the concluding article in a series of three introducing the basic concepts of the Monte Carlo method for radiation transport calculations. The demonstration calculations discussed in the previous articles give a basis for comparing the relative benefits and drawbacks of the Monte Carlo and transport equation approaches. We have seen that both strategies are based on averages over random distributions of particles whose statistical properties are known. Both methods give the same result in the limit of a large number of model particles. The main difference is that the averaging process is performed at the beginning of the transport equation calculation but at the end of the Monte Carlo solution.

In the transport equation approach to the two-dimensional random walk, the idea is to seek average quantities n or J and to find relationships between them (like Fick’s first and second laws). These relationships are accurate when there are large numbers of particles. To illustrate the meaning of large, note that the number of electrons in one cubic micrometer of aluminum equals 3 × 10^15. When averages are taken over such large numbers, the transport equations are effectively deterministic.

In the Monte Carlo method, the idea is to follow individual particles based on a knowledge of their interaction mechanisms. A practical computer simulation may involve millions of model particles, orders of magnitude below the actual particle number. Therefore, each model particle represents the average behavior of a large group of actual particles. In contrast to transport equations, the accuracy of Monte Carlo calculations is dominated by statistical variations.

An additional benefit of transport equations is that they often have closed-form solutions that lead to scaling relationships like Eq. 22 of the previous article. We could extract an approximation to the relationship from Monte Carlo results, although at the expense of some labor.

Despite the apparently favorable features of the transport equations, Monte Carlo is the primary tool for electron/photon transport. Let’s understand why. One advantage is apparent when comparing the relative effort in the demonstration solutions: the Monte Carlo calculation is much easier to understand. A clear definition of the physical properties of particle collisions was combined with a few simple rules. The only derivation required was that for the mean free path. The entire physical model was contained in a few lines of code. In contrast, the transport model required considerable insight and the derivation of several equations. In addition, it was necessary to introduce auxiliary results like the divergence theorem. Most of us feel more comfortable staying close to the physics with a minimum of intervening mathematical constructions. This attitude represents good strategy, not laziness. Less abstraction means less chance for error. A computer calculation that closely adheres to the physics is called a simulation. Program managers and funding agents have a warm feeling for simulations.
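To make "a few lines of code" concrete, here is a hedged sketch of the one derived ingredient mentioned above: drawing free-path lengths from the exponential distribution with mean free path λ by the standard inversion s = -λ ln(ξ). The variable names are illustrative, not taken from the original articles.

```python
import math
import random

def sample_free_path(lam, rng):
    """Draw a free-path length from the exponential distribution with
    mean lam, using the inversion s = -lam * ln(xi) with xi uniform
    on (0, 1]. Using 1 - rng.random() keeps the argument of log
    strictly positive.
    """
    return -lam * math.log(1.0 - rng.random())

rng = random.Random(3)
lam = 2.5
paths = [sample_free_path(lam, rng) for _ in range(200_000)]
mean_path = sum(paths) / len(paths)
print(mean_path)  # close to lam for a large sample
```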

Beyond the emotional appeal, there is an overriding practical reason to apply Monte Carlo to electron/photon transport in matter. Transport equations become untenable when the interaction physics becomes complex. For example, consider the following scenario for a demonstration calculation:

In 20% of collisions, a particle splits into two particles with velocities 0.5v0 and 0.2v0. The two particles are emitted at random angles separated by 60°. Each secondary particle has its own cross section for interaction with the background obstacles.

It would be relatively easy to modify the code of the first article to represent this history, and even more complex ones. On the other hand, it would require considerable effort and theoretical insight to modify a transport equation. As a second example, suppose the medium were not uniform but had inclusions with different cross sections and with dimensions smaller than the mean free path λ. In this case, the derivation of Fick’s first law is invalid. A much more complex relationship would be needed. Again, it would be relatively simple to incorporate such a change in a Monte Carlo model. Although these scenarios may sound arbitrary, they are precisely the type of processes that occur in electron/photon showers.
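To illustrate how easily the splitting rule fits into a Monte Carlo history loop, here is a sketch. It is not the code of the first article; the particle representation (a speed and a 2-D direction angle) and the function names are illustrative assumptions.

```python
import math
import random

def collide(particle, rng):
    """Apply the modified collision rule: in 20% of collisions the
    particle splits into two secondaries with speeds 0.5*v and 0.2*v,
    emitted at random angles separated by 60 degrees. Otherwise the
    particle scatters into a random direction.

    A particle is a (speed, angle) pair; returns the outgoing particles.
    """
    speed, _ = particle
    if rng.random() < 0.20:
        theta = rng.uniform(0.0, 2.0 * math.pi)
        return [(0.5 * speed, theta),
                (0.2 * speed, theta + math.radians(60.0))]
    return [(speed, rng.uniform(0.0, 2.0 * math.pi))]

rng = random.Random(4)
stack = [(1.0, 0.0)]          # one primary with speed v0 = 1
for _ in range(5):            # follow five collision generations
    next_stack = []
    for p in stack:
        next_stack.extend(collide(p, rng))
    stack = next_stack
print(len(stack), "particles after 5 generations")
```

The entire physical change lives in one small function; the corresponding modification to a transport equation would require new source terms and coupled equations for each secondary species.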

In summary, the goal in collective physics is to describe the behavior of huge numbers of particles. We have discussed two approaches:

Transport equation method. Take averages over the particle distribution at the outset to define macroscopic quantities (like the density n and the flux J), derive relationships between them from the interaction physics, and then solve the resulting equations directly.

Monte Carlo method. Define a large but reasonable set of model particles, where each model particle represents the behavior of a group of real particles with similar properties. Propagate the model particles as single particles using known physics and probabilities of interactions. Then, take averages to infer the group behavior.
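The Monte Carlo recipe above can be condensed into a short program. This sketch (a 2-D scattering slab with exponential free paths and isotropic scattering; the geometry and function name are illustrative, not from the demonstration articles) propagates model particles one at a time and averages at the end:

```python
import math
import random

def fraction_transmitted(n_particles, lam, thickness, seed=5):
    """Follow each model particle through a purely scattering slab:
    exponential free paths with mean lam, isotropic re-direction at
    each collision. Particles escape out the back (transmitted) or
    the front (reflected); the transmitted fraction is the average
    taken at the end of the calculation.
    """
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_particles):
        x = 0.0
        theta = 0.0                  # launched normal to the front face
        for _ in range(200):         # cap on collisions per history
            x += -lam * math.log(1.0 - rng.random()) * math.cos(theta)
            if x >= thickness:
                transmitted += 1
                break
            if x < 0.0:
                break                # escaped back out the front face
            theta = rng.uniform(0.0, 2.0 * math.pi)
    return transmitted / n_particles

print(fraction_transmitted(20_000, lam=1.0, thickness=3.0))
```

Thicker slabs transmit fewer particles, as expected, and the statistical scatter in the answer shrinks as the number of model particles grows.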

The choice of method depends on the nature of the particles and their interaction mechanisms. Practical calculations often use a combination of the two approaches. For example, consider the three types of calculations required for the design of X-ray devices (supported in our Xenos package):

Radiation transport in matter. Photons may be treated with the Monte Carlo technique, but mixed methods are necessary for electrons and positrons. In addition to discrete events (hard interactions) like Compton scattering, energetic electrons in matter undergo small-angle scattering and energy loss in encounters with a vast number of background electrons (soft interactions). It would be impossible to model each interaction individually. Instead, averages based on transport calculations are used.

Heat transfer. Here, the role of particles is played by packets of energy transferred from one atom to an adjacent one. Because the interaction model is simple and the mean free path is extremely small, transport equations are clearly the best choice.

Electric and magnetic fields. The standard approach is through the Maxwell equations. They are transport-type equations, derived by taking averages over a large number of charges. On the other hand, we employ Monte-Carlo-type methods to treat contributions to fields from high-current electron beams.