▼ This work developed a stylized three-dimensional benchmark problem based on Argonne National Laboratory's conceptual Advanced Burner Test Reactor design. This reactor is a sodium-cooled fast reactor designed to burn recycled fuel to generate power while transmuting long-lived waste. The specification includes heterogeneity at both the assembly and core levels, and the geometry and material compositions are fully described. After developing the benchmark, 15-group cross sections were generated so that it could be used for transport code method verification. Using this benchmark and the 15-group cross sections, the Coarse-Mesh Transport Method (COMET) code was compared to the Monte Carlo code MCNP5 (MCNP).
Results were generated for three separate core cases: control rods out, near critical, and control rods in. The cross-section groups developed do not compare favorably to the continuous-energy model; however, their primary purpose is to provide a common, approachable set of cross sections that is widely usable for benchmarking numerical methods development.
Eigenvalue comparison results for MCNP vs. COMET are strong, with two of the models agreeing within one standard deviation and the third within one and a third standard deviations. The fission density results are highly accurate, with an average pin fission density error of less than 0.5% for each model.
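The comparison metrics used above, eigenvalue difference in pcm and in units of the combined statistical uncertainty, can be sketched as follows. The numbers in the usage example are hypothetical placeholders, not the thesis results.

```python
import math

def k_eff_agreement(k_ref, sigma_ref, k_test, sigma_test):
    """Return the eigenvalue difference in pcm and in units of the
    combined one-sigma uncertainty of the two estimates."""
    diff_pcm = (k_test - k_ref) * 1e5
    combined_sigma = math.sqrt(sigma_ref**2 + sigma_test**2)
    return diff_pcm, abs(k_test - k_ref) / combined_sigma

# Illustrative values only (not the thesis data):
diff, n_sigma = k_eff_agreement(1.00150, 0.00010, 1.00162, 0.00008)
```

A comparison is "within one standard deviation" when the second returned value is below 1.0.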
Advisors/Committee Members: Rahnema, Farzad (advisor), Petrovic, Bojan (committee member), Erickson, Anna (committee member).

▼ The fluoride-salt-cooled high-temperature reactor (FHR) is a novel reactor design whose key features include passive safety and high operating temperatures with correspondingly high conversion efficiency. The fuel is a layered graphite plank configuration containing enriched uranium oxycarbide (UCO) tri-structural isotropic (TRISO) fuel particles. Fuel cycle cost (FCC) models have been used to analyze and optimize fuel plate thicknesses, enrichment, and packing fraction, as well as to gauge the economic competitiveness of this reactor design.
Since the development of the initial FCC model, many corrections and modifications have been identified that make the model more accurate. These modifications relate to corrections to the neutronic simulations and the need for a more accurate fabrication cost estimate. The former pertains to a Monte Carlo (MC) Dancoff factor that corrects for the fuel particle neutron shadowing that occurs in double-heterogeneous fuels in multi-group calculations. The latter involves a detailed look at the fuel fabrication process to properly account for material, manufacturing, and quality assurance cost components and how they relate to the heavy metal loading in an FHR fuel plank.
It was found that fabrication cost may be a more significant portion of the total FCC than initially estimated. TRISO manufacturing cost and heavy metal loading via packing fraction were key factors in total fabrication cost. This study evaluated how much the neutronic and fabrication cost corrections change the FCC model, the optimum fuel element parameters, and the economic feasibility of the reactor design.
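To illustrate how fabrication cost and heavy metal loading enter a levelized fuel cycle cost, the sketch below uses a generic textbook-style decomposition. The function name, cost components, and all values are hypothetical assumptions, not the thesis's FCC model.

```python
def fuel_cycle_cost(ore, conversion, enrichment, fabrication,
                    hm_mass_kg, burnup_MWd_per_kg, efficiency):
    """Levelized fuel cycle cost in $/MWh(e): total front-end cost per kg
    of heavy metal divided by the electricity that heavy metal produces.
    Generic illustration; carrying charges and back-end costs omitted."""
    total_cost = (ore + conversion + enrichment + fabrication) * hm_mass_kg
    electricity_MWh = hm_mass_kg * burnup_MWd_per_kg * 24.0 * efficiency
    return total_cost / electricity_MWh

# Illustrative inputs ($/kgHM for the cost terms):
cost = fuel_cycle_cost(ore=800.0, conversion=60.0, enrichment=1200.0,
                       fabrication=2000.0, hm_mass_kg=1000.0,
                       burnup_MWd_per_kg=100.0, efficiency=0.45)
```

In this decomposition, raising the fabrication term or lowering the heavy metal loading (via packing fraction) directly raises the $/MWh figure, which is the sensitivity the study examines.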
Advisors/Committee Members: Petrovic, Bojan (advisor), Erickson, Anna (committee member), Deo, Chaitanya (committee member).

▼ The purpose of this research is to examine the effects of systematic uncertainty in reactor operating parameters on isotope ratios in spent fuel rods, specifically from the BR3 reactor. The primary operating parameters of interest are the position of the rod within an assembly and the boron concentration in the coolant, and the ratios examined are 240Pu/239Pu and 137Cs/135Cs. The model-predicted isotope ratios were also compared to experimentally measured isotope ratios for the rod of interest. An assembly-level model of the reactor of interest was created in MCNP. Four test cases of the rod position and four test cases of the boron concentration were created. The method involved the development of response functions for the final isotope ratios as a function of the input parameters. An uncertainty analysis was performed using a variance-covariance matrix for the response function of the isotope ratios. The uncertainty analysis revealed a high systematic uncertainty for the 240Pu/239Pu ratio and an over-prediction of approximately 30% relative to the experimental isotope ratio. The systematic uncertainty for the 137Cs/135Cs ratio was found to be slightly higher than the experimental value but not as high as that of the 240Pu/239Pu ratio. The sensitivity analysis of the 137Cs/135Cs ratio showed that it was difficult to gain information about the rod's location within the assembly.
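The variance-covariance propagation described above can be sketched with the standard linear "sandwich rule", where response-function sensitivities play the role of a Jacobian. The sensitivity and covariance numbers below are hypothetical placeholders, not the BR3 analysis values.

```python
import numpy as np

def propagate_uncertainty(jacobian, covariance):
    """Sandwich rule: variance of a scalar response y = f(p), given the
    sensitivity (Jacobian) row vector dy/dp and the parameter
    variance-covariance matrix C, is J C J^T."""
    J = np.atleast_2d(jacobian)
    return float((J @ covariance @ J.T).item())

# Hypothetical sensitivities of an isotope ratio to rod position (mm)
# and boron concentration (ppm), with an assumed covariance matrix:
J = np.array([0.02, -0.005])
C = np.array([[4.0, 0.5],
              [0.5, 9.0]])
var = propagate_uncertainty(J, C)
```

The square root of the returned variance is the systematic one-sigma uncertainty on the ratio.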
Advisors/Committee Members: Erickson, Anna (advisor), Petrovic, Bojan (committee member), Robel, Martin (committee member).

▼ This thesis presents a formulation for an adaptive COMET method for solving whole reactor eigenvalue and flux distribution problems using a varying flux expansion at mesh interfaces. While COMET solutions have enjoyed accuracy on par with Monte Carlo techniques with a computational efficiency several orders of magnitude greater than stochastic methods, it was desired to extend the efficiency of the method further. Improved efficiency is obtained by allowing the flux expansion at mesh interfaces, which was previously held constant throughout a whole problem, to adapt to different expansion orders depending upon mesh composition and spatial effects due to neighboring meshes. To test the method, two benchmark problems were solved using the standard and adaptive COMET solution methods: the C5G7 benchmark problem and a pressurized water reactor benchmark with mixed-oxide (MOX) fuel assemblies. In both benchmark cases, three different configurations for different insertion of control rods were considered. For all cases, the agreement between the standard and adaptive COMET solutions was excellent, with eigenvalue agreement being 3 pcm or less and average pin fission errors being much less than 0.5% in all cases. Increases in computational efficiency by factors of 2.1 to 3.6 were observed. The strong performance of the adaptive method implies that it can be used to obtain accurate solutions to reactor problems with more efficiency than the standard COMET method.
Advisors/Committee Members: Rahnema, Farzad (advisor), Zhang, Dingkang (committee member), Petrovic, Bojan (committee member).

▼ Some thermalized ions flowing slowly radially outward have sufficient energy to access loss orbits, which allow them to free-stream out of the confined plasma region and become ion-orbit lost (IOL). Ions flowing in the counter-current (ctr-Ip) direction are more readily lost, exerting a net torque on the remaining ions in the co-current direction. A particle-momentum-energy balance analysis, including the effects of IOL, was performed on three similar DIII-D discharges. After the L-H transition, the intrinsic rotation due to IOL decreases for a short time in the plasma edge near the separatrix. For one of the three discharges, there is a corresponding decrease in the measured carbon rotation in the edge. For all of the discharges, the electromagnetic pinch velocity went from weakly inward in L-mode to strongly inward in H-mode near the separatrix.
Advisors/Committee Members: Stacey, Weston M. (advisor), Petrovic, Bojan (advisor), Groebner, Rich J. (advisor).

▼ The swelling mechanisms of U3Si2 under neutron irradiation in reactor conditions are not unequivocally known. The limited experimental evidence available suggests that the main driver of swelling in this material is the accumulation of fission gases at crystalline grain boundaries. The steps that lead to this accumulation are multiple and complex. Gradually, however, the gaseous fission products migrate by diffusion. Upon reaching a grain boundary, which acts as a trap, they begin to accumulate, leading to the formation of bubbles and hence to swelling. Therefore, a quantitative model of swelling requires incorporating the phenomena that increase the presence of grain boundaries and decrease grain sizes, thus creating sites for bubble formation and growth. It is assumed that grain boundary formation results from the conversion of stored energy from accumulated dislocations into energy for the formation of new grain boundaries. This thesis attempts to develop a quantitative model for grain subdivision in U3Si2 based on the above-mentioned phenomena, to verify the presence of this mechanism, and to use it in conjunction with swelling codes to evaluate the total swelling of the pellet in the reactor during its lifetime.
Advisors/Committee Members: Petrovic, Bojan (advisor), Ougouag, Abderrafi (advisor), Deo, Chaitanya (committee member).

▼ Recent developments in the global nuclear industry have led to the need for reactor designs that are not only safe but also address the challenges of nuclear waste while producing clean electricity at low cost. One of the designs proposed to meet these requirements is the Advanced Burner Reactor (ABR), a sodium-cooled, metal-fueled fast reactor system that uses spent fuel from current light water reactors as part of its energy source. Due to the complex nature of nuclear reactors, extensive modeling of a system must be performed in order to demonstrate its viability.
This thesis combines two established reactor modeling techniques, Monte Carlo (MC) and nodal diffusion, in order to efficiently model the ABR core. The MC method is a well-established computational approach for modeling nuclear systems and is considered very accurate and versatile. However, it requires extensive time and computational resources, and its application becomes prohibitively expensive when performing analyses of accident scenarios. Meanwhile, the nodal diffusion method requires far fewer resources for such analyses, but in principle its accuracy is compromised by the simplifications applied to the model.
The main focus of the work presented in this thesis is expanding the capabilities of nodal diffusion codes to calculate local isotopic concentrations, activities, and decay heat quantities, a first-of-a-kind demonstration of the applicability of nodal diffusion codes for such calculations. This approach enables decay heat to be calculated rapidly and efficiently, making transient analyses of accident scenarios practical.
The work presented in this thesis uses the MC code Serpent as a macroscopic and microscopic cross-section generation tool and the nodal diffusion code DYN3D for full-core analysis of the ABR core. The Serpent-DYN3D code sequence is then applied to various scenarios, including decay heat analysis, and compared to reference MC solutions. It is found that the Serpent-DYN3D sequence is an adequate tool for modeling sodium-cooled, metal-fueled fast reactors, providing accurate solutions while reducing the time and computational resources required.
Advisors/Committee Members: Kotlyar, Dan (advisor), Petrovic, Bojan (advisor), Hertel, Nolan (advisor).

▼ Throughout the history of the Annular Core Research Reactor (ACRR), Transient Rod (TR) A has experienced an increased rate of failure versus the other two TRs (B and C). Driven either by pneumatic force or by an electric motor, the transient rods remove the poison rods from the ACRR core, allowing for the irradiation of experiments. In order to identify causes of TR A's more frequent failures (rod breaks), a better understanding of the whole TR system and its components is needed. This study aims to provide a foundational understanding of how the TR pneumatic system affects the motion of the TRs and the resulting effects that the TR motion has on the neutronics of the ACRR. Transient rod motion profiles were generated both from experimentally obtained pressure data and from thermodynamic theory, and input into Razorback, an SNL-developed point kinetics and thermal hydraulics code, to determine the effects that TR timing and pneumatic pressure have on reactivity addition and reactivity feedback. From this study, accurate and precise TR motion profiles were developed, along with an increased understanding of the pulse timing sequence. With this information, a safety limit within the ACRR was verified for different TR travel lengths and pneumatic system pressures. In addition, longer reactivity addition times were found to correlate with larger amounts of reactivity feedback. The added clarity on TR motion and timing from this study will pave the way for further work to determine the cause of TR A's increased failure rate.
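Point kinetics, the core approach of codes like Razorback, can be illustrated with a minimal one-delayed-group model driven by a constant reactivity step. This generic sketch is not Razorback itself, and every parameter value is illustrative.

```python
def point_kinetics(rho, beta=0.0073, Lambda=1e-4, lam=0.08,
                   t_end=1.0, dt=1e-5):
    """One-delayed-group point kinetics for a constant reactivity step,
    integrated with forward Euler from equilibrium at n = 1.
    Generic textbook illustration; all parameters are assumed values."""
    n = 1.0
    C = beta / (Lambda * lam)        # equilibrium precursor concentration
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / Lambda) * n + lam * C
        dC = (beta / Lambda) * n - lam * C
        n += dn * dt
        C += dC * dt
    return n

# A small sub-prompt-critical step (rho = 0.1 beta) produces the prompt
# jump followed by a slow rise on the stable period:
n_final = point_kinetics(rho=0.1 * 0.0073)
```

Longer or larger reactivity insertions change both the prompt jump and the asymptotic period, which is the kind of sensitivity the TR timing study examines.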
Advisors/Committee Members: Rahnema, Farzad (advisor), Petrovic, Bojan (committee member), Utschig, Tristan (committee member), Black, Michael (committee member).

▼ The Integral Inherently Safe Light Water Reactor (I2S-LWR) is a novel reactor concept which aims to apply safety-promoting features typical of small modular reactors (SMRs) to a large pressurized water reactor (PWR) of 3000 MWt, thus providing an option for a passively safe reactor to markets which would find greater economic benefit in a large reactor. Pushing the compact core of an integral reactor to 3000 MWt necessitates several design innovations to remain within safety margins while meeting the goal of increased power density. The I2S-LWR fuel assembly takes on a 19x19 lattice with reduced fuel rod dimensions relative to traditional Westinghouse-type 17x17 PWR fuel assemblies. It is anticipated that the I2S-LWR will eventually employ uranium silicide (U3Si2) fuel instead of uranium oxide (UO2) to improve thermal performance. These unique design features are closely tied to the I2S-LWR core neutronics, thereby necessitating a thorough investigation of reactivity control options.
This thesis considers the design of both control rods and burnable absorbers on the basis of the I2S-LWR uranium silicide fuel assembly. Fuel assembly designs are considered with various control rod arrangements and burnable absorber layouts with several candidate absorber materials and concentrations. Viable fuel assembly designs must meet targets for reactivity and power peaking while satisfying constraints on core safety and cycle length. Designs are developed in a heuristic manner, and key performance metrics are processed at each iteration. Characteristics of common optimization algorithms are mimicked at a high level so as to guide the progression of design iterations. The optimized fuel assembly designs produced in this way are recommended for use in core loading pattern design.
Advisors/Committee Members: Petrovic, Bojan (advisor), Ferroni, Paolo (committee member), Stacey, Weston M. (committee member).

▼ Because of its accuracy and ease of implementation, Monte Carlo methodology is widely used in the analysis of nuclear systems. The resulting estimate of the multiplication factor (keff) or flux distribution is statistical in nature. In a criticality simulation, which is based on the power iteration method, the initial guessed source distribution is generally far from the converged fundamental one. It is therefore necessary to ensure that convergence has been achieved before data are accumulated. Discarding a larger number of initial histories reduces the risk of contaminating the results with non-converged data but increases the computational expense. This issue is amplified for large, loosely coupled nuclear systems with low convergence rates. Since keff is a generation-based global value, often no explicit criterion is applied to diagnose keff directly. As an alternative, the flux-based entropy check available in MCNP5 works well in many cases. However, when applied to a difficult storage fuel pool benchmark problem, it could not always detect the non-convergence of the flux distribution. Preliminary evaluation indicates that this is due to collapsing local information into a single number. This thesis addresses the problem with two new developments. First, it aims to find a more reliable way to assess convergence by analyzing the local flux change. Second, it introduces an approach to simultaneously compute both the first and second eigenmodes; by computing these eigenmodes, this approach can also increase the convergence rate. Improvement in these two areas could have a significant impact on the practicality of Monte Carlo criticality simulations.
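The flux-based entropy check mentioned above reduces a binned fission source distribution to a single number, the Shannon entropy, which is precisely why it can miss local non-convergence. A minimal sketch of the diagnostic:

```python
import numpy as np

def shannon_entropy(source_counts):
    """Shannon entropy (in bits) of a binned fission source distribution,
    the quantity tracked generation-by-generation by MCNP5's entropy
    convergence diagnostic."""
    p = np.asarray(source_counts, dtype=float)
    p = p[p > 0]           # empty bins contribute nothing
    p = p / p.sum()
    return float(-(p * np.log2(p)).sum())

# A source spread uniformly over 8 mesh cells has entropy log2(8) = 3;
# a source collapsed into a single cell has entropy 0.
```

Convergence is inferred when the entropy settles onto a stationary value; two different local flux shapes can share the same entropy, which is the weakness the thesis targets.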
Advisors/Committee Members: Petrovic, Bojan (Committee Chair), Rahnema, Farzad (Committee Member), Zhang, Dingkang (Committee Member).

▼ The main objectives of this research are: (1) to develop a model and perform numerical simulations to evaluate the radiation field and the resulting dose to personnel and activation of materials and structures throughout the IRIS nuclear power plant, and (2) to confirm that the doses are below the regulatory limit and to assess the possibility of reducing the activation of the concrete walls around the reactor vessel to below the free-release limit.
IRIS is a new integral pressurized water reactor (PWR) developed by an international team led by Westinghouse with an electrical generation capacity of 335 MWe and passive safety systems. Its design differs from larger loop PWRs in that a single building houses the containment as well as all the associated equipment including the control room that must be staffed continuously. The resulting small footprint has positive safety and economic implications, and the integral layout provides additional shielding and thus the opportunity to significantly reduce the activation, but it also leads to significantly more challenging simulations.
The difficulty in modeling the entire building lies in the fact that the source is attenuated by over 10 orders of magnitude before reaching the accessible areas. For an analog Monte Carlo simulation with no acceleration (variance reduction), it would take many processor-years of computation to generate statistically meaningful results. Instead, the results in this thesis are generated with the Standardized Computer Analyses for Licensing Evaluation (SCALE) package Monaco with Automated Variance Reduction using Importance Calculations (MAVRIC). This is a hybrid-methodology code in which forward and adjoint deterministic calculations provide variance reduction parameters for the Monte Carlo portion, significantly reducing the computational time.
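The hybrid idea behind MAVRIC, using an adjoint (importance) solution to set Monte Carlo weight-window targets, can be sketched as follows. This is a simplified illustration of the CADIS concept with made-up two-cell data, not MAVRIC's implementation.

```python
def cadis_weight_targets(source, adjoint_flux):
    """CADIS-style weight-window targets: the target particle weight in
    cell i is R / adjoint_flux[i], where R (the source-weighted adjoint)
    estimates the detector response. Low-importance cells get high target
    weights (roulette); high-importance cells get low targets (split)."""
    R = sum(q * phi for q, phi in zip(source, adjoint_flux))
    return [R / phi for phi in adjoint_flux]

# Two-cell illustration: all source in cell 0, detector nearer cell 0,
# so cell 1 (low adjoint flux) gets the higher target weight:
targets = cadis_weight_targets(source=[1.0, 0.0], adjoint_flux=[2.0, 0.5])
```

Biasing particle weights this way keeps the population statistically useful even after ten decades of attenuation, which is what makes the whole-building dose map tractable.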
Thus, the first task will be to develop an efficient SCALE/MAVRIC model of the IRIS building. The second task will be to evaluate the dose rate and activation of materials, specifically focusing on activation of concrete walls around the reactor vessel. Finally, results and recommendations will be presented.
Advisors/Committee Members: Petrovic, Bojan (Committee Chair), Hertel, Nolan (Committee Member), Wang, C.-K. (Committee Member).

▼ Proton therapy is a relatively new treatment modality for cancer, having been incorporated into hospitals only in the last two decades. Although proton therapy has much higher start-up and treatment costs than traditional methods of radiotherapy, its use continues to expand today. One reason for this is that proton therapy offers more precise localization of dose compared to traditional radiotherapy. Other proposed advantages of proton therapy in the treatment of cancer may lead to faster expansion of its use if it is proven more effective than traditional radiotherapy. Therefore, much research must be done to investigate the possible negative and positive effects of using proton therapy as a treatment modality.
In proton therapy, protons account for the vast majority of the dose. However, as protons travel through matter, secondary particles are created by proton interactions en route to and within the patient. It is believed that secondary dose can lead to secondary cancer, especially in pediatric cases. Therefore, the focus of this work is determining both primary and secondary dose.
In order to develop relevant simulations, the specifications of the treatment room and beam were based as closely as possible on real-world facilities. Using available data from proton accelerators and clinical facilities, an accurate proton therapy nozzle was designed. Dose calculations were performed with MCNPX using a simple water phantom, and beam characteristics were then investigated to ensure the accuracy of the model. After validation of the beam nozzle, primary and secondary dose values were tabulated and discussed. By demonstrating the method of these calculations, this work is intended to serve as a guide to the relatively recent field of Monte Carlo methods in proton therapy.
Advisors/Committee Members: Petrovic, Bojan (Committee Chair), Elder, Eric (Committee Member), Wang, Chris (Committee Member).

▼ In deep burn research on the Very High Temperature Reactor (VHTR), it is desired to make accurate estimates of the absorption cross sections and absorption rates in burnable poison (BP) pins. However, in traditional methods, multi-group cross sections are generated from single-bundle calculations with specular reflection boundary conditions, in which the energy spectral effect of the core environment is not taken into account. This approximation introduces errors into the absorption cross sections, especially for BPs neighboring reflectors and control rods.
In order to correct the BP absorption cross sections in whole-core diffusion calculations, energy spectrum reconstruction (ESR) methods have been developed to reconstruct the fine-group spectrum (and the in-core continuous energy spectrum). Then, using the reconstructed spectrum as the boundary condition, a BP pin cell local transport calculation serves as an embedded module within the whole-core diffusion code to iteratively correct the BP absorption cross sections for improved results.
The ESR methods were tested on a 2D prismatic High Temperature Reactor (HTR) problem. The reconstructed fine-group spectra show good agreement with the reference spectra. Compared with the cross sections calculated by single-block calculations with specular reflection boundary conditions, the BP absorption cross sections are effectively improved by the ESR methods. A preliminary study was also performed to extend the ESR methods to a 2D Pebble Bed Reactor (PBR) problem. The results demonstrate that ESR can accurately reproduce the energy spectra on the fuel-outer reflector interface.
Advisors/Committee Members: Rahnema, Farzad (Committee Chair), Petrovic, Bojan (Committee Member), Zhang, Dingkang (Committee Member).

▼ The Coarse Mesh Radiation Transport (COMET) method is a reactor physics method and code that has been used to solve whole-core reactor eigenvalue and flux distribution problems. A strength of the method is its formidable accuracy and computational efficiency: COMET solutions are computed to Monte Carlo accuracy on a single processor in a runtime several orders of magnitude shorter than that of stochastic calculations. However, with the growing ubiquity of both shared- and distributed-memory parallel machines and the desire to extend the method to allow for coupling to multiphysics and on-the-fly response generation, serial implementations of COMET calculations will become less desirable. It is under this motivation that a parallel implementation of the deterministic COMET calculation has been developed. COMET involves inner and outer iterations; the inner iterations consist of local calculations that can be carried out independently, making the algorithm amenable to parallelization. However, consideration must be given to decomposing a problem and distributing its data. To allow for an efficient parallel implementation of the distributed algorithm, changes to response data access and sweep order are made, along with provisions for communication between processors. The parallel code is applied to several variants of the C5G7 benchmark problem to assess the scalability of the algorithm, and it is found that problems with larger numbers of coarse meshes increase the scalability of the code, an encouraging result. The code is further tested on full-core reactor problems, where extremely efficient wall-clock times (on the order of minutes) are achieved. Finally, application of the parallel code to novel uses of COMET (e.g., problems with high flux expansions) is discussed.
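Because the inner iterations are independent local calculations, they map naturally onto a pool of workers. The sketch below is a hypothetical illustration of that decomposition, with made-up response data, not the code described in the thesis.

```python
from concurrent.futures import ThreadPoolExecutor

def local_mesh_sweep(mesh):
    """Stand-in for one independent local (inner-iteration) calculation:
    fold a mesh's incoming partial currents through its response
    coefficients. Illustrative placeholder, not COMET's actual kernel."""
    responses, incoming = mesh
    return sum(r * j for r, j in zip(responses, incoming))

def parallel_inner_iteration(meshes, workers=4):
    """Dispatch every mesh's local calculation to a worker; no mesh
    depends on another within the sweep, so order is irrelevant."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(local_mesh_sweep, meshes))

# Two hypothetical meshes: (response coefficients, incoming currents)
results = parallel_inner_iteration([([1.0, 2.0], [3.0, 4.0]),
                                    ([0.5, 0.5], [2.0, 2.0])])
```

In a distributed-memory setting the same pattern holds, with the outer iteration exchanging interface currents between processors instead of through shared memory.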
Advisors/Committee Members: Rahnema, Farzad (advisor), Petrovic, Bojan (committee member), Zhang, Dingkang (committee member), Morley, Tom (committee member), Haghighat, Alireza (committee member).

Remley KE. Development of methods for high performance computing applications of the deterministic stage of COMET calculations. [Doctoral Dissertation]. Georgia Tech; 2016. Available from: http://hdl.handle.net/1853/58610

▼ The Stochastic Particle Response Calculator, SPaRC, is a new stochastic neutron transport code that has been developed and optimized for the computation of response functions for use in response-matrix-based whole-core transport solvers. SPaRC transports neutrons from a specified fixed source distribution and computes responses as neutrons stream through and then exit regions of interest. The code makes use of both multi-group and continuous-energy nuclear data and takes advantage of parallel computing through the message passing interface (MPI). In order to test the neutron transport routine, various small benchmark problems were solved with SPaRC and compared to results generated with MCNP. Results show excellent agreement between the solutions generated by these codes for both multi-group and continuous-energy calculations. The responses generated by SPaRC have been tailored for use in the coarse mesh transport (COMET) method. COMET is a hybrid stochastic/deterministic method shown to compute fast and accurate solutions for a variety of nuclear systems. To obtain these solutions, COMET makes use of pre-computed response functions aggregated into a library for use in a deterministic iteration scheme. Previously, these response functions were calculated with MCNP before the transport calculation. SPaRC also generates these response functions for use with the COMET method, with the added capability of performing these calculations during the transport routine as needed. This on-the-fly capability for response generation enables the use of the COMET method for calculations where the state of a problem changes with time. SPaRC's ability to generate responses during a calculation eliminates the need for a fully pre-computed response library covering the entire possible solution space, extending the capability of COMET to neutronics problems involving multiphysics feedback, such as thermal-hydraulic and depletion calculations.
Sample calculations at the reactor assembly level were performed in order to test the accuracy of the SPaRC-generated response functions. First, responses were generated for uncontrolled, controlled, and gadded assemblies with both MCNP and SPaRC. Next, COMET calculations were performed using these two sets of responses for the different assembly types in order to generate eigenvalues and pin fission density distributions. The results generated from the MCNP and SPaRC responses agreed within 0.05% for the core eigenvalue and within 0.002% for pin powers. In summary, SPaRC is a newly developed fixed-source radiation transport code whose neutron transport method has been benchmarked against the stochastic transport code MCNP with good agreement, and new database management and creation routines have been developed to aid response generation. SPaRC introduces a response function flexibility to the COMET method that facilitates thermal-hydraulic and depletion calculations.
Advisors/Committee Members: Rahnema, Farzad (advisor), Petrovic, Bojan (committee member), Zhang, Dingkang (committee member), Chow, Edmond (committee member), Haghighat, Alireza (committee member), Leal, Luiz (committee member).

▼ A benchmark experiment was designed and conducted in which two small-volume mercury targets were irradiated with 800 MeV protons at the Weapons Neutron Research facility at the Los Alamos Neutron Science Center. Following irradiation, the production cross sections of 53 medium- and long-lived spallation residuals were determined using gamma spectroscopy at various decay times up to a year. The measured cross sections were then compared with predicted cross sections from the MCNPX code. After acquisition of the gamma spectroscopy data, the targets were drained and disassembled to study the distribution and deposition of the spallation residuals.
Advisors/Committee Members: Hertel, Nolan E. (advisor), Petrovic, Bojan (committee member), Wang, Chris K. (committee member), Janata, Jiri (committee member), Ferguson, Phillip (committee member).

▼ The application of a theoretical framework for calculating the radial electric field in the DIII-D tokamak edge plasma is discussed. Changes in the radial electric field are correlated with changes in many important edge plasma phenomena, including rotation, the L-H transition, and ELM suppression. A self-consistent model for the radial electric field may therefore suggest a means of controlling other important parameters in the edge plasma. Implementing a methodology for calculating the radial electric field can be difficult due to its complex interrelationships with ion losses, rotation, radial ion fluxes, and momentum transport. The radial electric field enters the calculations for ion orbit loss. This ion orbit loss, in turn, affects the radial ion flux both directly and indirectly through return currents, which have been shown theoretically to torque the edge plasma causing rotation. The edge rotation generates a motional radial electric field, which can influence both the edge pedestal structure and additional ion orbit losses.
In conjunction with validating the analytical modified Ohm’s Law model for calculating the radial electric field, the modeling efforts presented in this dissertation focus on improving calculations of ion orbit loss and X-loss into the divertor region, as well as on formulating models for fast beam-ion orbit losses and for the fraction of lost particles that return to the confined plasma. After rigorous implementation of the ion orbit loss model and related mechanisms into the fluid equations, effort shifts to calculating the effects of rotation on the radial electric field and comparing the results with DIII-D experimental measurements and computationally simulated plasmas. This calculation of the radial electric field provides a basis for future development of a fast, predictive calculation to characterize future tokamaks such as ITER.
Advisors/Committee Members: Stacey, Weston M. (advisor), Evans, Todd (committee member), Petrovic, Bojan (committee member), Utschig, Tris (committee member), McGrath, Robert (committee member).

► A condensed multigroup formulation is developed which maintains direct consistency with the continuous energy or fine-group structure, exhibiting the accuracy of the detailed energy spectrum…
(more)

▼ A condensed multigroup formulation is developed which maintains direct consistency with the continuous-energy or fine-group structure, exhibiting the accuracy of the detailed energy spectrum within the coarse-group calculation. Two methods are then developed which seek to invert the condensation process, turning the standard one-way condensation (from fine group to coarse group) into the first step of a two-way iterative process. The first method is based on the previously published Generalized Energy Condensation (GEC), which established a framework for obtaining the fine-group flux by preserving the flux energy spectrum in orthogonal energy expansion functions, but did not maintain a consistent coarse-group formulation. It is demonstrated that with a consistent extension of the GEC, a cross-section recondensation scheme can be used to correct for the spectral core environment error. A more practical and efficient new method is also developed, termed the "Subgroup Decomposition (SGD) Method," which eliminates the need for expansion functions altogether and allows the fine-group flux to be decomposed from a consistent coarse-group flux with minimal additional computation or memory requirements. In addition, a new whole-core BWR benchmark problem is generated in 2D and 3D based on operating reactor parameters, and a set of 1D benchmark problems is developed for BWR, PWR, and VHTR cores.
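The one-way condensation step that such methods seek to invert is, in its simplest form, a flux-weighted collapse of fine-group cross sections into coarse groups. The sketch below illustrates only that generic baseline, not the consistent formulation developed in the thesis; the group structure and data are hypothetical:

```python
import numpy as np

def condense(sigma_fine, flux_fine, coarse_edges):
    """Collapse fine-group cross sections to coarse groups by flux weighting:
    Sigma_G = sum_{g in G} phi_g * Sigma_g / sum_{g in G} phi_g."""
    sigma_coarse = np.empty(len(coarse_edges) - 1)
    for G in range(len(coarse_edges) - 1):
        lo, hi = coarse_edges[G], coarse_edges[G + 1]
        phi = flux_fine[lo:hi]
        sigma_coarse[G] = np.dot(sigma_fine[lo:hi], phi) / phi.sum()
    return sigma_coarse

# Hypothetical 6-group data collapsed to 2 coarse groups
sigma = np.array([1.2, 1.0, 0.9, 0.6, 0.5, 0.4])
flux = np.array([0.1, 0.3, 0.6, 1.0, 0.7, 0.3])
print(condense(sigma, flux, [0, 3, 6]))
```

The collapse is one-way because the fine-group flux shape within each coarse group is discarded; recovering it consistently is precisely what the GEC extension and the SGD method provide.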
Advisors/Committee Members: Rahnema, Farzad (Committee Chair), Lubinsky, Doron (Committee Member), Morley, Tom (Committee Member), Petrovic, Bojan (Committee Member), Zhang, Dingkang (Committee Member).

► In this dissertation, three different methods for solving the Boltzmann neutron transport equation (and its low-order approximations) are developed in general geometry and implemented in…
(more)

▼ In this dissertation, three different methods for solving the Boltzmann neutron transport equation (and its low-order approximations) are developed in general geometry and implemented in 1D slab geometry. The first method solves the fine-group diffusion equation by iteratively estimating the in-scattering and fission source terms with consistent coarse-group diffusion solutions. This is achieved by extending the subgroup decomposition method, initially developed in neutron transport theory, to diffusion theory. Additionally, a new stabilizing scheme for on-the-fly cross-section re-condensation based on local fixed-source calculations is developed in the subgroup decomposition framework. The method is derived in general geometry and tested in 1D benchmark problems characteristic of Boiling Water Reactor (BWR) and Gas Cooled Reactor (GCR) cores. It is shown that the method reproduces the standard fine-group results with 3-4 times faster computational speed in the BWR test problem and 1.5 to 6 times faster computational speed in the GCR core. The second method is a hybrid diffusion-transport method for accelerating multi-group eigenvalue transport problems. This method extends the subgroup decomposition method to efficiently couple a coarse-group high-order diffusion method with a set of fixed-source transport decomposition sweeps to obtain the fine-group transport solution. The advantages of this new high-order diffusion theory are its consistent transport closure, straightforward implementation, and numerical stability. The method is analyzed for 1D BWR and High Temperature Test Reactor (HTTR) benchmark problems. It is shown that the method reproduces the fine-group transport solution with high accuracy while increasing computational efficiency up to 16 times in the BWR core and up to 3.3 times in the HTTR core compared to direct fine-group transport calculations.
The third method is a new spatial homogenization method in transport theory that reproduces the heterogeneous solution using conventional flux-weighted homogenized cross sections. By introducing an additional source term via an “auxiliary cross section,” the resulting homogeneous transport equation becomes consistent with the heterogeneous equation, enabling easy implementation into existing solution methods and codes. The new method uses on-the-fly re-homogenization, performed at the assembly level, to correct for core environment effects on the homogenized cross sections. The method is derived in general geometry and continuous energy, and is implemented and tested in fine-group 1D slab geometries typical of BWR and GCR cores. The test problems include two single-assembly and four core configurations. It is believed that the coupling of the two new methods, namely the hybrid method for treating the energy variable and the new spatial homogenization method in transport theory, sets the stage, as future work, for the development of a robust and practical method for highly efficient and accurate whole-core transport calculations.
Advisors/Committee Members: Rahnema, Farzad (advisor), Haghighat, Alireza (committee member), Morley, Tom (committee member), Petrovic, Bojan (committee member), Sjoden, Glenn (committee member), Zhang, Dingkang (committee member).

► In taking a different view of crystallization dynamics, this thesis reveals a new framework for addressing a prevalent process engineering challenge: control over the size…
(more)

▼ In taking a different view of crystallization dynamics, this thesis reveals a new framework for addressing a prevalent process engineering challenge: control over the size of crystals produced by batch cooling crystallization. The thesis divides roughly into halves. In the first half, the crystal size control problem is introduced and the proposed framework for addressing this problem—termed the mass-count (MC) framework—is developed. This new framework is laid out alongside the population balance (PB) framework, which is the prevailing framework for modeling crystallization dynamics and addressing the crystal size control problem. In putting the proposed and established frameworks side by side, the intent is not to say that one or the other is correct. Rather, the point is to show that they are different perspectives that facilitate different control approaches. The PB framework is built up from first principles; it is intellectually stimulating and mathematically complete, but it has a drawback for application: it does not directly enable feedback control. The MC framework, on the other hand, takes a less detailed view of crystallization dynamics and does not connect to crystallization theory as directly, but it is more conducive to application. In the second half of the thesis, the utility of the MC framework is put to the test. The framework is first applied to understand and model the crystallization dynamics for two widely different systems: darapskite salt crystallization from water and paracetamol crystallization from ethanol. Once the dynamics have been modeled, the framework is then used to develop feedback control schemes. These schemes are applied to both experimental systems and, in both cases, crystal size control is demonstrated.
Advisors/Committee Members: Rousseau, Ronald W. (advisor), Grover, Martha A. (advisor), Kawajiri, Yoshiaki (advisor), Realff, Matthew J. (committee member), Petrovic, Bojan (committee member).

► Long-lived fast reactors have been suggested as an effective way of spreading nuclear energy to new countries. These small reactors can be produced at centralized…
(more)

▼ Long-lived fast reactors have been suggested as an effective way of spreading nuclear energy to new countries. These small reactors can be produced at centralized locations, shipped to areas of need, and then returned to the main hub at the end of their lifetime for decommissioning. Such ‘hub-spoke’ arrangements disincentivize states from building sensitive front- and back-end technology; however, critics argue they still pose a proliferation risk because of the large quantity of weapon-grade plutonium they produce during their operating lifetime. This dissertation attempts to address this issue by proposing a mixed-spectrum core configuration. A fast neutron zone can increase fissile material production, while a thermalized zone reduces plutonium quality. Moderating material (ZrH1.6) is inserted within peripheral assemblies, while the center of the core maintains a fast configuration. Assemblies are then shuffled to ensure all are exposed to the thermalized spectrum. This allows the new design to simultaneously improve proliferation resistance and reduce fast fluence damage, a limiting criterion for long-lived core designs. The objectives are achieved with minimal impact on overall performance. Core lifetime can be maintained at 25 years without the need for any additional fuel. Inherent passive safety criteria can be met, and power peaking at the fast/thermal interface was deemed manageable. Different design variants that can alleviate power peaking or leverage thorium-cycle breeding in the epithermal regime were also investigated. Mixed-spectrum cores push the boundaries of what deterministic codes can model accurately. The REBUS suite of codes is modified to provide a more accurate tool for exploring the design space. MCNP6 is then used for detailed analysis and safety evaluation of optimal core configurations.
The thesis demonstrates the viability of using a mixed-spectrum reactor design to improve the proliferation resistance of long-lived cores. The main identified tradeoffs are increased overall resource consumption, a slightly larger core size, and reliance on shuffling midway through the core lifetime.
Advisors/Committee Members: Erickson, Anna (advisor), Petrovic, Bojan (committee member), Hertel, Nolan (committee member), Stulberg, Adam (committee member), Stauff, Nicolas (committee member).

► Five KMC models are created using the SPPARKS code in order to examine the behavior of materials related to nuclear applications on the mesoscale. In…
(more)

▼ Five KMC models are created using the SPPARKS code to examine the mesoscale behavior of materials related to nuclear applications. In addition, work is done to examine the input parameters used in three of these simulations and to determine the sensitivity of the simulation outcomes to these parameters. The first model examines the diffusive behavior of oxygen vacancies introduced into a fluorite lattice system, such as ceria or uranium dioxide, through doping with aliovalent oxides. Inputs are derived using molecular statics simulations of the energy barriers required for diffusive jumps. These inputs are then used by the simulation to determine the diffusivity and ionic conductivity of the materials due to vacancy movement. The second model examines the diffusive behavior of simple and complex defects in BCC iron, in order to examine how KMC and similar mesoscale models can inform continuum-level models of the microstructural behavior of materials commonly used in nuclear support roles. The third model is a Potts-style model designed to examine the grain growth and long-term behavior of uranium fuels found in most commercial nuclear reactors; it uses experimental data to inform the parameters that drive the model. Two additional models are presented that examine the formation behavior of nano-porous foams and defect behavior through stochastic cluster dynamics.
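The event-selection loop at the heart of a rejection-free KMC simulation of this kind can be sketched generically. This is an illustration of the standard algorithm, not SPPARKS's internals; the barrier values and attempt frequency below are hypothetical:

```python
import math
import random

K_B = 8.617333e-5  # Boltzmann constant (eV/K)

def kmc_step(barriers, temperature, nu=1e13, rng=random.random):
    """One rejection-free KMC step: build an Arrhenius rate for each candidate
    event, pick one with probability proportional to its rate, and advance the
    clock by an exponentially distributed residence time."""
    rates = [nu * math.exp(-e / (K_B * temperature)) for e in barriers]
    total = sum(rates)
    # Select event i with probability rates[i] / total
    target = rng() * total
    acc = 0.0
    for i, r in enumerate(rates):
        acc += r
        if acc >= target:
            break
    dt = -math.log(rng()) / total  # time increment for this step
    return i, dt

# Hypothetical vacancy-hop barriers (eV) evaluated at 1200 K
event, dt = kmc_step([0.45, 0.52, 0.60, 0.45], 1200.0)
```

In a full simulation the chosen event updates the lattice state and the candidate event list before the next step; the molecular-statics barriers mentioned in the abstract would supply the `barriers` input.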
Advisors/Committee Members: Deo, Chaitanya (advisor), McDowell, David (committee member), Petrovic, Bojan (committee member), Garmestani, Hamid (committee member), Wang, Yan (committee member).

► The neutron transport equation often is homogenized in order to simplify its solution procedure in some manner or another. There exist many methods for homogenizing…
(more)

▼ The neutron transport equation is often homogenized to simplify its solution procedure, and many homogenization methods exist, each with its own benefits and drawbacks. One promising method is the Consistent Spatial Homogenization (CSH) method developed and implemented in 1-D by Yasseri and Rahnema. The method, along with its successor, the Diffusion-Transport Homogenization (DTH) method, is promising for its ability to reconstruct accurate fine-mesh angular flux profiles, as well as the reactor eigenvalue, after a re-homogenization procedure. This work explores the extension of both the CSH and DTH methods to higher spatial dimensionality in order to solve large-scale reactor eigenvalue problems. The CSH and DTH methods are based on iterated re-homogenization of the neutron transport equation with an auxiliary source term that corrects for the heterogeneity effects of a given problem. The net effect is that heterogeneity effects are relegated to a source term, and the homogenized neutron transport equation is solved instead of the heterogeneous equation. This allows simpler acceleration techniques to improve the speed and accuracy of the homogenized problem and, in multiple dimensions, helps avoid the complications of complex reactor geometries. The re-homogenization procedure brings the flux solution back to the heterogeneous discretization in order to generate better approximations of the homogenized cross sections and the auxiliary source term, and, most importantly, to reconstruct the full heterogeneous angular flux profile. In this work, the CSH and DTH methods are modified for increased spatial dimensionality and implemented using a 2-D SN discrete ordinates transport solver.
This implementation is tested using Cartesian-mesh variants of the 2D-C5G7 benchmark problem and a 2-D full-scale boiling water reactor (BWR) benchmark problem.
Advisors/Committee Members: Rahnema, Farzad (advisor), Petrovic, Bojan (committee member), Zhang, Dingkang (committee member), Morley, Tom (committee member), Densmore, Jeffery (committee member).

► Theoretical models are used in support of the I2S-LWR (Integral Inherently Safe LWR) project for a direct comparison of fuel swelling and fission gas bubble…
(more)

▼ Theoretical models are used in support of the I2S-LWR (Integral Inherently Safe LWR) project for a direct comparison of fuel swelling and fission gas bubble formation between U₃Si₂ and UO₂ fuels. Uranium silicide, which rapidly becomes amorphous under irradiation, is evaluated using a model developed by Dr. J. Rest that treats the fuel in an amorphous state. Uranium dioxide is examined with two separate models drawn from the literature: one calculates the swelling behavior with a fixed grain radius, while the second incorporates grain growth. The different mechanisms controlling the swelling of the fuels are introduced, including the knee point caused by the amorphous state of the U₃Si₂. The outputs of each model are used to compare the fuels.
Advisors/Committee Members: Deo, Chaitanya (advisor), Petrovic, Bojan (committee member), Singh, Preet (committee member).

► Several methods are presented for improving upon the traditional analytic “circular” method for constructing a flux-surface aligned curvilinear coordinate system representation of equilibrium plasma geometry…
(more)

▼ Several methods are presented for improving upon the traditional analytic “circular” method for constructing a flux-surface aligned curvilinear coordinate system representation of equilibrium plasma geometry and magnetic fields, and the most accurate asymmetric Miller method is applied to calculations of poloidal asymmetries in plasma density, velocity, and electric potential. Techniques for developing an orthogonalized coordinate system from a general curvilinear representation of plasma flux surfaces and for representing the poloidal component of the magnetic field in the orthogonalized curvilinear system are developed generally, in order to be applied to four plasma flux-surface models. The formalism for approximating flux surfaces originally presented by Miller is extended to include poloidal asymmetries between the upper and lower plasma hemispheres, and is subsequently shown to be more accurate at fitting the shapes of flux surfaces calculated using EFIT than both the traditional “circular” model and two alternative curvilinear models of comparable complexity based on Fourier expansions of major radius, vertical position, and minor radius. Applying the coordinate system orthogonalization technique to these four models allows for calculations of the poloidal magnetic field which, upon comparison to a calculation of the poloidal field performed in a Cartesian system using the experimentally based EFIT prediction for the Grad-Shafranov equilibrium, demonstrate that the asymmetric “Miller” model is also superior to other methods at representing the poloidal magnetic field.
A system of equations developed by representing the poloidal variations of velocity, density, and electric potential using O(1) Fourier expansions in the flux-surface averaged neoclassical plasma continuity and momentum balances is solved using several variations of both the “Miller” and “circular” curvilinear models to set geometric scale factors, illustrating the effects that these improvements in geometric modeling have on tokamak fluid theory calculations.
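The standard up-down-symmetric Miller parameterization that this work extends describes a flux surface through an elongation κ and triangularity δ. A minimal sketch follows; the shaping parameters are hypothetical illustrative values, not fitted DIII-D equilibria:

```python
import math

def miller_surface(r, R0, kappa, delta, n=360):
    """Flux-surface contour in the standard Miller model:
    R(theta) = R0 + r * cos(theta + arcsin(delta) * sin(theta)),
    Z(theta) = kappa * r * sin(theta)."""
    x = math.asin(delta)
    pts = []
    for k in range(n):
        theta = 2.0 * math.pi * k / n
        R = R0 + r * math.cos(theta + x * math.sin(theta))
        Z = kappa * r * math.sin(theta)
        pts.append((R, Z))
    return pts

# Hypothetical DIII-D-like shaping: R0 = 1.7 m, r = 0.6 m, kappa = 1.8, delta = 0.4
contour = miller_surface(0.6, 1.7, 1.8, 0.4)
```

The asymmetric extension described above would, under the same idea, allow κ and δ to differ between the upper and lower hemispheres so the contour can match up-down asymmetric EFIT surfaces.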
Advisors/Committee Members: Stacey, Weston M. (advisor), Petrovic, Bojan (committee member), Zhang, Dingkang (committee member).

► Criticality calculations are often performed in MCNP5 using the Shannon entropy as an indicator of source convergence for the given neutron transport problem. The Shannon…
(more)

▼ Criticality calculations in MCNP5 often use the Shannon entropy as an indicator of source convergence for a given neutron transport problem. The Shannon entropy, a concept from information theory, is calculated for each batch in MCNP5, and it has been shown to converge to a single value as the source distribution converges. MCNP5 applies its own criteria for when the Shannon entropy has converged and recommends how many batches should be skipped; however, this recommendation is often inaccurate and leaves room for improvement.
This work investigates an approach that uses the Shannon entropy convergence information obtained from a shorter simulation to predict the required number of skipped generations in the reference case with the desired statistical precision. In several test cases, running fewer particles per batch was found to produce a Shannon entropy graph similar to that obtained with more particles per batch. By appropriate adjustment through a synthetic model, one can determine when the Shannon entropy will converge by running fewer particles, find the convergence point, and use this value to decide how many batches to skip for a given problem. This reduces computational time and removes the guesswork involved in deciding how many batches to skip. The purpose of this research is therefore to develop a model showing how this concept can be applied in a streamlined way to criticality problems.
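As a minimal sketch of the underlying quantity, the Shannon entropy of a binned source distribution can be computed as follows. The binning here is illustrative; MCNP5 evaluates the entropy over a spatial mesh superimposed on the fissile regions:

```python
import math

def shannon_entropy(counts):
    """Shannon entropy H = -sum_i p_i * log2(p_i) of a binned source
    distribution, in bits; empty bins contribute zero."""
    total = sum(counts)
    h = 0.0
    for c in counts:
        if c > 0:
            p = c / total
            h -= p * math.log2(p)
    return h

# A uniform source over 8 bins gives the maximum entropy log2(8) = 3 bits;
# a source concentrated in one bin (unconverged) gives a lower value.
print(shannon_entropy([100] * 8))
print(shannon_entropy([730, 10, 10, 10, 10, 10, 10, 10]))
```

Plotting this quantity batch by batch and watching for a plateau is the convergence diagnostic the abstract describes.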
Advisors/Committee Members: Petrovic, Bojan (advisor), Hertel, Nolan E. (committee member), Zhang, Dingkang (committee member).

► Recent work by Yasseri and Rahnema has introduced a consistent spatial homogenization (CSH) method completely in transport theory. The CSH method can very accurately reproduce…
(more)

▼ Recent work by Yasseri and Rahnema has introduced a consistent spatial homogenization (CSH) method completely in transport theory. The CSH method can very accurately reproduce the heterogeneous flux shape and eigenvalue of a reactor, but at high computational cost. Other recent works for homogenization in diffusion or quasi-diffusion theory are accurate for problems with low heterogeneity, such as PWRs, but are not proven for more heterogeneous reactors such as BWRs or GCRs.
To address these issues, a consistent hybrid diffusion-transport spatial homogenization (CHSH) method is developed as an extension of the CSH method that uses conventional flux-weighted homogenized cross sections to calculate the heterogeneous solution. The whole-core homogenized transport calculation step of the CSH method has been replaced with a whole-core homogenized diffusion calculation, a reasonable replacement because the homogenization procedure tends to smear out transport effects at the core level. The CHSH solution procedure is to solve a core-level homogenized diffusion equation with the auxiliary source term and then to apply an on-the-fly transport-based re-homogenization at the assembly level to correct the homogenized and auxiliary cross sections. The method has been derived in general geometry with continuous energy, and it is implemented and tested in fine-group, 1-D slab geometry on controlled and uncontrolled BWR and HTTR benchmark problems. The method converges to within 2% mean relative error for all four configurations tested and is 2 to 4 times more computationally efficient than the reference calculation.
Advisors/Committee Members: Rahnema, Farzad (advisor), Petrovic, Bojan (committee member), Zhang, Dingkang (committee member).

▼ The coarse-mesh radiation transport (COMET) code uses response functions to solve the neutron transport equation. Most nuclear codes in use today have a very steep learning curve, and COMET is no exception. To ease the burden of learning how to create correctly formatted COMET input files, a graphical user interface (GUI) was created. The GUI allows the user to select values for all the relevant variables while minimizing the errors a typical new user would make. To this end, the GUI creates all of the input files required to run COMET. It also provides a visualization tool the user may use to check the problem geometry before running COMET, and it post-processes the COMET output for visualization with Tecplot.
In addition to the GUI, multi-group cross-section libraries were generated as part of the MHTGR-350 (Modular High Temperature Gas Reactor) benchmark problem under development at Georgia Tech. This project aims to couple COMET with a thermal-hydraulics code to better model the true physics of the reactor design. To this end, six-group cross sections were generated over the operational temperature range of the MHTGR using the current-coupling collision probability code HELIOS.
Advisors/Committee Members: Rahnema, Farzad (advisor), Petrovic, Bojan (committee member), Zhang, Dingkang (committee member).

▼ MANE (MCNP ACE from NJOY & ENDF), a code for generating continuous-energy cross sections at arbitrary temperatures, was created. Cross sections were evaluated using NJOY99 such that they would agree with the cross sections provided by MCNP5. The MANE cross sections were found to be in very good agreement with those provided by MCNP5, with some minor exceptions caused by round-off errors and some differences in the unresolved resonance region; the latter are caused by differences in the random number generator used to start the cross-section calculations. The MANE cross sections were verified against the MCNP5 cross sections in five unique MCNP configurations: an 8.7% enriched MOX fuel pin cell, a UO₂ assembly (controlled and uncontrolled), a MOX assembly, and a whole-core configuration containing the three assemblies. In each case, the eigenvalue and tally density results were found to be in very good agreement.
Advisors/Committee Members: Rahnema, Farzad (advisor), Petrovic, Bojan (committee member), Zhang, Dingkang (committee member).

▼ The SABR fusion-fission hybrid concept for a fast burner reactor, which combines IFR-PRISM fast reactor technology with ITER tokamak physics and fusion technology, is adapted into a fissile-fuel-producing variant designated SABrR. SABrR is a sodium-cooled 3000 MWth reactor fueled with U-Pu-10Zr. For the chosen fuel and core geometry, two configurations of the neutron reflector and tritium breeding structures are investigated: one emphasizing a high tritium production rate and the other a high fissile production rate. Neutronics calculations are performed using the ERANOS 2.0 code package, which was developed to model the Phenix and SuperPhenix reactors. Both configurations are capable of producing fissile breeding ratios of about 1.3 while producing enough tritium to remain tritium self-sufficient throughout the burnup cycle; in addition, the major factors that limit metal fuel residence time (fuel burnup and radiation damage to the cladding material) are modest.
Advisors/Committee Members: Stacey, Weston M. (advisor), Petrovic, Bojan (committee member), Erickson, Anna (committee member).