Sensitivity and uncertainty analysis in Monte Carlo transport and burnup calculations

Yishu Qiu, Manuele Aufiero, Kan Wang (Tsinghua University), Massimiliano Fratoni

There is increasing interest in coupling Monte Carlo (MC) transport calculations to depletion/burnup codes, since Monte Carlo codes can provide exact flux distributions and cross sections. One of the main concerns about using a MC transport-depletion method is how uncertainties, both from Monte Carlo statistics and from nuclear data, are propagated between the Monte Carlo code and the burnup code. This project is developing sensitivity and uncertainty analysis capabilities in RMC-Depth, a coupled Monte Carlo transport-depletion code developed by Tsinghua University, China. More specifically, the goals of this project are:
1. Study methods suitable for computing k-eigenvalue sensitivity coefficients with respect to continuous-energy cross sections and implement them in RMC; conduct sensitivity and uncertainty analysis of the effective multiplication factor with respect to nuclear data uncertainties in the transport calculations.
2. Study methods appropriate for computing general response sensitivity coefficients with respect to continuous-energy cross sections and implement them in RMC; conduct sensitivity and uncertainty analysis, with respect to nuclear data, of general responses in the transport calculations, both linear response functions (such as relative powers, isotope conversion ratios, and multi-group cross sections) and bilinear response functions (such as adjoint-weighted kinetic parameters).
3. Study methods suitable for uncertainty analysis and propagation in Monte Carlo transport-burnup calculations. With the proposed methods, propagate the uncertainties in Monte Carlo transport-burnup calculations that arise from nuclear data, Monte Carlo statistics, isotope number densities, and the cross-correlations between the nuclear data and the number densities. These effects should be analyzed separately at each step of the burnup calculations.
4. Study methods suitable for uncertainty quantification with respect to other parameters, such as temperature and system dimensions.
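Once sensitivity coefficients are available, nuclear-data uncertainties are typically propagated to a response with the first-order "sandwich rule". The sketch below illustrates that arithmetic with made-up numbers (the sensitivities and covariance values are purely illustrative, not evaluated data, and this is not RMC's implementation):

```python
import numpy as np

# First-order ("sandwich rule") uncertainty propagation: once relative
# sensitivity coefficients S_i = (dR/R)/(dx_i/x_i) have been tallied,
# the relative variance of the response R due to the nuclear-data
# relative covariance matrix C is  var(R)/R^2 = S C S^T.
S = np.array([0.45, -0.12, 0.08])         # illustrative sensitivities
C = np.array([[4.0e-4, 1.0e-5, 0.0   ],
              [1.0e-5, 9.0e-4, 2.0e-5],
              [0.0,    2.0e-5, 1.6e-3]])  # illustrative covariances

rel_var = S @ C @ S
rel_std_pct = 100.0 * np.sqrt(rel_var)
print(f"relative uncertainty in R: {rel_std_pct:.2f}%")
```

In practice the sensitivities are tallied by the Monte Carlo code and the covariance matrix comes from evaluated nuclear data libraries.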

PyNE

Josh Howland, Marissa Ramirez-Zweiger (alumna), Katy Huff, Rachel Slaybaugh

Tools used in the fields of data and computational science have undergone rapid modernization, including a shift toward cleaner, more forgiving programming languages and frameworks. Computational neutronics has begun to follow this trend, though it is impeded by the number of legacy codes written in older, less accessible languages and under antiquated programming modes. Recent work in PyNE aims to enable the transition to modern languages and programming paradigms by providing modules in Python that interface with legacy Fortran programs.

The National Nuclear Data Center (NNDC) provides a suite of Evaluated Nuclear Structure Data File (ENSDF) Analysis and Utility programs written in Fortran. Managing and calling more than 20 different executables is cumbersome, and workflows built around them can rapidly become fragmented. Work is in progress to create a Python interface for the majority of these programs in PyNE. Python allows for significantly easier access and supports modern programming paradigms, including testing and modularity.
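A typical wrapping pattern is to hide the Fortran executables behind thin Python functions that manage input and output files. The sketch below shows one such pattern using only the standard library; the executable name is a placeholder, and this is not PyNE's actual interface:

```python
import os
import subprocess
import tempfile

def run_ensdf_tool(executable, input_text):
    """Run a legacy Fortran utility ('executable' is a placeholder
    path) on 'input_text' and return its stdout as a string."""
    with tempfile.NamedTemporaryFile("w", suffix=".inp",
                                     delete=False) as f:
        f.write(input_text)
        path = f.name
    try:
        # The Python layer owns all file plumbing, so users never
        # interact with the Fortran program's I/O conventions directly.
        result = subprocess.run([executable, path], capture_output=True,
                                text=True, check=True)
        return result.stdout
    finally:
        os.remove(path)
```

A wrapper like this also makes the legacy tools straightforward to unit-test from Python, which is difficult to do against bare executables.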

Non-Classical Transport Methods

Richard Vasques, Rachel Slaybaugh

This work studies mathematical models for more accurately performing neutral particle transport in certain physical regimes. In classical particle transport, the scattering centers in the background material are assumed to be Poisson-distributed; that is, their spatial locations are uncorrelated. When this is true, the probability that a particle interacts with the background medium is proportional to the path length traveled by that particle, with the proportionality constant depending on the density of the medium and on the particle’s energy. This leads to an exponential attenuation law, with the particle flux decreasing as an exponential function of the path length (Beer-Lambert law).
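This attenuation law is straightforward to verify with a small Monte Carlo experiment: sample exponentially distributed free paths and count the particles that cross a slab without interacting (the cross section and slab thickness below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

# Classical (uncorrelated) transport: free paths are exponentially
# distributed, so the fraction of particles crossing a slab of
# thickness L uncollided follows the Beer-Lambert law exp(-sigma_t*L).
sigma_t = 0.5   # macroscopic total cross section, 1/cm (illustrative)
L = 3.0         # slab thickness, cm
n = 200_000

paths = rng.exponential(1.0 / sigma_t, size=n)  # sampled free paths
transmitted = np.mean(paths > L)                # Monte Carlo estimate
analytic = np.exp(-sigma_t * L)
print(transmitted, analytic)
```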

However, in certain inhomogeneous random media in which the locations of the scattering centers are spatially correlated, the particle flux will experience a non-exponential attenuation law that is not captured by classical homogenization techniques. A nonclassical theory for this type of transport problem has been recently introduced, proposing a homogenization that preserves the path length distribution for particles traveling in the inhomogeneous medium. This new approach has sparked a vivid discussion in the recent literature.
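A toy illustration of the effect (not a model of any particular medium): give particles gamma-distributed free paths with the same mean free path as an exponential reference, and slab transmission departs markedly from the Beer-Lambert prediction:

```python
import numpy as np

rng = np.random.default_rng(7)

# In correlated media the free-path distribution p(s) need not be
# exponential.  Compare slab transmission for exponential paths and
# gamma-distributed paths with an identical mean free path.
mean_free_path = 1.0
L = 3.0
n = 200_000

exp_paths = rng.exponential(mean_free_path, size=n)
# shape k=4 gamma with the same mean: narrower path-length spread
gamma_paths = rng.gamma(shape=4.0, scale=mean_free_path / 4.0, size=n)

t_exp = np.mean(exp_paths > L)      # classical result, ~ exp(-3)
t_gamma = np.mean(gamma_paths > L)  # non-exponential attenuation
print(t_exp, t_gamma)
```

Even with the mean free path fixed, the transmitted fractions differ by more than an order of magnitude, which is why a homogenization that preserves the full path-length distribution is needed.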

Important applications for this non-classical theory include neutron transport in Pebble Bed Reactors (in which a non-exponential path-length distribution arises due to the pebble arrangement within the core) and photon transport in atmospheric clouds (in which the locations of the water droplets in the cloud seem to be correlated in ways that measurably affect the radiative transfer within the cloud).

Angle-Informed Hybrid Methods

Madicken Munk, Garrett Baltz, Rachel Slaybaugh, Richard Vasques

Hybrid methods for radiation transport aim to use the speed and uniform uncertainty distribution obtained from deterministic transport to accelerate and improve the performance of Monte Carlo transport. Effective use of this type of hybridization can lead to reduced uncertainty in the solution and/or a faster time to solution. However, not all hybrid methods work for all types of radiation transport problems. Where a method is not well suited to the problem physics, it may perform more poorly than analog Monte Carlo, leading to wasted computer time and energy, or even to no acceptable solution.

This project builds on existing software infrastructure (ORNL’s Denovo and ADVANTG) to generate hybrid methods for deep-penetration radiation transport problems. Specifically, we are developing variance reduction parameters for problems with strong angular anisotropy without explicitly including angular biasing parameters. No existing, highly accessible, automated hybrid method has incorporated the angular dependence of the flux in generating variance reduction parameters, which has made the analysis of highly anisotropic problems difficult. Our method should improve the computational performance of hybrid methods for anisotropic problems while maintaining space and processing metrics similar to those of energy- and space-exclusive hybrid methods.
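For context, the space/energy scheme that tools like ADVANTG automate is CADIS: a deterministic adjoint solution supplies an importance map, from which a biased source and consistent weight-window targets are derived. The one-group, 1-D sketch below shows that arithmetic (the mesh values are illustrative; the angle-informed extension pursued in this project goes beyond this space/energy form):

```python
import numpy as np

# CADIS-style variance-reduction parameters: given a forward source q
# and a deterministic adjoint flux phi_adj on a mesh, the biased
# source and particle weight targets are
#   q_biased = q * phi_adj / R,   w_target = R / phi_adj,
# where R = sum(q * phi_adj) estimates the detector response.
q = np.array([1.0, 0.0, 0.0, 0.0, 0.0])            # source in cell 0
phi_adj = np.array([1e-4, 1e-3, 1e-2, 1e-1, 1.0])  # adjoint "importance"

R = np.sum(q * phi_adj)
q_biased = q * phi_adj / R   # sample more particles where important
w_target = R / phi_adj       # weights keep the game fair: q_biased*w = q
```

The product of the biased source and the weight targets recovers the unbiased source, which is the consistency property that makes CADIS effective.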

Molten salt reactor modeling and analysis

Daniel Wooten, Manuele Aufiero, Francesco Accardi, Massimiliano Fratoni

Molten salt reactors, those reactors whose coolant or fuel is a molten salt (for example, table salt melts around 1440 °F), have gained increased national and international attention in the past decade because of their inherent safety, lack of pressurized systems (reduced cost), and high fuel utilization. A handful of molten salt reactors have been built, most notably the MSRE at Oak Ridge National Laboratory in the 1960s.

Despite the long, uneventful, and largely successful operation of the MSRE, molten salt reactors fell out of political favor, and little development of the technology has occurred since. Currently, this group’s focus is on developing computational methods to bring modern nuclear analysis tools to bear on the investigation of hypothetical molten salt reactors. These tools are used to learn about the likely behavior of such reactors and to enable their neutronic, safety, and economic analysis.

2- and 3-tier advanced fuel cycles

Guanheng Zhang (alumnus), Gang Wang, Lucas David, Sai Vadlamudi, Massimiliano Fratoni, Ehud Greenspan

The Seed and Blanket (S&B) concept is able to support several new 2- and 3-tier advanced fuel cycle systems to improve nuclear fuel cycle sustainability. Instead of using the thorium blanket of the S&B reactor in a once-through mode, accumulating a large inventory of Trans-Th elements, this study assesses the feasibility of using the discharged Trans-Th to feed PWRs and molten salt reactors (MSRs) that operate on a closed fuel cycle (their fuel is recycled). This fuel cycle option provides a possible solution to the large amount of U-233 bred in the S&B core whose decay daughters are the major contributors to long-term radioactivity and radiotoxicity.

The specific fuel cycle considered here is a three-stage energy system, PWR(LEU)-S&B-PWR(Trans-Th), illustrated below. Stage 1 contains once-through low-enriched-uranium-fueled LWRs; Stage 2 uses S&B reactors having TRU-transmuting seeds and thorium blankets; and Stage 3 has LWRs that operate on a closed U-233/Th fuel cycle. The Trans-Th recovered from Stage 2 is mixed with fresh thorium to serve as the makeup fuel of the Stage 3 PWRs. All discharged fuel is reprocessed and recycled except for the uranium recovered from Stage 1 discharged fuel and a fraction of the thorium discharged from Stage 2 blankets. The seed of the Stage 2 S&B reactors is fed with TRU separated from the Stage 1 PWRs. This system may offer the fastest, and possibly most cost-effective, way to dispose of the high-level waste from the nuclear industry. It is found that one S&B core can support 3.3 PWRs in this 3-tier advanced fuel cycle system.

Ternary lithium-alloys for fusion blankets

Alejandra Jolodosky, Alan Bolind, and Massimiliano Fratoni

The goal of this work is improved safety and performance for fusion energy. Lithium is often the preferred choice as breeder and coolant in fusion blankets as it offers excellent heat transfer and corrosion properties and, most importantly, has a very high tritium solubility that results in very low levels of tritium permeation throughout the facility infrastructure. However, lithium metal vigorously reacts with air and water, exacerbating plant safety concerns. Consequently, Lawrence Livermore National Laboratory (LLNL) is attempting to develop a lithium-based alloy—most likely a ternary alloy—that maintains the beneficial properties of lithium (e.g. high tritium breeding and solubility) while reducing overall flammability concerns for use in the blanket of the Inertial Fusion Energy (IFE) power plant.

The IFE power plant being studied employs inertial confinement fusion (ICF) through the use of lasers aimed at an indirect-drive target composed of deuterium-tritium fuel. The fusion driver/target design implements the same physics as the National Ignition Facility (NIF). The IFE power plant uses lithium in both the primary coolant and blanket; therefore, lithium-related hazards are of primary concern. Although reducing chemical reactivity is the primary motivation for the development of new lithium alloys, the successful candidates will have to guarantee acceptable performance in all their functions. Our focus is to evaluate the neutronics performance of a large number of lithium-based alloys in the blanket of the IFE engine. In particular, the parameters determining alloy selection are the tritium breeding ratio (TBR) and energy multiplication factor (EMF). Activation analysis is performed on the selected alloys to assess specific safety and environmental properties, including evaluation of decay heat, contact dose rate, accident dose, and waste disposal rating.

Neutronic analysis found that the best performing alloys (higher TBR and higher EMF) combine elements that exhibit low absorption cross sections and high Q-values, such as tin, barium, strontium, or zinc, with elements with high neutron-multiplying cross sections, like lead or bismuth. A large number of alloys (e.g. LiPbZn and LiSnZn) met a TBR constraint of at least 1.05 and an EMF constraint of at least 1.1 for a wide range of lithium concentrations. When the EMF constraint was increased to 1.2, the additional power demand was too high for alloys not containing tin. Additionally, it was found that when an alloy already contains a high amount of lithium (greater than 50%), doubling the 6Li content from 7.5% to 15% increases the TBR by 13%. Beyond a certain 6Li enrichment, however, the loss of the tritium and extra neutrons otherwise produced by 7Li(n,n’T) reactions reduces the TBR. At lower total lithium concentrations (<50%), the TBR continues to increase at higher 6Li enrichments, since the 7Li(n,n’T) reactions are not as significant.

Activation calculations were performed for a series of elements that exhibited good TBR and EMF properties. This analysis revealed bismuth as a poor choice as it performed worst for all of the criteria evaluated. Alloys containing zinc and tin also showed some of the highest decay heats, contact dose rates, and accident doses. Most of the alloys examined can be stored in dry containers at an estimated one year after shutdown. Additionally, if necessary, the entire volume of the blanket for every alloy except LiPbBa and LiBaBi could be remotely handled. Accident doses were high in alloys containing zinc, copper, or gallium, but were not high enough to pose a major safety concern. With the exception of LiBaBi, activation analysis demonstrated that all the alloys could be used as blankets of the IFE reactor without posing major environmental or safety concerns.

Future neutronics work will focus on the optimization of lithium-ternary alloy concentrations for a given TBR and EMF.

Multi-physics modeling of fluoride-cooled high-temperature reactors (FHRs)

Xin Wang, Dan Shen, Katy Huff, Manuele Aufiero, Massimiliano Fratoni, April Novak

To improve understanding of coupled physics in FHRs, this work involves the development of tools and methods for coupling thermal hydraulics and neutronics within the context of FHRs. Low-dimensional models relying on simplified neutron kinetics and heat transfer have been implemented in a Python package, PyRK. Higher-dimensional models that couple these physics in finite element frameworks (including both MOOSE and COMSOL) are also being developed. Finally, models that couple Monte Carlo simulation with CFD tools are also being iterated upon.
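The simplest of these low-dimensional models is point reactor kinetics. The sketch below integrates the one-delayed-group point kinetics equations with forward Euler; it is a generic illustration with made-up kinetics parameters, not PyRK's actual API:

```python
# Point reactor kinetics with one delayed-neutron group:
#   dn/dt = ((rho - beta)/Lambda) n + lam c
#   dc/dt = (beta/Lambda) n - lam c
# Parameter values below are illustrative only.
beta = 0.0065      # delayed neutron fraction
lam = 0.08         # precursor decay constant, 1/s
Lambda = 1e-5      # neutron generation time, s
rho = 0.001        # step reactivity insertion (< beta: delayed critical)

n, c = 1.0, beta / (Lambda * lam)   # start from equilibrium precursors
dt, steps = 1e-6, 200_000           # integrate 0.2 s of transient
for _ in range(steps):
    dn = ((rho - beta) / Lambda) * n + lam * c
    dc = (beta / Lambda) * n - lam * c
    n += dt * dn
    c += dt * dc
print(f"relative power after {dt * steps:.2f} s: {n:.3f}")
```

For a sub-prompt-critical insertion like this one, the power exhibits the expected prompt jump to roughly beta/(beta - rho) followed by a slow rise on the stable reactor period.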

WARP (“Weaving All the Random Particles”)

Ryan Bergmann (alumnus), Kelly Rowland, Rachel Slaybaugh, Jasmina Vujic

To improve reactor design and operation, fast and accurate neutron transport calculations are needed. Today’s supercomputers are built on heterogeneous architectures designed to reduce power consumption, and new algorithms are required to use this hardware effectively. WARP, which can stand for “Weaving All the Random Particles”, is a three-dimensional (3D), continuous-energy, Monte Carlo neutron transport code developed to execute efficiently on a CPU/GPU platform. WARP is able to calculate multiplication factors, flux tallies, and fission source distributions for time-independent problems and can run in both criticality and fixed-source modes. WARP currently transports neutrons in unrestricted arrangements of spheres, cylinders, parallelepipeds, and hexagonal prisms and supports both vacuum and reflecting (specular) boundary conditions.

What sets WARP apart from previous, somewhat similar endeavors is its breadth of scope and novel adaptation of the event-based Monte Carlo algorithm. Previous codes have been limited to restricted nuclear data or simplified geometry models, where WARP instead loads standard data files and uses a flexible, scalable, optimized geometry representation. WARP uses a suite of highly-parallelized algorithms and employs a modified version of the original event-based algorithm that is better suited to GPU execution.
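The contrast with history-based tracking can be sketched in a few lines: in an event-based code, all particles awaiting the same event type are processed together as data-parallel array operations, the pattern that maps well onto GPUs. The toy 1-D rod below (illustrative cross sections, NumPy arrays standing in for GPU kernels) shows the structure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Event-based Monte Carlo sketch: rather than following one particle's
# full history at a time, every live particle is advanced through the
# same event as a vector operation.  Toy physics: 1-D rod of length 5
# mean free paths, absorption probability 0.3 per collision.
n = 100_000
x = np.zeros(n)                  # particle positions
alive = np.ones(n, dtype=bool)

while alive.any():
    # event 1: transport -- advance all live particles one free path
    x[alive] += rng.exponential(1.0, size=alive.sum())
    escaped = alive & (x > 5.0)
    alive &= ~escaped
    # event 2: collision -- absorb 30% of the remaining live particles
    absorbed = alive & (rng.random(n) < 0.3)
    alive &= ~absorbed

leak = np.mean(x > 5.0)          # fraction that escaped the rod
print(leak)
```

Because absorption thins the collision process, the leakage here should agree with the analytic value exp(-0.3 * 5); the point of the sketch is the control flow, in which each loop iteration is a batch of identical, vectorizable events.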

Targeted Modification of Neutron Energy Spectra

James Bevins, Rachel Slaybaugh

The goal of this project is to modify properties of existing neutron sources to make them more desirable for high-impact applications. Neutron sources are broadly classified by their intensity and energy distribution. For many applications, such as medical treatments, radiation damage studies in epithermal or fast reactors, neutron radiation effects studies on semiconducting devices, or nuclear forensics, no current neutron source has the intensity and energy distribution required to meet many test or operational objectives. Historically, surrogate methods and equivalencies were developed to provide calibration metrics that enabled the use of existing sources even though they did not have the correct intensity and energy characteristics. However, these methods often result in large design margins and experimental uncertainties. Fundamentally, many applications cannot test across the complete range desired, and there is a high-order mismatch between the experiment and the desired physics and conditions.

This research proposes instead to pair existing, high-intensity sources with a custom energy tuning assembly (ETA) that tailors the neutron spectrum, addressing this gap in neutron source capability. Neutron filters, screens, and moderation have been used in the past to alter a neutron source’s spectral characteristics, but these approaches tended to be simple in objective and construction. To tailor a spectrum generically, many different materials and geometric configurations must be explored rapidly. Because of the sheer size of the possible phase space, designing by hand will only explore a small subset of that space and is unlikely to arrive at consistently valid solutions. This research is developing a metaheuristic optimization tool to customize ETAs for any application. The ETA design will be tested using D-T neutron generators and the LBNL 88” Cyclotron facilities. The goal is to demonstrate the ability to model, measure, and experimentally validate the design of an ETA that achieves a desired spectrum.
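The optimization loop can be sketched abstractly: a metaheuristic (here simulated annealing, as a stand-in) proposes ETA material stacks, scores each against a target spectrum, and keeps the best design found. Everything below (the materials, their 3-bin responses, and the target) is a made-up surrogate; the real tool scores candidates with transport calculations:

```python
import math
import random

random.seed(1)

# Toy version of the ETA design loop: choose materials for a stack of
# layers so that a surrogate "spectrum" matches a target.
MATERIALS = {"W": [0.5, 0.3, 0.2],
             "Fe": [0.3, 0.4, 0.3],
             "poly": [0.1, 0.3, 0.6]}   # fake 3-bin responses
TARGET = [0.25, 0.35, 0.40]
N_LAYERS = 4

def spectrum(stack):
    # surrogate model: average the per-material bin responses
    return [sum(MATERIALS[m][b] for m in stack) / len(stack)
            for b in range(3)]

def mismatch(stack):
    return sum((s - t) ** 2 for s, t in zip(spectrum(stack), TARGET))

# simulated annealing over the discrete design space
stack = [random.choice(list(MATERIALS)) for _ in range(N_LAYERS)]
best, best_err = list(stack), mismatch(stack)
temp = 1.0
for _ in range(2000):
    trial = list(stack)
    trial[random.randrange(N_LAYERS)] = random.choice(list(MATERIALS))
    delta = mismatch(trial) - mismatch(stack)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        stack = trial               # accept (sometimes uphill)
    if mismatch(stack) < best_err:
        best, best_err = list(stack), mismatch(stack)
    temp *= 0.997                   # cooling schedule
print(best, best_err)
```

Accepting occasional uphill moves at high temperature lets the search escape local minima in the discrete design space, which is the property that motivates metaheuristics over hand design here.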