CERN Summer Student Report

Bibliographic Details
Main author: Cardot, Charles Andre
Language: eng
Published: 2019
Online access: http://cds.cern.ch/record/2686694
Description
Summary: It had always been a little hard for me to call myself an experimental physicist before I had ever actually helped build an experiment. Before I came to the CERN REU program, I knew that I was interested in high energy particle physics, but given the size and scale of experiments in the field, I was not sure I would ever have the chance to work physically with a project of that magnitude. Since coming to CERN, though, I have been given the opportunity to get my hands dirty, to directly contribute to the upgrading and testing of a massive detector, and to dive headfirst into the complicated world of detector physics.

The ATLAS experiment has been online since the LHC first started running in 2009. It was designed as a general purpose particle detector and has contributed to a number of scientific discoveries, including the Higgs boson. Alongside the upgrades to the LHC taking place during the second long shutdown, the ATLAS experiment is undergoing upgrades designed to take advantage of the improved luminosity and more advanced technologies. This includes designing and building the New Small Wheel detector, which will allow for a higher rate of operation and a lower rate of fake-muon triggers. The New Small Wheel is composed of 16 sectors, each with 4 wedges. Each wedge is either an sTGC or a MicroMegas (MM) detector, and each wedge has four layers, making a total of 8 sTGC layers and 8 MM layers per sector. The sectors fit together to provide both fast triggering and precision tracking. Each wedge has about 5,000 readout channels, connected to multiple front end boards on the edge of the wedge, each of which handles the intake of raw data, amplifying and digitizing the detector signal before sending the data to the trigger and data acquisition system.

In my group, we focused on the sTGC wedges. The wedges are made up of pads, strips, and wires, each of which reads out to a front end board. As mentioned above, there are four layers within an sTGC wedge, allowing the pads in each layer to provide a 3-out-of-4 coincidence to identify muon tracks. These pads are then used to identify which strips to read out for precise measurements of the bending coordinate. Finally, the azimuthal coordinate is obtained by grouping the wires. The sTGC detector will also provide tracking measurements to complement the tracking layout of the MicroMegas.

Given the nature of particle physics, the detector is designed to function on a nanosecond timescale, which means that all of the raw data coming out of the detector needs to be calibrated so that it is extremely precise. My project was to help with this calibration effort by designing a method for testing and documenting the time delay of each detector component. This calibration is needed for the front end boards to be able to align multiple hits coming from a single muon to within a single 25 ns window, creating a trigger.
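To illustrate why the per-channel delay calibration matters, the sketch below shows, in very simplified software form, how known channel delays could be subtracted so that hits from one muon land in a single 25 ns window and a 3-out-of-4 pad coincidence can be formed. This is not the actual ATLAS trigger logic; the structure, names, and numbers (PadHit, channelDelay_ns, the example hits) are hypothetical.

```cpp
// Illustrative sketch only: a simplified software model of using per-channel
// delay corrections to line up pad hits into one 25 ns window and check a
// 3-out-of-4 layer coincidence. Not the actual ATLAS trigger firmware.
#include <array>
#include <iostream>
#include <optional>
#include <vector>

struct PadHit {
    int layer;               // sTGC pad layer index, 0-3
    double rawTime_ns;       // time stamp as read out, before calibration
    double channelDelay_ns;  // known trace delay for this channel (calibration)
};

// Subtract the calibrated channel delay so hits from the same muon line up.
double correctedTime(const PadHit& hit) {
    return hit.rawTime_ns - hit.channelDelay_ns;
}

// True if at least 3 of the 4 layers have a corrected hit inside the same
// 25 ns window, measured from the earliest corrected hit.
bool threeOutOfFourCoincidence(const std::vector<PadHit>& hits,
                               double window_ns = 25.0) {
    std::array<std::optional<double>, 4> earliestPerLayer;
    for (const PadHit& hit : hits) {
        double t = correctedTime(hit);
        auto& slot = earliestPerLayer[hit.layer];
        if (!slot || t < *slot) slot = t;
    }
    // Earliest corrected time over all layers.
    double t0 = 1e9;
    for (const auto& slot : earliestPerLayer)
        if (slot && *slot < t0) t0 = *slot;
    // Count layers whose hit falls within the window of t0.
    int layersInWindow = 0;
    for (const auto& slot : earliestPerLayer)
        if (slot && (*slot - t0) < window_ns) ++layersInWindow;
    return layersInWindow >= 3;
}

int main() {
    // Hypothetical hits: raw times differ mostly because of trace delays.
    std::vector<PadHit> hits = {
        {0, 40.0, 12.0}, {1, 35.0, 7.5}, {2, 46.0, 18.0}, {3, 31.0, 3.0}};
    std::cout << std::boolalpha << threeOutOfFourCoincidence(hits) << "\n";
    return 0;
}
```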
This problem comes with several challenges. Each sTGC pad within the detector has its own unique trace length between it and the front end boards, meaning that the time delay is different for different parts of the detector. Uncontrollable factors such as parasitic capacitance and differences in propagation speed through the wires can also interfere with the true value and make it difficult to predict the time delay effectively. Finally, there is the added difficulty of the adapter boards, which act as the connection between the detector components and the front end board. These adapter boards have non-negligible trace lengths and must also be included in the total calculation.

I began by tabulating the known trace lengths for each channel within different parts of the wedge and then matched the channels of the components to the channels of the adapter boards to get the total trace length. I learned to use a tool known as a Time Domain Reflectometer (TDR), which allowed me to experimentally measure the time delay of any given trace. It works by sending an electric pulse down a wire and measuring the shape and timing of any reflections. As the electrical pulse encounters a change in impedance, part of the signal is reflected back, with different types of impedance mismatches producing different types of reflections. This was ideal for my project because, from the shape of the reflections, I could infer what types of junctions the electric signal was encountering as it propagated through the detector. This allowed me to identify the correct end point reflection and to accurately measure the time delay from the latency of that specific reflection pattern. By first using the TDR to measure the time delay of a coaxial cable as a baseline, I calculated the velocity of propagation within the wire and used it to estimate the time delay for each detector component.

Each wedge is broken down into 3 quadruplets, each of which is in turn broken down into 4 layers. We began by measuring a single layer on a single quadruplet. The adapter boards are built with more channels than are actually used by the detectors (mainly to reduce the number of board types that need to be built). This allowed us to measure both channels that were connected all the way into the detector and channels that were disconnected from the detector and therefore only involved the adapter board. By measuring both the connected and disconnected channels, we could isolate the behavior of the electric pulse within the adapter board from its behavior within the detector. We found that the difference between the velocity of propagation in the adapter board and in the detector was negligible, meaning that we could simply add the two trace lengths together and use a single total velocity of propagation.

Using these measurements, we found that our predicted time delay values from the coaxial cable baseline were lower than our measured values, but they followed the same general pattern, increasing at a rate almost directly proportional to the trace length. After scaling our predicted values by a constant factor, we were able to bring all of them to within 1 nanosecond of the measured values. To confirm these initial results, we tested 8 other chambers within the wedge. We found that in quadruplets QS1 and QS2 there was a linear relationship between the predicted and measured time delay values, meaning that we only needed to multiply by a single scaling factor to get our estimates within one nanosecond of the correct values. This scaling ratio is different for each quadruplet and layer, but it can be determined after about 8 to 10 initial time delay measurements for a given chamber.
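The prediction and scaling chain described above can be summarized in a short sketch. The code below is only an illustration under assumed numbers (the cable length, trace lengths, and measured delays are invented) and is not the analysis code used for the project. It shows the sequence: extract a velocity of propagation from a coaxial-cable baseline, predict each channel's delay from its total trace length, and then derive one scaling factor for a quadruplet/layer from a handful of TDR measurements.

```cpp
// Illustrative sketch only: estimating channel time delays from trace lengths
// and deriving a per-quadruplet scaling factor from a few TDR measurements.
// All numbers are hypothetical.
#include <iostream>
#include <vector>

int main() {
    // Baseline: TDR measurement on a coaxial cable of known length gives the
    // velocity of propagation (delay here is the one-way value).
    const double coaxLength_m = 2.0;   // hypothetical cable length
    const double coaxDelay_ns = 10.0;  // hypothetical one-way delay
    const double v_m_per_ns = coaxLength_m / coaxDelay_ns;  // 0.2 m/ns

    // Tabulated total trace lengths (detector + adapter board) and the
    // corresponding TDR-measured delays for a few channels of one layer.
    std::vector<double> traceLength_m = {0.30, 0.45, 0.60, 0.75, 0.90};
    std::vector<double> measuredDelay_ns = {1.8, 2.7, 3.6, 4.5, 5.4};

    // First-pass prediction: delay = length / velocity from the coax baseline.
    std::vector<double> predicted_ns;
    for (double L : traceLength_m) predicted_ns.push_back(L / v_m_per_ns);

    // Single scaling factor for this quadruplet/layer, taken here as the
    // average ratio of measured to predicted delay over the calibration set.
    double ratioSum = 0.0;
    for (size_t i = 0; i < predicted_ns.size(); ++i)
        ratioSum += measuredDelay_ns[i] / predicted_ns[i];
    const double scale = ratioSum / predicted_ns.size();

    // Scaled predictions should now land within ~1 ns of the measurements.
    for (size_t i = 0; i < predicted_ns.size(); ++i)
        std::cout << "trace " << traceLength_m[i] << " m: predicted "
                  << scale * predicted_ns[i] << " ns, measured "
                  << measuredDelay_ns[i] << " ns\n";
    return 0;
}
```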
Each VMM chip on the front end boards has a resolution of about 3 ns, meaning that at best we only need to get our predicted values to within this limit. On average, after scaling, all predicted values come to within 1 nanosecond of the measured values, which is well within our acceptable range. The variation in the scaling ratio makes sense because the geometry of each quadruplet and layer is different, leading to different effects from parasitic capacitance. This result is also very encouraging, because it means that we have a reliable way of predicting the time delay for any detector component on the wedge, given its trace length, without having to physically measure it.

A special situation arises for the QS3 quadruplet, which is directly connected to the area of the pads being measured. The principle of using the TDR to measure the time delay requires us to assume that the pads act like points at the end of the trace and create a short circuit reflection. This assumption breaks down as we deal with larger and larger pads, such as the ones found in the QS3 quadruplet. We found that for larger pads we get an unusual reflection that is wide and flat, without a clear peak or trough. This reflection is created by the signal reflecting off of the non-zero area of the pad, producing a superposition of many different reflections, each from a different point along the edge of the pad, that add up to a wide trough with no clear minimum. The width of this trough shrinks with decreasing pad size and approaches the standard shape after roughly the 5 to 10 largest pads, with the remaining pads exhibiting the same characteristic reflection that we find in QS1 and QS2. This explains why, in some of our data for QS3, we do not see as clear a linear relationship between predicted and measured values as we do for QS1 and QS2: the larger pads can have a variable trace length (and therefore a variable time delay) depending on where within the pad's area the signal arrives. Because we cannot improve beyond the resolution of a single pad, these larger pads carry an inherent systematic uncertainty proportional to the pad's area, with the worst uncertainties being around +/- 1.5 nanoseconds. To compensate for this, we will most likely have to exclude data from the largest pads when calculating the scaling factor, to avoid the imprecise data biasing it, and then include this systematic uncertainty for the largest pads when calibrating the front end electronics for the QS3 quadruplet.

Following these findings, I wrote a C++ class that allows someone to read in a text file containing all of the identifying information and time delay values and then query for specific detector components. The class takes the formatted text file with all of the time delay information, which I provide, and creates a map tying the identifying information to the total time delay for each specific detector component. The user can then query for any time delay by simply providing the identifying information for a part of the detector. This allows a user to seamlessly compensate for any time delay effects when writing software for the detectors.
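A rough sketch of how such a lookup class could be organized is shown below. The class name, the whitespace-separated file format (one "quadruplet layer channel delay" entry per line), and the (quadruplet, layer, channel) key are assumptions made for illustration; the actual class and text file produced for the project may differ.

```cpp
// Sketch of a time-delay lookup class in the spirit of the one described
// above. File format and identifiers are hypothetical.
#include <fstream>
#include <iostream>
#include <map>
#include <sstream>
#include <string>
#include <tuple>

class TimeDelayTable {
public:
    // Key identifying a detector component: (quadruplet, layer, channel).
    using Key = std::tuple<std::string, int, int>;

    // Read one "quadruplet layer channel delay_ns" entry per line.
    bool load(const std::string& filename) {
        std::ifstream in(filename);
        if (!in) return false;
        std::string line;
        while (std::getline(in, line)) {
            std::istringstream fields(line);
            std::string quad;
            int layer = 0, channel = 0;
            double delay_ns = 0.0;
            if (fields >> quad >> layer >> channel >> delay_ns)
                delays_[{quad, layer, channel}] = delay_ns;
        }
        return true;
    }

    // Query the total time delay for a component; returns -1 if unknown.
    double delayFor(const std::string& quad, int layer, int channel) const {
        auto it = delays_.find({quad, layer, channel});
        return it != delays_.end() ? it->second : -1.0;
    }

private:
    std::map<Key, double> delays_;
};

int main() {
    TimeDelayTable table;
    if (!table.load("time_delays.txt")) {  // hypothetical input file
        std::cerr << "could not open delay file\n";
        return 1;
    }
    // Example query for pad channel 42 on layer 2 of the QS1 quadruplet.
    std::cout << table.delayFor("QS1", 2, 42) << " ns\n";
    return 0;
}
```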
Some issues still need to be addressed with this project: the variance in the scaling ratio between particular quadruplet-layer combinations, the large QS3 pads, and the excess noise in the TDR measurements. While the linear relationship is present in all of the predicted versus measured time delay graphs, the scaling factor still needs to be calculated and tabulated for each chamber on the wedge. The correct scaling factor can then be applied to all predicted time delay values for each part of the wedge and added to the C++ class so that it returns the correct time delay for each detector component. More measurements also need to be made on the large pads of the QS3 quadruplet to develop a working formula that predicts the systematic uncertainty for a given pad area. Finally, the TDR itself creates a reflection at any point where there is a change of impedance, reflecting back more strongly the greater the change. This means there is a non-negligible amount of noise in our readings, which is to be expected because each trace within the detector has junctions and other components that cause reflections, and because the trace is not perfectly isolated from other pieces of the detector, giving rise to parasitic capacitance and additional noise in the TDR data. Both of these factors can make it harder to accurately identify the reflection pattern associated with the end of the trace, which is a potential source of error in our measurements.

Since arriving at CERN, the majority of my time has been spent learning about the detector and the technology that has gone into building it, either by absorbing knowledge from my supervisor and the documentation or by gaining it through my own experience working with the time delay system. Through this, I have learned an incredible amount about how the detector systems in ATLAS work and how detector physics works in general. I have gained a new appreciation for the effort that goes into creating a quality detector, and I hope that the level of comprehension I have achieved will serve me well as I continue my career as a physicist.