Kelsey’s day-to-day responsibilities include general Leak Detection and Repair (LDAR) consulting, with a focus on third-party consent decree auditing and subject matter expert (SME) LDAR training. LDAR monitoring requires the use of highly technical instruments. Over the years, the selection of available instruments has been very limited due to high production costs, low supply and demand, and the requirement for intrinsic safety ratings.
“Analyzer technology was not the most reliable when I started in the industry but has improved significantly over the last two and a half decades,” said Kelsey. EPA standards today dictate that inspectors must calibrate their instruments prior to each use, as well as complete a Precision Certification once every quarter. “In the early days, I always thought it was very odd that we held such strict requirements on the calibration and certification of the instruments we use to perform leak detection on a daily basis prior to their use, but there was zero quality control at the end of the day to ensure they had performed well,” commented Kelsey. Only a few short years later, this method changed significantly.

Just over half a decade into his career, Kelsey moved to the Central Valley of California to take a position as site supervisor for an up-and-coming LDAR contracting service provider. With only two and a half years at the wheel, salvaging and rebuilding the inherited LDAR program, his facility was lodged into a multi-facility consent decree (CD), one of the first rounds to hit in the early 2000s. Along with a myriad of new requirements, such as lower leak definitions, different timelines for leak repair completion, and first attempts at repair for leaks below the regulatory leak definition, a whole new door was opening on the quality control front. “For the first time in my already decade-long career, I was witnessing a new level of accountability in the LDAR world surrounding quality control, not only of the data and field work being performed by the inspectors but of the analyzers being used to perform the inspections themselves,” commented Kelsey.


Kelsey’s facility’s CD introduced a new and unexpected requirement: a semi-laboratory-style control test. This test helped evaluate whether the instruments being used to monitor leaks throughout the day were staying within the expected guidelines for accuracy. “Hence the birth of the drift assessment, or more notably the dreaded ‘Drift Failure’,” said Kelsey with a smirk. Each monitoring technician was now tasked with performing an end-of-day calibration drift assessment on each analyzer that had been used that day for LDAR monitoring. When the instrument was brought back from the field, it would be retested using the same calibration gases that had been used to calibrate it that day. The new readings would then be documented and compared to the prior readings to determine whether the instrument had drifted negatively over the monitoring period. If a drift greater than ten percent negative had occurred, a ‘Drift Failure’ would be indicated.

“I’m not sure who started the term ‘failure’, as it is not what is used in the CD’s language, or at least none that I have seen. But it is the term that stuck and is still being tossed around twenty years later. This term, the simple word ‘failure’, I feel sets a destructive tone within the LDAR community, especially with field monitoring technicians,” said Kelsey. “LDAR monitoring technicians, to put it plainly, are hard-working men and women tasked with a highly repetitive and often mind-numbing job that comes with an incredibly high level of scrutiny,” stated Kelsey. “These are salt-of-the-earth people, and most truly and deeply care about the quality of the work they are performing. Telling technicians they had ‘failed’ at the end of the day doesn’t go over well. I think they had a hard time separating themselves and their work from the test being performed on the instruments in the form of the drift assessment,” Kelsey continued. “What is being questioned after a drift failure is the validity of the readings collected by the failed instrument, NOT the quality with which the technicians monitored to collect the information,” stressed Kelsey.
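As a rough illustration of the arithmetic behind a drift assessment as described above, the sketch below compares an analyzer’s end-of-day response to its morning calibration reading and flags a drift greater than ten percent negative. The function names and the exact percent-difference formula are assumptions for illustration; the precise language and method vary by CD.

```python
def drift_percent(calibration_ppm: float, end_of_day_ppm: float) -> float:
    """Percent change of the end-of-day reading relative to the morning
    calibration reading (a negative result means the analyzer now reads lower)."""
    return (end_of_day_ppm - calibration_ppm) / calibration_ppm * 100.0


def is_drift_failure(calibration_ppm: float, end_of_day_ppm: float,
                     threshold_percent: float = -10.0) -> bool:
    """Flag a 'Drift Failure' when the analyzer has drifted more than
    ten percent negative over the monitoring day."""
    return drift_percent(calibration_ppm, end_of_day_ppm) < threshold_percent


# Example (illustrative values only): calibrated at 500 ppm in the morning,
# reads 440 ppm against the same gas at end of day -> -12% drift, flagged.
print(is_drift_failure(500.0, 440.0))  # True
```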

In addition to the new daily drift assessments, further actions were required for each instrument that failed its drift assessment. Whenever a drift greater than negative ten percent was documented, CDs would require re-monitoring of valves and pumps that had read above a specified level, usually 100 ppm for valves and 500 ppm for pumps. Generally, this would result in a very small sample set of components, or in some cases none at all, requiring field verification of their readings. The purpose of this process was to prevent a leak from being missed because an instrument had been reading lower in the field than it had when calibrated prior to the start of the monitoring day. “Unfortunately, some sites and some people either misinterpreted the requirements or decided to take an extremely conservative approach and made the decision to scrub all data for the day for any failed instrument. While I am not one to usually discourage someone from taking a conservative approach on things, this is one I disagreed with. Besides several other factors that we don’t need to cover in this discussion, my first observation was the effect it had on the technicians who had just watched all their day’s work be tossed out,” Kelsey reflected.
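A simplified sketch of how the re-monitoring sample might be selected after a failed drift assessment follows. The thresholds mirror the typical values quoted above (100 ppm for valves, 500 ppm for pumps), but the data structure and function are hypothetical, not taken from any CD.

```python
# Typical CD-style re-monitoring thresholds quoted above (assumed defaults)
REMONITOR_THRESHOLDS_PPM = {"valve": 100.0, "pump": 500.0}


def components_to_remonitor(readings, thresholds=REMONITOR_THRESHOLDS_PPM):
    """Given the day's readings from an instrument that failed its drift
    assessment, return only the components whose readings must be
    field-verified (valves above 100 ppm, pumps above 500 ppm)."""
    return [
        r for r in readings
        if r["ppm"] > thresholds.get(r["type"], float("inf"))
    ]


# Example day's data from a failed instrument (illustrative values only)
day_readings = [
    {"id": "V-101", "type": "valve", "ppm": 85.0},   # below threshold, data stands
    {"id": "V-102", "type": "valve", "ppm": 150.0},  # requires field re-verification
    {"id": "P-201", "type": "pump", "ppm": 300.0},   # below threshold, data stands
]
print(components_to_remonitor(day_readings))  # only V-102 is returned
```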

Read the full article in the June issue of Fugitive Emissions Journal! 
