Procalcitonin Rant

Steven S. -

Dear Justin Morgenstern, thank you for your rant on procalcitonin. Regarding procalcitonin in the PECARN rule for febrile infants published in JAMA (N. Kuppermann): yes, the procalcitonin cutoff in this article was about three times higher than typical hospital laboratory cutoffs. But in clinical prediction tools generally, cutoffs for continuous variables like procalcitonin are dichotomized by a calculation from the cohort data. The cutoff of 1.71 in this risk prediction model was derived from the cohort of febrile infants suspected of serious bacterial infection, in conjunction with the other variables used in the prediction model. Such cutoffs are chosen to maximize accuracy, (TP + TN)/total, along the ROC curve, as described by Kuppermann in the introduction and supplement 4. This cutoff is specific to this population and is coupled with the other predictors; using it outside of the rule, by itself, will likely result in an unacceptable miss rate. Ken Milne might want to touch on risk stratification tool derivation, specifically variable selection, to help emrappers assess the risk stratification tool literature. I am not an expert in risk stratification tool derivation, but Kuppermann is, and my guess is that the tool will be better than any physician who hasn't seen 1800 febrile infants (<2 mos old). If you've seen that many, you probably don't need a procalcitonin level.
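
To make that concrete, here is a minimal sketch of that kind of accuracy-maximizing cutoff search, in Python, on simulated data. Everything in it (the cohort, the biomarker values, the variable names) is hypothetical and is not the actual PECARN dataset; it only illustrates the mechanics of scanning the ROC thresholds for the highest (TP + TN)/total.

```python
# Minimal sketch (simulated data, NOT the PECARN cohort): pick the
# biomarker cutoff that maximizes accuracy, (TP + TN) / total, by
# scanning every candidate threshold along the ROC curve.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(42)
# Simulated cohort: 0 = no serious bacterial infection, 1 = SBI
y = rng.integers(0, 2, size=500)
# Simulated biomarker levels, higher on average in the SBI group
level = np.where(y == 1,
                 rng.lognormal(0.5, 1.0, 500),
                 rng.lognormal(-1.0, 1.0, 500))

fpr, tpr, thresholds = roc_curve(y, level)
n_pos, n_neg = y.sum(), (y == 0).sum()
# Accuracy at each threshold: TP = tpr * n_pos, TN = (1 - fpr) * n_neg
accuracy = (tpr * n_pos + (1 - fpr) * n_neg) / len(y)
best = np.argmax(accuracy)
print(f"best cutoff: {thresholds[best]:.2f}, accuracy: {accuracy[best]:.3f}")
```

Note how the "optimal" cutoff that falls out of a search like this is whatever value happens to split this particular cohort best, which is why it tends to look oddly precise.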
Thank you,
Steve Shroyer MD FACEP

Justin M. -

Thanks for the great comment.

This is a complex topic, and it definitely needs more time than a short EM:RAP segment allows. I totally agree that we should get Ken to do a deeper dive at some point.

There are a few things I would point out:
First: My rant was about procalcitonin, and not really about the PECARN rule itself. I sort of got dragged into the PECARN rule because it includes procalcitonin, but this wasn't meant to be a critique of that tool. My critique is of procalcitonin. (And honestly, critique may not be the right word. When I started this search, I had no idea whether procalcitonin was helpful. I had never used it. This is just a reporting of the evidence as it seems to stand.) However, there are a number of people who claim that procalcitonin is helpful just because it is included in these rules, despite not being helpful when it is studied independently. I think that is a logical mistake. It is possible to have a valuable decision tool that includes redundant or minimally valuable information. It is possible that the PECARN rule would perform just as well with a CRP. Or with just clinical judgement. My point is just that the inclusion in the rule doesn't prove the value of procalcitonin.

Second: I should highlight that it is possible for tests that are next to useless when looked at independently to perform a valuable function when integrated into a larger decision tool or tree. The D-dimer might be a good example. If you just looked at its specificity alone, it clearly sucks. But when integrated into a validated PE algorithm, it clearly has value. (Although it is also a good example of a test that can easily lead us astray, and might be a good cautionary tale when considering procalcitonin.) Another example would be classic risk factors for MI. We know that they are useless when looked at independently for assessing chest pain patients, but apparently they help when integrated into the HEART score. (I will leave my skepticism about the overall value of the HEART score aside for now.) The potential for a minimally useful data point to become useful when combined with many other data points is the counter-argument to my main point. However, as the evidence stands, without implementation studies and large external validations, it is impossible to know which is true of procalcitonin in the PECARN rule.
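
The arithmetic behind the D-dimer example is easy to see with likelihood ratios. The sketch below uses assumed, purely illustrative test characteristics (not authoritative values for any real assay) to show why a negative result from a poorly specific but sensitive test can end the workup in a low-pretest-probability patient, but not in a higher-risk one.

```python
# Back-of-the-envelope sketch: why a test with poor specificity can still
# be useful inside an algorithm that restricts it to low-pretest-probability
# patients. Sensitivity/specificity here are illustrative assumptions.
def post_test_prob(pretest: float, sens: float, spec: float, positive: bool) -> float:
    """Convert pretest probability to post-test probability via likelihood ratios."""
    lr = sens / (1 - spec) if positive else (1 - sens) / spec
    odds = pretest / (1 - pretest)
    post_odds = odds * lr
    return post_odds / (1 + post_odds)

sens, spec = 0.97, 0.40  # assumed: very sensitive, poorly specific
for pretest in (0.05, 0.30):
    p_neg = post_test_prob(pretest, sens, spec, positive=False)
    print(f"pretest {pretest:.0%}: post-test risk after a negative test = {p_neg:.2%}")
# At 5% pretest probability a negative result drives risk below ~0.4%,
# low enough to stop the workup; at 30% pretest probability it does not.
```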

Third: A major concern with any derived decision rule is that it is likely to be overfit to the data set it was derived from, and therefore may not validate externally. That was what I was trying to say in this piece, and I think you say it yourself in your comment: "This cutoff is specific to this population." Given the very unusual, and overly precise, cutoff in PECARN, I think there is a very high chance that it is overfit to this specific data set and will not hold up to external validation.
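
For anyone who wants to see the overfitting worry in miniature, the simulation below (entirely made-up data) tunes a cutoff to a small derivation cohort and then checks it against a large external cohort drawn from the same underlying population. The derivation performance is typically optimistic, and the performance drop on external data is the point.

```python
# Minimal sketch of overfitting: tune a cutoff on one small derivation
# cohort, then test it on a fresh "external" cohort from the same
# population. All data are simulated assumptions.
import numpy as np

rng = np.random.default_rng(0)

def make_cohort(n):
    y = rng.integers(0, 2, size=n)  # 0 = no disease, 1 = disease
    x = np.where(y == 1, rng.normal(1.0, 1.0, n), rng.normal(0.0, 1.0, n))
    return x, y

def accuracy(x, y, cutoff):
    return ((x > cutoff) == (y == 1)).mean()

x_derive, y_derive = make_cohort(120)     # small derivation cohort
x_extern, y_extern = make_cohort(10_000)  # large external cohort

# Pick the cutoff that happens to maximize accuracy in the derivation set
cutoffs = np.linspace(-2, 3, 501)
accs = [accuracy(x_derive, y_derive, c) for c in cutoffs]
best = cutoffs[int(np.argmax(accs))]

print(f"derived cutoff:       {best:.2f}")
print(f"derivation accuracy:  {max(accs):.3f}")
print(f"external accuracy:    {accuracy(x_extern, y_extern, best):.3f}")
# The derivation accuracy is inflated by chance; performance drops on
# external data, and the "optimal" cutoff is often oddly precise.
```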

Fourth: I do think it is important to use common sense and consider the face validity of the decision tools we see. The normal cutoff used for procalcitonin is 0.3 ng/mL, so the cutoff used in PECARN is more than 5x higher than the normal cutoff. Imagine if I presented you with an appendicitis score that used the white blood cell count, but my cutoff was 50. Would you put a lot of faith in that score? As far as I can tell, the extremely high number used in PECARN is completely incongruent with all other procalcitonin research, and therefore lacks face validity. (That doesn't invalidate it, but it should raise some uncertainty.)

Finally: Nate Kuppermann is definitely a brilliant researcher, and very familiar with the derivation of decision tools, but I think it would be a logical fallacy to conclude that any rule he derives will therefore be better than clinical judgement. I think even Nate would disagree with that. These tools require multiple scientific steps, and derivation is only the first. Dr. Kuppermann is a great scientist, and would recognize the need for both validation and implementation studies. Most importantly, the success of these tools is completely independent of the quality of the researcher. For example, Dr. Ian Stiell is universally recognized as one of the best emergency medicine researchers on the planet. He is responsible for even more of these rules than Dr. Kuppermann. However, when his CT head rule was studied in an implementation study, it failed miserably. Use of that rule actually increased CT head usage rates, rather than decreasing them as intended, with no improvement in clinical outcomes. So to say that PECARN might fail as a rule is not to say anything against Dr. Kuppermann. That's just the way science works: even with great researchers, most of these rules fail. In fact, there is a lot of empiric evidence on these decision rules, and almost none of them have been shown to outperform clinical judgement, so the cards are somewhat stacked against the PECARN rule. I am rooting for it, because everyone wants to see fewer neonatal LPs, but I think there are a lot of reasons to think it won't pan out (like most decision rules).

I think the main point stands: looking at the totality of the research, procalcitonin looks like a pretty poor test. It might have a role in the discontinuation of antibiotics in the ICU, but outside of that setting, it has poor test characteristics, and has generally failed in the RCTs looking at it. In my mind, the larger corpus of evidence makes me more skeptical of the value of procalcitonin in these pediatric tools. I am not saying that it has been proven to be useless. However, the bulk of the research is underwhelming, so I think you should set a pretty high bar for research that might convince you to incorporate the test. A derived rule is not enough. I would want to see an RCT implementation study demonstrating that the PECARN rule actually improved clinical outcomes. If that happens, I will tell my hospital to add procalcitonin the very next day.

Thanks for the discussion
All the best
Justin
