
PT Crab 🦀 Issue 157 - Prognosis, ever again

Look, prognosis is hard. But hopefully these models can make it easier. Some are quite new, and this is all a bit premature, but let’s look into it anyway.

Before we get into more though, PT Crab is entering its 4th year of issues! Issue 157 officially moves us from 3 years' worth to 4 and I’m very excited about it. Thank you again for being a subscriber and supporter. Some of you have been here since the beginning and you are very valued. To help the Crab get into year 5 and beyond, please share with friends and colleagues as much as you can. You’re our only advertisement, so do advertise, please! And thank you for all the sharing you’ve done in the past. This Thanksgiving, I was thankful for each and every one of you. You’ve made PT Crab one of the most-read sources in all of physical therapy. Really. It’s pretty amazing and I’m constantly surprised. So thank you.

If you’d like to give back to the Crab, do join as a supporter. It’s just a few dollars and you get 3x more. Details here.

But back to business, I’m going to open with a paper from PTJ this year that looks into prognostic models and discusses the 6 that the writers say have been externally validated. That’s all we have room for here. To see details on the papers behind the models, become a supporter.

With that, let’s dive in!


Clinically Valuable Models

The Gist - Look y’all, I’m not qualified to comment upon the quality of this paper. Or really, many papers. And you know that’s not what I do. I think traditional journal clubs are dumb and am happy to expound upon that thought if you shoot me an email (Luke@PTCrab.org). Instead, I trust the journals and the authors. And boy does this have a good collection of authors including Yannick Tousignant-Laflamme, Florian Nye, Annie LeBlanc, and Chad E. “Daddy” Cook (as in “Daddy of Excellent PT Research”) and more. And it’s from PTJ. All this is to say that I trust their critical appraisal, which is good because this is a complex article. Fortunately, it has a simple conclusion, so let’s get straight to it.

This is a systematic review of prognostic tools for MSK conditions. The goal of these tools is to predict patient health outcomes in rehab, and the authors read 300 full-text articles, eventually including 46 papers on 37 distinct models. And after all that, they found 6 models that have external validation: 1) STarT Back Screening Tool, 2) Wallis Occupational Rehabilitation RisK model, 3) Da Silva model, 4) PICKUP model, 5) Schellingerhout rule, and 6) Keene model. The writers concluded that these models were clinically relevant and externally validated for predicting prognosis for a few different conditions. I’m going to go into those models more below, but first, a few more details.

STarT Back is a tool for screening back problems and categorizing people into low, medium, or high risk of pain sticking around for a while. It’s 9 items and quick to do (there’s a rough sketch of how that scoring typically works below). The Schellingerhout clinical prediction rule predicts whether someone with non-specific neck pain will still have it 6 months later. The Wallis Occupational Rehabilitation RisK tool (AKA WORRK) predicts return to work after occupational trauma. The Da Silva model predicts the number of days until pain recovery for people with acute low back pain. The Keene model is about recurrent lateral ankle sprains. Lastly, the PICKUP model predicts health-related quality of life and work ability 11 to 27 months after the onset of acute/subacute low back or neck pain.
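Since the review itself only lists the models, here’s a minimal sketch of how STarT Back scoring is commonly described: nine yes/no-style items are summed into a total score, a psychosocial subscale (items 5–9) is summed separately, and standard published cut-offs sort people into low, medium, or high risk. The thresholds and variable names below come from my reading of the tool’s original publication, not from this PTJ paper, so treat it as illustrative rather than authoritative.

```python
# Illustrative sketch of STarT Back risk stratification.
# Item responses: 1 = "agree"/positive, 0 = "disagree"/negative.
# Cut-offs follow the commonly cited published thresholds (an assumption
# on my part, not something taken from the review discussed above).

def start_back_risk(items: list[int]) -> str:
    """Return 'low', 'medium', or 'high' risk from 9 binary item scores."""
    if len(items) != 9 or any(i not in (0, 1) for i in items):
        raise ValueError("Expected 9 item scores, each 0 or 1")

    total = sum(items)             # overall score, 0-9
    psychosocial = sum(items[4:])  # subscale: items 5-9, 0-5

    if total <= 3:
        return "low"
    return "high" if psychosocial >= 4 else "medium"

# Example: a patient endorsing most of the psychosocial items lands in the high-risk group.
print(start_back_risk([1, 0, 1, 0, 1, 1, 1, 1, 0]))  # -> "high"
```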

Tell Me More - This entire paper revolves around PROBAST, a new tool from the Cochrane Prognosis Methods Group for assessing risk of bias in prognostic studies. The tool looks at risk of bias and applicability to the clinic, covering things like participants, predictors, and outcomes. But it doesn’t assess whether or not a tool is impactful upon clinical practice. To find that, the authors used a five-step process: the model had to have low concerns about applicability (from the PROBAST tool), a complete report of performance measures, good or acceptable calibration, good discrimination, and relatively low risk of bias. And if that’s all gobbledy-gook, I get it. It’s not much clearer to me either; that’s why I read researchers and journals that I trust.
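To make that five-part filter a bit more concrete, here’s a tiny checklist sketch of the criteria the authors describe. The field names are my own shorthand, not official PROBAST terminology, and the pass/fail logic is just my reading of "a model had to meet all five."

```python
# Hypothetical checklist mirroring the five criteria described above.
# Field names are my own shorthand, not official PROBAST terminology.
from dataclasses import dataclass

@dataclass
class ModelAppraisal:
    low_applicability_concerns: bool   # from the PROBAST applicability assessment
    complete_performance_report: bool  # all performance measures reported
    acceptable_calibration: bool       # good or acceptable calibration
    good_discrimination: bool          # adequately separates outcomes
    low_risk_of_bias: bool             # relatively low PROBAST risk of bias

    def clinically_valuable(self) -> bool:
        """A model 'passes' only if every criterion is met."""
        return all(vars(self).values())

# Example: a well-calibrated model that never reported discrimination fails the filter.
print(ModelAppraisal(True, True, True, False, True).clinically_valuable())  # False
```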

Paper? Sure, but I think you’ll get more value from the next bit, where we break down a few of those models.


And that’s our week! I know, it was a weird issue because prognostics are weird. But I hope it was helpful in some ways. And if not, don’t worry, it’ll be different next week. And if you didn’t like how short it was, you can always support the Crab. Or just look up those tools; some may be quite useful.

Have a good weekend!

Cheers,

Luke
