The risk of round numbers and sharp thresholds in clinical practice
- #clinical decision-making
- #risk assessment
- #machine learning
- Clinical decision-making often simplifies continuous risk data into discrete levels using round-number thresholds, which can distort risk assessments.
- An interpretable machine learning model was developed to identify anomalies caused by threshold-based practices, revealing discontinuities and counter-causal paradoxes in mortality risk.
- Real-world data analysis showed that round-number thresholds lead to suboptimal patient outcomes, such as misaligned treatment decisions and paradoxical risk patterns.
- Simulations demonstrated how threshold-based decisions create statistical artifacts, including discontinuities and non-monotonicities in risk curves (a minimal simulation sketch follows after this list).
- A glass-box ML approach was used to identify these artifacts systematically, revealing that clinical practice has improved over time but still exhibits threshold-driven biases (see the glass-box detection sketch after this list).
- The study highlights the need for dynamic and nuanced risk assessment methods in healthcare to improve patient outcomes by aligning thresholds with the continuous nature of risk.
- Findings suggest that AI models trained on observational data may misjudge patient risk by confusing inherently low risk with risk that is low only because an effective treatment was given; a treated high-risk patient can look low-risk in the outcome data (see the last sketch after this list).
- The study calls for periodic reassessment of clinical guidelines and the use of transparent, interpretable AI models to correct biases in risk assessment.
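A minimal simulation sketch of the threshold artifact described above; the variable names, the cutoff value (5.0), and the treatment effect (a 40% risk reduction) are illustrative assumptions, not figures from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: a continuous biomarker drives true mortality risk,
# but treatment is triggered only at a round-number cutoff.
n = 200_000
biomarker = rng.uniform(0.0, 10.0, n)        # continuous risk factor
true_risk = 0.02 + 0.03 * biomarker          # untreated risk rises smoothly

THRESHOLD = 5.0                              # round-number treatment cutoff (assumed)
treated = biomarker >= THRESHOLD
observed_risk = np.where(treated, true_risk * 0.6, true_risk)  # treatment cuts risk by 40%

died = rng.random(n) < observed_risk

# Bin by biomarker and compare empirical mortality on either side of the cutoff.
bins = np.arange(0.0, 10.5, 0.5)
idx = np.digitize(biomarker, bins) - 1
for b in range(len(bins) - 1):
    mask = idx == b
    print(f"biomarker {bins[b]:4.1f}-{bins[b + 1]:4.1f}: mortality {died[mask].mean():.3f}")
# Observed mortality rises with the biomarker, drops abruptly at 5.0, then rises
# again: a discontinuity and non-monotonicity created purely by the treatment rule.
```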
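A hedged sketch of the glass-box detection step. The note does not name the study's exact model; one common glass-box choice is an Explainable Boosting Machine from the `interpret` package, whose piecewise-constant shape functions make threshold jumps easy to scan for:

```python
import numpy as np
from interpret.glassbox import ExplainableBoostingClassifier

rng = np.random.default_rng(1)

# Reuse the threshold-artifact data-generating idea from the simulation above.
n = 20_000
biomarker = rng.uniform(0.0, 10.0, n)
base_risk = 0.02 + 0.03 * biomarker
observed_risk = np.where(biomarker >= 5.0, base_risk * 0.6, base_risk)
y = (rng.random(n) < observed_risk).astype(int)

ebm = ExplainableBoostingClassifier()
ebm.fit(biomarker.reshape(-1, 1), y)

# The learned shape function is piecewise-constant; large jumps between
# adjacent bins are candidate threshold artifacts worth reviewing.
shape = ebm.explain_global().data(0)         # per-bin contributions for feature 0
scores = np.asarray(shape["scores"], dtype=float)
jumps = np.abs(np.diff(scores))
top = np.argsort(jumps)[::-1][:3]
print("largest adjacent-bin jumps (bin index, size):", list(zip(top, jumps[top])))
```

In practice the same scan would run over every feature's shape function, with flagged jumps reviewed to judge whether they reflect real physiology or a round-number guideline.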
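A follow-up sketch for the bullet on AI models misjudging risk: a model fit naively to the observed outcomes inherits the artifact and ranks a patient just above the cutoff as lower risk than one just below it. The model class and values are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

rng = np.random.default_rng(2)

n = 50_000
biomarker = rng.uniform(0.0, 10.0, n)
base_risk = 0.02 + 0.03 * biomarker                   # true untreated risk, increasing
observed_risk = np.where(biomarker >= 5.0, base_risk * 0.6, base_risk)
died = (rng.random(n) < observed_risk).astype(int)

model = HistGradientBoostingClassifier().fit(biomarker.reshape(-1, 1), died)

for value in (4.9, 5.1):
    p = model.predict_proba(np.array([[value]]))[0, 1]
    print(f"predicted mortality at biomarker={value}: {p:.3f}")
# The prediction at 5.1 is typically lower than at 4.9, even though untreated risk
# at 5.1 is higher: the model mistakes treatment-reduced risk for inherently low risk.
```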