Control charts for chronic disease surveillance: testing algorithm sensitivity to changes in data coding
Hamm, Naomi C.
Marrie, Ruth A.
Lix, Lisa M.
Abstract

Background: Algorithms used to identify disease cases in administrative health data may be sensitive to changes in the data over time. Control charts can be used to assess how variations in administrative health data affect the stability of estimated trends in incidence and prevalence for administrative data algorithms. We compared the stability of incidence and prevalence trends for multiple juvenile diabetes algorithms using observed-expected control charts.

Methods: Eighteen validated algorithms for juvenile diabetes were applied to administrative health data from Manitoba, Canada between 1975 and 2018. Trends in disease incidence and prevalence for each algorithm were modelled using negative binomial regression and generalized estimating equations; model-predicted case counts were plotted against observed counts. Control limits were set as the predicted case count ±0.8*standard deviation. Differences in the frequency of out-of-control observations across algorithms were assessed using McNemar's test with Holm-Bonferroni adjustment.

Results: The proportion of out-of-control observations ranged from 0.57 to 0.76 for incidence and from 0.45 to 0.83 for prevalence. McNemar's test revealed no difference in the frequency of out-of-control observations across algorithms. A sensitivity analysis with relaxed control limits (2*standard deviation) detected fewer out-of-control years (incidence 0.19 to 0.33; prevalence 0.07 to 0.52) but revealed differences in stability between some algorithms for prevalence.

Conclusions: Using control charts to compare the stability of incidence and prevalence trends across juvenile diabetes algorithms, we found no differences for disease incidence. Differences were observed between select algorithms for disease prevalence when using wider control limits.
BMC Public Health. 2022 Feb 28;22(1):406
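
The observed-expected control-chart logic described in Methods can be sketched as follows. This is a minimal illustration with hypothetical counts, assuming the standard deviation in the control limits is estimated from the observed-minus-predicted residuals; the paper's exact SD definition, and the regression step producing the predicted counts, are not reproduced here.

```python
import numpy as np

def out_of_control_years(observed, predicted, k=0.8):
    """Flag years where observed counts fall outside predicted +/- k*SD.

    SD here is the sample standard deviation of the residuals
    (observed - predicted); this is an illustrative assumption,
    not necessarily the paper's exact definition.
    """
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    sd = np.std(observed - predicted, ddof=1)
    lower, upper = predicted - k * sd, predicted + k * sd
    return (observed < lower) | (observed > upper)

# Hypothetical yearly case counts vs. model-predicted counts
obs = np.array([12, 15, 9, 20, 14, 30])
pred = np.array([13, 14, 13, 15, 15, 16])
flags = out_of_control_years(obs, pred, k=0.8)   # boolean per year
proportion = flags.mean()                        # share of out-of-control years
```

Widening the limits (e.g. `k=2`, as in the sensitivity analysis) flags fewer years, mirroring the lower proportions reported in Results.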