There is broad agreement that effort should be made to validate cost-effectiveness models. The International Society for Pharmacoeconomics and Outcomes Research-Society for Medical Decision Making (ISPOR-SMDM) Modeling Good Research Practices Task Force considered model validation to be 'vital', while recognising that it is 'not possible to specify criteria that a model must meet to be declared "valid"' [1] (page 736).
Guidelines for submissions to reimbursement agencies commonly refer to model validation but provide limited guidance on expectations regarding the application of alternative validation approaches. National Institute for Health and Care Excellence (NICE) guidelines request that sponsors provide the rationale for the chosen validation methods, but offer no further guidance other than to note that sponsors should consider whether, and why, the presented results differ from the published literature [2]. Canadian guidelines describe alternative validation approaches and state that the validation process should be documented and, ideally, undertaken by 'someone impartial' [3]. In Australia, the current Pharmaceutical Benefits Advisory Committee (PBAC) guidelines note that sponsors should 'Consider developing and presenting any approaches to validate the results of a modelled economic evaluation' [4]. More specifically, the PBAC guidelines request that sponsors compare 'model traces that correspond with observed or empirical data (e.g. overall survival or partitioned survival) as a means of validating the model'.
Personal experience of reviewing PBAC submissions suggests that model validation is rarely reported. Sponsors present model traces describing the proportions of the intervention and comparator cohorts in alternative health states over time, but few compare these traces with observed data. In this issue of PharmacoEconomics, De Boer et al. review the reporting of efforts to validate cost-effectiveness models in seasonal influenza and early breast cancer [5], while an earlier paper by Afzali and colleagues reviewed approaches to evaluating the performance of decision analytic models in cardiovascular disease [6].
The two reviews report similar findings with respect to cross-model validation of model outputs, which was by far the most commonly reported form of validation. De Boer et al. report that 57 % and 51 % of the 53 seasonal influenza models and the 41 early breast cancer models, respectively, referred to cross-model validation [5]. Afzali et al. found that 55 % of the 81 reviewed cardiovascular models reported on cross-model validation [6]. De Boer et al....