The Task Force Sends CMMI Recommendations for Improving Model Evaluations and Certification

Today, HCTTF sent CMMI a letter with recommendations for improving the model evaluation and certification process. The letter was informed by feedback from HCTTF members and discussions the Task Force held with outside experts and representatives from CMS in the summer of 2021. Some of the key recommendations from the letter included:

Task Force Response Summary

  1. Focus Areas of Model Evaluations. The Task Force recommends that CMMI model evaluations focus on addressing the following four areas: 1) the merits of the underlying model concept, 2) the execution of the model by CMS and its contractors, 3) the characteristics of the model beneficiaries, participants, regions, and markets that impact performance, and 4) the design of the evaluation and its ability to detect an effect.
  2. Evaluation Methodologies. The Task Force highlighted some of the limitations of the difference-in-differences (DiD) methodology commonly used in CMMI evaluations and urged CMMI to prioritize evaluation strategies (such as stepped-wedge and randomized controlled trial designs) that more effectively isolate the effects of specific models while controlling for variables like model overlaps, spillover effects, and participant factors such as participation in prior models.
  3. Public Input on Evaluations. The Task Force recommended that CMMI issue a Request for Information (RFI) seeking public input on: (1) strategies for improving model evaluations, and (2) approaches for maximizing the value of lessons learned from models that do not meet the criteria for certification. Furthermore, to improve transparency around the model evaluation process, the Task Force recommended that CMMI provide a written summary of RFI feedback to stakeholders.
  4. Transparency in Model Design and Evaluation. The Task Force recommended that CMMI publish the logic models for all APMs to help outside researchers understand the thinking supporting specific model designs, and that CMMI leverage existing pathways such as the Research Data Assistance Center and the CMS Virtual Research Data Center to make future model evaluation data available to outside researchers.
  5. Reevaluating the Certification Standard. The Task Force urged CMS to reevaluate the degree of certainty required to certify CMMI models and recommended CMS issue an RFI requesting public comment on standards for actuarial certification that a model “would save money” if expanded.
  6. Transparency in Model Certification. The Task Force recommended that CMS publish an annual public-facing report identifying the models considered for certification and the actuarial determination for any model reviewed for certification, regardless of approval for expansion.
Read the Letter