Treatment fidelity data are critical to capture during supervisory observations. Simply put, treatment fidelity is the degree to which an intervention is implemented as designed (i.e., quality; Sanetti & Kratochwill, 2009). Fidelity data should be collected not only to guide supervisory practice (e.g., determining how frequently to conduct observations), but also because fidelity impacts clinical outcomes (e.g., DiGennaro et al., 2007). Additionally, the BACB requires that supervisors collect data on provider fidelity as part of the Ethics Code for Behavior Analysts. If a client is not making progress and supervisors do not have provider fidelity data, how will they know why progress has stalled? Is it that the provider is not performing as the plan was designed? Is it that modifications need to be made to the client's protocols?
Guide Supervisory Practices
When observations occur, supervisors should take data on fidelity. Our last installment discussed that supervisors must develop tools (e.g., fidelity checklists) to support their observation efforts. To create the checklists, supervisors should task analyze the behavior they would like to measure. The number of checklists created should be based on what the supervisor wants to observe. Not all of the items need to be included in one long assessment; rather, smaller evaluations across implementation categories (e.g., behavior reduction, skill acquisition) can be created to guide supervisory observations. The data gathered with these tools should then be analyzed to guide the supervisor's subsequent actions.
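As a concrete illustration, here is a minimal sketch of how a task-analyzed checklist might be scored during one observation. The step names, the category label, and the percentage-correct calculation are illustrative assumptions, not a prescribed format.

```python
# Minimal sketch: a task-analyzed fidelity checklist and a fidelity (quality)
# score, computed here as the percentage of observed steps implemented
# correctly. All step names below are hypothetical examples.

from dataclasses import dataclass

@dataclass
class ChecklistItem:
    step: str          # one step from the task analysis
    implemented: bool  # scored during the observation

def fidelity_percentage(items: list[ChecklistItem]) -> float:
    """Return the percentage of checklist steps implemented as designed."""
    if not items:
        return 0.0
    correct = sum(item.implemented for item in items)
    return 100.0 * correct / len(items)

# Example: a short skill-acquisition checklist scored during one observation.
skill_acquisition = [
    ChecklistItem("Delivers instruction as written", True),
    ChecklistItem("Uses prescribed prompting procedure", True),
    ChecklistItem("Delivers reinforcer within 3 seconds", False),
    ChecklistItem("Records trial data immediately", True),
]

print(f"Skill-acquisition fidelity: {fidelity_percentage(skill_acquisition):.0f}%")  # 75%
```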
I am often asked how frequently a provider should be supervised. My answer is: let the data guide your behavior! Here are some questions a supervisor can ask themselves: What is the fidelity level for a particular provider? Are the data revealing areas for improvement? How is my team's fidelity in the area of behavior reduction? Do additional training or other interventions need to be implemented to improve fidelity?
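To make "let the data guide your behavior" concrete, the sketch below averages fidelity scores by provider and implementation category and flags combinations that might warrant follow-up. The provider names, scores, and 85% review threshold are assumptions for illustration only; as noted below, there is no prescribed cutoff.

```python
# Minimal sketch: summarize each provider's fidelity by implementation
# category and flag combinations that might warrant more frequent
# observation or additional training. Data and threshold are hypothetical.

from collections import defaultdict

# (provider, category, fidelity %) gathered across recent observations
observations = [
    ("Provider A", "behavior reduction", 92.0),
    ("Provider A", "skill acquisition", 78.0),
    ("Provider B", "behavior reduction", 64.0),
    ("Provider B", "skill acquisition", 88.0),
]

REVIEW_THRESHOLD = 85.0  # hypothetical cutoff for follow-up, not a standard

totals = defaultdict(list)
for provider, category, score in observations:
    totals[(provider, category)].append(score)

for (provider, category), scores in sorted(totals.items()):
    mean = sum(scores) / len(scores)
    action = ("consider more frequent observation or training"
              if mean < REVIEW_THRESHOLD else "maintain current schedule")
    print(f"{provider} | {category}: {mean:.0f}% -> {action}")
```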
Impacts Clinical Outcomes
Fidelity is critical because research has shown that it is correlated with clinical outcomes. Research has evaluated what level of fidelity is necessary for clinical progress to occur, and currently there is no specific level that serves as the gold standard. Higher levels of fidelity, however, more reliably produce clinical progress than lower levels. Dr. St. Peter's research revealed that participants still made progress when fidelity was at 80% (St. Peter Pipkin et al., 2010). A recent study in JABA by Dr. Denys Brand's lab found little difference in outcomes when fidelity was 80% or 100% (Falakfarsa et al., 2023). However, when errors occurred early (i.e., initial fidelity was 20%), clients struggled to make progress even after fidelity was raised to 80%. Additionally, multiple types of errors are detrimental to learning. These data suggest that all providers must implement protocols with high levels of fidelity from the onset of their service delivery!
Supervision Requirement
As noted above, the BACB requires that supervisors collect and use fidelity data as part of the supervisor's responsibility to the provider. The number of providers and BCBAs in our field continues to grow at a rapid rate, and it is the responsibility of the supervisor to ensure their providers have the skills necessary to work effectively with clients. Without effective supervisory oversight and support, providers are likely to continue to lack the skills necessary to produce behavior change. Supervisors should work with their providers to make supervision a fun, rewarding process in which the goal is to improve the fidelity of the provider and deliver the best possible services to clients.
Additional Information on Fidelity
Other forms of fidelity are discussed in the literature. In addition to quality, which is described above and is the most frequently collected, data can also be collected on content, quantity, and process; a brief sketch of how these dimensions might be recorded follows the definitions below.
Content: what steps were delivered accurately? These data provide supervisors with information about what specific behaviors/steps providers need additional support with.
Quantity: how much of the intervention was provided? These data provide supervisors with the information to adjust the dosage of the intervention if needed.
Process: how was the intervention delivered? These data provide supervisors with information about whether the process of service delivery needs to be modified.
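One possible way to organize these dimensions, purely as an illustration, is a simple observation record with one field per form of fidelity. The field names and example values below are hypothetical.

```python
# Minimal sketch: an observation record capturing the four forms of fidelity
# data described above (quality, content, quantity, process). Field names
# and example values are hypothetical illustrations only.

from dataclasses import dataclass

@dataclass
class FidelityObservation:
    quality_pct: float          # quality: % of steps implemented as designed
    content_errors: list[str]   # content: which steps were delivered inaccurately
    sessions_delivered: int     # quantity: how much of the intervention was provided
    sessions_planned: int
    process_notes: str          # process: how the intervention was delivered

obs = FidelityObservation(
    quality_pct=80.0,
    content_errors=["Reinforcer delivered late"],
    sessions_delivered=3,
    sessions_planned=5,
    process_notes="Delivered 1:1 in the clinic rather than in the home setting",
)
print(obs)
```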
References
Behavior Analyst Certification Board. (2020). Ethics code for behavior analysts. https://www.bacb.com/wp-content/uploads/2022/01/Ethics-Code-for-Behavior-Analysts-220316-2.pdf
Brand, D., Henley, A. J., DiGennaro Reed, F. D., Gray, E., & Crabbs, B. (2019). A review of published studies involving parametric manipulations of treatment integrity. Journal of Behavioral Education, 28, 1–26. https://doi.org/10.1007/s10864-018-09311-8
DiGennaro, F. D., Martens, B. K., & Kleinmann, A. E. (2007). A comparison of performance feedback procedures on teachers’ treatment implementation integrity and students’ inappropriate behavior in special education classrooms. Journal of Applied Behavior Analysis, 40, 447–461.
Falakfarsa, G., Brand, D., Bensemann, J., & Jones, L. (2023). A parametric analysis of procedural fidelity errors following mastery of a task: A translational study. Journal of Applied Behavior Analysis. Advance online publication.
Sanetti, L. M. H., & Kratochwill, T. R. (2009). Toward developing a science of treatment integrity: Introduction to the special series. School Psychology Review, 38(4), 445–459.
St. Peter Pipkin, C., Vollmer, T. R., Sloman, K. N., & Roane, H. S. (2010). Effects of treatment integrity failures during differential reinforcement of alternative behavior: A translational model. Journal of Applied Behavior Analysis, 43(1), 47–70. https://doi.org/10.1901/jaba.2010.43-47