
Best Practices in Tobacco Control



I Introduction



Tobacco control programs are being implemented internationally. Which ones work? Practitioners need easy access to information about what has already been implemented and whether it was effective. They need to know whether resources exist before developing their own, and they need to know how to access these resources. Consolidating all of this information should increase the uptake of effective interventions and resources by community practitioners.

II The Project



The purpose of 'Best Practices in Tobacco Control' is to provide decision makers and practitioners with detailed profiles of tobacco control programs that have been demonstrated to be effective based on an established set of criteria and an expert-review process. The end product will be an electronic, searchable toolkit of recommended and promising interventions. It will be posted on the Program Training and Consultation Centre Resource Dissemination Service (RDS) website (http://www.ptcc.on.ca/).



~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ *



III The Process



Step 1: Identifying best practices criteria and review methodology



The best practices methodology used in this project was adapted from Cameron and colleagues' International Scan for Best Practices in Heart Health (June 1998). The 1998 model included assessments of effectiveness, plausibility and practicality.



For the purposes of this project, effectiveness refers to whether the intervention had a positive outcome using a good quality research design. Plausibility refers to the likelihood that an intervention will be effective. A practicality assessment, though part of the 1998 model, is being left to decision makers at the community level for this project (see Step 9).



Step 2: Identifying and filtering relevant projects



Year 1 of 'Best Practices in Tobacco Control' involved reviewing the projects funded by the Ministry of Health and Long-Term Care (MOHLTC) as part of the Ontario Tobacco Strategy (OTS). Thirty tobacco control initiatives using mixed approaches (e.g., media, policy, program services) across various settings (e.g., schools, homes, health care settings, workplaces, community at large) were reviewed. The objectives of these projects spanned prevention, cessation, protection, and denormalization.



Step 3: Contacting and obtaining consent



Contacts for all projects were notified that their project had been nominated for the review. All nominated projects agreed to participate.



Step 4: Sorting projects



The OTS Projects were grouped by 1) topic area (policy change for ETS protection, prevention/education or tobacco use cessation) and 2) target groups (general population; children to age 12, adolescents, and young adults; workplace groups; and other [educators, ethnic population]).
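
To make the two-axis sort concrete, the short sketch below (Python) shows one way the grouping could be represented. The project names and tags are placeholders invented for illustration, not the actual OTS projects or their classifications.

```python
from collections import defaultdict

# Placeholder records illustrating the Step 4 sort: each project carries one
# topic area and one target group. Names and tags are hypothetical examples.
projects = [
    {"name": "Project A", "topic": "tobacco use cessation", "target": "workplace groups"},
    {"name": "Project B", "topic": "prevention/education", "target": "children, adolescents, and young adults"},
    {"name": "Project C", "topic": "policy change for ETS protection", "target": "general population"},
]

# Group first by topic area, then by target group within each topic.
grouped = defaultdict(lambda: defaultdict(list))
for p in projects:
    grouped[p["topic"]][p["target"]].append(p["name"])

for topic, by_target in grouped.items():
    for target, names in by_target.items():
        print(f"{topic} | {target}: {', '.join(names)}")
```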



Step 5: Identifying and recruiting expert reviewers



Expert reviewers were recruited to assess each intervention against the best practices criteria. A total of eight reviewers were recruited to review the 30 interventions (two reviewers per topic area).



Step 6: Summarizing project materials



The next stage of the project involved summarizing the projects in preparation for expert reviewer assessments. Each summary contained

* a summary abstract;

* a project rationale;

* details of the intervention (including program goals, program objectives, project activities, implementation steps, target audience, sites, approaches, and empirical/theoretical support for the program's approach);

* community supports necessary for impact or that will enhance program impact;

* sustainability requirements (including training required/recommended, staff time, volunteer time, expertise requirements, costs, and collaborative approach);

* fit with other interventions;

* implications/recommendations;

* list of products/resources developed; and

* source(s) of information for the summary.



As well, details of all types of evaluation were summarized.



The purpose of the project summaries was to provide rich descriptions of each intervention so that other communities could replicate it if desired. It was also important to learn, in detail, under what conditions a given program did or did not work. All completed summaries were sent back to the original project contacts for an accuracy check.



Step 7: Expert reviews



Completed project summaries were sent to the expert reviewers. Turnaround time was approximately four weeks. Where the two reviewers disagreed on the final assessment, a third reviewer, blinded to the outcome of the original assessments, conducted an independent review. Results of these assessments are still pending.



Step 8: Developing a toolkit



Once the expert reviews are all in and the assessments consolidated, the next step will be to develop a searchable toolkit of interventions. This will allow community programmers to make informed decisions when selecting their program initiatives.



The toolkit will contain detailed information about each intervention, and each intervention will be labeled recommended, promising, or not recommended. The amount of information on each intervention will grow as other communities implement the program. A system will be set up so that evaluation data are continuously fed back to PTCC and incorporated into the toolkit. Over time, there will be historical data on how each project was implemented and the outcome(s) of each implementation. This will allow practitioners to build on the learnings of previous adopters. Moreover, the toolkit will grow as additional interventions are developed and reviewed. The toolkit of interventions will be disseminated via PTCC's RDS website.
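
As a purely illustrative sketch, the snippet below (Python) shows one way such labelled, searchable intervention records could be represented and queried. The field names, example entry, and query are assumptions made for the example and do not reflect the actual design of the PTCC toolkit.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ToolkitEntry:
    # Hypothetical record for one intervention in a searchable toolkit.
    name: str
    label: str                      # "recommended", "promising", or "not recommended"
    topic: str                      # e.g., "tobacco use cessation"
    target: str                     # e.g., "workplace groups"
    evaluations: List[str] = field(default_factory=list)  # grows as communities feed back results

def search(entries: List[ToolkitEntry],
           label: Optional[str] = None,
           topic: Optional[str] = None,
           target: Optional[str] = None) -> List[ToolkitEntry]:
    """Return the entries that match every criterion supplied."""
    return [e for e in entries
            if (label is None or e.label == label)
            and (topic is None or e.topic == topic)
            and (target is None or e.target == target)]

# Example query: recommended cessation interventions.
entries = [ToolkitEntry("Example program", "recommended", "tobacco use cessation", "workplace groups")]
for hit in search(entries, label="recommended", topic="tobacco use cessation"):
    print(hit.name, "-", hit.label)
```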



Step 9: Practicality assessment



Once the toolkit is disseminated, it is up to community practitioners to determine whether an intervention meets their needs. A practicality assessment considers both 1) the community level of adoption (e.g., how ready is the community to address the perceived need?) and 2) the organizational level of adoption (e.g., are the necessary technical, financial, and personnel resources available to implement the intervention?). Thus, the decision about whether an intervention is truly a best practice will ultimately lie in the hands of practitioners.



~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ *



IV Lessons Learned/Challenges



A number of important lessons emerged from this project. First, there is no universal definition of what best practices are or how they should be measured. As the field develops, new terminology such as 'better practices' is emerging. Other methodologies consider not only evidence but also values, goals, ethics, theories, beliefs, and the environment (Kahan & Goodstadt, 2002). The definition and methodology that are ultimately adopted depend on the project objectives, the end product, and the end user. Time and financial resources are also important factors in choosing a methodology.



Another major challenge in any best practices work is the definition of program effectiveness. In most instances, program effectiveness is defined in terms of interventions that have been evaluated under controlled conditions and found to be effective. Unfortunately, the majority of health promotion interventions, including those under review for this project, are not evaluated under controlled conditions (thereby making it impossible for any of the interventions under review to fall into the best practices or recommended category). In this project, expert reviewers with a good understanding of community-level programming were asked to determine whether the evaluation findings were positive or negative and to assess the quality of the research design. Perhaps one way to strengthen the quality and standardization of the reviews would be to provide reviewers with a short guide describing the strengths and limitations of common research designs. Work and debate are ongoing in the field of best practices as it is applied to the health promotion context.



~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ *



V Summary/Conclusions



It is hoped that 'Best Practices in Tobacco Control' will help practitioners make informed decisions when selecting programs. Communities can implement interventions known to work in other jurisdictions, avoid spending scarce resources on initiatives and materials that have already been developed elsewhere, and begin building on the learnings of other communities. It is time to start building on this knowledge and to begin answering some crucial questions about what works, for whom, and under what conditions.

VI References



Cameron, R., Jolin, M.A., Walker, R., McDermott, N., & Gough, M. (2001). Linking Science and Practice: Toward a System for Enabling Communities to Adopt Best Practices for Chronic Disease Prevention. Health Promotion Practice, 2(1), 35-42.



Kahan, B., & Goodstadt, M. (2002). The IDM Manual for Using the Interactive Domain Model Approach to Health Promotion. Toronto: Centre for Health Promotion.