How to implement trials to evaluate programs and policies
Conference Date
27th & 28th August 2019
Location
QT Canberra

Agenda

Day 1 - Tuesday 27th August, 2019

8:30
Registration, coffee & networking
9:00
Opening remarks
9:10
How do we measure impact? And why randomise?

Professor Hiscox will provide a systematic introduction to the methodology of evaluation, comparing traditional approaches (e.g., those based on before-after comparisons) with the “gold standard” provided by randomised controlled trials (RCTs). We cover the basic statistical principles here, highlighting the way in which randomisation eliminates confounding factors that could otherwise lead to biased results.
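To give a flavour of the statistical logic covered in this session, the short Python sketch below (a simplified simulation with invented numbers, using numpy) shows how a before-after comparison can be distorted by a background trend that affects everyone, while a randomised comparison of treatment and control groups recovers the true effect.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    true_effect = 2.0        # the program genuinely raises the outcome by 2 units
    background_trend = 5.0   # outcomes drift upward over time for everyone, program or not

    # Before-after comparison: every participant is measured before and after the program,
    # so the background trend is mistakenly attributed to the program.
    before = rng.normal(50, 10, n)
    after = before + true_effect + background_trend + rng.normal(0, 5, n)
    print("Before-after estimate:", round(float((after - before).mean()), 2))   # roughly 7, badly biased

    # RCT: randomly assign half the units to treatment; both groups experience the same
    # trend, so the difference in means isolates the program's effect (roughly 2).
    treated = rng.random(n) < 0.5
    outcome = before + background_trend + true_effect * treated + rng.normal(0, 5, n)
    print("RCT estimate:", round(float(outcome[treated].mean() - outcome[~treated].mean()), 2))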

10:30
Morning tea & networking
11:00
Case Study: Why have traditional impact evaluations failed?

We review some prominent cases in which a program was thought to be quite effective until a rigorous RCT revealed the opposite was true.

12:30
Networking lunch
1:30
Challenges for RCTs: Incentives, political constraints & ethical concerns

Good evaluations require investments by organisations that implement programs and own the key data. These organisations must share a desire to know the true impact of the program, even if it is “bad news”, and be willing to devote resources to the research. We discuss these issues and also address some major challenges and concerns about whether RCTs can or should be conducted, including:

  • The costliness of implementing trials
  • The time needed to conduct a trial (compared with the timelines for policy decisions)
  • Whether it is ethical to deny or delay access to a program among a “control” group
3:00
Afternoon tea & networking
3:30
Designing your RCT: The basics

What are the basic steps for designing an RCT? To begin the hands-on training, we walk through the following key design steps required for any trial (the last two steps are illustrated in a short sketch after the list):

  • Choosing the units of analysis (individual subjects or groups of subjects)
  • Defining measures and identifying your sources of data
  • Choosing the number of treatments, treatment proportion(s), and the sample size
  • Calculating statistical power and using new techniques to maximise power
  • Designing a protocol that randomly assigns units to the treatment and control groups
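As a rough illustration of those last two steps, the sketch below (assuming Python with numpy and statsmodels, and an invented trial of 800 units) computes the sample size needed for a given level of statistical power and produces a simple, reproducible random assignment.

    import numpy as np
    from statsmodels.stats.power import TTestIndPower

    # Power calculation: sample size per arm needed to detect a "small" standardised effect
    # (Cohen's d = 0.2) with 80% power at the conventional 5% significance level.
    n_per_arm = TTestIndPower().solve_power(effect_size=0.2, alpha=0.05, power=0.8)
    print("Required sample size per arm:", int(np.ceil(n_per_arm)))   # roughly 394

    # Random assignment protocol: shuffle the unit IDs and split the list in half, so that
    # assignment is determined by chance alone and is reproducible via the recorded seed.
    rng = np.random.default_rng(seed=2019)
    unit_ids = np.arange(1, 801)                 # e.g. 800 enrolled individuals or sites
    shuffled = rng.permutation(unit_ids)
    treatment_group, control_group = shuffled[:400], shuffled[400:]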
5:00
Closing remarks and end of Day One

Day 2 - Wednesday 28th August, 2019

8:30
Welcome, coffee & networking
9:00
Opening remarks
9:10
Designing your RCT: More advanced topics

How do we design a trial to assess a program when we cannot make participation in the program mandatory (for those in the treatment group) and we cannot prohibit participation (by those in the control group)? The answer is to use a randomised “encouragement” to take up the program (sometimes referred to as an “intent-to-treat” design). These designs are critical for evaluating most types of government programs and services (for which take-up is voluntary). We discuss how to design, implement, and analyse the results from this type of RCT. We also discuss how to design cluster-based trials (to mitigate “spillover” effects between groups) and how to deal with problems of attrition and missing data.
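The sketch below gives a simplified, invented-data illustration of the encouragement logic (in Python with numpy): outcomes are compared by what was randomised (the encouragement), and the resulting intent-to-treat effect is scaled by the difference in take-up rates to recover the effect of the program on those it induced to participate.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 20_000

    # A randomised "encouragement" (e.g. a letter or reminder) raises the probability that
    # a person takes up the program, but participation remains voluntary for everyone.
    encouraged = rng.random(n) < 0.5
    takeup = rng.random(n) < np.where(encouraged, 0.60, 0.20)
    outcome = 10 + 3.0 * takeup + rng.normal(0, 5, n)        # the program truly adds 3 units

    # Intent-to-treat (ITT) effect: compare outcomes by what was randomised (the
    # encouragement), not by what people chose to do.
    itt = outcome[encouraged].mean() - outcome[~encouraged].mean()

    # Scaling the ITT effect by the difference in take-up rates (the Wald estimator)
    # recovers the effect of the program for those induced to participate (roughly 3).
    takeup_gap = takeup[encouraged].mean() - takeup[~encouraged].mean()
    print(f"ITT effect: {itt:.2f}; effect on induced participants: {itt / takeup_gap:.2f}")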

10:30
Morning tea & networking
11:00
Getting creative: Online & survey experiments

In many areas of policy, government agencies need to assess the impact of different ways of engaging with citizens in online environments about programs and regulations, offering them a “choice architecture” in which they submit applications, select options, and enter information. We will discuss how to conduct A/B testing using a custom-built or commercial testing platform that randomly assigns individuals who view a test page to alternative versions, demonstrating the power of the approach using Google Experiments. We also examine how to embed experiments within surveys to assess the impact of communications and issue framing on attitudes and on hypothetical choices that mimic real-world decisions.
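As a simplified illustration of the underlying logic (with invented completion rates, and assuming Python with numpy and statsmodels rather than any particular testing platform), the sketch below randomly assigns visitors to two versions of a page and tests whether completion rates differ.

    import numpy as np
    from statsmodels.stats.proportion import proportions_ztest

    rng = np.random.default_rng(7)
    n_visitors = 50_000

    # Each visitor to the test page is randomly shown version A or version B of the form;
    # invented completion rates: version B lifts completion from 10% to 11%.
    sees_b = rng.random(n_visitors) < 0.5
    completed = rng.random(n_visitors) < np.where(sees_b, 0.11, 0.10)

    successes = [completed[sees_b].sum(), completed[~sees_b].sum()]
    visitors = [sees_b.sum(), (~sees_b).sum()]
    z_stat, p_value = proportions_ztest(successes, visitors)
    print(f"Completion A: {successes[1] / visitors[1]:.3f}, "
          f"B: {successes[0] / visitors[0]:.3f}, p-value: {p_value:.3f}")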

12:30
Networking lunch
1:30
What to do if you can’t conduct your RCT?

If conditions make an RCT impossible, what is the next-best approach to assessing the impact of a program or policy? In this session we will discuss “natural experiments” that may occur due to the way programs are typically implemented, creating situations in which random chance plays a large role in whether individuals become program participants or not. This includes the application of thresholds for participation based on eligibility criteria (e.g., age, income). These types of evaluations, using retrospective analysis of program data, can provide valid measures of program impacts in many cases.
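The sketch below illustrates the threshold idea with invented data (in Python with numpy): people just below and just above an eligibility cut-off are nearly identical, so comparing their outcomes in a narrow window around the threshold approximates a randomised comparison.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 100_000

    # Invented example: a benefit is available only to people with income below $30,000,
    # and the outcome also rises gently with income for everyone.
    income = rng.uniform(10_000, 50_000, n)
    eligible = income < 30_000
    outcome = income / 10_000 + 4.0 * eligible + rng.normal(0, 5, n)   # program truly adds 4

    # People just below and just above the threshold are nearly identical apart from
    # eligibility, so comparing outcomes in a narrow window around the cut-off approximates
    # a randomised comparison (local regression would refine this further).
    window = 1_000
    just_below = outcome[(income >= 30_000 - window) & (income < 30_000)]
    just_above = outcome[(income >= 30_000) & (income < 30_000 + window)]
    print(f"Estimated impact near the threshold: {just_below.mean() - just_above.mean():.2f}")  # close to 4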

3:00
Afternoon tea & networking
3:30
Sprint Session: Developing an RCT

In the final session of the forum, attendees will be asked to work in small teams to outline the design of an RCT to evaluate an important program or policy they are passionate about, applying the lessons learned in previous sessions.

5:00
Closing remarks & close of the Conference

Key Speakers

Professor Michael J. Hiscox
Clarence Dillon Professor of International Affairs
Department of Government, Harvard University