A Guide on Data Analysis

31 Matching Methods

Matching is a process that aims to close back doors - potential sources of bias - by constructing comparison groups that are similar according to a set of matching variables. This helps to ensure that any observed differences in outcomes between the treatment and comparison groups can be more confidently attributed to the treatment itself, rather than other factors that may differ between the groups.

Matching and DiD can use pre-treatment outcomes to correct for selection bias. Using real-world data and simulations, (Chabé-Ferret 2015) found that matching generally underestimates the average causal effect and gets closer to the true effect as more pre-treatment outcome periods are used. When selection bias is symmetric around the treatment date, DiD is still consistent when implemented symmetrically (i.e., with the same number of periods before and after treatment). When selection bias is asymmetric, Monte Carlo simulations show that symmetric DiD still performs better than matching.

Matching is useful, but not a general solution to causal problems ( J. A. Smith and Todd 2005 )

Assumption : Observables can identify the selection into the treatment and control groups

Identification : The exclusion restriction can be met conditional on the observables

Effect of college quality on earnings

  • They ultimately estimate the treatment effect on the treated of attending a top (high ACT) versus bottom (low ACT) quartile college

Aaronson, Barrow, and Sander ( 2007 )

Do teachers' qualifications (causally) affect student test scores?

\[ Y_{ijt} = \delta_0 + Y_{ij(t-1)} \delta_1 + X_{it} \delta_2 + Z_{jt} \delta_3 + \epsilon_{ijt} \]

There can always be another variable

Any observable sorting is imperfect

\[ Y_{ijst} = \alpha_0 + Y_{ij(t-1)}\alpha_1 + X_{it} \alpha_2 + Z_{jt} \alpha_3 + \gamma_s + u_{isjt} \]

\(\delta_3 >0\)

\(\delta_3 > \alpha_3\)

\(\gamma_s\) = school fixed effect

There is less sorting within schools. Hence, we can introduce the school fixed effect.

Find schools that look like they are assigning students to classes randomly (or as good as randomly), then run step 2.

\[ \begin{aligned} Y_{isjt} = Y_{isj(t-1)} \lambda &+ X_{it} \alpha_1 +Z_{jt} \alpha_{21} \\ &+ (Z_{jt} \times D_i)\alpha_{22}+ \gamma_s + u_{isjt} \end{aligned} \]

\(D_{it}\) is an element of \(X_{it}\)

\(Z_{jt}\) = teacher experience

\[ D_{it}= \begin{cases} 1 & \text{ if high poverty} \\ 0 & \text{otherwise} \end{cases} \]

\(H_0: \alpha_{22} = 0\) tests for effect heterogeneity: whether the effect of teacher experience (\(Z_{jt}\)) differs by poverty status.

For low poverty, the effect is \(\alpha_{21}\)

For high poverty, the effect is \(\alpha_{21} + \alpha_{22}\)
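A minimal sketch of this specification in R, with purely illustrative variable names (score, lag_score, student_x, high_poverty, teacher_exp, school, and df are all hypothetical; the school fixed effect enters as a factor):

```r
# score       = Y_{isjt};   lag_score = Y_{isj(t-1)}
# student_x   = student covariates X_{it} (high_poverty = D_{it} is one of them)
# teacher_exp = Z_{jt};      factor(school) = gamma_s (school fixed effect)
fit <- lm(score ~ lag_score + student_x + high_poverty + teacher_exp +
            teacher_exp:high_poverty + factor(school), data = df)
summary(fit)
# H0: alpha_22 = 0 is the test on the teacher_exp:high_poverty coefficient;
# the teacher-experience effect is alpha_21 (low poverty) and alpha_21 + alpha_22 (high poverty).
```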

Matching is selection on observables and only works if you have good observables.

Sufficient identification assumptions under selection on observables / the back-door criterion (based on Bernard Koch's presentation):

Strong conditional ignorability

\(Y(0),Y(1) \perp T|X\)

No hidden confounders

\(\forall x \in X, t \in \{0, 1\}: P(T = t | X = x) > 0\)

All treatments have non-zero probability of being observed

SUTVA/ Consistency

  • Treatment and outcomes of different subjects are independent

Relative to OLS

  • Matching makes the common support explicit (and changes the default from "ignore" to "enforce")
  • Relaxes the linear functional form assumption; thus, it is less parametric.

It also helps if you have a high ratio of controls to treated units.

For a detailed summary, see (Stuart 2010).

Matching is defined as "any method that aims to equate (or 'balance') the distribution of covariates in the treated and control groups" (Stuart 2010, 1).

Equivalently, matching is a selection on observables identification strategy.

If you think your OLS estimate is biased, a matching estimate (almost surely) is too.

Unconditionally, consider

\[ \begin{aligned} E(Y_i^T | T) - E(Y_i^C |C) &+ E(Y_i^C | T) - E(Y_i^C | T) \\ = E(Y_i^T - Y_i^C | T) &+ [E(Y_i^C | T) - E(Y_i^C |C)] \\ = E(Y_i^T - Y_i^C | T) &+ \text{selection bias} \end{aligned} \]

where \(E(Y_i^T - Y_i^C | T)\) is the causal effect that we want to know.

Randomization eliminates the selection bias.

If we donā€™t have randomization, then \(E(Y_i^C | T) \neq E(Y_i^C |C)\)

Matching tries to do selection on observables \(E(Y_i^C | X, T) = E(Y_i^C|X, C)\)

Propensity Scores basically do \(E(Y_i^C| P(X) , T) = E(Y_i^C | P(X), C)\)

Matching standard errors will exceed OLS standard errors

The treated group should have larger predictive power than the control group because you use the treated units to pick controls (not controls to pick treated units).

The average treatment effect (ATE) is

\[ \frac{1}{N_T} \sum_{i=1}^{N_T} (Y_i^T - \frac{1}{N_{C_T}} \sum_{i=1}^{N_{C_T}} Y_i^C) \]

Since there is no closed-form solution for the standard error of the average treatment effect, we have to use bootstrapping to obtain it.
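A minimal sketch of one way to bootstrap this standard error, re-matching within each bootstrap sample (assumes a data frame df with outcome y, treatment treat, and covariates x1, x2; all names are illustrative):

```r
library(MatchIt)
library(boot)

att_fun <- function(data, idx) {
  d  <- data[idx, ]                                  # resample rows
  m  <- matchit(treat ~ x1 + x2, data = d, method = "nearest")
  md <- match.data(m)                                # matched sample with weights
  with(md, weighted.mean(y[treat == 1], weights[treat == 1]) -
           weighted.mean(y[treat == 0], weights[treat == 0]))
}

set.seed(1)
boot(df, att_fun, R = 500)    # bootstrap estimate and standard error
```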

Professor Gary King advocates that instead of using the word "matching", we should use "pruning" (i.e., deleting observations). It is a preprocessing step that prunes non-matches to make control variables less important in your analysis.

Without Matching

  • Imbalanced data leads to model dependence, which leads to a lot of researcher discretion, which leads to bias.

With Matching

  • We have balanced data, which essentially removes researcher discretion

Table: Balance of covariates under complete randomization vs. full blocking (Gary King, International Methods Colloquium talk, 2015)

| Balance of covariates | Complete Randomization | Fully Blocked |
|-----------------------|------------------------|---------------|
| Observed              | On average             | Exact         |
| Unobserved            | On average             | On average    |

Fully blocked is superior on

model dependence

research costs

Matching is used when

Outcomes are not yet available, and matching is used to select subjects for follow-up

Outcomes are available, and matching is used to reduce bias in the estimate of the treatment effect

Since we can only observe one outcome per unit (either treated or control), we can also think of this problem as a missing data problem. Thus, this section is closely related to Imputation (Missing Data).

In observational studies, we cannot randomize the treatment. Subjects select their own treatments, which could introduce selection bias (i.e., systematic differences between groups that confound the effect of the treatment on the outcome).

Matching is used to

reduce model dependence

diagnose balance in the dataset

Assumptions of matching:

treatment assignment is independent of potential outcomes given the covariates

\(T \perp (Y(0),Y(1))|X\)

known as ignorability, no hidden bias, or unconfoundedness.

You typically satisfy this assumption when unobserved covariates are correlated with observed covariates.

  • But when unobserved covariates are unrelated to the observed covariates, you can use sensitivity analysis to check your result, or use "design sensitivity" ( Heller, Rosenbaum, and Small 2009 )

positive probability of receiving treatment for all X

  • \(0 < P(T=1|X)<1 \forall X\)

Stable Unit Treatment Value Assumption (SUTVA)

Outcomes of A are not affected by treatment of B.

  • This is very hard to satisfy in cases where there are "spillover" effects (interactions between control and treatment units). To combat this, we need to reduce interactions.

Generalization

\(P_t\) : treated population -> \(N_t\) : random sample from treated

\(P_c\) : control population -> \(N_c\) : random sample from control

\(\mu_i\) = means; \(\Sigma_i\) = variance-covariance matrix of the \(p\) covariates in group \(i\) (\(i = t,c\))

\(X_j\) = \(p\) covariates of individual \(j\)

\(T_j\) = treatment assignment

\(Y_j\) = observed outcome

Assume: \(N_t < N_c\)

Treatment effect is \(\tau(x) = R_1(x) - R_0(x)\) where

\(R_1(x) = E(Y(1)|X)\)

\(R_0(x) = E(Y(0)|X)\)

Assume: parallel trends hence \(\tau(x) = \tau \forall x\)

  • If the parallel trends are not assumed, an average effect can be estimated.

Common estimands:

Average effect of the treatment on the treated ( ATT ): effects on treatment group

Average treatment effect ( ATE ): effect on both treatment and control

Define "closeness": decide on the distance measure to be used

Which variables to include:

Ignorability (no unobserved differences between treatment and control)

Since the cost of including unrelated variables is small, you should include as many as possible (unless sample size/power does not allow it because of increased variance)

Do not include variables that were affected by the treatment.

Note: if a matching variable (e.g., heavy drug use) is highly correlated with the outcome variable (e.g., heavy drinking), you may be better off excluding it from the matching set.

Which distance measures: more below

Matching methods

Nearest neighbor matching

Simple (greedy) matching: performs poorly when there is competition for controls.

Optimal matching: considers global distance measure

Ratio matching: k:1 matching increases bias but reduces variance; to choose k, one can use the approximations by Rubin and Thomas ( 1996 ).

With or without replacement: with replacement is typically better, but one needs to account for dependence in the matched sample in later analyses (frequency weights can be used to do so).

Subclassification, Full Matching and Weighting

Nearest neighbor matching assigns each unit a weight of either 0 or 1, while these methods use weights between 0 and 1.

Subclassification: divides the sample into multiple subclasses (e.g., 5-10) based on the distance measure

Full matching: optimally minimizes the average distance between each treated unit and each control unit within each matched set.

Weighting adjustments: weighting techniques use propensity scores to estimate the ATE. If the weights are extreme, the variance can be large, not due to the underlying probabilities, but due to the estimation procedure. To combat this, use (1) weight trimming, or (2) doubly robust methods when propensity scores are used for weighting or matching.

Inverse probability of treatment weighting (IPTW) \(w_i = \frac{T_i}{\hat{e}_i} + \frac{1 - T_i}{1 - \hat{e}_i}\)

Odds \(w_i = T_i + (1-T_i) \frac{\hat{e}_i}{1-\hat{e}_i}\)

Kernel weighting (e.g., in economics) averages over multiple units in the control group.
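A minimal sketch of the IPTW and odds weights above, assuming a data frame df with a binary treatment treat, outcome y, and covariates x1, x2 (all names are illustrative):

```r
# Estimate propensity scores with logistic regression
ps_fit <- glm(treat ~ x1 + x2, data = df, family = binomial)
e_hat  <- fitted(ps_fit)

# Inverse probability of treatment weighting (targets the ATE)
w_iptw <- df$treat / e_hat + (1 - df$treat) / (1 - e_hat)

# Weighting by the odds (targets the ATT)
w_odds <- df$treat + (1 - df$treat) * e_hat / (1 - e_hat)

# Weighted difference in means as a quick ATE estimate
with(df, weighted.mean(y[treat == 1], w_iptw[treat == 1]) -
         weighted.mean(y[treat == 0], w_iptw[treat == 0]))
```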

Assessing Common Support

  • Common support means overlap of the propensity score distributions in the treatment and control groups. The propensity score can be used to discard control units outside the common support. Alternatively, one can use the convex hull of the covariates in the multi-dimensional space.

Assessing the quality of matched samples (Diagnose)

Balance = similarity of the empirical distribution of the full set of covariates in the matched treated and control groups. Equivalently, treatment is unrelated to the covariates

  • \(\tilde{p}(X|T=1) = \tilde{p}(X|T=0)\) where \(\tilde{p}\) is the empirical distribution.

Numerical Diagnostics

standardized difference in means of each covariate (most common), also known as the "standardized bias" or "standardized difference in means"

standardized difference of means of the propensity score (should be < 0.25) ( Rubin 2001 )

ratio of the variances of the propensity score in the treated and control groups (should be between 0.5 and 2). ( Rubin 2001 )

For each covariate, the ratio of the variances of the residuals orthogonal to the propensity score in the treated and control groups.

Note: we can't use hypothesis tests or p-values because of (1) the in-sample property (not population), and (2) the conflation of changes in balance with changes in statistical power.
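A minimal sketch of these numerical diagnostics, assuming a matched data set md with treatment treat, a covariate x1, and estimated propensity score ps (all names are illustrative):

```r
# Standardized difference in means ("standardized bias")
smd <- function(x, t) {
  (mean(x[t == 1]) - mean(x[t == 0])) /
    sqrt((var(x[t == 1]) + var(x[t == 0])) / 2)
}

smd(md$x1, md$treat)                                    # per-covariate balance
abs(smd(md$ps, md$treat)) < 0.25                        # Rubin (2001) rule of thumb
var(md$ps[md$treat == 1]) / var(md$ps[md$treat == 0])   # should lie between 0.5 and 2
```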

Graphical Diagnostics

Empirical Distribution Plot

Estimate the treatment effect

  • Need to account for weights when using matching with replacement.

After Subclassification and Full Matching

Weighting the subclass estimates by the number of treated units in each subclass for ATT

Weighting by the overall number of individuals in each subclass for ATE.

Variance estimation: should incorporate uncertainties in both the matching procedure (step 3) and the estimation procedure (step 4)

With missing data, use generalized boosted models, or multiple imputation ( Qu and Lipkovich 2009 )

Violation of ignorable treatment assignment (i.e., unobservables affect both treatment and outcome) can be addressed by:

measuring a pre-treatment value of the outcome variable

finding the difference in outcomes between multiple control groups; if there is a significant difference, there is evidence of a violation

finding the range of correlations between unobservables and both treatment assignment and outcome that would nullify the significant effect

Choosing between methods

smallest standardized difference in means across the largest number of covariates

minimize the standardized difference in means of a few particularly prognostic covariates

fewest number of large standardized differences in means (> 0.25)

( Diamond and Sekhon 2013 ) automates the process

In practice

If ATE, ask if there is enough overlap of the treated and control groups' propensity scores to estimate the ATE; if not, use the ATT instead

If ATT, ask if there are controls across the full range of the treated group

Choose matching method

If ATE, use IPTW or full matching

If ATT, and more controls than treated (at least 3 times), k:1 nearest neighbor without replacement

If ATT, and few controls, use subclassification, full matching, or weighting by the odds

If balance, use regression on matched samples

If imbalance on a few covariates, use Mahalanobis distance matching on those covariates

If imbalance on many covariates, try k:1 matching with replacement

Ways to define the distance \(D_{ij}\):

  • Exact

\[ D_{ij} = \begin{cases} 0, \text{ if } X_i = X_j, \\ \infty, \text{ if } X_i \neq X_j \end{cases} \]

A more advanced variant is Coarsened Exact Matching.

  • Mahalanobis

\[ D_{ij} = (X_i - X_j)'\Sigma^{-1} (X_i - X_j) \]

\(\Sigma\) = variance-covariance matrix of \(X\) in

the control group if the ATT is of interest

the pooled treatment and control groups if the ATE is of interest

  • Propensity score:

\[ D_{ij} = |e_i - e_j| \]

where \(e_k\) = the propensity score for individual k

A more advanced alternative is the prognosis score ( B. B. Hansen 2008 ), but you have to know (i.e., specify) the relationship between the covariates and the outcome.

  • Linear propensity score

\[ D_{ij} = |logit(e_i) - logit(e_j)| \]

The exact and Mahalanobis distances do not perform well when the X's are high dimensional or non-normally distributed.

We can combine Mahalanobis matching with propensity score calipers ( Rubin and Thomas 2000 )
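A minimal sketch of computing these distances for two units, assuming a covariate matrix X, a binary treatment vector treat, and illustrative row indices i and j (all names are assumptions):

```r
# Mahalanobis distance (squared form, as in the formula above),
# with the covariance taken from the control group, as for the ATT
S <- cov(X[treat == 0, ])
d_mahal <- mahalanobis(X[i, , drop = FALSE], center = X[j, ], cov = S)

# Propensity score and linear (logit) propensity score distances
ps    <- fitted(glm(treat ~ X, family = binomial))
d_ps  <- abs(ps[i] - ps[j])
d_lps <- abs(qlogis(ps[i]) - qlogis(ps[j]))
```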

Other advanced methods for longitudinal settings

marginal structural models ( Robins, Hernan, and Brumback 2000 )

balanced risk set matching ( Y. P. Li, Propert, and Rosenbaum 2001 )

Most matching methods are based on (ex-post)

propensity score

distance metric

cem Coarsened exact matching

Matching Multivariate and propensity score matching with balance optimization

MatchIt Nonparametric preprocessing for parametric causal inference. Have nearest neighbor, Mahalanobis, caliper, exact, full, optimal, subclassification

MatchingFrontier optimize balance and sample size ( G. King, Lucas, and Nielsen 2017 )

optmatch optimal matching with variable ratio, optimal and full matching

PSAgraphics Propensity score graphics

rbounds sensitivity analysis with matched data, examine ignorable treatment assignment assumption

twang weighting and analysis of non-equivalent groups

CBPS covariate balancing propensity score. Can also be used in the longitudinal setting with marginal structural models.

PanelMatch based on Imai, Kim, and Wang (2018)

| Matching | Regression |
|----------|------------|
| Not as sensitive to the functional form of the covariates | Can estimate the effect of a continuous treatment |
| Easier to assess whether it's working | Can estimate the effect of all the variables (not just the treatment) |
| Easier to explain | Can estimate interactions of treatment with covariates |
| Allows a nice visualization of an evaluation | More parametric |
| If your treatment is fairly rare, you may have a lot of control observations that are obviously not comparable | |
| Less parametric | |
| Enforces common support (i.e., the region where treatment and control have the same characteristics) | |

However, the problem of omitted variables (i.e., variables that affect both the outcome and whether an observation was treated), that is, unobserved confounders, is still present in matching methods.

Difference between matching and regression, following Pischke's lecture:

Suppose we want to estimate the effect of treatment on the treated

\[ \begin{aligned} \delta_{TOT} &= E[ Y_{1i} - Y_{0i} | D_i = 1 ] \\ &= E\{E[Y_{1i} | X_i, D_i = 1] \\ & - E[Y_{0i}|X_i, D_i = 1]|D_i = 1\} && \text{law of iterated expectations} \end{aligned} \]

Under conditional independence

\[ E[Y_{0i} |X_i , D_i = 0 ] = E[Y_{0i} | X_i, D_i = 1] \]

\[ \begin{aligned} \delta_{TOT} &= E \{ E[ Y_{1i} | X_i, D_i = 1] - E[ Y_{0i}|X_i, D_i = 0 ]|D_i = 1\} \\ &= E\{E[Y_i | X_i, D_i = 1] - E[Y_i |X_i, D_i = 0 ] | D_i = 1\} \\ &= E[\delta_X |D_i = 1] \end{aligned} \]

where \(\delta_X\) is an X-specific difference in means at covariate value \(X_i\)

When \(X_i\) is discrete, the matching estimand is

\[ \delta_M = \sum_x \delta_x P(X_i = x |D_i = 1) \]

where \(P(X_i = x |D_i = 1)\) is the probability mass function for \(X_i\) given \(D_i = 1\)

According to Bayes' rule,

\[ P(X_i = x | D_i = 1) = \frac{P(D_i = 1 | X_i = x) \times P(X_i = x)}{P(D_i = 1)} \]

\[ \begin{aligned} \delta_M &= \frac{\sum_x \delta_x P (D_i = 1 | X_i = x) P (X_i = x)}{\sum_x P(D_i = 1 |X_i = x)P(X_i = x)} \\ &= \sum_x \delta_x \frac{ P (D_i = 1 | X_i = x) P (X_i = x)}{\sum_x P(D_i = 1 |X_i = x)P(X_i = x)} \end{aligned} \]

On the other hand, suppose we have regression

\[ y_i = \sum_x d_{ix} \beta_x + \delta_R D_i + \epsilon_i \]

\(d_{ix}\) = dummy that indicates \(X_i = x\)

\(\beta_x\) = regression-effect for \(X_i = x\)

\(\delta_R\) = regression estimand where

\[ \begin{aligned} \delta_R &= \frac{\sum_x \delta_x [P(D_i = 1 | X_i = x) (1 - P(D_i = 1 | X_i = x))]P(X_i = x)}{\sum_x [P(D_i = 1| X_i = x)(1 - P(D_i = 1 | X_i = x))]P(X_i = x)} \\ &= \sum_x \delta_x \frac{[P(D_i = 1 | X_i = x) (1 - P(D_i = 1 | X_i = x))]P(X_i = x)}{\sum_x [P(D_i = 1| X_i = x)(1 - P(D_i = 1 | X_i = x))]P(X_i = x)} \end{aligned} \]

The difference between the regression and matching estimands is the weights they use to combine the covariate-specific treatment effects \(\delta_x\):

| Type | Uses weights which depend on | Interpretation | Makes sense because |
|------|------------------------------|----------------|---------------------|
| Matching | \(P(D_i = 1 \mid X_i = x)\), the fraction of treated observations in a covariate cell (i.e., the mean of \(D_i\)) | The weight is larger in cells with many treated observations. | We want the effect of treatment on the treated. |
| Regression | \(P(D_i = 1 \mid X_i = x)(1 - P(D_i = 1 \mid X_i = x))\), the variance of \(D_i\) in the covariate cell | The weight is largest in cells with half treated and half untreated observations (this is why we want to balance our sample before running a regular regression model, as mentioned above). | These cells produce the lowest-variance estimates of \(\delta_x\). If all the \(\delta_x\) are the same, the most efficient estimand uses the lowest-variance cells most heavily. |
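A toy numerical illustration of the two weighting schemes (all numbers are made up):

```r
p_treat <- c(0.1, 0.5, 0.9)    # P(D = 1 | X = x) in three covariate cells
p_x     <- c(0.3, 0.4, 0.3)    # P(X = x)
delta_x <- c(1, 2, 3)          # cell-specific treatment effects

w_match <- p_treat * p_x                    # matching weights (unnormalized)
w_match <- w_match / sum(w_match)
w_reg   <- p_treat * (1 - p_treat) * p_x    # regression weights (unnormalized)
w_reg   <- w_reg / sum(w_reg)

sum(w_match * delta_x)   # matching estimand: upweights cells with many treated units
sum(w_reg   * delta_x)   # regression estimand: upweights cells with a 50/50 treated/control mix
```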

The goal of matching is to produce covariate balance (i.e., distributions of covariates in the treatment and control groups that are approximately as similar as they would be in a successful randomized experiment).

31.1 Selection on Observables

31.1.1 MatchIt

The procedure typically involves (as proposed by Noah Greifer, using MatchIt):

  • checking (balance)
  • estimating the treatment effect

Example: examine the effect of treat on re78

select type of effect to be estimated (e.g., mediation effect, conditional effect, marginal effect)

select the target population

select variables to match/balance (Austin 2011; T. J. VanderWeele 2019)

  • Check Initial Imbalance
  • Check balance

Sometimes you have to make trade-off between balance and sample size.


Try full matching (i.e., every treated unit is matched with at least one control, and every control with at least one treated unit).

Checking balance again


Exact Matching

Subclassification

Optimal Matching

Genetic Matching

  • Estimating the Treatment Effect

treat coefficient = estimated ATT
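A minimal sketch of this workflow with MatchIt, using the lalonde data shipped with the package (covariate names as in recent package versions; this is a sketch, not the chapter's exact specification):

```r
library(MatchIt)
data("lalonde", package = "MatchIt")

# Check initial imbalance (no matching performed)
m0 <- matchit(treat ~ age + educ + race + married + nodegree + re74 + re75,
              data = lalonde, method = NULL, distance = "glm")
summary(m0)

# 1:1 nearest-neighbor matching on the propensity score
m1 <- matchit(treat ~ age + educ + race + married + nodegree + re74 + re75,
              data = lalonde, method = "nearest", distance = "glm")
summary(m1)                                   # check balance after matching

# Estimate the treatment effect on the matched sample
md  <- match.data(m1)
fit <- lm(re78 ~ treat, data = md, weights = weights)
summary(fit)$coefficients["treat", ]          # coefficient on treat = estimated ATT
```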

When reporting, remember to mention

  • the matching specification (method, and additional options)
  • the distance measure (e.g., propensity score)
  • other methods, and rationale for the final chosen method.
  • balance statistics of the matched dataset.
  • number of matched, unmatched, discarded
  • estimation method for treatment effect.

31.1.2 designmatch

This package includes

distmatch optimal distance matching

bmatch optimal bipartite matching

cardmatch optimal cardinality matching

profmatch optimal profile matching

nmatch optimal nonbipartite matching

31.1.3 MatchingFrontier

As mentioned in MatchIt, you have to make a trade-off (also known as the bias-variance trade-off) between balance and sample size. An automated procedure to optimize this trade-off is implemented in MatchingFrontier ( G. King, Lucas, and Nielsen 2017 ), which solves this joint optimization problem.

Following MatchingFrontier guide

31.1.4 Propensity Scores

Even though I mention the propensity score matching method here, it is no longer recommended for research and publication ( G. King and Nielsen 2019 ) because it increases

inefficiency

model dependence: small changes in the model specification lead to big changes in model results

( Abadie and Imbens 2016 ) note

The initial estimation of the propensity score influences the large sample distribution of the estimators.

Adjustments are made to the large sample variances of these estimators for both ATE and ATT.

The adjustment for the ATE estimator is either negative or zero, indicating greater efficiency when matching on an estimated propensity score versus the true score in large samples.

For the ATET estimator, the sign of the adjustment depends on the data generating process. Neglecting the estimation error in the propensity score can lead to inaccurate confidence intervals for the ATT estimator, making them either too large or too small.

PSM tries to approximate complete randomization, while other methods try to approximate a fully blocked design. Hence, you are probably better off using other methods.

The propensity score is "the probability of receiving the treatment given the observed covariates" ( Rosenbaum and Rubin 1985 ).

Equivalently, it can be understood as the probability of being treated.

\[ e_i (X_i) = P(T_i = 1 | X_i) \]

Estimation using

logistic regression

Nonparametric methods:

boosted CART

generalized boosted models (gbm)

Steps, from Gary King's slides:

reduce the k elements of X to a scalar

\(\pi_i \equiv P(T_i = 1|X) = \frac{1}{1+e^{-X_i \beta}}\)

Distance ( \(X_c, X_t\) ) = \(|\pi_c - \pi_t|\)

match each treated unit to the nearest control unit

control units: not reused; pruned if unused

prune matches if distances > caliper

In the best case scenario, you randomly prune, which increases imbalance

Other methods dominate because they try to match exactly, hence:

\(X_c = X_t \to \pi_c = \pi_t\) (exact match leads to equal propensity scores) but

\(\pi_c = \pi_t \nrightarrow X_c = X_t\) (equal propensity scores do not necessarily lead to exact match)

Do not include/control for irrelevant covariates, because doing so makes your PSM pruning more random, hence more imbalanced.

Do not include instrumental variables in the predictor set of a propensity score matching estimator ( Bhattacharya and Vogt 2007 ). More generally, using variables that do not control for potential confounders, even if they are predictive of the treatment, can result in biased estimates.

What you are left with after pruning is more important than what you started with and then threw out.

Diagnostics:

balance of the covariates

no need to be concerned about collinearity

can't use the c-statistic or stepwise selection because those model-fit statistics do not apply

31.1.4.1 Look Ahead Propensity Score Matching

  • ( Bapna, Ramaprasad, and Umyarov 2018 )

31.1.5 Mahalanobis Distance

Approximates fully blocked experiment

Distance \((X_c,X_t)\) = \(\sqrt{(X_c - X_t)'S^{-1}(X_c - X_t)}\)

where \(S^{-1}\) standardizes the distance

In application we use Euclidean distance.

Prune unused control units, and prune matches if distance > caliper
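A minimal sketch with MatchIt (recent versions), matching on Mahalanobis distance within a propensity score caliper (as in the Rubin and Thomas combination mentioned earlier), reusing the lalonde example from above:

```r
# Nearest-neighbor matching on the Mahalanobis distance of the covariates (mahvars),
# restricted to pairs within a 0.25 SD caliper on the estimated propensity score.
m_mah <- matchit(treat ~ age + educ + re74 + re75, data = lalonde,
                 method = "nearest", distance = "glm",
                 mahvars = ~ age + educ + re74 + re75, caliper = 0.25)
summary(m_mah)
```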

31.1.6 Coarsened Exact Matching

Steps, from Gary King's slides (International Methods Colloquium talk, 2015):

Temporarily coarsen \(X\)

Apply exact matching to the coarsened \(X, C(X)\)

sort observations into strata, each with unique values of \(C(X)\)

prune stratum with 0 treated or 0 control units

Pass on original (uncoarsened) units except those pruned

Properties:

Monotonic imbalance bounding (MIB) matching method

  • the maximum imbalance between the treated and control groups is chosen ex ante

meets the congruence principle

robust to measurement error

can be implemented with multiple imputation

works well for multi-category treatments

Assumptions:

  • Ignorability (i.e., no omitted variable bias)

More detail in ( Iacus, King, and Porro 2012 )

Example by the package's authors

automated coarsening

coarsening by explicit user choice

Can also use progressive coarsening method to control the number of matches.

cem can also handle some missingness.
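A minimal sketch of coarsened exact matching via MatchIt's method = "cem" (automated coarsening; cutpoints can also be supplied by hand), reusing the lalonde example:

```r
m_cem <- matchit(treat ~ age + educ + re74 + re75,
                 data = lalonde, method = "cem")
summary(m_cem)                        # balance and number of units pruned
md_cem <- match.data(m_cem)
lm(re78 ~ treat, data = md_cem, weights = weights)
```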

31.1.7 Genetic Matching

GM uses an iterative process of checking propensity scores, combining propensity scores and Mahalanobis distance.

  • GenMatch ( Diamond and Sekhon 2013 )

GM is arguably a "superior" method to nearest neighbor or full matching for imbalanced data.

Use a genetic search algorithm to find weights for each covariate such that we have optimal balance.

Implementation

can be used with replacement

balance can be based on

paired \(t\) -tests (dichotomous variables)

Kolmogorov-Smirnov (multinomial and continuous)
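A minimal sketch via MatchIt's method = "genetic" (which wraps Matching::GenMatch and requires the Matching and rgenoud packages), reusing the lalonde example:

```r
m_gen <- matchit(treat ~ age + educ + re74 + re75,
                 data = lalonde, method = "genetic",
                 pop.size = 50)       # pop.size is passed through to GenMatch
summary(m_gen)
```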

31.1.8 Entropy Balancing

( Hainmueller 2012 )

Entropy balancing is a method for achieving covariate balance in observational studies with binary treatments.

It uses a maximum entropy reweighting scheme to ensure that treatment and control groups are balanced based on sample moments.

This method adjusts for inequalities in the covariate distributions, reducing dependence on the model used for estimating treatment effects.

Entropy balancing improves balance across all included covariate moments and removes the need for repetitive balance checking and iterative model searching.
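A minimal sketch via the WeightIt package's entropy balancing option (method = "ebal"), reusing the lalonde example; the resulting weights then feed a weighted outcome model:

```r
library(WeightIt)
w_eb <- weightit(treat ~ age + educ + re74 + re75,
                 data = lalonde, method = "ebal", estimand = "ATT")
summary(w_eb)                                           # check balance of the weights
lm(re78 ~ treat, data = lalonde, weights = w_eb$weights)
```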

31.1.9 Matching for high-dimensional data

One could reduce the number of dimensions using methods such as:

Lasso ( Gordon et al. 2019 )

Penalized logistic regression ( Eckles and Bakshy 2021 )

PCA (Principal Component Analysis)

Locality Preserving Projections (LPP) ( S. Li et al. 2016 )

Random projection

Autoencoders ( Ramachandra 2018 )

Additionally, one can jointly perform dimension reduction while balancing the distributions of the control and treated groups ( Yao et al. 2018 ).

31.1.10 Matching for time series-cross-section data

Examples: ( Scheve and Stasavage 2012 ) and ( Acemoglu et al. 2019 )

Identification strategy:

Within-unit over-time variation

within-time across-units variation

See DID with in and out treatment condition for details of this method

31.1.11 Matching for multiple treatments

In cases where you have multiple treatment groups and you want to do matching, it's important to have the same baseline (control) group. For more details, see

( McCaffrey et al. 2013 )

( Lopez and Gutman 2017 )

( Zhao et al. 2021 ) : also for continuous treatment

If you insist on using the MatchIt package, then see this answer

31.1.12 Matching for multi-level treatments

See ( Yang et al. 2016 )

Package in R shuyang1987/multilevelMatching on Github

31.1.13 Matching for repeated treatments

https://cran.r-project.org/web/packages/twang/vignettes/iptw.pdf

package in R twang

31.2 Selection on Unobservables

There are several ways one can deal with selection on unobservables:

Rosenbaum Bounds

Endogenous Sample Selection (i.e., Heckman-style correction): examine the \(\lambda\) term to see whether itā€™s significant (sign of endogenous selection)

Relative Correlation Restrictions

Coefficient-stability Bounds

31.2.1 Rosenbaum Bounds

Examples in marketing

( Oestreicher-Singer and Zalmanson 2013 ) : A range of 1.5 to 1.8 is important for the effect of the level of community participation of users on their willingness to pay for premium services.

( M. Sun and Zhu 2013 ) : A factor of 1.5 is essential for understanding the relationship between the launch of an ad revenue-sharing program and the popularity of content.

( Manchanda, Packard, and Pattabhiramaiah 2015 ) : A factor of 1.6 is required for the social dollar effect to be nullified.

( Sudhir and Talukdar 2015 ) : A factor of 1.9 is needed for IT adoption to impact labor productivity, and 2.2 for IT adoption to affect floor productivity.

( Proserpio and Zervas 2017b ) : A factor of 2 is necessary for the firmā€™s use of management responses to influence online reputation.

( S. Zhang et al. 2022 ) : A factor of 1.55 is critical for the acquisition of verified images to drive demand for Airbnb properties.

( Chae, Ha, and Schweidel 2023 ) : A factor of 27 (not a typo) is significant in how paywall suspensions affect subsequent subscription decisions.

Matching methods are favored for estimating treatment effects in observational data, offering advantages over regression methods because:

They reduce reliance on functional form assumptions.

They assume all selection-influencing covariates are observable; estimates are unbiased if no unobserved confounders are omitted.

Concerns arise when potentially relevant covariates are unmeasured.

  • Rosenbaum Bounds assess the overall sensitivity of coefficient estimates to hidden bias ( Rosenbaum 2002 ) without requiring knowledge (e.g., the direction) of the bias. Because the unobservables that cause hidden bias have to both affect selection into treatment by a factor of \(\Gamma\) and be predictive of the outcome, this method is also known as worst-case analysis ( DiPrete and Gangl 2004 ).

Rosenbaum bounds cannot provide precise bounds on estimates of treatment effects (see Relative Correlation Restrictions).

Typically, we show both the p-value and the Hodges-Lehmann (H-L) point estimate for each level of \(\Gamma\).

With random treatment assignment, we can use the non-parametric test (Wilcoxon signed rank test) to see if there is treatment effect.

Without random treatment assignment (i.e., observational data), we cannot use this test. With Selection on Observables, we can use this test if we believe there are no unmeasured confounders. And this is where Rosenbaum ( 2002 ) comes in, to assess the believability of this notion.

In laymanā€™s terms, consider that the treatment assignment is based on a method where the odds of treatment for a unit and its control differ by a multiplier \(\Gamma\)

  • For example, \(\Gamma = 1\) means that the odds of assignment are identical, indicating random treatment assignment.
  • As another example, \(\Gamma = 2\) means that, within the same matched pair, one unit is twice as likely to receive the treatment (due to unobservables).
  • Since we canā€™t know \(\Gamma\) with certainty, we run sensitivity analysis to see if the results change with different values of \(\Gamma\)
  • This bias is the product of an unobservable that influences both treatment selection and outcome by a factor \(\Gamma\) (omitted variable bias)

In technical terms,

  • Consider unit \(j\) with a probability \(\pi_j\) of receiving the treatment, and unit \(i\) with \(\pi_i\) .
  • Ideally, after matching, if thereā€™s no hidden bias, weā€™d have \(\pi_i = \pi_j\) .
  • However, observing \(\pi_i \neq \pi_j\) raises questions about potential biases affecting our inference. This is evaluated using the odds ratio.
  • The odds of treatment for a unit \(j\) is defined as \(\frac{\pi_j}{1 - \pi_j}\) .
  • If \(\Gamma = 1\) , it implies an absence of hidden bias.
  • If \(\Gamma = 2\) , the odds of receiving treatment could differ by up to a factor of 2 between the two units.
  • The value of \(\Gamma\) helps measure the potential departure from a bias-free study.
  • Sensitivity analysis involves varying \(\Gamma\) to examine how inferences might change with the presence of hidden biases.
  • Consider a scenario where unit \(i\) has observed covariates \(x_i\) and an unobserved covariate \(u_i\) , that both affect the outcome.
  • A logistic regression model could link the odds of assignment to these covariates: \(\log(\frac{\pi_i}{1 - \pi_i}) = \kappa x_i + \gamma u_i\) , where \(\gamma\) represents the impact of the unobserved covariate.
  • Select a range of values for \(\Gamma\) (e.g., \(1 \to 2\) ).
  • Assess how the p-value or the magnitude of the treatment effect ( Hodges Jr and Lehmann 2011 ) (for more details, see ( Hollander, Wolfe, and Chicken 2013 ) ) changes with varying \(\Gamma\) values.
  • Report the minimum value of \(\Gamma\) at which the treatment effect is nullified (i.e., becomes insignificant). The literature's rule of thumb is that if \(\Gamma > 2\), we have strong evidence that the treatment effect is robust to large biases ( Proserpio and Zervas 2017a ).
  • If treatment assignment is clustered (e.g., within school, within state), we need to adjust the bounds for clustered treatment assignment ( B. B. Hansen, Rosenbaum, and Small 2014 ), similar to clustered standard errors.

rbounds ( Keele 2010 )

sensitivitymv ( Rosenbaum 2015 )

Since we typically assess our estimate's sensitivity to unobservables after matching, we first do some matching.
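A minimal sketch with rbounds, assuming 1:1 nearest-neighbor matching without replacement (e.g., the m1 object from the MatchIt example above), so that each matched subclass contains exactly one treated and one control unit:

```r
library(rbounds)
md <- match.data(m1)
md <- md[order(md$subclass, -md$treat), ]      # align treated/control within pairs
y_t <- md$re78[md$treat == 1]
y_c <- md$re78[md$treat == 0]

psens(y_t, y_c, Gamma = 2, GammaInc = 0.25)    # bounds on the p-value as Gamma grows
hlsens(y_t, y_c, Gamma = 2, GammaInc = 0.25)   # bounds on the Hodges-Lehmann estimate
```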

For multiple control group matching

sensitivitymw is faster than sensitivitymv, but sensitivitymv can handle matched sets with differing numbers of controls ( Rosenbaum 2015 ).

31.2.2 Relative Correlation Restrictions

( Manchanda, Packard, and Pattabhiramaiah 2015 ) : 3.23 for social dollar effect to be nullified

( Chae, Ha, and Schweidel 2023 ) : 6.69 (i.e., how much stronger the selection on unobservables has to be, compared to the selection on observables, to negate the result) for the effect of paywall suspensions on subsequent subscription decisions

( M. Sun and Zhu 2013 )

Proposed by Altonji, Elder, and Taber ( 2005 )

Generalized by Krauth ( 2016 )

Estimate bounds of the treatment effects due to unobserved selection.

\[ Y_i = X_i \beta + C_i \gamma + \epsilon_i \]

\(\beta\) is the effect of interest

\(C_i\) is the control variable

Using OLS, \(cor(X_i, \epsilon_i) = 0\)

Under RCR analysis, we assume

\[ cor(X_i, \epsilon_i) = \lambda cor(X_i, C_i \gamma) \]

where \(\lambda \in (\lambda_l, \lambda_h)\)

Choice of \(\lambda\)

A strong assumption of no omitted variable bias corresponds to a small \(\lambda\):

If \(\lambda = 0\) , then \(cor(X_i, \epsilon_i) = 0\)

If \(\lambda = 1\) , then \(cor(X_i, \epsilon_i) = cor(X_i, C_i \gamma)\)

We typically examine \(\lambda \in (0, 1)\)


31.2.3 Coefficient-stability Bounds

  • Developed by Oster ( 2019 ), this approach bounds the bias from unobservables using:

changes in the coefficient of interest

shifts in the model \(R^2\)

  • Refer to Masten and Poirier ( 2022 ) for the reverse sign problem.
