In my work I often need to conduct mixed-mode studies. This February I ran a survey of users of a large advertising app, and it was critical to reach users of all ages. From my experience with earlier surveys of this app, I already knew that middle-aged clients mostly communicate with service managers by email, while clients aged 20-30 mostly use messengers. So to conduct the survey properly, I had to use both channels.
At first glance mixed-mode research looks unproblematic, but today I want to share my thoughts and experience on this issue.
Mixed-mode surveys combine several modes of data collection, such as online surveys, telephone interviews, face-to-face interviews, or paper questionnaires.
While mixed-mode surveys offer higher response rates and better sample coverage, they can introduce mixed-mode bias: systematic differences in responses across survey modes. Here are some approaches I use to address it:
Randomization of Mode Assignment. First of all, it’s better to randomly assign participants to the different survey modes: this ensures that any observed differences in responses are not systematically related to the mode of data collection, minimizes selection bias, and allows fair comparisons across modes.
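As a quick illustration, here is a minimal Python sketch of balanced random assignment (the participant IDs, mode names, and seed are all hypothetical):

```python
import random

def assign_modes(participant_ids, modes=("online", "telephone", "face-to-face"), seed=42):
    """Shuffle participants, then deal them out round-robin so the
    mode groups stay balanced while assignment remains random."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {pid: modes[i % len(modes)] for i, pid in enumerate(ids)}

assignments = assign_modes(range(12))
```

A fixed seed keeps the assignment reproducible, which is handy when the study design needs to be audited later.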
Mode Effect Analysis. After the data collection stage, it’s worth conducting a mode effect analysis: comparing responses across the modes to identify and quantify any differences. Researchers can examine whether responses vary systematically by mode and evaluate the magnitude and direction of these effects. Understanding mode effects makes it possible to apply appropriate adjustments during data analysis and interpretation.
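A minimal Python sketch of such a comparison, using a Welch two-sample t statistic to capture both the direction and the magnitude of a mode effect (the satisfaction scores below are invented for illustration):

```python
import math
from statistics import mean, variance

def mode_effect(group_a, group_b):
    """Difference in means (direction and size of the mode effect)
    plus a Welch two-sample t statistic for it."""
    diff = mean(group_a) - mean(group_b)
    se = math.sqrt(variance(group_a) / len(group_a) +
                   variance(group_b) / len(group_b))
    return diff, diff / se

# Hypothetical satisfaction scores (1-10) collected via two channels
email = [7, 8, 6, 7, 9, 8, 7, 6]
messenger = [6, 5, 7, 6, 5, 6, 7, 5]
diff, t = mode_effect(email, messenger)  # diff = 1.375
```

A large t value here would suggest the channels are not interchangeable and an adjustment step is warranted.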
Calibration and Statistical Adjustments. I always employ statistical techniques to calibrate or adjust for mode effects. This involves developing statistical models that account for the differences observed across modes and applying appropriate adjustments to the survey data. These adjustments can help minimize bias and ensure more accurate estimation of population parameters.
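As one simple illustration of such an adjustment, here is a Python sketch that shifts each mode's responses so their mean matches a reference mode; a real calibration model would also condition on respondent covariates, and the data here are hypothetical:

```python
from statistics import mean

def adjust_for_mode(responses, reference_mode):
    """Mean-shift adjustment: move each mode's responses so that all
    mode means line up with the reference mode's mean."""
    ref_mean = mean(responses[reference_mode])
    adjusted = {}
    for mode, values in responses.items():
        shift = ref_mean - mean(values)
        adjusted[mode] = [v + shift for v in values]
    return adjusted

data = {"face-to-face": [7, 8, 6, 7], "online": [5, 6, 5, 6]}
adjusted = adjust_for_mode(data, reference_mode="face-to-face")
```

Choosing the reference mode matters: usually it is the mode believed to measure the construct with the least distortion.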
Post-Survey Weighting. Post-survey weighting involves applying survey weights to adjust for differential response rates and potential mode-related biases. These weights can be calculated based on known population characteristics (as in my example with an advertising app) or derived from auxiliary data sources. By applying weights, researchers can ensure that survey results are representative of the target population, accounting for differences in mode-specific response patterns.
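Sticking with the age-group example, a minimal Python sketch of post-stratification weights (the sample counts and population shares are invented):

```python
def poststrat_weights(sample_counts, population_shares):
    """weight_g = population share of stratum g / its share in the sample."""
    n = sum(sample_counts.values())
    return {g: population_shares[g] / (sample_counts[g] / n)
            for g in sample_counts}

# Hypothetical: messenger-heavy sampling over-represents younger users
sample_counts = {"20-30": 60, "40-60": 40}
population_shares = {"20-30": 0.5, "40-60": 0.5}
weights = poststrat_weights(sample_counts, population_shares)
```

Over-represented young respondents end up with a weight below 1, and under-represented middle-aged respondents with a weight above 1, pulling the weighted estimates back toward the population distribution.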
Of course there are other techniques, like sensitivity analysis or order effect estimation, but in my practice they have not been as useful. By employing these strategies and staying mindful of potential mixed-mode bias, researchers can minimize the impact of mode effects and obtain more accurate, reliable survey results.
So what tools do I use when working with mixed-mode research? My favorite R packages and the steps involved are the following:
To measure Mixed Mode Bias:
- Randomly assign participants to different modes: online, telephone, face-to-face.
- Analyze the differences in responses using appropriate statistical tests (e.g., t-test, ANOVA).
- Administer parallel surveys using different modes.
- Compare responses using visualizations or statistical tests.
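For categorical answers from parallel surveys, the comparison can be a chi-square test on a mode-by-response table; here is a stdlib-only Python sketch (the counts are invented):

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table
    (rows = survey modes, columns = response categories)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical yes/no answers from two parallel surveys
table = [[30, 20],   # online:    yes, no
         [18, 32]]   # telephone: yes, no
stat = chi_square(table)
```

With a 2x2 table there is 1 degree of freedom, so any statistic above the 5% critical value of about 3.84 would flag a mode difference worth investigating.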
To correct Mixed Mode Bias:
- Propensity Score Adjustment:
Use the MatchIt package to estimate propensity scores. Include the propensity scores as covariates in your analysis.
- Post-Stratification:
Collect demographic data for the population of interest. Use the survey package to post-stratify the survey results based on the population distribution.
- Calibration Weighting:
Use the survey package to apply calibration weighting based on known population characteristics.
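MatchIt and survey are R packages, so the real workflow for these steps lives in R. Purely to illustrate the propensity-score idea in a language-agnostic way, here is a stdlib-only Python sketch with a tiny one-feature logistic regression; the age data, mode labels, and crude standardization are all made up:

```python
import math

def fit_propensity(xs, modes, lr=0.1, steps=2000):
    """Fit P(mode == "online" | x) with a one-feature logistic
    regression via batch gradient descent; returns the score function."""
    ys = [1.0 if m == "online" else 0.0 for m in modes]
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += p - y
            g1 += (p - y) * x
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return lambda x: 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))

# Hypothetical: younger respondents chose the online mode
ages = [22, 25, 28, 30, 35, 45, 50, 55, 58, 60]
modes = ["online"] * 5 + ["telephone"] * 5
xs = [(a - 40) / 10 for a in ages]     # crude standardization
propensity = fit_propensity(xs, modes)
scores = [propensity(x) for x in xs]   # include these as covariates
```

The fitted scores then enter the outcome analysis as covariates (or as inverse-probability weights), which is conceptually what the MatchIt-based step above does with a richer model.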