Data collection methods

Qualitative and quantitative methods

Data are usually collected through qualitative and quantitative methods.1 Qualitative approaches address the ‘how’ and ‘why’ of a program and tend to use unstructured methods of data collection to explore the topic fully. Qualitative questions are open-ended, such as ‘Why do participants enjoy the program?’ and ‘How does the program help increase self-esteem for participants?’. Qualitative methods include focus groups, group discussions and interviews. Quantitative approaches, on the other hand, address the ‘what’ of the program. They use a systematic, standardised approach and employ methods such as surveys,1 asking questions such as ‘What activities did the program run?’ and ‘What skills do staff need to implement the program effectively?’

Both methods have their strengths and weaknesses. Qualitative approaches are good for exploring the effects and unintended consequences of a program. They are, however, expensive and time consuming to implement. Additionally, the findings cannot be generalised to participants outside the program and are only indicative of the group involved.1

Quantitative approaches have the advantage that they are cheaper to implement and standardised, so comparisons can be made easily and the size of an effect can usually be measured. They are, however, limited in their capacity to investigate and explain similarities and unexpected differences.1 It is important to note that quantitative data collection approaches are often difficult for agencies running peer-based programs to implement: agencies may lack the resources to administer surveys rigorously, and low participation and high loss-to-follow-up rates are common.

Mixed methods

Is there a way to achieve both the depth and breadth that qualitative and quantitative methods may achieve individually? One answer is to adopt a mixed methods design, combining qualitative and quantitative data, techniques and methods within a single research framework.2

A mixed methods approach may mean a number of things: using several different methods within a study, using different methods at different points within a study, or combining qualitative and quantitative methods.3,4

Mixed methods encompass multifaceted approaches that capitalise on the strengths, and reduce the weaknesses, of a single research design.4 Using this approach to gather and evaluate data can help increase the validity and reliability of the research.

Some of the common areas in which mixed-method approaches may be used include:

  • Initiating, designing, developing and expanding interventions;
  • Evaluation;
  • Improving research design; and
  • Corroborating findings, data triangulation or convergence.2,4

Some of the challenges of using a mixed methods approach include:

  • Delineating complementary qualitative and quantitative research questions;
  • Time-intensive data collection and analysis; and
  • Decisions regarding which research methods to combine.3,5

These challenges call for training and multidisciplinary collaboration and may therefore require greater resources (both financial and human) and a higher workload than a single-method study.4 However, this can be mitigated by identifying key issues early and ensuring the participation of experts in both qualitative and quantitative research.2

Mixed methods are useful in illuminating complex research problems, such as disparities in health, and can be transformative in addressing issues for vulnerable or marginalised populations or in research involving community participation.3 Using a mixed methods approach is one way to develop creative alternatives to traditional, single-design approaches to research and evaluation.5

Surveys

Surveys are a good way of gathering a large amount of data, providing a broad perspective. They can be administered electronically, by telephone, by mail or face-to-face. Mail and electronically administered surveys have a wide reach, are relatively cheap to administer, collect standardised information and allow privacy to be maintained.1 They do, however, tend to have low response rates, cannot investigate issues in any great depth, require that the target group is literate and do not allow for observation.1

As surveys are self-reported by participants, there is a possibility that responses are biased, particularly if the issues involved are sensitive or require a degree of trust and disclosure from the participant. It is therefore vital that surveys are designed and tested for validity and reliability with the target groups who will complete them.

Careful attention must be given to the design of the survey. Where possible, using an existing, validated survey instrument helps ensure that the data collected are accurate. If you design your own survey, it is necessary to pilot test it on a sample of your target group to ensure the instrument measures what it intends to measure and is appropriate for the target group.1

Questions within a survey can be asked in several ways: as closed questions, open-ended questions, scaled questions or multiple choice questions. Closed questions are usually in the format of yes/no or true/false options. Open-ended questions, on the other hand, leave the answer entirely up to the respondent and therefore provide a greater range of responses.1 Scaled questions are useful when assessing participants’ attitudes, while a multiple choice question may ask respondents to indicate their favourite topic covered in the program or their most preferred activity. Other considerations when developing a survey instrument include question sequence, layout and appearance, length, language, and an introduction and cover letter.1 Sensitive questions should be placed near the end of a survey rather than at the beginning.
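
Where survey responses are captured electronically, the practical difference between these question types shows up at analysis time. The following Python sketch is purely illustrative; the questions, field names and responses are hypothetical, not drawn from any real instrument. It shows how closed, scaled and multiple choice answers lend themselves to direct tallying and summary statistics, while open-ended answers must be read and coded qualitatively.

```python
# Illustrative sketch only: hypothetical survey questions and responses,
# showing one of each question type discussed above.
from collections import Counter

survey = {
    "q1_enjoyed": {"type": "closed", "options": ["yes", "no"]},
    "q2_feedback": {"type": "open"},  # free-text response
    "q3_confidence": {"type": "scaled", "scale": (1, 5)},  # e.g. 1 = low, 5 = high
    "q4_favourite": {"type": "multiple_choice",
                     "options": ["art", "sport", "music", "camp"]},
}

# Example responses from three fictional participants.
responses = [
    {"q1_enjoyed": "yes", "q2_feedback": "Made new friends", "q3_confidence": 4, "q4_favourite": "camp"},
    {"q1_enjoyed": "yes", "q2_feedback": "More activities please", "q3_confidence": 5, "q4_favourite": "art"},
    {"q1_enjoyed": "no", "q2_feedback": "", "q3_confidence": 2, "q4_favourite": "sport"},
]

# Closed and multiple choice answers can simply be tallied ...
print(Counter(r["q1_enjoyed"] for r in responses))    # Counter({'yes': 2, 'no': 1})
print(Counter(r["q4_favourite"] for r in responses))

# ... scaled answers support simple summary statistics ...
scores = [r["q3_confidence"] for r in responses]
print(sum(scores) / len(scores))                      # mean confidence rating

# ... while open-ended answers must be read and coded qualitatively.
print([r["q2_feedback"] for r in responses if r["q2_feedback"]])
```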

Offering young people an incentive for completing the survey, or embedding the survey as a compulsory item within the program schedule or curriculum, may help to maximise the response rate.

Interviews

Interviews can be conducted face-to-face or by telephone. They can range from in-depth and semi-structured to unstructured, depending on the information being sought.6

Face-to-face interviews are advantageous since:

  • detailed questions can be asked
  • further probing can be done to provide rich data
  • participant literacy is not an issue
  • non-verbal data can be collected through observation
  • complex and unknown issues can be explored
  • response rates are usually higher than for self-administered questionnaires.6

Disadvantages of face-to-face interviews include:

  • they can be expensive and time consuming
  • interviewers must be trained, both to reduce interviewer bias and to ensure the interview is administered in a standardised way
  • they are prone to interviewer bias and interpreter bias (if interpreters are used)
  • sensitive issues may be challenging to explore.6

According to Bowling6, telephone interviews yield data just as accurate as face-to-face interviews.

Telephone interviews are advantageous as they:

  • are cheaper and faster to conduct than face-to-face interviews
  • use fewer resources than face-to-face interviews
  • allow the interviewer to clarify questions
  • do not require literacy skills.

Disadvantages of telephone interviews include:

  • repeated calls may be needed, as calls are often not answered the first time
  • potential bias towards those who are at home, if call-backs are not made
  • only suitable for short surveys
  • only accessible to the population with a telephone
  • not appropriate for exploring sensitive issues.6

Focus groups

Focus groups or group discussions are useful for exploring a topic further, providing a broader understanding of why the target group may behave or think in a particular way and assisting in determining the reasons behind attitudes and beliefs.1 They are conducted with a small sample of the target group and are used to stimulate discussion and gain greater insights.6

Focus groups and group discussions are advantageous as they:

  • are useful when exploring cultural values and health beliefs
  • can be used to examine how and why people think in a particular way and how this influences their beliefs and values
  • can be used to explore complex issues
  • can be used to develop hypotheses for further research
  • do not require participants to be literate.6

Disadvantages of focus groups include:

  • lack of privacy/anonymity
  • the need to balance the group carefully so that it is culturally and gender appropriate
  • the risk of ‘group think’ (not allowing for other attitudes, beliefs etc.)
  • the potential for the group to be dominated by one or two people
  • the need for a group leader skilled at conducting focus groups, dealing with conflict, drawing out passive participants and creating a relaxed, welcoming environment
  • being time consuming to conduct, and often difficult and time consuming to analyse.6

Check out the My-Peer Guide to Running Discussion Groups.

Documentation

Substantial description and documentation, often referred to as “thick description”, can be used to further explore a subject.7 This process provides a thorough description of the “study participants, context and procedures, the purpose of the intervention and its transferability”.7 Thick description also includes the complexities experienced in addition to the commonalities found, which assists in maintaining data integrity.

The use of documentation provides an ongoing record of activities. These can be records of informal feedback and reflections in journals, diaries or progress reports. The challenge of documentation is that it requires an ongoing commitment to documenting thoughts and activities regularly throughout the evaluation process.

Creative strategies

Drama, exhibition and video are imaginative and attractive alternatives to the written word.8 These approaches can be used to demystify the evaluation process. Using the creative arts in evaluation offers opportunities for imaginative ways of understanding programs and creating evaluation knowledge, and the arts may be used in designing, interpreting and communicating evaluations.9 The direct perception and understanding that a creative arts approach brings helps the evaluator gain a deep understanding of the program. In addition, this approach is a useful means of connecting with participants’ experience of an evaluation.9

Creative strategies are advantageous as they:

  • provide an opportunity for participants to portray their experience through different art forms, which often reveals insights they may not have been able to articulate in words;
  • accommodate people who learn in different ways, who have different cultural backgrounds and/or who are less articulate, offering a useful means of engaging them in an evaluation and giving them a voice;
  • can draw on a combination of arts-based approaches in the evaluation process; and
  • can be used in conjunction with more traditional approaches.

Challenges arising from creative strategies include:

  • Participants are often fearful of engaging with art, whether as a result of past negative experiences of art at school or a lack of belief in their own abilities. The challenge is to assure them that neither they nor their final product is being judged; it is the process of engaging with art that often elicits valuable data.
  • The success of such an approach can often rely on the interest levels of the participants; the task needs to be defined clearly, emphasising the reasoning behind it.

There are multiple forms of creative strategies which you can explore here.

Triangulation

Triangulation is used to address the validity of the data.10 Triangulation uses multiple forms of data collection, such as focus groups, observation and in-depth interviews, to investigate the evaluation objectives. Using multiple data collection methods supports the reliability and validity of the findings when the data from the various sources are comparable and consistent.11,12 Using more than one person to collect the data can also increase its reliability, although this will significantly increase the cost of the evaluation. Additionally, theory triangulation provides new insights by drawing on multiple theoretical perspectives.13
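
As a loose illustration of this convergence logic, the following Python sketch (with hypothetical method names and findings, not real evaluation data) records findings per collection method and flags those reported by two or more sources as corroborated; findings from a single source would warrant further investigation rather than rejection.

```python
# Illustrative sketch of data triangulation: checking which findings
# converge across multiple collection methods. All findings and method
# names below are hypothetical examples.
from collections import Counter

findings_by_method = {
    "survey": {"participants value peer support", "attendance drops in winter"},
    "focus_group": {"participants value peer support", "staff need more training"},
    "interviews": {"participants value peer support", "attendance drops in winter"},
}

# Count how many methods report each finding.
counts = Counter(
    finding
    for findings in findings_by_method.values()
    for finding in findings
)

# A finding is treated as corroborated when two or more methods report it.
corroborated = [f for f, n in counts.items() if n >= 2]
single_source = [f for f, n in counts.items() if n == 1]

print("Corroborated (convergent) findings:", corroborated)
print("Needs further investigation:", single_source)
```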

References

  1. Hawe, P., Degeling, D., & Hall, J. (1990). Evaluating Health Promotion: A Health Worker’s Guide. Sydney: MacLennan & Petty.
  2. Taket, A. (2010). In L. Liamputtong (Ed.), Research Methods in Health: Foundations for Evidence-Based Practice. South Melbourne: Oxford University Press.
  3. Hanson, W. E., Creswell, J. W., Plano Clark, V. L., Petska, K. S., & Creswell, J. D. (2005). Mixed methods research designs in counseling psychology. Journal of Counseling Psychology, 52(2), 224–235.
  4. Leech, N. L., & Onwuegbuzie, A. J. (2009). A typology of mixed methods research designs. Quality & Quantity, 43, 265–275.
  5. Greene, J. C., & Caracelli, V. J. (2003). Making paradigmatic sense of mixed methods practice. In A. Tashakkori & C. Teddlie (Eds.), Handbook of Mixed Methods in Social and Behavioral Research (pp. 91–110). Thousand Oaks, CA: Sage.
  6. Bowling, A. (1997). Research Methods in Health: Investigating Health and Health Services. Open University Press.
  7. Nastasi, B., & Schensul, S. (2005). Contributions of qualitative research to the validity of intervention research. Journal of School Psychology, 43(3), 177–195.
  8. Curtis, L., Springett, J., & Kennedy, A. (2001). Evaluation in urban settings: The challenge of Healthy Cities. In I. Rootman & M. Goodstadt (Eds.), Evaluation in Health Promotion: Principles and Perspectives. World Health Organization Regional Office for Europe.
  9. Simons, H., & McCormack, B. (2007). Integrating arts-based inquiry in evaluation methodology: Opportunities and challenges. Qualitative Inquiry, 13(2), 292–311.
  10. Barbour, R. (2001). Checklists for improving rigour in qualitative research: A case of the tail wagging the dog? British Medical Journal, 322(7294), 1115–1117.
  11. Golafshani, N. (2003). Understanding reliability and validity in qualitative research. The Qualitative Report, 8(4), 597–607.
  12. Ovretveit, J. (1998). Evaluating Health Interventions. Berkshire: Open University Press.
  13. Nutbeam, D., & Bauman, A. (2006). Evaluation in a Nutshell. North Ryde: McGraw-Hill.